https://en.wikipedia.org/wiki/North%20Korean%20abductions%20of%20South%20Koreans
An estimated 84,532 South Koreans were taken to North Korea during the Korean War. In addition, South Korean statistics claim that, since the Korean Armistice Agreement in 1953, about 3,800 people have been abducted by North Korea (the vast majority in the late 1970s), 489 of whom were still being held in 2006.
Terminology
South Koreans abducted by North Korea are categorized into two groups: wartime abductees and post-war abductees.
Wartime abductees
Koreans from the south who were kidnapped to the north against their wishes during the 1950–53 Korean War and died there or are still being detained in North Korea are called wartime abductees or Korean War abductees. Most of them were already educated or skilled, such as politicians, government officials, scholars, educators, doctors, judicial officials, journalists, or businessmen. According to testimonies by remaining family members, most abductions were carried out by North Korean soldiers who had specific names and identification in hand when they showed up at people's homes. This is an indication that the abductions were carried out intentionally and in an organized manner.
Post-war abductees
South Koreans who were kidnapped by North Korean agents in South Korean territory or in foreign countries after the armistice was signed in 1953 are known as post-war abductees. Most of them were captured while fishing near the Korean Demilitarized Zone (DMZ), but some were abducted by North Korean agents inside South Korea. North Korea continued to abduct South Koreans into the 2000s, as is shown by the cases of the Reverend Kim Dong-shik, who was abducted on January 16, 2000, and Jin Gyeong-suk, a North Korean defector to South Korea who was abducted on August 8, 2004, after she had returned to the China–North Korea border region using her South Korean passport.
Background
During the war, North Korea kidnapped South Koreans to increase its human capacity for rehabilitation after the war. Because its own intelligentsia had been depleted, it abducted the educated people it needed for post-war rehabilitation, along with technical specialists and laborers. The abductions were also intended to drain South Korean society of its intelligentsia, exacerbate social confusion, and promote the communization of South Korea by making post-war rehabilitation difficult through a shortage of technical specialists and youth. North Korea further intended to disguise the abductions as voluntary defections made for the advancement of its political system.
In his Complete Works, Volume IV, dated July 31, 1946, North Korean leader Kim Il Sung wrote: "In regards to bringing Southern Chosun's intelligentsia, not only do we need to search out all Northern Chosun's intelligentsia in order to solve the issue of a shortage of intelligentsia, but we also have to bring Southern Chosun's intelligentsia."
In the case of post-war abductees, Yoichi Shimada, a Fukui University professor in Japan, states that North Korea appeared to abduct foreign citizens to:
elimi
https://en.wikipedia.org/wiki/Mikata%20District%2C%20Hy%C5%8Dgo
Mikata District is a district located in Hyōgo Prefecture, Japan.
As of the April 1, 2005 merger (but using 2003 population statistics), the district has an estimated population of 40,084 and a density of 66 persons per km2. The total area is 610.02 km2.
Towns and villages
Kami
Shin'onsen
Mergers
On April 1, 2005 the towns of Mikata and Muraoka merged with the town of Kasumi, from Kinosaki District, to form the new town of Kami.
On October 1, 2005 the towns of Hamasaka and Onsen merged to form the town of Shin'onsen.
Points of interest
Tajima Plateau Botanical Gardens
Antaiji Zen monastery
https://en.wikipedia.org/wiki/Max%20Dehn
Max Wilhelm Dehn (November 13, 1878 – June 27, 1952) was a German mathematician most famous for his work in geometry, topology, and geometric group theory. Dehn's early life and career took place in Germany; however, he was forced to retire in 1935, fled Germany in 1939, and emigrated to the United States.
Dehn was a student of David Hilbert, and in his habilitation in 1900 Dehn resolved Hilbert's third problem, making him the first to resolve one of Hilbert's well-known 23 problems. Dehn's students include Ott-Heinrich Keller, Ruth Moufang, Wilhelm Magnus, and the artists Dorothea Rockburne and Ruth Asawa.
Biography
Dehn was born to a family of Jewish origin in Hamburg, Imperial Germany.
He studied the foundations of geometry with Hilbert at Göttingen in 1899, and obtained a proof of the Jordan curve theorem for polygons. In 1900 he wrote his dissertation on the role of the Legendre angle sum theorem in axiomatic geometry.
From 1900 to 1911 he was an employee and researcher at the University of Münster. In his habilitation at the University of Münster in 1900 he resolved Hilbert's third problem, by introducing what was afterwards called the Dehn invariant. This was the first resolution of one of the Hilbert Problems.
Dehn's interests later turned to topology and combinatorial group theory. In 1907 he wrote with Poul Heegaard the first book on the foundations of combinatorial topology, then known as analysis situs. Also in 1907, he described the construction of a new homology sphere. In 1908 he believed that he had found a proof of the Poincaré conjecture, but Tietze found an error.
In 1910 Dehn published a paper on three-dimensional topology in which he introduced Dehn surgery and used it to construct homology spheres. He also stated Dehn's lemma, but an error was found in his proof by Hellmuth Kneser in 1929. The result was proved in 1957 by Christos Papakyriakopoulos. The word problem for groups, also called the Dehn problem, was posed by him in 1911.
Dehn married Antonie Landau on August 23, 1912. Also in 1912, Dehn invented what is now known as Dehn's algorithm and used it in his work on the word and conjugacy problems for groups. The notion of a Dehn function in geometric group theory, which estimates the area of a relation in a finitely presented group in terms of the length of that relation, is also named after him. In 1914 he proved that the left and right trefoil knots are not equivalent. In the early 1920s Dehn introduced the result that would come to be known as the Dehn-Nielsen theorem; its proof would be published in 1927 by Jakob Nielsen.
In 1922 Dehn succeeded Ludwig Bieberbach at Frankfurt, where he stayed until he was forced to retire in 1935. During this time he taught a seminar on historical works of mathematics. The seminar attracted prolific mathematicians Carl Ludwig Siegel and André Weil, and Weil considered Dehn's seminar to be his most important contribution to mathematics. As an example of its inf
https://en.wikipedia.org/wiki/BGM
BGM can refer to:
Locations
Boddington Gold Mine, a gold mine in Western Australia.
Mathematics
Bayesian Graphical Model, a form of probability model.
Brace–Gatarek–Musiela LIBOR market model, a finance model also called BGM after its inventors
Medicine
Blood glucose monitoring, or the device used to monitor blood glucose levels
Music
Background music
BGM (album), 1981 album by Yellow Magic Orchestra
Bonnier Gazell Music
BGM (song), a track on the 2019 Wale album Wow... That's Crazy
Blackpool Grime Media, a controversial grime channel
Transport
Bellingham railway station serving London, England (National Rail station code: BGM)
Greater Binghamton Airport serving Binghamton, New York (IATA Code: BGM)
Other
Black Guns Matter
an abbreviation for the former Bell Globemedia, now Bell Media
The U.S. military designation for a surface-attack guided missile with multiple launch environments, for example:
BGM-71 TOW missile
BGM-109 Tomahawk missile
https://en.wikipedia.org/wiki/Dirichlet%20problem
In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region and takes prescribed values on the boundary of the region.
The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:
Given a function $f$ that has values everywhere on the boundary of a region in $\mathbb{R}^n$, is there a unique continuous function $u$, twice continuously differentiable in the interior and continuous up to the boundary, such that $u$ is harmonic in the interior and $u = f$ on the boundary?
This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
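As a concrete numerical illustration (not from the article), the following minimal sketch solves the Dirichlet problem for Laplace's equation on the unit square by finite differences: interior values are repeatedly replaced by the average of their four neighbours while the prescribed boundary values stay fixed. The grid size, boundary function, and iteration count are arbitrary choices.

```python
import numpy as np

# Jacobi relaxation for Laplace's equation on the unit square with a
# Dirichlet boundary condition u = f on the boundary.
n = 51                                # grid points per side (assumption)
u = np.zeros((n, n))

# Prescribed boundary values (a hypothetical choice of f):
x = np.linspace(0.0, 1.0, n)
u[0, :] = np.sin(np.pi * x)           # bottom edge
u[-1, :] = 0.0                        # top, left, right edges stay zero

for _ in range(5000):
    # Each interior value becomes the average of its four neighbours;
    # the fixed boundary rows/columns enforce u = f on the boundary.
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

print(u[n // 2, n // 2])              # harmonic function at the centre
```

Consistent with the maximum principle mentioned above, the computed interior values never exceed the extremes of the boundary data.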
History
The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem to one of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in subsequent developments. The next steps in the study of the Dirichlet problem were taken by Carl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution. However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data.
General solution
For a domain $D$ having a sufficiently smooth boundary $\partial D$, the general solution to the Dirichlet problem is given by
$$u(x) = \int_{\partial D} \nu(s) \, \frac{\partial G(x, s)}{\partial n} \, ds,$$
where $G(x, y)$ is the Green's function for the partial differential equation, and
$$\frac{\partial G(x, s)}{\partial n} = \hat{n} \cdot \nabla_s G(x, s)$$
is the derivative of the Green's function along the inward-pointing unit normal vector $\hat{n}$. The integration is performed on the boundary, with measure $ds$. The function $\nu(s)$ is given by the unique sol
https://en.wikipedia.org/wiki/Stratification%20%28mathematics%29
Stratification has several usages in mathematics.
In mathematical logic
In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form
$$P \leftarrow Q_1 \land \cdots \land Q_n \land \neg Q_{n+1} \land \cdots \land \neg Q_{n+m}$$
is stratified if and only if there is a stratification assignment S that fulfills the following conditions:
If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q; in short, $S(P) \geq S(Q)$.
If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be strictly greater than the stratification number of Q; in short, $S(P) > S(Q)$.
The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up.
Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories.
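To make the two conditions concrete, here is a minimal sketch (not from the article) that computes stratification numbers for a toy Datalog-style program; the rule encoding and the predicate names are invented for illustration.

```python
# Each rule is (head, body), where the body lists (predicate, positive?).
rules = [
    ("reachable", [("edge", True)]),
    ("reachable", [("reachable", True), ("edge", True)]),
    ("unreachable", [("node", True), ("reachable", False)]),  # negation
]

preds = {p for head, body in rules for p in [head] + [q for q, _ in body]}
strat = {p: 0 for p in preds}

# Iteratively enforce S(P) >= S(Q) for positive body literals and
# S(P) > S(Q) for negative ones. If the numbers keep growing, negation
# occurs in a cycle and no stratification exists.
for _ in range(len(preds) * len(preds)):
    changed = False
    for head, body in rules:
        for q, positive in body:
            need = strat[q] if positive else strat[q] + 1
            if strat[head] < need:
                strat[head] = need
                changed = True
    if not changed:
        break
else:
    raise ValueError("program is not stratifiable")

print(strat)  # {'edge': 0, 'node': 0, 'reachable': 0, 'unreachable': 1}
```

Here the negated use of reachable pushes unreachable into a higher stratum, so the fixpoint of the lower stratum is computed before the negation is evaluated.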
In a specific set theory
In New Foundations (NF) and related set theories, a formula $\phi$ in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function $\sigma$ which sends each variable appearing in $\phi$ (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula $x \in y$ appearing in $\phi$ satisfies $\sigma(x) + 1 = \sigma(y)$ and any atomic formula $x = y$ appearing in $\phi$ satisfies $\sigma(x) = \sigma(y)$.
It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract $\{ x \mid \phi \}$ under consideration. A set abstract satisfying this weaker condition is said to be weakly stratified.
The stratification of New Foundations generalizes readily to languages with more predicates and with term constructions. Each primitive predicate needs to have specified required displacements between the values of $\sigma$ at its (bound) arguments in a (weakly) stratified formula. In a language with term constructions, terms themselves need to be assigned values under $\sigma$, with fixed displacements from the values of $\sigma$ at each of their (bound) arguments in a (weakly) stratified formula. Defined term constructions are neatly handled by (possibly merely implicitly) using the theory of descriptions: a term $(\iota x . \phi)$ (the x such that $\phi$) must be assigned the same value under $\sigma$ as the variable x.
A formula is stratified if and only if it is possible to assign types to all variables appearing in the formula in such a way that it will make sense in a version TST of the theory of types described in the New Foundations article, and this is probably the best way to understand the stratification of New Foundations.
https://en.wikipedia.org/wiki/Complete%20partial%20order
In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science: in denotational semantics and domain theory.
Definitions
A complete partial order, abbreviated cpo, can refer to any of the following concepts depending on context.
A partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum. A subset of a partial order is directed if it is non-empty and every pair of elements has an upper bound in the subset. In the literature, dcpos sometimes also appear under the label up-complete poset.
A partially ordered set is a pointed directed-complete partial order if it is a dcpo with a least element. They are sometimes abbreviated cppos.
A partially ordered set is a ω-complete partial order (ω-cpo) if it is a poset in which every ω-chain (x1 ≤ x2 ≤ x3 ≤ x4 ≤ ...) has a supremum that belongs to the poset. Every dcpo is an ω-cpo, since every ω-chain is a directed set, but the converse is not true. However, every ω-cpo with a basis is also a dcpo (with the same basis). An ω-cpo (dcpo) with a basis is also called a continuous ω-cpo (continuous dcpo).
Note that complete partial order is never used to mean a poset in which all subsets have suprema; the terminology complete lattice is used for this concept.
Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as limits of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory.
The dual notion of a directed-complete partial order is called a filtered-complete partial order. However, this concept occurs far less frequently in practice, since one usually can work on the dual order explicitly.
Examples
Every finite poset is directed complete.
All complete lattices are also directed complete.
For any poset, the set of all non-empty filters, ordered by subset inclusion, is a dcpo. Together with the empty filter it is also pointed. If the order has binary meets, then this construction (including the empty filter) actually yields a complete lattice.
Every set S can be turned into a pointed dcpo by adding a least element ⊥ and introducing a flat order with ⊥ ≤ s and s ≤ s for every s in S and no other order relations.
The set of all partial functions on some given set S can be ordered by defining f ≤ g if and only if g extends f, i.e. if the domain of f is a subset of the domain of g and the values of f and g agree on all inputs for which they are both defined. (Equivalently, f ≤ g if and only if f ⊆ g where f and g are identified with their respective graphs.) This order is a pointed dcpo, where the least element is the nowhere-defined partial function (the one with empty domain).
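A minimal sketch (not from the article) of this last example, with partial functions modelled as Python dicts ordered by extension; the chain used is an arbitrary illustration:

```python
def leq(f, g):
    """f <= g in the extension order: the graph of f is contained in g's."""
    return all(k in g and g[k] == f[k] for k in f)

def lub(directed):
    """Supremum of a directed set of compatible partial functions:
    the union of their graphs."""
    sup = {}
    for f in directed:
        sup.update(f)
    return sup

# An omega-chain of better and better approximations to the factorial:
chain = [{}, {0: 1}, {0: 1, 1: 1}, {0: 1, 1: 1, 2: 2}]
assert all(leq(f, g) for f, g in zip(chain, chain[1:]))
print(lub(chain))   # {0: 1, 1: 1, 2: 2} -- the least upper bound
# The least element (bottom) is the nowhere-defined partial function {}.
```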
https://en.wikipedia.org/wiki/Continuity%20correction
In probability theory, a continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution.
Examples
Binomial
If a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then
$$P(X \leq x) = \sum_{k=0}^{x} \binom{n}{k} p^k (1 - p)^{n-k}$$
for any x ∈ {0, 1, 2, ..., n}. If np and np(1 − p) are large (sometimes taken as both ≥ 5), then the probability above is fairly well approximated by
$$P(Y \leq x + \tfrac{1}{2}),$$
where Y is a normally distributed random variable with the same expected value and the same variance as X, i.e., E(Y) = np and var(Y) = np(1 − p). This addition of 1/2 to x is a continuity correction.
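A quick numerical illustration of the correction (not from the article), using SciPy; the parameter values $n = 100$, $p = 1/2$, $x = 45$ are arbitrary:

```python
from scipy.stats import binom, norm

n, p, x = 100, 0.5, 45
mu = n * p                                # E(Y) = np
sigma = (n * p * (1 - p)) ** 0.5          # sd(Y) = sqrt(np(1-p))

exact = binom.cdf(x, n, p)                # P(X <= x), exact
plain = norm.cdf(x, mu, sigma)            # normal approx., no correction
corrected = norm.cdf(x + 0.5, mu, sigma)  # with the 1/2 continuity correction

print(exact, plain, corrected)            # ~0.1841, ~0.1587, ~0.1841
```

The corrected value is far closer to the exact binomial probability than the uncorrected one.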
Poisson
A continuity correction can also be applied when other discrete distributions supported on the integers are approximated by the normal distribution. For example, if X has a Poisson distribution with expected value λ then the variance of X is also λ, and
$$P(X \leq x) \approx P(Y \leq x + \tfrac{1}{2})$$
if Y is normally distributed with expectation and variance both λ.
Applications
Before the ready availability of statistical software having the ability to evaluate probability distribution functions accurately, continuity corrections played an important role in the practical application of statistical tests in which the test statistic has a discrete distribution: it had a special importance for manual calculations. A particular example of this is the binomial test, involving the binomial distribution, as in checking whether a coin is fair. Where extreme accuracy is not necessary, computer calculations for some ranges of parameters may still rely on using continuity corrections to improve accuracy while retaining simplicity.
See also
Yates's correction for continuity
Wilson score interval with continuity correction
https://en.wikipedia.org/wiki/Bell%20polynomials
In combinatorial mathematics, the Bell polynomials, named in honor of Eric Temple Bell, are used in the study of set partitions. They are related to Stirling and Bell numbers. They also occur in many applications, such as in Faà di Bruno's formula.
Definitions
Exponential Bell polynomials
The partial or incomplete exponential Bell polynomials are a triangular array of polynomials given by
$$B_{n,k}(x_1, x_2, \ldots, x_{n-k+1}) = \sum \frac{n!}{j_1! \, j_2! \cdots j_{n-k+1}!} \left( \frac{x_1}{1!} \right)^{j_1} \left( \frac{x_2}{2!} \right)^{j_2} \cdots \left( \frac{x_{n-k+1}}{(n-k+1)!} \right)^{j_{n-k+1}},$$
where the sum is taken over all sequences $j_1, j_2, j_3, \ldots, j_{n-k+1}$ of non-negative integers such that these two conditions are satisfied:
$$j_1 + j_2 + \cdots + j_{n-k+1} = k,$$
$$j_1 + 2 j_2 + 3 j_3 + \cdots + (n - k + 1) j_{n-k+1} = n.$$
The sum
$$B_n(x_1, \ldots, x_n) = \sum_{k=1}^{n} B_{n,k}(x_1, x_2, \ldots, x_{n-k+1})$$
is called the nth complete exponential Bell polynomial.
Ordinary Bell polynomials
Likewise, the partial ordinary Bell polynomial is defined by
$$\hat{B}_{n,k}(x_1, x_2, \ldots, x_{n-k+1}) = \sum \frac{k!}{j_1! \, j_2! \cdots j_{n-k+1}!} x_1^{j_1} x_2^{j_2} \cdots x_{n-k+1}^{j_{n-k+1}},$$
where the sum runs over all sequences $j_1, j_2, j_3, \ldots, j_{n-k+1}$ of non-negative integers such that
$$j_1 + j_2 + \cdots + j_{n-k+1} = k, \qquad j_1 + 2 j_2 + \cdots + (n - k + 1) j_{n-k+1} = n.$$
The ordinary Bell polynomials can be expressed in terms of the exponential Bell polynomials:
$$\hat{B}_{n,k}(x_1, \ldots, x_{n-k+1}) = \frac{k!}{n!} \, B_{n,k}(1! \, x_1, 2! \, x_2, \ldots, (n - k + 1)! \, x_{n-k+1}).$$
In general, Bell polynomial refers to the exponential Bell polynomial, unless otherwise explicitly stated.
Combinatorial meaning
The exponential Bell polynomial encodes the information related to the ways a set can be partitioned. For example, if we consider a set {A, B, C}, it can be partitioned into two non-empty, non-overlapping subsets, which are also referred to as parts or blocks, in 3 different ways:
{{A}, {B, C}}
{{B}, {A, C}}
{{C}, {B, A}}
Thus, we can encode the information regarding these partitions as
$$B_{3,2}(x_1, x_2) = 3 x_1 x_2.$$
Here, the subscripts of $B_{3,2}$ tell us that we are considering the partitioning of a set with 3 elements into 2 blocks. The subscript of each $x_i$ indicates the presence of a block with $i$ elements (or block of size $i$) in a given partition. So here, $x_2$ indicates the presence of a block with two elements. Similarly, $x_1$ indicates the presence of a block with a single element. The exponent of $x_i^j$ indicates that there are $j$ such blocks of size $i$ in a single partition. Here, the fact that both $x_1$ and $x_2$ have exponent 1 indicates that there is only one such block in a given partition. The coefficient of the monomial indicates how many such partitions there are. Here, there are 3 partitions of a set with 3 elements into 2 blocks, where in each partition the elements are divided into two blocks of sizes 1 and 2.
Since any set can be divided into a single block in only one way, the above interpretation would mean that $B_{n,1} = x_n$. Similarly, since there is only one way that a set with $n$ elements can be divided into $n$ singletons, $B_{n,n} = x_1^n$.
As a more complicated example, consider
$$B_{6,2}(x_1, x_2, x_3, x_4, x_5) = 6 x_1 x_5 + 15 x_2 x_4 + 10 x_3^2.$$
This tells us that if a set with 6 elements is divided into 2 blocks, then we can have 6 partitions with blocks of size 1 and 5, 15 partitions with blocks of size 4 and 2, and 10 partitions with 2 blocks of size 3.
The sum of the subscripts in a monomial is equal to the total number of elements. Thus, the number of monomials that appear in the partial Bell polynomial is equal to the number of ways the integer $n$ can be expressed as a sum of $k$ positive integers. This is the same as the integer partition of $n$ into $k$ parts.
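The examples above can be reproduced directly (a sketch, not from the article) with SymPy's built-in partial Bell polynomials:

```python
from sympy import bell, symbols

x1, x2, x3, x4, x5 = symbols('x1:6')

print(bell(3, 2, (x1, x2)))               # 3*x1*x2
print(bell(6, 2, (x1, x2, x3, x4, x5)))   # 6*x1*x5 + 15*x2*x4 + 10*x3**2
print(bell(4, 4, (x1,)))                  # x1**4: n singletons, one way
```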
https://en.wikipedia.org/wiki/Sphericon
In solid geometry, the sphericon is a solid that has a continuous developable surface with two congruent, semi-circular edges, and four vertices that define a square. It is a member of a special family of rollers that, while being rolled on a flat surface, bring all the points of their surface to contact with the surface they are rolling on. It was discovered independently by carpenter Colin Roberts (who named it) in the UK in 1969, by dancer and sculptor Alan Boeding of MOMIX in 1979, and by inventor David Hirsch, who patented it in Israel in 1980.
Construction
The sphericon may be constructed from a bicone (a double cone) with an apex angle of 90 degrees, by splitting the bicone along a plane through both apexes, rotating one of the two halves by 90 degrees, and reattaching the two halves.
Alternatively, the surface of a sphericon can be formed by cutting and gluing a paper template in the form of four circular sectors (with central angles $\pi\sqrt{2}/2$) joined edge-to-edge.
Geometric properties
The surface area of a sphericon with radius $r$ is given by
$$A = 2\sqrt{2}\,\pi r^2.$$
The volume is given by
$$V = \tfrac{2}{3}\pi r^3,$$
exactly half the volume of a sphere with the same radius.
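A quick numeric check (not from the article) that these formulas agree with the cone formulas they derive from; the radius is arbitrary:

```python
import math

r = 1.7                                  # arbitrary radius
cone_volume = math.pi * r**2 * r / 3     # one right cone: height r, base radius r
bicone_volume = 2 * cone_volume          # cutting and re-gluing preserves volume

print(bicone_volume / (4 / 3 * math.pi * r**3))   # 0.5: half the sphere's volume
print(2 * math.sqrt(2) * math.pi * r**2)          # surface area 2*sqrt(2)*pi*r^2
```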
History
Around 1969, Colin Roberts (a carpenter from the UK) made a sphericon out of wood while attempting to carve a Möbius strip without a hole.
In 1979, David Hirsch invented a device for generating a meander motion. The device consisted of two perpendicular half discs joined at their axes of symmetry. While examining various configurations of this device, he discovered that the form created by joining the two half discs, exactly at their diameter centers, is actually a skeletal structure of a solid made of two half bicones, joined at their square cross-sections with an offset angle of 90 degrees, and that the two objects have exactly the same meander motion. Hirsch filed a patent in Israel in 1980, and a year later, a pull toy named Wiggler Duck, based on Hirsch's device, was introduced by Playskool Company.
In 1999, Colin Roberts sent Ian Stewart a package containing a letter and two sphericon models. In response, Stewart wrote an article "Cone with a Twist" in his Mathematical Recreations column of Scientific American. This sparked quite a bit of interest in the shape, which has since been used by Tony Phillips to develop theories about mazes. Roberts's name for the shape, the sphericon, was taken by Hirsch as the name for his company, Sphericon Ltd.
In popular culture
In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, a skeletal version of the sphericon. He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances. The company's later piece "Dream Catcher" is based around a similar Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid,
https://en.wikipedia.org/wiki/Trapezoidal%20rule
In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule) is a technique for numerical integration, i.e., approximating the definite integral
$$\int_a^b f(x) \, dx.$$
The trapezoidal rule works by approximating the region under the graph of the function $f$ as a trapezoid and calculating its area. It follows that
$$\int_a^b f(x) \, dx \approx (b - a) \cdot \frac{f(a) + f(b)}{2}.$$
The trapezoidal rule may be viewed as the result obtained by averaging the left and right Riemann sums, and is sometimes defined this way. The integral can be even better approximated by partitioning the integration interval, applying the trapezoidal rule to each subinterval, and summing the results. In practice, this "chained" (or "composite") trapezoidal rule is usually what is meant by "integrating with the trapezoidal rule". Let $\{x_k\}$ be a partition of $[a, b]$ such that $a = x_0 < x_1 < \cdots < x_{N-1} < x_N = b$, and let $\Delta x_k$ be the length of the $k$-th subinterval (that is, $\Delta x_k = x_k - x_{k-1}$); then
$$\int_a^b f(x) \, dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2} \, \Delta x_k.$$
When the partition has a regular spacing, as is often the case, that is, when all the $\Delta x_k$ have the same value $\Delta x$, the formula can be simplified for calculation efficiency by factoring $\Delta x$ out:
$$\int_a^b f(x) \, dx \approx \frac{\Delta x}{2} \left( f(x_0) + 2 f(x_1) + 2 f(x_2) + \cdots + 2 f(x_{N-1}) + f(x_N) \right).$$
The approximation becomes more accurate as the resolution of the partition increases (that is, for larger $N$, all $\Delta x_k$ decrease).
As discussed below, it is also possible to place error bounds on the accuracy of the value of a definite integral estimated using a trapezoidal rule.
History
A 2016 Science paper reports that the trapezoid rule was in use in Babylon before 50 BCE for integrating the velocity of Jupiter along the ecliptic.
Numerical implementation
Non-uniform grid
When the grid spacing is non-uniform, one can use the formula
$$\int_a^b f(x) \, dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2} \, \Delta x_k,$$
wherein $\Delta x_k = x_k - x_{k-1}$.
Uniform grid
For a domain discretized into $N$ equally spaced panels, considerable simplification may occur. Let
$$\Delta x = \frac{b - a}{N};$$
the approximation to the integral becomes
$$\int_a^b f(x) \, dx \approx \frac{\Delta x}{2} \left( f(x_0) + 2 f(x_1) + 2 f(x_2) + \cdots + 2 f(x_{N-1}) + f(x_N) \right).$$
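A minimal implementation sketch (not from the article) of the uniform-grid formula, checked on an integral with a known value:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal panels."""
    dx = (b - a) / n
    total = 0.5 * (f(a) + f(b))     # endpoint terms get weight 1/2
    for k in range(1, n):
        total += f(a + k * dx)      # interior points get weight 1
    return total * dx

exact = 2.0                          # integral of sin over [0, pi]
for n in (4, 8, 16, 32):
    err = abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
    print(n, err)                    # error falls ~4x per doubling of n
```

The roughly fourfold error reduction per doubling of $N$ matches the $O(N^{-2})$ behaviour derived in the error analysis below.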
Error analysis
The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result:
$$E = \int_a^b f(x) \, dx - \frac{b - a}{N} \left[ \frac{f(a) + f(b)}{2} + \sum_{k=1}^{N-1} f\!\left( a + k \, \frac{b - a}{N} \right) \right].$$
There exists a number $\xi$ between $a$ and $b$, such that
$$E = -\frac{(b - a)^3}{12 N^2} \, f''(\xi).$$
It follows that if the integrand is concave up (and thus has a positive second derivative), then the error is negative and the trapezoidal rule overestimates the true value. This can also be seen from the geometric picture: the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate because area is unaccounted for under the curve, but none is counted above. If the interval of the integral being approximated includes an inflection point, the error is harder to identify.
An asymptotic error estimate for $N \to \infty$ is given by
$$E = -\frac{(b - a)^2}{12 N^2} \left[ f'(b) - f'(a) \right] + O(N^{-3}).$$
Further terms in this error estimate are given by the Euler–Maclaurin summation formula.
Several techniques can be used to analyze the error, including:
Fourier series
Residue calculus
Euler–Maclaurin summation formula
Polynomial interpolation
It is argued that the speed of convergence of the trapezoidal rule reflects and can be used as a definition of classes of smoothness of the functions.
Proof
First suppose that $h = \frac{b - a}{N}$ and $a_k = a + (k - 1) h$. Let $g_k$ be the function such that $g_k(t)$ is
https://en.wikipedia.org/wiki/INSEE%20code
The INSEE code is a numerical indexing code used by the French National Institute for Statistics and Economic Studies (INSEE) to identify various entities, including communes and départements. They are also used as national identification numbers given to people.
Created under Vichy
Although today this national identification number is used by social security in France and is present on each person's social security card (carte Vitale), it was originally created under Vichy France under the guise of the Registration Number to the National Directory of Identification of Physical People (Numéro d'inscription au répertoire des personnes physiques, NIRPP or simply NIR). The latter was originally to be used as a clandestine military recruitment tool, but in the end served to identify Jews, gypsies, and other "undesirable" populations under Vichy's conceptions. The first digit of the NIR was 1 for a male European, 2 for a female European, 3 for a male Muslim, 4 for a female Muslim, 5 for a male Jew, 6 for a female Jew, 7 for a male foreigner, 8 for a female foreigner, while 9 and 0 were reserved for persons of undetermined racial status.
The Demographic Service was created in 1940 in order to replace the military recruitment office prohibited by the June 1940 Armistice with Nazi Germany. On October 11, 1941, the Demographic Service absorbed the former General Statistics of France (SGF, created in 1833). The new organization was called the National Statistical Service (Service national des statistiques, SNS).
National identification numbers
Each French person receives at birth a national identification number, the "numéro d'inscription au répertoire" (NIR or National Repertory registration), also called a "numéro de sécurité sociale" (or Social Security number). This INSEE number is composed of 13 digits + a two-digit key. Although the total number has 15 digits, its composition makes it easy for individuals to remember at least the first seven digits (they just have to know their sex, year and month of birth, and department of birth). Since this number is used in many administrative procedures (whether by the state or by private enterprises), most people know part of this identification number by heart.
Their format is as follows: syymmlloookkk cc, where
s is 1 for a male, 2 for a female,
yy are the last two digits of the year of birth,
mm is the month of birth, usually 01 to 12 (but there are special values for persons whose exact date of birth is not known),
ll is the number of the department of origin: 2 digits, or 1 digit and 1 letter in metropolitan France, 3 digits for overseas.
ooo is the commune of origin (a department is composed of various communes): 3 digits in metropolitan France or 2 digits for overseas.
kkk is an order number that distinguishes people born in the same place in the same year and month. This number is the one given by the Acte de naissance, an official paper that officializes a birth (and is needed th
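As an illustration (not from the article), a minimal sketch that splits a NIR into the fields above and verifies the two-digit key using the standard rule key = 97 − (number mod 97). The example number is entirely hypothetical, and the sketch ignores special cases (Corsica's 2A/2B department codes and the special month values):

```python
def parse_nir(nir: str) -> dict:
    digits = nir.replace(" ", "")
    assert len(digits) == 15, "expected 13-digit number plus 2-digit key"
    number, key = digits[:13], int(digits[13:])
    return {
        "sex": "male" if number[0] == "1" else "female",   # s
        "birth_year": number[1:3],                         # yy
        "birth_month": number[3:5],                        # mm
        "department": number[5:7],                         # ll
        "commune": number[7:10],                           # ooo
        "order": number[10:13],                            # kkk
        "key_valid": key == 97 - int(number) % 97,         # cc check
    }

print(parse_nir("1 94 03 75 120 005 44"))   # hypothetical NIR, key checks out
```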
https://en.wikipedia.org/wiki/LBB
LBB may stand for:
Lactobacillus delbrueckii subsp. bulgaricus, a bacterium used in the production of yogurt
Ladyzhenskaya–Babuška–Brezzi condition, in mathematics
Laura Bell Bundy, an actress and singer
Little brown bird or little brown bat, a name given to an unidentified species
Little Black Book (disambiguation)
Lubbock Preston Smith International Airport (IATA code: LBB)
https://en.wikipedia.org/wiki/Hilbert%20transform
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function $u(t)$ of a real variable and produces another function of a real variable $H(u)(t)$. The Hilbert transform is given by the Cauchy principal value of the convolution with the function $1/(\pi t)$ (see the definition below). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° ($\pi/2$ radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see the relationship with the Fourier transform below). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal $u(t)$. The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
Definition
The Hilbert transform of $u$ can be thought of as the convolution of $u(t)$ with the function $h(t) = \frac{1}{\pi t}$, known as the Cauchy kernel. Because $1/t$ is not integrable across $t = 0$, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by $\operatorname{p.v.}$). Explicitly, the Hilbert transform of a function (or signal) $u(t)$ is given by
$$H(u)(t) = \frac{1}{\pi} \, \operatorname{p.v.} \int_{-\infty}^{\infty} \frac{u(\tau)}{t - \tau} \, d\tau,$$
provided this integral exists as a principal value. This is precisely the convolution of $u$ with the tempered distribution $\operatorname{p.v.} \frac{1}{\pi t}$. Alternatively, by changing variables, the principal-value integral can be written explicitly as
$$H(u)(t) = \frac{1}{\pi} \lim_{\varepsilon \to 0} \int_{\varepsilon}^{\infty} \frac{u(t - \tau) - u(t + \tau)}{\tau} \, d\tau.$$
When the Hilbert transform is applied twice in succession to a function $u$, the result is
$$H(H(u)) = -u,$$
provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is $H^{-1} = -H$. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of $u(t)$ (see below).
For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if $f(z)$ is analytic in the upper half complex plane $\{ z : \operatorname{Im} z > 0 \}$ and $u(t) = \operatorname{Re} f(t)$, then $\operatorname{Im} f(t) = H(u)(t)$ up to an additive constant, provided this Hilbert transform exists.
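Numerically, the Hilbert transform is usually computed through the frequency domain. A minimal sketch (not from the article) using SciPy's scipy.signal.hilbert, which returns the analytic signal $u + i H(u)$; since $H(\cos t) = \sin t$, the imaginary part should recover the sine:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 20 * np.pi, 4096, endpoint=False)
u = np.cos(t)                     # 10 whole periods on the grid

analytic = hilbert(u)             # u(t) + i * H(u)(t), computed via FFT
Hu = np.imag(analytic)

print(np.max(np.abs(Hu - np.sin(t))))   # tiny: H(cos) = sin
```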
Notation
In signal processing the Hilbert transform of $u(t)$ is commonly denoted by $\hat{u}(t)$. However, in mathematics, this notation is already extensively used to denote the Fourier transform of $u(t)$. Occasionally, the Hilbert transform may be denoted by $\tilde{u}(t)$. Furthermore, many sources define the Hilbert transform as the negative of the one defined here.
History
The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the int
https://en.wikipedia.org/wiki/Cochran%27s%20theorem
In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.
Statement
Let $U_1, \ldots, U_N$ be i.i.d. standard normally distributed random variables, and $U = (U_1, \ldots, U_N)^T$. Let $B^{(1)}, B^{(2)}, \ldots, B^{(k)}$ be symmetric matrices. Define $r_i$ to be the rank of $B^{(i)}$. Define $Q_i = U^T B^{(i)} U$, so that the $Q_i$ are quadratic forms. Further assume $\sum_i Q_i = U^T U$.
Cochran's theorem states that the following are equivalent:
$r_1 + r_2 + \cdots + r_k = N$,
the $Q_i$ are independent,
each $Q_i$ has a chi-squared distribution with $r_i$ degrees of freedom.
Often it's stated as $\sum_i A_i = A$, where $A$ is idempotent, and $\sum_i r_i = N$ is replaced by $\sum_i r_i = \operatorname{rank}(A)$. But after an orthogonal transform, $A = \operatorname{diag}(I_M, 0)$, and so we reduce to the above theorem.
Proof
Claim: Let $X$ be a standard Gaussian in $\mathbb{R}^n$. Then for any symmetric matrices $Q, Q'$, if $X^T Q X$ and $X^T Q' X$ have the same distribution, then $Q, Q'$ have the same eigenvalues (up to multiplicity).
Claim: $I = \sum_i B^{(i)}$.
Lemma: If $\sum_i M_i = I$, with all $M_i$ symmetric with eigenvalues 0, 1, then they are simultaneously diagonalizable.
Now we prove the original theorem. We prove that the three cases are equivalent by proving that each case implies the next one in a cycle ($1 \to 2 \to 3 \to 1$).
Examples
Sample mean and sample variance
If $X_1, \ldots, X_n$ are independent normally distributed random variables with mean $\mu$ and standard deviation $\sigma$, then
$$U_i = \frac{X_i - \mu}{\sigma}$$
is standard normal for each $i$. Note that the total $Q$ is equal to the sum of the squared $U$s as shown here:
$$Q = \sum_i Q_i = \sum_i U_i^2,$$
which stems from the original assumption that $\sum_i Q_i = U^T U$. So instead we will calculate this quantity and later separate it into $Q_i$'s. It is possible to write
$$\sum_{i=1}^{n} U_i^2 = \frac{1}{\sigma^2} \sum_{i=1}^{n} (X_i - \bar{X})^2 + \frac{n}{\sigma^2} (\bar{X} - \mu)^2$$
(here $\bar{X}$ is the sample mean). To see this identity, multiply throughout by $\sigma^2$ and note that
$$\sum_i (X_i - \mu)^2 = \sum_i (X_i - \bar{X} + \bar{X} - \mu)^2$$
and expand to give
$$\sum_i (X_i - \mu)^2 = \sum_i (X_i - \bar{X})^2 + \sum_i (\bar{X} - \mu)^2 + 2 \sum_i (X_i - \bar{X})(\bar{X} - \mu).$$
The third term is zero because it is equal to a constant times
$$\sum_i (X_i - \bar{X}) = 0,$$
and the second term has just $n$ identical terms added together. Thus
$$\sum_i (X_i - \mu)^2 = \sum_i (X_i - \bar{X})^2 + n (\bar{X} - \mu)^2,$$
and hence
$$\sum_i U_i^2 = \underbrace{\frac{1}{\sigma^2} \sum_i (X_i - \bar{X})^2}_{Q_1} + \underbrace{\frac{n}{\sigma^2} (\bar{X} - \mu)^2}_{Q_2} = Q_1 + Q_2.$$
Now $Q_2 = U^T B^{(2)} U$ with $B^{(2)} = J_n / n$, where $J_n$ is the matrix of ones, which has rank 1. In turn $Q_1 = U^T B^{(1)} U$, given that $B^{(1)} = I - J_n / n$. This expression can also be obtained by expanding $Q_1$ in matrix notation. It can be shown that the rank of $B^{(1)}$ is $n - 1$, as the addition of all its rows is equal to zero. Thus the conditions for Cochran's theorem are met.
Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.
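A small simulation sketch (not from the article) of the decomposition $Q = Q_1 + Q_2$ above, checking the predicted chi-squared means and the independence of the two pieces:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 10, 100_000   # arbitrary parameters

X = rng.normal(mu, sigma, size=(reps, n))
xbar = X.mean(axis=1)
Q1 = ((X - xbar[:, None]) ** 2).sum(axis=1) / sigma**2   # ~ chi2(n-1)
Q2 = n * (xbar - mu) ** 2 / sigma**2                     # ~ chi2(1)

print(Q1.mean(), Q2.mean())        # near n-1 = 9 and 1 (chi-squared means)
print(np.corrcoef(Q1, Q2)[0, 1])   # near 0, consistent with independence
```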
Distributions
The result for the distributions is written symbolically as
$$\sum_i (X_i - \bar{X})^2 \sim \sigma^2 \chi^2_{n-1}, \qquad n (\bar{X} - \mu)^2 \sim \sigma^2 \chi^2_1.$$
Both these random variables are proportional to the true but unknown variance $\sigma^2$. Thus their ratio does not depend on $\sigma^2$, and, because they are statistically independent, the distribution of their ratio is given by
$$\frac{n (\bar{X} - \mu)^2}{\frac{1}{n-1} \sum_i (X_i - \bar{X})^2} \sim \frac{\chi^2_1}{\frac{1}{n-1} \chi^2_{n-1}} \sim F_{1, n-1},$$
where $F_{1, n-1}$ is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
Estimation of variance
https://en.wikipedia.org/wiki/Parametric%20equation
In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, called a parametric curve and parametric surface, respectively. In such cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (alternatively spelled as parametrisation) of the object.
For example, the equations
$$x = \cos t, \qquad y = \sin t$$
form a parametric representation of the unit circle, where $t$ is the parameter: A point $(x, y)$ is on the unit circle if and only if there is a value of $t$ such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors:
$$(x, y) = (\cos t, \sin t).$$
Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.
In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled $t$; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.
Applications
Kinematics
In kinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute a vector-valued function for position. Such parametric curves can then be integrated and differentiated termwise. Thus, if a particle's position is described parametrically as
$$\mathbf{r}(t) = (x(t), y(t), z(t)),$$
then its velocity can be found as
$$\mathbf{v}(t) = \mathbf{r}'(t) = (x'(t), y'(t), z'(t))$$
and its acceleration as
$$\mathbf{a}(t) = \mathbf{r}''(t) = (x''(t), y''(t), z''(t)).$$
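A short symbolic sketch (not from the article) of this termwise differentiation, using SymPy on a helical path chosen for illustration:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([sp.cos(t), sp.sin(t), t])   # position along a helix

v = r.diff(t)                  # velocity:     (-sin t, cos t, 1)
a = v.diff(t)                  # acceleration: (-cos t, -sin t, 0)

print(v.T)
print(a.T)
print(sp.simplify(v.norm()))   # sqrt(2): constant speed along this path
```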
Computer-aided design
Another important use of parametric equations is in the field of computer-aided design (CAD). For example, consider the following three representations, all of which are commonly used to describe planar curves.
Each representation has advantages and drawbacks for CAD applications.
The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well under geometric transformations, and in particul
https://en.wikipedia.org/wiki/Nikolai%20Luzin
Nikolai Nikolayevich Luzin (also spelled Lusin; ; 9 December 1883 – 28 February 1950) was a Soviet and Russian mathematician known for his work in descriptive set theory and aspects of mathematical analysis with strong connections to point-set topology. He was the eponym of Luzitania, a loose group of young Moscow mathematicians of the first half of the 1920s. They adopted his set-theoretic orientation, and went on to apply it in other areas of mathematics.
Life
He started studying mathematics in 1901 at Moscow State University, where his advisor was Dmitri Egorov. He graduated in 1905.
Luzin underwent great personal turmoil in the years 1905 and 1906, when his materialistic worldview had collapsed and he found himself close to suicide. In 1906 he wrote to Pavel Florensky, a former fellow mathematics student who was now studying theology: You found me a mere child at the University, knowing nothing. I don't know how it happened, but I cannot be satisfied any more with analytic functions and Taylor series ... it happened about a year ago. ... To see the misery of people, to see the torment of life, to wend my way home from a mathematical meeting ... where, shivering in the cold, some women stand waiting in vain for dinner purchased with horror - this is an unbearable sight. It is unbearable, having seen this, to calmly study (in fact to enjoy) science. After that I could not study only mathematics, and I wanted to transfer to the medical school. The correspondence between the two men continued for many years and Luzin was greatly influenced by Florensky's religious treatise The Pillar and Foundation of Truth (1908).
From 1910 to 1914 Luzin studied at Göttingen, where he was influenced by Edmund Landau. He then returned to Moscow and received his Ph.D. degree in 1915. During the Russian Civil War (1918–1920) Luzin left Moscow for the Polytechnical Institute Ivanovo-Voznesensk (now called Ivanovo State University of Chemistry and Technology). He returned to Moscow in 1920.
In the 1920s Luzin organized a famous research seminar at Moscow State University. His doctoral students included some of the most famous Soviet mathematicians: Pavel Alexandrov, Nina Bari, Aleksandr Khinchin, Andrey Kolmogorov, Aleksandr Kronrod, Mikhail Lavrentyev, Alexey Lyapunov, Lazar Lyusternik, Pyotr Novikov, Lev Schnirelmann and Pavel Urysohn.
On 5 January 1927 Luzin was elected as a corresponding member of the Academy of Sciences of the Soviet Union and became a full member of the Academy of Sciences of the Soviet Union first at the Department of Philosophy and then at the Department of Pure Mathematics (12 January 1929). In 1929 he was elected as a member of the Polish Academy of Sciences and Letters in Kraków.
Research work
Luzin's first significant result was a construction of an almost everywhere divergent trigonometric series whose coefficients decrease monotonically to zero (1912). This example disproved a conjecture of Pierre Fatou and was unexpected to most mat
https://en.wikipedia.org/wiki/Discriminated%20union
The term discriminated union may refer to:
Disjoint union in set theory.
Tagged union in computer science.
https://en.wikipedia.org/wiki/Finite%20field%20arithmetic
In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements), as opposed to arithmetic in a field with an infinite number of elements, like the field of rational numbers.
There are infinitely many different finite fields. Their number of elements is necessarily of the form $p^n$, where $p$ is a prime number and $n$ is a positive integer, and two finite fields of the same size are isomorphic. The prime $p$ is called the characteristic of the field, and the positive integer $n$ is called the dimension of the field over its prime field.
Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments.
Effective polynomial representation
The finite field with $p^n$ elements is denoted GF($p^n$) and is also called the Galois field of order $p^n$, in honor of the founder of finite field theory, Évariste Galois. GF($p$), where $p$ is a prime number, is simply the ring of integers modulo $p$. That is, one can perform operations (addition, subtraction, multiplication) using the usual operation on integers, followed by reduction modulo $p$. For instance, in GF(5), $4 + 3 = 7$ is reduced to 2 modulo 5. Division is multiplication by the inverse modulo $p$, which may be computed using the extended Euclidean algorithm.
A particular case is GF(2), where addition is exclusive OR (XOR) and multiplication is AND. Since the only invertible element is 1, division is the identity function.
Elements of GF($p^n$) may be represented as polynomials of degree strictly less than $n$ over GF($p$). Operations are then performed modulo $R$, where $R$ is an irreducible polynomial of degree $n$ over GF($p$), for instance using polynomial long division. The addition of two polynomials $P$ and $Q$ is done as usual; multiplication may be done as follows: compute $P \cdot Q$ as usual, then compute the remainder modulo $R$. This representation in terms of polynomial coefficients is called a monomial basis (a.k.a. 'polynomial basis').
There are other representations of the elements of GF(pn); some are isomorphic to the polynomial representation above and others look quite different (for instance, using matrices). Using a normal basis may have advantages in some contexts.
When the prime is 2, it is conventional to express elements of GF($2^n$) as binary numbers, with the coefficient of each term in a polynomial represented by one bit in the corresponding element's binary expression. Braces ( "{" and "}" ) or similar delimiters are commonly added to binary numbers, or to their hexadecimal equivalents, to indicate that the value gives the coefficients of a basis of a field, thus representing an element of the field. For example, the following are equivalent representations of the same value in a characteristic 2 finite field: polynomial $x^6 + x^4 + x + 1$, binary {01010011}, hexadecimal {53}.
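As an illustration (not spelled out in this excerpt), multiplication in the characteristic-2 field GF($2^8$) with the Rijndael/AES reduction polynomial $R = x^8 + x^4 + x^3 + x + 1$ (hex 11B) can be done with shifts and XORs; the pair {53}, {CA} used below is a standard example of mutual inverses in that field:

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^8) in the AES representation."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a          # polynomial addition is XOR
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF       # multiply a by x
        if carry:
            a ^= 0x1B             # reduce modulo x^8 + x^4 + x^3 + x + 1
    return product

print(hex(gf256_mul(0x53, 0xCA)))   # 0x1: {53} and {CA} are inverses
```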
Primitive polynomials
There are many irred
https://en.wikipedia.org/wiki/Congruence%20of%20squares
In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms.
Derivation
Given a positive integer $n$, Fermat's factorization method relies on finding numbers $x$ and $y$ satisfying the equality
$$x^2 - y^2 = n.$$
We can then factor $n = x^2 - y^2 = (x + y)(x - y)$. This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, $n$ may also be factored if we can satisfy the weaker congruence of squares conditions:
$$x^2 \equiv y^2 \pmod{n}, \qquad x \not\equiv \pm y \pmod{n}.$$
From here we easily deduce
$$(x + y)(x - y) \equiv 0 \pmod{n}.$$
This means that $n$ divides the product $(x + y)(x - y)$. Thus $(x + y)$ and $(x - y)$ each contain factors of $n$, but those factors can be trivial. In this case we need to find another $x$ and $y$. Computing the greatest common divisors $\gcd(x + y, n)$ and $\gcd(x - y, n)$ will give us these factors; this can be done quickly using the Euclidean algorithm.
Congruences of squares are extremely useful in integer factorization algorithms and are extensively used in, for example, the quadratic sieve, general number field sieve, continued fraction factorization, and Dixon's factorization. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares.
Further generalizations
It is also possible to use factor bases to help find congruences of squares more quickly. Instead of looking for $x^2 \equiv y^2 \pmod{n}$ from the outset, we find many relations $x^2 \equiv y \pmod{n}$ where the $y$ have small prime factors, and try to multiply a few of these together to get a square on the right-hand side.
Examples
Factorize 35
We take $n = 35$ and find that
$$6^2 = 36 \equiv 1 = 1^2 \pmod{35}.$$
We thus factor as
$$\gcd(6 - 1, 35) \cdot \gcd(6 + 1, 35) = 5 \cdot 7 = 35.$$
Factorize 1649
Using $n = 1649$, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences:
$$41^2 \equiv 32, \qquad 42^2 \equiv 115, \qquad 43^2 \equiv 200 \pmod{1649};$$
of these, two have only small primes as factors,
$$32 = 2^5, \qquad 200 = 2^3 \cdot 5^2,$$
and a combination of these has an even power of each small prime, and is therefore a square:
$$32 \cdot 200 = 2^8 \cdot 5^2 = (2^4 \cdot 5)^2 = 80^2,$$
yielding the congruence of squares
$$80^2 \equiv 32 \cdot 200 \equiv (41 \cdot 43)^2 \equiv 114^2 \pmod{1649},$$
since $41 \cdot 43 = 1763 \equiv 114 \pmod{1649}$. So using the values of 80 and 114 as our $x$ and $y$ gives factors
$$\gcd(114 - 80, 1649) \cdot \gcd(114 + 80, 1649) = 17 \cdot 97 = 1649.$$
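The gcd step that turns the congruence into factors can be sketched as follows (not from the article), reproducing the $n = 1649$ example:

```python
from math import gcd

def factors_from_squares(n: int, x: int, y: int):
    assert (x * x - y * y) % n == 0, "need x^2 = y^2 (mod n)"
    p, q = gcd(x - y, n), gcd(x + y, n)
    return p, q                     # may be trivial (1 and n) for bad x, y

print(factors_from_squares(1649, 114, 80))   # (17, 97)
```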
See also
Congruence relation
Equivalence (mathematics)
https://en.wikipedia.org/wiki/Decision%20tree%20learning
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.
Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
General
Decision tree learning is a method commonly used in data mining. The goal is to create a model that predicts the value of a target variable based on several input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all of the input features have finite discrete domains, and there is a single target feature called the "classification". Each element of the domain of the classification is called a class.
A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of that feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes).
A tree is built by splitting the source set, constituting the root node of the tree, into subsets—which constitute the successor children. The splitting is based on a set of splitting rules based on classification features. This process is repeated on each derived subset in a recursive manner called recursive partitioning.
The recursion is completed when the subset at a node has all the same values of the target variable, or when splitting no longer adds value to the predictions. This process of top-down induction of decision trees (TDIDT) is an example of a greedy algorithm, and it is by far the most common strategy for learning decision trees from data.
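As a concrete illustration (not from the article), scikit-learn's DecisionTreeClassifier performs exactly this kind of top-down induction; the dataset and depth limit here are arbitrary choices:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each internal node tests one input feature; leaves carry class labels.
print(export_text(tree))
print(tree.predict(X[:2]))   # predicted classes for two observations
```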
In data mining, decision trees can be described also as the combination
https://en.wikipedia.org/wiki/Magnitude%20%28mathematics%29
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs.
In physics, magnitude can be defined as quantity or distance.
History
The Greeks distinguished between several types of magnitude, including:
Positive fractions
Line segments (ordered by length)
Plane figures (ordered by area)
Solids (ordered by volume)
Angles (ordered by angular magnitude)
They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and magnitude is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes.
Numbers
The magnitude of any number $x$ is usually called its absolute value or modulus, denoted by $|x|$.
Real numbers
The absolute value of a real number $r$ is defined by:
$$|r| = \begin{cases} r, & \text{if } r \geq 0 \\ -r, & \text{if } r < 0. \end{cases}$$
Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70.
Complex numbers
A complex number $z$ may be viewed as the position of a point P in a 2-dimensional space, called the complex plane. The absolute value (or modulus) of $z$ may be thought of as the distance of P from the origin of that space. The formula for the absolute value of $z = a + bi$ is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space:
$$|z| = \sqrt{a^2 + b^2},$$
where the real numbers $a$ and $b$ are the real part and the imaginary part of $z$, respectively. For instance, the modulus of $-3 + 4i$ is $\sqrt{(-3)^2 + 4^2} = 5$. Alternatively, the magnitude of a complex number $z$ may be defined as the square root of the product of itself and its complex conjugate $\bar{z}$, where for any complex number $z = a + bi$, its complex conjugate is $\bar{z} = a - bi$:
$$|z| = \sqrt{z \bar{z}} = \sqrt{(a + bi)(a - bi)} = \sqrt{a^2 + b^2}.$$
Vector spaces
Euclidean vector space
A Euclidean vector represents the position of a point P in a Euclidean space. Geometrically, it can be described as an arrow from the origin of the space (vector tail) to that point (vector tip). Mathematically, a vector $\mathbf{x}$ in an $n$-dimensional Euclidean space can be defined as an ordered list of $n$ real numbers (the Cartesian coordinates of P): $\mathbf{x} = [x_1, x_2, \ldots, x_n]$. Its magnitude or length, denoted by $\|\mathbf{x}\|$, is most commonly defined as its Euclidean norm (or Euclidean length):
$$\|\mathbf{x}\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$
For instance, in a 3-dimensional space, the magnitude of [3, 4, 12] is 13 because
$$\sqrt{3^2 + 4^2 + 12^2} = \sqrt{169} = 13.$$
This is equivalent to the square root of the dot product of the vector with itself:
$$\|\mathbf{x}\| = \sqrt{\mathbf{x} \cdot \mathbf{x}}.$$
The Euclidean norm of a vector is just a special case of Euclidean distance: the distance between its tail and its tip. Two similar notations are used for the Euclidean norm of a vector $\mathbf{x}$: $\|\mathbf{x}\|$ and $|\mathbf{x}|$.
A disadvantage of the second notation is that it can also be used to denote the absolute value of scalars and the determinants of matrices, which introduces an element of ambiguity.
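The three notions of magnitude above can be computed directly (a sketch, not from the article):

```python
import math

print(abs(-70))                            # absolute value of a real: 70
print(abs(complex(-3, 4)))                 # modulus of -3 + 4i: 5.0
v = [3, 4, 12]
print(math.sqrt(sum(c * c for c in v)))    # Euclidean norm: 13.0
```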
Normed vector spaces
https://en.wikipedia.org/wiki/Banach%E2%80%93Alaoglu%20theorem
In functional analysis and related branches of mathematics, the Banach–Alaoglu theorem (also known as Alaoglu's theorem) states that the closed unit ball of the dual space of a normed vector space is compact in the weak* topology.
A common proof identifies the unit ball with the weak-* topology as a closed subset of a product of compact sets with the product topology.
As a consequence of Tychonoff's theorem, this product, and hence the unit ball within, is compact.
This theorem has applications in physics when one describes the set of states of an algebra of observables, namely that any state can be written as a convex linear combination of so-called pure states.
History
According to Lawrence Narici and Edward Beckenstein, the Alaoglu theorem is a “very important result—maybe most important fact about the weak-* topology—[that] echoes throughout functional analysis.”
In 1912, Helly proved that the unit ball of the continuous dual space of $C([a, b])$ is countably weak-* compact.
In 1932, Stefan Banach proved that the closed unit ball in the continuous dual space of any separable normed space is sequentially weak-* compact (Banach only considered sequential compactness).
The proof for the general case was published in 1940 by the mathematician Leonidas Alaoglu.
According to Pietsch [2007], there are at least twelve mathematicians who can lay claim to this theorem or an important predecessor to it.
The Bourbaki–Alaoglu theorem is a generalization of the original theorem by Bourbaki to dual topologies on locally convex spaces.
This theorem is also called the Banach–Alaoglu theorem or the weak-* compactness theorem and it is commonly called simply the Alaoglu theorem.
Statement
If X is a vector space over the field K then X^# will denote the algebraic dual space of X, and these two spaces are henceforth associated with the bilinear evaluation map ⟨·, ·⟩ : X × X^# → K defined by
⟨x, f⟩ = f(x),
where the triple (X, X^#, ⟨·, ·⟩) forms a dual system called the canonical dual system.
If X is a topological vector space (TVS) then its continuous dual space will be denoted by X′, where X′ ⊆ X^# always holds.
Denote the weak-* topology on X^# by σ(X^#, X) and denote the weak-* topology on X′ by σ(X′, X).
The weak-* topology is also called the topology of pointwise convergence because, given a map f and a net of maps (f_i), the net (f_i) converges to f in this topology if and only if for every point x in the domain, the net of values (f_i(x)) converges to the value f(x).
Proof involving duality theory
If X is a normed vector space, then the polar of a neighborhood is closed and norm-bounded in the dual space.
In particular, if B is the open (or closed) unit ball in X then the polar of B is the closed unit ball in the continuous dual space X′ of X (with the usual dual norm).
Consequently, this theorem can be specialized to:
When the continuous dual space X′ of X is an infinite dimensional normed space then it is impossible for the closed unit ball in X′ to be a compact subset when X′ has its usual norm topology.
This is because the unit ball in the norm topology is compact if and only if the space is finite-dimensional (cf. F. Riesz's theorem).
|
https://en.wikipedia.org/wiki/Compact%20operator
|
In functional analysis, a branch of mathematics, a compact operator is a linear operator T : X → Y, where X, Y are normed vector spaces, with the property that T maps bounded subsets of X to relatively compact subsets of Y (subsets with compact closure in Y). Such an operator is necessarily a bounded operator, and so continuous. Some authors require that X, Y are Banach, but the definition can be extended to more general spaces.
Any bounded operator that has finite rank is a compact operator; indeed, the class of compact operators is a natural generalization of the class of finite-rank operators in an infinite-dimensional setting. When Y is a Hilbert space, it is true that any compact operator is a limit of finite-rank operators, so that the class of compact operators can be defined alternatively as the closure of the set of finite-rank operators in the norm topology. Whether this was true in general for Banach spaces (the approximation property) was an unsolved question for many years; in 1973 Per Enflo gave a counter-example, building on work by Grothendieck and Banach.
The origin of the theory of compact operators is in the theory of integral equations, where integral operators supply concrete examples of such operators. A typical Fredholm integral equation gives rise to a compact operator K on function spaces; the compactness property is shown by equicontinuity. The method of approximation by finite-rank operators is basic in the numerical solution of such equations. The abstract idea of Fredholm operator is derived from this connection.
Equivalent formulations
A linear map T : X → Y between two topological vector spaces is said to be compact if there exists a neighborhood U of the origin in X such that T(U) is a relatively compact subset of Y.
Let X, Y be normed spaces and T : X → Y a linear operator. Then the following statements are equivalent, and some of them are used as the principal definition by different authors:
T is a compact operator;
the image of the unit ball of X under T is relatively compact in Y;
the image of any bounded subset of X under T is relatively compact in Y;
there exists a neighbourhood U of the origin in X and a compact subset V ⊆ Y such that T(U) ⊆ V;
for any bounded sequence (x_n) in X, the sequence (T x_n) contains a converging subsequence.
If in addition Y is Banach, these statements are also equivalent to:
the image of any bounded subset of X under T is totally bounded in Y.
If a linear operator is compact, then it is continuous.
Important properties
In the following, X, Y, Z, W are Banach spaces, B(X, Y) is the space of bounded operators X → Y under the operator norm, and K(X, Y) denotes the space of compact operators X → Y. Id_X denotes the identity operator on X, B(X) = B(X, X), and K(X) = K(X, X).
K(X, Y) is a closed subspace of B(X, Y) (in the norm topology). Equivalently,
given a sequence of compact operators (T_n) mapping X → Y (where X, Y are Banach) and given that (T_n) converges to T with respect to the operator norm, T is then compact.
Conversely, if X, Y are Hilbert spaces, then every compact operator from X to Y is the limit of finite rank operators. Notably, this "approximation property" fails for general Banach spaces, as Enflo's counterexample shows.
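As a concrete illustration (our example, not taken from the article), the following diagonal operator on the sequence space ℓ² is compact, being a norm-limit of its finite-rank truncations:

\[
T(x_1, x_2, x_3, \dots) = \Bigl(x_1, \frac{x_2}{2}, \frac{x_3}{3}, \dots\Bigr),
\qquad
T_n(x_1, x_2, \dots) = \Bigl(x_1, \frac{x_2}{2}, \dots, \frac{x_n}{n}, 0, 0, \dots\Bigr).
\]

Each T_n has rank n, and ‖T − T_n‖ = 1/(n + 1) → 0, so T is compact by the closedness of the space of compact operators noted above.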
|
https://en.wikipedia.org/wiki/Simple%20algebra%20%28universal%20algebra%29
|
In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant.
As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should or should not be considered simple; only in this special case might the notions fail to match).
A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra.
See also
simple group
simple ring
central simple algebra
References
Algebras
Ring theory
|
https://en.wikipedia.org/wiki/James%20Alan%20Gardner
|
James Alan Gardner (born January 10, 1955) is a Canadian science fiction author.
Raised in Simcoe and Bradford, Ontario, he earned bachelor's and master's degrees in applied mathematics from the University of Waterloo.
Gardner has published science fiction short stories in a range of periodicals, including The Magazine of Fantasy and Science Fiction and Amazing Stories. In 1989, his short story "The Children of Creche" was awarded the Grand Prize in the Writers of the Future contest. Two years later his story "Muffin Explains Teleology to the World at Large" won a Prix Aurora Award; another story, "Three Hearings on the Existence of Snakes in the Human Bloodstream," won an Aurora and was nominated for both the Nebula and Hugo Awards.
He has written a number of novels in a "League of Peoples" universe in which murderers are defined as "dangerous non-sentients" and are killed by aliens if they try to leave their solar system; the aliens are so advanced that they regard humans much as humans regard bacteria. This precludes the possibility of interstellar wars.
He has also explored themes of gender in his novels, including Commitment Hour in which people change sex every year, and Vigilant in which group marriages are traditional.
Gardner is also an educator and technical writer. His book Learning UNIX is used as a textbook in some Canadian universities.
He lives in Waterloo, Ontario.
Bibliography
Lara Croft, Tomb Raider series
No. 3 Lara Croft and the Man of Bronze
League of Peoples universe
Commitment Hour (1998)
Trapped (2002)
Festina Ramos series:
Expendable (1997)
Vigilant (1999)
Hunted (2000)
Ascending (2001)
Radiant (2004)
Short story collections
Gravity Wells (2005)
The Dark vs. Spark
All Those Explosions Were Someone Else's Fault (2017)
They Promised Me the Gun Wasn't Loaded (2018)
Non-fiction
Learning UNIX (1994)
From C to C (1995)
See also
List of science fiction authors
List of University of Waterloo people
Sex in Science Fiction
UNIX
References
External links
James Alan Gardner's homepage
The Two Solitudes: An Interview with James Alan Gardner
Challenging Destiny: James Alan Gardner Explains Himself to the World at Large
Strange Horizons Interview: James Alan Gardner
1955 births
Living people
Canadian male short story writers
Canadian science fiction writers
People from Norfolk County, Ontario
University of Waterloo alumni
|
https://en.wikipedia.org/wiki/Scott%20Vanstone
|
Scott A. Vanstone was a mathematician and cryptographer in the University of Waterloo Faculty of Mathematics. He was a member of the school's Centre for Applied Cryptographic Research, and was also a founder of the cybersecurity company Certicom. He received his PhD in 1974 at the University of Waterloo, and for about a decade worked principally in combinatorial design theory, finite geometry, and finite fields. In the 1980s he started working in cryptography. An early result of Vanstone (joint with Ian Blake, R. Fuji-Hara, and Ron Mullin) was an improved algorithm for computing discrete logarithms in binary fields, which inspired Don Coppersmith to develop his famous exp(n^{1/3+ε}) algorithm (where n is the degree of the field).
Vanstone was one of the first to see the commercial potential of Elliptic Curve Cryptography (ECC), and much of his subsequent work was devoted to developing ECC algorithms, protocols, and standards. In 1985 he co-founded Certicom, which later became the chief developer and promoter of ECC.
Vanstone authored or coauthored five widely used books and almost two hundred research articles, and he held several patents.
He was a Fellow of the Royal Society of Canada and a Fellow of the International Association for Cryptologic Research. In 2001 he won the RSA Award for Excellence in Mathematics, and in 2009 he received the Ontario Premier's Catalyst Award for Lifetime Achievement in Innovation.
He died on March 2, 2014, shortly after a cancer diagnosis.
Bibliography
See also
List of University of Waterloo people
References
Notes
External links
Handbook of Applied Cryptography (Free download)
DBLP publication list
Modern cryptographers
University of Waterloo alumni
Academic staff of the University of Waterloo
1947 births
2014 deaths
20th-century Canadian mathematicians
21st-century Canadian mathematicians
People associated with computer security
Public-key cryptographers
Fellows of the Royal Society of Canada
International Association for Cryptologic Research fellows
|
https://en.wikipedia.org/wiki/Additive
|
Additive may refer to:
Mathematics
Additive function, a function in number theory
Additive map, a function that preserves the addition operation
Additive set-function, see Sigma additivity
Additive category, a preadditive category with finite biproducts
Additive inverse, an arithmetic concept
Science
Additive color, as opposed to subtractive color
Additive model, a statistical regression model
Additive synthesis, an audio synthesis technique
Additive genetic effects
Additive quantity, a physical quantity that is additive for subsystems; see Intensive and extensive properties
Engineering
Feed additive
Gasoline additive, a substance used to improve the performance of a fuel, lower emissions or clean the engine
Oil additive, a substance used to improve the performance of a lubricant
Weakly additive, the quality of preferences in some logistics problems
Polymer additive
Pit additive, a material aiming to reduce fecal sludge build-up and control odor in pit latrines, septic tanks and wastewater treatment plants
Biodegradable additives
Other uses
Additive case, one of the grammatical cases in Estonian
Food additive, any substance added to food to improve flavor, appearance, shelf life, etc.
Additive rhythm, a larger period of time constructed from smaller ones
|
https://en.wikipedia.org/wiki/Scholz%20conjecture
|
In mathematics, the Scholz conjecture is a conjecture on the length of certain addition chains.
It is sometimes also called the Scholz–Brauer conjecture or the Brauer–Scholz conjecture, after Arnold Scholz who formulated it in 1937 and Alfred Brauer who studied it soon afterward and proved a weaker bound.
Statement
The conjecture states that
l(2^n − 1) ≤ n − 1 + l(n),
where l(n) is the length of the shortest addition chain producing n.
Here, an addition chain is defined as a sequence of numbers, starting with 1, such that every number after the first can be expressed as a sum of two earlier numbers (which are allowed to both be equal). Its length is the number of sums needed to express all its numbers, which is one less than the length of the sequence of numbers (since there is no sum of previous numbers for the first number in the sequence, 1). Computing the length of the shortest addition chain that contains a given number n can be done by dynamic programming for small numbers, but it is not known whether it can be done in polynomial time measured as a function of the length of the binary representation of n. Scholz's conjecture, if true, would provide short addition chains for numbers of a special form, the Mersenne numbers 2^n − 1.
Example
As an example, l(5) = 3: it has a shortest addition chain
1, 2, 4, 5
of length three, determined by the three sums
1 + 1 = 2,
2 + 2 = 4,
4 + 1 = 5.
Also, l(31) = 7: it has a shortest addition chain
1, 2, 3, 6, 12, 24, 30, 31
of length seven, determined by the seven sums
1 + 1 = 2,
2 + 1 = 3,
3 + 3 = 6,
6 + 6 = 12,
12 + 12 = 24,
24 + 6 = 30,
30 + 1 = 31.
Both l(2^5 − 1) = l(31) and 5 − 1 + l(5) = 4 + 3 equal 7.
Therefore, these values obey the inequality (which in this case is an equality) and the Scholz conjecture is true for the case n = 5.
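The small cases above can be checked mechanically. Below is a minimal Python sketch (our own naive iterative-deepening search, not an algorithm from the article; the function names are illustrative) that computes l(n) for small n and verifies the conjectured inequality for n = 5:

def shortest_chain_length(n, limit=12):
    # Iterative deepening: try increasing bounds (number of sums) until a
    # chain reaching n is found. Feasible for small n only.
    if n == 1:
        return 0
    for bound in range(1, limit + 1):
        if _search([1], n, bound):
            return bound
    raise ValueError("no chain found within limit")

def _search(chain, n, bound):
    last = chain[-1]
    if last == n:
        return True
    remaining = bound - (len(chain) - 1)
    # Prune: even repeated doubling cannot reach n in the remaining steps.
    if remaining == 0 or last * 2 ** remaining < n:
        return False
    # Extend by any sum of two (not necessarily distinct) earlier elements;
    # optimal chains may be taken strictly increasing and bounded by n.
    candidates = sorted({a + b for a in chain for b in chain
                         if last < a + b <= n}, reverse=True)
    return any(_search(chain + [c], n, bound) for c in candidates)

print(shortest_chain_length(5))    # 3
print(shortest_chain_length(31))   # 7
n = 5
print(shortest_chain_length(2 ** n - 1) <= n - 1 + shortest_chain_length(n))  # True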
Partial results
By using a combination of computer search techniques and mathematical characterizations of optimal addition chains, Clift showed that the conjecture is true for all n ≤ 5784688. Additionally, he verified that for all n ≤ 64, the inequality of the conjecture is actually an equality.
References
External links
Shortest addition chains
OEIS sequence A003313
Addition chains
Conjectures
Unsolved problems in number theory
|
https://en.wikipedia.org/wiki/Addition%20chain
|
In mathematics, an addition chain for computing a positive integer n can be given by a sequence of natural numbers starting with 1 and ending with n, such that each number in the sequence is the sum of two previous numbers. The length of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers.
Examples
As an example: (1,2,3,6,12,24,30,31) is an addition chain for 31 of length 7, since
2 = 1 + 1
3 = 2 + 1
6 = 3 + 3
12 = 6 + 6
24 = 12 + 12
30 = 24 + 6
31 = 30 + 1
Addition chains can be used for addition-chain exponentiation. This method allows exponentiation with integer exponents to be performed using a number of multiplications equal to the length of an addition chain for the exponent. For instance, the addition chain for 31 leads to a method for computing the 31st power of any number x using only seven multiplications, instead of the 30 multiplications that one would get from repeated multiplication, and eight multiplications with exponentiation by squaring:
x^2 = x × x
x^3 = x^2 × x
x^6 = x^3 × x^3
x^12 = x^6 × x^6
x^24 = x^12 × x^12
x^30 = x^24 × x^6
x^31 = x^30 × x
Methods for computing addition chains
Calculating an addition chain of minimal length is not easy; a generalized version of the problem, in which one must find a chain that simultaneously forms each of a sequence of values, is NP-complete. There is no known algorithm which can calculate a minimal addition chain for a given number with any guarantees of reasonable timing or small memory usage. However, several techniques are known to calculate relatively short chains that are not always optimal.
One very well known technique to calculate relatively short addition chains is the binary method, similar to exponentiation by squaring. In this method, an addition chain for the number n is obtained recursively, from an addition chain for n′ = ⌊n/2⌋. If n is even, it can be obtained in a single additional sum, as n = n′ + n′. If n is odd, this method uses two sums to obtain it, by computing n − 1 = n′ + n′ and then adding one.
The factor method for finding addition chains is based on the prime factorization of the number n to be represented. If n has a number p as one of its prime factors, then an addition chain for n can be obtained by starting with a chain for n/p, and then concatenating onto it a chain for p, modified by multiplying each of its numbers by n/p. The ideas of the factor method and binary method can be combined into Brauer's m-ary method by choosing any number m (regardless of whether it divides n), recursively constructing a chain for ⌊n/m⌋, concatenating a chain for m (modified in the same way as above) to obtain m⌊n/m⌋, and then adding the remainder. Additional refinements of these ideas lead to a family of methods called sliding window methods.
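As an illustration of the binary method described above, here is a minimal Python sketch (our own; the function name is illustrative). It produces a valid, not necessarily minimal, addition chain:

def binary_chain(n):
    # Build a chain for n from a chain for n // 2 (even case: one extra sum,
    # n = n' + n'; odd case: recurse on the even number n - 1, then add 1).
    if n == 1:
        return [1]
    if n % 2 == 0:
        return binary_chain(n // 2) + [n]
    return binary_chain(n - 1) + [n]

print(binary_chain(31))           # [1, 2, 3, 6, 7, 14, 15, 30, 31]
print(len(binary_chain(31)) - 1)  # 8 sums; the optimal chain shown above uses 7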
Chain length
Let l(n) denote the smallest s so that there exists an addition chain
of length s which computes n.
It is known that
log2(n) + log2(ν(n)) − 2.13 ≤ l(n) ≤ log2(n) + ν(n) − 1,
where ν(n) is the Hamming weight (the number of ones) of the binary expansion of n.
One can obtain an addition chain for fr
|
https://en.wikipedia.org/wiki/Image%20%28mathematics%29
|
In mathematics, the image of a function is the set of all output values it may produce.
More generally, evaluating a given function f at each element of a given subset A of its domain produces a set, called the "image of A under (or through) f". Similarly, the inverse image (or preimage) of a given subset B of the codomain of f is the set of all elements of the domain that map to the members of B.
Image and inverse image may also be defined for general binary relations, not just functions.
Definition
The word "image" is used in three related ways. In these definitions, f : X → Y is a function from the set X to the set Y.
Image of an element
If x is a member of X, then the image of x under f, denoted f(x), is the value of f when applied to x. f(x) is alternatively known as the output of f for argument x.
Given y ∈ Y, the function f is said to "take the value y" or "take y as a value" if there exists some x in the function's domain such that f(x) = y.
Similarly, given a set S, f is said to "take a value in S" if there exists some x in the function's domain such that f(x) ∈ S.
However, "f takes all values in S" and "f is valued in S" means that f(x) ∈ S for every point x in f's domain.
Image of a subset
Throughout, let f : X → Y be a function.
The image under f of a subset S of X is the set of all f(s) for s ∈ S. It is denoted by f[S], or by f(S) when there is no risk of confusion. Using set-builder notation, this definition can be written as
f[S] = {f(s) : s ∈ S}.
This induces a function f[·] : ℘(X) → ℘(Y), where ℘(S) denotes the power set of a set S, that is the set of all subsets of S. See below for more.
Image of a function
The image of a function is the image of its entire domain, also known as the range of the function. This last usage should be avoided because the word "range" is also commonly used to mean the codomain of f.
Generalization to binary relations
If R is an arbitrary binary relation on X × Y, then the set {y ∈ Y : x R y for some x ∈ X} is called the image, or the range, of R. Dually, the set {x ∈ X : x R y for some y ∈ Y} is called the domain of R.
Inverse image
Let f be a function from X to Y. The preimage or inverse image of a set B ⊆ Y under f, denoted by f⁻¹[B], is the subset of X defined by
f⁻¹[B] = {x ∈ X : f(x) ∈ B}.
Other notations include f⁻¹(B) and f⁻(B).
The inverse image of a singleton set, denoted by f⁻¹[{y}] or by f⁻¹[y], is also called the fiber or fiber over y or the level set of y. The set of all the fibers over the elements of Y is a family of sets indexed by Y.
For example, for the function f(x) = x², the inverse image of {4} would be {−2, 2}. Again, if there is no risk of confusion, f⁻¹[B] can be denoted by f⁻¹(B), and f⁻¹ can also be thought of as a function from the power set of Y to the power set of X. The notation f⁻¹ should not be confused with that for inverse function, although it coincides with the usual one for bijections in that the inverse image of B under f is the image of B under f⁻¹.
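For finite sets these operations can be computed directly. The following Python sketch (with helper names of our own choosing) mirrors the definitions and the x² example above:

def image(f, S):
    # Image of S under f: the set of all values f(s) for s in S.
    return {f(s) for s in S}

def preimage(f, domain, B):
    # Preimage of B under f, found by scanning an explicit finite domain.
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
domain = range(-3, 4)
print(image(square, domain))          # {0, 1, 4, 9}
print(preimage(square, domain, {4}))  # {2, -2}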
Notation for image and inverse image
The traditional notations used in the previous section do not distinguish the original function from the image-of-sets function ; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative is to give explicit na
|
https://en.wikipedia.org/wiki/Curve%20of%20constant%20width
|
In geometry, a curve of constant width is a simple closed curve in the plane whose width (the distance between parallel supporting lines) is the same in all directions. The shape bounded by a curve of constant width is a body of constant width or an orbiform, the name given to these shapes by Leonhard Euler. Standard examples are the circle and the Reuleaux triangle. These curves can also be constructed using circular arcs centered at crossings of an arrangement of lines, as the involutes of certain curves, or by intersecting circles centered on a partial curve.
Every body of constant width is a convex set, its boundary crossed at most twice by any line, and if the line crosses perpendicularly it does so at both crossings, separated by the width. By Barbier's theorem, the body's perimeter is exactly π times its width, but its area depends on its shape, with the Reuleaux triangle having the smallest possible area for its width and the circle the largest. Every superset of a body of constant width includes pairs of points that are farther apart than the width, and every curve of constant width includes at least six points of extreme curvature. Although the Reuleaux triangle is not smooth, curves of constant width can always be approximated arbitrarily closely by smooth curves of the same constant width.
Cylinders with constant-width cross-section can be used as rollers to support a level surface. Another application of curves of constant width is for coinage shapes, where regular Reuleaux polygons are a common choice. The possibility that curves other than circles can have constant width makes it more complicated to check the roundness of an object.
Curves of constant width have been generalized in several ways to higher dimensions and to non-Euclidean geometry.
Definitions
Width, and constant width, are defined in terms of the supporting lines of curves; these are lines that touch a curve without crossing it.
Every compact curve in the plane has two supporting lines in any given direction, with the curve sandwiched between them. The Euclidean distance between these two lines is the width of the curve in that direction, and a curve has constant width if this distance is the same for all directions of lines. The width of a bounded convex set can be defined in the same way as for curves, by the distance between pairs of parallel lines that touch the set without crossing it, and a convex set is a body of constant width when this distance is nonzero and does not depend on the direction of the lines. Every body of constant width has a curve of constant width as its boundary, and every curve of constant width has a body of constant width as its convex hull.
Another equivalent way to define the width of a compact curve or of a convex set is by looking at its orthogonal projection onto a line. In both cases, the projection is a line segment, whose length equals the distance between support lines that are perpendicular to the line. So, a curve or a conv
|
https://en.wikipedia.org/wiki/Barbier%27s%20theorem
|
In geometry, Barbier's theorem states that every curve of constant width has perimeter π times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860.
Examples
The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width w has perimeter πw. A Reuleaux triangle of width w consists of three arcs of circles of radius w. Each of these arcs has central angle π/3, so the perimeter of the Reuleaux triangle of width w is equal to half the perimeter of a circle of radius w and therefore is equal to πw. A similar analysis of other simple examples such as Reuleaux polygons gives the same answer.
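For the Reuleaux triangle the computation can be written out in one line (our LaTeX rendering of the arithmetic just described):

\[
\underbrace{3}_{\text{arcs}} \times \underbrace{w\,\frac{\pi}{3}}_{\text{arc length}} = \pi w .
\]

Each arc has radius w and central angle π/3, so its length is wπ/3; three of them give total perimeter πw, as Barbier's theorem requires.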
Proofs
One proof of the theorem uses the properties of Minkowski sums. If K is a body of constant width w, then the Minkowski sum of K and its 180° rotation is a disk with radius w and perimeter 2πw. However, the Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of K must be half the perimeter of this disk, which is πw as the theorem states.
Alternatively, the theorem follows immediately from the Crofton formula in integral geometry according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem.
An elementary probabilistic proof of the theorem can be found at Buffon's noodle.
Higher dimensions
The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area 4π, while the surface of revolution of a Reuleaux triangle with the same constant width has a strictly smaller surface area.
Instead, Barbier's theorem generalizes to bodies of constant brightness, three-dimensional convex sets for which every two-dimensional projection has the same area. These all have the same surface area as a sphere of the same projected area.
And in general, if K is a convex subset of R^n, for which every (n−1)-dimensional projection has the area of the unit ball in R^{n−1}, then the surface area of K is equal to that of the unit sphere in R^n. This follows from the general form of the Crofton formula.
See also
Blaschke–Lebesgue theorem and isoperimetric inequality, bounding the areas of curves of constant width
References
Theorems in plane geometry
Pi
Length
Constant width
|
https://en.wikipedia.org/wiki/Hopf%20fibration
|
In the mathematical field of differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct point of the 2-sphere is mapped from a distinct great circle of the 3-sphere. Thus the 3-sphere is composed of fibers, where each fiber is a circle, one for each point of the 2-sphere.
This fiber bundle structure is denoted
S^1 ↪ S^3 → S^2,
meaning that the fiber space S^1 (a circle) is embedded in the total space S^3 (the 3-sphere), and p : S^3 → S^2 (Hopf's map) projects S^3 onto the base space S^2 (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However it is not a trivial fiber bundle, i.e., S^3 is not globally a product of S^2 and S^1 although locally it is indistinguishable from it.
This has many implications: for example the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group.
Stereographic projection of the Hopf fibration induces a remarkable structure on R^3, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R^3 is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometry). The loops are homeomorphic to circles, although they are not geometric circles.
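In complex coordinates (z0, z1) with |z0|^2 + |z1|^2 = 1, one standard formula for the Hopf map is p(z0, z1) = (2 Re(z0·conj(z1)), 2 Im(z0·conj(z1)), |z0|^2 − |z1|^2); conventions differ between authors. The Python sketch below (our own) checks numerically that p lands on the 2-sphere and is constant along each fiber circle:

import math, random

def hopf(z0, z1):
    # One standard form of the Hopf map p : S^3 -> S^2 (conventions vary).
    w = 2 * z0 * z1.conjugate()
    return (w.real, w.imag, abs(z0) ** 2 - abs(z1) ** 2)

# A random point on S^3, the unit sphere in C^2.
v = [random.gauss(0, 1) for _ in range(4)]
r = math.sqrt(sum(c * c for c in v))
z0, z1 = complex(v[0], v[1]) / r, complex(v[2], v[3]) / r

x, y, z = hopf(z0, z1)
print(math.sqrt(x * x + y * y + z * z))  # 1.0 up to rounding: the image lies on S^2

# Multiplying (z0, z1) by a unit complex number moves along the fiber circle
# without changing the image point on S^2.
t = 1.234
u = complex(math.cos(t), math.sin(t))
print(hopf(u * z0, u * z1))  # numerically equal to (x, y, z)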
There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space C^{n+1} fibers naturally over the complex projective space CP^n with circles as fibers, and there are also real, quaternionic, and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres:
S^0 ↪ S^1 → S^1,
S^1 ↪ S^3 → S^2,
S^3 ↪ S^7 → S^4,
S^7 ↪ S^15 → S^8.
By Adams's theorem such fibrations can occur only in these dimensions.
The Hopf fibration is important in twistor theory.
Definition and construction
For any natural number n, an n-dimensional sphere, or n-sphere, can be defined as the set of points in an -dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this
|
https://en.wikipedia.org/wiki/Implicit%20function%20theorem
|
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function.
More precisely, given a system of m equations f_i(x_1, ..., x_n, y_1, ..., y_m) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each y_i) at a point, the m variables y_i are differentiable functions of the x_j in some neighborhood of the point. As these functions can generally not be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem.
In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function.
History
Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables.
First example
If we define the function f(x, y) = x^2 + y^2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) : f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ±√(1 − x^2).
However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g1(x) = √(1 − x^2) for −1 < x < 1, then the graph of y = g1(x) provides the upper half of the circle. Similarly, if g2(x) = −√(1 − x^2), then the graph of y = g2(x) gives the lower half of the circle.
The purpose of the implicit function theorem is to tell us that functions like g1(x) and g2(x) almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that g1(x) and g2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y).
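Concretely, one can check numerically that the slope of g1 agrees with the formula dy/dx = −f_x/f_y supplied by the implicit function theorem wherever f_y ≠ 0 (a Python sketch of our own, not from the article):

import math

def g1(x):
    # Explicit local solution of f(x, y) = x^2 + y^2 - 1 = 0 (upper half circle).
    return math.sqrt(1 - x * x)

def implicit_slope(x, y):
    # Slope predicted by the implicit function theorem: dy/dx = -f_x / f_y,
    # valid wherever f_y = 2y is nonzero.
    return -(2 * x) / (2 * y)

x = 0.6
y = g1(x)
h = 1e-6
numeric = (g1(x + h) - g1(x - h)) / (2 * h)  # central-difference derivative of g1
print(numeric, implicit_slope(x, y))         # both approximately -0.75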
Definitions
Let f : R^{n+m} → R^m be a continuously differentiable function. We think of R^{n+m} as the Cartesian product R^n × R^m, and we write a point of this product as (x, y) = (x_1, ..., x_n, y_1, ..., y_m). Starting from the given function f, our goal is to construct a function g : R^n → R^m whose graph (x, g(x)) is precisely the set of all (x, y) such that f(x, y) = 0.
As noted above, this may not always be possible. We will therefore fix a point (a, b) = (a_1, ..., a_n, b_1, ..., b_m) which satisfies f(a, b) = 0, and we will ask for a g that works near the point (a, b). In other words, we want an open set U of R^n containing a, an open set V of R^m containing b, and a function g : U → V such that the graph of g satisfies the relation f = 0 on U × V, and that no other points within U × V do so. In symbols,
{(x, g(x)) : x ∈ U} = {(x, y) ∈ U × V : f(x, y) = 0}.
To state the implicit function theorem, we need the Jacobian matrix of f, which is the matrix of the partial derivatives of f. Abbreviating (a_1, ..., a_n, b_1, ..., b_m) to (a, b), the Jacobian matrix is
(Df)(a, b) = [X | Y],
where X is the matrix of partial derivatives in the variables x_i and Y is the matrix of partial derivatives in the variables y_j.
|
https://en.wikipedia.org/wiki/Plus%20construction
|
In mathematics, the plus construction is a method for simplifying the fundamental group of a space without changing its homology and cohomology groups.
Explicitly, if X is a based connected CW complex and P is a perfect normal subgroup of π_1(X), then a map f : X → Y is called a +-construction relative to P if f induces an isomorphism on homology, and P is the kernel of π_1(X) → π_1(Y).
The plus construction was introduced by Michel Kervaire, and was used by Daniel Quillen to define algebraic K-theory. Given a perfect normal subgroup of the fundamental group of a connected CW complex X, attach two-cells along loops in X whose images in the fundamental group generate the subgroup. This operation generally changes the homology of the space, but these changes can be reversed by the addition of three-cells.
The most common application of the plus construction is in algebraic K-theory. If R is a unital ring, we denote by GL_n(R) the group of invertible n-by-n matrices with elements in R. GL_n(R) embeds in GL_{n+1}(R) by attaching a 1 along the diagonal and 0s elsewhere. The direct limit of these groups via these maps is denoted GL(R), and its classifying space is denoted BGL(R). The plus construction may then be applied to the perfect normal subgroup E(R) of GL(R) = π_1(BGL(R)), generated by matrices which only differ from the identity matrix in one off-diagonal entry. For n > 0, the n-th homotopy group of the resulting space, BGL(R)^+, is isomorphic to the n-th K-group of R, that is,
π_n(BGL(R)^+) ≅ K_n(R).
See also
Semi-s-cobordism
References
External links
Algebraic topology
Homotopy theory
|
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao%20bound
|
In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance.
An unbiased estimator that achieves this bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information.
The Cramér–Rao bound can also be used to bound the variance of estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias.
Statement
The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section.
Scalar unbiased case
Suppose θ is an unknown deterministic parameter that is to be estimated from n independent observations (measurements) of x, each from a distribution according to some probability density function f(x; θ). The variance of any unbiased estimator θ̂ of θ is then bounded by the reciprocal of the Fisher information I(θ):
var(θ̂) ≥ 1 / I(θ),
where the Fisher information I(θ) is defined by
I(θ) = n E[(∂ log f(x; θ) / ∂θ)^2],
and log f(x; θ) is the natural logarithm of the likelihood function for a single sample x, and E denotes the expected value with respect to the density f(x; θ) of x. If not indicated, in what follows, the expectation is taken with respect to x.
If log f(x; θ) is twice differentiable and certain regularity conditions hold, then the Fisher information can also be defined as follows:
I(θ) = −n E[∂^2 log f(x; θ) / ∂θ^2].
The efficiency of an unbiased estimator θ̂ measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as
e(θ̂) = (1 / I(θ)) / var(θ̂),
or the minimum possible variance for an unbiased estimator divided by its actual variance.
The Cramér–Rao lower bound thus gives
e(θ̂) ≤ 1.
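As a sanity check, a short simulation (our own example, using the Gaussian location model; the variable names are illustrative) shows the sample mean attaining the bound. For n observations from N(θ, σ^2) with known σ, the Fisher information is I(θ) = n/σ^2, so the bound is σ^2/n:

import random, statistics

theta, sigma, n, trials = 3.0, 2.0, 25, 20000
estimates = []
for _ in range(trials):
    sample = [random.gauss(theta, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)  # the sample mean is unbiased for theta

print(statistics.variance(estimates))  # approximately 0.16
print(sigma ** 2 / n)                  # Cramer-Rao bound: 0.16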
General scalar case
A more general form of the bound can be obtained by considering a biased estimator T(X), whose expectation is not θ but a function of this parameter, say, ψ(θ). Hence E[T(X)] − θ = ψ(θ) − θ is not generally equal to 0. In this case, the bound is given by
var(T) ≥ [ψ′(θ)]^2 / I(θ),
where ψ′(θ) is the derivative of ψ(θ) (by θ), and I(θ) is the Fisher information defined above.
Bound on the variance of biased estimators
|
https://en.wikipedia.org/wiki/Isogonal%20figure
|
In geometry, a polytope (e.g. a polygon or polyhedron) or a tiling is isogonal or vertex-transitive if all its vertices are equivalent under the symmetries of the figure. This implies that each vertex is surrounded by the same kinds of face in the same or reverse order, and with the same angles between corresponding faces.
Technically, one says that for any two vertices there exists a symmetry of the polytope mapping the first isometrically onto the second. Other ways of saying this are that the group of automorphisms of the polytope acts transitively on its vertices, or that the vertices lie within a single symmetry orbit.
All vertices of a finite n-dimensional isogonal figure exist on an (n−1)-sphere.
The term isogonal has long been used for polyhedra. Vertex-transitive is a synonym borrowed from modern ideas such as symmetry groups and graph theory.
The pseudorhombicuboctahedron, which is not isogonal, demonstrates that simply asserting that "all vertices look the same" is not as restrictive as the definition used here, which involves the group of isometries preserving the polyhedron or tiling.
Isogonal polygons and apeirogons
All regular polygons, apeirogons and regular star polygons are isogonal. The dual of an isogonal polygon is an isotoxal polygon.
Some even-sided polygons and apeirogons which alternate two edge lengths, for example a rectangle, are isogonal.
All planar isogonal 2n-gons have dihedral symmetry (Dn, n = 2, 3, ...) with reflection lines across the mid-edge points.
Isogonal polyhedra and 2D tilings
An isogonal polyhedron and 2D tiling has a single kind of vertex. An isogonal polyhedron with all regular faces is also a uniform polyhedron and can be represented by a vertex configuration notation sequencing the faces around each vertex. Geometrically distorted variations of uniform polyhedra and tilings can also be given the vertex configuration.
Isogonal polyhedra and 2D tilings may be further classified:
Regular if it is also isohedral (face-transitive) and isotoxal (edge-transitive); this implies that every face is the same kind of regular polygon.
Quasi-regular if it is also isotoxal (edge-transitive) but not isohedral (face-transitive).
Semi-regular if every face is a regular polygon but it is not isohedral (face-transitive) or isotoxal (edge-transitive). (Definition varies among authors; e.g. some exclude solids with dihedral symmetry, or nonconvex solids.)
Uniform if every face is a regular polygon, i.e. it is regular, quasiregular or semi-regular.
Semi-uniform if its elements are also isogonal.
Scaliform if all the edges are the same length.
Noble if it is also isohedral (face-transitive).
N dimensions: Isogonal polytopes and tessellations
These definitions can be extended to higher-dimensional polytopes and tessellations. All uniform polytopes are isogonal, for example, the uniform 4-polytopes and convex uniform honeycombs.
The dual of an isogonal polytope is an isohedral figure, which is transitive on it
|
https://en.wikipedia.org/wiki/Kummer%20theory
|
In abstract algebra and number theory, Kummer theory provides a description of certain types of field extensions involving the adjunction of nth roots of elements of the base field. The theory was originally developed by Ernst Eduard Kummer around the 1840s in his pioneering work on Fermat's Last Theorem. The main statements do not depend on the nature of the field – apart from its characteristic, which should not divide the integer n – and therefore belong to abstract algebra. The theory of cyclic extensions of the field K when the characteristic of K does divide n is called Artin–Schreier theory.
Kummer theory is basic, for example, in class field theory and in general in understanding abelian extensions; it says that in the presence of enough roots of unity, cyclic extensions can be understood in terms of extracting roots. The main burden in class field theory is to dispense with extra roots of unity ('descending' back to smaller fields); which is something much more serious.
Kummer extensions
A Kummer extension is a field extension L/K, where for some given integer n > 1 we have
K contains n distinct nth roots of unity (i.e., roots of X^n − 1)
L/K has abelian Galois group of exponent n.
For example, when n = 2, the first condition is always true if K has characteristic ≠ 2. The Kummer extensions in this case include quadratic extensions L = K(√a), where a in K is a non-square element. By the usual solution of quadratic equations, any extension of degree 2 of K has this form. The Kummer extensions in this case also include biquadratic extensions and more general multiquadratic extensions. When K has characteristic 2, there are no such Kummer extensions.
Taking n = 3, there are no degree 3 Kummer extensions of the rational number field Q, since for three cube roots of 1 complex numbers are required. If one takes L to be the splitting field of X^3 − a over Q, where a is not a cube in the rational numbers, then L contains a subfield K with three cube roots of 1; that is because if α and β are roots of the cubic polynomial, we shall have (α/β)^3 = 1 and the cubic is a separable polynomial. Then L/K is a Kummer extension.
More generally, it is true that when K contains n distinct nth roots of unity, which implies that the characteristic of K doesn't divide n, then adjoining to K the nth root of any element a of K creates a Kummer extension (of degree m, for some m dividing n). As the splitting field of the polynomial X^n − a, the Kummer extension is necessarily Galois, with Galois group that is cyclic of order m. It is easy to track the Galois action via the root of unity in front of the nth root of a.
Kummer theory provides converse statements. When K contains n distinct nth roots of unity, it states that any abelian extension of K of exponent dividing n is formed by extraction of roots of elements of K. Further, if K^× denotes the multiplicative group of non-zero elements of K, abelian extensions of K of exponent n correspond bijectively with subgroups of
K^× / (K^×)^n,
that is, elements of K^× modulo nth powers.
|
https://en.wikipedia.org/wiki/Singularity%20theory
|
In mathematics, singularity theory studies spaces that are almost manifolds, but not quite. A string can serve as an example of a one-dimensional manifold, if one neglects its thickness. A singularity can be made by balling it up, dropping it on the floor, and flattening it. In some places the flat string will cross itself in an approximate "X" shape. The points on the floor where it does this are one kind of singularity, the double point: one bit of the floor corresponds to more than one bit of string. Perhaps the string will also touch itself without crossing, like an underlined "U". This is another kind of singularity. Unlike the double point, it is not stable, in the sense that a small push will lift the bottom of the "U" away from the "underline".
Vladimir Arnold defines the main goal of singularity theory as describing how objects depend on parameters, particularly in cases where the properties undergo sudden change under a small variation of the parameters. These situations are called perestroikas, bifurcations, or catastrophes. Classifying the types of changes and characterizing sets of parameters which give rise to these changes are some of the main mathematical goals. Singularities can occur in a wide range of mathematical objects, from matrices depending on parameters to wavefronts.
How singularities may arise
In singularity theory the general phenomenon of points and sets of singularities is studied, as part of the concept that manifolds (spaces without singularities) may acquire special, singular points by a number of routes. Projection is one way, very obvious in visual terms when three-dimensional objects are projected into two dimensions (for example in one of our eyes); in looking at classical statuary the folds of drapery are amongst the most obvious features. Singularities of this kind include caustics, very familiar as the light patterns at the bottom of a swimming pool.
Other ways in which singularities occur is by degeneration of manifold structure. The presence of symmetry can be good cause to consider orbifolds, which are manifolds that have acquired "corners" in a process of folding up, resembling the creasing of a table napkin.
Singularities in algebraic geometry
Algebraic curve singularities
Historically, singularities were first noticed in the study of algebraic curves. The double point at (0, 0) of the curve
y^2 = x^2 + x^3
and the cusp there of
y^2 = x^3
are qualitatively different, as is seen just by sketching. Isaac Newton carried out a detailed study of all cubic curves, the general family to which these examples belong. It was noticed in the formulation of Bézout's theorem that such singular points must be counted with multiplicity (2 for a double point, 3 for a cusp), in accounting for intersections of curves.
It was then a short step to define the general notion of a singular point of an algebraic variety; that is, to allow higher dimensions.
The general position of singularities in algebraic geometry
Such singularities in al
|
https://en.wikipedia.org/wiki/Real%20field
|
Real field may refer to:
Real numbers, the numbers that can be represented by infinite decimals
Formally real field, an algebraic field that has the so-called "real" property
Real closed field
Real quadratic field
|
https://en.wikipedia.org/wiki/Invertible%20sheaf
|
In mathematics, an invertible sheaf is a sheaf on a ringed space which has an inverse with respect to tensor product of sheaves of modules. It is the equivalent in algebraic geometry of the topological notion of a line bundle. Due to their interactions with Cartier divisors, they play a central role in the study of algebraic varieties.
Definition
Let (X, OX) be a ringed space. Isomorphism classes of sheaves of OX-modules form a monoid under the operation of tensor product of OX-modules. The identity element for this operation is OX itself. Invertible sheaves are the invertible elements of this monoid. Specifically, if L is a sheaf of OX-modules, then L is called invertible if it satisfies any of the following equivalent conditions:
There exists a sheaf M such that L ⊗ M ≅ OX.
The natural homomorphism L ⊗ L^∨ → OX is an isomorphism, where L^∨ denotes the dual sheaf Hom(L, OX).
The functor from OX-modules to OX-modules defined by M ↦ L ⊗ M is an equivalence of categories.
Every locally free sheaf of rank one is invertible. If X is a locally ringed space, then L is invertible if and only if it is locally free of rank one. Because of this fact, invertible sheaves are closely related to line bundles, to the point where the two are sometimes conflated.
Examples
Let X be an affine scheme Spec R. Then an invertible sheaf on X is the sheaf associated to a rank one projective module over R. For example, this includes fractional ideals of algebraic number fields, since these are rank one projective modules over the rings of integers of the number field.
The Picard group
Quite generally, the isomorphism classes of invertible sheaves on X themselves form an abelian group under tensor product. This group generalises the ideal class group. In general it is written
Pic(X)
with Pic the Picard functor. Since it also includes the theory of the Jacobian variety of an algebraic curve, the study of this functor is a major issue in algebraic geometry.
The direct construction of invertible sheaves by means of data on X leads to the concept of Cartier divisor.
See also
Vector bundles in algebraic geometry
Line bundle
First Chern class
Picard group
Birkhoff-Grothendieck theorem
References
Geometry of divisors
Sheaf theory
|
https://en.wikipedia.org/wiki/Pushout%20%28category%20theory%29
|
In category theory, a branch of mathematics, a pushout (also called a fibered coproduct or fibered sum or cocartesian square or amalgamated sum) is the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain. The pushout consists of an object P along with two morphisms X → P and Y → P that complete a commutative square with the two given morphisms f and g. In fact, the defining universal property of the pushout (given below) essentially says that the pushout is the "most general" way to complete this commutative square. Common notations for the pushout are X ⊔_Z Y and X +_Z Y.
The pushout is the categorical dual of the pullback.
Universal property
Explicitly, the pushout of the morphisms f and g consists of an object P and two morphisms i1 : X → P and i2 : Y → P such that the diagram
commutes and such that (P, i1, i2) is universal with respect to this diagram. That is, for any other such triple (Q, j1, j2) for which the following diagram commutes, there must exist a unique u : P → Q also making the diagram commute:
As with all universal constructions, the pushout, if it exists, is unique up to a unique isomorphism.
Examples of pushouts
Here are some examples of pushouts in familiar categories. Note that in each case, we are only providing a construction of an object in the isomorphism class of pushouts; as mentioned above, though there may be other ways to construct it, they are all equivalent.
Suppose that X, Y, and Z as above are sets, and that f : Z → X and g : Z → Y are set functions. The pushout of f and g is the disjoint union of X and Y, where elements sharing a common preimage (in Z) are identified, together with the morphisms i1, i2 from X and Y, i.e. P = (X ⊔ Y) / ~, where ~ is the finest equivalence relation such that f(z) ~ g(z) for all z in Z. In particular, if X and Y are subsets of some larger set W and Z is their intersection, with f and g the inclusion maps of Z into X and Y, then the pushout can be canonically identified with the union X ∪ Y.
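The set-theoretic construction just described is easy to carry out mechanically; here is a small Python sketch (class and function names are ours) that builds the quotient of the disjoint union with a union-find structure:

class UnionFind:
    # Minimal union-find, used to generate the equivalence relation ~.
    def __init__(self):
        self.parent = {}
    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def pushout(X, Y, Z, f, g):
    # Tag elements so the disjoint union keeps X and Y apart, then
    # identify f(z) ~ g(z) for every z in Z and return the equivalence classes.
    uf = UnionFind()
    for z in Z:
        uf.union(('X', f(z)), ('Y', g(z)))
    classes = {}
    for tag, S in (('X', X), ('Y', Y)):
        for s in S:
            classes.setdefault(uf.find((tag, s)), set()).add((tag, s))
    return list(classes.values())

# X = {1, 2} and Y = {2, 3} glued along their intersection Z = {2}:
# the pushout has three classes, matching the union {1, 2, 3}.
print(pushout({1, 2}, {2, 3}, {2}, lambda z: z, lambda z: z))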
A specific case of this is the cograph of a function. If f : X → Y is a function, then the cograph of f is the pushout of f along the identity function of X. In elementary terms, the cograph is the quotient of X ⊔ Y by the equivalence relation generated by identifying x with f(x). A function may be recovered by its cograph because each equivalence class in X ⊔ Y contains precisely one element of Y. Cographs are dual to graphs of functions, since the graph may be defined as the pullback of f along the identity of Y.
The construction of adjunction spaces is an example of pushouts in the category of topological spaces. More precisely, if Z is a subspace of Y and g : Z → Y is the inclusion map we can "glue" Y to another space X along Z using an "attaching map" f : Z → X. The result is the adjunction space X ∪_f Y, which is just the pushout of f and g. More generally, all identification spaces may be regarded as pushouts in this way.
A special case of the above is the wedge sum or one-point uni
|
https://en.wikipedia.org/wiki/Lefschetz%20fixed-point%20theorem
|
In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping from a compact topological space X to itself by means of traces of the induced mappings on the homology groups of X. It is named after Solomon Lefschetz, who first stated it in 1926.
The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without any fixed point must have rather special topological properties (like a rotation of a circle).
Formal statement
For a formal statement of the theorem, let
f : X → X
be a continuous map from a compact triangulable space X to itself. Define the Lefschetz number Λ_f of f by
Λ_f := Σ_{k≥0} (−1)^k tr(f_* | H_k(X, Q)),
the alternating (finite) sum of the matrix traces of the linear maps induced by f on H_k(X, Q), the singular homology groups of X with rational coefficients.
A simple version of the Lefschetz fixed-point theorem states: if
Λ_f ≠ 0,
then f has at least one fixed point, i.e., there exists at least one x in X such that f(x) = x. In fact, since the Lefschetz number has been defined at the homology level, the conclusion can be extended to say that any map homotopic to f has a fixed point as well.
Note however that the converse is not true in general: Λ_f may be zero even if f has fixed points, as is the case for the identity map on odd-dimensional spheres.
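A standard worked example (ours, not part of the article): for a map f : S^1 → S^1 of degree d, the induced map on H_0 is the identity and the induced map on H_1 is multiplication by d, so in LaTeX notation:

\[
\Lambda_f
= \operatorname{tr}\bigl(f_*|_{H_0(S^1,\mathbb{Q})}\bigr)
- \operatorname{tr}\bigl(f_*|_{H_1(S^1,\mathbb{Q})}\bigr)
= 1 - d .
\]

For d ≠ 1 this is nonzero, so every circle map of degree d ≠ 1 has a fixed point; for d = 1 (for example a rotation) it vanishes, which is consistent with rotations having no fixed points.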
Sketch of a proof
First, by applying the simplicial approximation theorem, one shows that if f has no fixed points, then (possibly after subdividing X) f is homotopic to a fixed-point-free simplicial map (i.e., it sends each simplex to a different simplex). This means that the diagonal values of the matrices of the linear maps induced on the simplicial chain complex of X must all be zero. Then one notes that, in general, the Lefschetz number can also be computed using the alternating sum of the matrix traces of the aforementioned linear maps (this is true for almost exactly the same reason that the Euler characteristic has a definition in terms of homology groups; see below for the relation to the Euler characteristic). In the particular case of a fixed-point-free simplicial map, all of the diagonal values are zero, and thus the traces are all zero.
Lefschetz–Hopf theorem
A stronger form of the theorem, also known as the Lefschetz–Hopf theorem, states that, if f has only finitely many fixed points, then
Λ_f = Σ_{x ∈ Fix(f)} i(f, x),
where Fix(f) is the set of fixed points of f, and i(f, x) denotes the index of the fixed point x. From this theorem one deduces the Poincaré–Hopf theorem for vector fields.
Relation to the Euler characteristic
The Lefschetz number of the identity map on a finite CW complex can be easily computed by realizing that each f_* can be thought of as an identity matrix, and so each trace term is simply the dimension of the appropriate homology group. Thus the Lefschetz number of the identity map is equal to the alternating sum of the Betti numbers of the space, which in turn is equal to the Euler characteristic χ(X). Thus we have
Λ_id = Σ_{k≥0} (−1)^k dim H_k(X, Q) = χ(X).
Relation to the Brouwer fixed-point theorem
|
https://en.wikipedia.org/wiki/Theory%20of%20equations
|
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory.
Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra".
History
Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve equations in terms of radicals, that is, to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots. This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations.
The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals followed by Niels Henrik Abel's complete proof in 1824 (now known as the Abel–Ruffini theorem). Évariste Galois later introduced a theory (presently called Galois theory) to decide which equations are solvable by radicals.
Further problems
Other classical problems of the theory of equations are the following:
Linear equations: this problem was solved during antiquity.
Simultaneous linear equations: The general theoretical solution was provided by Gabriel Cramer in 1750. However, devising efficient methods (algorithms) to solve these systems remains an active subject of research, now called linear algebra.
Finding the integer solutions of an equation or of a system of equations. These problems are now called Diophantine equations, which are considered a part of number theory (see also integer programming).
Systems of polynomial equations: Because of their difficulty, these systems, with few exceptions, have been studied
|
https://en.wikipedia.org/wiki/Simplicial%20approximation%20theorem
|
In mathematics, the simplicial approximation theorem is a foundational result for algebraic topology, guaranteeing that continuous mappings can be (by a slight deformation) approximated by ones that are piecewise of the simplest kind. It applies to mappings between spaces that are built up from simplices—that is, finite simplicial complexes. The general continuous mapping between such spaces can be represented approximately by the type of mapping that is (affine-) linear on each simplex into another simplex, at the cost (i) of sufficient barycentric subdivision of the simplices of the domain, and (ii) replacement of the actual mapping by a homotopic one.
This theorem was first proved by L.E.J. Brouwer, by use of the Lebesgue covering theorem (a result based on compactness). It served to put the homology theory of the time—the first decade of the twentieth century—on a rigorous basis, since it showed that the topological effect (on homology groups) of continuous mappings could in a given case be expressed in a finitary way. This must be seen against the background of a realisation at the time that continuity was in general compatible with the pathological, in some other areas. This initiated, one could say, the era of combinatorial topology.
There is a further simplicial approximation theorem for homotopies, stating that a homotopy between continuous mappings can likewise be approximated by a combinatorial version.
Formal statement of the theorem
Let $K$ and $L$ be two simplicial complexes. A simplicial mapping $f\colon K \to L$ is called a simplicial approximation of a continuous function $F\colon |K| \to |L|$ if for every point $x \in |K|$, $|f|(x)$ belongs to the minimal closed simplex of $L$ containing the point $F(x)$. If $f$ is a simplicial approximation to a continuous map $F$, then the geometric realization of $f$, $|f|$, is necessarily homotopic to $F$.
The simplicial approximation theorem states that given any continuous map $F\colon |K| \to |L|$ there exists a natural number $n_0$ such that for all $n \ge n_0$ there exists a simplicial approximation $f\colon K^{(n)} \to L$ to $F$ (where $K^{(1)}$ denotes the barycentric subdivision of $K$, and $K^{(n)}$ denotes the result of applying barycentric subdivision $n$ times). In other words, if $K$ and $L$ are simplicial complexes and $F\colon |K| \to |L|$ is a continuous function, then there is a subdivision $K'$ of $K$ and a simplicial map $f\colon K' \to L$ whose realization is homotopic to $F$. Moreover, if $\varepsilon\colon |K| \to (0,\infty)$ is a positive continuous map, then there are subdivisions $K', L'$ of $K, L$ and a simplicial map $f\colon K' \to L'$ such that $|f|$ is $\varepsilon$-homotopic to $F$; that is, there is a homotopy from $|f|$ to $F$ that moves each point $x$ within a set of diameter less than $\varepsilon(x)$. So, we may consider the simplicial approximation theorem as a piecewise-linear analog of the Whitney approximation theorem.
References
Theory of continuous functions
Simplicial sets
Theorems in algebraic topology
|
https://en.wikipedia.org/wiki/Barycentric%20subdivision
|
In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension on simplicial complexes is a canonical method to refine them. Therefore, the barycentric subdivision is an important tool in algebraic topology.
Motivation
The barycentric subdivision is an operation on simplicial complexes. In algebraic topology it is sometimes useful to replace the original spaces with simplicial complexes via triangulations: the substitution allows one to assign combinatorial invariants, such as the Euler characteristic, to the spaces. One can ask if there is an analogous way to replace the continuous functions defined on the topological spaces by functions that are linear on the simplices and which are homotopic to the original maps (see also simplicial approximation). In general, such an assignment requires a refinement of the given complex, meaning that one replaces bigger simplices by a union of smaller simplices. A standard way to effectuate such a refinement is the barycentric subdivision. Moreover, barycentric subdivision induces maps on homology groups and is helpful for computational concerns; see excision and the Mayer–Vietoris sequence.
Definition
Subdivision of simplicial complexes
Let $\mathcal{K}$ be a geometric simplicial complex. A complex $\mathcal{K}'$ is said to be a subdivision of $\mathcal{K}$ if
each simplex of $\mathcal{K}'$ is contained in a simplex of $\mathcal{K}$, and
each simplex of $\mathcal{K}$ is a finite union of simplices of $\mathcal{K}'$.
These conditions imply that $\mathcal{K}$ and $\mathcal{K}'$ are equal as sets and as topological spaces; only the simplicial structure changes.
Barycentric subdivision of a simplex
For a simplex $\sigma$ spanned by points $p_0, \ldots, p_n$, the barycenter is defined to be the point $b_\sigma = \frac{1}{n+1}\sum_{i=0}^{n} p_i$. To define the subdivision, we will consider a simplex as a simplicial complex that contains only one simplex of maximal dimension, namely the simplex itself. The barycentric subdivision of a simplex can be defined inductively by its dimension.
For points, i.e. simplices of dimension 0, the barycentric subdivision is defined as the point itself.
Suppose then, for a simplex $\sigma$ of dimension $n$, that its faces of dimension $n-1$ are already divided. Therefore, there exist simplices $L_1, L_2, \ldots, L_k$ covering the boundary $\partial\sigma$. The barycentric subdivision of $\sigma$ is then defined to be the geometric simplicial complex whose maximal simplices of dimension $n$ are each the convex hull of $L_i \cup \{b_\sigma\}$ for one of the simplices $L_i$, so there will be $k$ simplices of dimension $n$ covering $\sigma$.
One can generalize the subdivision for simplicial complexes whose simplices are not all contained in a single simplex of maximal dimension, i.e. simplicial complexes that do not correspond geometrically to one simplex. This can be done by effectuating the steps described above simultaneously for every simplex of maximal dimension. The induction will then be based on the $n$-th skeleton of the simplicial complex. It allows effectuating the subdivision more than once.
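Combinatorially, the simplices of the barycentric subdivision correspond to chains of faces of the original complex ordered by strict inclusion, each face standing for its barycenter. The following Python sketch (an added illustration, not part of the standard presentation) enumerates these chains for a complex given by its maximal simplices:

```python
from itertools import combinations

def faces(simplex):
    """All non-empty faces of a simplex, given as a set of vertices."""
    return [frozenset(c) for k in range(1, len(simplex) + 1)
            for c in combinations(simplex, k)]

def barycentric_subdivision(maximal_simplices):
    """Simplices of the subdivision = chains of faces under strict inclusion;
    a chain f0 < f1 < ... < fk spans one k-simplex (each fi marks its barycenter)."""
    all_faces = {f for s in maximal_simplices for f in faces(s)}
    chains = {frozenset({f}) for f in all_faces}
    frontier = list(chains)
    while frontier:
        new = []
        for chain in frontier:
            top = max(chain, key=len)
            for f in all_faces:
                if top < f:                    # extend the chain at its top face
                    bigger = chain | {f}
                    if bigger not in chains:
                        chains.add(bigger)
                        new.append(bigger)
        frontier = new
    return chains

# A single 2-simplex subdivides into 3! = 6 triangles (chains vertex < edge < face).
sd = barycentric_subdivision([frozenset({0, 1, 2})])
print(sum(1 for c in sd if len(c) == 3))  # -> 6
```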
Barycentric subdivision of a convex polytope
The operation of barycentric subdivision can be applied to any convex polytope of any dimension, producing another con
|
https://en.wikipedia.org/wiki/Tarski%27s%20undefinability%20theorem
|
Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic".
The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system.
History
In 1931, Kurt Gödel published the incompleteness theorems, which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic. Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering, coding and, more generally, as arithmetization. In particular, various sets of expressions are coded as sets of numbers. For various syntactic properties (such as being a formula, being a sentence, etc.), these sets are computable. Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences.
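As a toy illustration of such an arithmetization (the symbol codes below are an arbitrary assumption for this example, not the assignment Gödel actually used), each string can be mapped to a product of prime powers, which is injective by unique factorization:

```python
from sympy import prime  # prime(k) returns the k-th prime number

# Hypothetical symbol codes, chosen only for this illustration.
CODE = {'0': 1, 'S': 2, '+': 3, '=': 4, '(': 5, ')': 6}

def godel_number(expression):
    """Encode a string: the k-th symbol s contributes a factor prime(k)**CODE[s]."""
    n = 1
    for k, s in enumerate(expression, start=1):
        n *= prime(k) ** CODE[s]
    return n

print(godel_number('S0=S0'))  # 2**2 * 3**1 * 5**4 * 7**2 * 11**1 = 4042500
```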
The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language (for example, a predicate definable in Zermelo–Fraenkel set theory expressing whether formulas in the language of Peano arithmetic are true in the standard model of arithmetic) must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language.
The undefinability theorem is conventionally attributed to Alfred Tarski. Gödel also discovered the undefinability theorem in 1930, while proving his incompleteness theorems published in 1931, and well before the 1933 publication of Tarski's work (Murawski 1998). While Gödel never published anything bearing on his independent discovery of undefinability, he did describe it in a 1931 letter to John von Neumann. Tarski had obtained almost all results of his 1933 monograph "The Concept of Truth in the Languages of the Deductive Sciences" between 1929 and 1931, and spoke about them to Polish audiences. However, as he emphasized in the paper, the undefinability theorem was the only result he did not obtain earlier. According to the footnote to the undefinability theorem (Twierdzenie I) of the 1933 monograph, the theorem and the sketch of the proof were added to the monograph only after the manuscript had been sent to the printer in 1931. Tarski reports there that, when he presented the content of his monograph to the Warsaw Academy of Scie
|
https://en.wikipedia.org/wiki/Topological%20abelian%20group
|
In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group.
That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative.
The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis.
See also
Protorus, a topological abelian group that is compact and connected
Abelian group theory
Topology
Topological groups
|
https://en.wikipedia.org/wiki/Octahedral%20number
|
In number theory, an octahedral number is a figurate number that represents the number of spheres in an octahedron formed from close-packed spheres. The $n$th octahedral number $O_n$ can be obtained by the formula:
$$O_n = \frac{n(2n^2 + 1)}{3}.$$
The first few octahedral numbers are:
1, 6, 19, 44, 85, 146, 231, 344, 489, 670, 891, ...
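A short Python sketch (an added illustration, not part of the article) checks the closed formula against these values and against the square-pyramid identity discussed below:

```python
def octahedral(n):
    """n-th octahedral number O(n) = n(2n^2 + 1)/3 (always an integer)."""
    return n * (2 * n * n + 1) // 3

def square_pyramidal(n):
    """n-th square pyramidal number P(n) = n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

assert [octahedral(n) for n in range(1, 9)] == [1, 6, 19, 44, 85, 146, 231, 344]
# O(n) = P(n-1) + P(n): an octahedron splits into two square pyramids.
assert all(octahedral(n) == square_pyramidal(n - 1) + square_pyramidal(n)
           for n in range(1, 100))
```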
Properties and applications
The octahedral numbers have the generating function
$$\frac{x(x+1)^2}{(1-x)^4} = \sum_{n=1}^{\infty} O_n x^n = x + 6x^2 + 19x^3 + \cdots.$$
Sir Frederick Pollock conjectured in 1850 that every positive integer is the sum of at most 7 octahedral numbers. This statement, the Pollock octahedral numbers conjecture, has been proven true for all but finitely many numbers.
In chemistry, octahedral numbers may be used to describe the numbers of atoms in octahedral clusters; in this context they are called magic numbers.
Relation to other figurate numbers
Square pyramids
An octahedral packing of spheres may be partitioned into two square pyramids, one upside-down underneath the other, by splitting it along a square cross-section. Therefore, the $n$th octahedral number can be obtained by adding two consecutive square pyramidal numbers together:
$$O_n = P_{n-1} + P_n,$$
where $P_n = \frac{n(n+1)(2n+1)}{6}$ is the $n$th square pyramidal number.
Tetrahedra
If $O_n$ is the $n$th octahedral number and $T_n = \frac{n(n+1)(n+2)}{6}$ is the $n$th tetrahedral number, then
$$O_n + 4T_{n-1} = T_{2n-1}.$$
This represents the geometric fact that gluing a tetrahedron onto each of four non-adjacent faces of an octahedron produces a tetrahedron of twice the size.
Another relation between octahedral numbers and tetrahedral numbers is also possible, based on the fact that an octahedron may be divided into four tetrahedra each having two adjacent original faces (or alternatively, based on the fact that each square pyramidal number is the sum of two tetrahedral numbers):
$$O_n = T_n + 2T_{n-1} + T_{n-2}.$$
Cubes
If two tetrahedra are attached to opposite faces of an octahedron, the result is a rhombohedron. The number of close-packed spheres in the rhombohedron is a cube, justifying the equation
$$O_n + 2T_{n-1} = n^3.$$
Centered squares
The difference between two consecutive octahedral numbers is a centered square number:
$$O_n - O_{n-1} = n^2 + (n-1)^2.$$
Therefore, an octahedral number also represents the number of points in a square pyramid formed by stacking centered squares; for this reason, in his book Arithmeticorum libri duo (1575), Francesco Maurolico called these numbers "pyramides quadratae secundae".
The number of cubes in an octahedron formed by stacking centered squares is a centered octahedral number, the sum of two consecutive octahedral numbers. These numbers are
1, 7, 25, 63, 129, 231, 377, 575, 833, 1159, 1561, 2047, 2625, ...
given by the formula
$$\frac{(2n-1)(2n^2 - 2n + 3)}{3}$$
for n = 1, 2, 3, ...
History
The first study of octahedral numbers appears to have been by René Descartes, around 1630, in his De solidorum elementis. Prior to Descartes, figurate numbers had been studied by the ancient Greeks and by Johann Faulhaber, but only for polygonal numbers, pyramidal numbers, and cubes. Descartes introduced the study of figurate numbers based on the Platonic solids and some of the semiregular polyhedra; his work included the octahedral numbers. However, De solidorum elementis was lost, and not rediscovered until 1
|
https://en.wikipedia.org/wiki/Scaling
|
Scaling may refer to:
Science and technology
Mathematics and physics
Scaling (geometry), a linear transformation that enlarges or diminishes objects
Scale invariance, a feature of objects or laws that do not change if scales of length, energy, or other variables are multiplied by a common factor
Scaling law, a law that describes the scale invariance found in many natural phenomena
The scaling of critical exponents in physics, such as Widom scaling, or scaling of the renormalization group
Computing and information technology
Feature scaling, a method used to standardize the range of independent variables or features of data
Image scaling, the resizing of an image
Multidimensional scaling, a means of visualizing the level of similarity of individual cases of a dataset
Scalability, a computer or network's ability to function as the amount of data or number of users increases
Scaling along the Z axis, a technique used in computer graphics for a pseudo-3D effect
Reduced scales of semiconductor device fabrication processes (the ability of a technology to scale to a smaller process)
Other uses in science and technology
Tooth scaling, in dentistry, the removal of plaque and calculus
Fouling, i.e., the formation of a deposit layer (scale) on a solid surface, e.g., in a boiler; in particular, a kind of microfouling consisting of the crystallization of salts
Scaling rock, the removal of loose rock from a rock wall after blasting
Scaling of innovations, a process that leads to widespread use of an innovation
Other uses
Scaling, North Yorkshire, England
Climbing
Card throwing, known in magic circles as scaling
Scaling fish, the removal of fish scales from the fish
See also
Scale (disambiguation)
Scaling function (disambiguation)
Homogeneous function, used for scaling extensive properties in thermodynamic equations
|
https://en.wikipedia.org/wiki/Closed-form%20expression
|
In mathematics, an expression is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (addition, subtraction, multiplication, division, and integer powers) and function composition. Commonly, the allowed functions are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context.
The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object, that is, an expression of this object in terms of previous ways of specifying it.
Example: roots of polynomials
The quadratic formula
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
is a closed form of the solutions to the general quadratic equation $ax^2 + bx + c = 0$ with $a \neq 0$.
More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only $n$th roots and field operations ($+$, $-$, $\times$, $/$). In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions.
There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). However, they are rarely written explicitly because they are too complicated to be useful.
In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and thus have no closed forms. The simplest example is the equation $x^5 - x - 1 = 0$. Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals.
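The contrast between low and high degrees is easy to see in a computer algebra system; the sketch below uses the sympy library (an added illustration, assuming the library is available):

```python
from sympy import symbols, solve

x = symbols('x')
# Degree 2: solutions come back as closed-form expressions in radicals.
print(solve(x**2 - x - 1, x))    # [1/2 - sqrt(5)/2, 1/2 + sqrt(5)/2]
# Degree 5: no expression in radicals exists, so sympy can only return
# implicit CRootOf objects naming the roots of the polynomial.
print(solve(x**5 - x - 1, x))    # [CRootOf(x**5 - x - 1, 0), ...]
```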
Symbolic integration
Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions.
The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative.
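A small illustration with sympy (again an added sketch, assuming the library is available), anticipating the rational-function case discussed next: the antiderivatives stay elementary, but logarithms and arctangents appear.

```python
from sympy import symbols, integrate

x = symbols('x')
print(integrate(1 / (x**2 - 1), x))  # log(x - 1)/2 - log(x + 1)/2
print(integrate(1 / (x**2 + 1), x))  # atan(x)
```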
For rational functions; that is, for fractions of two polynomial functions; antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula
which is valid if and
|
https://en.wikipedia.org/wiki/Perfect%20group
|
In mathematics, more specifically in group theory, a group is said to be perfect if it equals its own commutator subgroup, or equivalently, if the group has no non-trivial abelian quotients (equivalently, its abelianization, which is the universal abelian quotient, is trivial). In symbols, a perfect group is one such that $G^{(1)} = G$ (the commutator subgroup equals the group), or equivalently one such that $G^{\mathrm{ab}} = \{1\}$ (its abelianization is trivial).
Examples
The smallest (non-trivial) perfect group is the alternating group A5. More generally, any non-abelian simple group is perfect since the commutator subgroup is a normal subgroup with abelian quotient. Conversely, a perfect group need not be simple; for example, the special linear group over the field with 5 elements, SL(2,5) (or the binary icosahedral group, which is isomorphic to it) is perfect but not simple (it has a non-trivial center containing $-I$).
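For small permutation groups the definition can be checked by computing the derived subgroup directly; the following sketch uses sympy's computational group theory (an added illustration, assuming sympy is installed):

```python
from sympy.combinatorics.named_groups import AlternatingGroup, SymmetricGroup

def is_perfect(G):
    """A group is perfect iff its derived (commutator) subgroup is the whole group."""
    return G.derived_subgroup().order() == G.order()

print(is_perfect(AlternatingGroup(5)))  # True: A5 is the smallest non-trivial perfect group
print(is_perfect(SymmetricGroup(5)))    # False: the sign map gives a Z/2 abelian quotient
```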
The direct product of any two simple non-abelian groups is perfect but not simple; the commutator of two elements is [(a,b),(c,d)] = ([a,c],[b,d]). Since commutators in each simple group form a generating set, pairs of commutators form a generating set of the direct product.
More generally, a quasisimple group (a perfect central extension of a simple group) that is a non-trivial extension (and therefore not a simple group itself) is perfect but not simple; this includes all the insoluble non-simple finite special linear groups SL(n,q) as extensions of the projective special linear group PSL(n,q) (SL(2,5) is an extension of PSL(2,5), which is isomorphic to A5). Similarly, the special linear group over the real and complex numbers is perfect, but the general linear group GL is never perfect (except when trivial or over $\mathbb{F}_2$, the field with two elements, where it equals the special linear group), as the determinant gives a non-trivial abelianization and indeed the commutator subgroup is SL.
A non-trivial perfect group, however, is necessarily not solvable, and 4 divides its order (if finite); moreover, if 8 does not divide the order, then 3 does.
Every acyclic group is perfect, but the converse is not true: A5 is perfect but not acyclic (in fact, not even superperfect). In fact, for $n \ge 5$ the alternating group $A_n$ is perfect but not superperfect, with $H_2(A_n, \mathbb{Z}) = \mathbb{Z}/2$ for $n \ge 8$.
Any quotient of a perfect group is perfect. A non-trivial finite perfect group that is not simple must then be an extension of at least one smaller simple non-abelian group. But it can be the extension of more than one simple group. In fact, the direct product of perfect groups is also perfect.
Every perfect group G determines another perfect group E (its universal central extension) together with a surjection f: E → G whose kernel is in the center of E,
such that f is universal with this property. The kernel of f is called the Schur multiplier of G because it was first studied by Issai Schur in 1904; it is isomorphic to the homology group $H_2(G, \mathbb{Z})$.
In the plus construction of algebraic K-theory, if we consider the group for a com
|
https://en.wikipedia.org/wiki/Homology%20sphere
|
In algebraic topology, a homology sphere is an n-manifold X having the homology groups of an n-sphere, for some integer $n \ge 1$. That is,
$$H_0(X; \mathbb{Z}) = \mathbb{Z} = H_n(X; \mathbb{Z})$$
and
$$H_i(X; \mathbb{Z}) = \{0\}$$
for all other i.
Therefore X is a connected space, with one non-zero higher Betti number, namely, $b_n = 1$. It does not follow that X is simply connected, only that its fundamental group is perfect (see Hurewicz theorem).
A rational homology sphere is defined similarly but using homology with rational coefficients.
Poincaré homology sphere
The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere, first constructed by Henri Poincaré. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. Since the fundamental group of the 3-sphere is trivial, this shows that there exist 3-manifolds with the same homology groups as the 3-sphere that are not homeomorphic to it.
Construction
A simple construction of this space begins with a dodecahedron. Each face of the dodecahedron is identified with its opposite face, using the minimal clockwise twist to line up the faces. Gluing each pair of opposite faces together using this identification yields a closed 3-manifold. (See Seifert–Weber space for a similar construction, using more "twist", that results in a hyperbolic 3-manifold.)
Alternatively, the Poincaré homology sphere can be constructed as the quotient space SO(3)/I where I is the icosahedral group (i.e., the rotational symmetry group of the regular icosahedron and dodecahedron, isomorphic to the alternating group A5). More intuitively, this means that the Poincaré homology sphere is the space of all geometrically distinguishable positions of an icosahedron (with fixed center and diameter) in Euclidean 3-space. One can also pass instead to the universal cover of SO(3), which can be realized as the group of unit quaternions and is homeomorphic to the 3-sphere. In this case, the Poincaré homology sphere is isomorphic to $S^3/\widetilde{I}$, where $\widetilde{I}$ is the binary icosahedral group, the perfect double cover of I embedded in $S^3$.
Another approach is by Dehn surgery. The Poincaré homology sphere results from +1 surgery on the right-handed trefoil knot.
Cosmology
In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft.
As of 2016, the publication of data analysis from the Planck spacecraft suggests that there is no observable non-trivial topology to the universe.
Constructions and examples
Surgery on a knot in the 3-sphere
|
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Sweden
|
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Sweden (SE), the three levels are: groups of national areas (NUTS 1), national areas (riksområden, NUTS 2) and counties (län, NUTS 3).
NUTS codes
SE SWEDEN (SVERIGE)
SE1 EAST SWEDEN (ÖSTRA SVERIGE)
SE11 Stockholm (Stockholm)
SE110 Stockholm County (Stockholms län)
SE12 East Middle Sweden (Östra Mellansverige)
SE121 Uppsala County (Uppsala län)
SE122 Södermanland County (Södermanlands län)
SE123 Östergötland County (Östergötlands län)
SE124 Örebro County (Örebro län)
SE125 Västmanland County (Västmanlands län)
SE2 SOUTH SWEDEN (SÖDRA SVERIGE)
SE21 Småland and the islands (Småland med öarna)
SE211 Jönköping County (Jönköpings län)
SE212 Kronoberg County (Kronobergs län)
SE213 Kalmar County (Kalmar län)
SE214 Gotland County (Gotlands län)
SE22 South Sweden (Sydsverige)
SE221 Blekinge County (Blekinge län)
SE224 Skåne County (Skåne län)
SE23 West Sweden (Västsverige)
SE231 Halland County (Hallands län)
SE232 Västra Götaland County (Västra Götalands län)
SE3 NORTH SWEDEN (NORRA SVERIGE)
SE31 North Middle Sweden (Norra Mellansverige)
SE311 Värmland County (Värmlands län)
SE312 Dalarna County (Dalarnas län)
SE313 Gävleborg County (Gävleborgs län)
SE32 Middle Norrland (Mellersta Norrland)
SE321 Västernorrland County (Västernorrlands län)
SE322 Jämtland County (Jämtlands län)
SE33 Upper Norrland (Övre Norrland)
SE331 Västerbotten County (Västerbottens län)
SE332 Norrbotten County (Norrbottens län)
NUTS codes prior to 31.12.2007
Prior to 31.12.2007, the codes were as follows:
The National Areas of Sweden are 8 second level subdivisions (NUTS-2) of Sweden, created by the European Union for statistical purposes.
The 8 riksområden (singular: riksområde) comprise the 21 counties of Sweden. Only Stockholm (SE01) corresponds exactly to the county of the same name.
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
NUTS 1 compared to Lands of Sweden
While similar, the NUTS 1 regions do not correspond to the lands of Sweden.
See also
List of Swedish regions by Human Development Index
Subdivisions of Sweden
ISO 3166-2 codes of Sweden
FIPS region codes of Sweden
External links
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
SVERIGE - NUTS level 2
SVERIGE - NUTS level 3
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Counties of Sweden, Statoids.com
Sweden
Nuts
|
https://en.wikipedia.org/wiki/Integer-valued%20polynomial
|
In mathematics, an integer-valued polynomial (also known as a numerical polynomial) is a polynomial whose value $P(n)$ is an integer for every integer n. Every polynomial with integer coefficients is integer-valued, but the converse is not true. For example, the polynomial
$$P(t) = \frac{1}{2}t(t+1)$$
takes on integer values whenever t is an integer. That is because one of t and $t+1$ must be an even number. (The values this polynomial takes are the triangular numbers.)
Integer-valued polynomials are objects of study in their own right in algebra, and frequently appear in algebraic topology.
Classification
The class of integer-valued polynomials was described fully by . Inside the polynomial ring of polynomials with rational number coefficients, the subring of integer-valued polynomials is a free abelian group. It has as basis the polynomials
$$\binom{t}{k} = \frac{t(t-1)\cdots(t-k+1)}{k!}$$
for $k = 0, 1, 2, \ldots$, i.e., the binomial coefficients. In other words, every integer-valued polynomial can be written as an integer linear combination of binomial coefficients in exactly one way. The proof is by the method of discrete Taylor series: binomial coefficients are integer-valued polynomials, and conversely, the discrete difference of an integer series is an integer series, so the discrete Taylor series of an integer series generated by a polynomial has integer coefficients (and is a finite series).
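The discrete Taylor series mentioned in the proof is computable by iterated forward differences; here is a minimal Python sketch (an added illustration):

```python
def binomial_basis_coefficients(values):
    """Given P(0), P(1), ..., P(d) for a polynomial P of degree d, return the
    coefficients c_k = (Delta^k P)(0), so that P(t) = sum_k c_k * binomial(t, k)."""
    coeffs, row = [], list(values)
    while row:
        coeffs.append(row[0])                        # k-th forward difference at 0
        row = [b - a for a, b in zip(row, row[1:])]  # next forward difference
    return coeffs

# P(t) = t(t+1)/2, the triangular numbers: rational coefficients, integer values.
print(binomial_basis_coefficients([0, 1, 3, 6]))  # [0, 1, 1, 0]: P = C(t,1) + C(t,2)
```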
Fixed prime divisors
Integer-valued polynomials may be used effectively to solve questions about fixed divisors of polynomials. For example, the polynomials P with integer coefficients that always take on even number values are just those such that $P/2$ is integer-valued. Those in turn are the polynomials that may be expressed as a linear combination with even integer coefficients of the binomial coefficients.
In questions of prime number theory, such as Schinzel's hypothesis H and the Bateman–Horn conjecture, it is a matter of basic importance to understand the case when P has no fixed prime divisor (this has been called Bunyakovsky's property, after Viktor Bunyakovsky). By writing P in terms of the binomial coefficients, we see the highest fixed prime divisor is also the highest prime common factor of the coefficients in such a representation. So Bunyakovsky's property is equivalent to coprime coefficients.
As an example, the pair of polynomials $n$ and $n^2 + 2$ violates this condition at $p = 3$: for every $n$ the product
$$n(n^2 + 2)$$
is divisible by 3, which follows from the representation
$$n(n^2 + 2) = 6\binom{n}{3} + 6\binom{n}{2} + 3\binom{n}{1}$$
with respect to the binomial basis, where the highest common factor of the coefficients—hence the highest fixed divisor of $n(n^2+2)$—is 3.
Other rings
Numerical polynomials can be defined over other rings and fields, in which case the integer-valued polynomials above are referred to as classical numerical polynomials.
Applications
The K-theory of BU(n) is numerical (symmetric) polynomials.
The Hilbert polynomial of a polynomial ring in k + 1 variables is the numerical polynomial $\binom{t+k}{k}$.
Algebra
Algebraic topology
Polynomials
Number theory
Commutative algebra
Ring theory
|
https://en.wikipedia.org/wiki/Invariant%20subspace
|
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.
For a single operator
Consider a vector space $V$ and a linear map $T\colon V \to V$. A subspace $W \subseteq V$ is called an invariant subspace for $T$, or equivalently, $T$-invariant, if $T$ transforms any vector of $W$ back into $W$. In formulas, this can be written as $T(W) \subseteq W$ or, equivalently, $Tv \in W$ for every $v \in W$.
In this case, $T$ restricts to an endomorphism $T|_W\colon W \to W$ of $W$.
The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator $T$ has the block upper-triangular form $[T]_B = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}$ for some blocks $A_{11}, A_{12}, A_{22}$, where $A_{11}$ represents the restriction $T|_W$.
Examples
Any linear map $T\colon V \to V$ admits the following invariant subspaces:
The vector space $V$ itself, because $T$ maps every vector in $V$ into $V$.
The set $\{0\}$, because $T(0) = 0$.
These are the trivial invariant subspaces. Certain linear operators have no non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace.
1-dimensional subspaces
If $W$ is a 1-dimensional invariant subspace for the operator $T$, spanned by a nonzero vector $v$, then the vectors $v$ and $Tv$ must be linearly dependent. Thus $Tv = \lambda v$ for some scalar $\lambda$. In fact, the scalar $\lambda$ does not depend on the choice of $v$.
The equation above formulates an eigenvalue problem. Any eigenvector for $T$ spans a 1-dimensional invariant subspace, and vice versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1.
As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator has a non-trivial invariant subspace.
Diagonalization via projections
Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically.
Write $V = W \oplus W'$ as a direct sum; a suitable complement $W'$ can always be chosen by extending a basis of $W$. With respect to this decomposition, the projection operator P onto W has matrix representation
$$P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}.$$
A straightforward calculation shows that W is -invariant if and only if PTP = TP.
If 1 is the identity operator, then $1 - P$ is the projection onto $W'$. The equation $PT = TP$ holds if and only if both ran P and ran(1 − P) are invariant under T. In that case, T has matrix representation
$$T = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix}.$$
Colloquially, a projection that commutes with T "diagonalizes" T.
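A minimal numerical check of the criterion PTP = TP, using numpy (an added sketch; the matrices are arbitrary examples):

```python
import numpy as np

# W = span{e1, e2}; T is block upper-triangular, so it maps W into W.
T = np.array([[1.0, 2.0, 5.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 2.0]])
P = np.diag([1.0, 1.0, 0.0])          # projection onto W along span{e3}
print(np.allclose(P @ T @ P, T @ P))  # True: W is T-invariant

T[2, 0] = 7.0                         # now T(e1) has a component outside W
print(np.allclose(P @ T @ P, T @ P))  # False: invariance fails
```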
Lattice of subspaces
As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T
|
https://en.wikipedia.org/wiki/List%20of%20NHL%20statistical%20leaders
|
Skaters
The statistics listed include the 2022–23 NHL regular season and 2023 playoffs.
All-time leaders (skaters)
Active skaters (during 2023–24 NHL season) are listed in boldface.
Regular season: Points
Regular season: Points per game
Minimum 500 points
Wayne Gretzky, 1.921
Mario Lemieux, 1.883
Mike Bossy, 1.497
Connor McDavid, 1.431
Bobby Orr, 1.393
Marcel Dionne, 1.314
Sidney Crosby, 1.272
Peter Stastny, 1.268
Peter Forsberg, 1.250
Kent Nilsson, 1.241
Phil Esposito, 1.240
Guy Lafleur, 1.202
Joe Sakic, 1.191
Dale Hawerchuk, 1.186
Pat LaFontaine, 1.171
Evgeni Malkin, 1.168
Steve Yzerman, 1.159
Eric Lindros, 1.138
Bernie Federko, 1.130
Artemi Panarin, 1.120
Denis Savard, 1.119
Jari Kurri, 1.118
Bryan Trottier, 1.114
Gilbert Perreault, 1.113
Pavel Bure, 1.110
Regular season: Goals
Regular season: Goals per game
Minimum: 200 goals
Mike Bossy, 0.762
Mario Lemieux, 0.754
Cy Denneny, 0.751
Babe Dye, 0.742
Pavel Bure, 0.623
Alexander Ovechkin, 0.610
Wayne Gretzky, 0.601
Brett Hull, 0.584
Bobby Hull, 0.574
Tim Kerr, 0.565
Rick Martin, 0.561
Phil Esposito, 0.559
Maurice Richard, 0.556
Cam Neely, 0.544
Marcel Dionne, 0.542
Pat LaFontaine, 0.541
Steven Stamkos, 0.513
Rick Vaive, 0.503
Michel Goulet, 0.503
Nels Stewart, 0.498
Guy Lafleur, 0.497
Mike Gartner, 0.494
Dino Ciccarelli, 0.493
Howie Morenz, 0.493
Blaine Stoughton, 0.490
Regular season: Power Play goals
Alexander Ovechkin, 299
Dave Andreychuk, 274
Brett Hull, 265
Teemu Selanne, 255
Luc Robitaille, 247
Phil Esposito, 246
Brendan Shanahan, 237
Mario Lemieux, 236
Marcel Dionne, 234
Dino Ciccarelli, 232
Mike Gartner, 217
Jaromir Jagr, 217
Joe Nieuwendyk, 215
Keith Tkachuk, 212
Gordie Howe, 211
Joe Sakic, 205
Wayne Gretzky, 204
Steve Yzerman, 202
Mark Recchi, 200
Brian Bellows, 198
Jarome Iginla, 197
Pierre Turgeon, 190
Ron Francis, 188
Pat Verbeek, 186
Jeremy Roenick, 184
Regular season: Short-handed goals
Wayne Gretzky, 73
Mark Messier, 63
Steve Yzerman, 50
Mario Lemieux, 49
Butch Goring, 39
Dave Poulin, 39
Jari Kurri, 39
Sergei Fedorov, 36
Theoren Fleury, 35
Dirk Graham, 35
Pavel Bure, 34
Derek Sanderson, 34
Marian Hossa, 34
Brian Rolston, 33
Guy Carbonneau, 33
Brad Marchand, 33
Peter Bondra, 32
Bobby Clarke, 32
Joe Sakic, 32
Dave Keon, 32
Bill Barber, 31
Mats Sundin, 31
Bob Pulford, 30
Martin St. Louis, 29
Russ Courtnall, 29
Craig MacTavish, 29
Mike Modano, 29
Esa Tikkanen, 29
Regular season: Game-winning goals
Jaromir Jagr, 135
Alexander Ovechkin, 121
Gordie Howe, 121
Phil Esposito, 118
Brett Hull, 110
Teemu Selanne, 110
Patrick Marleau, 109
Brendan Shanahan, 109
Jarome Iginla, 101
Guy Lafleur, 98
Bobby Hull, 98
Mats Sundin, 96
Steve Yzerman, 94
Sergei Fedorov, 93
Joe Nieuwendyk, 93
Mark Messier, 92
Mike Modano, 92
Jeremy Roenick, 92
Johnny Bucyk, 92
Wayne Gretzky, 91
Mark Recchi, 91
Mike Gartner, 90
Luc Robitaille, 89
Joe Sakic, 86 Pierr
|
https://en.wikipedia.org/wiki/Complementarity
|
Complementarity may refer to:
Physical sciences and mathematics
Complementarity (molecular biology), a property of nucleic acid molecules in molecular biology
Complementarity (physics), the principle that objects have complementary properties which cannot all be observed or measured simultaneously
Complementarity theory, a type of mathematical optimization problem
Quark–lepton complementarity, a possible fundamental symmetry between quarks and leptons
Society and law
Complementarianism, a theological view that men and women have different but complementary roles
Complementary good, a good for which demand is increased when the price of another good is decreased
An element of interpersonal compatibility in social psychology
The principle that the International Criminal Court is a court of last resort
See also
Complementarity-determining region, part of the variable chains in immunoglobulins
Complementary angles, in geometry
Self-complementary graph, in graph theory
Yin and yang, complementary relation between apparent opposites in Chinese philosophy
Complimentary (disambiguation)
Complement (disambiguation)
|
https://en.wikipedia.org/wiki/Dictionary%20order
|
Dictionary order may refer to:
Alphabetical order § Treatment of multiword strings
Other collation systems used to order words in dictionaries
Lexicographic order in mathematics
|
https://en.wikipedia.org/wiki/Signed%20number%20representations
|
In computing, signed number representations are required to encode negative numbers in binary number systems.
In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes.
There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.
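The four representations are easy to compare side by side; the Python sketch below (an added illustration; the 8-bit width and the sample value are arbitrary choices) prints the bit pattern of −5 under each scheme:

```python
def encodings(value, bits):
    """Bit patterns of `value` in the four classic signed representations."""
    mask = (1 << bits) - 1
    sign = 1 if value < 0 else 0
    sign_magnitude = (sign << (bits - 1)) | (abs(value) & (mask >> 1))
    ones_complement = value & mask if value >= 0 else ~abs(value) & mask
    twos_complement = value & mask                       # wraps modulo 2**bits
    offset_binary = (value + (1 << (bits - 1))) & mask   # excess-2^(bits-1)
    return {"sign-magnitude": sign_magnitude,
            "ones' complement": ones_complement,
            "two's complement": twos_complement,
            "offset binary": offset_binary}

for name, code in encodings(-5, 8).items():
    print(f"{name:18} {code:08b}")
# sign-magnitude     10000101
# ones' complement   11111010
# two's complement   11111011
# offset binary      01111011
```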
History
The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit.
There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems, a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704, 709 and 709x series computers being perhaps the best-known systems to use it.
Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can
|
https://en.wikipedia.org/wiki/Pursuit%20curve
|
In geometry, a curve of pursuit is a curve constructed by analogy to having a point or points representing pursuers and pursuees; the curve of pursuit is the curve traced by the pursuers.
With the paths of the pursuer and pursuee parameterized in time, the pursuee is always on the pursuer's tangent. That is, given $F(t)$, the pursuer (follower), and $L(t)$, the pursued (leader), for every $t$ with $F'(t) \neq 0$ there is an $x > 0$ such that $L(t) = F(t) + x\,F'(t)$.
History
The pursuit curve was first studied by Pierre Bouguer in 1732. In an article on navigation, Bouguer defined a curve of pursuit to explore the way in which one ship might maneuver while pursuing another.
Leonardo da Vinci has occasionally been credited with first exploring curves of pursuit. However Paul J. Nahin, having traced such accounts as far back as the late 19th century, indicates that these anecdotes are unfounded.
Single pursuer
The path followed by a single pursuer, following a pursuee that moves at constant speed on a line, is a radiodrome. With the pursuee travelling along the $y$-axis and the pursuer at $(x, y)$, it is a solution of the differential equation
$$x\,\frac{d^2 y}{dx^2} = k\,\sqrt{1 + \left(\frac{dy}{dx}\right)^2},$$
where $k$ is the ratio of the pursuee's speed to the pursuer's speed.
Multiple pursuers
Typical drawings of curves of pursuit have each point acting as both pursuer and pursuee, inside a polygon, and having each pursuer pursue the adjacent point on the polygon. An example of this is the mice problem.
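Such curves are easy to approximate numerically: at each time step the pursuer moves a fixed small distance straight toward the pursuee's current position, keeping the pursuee on its tangent. A short numpy sketch (an added illustration; the speeds and step size are arbitrary choices):

```python
import numpy as np

def pursue(start, leader_path, speed, dt):
    """Discrete pursuit: step toward the leader's current position each tick."""
    points = [np.asarray(start, dtype=float)]
    for leader in leader_path:
        direction = leader - points[-1]
        dist = np.linalg.norm(direction)
        if dist < speed * dt:                       # pursuer has caught the leader
            break
        points.append(points[-1] + (speed * dt / dist) * direction)
    return np.array(points)

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
leader = np.column_stack([np.zeros_like(t), t])     # leader runs up the y-axis at speed 1
curve = pursue([1.0, 0.0], leader, speed=1.5, dt=dt)
print(curve[-1])                                    # catch point, near (0, 1.2)
```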
See also
Logarithmic spiral
Tractrix
Circles of Apollonius#Apollonius pursuit problem
Pursuit–evasion
External links
Mathworld, with a slightly narrower definition in which |L′(t)| and |F′(t)| are constant
MacTutor Pursuit curve
Curves
Pursuit–evasion
|
https://en.wikipedia.org/wiki/Nerve%20complex
|
In topology, the nerve complex of a set family is an abstract complex that records the pattern of intersections between the sets in the family. It was introduced by Pavel Alexandrov and now has many variants and generalisations, among them the Čech nerve of a cover, which in turn is generalised by hypercoverings. It captures many of the interesting topological properties in an algorithmic or combinatorial way.
Basic definition
Let $I$ be a set of indices and $C$ be a family of sets $(U_i)_{i \in I}$. The nerve of $C$ is a set of finite subsets of the index set $I$. It contains all finite subsets $J \subseteq I$ such that the intersection of the $U_i$ whose subindices are in $J$ is non-empty:
$$N(C) := \left\{ J \subseteq I : J \text{ finite}, \ J \neq \emptyset, \ \bigcap_{j \in J} U_j \neq \emptyset \right\}.$$
In Alexandrov's original definition, the sets are open subsets of some topological space .
The set $N(C)$ may contain singletons (elements $i \in I$ such that $U_i$ is non-empty), pairs (pairs of elements $i, j \in I$ such that $U_i \cap U_j \neq \emptyset$), triplets, and so on. If $J \in N(C)$, then any non-empty subset of $J$ is also in $N(C)$, making $N(C)$ an abstract simplicial complex. Hence N(C) is often called the nerve complex of $C$.
Examples
Let X be the circle $S^1$ and $C = \{U_1, U_2\}$, where $U_1$ is an arc covering the upper half of $X$ and $U_2$ is an arc covering its lower half, with some overlap at both sides (they must overlap at both sides in order to cover all of $X$). Then $N(C) = \{\{1\}, \{2\}, \{1, 2\}\}$, which is an abstract 1-simplex.
Let X be the circle and $C = \{U_1, U_2, U_3\}$, where each $U_i$ is an arc covering one third of $X$, with some overlap with the adjacent $U_j$. Then $N(C) = \{\{1\}, \{2\}, \{3\}, \{1, 2\}, \{2, 3\}, \{3, 1\}\}$. Note that {1,2,3} is not in $N(C)$ since the common intersection of all three sets is empty; so $N(C)$ is an unfilled triangle.
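For a finite family of finite sets the nerve can be computed by brute force; the Python sketch below (an added illustration, with the arcs modelled as overlapping point samples) reproduces Example 2:

```python
from itertools import combinations

def nerve(sets):
    """All non-empty index subsets whose corresponding sets intersect."""
    simplices = []
    for k in range(1, len(sets) + 1):
        for J in combinations(range(len(sets)), k):
            if set.intersection(*(sets[j] for j in J)):
                simplices.append(J)
    return simplices

# Three "arcs" of a 12-point circle, each overlapping the next in one point.
arcs = [set(range(0, 5)), set(range(4, 9)), set(range(8, 12)) | {0}]
print(nerve(arcs))  # [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]: an unfilled triangle
```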
The Čech nerve
Given an open cover $C = \{U_i\}_{i \in I}$ of a topological space $X$, or more generally a cover in a site, we can consider the pairwise fibre products $U_{ij} = U_i \times_X U_j$, which in the case of a topological space are precisely the intersections $U_i \cap U_j$. The collection of all such pairwise intersections can be referred to as $C \times_X C$, and the triple intersections as $C \times_X C \times_X C$.
By considering the natural maps $U_{ij} \to U_i$ and $U_i \to U_{ii}$, we can construct a simplicial object $S(C)_\bullet$ defined by $S(C)_n = C \times_X \cdots \times_X C$, the $n$-fold fibre product. This is the Čech nerve.
By taking connected components we get a simplicial set, which we can then realise topologically.
Nerve theorems
The nerve complex is a simple combinatorial object. Often, it is much simpler than the underlying topological space (the union of the sets in $C$). Therefore, a natural question is whether the topology of $N(C)$ is equivalent to the topology of $\bigcup C$.
In general, this need not be the case. For example, one can cover any n-sphere with two contractible sets $U_1$ and $U_2$ that have a non-empty intersection, as in example 1 above. In this case, $N(C)$ is an abstract 1-simplex, which is similar to a line but not to a sphere.
However, in some cases $N(C)$ does reflect the topology of X. For example, if a circle is covered by three open arcs, intersecting in pairs as in Example 2 above, then $N(C)$ is a 2-simplex (without its interior) and it is homotopy-equivalent to the original circle.
A nerve theorem (or nerve lemma) is a theorem that gives sufficient conditions on C guaranteeing that $N(C)$ reflects, in some sense, the topology of $\bigcup C$. A functorial nerve theorem is a nerve
|
https://en.wikipedia.org/wiki/Kakeya%20set
|
In mathematics, a Kakeya set, or Besicovitch set, is a set of points in Euclidean space which contains a unit line segment in every direction. For instance, a disk of radius 1/2 in the Euclidean plane, or a ball of radius 1/2 in three-dimensional space, forms a Kakeya set. Much of the research in this area has studied the problem of how small such sets can be. Besicovitch showed that there are Besicovitch sets of measure zero.
A Kakeya needle set (sometimes also known as a Kakeya set) is a (Besicovitch) set in the plane with a stronger property, that a unit line segment can be rotated continuously through 180 degrees within it, returning to its original position with reversed orientation. Again, the disk of radius 1/2 is an example of a Kakeya needle set.
Kakeya needle problem
The Kakeya needle problem asks whether there is a minimum area of a region in the plane, in which a needle of unit length can be turned through 360°. This question was first posed, for convex regions, by Sōichi Kakeya. The minimum area for convex sets is achieved by an equilateral triangle of height 1 and area $1/\sqrt{3}$, as Pál showed.
Kakeya seems to have suggested that the Kakeya set of minimum area, without the convexity restriction, would be a three-pointed deltoid shape. However, this is false; there are smaller non-convex Kakeya sets.
Besicovitch needle sets
Besicovitch was able to show that there is no lower bound > 0 for the area of such a region, in which a needle of unit length can be turned around. That is, for every $\varepsilon > 0$, there is a region of area $\varepsilon$ within which the needle can be moved through a continuous motion that rotates it a full 360 degrees. This built on earlier work of his, on plane sets which contain a unit segment in each orientation. Such a set is now called a Besicovitch set. Besicovitch's work showing such a set could have arbitrarily small measure was from 1919. The problem may have been considered by analysts before that.
One method of constructing a Besicovitch set (see figure for corresponding illustrations) is known as a "Perron tree" after Oskar Perron who was able to simplify Besicovitch's original construction. The precise construction and numerical bounds are given in Besicovitch's popularization.
The first observation to make is that the needle can move in a straight line as far as it wants without sweeping any area, because it is a zero-width line segment. The second trick, due to Pál and known as Pál joins, describes how to move the needle between any two parallel positions while sweeping negligible area. The needle follows the shape of an "N": it moves from the first location some distance up the left side of the "N", sweeps out the angle to the middle diagonal, moves down the diagonal, sweeps out the second angle, and then moves up the parallel right side of the "N" until it reaches the required second location. The only regions of non-zero area swept are the two triangles of height one and the angle at the top of the "N". The swept area
|
https://en.wikipedia.org/wiki/Borel%20subgroup
|
In the theory of algebraic groups, a Borel subgroup of an algebraic group G is a maximal Zariski closed and connected solvable algebraic subgroup. For example, in the general linear group GLn (n x n invertible matrices), the subgroup of invertible upper triangular matrices is a Borel subgroup.
For groups realized over algebraically closed fields, there is a single conjugacy class of Borel subgroups.
Borel subgroups are one of the two key ingredients in understanding the structure of simple (more generally, reductive) algebraic groups, in Jacques Tits' theory of groups with a (B,N) pair. Here the group B is a Borel subgroup and N is the normalizer of a maximal torus contained in B.
The notion was introduced by Armand Borel, who played a leading role in the development of the theory of algebraic groups.
Parabolic subgroups
Subgroups between a Borel subgroup B and the ambient group G are called parabolic subgroups.
Parabolic subgroups P are also characterized, among algebraic subgroups, by the condition that G/P is a complete variety.
Working over algebraically closed fields, the Borel subgroups turn out to be the minimal parabolic subgroups in this sense. Thus B is a Borel subgroup when the homogeneous space G/B is a complete variety which is "as large as possible".
For a simple algebraic group G, the set of conjugacy classes of parabolic subgroups is in bijection with the set of all subsets of nodes of the corresponding Dynkin diagram; the Borel subgroup corresponds to the empty set and G itself to the set of all nodes. (In general, each node of the Dynkin diagram determines a simple negative root and thus a one-dimensional 'root group' of G; a subset of the nodes thus yields a parabolic subgroup, generated by B and the corresponding negative root groups. Moreover, any parabolic subgroup is conjugate to such a parabolic subgroup.)
Example
Let $G = \mathrm{GL}_4(\mathbb{C})$. A Borel subgroup $B$ of $G$ is the set of invertible upper triangular matrices, and the maximal proper parabolic subgroups of $G$ containing $B$ are the subgroups of invertible block upper-triangular matrices with diagonal block sizes $(1,3)$, $(2,2)$ and $(3,1)$. Also, a maximal torus in $B$ is the subgroup of invertible diagonal matrices; this is isomorphic to the algebraic torus $(\mathbb{C}^*)^4$.
Lie algebra
For the special case of a Lie algebra $\mathfrak{g}$ with a Cartan subalgebra $\mathfrak{h}$, given an ordering of the roots, the Borel subalgebra is the direct sum of $\mathfrak{h}$ and the weight spaces of $\mathfrak{g}$ with positive weight. A Lie subalgebra of $\mathfrak{g}$ containing a Borel subalgebra is called a parabolic Lie algebra.
See also
Hyperbolic group
Cartan subgroup
Mirabolic subgroup
Algebraic groups
|
https://en.wikipedia.org/wiki/John%20Winthrop%20%28educator%29
|
John Winthrop (December 19, 1714 – May 3, 1779) was an American mathematician, physicist and astronomer. He was the 2nd Hollis Professor of Mathematics and Natural Philosophy in Harvard College.
Early life
John Winthrop was born in Boston, Massachusetts. His great-great-grandfather, also named John Winthrop, was founder of the Massachusetts Bay Colony. He graduated in 1732 from Harvard, where, from 1738 until his death, he served as professor of mathematics and natural philosophy.
Career
Professor Winthrop was one of the foremost men of science in America during the 18th century, and his impact on its early advance in New England was particularly significant. Both Benjamin Franklin and Benjamin Thompson (Count Rumford) probably owed much of their early interest in scientific research to his influence. He also had a decisive influence in the early philosophical education of John Adams during the latter's time at Harvard. He corresponded regularly with the Royal Society in London—as such, he was one of the first American intellectuals to be taken seriously in Europe. He was elected to the revived American Philosophical Society in 1768. He was noted for attempting to explain the great Lisbon earthquake of 1755 as a scientific—rather than religious—phenomenon, and his application of mathematical computations to earthquake activity following the great quake formed the basis of the claim made on his behalf as the founder of the science of seismology. Additionally, he observed the transits of Mercury in 1740 and 1761 and journeyed to Newfoundland to observe a transit of Venus. He traveled in a ship provided by the Province of Massachusetts—probably the first scientific expedition ever sent out by any incipient American state. Winthrop was recorded as owning two enslaved men, George and Scipio, in 1759 and 1761 respectively.
He served as acting president of Harvard in 1769 and again in 1773, but each time declined the offer of the full presidency on the grounds of old age. During the nine months in 1775–1776 when Harvard moved to Concord, Massachusetts, Winthrop occupied the house that would become famous as The Wayside, home to Louisa May Alcott and Nathaniel Hawthorne. Additionally, he was actively interested in public affairs, was for several years a judge of probate in Middlesex County, was a member of the Governor's Council in 1773–74, and subsequently offered the weight of his influence to the patriotic cause in the Revolution. He published:
Lecture on Earthquakes (1755)
Answer to Mr. Prince's Letter on Earthquakes (1756)
Account of Some Fiery Meteors (1755)
Two Lectures on the Parallax (1769)
Personal life
In 1756, he married Hannah Fayerweather (1727–1790), the daughter of Thomas and Hannah Waldo Fayerweather. She was baptized at the First Church in Boston on February 12, 1727, and had been previously married in 1745 to Parr Tolman. Together, they raised Winthrop's son from his previous marriage, James Winthrop, who continued his father
|
https://en.wikipedia.org/wiki/Kurt%20Hirsch
|
Kurt August Hirsch (12 January 1906 – 4 November 1986) was a German mathematician who moved to England to escape the Nazi persecution of Jews. His research was in group theory. He also worked to reform mathematics education and became a county chess champion. The Hirsch length and Hirsch–Plotkin radical are named after him.
He taught at the University of Leicester from 1938 (except for a brief internment as an enemy alien in 1940), moved to King's College, Newcastle in 1948, and then moved again to Queen Mary College in London in 1951, where he stayed for the remainder of his career and worked with K. W. Gruenberg.
Hirsch's doctoral students include Ismail Mohamed and Ascher Wagner.
Publications
He translated several books from Russian, including:
The Theory of Groups (by Aleksandr Kurosh), his first translation
Algebraic Geometry (by Igor Shafarevich); this was later retranslated by Miles Reid
External links
Author profile at Mathematical Reviews (subscription required).
1906 births
1986 deaths
Humboldt University of Berlin alumni
20th-century German mathematicians
20th-century British mathematicians
Jewish emigrants from Nazi Germany to the United Kingdom
People interned in the Isle of Man during World War II
|
https://en.wikipedia.org/wiki/Special%20number%20field%20sieve
|
In number theory, a branch of mathematics, the special number field sieve (SNFS) is a special-purpose integer factorization algorithm. The general number field sieve (GNFS) was derived from it.
The special number field sieve is efficient for integers of the form re ± s, where r and s are small (for instance Mersenne numbers).
Heuristically, its complexity for factoring an integer $n$ is of the form
$$\exp\left(\left(1 + o(1)\right)\left(\tfrac{32}{9}\,\ln n\right)^{1/3}(\ln \ln n)^{2/3}\right) = L_n\!\left[\tfrac{1}{3}, \left(\tfrac{32}{9}\right)^{1/3}\right]$$
in O- and L-notations.
The SNFS has been used extensively by NFSNet (a volunteer distributed computing effort), NFS@Home and others to factorise numbers of the Cunningham project; for some time the records for integer factorization have been numbers factored by SNFS.
Overview of method
The SNFS is based on an idea similar to the much simpler rational sieve; in particular, readers may find it helpful to read about the rational sieve first, before tackling the SNFS.
The SNFS works as follows. Let n be the integer we want to factor. As in the rational sieve, the SNFS can be broken into two steps:
First, find a large number of multiplicative relations among a factor base of elements of Z/nZ, such that the number of multiplicative relations is larger than the number of elements in the factor base.
Second, multiply together subsets of these relations in such a way that all the exponents are even, resulting in congruences of the form $a^2 \equiv b^2 \pmod{n}$. These in turn immediately lead to factorizations of n: $n = \gcd(a+b, n) \times \gcd(a-b, n)$. If done right, it is almost certain that at least one such factorization will be nontrivial.
The second step is identical to the case of the rational sieve, and is a straightforward linear algebra problem. The first step, however, is done in a different, more efficient way than the rational sieve, by utilizing number fields.
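The congruence-of-squares step at the end is simple; here is a Python sketch (an added illustration, with a toy congruence chosen by hand, not one produced by an actual sieve):

```python
from math import gcd

def split(a, b, n):
    """Given a^2 = b^2 (mod n), attempt to factor n as gcd(a-b, n) * gcd(a+b, n)."""
    assert (a * a - b * b) % n == 0
    return gcd(a - b, n), gcd(a + b, n)

# 10^2 = 100 = 9 = 3^2 (mod 91), and the resulting factors are nontrivial:
print(split(10, 3, 91))  # (7, 13)
```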
Details of method
Let n be the integer we want to factor. We pick an irreducible polynomial f with integer coefficients, and an integer m such that f(m)≡0 (mod n) (we will explain how they are chosen in the next section). Let α be a root of f; we can then form the ring Z[α]. There is a unique ring homomorphism φ from Z[α] to Z/nZ that maps α to m. For simplicity, we'll assume that Z[α] is a unique factorization domain; the algorithm can be modified to work when it isn't, but then there are some additional complications.
Next, we set up two parallel factor bases, one in Z[α] and one in Z. The one in Z[α] consists of all the prime ideals in Z[α] whose norm is bounded by a chosen value. The factor base in Z, as in the rational sieve case, consists of all prime integers up to some other bound.
We then search for relatively prime pairs of integers (a,b) such that:
a+bm is smooth with respect to the factor base in Z (i.e., it is a product of elements in the factor base).
a+bα is smooth with respect to the factor base in Z[α]; given how we chose the factor base, this is equivalent to the norm of a+bα being divisible only by primes below the chosen bound.
These pairs are found through a sieving process, analogous to the Sie
|
https://en.wikipedia.org/wiki/Frank%20Morley
|
Frank Morley (September 9, 1860 – October 17, 1937) was a leading mathematician, known mostly for his teaching and research in the fields of algebra and geometry. Among his mathematical accomplishments was the discovery and proof of the celebrated Morley's trisector theorem in elementary plane geometry.
He guided 50 Ph.D. candidates to their degrees, and was said to be:
"...one of the more striking figures of the relatively small group of men who initiated that development which, within his own lifetime, brought Mathematics in America from a minor position to its present place in the sun."
Life
Morley was born in the town of Woodbridge in Suffolk, England. His parents were Elizabeth Muskett and Joseph Roberts Morley, Quakers who ran a china shop. After being educated at Woodbridge School, Morley went on to King's College, Cambridge (B.A., 1884).
In 1887, Morley moved to Pennsylvania. He taught at Haverford College until 1900, when he became chairman of the mathematics department at Johns Hopkins University. His publications include Elementary Treatise on the Theory of Functions (1893), with James Harkness; and Introduction to the Theory of Analytic Functions (1898). He was President of the American Mathematical Society from 1919 to 1920 and was the editor of the American Journal of Mathematics from 1900 to 1921. He was an invited speaker at the International Congress of Mathematicians in 1912 at Cambridge (England), in 1924 at Toronto, and in 1936 at Oslo.
In 1933 he and his son Frank Vigor Morley published the "stimulating volume", Inversive Geometry. The book develops complex numbers as a tool for geometry and function theory. Some non-standard terminology is used such as "base-circle" for unit circle and "turn" for a point on it.
He was a strong chess player and once beat world champion Emanuel Lasker in a game of chess.
He died in Baltimore, Maryland at age 77.
His three sons are novelist Christopher Morley, Pulitzer Prize winner Felix Morley, and Frank Vigor Morley, also a mathematician.
Works
1893: (with James Harkness) A treatise on the theory of functions (New York: Macmillan)
1898: (with James Harkness) Introduction to the Theory of Analytic Functions (G.E.Stechert And Company)
1919: On the Lüroth Quartic Curve
1933: (with son Frank Vigor Morley) Inversive Geometry, Ginn & Co., now available from HathiTrust
See also
cis
Turn
Lüroth quartic
Morley centers
Petersen–Morley theorem
References
R.C. Archibald, A Semicentennial History of the American Mathematical Society (1888–1938), Chapter 15: The Presidents: #15 Morley 1919–20. pp. 194–201, includes bibliography of Morley's papers.
External links
Clark Kimberling: Frank Morley (1860–1937) geometer.
1860 births
1937 deaths
19th-century British mathematicians
19th-century American mathematicians
20th-century American mathematicians
British expatriates in the United States
British geometers
Johns Hopkins University faculty
Haverford College faculty
Presidents of the American Mathematical Society
|
https://en.wikipedia.org/wiki/George%20Salmon
|
George Salmon FBA FRS FRSE (25 September 1819 – 22 January 1904) was a distinguished and influential Irish mathematician and Anglican theologian. After working in algebraic geometry for two decades, Salmon devoted the last forty years of his life to theology. His entire career was spent at Trinity College Dublin.
Personal life
Salmon was born in Dublin, to Michael Salmon and Helen Weekes (the daughter of the Reverend Edward Weekes), but he spent his boyhood in Cork City, where his father Michael was a linen merchant. He attended Hamblin and Porter's School there before starting at Trinity College in 1833.
In 1837 he won a scholarship and graduated from Trinity in 1839 with first-class honours in mathematics. In 1841 at the age of 21, he attained a paid fellowship and teaching position in mathematics at Trinity. In 1845 he was additionally appointed to a position in theology at the university, after having been ordained a deacon in 1844 and a priest in the Church of Ireland in 1845.
He remained at Trinity for the rest of his career.
He died at the Provost's House on 22 January 1904 and was buried in Mount Jerome Cemetery, Dublin. He was an avid reader throughout his life, and his obituary refers to him as "specially devoted to the novels of Jane Austen."
Family
In 1844 he married Frances Anne Salvador, daughter of Rev J L Salvador of Staunton-upon-Wye in Herefordshire, with whom he had six children, of which only two survived him.
Mathematics
In the late 1840s and the 1850s Salmon was in regular and frequent communication with Arthur Cayley and J. J. Sylvester. The three of them, together with a small number of other mathematicians (including Charles Hermite), were developing a system for dealing with n-dimensional algebra and geometry. During this period Salmon published about 36 papers in journals. In these papers for the most part he solved narrowly defined, concrete problems in algebraic geometry, as opposed to more broadly systematic or foundational questions. But he was an early adopter of the foundational innovations of Cayley and the others. In 1859 he published the book Lessons Introductory to the Modern Higher Algebra (where the word "higher" means n-dimensional). This was for a while simultaneously the state-of-the-art and the standard presentation of the subject, and went through updated and expanded editions in 1866, 1876 and 1885, and was translated into German and French.
From 1858 to 1867 he was the Donegall Lecturer in Mathematics at Trinity.
Meanwhile, back in 1848 Salmon had published an undergraduate textbook entitled A Treatise on Conic Sections. This text remained in print for over fifty years, going through five updated editions in English, and was translated into German, French and Italian. Salmon himself did not participate in the expansions and updates of the later editions. The German version, which was a "free adaptation" by Wilhelm Fiedler, was popular as an undergraduate text in Germany. Salmon also published two further textbooks, A Treatise on the Higher Plane Curves (1852) and A Treatise on the Analytic Geometry of Three Dimensions (1862).
|
https://en.wikipedia.org/wiki/Norman%20Steenrod
|
Norman Earl Steenrod (April 22, 1910October 14, 1971) was an American mathematician most widely known for his contributions to the field of algebraic topology.
Life
He was born in Dayton, Ohio, and educated at Miami University and University of Michigan (A.B. 1932). After receiving a master's degree from Harvard University in 1934, he enrolled at Princeton University. He completed his Ph.D. under the direction of Solomon Lefschetz, with a thesis titled Universal homology groups.
Steenrod held positions at the University of Chicago from 1939 to 1942, and the University of Michigan from 1942 to 1947. He moved to Princeton University in 1947, and remained on the Faculty there for the rest of his career. He was editor of the Annals of Mathematics and a member of the National Academy of Sciences. He died in Princeton, survived by his wife, the former Carolyn Witter, and two children.
Work
Thanks to Lefschetz and others, the cup product structure of cohomology was understood by the early 1940s. Steenrod was able to define operations from one cohomology group to another (the so-called Steenrod squares) that generalized the cup product. The additional structure made cohomology a finer invariant. The Steenrod cohomology operations form a (non-commutative) algebra under composition, known as the Steenrod algebra.
His book The Topology of Fibre Bundles is a standard reference. In collaboration with Samuel Eilenberg, he was a founder of the axiomatic approach to homology theory. See Eilenberg–Steenrod axioms.
See also
Abstract nonsense
Eilenberg–Steenrod axioms
Fiber bundle
Steenrod algebra
Steenrod homology
Steenrod operations
Steenrod problem
Publications
References
External links
Michael Hoffman (2013) Norman Steenrod from US Naval Academy
1910 births
1971 deaths
20th-century American mathematicians
Harvard University alumni
People from Dayton, Ohio
Princeton University alumni
Princeton University faculty
Topologists
University of Chicago faculty
University of Michigan faculty
University of Michigan alumni
Mathematicians from Ohio
Members of the United States National Academy of Sciences
|
https://en.wikipedia.org/wiki/Perxenate
|
In chemistry, perxenates are salts of the yellow xenon-containing anion XeO6^4−. This anion has octahedral molecular geometry, as determined by Raman spectroscopy, having O–Xe–O bond angles varying between 87° and 93°. The Xe–O bond length was determined by X-ray crystallography to be 1.875 Å.
Synthesis
Perxenates are synthesized by the disproportionation of xenon trioxide when dissolved in strong alkali:
2 XeO3 (aq) + 4 OH− (aq) → Xe (g) + XeO6^4− (aq) + O2 (g) + 2 H2O (l)
When Ba(OH)2 is used as the alkali, barium perxenate can be crystallized from the resulting solution.
Perxenic acid
Perxenic acid is the unstable conjugate acid of the perxenate anion, formed by the solution of xenon tetroxide in water. It has not been isolated as a free acid, because under acidic conditions it rapidly decomposes into xenon trioxide and oxygen gas:
2 H4XeO6 → 2 XeO3 + O2 + 4 H2O
Its extrapolated formula, H4XeO6, is inferred from the octahedral geometry of the perxenate ion (XeO6^4−) in its alkali metal salts.
The pKa of aqueous perxenic acid has been indirectly calculated to be below 0, making it an extremely strong acid. Its first ionization yields the anion H3XeO6−, which has a pKa value of 4.29, still relatively acidic. The twice deprotonated species, H2XeO6^2−, has a pKa value of 10.81. Due to its rapid decomposition under acidic conditions as described above, however, it is most commonly encountered as perxenate salts, bearing the anion XeO6^4−.
Properties
Perxenic acid and the anion XeO6^4− are both strong oxidizing agents, capable of oxidising silver(I) to silver(III), copper(II) to copper(III), and manganese(II) to permanganate. The perxenate anion is unstable in acidic solutions, being almost instantaneously reduced to HXeO4−.
The sodium, potassium, and barium salts are soluble. Barium perxenate solution is used as the starting material for the synthesis of xenon tetroxide (XeO4) by mixing it with concentrated sulfuric acid:
Ba2XeO6 (s) + 2 H2SO4 (l) → XeO4 (g) + 2 BaSO4 (s) + 2 H2O (l)
Most metal perxenates are stable, except silver perxenate, which decomposes violently.
Applications
Sodium perxenate, Na4XeO6, can be used for the analytic separation of trace amounts of americium from curium. The separation involves the oxidation of Am3+ to Am4+ by sodium perxenate in acidic solution in the presence of La3+, followed by treatment with calcium fluoride, which forms insoluble fluorides with Cm3+ and La3+, but retains Am4+ and Pu4+ in solution as soluble fluorides.
References
Oxyanions
Salts
Xenon(VIII) compounds
Octahedral compounds
|
https://en.wikipedia.org/wiki/Strong%20topology
|
In mathematics, a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on context, and it may refer to:
the final topology on the disjoint union
the topology arising from a norm
the strong operator topology
the strong topology (polar topology), which subsumes all topologies above.
A topology τ is stronger than a topology σ (is a finer topology) if τ contains all the open sets of σ.
In algebraic geometry, it usually means the topology of an algebraic variety as complex manifold or subspace of complex projective space, as opposed to the Zariski topology (which is rarely even a Hausdorff space).
See also
Weak topology
Topology
|
https://en.wikipedia.org/wiki/Isotopic
|
Isotopic may refer to:
In the physical sciences, to do with chemical isotopes
In mathematics, to do with a relation called isotopy; see Isotopy (disambiguation)
In geometry, isotopic refers to facet-transitivity
|
https://en.wikipedia.org/wiki/Nicolaus%20Rohlfs
|
Nicolaus Rohlfs was an 18th-century German mathematics teacher (arithmeticus) in Buxtehude and Hamburg who wrote astronomical calendars, a book about gardening, and other treatises that were continued by Matthias Rohlfs.
Works
Trigonometrische Calculation, der Anno Christi 1724. den 22 Maji ... vorfallenden grossen Sonnen-Finsterniss : wie dieselbe über den Hamburgischen Horizont sich praesentiren wird ... auffgesetzet von Nicolaus Rohlfs. Druck: Struckische Buchdruckerei, Lübeck, ca 1723
Tabula horologica. Oder Curieuse Uhr-Tabellen, : durch deren Beyhülfe man vermittelst eines kleinen Stöckleins, Spatzier-Stocks, Fuss-Masses oder andern Dinges, wenn es nur in 12 Theile getheilt ist, bey Sonnenschein, die Stunde des Tages finden, und andere Divertissements haben kan. Wobey (1) Ein Kupferblatt, welches laut der Anweisung zu einem Universal-Uhr zu recht gemacht und gebraucht wird; ingleichen ein Unterricht, wie mit dem Universal- und einem Horizontal-Uhr die Mittags Linie zu finden, auch wie man bey Mondenschein die Stunde der Nacht finden könne. (2) Ein Zusatz, darinn gewiesen wird, erstlich, wie man bey Sonnenschein mit einem Stöcklein oder Strohhalm in der Hand die Uhr-Zeit finden kan; zweytens wie man die Schlag- und Taschen-Uhren richtig stellen und corrigiren soll. Gottfried Richter, Hamburg, 1733.
Anweisung, wie die Sonnenfinsternißen über einen jeden Ort des Erd-Bodens zu berechnen. Hamburg, 1734.
Siebenfacher königlich gross=britannisch- und chur=fürstlich Braunschweig-Lüneburgischer Staats=Calender über Dero Chur=Fürstenthum Braunschweig=Lüneburg, und desselben zugehörige Lande, Aufs Jahr 1737. Darinnen der Verbesserte, Gregorianische, Julianische, Jüdische, Römische und Türckische, nebst einem Schreib=Calender enthalten, auch andere zum Calender gehörige Sachen zu sehen sind. Welchem allen beygefüget Das Staats=Register von denen Königlichen Regierungen, und übrigen Hohen Civil- und Militair Bedienten in den teutschen Landen; Auch eine Genealogische Verzeichniß aller jetztlebenden Durchlauchtigsten Höchst= und Hohen Häuser in Europa, nach dem Alphabet. 84 Blätter, Druck: Johann Christoph Berenberg, Lauenburg. Note: In this form published for many years (since ca 1752 continued by Matthias Rohlfs)
Betrachtung der beyden grossen Himmels-Lichter Sonn und Mond. Hamburg: Samuel Heyl, 1736
Betrachtung der ... grossen Sonnen-Finsterniss am 25. Julii dieses Jahrs: als ein Supplement .. Betrachtung der grossen Himmels-Lichter Sonn und Mond, ... 1736
Künstliches Zahlen Spiel, oder gründliche Anweisung wie die so genannten Magischen-Quadraten auf eine sehr leichte Art zu verfertigen sind, etc. 1742
Königl. schleswig-holsteinischer Haus- und Garten-Allmanach: auf das ... Jahr Christi; ueber den schleswig-holsteinis. Horizont gestellet. Altona : Burmester, 1750–1784. In the beginning: Nicolavs Rohlfs, later Matthias Rohlfs
German male writers
Year of birth missing
Year of death missing
|
https://en.wikipedia.org/wiki/Path%20analysis%20%28statistics%29
|
In statistics, path analysis is used to describe the directed dependencies among a set of variables. This includes models equivalent to any form of multiple regression analysis, factor analysis, canonical correlation analysis, discriminant analysis, as well as more general families of models in the multivariate analysis of variance and covariance analyses (MANOVA, ANOVA, ANCOVA).
In addition to being thought of as a form of multiple regression focusing on causality, path analysis can be viewed as a special case of structural equation modeling (SEM) – one in which only single indicators are employed for each of the variables in the causal model. That is, path analysis is SEM with a structural model, but no measurement model. Other terms used to refer to path analysis include causal modeling and analysis of covariance structures.
Path analysis is considered by Judea Pearl to be a direct ancestor to the techniques of Causal inference.
History
Path analysis was developed around 1918 by geneticist Sewall Wright, who wrote about it more extensively in the 1920s. It has since been applied to a vast array of complex modeling areas, including biology, psychology, sociology, and econometrics.
Path modeling
Typically, path models consist of independent and dependent variables depicted graphically by boxes or rectangles. Variables that are independent variables, and not dependent variables, are called 'exogenous'. Graphically, these exogenous variable boxes lie at outside edges of the model and have only single-headed arrows exiting from them. No single-headed arrows point at exogenous variables. Variables that are solely dependent variables, or are both independent and dependent variables, are termed 'endogenous'. Graphically, endogenous variables have at least one single-headed arrow pointing at them.
In the model below, the two exogenous variables (Ex1 and Ex2) are modeled as being correlated as depicted by the double-headed arrow. Both of these variables have direct and indirect (through En1) effects on En2 (the two dependent or 'endogenous' variables/factors). In most real-world models, the endogenous variables may also be affected by variables and factors stemming from outside the model (external effects including measurement error). These effects are depicted by the "e" or error terms in the model.
Using the same variables, alternative models are conceivable. For example, it may be hypothesized that Ex1 has only an indirect effect on En2, deleting the arrow from Ex1 to En2; and the likelihood or 'fit' of these two models can be compared statistically.
Path tracing rules
In order to validly calculate the relationship between any two boxes in the diagram, Wright (1934) proposed a simple set of path tracing rules for calculating the correlation between two variables. The correlation is equal to the sum of the contributions of all the pathways through which the two variables are connected. The strength of each of these contributing pathways is calculated as the product of the path coefficients along that pathway.
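As a numeric illustration of the tracing rules (all coefficients invented for the example), the correlation implied between Ex1 and En2 in the model described above can be summed over the four connecting paths and checked against direct covariance algebra:

```python
import numpy as np

# Path coefficients for the model in the text (standardized variables):
# Ex1 -> En1 (p1), Ex2 -> En1 (p2), En1 -> En2 (p3),
# Ex1 -> En2 (p4), Ex2 -> En2 (p5); corr(Ex1, Ex2) = r.
p1, p2, p3, p4, p5, r = 0.4, 0.3, 0.5, 0.2, 0.1, 0.25

# Wright's tracing rules: sum, over all connecting paths, of the
# product of the coefficients along each path.
corr_ex1_en2 = p4 + p1 * p3 + r * p5 + r * p2 * p3

# Cross-check by covariance algebra on the structural equations
# En1 = p1*Ex1 + p2*Ex2 + e1,  En2 = p3*En1 + p4*Ex1 + p5*Ex2 + e2.
cov_ex = np.array([[1.0, r], [r, 1.0]])
w_en1 = np.array([p1, p2])                 # En1 loadings on (Ex1, Ex2)
cov_ex1_en1 = cov_ex[0] @ w_en1            # Cov(Ex1, En1) = p1 + p2*r
cov_check = p3 * cov_ex1_en1 + p4 * 1.0 + p5 * r
print(corr_ex1_en2, cov_check)             # both ≈ 0.4625
```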
|
https://en.wikipedia.org/wiki/London%20Mathematical%20Society
|
The London Mathematical Society (LMS) is one of the United Kingdom's learned societies for mathematics (the others being the Royal Statistical Society (RSS), the Institute of Mathematics and its Applications (IMA), the Edinburgh Mathematical Society and the Operational Research Society (ORS)).
History
The Society was established on 16 January 1865, the first president being Augustus De Morgan. The earliest meetings were held in University College, but the Society soon moved into Burlington House, Piccadilly. The initial activities of the Society included talks and publication of a journal.
The LMS was used as a model for the establishment of the American Mathematical Society in 1888.
Mary Cartwright was the first woman to be President of the LMS (in 1961–62).
The Society was granted a royal charter in 1965, a century after its foundation. In 1998 the Society moved from rooms in Burlington House into De Morgan House (named after the society's first president), at 57–58 Russell Square, Bloomsbury, to accommodate an expansion of its staff.
In 2015 the Society celebrated its 150th Anniversary. During the year the anniversary was celebrated with a wide range of meetings, events, and other activities, highlighting the historical and continuing value and prevalence of mathematics in society, and in everyday life.
Membership
Membership is open to those who are interested in mathematics. Currently, there are four classes of membership, namely: (a) Ordinary, (b) Reciprocity, (c) Associate, and (d) Associate (undergraduate). In addition, Honorary Members of the Society are distinguished mathematicians who are not normally resident in the UK, who are proposed by the Society's Council for election to Membership at a Society Meeting.
LMS Activities
The Society publishes books and periodicals; organises mathematical conferences; provides funding to promote mathematics research and education; and awards a number of prizes and fellowships for excellence in mathematical research.
Grants
The Society supports mathematics in the UK through its grant schemes. These schemes provide support for mathematicians at different stages in their careers. The Society’s grants include research grants for mathematicians, early career researchers and computer scientists working at the interface of mathematics and computer science; education grants for teachers and other educators; travel grants to attend conferences; and grants for those with caring responsibilities.
Awarding grants is one of the primary mechanisms through which the Society achieves its central purpose, namely to 'promote and extend mathematical knowledge'.
Fellowships
The Society also offers a range of Fellowships: LMS Early Career Fellowships; LMS Atiyah-Lebanon UK Fellowships; LMS Emmy Noether Fellowships and Grace Chisholm Young Fellowships.
Society lectures and meetings
The Society organises an annual programme of events and meetings. The programme provides meetings of interest to undergraduate
|
https://en.wikipedia.org/wiki/Willem%20de%20Sitter
|
Willem de Sitter (6 May 1872 – 20 November 1934) was a Dutch mathematician, physicist, and astronomer.
Life and work
Born in Sneek, de Sitter studied mathematics at the University of Groningen and then joined the Groningen astronomical laboratory. He worked at the Cape Observatory in South Africa (1897–1899). Then, in 1908, de Sitter was appointed to the chair of astronomy at Leiden University. He was director of the Leiden Observatory from 1919 until his death.
De Sitter made major contributions to the field of physical cosmology. He co-authored a paper with Albert Einstein in 1932 in which they discussed the implications of cosmological data for the curvature of the universe. He also came up with the concept of the de Sitter space and de Sitter universe, a solution for Einstein's general relativity in which there is no matter and a positive cosmological constant. This results in an exponentially expanding, empty universe. De Sitter was also famous for his research on the motions of the moons of Jupiter.
Willem de Sitter died after a brief illness in November 1934.
Honours
In 1912, he became a member of the Royal Netherlands Academy of Arts and Sciences.
Awards
James Craig Watson Medal (1929)
Bruce Medal (1931)
Gold Medal of the Royal Astronomical Society (1931)
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1934)
Named after him
The crater De Sitter on the Moon
Asteroid 1686 De Sitter
de Sitter universe
de Sitter space
Anti-de Sitter space
de Sitter invariant special relativity
Einstein–de Sitter universe
de Sitter double star experiment
de Sitter precession
de Sitter–Schwarzschild metric
Family
One of his sons, Ulbo de Sitter (1902 – 1980), was a Dutch geologist, and one of Ulbo's sons was a Dutch sociologist Ulbo de Sitter (1930 – 2010).
Another son of Willem, Aernout de Sitter (1905 – 15 September 1944), was the director of the Bosscha Observatory in Lembang, Indonesia (then the Dutch East Indies), where he studied the Messier 4 globular cluster.
Selected publications
On Einstein's theory of gravitation and its astronomical consequences:
See also
de Sitter double star experiment
de Sitter precession
de Sitter relativity
de Sitter space
de Sitter universe
Anti-de Sitter space
The Dreams in the Witch House, a story by H. P. Lovecraft featuring de Sitter, and inspired by his lecture The Size of the Universe
References
External links
P.C. van der Kruit Willem de Sitter (1872 – 1934) in: History of science and scholarship in the Netherlands.
A. Blaauw, Sitter, Willem de (1872–1934), in Biografisch Woordenboek van Nederland.
Bruce Medal page
Awarding of Bruce Medal: PASP 43 (1931) 125
Awarding of RAS gold medal: MNRAS 91 (1931) 422
de Sitter's binary star arguments against Ritz's relativity theory (1913) (four articles)
Obituaries
AN 253 (1934) 495/496 (one line)
JRASC 29 (1935) 1
MNRAS 95 (1935) 343
Obs 58 (1935) 22
PASP 46 (1934) 368 (one paragr
|
https://en.wikipedia.org/wiki/Hurewicz%20theorem
|
In mathematics, the Hurewicz theorem is a basic result of algebraic topology, connecting homotopy theory with homology theory via a map known as the Hurewicz homomorphism. The theorem is named after Witold Hurewicz, and generalizes earlier results of Henri Poincaré.
Statement of the theorems
The Hurewicz theorems are a key link between homotopy groups and homology groups.
Absolute version
For any path-connected space X and positive integer n there exists a group homomorphism
h_* : π_n(X) → H_n(X),
called the Hurewicz homomorphism, from the n-th homotopy group to the n-th homology group (with integer coefficients). It is given in the following way: choose a canonical generator u_n ∈ H_n(S^n); then a homotopy class of maps f : S^n → X is taken to f_*(u_n) ∈ H_n(X).
The Hurewicz theorem states cases in which the Hurewicz homomorphism is an isomorphism.
For n ≥ 2, if X is (n−1)-connected (that is: π_i(X) = 0 for all i < n), then the reduced homology groups H̃_i(X) vanish for all i < n, and the Hurewicz map h_* : π_n(X) → H_n(X) is an isomorphism. This implies, in particular, that the homological connectivity equals the homotopical connectivity when the latter is at least 1. In addition, the Hurewicz map h_* : π_{n+1}(X) → H_{n+1}(X) is an epimorphism in this case.
For n = 1, the Hurewicz homomorphism induces an isomorphism between the abelianization π_1(X)/[π_1(X), π_1(X)] of the first homotopy group (the fundamental group) and the first homology group H_1(X).
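A standard worked instance (a routine consequence of the absolute version, not drawn from the article text) is the n-sphere, written here in LaTeX:

```latex
% For n >= 2 the sphere S^n is (n-1)-connected, so the absolute
% Hurewicz theorem gives an isomorphism
\pi_n(S^n) \xrightarrow{\;h_*\;} H_n(S^n) \cong \mathbb{Z},
% under which a map f : S^n -> S^n corresponds to its degree.
```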
Relative version
For any pair of spaces (X, A) and integer n > 1 there exists a homomorphism
h_* : π_n(X, A) → H_n(X, A)
from relative homotopy groups to relative homology groups. The Relative Hurewicz Theorem states that if both X and A are connected and the pair (X, A) is (n−1)-connected, then H_k(X, A) = 0 for k < n and H_n(X, A) is obtained from π_n(X, A) by factoring out the action of π_1(A). This can be proved, for example, by induction, proving in turn the absolute version and the Homotopy Addition Lemma.
This relative Hurewicz theorem can be reformulated as a statement about the morphism
π_n(X, A) → π_n(X ∪ CA),
where CA denotes the cone on A. This statement is a special case of a homotopical excision theorem, involving induced modules for n > 2 (crossed modules if n = 2), which itself is deduced from a higher homotopy van Kampen theorem for relative homotopy groups, whose proof requires development of techniques of a cubical higher homotopy groupoid of a filtered space.
Triadic version
For any triad of spaces (X; A, B) (i.e., a space X and subspaces A, B) and integer n > 2 there exists a homomorphism
h_* : π_n(X; A, B) → H_n(X; A, B)
from triad homotopy groups to triad homology groups. Note that the triad homology H_n(X; A, B) may be defined as H_n(X ∪ CA ∪ CB).
The Triadic Hurewicz Theorem states that if X, A, B, and C = A ∩ B are connected, the pairs (A, C) and (B, C) are (p−1)-connected and (q−1)-connected, respectively, and the triad (X; A, B) is (p+q−2)-connected, then H_n(X; A, B) = 0 for n < p+q−2 and H_{p+q−2}(X; A, B) is obtained from π_{p+q−2}(X; A, B) by factoring out the action of π_1(A ∩ B) and the generalised Whitehead products. The proof of this theorem uses a higher homotopy van Kampen type theorem for triadic homotopy groups, which requires a notion of the fundamental cat^n-group of an n-cube of spaces.
Simplicial set version
The Hurewicz theorem for topological spaces can also be stated for n-connected simplicial sets satisfying the Kan condition.
Rational Hurewicz theorem
Rational Hurewicz theorem: Let X be a simply connected topological space with π_i(X) ⊗ Q = 0 for i ≤ r. Then the Hurewicz map induces an isomorphism π_i(X) ⊗ Q → H_i(X; Q) for 1 ≤ i ≤ 2r and a surjection for i = 2r + 1.
|
https://en.wikipedia.org/wiki/Even%20and%20odd%20functions
|
In mathematics, even functions and odd functions are functions which satisfy particular symmetry relations, with respect to taking additive inverses. They are important in many areas of mathematical analysis, especially the theory of power series and Fourier series. They are named for the parity of the powers of the power functions which satisfy each condition: the function f(x) = x^n is an even function if n is an even integer, and it is an odd function if n is an odd integer.
Definition and examples
Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on.
The given examples are real functions, to illustrate the symmetry of their graphs.
Even functions
Let f be a real-valued function of a real variable. Then f is even if the following equation holds for all x such that x and −x are in the domain of f:
f(x) = f(−x)
or equivalently if the following equation holds for all such x:
f(x) − f(−x) = 0
Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis.
Examples of even functions are:
The absolute value
cosine
hyperbolic cosine
Gaussian function
Odd functions
Again, let f be a real-valued function of a real variable. Then f is odd if the following equation holds for all x such that x and −x are in the domain of f:
−f(x) = f(−x)
or equivalently if the following equation holds for all such x:
f(x) + f(−x) = 0
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin.
Examples of odd functions are:
The sign function
The identity function
sine
hyperbolic sine
The error function
Basic properties
Uniqueness
If a function is both even and odd, it is equal to 0 everywhere it is defined.
If a function is odd, the absolute value of that function is an even function.
Addition and subtraction
The sum of two even functions is even.
The sum of two odd functions is odd.
The difference between two odd functions is odd.
The difference between two even functions is even.
The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain.
Multiplication and division
The product of two even functions is an even function.
That implies that product of any number of even functions is an even function as well.
The product of two odd functions is an even function.
The product of an even function and an odd function is an odd function.
The quotient of two even functions is an even function.
The quotient of two odd functions is an even function.
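These parity rules are easy to spot-check numerically; a small sketch (the function choices are arbitrary):

```python
import numpy as np

xs = np.linspace(-3, 3, 601)

def is_even(f): return np.allclose(f(xs), f(-xs))
def is_odd(f):  return np.allclose(f(xs), -f(-xs))

# sin is odd, cos is even; their product should be odd,
# and the product of two odd functions should be even.
print(is_odd(np.sin), is_even(np.cos))                   # True True
print(is_odd(lambda x: np.sin(x) * np.cos(x)))           # True
print(is_even(lambda x: np.sin(x) * np.sinh(x)))         # True
```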
|
https://en.wikipedia.org/wiki/Current%20Population%20Survey
|
The Current Population Survey (CPS) is a monthly survey of about 60,000 U.S. households conducted by the United States Census Bureau for the Bureau of Labor Statistics (BLS). The BLS uses the data to publish reports early each month called the Employment Situation. This report provides estimates of the unemployment rate and the numbers of employed and unemployed people in the United States based on the CPS. A readable Employment Situation Summary is provided monthly. Annual estimates include employment and unemployment in large metropolitan areas. Researchers can use some CPS microdata to investigate these or other topics.
The survey asks about the employment status of each member of the household 15 years of age or older as of a particular calendar week. Based on responses to questions on work and job search activities, each person 16 years and over in a sample household is classified as employed, unemployed, or not in the labor force.
The CPS began in 1940, and responsibility for conducting the CPS was given to the Census Bureau in 1942. In 1994 the CPS was redesigned. CPS is a survey that is: employment-focused, enumerator-conducted, continuous, and cross-sectional. The BLS increased the sample size by 10,000 as of July 2001. The sample represents the civilian noninstitutional population.
Methodology
Approximately 60,000 households are eligible for the CPS. Sample households are selected by a multistage stratified statistical sampling scheme. A household is interviewed for 4 successive months, then not interviewed for 8 months, then returned to the sample for 4 months after that. An adult member of each household provides information for all members of the household.
As part of the demographic sample survey redesign, the CPS is redesigned once a decade, after the decennial census. The most recent CPS sample redesign began in April 2014.
Respondents are generally asked about their employment as of the week of the month that includes the 12th. To avoid holidays, this reference week is sometimes adjusted. All respondents are asked about the same week.
Employment classification
People are classified as employed if they did any work at all as paid employees during the reference week; worked in their own business, profession, or on their own farm; or worked without pay at least 15 hours in a family business or farm. People are also counted as employed if they were temporarily absent from their jobs because of illness, bad weather, vacation, labor-management disputes, or personal reasons.
People are classified as unemployed if they meet all of the following criteria:
They were not employed during the reference week
They were available for work at that time
They made specific efforts to find employment during the 4-week period ending with the reference week. (The exception to this category covers persons laid off from a job and expecting recall)
The unemployment data derived from the household survey neither relate to nor depend on the eligibility for, or receipt of, unemployment insurance benefits.
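A toy encoding of the classification rules stated above; the dictionary keys are illustrative and are not actual CPS variable names:

```python
def classify(person):
    """Toy classification following the criteria stated above.
    Field names are illustrative, not actual CPS variable names."""
    employed = (person["worked_for_pay"] or person["own_business_or_farm"]
                or person["unpaid_family_hours"] >= 15
                or person["temporarily_absent_from_job"])
    if employed:
        return "employed"
    if person["available_for_work"] and (person["searched_past_4_weeks"]
                                         or person["on_layoff_expecting_recall"]):
        return "unemployed"
    return "not in labor force"

print(classify({"worked_for_pay": False, "own_business_or_farm": False,
                "unpaid_family_hours": 0, "temporarily_absent_from_job": False,
                "available_for_work": True, "searched_past_4_weeks": True,
                "on_layoff_expecting_recall": False}))   # unemployed
```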
|
https://en.wikipedia.org/wiki/Hypocycloid
|
In geometry, a hypocycloid is a special plane curve generated by the trace of a fixed point on a small circle that rolls within a larger circle. As the radius of the larger circle is increased, the hypocycloid becomes more like the cycloid created by rolling a circle on a line.
History
The 2-cusped hypocycloid called Tusi couple was first described by the 13th-century Persian astronomer and mathematician Nasir al-Din al-Tusi in Tahrir al-Majisti (Commentary on the Almagest). German painter and German Renaissance theorist Albrecht Dürer described epitrochoids in 1525, and later Roemer and Bernoulli concentrated on some specific hypocycloids, like the astroid, in 1674 and 1691, respectively.
Properties
If the smaller circle has radius r, and the larger circle has radius R = kr, then the parametric equations for the curve can be given by either:
x(θ) = (R − r) cos θ + r cos(((R − r)/r) θ)
y(θ) = (R − r) sin θ − r sin(((R − r)/r) θ)
or:
x(θ) = r(k − 1) cos θ + r cos((k − 1) θ)
y(θ) = r(k − 1) sin θ − r sin((k − 1) θ)
If k is an integer, then the curve is closed, and has k cusps (i.e., sharp corners, where the curve is not differentiable). In particular, for k = 2 the curve is a straight line (a diameter of the larger circle) and the circles are called Cardano circles. Girolamo Cardano was the first to describe these hypocycloids and their applications to high-speed printing.
If k is a rational number, say k = p/q expressed in simplest terms, then the curve has p cusps.
If k is an irrational number, then the curve never closes, and fills the space between the larger circle and a circle of radius R − 2r.
Each hypocycloid (for any value of r) is a brachistochrone for the gravitational potential inside a homogeneous sphere of radius R.
The area enclosed by a hypocycloid is given by:
A = ((k − 1)(k − 2) / k²) πR² = π (R − r)(R − 2r)
The arc length of a hypocycloid is given by:
s = (8(k − 1)/k) R = 8r(k − 1)
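The parametric form and the area formula above can be cross-checked numerically; a short sketch (the sampling density and the astroid example are arbitrary choices):

```python
import numpy as np

def hypocycloid(R, r, num=20000):
    """Points of the hypocycloid traced inside a circle of radius R
    by a rolling circle of radius r (closed curve when R/r is an integer)."""
    k = R / r
    theta = np.linspace(0, 2 * np.pi, num)
    x = r * (k - 1) * np.cos(theta) + r * np.cos((k - 1) * theta)
    y = r * (k - 1) * np.sin(theta) - r * np.sin((k - 1) * theta)
    return x, y

# Check the enclosed-area formula pi*(R - r)*(R - 2r) for the
# astroid (R = 4, r = 1) using the shoelace formula.
x, y = hypocycloid(4.0, 1.0)
area = 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))
print(area, np.pi * 3.0 * 2.0)   # both ≈ 6*pi ≈ 18.85
```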
Examples
The hypocycloid is a special kind of hypotrochoid, which is a particular kind of roulette.
A hypocycloid with three cusps is known as a deltoid.
A hypocycloid curve with four cusps is known as an astroid.
The hypocycloid with two "cusps" is a degenerate but still very interesting case, known as the Tusi couple.
Relationship to group theory
Any hypocycloid with an integral value of k, and thus k cusps, can move snugly inside another hypocycloid with k+1 cusps, such that the points of the smaller hypocycloid will always be in contact with the larger. This motion looks like 'rolling', though it is not technically rolling in the sense of classical mechanics, since it involves slipping.
Hypocycloid shapes can be related to special unitary groups, denoted SU(k), which consist of k × k unitary matrices with determinant 1. For example, the allowed values of the sum of diagonal entries for a matrix in SU(3), are precisely the points in the complex plane lying inside a hypocycloid of three cusps (a deltoid). Likewise, summing the diagonal entries of SU(4) matrices gives points inside an astroid, and so on.
Thanks to this result, one can use the fact that SU(k) fits inside SU(k+1) as a subgroup to prove that a hypocycloid with k cusps moves snugly inside one with k+1 cusps.
Derived curves
The evolute of a hypocycloid is an enlarged version of the hypocycloid itself.
|
https://en.wikipedia.org/wiki/Hellinger%E2%80%93Toeplitz%20theorem
|
In functional analysis, a branch of mathematics, the Hellinger–Toeplitz theorem states that an everywhere-defined symmetric operator on a Hilbert space with inner product ⟨· , ·⟩ is bounded. By definition, an operator A is symmetric if
⟨Ax, y⟩ = ⟨x, Ay⟩
for all x, y in the domain of A. Note that symmetric everywhere-defined operators are necessarily self-adjoint, so this theorem can also be stated as follows: an everywhere-defined self-adjoint operator is bounded. The theorem is named after Ernst David Hellinger and Otto Toeplitz.
This theorem can be viewed as an immediate corollary of the closed graph theorem, as self-adjoint operators are closed. Alternatively, it can be argued using the uniform boundedness principle. One relies on the symmetric assumption, therefore the inner product structure, in proving the theorem. Also crucial is the fact that the given operator A is defined everywhere (and, in turn, the completeness of Hilbert spaces).
The Hellinger–Toeplitz theorem reveals certain technical difficulties in the mathematical formulation of quantum mechanics. Observables in quantum mechanics correspond to self-adjoint operators on some Hilbert space, but some observables (like energy) are unbounded. By Hellinger–Toeplitz, such operators cannot be everywhere defined (but they may be defined on a dense subset). Take for instance the quantum harmonic oscillator. Here the Hilbert space is L2(R), the space of square integrable functions on R, and the energy operator H is defined by (assuming the units are chosen such that ℏ = m = ω = 1)
[Hf](x) = −(1/2) f″(x) + (1/2) x² f(x).
This operator is self-adjoint and unbounded (its eigenvalues are 1/2, 3/2, 5/2, ...), so it cannot be defined on the whole of L2(R).
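The unboundedness is visible in finite truncations: in the Hermite-function basis H is diagonal with entries n + 1/2, so the norm of the truncated operator grows without bound. A small numeric sketch:

```python
import numpy as np

# In the Hermite-function basis, the oscillator Hamiltonian is diagonal
# with entries n + 1/2. Its operator norm on the span of the first N
# basis vectors grows without bound, illustrating why H cannot be a
# bounded (hence everywhere-defined) operator.
for N in (10, 100, 1000):
    H = np.diag(np.arange(N) + 0.5)
    print(N, np.linalg.norm(H, 2))   # 9.5, 99.5, 999.5
```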
References
Reed, Michael and Simon, Barry: Methods of Mathematical Physics, Volume 1: Functional Analysis. Academic Press, 1980. See Section III.5.
Theorems in functional analysis
Hilbert spaces
|
https://en.wikipedia.org/wiki/Liisi%20Oterma
|
Liisi Oterma (; 6 January 1915 – 4 April 2001) was a Finnish astronomer, the first woman to get a Ph.D. degree in astronomy in Finland.
She studied mathematics and astronomy at the University of Turku, and soon became Yrjö Väisälä's assistant and worked on the search for minor planets. She obtained her master's degree in 1938. From 1941 to 1965, Oterma worked as an observer at the university's observatory. She obtained her PhD in 1955 with a dissertation on telescope optics. She was the first Finnish woman to obtain a PhD in astronomy.
In 1959, Oterma became a docent of astronomy and from 1965 to 1978 a professor at the University of Turku. In 1971, she succeeded Väisälä as the director of the Tuorla Observatory. She was director of the astronomical-optical research institute at the University of Turku from 1971 to 1975.
Oterma was interested in languages and spoke German, French, Italian, Spanish, Esperanto, Hungarian, English, and even some Arabic. Her original plan was to study Sanskrit, but it was not offered at the University of Turku, so she ultimately settled on astronomy.
Oterma was quiet, modest in nature, and fearful of publicity. Anders Reiz, a professor at the Copenhagen Observatory, among others, said Oterma was “silent in eleven languages”. Oterma avoided appearing in photographs, and there are only a handful of pictures of her.
She discovered or co-discovered several comets, including periodic comets 38P/Stephan-Oterma, 39P/Oterma and 139P/Väisälä–Oterma. She is also credited by the Minor Planet Center (MPC) with the discovery of 54 minor planets between 1938 and 1953, and ranks 153rd on MPC's all-time discovery chart.
The Hildian asteroid 1529 Oterma, discovered by Finnish astronomer Yrjö Väisälä in 1938, was named in her honour.
Minor planets discovered
References
1915 births
2001 deaths
20th-century women scientists
20th-century astronomers
Discoverers of asteroids
Discoverers of comets
Finnish astronomers
Women astronomers
Astronomy-optics society
|
https://en.wikipedia.org/wiki/Point%20%28geometry%29
|
In classical Euclidean geometry, a point is a primitive notion that models an exact location in space, and has no length, width, or thickness. In modern mathematics, a point is considered as an element of some set, a point set. A space is a point set with some additional structure. An isolated point has no other neighboring points in a given subset.
Being a primitive notion means that a point cannot be defined in terms of previously defined objects. That is, a point is defined only by some properties, called axioms, that it must satisfy; for example, "there is exactly one line that passes through two different points".
Points in Euclidean geometry
Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (, ) of numbers, where the first number conventionally represents the horizontal and is often denoted by , and the second number conventionally represents the vertical and is often denoted by . This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (, , ) with the additional third number representing depth and often denoted by . Further generalizations are represented by an ordered tuplet of terms, where is the dimension of the space in which the point is located.
Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form
L = {(a1, a2, ..., an) : a1c1 + a2c2 + ... + ancn = d},
where c1 through cn and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment.
In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions.
Dimension of a point
There are several inequivalent definitions of dimension in mathematics. In all of the common definitions, a point is 0-dimensional.
Vector space dimension
The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no nonempty linearly independent subset, so its dimension is zero.
|
https://en.wikipedia.org/wiki/Unimodular%20matrix
|
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse (these are equivalent under Cramer's rule). Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution. The n × n unimodular matrices form a group called the n × n general linear group over the integers, which is denoted GLn(Z).
Examples of unimodular matrices
Unimodular matrices form a subgroup of the general linear group under matrix multiplication, i.e. the following matrices are unimodular:
Identity matrix
The inverse of a unimodular matrix
The product of two unimodular matrices
Other examples include:
Pascal matrices
Permutation matrices
the three transformation matrices in the ternary tree of primitive Pythagorean triples
Certain transformation matrices for rotation, shearing (both with determinant 1) and reflection (determinant −1).
The unimodular matrix used (possibly implicitly) in lattice reduction and in the Hermite normal form of matrices.
The Kronecker product of two unimodular matrices is also unimodular. This follows since where p and q are the dimensions of A and B, respectively.
Total unimodularity
A totally unimodular matrix (TU matrix) is a matrix for which every square non-singular submatrix is unimodular. Equivalently, every square submatrix has determinant 0, +1 or −1. A totally unimodular matrix need not be square itself. From the definition it follows that any submatrix of a totally unimodular matrix is itself totally unimodular (TU). Furthermore it follows that any TU matrix has only 0, +1 or −1 entries. The converse is not true, i.e., a matrix with only 0, +1 or −1 entries is not necessarily totally unimodular. A matrix is TU if and only if its transpose is TU.
Totally unimodular matrices are extremely important in polyhedral combinatorics and combinatorial optimization since they give a quick way to verify that a linear program is integral (has an integral optimum, when any optimum exists). Specifically, if A is TU and b is integral, then linear programs of forms like {min c^T x | Ax ≥ b, x ≥ 0} or {max c^T x | Ax ≤ b} have integral optima, for any c. Hence if A is totally unimodular and b is integral, every extreme point of the feasible region (e.g. {x | Ax ≤ b}) is integral and thus the feasible region is an integral polyhedron.
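A brute-force check of total unimodularity is feasible for small matrices and makes the definition concrete; the K_{2,2} incidence-matrix example below is illustrative only.

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A, tol=1e-9):
    """Brute-force TU test: every square submatrix has det in {-1, 0, +1}.
    Exponential in the matrix size -- fine only for small examples."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(abs(d) - 1)) > tol:
                    return False
    return True

# Unoriented incidence matrix of the bipartite graph K_{2,2}: TU, as stated above.
K22 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1]], dtype=float)
print(is_totally_unimodular(K22))   # True
```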
Common totally unimodular matrices
1. The unoriented incidence matrix of a bipartite graph, which is the coefficient matrix for bipartite matching, is totally unimodular (TU). (The unoriented incidence matrix of a non-bipartite graph is not TU.) More generally, in the appendix to a paper by Heller and Tompkins, A.J. Hoffman and D. Gale prove the following. Let A be an m by n matrix whose rows can be partitioned into two disjoint sets B and C. Then the following four conditions together are sufficient for A to be totally unimodular:
Every entry in A is 0, +1, or −1;
Every column of A contains at most two non-zero entries;
|
https://en.wikipedia.org/wiki/Indeterminate
|
Indeterminate may refer to:
In mathematics
Indeterminate (variable), a symbol that is treated as a variable
Indeterminate system, a system of simultaneous equations that has more than one solution
Indeterminate equation, an equation that has more than one solution
Indeterminate form, an algebraic expression with certain limiting behaviour in mathematical analysis
Other
Indeterminate growth, a term in biology and especially botany
Indeterminacy (philosophy), describing the shortcomings of definition in philosophy
Indeterminacy (music), music for which the composition or performance is determined by chance
Statically indeterminate, in statics, describing a structure for which the static equilibrium equations are insufficient for determining the internal forces
See also
Indeterminacy (disambiguation)
|
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Finland
|
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Finland (FI), the three levels are:
NUTS codes
2013 version.
In the 2003 version, Satakunta was coded FI191, and Pirkanmaa was coded FI192.
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
See also
List of Finnish regions by Human Development Index
Subdivisions of Finland
ISO 3166-2 codes of Finland
FIPS region codes of Finland
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
SUOMI / FINLAND - NUTS level 2
SUOMI / FINLAND - NUTS level 3
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Provinces of Finland, Statoids.com
References
Finland
Nuts
|
https://en.wikipedia.org/wiki/Propagation%20of%20uncertainty
|
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function.
The uncertainty u can be expressed in a number of ways.
It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error Δx/x, which is usually written as a percentage.
Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval x ± σ.
However, the most general way of characterizing uncertainty is by specifying its probability distribution.
If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.
If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.
In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and to infer the resulting quantity's probability distribution/statistics, are sampling techniques from the Monte Carlo method family. For very large data sets or complex functions, the calculation of the error propagation may be very expensive, so that a surrogate model or a parallel computing strategy may be necessary.
In some particular cases, the uncertainty propagation calculation can be done through simple algebraic procedures. Some of these scenarios are described below.
Linear combinations
Let {f_k(x1, x2, ..., xn)} be a set of m functions, which are linear combinations of n variables x1, x2, ..., xn with combination coefficients A_k1, A_k2, ..., A_kn (k = 1, ..., m):
f_k = ∑_i A_ki x_i
or in matrix notation,
f = A x.
Also let the variance–covariance matrix of x = (x1, ..., xn) be denoted by Σ^x and let the mean value be denoted by μ:
Σ^x = E[(x − μ) ⊗ (x − μ)],
where (x − μ) ⊗ (x − μ) is the outer product.
Then, the variance–covariance matrix of f is given by
Σ^f = A Σ^x A^T.
In component notation, the equation reads
Σ^f_ij = ∑_k ∑_l A_ik Σ^x_kl A_jl.
This is the most general expression for the propagation of error from one set of variables onto another. When the errors on x are uncorrelated, the general expression simplifies to
Σ^f_ij = ∑_k A_ik σ²_k A_jk,
where σ²_k = Σ^x_kk is the variance of the k-th element of the x vector.
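A quick sketch comparing the exact rule Σ^f = A Σ^x A^T with a Monte Carlo estimate (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear map f = A x for correlated inputs x with covariance Sigma_x.
A = np.array([[1.0, 2.0],
              [0.5, -1.0]])
Sigma_x = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
mu_x = np.array([1.0, 2.0])

# Exact propagation for linear combinations: Sigma_f = A Sigma_x A^T.
Sigma_f = A @ Sigma_x @ A.T

# Monte Carlo cross-check.
x = rng.multivariate_normal(mu_x, Sigma_x, size=200_000)
f = x @ A.T
print(Sigma_f)
print(np.cov(f, rowvar=False))   # close to Sigma_f
```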
|
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Denmark
|
The Nomenclature of Territorial Units for Statistics (NUTS) is a geocode standard for referencing the administrative division of Denmark for statistical purposes. The standard is developed and regulated by the European Union. The NUTS standard is instrumental in delivering the European Union's Structural Funds. The NUTS code for Denmark is DK and a hierarchy of three levels is established by Eurostat. Below these is a further level of geographic organisation: the local administrative unit (LAU). In Denmark, LAU 1 units are municipalities and LAU 2 units are parishes.
Overall
NUTS codes
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
NUTS codes
Before 2003
In the 2003 version, before the counties were abolished, the codes were as follows:
See also
Administrative divisions of Denmark
FIPS region codes of Denmark
ISO 3166-2 codes of Denmark
References
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
Overview map of EU Countries - Country level
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Regions of Denmark, Statoids.com
Denmark
Nuts
|
https://en.wikipedia.org/wiki/QR%20algorithm
|
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
The practical QR algorithm
Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition Ak = QkRk, where Qk is an orthogonal matrix (i.e., QkT = Qk−1) and Rk is an upper triangular matrix. We then form Ak+1 = RkQk. Note that
Ak+1 = RkQk = QkTQkRkQk = QkTAkQk = Qk−1AkQk,
so all the Ak are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
Under certain conditions, the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.
In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix to upper Hessenberg form (which costs O(n³) arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs O(n²) arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below each diagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the Ak. This procedure costs O(n³) arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs only O(n) operations.
The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
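A bare-bones, unshifted version of the iteration is easy to write with a standard linear-algebra library; this sketch omits the Hessenberg reduction and shifting that a practical implementation would use:

```python
import numpy as np

def qr_algorithm(A, iterations=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k.
    A practical routine would first reduce A to Hessenberg form and shift."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(np.diag(qr_algorithm(A))))   # converged eigenvalue estimates
print(np.sort(np.linalg.eigvalsh(A)))      # reference values
```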
Visualization
The basic QR algorithm can be visualized in the case where A is a positive-definite symmetric matrix. In that case, A can be depicted as an ellipse in 2 dimensions or an ellipsoid in higher dimensions. The relationship between the i
|
https://en.wikipedia.org/wiki/Lucas%20chain
|
In mathematics, a Lucas chain is a restricted type of addition chain, named for the French mathematician Édouard Lucas. It is a sequence
a0, a1, a2, a3, ...
that satisfies
a0=1,
and
for each k > 0: ak = ai + aj, and either ai = aj or |ai − aj| = am, for some i, j, m < k.
The sequence of powers of 2 (1, 2, 4, 8, 16, ...) and the Fibonacci sequence (with a slight adjustment of the starting point 1, 2, 3, 5, 8, ...) are simple examples of Lucas chains.
Lucas chains were introduced by Peter Montgomery in 1983. If L(n) is the length of the shortest Lucas chain for n, then Kutz has shown that most n do not have L(n) < (1 − ε) log_φ n, where φ is the golden ratio.
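A direct check of the defining condition, with the two example chains from above:

```python
def is_lucas_chain(seq):
    """Check the Lucas-chain condition: a_0 = 1 and every later term equals
    a_i + a_j for earlier terms with a_i = a_j or |a_i - a_j| in the chain."""
    if not seq or seq[0] != 1:
        return False
    for k in range(1, len(seq)):
        prev = seq[:k]
        ok = any(seq[k] == ai + aj and (ai == aj or abs(ai - aj) in prev)
                 for ai in prev for aj in prev)
        if not ok:
            return False
    return True

print(is_lucas_chain([1, 2, 4, 8, 16]))   # powers of two: True
print(is_lucas_chain([1, 2, 3, 5, 8]))    # Fibonacci-style chain: True
```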
References
Integer sequences
Addition chains
|
https://en.wikipedia.org/wiki/Compactly%20generated%20group
|
In mathematics, a compactly generated (topological) group is a topological group G which is algebraically generated by one of its compact subsets. This should not be confused with the unrelated notion (widely used in algebraic topology) of a compactly generated space, one whose topology is generated (in a suitable sense) by its compact subspaces.
Definition
A topological group G is said to be compactly generated if there exists a compact subset K of G such that G = ⟨K⟩, the subgroup generated by K.
So if K is symmetric, i.e. K = K−1, then
G = ⋃_{n=1}^∞ K^n.
Locally compact case
This property is interesting in the case of locally compact topological groups, since locally compact compactly generated topological groups can be approximated by locally compact, separable metric factor groups of G. More precisely, for a sequence
Un
of open identity neighborhoods, there exists a normal subgroup N contained in the intersection of that sequence, such that
G/N
is locally compact metric separable (the Kakutani-Kodaira-Montgomery-Zippin theorem).
References
Topological groups
|
https://en.wikipedia.org/wiki/Compactly%20generated
|
In mathematics, compactly generated can refer to:
Compactly generated group, a topological group which is algebraically generated by one of its compact subsets
Compactly generated space, a topological space whose topology is coherent with the family of all compact subspaces
Mathematics disambiguation pages
|
https://en.wikipedia.org/wiki/John%20Allen%20Paulos
|
John Allen Paulos (born July 4, 1945) is an American professor of mathematics at Temple University in Philadelphia, Pennsylvania. He has gained fame as a writer and speaker on mathematics and the importance of mathematical literacy. Paulos writes about many subjects, especially of the dangers of mathematical innumeracy; that is, the layperson's misconceptions about numbers, probability, and logic.
Early life
Paulos was born in Denver, Colorado and grew up in Chicago, Illinois and Milwaukee, Wisconsin, where he attended high school. After his Bachelor of Mathematics at the University of Wisconsin (1967) and his Master of Science at the University of Washington (1968), he received his PhD in mathematics from the University of Wisconsin–Madison (1974). In an interview he described himself as a lifelong skeptic. He volunteered for the Peace Corps in the seventies.
Career
His academic work is mainly in mathematical logic and probability theory.
His book Innumeracy: Mathematical Illiteracy and its Consequences (1988) was a bestseller and A Mathematician Reads the Newspaper (1995) extended the critique. In his books Paulos discusses innumeracy with quirky anecdotes, scenarios and facts, encouraging readers in the end to look at their world in a more quantitative way.
He has also written on other subjects often "combining disparate disciplines", such as the mathematical and philosophical basis of humor in Mathematics and Humor and I Think, Therefore I Laugh, the stock market in A Mathematician Plays the Stock Market, quantitative aspects of narrative in Once Upon a Number, the arguments for God in Irreligion, and most recently "bringing mathematics to bear on...biography" in A Numerate Life.
Paulos also wrote a mathematics-tinged column for the UK newspaper The Guardian and is a Committee for Skeptical Inquiry fellow.
Paulos has appeared frequently on radio and television, including a four-part BBC adaptation of A Mathematician Reads the Newspaper and appearances on the Lehrer News Hour, 20/20, Larry King, and David Letterman.
In 2001 Paulos taught a course on quantitative literacy for journalists at the Columbia University School of Journalism. The course stimulated further programs at Columbia and elsewhere in precision and data-driven journalism.
His long-running "ABCNews.com" monthly column Who's Counting deals with mathematical aspects of stories in the news. All the columns over a 10- year period are archived here.
He is married, a father of two, and a grandfather of four.
Paulos tweets frequently at @JohnAllenPaulos.
Awards
Paulos received the 2013 JPBM (Joint Policy Board for Mathematics) Award for Communicating Mathematics on a Sustained Basis to Large Audiences.
Paulos received the 2003 AAAS (American Association for the Advancement of Science) Award for Promoting the Public Understanding of Science and Technology.
In 2002 he received the University Creativity Award at Temple University.
Paulos' article "Counting on Dyscalculia," which appeared in
|
https://en.wikipedia.org/wiki/Functional%20equation%20%28L-function%29
|
In mathematics, the L-functions of number theory are expected to have several characteristic properties, one of which is that they satisfy certain functional equations. There is an elaborate theory of what these equations should be, much of which is still conjectural.
Introduction
As a prototypical example, the Riemann zeta function has a functional equation relating its value at the complex number s with its value at 1 − s. In every case this relates to some value ζ(s) that is only defined by analytic continuation from the infinite series definition. That is, writing σ (as is conventional) for the real part of s, the functional equation relates the cases
σ > 1 and σ < 0,
and also changes a case with
0 < σ < 1
in the critical strip to another such case, reflected in the line σ = ½. Use of the functional equation is therefore basic in the study of the zeta-function in the whole complex plane.
The functional equation in question for the Riemann zeta function takes the simple form

Z(s) = Z(1 − s),
where Z(s) is ζ(s) multiplied by a gamma-factor, involving the gamma function. This is now read as an 'extra' factor in the Euler product for the zeta-function, corresponding to the infinite prime. Just the same shape of functional equation holds for the Dedekind zeta function of a number field K, with an appropriate gamma-factor that depends only on the embeddings of K (in algebraic terms, on the tensor product of K with the real field).
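This symmetry is easy to observe numerically. A quick sanity check with the mpmath library (the completed function Z below follows the gamma-factor just described; the sample point is an arbitrary choice of ours):

from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30  # 30 decimal digits of working precision

def Z(s):
    # Completed zeta: zeta(s) times the gamma-factor pi^(-s/2) * Gamma(s/2)
    return pi ** (-s / 2) * gamma(s / 2) * zeta(s)

s = mpc('0.3', '14.1')   # an arbitrary point inside the critical strip
print(Z(s))
print(Z(1 - s))          # agrees with Z(s) up to rounding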
There is a similar equation for the Dirichlet L-functions, but this time relating them in pairs:

Λ(s, χ) = ε Λ(1 − s, χ*),

with χ a primitive Dirichlet character, χ* its complex conjugate, Λ the L-function multiplied by a gamma-factor, and ε a complex number of absolute value 1, of shape

ε = G(χ) / |G(χ)|,
where G(χ) is a Gauss sum formed from χ. This equation has the same function on both sides if and only if χ is a real character, taking values in {0, 1, −1}. Then ε must be 1 or −1, and the case of the value −1 would imply a zero of Λ(s) at s = ½. According to the theory (of Gauss, in effect) of Gauss sums, the value is always 1, so no such simple zero can exist (the function is even about the point s = ½).
Theory of functional equations
A unified theory of such functional equations was given by Erich Hecke, and the theory was taken up again in Tate's thesis by John Tate. Hecke found generalised characters of number fields, now called Hecke characters, for which his proof (based on theta functions) also worked. These characters and their associated L-functions are now understood to be strictly related to complex multiplication, as the Dirichlet characters are to cyclotomic fields.
There are also functional equations for the local zeta-functions, arising at a fundamental level for the (analogue of) Poincaré duality in étale cohomology. The Euler products of the Hasse–Weil zeta-function for an algebraic variety V over a number field K, formed by reducing modulo prime ideals to get local zeta-functions, are conjectured to have a global functional equation; but this is currently
|
https://en.wikipedia.org/wiki/Tschirnhaus%20transformation
|
In mathematics, a Tschirnhaus transformation, also known as Tschirnhausen transformation, is a type of mapping on polynomials developed by Ehrenfried Walther von Tschirnhaus in 1683.
Simply, it is a method for transforming a polynomial equation of degree n ≥ 2 with some nonzero intermediate coefficients a_1, ..., a_{n−1}, such that some or all of the transformed intermediate coefficients a′_1, ..., a′_{n−1} are exactly zero.
For example, finding a substitution

y(x) = k_1 x^2 + k_2 x + k_3

for a cubic equation of degree n = 3,

f(x) = x^3 + a_2 x^2 + a_1 x + a_0,

such that substituting x = x(y) yields a new equation

f′(y) = y^3 + a′_2 y^2 + a′_1 y + a′_0

such that a′_1 = 0, a′_2 = 0, or both.
More generally, it may be defined conveniently by means of field theory, as the transformation on minimal polynomials implied by a different choice of primitive element. This is the most general transformation of an irreducible polynomial that takes a root to some rational function applied to that root.
Definition
For a generic degree n reducible monic polynomial equation of the form f(x) = 0, where g(x) and h(x) are polynomials and h(x) does not vanish at a root of f(x) = 0, the Tschirnhaus transformation is the function:

y = g(x) / h(x),

such that the new equation in y, f′(y) = 0, has certain special properties, most commonly such that some coefficients, a′_i, are identically zero.
Example: Tschirnhaus' method for cubic equations
In Tschirnhaus' 1683 paper, he solved the cubic equation

x^3 + p x^2 + q x + r = 0

using the Tschirnhaus transformation y = x + a, i.e. x = y − a. Substituting yields the transformed equation

y^3 + (p − 3a) y^2 + (3a^2 − 2pa + q) y + (pa^2 − a^3 − qa + r) = 0.

Setting p − 3a = 0 yields a = p/3, and finally the Tschirnhaus transformation

y = x + p/3,

which may be substituted into the expansion above to yield an equation of the depressed form

y^3 + (q − p^2/3) y + (r − pq/3 + 2p^3/27) = 0.
Tschirnhaus went on to describe how a quadratic Tschirnhaus transformation of the form

y = x^2 + αx + β

may be used to eliminate two coefficients in a similar way.
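The elimination above can be verified symbolically. A short SymPy sketch of the cubic case (variable names are ours):

import sympy as sp

x, y, p, q, r = sp.symbols('x y p q r')

cubic = x**3 + p*x**2 + q*x + r
# The transformation y = x + p/3 means substituting x = y - p/3:
depressed = sp.expand(cubic.subs(x, y - p/3))

print(depressed.coeff(y, 2))     # 0: the quadratic term is gone
print(sp.collect(depressed, y))
# equals y**3 + (q - p**2/3)*y + (2*p**3/27 - p*q/3 + r)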
Generalization
In detail, let K be a field, and P(t) a polynomial over K. If P(t) is irreducible, then the quotient ring of the polynomial ring K[t] by the principal ideal generated by P(t),

L = K[t] / (P(t)),

is a field extension of K. We have

L = K(α),

where α is t modulo (P(t)). That is, any element of L is a polynomial in α, which is thus a primitive element of L. There will be other choices β of primitive element in L: for any such choice of β we will have by definition:

β = F(α), α = G(β),

with polynomials F and G over K. Now if Q(t) is the minimal polynomial for β over K, we can call Q(t) a Tschirnhaus transformation of P(t).
Therefore the set of all Tschirnhaus transformations of an irreducible polynomial is to be described as running over all ways of changing the primitive element α, but leaving the extension L the same. This concept is used in reducing quintics to Bring–Jerrard form, for example. There is a connection with Galois theory, when L is a Galois extension of K. The Galois group may then be considered as all the Tschirnhaus transformations of P(t) to itself.
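Computationally, the minimal polynomial Q of β = F(α) can be produced as a resultant, eliminating t between P(t) = 0 and y − F(t) = 0. A SymPy sketch (the example P and F are our illustrative choices):

import sympy as sp

t, y = sp.symbols('t y')

P = t**3 - 2        # minimal polynomial of alpha = 2**(1/3) over Q
F = t**2 + t        # a new primitive element beta = F(alpha)

# Res_t(P(t), y - F(t)) is the monic cubic whose roots are F applied
# to the roots of P -- a Tschirnhaus transformation of P:
Q = sp.resultant(P, y - F, t)
print(sp.expand(Q))  # y**3 - 6*y - 6, irreducible by Eisenstein at 2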
History
In 1683, Ehrenfried Walther von Tschirnhaus published a method for rewriting a polynomial of degree n > 2 such that the x^{n−1} and x^{n−2} terms have zero coefficients. In his paper, Tschirnhaus referenced a method by Descartes to reduce a quadratic polynomial such that the x term has zero coefficient.
In 1786, this work was expanded by E. S. Bring who showed that any generic quintic polynomial could be similarly reduced.
In 1834, G. B. Jerr
|
https://en.wikipedia.org/wiki/Chowla%E2%80%93Mordell%20theorem
|
In mathematics, the Chowla–Mordell theorem is a result in number theory determining cases where a Gauss sum is the square root of a prime number, multiplied by a root of unity. It was proved and published independently by Sarvadaman Chowla and Louis Mordell, around 1951.
In detail, if p is a prime number, χ a nontrivial Dirichlet character modulo p, and

G(χ) = Σ_{a mod p} χ(a) ζ^a,

where ζ is a primitive p-th root of unity in the complex numbers, then

G(χ) / √p

is a root of unity if and only if χ is the quadratic residue symbol modulo p. The 'if' part was known to Gauss: the contribution of Chowla and Mordell was the 'only if' direction. The ratio in the theorem occurs in the functional equation of L-functions.
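A numerical illustration in Python (function names are ours): for the quadratic residue symbol, the ratio above comes out as exactly 1 or i, depending on p modulo 4.

import cmath, math

def legendre(a, p):
    # quadratic residue symbol (a/p) via Euler's criterion
    return 0 if a % p == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def gauss_sum(p):
    # G(chi) = sum over a mod p of chi(a) * zeta^a, chi the Legendre symbol
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(legendre(a, p) * zeta ** a for a in range(1, p))

for p in (5, 7, 11, 13):
    print(p, gauss_sum(p) / math.sqrt(p))
    # approximately 1 for p = 5, 13 (p ≡ 1 mod 4) and i for p = 7, 11 (p ≡ 3 mod 4)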
References
Gauss and Jacobi Sums by Bruce C. Berndt, Ronald J. Evans and Kenneth S. Williams, Wiley-Interscience, p. 53.
Cyclotomic fields
Zeta and L-functions
Theorems in number theory
|
https://en.wikipedia.org/wiki/Quadratic%20Gauss%20sum
|
In number theory, quadratic Gauss sums are certain finite sums of roots of unity. A quadratic Gauss sum can be interpreted as a linear combination of the values of the complex exponential function with coefficients given by a quadratic character; for a general character, one obtains a more general Gauss sum. These objects are named after Carl Friedrich Gauss, who studied them extensively and applied them to quadratic, cubic, and biquadratic reciprocity laws.
Definition
For an odd prime number p and an integer a, the quadratic Gauss sum g(a; p) is defined as

g(a; p) = Σ_{n=0}^{p−1} ζ_p^{a n^2},

where ζ_p is a primitive p-th root of unity, for example ζ_p = exp(2πi/p). Equivalently,

g(a; p) = Σ_{n=0}^{p−1} (1 + (n/p)) ζ_p^{a n}.

For a divisible by p the expression ζ_p^{a n^2} evaluates to 1 for every n. Hence, we have

g(a; p) = p.

For a not divisible by p, this expression reduces to

g(a; p) = Σ_{n=0}^{p−1} (n/p) ζ_p^{a n} = G(a, χ),

where

G(a, χ) = Σ_{n=0}^{p−1} χ(n) ζ_p^{a n}

is the Gauss sum defined for any character χ modulo p; here χ is the Legendre symbol (·/p).
Properties
The value of the Gauss sum is an algebraic integer in the p-th cyclotomic field Q(ζ_p).
The evaluation of the Gauss sum for an integer a not divisible by a prime p can be reduced to the case a = 1:

g(a; p) = (a/p) g(1; p).
The exact value of the Gauss sum for a = 1 is given by the formula:

g(1; p) = √p if p ≡ 1 (mod 4), and g(1; p) = i√p if p ≡ 3 (mod 4).
Remark
In fact, the identity

g(1; p)^2 = (−1/p) · p

was easy to prove and led to one of Gauss's proofs of quadratic reciprocity. However, the determination of the sign of the Gauss sum turned out to be considerably more difficult: Gauss could only establish it after several years' work. Later, Dirichlet, Kronecker, Schur and other mathematicians found different proofs.
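These evaluations are easy to confirm numerically; a short Python sketch (function names are ours):

import cmath, math

def g(a, p):
    # quadratic Gauss sum g(a;p) = sum_{n=0}^{p-1} zeta_p^(a n^2)
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(zeta ** (a * n * n % p) for n in range(p))

def legendre(a, p):
    return 0 if a % p == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

for p in (5, 7, 13, 19):
    # reduction to a = 1:
    assert abs(g(3, p) - legendre(3, p) * g(1, p)) < 1e-9
    # Gauss's evaluation, including the hard-won sign:
    expected = math.sqrt(p) if p % 4 == 1 else 1j * math.sqrt(p)
    assert abs(g(1, p) - expected) < 1e-9
print("all checks pass")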
Generalized quadratic Gauss sums
Let a, b, c be natural numbers. The generalized quadratic Gauss sum G(a, b, c) is defined by

G(a, b, c) = Σ_{n=0}^{c−1} e^{2πi(a n^2 + b n)/c}.

The classical quadratic Gauss sum is the sum g(a; p) = G(a, 0, p).
Properties
The Gauss sum G(a, b, c) depends only on the residue classes of a and b modulo c.
Gauss sums are multiplicative, i.e. given natural numbers a, b, c, d with gcd(c, d) = 1 one has

G(a, b, cd) = G(ac, b, d) · G(ad, b, c).

This is a direct consequence of the Chinese remainder theorem.
One has G(a, b, c) = 0 if gcd(a, c) > 1, except if gcd(a, c) divides b, in which case one has

G(a, b, c) = gcd(a, c) · G(a/gcd(a, c), b/gcd(a, c), c/gcd(a, c)).

Thus in the evaluation of quadratic Gauss sums one may always assume gcd(a, c) = 1.
Let a, b, c be integers with ac ≠ 0 and ac + b even. One has the following analogue of the quadratic reciprocity law for (even more general) Gauss sums:

Σ_{n=0}^{|c|−1} e^{πi(a n^2 + b n)/c} = |c/a|^{1/2} e^{πi(|ac| − b^2)/(4ac)} Σ_{n=0}^{|a|−1} e^{−πi(c n^2 + b n)/a}.
Define

ε_m = 1 if m ≡ 1 (mod 4), and ε_m = i if m ≡ 3 (mod 4),

for every odd integer m. The values of Gauss sums with b = 0 and gcd(a, c) = 1 are explicitly given by

G(a, 0, c) = 0 if c ≡ 2 (mod 4),
G(a, 0, c) = ε_c √c (a/c) if c is odd,
G(a, 0, c) = (1 + i) ε_a^{−1} √c (c/a) if a is odd and 4 | c.

Here (a/c) is the Jacobi symbol. This is the famous formula of Carl Friedrich Gauss.
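The multiplicativity and the explicit formula can both be spot-checked numerically. A Python sketch with a textbook Jacobi symbol implementation (function names are ours):

import cmath, math

def G(a, b, c):
    # generalized quadratic Gauss sum G(a,b,c)
    return sum(cmath.exp(2j * cmath.pi * (a * n * n + b * n) / c)
               for n in range(c))

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# multiplicativity: G(1,0,15) = G(3,0,5) * G(5,0,3)
assert abs(G(1, 0, 15) - G(3, 0, 5) * G(5, 0, 3)) < 1e-9
# vanishing for c = 2 mod 4:
assert abs(G(1, 0, 6)) < 1e-9
# Gauss's formula for odd c with gcd(a, c) = 1:
for a, c in ((2, 9), (2, 7), (3, 25), (5, 21)):
    eps = 1 if c % 4 == 1 else 1j
    assert abs(G(a, 0, c) - eps * math.sqrt(c) * jacobi(a, c)) < 1e-9
print("all checks pass")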
For b > 0 the Gauss sums can easily be computed by completing the square in most cases. This fails however in some cases (for example, c even and b odd), which can be computed relatively easily by other means. For example, if c is odd and gcd(a, c) = 1 one has

G(a, b, c) = ε_c √c (a/c) e^{−2πi ψ(a) b^2 / c},

where ψ(a) is some number with 4 ψ(a) a ≡ 1 (mod c). As another example, if 4 divides c and b is odd and as always gcd(a, c) = 1, then G(a, b, c) = 0. This can, for example, be proved as follows: because of the multiplicative property of Gauss sums we only have to show that G(a, b, 2^n) = 0 if n ≥ 2 and a, b are odd. If b is odd then a m^2 + b m is even for all m. By Hensel's lemma, for every q, the equation a m^2 + b m = q has at most two solutions in ℤ/2^n ℤ. Because of a counting argument a m^2 + b m runs through all even residue classes modulo 2^n exactly two times. The geometric sum formula then shows that G(a, b, 2^n) = 0.
If is
|
https://en.wikipedia.org/wiki/Chowla%E2%80%93Selberg%20formula
|
In mathematics, the Chowla–Selberg formula is the evaluation of a certain product of values of the gamma function at rational values in terms of values of the Dedekind eta function at imaginary quadratic irrational numbers. The result was essentially found by Mathias Lerch in 1897 and rediscovered by Sarvadaman Chowla and Atle Selberg (1949, 1967).
Statement
In logarithmic form, the Chowla–Selberg formula states that in certain cases the sum

Σ_{0 < r < D} χ(r) log Γ(r/D)

can be evaluated using the Kronecker limit formula. Here χ is the quadratic residue symbol modulo D, where −D is the discriminant of an imaginary quadratic field; the sum is taken over 0 < r < D, with the usual convention χ(r) = 0 if r and D have a common factor. In the resulting evaluation, η is the Dedekind eta function, h is the class number, and w is the number of roots of unity.
Origin and applications
The origin of such formulae is now seen to be in the theory of complex multiplication, and in particular in the theory of periods of an abelian variety of CM-type. This has led to much research and generalization. In particular there is an analog of the Chowla–Selberg formula for p-adic numbers, involving a p-adic gamma function, called the Gross–Koblitz formula.
The Chowla–Selberg formula gives a formula for a finite product of values of the eta function. By combining this with the theory of complex multiplication, one can give a formula for the individual absolute values of the eta function at imaginary quadratic irrationals, as a product of values of the gamma function multiplied by some algebraic number α.
Examples
Using the reflection formula for the gamma function gives, for example:

η(i) = Γ(1/4) / (2 π^{3/4}).
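This value is easy to confirm numerically from the q-product definition of η, using mpmath (the helper name and product truncation are ours):

from mpmath import mp, pi, gamma, exp, mpf

mp.dps = 30

def eta_at_i(terms=40):
    # Dedekind eta at tau = i: q^(1/24) * prod_{n>=1} (1 - q^n), with q = e^(-2*pi)
    q = exp(-2 * pi)
    prod = mpf(1)
    for n in range(1, terms + 1):
        prod *= 1 - q ** n
    return q ** (mpf(1) / 24) * prod

print(eta_at_i())                                   # 0.76822...
print(gamma(mpf(1) / 4) / (2 * pi ** (mpf(3) / 4)))  # the same value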
See also
Multiplication theorem
References
Theorems in number theory
Gamma and related functions
|