https://en.wikipedia.org/wiki/Collapse
|
Collapse or its variants may refer to:
Concepts
Collapse (structural)
Collapse (topology), a mathematical concept
Collapsing manifold
Collapse, the action of collapsing or telescoping objects
Collapsing user interface elements
Accordion (GUI) -- collapsing list items
Code folding -- collapsing subsections of programs or text
Outliner -- supporting folding and unfolding subsections
Ecosystem collapse or Ecological collapse
Economic collapse
Gravitational collapse creating astronomical objects
Societal collapse
Dissolution of the Soviet Union, the collapse of Soviet federalism
State collapse
Wave function collapse, in physics
Medicine and biology
In medicine, collapse can refer to various forms of transient loss of consciousness such as syncope, or loss of postural muscle tone without loss of consciousness. It can also refer to:
Circulatory collapse
Lung collapse
Hydrophobic collapse in protein folding
Art, entertainment and media
Literature
Collapse: How Societies Choose to Fail or Succeed, a book by Jared Diamond
Collapse (journal), a journal of philosophical research and development published in the United Kingdom
Film
Collapse (film), a 2009 documentary directed by Chris Smith and starring Michael Ruppert
Collapse, a 2010 documentary film based on the book Collapse: How Societies Choose to Fail or Succeed
Games
Collapse (2008 video game), an action game released in 2008 for Microsoft Windows
Collapse!, a series of puzzle games created by GameHouse, first released in 1999
Collapse, a fictional event in the computer game Dreamfall
The Collapse (Deus Ex), a fictional event within the plot of the computer game Deus Ex and its sequel Deus Ex: Invisible War
Music
Albums
Collapse (Across Five Aprils album), 2006
Collapse (Deas Vail album), 2006
Collapse EP, 2018 record by Aphex Twin
Songs
"Collapse" (Soul Coughing song), 1996
"Collapse" (Saosin song), 2006
"Collapse" (Imperative Reaction song), 2006
"Collapsed" (Aly & AJ song), 2005
See also
Cave-in, a kind of structural collapse
Disintegrate (disambiguation)
Fall (disambiguation)
Telescoping (mechanics), the action of collapsing objects
|
https://en.wikipedia.org/wiki/Spin%287%29-manifold
|
In mathematics, a Spin(7)-manifold is an eight-dimensional Riemannian manifold whose holonomy group is contained in Spin(7). Spin(7)-manifolds are Ricci-flat and admit a parallel spinor. They also admit a parallel 4-form, known as the Cayley form, which is a calibrating form for a special class of submanifolds called Cayley cycles.
History
The fact that Spin(7) might possibly arise as the holonomy group of certain Riemannian 8-manifolds was first suggested by the 1955 classification theorem of Marcel Berger, and this possibility remained consistent with the simplified proof of Berger's theorem given by Jim Simons in 1962. Although not a single example of such a manifold had yet been discovered, Edmond Bonan then showed in 1966 that,
if such a manifold did in fact exist, it would carry a parallel 4-form, and that it would necessarily be Ricci-flat. The first local examples of 8-manifolds with holonomy Spin(7) were finally constructed around 1984 by Robert Bryant, and his full proof of their existence appeared in Annals of Mathematics in 1987. Next, complete (but still noncompact) 8-manifolds with holonomy Spin(7) were explicitly constructed by Bryant and Salamon in 1989. The first examples of compact Spin(7)-manifolds were then constructed by Dominic Joyce in 1996.
See also
G2 manifold
Calabi–Yau manifold
References
Riemannian manifolds
|
https://en.wikipedia.org/wiki/Matthew%20Foreman
|
Matthew Dean Foreman is an American mathematician at the
University of California, Irvine. He has made notable contributions in set theory and in ergodic theory.
Biography
Born in Los Alamos, New Mexico, Foreman earned his Ph.D. from the
University of California, Berkeley in 1980 under Robert M. Solovay. His
dissertation title was Large Cardinals and Strong Model Theoretic Transfer Properties.
In addition to his mathematical work, Foreman is an avid sailor.
He and his family sailed their sailboat Veritas (built by C&C Yachts) from North America to Europe in 2000. From 2000 to 2008 they sailed Veritas to the Arctic, the Shetland Islands, Scotland, Ireland, England, France, Spain, North Africa and Italy.
Notable high points were Fastnet Rock, the Irish and Celtic Seas, and many passages including the
Maelstrom, Stad, Pentland Firth, Loch Ness, the Corryvreckan and the Irish Sea.
Further south they sailed through the Chenal du Four and Raz de Sein, across the Bay of Biscay and around Cape Finisterre. After entering Gibraltar, Foreman and his family circumnavigated the Western Mediterranean. Some notable stops included: Barcelona, Morocco, Tunisia, Sicily, Naples, Sardinia and Corsica. In 2009 Foreman and his son, with guest members as crew, circumnavigated Newfoundland.
Foreman has been recognized for his sailing by twice winning the Ullman Trophy.
Work
Foreman began his career in set theory. His early work with Hugh Woodin included showing that it is consistent that the generalized continuum hypothesis (see continuum hypothesis) fails at every infinite cardinal. In joint work with Menachem Magidor and Saharon Shelah he formulated Martin's maximum, a provably maximal form of Martin's axiom and showed its consistency. Foreman's later work in set theory was primarily concerned with developing the consequences of generic large cardinal axioms. He also worked on classical "Hungarian" partition relations, mostly with András Hajnal.
In the late 1980s Foreman became interested in measure theory and ergodic theory. With Randall Dougherty he settled the Marczewski problem (1930) by showing that there is a Banach–Tarski decomposition of the unit ball in which all pieces have the property of Baire (see Banach–Tarski paradox). A consequence is the existence of a decomposition of an open dense subset of the unit ball into disjoint open sets that can be rearranged by isometries to form two open dense subsets of the unit ball. With Friedrich Wehrung, Foreman showed that the Hahn–Banach theorem implied the existence of a non-Lebesgue measurable set, even in the absence of any other form of the axiom of choice.
This naturally led to attempts to apply the tools of descriptive set theory to classification problems in ergodic theory. His first work in this direction, with Ferenc Beleznay, showed that classical collections were beyond the Borel hierarchy in complexity. This was followed shortly by a proof of the analogous results for measure-preserving transformations with
|
https://en.wikipedia.org/wiki/Integer%20lattice
|
In mathematics, the n-dimensional integer lattice (or cubic lattice), denoted Z^n, is the lattice in the Euclidean space R^n whose lattice points are n-tuples of integers. The two-dimensional integer lattice is also called the square lattice, or grid lattice. Z^n is the simplest example of a root lattice. The integer lattice is an odd unimodular lattice.
Automorphism group
The automorphism group (or group of congruences) of the integer lattice Z^n consists of all permutations and sign changes of the coordinates, and is of order 2^n n!. As a matrix group it is given by the set of all n × n signed permutation matrices. This group is isomorphic to the semidirect product

(Z_2)^n ⋊ S_n,

where the symmetric group S_n acts on (Z_2)^n by permutation (this is a classic example of a wreath product).
For the square lattice, this is the group of the square, or the dihedral group of order 8; for the three-dimensional cubic lattice, we get the group of the cube, or octahedral group, of order 48.
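A quick enumeration (a sketch, not part of the article) confirms the order 2^n n! for small n, and that each signed permutation matrix preserves squared lengths of integer vectors:

```python
from itertools import permutations, product
import numpy as np

def signed_permutation_matrices(n):
    """Enumerate all n x n signed permutation matrices: exactly one
    nonzero entry (+1 or -1) in each row and each column."""
    mats = []
    for perm in permutations(range(n)):
        for signs in product((1, -1), repeat=n):
            m = np.zeros((n, n), dtype=int)
            for i, (j, s) in enumerate(zip(perm, signs)):
                m[i, j] = s
            mats.append(m)
    return mats

# The group has order 2^n * n!: 8 for the square lattice, 48 for the cube.
assert len(signed_permutation_matrices(2)) == 2**2 * 2   # dihedral group of the square
assert len(signed_permutation_matrices(3)) == 2**3 * 6   # octahedral group

# Each matrix maps integer vectors to integer vectors of the same length.
v = np.array([2, -5, 7])
for m in signed_permutation_matrices(3):
    assert np.dot(m @ v, m @ v) == np.dot(v, v)
```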
Diophantine geometry
In the study of Diophantine geometry, the square lattice of points with integer coordinates is often referred to as the Diophantine plane. In mathematical terms, the Diophantine plane is the Cartesian product Z × Z of the ring of all integers Z with itself. The study of Diophantine figures focuses on the selection of nodes in the Diophantine plane such that all pairwise distances are integers.
Coarse geometry
In coarse geometry, the integer lattice is coarsely equivalent to Euclidean space.
Pick's theorem
Pick's theorem, first described by Georg Alexander Pick in 1899, provides a formula for the area of a simple polygon with all vertices lying on the 2-dimensional integer lattice, in terms of the number of integer points within it and on its boundary.
Let i be the number of integer points interior to the polygon, and let b be the number of integer points on its boundary (including both vertices and points along the sides). Then the area A of this polygon is:

A = i + b/2 − 1
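Pick's theorem can be sketched in code: the shoelace formula gives the area, edge gcds count boundary lattice points, and the theorem then predicts the interior count. The test polygons below are arbitrary illustrative choices:

```python
from math import gcd

def polygon_area2(vertices):
    """Twice the area of a simple polygon, via the shoelace formula."""
    s = 0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s)

def boundary_points(vertices):
    """Lattice points on the boundary: an edge from (x1,y1) to (x2,y2)
    contributes gcd(|dx|, |dy|) of them."""
    return sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]))

def interior_points(vertices):
    """Interior lattice points via Pick's theorem: i = A - b/2 + 1."""
    return (polygon_area2(vertices) - boundary_points(vertices) + 2) // 2

# 4x3 rectangle: area 12, b = 14 boundary points, i = 3*2 = 6 interior points.
rect = [(0, 0), (4, 0), (4, 3), (0, 3)]
assert polygon_area2(rect) == 24 and boundary_points(rect) == 14
assert interior_points(rect) == 6

# Right triangle with legs 4 and 3: area 6, b = 4 + 3 + gcd(4, 3) = 8, i = 3.
tri = [(0, 0), (4, 0), (0, 3)]
assert interior_points(tri) == 3
```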
See also
Regular grid
References
Further reading
Euclidean geometry
Lattice points
Diophantine geometry
|
https://en.wikipedia.org/wiki/186%20%28number%29
|
186 (one hundred [and] eighty-six) is the natural number following 185 and preceding 187.
In mathematics
There is no integer n with exactly 186 smaller numbers coprime to it (that is, φ(n) = 186 has no solution), so 186 is a nontotient. It is also never the difference n − φ(n) between an integer and the count of its coprimes below it, so it is a noncototient.
There are 186 different pentahexes, shapes formed by gluing together five regular hexagons, when rotations of shapes are counted as distinct from each other.
186 is a Fine number.
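Both the nontotient and noncototient claims can be spot-checked by brute force. The search bound below is an assumption chosen large enough that, by standard lower bounds on φ, no witness can lie beyond it:

```python
def totient_sieve(limit):
    """Compute phi[n] for all 0 <= n < limit with a sieve."""
    phi = list(range(limit))
    for p in range(2, limit):
        if phi[p] == p:               # p is prime (still untouched)
            for m in range(p, limit, p):
                phi[m] -= phi[m] // p
    return phi

LIMIT = 4 * 186 * 186                 # assumed sufficient search bound
phi = totient_sieve(LIMIT)

# Nontotient: phi(n) never equals 186 on the whole range.
assert all(phi[n] != 186 for n in range(1, LIMIT))

# Noncototient: n - phi(n) never equals 186 on the whole range.
assert all(n - phi[n] != 186 for n in range(1, LIMIT))
```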
See also
The year AD 186 or 186 BC
List of highways numbered 186
References
Integers
|
https://en.wikipedia.org/wiki/Tangent%20%28disambiguation%29
|
A tangent, in geometry, is a straight line through a point on a curve that has the same direction at that point as the curve.
Tangent may also refer to:
Mathematics
Analogous concepts for surfaces and higher-dimensional smooth manifolds, such as the tangent space
More generally, in geometry, two curves are said to be tangent when they intersect at a given point and have the same direction at that point; see for instance tangent circles
Bitangent, a line that is tangent to two different curves, or tangent twice to the same curve
The tangent function, one of the six basic trigonometric functions
Music
Tangent (clavichord), a part of the action of the clavichord that both initiates and sustains a tone, and helps determine pitch
Tangent (tangent piano), a part of the action of the tangent piano that only initiates the sound by striking the string(s) and rebounding immediately in the manner of a piano
The Tangent, an international progressive rock supergroup
Tangents: The Tea Party Collection, a compilation album from The Tea Party released in 2000
Tangents: 1973–1983, a compilation box set from Tangerine Dream released in 1994
Tangents (band), an Australian musical group
Entertainment
Tangent Comics, a short-lived imprint of DC Comics
"The Tangent Universe", the alternate universe in time travel in the cult film Donnie Darko
Tangent (Stargate SG-1), an episode of the television series Stargate SG-1
Tangents (film) or Time Chasers, a 1994 science fiction film
Tangents (collection), a collection of science fiction stories by Greg Bear
Geography
Tangent, Alberta, a hamlet in Alberta, Canada
Tangent, Oregon, a city in Linn County, Oregon, United States
The Tangent Line, part of the Mason-Dixon line between Delaware and Maryland, United States
Tangente River, a tributary of the Wawagosic River, in Quebec, Canada
Other uses
Tangent (club), an international social networking group for women over 45, part of the Round Table family
Track transition curve; in highway and railroad design, the straight sections of road or track joined by such curves are called tangents
Mathematics disambiguation pages
|
https://en.wikipedia.org/wiki/Euler%27s%20theorem%20in%20geometry
|
In geometry, Euler's theorem states that the distance d between the circumcenter and incenter of a triangle is given by

d^2 = R(R − 2r)

or equivalently

1/(R − d) + 1/(R + d) = 1/r,

where R and r denote the circumradius and inradius respectively (the radii of the circumscribed circle and inscribed circle respectively). The theorem is named for Leonhard Euler, who published it in 1765. However, the same result was published earlier by William Chapple in 1746.
From the theorem follows the Euler inequality:

R ≥ 2r,

which holds with equality only in the equilateral case.
Stronger version of the inequality
A stronger version is

R/r ≥ (abc + a^3 + b^3 + c^3) / (2abc),

where a, b, and c are the side lengths of the triangle.
Euler's theorem for the escribed circle
If and denote respectively the radius of the escribed circle opposite to the vertex and the distance between its center and the center of
the circumscribed circle, then .
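As a numerical illustration (a sketch, not part of the article), Euler's relation d^2 = R(R − 2r) can be verified for the 3-4-5 right triangle, where R = 5/2 and r = 1:

```python
import math

def circumcenter_incenter(A, B, C):
    """Circumcenter, incenter, and side lengths of a triangle from its vertices."""
    ax, ay = A; bx, by = B; cx, cy = C
    # Circumcenter: intersection of the perpendicular bisectors.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    # Incenter: average of the vertices weighted by opposite side lengths.
    a = math.dist(B, C); b = math.dist(C, A); c = math.dist(A, B)
    ix = (a * ax + b * bx + c * cx) / (a + b + c)
    iy = (a * ay + b * by + c * cy) / (a + b + c)
    return (ux, uy), (ix, iy), a, b, c

O, I, a, b, c = circumcenter_incenter((0, 0), (4, 0), (0, 3))
area = 3 * 4 / 2
s = (a + b + c) / 2
R = a * b * c / (4 * area)       # circumradius: 2.5
r = area / s                     # inradius: 1.0
d2 = (O[0] - I[0])**2 + (O[1] - I[1])**2

assert math.isclose(d2, R * (R - 2 * r))   # d^2 = R(R - 2r) = 1.25
assert R >= 2 * r                          # Euler's inequality
```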
Euler's inequality in absolute geometry
Euler's inequality, in the form stating that, for all triangles inscribed in a given circle, the maximum of the radius of the inscribed circle is reached for the equilateral triangle and only for it, is valid in absolute geometry.
See also
Fuss' theorem for the relation among the same three variables in bicentric quadrilaterals
Poncelet's closure theorem, showing that there is an infinity of triangles with the same two circles (and therefore the same R, r, and d)
List of triangle inequalities
References
External links
Articles containing proofs
Triangle inequalities
Theorems about triangles and circles
|
https://en.wikipedia.org/wiki/Medial%20triangle
|
In Euclidean geometry, the medial triangle or midpoint triangle of a triangle ABC is the triangle with vertices at the midpoints of the triangle's sides AB, AC, BC. It is the n = 3 case of the midpoint polygon of a polygon with n sides. The medial triangle is not the same thing as the median triangle, which is the triangle whose sides have the same lengths as the medians of ABC.
Each side of the medial triangle is called a midsegment (or midline). In general, a midsegment of a triangle is a line segment which joins the midpoints of two sides of the triangle. It is parallel to the third side and has a length equal to half the length of the third side.
Properties
The medial triangle can also be viewed as the image of triangle ABC transformed by a homothety centered at the centroid with ratio −1/2. Thus, the sides of the medial triangle are half as long as, and parallel to, the corresponding sides of triangle ABC. Hence, the medial triangle is inversely similar to triangle ABC and shares the same centroid and medians with it. It also follows from this that the perimeter of the medial triangle equals the semiperimeter of triangle ABC, and that the area is one quarter of the area of triangle ABC. Furthermore, the four triangles into which the original triangle is subdivided by the medial triangle are all mutually congruent by SSS, so their areas are equal and thus the area of each is 1/4 the area of the original triangle.
The orthocenter of the medial triangle coincides with the circumcenter of triangle ABC. This fact provides a tool for proving collinearity of the circumcenter, centroid and orthocenter. The medial triangle is the pedal triangle of the circumcenter. The nine-point circle circumscribes the medial triangle, and so the nine-point center is the circumcenter of the medial triangle.
The Nagel point of the medial triangle is the incenter of its reference triangle.
A reference triangle's medial triangle is congruent to the triangle whose vertices are the midpoints between the reference triangle's orthocenter and its vertices.
The incenter of a triangle lies in its medial triangle.
A point in the interior of a triangle is the center of an inellipse of the triangle if and only if the point lies in the interior of the medial triangle.
The medial triangle is the only inscribed triangle for which none of the other three interior triangles has smaller area.
The reference triangle and its medial triangle are orthologic triangles.
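Several of the properties above can be checked numerically. This sketch uses an arbitrary triangle and verifies the area ratio, the perimeter/semiperimeter relation, and the shared centroid:

```python
import numpy as np

# Vertices of an arbitrary reference triangle (illustrative values).
A, B, C = np.array([0.0, 0.0]), np.array([7.0, 1.0]), np.array([3.0, 6.0])

# Medial triangle: midpoints of the three sides.
Ma, Mb, Mc = (B + C) / 2, (C + A) / 2, (A + B) / 2

def area(P, Q, R):
    u, v = Q - P, R - P
    return abs(u[0] * v[1] - u[1] * v[0]) / 2

def perimeter(P, Q, R):
    return (np.linalg.norm(Q - P) + np.linalg.norm(R - Q)
            + np.linalg.norm(P - R))

# Area is one quarter of the original; perimeter is the semiperimeter.
assert np.isclose(area(Ma, Mb, Mc), area(A, B, C) / 4)
assert np.isclose(perimeter(Ma, Mb, Mc), perimeter(A, B, C) / 2)

# The medial triangle shares the centroid of the reference triangle.
assert np.allclose((Ma + Mb + Mc) / 3, (A + B + C) / 3)
```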
Coordinates
Let a, b, c be the sidelengths of triangle ABC. Trilinear coordinates for the vertices of the medial triangle are given by

X = 0 : 1/b : 1/c
Y = 1/a : 0 : 1/c
Z = 1/a : 1/b : 0
Anticomplementary triangle
If XYZ is the medial triangle of ABC, then ABC is the anticomplementary triangle or antimedial triangle of XYZ. The anticomplementary triangle of ABC is formed by three lines parallel to its sides: the parallel to AB through C, the parallel to AC through B, and the parallel to BC through A.
Trilinear coordinates for the vertices of the triangle anticomplementary to ABC are given by

X' = −1/a : 1/b : 1/c
Y' = 1/a : −1/b : 1/c
Z' = 1/a : 1/b : −1/c
The name "anticomplementary triangle" corresponds to the fac
|
https://en.wikipedia.org/wiki/Mixed
|
Mixed is the past tense of mix.
Mixed may refer to:
Mixed (United Kingdom ethnicity category), an ethnicity category that has been used by the United Kingdom's Office for National Statistics since the 2001 Census
Music
Mixed (album), a compilation album of two avant-garde jazz sessions featuring performances by the Cecil Taylor Unit and the Roswell Rudd Sextet
See also
Mix (disambiguation)
Mixed breed, an animal whose parents are of different breeds or species
Mixed ethnicity, a person who is of multiracial descent
|
https://en.wikipedia.org/wiki/Positive%20and%20negative%20parts
|
In mathematics, the positive part of a real or extended real-valued function f is defined by the formula

f^+(x) = max(f(x), 0).

Intuitively, the graph of f^+ is obtained by taking the graph of f, chopping off the part under the x-axis, and letting f^+ take the value zero there.
Similarly, the negative part of f is defined as

f^−(x) = max(−f(x), 0) = −min(f(x), 0).

Note that both f^+ and f^− are non-negative functions. A peculiarity of terminology is that the 'negative part' is neither negative nor a part (like the imaginary part of a complex number is neither imaginary nor a part).
The function f can be expressed in terms of f^+ and f^− as

f = f^+ − f^−.

Also note that

|f| = f^+ + f^−.

Using these two equations one may express the positive and negative parts as

f^+ = (|f| + f) / 2
f^− = (|f| − f) / 2.

Another representation, using the Iverson bracket, is

f^+ = f·[f > 0]
f^− = −f·[f < 0].

One may define the positive and negative part of any function with values in a linearly ordered group.
The unit ramp function is the positive part of the identity function.
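A minimal NumPy sketch of the identities above; the test function and grid are arbitrary choices:

```python
import numpy as np

def positive_part(f):
    """f+ = max(f, 0), applied pointwise."""
    return np.maximum(f, 0.0)

def negative_part(f):
    """f- = max(-f, 0), applied pointwise."""
    return np.maximum(-f, 0.0)

x = np.linspace(-3, 3, 1001)
f = np.sin(2 * x) + 0.3 * x            # an arbitrary test function

fp, fn = positive_part(f), negative_part(f)

assert np.all(fp >= 0) and np.all(fn >= 0)     # both parts are non-negative
assert np.allclose(f, fp - fn)                 # f = f+ - f-
assert np.allclose(np.abs(f), fp + fn)         # |f| = f+ + f-
assert np.allclose(fp, (np.abs(f) + f) / 2)    # f+ = (|f| + f)/2
assert np.allclose(fn, (np.abs(f) - f) / 2)    # f- = (|f| - f)/2
```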
Measure-theoretic properties
Given a measurable space (X, Σ), an extended real-valued function f is measurable if and only if its positive and negative parts are. Therefore, if such a function f is measurable, so is its absolute value |f|, being the sum of two measurable functions. The converse, though, does not necessarily hold: for example, taking f as

f = 1_V − 1_{V^c},

where V is a Vitali set (so f is 1 on V and −1 elsewhere), it is clear that f is not measurable, but its absolute value is, being a constant function.
The positive part and negative part of a function are used to define the Lebesgue integral for a real-valued function. Analogously to this decomposition of a function, one may decompose a signed measure into positive and negative parts — see the Hahn decomposition theorem.
See also
Rectifier (neural networks)
Even and odd functions
Real and imaginary parts
References
External links
Positive part on MathWorld
Elementary mathematics
|
https://en.wikipedia.org/wiki/Network%20calculus
|
Network calculus is "a set of mathematical results which give insights into man-made systems such as concurrent programs, digital circuits and communication networks." Network calculus gives a theoretical framework for analysing performance guarantees in computer networks. As traffic flows through a network it is subject to constraints imposed by the system components, for example:
link capacity
traffic shapers (leaky buckets)
congestion control
background traffic
These constraints can be expressed and analysed with network calculus methods. Constraint curves can be combined using convolution under min-plus algebra. Network calculus can also be used to express traffic arrival and departure functions as well as service curves.
The calculus uses "alternate algebras ... to transform complex non-linear network systems into analytically tractable linear systems."
Currently, there exist two branches in network calculus: one handling deterministic bounds, and one handling stochastic bounds.
System modelling
Modelling flow and server
In network calculus, a flow is modelled as a cumulative function A(t), where A(t) represents the amount of data (number of bits, for example) sent by the flow in the interval [0, t). Such functions are non-negative and non-decreasing. The time domain is often the set of non-negative reals.
A server can be a link, a scheduler, a traffic shaper, or a whole network. It is simply modelled as a relation between some arrival cumulative curve A and some departure cumulative curve D. It is required that A ≥ D, to model the fact that the departure of some data cannot occur before its arrival.
Modelling backlog and delay
Given some arrival and departure curves A and D, the backlog at any instant t, denoted b(t), can be defined as the difference between A and D:

b(t) = A(t) − D(t).

The delay at t, denoted d(t), is defined as the minimal amount of time such that the departure function reaches the arrival function:

d(t) = inf { τ ≥ 0 : D(t + τ) ≥ A(t) }.

When considering the whole flows, the supremum of these values is used.
In general, the flows are not exactly known, and only some constraints on flows and servers are known (like the maximal number of packet sent on some period, the maximal size of packets, the minimal link bandwidth). The aim of network calculus is to compute upper bounds on delay and backlog, based on these constraints. To do so, network calculus uses the min-plus algebra.
Min-plus Semiring
Network calculus makes intensive use of the min-plus semiring (sometimes called min-plus algebra).
In filter theory and linear systems theory the convolution of two functions f and g is defined as

(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ.

In the min-plus semiring the sum is replaced by the minimum (respectively infimum) operator and the product is replaced by the sum. So the min-plus convolution of two functions f and g becomes

(f ⊗ g)(t) = inf_{0 ≤ τ ≤ t} { f(τ) + g(t − τ) },

e.g. see the definition of service curves. Convolution and min-plus convolution share many algebraic properties. In particular both are commutative and associative.
A so-called min-plus de-convolution operation is defined as

(f ⊘ g)(t) = sup_{τ ≥ 0} { f(t + τ) − g(τ) },
e.g. as use
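On discrete time steps the min-plus convolution can be sketched directly. The token-bucket arrival curve and rate-latency service curve below are illustrative choices, not taken from the article:

```python
def min_plus_conv(f, g):
    """Discrete min-plus convolution:
    (f conv g)[t] = min over 0 <= s <= t of f[s] + g[t - s]."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

# Token-bucket arrival curve alpha(t) = b + r*t and rate-latency service
# curve beta(t) = max(0, R*(t - T)); parameter values are illustrative.
b, r = 5, 2        # burst and sustained rate
R, T = 4, 3        # service rate and latency
alpha = [b + r * t for t in range(20)]
beta = [max(0, R * (t - T)) for t in range(20)]

conv = min_plus_conv(alpha, beta)

# Min-plus convolution is commutative, like ordinary convolution.
assert conv == min_plus_conv(beta, alpha)

# A hand-checkable example.
assert min_plus_conv([0, 1, 3], [0, 2, 2]) == [0, 1, 2]
```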
|
https://en.wikipedia.org/wiki/Polignac%27s%20conjecture
|
In number theory, Polignac's conjecture was made by Alphonse de Polignac in 1849 and states:
For any positive even number n, there are infinitely many prime gaps of size n. In other words: There are infinitely many cases of two consecutive prime numbers with difference n.
Although the conjecture has not yet been proven or disproven for any given value of n, in 2013 an important breakthrough was made by Zhang Yitang who proved that there are infinitely many prime gaps of size n for some value of n < 70,000,000. Later that year, James Maynard announced a related breakthrough which proved that there are infinitely many prime gaps of some size less than or equal to 600. As of April 14, 2014, one year after Zhang's announcement, according to the Polymath project wiki, n has been reduced to 246. Further, assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath project wiki states that n has been reduced to 12 and 6, respectively.
For n = 2, it is the twin prime conjecture. For n = 4, it says there are infinitely many cousin primes (p, p + 4). For n = 6, it says there are infinitely many sexy primes (p, p + 6) with no prime between p and p + 6.
Dickson's conjecture generalizes Polignac's conjecture to cover all prime constellations.
Conjectured density
Let π_n(x) for even n be the number of prime gaps of size n below x.
The first Hardy–Littlewood conjecture says the asymptotic density is of the form

π_n(x) ~ 2 C_n x / (ln x)^2,

where C_n is a function of n, and ~ means that the quotient of the two expressions tends to 1 as x approaches infinity.
C_2 is the twin prime constant

C_2 = ∏_{p ≥ 3} p(p − 2) / (p − 1)^2 ≈ 0.66016...,

where the product extends over all prime numbers p ≥ 3.
C_n is C_2 multiplied by a number which depends on the odd prime factors q of n:

C_n = C_2 ∏_{q | n, q odd prime} (q − 1) / (q − 2).
For example, C4 = C2 and C6 = 2C2. Twin primes have the same conjectured density as cousin primes, and half that of sexy primes.
Note that each odd prime factor q of n increases the conjectured density compared to twin primes by a factor of (q − 1)/(q − 2). A heuristic argument follows. It relies on some unproven assumptions so the conclusion remains a conjecture. The chance of a random odd prime q dividing either a or a + 2 in a random "potential" twin prime pair is 2/q, since q divides one of the q numbers from a to a + q − 1. Now assume q divides n and consider a potential prime pair (a, a + n). q divides a + n if and only if q divides a, and the chance of that is 1/q. The chance of (a, a + n) being free from the factor q, divided by the chance that (a, a + 2) is free from q, then becomes (q − 1)/q divided by (q − 2)/q. This equals (q − 1)/(q − 2), which transfers to the conjectured prime density. In the case of n = 6, the argument simplifies to: if a is a random number then 3 has chance 2/3 of dividing a or a + 2, but only chance 1/3 of dividing a or a + 6, so the latter pair is conjectured twice as likely to both be prime.
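The predicted ratios C_4 = C_2 and C_6 = 2C_2 can be eyeballed empirically. This sketch counts pairs (p, p + n) that are both prime below an arbitrary bound (ignoring, for simplicity, the consecutiveness condition in the sexy-prime definition):

```python
def primes_below(n):
    """Sieve of Eratosthenes returning all primes < n."""
    sieve = bytearray([1]) * n
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i, v in enumerate(sieve) if v]

N = 100_000
ps = set(primes_below(N))
count = {n: sum(1 for p in ps if p + n in ps) for n in (2, 4, 6)}

# Cousin primes should be roughly as common as twins (C4 = C2),
# and sexy-prime pairs roughly twice as common (C6 = 2*C2).
assert 0.8 < count[4] / count[2] < 1.2
assert 1.6 < count[6] / count[2] < 2.4
```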
Notes
References
Alphonse de Polignac, Recherches nouvelles sur les nombres premiers. Comptes Rendus des Séances de l'Académie des Sciences (1849)
Conjectures about prime numbers
|
https://en.wikipedia.org/wiki/J%C3%B3zsef%20Beck
|
József Beck (born February 14, 1952, in Budapest, Hungary) is the Harold H. Martin Professor of Mathematics at Rutgers University.
His contributions to combinatorics include the partial colouring lemma and the Beck–Fiala theorem in discrepancy theory, the algorithmic version of the Lovász local lemma, the two extremes theorem in combinatorial geometry and the second moment method in the theory of positional games, among others.
Beck was awarded the Fulkerson Prize in 1985 for a paper titled "Roth's estimate of the discrepancy of integer sequences is nearly sharp", which introduced the notion of discrepancy on hypergraphs and established an upper bound on the discrepancy of the family of arithmetic progressions contained in {1,2,...,n}, matching the classical lower bound up to a polylogarithmic factor. Jiří Matoušek and Joel Spencer later succeeded in getting rid of this factor, showing that the bound was really sharp.
Beck gave an invited talk at the 1986 International Congress of Mathematicians.
He is an external member of the Hungarian Academy of Sciences (2004).
Books
Irregularities of Distribution (with William W. L. Chen, Cambridge Tracts in Mathematics 89, Cambridge University Press, 1987)
Combinatorial Games: Tic-Tac-Toe Theory (Encyclopedia of Mathematics and its Applications 114, Cambridge University Press, 2008)
Inevitable Randomness in Discrete Mathematics (University Lecture Series 49, American Mathematical Society, 2009)
Probabilistic Diophantine Approximation: Randomness in Lattice Point Counting (Springer Monographs in Mathematics. Springer-Verlag, 2014)
Strong Uniformity and Large Dynamical Systems (World Scientific Publishing, 2018)
References
External links
József Beck, personal webpage, Department of Mathematics, Rutgers University
József Beck, Mathematics Genealogy Project
Mathematicians from Budapest
Members of the Hungarian Academy of Sciences
1952 births
Living people
Rutgers University faculty
Positional games
Hungarian emigrants to the United States
|
https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem
|
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process.
History
Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Earlier, in 1914, Albert Einstein had explained the idea, without proofs, in a brief two-page memo.
The case of a continuous-time process
For continuous time, the Wiener–Khinchin theorem says that if x(t) is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance)

r(τ) = E[x(t) x*(t − τ)],

defined in terms of the statistical expected value E, exists and is finite at every lag τ, then there exists a monotone function F(f) in the frequency domain −∞ < f < ∞, or equivalently a non-negative Radon measure μ on the frequency domain, such that

r(τ) = ∫ e^{2πiτf} dF(f),

where the integral is a Riemann–Stieltjes integral. The asterisk denotes complex conjugate, and can be omitted if the random process is real-valued. This is a kind of spectral decomposition of the auto-correlation function. F is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum.
The Fourier transform of x(t) does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is r(τ) assumed to be absolutely integrable, so it need not have a Fourier transform either.
However, if the measure μ is absolutely continuous, for example, if the process is purely indeterministic, then F is differentiable almost everywhere and we can write dF(f) = S(f) df. In this case, one can determine S(f), the power spectral density of x(t), by taking the averaged derivative of F. Because the left and right derivatives of F exist everywhere, i.e. we can put S(f) = (F′(f+) + F′(f−))/2 everywhere (obtaining that F is the integral of its averaged derivative), and the theorem simplifies to

r(τ) = ∫ S(f) e^{2πiτf} df.

If now one assumes that r and S satisfy the necessary conditions for Fourier inversion to be valid, the Wiener–Khinchin theorem takes the simple form of saying that r and S are a Fourier-transform pair, and

S(f) = ∫ r(τ) e^{−2πifτ} dτ.
The case of a discrete-time process
For the discrete-time case, the power spectral density of the function x[n] with discrete values is

S(ω) = Σ_{k=−∞}^{∞} r[k] e^{−iωk},

where ω is the angular frequency, i is used to denote the imaginary unit (in engineering, sometimes the letter j is used instead) and r[k] is the discrete autocorrelation function of x[n], defined in its deterministic or stochastic formulation.
Provided r is absolutely summable, i.e.

Σ_{k=−∞}^{∞} |r[k]| < ∞,

the result of the theorem then can be written as

r[k] = (1/2π) ∫_{−π}^{π} S(ω) e^{iωk} dω.
Being a discrete-time sequence, the spectral density is periodic in the frequency domain. For this reason, the domain of the function is usually
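The discrete-time relation can be sanity-checked numerically. For a finite sequence, the inverse DFT of the power spectrum |X|^2 recovers the circular autocorrelation (a sketch; the circular convention stands in for the infinite-sum formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
n = len(x)

# Circular autocorrelation computed directly:
# r[k] = sum over m of x[m] * x[(m + k) mod n]
r_direct = np.array([np.sum(x * np.roll(x, -k)) for k in range(n)])

# ... and via the power spectrum: the inverse DFT of |X|^2 gives the
# same circular autocorrelation (discrete Wiener–Khinchin relation).
r_fft = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real

assert np.allclose(r_direct, r_fft)
```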
|
https://en.wikipedia.org/wiki/Supersingular%20variety
|
In mathematics, a supersingular variety is (usually) a smooth projective variety in nonzero characteristic such that for all n the slopes of the Newton polygon of the nth crystalline cohomology are all equal to n/2. For special classes of varieties such as elliptic curves it is common to use various ad hoc definitions of "supersingular", which are (usually) equivalent to the one given above.
The term "singular elliptic curve" (or "singular j-invariant") was at one time used to refer to complex elliptic curves whose ring of endomorphisms has rank 2, the maximum possible. Helmut Hasse discovered that, in finite characteristic, elliptic curves can have larger rings of endomorphisms of rank 4, and these were called "supersingular elliptic curves". Supersingular elliptic curves can also be characterized by the slopes of their crystalline cohomology, and the term "supersingular" was later extended to other varieties whose cohomology has similar properties. The terms "supersingular" or "singular" do not mean that the variety has singularities.
Examples include:
Supersingular elliptic curve. Elliptic curves in non-zero characteristic with an unusually large ring of endomorphisms of rank 4.
Supersingular abelian variety. Sometimes defined to be an abelian variety isogenous to a product of supersingular elliptic curves, and sometimes defined to be an abelian variety of dimension g whose endomorphism ring has rank (2g)^2.
Supersingular K3 surface. Certain K3 surfaces in non-zero characteristic.
Supersingular Enriques surface. Certain Enriques surfaces in characteristic 2.
A surface is called Shioda supersingular if the rank of its Néron–Severi group is equal to its second Betti number.
A surface is called Artin supersingular if its formal Brauer group has infinite height.
References
Algebraic geometry
|
https://en.wikipedia.org/wiki/Wavelet%20transform
|
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.
Definition
A function $\psi \in L^2(\mathbb{R})$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space $L^2(\mathbb{R})$ of square integrable functions.
The Hilbert basis is constructed as the family of functions $\{\psi_{jk} : j, k \in \mathbb{Z}\}$ by means of dyadic translations and dilations of $\psi$,
$$\psi_{jk}(x) = 2^{j/2} \psi(2^j x - k)$$
for integers $j, k \in \mathbb{Z}$.
If, under the standard inner product on $L^2(\mathbb{R})$, this family is orthonormal, it is an orthonormal system:
$$\langle \psi_{jk}, \psi_{lm} \rangle = \delta_{jl}\, \delta_{km},$$
where $\delta_{jl}$ is the Kronecker delta.
Completeness is satisfied if every function $f \in L^2(\mathbb{R})$ may be expanded in the basis as
$$f(x) = \sum_{j,k=-\infty}^{\infty} c_{jk} \psi_{jk}(x)$$
with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual.
The integral wavelet transform is the integral transform defined as
$$[W_\psi f](a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} \overline{\psi\!\left(\frac{x - b}{a}\right)} f(x)\, dx.$$
The wavelet coefficients $c_{jk}$ are then given by
$$c_{jk} = [W_\psi f]\left(2^{-j}, k\, 2^{-j}\right).$$
Here, $a = 2^{-j}$ is called the binary dilation or dyadic dilation, and $b = k\, 2^{-j}$ is the binary or dyadic position.
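As an illustrative numerical sketch, the dyadic family and the coefficients can be computed for the Haar wavelet, the simplest orthonormal wavelet. The Riemann-sum discretization below is an assumption made for illustration, not part of the formal definition:

```python
def haar(t):
    """Haar mother wavelet: 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def psi_jk(t, j, k):
    """Dyadic dilation/translation: 2^(j/2) * psi(2^j * t - k)."""
    return 2.0 ** (j / 2) * haar(2.0 ** j * t - k)

def coeff(f, j, k, n=4096):
    """Riemann-sum approximation of the coefficient <f, psi_jk> on [0, 1)."""
    dt = 1.0 / n
    return sum(f(i * dt) * psi_jk(i * dt, j, k) for i in range(n)) * dt

# Numerical check of orthonormality on two members of the family
print(coeff(lambda t: psi_jk(t, 0, 0), 0, 0))  # ≈ 1.0
print(coeff(lambda t: psi_jk(t, 0, 0), 1, 1))  # ≈ 0.0
```

In practice one never evaluates these inner products by brute-force quadrature; the discrete wavelet transform computes the same coefficients recursively via filter banks.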
Principle
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing,
$$\Delta t\, \Delta \omega \geq \frac{1}{2},$$
where $t$ represents time and $\omega$ angular frequency ($\omega = 2\pi f$, where $f$ is ordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis windows is chosen, the larger is the value of $\Delta t$.
When $\Delta t$ is large:
Bad time resolution
Good frequency resolution
Low frequency, large scaling factor
When $\Delta t$ is small:
Good time resolution
Bad frequency resolution
High frequency, small scaling factor
In other words, the basis function can be regarded as an impulse response of a system with which the function has been filtered. The transformed signal provides information about the time and the frequency. Therefore, the wavelet transform contains information similar to the short-time Fourier transform, but with additional special properties of the wavelets, which show up at the resolution in time at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note, however, that the frequency resolution is decreasing for increasing frequencies while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the figure.
This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varyi
|
https://en.wikipedia.org/wiki/Kerala%20school
|
Kerala school may refer to:
Kerala School Kalolsavam, an annual art competition for students in Kerala
Kerala school of astronomy and mathematics, in Kerala between the 14th and 16th centuries CE
Kerala School of Mathematics, Kozhikode, in Kunnamangalam near Kozhikode City
|
https://en.wikipedia.org/wiki/Dual%20wavelet
|
In mathematics, a dual wavelet is the dual to a wavelet. In general, the wavelet series generated by a square-integrable function will have a dual series, in the sense of the Riesz representation theorem. However, the dual series is not itself in general representable by a square-integrable function.
Definition
Given a square-integrable function $\psi \in L^2(\mathbb{R})$, define the series $\{\psi_{jk}\}$ by
$$\psi_{jk}(x) = 2^{j/2} \psi(2^j x - k)$$
for integers $j, k \in \mathbb{Z}$.
Such a function is called an R-function if the linear span of $\{\psi_{jk}\}$ is dense in $L^2(\mathbb{R})$, and if there exist positive constants A, B with $0 < A \leq B < \infty$ such that
$$A \|c_{jk}\|^2_{\ell^2} \leq \bigg\| \sum_{j,k=-\infty}^{\infty} c_{jk} \psi_{jk} \bigg\|^2_{L^2} \leq B \|c_{jk}\|^2_{\ell^2}$$
for all bi-infinite square summable series $\{c_{jk}\}$. Here, $\|\cdot\|_{\ell^2}$ denotes the square-sum norm:
$$\|c_{jk}\|^2_{\ell^2} = \sum_{j,k=-\infty}^{\infty} |c_{jk}|^2$$
and $\|\cdot\|_{L^2}$ denotes the usual norm on $L^2(\mathbb{R})$:
$$\|f\|^2_{L^2} = \int_{-\infty}^{\infty} |f(x)|^2\, dx.$$
By the Riesz representation theorem, there exists a unique dual basis $\{\psi^{jk}\}$ such that
$$\langle \psi^{jk}, \psi_{lm} \rangle = \delta_{jl}\, \delta_{km},$$
where $\delta_{jl}$ is the Kronecker delta and $\langle \cdot, \cdot \rangle$ is the usual inner product on $L^2(\mathbb{R})$. Indeed, there exists a unique series representation for a square-integrable function f expressed in this basis:
$$f(x) = \sum_{j,k} \langle f, \psi^{jk} \rangle\, \psi_{jk}(x).$$
If there exists a function $\tilde{\psi} \in L^2(\mathbb{R})$ such that
$$\tilde{\psi}_{jk} = \psi^{jk},$$
then $\tilde{\psi}$ is called the dual wavelet or the wavelet dual to ψ. In general, for some given R-function ψ, the dual will not exist. In the special case of $\psi = \tilde{\psi}$, the wavelet is said to be an orthogonal wavelet.
An example of an R-function without a dual is easy to construct. Let $\phi$ be an orthogonal wavelet. Then define a new function $\psi(x) = \phi(x) + z\,\phi(2x)$ for some complex number z. It is straightforward to show that this ψ does not have a wavelet dual.
See also
Multiresolution analysis
References
Charles K. Chui, An Introduction to Wavelets (Wavelet Analysis & Its Applications), (1992), Academic Press, San Diego.
Wavelets
Wavelet
|
https://en.wikipedia.org/wiki/Advanced%20statistics%20in%20basketball
|
Advanced statistics (also known as analytics or APBRmetrics) in basketball refers to the analysis of basketball statistics using objective evidence. APBRmetrics takes its name from the acronym APBR, which stands for the Association for Professional Basketball Research.
According to The Sporting News, the APBRmetrics message board was "the birthplace of basketball analytics".
Advanced basketball statistics include effective field goal percentage (eFG%), true shooting percentage (TS%), (on-court/off-court) plus–minus, real plus/minus (RPM), and player efficiency rating (PER).
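The first two of these statistics have simple closed forms; a minimal sketch follows (the 0.44 free-throw weighting in TS% is the conventional approximation used in most published versions of the formula):

```python
def effective_fg_pct(fgm, fga, three_pm):
    """eFG% = (FGM + 0.5 * 3PM) / FGA: made threes count 1.5x to
    reflect the extra point they are worth."""
    return (fgm + 0.5 * three_pm) / fga

def true_shooting_pct(pts, fga, fta):
    """TS% = PTS / (2 * (FGA + 0.44 * FTA)): folds free throws in
    via the conventional 0.44 shooting-possession weight."""
    return pts / (2 * (fga + 0.44 * fta))

# Example line: 10-of-20 from the field with 4 threes, 5-of-6 free throws (29 pts)
print(round(effective_fg_pct(10, 20, 4), 3))  # 0.6
print(round(true_shooting_pct(29, 20, 6), 3))
```

A 50% shooter who makes no threes has eFG% = 50%, while the same raw percentage with heavy three-point volume grades out considerably higher.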
A more complete explanation of basketball analytics is available in "A Starting Point for Analyzing Basketball Statistics" in the Journal of Quantitative Analysis in Sports.
Notable basketball analytics practitioners
The field of basketball analytics practitioners includes, but is not limited to, the following individuals:
John Hollinger authored four books in the Pro Basketball Forecast/Prospectus series and was a regular columnist for ESPN Insider. He is a former vice president of basketball operations for the Memphis Grizzlies.
Justin Kubatko created and maintained the website Basketball-Reference.com, the pro basketball arm of Sports Reference LLC, until 2013. During Kubatko's tenure, Sports Reference was named one of the 50 best websites of 2010 by Time magazine.
Dean Oliver, "one of the godfathers of NBA analytics", is a former Division III player and assistant coach at Caltech. He is also a scout who has consulted with the Seattle SuperSonics, and he has served in the front offices of the Denver Nuggets and the Sacramento Kings.
See also
Network Science Based Basketball Analytics
References
External links
Association for Professional Basketball Research
Basketball statistics
|
https://en.wikipedia.org/wiki/George%20Andrews%20%28mathematician%29
|
George Eyre Andrews (born December 4, 1938) is an American mathematician working in special functions, number theory, analysis and combinatorics.
Education and career
He is currently an Evan Pugh Professor of Mathematics at Pennsylvania State University. He did his undergraduate studies at Oregon State University and received his PhD in 1964 at the University of Pennsylvania where his advisor was Hans Rademacher.
During 2008–2009 he was president of the American Mathematical Society.
Contributions
Andrews's contributions include several monographs and over 250 research and popular articles on q-series, special functions, combinatorics and applications. He is considered to be the world's leading expert in the theory of integer partitions. In 1976 he discovered Ramanujan's Lost Notebook. He is interested in mathematical pedagogy.
His book The Theory of Partitions is the standard reference on the subject of integer partitions.
He has advanced mathematics in the theories of partitions and q-series. His work at the interface of number theory and combinatorics has also led to many important applications in physics.
Awards and honors
In 2003 Andrews was elected a member of the National Academy of Sciences. He was elected a Fellow of the American Academy of Arts and Sciences in 1997. In 1998 he was an Invited Speaker at the International Congress of Mathematicians in Berlin. In 2012 he became a fellow of the American Mathematical Society.
He was given honorary doctorates from the University of Parma in 1998, the University of Florida in 2002, the University of Waterloo in 2004, SASTRA University in Kumbakonam, India in 2012, and the University of Illinois at Urbana–Champaign in 2014.
Publications
Selected Works of George E Andrews (With Commentary) (World Scientific Publishing, 2012)
Number Theory (Dover, 1994)
The Theory of Partitions (Cambridge University Press, 1998)
Integer Partitions (with Kimmo Eriksson) (Cambridge University Press, 2004)
Ramanujan's Lost Notebook: Part I (with Bruce C. Berndt) (Springer, 2005)
Ramanujan's Lost Notebook: Part II (with Bruce C. Berndt) (Springer, 2008)
Ramanujan's Lost Notebook: Part III (with Bruce C. Berndt) (Springer, 2012)
Ramanujan's Lost Notebook: Part IV (with Bruce C. Berndt) (Springer, 2013)
Special Functions (with Richard Askey and Ranjan Roy), Encyclopedia of Mathematics and Its Applications (Cambridge University Press, 1999)
References
External links
George Andrews's homepage
Author profile in the database zbMATH
"The Meaning of Ramanujan and His Lost Notebook" by George E. Andrews, Center for Advanced Study, U. of Illinois at Urbana-Champaign, YouTube, 2014
"Partitions, Dyson, and Ramanujan" - George Andrews, videosfromIAS, YouTube, 2016
1938 births
Living people
Members of the United States National Academy of Sciences
Mathematical analysts
Number theorists
20th-century American mathematicians
21st-century American mathematicians
Oregon S
|
https://en.wikipedia.org/wiki/Urban%20unit
|
In France, an urban unit (fr: "unité urbaine") is a statistical area defined by INSEE, the French national statistics office, for the measurement of contiguously built-up areas. According to the INSEE definition, an "unité urbaine" is a commune alone or a grouping of communes which: a) form a single unbroken spread of urban development, with no distance between habitations greater than 200 m; and b) have all together a population greater than 2,000 inhabitants. Communes not belonging to an unité urbaine are considered rural.
The French unité urbaine is a statistical area in accordance with United Nations recommendations for the measurement of contiguously built-up areas. Other comparable units in other countries are the United States "Urbanized Area" and the "urban area" definition shared by Canada and the United Kingdom. The French aire d'attraction d'une ville is equivalent to the functional urban area as defined by Eurostat, and represents a population and employment centre (urban cluster) and its commuting zone. The zoning into unités urbaines and aires d'attraction des villes was last revised in 2020.
French urban units with over 200,000 inhabitants
This list shows the unités urbaines as of the 2020 revision.
See also
Functional area (France)
List of communes in France with over 20,000 inhabitants
Urban area (France)
References
Urban planning in France
Human habitats
Urban areas
INSEE concepts
|
https://en.wikipedia.org/wiki/Nilpotent%20Lie%20algebra
|
In mathematics, a Lie algebra $\mathfrak{g}$ is nilpotent if its lower central series terminates in the zero subalgebra. The lower central series is the sequence of subalgebras
$$\mathfrak{g} \supseteq [\mathfrak{g}, \mathfrak{g}] \supseteq [[\mathfrak{g}, \mathfrak{g}], \mathfrak{g}] \supseteq [[[\mathfrak{g}, \mathfrak{g}], \mathfrak{g}], \mathfrak{g}] \supseteq \cdots$$
We write $\mathfrak{g}_0 = \mathfrak{g}$, and $\mathfrak{g}_{n+1} = [\mathfrak{g}, \mathfrak{g}_n]$ for all $n \geq 0$. If the lower central series eventually arrives at the zero subalgebra, then the Lie algebra is called nilpotent. The lower central series for Lie algebras is analogous to the lower central series in group theory, and nilpotent Lie algebras are analogs of nilpotent groups.
The nilpotent Lie algebras are precisely those that can be obtained from abelian Lie algebras, by successive central extensions.
Note that the definition means that, viewed as a non-associative non-unital algebra, a Lie algebra is nilpotent if it is nilpotent as an ideal.
Definition
Let $\mathfrak{g}$ be a Lie algebra. One says that $\mathfrak{g}$ is nilpotent if the lower central series terminates, i.e. if $\mathfrak{g}_n = 0$ for some $n \in \mathbb{N}$.
Explicitly, this means that
$$[X_1, [X_2, [\cdots [X_n, Y] \cdots]]] = 0 \qquad (1)$$
for all $X_1, X_2, \ldots, X_n, Y \in \mathfrak{g}$, so that $\operatorname{ad}_{X_1} \operatorname{ad}_{X_2} \cdots \operatorname{ad}_{X_n} = 0$.
Equivalent conditions
A very special consequence of (1) is that
$$[X, [X, [\cdots [X, Y] \cdots]]] = 0 \qquad (2)$$
Thus $(\operatorname{ad}_X)^n = 0$ for all $X \in \mathfrak{g}$. That is, $\operatorname{ad}_X$ is a nilpotent endomorphism in the usual sense of linear endomorphisms (rather than of Lie algebras). We call such an element $X$ in $\mathfrak{g}$ ad-nilpotent.
Remarkably, if $\mathfrak{g}$ is finite dimensional, the apparently much weaker condition (2) is actually equivalent to (1), as stated by
Engel's theorem: A finite dimensional Lie algebra $\mathfrak{g}$ is nilpotent if and only if all elements of $\mathfrak{g}$ are ad-nilpotent,
which we will not prove here.
A somewhat easier equivalent condition for the nilpotency of $\mathfrak{g}$: $\mathfrak{g}$ is nilpotent if and only if $\operatorname{ad} \mathfrak{g}$ is nilpotent (as a Lie algebra). To see this, first observe that (1) implies that $\operatorname{ad} \mathfrak{g}$ is nilpotent, since the expansion of an $(n-1)$-fold nested bracket will consist of terms of the form in (1). Conversely, one may write
$$[[\cdots [X_1, X_2], \cdots], X_n] = \operatorname{ad}_{[[\cdots [X_1, X_2], \cdots], X_{n-1}]}(X_n),$$
and since $\operatorname{ad}$ is a Lie algebra homomorphism,
$$\operatorname{ad}_{[[\cdots [X_1, X_2], \cdots], X_{n-1}]} = [[\cdots [\operatorname{ad}_{X_1}, \operatorname{ad}_{X_2}], \cdots], \operatorname{ad}_{X_{n-1}}].$$
If $\operatorname{ad} \mathfrak{g}$ is nilpotent, the last expression is zero for large enough n, and accordingly the first. But this implies (1), so $\mathfrak{g}$ is nilpotent.
Also, a finite-dimensional Lie algebra $\mathfrak{g}$ is nilpotent if and only if there exists a descending chain of ideals $\mathfrak{g} = \mathfrak{g}_0 \supseteq \mathfrak{g}_1 \supseteq \cdots \supseteq \mathfrak{g}_n = 0$ such that $[\mathfrak{g}, \mathfrak{g}_i] \subseteq \mathfrak{g}_{i+1}$.
Examples
Strictly upper triangular matrices
If $\mathfrak{gl}(k, \mathbb{R})$ is the set of $k \times k$ matrices with entries in $\mathbb{R}$, then the subalgebra consisting of strictly upper triangular matrices is a nilpotent Lie algebra.
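This can be checked numerically for small cases. The sketch below (plain Python, with illustrative helper names) verifies that repeated brackets of the standard generators of the strictly upper triangular 3×3 matrices vanish:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    """Lie bracket [A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

# Generators E_12, E_13, E_23 of the strictly upper triangular 3x3 matrices
E12 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
E13 = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
E23 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]

# [E12, E23] = E13, and one further bracket with a generator kills it,
# so the lower central series reaches zero in two steps.
print(bracket(E12, E23) == E13)                      # True
print(bracket(E12, bracket(E12, E23)) == [[0]*3]*3)  # True
```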
Heisenberg algebras
A Heisenberg algebra is nilpotent. For example, in dimension 3, the commutator of two strictly upper triangular $3 \times 3$ matrices is a multiple of $E_{13}$ (the matrix unit with a single 1 in the upper-right corner), and $E_{13}$ commutes with every element of the algebra, so the lower central series terminates after two steps.
Cartan subalgebras
A Cartan subalgebra $\mathfrak{c}$ of a Lie algebra $\mathfrak{g}$ is nilpotent and self-normalizing. The self-normalizing condition is equivalent to being its own normalizer in the Lie algebra. This means $N_\mathfrak{g}(\mathfrak{c}) = \mathfrak{c}$. This includes the upper triangular matrices and all diagonal matrices in $\mathfrak{gl}(n)$.
Other examples
If a Lie algebra $\mathfrak{g}$ has an automorphism of prime period with no fixed points except at $0$, then $\mathfrak{g}$ is nilpotent.
Properties
Nilpotent Lie algebras are solvable
Every nilpotent Lie algebra is solvable. This is useful in proving the solvability of a Lie algebra since, in practice, it is usually easier to prove nilpotency (when it holds!) rather than solvability. However,
|
https://en.wikipedia.org/wiki/Savilian%20Professor%20of%20Astronomy
|
The position of Savilian Professor of Astronomy was established at the University of Oxford in 1619. It was founded (at the same time as the Savilian Professorship of Geometry) by Sir Henry Savile, a mathematician and classical scholar who was Warden of Merton College, Oxford, and Provost of Eton College. He appointed John Bainbridge as the first professor, who took up his duties in 1620 or 1621.
There have been 21 astronomy professors in all; Steven Balbus, the current professor, was appointed in 2012. Past professors include Christopher Wren (1661–73), architect of St Paul's Cathedral in London and the Sheldonian Theatre in Oxford; he held the professorship at the time of his commission to rebuild the cathedral after it was destroyed by the Great Fire of London in 1666. Three professors have been awarded the Gold Medal of the Royal Astronomical Society: Charles Pritchard (1870–93), Harry Plaskett (1932–60) and Joseph Silk (1999–2012). The two Savilian chairs have been linked with professorial fellowships at New College, Oxford, since the late 19th century. In the past, some of the professors were provided with an official residence, either near New College or at the Radcliffe Observatory, although this practice ended in the 19th century. The astronomy professor is a member of the Sub-Department of Astrophysics at Oxford.
Foundation and duties
Sir Henry Savile, the Warden of Merton College, Oxford, and Provost of Eton College, was deeply saddened by what the 20th-century mathematician Ida Busbridge has described as "the wretched state of mathematical studies in England", and so founded professorships in geometry and astronomy at the University of Oxford in 1619; both chairs were named after him. He also donated his books to the university's Bodleian Library. He required the professors to be men of good character, at least 26 years old, and to have "imbibed the purer philosophy from the springs of Aristotle and Plato" before acquiring a thorough knowledge of science. The professors could come from any Christian country, but he specified that a professor from England should have a Master of Arts degree as a minimum. He wanted students to be educated in the works of the leading scientists of the ancient world; in addition, the astronomy professor should cover Copernicus and the work of Arab astronomers. Tuition in trigonometry was to be shared by the two professors. As many students would have had little mathematical knowledge, the professors were also permitted to provide instruction in basic mathematics in English (as opposed to Latin, the language used in education at Oxford at the time). He also required the astronomy professor "to take astronomical observations as well by night as by day (making choice of proper instruments prepared for the purpose, and at fitting times and seasons)", and to place in the library records of his discoveries. Savile prohibited the professors from practicing astrology or preparing horoscopes, and stated that accept
|
https://en.wikipedia.org/wiki/Nonlinear%20control
|
Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.
Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion.
Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more rigorous and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can often have attractive features such as simpler implementation, faster speed, more accuracy, or reduced control energy, which justify the more difficult design procedure.
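As a sketch of the linearization idea described above, consider a pendulum, whose equation of motion $\ddot\theta = -(g/L)\sin\theta$ is nonlinear; near the stable equilibrium $\theta = 0$ the expansion $\sin\theta \approx \theta$ yields a linear approximation. The pendulum and the parameter value below are illustrative assumptions, not taken from the text:

```python
import math

G_OVER_L = 9.81  # assumed g/L in s^-2 (i.e. a 1 m pendulum on Earth)

def nonlinear_accel(theta):
    """Angular acceleration of the full nonlinear pendulum model."""
    return -G_OVER_L * math.sin(theta)

def linearized_accel(theta):
    """Linearization about theta = 0, using sin(theta) ≈ theta."""
    return -G_OVER_L * theta

# The linear model is accurate near the equilibrium and degrades away from it
for theta in (0.05, 0.5, 1.5):
    err = abs(nonlinear_accel(theta) - linearized_accel(theta))
    print(f"theta = {theta} rad: |model error| = {err:.4f}")
```

This is exactly the regime restriction mentioned above: linear techniques apply only to solutions near the stable point where the series truncation is valid.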
An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", it does not have the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats. This
|
https://en.wikipedia.org/wiki/Shiu-Yuen%20Cheng
|
Shiu-Yuen Cheng (鄭紹遠) is a Hong Kong mathematician. He is currently the Chair Professor of Mathematics at the Hong Kong University of Science and Technology. Cheng received his Ph.D. in 1974, under the supervision of Shiing-Shen Chern, from University of California at Berkeley. Cheng then spent some years as a post-doctoral fellow and assistant professor at Princeton University and the State University of New York at Stony Brook. Then he became a full professor at University of California at Los Angeles. Cheng chaired the Mathematics departments of both the Chinese University of Hong Kong and the Hong Kong University of Science and Technology in the 1990s. In 2004, he became the Dean of Science at HKUST. In 2012, he became a fellow of the American Mathematical Society.
He is well known for contributions to differential geometry and partial differential equations, including Cheng's eigenvalue comparison theorem, Cheng's maximal diameter theorem, and a number of works with Shing-Tung Yau. Many of Cheng and Yau's works formed part of the corpus of work for which Yau was awarded the Fields medal in 1982. As of 2020, Cheng's most recent research work was published in 1996.
Technical contributions
Gradient estimates and their applications
In 1975, Shing-Tung Yau found a novel gradient estimate for solutions of second-order elliptic partial differential equations on certain complete Riemannian manifolds. Cheng and Yau were able to localize Yau's estimate by making use of a method developed by Eugenio Calabi. The result, known as the Cheng–Yau gradient estimate, is ubiquitous in the field of geometric analysis. As a consequence, Cheng and Yau were able to show the existence of an eigenfunction, corresponding to the first eigenvalue, of the Laplace-Beltrami operator on a complete Riemannian manifold.
Cheng and Yau applied the same methodology to understand spacelike hypersurfaces of Minkowski space and the geometry of hypersurfaces in affine space. A particular application of their results is a Bernstein theorem for closed spacelike hypersurfaces of Minkowski space whose mean curvature is zero; any such hypersurface must be a plane.
In 1916, Hermann Weyl found a differential identity for the geometric data of a convex surface in Euclidean space. By applying the maximum principle, he was able to control the extrinsic geometry in terms of the intrinsic geometry. Cheng and Yau generalized this to the context of hypersurfaces in Riemannian manifolds.
The Minkowski problem and the Monge-Ampère equation
Any strictly convex closed hypersurface in the Euclidean space can be naturally considered as an embedding of the -dimensional sphere, via the Gauss map. The Minkowski problem asks whether an arbitrary smooth and positive function on the -dimensional sphere can be realized as the scalar curvature of the Riemannian metric induced by such an embedding. This was resolved in 1953 by Louis Nirenberg, in the case that is equal to two. In 1976, Cheng and Yau re
|
https://en.wikipedia.org/wiki/Ostrowski%27s%20theorem
|
In number theory, Ostrowski's theorem, due to Alexander Ostrowski (1916), states that every non-trivial absolute value on the rational numbers is equivalent to either the usual real absolute value or a -adic absolute value.
Definitions
Two absolute values $|\cdot|$ and $|\cdot|_*$ on the rationals are defined to be equivalent if they induce the same topology; this can be shown to be equivalent to the existence of a positive real number $\lambda$ such that
$$|x|_* = |x|^\lambda \quad \text{for all } x \in \mathbb{Q}.$$
(Note: In general, if $|\cdot|$ is an absolute value, $|\cdot|^\lambda$ is not necessarily an absolute value anymore; however if two absolute values are equivalent, then each is a positive power of the other.) The trivial absolute value on any field K is defined to be
$$|x|_0 = \begin{cases} 0 & \text{if } x = 0, \\ 1 & \text{otherwise.} \end{cases}$$
The real absolute value on the rationals is the standard absolute value on the reals, defined to be
$$|x|_\infty = \begin{cases} x & \text{if } x \geq 0, \\ -x & \text{if } x < 0. \end{cases}$$
This is sometimes written with a subscript 1 instead of infinity.
For a prime number $p$, the $p$-adic absolute value on $\mathbb{Q}$ is defined as follows: any non-zero rational $x$ can be written uniquely as $x = p^n \tfrac{a}{b}$, where $a$ and $b$ are coprime integers not divisible by $p$, and $n$ is an integer; so we define
$$|x|_p = p^{-n}.$$
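The definition above can be implemented directly; a minimal sketch using exact rational arithmetic (the function names are illustrative):

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): the exponent of p in the factorization of the rational x."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("v_p(0) is +infinity")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:   # factors of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # factors of p in the denominator
        den //= p
        v -= 1
    return v

def p_adic_abs(x, p):
    """|x|_p = p^(-v_p(x)), with |0|_p = 0."""
    if Fraction(x) == 0:
        return Fraction(0)
    return Fraction(p) ** (-p_adic_valuation(x, p))

print(p_adic_abs(Fraction(12), 2))    # 1/4, since 12 = 2^2 * 3
print(p_adic_abs(Fraction(5, 8), 2))  # 8, since 5/8 = 2^-3 * 5
# Multiplicativity: |xy|_p = |x|_p * |y|_p
print(p_adic_abs(12, 2) * p_adic_abs(9, 2) == p_adic_abs(12 * 9, 2))  # True
```

Note that an integer is p-adically "small" when it is divisible by a high power of p, the reverse of the intuition for the real absolute value.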
Proof
The following proof follows the one of Theorem 10.1 in Schikhof (2007).
Let be an absolute value on the rationals. We start the proof by showing that it is entirely determined by the values it takes on prime numbers.
From the fact that $|1| = |1 \cdot 1| = |1|^2$ and the multiplicativity property of the absolute value, we infer that $|1|^2 = |1|$. In particular, $|1|$ has to be 0 or 1 and since $|1| \neq 0$, one must have $|1| = 1$. A similar argument shows that $|-1| = 1$.
For all positive integers $n$, the multiplicativity property entails $|-n| = |-1| \cdot |n| = |n|$. In other words, the absolute value of a negative integer coincides with that of its opposite.
Let $n$ be a positive integer. From the fact that $n \cdot \tfrac{1}{n} = 1$ and the multiplicativity property, we conclude that $\left|\tfrac{1}{n}\right| = |n|^{-1}$.
Let now $x$ be a positive rational. There exist two coprime positive integers $p$ and $q$ such that $x = \tfrac{p}{q}$. The properties above show that $|x| = |p| \cdot |q|^{-1}$. Altogether, the absolute value of a positive rational is entirely determined from that of its numerator and denominator.
Finally, let $\mathcal{P}$ be the set of prime numbers. For all positive integers $n$, we can write
$$n = \prod_{p \in \mathcal{P}} p^{v_p(n)},$$
where $v_p(n)$ is the p-adic valuation of $n$. The multiplicativity property enables one to compute the absolute value of $n$ from that of the prime numbers using the following relationship
$$|n| = \bigg| \prod_{p \in \mathcal{P}} p^{v_p(n)} \bigg| = \prod_{p \in \mathcal{P}} |p|^{v_p(n)}.$$
We continue the proof by separating two cases:
There exists a positive integer $b$ such that $|b| > 1$; or
For all integers $n$, one has $|n| \leq 1$.
First case
Suppose that there exists a positive integer $b$ such that $|b| > 1$. Let $n$ be a non-negative integer and $a$ be a positive integer greater than 1. We express $b^n$ in base $a$: there exist a positive integer $m$ and integers $c_0, \ldots, c_{m-1}$ such that for all $i$, $0 \leq c_i < a$ and $b^n = \sum_{i=0}^{m-1} c_i a^i$. In particular, $a^{m-1} \leq b^n$ so $m - 1 \leq n \log_a b$.
Each term $|c_i a^i|$ is smaller than $a \cdot \max(1, |a|)^{m-1}$ (by the multiplicativity property and the triangle inequality, since $|c_i| \leq c_i < a$). Besides, $m$ is smaller than $n \log_a b + 1$. By the triangle inequality and the above bound on $m$, it follows:
$$|b|^n = |b^n| \leq \sum_{i=0}^{m-1} |c_i a^i| \leq m\, a \max(1, |a|)^{m-1} \leq \left(n \log_a b + 1\right) a \max(1, |a|)^{n \log_a b}.$$
Therefore, raising both sides to the power $\tfrac{1}{n}$, we obtain
$$|b| \leq \left( \left(n \log_a b + 1\right) a \right)^{1/n} \max(1, |a|)^{\log_a b}.$$
Finally, taking the limit as $n$ tends to infinity shows that
$$|b| \leq \max(1, |a|)^{\log_a b}.$$
Together with the condition the above arg
|
https://en.wikipedia.org/wiki/Statistics%20Denmark
|
Statistics Denmark () is a Danish governmental organization under the Ministry of the Interior and Housing, reporting to the Minister of Economic and Internal Affairs. The organization is responsible for creating statistics on the Danish society, including employment statistics, trade balance, and demographics.
Statistics Denmark relies heavily on public registers for statistical production, with a particular emphasis on the Central Person Register for population statistics.
Statistics Denmark's electronic data bank (Statbank.dk) is available freely in Danish or English to any user. It contains nearly all in-house produced statistics, which can be presented as cross-tables, diagrams, or maps, and can be exported to other programs for further analysis. When new general statistics are published in News from Statistics Denmark, the same data is simultaneously released in a more detailed format through the data bank.
History
The first population census in Denmark was conducted in 1769. Statistics Denmark was founded in January 1850, following the introduction of democracy to Denmark, under the name "Statistical Bureau."
In 1966, the Danish Parliament adopted the Act on Statistics Denmark. This act changed the name of the Statistical Bureau to Statistics Denmark and granted an independent Board of Directors the responsibility to determine the institution's work program. This allowed Statistics Denmark to operate independently from government control.
The Act also grants Statistics Denmark access to the basic data necessary for it to produce its statistics. Under the Act, public authorities are required to supply the information they possess when it is requested by Statistics Denmark. The private sector is also obligated to provide certain information.
Since 1970, censuses have been exclusively based on administrative registers, with private citizens participating in surveys on a voluntary basis only. In line with these principles, Statistics Denmark has focused on developing a data collection system primarily reliant on the administrative registers of other public offices. Other collection methods are employed when necessary but are considered supplementary.
Role of Statistics Denmark
The production of statistics in Denmark is highly centralized, with Statistics Denmark at its center. The organization is therefore responsible for providing reliable data to the citizenry, politicians, the business community, public agencies, news media, educational institutions, researchers, and the EU.
The overall mission of the institution is stated in its Strategy 2005 paper as follows:
Statistics Denmark also offers "customized solutions" for purchase. These are tailor-made statistical reports focusing on a specific region and/or form of activity not included in the organization's standard products.
International Cooperation
Statistics Denmark actively participates in international statistical activities, including its involvement in t
|
https://en.wikipedia.org/wiki/Irregular%20matrix
|
An irregular matrix, or ragged matrix, is a matrix that has a different number of elements in each row. Ragged matrices are not used in linear algebra, since standard matrix transformations cannot be performed on them, but they are useful in computing as arrays which are called jagged arrays. Irregular matrices are typically stored using Iliffe vectors.
For example, the following is an irregular matrix:
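In computing terms, such a matrix is naturally a jagged array: an array of rows of independent lengths, much as an Iliffe vector stores one pointer per row. A minimal illustrative sketch (the example values are hypothetical):

```python
# A jagged array: each row is an independent list, so row lengths may
# differ, analogous to an Iliffe vector of pointers to rows.
ragged = [
    [1, 2, 3],
    [4, 5],
    [6, 7, 8, 9],
]

row_lengths = [len(row) for row in ragged]
total = sum(len(row) for row in ragged)
print(row_lengths)  # [3, 2, 4]
print(total)        # 9
```

Because rows are stored separately, indexing `ragged[i][j]` is only valid for `j < len(ragged[i])`; there is no shared column dimension to rely on.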
See also
Regular matrix (disambiguation)
Empty matrix
Sparse matrix
References
Paul E. Black, Ragged matrix, from Dictionary of Algorithms and Data Structures, Paul E. Black, ed., NIST, 2004.
Arrays
Matrices
|
https://en.wikipedia.org/wiki/Moore%20method
|
The Moore method is a deductive manner of instruction used in advanced mathematics courses. It is named after Robert Lee Moore, a famous topologist who first used a stronger version of the method at the University of Pennsylvania when he began teaching there in 1911. (Zitarelli, 2004)
The way the course is conducted varies from instructor to instructor, but the content of the course is usually presented in whole or in part by the students themselves. Instead of using a textbook, the students are given a list of definitions and, based on these, theorems which they are to prove and present in class, leading them through the subject material. The Moore method typically limits the amount of material that a class is able to cover, but its advocates claim that it induces a depth of understanding that listening to lectures cannot give.
The original method
F. Burton Jones, a student of Moore and a practitioner of his method, described it as follows:
The students were forbidden to read any book or article about the subject. They were even forbidden to talk about it outside of class. Hersh and John-Steiner (1977) claim that, "this method is reminiscent of a well-known, old method of teaching swimming called 'sink or swim' ".
Quotations
"That student is taught the best who is told the least." Moore, quote in Parker (2005: vii).
"I hear, I forget. I see, I remember. I do, I understand." (Chinese proverb that was a favorite of Moore's. Quoted in Halmos, P.R. (1985) I want to be a mathematician: an automathography. Springer-Verlag: 258)
References
Chalice, Donald R., 1995, "How to teach a class by the Modified Moore Method." American Mathematical Monthly 102: 317–321.
Cohen, David W., 1982, "A modified Moore method for teaching undergraduate mathematics", American Mathematical Monthly 89(7): 473–474, 487–490.
Hersh, Reuben and John-Steiner, Vera, 1977, "Loving + Hating Mathematics".
Jones, F. Burton, 1977, "The Moore method," American Mathematical Monthly 84: 273–77.
Parker, John, 2005. R. L. Moore: Mathematician and Teacher. Mathematical Association of America.
Wall, H. S. Creative Mathematics. University of Texas Press.
Zitarelli, David, 2004. "The Origin and Early Impact of the Moore Method", American Mathematical Monthly 111: 465–486.
External links
The Legacy of Robert Lee Moore Project.
Links to biographical material and the Moore method.
Mathematics education
|
https://en.wikipedia.org/wiki/Weak%20formulation
|
Weak formulations are important tools for the analysis of mathematical equations that permit the transfer of concepts of linear algebra to solve problems in other fields such as partial differential equations. In a weak formulation, equations or conditions are no longer required to hold absolutely (and this is often not even well defined); instead, the equation is required to hold only when tested against certain "test vectors" or "test functions". In a strong formulation, the solution space is constructed such that these equations or conditions are already fulfilled.
The Lax–Milgram theorem, named after Peter Lax and Arthur Milgram who proved it in 1954, provides weak formulations for certain systems on Hilbert spaces.
General concept
Let be a Banach space, let be the dual space of , let , and let .
A vector is a solution of the equation
if and only if for all ,
Here, is called a test vector (in general) or a test function (if is a function space).
To bring this into the generic form of a weak formulation, find such that
by defining the bilinear form
Example 1: linear system of equations
Now, let and be a linear mapping. Then, the weak formulation of the equation
involves finding such that for all the following equation holds:
where denotes an inner product.
Since is a linear mapping, it is sufficient to test with basis vectors, and we get
Actually, expanding we obtain the matrix form of the equation
where and
The bilinear form associated to this weak formulation is
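As a concrete sketch (using a hypothetical 2×2 system, not an example from the article), the weak and matrix formulations can be checked against each other: for a solution u of Au = b, testing the bilinear form against the basis vectors recovers exactly the rows of the matrix equation.

```python
# Hypothetical 2x2 system: A = [[2, 1], [1, 3]], b = (3, 5), solution u = (0.8, 1.4).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]

def a_form(u, v):
    # bilinear form a(u, v) = <Au, v>
    Au = [sum(A[i][j] * u[j] for j in range(2)) for i in range(2)]
    return sum(Au[i] * v[i] for i in range(2))

def f_form(v):
    # right-hand-side functional <b, v>
    return sum(b[i] * v[i] for i in range(2))

u = [0.8, 1.4]                      # solves Au = b
basis = [[1.0, 0.0], [0.0, 1.0]]
# testing against the basis vectors recovers the rows of Au = b
assert all(abs(a_form(u, e) - f_form(e)) < 1e-12 for e in basis)
```

Since the bilinear form is linear in the test vector, agreement on a basis implies agreement for every test vector, which is why testing with basis vectors suffices.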
Example 2: Poisson's equation
To solve Poisson's equation
on a domain with on its boundary, and to specify the solution space later, one can use the scalar product
to derive the weak formulation. Then, testing with differentiable functions yields
The left side of this equation can be made more symmetric by integration by parts using Green's identity and assuming that on
This is what is usually called the weak formulation of Poisson's equation. Functions in the solution space must be zero on the boundary, and have square-integrable derivatives. The appropriate space to satisfy these requirements is the Sobolev space of functions with weak derivatives in and with zero boundary conditions, so
The generic form is obtained by assigning
and
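To make the weak formulation of Poisson's equation concrete, here is a minimal Galerkin sketch (an illustrative one-dimensional analogue, assuming piecewise-linear elements on a uniform grid) for −u″ = f on (0, 1) with u(0) = u(1) = 0; the bilinear form ∫ u′v′ dx reduces to the familiar tridiagonal stiffness matrix.

```python
def solve_poisson_1d(n, f=1.0):
    """P1 Galerkin finite elements for -u'' = f on (0, 1), u(0) = u(1) = 0.
    Weak form: find u with  integral(u' v') = integral(f v)  for all test v."""
    h = 1.0 / n
    m = n - 1                       # number of interior nodes
    diag = [2.0 / h] * m            # a(phi_i, phi_i)
    off = -1.0 / h                  # a(phi_i, phi_{i+1})
    rhs = [f * h] * m               # integral of f * phi_i for constant f
    # Thomas algorithm: forward elimination on the tridiagonal system
    for i in range(1, m):
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    # back substitution
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u                        # nodal values at x_i = i/n, i = 1..n-1

u = solve_poisson_1d(8)
# for f = 1 the exact solution is u(x) = x(1 - x)/2, so u(1/2) = 0.125
```

For constant f the piecewise-linear Galerkin solution happens to be exact at the nodes, so the computed value at x = 1/2 matches u(1/2) = 0.125 to rounding error.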
The Lax–Milgram theorem
This is a formulation of the Lax–Milgram theorem which relies on properties of the symmetric part of the bilinear form. It is not the most general form.
Let be a Hilbert space and a bilinear form on which is
bounded: and
coercive:
Then, for any there is a unique solution to the equation
and it holds
Application to example 1
Here, the Lax–Milgram theorem is a stronger result than is needed.
Boundedness: all bilinear forms on are bounded. In particular, we have
Coercivity: this actually means that the real parts of the eigenvalues of are not smaller than . Since this implies in particular that no eigenvalue is zero, the system is solvable.
Additionally, this yields the es
|
https://en.wikipedia.org/wiki/Eleanor%20Roosevelt%20High%20School%20%28Maryland%29
|
Eleanor Roosevelt High School (ERHS) is a Maryland public magnet high school specializing in science, technology, engineering, and mathematics. The school was established in 1976 at its current location in Greenbelt, Maryland, United States and is part of the Prince George's County Public Schools system. It was the first high school named for former first lady Eleanor Roosevelt.
It serves all of the City of Greenbelt and a section of the Seabrook census-designated place. It also serves a section of the former Goddard CDP.
Roosevelt has received numerous awards, including being twice awarded National Blue Ribbon School of Excellence; a New American High School; a National School of Character; and receiving the Siemens Awards for Advanced Placement. Roosevelt was named #382 on America's Top 1,500 Public High Schools list for 2009, by Newsweek Magazine and was also recognized as a Silver Medal School by U.S. News & World Report, in 2008.
Several prominent figures have attended Eleanor Roosevelt, including Sergey Brin, one of the two founders of Google; R&B singers Mýa and Kenny Lattimore; television personality Martin Lawrence; and numerous basketball and football players. Alumnus James Seppi set the record for the fastest float down the lower Colorado River.
History
In December 1975 Margaret Wolfe, a woman who previously lived in Greenbelt, sent a letter to the Washington Star suggesting that the school be named after Eleanor Roosevelt. Edna Benefiel, another woman who once resided in Greenbelt, later sent another letter to the Star also favoring the Eleanor Roosevelt name. Prince George's Post released an editorial favoring the naming on January 8, 1976. The PGCPS board voted for that name one week later. The school was scheduled to open in fall 1976.
Academics
Roosevelt is best known for its specialized Science and Technology (S/T) program, which has been in place since the school was first opened. Roosevelt is the S/T center for the northern part of Prince George's County, and admission is based on a competitive exam. Roosevelt is a member of the National Consortium for Specialized Secondary Schools of Mathematics, Science and Technology (NCSSSMST).
Roosevelt is the first of three specialized science and technology centers located in the Prince George's County Public Schools system and an active member of the National Consortium for Specialized Secondary Schools of Mathematics, Science and Technology (NCSSSMST). The magnet operates as a "school within a school": only a portion of the students who attend Roosevelt are enrolled in the magnet program. Many core courses, such as English and Social Studies classes, mix S/T, AOIT, QUEST, and comprehensive students in the same classes. The Science and Technology Center is a highly competitive selective-enrollment program, and students are admitted into the ma
|
https://en.wikipedia.org/wiki/Jackson%20network
|
In queueing theory, a discipline within the mathematical theory of probability, a Jackson network (sometimes Jacksonian network) is a class of queueing network where the equilibrium distribution is particularly simple to compute as the network has a product-form solution. It was the first significant development in the theory of networks of queues, and generalising and applying the ideas of the theorem to search for similar product-form solutions in other networks has been the subject of much research, including ideas used in the development of the Internet. The networks were first identified by James R. Jackson, and his paper was reprinted in the journal Management Science's 'Ten Most Influential Titles of Management Science's First Fifty Years'.
Jackson was inspired by the work of Burke and Reich, though Jean Walrand notes "product-form results … [are] a much less immediate result of the output theorem than Jackson himself appeared to believe in his fundamental paper".
An earlier product-form solution was found by R. R. P. Jackson for tandem queues (a finite chain of queues where each customer must visit each queue in order) and cyclic networks (a loop of queues where each customer must visit each queue in order).
A Jackson network consists of a number of nodes, where each node represents a queue in which the service rate can be both node-dependent (different nodes have different service rates) and state-dependent (service rates change depending on queue lengths). Jobs travel among the nodes following a fixed routing matrix. All jobs at each node belong to a single "class" and jobs follow the same service-time distribution and the same routing mechanism. Consequently, there is no notion of priority in serving the jobs: all jobs at each node are served on a first-come, first-served basis.
Jackson networks where a finite population of jobs travel around a closed network also have a product-form solution described by the Gordon–Newell theorem.
Necessary conditions for a Jackson network
A network of m interconnected queues is known as a Jackson network or Jacksonian network if it meets the following conditions:
if the network is open, any external arrivals to node i form a Poisson process,
All service times are exponentially distributed and the service discipline at all queues is first-come, first-served,
a customer completing service at queue i will either move to some new queue j with probability or leave the system with probability , which, for an open network, is non-zero for some subset of the queues,
the utilization of all of the queues is less than one.
Theorem
In an open Jackson network of m M/M/1 queues where the utilization is less than 1 at every queue, the equilibrium state probability distribution exists and for state is given by the product of the individual queue equilibrium distributions
The result also holds for M/M/c model stations with ci servers at the station, with utilization requirement .
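The product form is easy to evaluate once the traffic equations are solved. The sketch below uses a hypothetical two-node open network (the rates and routing probabilities are made up for illustration): external arrivals enter node 0, half of the jobs completing service there move to node 1, and everyone eventually leaves.

```python
# Hypothetical open Jackson network with two M/M/1 nodes.
# External Poisson arrivals only at node 0 (rate gamma_0 = 1); after service
# at node 0 a job moves to node 1 with probability 1/2 and otherwise leaves;
# all jobs leave after node 1.
gamma = [1.0, 0.0]                 # external arrival rates
p01 = 0.5                          # routing probability node 0 -> node 1
mu = [2.0, 1.0]                    # service rates

# Traffic equations lambda_i = gamma_i + sum_j lambda_j p_ji, solved directly:
lam = [gamma[0], gamma[1] + gamma[0] * p01]        # [1.0, 0.5]
rho = [lam[i] / mu[i] for i in range(2)]           # utilizations, both < 1

def pi(n0, n1):
    """Equilibrium probability of state (n0, n1): by Jackson's theorem it is
    the product of the individual M/M/1 geometric distributions."""
    return ((1 - rho[0]) * rho[0] ** n0) * ((1 - rho[1]) * rho[1] ** n1)
```

With these rates both utilizations are 1/2, so for instance pi(0, 0) = (1/2)(1/2) = 1/4, and the probabilities over all states sum to one.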
Definition
In an
|
https://en.wikipedia.org/wiki/Chien%20search
|
In abstract algebra, the Chien search, named after Robert Tienwen Chien, is a fast algorithm for determining roots of polynomials defined over a finite field. Chien search is commonly used to find the roots of error-locator polynomials encountered in decoding Reed-Solomon codes and BCH codes.
Algorithm
The problem is to find the roots of the polynomial (over the finite field ):
The roots may be found using brute force: there are a finite number of , so the polynomial can be evaluated for each element . If the polynomial evaluates to zero, then that element is a root.
For the trivial case , only the coefficient need be tested for zero. Below, the only concern will be for non-zero .
A straightforward evaluation of the polynomial involves general multiplications and additions. A more efficient scheme would use Horner's method for general multiplications and additions. Both of these approaches may evaluate the elements of the finite field in any order.
Chien search improves upon the above by selecting a specific order for the non-zero elements. In particular, the finite field has a (constant) generator element . Chien tests the elements in the generator's order . Consequently, Chien search needs only multiplications by constants and additions. The multiplications by constants are less complex than general multiplications.
The Chien search is based on two observations:
Each non-zero may be expressed as for some , where is a primitive element of . Thus the powers for cover the entire field (excluding the zero element).
The following relationship exists:
In other words, we may define each as the sum of a set of terms , from which the next set of coefficients may be derived thus:
In this way, we may start at with , and iterate through each value of up to . If at any stage the resultant summation is zero, i.e.
then also, so is a root. In this way, we check every element in the field.
When implemented in hardware, this approach significantly reduces the complexity, as all multiplications consist of one variable and one constant, rather than two variables as in the brute-force approach.
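A small sketch of the search over GF(2⁴), generated by the primitive polynomial x⁴ + x + 1 (the error-locator polynomial used here is a made-up example): at each evaluation point the j-th term is multiplied by the fixed constant αʲ, so each step costs only t constant multiplications and t additions.

```python
# GF(2^4) generated by the primitive polynomial x^4 + x + 1 (0b10011)
PRIM, FIELD_BITS = 0b10011, 4
EXP = [1]
for _ in range(2 ** FIELD_BITS - 2):          # EXP[i] = alpha^i, i = 0..14
    v = EXP[-1] << 1
    if v & (1 << FIELD_BITS):
        v ^= PRIM
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def chien_roots(coeffs):
    """Exponents i such that Lambda(alpha^i) = 0; coeffs[j] multiplies x^j."""
    terms = list(coeffs)                      # gamma_j = lambda_j initially
    roots = []
    for i in range(15):
        s = 0
        for t in terms:                       # addition in GF(2^m) is XOR
            s ^= t
        if s == 0:
            roots.append(i)                   # alpha^i is a root
        # update: multiply the j-th term by the constant alpha^j
        terms = [gf_mul(t, EXP[j]) for j, t in enumerate(terms)]
    return roots

# Lambda(x) = (x - alpha^2)(x - alpha^5) = x^2 + alpha*x + alpha^7 here,
# i.e. coefficients [11, 2, 1] (constant term first) with roots alpha^2, alpha^5
assert chien_roots([11, 2, 1]) == [2, 5]
```

The invariant is that after i update steps the j-th term equals λⱼα^(ij), so the running sum is exactly Λ(αⁱ), matching the brute-force evaluation at a fraction of the cost.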
References
Error detection and correction
Finite fields
|
https://en.wikipedia.org/wiki/Jordan%20measure
|
In mathematics, the Peano–Jordan measure (also known as the Jordan content) is an extension of the notion of size (length, area, volume) to shapes more complicated than, for example, a triangle, disk, or parallelepiped.
It turns out that for a set to have Jordan measure it should be well-behaved in a certain restrictive sense. For this reason, it is now more common to work with the Lebesgue measure, which is an extension of the Jordan measure to a larger class of sets. Historically speaking, the Jordan measure came first, towards the end of the nineteenth century. For historical reasons, the term Jordan measure is now well-established for this set function, despite the fact that it is not a true measure in its modern definition, since Jordan-measurable sets do not form a σ-algebra. For example, singleton sets in each have a Jordan measure of 0, while , a countable union of them, is not Jordan-measurable. For this reason, some authors prefer to use the term Jordan content.
The Peano–Jordan measure is named after its originators, the French mathematician Camille Jordan, and the Italian mathematician Giuseppe Peano.
Jordan measure of "simple sets"
Consider Euclidean space Jordan measure is first defined on Cartesian products of bounded half-open intervals
that are closed at the left and open at the right, with all endpoints and finite real numbers (half-open intervals are a technical choice; as we see below, one can use closed or open intervals if preferred). Such a set will be called a rectangle. The volume of such a rectangle is defined to be the product of the lengths of the intervals:
Next, one considers simple sets, which are finite unions of rectangles,
for any
One cannot define the Jordan measure of as simply the sum of the measures of the individual rectangles, because such a representation of is far from unique, and there could be significant overlaps between the rectangles.
Luckily, any such simple set can be rewritten as a union of another finite family of rectangles, rectangles which this time are mutually disjoint, and then one defines the Jordan measure as the sum of measures of the disjoint rectangles.
One can show that this definition of the Jordan measure of is independent of the representation of as a finite union of disjoint rectangles. It is in the "rewriting" step that the assumption of rectangles being made of half-open intervals is used.
Extension to more complicated sets
Notice that a set which is a product of closed intervals,
is not a simple set, and neither is a ball. Thus, so far the set of Jordan measurable sets is still very limited. The key step is then defining a bounded set to be Jordan measurable if it is "well-approximated" by simple sets, exactly in the same way as a function is Riemann integrable if it is well-approximated by piecewise-constant functions.
Formally, for a bounded set define its as
and its as
where the infimum and supremum are taken over simple sets The set is said to be a if the i
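The squeeze between inner and outer approximations can be illustrated numerically (a sketch, not part of the formal definition): cover the unit disk with half-open grid squares of side 1/n, and sum the areas of squares contained in the disk and of squares meeting it. Both bounds approach π as n grows.

```python
# Inner and outer Jordan approximations of the unit disk by half-open grid
# squares of side 1/n. Since the disk is convex, a square lies inside the
# disk iff all four corners do; and because 0 is itself a grid line, a
# square meets the closed disk iff some corner does.
def jordan_bounds(n):
    side = 1.0 / n
    inner = outer = 0
    for i in range(-n - 1, n + 1):
        for j in range(-n - 1, n + 1):
            corners = [((i + di) * side, (j + dj) * side)
                       for di in (0, 1) for dj in (0, 1)]
            inside = [x * x + y * y <= 1.0 for x, y in corners]
            if all(inside):
                inner += 1          # square contained in the disk
            if any(inside):
                outer += 1          # square meets the disk
    return inner * side * side, outer * side * side

inner, outer = jordan_bounds(50)
# inner < pi < outer, and the gap shrinks like O(1/n)
```

The gap between the two sums is proportional to the number of boundary squares, which grows like n while each contributes area 1/n², so the disk is Jordan measurable with measure π.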
|
https://en.wikipedia.org/wiki/Religion%20in%20Egypt
|
Religion in Egypt controls many aspects of social life and is endorsed by law. The state religion of Egypt is Islam; in the absence of official statistics, estimates of religious affiliation vary greatly. Religion has been excluded from the census since 2006, so available statistics are estimates made by religious and non-governmental agencies. The country is majority Sunni Muslim (estimated at 85–95% of the population), with the next largest religious group being Coptic Orthodox Christians (with estimates ranging from 5–15%). The exact numbers are subject to controversy, with Christians alleging that they have been systematically undercounted in existing censuses.
Egypt hosts two major religious institutions: Al-Azhar Mosque, founded in 970 CE by the Fatimids as the first Islamic university in Egypt, and the Coptic Orthodox Church of Alexandria, established in the middle of the 1st century by Saint Mark.
In Egypt, Muslims and Christians share a common history, national identity, ethnicity, race, culture, and language.
In 2002, under the Mubarak government, Coptic Christmas (January 7) was recognized as an official holiday, though Christians complain of being minimally represented in law enforcement, state security and public office, and of being discriminated against in the workforce on the basis of their religion.
Demographics
In 2010, estimates based on the contested 2006 census data put Muslims at 94.9% of Egyptians, Christians at 5.1%, and Jews, Buddhists, and adherents of other religions at less than 1%. The share of Christians in the Egyptian population has, according to official statistics, been declining, with the highest share reported in the past century being in 1927, when the official census put the percentage of Egyptian Christians at 8.3%. In each of the seven subsequent censuses, the percentage shrank, ending at 5.7% in 1996.
However, most Christians refuted these figures, claiming they have been under-counted. Christians maintain that they represent up to 15% or even 25% of the Egyptian population. In 2017 state-owned newspaper Al Ahram claimed that the percentage of Christians ranged from 10 to 15%, similar to the range claimed by the Washington Institute for Near East Policy.
Recent self-identification surveys put the Christian percentage at around 10%, as found by Afrobarometer in 2016 (which estimated the country to be 10.3% Christian and 89.4% Muslim) and by Arab Barometer in 2019 (which estimated it to be 9.6% Christian and 90.3% Muslim).
According to 2015 figures from the Central Intelligence Agency (CIA), Sunni Muslims make up 90% of the population, with Christians making up the remaining 10%. A significant number of Sunni Muslims follow native Sufi orders. There are reportedly close to fifty thousand Ahmadi Muslims in Egypt. Estimates of Egypt's Shia Twelvers and Ismaili community range from 800,000 to about two to three million members.
Most Egyptian Christians belong to the native Coptic Orthodox Church of Alexandria, an Oriental Ortho
|
https://en.wikipedia.org/wiki/Resolvent%20formalism
|
In mathematics, the resolvent formalism is a technique for applying concepts from complex analysis to the study of the spectrum of operators on Banach spaces and more general spaces. Formal justification for the manipulations can be found in the framework of holomorphic functional calculus.
The resolvent captures the spectral properties of an operator in the analytic structure of the functional. Given an operator , the resolvent may be defined as
Among other uses, the resolvent may be used to solve the inhomogeneous Fredholm integral equations; a commonly used approach is a series solution, the Liouville–Neumann series.
The resolvent of can be used to directly obtain information about the spectral decomposition
of . For example, suppose is an isolated eigenvalue in the
spectrum of . That is, suppose there exists a simple closed curve
in the complex plane that separates from the rest of the spectrum of .
Then the residue
defines a projection operator onto the eigenspace of .
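The residue can be computed numerically by discretizing the contour integral (1/2πi) ∮ (zI − A)⁻¹ dz. The 2×2 matrix below is a hypothetical example with eigenvalues 1 and 3; integrating the resolvent around a small circle enclosing z = 1 recovers the spectral projection onto that eigenspace.

```python
import cmath

def resolvent(A, z):
    # (zI - A)^{-1} for a 2x2 matrix via the adjugate formula
    a, b = z - A[0][0], -A[0][1]
    c, d = -A[1][0], z - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def spectral_projection(A, center, radius, n=400):
    """Approximate (1 / 2*pi*i) times the contour integral of the resolvent
    around an isolated eigenvalue, using the trapezoidal rule on a circle."""
    P = [[0j, 0j], [0j, 0j]]
    for k in range(n):
        theta = 2.0 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * theta)
        dz = radius * 1j * cmath.exp(1j * theta) * (2.0 * cmath.pi / n)
        R = resolvent(A, z)
        for i in range(2):
            for j in range(2):
                P[i][j] += R[i][j] * dz / (2j * cmath.pi)
    return P

A = [[1.0, 1.0], [0.0, 3.0]]            # eigenvalues 1 and 3
P = spectral_projection(A, center=1.0, radius=0.5)
# P approximates [[1, -0.5], [0, 0]], the projection onto the eigenspace for 1
```

Because the integrand is analytic on the contour, the trapezoidal rule converges exponentially fast here; one can also check that P is idempotent (P² = P), as a projection must be.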
The Hille–Yosida theorem relates the resolvent through a Laplace transform to an integral over the one-parameter group of transformations generated by . Thus, for example, if is Hermitian, then is a one-parameter group of unitary operators. Whenever , the resolvent of A at z can be expressed as the Laplace transform
where the integral is taken along the ray .
History
The first major use of the resolvent operator as a series in (cf. Liouville–Neumann series) was by Ivar Fredholm, in a landmark 1903 paper in Acta Mathematica that helped establish modern operator theory.
The name resolvent was given by David Hilbert.
Resolvent identity
For all in , the resolvent set of an operator , we have that the first resolvent identity (also called Hilbert's identity) holds:
(Note that Dunford and Schwartz, cited, define the resolvent as , instead, so that the formula above differs in sign from theirs.)
The second resolvent identity is a generalization of the first resolvent identity, above, useful for comparing the resolvents of two distinct operators. Given operators and , both defined on the same linear space, and in the following identity holds,
A one-line proof goes as follows:
Compact resolvent
When studying a closed unbounded operator : → on a Hilbert space , if there exists such that is a compact operator, we say that has compact resolvent. The spectrum of such is a discrete subset of . If furthermore is self-adjoint, then and there exists an orthonormal basis of eigenvectors of with eigenvalues respectively. Also, has no finite accumulation point.
See also
Resolvent set
Stone's theorem on one-parameter unitary groups
Holomorphic functional calculus
Spectral theory
Compact operator
Laplace transform
Fredholm theory
Liouville–Neumann series
Decomposition of spectrum (functional analysis)
Limiting absorption principle
References
.
.
Fredholm theory
Formalism (deductive)
Mathematical physics
|
https://en.wikipedia.org/wiki/Stone%27s%20theorem
|
Stone's theorem may refer to a number of theorems of Marshall Stone:
Stone's representation theorem for Boolean algebras
Stone–Weierstrass theorem
Stone–von Neumann theorem
Stone's theorem on one-parameter unitary groups
It may also refer to the theorem of A. H. Stone that for Hausdorff spaces the property of being a paracompact space and being a fully normal space are equivalent, or its immediate corollary that metric spaces are paracompact.
|
https://en.wikipedia.org/wiki/Harry%20Vandiver
|
Harry Schultz Vandiver (21 October 1882 – 9 January 1973) was an American mathematician, known for work in number theory.
He was born in Philadelphia, Pennsylvania to John Lyon and Ida Frances (Everett) Vandiver. He did not complete a formal education, choosing instead to leave school at an early age to work for his father's firm, although he did attend some graduate classes at the University of Pennsylvania in 1904–5.
From 1917 to 1919 he was a member of the United States Naval Reserve, and in 1919 became an instructor of mathematics at Cornell University, where he taught for five years before becoming an associate professor of pure mathematics at the University of Texas in 1924. He was made a full professor the following year, and named distinguished professor of applied mathematics and astronomy in 1947. He remained at Texas until his retirement in 1966.
Vandiver won the Frank Nelson Cole Prize of the American Mathematical Society for his paper on Fermat's Last Theorem in 1931. In 1952 he used a computer to study it, proving the result for all primes less than 2000.
A question he frequently asked about the class group of cyclotomic fields, and now known as Vandiver's conjecture, was first posed in an 1849 letter from Ernst Kummer to Leopold Kronecker.
For the academic year 1927–1928 Vandiver received a Guggenheim Fellowship. In 1934 he was elected to the National Academy of Sciences. In 1945 the University of Pennsylvania gave him an honorary doctoral degree.
References
External links
1882 births
1973 deaths
20th-century American mathematicians
Number theorists
Cornell University faculty
University of Texas at Austin faculty
People from Philadelphia
Members of the United States National Academy of Sciences
|
https://en.wikipedia.org/wiki/Kontorovich%E2%80%93Lebedev%20transform
|
In mathematics, the Kontorovich–Lebedev transform is an integral transform which uses a Macdonald function (modified Bessel function of the second kind) with imaginary index as its kernel. Unlike other Bessel function transforms, such as the Hankel transform, this transform involves integrating over the index of the function rather than its argument.
The transform of a function ƒ(x) and its inverse (provided they exist) are given below:
Laguerre had earlier studied a similar transform with respect to the Laguerre function:
Erdélyi et al., for instance, contains a short list of Kontorovich–Lebedev transforms as well as references to the original work of Kontorovich and Lebedev in the late 1930s. This transform is mostly used in solving the Laplace equation in cylindrical coordinates for wedge-shaped domains by the method of separation of variables.
References
Erdélyi et al. Table of Integral Transforms Vol. 2 (McGraw Hill 1954)
I. N. Sneddon, The Use of Integral Transforms (McGraw Hill, New York, 1972)
Integral transforms
Special functions
|
https://en.wikipedia.org/wiki/American%20Football%20League%20win%E2%80%93loss%20records
|
See also
American Football League
Sources
American football records and statistics
American Football League
|
https://en.wikipedia.org/wiki/George%20F.%20Carrier
|
George Francis Carrier (May 4, 1918 – March 8, 2002) was an engineer and physicist, and the T. Jefferson Coolidge Professor of Applied Mathematics Emeritus of Harvard University. He was particularly noted for his ability to intuitively model a physical system and then deduce an analytical solution. He worked especially in the modeling of fluid mechanics, combustion, and tsunamis.
Born in Millinocket, Maine, he received a master's degree in engineering in 1939 and a Ph.D. in 1944 from Cornell University with a dissertation in applied mechanics entitled Investigations in the Field of Aeolotropic Elasticity and the Bending of the Sectorial-Plate under the supervision of J. Norman Goodier. He was co-author of a number of mathematical textbooks and over 100 journal papers.
Carrier was elected to the American Academy of Arts and Sciences in 1953, the United States National Academy of Sciences in 1967, and the American Philosophical Society in 1976. In 1990, he received the National Medal of Science, the United States' highest scientific award, presented by President Bush, for his contributions to the natural sciences.
He died from esophageal cancer on March 8, 2002.
Carrier's Rule
Carrier is known for "Carrier's Rule", a humorous explanation of why divergent asymptotic series often yield good approximations if the first few terms are taken even when the expansion parameter is of order one, while in the case of a convergent series many terms are needed to get a good approximation: “Divergent series converge faster than convergent series because they don't have to converge.”
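The phenomenon is easy to observe numerically. The sketch below (using the standard divergent asymptotic series for the complementary error function, not an example due to Carrier) shows the truncation error shrinking over the first few terms and then growing again as the divergence takes over.

```python
import math

def erfc_series(x, terms):
    """Partial sum of the divergent asymptotic series
    erfc(x) ~ e^{-x^2} / (x sqrt(pi)) * sum_k (-1)^k (2k-1)!! / (2x^2)^k."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(2 * k + 1) / (2.0 * x * x)   # ratio of consecutive terms
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 2.0
exact = math.erfc(x)
errors = [abs(erfc_series(x, n) - exact) for n in range(1, 12)]
# the error decreases for the first few terms, then the divergence takes over
```

At x = 2 the three-term truncation is already accurate to a few parts in a hundred thousand, while adding many more terms makes the approximation worse, which is exactly the behavior Carrier's quip describes.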
References
Notes
Other
Sources
The Harvard Gazette Online
External links
Obituary at www.news.harvard.edu
National Medal of Science laureates
20th-century American physicists
20th-century American mathematicians
Cornell University College of Engineering alumni
Fluid dynamicists
Harvard University faculty
1918 births
2002 deaths
People from Millinocket, Maine
Deaths from esophageal cancer
Deaths from cancer in Massachusetts
Brown University faculty
Members of the American Philosophical Society
|
https://en.wikipedia.org/wiki/Barrier%20function
|
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle.
The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Renewed interest in logarithmic barrier functions was motivated by their connection with primal–dual interior point methods.
Motivation
Consider the following constrained optimization problem:
minimize
subject to
where is some constant. If one wishes to remove the inequality constraint, the problem can be re-formulated as
minimize ,
where if , and zero otherwise.
This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function , and therefore the objective function , is discontinuous, preventing the use of calculus to solve it.
A barrier function, now, is a continuous approximation to that tends to infinity as approaches from above. Using such a function, a new optimization problem is formulated, viz.
minimize
where is a free parameter. This problem is not equivalent to the original, but as approaches zero, it becomes an ever-better approximation.
Logarithmic barrier function
For logarithmic barrier functions, is defined as when and otherwise (in one dimension; see below for a definition in higher dimensions). This essentially relies on the fact that tends to negative infinity as tends to 0.
This introduces a gradient to the function being optimized which favors less extreme values of (in this case values lower than ), while having relatively low impact on the function away from these extremes.
Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized.
Higher dimensions
Extending to higher dimensions is simple, provided each dimension is independent. For each variable which should be limited to be strictly lower than , add .
Formal definition
Minimize subject to
Assume strictly feasible:
Define logarithmic barrier
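A one-dimensional sketch (with the made-up objective f(x) = −x and constraint x ≤ 1): the barrier problem minimizes −x − μ log(1 − x), whose analytic minimizer is x* = 1 − μ, so the iterates creep toward the constrained optimum x = 1 as μ → 0.

```python
import math

def barrier_objective(x, mu, b=1.0):
    # f(x) = -x with a logarithmic barrier enforcing x < b
    return -x - mu * math.log(b - x)

def golden_section(f, lo, hi, iters=200):
    # minimize a unimodal function on [lo, hi]
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

results = {}
for mu in (1.0, 0.1, 0.01):
    results[mu] = golden_section(lambda x: barrier_objective(x, mu),
                                 -10.0, 1.0 - 1e-9)
# results[mu] is close to 1 - mu: the minimizer approaches the boundary as mu -> 0
```

Setting the derivative −1 + μ/(1 − x) to zero confirms the analytic minimizer x* = 1 − μ, which the numerical search reproduces for each μ.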
See also
Penalty method
Augmented Lagrangian method
References
External links
Lecture 14: Barrier method from Professor Lieven Vandenberghe of UCLA
Constraint programming
Convex optimization
Types of functions
|
https://en.wikipedia.org/wiki/Discrete-time%20Markov%20chain
|
In probability, a discrete-time Markov chain (DTMC) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not any variables in the past. For instance, a machine may have two states, A and E. When it is in state A, there is a 40% chance of it moving to state E and a 60% chance of it remaining in state A. When it is in state E, there is a 70% chance of it moving to A and a 30% chance of it staying in E. The sequence of states of the machine is a Markov chain. If we denote the chain by then is the state which the machine starts in and is the random variable describing its state after 10 transitions. The process continues forever, indexed by the natural numbers.
An example of a stochastic process which is not a Markov chain is the model of a machine which has states A and E and moves to A from either state with 50% chance if it has ever visited A before, and 20% chance if it has never visited A before (leaving a 50% or 80% chance that the machine moves to E). This is because the behavior of the machine depends on the whole history—if the machine is in E, it may have a 50% or 20% chance of moving to A, depending on its past values. Hence, it does not have the Markov property.
A Markov chain can be described by a stochastic matrix, which lists the probabilities of moving to each state from any individual state. From this matrix, the probability of being in a particular state n steps in the future can be calculated. A Markov chain's state space can be partitioned into communicating classes that describe which states are reachable from each other (in one transition or in many). Each state can be described as transient or recurrent, depending on the probability of the chain ever returning to that state. Markov chains can have properties including periodicity, reversibility and stationarity. A continuous-time Markov chain is like a discrete-time Markov chain, but it moves states continuously through time rather than as discrete time steps. Other stochastic processes can satisfy the Markov property, the property that past behavior does not affect the process, only the present state.
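The two-state machine above can be written down directly (a sketch in plain Python): powers of the stochastic matrix give the n-step transition probabilities, and both rows converge to the stationary distribution (7/11, 4/11).

```python
# The two-state machine from the text; rows are the current state (A, E)
# and columns the next state, so this is the stochastic (transition) matrix.
P = [[0.6, 0.4],
     [0.7, 0.3]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def n_step(P, n):
    # P^n lists the n-step transition probabilities
    R = [[1.0, 0.0], [0.0, 1.0]]           # identity
    for _ in range(n):
        R = matmul(R, P)
    return R

P10 = n_step(P, 10)
# both rows of P10 are already close to the stationary distribution (7/11, 4/11)
```

The second eigenvalue of this matrix is −0.1, so the rows of Pⁿ approach the stationary distribution geometrically fast, with error on the order of 0.1ⁿ.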
Definition
A discrete-time Markov chain is a sequence of random variables with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:
if both conditional probabilities are well defined, that is, if
The possible values of Xi form a countable set S called the state space of the chain.
Markov chains are often described by a sequence of directed graphs, where the edges of graph n are labeled by the probabilities of going from one state at time n to the other states at time n + 1. The same information is represented by the transition matrix from time n to time n + 1. However, Markov chains are frequently assumed to be time-homogeneous (see variations below), in which case the gra
|
https://en.wikipedia.org/wiki/Regular%20measure
|
In mathematics, a regular measure on a topological space is a measure for which every measurable set can be approximated from above by open measurable sets and from below by compact measurable sets.
Definition
Let (X, T) be a topological space and let Σ be a σ-algebra on X. Let μ be a measure on (X, Σ). A measurable subset A of X is said to be inner regular if

μ(A) = sup { μ(K) : K ⊆ A, K compact and measurable }

and said to be outer regular if

μ(A) = inf { μ(G) : G ⊇ A, G open and measurable }.
A measure is called inner regular if every measurable set is inner regular. Some authors use a different definition: a measure is called inner regular if every open measurable set is inner regular.
A measure is called outer regular if every measurable set is outer regular.
A measure is called regular if it is outer regular and inner regular.
Examples
Regular measures
Lebesgue measure on the real line is a regular measure: see the regularity theorem for Lebesgue measure.
Any Baire probability measure on any locally compact σ-compact Hausdorff space is a regular measure.
Any Borel probability measure on a locally compact Hausdorff space with a countable base for its topology, or compact metric space, or Radon space, is regular.
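Written out in the notation of the definition above, the first example (the regularity theorem for Lebesgue measure λ on the real line) states that every Lebesgue-measurable set A ⊆ ℝ satisfies

```latex
\lambda(A) \;=\; \inf\{\lambda(U) : A \subseteq U,\ U \text{ open}\}
         \;=\; \sup\{\lambda(K) : K \subseteq A,\ K \text{ compact}\},
```

so λ is both outer regular and inner regular, hence regular.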
Inner regular measures that are not outer regular
An example of a measure on the real line with its usual topology that is not outer regular is the measure μ where μ(∅) = 0, μ({1}) = 1, and μ(A) = +∞ for any other set A.
The Borel measure on the plane that assigns to any Borel set the sum of the (1-dimensional) measures of its horizontal sections is inner regular but not outer regular, as every non-empty open set has infinite measure. A variation of this example is a disjoint union of an uncountable number of copies of the real line with Lebesgue measure.
An example of a Borel measure μ on a locally compact Hausdorff space that is inner regular, σ-finite, and locally finite but not outer regular is given by as follows. The topological space X has as underlying set the subset of the real plane given by the y-axis of points (0,y) together with the points (1/n,m/n2) with m,n positive integers. The topology is given as follows. The single points (1/n,m/n2) are all open sets. A base of neighborhoods of the point (0,y) is given by wedges consisting of all points in X of the form (u,v) with |v − y| ≤ |u| ≤ 1/n for a positive integer n. This space X is locally compact. The measure μ is given by letting the y-axis have measure 0 and letting the point (1/n,m/n2) have measure 1/n3. This measure is inner regular and locally finite, but is not outer regular as any open set containing the y-axis has measure infinity.
Outer regular measures that are not inner regular
If μ is the inner regular measure in the previous example, and M is the measure given by M(S) = infU⊇S μ(U) where the inf is taken over all open sets containing the Borel set S, then M is an outer regular locally finite Borel measure on a locally compact Hausdorff space that is not inner regular in the strong sense, though all open sets are inner regular so it is inner regular in the weak sense.
|
https://en.wikipedia.org/wiki/Endeavour%20College
|
Endeavour College is a Lutheran high school in Mawson Lakes, a northern suburb of Adelaide, South Australia. Subjects taught include Art & Design, Drama, Music, English, German, Japanese, Mathematics, Physical Education, History, Business Studies, Science (Biology, Chemistry, Physics, Psychology), Material Technology, Multimedia, Geography, Christian Living & Home Economics.
History
The College started its life at Good Shepherd Lutheran Primary School in 1998, with 20 students. It moved to the Mawson Lakes Campus in 1999. Three stages of building have been completed at this site, adjacent to the UniSA Mawson Lakes Campus. Stage 4, the Gymnasium, was completed at the start of 2008, and is now in use. The 10th anniversary was celebrated in 2008. Endeavour College now has around 600 students, having introduced its first intake of year seven students in 2017.
Facilities
Endeavour College has a library, a number of science laboratories, a hard technology centre, art and design rooms and a music rehearsal room. The Endeavour Centre, completed in 2008, houses a gymnasium and basketball court and supports physical education activities. Major expansion occurred in 2016 with the construction of the middle school to allow the incorporation of year seven students.
About Endeavour
Houses
Students at Endeavour are split into eight houses, each named after a prominent South Australian. Each house is further split into four care groups (North, South, East, and West).
Heysen – Hans Heysen, painter of Australian landscapes, particularly the Flinders Ranges.
Florey – Baron Howard Florey, co-developer of the medical use of penicillin.
Mawson – Sir Douglas Mawson, geologist and explorer of the Antarctic.
Spence – Catherine Helen Spence, suffragette, politician and first Australian female political candidate.
Mackillop – Saint Mary MacKillop, nun who emphasized education for the poor, and Australia's first saint.
Mitchell – Dame Roma Mitchell QC, former Governor of South Australia, Justice of the Supreme Court of South Australia, women's rights activist.
Kavel – August Kavel, founder of the Lutheran church in Australia.
Litchfield – Frederick Henry Litchfield, noted explorer of the Northern Territory.
Feeder Schools
Endeavour College has three feeder primary schools: Good Shepherd Lutheran at Para Vista, Golden Grove Lutheran at Golden Grove, and St Paul Lutheran at Blair Athol. It forms Connected Schools, along with the Salisbury Lutheran Kindergarten. It also shares strong links with the other Lutheran Colleges in South Australia: Faith, Immanuel, Cornerstone, Unity and Concordia. Endeavour also attracts students from the nearby public schools of Mawson Lakes, Para Hills and Pooraka.
Notable alumni
Matthew Cowdrey, Australian Paralympian
Rohan Dennis, Australian racing cyclist
References
External links
Endeavour College
Lutheran Education Australia
Lutheran schools in Australia
Private secondary schools in South Australia
High schools and secondary schools affiliated
|
https://en.wikipedia.org/wiki/Joukowsky%20transform
|
In applied mathematics, the Joukowsky transform (sometimes transliterated Joukovsky, Joukowski or Zhukovsky) is a conformal map historically used to understand some principles of airfoil design. It is named after Nikolai Zhukovsky, who published it in 1910.
The transform is

z = ζ + 1/ζ,

where z is a complex variable in the new space and ζ is a complex variable in the original space.
In aerodynamics, the transform is used to solve for the two-dimensional potential flow around a class of airfoils known as Joukowsky airfoils. A Joukowsky airfoil is generated in the complex plane (z-plane) by applying the Joukowsky transform to a circle in the ζ-plane. The coordinates of the centre of the circle are variables, and varying them modifies the shape of the resulting airfoil. The circle encloses the point ζ = −1 (where the derivative is zero) and intersects the point ζ = 1. This can be achieved for any allowable centre position by varying the radius of the circle.
Joukowsky airfoils have a cusp at their trailing edge. A closely related conformal mapping, the Kármán–Trefftz transform, generates the broader class of Kármán–Trefftz airfoils by controlling the trailing edge angle. When a trailing edge angle of zero is specified, the Kármán–Trefftz transform reduces to the Joukowsky transform.
General Joukowsky transform
The Joukowsky transform of any complex number ζ = χ + iη to z = x + iy is as follows:

z = ζ + 1/ζ = χ + iη + (χ − iη)/(χ² + η²).

So the real (x) and imaginary (y) components are:

x = χ (χ² + η² + 1)/(χ² + η²),
y = η (χ² + η² − 1)/(χ² + η²).
Sample Joukowsky airfoil
The transformation of all complex numbers on the unit circle is a special case. Writing ζ = e^(iθ), so that 1/ζ = e^(−iθ),

z = e^(iθ) + e^(−iθ),

which gives

z = 2 cos θ.

So the real component becomes x = 2 cos θ and the imaginary component becomes y = 0.
Thus the complex unit circle maps to a flat plate on the real-number line from −2 to +2.
Transformations from other circles make a wide range of airfoil shapes.
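The mapping can be sketched numerically; the centre value below is an arbitrary illustrative choice, not taken from the text:

```python
import numpy as np

# Joukowsky transform z = zeta + 1/zeta applied to a circle through zeta = 1.
mu = -0.1 + 0.1j                      # circle centre; arbitrary illustrative value
R = abs(1 - mu)                       # radius chosen so the circle intersects zeta = 1
theta = np.linspace(0.0, 2.0 * np.pi, 401)
zeta = mu + R * np.exp(1j * theta)
z = zeta + 1.0 / zeta                 # airfoil outline in the z-plane

# Special case: the unit circle (mu = 0, R = 1) maps onto the segment [-2, 2].
flat = np.exp(1j * theta) + np.exp(-1j * theta)
assert np.allclose(flat.imag, 0.0)
```

Plotting `z.real` against `z.imag` would show the cusped Joukowsky airfoil described above.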
Velocity field and circulation for the Joukowsky airfoil
The solution to potential flow around a circular cylinder is analytic and well known. It is the superposition of uniform flow, a doublet, and a vortex.
The complex conjugate velocity W̃ around the circle in the ζ-plane is

W̃ = V∞ e^(−iα) + iΓ/(2π(ζ − μ)) − V∞ R² e^(iα)/(ζ − μ)²,

where
μ = μx + iμy is the complex coordinate of the centre of the circle,
V∞ is the freestream velocity of the fluid,
α is the angle of attack of the airfoil with respect to the freestream flow,
R is the radius of the circle, calculated using R = √((1 − μx)² + μy²),
Γ is the circulation, found using the Kutta condition, which reduces in this case to

Γ = 4π V∞ R sin(α + sin⁻¹(μy/R)).

The complex velocity W around the airfoil in the z-plane is, according to the rules of conformal mapping and using the Joukowsky transformation,

W = W̃ / (dz/dζ) = W̃ / (1 − 1/ζ²).

Here W = ux − i uy, with ux and uy the velocity components in the x and y directions respectively (z = x + iy, with x and y real-valued). From this velocity, other properties of interest of the flow, such as the coefficient of pressure and the lift per unit of span, can be calculated.
Kármán–Trefftz transform
The Kármán–Trefftz transform is a conformal map closely related to the Joukowsky transform. While a Joukowsky airfoil has a cusped trailing edge, a Kármán–Trefftz airfoil—which is likewise the image of a circle in the ζ-plane—has a trailing edge with a nonzero wedge angle.
|
https://en.wikipedia.org/wiki/Envelope%20theorem
|
In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models.
The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions that are optimized.
Statement
Let f(x, θ) and gj(x, θ), j = 1, 2, …, m, be real-valued continuously differentiable functions on ℝ^(n+l), where x = (x1, …, xn) are choice variables and θ = (θ1, …, θl) are parameters, and consider the problem of choosing x, for a given θ, so as to:

maximize f(x, θ)

subject to gj(x, θ) = 0, j = 1, 2, …, m, and x ≥ 0.
The Lagrangian expression of this problem is given by

𝓛(x, λ, θ) = f(x, θ) + λ · g(x, θ),

where λ = (λ1, …, λm) are the Lagrange multipliers. Now let x*(θ) and λ*(θ) together be the solution that maximizes the objective function f subject to the constraints (and hence are saddle points of the Lagrangian),
and define the value function

V(θ) ≡ f(x*(θ), θ).
Then we have the following theorem.
Theorem: Assume that V and 𝓛 are continuously differentiable. Then

∂V(θ)/∂θk = ∂𝓛(x*(θ), λ*(θ), θ)/∂θk,  k = 1, 2, …, l,

where ∂𝓛/∂θk = ∂f/∂θk + λ · ∂g/∂θk.
For arbitrary choice sets
Let X denote the choice set and let the relevant parameter be t ∈ [0, 1]. Letting f : X × [0, 1] → ℝ denote the parameterized objective function, the value function V and the optimal choice correspondence (set-valued function) X* are given by:

V(t) = sup{ f(x, t) : x ∈ X },  X*(t) = { x ∈ X : f(x, t) = V(t) }.

"Envelope theorems" describe sufficient conditions for the value function V to be differentiable in the parameter t and describe its derivative as

V′(t) = ft(x, t) for each x ∈ X*(t),

where ft denotes the partial derivative of f with respect to t. Namely, the derivative of the value function with respect to the parameter equals the partial derivative of the objective function with respect to t, holding the maximizer fixed at its optimal level.
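The formula can be checked numerically on a toy objective (the function and all names below are illustrative, not from the text): with f(x, t) = −x² + tx, the maximizer is x*(t) = t/2 and V(t) = t²/4, so V′(t) = t/2, which equals ft(x*, t) = x*.

```python
import numpy as np

# Toy objective f(x, t) = -x**2 + t*x; a grid stands in for the choice set X.
def f(x, t):
    return -x**2 + t * x

xs = np.linspace(-5.0, 5.0, 200001)     # discretized choice set X
V = lambda t: f(xs, t).max()            # value function V(t) = max over x of f(x, t)

t = 1.5
x_star = xs[np.argmax(f(xs, t))]        # maximizer, here t/2 = 0.75

h = 1e-5
dV = (V(t + h) - V(t - h)) / (2 * h)    # numerical derivative of V
assert abs(dV - x_star) < 1e-4          # envelope formula: V'(t) = f_t(x*, t) = x*
```

Note that ft(x, t) = x here, so the envelope formula reduces to V′(t) = x*(t), which the finite-difference check confirms.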
Traditional envelope theorem derivations use the first-order condition for the maximizer, which requires that the choice set X have a convex and topological structure and that the objective function f be differentiable in the variable x. (The argument is that changes in the maximizer have only a "second-order effect" at the optimum and so can be ignored.) However, in many applications such as the analysis of incentive constraints in contract theory and game theory, nonconvex production problems, and "monotone" or "robust" comparative statics, the choice sets and objective functions generally lack the topological and convexity properties required by the traditional envelope theorems.
Paul Milgrom and Ilya Segal (2002) observe that the traditional envelope formula holds for optimization problems with arbitrary choice sets at any differentiability point of the value function, provided that the objective function is differentiable in the parameter:

Theorem 1: Let t ∈ (0, 1) and x ∈ X*(t). If both V′(t) and ft(x, t) exist, the envelope formula V′(t) = ft(x, t) holds.

Proof: The definition of X*(t), together with V(t′) ≥ f(x, t′) for all t′ ∈ [0, 1], implies that

t ∈ arg max over t′ ∈ [0, 1] of { f(x, t′) − V(t′) }.
Under the assumptions, the objective function of the displayed maxi
|
https://en.wikipedia.org/wiki/Scale%20analysis%20%28mathematics%29
|
Scale analysis (or order-of-magnitude analysis) is a powerful tool used in the mathematical sciences for the simplification of equations with many terms. First the approximate magnitude of individual terms in the equations is determined. Then some negligibly small terms may be ignored.
Example: vertical momentum in synoptic-scale meteorology
Consider for example the momentum equation of the Navier–Stokes equations in the vertical coordinate direction of the atmosphere:

∂w/∂t + u ∂w/∂x + v ∂w/∂y + w ∂w/∂z − (u² + v²)/R = −(1/ρ) ∂p/∂z − g + 2Ωu cos φ + ν (∂²w/∂x² + ∂²w/∂y² + ∂²w/∂z²),

where R is the Earth's radius, Ω is the frequency of rotation of the Earth, g is the gravitational acceleration, φ is the latitude, ρ is the density of air and ν is the kinematic viscosity of air (we can neglect turbulence in the free atmosphere).
In synoptic scale we can expect horizontal velocities about U = 10¹ m⋅s⁻¹ and vertical about W = 10⁻² m⋅s⁻¹. The horizontal scale is L = 10⁶ m and the vertical scale is H = 10⁴ m. The typical time scale is T = L/U = 10⁵ s. Pressure differences in the troposphere are ΔP = 10⁴ Pa and the density of air is ρ = 10⁰ kg⋅m⁻³. Other physical properties are approximately:
R = 6.378 × 10⁶ m;
Ω = 7.292 × 10⁻⁵ rad⋅s⁻¹;
ν = 1.46 × 10⁻⁵ m²⋅s⁻¹;
g = 9.81 m⋅s⁻².
Estimates of the different terms in the equation above can be made using their scales:
Now we can introduce these scales and their values into the equation:
We can see that all terms — except the first and second on the right-hand side — are negligibly small. Thus we can simplify the vertical momentum equation to the hydrostatic equilibrium equation:

∂p/∂z = −ρg.
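The scale estimates above can be tabulated in a few lines of code (the term grouping follows the text; the variable names are ours):

```python
# Order-of-magnitude estimates for the vertical momentum equation (synoptic scale).
U, W = 1e1, 1e-2            # horizontal and vertical velocity scales, m/s
L, H = 1e6, 1e4             # horizontal and vertical length scales, m
T = L / U                   # time scale, s
dP, rho = 1e4, 1e0          # pressure difference (Pa), air density (kg/m^3)
R, Omega = 6.378e6, 7.292e-5
nu, g = 1.46e-5, 9.81

terms = {
    "local acceleration dw/dt":       W / T,
    "advection":                      U * W / L,
    "curvature (u^2+v^2)/R":          U**2 / R,
    "Coriolis 2*Omega*u*cos(phi)":    2 * Omega * U,
    "pressure gradient (1/rho)dp/dz": dP / (rho * H),
    "gravity g":                      g,
    "viscous diffusion":              nu * W / H**2,
}
for name, magnitude in sorted(terms.items(), key=lambda kv: -kv[1]):
    print(f"{name:32s} ~ {magnitude:.0e}")
# Gravity and the pressure gradient dominate; the rest are orders of magnitude
# smaller, which is exactly the hydrostatic balance stated above.
```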
Rules of scale analysis
Scale analysis is a very useful and widely used tool for solving problems in heat transfer and fluid mechanics, such as pressure-driven wall jets, separating flows behind backward-facing steps, jet diffusion flames, and the study of linear and non-linear dynamics. Scale analysis is an effective shortcut for obtaining approximate solutions to equations that are often too complicated to solve exactly. The object of scale analysis is to use the basic principles of convective heat transfer to produce order-of-magnitude estimates for the quantities of interest. When done properly, scale analysis anticipates, within a factor of order one, the expensive results produced by exact analyses. The rules of scale analysis are as follows:
Rule 1 – The first step in scale analysis is to define the domain of extent in which we apply scale analysis. Any scale analysis of a flow region that is not uniquely defined is not valid.
Rule 2 – One equation constitutes an equivalence between the scales of two dominant terms appearing in the equation; the left-hand side could be of equal order of magnitude as the right-hand side.
Rule 3 – If, in the sum of two terms given by
c = a + b,
the order of magnitude of one term is greater than the order of magnitude of the other term,
O(a) > O(b),
then the order of magnitude of the sum is dictated by the dominant term:
O(c) = O(a).
The same conclusion holds if we have the difference of two terms, c = a − b.
Rule 4 – In the sum of two terms, if the two terms are of the same order of magnitude,
O(a) = O(b),
then the sum is also of the same order of magnitude:
O(c) = O(a) = O(b).
|
https://en.wikipedia.org/wiki/Algebraic%20connectivity
|
The algebraic connectivity (also known as Fiedler value or Fiedler eigenvalue after Miroslav Fiedler) of a graph G is the second-smallest eigenvalue (counting multiple eigenvalues separately) of the Laplacian matrix of G. This eigenvalue is greater than 0 if and only if G is a connected graph. This is a corollary to the fact that the number of times 0 appears as an eigenvalue in the Laplacian is the number of connected components in the graph. The magnitude of this value reflects how well connected the overall graph is. It has been used in analyzing the robustness and synchronizability of networks.
Properties
The algebraic connectivity of undirected graphs with nonnegative weights is nonnegative, with the inequality being strict if and only if G is connected. However, the algebraic connectivity can be negative for general directed graphs, even if G is a connected graph. Furthermore, the value of the algebraic connectivity is bounded above by the traditional (vertex) connectivity of the graph. If the number of vertices of an undirected connected graph with nonnegative edge weights is n and the diameter is D, the algebraic connectivity is also known to be bounded below by 1/(nD), and in fact (in a result due to Brendan McKay) by 4/(nD). For the graph with 6 nodes shown above (n = 6, D = 3) these bounds give: 4/18 = 0.222 ≤ algebraic connectivity 0.722 ≤ connectivity 1.
Unlike the traditional connectivity, the algebraic connectivity is dependent on the number of vertices, as well as the way in which vertices are connected. In random graphs, the algebraic connectivity decreases with the number of vertices, and increases with the average degree.
The exact definition of the algebraic connectivity depends on the type of Laplacian used. Fan Chung has developed an extensive theory using a rescaled version of the Laplacian, eliminating the dependence on the number of vertices, so that the bounds are somewhat different.
In models of synchronization on networks, such as the Kuramoto model, the Laplacian matrix arises naturally, so the algebraic connectivity gives an indication of how easily the network will synchronize. Other measures, such as the average distance (characteristic path length) can also be used, and in fact the algebraic connectivity is closely related to the (reciprocal of the) average distance.
The algebraic connectivity also relates to other connectivity attributes, such as the isoperimetric number, which is bounded below by half the algebraic connectivity.
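These definitions are easy to check numerically. The path graph below is a small hypothetical example, not the 6-node graph referenced in the text:

```python
import numpy as np

# Adjacency matrix of the path graph 1-2-3-4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))
fiedler_value = eigvals[1]              # second-smallest eigenvalue
assert np.isclose(eigvals[0], 0.0)      # 0 is always a Laplacian eigenvalue
assert fiedler_value > 0                # positive because the path is connected
print(fiedler_value)                    # 2 - sqrt(2) for this path
```

Removing the middle edge would split the graph into two components, and 0 would then appear twice among the eigenvalues, driving the algebraic connectivity to 0.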
Fiedler vector
The original theory related to algebraic connectivity was produced by Miroslav Fiedler. In his honor the eigenvector associated with the algebraic connectivity has been named the Fiedler vector. The Fiedler vector can be used to partition a graph.
Partitioning a graph using the Fiedler vector
For the example graph in the introductory section, the Fiedler vector is . The negative values are associated with the poorly connected vertex 6, and the neighbouring articulation point, vertex
|
https://en.wikipedia.org/wiki/Bil%27in
|
Bil'in is a Palestinian village located in the Ramallah and al-Bireh Governorate, west of the city of Ramallah in the central West Bank. According to the Palestinian Central Bureau of Statistics, Bil'in had a population of 2,137 in 2017. In the 2000s, it was known for its regular protests against Israeli occupation.
History
Potsherds from the Hellenistic, Byzantine, Crusader/Ayyubid, and Mamluk periods have been found here. It has been suggested that Bil'in could be Ba'alah, a place mentioned in the Talmud.
Ottoman era
Potsherds from the early Ottoman period have been found.
In 1863, the French explorer Victor Guérin saw it from a distance and described it as a small hamlet, while an official Ottoman village list of about 1870 showed 32 houses and a population of 147, though the population count included men only. In 1882 the PEF's Survey of Western Palestine described Bil'in (then called Belain) as "a little village on a hill-side".
British Mandate era
In the 1922 census of Palestine conducted by the British Mandate authorities, Bil'in had a population of 133, all Muslim, increasing in the 1931 census to 166, still all Muslims, in a total of 39 houses.
In the 1945 statistics, the village had 210 Muslim inhabitants, while the total land area was 3,992 dunams, according to an official land and population survey. Of this, 1,450 dunams of village land were plantations and irrigable land, 800 were used for cereals, while 6 dunams were classified as built-up public areas.
Jordanian era
In the wake of the 1948 Arab–Israeli War, and after the 1949 Armistice Agreements, Bil'in came under Jordanian rule.
The Jordanian census of 1961 found 365 inhabitants.
Post-1967
Since the Six-Day War in 1967, Bil'in has been under Israeli occupation.
Since the signing of the Interim Agreement on the West Bank and the Gaza Strip in 1995, it has been administered by the Palestinian National Authority. It is adjacent to the Israeli West Bank barrier and the Israeli settlement of Modi'in Illit. Historically a small agricultural village, modern Bil'in is now from the western outskirts of Ramallah. According to Neil Rogachevsky, Bil'in is considered an ideological stronghold of Fatah, and many employees of the Palestinian Authority reside there.
Court rulings
Bil'in is located east of the Green Line. Israel's West Bank barrier split the village in two, separating it from 60 percent of its farmland. In 2004, the International Court of Justice issued an advisory opinion that "the construction of the wall by Israel in the Occupied Palestinian Territory is contrary to international law".
In 2005, the local council leader of Bil'in, Ahmed Issa Abdullah Yassin, hired Israeli human rights lawyer Michael Sfard to represent the village in a petition to the High Court of Justice. On 4 September 2007, the Court ordered the government to change the route of the wall near Bil'in. Chief Justice Dorit Beinish wrote in her ruling: "We were not convinced
|
https://en.wikipedia.org/wiki/SBJ
|
SBJ may refer to:
Statistics Bureau of Japan
Stourbridge Junction railway station
|
https://en.wikipedia.org/wiki/National%20Football%20League%20records
|
National Football League records are the superlative statistics of the National Football League.
NFL records include:
List of National Football League records (individual), a list of all-time records for individual NFL players
List of National Football League records (team), a list of all-time records for teams and franchises
NFL playoff records (team), a list of records in the NFL playoffs
List of Super Bowl records, a list of records set by teams and players in Super Bowl games
NFL Pro Bowl records, a list of records set in the Pro Bowl
Records may also refer to longest NFL streaks:
Most consecutive games with a touchdown pass (NFL)
Most consecutive starts (NFL)
List of most consecutive starts by a National Football League quarterback
List of NFL franchise post-season droughts
List of NFL franchise post-season streaks
Records may also refer to lists of career-high statistics by individual players:
List of NFL players by games played
Most wins by a starting quarterback (NFL)
List of National Football League career passing yards leaders
List of National Football League career passing completions leaders
List of National Football League career passing touchdowns leaders
List of National Football League career rushing yards leaders
List of National Football League career rushing touchdowns leaders
List of National Football League career receiving yards leaders
List of National Football League career receptions leaders
List of National Football League career receiving touchdowns leaders
List of National Football League career all-purpose yards leaders
List of National Football League career sacks leaders
List of National Football League career interceptions leaders
List of National Football League career punts leaders
List of National Football League career punting yards leaders
List of National Football League career scoring leaders
Records
|
https://en.wikipedia.org/wiki/Nef%20line%20bundle
|
In algebraic geometry, a line bundle on a projective variety is nef if it has nonnegative degree on every curve in the variety. The classes of nef line bundles are described by a convex cone, and the possible contractions of the variety correspond to certain faces of the nef cone. In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of a nef divisor.
Definition
More generally, a line bundle L on a proper scheme X over a field k is said to be nef if it has nonnegative degree on every (closed irreducible) curve in X. (The degree of a line bundle L on a proper curve C over k is the degree of the divisor div(s) of any nonzero rational section s of L.) A line bundle may also be called an invertible sheaf.
The term "nef" was introduced by Miles Reid as a replacement for the older terms "arithmetically effective" and "numerically effective", as well as for the phrase "numerically eventually free". The older terms were misleading, in view of the examples below.
Every line bundle L on a proper curve C over k which has a global section that is not identically zero has nonnegative degree. As a result, a basepoint-free line bundle on a proper scheme X over k has nonnegative degree on every curve in X; that is, it is nef. More generally, a line bundle L is called semi-ample if some positive tensor power is basepoint-free. It follows that a semi-ample line bundle is nef. Semi-ample line bundles can be considered the main geometric source of nef line bundles, although the two concepts are not equivalent; see the examples below.
A Cartier divisor D on a proper scheme X over a field is said to be nef if the associated line bundle O(D) is nef on X. Equivalently, D is nef if the intersection number D·C is nonnegative for every curve C in X.
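A standard illustration (our example, not from the text): on the surface X = ℙ¹ × ℙ¹, write F1 and F2 for fibres of the two projections. A divisor D = aF1 + bF2 meets them in

```latex
D \cdot F_1 = b, \qquad D \cdot F_2 = a,
```

and every curve on X is numerically a nonnegative combination of F1 and F2, so D is nef exactly when a ≥ 0 and b ≥ 0; the nef cone is the first quadrant in a two-dimensional space of divisor classes.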
To go back from line bundles to divisors, the first Chern class is the isomorphism from the Picard group of line bundles on a variety X to the group of Cartier divisors modulo linear equivalence. Explicitly, the first Chern class of a line bundle L is the class of the divisor div(s) of any nonzero rational section s of L.
The nef cone
To work with inequalities, it is convenient to consider R-divisors, meaning finite linear combinations of Cartier divisors with real coefficients. The R-divisors modulo numerical equivalence form a real vector space of finite dimension, N¹(X), the Néron–Severi group tensored with the real numbers. (Explicitly: two R-divisors are said to be numerically equivalent if they have the same intersection number with all curves in X.) An R-divisor is called nef if it has nonnegative degree on every curve. The nef R-divisors form a closed convex cone in N¹(X), the nef cone Nef(X).
The cone of curves is defined to be the convex cone of linear combinations of curves with nonnegative real coefficients in the real vector space N₁(X) of 1-cycles modulo numerical equivalence. The vector spaces N¹(X) and N₁(X) are dual to each other by the intersection pairing, and the nef cone is (by defin
|
https://en.wikipedia.org/wiki/Rng%20%28algebra%29
|
In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng (pronounced "rung") is meant to suggest that it is a ring without i, that is, without the requirement for an identity element.
There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms. The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity.
A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space.
Definition
Formally, a rng is a set R with two binary operations called addition and multiplication such that
(R, +) is an abelian group,
(R, ·) is a semigroup,
Multiplication distributes over addition.
A rng homomorphism is a function from one rng to another such that
f(x + y) = f(x) + f(y)
f(x · y) = f(x) · f(y)
for all x and y in R.
If R and S are rings, then a ring homomorphism is the same as a rng homomorphism that maps 1 to 1.
Examples
All rings are rngs. A simple example of a rng that is not a ring is given by the even integers with the ordinary addition and multiplication of integers. Another example is given by the set of all 3-by-3 real matrices whose bottom row is zero. Both of these examples are instances of the general fact that every (one- or two-sided) ideal is a rng.
Rngs often appear naturally in functional analysis when linear operators on infinite-dimensional vector spaces are considered. Take for instance any infinite-dimensional vector space V and consider the set of all linear operators f : V → V with finite rank (i.e., with finite-dimensional image). Together with addition and composition of operators, this is a rng, but not a ring. Another example is the rng of all real sequences that converge to 0, with component-wise operations.
Also, many test function spaces occurring in the theory of distributions consist of functions decreasing to zero at infinity, such as Schwartz space. Thus, the function everywhere equal to one, which would be the only possible identity element for pointwise multiplication, cannot exist in such spaces, which therefore are rngs (for pointwise addition and multiplication). In particular, the real-valued continuous functions with compact support defined on some topological space, together with pointwise addition and multiplication, form a rng; this is not a ring unless the underlying space is compact.
Example: even integers
The set 2Z of even integers is closed under addition and multiplication and has an additive identity, 0, so it is a rng, but it does not have a multiplicative identity, so it is not a ring.
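A brute-force check of this example over a finite window of even integers (illustrative only):

```python
# The even integers are closed under + and *, but no even number e can act as
# a multiplicative identity: e * 2 == 2 would force e == 1, which is odd.
evens = range(-100, 102, 2)
closed = all((a + b) % 2 == 0 and (a * b) % 2 == 0
             for a in evens for b in evens)
has_identity = any(all(e * x == x for x in evens) for e in evens)
assert closed and not has_identity
```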
In 2Z, the only multiplicative idempotent is 0, and the only nilpotent element is 0.
|
https://en.wikipedia.org/wiki/Corresponding%20sides%20and%20corresponding%20angles
|
In geometry, the tests for congruence and similarity involve comparing corresponding sides and corresponding angles of polygons. In these tests, each side and each angle in one polygon is paired with a side or angle in the second polygon, taking care to preserve the order of adjacency.
For example, if one polygon has sequential sides a, b, c, d, and e and the other has sequential sides A, B, C, D, and E, and if a and A are corresponding sides, then side b (adjacent to a) must correspond to either B or E (both adjacent to A). If b and B correspond to each other, then c corresponds to C, d corresponds to D, and e corresponds to E; hence the ith element of the sequence abcde corresponds to the ith element of the sequence ABCDE for i = 1, 2, 3, 4, 5. On the other hand, if in addition to a corresponding to A we have b corresponding to E, then the ith element of abcde corresponds to the ith element of the reverse sequence AEDCB.
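The adjacency-preserving matchings described above can be enumerated mechanically (the side labels below are illustrative):

```python
# Adjacency-preserving matchings of one polygon's side sequence onto another's:
# once one pair of sides is fixed, the rest follow forwards or in reverse.
p = ["a", "b", "c", "d", "e"]
q = ["A", "B", "C", "D", "E"]

def correspondences(p, q):
    """All matchings of p onto q given by a rotation, with or without reversal."""
    n = len(q)
    out = []
    for shift in range(n):
        out.append({p[i]: q[(i + shift) % n] for i in range(n)})   # same orientation
        out.append({p[i]: q[(shift - i) % n] for i in range(n)})   # reversed
    return out

ms = correspondences(p, q)
# Matching a<->A and b<->B forces c<->C, d<->D, e<->E:
assert {"a": "A", "b": "B", "c": "C", "d": "D", "e": "E"} in ms
# Matching a<->A and b<->E instead runs through the reverse sequence AEDCB:
assert {"a": "A", "b": "E", "c": "D", "d": "C", "e": "B"} in ms
```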
Congruence tests look for all pairs of corresponding sides to be equal in length, though except in the case of the triangle this is not sufficient to establish congruence (as exemplified by a square and a rhombus that have the same side length). Similarity tests look at whether the ratios of the lengths of each pair of corresponding sides are equal, though again this is not sufficient. In either case equality of corresponding angles is also necessary; equality (or proportionality) of corresponding sides combined with equality of corresponding angles is necessary and sufficient for congruence (or similarity). The corresponding angles as well as the corresponding sides are defined as appearing in the same sequence, so for example if in a polygon with the side sequence abcde and another with the corresponding side sequence ABCDE we have vertex angle α appearing between sides a and b, then its corresponding vertex angle must appear between sides A and B.
References
Geometry
|
https://en.wikipedia.org/wiki/Woolmer%20Green
|
Woolmer Green is a small village and civil parish in Hertfordshire, England. The 2011 census figure for the population (from the Office for National Statistics) is 661 people.
History
Situated between the villages of Welwyn and Knebworth, Woolmer Green was first settled in the Iron Age. The Belgae colonised the area in the 1st century BC, and later it was settled by the Romans. Many Roman artefacts have been found in the surrounding area, with a bath house existing at nearby Welwyn. The village was at the junction of two thoroughfares, the Great North Road and another road called Stane Street (or Stone Street) from St Albans. The route of this road runs across the parish along the path of Robbery Bottom Lane, continuing on as a public bridleway to Datchworth and then Braughing, on its eventual way to another major Roman town, Camulodunum (Colchester).
Thomas de Wolvesmere is recorded as having lived in a dwelling here in 1297, and his name is considered to have led to the current name of the village. In the Middle Ages part of the village was in Mardleybury Manor, part in Rectory Manor, with the northern part owing allegiance to Broadwater Manor or Knebworth. The village remains at the point where the Districts of North Hertfordshire, East Hertfordshire and Welwyn Hatfield meet.
Apart from the trade generated by travellers, life in Woolmer Green was agricultural and feudal until the middle of the nineteenth century. Things started to change, however, when the railway arrived in 1850 (although the nearby station in Knebworth was not opened until 1884 after intervention from Viscount Knebworth). The village school, which was opened a few years after this, obtained much funding from the railway.
In 1863, only a gunsmith and a shoemaker were listed in the trade directory. By 1898, when the population of Woolmer Green stood at 363 and that of Knebworth at 382, there were five shops, including two beer retailers, though the many 'front room shops' went unmentioned in the trade directory. This level of service persisted until recent years, with a general store and Post Office, a baker, a small supermarket and a butcher. These have all now closed. The former Post Office was later used as a hair salon, a furniture shop, and a wedding dress and suit hire shop which opened in early 2016.
The main road through the centre of the village was the A1 - the Great North Road down which thousands of cattle and sheep were driven 'on the hoof' to London markets each year. The area around Knebworth and Woolmer Green provided what was probably the last overnight stop for the animals and their drovers before they reached London. The majority of the residents of Woolmer Green were dependent on farming and the 1879 harvest, which was the worst of the century, resulted in the leases of many farms in the area being relinquished, and thus labourers not being employed to work on them. At this time there was quite an influx of farmers from Scotland and Cornwall. They must have consi
|
https://en.wikipedia.org/wiki/Mathematical%20program
|
The term mathematical program can refer to:
A computer algebra system which is a computer program that manipulates mathematical entities symbolically
Computer programs that manipulate numerical entities numerically, which are the subject of numerical analysis
A problem formulation of an optimization problem in terms of an objective function and constraints (in this sense, "mathematical program" is a specialized and now possibly misleading term that predates the invention of computer programming)
|
https://en.wikipedia.org/wiki/Credible%20interval
|
In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution. The generalisation to multivariate problems is the credible region.
Credible intervals are analogous to confidence intervals and confidence regions in frequentist statistics, although they differ on a philosophical basis: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.
For example, in an experiment that determines the distribution of possible values of a parameter, if the subjective probability that the parameter lies between 35 and 45 is 0.95, then the interval from 35 to 45 is a 95% credible interval.
Choosing a credible interval
Credible intervals are not unique on a posterior distribution. Methods for defining a suitable credible interval include:
Choosing the narrowest interval, which for a unimodal distribution will involve choosing those values of highest probability density including the mode (the maximum a posteriori). This is sometimes called the highest posterior density interval (HPDI).
Choosing the interval where the probability of being below the interval is as likely as being above it. This interval will include the median. This is sometimes called the equal-tailed interval.
Assuming that the mean exists, choosing the interval for which the mean is the central point.
It is possible to frame the choice of a credible interval within decision theory and, in that context, a smallest interval will always be a highest probability density set; such an interval is bounded by a level contour of the posterior density.
Credible intervals can also be estimated through the use of simulation techniques such as Markov chain Monte Carlo.
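The interval choices above can be computed directly from posterior draws, for example from an MCMC run. A minimal sketch, assuming the posterior is represented by a NumPy array of samples; the Gamma "posterior" below is purely illustrative:

```python
import numpy as np

def equal_tailed(samples, level=0.95):
    """Equal-tailed interval: cut (1 - level)/2 probability from each tail."""
    alpha = 1 - level
    return tuple(np.quantile(samples, [alpha / 2, 1 - alpha / 2]))

def hpdi(samples, level=0.95):
    """Highest posterior density interval: the narrowest window holding `level` mass."""
    s = np.sort(samples)
    n = len(s)
    m = int(np.ceil(level * n))
    widths = s[m - 1:] - s[:n - m + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + m - 1]

rng = np.random.default_rng(0)
post = rng.gamma(shape=3.0, scale=1.0, size=100_000)  # a skewed stand-in posterior

print(equal_tailed(post))  # roughly (0.62, 7.22) for Gamma(3, 1)
print(hpdi(post))          # narrower, shifted toward the mode at 2
```

For a skewed posterior like this one the two choices visibly differ: the HPDI is the shortest interval of the required mass, while the equal-tailed interval is centred on the median.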
Contrasts with confidence interval
A frequentist 95% confidence interval means that with a large number of repeated samples, 95% of such calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).
Bayesian credible intervals can be quite different from frequentist confidence intervals for two reasons:
credible intervals incorporate problem-specific contextual information from the prior distribution whereas confidence intervals are based only on the data;
credible intervals and confidence intervals treat nuisance parameters in radically different ways.
For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the con
|
https://en.wikipedia.org/wiki/National%20Statistician
|
The National Statistician is the Chief Executive of the UK Statistics Authority, and the Head of the UK Government Statistical Service. The office was created by the Statistics and Registration Service Act 2007. The UK Statistics Authority announced that Sir Ian Diamond would take over as National Statistician in October 2019, following the retirement of John Pullinger in June 2019.
Status
The National Statistician is a de facto permanent secretary but does not use that title. As the Office for National Statistics (ONS) incorporated the Office of Population Censuses and Surveys (OPCS), the Director of ONS also became the Registrar General for England and Wales. Following the implementation of the Statistics and Registration Service Act 2007, the General Register Office continues to be part of a ministerially accountable department, becoming a part of the Identity & Passport Service in the Home Office, and the post of Registrar General is now held by its head.
National Statisticians
The first Director of ONS was Tim Holt. Subsequent Directors have had this additional title, the National Statistician.
The second Director was Len Cook, who had previously held a similar post in New Zealand.
He was succeeded by Dame Karen Dunnell on 1 September 2005.
Jil Matheson succeeded Karen Dunnell on 1 September 2009.
John Pullinger succeeded Jil Matheson on 1 July 2014 and retired in June 2019.
Ian Diamond succeeded John Pullinger on 22 October 2019.
References
External links
UK Statistics Authority's National Statistician homepage
Civil servants in the Office for National Statistics
|
https://en.wikipedia.org/wiki/Ho%E2%80%93Lee%20model
|
In financial mathematics, the Ho–Lee model is a short-rate model widely used in the pricing of bond options, swaptions and other interest rate derivatives, and in modeling future interest rates. It was developed in 1986 by Thomas Ho and Sang Bin Lee.
Under this model, the short rate follows a normal process:
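The displayed process was lost in extraction; the standard statement of the Ho–Lee dynamics (a hedged reconstruction, with θ_t the deterministic, time-dependent drift and σ a constant volatility) is:

```latex
\mathrm{d}r_t = \theta_t \,\mathrm{d}t + \sigma \,\mathrm{d}W_t
```

where W_t is a standard Brownian motion under the risk-neutral measure.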
The model can be calibrated to market data by implying the form of from market prices, meaning that it can exactly return the price of bonds comprising the yield curve. This calibration, and subsequent valuation of bond options, swaptions and other interest rate derivatives, is typically performed via a binomial lattice based model. Closed form valuations of bonds, and "Black-like" bond option formulae are also available.
As the model generates a symmetric ("bell shaped") distribution of rates in the future, negative rates are possible. Further, it does not incorporate mean reversion. For both of these reasons, models such as Black–Derman–Toy (lognormal and mean reverting) and Hull–White (mean reverting with lognormal variant available) are often preferred. The Kalotay–Williams–Fabozzi model is a lognormal analogue to the Ho–Lee model, although it is less widely used than the latter two.
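A Monte Carlo sketch of the normal short-rate process dr = θ(t) dt + σ dW illustrates both properties noted above: the terminal distribution is symmetric around the drifted mean, and negative rates occur with positive probability. All parameter values here are illustrative, not calibrated to any market:

```python
import numpy as np

# Simulate the Ho-Lee short rate dr = theta(t) dt + sigma dW on many paths.
rng = np.random.default_rng(42)
r0, sigma, T, steps, paths = 0.03, 0.01, 5.0, 500, 20_000
dt = T / steps
theta = lambda t: 0.002          # deterministic drift; constant for simplicity

r = np.full(paths, r0)
for i in range(steps):
    r += theta(i * dt) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

print(round(r.mean(), 3))        # ≈ r0 + 0.002 * T = 0.04
print((r < 0).mean())            # a few percent of paths end with negative rates
```

In a full implementation θ(t) would instead be implied from the observed yield curve, typically on a binomial lattice as the article describes.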
References
Notes
Primary references
T.S.Y. Ho, S.B. Lee, Term structure movements and pricing interest rate contingent claims, Journal of Finance 41, 1986.
John C. Hull, Options, futures, and other derivatives, 5th edition, Prentice Hall,
External links
Valuation and Hedging of Interest Rates Derivatives with the Ho-Lee Model, Markus Leippold and Zvi Wiener, Wharton School
Term Structure Lattice Models, Martin Haugh, Columbia University
Online tools
Binomial Tree – Excel implementation, thomasho.com
Fixed income analysis
Short-rate models
Financial models
|
https://en.wikipedia.org/wiki/Sassan%20Sanei
|
Sassan Sanei (born January 7, 1973) is a Canadian engineer.
An intense fascination with mathematics, physics, and computing from an early age eventually led him to attend the University of Waterloo, where he received the Bachelor of Applied Science degree with first-class honours in Electrical Engineering and the Bachelor of Arts degree in Philosophy. He was also a recipient of the Faculty of Engineering Entrance Scholarship and the Sandford Fleming Work Term Award. Prior to university, he attended the Toronto French School.
Since 1996, he has been employed by Research In Motion (RIM) in engineering and business capacities related to radio modems and BlackBerry devices.
He was an early proponent of the implementation of Java ME as a standard platform for wireless devices, which is in widespread use today.
He has emphasized that making efficient use of the available wireless capacity, allocating it across a large number of users, is more important to the overall user experience than implementing a small number of high-bandwidth applications. His notable contributions to the design and development of the BlackBerry have helped to make the devices so ubiquitous and addictive as to earn the nickname "CrackBerry."
He is also known within the wireless industry as the publisher of the BlackBerry Developer Journal, a technical magazine widely read by developers of wireless applications. He has also spoken extensively at industry conferences and other events related to wireless technology, software development, security, and hardware design.
External links
RIM conference showcases wireless development (CNN.com/Sci-Tech)
Building Applications for Mobile Appliances (Bell Canada)
Next-generation wireless devices (International Engineering Consortium)
References
Sanei, Sassan
|
https://en.wikipedia.org/wiki/Ten-year%20occupational%20employment%20projections
|
The ten-year occupational employment projection is a projection produced by the US Bureau of Labor Statistics' Office of Occupational Statistics and Employment Projections. The occupational employment projections, along with other information about occupations, are published in the Occupational Outlook Handbook and the National Employment Matrix.
The 10-year projections cover economic growth, employment by industry and occupation, and labor force. They are widely used in career guidance, in planning education and training programs, and in studying long-range employment trends. These projections, which are updated every two years, are part of a nearly 60-year tradition of providing information on occupations to those who are entering the job market, changing careers, or making further education and training choices.
Employment projections
Overall employment is projected to increase about 14 percent during the 2010–2020 decade with more than half a million new jobs expected for each of four occupations—registered nurses, retail salespersons, home health aides, and personal care aides. Occupations that typically need postsecondary education for entry are projected to grow faster than average, but occupations that typically need a high school diploma or less will continue to represent more than half of all jobs.
References
External links
Office of Employment Projections – Bureau of Labor Statistics
Occupational Outlook Handbook – Bureau of Labor Statistics
National Employment Matrix – Bureau of Labor Statistics
Labour economics
Reports of the Bureau of Labor Statistics
|
https://en.wikipedia.org/wiki/Tietze%20transformations
|
In group theory, Tietze transformations are used to transform a given presentation of a group into another, often simpler presentation of the same group. These transformations are named after Heinrich Franz Friedrich Tietze who introduced them in a paper in 1908.
A presentation is in terms of generators and relations; formally speaking the presentation is a pair of a set of named generators, and a set of words in the free group on the generators that are taken to be the relations. Tietze transformations are built up of elementary steps, each of which individually rather evidently takes the presentation to a presentation of an isomorphic group. These elementary steps may operate on generators or relations, and are of four kinds.
Adding a relation
If a relation can be derived from the existing relations then it may be added to the presentation without changing the group. Let G = 〈 x | x^3 = 1 〉 be a finite presentation for the cyclic group of order 3. Multiplying x^3 = 1 on both sides by x^3 we get x^6 = x^3 = 1, so x^6 = 1 is derivable from x^3 = 1. Hence G = 〈 x | x^3 = 1, x^6 = 1 〉 is another presentation for the same group.
Removing a relation
If a relation in a presentation can be derived from the other relations then it can be removed from the presentation without affecting the group. In G = 〈 x | x^3 = 1, x^6 = 1 〉 the relation x^6 = 1 can be derived from x^3 = 1, so it can be safely removed. Note, however, that if x^3 = 1 is removed from the presentation, the group G = 〈 x | x^6 = 1 〉 defines the cyclic group of order 6 and does not define the same group. Care must be taken to show that any relations that are removed are consequences of the other relations.
Adding a generator
Given a presentation it is possible to add a new generator that is expressed as a word in the original generators. Starting with G = 〈 x | x^3 = 1 〉 and letting y = x^2, the new presentation G = 〈 x, y | x^3 = 1, y = x^2 〉 defines the same group.
Removing a generator
If a relation can be formed where one of the generators is a word in the other generators then that generator may be removed. In order to do this it is necessary to replace all occurrences of the removed generator with its equivalent word. The presentation for the elementary abelian group of order 4, G = 〈 x, y, z | x = yz, y^2 = 1, z^2 = 1, x = x^−1 〉, can be replaced by G = 〈 y, z | y^2 = 1, z^2 = 1, (yz) = (yz)^−1 〉 by removing x.
Examples
Let G = 〈 x, y | x^3 = 1, y^2 = 1, (xy)^2 = 1 〉 be a presentation for the symmetric group of degree three. The generator x corresponds to the permutation (1,2,3) and y to (2,3). Through Tietze transformations this presentation can be converted to G = 〈 y, z | (zy)^3 = 1, y^2 = 1, z^2 = 1 〉, where z corresponds to (1,2).
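The relations in both presentations can be verified concretely with permutations (a small sketch; permutations are written in 0-indexed one-line notation):

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations as tuples in one-line notation."""
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, n):
    out = tuple(range(len(p)))
    for _ in range(n):
        out = compose(out, p)
    return out

e = (0, 1, 2)
x = (1, 2, 0)  # the 3-cycle (1,2,3)
y = (0, 2, 1)  # the transposition (2,3)

# Relations of the first presentation: x^3 = y^2 = (xy)^2 = 1.
assert power(x, 3) == e and power(y, 2) == e and power(compose(x, y), 2) == e

# Add the generator z = xy (the transposition (1,2)), then check the
# relations of the transformed presentation: (zy)^3 = y^2 = z^2 = 1.
z = compose(x, y)
assert power(compose(z, y), 3) == e and power(z, 2) == e
print("relations hold in both presentations")
```

This only confirms that S3 satisfies both sets of relations; the content of the Tietze transformations is that the two presentations define isomorphic groups.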
See also
Nielsen Transformation
Andrews-Curtis Conjecture
References
Roger C. Lyndon, Paul E. Schupp, Combinatorial Group Theory, Springer, 2001. .
Combinatorial group theory
|
https://en.wikipedia.org/wiki/Conformal%20gravity
|
Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations g_{μν} → Ω^2(x) g_{μν}, where g_{μν} is the metric tensor and Ω(x) is a function on spacetime.
Weyl-squared theories
The simplest theory in this category takes the square of the Weyl tensor as the Lagrangian
where C_{μνρσ} is the Weyl tensor. This is to be contrasted with the usual Einstein–Hilbert action, where the Lagrangian is just the Ricci scalar. The equation of motion obtained by varying the metric is the vanishing of the Bach tensor,
where R_{μν} is the Ricci tensor. Conformally flat metrics are solutions of this equation.
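The displayed equations for this section were lost in extraction; in standard notation the Weyl-squared action and the resulting field equation (the vanishing of the Bach tensor) read, as a hedged reconstruction:

```latex
S = \int \mathrm{d}^4x \,\sqrt{-g}\; C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma},
\qquad
B_{\mu\nu} = \left(\nabla^{\rho}\nabla^{\sigma} + \tfrac{1}{2} R^{\rho\sigma}\right) C_{\mu\rho\nu\sigma} = 0 .
```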
Since these theories lead to fourth-order equations for the fluctuations around a fixed background, they are not manifestly unitary. It has therefore been generally believed that they could not be consistently quantized. This is now disputed.
Four-derivative theories
Conformal gravity is an example of a 4-derivative theory. This means that each term in the wave equation can contain up to four derivatives. There are pros and cons of 4-derivative theories. The pros are that the quantized version of the theory is more convergent and renormalisable. The cons are that there may be issues with causality. A simpler example of a 4-derivative wave equation is the scalar 4-derivative wave equation:
The solution for this in a central field of force is:
The first two terms are the same as in a normal wave equation. Because this equation is a simpler approximation to conformal gravity, m corresponds to the mass of the central source. The last two terms are unique to 4-derivative wave equations. It has been suggested that small values be assigned to them to account for the galactic acceleration constant (also known as dark matter) and the dark energy constant. The solution equivalent to the Schwarzschild solution in general relativity for a spherical source in conformal gravity has a metric with:
which shows the difference from general relativity. The term 6bc is very small, and so can be ignored. The problem is that now c is the total mass-energy of the source, and b is the integral of the density times the distance to the source, squared. So this is a completely different potential from general relativity and not just a small modification.
The main issue with conformal gravity theories, as well as any theory with higher derivatives, is the typical presence of ghosts, which point to instabilities of the quantum version of the theory, although there might be a solution to the ghost problem.
An alternative approach is to consider the gravitational constant as a symmetry-broken scalar field, in which case one considers a small correction to Newtonian gravity like this (where the correction term is taken to be small):
in which case the general solution is the same as the Newtonian case except there can be an additional term:
where there is an additional component varying sinusoidally over space. The wavelength of this
|
https://en.wikipedia.org/wiki/Affine%20hull
|
In mathematics, the affine hull or affine span of a set S in Euclidean space Rn is the smallest affine set containing S, or equivalently, the intersection of all affine sets containing S. Here, an affine set may be defined as the translation of a vector subspace.
The affine hull aff(S) of S is the set of all affine combinations of elements of S, that is,
Examples
The affine hull of the empty set is the empty set.
The affine hull of a singleton (a set made of one single element) is the singleton itself.
The affine hull of a set of two different points is the line through them.
The affine hull of a set of three points not on one line is the plane going through them.
The affine hull of a set of four points not in a plane in R3 is the entire space R3.
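The examples above can be checked computationally: the dimension of the affine hull of a finite point set equals the rank of the matrix of differences p_i − p_0. A minimal sketch using NumPy (the helper name affine_dim is ours, not a library function):

```python
import numpy as np

def affine_dim(points):
    """Dimension of the affine hull of a finite point set:
    the rank of the difference vectors p_i - p_0."""
    P = np.asarray(points, dtype=float)
    if len(P) == 0:
        return -1   # convention for the empty affine hull
    if len(P) == 1:
        return 0    # a single point
    return int(np.linalg.matrix_rank(P[1:] - P[0]))

print(affine_dim([[1, 2, 3]]))                                   # 0: a point
print(affine_dim([[0, 0, 0], [1, 1, 1]]))                        # 1: a line
print(affine_dim([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))             # 2: a plane
print(affine_dim([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 3: all of R3
```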
Properties
For any subsets
is a closed set if is finite dimensional.
If then .
If then is a linear subspace of .
.
So in particular, is always a vector subspace of .
If is convex then
For every , where is the smallest cone containing (here, a set is a cone if for all and all non-negative ).
Hence is always a linear subspace of parallel to .
Related sets
If instead of an affine combination one uses a convex combination, that is one requires in the formula above that all be non-negative, one obtains the convex hull of S, which cannot be larger than the affine hull of S as more restrictions are involved.
The notion of conical combination gives rise to the notion of the conical hull
If however one puts no restrictions at all on the numbers , instead of an affine combination one has a linear combination, and the resulting set is the linear span of S, which contains the affine hull of S.
References
Sources
R.J. Webster, Convexity, Oxford University Press, 1994. .
Affine geometry
Closure operators
|
https://en.wikipedia.org/wiki/Cumulative%20density%20function
|
Cumulative density function is a self-contradictory phrase resulting from confusion between:
probability density function, and
cumulative distribution function.
The two words cumulative and density contradict each other. The value of a density function in an interval about a point depends only on probabilities of sets in arbitrarily small neighborhoods of that point, so it is not cumulative.
That is to say, if values are taken from a population of values described by the density function and plotted as points on a linear axis, the density function reflects the density with which the plotted points will accumulate. The probability of finding a point between two values a and b is the integral of the probability density function over that range.
This is related to the probability mass function, which is the equivalent for variables that assign positive probability to individual points. The probability mass function is therefore sometimes referred to as the discrete density function.
In both cases, the cumulative distribution function is the integral (or, in the discrete case, the sum) for all values less than or equal to the current value of , and so shows the accumulated probability so far. This is the sense in which it is cumulative. Thus the probability density function of the normal distribution is a bell-curve, while the corresponding cumulative distribution function is a strictly increasing function that visually looks similar to a sigmoid function, which approaches 0 at −∞ and approaches 1 at +∞.
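The contrast can be made concrete for the standard normal distribution using only the standard library (math.erf gives the cdf in closed form):

```python
import math

def normal_pdf(x):
    """Standard normal density: a local quantity, not cumulative."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    """Standard normal distribution function: probability accumulated up to x."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# The density peaks at the mode and falls off again on both sides.
print(normal_pdf(0.0))  # ≈ 0.3989, the maximum of the bell curve
print(normal_pdf(3.0))  # small again: the density is not cumulative

# The distribution function only ever increases, from near 0 toward 1.
print(normal_cdf(-3.0), normal_cdf(0.0), normal_cdf(3.0))
```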
|
https://en.wikipedia.org/wiki/Molien%27s%20formula
|
In mathematics, Molien's formula computes the generating function attached to a linear representation of a group G on a finite-dimensional vector space, that counts the homogeneous polynomials of a given total degree that are invariants for G. It is named for Theodor Molien.
Precisely, it says: given a finite-dimensional complex representation V of G and R_n, the space of homogeneous polynomial functions on V of degree n (degree-one homogeneous polynomials are precisely linear functionals), if G is a finite group, the series (called the Molien series) can be computed as:
Here, R_n^G is the subspace of R_n that consists of all vectors fixed by all elements of G; i.e., the invariant forms of degree n. Thus its dimension is the number of invariants of degree n. If G is a compact group, a similar formula holds in terms of the Haar measure.
Derivation
Let denote the irreducible characters of a finite group G and V, R as above. Then the character of can be written as:
Here, each is given by the inner product:
where and are the possibly repeated eigenvalues of . Now, we compute the series:
Taking to be the trivial character yields Molien's formula.
Example
Consider the symmetric group acting on R3 by permuting the coordinates. We evaluate the sum term by term over the group elements, as follows.
Starting with the identity, we have
.
There is a three-element conjugacy class of , consisting of swaps of two coordinates. This gives three terms of the form
There is a two-element conjugacy class of cyclic permutations, yielding two terms of the form
Notice that different elements of the same conjugacy class yield the same determinant. Thus, the Molien series is
On the other hand, we can expand the geometric series and multiply out to get
The coefficients of the series tell us the number of linearly independent homogeneous polynomials in three variables which are invariant under permutations of the three variables, i.e. the number of independent symmetric polynomials in three variables. In fact, if we consider the elementary symmetric polynomials
we can see for example that in degree 5 there is a basis consisting of , , , , and .
(In fact, if you multiply the series out by hand, you can see that the term comes from combinations of , , and exactly corresponding to combinations of , , and , also corresponding to partitions of with , , and as parts. See also Partition (number theory) and Representation theory of the symmetric group.)
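Molien's formula for this example can be checked numerically: average the power series of 1/det(I − tP) over the six permutation matrices P and compare the coefficients with the counts above. A self-contained sketch using exact arithmetic (the determinant is a product of (1 − t^length) over the cycles of each permutation):

```python
from fractions import Fraction
from itertools import permutations

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def det_I_minus_tP(perm):
    """det(I - t P) for a permutation matrix: product of (1 - t^len) over cycles."""
    seen = [False] * len(perm)
    poly = [1]
    for i in range(len(perm)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            poly = poly_mul(poly, [1] + [0] * (length - 1) + [-1])
    return poly

def series_inverse(p, N):
    """Coefficients of the power series 1/p(t) up to degree N (requires p[0] == 1)."""
    inv = [Fraction(0)] * (N + 1)
    inv[0] = Fraction(1)
    for n in range(1, N + 1):
        inv[n] = -sum(p[k] * inv[n - k] for k in range(1, min(n, len(p) - 1) + 1))
    return inv

N = 6
group = list(permutations(range(3)))   # S3 acting by permuting coordinates
coeffs = [sum(series_inverse(det_I_minus_tP(g), N)[n] for g in group) / len(group)
          for n in range(N + 1)]
counts = [int(c) for c in coeffs]      # every coefficient is an integer
print(counts)  # [1, 1, 2, 3, 4, 5, 7]: invariants of degree 0..6
```

Degree 5 indeed gives 5 invariants, matching the basis listed above.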
References
David A. Cox, John B. Little, Donal O'Shea (2005), Using Algebraic Geometry, pp. 295–8
Further reading
https://mathoverflow.net/questions/58283/a-question-about-an-application-of-moliens-formula-to-find-the-generators-and-r
Invariant theory
Representation theory of groups
|
https://en.wikipedia.org/wiki/Generalised%20hyperbolic%20distribution
|
The generalised hyperbolic distribution (GH) is a continuous probability distribution defined as the normal variance-mean mixture where the mixing distribution is the generalized inverse Gaussian distribution (GIG). Its probability density function (see the box) is given in terms of modified Bessel function of the second kind, denoted by . It was introduced by Ole Barndorff-Nielsen, who studied it in the context of physics of wind-blown sand.
Properties
Linear transformation
This class is closed under affine transformations.
Summation
Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible; since the GH distribution can be obtained as a normal variance-mean mixture with the generalized inverse Gaussian as mixing distribution, they showed that the GH distribution is infinitely divisible as well.
Fails to be convolution-closed
An important point about infinitely divisible distributions is their connection to Lévy processes: the distribution of a Lévy process at any point in time is infinitely divisible. Many families of well-known infinitely divisible distributions are so-called convolution-closed: if the distribution of a Lévy process at one point in time belongs to one of these families, then its distribution at all points in time belongs to the same family. For example, a Poisson process is Poisson distributed at all points in time, and a Brownian motion is normally distributed at all points in time. However, a Lévy process that is generalised hyperbolic at one point in time might fail to be generalised hyperbolic at another point in time. In fact, the generalised Laplace distributions and the normal-inverse Gaussian distributions are the only subclasses of the generalised hyperbolic distributions that are closed under convolution.
Related distributions
As the name suggests it is of a very general form, being the superclass of, among others, the Student's t-distribution, the Laplace distribution, the hyperbolic distribution, the normal-inverse Gaussian distribution and the variance-gamma distribution.
has a Student's t-distribution with degrees of freedom.
has a hyperbolic distribution.
has a normal-inverse Gaussian distribution (NIG).
normal-inverse chi-squared distribution
normal-inverse gamma distribution (NI)
has a variance-gamma distribution
has a Laplace distribution with location parameter and scale parameter 1.
Applications
It is mainly applied to areas that require sufficient probability of far-field behaviour, which it can model due to its semi-heavy tails—a property the normal distribution does not possess. The generalised hyperbolic distribution is often used in economics, with particular application in the fields of modelling financial markets and risk management, due to its semi-heavy tails.
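The variance-mean mixture construction and the semi-heavy tails can be illustrated for the normal-inverse Gaussian subclass, since NumPy ships an inverse-Gaussian (Wald) sampler for the mixing variable. The parameter values below are illustrative choices, not fitted to data:

```python
import numpy as np

# Sample the NIG subclass of the GH family from its variance-mean mixture
# definition: X = mu + beta*W + sqrt(W)*Z, with W inverse Gaussian
# (the lambda = -1/2 case of the GIG) and Z standard normal.
rng = np.random.default_rng(7)
n = 200_000
mu, beta = 0.0, 0.5
W = rng.wald(mean=1.0, scale=2.0, size=n)   # inverse-Gaussian mixing variable
Z = rng.standard_normal(n)
X = mu + beta * W + np.sqrt(W) * Z

print(round(X.mean(), 2))   # ≈ mu + beta * E[W] = 0.5
kurtosis = ((X - X.mean()) ** 4).mean() / X.var() ** 2
print(kurtosis > 3)         # True: heavier tails than the normal distribution
```

The sample kurtosis exceeding 3 (the normal value) reflects the semi-heavy tails mentioned above.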
References
Continuous distributions
|
https://en.wikipedia.org/wiki/Johann%20Georg%20Sulzer
|
Johann Georg Sulzer (16 October 1720, Winterthur – 27 February 1779, Berlin) was a Swiss professor of mathematics who later moved to the field of electricity. He was a Wolffian philosopher, director of the philosophical section of the Berlin Academy of Sciences, and translator of David Hume's An Enquiry Concerning the Principles of Morals into German in 1755.
Sulzer is best known as the subject of an anecdote in the history of the development of the battery. In 1752, Sulzer happened to put the tip of his tongue between pieces of two different metals whose edges were in contact. He exclaimed, "a pungent sensation, reminds me of the taste of green vitriol when I placed my tongue between these metals." He thought the metals set up a vibratory motion in their particles which excited the nerves of taste. The event became known as the "battery tongue test": the saliva serves as the electrolyte carrying the current between two metallic electrodes.
His General Theory of the Fine Arts has been called "probably the most influential aesthetic compendium of the closing years of the eighteenth century". In it, he "extended Baumgarten's approach into an even more psychological theory that the primary object of enjoyment in aesthetic experience is the state of one's own cognitive condition." Kant had respectfully disagreed with Sulzer's metaphysical hopes. Kant wrote: "I cannot share the opinion so frequently expressed by excellent and thoughtful men (for instance Sulzer) who, being fully conscious of the weakness of the proofs hitherto advanced, indulge in a hope that the future would supply us with evident demonstrations of the two cardinal propositions of pure reason, namely, that there is a God, and that there is a future life. I am certain, on the contrary, that this will never be the case…."
Bibliography
Unterredungen über die Schönheit der Natur (1750)
Gedanken über den Ursprung der Wissenschaften und schönen Künste (1762)
Allgemeine Theorie der schönen Künste (1771–74)
Vermischte philosophische Schriften (1773/81)
Notes
1720 births
1779 deaths
German philosophers
Members of the Prussian Academy of Sciences
German music theorists
German male writers
|
https://en.wikipedia.org/wiki/Emanoil%20Bacaloglu
|
Emanoil Bacaloglu (; – 30 August 1891) was a Wallachian and Romanian mathematician, physicist and chemist.
Born in Bucharest and of Greek origin, he studied physics and mathematics in Paris and Leipzig, later becoming a professor at the University of Bucharest and, in 1879, a member of the Romanian Academy. Considered to be the founder of many scientific and technological fields in Romania (and aiding in the creation of the Romanian Athenaeum), Bacaloglu was also an accomplished scientist. He helped create Romanian-language terminology in his fields and was one of the principal founders of the Society of Physical Sciences in 1890.
He was also a participant in the 1848 Wallachian revolution.
He is known for the "Bacaloglu pseudosphere". This is a surface of revolution for which the "Bacaloglu curvature" is constant.
Main works
Elemente de fizică, 2nd ed., București, (1888).
Elemente de algebră, 2nd ed., București, (1870).
References
Florica Câmpan, "La pseudosphère de Bacaloglu", Acad. Roum. Bull. Sect. Sci. 24 (1943), 96–105.
External links
Emanoil Bacaloglu în Galeria personalităților – Muzeul Virtual al Științei și Tehnicii Românești
Emanoil Bacaloglu - Biography.name
Short bio
Short history, at the Polytechnic University of Bucharest
19th-century Romanian mathematicians
Romanian physicists
Romanian chemists
Titular members of the Romanian Academy
People of the Revolutions of 1848
Academic staff of the University of Bucharest
Scientists from Bucharest
1830 births
1891 deaths
Romanian expatriates in France
|
https://en.wikipedia.org/wiki/Ion%20Barbu
|
Ion Barbu (pen name of Dan Barbilian; 18 March 1895 – 11 August 1961) was a Romanian mathematician and poet. His name is associated with the Mathematics Subject Classification number 51C05, a major posthumous recognition reserved for pioneers of investigations in an area of mathematical inquiry.
Early life
Born in Câmpulung-Muscel, Argeș County, he was the son of Constantin Barbilian and Smaranda, born Șoiculescu. He attended elementary school in Câmpulung, Dămienești, and Stâlpeni, and for secondary studies he went to the Ion Brătianu High School in Pitești, the Dinicu Golescu High School in Câmpulung, and finally the Gheorghe Lazăr High School and the Mihai Viteazul High School in Bucharest. During that time, he discovered that he had a talent for mathematics, and started publishing in Gazeta Matematică; it was also then that he discovered his passion for poetry. Barbu was known as "one of the greatest Romanian poets of the twentieth century and perhaps the greatest of all" according to Romanian literary critic Alexandru Ciorănescu. As a poet, he is known for his volume Joc secund ("Mirrored Play").
He was a student at the University of Bucharest when World War I caused his studies to be interrupted by military service. He completed his degree in 1921. He then went to the University of Göttingen to study number theory with Edmund Landau for two years. Returning to Bucharest, he studied with Gheorghe Țițeica, completing in 1929 his thesis, Canonical representation of the addition of hyperelliptic functions.
Achievements in mathematics
Apollonian metric
In 1934, Barbilian published his article describing the metrization of a region K, the interior of a simple closed curve J. Let xy denote the Euclidean distance from x to y. Barbilian's function for the distance from a to b in K is
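In the form given in the later literature on Apollonian metrics (a reconstruction, not quoted from Barbilian's 1934 paper), the distance reads:

```latex
d(a,b) \;=\; \ln \frac{\displaystyle \max_{p \in J} \, \frac{pa}{pb}}{\displaystyle \min_{p \in J} \, \frac{pa}{pb}}
```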
At the University of Missouri in 1938 Leonard Blumenthal wrote Distance Geometry. A Study of the Development of Abstract Metrics, where he used the term "Barbilian spaces" for metric spaces whose metric is obtained from Barbilian's function. In 1954 the American Mathematical Monthly published an article by Paul J. Kelly on Barbilian's method of metrizing a region bounded by a curve. Barbilian claimed he did not have access to Kelly's publication, but he did read Blumenthal's review of it in Mathematical Reviews, and he understood Kelly's construction. This motivated him to write in final form a series of four papers, which appeared after 1958, in which the metric geometry of the spaces that today bear his name is investigated thoroughly.
He answered in 1959 with an article which described "a very general procedure of metrization through which the positive functions of two points, on certain sets, can be refined to a distance." Besides Blumenthal and Kelly, articles on "Barbilian spaces" have appeared in the 1990s from Patricia Souza, while Wladimir G. Boskoff, Marian G. Ciucă and Bogdan Suceavă wrote in the 2000s about "Barbilian's metrization procedure".
|
https://en.wikipedia.org/wiki/Oded%20Goldreich
|
Oded Goldreich (b. 1957) is a professor of computer science at the faculty of mathematics and computer science of the Weizmann Institute of Science, Israel. His research interests lie within the theory of computation, specifically the interplay of randomness and computation, the foundations of cryptography, and computational complexity theory. He won the Knuth Prize in 2017 and was selected in 2021 to receive the Israel Prize in mathematics.
Biography
Goldreich received a DSc in computer science at the Technion in 1983 under the supervision of Shimon Even.
Goldreich has contributed to the development of pseudorandomness,
zero knowledge proofs, secure function evaluation, property testing,
and other areas in cryptography and computational complexity.
Goldreich has also authored several books including: Foundations of Cryptography which comes in two volumes (volume 1 in 2001 and volume 2 in 2004), Computational Complexity: A Conceptual Perspective (2008), and Modern Cryptography, Probabilistic Proofs and Pseudorandomness (1998).
Awards
Goldreich received the Knuth prize in 2017 for "fundamental and lasting contributions to theoretical computer science in many areas including cryptography, randomness, probabilistically checkable proofs, inapproximability, property testing as well as complexity theory in general. Goldreich has, in addition to his outstanding research contributions, advanced these fields through many survey articles and several first class textbooks. He has contributed eminent results, new basic definitions and pointed to new directions of research. Goldreich has been one of the driving forces for the theoretical computer science community for three decades."
Israel Prize and controversy
In 2021 he was selected by committee to win the Israel Prize in mathematics. Education Minister Yoav Gallant vetoed his selection over Goldreich's alleged support of the boycott, divestment and sanctions (BDS) movement against Israel. One of the reasons for the decision was a letter signed by Goldreich calling on the German parliament not to equate BDS with antisemitism. However, according to Goldreich, he did not support BDS but instead signed a petition calling for a halt to EU funding for Ariel University, in the occupied West Bank. The prize committee petitioned the Supreme Court of Israel to ensure that Goldreich would receive the prize. On 8 April 2021 the Supreme Court ruled on Gallant's side, declining to order that Goldreich receive the prize that year and giving Gallant a month to further examine the issue. On 11 April 2021, David Harel, a 2004 Israel Prize winner, decided to share his award with Goldreich in protest of the government's decision not to award the 2021 prize to Goldreich. In August 2021 the Supreme Court wrote, "we found appropriate at this stage to accept the position of the Attorney General that the Education Minister should be allowed to examine new information that he received only tw
|
https://en.wikipedia.org/wiki/Reduction%20of%20order
|
Reduction of order (or d'Alembert reduction) is a technique in mathematics for solving second-order linear ordinary differential equations. It is employed when one solution y1 is known and a second linearly independent solution y2 is desired. The method also applies to n-th order equations, in which case the ansatz y = v·y1 yields an (n−1)-th order equation for v'.
Second-order linear ordinary differential equations
An example
Consider the general, homogeneous, second-order linear constant-coefficient ordinary differential equation (ODE)
a y'' + b y' + c y = 0,
where a, b, c are real non-zero coefficients. Two linearly independent solutions for this ODE can be straightforwardly found using its characteristic equation, except for the case when the discriminant b^2 − 4ac vanishes. In this case, the characteristic equation has the double root λ = −b/(2a), from which only one solution,
y1(x) = e^(−bx/(2a)),
can be found.
The method of reduction of order is used to obtain a second linearly independent solution to this differential equation using our one known solution. To find a second solution we take as a guess
y2(x) = v(x) y1(x),
where v(x) is an unknown function to be determined. Since y2(x) must satisfy the original ODE, we substitute it back in to get
a (v'' y1 + 2 v' y1' + v y1'') + b (v' y1 + v y1') + c v y1 = 0.
Rearranging this equation in terms of the derivatives of v(x) we get
a y1 v'' + (2a y1' + b y1) v' + (a y1'' + b y1' + c y1) v = 0.
Since we know that y1(x) is a solution to the original problem, the coefficient of the last term is equal to zero. Furthermore, substituting y1(x) = e^(−bx/(2a)) into the second term's coefficient yields (for that coefficient)
2a (−b/(2a)) e^(−bx/(2a)) + b e^(−bx/(2a)) = 0.
Therefore, we are left with
a y1 v'' = 0.
Since a is assumed non-zero and y1(x) is an exponential function (and thus always non-zero), we have
v'' = 0.
This can be integrated twice to yield
v(x) = c1 x + c2,
where c1, c2 are constants of integration. We now can write our second solution as
y2(x) = (c1 x + c2) e^(−bx/(2a)).
Since the second term in y2(x) is a scalar multiple of the first solution (and thus linearly dependent) we can drop that term, yielding a final solution of
y2(x) = x e^(−bx/(2a)).
Finally, we can verify that the second solution found via this method is linearly independent of the first by calculating the Wronskian
W(y1, y2)(x) = y1 y2' − y1' y2 = e^(−bx/a) ≠ 0.
Thus y2(x) is the second linearly independent solution we were looking for.
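The computation above is easy to check numerically. The sketch below uses illustrative coefficients a = 1, b = 4, c = 4, chosen so that the discriminant vanishes (an assumption, not values from the text), and verifies that y2(x) = x e^(−bx/(2a)) satisfies the ODE:

```python
import math

# Repeated-root case of a*y'' + b*y' + c*y = 0: coefficients below are
# illustrative (an assumption, not from the text) with b^2 - 4ac = 0.
a, b, c = 1.0, 4.0, 4.0
lam = -b / (2 * a)                      # the double root of a*L^2 + b*L + c

def y2(x):                              # second solution from reduction of order
    return x * math.exp(lam * x)

def y2p(x):                             # y2' in closed form
    return math.exp(lam * x) * (1 + lam * x)

def y2pp(x):                            # y2'' in closed form
    return math.exp(lam * x) * (2 * lam + lam * lam * x)

# The ODE residual should vanish at every sample point.
residual = max(abs(a * y2pp(x) + b * y2p(x) + c * y2(x))
               for x in [-2.0, -0.5, 0.0, 1.0, 3.0])
```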
General method
Given the general non-homogeneous linear differential equation
y'' + p(t) y' + q(t) y = r(t)
and a single solution y1(t) of the homogeneous equation [r(t) = 0], let us try a solution of the full non-homogeneous equation in the form:
y2 = v(t) y1(t),
where v(t) is an arbitrary function. Thus
y2' = v' y1 + v y1'
and
y2'' = v'' y1 + 2 v' y1' + v y1''.
If these are substituted for y, y', and y'' in the differential equation, then
y1 v'' + (2 y1' + p y1) v' + (y1'' + p y1' + q y1) v = r.
Since y1 is a solution of the original homogeneous differential equation, y1'' + p y1' + q y1 = 0, so we can reduce to
y1 v'' + (2 y1' + p y1) v' = r,
which is a first-order differential equation for v' (reduction of order). Divide by y1, obtaining
v'' + (2 y1'/y1 + p) v' = r/y1.
The integrating factor is μ(t) = exp(∫ (2 y1'/y1 + p) dt) = y1^2 exp(∫ p dt).
Multiplying the differential equation by the integrating factor μ(t), the equation for v(t) can be reduced to
d/dt ( v' y1^2 exp(∫ p dt) ) = y1 r exp(∫ p dt).
After integrating the last equation, v' is found, containing one constant of integration. Then, integrate v' to find v and thus the full solution of the original non-homogeneous second-order equation, exhibiting two constants of integration as it should:
y2 = v(t) y1(t).
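As a sanity check of the general procedure, consider the hypothetical worked instance y'' − 3y' + 2y = e^(3t) with homogeneous solution y1 = e^t (an assumed example, not from the text). The substitution y = v·y1 gives v'' − v' = e^(2t); the integrating factor e^(−t) for w = v' yields w = e^(2t), hence v = e^(2t)/2 after dropping the constants. A minimal numeric verification:

```python
import math

# Hypothetical worked instance (not from the text):
#   y'' - 3 y' + 2 y = e^(3t), with homogeneous solution y1 = e^t.
# Reduction of order gives v = e^(2t)/2, so y = v*y1 = e^(3t)/2.
def y(t):   return 0.5 * math.exp(3 * t)   # particular solution
def yp(t):  return 1.5 * math.exp(3 * t)   # y'
def ypp(t): return 4.5 * math.exp(3 * t)   # y''

# The particular solution should satisfy the non-homogeneous ODE exactly.
residual = max(abs(ypp(t) - 3 * yp(t) + 2 * y(t) - math.exp(3 * t))
               for t in [-1.0, 0.0, 0.7, 2.0])
```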
See also
Variation of parameters
References
W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, Wiley.
|
https://en.wikipedia.org/wiki/Ado%27s%20theorem
|
In abstract algebra, Ado's theorem is a theorem characterizing finite-dimensional Lie algebras.
Statement
Ado's theorem states that every finite-dimensional Lie algebra L over a field K of characteristic zero can be viewed as a Lie algebra of square matrices under the commutator bracket. More precisely, the theorem states that L has a linear representation ρ over K, on a finite-dimensional vector space V, that is a faithful representation, making L isomorphic to a subalgebra of the endomorphisms of V.
History
The theorem was proved in 1935 by Igor Dmitrievich Ado of Kazan State University, a student of Nikolai Chebotaryov.
The restriction on the characteristic was later removed by Kenkichi Iwasawa; a proof was also given by Gerhard Hochschild.
Implications
While for the Lie algebras associated to classical groups there is nothing new in this, the general case is a deeper result. Applied to the real Lie algebra of a Lie group G, it does not imply that G has a faithful linear representation (which is not true in general), but rather that G always has a linear representation that is a local isomorphism with a linear group.
References
Nathan Jacobson, Lie Algebras, pp. 202–203
External links
Ado’s theorem, comments and a proof of Ado's theorem in Terence Tao's blog What's new.
Lie algebras
Theorems about algebras
|
https://en.wikipedia.org/wiki/A%20Treatise%20on%20the%20Binomial%20Theorem
|
A Treatise on the Binomial Theorem is a fictional work of mathematics by the young Professor James Moriarty, the criminal mastermind and archenemy of the detective Sherlock Holmes in the fiction of Arthur Conan Doyle. The actual title of the treatise is never given in the stories; Holmes simply refers to "a treatise upon the Binomial Theorem". The treatise is mentioned in the 1893 short story "The Final Problem", when Holmes, speaking of Professor Moriarty, states: "He is a man of good birth and excellent education, endowed by nature with a phenomenal mathematical faculty. At the age of twenty-one he wrote a treatise upon the Binomial Theorem, which has had a European vogue."
Moriarty was a versatile mathematician as well as a criminal mastermind. In addition to the Treatise, he wrote the book The Dynamics of an Asteroid, containing mathematics so esoteric that no one was capable of reviewing it. That is a very different branch of mathematics from the binomial theorem, further reflecting Moriarty's intellectual range.
Review and discussion
Doyle, in his works, never describes the contents of the treatise. This has not stopped people from speculating on what it might have contained. Mathematician Harold Davis, in the book The Summation of Series, attributes certain binomial identities to Moriarty. These have been expanded on in further work, tying the Treatise into the standard mathematical literature. Less formal depictions of the content are also available. For example, in 1955 science fiction writer Poul Anderson wrote about the treatise for The Baker Street Journal.
The Treatise is sometimes used when a reference is needed to a non-specific example of a scientific paper.
In cryptography papers, the users and attackers of a cryptosystem are often given names suggestive of their roles. "Eve", for example, is most often the eavesdropper, listening in on an exchange. Malicious attackers are typically "Mallory", but in at least one cryptographic paper, the malicious attacker is "Moriarty". However, there are real academics named Moriarty, so to avoid confusion the paper distinguished the hypothetical attacker as "the author of A Treatise on the Binomial Theorem".
Other references
In The Seven-Per-Cent Solution, a 1974 Holmes pastiche by Nicholas Meyer, Moriarty in conversation with Watson denies having written any treatise on the binomial theorem, saying: "Certainly not. Who has anything new to say about the binomial theorem at this late date? At any rate, I am certainly not the man to know." In this novel, Moriarty is no evil genius, but a harmless maths teacher who became a monster in Holmes' fantasies because he was involved in certain traumatic childhood experiences of his.
See also
The Dynamics of an Asteroid, another fictional work by Moriarty
References
External links
A list of many references to this work, as well as to other works of Moriarty's such as The Dynamics of an Asteroid.
Fictional books
Sherlock Holmes
Fictional elements introduced in 1893
Treatises
|
https://en.wikipedia.org/wiki/Density%20on%20a%20manifold
|
In mathematics, and specifically differential geometry, a density is a spatially varying quantity on a differentiable manifold that can be integrated in an intrinsic manner. Abstractly, a density is a section of a certain line bundle, called the density bundle. An element of the density bundle at x is a function that assigns a volume to the parallelotope spanned by the n given tangent vectors at x.
From the operational point of view, a density is a collection of functions on coordinate charts which become multiplied by the absolute value of the Jacobian determinant under a change of coordinates. Densities can be generalized into s-densities, whose coordinate representations become multiplied by the s-th power of the absolute value of the Jacobian determinant. On an oriented manifold, 1-densities can be canonically identified with the n-forms on M. On non-orientable manifolds this identification cannot be made, since the density bundle is the tensor product of the orientation bundle of M and the n-th exterior product bundle of T*M (see pseudotensor).
Motivation (densities in vector spaces)
In general, there does not exist a natural concept of a "volume" for a parallelotope generated by vectors v1, ..., vn in an n-dimensional vector space V. However, if one wishes to define a function μ that assigns a volume to any such parallelotope, it should satisfy the following properties:
If any of the vectors vk is multiplied by λ ∈ R, the volume should be multiplied by |λ|.
If any linear combination of the vectors v1, ..., vj−1, vj+1, ..., vn is added to the vector vj, the volume should stay invariant.
These conditions are equivalent to the statement that μ is given by a translation-invariant measure on V, and they can be rephrased as
μ(A v1, ..., A vn) = |det A| μ(v1, ..., vn)  for every A ∈ GL(V).
Any such mapping μ : V × ... × V → R is called a density on the vector space V. Note that if (v1, ..., vn) is any basis for V, then fixing μ(v1, ..., vn) will fix μ entirely; it follows that the set Vol(V) of all densities on V forms a one-dimensional vector space. Any n-form ω on V defines a density |ω| on V by
|ω|(v1, ..., vn) = |ω(v1, ..., vn)|.
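A minimal sketch of the defining property in two dimensions, taking μ to be the absolute value of the determinant (all vectors and the map A below are illustrative values, not from the text):

```python
# A density on a 2-dimensional vector space, built from the determinant:
# mu(v1, v2) = |det [v1 v2]| satisfies mu(A v1, A v2) = |det A| * mu(v1, v2).

def det2(u, v):                      # determinant of the 2x2 matrix [u v]
    return u[0] * v[1] - u[1] * v[0]

def mu(u, v):                        # the density induced by the determinant
    return abs(det2(u, v))

def apply(A, u):                     # matrix-vector product, A given by rows
    return (A[0][0] * u[0] + A[0][1] * u[1],
            A[1][0] * u[0] + A[1][1] * u[1])

v1, v2 = (1.0, 2.0), (3.0, -1.0)     # an arbitrary parallelotope
A = ((2.0, 1.0), (0.5, -3.0))        # an arbitrary linear map

# Transforming every edge vector scales the volume by |det A|.
lhs = mu(apply(A, v1), apply(A, v2))
rhs = abs(det2((A[0][0], A[1][0]), (A[0][1], A[1][1]))) * mu(v1, v2)
```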
Orientations on a vector space
The set Or(V) of all functions o : V × ... × V → R that satisfy
o(A v1, ..., A vn) = sign(det A) o(v1, ..., vn)  for every A ∈ GL(V)
forms a one-dimensional vector space, and an orientation on V is one of the two elements o ∈ Or(V) such that |o(v1, ..., vn)| = 1 for any linearly independent v1, ..., vn. Any non-zero n-form ω on V defines an orientation o ∈ Or(V) such that
o(v1, ..., vn) |ω(v1, ..., vn)| = ω(v1, ..., vn),
and vice versa, any o ∈ Or(V) and any density μ ∈ Vol(V) define an n-form ω on V by
ω(v1, ..., vn) = o(v1, ..., vn) μ(v1, ..., vn).
In terms of tensor product spaces,
Or(V) ⊗ Vol(V) = Λ^n V*.
s-densities on a vector space
The s-densities on V are functions μ : V × ... × V → R such that
μ(A v1, ..., A vn) = |det A|^s μ(v1, ..., vn)  for every A ∈ GL(V).
Just like densities, s-densities form a one-dimensional vector space Vol^s(V), and any n-form ω on V defines an s-density |ω|^s on V by
|ω|^s(v1, ..., vn) = |ω(v1, ..., vn)|^s.
The product of an s1-density μ1 and an s2-density μ2 is an (s1+s2)-density μ given by
μ(v1, ..., vn) = μ1(v1, ..., vn) μ2(v1, ..., vn).
In terms of tensor product spaces this fact can be stated as
Vol^s1(V) ⊗ Vol^s2(V) = Vol^(s1+s2)(V).
Definition
Formally, the s-density bundle Vol^s(M) of a differentiable manifold M is obtained by an associated bundle construction, intertwining the one-dimensional group representation
ρ(A) = |det A|^(−s),  A ∈ GL(n, R),
of the general linear group with the frame bundle of M.
|
https://en.wikipedia.org/wiki/Hillclimbing%20%28disambiguation%29
|
Hillclimbing is a motorsport
Hillclimbing may also refer to:
Hillclimbing (cycling)
Hillclimbing (railway)
Hill climbing, an optimization algorithm in mathematics
See also
Hillwalking
Mountaineering
Hilcrhyme, a Japanese hip-hop duo
Newport Antique Auto Hill Climb, a racing event in Newport, Indiana
Hill Climb Racing (video game), video game
|
https://en.wikipedia.org/wiki/Calculus%20of%20voting
|
Calculus of voting refers to any mathematical model which predicts voting behaviour by an electorate, including such features as participation rate. A calculus of voting represents a hypothesized decision-making process.
These models are used in political science in an attempt to capture the relative importance of various factors influencing an elector to vote (or not vote) in a particular way.
Example
One such model was proposed by Anthony Downs (1957) and was adapted by William H. Riker and Peter Ordeshook in "A Theory of the Calculus of Voting" (Riker and Ordeshook 1968):
V = pB − C + D
where
V = the proxy for the probability that the voter will turn out
p = probability of vote “mattering”
B = “utility” benefit of voting: the differential benefit of one candidate winning over the other
C = costs of voting (time/effort spent)
D = citizen duty, goodwill feeling, psychological and civic benefit of voting (this term is not included in Downs's original model)
It is a political science model based on rational choice, used to explain why citizens do or do not vote.
The alternative equation is
V = pB + D > C
where, for voting to occur, the probability that the vote will matter (P) multiplied by the benefit of one candidate winning over another (B), plus the feeling of civic duty (D), must be greater than the cost of voting (C).
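The decision rule above can be sketched in a few lines; the numeric values below are purely hypothetical and chosen only to illustrate the roles of the terms:

```python
# Sketch of the Riker-Ordeshook turnout rule with made-up inputs.
def will_vote(p, B, C, D):
    """Vote if and only if expected benefit plus duty outweighs cost: pB + D > C."""
    return p * B + D > C

# With a tiny probability p of being pivotal, pB is negligible,
# so the duty term D decides the outcome.
votes_without_duty = will_vote(p=1e-6, B=1000.0, C=0.5, D=0.0)  # pB = 0.001 < C
votes_with_duty   = will_vote(p=1e-6, B=1000.0, C=0.5, D=1.0)   # pB + D = 1.001 > C
```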
References
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper & Row.
Riker, William and Peter Ordeshook. 1968. “A Theory of the Calculus of Voting.” American Political Science Review 62(1): 25–42.
Voting theory
Mathematical modeling
|
https://en.wikipedia.org/wiki/Osculating%20curve
|
In differential geometry, an osculating curve is a plane curve from a given family that has the highest possible order of contact with another curve. That is, if F is a family of smooth curves, C is a smooth curve (not in general belonging to F), and p is a point on C, then an osculating curve from F at p is a curve from F that passes through p and has as many of its derivatives at p equal to the derivatives of C as possible.
The term derives from the Latinate root "osculate", to kiss, because the two curves contact one another in a more intimate way than simple tangency.
Examples
Examples of osculating curves of different orders include:
The tangent line to a curve C at a point p, the osculating curve from the family of straight lines. The tangent line shares its first derivative (slope) with C and therefore has first-order contact with C.
The osculating circle to C at p, the osculating curve from the family of circles. The osculating circle shares both its first and second derivatives (equivalently, its slope and curvature) with C.
The osculating parabola to C at p, the osculating curve from the family of parabolas, has third-order contact with C.
The osculating conic to C at p, the osculating curve from the family of conic sections, has fourth-order contact with C.
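For the graph of a function y = f(x), the osculating circle at a point can be computed from the standard center-of-curvature formulas. The sketch below uses the parabola f(x) = x^2 at its vertex as an assumed worked example (not taken from the text) and recovers the well-known radius 1/2:

```python
# Osculating circle of the graph y = f(x) at a point, via the standard
# center-of-curvature formulas; requires f''(x) != 0.
def osculating_circle(fp, fpp, x, y):
    """Return (center, radius) given f'(x), f''(x) and the point (x, y)."""
    R = (1 + fp * fp) ** 1.5 / abs(fpp)      # radius of curvature
    cx = x - fp * (1 + fp * fp) / fpp        # center of curvature
    cy = y + (1 + fp * fp) / fpp
    return (cx, cy), R

# Parabola y = x^2 at the vertex: f'(0) = 0, f''(0) = 2.
center, R = osculating_circle(fp=0.0, fpp=2.0, x=0.0, y=0.0)
```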
Generalizations
The concept of osculation can be generalized to higher-dimensional spaces, and to objects that are not curves within those spaces. For instance an osculating plane to a space curve is a plane that has second-order contact with the curve. This is as high an order as is possible in the general case.
In one dimension, analytic curves are said to osculate at a point if they share the first three terms of their Taylor expansion about that point. This concept can be generalized to superosculation, in which two curves share more than the first three terms of their Taylor expansion.
See also
Osculating orbit
References
Curves
|
https://en.wikipedia.org/wiki/Science%2C%20technology%2C%20engineering%2C%20and%20mathematics
|
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously rendered as SMET by the NSF, the acronym STEM came into use in the early 1990s, when it was adopted by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering education; it is in this manner that the NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering, and Math Teacher Education Collaborative at the University of Massachusetts Amherst, founded in 1998.
In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym.
Other variations
A-STEM (arts, science, technology, engineering, and mathematics); more focused on and grounded in the humanities and the arts.
eSTEM (environmental STEM)
GEMS (girls in engineering, math, and science); used for programs to encourage women to enter these fields.
MINT (mathematics, informatics, natural sciences, and technology)
SHTEAM (science, humanities, technology, engineering, arts, and mathematics)
|
https://en.wikipedia.org/wiki/Mean%20absolute%20percentage%20error
|
The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula
MAPE = (1/n) Σ_t |A_t − F_t| / |A_t|,
where A_t is the actual value and F_t is the forecast value. Their difference is divided by the actual value A_t. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted points n.
MAPE in regression problems
Mean absolute percentage error is commonly used as a loss function for regression problems and in model evaluation, because of its very intuitive interpretation in terms of relative error.
Definition
Consider a standard regression setting in which the data are fully described by a random pair Z = (X, Y) with values in R^d × R, and n i.i.d. copies (X_1, Y_1), ..., (X_n, Y_n) of Z. Regression models aim at finding a good model for the pair, that is a measurable function g from R^d to R such that g(X) is close to Y.
In the classical regression setting, the closeness of g(X) to Y is measured via the L2 risk, also called the mean squared error (MSE). In the MAPE regression context, the closeness of g(X) to Y is measured via the MAPE, and the aim of MAPE regressions is to find a model g_MAPE such that:
g_MAPE = arg min over g in G of E[ |g(X) − Y| / |Y| ],
where G is the class of models considered (e.g. linear models).
In practice
In practice g_MAPE can be estimated by the empirical risk minimization strategy, leading to
ĝ_MAPE = arg min over g in G of (1/n) Σ_i |g(X_i) − Y_i| / |Y_i|.
From a practical point of view, the use of the MAPE as a quality function for a regression model is equivalent to doing weighted mean absolute error (MAE) regression, also known as quantile regression. This property is trivial, since
|g(X) − Y| / |Y| = (1/|Y|) · |g(X) − Y|.
As a consequence, the use of the MAPE is very easy in practice, for example using existing libraries for quantile regression allowing weights.
Consistency
The use of the MAPE as a loss function for regression analysis is feasible both on a practical point of view and on a theoretical one, since the existence of an optimal model and the consistency of the empirical risk minimization can be proved.
WMAPE
WMAPE (sometimes spelled wMAPE) stands for weighted mean absolute percentage error. It is a measure used to evaluate the performance of regression or forecasting models. It is a variant of MAPE in which the mean absolute percent errors are treated as a weighted arithmetic mean. Most commonly the absolute percent errors are weighted by the actuals (e.g. in case of sales forecasting, errors are weighted by sales volume). Effectively, this overcomes the 'infinite error' issue.
Its formula is:
wMAPE = ( Σ_i w_i · |A_i − F_i| / |A_i| ) / Σ_i w_i,
where w_i is the weight, A_i is the actual data and F_i is the forecast or prediction. When the weights are the actuals themselves (w_i = |A_i|), this effectively simplifies to the much simpler formula:
wMAPE = Σ_i |A_i − F_i| / Σ_i |A_i|.
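Both formulas can be checked on a small made-up series; with the actuals as weights, the two wMAPE expressions agree:

```python
# MAPE and wMAPE on a small illustrative series (values are made up).
actual   = [100.0, 50.0, 200.0, 25.0]
forecast = [ 90.0, 60.0, 210.0, 20.0]

n = len(actual)
mape = sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / n

# wMAPE with the actuals as weights (w_i = A_i) ...
wmape = (sum(a * abs(a - f) / abs(a) for a, f in zip(actual, forecast))
         / sum(actual))
# ... collapses to the simpler sum-of-errors over sum-of-actuals form:
wmape_simple = (sum(abs(a - f) for a, f in zip(actual, forecast))
                / sum(actual))
```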
Confusingly, sometimes when people refer to wMAPE they are talking about a different model in which the numerator and denominator of the wMAPE formula above are weighted again by another set of custom weights . Perhaps it would be more accurate to call this the double weighted MAPE (wwMAPE). Its
|
https://en.wikipedia.org/wiki/D-module
|
In mathematics, a D-module is a module over a ring D of differential operators. The major interest of such D-modules is as an approach to the theory of linear partial differential equations. Since around 1970, D-module theory has been built up, mainly as a response to the ideas of Mikio Sato on algebraic analysis, and expanding on the work of Sato and Joseph Bernstein on the Bernstein–Sato polynomial.
Early major results were the Kashiwara constructibility theorem and Kashiwara index theorem of Masaki Kashiwara. The methods of D-module theory have always been drawn from sheaf theory and other techniques with inspiration from the work of Alexander Grothendieck in algebraic geometry. The approach is global in character, and differs from the functional analysis techniques traditionally used to study differential operators. The strongest results are obtained for over-determined systems (holonomic systems), and on the characteristic variety cut out by the symbols, which in the good case is a Lagrangian submanifold of the cotangent bundle of maximal dimension (involutive systems). The techniques were taken up from the side of the Grothendieck school by Zoghman Mebkhout, who obtained a general, derived category version of the Riemann–Hilbert correspondence in all dimensions.
Introduction: modules over the Weyl algebra
The first case of algebraic D-modules are modules over the Weyl algebra An(K) over a field K of characteristic zero. It is the algebra consisting of polynomials in the following variables
x1, ..., xn, ∂1, ..., ∂n,
where the xi all commute with each other, the ∂j all commute with each other, and xi and ∂j commute for i ≠ j, but the commutator of xi and ∂i satisfies the relation
[∂i, xi] = ∂ixi − xi∂i = 1.
For any polynomial f(x1, ..., xn), this implies the relation
[∂i, f] = ∂f / ∂xi,
thereby relating the Weyl algebra to differential equations.
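The defining relation [∂, x] = 1 can be checked directly in one variable by representing polynomials as coefficient lists (a minimal sketch, not tied to any particular computer algebra system):

```python
# The relation [d/dx, x] = 1 on the polynomial ring K[x], with
# polynomials stored as coefficient lists (index = degree).
def X(p):                       # multiplication by x: shift coefficients up
    return [0.0] + list(p)

def D(p):                       # formal derivative d/dx
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def commutator(p):              # (dx - xd) applied to p; should return p
    a = D(X(p))
    b = X(D(p))
    m = max(len(a), len(b))
    a = a + [0.0] * (m - len(a))
    b = b + [0.0] * (m - len(b))
    return [u - v for u, v in zip(a, b)]

p = [3.0, 0.0, 5.0, 1.0]        # the polynomial 3 + 5x^2 + x^3
result = commutator(p)          # acting by the identity, as the relation says
```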
An (algebraic) D-module is, by definition, a left module over the ring An(K). Examples for D-modules include the Weyl algebra itself (acting on itself by left multiplication), the (commutative) polynomial ring K[x1, ..., xn], where xi acts by multiplication and ∂j acts by partial differentiation with respect to xj and, in a similar vein, the ring of holomorphic functions on Cn (functions of n complex variables.)
Given some differential operator P = a_n(x) ∂^n + ... + a_1(x) ∂ + a_0(x), where x is a complex variable and the a_i(x) are polynomials, the quotient module M = A1(C)/A1(C)P is closely linked to the space of solutions of the differential equation
P f = 0,
where f is some holomorphic function in C, say. The vector space consisting of the solutions of that equation is given by the space Hom(M, O(C)) of homomorphisms of D-modules.
D-modules on algebraic varieties
The general theory of D-modules is developed on a smooth algebraic variety X defined over an algebraically closed field K of characteristic zero, such as K = C. The sheaf of differential operators DX is defined to be the OX-algebra generated by the vector fields on X, interpreted
|
https://en.wikipedia.org/wiki/Residue-class-wise%20affine%20group
|
In mathematics, specifically in group theory, residue-class-wise affine groups are certain permutation groups acting on Z (the integers), whose elements are bijective residue-class-wise affine mappings.
A mapping f : Z → Z is called residue-class-wise affine if there is a nonzero integer m such that the restrictions of f to the residue classes r(m) := r + mZ are all affine. This means that for any residue class r(m) there are coefficients a_r(m), b_r(m), c_r(m) such that the restriction of the mapping f to the set r(m) is given by
n ↦ (a_r(m) · n + b_r(m)) / c_r(m).
Residue-class-wise affine groups are countable, and they are accessible to computational investigations. Many of them act multiply transitively on Z or on subsets thereof.
A particularly basic type of residue-class-wise affine permutations are the class transpositions: given disjoint residue classes r1(m1) and r2(m2), the corresponding class transposition is the permutation of Z which interchanges r1 + k·m1 and r2 + k·m2 for every non-negative integer k and which fixes everything else. Here it is assumed that 0 ≤ r1 < m1 and that 0 ≤ r2 < m2.
The set of all class transpositions of Z generates a countable simple group which has the following properties:
It is not finitely generated.
Every finite group, every free product of finite groups and every free group of finite rank embeds into it.
The class of its subgroups is closed under taking direct products, under taking wreath products with finite groups, and under taking restricted wreath products with the infinite cyclic group.
It has finitely generated subgroups which do not have finite presentations.
It has finitely generated subgroups with algorithmically unsolvable membership problem.
It has an uncountable series of simple subgroups which is parametrized by the sets of odd primes.
It is straightforward to generalize the notion of a residue-class-wise affine group
to groups acting on suitable rings other than ,
though only little work in this direction has been done so far.
See also the Collatz conjecture, which is an assertion about a surjective, but not injective, residue-class-wise affine mapping.
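The (shortcut) Collatz mapping illustrates this: it is residue-class-wise affine with modulus 2, and it is surjective but not injective. A minimal sketch:

```python
# The shortcut Collatz mapping T, written residue-class-wise:
# on the class 0(2) it is n -> n/2, on the class 1(2) it is n -> (3n+1)/2.
def T(n):
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

# Surjective: every m is reached, e.g. from its even preimage 2m ...
hits_all = all(T(2 * m) == m for m in range(100))
# ... but not injective: distinct inputs can collide.
collision = (T(1) == T(4) == 2)
```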
References and external links
Stefan Kohl. Restklassenweise affine Gruppen. Dissertation, Universität Stuttgart, 2005. Archivserver der Deutschen Nationalbibliothek OPUS-Datenbank(Universität Stuttgart)
Stefan Kohl. RCWA – Residue-Class-Wise Affine Groups. GAP package. 2005.
Stefan Kohl. A Simple Group Generated by Involutions Interchanging Residue Classes of the Integers. Math. Z. 264 (2010), no. 4, 927–938.
Infinite group theory
Number theory
|
https://en.wikipedia.org/wiki/Ultranet
|
Ultranet can refer to one of the following:
Ultranet (company), a former telecommunications firm in Massachusetts, United States
Ultranet (math), a term in topology
Ultranet, an HVDC project in Germany
Ultranet (product), an online environment developed by the Department of Education and Early Childhood Development in Victoria, Australia
|
https://en.wikipedia.org/wiki/Pseudocircle
|
The pseudocircle is the finite topological space X consisting of four distinct points {a, b, c, d} with the following non-Hausdorff topology:
{∅, {a}, {b}, {a,b}, {a,b,c}, {a,b,d}, X}.
This topology corresponds to the partial order a < c, a < d, b < c, b < d, where the open sets are the downward-closed sets. X is highly pathological from the usual viewpoint of general topology as it fails to satisfy any separation axiom besides T0. However, from the viewpoint of algebraic topology X has the remarkable property that it is indistinguishable from the circle S1.
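Taking the open sets to be ∅, {a}, {b}, {a,b}, {a,b,c}, {a,b,d} and X (the standard pseudocircle topology), the topology axioms can be verified mechanically; for a finite space, closure under pairwise unions and intersections suffices:

```python
from itertools import combinations

# The pseudocircle's open sets (standard choice), checked against the
# finite-topology axioms: closure under pairwise union and intersection.
X = frozenset("abcd")
opens = {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab"),
         frozenset("abc"), frozenset("abd"), X}

ok = all(U | V in opens and U & V in opens
         for U, V in combinations(opens, 2))
```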
More precisely the continuous map f from S1 to X (where we think of S1 as the unit circle in R2) given by
f(x,y) = a for y > 0, f(x,y) = b for y < 0, f(1,0) = c, f(−1,0) = d
is a weak homotopy equivalence; that is, f induces an isomorphism on all homotopy groups. It follows (Allen Hatcher, Algebraic Topology, Cambridge University Press, 2002, Proposition 4.21) that f also induces an isomorphism on singular homology and cohomology, and more generally an isomorphism on all ordinary or extraordinary homology and cohomology theories (e.g., K-theory).
This can be proved using the following observation. Like S1, X is the union of two contractible open sets {a,b,c} and {a,b,d } whose intersection {a,b} is also the union of two disjoint contractible open sets {a} and {b}. So like S1, the result follows from the groupoid Seifert-van Kampen theorem, as in the book Topology and Groupoids.
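The finite topology involved here can be checked mechanically. The following sketch assumes the standard pseudocircle open sets ∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, d} and X, and verifies the topology axioms together with the separation properties claimed above:

```python
from itertools import combinations

# Open sets of the pseudocircle X = {a, b, c, d}, generated by the
# minimal open neighbourhoods {a}, {b}, {a,b,c}, {a,b,d}.
X = frozenset("abcd")
tau = {frozenset(s) for s in ["", "a", "b", "ab", "abc", "abd", "abcd"]}

# Topology axioms: contains the empty set and X, closed under union/intersection.
assert frozenset() in tau and X in tau
for U, V in combinations(tau, 2):
    assert U | V in tau and U & V in tau

# T0: any two points are separated by some open set containing exactly one of them.
def t0(x, y):
    return any((x in U) != (y in U) for U in tau)
assert all(t0(x, y) for x, y in combinations(X, 2))

# Not T1: every open set containing c also contains a.
assert not any('c' in U and 'a' not in U for U in tau)
```

Note that the two contractible open sets {a, b, c} and {a, b, d} used in the proof, and their intersection {a, b} = {a} ∪ {b}, all appear in tau.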
More generally McCord has shown that for any finite simplicial complex K, there is a finite topological space XK which has the same weak homotopy type as the geometric realization |K| of K. More precisely there is a functor, taking K to XK, from the category of finite simplicial complexes and simplicial maps and a natural weak homotopy equivalence from |K| to XK.
See also
References
Algebraic topology
Topological spaces
|
https://en.wikipedia.org/wiki/Emad%20Moteab
|
Emad Mohamed Abdelnaby Ibrahim Moteab (born 20 February 1983) is an Egyptian former professional footballer who played as a striker.
Career statistics
International
Source:
International goals
Scores and results list Egypt's goal tally first.
Honours and achievements
Al Ahly
Egyptian Premier League: 2004–05, 2005–06, 2006–07, 2007–08, 2009–10, 2010–11, 2013–14, 2015–16, 2016–17, 2017–18
Egypt Cup: 2006, 2007, 2016–17
Egyptian Super Cup: 2005, 2006, 2007, 2012, 2014, 2015
CAF Champions League: 2005, 2006, 2008, 2012, 2013
CAF Confederation Cup: 2014
CAF Super Cup: 2006, 2007, 2013, 2014
Al-Ittihad
Saudi Professional League: 2008–09
Egypt U20
African Youth Championship: 2003
Egypt
Africa Cup of Nations: 2006, 2008, 2010
References
External links
1983 births
Living people
Egyptian men's footballers
Egypt men's international footballers
2006 Africa Cup of Nations players
2008 Africa Cup of Nations players
2010 Africa Cup of Nations players
Al Ahly SC players
Men's association football forwards
Al-Ittihad Club (Jeddah) players
People from Sharqia Governorate
Olympic footballers for Egypt
Footballers at the 2012 Summer Olympics
Egyptian Premier League players
Al Taawoun FC players
Saudi Pro League players
Expatriate men's footballers in Saudi Arabia
Egyptian expatriate sportspeople in Saudi Arabia
Egyptian expatriate men's footballers
|
https://en.wikipedia.org/wiki/Gradient-related
|
Gradient-related is a term used in multivariable calculus to describe a sequence of directions. A direction sequence {d_k} is gradient-related to a point sequence {x_k} if for any subsequence {x_k}_{k∈K} that converges to a nonstationary point, the corresponding subsequence {d_k}_{k∈K} is bounded and satisfies
limsup_{k→∞, k∈K} ∇f(x_k)ᵀ d_k < 0.
Gradient-related directions are usually encountered in the gradient-based iterative optimization of a function f. At each iteration k the current vector is x_k and we move in the direction d_k, thus generating a sequence of directions.
It is easy to guarantee that the directions generated are gradient-related: for example, they can be set equal to the negative of the gradient at each point.
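As a minimal numerical sketch, take the hypothetical quadratic f(x) = ½ xᵀAx with a positive-definite A; the steepest-descent choice d_k = −∇f(x_k) then satisfies ∇f(x_k)ᵀd_k = −‖∇f(x_k)‖² < 0 at every nonstationary point (the matrix A and the fixed step size are arbitrary illustrative choices):

```python
import numpy as np

# f(x) = 1/2 x^T A x with A positive definite; grad f(x) = A x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x

x = np.array([1.0, -2.0])
for _ in range(20):
    d = -grad(x)                              # steepest-descent direction
    # Gradient-related condition at a nonstationary point: grad^T d < 0.
    assert grad(x) @ d < 0 or np.allclose(grad(x), 0)
    x = x + 0.1 * d                           # fixed step size (illustrative)

print(np.linalg.norm(grad(x)))                # gradient norm shrinks toward 0
```

Any bounded descent directions making a uniformly negative angle with the gradient would pass the same check; the negative gradient is simply the most common choice.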
Vector calculus
|
https://en.wikipedia.org/wiki/Antiunitary%20operator
|
In mathematics, an antiunitary transformation is a bijective antilinear map
U : H1 → H2
between two complex Hilbert spaces such that
⟨Ux, Uy⟩ = \overline{⟨x, y⟩}
for all x and y in H1, where the horizontal bar represents the complex conjugate. If additionally one has H1 = H2, then U is called an antiunitary operator.
Antiunitary operators are important in quantum theory because they are used to represent certain symmetries, such as time reversal. Their fundamental importance in quantum physics is further demonstrated by Wigner's theorem.
Invariance transformations
In quantum mechanics, the invariance transformations of complex Hilbert space H leave the absolute value of the scalar product invariant:
|⟨Tx, Ty⟩| = |⟨x, y⟩|
for all x and y in H.
Due to Wigner's theorem these transformations can either be unitary or antiunitary.
Geometric Interpretation
Congruences of the plane form two distinct classes. The first conserves the orientation and is generated by translations and rotations. The second does not conserve the orientation and is obtained from the first class by applying a reflection. On the complex plane these two classes correspond (up to translation) to unitaries and antiunitaries, respectively.
Properties
⟨Ux, Uy⟩ = \overline{⟨x, y⟩} = ⟨y, x⟩ holds for all elements x, y of the Hilbert space and an antiunitary U.
When U is antiunitary then U² is unitary. This follows from
⟨U²x, U²y⟩ = \overline{⟨Ux, Uy⟩} = ⟨x, y⟩.
For a unitary operator V the operator VK, where K is the complex conjugation operator, is antiunitary. The reverse is also true: for an antiunitary U the operator UK is unitary.
For an antiunitary U the definition of the adjoint operator U* is changed to compensate the complex conjugation, becoming
⟨U*x, y⟩ = \overline{⟨x, Uy⟩}.
The adjoint of an antiunitary U is also antiunitary and
UU* = U*U = 1. (This is not to be confused with the definition of unitary operators, as the antiunitary operator U is not complex linear.)
Examples
The complex conjugate operator is an antiunitary operator on the complex plane.
The operator σ2K, where σ2 is the second Pauli matrix and K is the complex conjugation operator, is antiunitary. It satisfies (σ2K)² = −1.
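Both claims about σ2K can be sanity-checked numerically; the sketch below uses NumPy and the convention that the inner product is conjugate-linear in its first argument, which matches np.vdot:

```python
import numpy as np

# T(v) = sigma_2 @ conj(v): antilinear map on C^2.
sigma2 = np.array([[0, -1j], [1j, 0]])
T = lambda v: sigma2 @ np.conj(v)

rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=2) + 1j * rng.normal(size=2)

inner = lambda u, v: np.vdot(u, v)        # <u, v>, conjugate-linear in u

# Antiunitarity: <Tx, Ty> equals the conjugate of <x, y>.
assert np.isclose(inner(T(x), T(y)), np.conj(inner(x, y)))
# (sigma_2 K)^2 = -1.
assert np.allclose(T(T(x)), -x)
```

The same check works with σ2 replaced by any unitary matrix for the antiunitarity assertion; the square being −1 is particular to σ2, since conj(σ2) = −σ2.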
Decomposition of an antiunitary operator into a direct sum of elementary Wigner antiunitaries
An antiunitary operator on a finite-dimensional space may be decomposed as a direct sum of elementary Wigner antiunitaries W_θ, 0 ≤ θ ≤ π. The operator W_0 is just simple complex conjugation on C:
W_0(z) = \overline{z}.
For 0 < θ ≤ π, the operator W_θ acts on two-dimensional complex Hilbert space. It is defined by
W_θ(z1, z2) = (e^{iθ/2} \overline{z2}, e^{−iθ/2} \overline{z1}).
Note that for 0 < θ ≤ π,
W_θ(W_θ(z1, z2)) = (e^{iθ} z1, e^{−iθ} z2),
so such W_θ may not be further decomposed into W_0's, which square to the identity map.
Note that the above decomposition of antiunitary operators contrasts with the spectral decomposition of unitary operators. In particular, a unitary operator on a complex Hilbert space may be decomposed into a direct sum of unitaries acting on 1-dimensional complex spaces (eigenspaces), but an antiunitary operator may only be decomposed into a direct sum of elementary operators on 1- and 2-dimensional complex spaces.
References
Wigner, E. "Normal Form of Antiunitary Operators", Journal of Mathematical Physics Vol 1, no 5, 1960, pp. 409–412
Wigner, E. "Phenomenological D
|
https://en.wikipedia.org/wiki/St%C3%B8rmer%27s%20theorem
|
In number theory, Størmer's theorem, named after Carl Størmer, gives a finite bound on the number of consecutive pairs of smooth numbers that exist, for a given degree of smoothness, and provides a method for finding all such pairs using Pell equations. It follows from the Thue–Siegel–Roth theorem that there are only a finite number of pairs of this type, but Størmer gave a procedure for finding them all.
Statement
If one chooses a finite set P = {p1, ..., pk} of prime numbers then the P-smooth numbers are defined as the set of integers
that can be generated by products of numbers in P. Then Størmer's theorem states that, for every choice of P, there are only finitely many pairs of consecutive P-smooth numbers. Further, it gives a method of finding them all using Pell equations.
The procedure
Størmer's original procedure involves solving a set of roughly 3k Pell equations, in each one finding only the smallest solution. A simplified version of the procedure, due to D. H. Lehmer, is described below; it solves fewer equations but finds more solutions in each equation.
Let P be the given set of primes, and define a number to be P-smooth if all its prime factors belong to P. Assume p1 = 2; otherwise there could be no consecutive P-smooth numbers, because all P-smooth numbers would be odd. Lehmer's method involves solving the Pell equation
x² − 2qy² = 1
for each P-smooth square-free number q other than 2. Each such number q is generated as a product of a subset of P, so there are 2k − 1 Pell equations to solve. For each such equation, let xi, yi be the generated solutions, for i in the range from 1 to max(3, (pk + 1)/2) (inclusive), where pk is the largest of the primes in P.
Then, as Lehmer shows, all consecutive pairs of P-smooth numbers are of the form (xi − 1)/2, (xi + 1)/2. Thus one can find all such pairs by testing the numbers of this form for P-smoothness.
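Lehmer's procedure can be sketched in a few lines of Python. The helper names (pell_solutions, stormer_pairs) are my own, and the Pell solver simply brute-forces the fundamental solution before composing it with itself, which is fine for the small values of q that arise here:

```python
from itertools import combinations
from math import isqrt

def pell_solutions(D, count):
    """First `count` solutions (x, y) of x^2 - D y^2 = 1: brute-force the
    fundamental solution, then compose it with itself."""
    y = 1
    while True:
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            break
        y += 1
    a, b = x, y                          # fundamental solution
    sols, (cx, cy) = [], (x, y)
    for _ in range(count):
        sols.append((cx, cy))
        cx, cy = a * cx + D * b * cy, a * cy + b * cx
    return sols

def smooth(n, P):
    for p in P:
        while n % p == 0:
            n //= p
    return n == 1

def stormer_pairs(P):
    """All pairs (m, m+1) of consecutive P-smooth numbers, by Lehmer's
    procedure (assumes 2 is in P)."""
    bound = max(3, (max(P) + 1) // 2)
    pairs = set()
    for r in range(len(P) + 1):
        for sub in combinations(P, r):
            q = 1
            for p in sub:
                q *= p
            if q == 2:
                continue                 # q: squarefree P-smooth numbers != 2
            for x, _ in pell_solutions(2 * q, bound):
                m = (x - 1) // 2
                if smooth(m, P) and smooth(m + 1, P):
                    pairs.add((m, m + 1))
    return sorted(pairs)

print(stormer_pairs([2, 3, 5]))          # the ten pairs, ending with (80, 81)
```

Running this for P = {2, 3, 5} reproduces the worked example in the next section, including the rejection of candidates such as (49, 50).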
Example
To find the ten consecutive pairs of {2,3,5}-smooth numbers (in music theory, giving the superparticular ratios for just tuning) let P = {2,3,5}. There are seven P-smooth squarefree numbers q (omitting the eighth P-smooth squarefree number, 2): 1, 3, 5, 6, 10, 15, and 30, each of which leads to a Pell equation. The number of solutions per Pell equation required by Lehmer's method is max(3, (5 + 1)/2) = 3, so this method generates three solutions to each Pell equation, as follows.
For q = 1, the first three solutions to the Pell equation x2 − 2y2 = 1 are (3,2), (17,12), and (99,70). Thus, for each of the three values xi = 3, 17, and 99, Lehmer's method tests the pair (xi − 1)/2, (xi + 1)/2 for smoothness; the three pairs to be tested are (1,2), (8,9), and (49,50). Both (1,2) and (8,9) are pairs of consecutive P-smooth numbers, but (49,50) is not, as 49 has 7 as a prime factor.
For q = 3, the first three solutions to the Pell equation x2 − 6y2 = 1 are (5,2), (49,20), and (485,198). From the three values xi = 5, 49, and 485 Lehmer's method forms the three candidate pairs of consecutive numbers (xi − 1)/2, (x
|
https://en.wikipedia.org/wiki/Partition%20of%20sums%20of%20squares
|
The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is the partitioning of sums of squared deviations or errors. Mathematically, the sum of squared deviations is an unscaled, or unadjusted measure of dispersion (also called variability). When scaled for the number of degrees of freedom, it estimates the variance, or spread of the observations about their mean value. Partitioning of the sum of squared deviations into various components allows the overall variability in a dataset to be ascribed to different types or sources of variability, with the relative importance of each being quantified by the size of each component of the overall sum of squares.
Background
The distance from any point in a collection of data, to the mean of the data, is the deviation. This can be written as y_i − ȳ, where y_i is the ith data point, and ȳ is the estimate of the mean. If all such deviations are squared, then summed, as in Σ_i (y_i − ȳ)², this gives the "sum of squares" for these data.
When more data are added to the collection the sum of squares will increase, except in unlikely cases such as the new data being equal to the mean. So usually, the sum of squares will grow with the size of the data collection. That is a manifestation of the fact that it is unscaled.
In many cases, the number of degrees of freedom is simply the number of data points in the collection, minus one. We write this as n − 1, where n is the number of data points.
Scaling (also known as normalizing) means adjusting the sum of squares so that it does not grow as the size of the data collection grows. This is important when we want to compare samples of different sizes, such as a sample of 100 people compared to a sample of 20 people. If the sum of squares were not normalized, its value would always be larger for the sample of 100 people than for the sample of 20 people. To scale the sum of squares, we divide it by the degrees of freedom, i.e., calculate the sum of squares per degree of freedom, or variance. Standard deviation, in turn, is the square root of the variance.
The above describes how the sum of squares is used in descriptive statistics; see the article on total sum of squares for an application of this broad principle to inferential statistics.
Partitioning the sum of squares in linear regression
Theorem. Given a linear regression model including a constant β0, based on a sample containing n observations, the total sum of squares (TSS) can be partitioned as follows into the explained sum of squares (ESS) and the residual sum of squares (RSS):
TSS = ESS + RSS,
where this equation is equivalent to each of the following forms:
Σ_{i=1}^n (y_i − ȳ)² = Σ_{i=1}^n (ŷ_i − ȳ)² + Σ_{i=1}^n (y_i − ŷ_i)²,
where ŷ_i is the value estimated by the regression line having b̂0, b̂1, ..., b̂q as the estimated coefficients.
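The partition is easy to confirm numerically. The following sketch fits an ordinary least-squares line with an intercept to synthetic data (the data, coefficients and seed are arbitrary choices for illustration):

```python
import numpy as np

# Synthetic data from a line plus noise.
rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=50)

# OLS fit including a constant: design matrix with a column of ones.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

tss = np.sum((y - y.mean()) ** 2)       # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)   # explained sum of squares
rss = np.sum((y - y_hat) ** 2)          # residual sum of squares
assert np.isclose(tss, ess + rss)       # holds because residuals sum to 0
```

Dropping the column of ones breaks the identity in general, which is exactly the role of the constant term discussed in the proof below.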
Proof
The requirement that the model include a constant, or equivalently that the design matrix contain a column of ones, ensures that the residuals sum to zero, i.e. Σ_{i=1}^n (y_i − ŷ_i) = 0.
The proof can also be expressed in vector form, as follows:
The el
|
https://en.wikipedia.org/wiki/Pro-Football-Reference.com
|
Pro-Football-Reference.com is a website providing a variety of statistics for American football. It is one of the few sites that provides information on both active and retired players. The site provides statistics for teams dating back to 1920. It has statistics for quarterbacks, running backs, receivers, kickers, returners, and punters, as well as some defensive statistics, and Pro Bowl rosters. It also has each team's game-by-game results. The website is maintained by Sports Reference, and Fantasy Sports Ventures holds a minority stake in the organization. The company also publishes similar statistics websites for basketball, baseball, and hockey.
References
National Football League websites
Internet properties established in 2003
Sports databases
|
https://en.wikipedia.org/wiki/Partial%20equivalence%20relation
|
In mathematics, a partial equivalence relation (often abbreviated as PER, in older literature also called restricted equivalence relation) is a homogeneous binary relation that is symmetric and transitive. If the relation is also reflexive, then the relation is an equivalence relation.
Definition
Formally, a relation R on a set X is a PER if it holds for all a, b, c ∈ X that:
if aRb, then bRa (symmetry)
if aRb and bRc, then aRc (transitivity)
Another more intuitive definition is that R on a set X is a PER if there is some subset Y of X such that R ⊆ Y × Y and R is an equivalence relation on Y. The two definitions are seen to be equivalent by taking Y = {x ∈ X | x R x}.
Properties and applications
The following properties hold for a partial equivalence relation R on a set X:
R is an equivalence relation on the subset Y = {x ∈ X | x R x} ⊆ X.
difunctional: the relation is the set {(a, b) | f(a) = g(b)} for two partial functions f, g : X ⇀ Y and some indicator set Y
right and left Euclidean: for a, b, c ∈ X, aRb and aRc imply bRc; similarly for left Euclideanness, bRa and cRa imply bRc
quasi-reflexive: if x R y, then x R x and y R y.
None of these properties is sufficient to imply that the relation is a PER.
In non-set-theory settings
In type theory, constructive mathematics and their applications to computer science, constructing analogues of subsets is often problematic—in these contexts PERs are therefore more commonly used, particularly to define setoids, sometimes called partial setoids. Forming a partial setoid from a type and a PER is analogous to forming subsets and quotients in classical set-theoretic mathematics.
The algebraic notion of congruence can also be generalized to partial equivalences, yielding the notion of subcongruence, i.e. a homomorphic relation that is symmetric and transitive, but not necessarily reflexive.
Examples
A simple example of a PER that is not an equivalence relation is the empty relation , if is not empty.
Kernels of partial functions
If f is a partial function on a set A, then the relation ≈ defined by
x ≈ y if f is defined at x, f is defined at y, and f(x) = f(y)
is a partial equivalence relation, since it is clearly symmetric and transitive.
If f is undefined on some elements, then ≈ is not an equivalence relation. It is not reflexive, since if f(x) is not defined then x ≈ x fails; in fact, for such an x there is no y such that x ≈ y. It follows immediately that the largest subset of A on which ≈ is an equivalence relation is precisely the subset on which f is defined.
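A small sketch in Python makes this concrete; the carrier set A and the partial function f below are hypothetical choices, with None modelling "undefined":

```python
# Kernel of a partial function as a PER on A = {0, ..., 9}.
A = range(10)
def f(x):
    if x % 2 == 0:
        return x // 2
    return None              # None models "f is undefined at x"

R = {(x, y) for x in A for y in A
     if f(x) is not None and f(y) is not None and f(x) == f(y)}

# Symmetric and transitive ...
assert all((y, x) in R for (x, y) in R)
assert all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
# ... but not reflexive: 1 is unrelated to itself since f(1) is undefined.
assert (1, 1) not in R
# R is an equivalence relation exactly on the domain of definition of f:
assert {x for x in A if (x, x) in R} == {x for x in A if f(x) is not None}
```

The final assertion is the "largest subset" statement above, specialized to this example.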
Functions respecting equivalence relations
Let X and Y be sets equipped with equivalence relations (or PERs) ≈X and ≈Y. For functions f, g : X → Y, define f ≈ g to mean:
for all x, x′ in X, x ≈X x′ implies f(x) ≈Y g(x′);
then f ≈ f means that f induces a well-defined function of the quotients X/≈X → Y/≈Y. Thus, the PER ≈ captures both the idea of definedness on the quotients and of two functions inducing the same function on the quotient.
Equality of IEEE floating point values
The IEEE 754:2008 floating point standard defines an "EQ" relation for floating point values. This predicate is symmetrical and transitive, but is not reflexive because of the presence of NaN values that are not EQ to themselves.
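Python floats follow IEEE 754, so the failure of reflexivity at NaN is easy to observe directly:

```python
nan = float("nan")

# The EQ predicate (== on floats) is not reflexive at NaN:
assert not (nan == nan)          # NaN is not EQ to itself
# Ordinary values are EQ to themselves:
assert 1.0 == 1.0
# EQ also identifies positive and negative zero:
assert 0.0 == -0.0
```

Restricted to the non-NaN floats, == is an ordinary equivalence relation, which is exactly the PER picture: the relation is an equivalence relation on the subset of elements related to themselves.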
Notes
Referen
|
https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot
|
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as:
Stability
Causal system / anticausal system
Region of convergence (ROC)
Minimum phase / non minimum phase
A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O.
A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system:
Continuous-time systems use the Laplace transform and are plotted in the s-plane:
Real frequency components are along its vertical axis (the imaginary line where σ = 0, i.e. s = jω)
Discrete-time systems use the Z-transform and are plotted in the z-plane:
Real frequency components are along its unit circle (where |z| = 1)
Continuous-time systems
In general, a rational transfer function for a continuous-time LTI system has the form:
H(s) = B(s)/A(s) = (Σ_{m=0}^{M} b_m s^m) / (Σ_{n=0}^{N} a_n s^n)
where
B(s) and A(s) are polynomials in s,
M is the order of the numerator polynomial,
b_m is the mth coefficient of the numerator polynomial,
N is the order of the denominator polynomial, and
a_n is the nth coefficient of the denominator polynomial.
Either M or N or both may be zero, but in real systems, it should be the case that M ≤ N; otherwise the gain would be unbounded at high frequencies.
Poles and zeros
the zeros of the system are roots of the numerator polynomial: those s such that B(s) = 0
the poles of the system are roots of the denominator polynomial: those s such that A(s) = 0
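In practice the roots are found numerically. The sketch below uses a hypothetical transfer function (not one from this article), with coefficients given in descending powers of s:

```python
import numpy as np

# Hypothetical example: H(s) = (s + 3) / ((s + 1)(s + 2)).
B = [1.0, 3.0]                 # numerator   B(s) = s + 3
A = [1.0, 3.0, 2.0]            # denominator A(s) = s^2 + 3s + 2

zeros = np.roots(B)            # zero at s = -3
poles = np.roots(A)            # poles at s = -1 and s = -2

# All poles lie in the open left half-plane, so the causal ROC (extending
# rightward from the rightmost pole) includes the imaginary axis: BIBO stable.
assert all(p.real < 0 for p in poles)
```

Marking the poles with an X and the zero with an O at these locations gives the pole-zero plot described above.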
Region of convergence
The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal.
If the ROC includes the imaginary axis, then the system is bounded-input, bounded-output (BIBO) stable.
If the ROC extends rightward from the pole with the largest real-part (but not at infinity), then the system is causal.
If the ROC extends leftward from the pole with the smallest real-part (but not at negative infinity), then the system is anti-causal.
The ROC is usually chosen to include the imaginary axis since it is important for most practical systems to have BIBO stability.
Example
This system has no (finite) zeros and two poles:
and
The pole-zero plot would be:
Notice that these two poles are complex conjugates, which is the necessary and sufficient condition to have real-valued coefficients in the differential equation representing the system.
Discrete-time systems
In general, a rational transfer function for a discrete-time LTI system has the form:
where
is the order of the numerator pol
|
https://en.wikipedia.org/wiki/Lefschetz%20hyperplane%20theorem
|
In mathematics, specifically in algebraic geometry and algebraic topology, the Lefschetz hyperplane theorem is a precise statement of certain relations between the shape of an algebraic variety and the shape of its subvarieties. More precisely, the theorem says that for a variety X embedded in projective space and a hyperplane section Y, the homology, cohomology, and homotopy groups of X determine those of Y. A result of this kind was first stated by Solomon Lefschetz for homology groups of complex algebraic varieties. Similar results have since been found for homotopy groups, in positive characteristic, and in other homology and cohomology theories.
A far-reaching generalization of the hard Lefschetz theorem is given by the decomposition theorem.
The Lefschetz hyperplane theorem for complex projective varieties
Let X be an n-dimensional complex projective algebraic variety in CPN, and let Y be a hyperplane section of X such that U = X ∖ Y is smooth. The Lefschetz theorem refers to any of the following statements:
The natural map Hk(Y, Z) → Hk(X, Z) in singular homology is an isomorphism for k < n − 1 and is surjective for k = n − 1.
The natural map Hk(X, Z) → Hk(Y, Z) in singular cohomology is an isomorphism for k < n − 1 and is injective for k = n − 1.
The natural map πk(Y) → πk(X) is an isomorphism for k < n − 1 and is surjective for k = n − 1.
Using a long exact sequence, one can show that each of these statements is equivalent to a vanishing theorem for certain relative topological invariants. In order, these are:
The relative singular homology groups Hk(X, Y; Z) are zero for k ≤ n − 1.
The relative singular cohomology groups Hk(X, Y; Z) are zero for k ≤ n − 1.
The relative homotopy groups πk(X, Y) are zero for k ≤ n − 1.
Lefschetz's proof
Solomon Lefschetz used his idea of a Lefschetz pencil to prove the theorem. Rather than considering the hyperplane section Y alone, he put it into a family of hyperplane sections Yt, where Y = Y0. Because a generic hyperplane section is smooth, all but a finite number of Yt are smooth varieties. After removing these points from the t-plane and making an additional finite number of slits, the resulting family of hyperplane sections is topologically trivial. That is, it is a product of a generic Yt with an open subset of the t-plane. X, therefore, can be understood if one understands how hyperplane sections are identified across the slits and at the singular points. Away from the singular points, the identification can be described inductively. At the singular points, the Morse lemma implies that there is a choice of coordinate system for X of a particularly simple form. This coordinate system can be used to prove the theorem directly.
Andreotti and Frankel's proof
Aldo Andreotti and Theodore Frankel recognized that Lefschetz's theorem could be recast using Morse theory. Here the parameter t plays the role of a Morse function. The basic tool in this approach is the Andreotti–Frankel theorem, which states that a comp
|
https://en.wikipedia.org/wiki/Hyperplane%20section
|
In mathematics, a hyperplane section of a subset X of projective space Pn is the intersection of X with some hyperplane H. In other words, we look at the subset X ∩ H of those elements x of X that satisfy the single linear condition L = 0 defining H as a linear subspace. Here L or H can range over the dual projective space of non-zero linear forms in the homogeneous coordinates, up to scalar multiplication.
From a geometrical point of view, the most interesting case is when X is an algebraic subvariety; for more general cases, in mathematical analysis, some analogue of the Radon transform applies. In algebraic geometry, assuming therefore that X is V, a subvariety not lying completely in any H, the hyperplane sections are algebraic sets with irreducible components all of dimension dim(V) − 1. What more can be said is addressed by a collection of results known collectively as Bertini's theorem. The topology of hyperplane sections is studied in the topic of the Lefschetz hyperplane theorem and its refinements. Because the dimension drops by one in taking hyperplane sections, the process is potentially an inductive method for understanding varieties of higher dimension. A basic tool for that is the Lefschetz pencil.
References
Algebraic geometry
|
https://en.wikipedia.org/wiki/Schauder%20basis
|
In mathematics, a Schauder basis or countable basis is similar to the usual (Hamel) basis of a vector space; the difference is that Hamel bases use linear combinations that are finite sums, while for Schauder bases they may be infinite sums. This makes Schauder bases more suitable for the analysis of infinite-dimensional topological vector spaces including Banach spaces.
Schauder bases were described by Juliusz Schauder in 1927, although such bases were discussed earlier. For example, the Haar basis was given in 1909, and Georg Faber discussed in 1910 a basis for continuous functions on an interval, sometimes called a Faber–Schauder system.
Definitions
Let V denote a topological vector space over the field F. A Schauder basis is a sequence {bn} of elements of V such that for every element v ∈ V there exists a unique sequence {αn} of scalars in F so that
v = Σ_{n=0}^∞ αn bn.
The convergence of the infinite sum is implicitly that of the ambient topology, i.e., the partial sums Σ_{k=0}^n αk bk converge to v in the topology of V, but can be reduced to only weak convergence in a normed vector space (such as a Banach space). Unlike a Hamel basis, the elements of the basis must be ordered since the series may not converge unconditionally.
Note that some authors define Schauder bases to be countable (as above), while others use the term to include uncountable bases. In either case, the sums themselves always are countable. An uncountable Schauder basis is a linearly ordered set rather than a sequence, and each sum inherits the order of its terms from this linear ordering. They can and do arise in practice. As an example, a separable Hilbert space can only have a countable Schauder basis but a non-separable Hilbert space may have an uncountable one.
Though the definition above technically does not require a normed space, a norm is necessary to say almost anything useful about Schauder bases. The results below assume the existence of a norm.
A Schauder basis is said to be normalized when all the basis vectors have norm 1 in the Banach space V.
A sequence in V is a basic sequence if it is a Schauder basis of its closed linear span.
Two Schauder bases, {bn} in V and {cn} in W, are said to be equivalent if there exist two constants c > 0 and C such that for every natural number n and all sequences {αn} of scalars,
c ||Σ_{k=0}^n αk bk||_V ≤ ||Σ_{k=0}^n αk ck||_W ≤ C ||Σ_{k=0}^n αk bk||_V.
A family of vectors in V is total if its linear span (the set of finite linear combinations) is dense in V. If V is a Hilbert space, an orthogonal basis is a total subset B of V such that elements in B are nonzero and pairwise orthogonal. Further, when each element in B has norm 1, then B is an orthonormal basis of V.
Properties
Let {bn} be a Schauder basis of a Banach space V over F = R or C. It is a subtle consequence of the open mapping theorem that the linear mappings {Pn} defined by
Pn(Σ_k αk bk) = Σ_{k=0}^n αk bk
are uniformly bounded by some constant C. When C = 1, the basis is called a monotone basis. The maps {Pn} are the basis projections.
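A minimal finite-dimensional sketch: the standard unit vectors form a monotone Schauder basis of ℓ¹, and truncating a vector to its first n coordinates is exactly the basis projection Pn (here restricted to 12 coordinates, an arbitrary choice):

```python
import numpy as np

# Basis projection P_n for the standard unit-vector basis: keep the
# first n coordinates, zero the rest.
def P(n, v):
    out = np.zeros_like(v)
    out[:n] = v[:n]
    return out

rng = np.random.default_rng(1)
v = rng.normal(size=12)

# ell^1 norms of the projections P_0 v, P_1 v, ..., P_12 v.
norms = [np.abs(P(n, v)).sum() for n in range(len(v) + 1)]

assert all(a <= b for a, b in zip(norms, norms[1:]))   # ||P_n v|| nondecreasing
assert norms[-1] == np.abs(v).sum()                    # P_N is the identity here
```

Since ‖Pn v‖ ≤ ‖v‖ for every n and v, the basis constant is C = 1, i.e. the basis is monotone.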
Let {b*n} denote the coordinate functionals, where b*n assigns to every vector v in V the coordinate αn of v i
|
https://en.wikipedia.org/wiki/Local%20system
|
In mathematics, a local system (or a system of local coefficients) on a topological space X is a tool from algebraic topology which interpolates between cohomology with coefficients in a fixed abelian group A, and general sheaf cohomology in which coefficients vary from point to point. Local coefficient systems were introduced by Norman Steenrod in 1943.
Local systems are the building blocks of more general tools, such as constructible and perverse sheaves.
Definition
Let X be a topological space. A local system (of abelian groups/modules/...) on X is a locally constant sheaf (of abelian groups/modules...) on X. In other words, a sheaf is a local system if every point has an open neighborhood such that the restricted sheaf is isomorphic to the sheafification of some constant presheaf.
Equivalent definitions
Path-connected spaces
If X is path-connected, a local system of abelian groups has the same stalk L at every point. There is a bijective correspondence between local systems on X and group homomorphisms
ρ : π1(X, x) → Aut(L),
and similarly for local systems of modules. The map ρ giving the local system is called the monodromy representation of the local system.
This shows that (for X path-connected) a local system is precisely a sheaf whose pullback to the universal cover of X is a constant sheaf.
This correspondence can be upgraded to an equivalence of categories between the category of local systems of abelian groups on X and the category of abelian groups endowed with an action of (equivalently, -modules).
Stronger definition on non-connected spaces
A stronger, nonequivalent definition that works for non-connected X is the following: a local system is a covariant functor
L : Π1(X) → Mod(R)
from the fundamental groupoid Π1(X) of X to the category of modules over a commutative ring R. This is equivalently the data of an assignment to every point x a module M along with a group representation ρx : π1(X, x) → Aut(M), such that the various ρx are compatible with change of basepoint and the induced map on fundamental groups.
Examples
Constant sheaves such as . This is a useful tool for computing cohomology since in good situations, there is an isomorphism between sheaf cohomology and singular cohomology:
Let . Since , there is a family of local systems on X corresponding to the maps :
Horizontal sections of vector bundles with a flat connection. If is a vector bundle with flat connection , then there is a local system given by For instance, take and the trivial bundle. Sections of E are n-tuples of functions on X, so defines a flat connection on E, as does for any matrix of one-forms on X. The horizontal sections are then the solutions to the linear differential equation . If extends to a one-form on the above will also define a local system on , so will be trivial since . So to give an interesting example, choose one with a pole at 0: in which case for ,
An n-sheeted covering map is a local system with fibers given by the set . Similarly, a fibre bundle with discret
|
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Babai
|
László "Laci" Babai (born July 20, 1950, in Budapest) is a Hungarian professor of computer science and mathematics at the University of Chicago. His research focuses on computational complexity theory, algorithms, combinatorics, and finite groups, with an emphasis on the interactions between these fields.
Life
In 1968, Babai won a gold medal at the International Mathematical Olympiad. Babai studied mathematics at the Faculty of Science of the Eötvös Loránd University from 1968 to 1973, received a PhD from the Hungarian Academy of Sciences in 1975, and received a DSc from the Hungarian Academy of Sciences in 1984. He held a teaching position at Eötvös Loránd University from 1971; in 1987 he took joint positions as a professor in algebra at Eötvös Loránd and in computer science at the University of Chicago. In 1995, he began a joint appointment in the mathematics department at Chicago and gave up his position at Eötvös Loránd.
Work
He is the author of over 180 academic papers.
His notable accomplishments include the introduction of interactive proof systems, the introduction of the term Las Vegas algorithm, and the introduction of group theoretic methods in graph isomorphism testing. In November 2015, he announced a quasipolynomial time algorithm for the graph isomorphism problem.
He is editor-in-chief of the refereed online journal Theory of Computing. Babai was also involved in the creation of the Budapest Semesters in Mathematics program and coined its name.
Graph isomorphism in quasipolynomial time
After announcing the result in 2015, Babai presented a paper proving that the graph isomorphism problem can be solved in quasi-polynomial time in 2016, at the ACM Symposium on Theory of Computing. In response to an error discovered by Harald Helfgott, he posted an update in 2017.
Honors
In 1988, Babai won the Hungarian State Prize, in 1990 he was elected as a corresponding member of the Hungarian Academy of Sciences, and in 1994 he became a full member. In 1999 the Budapest University of Technology and Economics awarded him an honorary doctorate.
In 1993, Babai was awarded the Gödel Prize together with Shafi Goldwasser, Silvio Micali, Shlomo Moran, and Charles Rackoff, for their papers on interactive proof systems.
In 2015, he was elected a fellow of the American Academy of Arts and Sciences, and won the Knuth Prize.
Babai was an invited speaker at the International Congresses of Mathematicians in Kyoto (1990), Zürich (1994, plenary talk), and Rio de Janeiro (2018).
Sources
Professor László Babai's algorithm is next big step in conquering isomorphism in graphs. Division of the Physical Sciences, The University of Chicago, November 20, 2015.
Mathematician claims breakthrough in complexity theory, by Adrian Cho. Science (AAAS News), November 10, 2015.
A Quasipolynomial Time Algorithm for Graph Isomorphism: The Details (with background on graph isomorphism and the main result). Math ∩ Programming.
|
https://en.wikipedia.org/wiki/Subderivative
|
In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.
Let f : I → ℝ be a real-valued convex function defined on an open interval I of the real line. Such a function need not be differentiable at all points: for example, the absolute value function f(x) = |x| is non-differentiable at x = 0. However, as seen in the graph on the right (where f(x) in blue has non-differentiable kinks similar to the absolute value function), for any x₀ in the domain of the function one can draw a line which goes through the point (x₀, f(x₀)) and which is everywhere either touching or below the graph of f. The slope of such a line is called a subderivative.
Definition
Rigorously, a subderivative of a convex function f : I → ℝ at a point x₀ in the open interval I is a real number c such that

f(x) − f(x₀) ≥ c(x − x₀)

for all x in I. By the converse of the mean value theorem, the set of subderivatives at x₀ for a convex function is a nonempty closed interval [a, b], where a and b are the one-sided limits

a = lim(x→x₀⁻) (f(x) − f(x₀)) / (x − x₀),
b = lim(x→x₀⁺) (f(x) − f(x₀)) / (x − x₀).

These limits always exist and satisfy a ≤ b.
The set of all subderivatives is called the subdifferential of the function f at x₀, denoted by ∂f(x₀). If f is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential at x₀ contains exactly one subderivative, then ∂f(x₀) = {f′(x₀)} and f is differentiable at x₀.
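As a numerical illustration (not part of the article), the endpoints a and b of the subdifferential can be approximated by one-sided difference quotients; the convex test function max(x, 2x) used here is a hypothetical example with a kink at 0:

```python
# Approximate the subdifferential of a convex function at x0 by
# one-sided difference quotients: a ≈ f'_-(x0), b ≈ f'_+(x0).
def subdifferential(f, x0, h=1e-8):
    a = (f(x0) - f(x0 - h)) / h   # left derivative (limit from below)
    b = (f(x0 + h) - f(x0)) / h   # right derivative (limit from above)
    return a, b

# f(x) = max(x, 2x) is convex with a kink at 0: f'_-(0) = 1, f'_+(0) = 2,
# so the subdifferential at 0 is the interval [1, 2].
f = lambda x: max(x, 2 * x)
a, b = subdifferential(f, 0.0)
print(a, b)  # approximately 1.0 2.0
```

For a point where f is differentiable, the two quotients agree and the "interval" collapses to the single derivative value.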
Example
Consider the function f(x) = |x|, which is convex. Then the subdifferential at the origin is the interval [−1, 1]. The subdifferential at any point x₀ < 0 is the singleton set {−1}, while the subdifferential at any point x₀ > 0 is the singleton set {1}. This is similar to the sign function, but the subdifferential is not single-valued at 0, instead including all possible subderivatives.
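A small sketch (an illustration, not from the article) can test candidate slopes against the defining inequality f(x) − f(x₀) ≥ c(x − x₀) on a grid of points; for |x| at the origin, slopes inside [−1, 1] pass and slopes outside fail:

```python
# Check the subderivative inequality f(x) - f(x0) >= c*(x - x0)
# for a candidate slope c, on a grid of test points xs.
def is_subderivative(f, x0, c, xs):
    return all(f(x) - f(x0) >= c * (x - x0) - 1e-12 for x in xs)

xs = [i / 10 for i in range(-50, 51)]  # grid on [-5, 5]
f = abs
print(is_subderivative(f, 0.0, 0.5, xs))   # True: 0.5 lies in [-1, 1]
print(is_subderivative(f, 0.0, 1.5, xs))   # False: 1.5 lies outside
```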
Properties
A convex function f : I → ℝ is differentiable at x₀ if and only if the subdifferential there is a singleton set, which is {f′(x₀)}.
A point x₀ is a global minimum of a convex function f if and only if zero is contained in the subdifferential. For instance, in the figure above, one may draw a horizontal "subtangent line" to the graph of f at (0, f(0)). This last property is a generalization of the fact that the derivative of a function differentiable at a local minimum is zero.
If f, g : I → ℝ are convex functions with subdifferentials ∂f(x) and ∂g(x), and x is an interior point of the domain of one of the functions, then the subdifferential of f + g is ∂(f + g)(x) = ∂f(x) + ∂g(x), where the addition operator denotes the Minkowski sum. This reads as "the subdifferential of a sum is the sum of the subdifferentials."
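The sum rule and the optimality criterion can be combined in a short worked sketch (my own illustrative example): for f(x) = |x| and g(x) = |x − 1| at x = 0, we have ∂f(0) = [−1, 1] and ∂g(0) = {−1}, so the Minkowski sum gives ∂(f + g)(0) = [−2, 0], which contains 0, confirming that 0 minimizes f + g:

```python
# Minkowski sum of two closed intervals [a1, b1] and [a2, b2],
# represented as (lo, hi) pairs: every pairwise sum is covered.
def minkowski_sum(i1, i2):
    return (i1[0] + i2[0], i1[1] + i2[1])

sub_f = (-1.0, 1.0)    # subdifferential of |x| at 0
sub_g = (-1.0, -1.0)   # subdifferential of |x - 1| at 0 (a singleton)
sub_sum = minkowski_sum(sub_f, sub_g)
print(sub_sum)                        # (-2.0, 0.0)
print(sub_sum[0] <= 0 <= sub_sum[1])  # True: 0 is a global minimizer of f + g
```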
The subgradient
The concepts of subderivative and subdifferential can be generalized to functions of several variables. If f : U → ℝ is a real-valued convex function defined on a convex open set U in the Euclidean space ℝⁿ, a vector v in that space is called a subgradient at x₀ ∈ U if for any x ∈ U one has that

f(x) − f(x₀) ≥ v · (x − x₀),

where the dot denotes the dot product.
The set of all subgradients at x₀ is called the subdifferential at x₀ and is denoted ∂f(x₀). The subdifferential is always a nonempty convex compact set.
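As a multivariate sketch (an illustration under my own choice of example, not from the article), consider f(x) = ‖x‖₁ on ℝ³: a vector v is a subgradient at x₀ when vᵢ = sign(x₀ᵢ) for nonzero coordinates and vᵢ ∈ [−1, 1] for zero coordinates. The code below spot-checks the defining inequality at random points:

```python
# Verify the subgradient inequality f(x) >= f(x0) + v . (x - x0)
# for f(x) = ||x||_1 at x0 = (1, -2, 0), with a valid subgradient v.
import random

def f(x):
    return sum(abs(t) for t in x)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x0 = [1.0, -2.0, 0.0]
v = [1.0, -1.0, 0.3]  # sign(x0_i) on nonzero coords; 0.3 in [-1, 1] on the zero coord

random.seed(0)
ok = all(
    f(x) >= f(x0) + dot(v, [xi - x0i for xi, x0i in zip(x, x0)]) - 1e-12
    for x in ([random.uniform(-5, 5) for _ in range(3)] for _ in range(1000))
)
print(ok)  # True
```

The inequality holds coordinate-wise: vᵢ·xᵢ ≤ |xᵢ| whenever |vᵢ| ≤ 1, which is why the random check never fails.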
These concepts generalize further to convex functions f : U → ℝ on a convex set in a locally convex space.
|
https://en.wikipedia.org/wiki/International%20Colloquium%20on%20Group%20Theoretical%20Methods%20in%20Physics
|
The International Colloquium on Group Theoretical Methods in Physics (ICGTMP) is an academic conference devoted to applications of group theory to physics. It was founded in 1972 by Henri Bacry and Aloysio Janner. It hosts a colloquium every two years. The ICGTMP is led by a Standing Committee, which helps select winners for the three major awards presented at the conference: the Wigner Medal (1978–2018), the Hermann Weyl Prize (since 2002) and the Weyl-Wigner Award (since 2022).
Wigner Medal
The Wigner Medal was an award designed "to recognize outstanding contributions to the understanding of physics through Group Theory". It was administered by The Group Theory and Fundamental Physics Foundation, a publicly supported organization. The first award was given in 1978 to Eugene Wigner at the Integrative Conference on Group Theory and Mathematical Physics.
The collaboration between the Standing Committee of the ICGTMP and the Foundation ended in 2020. In 2023 a new process for awarding the Wigner Medal was created by the Foundation. The new Wigner Medal can be granted in any field of theoretical physics. The new Wigner Medals for 2020 and 2022 were granted retrospectively in 2023. The first winners of the new prize were Yvette Kosmann-Schwarzbach and Daniel Greenberger.
The Standing Committee does not recognize the post-2018 Wigner Medals awarded by the Foundation as the continuation of the prize from 1978 through 2018.
Weyl-Wigner Award
In 2020–21, the ICGTMP Standing Committee created a new prize to replace the Wigner Medal, called the Weyl-Wigner Award. The purpose of the Weyl-Wigner Award is "to recognize outstanding contributions to the understanding of physics through group theory, continuing the tradition of The Wigner Medal that was awarded at the International Colloquium on Group Theoretical Methods in Physics from 1978 to 2018." The recipients of this prize are chosen by an international selection committee elected by the Standing Committee.
The first Weyl-Wigner Award was awarded in Strasbourg in July 2022 during the ICGTMP Group34 Colloquium to Nicolai Reshetikhin.
Hermann Weyl Prize
The Hermann Weyl Prize was established to award young scientists "who have performed original work of significant scientific quality in the area of understanding physics through symmetries".
Heinz-Dietrich Doebner convinced the Standing Committee that it would be necessary for the future development of the field to acknowledge young researchers who presented outstanding work and to motivate them to continue and diversify their activity. He proposed awarding a prize at each colloquium, and Ivan Todorov suggested naming it after the mathematician and physicist Hermann Weyl. The first Weyl Prize was awarded in 2002 to Edward Frenkel.
List of conferences
See also
List of physics awards
List of prizes named after people
References
External links
ICGTMP Homepage
Wigner Medal Homepage
2018 Wigner Medal reform
|