https://en.wikipedia.org/wiki/Great%20icosihemidodecahedron
In geometry, the great icosihemidodecahedron (or great icosahemidodecahedron) is a nonconvex uniform polyhedron, indexed as U71. It has 26 faces (20 triangles and 6 decagrams), 60 edges, and 30 vertices. Its vertex figure is a crossed quadrilateral. It is a hemipolyhedron with 6 decagrammic faces passing through the model center. Related polyhedra Its convex hull is the icosidodecahedron. It also shares its edge arrangement with the great icosidodecahedron (having the triangular faces in common), and with the great dodecahemidodecahedron (having the decagrammic faces in common). Gallery See also List of uniform polyhedra References External links Uniform polyhedra and duals Uniform polyhedra
https://en.wikipedia.org/wiki/Icosidodecadodecahedron
In geometry, the icosidodecadodecahedron (or icosified dodecadodecahedron) is a nonconvex uniform polyhedron, indexed as U44. It has 44 faces (12 pentagons, 12 pentagrams and 20 hexagons), 120 edges and 60 vertices. Its vertex figure is a crossed quadrilateral. Related polyhedra It shares its vertex arrangement with the uniform compounds of 10 or 20 triangular prisms. It additionally shares its edges with the rhombidodecadodecahedron (having the pentagonal and pentagrammic faces in common) and the rhombicosahedron (having the hexagonal faces in common). See also List of uniform polyhedra Snub icosidodecadodecahedron References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Small%20ditrigonal%20dodecicosidodecahedron
In geometry, the small ditrigonal dodecicosidodecahedron (or small dodekified icosidodecahedron) is a nonconvex uniform polyhedron, indexed as U43. It has 44 faces (20 triangles, 12 pentagrams and 12 decagons), 120 edges, and 60 vertices. Its vertex figure is a crossed quadrilateral. Related polyhedra It shares its vertex arrangement with the great stellated truncated dodecahedron. It additionally shares its edges with the small icosicosidodecahedron (having the triangular and pentagrammic faces in common) and the small dodecicosahedron (having the decagonal faces in common). See also List of uniform polyhedra References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Nonconvex%20great%20rhombicosidodecahedron
In geometry, the nonconvex great rhombicosidodecahedron is a nonconvex uniform polyhedron, indexed as U67. It has 62 faces (20 triangles, 30 squares and 12 pentagrams), 120 edges, and 60 vertices. It is also called the quasirhombicosidodecahedron. It is given a Schläfli symbol rr{5/3,3}. Its vertex figure is a crossed quadrilateral. This model shares the name with the convex great rhombicosidodecahedron, also known as the truncated icosidodecahedron. Cartesian coordinates Cartesian coordinates for the vertices of a nonconvex great rhombicosidodecahedron are all the even permutations of (±1/τ², 0, ±(2−1/τ)) (±1, ±1/τ³, ±1) (±1/τ, ±1/τ², ±2/τ) where τ = (1+√5)/2 is the golden ratio (sometimes written φ). Related polyhedra It shares its vertex arrangement with the truncated great dodecahedron, and with the uniform compounds of 6 or 12 pentagonal prisms. It additionally shares its edge arrangement with the great dodecicosidodecahedron (having the triangular and pentagrammic faces in common), and the great rhombidodecahedron (having the square faces in common). Great deltoidal hexecontahedron The great deltoidal hexecontahedron is a nonconvex isohedral polyhedron. It is the dual of the nonconvex great rhombicosidodecahedron. It is visually identical to the great rhombidodecacron. It has 60 intersecting cross quadrilateral faces, 120 edges, and 62 vertices. It is also called a great strombic hexecontahedron. See also List of uniform polyhedra References External links Uniform polyhedra and duals Uniform polyhedra
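The vertex list can be generated directly from the three coordinate templates above. The following sketch (Python; the helper names, the rounding tolerance and the use of cyclic permutations as the even permutations are our assumptions about the recipe, not part of the article) expands each template over all sign choices and the three even permutations and then deduplicates, which should recover the 60 vertices.

```python
from itertools import product

TAU = (1 + 5 ** 0.5) / 2  # golden ratio

def even_perms(p):
    """The three cyclic (even) permutations of a 3-tuple."""
    x, y, z = p
    return [(x, y, z), (y, z, x), (z, x, y)]

def expand(template):
    """All sign choices and cyclic permutations of a coordinate template."""
    pts = set()
    for signs in product((1, -1), repeat=3):
        signed = tuple(s * c for s, c in zip(signs, template))
        for q in even_perms(signed):
            pts.add(tuple(round(v, 9) for v in q))
    return pts

templates = [
    (1 / TAU ** 2, 0.0, 2 - 1 / TAU),
    (1.0, 1 / TAU ** 3, 1.0),
    (1 / TAU, 1 / TAU ** 2, 2 / TAU),
]

vertices = set()
for t in templates:
    vertices |= expand(t)

print(len(vertices))  # expected: 60 vertices
```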
https://en.wikipedia.org/wiki/Great%20rhombihexahedron
In geometry, the great rhombihexahedron (or great rhombicube) is a nonconvex uniform polyhedron, indexed as U21. It has 18 faces (12 squares and 6 octagrams), 48 edges, and 24 vertices. Its dual is the great rhombihexacron. Its vertex figure is a crossed quadrilateral. Orthogonal projections Gallery Related polyhedra It shares the vertex arrangement with the convex truncated cube. It additionally shares its edge arrangement with the nonconvex great rhombicuboctahedron (having 12 square faces in common), and with the great cubicuboctahedron (having the octagrammic faces in common). It may be constructed as the exclusive or (blend) of three octagrammic prisms. Similarly, the small rhombihexahedron may be constructed as the exclusive or of three octagonal prisms. Great rhombihexacron The great rhombihexacron is a nonconvex isohedral polyhedron. It is the dual of the uniform great rhombihexahedron (U21). It has 24 identical bow-tie-shaped faces, 18 vertices, and 48 edges. It has 12 outer vertices which have the same vertex arrangement as the cuboctahedron, and 6 inner vertices with the vertex arrangement of an octahedron. As a surface geometry, it can be seen as visually similar to a Catalan solid, the disdyakis dodecahedron, with much taller rhombus-based pyramids joined to each face of a rhombic dodecahedron. See also List of uniform polyhedra References uniform polyhedra and duals External links Uniform polyhedra
https://en.wikipedia.org/wiki/Great%20dodecicosahedron
In geometry, the great dodecicosahedron (or great dodekicosahedron) is a nonconvex uniform polyhedron, indexed as U63. It has 32 faces (20 hexagons and 12 decagrams), 120 edges, and 60 vertices. Its vertex figure is a crossed quadrilateral. It has a composite Wythoff symbol, 3 ( ) |, requiring two different Schwarz triangles to generate it: (3 ) and (3 ). (3 | represents the great dodecicosahedron with an extra 12 pentagons, and 3 | represents it with an extra 20 triangles.) Its vertex figure 6... is also ambiguous, having two clockwise and two counterclockwise faces around each vertex. Related polyhedra It shares its vertex arrangement with the truncated dodecahedron. It additionally shares its edge arrangement with the great icosicosidodecahedron (having the hexagonal faces in common) and the great ditrigonal dodecicosidodecahedron (having the decagrammic faces in common). Gallery See also List of uniform polyhedra References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Great%20rhombidodecahedron
In geometry, the great rhombidodecahedron is a nonconvex uniform polyhedron, indexed as U73. It has 42 faces (30 squares, 12 decagrams), 120 edges and 60 vertices. Its vertex figure is a crossed quadrilateral. Related polyhedra It shares its vertex arrangement with the truncated great dodecahedron and the uniform compounds of 6 or 12 pentagonal prisms. It additionally shares its edge arrangement with the nonconvex great rhombicosidodecahedron (having the square faces in common), and with the great dodecicosidodecahedron (having the decagrammic faces in common). Gallery See also List of uniform polyhedra References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Inverted%20snub%20dodecadodecahedron
In geometry, the inverted snub dodecadodecahedron (or vertisnub dodecadodecahedron) is a nonconvex uniform polyhedron, indexed as U60. It is given a Schläfli symbol sr{5/3,5}. Cartesian coordinates Cartesian coordinates for the vertices of an inverted snub dodecadodecahedron are all the even permutations of (±2α, ±2, ±2β), (±(α+β/τ+τ), ±(-ατ+β+1/τ), ±(α/τ+βτ-1)), (±(-α/τ+βτ+1), ±(-α+β/τ-τ), ±(ατ+β-1/τ)), (±(-α/τ+βτ-1), ±(α-β/τ-τ), ±(ατ+β+1/τ)) and (±(α+β/τ-τ), ±(ατ-β+1/τ), ±(α/τ+βτ+1)), with an even number of plus signs, where β = (α²/τ+τ)/(ατ−1/τ), where τ = (1+√5)/2 is the golden mean and α is the negative real root of τα⁴−α³+2α²−α−1/τ, or approximately −0.3352090. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one. Related polyhedra Medial inverted pentagonal hexecontahedron The medial inverted pentagonal hexecontahedron (or midly petaloid ditriacontahedron) is a nonconvex isohedral polyhedron. It is the dual of the uniform inverted snub dodecadodecahedron. Its faces are irregular nonconvex pentagons, with one very acute angle. Proportions Denote the golden ratio by , and let be the largest (least negative) real zero of the polynomial . Then each face has three equal angles of , one of and one of . Each face has one medium length edge, two short and two long ones. If the medium length is , then the short edges have length , and the long edges have length . The dihedral angle equals . The other real zero of the polynomial plays a similar role for the medial pentagonal hexecontahedron. See also List of uniform polyhedra Snub dodecadodecahedron References p. 124 External links Uniform polyhedra
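The quartic defining α can be solved numerically to make the coordinate recipe concrete. A minimal sketch (Python, standard library only; the bracketing interval and the bisection scheme are our own choices) that isolates the negative real root and derives β from it:

```python
TAU = (1 + 5 ** 0.5) / 2  # golden mean

def f(a):
    """tau*a^4 - a^3 + 2*a^2 - a - 1/tau, whose negative real root is alpha."""
    return TAU * a ** 4 - a ** 3 + 2 * a ** 2 - a - 1 / TAU

# Bisection on an interval that brackets the negative root (f(-1) > 0, f(0) < 0).
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = (lo + hi) / 2

beta = (alpha ** 2 / TAU + TAU) / (alpha * TAU - 1 / TAU)

print(round(alpha, 7))  # about -0.3352090, matching the value quoted above
print(round(beta, 7))
```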
https://en.wikipedia.org/wiki/Great%20snub%20dodecicosidodecahedron
In geometry, the great snub dodecicosidodecahedron (or great snub dodekicosidodecahedron) is a nonconvex uniform polyhedron, indexed as U64. It has 104 faces (80 triangles and 24 pentagrams), 180 edges, and 60 vertices. It has Coxeter diagram . It has the unusual feature that its 24 pentagram faces occur in 12 coplanar pairs. Related polyhedra It shares its vertices and edges, as well as 20 of its triangular faces and all its pentagrammic faces, with the great dirhombicosidodecahedron, (although the latter has 60 edges not contained in the great snub dodecicosidodecahedron). It shares its other 60 triangular faces (and its pentagrammic faces again) with the great disnub dirhombidodecahedron. The edges and triangular faces also occur in the compound of twenty octahedra. In addition, 20 of the triangular faces occur in one enantiomer of the compound of twenty tetrahemihexahedra, and the other 60 triangular faces occur in the other enantiomer. Gallery See also List of uniform polyhedra References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Great%20inverted%20snub%20icosidodecahedron
In geometry, the great inverted snub icosidodecahedron (or great vertisnub icosidodecahedron) is a uniform star polyhedron, indexed as U69. It is given a Schläfli symbol sr{5/3,3}, and Coxeter-Dynkin diagram . In the book Polyhedron Models by Magnus Wenninger, the polyhedron is misnamed great snub icosidodecahedron, and vice versa. Cartesian coordinates Cartesian coordinates for the vertices of a great inverted snub icosidodecahedron are all the even permutations of (±2α, ±2, ±2β), (±(α−βτ−1/τ), ±(α/τ+β−τ), ±(−ατ−β/τ−1)), (±(ατ−β/τ+1), ±(−α−βτ+1/τ), ±(−α/τ+β+τ)), (±(ατ−β/τ−1), ±(α+βτ+1/τ), ±(−α/τ+β−τ)) and (±(α−βτ+1/τ), ±(−α/τ−β−τ), ±(−ατ−β/τ+1)), with an even number of plus signs, where α = ξ−1/ξ and β = −ξ/τ+1/τ²−1/(ξτ), where τ = (1+√5)/2 is the golden mean and ξ is the greater positive real solution to ξ³−2ξ=−1/τ, or approximately 1.2224727. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one. The circumradius for unit edge length is where is the appropriate root of . The four positive real roots of the sextic in are the circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74). Related polyhedra Great inverted pentagonal hexecontahedron The great inverted pentagonal hexecontahedron (or petaloidal trisicosahedron) is a nonconvex isohedral polyhedron. It is composed of 60 concave pentagonal faces, 150 edges and 92 vertices. It is the dual of the uniform great inverted snub icosidodecahedron. Proportions Denote the golden ratio by . Let be the smallest positive zero of the polynomial . Then each pentagonal face has four equal angles of and one angle of . Each face has three long and two short edges. The ratio between the lengths of the long and the short edges is given by . The dihedral angle equals . Part of each face lies inside the solid, hence is invisible in solid models. The other two zeroes of the polynomial play a similar role in the description of the great pentagonal hexecontahedron and the great pentagrammic hexecontahedron. See also List of uniform polyhedra Great snub icosidodecahedron Great retrosnub icosidodecahedron References p. 126 External links Uniform polyhedra
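The cubic ξ³ − 2ξ = −1/τ has three real roots, and the text above singles out the greater positive one. A small sketch (Python with NumPy; the root-selection logic is our own) recovering ξ, α and β numerically:

```python
import numpy as np

TAU = (1 + 5 ** 0.5) / 2  # golden mean

# xi^3 - 2*xi + 1/tau = 0, coefficients in descending-power order.
roots = np.roots([1.0, 0.0, -2.0, 1 / TAU])
real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Three real roots: one negative, two positive. The greater positive root
# belongs to this polyhedron; the smaller positive root is the one used by
# the great retrosnub icosidodecahedron.
xi = real[-1]
print(round(xi, 7))  # about 1.2224727, matching the value quoted above

alpha = xi - 1 / xi
beta = -xi / TAU + 1 / TAU ** 2 - 1 / (xi * TAU)
print(alpha, beta)
```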
https://en.wikipedia.org/wiki/Snub%20icosidodecadodecahedron
In geometry, the snub icosidodecadodecahedron is a nonconvex uniform polyhedron, indexed as U46. It has 104 faces (80 triangles, 12 pentagons, and 12 pentagrams), 180 edges, and 60 vertices. As the name indicates, it belongs to the family of snub polyhedra. The circumradius of the snub icosidodecadodecahedron with unit edge length is where ρ is the plastic constant, or the unique real root of ρ³ = ρ + 1. Cartesian coordinates Cartesian coordinates for the vertices of a snub icosidodecadodecahedron are all the even permutations of (±2α, ±2γ, ±2β), (±(α+β/τ+γτ), ±(-ατ+β+γ/τ), ±(α/τ+βτ-γ)), (±(-α/τ+βτ+γ), ±(-α+β/τ-γτ), ±(ατ+β-γ/τ)), (±(-α/τ+βτ-γ), ±(α-β/τ-γτ), ±(ατ+β+γ/τ)) and (±(α+β/τ-γτ), ±(ατ-β+γ/τ), ±(α/τ+βτ+γ)) with an even number of plus signs, where τ = (1+√5)/2 is the golden ratio; ρ is the plastic constant, or the unique real solution to ρ³=ρ+1; α = ρ+1 = ρ³; β = τ²ρ⁴+τ; and γ = ρ²+τρ. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one. Related polyhedra Medial hexagonal hexecontahedron The medial hexagonal hexecontahedron is a nonconvex isohedral polyhedron. It is the dual of the uniform snub icosidodecadodecahedron. See also List of uniform polyhedra References External links Uniform polyhedra
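The constants α, β and γ all follow from the plastic constant ρ, so they are easy to check numerically. A brief sketch (Python; the fixed-point iteration is our own choice of method):

```python
TAU = (1 + 5 ** 0.5) / 2  # golden ratio

# Fixed-point iteration for the plastic constant: rho = (rho + 1)^(1/3).
rho = 1.0
for _ in range(100):
    rho = (rho + 1) ** (1 / 3)

alpha = rho + 1          # equals rho**3
beta = TAU ** 2 * rho ** 4 + TAU
gamma = rho ** 2 + TAU * rho

print(round(rho, 9))     # about 1.324717957, the plastic constant
print(alpha, beta, gamma)
print(abs(rho ** 3 - (rho + 1)) < 1e-12)  # sanity check of rho^3 = rho + 1
```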
https://en.wikipedia.org/wiki/Small%20retrosnub%20icosicosidodecahedron
In geometry, the small retrosnub icosicosidodecahedron (also known as a retrosnub disicosidodecahedron, small inverted retrosnub icosicosidodecahedron, or retroholosnub icosahedron) is a nonconvex uniform polyhedron, indexed as U72. It has 112 faces (100 triangles and 12 pentagrams), 180 edges, and 60 vertices. It is given a Schläfli symbol sr{⁵/₃,³/₂}. The 40 non-snub triangular faces form 20 coplanar pairs, forming star hexagons that are not quite regular. Unlike most snub polyhedra, it has reflection symmetries. George Olshevsky nicknamed it the yog-sothoth (after the Cthulhu Mythos deity). Convex hull Its convex hull is a nonuniform truncated dodecahedron. Cartesian coordinates Cartesian coordinates for the vertices of a small retrosnub icosicosidodecahedron are all the even permutations of (±(1-ϕ−α), 0, ±(3−ϕα)) (±(ϕ-1−α), ±2, ±(2ϕ-1−ϕα)) (±(ϕ+1−α), ±2(ϕ-1), ±(1−ϕα)) where ϕ = (1+√5)/2 is the golden ratio and α = . See also List of uniform polyhedra Small snub icosicosidodecahedron References External links Uniform polyhedra
https://en.wikipedia.org/wiki/Great%20retrosnub%20icosidodecahedron
In geometry, the great retrosnub icosidodecahedron or great inverted retrosnub icosidodecahedron is a nonconvex uniform polyhedron, indexed as U74. It has 92 faces (80 triangles and 12 pentagrams), 150 edges, and 60 vertices. It is given a Schläfli symbol sr{3/2,5/3}. Cartesian coordinates Cartesian coordinates for the vertices of a great retrosnub icosidodecahedron are all the even permutations of (±2α, ±2, ±2β), (±(α−βτ−1/τ), ±(α/τ+β−τ), ±(−ατ−β/τ−1)), (±(ατ−β/τ+1), ±(−α−βτ+1/τ), ±(−α/τ+β+τ)), (±(ατ−β/τ−1), ±(α+βτ+1/τ), ±(−α/τ+β−τ)) and (±(α−βτ+1/τ), ±(−α/τ−β−τ), ±(−ατ−β/τ+1)), with an even number of plus signs, where α = ξ−1/ξ and β = −ξ/τ+1/τ²−1/(ξτ), where τ = (1+√5)/2 is the golden mean and ξ is the smaller positive real root of ξ³−2ξ=−1/τ, approximately 0.3264. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one. Taking the odd permutations with an even number of plus signs or vice versa results in the same two figures rotated by 90 degrees. The circumradius for unit edge length is where is the appropriate root of . The four positive real roots of the sextic in are the circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74). See also List of uniform polyhedra Great snub icosidodecahedron Great inverted snub icosidodecahedron References External links Uniform polyhedra
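For comparison with the great inverted snub icosidodecahedron above, which uses the greater positive root of the same cubic, a short sketch (Python; the bracketing interval is our assumption) isolating the smaller positive root used here:

```python
TAU = (1 + 5 ** 0.5) / 2  # golden mean

def f(x):
    """xi^3 - 2*xi + 1/tau; its smaller positive root is the xi used here."""
    return x ** 3 - 2 * x + 1 / TAU

# Bisection on (0, 1), which brackets the smaller positive root.
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

xi = (lo + hi) / 2
print(round(xi, 7))  # about 0.3264
alpha, beta = xi - 1 / xi, -xi / TAU + 1 / TAU ** 2 - 1 / (xi * TAU)
print(alpha, beta)
```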
https://en.wikipedia.org/wiki/M%C3%BCnchhausen%20trilemma
In epistemology, the Münchhausen trilemma is a thought experiment intended to demonstrate the theoretical impossibility of proving any truth, even in the fields of logic and mathematics, without appealing to accepted assumptions. If it is asked how any given proposition is known to be true, proof in support of that proposition may be provided. Yet that same question can be asked of that supporting proof, and any subsequent supporting proof. The Münchhausen trilemma is that there are only three ways of completing a proof: The circular argument, in which the proof of some proposition presupposes the truth of that very proposition The regressive argument, in which each proof requires a further proof, ad infinitum The dogmatic argument, which rests on accepted precepts which are merely asserted rather than defended The trilemma, then, is the decision among the three equally unsatisfying options. Karl Popper's suggestion was to accept the trilemma as unsolvable and work with knowledge by way of conjecture and criticism. Name The name Münchhausen-Trilemma was coined by the German philosopher Hans Albert in 1968 in reference to a trilemma of "dogmatism versus infinite regress versus psychologism" used by Karl Popper. It is a reference to the problem of "bootstrapping", based on the story of Baron Munchausen (in German, "Münchhausen") pulling himself and the horse on which he was sitting out of a mire by his own hair. Like Munchausen, who cannot make progress because he has no solid ground to stand on, any purported justification of all knowledge must fail, because it must start from a position of no knowledge, and therefore cannot make progress. It must either start with some knowledge, as with dogmatism, not start at all, as with infinite regress, or be a circular argument, justified only by itself and have no solid foundation, much like the absurdity of Münchhausen pulling himself out of the mire without any independent support. In contemporary epistemology, advocates of coherentism are supposed to accept the "circular" horn of the trilemma; foundationalists rely on the axiomatic argument. The view that accepts infinite regress is called infinitism. Agrippa's Trilemma It is also known as Agrippa's trilemma or the Agrippan trilemma after a similar argument reported by Sextus Empiricus, which was attributed to Agrippa the Skeptic by Diogenes Laërtius. Sextus' argument, however, consists of five (not three) "modes". Fries's trilemma Popper in Logic of Scientific Discovery mentions neither Sextus nor Agrippa, but instead attributes his trilemma to German philosopher Jakob Friedrich Fries, leading some to call it Fries's trilemma as a result. Jakob Friedrich Fries formulated a similar trilemma in which statements can be accepted either: dogmatically supported by infinite regress based on perceptual experience (psychologism) The first two possibilities are rejected by Fries as unsatisfactory, requiring his adopting the third option. Karl Po
https://en.wikipedia.org/wiki/Algarheim
Algarheim is a village in the municipality of Ullensaker, Norway. Its population (2005) is 371. From 2008 to 2013, and again from 2020, Algarheim was counted as part of Jessheim in Statistics Norway's settlement statistics. In the past, a school, a camp, a post office, and a couple of larger farms were connected to Algarheim. The name Algarheim itself originally comes from a large farm in the area. Algarheim had its own general store and post office until the end of the 1990s. The place also has its own public kindergarten and a primary school, Algarheim School. The place was heavily developed from the mid-1970s with new housing estates consisting of villas, as well as a new primary school. References Villages in Akershus Ullensaker
https://en.wikipedia.org/wiki/Curvature%20tensor
In differential geometry, the term curvature tensor may refer to: the Riemann curvature tensor of a Riemannian manifold — see also Curvature of Riemannian manifolds; the curvature of an affine connection or covariant derivative (on tensors); the curvature form of an Ehresmann connection: see Ehresmann connection, connection (principal bundle) or connection (vector bundle). It is one of the quantities that appear in the Einstein field equations. See also Tensor (disambiguation)
https://en.wikipedia.org/wiki/Supporting%20hyperplane
In geometry, a supporting hyperplane of a set S in Euclidean space R^n is a hyperplane that has both of the following two properties: S is entirely contained in one of the two closed half-spaces bounded by the hyperplane, S has at least one boundary-point on the hyperplane. Here, a closed half-space is the half-space that includes the points within the hyperplane. Supporting hyperplane theorem This theorem states that if S is a convex set in the topological vector space X = R^n and x0 is a point on the boundary of S, then there exists a supporting hyperplane containing x0. If x* is an element of X* \ {0} (X* is the dual space of X, x* is a nonzero linear functional) such that x*(x0) ≥ x*(x) for all x in S, then H = {x in X : x*(x) = x*(x0)} defines a supporting hyperplane. Conversely, if S is a closed set with nonempty interior such that every point on the boundary has a supporting hyperplane, then S is a convex set, and is the intersection of all its supporting closed half-spaces. The hyperplane in the theorem may not be unique, as noticed in the second picture on the right. If the closed set S is not convex, the statement of the theorem is not true at all points on the boundary of S, as illustrated in the third picture on the right. The supporting hyperplanes of convex sets are also called tac-planes or tac-hyperplanes. The forward direction can be proved as a special case of the separating hyperplane theorem (see the page for the proof). For the converse direction, See also Support function Supporting line (supporting hyperplanes in R²) Notes References & further reading Convex geometry Functional analysis Duality theories
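As a concrete illustration of the functional formulation above, the sketch below (Python; the choice of the closed unit disk and of the boundary point is ours, not from the article) checks numerically that x*(x) = <x0, x> satisfies x*(x) ≤ x*(x0) on S, so {x : <x0, x> = x*(x0)} is a supporting hyperplane at x0.

```python
import math
import random

# S: the closed unit disk in R^2 (example set). x0: a boundary point on the unit circle.
x0 = (math.cos(0.7), math.sin(0.7))

def functional(x):
    """x*(x) = <x0, x>; over the unit disk this is maximised at x0, where it equals 1."""
    return x0[0] * x[0] + x0[1] * x[1]

def sample_disk():
    """Sample a point of the unit disk by rejection."""
    while True:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if x[0] ** 2 + x[1] ** 2 <= 1:
            return x

# Supporting-hyperplane condition: x*(x0) >= x*(x) for all x in S.
random.seed(0)
assert all(functional(sample_disk()) <= functional(x0) + 1e-12 for _ in range(10_000))
print("x*(x) <= x*(x0) for all sampled x in S; H = {x : <x0, x> = 1} supports S at x0")
```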
https://en.wikipedia.org/wiki/List%20of%20films%20about%20mathematicians
This is a list of feature films and documentaries that include mathematicians, scientists who use math or references to mathematicians. About mathematics Films where mathematics is central to the plot: 21 (2008) – A group of current and former MIT students, mostly mathematicians, and an algebra professor devise a card counting scheme for success at Las Vegas Strip blackjack tables. The Bank (2001) – A mathematician discovers a formula to predict fluctuations in the stock market. Cube (1997) – Six people, including Leaven, a math student, awake in a deathtrap based on mathematical principles. Fermat's Room (2007) – Three mathematicians and one inventor are invited to a house under the premise of solving a great enigma and told to use pseudonyms based on famous historical mathematicians. At the house, they are trapped in a room. They must solve puzzles given by the host, who calls himself "Fermat", in order to escape the slowly closing walls of the room. Gifted (2017) – Frank Adler (Chris Evans) is a single man raising a child prodigy—his spirited young niece Mary (Mckenna Grace)—in a coastal town in Florida after the death of her mother Diane, a mathematician. Mary's grandmother Evelyn (Lindsay Duncan) and uncle have different ideas on how to raise her. Mary tells her grandmother she wants to solve the problem her mother was working on, the Navier-Stokes existence and smoothness problem. Good Will Hunting (1997) – Janitor and genius Will Hunting (Matt Damon) begins to turn his life around with the help of a psychologist (Robin Williams) and a Fields Medal-winning professor (Stellan Skarsgård). I.Q. (1994) – Albert Einstein (Walter Matthau) helps a young man (Tim Robbins) pretend to be a physicist in order to catch the attention of Einstein's niece (Meg Ryan). An Invisible Sign (2011) – Mona Gray (Jessica Alba) gives up everything important to her in life, except mathematics, as part of a "deal with the universe" to help restore her father (a mathematician) to health. Years later, Mona teaches the subject, and does her best to help her students contend with their own personal crises. Moebius (1996) – Topologists including a young girl make contributions to the subway system and other facets of reality in Argentina in this math film with a science fiction and surreal feel. Moneyball (2011) – Oakland Athletics baseball team's general manager Billy Beane attempts to assemble a competitive team using statistics. The Oxford Murders (2008) – A student (Elijah Wood) finds out about mysterious killings in Oxford and, helped by a professor (John Hurt), reveals the math patterns used by the killer. Pi (1998) – A mathematician searches for the number that underlies all of nature. Proof (2005) – A former student (Jake Gyllenhaal) of a recently deceased, brilliant mathematician (Anthony Hopkins) finds a notebook in his office containing a proof of an important theorem, but the mathematician's daughter (Gwyneth Paltrow) claims it is hers. The ens
https://en.wikipedia.org/wiki/Convex%20geometry
In mathematics, convex geometry is the branch of geometry studying convex sets, mainly in Euclidean space. Convex sets occur naturally in many areas: computational geometry, convex analysis, discrete geometry, functional analysis, geometry of numbers, integral geometry, linear programming, probability theory, game theory, etc. Classification According to the Mathematics Subject Classification MSC2010, the mathematical discipline Convex and Discrete Geometry includes three major branches: general convexity polytopes and polyhedra discrete geometry (though only portions of the latter two are included in convex geometry). General convexity is further subdivided as follows: axiomatic and generalized convexity convex sets without dimension restrictions convex sets in topological vector spaces convex sets in 2 dimensions (including convex curves) convex sets in 3 dimensions (including convex surfaces) convex sets in n dimensions (including convex hypersurfaces) finite-dimensional Banach spaces random convex sets and integral geometry asymptotic theory of convex bodies approximation by convex sets variants of convex sets (star-shaped, (m, n)-convex, etc.) Helly-type theorems and geometric transversal theory other problems of combinatorial convexity length, area, volume mixed volumes and related topics valuations on convex bodies inequalities and extremum problems convex functions and convex programs spherical and hyperbolic convexity Historical note Convex geometry is a relatively young mathematical discipline. Although the first known contributions to convex geometry date back to antiquity and can be traced in the works of Euclid and Archimedes, it became an independent branch of mathematics at the turn of the 20th century, mainly due to the works of Hermann Brunn and Hermann Minkowski in dimensions two and three. A big part of their results was soon generalized to spaces of higher dimensions, and in 1934 T. Bonnesen and W. Fenchel gave a comprehensive survey of convex geometry in Euclidean space Rn. Further development of convex geometry in the 20th century and its relations to numerous mathematical disciplines are summarized in the Handbook of convex geometry edited by P. M. Gruber and J. M. Wills. See also List of convexity topics Notes References Expository articles on convex geometry K. Ball, An elementary introduction to modern convex geometry, in: Flavors of Geometry, pp. 1–58, Math. Sci. Res. Inst. Publ. Vol. 31, Cambridge Univ. Press, Cambridge, 1997, available online. M. Berger, Convexity, Amer. Math. Monthly, Vol. 97 (1990), 650–678. DOI: 10.2307/2324573 P. M. Gruber, Aspects of convexity and its applications, Exposition. Math., Vol. 2 (1984), 47–83. V. Klee, What is a convex set? Amer. Math. Monthly, Vol. 78 (1971), 616–631, DOI: 10.2307/2316569 Books on convex geometry T. Bonnesen, W. Fenchel, Theorie der konvexen Körper, Julius Springer, Berlin, 1934. English tran
https://en.wikipedia.org/wiki/Intrastat
Intrastat is the system for collecting information and producing statistics on the trade in goods between countries of the European Union (EU). It began operation on 1 January 1993, when it replaced customs declarations as the source of trade statistics within the EU. The requirements of Intrastat are similar in all member states of the EU, although there are important exceptions. Motivation Trade statistics are an essential part of a country's balance of payments account and are regarded as an important economic indicator of the performance of any country. Export data in particular can be used as an indicator of the state of a country's manufacturing industries as a whole. The statistics are used by government departments to help set overall trade policy and generate initiatives on new trade areas. The volume of goods moving is also assessed to allow the planning of future transport infrastructure needs. Intrastat is also being used as a tool against VAT fraud, permitting the comparison between Intrastat and VAT declarations, for example in the UK and in Italy (as suggested by rules governing Intrastat in that country). The commercial world uses the statistics to assess markets within the country (e.g. to gauge how imports are penetrating the market) and externally (e.g. to establish new markets for their goods). In addition, the statistics are passed on to European and International bodies such as Eurostat (the Statistical Office of the European Communities), the United Nations and the International Monetary Fund. Intrastat data forms an integral part of these statistics and therefore it is important that the data submitted be of a high quality. Practicalities Each member country establishes an annual threshold value below which a business is not required to file Intrastat forms. The reporting thresholds vary from country to country and, within one country, may be different for dispatches (exports) and arrivals (imports). For the eurozone countries excluding Italy, thresholds for 2013 average 316,000 and 422,000 euro for arrivals (imports) and dispatches (exports), respectively. As established by Regulation (EC) No 222/2009 of the European Parliament and of the Council of 11 March 2009, the current thresholds are designed to cover 97% of dispatches and 95% of arrivals. This regulation raised thresholds with the purpose of reducing the statistical and, hence, financial burden on small operators. According to an EU Memo of 2013, the increased thresholds reduced the number of reporting enterprises by one third and saved them 134 million euro. Italy is the only EU country to set no thresholds, so all operators (excluding only new, young entrepreneurs in a facilitated tax regime) must file Intrastat forms. For the Intrastat system, the customs authorities provide the national authorities with statistics on dispatches and arrivals of goods. Member states then transmit their data to Eurostat on a monthly basis, generally within 40 days.
https://en.wikipedia.org/wiki/Warren%20Ambrose
Warren Arthur Ambrose (October 25, 1914 – December 4, 1995) was Professor Emeritus of Mathematics at the Massachusetts Institute of Technology and at the University of Buenos Aires. He was born in Virden, Illinois in 1914. He received his bachelor of science degree in 1935, his master's in 1936 and his Ph.D. in 1939, all from the University of Illinois at Urbana–Champaign. Personal life Warren Ambrose was a food and wine connoisseur, and also a fan of jazz saxophone player, Charlie "Bird" Parker. He is noted for his work with MIT colleague, Isadore Singer, both of whom helped to shape the pure mathematics department at MIT. He retired from teaching at MIT in 1985, thereafter moving to France. Ambrose died in 1995 in Paris. He was survived by his wife, Jeannette (Grillet) Ambrose of Paris, two children from an earlier marriage, Adam Ambrose of Bisbee, AZ, and Ellen Ambrose of Laurel, MD, and four grandchildren, David and Adam Holzsager, Ari Ambrose, and Jennifer Laurent. Career Ambrose became an assistant professor at MIT in 1947, an associate professor in 1950 and full professor in 1957. He was several times between 1939 and 1959 a visiting scholar at the Institute for Advanced Study in Princeton, New Jersey. Ambrose is often considered one of the fathers of modern geometry. He is noted for making changes in the pure mathematics undergraduate curriculum at MIT to reflect recent findings in differential geometry. For example, less than ten years after André Weil presented the differential form, Ambrose was using it in his undergraduate differential geometry courses. In the 1950s, Ambrose (together with I. M. Singer) made MIT into the only center in geometry in the United States outside the University of Chicago. His influence continued through his students, in particular Hung-Hsi Wu and John Rhodes, both of the University of California, Berkeley. In the 1960s Ambrose was a visiting professor at the University of Buenos Aires, Argentina. He is noted for his opposition to the takeover of South American countries by military regimes, specifically Argentina. Political activism In the summer of 1966, while a visiting professor at the University of Buenos Aires in Argentina, Ambrose was severely beaten along with Argentinian faculty members and students by Argentinian military police. This occurred shortly after a military regime took over public universities in Argentina. Ambrose responded by bringing several of the best and brightest students from the University of Buenos Aires back to MIT with him. Ambrose is often renowned amongst Latin American intellectuals for bringing attention to right-wing dictatorships in South America. In 1967, Ambrose signed a letter declaring his intention to refuse to pay taxes in protest against the U.S. war against Vietnam, and urging other people to also take this stand. References External links 20th-century American mathematicians American tax resisters Institute for Advanced Study visiting scholars Massachuset
https://en.wikipedia.org/wiki/Split-biquaternion
In mathematics, a split-biquaternion is a hypercomplex number of the form q = w + xi + yj + zk where w, x, y, and z are split-complex numbers and i, j, and k multiply as in the quaternion group. Since each coefficient w, x, y, z spans two real dimensions, the split-biquaternion is an element of an eight-dimensional vector space. Considering that it carries a multiplication, this vector space is an algebra over the real field, or an algebra over a ring where the split-complex numbers form the ring. This algebra was introduced by William Kingdon Clifford in an 1873 article for the London Mathematical Society. It has been repeatedly noted in mathematical literature since then, variously as a deviation in terminology, an illustration of the tensor product of algebras, and as an illustration of the direct sum of algebras. The split-biquaternions have been identified in various ways by algebraists; see below. Modern definition A split-biquaternion is ring isomorphic to the Clifford algebra Cl0,3(R). This is the geometric algebra generated by three orthogonal imaginary unit basis directions {e1, e2, e3}, under the combination rule e1² = e2² = e3² = −1 with distinct generators anticommuting, giving an algebra spanned by the 8 basis elements {1, e1, e2, e3, e1e2, e2e3, e3e1, e1e2e3}, with (e1e2)² = (e2e3)² = (e3e1)² = −1 and ω² = (e1e2e3)² = +1. The sub-algebra spanned by the 4 elements {1, e1e2, e2e3, e3e1} is the division ring of Hamilton's quaternions, H. One can therefore see that Cl0,3(R) = H ⊗ D, where D is the algebra spanned by {1, ω}, the algebra of the split-complex numbers. Equivalently, Cl0,3(R) ≅ H ⊕ H. Split-biquaternion group The split-biquaternions form an associative ring as is clear from considering multiplications in its basis {1, i, j, k, ω, ωi, ωj, ωk}. When ω is adjoined to the quaternion group one obtains a 16 element group ( {1, i, j, k, −1, −i, −j, −k, ω, ωi, ωj, ωk, −ω, −ωi, −ωj, −ωk}, × ). Module Since elements of the quaternion group can be taken as a basis of the space of split-biquaternions, it may be compared to a vector space. But split-complex numbers form a ring, not a field, so vector space is not appropriate. Rather the space of split-biquaternions forms a free module. This standard term of ring theory expresses a similarity to a vector space, and this structure, used by Clifford in 1873, is an instance. Split-biquaternions form an algebra over a ring, but not a group ring. Direct sum of two quaternion rings The direct sum of the division ring of quaternions with itself is denoted H ⊕ H. The product of two elements (a ⊕ b) and (c ⊕ d) is (ac ⊕ bd) in this direct sum algebra. Proposition: The algebra of split-biquaternions is isomorphic to H ⊕ H. proof: Every split-biquaternion has an expression q = w + z ω where w and z are quaternions and ω² = +1. Now if p = u + v ω is another split-biquaternion, their product is pq = (uw + vz) + (uz + vw)ω. The isomorphism mapping from split-biquaternions to H ⊕ H is given by q = w + zω ↦ (w + z) ⊕ (w − z). In H ⊕ H, the product of these images, according to the algebra-product of H ⊕ H indicated above, is ((u + v)(w + z)) ⊕ ((u − v)(w − z)). This element is also the image of pq under the mapping into H ⊕ H. Thus the products agree, the mapping is a homomorphism; and since it is bijective, it is an isomorphism. Though split-biquat
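The direct-sum description can be checked mechanically. A small sketch (Python; the encoding of a split-biquaternion as a pair of quaternions (w, z) meaning w + zω, and the test values, are our own choices) that multiplies split-biquaternions and verifies that q = w + zω ↦ (w + z) ⊕ (w − z) respects products:

```python
def qmul(a, b):
    """Hamilton product of quaternions a = (a0, a1, a2, a3) and b, i.e. a0 + a1*i + a2*j + a3*k."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def qsub(a, b):
    return tuple(x - y for x, y in zip(a, b))

# A split-biquaternion is stored as a pair (w, z) of quaternions meaning w + z*omega,
# with omega central and omega^2 = +1.
def sbq_mul(p, q):
    u, v = p
    w, z = q
    return (qadd(qmul(u, w), qmul(v, z)), qadd(qmul(u, z), qmul(v, w)))

def to_direct_sum(q):
    """Isomorphism w + z*omega  ->  (w + z, w - z) in H (+) H."""
    w, z = q
    return (qadd(w, z), qsub(w, z))

p = ((1, 2, 0, -1), (0, 0, 3, 1))   # arbitrary integer test elements
q = ((0, 1, 1, 2), (-1, 1, 0, 2))

lhs = to_direct_sum(sbq_mul(p, q))
a, b = to_direct_sum(p)
c, d = to_direct_sum(q)
rhs = (qmul(a, c), qmul(b, d))      # componentwise product in H (+) H
print(lhs == rhs)                   # True: the map is an algebra homomorphism
```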
https://en.wikipedia.org/wiki/Quadratic%20form%20%28statistics%29
In multivariate statistics, if ε is a vector of n random variables, and Λ is an n-dimensional symmetric matrix, then the scalar quantity ε^T Λ ε is known as a quadratic form in ε. Expectation It can be shown that E[ε^T Λ ε] = tr(Λ Σ) + μ^T Λ μ, where μ and Σ are the expected value and variance-covariance matrix of ε, respectively, and tr denotes the trace of a matrix. This result only depends on the existence of μ and Σ; in particular, normality of ε is not required. A book treatment of the topic of quadratic forms in random variables is that of Mathai and Provost. Proof Since the quadratic form is a scalar quantity, ε^T Λ ε = tr(ε^T Λ ε). Next, by the cyclic property of the trace operator, E[tr(ε^T Λ ε)] = E[tr(Λ ε ε^T)]. Since the trace operator is a linear combination of the components of the matrix, it therefore follows from the linearity of the expectation operator that E[tr(Λ ε ε^T)] = tr(Λ E[ε ε^T]). A standard property of variances then tells us that this is tr(Λ(Σ + μ μ^T)). Applying the cyclic property of the trace operator again, we get E[ε^T Λ ε] = tr(Λ Σ) + μ^T Λ μ. Variance in the Gaussian case In general, the variance of a quadratic form depends greatly on the distribution of ε. However, if ε does follow a multivariate normal distribution, the variance of the quadratic form becomes particularly tractable. Assume for the moment that Λ is a symmetric matrix. Then, var[ε^T Λ ε] = 2 tr(Λ Σ Λ Σ) + 4 μ^T Λ Σ Λ μ. In fact, this can be generalized to find the covariance between two quadratic forms on the same ε (once again, Λ1 and Λ2 must both be symmetric): cov[ε^T Λ1 ε, ε^T Λ2 ε] = 2 tr(Λ1 Σ Λ2 Σ) + 4 μ^T Λ1 Σ Λ2 μ. In addition, a quadratic form such as this follows a generalized chi-squared distribution. Computing the variance in the non-symmetric case Some texts incorrectly state that the above variance or covariance results hold without requiring Λ to be symmetric. The case for general Λ can be derived by noting that ε^T Λ^T ε = ε^T Λ ε, so ε^T Λ ε = ε^T ((Λ + Λ^T)/2) ε is a quadratic form in the symmetric matrix (Λ + Λ^T)/2, so the mean and variance expressions are the same, provided Λ is replaced by (Λ + Λ^T)/2 therein. Examples of quadratic forms In the setting where one has a set of observations y and an operator matrix H, then the residual sum of squares can be written as a quadratic form in y: RSS = y^T (I − H)^T (I − H) y. For procedures where the matrix H is symmetric and idempotent, and the errors are Gaussian with covariance matrix σ²I, RSS/σ² has a chi-squared distribution with k degrees of freedom and noncentrality parameter λ, where k and λ may be found by matching the first two central moments of a noncentral chi-squared random variable to the expressions given in the first two sections. If Hy estimates E[y] with no bias, then the noncentrality λ is zero and RSS/σ² follows a central chi-squared distribution. See also Quadratic form Covariance matrix Matrix representation of conic sections References Statistical theory Quadratic forms
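The expectation identity is easy to spot-check by simulation. A minimal sketch (Python with NumPy; the particular μ, Σ, Λ and the use of normal sampling are illustrative choices, since the identity itself does not require normality):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + np.eye(3)          # a valid covariance matrix
Lam = np.array([[2.0, 1.0, 0.0],
                [1.0, 3.0, -1.0],
                [0.0, -1.0, 1.0]])   # symmetric

# Theoretical value: E[eps' Lam eps] = tr(Lam Sigma) + mu' Lam mu
theory = np.trace(Lam @ Sigma) + mu @ Lam @ mu

# Monte Carlo estimate from multivariate normal draws.
eps = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.einsum('ni,ij,nj->n', eps, Lam, eps).mean()

print(theory, mc)  # the two numbers should agree to within Monte Carlo error
```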
https://en.wikipedia.org/wiki/Hyperboloid%20structure
Hyperboloid structures are architectural structures designed using a hyperboloid of one sheet. Often these are tall structures, such as towers, where the hyperboloid geometry's structural strength is used to support an object high above the ground. Hyperboloid geometry is often used for decorative effect as well as structural economy. The first hyperboloid structures were built by Russian engineer Vladimir Shukhov (1853–1939), including the Shukhov Tower in Polibino, Dankovsky District, Lipetsk Oblast, Russia. Properties Hyperbolic structures have a negative Gaussian curvature, meaning they curve inward rather than curving outward or being straight. As doubly ruled surfaces, they can be made with a lattice of straight beams, hence are easier to build than curved surfaces that do not have a ruling and must instead be built with curved beams. Hyperboloid structures are superior in stability against outside forces compared with "straight" buildings, but have shapes often creating large amounts of unusable volume (low space efficiency). Hence they are more commonly used in purpose-driven structures, such as water towers (to support a large mass), cooling towers, and aesthetic features. A hyperbolic structure is beneficial for cooling towers. At the bottom, the widening of the tower provides a large area for installation of fill to promote thin film evaporative cooling of the circulated water. As the water first evaporates and rises, the narrowing effect helps accelerate the laminar flow, and then as it widens out, contact between the heated air and atmospheric air supports turbulent mixing. Work of Shukhov In the 1880s, Shukhov began to work on the problem of the design of roof systems to use a minimum of materials, time and labor. His calculations were most likely derived from mathematician Pafnuty Chebyshev's work on the theory of best approximations of functions. Shukhov's mathematical explorations of efficient roof structures led to his invention of a new system that was innovative both structurally and spatially. By applying his analytical skills to the doubly curved surfaces Nikolai Lobachevsky named "hyperbolic", Shukhov derived a family of equations that led to new structural and constructional systems, known as hyperboloids of revolution and hyperbolic paraboloids. The steel gridshells of the exhibition pavilions of the 1896 All-Russian Industrial and Handicrafts Exposition in Nizhny Novgorod were the first publicly prominent examples of Shukhov's new system. Two pavilions of this type were built for the Nizhni Novgorod exposition, one oval in plan and one circular. The roofs of these pavilions were doubly curved gridshells formed entirely of a lattice of straight angle-iron and flat iron bars. Shukhov himself called them azhurnaia bashnia ("lace tower", i.e., lattice tower). The patent of this system, for which Shukhov applied in 1895, was awarded in 1899. Shukhov also turned his attention to the development of an efficient and ea
https://en.wikipedia.org/wiki/Polynomial%20expansion
In mathematics, an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition. Expansion of a polynomial expression can be obtained by repeatedly replacing subexpressions that multiply two other subexpressions, at least one of which is an addition, by the equivalent sum of products, continuing until the expression becomes a sum of (repeated) products. During the expansion, simplifications such as grouping of like terms or cancellations of terms may also be applied. Instead of multiplications, the expansion steps could also involve replacing powers of a sum of terms by the equivalent expression obtained from the binomial formula; this is a shortened form of what would happen if the power were treated as a repeated multiplication, and expanded repeatedly. It is customary to reintroduce powers in the final result when terms involve products of identical symbols. Simple examples of polynomial expansions are the well known rules when used from left to right. A more general single-step expansion will introduce all products of a term of one of the sums being multiplied with a term of the other: An expansion which involves multiple nested rewrite steps is that of working out a Horner scheme to the (expanded) polynomial it defines, for instance . The opposite process of trying to write an expanded polynomial as a product is called polynomial factorization. Expansion of a polynomial written in factored form To multiply two factors, each term of the first factor must be multiplied by each term of the other factor. If both factors are binomials, the FOIL rule can be used, which stands for "First Outer Inner Last," referring to the terms that are multiplied together. For example, expanding yields Expansion of (x+y)n When expanding , a special relationship exists between the coefficients of the terms when written in order of descending powers of x and ascending powers of y. The coefficients will be the numbers in the (n + 1)th row of Pascal's triangle (since Pascal's triangle starts with row and column number of 0). For example, when expanding , the following is obtained: See also Polynomial factorization Factorization Multinomial theorem External links Discussion Review of Algebra: Expansion , University of Akron Online tools Expand page, quickmath.com Online Calculator with Symbolic Calculations, livephysics.com Polynomials de:Ausmultiplizieren
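As illustrations of the rules described above (the specific polynomials are our own examples, since the article's originals did not survive extraction), a FOIL expansion and a binomial expansion look like this:

```latex
% FOIL on a product of two binomials (First, Outer, Inner, Last):
(x+3)(x-5) = x \cdot x + x \cdot (-5) + 3 \cdot x + 3 \cdot (-5) = x^2 - 2x - 15

% Expanding (x+y)^4: the coefficients 1, 4, 6, 4, 1 are row 4 of Pascal's triangle:
(x+y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4
```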
https://en.wikipedia.org/wiki/Science%20in%20the%20Renaissance
During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, mathematics, manufacturing, anatomy and engineering. The collection of ancient scientific texts began in earnest at the start of the 15th century and continued up to the Fall of Constantinople in 1453, and the invention of printing allowed a faster propagation of new ideas. Nevertheless, some have seen the Renaissance, at least in its initial period, as one of scientific backwardness. Historians like George Sarton and Lynn Thorndike criticized how the Renaissance affected science, arguing that progress was slowed for some amount of time. Humanists favored human-centered subjects like politics and history over study of natural philosophy or applied mathematics. More recently, however, scholars have acknowledged the positive influence of the Renaissance on mathematics and science, pointing to factors like the rediscovery of lost or obscure texts and the increased emphasis on the study of language and the correct reading of texts. Marie Boas Hall coined the term Scientific Renaissance to designate the early phase of the Scientific Revolution, 1450–1630. More recently, Peter Dear has argued for a two-phase model of early modern science: a Scientific Renaissance of the 15th and 16th centuries, focused on the restoration of the natural knowledge of the ancients; and a Scientific Revolution of the 17th century, when scientists shifted from recovery to innovation. Context During and after the Renaissance of the 12th century, Europe experienced an intellectual revitalization, especially with regard to the investigation of the natural world. In the 14th century, however, a series of events that would come to be known as the Crisis of the Late Middle Ages was underway. When the Black Death came, it wiped out so many lives it affected the entire system. It brought a sudden end to the previous period of massive scientific change. The plague killed 25–50% of the people in Europe, especially in the crowded conditions of the towns, where the heart of innovations lay. Recurrences of the plague and other disasters caused a continuing decline of population for a century. The Renaissance The 14th century saw the beginning of the cultural movement of the Renaissance. By the early 15th century, an international search for ancient manuscripts was underway and would continue unabated until the Fall of Constantinople in 1453, when many Byzantine scholars had to seek refuge in the West, particularly Italy. Likewise, the invention of the printing press was to have great effect on European society: the facilitated dissemination of the printed word democratized learning and allowed a faster propagation of new ideas. Initially, there were no new developments in physics or astronomy, and the reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Renaissance philosophy lost much of its rigor as the rules of logic and deduction were s
https://en.wikipedia.org/wiki/Sidney%20Siegel
Sidney Siegel (4 January 1916 in New York City – 29 November 1961) was an American psychologist who became especially well known for his work in popularizing non-parametric statistics for use in the behavioral sciences. He was a co-developer of the statistical test known as the Siegel–Tukey test. In 1951 Siegel completed a B.A. in vocational arts at San Jose State College (now San Jose State University), then in 1953 a Ph.D. in Psychology at Stanford University. Except for a year spent at the Center for Advanced Study in the Behavioral Sciences at Stanford, he thereafter taught at Pennsylvania State University, until his death in November 1961 of a coronary thrombosis. His parents, Jacob and Rebecca Siegel, were Jewish immigrants from Romania. See also Siegel–Tukey test. Notes References Nonparametric Statistics for the Behavioral Sciences, 1956 Bargaining and Group Decision Making (co-authored with Lawrence E. Fouraker), winning the 1959 Monograph Prize of the American Association for the Advancement of Science Bargaining Behaviour (co-authored with Lawrence E. Fouraker). A nonparametric sum of ranks procedure for relative spread in unpaired samples, in Journal of the American Statistical Association, 1960 (coauthored with John Wilder Tukey) Choice, Strategy, and Utility (completed posthumously by Alberta E. Siegel and Julia McMichael Andrews) Bargaining, Information and the Use of Threat (co-authored with Donald L. Harnett), 1961 External links In Memory of Alberta and Sidney Siegel 20th-century American psychologists American statisticians 1916 births 1961 deaths Scientists from New York City American people of Romanian-Jewish descent Center for Advanced Study in the Behavioral Sciences fellows Mathematicians from New York (state) Stanford University alumni Fellows of the American Physical Society Quantitative psychologists San Jose State University alumni Pennsylvania State University alumni
https://en.wikipedia.org/wiki/Additive%20identity
In mathematics, the additive identity of a set that is equipped with the operation of addition is an element which, when added to any element x in the set, yields x. One of the most familiar additive identities is the number 0 from elementary mathematics, but additive identities occur in other mathematical structures where addition is defined, such as in groups and rings. Elementary examples The additive identity familiar from elementary mathematics is zero, denoted 0. For example, in the natural numbers N (if 0 is included), the integers Z, the rational numbers Q, the real numbers R, and the complex numbers C, the additive identity is 0. This says that for a number n belonging to any of these sets, n + 0 = n = 0 + n. Formal definition Let G be a group that is closed under the operation of addition, denoted +. An additive identity for G, denoted e, is an element in G such that for any element g in G, e + g = g = g + e. Further examples In a group, the additive identity is the identity element of the group, is often denoted 0, and is unique (see below for proof). A ring or field is a group under the operation of addition and thus these also have a unique additive identity 0. This is defined to be different from the multiplicative identity 1 if the ring (or field) has more than one element. If the additive identity and the multiplicative identity are the same, then the ring is trivial (proved below). In the ring of m-by-n matrices over a ring R, the additive identity is the zero matrix, denoted O or 0, and is the m-by-n matrix whose entries consist entirely of the identity element 0 in R. For example, in the 2×2 matrices over the integers the additive identity is the 2×2 zero matrix. In the quaternions, 0 is the additive identity. In the ring of functions from R to R, the function mapping every number to 0 is the additive identity. In the additive group of vectors in R^n, the origin or zero vector is the additive identity. Properties The additive identity is unique in a group Let G be a group and let 0 and 0' in G both denote additive identities, so for any g in G, 0 + g = g = g + 0 and 0' + g = g = g + 0'. It then follows from the above that 0' = 0' + 0 = 0. The additive identity annihilates ring elements In a system S with a multiplication operation that distributes over addition, the additive identity is a multiplicative absorbing element, meaning that for any s in S, s · 0 = 0. This follows because: s · 0 = s · (0 + 0) = s · 0 + s · 0, and adding −(s · 0) to both sides gives s · 0 = 0. The additive and multiplicative identities are different in a non-trivial ring Let R be a ring and suppose that the additive identity 0 and the multiplicative identity 1 are equal, i.e. 0 = 1. Let r be any element of R. Then r = r · 1 = r · 0 = 0, proving that R is trivial, i.e. R = {0}. The contrapositive, that if R is non-trivial then 0 is not equal to 1, is therefore shown. See also 0 (number) Additive inverse Identity element Multiplicative identity References Bibliography David S. Dummit, Richard M. Foote, Abstract Algebra, Wiley (3rd ed.): 2003, . External links Abstract algebra Elementary algebra Group theory Ring theory 0 (number)
https://en.wikipedia.org/wiki/Morgan%20Prize
Distinguish from the De Morgan Medal awarded by the London Mathematical Society. The Morgan Prize (full name Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student) is an annual award given to an undergraduate student in the US, Canada, or Mexico who demonstrates superior mathematics research. The $1,200 award, endowed by Mrs. Frank Morgan of Allentown, Pennsylvania, was founded in 1995. The award is made jointly by the American Mathematical Society, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics. The Morgan Prize has been described as the highest honor given to an undergraduate in mathematics. Previous winners 1995 Winner: Kannan Soundararajan (Analytic Number Theory, University of Michigan) Honorable mention: Kiran Kedlaya (Harvard University) 1996 Winner: Manjul Bhargava (Algebra, Harvard University) Honorable mention: Lenhard Ng (Harvard University) 1997 Winner: Jade Vinson (Analysis and Geometry, Washington University in St. Louis) Honorable mention: Vikaas S. Sohal (Harvard University) 1998 Winner: Daniel Biss (Combinatorial Group Theory and Topology, Harvard University) Honorable mention: Aaron F. Archer (Harvey Mudd College) 1999 Winner: Sean McLaughlin (Proof of the Dodecahedral Conjecture, University of Michigan) Honorable mention: Samit Dasgupta (Harvard University) 2000 Winner: Jacob Lurie (Lie Algebras, Harvard University) Honorable mention: Wai Ling Yee (University of Waterloo) 2001 Winner: Ciprian Manolescu (Floer Homology, Harvard University) Honorable mention: Michael Levin (Massachusetts Institute of Technology) 2002 Winner: Joshua Greene (Proof of the Kneser conjecture, Harvey Mudd College) Honorable mention: None 2003 Winner: Melanie Wood (Belyi-extending maps and P-orderings, Duke University) Honorable mention: Karen Yeats (University of Waterloo) 2004 Winner: Reid W. Barton (Packing Densities of Patterns, Massachusetts Institute of Technology) Honorable mention: Po-Shen Loh (California Institute of Technology) 2005 Winner: Jacob Fox (Ramsey theory and graph theory, Massachusetts Institute of Technology) Honorable mention: None 2007 Winner: Daniel Kane (Number Theory, Massachusetts Institute of Technology) Honorable mention: None 2008 Winner: Nathan Kaplan (Algebraic number theory, Princeton University) Honorable mention: None 2009 Winner: Aaron Pixton (Algebraic topology and number theory, Princeton University) Honorable mention: Andrei Negut (Algebraic cobordism theory and dynamical systems, Princeton University) 2010 Winner: Scott Duke Kominers (Number theory, computational geometry, and mathematical economics, Harvard University) Honorable mention: Maria Monks (Combinatorics and number theory, Massachusetts Institute of Technology) 2011 Winner: Maria Monks (Combinatorics and number theory, Massachusetts Institute of Technology) Honorable mention: Michael Viscardi (Algebraic geometry, Harvard University), Yufei Zhao (Combinatorics
https://en.wikipedia.org/wiki/Correlation%20integral
In chaos theory, the correlation integral is the mean probability that the states at two different times are close: C(ε) = lim_{N→∞} (1/N²) Σ_{i≠j} Θ(ε − ||x(i) − x(j)||), where N is the number of considered states x(i), ε is a threshold distance, ||·|| a norm (e.g. Euclidean norm) and Θ the Heaviside step function. If only a time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem): x(i) = (u(i), u(i+τ), ..., u(i+(m−1)τ)), where u(i) is the time series, m the embedding dimension and τ the time delay. The correlation integral is used to estimate the correlation dimension. An estimator of the correlation integral is the correlation sum: C(ε, N) = (2/(N(N−1))) Σ_{i<j} Θ(ε − ||x(i) − x(j)||). See also Recurrence quantification analysis References Chaos theory
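A compact sketch of the correlation sum on a delay-embedded time series (Python with NumPy; the logistic-map example series, the embedding parameters and the 2/(N(N−1)) normalisation are our illustrative choices):

```python
import numpy as np

def delay_embed(u, m, tau):
    """Time-delay embedding: x(i) = (u(i), u(i+tau), ..., u(i+(m-1)*tau))."""
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

def correlation_sum(x, eps):
    """C(eps) = 2/(N(N-1)) * number of pairs i < j with ||x_i - x_j|| < eps."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.mean(d[iu] < eps)

# Example series: the chaotic logistic map u(i+1) = 4 u(i) (1 - u(i)).
u = np.empty(1000)
u[0] = 0.4
for i in range(1, len(u)):
    u[i] = 4.0 * u[i - 1] * (1.0 - u[i - 1])

x = delay_embed(u, m=2, tau=1)
for eps in (0.05, 0.1, 0.2):
    print(eps, correlation_sum(x, eps))
# The correlation dimension is then estimated from the slope of log C(eps) versus log eps.
```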
https://en.wikipedia.org/wiki/Simon%20Rosenbaum%20%28statistician%29
Simon Rosenbaum B.Sc. (born 1877) was a British academic active in the early twentieth century, whose major field of study was statistics. He is perhaps best known today for editing a collection of essays entitled Against Home Rule: The Case for Union, which was first published in 1912 at the height of the crisis over Irish Home Rule. Contributors to this work included Arthur Balfour, Austen Chamberlain, and Leo Amery. Partial bibliography Books Against Home Rule: The Case for Union. Kennikat Press, 1912. Journal articles "A Contribution to the Study of the Vital and Other Statistics of the Jews in the United Kingdom" in Journal of the Royal Statistical Society, Vol. 68, No. 3. (September, 1905), pp. 526–562. "Food Taxation in the United Kingdom, France, Germany, and the United States" in Journal of the Royal Statistical Society, Vol. 71, No. 2. (June, 1908), pp. 319–365. "The General Election of January, 1910, and the Bearing of the Results on Some Problems of Representation" in Journal of the Royal Statistical Society, Vol. 73, No. 5. (May, 1910), pp. 473–528. "The Trade of the British Empire" in Journal of the Royal Statistical Society, Vol. 76, No. 8. (July, 1913), pp. 739–774. "The Effects of the War on the Overseas Trade of the United Kingdom" in Journal of the Royal Statistical Society, Vol. 78, No. 4. (July, 1915), pp. 501–554. External links 1877 births Year of death missing British Jews British statisticians
https://en.wikipedia.org/wiki/Markov%20additive%20process
In applied probability, a Markov additive process (MAP) is a bivariate Markov process whose future evolution depends only on one of the two variables. Definition Finite or countable state space for J(t) The process (X(t), J(t)) is a Markov additive process with continuous time parameter t if (X(t), J(t)) is a Markov process and the conditional distribution of (X(t + s) − X(t), J(t + s)) given (X(t), J(t)) depends only on J(t). The state space of the process is R × S, where X(t) takes real values and J(t) takes values in some countable set S. General state space for J(t) For the case where J(t) takes a more general state space, the evolution of X(t) is governed by J(t) in the sense that for any bounded functions f and g we require E[f(X(t + s) − X(t)) g(J(t + s)) | ℱ_t] = E_{(J(t), 0)}[f(X(s)) g(J(s))]. Example A fluid queue is a Markov additive process where J(t) is a continuous-time Markov chain. Applications Çinlar uses the unique structure of the MAP to prove that, given a gamma process with a shape parameter that is a function of Brownian motion, the resulting lifetime is distributed according to the Weibull distribution. Kharoufeh presents a compact transform expression for the failure distribution for wear processes of a component degrading according to a Markovian environment inducing state-dependent continuous linear wear, by using the properties of a MAP and assuming the wear process to be temporally homogeneous and the environmental process to have a finite state space. Notes Markov processes
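A small simulation sketch in the spirit of the fluid-queue example: J(t) is a two-state continuous-time Markov chain and X(t) accrues a linear drift that depends on the current state of J. The rates and drifts are illustrative assumptions, and the reflecting boundary at zero that an actual queue would have is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions): exponential holding rates of the two
# environment states and the linear drift of X while in each state.
rates  = {0: 1.0, 1: 2.0}    # rate of leaving state 0 / state 1
drifts = {0: +1.0, 1: -0.5}  # dX/dt while J(t) is in that state

def simulate_map(t_end):
    """Simulate (X(t), J(t)) on [0, t_end]; X moves linearly between jumps of J."""
    t, x, j = 0.0, 0.0, 0
    path = [(t, x, j)]
    while t < t_end:
        hold = rng.exponential(1.0 / rates[j])   # sojourn time in state j
        hold = min(hold, t_end - t)
        x += drifts[j] * hold                    # additive part, governed by J
        t += hold
        j = 1 - j                                # two-state chain: flip state
        path.append((t, x, j))
    return path

for t, x, j in simulate_map(10.0)[:8]:
    print(f"t={t:6.3f}  X={x:7.3f}  J={j}")
```

Conditional on the environment state J(t), the increment of X over the next sojourn does not depend on the past of X, which is exactly the additive-part property in the definition above.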
https://en.wikipedia.org/wiki/Gamma%20process
Also known as the (Moran-)Gamma Process, the gamma process is a random process studied in mathematics, statistics, probability theory, and stochastics. The gamma process is a stochastic or random process consisting of independently distributed gamma distributions where represents the number of event occurrences from time 0 to time . The gamma distribution has scale parameter and shape parameter , often written as . Both and must be greater than 0. The gamma process is often written as where represents the time from 0. The process is a pure-jump increasing Lévy process with intensity measure for all positive . Thus jumps whose size lies in the interval occur as a Poisson process with intensity The parameter controls the rate of jump arrivals and the scaling parameter inversely controls the jump size. It is assumed that the process starts from a value 0 at t = 0 meaning .   The gamma process is sometimes also parameterised in terms of the mean () and variance () of the increase per unit time, which is equivalent to and . Plain English definition The gamma process is a process which measures the number of occurrences of independent gamma-distributed variables over a span of time. This image below displays two different gamma processes on from time 0 until time 4. The red process has more occurrences in the timeframe compared to the blue process because its shape parameter is larger than the blue shape parameter. Properties We use the Gamma function in these properties, so the reader should distinguish between (the Gamma function) and (the Gamma process). We will sometimes abbreviate the process as . Some basic properties of the gamma process are: Marginal distribution The marginal distribution of a gamma process at time is a gamma distribution with mean and variance That is, the probability distribution of the random variable is given by the density Scaling Multiplication of a gamma process by a scalar constant is again a gamma process with different mean increase rate. Adding independent processes The sum of two independent gamma processes is again a gamma process. Moments The moment function helps mathematicians find expected values, variances, skewness, and kurtosis. where is the Gamma function. Moment generating function The moment generating function is the expected value of where X is the random variable. Correlation Correlation displays the statistical relationship between any two gamma processes. , for any gamma process The gamma process is used as the distribution for random time change in the variance gamma process. Literature Lévy Processes and Stochastic Calculus by David Applebaum, CUP 2004, . References Lévy processes
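A short simulation sketch on a time grid, assuming the shape/rate parameterisation in which the increment over an interval of length dt is Gamma-distributed with shape γ·dt and rate λ (mean increase γ/λ and variance γ/λ² per unit time); the values of γ and λ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_process(t_end, n_steps, gamma_rate, lam):
    """Simulate a gamma process: independent Gamma(gamma_rate*dt, scale=1/lam) increments."""
    dt = t_end / n_steps
    increments = rng.gamma(gamma_rate * dt, 1.0 / lam, size=n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])   # process starts at 0 at t = 0

t_end, n_steps = 4.0, 4000
gamma_rate, lam = 3.0, 2.0          # illustrative parameter values

path = gamma_process(t_end, n_steps, gamma_rate, lam)

# Sanity check against the marginal distribution at t = 4:
# mean = gamma*t/lam, variance = gamma*t/lam**2.
print("X(4) =", path[-1])
print("theoretical mean:", gamma_rate * t_end / lam,
      "theoretical variance:", gamma_rate * t_end / lam ** 2)
```

Repeating the simulation many times and averaging the terminal values should reproduce the stated mean and variance, which is a quick check of the parameterisation.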
https://en.wikipedia.org/wiki/Islam%20in%20the%20Dominican%20Republic
Islam in the Dominican Republic is a minority religion. Accurate statistics of religious affiliation are difficult to calculate and there is a wide variation concerning the actual numerical amount. Although the majority of the population is Christian, Muslims have formed local organizations such as the Círculo Islámico de República Dominicana (The Islamic Circle of Dominican Republic) and the Islamic Center of the Dominican Republic (located in Miami). Currently, the Círculo Islámico estimates that Muslims number over 4,000 (most recent statistics), including of a good number of local converts. Most recently, there has been another organization, led by native born Muslim converts, the Entidad Islámica Dominicana or EID (Dominican Islamic Entity). The Círculo Islámico established the first mosque in the Dominican Republic in the center of Santo Domingo, about a five-minute walk from the Palacio de Policía Nacional and the Universidad Iberoamericana (UNIBE) where Muslims from around the city would have easy access to reach it. They made an agreement with the owner to purchase the land and the building for an amount of 2.85 million pesos. The mosque is open daily for the five prayers (salat) and offers classes on Islamic studies for ladies and children on weekends. They also provide free medical consultation along with a free pharmacy, Consulta Al-Foutory, which is located in a separate building at the back of the mosque. The Al-Noor Mosque is largely believed to be the only active mosque in the country and receives the bulk of the Muslim population for the two Eids, Ramadan, Salat al Jummah, and the five daily prayers. However, there is another mosque in Los Llanos neighborhood of San Pedro de Macorix, Dominican Republic. This mosque is led by a converted Dominican Imam. Los Llanos is roughly a 30 minutes drive from the Al-Noor Mosque. The Musalla Al-Hidayya provides Jummah services in the city of Santiago de los Caballeros, and the Musalah Al-Nabawi( address: Av. Espana Plaza Estrella No. 411, Bavaro Punta Cana ) serves local and visiting Muslims in the tourist sector of Bávaro-Punta Cana in the East of the country. History Slavery Like other countries in the Caribbean and Latin America, the history of Islam in the Dominican Republic began with the importation of African slaves, which first arrived to the island of Hispaniola (Haiti and the Dominican Republic), beginning in 1502. These people arrived with a rich and ancient culture, although brutal repression and forced conversions gradually diluted their original cultural identity and religions. The first recorded instances of resistance were in 1503, when Nicolás de Ovando, Hispaniola's first royal governor, wrote to Isabella requesting that she prevent further shipments to the colony of enslaved Black ladinos, or persons possessing knowledge of Spanish or Portuguese languages and cultures, but who also often had connections to either Senegambia, Islam, or both. De Ovando had arrived earlier
https://en.wikipedia.org/wiki/James%20Loudon
James Loudon (May 24, 1841 – December 29, 1916) was a Canadian professor of mathematics and physics and President of the University of Toronto from 1892 to 1906. He was the first Canadian-born professor at the University of Toronto. Biography Loudon was educated at the Toronto Grammar School, Upper Canada College, and the University of Toronto, where he received a B.A. in 1862 and an M.A. in 1864. Initially a tutor in classics, he soon moved to mathematics, eventually becoming the professor of mathematics and physics at University College in 1875, succeeding his teacher John Bradford Cherriman. In 1887 he became professor of physics only, and became president of the University in 1892. He visited the United Kingdom to attend the 450th jubilee of the University of Glasgow in June 1901, and received an honorary doctorate (LL.D) from the university. References External links James Loudon archival papers held at the University of Toronto Archives and Records Management Services 1841 births 1916 deaths Canadian physicists Fellows of the Royal Society of Canada Presidents of the University of Toronto
https://en.wikipedia.org/wiki/Higman%27s%20lemma
In mathematics, Higman's lemma states that the set of finite sequences over a finite alphabet, as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if w₁, w₂, … is an infinite sequence of words over some fixed finite alphabet, then there exist indices i < j such that wᵢ can be obtained from wⱼ by deleting some (possibly none) symbols. More generally this remains true when the alphabet is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation allows the replacement of symbols by earlier symbols in the well-quasi-ordering of labels. This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952. Reverse-mathematical calibration Higman's lemma has been reverse mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to a particular subsystem over the base theory RCA₀. References Wellfoundedness Order theory Lemmas
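A small sketch of the ordering involved: a subsequence test over a two-letter alphabet, applied to a finite list of words to locate pairs i < j with wᵢ embeddable in wⱼ. The words are illustrative; the lemma guarantees that such a pair must appear in any infinite sequence.

```python
from itertools import combinations

def is_subsequence(u, w):
    """True if u can be obtained from w by deleting zero or more symbols."""
    it = iter(w)
    return all(ch in it for ch in u)   # 'ch in it' consumes the iterator up to ch

# An arbitrary finite prefix of a hypothetical infinite sequence of words over {a, b}.
words = ["bba", "ab", "aab", "ba", "abba", "babab"]

pairs = [(i, j) for i, j in combinations(range(len(words)), 2)
         if is_subsequence(words[i], words[j])]
print(pairs)   # includes e.g. (1, 4): "ab" embeds into "abba"
```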
https://en.wikipedia.org/wiki/224%20%28number%29
224 (two hundred [and] twenty-four) is the natural number following 223 and preceding 225. In mathematics 224 is a practical number, and a sum of two positive cubes . It is also , making it one of the smallest numbers to be the sum of distinct positive cubes in more than one way. 224 is the smallest k with λ(k) = 24, where λ(k) is the Carmichael function. The mathematician and philosopher Alex Bellos suggested in 2014 that a candidate for the lowest uninteresting number would be 224 because it was, at the time, "the lowest number not to have its own page on [the English-language version of] Wikipedia". In other areas In the SHA-2 family of six cryptographic hash functions, the weakest is SHA-224, named because it produces 224-bit hash values. It was defined in this way so that the number of bits of security it provides (half of its output length, 112 bits) would match the key length of two-key Triple DES. The ancient Phoenician shekel was a standardized measure of silver, equal to 224 grains, although other forms of the shekel employed in other ancient cultures (including the Babylonians and Hebrews) had different measures. Likely not coincidentally, as far as ancient Burma and Thailand, silver was measured in a unit called a tikal, equal to 224 grains. See also 224 (disambiguation) References Integers
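Two of the claims above are easy to verify with a brute-force sketch: 224 = 2³ + 6³, and the Carmichael function satisfies λ(224) = 24 with no smaller k having λ(k) = 24. The direct search below is slow in general but fine at this size.

```python
from math import gcd

def carmichael(k):
    """Smallest m >= 1 with a**m == 1 (mod k) for every a coprime to k (brute force)."""
    units = [a for a in range(1, k) if gcd(a, k) == 1]
    m = 1
    while not all(pow(a, m, k) == 1 for a in units):
        m += 1
    return m

print(2 ** 3 + 6 ** 3)                                   # 224 as a sum of two positive cubes
print(carmichael(224))                                    # 24
print(all(carmichael(k) != 24 for k in range(1, 224)))    # 224 is the smallest such k
```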
https://en.wikipedia.org/wiki/Jack%20Baskin%20School%20of%20Engineering
The Baskin School of Engineering, known simply as Baskin Engineering, is the school of engineering at the University of California, Santa Cruz. It consists of six departments: Applied Mathematics, Biomolecular Engineering, Computational Media, Computer Science and Engineering, Electrical and Computer Engineering, and Statistics. The school was formed in 1997 and endowed with a multimillion-dollar gift from retired local engineer and developer Jack Baskin. Although it is a relatively young engineering school, it is already known in the Silicon Valley region and beyond for producing prominent tech innovators, including the founders of companies Pure Storage, Cloudflare, Concur Technologies, Five3 Genomics, and a host of other startups. It is a leader in the field of games and playable media, and was the first school in the country to offer a graduate degree in Serious Games. The school is also renowned for its research in genomics and bioinformatics, having played a critical role in the Human Genome Project. Researchers at UC Santa Cruz were responsible for creating the UCSC Genome Browser, which continues to be an important open-source tool for researchers in genomics. In 2022, the Baskin School continued this work finishing first truly complete sequence of the human genome, covering each chromosome from end to end with no gaps and unprecedented accuracy, is now accessible through the UCSC Genome Browser. Degrees offered The Baskin School of Engineering offers degrees in the following areas: In addition to these degree programs, the Baskin School also offers emphases in Computational Media, Human Language Media and Modeling, Robotics and Control (graduate minor), Scientific Computing, and Statistics Research Research areas Because of its proximity to Silicon Valley, the Baskin School of Engineering has strong ties to technology corporations and start-ups, and tends to focus its research on innovations in big data, cyber-physical systems, genomics, and computational media. Its current research areas include: Algorithms, logic and complexity Artificial intelligence and machine learning Bayesian statistics Bioinformatics and genomics Communications, signals, and digital image processing Computational biology Computer hardware: architecture, chip design, FPGAs, and electronic design automation Computer games Computer security and privacy Computer vision, visualization, and graphics Cyber-physical systems Database systems Data science Distributed systems Electronic circuit Human–computer interaction Mathematical modeling and numerical analysis Nanoscience and nanotechnology Natural language processing Networks Photonics and electronic devices Power systems and energy engineering Programming languages Real-time systems Remote sensing and environmental technology Robotics and control Systems biology Software engineering Storage systems Vaccines and antiviral therapeutics Role in sequencing the human genome In May
https://en.wikipedia.org/wiki/Limitation%20of%20size
In the philosophy of mathematics, specifically the philosophical foundations of set theory, limitation of size is a concept developed by Philip Jourdain and/or Georg Cantor to avoid Cantor's paradox. It identifies certain "inconsistent multiplicities", in Cantor's terminology, that cannot be sets because they are "too large". In modern terminology these are called proper classes. Use The axiom of limitation of size is an axiom in some versions of von Neumann–Bernays–Gödel set theory or Morse–Kelley set theory. This axiom says that any class that is not "too large" is a set, and a set cannot be "too large". "Too large" is defined as being large enough that the class of all sets can be mapped one-to-one into it. References Philosophy of mathematics History of mathematics Basic concepts in infinite set theory
https://en.wikipedia.org/wiki/Overdispersion
In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model. A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models. Examples Poisson Overdispersion is often encountered when fitting very simple parametric models, such as those based on the Poisson distribution. The Poisson distribution has one free parameter and does not allow for the variance to be adjusted independently of the mean. The choice of a distribution from the Poisson family is often dictated by the nature of the empirical data. For example, Poisson regression analysis is commonly used to model count data. If overdispersion is a feature, an alternative model with additional free parameters may provide a better fit. In the case of count data, a Poisson mixture model like the negative binomial distribution can be proposed instead, in which the mean of the Poisson distribution can itself be thought of as a random variable drawn – in this case – from the gamma distribution thereby introducing an additional free parameter (note the resulting negative binomial distribution is completely characterized by two parameters). Binomial As a more concrete example, it has been observed that the number of boys born to families does not conform faithfully to a binomial distribution as might be expected. Instead, the sex ratios of families seem to skew toward either boys or girls (see, for example the Trivers–Willard hypothesis for one possible explanation) i.e. there are more all-boy families, more all-girl families and not enough families close to the population 51:49 boy-to-girl mean ratio than expected from a binomial distribution, and the resulting empirical variance is larger than specified by a binomial model. In this case, the beta-binomial model distribution is a popular and analytically tractable alternative model to the binomial distribution since it provides a better fit to the observed data. To capture the heterogeneity of the families, one can think of the probability parameter of the binomial model (say, probability of being a boy) is itself a rando
https://en.wikipedia.org/wiki/Moscow%20Mathematical%20Journal
The Moscow Mathematical Journal (MMJ) is a mathematics journal published quarterly by the Independent University of Moscow and the HSE Faculty of Mathematics and distributed by the American Mathematical Society. The journal published its first issue in 2001. Its editors-in-chief are Yulij Ilyashenko (Independent University of Moscow and Cornell University), Michael Tsfasman (Independent University of Moscow and Aix-Marseille University), and Sabir Gusein-Zade (Moscow State University and the Independent University of Moscow). External links Academic journals established in 2001 Mathematics journals Higher School of Economics academic journals Quarterly journals English-language journals
https://en.wikipedia.org/wiki/Relevance%20vector%20machine
In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. The RVM has an identical functional form to the support vector machine, but provides probabilistic classification. It is actually equivalent to a Gaussian process model with covariance function: where is the kernel function (usually Gaussian), are the variances of the prior on the weight vector , and are the input vectors of the training set. Compared to that of support vector machines (SVM), the Bayesian formulation of the RVM avoids the set of free parameters of the SVM (that usually require cross-validation-based post-optimizations). However RVMs use an expectation maximization (EM)-like learning method and are therefore at risk of local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem). The relevance vector machine was patented in the United States by Microsoft (patent expired September 4, 2019). See also Kernel trick Platt scaling: turns an SVM into a probability model References Software dlib C++ Library The Kernel-Machine Library rvmbinary: R package for binary classification scikit-rvm fast-scikit-rvm, rvm tutorial External links Tipping's webpage on Sparse Bayesian Models and the RVM A Tutorial on RVM by Tristan Fletcher Applied tutorial on RVM Comparison of RVM and SVM Classification algorithms Kernel methods for machine learning Nonparametric Bayesian statistics
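A numpy sketch of the equivalent Gaussian-process covariance described above, written as a weighted sum of kernel evaluations at the training inputs. The Gaussian base kernel, its length scale, and the fixed precisions alpha are illustrative assumptions; in an actual RVM the alpha values are learned by evidence maximisation, which is not shown here.

```python
import numpy as np

def gaussian_kernel(a, b, length_scale=1.0):
    """phi(a, b) = exp(-||a - b||^2 / (2 l^2))  -- assumed Gaussian base kernel."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * length_scale ** 2))

def rvm_covariance(x1, x2, X_train, alpha):
    """Equivalent GP covariance: k(x1, x2) = sum_j (1/alpha_j) phi(x1, x_j) phi(x2, x_j)."""
    return sum(
        (1.0 / a_j) * gaussian_kernel(x1, x_j) * gaussian_kernel(x2, x_j)
        for x_j, a_j in zip(X_train, alpha)
    )

# Illustrative training inputs and precisions. During training most alpha_j are
# driven to infinity, pruning their terms; the surviving inputs are the
# "relevance vectors" that give the method its name.
X_train = np.array([[0.0], [1.0], [2.5]])
alpha = np.array([1.0, 50.0, 2.0])
print(rvm_covariance(np.array([0.2]), np.array([0.3]), X_train, alpha))
```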
https://en.wikipedia.org/wiki/Weeks%20manifold
In mathematics, the Weeks manifold, sometimes called the Fomenko–Matveev–Weeks manifold, is a closed hyperbolic 3-manifold obtained by (5, 2) and (5, 1) Dehn surgeries on the Whitehead link. It has volume approximately equal to 0.942707…, and Gabai, Meyerhoff, and Milley showed that it has the smallest volume of any closed orientable hyperbolic 3-manifold. The manifold was independently discovered by Jeffrey Weeks, as well as by Sergei Matveev and Anatoly Fomenko. Volume Since the Weeks manifold is an arithmetic hyperbolic 3-manifold, its volume can be computed using its arithmetic data and a formula due to Armand Borel, expressed in terms of the Dedekind zeta function ζ_k of the number field k generated by a root θ of the cubic θ³ − θ + 1 = 0. Alternatively, the volume can be expressed in terms of the polylogarithm and the absolute value of the complex root (with positive imaginary part) of the same cubic. Related manifolds The cusped hyperbolic 3-manifold obtained by (5, 1) Dehn surgery on the Whitehead link is the so-called sibling manifold, or sister, of the figure-eight knot complement. The figure-eight knot's complement and its sibling have the smallest volume of any orientable, cusped hyperbolic 3-manifold. Thus the Weeks manifold can be obtained by hyperbolic Dehn surgery on one of the two smallest orientable cusped hyperbolic 3-manifolds. See also Meyerhoff manifold – second smallest volume References 3-manifolds Hyperbolic geometry
https://en.wikipedia.org/wiki/Lazar%20Lyusternik
Lazar Aronovich Lyusternik (also Lusternik, Lusternick, Ljusternik; ; 31 December 1899 – 22 July 1981) was a Soviet mathematician. He is famous for his work in topology and differential geometry, to which he applied the variational principle. Using the theory he introduced, together with Lev Schnirelmann, he proved the theorem of the three geodesics, a conjecture by Henri Poincaré that every convex body in 3-dimensions has at least three simple closed geodesics. The ellipsoid with distinct but nearly equal axis is the critical case with exactly three closed geodesics. The Lusternik–Schnirelmann theory, as it is called now, is based on the previous work by Poincaré, David Birkhoff, and Marston Morse. It has led to numerous advances in differential geometry and topology. For this work Lyusternik received the Stalin Prize in 1946. In addition to serving as a professor of mathematics at Moscow State University, Lyusternik also worked at the Steklov Mathematical Institute (RAS) from 1934 to 1948 and the Lebedev Institute of Precise Mechanics and Computer Engineering (IPMCE) from 1948 to 1955. He was a student of Nikolai Luzin. In 1930 he became one of the initiators of the Egorov affair and then one of the participants in the notorious political persecution of his teacher Nikolai Luzin known as the Luzin affair. See also Lusternik–Schnirelmann category Lyusternik's generalization of the Brunn–Minkowski theorem References Pavel Alexandrov et al., LAZAR' ARONOVICH LYUSTERNIK (on the occasion of his 60th birthday), Russ. Math. Surv. 15 (1960), 153–168. Pavel Alexandrov, In memory of Lazar Aronovich Lyusternik, Russ. Math. Surv. 37 (1982), 145-147 External links 1899 births 1981 deaths 20th-century Polish Jews 20th-century Polish mathematicians 20th-century Russian mathematicians People from Zduńska Wola People from Kalisz Governorate Academic staff of Moscow State University Corresponding Members of the USSR Academy of Sciences Moscow State University alumni Recipients of the Order of the Badge of Honour Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Recipients of the Stalin Prize Differential geometers Topologists Soviet mathematicians Burials at Kuntsevo Cemetery
https://en.wikipedia.org/wiki/Islam%20in%20Belize
Islam is one of the smallest minority faiths in Belize, which is a predominantly Christian country. The statistics for Islam in Belize estimate a total Muslim population of 577, representing 0.2 percent of the total population. There is an Islamic Mission of Belize (IMB) headquartered in Belize City. The Ahmadiyya Muslim Jamaat has also had a presence in the country since 2013, with a membership of about 200 from all over Belize and three mosques: Masjid Noor in Belize City, situated at 1.5 Miles George Price Highway, and mosques in Belmopan and Orange Walk. The Muslim Community Primary School (MCPS) was recognised by the government in 1978 and offers Islamic as well as elementary level academic courses to Muslim and non-Muslim children. Mosques Al-Falah Mosque References Belize
https://en.wikipedia.org/wiki/The%20World%20Economy%3A%20Historical%20Statistics
The World Economy: Historical Statistics is a landmark book by Angus Maddison. Published in 2004 by the OECD Development Centre, it studies the growth of populations and economies across the centuries: not just the world economy as it is now, but how it was in the past. Among other things, it showed that Europe's gross domestic product (GDP) per capita was faster progressing than the leading Asian economies since 1000 AD, reaching again a higher level than elsewhere from the 15th century, while Asian GDP per capita remained static until 1800, when it even began to shrink in absolute terms, as Maddison demonstrated in a subsequent book. At the same time, Maddison showed them recovering lost ground from the 1950s, and documents the much faster rise of Japan and East Asia and the economic shrinkage of Russia in the 1990s. It also shows how colonialism strongly benefited Europe at a tremendous cost to Asia. The book is a mass of statistical tables, mostly on a decade-by-decade basis, along with notes explaining the methods employed in arriving at particular figures. It is available both as a paperback book and in electronic format. Some tables are available on the official website. See also List of regions by past GDP (PPP) per capita Angus Maddison statistics of the ten largest economies by GDP (PPP) Maddison Project, a project started in March 2010 to continue Maddison's work after his death References External links Angus Maddison's Homepage at the Groningen Growth and Development Centre Official website of The World Economy 2004 non-fiction books Demography Economic growth Books about economic history
https://en.wikipedia.org/wiki/Dehn%20plane
In geometry, Max Dehn introduced two examples of planes, a semi-Euclidean geometry and a non-Legendrian geometry, that have infinitely many lines parallel to a given one that pass through a given point, but where the sum of the angles of a triangle is at least . A similar phenomenon occurs in hyperbolic geometry, except that the sum of the angles of a triangle is less than . Dehn's examples use a non-Archimedean field, so that the Archimedean axiom is violated. They were introduced by and discussed by . Dehn's non-archimedean field Ω(t) To construct his geometries, Dehn used a non-Archimedean ordered Pythagorean field Ω(t), a Pythagorean closure of the field of rational functions R(t), consisting of the smallest field of real-valued functions on the real line containing the real constants, the identity function t (taking any real number to itself) and closed under the operation . The field Ω(t) is ordered by putting x > y if the function x is larger than y for sufficiently large reals. An element x of Ω(t) is called finite if m < x < n for some integers m, n, and is called infinite otherwise. Dehn's semi-Euclidean geometry The set of all pairs (x, y), where x and y are any (possibly infinite) elements of the field Ω(t), and with the usual metric which takes values in Ω(t), gives a model of Euclidean geometry. The parallel postulate is true in this model, but if the deviation from the perpendicular is infinitesimal (meaning smaller than any positive rational number), the intersecting lines intersect at a point that is not in the finite part of the plane. Hence, if the model is restricted to the finite part of the plane (points (x,y) with x and y finite), a geometry is obtained in which the parallel postulate fails but the sum of the angles of a triangle is . This is Dehn's semi-Euclidean geometry. It is discussed in . Dehn's non-Legendrian geometry In the same paper, Dehn also constructed an example of a non-Legendrian geometry where there are infinitely many lines through a point not meeting another line, but the sum of the angles in a triangle exceeds . Riemann's elliptic geometry over Ω(t) consists of the projective plane over Ω(t), which can be identified with the affine plane of points (x:y:1) together with the "line at infinity", and has the property that the sum of the angles of any triangle is greater than The non-Legendrian geometry consists of the points (x:y:1) of this affine subspace such that tx and ty are finite (where as above t is the element of Ω(t) represented by the identity function). Legendre's theorem states that the sum of the angles of a triangle is at most , but assumes Archimedes's axiom, and Dehn's example shows that Legendre's theorem need not hold if Archimedes' axiom is dropped. References Planes (geometry) Non-Euclidean geometry
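A short formal restatement of the field construction above; the closure operation written here is the one standard for the Hilbert–Dehn Pythagorean field Ω(t) and is stated as an assumption, since it is not spelled out in the text.

```latex
% Omega(t): the smallest field of real-valued functions on the real line that
% contains the real constants and the identity function t, and is closed under
\[
  x \;\longmapsto\; \sqrt{1 + x^{2}} .
\]
% Ordering: x > y iff x(s) > y(s) for all sufficiently large real s.  Under this
% ordering t is infinite (t > n for every integer n), and 1/t is a positive
% infinitesimal (0 < 1/t < 1/n for every positive integer n).
```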
https://en.wikipedia.org/wiki/Independent%20Days%20Festival
Independent Days Festival was an Italian music festival that took place every September in Bologna. In June 2017, the festival was held in Monza. About History Statistics: 20.000 revellers in 1999 40.000 revellers in 2000 40,000 revellers in 2001 The site The Stages Arena Parco Nord is the location of the main stage – a grass field with a curved banking giving an amphitheatre shape. TENDA ESTRAGON is the tent which houses the second stage. The festival takes place as part of Festa de l'Unità – a popular outdoor festival full of restaurants, beer tents, and entertainment. Although the two stages are not connected, you are allowed to move between the two, enabling festival goers to enjoy all the attractions of the Festa de l'Unità. Traditionally the festival has had a punk theme, but in recent years, more mainstream acts have played, such as Franz Ferdinand, Editors, Maxïmo Park, and The Bravery. The billing 2012 Angels & Airwaves, Social Distortion, All Time Low 2011 Arctic Monkeys, Kasabian, The Wombats, The Offspring, No Use for a Name, Face to Face 2010 The Leeches, All Time Low, Simple Plan, Sum 41, Blink-182 2009 Deep Purple, The Kooks, Kasabian, Twisted Wheel, Expatriate, The Hacienda 2008 It did not take place 2007 Nine Inch Nails, Tool, Maxïmo Park, Hot Hot Heat, ...And You Will Know Us by the Trail of Dead, Billy Talent, Petrol 2006 One year hiatus 2005 Bad Religion, Queens of the Stone Age, The Blood Brothers, The Bravery, Editors, The Futureheads, Maxïmo Park, Meganoidi, Skin, Social Distortion, Forty Winks, Marsh Mallows, The Peawees, Sikitikis 2004 Sonic Youth, Franz Ferdinand, The Libertines, Mark Lanegan, Mondo Generator, Tre Allegri Ragazzi Morti, Colour Of Fire, Blueskins, Julie's Haircut, Raydaytona, The Darkness, Velvet Revolver, MC 5, Lars Frederiksen, Auf Der Maur, The Dirtbombs, New Found Glory, Thee S.T.P., Persiana Jones, Derozer, Radio 4, Everlast, Feist, Madbones, Friday Star, Morticia Lovers, Flogging Molly, Yellowcard, Vanilla Sky, Coheed and Cambria, Young Heart Attack, Ghetto Ways, Forty Winks, The No One, Wet Tones 2003 Rancid, The Cramps, The Mars Volta, Radio Birdman, Nashville Pussy, Lagwagon, A.F.I, Alkaline Trio, The Ataris, Mad Caddies, Fratelli Di Soledad, Thrice, The Hormonauts, Los Fastidios, Immortal Lee County Killers, Bigoz Quartet, All American Rejects, Punx Crew, The Peawees, Forty Winks Motorama, Thee S.T.P., Kim's Teddy Bears, Moravagine, Le Braghe Corte, Marsh Mallows, Karnea, Coffee Shower 2002 Subsonica, NOFX, The Jon Spencer Blues Explosion, Modena City Ramblers, Sick of It All, No Use for a Name, Punkreas, Meganoidi, The Music, Something Corporate, Pulley, Bouncing Souls, Ikara Colt, D4 2001 Man or Astro-man?, Mogwai, Eels, Turin Brakes, Ed Harcourt, I Am Kloot, The (International) Noise Conspiracy, Micevice, Boy Hits Cart, Cut, Scarlet, The Valentines, Manu Chao, Muse, Africa Unite, Ska-P, Modena City Ramblers, Rocket From The Crypt, Mad Caddies, Reel Big Fish, Ba
https://en.wikipedia.org/wiki/Islam%20in%20Antigua%20and%20Barbuda
The statistics for Islam in Antigua and Barbuda estimate a total Muslim population of about 200, representing 0.3 percent of the total population of 67,448. Most of the Muslims of the islands are Arabs of Syrian or Lebanese descent. There are two known Islamic organizations in St. John's, including the Antigua and Barbuda International Islamic Society and the American University of Antigua (School of Medicine) Muslim Students Association. There is also an Ahmadiyya mission in Antigua. Outside St. John's, there is the Muslim Community of Antigua and Barbuda in Codrington, Barbuda. A Pew Research Center survey in 2016 calculated the total number to be around 950. Antigua and Barbuda have yet to establish a proper mosque, Islamic centre or institutions for Muslims in the country. The proposed site of the first mosque to be constructed by the Antigua and Barbuda International Islamic Society (ABIIS) is located on American Road in St. John's. Currently, the location used for a mosque is a small hut that could accommodate about thirty individuals and is available for Friday prayers, the five daily Salat, the two Eids and Qurbani. References Religion in Antigua and Barbuda Antigua and Barbuda Antigua and Barbuda
https://en.wikipedia.org/wiki/Kadomtsev%E2%80%93Petviashvili%20equation
In mathematics and physics, the Kadomtsev–Petviashvili equation (often abbreviated as KP equation) is a partial differential equation to describe nonlinear wave motion. Named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili, the KP equation is usually written as where . The above form shows that the KP equation is a generalization to two spatial dimensions, x and y, of the one-dimensional Korteweg–de Vries (KdV) equation. To be physically meaningful, the wave propagation direction has to be not-too-far from the x direction, i.e. with only slow variations of solutions in the y direction. Like the KdV equation, the KP equation is completely integrable. It can also be solved using the inverse scattering transform much like the nonlinear Schrödinger equation. In 2002, the regularized version of the KP equation, naturally referred to as the Benjamin–Bona–Mahony–Kadomtsev–Petviashvili equation (or simply the BBM-KP equation), was introduced as an alternative model for small amplitude long waves in shallow water moving mainly in the x direction in 2+1 space. where . The BBM-KP equation provides an alternative to the usual KP equation, in a similar way that the Benjamin–Bona–Mahony equation is related to the classical Korteweg–de Vries equation, as the linearized dispersion relation of the BBM-KP is a good approximation to that of the KP but does not exhibit the unwanted limiting behavior as the Fourier variable dual to x approaches . History The KP equation was first written in 1970 by Soviet physicists Boris B. Kadomtsev (1928–1998) and Vladimir I. Petviashvili (1936–1993); it came as a natural generalization of the KdV equation (derived by Korteweg and De Vries in 1895). Whereas in the KdV equation waves are strictly one-dimensional, in the KP equation this restriction is relaxed. Still, both in the KdV and the KP equation, waves have to travel in the positive x-direction. Connections to physics The KP equation can be used to model water waves of long wavelength with weakly non-linear restoring forces and frequency dispersion. If surface tension is weak compared to gravitational forces, is used; if surface tension is strong, then . Because of the asymmetry in the way x- and y-terms enter the equation, the waves described by the KP equation behave differently in the direction of propagation (x-direction) and transverse (y) direction; oscillations in the y-direction tend to be smoother (be of small-deviation). The KP equation can also be used to model waves in ferromagnetic media, as well as two-dimensional matter–wave pulses in Bose–Einstein condensates. Limiting behavior For , typical x-dependent oscillations have a wavelength of giving a singular limiting regime as . The limit is called the dispersionless limit. If we also assume that the solutions are independent of y as , then they also satisfy the inviscid Burgers' equation: Suppose the amplitude of oscillations of a solution is asymptotically small —
https://en.wikipedia.org/wiki/Convention%20Concerning%20Statistics%20of%20Wages%20and%20Hours%20of%20Work%2C%201938
The Convention Concerning Statistics of Wages and Hours of Work, 1938 is an International Labour Organization Convention. It was established in 1938: Ratifications As of 2013, the convention has been ratified by 34 states. Of the ratifying states, 20 have denounced the treaty by means of an automatic process that denounces the 1938 treaty when other superseding conventions are ratified by the same state. External links Text. Ratifications. International Labour Organization conventions Statistical data agreements Treaties concluded in 1938 Treaties entered into force in 1940 Treaties of Algeria Treaties of Barbados Treaties of Chile Treaties of Cuba Treaties of Djibouti Treaties of the Kingdom of Egypt Treaties of the French Fourth Republic Treaties of Kenya Treaties of Myanmar Treaties of Nicaragua Treaties of South Africa Treaties of Tanganyika Treaties of the United Arab Republic Treaties of Uruguay 1938 in labor relations
https://en.wikipedia.org/wiki/Coanalytic%20set
In the mathematical discipline of descriptive set theory, a coanalytic set is a set (typically a set of real numbers or more generally a subset of a Polish space) that is the complement of an analytic set (Kechris 1994:87). Coanalytic sets are also referred to as Π¹₁ sets (see projective hierarchy). References Descriptive set theory
https://en.wikipedia.org/wiki/IMS%20Health
IMS Health was an American company that provided information, services and technology for the healthcare industry. IMS stood for Intercontinental Medical Statistics. It was the largest vendor of U.S. physician prescribing data. IMS Health was founded in 1954 by Bill Frohlich and David Dubow with Arthur Sackler having a hidden ownership stake. In 2010, IMS Health was taken private by TPG Capital, CPP Investment Board and Leonard Green & Partners. The company went public on April 4, 2014, and began trading on the NYSE under the symbol IMS. IMS Health was headquartered in Danbury, Connecticut. Over 2016 Quintiles and IMS Health merged, and the resulting company was named QuintilesIMS, which was renamed to IQVIA in 2017. Business model IMS Health was best known for its collection of healthcare information spanning sales, de-identified prescription data, medical claims, electronic medical records and social media. IMS Health's products and services were used by companies to develop commercialization plans and portfolio strategies, to select patient and physician populations for specific therapies, and to measure the effectiveness of pharmaceutical marketing and sales resources. The firm uses its own data to produce syndicated reports such as market forecasts and market intelligence. History and acquisitions The original name of the company was Intercontinental Marketing Statistics, hence the IMS name. IMS Health's corporate headquarters is located in Danbury, Connecticut, United States. The company's chairman and CEO is Ari Bousbib. In 1998, the parent company, Cognizant Corporation, split into two companies: IMS Health and Nielsen Media Research. After this restructuring, Cognizant Technology Solutions became a public subsidiary of IMS Health In 2002, IMS Health acquired Cambridge Pharma Consultancy, a privately held international firm that provides strategic advice to pharmaceutical management. In 2002, IMS Health acquired the Rosenblatt Klauber Group, a privately held international consultancy that provides forecasting, opportunity assessment & management development services to pharmaceutical companies. In 2003, acquired Marketing Initiatives, a specialist in healthcare facility profile data, and Data Niche Associates, a provider of rebate validation services for Medicaid and managed care. In 2003, IMS Health sold its entire 56% stake in Cognizant and both companies are separated into two independent entities as IMS Health and Cognizant In 2004, United Research China Shanghai was acquired, providing coverage of China's consumer health market. In 2005, acquired PharMetrics, a U.S. provider of patient-centric integrated claims data. In 2006, acquired the Life Sciences practice of Strategic Decisions Group, a portfolio strategy consultant to the life sciences industry. In 2007, IMS Health acquired IHS and MedInitiatives, providers of healthcare data management analytics and technology services. That same year, ValueMedics Research was ac
https://en.wikipedia.org/wiki/Light%20transport%20theory
Light transport theory deals with the mathematics behind calculating the energy transfers between media that affect visibility. This article is currently specific to light transport in rendering processes such as global illumination and HDRI. Light Light Transport The amount of light transported is measured by flux density, or luminous flux per unit area on the point of the surface at which it is measured. Radiometry Energy Transfer Media Models Hemisphere Given a surface S, a hemisphere H can be projected on to S to calculate the amount of incoming and outgoing light. If a point P is selected at random on the surface S, the amount of incoming and outgoing light can be calculated by its projection onto the hemisphere. Hemicube The hemicube model works in a similar way that the hemisphere model works, with the exception that a hemicube is projected as opposed to a hemisphere. The similarity is only in concept, the actual calculation done by integration has a different form factor. Particle Wave Equations Maxwell's Equations Rendering Rendering converts a model into an image either by simulating a method such as light transport to get physically based photorealistic images, or by applying some kind of style as non-photorealistic rendering. The two basic operations in light transport are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). See also Path Tracing Global illumination Monte Carlo Method Photon mapping Radiosity (computer graphics) Ray tracing (graphics) Ray tracing (physics) Reyes rendering 3D computer graphics
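A minimal Monte Carlo sketch of the hemisphere model described above: the irradiance at a point is estimated by sampling directions uniformly on the hemisphere over the surface and averaging incoming radiance weighted by the cosine term. The constant-sky radiance is an illustrative assumption; for that sky the exact answer is π·L, which the estimate should approach.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_hemisphere(n):
    """Uniform directions on the unit hemisphere around the surface normal (0, 0, 1)."""
    u, v = rng.random(n), rng.random(n)
    z = u                            # cos(theta) is uniform on [0, 1) for a uniform hemisphere
    phi = 2.0 * np.pi * v
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def irradiance(radiance_fn, n=100_000):
    """Estimate E = integral over the hemisphere of L_i(w) cos(theta) dw by Monte Carlo."""
    dirs = sample_hemisphere(n)
    cos_theta = dirs[:, 2]
    # Uniform hemisphere sampling has pdf 1/(2*pi), so each sample is weighted by 2*pi.
    return 2.0 * np.pi * np.mean(radiance_fn(dirs) * cos_theta)

constant_sky = lambda dirs: np.full(len(dirs), 1.0)   # L_i = 1 from every direction
print(irradiance(constant_sky), np.pi)                # estimate vs exact value pi
```

The same estimator, with the hemisphere replaced by the faces of a hemicube, gives the form-factor computation mentioned above.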
https://en.wikipedia.org/wiki/Starred%20transform
In applied mathematics, the starred transform, or star transform, is a discrete-time variation of the Laplace transform, so-named because of the asterisk or "star" in the customary notation of the sampled signals. The transform is an operator of a continuous-time function , which is transformed to a function in the following manner: where is a Dirac comb function, with period of time T. The starred transform is a convenient mathematical abstraction that represents the Laplace transform of an impulse sampled function , which is the output of an ideal sampler, whose input is a continuous function, . The starred transform is similar to the Z transform, with a simple change of variables, where the starred transform is explicitly declared in terms of the sampling period (T), while the Z transform is performed on a discrete signal and is independent of the sampling period. This makes the starred transform a de-normalized version of the one-sided Z-transform, as it restores the dependence on sampling parameter T. Relation to Laplace transform Since , where: Then per the convolution theorem, the starred transform is equivalent to the complex convolution of and , hence: This line integration is equivalent to integration in the positive sense along a closed contour formed by such a line and an infinite semicircle that encloses the poles of X(s) in the left half-plane of p. The result of such an integration (per the residue theorem) would be: Alternatively, the aforementioned line integration is equivalent to integration in the negative sense along a closed contour formed by such a line and an infinite semicircle that encloses the infinite poles of in the right half-plane of p. The result of such an integration would be: Relation to Z transform Given a Z-transform, X(z), the corresponding starred transform is a simple substitution:   This substitution restores the dependence on T. It's interchangeable, Properties of the starred transform Property 1:   is periodic in with period Property 2:  If has a pole at , then must have poles at , where Citations References Phillips and Nagle, "Digital Control System Analysis and Design", 3rd Edition, Prentice Hall, 1995. Transforms
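The formulas stripped from the text above can be restated as follows; this is a hedged reconstruction of the standard definitions in the usual digital-control notation (causal signal, one-sided Dirac comb).

```latex
% Ideal-sampler model and starred transform:
\[
  X^{*}(s) \;=\; \mathcal{L}\!\left[\, x(t)\,\delta_{T}(t) \,\right]
          \;=\; \sum_{n=0}^{\infty} x(nT)\, e^{-nTs},
  \qquad
  \delta_{T}(t) \;=\; \sum_{n=0}^{\infty} \delta(t - nT).
\]
% Relation to the Z-transform: substitute z = e^{sT},
\[
  X^{*}(s) \;=\; X(z)\big|_{\,z = e^{sT}} .
\]
% Periodicity and pole properties:
\[
  X^{*}(s) = X^{*}\!\left(s + j\,\tfrac{2\pi}{T}\right); \qquad
  \text{a pole of } X(s) \text{ at } s = s_{1} \text{ gives poles of } X^{*}(s)
  \text{ at } s = s_{1} + j\,\tfrac{2\pi m}{T},\; m = 0, \pm 1, \pm 2, \ldots
\]
```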
https://en.wikipedia.org/wiki/K-theory%20%28physics%29
In string theory, K-theory classification refers to a conjectured application of K-theory (in abstract algebra and algebraic topology) to superstrings, to classify the allowed Ramond–Ramond field strengths as well as the charges of stable D-branes. In condensed matter physics K-theory has also found important applications, specially in the topological classification of topological insulators, superconductors and stable Fermi surfaces (, ). History This conjecture, applied to D-brane charges, was first proposed by . It was popularized by who demonstrated that in type IIB string theory arises naturally from Ashoke Sen's realization of arbitrary D-brane configurations as stacks of D9 and anti-D9-branes after tachyon condensation. Such stacks of branes are inconsistent in a non-torsion Neveu–Schwarz (NS) 3-form background, which, as was highlighted by , complicates the extension of the K-theory classification to such cases. suggested a solution to this problem: D-branes are in general classified by a twisted K-theory, that had earlier been defined by . Applications The K-theory classification of D-branes has had numerous applications. For example, used it to argue that there are eight species of orientifold one-plane. applied the K-theory classification to derive new consistency conditions for flux compactifications. K-theory has also been used to conjecture a formula for the topologies of T-dual manifolds by . Recently K-theory has been conjectured to classify the spinors in compactifications on generalized complex manifolds. Open problems Despite these successes, RR fluxes are not quite classified by K-theory. argued that the K-theory classification is incompatible with S-duality in IIB string theory. In addition, if one attempts to classify fluxes on a compact ten-dimensional spacetime, then a complication arises due to the self-duality of the RR fluxes. The duality uses the Hodge star, which depends on the metric and so is continuously valued and in particular is generically irrational. Thus not all of the RR fluxes, which are interpreted as the Chern characters in K-theory, can be rational. However Chern characters are always rational, and so the K-theory classification must be replaced. One needs to choose a half of the fluxes to quantize, or a polarization in the geometric quantization-inspired language of Diaconescu, Moore, and Witten and later of . Alternately one may use the K-theory of a 9-dimensional time slice as has been done by . K-theory classification of RR fluxes In the classical limit of type II string theory, which is type II supergravity, the Ramond–Ramond field strengths are differential forms. In the quantum theory the well-definedness of the partition functions of D-branes implies that the RR field strengths obey Dirac quantization conditions when spacetime is compact, or when a spatial slice is compact and one considers only the (magnetic) components of the field strength which lie along the spati
https://en.wikipedia.org/wiki/Witt%27s%20theorem
"Witt's theorem" or "the Witt theorem" may also refer to the Bourbaki–Witt fixed point theorem of order theory. In mathematics, Witt's theorem, named after Ernst Witt, is a basic result in the algebraic theory of quadratic forms: any isometry between two subspaces of a nonsingular quadratic space over a field k may be extended to an isometry of the whole space. An analogous statement holds also for skew-symmetric, Hermitian and skew-Hermitian bilinear forms over arbitrary fields. The theorem applies to classification of quadratic forms over k and in particular allows one to define the Witt group W(k) which describes the "stable" theory of quadratic forms over the field k. Statement Let be a finite-dimensional vector space over a field k of characteristic different from 2 together with a non-degenerate symmetric or skew-symmetric bilinear form. If {{nowrap|f : U → ''U}} is an isometry between two subspaces of V then f extends to an isometry of V. Witt's theorem implies that the dimension of a maximal totally isotropic subspace (null space) of V is an invariant, called the index or of b, and moreover, that the isometry group of acts transitively on the set of maximal isotropic subspaces. This fact plays an important role in the structure theory and representation theory of the isometry group and in the theory of reductive dual pairs. Witt's cancellation theorem Let , , be three quadratic spaces over a field k. Assume that Then the quadratic spaces and are isometric: In other words, the direct summand appearing in both sides of an isomorphism between quadratic spaces may be "cancelled". Witt's decomposition theorem Let be a quadratic space over a field k. Then it admits a Witt decomposition: where is the radical of q, is an anisotropic quadratic space and is a split quadratic space. Moreover, the anisotropic summand, termed the core form, and the hyperbolic summand in a Witt decomposition of are determined uniquely up to isomorphism. Quadratic forms with the same core form are said to be similar or Witt equivalent'''. Citations References Emil Artin (1957) Geometric Algebra, page 121 via Internet Archive Theorems in linear algebra Quadratic forms
https://en.wikipedia.org/wiki/Truncation%20%28geometry%29
In geometry, a truncation is an operation in any dimension that cuts polytope vertices, creating a new facet in place of each vertex. The term originates from Kepler's names for the Archimedean solids. Uniform truncation In general any polyhedron (or polytope) can also be truncated with a degree of freedom as to how deep the cut is, as shown in Conway polyhedron notation truncation operation. A special kind of truncation, usually implied, is a uniform truncation, a truncation operator applied to a regular polyhedron (or regular polytope) which creates a resulting uniform polyhedron (uniform polytope) with equal edge lengths. There are no degrees of freedom, and it represents a fixed geometric, just like the regular polyhedra. In general all single ringed uniform polytopes have a uniform truncation. For example, the icosidodecahedron, represented as Schläfli symbols r{5,3} or , and Coxeter-Dynkin diagram or has a uniform truncation, the truncated icosidodecahedron, represented as tr{5,3} or , . In the Coxeter-Dynkin diagram, the effect of a truncation is to ring all the nodes adjacent to the ringed node. A uniform truncation performed on the regular triangular tiling {3,6} results in the regular hexagonal tiling {6,3}. Truncation of polygons A truncated n-sided polygon will have 2n sides (edges). A regular polygon uniformly truncated will become another regular polygon: t{n} is {2n}. A complete truncation (or rectification), r{3}, is another regular polygon in its dual position. A regular polygon can also be represented by its Coxeter-Dynkin diagram, , and its uniform truncation , and its complete truncation . The graph represents Coxeter group I2(n), with each node representing a mirror, and the edge representing the angle π/n between the mirrors, and a circle is given around one or both mirrors to show which ones are active. Star polygons can also be truncated. A truncated pentagram {5/2} will look like a pentagon, but is actually a double-covered (degenerate) decagon ({10/2}) with two sets of overlapping vertices and edges. A truncated great heptagram {7/3} gives a tetradecagram {14/3}. Uniform truncation in regular polyhedra and tilings and higher When "truncation" applies to platonic solids or regular tilings, usually "uniform truncation" is implied, which means truncating until the original faces become regular polygons with twice as many sides as the original form. This sequence shows an example of the truncation of a cube, using four steps of a continuous truncating process between a full cube and a rectified cube. The final polyhedron is a cuboctahedron. The middle image is the uniform truncated cube; it is represented by a Schläfli symbol t{p,q,...}. A bitruncation is a deeper truncation, removing all the original edges, but leaving an interior part of the original faces. Example: a truncated octahedron is a bitruncated cube: t{3,4} = 2t{4,3}. A complete bitruncation, called a birectification, reduces original faces to
https://en.wikipedia.org/wiki/164%20%28number%29
164 (one hundred [and] sixty-four) is the natural number following 163 and preceding 165. In mathematics 164 is a zero of the Mertens function. In base 10, 164 is the smallest number that can be expressed as a concatenation of two squares in two different ways: as 1 concatenate 64 or 16 concatenate 4. In astronomy 164P/Christensen is a comet in the Solar System 164 Eva is a large and dark Main belt asteroid In geography Chaplin no. 164, Saskatchewan in Saskatchewan, Canada In the military was a cargo vessel during World War II was a T2 tanker during World War II was a Barracuda-class submarine during World War II was an Alamosa-class cargo ship during World War II was an Admirable-class minesweeper during World War II was a Trefoil-class concrete barge during World War II was a during World War II was a during World War II was a yacht during World War I was a during World War II In sports Baseball Talk was a set of 164 talking baseball cards released by Topps Baseball Card Company in 1989 In transportation Caproni Ca.164 was a training biplane produced in Italy prior to World War II The Alfa Romeo 164 car produced from 1988 to 1997 The Volvo 164 car produced from 1968 to 1975 List of highways numbered 164 Is a London Transport bus route running between Sutton and Wimbledon In other fields 164 is also: The year AD 164 or 164 BC 164 AH is a year in the Islamic calendar that corresponds to 780 – 781 CE The Scrabble board, a 15-by-15 grid, includes 164 squares that have neither word nor letter multiplier. The remainder have attributes such as double letter, triple letter, double word, and triple word The atomic number of an element temporarily called Unhexquadium Solvent Red 164 is a synthetic red diazo dye E.164 is an ITU-T recommendation defines public telecommunication numbering plan used in the PSTN and data networks See also United States Supreme Court cases, Volume 164 United Nations Security Council Resolution 164 External links Number Facts and Trivia: 164 The Number 164 The Positive Integer 164 Integers
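Both number-theoretic claims above are quick to verify with a brute-force sketch: the Mertens function M(164) = 0, and the decimal digits of 164 split into two perfect squares in exactly two ways, as "1"+"64" and "16"+"4".

```python
from math import isqrt

def mobius(k):
    """Moebius function via trial factorisation."""
    if k == 1:
        return 1
    result, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if k > 1 else result

def mertens(n):
    """M(n) = sum of the Moebius function over 1..n."""
    return sum(mobius(k) for k in range(1, n + 1))

def square_concatenations(n):
    """Ways to split the decimal digits of n into two perfect squares."""
    s, out = str(n), []
    for cut in range(1, len(s)):
        a, b = s[:cut], s[cut:]
        if b[0] == "0":           # disallow splits with a leading zero
            continue
        x, y = int(a), int(b)
        if isqrt(x) ** 2 == x and isqrt(y) ** 2 == y:
            out.append((x, y))
    return out

print(mertens(164))                 # 0
print(square_concatenations(164))   # [(1, 64), (16, 4)]
```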
https://en.wikipedia.org/wiki/Islam%20in%20Paraguay
The latest statistics for Islam in Paraguay estimate a total Muslim population of under 1,000, representing 0.02% of the population, although another estimate puts the number of Muslims in Paraguay at 35,000. Most of the Muslims are descendants of immigrants from Syria and Lebanon. The major Islamic organization in Paraguay is the Centro Benéfico Cultural Islámico Asunción, led by Faozi Mohamed Omairi. The community is concentrated in and around the capital, Asunción. References Paraguay Religion in Paraguay
https://en.wikipedia.org/wiki/Automatic%20group
In mathematics, an automatic group is a finitely generated group equipped with several finite-state automata. These automata represent the Cayley graph of the group. That is, they can tell if a given word representation of a group element is in a "canonical form" and can tell if two elements given in canonical words differ by a generator. More precisely, let G be a group and A be a finite set of generators. Then an automatic structure of G with respect to A is a set of finite-state automata: the word-acceptor, which accepts for every element of G at least one word in representing it; multipliers, one for each , which accept a pair (w1, w2), for words wi accepted by the word-acceptor, precisely when in G. The property of being automatic does not depend on the set of generators. Properties Automatic groups have word problem solvable in quadratic time. More strongly, a given word can actually be put into canonical form in quadratic time, based on which the word problem may be solved by testing whether the canonical forms of two words represent the same element (using the multiplier for ). Automatic groups are characterized by the fellow traveler property. Let denote the distance between in the Cayley graph of . Then, G is automatic with respect to a word acceptor L if and only if there is a constant such that for all words which differ by at most one generator, the distance between the respective prefixes of u and v is bounded by C. In other words, where for the k-th prefix of (or itself if ). This means that when reading the words synchronously, it is possible to keep track of the difference between both elements with a finite number of states (the neighborhood of the identity with diameter C in the Cayley graph). Examples of automatic groups The automatic groups include: Finite groups. To see this take the regular language to be the set of all words in the finite group. Euclidean groups All finitely generated Coxeter groups Geometrically finite groups Examples of non-automatic groups Baumslag–Solitar groups Non-Euclidean nilpotent groups Biautomatic groups A group is biautomatic if it has two multiplier automata, for left and right multiplication by elements of the generating set, respectively. A biautomatic group is clearly automatic. Examples include: Hyperbolic groups. Any Artin group of finite type, including braid groups. Automatic structures The idea of describing algebraic structures with finite-automata can be generalized from groups to other structures. For instance, it generalizes naturally to automatic semigroups. References Further reading . Computability theory Properties of groups Combinatorics on words Computational group theory
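As a toy illustration of the two kinds of automata (a sketch in Python rather than literal finite-state machines; a genuine automatic structure would read the two words of a multiplier pair synchronously over a padded alphabet), take the infinite cyclic group Z with generators a and A = a^(-1) and canonical words a...a or A...A:

import re

# Word acceptor for Z = <a>: the regular language of canonical forms.
WORD_ACCEPTOR = re.compile(r"^(a*|A*)$")

def value(word: str) -> int:
    """Group element of Z represented by a word over {a, A}."""
    return word.count("a") - word.count("A")

def accepted(word: str) -> bool:
    return bool(WORD_ACCEPTOR.match(word))

def multiplier_a(w1: str, w2: str) -> bool:
    """True when both words are canonical and w2 = w1 * a in the group."""
    return accepted(w1) and accepted(w2) and value(w2) == value(w1) + 1

assert multiplier_a("AA", "A")       # (-2) * a = -1
assert not multiplier_a("aa", "a")   # 2 * a is 3, not 1

Reading two canonical words of Z in parallel, their prefixes never drift further apart than one generator, which is the fellow traveler property in this simple case.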
https://en.wikipedia.org/wiki/M-estimator
In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. However, M-estimators are not inherently robust, as is clear from the fact that they include maximum likelihood estimators, which are in general not robust. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation. More generally, an M-estimator may be defined to be a zero of an estimating function. This estimating function is often the derivative of another statistical function. For example, a maximum-likelihood estimate is the point where the derivative of the likelihood function with respect to the parameter is zero; thus, a maximum-likelihood estimator is a critical point of the score function. In many applications, such M-estimators can be thought of as estimating characteristics of the population. Historical motivation The method of least squares is a prototypical M-estimator, since the estimator is defined as a minimum of the sum of squares of the residuals. Another popular M-estimator is maximum-likelihood estimation. For a family of probability density functions f parameterized by θ, a maximum likelihood estimator of θ is computed for each set of data by maximizing the likelihood function over the parameter space { θ } . When the observations are independent and identically distributed, a ML-estimate satisfies or, equivalently, Maximum-likelihood estimators have optimal properties in the limit of infinitely many observations under rather general conditions, but may be biased and not the most efficient estimators for finite samples. Definition In 1964, Peter J. Huber proposed generalizing maximum likelihood estimation to the minimization of where ρ is a function with certain properties (see below). The solutions are called M-estimators ("M" for "maximum likelihood-type" (Huber, 1981, page 43)); other types of robust estimators include L-estimators, R-estimators and S-estimators. Maximum likelihood estimators (MLE) are thus a special case of M-estimators. With suitable rescaling, M-estimators are special cases of extremum estimators (in which more general functions of the observations can be used). The function ρ, or its derivative, ψ, can be chosen in such a way to provide the estimator desirable properties (in terms of bias and efficiency) when the data are truly from the assumed distribution, and 'not bad' behaviour when the data are generated from a model that is, in some sense, close to the assumed distribution. Types M-estimators are solutions, θ, which minimize This minimization can always be done directly. Often it is simpler to differentiate with respect to θ and solve for the root of the derivative. When this differentiation is possible, the M-estima
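As a concrete sketch (not from the article; the tuning constant k = 1.345 is the value commonly quoted for the Huber loss, and the scale is fixed at 1 here for simplicity), the Huber M-estimate of location can be computed by iteratively reweighted least squares, with weights given by ψ(r)/r:

import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    x = np.asarray(x, dtype=float)
    theta = np.median(x)                 # robust starting value
    for _ in range(max_iter):
        r = x - theta
        absr = np.abs(r)
        w = np.ones_like(r)              # weight 1 for small residuals
        big = absr > k
        w[big] = k / absr[big]           # down-weight large residuals
        new_theta = np.sum(w * x) / np.sum(w)
        if abs(new_theta - theta) < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta

data = [1.1, 0.9, 1.0, 1.2, 0.8, 15.0]   # one gross outlier
print(huber_location(data))              # close to 1, unlike the mean (~3.3)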
https://en.wikipedia.org/wiki/Binary%20splitting
In mathematics, binary splitting is a technique for speeding up numerical evaluation of many types of series with rational terms. In particular, it can be used to evaluate hypergeometric series at rational points. Method Given a series where pn and qn are integers, the goal of binary splitting is to compute integers P(a, b) and Q(a, b) such that The splitting consists of setting m = [(a + b)/2] and recursively computing P(a, b) and Q(a, b) from P(a, m), P(m, b), Q(a, m), and Q(m, b). When a and b are sufficiently close, P(a, b) and Q(a, b) can be computed directly from pa...pb and qa...qb. Comparison with other methods Binary splitting requires more memory than direct term-by-term summation, but is asymptotically faster since the sizes of all occurring subproducts are reduced. Additionally, whereas the most naive evaluation scheme for a rational series uses a full-precision division for each term in the series, binary splitting requires only one final division at the target precision; this is not only faster, but conveniently eliminates rounding errors. To take full advantage of the scheme, fast multiplication algorithms such as Toom–Cook and Schönhage–Strassen must be used; with ordinary O(n2) multiplication, binary splitting may render no speedup at all or be slower. Since all subdivisions of the series can be computed independently of each other, binary splitting lends well to parallelization and checkpointing. In a less specific sense, binary splitting may also refer to any divide and conquer algorithm that always divides the problem in two halves. References Xavier Gourdon & Pascal Sebah. Binary splitting method David V. Chudnovsky & Gregory V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory. In Computers and Mathematics (Stanford, CA, 1986), pp. 09–232, Dekker, New York, 1990. Bruno Haible, Thomas Papanikolaou. Fast multiprecision evaluation of series of rational numbers. Paper distributed with the CLN library source code. Lozier, D.W. and Olver, F.W.J. Numerical Evaluation of Special Functions. Mathematics of Computation 1943–1993: A Half-Century of Computational Mathematics, W.Gautschi, eds., Proc. Sympos. Applied Mathematics, AMS, v.48, pp. 79–125 (1994). Bach, E. The complexity of number-theoretic constants. Info. Proc. Letters, N 62, pp. 145–152 (1997). Borwein, J.M., Bradley, D.M. and Crandall, R.E. Computational strategies for the Riemann zeta function. J. of Comput. Appl. Math., v.121, N 1-2, pp. 247–296 (2000). Karatsuba, E.A. Fast evaluation of transcendental functions. (English. Russian original) Probl. Inf. Transm. 27, No.4, 339-360 (1991); translation from Probl. Peredachi Inf. 27, No.4, 76–99 (1991). Ekatherina Karatsuba. Fast Algorithms and the FEE method Computer arithmetic algorithms
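A minimal sketch of the scheme applied to e = 1 + 1/1! + 1/2! + ... (the particular P/Q convention below, taken over half-open intervals, is one simple choice and is not taken from the article):

from fractions import Fraction

# bsplit(a, b) returns integers (P, Q) with
#   P/Q = sum over n in (a, b] of 1/((a+1)(a+2)...n),
# so that 1 + P(0, N)/Q(0, N) is the N-term partial sum of the series for e.
def bsplit(a: int, b: int):
    if b - a == 1:
        return 1, b                           # single term 1/(a+1)
    m = (a + b) // 2
    p_left, q_left = bsplit(a, m)
    p_right, q_right = bsplit(m, b)
    # combine the two halves using only integer multiplications
    return p_left * q_right + p_right, q_left * q_right

P, Q = bsplit(0, 30)
print(Fraction(1) + Fraction(P, Q))           # exact rational partial sum
print(float(1 + P / Q))                       # 2.718281828459045...

All intermediate work consists of integer multiplications; the single division to the target precision happens only at the end, as described above.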
https://en.wikipedia.org/wiki/Error%20term
In mathematics and statistics, an error term is an additive type of error. Common examples include: errors and residuals in statistics, e.g. in linear regression the error term in numerical integration Error measures
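As a concrete instance of the first item (a standard textbook example, not taken from the article), simple linear regression models the data as

y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, \dots, n,

where \varepsilon_i is the error term.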
https://en.wikipedia.org/wiki/Mathematics%20Magazine
Mathematics Magazine is a refereed bimonthly publication of the Mathematical Association of America. Its intended audience is teachers of collegiate mathematics, especially at the junior/senior level, and their students. It is explicitly a journal of mathematics rather than pedagogy. Rather than articles in the terse "theorem-proof" style of research journals, it seeks articles which provide a context for the mathematics they deliver, with examples, applications, illustrations, and historical background. Paid circulation in 2008 was 9,500 and total circulation was 10,000. Mathematics Magazine is a continuation of Mathematics News Letter (1926–1934) and National Mathematics Magazine (1934–1945). Doris Schattschneider became the first female editor of Mathematics Magazine in 1981. The MAA gives the Carl B. Allendoerfer Awards annually "for articles of expository excellence" published in Mathematics Magazine. See also American Mathematical Monthly Carl B. Allendoerfer Award Notes Further reading External links Mathematics Magazine at JSTOR Mathematics Magazine at Taylor & Francis Online Mathematics education journals Academic journals published by learned and professional societies of the United States Mathematical Association of America
https://en.wikipedia.org/wiki/Singular%20point%20of%20a%20curve
In geometry, a singular point on a curve is one where the curve is not given by a smooth embedding of a parameter. The precise definition of a singular point depends on the type of curve being studied. Algebraic curves in the plane Algebraic curves in the plane may be defined as the set of points satisfying an equation of the form where is a polynomial function If is expanded as If the origin is on the curve then . If then the implicit function theorem guarantees there is a smooth function so that the curve has the form near the origin. Similarly, if then there is a smooth function so that the curve has the form near the origin. In either case, there is a smooth map from to the plane which defines the curve in the neighborhood of the origin. Note that at the origin so the curve is non-singular or regular at the origin if at least one of the partial derivatives of is non-zero. The singular points are those points on the curve where both partial derivatives vanish, Regular points Assume the curve passes through the origin and write Then can be written If is not 0 then has a solution of multiplicity 1 at and the origin is a point of single contact with line If then has a solution of multiplicity 2 or higher and the line or is tangent to the curve. In this case, if is not 0 then the curve has a point of double contact with If the coefficient of , is 0 but the coefficient of is not then the origin is a point of inflection of the curve. If the coefficients of and are both 0 then the origin is called point of undulation of the curve. This analysis can be applied to any point on the curve by translating the coordinate axes so that the origin is at the given point. Double points If and are both in the above expansion, but at least one of , , is not 0 then the origin is called a double point of the curve. Again putting can be written Double points can be classified according to the solutions of Crunodes If has two real solutions for , that is if then the origin is called a crunode. The curve in this case crosses itself at the origin and has two distinct tangents corresponding to the two solutions of The function has a saddle point at the origin in this case. Acnodes If has no real solutions for , that is if then the origin is called an acnode. In the real plane the origin is an isolated point on the curve; however when considered as a complex curve the origin is not isolated and has two imaginary tangents corresponding to the two complex solutions of The function has a local extremum at the origin in this case. Cusps If has a single solution of multiplicity 2 for , that is if then the origin is called a cusp. The curve in this case changes direction at the origin creating a sharp point. The curve has a single tangent at the origin which may be considered as two coincident tangents. Further classification The term node is used to indicate either a crunode or an acnode, in other words a double poin
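A short symbolic sketch (the example curve is this sketch's own choice, not the article's) finds and classifies the singular point of the nodal cubic y^2 = x^2(x + 1) using the criteria above: both partial derivatives vanish there, and the discriminant of the degree-two part distinguishes crunode, cusp and acnode:

import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**2*(x + 1)

# Singular points: f = f_x = f_y = 0 simultaneously.
singular = sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(singular)                       # [{x: 0, y: 0}]

# Classify the origin by the quadratic part a*x**2 + 2*h*x*y + b*y**2.
coeffs = sp.Poly(f, x, y).as_dict()
a = coeffs.get((2, 0), 0)             # coefficient of x**2  -> -1
h = sp.Rational(coeffs.get((1, 1), 0), 2)
b = coeffs.get((0, 2), 0)             # coefficient of y**2  ->  1
disc = h**2 - a*b                     # > 0: crunode, = 0: cusp, < 0: acnode
print(disc > 0)                       # True: the origin is a crunode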
https://en.wikipedia.org/wiki/Orbit%20method
In mathematics, the orbit method (also known as the Kirillov theory, the method of coadjoint orbits and by a few similar names) establishes a correspondence between irreducible unitary representations of a Lie group and its coadjoint orbits: orbits of the action of the group on the dual space of its Lie algebra. The theory was introduced by for nilpotent groups and later extended by Bertram Kostant, Louis Auslander, Lajos Pukánszky and others to the case of solvable groups. Roger Howe found a version of the orbit method that applies to p-adic Lie groups. David Vogan proposed that the orbit method should serve as a unifying principle in the description of the unitary duals of real reductive Lie groups. Relation with symplectic geometry One of the key observations of Kirillov was that coadjoint orbits of a Lie group G have natural structure of symplectic manifolds whose symplectic structure is invariant under G. If an orbit is the phase space of a G-invariant classical mechanical system then the corresponding quantum mechanical system ought to be described via an irreducible unitary representation of G. Geometric invariants of the orbit translate into algebraic invariants of the corresponding representation. In this way the orbit method may be viewed as a precise mathematical manifestation of a vague physical principle of quantization. In the case of a nilpotent group G the correspondence involves all orbits, but for a general G additional restrictions on the orbit are necessary (polarizability, integrality, Pukánszky condition). This point of view has been significantly advanced by Kostant in his theory of geometric quantization of coadjoint orbits. Kirillov character formula For a Lie group , the Kirillov orbit method gives a heuristic method in representation theory. It connects the Fourier transforms of coadjoint orbits, which lie in the dual space of the Lie algebra of G, to the infinitesimal characters of the irreducible representations. The method got its name after the Russian mathematician Alexandre Kirillov. At its simplest, it states that a character of a Lie group may be given by the Fourier transform of the Dirac delta function supported on the coadjoint orbits, weighted by the square-root of the Jacobian of the exponential map, denoted by . It does not apply to all Lie groups, but works for a number of classes of connected Lie groups, including nilpotent, some semisimple groups, and compact groups. Special cases Nilpotent group case Let G be a connected, simply connected nilpotent Lie group. Kirillov proved that the equivalence classes of irreducible unitary representations of G are parametrized by the coadjoint orbits of G, that is the orbits of the action G on the dual space of its Lie algebra. The Kirillov character formula expresses the Harish-Chandra character of the representation as a certain integral over the corresponding orbit. Compact Lie group case Complex irreducible representations of compact Lie groups
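The character formula referred to here is usually written as follows (stated from memory with one common normalization, not quoted from the article):

\theta_\pi(\exp X)\, j(X)^{1/2} \;=\; \int_{\mathcal{O}_\pi} e^{\,i\langle \lambda, X\rangle}\, d\mu_\pi(\lambda),
\qquad
j(X) = \det\!\left(\frac{\sinh(\operatorname{ad} X/2)}{\operatorname{ad} X/2}\right),

where \mathcal{O}_\pi is the coadjoint orbit attached to the representation \pi and \mu_\pi is its canonical (Liouville) measure.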
https://en.wikipedia.org/wiki/Scottish%20Book
The Scottish Book () was a thick notebook used by mathematicians of the Lwów School of Mathematics in Poland for jotting down problems meant to be solved. The notebook was named after the "Scottish Café" where it was kept. Originally, the mathematicians who gathered at the cafe would write down the problems and equations directly on the cafe's marble table tops, but these would be erased at the end of each day, and so the record of the preceding discussions would be lost. The idea for the book was most likely originally suggested by Stefan Banach's wife, Łucja Banach. Stefan or Łucja Banach purchased a large notebook and left it with the proprietor of the cafe. History The Scottish Café () was the café in Lwów (now Lviv, Ukraine) where, in the 1930s and 1940s, mathematicians from the Lwów School collaboratively discussed research problems, particularly in functional analysis and topology. Stanislaw Ulam recounts that the tables of the café had marble tops, so they could write in pencil, directly on the table, during their discussions. To keep the results from being lost, and after becoming annoyed with their writing directly on the table tops, Stefan Banach's wife provided the mathematicians with a large notebook, which was used for writing the problems and answers and eventually became known as the Scottish Book. The book—a collection of solved, unsolved, and even probably unsolvable problems—could be borrowed by any of the guests of the café. Solving any of the problems was rewarded with prizes, with the most difficult and challenging problems having expensive prizes (during the Great Depression and on the eve of World War II), such as a bottle of fine brandy. For problem 153, which was later recognized as being closely related to Stefan Banach's "basis problem", Stanisław Mazur offered the prize of a live goose. This problem was solved only in 1972 by Per Enflo, who was presented with the live goose in a ceremony that was broadcast throughout Poland. The café building used to house the at the street address of 27 Taras Shevchenko Prospekt. The original cafe was renovated in May 2014 and contains a copy of the Scottish Book. Problems contributed by individual authors A total of 193 problems were written down in the book. Stanisław Mazur contributed a total of 43 problems, 24 of them as a single author and 19 together with Stefan Banach. Banach himself wrote 14, plus another 11 with Stanislaw Ulam and Mazur. Ulam wrote 40 problems and additional 15 ones with others. During the Soviet occupation of Lwów, several Russian mathematicians visited the city and also added problems to the book. Hugo Steinhaus contributed the last problem on 31 May 1941, shortly before the German attack on the Soviet Union; this problem involved a question about the likely distribution of matches within a matchbox, a problem motivated by Banach's habit of chain smoking cigarettes. Continuity After World War II, an English translation annotated by Ulam was p
https://en.wikipedia.org/wiki/Berlin%20Papyrus%206619
The Berlin Papyrus 6619, simply called the Berlin Papyrus when the context makes it clear, is one of the primary sources of ancient Egyptian mathematics. One of the two mathematics problems on the Papyrus may suggest that the ancient Egyptians knew the Pythagorean theorem. Description, dating, and provenance The Berlin Papyrus 6619 is an ancient Egyptian papyrus document from the Middle Kingdom, second half of the 12th (c. 1990–1800 BC) or 13th Dynasty (c. 1800 BC – 1649 BC). The two readable fragments were published by Hans Schack-Schackenburg in 1900 and 1902. Connection to the Pythagorean theorem The Berlin Papyrus contains two problems, the first stated as "the area of a square of 100 is equal to that of two smaller squares. The side of one is ½ + ¼ the side of the other." The interest in the question may suggest some knowledge of the Pythagorean theorem, though the papyrus only shows a straightforward solution to a single second degree equation in one unknown. In modern terms, the simultaneous equations and reduce to the single equation in y: , giving the solution y = 8 and x = 6. See also List of ancient Egyptian papyri Papyrology Timeline of mathematics Egyptian fraction References External links Simultaneous equation examples from the Berlin papyrus Two algebra problems compared to RMP algebra Two suggested solutions Egyptian mathematics Papyri from ancient Egypt
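Spelling out the reduction the article refers to (the displayed equations did not survive extraction; this is the standard modern reading of the problem):

x = \left(\tfrac{1}{2} + \tfrac{1}{4}\right) y = \tfrac{3}{4}\,y,
\qquad x^2 + y^2 = 100
\;\Longrightarrow\;
\tfrac{9}{16}\,y^2 + y^2 = \tfrac{25}{16}\,y^2 = 100
\;\Longrightarrow\;
y^2 = 64,

so y = 8 and x = \tfrac{3}{4}\cdot 8 = 6.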
https://en.wikipedia.org/wiki/AP%20Calculus
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (including arc length in polar coordinates and calculating area) Arc length calculations using integration Integration by parts Improper integrals Differential equations for logistic growth Using partial fractions to integrate rational functions It is worth mentioning that the pass rate (score of 3 or higher) of AP Calculus BC is higher than AP Calculus AB. A possible explanation for this is that students who take AP Calculus BC are more prepared and advanced in math, leading to its higher pass rate compared to AP Calculus AB. The 5-rate is consistently over 40% (much higher than almost all the other AP exams), owing to the fact that many students who take AP Calculus BC may have taken AP Calculus AB the previous year. This gives them the advantage of just reviewing previously learned content, as well as having to learn less new content as only 2 additional units and a few additional lessons in previous units are taught. AB sub-score distribution AP Exam The College Board intention
https://en.wikipedia.org/wiki/Islam%20in%20Nicaragua
According to 2007 statistics released by the U.S. Department of State concerning Islam in Nicaragua, there are approximately 1,200 to 1,500 Muslims, mostly Sunnis who are resident aliens or naturalized citizens from Palestine, Libya, and Iran or natural-born Nicaraguan citizens born to both of the two groups. The Islamic Cultural Center in Managua serves as the primary salaat (prayer) center for Muslims in the city, with approximately 320 men attending on a regular basis. Muslims from Granada, Masaya, Leon, and Chinandega also travel to the Managua center for Friday prayers. Granada, Masaya, and Leon have smaller prayer centers in the homes of prominent local Muslims. In May 2007 the Sunni leader of the Managua prayer center was dismissed, due to the increase in Iranian influence in the Muslim community and was to be replaced by a Shi'a religious leader. By the end of the reporting period (May 2007) the Shi'a leader had not been identified. Background Early Immigration Muslim immigration occurred in moderate numbers in Nicaragua in the late 19th century. The majority were Palestinian Arab Muslims; the immigration constituted one of the largest waves of immigration to Central America. Although the exact number of Palestinians is not available, Guzmán writes "it is possible that from the end of the nineteenth century until 1917, when the Ottoman Empire entered its final decline, during World War I, 40 Palestinian families arrived in Nicaragua". This early wave of immigrants quickly lost their Islamic roots and blended into the local population, often by adopting a Christian heritage due to intermarrying and government pressure. At different points during the 1890s to the 1940s Nicaragua, and many other Latin American countries, established laws or issued ordinances that restricted the entry of Arabs, forbade the stay of Arabs already present in the country and curtailed the expansion of their commercial activities. Immigration: 1960s through 2000 The second group of immigrants in the 1960s was better educated, but not any more oriented towards Islam than the first. This group was affected by two major events in Nicaragua: the 1972 Nicaragua earthquake, and the Nicaraguan Revolution in 1979. At that time, many of the former Palestinians immigrated to North America or returned to Palestine. Those that stayed suffered greatly and their families were further assimilated into Christianity. The latest and smallest group of émigrés was in the early 1990s. Many of these were immigrants returning to Nicaragua who had since become more aware of their Muslim heritage from exposure in North America or Palestine. These immigrants also possessed a stronger Islamic identity than previous groups, enabling an Islamic reawakening by the community. By 2000 it was estimated that there were 500 families of Palestine Arabs and Palestinian descendants in Nicaragua. The Palestinians that arrived in Nicaragua were mostly Christians and a small number of Muslims, the
https://en.wikipedia.org/wiki/Victor%20Zalgaller
Victor (Viktor) Abramovich Zalgaller (; ; 25 December 1920 – 2 October 2020) was a Russian-Israeli mathematician in the fields of geometry and optimization. He is best known for the results he achieved on convex polyhedra, linear and dynamic programming, isoperimetry, and differential geometry. Biography Zalgaller was born in Parfino, Novgorod Governorate on 25 December 1920. In 1936, he was one of the winners of the Leningrad Mathematics Olympiads for high school students. He started his studies at the Leningrad State University, however, World War II intervened in 1941, and Zalgaller joined the Red Army. He took part in the defence of Leningrad, and in 1945 marched into Germany. He worked as a teacher at the Saint Petersburg Lyceum 239, and received his 1963 doctoral dissertation on polyhedra with the aid of his high school students who wrote the computer programs for the calculation. Zalgaller did his early work under direction of A. D. Alexandrov and Leonid Kantorovich. He wrote joint monographs with both of them. His later monograph Geometric Inequalities (joint with Yu. Burago) is still the main reference in the field. Zalgaller lived in Saint Petersburg most of his life, having studied and worked at the Leningrad State University and the Steklov Institute of Mathematics (Saint Petersburg branch). In 1999, he immigrated to Israel. Zalgaller died on 2 October 2020 at the age of 99. References V. A. Aleksandrov, et al. Viktor Abramovich Zalgaller (on his 80th birthday), Russian Mathematical Surveys, Vol. 56 (2001), 1013–1014 (see here for a Russian version). Yu. D. Burago, et al. Viktor Abramovich Zalgaller (on his 80th birthday), J. Math. Sci. (N. Y.) J. Math. Sci. (N.Y.) Vol. 119 (2004), 129–132 (see here for a Russian version). M. Z. Solomyak, A few words about Viktor Abramovich Zalgaller, J. Math. Sci. (N.Y.) Vol. 119 (2004), 138–140. S. S. Kutateladze, A Tribute to the Philanthropist and Geometer. List of papers of V. A. Zalgaller, available here (mostly in Russian). External links Intrinsic Geometry of Surfaces — book by A.D Alexandrov and V.A. Zalgaller (AMS Online Book, originally translated in 1967). Personal war memoir (in Russian). Lecture read in 1999 in St.Petersburg, Russia (video, in Russian) 1920 births 2020 deaths 20th-century Russian mathematicians 21st-century Russian mathematicians Differential geometers Israeli Jews Israeli mathematicians People from Parfinsky District Russian emigrants to Israel Russian Jews Saint Petersburg State University alumni Soviet mathematicians
https://en.wikipedia.org/wiki/Catamorphism
In category theory, the concept of catamorphism (from the Ancient Greek: "downwards" and "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra. In functional programming, catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras. The dual concept is that of anamorphism that generalize unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism. Definition Consider an initial -algebra for some endofunctor of some category into itself. Here is a morphism from to . Since it is initial, we know that whenever is another -algebra, i.e. a morphism from to , there is a unique homomorphism from to . By the definition of the category of -algebra, this corresponds to a morphism from to , conventionally also denoted , such that . In the context of -algebra, the uniquely specified morphism from the initial object is denoted by and hence characterized by the following relationship: Terminology and history Another notation found in the literature is . The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al. One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire”, by Erik Meijer et al., which was in the context of the Squiggol formalism. The general categorical definition was given by Grant Malcolm. Examples We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language. Iteration Iteration-step prescriptions lead to natural numbers as initial object. Consider the functor fmaybe mapping a data type b to a data type fmaybe b, which contains a copy of each term from b as well as one additional term Nothing (in Haskell, this is what Maybe does). This can be encoded using one term and one function. So let an instance of a StepAlgebra also include a function from fmaybe b to b, which maps Nothing to a fixed term nil of b, and where the actions on the copied terms will be called next. type StepAlgebra b = (b, b->b) -- the algebras, which we encode as pairs (nil, next) data Nat = Zero | Succ Nat -- which is the initial algebra for the functor described above foldSteps :: StepAlgebra b -> (Nat -> b) -- the catamorphisms map from Nat to b foldSteps (nil, next) Zero = nil foldSteps (nil, next) (Succ nat) = next $ foldSteps (nil, next) nat As a silly example, consider the algebra on strings encoded as ("go!", \s -> "wait.. " ++ s), for which Nothing is mapped to "go!" and otherwise "wait.. " is prepended. As (Succ . Succ . Succ . Succ $ Zero) denotes the number four in Nat, the following will evaluate to "wait.. wait.. wait.. wait.. go!": foldSteps ("go!", \s -> "wait.. " ++ s) (Succ . Succ . Succ . Succ $ Zero). We can
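The same idea can be rendered in Python (an illustrative analogue of the Haskell foldSteps above, not part of the original article): the catamorphism out of the initial algebra of Peano naturals is completely determined by a step algebra (nil, next).

from dataclasses import dataclass
from typing import Callable, TypeVar, Union

B = TypeVar("B")

@dataclass
class Zero:
    pass

@dataclass
class Succ:
    pred: "Nat"

Nat = Union[Zero, Succ]

def fold_steps(nil: B, nxt: Callable[[B], B], n: Nat) -> B:
    """Catamorphism: the unique map Nat -> B determined by (nil, nxt)."""
    return nil if isinstance(n, Zero) else nxt(fold_steps(nil, nxt, n.pred))

four = Succ(Succ(Succ(Succ(Zero()))))
print(fold_steps("go!", lambda s: "wait.. " + s, four))
# 'wait.. wait.. wait.. wait.. go!'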
https://en.wikipedia.org/wiki/Holbrook%20Working
Holbrook Working (February 5, 1895 – October 5, 1985) was an American professor of economics and statistics at Stanford University's Food Research Institute known for his contributions on hedging, on the theory of futures prices, on an early theory of market maker behavior, and on the theory of storage (including the Working curve which plots the difference between short term and long term grain futures prices against current inventory). Biography He was born in Fort Collins, Colorado, on February 5, 1895. Working earned his Ph.D. in Agricultural Economics from the University of Wisconsin–Madison in 1921. He taught at Cornell University and the University of Minnesota before he joined Stanford's Food Research Institute in 1925. His younger brother Elmer Working made a major contribution on the identification problem for demand curves in econometrics, with which Holbrook Working was also involved. Working disagreed with Keynes's backwardation theory of futures prices, which argued that short hedgers (farmers) drive down futures prices because of their demand for price insurance. Working argued that there could be hedgers on both sides of the market and that hedging was essentially not a risk reduction technique, but "speculation in the basis" which allows informed traders and commodity dealers to profit from their knowledge of future changes in the difference between futures and spot prices. He was a founding member of the Econometric Society and was elected a Fellow of the American Agricultural Economics Association, the American Statistical Association, and the American Association for the Advancement of Science. In 1981 he was awarded the Wilks Memorial Award by the American Statistical Association. He died on October 5, 1985, in Santa Clara, California. See also Working–Hotelling procedure Notable papers References 1895 births 1985 deaths 20th-century American economists Cornell University faculty Fellows of the American Statistical Association Stanford University faculty Fellows of the Econometric Society
https://en.wikipedia.org/wiki/D50%20%28radiotherapy%29
D50 in medicine is the half-maximal dose: the dose that produces 50% of the maximum response. It may specifically refer to the radiation dose required to achieve a 50% tumor control probability. See also Median lethal dose (LD50), the dose required to kill half the members of a tested population after a specified test duration. References Radiation therapy
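A small numeric sketch (the sigmoid Emax/Hill model, the exponent and the particular D50 value used here are illustrative assumptions, not from the article) showing that the half-maximal dose produces half of the maximal response:

import numpy as np

def response(dose, emax=1.0, d50=2.0, n=3.0):
    """Assumed Hill-type dose-response curve."""
    return emax * dose**n / (d50**n + dose**n)

print(response(2.0))                 # 0.5, i.e. half of emax, by definition of D50

# Recovering D50 numerically from sampled curve values:
doses = np.linspace(0.01, 10, 1000)
resp = response(doses)
print(doses[np.argmin(np.abs(resp - 0.5))])   # ~2.0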
https://en.wikipedia.org/wiki/Urchin%20%28software%29
Urchin was a web statistics analysis program that was developed by Urchin Software Corporation. Urchin analyzed web server log file content and displayed the traffic information on that website based upon the log data. Sales of Urchin products ended on March 28, 2012. Urchin software could be run in two different data collection modes: log file analyzer or hybrid. As a log file analyzer, Urchin processed web server log files in a variety of log file formats. Custom file formats could also be defined. As a hybrid, Urchin combined page tags with log file data to eradicate the limitations of each data collection method in isolation. The result was more accurate web visitor data. Urchin became one of the more popular solutions for website traffic analysis, particularly with ISPs and web hosting providers. This was largely due to its scalability in performance and its pricing model. Urchin Software Corp. was acquired by Google in April 2005, forming Google Analytics. In April 2008, Google released Urchin 6. In February 2009, Google released Urchin 6.5, integrating AdWords. Urchin 7 was released in September 2010 and included 64-bit support, a new UI, and event tracking, among other features. See also UTM parameters List of web analytics software References External links Google software Web analytics Discontinued Google services Web log analysis software
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20gamma%20function
The gamma function is an important special function in mathematics. Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations. Integers and half-integers For positive integer arguments, the gamma function coincides with the factorial. That is, and hence and so on. For non-positive integers, the gamma function is not defined. For positive half-integers, the function values are given exactly by or equivalently, for non-negative integer values of : where denotes the double factorial. In particular, {| |- | | | | |- | | | | |- | | | | |- | | | | |} and by means of the reflection formula, {| |- | | | | |- | | | | |- | | | | |} General rational argument In analogy with the half-integer formula, where denotes the th multifactorial of . Numerically, . As tends to infinity, where is the Euler–Mascheroni constant and denotes asymptotic equivalence. It is unknown whether these constants are transcendental in general, but and were shown to be transcendental by G. V. Chudnovsky. has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that , , and are algebraically independent. The number is related to the lemniscate constant by and it has been conjectured by Gramain that where is the Masser–Gramain constant , although numerical work by Melquiond et al. indicates that this conjecture is false. Borwein and Zucker have found that can be expressed algebraically in terms of , , , , and where is a complete elliptic integral of the first kind. This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example: No similar relations are known for or other denominators. In particular, where AGM() is the arithmetic–geometric mean, we have Other formulas include the infinite products and where is the Glaisher–Kinkelin constant and is Catalan's constant. The following two representations for were given by I. Mező and where and are two of the Jacobi theta functions. Certain values of the gamma function can also be written in terms of the hypergeometric function. For instance, and however it is an open question whether this is possible for all rational inputs to the gamma function. Products Some product identities include: In general: From those products can be deduced other values, for example, from the former equations for , and , can be deduced: Other rational relations include and many more relations for where the denominator d divides 24 or 60. Gamma quotients with algebraic values must be "poised" in the sense that the sum of arguments is the same (modulo 1) for the denominator and the numerator. A more sophisticat
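The half-integer values and the AGM route to Gamma(1/4) are easy to check numerically. In the sketch below the identity Gamma(1/4) = sqrt(2*sqrt(2*pi) * lemniscate constant), with the lemniscate constant equal to pi/agm(1, sqrt(2)), is stated from memory rather than quoted from the article:

from math import gamma, pi, sqrt

print(gamma(0.5), sqrt(pi))             # both ~1.7724538509055159
print(gamma(2.5), 0.75 * sqrt(pi))      # both ~1.3293403881791368

def agm(a, b):
    # arithmetic-geometric mean; converges quadratically
    for _ in range(20):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

lemniscate = pi / agm(1.0, sqrt(2.0))          # ~2.6220575542921198
print(sqrt(2 * sqrt(2 * pi) * lemniscate))      # ~3.6256099082
print(gamma(0.25))                              # ~3.6256099082219083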
https://en.wikipedia.org/wiki/Interval%20vector
In musical set theory, an interval vector is an array of natural numbers which summarize the intervals present in a set of pitch classes. (That is, a set of pitches where octaves are disregarded.) Other names include: ic vector (or interval-class vector), PIC vector (or pitch-class interval vector) and APIC vector (or absolute pitch-class interval vector, which Michiel Schuijer states is more proper.) While primarily an analytic tool, interval vectors can also be useful for composers, as they quickly show the sound qualities that are created by different collections of pitch class. That is, sets with high concentrations of conventionally dissonant intervals (i.e., seconds and sevenths) sound more dissonant, while sets with higher numbers of conventionally consonant intervals (i.e., thirds and sixths) sound more consonant. While the actual perception of consonance and dissonance involves many contextual factors, such as register, an interval vector can nevertheless be a helpful tool. Definition In twelve-tone equal temperament, an interval vector has six digits, with each digit representing the number of times an interval class appears in the set. Because interval classes are used, the interval vector for a given set remains the same, regardless of the set's permutation or vertical arrangement. The interval classes designated by each digit ascend from left to right. That is: minor seconds/major sevenths (1 or 11 semitones) major seconds/minor sevenths (2 or 10 semitones) minor thirds/major sixths (3 or 9 semitones) major thirds/minor sixths (4 or 8 semitones) perfect fourths/perfect fifths (5 or 7 semitones) tritones (6 semitones) (The tritone is inversionally equivalent to itself.) Interval class 0, representing unisons and octaves, is omitted. In his 1960 book, The Harmonic Materials of Modern Music, Howard Hanson introduced a monomial method of notation for this concept, which he termed intervallic content: pemdnc.sbdatf for what would now be written . The modern notation, introduced by Allen Forte, has considerable advantages and is extendable to any equal division of the octave. A scale whose interval vector has six unique digits is said to have the deep scale property. The major scale and its modes have this property. For a practical example, the interval vector for a C major triad (3-11B) in the root position, {C E G} (), is . This means that the set has one major third or minor sixth (i.e. from C to E, or E to C), one minor third or major sixth (i.e. from E to G, or G to E), and one perfect fifth or perfect fourth (i.e. from C to G, or G to C). As the interval vector does not change with transposition or inversion, it belongs to the entire set class, meaning that is the vector of all major (and minor) triads. Some interval vectors correspond to more than one sets that cannot be transposed or inverted to produce the other. (These are called Z-related sets, explained below). For a set of n pitch classes, the sum of all the
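A short sketch (illustrative code, not from the article) that computes the interval-class vector of a pitch-class set in twelve-tone equal temperament and reproduces the C-major-triad vector quoted above:

from itertools import combinations

def interval_vector(pcs):
    """Interval-class vector of a set of pitch classes (integers mod 12)."""
    vec = [0] * 6
    for a, b in combinations(sorted(set(p % 12 for p in pcs)), 2):
        ic = min((a - b) % 12, (b - a) % 12)   # interval class 1..6
        vec[ic - 1] += 1
    return vec

print(interval_vector([0, 4, 7]))               # C major triad -> [0, 0, 1, 1, 1, 0]
print(interval_vector([0, 2, 4, 5, 7, 9, 11]))  # major scale   -> [2, 5, 4, 3, 6, 1]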
https://en.wikipedia.org/wiki/Jimena%20de%20la%20Frontera
Jimena de la Frontera is a historic town and municipality located in the province of Cádiz, Spain. According to estimates made by the National Statistics Institute of Spain (INE), the municipality has a population of 6,707 inhabitants as of 2020. The municipality contains three major towns, Jimena de la Frontera, Los Ángeles and San Pablo de Buceite. Other towns include Montenegral Alto and Marchenilla. It is situated in the eastern part of the province, on the (San Roque-Ronda) road. It is located near Málaga, practically being the border between the provinces of Málaga and Cádiz. Its location between the Serranía de Ronda and the Bay of Algeciras preserves one of the most important Mediterranean forest spots in southern Europe: the Alcornocales Natural Park. Almost two thirds of the municipality belong to the park. History Origins The existence of caves and natural shelters with abundant remains and cave paintings throughout the Campo de Gibraltar indicates the existence of human settlements that date back to the Palaeolithic. Jimena de la Frontera is no exception, with the paintings of Laja Alta, with unique maritime scenes from the Bronze Age in the Iberian Peninsula. The ancient Phoenician city of Oba, known for its minting of coins in the Libyan-Phoenician alphabet, is usually identified with Jimena. In the castle, epigraphs have been found with the text: res publica Obensis. This name was used during Roman times. During this period, Jimena flourished as a commercial and strategic center. According to local tradition, the great-grandfather of Mark Antony was born here. The location of the town, sheltered by hills but reasonably close to the Strait of Gibraltar, meant that its strategic functionality was exploited by the different peoples that have populated it. Thus, after the fall of the Roman Empire, the site served as a defensive post over the Strait of Gibraltar for the Visigoths, who lost it to Byzantine hands in the 6th century. According to a local legend, the great-grandfather of Mark Antony was born here. The arrival of the Muslims in the 8th century did not alter this situation. The conquerors carried out a series of actions to reinforce the enclave, already called Xemina (from which the Christian name of Ximena and later Jimena would derive), building a new fortification. The town was in the hands of the Marinids, until 1319, when Ismail I gave it, along with other cities, to the Nasrid kingdom of Granada in exchange for help against Christian advances. After the Reconquista It remained at the frontier position of the Nasrid kingdom (hence its name of de la Frontera) until 1431, when it was conquered during the Reconquista by Pedro García de Herrera, Marshal of Castile, under the reign of Juan II of Castile, who took the town on March 11. Its border situation was not stable, since it made it change hands between Muslims and Christians during the 15th century on some occasions. In 1451 it returned to Nasrid power, until i
https://en.wikipedia.org/wiki/Andrew%20M.%20Gleason
Andrew Mattei Gleason (November 4, 1921October 17, 2008) was an American mathematician who made fundamental contributions to widely varied areas of mathematics, including the solution of Hilbert's fifth problem, and was a leader in reform and innovation in teaching at all levels.<ref name="mactutor"></ref> Gleason's theorem in quantum logic and the Greenwood–Gleason graph, an important example in Ramsey theory, are named for him. As a young World War II naval officer, Gleason broke German and Japanese military codes. After the war he spent his entire academic career at Harvard University, from which he retired in 1992. His numerous academic and scholarly leadership posts included chairmanship of the Harvard Mathematics Department and the Harvard Society of Fellows, and presidency of the American Mathematical Society. He continued to advise the United States government on cryptographic security, and the Commonwealth of Massachusetts on education for children, almost until the end of his life. Gleason won the Newcomb Cleveland Prize in 1952 and the Gung–Hu Distinguished Service Award of the American Mathematical Society in 1996. He was a member of the National Academy of Sciences and of the American Philosophical Society, and held the Hollis Chair of Mathematics and Natural Philosophy at Harvard. He was fond of saying that proofs "really aren't there to convince you that something is truethey're there to show you why it is true." The Notices of the American Mathematical Society called him "one of the quiet giants of twentieth-century mathematics, the consummate professor dedicated to scholarship, teaching, and service in equal measure." Biography Gleason was born in Fresno, California, the youngest of three children; his father Henry Gleason was a botanist and a member of the Mayflower Society, and his mother was the daughter of Swiss-American winemaker Andrew Mattei. His older brother Henry Jr. became a linguist. He grew up in Bronxville, New York, where his father was the curator of the New York Botanical Garden.<ref name="mmp"> . </ref> After briefly attending Berkeley High School (Berkeley, California) he graduated from Roosevelt High School in Yonkers, winning a scholarship to Yale University. Though Gleason's mathematics education had gone only so far as some self-taught calculus, Yale mathematician William Raymond Longley urged him to try a course in mechanics normally intended for juniors. One month later he enrolled in a differential equations course ("mostly full of seniors") as well. When Einar Hille temporarily replaced the regular instructor, Gleason found Hille's style "unbelievably different ... He had a view of mathematics that was just vastly different ... That was a very important experience for me. So after that I took a lot of courses from Hille" including, in his sophomore year, graduate-level real analysis. "Starting with that course with Hille, I began to have some sense of what mathematics is about." While at Yale
https://en.wikipedia.org/wiki/Oren%20Patashnik
Oren Patashnik (born 1954) is an American computer scientist. He is notable for co-creating BibTeX, and co-writing Concrete Mathematics: A Foundation for Computer Science. He is a researcher at the Center for Communications Research, La Jolla, and lives nearby in San Diego. Oren and his wife Amy have three children, Josh, Ariel, and Jeremy. History Oren Patashnik graduated from Yale University in 1976, and later became a doctoral student in computer science at Stanford University, where his research was supervised by Donald Knuth. While working at Bell Labs in 1980, Patashnik proved that Qubic can always be won by the first player. Using 1500 hours of computer time, Patashnik's proof is a notable example of a computer-assisted proof. In 1985, Patashnik created the bibliography-system, BibTeX, in collaboration with Leslie Lamport, the creator of LaTeX. LaTeX is a system and programming language for formatting documents, which is especially designed for mathematical documents. BibTeX is a widely used bibliography-formatting tool for LaTeX. In 1988, Patashnik assisted Ronald Graham and Donald Knuth in writing Concrete Mathematics: A Foundation for Computer Science, an important mathematical publication and college textbook. In 1990, he got his doctorate in computer science. His thesis paper was about "Optimal Circuit Segmentation for Pseudo-Exhaustive Testing" . After the 2003 Cedar Fire destroyed 60% of the houses in his immediate neighborhood, his statistical study showed that houses with a wood-shake shingle roof did very badly, but surprisingly, so did houses with a Spanish-style, curved-red-tile roof. Notes References (PDF) "How to Win at Tic-Tac-Toe" (Mathellaneous, July 2005, University of Melbourne) - 11-page article with a section relating Patashnik's effort on Qubic Credits of Concrete Mathematics 1954 births Living people American computer scientists Jewish American scientists Jewish scientists Yale University alumni Timothy Dwight College alumni Stanford University alumni BibTeX
https://en.wikipedia.org/wiki/Felix%20Otto%20%28mathematician%29
Felix Otto (born 19 May 1966) is a German mathematician. Biography He studied mathematics at the University of Bonn, finishing his PhD thesis in 1993 under the supervision of Stephan Luckhaus. After postdoctoral studies at the Courant Institute of Mathematical Sciences of New York University and at Carnegie Mellon University, in 1997 he became a professor at the University of California, Santa Barbara. From 1999 to 2010 he was professor for applied mathematics at the University of Bonn, and currently serves as one of the directors of the Max Planck Institute for Mathematics in the Sciences, Leipzig. Honours In 2006, he received the Gottfried Wilhelm Leibniz Prize of the Deutsche Forschungsgemeinschaft, which is the highest honour awarded in German research. In 2009, he was awarded a Gauss Lecture by the German Mathematical Society. In 2008 he became a member of the German Academy of Sciences Leopoldina. References DFG portrait 1966 births Living people 20th-century German mathematicians University of Bonn alumni Courant Institute of Mathematical Sciences alumni Carnegie Mellon University alumni Academic staff of the University of Bonn University of California, Santa Barbara faculty Studienstiftung alumni Gottfried Wilhelm Leibniz Prize winners 21st-century German mathematicians Members of the German National Academy of Sciences Leopoldina Max Planck Institute directors
https://en.wikipedia.org/wiki/Auxiliary%20field
In physics, and especially quantum field theory, an auxiliary field is one whose equations of motion admit a single solution. Therefore, the Lagrangian describing such a field contains an algebraic quadratic term and an arbitrary linear term, while it contains no kinetic terms (derivatives of the field): The equation of motion for is and the Lagrangian becomes Auxiliary fields generally do not propagate, and hence the content of any theory can remain unchanged in many circumstances by adding such fields by hand. If we have an initial Lagrangian describing a field , then the Lagrangian describing both fields is Therefore, auxiliary fields can be employed to cancel quadratic terms in in and linearize the action . Examples of auxiliary fields are the complex scalar field F in a chiral superfield, the real scalar field D in a vector superfield, the scalar field B in BRST and the field in the Hubbard–Stratonovich transformation. The quantum mechanical effect of adding an auxiliary field is the same as the classical, since the path integral over such a field is Gaussian. To wit: See also Bosonic field Fermionic field Composite Field References Quantum field theory
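The displayed formulas in this article did not survive extraction. For completeness, the standard textbook form of the construction is sketched below; the overall signs and normalization are assumptions of this sketch, not taken from the article:

\mathcal{L}_{\text{aux}} = \tfrac{1}{2}A^2 + A\, f(\varphi)
\quad\xrightarrow{\ \partial\mathcal{L}/\partial A \,=\, 0\ }\quad
A = -f(\varphi)
\quad\Longrightarrow\quad
\mathcal{L}_{\text{aux}} = -\tfrac{1}{2} f(\varphi)^2 ,

and the Gaussian integral behind the statement about the path integral is

\int_{-\infty}^{\infty} dA\; e^{-\frac{1}{2}A^2 + A f} \;=\; \sqrt{2\pi}\; e^{f^2/2}.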
https://en.wikipedia.org/wiki/Religion%20in%20North%20Korea
There are no known official statistics of religions in North Korea. Officially, North Korea is an atheist state, although its constitution guarantees free exercise of religion, provided that religious practice does not introduce foreign forces, harm the state, or harm the existing social order. Based on estimates from the late 1990s and the 2000s, North Korea is mostly irreligious, with the main religions being Shamanism and Chondoism. There are small communities of Buddhists and Christians. Chondoism is represented in politics by the Party of the Young Friends of the Heavenly Way, and is regarded by the government as Korea's "national religion" because of its identity as a minjung (popular) and "revolutionary anti-imperialist" movement. History Before 1945 In ancient times, most Koreans believed in their indigenous religion socially guided by mu (shamans). Buddhism was introduced from the Chinese Former Qin state in 372 to the northern Korean state of Goguryeo, and developed into distinctive Korean forms. At that time, the Korean peninsula was divided into three kingdoms: the aforementioned Goguryeo in the north, Baekje in the southwest, and Silla in the southeast. Buddhism reached Silla only in the 5th century, but it was made the state religion only in that kingdom in the year 552. In Goguryeo, the Korean indigenous religion remained dominant, while Buddhism became more widespread in Silla and Baekje (both areas comprehended in modern South Korea). In the following unified state of Goryeo (918–1392), that developed from Goguryeo incorporating the southern kingdoms, Buddhism flourished even becoming a political force. In the same period, the influence of Chinese Confucianism penetrated the country and led to the formation of Korean Confucianism that would have become the state ideology and religion of the following Joseon state. The Joseon kingdom (1392–1910), strictly Neo-Confucian, harshly suppressed Buddhism and Shamanism. Buddhist monasteries were destroyed and their number dropped from several hundreds to a mere thirty-six; Buddhism was eradicated from the life of towns as monks and nuns were prohibited from entering them and were marginalised to the mountains. These restrictions lasted until the 19th century. In this environment, Christianity began to rapidly gain foothold since the late 18th century, due to an intense missionary activity that was aided by the endorsement at first by the Silhak and Seohak intellectual parties, and then at the end of the following century by the king of Korea himself and the intellectual elite of the crumbling Joseon state, who were looking for a new social factor to invigorate the Korean nation. In the late 19th century, the Joseon state was politically and culturally collapsing. The intelligentsia was looking for solutions to invigorate and transform the nation. It was in this critical period that they came into contact with Western Protestant missionaries who offered a solution to the plight of Kor
https://en.wikipedia.org/wiki/Normal%20sequence
In mathematics, the term normal sequence has multiple meanings, depending on the area of specialty. In general, it is a sequence with "nice" properties. In set theory, a normal sequence is one that is continuous and strictly increasing. In probability theory, a normal number is a number whose representation is a normal sequence in every base: whichever base b is chosen (e.g. base 2, base 8, base 10), every finite string of digits occurs in the expansion with the asymptotic frequency expected of a uniformly random sequence, namely 1/b^k for a string of length k. References Thomas Jech. Set Theory, 3rd millennium ed., 2002, Springer Monographs in Mathematics, Springer.
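As a small empirical illustration (not from the article), the base-10 concatenation 1 2 3 4 ... is Champernowne's sequence, which is known to be normal in base 10; counting single digits in a long prefix already shows the frequencies settling near 1/10:

from collections import Counter

# Digit frequencies in the concatenation of 1, 2, ..., 100000.
digits = "".join(str(n) for n in range(1, 100001))
counts = Counter(digits)
total = len(digits)
print({d: round(counts[d] / total, 4) for d in "0123456789"})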
https://en.wikipedia.org/wiki/Scale%20factor%20%28computer%20science%29
In computer science, a scale factor is a number used as a multiplier to represent a number on a different scale, functioning similarly to an exponent in mathematics. A scale factor is used when a real-world set of numbers needs to be represented on a different scale in order to fit a specific number format. Although using a scale factor extends the range of representable values, it also decreases the precision, resulting in rounding error for certain calculations. Uses Certain number formats may be chosen for an application for convenience in programming, or because of certain advantages offered by the hardware for that number format. For instance, early processors did not natively support the IEEE floating-point standard for representing fractional values, so integers were used to store representations of the real world values by applying a scale factor to the real value. Similarly, because hardware arithmetic has a fixed width (commonly 16, 32, or 64 bits, depending on the data type), scale factors allow representation of larger numbers (by manually multiplying or dividing by the specified scale factor), though at the expense of precision. By necessity, this was done in software, since the hardware did not support fractional values. Scale factors are also used in floating-point numbers, and most commonly are powers of two. For example, the double-precision format sets aside 11 bits for the scaling factor (a binary exponent) and 53 bits for the significand, allowing various degrees of precision for representing different ranges of numbers, and expanding the range of representable numbers beyond what could be represented using 64 explicit bits (though at the cost of precision). As an example of where precision is lost, a 16-bit unsigned integer (uint16) can only hold a value as large as 65,535. If unsigned 16-bit integers are used to represent values from 0 to 131,070, then a scale factor of 1/2 would be introduced, such that the scaled values correspond exactly to the real-world even integers. As a consequence, for example, the number 3 cannot be represented, because a stored 1 represents a real-world 2, and a stored 2 represents a real-world 4; there are not enough bits available to avoid this error in this representation. Operations on scaled values Once the scaled representation of a real value is stored, the scaling can often be ignored until the value needs to come back into the "real world". For instance, adding two scaled values is just as valid as unscaling the values, adding the real values, and then scaling the result, and the former is much easier and faster. In either approach, though, the two added numbers must be scaled the same. For other operations, the scaling is very important. Multiplication, for instance, needs to take into account that both numbers are scaled. As an example, consider two real world values A and B. The real world multiplication of these real world values is: A * B = P. If they are instead represented with
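To make the scaled-arithmetic discussion concrete, here is a minimal fixed-point sketch in Python; the helper names and the power-of-two scale factor are illustrative assumptions, not something specified by the article.

# Real values are stored as integers after multiplying by a fixed scale factor.
SCALE = 256  # 2**8; an assumed, power-of-two choice

def to_scaled(real_value):
    # Store the nearest scaled integer; resolution finer than 1/SCALE is lost.
    return round(real_value * SCALE)

def to_real(scaled_value):
    return scaled_value / SCALE

def add_scaled(a, b):
    # Addition works directly, provided both operands use the same scale.
    return a + b

def mul_scaled(a, b):
    # a = A*SCALE and b = B*SCALE, so a*b = A*B*SCALE**2;
    # divide once by SCALE to return the product to the common scale.
    return (a * b) // SCALE

x, y = to_scaled(3.25), to_scaled(1.5)
print(to_real(add_scaled(x, y)))  # 4.75
print(to_real(mul_scaled(x, y)))  # 4.875

The multiplication helper shows the point made in the text: the scaling cannot simply be ignored there, because the raw integer product carries the scale factor twice.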
https://en.wikipedia.org/wiki/Teflic%20acid
Teflic acid is the chemical compound with the formula HOTeF5. This strong acid is related to orthotelluric acid, Te(OH)6. Teflic acid has a slightly distorted octahedral geometry. Preparation Teflic acid was accidentally discovered by Engelbrecht and Sladky. Their synthesis did not yield the anticipated telluryl fluoride , but a mixture of volatile telluric compounds, containing : (25%) Teflic acid can also be prepared from fluorosulfonic acid and barium tellurate: It is also the first hydrolysis product of tellurium hexafluoride: Teflates The conjugate base of teflic acid is called the teflate anion, OTeF5− (not to be confused with triflate). Many teflates are known, one example being , which can be pyrolysed to give the acid anhydride . The teflate anion is known to resist oxidation. This property has allowed the preparation of several highly unusual species such as the hexateflates (in which M = As, Sb, Bi). Xenon forms the cation . References Further reading R.B. King; Inorganic Chemistry of Main Group Elements, VCH Publishers, New York, 1994. Oxohalides Tellurium(VI) compounds Substances discovered in the 1960s
https://en.wikipedia.org/wiki/Oval%20%28projective%20plane%29
In projective geometry an oval is a point set in a plane that is defined by incidence properties. The standard examples are the nondegenerate conics. However, a conic is only defined in a pappian plane, whereas an oval may exist in any type of projective plane. In the literature, there are many criteria which imply that an oval is a conic, but there are many examples, both infinite and finite, of ovals in pappian planes which are not conics. As mentioned, in projective geometry an oval is defined by incidence properties, but in other areas, ovals may be defined to satisfy other criteria, for instance, in differential geometry by differentiability conditions in the real plane. The higher dimensional analog of an oval is an ovoid in a projective space. A generalization of the oval concept is an abstract oval, which is a structure that is not necessarily embedded in a projective plane. Indeed, there exist abstract ovals which cannot lie in any projective plane. Definition of an oval In a projective plane a set of points is called an oval if: any line meets the set in at most two points, and for any point of the set there exists exactly one tangent line through that point, i.e. exactly one line meeting the set in that point only. A line meeting the oval in no point is an exterior line (or passant), a line meeting it in exactly one point is a tangent line, and a line meeting it in two points is a secant line. For finite planes (i.e. the set of points is finite) we have a more convenient characterization: For a finite projective plane of order n (i.e. any line contains n + 1 points), a set of n + 1 points is an oval if and only if no three of its points are collinear (on a common line). A set of points in an affine plane satisfying the above definition is called an affine oval. An affine oval is always a projective oval in the projective closure (adding a line at infinity) of the underlying affine plane. An oval can also be considered as a special quadratic set. Examples Conic sections In any pappian projective plane there exist nondegenerate projective conic sections and any nondegenerate projective conic section is an oval. This statement can be verified by a straightforward calculation for any of the conics (such as the parabola or hyperbola). Non-degenerate conics are ovals with special properties: Pascal's Theorem and its various degenerations are valid. There are many projectivities which leave a conic invariant. Ovals, which are not conics in the real plane If one glues one half of a circle and a half of an ellipse smoothly together, one gets a non-conic oval. If one takes the inhomogeneous representation of a conic oval as a parabola plus a point at infinity and replaces the expression by , one gets an oval which is not a conic. If one takes the inhomogeneous representation of a conic oval as a hyperbola plus two points at infinity and replaces the expression by , one gets an oval which is not a conic. The implicit curve is a non-conic oval. in a finite plane of even order In a finite pappian plane of even order a nondegenerate conic has a nucleus (a single point through whic
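The finite-plane characterization above can be checked by brute force. The Python sketch below verifies it for the conic consisting of the points (1 : t : t^2) together with (0 : 0 : 1) in the projective plane of order 5; the construction and names are illustrative assumptions, not code from any referenced source.

from itertools import combinations

q = 5  # a prime, so arithmetic mod q is the field GF(5)

# Points of the conic X0*X2 = X1^2 in homogeneous coordinates over GF(q):
# the affine points (1, t, t^2) plus the single point (0, 0, 1) at infinity.
conic = [(1, t, (t * t) % q) for t in range(q)] + [(0, 0, 1)]

def collinear(p, r, s):
    # Three projective points are collinear iff the determinant of the
    # 3x3 matrix of their coordinates vanishes mod q.
    det = (p[0] * (r[1] * s[2] - r[2] * s[1])
           - p[1] * (r[0] * s[2] - r[2] * s[0])
           + p[2] * (r[0] * s[1] - r[1] * s[0]))
    return det % q == 0

assert len(conic) == q + 1
assert not any(collinear(*triple) for triple in combinations(conic, 3))
print("q + 1 points, no three collinear: this conic is an oval")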
https://en.wikipedia.org/wiki/Metre%20per%20hour
Metre per hour (American spelling: meter per hour) is a metric unit of both speed (scalar) and velocity (a vector quantity). Its symbol is m/h or m·h−1 (not to be confused with the imperial unit symbol mph). By definition, an object travelling at a speed of 1 m/h for an hour would move 1 metre. The term is rarely used, however, as the units of metres per second and kilometres per hour are considered sufficient for the majority of circumstances. Metres per hour can, however, be convenient for documenting extremely slow-moving objects. A garden snail, for instance, typically moves at a speed of up to 47 metres per hour. Conversions 3,600 m/h ≡ 1 m·s−1, the SI derived unit of speed, metre per second 1 m/h ≈ 0.00027778 m/s 1 m/h ≈ 0.00062137 mph ≈ 0.00091134 feet per second How to convert To convert from kilometers per hour to meters per hour, multiply the figure by 1,000 (hence the prefix kilo-, from the ancient Greek word for thousand). To convert from meters per second to meters per hour, multiply the figure by 3,600 (that is 60 * 60, i.e. 60 seconds for each of the 60 minutes). See also Orders of magnitude (speed) References Units of velocity
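As a quick illustration of the conversions above, here is a small Python sketch; the function names are arbitrary and only the conversion factors come from the text.

def kmh_to_mh(kmh):
    # 1 km/h = 1,000 m/h
    return kmh * 1_000

def ms_to_mh(ms):
    # 1 m/s = 3,600 m/h (60 seconds in each of 60 minutes)
    return ms * 3_600

print(kmh_to_mh(5))          # 5000
print(ms_to_mh(1))           # 3600
print(round(47 / 3_600, 5))  # about 0.01306 m/s for a 47 m/h garden snail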
https://en.wikipedia.org/wiki/Poisson%E2%80%93Lie%20group
In mathematics, a Poisson–Lie group is a Poisson manifold that is also a Lie group, with the group multiplication being compatible with the Poisson algebra structure on the manifold. The infinitesimal counterpart of a Poisson–Lie group is a Lie bialgebra, in analogy to Lie algebras as the infinitesimal counterparts of Lie groups. Many quantum groups are quantizations of the Poisson algebra of functions on a Poisson–Lie group. Definition A Poisson–Lie group is a Lie group equipped with a Poisson bracket for which the group multiplication is a Poisson map, where the manifold has been given the structure of a product Poisson manifold. Explicitly, the following identity must hold for a Poisson–Lie group: where and are real-valued, smooth functions on the Lie group, while and are elements of the Lie group. Here, denotes left-multiplication and denotes right-multiplication. If denotes the corresponding Poisson bivector on , the condition above can be equivalently stated as In particular, taking one obtains , or equivalently . Applying the Weinstein splitting theorem, one sees that a non-trivial Poisson–Lie structure is never symplectic, not even of constant rank. Poisson–Lie groups - Lie bialgebra correspondence The Lie algebra of a Poisson–Lie group has a natural structure of Lie coalgebra given by linearising the Poisson tensor at the identity, i.e. is a comultiplication. Moreover, the algebra and the coalgebra structures are compatible, i.e. they form a Lie bialgebra. The classical Lie group–Lie algebra correspondence, which gives an equivalence of categories between simply connected Lie groups and finite-dimensional Lie algebras, was extended by Drinfeld to an equivalence of categories between simply connected Poisson–Lie groups and finite-dimensional Lie bialgebras. Thanks to Drinfeld's theorem, any Poisson–Lie group has a dual Poisson–Lie group, defined as the Poisson–Lie group integrating the dual of its bialgebra. Homomorphisms A Poisson–Lie group homomorphism is defined to be both a Lie group homomorphism and a Poisson map. Although this is the "obvious" definition, neither left translations nor right translations are Poisson maps. Also, the inversion map is not a Poisson map either, although it is an anti-Poisson map: for any two smooth functions on . Examples Trivial examples Any trivial Poisson structure on a Lie group defines a Poisson–Lie group structure, whose bialgebra is simply the Lie algebra of the group with the trivial comultiplication. The dual of a Lie algebra, together with its linear Poisson structure, is an additive Poisson–Lie group. These two examples are dual to each other via Drinfeld's theorem, in the sense explained above. Other examples Let be any semisimple Lie group. Choose a maximal torus and a choice of positive roots. Let be the corresponding opposite Borel subgroups, so that and there is a natural projection . Then define a Lie group which is a subgroup of the product , and has the same dime
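The compatibility identity referred to in the definition did not survive extraction. As a hedged reconstruction (the standard multiplicativity condition from the literature, not quoted from the article itself), the condition can be written in terms of the Poisson bivector \pi as

\[
\pi(gh) = (L_g)_{*}\, \pi(h) + (R_h)_{*}\, \pi(g) \qquad \text{for all } g, h \in G,
\]

so that taking g = h = e gives \pi(e) = 0; in particular the Poisson structure of a Poisson–Lie group always vanishes at the identity.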
https://en.wikipedia.org/wiki/Denjoy%27s%20theorem%20on%20rotation%20number
In mathematics, the Denjoy theorem gives a sufficient condition for a diffeomorphism of the circle to be topologically conjugate to a diffeomorphism of a special kind, namely an irrational rotation. Denjoy proved the theorem in the course of his topological classification of homeomorphisms of the circle. He also gave an example of a C1 diffeomorphism with an irrational rotation number that is not conjugate to a rotation. Statement of the theorem Let ƒ: S1 → S1 be an orientation-preserving diffeomorphism of the circle whose rotation number θ = ρ(ƒ) is irrational. Assume that it has positive derivative ƒ′(x) > 0 that is a continuous function with bounded variation on the interval [0,1). Then ƒ is topologically conjugate to the irrational rotation by θ. Moreover, every orbit is dense and every nontrivial interval I of the circle intersects its forward image ƒ°q(I), for some q > 0 (this means that the non-wandering set of ƒ is the whole circle). Complements If ƒ is a C2 map, then the hypothesis on the derivative holds; however, for any irrational rotation number Denjoy constructed an example showing that this condition cannot be relaxed to C1, continuous differentiability of ƒ. Vladimir Arnold showed that the conjugating map need not be smooth, even for an analytic diffeomorphism of the circle. Later Michel Herman proved that nonetheless, the conjugating map of an analytic diffeomorphism is itself analytic for "most" rotation numbers, forming a set of full Lebesgue measure, namely, for those that satisfy a Diophantine condition (i.e. that are not abnormally well approximated by rational numbers). His results are even more general and specify the differentiability class of the conjugating map for Cr diffeomorphisms with any r ≥ 3. See also Circle map References Kornfeld, Sinai, Fomin, Ergodic theory. External links John Milnor, Denjoy Theorem Dynamical systems Diffeomorphisms Theorems in topology Theorems in dynamical systems
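For reference, the rotation number ρ(ƒ) used in the statement has the following standard definition, supplied here rather than taken from the article. If F : R → R is a lift of ƒ, then

\[
\rho(f) \;=\; \lim_{n \to \infty} \frac{F^{n}(x) - x}{n} \pmod{1},
\]

and the limit exists and is independent of the point x and of the chosen lift.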
https://en.wikipedia.org/wiki/Lebesgue%27s%20density%20theorem
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set A, the "density" of A is 0 or 1 at almost every point in Rn. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in an ε-neighborhood of a point x in Rn as the ratio μ(A ∩ Bε)/μ(Bε), where Bε denotes the closed ball of radius ε centered at x. Lebesgue's density theorem asserts that for almost every point x of A the density exists and is equal to 0 or 1. In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and μ(Rn ∖ A) > 0, then there are always points of Rn where the density is neither 0 nor 1. For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion. See also References Hallard T. Croft. Three lattice-point problems of Steinhaus. Quart. J. Math. Oxford (2), 33:71-83, 1982. Theorems in measure theory Integral calculus
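The displayed formula for the density was lost in extraction; the following LaTeX statement is a reconstruction from the definitions above, not a verbatim quotation. With B_\varepsilon(x) the closed ball of radius ε about x,

\[
\lim_{\varepsilon \to 0^{+}} \frac{\mu\bigl(A \cap B_{\varepsilon}(x)\bigr)}{\mu\bigl(B_{\varepsilon}(x)\bigr)} = 1 \quad \text{for almost every } x \in A,
\qquad
\lim_{\varepsilon \to 0^{+}} \frac{\mu\bigl(A \cap B_{\varepsilon}(x)\bigr)}{\mu\bigl(B_{\varepsilon}(x)\bigr)} = 0 \quad \text{for almost every } x \notin A.
\]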
https://en.wikipedia.org/wiki/Locally%20integrable%20function
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions. Definition Standard definition . Let be an open set in the Euclidean space and be a Lebesgue measurable function. If on is such that i.e. its Lebesgue integral is finite on all compact subsets of , then is called locally integrable. The set of all such functions is denoted by : where denotes the restriction of to the set . The classical definition of a locally integrable function involves only measure theoretic and topological concepts and can be carried over abstract to complex-valued functions on a topological measure space : however, since the most common application of such functions is to distribution theory on Euclidean spaces, all the definitions in this and the following sections deal explicitly only with this important case. An alternative definition . Let be an open set in the Euclidean space . Then a function such that for each test function is called locally integrable, and the set of such functions is denoted by . Here denotes the set of all infinitely differentiable functions with compact support contained in . This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space, developed by the Nicolas Bourbaki school: it is also the one adopted by and by . This "distribution theoretic" definition is equivalent to the standard one, as the following lemma proves: . A given function is locally integrable according to if and only if it is locally integrable according to , i.e. Proof of If part: Let be a test function. It is bounded by its supremum norm , measurable, and has a compact support, let's call it . Hence by . Only if part: Let be a compact subset of the open set . We will first construct a test function which majorises the indicator function of . The usual set distance between and the boundary is strictly greater than zero, i.e. hence it is possible to choose a real number such that (if is the empty set, take ). Let and denote the closed -neighborhood and -neighborhood of , respectively. They are likewise compact and satisfy Now use convolution to define the function by where is a mollifier constructed by using the standard positive symmetric one. Obviously is non-negative in the sense that , infinitely differentiable, and its support is contained in
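The displayed conditions in the two definitions above were lost in extraction; the following LaTeX rendering of the standard conditions is a reconstruction, not a quotation. A Lebesgue measurable function f on an open set \Omega \subseteq R^n is locally integrable when

\[
\int_{K} |f(x)|\, dx < \infty \qquad \text{for every compact subset } K \subset \Omega,
\]

and the equivalent "distribution theoretic" condition of the alternative definition is

\[
\int_{\Omega} |f(x)\, \varphi(x)|\, dx < \infty \qquad \text{for every test function } \varphi \in C_c^{\infty}(\Omega).
\]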
https://en.wikipedia.org/wiki/F.%20Burton%20Jones
Floyd Burton Jones (November 22, 1910, Cisco, Texas – April 15, 1999, Santa Barbara, California) was an American mathematician, active mainly in topology. Jones's father was a pharmacist and local politician in Shackelford County, Texas. As the valedictorian of his high school class, Jones earned a Regents' Scholarship to The University of Texas, intending to study law eventually. Jones soon discovered that he had a poor memory for dates and history, and thus changed his major to chemistry. Jones had the extraordinary good fortune to be taught freshman calculus by Robert Lee Moore, a founder of topology in the US, a legendary mathematics teacher, and the inventor of the Moore method. Jones went on to take more mathematics courses than required to be a chemist. He displayed sufficient ability in those courses that when he graduated in 1932, Moore invited him to do a Ph.D. in mathematics and offered him a part-time job as a math instructor. Moore later supervised Jones's Ph.D. dissertation, completed in 1935. Jones then taught at the University of Texas for the next 15 years except during 1942–44, when he was a research associate at the Harvard Underwater Sound Laboratory, helping develop scanning sonar for the Navy. In 1950, Jones moved to the University of North Carolina, where he eventually headed the Department of Mathematics. From 1962 until his 1978 retirement, he was at the University of California at Riverside, where he helped launch the doctoral program in mathematics. Over the course of his career, Jones published 67 articles and supervised 15 Ph.D. dissertations. In 1987, he endowed a Chair in Topology at the University of California at Riverside. Jones taught using a modified version of the Moore method. He believed in "learning by doing" but unlike Moore, he incorporated textbooks into his courses. In 1969, Louis McAuley wrote of "the magical powers of Jones in the classroom—a master who breathes the very life of mathematics into his students." References External links Rogers, James T. Jr, 2000, "F. Burton Jones (1910 - 1999) - An Appreciation." Includes a complete list of Jones's publications, and a brief summary of four of Jones's papers. 1910 births 1999 deaths 20th-century American mathematicians People from Cisco, Texas Topologists University of Texas at Austin College of Natural Sciences alumni University of Texas at Austin faculty University of North Carolina at Chapel Hill faculty University of California, Riverside faculty Harvard University staff Mathematicians from Texas
https://en.wikipedia.org/wiki/Basketball%20statistics
Statistics in basketball are kept to evaluate a player's or a team's performance. Examples Examples of basketball statistics include: GM, GP; GS: games played; games started PTS: points FGM, FGA, FG%: field goals made, attempted and percentage FTM, FTA, FT%: free throws made, attempted and percentage 3FGM, 3FGA, 3FG%: three-point field goals made, attempted and percentage REB, OREB, DREB: rebounds, offensive rebounds, defensive rebounds AST: assists STL: steals BLK: blocks TO: turnovers TD: triple double EFF: efficiency: NBA's efficiency rating: (PTS + REB + AST + STL + BLK − ((FGA − FGM) + (FTA − FTM) + TO)) PF: personal fouls MIN: minutes AST/TO: assist to turnover ratio PER: Player Efficiency Rating: John Hollinger's Player Efficiency Rating PIR: Performance Index Rating: Euroleague's and Eurocup's Performance Index Rating: (Points + Rebounds + Assists + Steals + Blocks + Fouls Drawn) − (Missed Field Goals + Missed Free Throws + Turnovers + Shots Rejected + Fouls Committed) Averages per game are denoted by *PG (e.g. BLKPG or BPG, STPG or SPG, APG, RPG and MPG). Sometimes a player's statistics are divided by minutes played and multiplied by 48 minutes (as if he had played the entire game), denoted by * per 48 min. or *48M. A player who makes double digits in a game in any two of the PTS, REB, AST, STL, and BLK statistics is said to make a double double; in three statistics, a triple double; in four statistics, a quadruple double. A quadruple double is extremely rare (and has only occurred four times in the NBA). There is also a 5x5, when a player records at least a 5 in each of the 5 statistics. The NBA also posts to the statistics section of its Web site a simple composite efficiency statistic, denoted EFF and derived by the formula, ((Points + Rebounds + Assists + Steals + Blocks) − ((Field Goals Attempted − Field Goals Made) + (Free Throws Attempted − Free Throws Made) + Turnovers)). While conveniently distilling most of a player's key statistics in one numerical score, the formula is not highly regarded by the statistics community, with the alternative Player Efficiency Rating developed by ESPN basketball statistician John Hollinger being more widely used to compare the overall efficiency of players. Tempo-free statistics Examples of tempo-free statistics include the following Pace: Possessions per game (typically ranges from 60 to 75) PPP: Points per possession, the points a team scores for each possession regardless of a team's pace TO%: Turnover percentage, the measure of how often a team loses possession of the ball before creating a scoring opportunity Fantasy leagues In fantasy basketball, statistics are used in a formula as the measurement of a player's performance. See also Player Efficiency Rating Efficiency (basketball) Similarity score Advanced statistics in basketball References External links Land of Basketball NBA statistics NBA & pro Basketball statistics Proballers.com Basketball-Reference.
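The NBA's EFF formula quoted above is simple enough to compute directly; the Python sketch below uses a hypothetical stat line rather than real box-score data.

def efficiency(pts, reb, ast, stl, blk, fga, fgm, fta, ftm, to):
    # NBA EFF: positive contributions minus missed shots and turnovers.
    return (pts + reb + ast + stl + blk) - ((fga - fgm) + (fta - ftm) + to)

# Hypothetical line: 28 points on 10-of-18 shooting, 6-of-7 free throws,
# 9 rebounds, 7 assists, 2 steals, 1 block, 3 turnovers.
print(efficiency(pts=28, reb=9, ast=7, stl=2, blk=1,
                 fga=18, fgm=10, fta=7, ftm=6, to=3))  # 35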
https://en.wikipedia.org/wiki/List%20of%20Chelsea%20F.C.%20records%20and%20statistics
Chelsea Football Club are an English professional association football club based in Fulham, London. The club was established in 1905 and plays its home games at Stamford Bridge. Domestically, Chelsea have won six top-flight titles, eight FA Cups and five League Cups. In international competitions, they have won two UEFA Champions League titles, two UEFA Europa Leagues, two UEFA Cup Winners' Cups, two UEFA Super Cups and one FIFA Club World Cup. They are the first English club to win three main UEFA club competitions and are the only London club to win the UEFA Champions League. The club's record appearance maker is Ron Harris, who made 795 appearances between 1961 and 1980. Frank Lampard is Chelsea's record goalscorer, scoring 211 goals in total. Honours The first major trophy won by Chelsea came in 1955, when the team became national champions after winning the 1954–55 First Division title. In the 2009–10 season, Chelsea won their first and only double after winning both the Premier League and the FA Cup. Upon winning the 2012–13 UEFA Europa League, Chelsea became the fourth club in history to have won the "European Treble" of European Cup/UEFA Champions League, European Cup Winners' Cup/UEFA Cup Winners' Cup, and UEFA Cup/UEFA Europa League. Their most recent success came in February 2022, when they won their first FIFA Club World Cup title. Players Appearances Most appearances in all competitions: 795, Ron Harris (1961–1980) Most league appearances: 655, Ron Harris (1961–1980) Most FA Cup appearances: 64, Ron Harris (1961–1980) Most League Cup appearances: 48, John Hollins (1963–1975 and 1983–1984) and Ron Harris (1961–1980) Most appearances in UEFA competitions: 124, John Terry (1998–2015) Most consecutive appearances: 167, John Hollins, 14 August 1971 – 25 September 1974 Most consecutive league appearances: 164, Frank Lampard, 13 October 2001 – 26 December 2005 Most appearances in a single season: 64, Juan Mata, Oscar and Fernando Torres, 2012–13 Most international caps while a Chelsea player: Frank Lampard, 104 for England First Chelsea player to play for England: George Hilsdon, 16 February 1907 First Chelsea player to play for England at a World Cup: Roy Bentley, 1950 World Cup, 25 June 1950 First foreign (non-UK) player: Nils Middelboe (Denmark), 15 November 1913 Youngest player: Ian Hamilton, 16 years 138 days, vs. Tottenham Hotspur, First Division, 18 March 1967 Oldest player: Mark Schwarzer, 41 years and 218 days, vs. Cardiff City, Premier League, 11 May 2014 First substitute: John Boyle, who replaced George Graham vs. Fulham, First Division, 28 August 1965 Most appearances Competitive matches only. 1 The "Other" column includes appearances in Charity/Community Shield, Football League play-offs, Full Members' Cup, UEFA Super Cup, and FIFA Club World Cup. Goalscorers Most goals in all competitions: 211, Frank Lampard (2001–2014) Most goals in a season: 43, Jimmy Greaves (First Division, 1960–61) Most goals in one match: 6, Ge
https://en.wikipedia.org/wiki/Ovoid%20%28projective%20geometry%29
In projective geometry an ovoid is a sphere-like point set (surface) in a projective space of dimension . Simple examples in a real projective space are hyperspheres (quadrics). The essential geometric properties of an ovoid are: any line intersects it in at most 2 points, the tangents at a point cover a hyperplane (and nothing more), and it contains no lines. Property 2) excludes degenerate cases (cones, ...). Property 3) excludes ruled surfaces (hyperboloids of one sheet, ...). An ovoid is the spatial analog of an oval in a projective plane. An ovoid is a special type of a quadratic set. Ovoids play an essential role in constructing examples of Möbius planes and higher dimensional Möbius geometries. Definition of an ovoid In a projective space of dimension a set of points is called an ovoid, if (1) Any line meets it in at most 2 points. A line meeting it in no point is called a passing (or exterior) line, a line meeting it in exactly one point is a tangent line, and a line meeting it in two points is a secant line. (2) At any point the tangent lines through that point cover a hyperplane, the tangent hyperplane (i.e., a projective subspace of dimension ). (3) It contains no lines. From the viewpoint of the hyperplane sections, an ovoid is a rather homogeneous object, because for an ovoid and a hyperplane which contains at least two points of it, the hyperplane section is an ovoid (or an oval, if the hyperplane is a plane) within that hyperplane. For finite projective spaces of dimension d ≥ 3 (i.e., the point set is finite, the space is pappian), the following result is true: If an ovoid exists in a finite projective space of dimension d ≥ 3, then d = 3. (In the finite case, ovoids exist only in 3-dimensional spaces.) In a finite projective space of order n (i.e. any line contains exactly n + 1 points) and dimension 3, a point set is an ovoid if and only if it has n^2 + 1 points and no three of them are collinear (on a common line). Replacing the word projective in the definition of an ovoid by affine gives the definition of an affine ovoid. If for a (projective) ovoid there is a suitable hyperplane not intersecting it, one can call this hyperplane the hyperplane at infinity and the ovoid becomes an affine ovoid in the corresponding affine space. Also, any affine ovoid can be considered a projective ovoid in the projective closure (adding a hyperplane at infinity) of the affine space. Examples In real projective space (inhomogeneous representation) (hypersphere) These two examples are quadrics and are projectively equivalent. Simple examples which are not quadrics can be obtained by the following constructions: (a) Glue one half of a hypersphere to a suitable hyperellipsoid in a smooth way. (b) In the first two examples replace the expression by . Remark: The real examples cannot be converted into the complex case (projective space over ). In a complex projective space of dimension there are no ovoidal quadrics, because in that case any non-degenerate quadric contains lines. But the following method guarantees many non-quadric ovoids: For an
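The inhomogeneous equations of the two quadric examples were lost in extraction. As a hedged reconstruction based on the standard presentation (not a quotation of the article), they can be taken to be

\[
x_1^2 + \dots + x_d^2 = 1 \quad \text{(hypersphere)} \qquad \text{and} \qquad x_d = x_1^2 + \dots + x_{d-1}^2,
\]

both of which define ovoids in real projective d-space and are projectively equivalent to one another.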
https://en.wikipedia.org/wiki/2-valued%20morphism
In mathematics, a 2-valued morphism is a homomorphism that sends a Boolean algebra B onto the two-element Boolean algebra 2 = {0,1}. It is essentially the same thing as an ultrafilter on B, and, in a different way, also the same thing as a maximal ideal of B. 2-valued morphisms have also been proposed as a tool for unifying the language of physics. 2-valued morphisms, ultrafilters and maximal ideals Suppose B is a Boolean algebra. If s : B → 2 is a 2-valued morphism, then the set of elements of B that are sent to 1 is an ultrafilter on B, and the set of elements of B that are sent to 0 is a maximal ideal of B. If U is an ultrafilter on B, then the complement of U is a maximal ideal of B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0. If M is a maximal ideal of B, then the complement of M is an ultrafilter on B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0. Physics If the elements of B are viewed as "propositions about some object", then a 2-valued morphism on B can be interpreted as representing a particular "state of that object", namely the one where the propositions of B which are mapped to 1 are true, and the propositions mapped to 0 are false. Since the morphism conserves the Boolean operators (negation, conjunction, etc.), the set of true propositions will not be inconsistent but will correspond to a particular maximal conjunction of propositions, denoting the (atomic) state. (The true propositions form an ultrafilter, the false propositions form a maximal ideal, as mentioned above.) The transition between two states s1 and s2 of B, represented by 2-valued morphisms, can then be represented by an automorphism f from B to B, such that s2 o f = s1. The possible states of different objects defined in this way can be conceived as representing potential events. The set of events can then be structured in the same way as invariance of causal structure, or local-to-global causal connections or even formal properties of global causal connections. The morphisms between (non-trivial) objects could be viewed as representing causal connections leading from one event to another one. For example, the morphism f above leads from event s1 to event s2. The sequences or "paths" of morphisms for which there is no inverse morphism could then be interpreted as defining horismotic or chronological precedence relations. These relations would then determine a temporal order, a topology, and possibly a metric. According to this proposal, "A minimal realization of such a relationally determined space-time structure can be found". In this model there are, however, no explicit distinctions. This is equivalent to a model where each object is characterized by only one distinction: (presence, absence) or (existence, non-existence) of an event. In this manner, "the 'arrows' or the 'structural language' can then be interpreted as morphism
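The correspondence between 2-valued morphisms, ultrafilters and maximal ideals can be made concrete on a small finite example; the Python sketch below uses the power-set algebra of a hypothetical three-element set and is purely illustrative.

from itertools import chain, combinations

universe = frozenset({"a", "b", "c"})

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

B = subsets(universe)  # the Boolean algebra: all subsets of the universe

# On this finite power-set algebra every 2-valued morphism is evaluation at
# a point: s_x(A) = 1 iff x lies in A.  The sets sent to 1 form an
# ultrafilter and the sets sent to 0 form the complementary maximal ideal.
for x in sorted(universe):
    s = {A: int(x in A) for A in B}
    ultrafilter = [A for A in B if s[A] == 1]
    maximal_ideal = [A for A in B if s[A] == 0]
    # Sanity checks: the map respects complement, meet and join.
    assert all(s[universe - A] == 1 - s[A] for A in B)
    assert all(s[A & C] == s[A] & s[C] and s[A | C] == s[A] | s[C]
               for A in B for C in B)
    print(x, len(ultrafilter), len(maximal_ideal))  # each prints: x 4 4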
https://en.wikipedia.org/wiki/Method%20of%20matched%20asymptotic%20expansions
In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt. Method overview In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small areas in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers, and as boundary or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain. An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the "inner solution," and the other is the "outer solution," named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained. A simple example Consider the boundary value problem where is a function of independent time variable , which ranges from 0 to 1, the boundary conditions are and , and is a small parameter, such that . Outer solution, valid for t = O(1) Since is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation , and hence find the solution to the problem Alternatively, consider that when and are both of size O(1), the four terms on the left hand side of the original equation are respectively of sizes , O(1), and O(1). The leading-order balance on this timescale, valid in the distinguished limit , is therefore given by the second and fourth terms, i.e., This has solution for some constant . Applying the boundary condition , we would have ; applying the boundary condition , we would have . It is therefore impossible to satisfy both boundary conditions, so is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we in
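The displayed equations of the worked example were stripped in extraction. As a hedged reconstruction, the problem below is the standard textbook instance and is assumed, not quoted, to match the article:

\[
\varepsilon y'' + (1 + \varepsilon)\, y' + y = 0, \qquad y(0) = 0, \quad y(1) = 1, \quad 0 < \varepsilon \ll 1.
\]

Setting \varepsilon = 0 gives the outer equation y' + y = 0, whose solution y_O(t) = A e^{-t} can satisfy only one of the two boundary conditions; imposing y(1) = 1 gives A = e, so y_O(t) = e^{1 - t}, and the unmet condition at t = 0 is recovered by the inner (boundary-layer) solution on the short scale t = O(\varepsilon).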
https://en.wikipedia.org/wiki/Aleksei%20Pogorelov
Aleksei Vasil'evich Pogorelov (March 2, 1919 – December 17, 2002) was a Soviet mathematician, a specialist in the field of convex and differential geometry, geometric PDEs and elastic shell theory, and the author of an innovative school textbook on geometry as well as university textbooks on analytical geometry, on differential geometry, and on the foundations of geometry. Pogorelov's uniqueness theorem and the Alexandrov–Pogorelov theorem are named after him. Biography Born March 3, 1919, in Korocha, Kursk Governorate (now Belgorod region) into a peasant family. In 1931, because of the collectivization, Pogorelov's parents escaped from the village to Kharkiv, where his father became a worker at the construction of the Kharkiv tractor plant. In 1935, A.V. Pogorelov won the first prize at the Mathematical Olympiad at Kharkiv State University. After high school graduation in 1937, he entered the mathematical department of Kharkiv State University. He was the best student in the department. In 1941, after the involvement of the Soviet Union in World War II, Aleksei Vasil'evich was sent for 11 months of study to the N.Y. Zhukovsky Air Force Engineering Academy. During his studies, the students were periodically sent for several months to the front as technicians for airplane service. After the Red Army's victory over the Nazi forces near Moscow, the training continued for a full term. After academy graduation, he worked at the N.Y. Zhukovsky Central Aero-hydrodynamic Institute (TsAGI) as a design engineer. The desire to complete his university education and specialize in geometry professionally led A.V. Pogorelov to Moscow State University. On the recommendation of I.G. Petrovsky (Dean of the Mechanics and Mathematics Department) and the well-known geometer V.F. Kagan, Aleksei Vasil'evich met A.D. Aleksandrov, the founder of the theory of non-smooth convex surfaces. There were many new questions concerning this theory. Aleksandr Danilovich proposed that A.V. Pogorelov give an answer to one of them. Within a year the problem was solved and A.V. Pogorelov was enrolled in the graduate school of the Mechanics and Mathematics Department of Moscow State University. Nikolai Efimov became his scientific advisor on topics of Aleksandrov's theory. After defending his Ph.D. thesis in 1947, he was demobilized and moved to Kharkiv, where he started to work at the Institute of Mathematics of Kharkov State University and the Geometry Department of the university. In 1948 he defended his doctoral thesis. In 1951 he became a Corresponding Member of the Academy of Sciences of Ukraine; in 1960 he became a Corresponding Member of the USSR Academy of Sciences (Division of Physical and Mathematical Sciences). In 1961 he became an Academician of the Academy of Sciences of Ukraine. In 1976, he became an Academician of the USSR Academy of Sciences (Mathematics Division). From 1950 to 1960 he was the Head of the Geometry Department at Kharkiv State University. From 1960 to 2000 he was the Head of the G