https://en.wikipedia.org/wiki/End%20mill
An end mill is a type of milling cutter, a cutting tool used in industrial milling applications. It is distinguished from the drill bit in its application, geometry, and manufacture. While a drill bit can only cut in the axial direction, most milling bits can cut in the radial direction. Not all mills can cut axially; those designed to cut axially are known as end mills. End mills are used in milling applications such as profile milling, tracer milling, face milling, and plunging. Types Several broad categories of end- and face-milling tools exist, such as center-cutting versus non-center-cutting (whether the mill can take plunging cuts); and categorization by number of flutes, by helix angle, by material, and by coating material. Each category may be further divided by specific application and special geometry. A very popular helix angle, especially for general cutting of metal materials, is 30°. For finishing end mills, it is common to see a tighter spiral, with helix angles of 45° or 60°. Straight flute end mills (helix angle 0°) are used in special applications, such as milling plastics or composites of epoxy and glass. Straight flute end mills were also used historically for metal cutting before the invention of the helical flute end mill by Carl A. Bergstrom of the Weldon Tool Company in 1918. There exist end mills with variable or pseudo-random flute helix angles and discontinuous flute geometries, which help break material into smaller pieces while cutting (improving chip evacuation and reducing the risk of jamming) and reduce tool engagement on large cuts. Some modern designs also include small features such as a corner chamfer or chipbreaker. While more expensive, due to their more complex design and manufacturing process, such end mills can last longer because of reduced wear and can improve productivity in high-speed machining (HSM) applications. It is becoming increasingly common for traditional solid end mills to be replaced by more cost-effective inserted cutting tools (which, though more expensive initially, reduce tool-change times and allow for the easy replacement of worn or broken cutting edges rather than the entire tool). Another advantage of indexable end mills (another term for tools with inserts) is their flexibility in the materials they can work on, rather than being specialized for a certain material type like more traditional end mills. For the time being, however, this generally applies only to larger-diameter end mills, at or above 3/4 of an inch. These end mills are generally used for roughing operations, whereas traditional end mills are still used for finishing and for work where a smaller diameter or a tighter tolerance is required; modular tooling introduces additional margins of error that can compound with each new component, whereas a solid tool can provide a smaller tolerance range for the same price level. End mills are sold in both imperial and metric shank and cutting diameters. In the USA, metric is readily available, bu
https://en.wikipedia.org/wiki/Tricategory
In mathematics, a tricategory is a kind of structure of category theory studied in higher-dimensional category theory. Whereas a weak 2-category is said to be a bicategory, a weak 3-category is said to be a tricategory (Gordon, Power & Street 1995; Baez & Dolan 1996; Leinster 1998). Tetracategories are the corresponding notion in dimension four. Dimensions beyond three are seen as increasingly significant to the relationship between knot theory and physics. John Baez, R. Gordon, A. J. Power and Ross Street have done much of the significant work with categories beyond bicategories thus far. See also Weak n-category References External links The Dimensional Ladder Branches of higher dimensional algebra Higher category theory
https://en.wikipedia.org/wiki/Khovanov%20homology
In mathematics, Khovanov homology is an oriented link invariant that arises as the cohomology of a cochain complex. It may be regarded as a categorification of the Jones polynomial. It was developed in the late 1990s by Mikhail Khovanov, then at the University of California, Davis, now at Columbia University. Overview To any link diagram D representing a link L, we assign the Khovanov bracket [D], a cochain complex of graded vector spaces. This is the analogue of the Kauffman bracket in the construction of the Jones polynomial. Next, we normalise [D] by a series of degree shifts (in the graded vector spaces) and height shifts (in the cochain complex) to obtain a new cochain complex C(D). The cohomology of this cochain complex turns out to be an invariant of L, and its graded Euler characteristic is the Jones polynomial of L. Definition This definition follows the formalism given in Dror Bar-Natan's 2002 paper. Let {l} denote the degree shift operation on graded vector spaces—that is, the homogeneous component in dimension m is shifted up to dimension m + l. Similarly, let [s] denote the height shift operation on cochain complexes—that is, the rth vector space or module in the complex is shifted along to the (r + s)th place, with all the differential maps being shifted accordingly. Let V be a graded vector space with one generator q of degree 1, and one generator q−1 of degree −1. Now take an arbitrary diagram D representing a link L. The axioms for the Khovanov bracket are as follows: [ø] = 0 → Z → 0, where ø denotes the empty link. [O D] = V ⊗ [D], where O denotes an unlinked trivial component. [D] = F(0 → [D0] → [D1]{1} → 0) In the third of these, F denotes the `flattening' operation, where a single complex is formed from a double complex by taking direct sums along the diagonals. Also, D0 denotes the `0-smoothing' of a chosen crossing in D, and D1 denotes the `1-smoothing', analogously to the skein relation for the Kauffman bracket. Next, we construct the `normalised' complex C(D) = [D][−n−]{n+ − 2n−}, where n− denotes the number of left-handed crossings in the chosen diagram for D, and n+ the number of right-handed crossings. The Khovanov homology of L is then defined as the cohomology H(L) of this complex C(D). It turns out that the Khovanov homology is indeed an invariant of L, and does not depend on the choice of diagram. The graded Euler characteristic of H(L) turns out to be the Jones polynomial of L. However, H(L) has been shown to contain more information about L than the Jones polynomial, but the exact details are not yet fully understood. In 2006 Dror Bar-Natan developed a computer program to calculate the Khovanov homology (or category) for any knot. Related theories One of the most interesting aspects of Khovanov's homology is that its exact sequences are formally similar to those arising in the Floer homology of 3-manifolds. Moreover, it has been used to produce another proof of a result first demonstrated using gau
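For reference, the relationship described above can be written in one common convention, in which the graded Euler characteristic of the normalised complex recovers the Jones polynomial; the precise grading and normalisation conventions vary between authors and are assumed here rather than taken from the excerpt:

```latex
C(D) \;=\; [D]\,[-n_-]\,\{\,n_+ - 2 n_-\,\},
\qquad
\chi_q\bigl(H(L)\bigr) \;=\; \sum_{i,j} (-1)^{i}\, q^{j}\, \dim H^{i,j}(L) \;=\; J(L)(q).
```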
https://en.wikipedia.org/wiki/Hermitian%20manifold
In mathematics, and more specifically in differential geometry, a Hermitian manifold is the complex analogue of a Riemannian manifold. More precisely, a Hermitian manifold is a complex manifold with a smoothly varying Hermitian inner product on each (holomorphic) tangent space. One can also define a Hermitian manifold as a real manifold with a Riemannian metric that preserves a complex structure. A complex structure is essentially an almost complex structure with an integrability condition, and this condition yields a unitary structure (U(n) structure) on the manifold. By dropping this condition, we get an almost Hermitian manifold. On any almost Hermitian manifold, we can introduce a fundamental 2-form (or cosymplectic structure) that depends only on the chosen metric and the almost complex structure. This form is always non-degenerate. With the extra integrability condition that it is closed (i.e., it is a symplectic form), we get an almost Kähler structure. If both the almost complex structure and the fundamental form are integrable, then we have a Kähler structure. Formal definition A Hermitian metric on a complex vector bundle E over a smooth manifold M is a smoothly varying positive-definite Hermitian form on each fiber. Such a metric can be viewed as a smooth global section h of the vector bundle such that for every point p in M, for all , in the fiber Ep and for all nonzero in Ep. A Hermitian manifold is a complex manifold with a Hermitian metric on its holomorphic tangent bundle. Likewise, an almost Hermitian manifold is an almost complex manifold with a Hermitian metric on its holomorphic tangent bundle. On a Hermitian manifold the metric can be written in local holomorphic coordinates (zα) as where are the components of a positive-definite Hermitian matrix. Riemannian metric and associated form A Hermitian metric h on an (almost) complex manifold M defines a Riemannian metric g on the underlying smooth manifold. The metric g is defined to be the real part of h: The form g is a symmetric bilinear form on TMC, the complexified tangent bundle. Since g is equal to its conjugate it is the complexification of a real form on TM. The symmetry and positive-definiteness of g on TM follow from the corresponding properties of h. In local holomorphic coordinates the metric g can be written One can also associate to h a complex differential form ω of degree (1,1). The form ω is defined as minus the imaginary part of h: Again since ω is equal to its conjugate it is the complexification of a real form on TM. The form ω is called variously the associated (1,1) form, the fundamental form, or the Hermitian form. In local holomorphic coordinates ω can be written It is clear from the coordinate representations that any one of the three forms , , and uniquely determine the other two. The Riemannian metric and associated (1,1) form are related by the almost complex structure as follows for all complex tangent vectors and . The Hermi
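To make the relations between h, g, and ω concrete, here is one common sign convention, written in local holomorphic coordinates z^α (the notation is chosen here and is not fixed by the excerpt):

```latex
g = \tfrac{1}{2}\,(h + \bar h), \qquad \omega = \tfrac{i}{2}\,(h - \bar h), \qquad h = g - i\omega,
\\[4pt]
g = \tfrac{1}{2}\, h_{\alpha\bar\beta}\,\bigl(dz^{\alpha} \otimes d\bar z^{\beta} + d\bar z^{\beta} \otimes dz^{\alpha}\bigr),
\qquad
\omega = \tfrac{i}{2}\, h_{\alpha\bar\beta}\, dz^{\alpha} \wedge d\bar z^{\beta},
\qquad
\omega(u, v) = g(Ju, v).
```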
https://en.wikipedia.org/wiki/Quaternion-K%C3%A4hler%20manifold
In differential geometry, a quaternion-Kähler manifold (or quaternionic Kähler manifold) is a Riemannian 4n-manifold whose Riemannian holonomy group is a subgroup of Sp(n)·Sp(1) for some n ≥ 2. Here Sp(n) is the subgroup of SO(4n) consisting of those orthogonal transformations that arise by left-multiplication by some quaternionic matrix, while the group Sp(1) of unit-length quaternions instead acts on quaternionic n-space by right scalar multiplication. The Lie group generated by combining these actions is then abstractly isomorphic to Sp(n)·Sp(1) = (Sp(n) × Sp(1))/{±1}. Although the above loose version of the definition includes hyperkähler manifolds, the standard convention of excluding these will be followed by also requiring that the scalar curvature be non-zero, as is automatically true if the holonomy group equals the entire group Sp(n)·Sp(1). Early history Marcel Berger's 1955 paper on the classification of Riemannian holonomy groups first raised the issue of the existence of non-symmetric manifolds with holonomy Sp(n)·Sp(1). Interesting results were proved in the mid-1960s in pioneering work by Edmond Bonan and Kraines, who independently proved that any such manifold admits a parallel 4-form. The long-awaited analogue of the strong Lefschetz theorem was published in 1982. In the context of Berger's classification of Riemannian holonomies, quaternion-Kähler manifolds constitute the only class of irreducible, non-symmetric manifolds of special holonomy that are automatically Einstein, but not automatically Ricci-flat. If the Einstein constant of a simply connected manifold with holonomy in Sp(n)·Sp(1) is zero, where n ≥ 2, then the holonomy is actually contained in Sp(n), and the manifold is hyperkähler. This case is excluded from the definition by declaring quaternion-Kähler to mean not only that the holonomy group is contained in Sp(n)·Sp(1), but also that the manifold has non-zero (constant) scalar curvature. With this convention, quaternion-Kähler manifolds can thus be naturally divided into those for which the Ricci curvature is positive, and those for which it is instead negative. Examples There are no known examples of compact quaternion-Kähler manifolds that are not locally symmetric. (Again, hyperkähler manifolds are excluded from the discussion by fiat.) On the other hand, there are many symmetric quaternion-Kähler manifolds; these were first classified by Joseph A. Wolf, and so are known as Wolf spaces. For any simple Lie group G, there is a unique Wolf space G/K obtained as a quotient of G by a subgroup K = K0 · Sp(1), where Sp(1) is the subgroup associated with the highest root of G, and K0 is its centralizer in G. The Wolf spaces with positive Ricci curvature are compact and simply connected. For example, if G = Sp(n + 1), the corresponding Wolf space is the quaternionic projective space HP^n of (right) quaternionic lines through the origin in H^(n+1). A conjecture often attributed to LeBrun and Salamon (see below) asserts that all complete quaternion
https://en.wikipedia.org/wiki/Descartes%27%20rule%20of%20signs
In mathematics, Descartes' rule of signs, first described by René Descartes in his work La Géométrie, is a technique for getting information on the number of positive real roots of a polynomial. It asserts that the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients), and that the difference between these two numbers is always even. This implies, in particular, that if the number of sign changes is zero or one, then there are exactly zero or one positive roots, respectively. By a linear fractional transformation of the variable, one may use Descartes' rule of signs to obtain similar information on the number of roots in any interval. This is the basic idea of Budan's theorem and the Budan–Fourier theorem. By repeatedly dividing an interval into two, one eventually obtains a list of disjoint intervals that together contain all real roots of the polynomial, each containing exactly one real root. Descartes' rule of signs and linear fractional transformations of the variable are nowadays the basis of the fastest algorithms for computer computation of real roots of polynomials (see real-root isolation). Descartes himself used the transformation x ↦ −x to apply his rule to obtain information on the number of negative roots. Descartes' rule of signs Positive roots The rule states that if the nonzero terms of a single-variable polynomial with real coefficients are ordered by descending variable exponent, then the number of positive roots of the polynomial is either equal to the number of sign changes between consecutive (nonzero) coefficients, or is less than it by an even number. A root of multiplicity m is counted as m roots. In particular, if the number of sign changes is zero or one, the number of positive roots equals the number of sign changes. Negative roots As a corollary of the rule, the number of negative roots is the number of sign changes after multiplying the coefficients of odd-power terms by −1, or fewer than it by an even number. This procedure is equivalent to substituting the negation of the variable for the variable itself: the negative roots of p(x) are the positive roots of p(−x). Thus, applying Descartes' rule of signs to this second polynomial gives the maximum number of negative roots of the original polynomial. Example: cubic polynomial The polynomial x^3 + x^2 − x − 1 has one sign change between the second and third terms, as the sequence of signs is (+, +, −, −). Therefore, it has exactly one positive root. To find the number of negative roots, change the signs of the coefficients of the terms with odd exponents, i.e., apply Descartes' rule of signs to the polynomial −x^3 + x^2 + x − 1. This polynomial has two sign changes, as the sequence of signs is (−, +, +, −), meaning that this second polynomial has two or zero positive roots; thus the original polynomial has two or zero negative roots. In fact, the factorization of the first polynomial is (x + 1)^2 (x − 1), so the roots are −1 (twice) and
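As an illustration of the counting rule, a minimal sketch that counts sign changes and applies the rule to the cubic from the example; the helper name sign_changes is chosen here for illustration:

```python
def sign_changes(coeffs):
    # count sign changes between consecutive nonzero coefficients
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 + x^2 - x - 1, coefficients in descending order of exponent
p = [1, 1, -1, -1]
# q(x) = p(-x): negate the coefficients of odd-degree terms (degrees 3 and 1 here)
q = [-c if (len(p) - 1 - i) % 2 else c for i, c in enumerate(p)]

print(sign_changes(p))  # 1 -> exactly one positive root
print(sign_changes(q))  # 2 -> two or zero negative roots (in fact -1 is a double root)
```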
https://en.wikipedia.org/wiki/Problem%20of%20Apollonius
In Euclidean plane geometry, Apollonius's problem is to construct circles that are tangent to three given circles in a plane (Figure 1). Apollonius of Perga (c. 262 190 BC) posed and solved this famous problem in his work (, "Tangencies"); this work has been lost, but a 4th-century AD report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them (Figure 2), a pair of solutions for each way to divide the three given circles in two subsets (there are 4 ways to divide a set of cardinality 3 in 2 parts). In the 16th century, Adriaan van Roomen solved the problem using intersecting hyperbolas, but this solution does not use only straightedge and compass constructions. François Viète found such a solution by exploiting limiting cases: any of the three given circles can be shrunk to zero radius (a point) or expanded to infinite radius (a line). Viète's approach, which uses simpler limiting cases to solve more complicated ones, is considered a plausible reconstruction of Apollonius' method. The method of van Roomen was simplified by Isaac Newton, who showed that Apollonius' problem is equivalent to finding a position from the differences of its distances to three known points. This has applications in navigation and positioning systems such as LORAN. Later mathematicians introduced algebraic methods, which transform a geometric problem into algebraic equations. These methods were simplified by exploiting symmetries inherent in the problem of Apollonius: for instance solution circles generically occur in pairs, with one solution enclosing the given circles that the other excludes (Figure 2). Joseph Diaz Gergonne used this symmetry to provide an elegant straightedge and compass solution, while other mathematicians used geometrical transformations such as reflection in a circle to simplify the configuration of the given circles. These developments provide a geometrical setting for algebraic methods (using Lie sphere geometry) and a classification of solutions according to 33 essentially different configurations of the given circles. Apollonius' problem has stimulated much further work. Generalizations to three dimensions—constructing a sphere tangent to four given spheres—and beyond have been studied. The configuration of three mutually tangent circles has received particular attention. René Descartes gave a formula relating the radii of the solution circles and the given circles, now known as Descartes' theorem. Solving Apollonius' problem iteratively in this case leads to the Apollonian gasket, which is one of the earliest fractals to be described in print, and is important in number theory via Ford circles and the Hardy–Littlewood circle method. Statement of the problem The general statement of Apollonius' problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size.
https://en.wikipedia.org/wiki/Heinrich%20Martin%20Weber
Heinrich Martin Weber (5 March 1842, Heidelberg, Germany – 17 May 1913, Straßburg, Alsace-Lorraine, German Empire, now Strasbourg, France) was a German mathematician. Weber's main work was in algebra, number theory, and analysis. He is best known for his text Lehrbuch der Algebra published in 1895 and much of it is his original research in algebra and number theory. His work Theorie der algebraischen Functionen einer Veränderlichen (with Dedekind) established an algebraic foundation for Riemann surfaces, allowing a purely algebraic formulation of the Riemann–Roch theorem. Weber's research papers were numerous, most of them appearing in Crelle's Journal or Mathematische Annalen. He was the editor of Riemann's collected works. Weber was born in Heidelberg, Baden, and entered the University of Heidelberg in 1860. In 1866 he became a privatdozent, and in 1869 he was appointed as extraordinary professor at that school. Weber also taught in Zurich at the Federal Polytechnic Institute (today the ETH Zurich), at the University of Königsberg, and at the Technische Hochschule in Charlottenburg. His final post was at the Kaiser-Wilhelm-Universität Straßburg, Alsace-Lorraine, where he died. In 1893 in Chicago, his paper Zur Theorie der ganzzahligen algebraischen Gleichungen was read (but not by him) at the International Mathematical Congress held in connection with the World's Columbian Exposition. In 1895 and in 1904 he was president of the Deutsche Mathematiker-Vereinigung. His doctoral students include Heinrich Brandt, E. V. Huntington, Louis Karpinski, and Friedrich Levi. Publications with Richard Dedekind: Theorie der algebraischen Functionen einer Veränderlichen. J. Reine Angew. Math. 92 (1882) 181–290 Elliptische Functionen und algebraische Zahlen. Braunschweig 1891 Encyklopädie der Elementar-Mathematik. Ein Handbuch für Lehrer und Studierende. Leipzig 1903/07, (Vol. 1, Vol. 2, Vol. 3) (in German) with Bernhard Riemann (i.e. partly based on Riemann's lectures): Die partiellen Differential-Gleichungen der mathematischen Physik. Braunschweig 1900-01 Lehrbuch der Algebra. Braunschweig 1924, ed. Robert Fricke The third volume is an expanded version of his earlier book "Elliptische Functionen und algebraische Zahlen". References 1842 births 1913 deaths 19th-century German mathematicians 20th-century German mathematicians Algebraists Number theorists Scientists from Heidelberg People from the Grand Duchy of Baden Heidelberg University alumni Academic staff of Heidelberg University Academic staff of the University of Königsberg Academic staff of the Technical University of Berlin Academic staff of the University of Strasbourg Heads of universities in Germany Academic staff of ETH Zurich Members of the Royal Society of Sciences in Uppsala
https://en.wikipedia.org/wiki/Rectifiable%20set
In mathematics, a rectifiable set is a set that is smooth in a certain measure-theoretic sense. It is an extension of the idea of a rectifiable curve to higher dimensions; loosely speaking, a rectifiable set is a rigorous formulation of a piece-wise smooth set. As such, it has many of the desirable properties of smooth manifolds, including tangent spaces that are defined almost everywhere. Rectifiable sets are the underlying object of study in geometric measure theory. Definition A Borel subset of Euclidean space is said to be -rectifiable set if is of Hausdorff dimension , and there exist a countable collection of continuously differentiable maps such that the -Hausdorff measure of is zero. The backslash here denotes the set difference. Equivalently, the may be taken to be Lipschitz continuous without altering the definition. Other authors have different definitions, for example, not requiring to be -dimensional, but instead requiring that is a countable union of sets which are the image of a Lipschitz map from some bounded subset of . A set is said to be purely -unrectifiable if for every (continuous, differentiable) , one has A standard example of a purely-1-unrectifiable set in two dimensions is the Cartesian product of the Smith–Volterra–Cantor set times itself. Rectifiable sets in metric spaces gives the following terminology for m-rectifiable sets E in a general metric space X. E is rectifiable when there exists a Lipschitz map for some bounded subset of onto . E is countably rectifiable when E equals the union of a countable family of rectifiable sets. E is countably rectifiable when is a measure on X and there is a countably rectifiable set F such that . E is rectifiable when E is countably rectifiable and E is purely unrectifiable when is a measure on X and E includes no rectifiable set F with . Definition 3 with and comes closest to the above definition for subsets of Euclidean spaces. Notes References External links Rectifiable set at Encyclopedia of Mathematics Measure theory
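One common way to write the Euclidean definition above compactly (E a Borel set of Hausdorff dimension m, H^m the m-dimensional Hausdorff measure; taking the maps C^1 or Lipschitz gives the same class, as the text notes):

```latex
E \subseteq \mathbb{R}^{n} \text{ is } m\text{-rectifiable}
\;\Longleftrightarrow\;
\mathcal{H}^{m}\!\Bigl(E \setminus \bigcup_{i=1}^{\infty} f_{i}(\mathbb{R}^{m})\Bigr) = 0
\quad\text{for some maps } f_{i} : \mathbb{R}^{m} \to \mathbb{R}^{n},
\\[4pt]
E \text{ is purely } m\text{-unrectifiable}
\;\Longleftrightarrow\;
\mathcal{H}^{m}\!\bigl(E \cap f(\mathbb{R}^{m})\bigr) = 0
\quad\text{for every Lipschitz } f : \mathbb{R}^{m} \to \mathbb{R}^{n}.
```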
https://en.wikipedia.org/wiki/Versor
In mathematics, a versor is a quaternion of norm one (a unit quaternion). Each versor has the form where the r2 = −1 condition means that r is a unit-length vector quaternion (or that the first component of r is zero, and the last three components of r are a unit vector in 3 dimensions). The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. In case (a right angle), then , and the resulting unit vector is termed a right versor. The collection of versors with quaternion multiplication forms a group, and the set of versors is a 3-sphere in the 4-dimensional quaternion algebra. Presentation on 3- and 2-spheres Hamilton denoted the versor of a quaternion q by the symbol Uq. He was then able to display the general quaternion in polar coordinate form q = Tq Uq, where Tq is the norm of q. The norm of a versor is always equal to one; hence they occupy the unit 3-sphere in H. Examples of versors include the eight elements of the quaternion group. Of particular importance are the right versors, which have angle π/2. These versors have zero scalar part, and so are vectors of length one (unit vectors). The right versors form a sphere of square roots of −1 in the quaternion algebra. The generators i, j, and k are examples of right versors, as well as their additive inverses. Other versors include the twenty-four Hurwitz quaternions that have the norm 1 and form vertices of a 24-cell polychoron. Hamilton defined a quaternion as the quotient of two vectors. A versor can be defined as the quotient of two unit vectors. For any fixed plane Π the quotient of two unit vectors lying in Π depends only on the angle (directed) between them, the same a as in the unit vector–angle representation of a versor explained above. That's why it may be natural to understand corresponding versors as directed arcs that connect pairs of unit vectors and lie on a great circle formed by intersection of Π with the unit sphere, where the plane Π passes through the origin. Arcs of the same direction and length (or, the same, its subtended angle in radians) are equivalent, i.e. define the same versor. Such an arc, although lying in the three-dimensional space, does not represent a path of a point rotating as described with the sandwiched product with the versor. Indeed, it represents the left multiplication action of the versor on quaternions that preserves the plane Π and the corresponding great circle of 3-vectors. The 3-dimensional rotation defined by the versor has the angle two times the arc's subtended angle, and preserves the same plane. It is a rotation about the corresponding vector r, that is perpendicular to Π. On three unit vectors, Hamilton writes and imply Multiplication of quaternions of norm one corresponds to the (non-commutative) "addition" of great circle arcs on the unit sphere. Any pair of great circles either is the same circle or has two intersection points. Hence, one can always move the point B and
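A minimal numerical sketch of the statement that the versor cos a + r sin a rotates vectors by the angle 2a under the sandwich product; the (w, x, y, z) component layout and the helper qmul are choices made here, not notation from the text:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# versor with angle a about the unit axis r: q = cos(a) + sin(a) * r
a = np.pi / 6
r = np.array([0.0, 0.0, 1.0])                      # unit vector along k
q = np.concatenate(([np.cos(a)], np.sin(a) * r))
q_conj = q * np.array([1, -1, -1, -1])

v = np.array([0.0, 1.0, 0.0, 0.0])                  # pure quaternion for the vector (1, 0, 0)
rotated = qmul(qmul(q, v), q_conj)                   # sandwich product: rotation by 2a = 60 degrees
print(rotated[1:])                                   # ~ (cos 60°, sin 60°, 0)
```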
https://en.wikipedia.org/wiki/List%20of%20numerical-analysis%20software
Listed here are notable end-user computer applications intended for use with numerical or data analysis: Numerical-software packages General-purpose computer algebra systems Interface-oriented Language-oriented Historically significant Expensive Desk Calculator written for the TX-0 and PDP-1 in the late 1950s or early 1960s. S is an (array-based) programming language with strong numerical support. R is an implementation of the S language. See also References Lists of software Mathematics-related lists Software
https://en.wikipedia.org/wiki/Proizvolov%27s%20identity
In mathematics, Proizvolov's identity is an identity concerning sums of differences of positive integers. The identity was posed by Vyacheslav Proizvolov as a problem in the 1985 All-Union Soviet Student Olympiads. To state the identity, take the first 2N positive integers, 1, 2, 3, ..., 2N − 1, 2N, and partition them into two subsets of N numbers each. Arrange one subset in increasing order: A1 < A2 < ... < AN. Arrange the other subset in decreasing order: B1 > B2 > ... > BN. Then the sum |A1 − B1| + |A2 − B2| + ... + |AN − BN| is always equal to N^2. Example Take for example N = 3. The set of numbers is then {1, 2, 3, 4, 5, 6}. Select three numbers of this set, say 2, 3 and 5. Then the sequences A and B are: A1 = 2, A2 = 3, and A3 = 5; B1 = 6, B2 = 4, and B3 = 1. The sum is |2 − 6| + |3 − 4| + |5 − 1| = 4 + 1 + 4 = 9, which indeed equals 3^2. Proof A slick proof of the identity is as follows. Note that for any i, we have |Ai − Bi| = max(Ai, Bi) − min(Ai, Bi). For this reason, it suffices to establish that the sets {max(Ai, Bi) : 1 ≤ i ≤ N} and {N + 1, N + 2, ..., 2N} coincide. Since the numbers are all distinct, it therefore suffices to show that for any i, max(Ai, Bi) ≥ N + 1. Assume, to the contrary, that this is false for some i, and consider the N + 1 positive integers A1, ..., Ai, Bi, Bi+1, ..., BN. Clearly, these numbers are all distinct (due to the construction), but they are all at most N: a contradiction is reached. Hence each max(Ai, Bi) lies in {N + 1, ..., 2N}, the minima make up {1, ..., N}, and the sum equals ((N + 1) + ... + 2N) − (1 + ... + N) = N^2. Notes References . External links Proizvolov's identity at cut-the-knot.org A video illustration (and proof outline) of Proizvolov's identity by Dr. James Grime Recreational mathematics Theorems in number theory
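The identity is easy to check by brute force; a small sketch in which the chosen subset becomes the increasing block and its complement the decreasing one (function and variable names are illustrative):

```python
import random

def proizvolov_sum(n, subset):
    # subset: n numbers chosen from 1..2n; the complement forms the other block
    a = sorted(subset)                                                # increasing block
    b = sorted(set(range(1, 2*n + 1)) - set(subset), reverse=True)    # decreasing block
    return sum(abs(x - y) for x, y in zip(a, b))

n = 3
print(proizvolov_sum(n, {2, 3, 5}))   # 9 == n**2, the example from the text

for _ in range(5):                     # a few random partitions; all give n**2
    s = set(random.sample(range(1, 2*n + 1), n))
    assert proizvolov_sum(n, s) == n * n
```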
https://en.wikipedia.org/wiki/Trinomial
In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials. Examples of trinomial expressions with variables with variables with variables , the quadratic polynomial in standard form with variables. with variables, nonnegative integers and any constants. where is variable and constants are nonnegative integers and any constants. Trinomial equation A trinomial equation is a polynomial equation involving three terms. An example is the equation studied by Johann Heinrich Lambert in the 18th century. Some notable trinomials The quadratic trinomial in standard form (as from above): sum or difference of two cubes: A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable ( below). This form is factored as: where For instance, the polynomial is an example of this type of trinomial with . The solution and of the above system gives the trinomial factorization: . The same result can be provided by Ruffini's rule, but with a more complex and time-consuming process. See also Trinomial expansion Monomial Binomial Multinomial Simple expression Compound expression Sparse polynomial Notes References Elementary algebra Polynomials
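A worked instance of the quadratic-in-disguise factoring just described, with numbers chosen here for illustration (they are not the example elided in the text): substituting y = x^2,

```latex
x^{4} + 5x^{2} + 6 \;=\; y^{2} + 5y + 6 \;=\; (y + 2)(y + 3) \;=\; (x^{2} + 2)(x^{2} + 3),
\qquad a_{1} + a_{2} = 5, \quad a_{1} a_{2} = 6.
```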
https://en.wikipedia.org/wiki/Superposition%20calculus
The superposition calculus is a calculus for reasoning in equational logic. It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion. It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses, i.e. it performs proofs by refutation. Superposition is refutation complete: given unlimited resources and a fair derivation strategy, a contradiction will eventually be derived from any unsatisfiable clause set. Most state-of-the-art theorem provers for first-order logic are based on superposition (e.g. the E equational theorem prover), although only a few implement the pure calculus. Implementations E SPASS Vampire Waldmeister (official web page) References Rewrite-Based Equational Theorem Proving with Selection and Simplification, Leo Bachmair and Harald Ganzinger, Journal of Logic and Computation 3(4), 1994. Paramodulation-Based Theorem Proving, Robert Nieuwenhuis and Alberto Rubio, Handbook of Automated Reasoning I(7), Elsevier Science and MIT Press, 2001. Mathematical logic Logical calculi
https://en.wikipedia.org/wiki/Bent%20bond
In organic chemistry, a bent bond, also known as a banana bond, is a type of covalent chemical bond with a geometry somewhat reminiscent of a banana. The term itself is a general representation of electron density or configuration resembling a similar "bent" structure within small ring molecules, such as cyclopropane (C3H6) or as a representation of double or triple bonds within a compound that is an alternative to the sigma and pi bond model. Small cyclic molecules Bent bonds are a special type of chemical bonding in which the ordinary hybridization state of two atoms making up a chemical bond are modified with increased or decreased s-orbital character in order to accommodate a particular molecular geometry. Bent bonds are found in strained organic compounds such as cyclopropane, oxirane and aziridine. In these compounds, it is not possible for the carbon atoms to assume the 109.5° bond angles with standard sp3 hybridization. Increasing the p-character to sp5 (i.e. s-density and p-density) makes it possible to reduce the bond angles to 60°. At the same time, the carbon-to-hydrogen bonds gain more s-character, which shortens them. In cyclopropane, the maximum electron density between two carbon atoms does not correspond to the internuclear axis, hence the name bent bond. In cyclopropane, the interorbital angle is 104°. This bending can be observed experimentally by X-ray diffraction of certain cyclopropane derivatives: the deformation density is outside the line of centers between the two carbon atoms. The carbon–carbon bond lengths are shorter than in a regular alkane bond: 151 pm versus 153 pm. Cyclobutane is a larger ring, but still has bent bonds. In this molecule, the carbon bond angles are 90° for the planar conformation and 88° for the puckered one. Unlike in cyclopropane, the C–C bond lengths actually increase rather than decrease; this is mainly due to 1,3-nonbonded steric repulsion. In terms of reactivity, cyclobutane is relatively inert and behaves like ordinary alkanes. Walsh orbital model An alternative model utilizes semi-localized Walsh orbitals in which cyclopropane is described as a carbon sp2 sigma bonding and in-plane pi bonding system. Critics of the Walsh orbital theory argue that this model does not represent the ground state of cyclopropane as it cannot be transformed into the localized or fully delocalized descriptions via a unitary transformation. Double and triple bonds Two different explanations for the nature of double and triple covalent bonds in organic molecules were proposed in the 1930s. Linus Pauling proposed that the double bond results from two equivalent tetrahedral orbitals from each atom, which later came to be called banana bonds or tau bonds. Erich Hückel proposed a representation of the double bond as a combination of a sigma bond plus a pi bond. The Hückel representation is the better-known one, and it is the one found in most textbooks since the late-20th century. Both models represent the s
https://en.wikipedia.org/wiki/Latent%20and%20observable%20variables
In statistics, latent variables (from Latin: present participle of lateo, “lie hidden”) are variables that can only be inferred indirectly through a mathematical model from other observable variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management, psychology and the social sciences. Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. In this situation, the term hidden variables is commonly used (reflecting the fact that the variables are meaningful, but not observable). Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The terms hypothetical variables or hypothetical constructs may be used in these situations. The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world. Examples Psychology Latent variables, as created by factor analytic methods, generally represent "shared" variance, or the degree to which variables "move" together. Variables that have no correlation cannot result in a latent construct based on the common factor model. The "Big Five personality traits" have been inferred using factor analysis. extraversion spatial ability wisdom “Two of the more predominant means of assessing wisdom include wisdom-related performance and latent variable measures.” Spearman's g, or the general intelligence factor in psychometrics Economics Examples of latent variables from the field of economics include quality of life, business confidence, morale, happiness and conservatism: these are all variables which cannot be measured directly. But linking these latent variables to other, observable variables, the values of the latent variables can be inferred from measurements of the observable variables. Quality of life is a latent variable which cannot be measured directly so observable variables are used to infer quality of life. Observable variables to measure quality of life include wealth, employment, environment, physical and mental health, education, recreation and leisure time, and social belonging. Medicine Latent-variable methodology is used in many branches of medicine. A class of problems that naturally lend themselves to latent variables approaches are longitudinal studies where the time scale (e.g. age of participant or time since study baseline) is not synchronized with the trait
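As a concrete illustration of inferring a latent variable from several observed indicators, a small factor-analysis sketch on synthetic data; the data, the loadings, and the use of scikit-learn's FactorAnalysis are choices made here, not part of the article:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# one latent factor drives five observed indicators plus noise (synthetic data)
latent = rng.normal(size=(500, 1))
loadings = np.array([[0.9, 0.8, 0.7, 0.6, 0.5]])
observed = latent @ loadings + 0.3 * rng.normal(size=(500, 5))

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(observed)       # estimated scores on the latent variable
print(fa.components_.round(2))             # recovered loadings, close to the true ones up to sign
```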
https://en.wikipedia.org/wiki/Irreducible%20component
In algebraic geometry, an irreducible algebraic set or irreducible variety is an algebraic set that cannot be written as the union of two proper algebraic subsets. An irreducible component is an algebraic subset that is irreducible and maximal (for set inclusion) for this property. For example, the set of solutions of the equation is not irreducible, and its irreducible components are the two lines of equations and . It is a fundamental theorem of classical algebraic geometry that every algebraic set may be written in a unique way as a finite union of irreducible components. These concepts can be reformulated in purely topological terms, using the Zariski topology, for which the closed sets are the algebraic subsets: A topological space is irreducible if it is not the union of two proper closed subsets, and an irreducible component is a maximal subspace (necessarily closed) that is irreducible for the induced topology. Although these concepts may be considered for every topological space, this is rarely done outside algebraic geometry, since most common topological spaces are Hausdorff spaces, and, in a Hausdorff space, the irreducible components are the singletons. In topology A topological space X is reducible if it can be written as a union of two closed proper subsets , of A topological space is irreducible (or hyperconnected) if it is not reducible. Equivalently, X is irreducible if all non empty open subsets of X are dense, or if any two nonempty open sets have nonempty intersection. A subset F of a topological space X is called irreducible or reducible, if F considered as a topological space via the subspace topology has the corresponding property in the above sense. That is, is reducible if it can be written as a union where are closed subsets of , neither of which contains An irreducible component of a topological space is a maximal irreducible subset. If a subset is irreducible, its closure is also irreducible, so irreducible components are closed. Every irreducible subset of a space X is contained in a (not necessarily unique) irreducible component of X. Every point is contained in some irreducible component of X. In algebraic geometry Every affine or projective algebraic set is defined as the set of the zeros of an ideal in a polynomial ring. An irreducible algebraic set, more commonly known as an algebraic variety is an algebraic set that cannot be decomposed as the union of two smaller algebraic sets. Lasker–Noether theorem implies that every algebraic set is the union of a finite number of uniquely defined algebraic sets, called its irreducible components. These notions of irreducibility and irreducible components are exactly the above defined ones, when the Zariski topology is considered, since the algebraic sets are exactly the closed sets of this topology. The spectrum of a ring is a topological space whose points are the prime ideals and the closed sets are the sets of all prime ideals that contain a fixed
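For concreteness, the standard plane example of such a decomposition, written out here (assuming the usual illustration xy = 0, since the excerpt's own formulas are not shown):

```latex
V(xy) \;=\; V(x) \,\cup\, V(y) \;\subset\; \mathbb{A}^{2},
\qquad
(xy) \;=\; (x) \cap (y) \;\subset\; k[x, y],
```

so the two coordinate lines x = 0 and y = 0, each irreducible, are the irreducible components of the zero set of xy.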
https://en.wikipedia.org/wiki/Peak
Peak or The Peak may refer to: Basic meanings Geology Mountain peak Pyramidal peak, a mountaintop that has been sculpted by erosion to form a point Mathematics Peak hour or rush hour, in traffic congestion Peak (geometry), an (n-3)-dimensional element of a polytope Peak electricity demand or peak usage Peak-to-peak, the highest (or sometimes the highest and lowest) points on a varying waveform Peak (pharmacology), the time at which a drug reaches its maximum plasma concentration Peak experience, psychological term for a euphoric mental state Resource production In terms of resource production, the peak is the moment when the production of a resource reaches a maximum level, after which it declines; in particular see: Peak oil Peak car Peak coal Peak copper Peak farmland Peak gas Peak gold Peak minerals Peak phosphorus Peak uranium Peak water Peak wheat Peak wood Other basic meanings Visor, a part of a hat, known as a "peak" in British English Peaked cap Geography Peak District in the Midlands of England The Peak, summit of Kinder Scout, the highest point in the Peak District Ravenscar, North Yorkshire, a village in England formerly known as "Peak" and "The Peak" The Peak (Hong Kong), also known as Victoria Peak Victoria Peak (disambiguation) Peak, a village in Ya Tung, Cambodia People Bob Peak (1927–1992), American commercial illustrator Howard W. Peak (b. 1948), American politician Jill Peak, British dog breeder and Crufts judge Junius W. Peak (1845–1934), Confederate soldier and Texas Ranger Products and brands BIAS Peak, a professional audio editing program on the Apple platform GeeksPhone Peak, a mobile phone PEAKS, a software program for tandem mass spectroscopy Peak Sport Products, a Chinese sneaker brand The Peak Twin Towers, an apartment building in Jakarta, Indonesia Peak (automotive products), a manufacturer of automotive products Healthpeak Properties, an American real estate company having stock that trades under the symbol PEAK Transportation A nickname used to refer to the British Rail Class 44 diesel locomotives, and also classes 45 and 46 The highest corner of a four-sided, fore-aft sail PRS Peak, a German mountain descent paraglider design The Peak Terminus, Hong Kong Media and entertainment The Peak (newspaper), a student newspaper of Simon Fraser University in Burnaby, British Columbia, Canada The Peak (TV series), a TV series in Singapore CFBV, a radio station branded as The Peak based in Smithers, British Columbia 98.7 The Peak, a radio station in Phoenix, Arizona 100.5 The Peak, a radio station based in Vancouver, British Columbia 107.1 The Peak, a radio station based in White Plains, New York CKCB-FM, a radio station branded as 95.1 The Peak FM based in Collingwood, Ontario KPEK, a radio station on 100.3 FM branded as The Peak based in Albuquerque, New Mexico Peak: Secrets from the New Science of Expertise, a 2016 book Peak (novel), by Roland Smith Peak Reco
https://en.wikipedia.org/wiki/Uniform%20boundedness
In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family. Definition Real line and complex plane Let F = {f_i : X → K, i ∈ I} be a family of functions indexed by I, where X is an arbitrary set and K is the set of real or complex numbers. We call F uniformly bounded if there exists a real number M such that |f_i(x)| ≤ M for all i ∈ I and all x ∈ X. Metric space In general, let Y be a metric space with metric d. Then the family F = {f_i : X → Y, i ∈ I} is called uniformly bounded if there exists an element a of Y and a real number M such that d(f_i(x), a) ≤ M for all i ∈ I and all x ∈ X. Examples Every uniformly convergent sequence of bounded functions is uniformly bounded. The family of functions f_n(x) = sin(nx), defined for real x with n traveling through the integers, is uniformly bounded by 1. The family of derivatives of the above family, f_n'(x) = n cos(nx), is not uniformly bounded. Each f_n' is bounded by |n|, but there is no real number M such that |n| ≤ M for all integers n. References Mathematical analysis
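A quick numerical illustration of the two examples above: every member of the sine family stays below 1 in absolute value, while the maxima of the derivative family grow like |n|, so no single constant bounds them all (the grid and range are arbitrary):

```python
import numpy as np

x = np.linspace(-10, 10, 20001)
for n in range(1, 6):
    f_n = np.sin(n * x)             # family f_n(x) = sin(nx): all bounded by 1
    df_n = n * np.cos(n * x)        # derivatives: the bound grows like |n|
    print(n, np.abs(f_n).max().round(3), np.abs(df_n).max().round(3))
```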
https://en.wikipedia.org/wiki/Raised%20cosine%20distribution
In probability theory and statistics, the raised cosine distribution is a continuous probability distribution supported on the interval [μ − s, μ + s]. The probability density function (PDF) is f(x; μ, s) = (1/(2s)) [1 + cos(π(x − μ)/s)] for μ − s ≤ x ≤ μ + s and zero otherwise. The cumulative distribution function (CDF) is F(x; μ, s) = (1/2) [1 + (x − μ)/s + (1/π) sin(π(x − μ)/s)] for μ − s ≤ x ≤ μ + s, zero for x < μ − s, and unity for x > μ + s. The moments of the raised cosine distribution are somewhat complicated in the general case, but are considerably simplified for the standard raised cosine distribution. The standard raised cosine distribution is just the raised cosine distribution with μ = 0 and s = 1. Because the standard raised cosine distribution is an even function, the odd moments are zero. The even moments can be expressed in terms of a generalized hypergeometric function. See also Hann function Havercosine (hvc) References Continuous distributions
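A minimal sketch of the density given above, checking numerically that it integrates to one; the function name and grid are illustrative, and the quoted variance 1/3 − 2/π² of the standard case is taken from standard references rather than from the excerpt:

```python
import numpy as np

def raised_cosine_pdf(x, mu=0.0, s=1.0):
    # density supported on [mu - s, mu + s], zero outside
    inside = np.abs(x - mu) <= s
    return np.where(inside, (1.0 / (2.0 * s)) * (1.0 + np.cos(np.pi * (x - mu) / s)), 0.0)

x = np.linspace(-1.5, 1.5, 300001)
dx = x[1] - x[0]
pdf = raised_cosine_pdf(x)
print((pdf * dx).sum())          # ~1.0: the density integrates to one
print((x**2 * pdf * dx).sum())   # ~0.1307, i.e. 1/3 - 2/pi^2, the standard-case variance
```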
https://en.wikipedia.org/wiki/Quasiperiodic%20function
In mathematics, a quasiperiodic function is a function that has a certain similarity to a periodic function. A function is quasiperiodic with quasiperiod if , where is a "simpler" function than . What it means to be "simpler" is vague. A simple case (sometimes called arithmetic quasiperiodic) is if the function obeys the equation: Another case (sometimes called geometric quasiperiodic) is if the function obeys the equation: An example of this is the Jacobi theta function, where shows that for fixed it has quasiperiod ; it also is periodic with period one. Another example is provided by the Weierstrass sigma function, which is quasiperiodic in two independent quasiperiods, the periods of the corresponding Weierstrass ℘ function. Functions with an additive functional equation are also called quasiperiodic. An example of this is the Weierstrass zeta function, where for a z-independent η when ω is a period of the corresponding Weierstrass ℘ function. In the special case where we say f is periodic with period ω in the period lattice . Quasiperiodic signals Quasiperiodic signals in the sense of audio processing are not quasiperiodic functions in the sense defined here; instead they have the nature of almost periodic functions and that article should be consulted. The more vague and general notion of quasiperiodicity has even less to do with quasiperiodic functions in the mathematical sense. A useful example is the function: If the ratio A/B is rational, this will have a true period, but if A/B is irrational there is no true period, but a succession of increasingly accurate "almost" periods. See also Quasiperiodic motion References External links Quasiperiodic function at PlanetMath Complex analysis Types of functions
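A representative function of the kind the last paragraph describes, written here for illustration (the excerpt's own formula is not shown):

```latex
f(x) \;=\; \sin(Ax) \;+\; \sin(Bx),
```

which is periodic when A/B is rational and merely almost periodic when A/B is irrational.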
https://en.wikipedia.org/wiki/Open%20disc
Open disc can refer to: a disk (mathematics) which does not include the circle forming its boundary the OpenDisc software project
https://en.wikipedia.org/wiki/Albert%20Shiryaev
Albert Nikolayevich Shiryaev (; born October 12, 1934) is a Soviet and Russian mathematician. He is known for his work in probability theory, statistics and financial mathematics. Career He graduated from Moscow State University in 1957. From that time until now he has been working in Steklov Mathematical Institute. He earned his candidate degree in 1961 (Andrey Kolmogorov was his advisor) and a doctoral degree in 1967 for his work "On statistical sequential analysis". He is a professor of the department of mechanics and mathematics of Moscow State University, since 1971. Shiryaev holds a 20% permanent professorial position at the School of Mathematics, University of Manchester. He has supervised more than 50 doctoral dissertations and is the author or coauthor of more than 250 publications. In 1970 he was an Invited Speaker with talk Sur les equations stochastiques aux dérivées partielles at the International Congress of Mathematicians (ICM) in Nice. In 1978 he was a Plenary Speaker with talk Absolute Continuity and Singularity of Probability Measures in Functional Spaces at the ICM in Helsinki. He was elected in 1985 an honorary member of the Royal Statistical Society and in 1990 a member of Academia Europaea. From 1989 to 1991 he was the president of the Bernoulli Society for Mathematical Statistics and Probability. From 1994 to 1998 he was the president of the Russian Actuarial Society. In 1996 he was awarded a Humboldt Prize. He was elected a corresponding member of the Russian Academy of Sciences in 1997 and a full member in 2011. From 1998 to 1999 he was a founding member and the first president of the Bachelier Finance Society. He was made in 2000 Doctor Rerum Naturalium Honoris Causa of Albert Ludwigs University of Freiburg and in 2002 Professor Honoris Causa of the University of Amsterdam. In 2017 he was awarded the Chebyschev gold medal of the Russian Academy of Sciences. Contributions His scientific work concerns different aspects of probability theory, statistics and its applications. He has contributions to: Nonlinear theory of stationary stochastic processes Problems of fast detection of random effects (Kolmogorov Prize of Russian Academy of Sciences, 1994) Problems of optimal nonlinear filtration, stochastic differential equations (A.N. Markov Prize of USSR Academy of Sciences, 1974) Problems of stochastic optimization, including "Optimal stopping rules" Problems of general stochastic theory and martingale theory Problems of stochastic finance (monograph "The Essentials of Stochastic Finance", English and Russian editions) Publications Statistical sequential analysis: optimal stopping rules. American Mathematical Society 1976 (Russian 1969), new edition entitled Optimal Stopping Rules, Springer 1978, 2008 with Robert Liptser: Statistics of random processes. 2 vols., Springer, 1977/1978, 1981; 2nd edition 2013, vol. 1 with P. Greenwood: Contiguity and Statistical Invariance Principle. Gordon and Breach, 1985 with
https://en.wikipedia.org/wiki/Shortlex%20order
In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order. Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering. In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is [ε, a, b, c, ..., z, aa, ab, ac, ..., zz, aaa, aab, aac, ..., zzz, ...], where ε denotes the empty string. The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the natural numbers, giving the bijective numeration system for representing numbers. The shortlex ordering is also important in the theory of automatic groups. See also Graded lexicographic order References Order theory
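Programmatically, the ordering amounts to sorting with the key (length, word); a small sketch over a two-letter alphabet (the alphabet and helper name are chosen here):

```python
from itertools import product

def shortlex_key(s):
    # primary key: length; secondary key: ordinary lexicographic order
    return (len(s), s)

alphabet = "ab"
words = ["".join(p) for n in range(3) for p in product(alphabet, repeat=n)]
print(sorted(words, key=shortlex_key))
# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb'] -- shorter strings first, then lexicographic
```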
https://en.wikipedia.org/wiki/Stephen%20Fienberg
Stephen Elliott Fienberg (27 November 1942 – 14 December 2016) was a Professor Emeritus (formerly the Maurice Falk University Professor of Statistics and Social Science) in the Department of Statistics, the Machine Learning Department, Heinz College, and Cylab at Carnegie Mellon University. Fienberg was the founding co-editor of the Annual Review of Statistics and Its Application and of the Journal of Privacy and Confidentiality. Early life and education Born in Toronto, Ontario, Fienberg earned a Bachelor of Science degree in Mathematics and Statistics from the University of Toronto in 1964, a Master of Arts degree in Statistics in 1965, and a Ph.D. in Statistics in 1968 from Harvard University for research supervised by Frederick Mosteller. Career and research Fienberg was on the Carnegie Mellon University faculty from 1980 and served as Dean of the Dietrich College of Humanities and Social Sciences. He became a U.S. citizen in 1998. Fienberg was one of the foremost social statisticians in the world, and was well known for his work in log-linear modeling for categorical data, the statistical analysis of network data, and methodology for disclosure limitation. He was also an expert on forensic science, the only statistician to serve on the National Commission on Forensic Science. He authored more than 400 publications, including six books, advised more than 30 Ph.D. students, and could claim more than 105 descendants in his mathematical genealogy. His publications included books on discrete multivariate analysis categorical data analysis, US census adjustment, and forensic science. He was a founder and editor-in-chief of the Journal of Privacy and Confidentiality. and of the Annual Review of Statistics and Its Application. Awards and honors Fienberg was an elected member of the National Academy of Sciences, an elected fellow of the Royal Society of Canada, an elected fellow of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. He was a recipient of the Committee of Presidents of Statistical Societies (COPSS) Presidents' Award in 1982. In 2002, Fienberg received the Samuel S. Wilks Award from the American Statistical Association for his distinguished career in statistics. He received the inaugural Statistical Society of Canada's Lise Manchester Award in 2008 in recognition of his application of statistics to problems of public interest. In 2015, he received the Jerome Sacks Award for Cross-Disciplinary Research from the National Institute of Statistical Sciences, and the R. A. Fisher Lectureship from COPSS in 2015. He was awarded the Zellner Medal by the International Society for Bayesian Analysis (ISBA) in 2016. Selected publications Bishop, Y. M. M., Fienberg, S. E. and Holland, P. W. (1975). Discrete Multivariate Analysis: Theory and Practice. M.I.T.
https://en.wikipedia.org/wiki/Robert%20I.%20Soare
Robert Irving Soare is an American mathematician. He is the Paul Snowden Russell Distinguished Service Professor of Mathematics and Computer Science at the University of Chicago, where he has been on the faculty since 1967. He proved, together with Carl Jockusch, the low basis theorem, and has done other work in mathematical logic, primarily in the area of computability theory. In 2012 he became a fellow of the American Mathematical Society. Selected publications C. G. Jockusch Jr. and R. I. Soare, "Π(0, 1) Classes and Degrees of Theories" in Transactions of the American Mathematical Society (1972). See also Jockusch–Soare forcing References External links Professional homepage Living people Year of birth missing (living people) University of Chicago faculty 20th-century American mathematicians 21st-century American mathematicians Fellows of the American Mathematical Society
https://en.wikipedia.org/wiki/Sine%20and%20cosine%20transforms
In mathematics, the Fourier sine and cosine transforms are forms of the Fourier transform that do not use complex numbers or require negative frequency. They are the forms originally used by Joseph Fourier and are still preferred in some applications, such as signal processing or statistics. Definition The Fourier sine transform of , sometimes denoted by either or , is If means time, then is frequency in cycles per unit time, but in the abstract, they can be any pair of variables which are dual to each other. This transform is necessarily an odd function of frequency, i.e. for all : The numerical factors in the Fourier transforms are defined uniquely only by their product. Here, in order that the Fourier inversion formula not have any numerical factor, the factor of 2 appears because the sine function has norm of The Fourier cosine transform of , sometimes denoted by either or , is It is necessarily an even function of frequency, i.e. for all : Since positive frequencies can fully express the transform, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. Simplification to avoid negative t Some authors only define the cosine transform for even functions of , in which case its sine transform is zero. Since cosine is also even, a simpler formula can be used, Similarly, if is an odd function, then the cosine transform is zero and the sine transform can be simplified to Other conventions Just like the Fourier transform takes the form of different equations with different constant factors (see ), other authors also define the cosine transform as and sine as or, the cosine transform as and the sine transform as using as the transformation variable. And while is typically used to represent the time domain, is often used alternatively, particularly when representing frequencies in a spatial domain. Fourier inversion The original function can be recovered from its transform under the usual hypotheses, that and both of its transforms should be absolutely integrable. For more details on the different hypotheses, see Fourier inversion theorem. The inversion formula is which has the advantage that all quantities are real. Using the addition formula for cosine, this can be rewritten as If the original function is an even function, then the sine transform is zero; if is an odd function, then the cosine transform is zero. In either case, the inversion formula simplifies. Relation with complex exponentials The form of the Fourier transform used more often today is Numerical evaluation Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned. Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals This
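For reference, one of the conventions mentioned above, with the factor of 2 on the forward transforms and the frequency ν measured in cycles per unit of t (the choice of normalization is this sketch's assumption, not the article's single definition):

```latex
\hat f^{s}(\nu) = 2\int_{-\infty}^{\infty} f(t)\,\sin(2\pi \nu t)\,dt,
\qquad
\hat f^{c}(\nu) = 2\int_{-\infty}^{\infty} f(t)\,\cos(2\pi \nu t)\,dt,
\\[4pt]
f(t) = \int_{0}^{\infty} \hat f^{c}(\nu)\,\cos(2\pi \nu t)\,d\nu
     \;+\; \int_{0}^{\infty} \hat f^{s}(\nu)\,\sin(2\pi \nu t)\,d\nu.
```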
https://en.wikipedia.org/wiki/Classification%20of%20discontinuities
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist; the limit is constant. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). Classification For each of the following, consider a real valued function of a real variable defined in a neighborhood of the point at which is discontinuous. Removable discontinuity Consider the piecewise function The point is a removable discontinuity. For this kind of discontinuity: The one-sided limit from the negative direction: and the one-sided limit from the positive direction: at both exist, are finite, and are equal to In other words, since the two one-sided limits exist and are equal, the limit of as approaches exists and is equal to this same value. If the actual value of is not equal to then is called a . This discontinuity can be removed to make continuous at or more precisely, the function is continuous at The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point This use is an abuse of terminology because continuity and discontinuity of a function are concepts defined only for points in the function's domain. Jump discontinuity Consider the function Then, the point is a . In this case, a single limit does not exist because the one-sided limits, and exist and are finite, but are not equal: since, the limit does not exist. Then, is called a jump discontinuity, step discontinuity, or discontinuity of the first kind. For this type of discontinuity, the function may have any value at Essential discontinuity For an essential discontinuity, at least one of the two one-sided limits does not exist in . (Notice that one or both one-sided limits can be ). Consider the function Then, the point is an . In this example, both and do not exist in , thus satisfying the condition of essential discontinuity. So is an essential discontinuity, infinite discontinuity, or discontinuity of the second kind. (This is distinct from an essential singularity, which is often used when
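The three cases above can be probed numerically. The sketch below (plain Python with numpy, added here for illustration) estimates the two one-sided limits by sampling ever closer to the point and then applies the definitions; the tolerances are arbitrary, and the whole thing is a heuristic probe rather than a proof, so it can be fooled by sufficiently wild functions:
import numpy as np

def one_sided_limit(f, x0, side):
    # Sample f at x0 + side*10**-k for k = 3..8; if the last few values settle
    # down to a finite number, report it, otherwise report that no limit was found.
    vals = [f(x0 + side * 10.0 ** (-k)) for k in range(3, 9)]
    settled = np.isfinite(vals[-1]) and max(vals[-3:]) - min(vals[-3:]) < 1e-3
    return vals[-1] if settled else None

def classify(f, x0, tol=1e-3):
    left, right = one_sided_limit(f, x0, -1), one_sided_limit(f, x0, +1)
    if left is None or right is None:
        return "essential"                       # a one-sided limit fails to exist
    if abs(left - right) > tol:
        return "jump"                            # both one-sided limits exist but differ
    return "removable" if abs(f(x0) - left) > tol else "continuous"

# Examples in the spirit of the piecewise functions discussed above:
f1 = lambda x: x ** 2 if x < 1 else (0.0 if x == 1 else 2.0 - x)   # removable at 1
f2 = lambda x: 0.0 if x < 0 else 1.0                               # jump at 0
f3 = lambda x: np.sin(1.0 / x) if x != 0 else 0.0                  # essential at 0
print(classify(f1, 1.0), classify(f2, 0.0), classify(f3, 0.0))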
https://en.wikipedia.org/wiki/Galerkin%20method
In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used: Ritz–Galerkin method (after Walther Ritz) typically assumes symmetric and positive definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions. Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator. Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. Petrov–Galerkin method can be viewed as an extension of Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation. Examples of Galerkin methods are: the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method, the boundary element method for solving integral equations, Krylov subspace methods. Example: Matrix linear system We first introduce and illustrate the Galerkin method as being applied to a system of linear equations with the following symmetric and positive definite matrix and the solution and right-hand-side vectors Let us take then the matrix of the Galerkin equation is the right-hand-side vector of the Galerkin equation is so that we obtain the solution vector to the Galerkin equation , which we finally uplift to determine the approximate solution to the original equation as In this example, our original Hilbert space is actually the 3-dimensional Euclidean space equipped with the standard scalar product , our 3-by-3 matrix defines the bilinear form , and the right-hand-side vector defines the bounded linear functional . The columns of the matrix form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix are , while the components of the right-hand-side vector of the Galerkin equation are . Finally, the approximate solution is obtained from the components of the solutio
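The matrix example above carries over directly to a few lines of numpy. The specific matrix and vectors from the original example are not reproduced in this copy, so the values below are placeholders chosen only for illustration (a symmetric positive definite 3x3 matrix A, a right-hand side b, and an orthonormal basis V of a 2-dimensional subspace); the steps, however, are exactly the ones described: form V^T A V and V^T b, solve the small system, and uplift:
import numpy as np

# Placeholder data (not the article's numbers): symmetric positive definite A, rhs b.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Columns of V: an orthonormal basis of the 2-dimensional Galerkin subspace.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

A_g = V.T @ A @ V                 # 2x2 Galerkin matrix (entries v_i . A v_j)
b_g = V.T @ b                     # projected right-hand side (entries v_i . b)
y = np.linalg.solve(A_g, b_g)     # solve the small system
x_approx = V @ y                  # uplift to the original 3-dimensional space

print(x_approx)
print(V.T @ (b - A @ x_approx))   # Galerkin orthogonality: residual is ~[0, 0] on the subspace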
https://en.wikipedia.org/wiki/Thomas%20Tymoczko
A. Thomas Tymoczko (September 1, 1943August 8, 1996) was a philosopher specializing in logic and the philosophy of mathematics. He taught at Smith College in Northampton, Massachusetts from 1971 until his death from stomach cancer in 1996, aged 52. His publications include New Directions in the Philosophy of Mathematics, an edited collection of essays for which he wrote individual introductions, and Sweet Reason: A Field Guide to Modern Logic, co-authored by Jim Henle. In addition, he published a number of philosophical articles, such as "The Four-Color Problem and its Philosophical Significance", which argues that the increasing use of computers is changing the nature of mathematical proof. He is considered to be a member of the fallibilist school in philosophy of mathematics. Philip Kitcher dubbed this school the "maverick" tradition in the philosophy of mathematics. (Paul Ernest) He completed an undergraduate degree from Harvard University in 1965, and his PhD from the same university in 1972. Personal life Tymoczko was married to comparative literature scholar Maria Tymoczko of the University of Massachusetts Amherst. Their three children include music composer Dmitri Tymoczko and Smith College mathematics professor Julianna Tymoczko. References 1943 births 1996 deaths People from New Kensington, Pennsylvania Harvard Graduate School of Arts and Sciences alumni Smith College faculty 20th-century American philosophers Deaths from cancer in the United States Deaths from stomach cancer
https://en.wikipedia.org/wiki/Fermat%20point
In Euclidean geometry, the Fermat point of a triangle, also called the Torricelli point or Fermat–Torricelli point, is a point such that the sum of the three distances from each of the three vertices of the triangle to the point is the smallest possible or, equivalently, the geometric median of the three vertices. It is so named because this problem was first raised by Fermat in a private letter to Evangelista Torricelli, who solved it. The Fermat point gives a solution to the geometric median and Steiner tree problems for three points. Construction The Fermat point of a triangle with largest angle at most 120° is simply its first isogonic center or X(13), which is constructed as follows: Construct an equilateral triangle on each of two arbitrarily chosen sides of the given triangle. Draw a line from each new vertex to the opposite vertex of the original triangle. The two lines intersect at the Fermat point. An alternative method is the following: On each of two arbitrarily chosen sides, construct an isosceles triangle, with base the side in question, 30-degree angles at the base, and the third vertex of each isosceles triangle lying outside the original triangle. For each isosceles triangle draw a circle, in each case with center on the new vertex of the isosceles triangle and with radius equal to each of the two new sides of that isosceles triangle. The intersection inside the original triangle between the two circles is the Fermat point. When a triangle has an angle greater than 120°, the Fermat point is sited at the obtuse-angled vertex. In what follows "Case 1" means the triangle has an angle exceeding 120°. "Case 2" means no angle of the triangle exceeds 120°. Location of X(13) Fig. 2 shows the equilateral triangles attached to the sides of the arbitrary triangle . Here is a proof using properties of concyclic points to show that the three lines in Fig 2 all intersect at the point and cut one another at angles of 60°. The triangles are congruent because the second is a 60° rotation of the first about . Hence and . By the converse of the inscribed angle theorem applied to the segment , the points are concyclic (they lie on a circle). Similarly, the points are concyclic. , so , using the inscribed angle theorem. Similarly, . So . Therefore, . Using the inscribed angle theorem, this implies that the points are concyclic. So, using the inscribed angle theorem applied to the segment , . Because , the point lies on the line segment . So, the lines are concurrent (they intersect at a single point). Q.E.D. This proof applies only in Case 2, since if , point lies inside the circumcircle of which switches the relative positions of and . However it is easily modified to cover Case 1. Then hence which means is concyclic so . Therefore, lies on . The lines joining the centers of the circles in Fig. 2 are perpendicular to the line segments . For example, the line joining the center of the circle containing a
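For a numerical counterpart to the compass-and-straightedge constructions above, the Fermat point can be computed as the geometric median of the three vertices, for example with Weiszfeld's fixed-point iteration. The sketch below (numpy assumed; the triangle coordinates are arbitrary examples) is a standard numerical route rather than the construction described in the article; it also prints the total distance being minimized and checks that each side subtends 120 degrees at the point in Case 2. For a triangle with an angle of 120 degrees or more (Case 1) the minimizer is the obtuse vertex itself, which the small safeguard in the loop accounts for:
import numpy as np

def fermat_point(A, B, C, iters=200):
    # Weiszfeld iteration for the geometric median of the three vertices.
    pts = [A, B, C]
    P = (A + B + C) / 3.0                      # start at the centroid
    for _ in range(iters):
        d = [np.linalg.norm(P - Q) for Q in pts]
        if min(d) < 1e-12:                     # landed on a vertex (Case 1): stop there
            break
        w = [1.0 / di for di in d]
        P = sum(wi * Q for wi, Q in zip(w, pts)) / sum(w)
    return P

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
P = fermat_point(A, B, C)
print(P, sum(np.linalg.norm(P - Q) for Q in (A, B, C)))   # point and minimal total distance

def angle_at_P(U, V):
    u, v = U - P, V - P
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
print(angle_at_P(A, B), angle_at_P(B, C), angle_at_P(C, A))   # each ~ 120 degrees (Case 2)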
https://en.wikipedia.org/wiki/Enterolith
An enterolith is a mineral concretion or calculus formed anywhere in the gastrointestinal system. Enteroliths are uncommon and usually incidental findings but, once found, they require at a minimum watchful waiting. If there is evidence of complications, they must be removed. An enterolith may form around a nidus, a small foreign object such as a seed, pebble, or piece of twine that serves as an irritant. In this respect, an enterolith forms by a process similar to the creation of a pearl. An enterolith is not to be confused with a gastrolith, which helps digestion. In equines Equine enteroliths are found by walking pastures or turning over manure compost piles to find small enteroliths, during necroscopy, and increasingly, during surgery for colic. Therefore, the incidence of asymptomatic enteroliths is unknown. Equine enteroliths are typically smoothly spherical or tetrahedral, consist mostly of the mineral struvite (ammonium magnesium phosphate), and have concentric rings of mineral precipitated around a nidus. Enteroliths in horses were reported widely in the 19th century, infrequently in the early 20th century, and now increasingly. They have also been reported in zebras: five in a zoo in California and one in a zoo in Wisconsin. Struvite enteroliths are associated with elevated pH and mineral concentrations in the lumen. In California, struvite enteroliths are associated also with a high proportion of alfalfa in the feed and less access to grass pasture. This association has been attributed to the cultivation of alfalfa on serpentine soils, resulting in high concentrations of magnesium in the alfalfa. In humans In humans, enteroliths are rare and may be difficult to distinguish from gall stones. Their chemical composition is diverse, and rarely can a nidus be found. A differential diagnosis of an enterolith requires the enterolith, a normal gallbladder, and a diverticulum. An enterolith typically forms within a diverticulum. An enterolith formed in a Meckel's diverticulum sometimes is known as a Meckel's enterolith. Improper use of magnesium oxide as a long-term laxative has been reported to cause enteroliths and/or medication bezoars. Most enteroliths are not apparent and cause no complications. However, any complications that do occur are likely to be severe. Of these, bowel obstruction is most common, followed by ileus and perforation. Bowel obstruction and ileus typically occur when a large enterolith is expelled from a diverticulum into the lumen. Perforation typically occurs within the diverticulum. On plain X-rays,the visibility of the enterolith depends on its calcium content. Calcium-rich stones usually demonstrate a radiodense rim and a relatively radioluscent core. Choleic acid stones are almost always radiolucent. They sometimes can be visualized on CT scans without contrast; presence of contrast in the lumen may reveal the enterolith as a void. Most often, they are visualized using ultrasound. Although r
https://en.wikipedia.org/wiki/Controversy%20over%20Cantor%27s%20theory
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers. Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory. Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum). Cantor's argument Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof. In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers." His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x1, x2, x3, ...), where each xn is either m or w. Each of these elements corresponds to a subset of N—namely, the element (x1, x2, x3, ...) corresponds to {n ∈ N:  xn = w}. So Cantor's argument implies that the set of all subsets of N has greater cardinality than N. The set of all subsets of N is denoted by P(N), the power set of N. Cantor generalized his argument to an arbitrary set A and the set consisting of all functions from A to {0, 1}. Each of these functions corresponds to a subset of A, so his generalized argument implies the theorem: The power set P(A) has greater cardinality than A. This is known as Cantor's theorem. The argument below is a modern version of Cantor's argument that uses power sets (for his original argument, see Cantor's diagonal argument). By presenting a modern argument, it is possible to see which assumptions of axiomatic set theory are used. The first part of the argument proves that N and P(N) have different cardinalities: There exists at least one infinite set. This assumption (not formally specified by Cantor) is captured in formal set theory by the axiom of infinity. This axiom implies that N, the set of all natural nu
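The diagonal construction at the heart of Cantor's theorem can be exercised on a finite set, where exhaustive checking is possible. The short Python sketch below (a finite illustration added here, not part of the argument above) runs over every function g from a 3-element set A into its power set and verifies that the diagonal set D = {a in A : a not in g(a)} is never a value of g, so no such g is onto P(A):
from itertools import combinations, product

A = [0, 1, 2]
power_set = [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

def misses_diagonal(g):
    # g maps each element of A to a subset of A; D is Cantor's diagonal set for g.
    D = frozenset(a for a in A if a not in g[a])
    return D not in g.values()

# Every one of the 8**3 = 512 functions from A to P(A) misses its own diagonal set,
# so none of them is surjective: the finite shadow of Cantor's theorem.
assert all(misses_diagonal(dict(zip(A, choice)))
           for choice in product(power_set, repeat=len(A)))
print("checked", len(power_set) ** len(A), "functions: none is onto the power set")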
https://en.wikipedia.org/wiki/Octagram
In geometry, an octagram is an eight-angled star polygon. The name octagram combine a Greek numeral prefix, octa-, with the Greek suffix -gram. The -gram suffix derives from γραμμή (grammḗ) meaning "line". Detail In general, an octagram is any self-intersecting octagon (8-sided polygon). The regular octagram is labeled by the Schläfli symbol {8/3}, which means an 8-sided star, connected by every third point. Variations These variations have a lower dihedral, Dih4, symmetry: The symbol Rub el Hizb is a Unicode glyph ۞ at U+06DE. As a quasitruncated square Deeper truncations of the square can produce isogonal (vertex-transitive) intermediate star polygon forms with equal spaced vertices and two edge lengths. A truncated square is an octagon, t{4}={8}. A quasitruncated square, inverted as {4/3}, is an octagram, t{4/3}={8/3}. The uniform star polyhedron stellated truncated hexahedron, t'{4,3}=t{4/3,3} has octagram faces constructed from the cube in this way. It may be considered for this reason as a three-dimensional analogue of the octagram. Another three-dimensional version of the octagram is the nonconvex great rhombicuboctahedron (quasirhombicuboctahedron), which can be thought of as a quasicantellated (quasiexpanded) cube, t0,2{4/3,3}. Star polygon compounds There are two regular octagrammic star figures (compounds) of the form {8/k}, the first constructed as two squares {8/2}=2{4}, and second as four degenerate digons, {8/4}=4{2}. There are other isogonal and isotoxal compounds including rectangular and rhombic forms. {8/2} or 2{4}, like Coxeter diagrams + , can be seen as the 2D equivalent of the 3D compound of cube and octahedron, + , 4D compound of tesseract and 16-cell, + and 5D compound of 5-cube and 5-orthoplex; that is, the compound of a n-cube and cross-polytope in their respective dual positions. Other presentations of an octagonal star An octagonal star can be seen as a concave hexadecagon, with internal intersecting geometry erased. It can also be dissected by radial lines. Other uses In Unicode, the "Eight Spoked Asterisk" symbol ✳ is U+2733. The 8-pointed diffraction spikes of the star images from the James Webb Space Telescope are due to the diffraction caused by the hexagonal shape of the mirror sections and the struts holding the secondary mirror. See also Usage Rub el Hizb – Islamic character Star of Ishtar – symbol of the ancient Sumerian goddess Inanna and her East Semitic counterpart Ishtar and Roman Venus. Seshat – the hieroglyph of this ancient Egyptian goddess depicts a seven-petaled flower, forming an octagram with its stem. Star of Lakshmi – Indian character Surya Majapahit – usage during Majapahit times in Indonesia to represent the Hindu gods of the directions Compass rose – usage in compasses to represent the cardinal directions for the eight principal winds Auseklis – usage of regular octagram by Latvians Guñelve – representation of Venus in Mapuche iconography. Selburose – usag
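The Schläfli symbol {8/3} translates directly into coordinates: take the eight vertices of a regular octagon on the unit circle and join every third one until the path closes up. A small Python sketch (numpy assumed, purely illustrative):
import numpy as np

# Vertices of a regular octagon on the unit circle.
verts = [(np.cos(2 * np.pi * k / 8), np.sin(2 * np.pi * k / 8)) for k in range(8)]

# The {8/3} octagram visits every third vertex; after eight steps it returns to the start.
path = [verts[(3 * k) % 8] for k in range(9)]
for x, y in path:
    print(f"({x: .3f}, {y: .3f})")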
https://en.wikipedia.org/wiki/FORMAC
FORMAC, the FORmula MAnipulation Compiler, was the first computer algebra system to have significant use. It was developed by Jean E. Sammet and her team, as an extension of FORTRAN IV. The compiler was implemented as a preprocessor taking the FORMAC program and converting it to a FORTRAN IV program which was in turn compiled without further user intervention. Initial development started in 1962 and was complete by April 1964. In November it was released to IBM customers. FORMAC supported computation, manipulation, and use of symbolic expressions. In addition it supported rational arithmetic. See also ALTRAN References Bibliography External links Computer algebra systems Fortran programming language family Procedural programming languages Programming languages created in 1962 Programming languages created by women
https://en.wikipedia.org/wiki/Joseph%20Z%C3%A4hringer
Joseph Zähringer (often written Josef, March 15, 1929 – July 22, 1970) was a German physicist. From 1949 until 1954 he attended the Universität Freiburg, studying physics, mathematics, chemistry and mineralogy. In 1955 he became an assistant at the university, and in 1956 he came to the Brookhaven National Laboratory in Upton, New York. By 1958 he joined the Max Planck Institute for Nuclear Physics in Heidelberg, Germany as an assistant. He eventually became the director of the institute in 1965. His contributions to astronomy included the study of gas isotopes in meteorites and lunar materials. The crater Zähringer on the Moon is named after him. At Brookhaven National Laboratory Dr. Zahringer worked with Dr. Oliver Schaeffer's cosmochemistry group applying mass spectrometry techniques to the study of rare gases in meteorites. These studies were largely related to determining the exposure ages of meteorites to cosmic rays in space. Dr. Zahringer contributed much of the mass spectrometer technology from the MPI-Heidelberg. From this period until his untimely death Dr. Zahringer collaborated with Dr. Schaeffer who had moved on to found the Earth and Space Sciences Department at Stony Brook University. This collaboration included work on the Apollo 11 & 12 missions. External links Max Planck society brief biography (in German). Astronomy/Planetary Database entries. 1929 births 1970 deaths 20th-century German physicists
https://en.wikipedia.org/wiki/Semicubical%20parabola
In mathematics, a cuspidal cubic or semicubical parabola is an algebraic plane curve that has an implicit equation of the form (with ) in some Cartesian coordinate system. Solving for leads to the explicit form which imply that every real point satisfies . The exponent explains the term semicubical parabola. (A parabola can be described by the equation .) Solving the implicit equation for yields a second explicit form The parametric equation can also be deduced from the implicit equation by putting The semicubical parabolas have a cuspidal singularity; hence the name of cuspidal cubic. The arc length of the curve was calculated by the English mathematician William Neile and published in 1657 (see section History). Properties of semicubical parabolas Similarity Any semicubical parabola is similar to the semicubical unit parabola Proof: The similarity (uniform scaling) maps the semicubical parabola onto the curve with Singularity The parametric representation is regular except at point At point the curve has a singularity (cusp). The proof follows from the tangent vector Only for this vector has zero length. Tangents Differentiating the semicubical unit parabola one gets at point of the upper branch the equation of the tangent: This tangent intersects the lower branch at exactly one further point with coordinates (Proving this statement one should use the fact, that the tangent meets the curve at twice.) Arclength Determining the arclength of a curve one has to solve the integral For the semicubical parabola one gets (The integral can be solved by the substitution Example: For (semicubical unit parabola) and which means the length of the arc between the origin and point (4,8), one gets the arc length 9.073. Evolute of the unit parabola The evolute of the parabola is a semicubical parabola shifted by 1/2 along the x-axis: Polar coordinates In order to get the representation of the semicubical parabola in polar coordinates, one determines the intersection point of the line with the curve. For there is one point different from the origin: This point has distance from the origin. With and ( see List of identities) one gets Relation between a semicubical parabola and a cubic function Mapping the semicubical parabola by the projective map (involutoric perspectivity with axis and center yields hence the cubic function The cusp (origin) of the semicubical parabola is exchanged with the point at infinity of the y-axis. This property can be derived, too, if one represents the semicubical parabola by homogeneous coordinates: In equation (A) the replacement (the line at infinity has equation and the multiplication by is performed. One gets the equation of the curve in homogeneous coordinates: Choosing line as line at infinity and introducing yields the (affine) curve Isochrone curve An additional defining property of the semicubical parabola is that it is an isochrone curve, meaning
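The arc-length value 9.073 quoted above for the segment of the semicubical unit parabola from the origin to (4, 8) is easy to reproduce. The sketch below (Python with scipy assumed) integrates sqrt(1 + (9/4)x) numerically and compares it with the antiderivative obtained from the substitution mentioned in the text, namely (8/27)(1 + 9x/4)^(3/2) evaluated between 0 and 4:
import numpy as np
from scipy.integrate import quad

# Arc length of y = x**(3/2) from the origin to (4, 8): ds = sqrt(1 + (dy/dx)**2) dx.
integrand = lambda x: np.sqrt(1.0 + 2.25 * x)
numeric, _ = quad(integrand, 0.0, 4.0)

# Closed form from the substitution u = 1 + (9/4) x:
closed_form = (8.0 / 27.0) * ((1.0 + 2.25 * 4.0) ** 1.5 - 1.0)
print(numeric, closed_form)        # both approximately 9.073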
https://en.wikipedia.org/wiki/Artin%E2%80%93Schreier%20theory
In mathematics, Artin–Schreier theory is a branch of Galois theory, specifically a positive characteristic analogue of Kummer theory, for Galois extensions of degree equal to the characteristic p. introduced Artin–Schreier theory for extensions of prime degree p, and generalized it to extensions of prime power degree pn. If K is a field of characteristic p, a prime number, any polynomial of the form for in K, is called an Artin–Schreier polynomial. When for all , this polynomial is irreducible in K[X], and its splitting field over K is a cyclic extension of K of degree p. This follows since for any root β, the numbers β + i, for , form all the roots—by Fermat's little theorem—so the splitting field is . Conversely, any Galois extension of K of degree p equal to the characteristic of K is the splitting field of an Artin–Schreier polynomial. This can be proved using additive counterparts of the methods involved in Kummer theory, such as Hilbert's theorem 90 and additive Galois cohomology. These extensions are called Artin–Schreier extensions. Artin–Schreier extensions play a role in the theory of solvability by radicals, in characteristic p, representing one of the possible classes of extensions in a solvable chain. They also play a part in the theory of abelian varieties and their isogenies. In characteristic p, an isogeny of degree p of abelian varieties must, for their function fields, give either an Artin–Schreier extension or a purely inseparable extension. Artin–Schreier–Witt extensions There is an analogue of Artin–Schreier theory which describes cyclic extensions in characteristic p of p-power degree (not just degree p itself), using Witt vectors, developed by . References Section VI.6 Section VI.1 Galois theory
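Both defining facts, irreducibility when the constant term is nonzero and the roots shifting by the prime-field elements, can be checked symbolically for a small prime. The sketch below uses sympy (assumed available); p = 5 and a = 1 are arbitrary illustrative choices:
from sympy import symbols, Poly

X = symbols('X')
p, a = 5, 1
f = Poly(X**p - X - a, X, modulus=p)       # the Artin–Schreier polynomial over GF(p)

# With a != 0, f is irreducible over GF(p): factor_list returns a single factor of degree p.
print(f.factor_list())

# If beta is a root then so is beta + 1: substituting X -> X + 1 gives back the same
# polynomial mod p, because (X + 1)**p = X**p + 1 in characteristic p.
shifted = Poly((X + 1)**p - (X + 1) - a, X, modulus=p)
print(shifted == f)                        # True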
https://en.wikipedia.org/wiki/Floer%20homology
In mathematics, Floer homology is a tool for studying symplectic geometry and low-dimensional topology. Floer homology is a novel invariant that arises as an infinite-dimensional analogue of finite-dimensional Morse homology. Andreas Floer introduced the first version of Floer homology, now called Lagrangian Floer homology, in his proof of the Arnold conjecture in symplectic geometry. Floer also developed a closely related theory for Lagrangian submanifolds of a symplectic manifold. A third construction, also due to Floer, associates homology groups to closed three-dimensional manifolds using the Yang–Mills functional. These constructions and their descendants play a fundamental role in current investigations into the topology of symplectic and contact manifolds as well as (smooth) three- and four-dimensional manifolds. Floer homology is typically defined by associating to the object of interest an infinite-dimensional manifold and a real valued function on it. In the symplectic version, this is the free loop space of a symplectic manifold with the symplectic action functional. For the (instanton) version for three-manifolds, it is the space of SU(2)-connections on a three-dimensional manifold with the Chern–Simons functional. Loosely speaking, Floer homology is the Morse homology of the function on the infinite-dimensional manifold. A Floer chain complex is formed from the abelian group spanned by the critical points of the function (or possibly certain collections of critical points). The differential of the chain complex is defined by counting the function's gradient flow lines connecting certain pairs of critical points (or collections thereof). Floer homology is the homology of this chain complex. The gradient flow line equation, in a situation where Floer's ideas can be successfully applied, is typically a geometrically meaningful and analytically tractable equation. For symplectic Floer homology, the gradient flow equation for a path in the loopspace is (a perturbed version of) the Cauchy–Riemann equation for a map of a cylinder (the total space of the path of loops) to the symplectic manifold of interest; solutions are known as pseudoholomorphic curves. The Gromov compactness theorem is then used to show that the differential is well-defined and squares to zero, so that the Floer homology is defined. For instanton Floer homology, the gradient flow equations is exactly the Yang–Mills equation on the three-manifold crossed with the real line. Symplectic Floer homology Symplectic Floer Homology (SFH) is a homology theory associated to a symplectic manifold and a nondegenerate symplectomorphism of it. If the symplectomorphism is Hamiltonian, the homology arises from studying the symplectic action functional on the (universal cover of the) free loop space of a symplectic manifold. SFH is invariant under Hamiltonian isotopy of the symplectomorphism. Here, nondegeneracy means that 1 is not an eigenvalue of the derivative of
https://en.wikipedia.org/wiki/Set%20Theory%3A%20An%20Introduction%20to%20Independence%20Proofs
Set Theory: An Introduction to Independence Proofs is a textbook and reference work in set theory by Kenneth Kunen. It starts from basic notions, including the ZFC axioms, and quickly develops combinatorial notions such as trees, Suslin's problem, ◊, and Martin's axiom. It develops some basic model theory (rather specifically aimed at models of set theory) and the theory of Gödel's constructible universe L. The book then proceeds to describe the method of forcing. Kunen completely rewrote the book for the 2011 edition (under the title "Set Theory"), including more model theory. References 1980 non-fiction books Mathematics textbooks Set theory
https://en.wikipedia.org/wiki/Full%20reptend%20prime
In number theory, a full reptend prime, full repetend prime, proper prime or long prime in base b is an odd prime number p such that the Fermat quotient (where p does not divide b) gives a cyclic number. Therefore, the base b expansion of repeats the digits of the corresponding cyclic number infinitely, as does that of with rotation of the digits for any a between 1 and p − 1. The cyclic number corresponding to prime p will possess p − 1 digits if and only if p is a full reptend prime. That is, the multiplicative order = p − 1, which is equivalent to b being a primitive root modulo p. The term "long prime" was used by John Conway and Richard Guy in their Book of Numbers. Confusingly, Sloane's OEIS refers to these primes as "cyclic numbers". Base 10 Base 10 may be assumed if no base is specified, in which case the expansion of the number is called a repeating decimal. In base 10, if a full reptend prime ends in the digit 1, then each digit 0, 1, ..., 9 appears in the reptend the same number of times as each other digit. (For such primes in base 10, see . In fact, in base b, if a full reptend prime ends in the digit 1, then each digit 0, 1, ..., b − 1 appears in the repetend the same number of times as each other digit, but no such prime exists when b = 12, since every full reptend prime in base 12 ends in the digit 5 or 7 in the same base. Generally, no such prime exists when b is congruent to 0 or 1 modulo 4. The values of p for which this formula produces cyclic numbers in decimal are: 7, 17, 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193, 223, 229, 233, 257, 263, 269, 313, 337, 367, 379, 383, 389, 419, 433, 461, 487, 491, 499, 503, 509, 541, 571, 577, 593, 619, 647, 659, 701, 709, 727, 743, 811, 821, 823, 857, 863, 887, 937, 941, 953, 971, 977, 983, 1019, 1021, 1033, 1051... For example, the case b = 10, p = 7 gives the cyclic number 142857; thus 7 is a full reptend prime. The case b = 10, p = 17 gives the cyclic number 0588235294117647 (16 digits); thus 17 is a full reptend prime. The case b = 10, p = 19 gives the cyclic number 052631578947368421 (18 digits); thus 19 is a full reptend prime. Not all values of p will yield a cyclic number using this formula; for example, p = 13 gives , and p = 31 gives Failed cases such as these will always contain a repetition of digits (possibly several) over the course of p − 1 digits. The known pattern to this sequence comes from algebraic number theory, specifically, this sequence is the set of primes p such that 10 is a primitive root modulo p. Artin's conjecture on primitive roots is that this sequence contains 37.395...% of the primes. Patterns of occurrence of full reptend primes Advanced modular arithmetic can show that any prime of the following forms: 40k + 1 40k + 3 40k + 9 40k + 13 40k + 27 40k + 31 40k + 37 40k + 39 can never be a full reptend prime in base 10. The first primes of these forms, with their periods, are: However, studies show that two-th
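The characterization in terms of primitive roots is straightforward to test by machine: p is a full reptend prime in base b exactly when the multiplicative order of b modulo p is p - 1, and the corresponding cyclic number is the (p - 1)-digit expansion of (b^(p-1) - 1)/p. A short sketch with sympy (assumed available) reproduces the start of the base-10 list above and the 16-digit cyclic number for p = 17:
from sympy import isprime, n_order

def is_full_reptend(p, b=10):
    # p (a prime not dividing b) is full reptend in base b iff b is a primitive
    # root mod p, i.e. the multiplicative order of b modulo p equals p - 1.
    return isprime(p) and b % p != 0 and n_order(b, p) == p - 1

print([p for p in range(7, 200) if is_full_reptend(p)])
# -> [7, 17, 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193]

p = 17
print(str((10**(p - 1) - 1) // p).zfill(p - 1))   # 0588235294117647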
https://en.wikipedia.org/wiki/O%28n%29
In mathematics, O(n) may refer to: O(n), the orthogonal group Big O notation, indicating the order of growth of some quantity as a function of n or the limiting behavior of a function, e.g. in computational complexity theory The nth tensor power of Serre's twisting sheaf
https://en.wikipedia.org/wiki/Extremal%20combinatorics
Extremal combinatorics is a field of combinatorics, which is itself a part of mathematics. Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of sets; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory. Another kind of example: How many people can be invited to a party where among each three people there are two who know each other and two who don't know each other? Ramsey theory shows that at most five persons can attend such a party. Or, suppose we are given a finite set of nonzero integers, and are asked to mark as large a subset as possible of this set under the restriction that the sum of any two marked integers cannot be marked. It appears that (independent of what the given integers actually are) we can always mark at least one-third of them. See also Extremal graph theory Sauer–Shelah lemma Erdős–Ko–Rado theorem Kruskal–Katona theorem Fisher's inequality Union-closed sets conjecture References . . . Combinatorial optimization
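Sperner's theorem mentioned above, that the largest family of subsets of an n-element set in which no member contains another has size C(n, floor(n/2)), can be confirmed by exhaustive search for very small n. The brute-force sketch below is illustrative only (the search space explodes quickly) and checks n up to 3:
from itertools import chain, combinations
from math import comb

def all_subsets(n):
    return [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]

def is_antichain(family):
    # An antichain: no member is properly contained in another.
    return not any(a < b for a in family for b in family)

def largest_antichain(n):
    subs = all_subsets(n)
    # Try family sizes from largest to smallest; the first antichain found has maximum size.
    families = chain.from_iterable(combinations(subs, k) for k in range(len(subs), -1, -1))
    return next(len(fam) for fam in families if is_antichain(fam))

for n in (1, 2, 3):
    print(n, largest_antichain(n), comb(n, n // 2))   # the last two columns agree (Sperner)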
https://en.wikipedia.org/wiki/XploRe
XploRe was a commercial statistics software package, developed by the German software company MD*Tech around Prof. Dr. Wolfgang Härdle. XploRe was discontinued in 2008; the last version, 4.8, remains available for download at no cost. The user interacted with the software via the XploRe programming language, which was derived from the C programming language. Individual XploRe programs were called Quantlets. Functions Besides the standard functions for one- and multidimensional data analysis, the focus was on non- and semiparametric modelling and the statistics of financial markets. Kernel density estimation and regression (kernel regression) Single index models Generalized linear and additive models (GLM and GAM) Value at risk (VaR) and implied volatilities XploRe Quantlet Client With the XploRe Quantlet Client, users were able to run XploRe as a Java applet in a web browser. The applet sent the user commands via a TCP/IP based communication protocol to the XploRe Quantlet Server, which computed the necessary results and sent them back to the client. This technology was also used to enrich (electronic) books with interactive examples. See also Comparison of statistical packages Literature Härdle, Klinke, Müller. XploRe Learning Guide. Springer. Härdle, Klinke, Müller. XploRe Applications Guide. Springer. External links Discontinued software Windows-only freeware Statistical programming languages
https://en.wikipedia.org/wiki/Nitrosyl%20fluoride
Nitrosyl fluoride (NOF) is a covalently bonded nitrosyl compound. Physical properties The compound is a colorless gas, with bent molecular shape. The VSEPR model explains this geometry via a lone-pair of electrons on the nitrogen atom. Chemistry Nitrosyl fluoride is typically produced by direct reaction of nitric oxide and fluorine, although halogenation with a perfluorinated metal salt is also possible. The compound is a highly reactive fluorinating agent that converts many metals to their fluorides, releasing nitric oxide in the process: n NOF + M → MFn + n NO For this reason, aqueous NOF solutions, like aqua regia, are powerful solvents for metals. Absent an oxidizable metal, NOF reacts with water to form nitrous acid, which then disproportionates to nitric acid: NOF + H2O → HNO2 + HF 3 HNO2 → HNO3 + 2 NO + H2O These reactions occur in both acidic and basic solutions. Nitrosyl fluoride also forms salt-like adducts with Lewis-acidic fluorides; for example, BF3 reacts to give NOBF4. Similarly, the compound nitrosylates compounds with a free proton; thus alcohols convert to nitrites: ROH + NOF → RONO + HF Uses Nitrosyl fluoride is used as a solvent and as a fluorinating and nitrating agent in organic synthesis. It has also been proposed as an oxidizer in rocket propellants. References External links WebBook page for NOF National Pollutant Inventory - Fluoride and compounds fact sheet Nitrosyl compounds Oxyfluorides Fluorinating agents Nitrogen(III) compounds Nitrogen oxohalides
https://en.wikipedia.org/wiki/Massey%20product
In algebraic topology, the Massey product is a cohomology operation of higher order introduced in , which generalizes the cup product. The Massey product was created by William S. Massey, an American algebraic topologist. Massey triple product Let be elements of the cohomology algebra of a differential graded algebra . If , the Massey product is a subset of , where . The Massey product is defined algebraically, by lifting the elements to equivalence classes of elements of , taking the Massey products of these, and then pushing down to cohomology. This may result in a well-defined cohomology class, or may result in indeterminacy. Define to be . The cohomology class of an element of will be denoted by . The Massey triple product of three cohomology classes is defined by The Massey product of three cohomology classes is not an element of , but a set of elements of , possibly empty and possibly containing more than one element. If have degrees , then the Massey product has degree , with the coming from the differential . The Massey product is nonempty if the products and are both exact, in which case all its elements are in the same element of the quotient group So the Massey product can be regarded as a function defined on triples of classes such that the product of the first or last two is zero, taking values in the above quotient group. More casually, if the two pairwise products and both vanish in homology (), i.e., and for some chains and , then the triple product vanishes "for two different reasons" — it is the boundary of and (since and because elements of homology are cycles). The bounding chains and have indeterminacy, which disappears when one moves to homology, and since and have the same boundary, subtracting them (the sign convention is to correctly handle the grading) gives a cocycle (the boundary of the difference vanishes), and one thus obtains a well-defined element of cohomology — this step is analogous to defining the st homotopy or homology group in terms of indeterminacy in null-homotopies/null-homologies of n-dimensional maps/chains. Geometrically, in singular cohomology of a manifold, one can interpret the product dually in terms of bounding manifolds and intersections, following Poincaré duality: dual to cocycles are cycles, often representable as closed manifolds (without boundary), dual to product is intersection, and dual to the subtraction of the bounding products is gluing the two bounding manifolds together along the boundary, obtaining a closed manifold which represents the homology class dual of the Massey product. In reality homology classes of manifolds cannot always be represented by manifolds – a representing cycle may have singularities – but with this caveat the dual picture is correct. Higher order Massey products More generally, the n-fold Massey product of n elements of is defined to be the set of elements of the form for all solutions of the equations , with and , w
https://en.wikipedia.org/wiki/Lie%20algebra%E2%80%93valued%20differential%20form
In differential geometry, a Lie-algebra-valued form is a differential form with values in a Lie algebra. Such forms have important applications in the theory of connections on a principal bundle as well as in the theory of Cartan connections. Formal definition A Lie-algebra-valued differential -form on a manifold, , is a smooth section of the bundle , where is a Lie algebra, is the cotangent bundle of and denotes the exterior power. Wedge product The wedge product of ordinary, real-valued differential forms is defined using multiplication of real numbers. For a pair of Lie algebra–valued differential forms, the wedge product can be defined similarly, but substituting the bilinear Lie bracket operation, to obtain another Lie algebra–valued form. For a -valued -form and a -valued -form , their wedge product is given by where the 's are tangent vectors. The notation is meant to indicate both operations involved. For example, if and are Lie-algebra-valued one forms, then one has The operation can also be defined as the bilinear operation on satisfying for all and . Some authors have used the notation instead of . The notation , which resembles a commutator, is justified by the fact that if the Lie algebra is a matrix algebra then is nothing but the graded commutator of and , i. e. if and then where are wedge products formed using the matrix multiplication on . Operations Let be a Lie algebra homomorphism. If is a -valued form on a manifold, then is an -valued form on the same manifold obtained by applying to the values of : . Similarly, if is a multilinear functional on , then one puts where and are -valued -forms. Moreover, given a vector space , the same formula can be used to define the -valued form when is a multilinear map, is a -valued form and is a -valued form. Note that, when giving amounts to giving an action of on ; i.e., determines the representation and, conversely, any representation determines with the condition . For example, if (the bracket of ), then we recover the definition of given above, with , the adjoint representation. (Note the relation between and above is thus like the relation between a bracket and .) In general, if is a -valued -form and is a -valued -form, then one more commonly writes when . Explicitly, With this notation, one has for example: . Example: If is a -valued one-form (for example, a connection form), a representation of on a vector space and a -valued zero-form, then Forms with values in an adjoint bundle Let be a smooth principal bundle with structure group and . acts on via adjoint representation and so one can form the associated bundle: Any -valued forms on the base space of are in a natural one-to-one correspondence with any tensorial forms on of adjoint type. See also Maurer–Cartan form Adjoint bundle Notes References External links Wedge Product of Lie Algebra Valued One-Form Differential forms Lie algebras
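Under one common convention, the wedge-bracket of two Lie-algebra-valued one-forms evaluates on a pair of tangent vectors as [omega ^ eta](X, Y) = [omega(X), eta(Y)] - [omega(Y), eta(X)]. The numpy sketch below fixes a point, represents matrix-Lie-algebra-valued one-forms on R^2 as maps from tangent vectors to 2x2 matrices, and evaluates this formula; the particular matrices E1, E2 and the forms omega = E1 dx, eta = E2 dy are toy choices for illustration, and the smooth-section structure is deliberately ignored:
import numpy as np

def bracket(A, B):
    return A @ B - B @ A                       # matrix Lie bracket

def wedge_bracket(omega, eta):
    # [omega ^ eta] for matrix-Lie-algebra-valued one-forms, evaluated at a fixed
    # point; omega and eta are given as maps from tangent vectors to matrices.
    return lambda X, Y: bracket(omega(X), eta(Y)) - bracket(omega(Y), eta(X))

# Toy example on R^2 with values in gl(2): omega = E1 dx, eta = E2 dy.
E1 = np.array([[0.0, 1.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [1.0, 0.0]])
omega = lambda v: v[0] * E1
eta = lambda v: v[1] * E2

X, Y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(wedge_bracket(omega, eta)(X, Y))         # [E1, E2] = diag(1, -1)
print(wedge_bracket(omega, eta)(Y, X))         # antisymmetry: the negative of the above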
https://en.wikipedia.org/wiki/Bel%20decomposition
In semi-Riemannian geometry, the Bel decomposition, taken with respect to a specific timelike congruence, is a way of breaking up the Riemann tensor of a pseudo-Riemannian manifold into lower order tensors with properties similar to the electric field and magnetic field. Such a decomposition was partially described by Alphonse Matte in 1953 and by Lluis Bel in 1958. This decomposition is particularly important in general relativity. This is the case of four-dimensional Lorentzian manifolds, for which there are only three pieces with simple properties and individual physical interpretations. Decomposition of the Riemann tensor In four dimensions the Bel decomposition of the Riemann tensor, with respect to a timelike unit vector field , not necessarily geodesic or hypersurface orthogonal, consists of three pieces: the electrogravitic tensor Also known as the tidal tensor. It can be physically interpreted as giving the tidal stresses on small bits of a material object (which may also be acted upon by other physical forces), or the tidal accelerations of a small cloud of test particles in a vacuum solution or electrovacuum solution. the magnetogravitic tensor Can be interpreted physically as a specifying possible spin-spin forces on spinning bits of matter, such as spinning test particles. the topogravitic tensor Can be interpreted as representing the sectional curvatures for the spatial part of a frame field. Because these are all transverse (i.e. projected to the spatial hyperplane elements orthogonal to our timelike unit vector field), they can be represented as linear operators on three-dimensional vectors, or as three-by-three real matrices. They are respectively symmetric, traceless, and symmetric (6,8,6 linearly independent components, for a total of 20). If we write these operators as E, B, L respectively, the principal invariants of the Riemann tensor are obtained as follows: is the trace of E2 + L2 - 2 B BT, is the trace of B ( E - L ), is the trace of E L - B2. See also Bel–Robinson tensor Ricci decomposition Tidal tensor Papapetrou–Dixon equations Curvature invariant References Lorentzian manifolds Tensors in general relativity
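Of the three pieces, the electrogravitic (tidal) tensor is the easiest to compute directly, since it is a double contraction of the Riemann tensor with the timelike unit vector, E_ab = R_ambn u^m u^n. The numpy sketch below does this for a toy constant-curvature Riemann tensor with a Minkowski-signature metric; it is an illustration added here, the index raising and lowering is glossed over, and the magnetogravitic and topogravitic pieces, which additionally involve Hodge duals of the Riemann tensor, are omitted:
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])         # metric, signature (-,+,+,+)
K = 0.5                                    # constant curvature, purely illustrative
# Constant-curvature Riemann tensor: R_abcd = K (g_ac g_bd - g_ad g_bc).
R = K * (np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g))
u = np.array([1.0, 0.0, 0.0, 0.0])         # timelike unit vector, g(u, u) = -1

# Electrogravitic (tidal) tensor E_ab = R_ambn u^m u^n.
E = np.einsum('ambn,m,n->ab', R, u, u)
print(E)                                   # purely spatial here: diag(0, -K, -K, -K)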
https://en.wikipedia.org/wiki/Herbert%20Sichel
Herbert Sichel (1915–1995) was a statistician who made great advances in the areas of both theoretical and applied statistics. He developed the Sichel-t estimator for the log-normal distribution's t-statistic. He also made great leaps in the area of the generalized inverse Gaussian distribution, the mixture of which with the Poisson distribution became known as the Sichel distribution. Dr Sichel pioneered the science of geostatistics with Danie Krige in the early 1950s. Sichel also was well recognised in the field of statistical linguistics. He established the Operational Research Bureau in 1952. He was appointed as professor in Statistics and Operations Research in the Graduate Business School of the University of the Witwatersrand. He has been recognized as "one of the grand old men of the SA Statistical Association". In 1958 he was elected as a Fellow of the American Statistical Association. The Herbert Sichel medal was established in 1997 and is awarded annually to the best statistics paper published in a South African journal in the previous year. References External links https://web.archive.org/web/20060923050607/http://saturn.cs.unp.ac.za/~orssa/history_content.htm – Various historical references Practical applications of some of Dr Sichel's work South African statisticians Operations researchers 1915 births 1995 deaths South African scientists Fellows of the American Statistical Association
https://en.wikipedia.org/wiki/McCullagh%27s%20parametrization%20of%20the%20Cauchy%20distributions
In probability theory, the "standard" Cauchy distribution is the probability distribution whose probability density function (pdf) is for x real. This has median 0, and first and third quartiles respectively −1 and +1. Generally, a Cauchy distribution is any probability distribution belonging to the same location-scale family as this one. Thus, if X has a standard Cauchy distribution and μ is any real number and σ > 0, then Y = μ + σX has a Cauchy distribution whose median is μ and whose first and third quartiles are respectively μ − σ and μ + σ. McCullagh's parametrization, introduced by Peter McCullagh, professor of statistics at the University of Chicago, uses the two parameters of the non-standardised distribution to form a single complex-valued parameter, specifically, the complex number θ = μ + iσ, where i is the imaginary unit. It also extends the usual range of scale parameter to include σ < 0. Although the parameter is notionally expressed using a complex number, the density is still a density over the real line. In particular the density can be written using the real-valued parameters μ and σ, which can each take positive or negative values, as where the distribution is regarded as degenerate if σ = 0. An alternative form for the density can be written using the complex parameter θ = μ + iσ as where . To the question "Why introduce complex numbers when only real-valued random variables are involved?", McCullagh wrote: To this question I can give no better answer than to present the curious result that for all real numbers a, b, c and d. ...the induced transformation on the parameter space has the same fractional linear form as the transformation on the sample space only if the parameter space is taken to be the complex plane. In other words, if the random variable Y has a Cauchy distribution with complex parameter θ, then the random variable Y * defined above has a Cauchy distribution with parameter (aθ + b)/(cθ + d). McCullagh also wrote, "The distribution of the first exit point from the upper half-plane of a Brownian particle starting at θ is the Cauchy density on the real line with parameter θ." In addition, McCullagh shows that the complex-valued parameterisation allows a simple relationship to be made between the Cauchy and the "circular Cauchy distribution". Using the complex parameter also let easily prove the invariance of f-divergences (e.g., Kullback-Leibler divergence, chi-squared divergence, etc.) with respect to real linear fractional transformations (group action of SL(2,R)), and show that all f-divergences between univariate Cauchy densities are symmetric. References Peter McCullagh, "Conditional inference and Cauchy models", Biometrika, volume 79 (1992), pages 247–259. PDF from McCullagh's homepage. Frank Nielsen and Kazuki Okamura, "On f-divergences between Cauchy distributions", arXiv 2101.12459 (2021). Continuous distributions
https://en.wikipedia.org/wiki/Antoine%20Parent
Antoine Parent (September 16, 1666 – September 26, 1716) was a French mathematician, born in Paris and died there, who wrote in 1700 on analytical geometry of three dimensions. His works were collected and published in three volumes at Paris in 1713. Parent had the idea to represent any surface by means of an equation between the three coordinates to any of its points. He derived the correct formula for bending of cantilever beams. He correctly assumed a central neutral axis and linear stress distribution from tensile at the top face to equal and opposite compression at the bottom, thus deriving a correct elastic section modulus of the cross sectional area times the section depth divided by six. Parent's work had little impact, and it was many more years before scientific principles were regularly applied to the analysis of the strength of beams in bending. References 1666 births 1716 deaths 17th-century French mathematicians 18th-century French mathematicians Members of the French Academy of Sciences
https://en.wikipedia.org/wiki/Jean%20Paul%20de%20Gua%20de%20Malves
Jean Paul de Gua de Malves (1713, Malves-en-Minervois (Aude) – June 2, 1785, Paris) was a French mathematician who published in 1740 a work on analytical geometry in which he applied it, without the aid of differential calculus, to find the tangents, asymptotes, and various singular points of an algebraic curve. He further showed how singular points and isolated loops were affected by conical projection. He gave the proof of Descartes's rule of signs which is to be found in most modern works. It is not clear whether Descartes ever proved it strictly, and Newton seems to have regarded it as obvious. De Gua de Malves was acquainted with many of the French philosophes during the last decades of the Ancien Régime. He was an early, short-lived, participant, then editor (later replaced by Diderot) of the project that ended up as the Encyclopédie. Condorcet claimed that it was in fact the de Gua who recruited Diderot to the project, though this claim has never been verified. In either case, Jean-Paul and Jean le Rond d'Alembert, also thought to have been recruited by the de Gua, first show up on the December 1746 payroll of the publishers who were backing the Encyclopédie project. Diderot was added just weeks later and took over as editor on 16 October 1747. At the funeral of the "profound geometrician", as Diderot called him, the eulogy was given by Condorcet. He was elected a Fellow of the Royal Society in 1743. See also De Gua's theorem References Bibliography Arthur M. Wilson: Diderot. Oxford University Press, New York, 1972, pp. 79–81. Nicolas de Condorcet, « Éloge de M. l’abbé de Gua », Œuvres de Condorcet, Firmin Didot frères, 1847-1849, Paris, p. 241-58. (online copy) Rene Taton: Gua De Malves, Jean Paul De. Complete Dictionary of Scientific Biography, 2008. 1713 births 1785 deaths People from Carcassonne English–French translators 18th-century French mathematicians Members of the French Academy of Sciences Contributors to the Encyclopédie (1751–1772) Fellows of the Royal Society French geometers 18th-century French translators
https://en.wikipedia.org/wiki/Curvature%20invariant
In Riemannian geometry and pseudo-Riemannian geometry, curvature invariants are scalar quantities constructed from tensors that represent curvature. These tensors are usually the Riemann tensor, the Weyl tensor, the Ricci tensor and tensors formed from these by the operations of taking dual contractions and covariant differentiations. Types of curvature invariants The invariants most often considered are polynomial invariants. These are polynomials constructed from contractions such as traces. Second degree examples are called quadratic invariants, and so forth. Invariants constructed using covariant derivatives up to order n are called n-th order differential invariants. The Riemann tensor is a multilinear operator of fourth rank acting on tangent vectors. However, it can also be considered a linear operator acting on bivectors, and as such it has a characteristic polynomial, whose coefficients and roots (eigenvalues) are polynomial scalar invariants. Physical applications In metric theories of gravitation such as general relativity, curvature scalars play an important role in telling distinct spacetimes apart. Two of the most basic curvature invariants in general relativity are the Kretschmann scalar and the Chern–Pontryagin scalar, These are analogous to two familiar quadratic invariants of the electromagnetic field tensor in classical electromagnetism. An important unsolved problem in general relativity is to give a basis (and any syzygies) for the zero-th order invariants of the Riemann tensor. They have limitations because many distinct spacetimes cannot be distinguished on this basis. In particular, so called VSI spacetimes (including pp-waves as well as some other Petrov type N and III spacetimes) cannot be distinguished from Minkowski spacetime using any number of polynomial curvature invariants (of any order). See also Cartan–Karlhede algorithm Carminati–McLenaghan invariants Curvature invariant (general relativity) Ricci decomposition References Riemannian geometry
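As a small computational illustration (added here, numpy assumed), the Kretschmann scalar is a full contraction of the Riemann tensor with itself, with all indices raised on one copy using the inverse metric. For a toy constant-curvature tensor R_abcd = K (g_ac g_bd - g_ad g_bc) in four dimensions the value should be 2 K^2 n (n - 1) = 24 K^2, which the sketch confirms:
import numpy as np

def kretschmann(R_lower, g):
    # Kretschmann scalar R_abcd R^abcd: raise all four indices on one copy of the
    # fully covariant Riemann tensor with the inverse metric, then contract.
    g_inv = np.linalg.inv(g)
    R_upper = np.einsum('ae,bf,cg,dh,efgh->abcd', g_inv, g_inv, g_inv, g_inv, R_lower)
    return np.einsum('abcd,abcd->', R_upper, R_lower)

g = np.diag([-1.0, 1.0, 1.0, 1.0])
K = 0.5
R = K * (np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g))
print(kretschmann(R, g), 24 * K**2)          # both 6.0 for this toy tensor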
https://en.wikipedia.org/wiki/Location%E2%80%93scale%20family
In probability theory, especially in mathematical statistics, a location–scale family is a family of probability distributions parametrized by a location parameter and a non-negative scale parameter. For any random variable whose probability distribution function belongs to such a family, the distribution function of also belongs to the family (where means "equal in distribution"—that is, "has the same distribution as"). In other words, a class of probability distributions is a location–scale family if for all cumulative distribution functions and any real numbers and , the distribution function is also a member of . If has a cumulative distribution function , then has a cumulative distribution function . If is a discrete random variable with probability mass function , then is a discrete random variable with probability mass function . If is a continuous random variable with probability density function , then is a continuous random variable with probability density function . Moreover, if and are two random variables whose distribution functions are members of the family, and assuming existence of the first two moments and has zero mean and unit variance, then can be written as , where and are the mean and standard deviation of . In decision theory, if all alternative distributions available to a decision-maker are in the same location–scale family, and the first two moments are finite, then a two-moment decision model can apply, and decision-making can be framed in terms of the means and the variances of the distributions. Examples Often, location–scale families are restricted to those where all members have the same functional form. Most location–scale families are univariate, though not all. Well-known families in which the functional form of the distribution is consistent throughout the family include the following: Normal distribution Elliptical distributions Cauchy distribution Uniform distribution (continuous) Uniform distribution (discrete) Logistic distribution Laplace distribution Student's t-distribution Generalized extreme value distribution Converting a single distribution to a location–scale family The following shows how to implement a location–scale family in a statistical package or programming environment where only functions for the "standard" version of a distribution are available. It is designed for R but should generalize to any language and library. The example here is of the Student's t-distribution, which is normally provided in R only in its standard form, with a single degrees of freedom parameter df. The versions below with _ls appended show how to generalize this to a generalized Student's t-distribution with an arbitrary location parameter mu and scale parameter sigma. Note that the generalized functions do not have standard deviation sigma since the standard t distribution does not have standard deviation of 1. References External links http://www.randomservices.or
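The worked example above is written for R and its code is not reproduced in this copy; the same recipe in Python, using scipy.stats.t as the "standard" distribution, looks as follows. The names dt_ls, pt_ls, qt_ls and rt_ls mirror the R naming described above, and scipy's built-in loc/scale arguments, which implement the same location-scale family, are used as a cross-check:
from scipy import stats

# Location-scale versions of the standard Student's t with location mu and scale sigma.
def dt_ls(x, df, mu, sigma):            # density
    return stats.t.pdf((x - mu) / sigma, df) / sigma

def pt_ls(x, df, mu, sigma):            # cumulative distribution function
    return stats.t.cdf((x - mu) / sigma, df)

def qt_ls(p, df, mu, sigma):            # quantile function
    return mu + sigma * stats.t.ppf(p, df)

def rt_ls(n, df, mu, sigma):            # random draws
    return mu + sigma * stats.t.rvs(df, size=n)

# Cross-check against scipy's own loc/scale handling:
print(dt_ls(0.0, 5, 1.0, 2.0), stats.t.pdf(0.0, 5, loc=1.0, scale=2.0))      # equal
print(qt_ls(0.975, 5, 1.0, 2.0), stats.t.ppf(0.975, 5, loc=1.0, scale=2.0))  # equal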
https://en.wikipedia.org/wiki/Rvachev%20function
In mathematics, an R-function, or Rvachev function, is a real-valued function whose sign does not change if none of the signs of its arguments change; that is, its sign is determined solely by the signs of its arguments. Interpreting positive values as true and negative values as false, an R-function is transformed into a "companion" Boolean function (the two functions are called friends). For instance, the R-function ƒ(x, y) = min(x, y) is one possible friend of the logical conjunction (AND). R-functions are used in computer graphics and geometric modeling in the context of implicit surfaces and the function representation. They also appear in certain boundary-value problems, and are also popular in certain artificial intelligence applications, where they are used in pattern recognition. R-functions were first proposed by () in 1963, though the name, "R-functions", was given later on by Ekaterina L. Rvacheva-Yushchenko, in memory of their father, Logvin Fedorovich Rvachev (). See also Function representation Slesarenko function (S-function) Notes References Meshfree Modeling and Analysis, R-Functions (University of Wisconsin) Pattern Recognition Methods Based on Rvachev Functions (Purdue University) Shape Modeling and Computer Graphics with Real Functions Non-classical logic Real analysis Types of functions
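A minimal Python sketch of the sign correspondence described above, with min as a friend of logical conjunction, max of disjunction, and negation of logical negation (the function names are illustrative):

def r_and(x, y):
    return min(x, y)   # positive exactly when both arguments are positive

def r_or(x, y):
    return max(x, y)   # positive exactly when at least one argument is positive

def r_not(x):
    return -x          # flips the sign, i.e. the truth value

# the sign of r_and depends only on the signs of its arguments:
assert r_and(2.0, 3.5) > 0 and r_and(-1.0, 3.5) < 0 and r_and(-2.0, -0.5) < 0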
https://en.wikipedia.org/wiki/List%20of%20airports%20in%20the%20Czech%20Republic
This is a list of airports in the Czech Republic, grouped by type and sorted by location. Passenger statistics Czech Republic's airports with number of passengers served in 2014 / 2015 years. Airports Railway connections Since 2015, Ostrava Airport has had a railway connection. It is the only airport with a railway connection in the Czech Republic (via line S4), but there are plans to connect Prague Airport to the railway network. See also Czech Air Force Transport in the Czech Republic List of airlines of the Czech Republic List of airports by ICAO code: L#LK – Czech Republic Wikipedia: Airline destination lists: Europe#Czech Republic References Sources Czech Ministry of Transport – includes IATA codes – ICAO codes – IATA and ICAO codes Czech Republic Airports Czech Republic Airports
https://en.wikipedia.org/wiki/Generalized%20inverse%20Gaussian%20distribution
In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function where Kp is a modified Bessel function of the second kind, a > 0, b > 0 and p a real parameter. It is used extensively in geostatistics, statistical linguistics, finance, etc. This distribution was first proposed by Étienne Halphen. It was rediscovered and popularised by Ole Barndorff-Nielsen, who called it the generalized inverse Gaussian distribution. Its statistical properties are discussed in Bent Jørgensen's lecture notes. Properties Alternative parametrization By setting and , we can alternatively express the GIG distribution as where is the concentration parameter while is the scaling parameter. Summation Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible. Entropy The entropy of the generalized inverse Gaussian distribution is given as where is a derivative of the modified Bessel function of the second kind with respect to the order evaluated at Characteristic Function The characteristic of a random variable is given as(for a derivation of the characteristic function, see supplementary materials of ) for where denotes the imaginary number. Related distributions Special cases The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for p = −1/2 and b = 0, respectively. Specifically, an inverse Gaussian distribution of the form is a GIG with , , and . A Gamma distribution of the form is a GIG with , , and . Other special cases include the inverse-gamma distribution, for a = 0. Conjugate prior for Gaussian The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture. Let the prior distribution for some hidden variable, say , be GIG: and let there be observed data points, , with normal likelihood function, conditioned on where is the normal distribution, with mean and variance . Then the posterior for , given the data is also GIG: where . Sichel distribution The Sichel distribution results when the GIG is used as the mixing distribution for the Poisson parameter . Notes References See also Inverse Gaussian distribution Gamma distribution Continuous distributions Exponential family distributions
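In the (a, b, p) parametrization used above, the probability density function of the GIG distribution is usually written as (a standard form):

f(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})}\, x^{p-1}\, e^{-(ax + b/x)/2}, \qquad x > 0.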
https://en.wikipedia.org/wiki/Mohammad%20Kaykobad
Mohammad Kaykobad () is a computer scientist, educator, author, and columnist from Bangladesh. Along with Muhammed Zafar Iqbal, he started the national mathematics olympiad. He was a professor of computer science and engineering at Bangladesh University of Engineering and Technology, and is currently a faculty member in computer science and engineering at BRAC University. He is also a faculty member of the University of Information Technology and Sciences. Education In 1970, Kaykobad finished his SSC from Manikganj Govt. High School and in 1972, his HSC from Debendra College. He did his M.S. in Engineering at the Institute of Marine Engineers, Odesa, Ukraine (then in the USSR), in 1979. He did his M.Eng. in computer applications technology at the Asian Institute of Technology, Thailand, in 1982. He did his PhD at the Flinders University of South Australia in 1986 under the supervision of Dr F. J. M. Salzborn. Career Dr. Kaykobad served as an adviser to ICT Projects for e-Governance in Bangladesh. He was awarded the gold medal for contribution to ICT education at a ceremony at the Bangabandhu International Conference Center by the Bangladesh Computer Society, and was presented the award by the President of Bangladesh on 26 July 2005. He was recognized as the best coach of the ACM International Collegiate Programming Contest by IBM at the 26th World Finals of the ACM ICPC at Honolulu, Hawaii, on 22 March 2002. He researched the computerization of class scheduling at different universities of Bangladesh, which was submitted to the University Grants Commission in 1995. He is a member of the Bangladesh Academy of Sciences. Honors and awards Received the Best Coach award in 2002 at Honolulu, Hawaii Recognized as a distinguished alumnus by the Flinders University of South Australia. References 1954 births Living people Fellows of Bangladesh Academy of Sciences Asian Institute of Technology alumni Academic staff of Bangladesh University of Engineering and Technology People from Manikganj District Bangladeshi computer scientists
https://en.wikipedia.org/wiki/VGG
VGG may refer to: Volgograd Oblast Van de Graaff generator Verkehrsgesellschaft Görlitz Visual Geometry Group, an academic group focused on computer vision at Oxford University A deep convolutional network for object recognition developed and trained by this group. Vice Grip Garage, a popular YouTube channel. Vaush.gg, the website of the streamer Vaush.
https://en.wikipedia.org/wiki/Lists%20of%20tennis%20records%20and%20statistics
The following articles list tennis records and statistics: General Grand Slam Grand Slam List of Grand Slam–related tennis records List of Grand Slam mixed doubles champions List of quad wheelchair tennis champions List of Open Era Grand Slam champions by country List of Grand Slam singles champions by country List of Grand Slam singles champions in Open Era with age of first title Other ITF rankings ITF World Champions List of tennis players career achievements Tennis players with most titles in the Open Era List of highest ranked tennis players per country List of Olympic medalists in tennis List of tennis rivalries Longest tennis match records & Shortest tennis match records Longest tiebreaker in tennis Fastest recorded tennis serves Ace & Double fault Bagel & Golden set Men's tennis Grand Slam Chronological list of men's Grand Slam tennis champions List of Grand Slam men's singles champions List of Grand Slam men's doubles champions List of Grand Slam boys' singles champions List of Grand Slam boys' doubles champions List of men's wheelchair tennis champions List of Grand Slam men's singles finals Tennis performance timeline comparison (men) Major professional tennis tournaments before the Open Era Other All-time tennis records – Men's singles Open Era tennis records – Men's singles Tennis male players statistics World number 1 ranked male tennis players Top ten ranked male tennis players Top ten ranked male tennis players (1912–1972) Tennis Masters Series singles records and statistics Tennis Masters Series doubles records and statistics List of Davis Cup champions ATP ATP Tour records ATP rankings List of ATP number 1 ranked singles tennis players List of ATP number 1 ranked doubles tennis players List of ATP Tour top-level tournament singles champions List of ATP Tour top-level tournament doubles champions ATP Awards ATP Finals appearances ATP Cup champions Women's tennis Grand Slam Chronological list of women's Grand Slam tennis champions List of Grand Slam women's singles champions List of Grand Slam women's doubles champions List of Grand Slam girls' singles champions List of Grand Slam girls' doubles champions List of women's wheelchair tennis champions List of Grand Slam women's singles finals Tennis performance timeline comparison (women) Tennis performance timeline comparison (women) (1884–1977) Other All-time tennis records – Women's singles Open Era tennis records – Women's singles World number 1 ranked female tennis players Top ten ranked female tennis players Top ten ranked female tennis players (1921–1974) List of Billie Jean King Cup champions WTA 1000 Series singles records and statistics WTA 1000 Series doubles records and statistics WTA WTA Tour records WTA rankings List of WTA number 1 ranked singles tennis players List of WTA number 1 ranked doubles tennis players List of WTA Tour top-level tournament singles champions List of WTA To
https://en.wikipedia.org/wiki/Fredholm%20determinant
In mathematics, the Fredholm determinant is a complex-valued function which generalizes the determinant of a finite dimensional linear operator. It is defined for bounded operators on a Hilbert space which differ from the identity operator by a trace-class operator. The function is named after the mathematician Erik Ivar Fredholm. Fredholm determinants have had many applications in mathematical physics, the most celebrated example being Gábor Szegő's limit formula, proved in response to a question raised by Lars Onsager and C. N. Yang on the spontaneous magnetization of the Ising model. Definition Let be a Hilbert space and the set of bounded invertible operators on of the form , where is a trace-class operator. is a group because so is trace class if is. It has a natural metric given by , where is the trace-class norm. If is a Hilbert space with inner product , then so too is the th exterior power with inner product In particular gives an orthonormal basis of if is an orthonormal basis of . If is a bounded operator on , then functorially defines a bounded operator on by If is trace-class, then is also trace-class with This shows that the definition of the Fredholm determinant given by makes sense. Properties If is a trace-class operator defines an entire function such that The function is continuous on trace-class operators, with One can improve this inequality slightly to the following, as noted in Chapter 5 of Simon: If and are trace-class then The function defines a homomorphism of into the multiplicative group of nonzero complex numbers (since elements of are invertible). If is in and is invertible, If is trace-class, then Fredholm determinants of commutators A function from into is said to be differentiable if is differentiable as a map into the trace-class operators, i.e. if the limit exists in trace-class norm. If is a differentiable function with values in trace-class operators, then so too is and where Israel Gohberg and Mark Krein proved that if is a differentiable function into , then is a differentiable map into with This result was used by Joel Pincus, William Helton and Roger Howe to prove that if and are bounded operators with trace-class commutator , then Szegő limit formula Let and let be the orthogonal projection onto the Hardy space . If is a smooth function on the circle, let denote the corresponding multiplication operator on . The commutator is trace-class. Let be the Toeplitz operator on defined by then the additive commutator is trace-class if and are smooth. Berger and Shaw proved that If and are smooth, then is in . Harold Widom used the result of Pincus-Helton-Howe to prove that where He used this to give a new proof of Gábor Szegő's celebrated limit formula: where is the projection onto the subspace of spanned by and . Szegő's limit formula was proved in 1951 in response to a question raised by the work Lars
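For a trace-class operator A, the Fredholm determinant referred to above is commonly defined by the series

\det(I + A) = \sum_{k=0}^{\infty} \operatorname{Tr}\bigl(\Lambda^{k}(A)\bigr),

which converges for every trace-class A and satisfies the bound |\det(I + A)| \le \exp(\|A\|_1) (a standard presentation; see Simon for the sharper Lipschitz estimate mentioned in the text).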
https://en.wikipedia.org/wiki/Boy%20or%20Girl%20paradox
The Boy or Girl paradox surrounds a set of questions in probability theory, which are also known as The Two Child Problem, Mr. Smith's Children and the Mrs. Smith Problem. The initial formulation of the question dates back to at least 1959, when Martin Gardner featured it in his October 1959 "Mathematical Games column" in Scientific American. He titled it The Two Children Problem, and phrased the paradox as follows: Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls? Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys? Gardner initially gave the answers and , respectively, but later acknowledged that the second question was ambiguous. Its answer could be , depending on the procedure by which the information "at least one of them is a boy" was obtained. The ambiguity, depending on the exact wording and possible assumptions, was confirmed by Maya Bar-Hillel and Ruma Falk, and Raymond S. Nickerson. Other variants of this question, with varying degrees of ambiguity, have been popularized by Ask Marilyn in Parade Magazine, John Tierney of The New York Times, and Leonard Mlodinow in The Drunkard's Walk. One scientific study showed that when identical information was conveyed, but with different partially ambiguous wordings that emphasized different points, the percentage of MBA students who answered changed from 85% to 39%. The paradox has stimulated a great deal of controversy. The paradox stems from whether the problem setup is similar for the two questions. The intuitive answer is . This answer is intuitive if the question leads the reader to believe that there are two equally likely possibilities for the sex of the second child (i.e., boy and girl), and that the probability of these outcomes is absolute, not conditional. Gender assumptions Although Gardner envisioned the paradox being considered in a world in which gender was static and binary, and the distribution of children was uniform across that gender binary, his framing of the problem does not state or require those assumptions. The difference between the two questions is equally interesting from a mathematical point of view in a world in which P(girl) and P(boy) are well-defined across a population of individuals at a given time, but are not necessarily equal or static and do not necessarily add to one. The remainder of this article makes the assumptions listed below, which appear to have been shared by Gardner and many others who have analyzed the problem. Readers who are troubled by the theory of gender underlying these assumptions may prefer to consider the discussion below as referring to a situation in which each of the two parents in question has flipped two fair coins (each of which has “B” on one face and “G” on the other), the reference to birth order is to the order of the coin flips, and the references to genders are to the faces of the coins that are showing
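A minimal Python sketch of the two computations under Gardner's original reading (each child independently and equally likely to be a boy or a girl, so the four ordered outcomes BB, BG, GB, GG are equally likely; this uniform model is the assumption, as discussed above):

from fractions import Fraction

families = [(a, b) for a in "BG" for b in "BG"]   # (older, younger), four equally likely outcomes

# Question 1: the older child is a girl
q1 = [f for f in families if f[0] == "G"]
p_both_girls = Fraction(sum(f == ("G", "G") for f in q1), len(q1))   # 1/2

# Question 2: at least one child is a boy
q2 = [f for f in families if "B" in f]
p_both_boys = Fraction(sum(f == ("B", "B") for f in q2), len(q2))    # 1/3

print(p_both_girls, p_both_boys)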
https://en.wikipedia.org/wiki/Ring%20of%20sets
In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets. In order theory, a nonempty family of sets is called a ring (of sets) if it is closed under union and intersection. That is, the following two statements are true for all sets and , implies and implies In measure theory, a nonempty family of sets is called a ring (of sets) if it is closed under union and relative complement (set-theoretic difference). That is, the following two statements are true for all sets and , implies and implies This implies that a ring in the measure-theoretic sense always contains the empty set. Furthermore, for all sets and , which shows that a family of sets closed under relative complement is also closed under intersection, so that a ring in the measure-theoretic sense is also a ring in the order-theoretic sense. Examples If is any set, then the power set of (the family of all subsets of ) forms a ring of sets in either sense. If is a partially ordered set, then its upper sets (the subsets of with the additional property that if belongs to an upper set U and , then must also belong to ) are closed under both intersections and unions. However, in general it will not be closed under differences of sets. The open sets and closed sets of any topological space are closed under both unions and intersections. On the real line , the family of sets consisting of the empty set and all finite unions of half-open intervals of the form , with is a ring in the measure-theoretic sense. If is any transformation defined on a space, then the sets that are mapped into themselves by are closed under both unions and intersections. If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets. Related structures A ring of sets in the order-theoretic sense forms a distributive lattice in which the intersection and union operations correspond to the lattice's meet and join operations, respectively. Conversely, every distributive lattice is isomorphic to a ring of sets; in the case of finite distributive lattices, this is Birkhoff's representation theorem and the sets may be taken as the lower sets of a partially ordered set. A family of sets closed under union and relative complement is also closed under symmetric difference and intersection. Conversely, every family of sets closed under both symmetric difference and intersection is also closed under union and relative complement. This is due to the identities and Symmetric difference and intersection together give a ring in the measure-theoretic sense the structure of a boolean ring. In the measure-theoretic sense, a is a ring closed under unions, and a δ-ring is a ring closed under countable intersections. Explicitly, a σ-ring over is a set such that for any sequence we have Given a set a − also called an − is a ring that contains This definition entails that a
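The two identities alluded to above, which let union and relative complement be recovered from symmetric difference and intersection, are

A \cup B = (A \,\triangle\, B) \,\triangle\, (A \cap B) \qquad\text{and}\qquad A \setminus B = A \,\triangle\, (A \cap B).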
https://en.wikipedia.org/wiki/Guanta%20Municipality
The Guanta Municipality is one of the 21 municipalities (municipios) that make up the eastern Venezuelan state of Anzoátegui and, according to the 2011 census by the National Institute of Statistics of Venezuela, the municipality has a population of 30,891. The town of Guanta is the shire town of the Guanta Municipality. History Guanta dates from the completion of the railway to the coal mines of Naricual and Capiricual nearly beyond Barcelona, and was created for the shipment of coal. Demographics The Guanta Municipality, according to a 2007 population estimate by the National Institute of Statistics of Venezuela, had a population of 31,629 (up from 28,542 in 2000). This amounted to 2.1% of the state's population. The municipality's population density is . Government The mayor of the Guanta Municipality is Jhonnathan Marín, elected on 23 November 2008 with 58% of the vote. He replaced Luis Alfredo Cardozo Belizario shortly after the elections. The municipality is divided into two parishes: Guanta and Chorrerón (prior to 27 June 1995, the Guanta Municipality contained only a single parish). See also Guanta Anzoátegui Municipalities of Venezuela References External links guanta-anzoategui.gob.ve Municipalities of Anzoategui
https://en.wikipedia.org/wiki/Twistor%20space
In mathematics and theoretical physics (especially twistor theory), twistor space is the complex vector space of solutions of the twistor equation . It was described in the 1960s by Roger Penrose and Malcolm MacCallum. According to Andrew Hodges, twistor space is useful for conceptualizing the way photons travel through space, using four complex numbers. He also posits that twistor space may aid in understanding the asymmetry of the weak nuclear force. Informal motivation In the (translated) words of Jacques Hadamard: "the shortest path between two truths in the real domain passes through the complex domain." Therefore when studying four-dimensional space it might be valuable to identify it with However, since there is no canonical way of doing so, instead all isomorphisms respecting orientation and metric between the two are considered. It turns out that complex projective 3-space parametrizes such isomorphisms together with complex coordinates. Thus one complex coordinate describes the identification and the other two describe a point in . It turns out that vector bundles with self-dual connections on (instantons) correspond bijectively to holomorphic vector bundles on complex projective 3-space . Formal definition For Minkowski space, denoted , the solutions to the twistor equation are of the form where and are two constant Weyl spinors and is a point in Minkowski space. The are the Pauli matrices, with the indexes on the matrices. This twistor space is a four-dimensional complex vector space, whose points are denoted by , and with a hermitian form which is invariant under the group SU(2,2) which is a quadruple cover of the conformal group C(1,3) of compactified Minkowski spacetime. Points in Minkowski space are related to subspaces of twistor space through the incidence relation This incidence relation is preserved under an overall re-scaling of the twistor, so usually one works in projective twistor space, denoted , which is isomorphic as a complex manifold to . Given a point it is related to a line in projective twistor space where we can see the incidence relation as giving the linear embedding of a parametrized by . The geometric relation between projective twistor space and complexified compactified Minkowski space is the same as the relation between lines and two-planes in twistor space; more precisely, twistor space is It has associated to it the double fibration of flag manifolds where is the projective twistor space and is the compactified complexified Minkowski space and the correspondence space between and is In the above, stands for projective space, a Grassmannian, and a flag manifold. The double fibration gives rise to two correspondences (see also Penrose transform), and The compactified complexified Minkowski space is embedded in by the Plücker embedding; the image is the Klein quadric. References Complex manifolds
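In the two-spinor notation used above, a common way of writing the solution of the twistor equation and the incidence relation is (conventions for factors of i and signs differ between authors):

\Omega^{A}(x) = \omega^{A} - i\, x^{AA'} \pi_{A'}, \qquad Z^{\alpha} = (\omega^{A}, \pi_{A'}), \qquad \omega^{A} = i\, x^{AA'} \pi_{A'}.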
https://en.wikipedia.org/wiki/Topological%20algebra
In mathematics, a topological algebra is an algebra and at the same time a topological space, where the algebraic and the topological structures are coherent in a specified sense. Definition A topological algebra over a topological field is a topological vector space together with a bilinear multiplication , that turns into an algebra over and is continuous in some definite sense. Usually the continuity of the multiplication is expressed by one of the following (non-equivalent) requirements: joint continuity: for each neighbourhood of zero there are neighbourhoods of zero and such that (in other words, this condition means that the multiplication is continuous as a map between topological spaces or stereotype continuity: for each totally bounded set and for each neighbourhood of zero there is a neighbourhood of zero such that and , or separate continuity: for each element and for each neighbourhood of zero there is a neighbourhood of zero such that and . (Certainly, joint continuity implies stereotype continuity, and stereotype continuity implies separate continuity.) In the first case is called a "topological algebra with jointly continuous multiplication", and in the last, "with separately continuous multiplication". A unital associative topological algebra is (sometimes) called a topological ring. History The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). Examples 1. Fréchet algebras are examples of associative topological algebras with jointly continuous multiplication. 2. Banach algebras are special cases of Fréchet algebras. 3. Stereotype algebras are examples of associative topological algebras with stereotype continuous multiplication. Notes External links References Topological vector spaces Algebras
https://en.wikipedia.org/wiki/Lilliefors%20test
In statistics, the Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. It is used to test the null hypothesis that data come from a normally distributed population, when the null hypothesis does not specify which normal distribution; i.e., it does not specify the expected value and variance of the distribution. It is named after Hubert Lilliefors, professor of statistics at George Washington University. A variant of the test can be used to test the null hypothesis that data come from an exponentially distributed population, when the null hypothesis does not specify which exponential distribution. The test The test proceeds as follows: First estimate the population mean and population variance based on the data. Then find the maximum discrepancy between the empirical distribution function and the cumulative distribution function (CDF) of the normal distribution with the estimated mean and estimated variance. Just as in the Kolmogorov–Smirnov test, this will be the test statistic. Finally, assess whether the maximum discrepancy is large enough to be statistically significant, thus requiring rejection of the null hypothesis. This is where this test becomes more complicated than the Kolmogorov–Smirnov test. Since the hypothesised CDF has been moved closer to the data by estimation based on those data, the maximum discrepancy has been made smaller than it would have been if the null hypothesis had singled out just one normal distribution. Thus the "null distribution" of the test statistic, i.e. its probability distribution assuming the null hypothesis is true, is stochastically smaller than the Kolmogorov–Smirnov distribution. This is the Lilliefors distribution. To date, tables for this distribution have been computed only by Monte Carlo methods. In 1986 a corrected table of critical values for the test was published. See also Jarque–Bera test Shapiro–Wilk test References Sources Conover, W.J. (1999), "Practical nonparametric statistics", 3rd ed. Wiley : New York. External links Lilliefors test in R Lilliefors test in Python Lilliefors test on Mathworks Normality tests
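A minimal Python sketch of the procedure described above: fit the mean and variance, take the Kolmogorov–Smirnov distance to the fitted normal, and approximate the Lilliefors null distribution by Monte Carlo rather than by published tables (the function names and the number of replications are illustrative choices):

import numpy as np
from scipy.stats import norm

def ks_stat_fitted_normal(x):
    # maximum discrepancy between the empirical CDF and the normal CDF
    # whose mean and standard deviation are estimated from the same data
    x = np.sort(x)
    n = len(x)
    cdf = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

def lilliefors_pvalue(x, n_sim=10_000, rng=np.random.default_rng(0)):
    d_obs = ks_stat_fitted_normal(x)
    # null distribution: samples actually drawn from a normal, re-fitted each time;
    # it does not depend on the true mean or variance, so standard normals suffice
    d_null = np.array([ks_stat_fitted_normal(rng.standard_normal(len(x)))
                       for _ in range(n_sim)])
    return np.mean(d_null >= d_obs)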
https://en.wikipedia.org/wiki/Empirical%20distribution%20function
In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by at each of the data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value. The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function. Definition Let be independent, identically distributed real random variables with the common cumulative distribution function . Then the empirical distribution function is defined as where is the indicator of event . For a fixed , the indicator is a Bernoulli random variable with parameter ; hence is a binomial random variable with mean and variance . This implies that is an unbiased estimator for . However, in some textbooks, the definition is given as Mean The mean of the empirical distribution is an unbiased estimator of the mean of the population distribution. which is more commonly denoted Variance The variance of the empirical distribution times is an unbiased estimator of the variance of the population distribution, for any distribution of X that has a finite variance. Mean squared error The mean squared error for the empirical distribution is as follows. Where is an estimator and an unknown parameter Quantiles For any real number the notation (read “ceiling of a”) denotes the least integer greater than or equal to . For any real number a, the notation (read “floor of a”) denotes the greatest integer less than or equal to . If is not an integer, then the -th quantile is unique and is equal to If is an integer, then the -th quantile is not unique and is any real number such that Empirical median If is odd, then the empirical median is the number If is even, then the empirical median is the number Asymptotic properties Since the ratio approaches 1 as goes to infinity, the asymptotic properties of the two definitions that are given above are the same. By the strong law of large numbers, the estimator converges to as almost surely, for every value of : thus the estimator is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over : The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the
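A minimal Python sketch of the definition above, returning the fraction of observations less than or equal to the evaluation point:

import numpy as np

def ecdf(sample):
    x = np.asarray(sample)
    n = len(x)
    def F_n(t):
        # fraction of observations <= t, i.e. the mean of n Bernoulli indicators
        return np.count_nonzero(x <= t) / n
    return F_n

F = ecdf([3.1, 1.2, 4.7, 1.2, 2.0])
print(F(1.2), F(2.5), F(5.0))   # 0.4 0.6 1.0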
https://en.wikipedia.org/wiki/Morse%E2%80%93Kelley%20set%20theory
In the foundations of mathematics, Morse–Kelley set theory (MK), Kelley–Morse set theory (KM), Morse–Tarski set theory (MT), Quine–Morse set theory (QM) or the system of Quine and Morse is a first-order axiomatic set theory that is closely related to von Neumann–Bernays–Gödel set theory (NBG). While von Neumann–Bernays–Gödel set theory restricts the bound variables in the schematic formula appearing in the axiom schema of Class Comprehension to range over sets alone, Morse–Kelley set theory allows these bound variables to range over proper classes as well as sets, as first suggested by Quine in 1940 for his system ML. Morse–Kelley set theory is named after mathematicians John L. Kelley and Anthony Morse and was first set out by and later in an appendix to Kelley's textbook General Topology (1955), a graduate level introduction to topology. Kelley said the system in his book was a variant of the systems due to Thoralf Skolem and Morse. Morse's own version appeared later in his book A Theory of Sets (1965). While von Neumann–Bernays–Gödel set theory is a conservative extension of Zermelo–Fraenkel set theory (ZFC, the canonical set theory) in the sense that a statement in the language of ZFC is provable in NBG if and only if it is provable in ZFC, Morse–Kelley set theory is a proper extension of ZFC. Unlike von Neumann–Bernays–Gödel set theory, where the axiom schema of Class Comprehension can be replaced with finitely many of its instances, Morse–Kelley set theory cannot be finitely axiomatized. MK axioms and ontology NBG and MK share a common ontology. The universe of discourse consists of classes. Classes that are members of other classes are called sets. A class that is not a set is a proper class. The primitive atomic sentences involve membership or equality. With the exception of Class Comprehension, the following axioms are the same as those for NBG, inessential details aside. The symbolic versions of the axioms employ the following notational devices: The upper case letters other than M, appearing in Extensionality, Class Comprehension, and Foundation, denote variables ranging over classes. A lower case letter denotes a variable that cannot be a proper class, because it appears to the left of an ∈. As MK is a one-sorted theory, this notational convention is only mnemonic. The monadic predicate whose intended reading is "the class x is a set", abbreviates The empty set is defined by The class V, the universal class having all possible sets as members, is defined by V is also the von Neumann universe. Extensionality: Classes having the same members are the same class. A set and a class having the same extension are identical. Hence MK is not a two-sorted theory, appearances to the contrary notwithstanding. Foundation: Each nonempty class A is disjoint from at least one of its members. Class Comprehension: Let φ(x) be any formula in the language of MK in which x is a free variable and Y is not free. φ(x) may contain parameter
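In class-builder notation, the two definitions referred to above and the Extensionality axiom are commonly written as (a standard presentation; lower-case x ranges over sets):

\varnothing = \{x : x \neq x\}, \qquad V = \{x : x = x\}, \qquad \forall X\,\forall Y\,[\forall z\,(z \in X \leftrightarrow z \in Y) \rightarrow X = Y].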
https://en.wikipedia.org/wiki/Point%20groups%20in%20three%20dimensions
In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O(3), the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O(3) itself is a subgroup of the Euclidean group E(3) of all isometries. Symmetry groups of geometric objects are isometry groups. Accordingly, analysis of isometry groups is analysis of possible symmetries. All isometries of a bounded (finite) 3D object have one or more common fixed points. We follow the usual convention by choosing the origin as one of them. The symmetry group of an object is sometimes also called its full symmetry group, as opposed to its proper symmetry group, the intersection of its full symmetry group with E+(3), which consists of all direct isometries, i.e., isometries preserving orientation. For a bounded object, the proper symmetry group is called its rotation group. It is the intersection of its full symmetry group with SO(3), the full rotation group of the 3D space. The rotation group of a bounded object is equal to its full symmetry group if and only if the object is chiral. The point groups that are generated purely by a finite set of reflection mirror planes passing through the same point are the finite Coxeter groups, represented by Coxeter notation. The point groups in three dimensions are heavily used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups. 3D isometries that leave origin fixed The symmetry group operations (symmetry operations) are the isometries of three-dimensional space R3 that leave the origin fixed, forming the group O(3). These operations can be categorized as: The direct (orientation-preserving) symmetry operations, which form the group SO(3): The identity operation, denoted by E or the identity matrix I. Rotation about an axis through the origin by an angle θ. Rotation by θ = 360°/n for any positive integer n is denoted Cn (from the Schoenflies notation for the group Cn that it generates). The identity operation, also written C1, is a special case of the rotation operator. The indirect (orientation-reversing) operations: Inversion, denoted i or Ci. The matrix notation is −I. Reflection in a plane through the origin, denoted σ. Improper rotation, also called rotation-reflection: rotation about an axis by an angle θ, combined with reflection in the plane through the origin perpendicular to the axis. Rotation-reflection by θ = 360°/n for any positive integer n is denoted Sn (from the Schoenflies notation for the group Sn that it generates if n is even). Inversion is a special case of rotation-reflection (i = S2), as is reflection (σ = S1), so these operations are often considered to be improper rotations. A circumflex is sometimes added to
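As a concrete illustration of the operations listed above, take the rotation axis along z; then C_n and S_n act by the matrices (a standard choice of axis and orientation)

C_n = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad S_n = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad \theta = \frac{2\pi}{n},

so that S_2 = -I is the inversion i and S_1 is the reflection σ in the xy-plane.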
https://en.wikipedia.org/wiki/Karl-Henning%20Rehren
Karl-Henning Rehren (born 1956 in Celle) is a German physicist who focuses on algebraic quantum field theory. Biography Rehren studied physics in Heidelberg, Paris and Freiburg. In Freiburg he received his PhD (advisor Klaus Pohlmeyer) in 1984. Habilitation 1991 in Berlin. Since 1997 he teaches physics in Göttingen. He became notable outside his field, especially among string theorists, in 1999 when he discovered the Algebraic holography (also called Rehren duality), a relation between quantum field theories AdSd+1 and conformal quantum field theories on d-dimensional Minkowski spacetime, which is similar in scope to the Holographic principle. This work has no direct relation to the more well known Maldacena duality, but refers to the more general statement of the AdS/CFT correspondence by Edward Witten. It is generally accepted that the relation found by Rehren does not provide a proof for Witten's conjecture and is thus considered an independent result. Selected publications See also AdS/CFT correspondence Axiomatic quantum field theory Conformal field theory Local quantum physics Quantum field theory Rehren duality References External links . . Author page on INSPIRE-HEP 1956 births Living people 20th-century German physicists Heidelberg University alumni University of Paris alumni University of Freiburg alumni Academic staff of the University of Göttingen People from Celle 21st-century German physicists
https://en.wikipedia.org/wiki/Superconformal%20algebra
In theoretical physics, the superconformal algebra is a graded Lie algebra or superalgebra that combines the conformal algebra and supersymmetry. In two dimensions, the superconformal algebra is infinite-dimensional. In higher dimensions, superconformal algebras are finite-dimensional and generate the superconformal group (in two Euclidean dimensions, the Lie superalgebra does not generate any Lie supergroup). Superconformal algebra in dimension greater than 2 The conformal group of the -dimensional space is and its Lie algebra is . The superconformal algebra is a Lie superalgebra containing the bosonic factor and whose odd generators transform in spinor representations of . Given Kac's classification of finite-dimensional simple Lie superalgebras, this can only happen for small values of and . A (possibly incomplete) list is in 3+0D thanks to ; in 2+1D thanks to ; in 4+0D thanks to ; in 3+1D thanks to ; in 2+2D thanks to ; real forms of in five dimensions in 5+1D, thanks to the fact that spinor and fundamental representations of are mapped to each other by outer automorphisms. Superconformal algebra in 3+1D According to the superconformal algebra with supersymmetries in 3+1 dimensions is given by the bosonic generators , , , , the U(1) R-symmetry , the SU(N) R-symmetry and the fermionic generators , , and . Here, denote spacetime indices; left-handed Weyl spinor indices; right-handed Weyl spinor indices; and the internal R-symmetry indices. The Lie superbrackets of the bosonic conformal algebra are given by where η is the Minkowski metric; while the ones for the fermionic generators are: The bosonic conformal generators do not carry any R-charges, as they commute with the R-symmetry generators: But the fermionic generators do carry R-charge: Under bosonic conformal transformations, the fermionic generators transform as: Superconformal algebra in 2D There are two possible algebras with minimal supersymmetry in two dimensions; a Neveu–Schwarz algebra and a Ramond algebra. Additional supersymmetry is possible, for instance the N = 2 superconformal algebra. See also Conformal symmetry Super Virasoro algebra Supersymmetry algebra References Conformal field theory Supersymmetry Lie algebras
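As an illustration of the fermionic brackets referred to above, in one common convention (normalizations and signs vary between references) the basic supersymmetry anticommutator reads

\{Q^{A}_{\alpha}, \bar{Q}_{\dot{\alpha} B}\} = 2\, \delta^{A}_{B}\, \sigma^{\mu}_{\alpha \dot{\alpha}}\, P_{\mu},

with an analogous bracket between the special superconformal generators S and \bar{S} producing K_{\mu}.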
https://en.wikipedia.org/wiki/158%20%28number%29
158 (one hundred [and] fifty-eight) is the natural number following 157 and preceding 159. In mathematics 158 is a nontotient, since there is no integer with 158 coprimes below it. 158 is a Perrin number, appearing after 68, 90, 119. 158 is the number of digits in the decimal expansion of 100!, the product of all the natural numbers up to and including 100. In the military was a United States Navy during World War II was a United States Navy during World War II was a United States Navy during World War II was a United States Navy following World War II was a United States Navy during World War II was a United States Navy Trefoil-class concrete barge during World War II was a United States Navy during World War II was a United States Navy converted yacht patrol vessel during World War I In music The song 158 by the Indie-rock band Blackbud The song "Here We Go" (1998) from The Bouncing Souls’ Tie One On CD includes the lyrics "Me, Shal Pete and Lamar thumbed down the ramp of Exit 158" In transportation The Alfa Romeo 158 racecar The Ferrari 158 racecar produced between 1964 and 1965 The British Rail Class 158 Express Sprinter is a diesel multiple unit (DMU) train, built for British Rail between 1989 and 1992 In other fields 158 is also: The year AD 158 or 158 BC One of a number of highways The atomic number of an element temporarily called unpentoctium. 158 Koronis is a Main belt asteroid In the Israeli satirical comedy Operation Grandma ("Mivtza Safta", מבצע סבתא), the number 158 is implied to be a classified high-rank officer position (Alon says: "Since you've became 158, you became all that?") Township 158-30 is a small township in Lake of the Woods County, Minnesota Edenwold No. 158, Saskatchewan is a rural municipality in Saskatchewan, Canada John Irving's third novel, The 158-Pound Marriage Financial Accounting Standards Board summary of statement No. 158 requires an employer to recognize the overfunded or underfunded status of a defined benefit postretirement plan See also List of highways numbered 158 United Nations Security Council Resolution 158 United States Supreme Court cases, Volume 158 Pennsylvania House of Representatives, District 158 Consolidated School District 158, Illinois Marie Curie Middle School 158, Bayside, New York P.S. 158, Manhattan, New York City References External links The Number 158 Integers
https://en.wikipedia.org/wiki/Victor%20Th%C3%A9bault
Victor Michael Jean-Marie Thébault (1882–1960) was a French mathematician best known for propounding three problems in geometry. The name Thébault's theorem is used by some authors to refer to the first of these problems and by others to refer to the third. Thébault was born on March 6, 1882, in Ambrières-les-Grand (today a part of Ambrières-les-Vallées, Mayenne) in the northwest of France. He was educated at a teacher's college in Laval, where he studied from 1898 to 1901. After his graduation he taught for three years at Pré-en-Pail until he obtained a professorship at a technical school in Ernée. In 1909 he placed first in a competitive examination, which yielded him a certificate to work as a science professor at teachers' colleges. Thébault, however, found a professor's salary insufficient to support his large family, and hence he left teaching to become a factory superintendent at Ernée from 1910 to 1923. In 1924 he became a chief insurance inspector in Le Mans, a position he held until his retirement in 1940. During his retirement he lived in Tennie. He died on March 19, 1960, shortly after a severe stroke, and was survived by his wife, five sons and a daughter. Despite leaving teaching, Thébault stayed active in mathematics, with number theory and geometry being his main areas of interest. He published a large number of articles in mathematics journals all over the world, and aside from regular articles he also contributed many original problems and solutions to their problem sections. He published over 1000 original problems in various mathematical magazines, and his contributions to the problem section of the American Mathematical Monthly alone comprise over 600 problems and solutions. In recognition of his contributions the French government bestowed two titles on him. In 1932 he became an Officier de l'Instruction Publique and in 1935 a Chevalier de l'Ordre de la Couronne of Belgium. Notes 1882 births 1960 deaths 20th-century French mathematicians French geometers
https://en.wikipedia.org/wiki/Th%C3%A9bault%27s%20theorem
Thébault's theorem is the name given variously to one of three geometry problems proposed by the French mathematician Victor Thébault, known individually as Thébault's problem I, II, and III. Thébault's problem I Given any parallelogram, construct on its sides four squares external to the parallelogram. The quadrilateral formed by joining the centers of those four squares is a square. It is a special case of van Aubel's theorem and a square version of Napoleon's theorem. Thébault's problem II Given a square, construct equilateral triangles on two adjacent edges, either both inside or both outside the square. Then the triangle formed by joining the vertex of the square distant from both triangles and the vertices of the triangles distant from the square is equilateral. Thébault's problem III Given any triangle ABC, and any point M on BC, construct the incircle and circumcircle of the triangle. Then construct two additional circles, each tangent to AM, BC, and to the circumcircle. Then their centers and the center of the incircle are collinear. Until 2003, this third problem was widely considered the most difficult of the three to prove. It was published in the American Mathematical Monthly in 1938, and proved by the Dutch mathematician H. Streefkerk in 1973. However, in 2003, Jean-Louis Ayme discovered that Y. Sawayama, an instructor at The Central Military School of Tokyo, had independently proposed and solved this problem in 1905. An "external" version of this theorem, where the incircle is replaced by an excircle and the two additional circles are external to the circumcircle, is found in Shay Gueron (2002). A proof based on Casey's theorem is in the paper. References External links Thébault's problems and variations at cut-the-knot.org Theorems about quadrilaterals Theorems about triangles and circles
https://en.wikipedia.org/wiki/Divine%20Proportions%3A%20Rational%20Trigonometry%20to%20Universal%20Geometry
Divine Proportions: Rational Trigonometry to Universal Geometry is a 2005 book by the mathematician Norman J. Wildberger on a proposed alternative approach to Euclidean geometry and trigonometry, called rational trigonometry. The book advocates replacing the usual basic quantities of trigonometry, Euclidean distance and angle measure, by squared distance and the square of the sine of the angle, respectively. This is logically equivalent to the standard development (as the replacement quantities can be expressed in terms of the standard ones and vice versa). The author claims his approach holds some advantages, such as avoiding the need for irrational numbers. The book was "essentially self-published" by Wildberger through his publishing company Wild Egg. The formulas and theorems in the book are regarded as correct mathematics but the claims about practical or pedagogical superiority are primarily promoted by Wildberger himself and have received mixed reviews. Overview The main idea of Divine Proportions is to replace distances by the squared Euclidean distance, which Wildberger calls the quadrance, and to replace angle measures by the squares of their sines, which Wildberger calls the spread between two lines. Divine Proportions defines both of these concepts directly from the Cartesian coordinates of points that determine a line segment or a pair of crossing lines. Defined in this way, they are rational functions of those coordinates, and can be calculated directly without the need to take the square roots or inverse trigonometric functions required when computing distances or angle measures. For Wildberger, a finitist, this replacement has the purported advantage of avoiding the concepts of limits and actual infinity used in defining the real numbers, which Wildberger claims to be unfounded. It also allows analogous concepts to be extended directly from the rational numbers to other number systems such as finite fields using the same formulas for quadrance and spread. Additionally, this method avoids the ambiguity of the two supplementary angles formed by a pair of lines, as both angles have the same spread. This system is claimed to be more intuitive, and to extend more easily from two to three dimensions. However, in exchange for these benefits, one loses the additivity of distances and angles: for instance, if a line segment is divided in two, its length is the sum of the lengths of the two pieces, but combining the quadrances of the pieces is more complicated and requires square roots. Organization and topics Divine Proportions is divided into four parts. Part I presents an overview of the use of quadrance and spread to replace distance and angle, and makes the argument for their advantages. Part II formalizes the claims made in part I, and proves them rigorously. Rather than defining lines as infinite sets of points, they are defined by their homogeneous coordinates, which may be used in formulas for testing the incidence of points
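In Cartesian coordinates the two replacement quantities are rational expressions; for points A_1 = (x_1, y_1), A_2 = (x_2, y_2) and lines with direction (or normal) vectors (a_1, b_1), (a_2, b_2) they are commonly written as

Q(A_1, A_2) = (x_2 - x_1)^2 + (y_2 - y_1)^2, \qquad s(\ell_1, \ell_2) = \frac{(a_1 b_2 - a_2 b_1)^2}{(a_1^2 + b_1^2)(a_2^2 + b_2^2)},

the second being the square of the sine of the angle between the lines.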
https://en.wikipedia.org/wiki/Creature%20Catalogue
Creature Catalogue is a supplement for Basic Dungeons & Dragons first released in 1986, and updated in 1993. Contents The Creature Catalogue is a supplement which presents game statistics for more than 200 monsters, most of which had been compiled from previous D&D rules set and adventure modules, as well as 80 new monsters which had never been printed before; each monster features an illustration and they are indexed by what habitat they can be encountered in. In Creature Catalogue is collected all the creatures first presented in the official D&D adventure modules to that time, plus many new creatures and some converted from AD&D. Also included is a comprehensive index of all D&D monsters found in the Basic, Expert, Companion and Master rulesets. Each creature in the book is illustrated, and the entries in the book are arranged by type of creature rather than alphabetically. This includes six sections: Animals (along with giant and extinct creatures), Conjurations (such as elementals and golems), Humanoids, Lowlife (such as insects, plants, and jellies and slimes), Monsters (a miscellaneous category), and Undead. Each monster has its Intelligence score listed as part of its statistics. The book also includes a comprehensive index of monsters sorted by environment, and the introduction of the book reproduced a guide by Frank Mentzer on how to balance monster encounters properly for the levels of the player characters, which was originally printed in the Master Set. Each creature is listed with appropriate D&D statistics and a short description of the creature, its abilities and tactics. A variety of authors and artists combined to the listings including Zeb Cook and Gary Gygax. Publication history AC9 Creature Catalogue was compiled by Graeme Morris, Phil Gallagher, and Jim Bambra, and was published by TSR in 1986. The Creature Catalogue is in the format of a 96-page perfect-bound book, which TSR had been adopting more frequently at the time. Cover art is by Keith Parkinson. An updated version, titled Creature Catalog, was released in March 1993 as accessory "DMR2", part of the "Challenger Series" of Basic D&D accessories that followed the 1991 release of the revised "black box" basic set and the Rules Cyclopedia. Reception Tim Brinsley reviewed the Creature Catalogue for White Dwarf No. 85. Brinsley quipped that the book "is basically a Monster Manual for the D&D game". He noted that the book was produced in the UK, and believed that unlike TSR UK's last attempt at a monster book, "the disappointing Fiend Folio with its many one-use creatures", there was a lot in this book to recommend it. Brinsley felt that organizing the book in sections by type of creature rather than just alphabetically like AD&D's Monster Manuals "should certainly make life easier for those DMs who design their own adventures, and know what sort of monster they want, rather than by name". He also felt that the comprehensive references to all the monsters appearing i
https://en.wikipedia.org/wiki/Milnor%20map
In mathematics, Milnor maps are named in honor of John Milnor, who introduced them to topology and algebraic geometry in his book Singular Points of Complex Hypersurfaces (Princeton University Press, 1968) and earlier lectures. The most studied Milnor maps are actually fibrations, and the phrase Milnor fibration is more commonly encountered in the mathematical literature. These were introduced to study isolated singularities by constructing numerical invariants related to the topology of a smooth deformation of the singular space. Definition Let be a non-constant polynomial function of complex variables where the vanishing locus of is only at the origin, meaning the associated variety is not smooth at the origin. Then, for (a sphere inside of radius ) the Milnor fibrationpg 68 associated to is defined as the map , which is a locally trivial smooth fibration for sufficiently small . Originally this was proven as a theorem by Milnor, but was later taken as the definition of a Milnor fibration. Note this is a well defined map since , where is the argument of a complex number. Historical motivation One of the original motivations for studying such maps was in the study of knots constructed by taking an -ball around a singular point of a plane curve, which is isomorphic to a real 4-dimensional ball, and looking at the knot inside the boundary, which is a 1-manifold inside of a 3-sphere. Since this concept could be generalized to hypersurfaces with isolated singularities, Milnor introduced the subject and proved his theorem. In algebraic geometry Another closed related notion in algebraic geometry is the Milnor fiber of an isolated hypersurface singularity. This has a similar setup, where a polynomial with having a singularity at the origin, but now the polynomial is considered. Then, the algebraic Milnor fiber is taken as one of the polynomials . Properties and Theorems Parallelizability One of the basic structure theorems about Milnor fibers is they are parallelizable manifoldspg 75. Homotopy type Milnor fibers are special because they have the homotopy type of a bouquet of spherespg 78. The number of these spheres is the Milnor number. In fact, the number of spheres can be computed using the formula where the quotient ideal is the Jacobian ideal, defined by the partial derivatives . These spheres deformed to the algebraic Milnor fiber are the Vanishing cycles of the fibrationpg 83. Unfortunately, computing the eigenvalues of their monodromy is computationally challenging and requires advanced techniques such as b-functionspg 23. Milnor's fibration theorem Milnor's Fibration Theorem states that, for every such that the origin is a singular point of the hypersurface (in particular, for every non-constant square-free polynomial of two variables, the case of plane curves), then for sufficiently small, is a fibration. Each fiber is a non-compact differentiable manifold of real dimension . Note that the closure of each fibe
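The map in the definition above is commonly written as

\frac{f}{|f|} : S^{2n-1}_{\varepsilon} \setminus V(f) \longrightarrow S^{1},

and the number of spheres in the bouquet (the Milnor number) as \mu = \dim_{\mathbb{C}} \bigl( \mathcal{O}_{\mathbb{C}^n,0} / \operatorname{Jac}(f) \bigr), where \operatorname{Jac}(f) is the Jacobian ideal generated by the partial derivatives of f (a standard presentation).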
https://en.wikipedia.org/wiki/William%20Wallace%20Smith%20Bliss
William Wallace Smith Bliss (August 17, 1815 – August 5, 1853) was a United States Army officer and mathematics professor. A gifted mathematician, he taught at West Point and also served as a line officer. In December 1848 Bliss married Mary Elizabeth Taylor, youngest daughter of President-elect Zachary Taylor, whom he would serve as presidential secretary. Five years later Bliss contracted yellow fever in New Orleans and died at the age of 37. Having become interested in the various Native American tribes, Bliss learned a number of their languages and studied their cultures. He was a member of the Royal Society of Northern Antiquaries of Copenhagen, Denmark, and an Honorary Member of the American Ethnological Society. Gifted at languages, he was able to read thirteen and could speak a number of those fluently. Early life and education Born in Whitehall, New York, he was the son of Captain John Bliss (of Lebanon, New Hampshire) and Olive Hall Simonds (of Todd County, Kentucky). Military career At the age of 14, Bliss entered the United States Military Academy on September 1, 1829. He showed very great skills in mathematics while a student. He graduated July 1, 1833 (age 17) and commissioned as a second lieutenant in the 4th Infantry Regiment. It was his choice to serve in the infantry. He served in the Fort Mitchell army garrison in Alabama from 1833 to 1834. During 1835 he was involved in operations against the Cherokee during Indian Removal, which moved most of them to Indian Territory west of the Mississippi River. From October 2, 1834 (age 19) until January 4, 1840 (age 24), Bliss served as assistant professor of mathematics at West Point. As a captain, he served as chief of staff from 1840 until 1841 to Brigadier General Walker Keith Armistead, the commanding general in the Seminole Wars. He served at Fort Smith, Arkansas, and at Fort Jesup, Louisiana, as a staff officer. In 1845, Bliss took part in the United States military occupation of the Republic of Texas, prior to its annexation. Between April 1846 and November 1847, he took part in the Mexican War, including fighting in the battles of Palo Alto, Resaca de la Palma and Buena Vista. He was brevetted to major in May 1846, and brevetted to lieutenant colonel in February 1847 for gallant and meritorious service. During his entire service in Texas and Mexico, he served as chief of staff to Major General Zachary Taylor. Bliss was noted for his efficiency and skills as a high-level aide. His writing was simple, elegant, vigorous, and picturesque. He was cheerful and popular with the public. Intellectual pursuits He received the honorary degree of Master of Arts from Dartmouth College, New Hampshire, in 1849. He was a member of the Royal Society of Northern Antiquaries of Copenhagen, Denmark, and an Honorary Member of the American Ethnological Society. He had a talent for languages, and was fluent in at least thirteen. George Perkins Marsh, the philologist, said that Bliss was the
https://en.wikipedia.org/wiki/Frank%20Spitzer
Frank Ludvig Spitzer (July 24, 1926 – February 1, 1992) was an Austrian-born American mathematician who made fundamental contributions to probability theory, including the theory of random walks, fluctuation theory, percolation theory, the Wiener sausage, and especially the theory of interacting particle systems. Rare among mathematicians, he chose to focus broadly on "phenomena", rather than any one of the many specific theorems that might help to articulate a given phenomenon. His book Principles of Random Walk, first published in 1964, remains a well-cited classic. Spitzer was born into a Jewish family in Vienna, Austria, and by the time he was twelve years old, the Nazi threat in Austria was evident. His parents were able to send him to a summer camp for Jewish children in Sweden, and, as a result, Spitzer spent all of the war years in Sweden. He lived with two Swedish families, learned Swedish, graduated from high school, and for one year attended Tekniska Hogskolan in Stockholm. During the war years, Spitzer's parents and his sister were able to make their way to the United States by passing through the unoccupied parts of France and North Africa, and, after the war, Spitzer joined his family in their new country. Spitzer enlisted in the U.S. Army just as the war in Europe was ending. After completing his military service in 1947, Spitzer entered the University of Michigan to study mathematics. His studies went quickly, and he completed his B.A. and Ph.D. in just six years. Spitzer's first academic appointments were at the California Institute of Technology (1953–1958), but most of his academic career was spent at Cornell University, with leaves at the Institute for Advanced Study in Princeton and the Mittag-Leffler Institute in Sweden. Among his many honors, Spitzer was a member of the National Academy of Sciences. Publications References External links Harry Kesten, "Frank Ludvig Spitzer", Biographical Memoirs of the National Academy of Sciences (1996) 1926 births 1992 deaths Austrian mathematicians American people of Austrian-Jewish descent 20th-century American mathematicians University of Michigan College of Literature, Science, and the Arts alumni KTH Royal Institute of Technology alumni Probability theorists Members of the United States National Academy of Sciences
https://en.wikipedia.org/wiki/Composition%20operator
In mathematics, the composition operator with symbol is a linear operator defined by the rule where denotes function composition. The study of composition operators is covered by AMS category 47B33. In physics In physics, and especially the area of dynamical systems, the composition operator is usually referred to as the Koopman operator (and its wild surge in popularity is sometimes jokingly called "Koopmania"), named after Bernard Koopman. It is the left-adjoint of the transfer operator of Frobenius–Perron. In Borel functional calculus Using the language of category theory, the composition operator is a pull-back on the space of measurable functions; it is adjoint to the transfer operator in the same way that the pull-back is adjoint to the push-forward; the composition operator is the inverse image functor. Since the domain considered here is that of Borel functions, the above describes the Koopman operator as it appears in Borel functional calculus. In holomorphic functional calculus The domain of a composition operator can be taken more narrowly, as some Banach space, often consisting of holomorphic functions: for example, some Hardy space or Bergman space. In this case, the composition operator lies in the realm of some functional calculus, such as the holomorphic functional calculus. Interesting questions posed in the study of composition operators often relate to how the spectral properties of the operator depend on the function space. Other questions include whether is compact or trace-class; answers typically depend on how the function behaves on the boundary of some domain. When the transfer operator is a left-shift operator, the Koopman operator, as its adjoint, can be taken to be the right-shift operator. An appropriate basis, explicitly manifesting the shift, can often be found in the orthogonal polynomials. When these are orthogonal on the real number line, the shift is given by the Jacobi operator. When the polynomials are orthogonal on some region of the complex plane (viz, in Bergman space), the Jacobi operator is replaced by a Hessenberg operator. Applications In mathematics, composition operators commonly occur in the study of shift operators, for example, in the Beurling–Lax theorem and the Wold decomposition. Shift operators can be studied as one-dimensional spin lattices. Composition operators appear in the theory of Aleksandrov–Clark measures. The eigenvalue equation of the composition operator is Schröder's equation, and the principal eigenfunction is often called Schröder's function or Koenigs function. The composition operator has been used in data-driven techniques for dynamical systems in the context of dynamic mode decomposition algorithms, which approximate the modes and eigenvalues of the composition operator. See also Carleman linearization Dynamic mode decomposition References C. C. Cowen and B. D. MacCluer, Composition operators on spaces of analytic functions. Studies in Adva
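The defining rule and Schröder's equation can be illustrated with a minimal Python sketch; the symbol φ(x) = x/2 and the eigenfunction ψ(x) = x below are illustrative assumptions, not taken from the article.

```python
# Sketch: the composition operator C_phi sends f to f o phi.
# phi(x) = x/2 is an assumed symbol with fixed point 0 and multiplier
# s = phi'(0) = 1/2; psi(x) = x solves Schroeder's equation
# psi(phi(x)) = s * psi(x), i.e. psi is an eigenfunction of C_phi.

def composition_operator(phi):
    """Return the composition operator with symbol phi."""
    def C_phi(f):
        return lambda x: f(phi(x))
    return C_phi

phi = lambda x: x / 2.0
C = composition_operator(phi)

psi = lambda x: x        # principal eigenfunction for this symbol
s = 0.5                  # eigenvalue = multiplier of phi at its fixed point

for x in (0.3, 1.0, 2.5):
    assert abs(C(psi)(x) - s * psi(x)) < 1e-12   # Schroeder's equation holds
print("C_phi psi = s * psi verified at sample points")
```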
https://en.wikipedia.org/wiki/Nuclear%20operators%20between%20Banach%20spaces
In mathematics, nuclear operators between Banach spaces are linear operators between Banach spaces in infinite dimensions that share some of the properties of their finite-dimensional counterparts. In Hilbert spaces such operators are usually called trace class operators and one can define such things as the trace. In Banach spaces this is no longer possible for general nuclear operators; it is, however, possible for 2/3-nuclear operators via the Grothendieck trace theorem. The general definition for Banach spaces was given by Grothendieck. This article presents both cases but concentrates on the general case of nuclear operators on Banach spaces. Nuclear operators on Hilbert spaces An operator $\mathcal{L}$ on a Hilbert space $\mathcal{H}$ is compact if it can be written in the form $\mathcal{L} = \sum_{n=1}^{N} \rho_n \langle f_n, \cdot\rangle g_n$, where $1 \le N \le \infty$ and $\{f_n\}$ and $\{g_n\}$ are (not necessarily complete) orthonormal sets. Here $\{\rho_n\}$ is a set of real numbers, the set of singular values of the operator, obeying $\rho_n \to 0$ if $N = \infty$. The bracket $\langle\cdot,\cdot\rangle$ is the scalar product on the Hilbert space; the sum on the right hand side must converge in norm. An operator that is compact as defined above is said to be nuclear or trace-class if $\sum_{n=1}^{N} |\rho_n| < \infty$. Properties A nuclear operator on a Hilbert space has the important property that a trace operation may be defined. Given an orthonormal basis $\{\psi_n\}$ for the Hilbert space, the trace is defined as $\operatorname{Tr}\mathcal{L} = \sum_n \langle \psi_n, \mathcal{L}\psi_n\rangle$. Obviously, the sum converges absolutely, and it can be proven that the result is independent of the basis. It can be shown that this trace is identical to the sum of the eigenvalues of $\mathcal{L}$ (counted with multiplicity). Nuclear operators on Banach spaces The definition of trace-class operator was extended to Banach spaces by Alexander Grothendieck in 1955. Let $A$ and $B$ be Banach spaces, and $A'$ be the dual of $A$, that is, the set of all continuous or (equivalently) bounded linear functionals on $A$ with the usual norm. There is a canonical evaluation map $A' \widehat{\otimes}_\pi B \to L(A, B)$ (from the projective tensor product of $A'$ and $B$ to the Banach space of continuous linear maps from $A$ to $B$). It is determined by sending $f \in A'$ and $b \in B$ to the linear map $a \mapsto f(a)\, b$. An operator $\mathcal{L}$ is called nuclear if it is in the image of this evaluation map. $q$-nuclear operators An operator $\mathcal{L}\colon A \to B$ is said to be nuclear of order $q$ if there exist sequences of vectors $\{g_n\} \subset B$ with $\lVert g_n \rVert \le 1$, functionals $\{f_n^*\} \subset A'$ with $\lVert f_n^* \rVert \le 1$ and complex numbers $\{\rho_n\}$ with $\sum_n |\rho_n|^q < \infty$, such that the operator may be written as $\mathcal{L} = \sum_n \rho_n f_n^*(\cdot)\, g_n$ with the sum converging in the operator norm. Operators that are nuclear of order 1 are called nuclear operators: these are the ones for which the series $\sum \rho_n$ is absolutely convergent. Nuclear operators of order 2 are called Hilbert–Schmidt operators. Relation to trace-class operators With additional steps, a trace may be defined for such operators when $A = B$. Properties The trace and determinant can no longer be defined in general in Banach spaces. However they can be defined for the so-called 2/3-nuclear operators via Grothendieck trace theorem. Generalizations More generally, an operator from a locally convex topological vector space $A$ to a Banach space $B$ is called nuclear if it satisfies the condition above with all $f_n^*$ bounded by 1 on some fixed neighborho
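In the finite-dimensional case every operator is trivially nuclear, and the basis independence of the trace and its agreement with the sum of the eigenvalues can be checked directly; the following numpy sketch is an illustration under that simplification, not a treatment of genuinely infinite-dimensional operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))                 # finite rank, hence trace class

# Nuclear (trace) norm: the sum of the singular values.
nuclear_norm = np.linalg.svd(T, compute_uv=False).sum()

# The trace is basis independent: compute it in the standard basis and after
# conjugating by a random orthogonal change of basis Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
trace_std = np.trace(T)
trace_other = np.trace(Q.T @ T @ Q)

eig_sum = np.linalg.eigvals(T).sum()            # sum of eigenvalues (with multiplicity)

assert np.isclose(trace_std, trace_other)
assert np.isclose(trace_std, eig_sum)
print(round(nuclear_norm, 3), round(trace_std, 3))
```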
https://en.wikipedia.org/wiki/Fredholm%20kernel
In mathematics, a Fredholm kernel is a certain type of a kernel on a Banach space, associated with nuclear operators on the Banach space. They are an abstraction of the idea of the Fredholm integral equation and the Fredholm operator, and are one of the objects of study in Fredholm theory. Fredholm kernels are named in honour of Erik Ivar Fredholm. Much of the abstract theory of Fredholm kernels was developed by Alexander Grothendieck and published in 1955. Definition Let B be an arbitrary Banach space, and let B* be its dual, that is, the space of bounded linear functionals on B. The tensor product has a completion under the norm where the infimum is taken over all finite representations The completion, under this norm, is often denoted as and is called the projective topological tensor product. The elements of this space are called Fredholm kernels. Properties Every Fredholm kernel has a representation in the form with and such that and Associated with each such kernel is a linear operator which has the canonical representation Associated with every Fredholm kernel is a trace, defined as p-summable kernels A Fredholm kernel is said to be p-summable if A Fredholm kernel is said to be of order q if q is the infimum of all for all p for which it is p-summable. Nuclear operators on Banach spaces An operator : is said to be a nuclear operator if there exists an ∈ such that = . Such an operator is said to be -summable and of order if is. In general, there may be more than one associated with such a nuclear operator, and so the trace is not uniquely defined. However, if the order ≤ 2/3, then there is a unique trace, as given by a theorem of Grothendieck. Grothendieck's theorem If is an operator of order then a trace may be defined, with where are the eigenvalues of . Furthermore, the Fredholm determinant is an entire function of z. The formula holds as well. Finally, if is parameterized by some complex-valued parameter w, that is, , and the parameterization is holomorphic on some domain, then is holomorphic on the same domain. Examples An important example is the Banach space of holomorphic functions over a domain . In this space, every nuclear operator is of order zero, and is thus of trace-class. Nuclear spaces The idea of a nuclear operator can be adapted to Fréchet spaces. A nuclear space is a Fréchet space where every bounded map of the space to an arbitrary Banach space is nuclear. References Fredholm theory Banach spaces Topology of function spaces Topological tensor products Linear operators
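For a finite-rank (degenerate) kernel k(x, y) = Σᵢ fᵢ(x) gᵢ(y) on [0, 1], the trace of the associated operator reduces to Σᵢ ∫ fᵢ(x) gᵢ(x) dx = ∫ k(x, x) dx; a small sympy sketch with arbitrarily chosen fᵢ and gᵢ (illustrative assumptions) follows.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Assumed rank-2 kernel on [0, 1]: k(x, y) = f1(x) g1(y) + f2(x) g2(y).
f = [x, sp.sin(sp.pi * x)]
g = [1 + y, y**2]

# Term-by-term trace: sum of integrals of f_i(x) g_i(x).
trace_termwise = sum(sp.integrate(fi * gi.subs(y, x), (x, 0, 1)) for fi, gi in zip(f, g))

# Same trace computed as the integral of the kernel along the diagonal.
k_diag = sum(fi * gi for fi, gi in zip(f, g)).subs(y, x)
trace_diagonal = sp.integrate(k_diag, (x, 0, 1))

assert sp.simplify(trace_termwise - trace_diagonal) == 0
print(trace_diagonal)   # trace of the associated nuclear operator
```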
https://en.wikipedia.org/wiki/Projection%20%28relational%20algebra%29
In relational algebra, a projection is a unary operation written as , where is a relation and are attribute names. Its result is defined as the set obtained when the components of the tuples in are restricted to the set – it discards (or excludes) the other attributes. In practical terms, if a relation is thought of as a table, then projection can be thought of as picking a subset of its columns. For example, if the attributes are (name, age), then projection of the relation {(Alice, 5), (Bob, 8)} onto attribute list (age) yields {5,8} – we have discarded the names, and only know what ages are present. Projections may also modify attribute values. For example, if has attributes , , , where the values of are numbers, then is like , but with all -values halved. Related concepts The closely related concept in set theory (see: projection (set theory)) differs from that of relational algebra in that, in set theory, one projects onto ordered components, not onto attributes. For instance, projecting onto the second component yields 7. Projection is relational algebra's counterpart of existential quantification in predicate logic. The attributes not included correspond to existentially quantified variables in the predicate whose extension the operand relation represents. The example below illustrates this point. Because of the correspondence with existential quantification, some authorities prefer to define projection in terms of the excluded attributes. In a computer language it is of course possible to provide notations for both, and that was done in ISBL and several languages that have taken their cue from ISBL. A nearly identical concept occurs in the category of monoids, called a string projection, which consists of removing all of the letters in the string that do not belong to a given alphabet. When implemented in SQL standard the "default projection" returns a multiset instead of a set, and the projection is obtained by the addition of the DISTINCT keyword to eliminate duplicate data. Example For an example, consider the relations depicted in the following two tables which are the relation and its projection on (some say "over") the attributes and : Suppose the predicate of Person is "Name is age years old and weighs weight." Then the given projection represents the predicate, "There exists Name such that Name is age years old and weighs weight." Note that Harry and Peter have the same age and weight, but since the result is a relation, and therefore a set, this combination only appears once in the result. Formal definition More formally the semantics of projection are defined as follows: where is the restriction of the tuple to the set so that where is an attribute value, is an attribute name, and is an element of that attribute's domain — see Relation (database). The result of a projection is defined only if is a subset of the header of . Projection over no attributes at all is possible, yielding a rela
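A minimal Python sketch of projection on a relation represented as a list of rows, using the article's Person example; representing rows as dictionaries is an implementation choice, not part of relational algebra itself.

```python
# Projection keeps only the requested attributes and, since the result is a
# set, duplicate restricted tuples collapse to one.
def project(relation, attributes):
    return {tuple((a, row[a]) for a in attributes) for row in relation}

person = [
    {"name": "Alice", "age": 5,  "weight": 20},
    {"name": "Bob",   "age": 8,  "weight": 30},
    {"name": "Harry", "age": 34, "weight": 80},
    {"name": "Peter", "age": 34, "weight": 80},   # same age/weight as Harry
]

print(project(person, ("age", "weight")))
# Harry and Peter collapse to a single tuple, as in the article's example.
```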
https://en.wikipedia.org/wiki/Nuclear%20space
In mathematics, nuclear spaces are topological vector spaces that can be viewed as a generalization of finite dimensional Euclidean spaces and share many of their desirable properties. Nuclear spaces are however quite different from Hilbert spaces, another generalization of finite dimensional Euclidean spaces. They were introduced by Alexander Grothendieck. The topology on nuclear spaces can be defined by a family of seminorms whose unit balls decrease rapidly in size. Vector spaces whose elements are "smooth" in some sense tend to be nuclear spaces; a typical example of a nuclear space is the set of smooth functions on a compact manifold. All finite-dimensional vector spaces are nuclear. There are no Banach spaces that are nuclear, except for the finite-dimensional ones. In practice a sort of converse to this is often true: if a "naturally occurring" topological vector space is a Banach space, then there is a good chance that it is nuclear. Original motivation: The Schwartz kernel theorem Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem and published in . We now describe this motivation. For any open subsets and the canonical map is an isomorphism of TVSs (where has the topology of uniform convergence on bounded subsets) and furthermore, both of these spaces are canonically TVS-isomorphic to (where since is nuclear, this tensor product is simultaneously the injective tensor product and projective tensor product). In short, the Schwartz kernel theorem states that: where all of these TVS-isomorphisms are canonical. This result is false if one replaces the space with (which is a reflexive space that is even isomorphic to its own strong dual space) and replaces with the dual of this space. Why does such a nice result hold for the space of distributions and test functions but not for the Hilbert space (which is generally considered one of the "nicest" TVSs)? This question led Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. Motivations from geometry Another set of motivating examples comes directly from geometry and smooth manifold theoryappendix 2. Given smooth manifolds and a locally convex Hausdorff topological vector space, then there are the following isomorphisms of nuclear spaces Using standard tensor products for as a vector space, the functioncannot be expressed as a function for This gives an example demonstrating there is a strict inclusion of sets Definition This section lists some of the more common definitions of a nuclear space. The definitions below are all equivalent. Note that some authors use a more restrictive definition of a nuclear space, by adding the condition that the space should also be a Fréchet space. (This means that the space is complete and the topology is given by a family of seminorms.) The following definition was used by Grothendieck to define nuclear spaces.
https://en.wikipedia.org/wiki/Topological%20tensor%20product
In mathematics, there are usually many different ways to construct a topological tensor product of two topological vector spaces. For Hilbert spaces or nuclear spaces there is a simple well-behaved theory of tensor products (see Tensor product of Hilbert spaces), but for general Banach spaces or locally convex topological vector spaces the theory is notoriously subtle. Motivation One of the original motivations for topological tensor products is the fact that tensor products of the spaces of smooth functions on do not behave as expected. There is an injection but this is not an isomorphism. For example, the function cannot be expressed as a finite linear combination of smooth functions in We only get an isomorphism after constructing the topological tensor product; i.e., This article first details the construction in the Banach space case. is not a Banach space and further cases are discussed at the end. Tensor products of Hilbert spaces The algebraic tensor product of two Hilbert spaces A and B has a natural positive definite sesquilinear form (scalar product) induced by the sesquilinear forms of A and B. So in particular it has a natural positive definite quadratic form, and the corresponding completion is a Hilbert space A ⊗ B, called the (Hilbert space) tensor product of A and B. If the vectors ai and bj run through orthonormal bases of A and B, then the vectors ai⊗bj form an orthonormal basis of A ⊗ B. Cross norms and tensor products of Banach spaces We shall use the notation from in this section. The obvious way to define the tensor product of two Banach spaces and is to copy the method for Hilbert spaces: define a norm on the algebraic tensor product, then take the completion in this norm. The problem is that there is more than one natural way to define a norm on the tensor product. If and are Banach spaces the algebraic tensor product of and means the tensor product of and as vector spaces and is denoted by The algebraic tensor product consists of all finite sums where is a natural number depending on and and for When and are Banach spaces, a (or ) on the algebraic tensor product is a norm satisfying the conditions Here and are elements of the topological dual spaces of and respectively, and is the dual norm of The term is also used for the definition above. There is a cross norm called the projective cross norm, given by where It turns out that the projective cross norm agrees with the largest cross norm (, proposition 2.1). There is a cross norm called the injective cross norm, given by where Here and denote the topological duals of and respectively. Note hereby that the injective cross norm is only in some reasonable sense the "smallest". The completions of the algebraic tensor product in these two norms are called the projective and injective tensor products, and are denoted by and When and are Hilbert spaces, the norm used for their Hilbert space tensor product is not
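The statement that products of orthonormal basis vectors form an orthonormal basis of the Hilbert space tensor product can be checked in finite dimensions with the Kronecker product; the dimensions chosen below are illustrative assumptions.

```python
import numpy as np

# Columns of A and B are orthonormal bases of spaces of (assumed) dimensions 3 and 2.
A = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))[0]
B = np.linalg.qr(np.random.default_rng(2).standard_normal((2, 2)))[0]

# The vectors a_i (x) b_j, realized here as Kronecker products.
basis = np.column_stack([np.kron(A[:, i], B[:, j]) for i in range(3) for j in range(2)])

gram = basis.T @ basis
assert np.allclose(gram, np.eye(6))   # orthonormal in the induced inner product
print("a_i (x) b_j form an orthonormal basis of dimension", basis.shape[1])
```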
https://en.wikipedia.org/wiki/Selection%20%28relational%20algebra%29
In relational algebra, a selection (sometimes called a restriction in reference to E.F. Codd's 1970 paper and not, contrary to a popular belief, to avoid confusion with SQL's use of SELECT, since Codd's article predates the existence of SQL) is a unary operation that denotes a subset of a relation. A selection is written as or where: and are attribute names is a binary operation in the set is a value constant is a relation The selection denotes all tuples in for which holds between the and the attribute. The selection denotes all tuples in for which holds between the attribute and the value . For an example, consider the following tables where the first table gives the relation , the second table gives the result of and the third table gives the result of . More formally the semantics of the selection is defined as follows: The result of the selection is only defined if the attribute names that it mentions are in the heading of the relation that it operates upon. Generalized selection A generalized selection is a unary operation written as where is a propositional formula that consists of atoms as allowed in the normal selection and, in addition, the logical operators ∧ (and), ∨ (or) and (negation). This selection selects all those tuples in for which holds. For an example, consider the following tables where the first table gives the relation and the second the result of . Formally the semantics of the generalized selection is defined as follows: The result of the selection is only defined if the attribute names that it mentions are in the header of the relation that it operates upon. The generalized selection is expressible with other basic algebraic operations. A simulation of generalized selection using the fundamental operators is defined by the following rules: Computer languages In computer languages it is expected that any truth-valued expression be permitted as the selection condition rather than restricting it to be a simple comparison. In SQL, selections are performed by using WHERE definitions in SELECT, UPDATE, and DELETE statements, but note that the selection condition can result in any of three truth values (true, false and unknown) instead of the usual two. In SQL, general selections are performed by using WHERE definitions with AND, OR, or NOT operands in SELECT, UPDATE, and DELETE statements. References External links http://cisnet.baruch.cuny.edu/holowczak/classes/3400/relationalalgebra/#selectionoperator Relational algebra
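A minimal Python sketch of selection and generalized selection on a small relation; the attribute names and predicates are illustrative assumptions.

```python
# sigma_{condition}(relation): keep exactly the rows satisfying the predicate.
def select(relation, predicate):
    return [row for row in relation if predicate(row)]

person = [
    {"name": "Alice", "age": 5,  "weight": 20},
    {"name": "Harry", "age": 34, "weight": 80},
    {"name": "Peter", "age": 34, "weight": 80},
]

print(select(person, lambda r: r["age"] >= 34))                        # simple selection
print(select(person, lambda r: r["age"] >= 34 and r["weight"] < 100))  # generalized selection
```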
https://en.wikipedia.org/wiki/Functional%20determinant
In functional analysis, a branch of mathematics, it is sometimes possible to generalize the notion of the determinant of a square matrix of finite order (representing a linear transformation from a finite-dimensional vector space to itself) to the infinite-dimensional case of a linear operator S mapping a function space V to itself. The corresponding quantity det(S) is called the functional determinant of S. There are several formulas for the functional determinant. They are all based on the fact that the determinant of a finite matrix is equal to the product of the eigenvalues of the matrix. A mathematically rigorous definition is via the zeta function of the operator, where tr stands for the functional trace: the determinant is then defined by where the zeta function in the point s = 0 is defined by analytic continuation. Another possible generalization, often used by physicists when using the Feynman path integral formalism in quantum field theory (QFT), uses a functional integration: This path integral is only well defined up to some divergent multiplicative constant. To give it a rigorous meaning it must be divided by another functional determinant, thus effectively cancelling the problematic 'constants'. These are now, ostensibly, two different definitions for the functional determinant, one coming from quantum field theory and one coming from spectral theory. Each involves some kind of regularization: in the definition popular in physics, two determinants can only be compared with one another; in mathematics, the zeta function was used. have shown that the results obtained by comparing two functional determinants in the QFT formalism agree with the results obtained by the zeta functional determinant. Defining formulae Path integral version For a positive self-adjoint operator S on a finite-dimensional Euclidean space V, the formula holds. The problem is to find a way to make sense of the determinant of an operator S on an infinite dimensional function space. One approach, favored in quantum field theory, in which the function space consists of continuous paths on a closed interval, is to formally attempt to calculate the integral where V is the function space and the L2 inner product, and the Wiener measure. The basic assumption on S is that it should be self-adjoint, and have discrete spectrum λ1, λ2, λ3, … with a corresponding set of eigenfunctions f1, f2, f3, … which are complete in L2 (as would, for example, be the case for the second derivative operator on a compact interval Ω). This roughly means all functions φ can be written as linear combinations of the functions fi: Hence the inner product in the exponential can be written as In the basis of the functions fi, the functional integration reduces to an integration over all basis functions. Formally, assuming our intuition from the finite dimensional case carries over into the infinite dimensional setting, the measure should then be equal to This makes the functio
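A standard worked example of the zeta-function definition, added here for concreteness (the choice of operator and interval is an assumed illustration): the Dirichlet Laplacian on an interval of length L.

```latex
% Zeta-regularized determinant of S = -d^2/dx^2 on [0, L] with Dirichlet
% boundary conditions.  Eigenvalues: lambda_n = (n pi / L)^2, n = 1, 2, ...
\[
  \zeta_S(s) = \sum_{n\ge 1} \lambda_n^{-s}
             = \left(\tfrac{L}{\pi}\right)^{2s} \zeta(2s),
\qquad
  \zeta_S'(0) = 2\ln\!\tfrac{L}{\pi}\,\zeta(0) + 2\zeta'(0)
              = -\ln\!\tfrac{L}{\pi} - \ln 2\pi = -\ln 2L,
\]
\[
  \det S \;=\; e^{-\zeta_S'(0)} \;=\; 2L ,
\]
% using zeta(0) = -1/2 and zeta'(0) = -(1/2) ln(2 pi) for the Riemann zeta function.
```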
https://en.wikipedia.org/wiki/Rename
Rename may refer to: Rename (computing), rename of a file on a computer RENAME (command), command to rename a file in various operating systems Rename (relational algebra), unary operation in relational algebra Company renaming, rename of a product Name change, rename of a person Geographical renaming, rename of a geographical location See also Renaming (disambiguation)
https://en.wikipedia.org/wiki/WeBWorK
WeBWorK is an online homework delivery system primarily used for mathematics and science. It allows students to complete their homework over the web, and receive instantaneous feedback as to the correctness of their responses. WeBWorK uses a Perl-based language called PG to specify exercises, which allows instructors a great deal of flexibility in how exercises are presented. WeBWorK was originally developed at the University of Rochester by professors Michael Gage and Arnold Pizer. It is now a free software project maintained by many contributors at several colleges and universities. It is made available under the Artistic License (the same license as Perl) and the GNU General Public License. WeBWorK is currently maintained by The WeBWorK Project. WeBWorK is currently used by many universities and high-schools around the world. WeBWorK is supported by the National Science Foundation and the Mathematical Association of America. References External links WeBWorK Site Original WeBWorK Site Learning management systems
https://en.wikipedia.org/wiki/Two-graph
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs. Examples On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph: 123  124  135  146  156  236  245  256  345  346 This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples. Given a simple graph G = (V,E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way. This example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E. The set of all triples of E that are not contained in a path of T form a two-graph on the set E. Switching and graphs A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs. Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence class of graphs under switching is called a switching class. Switching was introduced by and developed by Seidel; it has been called graph switching or Seidel switching, partly to distinguish it from switching of signed graphs. In the standard construction of a two-graph from a simple graph given above, two graphs will yield the same two-graph if and only if they are equivalent under switching, that is, they are in the same switching class. Let Γ be a two-graph on the set X. For any element x of X, define a graph Γx with vertex set X having vertices y and z adjacent if and only if {x, y, z} is in Γ. In this graph, x will be an isolated vertex. This construction is reversible; given a simple graph G, adjoin a new element x to the set of vertices of G, retaining the same edge set, and apply the standard construction above. To a graph G there corresponds a signed complete graph Σ on the same vertex set, whose edges are signed negative if in G and positive if not in G. Conversely, G
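A short Python sketch of the standard construction of a two-graph from a simple graph, together with a check of the defining parity condition on quadruples; the example graph is an arbitrary assumption.

```python
from itertools import combinations

# Assumed example graph on vertices 1..5.
V = range(1, 6)
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5), (2, 5)]}

def induced_edges(S):
    return sum(1 for e in combinations(S, 2) if frozenset(e) in E)

# Standard construction: triples whose induced subgraph has an odd number of edges.
two_graph = {t for t in combinations(V, 3) if induced_edges(t) % 2 == 1}

# Defining property: every quadruple contains an even number of these triples.
for q in combinations(V, 4):
    assert sum(1 for t in combinations(q, 3) if t in two_graph) % 2 == 0
print(sorted(two_graph))
```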
https://en.wikipedia.org/wiki/Equiangular%20lines
In geometry, a set of lines is called equiangular if all the lines intersect at a single point, and every pair of lines makes the same angle. Equiangular lines in Euclidean space Computing the maximum number of equiangular lines in n-dimensional Euclidean space is a difficult problem, and unsolved in general, though bounds are known. The maximal number of equiangular lines in 2-dimensional Euclidean space is 3: we can take the lines through opposite vertices of a regular hexagon, each at an angle 120 degrees from the other two. The maximum in 3 dimensions is 6: we can take lines through opposite vertices of an icosahedron. It is known that the maximum number in any dimension $n$ is less than or equal to $n(n+1)/2$. This upper bound is matched, up to a constant factor, by a construction of de Caen. The maximum in dimensions 1 through 16 is listed in the On-Line Encyclopedia of Integer Sequences as follows: 1, 3, 6, 6, 10, 16, 28, 28, 28, 28, 28, 28, 28, 28, 36, 40, ... . In particular, the maximum number of equiangular lines in 7 dimensions is 28. We can obtain these lines as follows. Take the vector (−3,−3,1,1,1,1,1,1) in $\mathbb{R}^8$, and form all 28 vectors obtained by permuting the components of this. The dot product of two of these vectors is 8 if both have a component −3 in the same place or −8 otherwise. Thus, the lines through the origin containing these vectors are equiangular. Moreover, all 28 vectors are orthogonal to the vector (1,1,1,1,1,1,1,1) in $\mathbb{R}^8$, so they lie in a 7-dimensional space. In fact, these 28 vectors and their negatives are, up to rotation and dilation, the 56 vertices of the 321 polytope. In other words, they are the weight vectors of the 56-dimensional representation of the Lie group E7. Equiangular lines are equivalent to two-graphs. Given a set of equiangular lines, let c be the cosine of the common angle. We assume that the angle is not 90°, since that case is trivial (i.e., not interesting, because the lines are just coordinate axes); thus, c is nonzero. We may move the lines so they all pass through the origin of coordinates. Choose one unit vector in each line. Form the matrix M of inner products. This matrix has 1 on the diagonal and ±c everywhere else, and it is symmetric. Subtracting the identity matrix I and dividing by c, we have a symmetric matrix with zero diagonal and ±1 off the diagonal. This is the Seidel adjacency matrix of a two-graph. Conversely, every two-graph can be represented as a set of equiangular lines. The problem of determining the maximum number of equiangular lines with a fixed angle in sufficiently high dimensions was solved by Jiang, Tidor, Yao, Zhang, and Zhao. The answer is expressed in spectral graph theoretic terms. Let $N_\alpha(d)$ denote the maximum number of lines through the origin in $d$ dimensions with common pairwise angle $\arccos\alpha$. Let $k$ denote the minimum number (if it exists) of vertices in a graph whose adjacency matrix has spectral radius exactly $(1-\alpha)/(2\alpha)$. If $k$ is finite, then for all sufficiently large dimension
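The 28 equiangular lines in 7 dimensions described above can be verified numerically; a numpy sketch (illustrative, not from the article) follows.

```python
import numpy as np
from itertools import combinations

# The 28 distinct permutations of (-3, -3, 1, 1, 1, 1, 1, 1): choose the two
# coordinates that carry -3.
vecs = []
for i, j in combinations(range(8), 2):
    v = np.ones(8)
    v[i] = v[j] = -3
    vecs.append(v)
vecs = np.array(vecs)                          # shape (28, 8)

dots = vecs @ vecs.T
off_diag = dots[~np.eye(28, dtype=bool)]
assert np.all(np.isin(off_diag, (8, -8)))      # pairwise dot products are +-8
assert np.allclose(vecs @ np.ones(8), 0)       # all orthogonal to (1, ..., 1)
# |v|^2 = 24, so the common angle satisfies cos(theta) = 8/24 = 1/3.
print("28 equiangular lines in a 7-dimensional subspace, cos(angle) = 1/3")
```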
https://en.wikipedia.org/wiki/Rhombohedron
In geometry, a rhombohedron (also called a rhombic hexahedron or, inaccurately, a rhomboid) is a three-dimensional figure with six faces which are rhombi. It is a special case of a parallelepiped where all edges are the same length. It can be used to define the rhombohedral lattice system, a honeycomb with rhombohedral cells. A cube is a special case of a rhombohedron with all sides square. In general a rhombohedron can have up to three types of rhombic faces in congruent opposite pairs, Ci symmetry, order 2. Four points forming non-adjacent vertices of a rhombohedron necessarily form the four vertices of an orthocentric tetrahedron, and all orthocentric tetrahedra can be formed in this way. Rhombohedral lattice system The rhombohedral lattice system has rhombohedral cells, with 6 congruent rhombic faces forming a trigonal trapezohedron: Special cases by symmetry Cube: with Oh symmetry, order 48. All faces are squares. Trigonal trapezohedron (also called isohedral rhombohedron): with D3d symmetry, order 12. All non-obtuse internal angles of the faces are equal (all faces are congruent rhombi). This can be seen by stretching a cube on its body-diagonal axis. For example, a regular octahedron with two regular tetrahedra attached on opposite faces constructs a 60 degree trigonal trapezohedron. Right rhombic prism: with D2h symmetry, order 8. It is constructed by two rhombi and four squares. This can be seen by stretching a cube on its face-diagonal axis. For example, two right prisms with regular triangular bases attached together makes a 60 degree right rhombic prism. Oblique rhombic prism: with C2h symmetry, order 4. It has only one plane of symmetry, through four vertices, and six rhombic faces. Solid geometry For a unit (i.e.: with side length 1) isohedral rhombohedron, with rhombic acute angle , with one vertex at the origin (0, 0, 0), and with one edge lying along the x-axis, the three generating vectors are e1 : e2 : e3 : The other coordinates can be obtained from vector addition of the 3 direction vectors: e1 + e2 , e1 + e3 , e2 + e3 , and e1 + e2 + e3 . The volume of an isohedral rhombohedron, in terms of its side length and its rhombic acute angle , is a simplification of the volume of a parallelepiped, and is given by We can express the volume another way : As the area of the (rhombic) base is given by , and as the height of a rhombohedron is given by its volume divided by the area of its base, the height of an isohedral rhombohedron in terms of its side length and its rhombic acute angle is given by Note: 3 , where 3 is the third coordinate of e3 . The body diagonal between the acute-angled vertices is the longest. By rotational symmetry about that diagonal, the other three body diagonals, between the three pairs of opposite obtuse-angled vertices, are all the same length. See also Lists of shapes References External links Volume Calculator https://rechneronline.de/pi/rhombohedron.php Prismatoid po
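Since the article's displayed volume formula does not survive in this copy, the sketch below computes the volume of an isohedral rhombohedron from the Gram determinant of its three unit edge vectors and compares it against the factored closed form a³(1 − cos θ)√(1 + 2 cos θ); treat the closed form as a stated identity for the parallelepiped volume rather than a quotation from the article.

```python
import numpy as np

def rhombohedron_volume(a, theta):
    """Volume of an isohedral rhombohedron with edge a and rhombic acute angle theta."""
    c = np.cos(theta)
    G = np.array([[1, c, c],
                  [c, 1, c],
                  [c, c, 1]])        # Gram matrix of the three unit edge vectors
    return a**3 * np.sqrt(np.linalg.det(G))

a, theta = 1.0, np.radians(60)
v = rhombohedron_volume(a, theta)
closed_form = a**3 * (1 - np.cos(theta)) * np.sqrt(1 + 2 * np.cos(theta))
assert np.isclose(v, closed_form)
print(v)   # ~0.7071 = 1/sqrt(2): regular octahedron plus two regular tetrahedra
```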
https://en.wikipedia.org/wiki/Ultrastrong%20topology
In functional analysis, the ultrastrong topology, or σ-strong topology, or strongest topology on the set B(H) of bounded operators on a Hilbert space is the topology defined by the family of seminorms for positive elements of the predual that consists of trace class operators. It was introduced by John von Neumann in 1936. Relation with the strong (operator) topology The ultrastrong topology is similar to the strong (operator) topology. For example, on any norm-bounded set the strong operator and ultrastrong topologies are the same. The ultrastrong topology is stronger than the strong operator topology. One problem with the strong operator topology is that the dual of B(H) with the strong operator topology is "too small". The ultrastrong topology fixes this problem: the dual is the full predual B*(H) of all trace class operators. In general the ultrastrong topology is better than the strong operator topology, but is more complicated to define so people usually use the strong operator topology if they can get away with it. The ultrastrong topology can be obtained from the strong operator topology as follows. If H1 is a separable infinite dimensional Hilbert space then B(H) can be embedded in B(H⊗H1) by tensoring with the identity map on H1. Then the restriction of the strong operator topology on B(H⊗H1) is the ultrastrong topology of B(H). Equivalently, it is given by the family of seminorms where The adjoint map is not continuous in the ultrastrong topology. There is another topology called the ultrastrong* topology, which is the weakest topology stronger than the ultrastrong topology such that the adjoint map is continuous. See also References Topology of function spaces Von Neumann algebras
https://en.wikipedia.org/wiki/Condorcet%27s%20jury%20theorem
Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions. The assumptions of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2: If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases. On the other hand, if p is less than 1/2 (each voter is more likely to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter. Since Condorcet, many other researchers have proved various other jury theorems, relaxing some or all of Condorcet's assumptions. Proofs Proof 1: Calculating the probability that two additional voters change the outcome To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by adding a single voter. Now suppose we start with n voters, and let m of these voters vote correctly. Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases: m was one vote too small to get a majority of the n votes, but both new voters voted correctly. m was just equal to a majority of the n votes, but both new voters voted incorrectly. The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a difference. So we only care what happens when a single vote (among the first n) separates a correct from an incorrect majority. Restricting our attention to this case, we can imagine that the first n-1 votes cancel out and that the deciding vote is cast by the n-th voter. In this case the probability of getting a correct majority is just p. Now suppose we send in the two extra voters. The probability that they change an incorrect majority to a correct majority is (1-p)p2, while the probability that they change a correct majority to an incorrect majority is p(1-p)(1-p). The first of these probabilities is greater than the second if and only if p > 1/2, proving the theorem. Proof 2: Calculating the probability that the decision is correct This proof is direct; it just sums up the probabilities of the majorities. Each term of the sum multiplies the number of combinations of a majority by the probability of that majority. Each majority is counted using a combination, n items taken k at a time,
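The probability that the majority is correct can be computed directly as a binomial tail sum, which makes the two regimes of the theorem easy to see; a short Python sketch with illustrative parameter values follows.

```python
from math import comb

def majority_correct(n, p):
    """Probability that an odd-sized jury of n independent voters,
    each correct with probability p, reaches the correct majority decision."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n + 1) // 2, n + 1))

for p in (0.6, 0.4):
    probs = [majority_correct(n, p) for n in (1, 3, 11, 51, 201)]
    print(p, [round(q, 4) for q in probs])
# p = 0.6: the probability increases toward 1 as n grows.
# p = 0.4: the single voter (n = 1) is best; larger juries do worse.
```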
https://en.wikipedia.org/wiki/Poisson%20regression
In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables. Negative binomial regression is a popular generalization of Poisson regression because it loosens the highly restrictive assumption that the variance is equal to the mean made by the Poisson model. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution. This model is popular because it models the Poisson heterogeneity with a gamma distribution. Poisson regression models are generalized linear models with the logarithm as the (canonical) link function, and the Poisson distribution function as the assumed probability distribution of the response. Regression models If is a vector of independent variables, then the model takes the form where and . Sometimes this is written more compactly as where is now an (n + 1)-dimensional vector consisting of n independent variables concatenated to the number one. Here is simply concatenated to . Thus, when given a Poisson regression model and an input vector , the predicted mean of the associated Poisson distribution is given by If are independent observations with corresponding values of the predictor variables, then can be estimated by maximum likelihood. The maximum-likelihood estimates lack a closed-form expression and must be found by numerical methods. The probability surface for maximum-likelihood Poisson regression is always concave, making Newton–Raphson or other gradient-based methods appropriate estimation techniques. Maximum likelihood-based parameter estimation Given a set of parameters θ and an input vector x, the mean of the predicted Poisson distribution, as stated above, is given by and thus, the Poisson distribution's probability mass function is given by Now suppose we are given a data set consisting of m vectors , along with a set of m values . Then, for a given set of parameters θ, the probability of attaining this particular set of data is given by By the method of maximum likelihood, we wish to find the set of parameters θ that makes this probability as large as possible. To do this, the equation is first rewritten as a likelihood function in terms of θ: Note that the expression on the right hand side has not actually changed. A formula in this form is typically difficult to work with; instead, one uses the log-likelihood: Notice that the parameters θ only appear in the first two terms of each term in the summation. Therefore, given that we are only interested in finding the best value for θ we may drop the yi! and simply write To find a maximum, we need to solve an equation
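A minimal Python sketch of maximum-likelihood estimation for Poisson regression on simulated data, maximizing the log-likelihood described above with a generic optimizer; it is an illustration, not a production implementation (statistical packages typically use Newton or IRLS methods).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: an intercept column of ones plus one predictor.
n = 500
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
theta_true = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ theta_true))

def neg_log_likelihood(theta):
    eta = X @ theta
    # log L = sum(y * eta - exp(eta)), dropping the constant -sum(log(y!)).
    return -(y @ eta - np.exp(eta).sum())

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print(fit.x)   # maximum-likelihood estimate, close to theta_true
```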
https://en.wikipedia.org/wiki/Lehmer%E2%80%93Schur%20algorithm
In mathematics, the Lehmer–Schur algorithm (named after Derrick Henry Lehmer and Issai Schur) is a root-finding algorithm for complex polynomials, extending the idea of enclosing roots like in the one-dimensional bisection method to the complex plane. It uses the Schur-Cohn test to test increasingly smaller disks for the presence or absence of roots. Schur-Cohn algorithm This algorithm allows one to find the distribution of the roots of a complex polynomial with respect to the unit circle in the complex plane. It is based on two auxiliary polynomials, introduced by Schur. For a complex polynomial of degree its reciprocal adjoint polynomial is defined by and its Schur Transform by where a bar denotes complex conjugation. So, if with , then , with leading zero-terms, if any, removed. The coefficients of can therefore be directly expressed in those of and, since one or more leading coefficients cancel, has lower degree than . The roots of , , and are related as follows. Lemma Let be a complex polynomial and . The roots of , including their multiplicities, are the images under inversion in the unit circle of the non-zero roots of . If , then , and share roots on the unit circle, including their multiplicities. If , then and have the same number of roots inside the unit circle. If , then and have the same number of roots inside the unit circle. Proof For we have and, in particular, for . Also implies . From this and the definitions above the first two statements follow. The other two statements are a consequence of Rouché's theorem applied on the unit circle to the functions and , where is a polynomial that has as its roots the roots of on the unit circle, with the same multiplicities. □ For a more accessible representation of the lemma, let , and denote the number of roots of inside, on, and outside the unit circle respectively and similarly for . Moreover let be the difference in degree of and . Then the lemma implies that if and if (note the interchange of and ). Now consider the sequence of polynomials , where and . Application of the foregoing to each pair of consecutive members of this sequence gives the following result. Theorem[Schur-Cohn test] Let be a complex polynomial with and let be the smallest number such that . Moreover let for and for . All roots of lie inside the unit circle if and only if , for , and . All roots of lie outside the unit circle if and only if for and . If and if for (in increasing order) and otherwise, then has no roots on the unit circle and the number of roots of inside the unit circle is . More generally, the distribution of the roots of a polynomial with respect to an arbitrary circle in the complex plane, say one with centre and radius , can be found by application of the Schur-Cohn test to the 'shifted and scaled' polynomial defined by . Not every scaling factor is allowed, however, for the Schur-Cohn test can be applied to the pol
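Because the displayed formulas are missing from this copy, the sketch below uses one common convention for the reciprocal adjoint polynomial and the Schur transform, namely p*(z) = Σ conj(a_{n−k}) z^k and (Tp)(z) = conj(a₀) p(z) − a_n p*(z); it only demonstrates the cancellation of the leading coefficient (the degree drop) and the value Tp(0) = |a₀|² − |a_n|², and may differ in normalization from the article's definitions.

```python
import numpy as np

# Coefficients in ascending order: p(z) = a[0] + a[1] z + ... + a[n] z^n.
def reciprocal_adjoint(a):
    return np.conj(np.asarray(a)[::-1])

def schur_transform(a):
    a = np.asarray(a)
    t = np.conj(a[0]) * a - a[-1] * reciprocal_adjoint(a)
    return np.trim_zeros(t, "b")               # drop the cancelled leading terms

p = np.array([2 - 1j, 0.5, 3.0])                # an arbitrary example polynomial
Tp = schur_transform(p)

print(len(p) - 1, "->", len(Tp) - 1)            # degree drops by at least one
print(Tp[0], abs(p[0])**2 - abs(p[-1])**2)      # Tp(0) = |a_0|^2 - |a_n|^2
```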
https://en.wikipedia.org/wiki/Robust%20regression
In robust statistics, robust regression seeks to overcome some limitations of traditional regression analysis. A regression analysis models the relationship between one or more independent variables and a dependent variable. Standard types of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results otherwise (i.e. are not robust to assumption violations). Robust regression methods are designed to limit the effect that violations of assumptions by the underlying data-generating process have on regression estimates. For example, least squares estimates for regression models are highly sensitive to outliers: an outlier with twice the error magnitude of a typical observation contributes four (two squared) times as much to the squared error loss, and therefore has more leverage over the regression estimates. The Huber loss function is a robust alternative to standard square error loss that reduces outliers' contributions to the squared error loss, thereby limiting their impact on regression estimates. Applications Heteroscedastic errors One instance in which robust estimation should be considered is when there is a strong suspicion of heteroscedasticity. In the homoscedastic model, it is assumed that the variance of the error term is constant for all values of x. Heteroscedasticity allows the variance to be dependent on x, which is more accurate for many real scenarios. For example, the variance of expenditure is often larger for individuals with higher income than for individuals with lower incomes. Software packages usually default to a homoscedastic model, even though such a model may be less accurate than a heteroscedastic model. One simple approach (Tofallis, 2008) is to apply least squares to percentage errors, as this reduces the influence of the larger values of the dependent variable compared to ordinary least squares. Presence of outliers Another common situation in which robust estimation is used occurs when the data contain outliers. In the presence of outliers that do not come from the same data-generating process as the rest of the data, least squares estimation is inefficient and can be biased. Because the least squares predictions are dragged towards the outliers, and because the variance of the estimates is artificially inflated, the result is that outliers can be masked. (In many situations, including some areas of geostatistics and medical statistics, it is precisely the outliers that are of interest.) Although it is sometimes claimed that least squares (or classical statistical methods in general) are robust, they are only robust in the sense that the type I error rate does not increase under violations of the model. In fact, the type I error rate tends to be lower than the nominal level when outliers are present, and there is often a dramatic increase in the type II error rate. The reduction of the type I error rate has been labelled as
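A small Python sketch contrasting a squared-error fit with a Huber-loss fit on simulated data containing one gross outlier; the data, the tuning constant 1.345, and the use of a generic optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
y[-1] += 40                                      # one gross outlier

def huber(r, delta=1.345):                       # Huber rho function
    return np.where(np.abs(r) <= delta, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

def fit(loss):
    obj = lambda b: loss(y - (b[0] + b[1] * x)).sum()
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x

ols_slope = fit(lambda r: 0.5 * r**2)[1]         # squared-error loss, pulled by the outlier
huber_slope = fit(huber)[1]                      # Huber loss, much less outlier-sensitive
print(f"OLS slope {ols_slope:.2f}  vs  Huber slope {huber_slope:.2f} (true slope 2.0)")
```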
https://en.wikipedia.org/wiki/Symmetry%20in%20mathematics
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations. Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry). In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are listed in the given points mentioned above. Symmetry in geometry The types of symmetry considered in basic geometry include reflectional symmetry, rotation symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry). Symmetry in calculus Even and odd functions Even functions Let f(x) be a real-valued function of a real variable, then f is even if the following equation holds for all x and -x in the domain of f: Geometrically speaking, the graph face of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include , x2, x4, cos(x), and cosh(x). Odd functions Again, let f be a real-valued function of a real variable, then f is odd if the following equation holds for all x and -x in the domain of f: That is, Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x3, sin(x), sinh(x), and erf(x). Integrating The integral of an odd function from −A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between −A and A). The integral of an even function from −A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between −A and A). This also holds true when A is infinite, but only if the integral converges. Series The Maclaurin series of an even function includes only even powers. The Maclaurin series of an odd function includes only odd powers. The Fourier series of a periodic even function includes only cosine terms. The Fourier series of a periodic odd function includes only sine terms. Symmetry in linear algebra Symmetry in matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if By the definition o
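The two integration facts for even and odd functions can be checked numerically; a short Python sketch with arbitrarily chosen example functions follows.

```python
import numpy as np
from scipy.integrate import quad

A = 3.0
odd = lambda x: x**3 + np.sin(x)      # odd:  f(-x) = -f(x)
even = lambda x: x**2 + np.cos(x)     # even: f(-x) =  f(x)

assert abs(quad(odd, -A, A)[0]) < 1e-8                  # integrates to zero over [-A, A]
full, half = quad(even, -A, A)[0], quad(even, 0, A)[0]
assert np.isclose(full, 2 * half)                       # twice the half-range integral
print(full, 2 * half)
```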
https://en.wikipedia.org/wiki/Random%20number%20table
Random number tables have been used in statistics for tasks such as selected random samples. This was much more effective than manually selecting the random samples (with dice, cards, etc.). Nowadays, tables of random numbers have been replaced by computational random number generators. If carefully prepared, the filtering and testing processes remove any noticeable bias or asymmetry from the hardware-generated original numbers so that such tables provide the most "reliable" random numbers available to the casual user. Any published (or otherwise accessible) random data table is unsuitable for cryptographic purposes since the accessibility of the numbers makes them effectively predictable, and hence their effect on a cryptosystem is also predictable. By way of contrast, genuinely random numbers that are only accessible to the intended encoder and decoder allow literally unbreakable encryption of a similar or lesser amount of meaningful data (using a simple exclusive OR operation) in a method known as the one-time pad, which has often insurmountable problems that are barriers to implementing this method correctly. History Tables of random numbers have the desired properties no matter how chosen from the table: by row, column, diagonal or irregularly. The first such table was published by L.H.C. Tippett in 1927, and since then a number of other such tables were developed. The first tables were generated through a variety of ways—one (by L.H.C. Tippett) took its numbers "at random" from census registers, another (by R.A. Fisher and Francis Yates) used numbers taken "at random" from logarithm tables, and in 1939 a set of 100,000 digits were published by M.G. Kendall and B. Babington Smith produced by a specialized machine in conjunction with a human operator. In the mid-1940s, the RAND Corporation set about to develop a large table of random numbers for use with the Monte Carlo method, and using a hardware random number generator produced A Million Random Digits with 100,000 Normal Deviates. The RAND table used electronic simulation of a roulette wheel attached to a computer, the results of which were then carefully filtered and tested before being used to generate the table. The RAND table was an important breakthrough in delivering random numbers because such a large and carefully prepared table had never before been available (the largest previously published table was ten times smaller in size), and because it was also available on IBM punched cards, which allowed for its use in computers. In the 1950s, a hardware random number generator named ERNIE was used to draw British premium bond numbers. The first "testing" of random numbers for statistical randomness was developed by M.G. Kendall and B. Babington Smith in the late 1930s, and was based upon looking for certain types of probabilistic expectations in a given sequence. The simplest test looked to make sure that roughly equal numbers of 1s, 2s, 3s, etc. were present; more complicated te
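A minimal sketch of the simplest kind of frequency test mentioned above (roughly equal counts of each digit), phrased as a chi-square statistic; this is an illustration, not Kendall and Babington Smith's exact procedure.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)
digits = rng.integers(0, 10, 100_000)      # stand-in for a column of a random number table

counts = Counter(digits.tolist())
expected = len(digits) / 10
chi2 = sum((counts[d] - expected) ** 2 / expected for d in range(10))
# With 9 degrees of freedom the 5% critical value is about 16.9; a value well
# above that would suggest the digit frequencies are not uniform.
print(dict(sorted(counts.items())), round(chi2, 2))
```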