https://en.wikipedia.org/wiki/WKT
WKT may refer to: Well-known text representation of coordinate reference systems, a text markup language for representing coordinate reference systems Well-known text representation of geometry, a text markup language for representing vector geometry objects WKT (sealant), a marine sealant West Kowloon Terminus, a railway station in Hong Kong
https://en.wikipedia.org/wiki/Fundamental%20lemma%20of%20the%20calculus%20of%20variations
In mathematics, specifically in the calculus of variations, a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point. Accordingly, the necessary condition of extremum (functional derivative equal to zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with an arbitrary function. The proof usually exploits the possibility to choose δf concentrated on an interval on which f keeps its sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed. Basic version If a continuous function f on an open interval (a,b) satisfies the equality ∫_a^b f(x)h(x) dx = 0 for all compactly supported smooth functions h on (a,b), then f is identically zero. Here "smooth" may be interpreted as "infinitely differentiable", but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside [c,d] for some c, d such that a < c < d < b"; but often a weaker statement suffices, assuming only that h (or h and a number of its derivatives) vanishes at the endpoints a, b; in this case the closed interval [a,b] is used. Version for two given functions If a pair of continuous functions f, g on an interval (a,b) satisfies the equality ∫_a^b (f(x)h(x) + g(x)h′(x)) dx = 0 for all compactly supported smooth functions h on (a,b), then g is differentiable, and g′ = f everywhere. The special case for g = 0 is just the basic version. Here is the special case for f = 0 (often sufficient). If a continuous function g on an interval (a,b) satisfies the equality ∫_a^b g(x)h′(x) dx = 0 for all smooth functions h on (a,b) such that h(a) = h(b) = 0, then g is constant. If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange, while the proof of differentiability of g is due to Paul du Bois-Reymond. Versions for discontinuous functions The given functions (f, g) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). Sometimes the given functions are assumed to be piecewise continuous, in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points. Higher derivatives If a tuple of continuous functions f_0, f_1, ..., f_n on an interval (a,b) satisfies the equality ∫_a^b (f_0(x)h(x) + f_1(x)h′(x) + ⋯ + f_n(x)h^(n)(x)) dx = 0 for all compactly supported smooth functions h on (a,b), then there exist continuously differentiable
https://en.wikipedia.org/wiki/Transitive%20set
In set theory, a branch of mathematics, a set A is called transitive if either of the following equivalent conditions holds: whenever x ∈ A and y ∈ x, then y ∈ A; whenever x ∈ A and x is not an urelement, then x is a subset of A. Similarly, a class M is transitive if every element of M is a subset of M. Examples Using the definition of ordinal numbers suggested by John von Neumann, ordinal numbers are defined as hereditarily transitive sets: an ordinal number is a transitive set whose members are also transitive (and thus ordinals). The class of all ordinals is a transitive class. Any of the stages V_α and L_α leading to the construction of the von Neumann universe V and Gödel's constructible universe L are transitive sets. The universes V and L themselves are transitive classes. This is a complete list of all finite transitive sets with up to 20 brackets: Properties A set X is transitive if and only if ∪X ⊆ X, where ∪X is the union of all elements of X that are sets, ∪X = {y | (∃x ∈ X) y ∈ x}. If X is transitive, then ∪X is transitive. If X and Y are transitive, then X ∪ Y and X ∪ Y ∪ {X, Y} are transitive. In general, if Z is a class all of whose elements are transitive sets, then ∪Z and Z ∪ ∪Z are transitive. (The first sentence in this paragraph is the case of Z = {X, Y}.) A set X that does not contain urelements is transitive if and only if it is a subset of its own power set, X ⊆ P(X). The power set of a transitive set without urelements is transitive. Transitive closure The transitive closure of a set X is the smallest (with respect to inclusion) transitive set that includes X (i.e. X ⊆ TC(X)). Suppose one is given a set X, then the transitive closure of X is TC(X) = ∪ {X, ∪X, ∪∪X, ∪∪∪X, ...}. Proof. Denote X_0 = X and X_{n+1} = ∪X_n. Then we claim that the set T = ∪_n X_n is transitive, and whenever T_1 is a transitive set including X then T ⊆ T_1. Assume y ∈ x ∈ T. Then x ∈ X_n for some n and so y ∈ ∪X_n = X_{n+1}. Since X_{n+1} ⊆ T, y ∈ T. Thus T is transitive. Now let T_1 be as above. We prove by induction that X_n ⊆ T_1 for all n, thus proving that T ⊆ T_1: The base case holds since X_0 = X ⊆ T_1. Now assume X_n ⊆ T_1. Then X_{n+1} = ∪X_n ⊆ ∪T_1. But T_1 is transitive so ∪T_1 ⊆ T_1, hence X_{n+1} ⊆ T_1. This completes the proof. Note that this is the set of all of the objects related to X by the transitive closure of the membership relation, since the union of a set can be expressed in terms of the relative product of the membership relation with itself. The transitive closure of a set can be expressed by a first-order formula: T is a transitive closure of X iff T is an intersection of all transitive supersets of X (that is, every transitive superset of X contains T as a subset). Transitive models of set theory Transitive classes are often used for construction of interpretations of set theory in itself, usually called inner models. The reason is that properties defined by bounded formulas are absolute for transitive classes. A transitive set (or class) that is a model of a formal system of set theory is called a transitive model of the system (provided that the element relation of the model is the restriction of the true element relation to the universe of the model). Transitivity is an important factor in determining the absoluteness of formulas. In the superstructure
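The union iteration TC(X) = X ∪ ∪X ∪ ∪∪X ∪ ... translates directly into code. Below is a minimal Python sketch (our illustration, not from the article) that models hereditarily finite sets as frozensets and computes the transitive closure by repeating the union operation until a fixed point is reached; the helper names are ours.

```python
# Transitive closure of a hereditarily finite set, modeled with frozensets.

def big_union(s):
    """Union of all elements of s (each element is itself a frozenset)."""
    out = set()
    for member in s:
        out |= member
    return frozenset(out)

def transitive_closure(x):
    """Smallest transitive set including x: x, then add union-of-x, etc."""
    closure = set(x)
    layer = frozenset(x)
    while True:
        layer = big_union(layer)
        if layer <= closure:          # no new elements: fixed point reached
            return frozenset(closure)
        closure |= layer

def is_transitive(x):
    """x is transitive iff every element of x is a subset of x."""
    return all(member <= x for member in x)

# Von Neumann naturals: 0 = {}, 1 = {0}, 2 = {0, 1}; each is transitive.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
assert is_transitive(two)

# {2} is not transitive, but its transitive closure {0, 1, 2} is.
singleton = frozenset({two})
assert not is_transitive(singleton)
assert transitive_closure(singleton) == frozenset({zero, one, two})
```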
https://en.wikipedia.org/wiki/Generalized%20Appell%20polynomials
In mathematics, a polynomial sequence {p_n(z)} has a generalized Appell representation if the generating function for the polynomials takes on a certain form: K(z,w) = A(w)Ψ(zg(w)) = Σ_{n=0}^∞ p_n(z) w^n, where the generating function or kernel K(z,w) is composed of the series A(w) = Σ_{n=0}^∞ a_n w^n with a_0 ≠ 0 and Ψ(t) = Σ_{n=0}^∞ Ψ_n t^n with all Ψ_n ≠ 0 and g(w) = Σ_{n=1}^∞ g_n w^n with g_1 ≠ 0. Given the above, it is not hard to show that p_n(z) is a polynomial of degree n. Boas–Buck polynomials are a slightly more general class of polynomials. Special cases The choice of g(w) = w gives the class of Brenke polynomials. The choice of Ψ(t) = e^t results in the Sheffer sequence of polynomials, which include the general difference polynomials, such as the Newton polynomials. The combined choice of g(w) = w and Ψ(t) = e^t gives the Appell sequence of polynomials. Explicit representation The generalized Appell polynomials have the explicit representation p_n(z) = Σ_{k=0}^n z^k Ψ_k h_k. The constant h_k is h_k = Σ a_{j_0} g_{j_1} g_{j_2} ⋯ g_{j_k}, where this sum extends over all compositions of n into k + 1 parts; that is, the sum extends over all j_0, j_1, ..., j_k such that j_0 + j_1 + ⋯ + j_k = n. For the Appell polynomials, this becomes the formula p_n(z) = Σ_{k=0}^n a_{n−k} z^k / k!. Recursion relation Equivalently, a necessary and sufficient condition that the kernel K(z,w) can be written as A(w)Ψ(zg(w)) with g_1 = 1 is that w ∂K/∂w = c(w)K(z,w) + zb(w) ∂K/∂z, where b(w) and c(w) have the power series b(w) = w g′(w)/g(w) = 1 + Σ_{n=1}^∞ b_n w^n and c(w) = w A′(w)/A(w) = Σ_{n=1}^∞ c_n w^n. Substituting K(z,w) = Σ_{n=0}^∞ p_n(z) w^n immediately gives a recursion relation for the polynomials p_n(z). For the special case of the Brenke polynomials, one has g(w) = w and thus all of the b_n = 0, simplifying the recursion relation significantly. See also q-difference polynomials References Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. Polynomials
https://en.wikipedia.org/wiki/Donald%20Dines%20Wall
Donald Dines Wall (August 13, 1921 – November 28, 2000) was an American mathematician working primarily on number theory. He obtained his Ph.D. on normal numbers from the University of California, Berkeley in 1949, where his adviser was Derrick Henry Lehmer. His better known papers include the first modern analysis of the Fibonacci sequence modulo a positive integer. Drawing on Wall's work, Zhi-Hong Sun and his twin brother Zhi-Wei Sun proved a theorem about what are now known as the Wall–Sun–Sun primes that guided the search for counterexamples to Fermat's Last Theorem. Early life Wall was born in Kansas City, Missouri, on August 13, 1921 to Donald F. Wall and Mary Wooldridge. The family lived in Louisiana and Texas as he was growing up. In 1933, they moved to Whittier, California, then in 1936 to Santa Barbara, California, where he graduated from high school in 1938. He enrolled at UCLA and joined the Delta Sigma Phi fraternity. In April 1940 he had a successful operation to remove a brain tumor at UC Hospital in San Francisco. The surgeon was Howard C. Naffziger, who later served on the board of Regents of the University of California. He and Naffziger kept in touch for many years afterwards. Career He graduated from UCLA with a B.A. in Mathematics in the spring of 1944. After graduation he took a full-time job with Douglas Aircraft but continued in graduate school at UCLA. In 1946 he received an M.A. in mathematical statistics from UCLA. In the same year he passed the first three of eight actuarial exams. In 1947, he moved to Hartford, Connecticut, to work for the Aetna Life Insurance Company. He also taught an evening math class at Trinity College. In the fall of 1947, he returned to graduate school at Harvard University, where he became interested in number theory. He also took a class at the Massachusetts Institute of Technology. In June 1948 he returned to California to complete his Ph.D. at UC Berkeley, where he also taught classes as a teaching assistant. In 1949 he was awarded his Ph.D. on normal numbers from UC Berkeley. In fall of 1949 he and his family moved to Santa Barbara where he took a job as instructor in mathematics at Santa Barbara College of the University of California. He taught general astronomy as well as number theory and other math courses. After two years, he was promoted to assistant professor. In 1950, he taught a course in computer mathematics at Naval Air Station Point Mugu, where computers were being developed for calculating missile trajectories. From his work at Pt Mugu, he was recruited by IBM to work as an Applied Science Representative starting July 1951 in Los Angeles. In 1956, he became IBM's Education Coordinator for the west coast. He traveled to interested universities in the western US to give them details of a program developed by the UCLA Anderson School of Management about the use of computers in business. In 1958, he moved to White Plains, New York, and continued working at IBM until his retirement
https://en.wikipedia.org/wiki/Newton%27s%20identities
In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity. Mathematical statement Formulation in terms of symmetric polynomials Let x_1, ..., x_n be variables, denote for k ≥ 1 by p_k(x_1, ..., x_n) the k-th power sum: p_k(x_1, ..., x_n) = x_1^k + ⋯ + x_n^k = Σ_{i=1}^n x_i^k, and for k ≥ 0 denote by e_k(x_1, ..., x_n) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so e_0 = 1, e_1 = x_1 + ⋯ + x_n, e_2 = Σ_{1≤i<j≤n} x_i x_j, ..., e_n = x_1 x_2 ⋯ x_n, and e_k = 0 for k > n. Then Newton's identities can be stated as k e_k = Σ_{i=1}^k (−1)^{i−1} e_{k−i} p_i, valid for all n ≥ k ≥ 1. Also, one has 0 = Σ_{i=k−n}^k (−1)^{i−1} e_{k−i} p_i for all k > n ≥ 1. Concretely, one gets for the first few values of k: p_1 = e_1, p_2 = e_1 p_1 − 2e_2, p_3 = e_1 p_2 − e_2 p_1 + 3e_3. The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has e_1 = p_1, e_2 = (p_1^2 − p_2)/2, e_3 = (p_1^3 − 3p_1 p_2 + 2p_3)/6, and so on; here the left-hand sides never become zero. These equations allow to recursively express the e_i in terms of the p_k; to be able to do the inverse, one may rewrite them as p_k = e_1 p_{k−1} − e_2 p_{k−2} + ⋯ + (−1)^{k−2} e_{k−1} p_1 + (−1)^{k−1} k e_k. In general, we have p_k = Σ_{i=1}^{k−1} (−1)^{i−1} e_i p_{k−i} + (−1)^{k−1} k e_k, valid for all n ≥ k ≥ 1. Also, one has p_k = Σ_{i=1}^n (−1)^{i−1} e_i p_{k−i} for all k > n ≥ 1. Application to the roots of a polynomial The polynomial with roots x_i may be expanded as P(t) = Π_{i=1}^n (t − x_i) = Σ_{k=0}^n (−1)^k e_k t^{n−k}, where the coefficients e_k are the symmetric polynomials defined above. Given the power sums of the roots, the coefficients of the polynomial with roots x_i may be expressed recursively in terms of the power sums as e_0 = 1, e_k = (1/k) Σ_{i=1}^k (−1)^{i−1} e_{k−i} p_i. Formulating polynomials in this way is useful in using the method of Delves and Lyness to find the zeros of an analytic function. Application to the characteristic polynomial of a matrix When the polynomial above is the characteristic polynomial of a matrix A (in particular when A is the companion matrix of the polynomial), the roots x_i are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k, the matrix A^k has as eigenvalues the powers x_i^k, and each eigenvalue x_i of A contributes its multiplicity to that of the eigenvalue x_i^k of A^k. Then the coefficients of the characteristic polynomial of A^k are given by the elementary symmetric polynomials in those powers x_i^k. In particular, the sum of the x_i^k, which is the k-th power sum p_k of the roots of the characteristic polynomial of A, is given by its trace: p_k = tr(A^k). The Newton identities now relate the traces of
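The recursive form is easy to exercise numerically. The following Python sketch (our own illustration; the function name is hypothetical) computes the power sums p_k from the elementary symmetric polynomials via p_k = e_1 p_{k−1} − e_2 p_{k−2} + ⋯ + (−1)^{k−1} k e_k and checks the result against a direct computation from known roots.

```python
def power_sums_from_elementary(e, kmax):
    """e[j] = e_j with e[0] = 1; returns [p_1, ..., p_kmax].
    Uses p_k = sum_{i=1}^{k-1} (-1)^(i-1) e_i p_{k-i} + (-1)^(k-1) k e_k,
    with e_j = 0 for j beyond the number of variables."""
    def ee(j):
        return e[j] if j < len(e) else 0
    p = []
    for k in range(1, kmax + 1):
        s = sum((-1) ** (i - 1) * ee(i) * p[k - i - 1] for i in range(1, k))
        s += (-1) ** (k - 1) * k * ee(k)
        p.append(s)
    return p

# Check against explicit roots x = (1, 2, 3), for which
# e_1 = 6, e_2 = 11, e_3 = 6 (coefficients of (t-1)(t-2)(t-3), up to sign).
e = [1, 6, 11, 6]
roots = [1, 2, 3]
direct = [sum(r ** k for r in roots) for k in range(1, 5)]
assert power_sums_from_elementary(e, 4) == direct   # [6, 14, 36, 98]
print(direct)
```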
https://en.wikipedia.org/wiki/Maris%E2%80%93McGwire%E2%80%93Sosa%20pair
In recreational mathematics, Maris–McGwire–Sosa pairs (MMS pairs, also MMS numbers) are two consecutive natural numbers such that adding each number's digits (in base 10) to the digits of its prime factorization gives the same sum. Thus 61 → 6 + 1 (the sum of its digits) + 6 + 1 (since 61 is its prime factorization) and 62 → 6 + 2 (the sum of its digits) + 3 + 1 + 2 (since 31 × 2 is its prime factorization). The above two sums are equal (= 14), so 61 and 62 form an MMS pair. MMS pairs are so named because in 1998 the baseball players Mark McGwire and Sammy Sosa both hit their 62nd home runs for the season, passing the old record of 61, held by Roger Maris. American engineer Mike Keith noticed this property of these numbers and named pairs of numbers like these MMS pairs. See also Ruth–Aaron pair References External links Mike Keith. Maris–McGwire–Sosa Numbers. Ivars Peterson. MathTrek – Home Run Numbers. Hans Havermann. Maris–McGwire–Sosa 7-tuples, 8-tuples, & 9-tuples Base-dependent integer sequences
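The defining computation is simple to script. Here is a self-contained Python sketch (our own illustration); note one convention choice flagged in the comments: repeated prime factors are written out explicitly (12 = 2·2·3), whereas a source writing exponents instead would count 8 = 2^3 as the digits 2 and 3.

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factor_digits(n):
    """Sum of the digits of the prime factorization of n, writing repeated
    primes out explicitly (e.g. 12 = 2*2*3 contributes 2+2+3).  Trial
    division suffices for small n."""
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += digit_sum(d)
            n //= d
        d += 1
    if n > 1:
        total += digit_sum(n)   # remaining prime factor
    return total

def mms_value(n):
    return digit_sum(n) + prime_factor_digits(n)

# 61 -> 6+1 (digits) + 6+1 (61 is prime) = 14
# 62 -> 6+2 (digits) + 3+1+2 (62 = 31*2) = 14, so (61, 62) is an MMS pair.
assert mms_value(61) == mms_value(62) == 14

# Scan for the consecutive pairs below 200:
print([n for n in range(2, 200) if mms_value(n) == mms_value(n + 1)])
```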
https://en.wikipedia.org/wiki/Gershgorin%20circle%20theorem
In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn. Statement and proof Let A be a complex n × n matrix, with entries a_ij. For i ∈ {1, ..., n} let R_i be the sum of the absolute values of the non-diagonal entries in the i-th row: R_i = Σ_{j≠i} |a_ij|. Let D(a_ii, R_i) ⊆ ℂ be a closed disc centered at a_ii with radius R_i. Such a disc is called a Gershgorin disc. Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_ii, R_i). Proof. Let λ be an eigenvalue of A with corresponding eigenvector x = (x_j). Find i such that the element of x with the largest absolute value is x_i. Since Ax = λx, in particular we take the i-th component of that equation to get: Σ_j a_ij x_j = λ x_i. Taking a_ii x_i to the other side: Σ_{j≠i} a_ij x_j = (λ − a_ii) x_i. Therefore, applying the triangle inequality and recalling that |x_j| / |x_i| ≤ 1 based on how we picked i, |λ − a_ii| ≤ Σ_{j≠i} |a_ij| |x_j| / |x_i| ≤ Σ_{j≠i} |a_ij| = R_i. Corollary. The eigenvalues of A must also lie within the Gershgorin discs C_j corresponding to the columns of A. Proof. Apply the Theorem to A^T while recognizing that the eigenvalues of the transpose are the same as those of the original matrix. Example. For a diagonal matrix, the Gershgorin discs coincide with the spectrum. Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal. Discussion One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix. Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix. Of course, diagonal entries may change in the process of minimizing off-diagonal entries. The theorem does not claim that there is one disc for each eigenvalue; if anything, the discs rather correspond to the axes in ℂ^n, and each expresses a bound on precisely those eigenvalues whose eigenspaces are closest to one particular axis. In the matrix — which by construction has eigenvalues , , and with eigenvectors , , and — it is easy to see that the disc for row 2 covers and while the disc for row 3 covers and . This is however just a happy coincidence; if working through the steps of the proof one finds that in each eigenvector the first element is the largest (every eigenspace is closer to the first axis than to any other axis), so the theorem only promises that the disc for row 1 (whose radius can be twice the sum of the other two radii) covers all three eigenvalues. Strengthening of the theorem If one of the discs is disjoint from the others then it contains exactly one eigenvalue. If however it meets another disc it is possible that it contains no eigenvalue (for example, or ). In the general case the theorem can be strengthened as follows: Theorem: If the union of k discs is disjoint from the union
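A quick numerical illustration, assuming NumPy is available (the matrix below is our own example, not the one from the article): compute the disc centers and radii from the rows and confirm that every eigenvalue falls inside at least one disc.

```python
import numpy as np

# Gershgorin discs: every eigenvalue lies in some disc D(a_ii, R_i),
# where R_i is the sum of |a_ij| over j != i.
A = np.array([[10.0, 1.0, 0.0],
              [0.2, 3.0, 0.1],
              [1.0, 0.5, -4.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # row sums minus diagonal

for lam in np.linalg.eigvals(A):
    in_some_disc = any(abs(lam - c) <= r for c, r in zip(centers, radii))
    print(f"eigenvalue {lam:.4f} inside a Gershgorin disc: {in_some_disc}")
```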
https://en.wikipedia.org/wiki/Cabal%20%28disambiguation%29
A cabal is a group of people united in some design. Cabal or the Cabal may also refer to: The Cabal Ministry, a government under King Charles II of England Cabal (set theory), an American group of mathematicians concentrated in southern California Cabal (surname) Conway Cabal, an effort to remove George Washington as commander of the Continental Army during the Revolutionary War Santa Rosa de Cabal, a town and municipality in the Risaralda Department, Colombia Fiction Cabal (novella), a 1988 horror novella by Clive Barker Cabal (Dibdin novel), a 1992 novel by Michael Dibdin The Cabal, a fictional secret society in the Robert Heinlein science fiction novella If This Goes On— The Cabal, an organization in the TV-series of Sanctuary The Cabal (comics), a villainous counterpart for the Illuminati in the Marvel Comics universe Cabal (dog), the Latin spelling of the name of a dog belonging to King Arthur, whose Welsh name is Cavall The Cabal, a fictional clandestine organization in the series The Blacklist The Cabala, a 1926 novel by Thornton Wilder Computing and games Cabal (software), a packaging system used for Haskell programming language libraries Cabal (video game), a 1988 arcade game by TAD Corporation Computer Assisted Biologically Augmented Lifeform (CABAL), a highly advanced artificial intelligence in the Command and Conquer game Tiberian Sun, and its expansion pack Firestorm A secret devil-worship cult and the primary antagonists in Blood Cabal Online, a 2005 MMORPG developed by the South Korean company ESTsoft Backbone cabal, a group of administrators on Usenet in the late 80s to early 90s GURPS Cabal The Cabal, a vampiric resistance group led by Vorador in Blood Omen 2 The Cabal, a secret organization in Tomb Raider: The Angel of Darkness The Cabal, a highly militarized species of alien, and one of the four alien enemy factions found in Destiny Cabals: Magic & Battle Cards, a 2011 online trading card game developed by Kyy Games See also Cabala (disambiguation), one of several systems of mysticism Cable (disambiguation) Kabal (disambiguation)
https://en.wikipedia.org/wiki/Independence%20system
In combinatorial mathematics, an independence system is a pair (V, I), where V is a finite set and I is a collection of subsets of V (called the independent sets or feasible sets) with the following properties: The empty set is independent, i.e., ∅ ∈ I. (Alternatively, at least one subset of V is independent, i.e., I ≠ ∅.) Every subset of an independent set is independent, i.e., for each Y ⊆ X ∈ I, we have Y ∈ I. This is sometimes called the hereditary property, or downward-closedness. Another term for an independence system is an abstract simplicial complex. Relation to other concepts A pair (V, I), where V is a finite set and I is a collection of subsets of V, is also called a hypergraph. When using this terminology, the elements in the set V are called vertices and elements in the family I are called hyperedges. So an independence system can be defined shortly as a downward-closed hypergraph. An independence system with an additional property called the augmentation property or the independent set exchange property yields a matroid. The following expression summarizes the relations between the terms: HYPERGRAPHS ⊋ INDEPENDENCE-SYSTEMS = ABSTRACT-SIMPLICIAL-COMPLEXES ⊋ MATROIDS. References Combinatorics Hypergraphs
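The two axioms can be checked mechanically. A small Python sketch (our own; the example family is the independent sets of the graphic matroid of a triangle, which is in particular an independence system):

```python
from itertools import combinations

def is_independence_system(indep):
    """indep: a collection of frozensets.  Checks the two axioms: the empty
    set is independent, and independence is downward closed (hereditary)."""
    if frozenset() not in indep:
        return False
    for s in indep:
        for k in range(len(s)):
            for sub in combinations(s, k):
                if frozenset(sub) not in indep:
                    return False
    return True

# Edges {a, b, c} of a triangle: every proper subset of edges is a forest
# (independent); the whole triangle is a cycle, hence dependent.
ground = ["a", "b", "c"]
indep = {frozenset(s) for k in range(3) for s in combinations(ground, k)}
print(is_independence_system(indep))   # True
print(frozenset(ground) in indep)      # False: the full edge set is dependent
```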
https://en.wikipedia.org/wiki/Constraint%20counting
In mathematics, constraint counting is counting the number of constraints in order to compare it with the number of variables, parameters, etc. that are free to be determined, the idea being that in most cases the number of independent choices that can be made is the excess of the latter over the former. For example, in linear algebra if the number of constraints (independent equations) in a system of linear equations equals the number of unknowns then precisely one solution exists; if there are fewer independent equations than unknowns, an infinite number of solutions exist; and if the number of independent equations exceeds the number of unknowns, then no solutions exist. In the context of partial differential equations, constraint counting is a crude but often useful way of counting the number of free functions needed to specify a solution to a partial differential equation. Partial differential equations Consider a second order partial differential equation in three variables, such as the two-dimensional wave equation u_tt = u_xx + u_yy. It is often profitable to think of such an equation as a rewrite rule allowing us to rewrite arbitrary partial derivatives of the function u using fewer partials than would be needed for an arbitrary function. For example, if u satisfies the wave equation, we can rewrite u_txt = u_ttx = (u_tt)_x = u_xxx + u_yyx, where in the first equality, we appealed to the fact that partial derivatives commute. Linear equations To answer this in the important special case of a linear partial differential equation, Einstein asked: how many of the partial derivatives of a solution can be linearly independent? It is convenient to record his answer using an ordinary generating function f(s) = Σ_{k=0}^∞ N_k s^k, where N_k is a natural number counting the number of linearly independent partial derivatives (of order k) of an arbitrary function in the solution space of the equation in question. Whenever a function satisfies some partial differential equation, we can use the corresponding rewrite rule to eliminate some of them, because further mixed partials have necessarily become linearly dependent. Specifically, the power series counting the variety of arbitrary functions of three variables (no constraints) is f(s) = 1/(1−s)^3 = 1 + 3s + 6s^2 + 10s^3 + ⋯, but the power series counting those in the solution space of some second order p.d.e. is f(s) = (1−s^2)/(1−s)^3 = 1 + 3s + 5s^2 + 7s^3 + ⋯, which records that we can eliminate one second order partial (u_tt), three third order partials (u_ttt, u_ttx, u_tty), and so forth. More generally, the o.g.f. for an arbitrary function of n variables is f(s) = 1/(1−s)^n, where the coefficients N_k = C(k+n−1, n−1) of the infinite power series of the generating function are constructed using an appropriate infinite sequence of binomial coefficients, and the power series for a function required to satisfy a linear m-th order equation is g(s) = (1−s^m)/(1−s)^n. Next, (1−s^2)/(1−s)^3 = (1+s)/(1−s)^2 = 1/(1−s)^2 + s/(1−s)^2, which can be interpreted to predict that a solution to a second order linear p.d.e. in three variables is expressible by two freely chosen functions of two variables, one of which is used immediately, and the second, only after taking a first derivative, in order to express the solution
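The coefficient bookkeeping is easy to verify directly. A short Python check (our own sketch; the function names are ours) computes N_k = C(k+n−1, n−1) and the constrained counts C(k+n−1, n−1) − C(k−m+n−1, n−1) for a linear m-th order equation, reproducing the 2k+1 pattern above for the second order case in three variables.

```python
from math import comb

def free_partials(k, n):
    """Number of distinct order-k partial derivatives of a function of
    n variables (partials commute): C(k+n-1, n-1)."""
    return comb(k + n - 1, n - 1)

def constrained_partials(k, n, m):
    """After an m-th order rewrite rule, the eliminated order-k partials
    correspond to the order-(k-m) partials that the rule rewrites."""
    return free_partials(k, n) - (free_partials(k - m, n) if k >= m else 0)

# Second order equation in three variables (e.g. the 2-D wave equation):
print([constrained_partials(k, 3, 2) for k in range(6)])  # [1, 3, 5, 7, 9, 11]
```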
https://en.wikipedia.org/wiki/Direction%20cosine
In analytic geometry, the direction cosines (or directional cosines) of a vector are the cosines of the angles between the vector and the three positive coordinate axes. Equivalently, they are the contributions of each component of the basis to a unit vector in that direction. Three-dimensional Cartesian coordinates If v is a Euclidean vector in three-dimensional Euclidean space, R3, v = v_x e_x + v_y e_y + v_z e_z, where e_x, e_y, e_z are the standard basis in Cartesian notation, then the direction cosines are α = cos a = v_x / |v|, β = cos b = v_y / |v|, γ = cos c = v_z / |v|. It follows that by squaring each equation and adding the results α² + β² + γ² = cos²a + cos²b + cos²c = 1. Here α, β and γ are the direction cosines and the Cartesian coordinates of the unit vector v/|v|, and a, b and c are the direction angles of the vector v. The direction angles a, b and c are acute or obtuse angles, i.e., 0 ≤ a ≤ π, 0 ≤ b ≤ π and 0 ≤ c ≤ π, and they denote the angles formed between v and the unit basis vectors, e_x, e_y and e_z. General meaning More generally, direction cosine refers to the cosine of the angle between any two vectors. They are useful for forming direction cosine matrices that express one set of orthonormal basis vectors in terms of another set, or for expressing a known vector in a different basis. See also Cartesian tensor References Algebraic geometry Vectors (mathematics and physics)
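A worked numerical example, assuming NumPy is available (the vector is our own choice): the direction cosines are just the components of the normalized vector, and their squares sum to 1.

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])                 # |v| = 3
alpha, beta, gamma = v / np.linalg.norm(v)    # direction cosines = unit components
print(alpha, beta, gamma)                     # 1/3, 2/3, 2/3
print(alpha**2 + beta**2 + gamma**2)          # 1.0, as the identity requires

angles = np.degrees(np.arccos([alpha, beta, gamma]))  # direction angles a, b, c
print(angles)
```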
https://en.wikipedia.org/wiki/Signed%20graph
In the area of graph theory in mathematics, a signed graph is a graph in which each edge has a positive or negative sign. A signed graph is balanced if the product of edge signs around every cycle is positive. The name "signed graph" and the notion of balance appeared first in a mathematical paper of Frank Harary in 1953. Dénes Kőnig had already studied equivalent notions in 1936 under a different terminology but without recognizing the relevance of the sign group. At the Center for Group Dynamics at the University of Michigan, Dorwin Cartwright and Harary generalized Fritz Heider's psychological theory of balance in triangles of sentiments to a psychological theory of balance in signed graphs. Signed graphs have been rediscovered many times because they come up naturally in many unrelated areas. For instance, they enable one to describe and analyze the geometry of subsets of the classical root systems. They appear in topological graph theory and group theory. They are a natural context for questions about odd and even cycles in graphs. They appear in computing the ground state energy in the non-ferromagnetic Ising model; for this one needs to find a largest balanced edge set in Σ. They have been applied to data classification in correlation clustering. Fundamental theorem The sign of a path is the product of the signs of its edges. Thus a path is positive if and only if there is an even number of negative edges in it (where zero is even). In the mathematical balance theory of Frank Harary, a signed graph is balanced when every cycle is positive. Harary proves that a signed graph is balanced if and only if (1) for every pair of nodes, all paths between them have the same sign, or, equivalently, (2) the vertices partition into a pair of subsets (possibly empty), each containing only positive edges, but connected by negative edges. It generalizes the theorem that an ordinary (unsigned) graph is bipartite if and only if every cycle has even length. A simple proof uses the method of switching. Switching a signed graph means reversing the signs of all edges between a vertex subset and its complement. To prove Harary's theorem, one shows by induction that Σ can be switched to be all positive if and only if it is balanced. A weaker theorem, but with a simpler proof, is that if every 3-cycle in a signed complete graph is positive, then the graph is balanced. For the proof, pick an arbitrary node n and place it and all those nodes that are linked to n by a positive edge in one group, called A, and all those linked to n by a negative edge in the other, called B. Since this is a complete graph, every two nodes in A must be friends and every two nodes in B must be friends, otherwise there would be a 3-cycle which was unbalanced. (Since this is a complete graph, any one negative edge would cause an unbalanced 3-cycle.) Likewise, all negative edges must go between the two groups. Frustration Frustration index The frustration index (earlier called the line index of balance) of
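Harary's partition criterion (2) suggests a direct algorithmic test. The Python sketch below (our own illustration, not from the literature) treats balance checking as a constrained 2-coloring: a positive edge demands equal colors, a negative edge opposite colors, and a conflict found during breadth-first search witnesses a cycle with an odd number of negative edges.

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Signed graph on vertices 0..n-1; edges are (u, v, sign) with sign +1/-1.
    Balanced iff the vertices 2-color so that positive edges stay inside a
    color class and negative edges cross between the classes."""
    adj = [[] for _ in range(n)]
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = color[u] if sign > 0 else 1 - color[u]
                if color[v] is None:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False   # cycle with an odd number of negative edges
    return True

# A triangle with one negative edge is unbalanced; with two, it is balanced.
print(is_balanced(3, [(0, 1, +1), (1, 2, +1), (2, 0, -1)]))  # False
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (2, 0, -1)]))  # True
```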
https://en.wikipedia.org/wiki/Colored%20matroid
In mathematics, a colored matroid is a matroid whose elements are labeled from a set of colors, which can be any set that suits the purpose, for instance the set of the first n positive integers, or the sign set {+, −}. The interest in colored matroids is through their invariants, especially the colored Tutte polynomial, which generalizes the Tutte polynomial of a signed graph. There has also been study of optimization problems on matroids where the objective function of the optimization depends on the set of colors chosen as part of a matroid basis. See also Bipartite matroid Rota's basis conjecture References Matroid theory
https://en.wikipedia.org/wiki/Biased%20graph
In mathematics, a biased graph is a graph with a list of distinguished circles (edge sets of simple cycles), such that if two circles in the list are contained in a theta graph, then the third circle of the theta graph is also in the list. A biased graph is a generalization of the combinatorial essentials of a gain graph and in particular of a signed graph. Formally, a biased graph Ω is a pair (G, B) where B is a linear class of circles; this by definition is a class of circles that satisfies the theta-graph property mentioned above. A subgraph or edge set whose circles are all in B (and which contains no half-edges) is called balanced. For instance, a circle belonging to B is balanced and one that does not belong to B is unbalanced. Biased graphs are interesting mostly because of their matroids, but also because of their connection with multiary quasigroups. See below. Technical notes A biased graph may have half-edges (one endpoint) and loose edges (no endpoints). The edges with two endpoints are of two kinds: a link has two distinct endpoints, while a loop has two coinciding endpoints. Linear classes of circles are a special case of linear subclasses of circuits in a matroid. Examples If every circle belongs to B, and there are no half-edges, Ω is balanced. A balanced biased graph is (for most purposes) essentially the same as an ordinary graph. If B is empty, Ω is called contrabalanced. Contrabalanced biased graphs are related to bicircular matroids. If B consists of the circles of even length, Ω is called antibalanced and is the biased graph obtained from an all-negative signed graph. The linear class B is additive, that is, closed under repeated symmetric difference (when the result is a circle), if and only if B is the class of positive circles of a signed graph. Ω may have an underlying graph that is a cycle of length n ≥ 3 with all edges doubled. Call this a biased 2C_n. Such biased graphs in which no digon (circle of length 2) is balanced lead to spikes and swirls (see Matroids, below). Some kinds of biased graph are obtained from gain graphs or are generalizations of special kinds of gain graph. The latter include biased expansion graphs, which generalize group expansion graphs. Minors A minor of a biased graph Ω = (G, B) is the result of any sequence of taking subgraphs and contracting edge sets. For biased graphs, as for graphs, it suffices to take a subgraph (which may be the whole graph) and then contract an edge set (which may be the empty set). A subgraph of Ω consists of a subgraph H of the underlying graph G, with balanced circle class consisting of those balanced circles that are in H. The deletion of an edge set S, written Ω − S, is the subgraph with all vertices and all edges except those of S. Contraction of Ω is relatively complicated. To contract one edge e, the procedure depends on the kind of edge e is. If e is a link, contract it in G. A circle C in the contraction G/e is balanced if either
https://en.wikipedia.org/wiki/Theta%20graph
In computational geometry, the Theta graph, or Θ-graph, is a type of geometric spanner similar to a Yao graph. The basic method of construction involves partitioning the space around each vertex into a set of cones, which themselves partition the remaining vertices of the graph. Like Yao Graphs, a Θ-graph contains at most one edge per cone; where they differ is how that edge is selected. Whereas Yao Graphs will select the nearest vertex according to the metric space of the graph, the Θ-graph defines a fixed ray contained within each cone (conventionally the bisector of the cone) and selects the nearest neighbor with respect to orthogonal projections to that ray. The resulting graph exhibits several good spanner properties. Θ-graphs were first described by Clarkson in 1987 and independently by Keil in 1988. Construction Θ-graphs are specified with a few parameters which determine their construction. The most obvious parameter is k, which corresponds to the number of equal angle cones that partition the space around each vertex. In particular, for a vertex u, a cone about u can be imagined as two infinite rays emanating from it with angle θ = 2π/k between them. With respect to u, we can label these cones as C_1 through C_k in a counterclockwise pattern from C_1, which conventionally opens so that its bisector has angle 0 with respect to the plane. As these cones partition the plane, they also partition the remaining vertex set of the graph (assuming general position) into the sets V_1 through V_k, again with respect to u. Every vertex in the graph gets the same number of cones in the same orientation, and we can consider the set of vertices that fall into each. Considering a single cone C_i, we need to specify another ray emanating from u, which we will label L_i. For every vertex in V_i, we consider the orthogonal projection of each v ∈ V_i onto L_i. Suppose that w is the vertex with the closest such projection, then the edge (u, w) is added to the graph. This is the primary difference from Yao Graphs which always select the nearest vertex; in the example image, a Yao Graph would include a different edge instead. Construction of a Θ-graph is possible with a sweepline algorithm in O(n log n) time. Properties Θ-graphs exhibit several good geometric spanner properties. When the parameter k is a constant, the Θ-graph is a sparse spanner. As each cone generates at most one edge per cone, most vertices will have small degree, and the overall graph will have at most kn edges. The stretch factor between any pair of points in a spanner is defined as the ratio between their metric space distance, and their distance within the spanner (i.e. from following edges of the spanner). The stretch factor of the entire spanner is the maximum stretch factor over all pairs of points within it. Recall from above that θ = 2π/k; then when k ≥ 7, the Θ-graph has a stretch factor of at most 1/(1 − 2 sin(θ/2)). If the orthogonal projection line in each cone is chosen to be the bisector, then smaller numbers of cones still admit a bounded spanning ratio. For the smallest values of k, the Θ-graph forms a nearest neighbor graph
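The cone-and-projection rule is concrete enough to prototype. Below is a naive O(k n²) Python sketch (our own, not the O(n log n) sweepline construction); as a simplification it anchors cone C_i to start at angle (i−1)·θ rather than centering C_1 on angle 0, uses the cone bisector as the projection ray, and assumes k ≥ 3 so projections of in-cone points are positive.

```python
import math

def theta_graph(points, k):
    """Naive Theta-graph: for each vertex u and each of k equal cones around
    u, connect u to the vertex whose orthogonal projection onto the cone's
    bisector ray is nearest to u."""
    edges = set()
    cone_angle = 2 * math.pi / k
    for i, (ux, uy) in enumerate(points):
        best = {}   # cone index -> (projection length, vertex index)
        for j, (vx, vy) in enumerate(points):
            if i == j:
                continue
            dx, dy = vx - ux, vy - uy
            ang = math.atan2(dy, dx) % (2 * math.pi)
            cone = int(ang // cone_angle)
            bis = (cone + 0.5) * cone_angle
            # signed length of the projection of (dx, dy) onto the bisector
            proj = dx * math.cos(bis) + dy * math.sin(bis)
            if cone not in best or proj < best[cone][0]:
                best[cone] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return edges

pts = [(0, 0), (2, 1), (3, -1), (1, 3), (-2, 2)]
print(sorted(theta_graph(pts, 6)))
```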
https://en.wikipedia.org/wiki/Bayesian%20information%20criterion
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC). When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7. The BIC was developed by Gideon E. Schwarz and published in a 1978 paper, where he gave a Bayesian argument for adopting it. Definition The BIC is formally defined as BIC = k ln(n) − 2 ln(L̂), where L̂ = the maximized value of the likelihood function of the model M, i.e. L̂ = p(x | θ̂, M), where θ̂ are the parameter values that maximize the likelihood function; x = the observed data; n = the number of data points in x, the number of observations, or equivalently, the sample size; k = the number of parameters estimated by the model. For example, in multiple linear regression, the estimated parameters are the intercept, the q slope parameters, and the constant variance of the errors; thus, k = q + 2. Derivation Konishi and Kitagawa derive the BIC to approximate the distribution of the data, integrating out the parameters using Laplace's method, starting with the following model evidence: p(x | M) = ∫ p(x | θ, M) π(θ | M) dθ, where π(θ | M) is the prior for θ under model M. The log-likelihood, ln(p(x | θ, M)), is then expanded to a second order Taylor series about the MLE, θ̂, assuming it is twice differentiable as follows: ln(p(x | θ, M)) = ln(L̂) − (n/2) (θ − θ̂)′ I(θ̂) (θ − θ̂) + R(x, θ), where I(θ̂) is the average observed information per observation, and R(x, θ) denotes the residual term. To the extent that R(x, θ) is negligible and π(θ | M) is relatively linear near θ̂, we can integrate out θ to get the following: p(x | M) ≈ L̂ (2π/n)^{k/2} |I(θ̂)|^{−1/2} π(θ̂ | M). As n increases, we can ignore |I(θ̂)| and π(θ̂ | M) as they are O(1). Thus, p(x | M) = exp(ln L̂ − (k/2) ln(n) + O(1)) = exp(−BIC/2 + O(1)), where BIC is defined as above, and θ̂ either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior has nonzero slope at the MLE. Then the posterior probability of a model is, up to a normalizing constant, approximately exp(−BIC/2). Usage When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance σ_e² and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors. It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.
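For Gaussian-error regression the definition reduces to a closed form in the residual sum of squares: −2 ln L̂ = n ln(RSS/n) plus a constant that cancels when models are compared on the same data. A Python sketch of that comparison, assuming NumPy is available (the data and function name are our own):

```python
import numpy as np

def gaussian_bic(y, X):
    """BIC = k*ln(n) - 2*ln(L̂) for OLS with Gaussian errors, dropping the
    additive constant that is shared by all models fit to the same y.
    k counts the regression coefficients plus the error variance."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 200)        # the true model is linear

X1 = np.column_stack([np.ones_like(x), x])               # linear model
X2 = np.column_stack([np.ones_like(x), x, x**2, x**3])   # needlessly cubic
print(gaussian_bic(y, X1), gaussian_bic(y, X2))  # linear BIC typically lower
```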
https://en.wikipedia.org/wiki/Residual%20sum%20of%20squares
In statistics, the residual sum of squares (RSS), also known as the sum of squared residuals (SSR) or the sum of squared estimate of errors (SSE), is the sum of the squares of residuals (deviations predicted from actual empirical values of data). It is a measure of the discrepancy between the data and an estimation model, such as a linear regression. A small RSS indicates a tight fit of the model to the data. It is used as an optimality criterion in parameter selection and model selection. In general, total sum of squares = explained sum of squares + residual sum of squares. For a proof of this in the multivariate ordinary least squares (OLS) case, see partitioning in the general OLS model. One explanatory variable In a model with a single explanatory variable, RSS is given by: RSS = Σ_{i=1}^n (y_i − f(x_i))², where y_i is the i-th value of the variable to be predicted, x_i is the i-th value of the explanatory variable, and f(x_i) is the predicted value of y_i (also termed ŷ_i). In a standard linear simple regression model, y_i = α + βx_i + ε_i, where α and β are coefficients, y and x are the regressand and the regressor, respectively, and ε is the error term. The sum of squares of residuals is the sum of squares of the estimates ε̂_i; that is RSS = Σ_{i=1}^n (ε̂_i)² = Σ_{i=1}^n (y_i − (α̂ + β̂x_i))², where α̂ is the estimated value of the constant term α and β̂ is the estimated value of the slope coefficient β. Matrix expression for the OLS residual sum of squares The general regression model with n observations and k explanators, the first of which is a constant unit vector whose coefficient is the regression intercept, is y = Xβ + e, where y is an n × 1 vector of dependent variable observations, each column of the n × k matrix X is a vector of observations on one of the k explanators, β is a k × 1 vector of true coefficients, and e is an n × 1 vector of the true underlying errors. The ordinary least squares estimator for β is β̂ = (X′X)⁻¹X′y. The residual vector ê = y − Xβ̂; so the residual sum of squares is: RSS = ê′ê = ‖ê‖² (equivalent to the square of the norm of residuals). In full: RSS = y′y − y′X(X′X)⁻¹X′y = y′[I − H]y, where H = X(X′X)⁻¹X′ is the hat matrix, or the projection matrix in linear regression. Relation with Pearson's product-moment correlation The least-squares regression line is given by ŷ = ax + b, where b = ȳ − ax̄ and a = S_xy / S_xx, where S_xy = Σ_{i=1}^n (x̄ − x_i)(ȳ − y_i) and S_xx = Σ_{i=1}^n (x̄ − x_i)². Therefore, RSS = S_yy − a S_xy = S_yy − S_xy² / S_xx, where S_yy = Σ_{i=1}^n (ȳ − y_i)². The Pearson product-moment correlation is given by r = S_xy / √(S_xx S_yy); therefore, RSS = S_yy (1 − r²). See also Akaike information criterion#Comparison with least squares Chi-squared distribution#Applications Degrees of freedom (statistics)#Sum of squares and degrees of freedom Errors and residuals in statistics Lack-of-fit sum of squares Mean squared error Reduced chi-squared statistic, RSS per degree of freedom Squared deviations Sum of squares (statistics) References Least squares Errors and residuals
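The componentwise and matrix expressions agree, which is easy to confirm numerically. A Python sketch, assuming NumPy is available (the simulated data are our own):

```python
import numpy as np

# RSS two ways: componentwise, and via the hat matrix as y'(I - H)y.
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one explanator
beta_true = np.array([2.0, -1.5])
y = X @ beta_true + rng.normal(0, 0.5, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
rss_direct = residuals @ residuals

H = X @ np.linalg.inv(X.T @ X) @ X.T                   # hat (projection) matrix
rss_hat = y @ (np.eye(n) - H) @ y

print(rss_direct, rss_hat)   # agree up to floating-point error
```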
https://en.wikipedia.org/wiki/Explained%20sum%20of%20squares
In statistics, the explained sum of squares (ESS), alternatively known as the model sum of squares or sum of squares due to regression (SSR – not to be confused with the residual sum of squares (RSS) or sum of squares of errors), is a quantity used in describing how well a model, often a regression model, represents the data being modelled. In particular, the explained sum of squares measures how much variation there is in the modelled values and this is compared to the total sum of squares (TSS), which measures how much variation there is in the observed data, and to the residual sum of squares, which measures the variation in the error between the observed data and modelled values. Definition The explained sum of squares (ESS) is the sum of the squares of the deviations of the predicted values from the mean value of a response variable, in a standard regression model — for example, y_i = a + b_1 x_1i + b_2 x_2i + ⋯ + ε_i, where y_i is the i th observation of the response variable, x_ji is the i th observation of the j th explanatory variable, a and b_j are coefficients, i indexes the observations from 1 to n, and ε_i is the i th value of the error term. In general, the greater the ESS, the better the estimated model performs. If â and b̂_j are the estimated coefficients, then ŷ_i = â + b̂_1 x_1i + b̂_2 x_2i + ⋯ is the i th predicted value of the response variable. The ESS is then: ESS = Σ_{i=1}^n (ŷ_i − ȳ)², where ŷ_i is the value estimated by the regression line. In some cases (see below): total sum of squares (TSS) = explained sum of squares (ESS) + residual sum of squares (RSS). Partitioning in simple linear regression The following equality, stating that the total sum of squares (TSS) equals the residual sum of squares (=SSE : the sum of squared errors of prediction) plus the explained sum of squares (SSR : the sum of squares due to regression or explained sum of squares), is generally true in simple linear regression: y_i − ȳ = (y_i − ŷ_i) + (ŷ_i − ȳ). Simple derivation Square both sides and sum over all i: Σ_{i=1}^n (y_i − ȳ)² = Σ_{i=1}^n (y_i − ŷ_i)² + Σ_{i=1}^n (ŷ_i − ȳ)² + 2 Σ_{i=1}^n (y_i − ŷ_i)(ŷ_i − ȳ). Here is how the last term above is zero from simple linear regression: since ŷ_i = â + b̂x_i, we have Σ (y_i − ŷ_i)(ŷ_i − ȳ) = (â − ȳ) Σ ε̂_i + b̂ Σ ε̂_i x_i, and both sums vanish by the OLS normal equations Σ ε̂_i = 0 and Σ ε̂_i x_i = 0. So, Σ (y_i − ȳ)² = Σ (y_i − ŷ_i)² + Σ (ŷ_i − ȳ)². Therefore, TSS = RSS + ESS. Partitioning in the general ordinary least squares model The general regression model with n observations and k explanators, the first of which is a constant unit vector whose coefficient is the regression intercept, is y = Xβ + e, where y is an n × 1 vector of dependent variable observations, each column of the n × k matrix X is a vector of observations on one of the k explanators, β is a k × 1 vector of true coefficients, and e is an n × 1 vector of the true underlying errors. The ordinary least squares estimator for β is β̂ = (X′X)⁻¹X′y. The residual vector is ê = y − Xβ̂, so the residual sum of squares is, after simplification, RSS = y′y − β̂′X′y. Denote by ȳ1 the constant vector all of whose elements are the sample mean ȳ of the dependent variable values in the vector y. Then the total sum of squares is TSS = (y − ȳ1)′(y − ȳ1). The explained sum of squares, defined as the sum of squared deviations of the predicted values ŷ = Xβ̂ from the observed mean of y, is ESS = (ŷ − ȳ1)′(ŷ − ȳ1). Using y − ȳ1 = ê + (ŷ − ȳ1) in this, and simplifying to obtain TSS = ESS + RSS + 2ê′(ŷ − ȳ1), gives the result that TSS = ESS + RSS if and only if ê′(ŷ − ȳ1) = 0. Since X′ê = 0, this reduces to ȳ (1′ê) = 0, which holds in particular whenever the regression contains a constant term, so that the residuals sum to zero.
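The partition TSS = ESS + RSS is easy to verify numerically when the model includes an intercept. A Python sketch, assuming NumPy is available (the simulated data are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
y = 3.0 + 0.7 * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])        # includes the constant column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

tss = np.sum((y - y.mean()) ** 2)
ess = np.sum((y_hat - y.mean()) ** 2)
rss = np.sum((y - y_hat) ** 2)
print(tss, ess + rss)   # equal up to rounding, because the model has an intercept
```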
https://en.wikipedia.org/wiki/Unit%20root%20test
In statistics, a unit root test tests whether a time series variable is non-stationary and possesses a unit root. The null hypothesis is generally defined as the presence of a unit root and the alternative hypothesis is either stationarity, trend stationarity or explosive root depending on the test used. General approach In general, the approach to unit root testing implicitly assumes that the time series to be tested can be written as y_t = D_t + z_t + ε_t, where D_t is the deterministic component (trend, seasonal component, etc.), z_t is the stochastic component, and ε_t is the stationary error process. The task of the test is to determine whether the stochastic component contains a unit root or is stationary. Main tests Popular unit root tests include: augmented Dickey–Fuller test (valid in large samples); Phillips–Perron test; KPSS test (here the null hypothesis is trend stationarity rather than the presence of a unit root); ADF-GLS test. Unit root tests are closely linked to serial correlation tests. However, while all processes with a unit root will exhibit serial correlation, not all serially correlated time series will have a unit root. Popular serial correlation tests include: Breusch–Godfrey test Ljung–Box test Durbin–Watson test Notes References Time series statistical tests
https://en.wikipedia.org/wiki/MISG
MISG may refer to: The Mathematics in Industry Study Group, an annual workshop now held in Australia under the wing of ANZIAM (Australia and New Zealand Industrial and Applied Mathematics) Malaysian Islamic Study Group, a U.S.-based student organization Military Intelligence and Security Group, the former secret police agency of the Philippines
https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller%20test
In statistics, the Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive (AR) time series model. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity. The test is named after the statisticians David Dickey and Wayne Fuller, who developed it in 1979. Explanation A simple AR model is y_t = ρ y_{t−1} + u_t, where y_t is the variable of interest, t is the time index, ρ is a coefficient, and u_t is the error term (assumed to be white noise). A unit root is present if ρ = 1. The model would be non-stationary in this case. The regression model can be written as Δy_t = (ρ − 1) y_{t−1} + u_t = δ y_{t−1} + u_t, where Δ is the first difference operator and δ = ρ − 1. This model can be estimated, and testing for a unit root is equivalent to testing δ = 0. Since the test is done over the residual term rather than raw data, it is not possible to use standard t-distribution to provide critical values. Therefore, this statistic has a specific distribution simply known as the Dickey–Fuller table. There are three main versions of the test: 1. Test for a unit root: Δy_t = δ y_{t−1} + u_t. 2. Test for a unit root with constant: Δy_t = a_0 + δ y_{t−1} + u_t. 3. Test for a unit root with constant and deterministic time trend: Δy_t = a_0 + a_1 t + δ y_{t−1} + u_t. Each version of the test has its own critical value which depends on the size of the sample. In each case, the null hypothesis is that there is a unit root, δ = 0. The tests have low statistical power in that they often cannot distinguish between true unit-root processes (δ = 0) and near unit-root processes (δ is close to zero). This is called the "near observation equivalence" problem. The intuition behind the test is as follows. If the series is stationary (or trend-stationary), then it has a tendency to return to a constant (or deterministically trending) mean. Therefore, large values will tend to be followed by smaller values (negative changes), and small values by larger values (positive changes). Accordingly, the level of the series will be a significant predictor of next period's change, and will have a negative coefficient. If, on the other hand, the series is integrated, then positive changes and negative changes will occur with probabilities that do not depend on the current level of the series; in a random walk, where you are now does not affect which way you will go next. It is notable that the model with constant, y_t = a_0 + y_{t−1} + u_t, may be rewritten as y_t = y_0 + a_0 t + Σ_{i=1}^t u_i, with a deterministic trend coming from a_0 t and a stochastic intercept term coming from y_0 + Σ u_i, resulting in what is referred to as a stochastic trend. There is also an extension of the Dickey–Fuller (DF) test called the augmented Dickey–Fuller test (ADF), which removes all the structural effects (autocorrelation) in the time series and then tests using the same procedure. Dealing with uncertainty about including the intercept and deterministic time trend terms Which of the three main versions of the test should be used is not a minor issue. The decision is important for the size of the unit root test (the probability of rejecting the null
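In practice the augmented Dickey–Fuller variant is what libraries expose. A short demonstration, assuming statsmodels is installed (the simulated series are our own): a random walk should fail to reject the unit-root null, while a stationary AR(1) with ρ = 0.5 should reject it.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller   # requires statsmodels

rng = np.random.default_rng(3)
n = 500
random_walk = np.cumsum(rng.normal(size=n))      # rho = 1: unit root present
ar_stationary = np.zeros(n)                      # rho = 0.5: stationary
for t in range(1, n):
    ar_stationary[t] = 0.5 * ar_stationary[t - 1] + rng.normal()

for name, series in [("random walk", random_walk), ("AR(0.5)", ar_stationary)]:
    stat, pvalue, *_ = adfuller(series, regression="c")   # version with constant
    print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
```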
https://en.wikipedia.org/wiki/Epsilon-induction
In set theory, ∈-induction, also called epsilon-induction or set-induction, is a principle that can be used to prove that all sets satisfy a given property. Considered as an axiomatic principle, it is called the axiom schema of set induction. The principle implies transfinite induction and recursion. It may also be studied in a general context of induction on well-founded relations. Statement The schema is for any given property P of sets and states that, if for every set x, the truth of P(x) follows from the truth of P(y) for all elements y of x, then this property P holds for all sets. In symbols: ∀x ((∀y (y ∈ x → P(y))) → P(x)) → ∀z P(z). Note that for the "bottom case" where x denotes the empty set ∅, the subexpression ∀y (y ∈ ∅ → P(y)) is vacuously true for all propositions and so that implication is proven by just proving P(∅). In words, if a property is persistent when collecting any sets with that property into a new set (and this also requires establishing the property for the empty set), then the property is simply true for all sets. Said differently, persistence of a property with respect to set formation suffices to reach each set in the domain of discourse. In terms of classes One may use the language of classes to express schemata. Denote the universal class {z ∣ z = z} by V. Let B be {z ∣ P(z)} and use the informal A ⊆ B as abbreviation for ∀z (z ∈ A → z ∈ B). The principle then says that for any B, ∀x (x ⊆ B → x ∈ B) → V = B. Here the quantifier ranges over all sets. In words this says that any class that contains all of its subsets is simply just the class of all sets. Assuming bounded separation, V is a proper class. So the property of containing all of one's subsets is exhibited only by the proper class V, and in particular by no set. Indeed, note that any set is a subset of itself and under some more assumptions, already the self-membership will be ruled out. For comparison to another property, note that for a class C to be ∈-transitive means ∀x (x ∈ C → x ⊆ C). There are many transitive sets, in particular the set theoretical ordinals. Related notions of induction If the class B is defined by a predicate as above, the principle can be relativized: if C is the universal class, then this is again just an instance of the schema. But indeed if C is any ∈-transitive class, then a version of set induction for C holds inside of C. Ordinals Ordinals may be defined as transitive sets of transitive sets. The induction situation in the first infinite ordinal ω, the set of natural numbers, is discussed in more detail below. As set induction allows for induction in transitive sets containing ω, this gives what is called transfinite induction and definition by transfinite recursion using, indeed, the whole proper class of ordinals. With ordinals, induction proves that all sets have ordinal rank and the rank of an ordinal is itself. The theory of Von Neumann ordinals describes such sets and, there, ∈ models the order relation <, which classically is provably trichotomous and total. Of interest there is the successor operation x ↦ x ∪ {x} that maps ordinals to ordinals. In the classical case, the induction step for successor ordinals can be simplified so that
https://en.wikipedia.org/wiki/Equiangular%20polygon
In Euclidean geometry, an equiangular polygon is a polygon whose vertex angles are equal. If the lengths of the sides are also equal (that is, if it is also equilateral) then it is a regular polygon. Isogonal polygons are equiangular polygons which alternate two edge lengths. For clarity, a planar equiangular polygon can be called direct or indirect. A direct equiangular polygon has all angles turning in the same direction in a plane and can include multiple turns. Convex equiangular polygons are always direct. An indirect equiangular polygon can include angles turning right or left in any combination. A skew equiangular polygon may be isogonal, but can't be considered direct since it is nonplanar. A spirolateral nθ is a special case of an equiangular polygon with a set of n integer edge lengths repeating in sequence until returning to the start, with vertex internal angles θ. Construction An equiangular polygon can be constructed from a regular polygon or regular star polygon where edges are extended as infinite lines. Each edge can be independently moved perpendicular to the line's direction. Vertices represent the intersection point between pairs of neighboring lines. Each moved line adjusts its edge-length and the lengths of its two neighboring edges. If edges are reduced to zero length, the polygon becomes degenerate, or if reduced to negative lengths, this will reverse the internal and external angles. For an even-sided direct equiangular polygon, with internal angles θ°, moving alternate edges can invert all vertices into supplementary angles, 180−θ°. Odd-sided direct equiangular polygons can only be partially inverted, leaving a mixture of supplementary angles. Every equiangular polygon can be adjusted in proportions by this construction and still preserve equiangular status. Equiangular polygon theorem For a convex equiangular p-gon, each internal angle is 180(1−2/p)°; this is the equiangular polygon theorem. For a direct equiangular p/q star polygon, density q, each internal angle is 180(1−2q/p)°, with 1 < 2q < p. For w = gcd(p,q) > 1, this represents a w-wound (p/w)/(q/w) star polygon, which is degenerate for the regular case. A concave indirect equiangular (pr+pl)-gon, with pr right turn vertices and pl left turn vertices, will have internal angles of 180(1−2/|pr−pl|)°, regardless of their sequence. An indirect star equiangular (pr+pl)-gon, with pr right turn vertices and pl left turn vertices and q total turns, will have internal angles of 180(1−2q/|pr−pl|)°, regardless of their sequence. An equiangular polygon with the same number of right and left turns has zero total turns, and has no constraints on its angles. Notation Every direct equiangular p-gon can be given a notation <p> or <p/q>, like regular polygons {p} and regular star polygons {p/q}, containing p vertices, and stars having density q. Convex equiangular p-gons <p> have internal angles 180(1−2/p)°, while direct star equiangular polygons, <p/q>, have internal angles 180(1−2q/p)°
https://en.wikipedia.org/wiki/Kummer%E2%80%93Vandiver%20conjecture
In mathematics, the Kummer–Vandiver conjecture, or Vandiver conjecture, states that a prime p does not divide the class number h_K of the maximal real subfield K of the p-th cyclotomic field. The conjecture was first made by Ernst Kummer on 28 December 1849 and 24 April 1853 in letters to Leopold Kronecker, reprinted in his collected papers, and independently rediscovered around 1920 by Philipp Furtwängler and Harry Vandiver. As of 2011, there is no particularly strong evidence either for or against the conjecture and it is unclear whether it is true or false, though it is likely that counterexamples are very rare. Background The class number h of the p-th cyclotomic field is a product of two integers h_1 and h_2, called the first and second factors of the class number, where h_2 is the class number of the maximal real subfield of the p-th cyclotomic field. The first factor h_1 is well understood and can be computed easily in terms of Bernoulli numbers, and is usually rather large. The second factor h_2 is not well understood and is hard to compute explicitly, and in the cases when it has been computed it is usually small. Kummer showed that if a prime p does not divide the class number h, then Fermat's Last Theorem holds for exponent p. The Kummer–Vandiver conjecture states that p does not divide the second factor h_2. Kummer showed that if p divides the second factor, then it also divides the first factor. In particular the Kummer–Vandiver conjecture holds for regular primes (those for which p does not divide the first factor). Evidence for and against the Kummer–Vandiver conjecture Kummer verified the Kummer–Vandiver conjecture for p less than 200, and Vandiver extended this to p less than 600. Later computations verified it for p < 12 million, extended this to primes less than 163 million, and then to primes less than 2^31. Washington describes an informal probability argument, based on rather dubious assumptions about the equidistribution of class numbers mod p, suggesting that the number of primes less than x that are exceptions to the Kummer–Vandiver conjecture might grow like (1/2) log log x. This grows extremely slowly, and suggests that the computer calculations do not provide much evidence for Vandiver's conjecture: for example, the probability argument (combined with the calculations for small primes) suggests that one should only expect about 1 counterexample in the first 10^100 primes, suggesting that it is unlikely any counterexample will be found by further brute force searches even if there are an infinite number of exceptions. Conjectural calculations of the class numbers of real cyclotomic fields for primes up to 10000 strongly suggest that the class numbers are not randomly distributed mod p. They tend to be quite small and are often just 1. For example, assuming the generalized Riemann hypothesis, the class number of the real cyclotomic field for the prime p is 1 for p < 163, and divisible by 4 for p = 163. This suggests that Washington's informal probability argument
https://en.wikipedia.org/wiki/Monte%20Carlo%20N-Particle%20Transport%20Code
Monte Carlo N-Particle Transport (MCNP) is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation transport code designed to track many particle types over broad ranges of energies and is developed by Los Alamos National Laboratory. Specific areas of application include, but are not limited to, radiation protection and dosimetry, radiation shielding, radiography, medical physics, nuclear criticality safety, detector design and analysis, nuclear oil well logging, accelerator target design, fission and fusion reactor design, and decontamination and decommissioning. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Point-wise cross section data are typically used, although group-wise data also are available. For neutrons, all reactions given in a particular cross-section evaluation (such as ENDF/B-VI) are accounted for. Thermal neutrons are described by both the free gas and S(α,β) models. For photons, the code accounts for incoherent and coherent scattering, the possibility of fluorescent emission after photoelectric absorption, absorption in pair production with local emission of annihilation radiation, and bremsstrahlung. A continuous-slowing-down model is used for electron transport that includes positrons, K x-rays, and bremsstrahlung but does not include external or self-induced fields. Important standard features that make MCNP very versatile and easy to use include a powerful general source, criticality source, and surface source; both geometry and output tally plotters; a rich collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data. MCNP contains numerous flexible tallies: surface current & flux, volume flux (track length), point or ring detectors, particle heating, fission heating, pulse height tally for energy or charge deposition, mesh tallies, and radiography tallies. The key value MCNP provides is a predictive capability that can replace expensive or impossible-to-perform experiments. It is often used to design large-scale measurements, providing significant time and cost savings to the community. LANL's latest version of the MCNP code, version 6.2, represents one piece of a set of synergistic capabilities each developed at LANL; it includes evaluated nuclear data (ENDF) and the data processing code, NJOY. The international user community's high confidence in MCNP's predictive capabilities is based on its performance with verification and validation test suites, comparisons to its predecessor codes, automated testing, underlying high-quality nuclear and atomic databases and significant testing by its users. History The Monte Carlo method for radiation particle transport has its origins at LANL, dating back to 1946. The creators of these methods were Drs. Stanislaw Ulam, John von Neumann, and Robert Richtmyer
https://en.wikipedia.org/wiki/Farkas%27%20lemma
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming. Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements. Generalizations of Farkas' lemma concern the solvability theorem for convex inequalities, i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": a theorem stating that exactly one of two systems has a solution. Statement of the lemma There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951). Here, the notation means that all components of the vector are nonnegative. Example Let , and The lemma says that exactly one of the following two statements must be true (depending on and ): There exist , such that and , or There exist such that , , and . Here is a proof of the lemma in this special case: If and , then option 1 is true, since the solution of the linear equations is and Option 2 is false, since so if the right-hand side is positive, the left-hand side must be positive too. Otherwise, option 1 is false, since the unique solution of the linear equations is not weakly positive. But in this case, option 2 is true: If , then we can take e.g. and . If , then, for some number , , so: Thus we can take, for example, , . Geometric interpretation Consider the closed convex cone spanned by the columns of ; that is, Observe that is the set of the vectors for which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vector in the second assertion is orthogonal to a hyperplane that separates and The lemma follows from the observation that belongs to if and only if there is no hyperplane that separates it from More precisely, let denote the columns of . In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true: There exist non-negative coefficients such that There exists a vector such that for and The sums with nonnegative coefficients form the cone spanned by the columns of . Therefore, the first statement tells that belongs to The second statement tells that there exists a vector such that the angle of with the vectors is at most 90°, while the angle of with the vector is more than 90°. The hyperplane
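Since the numerical example above is elided, here is a hedged Python sketch with made-up data (A and b are ours, not the article's) showing how the two alternatives can be checked with an off-the-shelf LP solver; it assumes scipy is available:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # hypothetical data
b = np.array([5.0, 6.0])

# Alternative 1: does A x = b have a solution with x >= 0?
primal = linprog(c=np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

if primal.status == 0:
    print("alternative 1 holds, x =", primal.x)
else:
    # Alternative 2: find y with A^T y >= 0 and b^T y < 0.
    # The box bounds keep the LP bounded; any feasible y with b.y < 0 is a certificate.
    dual = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(2), bounds=[(-1, 1)] * 2)
    print("alternative 2 holds, y =", dual.x, "with b.y =", dual.fun)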
https://en.wikipedia.org/wiki/Singly%20and%20doubly%20even
In mathematics an even integer, that is, a number that is divisible by 2, is called evenly even or doubly even if it is a multiple of 4, and oddly even or singly even if it is not. The former names are traditional ones, derived from ancient Greek mathematics; the latter have become common in recent decades. These names reflect a basic concept in number theory, the 2-order of an integer: how many times the integer can be divided by 2. This is equivalent to the multiplicity of 2 in the prime factorization. A singly even number can be divided by 2 only once; it is even but its quotient by 2 is odd. A doubly even number is an integer that is divisible more than once by 2; it is even and its quotient by 2 is also even. The separate consideration of oddly and evenly even numbers is useful in many parts of mathematics, especially in number theory, combinatorics, coding theory (see even codes), among others. Definitions The ancient Greek terms "even-times-even" () and "even-times-odd" ( or ) were given various inequivalent definitions by Euclid and later writers such as Nicomachus. Today, there is a standard development of the concepts. The 2-order or 2-adic order is simply a special case of the p-adic order at a general prime number p; see p-adic number for more on this broad area of mathematics. Many of the following definitions generalize directly to other primes. For an integer n, the 2-order of n (also called valuation) is the largest natural number ν such that 2^ν divides n. This definition applies to positive and negative numbers n, although some authors restrict it to positive n; and one may define the 2-order of 0 to be infinity (see also parity of zero). The 2-order of n is written ν2(n) or ord2(n). It is not to be confused with the multiplicative order modulo 2. The 2-order provides a unified description of various classes of integers defined by evenness: Odd numbers are those with ν2(n) = 0, i.e., integers of the form 2k + 1. Even numbers are those with ν2(n) > 0, i.e., integers of the form 2k. In particular: Singly even numbers are those with ν2(n) = 1, i.e., integers of the form 4k + 2. Doubly even numbers are those with ν2(n) > 1, i.e., integers of the form 4k. In this terminology, a doubly even number may or may not be divisible by 8, so there is no particular terminology for "triply even" numbers in pure math, although such terms, along with higher multiples such as "quadruply even", are used in some children's teaching materials. One can also extend the 2-order to the rational numbers by defining ν2(q) to be the unique integer ν where q = 2^ν·(a/b) and a and b are both odd. For example, half-integers have a negative 2-order, namely −1. Finally, by defining the 2-adic absolute value |q|2 = 2^(−ν2(q)), one is well on the way to constructing the 2-adic numbers. Applications Safer outs in darts The object of the game of darts is to reach a score of 0, so the player with the smaller score is in a better position to win. At the beginning of a leg, "smaller" has the usual meaning of absolute value,
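The 2-order is straightforward to compute; a minimal Python sketch (the function name is ours):

def nu2(n):
    # 2-adic order of a nonzero integer n: how many times 2 divides n.
    if n == 0:
        raise ValueError("nu2(0) is infinite by convention")
    n, v = abs(n), 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

print([(n, nu2(n)) for n in (1, 2, 4, 6, 12)])
# v = 0: odd; v = 1: singly even; v >= 2: doubly even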
https://en.wikipedia.org/wiki/Max%20Noether
Max Noether (24 September 1844 – 13 December 1921) was a German mathematician who worked on algebraic geometry and the theory of algebraic functions. He has been called "one of the finest mathematicians of the nineteenth century". He was the father of Emmy Noether. Biography Max Noether was born in Mannheim in 1844, to a Jewish family of wealthy wholesale hardware dealers. His grandfather, Elias Samuel, had started the business in Bruchsal in 1797. In 1809 the Grand Duchy of Baden established a "Tolerance Edict", which assigned a hereditary surname to the male head of every Jewish family which did not already possess one. Thus the Samuels became the Noether family, and as part of this Christianization of names, their son Hertz (Max's father) became Hermann. Max was the third of five children Hermann had with his wife Amalia Würzburger. At 14, Max contracted polio and was afflicted by its effects for the rest of his life. Through self-study, he learned advanced mathematics and entered the University of Heidelberg in 1865. He served on the faculty there for several years, then moved to the University of Erlangen in 1888. While there, he helped to found the field of algebraic geometry. In 1880 he married Ida Amalia Kaufmann, the daughter of another wealthy Jewish merchant family. Two years later they had their first child, named Amalia ("Emmy") after her mother. Emmy Noether went on to become a central figure in abstract algebra. In 1883 they had a son named Alfred, who later studied chemistry before dying in 1918. Their third child, Fritz Noether, was born in 1884, and like Emmy, found prominence as a mathematician; he was executed in the Soviet Union in 1941. Little is known about their fourth child, Gustav Robert, born in 1889; he suffered from continual illness and died in 1928. Noether served as an Ordinarius (full professor) at Erlangen for many years, and died there on 13 December 1921. Work on algebraic geometry Brill and Max Noether developed alternative proofs using algebraic methods for much of Riemann's work on Riemann surfaces. Brill–Noether theory went further by estimating the dimension of the space of maps of given degree d from an algebraic curve to projective space Pn. In birational geometry, Noether introduced the fundamental technique of blowing up in order to prove resolution of singularities for plane curves. Noether made major contributions to the theory of algebraic surfaces. Noether's formula is the first case of the Riemann-Roch theorem for surfaces. The Noether inequality is one of the main restrictions on the possible discrete invariants of a surface. The Noether-Lefschetz theorem (proved by Lefschetz) says that the Picard group of a very general surface of degree at least 4 in P3 is generated by the restriction of the line bundle O(1). Noether and Castelnuovo showed that the Cremona group of birational automorphisms of the complex projective plane is generated by the "quadratic transformation" [x,y,z] ↦ [1/x, 1/y, 1/z] together with the projective linear transformations.
https://en.wikipedia.org/wiki/Residuated%20lattice
In abstract algebra, a residuated lattice is an algebraic structure that is simultaneously a lattice x ≤ y and a monoid x•y which admits operations x\z and z/y, loosely analogous to division or implication, when x•y is viewed as multiplication or conjunction, respectively. Called respectively right and left residuals, these operations coincide when the monoid is commutative. The general concept was introduced by Morgan Ward and Robert P. Dilworth in 1939. Examples, some of which existed prior to the general concept, include Boolean algebras, Heyting algebras, residuated Boolean algebras, relation algebras, and MV-algebras. Residuated semilattices omit the meet operation ∧, for example Kleene algebras and action algebras. Definition In mathematics, a residuated lattice is an algebraic structure such that (i) (L, ≤) is a lattice. (ii) (L, •, 1) is a monoid. (iii) For all z there exists for every x a greatest y, and for every y a greatest x, such that x•y ≤ z (the residuation properties). In (iii), the "greatest y", being a function of z and x, is denoted x\z and called the right residual of z by x. Think of it as what remains of z on the right after "dividing" z on the left by x. Dually, the "greatest x" is denoted z/y and called the left residual of z by y. An equivalent, more formal statement of (iii) that uses these operations to name these greatest values is (iii)' for all x, y, z in L,   y ≤ x\z   ⇔   x•y ≤ z   ⇔   x ≤ z/y. As suggested by the notation, the residuals are a form of quotient. More precisely, for a given x in L, the unary operations x• and x\ are respectively the lower and upper adjoints of a Galois connection on L, and dually for the two functions •y and /y. By the same reasoning that applies to any Galois connection, we have yet another definition of the residuals, namely, x•(x\y) ≤ y ≤ x\(x•y), and (y/x)•x ≤ y ≤ (y•x)/x, together with the requirement that x•y be monotone in x and y. (When axiomatized using (iii) or (iii)' monotonicity becomes a theorem and hence not required in the axiomatization.) These give a sense in which the functions x• and x\ are pseudoinverses or adjoints of each other, and likewise for •y and /y. This last definition is purely in terms of inequalities, noting that monotonicity can be axiomatized as x•y ≤ (x ∨ z)•y and similarly for the other operations and their arguments. Moreover, any inequality x ≤ y can be expressed equivalently as an equation, either x ∧ y = x or x ∨ y = y. This along with the equations axiomatizing lattices and monoids then yields a purely equational definition of residuated lattices, provided the requisite operations are adjoined to the signature, thereby expanding it to (L, ∧, ∨, •, 1, \, /). When thus organized, residuated lattices form an equational class or variety, whose homomorphisms respect the residuals as well as the lattice and monoid operations. Note that distributivity and x•0 = 0 are consequences of these axioms and so do not need to be made part of the definition. This necessary distributivity of • over ∨
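A concrete finite instance, sketched in Python (the example is ours, not from the text): on the three-element chain with x•y = min(x, y), the residual x\z is the greatest y with min(x, y) ≤ z, and the residuation property can be checked exhaustively.

CHAIN = (0.0, 0.5, 1.0)

def residual(x, z):
    # Greatest y in the chain with min(x, y) <= z; equals 1 if x <= z, else z.
    return max(y for y in CHAIN if min(x, y) <= z)

# Residuation property (iii)': y <= x\z  iff  x*y <= z, checked over all triples.
assert all((y <= residual(x, z)) == (min(x, y) <= z)
           for x in CHAIN for y in CHAIN for z in CHAIN)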
https://en.wikipedia.org/wiki/MV-algebra
In abstract algebra, a branch of pure mathematics, an MV-algebra is an algebraic structure with a binary operation ⊕, a unary operation ¬, and the constant 0, satisfying certain axioms. MV-algebras are the algebraic semantics of Łukasiewicz logic; the letters MV refer to the many-valued logic of Łukasiewicz. MV-algebras coincide with the class of bounded commutative BCK algebras. Definitions An MV-algebra is an algebraic structure consisting of a non-empty set A, a binary operation ⊕ on A, a unary operation ¬ on A, and a constant 0 denoting a fixed element of A, which satisfies the following identities: (x ⊕ y) ⊕ z = x ⊕ (y ⊕ z), x ⊕ 0 = x, x ⊕ y = y ⊕ x, ¬¬x = x, x ⊕ ¬0 = ¬0, and ¬(¬x ⊕ y) ⊕ y = ¬(¬y ⊕ x) ⊕ x. By virtue of the first three axioms, (A, ⊕, 0) is a commutative monoid. Being defined by identities, MV-algebras form a variety of algebras. The variety of MV-algebras is a subvariety of the variety of BL-algebras and contains all Boolean algebras. An MV-algebra can equivalently be defined (Hájek 1998) as a prelinear commutative bounded integral residuated lattice satisfying the additional identity x ∨ y = (x → y) → y. Examples of MV-algebras A simple numerical example is the real unit interval [0, 1] with operations x ⊕ y = min(1, x + y) and ¬x = 1 − x. In mathematical fuzzy logic, this MV-algebra is called the standard MV-algebra, as it forms the standard real-valued semantics of Łukasiewicz logic. The trivial MV-algebra has the only element 0 and the operations defined in the only possible way, 0 ⊕ 0 = 0 and ¬0 = 0. The two-element MV-algebra is actually the two-element Boolean algebra, with ⊕ coinciding with Boolean disjunction and ¬ with Boolean negation. In fact adding the axiom x ⊕ x = x to the axioms defining an MV-algebra results in an axiomatization of Boolean algebras. If instead the axiom added is x ⊕ x ⊕ x = x ⊕ x, then the axioms define the MV3 algebra corresponding to the three-valued Łukasiewicz logic Ł3. Other finite linearly ordered MV-algebras are obtained by restricting the universe and operations of the standard MV-algebra to the set of equidistant real numbers between 0 and 1 (both included), that is, the set {0, 1/(n − 1), 2/(n − 1), ..., 1}, which is closed under the operations ⊕ and ¬ of the standard MV-algebra; these algebras are usually denoted MVn. Another important example is Chang's MV-algebra, consisting just of infinitesimals (with the order type ω) and their co-infinitesimals. Chang also constructed an MV-algebra from an arbitrary totally ordered abelian group G by fixing a positive element u and defining the segment [0, u] as { x ∈ G | 0 ≤ x ≤ u }, which becomes an MV-algebra with x ⊕ y = min(u, x + y) and ¬x = u − x. Furthermore, Chang showed that every linearly ordered MV-algebra is isomorphic to an MV-algebra constructed from a group in this way. Daniele Mundici extended the above construction to abelian lattice-ordered groups. If G is such a group with strong (order) unit u, then the "unit interval" { x ∈ G | 0 ≤ x ≤ u } can be equipped with ¬x = u − x, x ⊕ y = u ∧G (x + y), and x ⊗ y = 0 ∨G (x + y − u). This construction establishes a categorical equivalence between lattice-ordered abelian groups with strong unit and MV-algebras. An effect algebra that is lattice-ordered
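A small Python sketch (ours) checking the characteristic identity ¬(¬x ⊕ y) ⊕ y = ¬(¬y ⊕ x) ⊕ x on the finite algebra MV5, using exact rational arithmetic:

from fractions import Fraction

n = 5
L = [Fraction(i, n - 1) for i in range(n)]   # MV5: {0, 1/4, 1/2, 3/4, 1}

def oplus(x, y):
    return min(Fraction(1), x + y)

def neg(x):
    return 1 - x

assert all(oplus(neg(oplus(neg(x), y)), y) == oplus(neg(oplus(neg(y), x)), x)
           for x in L for y in L)   # both sides equal max(x, y) here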
https://en.wikipedia.org/wiki/Exact%20test
In statistics, an exact (significance) test is a test such that if the null hypothesis is true, then all assumptions made during the derivation of the distribution of the test statistic are met. Using an exact test provides a significance test that maintains the type I error rate of the test (α) at the desired significance level of the test. For example, an exact test at a significance level of 5%, when repeated over many samples where the null hypothesis is true, will reject at most 5% of the time. This is in contrast to an approximate test in which the desired type I error rate is only approximately maintained (i.e., the test might reject > 5% of the time), while this approximation may be made as close to 5% as desired by making the sample size sufficiently large. Exact tests that are based on discrete test statistics may be conservative, indicating that the actual rejection rate lies below the nominal significance level α. As an example, this is the case for Fisher's exact test and its more powerful alternative, Boschloo's test. If the test statistic is continuous, it will reach the significance level exactly. Parametric tests, such as those used in exact statistics, are exact tests when the parametric assumptions are fully met, but in practice, the use of the term exact (significance) test is reserved for non-parametric tests, i.e., tests that do not rest on parametric assumptions. However, in practice, most implementations of non-parametric test software use asymptotical algorithms to obtain the significance value, which renders the test non-exact. Hence, when a result of statistical analysis is termed an “exact test” or specifies an “exact p-value”, this implies that the test is defined without parametric assumptions and is evaluated without making use of approximate algorithms. In principle, however, this could also signify that a parametric test has been employed in a situation where all parametric assumptions are fully met, but it is in most cases impossible to prove this completely in a real-world situation. Exceptions in which it is certain that parametric tests are exact include tests based on the binomial or Poisson distributions. The term permutation test is sometimes used as a synonym for exact test, but it should be kept in mind that all permutation tests are exact tests, but not all exact tests are permutation tests. Formulation The basic equation underlying exact tests is p = ∑ Pr(y), the sum taken over all y with T(y) ≥ T(x), where: x is the actual observed outcome, Pr(y) is the probability under the null hypothesis of a potentially observed outcome y, T(y) is the value of the test statistic for an outcome y, with larger values of T representing cases which notionally represent greater departures from the null hypothesis, and where the sum ranges over all outcomes y (including the observed one) that have the same value of the test statistic obtained for the observed sample x, or a larger one. Example: Pearson's chi-squared test versus an exact test A simple example of this concept
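The defining sum can be evaluated directly for a binomial model, where exactness is uncontroversial; a minimal Python sketch (the function name is ours):

from math import comb

def exact_binom_p(x, n, p0=0.5):
    # One-sided exact p-value: sum of Pr(y) over outcomes y with T(y) >= T(x),
    # taking T(y) = y, the number of successes under H0: success probability p0.
    pr = lambda y: comb(n, y) * p0**y * (1 - p0)**(n - y)
    return sum(pr(y) for y in range(n + 1) if y >= x)

print(exact_binom_p(8, 10))  # P(8 or more heads in 10 fair tosses) = 56/1024, about 0.0547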
https://en.wikipedia.org/wiki/Stochastic%20drift
In probability theory, stochastic drift is the change of the average value of a stochastic (random) process. A related concept is the drift rate, which is the rate at which the average changes. For example, a process that counts the number of heads in a series of fair coin tosses has a drift rate of 1/2 per toss. This is in contrast to the random fluctuations about this average value. The stochastic mean of that coin-toss process is 1/2 and the drift rate of the stochastic mean is 0, assuming 1 = heads and 0 = tails. Stochastic drifts in population studies Longitudinal studies of secular events are frequently conceptualized as consisting of a trend component fitted by a polynomial, a cyclical component often fitted by an analysis based on autocorrelations or on a Fourier series, and a random component (stochastic drift) to be removed. In the course of the time series analysis, identification of cyclical and stochastic drift components is often attempted by alternating autocorrelation analysis and differencing of the trend. Autocorrelation analysis helps to identify the correct phase of the fitted model while the successive differencing transforms the stochastic drift component into white noise. Stochastic drift can also occur in population genetics where it is known as genetic drift. A finite population of randomly reproducing organisms would experience changes from generation to generation in the frequencies of the different genotypes. This may lead to the fixation of one of the genotypes, and even the emergence of a new species. In sufficiently small populations, drift can also neutralize the effect of deterministic natural selection on the population. Stochastic drift in economics and finance Time series variables in economics and finance — for example, stock prices, gross domestic product, etc. — generally evolve stochastically and frequently are non-stationary. They are typically modelled as either trend-stationary or difference stationary. A trend stationary process {yt} evolves according to yt = f(t) + et, where t is time, f is a deterministic function, and et is a zero-long-run-mean stationary random variable. In this case the stochastic term is stationary and hence there is no stochastic drift, though the time series itself may drift with no fixed long-run mean due to the deterministic component f(t) not having a fixed long-run mean. This non-stochastic drift can be removed from the data by regressing yt on t using a functional form coinciding with that of f, and retaining the stationary residuals. In contrast, a unit root (difference stationary) process evolves according to yt = c + yt−1 + ut, where ut is a zero-long-run-mean stationary random variable; here c is a non-stochastic drift parameter: even in the absence of the random shocks ut, the mean of y would change by c per period. In this case the non-stationarity can be removed from the data by first differencing, and the differenced variable will have a long-run mean of c and hence no drift. But even in t
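A short simulation sketch in Python contrasting the two cases (parameter values are made up):

import numpy as np

rng = np.random.default_rng(0)
T, c = 200, 0.1
e = rng.normal(size=T)

trend_stationary = 0.1 * np.arange(T) + e   # yt = f(t) + et with f(t) = 0.1 t
unit_root = np.cumsum(c + e)                # yt = c + yt-1 + ut: a random walk with drift

detrended = trend_stationary - 0.1 * np.arange(T)  # remove the deterministic drift
differenced = np.diff(unit_root)                   # first difference; mean is about c
print(detrended.mean(), differenced.mean())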
https://en.wikipedia.org/wiki/Brunnian%20link
In knot theory, a branch of topology, a Brunnian link is a nontrivial link that becomes a set of trivial unlinked circles if any one component is removed. In other words, cutting any loop frees all the other loops (so that no two loops can be directly linked). The name Brunnian is after Hermann Brunn. Brunn's 1892 article Über Verkettung included examples of such links. Examples The best-known and simplest possible Brunnian link is the Borromean rings, a link of three unknots. However, for every number three or above, there are infinitely many links with the Brunnian property containing that number of loops. Here are some relatively simple three-component Brunnian links which are not the same as the Borromean rings: The simplest Brunnian link other than the 6-crossing Borromean rings is presumably the 10-crossing L10a140 link. An example of an n-component Brunnian link is given by the "rubberband" Brunnian links, where each component is looped around the next as aba⁻¹b⁻¹, with the last looping around the first, forming a circle. In 2020, new and much more complicated Brunnian links were discovered using highly flexible geometric-topology methods, far more than had previously been constructed. Non-circularity It is impossible for a Brunnian link to be constructed from geometric circles. Somewhat more generally, if a link has the property that each component is a circle and no two components are linked, then it is trivial. The proof, by Michael Freedman and Richard Skora, embeds the three-dimensional space containing the link as the boundary of a Poincaré ball model of four-dimensional hyperbolic space, and considers the hyperbolic convex hulls of the circles. These are two-dimensional subspaces of the hyperbolic space, and their intersection patterns reflect the pairwise linking of the circles: if two circles are linked, then their hulls have a point of intersection, but with the assumption that pairs of circles are unlinked, the hulls are disjoint. Taking cross-sections of the Poincaré ball by concentric three-dimensional spheres, the intersection of each sphere with the hulls of the circles is again a link made out of circles, and this family of cross-sections provides a continuous motion of all of the circles that shrinks each of them to a point without crossing any of the others. Classification Brunnian links were classified up to link-homotopy by John Milnor in 1954, and the invariants he introduced are now called Milnor invariants. An (n + 1)-component Brunnian link can be thought of as an element of the link group – which in this case (but not in general) is the fundamental group of the link complement – of the n-component unlink, since by Brunnianness removing the last link unlinks the others. The link group of the n-component unlink is the free group on n generators, Fn, as the link group of a single link is the knot group of the unknot, which is the integers, and the link group of an unlinked union
https://en.wikipedia.org/wiki/Bispectrum
In mathematics, in the area of statistical analysis, the bispectrum is a statistic used to search for nonlinear interactions. Definitions The Fourier transform of the second-order cumulant, i.e., the autocorrelation function, is the traditional power spectrum. The Fourier transform of C3(t1, t2) (third-order cumulant) is called the bispectrum or bispectral density. Calculation Applying the convolution theorem allows fast calculation of the bispectrum: B(f1, f2) = X(f1)·X(f2)·X*(f1 + f2), where X denotes the Fourier transform of the signal, and X* its conjugate. Applications Bispectrum and bicoherence may be applied to the case of non-linear interactions of a continuous spectrum of propagating waves in one dimension. Bispectral measurements have been carried out for EEG signals monitoring. It was also shown that bispectra characterize differences between families of musical instruments. In seismology, signals rarely have adequate duration for making sensible bispectral estimates from time averages. Bispectral analysis describes observations made at two wavelengths. It is often used by scientists to analyze elemental makeup of a planetary atmosphere by analyzing the amount of light reflected and received through various color filters. By combining and removing two filters, much can be gleaned from only two filters. Through modern computerized interpolation, a third virtual filter can be created to recreate true color photographs that, while not particularly useful for scientific analysis, are popular for public display in textbooks and fund raising campaigns. Bispectral analysis can also be used to analyze interactions between wave patterns and tides on Earth. A form of bispectral analysis called the bispectral index is applied to EEG waveforms to monitor depth of anesthesia. Biphase (phase of polyspectrum) can be used for detection of phase couplings, noise reduction in polyharmonic (particularly, speech) signal analysis. A physical interpretation The bispectrum reflects the energy budget of interactions, as it can be interpreted as a covariance defined between energy-supplying and energy-receiving parties of waves involved in a nonlinear interaction. On the other hand, bicoherence has been proven to be the corresponding correlation coefficient. Just as correlation cannot sufficiently demonstrate the presence of causality, spectrum and bicoherence also cannot sufficiently substantiate the existence of a nonlinear interaction. Generalizations Bispectra fall in the category of higher-order spectra, or polyspectra, and provide supplementary information to the power spectrum. The third order polyspectrum (bispectrum) is the easiest to compute, and hence the most popular. A statistic defined analogously is the bispectral coherency or bicoherence. Trispectrum The Fourier transform of C4 (t1, t2, t3) (fourth-order cumulant) is called the trispectrum or trispectral density. The trispectrum T(f1,f2,f3) falls into the category of higher-order spectra.
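A direct single-segment estimate of the triple product above, sketched in Python with numpy; the coupled test signal is ours:

import numpy as np

def bispectrum(x, f1, f2):
    # B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)) from one FFT of the signal.
    X = np.fft.fft(x)
    return X[f1] * X[f2] * np.conj(X[f1 + f2])

t = np.arange(256)
x = np.cos(2 * np.pi * 5 * t / 256) + np.cos(2 * np.pi * 9 * t / 256)
x_coupled = x + 0.5 * np.cos(2 * np.pi * 14 * t / 256)  # 14 = 5 + 9: sum frequency present

print(abs(bispectrum(x, 5, 9)))          # ~0: no energy at the sum frequency
print(abs(bispectrum(x_coupled, 5, 9)))  # large: the triple product survives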
https://en.wikipedia.org/wiki/Bicoherence
In mathematics and statistical analysis, bicoherence (also known as bispectral coherency) is a squared normalised version of the bispectrum. The bicoherence takes values bounded between 0 and 1, which makes it a convenient measure for quantifying the extent of phase coupling in a signal. The prefix bi- in bispectrum and bicoherence refers not to two time series xt, yt but rather to two frequencies of a single signal. The bispectrum is a statistic used to search for nonlinear interactions. The Fourier transform of the second-order cumulant, i.e., the autocorrelation function, is the traditional power spectrum. The Fourier transform of C3(t1,t2) (third-order cumulant) is called bispectrum or bispectral density. They fall in the category of Higher Order Spectra, or Polyspectra and provide supplementary information to the power spectrum. The third order polyspectrum (bispectrum) is the easiest to compute, and hence the most popular. The difference from measuring coherence (coherence analysis is an extensively used method to study the correlations in the frequency domain between two simultaneously measured signals) is the need for both input and output measurements, estimating two auto-spectra and one cross spectrum. On the other hand, bicoherence is an auto-quantity, i.e. it can be computed from a single signal. The coherence function provides a quantification of deviations from linearity in the system which lies between the input and output measurement sensors. The bicoherence measures the proportion of the signal energy at any bifrequency that is quadratically phase coupled. It is usually normalized in the range [0, 1], similar to the correlation coefficient and classical (second order) coherence. It was also used for depth of anaesthesia assessment, widely in plasma physics (nonlinear energy transfer), and for the detection of gravitational waves. Bispectrum and bicoherence may be applied to the case of non-linear interactions of a continuous spectrum of propagating waves in one dimension. Bicoherence measurements have been carried out for EEG signals monitoring in sleep, wakefulness and seizures. Definition The bispectrum is defined as the triple product B(f1, f2) = X(f1)·X(f2)·X*(f1 + f2), where B(f1, f2) is the bispectrum evaluated at frequencies f1 and f2, X(f) is the Fourier transform of the signal, and * denotes the complex conjugate. The Fourier transform is a complex quantity, and so is the bispectrum. From complex multiplication, the magnitude of the bispectrum is equal to the product of the magnitudes of each of the frequency components, and the phase of the bispectrum is the sum of the phases of each of the frequency components. Suppose that the three Fourier components X(f1), X(f2) and X(f1 + f2) were perfectly phase locked. Then if the Fourier transform was calculated several times from different parts of the time series, the bispectrum will always have the same value. If we add together all of the bispectra, they will sum without cancelling. On the other hand, suppose that the phases of each of these frequencies were random: the bispectra would then have random phases and tend to cancel when summed.
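A segment-averaged estimator, sketched in Python; this particular normalisation is one common choice among several, and the test signals are ours:

import numpy as np

def bicoherence2(segments, f1, f2):
    # Squared bicoherence: |sum X1 X2 conj(X3)|^2 / (sum |X1 X2|^2 * sum |X3|^2),
    # with X3 at the sum frequency f1 + f2; assumes the denominator is nonzero.
    num, d1, d2 = 0.0, 0.0, 0.0
    for seg in segments:
        X = np.fft.fft(seg)
        t12 = X[f1] * X[f2]
        num += t12 * np.conj(X[f1 + f2])
        d1 += abs(t12) ** 2
        d2 += abs(X[f1 + f2]) ** 2
    return abs(num) ** 2 / (d1 * d2)  # lies in [0, 1]

rng = np.random.default_rng(1)
t = np.arange(64)

def make_seg(coupled):
    p1, p2, p3 = rng.uniform(0, 2 * np.pi, 3)
    s = np.cos(2 * np.pi * 5 * t / 64 + p1) + np.cos(2 * np.pi * 7 * t / 64 + p2)
    return s + np.cos(2 * np.pi * 12 * t / 64 + (p1 + p2 if coupled else p3))

print(bicoherence2([make_seg(True) for _ in range(100)], 5, 7))   # ~1: phase coupled
print(bicoherence2([make_seg(False) for _ in range(100)], 5, 7))  # small: uncoupled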
https://en.wikipedia.org/wiki/Isard
Isard may refer to: Pistol Isard, a Spanish semi-automatic pistol Pyrenean chamois or isard Andorra national rugby union team or Els Isards Isard, an interactive geometry program People with the surname Walter Isard (1919–2010), American economist Fictional Ysanne Isard, a character in the Star Wars franchise See also Isarn (disambiguation) Izard (disambiguation) Izzard (disambiguation)
https://en.wikipedia.org/wiki/Point%20plotting
Point plotting is an elementary mathematical skill required in analytic geometry. Invented by René Descartes and originally used to locate positions on military maps, this skill is now assumed of everyone who wants to locate grid 7A on any map. Using point plotting, one associates an ordered pair of real numbers (x, y) with a point in the plane in a one-to-one manner. As a result, one obtains the 2-dimensional Cartesian coordinate system. To be able to plot points, one needs to first decide on a point in the plane which will be called the origin, and a pair of perpendicular lines, called the x and y axes, as well as a preferred direction on each of the lines. Usually one chooses the x axis pointing right and the y axis pointing up, and these will be named the positive directions. Also, one picks a segment in the plane which is declared to be of unit length. Using rotated versions of this segment, one can measure distances along the x and y axes. Having the origin and the axes in place, given a pair (x, y) of real numbers, one considers the point on the x axis at distance |x| from the origin, along the positive direction if x≥0, and in the other direction otherwise. In the same way one picks the point on the y axis corresponding to the number y. The line parallel to the y axis going through the first point and the line parallel to the x axis going through the second point will intersect at precisely one point, which will be called the point with coordinates (x, y). See also Cartesian coordinate system Graph of a function Elementary mathematics
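The procedure is easy to mirror in code; a minimal Python/matplotlib sketch plotting the point (3, 2) (the example point is ours):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.axhline(0, color="black", linewidth=0.8)  # the x axis
ax.axvline(0, color="black", linewidth=0.8)  # the y axis
ax.plot(3, 2, "o")                           # the point with coordinates (3, 2)
ax.annotate("(3, 2)", (3, 2))
ax.set_xlim(-1, 5)
ax.set_ylim(-1, 4)
plt.show()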
https://en.wikipedia.org/wiki/Max%20Weiss
Miksa (Max) Weisz (21 July 1857 – 14 March 1927) was an Austrian chess player born in the Kingdom of Hungary. Weiss was born in Sereď. Moving to Vienna, he studied mathematics and physics at the university, and later taught those subjects. Weiss learned to play chess at age 12, and his strength increased steadily throughout the 1880s. 1880, Graz, tied with Adolf Schwarz and Johannes von Minckwitz for first prize. 1882, Vienna, tenth, won two games from Johann Zukertort, and drew with Wilhelm Steinitz. 1883, Nuremberg, tenth. 1885, Hamburg, tied with Berthold Englisch and Siegbert Tarrasch for second prize. 1887, Frankfort-on-the-Main, divided second and third prizes with Joseph Henry Blackburne. 1888, Bradford, tied with Blackburne for sixth prize. 1889, New York, (the sixth American Chess Congress), scored +24−4=10 to tie with Mikhail Chigorin for first prize, ahead of Isidor Gunsberg and Blackburne. 1889, Breslau, third prize. 1890, Vienna, first prize, ahead of Johann Bauer and Englisch. The New York 1889 tournament was organized to find a challenger for the World Chess Championship, but neither Chigorin (who had already lost a championship match) nor Weiss pursued a title match with Steinitz. In fact, having become one of the top players in the world, Weiss quit international chess after this tournament, though he did play a few Viennese events. In 1895 he defeated Georg Marco in a match, +5 −1 =1, and he tied for first in the 1895–6 winter tournament with Carl Schlechter. Around this time, Weiss began working to create a Viennese school of chess players. In 1905 Weiss was employed by S M von Rothschild bank in Vienna. His chess writings, Schach-Meistersteich (Mühlhausen 1918), Kleines Schachlehrbuch (Mühlhausen 1920), and the earlier problem collection Caissa Bambergensis (Bamberg 1902), are little remembered today. In 1927 Weiss died in Vienna, Austria. References See also List of Jewish chess players External links 1857 births 1927 deaths Hungarian Jews Austrian Jews Hungarian chess players Austrian chess players Jewish chess players Chess players from Austria-Hungary
https://en.wikipedia.org/wiki/Complex%20differential%20form
In mathematics, a complex differential form is a differential form on a manifold (usually a complex manifold) which is permitted to have complex coefficients. Complex forms have broad applications in differential geometry. On complex manifolds, they are fundamental and serve as the basis for much of algebraic geometry, Kähler geometry, and Hodge theory. Over non-complex manifolds, they also play a role in the study of almost complex structures, the theory of spinors, and CR structures. Typically, complex forms are considered because of some desirable decomposition that the forms admit. On a complex manifold, for instance, any complex k-form can be decomposed uniquely into a sum of so-called (p, q)-forms: roughly, wedges of p differentials of the holomorphic coordinates with q differentials of their complex conjugates. The ensemble of (p, q)-forms becomes the primitive object of study, and determines a finer geometrical structure on the manifold than the k-forms. Even finer structures exist, for example, in cases where Hodge theory applies. Differential forms on a complex manifold Suppose that M is a complex manifold of complex dimension n. Then there is a local coordinate system consisting of n complex-valued functions z1, ..., zn such that the coordinate transitions from one patch to another are holomorphic functions of these variables. The space of complex forms carries a rich structure, depending fundamentally on the fact that these transition functions are holomorphic, rather than just smooth. One-forms We begin with the case of one-forms. First decompose the complex coordinates into their real and imaginary parts: zj = xj + iyj for each j. Letting dzj = dxj + i dyj and dz̄j = dxj − i dyj, one sees that any differential form with complex coefficients can be written uniquely as a sum ∑j (fj dzj + gj dz̄j). Let Ω1,0 be the space of complex differential forms containing only dzj's and Ω0,1 be the space of forms containing only dz̄j's. One can show, by the Cauchy–Riemann equations, that the spaces Ω1,0 and Ω0,1 are stable under holomorphic coordinate changes. In other words, if one makes a different choice wi of holomorphic coordinate system, then elements of Ω1,0 transform tensorially, as do elements of Ω0,1. Thus the spaces Ω0,1 and Ω1,0 determine complex vector bundles on the complex manifold. Higher-degree forms The wedge product of complex differential forms is defined in the same way as with real forms. Let p and q be a pair of non-negative integers ≤ n. The space Ωp,q of (p, q)-forms is defined by taking linear combinations of the wedge products of p elements from Ω1,0 and q elements from Ω0,1. Symbolically, Ωp,q = Ω1,0 ∧ ⋯ ∧ Ω1,0 ∧ Ω0,1 ∧ ⋯ ∧ Ω0,1, where there are p factors of Ω1,0 and q factors of Ω0,1. Just as with the two spaces of 1-forms, these are stable under holomorphic changes of coordinates, and so determine vector bundles. If Ek is the space of all complex differential forms of total degree k, then each element of Ek can be expressed in a unique way as a linear combination of elements from among the spaces Ωp,q with p + q = k. More
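As a worked instance of this decomposition (our example, written in LaTeX), the standard area form of the complex plane is, up to a constant, a (1, 1)-form:

dz \wedge d\bar{z} = (dx + i\,dy) \wedge (dx - i\,dy) = -2i\, dx \wedge dy,
\qquad\text{so}\qquad dx \wedge dy = \tfrac{i}{2}\, dz \wedge d\bar{z}.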
https://en.wikipedia.org/wiki/Alternating%20algebra
In mathematics, an alternating algebra is a Z-graded algebra for which xy = (−1)^(deg x · deg y)yx for all nonzero homogeneous elements x and y (i.e. it is an anticommutative algebra) and has the further property that x² = 0 for every homogeneous element x of odd degree. Examples The differential forms on a differentiable manifold form an alternating algebra. The exterior algebra is an alternating algebra. The cohomology ring of a topological space is an alternating algebra. Properties The algebra formed as the direct sum of the homogeneous subspaces of even degree of an anticommutative algebra A is a subalgebra contained in the centre of A, and is thus commutative. An anticommutative algebra over a (commutative) base ring in which 2 is not a zero divisor is alternating. See also Alternating multilinear map Exterior algebra Graded-symmetric algebra References Algebraic geometry
https://en.wikipedia.org/wiki/Supermatrix
In mathematics and theoretical physics, a supermatrix is a Z2-graded analog of an ordinary matrix. Specifically, a supermatrix is a 2×2 block matrix with entries in a superalgebra (or superring). The most important examples are those with entries in a commutative superalgebra (such as a Grassmann algebra) or an ordinary field (thought of as a purely even commutative superalgebra). Supermatrices arise in the study of super linear algebra where they appear as the coordinate representations of linear transformations between finite-dimensional super vector spaces or free supermodules. They have important applications in the field of supersymmetry. Definitions and notation Let R be a fixed superalgebra (assumed to be unital and associative). Often one requires R be supercommutative as well (for essentially the same reasons as in the ungraded case). Let p, q, r, and s be nonnegative integers. A supermatrix of dimension (r|s)×(p|q) is a matrix with entries in R that is partitioned into a 2×2 block structure, X = [X00 X01; X10 X11], with r+s total rows and p+q total columns (so that the submatrix X00 has dimensions r×p and X11 has dimensions s×q). An ordinary (ungraded) matrix can be thought of as a supermatrix for which q and s are both zero. A square supermatrix is one for which (r|s) = (p|q). This means that not only is the unpartitioned matrix X square, but the diagonal blocks X00 and X11 are as well. An even supermatrix is one for which the diagonal blocks (X00 and X11) consist solely of even elements of R (i.e. homogeneous elements of parity 0) and the off-diagonal blocks (X01 and X10) consist solely of odd elements of R. An odd supermatrix is one for which the reverse holds: the diagonal blocks are odd and the off-diagonal blocks are even. If the scalars R are purely even there are no nonzero odd elements, so the even supermatrices are the block diagonal ones and the odd supermatrices are the off-diagonal ones. A supermatrix is homogeneous if it is either even or odd. The parity, |X|, of a nonzero homogeneous supermatrix X is 0 or 1 according to whether it is even or odd. Every supermatrix can be written uniquely as the sum of an even supermatrix and an odd one. Algebraic structure Supermatrices of compatible dimensions can be added or multiplied just as for ordinary matrices. These operations are exactly the same as the ordinary ones with the restriction that they are defined only when the blocks have compatible dimensions. One can also multiply supermatrices by elements of R (on the left or right), however, this operation differs from the ungraded case due to the presence of odd elements in R. Let Mr|s×p|q(R) denote the set of all supermatrices over R with dimension (r|s)×(p|q). This set forms a supermodule over R under supermatrix addition and scalar multiplication. In particular, if R is a superalgebra over a field K then Mr|s×p|q(R) forms a super vector space over K. Let Mp|q(R) denote the set of all square supermatrices over R with dimension (p|q)×(p|q).
https://en.wikipedia.org/wiki/Panel%20data
In statistics and econometrics, panel data and longitudinal data are both multi-dimensional data involving measurements over time. Panel data is a subset of longitudinal data where observations are for the same subjects each time. Time series and cross-sectional data can be thought of as special cases of panel data that are in one dimension only (one panel member or individual for the former, one time point for the latter). A literature search often involves time series, cross-sectional, or panel data. Cross-panel data (CPD) is an innovative yet underappreciated source of information in the mathematical and statistical sciences. CPD stands out from other research methods because it vividly illustrates how independent and dependent variables may shift between countries. This panel data collection allows researchers to examine the connection between variables across several cross-sections and time periods and analyze the results of policy actions in other nations. A study that uses panel data is called a longitudinal study or panel study. Example In the multiple response permutation procedure (MRPP) example above, two datasets with a panel structure are shown and the objective is to test whether there's a significant difference between people in the sample data. Individual characteristics (income, age, sex) are collected for different persons and different years. In the first dataset, two persons (1, 2) are observed every year for three years (2016, 2017, 2018). In the second dataset, three persons (1, 2, 3) are observed two times (person 1), three times (person 2), and one time (person 3), respectively, over three years (2016, 2017, 2018); in particular, person 1 is not observed in year 2018 and person 3 is not observed in 2016 or 2018. A balanced panel (e.g., the first dataset above) is a dataset in which each panel member (i.e., person) is observed every year. Consequently, if a balanced panel contains N panel members and T periods, the number of observations (n) in the dataset is necessarily n = N·T. An unbalanced panel (e.g., the second dataset above) is a dataset in which at least one panel member is not observed every period. Therefore, if an unbalanced panel contains N panel members and T periods, then the following strict inequality holds for the number of observations (n) in the dataset: n < N·T. Both datasets above are structured in the long format, which is where one row holds one observation per time. Another way to structure panel data would be the wide format where one row represents one observational unit for all points in time (for the example, the wide format would have only two (first example) or three (second example) rows of data with additional columns for each time-varying variable (income, age)). Analysis A panel has the form Xit, i = 1, ..., N, t = 1, ..., T, where i is the individual dimension and t is the time dimension. A general panel data regression model is written as yit = α + β′Xit + uit. Different assumptions can be made on the precise structure of this general model. Two important models are the fixed effects model and the random effects model.
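A minimal Python/pandas sketch of the first (balanced) dataset described above, with made-up income values:

import pandas as pd

balanced = pd.DataFrame({
    "person": [1, 1, 1, 2, 2, 2],
    "year":   [2016, 2017, 2018, 2016, 2017, 2018],
    "income": [1300, 1450, 1500, 2000, 2100, 2200],  # hypothetical values
})
# Balanced panel: n = N * T observations.
assert len(balanced) == balanced["person"].nunique() * balanced["year"].nunique()

# The same panel in wide format: one row per person, one column per year.
wide = balanced.pivot(index="person", columns="year", values="income")
print(wide)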
https://en.wikipedia.org/wiki/List%20of%20Zimbabwe%20ODI%20cricketers
This is a list of Zimbabwean One-day International cricketers displaying career statistics for all players that have represented Zimbabwe in at least one One Day International (ODI). An ODI is an international cricket match between two representative teams, each having ODI status, as determined by the International Cricket Council (ICC). An ODI differs from Test matches in that the number of overs per team is limited, and that each team has only one innings. The list is arranged in the order in which each player won his first ODI cap. Where more than one player won his first ODI cap in the same match, those players are listed alphabetically by surname. Key Players Statistics are correct as of 4 July 2023. Notes See also One Day International Zimbabwean cricket team List of Zimbabwe Test cricketers List of Zimbabwe Twenty20 International cricketers Notes External links Howstat Cricinfo References Zimbabwe ODI Zimbabwe
https://en.wikipedia.org/wiki/Yvonne%20John%20Lewis
Yvonne John Lewis (occasionally spelled Yvonne John-Lewis) is a British female lead and backing singer. She is currently teaching mathematics at a secondary school in North London. Hailing from London, she was discovered by Osmond Wright, better known by his stage name "Mozez" and a singer for British downtempo group Zero 7. John Lewis first featured as a lead vocalist on Zero 7's albums, and has gone on to provide lead vocals for and been featured on recordings by artists including Basement Jaxx, Sia, Stella Browne, Narcotic Thrust and Rollercone. She is well known as the featured singer on Narcotic Thrust's number one Billboard Hot Dance Music/Club Play hit from 2002, Safe from Harm. John Lewis has worked as a backing vocalist for artists like Bryan Ferry, Blue, Enrique Iglesias, James Fargas, Westlife and Atomic Kitten. She also provided the vocal sample in Simon Webbe's track, "No Worries". She toured with Roxy Music in 2004. See also List of number-one dance hits (United States) List of artists who reached number one on the US Dance chart References External links Year of birth missing (living people) Living people 21st-century Black British women singers English house musicians
https://en.wikipedia.org/wiki/Kan%20extension
Kan extensions are universal constructs in category theory, a branch of mathematics. They are closely related to adjoints, but are also related to limits and ends. They are named after Daniel M. Kan, who constructed certain (Kan) extensions using limits in 1960. An early use of (what is now known as) a Kan extension from 1956 was in homological algebra to compute derived functors. In Categories for the Working Mathematician Saunders Mac Lane titled a section "All Concepts Are Kan Extensions", and went on to write that The notion of Kan extensions subsumes all the other fundamental concepts of category theory. Kan extensions generalize the notion of extending a function defined on a subset to a function defined on the whole set. The definition, not surprisingly, is at a high level of abstraction. When specialised to posets, it becomes a relatively familiar type of question on constrained optimization. Definition A Kan extension proceeds from the data of three categories and two functors , and comes in two varieties: the "left" Kan extension and the "right" Kan extension of along . The right Kan extension amounts to finding the dashed arrow and the natural transformation in the following diagram: Formally, the right Kan extension of along consists of a functor and a natural transformation that is couniversal with respect to the specification, in the sense that for any functor and natural transformation , a unique natural transformation is defined and fits into a commutative diagram: where is the natural transformation with for any object of The functor R is often written . As with the other universal constructs in category theory, the "left" version of the Kan extension is dual to the "right" one and is obtained by replacing all categories by their opposites. The effect of this on the description above is merely to reverse the direction of the natural transformations. (Recall that a natural transformation between the functors consists of having an arrow for every object of , satisfying a "naturality" property. When we pass to the opposite categories, the source and target of are swapped, causing to act in the opposite direction). This gives rise to the alternate description: the left Kan extension of along consists of a functor and a natural transformation that are universal with respect to this specification, in the sense that for any other functor and natural transformation , a unique natural transformation exists and fits into a commutative diagram: where is the natural transformation with for any object of . The functor L is often written . The use of the word "the" (as in "the left Kan extension") is justified by the fact that, as with all universal constructions, if the object defined exists, then it is unique up to unique isomorphism. In this case, that means that (for left Kan extensions) if are two left Kan extensions of along , and are the corresponding transformations, then there
https://en.wikipedia.org/wiki/Parallel%20projection
In three-dimensional geometry, a parallel projection (or axonometric projection) is a projection of an object in three-dimensional space onto a fixed plane, known as the projection plane or image plane, where the rays, known as lines of sight or projection lines, are parallel to each other. It is a basic tool in descriptive geometry. The projection is called orthographic if the rays are perpendicular (orthogonal) to the image plane, and oblique or skew if they are not. Overview A parallel projection is a particular case of projection in mathematics and graphical projection in technical drawing. Parallel projections can be seen as the limit of a central or perspective projection, in which the rays pass through a fixed point called the center or viewpoint, as this point is moved towards infinity. Put differently, a parallel projection corresponds to a perspective projection with an infinite focal length (the distance between the lens and the focal point in photography) or "zoom". Further, in parallel projections, lines that are parallel in three-dimensional space remain parallel in the two-dimensionally projected image. A perspective projection of an object is often considered more realistic than a parallel projection, since it more closely resembles human vision and photography. However, parallel projections are popular in technical applications, since the parallelism of an object's lines and faces is preserved, and direct measurements can be taken from the image. Among parallel projections, orthographic projections are seen as the most realistic, and are commonly used by engineers. On the other hand, certain types of oblique projections (for instance cavalier projection, military projection) are very simple to implement, and are used to create quick and informal pictorials of objects. The term parallel projection is used in the literature to describe both the procedure itself (a mathematical mapping function) as well as the resulting image produced by the procedure. Properties Every parallel projection has the following properties: It is uniquely defined by its projection plane Π and the direction of the (parallel) projection lines. The direction must not be parallel to the projection plane. Any point of the space has a unique image in the projection plane Π, and the points of Π are fixed. Any line not parallel to the projection direction is mapped onto a line; any line parallel to it is mapped onto a point. Parallel lines are mapped on parallel lines, or on a pair of points (if they are parallel to the projection direction). The ratio of the length of two line segments on a line stays unchanged. As a special case, midpoints are mapped on midpoints. The length of a line segment parallel to the projection plane remains unchanged. The length of any line segment is shortened if the projection is an orthographic one. Any circle that lies in a plane parallel to the projection plane is mapped onto a circle with the same radius. Any other circle is mapped onto an ellipse or a line segment.
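The mapping is a one-liner once the plane and direction are fixed; a Python sketch (the names and the example plane are ours):

import numpy as np

def parallel_project(x, n, d, v):
    # Project point x onto the plane {p : n.p = d} along direction v (requires n.v != 0).
    n, v, x = map(np.asarray, (n, v, x))
    t = (d - n @ x) / (n @ v)
    return x + t * v

print(parallel_project([1.0, 2.0, 5.0], n=[0, 0, 1], d=0.0, v=[0, 0, 1]))  # orthographic: [1. 2. 0.]
print(parallel_project([1.0, 2.0, 5.0], n=[0, 0, 1], d=0.0, v=[1, 0, 1]))  # oblique: [-4. 2. 0.]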
https://en.wikipedia.org/wiki/Nested%20radical
In algebra, a nested radical is a radical expression (one containing a square root sign, cube root sign, etc.) that contains (nests) another radical expression. Examples include √(5 + 2√5), which arises in discussing the regular pentagon, and more complicated ones involving cube roots. Denesting Some nested radicals can be rewritten in a form that is not nested. For example, √(3 + 2√2) = 1 + √2. Another simple example, √(5 + 2√6) = √2 + √3. Rewriting a nested radical in this way is called denesting. This is not always possible, and, even when possible, it is often difficult. Two nested square roots In the case of two nested square roots, the following theorem completely solves the problem of denesting. If a and c are rational numbers and c is not the square of a rational number, there are two rational numbers x and y such that √(a + √c) = √x + √y if and only if a² − c is the square of a rational number d. If the nested radical is real, x and y are the two numbers (a + d)/2 and (a − d)/2, where d = √(a² − c) is a rational number. In particular, if a and c are integers, then 2x and 2y are integers. This result includes denestings of the form √(a + b√c), as b√c may always be written √(b²c), and at least one of the terms must be positive (because the left-hand side of the equation is positive). A more general denesting formula could have the form However, Galois theory implies that either the left-hand side belongs to or it must be obtained by changing the sign of either or both. In the first case, this means that one can take and In the second case, and another coefficient must be zero. If one may rename as for getting Proceeding similarly if it results that one can suppose This shows that the apparently more general denesting can always be reduced to the above one. Proof: By squaring, the equation √(a + √c) = √x ± √y is equivalent with a + √c = x + y ± 2√(xy), and, in the case of a minus in the right-hand side, x ≥ y (square roots are nonnegative by definition of the notation). As the inequality x ≥ y may always be satisfied by possibly exchanging x and y, solving the first equation in x and y is equivalent with solving a + √c = x + y + 2√(xy). This equality implies that √(xy) belongs to the quadratic field Q(√c). In this field every element may be uniquely written α + β√c with α and β being rational numbers. This implies that √(xy) is not rational (otherwise the right-hand side of the equation would be rational; but the left-hand side is irrational). As x and y must be rational, the square of √(xy) must be rational. This implies that α = 0 in the expression of √(xy) as α + β√c. Thus √(xy) = β√c for some rational number β. The uniqueness of the decomposition over 1 and √c implies thus that the considered equation is equivalent with x + y = a and 4xy = c. It follows by Vieta's formulas that x and y must be roots of the quadratic equation z² − az + c/4 = 0; its discriminant a² − c is nonzero (otherwise c would be the square of the rational number a), hence x and y must be (a + d)/2 and (a − d)/2. Thus x and y are rational if and only if d = √(a² − c) is a rational number. For explicitly choosing the various signs, one must consider only positive real square roots, and thus assuming c > 0. The equation a² = c + d² shows that |a| > d. Thus, if the nested radical is real, and if denesting is possible, then a > 0. Then the solution is √(a + √c) = √((a + d)/2) + √((a − d)/2). Some identities of Ramanujan Srinivasa Ramanujan
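The two-square-root criterion is easy to apply mechanically; a Python sketch for integer a and c (the function name is ours):

from math import isqrt

def denest(a, c):
    # Try to write sqrt(a + sqrt(c)) as sqrt(x) + sqrt(y): succeeds iff
    # a^2 - c is a perfect square d^2, giving x, y = (a + d)/2, (a - d)/2.
    r = a * a - c
    if r < 0 or isqrt(r) ** 2 != r:
        return None
    d = isqrt(r)
    return (a + d) / 2, (a - d) / 2

print(denest(3, 8))  # sqrt(3 + sqrt(8)) = sqrt(2) + sqrt(1) = 1 + sqrt(2) -> (2.0, 1.0)
print(denest(2, 2))  # a^2 - c = 2 is not a perfect square -> None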
https://en.wikipedia.org/wiki/Banach%20manifold
In mathematics, a Banach manifold is a manifold modeled on Banach spaces. Thus it is a topological space in which each point has a neighbourhood homeomorphic to an open set in a Banach space (a more involved and formal definition is given below). Banach manifolds are one possibility of extending manifolds to infinite dimensions. A further generalisation is to Fréchet manifolds, replacing Banach spaces by Fréchet spaces. On the other hand, a Hilbert manifold is a special case of a Banach manifold in which the manifold is locally modeled on Hilbert spaces. Definition Let be a set. An atlas of class on is a collection of pairs (called charts) such that each is a subset of and the union of the is the whole of ; each is a bijection from onto an open subset of some Banach space and for any indices is open in the crossover map is an -times continuously differentiable function for every that is, the th Fréchet derivative exists and is a continuous function with respect to the -norm topology on subsets of and the operator norm topology on One can then show that there is a unique topology on such that each is open and each is a homeomorphism. Very often, this topological space is assumed to be a Hausdorff space, but this is not necessary from the point of view of the formal definition. If all the Banach spaces are equal to the same space the atlas is called an -atlas. However, it is not a priori necessary that the Banach spaces be the same space, or even isomorphic as topological vector spaces. However, if two charts and are such that and have a non-empty intersection, a quick examination of the derivative of the crossover map shows that and must indeed be isomorphic as topological vector spaces. Furthermore, the set of points for which there is a chart with in and isomorphic to a given Banach space is both open and closed. Hence, one can without loss of generality assume that, on each connected component of the atlas is an -atlas for some fixed A new chart is called compatible with a given atlas if the crossover map is an -times continuously differentiable function for every Two atlases are called compatible if every chart in one is compatible with the other atlas. Compatibility defines an equivalence relation on the class of all possible atlases on A -manifold structure on is then defined to be a choice of equivalence class of atlases on of class If all the Banach spaces are isomorphic as topological vector spaces (which is guaranteed to be the case if is connected), then an equivalent atlas can be found for which they are all equal to some Banach space is then called an -manifold, or one says that is modeled on Examples Every Banach space can be canonically identified as a Banach manifold. If is a Banach space, then is a Banach manifold with an atlas containing a single, globally-defined chart (the identity map). Similarly, if is an open subset of some Banach space then is a Banac
https://en.wikipedia.org/wiki/Hilbert%20manifold
In mathematics, a Hilbert manifold is a manifold modeled on Hilbert spaces. Thus it is a separable Hausdorff space in which each point has a neighbourhood homeomorphic to an infinite dimensional Hilbert space. The concept of a Hilbert manifold provides a possibility of extending the theory of manifolds to infinite-dimensional setting. Analogously to the finite-dimensional situation, one can define a differentiable Hilbert manifold by considering a maximal atlas in which the transition maps are differentiable. Properties Many basic constructions of the manifold theory, such as the tangent space of a manifold and a tubular neighbourhood of a submanifold (of finite codimension) carry over from the finite dimensional situation to the Hilbert setting with little change. However, in statements involving maps between manifolds, one often has to restrict consideration to Fredholm maps, that is, maps whose differential at every point is Fredholm. The reason for this is that Sard's lemma holds for Fredholm maps, but not in general. Notwithstanding this difference, Hilbert manifolds have several very nice properties. Kuiper's theorem: If is a compact topological space or has the homotopy type of a CW complex then every (real or complex) Hilbert space bundle over is trivial. In particular, every Hilbert manifold is parallelizable. Every smooth Hilbert manifold can be smoothly embedded onto an open subset of the model Hilbert space. Every homotopy equivalence between two Hilbert manifolds is homotopic to a diffeomorphism. In particular every two homotopy equivalent Hilbert manifolds are already diffeomorphic. This stands in contrast to lens spaces and exotic spheres, which demonstrate that in the finite-dimensional situation, homotopy equivalence, homeomorphism, and diffeomorphism of manifolds are distinct properties. Although Sard's Theorem does not hold in general, every continuous map from a Hilbert manifold can be arbitrary closely approximated by a smooth map which has no critical points. Examples Any Hilbert space is a Hilbert manifold with a single global chart given by the identity function on Moreover, since is a vector space, the tangent space to at any point is canonically isomorphic to itself, and so has a natural inner product, the "same" as the one on Thus can be given the structure of a Riemannian manifold with metric where denotes the inner product in Similarly, any open subset of a Hilbert space is a Hilbert manifold and a Riemannian manifold under the same construction as for the whole space. There are several mapping spaces between manifolds which can be viewed as Hilbert spaces by only considering maps of suitable Sobolev class. For example we can consider the space of all maps from the unit circle into a manifold This can be topologized via the compact open topology as a subspace of the space of all continuous mappings from the circle to that is, the free loop space of The Sobolev kind mapping space
https://en.wikipedia.org/wiki/Pseudomonad
Pseudomonad may refer to: Biology a member of: Pseudomonadaceae, the family. Pseudomonas, the genus. Mathematics Pseudomonad (Category Theory), a generalisation of a monad on a category.
https://en.wikipedia.org/wiki/Paul%20Zeitz
Paul Zeitz (born July 5, 1958) is a Professor of Mathematics at the University of San Francisco. He is the author of The Art and Craft of Problem Solving, and a co-author of Statistical Explorations with Excel. Biography In 1974 Paul Zeitz won the USA Mathematical Olympiad (USAMO) and was a member of the first American team to participate in the International Mathematical Olympiad (IMO). The following year he graduated from Stuyvesant High School. He earned a Westinghouse scholarship and graduated from Harvard University in 1981. Since 1985, he has composed and edited problems for several national math contests, including the USAMO. He has helped train several American IMO teams, most notably the 1994 "Dream Team", the first team from any country to score a perfect 252 in the Olympiad. (The only other team to have ever done so was China's 2022 team.) Zeitz founded the Bay Area Math Meet in 1994 and co-founded the Bay Area Mathematical Olympiad in 1999. In 1999 he wrote The Art and Craft of Problem Solving , a popular book on problem solving. In 2003 Zeitz received from the Mathematical Association of America the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics. References Stuyvesant High School alumni 1958 births Living people Harvard University alumni University of San Francisco faculty 20th-century American mathematicians 21st-century American mathematicians International Mathematical Olympiad participants Mathematicians from New York (state)
https://en.wikipedia.org/wiki/Pythagoras%20tree%20%28fractal%29
The Pythagoras tree is a plane fractal constructed from squares. Invented by the Dutch mathematics teacher Albert E. Bosman in 1942, it is named after the ancient Greek mathematician Pythagoras because each triple of touching squares encloses a right triangle, in a configuration traditionally used to depict the Pythagorean theorem. If the largest square has a size of L × L, the entire Pythagoras tree fits snugly inside a box of size 6L × 4L. The finer details of the tree resemble the Lévy C curve. Construction The construction of the Pythagoras tree begins with a square. Upon this square are constructed two squares, each scaled down by a linear factor of √2/2, such that the corners of the squares coincide pairwise. The same procedure is then applied recursively to the two smaller squares, ad infinitum. The illustration below shows the first few iterations in the construction process. This is the simplest symmetric triangle. Alternatively, the sides of the triangle are recursively equal proportions, leading to the sides being proportional to the square root of the inverse golden ratio, and the areas of the squares being in golden ratio proportion. Area Iteration n in the construction adds 2ⁿ squares of area 2⁻ⁿ, for a total area of 1. Thus the area of the tree might seem to grow without bound in the limit as n → ∞. However, some of the squares overlap starting at the order 5 iteration, and the tree actually has a finite area because it fits inside a 6×4 box. It can be shown easily that the area A of the Pythagoras tree must be in the range 5 < A < 18, which can be narrowed down further with extra effort. Little seems to be known about the actual value of A. Varying the angle An interesting set of variations can be constructed by maintaining an isosceles triangle but changing the base angle (90 degrees for the standard Pythagoras tree). In particular, when the base half-angle is set to π/6 (30°) = arcsin(0.5), it is easily seen that the size of the squares remains constant. The first overlap occurs at the fourth iteration. The general pattern produced is the rhombitrihexagonal tiling, an array of hexagons bordered by the constructing squares. In the limit where the half-angle is 90 degrees, there is obviously no overlap, and the total area is twice the area of the base square. History The Pythagoras tree was first constructed by Albert E. Bosman (1891–1961), a Dutch mathematics teacher, in 1942. See also Lévy C curve References External links Gallery of Pythagoras trees Filled Pythagoras Tree using VB6 by Edward Bole (Boleeman) Interactive generator with code Pythagoras Tree by Enrique Zeleny based on a program by Eric W. Weisstein, The Wolfram Demonstrations Project. Three-dimensional Pythagoras tree MatLab script to generate Pythagoras Tree Construction step by step in the virtual reality software Neotrie VR Fractals
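The recursion is straightforward to implement. A small sketch (function name and point ordering are my own choices) that generates the squares of the standard tree, each child's base edge lying on a leg of the right isosceles triangle erected on its parent's top edge:

    import numpy as np

    def pythagoras_tree(a, b, depth):
        """Yield the squares of the tree as arrays of 4 corner points.
        (a, b) is the base edge of the current square, read left to right."""
        if depth < 0:
            return
        a, b = np.asarray(a, float), np.asarray(b, float)
        d = b - a
        perp = np.array([-d[1], d[0]])        # base edge rotated 90 deg counterclockwise
        p, q = a + perp, b + perp             # the square's top edge
        yield np.array([a, b, q, p])
        m = (q - p) / 2
        c = (p + q) / 2 + np.array([-m[1], m[0]])    # apex of the right isosceles triangle
        yield from pythagoras_tree(p, c, depth - 1)  # left child, scaled by sqrt(2)/2
        yield from pythagoras_tree(c, q, depth - 1)  # right child

    squares = list(pythagoras_tree((0, 0), (1, 0), 8))
    print(len(squares))   # 2**9 - 1 = 511 squares over 9 levels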
https://en.wikipedia.org/wiki/ISUP
ISUP may refer to: Paris Institute of Statistics, a school for statistics in France ISDN User Part or ISUP, a feature of Public Switched Telephone Networks Inflatable Stand Up Paddle Board or iSUP, a water craft for the sport of Stand Up Paddling that is inflated rather than having a solid construction.
https://en.wikipedia.org/wiki/Flamingo%20%28disambiguation%29
Flamingo is the common name for birds in the genus Phoenicopterus. Flamingo, Flamingoes or Flamingos may also refer to: Places Topology Flamingo, Costa Rica, a beach Flamingo/Lummus, Miami Beach, Florida, United States Flamingo, Monroe County, Florida, a ghost town Flamingo Bay (disambiguation) Airports Flamingo International Airport, Kralendijk, Bonaire, Netherlands Antilles Roads Flamingo Road (Las Vegas) Flamingo Road, part of Florida State Road 823 People Raven (wrestler) and Scotty Flamingo, ring personae of American professional wrestler Scott Levy (born 1964) Arts, entertainment, and media Fictional characters Flamingo (comics), a DC Comics villain Music Groups and labels Flamingo Recordings, a Dutch record label The Flamingos, an American doo-wop group Albums Flamingo (Flamin' Groovies album) (1970) Flamingo (Brandon Flowers album) (2010) Flamingo (Herbie Mann album) (1955) Flamingo (Olympia album) (2019) Flamingos (album), a 2002 album by Enrique Bunbury Songs "Flamingo" (song), a 1940 song written by Ted Grouya and Edmund Anderson "Flamingo", a 2014 song by English group Kero Kero Bonito "Flamingo", a 2010 song by Venezuelan group La Vida Bohème "Flamingo", a 1973 song from the album A Wizard, a True Star by Todd Rundgren "Flamingo", a 2018 song by Japanese musician Kenshi Yonezu "Flamingo", a 2014 song by American singer-songwriter Rob Cantor Other uses arts, entertainment, and media Flamingo (sculpture), a 1973 sculpture by Alexander Calder in Chicago, Illinois Flamingo Televisión, a Venezuelan regional television station from 1990 to 2000 Captain Flamingo, Canadian animated TV series (2006–2010) Flamingo (imprint), a former publishing imprint Brands and enterprises Flamingo Hotel, Miami Beach, Florida, a hotel from 1921 to the 1950s Flamingo Las Vegas, a casino resort and hotel in Las Vegas, Nevada, United States Flamingo, Vantaa, an entertainment center in Vantaa, Finland The Flamingo Club, a club in London, England which was a meeting place for international musicians from 1957 to 1962 Military Flamingo, a popular name for the Panzer II Flamm tank , two ships , three ships Sports Flamingo Stakes, an American Thoroughbred horse race run annually from 1926 to 2001 Flamingoes F.C., a disbanded nineteenth century English rugby union club Flamingos F.C., a Namibian football club since 1986 Florida Flamingos, a charter franchise of World Team Tennis which played only in the 1974 season before folding Miami Beach Flamingos, a minor league baseball team from 1940 to 1954 Transportation Airlines Flamingo Air (Cincinnati airline), a small charter airline Flamingo Air, two small seaplane airlines which operate between Florida and the Bahamas Aircraft Aeros UL-2000 Flamingo, a Czech ultralight aircraft de Havilland Flamingo, a World War II era passenger airliner, also used by the Royal Air Force MBB 223 Flamingo, a West German 1960s light aircraft Metal Aircraft Flamingo, a monopl
https://en.wikipedia.org/wiki/Vietoris%E2%80%93Begle%20mapping%20theorem
The Vietoris–Begle mapping theorem is a result in the mathematical field of algebraic topology. It is named for Leopold Vietoris and Edward G. Begle. The statement of the theorem, below, is as formulated by Stephen Smale. Theorem Let X and Y be compact metric spaces, and let f : X → Y be surjective and continuous. Suppose that the fibers of f are acyclic, so that H̃r(f⁻¹(y)) = 0 for all 0 ≤ r ≤ n − 1 and all y ∈ Y, with H̃r denoting the r-th reduced Vietoris homology group. Then, the induced homomorphism f* : H̃r(X) → H̃r(Y) is an isomorphism for r ≤ n − 1 and a surjection for r = n. Note that as stated the theorem doesn't hold for homology theories like singular homology. For example, Vietoris homology groups of the closed topologist's sine curve and of a segment are isomorphic (since the first projects onto the second with acyclic fibers). But the singular homology differs, since the segment is path connected and the topologist's sine curve is not. References "Leopold Vietoris (1891–2002)", Notices of the American Mathematical Society, vol. 49, no. 10 (November 2002) by Heinrich Reitberger Theorems in algebraic topology
https://en.wikipedia.org/wiki/Coherence%20%28statistics%29
In probability theory and statistics, coherence can have several different meanings. Coherence in statistics is an indication of the quality of the information, either within a single data set, or between similar but not identical data sets. Fully coherent data are logically consistent and can be reliably combined for analysis. In probability When dealing with personal probability assessments, or supposed probabilities derived in nonstandard ways, it is a property of self-consistency across a whole set of such assessments. In gambling strategy One way of expressing such self-consistency is in terms of responses to various betting propositions, as described in relation to coherence (philosophical gambling strategy). In Bayesian decision theory The coherency principle in Bayesian decision theory is the assumption that subjective probabilities follow the ordinary rules/axioms of probability calculations (where the validity of these rules corresponds to the self-consistency just referred to) and thus that consistent decisions can be obtained from these probabilities. In time series analysis In time series analysis, and particularly in spectral analysis, it is used to describe the strength of association between two series where the possible dependence between the two series is not limited to simultaneous values but may include leading, lagged and smoothed relationships. The concepts here are sometimes known as coherency and are essentially those set out for coherence as for signal processing. However, note that the quantity coefficient of coherence may sometimes be called the squared coherence. References Probability assessment Bayesian statistics Frequency-domain analysis Statistical principles
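For the time-series sense of the term, here is a minimal illustration using SciPy's Welch-based estimator scipy.signal.coherence (the signal frequencies and noise levels are arbitrary choices of mine): two noisy signals sharing a 100 Hz component show squared coherence peaking near that frequency.

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)
    common = np.sin(2 * np.pi * 100 * t)          # shared 100 Hz component
    x = common + rng.standard_normal(t.size)      # series 1: common part + noise
    y = common + rng.standard_normal(t.size)      # series 2: common part + independent noise
    f, cxy = coherence(x, y, fs=fs, nperseg=512)  # magnitude-squared coherence estimate
    print(f[np.argmax(cxy)])                      # ~100 Hz, where the series cohere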
https://en.wikipedia.org/wiki/Statistical%20Lab
The computer program Statistical Lab (Statistiklabor) is an explorative and interactive toolbox for statistical analysis and visualization of data. It supports educational applications of statistics in business administration, economics, social sciences and humanities. The program is developed and constantly advanced by the Center for Digital Systems of the Free University of Berlin. Their website states that the source code is available to private users under the GPL; a commercial user must therefore obtain a copy indirectly, from a private user who already has one. Simple or complex statistical problems can be simulated, edited and solved individually with the Statistical Lab. It can be extended by using external libraries. Via these libraries, it can also be adapted to individual and local demands like specific target groups. The versatile graphical diagrams allow demonstrative visualization of underlying data. The Statistical Lab is the successor of Statistik interaktiv!. In contrast to the commercial SPSS, the Statistical Lab is didactically driven. It is focused on providing facilities for users with little statistical experience. It combines data frames, contingency tables, random numbers, and matrices in a user-friendly virtual worksheet. This worksheet allows users to explore the possibilities of calculations, analysis, simulations and manipulation of data. For mathematical calculations, the Statistical Lab uses R, a free implementation of the S language (originally developed at Bell Laboratories). See also R interfaces References External links Homepage of the Statistical Lab - in English Statistical Lab Tutorial for newbies - English versions available forum for Statistical Lab users - bilingual English and German Tigris.org Source-Code of the Statistical Lab (discontinued, source code now available in the download area of the main pages) Homepage of the Center for Digital Systems Statistical software Windows-only freeware
https://en.wikipedia.org/wiki/Spider%20diagram
In mathematics, a unitary spider diagram adds existential points to an Euler or a Venn diagram. The points indicate the existence of an attribute described by the intersection of contours in the Euler diagram. These points may be joined, forming a shape like a spider. Joined points represent an "or" condition, also known as a logical disjunction. A spider diagram is a boolean expression involving unitary spider diagrams and the logical symbols ∧, ∨ and ¬. For example, it may consist of the conjunction of two spider diagrams, the disjunction of two spider diagrams, or the negation of a spider diagram. Example In the image shown, the following conjunctions are apparent from the Euler diagram. In the universe of discourse defined by this Euler diagram, in addition to the conjunctions specified above, all possible sets from A through B and D through G are available separately. The set C is only available as a subset of B. Often, in complicated diagrams, singleton sets and/or conjunctions may be obscured by other set combinations. The two spiders in the example correspond to the following logical expressions: Red spider: Blue spider: References Howse, J. and Stapleton, G. and Taylor, H. Spider Diagrams London Mathematical Society Journal of Computation and Mathematics, (2005) v. 8, pp. 145–194. Accessed on January 8, 2012 here Stapleton, G. and Howse, J. and Taylor, J. and Thompson, S. What can spider diagrams say? Proc. Diagrams, (2004) v. 168, pp. 169–219. Accessed on January 4, 2012 here Stapleton, G. and Jamnik, M. and Masthoff, J. On the Readability of Diagrammatic Proofs Proc. Automated Reasoning Workshop, 2009. PDF External links Brighton and Kent University - Euler Diagrams Diagrams Diagram algebras
https://en.wikipedia.org/wiki/Graded%20poset
In mathematics, in the branch of combinatorics, a graded poset is a partially-ordered set (poset) P equipped with a rank function ρ from P to the set N of all natural numbers. ρ must satisfy the following two properties: The rank function is compatible with the ordering, meaning that for all x and y in the order, if x < y then ρ(x) < ρ(y), and The rank is consistent with the covering relation of the ordering, meaning that for all x and y, if y covers x then ρ(y) = ρ(x) + 1. The value of the rank function for an element of the poset is called its rank. Sometimes a graded poset is called a ranked poset but that phrase has other meanings; see Ranked poset. A rank or rank level of a graded poset is the subset of all the elements of the poset that have a given rank value. Graded posets play an important role in combinatorics and can be visualized by means of a Hasse diagram. Examples Some examples of graded posets (with the rank function in parentheses) are: the natural numbers N with their usual order (rank: the number itself), or some interval [0, N] of this poset, Nn, with the product order (sum of the components), or a subposet of it that is a product of intervals, the positive integers, ordered by divisibility (number of prime factors, counted with multiplicity), or a subposet of it formed by the divisors of a fixed N, the Boolean lattice of finite subsets of a set (number of elements of the subset), the lattice of partitions of a set into finitely many parts, ordered by reverse refinement (number of parts), the lattice of partitions of a finite set X, ordered by refinement (number of elements of X minus number of parts), a group and a generating set, or equivalently its Cayley graph, ordered by the weak or strong Bruhat order, and ranked by word length (length of shortest reduced word). In particular for Coxeter groups, for example permutations of a totally ordered n-element set, with either the weak or strong Bruhat order (number of adjacent inversions), geometric lattices, such as the lattice of subspaces of a vector space (dimension of the subspace), the distributive lattice of finite lower sets of another poset (number of elements), the poset of all unlabeled posets on (number of elements), Young's lattice, a particular instance of the previous example (number of boxes in the Young diagram), face lattices of convex polytopes (dimension of the face, plus one), abstract polytopes ("distance" from the least face, minus one), abstract simplicial complexes (number of elements of the simplex). Alternative characterizations A bounded poset admits a grading if and only if all maximal chains in P have the same length: setting the rank of the least element to 0 then determines the rank function completely. This covers many finite cases of interest; see picture for a negative example. However, unbounded posets can be more complicated. A candidate rank function, compatible with the ordering, makes a poset into graded poset i
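The two axioms are easy to check mechanically on a finite example. A sketch (assuming SymPy for the number theory; the checking logic is mine) that verifies the covering axiom for the divisors of 360 ordered by divisibility, with rank given by the number of prime factors counted with multiplicity; in this poset, y covers x exactly when x divides y and y/x is prime.

    from sympy import divisors, factorint, isprime

    N = 360
    elems = divisors(N)
    # rank = number of prime factors counted with multiplicity
    rank = {d: sum(factorint(d).values()) for d in elems}

    for x in elems:
        for y in elems:
            if y % x == 0 and isprime(y // x):   # y covers x in the divisibility order
                # the graded-poset axiom: a cover raises the rank by exactly 1
                assert rank[y] == rank[x] + 1
    print("rank function verified for the divisors of", N)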
https://en.wikipedia.org/wiki/Structural%20stability
In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact C1-small perturbations). Examples of such qualitative properties are numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and flows generated by them, and diffeomorphisms. Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion. In this case, structurally stable systems are typical, they form an open dense set in the space of all systems endowed with appropriate topology. In higher dimensions, this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability. Definition Let G be an open domain in Rn with compact closure and smooth (n−1)-dimensional boundary. Consider the space X1(G) consisting of restrictions to G of C1 vector fields on Rn that are transversal to the boundary of G and are inward oriented. This space is endowed with the C1 metric in the usual fashion. A vector field F ∈ X1(G) is weakly structurally stable if for any sufficiently small perturbation F1, the corresponding flows are topologically equivalent on G: there exists a homeomorphism h: G → G which transforms the oriented trajectories of F into the oriented trajectories of F1. If, moreover, for any ε > 0 the homeomorphism h may be chosen to be C0 ε-close to the identity map when F1 belongs to a suitable neighborhood of F depending on ε, then F is called (strongly) structurally stable. These definitions extend in a straightforward way to the case of n-dimensional compact smooth manifolds with boundary. Andronov and Pontryagin originally considered the strong property. Analogous definitions can be given for diffeomorphisms in place of vector fields and flows: in this setting, the homeomorphism h must be a topological conjugacy. It is important to note that topological equivalence is realized with a loss of smoothness: the map h cannot, in general, be a diffeomorphism. Moreover, although topological equivalence respects the oriented trajectories, unlike topological conjugacy, it is not time-compatible. Thus, the relevant
https://en.wikipedia.org/wiki/Generation%20of%20primes
In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and the search for prime factors of large numbers. For relatively small numbers, it is possible to just apply trial division to each successive odd number. Prime sieves are almost always faster. Prime sieving is the fastest known way to deterministically enumerate the primes. There are some known formulas that can calculate the next prime but there is no known way to express the next prime in terms of the previous primes. Also, there is no effective known general manipulation and/or extension of some mathematical expression (even one including later primes) that deterministically calculates the next prime. Prime sieves A prime sieve or prime number sieve is a fast type of algorithm for finding primes. There are many prime sieves. The simple sieve of Eratosthenes (250s BCE), the sieve of Sundaram (1934), the still faster but more complicated sieve of Atkin (2003), and various wheel sieves are most common. A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left. This is the most efficient way to obtain a large range of primes; however, to find individual primes, direct primality tests are more efficient. Furthermore, based on the sieve formalisms, some integer sequences have been constructed which can also be used for generating primes in certain intervals. Large primes For the large primes used in cryptography, provable primes can be generated based on variants of the Pocklington primality test, while probable primes can be generated with probabilistic primality tests such as the Baillie–PSW primality test or the Miller–Rabin primality test. Both the provable and probable primality tests rely on modular exponentiation. To further reduce the computational cost, the integers are first checked for any small prime divisors using either sieves similar to the sieve of Eratosthenes or trial division. Integers of special forms, such as Mersenne primes or Fermat primes, can be efficiently tested for primality if the prime factorization of p − 1 or p + 1 is known. Complexity The sieve of Eratosthenes is generally considered the easiest sieve to implement, but it is not the fastest in the sense of the number of operations for a given range for large sieving ranges. In its usual standard implementation (which may include basic wheel factorization for small primes), it can find all the primes up to N in time O(N log log N), while basic implementations of the sieve of Atkin and wheel sieves run in linear time O(N). Special versions of the sieve of Eratosthenes using wheel sieve principles can have this same linear time complexity. A special version of the sieve of Atkin and some special versions of wheel sieves which may include sieving using the methods from the
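As a point of comparison for the complexity discussion, here is a basic sieve of Eratosthenes (a standard textbook implementation, not tied to any particular library), which runs in the O(N log log N) time quoted above:

    from math import isqrt

    def primes_up_to(n):
        """Sieve of Eratosthenes: all primes <= n in O(n log log n) operations."""
        sieve = bytearray([1]) * (n + 1)
        sieve[0:2] = b"\x00\x00"                 # 0 and 1 are not prime
        for p in range(2, isqrt(n) + 1):
            if sieve[p]:
                # cross off p*p, p*p + p, ... (smaller multiples were already crossed off)
                sieve[p * p :: p] = bytearray((n - p * p) // p + 1)
        return [i for i, flag in enumerate(sieve) if flag]

    print(primes_up_to(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]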
https://en.wikipedia.org/wiki/Subbundle
In mathematics, a subbundle U of a vector bundle V on a topological space X is a collection of linear subspaces Ux of the fibers Vx of V at x in X, that make up a vector bundle in their own right. In connection with foliation theory, a subbundle of the tangent bundle of a smooth manifold may be called a distribution (of tangent vectors). If a set of vector fields Yk spans the vector space Δ, and all Lie commutators [Yi, Yj] are linear combinations of the Yk, then one says that Δ is an involutive distribution. See also Fiber bundles
https://en.wikipedia.org/wiki/Mayo%2C%20Yukon
Mayo is a village in Yukon, Canada, along the Silver Trail and the Stewart River. It had a population of 200 in 2016. The Yukon Bureau of Statistics estimated a population of 496 in 2019. It is also the home of the First Nation of Na-Cho Nyäk Dun, whose primary language is Northern Tutchone. Na-Cho Nyäk Dun translates into "big river people." The community, formerly called Mayo Landing, is serviced by Mayo Airport. The village was named after former circus acrobat turned settler and explorer Alfred Mayo. Its only school is J. V. Clark School, which is named after James Vincent Clark (1924–1994). The school had about 70 students in 2012. As of the 2020/2021 school year, the acting principal is Nicholas Vienneau. History Before Europeans came there were in the area two communities of the Na-cho Nyäk Dun people, who lived by hunting and trapping. The river now known as the Stewart River was known as the "Náhcho Nyäk" ('Great River'). The people lived across the Stewart River from the main focus of today's Mayo, in a district today called "Old Mayo village". The old settlement was reinstated on the initiative of a missionary, but in 1934 the river burst its banks and flattened much of the old village, destroying the church and many cultural treasures. The first gold discoveries in the area were made in the 1880s: silver was also discovered some time later. Till the mid-twentieth century Mayo was connected with the outside world by the river and received any supplies by boat. In the 1950s the construction of the Klondike Highway and the Silver Trail provided Mayo with a road link to Stewart Crossing. Between 1973 and 1984 negotiation took place between the government and the northern Tutchone leaders over land rights and self-government. A breakthrough came only in 1993 with a treaty between the residents and the lawmakers concerning an area of and a payment, over fifteen years, totalling C$14.5 million. Together with the Tr'ondek Hwech’in First Nation an agreement has been made with Yukon Energy to supply electricity to Dawson City using the Mayo-Dawson Power Line. May 2008 saw a preliminary agreement with Alexco Resource Corp concerning silver extraction in the Keno Hill Silver area near the far end of Mayo lake where the corporation operates approximately 40 silver mines. Geography Climate Mayo has a subarctic climate (Koppen: Dfc), with generally warm summers and severely cold winters lasting half the year. Spring and autumn are very short transitional seasons between summer and winter, with average temperatures rising and falling very fast during these times. The temperature difference between the record low in February () and the record high in June () is (), one of the largest temperature differentials ever recorded. It has some of the warmest summers in the Yukon with a mean average summer temperature of . Demographics In the 2021 Census of Population conducted by Statistics Canada, Mayo had a population of living in of its
https://en.wikipedia.org/wiki/Heawood%20number
In mathematics, the Heawood number of a surface is an upper bound for the number of colors that suffice to color any graph embedded in the surface. In 1890 Heawood proved for all surfaces except the sphere that no more than colors are needed to color any graph embedded in a surface of Euler characteristic , or genus for an orientable surface. The number became known as Heawood number in 1976. Franklin proved that the chromatic number of a graph embedded in the Klein bottle can be as large as , but never exceeds . Later it was proved in the works of Gerhard Ringel, J. W. T. Youngs, and other contributors that the complete graph with vertices can be embedded in the surface unless is the Klein bottle. This established that Heawood's bound could not be improved. For example, the complete graph on vertices can be embedded in the torus as follows: The case of the sphere is the four-color conjecture, which was settled by Kenneth Appel and Wolfgang Haken in 1976. Notes Béla Bollobás, Graph Theory: An Introductory Course, Graduate Texts in Mathematics, volume 63, Springer-Verlag, 1979. . Thomas L. Saaty and Paul Chester Kainen; The Four-Color Problem: Assaults and Conquest, Dover, 1986. . References Topological graph theory Graph coloring
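The displayed formula did not survive extraction; in its standard form the bound is H(χ) = ⌊(7 + √(49 − 24χ))/2⌋ for a surface of Euler characteristic χ, with χ = 2 − 2g for an orientable surface of genus g. A small integer-exact sketch (the function name is mine):

    from math import isqrt

    def heawood(chi):
        """Heawood number H(chi) = floor((7 + sqrt(49 - 24*chi)) / 2).
        Integer-exact: floor((7 + sqrt(t))/2) equals (7 + isqrt(t)) // 2."""
        assert 49 - 24 * chi >= 0
        return (7 + isqrt(49 - 24 * chi)) // 2

    print(heawood(2))          # sphere: 4, matching the four-color theorem
    print(heawood(0))          # torus: 7 (also what the formula gives for the
                               # Klein bottle, where Franklin showed 6 suffices)
    print(heawood(2 - 2 * 2))  # orientable genus-2 surface (chi = -2): 8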
https://en.wikipedia.org/wiki/Star%20product
In mathematics, the star product is a method of combining graded posets with unique minimal and maximal elements, preserving the property that the posets are Eulerian. Definition The star product of two graded posets P and Q, where P has a unique maximal element 1 and Q has a unique minimal element 0, is the poset P * Q on the set (P − {1}) ∪ (Q − {0}). We define the partial order by x ≤ y if and only if: 1. x, y ∈ P − {1}, and x ≤ y in P; 2. x, y ∈ Q − {0}, and x ≤ y in Q; or 3. x ∈ P − {1} and y ∈ Q − {0}. In other words, we pluck out the top of P and the bottom of Q, and require that everything in P be smaller than everything in Q. Example For example, suppose P and Q are the Boolean algebra on two elements. Then P * Q is the poset with the Hasse diagram below. Properties The star product of Eulerian posets is Eulerian. See also Product order, a different way of combining posets References Stanley, R., Flag f-vectors and the cd-index, Math. Z. 216 (1994), 483-499. Combinatorics
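The definition translates directly into code. A sketch (the representation choices are mine: posets as element lists plus order predicates) that builds the star product and applies it to the example of two copies of the Boolean algebra on two elements:

    def star_product(P, leP, maxP, Q, leQ, minQ):
        """Star product of graded posets, following the definition above:
        drop the top of P and the bottom of Q, keep each internal order,
        and put everything remaining in P below everything remaining in Q."""
        elems = [x for x in P if x != maxP] + [y for y in Q if y != minQ]
        def le(x, y):
            if x in P and y in P:
                return leP(x, y)          # case 1: both from P - {maxP}
            if x in Q and y in Q:
                return leQ(x, y)          # case 2: both from Q - {minQ}
            return x in P and y in Q      # case 3: P below Q
        return elems, le

    # Boolean algebra on two elements: subsets of {1, 2} ordered by inclusion.
    B2 = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
    P = [("P", s) for s in B2]            # tag the two copies to keep them disjoint
    Q = [("Q", s) for s in B2]
    leB = lambda x, y: x[1] <= y[1]       # subset order on the second component
    elems, le = star_product(P, leB, ("P", frozenset({1, 2})), Q, leB, ("Q", frozenset()))
    print(len(elems))                     # 6: the plucked top and bottom are gone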
https://en.wikipedia.org/wiki/Pluriharmonic%20function
In mathematics, precisely in the theory of functions of several complex variables, a pluriharmonic function is a real valued function which is locally the real part of a holomorphic function of several complex variables. Sometimes such a function is referred to as n-harmonic function, where n ≥ 2 is the dimension of the complex domain where the function is defined. However, in modern expositions of the theory of functions of several complex variables it is preferred to give an equivalent formulation of the concept, by defining pluriharmonic function a complex valued function whose restriction to every complex line is a harmonic function with respect to the real and imaginary part of the complex line parameter. Formal definition . Let be a complex domain and be a (twice continuously differentiable) function. The function is called pluriharmonic if, for every complex line formed by using every couple of complex tuples , the function is a harmonic function on the set . Let be a complex manifold and be a function. The function is called pluriharmonic if Basic properties Every pluriharmonic function is a harmonic function, but not the other way around. Further, it can be shown that for holomorphic functions of several complex variables the real (and the imaginary) parts are locally pluriharmonic functions. However a function being harmonic in each variable separately does not imply that it is pluriharmonic. See also Plurisubharmonic function Wirtinger derivatives Notes Historical references . . . . Notes from a course held by Francesco Severi at the Istituto Nazionale di Alta Matematica (which at present bears his name), containing appendices of Enzo Martinelli, Giovanni Battista Rizza and Mario Benedicty. An English translation of the title reads as:-"Lectures on analytic functions of several complex variables – Lectured in 1956–57 at the Istituto Nazionale di Alta Matematica in Rome". References . The first paper where a set of (fairly complicate) necessary and sufficient conditions for the solvability of the Dirichlet problem for holomorphic functions of several variables is given. An English translation of the title reads as:-"About a boundary value problem". ."Boundary value problems for pluriharmonic functions" (English translation of the title) deals with boundary value problems for pluriharmonic functions: Fichera proves a trace condition for the solvability of the problem and reviews several earlier results of Enzo Martinelli, Giovanni Battista Rizza and Francesco Severi. . An English translation of the title reads as:-"Boundary values of pluriharmonic functions: extension to the space R2n of a theorem of L. Amoroso". . An English translation of the title reads as:-"On a theorem of L. Amoroso in the theory of analytic functions of two complex variables". . , available at Gallica , available at Gallica , available at DigiZeitschirften. External links Harmonic functions Several complex variables
https://en.wikipedia.org/wiki/Pluripolar%20set
In mathematics, in the area of potential theory, a pluripolar set is the analog of a polar set for plurisubharmonic functions. Definition Let G ⊂ ℂⁿ and let f : G → ℝ ∪ {−∞} be a plurisubharmonic function which is not identically −∞. The set P = {z ∈ G : f(z) = −∞} is called a complete pluripolar set. A pluripolar set is any subset of a complete pluripolar set. Pluripolar sets are of Hausdorff dimension at most 2n − 2 and have zero Lebesgue measure. If f is a holomorphic function then log |f| is a plurisubharmonic function. The zero set of f is then a pluripolar set. See also Skoda-El Mir theorem References Steven G. Krantz. Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992. Potential theory
https://en.wikipedia.org/wiki/Plurisubharmonic%20function
In mathematics, plurisubharmonic functions (sometimes abbreviated as psh, plsh, or plush functions) form an important class of functions used in complex analysis. On a Kähler manifold, plurisubharmonic functions form a subset of the subharmonic functions. However, unlike subharmonic functions (which are defined on a Riemannian manifold) plurisubharmonic functions can be defined in full generality on complex analytic spaces. Formal definition A function f : G → ℝ ∪ {−∞}, with domain G ⊂ ℂⁿ, is called plurisubharmonic if it is upper semi-continuous, and for every complex line {a + bz : z ∈ ℂ}, with a, b ∈ ℂⁿ, the function z ↦ f(a + bz) is a subharmonic function on the set {z ∈ ℂ : a + bz ∈ G}. In full generality, the notion can be defined on an arbitrary complex manifold or even a complex analytic space X as follows. An upper semi-continuous function f : X → ℝ ∪ {−∞} is said to be plurisubharmonic if and only if for any holomorphic map φ : Δ → X the function f ∘ φ is subharmonic, where Δ denotes the unit disk. Differentiable plurisubharmonic functions If f is of (differentiability) class C², then f is plurisubharmonic if and only if the hermitian matrix Lf = (λij), called the Levi matrix, with entries λij = ∂²f/∂zi∂z̄j, is positive semidefinite. Equivalently, a C²-function f is plurisubharmonic if and only if i∂∂̄f is a positive (1,1)-form. Examples Relation to Kähler manifolds: On n-dimensional complex Euclidean space ℂⁿ, the function z ↦ |z|² is plurisubharmonic. In fact, i∂∂̄|z|² is equal to the standard Kähler form on ℂⁿ up to constant multiples. More generally, if a function g satisfies i∂∂̄g = ω for some Kähler form ω, then g is plurisubharmonic, and is called a Kähler potential. These can be readily generated by applying the ddbar lemma to Kähler forms on a Kähler manifold. Relation to the Dirac delta: On 1-dimensional complex Euclidean space ℂ, the function z ↦ log |z| is plurisubharmonic. If f is a C∞-class function with compact support, then the Cauchy integral formula, suitably modified, shows that (1/2π) Δ log |z| is nothing but the Dirac measure at the origin 0. More examples If f is an analytic function on an open set, then log |f| is plurisubharmonic on that open set. Convex functions are plurisubharmonic. If Ω is a domain of holomorphy then −log dist(z, ∂Ω) is plurisubharmonic. Harmonic functions are not necessarily plurisubharmonic. History Plurisubharmonic functions were defined in 1942 by Kiyoshi Oka and Pierre Lelong. Properties The set of plurisubharmonic functions has the following properties, like a convex cone: if f is a plurisubharmonic function and c a positive real number, then the function c·f is plurisubharmonic; if f1 and f2 are plurisubharmonic functions, then the sum f1 + f2 is a plurisubharmonic function. Plurisubharmonicity is a local property, i.e. a function is plurisubharmonic if and only if it is plurisubharmonic in a neighborhood of each point. If f is plurisubharmonic and φ a monotonically increasing, convex function then φ ∘ f is plurisubharmonic. If f1 and f2 are plurisubharmonic functions, then the function max(f1, f2) is plurisubharmonic. If f1 ≥ f2 ≥ ... is a monotonically decreasing sequence of plurisubharmonic functions then its pointwise limit is plurisubharmonic. Every continuous plurisubharmonic function can be obtained as the
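The line-restriction definition can be probed numerically. A sketch (tolerances and the test function are my own choices) that checks the sub-mean-value inequality, a necessary condition for subharmonicity, for the plurisubharmonic function log ||z||² along random complex lines in ℂ²:

    import numpy as np

    def submean_on_line(u, a, b, lam0=0j, r=0.1, k=512):
        """Check u(a + b*lam0) <= circle average of lam -> u(a + b*lam),
        i.e. the sub-mean-value inequality on the complex line through a
        with direction b (necessary for plurisubharmonicity)."""
        theta = 2 * np.pi * np.arange(k) / k
        lam = lam0 + r * np.exp(1j * theta)
        avg = np.mean([u(a + b * l) for l in lam])
        return u(a + b * lam0) <= avg + 1e-12

    u = lambda z: np.log(np.sum(np.abs(z) ** 2))   # log ||z||^2 is plurisubharmonic
    rng = np.random.default_rng(1)
    for _ in range(100):
        a = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        assert submean_on_line(u, a, b)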
https://en.wikipedia.org/wiki/384%20%28number%29
384 (three hundred [and] eighty-four) is the natural number following 383 and preceding 385. It is an even composite positive integer. In mathematics 384 is: the sum of a twin prime pair (191 + 193). the sum of six consecutive primes (53 + 59 + 61 + 67 + 71 + 73). the order of the hyperoctahedral group for n = 4. the double factorial of 8. an abundant number. the third 129-gonal number after 1, 129 and before 766 and 1275. a Harshad number in bases 2, 3, 4, 5, 7, 8, 9, 13, 17, and 62 other bases. a refactorable number. Computing Being a low multiple of a power of two, 384 occurs often in the field of computing. For example, the digest length of the secure hash function SHA-384 is 384 bits, the screen resolution of the Virtual Boy is 384×224, MP3 Audio Layer 1 encoding runs at 384 kbit/s, and in 3G phones the W-CDMA implementation allows data rates of up to 384 kbit/s. References External links Integers
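A few of the listed properties, checked mechanically (assuming SymPy for the prime utilities):

    from math import factorial
    from sympy import prime, isprime, factorial2

    n = 384
    assert n == 191 + 193 and isprime(191) and isprime(193)   # sum of a twin prime pair
    assert n == sum(prime(k) for k in range(16, 22))          # 53 + 59 + 61 + 67 + 71 + 73
    assert n == factorial2(8)                                 # double factorial: 8 * 6 * 4 * 2
    assert n == 2**4 * factorial(4)                           # order of the hyperoctahedral group, n = 4
    print("all properties verified")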
https://en.wikipedia.org/wiki/Hiroshi%20Haruki
Hiroshi Haruki was a Japanese mathematician. A world-renowned expert in functional equations, he is best known for discovering "Haruki's theorem" and "Haruki's Lemma" in plane geometry. Some of his published work, such as "On a Characteristic Property of Confocal Conic Sections", is available in open access on Project Euclid. Haruki earned his MSc and PhD from Osaka University and taught there. He was a professor at the University of Waterloo in Canada from 1966 until his retirement in 1986. He was a founding member of the university's computer science department (1967). See also List of University of Waterloo people References News release, Department of Computer Science, University of Waterloo. External links Haruki's theorem on MathWorld Hiroshi Haruki's Lemma (Interactive Mathematics Miscellany and Puzzles) Hiroshi Haruki's Theorem (Interactive Mathematics Miscellany and Puzzles) Year of birth missing 1997 deaths Euclidean geometry Canadian mathematicians 20th-century Japanese mathematicians Academic staff of the University of Waterloo Osaka University alumni
https://en.wikipedia.org/wiki/Octene
Octene is an alkene with the formula . Several isomers of octene are known, depending on the position and the geometry of the double bond in the carbon chain. The simplest isomer is 1-octene, an alpha-olefin used primarily as a co-monomer in production of polyethylene via the solution polymerization process. Several useful structural isomers of the octenes are obtained by dimerization of isobutene and 1-butene. These branched alkenes are used to alkylate phenols to give precursors to detergents. References External links OSHA Safety and Health Topics: 1-Octene Alkenes Monomers
https://en.wikipedia.org/wiki/Universal%20code%20%28data%20compression%29
In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity. In general, most prefix codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message. Universal codes are generally not used for precisely known probability distributions, and no universal code is known to be optimal for any distribution used in practice. A universal code should not be confused with universal source coding, in which the data compression method need not be a fixed prefix code and the ratio between actual and optimal expected lengths must approach one. However, note that an asymptotically optimal universal code can be used on independent identically-distributed sources, by using increasingly large blocks, as a method of universal source coding. Universal and non-universal codes These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal: Elias gamma coding * Elias delta coding * ‡ Elias omega coding * ‡ Exp-Golomb coding *, which has Elias gamma coding as a special case. (Used in H.264/MPEG-4 AVC) Fibonacci coding Levenshtein coding * ‡, the original universal coding technique Byte coding where a special bit pattern (with at least two bits) is used to mark the end of the code — for example, if an integer is encoded as a sequence of nibbles representing digits in base 15 instead of the more natural base 16, then the highest nibble value (i.e., a sequence of four ones in binary) can be used to indicate the end of the integer. Variable-length quantity These are non-universal ones: Unary coding, which is used in Elias codes Rice coding, which is used in the FLAC audio codec and which has unary coding as a special case Golomb coding, which has Rice coding and unary coding as special cases. Their nonuniversality can be observed by noticing that, if any of these are used to code the Gauss–Kuzmin distribution or the Zeta distribution with parameter s=2, expected codeword length is infinite. For example, using unary coding on the Zeta distribution yields an e
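As a concrete instance, Elias gamma coding, listed above as universal, prefixes the binary expansion of n with ⌊log₂ n⌋ zeros, so larger integers get longer codewords, as the discussion above requires. A minimal sketch (function names are my own):

    def elias_gamma_encode(n):
        """Elias gamma code: (bit-length of n minus 1) zeros, then n in binary."""
        assert n >= 1
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def elias_gamma_decode(bits):
        """Count the leading zeros, then read that many more bits after the 1."""
        zeros = 0
        while bits[zeros] == "0":
            zeros += 1
        return int(bits[zeros : 2 * zeros + 1], 2)

    for n in range(1, 100):
        assert elias_gamma_decode(elias_gamma_encode(n)) == n
    print(elias_gamma_encode(10))   # 0001010  (three zeros, then 1010)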
https://en.wikipedia.org/wiki/Chow%20variety
In mathematics, particularly in the field of algebraic geometry, a Chow variety is an algebraic variety whose points correspond to effective algebraic cycles of fixed dimension and degree on a given projective space. More precisely, the Chow variety is the fine moduli variety parametrizing all effective algebraic cycles of dimension and degree in . The Chow variety may be constructed via a Chow embedding into a sufficiently large projective space. This is a direct generalization of the construction of a Grassmannian variety via the Plücker embedding, as Grassmannians are the case of Chow varieties. Chow varieties are distinct from Chow groups, which are the abelian group of all algebraic cycles on a variety (not necessarily projective space) up to rational equivalence. Both are named for Wei-Liang Chow(周煒良), a pioneer in the study of algebraic cycles. Background on algebraic cycles If X is a closed subvariety of of dimension , the degree of X is the number of intersection points between X and a generic -dimensional projective subspace of . Degree is constant in families of subvarieties, except in certain degenerate limits. To see this, consider the following family parametrized by t. . Whenever , is a conic (an irreducible subvariety of degree 2), but degenerates to the line (which has degree 1). There are several approaches to reconciling this issue, but the simplest is to declare to be a line of multiplicity 2 (and more generally to attach multiplicities to subvarieties) using the language of algebraic cycles. A -dimensional algebraic cycle is a finite formal linear combination . in which s are -dimensional irreducible closed subvarieties in , and s are integers. An algebraic cycle is effective if each . The degree of an algebraic cycle is defined to be . A homogeneous polynomial or homogeneous ideal in n-many variables defines an effective algebraic cycle in , in which the multiplicity of each irreducible component is the order of vanishing at that component. In the family of algebraic cycles defined by , the cycle is 2 times the line , which has degree 2. More generally, the degree of an algebraic cycle is constant in families, and so it makes sense to consider the moduli problem of effective algebraic cycles of fixed dimension and degree. Examples of Chow varieties There are three special classes of Chow varieties with particularly simple constructions. Degree 1: Subspaces An effective algebraic cycle in of dimension k-1 and degree 1 is the projectivization of a k-dimensional subspace of n-dimensional affine space. This gives an isomorphism to a Grassmannian variety: The latter space has a distinguished system of homogeneous coordinates, given by the Plücker coordinates. Dimension 0: Points An effective algebraic cycle in of dimension 0 and degree d is an (unordered) d-tuple of points in , possibly with repetition. This gives an isomorphism to a symmetric power of : . Codimension 1: Divisors An effective algeb
https://en.wikipedia.org/wiki/Horn%20function
In the theory of special functions in mathematics, the Horn functions (named for Jakob Horn) are the 34 distinct convergent hypergeometric series of order two (i.e. having two independent variables), enumerated by Horn in 1931 (and corrected by Borngässer in 1933). They are listed in Erdélyi's Higher Transcendental Functions (1953). B. C. Carlson revealed a problem with the Horn function classification scheme. The 34 Horn functions are further categorised into 14 complete hypergeometric functions and 20 confluent hypergeometric functions. Notice that some of the complete and confluent functions share the same notation. References J. Horn, Math. Ann. 111, 637 (1933). Hypergeometric functions
https://en.wikipedia.org/wiki/Unusual%20number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than √n. A k-smooth number has all its prime factors less than or equal to k; therefore, an unusual number is non-√n-smooth. Relation to prime numbers All prime numbers are unusual. For any prime p, its multiples less than p² are unusual, that is p, 2p, ..., (p−1)p, which have a density 1/p in the interval (p, p²). Examples The first few unusual numbers are 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... The first few non-prime (composite) unusual numbers are 6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... Distribution If we denote the number of unusual numbers less than or equal to n by u(n), then Richard Schroeppel stated in 1972 that the asymptotic probability that a randomly chosen number is unusual is ln(2). In other words, u(n)/n tends to ln 2 ≈ 0.693 as n tends to infinity. External links Integer sequences
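The definition is easy to test directly. A sketch (helper names are mine) that reproduces the opening of the list above; the commented density check illustrates Schroeppel's ln 2 limit:

    def largest_prime_factor(n):
        """Largest prime factor of n >= 2, by trial division."""
        f, p = 1, 2
        while p * p <= n:
            while n % p == 0:
                f, n = p, n // p
            p += 1
        return max(f, n) if n > 1 else f

    def is_unusual(n):
        """n is unusual iff its largest prime factor strictly exceeds sqrt(n)."""
        return n > 1 and largest_prime_factor(n) ** 2 > n

    print([n for n in range(2, 68) if is_unusual(n)])   # matches the list above
    # density check: sum(is_unusual(n) for n in range(2, 10**6)) / 10**6 is ~0.693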
https://en.wikipedia.org/wiki/Ohio%20Graduation%20Test
The Ohio Graduation Test (OGT) is the high school graduation examination given to sophomores in the U.S. state of Ohio. Students must pass all five sections (reading, writing, mathematics, science and social studies) in order to graduate. Students have multiple chances to pass these sections and can still graduate without passing each using the alternative pathway. In 2009, the Ohio legislature passed an education reform bill eliminating the OGT in favor of a new assessment system. The development and transition of replacement began in 2014 and will end in 2022. Test History and Development History Prior to the OGT, passing the ninth grade proficiency test was required for graduation beginning with the class of 1994. It had the same five subjects, apart from the social studies test was referred to as the citizenship test. In 2001, the Ohio legislature directed the Ohio Department of Education (ODE) to develop the OGT based on the soon-to-be-adopted academic content standards. The first official OGT was given in March 2005. It replaced the ninth grade proficiency test as a graduation requirement for the class of 2007. The last administration of the ninth grade proficiency test was in 2005. Development Questions are developed by ODE staff, sent through committees, and placed on exams before official inclusion on the OGT. First, the Content Advisory Committee runs the ODE developed question past parents and educators to see if it addresses the content. Second, the Fairness Sensitivity Review Committee helps ensure that questions are fair and do not put any student at a disadvantage because of a student’s moral values, social status, or religious beliefs. Third, the question is field tested. It is placed on an exam, but does not count towards the score of the student. Finally, the committees evaluate the performance data and decide if the question is to be used. Test Characteristics Questions The OGT is made up of five tests: reading, writing, mathematics, science, and social studies. These sections match the core school subjects and fulfill the high school testing requirement in reading, mathematics, and science under the federal No Child Left Behind Act. Each of the five sections is formatted differently, but they each contain multiple choice, short answer, and extended response questions: total 38 questions Each exam has approximately six extra questions that are being field tested for future OGT tests. Students are not penalized for incorrect answers on field tested questions. Students have up to two and a half hours to complete each section of the test. Typically, the tests are split up so that there is only one per day (for five days). Testing Dates The OGT is first given to students in the spring of their sophomore year. If they do not pass all five sections, they can continue to retake the exam. The OGT is administered in the fall (October), spring (March), and summer (June) each year. Not all schools offer the summer OGT, but s
https://en.wikipedia.org/wiki/Duncan%27s%20new%20multiple%20range%20test
In statistics, Duncan's new multiple range test (MRT) is a multiple comparison procedure developed by David B. Duncan in 1955. Duncan's MRT belongs to the general class of multiple comparison procedures that use the studentized range statistic qr to compare sets of means. David B. Duncan developed this test as a modification of the Student–Newman–Keuls method that would have greater power. Duncan's MRT is especially protective against false negative (Type II) error at the expense of having a greater risk of making false positive (Type I) errors. Duncan's test is commonly used in agronomy and other agricultural research. The result of the test is a set of subsets of means, where in each subset means have been found not to be significantly different from one another. This test is often followed by the Compact Letter Display (CLD) methodology that renders the output of such test much more accessible to non-statistician audiences. Definition Assumptions: 1. A sample of observed means m1, ..., mn, which have been drawn independently from n normal populations with "true" means μ1, ..., μn, respectively. 2. A common standard error σm. This standard error is unknown, but there is available the usual estimate sm, which is independent of the observed means and is based on a number of degrees of freedom, denoted by ν. (More precisely, sm has the property that ν·sm²/σm² is distributed as χ² with ν degrees of freedom, independently of the sample means.) The exact definition of the test is: The difference between any two means in a set of n means is significant provided the range of each and every subset which contains the given means is significant according to an αp-level range test, where αp = 1 − (1 − α)^(p−1), and p is the number of means in the subset concerned. Exception: The sole exception to this rule is that no difference between two means can be declared significant if the two means concerned are both contained in a subset of the means which has a non-significant range. Procedure The procedure consists of a series of pairwise comparisons between means. Each comparison is performed at a significance level αp, defined by the number p of means spanned by the two means compared. The tests are performed sequentially, where the result of a test determines which test is performed next. The tests are performed in the following order: the largest minus the smallest, the largest minus the second smallest, up to the largest minus the second largest; then the second largest minus the smallest, the second largest minus the second smallest, and so on, finishing with the second smallest minus the smallest. With only one exception, given below, each difference is significant if it exceeds the corresponding shortest significant range; otherwise it is not significant. Where the shortest significant range is the significant studentized range, multiplied by the standard error. The shortest significant range will be designated as Rp, where p is the number of means in the subset. The sole exception to thi
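A sketch of the critical values (assuming SciPy >= 1.7 for scipy.stats.studentized_range; the function name is mine): Duncan's shortest significant ranges come from studentized-range quantiles taken at the protection levels αp = 1 − (1 − α)^(p−1) introduced above.

    from scipy.stats import studentized_range

    def shortest_significant_ranges(alpha, k, df, se):
        """Duncan's shortest significant ranges R_p for subsets of p = 2..k means:
        R_p = q(1 - alpha_p; p, df) * se, with alpha_p = 1 - (1 - alpha)**(p - 1)."""
        ranges = {}
        for p in range(2, k + 1):
            alpha_p = 1 - (1 - alpha) ** (p - 1)
            ranges[p] = studentized_range.ppf(1 - alpha_p, p, df) * se
        return ranges

    # k = 4 means, 20 error degrees of freedom, standard error 1.2
    print(shortest_significant_ranges(0.05, k=4, df=20, se=1.2))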
https://en.wikipedia.org/wiki/Padovan%20polynomials
In mathematics, Padovan polynomials are a generalization of Padovan sequence numbers. These polynomials are defined by: The first few Padovan polynomials are: The Padovan numbers are recovered by evaluating the polynomials P_(n−3)(x) at x = 1. Evaluating P_(n−3)(x) at x = 2 gives the nth Fibonacci number plus (−1)^n. The ordinary generating function for the sequence is See also Polynomial sequences Polynomials
https://en.wikipedia.org/wiki/Algebra%20Project
The Algebra Project is a national U.S. mathematics literacy program aimed at helping low-income students and students of color achieve the mathematical skills in high school that are a prerequisite for a college preparatory mathematics sequence. Founded by Civil Rights activist and Math educator Bob Moses in the 1980s, the Algebra Project provides curricular materials, teacher training, professional development support, and community involvement activities for schools to improve mathematics education. By 2001, the Algebra Project had trained approximately 300 teachers and was reaching 10,000 students in 28 locations in 10 states. History The Algebra Project was founded in 1982 by Bob Moses in Cambridge, Massachusetts. Moses worked with his daughter's eighth-grade teacher, Mary Lou Mehrling, to provide extra tutoring in algebra for several students in her class. Moses, who had taught secondary school mathematics in New York City and Tanzania, wanted to ensure that those students had sufficient algebra skills to qualify for honors math and science courses in high school. Through this tutoring, students from the Open Program of the Martin Luther King School passed the citywide algebra examination and qualified for ninth grade honors geometry, the first students from the program to do so. The Algebra Project grew out of attempts to recreate this on a wider community level, to provide similar students with a higher level of mathematical literacy. The Algebra Project now focuses on the southern states of the United States, where the Southern Initiative of the Algebra Project is directed by Dave Dennis. Young People's Project Founded in 1996, the Young People's Project (YPP) is a spin-off of the Algebra Project, which recruits and trains high school and college age "Math Literacy Workers" to tutor younger students in mathematics, and is directed by Omowale Moses. YPP has established sites in Jackson, Mississippi, Chicago, and the Greater Boston area of Massachusetts, and is developing programs in Miami, Petersburg, Virginia, Los Angeles, Ann Arbor, and Mansfield, Ohio. Each site employs between 30 and 100 high school and college age students part-time, and serves up to 1,000 elementary and middle-school students through on and off site programs. In 2005, the Algebra Project initiated Quality Education as a Civil Right (QECR), a national organizing effort to establish a federal constitutional guarantee of quality public education for all. Throughout 2005, YPP worked with students from Baltimore, New Orleans, Los Angeles, Oakland, Miami, Jackson, Chicago and Virginia to raise awareness about QECR. The Algebra Project and YPP students from Jackson and New Orleans hosted conferences, organized a Spring Break Community Education Tour to Miami, and participated in QECR planning meetings at Howard University, the University of Michigan, and Jackson State University. References External links Algebra Project website Website of The Young People's Projec
https://en.wikipedia.org/wiki/Mike%20Wead
Mike Wead (born Mickael Vikström on 6 April 1967) is a Swedish guitarist who lives in Stockholm. Wead contributed to heavy metal bands such as Hexenhaus, Memento Mori, Abstrakt Algebra, The Haunted, Edge of Sanity, Candlemass, and The Project Hate. Currently Wead is the guitarist of Mercyful Fate, King Diamond, and bibleblack. Discography With Mercyful Fate Dead Again (1998) 9 (1999) With King Diamond Abigail II: The Revenge (2002) The Puppet Master (2003) Deadly Lullabyes Live (2004) Give Me Your Soul... Please (2007) Songs for the Dead Live (2019) The Institute (TBA) With Hexenhaus A Tribute to Insanity (1988) At the Edge of Eternity (1990) Awakening (1991) Dejavoodoo (1997) With Memento Mori Rhymes of Lunacy (1993) Life, Death, and Other Morbid Tales (1994) La Danse Macabre (1996) Songs for the Apocalypse Vol IV (1997) With Abstrakt Algebra Abstrakt Algebra (1995) With Bibleblack The Black Swan Epilogue (2009) With Escape The Cult All You Want To (2014) Selected Guest Appearances With Candlemass Nightfall (1987) As It Is, As It Was (1994) Leif Edling The Black Heart of Candlemass (2002) With Memory Garden Verdict of Posterity (1998) With Edge of Sanity Crimson II (2003) With Notre Dame Demi Monde Bizarros (2004) With In Aeternum Dawn of the New Aeon (2005) With Elvenking The Scythe (2007) With Her Whisper The Great Unifier (2008) With Sinners Paradise The Awakening (2009) With Kamlath Stronger Than Frost (2010) With The Project Hate MCMXCIX The Lustrate Process (2009) Bleeding The New Apocalypse (Cum Victriciis in Manibus Armis) (2011) With Deadlands Evilution (2012) With Pharaoh Bury the Light (2012) With Snowy Shaw Snowy Shaw is Alive! (2012) Nachtgeist (2016) With Minions Soul Mirror (2013) With Entombed When in Sodom (2012) With Zoromr Corpus Hermeticum (2015) With Devilish Impressions The I (2017) With Pigface Beauty Love & Hate (2017) References Living people Lead guitarists King Diamond (band) members Mercyful Fate members Swedish heavy metal guitarists 1967 births Abstrakt Algebra members Memento Mori (band) members
https://en.wikipedia.org/wiki/Cauchy%20surface
In the mathematical field of Lorentzian geometry, a Cauchy surface is a certain kind of submanifold of a Lorentzian manifold. In the application of Lorentzian geometry to the physics of general relativity, a Cauchy surface is usually interpreted as defining an "instant of time"; in the mathematics of general relativity, Cauchy surfaces are important in the formulation of the Einstein equations as an evolutionary problem. They are named for French mathematician Augustin-Louis Cauchy (1789–1857) due to their relevance for the Cauchy problem of general relativity. Informal introduction Although it is usually phrased in terms of general relativity, the formal notion of a Cauchy surface can be understood in familiar terms. Suppose that humans can travel at a maximum speed of 20 miles per hour. This places constraints, for any given person, upon where they can reach by a certain time. For instance, it is impossible for a person who is in Mexico at 3 o'clock to arrive in Libya by 4 o'clock; however it is possible for a person who is in Manhattan at 1 o'clock to reach Brooklyn by 2 o'clock, since the locations are ten miles apart. So as to speak semi-formally, ignore time zones and travel difficulties, and suppose that travelers are immortal beings who have lived forever. The system of all possible ways to fill in the four blanks in "A person at (location 1) at (time 1) can reach (location 2) by (time 2)" defines the notion of a causal structure. A Cauchy surface for this causal structure is a collection of pairs of locations and times such that, for any hypothetical traveler whatsoever, there is exactly one location and time pair in the collection for which the traveler was at the indicated location at the indicated time. There are a number of uninteresting Cauchy surfaces. For instance, one Cauchy surface for this causal structure is given by considering the pairing of every location with the time of 1 o'clock (on a certain specified day), since any hypothetical traveler must have been at one specific location at this time; furthermore, no traveler can be at multiple locations at this time. By contrast, there cannot be any Cauchy surface for this causal structure that contains both the pair (Manhattan, 1 o'clock) and (Brooklyn, 2 o'clock) since there are hypothetical travelers that could have been in Manhattan at 1 o'clock and Brooklyn at 2 o'clock. There are, also, some more interesting Cauchy surfaces which are harder to describe verbally. One could define a function τ from the collection of all locations into the collection of all times, such that the gradient of τ is everywhere less than 1/20 hours per mile. Then another example of a Cauchy surface is given by the collection of pairs (p, τ(p)) over all locations p. The point is that, for any hypothetical traveler, there must be some location p which the traveler was at, at time τ(p); this follows from the intermediate value theorem. Furthermore, it is impossible that there are two locations p and q and that there is some traveler who is at p at time τ(p) and at q at time τ(q), since by the mean value theorem th
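The intermediate value theorem argument above can be checked numerically. The sketch below is purely illustrative and not from the article: the slowness function tau and the traveler trajectory x(t) are hypothetical choices satisfying the stated constraints (|tau'| < 1/20 hour per mile, speed at most 20 mph), so the crossing function g(t) = t − tau(x(t)) is strictly increasing and has exactly one root, i.e. the traveler meets the surface {(p, tau(p))} exactly once.

```python
# Numerical illustration of the Cauchy-surface crossing argument
# (hypothetical tau and trajectory; units: hours and miles).
import math

def tau(p):
    # assigned time for location p; |tau'| = 1/40 < 1/20 hour per mile
    return p / 40.0

def x(t):
    # traveler's location at time t; speed |x'(t)| = |5 cos t + 14| <= 19 mph
    return 5.0 * math.sin(t) + 14.0 * t

def g(t):
    # strictly increasing: g'(t) = 1 - tau'(x(t)) * x'(t) >= 1 - 19/40 > 0
    return t - tau(x(t))

# Bisection finds the unique root of g, i.e. the unique surface crossing.
lo, hi = -10.0, 10.0
assert g(lo) < 0.0 < g(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)
print(f"crossing at time {t_star:.6f} h, location {x(t_star):.6f} mi")
```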
https://en.wikipedia.org/wiki/Uniform%20polyhedron
In geometry, a uniform polyhedron has regular polygons as faces and is vertex-transitive (i.e., there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent. Uniform polyhedra may be regular (if also face- and edge-transitive), quasi-regular (if also edge-transitive but not face-transitive), or semi-regular (if neither edge- nor face-transitive). The faces and vertices need not be convex, so many of the uniform polyhedra are also star polyhedra. There are two infinite classes of uniform polyhedra, together with 75 other polyhedra: Infinite classes: prisms, antiprisms. Convex exceptional: 5 Platonic solids: regular convex polyhedra, 13 Archimedean solids: 2 quasiregular and 11 semiregular convex polyhedra. Star (nonconvex) exceptional: 4 Kepler–Poinsot polyhedra: regular nonconvex polyhedra, 53 uniform star polyhedra: 14 quasiregular and 39 semiregular. Hence 5 + 13 + 4 + 53 = 75. There are also many degenerate uniform polyhedra with pairs of edges that coincide, including one found by John Skilling called the great disnub dirhombidodecahedron (Skilling's figure). Dual polyhedra to uniform polyhedra are face-transitive (isohedral) and have regular vertex figures, and are generally classified in parallel with their dual (uniform) polyhedron. The dual of a regular polyhedron is regular, while the dual of an Archimedean solid is a Catalan solid. The concept of uniform polyhedron is a special case of the concept of uniform polytope, which also applies to shapes in higher-dimensional (or lower-dimensional) space. Definition define uniform polyhedra to be vertex-transitive polyhedra with regular faces. They define a polyhedron to be a finite set of polygons such that each side of a polygon is a side of just one other polygon, such that no non-empty proper subset of the polygons has the same property. By a polygon they implicitly mean a polygon in 3-dimensional Euclidean space; these are allowed to be non-convex and to intersect each other. There are some generalizations of the concept of a uniform polyhedron. If the connectedness assumption is dropped, then we get uniform compounds, which can be split as a union of polyhedra, such as the compound of 5 cubes. If we drop the condition that the realization of the polyhedron is non-degenerate, then we get the so-called degenerate uniform polyhedra. These require a more general definition of polyhedra. gave a rather complicated definition of a polyhedron, while gave a simpler and more general definition of a polyhedron: in their terminology, a polyhedron is a 2-dimensional abstract polytope with a non-degenerate 3-dimensional realization. Here an abstract polytope is a poset of its "faces" satisfying various conditions, a realization is a function from its vertices to some space, and the realization is called non-degenerate if any two distinct faces of the abstract polytope have distinct realizations. Some of the ways they can be degenerate are as fol
https://en.wikipedia.org/wiki/Consumer%20Expenditure%20Survey
The Consumer Expenditure Survey (CE or CEX) is a Bureau of Labor Statistics (BLS) household survey that collects information on the buying habits of U.S. consumers. The program consists of two components — the Interview Survey and the Diary Survey — each with its own sample. The surveys collect data on expenditures, income, and consumer unit characteristics. In May 2020, the American Association for Public Opinion Research recognized the CE program with its 2020 Policy Impact Award, for joint work by the BLS — including CE and the Division of Price and Index Number Research — and the Census Bureau on the Supplemental Poverty thresholds and measure, and the essential contributions these data products have made to the understanding, discussion, and advancement of public policy related to the alleviation of poverty in the United States. Interview Survey For the Interview Survey, each consumer unit is interviewed once per quarter, for four consecutive quarters. This survey is designed to capture large purchases, such as spending on rent, property, and vehicles, and expenses that occur on a regular basis, such as rent or utilities. Since April 2003, data have been collected using a Computer Assisted Personal Interview (CAPI). Prior to that, interviews were administered using paper and pencil. An example of the most recent CAPI instrument is available on the Consumer Expenditure Survey website. Diary Survey The Diary Survey is self-administered, and each consumer unit keeps a diary for two one-week periods. This survey is meant to capture small, frequently purchased items and allows respondents to record all purchases such as spending for food and beverages, tobacco, personal care products, and nonprescription drugs and supplies. The most recent Diary Survey form is available on the Consumer Expenditure Survey website. Consumer Unit A consumer unit consists of any of the following: (1) All members of a particular household who are related by blood, marriage, adoption, or other legal arrangements; (2) a person living alone or sharing a household with others or living as a roomer in a private home or lodging house or in permanent living quarters in a hotel or motel, but who is financially independent; or (3) two or more persons living together who use their incomes to make joint expenditure decisions. Financial independence is determined by spending behavior with regard to the three major expense categories: housing, food, and other living expenses. To be considered financially independent, the respondent must provide at least two of the three major expenditure categories, either entirely or in part. The terms consumer unit, family, and household are often used interchangeably for convenience. However, the proper technical term for purposes of the Consumer Expenditure Survey is consumer unit. Integrated Results Data from the Interview Survey and the Diary Survey are combined to provide a complete account of expenditures and income. In some c
https://en.wikipedia.org/wiki/PL/Perl
PL/Perl (Procedural Language/Perl) is a procedural language supported by the PostgreSQL RDBMS. PL/Perl, as an imperative programming language, allows more control than the relational algebra of SQL. Programs created in the PL/Perl language are called functions and can use most of the features that the Perl programming language provides, including common flow control structures and syntax that incorporates regular expressions directly. These functions can be evaluated as part of a SQL statement, or in response to a trigger or rule. The design goals of PL/Perl were to create a loadable procedural language that: can be used to create functions and trigger procedures, adds control structures to the SQL language, can perform complex computations, can be defined to be either trusted or untrusted by the server, is easy to use. PL/Perl is one of many "PL" languages available for PostgreSQL, alongside PL/pgSQL, PL/Java, plPHP, PL/Python, PL/R, PL/Ruby, PL/sh, and PL/Tcl. References PostgreSQL PL/Perl documentation Data management PostgreSQL Data-centric programming languages
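For illustration, here is a minimal PL/Perl function of the kind described above, in the style of the PostgreSQL manual's canonical example; it assumes the plperl language has been installed in the database (for instance via CREATE EXTENSION plperl).

```sql
-- A minimal PL/Perl function: the body between the dollar-quotes is
-- ordinary Perl, receiving its arguments in @_ (assumes plperl is installed).
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;

-- The function can then be evaluated as part of a SQL statement:
SELECT perl_max(12, 8);  -- returns 12
```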
https://en.wikipedia.org/wiki/Oseledets%20theorem
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem, provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier. Cocycles The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence. A cocycle of an autonomous dynamical system X is a map C : X×T → Rn×n satisfying C(x, 0) = In and C(x, t + s) = C(x(t), s) C(x, t), where X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, and In is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the phase space X. Examples A prominent example of a cocycle is given by the matrix Jt in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X. For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle. Statement of the theorem Let μ be an ergodic invariant measure on X and C a cocycle of the dynamical system such that for each t ∈ T, the maps x ↦ log ‖C(x, t)‖ and x ↦ log ‖C(x, t)^−1‖ are L1-integrable with respect to μ. Then for μ-almost all x and each non-zero vector u ∈ Rn the limit λ(u) = lim_{t→∞} (1/t) log(‖C(x, t)u‖ / ‖u‖) exists and assumes, depending on u but not on x, up to n different values. These are the Lyapunov exponents. Further, if λ1 > ... > λm are the different limits then there are subspaces Rn = R1 ⊃ ... ⊃ Rm ⊃ Rm+1 = {0}, depending on x, such that the limit is λi for u ∈ Ri \ Ri+1 and i = 1, ..., m. The values of the Lyapunov exponents are invariant with respect to a wide range of coordinate transformations. Suppose that g : X → X is a one-to-one map such that its derivative ∂g/∂x and its inverse exist; then the values of the Lyapunov exponents do not change. Additive versus multiplicative ergodic theorems Verbally, ergodicity means that time and space averages are equal, formally: lim_{T→∞} (1/T) ∫₀^T f(x(s)) ds = ∫_X f(x) μ(dx), where the integrals and the limit exist. Space average (right hand side, μ is an ergodic measure on X) is the accumulation of f(x) values weighted by μ(dx). Since addition is commutative, the accumulation of the f(x)μ(dx) values may be done in arbitrary order. In contrast, the time average (left hand side) suggests a specific ordering of the f(x(s)) values along the trajectory. Since matrix multiplication is, in general, not commutative, accumulation of multiplied cocycle values (and limits thereof) according to C(x(t0),tk) = C(x(tk−1),tk − tk−1) ... C(x(t0),t1 − t0) — fo
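The ordered, non-commutative accumulation of cocycle values described above is exactly what numerical estimates of the top Lyapunov exponent implement. A minimal sketch, using a hypothetical cocycle of i.i.d. random matrices (an invented example, not from the article) and periodic renormalization of the vector to avoid floating-point overflow:

```python
# Estimate the top Lyapunov exponent of a random matrix cocycle
# C(x, k) = A_{k-1} ... A_1 A_0 via the limit (1/k) log ||C(x, k) u||.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 3, 100_000
u = rng.standard_normal(n)
log_growth = 0.0
for _ in range(steps):
    A = rng.standard_normal((n, n)) / np.sqrt(n)  # i.i.d. cocycle increments
    u = A @ u
    norm = np.linalg.norm(u)
    log_growth += np.log(norm)
    u /= norm                                     # renormalize each step
print("estimated top Lyapunov exponent:", log_growth / steps)
```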
https://en.wikipedia.org/wiki/225%20%28number%29
225 (two hundred [and] twenty-five) is the natural number following 224 and preceding 226. In mathematics 225 is the smallest number that is a polygonal number in five different ways. It is a square number (225 = 15²), an octagonal number, and a squared triangular number (225 = (1 + 2 + 3 + 4 + 5)² = 1³ + 2³ + 3³ + 4³ + 5³). As the square of a double factorial, 225 = (5!!)² counts the number of permutations of six items in which all cycles have even length, or the number of permutations in which all cycles have odd length. And as one of the Stirling numbers of the first kind, it counts the number of permutations of six items with exactly three cycles. 225 is a highly composite odd number, meaning that it has more divisors than any smaller odd number. After 1 and 9, 225 is the third smallest number n for which , where σ is the sum of divisors function and φ is Euler's totient function. 225 is a refactorable number. 225 is the smallest square number to have one of every digit in some number base (225 is 3201 in base 4). 225 is the first odd number with exactly 9 divisors. References Integers
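Several of the divisor facts quoted above are small enough to confirm by brute force; a plain-Python sketch, with no assumptions beyond the claims themselves:

```python
# Brute-force checks of the divisor facts about 225 stated above.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# 225 = 3^2 * 5^2 has (2+1)(2+1) = 9 divisors, and 9 divides 225 (refactorable).
assert len(divisors(225)) == 9 and 225 % 9 == 0

# First odd number with exactly 9 divisors:
print(next(m for m in range(1, 10_000, 2) if len(divisors(m)) == 9))  # 225

# Highly composite odd number: more divisors than every smaller odd number.
assert all(len(divisors(m)) < 9 for m in range(1, 225, 2))
```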
https://en.wikipedia.org/wiki/Predictive%20modelling
Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place. In many cases, the model is chosen on the basis of detection theory to try to guess the probability of an outcome given a set amount of input data; for example, given an email, determining how likely it is to be spam. Models can use one or more classifiers in trying to determine the probability of a set of data belonging to another set. For example, a model might be used to determine whether an email is spam or "ham" (non-spam). Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics. Predictive modelling is often contrasted with causal modelling/analysis. In the former, one may be entirely satisfied to make use of indicators of, or proxies for, the outcome of interest. In the latter, one seeks to determine true cause-and-effect relationships. This distinction has given rise to a burgeoning literature in the fields of research methods and statistics and to the common statement that "correlation does not imply causation". Models Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and distributional form [than parametric models] but usually contain strong assumptions about independencies". Applications Uplift modelling Uplift modelling is a technique for modelling the change in probability caused by an action. Typically this is a marketing action such as an offer to buy a product, to use a product more or to re-sign a contract. For example, in a retention campaign you wish to predict the change in probability that a customer will remain a customer if they are contacted. A model of the change in probability allows the retention campaign to be targeted at those customers on whom the change in probability will be beneficial. This allows the retention programme to avoid triggering unnecessary churn or customer attrition without wasting money contacting people who would act anyway. Archaeology Predictive modelling in archaeology gets its foundations from Gordon Willey's mid-fifties work in the Virú Valley of Peru. Complete, intensive surveys were performed then covari
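The spam/ham example above can be made concrete with any off-the-shelf classifier. A toy sketch, assuming scikit-learn is available; the six example emails and their labels are invented for illustration only:

```python
# A minimal spam/ham predictive model: bag-of-words features fed to a
# logistic regression classifier (toy data, scikit-learn assumed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = ["win money now", "meeting at noon", "cheap money win",
          "lunch tomorrow?", "win a free prize now", "project status meeting"]
labels = [1, 0, 1, 0, 1, 0]              # 1 = spam, 0 = ham

vec = CountVectorizer().fit(emails)      # word-count features
model = LogisticRegression().fit(vec.transform(emails), labels)

# Predicted probability that a new email is spam:
print(model.predict_proba(vec.transform(["free money prize"]))[0, 1])
```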
https://en.wikipedia.org/wiki/Tourism%20in%20Laos
Tourism in Laos is governed by a ministry-level government agency, the Lao National Tourism Administration (LNTA). Statistics Annual statistics [table omitted; footnotes: 1. COVID-19 pandemic; 2. SARS epidemic; 3. September 11 attacks] International visitor arrivals [table omitted; ∗ASEAN nation] See also Visa policy of Laos References External links Laos Cultural Profile (Ministry of Information and Culture/Visiting Arts) The official Laos Tourism Authority site Laos virtual tour Laos Tourism Video Laos
https://en.wikipedia.org/wiki/Two%20envelopes%20problem
The two envelopes problem, also known as the exchange paradox, is a paradox in probability theory. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge like the following example: Since the situation is symmetric, it seems obvious that there is no point in switching envelopes. On the other hand, a simple calculation using expected values suggests the opposite conclusion, that it is always beneficial to swap envelopes, since the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have. Introduction Problem A person is given two indistinguishable envelopes, each of which contains a sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead. The switching argument Now suppose the person reasons as follows: The puzzle The puzzle is to find the flaw in the line of reasoning in the switching argument. This includes determining exactly why and under what conditions that step is not correct, to be sure not to make this mistake in a situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. The puzzle is not solved by finding another way to calculate the probabilities that does not lead to a contradiction. Multiplicity of proposed solutions There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in voluminous literature on the subject. No proposed solution is widely accepted as definitive. Despite this, it is common for authors to claim that the solution to the problem is easy, even elementary. Upon investigating these elementary solutions, however, they often differ from one author to the next. Example resolution Suppose that the total amount in both envelopes is a constant c = 3x, with x in one envelope and 2x in the other. If you select the envelope with x first you gain the amount x by swapping. If you select the envelope with 2x first you lose the amount x by swapping. So you gain on average G = ½·x + ½·(−x) = 0 by swapping. So on this supposition that the total amount is fixed, swapping is not better than keeping. The expected value E = ½·x + ½·2x = 3x/2 is the same for both the envelopes. Thus no contradiction exists. The famous mystification is evoked by confusing the situation where the total amount in the two envelopes is fixed with the situation where the amount in one envelope is fixed and the other can be either double or half that amount. The so-called par
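The contrast at the heart of the resolution above can be written out explicitly. A LaTeX sketch, using A for the amount observed in the chosen envelope and x, 2x for the fixed envelope contents as in the resolution:

```latex
% Fallacious step: condition on the amount A in the chosen envelope and
% treat the other envelope as holding 2A or A/2 with probability 1/2 each:
\[
  E[\text{other envelope}]
    = \tfrac{1}{2}\,(2A) + \tfrac{1}{2}\!\left(\tfrac{A}{2}\right)
    = \tfrac{5}{4}A > A .
\]
% Fixed-total computation: with x and 2x in the envelopes (total c = 3x),
% the expected gain from swapping is
\[
  G = \tfrac{1}{2}\,(+x) + \tfrac{1}{2}\,(-x) = 0 ,
\]
% so swapping confers no advantage.
```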
https://en.wikipedia.org/wiki/Equidistribution%20theorem
In mathematics, the equidistribution theorem is the statement that the sequence a, 2a, 3a, ... mod 1 is uniformly distributed on the circle R/Z, when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure μ = dθ/2π. History While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day. In 1916, Weyl proved that the sequence a, 2²a, 3²a, ... mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence p_n a mod 1 is uniformly distributed, where p_n is the nth prime. Vinogradov's proof was a byproduct of the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes. George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988. Specifically, Khinchin showed that the identity lim_{n→∞} (1/n) Σ_{k=1}^{n} f((x + ka) mod 1) = ∫₀¹ f(y) dy holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the identity might hold, given some general sequence b_k. One noteworthy result is that the sequence 2^k a mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence b_k = 2^k a, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges. In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to b_k = k, which is termed a universally good averaging sequence, because it does not have the latter shortcoming. A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponents. For the case of multiples of a, Weyl's criterion reduces the problem to summing finite geometric series. See also Diophantine approximation Low-discrepancy sequence Dirichlet's approximation theorem Three-gap theorem References Historical references P. Bohl, (1909) Über ein in der Theorie der säkularen Störungen vorkommendes Problem, J. reine angew. Math. 135, pp. 189–283. W. Sierpinski, (1910) Sur la valeur asymptotique d'une certaine somme, Bull Intl. Acad. Polonaise des Sci. et des Lettres (Cracovie) series A, pp. 9–11. Modern references Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge, . (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit inter
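Weyl's criterion mentioned above suggests a direct numerical check: for irrational a, the normalized exponential sums with nonzero integer frequencies should tend to zero. A sketch (a = √2 is an arbitrary choice of irrational):

```python
# Numerical check of equidistribution via Weyl's criterion: for irrational a,
# (1/N) * sum_{n<=N} exp(2*pi*i*m*n*a) should be small for each nonzero m.
import cmath
import math

a = math.sqrt(2)          # an irrational rotation number
N = 100_000
for m in (1, 2, 3):
    s = sum(cmath.exp(2j * math.pi * m * n * a) for n in range(1, N + 1)) / N
    print(f"m={m}: |S_N| = {abs(s):.6f}")   # small, consistent with equidistribution
```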
https://en.wikipedia.org/wiki/Cross-sectional%20regression
In statistics and econometrics, a cross-sectional regression is a type of regression in which the explained and explanatory variables are all associated with the same single period or point in time. This type of cross-sectional analysis is in contrast to a time-series regression or longitudinal regression in which the variables are considered to be associated with a sequence of points in time. For example, in economics a regression to explain and predict money demand (how much people choose to hold in the form of the most liquid assets) could be conducted with either cross-sectional or time series data. A cross-sectional regression would have as each data point an observation on a particular individual's money holdings, income, and perhaps other variables at a single point in time, and different data points would reflect different individuals at the same point in time. In contrast, a regression using time series would have as each data point an entire economy's money holdings, income, etc. at one point in time, and different data points would be drawn on the same economy but at different points in time. See also Linear regression Regression analysis References Preprint External links A Review of Cross Sectional Regression for Financial Data Lecture notes by Gary Koop, Department of Economics, University of Strathclyde Regression analysis Cross-sectional analysis
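A minimal sketch of the money-demand example above, with synthetic data standing in for a cross-section of individuals observed at a single point in time; numpy's least-squares routine stands in for an econometrics package, and the coefficient 0.15 is an invented "true" value:

```python
# Cross-sectional OLS: each row is one individual's (income, money holdings)
# observed at the same point in time (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 500
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)      # individual incomes
money = 0.15 * income + rng.normal(0.0, 500.0, size=n)    # money holdings

X = np.column_stack([np.ones(n), income])                 # intercept + income
beta, *_ = np.linalg.lstsq(X, money, rcond=None)
print("intercept, income coefficient:", beta)             # slope near 0.15
```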
https://en.wikipedia.org/wiki/Krylov%20subspace
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is, K_r(A, b) = span{b, Ab, A^2 b, ..., A^(r−1) b}. Background The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931. Properties K_r(A, b) ⊆ K_(r+1)(A, b), and A K_r(A, b) ⊆ K_(r+1)(A, b). Let r₀ = dim span{b, Ab, A^2 b, ...}. Then the vectors b, Ab, ..., A^(r−1) b are linearly independent unless r > r₀, K_r(A, b) = K_(r₀)(A, b) for all r ≥ r₀, and dim K_(r₀)(A, b) = r₀. So r₀ is the maximal dimension of the Krylov subspaces K_r(A, b). The maximal dimension satisfies r₀ ≤ 1 + rank A and r₀ ≤ n. Consider deg p(A), where p(A) is the minimal polynomial of A. We have r₀ ≤ deg p(A). Moreover, for any A, there exists a b for which this bound is tight, i.e. r₀ = deg p(A). K_(r₀)(A, b) is a cyclic submodule generated by b of the torsion k[x]-module k^n (with x acting as A), where k^n is the linear space on which A acts. k^n can be decomposed as the direct sum of Krylov subspaces. Use Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. Many linear dynamical system tests in control theory, especially those related to controllability and observability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace. Modern iterative methods such as Arnoldi iteration can be used for finding one (or a few) eigenvalues of large sparse matrices or solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then one multiplies that vector by A to find A^2 b and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without there being an explicit representation of A, giving rise to matrix-free methods. Issues Because the vectors usually soon become almost linearly dependent due to the properties of power iteration, methods relying on Krylov subspace frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices. Existing methods The best known Krylov subspace methods are the Conjugate gradient, IDR(s) (Induced dimension reduction), GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi minimal residual), TFQMR (transpose-free QMR) and MINRES (minimal residual method). See also Iterative method, which has a section on Krylov subspace methods References Further reading Gerard Meurant and Jurjen Duintjer Tebbens: ”Krylov methods for nonsymmetric linear systems - From theory to computations”, Springer Series in Computational Mathematics, vol.57, (Oct. 2020). , url=https
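The matrix-free access pattern described above (repeatedly multiplying vectors by A and orthogonalizing the results) can be sketched in a few lines. This is an Arnoldi-style Gram-Schmidt construction of an orthonormal basis of K_r(A, b); the dense random test matrix is an arbitrary stand-in for whatever operator supplies the matrix-vector products:

```python
# Build an orthonormal basis of the Krylov subspace K_r(A, b) using only
# matrix-vector products, with Gram-Schmidt orthogonalization.
import numpy as np

def krylov_basis(matvec, b, r):
    """Columns of Q span K_r(A, b); matvec(v) must return A @ v."""
    n = b.shape[0]
    Q = np.zeros((n, r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, r):
        w = matvec(Q[:, j - 1])
        for i in range(j):                 # orthogonalize against earlier vectors
            w -= (Q[:, i] @ w) * Q[:, i]
        nw = np.linalg.norm(w)
        if nw < 1e-12:                     # invariant subspace reached early
            return Q[:, :j]
        Q[:, j] = w / nw
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
Q = krylov_basis(lambda v: A @ v, b, 10)
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))   # True: columns orthonormal
```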
https://en.wikipedia.org/wiki/Michio%20Suzuki%20%28mathematician%29
Michio Suzuki (1926–1998) was a Japanese mathematician who studied group theory. Biography He was a professor at the University of Illinois at Urbana–Champaign from 1953 until his death. He also had visiting positions at the University of Chicago (1960–61), the Institute for Advanced Study (1962–63, 1968–69, spring 1981), the University of Tokyo (spring 1971), and the University of Padua (1994). Suzuki received his Ph.D. in 1952 from the University of Tokyo, despite having moved to the United States the previous year. He was the first to attack the Burnside conjecture, that every finite non-abelian simple group has even order. A notable achievement was his discovery in 1960 of the Suzuki groups, an infinite family of the only non-abelian simple groups whose order is not divisible by 3. The smallest, of order 29120, was the first simple group of order less than 1 million to be discovered since Dickson's list of 1900. He classified several classes of simple groups of small rank, including the CIT-groups, C-groups, and CA-groups. There is also a sporadic simple group called the Suzuki group, which he announced in 1968. The Tits ovoid is also referred to as the Suzuki ovoid. He wrote several textbooks in Japanese. See also Baer–Suzuki theorem Bender–Suzuki theorem Brauer–Suzuki theorem Brauer–Suzuki–Wall theorem Publications References M. Aschbacher, H. Bender, W. Feit, R. Solomon, Michio Suzuki (1926–1998), Notices Amer. Math. Soc. 46 (1999), no. 5, 543–551. External links 20th-century Japanese mathematicians 20th-century American mathematicians 1926 births 1998 deaths Group theorists University of Illinois Urbana-Champaign faculty
https://en.wikipedia.org/wiki/Orientation%20%28geometry%29
In geometry, the orientation, attitude, bearing, direction, or angular position of an object – such as a line, plane or rigid body – is part of the description of how it is placed in the space it occupies. More specifically, it refers to the imaginary rotation that is needed to move the object from a reference placement to its current placement. A rotation may not be enough to reach the current placement, in which case it may be necessary to add an imaginary translation to change the object's position (or linear position). The position and orientation together fully describe how the object is placed in space. The above-mentioned imaginary rotation and translation may be thought to occur in any order, as the orientation of an object does not change when it translates, and its position does not change when it rotates. Euler's rotation theorem shows that in three dimensions any orientation can be reached with a single rotation around a fixed axis. This gives one common way of representing the orientation using an axis–angle representation. Other widely used methods include rotation quaternions, rotors, Euler angles, or rotation matrices. More specialist uses include Miller indices in crystallography, strike and dip in geology and grade on maps and signs. A unit vector may also be used to represent an object's normal vector orientation or the relative direction between two points. Typically, the orientation is given relative to a frame of reference, usually specified by a Cartesian coordinate system. Two objects sharing the same direction are said to be codirectional (as in parallel lines). Two directions are said to be opposite if they are the additive inverse of one another, as in an arbitrary unit vector and its multiplication by −1. Two directions are obtuse if they form an obtuse angle (greater than a right angle) or, equivalently, if their scalar product or scalar projection is negative. Mathematical representations Three dimensions In general the position and orientation in space of a rigid body are defined as the position and orientation, relative to the main reference frame, of another reference frame, which is fixed relative to the body, and hence translates and rotates with it (the body's local reference frame, or local coordinate system). At least three independent values are needed to describe the orientation of this local frame. Three other values describe the position of a point on the object. All the points of the body change their position during a rotation except for those lying on the rotation axis. If the rigid body has rotational symmetry not all orientations are distinguishable, except by observing how the orientation evolves in time from a known starting orientation. For example, the orientation in space of a line, line segment, or vector can be specified with only two values, for example two direction cosines. Another example is the position of a point on the Earth, often described using the orientation of a line joining i
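Of the representations listed above, the axis-angle form converts to a rotation matrix via Rodrigues' formula, R = I + sin(θ)K + (1 − cos(θ))K², where K is the cross-product matrix of the unit rotation axis. A small sketch:

```python
# Convert an axis-angle orientation to a 3x3 rotation matrix
# using Rodrigues' formula.
import numpy as np

def axis_angle_to_matrix(axis, angle):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)           # ensure a unit axis
    kx, ky, kz = axis
    K = np.array([[0.0, -kz,  ky],         # cross-product matrix of the axis
                  [ kz, 0.0, -kx],
                  [-ky,  kx, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

R = axis_angle_to_matrix([0.0, 0.0, 1.0], np.pi / 2)  # quarter turn about z
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))     # x-axis maps to y-axis
```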
https://en.wikipedia.org/wiki/Robinson%20arithmetic
In mathematics, Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by Raphael M. Robinson in 1950. It is usually denoted Q. Q is almost PA without the axiom schema of mathematical induction. Q is weaker than PA but it has the same language, and both theories are incomplete. Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable. Axioms The background logic of Q is first-order logic with identity, denoted by infix '='. The individuals, called natural numbers, are members of a set called N with a distinguished member 0, called zero. There are three operations over N: A unary operation called successor and denoted by prefix S; Two binary operations, addition and multiplication, denoted by infix + and ·, respectively. The following axioms for Q are Q1–Q7 in (cf. also the axioms of first-order arithmetic). Variables not bound by an existential quantifier are bound by an implicit universal quantifier. Sx ≠ 0 0 is not the successor of any number. (Sx = Sy) → x = y If the successor of x is identical to the successor of y, then x and y are identical. (1) and (2) yield the minimum of facts about N (it is an infinite set bounded by 0) and S (it is an injective function whose domain is N) needed for non-triviality. The converse of (2) follows from the properties of identity. y=0 ∨ ∃x (Sx = y) Every number is either 0 or the successor of some number. The axiom schema of mathematical induction present in arithmetics stronger than Q turns this axiom into a theorem. x + 0 = x x + Sy = S(x + y) (4) and (5) are the recursive definition of addition. x·0 = 0 x·Sy = (x·y) + x (6) and (7) are the recursive definition of multiplication. Variant axiomatizations The axioms in are (1)–(13) in . The first 6 of Robinson's 13 axioms are required only when, unlike here, the background logic does not include identity. The usual strict total order on N, "less than" (denoted by "<"), can be defined in terms of addition via the rule x < y ↔ ∃z (Sz + x = y). Equivalently, we get a definitional conservative extension of Q by taking "<" as primitive and adding this rule as an eighth axiom; this system is termed "Robinson arithmetic R" in . A different extension of Q, which we temporarily call Q+, is obtained if we take "<" as primitive and add (instead of the last definitional axiom) the following three axioms to axioms (1)–(7) of Q: ¬(x < 0) x < Sy ↔ (x < y ∨ x = y) x < y ∨ x = y ∨ y < x Q+ is still a conservative extension of Q, in the sense that any formula provable in Q+ not containing the symbol "<" is already provable in Q. (Adding only the first two of the above three axioms to Q gives a conservative extension of Q that is equivalent to what calls Q*. See also , but note that the second of the above three axioms cannot be deduced from "the pure definitional extension" of Q obtained by adding only the axiom x < y ↔ ∃z (Sz + x = y).) Among the axioms
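Axioms (4)-(7) are recursive definitions, and they can be mirrored directly as recursive programs over the successor function. A small sketch, with ordinary Python integers standing in for the numerals of Q:

```python
# Addition and multiplication defined by recursion on the successor,
# mirroring axioms (4)-(7) of Robinson arithmetic.
def S(x):                      # successor
    return x + 1

def add(x, y):                 # x + 0 = x ;  x + Sy = S(x + y)
    return x if y == 0 else S(add(x, y - 1))

def mul(x, y):                 # x * 0 = 0 ;  x * Sy = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

assert add(3, 4) == 7 and mul(3, 4) == 12
```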
https://en.wikipedia.org/wiki/Systolic%20geometry
In mathematics, systolic geometry is the study of systolic invariants of manifolds and polyhedra, as initially conceived by Charles Loewner and developed by Mikhail Gromov, Michael Freedman, Peter Sarnak, Mikhail Katz, Larry Guth, and others, in its arithmetical, ergodic, and topological manifestations. See also a slower-paced Introduction to systolic geometry. The notion of systole The systole of a compact metric space X is a metric invariant of X, defined to be the least length of a noncontractible loop in X (i.e. a loop that cannot be contracted to a point in the ambient space X). In more technical language, we minimize length over free loops representing nontrivial conjugacy classes in the fundamental group of X. When X is a graph, the invariant is usually referred to as the girth, ever since the 1947 article on girth by W. T. Tutte. Possibly inspired by Tutte's article, Loewner started thinking about systolic questions on surfaces in the late 1940s, resulting in a 1950 thesis by his student Pao Ming Pu. The actual term "systole" itself was not coined until a quarter century later, by Marcel Berger. This line of research was, apparently, given further impetus by a remark of René Thom, in a conversation with Berger in the library of Strasbourg University during the 1961–62 academic year, shortly after the publication of the papers of R. Accola and C. Blatter. Referring to these systolic inequalities, Thom reportedly exclaimed: Mais c'est fondamental! [These results are of fundamental importance!] Subsequently, Berger popularized the subject in a series of articles and books, most recently in the March 2008 issue of the Notices of the American Mathematical Society (see reference below). A bibliography at the Website for systolic geometry and topology currently contains over 160 articles. Systolic geometry is a rapidly developing field, featuring a number of recent publications in leading journals. Recently (see the 2006 paper by Katz and Rudyak below), the link with the Lusternik–Schnirelmann category has emerged. The existence of such a link can be thought of as a theorem in systolic topology. Property of a centrally symmetric polyhedron in 3-space Every convex centrally symmetric polyhedron P in R3 admits a pair of opposite (antipodal) points and a path of length L joining them and lying on the boundary ∂P of P, satisfying L ≤ √(πA)/2, where A is the area of ∂P. An alternative formulation is as follows. Any centrally symmetric convex body of surface area A can be squeezed through a noose of length √(πA), with the tightest fit achieved by a sphere. This property is equivalent to a special case of Pu's inequality (see below), one of the earliest systolic inequalities. Concepts To give a preliminary idea of the flavor of the field, one could make the following observations. The main thrust of Thom's remark to Berger quoted above appears to be the following. Whenever one encounters an inequality relating geometric invariants, such a phenomenon in itself is interesting; all th
https://en.wikipedia.org/wiki/Zero-sum%20problem
In number theory, zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group. Concretely, given a finite abelian group G and a positive integer n, one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0. The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv. They proved that for the group of integers modulo n, the smallest such k is 2n − 1. Explicitly this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n, but that the same is not true of multisets of size 2n − 2. (Indeed, the lower bound is easy to see: the multiset containing n − 1 copies of 0 and n − 1 copies of 1 contains no n-subset summing to a multiple of n.) This result is known as the Erdős–Ginzburg–Ziv theorem after its discoverers. It may also be deduced from the Cauchy–Davenport theorem. More general results than this theorem exist, such as Olson's theorem, Kemnitz's conjecture (proved by Christian Reiher in 2003), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005). See also Davenport constant Subset sum problem References External links PlanetMath Erdős, Ginzburg, Ziv Theorem Sun, Zhi-Wei, "Covering Systems, Restricted Sumsets, Zero-sum Problems and their Unification" Further reading Zero-sum problems - A survey (open-access journal article) Zero-Sum Ramsey Theory: Graphs, Sequences and More (workshop homepage) Arie Bialostocki, "Zero-sum trees: a survey of results and open problems" N.W. Sauer (ed.) R.E. Woodrow (ed.) B. Sands (ed.), Finite and Infinite Combinatorics in Sets and Logic, Nato ASI Ser., Kluwer Acad. Publ. (1993) pp. 19–29 Y. Caro, "Zero-sum problems: a survey" Discrete Math., 152 (1996) pp. 93–113 Ramsey theory Combinatorics Paul Erdős Mathematical problems
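The Erdős-Ginzburg-Ziv theorem and the tightness of its bound are small enough to verify exhaustively for the first few n; a brute-force sketch:

```python
# Verify the Erdos-Ginzburg-Ziv theorem for small n: every multiset of
# 2n - 1 residues mod n contains an n-subset summing to 0 mod n, while
# the size-(2n - 2) multiset of n-1 zeros and n-1 ones contains none.
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subset(seq, n):
    return any(sum(c) % n == 0 for c in combinations(seq, n))

for n in range(2, 6):
    theorem = all(has_zero_sum_subset(s, n)
                  for s in combinations_with_replacement(range(n), 2 * n - 1))
    tight = not has_zero_sum_subset((0,) * (n - 1) + (1,) * (n - 1), n)
    print(n, theorem, tight)   # True True for each n
```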
https://en.wikipedia.org/wiki/Stern%E2%80%93Brocot%20tree
In number theory, the Stern–Brocot tree is an infinite complete binary tree in which the vertices correspond one-for-one to the positive rational numbers, whose values are ordered from the left to the right as in a search tree. The Stern–Brocot tree was introduced independently by and . Stern was a German number theorist; Brocot was a French clockmaker who used the Stern–Brocot tree to design systems of gears with a gear ratio close to some desired value by finding a ratio of smooth numbers near that value. The root of the Stern–Brocot tree corresponds to the number 1. The parent-child relation between numbers in the Stern–Brocot tree may be defined in terms of continued fractions or mediants, and a path in the tree from the root to any other number q provides a sequence of approximations to q with smaller denominators than q. Because the tree contains each positive rational number exactly once, a breadth first search of the tree provides a method of listing all positive rationals that is closely related to Farey sequences. The left subtree of the Stern–Brocot tree, containing the rational numbers in the range (0,1), is called the Farey tree. A tree of continued fractions Every positive rational number may be expressed as a continued fraction of the form [a₀; a₁, a₂, …, a_k] where a₀ and k are non-negative integers, and each subsequent coefficient a_i is a positive integer. This representation is not unique because [a₀; a₁, …, a_k, 1] = [a₀; a₁, …, a_k + 1], but using this equivalence to replace every continued fraction ending with a one by a shorter continued fraction shows that every rational number has a unique representation in which the last coefficient is greater than one. Then, unless q = 1, the number q has a parent in the Stern–Brocot tree given by the continued fraction expression [a₀; a₁, …, a_k − 1]. Equivalently this parent is formed by decreasing the denominator in the innermost term of the continued fraction by 1, and contracting with the previous term if the fraction becomes [a₀; a₁, …, a_{k−1}, 1]. For instance, the rational number 23/16 has the continued fraction representation [1; 2, 3, 2], so its parent in the Stern–Brocot tree is the number [1; 2, 3, 1] = [1; 2, 4] = 13/9. Conversely each number q in the Stern–Brocot tree has exactly two children: if q = [a₀; a₁, …, a_k] then one child is the number represented by the continued fraction [a₀; a₁, …, a_k + 1] while the other child is represented by the continued fraction [a₀; a₁, …, a_k − 1, 2]. One of these children is less than q and this is the left child; the other is greater than q and it is the right child (in fact the former expression gives the left child if k is odd, and the right child if k is even). For instance, the continued fraction representation of 13/9 is [1;2,4] and its two children are [1;2,5] = 16/11 (the right child) and [1;2,3,2] = 23/16 (the left child). It is clear that for each finite continued fraction expression one can repeatedly move to its parent, and reach the root [1;] = 1 of the tree in finitely many steps (in a₀ + a₁ + ⋯ + a_k − 1 steps to be precise). Therefore, every positive rational number appears exactly once in this tree. Moreover all descendants of the left child of any number q are less than q, and
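The parent and child rules above translate directly into code. A sketch representing continued fractions as coefficient lists; the helper names are ours, not the article's:

```python
# Parent/child moves in the Stern-Brocot tree via continued fractions,
# with coefficient lists like [1, 2, 4] standing for [1; 2, 4].
from fractions import Fraction

def cf_to_frac(cf):
    x = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        x = a + 1 / x
    return x

def children(cf):
    # [a0;...,ak] -> [a0;...,ak + 1]  and  [a0;...,ak - 1, 2]
    return cf[:-1] + [cf[-1] + 1], cf[:-1] + [cf[-1] - 1, 2]

def parent(cf):
    # decrease the last coefficient by 1; if it becomes 1, contract it
    # into the previous coefficient ([..., a, 1] equals [..., a + 1])
    new = cf[:-1] + [cf[-1] - 1]
    if len(new) > 1 and new[-1] == 1:
        new = new[:-2] + [new[-2] + 1]
    return new

c1, c2 = children([1, 2, 4])            # children of 13/9
print(cf_to_frac(c1), cf_to_frac(c2))   # 16/11 and 23/16
print(parent([1, 2, 3, 2]))             # [1, 2, 4], i.e. 13/9
```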
https://en.wikipedia.org/wiki/Variable-geometry%20turbocharger
Variable-geometry turbochargers (VGTs), occasionally known as variable-nozzle turbines (VNTs), are a type of turbocharger, usually designed to allow the effective aspect ratio (A/R ratio) of the turbocharger to be altered as conditions change. This is done with the use of adjustable vanes located inside the turbine housing between the inlet and the turbine; these vanes affect the flow of gases towards the turbine. The benefit of the VGT is that the optimum aspect ratio at low engine speeds is very different from that at high engine speeds. If the aspect ratio is too large, the turbo will fail to create boost at low speeds; if the aspect ratio is too small, the turbo will choke the engine at high speeds, leading to high exhaust manifold pressures, high pumping losses, and ultimately lower power output. By altering the geometry of the turbine housing as the engine accelerates, the turbo's aspect ratio can be maintained at its optimum. Because of this, VGTs have a minimal amount of lag, a low boost threshold, and high efficiency at higher engine speeds. History The rotating-vane VGT was first developed under Garrett and patented in 1953. One of the first production cars to use these turbochargers was the 1988 Honda Legend; it used a water-cooled VGT installed on its 2.0-litre V6 engine. The limited-production 1989 Shelby CSX-VNT, with only 500 examples produced, was equipped with a 2.2-litre Chrysler K engine with a Garrett turbo called the VNT-25 (because it used the same compressor and shaft as the fixed-geometry Garrett T-25). In 1991, Fiat incorporated a VGT into the Croma's direct-injected turbodiesel. The Peugeot 405 T16, launched in 1992, used a Garrett VAT25 variable-geometry turbocharger on its 2.0-litre 16-valve engine. The 2007 Porsche 911 Turbo has twin variable-geometry turbochargers on its 3.6-litre horizontally-opposed six-cylinder gasoline engine. In 2007, Acura introduced the RDX with a variable-geometry turbocharger following a variable flow turbine (VFT) design. The 2015 Koenigsegg One:1 (named after its power-to-weight ratio of 1:1) uses twin variable-geometry turbochargers on its 5.0-litre V8 engine, allowing it to produce 1361 horsepower. Common designs The most common implementations of VGTs are Variable-Nozzle Turbines (VNT), Sliding Wall Turbines, and Variable Flow Turbines (VFT). Variable-Nozzle Turbines are common in light-duty engines (passenger cars, race cars, and light commercial vehicles); the turbine's vanes rotate in unison, relative to the hub, to vary their pitch and cross-sectional area. VNTs offer higher flow rates and higher peak efficiency compared to other variable geometry designs. Sliding Wall Turbines are commonly found in heavy-duty engines; the vanes do not rotate, but instead, their effective width is changed. This is usually done by moving the turbine along its axis, partially retracting the vanes within the housing. Alternatively, a partition within the housing may slide back and forth. The area between the edges of the