https://en.wikipedia.org/wiki/Landau%20prime%20ideal%20theorem
In algebraic number theory, the prime ideal theorem is the number field generalization of the prime number theorem. It provides an asymptotic formula for counting the number of prime ideals of a number field K with norm at most X.

Example

What to expect can be seen already for the Gaussian integers. There, for any prime number p of the form 4n + 1, p factors as a product of two Gaussian primes of norm p. Primes of the form 4n + 3 remain prime, giving a Gaussian prime of norm p². Therefore, we should estimate 2r(X) + r′(√X), where r counts primes in the arithmetic progression 4n + 1, and r′ counts primes in the arithmetic progression 4n + 3. By the quantitative form of Dirichlet's theorem on primes, each of r(Y) and r′(Y) is asymptotically Y/(2 log Y). Therefore, the 2r(X) term dominates, and is asymptotically X/log X.

General number fields

This general pattern holds for number fields in general, so that the prime ideal theorem is dominated by the ideals of norm a prime number. As Edmund Landau proved in 1903, for norm at most X the same asymptotic formula always holds: the number of prime ideals of norm at most X is asymptotically X/log X. Heuristically, this is because the logarithmic derivative of the Dedekind zeta function of K always has a simple pole with residue −1 at s = 1. As with the prime number theorem, a more precise estimate may be given in terms of the logarithmic integral function: the number of prime ideals of norm ≤ X is π_K(X) = Li(X) + O(X exp(−c_K √(log X))), where c_K is a constant depending on K.

See also

Abstract analytic number theory
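The Gaussian example can be checked numerically. The following Python sketch (an assumed setup, not part of the article) counts prime ideals of Z[i] of norm at most X using the splitting rules above and compares the count with X/log X:

```python
# Counting prime ideals of Z[i] with norm <= X.  Prime ideals arise from:
#   * the ramified prime 2      -> one ideal of norm 2,
#   * split primes p = 4n + 1   -> two ideals of norm p,
#   * inert primes p = 4n + 3   -> one ideal of norm p^2.
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def count_prime_ideals_gaussian(X):
    """Number of prime ideals of Z[i] with norm <= X."""
    count = 0
    for p in primes_up_to(X):
        if p == 2:
            count += 1        # (1 + i) is ramified, norm 2
        elif p % 4 == 1:
            count += 2        # p splits: two ideals of norm p
        elif p * p <= X:
            count += 1        # p inert: one ideal of norm p^2
    return count

for X in (10**3, 10**4, 10**5, 10**6):
    pi_K = count_prime_ideals_gaussian(X)
    print(X, pi_K, round(X / math.log(X)), pi_K / (X / math.log(X)))
```

The ratio in the last column drifts slowly toward 1, consistent with the (slow) convergence familiar from the ordinary prime number theorem.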
https://en.wikipedia.org/wiki/Characteristic%20function%20%28probability%20theory%29
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.

In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

Introduction

The characteristic function is a way to describe a random variable. The characteristic function φ_X(t) = E[e^(itX)], a function of t, completely determines the behavior and properties of the probability distribution of the random variable X. It is similar to the cumulative distribution function F_X(x) = E[1{X ≤ x}] (where 1{X ≤ x} is the indicator function — it is equal to 1 when X ≤ x, and zero otherwise), which also completely determines the behavior and properties of the probability distribution of the random variable X. The two approaches are equivalent in the sense that knowing one of the functions it is always possible to find the other, yet they provide different insights for understanding the features of the random variable. Moreover, in particular cases, there can be differences in whether these functions can be represented as expressions involving simple standard functions.

If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function M_X(t), then the domain of the characteristic function can be extended to the complex plane, and φ_X(−it) = M_X(t). Note however that the characteristic function of a distribution always exists, even when the probability density function or moment-generating function do not.

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.

Definition

For a scalar random variable X the characteristic function is defined as the expected value of e^(itX), where i is the imaginary unit and t ∈ ℝ is the argument of the characteristic function: φ_X(t) = E[e^(itX)].
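As a small illustration (an assumed example, not from the article), the defining expectation can be estimated by Monte Carlo and compared with the known closed form e^(−t²/2) for a standard normal variable:

```python
# Estimate E[exp(i*t*X)] from a standard normal sample and compare with
# the exact characteristic function exp(-t^2 / 2) of N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

for t in (0.0, 0.5, 1.0, 2.0):
    empirical = np.exp(1j * t * x).mean()  # Monte Carlo estimate of E[exp(itX)]
    exact = np.exp(-t ** 2 / 2)            # known cf of the standard normal
    print(t, empirical, exact)
```

The imaginary parts of the estimates hover near zero (the normal density is symmetric) and the real parts match e^(−t²/2) to sampling accuracy.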
https://en.wikipedia.org/wiki/Annihilator%20method
In mathematics, the annihilator method is a procedure used to find a particular solution to certain types of non-homogeneous ordinary differential equations (ODEs). It is similar to the method of undetermined coefficients, but instead of guessing the particular solution as in that method, the particular solution is determined systematically. The phrase undetermined coefficients can also be used to refer to the step in the annihilator method in which the coefficients are calculated.

The annihilator method is used as follows. Given the ODE P(D)y = f(x), find another differential operator A(D) such that A(D)f = 0. This operator is called the annihilator, hence the name of the method. Applying A(D) to both sides of the ODE gives a homogeneous ODE, A(D)P(D)y = 0, for which we find a solution basis as before. Then the original inhomogeneous ODE is used to construct a system of equations restricting the coefficients of the linear combination to satisfy the ODE. This method is not as general as variation of parameters in the sense that an annihilator does not always exist.

Annihilator table

f(x) = x^(n−1): annihilator D^n
f(x) = x^(n−1) e^(ax): annihilator (D − a)^n
f(x) = x^(n−1) e^(ax) cos(bx) or x^(n−1) e^(ax) sin(bx): annihilator (D² − 2aD + a² + b²)^n

where n is in the natural numbers, and a and b are in the real numbers. If f consists of the sum of the expressions given in the table, the annihilator is the product of the corresponding annihilators.

Example

Given y″ − 4y′ + 5y = sin(kx), we have P(D) = D² − 4D + 5. The simplest annihilator of sin(kx) is A(D) = D² + k². The zeros of A(z)P(z) are {2 + i, 2 − i, ik, −ik}, so the solution basis of A(D)P(D)y = 0 is {y₁, y₂, y₃, y₄} = {e^((2+i)x), e^((2−i)x), e^(ikx), e^(−ikx)}. Setting y = c₁y₁ + c₂y₂ + c₃y₃ + c₄y₄ and imposing sin(kx) = P(D)y gives a linear system for c₃ and c₄, whose solution yields the particular integral y_p = (4k cos(kx) + (5 − k²) sin(kx)) / (k⁴ + 6k² + 25).

This solution can be broken down into the homogeneous and nonhomogeneous parts. In particular, y_p is a particular integral for the nonhomogeneous differential equation, and y_c = c₁y₁ + c₂y₂ is a complementary solution to the corresponding homogeneous equation. The values of c₁ and c₂ are usually determined through a set of initial conditions. Since this is a second-order equation, two such conditions are necessary to determine these values.

The fundamental solutions y₁ = e^((2+i)x) and y₂ = e^((2−i)x) can be further rewritten using Euler's formula, e^((2±i)x) = e^(2x)(cos x ± i sin x). Then c₁y₁ + c₂y₂ = e^(2x)((c₁ + c₂) cos x + i(c₁ − c₂) sin x), and a suitable reassignment of the constants gives a simpler and more understandable form of the complementary solution, y_c = e^(2x)(c₁ cos x + c₂ sin x).
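The worked example can be reproduced mechanically. The SymPy sketch below (assumed tooling; the article itself does not use software) solves the same ODE and independently verifies the particular integral obtained by the annihilator method:

```python
# Solve y'' - 4y' + 5y = sin(k x) with SymPy and check the annihilator-
# method particular integral by substituting it back into the equation.
import sympy as sp

x = sp.symbols("x", real=True)
k = sp.symbols("k", positive=True)
y = sp.Function("y")

ode = sp.Eq(y(x).diff(x, 2) - 4 * y(x).diff(x) + 5 * y(x), sp.sin(k * x))
print(sp.dsolve(ode, y(x)))  # complementary part exp(2x)(C1 cos x + C2 sin x) plus a particular part

# Particular integral predicted by the annihilator method:
y_p = (4 * k * sp.cos(k * x) + (5 - k**2) * sp.sin(k * x)) / (k**4 + 6 * k**2 + 25)
residual = sp.simplify(y_p.diff(x, 2) - 4 * y_p.diff(x) + 5 * y_p - sp.sin(k * x))
print(residual)  # 0, so y_p indeed solves the inhomogeneous equation
```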
https://en.wikipedia.org/wiki/Broken%20diagonal
In recreational mathematics and the theory of magic squares, a broken diagonal is a set of n cells forming two parallel diagonal lines in the square. Alternatively, these two lines can be thought of as wrapping around the boundaries of the square to form a single sequence.

In pandiagonal magic squares

A magic square in which the broken diagonals have the same sum as the rows, columns, and diagonals is called a pandiagonal magic square. Examples of broken diagonals from the number square in the image are as follows: 3,12,14,5; 10,1,7,16; 10,13,7,4; 15,8,2,9; 15,12,2,5; and 6,13,11,4. The fact that this square is a pandiagonal magic square can be verified by checking that all of its broken diagonals add up to the same constant:

3+12+14+5 = 34
10+1+7+16 = 34
10+13+7+4 = 34

One way to visualize a broken diagonal is to imagine a "ghost image" of the panmagic square adjacent to the original. The set of numbers {3, 12, 14, 5} of a broken diagonal, wrapped around the original square, can be seen starting with the first square of the ghost image and moving down to the left.

In linear algebra

Broken diagonals are used in a formula to find the determinant of 3 by 3 matrices. For a 3 × 3 matrix A = (a_ij), its determinant is

det(A) = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁ − a₁₁a₂₃a₃₂ − a₁₂a₂₁a₃₃.

Here, a₁₂a₂₃a₃₁ and a₁₃a₂₁a₃₂ are (products of the elements of) the broken diagonals of the matrix. Broken diagonals are used in the calculation of the determinants of all matrices of size 3 × 3 or larger. This can be shown by using the matrix's minors to calculate the determinant.
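The pandiagonal property is easy to verify by machine. The square below is a reconstruction chosen to be consistent with the broken diagonals quoted above (the article's image is not reproduced here), and the Python sketch checks every row, column and wrapped diagonal:

```python
# Check pandiagonality: all rows, columns, and the eight wrap-around
# diagonals of the (reconstructed) 4x4 square should sum to 34.
square = [
    [ 1,  8, 13, 12],
    [14, 11,  2,  7],
    [ 4,  5, 16,  9],
    [15, 10,  3,  6],
]
n = len(square)
sums = [sum(row) for row in square]                              # rows
sums += [sum(square[i][j] for i in range(n)) for j in range(n)]  # columns
for s in range(n):  # n diagonals in each direction, with wrap-around
    sums.append(sum(square[i][(i + s) % n] for i in range(n)))
    sums.append(sum(square[i][(s - i) % n] for i in range(n)))
print(set(sums))  # {34}: every line, broken or not, hits the magic constant
```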
https://en.wikipedia.org/wiki/Lagrangian%20Grassmannian
In mathematics, the Lagrangian Grassmannian is the smooth manifold of Lagrangian subspaces of a real symplectic vector space V. Its dimension is n(n + 1)/2 (where the dimension of V is 2n). It may be identified with the homogeneous space U(n)/O(n), where U(n) is the unitary group and O(n) the orthogonal group. Following Vladimir Arnold it is denoted by Λ(n). The Lagrangian Grassmannian is a submanifold of the ordinary Grassmannian of V.

A complex Lagrangian Grassmannian is the complex homogeneous manifold of Lagrangian subspaces of a complex symplectic vector space V of dimension 2n. It may be identified with the homogeneous space Sp(n)/U(n) of complex dimension n(n + 1)/2, where Sp(n) is the compact symplectic group.

As a homogeneous space

To see that the Lagrangian Grassmannian Λ(n) can be identified with U(n)/O(n), note that ℂⁿ is a 2n-dimensional real vector space, with the imaginary part of its usual inner product making it into a symplectic vector space. The Lagrangian subspaces of ℂⁿ are then the real subspaces of real dimension n on which the imaginary part of the inner product vanishes. An example is ℝⁿ ⊂ ℂⁿ. The unitary group U(n) acts transitively on the set of these subspaces, and the stabilizer of ℝⁿ is the orthogonal group O(n). It follows from the theory of homogeneous spaces that Λ(n) is isomorphic to U(n)/O(n) as a homogeneous space of U(n).

Topology

The stable topology of the Lagrangian Grassmannian and complex Lagrangian Grassmannian is completely understood, as these spaces appear in the Bott periodicity theorem among the iterated loop spaces of the stable orthogonal group: their homotopy groups are thus exactly the homotopy groups of the stable orthogonal group, up to a shift in indexing (dimension). In particular, the fundamental group of Λ(n) is infinite cyclic. Its first homology group is therefore also infinite cyclic, as is its first cohomology group, with a distinguished generator given by the square of the determinant of a unitary matrix, as a mapping to the unit circle. Arnold showed that this leads to a description of the Maslov index, introduced by V. P. Maslov.

For a Lagrangian submanifold M of V, in fact, there is a mapping M → Λ(n) which classifies its tangent space at each point (cf. Gauss map). The Maslov index is the pullback via this mapping, in H¹(M, ℤ), of the distinguished generator of H¹(Λ(n), ℤ).

Maslov index

A path of symplectomorphisms of a symplectic vector space may be assigned a Maslov index, named after V. P. Maslov; it will be an integer if the path is a loop, and a half-integer in general. If this path arises from trivializing the symplectic vector bundle over a periodic orbit of a Hamiltonian vector field on a symplectic manifold or the Reeb vector field on a contact manifold, it is known as the Conley–Zehnder index. It computes the spectral flow of the Cauchy–Riemann-type operators that arise in Floer homology. It appeared originally in the study of the WKB approximation and appears frequently in the study of quantization, quantum chaos trace formulas, and in symplectic geometry and topology. It can be described, as above, in terms of a Maslov index for paths of Lagrangian subspaces.
https://en.wikipedia.org/wiki/List%20of%20exceptional%20set%20concepts
This is a list of exceptional set concepts. In mathematics, and in particular in mathematical analysis, it is very useful to be able to characterise subsets of a given set X as 'small', in some definite sense, or 'large' if their complement in X is small. There are numerous concepts that have been introduced to study 'small' or 'exceptional' subsets. In the case of sets of natural numbers, it is possible to define more than one concept of 'density', for example (see the sketch after this list). See also list of properties of sets of reals.

Almost all
Almost always
Almost everywhere
Almost never
Almost surely
Analytic capacity
Closed unbounded set
Cofinal (mathematics)
Cofinite
Dense set
IP set
2-large
Large set (Ramsey theory)
Meagre set
Measure zero
Natural density
Negligible set
Nowhere dense set
Null set, conull set
Partition regular
Piecewise syndetic set
Schnirelmann density
Small set (combinatorics)
Stationary set
Syndetic set
Thick set
Thin set (Serre)
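As a small illustration of the density idea mentioned above (an assumed example, not part of the list itself), the following Python sketch computes partial natural densities, showing the even numbers tending to density 1/2 and the squares to density 0:

```python
# The natural density of a set A of naturals is the limit (if it exists)
# of |A ∩ {1..n}| / n.  Below: even numbers -> 1/2, perfect squares -> 0.
import math

def partial_density(predicate, n):
    """n-th partial natural density |{k <= n : predicate(k)}| / n."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: math.isqrt(k) ** 2 == k

for n in (10**2, 10**4, 10**6):
    print(n, partial_density(is_even, n), partial_density(is_square, n))
```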
https://en.wikipedia.org/wiki/Saxon%20math
Saxon math, developed by John Saxon (1923–1996), is a teaching method for incremental learning of mathematics created in the 1980s. It involves teaching a new mathematical concept every day and constantly reviewing old concepts. Early editions were criticized for providing very few opportunities to practice the new material before plunging into a review of all previous material. Newer editions typically split the day's work evenly between practicing the new material and reviewing old material. The method relies on a steady review of all previous material, with a focus on students who struggle with retaining the math they previously learned. However, it has sometimes been criticized for its heavy emphasis on rote rather than conceptual learning.

The Saxon Math 1 to Algebra 1/2 (the equivalent of a Pre-Algebra book) curriculum is designed so that students complete assorted mental math problems, learn a new mathematical concept, practice problems relating to that lesson, and solve a variety of problems. Daily practice problems include relevant questions from the current day's lesson as well as cumulative problems. This daily cycle is interrupted for tests and additional topics. From Algebra 1/2 on, the higher-level books remove the mental math problems and incorporate more frequent testing.

Saxon Publishers has also published a phonics and spelling curriculum. This curriculum, authored by Lorna Simmons and first published in 2005, follows the same incremental principles as the Saxon Math curriculum.

The Saxon math program has a specific set of products to support homeschoolers, including solution keys and ready-made tests, which makes it popular among some homeschool families. It has also been adopted as an alternative to reform mathematics programs in public and private schools. Unlike many reform texts, Saxon teaches memorization of algorithms.

Relation to Common Core

In some reviews, such as ones performed by the nonprofit curriculum rating site EdReports.org, Saxon Math is ranked poorly because it is not aligned with the Common Core State Standards Initiative. That initiative, which has been adopted by most U.S. states, is an important factor in determining which curricula are used in public schools in those states. However, Saxon Math continues to be popular among private schools and homeschoolers, many of whom favor its more traditional approach to teaching math.

External links

Saxon teaching materials, distributed by Houghton Mifflin Harcourt
https://en.wikipedia.org/wiki/Pseudoconvexity
In mathematics, more precisely in the theory of functions of several complex variables, a pseudoconvex set is a special type of open set in the n-dimensional complex space ℂⁿ. Pseudoconvex sets are important, as they allow for classification of domains of holomorphy.

Let G ⊂ ℂⁿ be a domain, that is, an open connected subset. One says that G is pseudoconvex (or Hartogs pseudoconvex) if there exists a continuous plurisubharmonic function φ on G such that the set {z ∈ G : φ(z) < x} is a relatively compact subset of G for all real numbers x. In other words, a domain is pseudoconvex if G has a continuous plurisubharmonic exhaustion function. Every (geometrically) convex set is pseudoconvex. However, there are pseudoconvex domains which are not geometrically convex.

When G has a C² (twice continuously differentiable) boundary, this notion is the same as Levi pseudoconvexity, which is easier to work with. More specifically, with a C² boundary, it can be shown that G has a defining function, i.e., there exists a C² function ρ such that G = {z : ρ(z) < 0}, ∂G = {z : ρ(z) = 0}, and ∇ρ ≠ 0 on ∂G. Now, G is pseudoconvex iff for every p ∈ ∂G and every w in the complex tangent space at p, that is, satisfying Σⱼ (∂ρ/∂zⱼ)(p) wⱼ = 0, we have

Σᵢⱼ (∂²ρ/∂zᵢ∂z̄ⱼ)(p) wᵢ w̄ⱼ ≥ 0.

The definition above is analogous to definitions of convexity in real analysis.

If G does not have a C² boundary, the following approximation result can be useful.

Proposition 1. If G is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains Gₖ ⊂ G with smooth (C^∞) boundary which are relatively compact in G, such that G = ⋃ₖ Gₖ. This is because once we have a φ as in the definition we can actually find a C^∞ exhaustion function.

The case n = 1

In one complex dimension, every open domain is pseudoconvex. The concept of pseudoconvexity is thus more useful in dimensions higher than 1.

See also

Analytic polyhedron
Eugenio Elia Levi
Holomorphically convex hull
Stein manifold

References

Lars Hörmander, An Introduction to Complex Analysis in Several Variables, North-Holland, 1990.
Steven G. Krantz, Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.
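The Levi condition can be checked symbolically. A minimal SymPy sketch (an assumed example: the defining function ρ = |z₁|² + |z₂|² − 1 of the unit ball, with the conjugates treated as independent variables in Wirtinger style) computes the complex Hessian:

```python
# Levi form of the unit-ball defining function rho = |z1|^2 + |z2|^2 - 1,
# with z_jb standing in for conj(z_j) as an independent symbol.
import sympy as sp

z1, z2, z1b, z2b = sp.symbols("z1 z2 z1b z2b")
rho = z1 * z1b + z2 * z2b - 1

# Levi matrix L_ij = d^2 rho / (dz_i dconj(z_j))
levi = sp.Matrix(2, 2, lambda i, j: sp.diff(rho, (z1, z2)[i], (z1b, z2b)[j]))
print(levi)  # identity matrix
```

The Levi matrix is the identity, hence positive definite on the whole tangent space, so the ball is strongly Levi pseudoconvex at every boundary point.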
https://en.wikipedia.org/wiki/Domain%20of%20holomorphy
In mathematics, in the theory of functions of several complex variables, a domain of holomorphy is a domain which is maximal in the sense that there exists a holomorphic function on this domain which cannot be extended to a bigger domain.

Formally, an open set Ω in the n-dimensional complex space ℂⁿ is called a domain of holomorphy if there do not exist non-empty open sets U ⊂ Ω ∩ V and V ⊂ ℂⁿ, where V is connected and not contained in Ω, such that for every holomorphic function f on Ω there exists a holomorphic function g on V with f = g on U.

In the case n = 1, every open set is a domain of holomorphy: we can define a holomorphic function with zeros accumulating everywhere on the boundary of the domain, which must then be a natural boundary for a domain of definition of its reciprocal. For n ≥ 2 this is no longer true, as it follows from Hartogs' lemma.

Equivalent conditions

For a domain Ω the following conditions are equivalent:

Ω is a domain of holomorphy
Ω is holomorphically convex
Ω is pseudoconvex
Ω is Levi convex - for every sequence of analytic compact surfaces whose boundaries converge to a set contained in Ω, the limit surface also lies in Ω (Ω cannot be "touched from inside" by a sequence of analytic surfaces)
Ω has the local Levi property - for every point x ∈ ∂Ω there exist a neighbourhood U of x and a function f holomorphic on U ∩ Ω such that f cannot be extended to any neighbourhood of x

The implications among these conditions are, for the most part, standard results (for the implication from domain of holomorphy to pseudoconvexity, see Oka's lemma). The main difficulty lies in proving that a pseudoconvex domain is a domain of holomorphy, i.e. constructing a global holomorphic function which admits no extension from non-extendable functions defined only locally. This is called the Levi problem (after E. E. Levi) and was first solved by Kiyoshi Oka, and then by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the ∂̄-problem).

Properties

If Ω₁, …, Ωₙ are domains of holomorphy, then their intersection Ω = ⋂ⱼ Ωⱼ is also a domain of holomorphy.
If Ω₁ ⊆ Ω₂ ⊆ ⋯ is an ascending sequence of domains of holomorphy, then their union Ω = ⋃ₙ Ωₙ is also a domain of holomorphy (see Behnke–Stein theorem).
If Ω₁ and Ω₂ are domains of holomorphy, then Ω₁ × Ω₂ is a domain of holomorphy.
The first Cousin problem is always solvable in a domain of holomorphy; this is also true, with additional topological assumptions, for the second Cousin problem.

See also

Behnke–Stein theorem
Levi pseudoconvex
Solution of the Levi problem
Stein manifold

References

Steven G. Krantz, Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.
Boris Vladimirovich Shabat, Introduction to Complex Analysis, AMS, 1992.
https://en.wikipedia.org/wiki/Pseudoconvex%20function
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function's domain, and not only for nearby points.

Formal definition

Consider a differentiable function f : X → ℝ, defined on a (nonempty) convex open set X of the finite-dimensional Euclidean space ℝⁿ. This function is said to be pseudoconvex if the following property holds: for all x, y ∈ X,

if ⟨∇f(x), y − x⟩ ≥ 0 then f(y) ≥ f(x).

Equivalently:

if f(y) < f(x) then ⟨∇f(x), y − x⟩ < 0.

Here ∇f is the gradient of f, defined by ∇f = (∂f/∂x₁, …, ∂f/∂xₙ). Note that the definition may also be stated in terms of the directional derivative of f, in the direction given by the vector v = y − x. This is because, as f is differentiable, this directional derivative is given by f′(x; v) = ⟨∇f(x), v⟩.

Properties

Relation to other types of "convexity"

Every convex function is pseudoconvex, but the converse is not true. For example, the function f(x) = x + x³ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function f(x) = x³ is quasiconvex but not pseudoconvex. This can be summarized schematically as: convex ⟹ pseudoconvex ⟹ quasiconvex. To see that f(x) = x³ is not pseudoconvex, consider its derivative at x = 0: f′(0) = 0. Then, if f were pseudoconvex, we should have f(y) ≥ f(0) = 0 for every y. In particular it should be true for y = −1. But it is not, as f(−1) = −1 < 0.

Sufficient optimality condition

For any differentiable function, we have Fermat's theorem necessary condition of optimality, which states that: if f has a local minimum at x* in an open domain, then x* must be a stationary point of f (that is, ∇f(x*) = 0). Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is: if x* is a stationary point of a pseudoconvex function f, then f has a global minimum at x*. Note also that the result guarantees a global minimum (not only local).

This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider for example the quasiconvex function f(x) = x³. This function is not pseudoconvex, but it is quasiconvex. Also, the point x = 0 is a critical point of f, as f′(0) = 0. However, f does not have a global minimum at x = 0 (not even a local minimum). Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function f(x) = x + x³, whose derivative is always positive: f′(x) = 1 + 3x² > 0.

Examples

An example of a function that is pseudoconvex, but not convex, is f(x) = x + x³, discussed above. Examples in two variables can be obtained by composing such a function with a linear map, since a pseudoconvex function composed with a linear map is again pseudoconvex. Such an example may further be modified to obtain a function that is not convex, nor pseudoconvex, but is quasiconvex: a function of this kind fails to be pseudoconvex because it is not differentiable at some point.

Generalization to nondifferentiable functions

The notion of pseudoconvexity can also be generalized to nondifferentiable functions, with the upper Dini directional derivative taking the place of the gradient in the definition.
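These one-variable examples are easy to test numerically. The following Python sketch (an assumed example) samples point pairs to confirm that f(x) = x + x³ never violates the pseudoconvexity implication, and exhibits the explicit violation for g(x) = x³ at x = 0, y = −1:

```python
# f(x) = x + x^3 should satisfy  f'(x)(y - x) >= 0  =>  f(y) >= f(x),
# while g(x) = x^3 fails it (premise holds at x = 0, conclusion fails).
import numpy as np

f  = lambda x: x + x**3
df = lambda x: 1 + 3 * x**2
g  = lambda x: x**3
dg = lambda x: 3 * x**2

rng = np.random.default_rng(1)
xs, ys = rng.uniform(-5, 5, 10_000), rng.uniform(-5, 5, 10_000)

violated = np.any((df(xs) * (ys - xs) >= 0) & (f(ys) < f(xs) - 1e-12))
print("f pseudoconvexity violated:", violated)      # False

print("g: dg(0)*(-1 - 0) =", dg(0.0) * (-1 - 0.0),  # 0, premise holds
      "but g(-1) =", g(-1.0), "< g(0) =", g(0.0))   # conclusion fails
```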
https://en.wikipedia.org/wiki/Convex%20analysis
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

Convex sets

A subset C of some vector space X is convex if it satisfies any of the following equivalent conditions:

If 0 ≤ r ≤ 1 is real and x, y ∈ C, then rx + (1 − r)y ∈ C.
If 0 < r < 1 is real and x, y ∈ C with x ≠ y, then rx + (1 − r)y ∈ C.

Throughout, f : X → [−∞, +∞] will be a map valued in the extended real numbers with a domain that is a convex subset of some vector space. The map f is a convex function if

f(rx + (1 − r)y) ≤ r f(x) + (1 − r) f(y)

holds for any real 0 < r < 1 and any x, y in the domain of f. If this remains true of f when the defining inequality (above) is replaced by the strict inequality <, then f is called strictly convex.

Convex functions are related to convex sets. Specifically, the function f is convex if and only if its epigraph, epi f = {(x, r) ∈ X × ℝ : f(x) ≤ r}, is a convex set. The epigraphs of extended real-valued functions play a role in convex analysis that is analogous to the role played by graphs of real-valued functions in real analysis. Specifically, the epigraph of an extended real-valued function provides geometric intuition that can be used to help formulate or prove conjectures.

The domain of a function f is denoted by dom f, while its effective domain is the set {x ∈ X : f(x) < ∞}. The function f is called proper if its effective domain is non-empty and f(x) > −∞ for every x. Alternatively, this means that there exists some x in the domain of f at which f(x) ∈ ℝ. In words, a function is proper if its domain is not empty, it never takes on the value −∞, and it also is not identically equal to +∞. If f is a proper convex function then there exist some vector b and some real number r such that f(x) ≥ ⟨x, b⟩ − r for every x, where ⟨x, b⟩ denotes the dot product of these vectors.

Convex conjugate

The convex conjugate of an extended real-valued function f : X → [−∞, +∞] (not necessarily convex) is the function f* : X* → [−∞, +∞] from the (continuous) dual space X* of X, defined by

f*(x*) = sup { ⟨x*, x⟩ − f(x) : x ∈ X },

where the brackets denote the canonical duality ⟨x*, x⟩ = x*(x). The biconjugate of f is the map f** = (f*)* defined by f**(x) = sup { ⟨x, x*⟩ − f*(x*) : x* ∈ X* } for every x ∈ X. If Func(X; Y) denotes the set of Y-valued functions on X, then the map Func(X; [−∞, +∞]) → Func(X*; [−∞, +∞]) defined by f ↦ f* is called the Legendre–Fenchel transform.

Subdifferential set and the Fenchel–Young inequality

If f : X → [−∞, +∞] and x ∈ X, then the subdifferential set of f at x is

∂f(x) = { x* ∈ X* : f(z) ≥ f(x) + ⟨x*, z − x⟩ for all z ∈ X }.

For example, in the important special case where f = ‖·‖ is a norm on X, it can be shown that if 0 ≠ x then this definition reduces down to ∂f(x) = { x* ∈ X* : ⟨x*, x⟩ = ‖x‖ and ‖x*‖ = 1 }, and ∂f(0) is the closed unit ball of the dual norm. For any x ∈ X and x* ∈ X*,

f(x) + f*(x*) ≥ ⟨x*, x⟩,

which is called the Fenchel–Young inequality. This inequality is an equality (i.e. f(x) + f*(x*) = ⟨x*, x⟩) if and only if x* ∈ ∂f(x). It is in this way that the subdifferential set is directly related to the convex conjugate f*.

Biconjugate

The biconjugate of a function f is the conjugate of the conjugate, typically written as f**. The biconjugate is useful for showing when strong or weak duality hold (via the perturbation function). For any x ∈ X the inequality f**(x) ≤ f(x) follows from the Fenchel–Young inequality. For proper functions, f = f** if and only if f is convex and lower semi-continuous, by the Fenchel–Moreau theorem.

Convex minimization

A convex minimization (primal) problem is one of the form: find inf { f(x) : x ∈ M } when given a convex function f and a convex subset M.

Dual problem

In optimization theory, the duality principle states that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. In general, given two dual pairs of separated locally convex spaces (X, X*) and (Y, Y*), the dual problem is constructed from the primal problem by means of a perturbation function relating the two spaces.
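A one-dimensional numerical sketch (an assumed example, not from the article) illustrates the conjugate and the Fenchel–Young inequality for f(x) = x²/2, whose conjugate is f*(y) = y²/2:

```python
# Numerically compute f*(y) = sup_x (x*y - f(x)) for f(x) = x^2 / 2 on a
# grid, and spot-check the Fenchel-Young inequality x*y <= f(x) + f*(y).
import numpy as np

xs = np.linspace(-10, 10, 100_001)
f = 0.5 * xs**2

def conjugate(y):
    """Grid approximation of sup_x (x*y - f(x)); exact value is y^2 / 2."""
    return np.max(xs * y - f)

for y in (-2.0, 0.0, 1.0, 3.0):
    print(y, conjugate(y), 0.5 * y**2)   # the two columns agree

x, y = 1.7, -0.4
assert x * y <= 0.5 * x**2 + 0.5 * y**2 + 1e-12  # Fenchel-Young holds
```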
https://en.wikipedia.org/wiki/Subharmonic%20function
In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory. Intuitively, subharmonic functions are related to convex functions of one variable as follows. If the graph of a convex function and a line intersect at two points, then the graph of the convex function is below the line between those points. In the same way, if the values of a subharmonic function are no larger than the values of a harmonic function on the boundary of a ball, then the values of the subharmonic function are no larger than the values of the harmonic function also inside the ball. Superharmonic functions can be defined by the same description, only replacing "no larger" with "no smaller". Alternatively, a superharmonic function is just the negative of a subharmonic function, and for this reason any property of subharmonic functions can be easily transferred to superharmonic functions.

Formal definition

Formally, the definition can be stated as follows. Let G be a subset of the Euclidean space ℝⁿ and let f : G → ℝ ∪ {−∞} be an upper semi-continuous function. Then f is called subharmonic if for any closed ball B̄(x, r) of center x and radius r contained in G, and every real-valued continuous function h on B̄(x, r) that is harmonic in B(x, r) and satisfies f ≤ h on the boundary ∂B(x, r), we have f ≤ h on all of B(x, r). Note that by the above, the function which is identically −∞ is subharmonic, but some authors exclude this function by definition. A function u is called superharmonic if −u is subharmonic.

Properties

A function is harmonic if and only if it is both subharmonic and superharmonic.
If f is C² (twice continuously differentiable) on an open set G in ℝⁿ, then f is subharmonic if and only if Δf ≥ 0 on G, where Δ is the Laplacian.
The maximum of a subharmonic function cannot be achieved in the interior of its domain unless the function is constant; this is called the maximum principle. However, the minimum of a subharmonic function can be achieved in the interior of its domain.
Subharmonic functions make a convex cone, that is, a linear combination of subharmonic functions with positive coefficients is also subharmonic.
The pointwise maximum of two subharmonic functions is subharmonic. If the pointwise maximum of a countable number of subharmonic functions is upper semi-continuous, then it is also subharmonic.
The limit of a decreasing sequence of subharmonic functions is subharmonic (or identically equal to −∞).
Subharmonic functions are not necessarily continuous in the usual topology; however, one can introduce the fine topology, which makes them continuous.

Examples

If f is analytic then log|f| is subharmonic. More examples can be constructed by using the properties listed above, by taking maxima, convex combinations and limits. In dimension 1, all subharmonic functions can be obtained in this way.

Riesz representation theorem

If u is subharmonic in a region D of Euclidean space of dimension n, then u can be written as the sum of a harmonic function on D and the potential of a non-negative measure, the Riesz measure of u; this decomposition is the content of the Riesz representation theorem for subharmonic functions.
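The defining sub-mean-value property can be checked numerically. The Python sketch below (an assumed example) tests u = log|z² + 1|, which is subharmonic because z² + 1 is analytic, on a few circles:

```python
# The average of a subharmonic function over a circle should be >= its
# value at the centre.  Here u(x, y) = log|z^2 + 1| with z = x + iy.
import numpy as np

def u(x, y):
    z = x + 1j * y
    return np.log(np.abs(z**2 + 1))

theta = np.linspace(0, 2 * np.pi, 20_000, endpoint=False)
for cx, cy, r in [(0.0, 0.0, 0.5), (1.0, 0.5, 0.8), (0.0, 0.9, 0.3)]:
    circle_avg = u(cx + r * np.cos(theta), cy + r * np.sin(theta)).mean()
    print(circle_avg, u(cx, cy), circle_avg >= u(cx, cy) - 1e-9)
```

The first circle avoids the zeros of z² + 1, so u is harmonic inside it and the average equals the centre value; the last circle encloses the zero at z = i, and the inequality becomes strict.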
https://en.wikipedia.org/wiki/Indian%20National%20Mathematical%20Olympiad
The Indian National Mathematical Olympiad (INMO) is a high school mathematics competition held annually in India since 1989. It is the third tier in the Indian team selection procedure for the International Mathematical Olympiad and is conducted by the Homi Bhabha Centre for Science Education (HBCSE) under the aegis of the National Board of Higher Mathematics (NBHM). The Mathematical Olympiad Program is a five-stage process conducted under the aegis of the NBHM. The first stage, the PRMO, is conducted by the Mathematics Teachers' Association (India). All the remaining stages are organized by HBCSE.

Eligibility and participant selection process

The INMO is conducted by the MO Cell and is held on the third Sunday of January at 30 centers across the country. Prospective candidates first need to write the Pre-Regional Mathematical Olympiad (known as PRMO or Pre-RMO) and then the Regional Mathematical Olympiad (RMO) of their respective state or region. Around thirty students are selected from each region to write the INMO, so that the best-performing students from the RMO (approximately 900 in total) qualify for the next stage, the INMO.

Structure of the examination

The INMO is the national-level Olympiad conducted to select students for the International Mathematical Olympiad Training Camp, which in turn is used to select the Indian team for the International Mathematical Olympiad. It is similar to the USAMO conducted in the USA. The exam structure varies from year to year. From 2024 onwards, the INMO consists of 6 problems to be solved over a span of 4.5 hours. The topics asked are generally what is taught at high school level, except calculus. The difficulty of the problems tends to be higher than what is done in schools, with a strong focus on the application of concepts. The topics generally covered are number theory, geometry, combinatorics and algebra.

Further stages

The International Mathematical Olympiad Training Camp (IMOTC)

The qualifying students are invited to the International Mathematical Olympiad Training Camp (IMOTC), a one-month mathematics camp hosted by the Homi Bhabha Centre for Science Education in Mumbai. For first-time participants, it usually extends from late April till the end of May, while it begins about 10–14 days later for senior participants. In this camp, the students are taught Olympiad mathematics and some other general mathematics. Four selection tests and two practice tests are held during this period, and the top six students in the selection tests qualify to represent India in the International Mathematical Olympiad.

Pre-departure Training Camp for IMO

The selected team of 6 students goes through another round of training and orientation for about 10 days prior to departure for the IMO.

International Mathematical Olympiad

The six-member team selected at the end of the IMOTC is accompanied by a leader, a deputy leader and an observer.
https://en.wikipedia.org/wiki/Bianchi%20group
In mathematics, a Bianchi group is a group of the form PSL₂(O_d), where d is a positive square-free integer. Here, PSL denotes the projective special linear group and O_d is the ring of integers of the imaginary quadratic field ℚ(√−d).

The groups were first studied by Luigi Bianchi in 1892 as a natural class of discrete subgroups of PSL₂(ℂ), now termed Kleinian groups. As a subgroup of PSL₂(ℂ), a Bianchi group acts as orientation-preserving isometries of 3-dimensional hyperbolic space H³. The quotient space M_d = PSL₂(O_d)\H³ is a non-compact hyperbolic 3-orbifold with finite volume, which is also called a Bianchi orbifold. An exact formula for the volume, in terms of the Dedekind zeta function of the base field K = ℚ(√−d), was computed by Humbert as follows. Let D be the discriminant of K and Γ = PSL₂(O_d) the group acting discontinuously on H³; then

vol(H³/Γ) = |D|^(3/2) ζ_K(2) / (4π²).

The set of cusps of M_d is in bijection with the class group of K. It is well known that every non-cocompact arithmetic Kleinian group is weakly commensurable with a Bianchi group.

External links

Allen Hatcher, Bianchi Orbifolds
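Humbert's formula can be evaluated numerically for d = 1 (an assumed example). There ζ_K(s) = ζ(s)·L(s, χ₋₄), and L(2, χ₋₄) is Catalan's constant G, so with |D| = 4 the formula reduces to the known value G/3 for the volume of H³/PSL₂(ℤ[i]):

```python
# Evaluate |D|^{3/2} * zeta_K(2) / (4 pi^2) for K = Q(i), where
# zeta_K(2) = zeta(2) * L(2, chi_{-4}) and L(2, chi_{-4}) = Catalan's G.
import math

# Catalan's constant via its alternating series 1 - 1/9 + 1/25 - ...
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(10**6))
zeta_K2 = (math.pi**2 / 6) * G

vol = 4 ** 1.5 * zeta_K2 / (4 * math.pi**2)
print(vol, G / 3)   # both ~0.30532: the volume of the Bianchi orbifold for d = 1
```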
https://en.wikipedia.org/wiki/Philippine%20Statistics%20Authority
The Philippine Statistics Authority (PSA) is the central statistical authority of the Philippine government that collects, compiles, analyzes and publishes statistical information on the economic, social, demographic, political and general affairs of the people of the Philippines, and enforces the civil registration functions in the country. It is an attached agency of the National Economic and Development Authority (NEDA) for purposes of policy coordination.

The PSA comprises the PSA Board and offices on sectoral statistics, censuses and technical coordination, civil registration, the Philippine registry office, central support and field statistical services. The National Statistician, who is appointed by the president of the Philippines from a list of nominees submitted by a Special Committee and endorsed by the PSA Board Chairperson, is the head of the PSA and has a rank equivalent to an Undersecretary. Aside from directing and supervising the general administration of the PSA, the National Statistician provides overall direction in the implementation of the Civil Registry Law and related issuances and, as Civil Registrar General, exercises technical supervision over the civil registrars. The current National Statistician and Civil Registrar General (NSCRG) is Usec. Dennis Mapa, Ph.D., appointed by President Rodrigo Duterte.

History and precursor agencies

Philippine Statistical System

Recognizing the need to further enhance the efficiency of the statistical system and improve the timeliness and accuracy of statistics for planning and decision making, the Philippine Statistical System (PSS) was restructured on January 30, 1987. The issuance of Executive Order 121 provided the basis for the structure of the decentralized PSS. The PSS consisted of statistical organizations at all administrative levels, their personnel and the national statistical program. Specifically, the organizations composing the system included the following:

A policy-making and coordinating body – the National Statistical Coordination Board (NSCB)
A single general-purpose statistical agency – the National Statistics Office (NSO)
A research and training arm – the Statistical Research and Training Center (SRTC)
Units of government engaged in statistical activities either as their primary function or as part of their administrative or regulatory functions

The major statistical agencies in the PSS included the National Statistical Coordination Board (NSCB), the National Statistics Office (NSO), the Bureau of Agricultural Statistics (BAS), the Bureau of Labor and Employment Statistics (BLES), the Statistical Research and Training Center (SRTC), and the Department of Economic Statistics of the Bangko Sentral ng Pilipinas (BSP).

Precursor agencies

Bureau of Agricultural Statistics

The Bureau of Agricultural Statistics (BAS) was a successor organization to the Bureau of Agricultural Economics (BAECON) and was established on January 30, 1987, by virtue of Executive Order No. 116.
https://en.wikipedia.org/wiki/Binary%20alphabet
Binary alphabet may refer to:

The members of a binary set in mathematical set theory
A 2-element alphabet, in formal language theory
ASCII

See also

Binary numeral system
https://en.wikipedia.org/wiki/Trevor%20Truran
Trevor Truran (born 1942) is a former mathematics teacher from the United Kingdom, best known as the creator of many games and puzzles.

Truran began making up games as mathematical teaching aids. At one time his entire mathematics course for 9–13 year olds was based on games, puzzles and story situations. Early games were published in Games & Puzzles Magazine, and he became Puzzles Editor of that magazine and later of Top Puzzles. For over 13 years he wrote for Computer Talk magazine, contributing many new games and puzzles as well as early articles on the Rubik's Cube. A nine-part puzzle, Treasure Trail, appeared in the Sunday Telegraph, and he freelanced for many magazines and newspapers before taking up puzzling full-time in 1985 with the publishers now called Puzzler Media Ltd. In that time he has created and edited a wide variety of magazines, from Wordsearch to mathematical titles, but has largely concentrated on logical puzzling, providing much of the content of magazines such as Logical Puzzles.

He is the inventor of the logical puzzle now known as Mosaic (1980s), which was developed by Conceptis Ltd. and which had its first success on Japanese telephones. He is credited by some as a possible founder or early creator of what might be called cross-referencing or row-and-column puzzles, where numbers outside a grid give information as to what to put inside the grid. An early example is Whittleword (1979), which was followed by Domino Deal, Ace in Place and others.

He is currently a Managing Editor at Puzzler Media Ltd. and edits Sudoku and Kakuro magazines as well as Hanjie, Hashi, Super Hanjie, Mosaic, Enigma and Colour Hanjie. He also contributes to other magazines such as Tough Puzzles and has created the "Squiffy Sudokus" for a Carol Vorderman book.

A chance meeting with Bernard Pearson led to an involvement with Terry Pratchett's Discworld fantasy setting, and the game Thud was the first result. A second edition followed in 2005 to tie in with the novel inspired by the game, Thud!, which also features a faster, shorter game, "Koom Valley Thud", reflecting incidents in the book. A third edition of the game is still in print, and it has also been translated into Dutch. Truran also designed another Discworld game, Watch Out, a two-player game pitting members of the Ankh-Morpork City Watch against members of the Thieves Guild. It was publicly tested in 2004 but not eventually published, as according to Bernard Pearson it was not thought to be "sufficiently Discworld".

Publications

He has published two books:

Masterful Mindbenders (puzzle collection)
Hanjie Solved (a guide to Japanese logic picture puzzles, 2005)
https://en.wikipedia.org/wiki/Stieltjes%20moment%20problem
In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m₀, m₁, m₂, ...) to be of the form

mₙ = ∫₀^∞ xⁿ dμ(x)

for some measure μ. If such a measure μ exists, one asks whether it is unique. The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞).

Existence

Let Δₙ = det[(m_{i+j})₀≤i,j≤n] and Δₙ⁽¹⁾ = det[(m_{i+j+1})₀≤i,j≤n] denote the Hankel determinants of the sequence. Then {mₙ : n = 1, 2, 3, ...} is a moment sequence of some measure on [0, ∞) with infinite support if and only if, for all n, both Δₙ > 0 and Δₙ⁽¹⁾ > 0. {mₙ : n = 1, 2, 3, ...} is a moment sequence of some measure on [0, ∞) with finite support of size m if and only if, for all n ≤ m − 1, both Δₙ > 0 and Δₙ⁽¹⁾ > 0, and both determinants vanish for all larger n.

Uniqueness

There are several sufficient conditions for uniqueness; for example, Carleman's condition, which states that the solution is unique if

Σ_{n≥1} mₙ^(−1/(2n)) = ∞.
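The Hankel criterion is easy to test. The SymPy sketch below (an assumed example) uses the moments mₙ = n! of the exponential distribution on [0, ∞), for which all the determinants should come out positive:

```python
# For m_n = n! (moments of Exp(1) on [0, inf)), the Hankel determinants
# det(m_{i+j}) and det(m_{i+j+1}) are positive for every n, consistent
# with an infinitely supported Stieltjes moment sequence.
import sympy as sp

m = [sp.factorial(n) for n in range(12)]
for n in range(5):
    H0 = sp.Matrix(n + 1, n + 1, lambda i, j: m[i + j])
    H1 = sp.Matrix(n + 1, n + 1, lambda i, j: m[i + j + 1])
    print(n, H0.det(), H1.det())   # both columns strictly positive
```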
https://en.wikipedia.org/wiki/Treasure%20MathStorm%21
Treasure MathStorm! is an educational computer game intended to teach children ages five to nine mathematical problem solving. This sequel to Treasure Mountain! is the sixth installment of The Learning Company's Super Seekers games and the second in its "Treasure" series. The objective of Treasure MathStorm! is to return all of the treasures hidden across the mountain to the treasure chest in the castle at the top of the mountain. Although it runs more smoothly and has better graphics, its basic gameplay is very similar to that of its predecessor. In 1994, an enhanced and more Windows-friendly version was released on CD-ROM.

Gameplay

The game takes place in a magical realm called Treasure Mountain. As the game opens, the Master of Mischief, the common antagonist of the Super Seekers games, uses a weather machine to freeze the mountain in snow and ice and scatters the castle's treasures all over the mountain. The player takes on the role of the Super Seeker, whose job is to find the scattered treasures and return them to the castle's treasure chest in order to thaw out the mountain.

The mountain itself consists of three levels. The player cannot climb higher until he has gathered the supplies, like ice axes, ladders, or catapult parts, needed for scaling the mountain. To obtain these items, the player must help the local inhabitants by completing math-related tasks such as adjusting clocks to a given time, balancing scales, and counting crystals. In order to find treasures, the player must place a specific number of snowballs at a certain location. To find out how many snowballs are needed, the player must catch an elf carrying a scroll. If he answers the riddle correctly, he will be told how to find treasures on that particular level.

Once the player reaches the top of the castle on the highest level of the mountain, he deposits all treasures found into the castle's treasure chest and is given a prize as a reward for completing the three stages. This prize is kept on display in the player's clubhouse, showing how many times he has ascended the mountain. These prizes are usually children's toys, such as flutes or toy trains. From this point, the player may exit the clubhouse and start again from the bottom of the mountain. At higher ranks, the game becomes more difficult, as there will be more treasures to find, harder riddles to answer, and snowbullies that steal money.

Development

Treasure series

Treasure MathStorm! is the second of four games in The Learning Company's "Treasure" series, along with Treasure Mountain!, Treasure Cove!, and Treasure Galaxy!. The "Treasure" series is a subgroup of the company's Super Solvers series. All the games in this series are math- and reading-comprehension-oriented educational adventure games aimed at younger children. Games in the Treasure series all have the same three-stage gameplay format, where a special object, whose location can be deduced by answering educational riddles, is needed to reach the next stage.
https://en.wikipedia.org/wiki/Effective%20descriptive%20set%20theory
Effective descriptive set theory is the branch of descriptive set theory dealing with sets of reals having lightface definitions; that is, definitions that do not require an arbitrary real parameter (Moschovakis 1980). Thus effective descriptive set theory combines descriptive set theory with recursion theory.

Constructions

Effective Polish space

An effective Polish space is a complete separable metric space that has a computable presentation. Such spaces are studied in both effective descriptive set theory and in constructive analysis. In particular, standard examples of Polish spaces such as the real line, the Cantor set and the Baire space are all effective Polish spaces.

Arithmetical hierarchy

The arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called "arithmetical". More formally, the arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted Σ⁰ₙ and Π⁰ₙ for natural numbers n (including 0). The Greek letters here are lightface symbols, which indicates that the formulas do not contain set parameters.

If a formula φ is logically equivalent to a formula with only bounded quantifiers, then φ is assigned the classifications Σ⁰₀ and Π⁰₀. The classifications Σ⁰ₙ and Π⁰ₙ are defined inductively for every natural number n using the following rules:

If φ is logically equivalent to a formula of the form ∃n₁∃n₂⋯∃nₖ ψ, where ψ is Π⁰ₙ, then φ is assigned the classification Σ⁰ₙ₊₁.
If φ is logically equivalent to a formula of the form ∀n₁∀n₂⋯∀nₖ ψ, where ψ is Σ⁰ₙ, then φ is assigned the classification Π⁰ₙ₊₁.

References

Yiannis N. Moschovakis (1980), Descriptive Set Theory, North-Holland; second edition available online.
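As a concrete illustration of a bounded-quantifier definition (an assumed example, not from the article), primality of n can be expressed using only quantifiers bounded by n, so the predicate sits at the bottom of the hierarchy (it is both Σ⁰₀ and Π⁰₀). The Python predicate below mirrors that bounded quantification directly:

```python
# "n is prime" as a bounded-quantifier formula: n >= 2 and for all d < n,
# d does not divide n unless d = 1.  The loop bound depends only on n.
def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d != 0 for d in range(2, n))

print([n for n in range(2, 30) if is_prime(n)])
```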
https://en.wikipedia.org/wiki/Mean%20squared%20prediction%20error
In statistics the mean squared prediction error (MSPE), also known as mean squared error of the predictions, of a smoothing, curve fitting, or regression procedure is the expected value of the squared prediction errors (PE), the squared difference between the fitted values ĝ implied by the predictive function and the values of the (unobservable) true value g. It is an inverse measure of the explanatory power of ĝ and can be used in the process of cross-validation of an estimated model. Knowledge of g would be required in order to calculate the MSPE exactly; in practice, MSPE is estimated.

Formulation

If the smoothing or fitting procedure has projection matrix (i.e., hat matrix) L, which maps the observed values vector y to predicted values vector ŷ = Ly, then the prediction errors and the MSPE are formulated as

PEᵢ = g(xᵢ) − ĝ(xᵢ),  MSPE = E[(1/n) Σᵢ PEᵢ²].

The MSPE can be decomposed into two terms: the squared bias (mean error) of the fitted values and the variance of the fitted values:

MSPE = (1/n) Σᵢ (E[ĝ(xᵢ)] − g(xᵢ))² + (1/n) Σᵢ Var[ĝ(xᵢ)].

The quantity SSPE = Σᵢ PEᵢ² is called the sum squared prediction error. The root mean squared prediction error is the square root of MSPE: RMSPE = √MSPE.

Computation of MSPE over out-of-sample data

The mean squared prediction error can be computed exactly in two contexts. First, with a data sample of length n, the data analyst may run the regression over only q of the data points (with q < n), holding back the other n − q data points with the specific purpose of using them to compute the estimated model's MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to the q in-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over the n − q held-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, that results in the model being viewed favorably. And if two models are to be compared, the one with the lower MSPE over the n − q out-of-sample data points is viewed more favorably, regardless of the models' relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model's MSPE for the mostly unobserved population from which the data were drawn.

Second, as time goes on more data may become available to the data analyst, and then the MSPE can be computed over these new data.

Estimation of MSPE over the population

When the model has been estimated over all available data with none held back, the MSPE of the model over the entire population of mostly unobserved data can be estimated as follows. For the model yᵢ = g(xᵢ) + σεᵢ with independent standard-normal errors εᵢ, one may write

n·MSPE(L) = g'(I − L)'(I − L)g + σ² tr(L'L).

Using in-sample data values, the first term on the right side is equivalent to

E[Σᵢ (yᵢ − ĝ(xᵢ))²] − σ² tr[(I − L)'(I − L)].

Thus, n·MSPE(L) = E[Σᵢ (yᵢ − ĝ(xᵢ))²] − σ²(n − 2 tr(L)). If σ² is known or well-estimated by σ̂², it becomes possible to estimate MSPE by

n·MSPÊ(L) = Σᵢ (yᵢ − ĝ(xᵢ))² − σ̂²(n − 2 tr(L)).

Colin Mallows advocated this method in the construction of his model selection statistic Cp, which is a normalized version of the estimated MSPE:

Cp = Σᵢ (yᵢ − ĝ(xᵢ))²/σ̂² − n + 2p,

where p is the number of estimated parameters and σ̂² is computed from a version of the model that includes all candidate regressors.
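The in-sample versus out-of-sample comparison is easy to simulate. The Python sketch below (an assumed setup: a linear model with known truth g(x) = 1 + 2x, with the observed y serving as the usual proxy for the unobservable g) fits on q points and evaluates on the n − q held-back points:

```python
# Fit OLS on the first q points and compare estimated MSPE in sample
# versus on the n - q held-back points; the latter is typically larger.
import numpy as np

rng = np.random.default_rng(42)
n, q = 200, 150
x = rng.uniform(-3, 3, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)   # true g(x) = 1 + 2x plus noise

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X[:q], y[:q], rcond=None)[0]  # fit on in-sample points

pred = X @ beta
mspe_in = np.mean((y[:q] - pred[:q]) ** 2)
mspe_out = np.mean((y[q:] - pred[q:]) ** 2)
print(mspe_in, mspe_out)
```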
https://en.wikipedia.org/wiki/Fibered%20manifold
In differential geometry, in the category of differentiable manifolds, a fibered manifold is a surjective submersion π : E → B, that is, a surjective differentiable mapping such that at each point y ∈ E the tangent mapping T_y π : T_y E → T_{π(y)} B is surjective or, equivalently, its rank equals dim B.

History

In topology, the words fiber (Faser in German) and fiber space (gefaserter Raum) appeared for the first time in a paper by Herbert Seifert in 1932, but his definitions are limited to a very special case. The main difference from the present-day conception of a fiber space, however, was that for Seifert what is now called the base space (topological space) of a fiber (topological) space E was not part of the structure, but derived from it as a quotient space of E. The first definition of fiber space was given by Hassler Whitney in 1935 under the name sphere space, but in 1940 Whitney changed the name to sphere bundle. The theory of fibered spaces, of which vector bundles, principal bundles, topological fibrations and fibered manifolds are a special case, is attributed to Seifert, Hopf, Feldbau, Whitney, Steenrod, Ehresmann, Serre, and others.

Formal definition

A triple (E, π, B), where E and B are differentiable manifolds and π : E → B is a surjective submersion, is called a fibered manifold. E is called the total space and B is called the base.

Examples

Every differentiable fiber bundle is a fibered manifold.
Every differentiable covering space is a fibered manifold with discrete fiber.
In general, a fibered manifold need not be a fiber bundle: different fibers may have different topologies. An example of this phenomenon may be constructed by taking the trivial bundle pr₁ : B × ℝ → B and deleting two points in two different fibers over the base manifold B. The result is a new fibered manifold where all the fibers except two are connected.

Properties

Any surjective submersion π : E → B is open: for each open V ⊆ E, the set π(V) is open in B.
Each fiber π⁻¹(b), b ∈ B, is a closed embedded submanifold of E of dimension dim E − dim B.
A fibered manifold admits local sections: for each y ∈ E there is an open neighborhood U of π(y) in B and a smooth mapping s : U → E with π ∘ s = Id_U and s(π(y)) = y.
A surjection π : E → B is a fibered manifold if and only if there exists a local section s : U → E of π (with π ∘ s = Id_U) passing through each y ∈ E.

Fibered coordinates

Let B (resp. E) be an n-dimensional (resp. p-dimensional) manifold. A fibered manifold (E, π, B) admits fiber charts. We say that a chart (V, ψ) on E is a fiber chart, or is adapted to the surjective submersion π : E → B, if there exists a chart (U, φ) on B such that U = π(V) and the base coordinates are recovered from the fiber chart by

pr₁ ∘ ψ = φ ∘ π|_V,

where pr₁ is the projection onto the first n coordinates. The chart (U, φ) is then obviously unique. In view of the above property, the fibered coordinates of a fiber chart (V, ψ) are usually denoted by ψ = (x^i, y^σ) with i ∈ {1, …, n} and σ ∈ {1, …, p − n}, where the coordinates of the corresponding chart (U, φ) on B are then denoted, with the obvious convention, by φ = (x^i). Conversely, if a surjection π : E → B admits a fibered atlas, then π : E → B is a fibered manifold.

Local trivialization and fiber bundles

Let (E, π, B) be a fibered manifold and F any manifold. Then an open covering {U_α} of B together with diffeomorphisms ψ_α : π⁻¹(U_α) → U_α × F commuting with the projections to U_α is called a trivializing covering; a fibered manifold admitting one is a fiber bundle with typical fiber F.
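A submersion can be checked in coordinates by computing the rank of the Jacobian. The SymPy sketch below (an assumed example: the squared-radius map on ℝ³ minus the origin, whose fibers are spheres) verifies that the tangent map is surjective away from the origin:

```python
# pi(x, y, z) = x^2 + y^2 + z^2 maps R^3 \ {0} onto (0, inf).  Its Jacobian
# [2x, 2y, 2z] has rank 1 everywhere except the origin, so pi is a
# surjective submersion there and (R^3 \ {0}, pi, (0, inf)) is a fibered
# manifold whose fibers are spheres.
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
pi = sp.Matrix([x**2 + y**2 + z**2])
J = pi.jacobian([x, y, z])
print(J)                                         # Matrix([[2*x, 2*y, 2*z]])
print(sp.solve([J[0], J[1], J[2]], [x, y, z]))   # rank drops only at the origin
```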
https://en.wikipedia.org/wiki/Conjugate-permutable%20subgroup
In mathematics, in the field of group theory, a conjugate-permutable subgroup is a subgroup that commutes with all its conjugate subgroups. The term was introduced by Tuval Foguel in 1997 and arose in the context of the proof that for finite groups, every quasinormal subgroup is a subnormal subgroup. Clearly, every quasinormal subgroup is conjugate-permutable. In fact, it is true that for a finite group:

Every maximal conjugate-permutable subgroup is normal.
Every conjugate-permutable subgroup is a conjugate-permutable subgroup of every intermediate subgroup containing it.
Combining the above two facts, every conjugate-permutable subgroup is subnormal.

Conversely, every 2-subnormal subgroup (that is, a subgroup that is a normal subgroup of a normal subgroup) is conjugate-permutable.
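The defining condition can be brute-forced for small groups. The Python sketch below (an assumed example, with permutations of {0, 1, 2} represented as tuples) shows that the order-2 subgroup generated by the transposition (0 1) fails conjugate-permutability in S₃, while the normal subgroup A₃ trivially passes, consistent with the facts above (the order-2 subgroup is not subnormal in S₃):

```python
# Check H * H^g == H^g * H for all g in G, with permutations as tuples.
from itertools import permutations

def compose(p, q):           # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(H, g):              # conjugate subgroup g H g^{-1}
    gi = inverse(g)
    return {compose(compose(g, h), gi) for h in H}

def product(A, B):           # the set A B = {a b : a in A, b in B}
    return {compose(a, b) for a in A for b in B}

G = set(permutations(range(3)))            # S_3
H = {(0, 1, 2), (1, 0, 2)}                 # <(0 1)>, order 2
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # alternating group, normal

for K, name in ((H, "<(0 1)>"), (A3, "A_3")):
    ok = all(product(K, conj(K, g)) == product(conj(K, g), K) for g in G)
    print(name, "conjugate-permutable in S_3:", ok)   # False, then True
```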
https://en.wikipedia.org/wiki/American%20Institute%20of%20Mathematics
The American Institute of Mathematics (AIM) is one of eight mathematical institutes in the United States funded by the National Science Foundation (NSF). It was founded in 1994 by John Fry, co-founder of Fry's Electronics, and was originally located in the Fry's Electronics store in San Jose, California. It was privately funded by Fry at inception and has obtained NSF funding since 2002. Since 2023, the institute has been located on the campus of the California Institute of Technology in Pasadena, California.

History

The institute was founded with the primary goal of identifying and solving important mathematical problems. Originally, very small groups of top mathematicians would be assembled to solve a major problem, such as the Birch and Swinnerton-Dyer conjecture. Later, the institute began running a program of week-long workshops on current topics in mathematical research. These workshops rely strongly on interactive problem sessions. Brian Conrey became the institute's director in 1997.

From 1998 to 2009 (with the exception of 1999), AIM annually awarded a five-year fellowship to an "outstanding new PhD pursuing research in an area of pure mathematics", but it is not currently offering the fellowship. AIM also sponsors local mathematics competitions and a yearly meeting for women mathematicians.

The institute had planned to move to Morgan Hill, California, about 39 miles (63 km) to the southeast of San Jose, when its new facility was completed. Plans for the new facility were started about 2000, but construction work was delayed by regulatory and engineering issues. In February 2014, AIM received permission to start construction of the facility, which was to be built as a facsimile of the Alhambra, a 14th-century Moorish palace and fortress in Spain, but as of August 2017, no construction activity had started. On March 24, 2022, the institute announced its relocation to the California Institute of Technology (Caltech).

Alexanderson Award

In 2018, AIM announced a new prize in mathematics: the Alexanderson Award, recognizing outstanding scholarly articles arising from AIM research activities that have been published within the past few years. The award honors Gerald L. Alexanderson, Professor at Santa Clara University and founding chair of AIM's Board of Trustees.

Sponsored research

The American Institute of Mathematics has sponsored fundamental research on high-profile problems in several mathematical areas. Among them are:

Combinatorics
The strong perfect graph theorem — proved in 2003 by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas
Hadwiger's conjecture — research by Neil Robertson and Paul Seymour

Representation theory
Atlas of Lie groups and representations, a massive project to compute the unitary representations of Lie groups. The computations have been done for the exceptional Lie group E8.
https://en.wikipedia.org/wiki/%E2%89%A1
The symbol ≡ (triple bar) is used in science and mathematics with several different meanings. It may refer to the following:

Mathematics
Identity (mathematics), identity of two mathematical expressions
Logical biconditional, in logic (if and only if)
Modular arithmetic, a ≡ b (mod m)
Equivalence relation, often denoted using a triple bar

Chemistry
Triple bond, a type of covalent bond between two atoms

Computing
Hamburger button, often used for drop-down menus
Symbol for the line feed character in ISO 2047

See also
≅, a symbol used in approximation
The eight trigrams of the Bagua: ☰, ☱, ☲, ☳, ☴, ☵, ☶, ☷
Ξ, capital letter Xi of the Greek alphabet
三, Chinese numeral for the number 3
Glossary of mathematical symbols
Tesla Model 3, whose logo originally stylized the digit 3 as three horizontal bars
III (disambiguation), three letter Is in a row
https://en.wikipedia.org/wiki/Prediction%20models
Prediction models may refer to:

Financial forecast or stock market prediction, in finance
Free-space path loss, in telecommunications
Predictive inference, in statistics
https://en.wikipedia.org/wiki/Hurwitz%27s%20theorem
Hurwitz's theorem can refer to several theorems named after Adolf Hurwitz:

Hurwitz's theorem (complex analysis)
Riemann–Hurwitz formula in algebraic geometry
Hurwitz's theorem (composition algebras) on quadratic forms and nonassociative algebras
Hurwitz's automorphisms theorem on Riemann surfaces
Hurwitz's theorem (number theory)
https://en.wikipedia.org/wiki/K-vector
In mathematics and physics, k-vector may refer to:

A wave vector k
Crystal momentum
A multivector of grade k, also called a k-vector, the dual of a differential k-form
An element of a k-dimensional vector space, especially a four-vector used in relativity to mean a quantity related to four-dimensional spacetime
https://en.wikipedia.org/wiki/Centre%20for%20Mathematical%20Sciences%20%28Cambridge%29
The Centre for Mathematical Sciences (CMS) at the University of Cambridge houses the university's Faculty of Mathematics, the Isaac Newton Institute, and the Betty and Gordon Moore Library. It is situated on Wilberforce Road, on a site which was formerly a St John's College playing field and has been leased by St John's to the university as part of its expansion into West Cambridge.

The Isaac Newton Institute was opened in July 1992. Andrew Wiles announced his proof of Fermat's Last Theorem here on 23 June 1993, though the proof required additional fine-tuning before it was complete.

The rest of the site was designed by Edward Cullinan architects and Buro Happold, and construction under project manager Davis Langdon was completed in 2003. It consists of 340 offices in 7 'pavilions', arranged in a parabola around a 'central core' with lecture rooms, common space, and a grass-covered roof, as well as a gatehouse. The design won awards including the British Construction Industry Major Project Award 2003, the David Urwin Design Award 2003, the Royal Fine Art Commission Trust Specialist Award 2003 and the RIBA Award 2003.

External links

Centre for Mathematical Sciences, University of Cambridge
Article by Jonathan Glancey in The Guardian
https://en.wikipedia.org/wiki/Faculty%20of%20Mathematics%2C%20University%20of%20Cambridge
The Faculty of Mathematics at the University of Cambridge comprises the Department of Pure Mathematics and Mathematical Statistics (DPMMS) and the Department of Applied Mathematics and Theoretical Physics (DAMTP). It is housed in the Centre for Mathematical Sciences site in West Cambridge, alongside the Isaac Newton Institute. Many distinguished mathematicians have been members of the faculty. Some current members DPMMS Béla Bollobás John Coates Thomas Forster Timothy Gowers Peter Johnstone Imre Leader Gabriel Paternain Statistical Laboratory John Aston Geoffrey Grimmett Frank Kelly Ioannis Kontoyiannis Richard Nickl James Norris Richard Samworth David Spiegelhalter Richard Weber DAMTP Gary Gibbons Julia Gog, professor of mathematical biology Raymond E. Goldstein Rich Kerswell Paul Linden Michael Green Peter Haynes, fluid dynamicist John Hinch, fluid dynamicist, retired 2014 Richard Jozsa Hugh Osborn John Papaloizou Malcolm Perry David Tong, theoretical physicist Paul Townsend Grae Worster, editor for the Journal of Fluid Mechanics Mihaela van der Schaar Carola-Bibiane Schönlieb Pure Mathematics and Mathematical Statistics The Department of Pure Mathematics and Mathematical Statistics (DPMMS) was created in 1964 under the headship of Sir William Hodge. It was housed in a converted warehouse at 16 Mill Lane, adjacent to its sister department DAMTP, until its move around 2000 to the present Centre for Mathematical Sciences where it occupies Pavilions C, D, and E. Heads of department 1964–1969 W. V. D. Hodge 1969–1984 J. W. S. Cassels 1984–1991 D. J. H. Garling 1991–1997 John H. Coates 1997–2002 W. B. R. Lickorish 2002–2007 Geoffrey Grimmett 2007–2014 Martin Hyland 2014–2018 Gabriel Paternain 2018–2023 James Norris 2023- Ivan Smith Statistical Laboratory The Statistical Laboratory is a Sub-Department of DPMMS. It was created in 1947 with accommodation in a "temporary hut", and was established on 21 March 1953 within the Faculty of Mathematics. It moved in 1958 to the basement of the new Chemistry Department in Lensfield Road, and then formed part of the new Department (DPMMS) in Mill Lane on its creation in 1964. It occupies Pavilion D of the Centre for Mathematical Sciences. Directors of the Statistical Laboratory 1953–1956 John Wishart 1956–1957 Henry Daniels, Acting Director 1957–1960 Dennis Lindley 1960–1962 Morris Walker, Acting Director 1962–1973 David Kendall 1973–1987 Peter Whittle 1987–1991 David Williams 1991–1993 Frank Kelly 1994–2000 Geoffrey Grimmett 2000–2009 Richard Weber 2009–2017 James Norris 2017– Richard Samworth Applied Mathematics and Theoretical Physics The Department of Applied Mathematics and Theoretical Physics (DAMTP) was founded by George Batchelor in 1959, and for many years was situated on Silver Street, in the former office buildings of Cambridge University Press. Currently, the Department is located at the Centre for Mathematical Sciences (Cambridge). Theoretical Physics (including cosmology, relativity,
https://en.wikipedia.org/wiki/West%20Yorkshire%20Built-up%20Area
The West Yorkshire Built-up Area, previously known as the West Yorkshire Urban Area, is a term used by the Office for National Statistics (ONS) to refer to a conurbation in West Yorkshire, England, based on the cities of Leeds, Bradford and Wakefield, and the large towns of Huddersfield and Halifax. It is the 4th largest urban area in the United Kingdom. However, it excludes other towns and villages such as Featherstone, Normanton, Castleford, Pontefract, Hemsworth, Todmorden, Hebden Bridge, Knottingley, Wetherby and Garforth which, though part of the county of West Yorkshire, are considered independently. There are substantial areas of agricultural land within the designated area – more than in any other official urban area in England – and many of the towns and cities are only just connected with one another by narrow outlying strips of development. Urban subdivisions The ONS gives the conurbation a population of 1,777,934 (2011 census), which makes it the fourth-most populous in the UK. The ONS divides the area into 39 sub-divisions: Three further subdivisions are given with no population numbers as they are present or former industrial areas with no resident population. Rawdon is the subdivision name for Horsforth Vale, on which a former industrial plant was redeveloped for housing from 2010, too late to be recorded for the 2011 census. Brookfoot Quarry (Marshalls Southowram) Esholt Water Treatment plant, named 'Works, nr Bradford' by the ONS Rawdon Note that the areas below do not have exactly the same borders in each census, so the numbers are not always comparable (e.g. what was classified as Lofthouse/Stanley in 2001 was classified as part of Wakefield in 2011). References Urban areas of England Geography of West Yorkshire
https://en.wikipedia.org/wiki/Farnborough/Aldershot%20built-up%20area
Farnborough/Aldershot built-up area and Aldershot Urban Area are names used by the Office for National Statistics (ONS) to refer to a conurbation spanning the borders of Surrey, Berkshire and Hampshire in England. The ONS found a population of 252,937 in 2011 (up 4%, rounded, from the 2001 figure of 243,344 residents). This makes it the 29th-largest built-up area in England. Aldershot and Farnborough, together with Frimley and Camberley, have been identified as a conurbation since at least the mid-20th century. These four places had a total population of 70,000 in 1931, which grew to 91,700 by 1961. Most of the conurbation lies alongside the River Blackwater, which gives a wider area including Fleet (which is not geographically in the Blackwater Valley) the alternative name of Blackwater Valley. The area forms part of the London metropolitan area and borders the Metropolitan green belt. It almost adjoins the somewhat lower-density Reading/Wokingham Urban Area at Sandhurst. Subdivisions It was given these subdivisions in the 2011 census: References Urban areas of England Geography of Hampshire Geography of Surrey Geography of Berkshire Aldershot
https://en.wikipedia.org/wiki/Imprecise%20probability
Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. In this way, the theory aims to represent the available knowledge more accurately. Imprecision is useful for dealing with expert elicitation, because: People have a limited ability to determine their own subjective probabilities and might find that they can only provide an interval. As an interval is compatible with a range of opinions, the analysis ought to be more convincing to a range of different people. Introduction Uncertainty is traditionally modelled by a probability distribution, as developed by Kolmogorov, Laplace, de Finetti, Ramsey, Cox, Lindley, and many others. However, this has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when only little information or data is available—an early example of such criticism is Boole's critique of Laplace's work—or when we wish to model probabilities that a group agrees with, rather than those of a single individual. Perhaps the most common generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by $\underline{P}$ and $\overline{P}$, or more generally, lower and upper expectations (previsions), aim to fill this gap. A lower probability function is superadditive but not necessarily additive, whereas an upper probability is subadditive. To get a general understanding of the theory, consider: the special case with $\underline{P}(A) = \overline{P}(A)$ for all events $A$ is equivalent to a precise probability, while $\underline{P}(A) = 0$ and $\overline{P}(A) = 1$ for all non-trivial events $A$ represents no constraint at all on the specification of $P(A)$. We then have a flexible continuum of more or less precise models in between. Some approaches, summarized under the name nonadditive probabilities, directly use one of these set functions, assuming the other one to be naturally defined such that $\underline{P}(A) = 1 - \overline{P}(A^c)$, with $A^c$ the complement of $A$. Other related concepts understand the corresponding intervals $[\underline{P}(A), \overline{P}(A)]$ for all events as the basic entity. History The idea to use imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, by George Boole, who aimed to reconcile the theories of logic and probability. In the 1920s, in A Treatise on Probability, Keynes formulated and applied an explicit interval estimate approach to probability. Work on imprecise probability models proceeded fitfully throughout the 20th century, with important contributions by Bernard Koopman, C.A.B. Smith, I.J. Good, Arthur Dempster, Glenn Shafer, Peter M. Williams, Henry Kyburg, Isaac Levi, and Teddy Seidenfeld. At the start of the 1990s, the field started to gather some momentum
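A small illustration of the interval idea (my own sketch, not part of the article; the distributions in it are made up): given a finite credal set of candidate distributions, the lower and upper probabilities of an event are simply the smallest and largest probabilities the candidates assign to it, and the conjugacy relation above can be checked directly.

# Three candidate distributions over outcomes {a, b, c}: a finite "credal set".
credal_set = [
    {"a": 0.20, "b": 0.50, "c": 0.30},
    {"a": 0.30, "b": 0.40, "c": 0.30},
    {"a": 0.25, "b": 0.45, "c": 0.30},
]

def lower_upper(event):
    # Lower/upper probability of an event (a set of outcomes): the lower
    # and upper envelope of the probabilities over the credal set.
    probs = [sum(p[x] for x in event) for p in credal_set]
    return min(probs), max(probs)

print(lower_upper({"a"}))       # (0.2, 0.3)
print(lower_upper({"b", "c"}))  # (0.7, 0.8): equals 1 minus the upper/lower probability of {"a"}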
https://en.wikipedia.org/wiki/Double%20descent
In statistics and machine learning, double descent is the phenomenon in which a statistical model with a small number of parameters and a model with an extremely large number of parameters both achieve small test error, while a model whose number of parameters is about the same as the number of data points used to train it suffers large test error. It was discovered around 2018 when researchers were trying to reconcile the bias-variance tradeoff in classical statistics, which predicts that having too many parameters yields extremely large error, with the 2010s empirical observation of machine learning practitioners that the larger a model is, the better it works. The scaling behavior of double descent has been found to follow a broken neural scaling law functional form. References Further reading External links Model selection Machine learning Statistical classification
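The phenomenon can be reproduced in a few lines. The following sketch (an illustration of mine, not from the article, assuming NumPy is available) fits minimum-norm least-squares models with a growing number of random cosine features to 30 noisy training points; the test error typically spikes when the feature count is near the number of training points (the interpolation threshold) and descends again for much larger models, though the exact location and height of the spike depend on the random draw.

import numpy as np

rng = np.random.default_rng(0)
n_train = 30
x_train = rng.uniform(-1, 1, n_train)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(n_train)
x_test = np.linspace(-1, 1, 400)
y_test = np.sin(2 * np.pi * x_test)

for k in [5, 10, 20, 28, 30, 32, 45, 100, 500]:
    freqs = rng.uniform(0, 8, k)                # k random cosine features
    phi_train = np.cos(np.outer(x_train, freqs))
    phi_test = np.cos(np.outer(x_test, freqs))
    w = np.linalg.pinv(phi_train) @ y_train     # minimum-norm least-squares fit
    mse = np.mean((phi_test @ w - y_test) ** 2)
    print(f"k={k:4d}  test MSE={mse:.3f}")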
https://en.wikipedia.org/wiki/List%20of%20Jewish%20American%20mathematicians
This is a list of notable Jewish American mathematicians. For other Jewish Americans, see Lists of Jewish Americans. Abraham Adrian Albert (1905-1972), abstract algebra Kenneth Appel (1932-2013), four-color problem Lipman Bers (1914-1993), non-linear elliptic equations Paul Cohen (1934-2007), set theorist; Fields Medal (1966) Jesse Douglas (1897-1965), mathematician; Fields Medal (1936), Bôcher Memorial Prize (1943) Samuel Eilenberg (1913-1988), category theory; Wolf Prize (1986), Steele Prize (1987) Yakov Eliashberg (born 1946), symplectic topology and partial differential equations Charles Fefferman (born 1949), mathematician; Fields Medal (1978), Bôcher Prize (2008) William Feller (1906-1970), probability theory Michael Freedman (born 1951), mathematician; Fields Medal (1986) Hillel Furstenberg (born 1935), mathematician; Wolf Prize (2006/07), Abel Prize (2020) Michael Golomb (1909-2008), theory of approximation Michael Harris (born 1954), mathematician E. Morton Jellinek (1890-1963), biostatistician Edward Kasner (1878-1955), mathematician Sergiu Klainerman (born 1950), hyperbolic differential equations and general relativity, MacArthur Fellow (1991), Guggenheim Fellow (1997), Bôcher Memorial Prize (1999) Cornelius Lanczos (1893-1974), mathematician and mathematical physicist Peter Lax (born 1926), mathematician; Wolf Prize (1987), Steele Prize (1993), Abel Prize (2005) Emma Lehmer (1906-2007), mathematician Grigory Margulis (born 1946), mathematician; Fields Medal (1978), Wolf Prize (2005), Abel Prize (2020) Barry Mazur (born 1937), mathematician; Cole Prize (1982), Chern Medal (2022) John von Neumann (1903-1957), mathematician Ken Ribet (born 1948), algebraic number theory and algebraic geometry Peter Sarnak (born 1953), analytic number theory; Pólya Prize (1998), Cole Prize (2005), Wolf Prize (2014) Yakov Sinai (born 1935), dynamical systems; Wolf Prize (1997), Steele Prize (2013), Abel Prize (2014) Isadore Singer (1924-2021), mathematician; Bôcher Prize (1969), Steele Prize (2000), Abel Prize (2004) Robert M. Solovay (born 1938), mathematician; Paris Kanellakis Award (2003) Elias Stein (1931-2018), harmonic analysis; Wolf Prize (1999), Steele Prize (2002) Edward Witten (born 1951), theoretical physics; Fields Medal (1990) See also List of Jewish mathematicians References Jewish Mathematicians American Jewish Mathematicians
https://en.wikipedia.org/wiki/Alexander%20Bogomolny
Alexander Bogomolny (January 4, 1948 – July 7, 2018) was a Soviet-born Israeli-American mathematician. He was Professor Emeritus of Mathematics at the University of Iowa, and formerly research fellow at the Moscow Institute of Electronics and Mathematics, senior instructor at Hebrew University and software consultant at Ben Gurion University. He wrote extensively about arithmetic, probability, algebra, geometry, trigonometry and mathematical games. He was known for his contribution to heuristics and mathematics education, creating and maintaining the mathematically themed educational website Cut-the-Knot for the Mathematical Association of America (MAA) Online. He was a pioneer in mathematical education on the internet, having started Cut-the-Knot in October 1996. Education and academic career Bogomolny attended Moscow school No. 444, for gifted children, then entered Moscow State University, where he graduated with a master's degree in mathematics in 1971. From 1971 to 1974 he was a junior research fellow at the Moscow Institute of Electronic Machine Building (MIEM). He emigrated to Israel and became a senior programmer at Lake Kinneret Research Laboratory in Tiberias, Israel (1974–1977) and a software consultant at Ben Gurion University of the Negev, Be’er Sheva, Israel (1976–1977). From 1976 to 1983 he was a senior instructor and researcher at Hebrew University in Jerusalem. He received his Ph.D. in mathematics at Hebrew University in 1981. His dissertation is titled A New Numerical Solution for the Stamp Problem, and his thesis advisor was Gregory I. Eskin. From 1981 to 1982 he was also a visiting professor at Ohio State University, where he taught mathematics. From 1982 to 1987 he was professor of mathematics at the University of Iowa. From August 1987 to August 1991 he was vice president of software development at CompuDoc, Inc. Cut-the-Knot Cut-the-Knot (CTK) is a free, advertisement-funded educational website which Bogomolny maintained from 1996 to 2018. It is devoted to popular exposition of various topics in mathematics. The site was designed for teachers, children and parents, and anyone else curious about mathematics, with an eye to educating, encouraging interest, and provoking curiosity. Its name is a reference to the legend of Alexander the Great's solution to the Gordian knot. CTK won more than 20 awards from scientific and educational publications, including a Scientific American Web Award in 2003, the Encyclopædia Britannica's Internet Guide Award, and Science's NetWatch award. The site contains extensive analysis of many of the classic problems in recreational mathematics including the Apollonian gasket, Napoleon's theorem, logarithmic spirals, the "Futurama Theorem" from the episode "The Prisoner of Benda", the Pitot theorem, and the monkey and the coconuts problem. One page includes 122 proofs of the Pythagorean theorem. Bogomolny wrote a manifesto for CTK in which he said that "Judging Mathematics by its pragmatic value is
https://en.wikipedia.org/wiki/Abel%27s%20identity
In mathematics, Abel's identity (also called Abel's formula or Abel's differential equation identity) is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation. The relation can be generalised to nth-order linear ordinary differential equations. The identity is named after the Norwegian mathematician Niels Henrik Abel. Since Abel's identity relates the different linearly independent solutions of the differential equation, it can be used to find one solution from the other. It provides useful identities relating the solutions, and is also useful as a part of other techniques such as the method of variation of parameters. It is especially useful for equations such as Bessel's equation where the solutions do not have a simple analytical form, because in such cases the Wronskian is difficult to compute directly. A generalisation to first-order systems of homogeneous linear differential equations is given by Liouville's formula. Statement Consider a homogeneous linear second-order ordinary differential equation $y'' + p(x)y' + q(x)y = 0$ on an interval $I$ of the real line with real- or complex-valued continuous functions $p$ and $q$. Abel's identity states that the Wronskian of two real- or complex-valued solutions $y_1$ and $y_2$ of this differential equation, that is the function defined by the determinant $W(y_1, y_2)(x) = y_1(x) y_2'(x) - y_1'(x) y_2(x)$, satisfies the relation $W(y_1, y_2)(x) = W(y_1, y_2)(x_0) \exp\left(-\int_{x_0}^{x} p(t)\,dt\right)$ for each point $x_0 \in I$. Remarks In particular, when the differential equation is real-valued, the Wronskian is always either identically zero, always positive, or always negative at every point in $I$ (see proof below). The latter cases imply the two solutions $y_1$ and $y_2$ are linearly independent (see Wronskian for a proof). It is not necessary to assume that the second derivatives of the solutions $y_1$ and $y_2$ are continuous. Abel's theorem is particularly useful if $p \equiv 0$, because it implies that $W$ is constant. Proof Differentiating the Wronskian using the product rule gives (writing $W$ for $W(y_1, y_2)$ and omitting the argument $x$ for brevity) $W' = y_1' y_2' + y_1 y_2'' - y_1'' y_2 - y_1' y_2' = y_1 y_2'' - y_1'' y_2$. Solving for $y''$ in the original differential equation yields $y'' = -(p y' + q y)$. Substituting this result into the derivative of the Wronskian function to replace the second derivatives of $y_1$ and $y_2$ gives $W' = -y_1 (p y_2' + q y_2) + (p y_1' + q y_1) y_2 = -p (y_1 y_2' - y_1' y_2) = -p W$. This is a first-order linear differential equation, and it remains to show that Abel's identity gives the unique solution which attains the value $W(x_0)$ at $x_0$. Since the function $p$ is continuous on $I$, it is bounded on every closed and bounded subinterval of $I$ and therefore integrable, hence $V(x) = W(x) \exp\left(\int_{x_0}^{x} p(t)\,dt\right)$ is a well-defined function. Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, one obtains $V'(x) = \left(W'(x) + W(x) p(x)\right) \exp\left(\int_{x_0}^{x} p(t)\,dt\right) = 0$ due to the differential equation for $W$. Therefore, $V$ has to be constant on $I$, because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since $V(x_0) = W(x_0)$, Abel's identity follows.
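The Bessel remark above can be checked numerically. A minimal sketch (mine, assuming NumPy and SciPy are available): for Bessel's equation of order zero in the form $y'' + (1/x)y' + y = 0$, Abel's identity with $p(x) = 1/x$ predicts $W(x) = C/x$, so $x \cdot W(x)$ should be constant; for the solution pair $(J_0, Y_0)$ the constant is known to be $2/\pi$.

import numpy as np
from scipy.special import j0, y0, j1, y1

def wronskian(x):
    # W = J0 * Y0' - J0' * Y0, using the derivative identities J0' = -J1, Y0' = -Y1.
    return j0(x) * (-y1(x)) - (-j1(x)) * y0(x)

# x * W(x) should be the same constant at every x, namely 2/pi.
for x in [0.5, 1.0, 2.0, 10.0]:
    print(x, x * wronskian(x), 2 / np.pi)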
https://en.wikipedia.org/wiki/Indecomposable%20continuum
In point-set topology, an indecomposable continuum is a continuum that is indecomposable, i.e. that cannot be expressed as the union of any two of its proper subcontinua. In 1910, L. E. J. Brouwer was the first to describe an indecomposable continuum. Indecomposable continua have been used by topologists as a source of counterexamples. They also occur in dynamical systems. Definitions A continuum is a nonempty compact connected metric space. The arc, the n-sphere, and the Hilbert cube are examples of path-connected continua; the topologist's sine curve is an example of a non-path-connected continuum, and the Warsaw circle is a path-connected continuum that is not locally path-connected. A subcontinuum $Y$ of a continuum $X$ is a closed, connected subset of $X$. A space is nondegenerate if it is not equal to a single point. A continuum $X$ is decomposable if there exist two subcontinua $A$ and $B$ of $X$ such that $A \neq X$ and $B \neq X$ but $A \cup B = X$. It follows that $A$ and $B$ are nondegenerate. A continuum that is not decomposable is an indecomposable continuum. A continuum in which every subcontinuum is indecomposable is said to be hereditarily indecomposable. A composant of an indecomposable continuum $X$ is a maximal set in which any two points lie within some proper subcontinuum of $X$. A continuum $X$ is irreducible between $a$ and $b$ if $a, b \in X$ and no proper subcontinuum of $X$ contains both points. For a nondegenerate indecomposable metric continuum $X$, there exists an uncountable subset $J$ such that $X$ is irreducible between any two points of $J$. History In 1910 L. E. J. Brouwer described an indecomposable continuum that disproved a conjecture made by Arthur Moritz Schoenflies that, if $U$ and $V$ are open, connected, disjoint sets in $\mathbb{R}^2$ such that $\partial U = \partial V$, then their common boundary must be the union of two closed, connected proper subsets. Zygmunt Janiszewski described more such indecomposable continua, including a version of the bucket handle. Janiszewski, however, focused on the irreducibility of these continua. In 1917 Kunizo Yoneyama described the Lakes of Wada (named after Takeo Wada) whose common boundary is indecomposable. In the 1920s indecomposable continua began to be studied by the Warsaw School of Mathematics in Fundamenta Mathematicae for their own sake, rather than as pathological counterexamples. Stefan Mazurkiewicz was the first to give the definition of indecomposability. In 1922 Bronisław Knaster described the pseudo-arc, the first example found of a hereditarily indecomposable continuum. Bucket handle example Indecomposable continua are often constructed as the limit of a sequence of nested intersections, or (more generally) as the inverse limit of a sequence of continua. The buckethandle, or Brouwer–Janiszewski–Knaster continuum, is often considered the simplest example of an indecomposable continuum, and can be so constructed. Alternatively, take the Cantor ternary set $C$ projected onto the interval $[0, 1]$ of the $x$-axis in the plane. Let $C_0$ be the family of semicircles above the $x$-axis with center $(1/2, 0)$ and with endpoints on $C$ (which is symmetric about this point)
https://en.wikipedia.org/wiki/Prostate%20cancer%20staging
Prostate cancer staging is the process by which physicians categorize the risk of cancer having spread beyond the prostate, or equivalently, the probability of being cured with local therapies such as surgery or radiation. Once patients are placed in prognostic categories, this information can contribute to the selection of an optimal approach to treatment. Prostate cancer stage can be assessed by either clinical or pathological staging methods. Clinical staging usually occurs before the first treatment and tumour presence is determined through imaging and rectal examination, while pathological staging is done after treatment once a biopsy is performed or the prostate is removed by looking at the cell types within the sample. There are two schemes commonly used to stage prostate cancer in the United States. The most common is promulgated by the American Joint Committee on Cancer (AJCC), and is known as the TNM system, which evaluates the size of the tumor, the extent of involved lymph nodes, and any metastasis (distant spread) and also takes into account cancer grade. As with many other cancers, these are often grouped into four stages (I–IV). Another scheme that was used in the past was Whitmore-Jewett staging, although TNM staging is more common in modern practice. In the United Kingdom the 5-tiered Cambridge Prognostic Group (CPG) is used, replacing a previous system that divided prostate cancer into three risk groups. TNM staging From the AJCC 7th edition and UICC 7th edition. Stage I disease is cancer that is found incidentally in a small part of the sample when prostate tissue was removed for other reasons, such as benign prostatic hypertrophy, and the cells closely resemble normal cells and the gland feels normal to the examining finger. In Stage II more of the prostate is involved and a lump can be felt within the gland. In Stage III, the tumor has spread through the prostatic capsule and the lump can be felt on the surface of the gland. In Stage IV disease, the tumor has invaded nearby structures, or has spread to lymph nodes or other organs. The Gleason Grading System is based on cellular content and tissue architecture from biopsies, which provides an estimate of the destructive potential and ultimate prognosis of the disease. Evaluation of the (primary) tumor ('T') Clinical T stage (cT) cTX: cannot evaluate the primary tumor cT0: no evidence of tumor cT1: tumor present, but not detectable clinically or with imaging cT1a: tumor was incidentally found in 5% or less of prostate tissue resected (for other reasons) cT1b: tumor was incidentally found in greater than 5% of prostate tissue resected cT1c: tumor was found in a needle biopsy performed due to an elevated serum PSA cT2: the tumor can be felt (palpated) on examination, but has not spread outside the prostate cT2a: the tumor is in half or less than half of one of the prostate gland's two lobes cT2b: the tumor is in more than half of one lobe, but not both cT2c: th
https://en.wikipedia.org/wiki/Random%20variate
In probability and statistics, a random variate or simply variate is a particular outcome of a random variable; the random variates which are other outcomes of the same random variable might have different values (random numbers). A random deviate or simply deviate is the difference of a random variate from the distribution's central location (e.g., mean), often divided by the standard deviation of the distribution (i.e., as a standard score). Random variates are used when simulating processes driven by random influences (stochastic processes). In modern applications, such simulations would derive random variates corresponding to any given probability distribution from computer procedures designed to create random variates corresponding to a uniform distribution, where these procedures would actually provide values chosen from a uniform distribution of pseudorandom numbers. Procedures to generate random variates corresponding to a given distribution are known as procedures for (uniform) random number generation or non-uniform pseudo-random variate generation. In probability theory, a random variable is a measurable function from a probability space to a measurable space of values that the variable can take on. In that context, those values are also known as random variates or random deviates, and this represents a wider meaning than just that associated with pseudorandom numbers. Definition Devroye defines a random variate generation algorithm (for real numbers) as follows: Assume that computers can manipulate real numbers and have access to a source of random variates that are uniformly distributed on the closed interval [0,1]. Then a random variate generation algorithm is any program that halts almost surely and exits with a real number x. This x is called a random variate. (Both assumptions are violated in most real computers. Computers necessarily lack the ability to manipulate real numbers, typically using floating point representations instead. Most computers lack a source of true randomness (like certain hardware random number generators), and instead use pseudorandom number sequences.) The distinction between random variable and random variate is subtle and is not always made in the literature. It is useful when one wants to distinguish between a random variable itself with an associated probability distribution on the one hand, and random draws from that probability distribution on the other, in particular when those draws are ultimately derived by floating-point arithmetic from a pseudo-random sequence. Practical aspects For the generation of uniform random variates, see Random number generation. For the generation of non-uniform random variates, see Pseudo-random number sampling. See also Deviation (statistics) Raw score References Statistical randomness
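As a concrete instance of such a generation procedure, the sketch below (my own illustration) turns uniform variates from Python's random module into exponential variates by inverse transform sampling:

import math
import random

def exponential_variate(lam):
    # Inverse transform sampling: if U is uniform on [0, 1), then
    # X = -ln(1 - U) / lam inverts the CDF F(x) = 1 - exp(-lam * x),
    # so X is an exponential variate with rate lam.
    u = random.random()
    return -math.log(1.0 - u) / lam

sample = [exponential_variate(2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))  # close to the mean 1 / lam = 0.5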
https://en.wikipedia.org/wiki/Lattice%20constant
A lattice constant or lattice parameter is one of the physical dimensions and angles that determine the geometry of the unit cells in a crystal lattice, and is proportional to the distance between atoms in the crystal. A simple cubic crystal has only one lattice constant, the distance between atoms, but in general lattices in three dimensions have six lattice constants: the lengths a, b, and c of the three cell edges meeting at a vertex, and the angles α, β, and γ between those edges. The crystal lattice parameters a, b, and c have the dimension of length. The three numbers represent the size of the unit cell, that is, the distance from a given atom to an identical atom in the same position and orientation in a neighboring cell (except for very simple crystal structures, this will not necessarily be the distance to the nearest neighbor). Their SI unit is the meter, and they are traditionally specified in angstroms (Å); an angstrom being 0.1 nanometer (nm), or 100 picometres (pm). Typical values start at a few angstroms. The angles α, β, and γ are usually specified in degrees. Introduction A chemical substance in the solid state may form crystals in which the atoms, molecules, or ions are arranged in space according to one of a small finite number of possible crystal systems (lattice types), each with a fairly well defined set of lattice parameters that are characteristic of the substance. These parameters typically depend on the temperature, pressure (or, more generally, the local state of mechanical stress within the crystal), electric and magnetic fields, and its isotopic composition. The lattice is usually distorted near impurities, crystal defects, and the crystal's surface. Parameter values quoted in manuals should specify those environment variables, and are usually averages affected by measurement errors. Depending on the crystal system, some or all of the lengths may be equal, and some of the angles may have fixed values. In those systems, only some of the six parameters need to be specified. For example, in the cubic system, all of the lengths are equal and all the angles are 90°, so only the a length needs to be given. This is the case of diamond, which has a = 3.567 Å at 300 K. Similarly, in the hexagonal system, the a and b constants are equal, and the angles are 60°, 90°, and 90°, so the geometry is determined by the a and c constants alone. The lattice parameters of a crystalline substance can be determined using techniques such as X-ray diffraction or with an atomic force microscope. They can be used as a natural length standard of nanometer range. In the epitaxial growth of a crystal layer over a substrate of different composition, the lattice parameters must be matched in order to reduce strain and crystal defects. Volume The volume of the unit cell can be calculated from the lattice constant lengths and angles. If the unit cell sides are represented as vectors, then the volume is the scalar triple product of the vectors. The volume i
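Written out from the scalar triple product, the general (triclinic) volume is V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). A short sketch of that standard formula (the function name is my own):

import math

def unit_cell_volume(a, b, c, alpha, beta, gamma):
    # Edge lengths in any consistent unit, angles in degrees; the result is
    # the scalar triple product of the three cell-edge vectors.
    ca, cb, cg = (math.cos(math.radians(t)) for t in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)

# Cubic diamond: a = b = c = 3.567 angstroms and all angles 90 degrees,
# so the volume reduces to a**3.
print(unit_cell_volume(3.567, 3.567, 3.567, 90, 90, 90))  # about 45.38 cubic angstroms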
https://en.wikipedia.org/wiki/ALTRAN
ALTRAN (ALgebraic TRANslator) is a programming language for the formal manipulation of rational functions of several variables with integer coefficients. It was developed at Bell Labs in the 1960s. ALTRAN is a FORTRAN version of the ALPAK rational algebra package, and “can be thought of as a variant of FORTRAN with the addition of an extra declaration, the ‘algebraic’ type declaration.” Although ALTRAN is written in ANSI FORTRAN, nevertheless there exist differences in FORTRAN implementations. ALTRAN handles machine dependencies through the use of a macro processor called M6. ALTRAN should not be confused with the ALGOL to FORTRAN Translator, called Altran, that "converts Extended Algol programs into Fortran IV." History ALPAK, written in 1964, originally consisted of a set of subroutines for FORTRAN written in assembly language. These subroutines were themselves rewritten in FORTRAN for ALTRAN. An early version of ALTRAN was developed by M. Douglas McIlroy and W. Stanley Brown in the middle 1960s. However, soon after the completion of their ALTRAN translator, the IBM 7094 computers, on which ALPAK and ALTRAN were reliant, began to be phased out in favor of newer machines. This led to the development of a more advanced ALTRAN language and implementation developed by Brown, Andrew D. Hall, Stephen C. Johnson, Dennis M. Ritchie, and Stuart I. Feldman, which was highly portable. The translator was implemented by Ritchie, the interpreter by Hall, the run-time rational function and polynomial routines by Feldman, Hall, and Johnson, and the I/O routines by Johnson. Later, Feldman and Julia Ho added a rational expression evaluation package that generated accurate and efficient FORTRAN subroutines for the numerical evaluation of symbolic expressions produced by ALTRAN. In 1979, ALTRAN was ported to the Control Data Corporation 6600 and Cyber 176 computers at the Air Force Weapons Laboratory. They found that "ALTRAN is about 15 times faster than FORMAC in a PL/I environment, and it is at least 12 times faster than REDUCE." It was also observed that ALTRAN was able to quickly solve problems which neither FORMAC nor REDUCE could handle on the given hardware or in reasonable time. Sample program
PROCEDURE MAIN
# SIMPLE EXAMPLE OF USE OF FTNOUT
LONG ALGEBRAIC (X:10,Y:10) F
ALTRAN FTNOUT
OPTS(201,72) # FTNOUT REQUIRES A LINE LENGTH OF 72
F = EXPAND( (X+2*Y+1000000)**3 )
WRITE F # PRINT F
WRITE (25) " FUNCTION F(X,Y)", "C EXAMPLE PROG WRITTEN WITH FTNOUT.", F, " RETURN", " END"
# WE HAVE WRITTEN A SIMPLE PROGRAM ON UNIT 25, NOW WE INVOKE FTNOUT TO
# CONVERT THIS ALTRAN OUTPUT TO LEGAL FORTRAN.
FTNOUT
END
Operations References W.S. Brown, "A language and system for symbolic algebra on a digital computer", SYMSAC '66 Proceedings of the first ACM symposium on Symbolic and algebraic manipulation, pp. 501–540, January 1966. W.S. Brown, ALTRAN User's Manual (2nd ed.), Bell L
https://en.wikipedia.org/wiki/Measure%20%28data%20warehouse%29
In a data warehouse, a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made. A measure can be categorical, algebraic, or holistic. Example For example, if a retail store sold a specific product, the quantity and prices of each item sold could be added or averaged to find the total number of items sold or the total or average price of the goods sold. Use of ISO representation terms When entering data into a metadata registry such as ISO/IEC 11179, representation terms such as number, value and measure are typically used as measures. See also Data warehouse Dimension (data warehouse) References Kimball, Ralph et al. (1998); The Data Warehouse Lifecycle Toolkit, p17. Pub. Wiley. . Kimball, Ralph (1996); The Data Warehouse Toolkit, p100. Pub. Wiley. . Han, Jiawei Pei, Jian Tong, Hanghang. (2023). Data Mining Concepts and Techniques (4th Edition) - 3.2.4 Measures: Categorization and Computation. (pp. 105). Elsevier. Data warehousing Metadata
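An illustrative sketch of aggregating measures in a small fact table (my own example, assuming pandas; the column names are hypothetical, not from any cited schema):

import pandas as pd

# Each row is one line item of a sale; quantity and unit_price are measures.
sales = pd.DataFrame({
    "product": ["widget", "widget", "gadget"],
    "quantity": [3, 5, 2],
    "unit_price": [9.99, 9.99, 24.50],
})

print(sales["quantity"].sum())                      # total items sold
print(sales["unit_price"].mean())                   # average price
print(sales.groupby("product")["quantity"].sum())   # totals per product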
https://en.wikipedia.org/wiki/British%20Mathematical%20Olympiad
The British Mathematical Olympiad (BMO) forms part of the selection process for the UK International Mathematical Olympiad team and for other international maths competitions, including the European Girls' Mathematical Olympiad, the Romanian Master of Mathematics and Sciences, and the Balkan Mathematical Olympiad. It is organised by the British Mathematical Olympiad Subtrust, which is part of the United Kingdom Mathematics Trust. There are two rounds, the BMO1 and the BMO2. BMO Round 1 The first round of the BMO is held in November each year, and from 2006 has been an open entry competition. Qualification for BMO Round 1 is through the Senior Mathematical Challenge. Students who do not qualify through the Senior Mathematical Challenge may be entered at the discretion of their school for a fee of £40. The paper lasts 3½ hours, and consists of six questions (from 2005), each worth 10 marks. The exam in the 2020-2021 cycle was adjusted to consist of two sections: a first section with 4 questions, each worth 5 marks (only answers required), and a second section with 3 questions, each worth 10 marks (full solutions required). The duration of the exam was reduced to 2½ hours, due to the difficulties of holding a 3½-hour exam under COVID-19. Candidates are required to write full proofs to the questions. An answer is marked on either a "0+" or a "10-" mark scheme, depending on whether the answer looks generally complete or not. An answer judged incomplete or unfinished is usually capped at 3 or 4, whereas for an answer judged as complete, marks may be deducted for minor errors or poor reasoning but it is likely to get a score of 7 or more. As a result, it is uncommon for an answer to score a middling mark between 4 and 6. While around 1000 gain automatic qualification to sit the BMO1 paper each year, the additional discretionary and international students mean that since 2016, on average, around 1600 candidates have been entered for BMO1 each year. Although these candidates represent the very best mathematicians in their age group, the difficulty level of the BMO papers means that many of them attain a very low score. The scores were particularly low until 2004, when, for example, the median score was approximately 5-6 (out of 50). In 2005, UKMT changed the system and added an extra easier question, meaning the median is now higher. In 2008, 23 students scored more than 40/60 and around 50 got over 30/60. In addition to the British students, until 2018 there was a history of about 20 students from New Zealand being invited to take part. In recent years, entries to BMO have been made from schools in Ireland, Kazakhstan, India, China, South Korea, Hong Kong, Singapore, and Thailand. The BMO1 paper for the 2021-22 cycle attracted 1857 entries. Only 5 candidates scored 90% or more. A score of 21/60 was enough to earn a Distinction, awarded to the top 26% of candidates. From the results of the BMO1, around 100 top-scoring students are invited
https://en.wikipedia.org/wiki/Pseudo-arc
In general topology, the pseudo-arc is the simplest nondegenerate hereditarily indecomposable continuum. The pseudo-arc is an arc-like homogeneous continuum, and played a central role in the classification of homogeneous planar continua. R. H. Bing proved that, in a certain well-defined sense, most continua in Rn, n ≥ 2, are homeomorphic to the pseudo-arc. History In 1920, Bronisław Knaster and Kazimierz Kuratowski asked whether a nondegenerate homogeneous continuum in the Euclidean plane R2 must be a Jordan curve. In 1921, Stefan Mazurkiewicz asked whether a nondegenerate continuum in R2 that is homeomorphic to each of its nondegenerate subcontinua must be an arc. In 1922, Knaster discovered the first example of a hereditarily indecomposable continuum K, later named the pseudo-arc, giving a negative answer to Mazurkiewicz's question. In 1948, R. H. Bing proved that Knaster's continuum is homogeneous, i.e. for any two of its points there is a homeomorphism taking one to the other. Yet also in 1948, Edwin Moise showed that Knaster's continuum is homeomorphic to each of its non-degenerate subcontinua. Due to its resemblance to the fundamental property of the arc, namely, being homeomorphic to all its nondegenerate subcontinua, Moise called his example M a pseudo-arc. Bing's construction is a modification of Moise's construction of M, which he had first heard described in a lecture. In 1951, Bing proved that all hereditarily indecomposable arc-like continua are homeomorphic — this implies that Knaster's K, Moise's M, and Bing's B are all homeomorphic. Bing also proved that the pseudo-arc is typical among the continua in a Euclidean space of dimension at least 2 or an infinite-dimensional separable Hilbert space. Bing and F. Burton Jones constructed a decomposable planar continuum that admits an open map onto the circle, with each point preimage homeomorphic to the pseudo-arc, called the circle of pseudo-arcs. Bing and Jones also showed that it is homogeneous. In 2016 Logan Hoehn and Lex Oversteegen classified all planar homogeneous continua, up to homeomorphism, as the circle, the pseudo-arc and the circle of pseudo-arcs. In 2019 Hoehn and Oversteegen showed that the pseudo-arc is topologically the only hereditarily equivalent planar continuum other than the arc, thus providing a complete solution to the planar case of Mazurkiewicz's problem from 1921. Construction The following construction of the pseudo-arc follows a standard construction from the literature. Chains At the heart of the definition of the pseudo-arc is the concept of a chain, which is defined as follows: A chain is a finite collection of open sets $C = \{C_1, C_2, \ldots, C_n\}$ in a metric space such that $C_i \cap C_j \neq \emptyset$ if and only if $|i - j| \leq 1$. The elements of a chain are called its links, and a chain is called an ε-chain if each of its links has diameter less than ε. While being the simplest of the type of spaces listed above, the pseudo-arc is actually very complex. The concept of a chain being crooked (defined below) is what endows the pseudo-arc with its complexity
https://en.wikipedia.org/wiki/Jayyous
Jayyus is a Palestinian village near the west border of the West Bank, close to Qalqilya. It is a farming community. According to the Palestinian Central Bureau of Statistics, the village had a population of 3,478 inhabitants in 2017. Location Jayyus (including Khirbet Sir) is located northeast of Qalqiliya. It is bordered by Baqat al Hatab and Kafr Laqif to the east, Kafr Jamal, Kafr Zibad and Kafr ‘Abbush to the south, ‘Azzun, ‘Izbat at Tabib, An Nabi Elyas and ‘Arab Abu Farda to the west, and the Green Line to the north. History At Khirbet Sir, just east of Jayyus, two rock-cut tombs have been found, with a large mound with terraces cut in the sides, and a good well below. Byzantine ceramics have also been found. Ottoman era Jayyus was incorporated into the Ottoman Empire in 1517 with all of Palestine, and in 1596 it appeared in the tax registers as being in the Nahiya of Bani Sa'b of the Liwa of Nablus. It had a population of 24 households and 6 bachelors, all Muslim. The villagers paid taxes on wheat, barley, summer crops, olive trees, occasional revenues, goats and/or beehives; a total of 11,746 akçe. Half of the revenue went to a Muslim charitable endowment. According to historian Roy Marom, in the 18th or early 19th centuries, residents of Jayyous affiliated with the Qaysi camp during the Qays and Yaman conflicts, alongside residents of Deir Abu Mash'al and part of the residents of Bayt Nabala. They fought several skirmishes against Yamani rivals from Qibya and Dayr Tarif. In 1838, Robinson noted the village, called Jiyus, as being in the Beni Sa'ab district, west of Nablus. In the 1860s, the Ottoman authorities granted the village an agricultural plot of land called Ghabat Jayyus in the former confines of the Forest of Arsur (Ar. Al-Ghaba) in the coastal plain, west of the village. In 1870/1871 (1288 AH), an Ottoman census listed the village in the nahiya (sub-district) of Bani Sa'b. In 1882, the PEF's Survey of Western Palestine described Jiyus as a "moderate-sized stone village on a ridge, with olives to the south-east. It appears to be an ancient site, having rock-cut tombs and ancient wells." In the 19th century and early 20th century the village was dominated by the Palestinian el-Jayusah or Jayyusi clan. British Mandate era In the 1922 census of Palestine conducted by the British Mandate authorities, Jaiyus had a population of 433, all Muslims, increasing in the 1931 census to 569, again all Muslim, in a total of 147 houses. In the 1945 statistics the population of Jayyus consisted of 830 Muslims with a land area of 12,571 dunams according to an official land and population survey. Of this, 1,556 dunams were designated for plantations and irrigable land, 2,155 for cereals, while 22 dunams were built-up areas. Jordanian era In the wake of the 1948 Arab–Israeli War, and after the 1949 Armistice Agreements, Jayyus came under Jordanian rule. Post-1967 Since the Six-Day War in 1967, Jayyus has been und
https://en.wikipedia.org/wiki/P%C3%A9pin%27s%20test
In mathematics, Pépin's test is a primality test, which can be used to determine whether a Fermat number is prime. It is a variant of Proth's test. The test is named for a French mathematician, Théophile Pépin. Description of the test Let $F_n = 2^{2^n} + 1$ be the nth Fermat number. Pépin's test states that for n > 0, $F_n$ is prime if and only if $3^{(F_n-1)/2} \equiv -1 \pmod{F_n}$. The expression $3^{(F_n-1)/2}$ can be evaluated modulo $F_n$ by repeated squaring. This makes the test a fast polynomial-time algorithm. However, Fermat numbers grow so rapidly that only a handful of Fermat numbers can be tested in a reasonable amount of time and space. Other bases may be used in place of 3. These bases are: 3, 5, 6, 7, 10, 12, 14, 20, 24, 27, 28, 39, 40, 41, 45, 48, 51, 54, 56, 63, 65, 75, 78, 80, 82, 85, 90, 91, 96, 102, 105, 108, 112, 119, 125, 126, 130, 147, 150, 156, 160, ... . The primes in the above sequence are called elite primes; they are: 3, 5, 7, 41, 15361, 23041, 26881, 61441, 87041, 163841, 544001, 604801, 6684673, 14172161, 159318017, 446960641, 1151139841, 3208642561, 38126223361, 108905103361, 171727482881, 318093312001, 443069456129, 912680550401, ... For integer b > 1, base b may be used if and only if only a finite number of Fermat numbers $F_n$ satisfies that $\left(\frac{b}{F_n}\right) = 1$, where $\left(\frac{b}{F_n}\right)$ is the Jacobi symbol. In fact, Pépin's test is the same as the Euler-Jacobi test for Fermat numbers, since the Jacobi symbol $\left(\frac{3}{F_n}\right)$ is −1, i.e. there are no Fermat numbers which are Euler-Jacobi pseudoprimes to these bases listed above. Proof of correctness Sufficiency: assume that the congruence $3^{(F_n-1)/2} \equiv -1 \pmod{F_n}$ holds. Then $3^{F_n-1} \equiv 1 \pmod{F_n}$, thus the multiplicative order of 3 modulo $F_n$ divides $F_n - 1 = 2^{2^n}$, which is a power of two. On the other hand, the order does not divide $(F_n - 1)/2$, and therefore it must be equal to $F_n - 1$. In particular, there are at least $F_n - 1$ numbers below $F_n$ coprime to $F_n$, and this can happen only if $F_n$ is prime. Necessity: assume that $F_n$ is prime. By Euler's criterion, $3^{(F_n-1)/2} \equiv \left(\frac{3}{F_n}\right) \pmod{F_n}$, where $\left(\frac{3}{F_n}\right)$ is the Legendre symbol. By repeated squaring, we find that $2^{2^n} \equiv 1 \pmod{3}$, thus $F_n \equiv 2 \pmod{3}$, and $\left(\frac{F_n}{3}\right) = -1$. As $F_n \equiv 1 \pmod{4}$, we conclude $\left(\frac{3}{F_n}\right) = -1$ from the law of quadratic reciprocity. Historical Pépin tests Because of the sparsity of the Fermat numbers, the Pépin test has only been run eight times (on Fermat numbers whose primality statuses were not already known). Mayer, Papadopoulos and Crandall speculate that in fact, because of the size of the still undetermined Fermat numbers, it will take considerable advances in technology before any more Pépin tests can be run in a reasonable amount of time. The smallest untested Fermat number with no known prime factor is $F_{33}$, which has 2,585,827,973 digits. Notes References P. Pépin, Sur la formule, Comptes rendus de l'Académie des Sciences de Paris 85 (1877), pp. 329–333. External links The Prime Glossary: Pepin's test Primality tests
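The test is short to state in code. A direct rendering (my own sketch) using Python's built-in fast modular exponentiation:

def pepin_is_prime(n):
    # Pepin's test for the Fermat number F_n = 2**(2**n) + 1, for n > 0:
    # F_n is prime if and only if 3**((F_n - 1) // 2) is congruent to -1 mod F_n.
    f = 2 ** (2 ** n) + 1
    return pow(3, (f - 1) // 2, f) == f - 1

# F_1, ..., F_4 are prime; F_5 = 4294967297 = 641 * 6700417 is composite.
print([pepin_is_prime(n) for n in range(1, 6)])  # [True, True, True, True, False]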
https://en.wikipedia.org/wiki/Complex%20modulus
Complex modulus may refer to: Modulus of a complex number, in mathematics, the norm or absolute value of a complex number Dynamic modulus, in materials engineering, the ratio of stress to strain under vibratory conditions
https://en.wikipedia.org/wiki/Jagjit%20Singh%20%28writer%29
Jagjit Singh (1912–2002) was an Indian writer and science popularizer. In college he excelled in mathematics courses, receiving his MA in Mathematics from the Government College, Lahore. Yet he made his career as an important director of India's railways, applying his mathematical skills there. Upon retirement, he set about writing several books, starting with Great Ideas of Modern Mathematics, popularizing science and targeting laymen. Singh subsequently won the Kalinga Prize from UNESCO in 1963, being the first Indian and Asian to do so. In 1960, he was appointed director of the Indian Railways Board, and nine years later he was appointed general manager of the Northeast Frontier Railway. After his retirement he went to work as managing director of Indian Drugs and Pharmaceuticals, adviser to the Asian Development Bank and adviser to Tata Chemicals. Singh was elected a Fellow of the Royal Statistical Society of London, and was President of the Operational Research Society of India and a member of the Indian Statistical Institute. He was awarded an honorary Doctorate in Science in 1968 by Roorkee University. He was also chosen by Abdus Salam, the Pakistani scientist who won the Nobel Prize in Physics in 1979, to write Salam's biography, which came out in 1992, published by Penguin Books. Some works Mathematical Ideas: Their Nature and Use (1959) Great Ideas of Modern Mathematics Great Ideas and Theories of Modern Cosmology Great Ideas in Information Theory, Language and Cybernetics Reminiscences of a Mathematician Manqué Great Ideas of Operations Research The making of a good science writer Abdus Salam: A Biography (1992) Modern Cosmology (1970) References Jagjit Singh under the heading 'A MAN OF SCIENCE' Biographical note Frontispiece notes in 'Modern Cosmology' (retitled from 'Great ideas and Theories of Modern Cosmology'), Penguin (1961). External links Modern Cosmology, Penguin (1961). Indian Sikhs Kalinga Prize recipients
https://en.wikipedia.org/wiki/Music%20Genome%20Project
The Music Genome Project is an effort to "capture the essence of music at the most fundamental level" using various attributes to describe songs and mathematics to connect them together into an interactive map. The Music Genome Project covers five music genres: Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical. Any given song is represented by approximately 450 "genes" (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, prevalent use of groove, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–450 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 0 and 5, in half-integer increments. The Music Genome Project's database is built using a methodology that includes the use of precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high. Given the vector of one or more songs, a list of other similar songs is constructed using what the company calls its "matching algorithm". Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song. Ten percent of songs are analyzed by more than one musician to ensure conformity with the in-house standards and statistical reliability. The Music Genome Project was first conceived by Will Glaser in late 1999, and populated with musicological input from Tim Westergren in early 2000. In January 2000, they joined forces with Jon Kraft to found Savage Beast Technologies to bring their idea to market. The Music Genome Project was developed in its entirety by Pandora Media and remains the core technology used for Pandora Radio, its internet radio service. Although there was a time when the company licensed this technology for use by others, today it is limited for use only by its users. Intellectual property "Music Genome Project" is a registered trademark in the United States. The mark is owned by the company Pandora Media, Inc. The Music Genome Project is covered by U.S. Patent 7,003,515, which shows William T. Glaser, Timothy B. Westergren, Jeffrey P. Stearns, and Jonathan M. Kraft as the inventors of this technology. The patent has been assigned to Pandora Media, Inc. With that initial patent filed, most of the intellectual property associated with Glaser's founding algorithm remains a trade secret to this day. The full list of attributes for individual songs is not publicly released, and ostensibly constitutes a trade secret. See also Moodbar MusicBrainz Pandora Radio WhoSampled References Further reading External links "The Music Genome Project"—short historical statement by Tim Westergren Patent Number 7003515—Co
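Pandora's actual matching algorithm is proprietary and undisclosed, so the following is only a generic sketch of how vectors of gene values scored 0–5 in half-integer steps might be compared (all song names and gene values here are made up):

import math

# Hypothetical gene vectors for three songs (each entry is a 0-5 half-step score).
songs = {
    "song_a": [4.5, 2.0, 0.5, 3.0],
    "song_b": [4.0, 2.5, 1.0, 3.0],
    "song_c": [0.5, 5.0, 4.5, 1.0],
}

def distance(u, v):
    # Euclidean distance between gene vectors; smaller means more similar.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

query = songs["song_a"]
ranked = sorted(songs, key=lambda name: distance(songs[name], query))
print(ranked)  # ['song_a', 'song_b', 'song_c']: song_b is the closest match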
https://en.wikipedia.org/wiki/Liverpool%20Built-up%20Area
The Liverpool Built-up Area (previously Liverpool Urban Area in 2001 and prior) is a term used by the Office for National Statistics (ONS) to denote the urban area around Liverpool in England, to the east of the River Mersey. The contiguous built-up area extends beyond the area administered by Liverpool City Council into adjoining local authority areas, particularly parts of Sefton and Knowsley. As defined by the ONS, the area extends as far east as St Helens, Haydock, and Ashton-in-Makerfield in Greater Manchester. The Liverpool Urban Area is not the same area as Merseyside (or Greater Merseyside), which includes areas of Wirral on the west bank of the Mersey and Southport. The western extent of the Greater Manchester conurbation is narrowly avoided, as that extends as far as Golborne and Newton-le-Willows, with small gaps separating those towns from Ashton-in-Makerfield and Haydock. Settlements The Liverpool Urban Area defined by the ONS covers Liverpool and its contiguous built-up areas, with a population of 864,122, a considerable increase from the 2001 census due to the rapid growth in the population of Liverpool during this period. The population of the area was 816,216 in the 2001 census. The urban area facing Liverpool on the Wirral Peninsula is a separate division known as the Birkenhead Urban Area. The ONS definition is based purely on physical criteria with a focus on the presence or absence of significant gaps between built-up areas. It therefore extends as far as Ashton-in-Makerfield, but excludes some areas much closer to Liverpool which are separated from it by open spaces, notably Kirkby, with a narrow gap along the M57 motorway, and Maghull. Subdivisions are not always aligned to present administrative or county borders. For example, Liverpool as designated by the ONS also contains the towns of Huyton, Roby, and Halewood, which are all within the neighbouring borough of Knowsley. St Helens only covers the settlement, and not the St Helens borough, which contains Rainford and Haydock. According to the ONS, the subcomponents of the Liverpool Urban Area are: Notes: Huyton-with-Roby was included as part of the Liverpool subdivision in the 2011 census. Rainford and Ashton-in-Makerfield were not part of the Liverpool Urban Area prior to 2011. Greater Liverpool Greater Liverpool is an informal term used by the Rent Service as one of its Broad Rental Market Areas (BRMA). This area includes such districts outside the Liverpool City Council boundaries as Crosby, Maghull, Prescot and St Helens. Merseytravel includes a similar Greater Liverpool area in its Public Transport Map and Guide, as seen on its Liverpool area map. References Geography of Merseyside Urban areas of England
https://en.wikipedia.org/wiki/Reading%20built-up%20area
The Reading Built-up Area or Reading/Wokingham Urban Area is a name given by the Office for National Statistics to a conurbation in Berkshire, England, with a population of 318,014. This was a significant decrease from the 2001 census figure of 369,804, due to Bracknell no longer being considered part of the built-up area; it is instead counted as part of the Greater London Urban Area. Its largest population centre is Reading, and it also includes Arborfield, Woodley, Theale, Crowthorne, Earley and Wokingham. Part of the urban area, Crowthorne, is just to the north of Sandhurst, part of the Farnborough/Aldershot Urban Area, and its eastern extremity is just west of Bracknell, part of the Greater London Urban Area. References Urban areas of England
https://en.wikipedia.org/wiki/Partition%20problem
In number theory and computer science, the partition problem, or number partitioning, is the task of deciding whether a given multiset S of positive integers can be partitioned into two subsets S1 and S2 such that the sum of the numbers in S1 equals the sum of the numbers in S2. Although the partition problem is NP-complete, there is a pseudo-polynomial time dynamic programming solution, and there are heuristics that solve the problem in many instances, either optimally or approximately. For this reason, it has been called "the easiest hard problem". There is an optimization version of the partition problem, which is to partition the multiset S into two subsets S1, S2 such that the difference between the sum of elements in S1 and the sum of elements in S2 is minimized. The optimization version is NP-hard, but can be solved efficiently in practice. The partition problem is a special case of two related problems: In the subset sum problem, the goal is to find a subset of S whose sum is a certain target number T given as input (the partition problem is the special case in which T is half the sum of S). In multiway number partitioning, there is an integer parameter k, and the goal is to decide whether S can be partitioned into k subsets of equal sum (the partition problem is the special case in which k = 2). However, it is quite different from the 3-partition problem: in that problem, the number of subsets is not fixed in advance – it should be |S|/3, where each subset must have exactly 3 elements. 3-partition is much harder than partition – it has no pseudo-polynomial time algorithm unless P = NP. Examples Given S = {3,1,1,2,2,1}, a valid solution to the partition problem is the two sets S1 = {1,1,1,2} and S2 = {2,3}. Both sets sum to 5, and they partition S. Note that this solution is not unique. S1 = {3,1,1} and S2 = {2,2,1} is another solution. Not every multiset of positive integers has a partition into two subsets with equal sum. An example of such a set is S = {2,5}. Computational hardness The partition problem is NP-hard. This can be proved by reduction from the subset sum problem. An instance of SubsetSum consists of a set S of positive integers and a target sum T; the goal is to decide if there is a subset of S with sum exactly T. Given such an instance, construct an instance of Partition in which the input set contains the original set plus two elements: z1 and z2, with z1 = sum(S) and z2 = 2T. The sum of this input set is sum(S) + z1 + z2 = 2 sum(S) + 2T, so the target sum for Partition is sum(S) + T. Suppose there exists a solution S′ to the SubsetSum instance. Then sum(S′) = T, so sum(S′ ∪ {z1}) = sum(S) + T, so S′ ∪ {z1} is a solution to the Partition instance. Conversely, suppose there exists a solution S′′ to the Partition instance. Then S′′ must contain either z1 or z2, but not both, since their sum is more than sum(S) + T. If S′′ contains z1, then it must contain elements from S with a sum of exactly T, so S′′ minus z1 is a solution to the SubsetSum instance.
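A sketch of the pseudo-polynomial dynamic program mentioned above (my own illustration, not from the article): reachable subset sums are tracked up to half the total, and the instance is a yes-instance exactly when half the total is reachable.

def can_partition(nums):
    total = sum(nums)
    if total % 2:
        return False  # an odd total can never split into two equal halves
    half = total // 2
    reachable = {0}  # subset sums achievable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= half}
    return half in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: {1,1,1,2} and {2,3} both sum to 5
print(can_partition([2, 5]))              # False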
https://en.wikipedia.org/wiki/Carpenter%27s%20rule%20problem
The carpenter's rule problem is a discrete geometry problem, which can be stated in the following manner: Can a simple planar polygon be moved continuously to a position where all its vertices are in convex position, so that the edge lengths and simplicity are preserved along the way? A closely related problem is to show that any non-self-crossing polygonal chain can be straightened, again by a continuous transformation that preserves edge distances and avoids crossings. Both problems were successfully solved by Robert Connelly, Erik Demaine and Günter Rote (2003). The problem is named after the multiple-jointed wooden rulers popular among carpenters in the 19th and early 20th centuries before improvements to metal tape measures made them obsolete. Combinatorial proof Subsequent to their work, Ileana Streinu provided a simplified combinatorial proof formulated in the terminology of robot arm motion planning. Both the original proof and Streinu's proof work by finding expansive motions of the input, continuous transformations such that no two points ever move towards each other. Streinu's version of the proof adds edges to the input to form a pointed pseudotriangulation, removes one added convex hull edge from this graph, and shows that the remaining graph has a one-parameter family of motions in which all distances are nondecreasing. By repeatedly applying such motions, one eventually reaches a state in which no further expansive motions are possible, which can only happen when the input has been straightened or convexified. An application of this result to the mathematics of paper folding describes how to fold any single-vertex origami shape using only simple non-self-intersecting motions of the paper. Essentially, this folding process is a time-reversed version of the problem of convexifying a polygon of length smaller than π, but on the surface of a sphere rather than in the Euclidean plane. This result was later extended to spherical polygons of edge length smaller than 2π. Generalization John Pardon generalized the carpenter's rule problem to rectifiable curves. He showed that every rectifiable Jordan curve can be made convex, without increasing its length and without decreasing the distance between any pair of points. This research, performed while he was still a high school student, won Pardon the second-place prize in the 2007 Intel Science Talent Search. See also Curve-shortening flow, a continuous transformation of a closed curve in the plane that eventually convexifies it References Preliminary version appeared at the 41st Annual Symposium on Foundations of Computer Science, 2000. External links Erik Demaine's page with animations of the straightening motion applied to some linkages Discrete geometry Recreational mathematics Mathematical problems Mathematics of rigidity
https://en.wikipedia.org/wiki/History%20of%20computer%20science
The history of computer science began long before the modern discipline of computer science, usually appearing in forms like mathematics or physics. Developments in previous centuries alluded to the discipline that we now know as computer science. This progression, from mechanical inventions and mathematical theories towards modern computer concepts and machines, led to the development of a major academic field, massive technological advancement across the Western world, and the basis of a massive worldwide trade and culture. Prehistory The earliest known tool for use in computation was the abacus, developed in the period between 2700 and 2300 BCE in Sumer. The Sumerians' abacus consisted of a table of successive columns which delimited the successive orders of magnitude of their sexagesimal number system. Its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today, such as the Chinese abacus. In the 5th century BC in ancient India, the grammarian Pāṇini formulated the grammar of Sanskrit in 3959 rules known as the Ashtadhyayi, which was highly systematized and technical. Pāṇini used metarules, transformations and recursions. The Antikythera mechanism is believed to be an early mechanical analog computer. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Mechanical analog computer devices appeared again a thousand years later in the medieval Islamic world and were developed by Muslim astronomers, such as the mechanical geared astrolabe by Abū Rayhān al-Bīrūnī, and the torquetum by Jabir ibn Aflah. According to Simon Singh, Muslim mathematicians also made important advances in cryptography, such as the development of cryptanalysis and frequency analysis by Alkindus. Programmable machines were also invented by Muslim engineers, such as the automatic flute player by the Banū Mūsā brothers. Technological artifacts of similar complexity appeared in 14th century Europe, with mechanical astronomical clocks. When John Napier discovered logarithms for computational purposes in the early 17th century, there followed a period of considerable progress by inventors and scientists in making calculating tools. In 1623 Wilhelm Schickard designed a calculating machine, but abandoned the project when the prototype he had started building was destroyed by fire in 1624. Around 1640, Blaise Pascal, a leading French mathematician, constructed a mechanical adding device based on a design described by Greek mathematician Hero of Alexandria. Then in 1672 Gottfried Wilhelm Leibniz invented the Stepped Reckoner, which he completed in 1694. In 1837 Charles Babbage first described his Analytical Engine, which is accepted as the first design for a modern computer. The analytical engine had expandable memory, an arithmetic unit, and lo
https://en.wikipedia.org/wiki/Trade%20in%20services%20statistics
Trade in services statistics are economic statistics which detail international trade in services. They received a great deal of focus at the advent of services negotiations which took place under the Uruguay Round, which became part of the General Agreement on Trade in Services, one of the four principal pillars of the World Trade Organization (WTO) trade treaty, also called the "WTO Agreement". The General Agreement on Trade in Services (GATS) Four Modes of Supply comprises: Mode 1 Cross-border trade, which is defined as delivery of a service from the territory of one country into the territory of another country; Mode 2 Consumption abroad - this mode covers supply of a service of one country to the service consumer of any other country; Mode 3 Commercial presence - which covers services provided by a service supplier of one country in the territory of any other country, i.e., foreign direct investment undertaken by a service provider; Mode 4 Presence of natural persons - which covers services provided by a service supplier of one country through the presence of natural persons in the territory of another economy. Statistics which correspond to the GATS Four Modes of Supply comprise quantitative data addressing: Trade in services, which is defined as delivery of a service from the territory of one country into the territory of another country; specific disaggregation as per the GATS Four Modes of Supply may not apply, i.e., this depends on decisions taken by each country; Foreign direct investment (FDI) Cross-border foreign investment as per International Monetary Fund guidelines. Roughly corresponding to Mode 3 Foreign Affiliate Trade Statistics (FATS) Statistics, or corporate data detailing the operations of foreign direct investment-based enterprises, including sales, expenditures, profits, value-added, inter- and intra-firm trade, exports and imports; Roughly corresponding to Mode 3 Statistics which detail commercial services trade taking place under the GATS are in a state of development in most countries. Most countries do not have information which details trade as per the GATS Four Modes of Supply, which makes trade negotiations in this realm difficult, especially for developing country WTO members. The United States Bureau of Economic Analysis produces rich statistics in this area, but they do not address the GATS Four Modes of Supply directly; rather, they address only cross-border services, generally defined, and statistics related to FDI. FATS are collected by the United States BEA, and several other OECD countries. UN Manual on Services Statistics UN Manual on Statistics of International Trade in Services (from the UN website) Statistical databases OECD statistics on trade in services database OECD statistics on value added and employment Eurostat database: Industry, Trade and Services US BEA page on international economic accounts JETRO database for FDI and trade in goods and services UNCTAD FDI/TNC database U
https://en.wikipedia.org/wiki/Foreign%20affiliate%20trade%20statistics
Foreign affiliate trade statistics (FATS), also known as transnational corporation (TNC) data, detail the economic operations of foreign direct investment-based enterprises. Collection of such information, and aggregation at the national level, can provide economists and policymakers with insight as to the relationship that transnational corporations, being FDI-related enterprises, have with economies. FATS indicators include: employment information, expenditures, exports and imports (specific to FDI-owned firms), inter- and intra-firm trade, profits, sales, value-added (product). Inward FATS - Data which represent the operations of foreign-owned (in the FDI sense, i.e. at a minimum of 10% of book value) firms in the local economy, or country. Outward FATS - Data which represent the operations of firms abroad which are owned by a firm in the home country ("owned" in the FDI sense, i.e. at a minimum of 10% of book value). FATS are an economic indicator which has a direct linkage to WTO-GATS Mode 3 legal commitments; GATS Mode 3 is one of the Four Modes of Supply enshrined as the framework of the General Agreement on Trade in Services GATS of the World Trade Organization WTO. FATS describe economic activities which take place as a result of WTO-GATS Mode 3 enterprise trade, or trade which takes place under Commercial Presence circumstances. The standard definition of Commercial Presence in the WTO-GATS differs from the generally accepted definition of FDI, which under IMF Balance of Payments Volume 5 standards is 10%; WTO-GATS Commercial Presence defines ownership level benchmarks at 10% of enterprise book value. See also Foreign direct investment General Agreement on Trade in Services Statistical databases on Services Trade Statistics, FDI and FATS OECD statistics on trade in services database OECD statistics on value added and employment OECD Statistics for Trade in Services from Publication Eurostat - Statistics Explained: Foreign affiliates statistics - FATS Eurostat database: Industry, Trade and Services US BEA page on international economic accounts JETRO database for FDI and trade in goods and services UNCTAD FDI/FATS database UNCTAD World Investment Directory online ITC/UNCTAD Investment Map: Foreign direct investment together with foreign affiliates, international trade and market access
https://en.wikipedia.org/wiki/Padovan%20cuboid%20spiral
In mathematics, the Padovan cuboid spiral is the spiral created by joining the diagonals of faces of successive cuboids added to a unit cube. The cuboids are added sequentially so that the resulting cuboid has dimensions that are successive Padovan numbers. The first cuboid is 1x1x1. The second is formed by adding to this a 1x1x1 cuboid to form a 1x1x2 cuboid. To this is added a 1x1x2 cuboid to form a 1x2x2 cuboid. This pattern continues, forming in succession a 2x2x3 cuboid, a 2x3x4 cuboid, and so on. Joining the diagonals of the exposed end of each newly added cuboid creates a spiral (seen as the black line in the figure). The points on this spiral all lie in the same plane. The cuboids are added in a sequence that adds to the face in the positive y direction, then the positive x direction, then the positive z direction. This is followed by cuboids added in the negative y, negative x and negative z directions. Each new cuboid added has a length and width that matches the length and width of the face being added to. The height of the nth added cuboid is the nth Padovan number. Connecting alternate points where the spiral bends creates a series of triangles, where each triangle has two sides that are successive Padovan numbers, with an obtuse angle of 120 degrees between these two sides. References External links Padovan Spiral Numbers, Robert Dickau, Wolfram Demonstrations Project Spirals Cuboids
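The cuboid dimensions can be generated directly from the Padovan recurrence P(n) = P(n-2) + P(n-3). A minimal illustrative sketch in Python:

```python
def padovan(count):
    """Yield the first `count` Padovan numbers: 1, 1, 1, 2, 2, 3, 4, 5, 7, ...

    Defined by P(n) = P(n-2) + P(n-3) with P(0) = P(1) = P(2) = 1.
    """
    a, b, c = 1, 1, 1
    for _ in range(count):
        yield a
        a, b, c = b, c, a + b

seq = list(padovan(9))
print(seq)  # [1, 1, 1, 2, 2, 3, 4, 5, 7]

# Dimensions of the successive cuboids in the spiral are triples of
# consecutive Padovan numbers: 1x1x1, 1x1x2, 1x2x2, 2x2x3, 2x3x4, ...
for i in range(5):
    print(f"{seq[i]} x {seq[i+1]} x {seq[i+2]}")
```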
https://en.wikipedia.org/wiki/Osculate
In mathematics, osculate, meaning to touch (from the Latin osculum meaning kiss), may refer to: osculant, an invariant of hypersurfaces osculating circle osculating curve osculating plane osculating orbit osculating sphere The obsolete Quinarian system of biological classification attempted to group creatures into circles which could touch or overlap with adjacent circles, a phenomenon called 'osculation'.
https://en.wikipedia.org/wiki/Line%E2%80%93plane%20intersection
In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point. Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection. Algebraic form In vector notation, a plane can be expressed as the set of points $\mathbf{p}$ for which $(\mathbf{p} - \mathbf{p}_0) \cdot \mathbf{n} = 0$, where $\mathbf{n}$ is a normal vector to the plane and $\mathbf{p}_0$ is a point on the plane. (The notation $\mathbf{a} \cdot \mathbf{b}$ denotes the dot product of the vectors $\mathbf{a}$ and $\mathbf{b}$.) The vector equation for a line is $\mathbf{p} = \mathbf{l}_0 + d\,\mathbf{l}$, where $\mathbf{l}$ is a unit vector in the direction of the line, $\mathbf{l}_0$ is a point on the line, and $d$ is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives $((\mathbf{l}_0 + d\,\mathbf{l}) - \mathbf{p}_0) \cdot \mathbf{n} = 0$. Expanding gives $d\,(\mathbf{l} \cdot \mathbf{n}) + (\mathbf{l}_0 - \mathbf{p}_0) \cdot \mathbf{n} = 0$. And solving for $d$ gives $d = \frac{(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n}}{\mathbf{l} \cdot \mathbf{n}}$. If $\mathbf{l} \cdot \mathbf{n} = 0$ then the line and plane are parallel. There will be two cases: if $(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n} = 0$ then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection. If $\mathbf{l} \cdot \mathbf{n} \neq 0$ there is a single point of intersection. The value of $d$ can be calculated and the point of intersection is given by $\mathbf{p} = \mathbf{l}_0 + d\,\mathbf{l}$. Parametric form A line is described by all points that are a given direction from a point. A general point on a line passing through points $\mathbf{l}_a$ and $\mathbf{l}_b$ can be represented as $\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t$, where $\mathbf{l}_b - \mathbf{l}_a$ is the vector pointing from $\mathbf{l}_a$ to $\mathbf{l}_b$. Similarly a general point on a plane determined by the triangle defined by the points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$ can be represented as $\mathbf{p}_0 + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v$, where $\mathbf{p}_1 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_1$, and $\mathbf{p}_2 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_2$. The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation $\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t = \mathbf{p}_0 + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v$. This can be rewritten as $\mathbf{l}_a - \mathbf{p}_0 = (\mathbf{l}_a - \mathbf{l}_b)\,t + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v$, which can be expressed in matrix form as $\begin{bmatrix} \mathbf{l}_a - \mathbf{l}_b & \mathbf{p}_1 - \mathbf{p}_0 & \mathbf{p}_2 - \mathbf{p}_0 \end{bmatrix} \begin{bmatrix} t \\ u \\ v \end{bmatrix} = \mathbf{l}_a - \mathbf{p}_0$, where the vectors are written as column vectors. This produces a system of linear equations which can be solved for $t$, $u$ and $v$. If the solution satisfies the condition $0 \le t \le 1$, then the intersection point is on the line segment between $\mathbf{l}_a$ and $\mathbf{l}_b$, otherwise it is elsewhere on the line. Likewise, if the solution satisfies $0 \le u, v \le 1$, then the intersection point is in the parallelogram formed by the point $\mathbf{p}_0$ and the vectors $\mathbf{p}_1 - \mathbf{p}_0$ and $\mathbf{p}_2 - \mathbf{p}_0$. If the solution additionally satisfies $u + v \le 1$, then the intersection point lies in the triangle formed by the three points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$. The determinant of the matrix can be calculated as $\det = (\mathbf{l}_a - \mathbf{l}_b) \cdot \bigl((\mathbf{p}_1 - \mathbf{p}_0) \times (\mathbf{p}_2 - \mathbf{p}_0)\bigr)$. If the determinant is zero, then there is no unique solution; the line is either in the plane or parallel to it. If a unique solution exists (determinant is not 0), then it can be found by inverting the matrix (equivalently, by Cramer's rule), giving the solutions $t = \frac{\bigl((\mathbf{p}_1 - \mathbf{p}_0) \times (\mathbf{p}_2 - \mathbf{p}_0)\bigr) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{\det}$, $u = \frac{\bigl((\mathbf{p}_2 - \mathbf{p}_0) \times (\mathbf{l}_a - \mathbf{l}_b)\bigr) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{\det}$, $v = \frac{\bigl((\mathbf{l}_a - \mathbf{l}_b) \times (\mathbf{p}_1 - \mathbf{p}_0)\bigr) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{\det}$. The point of intersection is then equal to $\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t$. Uses In the ray tracing method of computer graphics a surface can be r
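The algebraic form above translates directly into a few lines of code. A minimal sketch (the function name and the example plane are illustrative, not from the source):

```python
import numpy as np

def line_plane_intersection(n, p0, l0, l, eps=1e-9):
    """Intersect the line  p = l0 + d*l  with the plane  (p - p0) . n = 0.

    Returns the intersection point, or None if the line is parallel to
    the plane (it then either misses the plane or lies inside it).
    """
    n, p0, l0, l = (np.asarray(v, dtype=float) for v in (n, p0, l0, l))
    denom = l.dot(n)
    if abs(denom) < eps:        # direction orthogonal to the plane normal
        return None             # parallel: no unique intersection point
    d = (p0 - l0).dot(n) / denom
    return l0 + d * l

# Line through the origin along (0, 0, 1), plane z = 5 (normal (0, 0, 1)):
print(line_plane_intersection(n=[0, 0, 1], p0=[0, 0, 5],
                              l0=[0, 0, 0], l=[0, 0, 1]))  # -> [0. 0. 5.]
```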
https://en.wikipedia.org/wiki/Composant
In point-set topology, the composant of a point p in a continuum A is the union of all proper subcontinua of A that contain p. If a continuum is indecomposable, then its composants are pairwise disjoint. The composants of a continuum are dense in that continuum. References Continuum theory
https://en.wikipedia.org/wiki/Lagrange%27s%20theorem
In mathematics, Lagrange's theorem usually refers to any of the following theorems, attributed to Joseph Louis Lagrange: Lagrange's theorem (group theory) Lagrange's theorem (number theory) Lagrange's four-square theorem, which states that every positive integer can be expressed as the sum of four squares of integers Mean value theorem in calculus The Lagrange inversion theorem The Lagrange reversion theorem The method of Lagrangian multipliers for mathematical optimization
https://en.wikipedia.org/wiki/Lagrange%27s%20theorem%20%28number%20theory%29
In number theory, Lagrange's theorem is a statement named after Joseph-Louis Lagrange about how frequently a polynomial over the integers may evaluate to a multiple of a fixed prime. More precisely, it states that if $p$ is a prime number and $f(x) \in \mathbb{Z}[x]$ is a polynomial with integer coefficients, then either: every coefficient of $f$ is divisible by $p$, or $f(x) \equiv 0 \pmod{p}$ has at most $\deg f$ solutions, where $\deg f$ is the degree of $f$. If the modulus is not prime, then it is possible for there to be more than $\deg f$ solutions. Proof The two key ideas are the following. Let $g(x) \in \mathbb{F}_p[x]$ be the polynomial obtained from $f$ by taking the coefficients mod $p$. Now: (1) $f(k)$ is divisible by $p$ if and only if $g(k) = 0$; and (2) $g$ has no more than $\deg g$ roots. More rigorously, start by noting that $g = 0$ if and only if each coefficient of $f$ is divisible by $p$. Assume $g \neq 0$; its degree is thus well-defined. It is easy to see $\deg g \le \deg f$. To prove (1), first note that we can compute $g(k)$ either directly, i.e. by plugging in (the residue class of) $k$ and performing arithmetic in $\mathbb{F}_p$, or by reducing $f(k)$ mod $p$. Hence $g(k) = 0$ if and only if $f(k)$ is divisible by $p$. To prove (2), note that $\mathbb{F}_p$ is a field, which is a standard fact (a quick proof is to note that since $p$ is prime, $\mathbb{F}_p$ is a finite integral domain, hence is a field). Another standard fact is that a non-zero polynomial over a field has at most as many roots as its degree; this follows from the division algorithm. Finally, note that two solutions are incongruent if and only if they reduce to distinct elements of $\mathbb{F}_p$. Putting everything together, the number of incongruent solutions by (1) is the same as the number of roots of $g$, which by (2) is at most $\deg g$, which is at most $\deg f$. References Theorems about prime numbers Theorems about polynomials
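The theorem is easy to check by brute force for small moduli. A minimal illustrative sketch (the example polynomials are invented):

```python
def count_roots_mod_m(coeffs, m):
    """Count x in {0, ..., m-1} with f(x) = 0 (mod m), where f is given
    by `coeffs` (highest-degree coefficient first), via Horner's rule."""
    def f(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % m
        return acc
    return sum(1 for x in range(m) if f(x) == 0)

# f(x) = x^2 + 1 over the prime p = 13: two roots (5 and 8),
# within the degree bound of 2 guaranteed by the theorem.
print(count_roots_mod_m([1, 0, 1], 13))   # -> 2

# The bound can fail for composite moduli: x^2 - 1 has four roots mod 8.
print(count_roots_mod_m([1, 0, -1], 8))   # -> 4 (namely 1, 3, 5, 7)
```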
https://en.wikipedia.org/wiki/Unit%20measure
Unit measure is an axiom of probability theory that states that the probability of the entire sample space is equal to one (unity); that is, P(S)=1 where S is the sample space. Loosely speaking, it means that S must be chosen so that when the experiment is performed, something happens. The term measure here refers to the measure-theoretic approach to probability. Violations of unit measure have been reported in arguments about the outcomes of events under which events acquire "probabilities" that are not the probabilities of probability theory. In situations such as these the term "probability" serves as a false premise to the associated argument. References Probability theory
https://en.wikipedia.org/wiki/Byrchall%20High%20School
Byrchall High School is a secondary school and specialist mathematics and computing school with academy status, in the Ashton-in-Makerfield area of the Metropolitan Borough of Wigan, Greater Manchester. Admissions It has a mixed intake of both boys and girls aged 11–16. The current pupil population is approximately 1,200. The current headteacher is Alan Birchall. Byrchall High School is one of three secondary schools in Ashton, the other two being St Edmund Arrowsmith Catholic High School, next to Byrchall High School, and Cansfield High School. The school is situated between the A49 and the M6 on the southern edge of the Wigan borough, neighbouring St Helens. History Grammar school The school was founded in 1588 as Ashton Grammar School by Robert Byrchall on land donated by the wealthy local landowner William Gerrard. The original building in Seneley Green is now Garswood Library. Through the school, Ashton-in-Makerfield Grammar School Old Boys F.C. (now known as Ashtonians AFC) entered the Lancashire Amateur Football League in 1951. After the Second World War a prisoner-of-war camp for Germans, POW Camp 50, operated at its site. One of its inmates was footballer Bert Trautmann, who was confined there until 1948. In 1960, Lancashire Education Committee proposed to amalgamate the school with Upholland Grammar School, when the school had around 450 pupils. The school was administered by Wigan Metropolitan Borough Council from April 1974. By 1973 the school had 700 pupils, and 800 by 1975. Comprehensive It became a comprehensive school in 1978. Academy The school became an academy on 1 October 2012. Academic performance The school's pupils generally obtain above-average GCSE results; it is one of the few schools in Wigan LEA which is not a faith school to achieve this. Alumni Ashton-in-Makerfield Grammar School Sir George Bishop CB OBE, Chairman from 1972-79 of Booker-McConnell, President from 1957-58 of the International Sugar Council, President from 1983-87 of the Royal Geographical Society Prof Rodney Robert Porter FRS, biochemist, won the 1972 Nobel Prize in Physiology or Medicine for discovering the structure of antibodies, Whitley Professor of Biochemistry from 1967-85 at the University of Oxford Sir John Randall FRS, physicist who invented the cavity magnetron, now found in microwave ovens Byrchall High School Jane Bruton, Chairman in 2007 of the British Society of Magazine Editors, and Editor from 2005-15 of Grazia and from 2001-01 of Eve Lemn Sissay, BAFTA-nominated writer and broadcaster Kym Marsh, award-winning actress, presenter and singer. References OFSTED Report External links School Website EduBase Educational institutions established in the 1580s Secondary schools in the Metropolitan Borough of Wigan Academies in the Metropolitan Borough of Wigan 1588 establishments in England Ashton-in-Makerfield
https://en.wikipedia.org/wiki/Noncentral%20t-distribution
The noncentral t-distribution generalizes Student's t-distribution using a noncentrality parameter. Whereas the central probability distribution describes how a test statistic t is distributed when the difference tested is null, the noncentral distribution describes how t is distributed when the null is false. This leads to its use in statistics, especially calculating statistical power. The noncentral t-distribution is also known as the singly noncentral t-distribution, and in addition to its primary use in statistical inference, is also used in robust modeling of data. Definitions If Z is a standard normal random variable, and V is a chi-squared distributed random variable with ν degrees of freedom that is independent of Z, then $T = \frac{Z + \mu}{\sqrt{V/\nu}}$ is a noncentral t-distributed random variable with ν degrees of freedom and noncentrality parameter μ ≠ 0. Note that the noncentrality parameter may be negative. Cumulative distribution function The cumulative distribution function of the noncentral t-distribution with ν degrees of freedom and noncentrality parameter μ can be expressed in terms of the regularized incomplete beta function and Φ, the cumulative distribution function of the standard normal distribution. Alternatively, the noncentral t-distribution CDF can be expressed in terms of Γ, the gamma function, and I, the regularized incomplete beta function. Although there are other forms of the cumulative distribution function, the first form is very easy to evaluate through recursive computing. In the statistical software R, the cumulative distribution function is implemented as pt. Probability density function The probability density function (pdf) for the noncentral t-distribution with ν > 0 degrees of freedom and noncentrality parameter μ can be expressed in several forms, including a confluent hypergeometric function form involving the function 1F1, and an alternative integral form. A third form of the density is obtained using its cumulative distribution functions; this is the approach implemented by the dt function in R. Properties Moments of the noncentral t-distribution In general, the kth raw moment of the noncentral t-distribution exists for k < ν. In particular, the mean and variance of the noncentral t-distribution are $\mathrm{E}[T] = \mu \sqrt{\frac{\nu}{2}}\, \frac{\Gamma\!\left(\frac{\nu-1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}$ for ν > 1, and $\mathrm{Var}[T] = \frac{\nu(1+\mu^2)}{\nu-2} - \mathrm{E}[T]^2$ for ν > 2. An excellent approximation to $\sqrt{\frac{\nu}{2}}\, \frac{\Gamma\!\left(\frac{\nu-1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}$ is $\left(1 - \frac{3}{4\nu - 1}\right)^{-1}$, which can be used in both formulas. Asymmetry The non-central t-distribution is asymmetric unless μ is zero, i.e., a central t-distribution. In addition, the asymmetry becomes smaller as the degrees of freedom increase. The right tail will be heavier than the left when μ > 0, and vice versa. However, the usual skewness is not generally a good measure of asymmetry for this distribution, because if the degrees of freedom is not larger than 3, the third moment does not exist at all. Even if the degrees of freedom is greater than 3, the sample estimate of the skewness is still very unstable unless the sample size is very large.
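For concreteness, SciPy exposes this distribution as scipy.stats.nct; the parameter values below are arbitrary:

```python
from scipy import stats

df, nc = 10, 1.5          # degrees of freedom and noncentrality parameter
dist = stats.nct(df, nc)  # frozen noncentral t-distribution

# P(T <= 0) equals Phi(-nc), since T <= 0 exactly when Z + nc <= 0.
print(dist.cdf(0.0))
# The mean exceeds nc because E[T] = nc * sqrt(df/2) * Gamma((df-1)/2) / Gamma(df/2).
print(dist.mean())
# Density at the noncentrality parameter itself.
print(dist.pdf(1.5))
```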
https://en.wikipedia.org/wiki/Paul%20Ackerley
Paul Douglas Ackerley (16 May 1949 – 3 May 2011) was a field hockey player, maths teacher and public servant from New Zealand. He played field hockey at right half. He was a member of the national team that won the gold medal at the 1976 Summer Olympics in Montreal. He was selected for the 1980 Summer Olympics, but most New Zealand sports boycotted the Moscow games, so he did not compete. He had 25 international caps for New Zealand. Ackerley was born in Dunedin but grew up in Ashburton. He graduated from the University of Canterbury, where he played in the Canterbury University hockey club in the late 1960s. He was a secondary school mathematics teacher at Linwood College, Christchurch and then head of the maths department at Awatapu College, Palmerston North. He transferred to the Education Ministry inspectorate, and then the New Zealand Qualifications Authority, where he was in the group that developed the NCEA. He joined Sport and Recreation New Zealand (SPARC) in 2004 as a senior advisor in coaching and volunteers. He coached the national women's hockey team when they won a bronze at the 1998 Commonwealth Games in Malaysia, and the Wellington women's team. Ackerley died in Wellington, New Zealand aged 61 on 3 May 2011 from skin cancer after a short illness. References Ultimate team player and gifted coach: Obituary in Dominion Post, 7 May 2011 page A26 External links Deaths from cancer in New Zealand Deaths from skin cancer New Zealand male field hockey players Olympic field hockey players for New Zealand Field hockey players at the 1976 Summer Olympics New Zealand field hockey coaches Sportspeople from Ashburton, New Zealand Sportspeople from Dunedin Field hockey players from Christchurch University of Canterbury alumni 1949 births 2011 deaths Olympic medalists in field hockey New Zealand educators People educated at Ashburton College Medalists at the 1976 Summer Olympics Olympic gold medalists for New Zealand New Zealand public servants New Zealand women's national field hockey team coaches
https://en.wikipedia.org/wiki/Rough%20fuzzy%20hybridization
Rough-fuzzy hybridization is a method of hybrid intelligent systems or soft computing, in which fuzzy set theory is used for linguistic representation of patterns, leading to a fuzzy granulation of the feature space, and rough set theory is used to obtain dependency rules which model informative regions in the granulated feature space. External links Case generation A textbook Fuzzy logic
https://en.wikipedia.org/wiki/Nikko%20Patrelakis
Nikos "Nikko" Patrelakis was born in Athens, Greece. He studied music at the National Conservatory and mathematics at the University of Athens. He releases albums, singles and compilations around the world in the electronica–IDM genre through his label Smallhouse Records. He has composed and produced music for feature films and theatrical plays. He has created musical idents for national TV stations and major radio stations, as well as music for hundreds of TV commercials. As a DJ he has contributed to the evolution of the Greek club scene, participating in the initiation of clubs like X-club, Factory and +Soda in Athens, and Cavo Paradiso Club Mykonos in Mykonos, as a resident DJ. In 1999 he co-wrote "Voice" with Paul McCartney, which was presented by Heather Mills in support of people with mobility disabilities. That year he released "Habitat", his first solo album, introducing his unique sound, followed up two years later by "Elements", a continuous-play release in the form of a soundtrack, with guests like famous Greek journalist Malvina Karali narrating, and Stamatis Kraounakis, one of the most important Greek contemporary songwriters, improvising on a piano. In 2003 he released the album "TIME", which stayed for 9 weeks in the official IFPI national top-50. He also composed and produced three themes for the opening and closing ceremonies of the Olympic Games in Athens 2004, performed by him and the London Philharmonic Orchestra: for the parade of the Greek flag and the Greek team in the opening ceremonies, and for the entrance of the athletes of the world into the Olympic Stadium in the closing ceremonies. That year he also composed and produced the soundtrack of "Hostage", directed by Constantine Giannaris, which was the official Greek participation in the Panorama of the Berlinale 2005. The film won the prize for best direction at the Thessaloniki Film Festival 2005 and was nominated for the Helix Award of the European Film Academy. His most recent album, "Echo", was released worldwide in 2007 and gathered excellent reviews from the press. It included a variety of sounds, from "Magnet", a fully arranged piece performed by him and the Symphonic Orchestra of the Greek National Television, to "Shortcut", a collaboration with KBhta, to "Voyage", including the narration of French radio producer Louis Bozon. Cinematic ambiences, orchestral elements, deep rhythms and dreamy electric guitars produce a nu-jazz aroma with an electronic accent, which is the characteristic sound of Nikko Patrelakis. He has participated in several exhibitions as a visual artist, and in 2008 he made his first personal photography exhibition in Athens under the title "Ec(h)o". Currently he is recording his new album, due for release in 2010. External links Berlin 2005 Living people Musicians from Athens Greek composers Year of birth missing (living people)
https://en.wikipedia.org/wiki/Nijenhuis%E2%80%93Richardson%20bracket
In mathematics, the algebraic bracket or Nijenhuis–Richardson bracket is a graded Lie algebra structure on the space of alternating multilinear forms of a vector space to itself, introduced by A. Nijenhuis and R. W. Richardson, Jr (1966, 1967). It is related to but not the same as the Frölicher–Nijenhuis bracket and the Schouten–Nijenhuis bracket. Definition The primary motivation for introducing the bracket was to develop a uniform framework for discussing all possible Lie algebra structures on a vector space, and subsequently the deformations of these structures. If V is a vector space and $p$ is an integer, let $\mathrm{Alt}^p(V)$ be the space of all skew-symmetric $(p+1)$-multilinear mappings of V to itself. The direct sum $\mathrm{Alt}(V) = \bigoplus_p \mathrm{Alt}^p(V)$ is a graded vector space. A Lie algebra structure on V is determined by a skew-symmetric bilinear map $\mu : V \times V \to V$. That is to say, μ is an element of $\mathrm{Alt}^1(V)$. Furthermore, μ must obey the Jacobi identity. The Nijenhuis–Richardson bracket supplies a systematic manner for expressing this identity in the form $[\mu, \mu] = 0$. In detail, the bracket is a bilinear bracket operation defined on Alt(V) as follows. On homogeneous elements $P \in \mathrm{Alt}^p(V)$ and $Q \in \mathrm{Alt}^q(V)$, the Nijenhuis–Richardson bracket is given by $[P, Q]^\wedge = i_P Q - (-1)^{pq}\, i_Q P.$ Here the interior product $i_P Q$ inserts P into the first argument of Q and skew-symmetrizes, summing with signs over shuffles of the indices, i.e. permutations of $\{0, 1, \ldots, p+q\}$ that are increasing on each of the two blocks being interleaved. On non-homogeneous elements, the bracket is extended by bilinearity. Derivations of the ring of forms The Nijenhuis–Richardson bracket can be defined on the vector valued forms Ω*(M, T(M)) on a smooth manifold M in a similar way. Vector valued forms act as derivations on the supercommutative ring Ω*(M) of forms on M by taking K to the derivation iK, and the Nijenhuis–Richardson bracket then corresponds to the commutator of two derivations. This identifies Ω*(M, T(M)) with the algebra of derivations that vanish on smooth functions. Not all derivations are of this form; for the structure of the full ring of all derivations see the article Frölicher–Nijenhuis bracket. The Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket both make Ω*(M, T(M)) into a graded superalgebra, but have different degrees. References Bilinear maps Lie algebras
https://en.wikipedia.org/wiki/Fr%C3%B6licher%E2%80%93Nijenhuis%20bracket
In mathematics, the Frölicher–Nijenhuis bracket is an extension of the Lie bracket of vector fields to vector-valued differential forms on a differentiable manifold. It is useful in the study of connections, notably the Ehresmann connection, as well as in the more general study of projections in the tangent bundle. It was introduced by Alfred Frölicher and Albert Nijenhuis (1956) and is related to the work of Schouten (1940). It is related to but not the same as the Nijenhuis–Richardson bracket and the Schouten–Nijenhuis bracket. Definition Let Ω*(M) be the sheaf of exterior algebras of differential forms on a smooth manifold M. This is a graded algebra in which forms are graded by degree: $\Omega^*(M) = \bigoplus_{k \ge 0} \Omega^k(M).$ A graded derivation of degree ℓ is a mapping $D : \Omega^*(M) \to \Omega^{*+\ell}(M)$ which is linear with respect to constants and satisfies $D(\alpha \wedge \beta) = D(\alpha) \wedge \beta + (-1)^{\ell \deg \alpha}\, \alpha \wedge D(\beta).$ Thus, in particular, the interior product with a vector defines a graded derivation of degree ℓ = −1, whereas the exterior derivative is a graded derivation of degree ℓ = 1. The vector space of all derivations of degree ℓ is denoted by DerℓΩ*(M). The direct sum of these spaces is a graded vector space whose homogeneous components consist of all graded derivations of a given degree; it is denoted $\mathrm{Der}\,\Omega^*(M) = \bigoplus_\ell \mathrm{Der}_\ell\,\Omega^*(M).$ This forms a graded Lie superalgebra under the anticommutator of derivations defined on homogeneous derivations D1 and D2 of degrees d1 and d2, respectively, by $[D_1, D_2] = D_1 D_2 - (-1)^{d_1 d_2} D_2 D_1.$ Any vector-valued differential form K in Ωk(M, TM) with values in the tangent bundle of M defines a graded derivation of degree k − 1, denoted by iK, and called the insertion operator. For ω ∈ Ωℓ(M), $(i_K \omega)(X_1, \ldots, X_{k+\ell-1}) = \frac{1}{k!\,(\ell-1)!} \sum_{\sigma \in S_{k+\ell-1}} \operatorname{sign}(\sigma)\, \omega\bigl(K(X_{\sigma(1)}, \ldots, X_{\sigma(k)}), X_{\sigma(k+1)}, \ldots, X_{\sigma(k+\ell-1)}\bigr).$ The Nijenhuis–Lie derivative along K ∈ Ωk(M, TM) is defined by $\mathcal{L}_K = [i_K, d] = i_K\, d - (-1)^{k-1}\, d\, i_K,$ where d is the exterior derivative and iK is the insertion operator. The Frölicher–Nijenhuis bracket is defined to be the unique vector-valued differential form $[K, L] \in \Omega^{k+\ell}(M, TM)$ such that $\mathcal{L}_{[K, L]} = [\mathcal{L}_K, \mathcal{L}_L].$ If k = 0, so that K ∈ Ω0(M, TM) is a vector field, the usual homotopy formula for the Lie derivative is recovered: $\mathcal{L}_K = i_K\, d + d\, i_K.$ If k=ℓ=1, so that K,L ∈ Ω1(M, TM), one has for any vector fields X and Y $[K, L](X, Y) = [KX, LY] - [KY, LX] - L([KX, Y] - [KY, X]) - K([LX, Y] - [LY, X]) + (LK + KL)[X, Y].$ If k=0 and ℓ=1, so that K=Z∈ Ω0(M, TM) is a vector field and L ∈ Ω1(M, TM), one has for any vector field X $[Z, L]X = [Z, LX] - L[Z, X].$ An explicit formula for the Frölicher–Nijenhuis bracket of and (for forms φ and ψ and vector fields X and Y) is given by Derivations of the ring of forms Every derivation of Ω*(M) can be written as $i_L + \mathcal{L}_K$ for unique elements K and L of Ω*(M, TM). The Lie bracket of these derivations is given as follows. The derivations of the form $\mathcal{L}_K$ form the Lie superalgebra of all derivations commuting with d. The bracket is given by $[\mathcal{L}_K, \mathcal{L}_L] = \mathcal{L}_{[K, L]},$ where the bracket on the right is the Frölicher–Nijenhuis bracket. In particular the Frölicher–Nijenhuis bracket defines a graded Lie algebra structure on $\Omega^*(M, TM)$, which extends the Lie bracket of vector fields. The derivations of the form $i_K$ form the Lie superalgebra of all derivations vanishing on functions Ω0(M). The bracket is given by $[i_K, i_L] = i_{[K, L]^\wedge},$ where the bracket on the right is the Nijenhuis–Richardson bracket. The bracket of derivations of different types is giv
https://en.wikipedia.org/wiki/Alex%20Mineiro
Alexander Pereira Cardoso (born March 15, 1975), most commonly known as Alex Mineiro, is a former Brazilian football striker. Club statistics Honours Club Cruzeiro Copa Libertadores: 1997 Minas Gerais State Championship: 1997 Atlético Paranaense Brazilian Série A: 2001 Paraná State Championship: 2001, 2005 Palmeiras São Paulo State Championship: 2008 Individual Bola de Ouro: 2001 Campeonato Paulista Top Scorer: 2008 Campeonato Brasileiro Série A Team of the Year: 2008 References External links CBF 1975 births Living people Brazilian men's footballers Brazilian expatriate men's footballers Club Athletico Paranaense players Expatriate men's footballers in Japan Expatriate men's footballers in Mexico Cruzeiro Esporte Clube players Kashima Antlers players América Futebol Clube (MG) players Esporte Clube Vitória players Tigres UANL footballers Clube Atlético Mineiro players Sociedade Esportiva Palmeiras players J1 League players Grêmio Foot-Ball Porto Alegrense players Esporte Clube Bahia players Liga MX players Campeonato Brasileiro Série A players Men's association football forwards Footballers from Belo Horizonte
https://en.wikipedia.org/wiki/Defined%20daily%20dose
The defined daily dose (DDD) is a statistical measure of drug consumption, defined by the World Health Organization (WHO) Collaborating Centre for Drug Statistics Methodology. It is defined in combination with the ATC Code drug classification system for grouping related drugs. The DDD enables comparison of drug usage between different drugs in the same group or between different health care environments, or can be used to look at trends in drug utilisation over time. The DDD is not to be confused with the therapeutic dose or prescribed daily dose (PDD), or recorded daily dose (RDD), and will often be different from the dose actually prescribed by a physician for an individual person. The WHO's definition is: "The DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults." The defined daily dose was first developed in the late 1970s. Assignment Before a DDD is assigned by the WHO Collaborating Centre for Drug Statistics Methodology, a drug must have an ATC Code and be approved for sale in at least one country. The DDD is calculated for a 70 kg adult, except if the drug is only ever used in children. The dose is based on recommendations for treatment rather than prevention, except if prevention is the main indication. Generally there is only one DDD for all formulations of a drug; however, exceptions are made if some formulations are typically used in significantly different strengths (e.g., antibiotic injection in a hospital vs tablets in the community). The DDD of combination tablets (containing more than one drug) is more complex, most taking into account a "unit dose", though combination tablets used for high blood pressure take the number of doses per day into account. The rule for determining the dose is: If there is a single recommended maintenance dose in the literature, this is preferred. If there is a range of recommended maintenance doses, then: If the literature recommends generally increasing from the initial to the maximum dose provided it is tolerated, pick the maximum dose. If the literature recommends only increasing from an initial dose if it is not sufficiently effective, pick the minimum dose. If there is no guidance, then pick the midpoint between the dose range extremes. The DDD of a drug is reviewed after three years. Ad hoc requests for change may be made but are discouraged and generally not permitted unless the main indication for the drug has changed or the average dose used has changed by more than 50%. Limitations The DDD is generally the same for all formulations of a drug, even if some (e.g., flavoured syrup) are designed with children in mind. Some types of drug are not assigned a DDD, for example: medicines applied to the skin, anaesthetics and vaccines. Because the DDD is a calculated value, it is sometimes a "dose" never actually prescribed (e.g., the midpoint of two prescribed tablet strengths may not be equal to, or a multiple of, any available tablet). Different people may in practice be
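The dose-selection rules above form a small decision procedure, sketched below in Python. This is only an illustrative encoding of the rules as described; the function and its arguments are hypothetical and not part of any WHO tooling:

```python
def select_ddd(maintenance_doses, escalate_to_max=False,
               escalate_if_ineffective=False):
    """Pick a defined daily dose from literature maintenance doses
    (e.g. in mg/day), following the selection rules described above."""
    if len(maintenance_doses) == 1:
        return maintenance_doses[0]   # a single recommended dose is preferred
    lo, hi = min(maintenance_doses), max(maintenance_doses)
    if escalate_to_max:
        return hi   # dose is usually raised to the maximum if tolerated
    if escalate_if_ineffective:
        return lo   # dose is only raised when the initial dose fails
    return (lo + hi) / 2              # no guidance: midpoint of the range

print(select_ddd([100.0]))            # -> 100.0
print(select_ddd([50.0, 150.0]))      # -> 100.0 (midpoint)
```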
https://en.wikipedia.org/wiki/Morita%20equivalence
In abstract algebra, Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings R and S are Morita equivalent (denoted by $R \approx S$) if their categories of modules are additively equivalent (denoted by $R\text{-Mod} \approx S\text{-Mod}$). It is named after Japanese mathematician Kiiti Morita, who defined equivalence and a similar notion of duality in 1958. Motivation Rings are commonly studied in terms of their modules, as modules can be viewed as representations of rings. Every ring R has a natural R-module structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent. This notion is of interest only when dealing with noncommutative rings, since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic. Definition Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive. Examples Any two isomorphic rings are Morita equivalent. The ring of n-by-n matrices with elements in R, denoted Mn(R), is Morita-equivalent to R for any n > 0. Notice that this generalizes the classification of simple artinian rings given by Artin–Wedderburn theory. To see the equivalence, notice that if X is a left R-module then Xn is an Mn(R)-module where the module structure is given by matrix multiplication on the left of column vectors from X. This allows the definition of a functor from the category of left R-modules to the category of left Mn(R)-modules. The inverse functor is defined by realizing that for any Mn(R)-module there is a left R-module X such that the Mn(R)-module is obtained from X as described above. Criteria for equivalence Equivalences can be characterized as follows: if $F : R\text{-Mod} \to S\text{-Mod}$ and $G : S\text{-Mod} \to R\text{-Mod}$ are additive (covariant) functors, then F and G are an equivalence if and only if there is a balanced (S,R)-bimodule P such that SP and PR are finitely generated projective generators and there are natural isomorphisms of the functors $F$ and $P \otimes_R -$, and of the functors $G$ and $\operatorname{Hom}_S(P, -)$. Finitely generated projective generators are also sometimes called progenerators for their module category. For every right-exact functor F from the category of left-R modules to the category of left-S modules that commutes with direct sums, a theorem of homological algebra sh
https://en.wikipedia.org/wiki/Cross-sectional%20data
In statistics and econometrics, cross-sectional data is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at a single point or period of time. Analysis of cross-sectional data usually consists of comparing the differences among selected subjects, typically with no regard to differences in time. For example, if we want to measure current obesity levels in a population, we could draw a sample of 1,000 people randomly from that population (also known as a cross section of that population), measure their weight and height, and calculate what percentage of that sample is categorized as obese. This cross-sectional sample provides us with a snapshot of that population at that one point in time. Note that we do not know based on one cross-sectional sample if obesity is increasing or decreasing; we can only describe the current proportion. Cross-sectional data differs from time series data, in which the same small-scale or aggregate entity is observed at various points in time. Another type of data, panel data (or longitudinal data), combines both cross-sectional and time series data aspects and looks at how the subjects (firms, individuals, etc.) change over time. Panel data deals with observations on the same subjects at different times. Panel analysis uses panel data to examine changes in variables over time and differences in variables between selected subjects. A related variant is pooled cross-sectional data, which combines independent cross sections drawn from the same population at different points in time. In a rolling cross-section, both the presence of an individual in the sample and the time at which the individual is included in the sample are determined randomly. For example, a political poll may decide to interview 1000 individuals. It first selects these individuals randomly from the entire population. It then assigns a random date to each individual. This is the random date on which the individual will be interviewed, and thus included in the survey. Cross-sectional data can be used in cross-sectional regression, which is regression analysis of cross-sectional data. For example, the consumption expenditures of various individuals in a fixed month could be regressed on their incomes, accumulated wealth levels, and their various demographic features to find out how differences in those features lead to differences in consumers' behavior. References Further reading Cross-sectional analysis Statistical data types
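As a sketch of the cross-sectional regression idea, with invented simulated data rather than a real survey, consumption can be regressed on income and wealth for a single-period sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-section: 1000 individuals observed in one month.
n = 1000
income = rng.normal(50_000, 12_000, n)
wealth = rng.normal(200_000, 60_000, n)
spending = 5_000 + 0.4 * income + 0.01 * wealth + rng.normal(0, 2_000, n)

# Ordinary least squares of consumption on income and wealth.
X = np.column_stack([np.ones(n), income, wealth])
beta, *_ = np.linalg.lstsq(X, spending, rcond=None)
print(beta)  # approximately [5000, 0.4, 0.01]
```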
https://en.wikipedia.org/wiki/P-compact%20group
In mathematics, in particular algebraic topology, a p-compact group is a homotopical version of a compact Lie group, but with all the local structure concentrated at a single prime p. This concept was introduced by W. G. Dwyer and C. W. Wilkerson, making precise earlier notions of a mod p finite loop space. A p-compact group has many Lie-like properties like maximal tori and Weyl groups, which are defined purely homotopically in terms of the classifying space, but with the important difference that the Weyl group, rather than being a finite reflection group over the integers, is now a finite p-adic reflection group. They admit a classification in terms of root data, which mirrors the classification of compact Lie groups, but with the integers replaced by the p-adic integers. Definition A p-compact group is a pointed space BG which is local with respect to mod p homology, and such that the pointed loop space G = ΩBG has finite mod p homology. One sometimes also refers to the p-compact group by G, but then one needs to keep in mind that the loop space structure is part of the data (which then allows one to recover BG). A p-compact group is said to be connected if G is a connected space (in general the group of components of G will be a finite p-group). The rank of a p-compact group is the rank of its maximal torus. Examples The p-completion, in the sense of homotopy theory, of (the classifying space of) a compact connected Lie group defines a connected p-compact group. (The Weyl group is just its ordinary Weyl group, now viewed as a p-adic reflection group by tensoring the coweight lattice by $\mathbb{Z}_p$.) More generally the p-completion of a connected finite loop space defines a p-compact group. (Here the Weyl group will be a $\mathbb{Z}_p$-reflection group that may not come from a $\mathbb{Z}$-reflection group.) A rank 1 connected 2-compact group is either the 2-completion of SU(2) or SO(3). A rank 1 connected p-compact group, for p odd, is a "Sullivan sphere", i.e., the p-completion of a (2n−1)-sphere $S^{2n-1}$, where n divides p − 1. These spheres turn out to have a unique loop space structure. They were first constructed by Dennis Sullivan in his 1970 MIT notes. (The Weyl group is a cyclic group of order n, acting on $\mathbb{Z}_p$ via an nth root of unity.) Generalizing the rank 1 case, any finite complex reflection group can be realized as the Weyl group of a p-compact group for infinitely many primes, with the primes being determined by whether W can be conjugated into $\mathrm{GL}_n(\mathbb{Z}_p)$ or not, for some embedding of W in $\mathrm{GL}_n(\mathbb{Q}_p)$. The construction of a p-compact group with this Weyl group is then relatively straightforward for large primes where p does not divide the order of W (carried out already using the Chevalley–Shephard–Todd theorem), but requires more sophisticated methods for the "modular primes" p that divide the order of W. Classification The classification of p-compact groups states that there is a 1-1 correspondence between connected p-compact groups, up to homotopy equivalence, and root data over the p-adic integers, up
https://en.wikipedia.org/wiki/Flora%20of%20India
The flora of India is one of the richest in the world due to the wide range of climate, topology and habitat in the country. There are estimated to be over 18,000 species of flowering plants in India, which constitute some 6-7 percent of the total plant species in the world. India is home to more than 50,000 species of plants, including a variety of endemics. The use of plants as a source of medicines has been an integral part of life in India from the earliest times. There are more than 3000 Indian plant species officially documented as possessing medicinal value. India is divided into eight main floristic regions: Western Himalayas, Eastern Himalayas, Assam, Indus plain, Ganges plain, the Deccan, Malabar and the Andaman Islands. Forests and wildlife resources In 1992, around 743,534 km2 of land in the country was under forests, of which 92 percent belonged to the government. Only 22.7 percent of the country is forested, compared with the 33 percent recommended by the National Forest Policy Resolution of 1952. The majority of the forest consists of broad-leaved deciduous trees, comprising about one-sixth sal and one-tenth teak. Coniferous types are found in the northern high altitude regions and comprise pines, junipers and deodars. India's forest cover ranges from the tropical rainforest of the Andaman Islands, Western Ghats, and Northeast India to the coniferous forest of the Himalaya. Between these extremes lie the sal-dominated moist deciduous forest of eastern India; teak-dominated dry deciduous forest of central and southern India; and the babul-dominated thorn forest of the central Deccan and western Gangetic plain. Pine, fir, spruce, cedar, larch and cypress are the timber-yielding plants widely prevalent throughout the hilly regions of India. See also Indian Council of Forestry Research and Education List of endemic and threatened plants of India References SPECIES CHECKLIST: Species Diversity in India; ENVIS Centre: Wildlife & Protected Areas (Secondary Database); Wildlife Institute of India (WII) ENVIS Centre: Wildlife & Protected Areas (Secondary Database); Wildlife Institute of India (WII) Free EBOOK: Special Habitats and Threatened Plants of India; Wildlife Institute of India (WII) ENVIS Centre on Conservation of Ecological Heritage and Sacred Sights of India; ENVIS; C.P.R. Environmental Education Centre is a Centre of Excellence of the Ministry of Environment and Forests, Government of India. External links List of Indian medicinal plants on the Biodiversity of India website. A list of 932 commercially traded Indian medicinal plants (as per the ENVIS-FRLHT database) and their taxonomic status. Hooker, J. D. Flora of British India Volume 1 Hooker, J. D. Flora of British India Volume 2 Hooker, J. D. Flora of British India Volume 3 Hooker, J. D. Flora of British India Volume 4 Hooker, J. D. Flora of British India Volume 5 Hooker, J. D. Flora of British India Volume 6 Flora of Andhra Pradesh By Sharfudding Khan Flora of Andhra Pradesh by RD Reddy E-Flora of Kerala by N Sasidharan
https://en.wikipedia.org/wiki/Transversality%20%28mathematics%29
In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection. Definition Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sum of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point. In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point. One notation for the transverse intersection of two submanifolds $L_1$ and $L_2$ of a given manifold $M$ is $L_1 \pitchfork L_2$. This notation can be read in two ways: either as "$L_1$ and $L_2$ intersect transversally" or as an alternative notation for the set-theoretic intersection of $L_1$ and $L_2$ when that intersection is transverse. In this notation, the definition of transversality reads $L_1 \pitchfork L_2 \iff \forall p \in L_1 \cap L_2,\ T_p L_1 + T_p L_2 = T_p M.$ Transversality of maps The notion of transversality of a pair of submanifolds is easily extended to transversality of a submanifold and a map to the ambient manifold, or to a pair of maps to the ambient manifold, by asking whether the pushforwards of the tangent spaces along the preimage of points of intersection of the images generate the entire tangent space of the ambient manifold. If the maps are embeddings, this is equivalent to transversality of submanifolds. Meaning of transversality for different dimensions Suppose we have transverse maps $f_1 : L_1 \to M$ and $f_2 : L_2 \to M$, where $L_1$, $L_2$ and $M$ are manifolds with dimensions $\ell_1$, $\ell_2$ and $m$ respectively. The meaning of transversality differs a lot depending on the relative dimensions of $L_1$, $L_2$ and $M$. The relationship between transversality and tangency is clearest when $\ell_1 + \ell_2 = m$. We can consider three separate cases: When $\ell_1 + \ell_2 < m$, it is impossible for the images of $L_1$'s and $L_2$'s tangent spaces to span $M$'s tangent space at any point. Thus any intersection between $f_1$ and $f_2$ cannot be transverse. However, non-intersecting manifolds vacuously satisfy the condition, so can be said to intersect transversely. When
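At a single intersection point, the transversality condition is a linear-algebra statement, so it can be checked numerically: stack bases of the two tangent spaces and test whether together they span the ambient tangent space. A minimal sketch (the function name and examples are illustrative):

```python
import numpy as np

def transverse_at_point(tangent_basis_a, tangent_basis_b, ambient_dim):
    """Check transversality at one intersection point: the two tangent
    spaces together must span the ambient tangent space."""
    spanning = np.vstack([tangent_basis_a, tangent_basis_b])
    return bool(np.linalg.matrix_rank(spanning) == ambient_dim)

# The planes z = 0 and x = 0 in R^3 meet transversally along the y-axis:
print(transverse_at_point([[1, 0, 0], [0, 1, 0]],
                          [[0, 1, 0], [0, 0, 1]], 3))   # True

# The plane z = 0 and the paraboloid z = x^2 + y^2 are tangent at the
# origin (both have tangent plane z = 0 there), so the intersection is
# not transverse at that point:
print(transverse_at_point([[1, 0, 0], [0, 1, 0]],
                          [[1, 0, 0], [0, 1, 0]], 3))   # False
```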
https://en.wikipedia.org/wiki/Andreotti%E2%80%93Frankel%20theorem
In mathematics, the Andreotti–Frankel theorem, introduced by Aldo Andreotti and Theodore Frankel (1959), states that if $V$ is a smooth, complex affine variety of complex dimension $n$ or, more generally, if $V$ is any Stein manifold of dimension $n$, then $V$ admits a Morse function with critical points of index at most n, and so $V$ is homotopy equivalent to a CW complex of real dimension at most n. Consequently, if $V \subseteq \mathbb{C}^r$ is a closed connected complex submanifold of complex dimension $n$, then $V$ has the homotopy type of a CW complex of real dimension $\le n$. Therefore $H^i(V; \mathbb{Z}) = 0$ for $i > n$, and $H_i(V; \mathbb{Z}) = 0$ for $i > n$. This theorem applies in particular to any smooth, complex affine variety of dimension $n$. References Complex manifolds Theorems in homotopy theory
https://en.wikipedia.org/wiki/Weighing%20matrix
In mathematics, a weighing matrix of order $n$ and weight $w$ is an $n \times n$ matrix $W$ with entries from the set $\{0, 1, -1\}$ such that $W W^T = w I_n$, where $W^T$ is the transpose of $W$ and $I_n$ is the identity matrix of order $n$. The weight $w$ is also called the degree of the matrix. For convenience, a weighing matrix of order $n$ and weight $w$ is often denoted by $W(n, w)$. Weighing matrices are so called because of their use in optimally measuring the individual weights of multiple objects. When the weighing device is a balance scale, the statistical variance of the measurement can be minimized by weighing multiple objects at once, including some objects in the opposite pan of the scale where they subtract from the measurement. Properties Some properties are immediate from the definition. If $W$ is a $W(n, w)$, then: The rows of $W$ are pairwise orthogonal (that is, every pair of rows you pick from $W$ will be orthogonal). Similarly, the columns are pairwise orthogonal. Each row and each column of $W$ has exactly $w$ non-zero elements. $W^{-1} = w^{-1} W^T$, since the definition means that $W (w^{-1} W^T) = I_n$, where $W^{-1}$ is the inverse of $W$. $|\det W| = w^{n/2}$, where $\det W$ is the determinant of $W$. A weighing matrix is a generalization of the Hadamard matrix, which does not allow zero entries. As two special cases, a $W(n, n)$ is a Hadamard matrix and a $W(n, n-1)$ is equivalent to a conference matrix. Applications Experiment design Weighing matrices take their name from the problem of measuring the weight of multiple objects. If a measuring device has a statistical variance of $\sigma^2$, then measuring the weights of $n$ objects and subtracting the (equally imprecise) tare weight will result in a final measurement with a variance of $2\sigma^2$. It is possible to increase the accuracy of the estimated weights by measuring different subsets of the objects, especially when using a balance scale where objects can be put on the opposite measuring pan where they subtract their weight from the measurement. An order $n$ matrix $W$ can be used to represent the placement of $n$ objects—including the tare weight—in $n$ trials. Suppose the left pan of the balance scale adds to the measurement and the right pan subtracts from the measurement. Each element $w_{ij}$ of this matrix will be $1$ if object $j$ is placed in the left pan on trial $i$, $-1$ if it is placed in the right pan, and $0$ if it is left off the scale. Let $\mathbf{y}$ be a column vector of the measurements of each of the $n$ trials, let $\mathbf{e}$ be the errors to these measurements, each independent and identically distributed with variance $\sigma^2$, and let $\mathbf{x}$ be a column vector of the true weights of each of the $n$ objects. Then we have $\mathbf{y} = W \mathbf{x} + \mathbf{e}$. Assuming that $W$ is non-singular, we can use the method of least-squares to calculate an estimate of the true weights: $\hat{\mathbf{x}} = (W^T W)^{-1} W^T \mathbf{y}$. The variance of the estimated vector $\hat{\mathbf{x}}$ cannot be lower than $\sigma^2/n$, and will be minimum if and only if $W$ is a weighing matrix. Optical measurement Weighing matrices appear in the engineering of spectrometers, image scanners, and optical multiplexing systems. The design of these instruments involves an optical mask and two detectors that measure the intensity of light. The mask can either transmit light to the first detector, absorb it, or reflect it toward the second detector. The measurement of the second detector is subtract
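A short sketch can verify the defining property and replay the weighing experiment described above. The specific W(4, 3) below is one standard example; the true weights and noise level are invented:

```python
import numpy as np

# A weighing matrix W(4, 3): order 4, weight 3 (a conference-type matrix).
W = np.array([[ 0,  1,  1,  1],
              [ 1,  0,  1, -1],
              [ 1, -1,  0,  1],
              [ 1,  1, -1,  0]])

assert np.array_equal(W @ W.T, 3 * np.eye(4, dtype=int))  # W W^T = w I

# Weighing demonstration: estimate 4 true weights from 4 noisy trials.
rng = np.random.default_rng(1)
x = np.array([5.0, 2.0, 7.0, 3.0])         # true weights
y = W @ x + rng.normal(0, 0.1, 4)          # balance-scale measurements
x_hat = np.linalg.solve(W.T @ W, W.T @ y)  # least-squares estimate
print(x_hat)                               # close to [5, 2, 7, 3]
```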
https://en.wikipedia.org/wiki/Trev%20Faulk
Treverance Donta Faulk (born August 6, 1981) is a former NFL American football linebacker. College career Faulk attended Louisiana State University (LSU). Statistics Professional career Denver Broncos Faulk signed with the Broncos as an undrafted rookie free agent on April 29, 2002. He appeared in the week 1 preseason game against the Chicago Bears on August 10, 2002, and made 3 tackles and sacked quarterback Henry Burris for a loss of 9 yards. The Broncos went on to win the game 27-3. He made another appearance in week 2 and was on the roster in week 3 but did not play. He did not survive the preseason cuts and the Broncos waived him on August 26, 2002, making him a free agent. Dallas Cowboys Faulk joined the Cowboys' practice squad on September 26, 2002. He was waived on November 15, 2002. Arizona Cardinals Faulk signed with the Arizona Cardinals on December 11, 2002. He was on the inactive list through weeks 15 to 17 of the 2002 season but did not see any playing time. During the 2003 preseason, he appeared in the week 2 game against the San Diego Chargers on August 16, 2003, and made 2 tackles. He was waived on August 25, 2003. St. Louis Rams He was picked up by the St. Louis Rams on December 31, 2003. He survived preseason cuts and made the 2004 regular season team for the first time in his career. He tore his hamstring in week 1 against the Cardinals and was sidelined for the next two games. He returned in week 4 against the 49ers, making 4 tackles. In the 2004 postseason, he made a total of 3 tackles, including one in the 27-20 win against the Seattle Seahawks. He led the franchise in special teams tackles during the 2004 season with 24 and was named Outstanding Special Teams player. During a 2005 preseason match with the Chicago Bears, he brought down quarterback Rex Grossman who broke his ankle, sidelining him for the majority of the season. He signed a one-year contract extension with the Rams before the final regular season match in 2005. During the 2004 and 2005 seasons, he played a total of 29 regular season games and appeared in two postseason games. During the 2006 off-season, he underwent back surgery and missed the mini-camp in April. He was waived by the Rams on September 3, 2006. New Orleans Saints After missing the 2006 season due to injury, he joined the New Orleans Saints on April 16, 2007. He appeared in the Hall of Fame game against the Pittsburgh Steelers during the 2007 preseason. He had 2 penalties on special teams and the Saints lost the game 7-20. He was released on August 8, 2007 and retired the same year. Coaching career Faulk was working as a volunteer assistant at Northside High when he decided to try his hand at coaching. In February 2011, Faulk was announced as the new head coach of Vermilion Catholic High School football team. Under his leadership, the Screaming Eagles went unbeaten in the 2011 regular season and reached the state semifinals, finishing 13-1. He left Vermilion Catholic
https://en.wikipedia.org/wiki/Mathematics%20in%20the%20medieval%20Islamic%20world
Mathematics during the Golden Age of Islam, especially during the 9th and 10th centuries, was built on Greek mathematics (Euclid, Archimedes, Apollonius) and Indian mathematics (Aryabhata, Brahmagupta). Important progress was made, such as full development of the decimal place-value system to include decimal fractions, the first systematised study of algebra, and advances in geometry and trigonometry. Arabic works played an important role in the transmission of mathematics to Europe during the 10th–12th centuries. Concepts Algebra The study of algebra, the name of which is derived from the Arabic word meaning completion or "reunion of broken parts", flourished during the Islamic golden age. Muhammad ibn Musa al-Khwarizmi, a Persian scholar in the House of Wisdom in Baghdad and the founder of algebra, is, along with the Greek mathematician Diophantus, known as the father of algebra. In his book The Compendious Book on Calculation by Completion and Balancing, Al-Khwarizmi deals with ways to solve for the positive roots of first and second-degree (linear and quadratic) polynomial equations. He introduces the method of reduction, and unlike Diophantus, also gives general solutions for the equations he deals with. Al-Khwarizmi's algebra was rhetorical, which means that the equations were written out in full sentences. This was unlike the algebraic work of Diophantus, which was syncopated, meaning that some symbolism is used. The transition to symbolic algebra, where only symbols are used, can be seen in the work of Ibn al-Banna' al-Marrakushi and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī. On the work done by Al-Khwarizmi, J. J. O'Connor and Edmund F. Robertson said: Several other mathematicians during this time period expanded on the algebra of Al-Khwarizmi. Abu Kamil Shuja' wrote a book of algebra accompanied with geometrical illustrations and proofs. He also enumerated all the possible solutions to some of his problems. Abu al-Jud, Omar Khayyam, along with Sharaf al-Dīn al-Tūsī, found several solutions of the cubic equation. Omar Khayyam found the general geometric solution of a cubic equation. Cubic equations Omar Khayyam (c. 1038/48 in Iran – 1123/24) wrote the Treatise on Demonstration of Problems of Algebra containing the systematic solution of cubic or third-order equations, going beyond the Algebra of al-Khwārizmī. Khayyám obtained the solutions of these equations by finding the intersection points of two conic sections. This method had been used by the Greeks, but they did not generalize the method to cover all equations with positive roots. Sharaf al-Dīn al-Ṭūsī (? in Tus, Iran – 1213/4) developed a novel approach to the investigation of cubic equations, an approach which entailed finding the point at which a cubic polynomial obtains its maximum value. For example, to solve the equation x^3 + a = bx, with a and b positive, he would note that the maximum point of the curve y = bx − x^3 occurs at x = √(b/3), and that the equation would have no solutions, one solution or two
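Al-Tusi's maximum-point criterion is easy to check numerically. A small sketch (the function and example values are mine, not from any medieval source): for x^3 + a = b·x with a, b > 0, compare a with the maximum of b·x − x^3 on x > 0, attained at x = √(b/3).

```python
# Count the positive roots of x^3 + a = b*x (a, b > 0) by comparing a
# against the peak of b*x - x^3, which occurs at x = sqrt(b/3).
def count_positive_roots(a, b):
    x_peak = (b / 3.0) ** 0.5
    peak = b * x_peak - x_peak ** 3
    if a > peak:
        return 0          # the curve never reaches height a
    if a == peak:
        return 1          # tangency: one (double) positive root
    return 2              # the curve crosses height a twice

print(count_positive_roots(1.0, 3.0))  # 2: x^3 - 3x + 1 has two positive roots
print(count_positive_roots(2.0, 3.0))  # 1: x^3 - 3x + 2 = (x - 1)^2 (x + 2)
print(count_positive_roots(3.0, 3.0))  # 0: x^3 - 3x + 3 has no positive root
```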
https://en.wikipedia.org/wiki/McNemar%27s%20test
In statistics, McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is "marginal homogeneity"). It is named after Quinn McNemar, who introduced it in 1947. An application of the test in genetics is the transmission disequilibrium test for detecting linkage disequilibrium. The commonly used parameters to assess a diagnostic test in medical sciences are sensitivity and specificity. Sensitivity (or recall) is the ability of a test to correctly identify the people with disease. Specificity is the ability of the test to correctly identify those without the disease. Now presume two tests are performed on the same group of patients. And also presume that these tests have identical sensitivity and specificity. In this situation one may be carried away by these findings and presume that both the tests are equivalent. However, this may not be the case. For this we have to study the patients with disease and patients without disease (by a reference test). We also have to find out where these two tests disagree with each other. This is precisely the basis of McNemar's test. This test compares the sensitivity and specificity of two diagnostic tests on the same group of patients. Definition The test is applied to a 2 × 2 contingency table, which tabulates the outcomes of two tests on a sample of N subjects, as follows: cell a counts the pairs where both tests are positive, cell b the pairs where only the first test is positive, cell c the pairs where only the second test is positive, and cell d the pairs where both tests are negative. The null hypothesis of marginal homogeneity states that the two marginal probabilities for each outcome are the same, i.e. pa + pb = pa + pc and pc + pd = pb + pd. Thus the null and alternative hypotheses are H0: pb = pc and H1: pb ≠ pc. Here pa, etc., denote the theoretical probability of occurrences in cells with the corresponding label. The McNemar test statistic is: χ² = (b − c)² / (b + c). Under the null hypothesis, with a sufficiently large number of discordants (cells b and c), χ² has a chi-squared distribution with 1 degree of freedom. If the result is significant, this provides sufficient evidence to reject the null hypothesis, in favour of the alternative hypothesis that pb ≠ pc, which would mean that the marginal proportions are significantly different from each other. Variations If either b or c is small (b + c < 25) then χ² is not well-approximated by the chi-squared distribution. An exact binomial test can then be used, where b is compared to a binomial distribution with size parameter n = b + c and p = 0.5. Effectively, the exact binomial test evaluates the imbalance in the discordants b and c. To achieve a two-sided P-value, the P-value of the extreme tail should be multiplied by 2. For b ≥ c: exact-P-value = 2 Σ (from i = b to n) C(n, i)(1/2)^n, which is simply twice the binomial distribution cumulative distribution function with p = 0.5 and n = b + c. Edwards proposed the following continuity corrected version of the McNemar test to approximate the binomial exact-P-value: χ² = (|b − c| − 1)² / (b + c). The mid-P McNemar test (mid-p binomial te
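A minimal implementation of the statistic and the exact binomial variant (a sketch, not a vetted statistical routine; parameter names are mine):

```python
from math import comb

def mcnemar(b, c):
    """McNemar chi-squared statistic and exact binomial two-sided P-value
    computed from the two discordant cell counts b and c."""
    n = b + c
    chi2 = (b - c) ** 2 / n                       # ~ chi-squared with 1 d.o.f.
    k = max(b, c)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return chi2, min(1.0, 2 * tail)               # doubled one-tail probability

chi2, p = mcnemar(b=10, c=2)
print(round(chi2, 3), round(p, 4))  # 5.333 0.0386
```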
https://en.wikipedia.org/wiki/Pyramid%20%28geometry%29
In geometry, a pyramid () is a polyhedron formed by connecting a polygonal base and a point, called the apex. Each base edge and apex form a triangle, called a lateral face. It is a conic solid with polygonal base. A pyramid with an n-sided base has n + 1 vertices, n + 1 faces, and 2n edges. All pyramids are self-dual. Terminology A right pyramid has its apex directly above the centroid of its base. Nonright pyramids are called oblique pyramids. A regular pyramid has a regular polygon base and is usually implied to be a right pyramid. When unspecified, a pyramid is usually assumed to be a regular square pyramid, like the physical pyramid structures. A triangle-based pyramid is more often called a tetrahedron. Among oblique pyramids, like acute and obtuse triangles, a pyramid can be called acute if its apex is above the interior of the base and obtuse if its apex is above the exterior of the base. A right-angled pyramid has its apex above an edge or vertex of the base. In a tetrahedron these qualifiers change based on which face is considered the base. Pyramids are a class of the prismatoids. Pyramids can be doubled into bipyramids by adding a second offset point on the other side of the base plane. A pyramid cut off by a plane is called a truncated pyramid; if the truncation plane is parallel to the pyramid's base, it is called a frustum. Right pyramids with a regular base A right pyramid with a regular base has isosceles triangle sides; its symmetry group is Cnv, or [1,n], of order 2n. It can be given an extended Schläfli symbol ( ) ∨ {n}, representing a point, ( ), joined (orthogonally offset) to a regular polygon, {n}. A join operation creates a new edge between all pairs of vertices of the two joined figures. The trigonal or triangular pyramid with all equilateral triangle faces becomes the regular tetrahedron, one of the Platonic solids. A lower symmetry case of the triangular pyramid is C3v, which has an equilateral triangle base, and 3 identical isosceles triangle sides. The square and pentagonal pyramids can also be composed of regular convex polygons, in which case they are Johnson solids. If all edges of a square pyramid (or any convex polyhedron) are tangent to a sphere so that the average position of the tangential points is at the center of the sphere, then the pyramid is said to be canonical, and it forms half of a regular octahedron. Pyramids with a hexagon or higher base must be composed of isosceles triangles. A hexagonal pyramid with equilateral triangles would be a completely flat figure, and a heptagonal or higher would have the triangles not meet at all. Right star pyramids Right pyramids with regular star polygon bases are called star pyramids. For example, the pentagrammic pyramid has a pentagram base and 5 intersecting triangle sides. Right pyramids with an irregular base A right pyramid can be named as ( )∨P, where ( ) is the apex point, ∨ is a join operator, and P is a base polygon. An isosceles triangle right tetrahedron can be
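The vertex, face, and edge counts can be checked against Euler's polyhedron formula; a one-liner sketch (mine, not from the article):

```python
# An n-gonal pyramid has n + 1 vertices, n + 1 faces, and 2n edges,
# so V - E + F = (n + 1) - 2n + (n + 1) = 2, as Euler's formula requires.
for n in range(3, 13):
    V, F, E = n + 1, n + 1, 2 * n
    assert V - E + F == 2
print("Euler's formula V - E + F = 2 holds for n = 3..12")
```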
https://en.wikipedia.org/wiki/Simson%20line
In geometry, given a triangle ABC and a point P on its circumcircle, the three closest points to P on lines AB, AC, and BC are collinear. The line through these points is the Simson line of P, named for Robert Simson. The concept was first published, however, by William Wallace in 1799, and is sometimes called the Wallace line. The converse is also true; if the three closest points to P on three lines are collinear, and no two of the lines are parallel, then P lies on the circumcircle of the triangle formed by the three lines. Or in other words, the Simson line of a triangle ABC and a point P is just the pedal triangle of ABC and P that has degenerated into a straight line, and this condition constrains the locus of P to trace the circumcircle of triangle ABC. Equation Placing the triangle in the complex plane, let the triangle with unit circumcircle have vertices whose locations have complex coordinates a, b, c, and let P with complex coordinate p be a point on the circumcircle. The Simson line is the set of points satisfying where an overbar indicates complex conjugation. Properties The Simson line of a vertex of the triangle is the altitude of the triangle dropped from that vertex, and the Simson line of the point diametrically opposite to the vertex is the side of the triangle opposite to that vertex. If P and Q are points on the circumcircle, then the angle between the Simson lines of P and Q is half the angle of the arc PQ. In particular, if the points are diametrically opposite, their Simson lines are perpendicular and in this case the intersection of the lines lies on the nine-point circle. Letting H denote the orthocenter of the triangle ABC, the Simson line of P bisects the segment PH in a point that lies on the nine-point circle. Given two triangles with the same circumcircle, the angle between the Simson lines of a point P on the circumcircle for both triangles does not depend on P. The set of all Simson lines, when drawn, forms an envelope in the shape of a deltoid known as the Steiner deltoid of the reference triangle. The construction of the Simson line that coincides with a side of the reference triangle (see first property above) yields a nontrivial point on this side line. This point is the reflection of the foot of the altitude (dropped onto the side line) about the midpoint of the side line being constructed. Furthermore, this point is a tangent point between the side of the reference triangle and its Steiner deltoid. A quadrilateral that is not a parallelogram has one and only one pedal point, called the Simson point, with respect to which the feet on the quadrilateral are collinear. The Simson point of a trapezoid is the point of intersection of the two nonparallel sides. No convex polygon with at least 5 sides has a Simson line. Proof of existence It suffices to show that . is a cyclic quadrilateral, so . is a cyclic quadrilateral (since ), so . Hence . Now is cyclic, so . Therefore . Generalizations Generalization 1 Let ABC be a triangle, let a li
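The collinearity of the three feet is easy to confirm numerically; the construction below is my own sketch (any triangle inscribed in the unit circle and any point P on that circle will do):

```python
import numpy as np

def foot(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b."""
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)
    return a + t * d

# Triangle ABC and point P, all on the unit circle (the circumcircle).
A = np.array([np.cos(0.3), np.sin(0.3)])
B = np.array([np.cos(2.1), np.sin(2.1)])
C = np.array([np.cos(4.0), np.sin(4.0)])
P = np.array([np.cos(5.2), np.sin(5.2)])

f1, f2, f3 = foot(P, A, B), foot(P, B, C), foot(P, C, A)
cross = np.cross(f2 - f1, f3 - f1)   # zero iff the three feet are collinear
print(abs(cross) < 1e-9)             # True: the feet lie on the Simson line
```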
https://en.wikipedia.org/wiki/Lars%20Peter%20Hansen
Lars Peter Hansen (born 26 October 1952 in Urbana, Illinois) is an American economist. He is the David Rockefeller Distinguished Service Professor in Economics, Statistics, and the Booth School of Business at the University of Chicago and a 2013 recipient of the Nobel Memorial Prize in Economics. Hansen is best known for his work on the generalized method of moments; he is also a distinguished macroeconomist, focusing on the linkages between the financial sector and the macroeconomy. His current collaborative research develops and applies methods for pricing the exposure to macroeconomic shocks over alternative investment horizons and investigates the implications of the pricing of long-term uncertainty. Among other honors, he received the 2010 BBVA Foundation Frontiers of Knowledge Award in the category of Economy, Finance and Management. Biography After graduating from Utah State University (B.S. Mathematics, Political Science, 1974) and the University of Minnesota (Ph.D. Economics, 1978), he served as assistant and associate professor at Carnegie Mellon University before moving to the University of Chicago in 1981. He is currently the David Rockefeller Distinguished Service Professor in Economics, Statistics and the College at the University of Chicago. He is married to Grace Tsiang, who is the daughter of the famous economist Sho-Chieh Tsiang. Together, Hansen and Tsiang have one son named Peter. He has two brothers, Ted Howard Hansen, an immunologist at Washington University in St. Louis, and Roger Hansen, an engineer in water resource management. His father, Roger Gaurth Hansen, served as provost of Utah State University and was a professor of biochemistry. Contributions Hansen is best known as the developer of the econometric technique generalized method of moments (GMM) and has written and co-authored papers applying GMM to analyze economic models in numerous fields including labor economics, international finance, finance and macroeconomics. This method has been widely adopted in economics and other fields and applications where fully specifying and solving a model of a complex economic environment is unwieldy or otherwise impractical. Hansen showed how to exploit moment conditions (e.g. relations where conditional expectations are known to be zero at true parameter values) to construct reasonable, reliable estimators (i.e. having desirable statistical properties such as consistency, asymptotic normality, and efficiency within the class of all asymptotic normal estimators) with less stringent maintained model assumptions than needed for maximum likelihood estimation. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997). Moreover, maximum likelihood estimation methods provide guidance for devising more efficient instrumental variables estimators that take into account special features such as res
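The moment-condition idea behind GMM can be illustrated with a toy example (entirely my own; it is just the classical method of moments, the special case that GMM generalizes):

```python
import numpy as np

# For i.i.d. data with unknown mean mu and variance v, the moment conditions
# E[x - mu] = 0 and E[(x - mu)^2 - v] = 0 identify (mu, v).  Replacing the
# expectations with sample averages and solving gives the estimator.
rng = np.random.default_rng(1)
x = rng.normal(3.0, 2.0, size=10_000)

mu_hat = x.mean()                    # solves the first sample moment condition
v_hat = ((x - mu_hat) ** 2).mean()   # solves the second
print(mu_hat, v_hat)                 # close to 3.0 and 4.0
```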
https://en.wikipedia.org/wiki/Circle-valued%20Morse%20theory
In mathematics, circle-valued Morse theory studies the topology of a smooth manifold by analyzing the critical points of smooth maps from the manifold to the circle, in the framework of Morse homology. It is an important special case of Sergei Novikov's Morse theory of closed one-forms. Michael Hutchings and Yi-Jen Lee have connected it to Reidemeister torsion and Seiberg–Witten theory. References Morse theory
https://en.wikipedia.org/wiki/Continuation%20map
In differential topology, given a family of Morse-Smale functions on a smooth manifold X parameterized by a closed interval I, one can construct a Morse-Smale vector field on X × I whose critical points occur only on the boundary. The Morse differential then defines a chain map between the Morse complexes at the two boundaries of the family, called the continuation map. This can be shown to descend to an isomorphism on Morse homology, proving the invariance of the Morse homology of a smooth manifold. Continuation maps were defined by Andreas Floer to prove the invariance of Floer homology in infinite dimensional analogues of the situation described above; in the case of finite-dimensional Morse theory, invariance may be proved by proving that Morse homology is isomorphic to singular homology, which is known to be invariant. However, Floer homology is not always isomorphic to a familiar invariant, so continuation maps yield an a priori proof of invariance. In finite-dimensional Morse theory, different choices made in constructing the vector field on X × I yield distinct but chain homotopic maps and thus descend to the same isomorphism on homology. However, in certain infinite dimensional cases, this does not hold, and these techniques may be used to produce invariants of one-parameter families of objects (such as contact structures or Legendrian knots). References Lecture Notes on Morse Homology (including continuation maps in finite-dimensional theory), by Michael Hutchings Contact homology and homotopy groups of the space of contact structures by Frederic Bourgeois Contact homology and one parameter families of Legendrian knots by Tamas Kalman Floer homology of families I, by Michael Hutchings Morse theory Homology theory
https://en.wikipedia.org/wiki/Pythagorean%20means
In mathematics, the three classical Pythagorean means are the arithmetic mean (AM), the geometric mean (GM), and the harmonic mean (HM). These means were studied with proportions by Pythagoreans and later generations of Greek mathematicians because of their importance in geometry and music. Definition They are defined by: AM(x1, ..., xn) = (x1 + ... + xn)/n, GM(x1, ..., xn) = (x1 ··· xn)^(1/n), and HM(x1, ..., xn) = n/(1/x1 + ... + 1/xn). Properties Each mean, M, has the following properties: First-order homogeneity: M(bx1, ..., bxn) = b M(x1, ..., xn). Invariance under exchange: M(..., xi, ..., xj, ...) = M(..., xj, ..., xi, ...) for any i and j. Monotonicity: if xi ≤ yi for all i, then M(x1, ..., xn) ≤ M(y1, ..., yn). Idempotence: M(x, ..., x) = x. Monotonicity and idempotence together imply that a mean of a set always lies between the extremes of the set: min(x1, ..., xn) ≤ M(x1, ..., xn) ≤ max(x1, ..., xn). The harmonic and arithmetic means are reciprocal duals of each other for positive arguments, HM(1/x1, ..., 1/xn) = 1/AM(x1, ..., xn), while the geometric mean is its own reciprocal dual: GM(1/x1, ..., 1/xn) = 1/GM(x1, ..., xn). Inequalities among means There is an ordering to these means (if all of the xi are positive): min ≤ HM ≤ GM ≤ AM ≤ max, with equality holding if and only if the xi are all equal. This is a generalization of the inequality of arithmetic and geometric means and a special case of an inequality for generalized means. The proof follows from the arithmetic–geometric mean inequality, AM ≤ max, and reciprocal duality (min and max are also reciprocal dual to each other). The study of the Pythagorean means is closely related to the study of majorization and Schur-convex functions. The harmonic and geometric means are concave symmetric functions of their arguments, and hence Schur-concave, while the arithmetic mean is a linear function of its arguments and hence is both concave and convex. History Almost everything that we know about the Pythagorean means comes from arithmetic handbooks written in the first and second centuries. Nicomachus of Gerasa says that they were “acknowledged by all the ancients, Pythagoras, Plato and Aristotle.” Their earliest known use is a fragment of the Pythagorean philosopher Archytas of Tarentum: The name "harmonic mean", according to Iamblichus, was coined by Archytas and Hippasus. The Pythagorean means also appear in Plato's Timaeus. Further evidence of their early use is a commentary by Pappus. The term "mean" (μεσότης, mesótēs in Ancient Greek) appears in the Neopythagorean arithmetic handbooks in connection with the term "proportion" (ἀναλογία, analogía in Ancient Greek). Trivia Of all pairs of different natural numbers of the form (a, b) such that a < b, the smallest (as defined by least value of a + b) for which the arithmetic, geometric and harmonic means are all also natural numbers are (5, 45) and (10, 40). See also Arithmetic–geometric mean Average Golden ratio Kepler triangle Notes References External links Means
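A direct check of the definitions and of the ordering min ≤ HM ≤ GM ≤ AM ≤ max (a sketch for positive inputs; it also reproduces the (5, 45) trivia example):

```python
from math import prod

def am(xs): return sum(xs) / len(xs)
def gm(xs): return prod(xs) ** (1 / len(xs))
def hm(xs): return len(xs) / sum(1 / x for x in xs)

xs = [5, 45]
print(hm(xs), gm(xs), am(xs))   # ~ 9.0, 15.0, 25.0 -- all natural numbers
assert min(xs) <= hm(xs) <= gm(xs) <= am(xs) <= max(xs)
```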
https://en.wikipedia.org/wiki/List%20of%20cohomology%20theories
This is a list of some of the ordinary and generalized (or extraordinary) homology and cohomology theories in algebraic topology that are defined on the categories of CW complexes or spectra. For other sorts of homology theories see the links at the end of this article. Notation S = π = S0 is the sphere spectrum. Sn is the spectrum of the n-dimensional sphere SnY = Sn∧Y is the nth suspension of a spectrum Y. [X,Y] is the abelian group of morphisms from the spectrum X to the spectrum Y, given (roughly) as homotopy classes of maps. [X,Y]n = [SnX,Y] [X,Y]* is the graded abelian group given as the sum of the groups [X,Y]n. πn(X) = [Sn, X] = [S, X]n is the nth stable homotopy group of X. π*(X) is the sum of the groups πn(X), and is called the coefficient ring of X when X is a ring spectrum. X∧Y is the smash product of two spectra. If X is a spectrum, then it defines generalized homology and cohomology theories on the category of spectra as follows. Xn(Y) = [S, X∧Y]n = [Sn, X∧Y] is the generalized homology of Y, Xn(Y) = [Y, X]−n = [S−nY, X] is the generalized cohomology of Y Ordinary homology theories These are the theories satisfying the "dimension axiom" of the Eilenberg–Steenrod axioms that the homology of a point vanishes in dimension other than 0. They are determined by an abelian coefficient group G, and denoted by H(X, G) (where G is sometimes omitted, especially if it is Z). Usually G is the integers, the rationals, the reals, the complex numbers, or the integers mod a prime p. The cohomology functors of ordinary cohomology theories are represented by Eilenberg–MacLane spaces. On simplicial complexes, these theories coincide with singular homology and cohomology. Homology and cohomology with integer coefficients. Spectrum: H (Eilenberg–MacLane spectrum of the integers.) Coefficient ring: πn(H) = Z if n = 0, 0 otherwise. The original homology theory. Homology and cohomology with rational (or real or complex) coefficients. Spectrum: HQ (Eilenberg–Mac Lane spectrum of the rationals.) Coefficient ring: πn(HQ) = Q if n = 0, 0 otherwise. These are the easiest of all homology theories. The homology groups HQn(X) are often denoted by Hn(X, Q). The homology groups H(X, Q), H(X, R), H(X, C) with rational, real, and complex coefficients are all similar, and are used mainly when torsion is not of interest (or too complicated to work out). The Hodge decomposition writes the complex cohomology of a complex projective variety as a sum of sheaf cohomology groups. Homology and cohomology with mod p coefficients. Spectrum: HZp (Eilenberg–Maclane spectrum of the integers mod p.) Coefficient ring: πn(HZp) = Zp (Integers mod p) if n = 0, 0 otherwise. K-theories The simpler K-theories of a space are often related to vector bundles over the space, and different sorts of K-theories correspond to different structures that can be put on a vector bundle. Real K-theory Spectrum: KO Coefficient ring: The coefficient groups πi(KO) have period 8 in i
https://en.wikipedia.org/wiki/Rank%20correlation
In statistics, a rank correlation is any of several statistics that measure an ordinal association—the relationship between rankings of different ordinal variables or different rankings of the same variable, where a "ranking" is the assignment of the ordering labels "first", "second", "third", etc. to different observations of a particular variable. A rank correlation coefficient measures the degree of similarity between two rankings, and can be used to assess the significance of the relation between them. For example, two common nonparametric methods of significance that use rank correlation are the Mann–Whitney U test and the Wilcoxon signed-rank test. Context If, for example, one variable is the identity of a college basketball program and another variable is the identity of a college football program, one could test for a relationship between the poll rankings of the two types of program: do colleges with a higher-ranked basketball program tend to have a higher-ranked football program? A rank correlation coefficient can measure that relationship, and the measure of significance of the rank correlation coefficient can show whether the measured relationship is small enough to likely be a coincidence. If there is only one variable, the identity of a college football program, but it is subject to two different poll rankings (say, one by coaches and one by sportswriters), then the similarity of the two different polls' rankings can be measured with a rank correlation coefficient. As another example, in a contingency table with low income, medium income, and high income in the row variable and educational level—no high school, high school, university—in the column variable), a rank correlation measures the relationship between income and educational level. Correlation coefficients Some of the more popular rank correlation statistics include Spearman's ρ Kendall's τ Goodman and Kruskal's γ Somers' D An increasing rank correlation coefficient implies increasing agreement between rankings. The coefficient is inside the interval [−1, 1] and assumes the value: 1 if the agreement between the two rankings is perfect; the two rankings are the same. 0 if the rankings are completely independent. −1 if the disagreement between the two rankings is perfect; one ranking is the reverse of the other. Following , a ranking can be seen as a permutation of a set of objects. Thus we can look at observed rankings as data obtained when the sample space is (identified with) a symmetric group. We can then introduce a metric, making the symmetric group into a metric space. Different metrics will correspond to different rank correlations. General correlation coefficient Kendall 1970 showed that his (tau) and Spearman's (rho) are particular cases of a general correlation coefficient. Suppose we have a set of objects, which are being considered in relation to two properties, represented by and , forming the sets of values and . To any pair of individu
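Kendall's tau, for instance, can be computed from first principles by counting concordant and discordant pairs (a minimal sketch, mine; real analyses would use a statistics library):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation between two rankings of the same n items."""
    n = len(r1)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (r1[i] - r1[j]) * (r2[i] - r2[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  #  1.0: identical rankings
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0: one is the reverse
print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))  #  0.666...: mostly agreeing
```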
https://en.wikipedia.org/wiki/Rowbottom%20cardinal
In set theory, a Rowbottom cardinal, introduced by Frederick Rowbottom, is a certain kind of large cardinal number. An uncountable cardinal number κ is said to be ν-Rowbottom if for every function f: [κ]<ω → λ (where λ < κ) there is a set H of order type κ that is quasi-homogeneous for f, i.e., for every n, the f-image of the set of n-element subsets of H has < ν elements. κ is Rowbottom if it is ω1-Rowbottom. Every Ramsey cardinal is Rowbottom, and every Rowbottom cardinal is Jónsson. By a theorem of Kleinberg, the theories ZFC + “there is a Rowbottom cardinal” and ZFC + “there is a Jónsson cardinal” are equiconsistent. In general, Rowbottom cardinals need not be large cardinals in the usual sense: Rowbottom cardinals could be singular. It is an open question whether ZFC + “ℵω is Rowbottom” is consistent. If it is, it has much higher consistency strength than the existence of a Rowbottom cardinal. The axiom of determinacy does imply that ℵω is Rowbottom (but contradicts the axiom of choice). References Large cardinals
https://en.wikipedia.org/wiki/J%C3%B3nsson%20cardinal
In set theory, a Jónsson cardinal (named after Bjarni Jónsson) is a certain kind of large cardinal number. An uncountable cardinal number κ is said to be Jónsson if for every function f: [κ]<ω → κ there is a set H of order type κ such that for each n, f restricted to n-element subsets of H omits at least one value in κ. Every Rowbottom cardinal is Jónsson. By a theorem of Eugene M. Kleinberg, the theories ZFC + “there is a Rowbottom cardinal” and ZFC + “there is a Jónsson cardinal” are equiconsistent. William Mitchell proved, with the help of the Dodd–Jensen core model, that the consistency of the existence of a Jónsson cardinal implies the consistency of the existence of a Ramsey cardinal, so that the existence of Jónsson cardinals and the existence of Ramsey cardinals are equiconsistent. In general, Jónsson cardinals need not be large cardinals in the usual sense: they can be singular. But the existence of a singular Jónsson cardinal is equiconsistent with the existence of a measurable cardinal. Using the axiom of choice, a lot of small cardinals (the ℵn, for instance) can be proved to be not Jónsson. Results like this need the axiom of choice, however: The axiom of determinacy does imply that for every positive natural number n, the cardinal ℵn is Jónsson. A Jónsson algebra is an algebra with no proper subalgebras of the same cardinality. (They are unrelated to Jónsson–Tarski algebras). Here an algebra means a model for a language with a countable number of function symbols, in other words a set with a countable number of functions from finite products of the set to itself. A cardinal κ is a Jónsson cardinal if and only if there are no Jónsson algebras of that cardinality. The existence of Jónsson functions shows that if algebras are allowed to have infinitary operations, then there are no analogues of Jónsson cardinals. References Large cardinals
https://en.wikipedia.org/wiki/Igor%20Pak
Igor Pak () (born 1971, Moscow, Soviet Union) is a professor of mathematics at the University of California, Los Angeles, working in combinatorics and discrete probability. He formerly taught at the Massachusetts Institute of Technology and the University of Minnesota, and he is best known for his bijective proof of the hook-length formula for the number of Young tableaux, and his work on random walks. He was a keynote speaker alongside George Andrews and Doron Zeilberger at the 2006 Harvey Mudd College Mathematics Conference on Enumerative Combinatorics. Pak is an Associate Editor for the journal Discrete Mathematics. He gave a Fejes Tóth Lecture at the University of Calgary in February 2009. In 2018, he was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro. Background Pak went to Moscow High School № 57. After graduating, he worked for a year at Bank Menatep. He did his undergraduate studies at Moscow State University. He was a PhD student of Persi Diaconis at Harvard University, where he received a doctorate in Mathematics in 1997, with a thesis titled Random Walks on Groups: Strong Uniform Time Approach. Afterwards, he worked with László Lovász as a postdoc at Yale University. He was a fellow at the Mathematical Sciences Research Institute and a long-term visitor at the Hebrew University of Jerusalem. References External links Personal site. List of published papers, with abstracts. MIT Mathematics Department website. MathSciNet: "Items authored by Pak, Igor." DBLP: Igor Pak. 1971 births 20th-century American mathematicians 21st-century American mathematicians Combinatorialists Harvard University alumni Living people Massachusetts Institute of Technology School of Science faculty Moscow State University alumni Mathematicians from Moscow Russian emigrants to the United States University of Minnesota faculty University of California, Los Angeles faculty
https://en.wikipedia.org/wiki/Lemoine%20hexagon
In geometry, the Lemoine hexagon is a cyclic hexagon with vertices given by the six intersections of the edges of a triangle and the three lines that are parallel to the edges and pass through its symmedian point. There are two definitions of the hexagon that differ based on the order in which the vertices are connected. Area and perimeter The Lemoine hexagon can be defined in two ways: first, as a simple hexagon with vertices at the intersections as defined before; second, as a self-intersecting hexagon with the lines going through the symmedian point as three of the edges and the other three edges joining pairs of adjacent vertices. For both the simple hexagon and the self-intersecting hexagon, the perimeter and the area can be written in closed form in terms of the side lengths and the area of the reference triangle. Circumcircle In geometry, five points determine a conic, so arbitrary sets of six points do not generally lie on a conic section, let alone a circle. Nevertheless, the Lemoine hexagon (with either order of connection) is a cyclic polygon, meaning that its vertices all lie on a common circle. The circumcircle of the Lemoine hexagon is known as the first Lemoine circle. References External links Types of polygons
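The concyclicity claim can be verified numerically by building the six vertices explicitly; the triangle, the barycentric formula K = (a² : b² : c²) for the symmedian point, and the circle fit below are my own sketch:

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = (np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B))
K = (a**2 * A + b**2 * B + c**2 * C) / (a**2 + b**2 + c**2)  # symmedian point

def intersect(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e."""
    t = np.linalg.solve(np.column_stack([d, -e]), q - p)[0]
    return p + t * d

# Each parallel through K to one side meets the other two sides: 6 vertices.
pts = []
for side, others in [((B, C), [(A, B), (A, C)]),
                     ((C, A), [(B, C), (B, A)]),
                     ((A, B), [(C, A), (C, B)])]:
    d = side[1] - side[0]
    for q, r in others:
        pts.append(intersect(K, d, q, r - q))

# Fit x^2 + y^2 + D*x + E*y + F = 0 through all six points; an exact fit
# (zero residual) means the points are concyclic.
M = np.array([[p[0], p[1], 1.0] for p in pts])
rhs = -np.array([p[0]**2 + p[1]**2 for p in pts])
coef, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(np.allclose(M @ coef, rhs))  # True: the first Lemoine circle
```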
https://en.wikipedia.org/wiki/Abraham%20Adrian%20Albert
Abraham Adrian Albert (November 9, 1905 – June 6, 1972) was an American mathematician. In 1939, he received the American Mathematical Society's Cole Prize in Algebra for his work on Riemann matrices. He is best known for his work on the Albert–Brauer–Hasse–Noether theorem on finite-dimensional division algebras over number fields and as the developer of Albert algebras, which are also known as exceptional Jordan algebras. Professional overview A first generation American, he was born in Chicago and most associated with that city. He received his Bachelor of Science in 1926, Masters in 1927, and PhD in 1928, at the age of 22. All degrees were obtained from the University of Chicago. He married around the same time as his graduation. He spent his postdoctoral year at Princeton University and then from 1929 to 1931 he was an instructor at Columbia University. During this period he worked on Abelian varieties and their endomorphism algebras. He returned to Princeton for the opening year of the Institute for Advanced Study in 1933-34 and spent another year in Princeton in 1961-62 as the first Director of the Communications Research Division of IDA (the Institute for Defense Analyses). From 1931 to 1972, he served on the mathematics faculty at the University of Chicago, where he became chair of the Mathematics Department in 1958 and Dean of the Physical Sciences Division in 1961. As a research mathematician, he is primarily known for his work as one of the principal developers of the theory of linear associative algebras and as a pioneer in the development of linear non-associative algebras, although all of this grew out of his work on endomorphism algebras of Abelian varieties. As an applied mathematician, he also did work for the military during World War II and thereafter. One of his most notable achievements was his groundbreaking work on cryptography. He prepared a manuscript, "Some Mathematical Aspects of Cryptography," for his invited address at a meeting of the American Mathematical Society in November 1941. The theory that developed from this work can be seen in digital communications technologies. After WWII, he became a forceful advocate favoring government support for research in mathematics on a par with physical sciences. He served on policy-making bodies at the Office of Naval Research, the United States National Research Council, and the National Science Foundation that funneled research grants into mathematics, giving many young mathematicians career opportunities previously unavailable. Due to his success in helping to give mathematical research a sound financial footing, he earned a reputation as a "statesman for mathematics." Albert was elected a Fellow of the American Academy of Arts and Sciences in 1968. Publications Books A. A. Albert, Algebras and their radicals, and division algebras, 1928. . A. A. Albert, Structure of algebras, 1939. Colloquium publications 24, American Mathematical Society, 2003, . wi
https://en.wikipedia.org/wiki/Concept%20class
In computational learning theory in mathematics, a concept over a domain X is a total Boolean function over X. A concept class is a class of concepts. Concept classes are a subject of computational learning theory. Concept class terminology frequently appears in model theory associated with probably approximately correct (PAC) learning. In this setting, if one takes a set Y as a set of (classifier output) labels, and X is a set of examples, the map c: X → Y, i.e. from examples to classifier labels (where Y = {0, 1} and where c is a subset of X), c is then said to be a concept. A concept class is then a collection of such concepts. Given a class of concepts C, a subclass D is reachable if there exists a sample s such that D contains exactly those concepts in C that are extensions to s. Not every subclass is reachable. Background A sample is a partial function from X to {0, 1}. Identifying a concept with its characteristic function mapping X to {0, 1}, it is a special case of a sample. Two samples are consistent if they agree on the intersection of their domains. A sample s′ extends another sample s if the two are consistent and the domain of s is contained in the domain of s′. Examples Suppose that . Then: the subclass is reachable with the sample ; the subclass for are reachable with a sample that maps the elements of to zero; the subclass , which consists of the singleton sets, is not reachable. Applications Let C be some concept class. For any concept c in C, we call this concept k-good for a positive integer k if, for all x in X, at least a certain fraction of the concepts in C agree with c on the classification of x. The fingerprint dimension of the entire concept class C is the least positive integer k such that every reachable subclass contains a concept that is k-good for it. This quantity can be used to bound the minimum number of equivalence queries needed to learn a class of concepts according to the following inequality:. References Computational learning theory
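A toy rendering of reachability (the domain and labels below are my own illustration): concepts over X = {0, 1, 2} are the Boolean functions on X, a sample is a partial labeling, and it reaches the subclass of concepts extending it.

```python
from itertools import product

X = [0, 1, 2]
# All 2^3 = 8 concepts: total Boolean functions on X, stored as dicts.
concepts = [dict(zip(X, bits)) for bits in product([0, 1], repeat=len(X))]

def reachable_subclass(sample):
    """Concepts consistent with (i.e. extending) the partial labeling."""
    return [c for c in concepts
            if all(c[x] == y for x, y in sample.items())]

print(len(reachable_subclass({0: 1})))        # 4 concepts extend "0 -> 1"
print(len(reachable_subclass({0: 1, 2: 0})))  # 2 concepts extend both labels
```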
https://en.wikipedia.org/wiki/Bondy%27s%20theorem
In mathematics, Bondy's theorem is a bound on the number of elements needed to distinguish the sets in a family of sets from each other. It belongs to the field of combinatorics, and is named after John Adrian Bondy, who published it in 1972. Statement The theorem is as follows: Let X be a set with n elements and let A1, A2, ..., An be distinct subsets of X. Then there exists a subset S of X with n − 1 elements such that the sets Ai ∩ S are all distinct. In other words, if we have a 0-1 matrix with n rows and n columns such that each row is distinct, we can remove one column such that the rows of the resulting n × (n − 1) matrix are distinct. Example Consider the 4 × 4 matrix where all rows are pairwise distinct. If we delete, for example, the first column, the resulting matrix no longer has this property: the first row is identical to the second row. Nevertheless, by Bondy's theorem we know that we can always find a column that can be deleted without introducing any identical rows. In this case, we can delete the third column: all rows of the resulting 4 × 3 matrix are distinct. Another possibility would have been deleting the fourth column. Learning theory application From the perspective of computational learning theory, Bondy's theorem can be rephrased as follows: Let C be a concept class over a finite domain X. Then there exists a subset S of X of size at most |C| − 1 such that S is a witness set for every concept in C. This implies that every finite concept class C has its teaching dimension bounded by |C| − 1. Notes Computational learning theory Theorems in combinatorics
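The statement is easy to test directly; the example matrix below is mine (chosen so that only one column is removable), not the elided matrix from the article:

```python
def deletable_columns(rows):
    """Columns whose removal keeps all rows of a 0-1 matrix distinct."""
    n = len(rows)
    ok = []
    for j in range(len(rows[0])):
        reduced = {r[:j] + r[j+1:] for r in rows}
        if len(reduced) == n:
            ok.append(j)
    return ok

rows = [(0, 0, 0, 0),
        (1, 0, 0, 0),
        (0, 1, 0, 0),
        (0, 0, 1, 0)]
# Bondy's theorem guarantees the list is non-empty for distinct rows.
print(deletable_columns(rows))  # [3]: only the last column can be deleted
```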
https://en.wikipedia.org/wiki/Sumner%20Byron%20Myers
Sumner Byron Myers (February 19, 1910 – October 8, 1955) was an American mathematician specializing in topology and differential geometry. He studied at Harvard University under H. C. Marston Morse, where he graduated with a Ph.D. in 1932. Myers then pursued postdoctoral studies at Princeton University (1934–1936) before becoming a professor for mathematics at the University of Michigan. He died unexpectedly from a heart attack during the 1955 Michigan–Army football game at Michigan Stadium. Sumner B. Myers Prize The Sumner B. Myers Prize was created in his honor for distinguished theses within the LSA Mathematics Department. The recipients since 2004 are as follows: 2004: Peter Storm 2005: Kevin Woods 2006: Calin Chindris 2007: Yann Bernard, Samuel Payne 2008: Bryden Cais 2009: Susan Sierra 2010: Paul Johnson, Alan Stapledon 2011: Kevin Tucker 2012: Matthew Elsey 2013: Max Glick 2014: Jae Kyoung Kim 2015: June Huh, Mary Wootters 2016: Brandon Seward 2017: Hamed Razavi 2018: Rohini Ramadas 2019: Visu Makam 2020: Han Huang 2021: Emanuel Reinecke 2022: Xin Zhang References Further reading 1910 births 1955 deaths 20th-century American mathematicians Topologists Harvard University alumni University of Michigan faculty Geometers
https://en.wikipedia.org/wiki/Invariant%20basis%20number
In mathematics, more specifically in the field of ring theory, a ring R has the invariant basis number (IBN) property if all finitely generated free left modules over R have a well-defined rank. In the case of fields, the IBN property becomes the statement that finite-dimensional vector spaces have a unique dimension. Definition A ring R has invariant basis number (IBN) if for all positive integers m and n, Rm isomorphic to Rn (as left R-modules) implies that m = n. Equivalently, this means there do not exist distinct positive integers m and n such that Rm is isomorphic to Rn. Rephrasing the definition of invariant basis number in terms of matrices, it says that, whenever A is an m-by-n matrix over R and B is an n-by-m matrix over R such that AB = Im and BA = In, then m = n. This form reveals that the definition is left–right symmetric, so it makes no difference whether we define IBN in terms of left or right modules; the two definitions are equivalent. Note that the isomorphisms in the definitions are not ring isomorphisms, they are module isomorphisms, even when one of n or m is 1. Properties The main purpose of the invariant basis number condition is that free modules over an IBN ring satisfy an analogue of the dimension theorem for vector spaces: any two bases for a free module over an IBN ring have the same cardinality. Assuming the ultrafilter lemma (a strictly weaker form of the axiom of choice), this result is actually equivalent to the definition given here, and can be taken as an alternative definition. The rank of a free module Rn over an IBN ring R is defined to be the cardinality of the exponent m of any (and therefore every) R-module Rm isomorphic to Rn. Thus the IBN property asserts that every isomorphism class of free R-modules has a unique rank. The rank is not defined for rings not satisfying IBN. For vector spaces, the rank is also called the dimension. Thus the result above is in short: the rank is uniquely defined for all free R-modules iff it is uniquely defined for finitely generated free R-modules. Examples Any field satisfies IBN, and this amounts to the fact that finite-dimensional vector spaces have a well defined dimension. Moreover, any commutative ring (except the zero ring) satisfies IBN, as does any left-Noetherian ring and any semilocal ring. Let A be a commutative ring and assume there exists an A-module isomorphism f: An → Am. Let e1, ..., en be the canonical basis of An, which means ei is all zeros except a one in the i-th position. By Krull's theorem, let I be a maximal proper ideal of A. Any A-module morphism f: An → Am satisfies f(IAn) ⊆ IAm because I is an ideal. So f induces an A/I-module morphism f′: (A/I)n → (A/I)m, that can easily be proven to be an isomorphism. Since A/I is a field, f′ is an isomorphism between finite dimensional vector spaces, so n = m. An example of a nonzero ring that does not satisfy IBN is the ring of column finite matrices, the matrices with coefficients in a ring R, with entries indexed by pairs of natural numbers and with each column having only finitely many non-zero entries. That