https://en.wikipedia.org/wiki/Parabigyrate%20rhombicosidodecahedron
In geometry, the parabigyrate rhombicosidodecahedron is one of the Johnson solids (J73). It can be constructed as a rhombicosidodecahedron with two opposing pentagonal cupolae rotated through 36 degrees. It is also a canonical polyhedron. Alternative Johnson solids, constructed by rotating different cupolae of a rhombicosidodecahedron, are: the gyrate rhombicosidodecahedron (J72), where only one cupola is rotated; the metabigyrate rhombicosidodecahedron (J74), where two non-opposing cupolae are rotated; and the trigyrate rhombicosidodecahedron (J75), where three cupolae are rotated. External links Johnson solids
https://en.wikipedia.org/wiki/Paragyrate%20diminished%20rhombicosidodecahedron
In geometry, the paragyrate diminished rhombicosidodecahedron is one of the Johnson solids (J77). It can be constructed as a rhombicosidodecahedron with one pentagonal cupola rotated through 36 degrees and the opposing pentagonal cupola removed. External links Johnson solids
https://en.wikipedia.org/wiki/Metagyrate%20diminished%20rhombicosidodecahedron
In geometry, the metagyrate diminished rhombicosidodecahedron is one of the Johnson solids (J78). It can be constructed as a rhombicosidodecahedron with one pentagonal cupola (J5) rotated through 36 degrees and a non-opposing pentagonal cupola removed. (The two cupolae cannot be adjacent.) External links Johnson solids
https://en.wikipedia.org/wiki/Bigyrate%20diminished%20rhombicosidodecahedron
In geometry, the bigyrate diminished rhombicosidodecahedron is one of the Johnson solids (J79). It can be constructed as a rhombicosidodecahedron with two pentagonal cupolae rotated through 36 degrees and a third pentagonal cupola removed. (None of the cupolae can be adjacent.) External links Johnson solids
https://en.wikipedia.org/wiki/Gyrate%20bidiminished%20rhombicosidodecahedron
In geometry, the gyrate bidiminished rhombicosidodecahedron is one of the Johnson solids (J82). It can be constructed as a rhombicosidodecahedron with two non-opposing pentagonal cupolae (J5) removed and a third rotated through 36 degrees. Related Johnson solids are: the diminished rhombicosidodecahedron (J76), where one cupola is removed; the parabidiminished rhombicosidodecahedron (J80), where two opposing cupolae are removed; the metabidiminished rhombicosidodecahedron (J81), where two non-opposing cupolae are removed; and the tridiminished rhombicosidodecahedron (J83), where three cupolae are removed. External links Johnson solids
https://en.wikipedia.org/wiki/Metabigyrate%20rhombicosidodecahedron
In geometry, the metabigyrate rhombicosidodecahedron is one of the Johnson solids (J74). It can be constructed as a rhombicosidodecahedron with two non-opposing pentagonal cupolae rotated through 36 degrees. It is also a canonical polyhedron. Alternative Johnson solids, constructed by rotating different cupolae of a rhombicosidodecahedron, are: the gyrate rhombicosidodecahedron (J72), where only one cupola is rotated; the parabigyrate rhombicosidodecahedron (J73), where two opposing cupolae are rotated; and the trigyrate rhombicosidodecahedron (J75), where three cupolae are rotated. External links World of Polyhedra - metabigyrate rhombicosidodecahedron (interactive rotatable wireframe applet) Johnson solids
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20bicupola
In geometry, the gyroelongated pentagonal bicupola is one of the Johnson solids (J46). As the name suggests, it can be constructed by gyroelongating a pentagonal bicupola (J30 or J31) by inserting a decagonal antiprism between its congruent halves. The gyroelongated pentagonal bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the right. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom square would be connected to a square face above it and to the left. The two chiral forms of J46 are not considered different Johnson solids. Area and volume With edge length a, the surface area (10 squares, 30 equilateral triangles and 2 regular pentagons) is A = (10 + (15√3)/2 + √(25 + 10√5)/2) a² ≈ 26.431a², and the volume is approximately 11.397a³. External links Johnson solids Chiral polyhedra
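The face counts and the cupola-plus-antiprism construction described above can be checked numerically. The following Python sketch is an illustration, not taken from the source: it assumes the decomposition of J46 into two pentagonal cupolae and a uniform decagonal antiprism, uses the known unit-edge cupola volume (5 + 4√5)/6, and evaluates the antiprism volume with the prismatoid formula. It reproduces the approximate values quoted above.

import math

a = 1.0  # edge length

# Surface area: 10 squares, 30 equilateral triangles, 2 regular pentagons.
area = (10 * a**2
        + 30 * (math.sqrt(3) / 4) * a**2
        + 2 * (math.sqrt(25 + 10 * math.sqrt(5)) / 4) * a**2)

# Volume of one pentagonal cupola (J5) with unit edge: (5 + 4*sqrt(5)) / 6.
v_cupola = (5 + 4 * math.sqrt(5)) / 6 * a**3

# Uniform decagonal antiprism via the prismatoid formula
# V = h*(B_top + 4*B_mid + B_bottom)/6; the mid cross-section is a regular
# 20-gon through the midpoints of the zigzag edges.
n = 10
R = a / (2 * math.sin(math.pi / n))                               # decagon circumradius
h = a * math.sqrt(1 - 1 / (4 * math.cos(math.pi / (2 * n)) ** 2))  # antiprism height
B = 0.5 * n * R**2 * math.sin(2 * math.pi / n)                     # decagon area
M = 0.5 * 2 * n * (R * math.cos(math.pi / (2 * n))) ** 2 * math.sin(math.pi / n)  # 20-gon area
v_antiprism = h * (2 * B + 4 * M) / 6

volume = 2 * v_cupola + v_antiprism
print(f"surface area ~ {area:.4f} a^2")    # ~26.4313
print(f"volume       ~ {volume:.4f} a^3")  # ~11.3974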
https://en.wikipedia.org/wiki/Classical%20mathematics
In the foundations of mathematics, classical mathematics refers generally to the mainstream approach to mathematics, which is based on classical logic and ZFC set theory. It stands in contrast to other types of mathematics such as constructive mathematics or predicative mathematics. In practice, the most common non-classical systems are used in constructive mathematics. Classical mathematics is sometimes attacked on philosophical grounds, due to constructivist and other objections to the logic, set theory, etc., chosen as its foundations, such as have been expressed by L. E. J. Brouwer. Almost all mathematics, however, is done in the classical tradition, or in ways compatible with it. Defenders of classical mathematics, such as David Hilbert, have argued that it is easier to work in, and is most fruitful; although they acknowledge non-classical mathematics has at times led to fruitful results that classical mathematics could not (or could not so easily) attain, they argue that on the whole, it is the other way round. See also Constructivism (mathematics) Finitism Intuitionism Non-classical analysis Traditional mathematics Ultrafinitism Philosophy of Mathematics References Mathematical logic
https://en.wikipedia.org/wiki/Change%20of%20basis
In mathematics, an ordered basis of a vector space of finite dimension allows representing uniquely any element of the vector space by a coordinate vector, which is a sequence of scalars called coordinates. If two different bases are considered, the coordinate vector that represents a vector on one basis is, in general, different from the coordinate vector that represents on the other basis. A change of basis consists of converting every assertion expressed in terms of coordinates relative to one basis into an assertion expressed in terms of coordinates relative to the other basis. Such a conversion results from the change-of-basis formula which expresses the coordinates relative to one basis in terms of coordinates relative to the other basis. Using matrices, this formula can be written where "old" and "new" refer respectively to the firstly defined basis and the other basis, and are the column vectors of the coordinates of the same vector on the two bases, and is the change-of-basis matrix (also called transition matrix), which is the matrix whose columns are the coordinate vectors of the new basis vectors on the old basis. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Change of basis formula Let be a basis of a finite-dimensional vector space over a field . For , one can define a vector by its coordinates over Let be the matrix whose th column is formed by the coordinates of . (Here and in what follows, the index refers always to the rows of and the while the index refers always to the columns of and the such a convention is useful for avoiding errors in explicit computations.) Setting one has that is a basis of if and only if the matrix is invertible, or equivalently if it has a nonzero determinant. In this case, is said to be the change-of-basis matrix from the basis to the basis Given a vector let be the coordinates of over and its coordinates over that is (One could take the same summation index for the two sums, but choosing systematically the indexes for the old basis and for the new one makes clearer the formulas that follows, and helps avoiding errors in proofs and explicit computations.) The change-of-basis formula expresses the coordinates over the old basis in term of the coordinates over the new basis. With above notation, it is In terms of matrices, the change of basis formula is where and are the column vectors of the coordinates of over and respectively. Proof: Using the above definition of the change-of basis matrix, one has As the change-of-basis formula results from the uniqueness of the decomposition of a vector over a basis. Example Consider the Euclidean vector space Its standard basis consists of the vectors and If one rotates them by an angle of , one gets a new basis formed by and So, the change-of-basis matrix is The change-of-basis formula asserts t
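A short numerical sketch of the change-of-basis formula may help; the rotation below mirrors the example in the text, while the particular angle and test vector are arbitrary choices of this illustration. The columns of the change-of-basis matrix A hold the new basis vectors written in old (here, standard) coordinates, and coordinates transform by x_old = A x_new.

import numpy as np

t = np.pi / 6  # rotate the standard basis of R^2 by 30 degrees (arbitrary example)
# Columns of A = coordinates of the new basis vectors over the old (standard) basis.
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

x_new = np.array([2.0, 1.0])   # coordinates of some vector over the new basis
x_old = A @ x_new              # change-of-basis formula: old coordinates from new ones
assert np.allclose(np.linalg.solve(A, x_old), x_new)  # inverting A recovers the new coords

# The vector itself is the same object in both descriptions: since the old basis
# is the standard basis, summing x_new against the new basis vectors gives x_old.
new_basis = [A[:, 0], A[:, 1]]
v = x_new[0] * new_basis[0] + x_new[1] * new_basis[1]
assert np.allclose(v, x_old)
print(x_old)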
https://en.wikipedia.org/wiki/Dendroid
The word Dendroid derives from the Greek word "dendron" ("tree") and means "tree-like". Dendroid may refer to: Dendroid (topology), in mathematics Dendroid (malware), Android malware See also Dendrite (disambiguation)
https://en.wikipedia.org/wiki/Ancient%20Egyptian%20mathematics
Ancient Egyptian mathematics is the mathematics that was developed and used in Ancient Egypt from around 3000 BC to around 300 BC, from the Old Kingdom of Egypt until roughly the beginning of Hellenistic Egypt. The ancient Egyptians utilized a numeral system for counting and solving written mathematical problems, often involving multiplication and fractions. Evidence for Egyptian mathematics is limited to a small number of surviving sources written on papyrus. From these texts it is known that ancient Egyptians understood concepts of geometry, such as determining the surface area and volume of three-dimensional shapes useful for architectural engineering, and algebra, such as the false position method and quadratic equations. Overview Written evidence of the use of mathematics dates back to at least 3200 BC with the ivory labels found in Tomb U-j at Abydos. These labels appear to have been used as tags for grave goods and some are inscribed with numbers. Further evidence of the use of the base 10 number system can be found on the Narmer Macehead which depicts offerings of 400,000 oxen, 1,422,000 goats and 120,000 prisoners. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa. Fractal geometry designs, which are widespread among Sub-Saharan African cultures, are also found in Egyptian architecture and cosmological signs. The evidence of the use of mathematics in the Old Kingdom (c. 2690–2180 BC) is scarce, but can be deduced from inscriptions on a wall near a mastaba in Meidum which give guidelines for the slope of the mastaba. The lines in the diagram are spaced at a distance of one cubit and show the use of that unit of measurement. The earliest true mathematical documents date to the 12th Dynasty (c. 1990–1800 BC). The Moscow Mathematical Papyrus, the Egyptian Mathematical Leather Roll, the Lahun Mathematical Papyri which are a part of the much larger collection of Kahun Papyri and the Berlin Papyrus 6619 all date to this period. The Rhind Mathematical Papyrus which dates to the Second Intermediate Period (c. 1650 BC) is said to be based on an older mathematical text from the 12th dynasty. The Moscow Mathematical Papyrus and Rhind Mathematical Papyrus are so-called mathematical problem texts. They consist of a collection of problems with solutions. These texts may have been written by a teacher or a student engaged in solving typical mathematics problems. An interesting feature of ancient Egyptian mathematics is the use of unit fractions. The Egyptians used some special notation for fractions such as 1/2 and 2/3, and in some texts for 3/4, but other fractions were all written as unit fractions of the form 1/n or sums of such unit fractions. Scribes used tables to help them work with these fractions. The Egyptian Mathematical Leather Roll for instance is a table of unit fractions which are expressed as sums of other unit fractions. The Rhind Mathematical Papyrus and some of the other texts contain 2/n tables.
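As a modern illustration of writing a fraction as a sum of distinct unit fractions, the greedy (Fibonacci–Sylvester) method can be sketched in a few lines of Python. This algorithm is not the method attested in the Egyptian sources, which relied on prepared tables; the sample fractions are arbitrary.

from fractions import Fraction

def egyptian_fractions(frac: Fraction) -> list[Fraction]:
    """Greedy decomposition of a fraction in (0, 1) into distinct unit fractions."""
    assert 0 < frac < 1
    parts = []
    while frac:
        d = (frac.denominator + frac.numerator - 1) // frac.numerator  # ceil(q/p)
        parts.append(Fraction(1, d))
        frac -= Fraction(1, d)
    return parts

print([str(u) for u in egyptian_fractions(Fraction(2, 3))])   # ['1/2', '1/6']
print([str(u) for u in egyptian_fractions(Fraction(7, 15))])  # ['1/3', '1/8', '1/120']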
https://en.wikipedia.org/wiki/Small%20set
In mathematics, the term small set may refer to: Small set (category theory) Small set (combinatorics), a set of positive integers whose sum of reciprocals converges Small set, an element of a Grothendieck universe See also Ideal (set theory) Natural density Large set (disambiguation)
https://en.wikipedia.org/wiki/Large%20set
In mathematics, the term large set is sometimes used to refer to any set that is "large" in some sense. It has specialized meanings in three branches of mathematics: Large set (category theory), a set that does not belong to a fixed universe of sets Large set (combinatorics), a set of integers whose sum of reciprocals diverges Large set (Ramsey theory), a set of integers with the property that, if all the integers are colored, one of the color classes has long arithmetic progressions whose differences are in the set See also Natural density Small set (disambiguation)
https://en.wikipedia.org/wiki/Complex%20algebra
Complex algebra may refer to: A complex algebra (set theory), also known as field of sets Algebra over the complex numbers Algebra involving complex numbers
https://en.wikipedia.org/wiki/Carmichael%20function
In number theory, a branch of mathematics, the Carmichael function of a positive integer is the smallest positive integer such that holds for every integer coprime to . In algebraic terms, is the exponent of the multiplicative group of integers modulo . The Carmichael function is named after the American mathematician Robert Carmichael who defined it in 1910. It is also known as Carmichael's λ function, the reduced totient function, and the least universal exponent function. The following table compares the first 36 values of with Euler's totient function (in bold if they are different; the s such that they are different are listed in ). Numerical examples Carmichael's function at 5 is 4, , because for any number coprime to 5, i.e. there is with namely, , , and . And this is the smallest exponent with this property, because (and as well.)Moreover, Euler's totient function at 5 is 4, , because there are exactly 4 numbers less than and coprime to 5 (1, 2, 3, and 4). Euler's theorem assures that for all coprime to 5, and 4 is the smallest such exponent. Carmichael's function at 8 is 2, , because for any number coprime to 8, i.e. it holds that . Namely, , , and .Euler's totient function at 8 is 4, , because there are exactly 4 numbers less than and coprime to 8 (1, 3, 5, and 7). Moreover, Euler's theorem assures that for all coprime to 8, but 4 is not the smallest such exponent. Computing with Carmichael's theorem By the unique factorization theorem, any can be written in a unique way as where are primes and are positive integers. Then is the least common multiple of the of each of its prime power factors: This can be proved using the Chinese remainder theorem. Carmichael's theorem explains how to compute of a prime power : for a power of an odd prime and for 2 and 4, is equal to the Euler totient ; for powers of 2 greater than 4 it is equal to half of the Euler totient: Euler's function for prime powers is given by Properties of the Carmichael function In this section, an integer is divisible by a nonzero integer if there exists an integer such that . This is written as Order of elements modulo Let and be coprime and let be the smallest exponent with , then it holds that . That is, the order of a unit in the ring of integers modulo divides and Minimality Suppose for all numbers coprime with . Then . Proof: If with , then for all numbers coprime with . It follows , since and the minimal positive such number. divides This follows from elementary group theory, because the exponent of any finite group must divide the order of the group. is the exponent of the multiplicative group of integers modulo while is the order of that group. In particular, the two must be equal in the cases where the multiplicative group is cyclic due to the existence of a primitive root, which is the case for odd prime powers. We can thus view Carmichael's theorem as a sharpening of Euler's theorem.
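Carmichael's theorem as stated above translates directly into a short program. The Python sketch below (trial-division factorization is used only to keep it self-contained; the variable names are this illustration's own) computes λ(n) as the least common multiple over the prime-power factors and checks the defining property a^λ(n) ≡ 1 (mod n) for every a coprime to n.

from math import gcd, lcm

def factorize(n: int) -> dict[int, int]:
    """Trial-division prime factorization (adequate for small n)."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def carmichael(n: int) -> int:
    """lambda(n): lcm of lambda over the prime-power factors of n."""
    result = 1
    for p, k in factorize(n).items():
        if p == 2 and k >= 3:
            lam = 2 ** (k - 2)            # half the totient for 8, 16, 32, ...
        else:
            lam = (p - 1) * p ** (k - 1)  # Euler's totient of p^k (odd p, and 2, 4)
        result = lcm(result, lam)
    return result

# Check the defining property for a few moduli.
for n in (5, 8, 15, 16, 36):
    lam = carmichael(n)
    assert all(pow(a, lam, n) == 1 for a in range(1, n) if gcd(a, n) == 1)
    print(n, lam)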
https://en.wikipedia.org/wiki/Andrea%20diSessa
Andrea A. diSessa (born June 3, 1947) is an education researcher and author of the book Turtle Geometry about Logo. He has also written highly cited research papers on the epistemology of physics, educational experimentation, and constructivist analysis of knowledge. He also created, with Hal Abelson, the Boxer Programming Environment at the Massachusetts Institute of Technology. Personal history DiSessa received an A.B. in physics from Princeton University in 1969 and a Ph.D. in physics from the Massachusetts Institute of Technology in 1975. He is currently Evelyn Lois Corey Professor of Education at the University of California, Berkeley and has been a member of the National Academy for Education since 1995. Some of his notable work in Education research focuses on the concept of material intelligence and computational literacy, and ontological innovations and the role of theory in design based research. Material intelligence Material Intelligence can be thought of as a subset of distributed cognition, where it refers to the new knowledge that furthers human intelligence and skills by interaction with the computer, and existing computer literacy, in a social environment. It can also be the ability of tools in general, and computers in specific, to increase the intelligence and skills of human mind. It was coined by Andrea DiSessa in his book Changing Minds: Computers, Learning and Literacy. He uses the terms computational literacy, material literacy, and material intelligence interchangeably. Conceptually, material intelligence is influenced by constructionism and distributed cognition theory. This concept is similar to constructionism because user makes sense of the world around them using a tool, and the interaction with this tool is helpful in shaping the understanding of the world. It is similar to distributed cognition because it focuses on “social and material setting of cognitive activity, so that culture, context and history can be linked with the core concepts of cognition.” Material intelligence has to be dependent on the material (the tool, or the computer), but it also has to be social. He says that “Material intelligence does not reside in either the mind or the materials alone. Indeed, the coupling of external and internal activity is intricate and critical”. Hatch & Gardner elaborated on the social aspect of human intelligence in their chapter on Distributed Cognitions. They stated that learning is social and happens whenever humans interact with one another, even though the manner in which learning happens may differ based on 1) cultural forces, 2) local forces, and 3) personal forces, with cultural forces being the least motivating force, and personal forces being the most motivating force. This is relevant because instances of material intelligence happen at an individual level, which are shaped by personal experiences, and then they spread outwards to influence the culture. Examples of material intelligence A typical ex
https://en.wikipedia.org/wiki/Dual%20basis%20in%20a%20field%20extension
In mathematics, the linear algebra concept of dual basis can be applied in the context of a finite extension L/K, by using the field trace. This requires the property that the field trace TrL/K provides a non-degenerate quadratic form over K. This can be guaranteed if the extension is separable; it is automatically true if K is a perfect field, and hence in the cases where K is finite, or of characteristic zero. A dual basis is not a concrete basis like the polynomial basis or the normal basis; rather it provides a way of using a second basis for computations. Consider two bases for elements in a finite field, GF(p^m): B1 = {x_0, x_1, ..., x_(m−1)} and B2 = {y_0, y_1, ..., y_(m−1)}; then B2 can be considered a dual basis of B1 provided Tr(x_i·y_j) = 0 whenever i ≠ j and Tr(x_i·y_i) = 1. Here the trace of a value b in GF(p^m) can be calculated as follows: Tr(b) = b + b^p + b^(p^2) + ... + b^(p^(m−1)). Using a dual basis can provide a way to easily communicate between devices that use different bases, rather than having to explicitly convert between bases using the change-of-basis formula. Furthermore, if a dual basis is implemented then conversion from an element in the original basis to the dual basis can be accomplished with multiplication by the multiplicative identity (usually 1). References Definition 2.30, p. 54. Linear algebra Field extensions Theory of cryptography
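The trace condition Tr(x_i·y_j) = δ_ij can be verified concretely in a tiny field. The Python sketch below models GF(2^3) as GF(2)[x]/(x^3 + x + 1) (the modulus polynomial is an arbitrary choice of this illustration) and finds, by brute force, the dual basis of the polynomial basis {1, x, x^2}.

from itertools import permutations

MOD = 0b1011  # x^3 + x + 1; elements of GF(2^3) are 3-bit integers

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication followed by reduction mod x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def trace(a: int) -> int:
    """Field trace to GF(2): a + a^2 + a^4 (always 0 or 1)."""
    t, s = 0, a
    for _ in range(3):
        t ^= s
        s = gf_mul(s, s)
    return t

poly_basis = [0b001, 0b010, 0b100]  # {1, x, x^2}

# Brute-force search for the (unique) dual basis: Tr(x_i * y_j) = 1 iff i == j.
for cand in permutations(range(1, 8), 3):
    if all(trace(gf_mul(x, y)) == (i == j)
           for i, x in enumerate(poly_basis)
           for j, y in enumerate(cand)):
        print("dual basis of {1, x, x^2}:", [bin(c) for c in cand])
        break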
https://en.wikipedia.org/wiki/Dual%20basis
In linear algebra, given a vector space with a basis of vectors indexed by an index set (the cardinality of is the dimension of ), the dual set of is a set of vectors in the dual space with the same index set I such that and form a biorthogonal system. The dual set is always linearly independent but does not necessarily span . If it does span , then is called the dual basis or reciprocal basis for the basis . Denoting the indexed vector sets as and , being biorthogonal means that the elements pair to have an inner product equal to 1 if the indexes are equal, and equal to 0 otherwise. Symbolically, evaluating a dual vector in on a vector in the original space : where is the Kronecker delta symbol. Introduction To perform operations with a vector, we must have a straightforward method of calculating its components. In a Cartesian frame the necessary operation is the dot product of the vector and the base vector. For example, where is the bases in a Cartesian frame. The components of can be found by However, in a non-Cartesian frame, we do not necessarily have for all . However, it is always possible to find a vector such that The equality holds when is the dual base of . Notice the difference in position of the index . In a Cartesian frame, we have Existence and uniqueness The dual set always exists and gives an injection from V into V∗, namely the mapping that sends vi to vi. This says, in particular, that the dual space has dimension greater or equal to that of V. However, the dual set of an infinite-dimensional V does not span its dual space V∗. For example, consider the map w in V∗ from V into the underlying scalars F given by for all i. This map is clearly nonzero on all vi. If w were a finite linear combination of the dual basis vectors vi, say for a finite subset K of I, then for any j not in K, , contradicting the definition of w. So, this w does not lie in the span of the dual set. The dual of an infinite-dimensional space has greater dimensionality (this being a greater infinite cardinality) than the original space has, and thus these cannot have a basis with the same indexing set. However, a dual set of vectors exists, which defines a subspace of the dual isomorphic to the original space. Further, for topological vector spaces, a continuous dual space can be defined, in which case a dual basis may exist. Finite-dimensional vector spaces In the case of finite-dimensional vector spaces, the dual set is always a dual basis and it is unique. These bases are denoted by and . If one denotes the evaluation of a covector on a vector as a pairing, the biorthogonality condition becomes: The association of a dual basis with a basis gives a map from the space of bases of V to the space of bases of V∗, and this is also an isomorphism. For topological fields such as the real numbers, the space of duals is a topological space, and this gives a homeomorphism between the Stiefel manifolds of bases of these space
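In the finite-dimensional case the dual basis can be computed by inverting the matrix whose columns are the basis vectors; its rows are then the coordinate functionals. A numpy sketch (the particular basis of R^3 is an arbitrary example) checking biorthogonality and coordinate recovery:

import numpy as np

B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns are the basis vectors v_1, v_2, v_3

dual = np.linalg.inv(B)           # rows are the dual covectors v^1, v^2, v^3

# Biorthogonality: v^i(v_j) equals the Kronecker delta.
assert np.allclose(dual @ B, np.eye(3))

# The dual basis recovers coordinates: the coefficients of x over (v_1, v_2, v_3)
# are obtained by applying the dual covectors to x.
x = np.array([2.0, -1.0, 3.0])
coords = dual @ x
assert np.allclose(B @ coords, x)
print(coords)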
https://en.wikipedia.org/wiki/Quadratic%20residuosity%20problem
The quadratic residuosity problem (QRP) in computational number theory is to decide, given integers and , whether is a quadratic residue modulo or not. Here for two unknown primes and , and is among the numbers which are not obviously quadratic non-residues (see below). The problem was first described by Gauss in his Disquisitiones Arithmeticae in 1801. This problem is believed to be computationally difficult. Several cryptographic methods rely on its hardness, see . An efficient algorithm for the quadratic residuosity problem immediately implies efficient algorithms for other number theoretic problems, such as deciding whether a composite of unknown factorization is the product of 2 or 3 primes. Precise formulation Given integers and , is said to be a quadratic residue modulo if there exists an integer such that . Otherwise we say it is a quadratic non-residue. When is a prime, it is customary to use the Legendre symbol: This is a multiplicative character which means for exactly of the values , and it is for the remaining. It is easy to compute using the law of quadratic reciprocity in a manner akin to the Euclidean algorithm, see Legendre symbol. Consider now some given where and are two, different unknown primes. A given is a quadratic residue modulo if and only if is a quadratic residue modulo both and and . Since we don't know or , we cannot compute and . However, it is easy to compute their product. This is known as the Jacobi symbol: This can also be efficiently computed using the law of quadratic reciprocity for Jacobi symbols. However, can not in all cases tell us whether is a quadratic residue modulo or not! More precisely, if then is necessarily a quadratic non-residue modulo either or , in which case we are done. But if then it is either the case that is a quadratic residue modulo both and , or a quadratic non-residue modulo both and . We cannot distinguish these cases from knowing just that . This leads to the precise formulation of the quadratic residue problem: Problem: Given integers and , where and are unknown, different primes, and where , determine whether is a quadratic residue modulo or not. Distribution of residues If is drawn uniformly at random from integers such that , is more often a quadratic residue or a quadratic non-residue modulo ? As mentioned earlier, for exactly half of the choices of , then , and for the rest we have . By extension, this also holds for half the choices of . Similarly for . From basic algebra, it follows that this partitions into 4 parts of equal size, depending on the sign of and . The allowed in the quadratic residue problem given as above constitute exactly those two parts corresponding to the cases and . Consequently, exactly half of the possible are quadratic residues and the remaining are not. Applications The intractability of the quadratic residuosity problem is the basis for the security of the Blum Blum Shub pseudora
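The gap between "Jacobi symbol +1" and actual quadratic residuosity is easy to see experimentally. The Python sketch below (toy primes p = 11, q = 13, far too small for cryptography; the Jacobi-symbol routine is a standard textbook algorithm, not code from the article) shows that among the values a with Jacobi symbol (a/N) = +1, exactly half are quadratic residues modulo N.

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:               # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                     # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0      # 0 means gcd(a, n) > 1

p, q = 11, 13                 # toy "secret" primes
N = p * q
residues = {pow(x, 2, N) for x in range(1, N)}            # actual quadratic residues mod N
plus_one = [a for a in range(1, N) if jacobi(a, N) == 1]  # the "ambiguous" inputs
qr = [a for a in plus_one if a in residues]
print(len(plus_one), len(qr))  # 60 and 30: half of the Jacobi +1 values are residues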
https://en.wikipedia.org/wiki/Islam%20in%20Thailand
Islam is a minority faith in Thailand, with statistics suggesting 4.9% of the population are Muslim. Figures as high as 5% of Thailand's population have also been mentioned. A 2023 Pew Research Center survey gave 7%. Most Thai Muslims are Sunni Muslims, although Thailand has a diverse population that includes immigrants from around the world. Demographics and geography Popular opinion seems to hold that a vast majority of the country's Muslims are found in Thailand's four southernmost provinces of Satun, Yala, Pattani and Narathiwat, where they make up majority of the population. However, the Thai Ministry of Foreign Affairs' research indicates that only 18 percent of Thai Muslims live in those four provinces. There are also significant minority of Muslims in other southern provinces such as Krabi, Trang, Phatthalung and Phuket. In Bangkok, large Muslim populations maybe found in districts such as Nong Chok, Min Buri and Bang Rak. According to the National Statistics Office, in 2015 census, Muslims in Southern Thailand made up 24 percent of the total population, while less than three percent in other parts of the country. History Muslim merchant communities resided in Thailand as early as the 9th century. In early modern Thailand, Muslims from the Coromandel Coast served as eunuchs in the Thai palace and court. Thailand, as Siam, was known for religious tolerance, and there were Muslims working for the Siamese Royal Governments throughout the eras. This culture of tolerance in Siam and later Thailand resulted in the great diversity of Islam in Thailand. Malay separatism in South Thailand is mostly a war based on ethnicity, as Malays in the region have sought to separate from Thailand, although extremist Muslim groups are involved in the conflict. Ethnicity and identity Thailand's Muslim population is diverse, with ethnic groups having migrated from as far as China, Pakistan, Cambodia, Bangladesh, Malaysia, and Indonesia, as well as including ethnic Thais, while about two-thirds of Muslims in Thailand are Thai Malays. Ethnic Thai Muslims Many Thai Muslims are ethnically and linguistically Thai, who are either hereditary Muslims, Muslims by intermarriage, or recent converts to the faith. Ethnic Thai Muslims live mainly in the central and southern provinces - varying from entire Muslim communities to mixed settlements. Former Commander-in-Chief of the Royal Thai Army General Sonthi Boonyaratglin is an example of an indigenous Thai Muslim. Sonthi is of remote Persian ancestry. His ancestor, Sheikh Ahmad of Qom, was an Iranian expatriate trader who lived in the Ayutthaya Kingdom for 26 years. Many Thais, including those of the Bunnag and Ahmadchula families trace their ancestry back to him. Sri Sulalai was a princess of the royal family of the Sultanate of Singora. Rama II of Siam took her as a concubine. In 1946 Prince Bhumibol Adulyadej and Ananda Mahidol, Rama VIII, toured the Tonson Mosque. Malay Muslims In the three southernmost b
https://en.wikipedia.org/wiki/Islam%20in%20the%20United%20Arab%20Emirates
Islam is the official religion of the United Arab Emirates. Of the total population, 76.9% are Muslims as of a 2010 estimate by the Pew Research Center. Although no official statistics are available for the breakdown between Sunni and Shia Muslims among noncitizen residents, media estimates suggest less than 20 percent of the noncitizen Muslim population are Shia. History The arrival of envoys from the Islamic prophet Muhammad in 632 heralded the conversion of the region to Islam. After prophet Muhammad's death, one of the major battles of the Ridda Wars was fought at Dibba, to the east coast of the present-day Emirates. The defeat of the non-Muslims, including Laqit bin Malik Al-Azdi, in this battle resulted in the triumph of Islam in the Arabian Peninsula. The Bani Yas, which today form the Emirate of Abu Dhabi and Emirate of Dubai, traditionally adhere to the Sunni Maliki school of Islamic jurisprudence from the Uyunid dynasty, who spread the Maliki school by the command of Sheikh Abdullah bin Ali Al Uyuni. The four emirates of Sharjah, Umm al-Quwain, Ras al-Khaimah, and Ajman follow the Hanbali school, and the Emirate of Fujairah follows the Shafi'i school. Structure The federal General Authority of Islamic Affairs and Endowments (Awqaf) oversee the administration of Sunni mosques, except in Dubai, where they are administered by the Dubai’s Islamic Affairs and Charitable Activities Department (IACAD). The Awqaf distributes weekly guidance to Sunni imams regarding the themes and content of khutbah with a published script every week. The khutbas get posted on the Awqaf website. The Awqaf applies a three-tier system in which junior imams follows the Awqaf khutbah script closely; midlevel imams prepare khutbas according to the topic or subject matter selected by Awqaf authorities; and senior imams have the flexibility to choose their own subject for their khutbas. Some Shia religious leaders in Shia majority mosques choose to follow Awqaf-approved weekly addresses, while others write their own khutbah. The government funds and supports Sunni mosques, with the exception of those considered private, and employs all Sunni imams as employees. Islamic studies are mandatory for all students in public schools and for Muslim students in private schools. The government-funded public schools do not provide instruction in any religion other than Islam. In private schools, non-Muslim students are not required to attend Islamic study classes. As an alternative, private schools are available for non-Muslims. Christian-affiliated schools are authorized to provide instruction tailored to the religious background of the student such as Christian instruction for Christian students, and ethics or other religions. The Awqaf operates official toll-free call centers and text messaging service for fatwas. The fatwas in the United Arab Emirates are available in three languages (Arabic, English, and Urdu). Fatwas are given based on the questions asked and include
https://en.wikipedia.org/wiki/Initial%20condition
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time. In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons. Linear system Discrete time A linear matrix difference equation of the homogeneous (having no constant term) form has closed form solution predicated on the vector of initial conditions on the individual variables that are stacked into the vector; is called the vector of initial conditions or simply the initial condition, and contains nk pieces of information, n being the dimension of the vector X and k = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable X; that behavior is stable or unstable based on the eigenvalues of the matrix A but not based on the initial conditions. Alternatively, a dynamic process in a single variable x having multiple time lags is Here the dimension is n = 1 and the order is k, so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is nk = k. Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation to obtain the latter's k solutions, which are the characteristic values for use in the solution equation Here the constants are found by solving a system of k different equations based on this equation, each using one of k different values of t for which the specific initial condition Is known. Continuous time A differential equation system of the first order with n variables stacked in a vector X is Its behavior through time can be traced with a closed form solution conditional on an initial condition vector . The number of required initial pieces of infor
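A numerical sketch of the discrete-time case (the 2×2 matrix and the initial condition below are arbitrary choices of this illustration): iterating x_{t+1} = A x_t from x_0 matches the closed form x_t = A^t x_0, and the long-run behaviour is governed by the eigenvalues of A rather than by x_0.

import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.7]])       # example dynamics with spectral radius < 1
x0 = np.array([3.0, -1.0])       # the initial condition: n*k = 2*1 = 2 numbers

# Iterate forward one period at a time from the initial condition.
x = x0.copy()
for _ in range(50):
    x = A @ x

# The closed form x_t = A^t x_0 gives the same trajectory.
assert np.allclose(x, np.linalg.matrix_power(A, 50) @ x0)

# Stability is decided by the eigenvalues of A, not by x0: here all |lambda| < 1,
# so every initial condition decays toward the origin.
print(np.abs(np.linalg.eigvals(A)), x)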
https://en.wikipedia.org/wiki/Linear%20system%20of%20divisors
In algebraic geometry, a linear system of divisors is an algebraic generalization of the geometric notion of a family of curves; the dimension of the linear system corresponds to the number of parameters of the family. These arose first in the form of a linear system of algebraic curves in the projective plane. It assumed a more general form, through gradual generalisation, so that one could speak of linear equivalence of divisors D on a general scheme or even a ringed space (X, OX). Linear system of dimension 1, 2, or 3 are called a pencil, a net, or a web, respectively. A map determined by a linear system is sometimes called the Kodaira map. Definitions Given a general variety , two divisors are linearly equivalent if for some non-zero rational function on , or in other words a non-zero element of the function field . Here denotes the divisor of zeroes and poles of the function . Note that if has singular points, the notion of 'divisor' is inherently ambiguous (Cartier divisors, Weil divisors: see divisor (algebraic geometry)). The definition in that case is usually said with greater care (using invertible sheaves or holomorphic line bundles); see below. A complete linear system on is defined as the set of all effective divisors linearly equivalent to some given divisor . It is denoted . Let be the line bundle associated to . In the case that is a nonsingular projective variety, the set is in natural bijection with by associating the element of to the set of non-zero multiples of (this is well defined since two non-zero rational functions have the same divisor if and only if they are non-zero multiples of each other). A complete linear system is therefore a projective space. A linear system is then a projective subspace of a complete linear system, so it corresponds to a vector subspace W of The dimension of the linear system is its dimension as a projective space. Hence . Linear systems can also be introduced by means of the line bundle or invertible sheaf language. In those terms, divisors (Cartier divisors, to be precise) correspond to line bundles, and linear equivalence of two divisors means that the corresponding line bundles are isomorphic. Examples Linear equivalence Consider the line bundle on whose sections define quadric surfaces. For the associated divisor , it is linearly equivalent to any other divisor defined by the vanishing locus of some using the rational function (Proposition 7.2). For example, the divisor associated to the vanishing locus of is linearly equivalent to the divisor associated to the vanishing locus of . Then, there is the equivalence of divisors Linear systems on curves One of the important complete linear systems on an algebraic curve of genus is given by the complete linear system associated with the canonical divisor , denoted . This definition follows from proposition II.7.7 of Hartshorne since every effective divisor in the linear system comes from the zeros of s
https://en.wikipedia.org/wiki/Ample%20line%20bundle
In mathematics, a distinctive feature of algebraic geometry is that some line bundles on a projective variety can be considered "positive", while others are "negative" (or a mixture of the two). The most important notion of positivity is that of an ample line bundle, although there are several related classes of line bundles. Roughly speaking, positivity properties of a line bundle are related to having many global sections. Understanding the ample line bundles on a given variety X amounts to understanding the different ways of mapping X into projective space. In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of an ample divisor. In more detail, a line bundle is called basepoint-free if it has enough sections to give a morphism to projective space. A line bundle is semi-ample if some positive power of it is basepoint-free; semi-ampleness is a kind of "nonnegativity". More strongly, a line bundle on a complete variety X is very ample if it has enough sections to give a closed immersion (or "embedding") of X into projective space. A line bundle is ample if some positive power is very ample. An ample line bundle on a projective variety X has positive degree on every curve in X. The converse is not quite true, but there are corrected versions of the converse, the Nakai–Moishezon and Kleiman criteria for ampleness. Introduction Pullback of a line bundle and hyperplane divisors Given a morphism of schemes, a vector bundle E on Y (or more generally a coherent sheaf on Y) has a pullback to X, (see Sheaf of modules#Operations). The pullback of a vector bundle is a vector bundle of the same rank. In particular, the pullback of a line bundle is a line bundle. (Briefly, the fiber of at a point x in X is the fiber of E at f(x).) The notions described in this article are related to this construction in the case of a morphism to projective space with E = O(1) the line bundle on projective space whose global sections are the homogeneous polynomials of degree 1 (that is, linear functions) in variables . The line bundle O(1) can also be described as the line bundle associated to a hyperplane in (because the zero set of a section of O(1) is a hyperplane). If f is a closed immersion, for example, it follows that the pullback is the line bundle on X associated to a hyperplane section (the intersection of X with a hyperplane in ). Basepoint-free line bundles Let X be a scheme over a field k (for example, an algebraic variety) with a line bundle L. (A line bundle may also be called an invertible sheaf.) Let be elements of the k-vector space of global sections of L. The zero set of each section is a closed subset of X; let U be the open subset of points at which at least one of is not zero. Then these sections define a morphism In more detail: for each point x of U, the fiber of L over x is a 1-dimensional vector space over the residue field k(x). Choosing a basis for th
https://en.wikipedia.org/wiki/Algebroid
In mathematics, algebroid may refer to several distinct notions, which nevertheless all arise from generalising certain aspects of the theory of algebras or Lie algebras. Algebroid branch, a formal power series branch of an algebraic curve Algebroid cohomology Algebroid multifunction Courant algebroid, an object generalising Lie bialgebroids Lie algebroid, the infinitesimal counterpart of Lie groupoids Atiyah algebroid, a fundamental example of a Lie algebroid associated to a principal bundle R-algebroid, a categorical construction associated to groupoids Mathematics disambiguation pages
https://en.wikipedia.org/wiki/Block%20design
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as blocks, chosen such that frequency of the elements satisfies certain conditions making the collection of blocks exhibit symmetry (balance). Block designs have applications in many areas, including experimental design, finite geometry, physical chemistry, software testing, cryptography, and algebraic geometry. Without further specifications the term block design usually refers to a balanced incomplete block design (BIBD), specifically (and also synonymously) a 2-design, which has been the most intensely studied type historically due to its application in the design of experiments. Its generalization is known as a t-design. Overview A design is said to be balanced (up to t) if all t-subsets of the original set occur in equally many (i.e., λ) blocks. When t is unspecified, it can usually be assumed to be 2, which means that each pair of elements is found in the same number of blocks and the design is pairwise balanced. For t=1, each element occurs in the same number of blocks (the replication number, denoted r) and the design is said to be regular. Any design balanced up to t is also balanced in all lower values of t (though with different λ-values), so for example a pairwise balanced (t=2) design is also regular (t=1). When the balancing requirement fails, a design may still be partially balanced if the t-subsets can be divided into n classes, each with its own (different) λ-value. For t=2 these are known as PBIBD(n) designs, whose classes form an association scheme. Designs are usually said (or assumed) to be incomplete, meaning that the collection of blocks is not all possible k-subsets, thus ruling out a trivial design. A block design in which all the blocks have the same size (usually denoted k) is called uniform or proper. The designs discussed in this article are all uniform. Block designs that are not necessarily uniform have also been studied; for t=2 they are known in the literature under the general name pairwise balanced designs (PBDs). Block designs may or may not have repeated blocks. Designs without repeated blocks are called simple, in which case the "family" of blocks is a set rather than a multiset. In statistics, the concept of a block design may be extended to non-binary block designs, in which blocks may contain multiple copies of an element (see blocking (statistics)). There, a design in which each element occurs the same total number of times is called equireplicate, which implies a regular design only when the design is also binary. The incidence matrix of a non-binary design lists the number of times each element is repeated in each block. Regular uniform designs (configurations) The simplest type of "balanced" design (t=1) is known as a tactical configuration or 1-design. The corresponding incidence structure in geometry is known simply as a configuration, see Configuration (geometry
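The balance conditions are straightforward to verify by machine. The Python sketch below uses the standard cyclic construction of the Fano plane (a choice of this illustration, not an example given in the text) and checks that it is a 2-(7,3,1) design: uniform with k = 3, regular with r = 3, and pairwise balanced with λ = 1.

from itertools import combinations
from collections import Counter

# Fano plane: points 0..6, blocks generated by the difference set {0, 1, 3} mod 7.
blocks = [frozenset(((0 + i) % 7, (1 + i) % 7, (3 + i) % 7)) for i in range(7)]

# Uniformity: every block has size k = 3.
assert all(len(b) == 3 for b in blocks)

# t = 1 balance (regularity): each point occurs in the same number r of blocks.
point_counts = Counter(p for b in blocks for p in b)
assert set(point_counts.values()) == {3}          # r = 3

# t = 2 balance: each pair of points occurs in exactly lambda = 1 block.
pair_counts = Counter(pair for b in blocks for pair in combinations(sorted(b), 2))
assert set(pair_counts.values()) == {1} and len(pair_counts) == 21

print("2-(7,3,1) design verified:", len(blocks), "blocks")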
https://en.wikipedia.org/wiki/Path%20%28topology%29
In mathematics, a path in a topological space is a continuous function from the closed unit interval into Paths play an important role in the fields of topology and mathematical analysis. For example, a topological space for which there exists a path connecting any two points is said to be path-connected. Any space may be broken up into path-connected components. The set of path-connected components of a space is often denoted One can also define paths and loops in pointed spaces, which are important in homotopy theory. If is a topological space with basepoint then a path in is one whose initial point is . Likewise, a loop in is one that is based at . Definition A curve in a topological space is a continuous function from a non-empty and non-degenerate interval A in is a curve whose domain is a compact non-degenerate interval (meaning are real numbers), where is called the of the path and is called its . A is a path whose initial point is and whose terminal point is Every non-degenerate compact interval is homeomorphic to which is why a is sometimes, especially in homotopy theory, defined to be a continuous function from the closed unit interval into An or 0 in is a path in that is also a topological embedding. Importantly, a path is not just a subset of that "looks like" a curve, it also includes a parameterization. For example, the maps and represent two different paths from 0 to 1 on the real line. A loop in a space based at is a path from to A loop may be equally well regarded as a map with or as a continuous map from the unit circle to This is because is the quotient space of when is identified with The set of all loops in forms a space called the loop space of Homotopy of paths Paths and loops are central subjects of study in the branch of algebraic topology called homotopy theory. A homotopy of paths makes precise the notion of continuously deforming a path while keeping its endpoints fixed. Specifically, a homotopy of paths, or path-homotopy, in is a family of paths indexed by such that and are fixed. the map given by is continuous. The paths and connected by a homotopy are said to be homotopic (or more precisely path-homotopic, to distinguish between the relation defined on all continuous functions between fixed spaces). One can likewise define a homotopy of loops keeping the base point fixed. The relation of being homotopic is an equivalence relation on paths in a topological space. The equivalence class of a path under this relation is called the homotopy class of often denoted Path composition One can compose paths in a topological space in the following manner. Suppose is a path from to and is a path from to . The path is defined as the path obtained by first traversing and then traversing : Clearly path composition is only defined when the terminal point of coincides with the initial point of If one considers all loops based at a point
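Path composition can be written out directly as a reparametrization. The Python sketch below (two particular paths in the plane, chosen only for illustration) concatenates a path f and a path g with f(1) = g(0) by traversing f on [0, 1/2] and g on [1/2, 1].

import numpy as np

def f(t):   # straight segment from (0, 0) to (1, 0)
    return np.array([t, 0.0])

def g(t):   # quarter circle from (1, 0) to (0, 1)
    return np.array([np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)])

def compose(f, g):
    """Concatenation f*g: traverse f on [0, 1/2], then g on [1/2, 1]."""
    assert np.allclose(f(1.0), g(0.0)), "terminal point of f must equal initial point of g"
    def h(t):
        return f(2 * t) if t <= 0.5 else g(2 * t - 1)
    return h

h = compose(f, g)
assert np.allclose(h(0.0), f(0.0))   # initial point of the composite is that of f
assert np.allclose(h(1.0), g(1.0))   # terminal point is that of g
assert np.allclose(h(0.5), f(1.0))   # the two halves agree at the seam
print(h(0.25), h(0.75))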
https://en.wikipedia.org/wiki/Loop%20%28topology%29
In mathematics, a loop in a topological space X is a continuous function f from the unit interval I = [0, 1] to X such that f(0) = f(1). In other words, it is a path whose initial point is equal to its terminal point. A loop may also be seen as a continuous map from the pointed unit circle S¹ into X, because S¹ may be regarded as a quotient of I under the identification of 0 with 1. The set of all loops in X forms a space called the loop space of X. See also Free loop Loop group Loop space Loop algebra Fundamental group Quasigroup References Topology
https://en.wikipedia.org/wiki/Proper%20map
In mathematics, a function between topological spaces is called proper if inverse images of compact subsets are compact. In algebraic geometry, the analogous concept is called a proper morphism. Definition There are several competing definitions of a "proper function". Some authors call a function between two topological spaces if the preimage of every compact set in is compact in Other authors call a map if it is continuous and ; that is if it is a continuous closed map and the preimage of every point in is compact. The two definitions are equivalent if is locally compact and Hausdorff. Let be a closed map, such that is compact (in ) for all Let be a compact subset of It remains to show that is compact. Let be an open cover of Then for all this is also an open cover of Since the latter is assumed to be compact, it has a finite subcover. In other words, for every there exists a finite subset such that The set is closed in and its image under is closed in because is a closed map. Hence the set is open in It follows that contains the point Now and because is assumed to be compact, there are finitely many points such that Furthermore, the set is a finite union of finite sets, which makes a finite set. Now it follows that and we have found a finite subcover of which completes the proof. If is Hausdorff and is locally compact Hausdorff then proper is equivalent to . A map is universally closed if for any topological space the map is closed. In the case that is Hausdorff, this is equivalent to requiring that for any map the pullback be closed, as follows from the fact that is a closed subspace of An equivalent, possibly more intuitive definition when and are metric spaces is as follows: we say an infinite sequence of points in a topological space if, for every compact set only finitely many points are in Then a continuous map is proper if and only if for every sequence of points that escapes to infinity in the sequence escapes to infinity in Properties Every continuous map from a compact space to a Hausdorff space is both proper and closed. Every surjective proper map is a compact covering map. A map is called a if for every compact subset there exists some compact subset such that A topological space is compact if and only if the map from that space to a single point is proper. If is a proper continuous map and is a compactly generated Hausdorff space (this includes Hausdorff spaces that are either first-countable or locally compact), then is closed. Generalization It is possible to generalize the notion of proper maps of topological spaces to locales and topoi, see . See also Citations References , esp. section C3.2 "Proper maps" , esp. p. 90 "Proper maps" and the Exercises to Section 3.6. Theory of continuous functions
https://en.wikipedia.org/wiki/Pl%C3%BCcker%20embedding
In mathematics, the Plücker map embeds the Grassmannian , whose elements are k-dimensional subspaces of an n-dimensional vector space V, either real or complex, in a projective space, thereby realizing it as a projective algebraic variety. More precisely, the Plücker map embeds into the projectivization of the -th exterior power of . The image is algebraic, consisting of the intersection of a number of quadrics defined by the (see below). The Plücker embedding was first defined by Julius Plücker in the case as a way of describing the lines in three-dimensional space (which, as projective lines in real projective space, correspond to two-dimensional subspaces of a four-dimensional vector space). The image of that embedding is the Klein quadric in RP5. Hermann Grassmann generalized Plücker's embedding to arbitrary k and n. The homogeneous coordinates of the image of the Grassmannian under the Plücker embedding, relative to the basis in the exterior space corresponding to the natural basis in (where is the base field) are called Plücker coordinates. Definition Denoting by the -dimensional vector space over the field , and by the Grassmannian of -dimensional subspaces of , the Plücker embedding is the map ι defined by where is a basis for the element and is the projective equivalence class of the element of the th exterior power of . This is an embedding of the Grassmannian into the projectivization . The image can be completely characterized as the intersection of a number of quadrics, the Plücker quadrics (see below), which are expressed by homogeneous quadratic relations on the Plücker coordinates (see below) that derive from linear algebra. The bracket ring appears as the ring of polynomial functions on . Plücker relations The image under the Plücker embedding satisfies a simple set of homogeneous quadratic relations, usually called the Plücker relations, or Grassmann–Plücker relations, defining the intersection of a number of quadrics in . This shows that the Grassmannian embeds as an algebraic subvariety of and gives another method of constructing the Grassmannian. To state the Grassmann–Plücker relations, let be the -dimensional subspace spanned by the basis represented by column vectors . Let be the matrix of homogeneous coordinates, whose columns are . Then the equivalence class of all such homogeneous coordinates matrices related to each other by right multiplication by an invertible matrix may be identified with the element . For any ordered sequence of integers, let be the determinant of the matrix whose rows are the rows of . Then, up to projectivization, are the Plücker coordinates of the element whose homogeneous coordinates are . They are the linear coordinates of the image of under the Plücker map, relative to the standard basis in the exterior space . Changing the basis defining the homogeneous coordinate matrix just changes the Plücker coordinates by a nonzero scaling factor e
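For Gr(2,4) everything can be made explicit: the Plücker coordinates of a 2-plane spanned by the columns of a 4×2 matrix are its six 2×2 minors, and they satisfy the single Grassmann–Plücker relation p01·p23 − p02·p13 + p03·p12 = 0. A numpy sketch (the matrix below is an arbitrary example) that also checks that a change of basis of the subspace rescales all coordinates by a common factor:

import numpy as np
from itertools import combinations

# A 2-dimensional subspace of R^4, spanned by the columns of M.
M = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0],
              [1.0, 4.0]])

# Plücker coordinates: the 2x2 minors p_ij = det(rows i and j of M), i < j.
p = {(i, j): np.linalg.det(M[[i, j], :]) for i, j in combinations(range(4), 2)}

# The single Grassmann-Plücker relation cutting out Gr(2,4) inside P^5.
relation = (p[(0, 1)] * p[(2, 3)]
            - p[(0, 2)] * p[(1, 3)]
            + p[(0, 3)] * p[(1, 2)])
assert abs(relation) < 1e-9
print(p)

# Changing the basis of the subspace (right multiplication by an invertible 2x2
# matrix) rescales all Plücker coordinates by the same factor, its determinant,
# so the point in projective space is unchanged.
G = np.array([[2.0, 1.0], [1.0, 1.0]])
q = {(i, j): np.linalg.det((M @ G)[[i, j], :]) for i, j in combinations(range(4), 2)}
assert all(np.isclose(q[k], np.linalg.det(G) * p[k]) for k in p)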
https://en.wikipedia.org/wiki/Splitting%20of%20prime%20ideals%20in%20Galois%20extensions
In mathematics, the interplay between the Galois group G of a Galois extension L of a number field K, and the way the prime ideals P of the ring of integers OK factorise as products of prime ideals of OL, provides one of the richest parts of algebraic number theory. The splitting of prime ideals in Galois extensions is sometimes attributed to David Hilbert by calling it Hilbert theory. There is a geometric analogue, for ramified coverings of Riemann surfaces, which is simpler in that only one kind of subgroup of G need be considered, rather than two. This was certainly familiar before Hilbert. Definitions Let L/K be a finite extension of number fields, and let OK and OL be the corresponding ring of integers of K and L, respectively, which are defined to be the integral closure of the integers Z in the field in question. Finally, let p be a non-zero prime ideal in OK, or equivalently, a maximal ideal, so that the residue OK/p is a field. From the basic theory of one-dimensional rings follows the existence of a unique decomposition of the ideal pOL generated in OL by p into a product of distinct maximal ideals Pj, with multiplicities ej. The field F = OK/p naturally embeds into Fj = OL/Pj for every j, the degree fj = [OL/Pj : OK/p] of this residue field extension is called inertia degree of Pj over p. The multiplicity ej is called ramification index of Pj over p. If it is bigger than 1 for some j, the field extension L/K is called ramified at p (or we say that p ramifies in L, or that it is ramified in L). Otherwise, L/K is called unramified at p. If this is the case then by the Chinese remainder theorem the quotient OL/pOL is a product of fields Fj. The extension L/K is ramified in exactly those primes that divide the relative discriminant, hence the extension is unramified in all but finitely many prime ideals. Multiplicativity of ideal norm implies If fj = ej = 1 for every j (and thus g = [L : K]), we say that p splits completely in L. If g = 1 and f1 = 1 (and so e1 = [L : K]), we say that p ramifies completely in L. Finally, if g = 1 and e1 = 1 (and so f1 = [L : K]), we say that p is inert in L. The Galois situation In the following, the extension L/K is assumed to be a Galois extension. Then the prime avoidance lemma can be used to show the Galois group acts transitively on the Pj. That is, the prime ideal factors of p in L form a single orbit under the automorphisms of L over K. From this and the unique factorisation theorem, it follows that f = fj and e = ej are independent of j; something that certainly need not be the case for extensions that are not Galois. The basic relations then read . and The relation above shows that [L : K]/ef equals the number g of prime factors of p in OL. By the orbit-stabilizer formula this number is also equal to |G|/|DPj| for every j, where DPj, the decomposition group of Pj, is the subgroup of elements of G sending a given Pj to itself. Since the degree of L/K and the order of G are equal
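The simplest Galois example is L = Q(i), K = Q, with Galois group of order 2 and ring of integers Z[i]; there efg = 2, and a rational prime p splits (e = f = 1, g = 2) when p ≡ 1 (mod 4), is inert (f = 2) when p ≡ 3 (mod 4), and ramifies (e = 2) only at p = 2. The Python sketch below (an illustration of this special case, not drawn from the article) classifies small primes by searching for a representation p = a² + b² = (a + bi)(a − bi).

def split_type(p: int) -> str:
    """How the rational prime p behaves in Z[i], the ring of integers of Q(i)."""
    if p == 2:
        return "ramified: 2 = -i*(1 + i)^2"
    # Search for p = a^2 + b^2; such a representation exists exactly when p % 4 == 1.
    for a in range(1, int(p ** 0.5) + 1):
        b = round((p - a * a) ** 0.5)
        if b > 0 and a * a + b * b == p:
            return f"split: {p} = ({a} + {b}i)({a} - {b}i)"
    return "inert: (p) stays prime, residue degree f = 2"

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 37, 41, 43]:
    result = split_type(p)
    expected = "split" if p % 4 == 1 else "inert"   # the congruence criterion above
    assert result.startswith(expected)
    print(p, "->", result)
print(2, "->", split_type(2))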
https://en.wikipedia.org/wiki/Manifold%20%28disambiguation%29
A manifold is an abstract mathematical space which, in a close-up view, resembles the spaces described by Euclidean geometry. Manifold may also refer to: Arts and music Manifold (comics), a fictional character in Marvel Comics publications Manifold Records, a record label Manifold Trilogy, by science fiction author Stephen Baxter Engineering Manifold (fluid mechanics), a machine element used to split or combine a gas or liquid Hydraulic manifold, a component used to regulate fluid flow in a hydraulic system, thus controlling the transfer of power between actuators and pumps Manifold (scuba), in a scuba set, connects two or more diving cylinders Vacuum gas manifold, a piece of apparatus used in chemistry to manipulate gases Exhaust manifold, an engine part which collects the exhaust gases from multiple cylinders into one pipe Inlet manifold or "intake manifold", an engine part which supplies the air or fuel/air mixture to the cylinders Mathematics Manifold, an abstract mathematical space which, in a close-up view, resembles the spaces described by Euclidean geometry. Manifold (magazine), a former magazine of the University of Warwick mathematical community Topological manifold, a topological space which is a locally Euclidean Hausdorff space Almost complex manifold Algebraic manifold Analytic manifold Calabi–Yau manifold Complex manifold, a manifold over the complex numbers Differentiable manifold Einstein manifold Flag manifold Flat manifold G2 manifold Hermitian manifold Iwasawa manifold Orientable manifold People John Manifold (1915–1985), Australian poet and critic Sir Walter Manifold (1849–1928), Australian grazier and politician Places Manifold, Pennsylvania, US River Manifold, a river in Staffordshire, West Midlands, England Manifold Way, a footpath and cycleway following the route of a former railway Science Manifold (fluid mechanics), a machine element used to split or combine a gas or liquid various applications such as automotive exhaust manifolds: see Engineering section Vacuum manifold, in quantum field theory Software Manifold System, a geographic information system software package ManifoldCF, Apache Software Foundation open-source project for transferring content between repositories or search indexes Manifold (prediction market), a reputation-based prediction market
https://en.wikipedia.org/wiki/Pairing
In mathematics, a pairing is an R-bilinear map from the Cartesian product of two R-modules, where the underlying ring R is commutative. Definition Let R be a commutative ring with unit, and let M, N and L be R-modules. A pairing is any R-bilinear map . That is, it satisfies , and for any and any and any . Equivalently, a pairing is an R-linear map where denotes the tensor product of M and N. A pairing can also be considered as an R-linear map , which matches the first definition by setting . A pairing is called perfect if the above map is an isomorphism of R-modules. A pairing is called non-degenerate on the right if for the above map we have that for all implies ; similarly, is called non-degenerate on the left if for all implies . A pairing is called alternating if and for all m. In particular, this implies , while bilinearity shows . Thus, for an alternating pairing, . Examples Any scalar product on a real vector space V is a pairing (set , in the above definitions). The determinant map (2 × 2 matrices over k) → k can be seen as a pairing . The Hopf map written as is an example of a pairing. For instance, Hardie et al. present an explicit construction of the map using poset models. Pairings in cryptography In cryptography, often the following specialized definition is used: Let be additive groups and a multiplicative group, all of prime order . Let be generators of and respectively. A pairing is a map: for which the following holds: Bilinearity: Non-degeneracy: For practical purposes, has to be computable in an efficient manner Note that it is also common in cryptographic literature for all groups to be written in multiplicative notation. In cases when , the pairing is called symmetric. As is cyclic, the map will be commutative; that is, for any , we have . This is because for a generator , there exist integers , such that and . Therefore . The Weil pairing is an important concept in elliptic curve cryptography; e.g., it may be used to attack certain elliptic curves (see MOV attack). It and other pairings have been used to develop identity-based encryption schemes. Slightly different usages of the notion of pairing Scalar products on complex vector spaces are sometimes called pairings, although they are not bilinear. For example, in representation theory, one has a scalar product on the characters of complex representations of a finite group which is frequently called character pairing. See also Dual system Yoneda product References External links The Pairing-Based Crypto Library Linear algebra Module theory Pairing-based cryptography
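The determinant example mentioned above is easy to check numerically. A minimal sketch (the random integer test vectors are arbitrary): the map λ(u, v) = det[u v] on k^2 × k^2 is bilinear and alternating, and therefore antisymmetric:

```python
import random

def lam(u, v):
    """The determinant pairing on k^2 x k^2 (here with integer entries)."""
    return u[0] * v[1] - u[1] * v[0]

random.seed(0)
vec = lambda: [random.randint(-5, 5) for _ in range(2)]
u, v, w = vec(), vec(), vec()
r = 3

# Bilinearity in the first argument (the second argument is symmetric):
print(lam([r * u[0] + w[0], r * u[1] + w[1]], v) == r * lam(u, v) + lam(w, v))  # True
# Alternating, hence antisymmetric:
print(lam(u, u) == 0, lam(u, v) == -lam(v, u))                                   # True True
```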
https://en.wikipedia.org/wiki/Green%27s%20relations
In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility. Instead of working directly with a semigroup S, it is convenient to define Green's relations over the monoid S1. (S1 is "S with an identity adjoined if necessary"; if S is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by some semigroup element do indeed contain that element. For an element a of S, the relevant ideals are: The principal left ideal generated by a: . This is the same as , which is . The principal right ideal generated by a: , or equivalently . The principal two-sided ideal generated by a: , or . The L, R, and J relations For elements a and b of S, Green's relations L, R and J are defined by a L b if and only if S1 a = S1 b. a R b if and only if a S1 = b S1. a J b if and only if S1 a S1 = S1 b S1. That is, a and b are L-related if they generate the same left ideal; R-related if they generate the same right ideal; and J-related if they generate the same two-sided ideal. These are equivalence relations on S, so each of them yields a partition of S into equivalence classes. The L-class of a is denoted La (and similarly for the other relations). The L-classes and R-classes can be equivalently understood as the strongly connected components of the left and right Cayley graphs of S1. Further, the L, R, and J relations define three preorders ≤L, ≤R, and ≤J, where a ≤J b holds for two elements a and b of S if the ideal generated by a is included in that of b, i.e., S1 a S1 ⊆ S1 b S1, and ≤L and ≤R are defined analogously. Green used the lowercase blackletter , and for these relations, and wrote for a L b (and likewise for R and J). Mathematicians today tend to use script letters such as instead, and replace Green's modular arithmetic-style notation with the infix style used here. Ordinary letters are used for the equivalence classes. The L and R relations are left-right dual to one another; theorems concerning one can be translated into similar statements about the other. For example, L is right-compatible: if a L b and c is another element of S, then ac L bc. Dually, R is left-compatible: if a R b, then ca R cb. If S is commutative, then L, R and J coincide. The H and D relations The remaining relations are derived from L and R. Their intersection is H: a H b if and only if a L b and
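For a concrete feel for these definitions, the classes can be computed by brute force for a small finite semigroup. The sketch below (illustrative only, not from the article) does this for the full transformation monoid T3 of all maps {0,1,2} → {0,1,2}; one of the L/R relations groups maps by kernel (5 classes) and the other by image (7 classes), with which is which depending on the composition convention, while J groups maps by rank (3 classes):

```python
from itertools import product

# Brute-force Green's classes for the full transformation monoid T_3.
N = 3
S = list(product(range(N), repeat=N))        # a map f is stored as the tuple of its values

def mult(f, g):                              # product f*g := f o g, i.e. (f*g)(i) = f[g[i]]
    return tuple(f[g[i]] for i in range(N))

S1 = S                                       # T_3 already contains the identity map

left   = lambda a: frozenset(mult(s, a) for s in S1)                       # S^1 a
right  = lambda a: frozenset(mult(a, s) for s in S1)                       # a S^1
twosid = lambda a: frozenset(mult(mult(s, a), t) for s in S1 for t in S1)  # S^1 a S^1

def classes(ideal):
    buckets = {}
    for a in S:
        buckets.setdefault(ideal(a), []).append(a)
    return buckets

# With this composition convention the counts come out as kernels, images, ranks:
print(len(classes(left)), len(classes(right)), len(classes(twosid)))   # 5 7 3
```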
https://en.wikipedia.org/wiki/Whitehead%20manifold
In mathematics, the Whitehead manifold is an open 3-manifold that is contractible, but not homeomorphic to R3. J. H. C. Whitehead discovered this puzzling object while he was trying to prove the Poincaré conjecture, correcting an error in an earlier paper in which he had incorrectly claimed that no such manifold exists. A contractible manifold is one that can continuously be shrunk to a point inside the manifold itself. For example, an open ball is a contractible manifold. All manifolds homeomorphic to the ball are contractible, too. One can ask whether all contractible manifolds are homeomorphic to a ball. For dimensions 1 and 2, the answer is classical and it is "yes". In dimension 2, it follows, for example, from the Riemann mapping theorem. Dimension 3 presents the first counterexample: the Whitehead manifold. Construction Take a copy of S3, the three-dimensional sphere. Now find a compact unknotted solid torus T1 inside the sphere. (A solid torus is an ordinary three-dimensional doughnut, that is, a filled-in torus, which is topologically a circle times a disk.) The closed complement of the solid torus T1 inside S3 is another solid torus. Now take a second solid torus T2 inside T1 so that T2 and a tubular neighborhood of the meridian curve of T1 is a thickened Whitehead link. Note that T2 is null-homotopic in the complement of the meridian of T1. This can be seen by considering S3 as R3 together with a point at infinity, and the meridian curve as the z-axis together with that point. The torus T2 has zero winding number around the z-axis. Thus the necessary null-homotopy follows. Since the Whitehead link is symmetric, that is, a homeomorphism of the 3-sphere switches components, it is also true that the meridian of T1 is also null-homotopic in the complement of T2. Now embed T3 inside T2 in the same way as T2 lies inside T1, and so on; to infinity. Define W, the Whitehead continuum, to be the limit of this nested sequence, or more precisely the intersection of all the Tk for k = 1, 2, 3, .... The Whitehead manifold is defined as X = S3 \ W, which is a non-compact manifold without boundary. It follows from our previous observation, the Hurewicz theorem, and Whitehead's theorem on homotopy equivalence, that X is contractible. In fact, a closer analysis involving a result of Morton Brown shows that X × R is homeomorphic to R4. However, X is not homeomorphic to R3. The reason is that it is not simply connected at infinity. The one point compactification of X is the space S3/W (with W crunched to a point). It is not a manifold. However, (R3/W) × R is homeomorphic to R4. David Gabai showed that X is the union of two copies of R3 whose intersection is also homeomorphic to R3. Related spaces More examples of open, contractible 3-manifolds may be constructed by proceeding in similar fashion and picking different embeddings of Ti+1 in Ti in the iterative process. Each embedding should be an unknotted solid torus in the 3-sphere. The essential properties are that the meridian of Ti should be null-homotopic in the complement of Ti+1, and in addition the longitude of Ti+1 should not be null-homotopic in Ti minus Ti+1. Another variation is to pick several subtori at each stage instead of just one.
https://en.wikipedia.org/wiki/Skellam%20distribution
The Skellam distribution is the discrete probability distribution of the difference of two statistically independent random variables and each Poisson-distributed with respective expected values and . It is useful in describing the statistics of the difference of two images with simple photon noise, as well as describing the point spread distribution in sports where all scored points are equal, such as baseball, hockey and soccer. The distribution is also applicable to a special case of the difference of dependent Poisson random variables, but just the obvious case where the two variables have a common additive random contribution which is cancelled by the differencing: see Karlis & Ntzoufras (2003) for details and an application. The probability mass function for the Skellam distribution for a difference between two independent Poisson-distributed random variables with means and is given by: where Ik(z) is the modified Bessel function of the first kind. Since k is an integer we have that Ik(z)=I|k|(z). Derivation The probability mass function of a Poisson-distributed random variable with mean μ is given by for (and zero otherwise). The Skellam probability mass function for the difference of two independent counts is the convolution of two Poisson distributions: (Skellam, 1946) Since the Poisson distribution is zero for negative values of the count , the second sum is only taken for those terms where and . It can be shown that the above sum implies that so that: where I k(z) is the modified Bessel function of the first kind. The special case for is given by Irwin (1937): Using the limiting values of the modified Bessel function for small arguments, we can recover the Poisson distribution as a special case of the Skellam distribution for . Properties As it is a discrete probability function, the Skellam probability mass function is normalized: We know that the probability generating function (pgf) for a Poisson distribution is: It follows that the pgf, , for a Skellam probability mass function will be: Notice that the form of the probability-generating function implies that the distribution of the sums or the differences of any number of independent Skellam-distributed variables are again Skellam-distributed. It is sometimes claimed that any linear combination of two Skellam distributed variables are again Skellam-distributed, but this is clearly not true since any multiplier other than would change the support of the distribution and alter the pattern of moments in a way that no Skellam distribution can satisfy. The moment-generating function is given by: which yields the raw moments mk . Define: Then the raw moments mk are The central moments M k are The mean, variance, skewness, and kurtosis excess are respectively: The cumulant-generating function is given by: which yields the cumulants: For the special case when μ1 = μ2, an asymptotic expansion of the modified Bessel function of the first kind yields for l
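The probability mass function described above is straightforward to evaluate numerically. A minimal sketch (assuming NumPy and SciPy are available; scipy.special.iv is the modified Bessel function of the first kind), comparing the formula against a direct simulation of the difference of two independent Poisson variables:

```python
import numpy as np
from scipy.special import iv     # modified Bessel function of the first kind I_v(z)

def skellam_pmf(k, mu1, mu2):
    """P(N1 - N2 = k) for independent Poisson N1 with mean mu1 and N2 with mean mu2."""
    return np.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2.0) * iv(abs(k), 2.0 * np.sqrt(mu1 * mu2))

mu1, mu2 = 3.0, 2.0
rng = np.random.default_rng(0)
diff = rng.poisson(mu1, 200_000) - rng.poisson(mu2, 200_000)
for k in (-2, 0, 3):
    print(k, skellam_pmf(k, mu1, mu2), np.mean(diff == k))   # formula vs. simulation
```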
https://en.wikipedia.org/wiki/Nottingham%20Urban%20Area
The Nottingham Built-up Area (BUA), Nottingham Urban Area, or Greater Nottingham is an area of land defined by the Office for National Statistics as which is built upon, with nearby areas linked if within 200 metres. It consists of the city of Nottingham and the adjoining urban areas of Nottinghamshire and Derbyshire, in the East Midlands of England. It had a total population of 729,977 at the time of the 2011 census. This was an increase of almost 10% since the 2001 census recorded population of 666,358, due to population increases, reductions and several new sub-divisions. Geography Greater Nottingham is largely within the three districts of Rushcliffe, Broxtowe and Gedling surrounding the city, though the area spills into the Nottinghamshire district of Ashfield, and also to the Amber Valley and Erewash districts of Derbyshire. The Nottingham Urban Area is, by the ONS' figures, the 8th largest in England (9th in the UK), with a population size between that of the Tyneside and Sheffield built-up areas, and a total area of . The Nottingham Urban Area is bounded to the west by a narrow gap between Draycott (to the west of the Breaston urban area sub-division) and Borrowash (to the east of the Derby Urban Area). The Heanor/Ripley and West Hallam north-western extensions have a somewhat tenuous linkage through to the core of Nottingham City largely due to ribbon development, and are in close proximity to other nearby urban areas which together, almost link to Derby from the north. Sub-divisions do not always match administrative geographic boundaries; the subdivision of Clifton for example is within the Nottingham Unitary Authority city area but is subdivided by the River Trent. The Nottingham subdivision oversteps the city's borders at several locations. Together, these two subdivisions exceed the official city population (305,680 in 2011) as a result, even though West Bridgford includes the counts of city suburbs Silverdale and Wilford. In the 1991 census, Ilkeston was considered outside of the Nottingham Urban Area, and its addition gave the BUA an 8% increase in 2001. This was due to improvements in mapping methodology by the ONS, and is chiefly responsible for the increase in sub-divisions over censuses rather than any large scale 'bricks and mortar' building, as much of the area between the cities is protected green belt and wedges, restricting actual development. Greater Nottingham Partnership/D2N2 The local authorities collaborate in some ways. The Greater Nottingham Partnership considered Greater Nottingham to consist of the City of Nottingham plus the entirety of the Rushcliffe, Broxtowe and Gedling boroughs, along with Hucknall from Ashfield, but no part of Derbyshire, as no Derbyshire council was a member of the Partnership. They together worked as an advisory and lobbying body for projects and decisions involving the region. However it was axed due to funding in 2011 and the D2N2 Local Enterprise Partnership is instead assuming
https://en.wikipedia.org/wiki/Jacobian%20conjecture
In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an n-dimensional space to itself has Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus. The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (indeed, there is equally also no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century. The Jacobian determinant Let N > 1 be a fixed integer and consider polynomials f1, ..., fN in variables X1, ..., XN with coefficients in a field k. Then we define a vector-valued function F: kN → kN by setting: F(X1, ..., XN) = (f1(X1, ...,XN),..., fN(X1,...,XN)). Any map F: kN → kN arising in this way is called a polynomial mapping. The Jacobian determinant of F, denoted by JF, is defined as the determinant of the N × N Jacobian matrix consisting of the partial derivatives of fi with respect to Xj: then JF is itself a polynomial function of the N variables X1, ..., XN. Formulation of the conjecture It follows from the multivariable chain rule that if F has a polynomial inverse function G: kN → kN, then JF has a polynomial reciprocal, so is a nonzero constant. The Jacobian conjecture is the following partial converse: Jacobian conjecture: Let k have characteristic 0. If JF is a non-zero constant, then F has an inverse function G: kN → kN which is regular, meaning its components are polynomials. According to van den Essen, the problem was first conjectured by Keller in 1939 for the limited case of two variables and integer coefficients. The obvious analogue of the Jacobian conjecture fails if k has characteristic p > 0 even for one variable. The characteristic of a field, if it is not zero, must be prime, so at least 2. The polynomial has derivative which is 1 (because px is 0) but it has no inverse function. However, suggested extending the Jacobian conjecture to characteristic by adding the hypothesis that p does not divide the degree of the field extension . The existence of a polynomial inverse is obvious if F is simply a set of functions linear in the variables, because then the inverse will also be a set of linear functions. A simple non-linear example would be if in which case
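The setup is easy to experiment with symbolically. The toy map below is my own illustration (not the example elided in the article text): F(x, y) = (x, y + x^2) has Jacobian determinant identically 1 and the polynomial inverse G(x, y) = (x, y − x^2), exactly the behaviour the conjecture asks for:

```python
import sympy as sp

x, y = sp.symbols('x y')

F = sp.Matrix([x, y + x**2])            # a polynomial map k^2 -> k^2
print(F.jacobian([x, y]).det())         # 1: a non-zero constant Jacobian determinant

# Its inverse is again polynomial: G(x, y) = (x, y - x^2), and G composed with F
# is the identity map.
G_of_F = sp.Matrix([F[0], F[1] - F[0]**2])
print(sp.simplify(G_of_F.T))            # Matrix([[x, y]])
```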
https://en.wikipedia.org/wiki/Ott-Heinrich%20Keller
Eduard Ott-Heinrich Keller (22 June 1906 in Frankfurt – 5 December 1990 in Halle) was a German mathematician who worked in the fields of geometry, topology and algebraic geometry. He formulated the celebrated problem which is now called the Jacobian conjecture in 1939. He was born in Frankfurt–am-Main, and studied at the universities of Frankfurt, Vienna, Berlin and Göttingen. As a student of Max Dehn he wrote a dissertation on the tiling of space with cubes. This led to another 'Keller conjecture': the Keller cube-tiling conjecture from 1930. Subsequently he worked with Georg Hamel in Berlin, habilitating in 1933 with a thesis on Cremona transformations. The Jacobian conjecture is quite naturally posed in that setting. The motivation for looking at rather general polynomial transformations, say of the projective plane, came from the singularity theory for algebraic curves. During World War II he taught in a naval college in Flensburg. After the war he had several positions, and was appointed a professor at Martin Luther University of Halle-Wittenberg in 1952, as successor of H. W. E. Jung. References Biography (in German) External links 1906 births 1990 deaths 20th-century German mathematicians Geometers Academic staff of the University of Münster
https://en.wikipedia.org/wiki/Poisson%20formula
In mathematics, the Poisson formula, named after Siméon Denis Poisson, may refer to: Poisson distribution in probability Poisson summation formula in Fourier analysis Poisson kernel in complex or harmonic analysis Poisson–Jensen formula in complex analysis
https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor%20theorem
In mathematics, the Heine–Cantor theorem, named after Eduard Heine and Georg Cantor, states that if is a continuous function between two metric spaces and , and is compact, then is uniformly continuous. An important special case is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous. Proof Suppose that and are two metric spaces with metrics and , respectively. Suppose further that a function is continuous and is compact. We want to show that is uniformly continuous, that is, for every positive real number there exists a positive real number such that for all points in the function domain , implies that . Consider some positive real number . By continuity, for any point in the domain , there exists some positive real number such that when , i.e., a fact that is within of implies that is within of . Let be the open -neighborhood of , i.e. the set Since each point is contained in its own , we find that the collection is an open cover of . Since is compact, this cover has a finite subcover where . Each of these open sets has an associated radius . Let us now define , i.e. the minimum radius of these open sets. Since we have a finite number of positive radii, this minimum is well-defined and positive. We now show that this works for the definition of uniform continuity. Suppose that for any two in . Since the sets form an open (sub)cover of our space , we know that must lie within one of them, say . Then we have that . The triangle inequality then implies that implying that and are both at most away from . By definition of , this implies that and are both less than . Applying the triangle inequality then yields the desired For an alternative proof in the case of , a closed interval, see the article Non-standard calculus. See also Cauchy-continuous function External links Theory of continuous functions Metric geometry Theorems in analysis Articles containing proofs
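The role of compactness can be seen numerically. In the sketch below (an illustration, not part of the proof), f(x) = x^2 on the compact interval [0, 1] admits a single delta = eps/2 that works everywhere, whereas on the unbounded domain [0, ∞) no single delta can work, since the same small increment produces ever larger changes in f as x grows:

```python
import numpy as np

f = lambda x: x ** 2
eps = 0.1

# On [0, 1]: delta = eps/2 works uniformly, since |x^2 - y^2| <= 2|x - y| there.
xs = np.linspace(0.0, 1.0, 1001)
delta = eps / 2
close = np.abs(xs[:, None] - xs[None, :]) < delta
print(np.max(np.abs(f(xs)[:, None] - f(xs)[None, :])[close]) < eps)     # True

# On [0, inf): for any candidate delta, two points delta/2 apart can differ by > eps.
delta = 1e-3
x = eps / delta
print(abs(f(x + delta / 2) - f(x)))     # about eps + delta^2/4, which exceeds eps
```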
https://en.wikipedia.org/wiki/P.%20K.%20Srinivasan
P.K. Srinivasan (PKS) (4 November 1924 – 20 June 2005) was a well known mathematics teacher in India. He taught mathematics at the Muthialpet High School in Chennai, India until his retirement. His singular dedication to education of mathematics would bring him to the United States, where he worked for a year, and then to Nigeria, where he would work for six years. He is known in India for his dedication to teaching mathematics and in creating pioneering awareness of the Indian mathematician Ramanujan. He has authored several books in English, Telugu and Tamil that introduce mathematics to children in novel and interesting ways. He was also a prominent reviewer of math books in the weekly Book Review column of the Indian newspaper The Hindu in Chennai. Experience PKS, as he was known to the world-at-large, among his colleagues, students and friends, travelled to the United States as a Fulbright exchange teacher and worked in Liverpool Central School, New York, in 1965-'66. Later he served as a Senior education officer and a Senior lecturer in Mathematics in Nigeria for seven years. He served as a lecturer in the National Council of Educational Research and Training (NCERT) in India. He organized over sixty math expositions and fairs in India, Nigeria and the United States, and participated actively in four International Congress on Mathematical Education (ICME) conferences. He inspired many a creative idea and gave them shape through demonstrative displays by his students. Conduct of Bharat Dharshan, World Exhibition, three-day three-tier Math Expositions to make all his math students walk across the curriculum which was highly talked about in his days and research papers on subjects from English to Social Studies by students studying from 7 to 12 Standards was an everlasting contribution made by him to the students and to The Muthialpet High School, Chennai where he was a teacher par excellence. He could pick the brightest of students and discuss esoteric topics such as Boolean algebra, Ramanujan's theorems and at same time deploy easy-to-understand teaching tools for teaching mathematics and English. As a class teacher, he could reach across to the poorest performers and made them cross average levels in English and mathematics. Although he hailed from a traditional family, he was always clad in a white kurta and dhoti spun out of khadi - rough and homespun cotton cloth that symbolized the Swadeshi concept of Gandhiji. He sported a Gandhian cap as well. His vision and unquenchable thirst for knowledge transcended the narrow barriers of caste, language and religion. Personal and family interests always took a backseat in his mission for spreading knowledge and awareness and imparting a sense of purpose in his students to go beyond the narrow frontiers of a syllabus-oriented formal education to exploration of the unfathomable depths of knowledge. He displayed the same missionary zeal in making classroom education and teaching of English an
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Schmidt
In mathematics, Hilbert–Schmidt may refer to a Hilbert–Schmidt operator; a Hilbert–Schmidt integral operator; the Hilbert–Schmidt theorem.
https://en.wikipedia.org/wiki/American%20Regions%20Mathematics%20League
The American Regions Mathematics League (ARML), is an annual, national high school mathematics team competition held simultaneously at four locations in the United States: the University of Iowa, Penn State, University of Nevada, Reno, and the University of Alabama in Huntsville. Past sites have included San Jose State University, Rutgers University, UNLV, Duke University, and University of Georgia. Teams consist of 15 members, which usually represent a large geographic region (such as a state) or a large population center (such as a major city). Some schools also field teams. The competition is held in June, on the first Saturday after Memorial Day. In 2022, 120 teams competed with about 1800 students. ARML problems cover a wide variety of mathematical topics including algebra, geometry, number theory, combinatorics, probability, and inequalities. Calculus is not required to successfully complete any problem, but it may facilitate solving the problem more quickly or efficiently. While part of the competition is short-answer based, there is a cooperative team round, and a proof-based power question (also completed as a team). ARML problems are harder than most high school mathematics competitions. The contest is sponsored by D. E. Shaw & Co. Contest supporters are the American Mathematical Society, Mu Alpha Theta (the National Mathematics Honor Society for High School and Two-Year College students), Star League, Penguin Books, and Princeton University Press. Competition format The competition consists of four formal events: A team round, where the entire team has 20 minutes to solve 10 problems. Each problem is worth 5 points, for a possible total of 50 points A power question, where the entire team has one hour to solve a multiple-part (usually ten) question requiring explanations and proofs. This is usually an unusual, unique, or invented topic so students are forced to deal with complex new mathematical ideas. Each problem is weighted for a possible 50 points. An individual round, where each team member answers five groups of two questions each, with ten minutes per pair. Starting in 2009, the individual round expanded from eight questions to ten. Each problem is worth 1 point, for a grand total of 150 points possible for the team. Only 12 students nationwide received a perfect score in 2014. This round's format is similar to that of the Target Round in MATHCOUNTS. A relay, where the team is broken into five groups of three. Within each group, the first team member solves a problem and passes the solution to the next team member, who plugs that answer into their question, and so on. The allotted time is six minutes, but extra points are given for solving the problem in three minutes. Solving the relay in 3 minutes gives 5 points, solving it in 6 minutes gives 3 points. The whole process is done twice, making the maximum 50 points possible for the team. The teams are scored based on the number of points they attained with the
https://en.wikipedia.org/wiki/List%20of%20interactive%20geometry%20software
Interactive geometry software (IGS) or dynamic geometry environments (DGEs) are computer programs which allow one to create and then manipulate geometric constructions, primarily in plane geometry. In most IGS, one starts construction by putting a few points and using them to define new objects such as lines, circles or other points. After some construction is done, one can move the points one started with and see how the construction changes. History The earliest IGS was the Geometric Supposer, which was developed in the early 1980s. This was soon followed by Cabri in 1986 and The Geometer's Sketchpad. Comparison There are three main types of computer environments for studying school geometry: supposers, dynamic geometry environments (DGEs) and Logo-based programs. Most are DGEs: software that allows the user to manipulate ("drag") the geometric object into different shapes or positions. The main example of a supposer is the Geometric Supposer, which does not have draggable objects, but allows students to study pre-defined shapes. Nearly all of the following programs are DGEs. For a related, comparative physical example of these algorithms, see Lenart Sphere. License and platform The following table provides a first comparison of the different software according to their license and platform. 3D Software General features The following table provides a more detailed comparison : Macros Features related to macro constructions: (TODO) Loci Loci features related to IGS: (TODO) Proof We detail here the proof related features. (TODO) Measurements and calculation Measurement and calculation features related to IGS: (TODO) Graphics export formats Object attributes 2D programs C.a.R. C.a.R. is a free GPL analog of The Geometer's Sketchpad (GSP), written in Java. Cabri Cabri Cabri was developed by the French school of mathematics education in Grenoble (Laborde, 1993) CaRMetal CaRMetal is a free GPL software written in Java. Derived from C.a.R., it provides a different user interface. Cinderella Cinderella, written in Java, is very different from The Geometer's Sketchpad. The later version Cinderella.2 also includes a physics simulation engine and a scripting language. Also, it now supports macros, line segments, calculations, arbitrary functions, plots, etc. Full documentation is available online. Dr Genius Dr Genius was an attempt to merge Dr. Geo and the Genius calculator. Dr. Geo Dr. Geo is a GPL interactive software intended for younger students (7-15). The later version, Dr. Geo II, is a complete rewrite of Dr. Geo, for the Squeak/Smalltalk environment. GCLC GCLC is a dynamic geometry tool for visualizing and teaching geometry, and for producing mathematical illustrations. In GCLC, figures are described rather than drawn. This approach stresses the fact that geometrical constructions are abstract, formal procedures and not figures. A concrete figure can be generated on the basis of the abstract desc
https://en.wikipedia.org/wiki/Pre-intuitionism
In the philosophy of mathematics, the pre-intuitionists were a small but influential group who informally shared similar philosophies on the nature of mathematics. The term itself was used by L. E. J. Brouwer, who in his 1951 lectures at Cambridge described the differences between intuitionism and its predecessors: Of a totally different orientation [from the "Old Formalist School" of Dedekind, Cantor, Peano, Zermelo, and Couturat, etc.] was the Pre-Intuitionist School, mainly led by Poincaré, Borel and Lebesgue. These thinkers seem to have maintained a modified observational standpoint for the introduction of natural numbers, for the principle of complete induction [...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic. The introduction of natural numbers The pre-intuitionists, as defined by L. E. J. Brouwer, differed from the formalist standpoint in several ways, particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. For Poincaré, the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence. This is to say that no mathematical object exists without human construction of it, both in mind and language. The principle of complete induction This sense of definition allowed Poincaré to argue with Bertrand Russell over Giuseppe Peano's axiomatic theory of natural numbers. Peano's fifth axiom states: Allow that; zero has a property P; And; if every natural number less than a number x has the property P then x also has the property P. Therefore; every natural number has the property P. This is the principle of complete induction, which establishes the property of induction as necessary to the system. Since Peano's axiom is as infinite as the natural numbers, it is difficult to prove that the property of P does belong to any x and also x + 1. What one can do is say that, if after some number n of trials that show a property P conserved in x and x + 1, then we may infer that it will still hold to be true after n + 1 trials. But this is itself induction. And hence the argument begs the question. From this Poincaré argues that if we fail to establish the consistency of Peano's axioms for natural numbers without falling into circularity, then the principle of complete induction is not provable by general logic. Thus arithmetic and mathematics in general is not analytic but synthetic. Logicism thus rebuked and Intuition is held up. What Poincaré and the Pre-Intuitionists shared was the perception of a difference between logic and mathematics that is not a matter of language alone, but of knowledge itself. Arguments over the
https://en.wikipedia.org/wiki/Elongated%20triangular%20cupola
In geometry, the elongated triangular cupola is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a triangular cupola () by attaching a hexagonal prism to its base. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: Dual polyhedron The dual of the elongated triangular cupola has 15 faces: 6 isosceles triangles, 3 rhombi, and 6 quadrilaterals. Related polyhedra and honeycombs The elongated triangular cupola can form a tessellation of space with tetrahedra and square pyramids. References External links Johnson solids
https://en.wikipedia.org/wiki/Gyroelongated%20triangular%20cupola
In geometry, the gyroelongated triangular cupola is one of the Johnson solids (J22). It can be constructed by attaching a hexagonal antiprism to the base of a triangular cupola (J3). This is called "gyroelongation", which means that an antiprism is joined to the base of a solid, or between the bases of more than one solid. The gyroelongated triangular cupola can also be seen as a gyroelongated triangular bicupola (J44) with one triangular cupola removed. Like all cupolae, the base polygon has twice as many sides as the top (in this case, the bottom polygon is a hexagon because the top is a triangle). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: Dual polyhedron The dual of the gyroelongated triangular cupola has 15 faces: 6 kites, 3 rhombi, and 6 pentagons. References External links Johnson solids
https://en.wikipedia.org/wiki/Triangular%20orthobicupola
In geometry, the triangular orthobicupola is one of the Johnson solids (). As the name suggests, it can be constructed by attaching two triangular cupolas () along their bases. It has an equal number of squares and triangles at each vertex; however, it is not vertex-transitive. It is also called an anticuboctahedron, twisted cuboctahedron or disheptahedron. It is also a canonical polyhedron. The triangular orthobicupola is the first in an infinite set of orthobicupolae. Relation to cuboctahedra The triangular orthobicupola has a superficial resemblance to the cuboctahedron, which would be known as the triangular gyrobicupola in the nomenclature of Johnson solids — the difference is that the two triangular cupolas which make up the triangular orthobicupola are joined so that pairs of matching sides abut (hence, "ortho"); the cuboctahedron is joined so that triangles abut squares and vice versa. Given a triangular orthobicupola, a 60-degree rotation of one cupola before the joining yields a cuboctahedron. Hence, another name for the triangular orthobicupola is the anticuboctahedron. The elongated triangular orthobicupola (J35), which is constructed by elongating this solid, has a (different) special relationship with the rhombicuboctahedron. The dual of the triangular orthobicupola is the trapezo-rhombic dodecahedron. It has 6 rhombic and 6 trapezoidal faces, and is similar to the rhombic dodecahedron. Formulae The following formulae for volume, surface area, and circumradius can be used if all faces are regular, with edge length a: The circumradius of a triangular orthobicupola is the same as the edge length (C = a). Related polyhedra and honeycombs The rectified cubic honeycomb can be dissected and rebuilt as a space-filling lattice of triangular orthobicupolae and square pyramids. References External links Johnson solids
https://en.wikipedia.org/wiki/Pentagonal%20orthocupolarotunda
In geometry, the pentagonal orthocupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by joining a pentagonal cupola () and a pentagonal rotunda () along their decagonal bases, matching the pentagonal faces. A 36-degree rotation of one of the halves before the joining yields a pentagonal gyrocupolarotunda (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Pentagonal%20gyrocupolarotunda
In geometry, the pentagonal gyrocupolarotunda is one of the Johnson solids (). Like the pentagonal orthocupolarotunda (), it can be constructed by joining a pentagonal cupola () and a pentagonal rotunda () along their decagonal bases. The difference is that in this solid, the two halves are rotated 36 degrees with respect to one another. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Elongated%20triangular%20orthobicupola
In geometry, the elongated triangular orthobicupola or cantellated triangular prism is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a triangular orthobicupola () by inserting a hexagonal prism between its two halves. The resulting solid is superficially similar to the rhombicuboctahedron (one of the Archimedean solids), with the difference that it has threefold rotational symmetry about its axis instead of fourfold symmetry. Volume The volume of J35 can be calculated as follows: J35 consists of 2 cupolae and a hexagonal prism. The two cupolae make 1 cuboctahedron = 8 tetrahedra + 6 half-octahedra. 1 octahedron = 4 tetrahedra, so in total we have 20 tetrahedra. What is the volume of a tetrahedron? Construct a tetrahedron having vertices in common with alternate vertices of a cube (of side 1/√2, if the tetrahedron has unit edges). The 4 triangular pyramids left if the tetrahedron is removed from the cube form half an octahedron = 2 tetrahedra. So V(tetrahedron) = (1/3)·V(cube) = (1/3)·(1/√2)^3 = √2/12. The hexagonal prism is more straightforward. The hexagon has area 3√3/2, so V(prism) = 3√3/2. Finally the numerical value: V(J35) = 20·(√2/12) + 3√3/2 = 5√2/3 + 3√3/2 ≈ 4.9551. Related polyhedra and honeycombs The elongated triangular orthobicupola forms space-filling honeycombs with tetrahedra and square pyramids. References External links Johnson solids
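A short numerical check of the derivation above (unit edge length), using the tetrahedron volume √2/12 and the hexagon area 3√3/2:

```python
from math import sqrt

V_tetrahedron = sqrt(2) / 12           # from the cube argument above
V_cupolae     = 20 * V_tetrahedron     # the two cupolae = one cuboctahedron = 20 tetrahedra
V_prism       = 3 * sqrt(3) / 2        # hexagonal prism of unit edge and unit height
print(V_cupolae + V_prism)             # 4.9551..., the volume of J35 for edge length 1
```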
https://en.wikipedia.org/wiki/Elongated%20triangular%20gyrobicupola
In geometry, the elongated triangular gyrobicupola is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a "triangular gyrobicupola," or cuboctahedron, by inserting a hexagonal prism between its two halves, which are congruent triangular cupolae (). Rotating one of the cupolae through 60 degrees before the elongation yields the triangular orthobicupola (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: Related polyhedra and honeycombs The elongated triangular gyrobicupola forms space-filling honeycombs with tetrahedra and square pyramids. References External links Johnson solids
https://en.wikipedia.org/wiki/Gyroelongated%20triangular%20bicupola
In geometry, the gyroelongated triangular bicupola is one of the Johnson solids (). As the name suggests, it can be constructed by gyroelongating a triangular bicupola (either triangular orthobicupola, , or the cuboctahedron) by inserting a hexagonal antiprism between its congruent halves. The gyroelongated triangular bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the right. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom square would be connected to a square face above it and to the left. The two chiral forms of are not considered different Johnson solids. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids Chiral polyhedra
https://en.wikipedia.org/wiki/O%27Nan%20group
In the area of abstract algebra known as group theory, the O'Nan group O'N or O'Nan–Sims group is a sporadic simple group of order 2^9·3^4·5·7^3·11·19·31 = 460815505920 ≈ 5×10^11. History O'Nan is one of the 26 sporadic groups and was found by Michael O'Nan in a study of groups with a Sylow 2-subgroup of "Alperin type", meaning isomorphic to a Sylow 2-subgroup of a group of type (Z/2^nZ × Z/2^nZ × Z/2^nZ).PSL3(F2). For the O'Nan group n = 2 and the extension does not split. The only other simple group with a Sylow 2-subgroup of Alperin type with n ≥ 2 is the Higman–Sims group again with n = 2, but the extension splits. The Schur multiplier has order 3, and its outer automorphism group has order 2. It has been shown that O'Nan cannot be a subquotient of the monster group. Thus it is one of the 6 sporadic groups called the pariahs. Representations It has been shown that its triple cover has two 45-dimensional representations over the field with 7 elements, exchanged by an outer automorphism. Maximal subgroups The 13 conjugacy classes of maximal subgroups of O'Nan, found independently by two groups of authors, are as follows: L3(7):2 (2 classes, fused by an outer automorphism) J1 The subgroup fixed by an outer involution in O'Nan:2. 4^2.L3(4):2_1 The centralizer of an (inner) involution in O'Nan. (3^2:4 × A6).2 3^4:2^(1+4).D10 L2(31) (2 classes, fused by an outer automorphism) 4^3.L3(2) M11 (2 classes, fused by an outer automorphism) A7 (2 classes, fused by an outer automorphism) O'Nan moonshine In 2017 John F. R. Duncan, Michael H. Mertens, and Ken Ono proved theorems that establish an analogue of monstrous moonshine for the O'Nan group. Their results "reveal a role for the O'Nan pariah group as a provider of hidden symmetry to quadratic forms and elliptic curves." The O'Nan moonshine results "also represent the intersection of moonshine theory with the Langlands program, which, since its inception in the 1960s, has become a driving force for research in number theory, geometry and mathematical physics." An informal description of these developments appeared in Quanta Magazine. Sources External links MathWorld: O'Nan Group Sporadic groups
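The prime factorisation of the order quoted above can be checked in a couple of lines (a trivial sanity check, not part of the article; sympy.factorint returns the exponent of each prime factor):

```python
import sympy as sp

order = 460_815_505_920
print(sp.factorint(order))                              # {2: 9, 3: 4, 5: 1, 7: 3, 11: 1, 19: 1, 31: 1}
print(order == 2**9 * 3**4 * 5 * 7**3 * 11 * 19 * 31)   # True
```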
https://en.wikipedia.org/wiki/Rudvalis%20group
In the area of modern algebra known as group theory, the Rudvalis group Ru is a sporadic simple group of order 2^14·3^3·5^3·7·13·29 = 145926144000 ≈ 1×10^11. History Ru is one of the 26 sporadic groups and was found by Arunas Rudvalis and constructed by John H. Conway and David Wales. Its Schur multiplier has order 2, and its outer automorphism group is trivial. In 1982 Robert Griess showed that Ru cannot be a subquotient of the monster group. Thus it is one of the 6 sporadic groups called the pariahs. Properties The Rudvalis group acts as a rank 3 permutation group on 4060 points, with one point stabilizer being the Ree group 2F4(2), the automorphism group of the Tits group. This representation implies a strongly regular graph srg(4060, 2304, 1328, 1280). That is, each vertex has 2304 neighbors and 1755 non-neighbors, any two adjacent vertices have 1328 common neighbors, while any two non-adjacent ones have 1280 common neighbors. Its double cover acts on a 28-dimensional lattice over the Gaussian integers. The lattice has 4×4060 minimal vectors; if minimal vectors are identified whenever one is 1, i, –1, or –i times another, then the 4060 equivalence classes can be identified with the points of the rank 3 permutation representation. Reducing this lattice modulo the principal ideal generated by 1 + i gives an action of the Rudvalis group on a 28-dimensional vector space over the field with 2 elements. Duncan (2006) used the 28-dimensional lattice to construct a vertex operator algebra acted on by the double cover. The Rudvalis group has been characterized by the centralizer of a central involution, and it has also been characterized as part of the identification of the Rudvalis group as one of the quasithin groups. Maximal subgroups The 15 conjugacy classes of maximal subgroups of Ru are as follows: 2F4(2) = 2F4(2)'.2 2^6.U3(3).2 (2^2 × Sz(8)):3 2^(3+8):L3(2) U3(5):2 2^(1+4+6).S5 PSL2(25).2^2 A8 PSL2(29) 5^2:4.S5 3.A6.2^2 5^(1+2):[2^5] L2(13):2 A6.2^2 5:4 × A5 References External links MathWorld: Rudvalis Group Atlas of Finite Group Representations: Rudvalis group Sporadic groups
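Two quick sanity checks on the numbers above (my own verification, not from the article): the prime factorisation of the group order, and the standard feasibility identity k(k − λ − 1) = (v − k − 1)μ for the strongly regular graph parameters:

```python
order = 2**14 * 3**3 * 5**3 * 7 * 13 * 29
print(order, order == 145_926_144_000)          # 145926144000 True

v, k, lam, mu = 4060, 2304, 1328, 1280
print(k * (k - lam - 1) == (v - k - 1) * mu)    # True; with mu = 1208 this identity fails
```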
https://en.wikipedia.org/wiki/Harada%E2%80%93Norton%20group
In the area of modern algebra known as group theory, the Harada–Norton group HN is a sporadic simple group of order 2^14·3^6·5^6·7·11·19 = 273030912000000 ≈ 3×10^14. History and properties HN is one of the 26 sporadic groups and was found by Koichiro Harada and Simon P. Norton. Its Schur multiplier is trivial and its outer automorphism group has order 2. HN has an involution whose centralizer is of the form 2.HS.2, where HS is the Higman-Sims group (which is how Harada found it). The prime 5 plays a special role in the group. For example, it centralizes an element of order 5 in the Monster group (which is how Norton found it), and as a result acts naturally on a vertex operator algebra over the field with 5 elements. This implies that it acts on a 133 dimensional algebra over F5 with a commutative but nonassociative product, analogous to the Griess algebra. Generalized monstrous moonshine Conway and Norton suggested in their 1979 paper that monstrous moonshine is not limited to the monster, but that similar phenomena may be found for other groups. Larissa Queen and others subsequently found that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of sporadic groups. To recall, the prime number 5 plays a special role in the group and for HN, the relevant McKay-Thompson series is where one can set the constant term (), and η(τ) is the Dedekind eta function. Maximal subgroups The 14 conjugacy classes of maximal subgroups of HN are as follows: A12 2.HS.2 U3(8):3 2^(1+8).(A5 × A5).2 (D10 × U3(5)).2 5^(1+4).2^(1+4).5.4 2^6.U4(2) (A6 × A6).D8 2^(3+2+6).(3 × L3(2)) 5^(2+1+2).4.A5 M12:2 (Two classes, fused by an outer automorphism) 3^4:2.(A4 × A4).4 3^(1+4):4.A5 References S. P. Norton, F and other simple groups, PhD Thesis, Cambridge 1975. External links MathWorld: Harada–Norton Group Atlas of Finite Group Representations: Harada–Norton group Sporadic groups
https://en.wikipedia.org/wiki/Mohr%E2%80%93Mascheroni%20theorem
In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone. It must be understood that "any geometric construction" refers to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as: Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together. Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary. History The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as Mascheroni's Theorem until Mohr's work was rediscovered. Several proofs of the result are known. Mascheroni's proof of 1797 was generally based on the idea of using reflection in a line as the major tool. Mohr's solution was different. In 1890, August Adler published a proof using the inversion transformation. An algebraic approach uses the isomorphism between the Euclidean plane and the real coordinate space . In this way, a stronger version of the theorem was proven in 1990. It also shows the dependence of the theorem on Archimedes' axiom (which cannot be formulated in a first-order language). Constructive proof Outline To prove the theorem, each of the basic constructions of compass and straightedge need to be proven to be possible by using a compass alone, as these are the foundations of, or elementary steps for, all other constructions. These are: Creating the line through two existing points Creating the circle through one point with centre another point Creating the point which is the intersection of two existing, non-parallel lines Creating the one or two points in the intersection of a line and a circle (if they intersect) Creating the one or two points in the intersection of two circles (if they intersect). #1 - A line through two points It is understood that a straight line cannot be drawn without a straightedge. A line is considered to be given by any two points, as any such pair define a unique line. In keeping with the intent of the theorem which we aim to prove, the actual line need not be drawn but for aesthetic reasons. #2 - A circle through one point with defined center This can be d
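Mascheroni's reflection idea is easy to mimic numerically. In the sketch below (an illustration of the idea with arbitrarily chosen coordinates, not one of the formal construction steps listed above), the reflection of a point C in the line through A and B is obtained purely from circle intersections: it is the intersection, other than C itself, of the circle centred at A through C with the circle centred at B through C, so no straightedge is needed.

```python
from math import dist, sqrt

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles given by centre and radius."""
    (x1, y1), (x2, y2) = c1, c2
    d = dist(c1, c2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)          # distance from c1 to the radical chord
    h = sqrt(max(r1**2 - a**2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return ((mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d))

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0)
P, Q = circle_intersections(A, dist(A, C), B, dist(B, C))
# The intersection point that is not C is the reflection of C in the line AB:
print(P if dist(P, C) > 1e-9 else Q)   # (1.0, -2.0): C reflected in the x-axis
```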
https://en.wikipedia.org/wiki/Initial%20topology
In general topology and related areas of mathematics, the initial topology (or induced topology or weak topology or limit topology or projective topology) on a set with respect to a family of functions on is the coarsest topology on that makes those functions continuous. The subspace topology and product topology constructions are both special cases of initial topologies. Indeed, the initial topology construction can be viewed as a generalization of these. The dual notion is the final topology, which for a given family of functions mapping to a set is the finest topology on that makes those functions continuous. Definition Given a set and an indexed family of topological spaces with functions the initial topology on is the coarsest topology on such that each is continuous. Definition in terms of open sets If is a family of topologies indexed by then the of these topologies is the coarsest topology on that is finer than each This topology always exists and it is equal to the topology generated by If for every denotes the topology on then is a topology on , and the is the least upper bound topology of the -indexed family of topologies (for ). Explicitly, the initial topology is the collection of open sets generated by all sets of the form where is an open set in for some under finite intersections and arbitrary unions. Sets of the form are often called . If contains exactly one element, then all the open sets of the initial topology are cylinder sets. Examples Several topological constructions can be regarded as special cases of the initial topology. The subspace topology is the initial topology on the subspace with respect to the inclusion map. The product topology is the initial topology with respect to the family of projection maps. The inverse limit of any inverse system of spaces and continuous maps is the set-theoretic inverse limit together with the initial topology determined by the canonical morphisms. The weak topology on a locally convex space is the initial topology with respect to the continuous linear forms of its dual space. Given a family of topologies on a fixed set the initial topology on with respect to the functions is the supremum (or join) of the topologies in the lattice of topologies on That is, the initial topology is the topology generated by the union of the topologies A topological space is completely regular if and only if it has the initial topology with respect to its family of (bounded) real-valued continuous functions. Every topological space has the initial topology with respect to the family of continuous functions from to the Sierpiński space. Properties Characteristic property The initial topology on can be characterized by the following characteristic property: A function from some space to is continuous if and only if is continuous for each Note that, despite looking quite similar, this is not a universal property. A categorical descriptio
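The "coarsest topology making the maps continuous" can be generated explicitly for finite toy examples: take all preimages of open sets as a subbasis, close under finite intersections, then close under unions. A brute-force sketch in Python (the four-point set, the two maps into the Sierpiński space, and their values are all invented for illustration):

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})

# Sierpinski space {0, 1} with open sets {}, {1}, {0, 1}
S_open = [frozenset(), frozenset({1}), frozenset({0, 1})]
f = {0: 1, 1: 1, 2: 0, 3: 0}          # f: X -> Sierpinski space
g = {0: 1, 1: 0, 2: 1, 3: 0}          # g: X -> Sierpinski space

def preimage(h, U):
    return frozenset(x for x in X if h[x] in U)

subbasis = {preimage(f, U) for U in S_open} | {preimage(g, U) for U in S_open}

# Close under finite intersections to get a basis, then under unions to get the topology.
basis = {X}
for r in range(1, len(subbasis) + 1):
    for sets in combinations(subbasis, r):
        basis.add(frozenset.intersection(*sets))
topology = set()
for r in range(len(basis) + 1):
    for sets in combinations(basis, r):
        topology.add(frozenset(chain.from_iterable(sets)))

# Six open sets: {}, {0}, {0,1}, {0,2}, {0,1,2}, X
print(sorted(sorted(U) for U in topology))
```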
https://en.wikipedia.org/wiki/Formal%20proof
In logic and mathematics, a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. It differs from a natural language argument in that it is rigorous, unambiguous and mechanically verifiable. If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The notion of theorem is not in general effective, therefore there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concepts of Fitch-style proof, sequent calculus and natural deduction are generalizations of the concept of proof. The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus (of some formal system) to the previous well-formed formulas in the proof sequence. Formal proofs often are constructed with the help of computers in interactive theorem proving (e.g., through the use of proof checker and automated theorem prover). Significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, while the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only semi-decidable, depending upon the formal system in use. Background Formal language A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it – that is, before it has any meaning. Formal proofs are expressed in some formal languages. Formal grammar A formal grammar (also called formation rules) is a precise description of the well-formed formulas of a formal language. It is synonymous with the set of strings over the alphabet of the formal language which constitute well formed formulas. However, it does not describe their semantics (i.e. what they mean). Formal systems A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Interpretations An interpretation of a formal system is the assignment of meanings to the symbols, and truth values to the sentences of a formal system. The study of interpretations is called formal semantics. Giving an interpretation is synonymous with ''constructing a model. See also Axiomatic system Formal verification Mathematical proof Proof assistant Proof calcul
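Because a formal proof is a finite sequence checked by purely syntactic rules, a toy proof checker fits in a few lines. The sketch below is entirely illustrative (real proof assistants are far richer): formulas are strings or nested tuples, and a sequence is accepted if every line is an axiom, an assumption, or follows from two earlier lines by modus ponens.

```python
# A formula is an atomic string like 'A' or an implication ('->', antecedent, consequent).

def is_mp(premise, implication, conclusion):
    """Does the conclusion follow from premise and implication by modus ponens?"""
    return implication == ('->', premise, conclusion)

def check_proof(lines, axioms, assumptions=frozenset()):
    for i, phi in enumerate(lines):
        ok = phi in axioms or phi in assumptions or any(
            is_mp(lines[j], lines[k], phi) for j in range(i) for k in range(i)
        )
        if not ok:
            return False
    return True

A, B = 'A', 'B'
axioms = {A, ('->', A, B)}
print(check_proof([A, ('->', A, B), B], axioms))   # True: B is a theorem of this system
print(check_proof([B], axioms))                    # False: B is not justified on its own
```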
https://en.wikipedia.org/wiki/Subgroup%20growth
In mathematics, subgroup growth is a branch of group theory, dealing with quantitative questions about subgroups of a given group. Let be a finitely generated group. Then, for each integer define to be the number of subgroups of index in . Similarly, if is a topological group, denotes the number of open subgroups of index in . One similarly defines and to denote the number of maximal and normal subgroups of index , respectively. Subgroup growth studies these functions, their interplay, and the characterization of group theoretical properties in terms of these functions. The theory was motivated by the desire to enumerate finite groups of given order, and the analogy with Mikhail Gromov's notion of word growth. Nilpotent groups Let be a finitely generated torsionfree nilpotent group. Then there exists a composition series with infinite cyclic factors, which induces a bijection (though not necessarily a homomorphism). such that group multiplication can be expressed by polynomial functions in these coordinates; in particular, the multiplication is definable. Using methods from the model theory of p-adic integers, F. Grunewald, D. Segal and G. Smith showed that the local zeta function is a rational function in . As an example, let be the discrete Heisenberg group. This group has a "presentation" with generators and relations Hence, elements of can be represented as triples of integers with group operation given by To each finite index subgroup of , associate the set of all "good bases" of as follows. Note that has a normal series with infinite cyclic factors. A triple is called a good basis of , if generate , and . In general, it is quite complicated to determine the set of good bases for a fixed subgroup . To overcome this difficulty, one determines the set of all good bases of all finite index subgroups, and determines how many of these belong to one given subgroup. To make this precise, one has to embed the Heisenberg group over the integers into the group over p-adic numbers. After some computations, one arrives at the formula where is the Haar measure on , denotes the p-adic absolute value and is the set of tuples of -adic integers such that is a good basis of some finite-index subgroup. The latter condition can be translated into . Now, the integral can be transformed into an iterated sum to yield where the final evaluation consists of repeated application of the formula for the value of the geometric series. From this we deduce that can be expressed in terms of the Riemann zeta function as For more complicated examples, the computations become difficult, and in general one cannot expect a closed expression for . The local factor can always be expressed as a definable -adic integral. Applying a result of MacIntyre on the model theory of -adic integers, one deduces again that is a rational function in . Moreover, M. du Sautoy and F. Grunewald showed that the integral can be approximated by Artin L-fun
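The closed form reached at the end of the Heisenberg computation is the well-known Grunewald–Segal–Smith formula, usually quoted as follows; ζ on the right denotes the Riemann zeta function, and the local factor at a prime p is obtained by restricting each factor to its Euler factor at p.

```latex
\[
  \zeta_{H}(s) \;=\; \sum_{n \ge 1} \frac{a_n(H)}{n^{s}}
  \;=\; \frac{\zeta(s)\,\zeta(s-1)\,\zeta(2s-2)\,\zeta(2s-3)}{\zeta(3s-3)} .
\]
```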
https://en.wikipedia.org/wiki/Conditional%20convergence
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely. Definition More precisely, a series of real numbers is said to converge conditionally if exists (as a finite real number, i.e. not or ), but A classic example is the alternating harmonic series given by which converges to , but is not absolutely convergent (see Harmonic series). Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. The Lévy–Steinitz theorem identifies the set of values to which a series of terms in Rn can converge. A typical conditionally convergent integral is that on the non-negative real axis of (see Fresnel integral). See also Absolute convergence Unconditional convergence References Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964). Mathematical series Integral calculus Convergence (mathematics) Summability theory
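The rearrangement phenomenon is easy to observe numerically. The Python sketch below sums the alternating harmonic series in its given order, where the partial sums approach ln 2, and then greedily reorders exactly the same terms so that the partial sums approach an arbitrarily chosen target instead; the greedy rule is one standard way to realize Riemann's rearrangement, and the target value 1.0 and the truncation lengths are arbitrary choices made for the illustration.

```python
import math

# Alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... summed in order.
N = 200_000
partial = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
print(partial, math.log(2))          # the partial sums approach ln 2

def rearranged_sum(target, steps=200_000):
    """Greedy Riemann rearrangement of the same terms: take unused positive
    terms 1/(2k-1) while the running sum is below the target, unused negative
    terms -1/(2k) while it is above."""
    pos, neg, s = 1, 2, 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

print(rearranged_sum(1.0))           # approaches 1.0 instead of ln 2
```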
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20gyrocupolarotunda
In geometry, the elongated pentagonal gyrocupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal gyrocupolarotunda () by inserting a decagonal prism between its halves. Rotating either the pentagonal cupola () or the pentagonal rotunda () through 36 degrees before inserting the prism yields an elongated pentagonal orthocupolarotunda (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20orthocupolarotunda
In geometry, the elongated pentagonal orthocupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal orthocupolarotunda () by inserting a decagonal prism between its halves. Rotating either the cupola or the rotunda through 36 degrees before inserting the prism yields an elongated pentagonal gyrocupolarotunda (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20cupolarotunda
In geometry, the gyroelongated pentagonal cupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by gyroelongating a pentagonal cupolarotunda ( or ) by inserting a decagonal antiprism between its two halves. The gyroelongated pentagonal cupolarotunda is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each pentagonal face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom pentagon would be connected to a square face above it and to the right. The two chiral forms of are not considered different Johnson solids. Area and Volume With edge length a, the surface area is and the volume is External links Johnson solids Chiral polyhedra
https://en.wikipedia.org/wiki/Augmented%20truncated%20tetrahedron
In geometry, the augmented truncated tetrahedron is one of the Johnson solids (). It is created by attaching a triangular cupola () to one hexagonal face of a truncated tetrahedron. External links Johnson solids
https://en.wikipedia.org/wiki/California%20Academy%20of%20Mathematics%20and%20Science
The California Academy of Mathematics and Science (CAMS) is a public magnet high school in Carson, California, United States focusing on science and mathematics. Its California API scores are fourth-highest in the state. Located on the campus of California State University, Dominguez Hills, CAMS shares many facilities with the university, including the gymnasium, the student union, the tennis courts, the pool, the library and a few of the parking lots. It is a National "No Child Left Behind" Blue Ribbon (2011) and California Distinguished school. The No Child Left Behind blue ribbon was only presented to 32 public schools nationwide. Newsweek states in its top 1200 High Schools in the USA, CAMS is in the top 4% taking number 281 in 2006. In the December 2007, Newsweek released the results of a two-year study to determine the 100 best High Schools in the United States of America. Out of the 18,000+ schools reviewed, CAMS made it into the top 100 as number 21. As of August 24, 2016, CAMS moved up in ranking to the 100th best high school in the nation. In California CAMS is ranked 10th in the state. According to U.S. News & World Report, as of November 2019 CAMS is rated as the 46th best high school in the nation, and the 5th best in California. It also ranks as the best magnet high school in California. Unlike similar schools such as the Illinois Mathematics and Science Academy and the North Carolina School of Science and Mathematics, CAMS is non-residential, drawing its students solely from most of Long Beach, portions of Los Angeles, and some cities of the South Bay region. Students are admitted only as freshmen. In 2016, the admissions process was changed and is now based solely on academic achievement in middle school. The prior interviewing and applications process was discontinued due to a legal settlement. In the past, applicants from different grade levels were allowed to apply and be accepted, but due to the strict, demanding curriculum at CAMS, the school felt incoming students from other grade levels would be unable to keep up, as they would be unaccustomed to such a curriculum. General information The California Academy of Mathematics and Science (CAMS) opened on the California State University at Dominguez Hills (CSUDH) campus in 1990, the product of partnerships among CSUDH, the California State University’s Chancellor’s Office, a consortium of eleven local school districts, and defense industries. Long Beach Unified School District serves as the managing school district fiscal agent. Today, CAMS ranks in the top ten schools in California on the NCLB Academic Performance Index; its students score well above state and national averages on the math and verbal SATs. Average student daily attendance in 2003-04 was 98%. Attrition is less than 5% for all reasons, as opposed to a 50% drop-out rate in some local high schools, and 95% of CAMS students go on to four-year colleges and universities, including the most selective and prestigi
https://en.wikipedia.org/wiki/Comodule
In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra. Formal definition Let K be a field, and C be a coalgebra over K. A (right) comodule over C is a K-vector space M together with a linear map such that , where Δ is the comultiplication for C, and ε is the counit. Note that in the second rule we have identified with . Examples A coalgebra is a comodule over itself. If M is a finite-dimensional module over a finite-dimensional K-algebra A, then the set of linear functions from A to K forms a coalgebra, and the set of linear functions from M to K forms a comodule over that coalgebra. A graded vector space V can be made into a comodule. Let I be the index set for the graded vector space, and let be the vector space with basis for . We turn into a coalgebra and V into a -comodule, as follows: Let the comultiplication on be given by . Let the counit on be given by . Let the map on V be given by , where is the i-th homogeneous piece of . In algebraic topology One important result in algebraic topology is the fact that homology over the dual Steenrod algebra forms a comodule. This comes from the fact the Steenrod algebra has a canonical action on the cohomologyWhen we dualize to the dual Steenrod algebra, this gives a comodule structureThis result extends to other cohomology theories as well, such as complex cobordism and is instrumental in computing its cohomology ring . The main reason for considering the comodule structure on homology instead of the module structure on cohomology lies in the fact the dual Steenrod algebra is a commutative ring, and the setting of commutative algebra provides more tools for studying its structure. Rational comodule If M is a (right) comodule over the coalgebra C, then M is a (left) module over the dual algebra C∗, but the converse is not true in general: a module over C∗ is not necessarily a comodule over C. A rational comodule is a module over C∗ which becomes a comodule over C in the natural way. Comodule morphisms Let R be a ring, M, N, and C be R-modules, and be right C-comodules. Then an R-linear map is called a (right) comodule morphism, or (right) C-colinear, if This notion is dual to the notion of a linear map between vector spaces, or, more generally, of a homomorphism between R-modules. See also Divided power structure References Module theory Coalgebras
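Written out in the usual notation, with structure map ρ : M → M ⊗ C, the two conditions in the definition are coassociativity and the counit condition (identifying M ⊗ K with M); the symbols below follow that standard convention.

```latex
\[
  (\rho \otimes \mathrm{id}_C) \circ \rho \;=\; (\mathrm{id}_M \otimes \Delta) \circ \rho ,
  \qquad
  (\mathrm{id}_M \otimes \varepsilon) \circ \rho \;=\; \mathrm{id}_M .
\]
```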
https://en.wikipedia.org/wiki/Homotopy%20sphere
In algebraic topology, a branch of mathematics, a homotopy sphere is an n-manifold that is homotopy equivalent to the n-sphere. It thus has the same homotopy groups and the same homology groups as the n-sphere, and so every homotopy sphere is necessarily a homology sphere. The topological generalized Poincaré conjecture is that any n-dimensional homotopy sphere is homeomorphic to the n-sphere; it was solved by Stephen Smale in dimensions five and higher, by Michael Freedman in dimension 4, and for dimension 3 (the original Poincaré conjecture) by Grigori Perelman in 2005. The resolution of the smooth Poincaré conjecture in dimensions 5 and larger implies that homotopy spheres in those dimensions are precisely exotic spheres. It is still an open question () whether or not there are non-trivial smooth homotopy spheres in dimension 4. See also Homology sphere Homotopy groups of spheres Poincaré conjecture References External links Homotopy theory Topological spaces
https://en.wikipedia.org/wiki/Voigt%20profile
The Voigt profile (named after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution and a Gaussian distribution. It is often used in analyzing data from spectroscopy or diffraction. Definition Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then where x is the shift from the line center, is the centered Gaussian profile: and is the centered Lorentzian profile: The defining integral can be evaluated as: where Re[w(z)] is the real part of the Faddeeva function evaluated for In the limiting cases of and then simplifies to and , respectively. History and applications In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms, one of which alone would produce a Gaussian profile (usually, as a result of the Doppler broadening), and the other would produce a Lorentzian profile. Voigt profiles are common in many branches of spectroscopy and diffraction. Due to the expense of computing the Faddeeva function, the Voigt profile is sometimes approximated using a pseudo-Voigt profile. Properties The Voigt profile is normalized: since it is a convolution of normalized profiles. The Lorentzian profile has no moments (other than the zeroth), and so the moment-generating function for the Cauchy distribution is not defined. It follows that the Voigt profile will not have a moment-generating function either, but the characteristic function for the Cauchy distribution is well defined, as is the characteristic function for the normal distribution. The characteristic function for the (centered) Voigt profile will then be the product of the two: Since normal distributions and Cauchy distributions are stable distributions, they are each closed under convolution (up to change of scale), and it follows that the Voigt distributions are also closed under convolution. Cumulative distribution function Using the above definition for z , the cumulative distribution function (CDF) can be found as follows: Substituting the definition of the Faddeeva function (scaled complex error function) yields for the indefinite integral: which may be solved to yield where is a hypergeometric function. In order for the function to approach zero as x approaches negative infinity (as the CDF must do), an integration constant of 1/2 must be added. This gives for the CDF of Voigt: The uncentered Voigt profile If the Gaussian profile is centered at and the Lorentzian profile is centered at , the convolution is centered at and the characteristic function is: The probability density function is simply offset from the centered profile by : where: The mode and median are both located at . Derivatives Using the definition above for and , the first and second derivatives can be expressed in terms of the Faddeeva function as and respectively. Often, one or multiple Voigt profiles and/or their respective derivatives need to be f
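In practice the profile is evaluated through the Faddeeva function, which SciPy exposes as scipy.special.wofz. The Python sketch below follows the defining formula above; the parameter names sigma (Gaussian standard deviation) and gamma (Lorentzian half-width at half-maximum) and the test values are choices made for the illustration.

```python
import numpy as np
from scipy.special import wofz      # Faddeeva function w(z)

def voigt(x, sigma, gamma):
    """Centered Voigt profile V(x) = Re[w(z)] / (sigma*sqrt(2*pi)),
    with z = (x + i*gamma) / (sigma*sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-50.0, 50.0, 20001)
v = voigt(x, sigma=1.0, gamma=0.5)
print(np.sum(v) * (x[1] - x[0]))    # ~1 (up to the truncated Lorentzian tails)

# Limiting cases at x = 0: gamma -> 0 recovers the Gaussian peak value
# 1/(sigma*sqrt(2*pi)); sigma -> 0 recovers the Lorentzian peak value 1/(pi*gamma).
print(voigt(np.array([0.0]), 1.0, 1e-12)[0], 1 / np.sqrt(2 * np.pi))
print(voigt(np.array([0.0]), 1e-6, 0.5)[0], 1 / (np.pi * 0.5))
```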
https://en.wikipedia.org/wiki/Stopping%20time
In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time, Markov moment, optional stopping time or optional time) is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time. Stopping times occur in decision theory, and the optional stopping theorem is an important result in this context. Stopping times are also frequently applied in mathematical proofs to “tame the continuum of time”, as Chung put it in his book (1982). Definition Discrete time Let be a random variable, which is defined on the filtered probability space with values in . Then is called a stopping time (with respect to the filtration ), if the following condition holds: for all Intuitively, this condition means that the "decision" of whether to stop at time must be based only on the information present at time , not on any future information. General case Let be a random variable, which is defined on the filtered probability space with values in . In most cases, . Then is called a stopping time (with respect to the filtration ), if the following condition holds: for all As adapted process Let be a random variable, which is defined on the filtered probability space with values in . Then is called a stopping time iff the stochastic process , defined by is adapted to the filtration Comments Some authors explicitly exclude cases where can be , whereas other authors allow to take any value in the closure of . Examples To illustrate some examples of random times that are stopping rules and some that are not, consider a gambler playing roulette with a typical house edge, starting with $100 and betting $1 on red in each game: Playing exactly five games corresponds to the stopping time τ = 5, and is a stopping rule. Playing until they either run out of money or have played 500 games is a stopping rule. Playing until they obtain the maximum amount ahead they will ever be is not a stopping rule and does not provide a stopping time, as it requires information about the future as well as the present and past. Playing until they double their money (borrowing if necessary) is not a stopping rule, as there is a positive probability that they will never double their money. Playing until they either double their money or run out of money is a stopping rule, even though there is potentially no limit to the number of games they play, since the probability that they stop in a finite time is 1. To illustrate the more general definition of stopping time, consider Brownian motion, which is a stochastic process , where each is a random variable defined on the probability spa
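The gambler examples can be made concrete in a short simulation. The Python sketch below implements the rule "stop when the bankroll is doubled or exhausted": the decision at every step uses only the past and present bankroll, which is exactly what makes the random number of games played a stopping time. The 18/38 win probability for a $1 bet on red is an assumed American-roulette value, and the snippet is an illustration rather than part of the theory.

```python
import random

def games_until_double_or_broke(start=100, p_win=18 / 38, seed=None):
    """Bet $1 on red each game; stop when the bankroll reaches 2*start or 0.
    The returned number of games is a realization of a stopping time."""
    rng = random.Random(seed)
    bankroll, t = start, 0
    while 0 < bankroll < 2 * start:
        bankroll += 1 if rng.random() < p_win else -1
        t += 1
    return t, bankroll

results = [games_until_double_or_broke(seed=s) for s in range(200)]
times = [t for t, _ in results]
print(min(times), max(times), sum(times) / len(times))
```

By contrast, the rule "stop at the maximum amount ahead they will ever be" cannot be written as such a loop at all, since the decision to stop would require knowledge of future games.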
https://en.wikipedia.org/wiki/Autocovariance
In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question. Auto-covariance of stochastic processes Definition With the usual notation for the expectation operator, if the stochastic process has the mean function , then the autocovariance is given by where and are two instances in time. Definition for weakly stationary process If is a weakly stationary (WSS) process, then the following are true: for all and for all and where is the lag time, or the amount of time by which the signal has been shifted. The autocovariance function of a WSS process is therefore given by: which is equivalent to . Normalization It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably. The definition of the normalized auto-correlation of a stochastic process is . If the function is well-defined, its value must lie in the range , with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. For a WSS process, the definition is . where . Properties Symmetry property respectively for a WSS process: Linear filtering The autocovariance of a linearly filtered process is Calculating turbulent diffusivity Autocovariance can be used to calculate turbulent diffusivity. Turbulence in a flow can cause the fluctuation of velocity in space and time. Thus, we are able to identify turbulence through the statistics of those fluctuations. Reynolds decomposition is used to define the velocity fluctuations (assume we are now working with 1D problem and is the velocity along direction): where is the true velocity, and is the expected value of velocity. If we choose a correct , all of the stochastic components of the turbulent velocity will be included in . To determine , a set of velocity measurements that are assembled from points in space, moments in time or repeated experiments is required. If we assume the turbulent flux (, and c is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term: The velocity autocovariance is defined as or where is the lag time, and is the lag distance. The turbulent diffusivity can be calculated using the following 3 methods: Auto-covariance of random vectors See also Autoregressive process Correlation Cross-covariance Cross-correlation Noise covariance estimation (as an application example) References Further reading Lecture notes on autocovariance from WHOI Fourier analysis Autocorrelation
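A sample version of the autocovariance is straightforward to compute. The Python sketch below estimates K(τ) for a simulated AR(1) process X_t = φX_{t−1} + e_t with unit-variance noise, whose theoretical autocovariance φ^|τ|/(1 − φ²) is known in closed form; the AR(1) example and the biased 1/n normalization are choices made here for illustration.

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance K(tau) = (1/n) * sum (x_t - m)(x_{t+tau} - m)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    n = len(x)
    return np.array([np.dot(d[:n - tau], d[tau:]) / n for tau in range(max_lag + 1)])

rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

K = sample_autocovariance(x, max_lag=3)
print(K)                                                   # sample estimates
print([phi ** tau / (1 - phi ** 2) for tau in range(4)])   # theoretical values
# Dividing by K[0] gives the (sample) autocorrelation function.
```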
https://en.wikipedia.org/wiki/Pr%C3%BCfer%20rank
In mathematics, especially in the area of algebra known as group theory, the Prüfer rank of a pro-p group measures the size of a group in terms of the ranks of its elementary abelian sections. The rank is well behaved and helps to define analytic pro-p-groups. The term is named after Heinz Prüfer. Definition The Prüfer rank of pro-p-group is where is the rank of the abelian group , where is the Frattini subgroup of . As the Frattini subgroup of can be thought of as the group of non-generating elements of , it can be seen that will be equal to the size of any minimal generating set of . Properties Those profinite groups with finite Prüfer rank are more amenable to analysis. Specifically in the case of finitely generated pro-p groups, having finite Prüfer rank is equivalent to having an open normal subgroup that is powerful. In turn these are precisely the class of pro-p groups that are p-adic analytic – that is groups that can be imbued with a p-adic manifold structure. References Infinite group theory
https://en.wikipedia.org/wiki/Gluing%20axiom
In mathematics, the gluing axiom is introduced to define what a sheaf on a topological space must satisfy, given that it is a presheaf, which is by definition a contravariant functor to a category which initially one takes to be the category of sets. Here is the partial order of open sets of ordered by inclusion maps; and considered as a category in the standard way, with a unique morphism if is a subset of , and none otherwise. As phrased in the sheaf article, there is a certain axiom that must satisfy, for any open cover of an open set of . For example, given open sets and with union and intersection , the required condition is that is the subset of With equal image in In less formal language, a section of over is equally well given by a pair of sections : on and respectively, which 'agree' in the sense that and have a common image in under the respective restriction maps and . The first major hurdle in sheaf theory is to see that this gluing or patching axiom is a correct abstraction from the usual idea in geometric situations. For example, a vector field is a section of a tangent bundle on a smooth manifold; this says that a vector field on the union of two open sets is (no more and no less than) vector fields on the two sets that agree where they overlap. Given this basic understanding, there are further issues in the theory, and some will be addressed here. A different direction is that of the Grothendieck topology, and yet another is the logical status of 'local existence' (see Kripke–Joyal semantics). Removing restrictions on C To rephrase this definition in a way that will work in any category that has sufficient structure, we note that we can write the objects and morphisms involved in the definition above in a diagram which we will call (G), for "gluing": Here the first map is the product of the restriction maps and each pair of arrows represents the two restrictions and . It is worthwhile to note that these maps exhaust all of the possible restriction maps among , the , and the . The condition for to be a sheaf is that for any open set and any collection of open sets whose union is , the diagram (G) above is an equalizer. One way of understanding the gluing axiom is to notice that is the colimit of the following diagram: The gluing axiom says that turns colimits of such diagrams into limits. Sheaves on a basis of open sets In some categories, it is possible to construct a sheaf by specifying only some of its sections. Specifically, let be a topological space with basis . We can define a category to be the full subcategory of whose objects are the . A B-sheaf on with values in is a contravariant functor which satisfies the gluing axiom for sets in . That is, on a selection of open sets of , specifies all of the sections of a sheaf, and on the other open sets, it is undetermined. B-sheaves are equivalent to sheaves (that is, the category of sheaves is equivalent to the category
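Spelled out for an open cover U = ⋃ᵢ Uᵢ, diagram (G) is the fork below, whose first map is restriction and whose parallel pair restricts a family of sections to the pairwise intersections in the two possible ways; the gluing axiom says that F(U) is the equalizer of this pair. The indexing conventions are the standard ones and are written out here for convenience.

```latex
\[
  F(U) \;\longrightarrow\; \prod_{i} F(U_i)
        \;\rightrightarrows\; \prod_{i,j} F(U_i \cap U_j),
\]
\[
  s \mapsto \bigl(s|_{U_i}\bigr)_i, \qquad
  (s_i)_i \mapsto \bigl(s_i|_{U_i \cap U_j}\bigr)_{i,j}
  \quad\text{and}\quad
  (s_i)_i \mapsto \bigl(s_j|_{U_i \cap U_j}\bigr)_{i,j}.
\]
```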
https://en.wikipedia.org/wiki/Folk%20theorem
Folk theorem may refer to: Folk theorem (game theory), a general feasibility theorem Ethnomathematics, the study of the relationship between mathematics and culture Mathematical folklore, theorems that are widely known to mathematicians but cannot be traced back to an individual Mathematics disambiguation pages
https://en.wikipedia.org/wiki/Practical%20number
In number theory, a practical number or panarithmic number is a positive integer such that all smaller positive integers can be represented as sums of distinct divisors of . For example, 12 is a practical number because all the numbers from 1 to 11 can be expressed as sums of its divisors 1, 2, 3, 4, and 6: as well as these divisors themselves, we have 5 = 3 + 2, 7 = 6 + 1, 8 = 6 + 2, 9 = 6 + 3, 10 = 6 + 3 + 1, and 11 = 6 + 3 + 2. The sequence of practical numbers begins Practical numbers were used by Fibonacci in his Liber Abaci (1202) in connection with the problem of representing rational numbers as Egyptian fractions. Fibonacci does not formally define practical numbers, but he gives a table of Egyptian fraction expansions for fractions with practical denominators. The name "practical number" is due to . He noted that "the subdivisions of money, weights, and measures involve numbers like 4, 12, 16, 20 and 28 which are usually supposed to be so inconvenient as to deserve replacement by powers of 10." His partial classification of these numbers was completed by and . This characterization makes it possible to determine whether a number is practical by examining its prime factorization. Every even perfect number and every power of two is also a practical number. Practical numbers have also been shown to be analogous with prime numbers in many of their properties. Characterization of practical numbers The original characterisation by stated that a practical number cannot be a deficient number, that is one of which the sum of all divisors (including 1 and itself) is less than twice the number unless the deficiency is one. If the ordered set of all divisors of the practical number is with and , then Srinivasan's statement can be expressed by the inequality In other words, the ordered sequence of all divisors of a practical number has to be a complete sub-sequence. This partial characterization was extended and completed by and who showed that it is straightforward to determine whether a number is practical from its prime factorization. A positive integer greater than one with prime factorization (with the primes in sorted order ) is practical if and only if each of its prime factors is small enough for to have a representation as a sum of smaller divisors. For this to be true, the first prime must equal 2 and, for every from 2 to , each successive prime must obey the inequality where denotes the sum of the divisors of x. For example, 2 × 32 × 29 × 823 = 429606 is practical, because the inequality above holds for each of its prime factors: 3 ≤ σ(2) + 1 = 4, 29 ≤ σ(2 × 32) + 1 = 40, and 823 ≤ σ(2 × 32 × 29) + 1 = 1171. The condition stated above is necessary and sufficient for a number to be practical. In one direction, this condition is necessary in order to be able to represent as a sum of divisors of , because if the inequality failed to be true then even adding together all the smaller divisors would give a sum too sma
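Both the definition and the prime-factorization criterion are easy to test in code. The Python sketch below checks practicality in two independent ways, by subset sums of divisors and by the inequality on successive prime factors, and confirms that the two agree on a small range; the helper functions are written out only to keep the snippet self-contained.

```python
def divisor_list(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factorization(n):
    """Trial division; returns (prime, exponent) pairs in increasing order."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def is_practical_bruteforce(n):
    """n is practical iff every m in 1..n-1 is a sum of distinct divisors of n."""
    reachable = {0}
    for d in divisor_list(n):
        reachable |= {r + d for r in reachable}
    return all(m in reachable for m in range(1, n))

def is_practical(n):
    """Factorization criterion: each prime factor p_i must satisfy
    p_i <= sigma(product of the earlier prime powers) + 1 (forcing p_1 = 2)."""
    sigma_part = 1
    for p, e in prime_factorization(n):
        if p > sigma_part + 1:
            return False
        sigma_part *= (p ** (e + 1) - 1) // (p - 1)
    return True

print([n for n in range(1, 60) if is_practical(n)])
print(all(is_practical(n) == is_practical_bruteforce(n) for n in range(1, 300)))
```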
https://en.wikipedia.org/wiki/Stein%20factorization
In algebraic geometry, the Stein factorization, introduced by for the case of complex spaces, states that a proper morphism can be factorized as a composition of a finite mapping and a proper morphism with connected fibers. Roughly speaking, Stein factorization contracts the connected components of the fibers of a mapping to points. Statement One version for schemes states the following: Let X be a scheme, S a locally noetherian scheme and a proper morphism. Then one can write where is a finite morphism and is a proper morphism so that The existence of this decomposition itself is not difficult. See below. But, by Zariski's connectedness theorem, the last part in the above says that the fiber is connected for any . It follows: Corollary: For any , the set of connected components of the fiber is in bijection with the set of points in the fiber . Proof Set: where SpecS is the relative Spec. The construction gives the natural map , which is finite since is coherent and f is proper. The morphism f factors through g and one gets , which is proper. By construction, . One then uses the theorem on formal functions to show that the last equality implies has connected fibers. (This part is sometimes referred to as Zariski's connectedness theorem.) See also Contraction morphism References Algebraic geometry
https://en.wikipedia.org/wiki/Spherical%203-manifold
In mathematics, a spherical 3-manifold M is a 3-manifold of the form where is a finite subgroup of SO(4) acting freely by rotations on the 3-sphere . All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds or Clifford-Klein manifolds. Properties A spherical 3-manifold has a finite fundamental group isomorphic to Γ itself. The elliptization conjecture, proved by Grigori Perelman, states that conversely all compact 3-manifolds with finite fundamental group are spherical manifolds. The fundamental group is either cyclic, or is a central extension of a dihedral, tetrahedral, octahedral, or icosahedral group by a cyclic group of even order. This divides the set of such manifolds into 5 classes, described in the following sections. The spherical manifolds are exactly the manifolds with spherical geometry, one of the 8 geometries of Thurston's geometrization conjecture. Cyclic case (lens spaces) The manifolds with Γ cyclic are precisely the 3-dimensional lens spaces. A lens space is not determined by its fundamental group (there are non-homeomorphic lens spaces with isomorphic fundamental groups); but any other spherical manifold is. Three-dimensional lens spaces arise as quotients of by the action of the group that is generated by elements of the form where . Such a lens space has fundamental group for all , so spaces with different are not homotopy equivalent. Moreover, classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces and are: homotopy equivalent if and only if for some homeomorphic if and only if In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic. The lens space L(1,0) is the 3-sphere, and the lens space L(2,1) is 3 dimensional real projective space. Lens spaces can be represented as Seifert fiber spaces in many ways, usually as fiber spaces over the 2-sphere with at most two exceptional fibers, though the lens space with fundamental group of order 4 also has a representation as a Seifert fiber space over the projective plane with no exceptional fibers. Dihedral case (prism manifolds) A prism manifold is a closed 3-dimensional manifold M whose fundamental group is a central extension of a dihedral group. The fundamental group π1(M) of M is a product of a cyclic group of order m with a group having presentation for integers k, m, n with k ≥ 1, m ≥ 1, n ≥ 2 and m coprime to 2n. Alternatively, the fundamental group has presentation for coprime integers m, n with m ≥ 1, n ≥ 2. (The n here equals the previous n, and the m here is 2k-1 times the previous m.) We continue with the latter presentation. This group is a metacyclic group of order 4mn with abelianization of order 4m (so m and n are both determined by this group). The element y generates a cyclic normal subgroup of order 2n, and the element x has order 4m. T
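The classification statements for three-dimensional lens spaces are classical results and are usually quoted in the following form, for q₁, q₂ coprime to p; they are recorded here because the congruences are what the L(7,1), L(7,2) example rests on.

```latex
\[
  L(p, q_1) \simeq L(p, q_2) \ \text{(homotopy equivalent)}
  \iff q_1 q_2 \equiv \pm\, n^{2} \pmod{p} \ \text{for some integer } n,
\]
\[
  L(p, q_1) \cong L(p, q_2) \ \text{(homeomorphic)}
  \iff q_1 \equiv \pm\, q_2^{\pm 1} \pmod{p}.
\]
```

For p = 7 this recovers the example above: 1 · 2 ≡ 3² (mod 7), so L(7,1) and L(7,2) are homotopy equivalent, while ±2^{±1} ≡ 2, 4, 5, 3 (mod 7) never equals 1, so they are not homeomorphic.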
https://en.wikipedia.org/wiki/Biquaternion
In abstract algebra, the biquaternions are the numbers , where , and are complex numbers, or variants thereof, and the elements of multiply as in the quaternion group and commute with their coefficients. There are three types of biquaternions corresponding to complex numbers and the variations thereof: Biquaternions when the coefficients are complex numbers. Split-biquaternions when the coefficients are split-complex numbers. Dual quaternions when the coefficients are dual numbers. This article is about the ordinary biquaternions named by William Rowan Hamilton in 1844. Some of the more prominent proponents of these biquaternions include Alexander Macfarlane, Arthur W. Conway, Ludwik Silberstein, and Cornelius Lanczos. As developed below, the unit quasi-sphere of the biquaternions provides a representation of the Lorentz group, which is the foundation of special relativity. The algebra of biquaternions can be considered as a tensor product , where is the field of complex numbers and is the division algebra of (real) quaternions. In other words, the biquaternions are just the complexification of the quaternions. Viewed as a complex algebra, the biquaternions are isomorphic to the algebra of complex matrices . They are also isomorphic to several Clifford algebras including , the Pauli algebra , and the even part of the spacetime algebra. Definition Let be the basis for the (real) quaternions , and let be complex numbers, then is a biquaternion. To distinguish square roots of minus one in the biquaternions, Hamilton and Arthur W. Conway used the convention of representing the square root of minus one in the scalar field by to avoid confusion with the in the quaternion group. Commutativity of the scalar field with the quaternion group is assumed: Hamilton introduced the terms bivector, biconjugate, bitensor, and biversor to extend notions used with real quaternions . Hamilton's primary exposition on biquaternions came in 1853 in his Lectures on Quaternions. The editions of Elements of Quaternions, in 1866 by William Edwin Hamilton (son of Rowan), and in 1899, 1901 by Charles Jasper Joly, reduced the biquaternion coverage in favour of the real quaternions. Considered with the operations of component-wise addition, and multiplication according to the quaternion group, this collection forms a 4-dimensional algebra over the complex numbers . The algebra of biquaternions is associative, but not commutative. A biquaternion is either a unit or a zero divisor. The algebra of biquaternions forms a composition algebra and can be constructed from bicomplex numbers. See below. Place in ring theory Linear representation Note that the matrix product . Because is the imaginary unit, each of these three arrays has a square equal to the negative of the identity matrix. When this matrix product is interpreted as , then one obtains a subgroup of matrices that is isomorphic to the quaternion group. Consequently, represents biquaternion
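The linear representation sketched above can be checked directly in a few lines. The Python sketch below uses one standard choice of 2×2 complex matrices for the quaternion units (a conventional choice related to the Pauli matrices, made here for illustration), verifies the quaternion relations, and then realizes a biquaternion with complex coefficients as a complex matrix, so that biquaternion multiplication becomes matrix multiplication.

```python
import numpy as np

ONE = np.eye(2, dtype=complex)
I = np.array([[1j, 0], [0, -1j]])                # quaternion unit i
J = np.array([[0, 1], [-1, 0]], dtype=complex)   # quaternion unit j
K = np.array([[0, 1j], [1j, 0]])                 # quaternion unit k

# Quaternion relations: i^2 = j^2 = k^2 = ijk = -1, ij = k, jk = i, ki = j.
for M in (I, J, K):
    assert np.allclose(M @ M, -ONE)
assert np.allclose(I @ J @ K, -ONE)
assert np.allclose(I @ J, K) and np.allclose(J @ K, I) and np.allclose(K @ I, J)

def biquaternion(w, x, y, z):
    """w + x i + y j + z k with complex coefficients w, x, y, z; the complex
    scalar (Hamilton's h) acts as a multiple of the identity matrix and so
    commutes with i, j, k."""
    return w * ONE + x * I + y * J + z * K

a = biquaternion(1 + 2j, 0, 3j, 1)
b = biquaternion(0, 1j, 1, 2 - 1j)
print(a @ b)                                     # the product, again a biquaternion
```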
https://en.wikipedia.org/wiki/Grete%20Hermann
Grete Hermann (2 March 1901 – 15 April 1984) was a German mathematician and philosopher noted for her work in mathematics, physics, philosophy and education. She is noted for her early philosophical work on the foundations of quantum mechanics, and is now known most of all for an early, but long-ignored critique of a "no hidden-variables theorem" by John von Neumann. It has been suggested that, had her critique not remained nearly unknown for decades, the historical development of quantum mechanics might have been very different. Mathematics Hermann studied mathematics at Göttingen under Emmy Noether and Edmund Landau, where she achieved her PhD in 1926. Her doctoral thesis, "Die Frage der endlich vielen Schritte in der Theorie der Polynomideale" (in English "The Question of Finitely Many Steps in Polynomial Ideal Theory"), published in Mathematische Annalen, is the foundational paper for computer algebra. It first established the existence of algorithms (including complexity bounds) for many of the basic problems of abstract algebra, such as ideal membership for polynomial rings. Hermann's algorithm for primary decomposition is still in contemporary use. Assistant to Leonard Nelson From 1925 to 1927, Hermann worked as assistant for Leonard Nelson. Together with Minna Specht, she posthumously published Nelson's work System der philosophischen Ethik und Pädagogik, while continuing her own research. Quantum mechanics As a philosopher, Hermann had a particular interest in the foundations of physics. In 1934, she went to Leipzig "for the express purpose of reconciling a neo-Kantian conception of causality with the new quantum mechanics". In Leipzig, many exchanges of thoughts took place among Hermann, Carl Friedrich von Weizsäcker, and Werner Heisenberg. The contents of her work in this time, including a focus on a distinction of predictability and causality, are known from three of her own publications, and from later description of their discussions by von Weizsäcker, and the discussion of Hermann's work in chapter ten of Heisenberg's The Part and The Whole. From Denmark, she published her work The foundations of quantum mechanics in the philosophy of nature (German original title: Die naturphilosophischen Grundlagen der Quantenmechanik). This work has been referred to as "one of the earliest and best philosophical treatments of the new quantum mechanics". In this work, she concludes: In June 1936, Hermann was awarded the Richard Avenarius prize together with Eduard May and Th. Vogel. Earlier, in 1935, Hermann published a critique of John von Neumann's 1932 proof which was widely claimed to show that a hidden variable theory of quantum mechanics was impossible. Hermann's work on this subject went unnoticed by the physics community until it was independently discovered and published by John Stewart Bell in 1966, and her earlier discovery was pointed out by Max Jammer in 1974. Some have posited that had her critique not remained near
https://en.wikipedia.org/wiki/Primary%20decomposition
In mathematics, the Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means that every ideal can be decomposed as an intersection, called primary decomposition, of finitely many primary ideals (which are related to, but not quite the same as, powers of prime ideals). The theorem was first proven by for the special case of polynomial rings and convergent power series rings, and was proven in its full generality by . The Lasker–Noether theorem is an extension of the fundamental theorem of arithmetic, and more generally the fundamental theorem of finitely generated abelian groups to all Noetherian rings. The theorem plays an important role in algebraic geometry, by asserting that every algebraic set may be uniquely decomposed into a finite union of irreducible components. It has a straightforward extension to modules stating that every submodule of a finitely generated module over a Noetherian ring is a finite intersection of primary submodules. This contains the case for rings as a special case, considering the ring as a module over itself, so that ideals are submodules. This also generalizes the primary decomposition form of the structure theorem for finitely generated modules over a principal ideal domain, and for the special case of polynomial rings over a field, it generalizes the decomposition of an algebraic set into a finite union of (irreducible) varieties. The first algorithm for computing primary decompositions for polynomial rings over a field of characteristic 0 was published by Noether's student . The decomposition does not hold in general for non-commutative Noetherian rings. Noether gave an example of a non-commutative Noetherian ring with a right ideal that is not an intersection of primary ideals. Primary decomposition of an ideal Let be a Noetherian commutative ring. An ideal of is called primary if it is a proper ideal and for each pair of elements and in such that is in , either or some power of is in ; equivalently, every zero-divisor in the quotient is nilpotent. The radical of a primary ideal is a prime ideal and is said to be -primary for . Let be an ideal in . Then has an irredundant primary decomposition into primary ideals: . Irredundancy means: Removing any of the changes the intersection, i.e. for each we have: . The prime ideals are all distinct. Moreover, this decomposition is unique in the two ways: The set is uniquely determined by , and If is a minimal element of the above set, then is uniquely determined by ; in fact, is the pre-image of under the localization map . Primary ideals which correspond to non-minimal prime ideals over are in general not unique (see an example below). For the existence of the decomposition, see #Primary decomposition from associated primes below. The elements of are called the prime divisors of or the primes belonging to . In the language of module theory, as discussed below, the set is also the set of associated prime
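A standard illustration of the non-uniqueness of embedded components is the ideal (x², xy) in a polynomial ring k[x, y] over a field: it admits two different irredundant primary decompositions, agreeing in the component attached to the minimal prime (x) but differing in the component attached to the embedded prime (x, y).

```latex
\[
  (x^{2},\, xy) \;=\; (x) \cap (x, y)^{2} \;=\; (x) \cap (x^{2}, y),
  \qquad
  \operatorname{Ass}\bigl(k[x,y]/(x^{2},xy)\bigr) = \{(x),\ (x,y)\}.
\]
```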
https://en.wikipedia.org/wiki/Bicyclic%20semigroup
In mathematics, the bicyclic semigroup is an algebraic object important for the structure theory of semigroups. Although it is in fact a monoid, it is usually referred to as simply a semigroup. It is perhaps most easily understood as the syntactic monoid describing the Dyck language of balanced pairs of parentheses. Thus, it finds common applications in combinatorics, such as describing binary trees and associative algebras. History The first published description of this object was given by Evgenii Lyapin in 1953. Alfred H. Clifford and Gordon Preston claim that one of them, working with David Rees, discovered it independently (without publication) at some point before 1943. Construction There are at least three standard ways of constructing the bicyclic semigroup, and various notations for referring to it. Lyapin called it P; Clifford and Preston used ; and most recent papers have tended to use B. This article will use the modern style throughout. From a free semigroup The bicyclic semigroup is the quotient of the free monoid on two generators p and q by the congruence generated by the relation p q = 1. Thus, each semigroup element is a string of those two letters, with the proviso that the subsequence "p q" does not appear. The semigroup operation is concatenation of strings, which is clearly associative. It can then be shown that all elements of B in fact have the form qa pb, for some natural numbers a and b. The composition operation simplifies to (qa pb) (qc pd) = qa + c − min{b, c} pd + b − min{b, c}. From ordered pairs The way in which these exponents are constrained suggests that the "p and q structure" can be discarded, leaving only operations on the "a and b" part. So B is the semigroup of pairs of natural numbers (including zero), with operation (a, b) (c, d) = (a + c − min{b, c}, d + b − min{b, c}). This is sufficient to define B so that it is the same object as in the original construction. Just as p and q generated B originally, with the empty string as the monoid identity, this new construction of B has generators (1, 0) and (0, 1), with identity (0, 0). From functions It can be shown that any semigroup S generated by elements e, a, and b satisfying the statements below is isomorphic to the bicyclic semigroup. a e = e a = a b e = e b = b a b = e b a ≠ e It is not entirely obvious that this should be the case—perhaps the hardest task is understanding that S must be infinite. To see this, suppose that a (say) does not have infinite order, so ak + h = ah for some h and k. Then ak = e, and b = e b = ak b = ak - 1 e = ak - 1, so b a = ak = e, which is not allowed—so there are infinitely many distinct powers of a. The full proof is given in Clifford and Preston's book. Note that the two definitions given above both satisfy these properties. A third way of deriving B uses two appropriately-chosen functions to yield the bicyclic semigroup as a monoid of transformations of the natural numbers. Let α, β, and ι be element
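The ordered-pair construction is immediate to implement. In the Python sketch below a pair (a, b) stands for q^a p^b as in the text; the small associativity spot-check is only illustrative.

```python
def bmul(u, v):
    """Product in the bicyclic semigroup B on pairs of natural numbers,
    (a, b)(c, d) = (a + c - min(b, c), d + b - min(b, c))."""
    (a, b), (c, d) = u, v
    m = min(b, c)
    return (a + c - m, d + b - m)

identity = (0, 0)          # the empty string
q, p = (1, 0), (0, 1)      # the two generators

print(bmul(p, q))          # (0, 0): the defining relation p q = 1
print(bmul(q, p))          # (1, 1): q p is not the identity

elems = [(a, b) for a in range(4) for b in range(4)]
assert all(bmul(bmul(x, y), z) == bmul(x, bmul(y, z))
           for x in elems for y in elems for z in elems)
```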
https://en.wikipedia.org/wiki/Martin-Andersen-Nex%C3%B6-Gymnasium%20Dresden
The Martin-Andersen-Nexö-Gymnasium Dresden (MANOS) is a selective high school (gymnasium) in Dresden, Germany, with a special focus on mathematics and sciences. It was formerly the school for radio mechanics in the GDR. It is named after the Danish writer Martin Andersen Nexø. The current head of school is Mr. Holm Wieczoreck. History 1903 Creation of classes for secondary education at the Bürgerschule Blasewitz 1904 Planning for own school building 1907 Start of the construction 1908 Inauguration of the new school building as Realgymnasium Blasewitz on April 30, 1908 1938 Renaming to Schillerschule Blasewitz 1945 Anglo-American air raid on Dresden on February 13, 1945, damages the roof of the school building 1945 Resumption of classes on October 1, 1945 as the Oberschule Dresden-Ost, housed in various buildings, with separate classes for boys and girls 1947 First mixed classes on September 1, 1947 1949 Complete co-education 1954 Renaming to Martin-Andersen-Nexö-Oberschule 1963 School for radio mechanics 1964 School for electronics industry 1969 Inauguration of the Martin Andersen Nexö Memorial on June 26, 1969 1986 Selective school for mathematics and sciences 1992 Foundation of Gymnasium Dresden-Blasewitz in Seidnitz (in the former 94th Polytechnic Secondary School) with a branch campus for advanced mathematics and science courses in the Kretschmerstraße 1998 The building in the Kretschmerstraße becomes the main campus of the Gymnasium Dresden-Blasewitz 2001 The school officially regains the name Martin-Andersen-Nexö-Gymnasium Dresden 2008 The school moves to the facilities of the former Joseph-Haydn-Gymnasium in Dresden-Striesen. Profile The school focuses early on a more advanced education in mathematics and sciences. An entrance examination is required for admission to the school. With more classes than normal in mathematics, biology, physics, chemistry and informatics, the students from grade 7 on learn more about STEM than students in a regular high school. The school also has a contract with the Dresden University of Technology under which students in grades 7 and 8 spend a week there doing a project. In addition, 11th-grade students in the intensive program are required to carry out a scientific study. Students also take three subjects, rather than the normal two, at a higher level. Students regularly take part in competitions like the International Mathematical Olympiad, the International Physics Olympiad, the International Olympiad in Informatics, and the International Chemistry Olympiad. The school's alumni includes the research mathematician Lisa Sauermann, who received acclaim for her results at the International Mathematical Olympiad. References External links Official Homepage Site concerning Martin Andersen Nexø made by graduates of the school Gymnasiums in Germany Education in Dresden Educational institutions established in 1903 Schools in Saxony Schools in Germany 1903 establishments in Germa
https://en.wikipedia.org/wiki/Denjoy%20integral
The Denjoy integral in mathematics can refer to two closely related integrals connected to the work of Arnaud Denjoy: the narrow Denjoy integral (or just Denjoy integral), also known as the Henstock–Kurzweil integral, and the (more general) wide Denjoy integral, also known as the Khinchin integral.
https://en.wikipedia.org/wiki/Abstract%20polytope
In mathematics, an abstract polytope is an algebraic partially ordered set which captures the dyadic property of a traditional polytope without specifying purely geometric properties such as points and lines. A geometric polytope is said to be a realization of an abstract polytope in some real N-dimensional space, typically Euclidean. This abstract definition allows more general combinatorial structures than traditional definitions of a polytope, thus allowing new objects that have no counterpart in traditional theory. Introductory concepts Traditional versus abstract polytopes In Euclidean geometry, two shapes that are not similar can nonetheless share a common structure. For example, a square and a trapezoid both comprise an alternating chain of four vertices and four sides, which makes them quadrilaterals. They are said to be isomorphic or “structure preserving”. This common structure may be represented in an underlying abstract polytope, a purely algebraic partially ordered set which captures the pattern of connections (or incidences) between the various structural elements. The measurable properties of traditional polytopes such as angles, edge-lengths, skewness, straightness and convexity have no meaning for an abstract polytope. What is true for traditional polytopes (also called classical or geometric polytopes) may not be so for abstract ones, and vice versa. For example, a traditional polytope is regular if all its facets and vertex figures are regular, but this is not necessarily so for an abstract polytope. Realizations A traditional polytope is said to be a realization of the associated abstract polytope. A realization is a mapping or injection of the abstract object into a real space, typically Euclidean, to construct a traditional polytope as a real geometric figure. The six quadrilaterals shown are all distinct realizations of the abstract quadrilateral, each with different geometric properties. Some of them do not conform to traditional definitions of a quadrilateral and are said to be unfaithful realizations. A conventional polytope is a faithful realization. Faces, ranks and ordering In an abstract polytope, each structural element (vertex, edge, cell, etc.) is associated with a corresponding member of the set. The term face is used to refer to any such element e.g. a vertex (0-face), edge (1-face) or a general k-face, and not just a polygonal 2-face. The faces are ranked according to their associated real dimension: vertices have rank 0, edges rank 1 and so on. Incident faces of different ranks, for example, a vertex F of an edge G, are ordered by the relation F < G. F is said to be a subface of G. F, G are said to be incident if either F = G or F < G or G < F. This usage of "incidence" also occurs in finite geometry, although it differs from traditional geometry and some other areas of mathematics. For example, in the square ABCD, edges AB and BC are not abstractly incident (although they are both incident with
https://en.wikipedia.org/wiki/Least%20fixed%20point
In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique. Examples With the usual order on the real numbers, the least fixed point of the real function f(x) = x2 is x = 0 (since the only other fixed point is 1 and 0 < 1). In contrast, f(x) = x + 1 has no fixed points at all, so has no least one, and f(x) = x has infinitely many fixed points, but has no least one. Let be a directed graph and be a vertex. The set of vertices accessible from can be defined as the least fixed-point of the function , defined as The set of vertices which are co-accessible from is defined by a similar least fix-point. The strongly connected component of is the intersection of those two least fixed-points. Let be a context-free grammar. The set of symbols which produces the empty string can be obtained as the least fixed-point of the function , defined as , where denotes the power set of . Applications Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not. Denotational semantics In computer science, the denotational semantics approach uses least fixed points to obtain from a given program text a corresponding mathematical function, called its semantics. To this end, an artificial mathematical object, , is introduced, denoting the exceptional value "undefined". Given e.g. the program datatype int, its mathematical counterpart is defined as it is made a partially ordered set by defining for each and letting any two different members be uncomparable w.r.t. , see picture. The semantics of a program definition int f(int n){...} is some mathematical function If the program definition f does not terminate for some input n, this can be expressed mathematically as The set of all mathematical functions is made partially ordered by defining if, for each the relation holds, that is, if is less defined or equal to For example, the semantics of the expression x+x/x is less defined than that of x+1, since the former, but not the latter, maps to and they agree otherwise. Given some program text f, its mathematical counterpart is obtained as least fixed point of some mapping from functions to functions that can be obtained by "translating" f. For example, the C definition int fact(int n) { if (n == 0) return 1; else return n * fact(n-1); } is translated to a mapping defined as The mapping is defined in a non-recursive way, although fact was defined recursively. Under certain restrictions (see Kleene fixed-point theorem), which are met in the example, necessarily has a least fixed point, , that is for all . It is possible to show that
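Because the defining function is monotone on the powerset lattice, the least fixed point in the accessibility example can be computed by Kleene iteration starting from the empty set. The Python sketch below does exactly that; the adjacency-list encoding and the concrete example graph are assumptions made for the sketch.

```python
def accessible(graph, v):
    """Least fixed point of f(X) = {v} ∪ {y : (x, y) is an edge with x in X},
    computed by Kleene iteration from the empty set (the bottom element)."""
    X = set()
    while True:
        new = {v} | {y for x in X for y in graph.get(x, ())}
        if new == X:       # f(X) = X: the least fixed point has been reached
            return X
        X = new

graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(accessible(graph, "a"))   # {'a', 'b', 'c'}; 'd' is not accessible from 'a'
```

The same iteration pattern computes, for instance, the set of symbols of a context-free grammar that produce the empty string.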
https://en.wikipedia.org/wiki/Richard%20Schwartz%20%28mathematician%29
Richard Evan Schwartz (born August 11, 1966) is an American mathematician notable for his contributions to geometric group theory and to an area of mathematics known as billiards. Geometric group theory is a relatively new area of mathematics beginning around the late 1980s which explores finitely generated groups, and seeks connections between their algebraic properties and the geometric spaces on which these groups act. He has worked on what mathematicians refer to as billiards, which are dynamical systems based on a convex shape in a plane. He has explored geometric iterations involving polygons, and he has been credited for developing the mathematical concept known as the pentagram map. In addition, he is a bestselling author of a mathematics picture book for young children. His published work usually appears under the name Richard Evan Schwartz. In 2018 he is a professor of mathematics at Brown University. Career Schwartz was born in Los Angeles on August 11, 1966. He attended John F. Kennedy High School in Los Angeles from 1981 to 1984, then earned a B. S. in mathematics from U.C.L.A. in 1987, and then a Ph. D. in mathematics from Princeton University in 1991 under the supervision of William Thurston. He taught at the University of Maryland. He is currently the Chancellor's Professor of Mathematics at Brown University. He lives with his wife and two daughters in Barrington, Rhode Island. Schwartz is credited by other mathematicians for introducing the concept of the pentagram map. According to Schwartz's conception, a convex polygon would be inscribed with diagonal lines inside it, by drawing a line from one point to the next point—that is, by skipping over the immediate point on the polygon. The intersection points of the diagonals would form an inner polygon, and the process could be repeated. Schwartz observed these geometric patterns, partly by experimenting with computers. He has collaborated with mathematicians Valentin Ovsienko and Sergei Tabachnikov to show that the pentagram map is "completely integrable." In his spare time he draws comic books, writes computer programs, listens to music and exercises. He admired the late Russian mathematician Vladimir Arnold and dedicated a paper to him. He played an April Fool's joke on fellow mathematics professors at Brown University by sending an email suggesting that students could be admitted randomly, along with references to bogus studies which purportedly suggested that there were benefits to having a certain population of the student body selected at random; the story was reported in the Brown Daily Herald. Colleagues such as mathematician Jeffrey Brock describe Schwartz as having a "very wry sense of humor." In 2003, Schwartz was teaching one of his young daughters about number basics and developed a poster of the first 100 numbers using colorful monsters. This project gelled into a mathematics book for young children published in 2010, entitled You Can Count on Monsters, which b
https://en.wikipedia.org/wiki/Block%20LU%20decomposition
In linear algebra, a Block LU decomposition is a matrix decomposition of a block matrix into a lower block triangular matrix L and an upper block triangular matrix U. This decomposition is used in numerical analysis to reduce the complexity of the block matrix formula.
Block LDU decomposition
Consider a block matrix
M = [[A, B], [C, D]] = [[I, 0], [C A⁻¹, I]] [[A, 0], [0, D − C A⁻¹ B]] [[I, A⁻¹ B], [0, I]],
where the matrix A is assumed to be non-singular, I is an identity matrix with proper dimension, and 0 is a matrix whose elements are all zero.
Block Cholesky decomposition
We can also rewrite the above equation using the half matrices: where the Schur complement of A in the block matrix M is defined by Q = D − C A⁻¹ B and the half matrices can be calculated by means of Cholesky decomposition or LDL decomposition. The half matrices satisfy that Thus, we have where The matrix M can be decomposed in an algebraic manner into
See also
Matrix decomposition
Matrix decompositions
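A quick numerical check of the block LDU factorisation is sketched below with NumPy; the block sizes, the random test matrices and the diagonal shift that keeps A non-singular are arbitrary choices made only for the illustration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 2 * np.eye(2)    # diagonal shift keeps A non-singular here
B = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 2))
D = rng.standard_normal((3, 3))
M = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
Q = D - C @ Ainv @ B                               # Schur complement of A in M

L = np.block([[np.eye(2), np.zeros((2, 3))], [C @ Ainv, np.eye(3)]])
Dblk = np.block([[A, np.zeros((2, 3))], [np.zeros((3, 2)), Q]])
U = np.block([[np.eye(2), Ainv @ B], [np.zeros((3, 2)), np.eye(3)]])

print(np.allclose(L @ Dblk @ U, M))                # True: M = L * D * U blockwise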
https://en.wikipedia.org/wiki/CPV
CPV may refer to: In mathematics, science and technology Viruses Canine parvovirus Cricket paralysis virus Cryptosporidium parvum virus, a dsRNA virus of the single-celled causative agent of Cryptosporidiosis Other uses in mathematics, science and technology Cauchy principal value, a method for assigning values to certain improper integrals in mathematics Composite Pressure Vessel, often gas cylinders made of composite materials Concentrator photovoltaics, a solar power technology Continued process verification, ongoing monitoring of all aspects of the production cycle CP-violation, a phenomenon in physics Transport Air Corporate (ICAO code CPV), an Italian airline Compassvale LRT station (LRT station abbreviation CPV), a Light Rail Transit station in Sengkang, Singapore Other uses CPV-TV, a defunct UK media company Cape Verde, ISO 3166-1 alpha-3 country code Child-to-parent violence, parental abuse by children Common Procurement Vocabulary, a European Union system of codes used by member states in public procurement procedures Communist Party of Vietnam Cost per view, in video online advertising See also
https://en.wikipedia.org/wiki/Paul%20Ernest
Paul Ernest is a contributor to the social constructivist philosophy of mathematics. Life Paul Ernest is currently emeritus professor of the philosophy of mathematics education at Exeter University, UK. He is best known for his work on philosophical aspects of mathematics education and his contributions to developing a social constructivist philosophy of mathematics. He is currently working on questions about ethics in mathematics. References Ernest, Paul; Social Constructivism as a Philosophy of Mathematics; Albany, New York: State University of New York Press, (1998) Ernest, Paul; The Philosophy of Mathematics Education; London: RoutledgeFalmer, (1991) External links Paul Ernest's page at Philosophy of Mathematics Education Journal, the journal that he edits at School of Education, University of Exeter-publications, CV etc. Paul Ernest's page at Amazon.com Living people Scientists from New York City English mathematicians 20th-century American mathematicians 21st-century American mathematicians Philosophers of mathematics Mathematicians from New York (state) Year of birth missing (living people)
https://en.wikipedia.org/wiki/Discrete-time%20Fourier%20transform
In mathematics, the discrete-time Fourier transform (DTFT), also called the finite Fourier transform, is a form of Fourier analysis that is applicable to a sequence of values. The DTFT is often used to analyze samples of a continuous function. The term discrete-time refers to the fact that the transform operates on discrete data, often samples whose interval has units of time. From uniformly spaced samples it produces a function of frequency that is a periodic summation of the continuous Fourier transform of the original continuous function. Under certain theoretical conditions, described by the sampling theorem, the original continuous function can be recovered perfectly from the DTFT and thus from the original discrete samples. The DTFT itself is a continuous function of frequency, but discrete samples of it can be readily calculated via the discrete Fourier transform (DFT) (see ), which is by far the most common method of modern Fourier analysis. Both transforms are invertible. The inverse DTFT is the original sampled data sequence. The inverse DFT is a periodic summation of the original sequence. The fast Fourier transform (FFT) is an algorithm for computing one cycle of the DFT, and its inverse produces one cycle of the inverse DFT. Definition The discrete-time Fourier transform of a discrete sequence of real or complex numbers , for all integers , is a Trigonometric series, which produces a periodic function of a frequency variable. When the frequency variable, ω, has normalized units of radians/sample, the periodicity is , and the DTFT series is: The discrete-time Fourier transform is analogous to a Fourier series, except instead of starting with a periodic function of time and producing discrete sequence over frequency, it starts with a discrete sequence in time and produces a periodic function in frequency. The utility of this frequency domain function is rooted in the Poisson summation formula. Let be the Fourier transform of any function, , whose samples at some interval (seconds) are equal (or proportional) to the sequence, i.e. .  Then the periodic function represented by the Fourier series is a periodic summation of in terms of frequency in hertz (cycles/sec): The integer has units of cycles/sample, and is the sample-rate, (samples/sec). So comprises exact copies of that are shifted by multiples of hertz and combined by addition. For sufficiently large the term can be observed in the region with little or no distortion (aliasing) from the other terms. In Fig.1, the extremities of the distribution in the upper left corner are masked by aliasing in the periodic summation (lower left). We also note that is the Fourier transform of . Therefore, an alternative definition of DTFT is: The modulated Dirac comb function is a mathematical abstraction sometimes referred to as impulse sampling. Inverse transform An operation that recovers the discrete data sequence from the DTFT function is called an inver
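As a small numerical illustration of the relationship between the DTFT and the DFT described above, the sketch below evaluates the DTFT sum X(ω) = Σ x[n]·e^(−iωn) of a finite sequence at the DFT bin frequencies ω = 2πk/N and compares it with NumPy's FFT; the helper name dtft and the sample sequence are assumptions for the example.

import numpy as np

def dtft(x, omegas):
    # X(omega) = sum_n x[n] * exp(-1j * omega * n) for a finite sequence x[0..N-1]
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
omegas = 2 * np.pi * np.arange(N) / N                 # DFT bin frequencies
print(np.allclose(dtft(x, omegas), np.fft.fft(x)))    # True: the DFT samples the DTFT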
https://en.wikipedia.org/wiki/Algebra%20of%20random%20variables
The algebra of random variables in statistics, provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as dealing with operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations. In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, the changes occurring on the probability distribution of a random variable obtained after performing algebraic operations are not straightforward. Therefore, the behavior of the different operators of the probability distribution, such as expected values, variances, covariances, and moments, may be different from that observed for the random variable using symbolic algebra. It is possible to identify some key rules for each of those operators, resulting in different types of algebra for random variables, apart from the elementary symbolic algebra: Expectation algebra, Variance algebra, Covariance algebra, Moment algebra, etc. Elementary symbolic algebra of random variables Considering two random variables and , the following algebraic operations are possible: Addition: Subtraction: Multiplication: Division: Exponentiation: In all cases, the variable resulting from each operation is also a random variable. All commutative and associative properties of conventional algebraic operations are also valid for random variables. If any of the random variables is replaced by a deterministic variable or by a constant value, all the previous properties remain valid. Expectation algebra for random variables The expected value of the random variable resulting from an algebraic operation between two random variables can be calculated using the following set of rules: Addition: Subtraction: Multiplication: . Particularly, if and are independent from each other, then: . Division: . Particularly, if and are independent from each other, then: . Exponentiation: If any of the random variables is replaced by a deterministic variable or by a constant value (), the previous properties remain valid considering that and, therefore, . If is defined as a general non-linear algebraic function of a random variable , then: Some examples of this property include: The exact value of the expectation of the non-linear function will depend on the particular probability distribution of the random variable . Variance algebra for random variables The variance of the random variable resulting from an algebraic operation between random variables can be calculated using the following set of rules: Addition: . Particularly, if and are independent from each other, then: . Subtraction: . Particularly, if and are ind
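The expectation and variance rules listed above can be checked empirically. Here is a minimal Monte Carlo sketch (the chosen distributions, sample size and seed are arbitrary) verifying linearity of expectation and the independence cases for products and sums:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.normal(2.0, 1.0, n)        # X and Y are drawn independently
Y = rng.exponential(3.0, n)

print(np.mean(X + Y), np.mean(X) + np.mean(Y))    # E[X + Y] = E[X] + E[Y]
print(np.mean(X * Y), np.mean(X) * np.mean(Y))    # E[XY] = E[X] E[Y] for independent X, Y
print(np.var(X + Y), np.var(X) + np.var(Y))       # Var(X + Y) = Var(X) + Var(Y) for independent X, Y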
https://en.wikipedia.org/wiki/Misuse%20of%20statistics
Statistics, when used in a misleading fashion, can trick the casual observer into believing something other than what the data shows. That is, a misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy. The false statistics trap can be quite damaging for the quest for knowledge. For example, in medical science, correcting a falsehood may take decades and cost lives. Misuses can be easy to fall into. Professional scientists, even mathematicians and professional statisticians, can be fooled by even some simple methods, even if they are careful to check everything. Scientists have been known to fool themselves with statistics due to lack of knowledge of probability theory and lack of standardization of their tests. Definition, limitations and context One usable definition is: "Misuse of Statistics: Using numbers in such a manner that – either by intent or through ignorance or carelessness – the conclusions are unjustified or incorrect." The "numbers" include misleading graphics discussed in other sources. The term is not commonly encountered in statistics texts and there is no single authoritative definition. It is a generalization of lying with statistics which was richly described by examples from statisticians 60 years ago. The definition confronts some problems (some are addressed by the source): Statistics usually produces probabilities; conclusions are provisional The provisional conclusions have errors and error rates. Commonly 5% of the provisional conclusions of significance testing are wrong Statisticians are not in complete agreement on ideal methods Statistical methods are based on assumptions which are seldom fully met Data gathering is usually limited by ethical, practical and financial constraints. How to Lie with Statistics acknowledges that statistics can legitimately take many forms. Whether the statistics show that a product is "light and economical" or "flimsy and cheap" can be debated whatever the numbers. Some object to the substitution of statistical correctness for moral leadership (for example) as an objective. Assigning blame for misuses is often difficult because scientists, pollsters, statisticians and reporters are often employees or consultants. An insidious misuse of statistics is completed by the listener, observer, audience, or juror. The supplier provides the "statistics" as numbers or graphics (or before/after photographs), allowing the consumer to draw conclusions that may be unjustified or incorrect. the poor state of public statistical literacy and the non-statistical nature of human intuition make it possible to mislead without explicitly producing faulty conclusion. The definition is weak on the responsibility of the consumer of statistics. A historian listed ove
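To illustrate the point above that roughly 5% of significance-test conclusions are expected to be false positives at the conventional 0.05 level, here is a small simulation sketch; the use of SciPy's independent-samples t-test, the sample sizes and the number of trials are all arbitrary choices made for the illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
trials, n = 10_000, 30
false_positives = 0
for _ in range(trials):
    a = rng.normal(size=n)    # both samples come from the same distribution,
    b = rng.normal(size=n)    # so any "significant" difference is spurious
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(false_positives / trials)   # close to 0.05, the nominal significance level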
https://en.wikipedia.org/wiki/Swadesh%20list
The Swadesh list () is a classic compilation of tentatively universal concepts for the purposes of lexicostatistics. Translations of the Swadesh list into a set of languages allow researchers to quantify the interrelatedness of those languages. The Swadesh list is named after linguist Morris Swadesh. It is used in lexicostatistics (the quantitative assessment of the genealogical relatedness of languages) and glottochronology (the dating of language divergence). Because there are several different lists, some authors also refer to "Swadesh lists". Versions and authors Morris Swadesh created several versions of his list. He started with a list of 215 meanings (falsely introduced as a list of 225 meanings in the paper due to a spelling error), which he reduced to 165 words for the Salish-Spokane-Kalispel language. In 1952, he published a list of 215 meanings, of which he suggested the removal of 16 for being unclear or not universal, with one added to arrive at 200 words. In 1955, he wrote, "The only solution appears to be a drastic weeding out of the list, in the realization that quality is at least as important as quantity. Even the new list has defects, but they are relatively mild and few in number." After minor corrections, the final 100-word list was published posthumously in 1971 and 1972. Other versions of lexicostatistical test lists were published e.g. by Robert Lees (1953), John A. Rea (1958:145f), Dell Hymes (1960:6), E. Cross (1964 with 241 concepts), W. J. Samarin (1967:220f), D. Wilson (1969 with 57 meanings), Lionel Bender (1969), R. L. Oswald (1971), Winfred P. Lehmann (1984:35f), D. Ringe (1992, passim, different versions), Sergei Starostin (1984, passim, different versions), William S-Y. Wang (1994), M. Lohr (2000, 128 meanings in 18 languages). B. Kessler (2002), and many others. The Concepticon, a project hosted at the Cross-Linguistic Linked Data (CLLD) project, collects various concept lists (including classical Swadesh lists) across different linguistic areas and times, currently listing 240 different concept lists. Frequently used and widely available on the internet, is the version by Isidore Dyen (1992, 200 meanings of 95 language variants). Since 2010, a team around Michael Dunn has tried to update and enhance that list. Principle In origin, the words in the Swadesh lists were chosen for their universal, culturally independent availability in as many languages as possible, regardless of their "stability". Nevertheless, the stability of the resulting list of "universal" vocabulary under language change and the potential use of this fact for purposes of glottochronology have been analyzed by numerous authors, including Marisa Lohr 1999, 2000. The Swadesh list was put together by Morris Swadesh on the basis of his intuition. Similar more recent lists, such as the Dolgopolsky list (1964) or the Leipzig–Jakarta list (2009), are based on systematic data from many different languages, but they are not yet as widely know
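As a toy illustration of how a Swadesh-style cognate count feeds into glottochronology, the sketch below applies the classical divergence-time formula t = ln(c) / (2 ln(r)); the retention rate r ≈ 0.86 per millennium for the 100-item list and the 70% example figure are standard values from the glottochronology literature, not figures taken from this article.

import math

def divergence_time(shared_fraction, retention=0.86):
    # Classical glottochronology estimate, in millennia:
    # t = ln(c) / (2 ln(r)), where c is the proportion of shared cognates
    # on the list and r is the assumed retention rate per 1000 years.
    return math.log(shared_fraction) / (2 * math.log(retention))

# Two languages sharing 70% cognates on the 100-item list:
print(round(divergence_time(0.70), 2))   # about 1.18 thousand years of separation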
https://en.wikipedia.org/wiki/Sturm%27s%20theorem
In mathematics, the Sturm sequence of a univariate polynomial is a sequence of polynomials associated with and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of . Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and arbitrary-precision root-finding algorithm for univariate polynomials. For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers. The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829. The theorem The Sturm chain or Sturm sequence of a univariate polynomial with real coefficients is the sequence of polynomials such that for , where is the derivative of , and is the remainder of the Euclidean division of by The length of the Sturm sequence is at most the degree of . The number of sign variations at of the Sturm sequence of is the number of sign changes–ignoring zeros—in the sequence of real numbers This number of sign variations is denoted here . Sturm's theorem states that, if is a square-free polynomial, the number of distinct real roots of in the half-open interval is (here, and are real numbers such that ). The theorem extends to unbounded intervals by defining the sign at of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree. In the case of a non-square-free polynomial, if neither nor is a multiple root of , then is the number of distinct real roots of . The proof of the theorem is as follows: when the value of increases from to , it may pass through a zero of some (); when this occurs, the number of sign variations of does not change. When passes through a root of the number of sign variations of decreases from 1 to 0. These are the only values of where some sign may change. Example Suppose we wish to find the number of roots
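The construction of the Sturm sequence and the sign-variation count described above can be sketched in a few lines of Python using NumPy's polynomial helpers; coefficient lists are given highest degree first, and the example polynomial is an arbitrary illustrative choice.

import numpy as np

def sturm_sequence(p):
    # p_0 = p, p_1 = p', then p_(i+1) = -rem(p_(i-1), p_i) until the remainder vanishes.
    seq = [np.array(p, dtype=float)]
    seq.append(np.polyder(seq[0]))
    while True:
        rem = np.trim_zeros(-np.polydiv(seq[-2], seq[-1])[1], 'f')
        if rem.size == 0:
            return seq
        seq.append(rem)

def sign_variations(seq, x):
    signs = [np.sign(np.polyval(q, x)) for q in seq]
    signs = [s for s in signs if s != 0]          # zeros are ignored
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def count_real_roots(p, a, b):
    seq = sturm_sequence(p)
    return sign_variations(seq, a) - sign_variations(seq, b)

# x^4 - 5x^2 + 4 = (x - 1)(x + 1)(x - 2)(x + 2) has four real roots in (-3, 3]
print(count_real_roots([1, 0, -5, 0, 4], -3, 3))   # 4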
https://en.wikipedia.org/wiki/Tame%20group
In mathematical group theory, a tame group is a certain kind of group defined in model theory. Formally, we define a bad field as a structure of the form (K, T), where K is an algebraically closed field and T is an infinite, proper, distinguished subgroup of K, such that (K, T) is of finite Morley rank in its full language. A group G is then called a tame group if no bad field is interpretable in G.
References
A. V. Borovik, Tame groups of odd and even type, pp. 341–366, in Algebraic Groups and their Representations, R. W. Carter and J. Saxl, eds. (NATO ASI Series C: Mathematical and Physical Sciences, vol. 517), Kluwer Academic Publishers, Dordrecht, 1998.
Model theory
Infinite group theory
Properties of groups
https://en.wikipedia.org/wiki/Sheaf
Sheaf may refer to: Sheaf (agriculture), a bundle of harvested cereal stems Sheaf (mathematics), a mathematical tool Sheaf toss, a Scottish sport River Sheaf, a tributary of River Don in England The Sheaf, a student-run newspaper serving the University of Saskatchewan Aluma, a settlement in Israel whose name translates as Sheaf See also Sceafa, a king of English legend Sheath (disambiguation) Sheave, a wheel or roller with a groove along its edge for holding a belt, rope or cable
https://en.wikipedia.org/wiki/Hull%E2%80%93White%20model
In financial mathematics, the Hull–White model is a model of future interest rates. In its most generic formulation, it belongs to the class of no-arbitrage models that are able to fit today's term structure of interest rates. It is relatively straightforward to translate the mathematical description of the evolution of future interest rates onto a tree or lattice and so interest rate derivatives such as bermudan swaptions can be valued in the model. The first Hull–White model was described by John C. Hull and Alan White in 1990. The model is still popular in the market today. The model One-factor model The model is a short-rate model. In general, it has the following dynamics: There is a degree of ambiguity among practitioners about exactly which parameters in the model are time-dependent or what name to apply to the model in each case. The most commonly accepted naming convention is the following: has t (time) dependence — the Hull–White model. and are both time-dependent — the extended Vasicek model. Two-factor model The two-factor Hull–White model contains an additional disturbance term whose mean reverts to zero, and is of the form: where is a deterministic function, typically the identity function (extension of the one-factor version, analytically tractable, and with potentially negative rates), the natural logarithm (extension of Black–Karasinksi, not analytically tractable, and with positive interest rates), or combinations (proportional to the natural logarithm on small rates and proportional to the identity function on large rates); and has an initial value of 0 and follows the process: Analysis of the one-factor model For the rest of this article we assume only has t-dependence. Neglecting the stochastic term for a moment, notice that for the change in r is negative if r is currently "large" (greater than and positive if the current value is small. That is, the stochastic process is a mean-reverting Ornstein–Uhlenbeck process. θ is calculated from the initial yield curve describing the current term structure of interest rates. Typically α is left as a user input (for example it may be estimated from historical data). σ is determined via calibration to a set of caplets and swaptions readily tradeable in the market. When , , and are constant, Itô's lemma can be used to prove that which has distribution where is the normal distribution with mean and variance . When is time-dependent, which has distribution Bond pricing using the Hull–White model It turns out that the time-S value of the T-maturity discount bond has distribution (note the affine term structure here!) where Note that their terminal distribution for is distributed log-normally. Derivative pricing By selecting as numeraire the time-S bond (which corresponds to switching to the S-forward measure), we have from the fundamental theorem of arbitrage-free pricing, the value at time t of a derivative which has payoff at time S. Here, is the expec
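For intuition about the mean-reverting dynamics discussed above, here is a small Euler–Maruyama simulation sketch of a one-factor short rate of the form dr = (θ(t) − α·r) dt + σ dW; the parameter values, the flat θ, and the function name are illustrative assumptions, not calibrated market inputs.

import numpy as np

def simulate_short_rate(r0, theta, alpha, sigma, T, steps, n_paths, seed=0):
    # Euler-Maruyama discretisation of dr = (theta(t) - alpha*r) dt + sigma dW
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full(n_paths, r0, dtype=float)
    for i in range(steps):
        t = i * dt
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        r = r + (theta(t) - alpha * r) * dt + sigma * dw
    return r

# Flat theta chosen so that the long-run mean theta/alpha is 3%.
r_T = simulate_short_rate(r0=0.01, theta=lambda t: 0.10 * 0.03, alpha=0.10,
                          sigma=0.01, T=5.0, steps=500, n_paths=100_000)
print(r_T.mean(), r_T.std())   # the mean has drifted from 1% towards the 3% long-run level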
https://en.wikipedia.org/wiki/Principal%20part
In mathematics, the principal part has several independent meanings, but usually refers to the negative-power portion of the Laurent series of a function.
Laurent series definition
The principal part at z = a of a function f(z) = Σ_{k=−∞}^{∞} a_k (z − a)^k is the portion of the Laurent series consisting of terms with negative degree. That is, Σ_{k=1}^{∞} a_{−k} (z − a)^{−k} is the principal part of f at a. If the Laurent series has an inner radius of convergence of 0, then f has an essential singularity at a if and only if the principal part is an infinite sum. If the inner radius of convergence is not 0, then f may be regular at a despite the Laurent series having an infinite principal part.
Other definitions
Calculus
Consider the difference between the function differential and the actual increment: Δy = f′(x) Δx + ε Δx = dy + ε Δx, where ε → 0 as Δx → 0. The differential dy is sometimes called the principal (linear) part of the function increment Δy.
Distribution theory
The term principal part is also used for certain kinds of distributions having a singular support at a single point.
See also
Mittag-Leffler's theorem
Cauchy principal value
References
External links
Cauchy Principal Part at PlanetMath
Complex analysis
Generalized functions
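A short SymPy sketch of the Laurent-series definition above: expand a function about its pole and keep only the negative-power terms. The example function exp(z)/z³ and the truncation order are arbitrary choices for the illustration.

import sympy as sp

z = sp.symbols('z')
f = sp.exp(z) / z**3                           # pole of order 3 at z = 0

expansion = sp.series(f, z, 0, 2).removeO()    # finitely many Laurent terms about z = 0
principal = sp.Add(*[term for term in sp.Add.make_args(expansion)
                     if term.as_coeff_exponent(z)[1] < 0])
print(principal)                               # the principal part: z**(-3) + z**(-2) + 1/(2*z)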
https://en.wikipedia.org/wiki/Urn%20%28disambiguation%29
An urn is a vase-like container. Urn may refer to: Urn problem of probability theory Urn (album), an album by Ne Obliviscaris The acronym URN may refer to: Uniform Resource Name, an Internet identifier Unique Reference Number, an identifier of UK schools University Radio Nottingham, England See also