https://en.wikipedia.org/wiki/Carl%20B.%20Allendoerfer
Carl Barnett Allendoerfer (April 4, 1911 – September 29, 1974) was an American mathematician in the mid-twentieth century, known for his work in topology and mathematics education. Background Allendoerfer was born in Kansas City, the son of a prominent banker. He graduated from Haverford College in 1932 and attended New College, Oxford as a Rhodes Scholar, 1932–1934. He received his Ph.D. in mathematics from Princeton University in 1937. Research & Teaching Allendoerfer taught at Haverford College in the mid-1940s where he became known for work with André Weil on the Gauss–Bonnet theorem, an important theorem in differential geometry. He continued his studies of differential geometry at the Institute for Advanced Study (1948–1949). In 1951, he became professor and later chair of the Mathematics Department at the University of Washington, where he is known for establishing the Summer Mathematics Institute for High School Teachers. Allendoerfer was president of the Mathematical Association of America (1959–60) and editor of its monthly journal. In 1966 he won a Lester R. Ford Award. In 1972, he received the MAA's Award for Distinguished Service to Mathematics. After his death, the MAA established the Carl B. Allendoerfer Award, which is given each year for "expository excellence published in Mathematics Magazine." Allendoerfer is also known as a proponent of the New Math movement in the 1950s and 1960s, which sought to improve American primary and secondary mathematics education by teaching abstract concepts like set theory early in the curriculum. Allendoerfer was a member of the Commission on Mathematics of the College Entrance Examination Board, whose 1959 report Program for College Preparatory Mathematics outlined many concepts of the New Math. The commission and report were criticized by some for emphasizing pure mathematics in place of more traditional and practical considerations like arithmetic. Allendoerfer was the author, with Cletus Oakley (1899–1990), of several prominent mathematics textbooks used in the 1950s and 1960s. He was also the author of a series of math films. Selected publications Articles Books Allendoerfer, Carl B., & Oakley, Cletus O. (1955). Principles of Mathematics. McGraw-Hill. Allendoerfer, Carl B., & Oakley, Cletus O. (1959). Fundamentals of Freshman Mathematics. McGraw-Hill. Allendoerfer, Carl B. (1965). Mathematics for Parents. Macmillan. Allendoerfer, Carl B., & Oakley, Cletus O. (1967). Fundamentals of College Algebra. McGraw-Hill. Allendoerfer, Carl B. (1971). Principles of Arithmetic and Geometry for Elementary School Teachers. Macmillan. Allendoerfer, Carl B. (1974). Calculus of Several Variables and Differentiable Manifolds. Macmillan. Allendoerfer, Carl B., Oakley, Cletus O., & Kerr, Donald R. (1977). Elementary Functions. McGraw-Hill. Films Allendoerfer produced the following films for Ward's Natural Science in Rochester, New York: Cycloidal Curves or Tales from Wanklenbe
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham%20problem
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every r, and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers such that the sum of the reciprocals of its members equals one, that is, ∑_{n ∈ S} 1/n = 1. In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large r, the largest member of S could be bounded by b^r for some constant b independent of r. It was known that, for this to be true, b must be at least Euler's number e. Ernie Croot proved the conjecture as part of his Ph.D. thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for b is very large: it is at most e^{167000}. Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets C of smooth numbers in intervals of the form [X, X^{1+δ}], where C contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least 6r; therefore, if the integers are r-colored there must be a monochromatic subset C satisfying the conditions of Croot's theorem. A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford. See also Conjectures by Erdős References External links Ernie Croot's Webpage Combinatorics Conjectures that have been proved Theorems in number theory Egyptian fractions Graham problem
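A compact restatement of the problem and of the bound discussed above, written out in LaTeX (a paraphrase in standard notation using the symbols r, S and b as reconstructed above; not a quotation of the article):

```latex
% Erdős–Graham problem: for every number of colours r and every r-colouring \chi of
% the integers greater than one, some finite monochromatic set S satisfies
\exists\, S \subset \{2,3,\dots\} \ \text{finite},\ \chi \ \text{constant on } S :
\qquad \sum_{n \in S} \frac{1}{n} = 1 ,
% and, as conjectured by Erdős and Graham and proved by Croot, one may moreover take
\max S \;\le\; b^{\,r} \quad \text{for a constant } b \text{ independent of } r .
```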
https://en.wikipedia.org/wiki/Banach%20bundle
In mathematics, a Banach bundle is a vector bundle each of whose fibres is a Banach space, i.e. a complete normed vector space, possibly of infinite dimension. Definition of a Banach bundle Let M be a Banach manifold of class Cp with p ≥ 0, called the base space; let E be a topological space, called the total space; let π : E → M be a surjective continuous map. Suppose that for each point x ∈ M, the fibre Ex = π−1(x) has been given the structure of a Banach space. Let be an open cover of M. Suppose also that for each i ∈ I, there is a Banach space Xi and a map τi such that the map τi is a homeomorphism commuting with the projection onto Ui, i.e. the following diagram commutes: and for each x ∈ Ui the induced map τix on the fibre Ex is an invertible continuous linear map, i.e. an isomorphism in the category of topological vector spaces; if Ui and Uj are two members of the open cover, then the map is a morphism (a differentiable map of class Cp), where Lin(X; Y) denotes the space of all continuous linear maps from a topological vector space X to another topological vector space Y. The collection {(Ui, τi)|i∈I} is called a trivialising covering for π : E → M, and the maps τi are called trivialising maps. Two trivialising coverings are said to be equivalent if their union again satisfies the two conditions above. An equivalence class of such trivialising coverings is said to determine the structure of a Banach bundle on π : E → M. If all the spaces Xi are isomorphic as topological vector spaces, then they can be assumed all to be equal to the same space X. In this case, π : E → M is said to be a Banach bundle with fibre X. If M is a connected space then this is necessarily the case, since the set of points x ∈ M for which there is a trivialising map for a given space X is both open and closed. In the finite-dimensional case, the second condition above is implied by the first. Examples of Banach bundles If V is any Banach space, the tangent space TxV to V at any point x ∈ V is isomorphic in an obvious way to V itself. The tangent bundle TV of V is then a Banach bundle with the usual projection This bundle is "trivial" in the sense that TV admits a globally defined trivialising map: the identity function If M is any Banach manifold, the tangent bundle TM of M forms a Banach bundle with respect to the usual projection, but it may not be trivial. Similarly, the cotangent bundle T*M, whose fibre over a point x ∈ M is the topological dual space to the tangent space at x: also forms a Banach bundle with respect to the usual projection onto M. There is a connection between Bochner spaces and Banach bundles. Consider, for example, the Bochner space X = L²([0, T]; H1(Ω)), which might arise as a useful object when studying the heat equation on a domain Ω. One might seek solutions σ ∈ X to the heat equation; for each time t, σ(t) is a function in the Sobolev space H1(Ω). One could also think of Y = [0, T] × H1(Ω), which as a Cartesi
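The trivialising-map conditions in the definition above lost their displayed formulas during extraction; a standard rendering of the intended conditions (my reconstruction of the usual Banach-bundle definition, not a quotation of the article) is:

```latex
% Local triviality: for each i there is a Banach space X_i and a homeomorphism
\tau_i : \pi^{-1}(U_i) \to U_i \times X_i , \qquad \operatorname{pr}_1 \circ \tau_i = \pi ,
% whose restriction to each fibre is a topological linear isomorphism
\tau_{ix} : E_x \to X_i \quad (x \in U_i) .
% Compatibility on overlaps: the transition map
U_i \cap U_j \ni x \;\longmapsto\; \tau_{ix} \circ \tau_{jx}^{-1} \in \operatorname{Lin}(X_j; X_i)
% is required to be a morphism of class C^p.
```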
https://en.wikipedia.org/wiki/Cogenerator
Cogenerator may refer to: Cogeneration, simultaneous generation of heat and electricity Injective cogenerator, in mathematics More generally, a cogenerator is the dual of a generator of a category. An operator in the dilation theorem for contraction semigroups
https://en.wikipedia.org/wiki/Lindley%20equation
In probability theory, the Lindley equation, Lindley recursion or Lindley process is a discrete-time stochastic process A_n where n takes integer values and A_{n+1} = max(0, A_n + B_n). Processes of this form can be used to describe the waiting time of customers in a queue or the evolution of a queue length over time. The idea was first proposed in the discussion following Kendall's 1951 paper. Waiting times In Dennis Lindley's first paper on the subject the equation is used to describe waiting times experienced by customers in a queue with the First-In First-Out (FIFO) discipline: W_{n+1} = max(0, W_n + U_n), where T_n is the time between the nth and (n+1)th arrivals, S_n is the service time of the nth customer, U_n = S_n − T_n, and W_n is the waiting time of the nth customer. The first customer does not need to wait, so W_1 = 0. Subsequent customers will have to wait if they arrive at a time before the previous customer has been served. Queue lengths The evolution of the queue length process can also be written in the form of a Lindley equation. Integral equation Lindley's integral equation is a relationship satisfied by the stationary waiting time distribution F(x) in a G/G/1 queue, F(x) = ∫ K(x − y) dF(y) for x ≥ 0, where K(x) is the distribution function of the random variable denoting the difference between the (k − 1)th customer's service time and the inter-arrival time between the (k − 1)th and kth customers. The Wiener–Hopf method can be used to solve this expression. Notes Queueing theory
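As an illustration of the waiting-time recursion W_{n+1} = max(0, W_n + U_n), here is a small simulation sketch; the exponential inter-arrival and service times (an M/M/1-type queue) are an assumption chosen for the example, not something taken from the article:

```python
import random

def lindley_waits(n_customers, arrival_rate=1.0, service_rate=1.5, seed=0):
    """Waiting times from the Lindley recursion W_{n+1} = max(0, W_n + S_n - T_n)."""
    rng = random.Random(seed)
    waits = [0.0]                            # the first customer does not wait: W_1 = 0
    for _ in range(n_customers - 1):
        t = rng.expovariate(arrival_rate)    # T_n: time until the next arrival
        s = rng.expovariate(service_rate)    # S_n: service time of the current customer
        waits.append(max(0.0, waits[-1] + s - t))
    return waits

w = lindley_waits(100_000)
# For these rates, M/M/1 theory predicts a long-run mean queueing delay of 4/3.
print(sum(w) / len(w))
```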
https://en.wikipedia.org/wiki/Zero%20element
In mathematics, a zero element is one of several generalizations of the number zero to other algebraic structures. These alternate meanings may or may not reduce to the same thing, depending on the context. Additive identities An additive identity is the identity element in an additive group. It corresponds to the element 0 such that for all x in the group, 0 + x = x + 0 = x. Some examples of additive identity include: The zero vector under vector addition: the vector of length 0 and whose components are all 0. Often denoted by a boldface or arrowed 0. The zero function or zero map defined by z(x) = 0, under pointwise addition The empty set under set union An empty sum or empty coproduct An initial object in a category (an empty coproduct, and so an identity under coproducts) Absorbing elements An absorbing element in a multiplicative semigroup or semiring generalises the property 0 · x = x · 0 = 0. Examples include: The empty set, which is an absorbing element under Cartesian product of sets, since { } × S = { } The zero function or zero map defined by z(x) = 0, under pointwise multiplication Many absorbing elements are also additive identities, including the empty set and the zero function. Another important example is the distinguished element 0 in a field or ring, which is both the additive identity and the multiplicative absorbing element, and whose principal ideal is the smallest ideal. Zero objects A zero object in a category is both an initial and terminal object (and so an identity under both coproducts and products). For example, the trivial structure (containing only the identity) is a zero object in categories where morphisms must map identities to identities. Specific examples include: The trivial group, containing only the identity (a zero object in the category of groups) The zero module, containing only the identity (a zero object in the category of modules over a ring) Zero morphisms A zero morphism in a category is a generalised absorbing element under function composition: any morphism composed with a zero morphism gives a zero morphism. Specifically, if 0_XY : X → Y is the zero morphism among morphisms from X to Y, and f : A → X and g : Y → B are arbitrary morphisms, then 0_XY ∘ f = 0_AY and g ∘ 0_XY = 0_XB. If a category has a zero object 0, then there are canonical morphisms X → 0 and 0 → Y, and composing them gives a zero morphism 0_XY : X → Y. In the category of groups, for example, zero morphisms are morphisms which always return group identities, thus generalising the function z(x) = 0. Least elements A least element in a partially ordered set or lattice may sometimes be called a zero element, and written either as 0 or ⊥. Zero module In mathematics, the zero module is the module consisting of only the additive identity for the module's addition function. In the integers, this identity is zero, which gives the name zero module. That the zero module is in fact a module is simple to show; it is closed under addition and multiplication trivially. Zero ideal In mathematics, the zero ideal in a ring is the ideal consisting of only the additive identity (or zero element). The fact t
https://en.wikipedia.org/wiki/Rencontres%20numbers
In combinatorial mathematics, the rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements. (Rencontre is French for encounter. By some accounts, the problem is named after a solitaire game.) For n ≥ 0 and 0 ≤ k ≤ n, the rencontres number Dn, k is the number of permutations of { 1, ..., n } that have exactly k fixed points. For example, if seven presents are given to seven different people, but only two are destined to get the right present, there are D7, 2 = 924 ways this could happen. Another often cited example is that of a dance school with 7 couples, where, after tea-break the participants are told to randomly find a partner to continue, then once more there are D7, 2 = 924 possibilities that 2 previous couples meet again by chance. Numerical values Here is the beginning of this array : Formulas The numbers in the k = 0 column enumerate derangements. Thus for non-negative n. It turns out that where the ratio is rounded up for even n and rounded down for odd n. For n ≥ 1, this gives the nearest integer. More generally, for any , we have The proof is easy after one knows how to enumerate derangements: choose the k fixed points out of n; then choose the derangement of the other n − k points. The numbers are generated by the power series ; accordingly, an explicit formula for Dn, m can be derived as follows: This immediately implies that for n large, m fixed. Probability distribution The sum of the entries in each row for the table in "Numerical Values" is the total number of permutations of { 1, ..., n }, and is therefore n!. If one divides all the entries in the nth row by n!, one gets the probability distribution of the number of fixed points of a uniformly distributed random permutation of { 1, ..., n }. The probability that the number of fixed points is k is For n ≥ 1, the expected number of fixed points is 1 (a fact that follows from linearity of expectation). More generally, for i ≤ n, the ith moment of this probability distribution is the ith moment of the Poisson distribution with expected value 1. For i > n, the ith moment is smaller than that of that Poisson distribution. Specifically, for i ≤ n, the ith moment is the ith Bell number, i.e. the number of partitions of a set of size i. Limiting probability distribution As the size of the permuted set grows, we get This is just the probability that a Poisson-distributed random variable with expected value 1 is equal to k. In other words, as n grows, the probability distribution of the number of fixed points of a random permutation of a set of size n approaches the Poisson distribution with expected value 1. See also Oberwolfach problem, a different mathematical problem involving the arrangement of diners at tables Problème des ménages, a similar problem involving partial derangements References Riordan, John, An Introduction
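The counting argument sketched above (choose the k fixed points, then derange the remaining n − k elements) translates directly into code; a short sketch (function names are mine):

```python
from math import comb

def derangements(m):
    """Subfactorial !m: permutations of m items with no fixed point."""
    d = 1                                    # !0 = 1
    for i in range(1, m + 1):
        d = i * d + (-1) ** i                # recurrence !i = i * !(i-1) + (-1)^i
    return d

def rencontres(n, k):
    """D(n, k): permutations of {1, ..., n} with exactly k fixed points."""
    return comb(n, k) * derangements(n - k)

print(rencontres(7, 2))                      # 924, the value used in the examples above
print([rencontres(4, k) for k in range(5)])  # row n = 4 of the array: [9, 8, 6, 0, 1]
```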
https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling%20test
The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, in which case the test and its set of critical values is distribution-free. However, the test is most often used in contexts where a family of distributions is being tested, in which case the parameters of that family need to be estimated and account must be taken of this in adjusting either the test-statistic or its critical values. When applied to testing whether a normal distribution adequately describes a set of data, it is one of the most powerful statistical tools for detecting most departures from normality. K-sample Anderson–Darling tests are available for testing whether several collections of observations can be modelled as coming from a single population, where the distribution function does not have to be specified. In addition to its use as a test of fit for distributions, it can be used in parameter estimation as the basis for a form of minimum distance estimation procedure. The test is named after Theodore Wilbur Anderson (1918–2016) and Donald A. Darling (1915–2014), who invented it in 1952. The single-sample test The Anderson–Darling and Cramér–von Mises statistics belong to the class of quadratic EDF statistics (tests based on the empirical distribution function). If the hypothesized distribution is , and empirical (sample) cumulative distribution function is , then the quadratic EDF statistics measure the distance between and by where is the number of elements in the sample, and is a weighting function. When the weighting function is , the statistic is the Cramér–von Mises statistic. The Anderson–Darling (1954) test is based on the distance which is obtained when the weight function is . Thus, compared with the Cramér–von Mises distance, the Anderson–Darling distance places more weight on observations in the tails of the distribution. Basic test statistic The Anderson–Darling test assesses whether a sample comes from a specified distribution. It makes use of the fact that, when given a hypothesized underlying distribution and assuming the data does arise from this distribution, the cumulative distribution function (CDF) of the data can be assumed to follow a uniform distribution. The data can be then tested for uniformity with a distance test (Shapiro 1980). The formula for the test statistic to assess if data (note that the data must be put in order) comes from a CDF is where The test statistic can then be compared against the critical values of the theoretical distribution. In this case, no parameters are estimated in relation to the cumulative distribution function . Tests for families of distributions Essentially the same test statistic can be used in the test of fit of a family of distributions, but then it must be compared against the critical values ap
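For the basic single-sample case (no parameters estimated), the statistic is usually computed as A² = −n − (1/n) Σ_{i=1}^{n} (2i − 1)[ln F(Y_i) + ln(1 − F(Y_{n+1−i}))], where Y_1 ≤ … ≤ Y_n are the ordered data and F is the hypothesized CDF. A sketch of that computation; the standard-normal null here is only an example:

```python
import math
import random

def anderson_darling(data, cdf):
    """A^2 = -n - (1/n) * sum_{i=1}^{n} (2i-1) * [ln F(y_(i)) + ln(1 - F(y_(n+1-i)))]."""
    y = sorted(data)
    n = len(y)
    s = sum((2 * i - 1) * (math.log(cdf(y[i - 1])) + math.log(1.0 - cdf(y[n - i])))
            for i in range(1, n + 1))
    return -n - s / n

std_normal_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(200)]
print(anderson_darling(sample, std_normal_cdf))   # small values are consistent with normality
```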
https://en.wikipedia.org/wiki/Zarankiewicz%20problem
The Zarankiewicz problem, an unsolved problem in mathematics, asks for the largest possible number of edges in a bipartite graph that has a given number of vertices and has no complete bipartite subgraphs of a given size. It belongs to the field of extremal graph theory, a branch of combinatorics, and is named after the Polish mathematician Kazimierz Zarankiewicz, who proposed several special cases of the problem in 1951. Problem statement A bipartite graph consists of two disjoint sets of vertices and , and a set of edges each of which connects a vertex in to a vertex in . No two edges can both connect the same pair of vertices. A complete bipartite graph is a bipartite graph in which every pair of a vertex from and a vertex from is connected to each other. A complete bipartite graph in which has vertices and has vertices is denoted . If is a bipartite graph, and there exists a set of vertices of and vertices of that are all connected to each other, then these vertices induce a subgraph of the form . (In this formulation, the ordering of and is significant: the set of vertices must be from and the set of vertices must be from , not vice versa.) The Zarankiewicz function denotes the maximum possible number of edges in a bipartite graph for which and , but which does not contain a subgraph of the form . As a shorthand for an important special case, is the same as . The Zarankiewicz problem asks for a formula for the Zarankiewicz function, or (failing that) for tight asymptotic bounds on the growth rate of assuming that is a fixed constant, in the limit as goes to infinity. For this problem is the same as determining cages with girth six. The Zarankiewicz problem, cages and finite geometry are strongly interrelated. The same problem can also be formulated in terms of digital geometry. The possible edges of a bipartite graph can be visualized as the points of a rectangle in the integer lattice, and a complete subgraph is a set of rows and columns in this rectangle in which all points are present. Thus, denotes the maximum number of points that can be placed within an grid in such a way that no subset of rows and columns forms a complete grid. An alternative and equivalent definition is that is the smallest integer such that every (0,1)-matrix of size with ones must have a set of rows and columns such that the corresponding submatrix is made up only of 1s. Examples The number asks for the maximum number of edges in a bipartite graph with vertices on each side that has no 4-cycle (its girth is six or more). Thus, (achieved by a three-edge path), and (a hexagon). In his original formulation of the problem, Zarankiewicz asked for the values of for . The answers were supplied soon afterwards by Wacław Sierpiński: , , and . The case of is relatively simple: a 13-edge bipartite graph with four vertices on each side of the bipartition, and no subgraph, may be obtained by adding one of the long diagonals t
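The small values quoted above (a three-edge path for n = 2 and a hexagon for n = 3) can be checked by exhaustive search over 0-1 matrices; a brute-force sketch (exponential in n², so only sensible for very small n):

```python
from itertools import combinations, product

def zarankiewicz_22(n):
    """Brute-force z(n; 2): maximum number of 1s in an n-by-n (0,1)-matrix
    containing no 2-by-2 all-ones submatrix (equivalently, no K_{2,2} subgraph)."""
    best = 0
    for bits in product((0, 1), repeat=n * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        ok = all(sum(a and b for a, b in zip(r1, r2)) <= 1   # any two rows share at most one column
                 for r1, r2 in combinations(rows, 2))
        if ok:
            best = max(best, sum(bits))
    return best

print(zarankiewicz_22(2), zarankiewicz_22(3))   # expected 3 and 6, matching the examples above
```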
https://en.wikipedia.org/wiki/Helena%20Rasiowa
Helena Rasiowa (20 June 1917 – 9 August 1994) was a Polish mathematician. She worked in the foundations of mathematics and algebraic logic. Early years Rasiowa was born in Vienna on 20 June 1917 to Polish parents. As soon as Poland regained its independence in 1918, the family settled in Warsaw. Helena's father was a railway specialist. She exhibited many different skills and interests, from music to business management and the most important of her interests, mathematics. In 1938, the time was not very opportune for entering a university. Rasiowa had to interrupt her studies, as no legal education was possible in Poland after 1939. Many people fled the country, or at least they fled the big towns, which were subject to German bombardment and terror. The Rasiowa family fled also, as most high-ranking administration officials and members of the government were being evacuated to Romania. The family spent a year in Lviv. After the Soviet invasion in September 1939, the town was taken over by the Soviet Union. The lives of many Poles became endangered, so Helena's father decided to return to Warsaw. Academic development Rasiowa became strongly influenced by Polish logicians. She wrote her Master's thesis under the supervision of Jan Łukasiewicz and Bolesław Sobociński. In 1944, the Warsaw Uprising broke out and consequently Warsaw was almost completely destroyed. This was not only due to the immediate fighting, but also because of the systematic destruction which followed the uprising after it had been suppressed. Rasiowa's thesis burned with the whole house. She herself survived with her mother in a cellar covered by the ruins of the demolished building. After the war, Polish mathematics began to recover its institutions, its moods, and its people. Those who remained considered their duty to be the reconstruction of Polish universities and the scientific community. One of the important conditions for this reconstruction was to gather all those who could participate in re-creating mathematics. In the meantime, Rasiowa had accepted a teaching position in a secondary school. That is where she met Andrzej Mostowski and came back to the university. She re-wrote her Master's thesis in 1945 and in the next year she started her academic career as an assistant at the University of Warsaw, the institution she remained linked with for the rest of her life. At the university, she prepared and defended her PhD thesis, Algebraic Treatment of the Functional Calculi of Lewis and Heyting, in 1950 under the guidance of Prof. Andrzej Mostowski. This thesis on algebraic logic initiated her career contributing to the Lwów–Warsaw school of logic: In 1956, she took her second academic degree, doktor nauk (equivalent to habilitation today) in the Institute of Mathematics of the Polish Academy of Sciences, where between 1954 and 1957, she held a post of associate professor, becoming a professor in 1957 and subsequently Full Professor in 1967. For the degree, she subm
https://en.wikipedia.org/wiki/Ada%20Kaleh
{ "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "Point", "coordinates": [ 22.450389, 44.716233 ] } } ] }Ada Kaleh (; from , meaning "Island Fortress"; or ; Serbian and Bulgarian: Адакале, Adakale) was a small island on the Danube, located in what is now Romania, that was submerged during the construction of the Iron Gates hydroelectric plant in 1970. The island was about downstream from Orșova and was less than two kilometers long and approximately half a kilometer wide (1.75 x 0.4–0.5 km). Ada Kaleh was inhabited by Turkish Muslims from all parts of the Ottoman Empire, and there were also family ties to the Turkish Muslim populations of Vidin and Ruse, Bulgaria due to exogamic marriages. The isle of Ada Kaleh is probably the most evocative victim of the Iron Gate dam's construction. Once an Ottoman Turkish exclave that changed hands multiple times in the 18th and 19th centuries, it had a mosque and numerous twisting alleys, and was known as a free port and a smuggler's nest. The islanders produced Turkish delight, baklava, rose water, fig and rose marmalade, and rose oil, and were well-known for Turkish oil wrestling. The existence of Ada Kaleh was overlooked at the 1878 Congress of Berlin peace talks surrounding the Russo-Turkish War, known in Romania as the War of Independence, which allowed the island to remain a de jure possession of the Ottoman Sultan until 1923. Turkish population Adakale Turks (). The settlement of Turks began in 1699 when the Ottoman Empire took the island. In an Ottoman archive document, a brief history of the island and its inhabitants are described as follows: “After the 1770s, no boats crossed the Danube, presumably, and the Sipahi officers, who were under the command of an Ottoman pasha in Adakale, brought their families to Adakale. The people here are all descendants of these military families… This is why the native language of the people is Turkish.” In 1830, when the Serbian Principality was established in the territory of the Sanjak of Smederevo of the Ottoman Empire, the crowded Turks in Serbia community living in the Principality of Serbia was settled in 6 settlements that would be considered Ottoman lands. Adakale became one of these six Turkish quarters, each of which was considered a township. The Ottoman Turkish speaking Muslim inhabitants who came from all parts of the Ottoman Empire, it is said in this source the islanders who called themselves Turks, were of mixed Turkish, Kurdish, Albanian, and Arabian ancestry. There were also family ties to the Bulgarian Turks of Vidin and Ruse, Bulgaria, due to exogamic marriages, while in this Source, of the Population census from 1913, shows clearly that the majority of the islanders were Turks in the Balkans and Muslim Roma from different parts of the former Rumelia Eyalet, who came after Russo-Turkish War (1877–187
https://en.wikipedia.org/wiki/Logic%20%28disambiguation%29
Logic is the study of the principles and criteria of valid inference and demonstration. Logic may also refer to: Mathematical logic, a branch of mathematics that grew out of symbolic logic Philosophical logic, the application of formal logic to philosophical problems Art, entertainment, and the media "Logic" (song), by Operator Please, 2010 Logic, a 1981 album by Hideki Matsutake's Logic System Mr Logic, a character in a Viz magazine comic strip Logic (poem) by Ringelnatz The Logic, a Canadian news website People Logic (rapper) (born 1990), American rapper Logic, member of hip hop group Y'all So Stupid DJ Logic (born 1972), American turntablist Lamont "Logic" Coleman, producer of two tracks on Jim Jones's 2011 album, Capo Lora Logic (born 1960), British saxophonist and singer Louis Logic, American underground hip-hop emcee Samantha Logic (born 1992), American basketball player Science and technology Business logic, program portion encoding the rules determining data processing Digital logic, a class of digital circuits characterized by the technology underlying its logic gates LOGIC (electronic cigarette), an electronic cigarette owned by Logic Technology Development Relocating logic, embedded information in programs for relocation Logically, a startup known for its software, which utilizes artificial intelligence to label textual or visual media as real or fake. Software Dolby Pro Logic, also known as Pro Logic, a surround sound processing technology Logic Pro, a MIDI sequencer and Digital Audio Workstation application, part of Logic Studio Logic Studio, a music production suite by Apple Inc. See also Logarithm Logik (disambiguation) Logistic (disambiguation) School of Names
https://en.wikipedia.org/wiki/Alternating%20factorial
In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers. This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically, af(n) = ∑_{i=1}^{n} (−1)^{n−i} i!, or with the recurrence relation af(n) = n! − af(n − 1), in which af(1) = 1. The first few alternating factorials are 1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019 For example, the third alternating factorial is 1! – 2! + 3! = 5. The fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n – 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly. This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values. It has been proved that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. The known primes and probable primes are af(n) for n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, 2653, 3069, 3943, 4053, 4998, 8275, 9158, 11164. Only the values up to n = 661 have been proved prime in 2006. af(661) is approximately 7.818097272875 × 10^1578. Notes References Yves Gallot, Is the number of primes finite? Paul Jobling, Guy's problem B43: search for primes of form n!-(n-1)!+(n-2)!-(n-3)!+...+/-1! Integer sequences Factorial and binomial topics
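A direct implementation of the recurrence af(n) = n! − af(n − 1) reproduces the values listed above (a small sketch):

```python
from math import factorial

def alternating_factorial(n):
    """af(n) = n! - af(n-1), with af(1) = 1."""
    af = 1
    for i in range(2, n + 1):
        af = factorial(i) - af
    return af

print([alternating_factorial(n) for n in range(1, 9)])
# [1, 1, 5, 19, 101, 619, 4421, 35899], matching the list above
```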
https://en.wikipedia.org/wiki/Implementation%20of%20mathematics%20in%20set%20theory
This article examines the implementation of mathematical concepts in set theory. The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice). What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like, and it is the approaches to implementation of mathematical concepts under these two general views that are being compared and contrasted. It is not the primary aim of this article to say anything about the relative merits of these theories as foundations for mathematics. The reason for the use of two different set theories is to illustrate that multiple approaches to the implementation of mathematics are feasible. Precisely because of this approach, this article is not a source of "official" definitions for any mathematical concept. Preliminaries The following sections carry out certain constructions in the two theories ZFC and NFU and compare the resulting implementations of certain mathematical structures (such as the natural numbers). Mathematical theories prove theorems (and nothing else). So saying that a theory allows the construction of a certain object means that it is a theorem of that theory that that object exists. This is a statement about a definition of the form "the x such that exists", where is a formula of our language: the theory proves the existence of "the x such that " just in case it is a theorem that "there is one and only one x such that ". (See Bertrand Russell's theory of descriptions.) Loosely, the theory "defines" or "constructs" this object in this case. If the statement is not a theorem, the theory cannot show that the object exists; if the statement is provably false in the theory, it proves that the object cannot exist; loosely, the object cannot be constructed. ZFC and NFU share the language of set theory, so the same formal definitions "the x such that " can be contemplated in the two theories. A specific form of definition in the language of set theory is set-builder notation: means "the set A such that for all x, " (A cannot be free in ). This notation admits certain conventional extensions: is synonymous with ; is defined as , where is an expression already defined. Expressions definable in set-builder notation make sense in both ZFC and NFU: it may be that both theories prove that a given definition succeeds, or that neither do (the expression fails to refer to anything in any set t
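The set-builder conventions whose formulas dropped out of the paragraph above are, in the usual notation (a reconstruction of the standard conventions, not a quotation of the article):

```latex
% \{x : \phi\} abbreviates ``the A such that \forall x\,(x \in A \leftrightarrow \phi)''.
\{x \in B : \phi\} \;:=\; \{x : x \in B \wedge \phi\}, \qquad
\{f(x_1,\dots,x_n) : \phi\} \;:=\; \{y : \exists x_1 \dots \exists x_n\,(y = f(x_1,\dots,x_n) \wedge \phi)\}.
```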
https://en.wikipedia.org/wiki/Reynolds%20transport%20theorem
In differential calculus, the Reynolds transport theorem (also known as the Leibniz–Reynolds transport theorem), or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics. Consider integrating over the time-dependent region that has boundary , then taking the derivative with respect to time: If we wish to move the derivative into the integral, there are two issues: the time dependence of , and the introduction of and removal of space from due to its dynamic boundary. Reynolds transport theorem provides the necessary framework. General form Reynolds transport theorem can be expressed as follows: in which is the outward-pointing unit normal vector, is a point in the region and is the variable of integration, and are volume and surface elements at , and is the velocity of the area element (not the flow velocity). The function may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used. Form for a material element In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If is a material element then there is a velocity function , and the boundary elements obey This condition may be substituted to obtain: A special case If we take to be constant with respect to time, then and the identity reduces to as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.) Interpretation and reduction to one dimension The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose is independent of and , and that is a unit square in the -plane and has limits and . Then Reynolds transport theorem reduces to which, up to swapping and , is the standard expression for differentiation under the integral sign. See also References External links Osborne Reynolds, Collected Papers on Mechanical and Physical Subjects, in three volumes, published circa 1903, now fully and freely available in digital format: Volume 1, Volume 2, Volume 3, http://planetmath.org/reynoldstransporttheorem Aerodynamics Articles containing proofs Chemical engineering Continuum mechanics Eponymous theorems of physics Equations of fluid dynamics Fluid dynamics Fluid mechanics Mechanical engineering
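The general form referred to above (its displayed equation was lost in extraction) is standardly written as follows; this is the usual textbook statement rather than a quotation of the article:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega(t)} \mathbf{f}\,\mathrm{d}V
  \;=\; \int_{\Omega(t)} \frac{\partial \mathbf{f}}{\partial t}\,\mathrm{d}V
  \;+\; \int_{\partial\Omega(t)} \bigl(\mathbf{v}_b\cdot\mathbf{n}\bigr)\,\mathbf{f}\,\mathrm{d}A
% where n is the outward unit normal and v_b is the velocity of the area element.
% For a material element the boundary moves with the flow, v_b = v; and if f is constant
% in time the volume term vanishes, leaving only the boundary flux term, as noted above.
```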
https://en.wikipedia.org/wiki/Lin%20Hsin%20Hsin
Lin Hsin Hsin () is an IT inventor, artist, poet and composer from Singapore, deeply rooted in mathematics and information technology. Early life and education Lin was born in Singapore. She graduated in mathematics from the University of Singapore and received a postgraduate degree in computer science from Newcastle University, England. She studied music and art in Singapore, printmaking at the University of Ulster, papermaking in Ogawamachi, Japan and paper conservation at the University of Melbourne Conservation Services. Career Lin is a digital native. Lin builds paradigm shift & patent-grade inventions.She is an IT visionary some 20 years ahead of time, who pens her IT vision in computing, poems, and paintings. In 1976, Lin painted "Distillation of an Apple", an oil painting claimed to visualised the construction and usage of Apple computer 7 days before the birth of Apple computer. In 1977, she painted "The Computer as Architect", an oil painting depicting the vision of the power of computer in architecture. Lin claimed she has never seen nor used a Computer-aided design (CAD) system prior to her painting while commercial CAD systems are available since early 1970s. 1988 March organized 1st Artificial Intelligence conference in Singapore 1991 February 1 poem titled "Cellular Phone Galore" predicted mobile phone, & cellular network BEFORE 2G GSM launch, 27 March 1991, p.  54,55, "from time to time" 1992 wanted to build a multimedia museum (letter to National Computer Board, Singapore) 1993 February, predicted the Y2K bug while building a ten-year forecasting model on an IBM i486 PC, Journal of the Asia Pacific Economic Conference (APEC), 1999 1993 August 21, poem title "Online Intimacy" on Online dating service, p. 235, "Sunny Side Up" 1993 August 23, poem titled "Till Bankrupt Do Us Part", on online shopping & e-commerce, p. 241, "Sunny Side Up" 1994 May, painted "Voices of the Future" – oil painting depicted the wireless and mobile entertainment future lifestyle, p. 32, "Lin Hsin Hsin: Works from Art, Science & Technology Series" 1994 Designed and built a virtual museum in the world, 1995 September 9, poem titled "Virtual Offices" predicted virtual office as the concept of businesses, p. 20, "In Bytes We Travel" 1995 September 9, poem titled "The Day Will Come..." on SMS, using fingers to type on screen, p. 181, "In Bytes We Travel" 1996 October, predicted Twitter, tweeting as a way of life. in a diptych titled "twigee-tweedee", digital art in private collection, Paris, France 1996 October 1, poem titled "e-money" on digital currency & Cryptocurrency p. 58, "In Bytes We Travel" 1999 February 1, poem titled "Mobility" on mobility, p. 20, "Between the Lines" IT Inventions 2018 May 19 Next Generation 100% Directed Acyclic Graph DAG-based blockchain protocol 2017 Nov 11 Encryption Algorithm 2017 FACT Finger on Android Circling Techniques—A Finger Intelligence Technology (FIT 3.0) on Android 2016 <PugScript>-- A Visual L
https://en.wikipedia.org/wiki/Grace%20Alele-Williams
Grace Alele-Williams (16 December 1932 – 25 March 2022) was a Nigerian professor of mathematics education, who made history as the first Nigerian woman to receive a doctorate, and the first Nigerian female vice-chancellor at the University of Benin. Early life and education Grace Awani Alele was born to Itsekiri parents in Warri, Western Region (present-day Delta State), Nigeria on 16 December 1932. She attended Government School, Warri, Queen's College, Lagos and the University College of Ibadan (now University of Ibadan). She obtained a master's degree in mathematics while teaching at Queen's School, Ede in Osun State in 1957 and her PhD degree in mathematics education at the University of Chicago (U.S.) in 1963, thereby making her the first Nigerian woman to be awarded a doctorate. Grace Alele was married later that year and became known as Grace Alele-Williams. She returned to Nigeria for a couple of years' postdoctoral work at the University of Ibadan before joining the University of Lagos in 1965. Career Alele-Williams's teaching career started at Queen's School, Ede, Osun State, where she was a mathematics teacher from 1954 to 1957. She left for the University of Vermont to become a graduate assistant and later assistant professor. From 1963 to 1965, Alele-Williams was a postdoctoral research fellow, department (and institute) of education, University of Ibadan from where she was appointed a professor of mathematics at the University of Lagos in 1976. She had a special interest in women's education. While spending a decade directing the institute of education, she introduced innovative non-degree programmes, allowing older women working as elementary school teachers to receive certificates. Alele-Williams has always demonstrated concern for the access of female African students to scientific and technological subjects. Her interest in mathematics education was originally sparked by her stay in the US, which coincided with the Sputnik phenomenon. Working with the African Mathematics Program in Newton, Massachusetts, under the leadership of MIT professor Ted Martins, she participated in mathematics workshops held in various African cities from 1963 to 1975. Highlights included writing texts and correspondence courses covering basic concepts in mathematics working in concert with leading mathematicians and educators. such as the book Modern Mathematics Handbook for Teachers published in 1974. She taught at the University of Lagos from 1965 to 1985, and spent a decade directing the institute of education, which introduced innovative non-degree programmes, with many of the certificate recipients older women working as elementary school teachers. By serving in various committees and boards, Alele-Williams had made useful contributions in the development of education in Nigeria. She was chairman of the curriculum review committee, former Bendel State 1973–1979. From 1979 to 1985, she served as chairman of the Lagos State curriculum review c
https://en.wikipedia.org/wiki/Beauty%20%28disambiguation%29
Beauty is an aesthetic characteristic. Beauty may also refer to: Science and mathematics Beauty (quantum number) or bottomness, a flavour quantum number Mathematical beauty, a mathematical philosophy Characters Beauty (Belle), a central character in the fairy tale Beauty and the Beast and adaptations Beauty, the main character in The Sleeping Beauty Quartet, a series of novels by Anne Rice (writing as A. N. Roquelaure) Beauty, anime/manga series character from Bobobo-bo Bo-bobo, see list of characters Film and television Two 1965 films by Andy Warhol: Beauty No. 1 Beauty No. 2 Beauty: In the Eyes of the Beheld, a 2008 American documentary by Liza Figueroa Kravinsky Beauty (2009 film), a Japanese film by Toshio Gotō Beauty (2011 film), a South African film by Oliver Hermanus Beauty (2022 film), an American film by Andrew Dosunmu "Beauty" (Once Upon a Time), a 2017 television episode Literature Beauty (Selbourne novel), a 2009 novel by Raphael Selbourne Beauty (Tepper novel), a 1991 novel by Sheri S. Tepper Beauty: A Retelling of the Story of Beauty and the Beast, a 1978 young-adult novel by Robin McKinley Beauty, a 1992 novel by Brian D'Amato Beauty, a 2012 Anita Blake: Vampire Hunter book by Laurell K. Hamilton "Beauty", a 2003 short story by Sherwood Smith Music The Beauties, a Canadian roots/country band Albums Beauty (Ryuichi Sakamoto album), 1989 Beauty (Neutral Milk Hotel album), 1992 Beauty?, by Sound of the Blue Heart (2006) Songs "Beauty" (song), by Mötley Crüe "Beauty", by Alan, a B-side of the single "Swear" "Beauty", by Edan from Beauty and the Beat "Beauty", by Shaye (2004) "Beauty", by Earth, Wind and Fire from The Need of Love (1971) People Beauty Dlulane, South African Member of Parliament Beauty Gonzalez (born 1991), Filipino-Spanish actress Beauty McGowan (1901–1982), American baseball player Beauty Ngxongo (born 1953), South African master Zulu basket weaver Beauty Turner (1957–2008), American housing activist and journalist Places in the United States Beauty, Kentucky, an unincorporated community Beauty, West Virginia, an unincorporated community Beauty Lake, a lake in Montana Other uses Beauty (ancient thought) Beauty (dog), a World War II search and rescue dog The Beauty, a 1915 Boris Kustodiev painting Beauty industry, or sometimes just "Beauty," is a catch-all term for the services and products that deal with feminine appearance (Cosmetics, Hairstyling, etc.) See also Aesthetics, a branch of philosophy dealing with the nature of beauty, art, and taste Beauty and the Beast (disambiguation) Black Beauty (disambiguation) Sleeping Beauty (disambiguation) Beautiful (disambiguation) Pretty (disambiguation) Unattractiveness, the antonym of beauty
https://en.wikipedia.org/wiki/Poly-Bernoulli%20number
In mathematics, poly-Bernoulli numbers, denoted as , were defined by M. Kaneko as where Li is the polylogarithm. The are the usual Bernoulli numbers. Moreover, the Generalization of Poly-Bernoulli numbers with a,b,c parameters defined as follows where Li is the polylogarithm. Kaneko also gave two combinatorial formulas: where is the number of ways to partition a size set into non-empty subsets (the Stirling number of the second kind). A combinatorial interpretation is that the poly-Bernoulli numbers of negative index enumerate the set of by (0,1)-matrices uniquely reconstructible from their row and column sums. Also it is the number of open tours by a biased rook on a board (see A329718 for definition). The Poly-Bernoulli number satisfies the following asymptotic: For a positive integer n and a prime number p, the poly-Bernoulli numbers satisfy which can be seen as an analog of Fermat's little theorem. Further, the equation has no solution for integers x, y, z, n > 2; an analog of Fermat's Last Theorem. Moreover, there is an analogue of Poly-Bernoulli numbers (like Bernoulli numbers and Euler numbers) which is known as Poly-Euler numbers. See also Bernoulli numbers Stirling numbers Gregory coefficients Bernoulli polynomials Bernoulli polynomials of the second kind Stirling polynomials References . . . . Integer sequences Enumerative combinatorics
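One of Kaneko's formulas mentioned above expresses the poly-Bernoulli numbers through Stirling numbers of the second kind, B_n^{(k)} = Σ_{m=0}^{n} (−1)^{m+n} m! S(n, m) / (m + 1)^k; a sketch of that formula in exact rational arithmetic (function names are mine):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, m):
    """Stirling number of the second kind S(n, m)."""
    if n == m:
        return 1
    if n == 0 or m == 0:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def poly_bernoulli(n, k):
    """B_n^{(k)} = sum_{m=0}^{n} (-1)^(m+n) * m! * S(n, m) / (m+1)^k  (k may be negative)."""
    total = Fraction(0)
    fact = 1                                   # running value of m!
    for m in range(n + 1):
        if m > 0:
            fact *= m
        total += Fraction((-1) ** (m + n) * fact * stirling2(n, m)) / Fraction(m + 1) ** k
    return total

print([poly_bernoulli(n, 1) for n in range(5)])  # the Bernoulli numbers 1, 1/2, 1/6, 0, -1/30
print(poly_bernoulli(2, -2))  # 14: the 2x2 (0,1)-matrices reconstructible from row/column sums
```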
https://en.wikipedia.org/wiki/Cisoid
Cisoid may refer to: Cisoid (chemistry), form of geometric isomer in chemistry Cisoid (mathematics), complex sinusoid function See also Cisoidal (disambiguation) Cosinusoid Sinusoid Cissoid Transoid
https://en.wikipedia.org/wiki/Twenty-One%20Card%20Trick
The Twenty-One Card Trick, also known as the 11th card trick or three column trick, is a simple self-working card trick that uses basic mathematics to reveal the user's selected card. The game uses a selection of 21 cards out of a standard deck. These are shuffled and the player selects one at random. The cards are then dealt out face up in three columns of 7 cards each. The player points to the column containing their card. The cards are picked up and the process is repeated three times, at which point the magician reveals the selected card. Variations Minor aspects of the presentation are adjustable, for example the cards can be dealt either face-up or face-down. If they are dealt face-down then the spectator must look through each of the piles until finding which one contains the selected card, whereas if they are dealt face-up then an attentive spectator can immediately answer the question of which pile contains the selected card. Some performers deal the cards into face-up rows or columns instead of piles, which saves more time as all cards are partly visible. When the same method is applied to three piles of nine cards each, it is called the 27 card trick. It is identical in principle. Method The magician begins by handing the spectator the 21-card packet and asking them to look through it and select any one card to remember. The cards are then dealt into three piles one at a time, like when dealing out hands in a card game. Each time they are dealt out, after the spectator indicates which pile contains the thought of card, the magician places that pile between the other two. After the first time, the card will be one of the ones in position 8-14. When the cards are dealt out the second time, the selection will be the third, fourth, or fifth card in the pile it ends up in. In picking up the piles, the magician places this pile between the other two again. This ensures that the selection will now be one of the ones in position 10-12. The third time the cards are dealt out, the selection will be the fourth card in whichever pile it ends up in. On the third deal, as soon as the spectator indicates which pile contains the selection, the magician knows that it is the fourth, or middle, card in that pile. If the magician gathers up the piles again, as before with the pile containing the selection in the middle, the selection will be the eleventh card in the 21 card packet. If 27 cards are used, the procedure is the same but the selection will be the fourteenth card in the packet. Literature Professor Hoffmann, Modern Magic External links A Brief Analysis of the Twenty-One Card Trick and Related Effects by Justin Higham Card tricks
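The positional argument above (the selection sits in positions 8–14 after the first round, 10–12 after the second, and is always the 11th card after the third) can be verified exhaustively; a small simulation sketch:

```python
def deal_and_gather(packet, chosen):
    """Deal the packet round-robin into 3 piles, then gather with the chosen card's pile in the middle."""
    piles = [packet[i::3] for i in range(3)]          # deal one card at a time into three piles
    target = next(p for p in piles if chosen in p)    # the pile the spectator points to
    others = [p for p in piles if p is not target]
    return others[0] + target + others[1]

for chosen in range(21):                  # try every possible selected card
    packet = list(range(21))
    for _ in range(3):
        packet = deal_and_gather(packet, chosen)
    assert packet.index(chosen) == 10     # always the 11th card (index 10)
print("selected card always ends up 11th")
```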
https://en.wikipedia.org/wiki/TPDF
TPDF may refer to: Tanzania People's Defence Force Triangular Probability Density Function, a type of distribution which is used in audio dithering
https://en.wikipedia.org/wiki/Cauchy%20condensation%20test
In mathematics, the Cauchy condensation test, named after Augustin-Louis Cauchy, is a standard convergence test for infinite series. For a non-increasing sequence of non-negative real numbers, the series converges if and only if the "condensed" series converges. Moreover, if they converge, the sum of the condensed series is no more than twice as large as the sum of the original. Estimate The Cauchy condensation test follows from the stronger estimate, which should be understood as an inequality of extended real numbers. The essential thrust of a proof follows, patterned after Oresme's proof of the divergence of the harmonic series. To see the first inequality, the terms of the original series are rebracketed into runs whose lengths are powers of two, and then each run is bounded above by replacing each term by the largest term in that run. That term is always the first one, since by assumption the terms are non-increasing. To see the second inequality, these two series are again rebracketed into runs of power of two length, but "offset" as shown below, so that the run of which begins with lines up with the end of the run of which ends with , so that the former stays always "ahead" of the latter. Integral comparison The "condensation" transformation recalls the integral variable substitution yielding . Pursuing this idea, the integral test for convergence gives us, in the case of monotone , that converges if and only if converges. The substitution yields the integral . We then notice that , where the right hand side comes from applying the integral test to the condensed series . Therefore, converges if and only if converges. Examples The test can be useful for series where appears as in a denominator in . For the most basic example of this sort, the harmonic series is transformed into the series , which clearly diverges. As a more complex example, take Here the series definitely converges for , and diverges for . When , the condensation transformation gives the series The logarithms "shift to the left". So when , we have convergence for , divergence for . When the value of enters. This result readily generalizes: the condensation test, applied repeatedly, can be used to show that for , the generalized Bertrand series converges for and diverges for . Here denotes the th iterate of a function , so that The lower limit of the sum, , was chosen so that all terms of the series are positive. Notably, these series provide examples of infinite sums that converge or diverge arbitrarily slowly. For instance, in the case of and , the partial sum exceeds 10 only after (a googolplex) terms; yet the series diverges nevertheless. Schlömilch's generalization A generalization of the condensation test was given by Oskar Schlömilch. Let be a strictly increasing sequence of positive integers such that the ratio of successive differences is bounded: there is a positive real number , for which Then, provided that meets the
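The statement and the two-sided estimate referred to above, restored in standard notation (a reconstruction of the usual textbook form, not a quotation):

```latex
% For a non-increasing sequence f(n) \ge 0:
\sum_{n=1}^{\infty} f(n) \ \text{converges} \iff \sum_{n=0}^{\infty} 2^{n} f\!\left(2^{n}\right) \ \text{converges},
% and, more precisely,
\sum_{n=1}^{\infty} f(n) \;\le\; \sum_{n=0}^{\infty} 2^{n} f\!\left(2^{n}\right) \;\le\; 2\sum_{n=1}^{\infty} f(n).
% Example: for the harmonic series f(n) = 1/n the condensed series is
% \sum_{n} 2^{n} \cdot 2^{-n} = \sum_{n} 1, which clearly diverges.
```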
https://en.wikipedia.org/wiki/Cyclic%20graph
In mathematics, a cyclic graph may mean a graph that contains a cycle, or a graph that is a cycle, with varying definitions of cycles. See: Cycle (graph theory), a cycle in a graph Forest (graph theory), an undirected graph with no cycles Biconnected graph, an undirected graph in which every edge belongs to a cycle Directed acyclic graph, a directed graph with no cycles Strongly connected graph, a directed graph in which every edge belongs to a cycle Aperiodic graph, a directed graph in which the cycle lengths have no nontrivial common divisor Pseudoforest, a directed or undirected graph in which every connected component includes at most one cycle Cycle graph, a graph that has the structure of a single cycle Pancyclic graph, a graph that has cycles of all possible lengths Cycle detection (graph theory), the algorithmic problem of finding cycles in graphs Other similarly-named concepts include Cycle graph (algebra), a graph that illustrates the cyclic subgroups of a group Circulant graph, a graph with an automorphism which permutes its vertices cyclically.
https://en.wikipedia.org/wiki/The%20Fourth%20Dimension%20%28book%29
The Fourth Dimension: Toward a Geometry of Higher Reality (1984) is a popular mathematics book by Rudy Rucker, a Silicon Valley professor of mathematics and computer science. It provides a popular presentation of set theory and four dimensional geometry as well as some mystical implications. A foreword is provided by Martin Gardner and the 200+ illustrations are by David Povilaitis. The Fourth Dimension: Toward a Geometry of Higher Reality was reprinted in 1985 as the paperback The Fourth Dimension: A Guided Tour of the Higher Universes. It was again reprinted in paperback in 2014 by Dover Publications with its original subtitle. Like other Rucker books, The Fourth Dimension is dedicated to Edwin Abbott Abbott, author of the novella Flatland. Synopsis The Fourth Dimension teaches readers about the concept of a fourth spatial dimension. Several analogies are made to Flatland; in particular, Rucker compares how a square in Flatland would react to a cube in Spaceland to how a cube in Spaceland would react to a hypercube from the fourth dimension. The book also includes multiple puzzles. Reception Kirkus Reviews called it "animated, often amusing", and a "rare treat", but noted that the book eventually leaves mathematical topics behind to focus instead on "mysticism of the all-is-one-one-is-all thinking of an Ouspensky." The Quarterly Review of Biology declared it to be "nice", and "at times (...) enchanting", comparing it to The Tao of Physics. See also Hiding in the Mirror, a similar book by Lawrence M. Krauss. References 1984 non-fiction books Books by Rudy Rucker Mathematics books
https://en.wikipedia.org/wiki/Rational%20expression
Rational expression may refer to: A mathematical expression that may be rewritten to a rational fraction, an algebraic fraction such that both the numerator and the denominator are polynomials. A regular expression, also known as rational expression, used in formal language theory (computer science) See also rational number rational (disambiguation)
https://en.wikipedia.org/wiki/Dennis%20Lindley
Dennis Victor Lindley (25 July 1923 – 14 December 2013) was an English statistician, decision theorist and leading advocate of Bayesian statistics. Biography Lindley grew up in the south-west London suburb of Surbiton. He was an only child and his father was a local building contractor. Lindley recalled (to Adrian Smith) that the family had "little culture" and that both his parents were "proud of the fact that they had never read a book." The school Lindley attended, Tiffin School, introduced him to "ordinary cultural activities." From there Lindley went to read mathematics at Trinity College, Cambridge in 1941. During the war the degree course lasted only two years and, on finishing, Lindley had a choice between entering the armed forces and joining the Civil Service as a statistician. He chose the latter and, after taking a short course given by Oscar Irwin which he "did not understand", he joined a section of the Ministry of Supply doing statistical work under George Barnard. After the war Lindley spent some time at the National Physical Laboratory before returning to Cambridge for a further year of study. From 1948 to 1960 he worked at Cambridge, starting as a demonstrator and leaving as director of the Statistical Laboratory. In 1960 Lindley left to take up a new chair at Aberystwyth. In 1967 he moved to University College London. In 1977 Lindley took early retirement at the age of 54. From then until 1987 he travelled the world as an "itinerant scholar", and later continued to write and to attend conferences. He was awarded the Royal Statistical Society's Guy Medal in Gold in 2002. Lindley first encountered statistics as a set of techniques and in his early years at Cambridge he worked to find a mathematical basis for the subject. His lectures on probability were based on Kolmogorov's approach which at that time had no following in Britain. In 1954 Lindley met Savage who was also looking for a deeper justification of the ideas of Neyman, Pearson, Wald and Fisher. Both found that justification in Bayesian theory and they turned into critics of the classical statistical inference they had hoped to justify. Lindley became a great missionary for the Bayesian gospel. The atmosphere of the Bayesian revival is captured in a comment by Rivett on Lindley's move to University College London and the premier chair of statistics in Britain: "it was as though a Jehovah's Witness had been elected Pope." In 1959 he was elected as a Fellow of the American Statistical Association. In 2000, the International Society for Bayesian Analysis created the Lindley prize in his honour. Publications (with J. C. P. Miller) Cambridge Elementary Statistical Tables, Cambridge. 1953. Introduction to Probability and Statistics from a Bayesian Viewpoint, 2 volumes, Cambridge 1965. Bayesian Statistics : a Review, SIAM. 1971. Making Decisions, Wiley-Interscience. 1971. (with W.F. Scott) New Cambridge Elementary Statistical Tables, Cambridge. 1984. The bibliogr
https://en.wikipedia.org/wiki/Algebra%20of%20communicating%20processes
The algebra of communicating processes (ACP) is an algebraic approach to reasoning about concurrent systems. It is a member of the family of mathematical theories of concurrency known as process algebras or process calculi. ACP was initially developed by Jan Bergstra and Jan Willem Klop in 1982, as part of an effort to investigate the solutions of unguarded recursive equations. More so than the other seminal process calculi (CCS and CSP), the development of ACP focused on the algebra of processes, and sought to create an abstract, generalized axiomatic system for processes, and in fact the term process algebra was coined during the research that led to ACP. Informal description ACP is fundamentally an algebra, in the sense of universal algebra. This algebra is a way to describe systems in terms of algebraic process expressions that define compositions of other processes, or of certain primitive elements. Primitives ACP uses instantaneous, atomic actions () as its primitives. Some actions have special meaning, such as the action , which represents deadlock or stagnation, and the action , which represents a silent action (abstracted actions that have no specific identity). Algebraic operators Actions can be combined to form processes using a variety of operators. These operators can be roughly categorized as providing a basic process algebra, concurrency, and communication. Choice and sequencing – the most fundamental of algebraic operators are the alternative operator (), which provides a choice between actions, and the sequencing operator (), which specifies an ordering on actions. So, for example, the process first chooses to perform either or , and then performs action . How the choice between and is made does not matter and is left unspecified. Note that alternative composition is commutative but sequential composition is not (because time flows forward). Concurrency – to allow the description of concurrency, ACP provides the merge and left-merge operators. The merge operator, , represents the parallel composition of two processes, the individual actions of which are interleaved. The left-merge operator, , is an auxiliary operator with similar semantics to the merge, but a commitment to always choose its initial step from the left-hand process. As an example, the process may perform the actions in any of the sequences . On the other hand, the process may only perform the sequences since the left-merge operators ensure that the action occurs first. Communication — interaction (or communication) between processes is represented using the binary communications operator, . For example, the actions and might be interpreted as the reading and writing of a data item , respectively. Then the process will communicate the value from the right component process to the left component process (i.e. the identifier is bound to the value , and free instances of in the process take on that value), and then behave as the merge o
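To make the informal description concrete, here is a hypothetical Python sketch that models a process as a finite set of traces and implements toy versions of alternative composition, sequential composition, merge and left-merge. It only illustrates the interleaving behaviour described above, not ACP's axiomatic system, and it omits deadlock, silent steps and communication.

from itertools import product

# Illustrative finite-trace model: a process is a set of tuples of action names.

def atom(a):
    return {(a,)}

def alt(p, q):             # alternative composition: choose p or q
    return p | q

def seq(p, q):             # sequential composition: run p, then q
    return {s + t for s, t in product(p, q)}

def interleavings(s, t):
    if not s: return {t}
    if not t: return {s}
    return ({(s[0],) + r for r in interleavings(s[1:], t)} |
            {(t[0],) + r for r in interleavings(s, t[1:])})

def merge(p, q):           # parallel composition: all interleavings
    return {r for s, t in product(p, q) for r in interleavings(s, t)}

def left_merge(p, q):      # like merge, but the first step comes from the left process
    return {r for r in merge(p, q) for s in p if r[0] == s[0]}

a, b, c = atom("a"), atom("b"), atom("c")
print(seq(alt(a, b), c))         # traces ('a','c') and ('b','c')
print(merge(seq(a, b), c))       # traces ('a','b','c'), ('a','c','b'), ('c','a','b')
print(left_merge(seq(a, b), c))  # only the traces that start with 'a'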
https://en.wikipedia.org/wiki/Congruum
In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares. That is, if , , and (for integers , , and ) are three square numbers that are equally spaced apart from each other, then the spacing between them, , is called a congruum. The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua. It can be formalized as a Diophantine equation: find integers , , and such that When this equation is satisfied, both sides of the equation equal the congruum. Fibonacci solved the congruum problem by finding a parameterized formula for generating all congrua, together with their associated arithmetic progressions. According to this formula, each congruum is four times the area of a Pythagorean triangle. Congrua are also closely connected with congruent numbers: every congruum is a congruent number, and every congruent number is a congruum multiplied by the square of a rational number. Examples As an example, the number 96 is a congruum because it is the difference between adjacent squares in the sequence 4, 100, and 196 (the squares of 2, 10, and 14 respectively). The first few congrua are: History The congruum problem was originally posed in 1225, as part of a mathematical tournament held by Frederick II, Holy Roman Emperor, and answered correctly at that time by Fibonacci, who recorded his work on this problem in his Book of Squares. Fibonacci was already aware that it is impossible for a congruum to itself be a square, but did not give a satisfactory proof of this fact. Geometrically, this means that it is not possible for the pair of legs of a Pythagorean triangle to be the leg and hypotenuse of another Pythagorean triangle. A proof was eventually given by Pierre de Fermat, and the result is now known as Fermat's right triangle theorem. Fermat also conjectured, and Leonhard Euler proved, that there is no sequence of four squares in arithmetic progression. Parameterized solution The congruum problem may be solved by choosing two distinct positive integers and (with ); then the number is a congruum. The middle square of the associated arithmetic progression of squares is , and the other two squares may be found by adding or subtracting the congruum. Additionally, multiplying a congruum by a square number produces another congruum, whose progression of squares is multiplied by the same factor. All solutions arise in one of these two ways. For instance, the congruum 96 can be constructed by these formulas with and , while the congruum 216 is obtained by multiplying the smaller congruum 24 by the square number 9. An equivalent formulation of this solution, given by Bernard Frénicle de Bessy, is that for the three squares in arithmetic progression , , and , the middle number is the hypotenuse of a Pythagorean triangle and the other two numbers and are the difference and sum respectively of the triangle's two
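The parameterized solution described above translates directly into a short program. The following Python sketch (with m and n chosen arbitrarily for the example) generates congrua from the formula and checks the associated progression of squares.

from math import isqrt

# For integers m > n > 0, the number 4*m*n*(m**2 - n**2) is a congruum, and the
# middle square of the associated arithmetic progression is (m**2 + n**2)**2.

def is_square(x):
    r = isqrt(x)
    return r * r == x

def congruum(m, n):
    assert m > n > 0
    return 4 * m * n * (m**2 - n**2)

def check(m, n):
    c = congruum(m, n)
    mid = (m**2 + n**2) ** 2
    # verify all three members of the progression are perfect squares
    return is_square(mid - c) and is_square(mid) and is_square(mid + c)

print(congruum(3, 1), check(3, 1))   # 96 True  -> squares 4, 100, 196
print(congruum(2, 1), check(2, 1))   # 24 True  -> squares 1, 25, 49
print(congruum(5, 4), check(5, 4))   # 720 True -> squares 961, 1681, 2401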
https://en.wikipedia.org/wiki/Trigonal%20trapezohedron
In geometry, a trigonal trapezohedron is a rhombohedron (a polyhedron with six rhombus-shaped faces) in which, additionally, all six faces are congruent. Alternative names for the same shape are the trigonal deltohedron or isohedral rhombohedron. Some sources just call them rhombohedra. Geometry Six identical rhombic faces can construct two configurations of trigonal trapezohedra. The acute or prolate form has three acute angle corners of the rhombic faces meeting at the two polar axis vertices. The obtuse or oblate or flat form has three obtuse angle corners of the rhombic faces meeting at the two polar axis vertices. More strongly than having all faces congruent, the trigonal trapezohedra are isohedral figures, meaning that they have symmetries that take any face to any other face. Special cases A cube can be interpreted as a special case of a trigonal trapezohedron, with square rather than rhombic faces. The two golden rhombohedra are the acute and obtuse form of the trigonal trapezohedron with golden rhombus faces. Copies of these can be assembled to form other convex polyhedra with golden rhombus faces, including the Bilinski dodecahedron and rhombic triacontahedron. Four oblate rhombohedra whose ratio of face diagonal lengths are the square root of two can be assembled to form a rhombic dodecahedron. The same rhombohedra also tile space in the trigonal trapezohedral honeycomb. Related polyhedra The trigonal trapezohedra are special cases of trapezohedra, polyhedra with an even number of congruent kite-shaped faces. When this number of faces is six, the kites degenerate to rhombi, and the result is a trigonal trapezohedron. As with the rhombohedra more generally, the trigonal trapezohedra are also special cases of parallelepipeds, and are the only parallelepipeds with six congruent faces. Parallelepipeds are zonohedra, and Evgraf Fedorov proved that the trigonal trapezohedra are the only infinite family of zonohedra whose faces are all congruent rhombi. Dürer's solid is generally presumed to be a truncated triangular trapezohedron, a trigonal trapezohedron with two opposite vertices truncated, although its precise shape is still a matter for debate. See also Truncated triangular trapezohedron References External links Space-filling polyhedra Zonohedra Polyhedra
https://en.wikipedia.org/wiki/Tetragonal%20trapezohedron
In geometry, a tetragonal trapezohedron, or deltohedron, is the second in an infinite series of trapezohedra, which are dual to the antiprisms. It has eight faces, which are congruent kites, and is dual to the square antiprism. In mesh generation This shape has been used as a test case for hexahedral mesh generation, simplifying an earlier test case posited by mathematician Robert Schneiders in the form of a square pyramid with its boundary subdivided into 16 quadrilaterals. In this context the tetragonal trapezohedron has also been called the cubical octahedron, quadrilateral octahedron, or octagonal spindle, because it has eight quadrilateral faces and is uniquely defined as a combinatorial polyhedron by that property. Adding four cuboids to a mesh for the cubical octahedron would also give a mesh for Schneiders' pyramid. As a simply-connected polyhedron with an even number of quadrilateral faces, the cubical octahedron can be decomposed into topological cuboids with curved faces that meet face-to-face without subdividing the boundary quadrilaterals, and an explicit mesh of this type has been constructed. However, it is unclear whether a decomposition of this type can be obtained in which all the cuboids are convex polyhedra with flat faces. In art A tetragonal trapezohedron appears in the upper left as one of the polyhedral "stars" in M. C. Escher's 1948 wood engraving Stars. Spherical tiling The tetragonal trapezohedron also exists as a spherical tiling, with 2 vertices on the poles, and alternating vertices equally spaced above and below the equator. Related polyhedra The tetragonal trapezohedron is first in a series of dual snub polyhedra and tilings with face configuration V3.3.4.3.n. References External links Paper model tetragonal (square) trapezohedron Polyhedra
https://en.wikipedia.org/wiki/Pentagonal%20trapezohedron
In geometry, a pentagonal trapezohedron or deltohedron is the third in an infinite series of face-transitive polyhedra which are dual polyhedra to the antiprisms. It has ten faces (i.e., it is a decahedron) which are congruent kites. It can be decomposed into two pentagonal pyramids and a pentagonal antiprism in the middle. It can also be decomposed into two pentagonal pyramids and a dodecahedron in the middle. 10-sided dice The pentagonal trapezohedron was patented for use as a gaming die (i.e. "game apparatus") in 1906. These dice are used for role-playing games that use percentile-based skills; however, a twenty-sided die can be labeled with the numbers 0-9 twice to use for percentages instead. Subsequent patents on ten-sided dice have made minor refinements to the basic design by rounding or truncating the edges. This enables the die to tumble so that the outcome is less predictable. One such refinement became notorious at the 1980 Gen Con when the patent was incorrectly thought to cover ten-sided dice in general. Ten-sided dice are commonly numbered from 0 to 9, as this allows two to be rolled in order to easily obtain a percentile result. Where one die represents the 'tens', the other represents 'units' therefore a result of 7 on the former and 0 on the latter would be combined to produce 70. A result of double-zero is commonly interpreted as 100. Some ten-sided dice (often called 'Percentile Dice') are sold in sets of two where one is numbered from 0 to 9 and the other from 00 to 90 in increments of 10, thus making it impossible to misinterpret which one is the tens and which the units die. Ten-sided dice may also be numbered 1 to 10 for use in games where a random number in this range is desirable, or the zero may be interpreted as 10 in this situation. Spherical tiling The pentagonal trapezohedron also exists as a spherical tiling, with 2 vertices on the poles, and alternating vertices equally spaced above and below the equator. See also References Sources External links Generalized formula of uniform polyhedron (trapezohedron) having 2n congruent right kite faces from Academia.edu Virtual Reality Polyhedra www.georgehart.com: The Encyclopedia of Polyhedra VRML model Conway Notation for Polyhedra Try: "dA5" Dice Polyhedra sv:Tärning#Tiosidig tärning
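As a small illustration of the percentile-dice convention described above, the following Python sketch rolls two ten-sided dice as tens and units and reads a double zero as 100; the function name and the use of the standard random module are just choices for the example.

import random

def percentile_roll(rng=random):
    tens = rng.randrange(10)    # 0-9, read as 00-90
    units = rng.randrange(10)   # 0-9
    value = tens * 10 + units
    return 100 if value == 0 else value   # double zero counts as 100

rolls = [percentile_roll() for _ in range(5)]
print(rolls)   # e.g. [70, 3, 100, 41, 88] -- every result lies in 1..100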
https://en.wikipedia.org/wiki/Hexagonal%20trapezohedron
In geometry, a hexagonal trapezohedron or deltohedron is the fourth in an infinite series of trapezohedra which are dual polyhedra to the antiprisms. It has twelve faces which are congruent kites. It can be described by the Conway notation . It is an isohedral (face-transitive) figure, meaning that all its faces are the same. More specifically, all faces are not merely congruent but also transitive, i.e. lie within the same symmetry orbit. Convex isohedral polyhedra are the shapes that will make fair dice. Symmetry The symmetry of a hexagonal trapezohedron is D6d of order 24. The rotation group is D6 of order 12. Variations One degree of freedom within D6 symmetry changes the kites into congruent quadrilaterals with 3 edge lengths. In the limit, one edge of each quadrilateral goes to zero length, and these become bipyramids. Crystal arrangements of atoms can repeat in space with a hexagonal trapezohedral configuration around one atom, which is always enantiomorphous, and comprises space groups 177–182. Beta-quartz is the only common mineral with this crystal system. If the kites surrounding the two peaks are of different shapes, it can only have C6v symmetry, order 12. These can be called unequal trapezohedra. The dual is an unequal antiprism, with the top and bottom polygons of different radii. If it is twisted and unequal, its symmetry is reduced to cyclic symmetry, C6 symmetry, order 6. Spherical tiling The hexagonal trapezohedron also exists as a spherical tiling, with 2 vertices on the poles, and alternating vertices equally spaced above and below the equator. Related polyhedra References External links Virtual Reality Polyhedra The Encyclopedia of Polyhedra VRML model <6> Polyhedra
https://en.wikipedia.org/wiki/Laplace%20expansion
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an matrix as a weighted sum of minors, which are the determinants of some submatrices of . Specifically, for every , the Laplace expansion along the th row is the equality where is the entry of the th row and th column of , and is the determinant of the submatrix obtained by removing the th row and the th column of . Similarly, the Laplace expansion along the th column is the equality (Each identity implies the other, since the determinants of a matrix and its transpose are the same.) The term is called the cofactor of in . The Laplace expansion is often useful in proofs, as in, for example, allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to Gaussian elimination. Examples Consider the matrix The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields: Laplace expansion along the second column yields the same result: It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third column is twice the second column, and hence its determinant is zero. Proof Suppose is an n × n matrix and For clarity we also label the entries of that compose its minor matrix as for Consider the terms in the expansion of that have as a factor. Each has the form for some permutation with , and a unique and evidently related permutation which selects the same minor entries as . Similarly each choice of determines a corresponding i.e. the correspondence is a bijection between and Using Cauchy's two-line notation, the explicit relation between and can be written as where is a temporary shorthand notation for a cycle . This operation decrements all indices larger than j so that every index fits in the set {1,2,...,n-1} The permutation can be derived from as follows. Define by for and . Then is expressed as Now, the operation which apply first and then apply is (Notice applying A before B is equivalent to applying inverse of A to the upper row of B in two-line notation) where is temporary shorthand notation for . the operation which applies first and then applies is above two are equal thus, where is the inverse of which is . Thus Since the two cycles can be written respectively as and transpositions, And since the map is bijective, from which the result follows. Similarly, the result holds if the index of the outer summation was replaced with . Laplace expansion of a determinant by complementary minors Laplace's cofactor expansion can be generalised as follows. Example Consider the matrix The determinant of t
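A direct way to see the expansion at work is to implement it. The following Python sketch computes a determinant by cofactor expansion along the first row; as noted above, this recursive method is simple but far slower than Gaussian elimination for large matrices.

# Determinant by Laplace (cofactor) expansion along the first row.

def minor(matrix, row, col):
    # delete the given row and column
    return [r[:col] + r[col + 1:] for i, r in enumerate(matrix) if i != row]

def det(matrix):
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for j in range(n):
        cofactor = (-1) ** j * det(minor(matrix, 0, j))
        total += matrix[0][j] * cofactor
    return total

# A singular example: the sum of the first and third columns is twice the second.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(det(A))                    # 0
print(det([[2, 1], [7, 4]]))     # 1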
https://en.wikipedia.org/wiki/Sequential%20analysis
In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, at consequently lower financial and/or human cost. History The method of sequential analysis is first attributed to Abraham Wald with Jacob Wolfowitz, W. Allen Wallis, and Milton Friedman while at Columbia University's Statistical Research Group as a tool for more efficient industrial quality control during World War II. Its value to the war effort was immediately recognised, and led to its receiving a "restricted" classification. At the same time, George Barnard led a group working on optimal stopping in Great Britain. Another early contribution to the method was made by K.J. Arrow with D. Blackwell and M.A. Girshick. A similar approach was independently developed from first principles at about the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park, to test hypotheses about whether different messages coded by German Enigma machines should be connected and analysed together. This work remained secret until the early 1980s. Peter Armitage introduced the use of sequential analysis in medical research, especially in the area of clinical trials. Sequential methods became increasingly popular in medicine following Stuart Pocock's work that provided clear recommendations on how to control Type 1 error rates in sequential designs. Alpha spending functions When researchers repeatedly analyze data as more observations are added, the probability of a Type 1 error increases. Therefore, it is important to adjust the alpha level at each interim analysis, such that the overall Type 1 error rate remains at the desired level. This is conceptually similar to using the Bonferroni correction, but because the repeated looks at the data are dependent, more efficient corrections for the alpha level can be used. Among the earliest proposals is the Pocock boundary. Alternative ways to control the Type 1 error rate exist, such as the Haybittle-Peto bounds, and additional work on determining the boundaries for interim analyses has been done by O’Brien & Fleming and Wang & Tsiatis. A limitation of corrections such as the Pocock boundary is that the number of looks at the data must be determined before the data is collected, and that the looks at the data should be equally spaced (e.g., after 50, 100, 150, and 200 patients). The alpha spending function approach developed by Demets & Lan does not have these restrictions, and depending on the parameters chosen for the spending function, can be very similar to Pocock boundaries or the corrections proposed by O'Brien and Fleming. Ap
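The need for adjusted boundaries can be illustrated numerically. The following Python sketch is a Monte Carlo illustration with arbitrary parameters, not a trial-planning tool: it simulates repeated unadjusted interim z-tests under a true null hypothesis and shows the overall Type 1 error rate rising well above the nominal 0.05, which is exactly what spending functions are designed to correct.

import random
from statistics import NormalDist

def one_trial(looks=4, n_per_look=50, alpha=0.05, rng=random):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    total, count = 0.0, 0
    for _ in range(looks):
        for _ in range(n_per_look):
            total += rng.gauss(0, 1)    # data generated under the null (mean 0, sd 1)
            count += 1
        z = total / count**0.5           # z-statistic for testing mean 0 with known sd 1
        if abs(z) > z_crit:
            return True                  # "significant" at some interim look
    return False

random.seed(1)
trials = 20000
hits = sum(one_trial() for _ in range(trials))
print(hits / trials)   # typically around 0.12-0.13, well above the nominal 0.05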
https://en.wikipedia.org/wiki/Francis%20Muir
Francis Muir (born April 27, 1926) is a former research associate at the Geophysics Department of Stanford University. Muir graduated from Oxford University in 1950 with an MA degree in mathematics. He worked as a research and field exploration seismologist with Seismograph Service from 1954 through 1962, and then with West Australian Petroleum as a field supervisor until 1967. He then transferred to the Chevron Oilfield Research Company, which he left in 1983 as senior research associate. Since then he has held an appointment as consulting professor in the Geophysics Department at Stanford University, first with Jon Claerbout's SEP group and more recently with Amos Nur's SRB Project. Muir consults with industry, particularly on applications of velocity anisotropy to oilfield development, and is a co-investigator on a project on Anisotropy for the DOE. He is a member of the SEG Research Committee, an erstwhile fellow of the Royal Astronomical Society, and an active participant in the Web-based "anisotropists" list. The asteroid 95802 Francismuir commemorates Muir in his capacities as the mentor and advisor of its discoverer. He retired from Stanford in 2005. Publications Claerbout, J. F.; and Muir, F.; 1973 "Robust modeling with erratic data", Geophysics, 38, 826–844. Dellinger, J.; and Muir, F.; 1988, Imaging reflections in elliptically anisotropic media (Short Note), Geophysics, Vol 53.12, 1616–1618. Schoenberg, M.; and Muir, F.; 1989, A calculus for finely layered anisotropic media, Geophysics, 54.5, 581–589. Michelena, R. J.; Muir, F.; and Harris, J.; 1993, "Anisotropic travel time tomography", Geophysical Prospecting, 41.4. Schoenberg, M.; Muir, F.; and Sayers, C.; 1996, "Introducing ANNIE: A simple three-parameter anisotropic velocity model for shales", J. Seis. Expl. vol. 5, 35–49. References External links Francis Muir's Stanford web page Francis Muir's Usenet traces posted as francis@stanford.edu (via Google Groups) 1926 births Living people Usenet people Alumni of the University of Oxford American seismologists Stanford University staff
https://en.wikipedia.org/wiki/Minkowski%20functional
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. If is a subset of a real or complex vector space then the or of is defined to be the function valued in the extended real numbers, defined by where the infimum of the empty set is defined to be positive infinity (which is a real number so that would then be real-valued). The set is often assumed/picked to have properties, such as being an absorbing disk in that guarantee that will be a real-valued seminorm on In fact, every seminorm on is equal to the Minkowski functional (that is, ) of any subset of satisfying (where all three of these sets are necessarily absorbing in and the first and last are also disks). Thus every seminorm (which is a defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks is a major reason why Minkowski functionals are studied and used in functional analysis. In particular, through these relationships, Minkowski functionals allow one to "translate" certain properties of a subset of into certain properties of a function on The Minkowski function is always non-negative (meaning ). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, might not be real-valued since for any given the value is a real number if and only if is not empty. Consequently, is usually assumed to have properties (such as being absorbing in for instance) that will guarantee that is real-valued. Definition Let be a subset of a real or complex vector space Define the of or the associated with or induced by as being the function valued in the extended real numbers, defined by where recall that the infimum of the empty set is (that is, ). Here, is shorthand for For any if and only if is not empty. The arithmetic operations on can be extended to operate on where for all non-zero real The products and remain undefined. Some conditions making a gauge real-valued In the field of convex analysis, the map taking on the value of is not necessarily an issue. However, in functional analysis is almost always real-valued (that is, to never take on the value of ), which happens if and only if the set is non-empty for every In order for to be real-valued, it suffices for the origin of to belong to the or of in If is absorbing in where recall that this implies that then the origin belongs to the algebraic interior of in and thus is real-valued. Characterizations of when is rea
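Numerically, the Minkowski functional of a set given by a membership test can be approximated by bisection on the scale factor. The following Python sketch assumes the set is an absorbing disk, so the gauge is finite and membership of x/r is monotone in r; for the Euclidean unit ball the gauge recovers the Euclidean norm, and for the unit ball of the maximum norm it recovers that norm. The helper names are invented for the example.

# Approximate p_K(x) = inf { r > 0 : x in r*K } for a set K given by a membership test.

def gauge(in_K, x, hi=1e6, tol_iters=200):
    scale = lambda v, r: tuple(vi / r for vi in v)
    if all(vi == 0 for vi in x):
        return 0.0
    lo = 0.0
    if not in_K(scale(x, hi)):
        return float("inf")       # x is not absorbed within the search range
    for _ in range(tol_iters):    # bisection: x/hi stays in K, x/lo stays outside
        mid = (lo + hi) / 2
        if in_K(scale(x, mid)):
            hi = mid
        else:
            lo = mid
    return hi

unit_ball = lambda p: sum(t * t for t in p) <= 1.0   # Euclidean unit disk
print(gauge(unit_ball, (3.0, 4.0)))                  # about 5.0, the Euclidean norm

max_ball = lambda p: max(abs(t) for t in p) <= 1.0   # unit ball of the maximum norm
print(gauge(max_ball, (3.0, 4.0)))                   # about 4.0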
https://en.wikipedia.org/wiki/Phase-type%20distribution
A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions. It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases. It has a discrete-time equivalent the discrete phase-type distribution. The set of phase-type distributions is dense in the field of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution. Definition Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1,...,m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α0,α) where α0 is a scalar and α is a 1 × m vector. The continuous phase-type distribution is the distribution of time from the above process's starting until absorption in the absorbing state. This process can be written in the form of a transition rate matrix, where S is an m × m matrix and S0 = –S1. Here 1 represents an m × 1 column vector with every element being 1. Characterization The distribution of time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α,S). The distribution function of X is given by, and the density function, for all x > 0, where exp( · ) is the matrix exponential. It is usually assumed the probability of process starting in the absorbing state is zero (i.e. α0= 0). The moments of the distribution function are given by The Laplace transform of the phase type distribution is given by where I is the identity matrix. Special cases The following probability distributions are all considered special cases of a continuous phase-type distribution: Degenerate distribution, point mass at zero or the empty phase-type distribution 0 phases. Exponential distribution 1 phase. Erlang distribution 2 or more identical phases in sequence. Deterministic distribution (or constant) The limiting case of an Erlang distribution, as the number of phases become infinite, while the time in each state becomes zero. Coxian distribution 2 or more (not necessarily identical) phases in sequence, with a probability of transitioning to the terminating/absorbing state after each phase. Hyperexponential distribution (also called a mixture of exponential) 2 or more non-identical phases, that each have a probability of occurring in a mutually exclusive, or parallel, manner. (Note: The exponential distribution is the degenerate situation when all the parallel phases are identical.) Hypoexponential distribution 2 or more phases in sequence, can
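The distribution function can be evaluated directly from the matrix representation. The following Python sketch (using NumPy and SciPy, with an Erlang-2 example whose rate is chosen arbitrarily) computes F(x) = 1 − α exp(Sx) 1 with the matrix exponential, compares it with the known closed form, and checks the first moment against −α S⁻¹ 1.

import numpy as np
from scipy.linalg import expm

lam = 3.0
alpha = np.array([1.0, 0.0])                  # start in phase 1
S = np.array([[-lam,  lam],
              [ 0.0, -lam]])                  # subgenerator over the two transient phases
ones = np.ones(2)

def cdf_phase_type(x):
    return 1.0 - alpha @ expm(S * x) @ ones

def cdf_erlang2(x):
    return 1.0 - np.exp(-lam * x) * (1.0 + lam * x)

for x in (0.1, 0.5, 1.0, 2.0):
    print(x, cdf_phase_type(x), cdf_erlang2(x))   # the two columns agree

# The mean matches the moment formula E[X] = -alpha S^{-1} 1 = 2/lam for Erlang-2.
print(-alpha @ np.linalg.inv(S) @ ones, 2.0 / lam)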
https://en.wikipedia.org/wiki/Bicupola%20%28geometry%29
In geometry, a bicupola is a solid formed by connecting two cupolae on their bases. There are two classes of bicupola because each cupola (bicupola half) is bordered by alternating triangles and squares. If similar faces are attached together the result is an orthobicupola; if squares are attached to triangles it is a gyrobicupola. Cupolae and bicupolae categorically exist as infinite sets of polyhedra, just like the pyramids, bipyramids, prisms, and trapezohedra. Six bicupolae have regular polygon faces: triangular, square and pentagonal ortho- and gyrobicupolae. The triangular gyrobicupola is an Archimedean solid, the cuboctahedron; the other five are Johnson solids. Bicupolae of higher order can be constructed if the flank faces are allowed to stretch into rectangles and isosceles triangles. Bicupolae are special in having four faces on every vertex. This means that their dual polyhedra will have all quadrilateral faces. The best known example is the rhombic dodecahedron composed of 12 rhombic faces. The dual of the ortho-form, triangular orthobicupola, is also a dodecahedron, similar to rhombic dodecahedron, but it has 6 trapezoid faces which alternate long and short edges around the circumference. Forms Set of orthobicupolae Set of gyrobicupolae A -gonal gyrobicupola has the same topology as a -gonal rectified antiprism, Conway polyhedron notation, . References Norman W. Johnson, "Convex Solids with Regular Faces", Canadian Journal of Mathematics, 18, 1966, pages 169–200. Contains the original enumeration of the 92 solids and the conjecture that there are no others. The first proof that there are only 92 Johnson solids. Polyhedra
https://en.wikipedia.org/wiki/Clarence%20F.%20Stephens
Clarence Francis Stephens (July 24, 1917 – March 5, 2018) was the ninth African American to receive a Ph.D. in mathematics. He is credited with inspiring students and faculty at SUNY Potsdam to form the most successful United States undergraduate mathematics degree programs in the past century. Stephens was recognized by Mathematically Gifted & Black as a Black History Month 2018 Honoree. Early life The fifth of six children, he was orphaned at the age of eight. For his early education, he attended Harbison Agricultural and Industrial Institute, a boarding school for African-Americans in Irmo, South Carolina under Dean R. W. Bouleware and later President Rev John G. Porter. Stephens graduated from Johnson C. Smith University in 1938 with a B.S. degree in mathematics. He received his M.S. (1939) and his Ph.D. (1944) from the University of Michigan. He was the 9th African American to receive a Ph.D in mathematics––for a thesis on Non-Linear Difference Equations Analytic in a Parameter under James Nyswander. After serving in the U.S. Navy (1942–1946) as a Teaching Specialist, Dr. Stephens joined the mathematics faculty of Prairie View A&M University. The next year (1947) he was invited to join the mathematics faculty at Morgan State University. From research to teaching As a Mathematics Association of America (MAA) biography explains, “Dr. Stephens' focus was on being a research mathematician, so he accepted the position in part because he would be near a research library at Johns Hopkins University. While at Morgan State University, Dr. Stephens became appalled at what a poor job was being done in general to teach and inspire students to learn mathematics. He changed his focus from being a researcher to achieving excellence, with desirable results, in teaching mathematics. In 1953, he received a one-year Ford Fellowship to study at the Institute for Advanced Study in Princeton, New Jersey. Dr. Stephens remained at Morgan State until 1962, where is credited with initiating the program which led to five students achieving 91% to 99% on the graduate record exam in mathematics, three of these students (Earl R. Barnes, Arthur D. Grainger and Scott W. Williams) became the only three students of the same class at a Historically Black College to earn a PhD in mathematics. Stephens accepted an appointment as professor of mathematics at SUNY Geneseo. In 1969 he left Geneseo to join the mathematics faculty at SUNY Potsdam, where he served as chair of the mathematics department until his retirement in 1987. The MAA biography reports that during Dr. Stephens’ tenure at SUNY Potsdam "the department became nationally known as a model of teaching excellence in mathematics. For several of these years the program was among the top producers of mathematics majors in the country. The teaching techniques that Professor Stephens introduced at Potsdam, and earlier at Morgan State, have been adopted by many mathematics departments across the country. They have bee
https://en.wikipedia.org/wiki/Lambert%20quadrilateral
In geometry, a Lambert quadrilateral (also known as Ibn al-Haytham–Lambert quadrilateral), is a quadrilateral in which three of its angles are right angles. Historically, the fourth angle of a Lambert quadrilateral was of considerable interest since if it could be shown to be a right angle, then the Euclidean parallel postulate could be proved as a theorem. It is now known that the type of the fourth angle depends upon the geometry in which the quadrilateral exists. In hyperbolic geometry the fourth angle is acute, in Euclidean geometry it is a right angle and in elliptic geometry it is an obtuse angle. A Lambert quadrilateral can be constructed from a Saccheri quadrilateral by joining the midpoints of the base and summit of the Saccheri quadrilateral. This line segment is perpendicular to both the base and summit and so either half of the Saccheri quadrilateral is a Lambert quadrilateral. Lambert quadrilateral in hyperbolic geometry In hyperbolic geometry, for a Lambert quadrilateral AOBF in which the angles at A, O, and B are right, F is opposite O, the angle at F is acute, and the curvature is −1, a number of relations hold between the sides and the angle at F, expressed in terms of hyperbolic functions. Examples See also Non-Euclidean geometry Notes References George E. Martin, The Foundations of Geometry and the Non-Euclidean Plane, Springer-Verlag, 1975 M. J. Greenberg, Euclidean and Non-Euclidean Geometries: Development and History, 4th edition, W. H. Freeman, 2008. Hyperbolic geometry Types of quadrilaterals
https://en.wikipedia.org/wiki/Elongated%20dodecahedron
In geometry, the elongated dodecahedron, extended rhombic dodecahedron, rhombo-hexagonal dodecahedron or hexarhombic dodecahedron is a convex dodecahedron with 8 rhombic and 4 hexagonal faces. The hexagons can be made equilateral, or regular depending on the shape of the rhombi. It can be seen as constructed from a rhombic dodecahedron elongated by a square prism. Parallelohedron Along with the rhombic dodecahedron, it is a space-filling polyhedron, one of the five types of parallelohedron identified by Evgraf Fedorov that tile space face-to-face by translations. It has 5 sets of parallel edges, called zones or belts. Tessellation It can tessellate all space by translations. It is the Wigner–Seitz cell for certain body-centered tetragonal lattices. This is related to the rhombic dodecahedral honeycomb with an elongation of zero. Projected normal to the elongation direction, the honeycomb looks like a square tiling with the rhombi projected into squares. Variations The expanded dodecahedra can be distorted into cubic volumes, with the honeycomb as a half-offset stacking of cubes. It can also be made concave by adjusting the 8 corners downward by the same amount as the centers are moved up. The elongated dodecahedron can be constructed as a contraction of a uniform truncated octahedron, where square faces are reduced to single edges and regular hexagonal faces are reduced to 60 degree rhombic faces (or pairs of equilateral triangles). This construction alternates squares and rhombi on the 4-valence vertices, and has half the symmetry, D2h symmetry, order 8. See also Trapezo-rhombic dodecahedron Elongated octahedron Elongated gyrobifastigium References rhombo-hexagonal dodecahedron, p169 H.S.M. Coxeter, Regular Polytopes, Third edition, (1973), Dover edition, p. 257 External links Uniform space-filling using only rhombo-hexagonal dodecahedra Elongated dodecahedron VRML Model Space-filling polyhedra Zonohedra
https://en.wikipedia.org/wiki/Roller%20Coaster%20DataBase
Roller Coaster DataBase (RCDB) is a roller coaster and amusement park database begun in 1996 by Duane Marden. It has grown to feature statistics and pictures of over 10,000 roller coasters from around the world. Publications that have mentioned RCDB include The New York Times, Los Angeles Times, Toledo Blade, Orlando Sentinel, Time, Forbes, Mail & Guardian, and Chicago Sun-Times. History RCDB was started in 1996 by Duane Marden, a computer programmer from Brookfield, Wisconsin. The website is run off web servers in Marden's basement and a location in St. Louis. Content Each roller coaster entry includes any of the following information for the ride: current amusement park location, type, status (existing, standing but not operating (SBNO), defunct), opening date, make/model, cost, capacity, length, height, drop, number of inversions, speed, duration, maximum vertical angle, trains, and special notes. Entries may also feature reader-contributed photos and/or press releases. The site also categorizes the rides into special orders, including a list of the tallest coasters, a list of the fastest coasters, a list of the most inversions on a coaster, a list of the parks with the most inversions, etc., each sortable by steel, wooden, or both. Each roller coaster entry links back to a page which lists all of that park's roller coasters, past and present, and includes a brief history and any links to fan web pages saluting the park. Languages The site is available in ten languages: English, German, French, Spanish, Dutch, Portuguese, Italian, Swedish, Japanese and Simplified Chinese. References External links Internet properties established in 1996 1996 establishments in Wisconsin Online databases Roller coasters Entertainment databases
https://en.wikipedia.org/wiki/Markovian%20discrimination
Within the probability theory Markov model, Markovian discrimination in spam filtering is a method used in CRM114 and other spam filters to model the statistical behaviors of spam and nonspam more accurately than in simple Bayesian methods. A simple Bayesian model of written text contains only the dictionary of legal words and their relative probabilities. A Markovian model adds the relative transition probabilities that given one word, predict what the next word will be. It is based on the theory of Markov chains by Andrey Markov, hence the name. In essence, a Bayesian filter works on single words alone, while a Markovian filter works on phrases or entire sentences. There are two types of Markov models; the visible Markov model, and the hidden Markov model or HMM. The difference is that with a visible Markov model, the current word is considered to contain the entire state of the language model, while a hidden Markov model hides the state and presumes only that the current word is probabilistically related to the actual internal state of the language. For example, in a visible Markov model the word "the" should predict with accuracy the following word, while in a hidden Markov model, the entire prior text implies the actual state and predicts the following words, but does not actually guarantee that state or prediction. Since the latter case is what's encountered in spam filtering, hidden Markov models are almost always used. In particular, because of storage limitations, the specific type of hidden Markov model called a Markov random field is particularly applicable, usually with a clique size of between four and six tokens. See also Maximum-entropy Markov model References Chhabra, S., Yerazunis, W. S., and Siefkes, C. 2004. Spam Filtering using a Markov Random Field Model with Variable Weighting Schemas. In Proceedings of the Fourth IEEE international Conference on Data Mining (November 1–04, 2004). ICDM. IEEE Computer Society, Washington, DC, Mazharul Spam filtering Markov models Statistical natural language processing
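As a toy contrast between the two approaches (not CRM114's actual algorithm), the following Python sketch scores a message against invented spam and ham corpora with a unigram model, which looks at words alone, and a first-order Markov (bigram) model, which also scores each word-to-word transition. Corpora, smoothing and parameter values are made up for the illustration.

from collections import Counter
from math import log

def unigram_logprob(words, corpus, alpha=1.0):
    counts, total = Counter(corpus), len(corpus)
    vocab = len(set(corpus)) + 1
    return sum(log((counts[w] + alpha) / (total + alpha * vocab)) for w in words)

def bigram_logprob(words, corpus, alpha=1.0):
    pairs = Counter(zip(corpus, corpus[1:]))   # word-to-word transition counts
    firsts = Counter(corpus[:-1])
    vocab = len(set(corpus)) + 1
    score = 0.0
    for prev, cur in zip(words, words[1:]):
        score += log((pairs[(prev, cur)] + alpha) / (firsts[prev] + alpha * vocab))
    return score

spam = "buy cheap pills now buy cheap watches now".split()
ham = "meeting notes attached please review before the meeting".split()
message = "buy cheap pills".split()

for name, score in (("unigram", unigram_logprob), ("bigram", bigram_logprob)):
    ratio = score(message, spam) - score(message, ham)
    print(name, "log-likelihood ratio (spam vs ham):", round(ratio, 2))
# Both ratios are positive here; the bigram model additionally rewards the fact
# that "buy cheap" and "cheap pills" occur as phrases in the spam corpus.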
https://en.wikipedia.org/wiki/MIT%20Department%20of%20Mathematics
The Department of Mathematics at the Massachusetts Institute of Technology (also known as Course 18) is one of the premier mathematics departments both in the U.S. and the world. In the 2023 U.S. News & World Report rankings of the U.S. graduate programs for mathematics, MIT's program is ranked in the first place, tied only with that of Princeton University, and thereafter it is a three-way tie between Harvard University, Stanford University, and the University of California, Berkeley. The current faculty of around 50 members includes Wolf Prize winner Michael Artin, Shaw Prize winner George Lusztig, Gödel Prize winner Peter Shor, and numerical analyst Gilbert Strang. History Originally under John Daniel Runkle, mathematics at MIT was regarded as service teaching for engineers. Harry W Tyler succeeded Runkle after his death in 1902, and continued as its head until 1930. Tyler had been exposed to modern European mathematics and was influenced by Felix Klein and Max Noether (Parshall, Karen; Rowe, David E.; The Emergence of the American Mathematical Research Community 1876–1900: J. J. Sylvester, Felix Klein, and E. H. Moore; AMS/LMS History of Mathematics 8; Providence/London, 1994; pp. 229–230). Much of the early work was on geometry. Norbert Wiener, famous for his contribution to the mathematics of signal processing, joined the MIT faculty in 1919. By 1920, the department started publishing the Journal of Mathematics and Physics (in 1969 renamed as Studies in Applied Mathematics), a sign of its growing confidence; the first PhD was conferred to James E Taylor in 1925. Among illustrious members of the faculty were Norman Levinson and Gian-Carlo Rota. George B. Thomas wrote the widely used calculus textbook Calculus and Analytic Geometry, known today as Thomas' Calculus. Longtime faculty member Arthur Mattuck received several awards for his teaching of MIT undergraduates. References Joel Segel (editor) (2009) Recountings - Conversations with MIT Mathematicians, AK Peters External links MIT Mathematics Department website MIT OpenCourseWare: Mathematics Mathematics Department Mathematical institutes
https://en.wikipedia.org/wiki/Hyperexponential%20distribution
In probability theory, a hyperexponential distribution is a continuous probability distribution whose probability density function of the random variable X is given by where each Yi is an exponentially distributed random variable with rate parameter λi, and pi is the probability that X will take on the form of the exponential distribution with rate λi. It is named the hyperexponential distribution since its coefficient of variation is greater than that of the exponential distribution, whose coefficient of variation is 1, and the hypoexponential distribution, which has a coefficient of variation smaller than one. While the exponential distribution is the continuous analogue of the geometric distribution, the hyperexponential distribution is not analogous to the hypergeometric distribution. The hyperexponential distribution is an example of a mixture density. An example of a hyperexponential random variable can be seen in the context of telephony, where, if someone has a modem and a phone, their phone line usage could be modeled as a hyperexponential distribution where there is probability p of them talking on the phone with rate λ1 and probability q of them using their internet connection with rate λ2. Properties Since the expected value of a sum is the sum of the expected values, the expected value of a hyperexponential random variable can be shown as and from which we can derive the variance: The standard deviation exceeds the mean in general (except for the degenerate case of all the λs being equal), so the coefficient of variation is greater than 1. The moment-generating function is given by Fitting A given probability distribution, including a heavy-tailed distribution, can be approximated by a hyperexponential distribution by fitting recursively to different time scales using Prony's method. See also Phase-type distribution Hyper-Erlang distribution Lomax distribution (continuous mixture of exponentials) References Continuous distributions
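The telephony example above can be written out directly. The following Python sketch (with invented branch probabilities and rates) computes the mean, variance and coefficient of variation of a two-branch hyperexponential distribution from its moment formulas, confirms that the coefficient of variation exceeds 1, and checks the mean by sampling.

import random

p, lam1, lam2 = 0.6, 1.0, 10.0              # invented parameters for the example
branches = [(p, lam1), (1 - p, lam2)]

mean = sum(pi / li for pi, li in branches)                  # E[X] = sum p_i / lam_i
second_moment = sum(2 * pi / li**2 for pi, li in branches)  # E[X^2] = sum 2 p_i / lam_i^2
variance = second_moment - mean**2
cv = variance**0.5 / mean
print(mean, variance, cv)     # the coefficient of variation comes out greater than 1

def sample(rng=random):
    # pick a branch, then draw from that branch's exponential distribution
    rate = lam1 if rng.random() < p else lam2
    return rng.expovariate(rate)

random.seed(0)
xs = [sample() for _ in range(200000)]
print(sum(xs) / len(xs))      # close to the analytic mean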
https://en.wikipedia.org/wiki/Bertram%20Kostant
Bertram Kostant (May 24, 1928 – February 2, 2017) was an American mathematician who worked in representation theory, differential geometry, and mathematical physics. Early life and education Kostant grew up in New York City, where he graduated from Stuyvesant High School in 1945. He went on to obtain an undergraduate degree in mathematics from Purdue University in 1950. He earned his Ph.D. from the University of Chicago in 1954, under the direction of Irving Segal, where he wrote a dissertation on representations of Lie groups. Career in mathematics After time at the Institute for Advanced Study, Princeton University, and the University of California, Berkeley, he joined the faculty at the Massachusetts Institute of Technology, where he remained until his retirement in 1993. Kostant's work has involved representation theory, Lie groups, Lie algebras, homogeneous spaces, differential geometry and mathematical physics, particularly symplectic geometry. He has given several lectures on the Lie group E8. He has been one of the principal developers of the theory of geometric quantization. His introduction of the theory of prequantization has led to the theory of quantum Toda lattices. The Kostant partition function is named after him. With Gerhard Hochschild and Alex F. T. W. Rosenberg, he is one of the namesakes of the Hochschild–Kostant–Rosenberg theorem which describes the Hochschild homology of some algebras. His students include James Harris Simons, James Lepowsky, Moss Sweedler, David Vogan, and Birgit Speh. At present he has more than 100 mathematical descendants. Awards and honors Kostant was a Guggenheim Fellow in 1959-60 (in Paris), and a Sloan Fellow in 1961-63. In 1962 he was elected to the American Academy of Arts and Sciences, and in 1978 to the National Academy of Sciences. In 1982 he was a fellow of the Sackler Institute for Advanced Studies at Tel Aviv University. In 1990 he was awarded the Steele Prize of the American Mathematical Society, in recognition of his 1975 paper, “On the existence and irreducibility of certain series of representations.” In 2001, Kostant was a Chern Lecturer and Chern Visiting Professor at Berkeley. He received honorary degrees from the University of Córdoba in Argentina in 1989, the University of Salamanca in Spain in 1992, and Purdue University in 1997. The latter, from his alma mater, was an honorary Doctor of Science degree, citing his fundamental contributions to mathematics and the inspiration he and his work provided to generations of researchers. In May 2008, the Pacific Institute for Mathematical Sciences hosted a conference: “Lie Theory and Geometry: the Mathematical Legacy of Bertram Kostant,” at the University of British Columbia, celebrating the life and work of Kostant in his 80th year. In 2012, he was elected to the inaugural class of fellows of the American Mathematical Society. In the last year of his life, Kostant traveled to Rio de Janeiro for the Colloquium on Group Theoretical
https://en.wikipedia.org/wiki/Melvin%20Hochster
Melvin Hochster (born August 2, 1943) is an American mathematician working in commutative algebra. He is currently the Jack E. McLaughlin Distinguished University Professor Emeritus of Mathematics at the University of Michigan. Education Hochster attended Stuyvesant High School, where he was captain of the Math Team, and received a B.A. from Harvard University. While at Harvard, he was a Putnam Fellow in 1960. He earned his Ph.D. in 1967 from Princeton University, where he wrote a dissertation under Goro Shimura characterizing the prime spectra of commutative rings. Career He held positions at the University of Minnesota and Purdue University before joining the faculty at Michigan in 1977. Hochster's work is primarily in commutative algebra, especially the study of modules over local rings. He has established classic theorems concerning Cohen–Macaulay rings, invariant theory and homological algebra. For example, the Hochster–Roberts theorem states that the invariant ring of a linearly reductive group acting on a regular ring is Cohen–Macaulay. His best-known work is on the homological conjectures, many of which he established for local rings containing a field, thanks to his proof of the existence of big Cohen–Macaulay modules and his technique of reduction to prime characteristic. His most recent work on tight closure, introduced in 1986 with Craig Huneke, has found unexpected applications throughout commutative algebra and algebraic geometry. He has had more than 40 doctoral students, and the Association for Women in Mathematics has pointed out his outstanding role in mentoring women students pursuing a career in mathematics. He served as the chair of the department of Mathematics at the University of Michigan from 2008 to 2017. Awards Hochster shared the 1980 Cole Prize with Michael Aschbacher, received a Guggenheim Fellowship in 1981, and has been a member of both the National Academy of Sciences and the American Academy of Arts and Sciences since 1992. In 2008, on the occasion of his 65th birthday, he was honored with a conference in Ann Arbor and with a special volume of the Michigan Mathematical Journal. See also Monomial conjecture References External links Hochster's home page at Michigan Some expository papers by Mel Hochster Hochster's blurb at the National Academies The University Record (October 8, 2004) Men and the AWM Page of the conference in honor of Mel Hochster (2008) Members of the United States National Academy of Sciences 20th-century American mathematicians 21st-century American mathematicians Algebraists Stuyvesant High School alumni Harvard University alumni Princeton University alumni University of Michigan faculty Purdue University faculty University of Minnesota faculty Putnam Fellows Living people 1943 births Scientists from New York City Mathematicians from New York (state)
https://en.wikipedia.org/wiki/Peter%20Shalen
Peter B. Shalen (born c. 1946) is an American mathematician, working primarily in low-dimensional topology. He is the "S" in JSJ decomposition. Life He graduated from Stuyvesant High School in 1962, and went on to earn a B.A. from Harvard College in 1966 and his Ph.D. from Harvard University in 1972. After posts at Columbia University, Rice University, and the Courant Institute, he joined the faculty of the University of Illinois at Chicago. Shalen was a Sloan Foundation Research Fellow in mathematics (1977—1979). In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley, California. He was elected as a member of the 2017 class of Fellows of the American Mathematical Society "for contributions to three-dimensional topology and for exposition". Work His work with Marc Culler related properties of representation varieties of hyperbolic 3-manifold groups to decompositions of 3-manifolds. Based on this work, Culler, Cameron Gordon, John Luecke, and Shalen proved the cyclic surgery theorem. An important corollary of the theorem is that at most one nontrivial Dehn surgery (+1 or −1) on a knot can result in a simply-connected 3-manifold. This was an important piece of the Gordon–Luecke theorem that knots are determined by their complements. This paper is often referred to as "CGLS". With John W. Morgan, he generalized his work with Culler, and reproved several foundational results of William Thurston. Selected publications Shalen, Peter B. Separating, incompressible surfaces in 3-manifolds. Inventiones Mathematicae 52 (1979), no. 2, 105–126. Culler, Marc; Shalen, Peter B. Varieties of group representations and splittings of 3-manifolds. Annals of Mathematics (2) 117 (1983), no. 1, 109–146. Culler, Marc; Gordon, C. McA.; Luecke, J.; Shalen, Peter B. Dehn surgery on knots. Annals of Mathematics (2) 125 (1987), no. 2, 237–300. Morgan, John W.; Shalen, Peter B. Valuations, trees, and degenerations of hyperbolic structures. I. Ann. of Math. (2) 120 (1984), no. 3, 401–476. Morgan, John W.; Shalen, Peter B. Degenerations of hyperbolic structures. II. Measured laminations in 3-manifolds. Annals of Mathematics (2) 127 (1988), no. 2, 403–456. Morgan, John W.; Shalen, Peter B. Degenerations of hyperbolic structures. III. Actions of 3-manifold groups on trees and Thurston's compactness theorem. Annals of Mathematics (2) 127 (1988), no. 3, 457–519. References External links Shalen's home page at UIC Art Rothstein's Stuyvesant Math Team page 1940s births 20th-century American mathematicians 21st-century American mathematicians Topologists Stuyvesant High School alumni Harvard College alumni Columbia University faculty Rice University faculty University of Illinois Chicago faculty Living people Courant Institute of Mathematical Sciences faculty Fellows of the American Mathematical Society Mathematicians from New York (state)
https://en.wikipedia.org/wiki/Holditch%27s%20theorem
In plane geometry, Holditch's theorem states that if a chord of fixed length is allowed to rotate inside a convex closed curve, then the locus of a point on the chord a distance p from one end and a distance q from the other is a closed curve whose enclosed area is less than that of the original curve by πpq. The theorem was published in 1858 by Rev. Hamnet Holditch. While not mentioned by Holditch, the proof of the theorem requires an assumption that the chord be short enough that the traced locus is a simple closed curve. Observations The theorem is included as one of Clifford Pickover's 250 milestones in the history of mathematics. Some peculiarities of the theorem include that the area formula πpq is independent of both the shape and the size of the original curve, and that it is the same as the area of an ellipse with semi-axes p and q. The theorem's author was a president of Caius College, Cambridge. Extensions Broman gives a more precise statement of the theorem, along with a generalization. The generalization allows, for example, consideration of the case in which the outer curve is a triangle, so that the conditions of the precise statement of Holditch's theorem do not hold because the paths of the endpoints of the chord have retrograde portions (portions that retrace themselves) whenever an acute angle is traversed. Nevertheless, the generalization shows that if the chord is shorter than any of the triangle's altitudes, and is short enough that the traced locus is a simple curve, Holditch's formula for the in-between area is still correct (and remains so if the triangle is replaced by any convex polygon with a short enough chord). However, other cases result in different formulas. References Further reading B. Williamson, FRS, An elementary treatise on the integral calculus : containing applications to plane curves and surfaces, with numerous examples (Longmans, Green, London, 1875; 2nd 1877; 3rd 1880; 4th 1884; 5th 1888; 6th 1891; 7th 1896; 8th 1906; 1912, 1916, 1918, 1926); 1st 1875, pp. 192–193, with citation of Holditch's Prize Question set in The Lady's and Gentleman's Diary for 1857 (appearing in late 1856), with extension by Woolhouse in the issue for 1858; 5th 1888; 8th 1906 pp. 206–211 J. Edwards, A Treatise on the Integral Calculus with Applications, Examples and Problems, Vol. 1 (Macmillan, London, 1921), Chap. XV, esp. Sections 478, 481–491, 496 (see also Chap. XIX for instantaneous centers, roulettes and glisettes); expounds and references extensions due to Woolhouse, Elliott, Leudesdorf, Kempe, drawing on the earlier book of Williamson. External links Theorems in plane geometry
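A quick numerical check of the area formula is possible in the simplest setting, where the outer curve is a circle. The following sketch (illustrative code, not drawn from the sources above) slides a chord of length p + q around a circle of radius R, traces the point at distance p from one end, and compares the enclosed areas with the shoelace formula; the deficit should come out as πpq.

```python
import numpy as np

# Holditch's theorem, special case: outer curve is a circle of radius R.
# A chord of length p + q slides around inside it; the tracked point sits
# a distance p from one end and q from the other.  The area enclosed by
# its locus should be exactly pi*p*q less than the area of the circle.

R, p, q = 3.0, 1.2, 0.7
L = p + q                                   # chord length (must satisfy L <= 2R)

theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
alpha = np.arcsin(L / (2.0 * R))            # half-angle subtended by the chord

# Chord endpoints A and B on the circle, then the tracked point P.
A = R * np.column_stack((np.cos(theta - alpha), np.sin(theta - alpha)))
B = R * np.column_stack((np.cos(theta + alpha), np.sin(theta + alpha)))
P = A + (p / L) * (B - A)                   # distance p from A, q from B

# Shoelace formula for the area enclosed by the traced locus.
x, y = P[:, 0], P[:, 1]
locus_area = 0.5 * abs(np.sum(x * np.roll(y, -1) - y * np.roll(x, -1)))

circle_area = np.pi * R**2
print("area deficit   :", circle_area - locus_area)
print("Holditch pi*p*q:", np.pi * p * q)
```

In this special case the locus is itself a circle of radius sqrt(R² − pq), so the two printed numbers agree to the accuracy of the discretization.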
https://en.wikipedia.org/wiki/Trapezo-rhombic%20dodecahedron
In geometry, the trapezo-rhombic dodecahedron or rhombo-trapezoidal dodecahedron is a convex dodecahedron with 6 rhombic and 6 trapezoidal faces. It has D3h symmetry. A concave form can be constructed with an identical net, seen as excavating trigonal trapezohedra from the top and bottom. It is also called the trapezoidal dodecahedron. Construction This polyhedron could be constructed by taking a tall uniform hexagonal prism, and making 3 angled cuts on the top and bottom. The trapezoids represent what remains of the original prism sides, and the 6 rhombi a result of the top and bottom cuts. Space-filling tessellation A space-filling tessellation, the trapezo-rhombic dodecahedral honeycomb, can be made by translated copies of this cell. Each "layer" is a hexagonal tiling, or a rhombille tiling, and alternate layers are connected by shifting their centers and rotating each polyhedron so the rhombic faces match up. In the special case that the long sides of the trapezoids equal twice the length of the short sides, the solid represents the 3D Voronoi cell of a sphere in a hexagonal close packing, which, alongside face-centered cubic packing, is an optimal way to stack spheres in a lattice. It is therefore related to the rhombic dodecahedron, which can be obtained by rotating the lower half of the trapezo-rhombic dodecahedron by an angle of 60 degrees. The rhombic dodecahedron is a Voronoi cell of the other optimal way to stack spheres. The two shapes differ in their combinatorial structure as well as in their geometry: in the rhombic dodecahedron, every edge connects a degree-three vertex to a degree-four vertex, whereas the trapezo-rhombic dodecahedron has six edges that connect vertices of equal degrees. As the Voronoi cell of a regular space pattern, it is a plesiohedron. It is the polyhedral dual of the triangular orthobicupola. Variations The trapezo-rhombic dodecahedron can be seen as an elongation of another dodecahedron, which can be called a rhombo-triangular dodecahedron, with 6 rhombi (or squares) and 6 triangles. It also has D3h symmetry and is space-filling. It has 21 edges and 11 vertices. With square faces it can be seen as a cube split across the 3-fold axis, separated with the two halves rotated 180 degrees, and filling the gaps with triangles. When used as a space-filler, connecting dodecahedra on their triangles leaves two cubical step surfaces on the top and bottom which can connect with complementary steps. See also Elongated dodecahedron Hexagonal prismatic honeycomb References Further reading Mathematical Recreations and Essays Walter William Rouse Ball, Harold Scott Macdonald Coxeter, p.151 Structure in Nature Is a Strategy for Design, Peter Jon Pearce, p.48 Spacefilling systems based on rhombic dodecahedron External links VRML model Space-filling polyhedra
https://en.wikipedia.org/wiki/Exhaustion%20by%20compact%20sets
In mathematics, especially general topology and analysis, an exhaustion by compact sets of a topological space X is a nested sequence of compact subsets K_1, K_2, K_3, ... of X (i.e. K_1 ⊆ K_2 ⊆ K_3 ⊆ ...), such that each K_i is contained in the interior of K_{i+1}, i.e. K_i ⊆ int(K_{i+1}) for each i, and X = ⋃_i K_i. A space admitting an exhaustion by compact sets is called exhaustible by compact sets. For example, consider X = R^n and the sequence of closed balls K_i = {x ∈ R^n : |x| ≤ i}. Occasionally some authors drop the requirement that K_i is in the interior of K_{i+1}, but then the property becomes the same as the space being σ-compact, namely a countable union of compact subsets. Properties The following are equivalent for a topological space X: X is exhaustible by compact sets. X is σ-compact and weakly locally compact. X is Lindelöf and weakly locally compact. (where weakly locally compact means locally compact in the weak sense that each point has a compact neighborhood). The hemicompact property is intermediate between exhaustible by compact sets and σ-compact. Every space exhaustible by compact sets is hemicompact and every hemicompact space is σ-compact, but the reverse implications do not hold. For example, the Arens-Fort space and the Appert space are hemicompact, but not exhaustible by compact sets (because not weakly locally compact), and the set of rational numbers with the usual topology is σ-compact, but not hemicompact. Every regular space exhaustible by compact sets is paracompact. Notes References Leon Ehrenpreis, Theory of Distributions for Locally Compact Spaces, American Mathematical Society, 1982. Hans Grauert and Reinhold Remmert, Theory of Stein Spaces, Springer Verlag (Classics in Mathematics), 2004. External links Compactness (mathematics) Mathematical analysis General topology
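As a slightly less trivial example than the closed balls above, any open subset U of R^n is exhaustible by compact sets. The display below is a standard construction, stated here as a sketch (with dist(x, ∅) read as +∞ in the degenerate case U = R^n).

```latex
% A standard exhaustion of an open set U \subseteq \mathbb{R}^n by compact sets:
% each K_j is closed and bounded (hence compact), K_j lies in the interior of
% K_{j+1}, and the union of all K_j is U.
\[
  K_j \;=\; \Bigl\{\, x \in U \;:\; |x| \le j
      \ \text{ and }\ \operatorname{dist}\bigl(x,\ \mathbb{R}^n \setminus U\bigr) \ge \tfrac{1}{j} \,\Bigr\},
  \qquad j = 1, 2, 3, \ldots
\]
```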
https://en.wikipedia.org/wiki/David%20Harbater
David Harbater (born December 19, 1952) is an American mathematician at the University of Pennsylvania, well known for his work in Galois theory, algebraic geometry and arithmetic geometry. Early life and education Harbater was born in New York City and attended Stuyvesant High School, where he was on the math team. After graduating in 1970, he entered Harvard University. After graduating summa cum laude in 1974, Harbater earned a master's degree from Brandeis University and then a Ph.D. in 1978 from MIT, where he wrote a dissertation (Deformation Theory and the Fundamental Group in Algebraic Geometry) under the direction of Michael Artin. Research He solved the inverse Galois problem over , and made many other significant contributions to the field of Galois theory. Harbater's recent work on patching over fields, together with Julia Hartmann and Daniel Krashen, has had applications in such varied fields as quadratic forms, central simple algebras and local-global principles. Awards and honors In 1995, Harbater was awarded the Cole Prize for his solution, with Michel Raynaud, of the long outstanding Abhyankar conjecture. In 2012, he became a fellow of the American Mathematical Society. Selected publications References External links Recollections of Arthur Rothstein, (Math Teammate of Harbater) Cole Prize citation for David Harbater Harbater's home page at Penn 1952 births Living people 20th-century American mathematicians 21st-century American mathematicians Group theorists Fellows of the American Mathematical Society Stuyvesant High School alumni Harvard University alumni Brandeis University alumni Massachusetts Institute of Technology alumni Scientists from New York City Mathematicians from New York (state) University of Pennsylvania faculty Mathematicians at the University of Pennsylvania
https://en.wikipedia.org/wiki/Impact%20calculus
In policy and public forum debates, impact calculus, also known as weighing impacts, is a type of argumentation which seeks to compare the impacts presented in both causes and effects. Basic Impact Calculus There are several basic types of impact calculus that compare the impacts of the plan to the impacts of a disadvantage: Substantiality (one impact is more realistic than the other) e.g. Economic collapse is more seriously realistic than an outbreak of grey goo, therefore the risk of economic collapse outweighs the probability of a grey goo disaster. Probability is really important in a debate Probability has to have some substantiated historical evidence beyond theory to move to precedence or initial risk already incurred. Timeframe (one impact will happen sooner) e.g. An asteroid impact will cause extinction before Global warming will, therefore an asteroid impact outweighs Global Warming. Magnitude (one impact is bigger) e.g. Nuclear war could kill more people instantly than car accidents cumulatively over a long period. Severity (one impact is more causal) e.g. Air pollution increasing illnesses matter more than a loss in a country's GDP because each sickness is more harmful than each loss of money. Because money is intrinsically derived from productivity that maintains or improves health, productivity is inherently much less during illnesses Other types of impact calculus Some other more sophisticated arguments are also considered impact calculus: Impact inclusivity (one impact is inclusive of the other) e.g. Global war is inclusive of a Taiwan war, therefore global war outweighs Taiwan war. Root Cause (one impact causes the other impact to happen) e.g. War causes genocide, therefore war outweighs genocide Internal link shortcircuiting (one impact prevents a (positive) impact from happening) e.g. Nuclear war halts space colonization, therefore nuclear war outweighs space colonization Reversibility e.g. Civil liberties lost in the name of security during a time of crisis can be restored later, but deaths caused by a lack of security are irreversible. Link Strength e.g. Improbable double-dip recessions may be caused by enacting trade sanctions, but more solid arguments with better evidence should be considered more strongly. (This may be considered a type of Substantiability argument, where more firmly-established nontheoretic effects are more important.) Approach arguments can also be considered impact calculus. Arguments as to why the judge should adopt a utilitarian or consequentialist perspective or conversely a deontological perspective may change the way they compare impacts. Impact calculus and "new" arguments Basic impact calculus arguments may be made at any time and are generally not considered "new" arguments, even if brought up for the first time in the 2NR or 2AR. More sophisticated forms of impact calculus should generally be brought up earlier in the debate and supported by evidence whenever possible. Referen
https://en.wikipedia.org/wiki/Truncated%20trapezohedron
In geometry, an n-gonal truncated trapezohedron is a polyhedron formed by an n-gonal trapezohedron with pyramids truncated from its two polar axis vertices. If the polar vertices are completely truncated (diminished), a trapezohedron becomes an antiprism. The vertices exist as 4 n-gons in four parallel planes, with alternating orientation in the middle creating the pentagons. The regular dodecahedron is the most common polyhedron in this class, being a Platonic solid, with 12 congruent pentagonal faces. A truncated trapezohedron has all vertices with 3 faces. This means that the dual polyhedra, the set of gyroelongated dipyramids, have all triangular faces. For example, the icosahedron is the dual of the dodecahedron. Forms Triangular truncated trapezohedron (Dürer's solid) – 6 pentagons, 2 triangles, dual gyroelongated triangular dipyramid Truncated square trapezohedron – 8 pentagons, 2 squares, dual gyroelongated square dipyramid Truncated pentagonal trapezohedron or regular dodecahedron – 12 pentagonal faces, dual icosahedron Truncated hexagonal trapezohedron – 12 pentagons, 2 hexagons, dual gyroelongated hexagonal dipyramid ... Truncated n-gonal trapezohedron – 2n pentagons, 2 n-gons, dual gyroelongated dipyramids See also Diminished trapezohedron External links Conway Notation for Polyhedra Try: "tndAn", where n=4,5,6... example "t5dA5" is a dodecahedron. Polyhedra Truncated tilings
https://en.wikipedia.org/wiki/Truncated%20hexagonal%20trapezohedron
In geometry, the truncated hexagonal trapezohedron is the fourth in an infinite series of truncated trapezohedra. It has 12 pentagon and 2 hexagon faces. It can be constructed by taking a hexagonal trapezohedron and truncating the polar axis vertices. Weaire–Phelan structure Another form of this polyhedron has D2d symmetry and is a part of a space-filling honeycomb along with an irregular dodecahedron, called Weaire–Phelan structure. See also Goldberg polyhedron External links Conway Notation for Polyhedra Try: "t6dA6". Polyhedra
https://en.wikipedia.org/wiki/Elongated%20hexagonal%20bipyramid
In geometry, the elongated hexagonal bipyramid is constructed by elongating a hexagonal bipyramid (by inserting a hexagonal prism between its congruent halves). Related polyhedra This polyhedron is in the family of elongated bipyramids, of which the first three can be Johnson solids: J14, J15, and J16. The hexagonal form can be constructed by all regular faces but is not a Johnson solid because 6 equilateral triangles would form six co-planar faces (in a regular hexagon). Uses A quartz crystal is an example of an elongated hexagonal bipyramid. Because it has 18 faces, it can be called an octadecahedron. Other chemicals also have this shape. The edge-first orthogonal projection of a 24-cell is an elongated hexagonal bipyramid. Used as the shape of Fruit Gushers candy. Used as a physical manifestation for assisting various branches of three-dimensional graph theory. References Pyramids and bipyramids
https://en.wikipedia.org/wiki/Truncated%20square%20trapezohedron
In geometry, the square truncated trapezohedron is the second in an infinite series of truncated trapezohedra. It has 8 pentagon and 2 square faces. This polyhedron can be constructed by taking a tetragonal trapezohedron and truncating the polar axis vertices. The kite faces of the trapezohedron become pentagons. The vertices exist as 4 squares in four parallel planes, with alternating orientation in the middle creating the pentagons. A truncated trapezohedron has all valence-3 vertices. This means that its dual polyhedron, a gyroelongated square dipyramid, has all triangular faces. It represents the dual polyhedron to the Johnson solid, the gyroelongated square dipyramid (J17), with specific proportions. Polyhedra
https://en.wikipedia.org/wiki/Triangular%20bifrustum
In geometry, the triangular bifrustum is the second in an infinite series of bifrustum polyhedra. It has 6 trapezoid and 2 triangle faces. It may also be called the truncated triangular bipyramid; however, that term is ambiguous, as it may also refer to polyhedra formed by truncating all five vertices of a triangular bipyramid. This polyhedron can be constructed by taking a triangular bipyramid and truncating the polar axis vertices, making it into two end-to-end frustums. It appears as the form of certain nanocrystals. A truncated triangular bipyramid can be constructed by connecting two stacked regular octahedra with 3 pairs of tetrahedra around the sides. This represents a portion of the gyrated alternated cubic honeycomb. References External links Conway Notation for Polyhedra Try: t3dP3 Polyhedra
https://en.wikipedia.org/wiki/Elongated%20bipyramid
In geometry, the elongated bipyramids are an infinite set of polyhedra, constructed by elongating an n-gonal bipyramid (by inserting an n-gonal prism between its congruent halves). There are three elongated bipyramids that are Johnson solids: Elongated triangular bipyramid (J14), Elongated square bipyramid (J15), and Elongated pentagonal bipyramid (J16). Higher forms can be constructed with isosceles triangles. Forms See also Gyroelongated bipyramid Gyroelongated pyramid Elongated pyramid Diminished trapezohedron References Norman W. Johnson, "Convex Solids with Regular Faces", Canadian Journal of Mathematics, 18, 1966, pages 169–200. Contains the original enumeration of the 92 solids and the conjecture that there are no others. The first proof that there are only 92 Johnson solids. Pyramids and bipyramids
https://en.wikipedia.org/wiki/Particle%20statistics
Particle statistics is a particular description of multiple particles in statistical mechanics. A key prerequisite concept is that of a statistical ensemble (an idealization comprising the state space of possible states of a system, each labeled with a probability) that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble describes a system of particles with similar properties, their number is called the particle number and usually denoted by N. Classical statistics In classical mechanics, all particles (fundamental and composite particles, atoms, molecules, electrons, etc.) in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, switching the positions of any pair of particles in the system leads to a different configuration of the system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. These characteristics of classical positions are called Maxwell–Boltzmann statistics. Quantum statistics The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an ensemble of similar particles, interchanging any two particles does not lead to a new configuration of the system. In the language of quantum mechanics this means that the wave function of the system is invariant up to a phase with respect to the interchange of the constituent particles. In the case of a system consisting of particles of different kinds (for example, electrons and protons), the wave function of the system is invariant up to a phase separately for both assemblies of particles. The applicable definition of a particle does not require it to be elementary or even "microscopic", but it requires that all its degrees of freedom (or internal states) that are relevant to the physical problem considered shall be known. All quantum particles, such as leptons and baryons, in the universe have three translational motion degrees of freedom (represented with the wave function) and one discrete degree of freedom, known as spin. Progressively more "complex" particles obtain progressively more internal freedoms (such as various quantum numbers in an atom), and, when the number of internal states that "identical" particles in an ensemble can occupy dwarfs their count (the particle number), then effects of quantum statistics become negligible. That's why quantum statistics is useful when one considers, say, helium liquid or ammonia gas (its molecules have a large, but conceivable number of internal states), but is useless applied to systems constructed of macromolecules. While this difference between classical and quantum descriptions of systems is fundamental to all of quantum statistics, quantum particles are divided into two further classes on th
https://en.wikipedia.org/wiki/Punching%20power
Punching power is the amount of kinetic energy in a person's punches. Knockout power is a similar concept relating to the probability of any strike to the head to cause unconsciousness or a strike to the body that renders an opponent unable to continue fighting. Knockout power is related to the force delivered, the timing, the technique, precision of the strike, among other factors. In order to increase the mass behind a punch, it is essential to move the body as a unit throughout the punch. Power is generated from the ground up, such that force from the ankles transfers to the knees; force from the knees transfers to the thighs; force from the thighs transfers to the core; from the core to the chest; from the chest to the shoulders; from the shoulders to the forearms and finally the compounded force transfers through the fist into an opponent. So the most powerful punchers are able to connect their whole body and channel the force from each portion of the body into a punch. Generally, there are five components to punching power that must be present for a puncher to be considered truly powerful: lack of arm punching, proper weight shifting, stepping during a punch, pivoting with a punch, and using proper footwork. This body connection requires the development of a strong core. The core is perhaps the most important element in a powerful punch, since it connects the powerhouse of the legs to the delivery system of the arms. While the core may be important, experienced boxers have greater contributions from the legs compared to less experienced boxers meaning strong powerful legs are the foundation for punching power. When it comes to throwing a powerful jab, one must take a lead step forward to allow the body's momentum to carry forward with a braced lead arm to transmit this momentum to the target. See also Boxing Boxing styles and technique Chin (combat sports) Kickboxing Knockout Mixed Martial Arts Muay Thai Punch (combat) King hit (slang) Fa jin/Kinetic Linking References Strikes (martial arts) Boxing terminology Kickboxing terminology
https://en.wikipedia.org/wiki/Bernstein%27s%20theorem%20on%20monotone%20functions
In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value. Total monotonicity (sometimes also complete monotonicity) of a function f means that f is continuous on [0, ∞), infinitely differentiable on (0, ∞), and satisfies (−1)^n f^(n)(t) ≥ 0 for all nonnegative integers n and for all t > 0. Another convention puts the opposite inequality in the above definition. The "weighted average" statement can be characterized thus: there is a non-negative finite Borel measure on [0, ∞) with cumulative distribution function g such that f(t) = ∫_0^∞ e^(−ts) dg(s), the integral being a Riemann–Stieltjes integral. In more abstract language, the theorem characterises Laplace transforms of positive Borel measures on [0, ∞). In this form it is known as the Bernstein–Widder theorem, or Hausdorff–Bernstein–Widder theorem. Felix Hausdorff had earlier characterised completely monotone sequences. These are the sequences occurring in the Hausdorff moment problem. Bernstein functions Nonnegative functions whose derivative is completely monotone are called Bernstein functions. Every Bernstein function f has the Lévy–Khintchine representation: f(t) = a + bt + ∫_0^∞ (1 − e^(−ts)) μ(ds), where a, b ≥ 0 and μ is a measure on the positive real half-line such that ∫_0^∞ min(1, s) μ(ds) < ∞. References External links MathWorld page on completely monotonic functions Theorems in real analysis Theorems in measure theory
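A small worked example may help make the statement concrete. The function f(t) = 1/(1 + t) is totally monotone on the half-line, and its representing measure in the theorem can be written down explicitly; the verification below is a routine computation, stated here for illustration rather than taken from the sources above.

```latex
% Worked example: f(t) = 1/(1+t) on [0, \infty) is totally (completely) monotone,
% and its mixing measure in Bernstein's theorem has density e^{-s}.
\[
  (-1)^n f^{(n)}(t) \;=\; \frac{n!}{(1+t)^{\,n+1}} \;\ge\; 0
  \qquad (n = 0, 1, 2, \ldots,\ t > 0),
\]
\[
  \frac{1}{1+t} \;=\; \int_0^{\infty} e^{-ts}\, e^{-s}\, ds .
\]
% Here dg(s) = e^{-s} ds is a probability measure, so the "mixture of
% exponentials" is in this case an expected value, as mentioned above.
```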
https://en.wikipedia.org/wiki/Rose%20%28topology%29
In mathematics, a rose (also known as a bouquet of n circles) is a topological space obtained by gluing together a collection of circles along a single point. The circles of the rose are called petals. Roses are important in algebraic topology, where they are closely related to free groups. Definition A rose is a wedge sum of circles. That is, the rose is the quotient space C/S, where C is a disjoint union of circles and S a set consisting of one point from each circle. As a cell complex, a rose has a single vertex, and one edge for each circle. This makes it a simple example of a topological graph. A rose with n petals can also be obtained by identifying n points on a single circle. The rose with two petals is known as the figure eight. Relation to free groups The fundamental group of a rose is free, with one generator for each petal. The universal cover is an infinite tree, which can be identified with the Cayley graph of the free group. (This is a special case of the presentation complex associated to any presentation of a group.) The intermediate covers of the rose correspond to subgroups of the free group. The observation that any cover of a rose is a graph provides a simple proof that every subgroup of a free group is free (the Nielsen–Schreier theorem) Because the universal cover of a rose is contractible, the rose is actually an Eilenberg–MacLane space for the associated free group F. This implies that the cohomology groups Hn(F) are trivial for n ≥ 2. Other properties Any connected graph is homotopy equivalent to a rose. Specifically, the rose is the quotient space of the graph obtained by collapsing a spanning tree. A disc with n points removed (or a sphere with n + 1 points removed) deformation retracts onto a rose with n petals. One petal of the rose surrounds each of the removed points. A torus with one point removed deformation retracts onto a figure eight, namely the union of two generating circles. More generally, a surface of genus g with one point removed deformation retracts onto a rose with 2g petals, namely the boundary of a fundamental polygon. A rose can have infinitely many petals, leading to a fundamental group which is free on infinitely many generators. The rose with countably infinitely many petals is similar to the Hawaiian earring: there is a continuous bijection from this rose onto the Hawaiian earring, but the two are not homeomorphic. A rose with infinitely many petals is not compact, whereas the Hawaiian earring is compact. See also Bouquet graph Free group List of topologies Petal projection Quadrifolium Topological graph References Topological spaces Algebraic topology
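The spanning-tree remark above gives a convenient petal count. For a finite connected graph with V vertices and E edges, collapsing a spanning tree (which has V − 1 edges) leaves a rose whose number of petals is the circuit rank of the graph; the short display below records this standard consequence for illustration.

```latex
% Petal count for a finite connected graph with V vertices and E edges:
% collapsing a spanning tree (V - 1 edges) leaves a rose with
\[
  n \;=\; E - V + 1
\]
% petals, so the fundamental group of the graph is free of rank E - V + 1.
% Example: the complete graph K_4 (V = 4, E = 6) is homotopy equivalent to a
% rose with 3 petals, and its fundamental group is free of rank 3.
```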
https://en.wikipedia.org/wiki/Mathomatic
Mathomatic is a free, portable, general-purpose computer algebra system (CAS) that can symbolically solve, simplify, combine, and compare algebraic equations, and can perform complex number, modular, and polynomial arithmetic, along with standard arithmetic. It does some symbolic calculus (derivative, extrema, Taylor series, and polynomial integration and Laplace transforms), numerical integration, and handles all elementary algebra except logarithms. Trigonometric functions can be entered and manipulated using complex exponentials, with the GNU m4 preprocessor. Not currently implemented are general functions like f(x), arbitrary-precision and interval arithmetic, and matrices. Features Mathomatic is capable of solving, differentiating, simplifying, calculating, and visualizing elementary algebra. It also can perform summations, products, and automated display of calculations of any length by plugging sequential or test values into any formula, then approximating and simplifying before display. Intermediate results (showing the work) may be displayed by previously typing "set debug 1" (see the session example); this works for solving and almost every command in Mathomatic. "set debug 2" shows more details about the work done. The software does not include a GUI except with the Mathomatic trademark authorized, versions for smartphones and tablets running iOS or Android. The Mathomatic software, available on the official Mathomatic website, is authorized for use in any other type of software, due to its permissive free software license (GNU LGPL). It is available as a free software library, and as a free console mode application that uses a color command-line interface with pretty-print output that runs in a terminal emulator under any operating system. The console interface is simple and requires learning the basic algebra notation to start. All input and output is line-at-a-time ASCII text. By default, input is standard input and output is standard output. Mathomatic is typically compiled with editline or GNU readline for easier input. There is no programming capability; the interpreter works like an algebraic calculator. Expressions and equations are entered in standard algebraic infix notation. Operations are performed on them by entering simple English commands. Because all numeric arithmetic is double precision floating point, and round-off error is not tracked, Mathomatic is not suitable for applications requiring high precision, such as astronomical calculations. It is useful for symbolic-numeric calculations of about 14 decimal digits accuracy, although many results will be exact, if possible. Mathomatic can be used as a floating point or integer arithmetic code generating tool, simplifying and converting equations into optimized assignment statements in the Python, C, and Java programming languages. The output can be made compatible with most other mathematics programs, except TeX and MathML format input/output are currently not a
https://en.wikipedia.org/wiki/Pseudospectrum
In mathematics, the pseudospectrum of an operator is a set containing the spectrum of the operator and the numbers that are "almost" eigenvalues. Knowledge of the pseudospectrum can be particularly useful for understanding non-normal operators and their eigenfunctions. The ε-pseudospectrum of a matrix A consists of all eigenvalues of matrices which are ε-close to A: Λ_ε(A) = {λ ∈ C : λ ∈ Λ(A + E) for some perturbation E with ||E|| < ε}. Numerical algorithms which calculate the eigenvalues of a matrix give only approximate results due to rounding and other errors. These errors can be described with the matrix E. More generally, for Banach spaces X and operators A : X → X, one can define the ε-pseudospectrum of A (typically denoted by σ_ε(A)) in the following way: σ_ε(A) = {λ ∈ C : ||(λI − A)^(−1)|| > 1/ε}, where we use the convention that ||(λI − A)^(−1)|| = ∞ if λI − A is not invertible. Notes Bibliography Lloyd N. Trefethen and Mark Embree: "Spectra And Pseudospectra: The Behavior of Nonnormal Matrices And Operators", Princeton Univ. Press, (2005). External links Pseudospectra Gateway by Embree and Trefethen Numerical linear algebra Spectral theory
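For matrices and the 2-norm, the resolvent condition above makes the set easy to approximate on a grid: a point z lies in the ε-pseudospectrum exactly when the smallest singular value of zI − A is below ε. The sketch below uses that characterization; the example matrix and grid are arbitrary choices made for illustration.

```python
import numpy as np

# Approximate the 2-norm epsilon-pseudospectrum of a small nonnormal matrix
# on a grid, using the equivalent characterization
#   z in the eps-pseudospectrum  <=>  sigma_min(z*I - A) < eps,
# i.e. ||(z*I - A)^{-1}|| > 1/eps, where sigma_min is the smallest singular value.

A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])             # nonnormal: strong off-diagonal coupling
eps = 1e-1
n = A.shape[0]

xs = np.linspace(-8.0, 5.0, 261)
ys = np.linspace(-6.0, 6.0, 241)
inside = np.zeros((len(ys), len(xs)), dtype=bool)

for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = x + 1j * y
        smin = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
        inside[i, j] = smin < eps

# The eigenvalues -1 and -2 always lie inside; for this nonnormal A the
# eps-pseudospectrum is far larger than two disks of radius eps around them.
print("grid points inside the pseudospectrum:", int(inside.sum()))
```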
https://en.wikipedia.org/wiki/2004%20World%20Cup%20of%20Hockey%20statistics
These are the individual player statistics for the 2004 World Cup of Hockey. Canada Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Czech Republic Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Germany Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Finland Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Russia Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Slovakia Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Sweden Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes United States Note: GP = Games Played, G = Goals, A = Assists, Pts = Points, PIM = Penalty Minutes Goalies Note: W = Wins, L = Losses, T = Ties, GAA = Goals Against Average, SPCT = Save Percentage References The Hockey News Volume 58 Issue No. 5. Statistics Ice hockey statistics
https://en.wikipedia.org/wiki/Discontinuous%20linear%20map
In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example. A linear map from a finite-dimensional space is always continuous Let X and Y be two normed spaces and a linear map from X to Y. If X is finite-dimensional, choose a basis in X which may be taken to be unit vectors. Then, and so by the triangle inequality, Letting and using the fact that for some C>0 which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds Thus, is a bounded linear operator and so is continuous. In fact, to see this, simply note that f is linear, and therefore for some universal constant K. Thus for any we can choose so that ( and are the normed balls around and ), which gives continuity. If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y. A concrete example Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence of linearly independent vectors which does not have a limit, there is a linear operator such that the quantities grow without bound. In a sense, the linear operators are not continuous because the space has "holes". For example, consider the space X of real-valued smooth functions on the interval [0, 1] with the uniform norm, that is, The derivative-at-a-point map, given by defined on X and with real values, is linear, but not continuous. Indeed, consider the sequence for This sequence converges uniformly to the constantly zero function, but as instead of which would hold for a continuous map. Note that T is real-valued, and so is actually a linear functional on X (an element of the algebraic dual space X*). The linear map X → X which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it is closed. The fact that the domain is not complete here is important. Discontinuous operators on complete spaces require a little more work. A nonconstructive example An algebraic basis for the re
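The discontinuity of the derivative-at-a-point functional can also be seen numerically. The sketch below uses one standard choice of sequence, f_n(x) = sin(n²x)/n, stated here purely as an illustration: the uniform norms shrink like 1/n while the values f_n'(0) = n grow without bound, which is exactly the unboundedness described above.

```python
import numpy as np

# Illustration of why T(f) = f'(0) is a discontinuous (unbounded) linear
# functional on smooth functions with the uniform norm on [0, 1].
# Illustrative sequence: f_n(x) = sin(n^2 x) / n.
#   sup norm : ||f_n|| = 1/n         -> 0        (uniform convergence to 0)
#   value    : T(f_n) = f_n'(0) = n  -> infinity (instead of T(0) = 0)

x = np.linspace(0.0, 1.0, 100_001)

for n in (2, 10, 50, 250):
    f_n = np.sin(n**2 * x) / n
    sup_norm = np.max(np.abs(f_n))            # numerically ~ 1/n
    derivative_at_0 = n**2 * np.cos(0.0) / n  # exact: f_n'(0) = n
    print(f"n={n:4d}  ||f_n||_sup = {sup_norm:.4f}   f_n'(0) = {derivative_at_0:.0f}")
```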
https://en.wikipedia.org/wiki/Beniamino%20Segre
Beniamino Segre (16 February 1903 – 2 October 1977) was an Italian mathematician who is remembered today as a major contributor to algebraic geometry and one of the founders of finite geometry. Life and career He was born and studied in Turin. Corrado Segre, his uncle, also served as his doctoral advisor. Among his main contributions to algebraic geometry are studies of birational invariants of algebraic varieties, singularities and algebraic surfaces. His work was in the style of the old Italian School, although he also appreciated the greater rigour of modern algebraic geometry. Segre was a pioneer in finite geometry, in particular projective geometry based on vector spaces over a finite field. In a well-known paper he proved the following theorem: In a Desarguesian plane of odd order, the ovals are exactly the irreducible conics. In 1959 he authored a survey "Le geometrie di Galois" on Galois geometry. According to J. W. P. Hirschfeld, it "gave a comprehensive list of results and methods, and is to my mind the seminal paper in the subject." Some critics felt that his work was no longer geometry, but today it is recognized as a separate sub-discipline: finite geometry or combinatorial geometry. According to Hirschfeld, "He published the most as well as the deepest papers in the subject. His enormous knowledge of classical algebraic geometry enabled him to identify those results which could be applied to finite spaces. His theorem on the characterization of conics (Segre's theorem) not only stimulated a great deal of research but also made many mathematicians realize that finite spaces were worth studying." In 1938 he lost his professorship at the University of Bologna, as a result of the anti-Jewish laws enacted under Benito Mussolini's government. He spent the next 8 years in Great Britain (mostly at the University of Manchester), then returned to Italy to resume his academic career. Selected publications The second volume was never published; however, an updated and largely expanded English edition was published later. Notes References Sernesi, Edoardo: "Beniamino Segre (Torino 1903-Frascati 1977)" (incl. extensive bibliography) External links 1903 births 1977 deaths 20th-century Italian Jews Scientists from Turin 20th-century Italian mathematicians Italian algebraic geometers Algebraic geometers Members of the Lincean Academy Academic staff of the University of Bologna Italian expatriates in the United Kingdom
https://en.wikipedia.org/wiki/Hjelmslev%20transformation
In mathematics, the Hjelmslev transformation is an effective method for mapping an entire hyperbolic plane into a circle with a finite radius. The transformation was invented by Danish mathematician Johannes Hjelmslev. It utilizes Nikolai Ivanovich Lobachevsky's 23rd theorem from his work Geometrical Investigations on the Theory of Parallels. Lobachevsky observes, using a combination of his 16th and 23rd theorems, that it is a fundamental characteristic of hyperbolic geometry that there must exist a distinct angle of parallelism for any given line length. Let us say for the length AE, its angle of parallelism is angle BAF. This being the case, line AH and EJ will be hyperparallel, and therefore will never meet. Consequently, any line drawn perpendicular to base AE between A and E must necessarily cross line AH at some finite distance. Johannes Hjelmslev discovered from this a method of compressing an entire hyperbolic plane into a finite circle. The method is as follows: for any angle of parallelism, draw from its line AE a perpendicular to the other ray; using that cutoff length, e.g., AH, as the radius of a circle, "map" the point H onto the line AE. This point H thus mapped must fall between A and E. By applying this process for every line within the plane, the infinite hyperbolic space thus becomes contained and planar. Hjelmslev's transformation does not yield a proper circle however. The circumference of the circle created does not have a corresponding location within the plane, and therefore, the product of a Hjelmslev transformation is more aptly called a Hjelmslev Disk. Likewise, when this transformation is extended in all three dimensions, it is referred to as a Hjelmslev Ball. There are a few properties that are retained through the transformation which enable valuable information to be ascertained therefrom, namely: The image of a circle sharing the center of the transformation will be a circle about this same center. As a result, the images of all the right angles with one side passing through the center will be right angles. Any angle with the center of the transformation as its vertex will be preserved. The image of any straight line will be a finite straight line segment. Likewise, the point order is maintained throughout a transformation, i.e. if B is between A and C, the image of B will be between the image of A and the image of C. The image of a rectilinear angle is a rectilinear angle. The Hjelmslev transformation and the Klein model If we represent hyperbolic space by means of the Klein model, and take the center of the Hjelmslev transformation to be the center point of the Klein model, then the Hjelmslev transformation maps points in the unit disk to points in a disk centered at the origin with a radius less than one. Given a real number k, the Hjelmslev transformation, if we ignore rotations, is in effect what we obtain by mapping a vector u representing a point in the Klein model to ku, with 0<k<1. It is therefore in t
https://en.wikipedia.org/wiki/Binet%E2%80%93Cauchy%20identity
In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that (∑ a_i c_i)(∑ b_j d_j) = (∑ a_i d_i)(∑ b_j c_j) + ∑_{i<j} (a_i b_j − a_j b_i)(c_i d_j − c_j d_i), where all sums run from 1 to n (and the last over pairs i < j), for every choice of real or complex numbers (or more generally, elements of a commutative ring). Setting a_i = c_i and b_j = d_j, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space R^n. The Binet-Cauchy identity is a special case of the Cauchy–Binet formula for matrix determinants. The Binet–Cauchy identity and exterior algebra When n = 3, the first and second terms on the right hand side become the squared magnitudes of dot and cross products respectively; in n dimensions these become the magnitudes of the dot and wedge products. We may write it (a · c)(b · d) = (a · d)(b · c) + (a ∧ b) · (c ∧ d), where a, b, c, and d are vectors. It may also be written as a formula giving the dot product of two wedge products, as (a ∧ b) · (c ∧ d) = (a · c)(b · d) − (a · d)(b · c), which can be written as (a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c) in the n = 3 case. In the special case a = c and b = d, the formula yields |a ∧ b|^2 = |a|^2 |b|^2 − (a · b)^2. When both a and b are unit vectors, we obtain the usual relation sin^2 φ = 1 − cos^2 φ, where φ is the angle between the vectors. This is a special case of the inner product on the exterior algebra of a vector space, which is defined on wedge-decomposable elements as the Gram determinant of their components. Einstein notation A relationship between the Levi-Civita symbols and the generalized Kronecker delta is The form of the Binet–Cauchy identity can be written as Proof Expanding the last term, where the second and fourth terms are the same and artificially added to complete the sums as follows: This completes the proof after factoring out the terms indexed by i. Generalization A general form, also known as the Cauchy–Binet formula, states the following: Suppose A is an m×n matrix and B is an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows of B that have indices from S. Then the determinant of the matrix product of A and B satisfies the identity det(AB) = ∑_S det(AS) det(BS), where the sum extends over all possible subsets S of {1, ..., n} with m elements. We get the original identity as special case by setting m = 2, taking the rows of A to be (a_1, ..., a_n) and (b_1, ..., b_n), and taking the columns of B to be (c_1, ..., c_n) and (d_1, ..., d_n). Notes References Mathematical identities Multilinear algebra Articles containing proofs
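The identity is easy to sanity-check numerically for random vectors; the following sketch (illustrative code only) compares the two sides directly.

```python
import numpy as np

# Numerical sanity check of the Binet-Cauchy identity for random real vectors:
# (a.c)(b.d) = (a.d)(b.c) + sum_{i<j} (a_i b_j - a_j b_i)(c_i d_j - c_j d_i)

rng = np.random.default_rng(0)
n = 7
a, b, c, d = rng.standard_normal((4, n))

lhs = np.dot(a, c) * np.dot(b, d)

wedge_term = sum(
    (a[i] * b[j] - a[j] * b[i]) * (c[i] * d[j] - c[j] * d[i])
    for i in range(n) for j in range(i + 1, n)
)
rhs = np.dot(a, d) * np.dot(b, c) + wedge_term

print(lhs, rhs, abs(lhs - rhs) < 1e-10)   # the two sides agree to rounding error
```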
https://en.wikipedia.org/wiki/Essential%20infimum%20and%20essential%20supremum
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero. While the exact definition is not immediately straightforward, intuitively the essential supremum of a function is the smallest value that is greater than or equal to the function values everywhere while ignoring what the function does at a set of points of measure zero. For example, if one takes the function that is equal to zero everywhere except at where then the supremum of the function equals one. However, its essential supremum is zero because we are allowed to ignore what the function does at the single point where is peculiar. The essential infimum is defined in a similar way. Definition As is often the case in measure-theoretic questions, the definition of essential supremum and infimum does not start by asking what a function does at points (that is, the image of ), but rather by asking for the set of points where equals a specific value (that is, the preimage of under ). Let be a real valued function defined on a set The supremum of a function is characterized by the following property: for all and if for some we have for all then More concretely, a real number is called an upper bound for if for all that is, if the set is empty. Let be the set of upper bounds of and define the infimum of the empty set by Then the supremum of is if the set of upper bounds is nonempty, and otherwise. Now assume in addition that is a measure space and, for simplicity, assume that the function is measurable. Similar to the supremum, the essential supremum of a function is characterised by the following property: for -almost all and if for some we have for -almost all then More concretely, a number is called an of if the measurable set is a set of -measure zero, That is, if for -almost all in Let be the set of essential upper bounds. Then the is defined similarly as if and otherwise. Exactly in the same way one defines the as the supremum of the s, that is, if the set of essential lower bounds is nonempty, and as otherwise; again there is an alternative expression as (with this being if the set is empty). Examples On the real line consider the Lebesgue measure and its corresponding -algebra Define a function by the formula The supremum of this function (largest value) is 5, and the infimum (smallest value) is −4. However, the function takes these values only on the sets and respectively, which are of measure zero. Everywhere else, the function takes the value 2. Thus, the essential supremum and the essential infimum of this function are both 2. As another example, consider the function where denotes the rational numbers. This function is unbounded bot
https://en.wikipedia.org/wiki/Guerino%20Mazzola
Guerino Bruno Mazzola (born 1947) is a Swiss mathematician, musicologist, and jazz pianist, as well as a writer. Education and career Mazzola obtained his PhD in mathematics at University of Zürich in 1971 under the supervision of Herbert Groß and Bartel Leendert van der Waerden. In 1980, he habilitated in Algebraic Geometry and Representation Theory. In 2000, he was awarded the medal of the Mexican Mathematical Society. In 2003, he habilitated in Computational Science at the University of Zürich. Mazzola was an associate professor at Laval University in 1996 and at Ecole Normale Supérieure in Paris in 2005. Since 2007, he is professor at the School of Music at the University of Minnesota. From 2007 to 2021 he was the president of the Society for Mathematics and Computation in Music. Mazzola is well known for his music theory book The Topos of Music. The result has drawn dissent from Dmitri Tymoczko, who said of Mazzola: "If you can't learn algebraic geometry, he sometimes seems to be saying, then you have no business trying to understand Mozart." Records Mazzola has recorded several free jazz CDs with musicians like Mat Maneri, Heinz Geisser, Sirone, Jeff Kaiser, Scott Fields, Matt Turner and Rob Brown. His 2010 album Dancing the Body of Time was recorded in concert at the Pit Inn in Tokyo. Playing style A reviewer of Dancing the Body of Time mentioned similarities between Mazzola's playing style and that of Cecil Taylor. This was also mentioned by the AllMusic reviewer of Mazzola's earlier Toni's Delight: Live in Seoul, who also stated that Mazzola "infuses his incredible technique with a blues aesthetic and a sometimes-romantic stamp, overwhelming everything in his path". Bibliography Gruppen und Kategorien in der Musik. Hermann (1985) . Rasterbild - Bildraster, CAD-gestützte Analyse von Raffaels "Schule von Athen". Springer (1987) . Geometrie der Töne. Birkhäuser (1990) . Ansichten eines Hirns. Birkhäuser (1990) . The Topos of Music, Geometric Logic of Concepts, Theory, and Performance. Birkhäuser (2002) . Perspectives in Mathematical Music Theory.. EpOs (2004). . Comprehensive Mathematics for Computer Scientists I & II here and Errata here Elemente der Musikinformatik. Birkhäuser (2006) . La vérité du beau dans la musique. Delatour/IRCAM (2007) . Flow, Gesture, and Spaces in Free Jazz—Towards a Theory of Collaboration. Springer (2009) . Musical Performance. Springer (2011) . Musical Creativity—Strategies and Tools in Composition and Improvisation. Springer (2011) . Computational Musicology in Hindustani Music. Springer (2014) . Computational Counterpoint Worlds. Springer (2015) . Cool Math for Hot Music. Springer (2016) . All About Music. Springer (2016) . The Topos of Music, 2nd ed. Vol. I: Theory. Springer (2017) . The Topos of Music, 2nd ed. Vol. II: Performance. Springer (2017) . The Topos of Music, 2nd ed. Vol. III: Gestures. Springer (2017) . The Topos of Music, 2nd ed. Vol. IV: Roots. Springer (2017) . Basic Music Technology. Sp
https://en.wikipedia.org/wiki/National%20Statistics%20Institute%20%28Spain%29
The (INE; ) is the official agency in Spain that collects statistics about demography, economy, and Spanish society. It is an autonomous organization responsible for overall coordination of statistical services of the General State Administration in monitoring, control and supervision of technical procedures. Every 10 years, this organization conducts a national census. The last census took place in 2011. Through the official website one can follow all the updates of different fields of study. History First agency and evolution The oldest statistics agency of Spain and the predecessor of the current agency was the General Statistics Commission of the Kingdom, created on November 3, 1856 during the reign of Isabella II. The so-then Prime Minister Narváez approved a decree creating this body and ordering that people with recognized ability in this matter were part of it. On May 1, 1861, the Commission changed its name to General Statistics Board and their first work was to do a population census. By a decree of September 12, 1870, Prime Minister Serrano created the Geographic Institute and in 1873 this Institute changed its name to Geographic and Statistic Institute assuming the competences of the General Statistics Board. In 1890, the titularity of the agency was transferred from the Prime Minister's Office to the Ministry of Development. Between 1921 and 1939, change its name many times. In the same way, the agency was transferred from one ministry to another, passing through the Deputy Prime Minister's Office, the Ministry of the Presidency, and the Ministry of Labour. National Statistics Institute The National Statistics Institute was created following the Law of December 31, 1945, published in the (BOE) of January 3, 1946, with a mission to develop and refine the demographic, economic and social statistics already existing, creating new statistics and coordination with the statistical offices of provincial and municipal areas. At the end of 1964 the first computer was installed at the INE. It was a first-generation IBM 1401, for which a team was formed consisting of four statistics faculty and ten technicians. In the four years following it was possible that said computer would operate at its full capacity. From 29 October 2019 until eights days later, INE will ascertain mobile phone movements to know the usual shifts to improve his services. Headquarters The main headquarters of the INE is located at Paseo de la Castellana no. 183, in Madrid. Although the building was built in 1973, it was thoroughly restored between 2006 and 2008, through a work by the architects César Ruiz-Larrea and Antonio Gómez Gutiérrez, who have completely transformed its original appearance (ocher in color) giving it a colorful appearance, since colored panels with numbers ranging from 001 to 058 have been placed on the façade. This façade is the work of the sculptor José María Cruz Novillo and has been called the Decaphonic Diaphragm of Digits. It also
https://en.wikipedia.org/wiki/Second-order%20arithmetic
In mathematical logic, second-order arithmetic is a collection of axiomatic systems that formalize the natural numbers and their subsets. It is an alternative to axiomatic set theory as a foundation for much, but not all, of mathematics. A precursor to second-order arithmetic that involves third-order parameters was introduced by David Hilbert and Paul Bernays in their book Grundlagen der Mathematik. The standard axiomatization of second-order arithmetic is denoted by Z2. Second-order arithmetic includes, but is significantly stronger than, its first-order counterpart Peano arithmetic. Unlike Peano arithmetic, second-order arithmetic allows quantification over sets of natural numbers as well as numbers themselves. Because real numbers can be represented as (infinite) sets of natural numbers in well-known ways, and because second-order arithmetic allows quantification over such sets, it is possible to formalize the real numbers in second-order arithmetic. For this reason, second-order arithmetic is sometimes called "analysis". Second-order arithmetic can also be seen as a weak version of set theory in which every element is either a natural number or a set of natural numbers. Although it is much weaker than Zermelo–Fraenkel set theory, second-order arithmetic can prove essentially all of the results of classical mathematics expressible in its language. A subsystem of second-order arithmetic is a theory in the language of second-order arithmetic each axiom of which is a theorem of full second-order arithmetic (Z2). Such subsystems are essential to reverse mathematics, a research program investigating how much of classical mathematics can be derived in certain weak subsystems of varying strength. Much of core mathematics can be formalized in these weak subsystems, some of which are defined below. Reverse mathematics also clarifies the extent and manner in which classical mathematics is nonconstructive. Definition Syntax The language of second-order arithmetic is two-sorted. The first sort of terms and in particular variables, usually denoted by lower case letters, consists of individuals, whose intended interpretation is as natural numbers. The other sort of variables, variously called "set variables", "class variables", or even "predicates" are usually denoted by upper-case letters. They refer to classes/predicates/properties of individuals, and so can be thought of as sets of natural numbers. Both individuals and set variables can be quantified universally or existentially. A formula with no bound set variables (that is, no quantifiers over set variables) is called arithmetical. An arithmetical formula may have free set variables and bound individual variables. Individual terms are formed from the constant 0, the unary function S (the successor function), and the binary operations + and (addition and multiplication). The successor function adds 1 to its input. The relations = (equality) and < (comparison of natural numbers) relate two indi
https://en.wikipedia.org/wiki/Probability%20current
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation. The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation. Definition (non-relativistic 3-current) Free spin-0 particle In non-relativistic quantum mechanics, the probability current of the wave function of a particle of mass in one dimension is defined as where is the reduced Planck constant; denotes the complex conjugate of the wave function; denotes the real part; denotes the imaginary part. Note that the probability current is proportional to a Wronskian In three dimensions, this generalizes to where denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator, to obtain These definitions use the position basis (i.e. for a wavefunction in position space), but momentum space is possible. Spin-0 particle in an electromagnetic field The above definition should be modified for a system in an external electromagnetic field. In SI units, a charged particle of mass and electric charge includes a term due to the interaction with the electromagnetic field; where is the magnetic vector potential. The term has dimensions of momentum. Note that used here is the canonical momentum and is not gauge invariant, unlike the kinetic momentum operator . In Gaussian units: where is the speed of light. Spin-s particle in an electromagnetic field If the particle has spin, it has a corresponding magnetic moment, so an extra term needs to be added incorporating the spin interaction with the electromagnetic field. According to Landau-Lifschitz's Course of Theoretical Physics the electric current density is in Gaussian units: And in SI units: Hence the probability current (density) is in SI units: where is the spin vector of the particle with corresponding spin magnetic moment and spin quantum number . It is doubtful if this formula is vaild for particles with an interior structure. The neutron has zero charge but non-zero magnetic moment, so would be impossible (except would also be zero in this case). For composite particles with a non-zero charge – like the proton which has spin quantum number s=1/2 and µS= 2.7927·µN or the deuteron (H-2 nucleus) which has s=1 and µS=0.8574
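For a free plane wave the probability current reduces to the familiar density-times-velocity form. The following symbolic sketch checks this using the standard non-relativistic one-dimensional expression j = (ħ/2mi)(Ψ* ∂Ψ/∂x − Ψ ∂Ψ*/∂x); the plane-wave amplitude A and the use of SymPy are assumptions of the example rather than anything taken from the sources above.

```python
import sympy as sp

# Symbolic check of the non-relativistic probability current in one dimension,
#   j = (hbar / (2 m i)) * (psi* d(psi)/dx - psi d(psi*)/dx),
# for a free plane wave psi = A exp(i k x).  The expected result is the
# density-times-velocity form  j = |A|^2 * hbar * k / m.

x, k, m, hbar = sp.symbols('x k m hbar', real=True, positive=True)
A = sp.symbols('A', positive=True)

psi = A * sp.exp(sp.I * k * x)
psi_c = sp.conjugate(psi)

j = (hbar / (2 * m * sp.I)) * (psi_c * sp.diff(psi, x) - psi * sp.diff(psi_c, x))
print(sp.simplify(j))          # -> A**2*hbar*k/m
```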
https://en.wikipedia.org/wiki/Salluste%20Duval
Clarent-Salluste-Hermycle Duval (February 1852 – July 1917) was a Canadian doctor of medicine, inventor, engineer, organist, musician and professor of Mathematics & Mechanics at Université Laval and at the École Polytechnique de Montréal. Duval is primarily known for his improvements to the organ. Personal life Family Born in Saint-Jean-Port-Joli, Canada East, Duval was the son of Louis-Zepirin Duval, the Notary of the Seigneur in Saint-Jean-Port-Joli, and nephew to Eleonore Verreai, who was the daughter of another notary, Germain-Alexandre Verreau. Throughout his early life, Duval was inspired by his mother's career as an educator and took an interest in science, physics, mechanics, and music. Duval was said to have been a tinkerer as a child and later became an inventor and engineer. Death In July 1917, Salluste Duval died in Montreal at his home on Wolfe Street. Duval was buried in Saint-Jean-Port-Joli, Quebec. External links Biography at the Dictionary of Canadian Biography Online French Quebecers Physicians from Quebec 1852 births 1917 deaths
https://en.wikipedia.org/wiki/Savilian%20Professor
Savilian Professor may refer to: Savilian Professor of Astronomy, University of Oxford, England Savilian Professor of Geometry, University of Oxford, England
https://en.wikipedia.org/wiki/Stepan%20Malygin
Stepan Gavrilovich Malygin () (unknown-1 August 1764) was a Russian Arctic explorer. Malygin studied at the Moscow School of Mathematics and Navigation from 1711 to 1717. After his graduation, Malygin began his career as a naval cadet and was then promoted to the rank of lieutenant four years later. He served in the Baltic Fleet until 1735. Malygin wrote the first Russian manual on navigation, titled Сокращённая навигация по карте де-Редукцион (1733). In early 1736, Malygin was appointed leader of the western unit of the Second Kamchatka Expedition. In 1736–1737, two boats Perviy (First) and Vtoroy (Second) under the command of Malygin and A. Skuratov undertook a voyage from Dolgiy Island in the Barents Sea to the mouth of the Ob River. Malygin explored this part of the Russian Arctic coastline on the trip and made a map of the area between the Pechora and Ob Rivers. Between 1741 and 1748, Malygin was placed in charge of preparing navigators for the Russian Navy. In 1762, he was appointed head of the Admiralty office in Kazan. Explorers from the Russian Empire Explorers of the Arctic 1764 deaths Imperial Russian Navy personnel 18th-century people from the Russian Empire Great Northern Expedition Year of birth unknown
https://en.wikipedia.org/wiki/Evidence%20under%20Bayes%27%20theorem
The use of evidence under Bayes' theorem relates to the probability of finding evidence in relation to the accused, where Bayes' theorem concerns the probability of an event and its inverse. Specifically, it compares the probability of finding particular evidence if the accused were guilty, versus if they were not guilty. An example would be the probability of finding a person's hair at the scene, if guilty, versus if just passing through the scene. Another issue would be finding a person's DNA where they lived, regardless of committing a crime there. Explanation Among evidence scholars, the study of evidence in recent decades has become broadly interdisciplinary, incorporating insights from psychology, economics, and probability theory. One area of particular interest and controversy has been Bayes' theorem. Bayes' theorem is an elementary proposition of probability theory. It provides a way of updating, in light of new information, one's probability that a proposition is true. Evidence scholars have been interested in its application to their field, either to study the value of rules of evidence, or to help determine facts at trial. Suppose that the proposition to be proven is that the defendant was the source of a hair found at the crime scene. Before learning that the hair was a genetic match for the defendant's hair, the factfinder believes that the odds are 2 to 1 that the defendant was the source of the hair. If they used Bayes' theorem, they could multiply those prior odds by a "likelihood ratio" in order to update their odds after learning that the hair matched the defendant's hair. The likelihood ratio is a statistic derived by comparing the odds that the evidence (expert testimony of a match) would be found if the defendant was the source with the odds that it would be found if the defendant was not the source. If it is ten times more likely that the testimony of a match would occur if the defendant was the source than if not, then the factfinder should multiply their prior odds by ten, giving posterior odds of 20 to 1. Bayesian skeptics have objected to this use of Bayes' theorem in litigation on a variety of grounds. These run from jury confusion and computational complexity to the assertion that standard probability theory is not a normatively satisfactory basis for adjudication of rights. Bayesian enthusiasts have replied on two fronts. First, they have said that whatever its value in litigation, Bayes' theorem is valuable in studying evidence rules. For example, it can be used to model relevance. It teaches that the relevance of evidence that a proposition is true depends on how much the evidence changes the prior odds, and that how much it changes the prior odds depends on how likely the evidence would be found (or not) if the proposition were true. These basic insights are also useful in studying individual evidence rules, such as the rule allowing witnesses to be impeached with prior convictions. Second, they hav
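In the odds form of Bayes' theorem, with H the hypothesis that the defendant was the source and E the reported match (the symbols H and E are chosen here for illustration), the update just described reads, using the same numbers as the example:
\underbrace{O(H \mid E)}_{\text{posterior odds}} \;=\; \underbrace{O(H)}_{\text{prior odds}} \times \underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
O(H \mid E) \;=\; \frac{2}{1} \times 10 \;=\; \frac{20}{1}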
https://en.wikipedia.org/wiki/Identity%20theorem
In real analysis and complex analysis, branches of mathematics, the identity theorem for analytic functions states: given functions f and g analytic on a domain D (open and connected subset of or ), if f = g on some , where has an accumulation point in D, then f = g on D. Thus an analytic function is completely determined by its values on a single open neighborhood in D, or even a countable subset of D (provided this contains a converging sequence together with its limit). This is not true in general for real-differentiable functions, even infinitely real-differentiable functions. In comparison, analytic functions are a much more rigid notion. Informally, one sometimes summarizes the theorem by saying analytic functions are "hard" (as opposed to, say, continuous functions which are "soft"). The underpinning fact from which the theorem is established is the expandability of a holomorphic function into its Taylor series. The connectedness assumption on the domain D is necessary. For example, if D consists of two disjoint open sets, can be on one open set, and on another, while is on one, and on another. Lemma If two holomorphic functions and on a domain D agree on a set S which has an accumulation point in , then on a disk in centered at . To prove this, it is enough to show that for all . If this is not the case, let be the smallest nonnegative integer with . By holomorphy, we have the following Taylor series representation in some open neighborhood U of : By continuity, is non-zero in some small open disk around . But then on the punctured set . This contradicts the assumption that is an accumulation point of . This lemma shows that for a complex number , the fiber is a discrete (and therefore countable) set, unless . Proof Define the set on which and have the same Taylor expansion: We'll show is nonempty, open, and closed. Then by connectedness of , must be all of , which implies on . By the lemma, in a disk centered at in , they have the same Taylor series at , so , is nonempty. As and are holomorphic on , , the Taylor series of and at have non-zero radius of convergence. Therefore, the open disk also lies in for some . So is open. By holomorphy of and , they have holomorphic derivatives, so all are continuous. This means that is closed for all . is an intersection of closed sets, so it's closed. Full characterisation Since the Identity Theorem is concerned with the equality of two holomorphic functions, we can simply consider the difference (which remains holomorphic) and can simply characterise when a holomorphic function is identically . The following result can be found in. Claim Let denote a non-empty, connected open subset of the complex plane. For the following are equivalent. on ; the set contains an accumulation point, ; the set is non-empty, where . Proof The directions (1 2) and (1 3) hold trivially. For (3 1), by connectedness of it suffices to prove t
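As a worked illustration (an example supplied here, not part of the theorem's statement): if f is analytic on a domain D containing 0 and agrees with g(z) = z² on the set {1/n : n = 1, 2, 3, ...}, which accumulates at 0 ∈ D, then the identity theorem forces
f\!\left(\tfrac{1}{n}\right) = \tfrac{1}{n^{2}} \quad (n = 1, 2, 3, \dots) \qquad\Longrightarrow\qquad f(z) = z^{2} \ \text{ for all } z \in D.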
https://en.wikipedia.org/wiki/Quad-edge
A quad-edge data structure is a computer representation of the topology of a two-dimensional or three-dimensional map, that is, a graph drawn on a (closed) surface. It was first described by Jorge Stolfi and Leonidas J. Guibas. It is a variant of the earlier winged edge data structure. Overview The fundamental idea behind the quad-edge structure is the recognition that a single edge, in a closed polygonal mesh topology, sits between exactly two faces and exactly two vertices. The Quad-Edge Data Structure The quad-edge data structure represents an edge, along with the edges it is connected to around the adjacent vertices and faces, to encode the topology of the graph. An example implementation of the quad-edge data type is as follows (a forward typedef is needed so that the two types can refer to each other): typedef struct quadedge quadedge; /* forward declaration */ typedef struct { quadedge *next; unsigned int rot; } quadedge_ref; struct quadedge { quadedge_ref e[4]; }; Each quad-edge contains four references to adjacent quad-edges. Each of the four references points to the next edge counter-clockwise around either a vertex or a face. Each of these references represents either the origin vertex of the edge, the right face, the destination vertex, or the left face. Each quad-edge reference points to a quad-edge and the rotation (from 0 to 3) of the 'arm' it points at. Due to this representation, the quad-edge represents a graph, its dual, and its mirror image (the dual of the graph can be obtained simply by reversing the convention on what is a vertex and what is a face), and it can represent the most general form of a map, admitting vertices and faces of degree 1 and 2. Details The quad-edge structure gets its name from the general mechanism by which it is stored. A single Edge structure conceptually stores references to up to two faces, two vertices, and four edges. The four edges stored are the edges starting with the two vertices that are attached to the two stored faces. Uses Much like Winged Edge, quad-edge structures are used in programs to store the topology of a 2D or 3D polygonal mesh. The mesh itself does not need to be closed in order to form a valid quad-edge structure. Using a quad-edge structure, iterating through the topology is quite easy. Often, the interface to quad-edge topologies is through directed edges. This allows the two vertices to have explicit names (start and end), and this gives faces explicit names as well (left and right, relative to a person standing on start and looking in the direction of end). The four edges are also given names, based on the vertices and faces: start-left, start-right, end-left, and end-right. A directed edge can be reversed to generate the edge in the opposite direction. Iterating around a particular face only requires having a single directed edge to which that face is on the left (by convention) and then walking through all of the start-left edges until the original edge is reached. See also Winged edge Combinatorial maps Doubly connected edge list References External links https://www.cs.cm
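As an illustrative sketch of how such a structure is traversed (the function names edge_rot, edge_sym and edge_onext below are chosen for exposition and are not part of the definition above; the operators themselves follow the usual Guibas–Stolfi rot, sym and onext):
/* Sketch only: mirrors the structure definitions given in the article text. */
typedef struct quadedge quadedge;
typedef struct { quadedge *next; unsigned int rot; } quadedge_ref;
struct quadedge { quadedge_ref e[4]; };

/* Quarter turn to the dual: same quad-edge record, arm index advanced by one (mod 4). */
static quadedge_ref edge_rot(quadedge *q, unsigned int r) {
    quadedge_ref out = { q, (r + 1u) % 4u };
    return out;
}

/* Symmetric (reversed) edge: two quarter turns. */
static quadedge_ref edge_sym(quadedge *q, unsigned int r) {
    quadedge_ref out = { q, (r + 2u) % 4u };
    return out;
}

/* Next edge counter-clockwise around the vertex or face of arm r, as stored in e[r]. */
static quadedge_ref edge_onext(const quadedge *q, unsigned int r) {
    return q->e[r];
}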
https://en.wikipedia.org/wiki/Weak%20solution
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions. Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions which are not differentiable; and the weak formulation allows one to find such solutions. Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit of sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough. A concrete example As an illustration of the concept, consider the first-order wave equation: where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function of compact support, known as a test function, taking For example, if is a smooth probability distribution concentrated near a point , the integral is approximately . Notice that while the integrals go from to , they are essentially over a finite box where is non-zero. Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply the equation () by a test function (smooth of compact support), and integrate: Using Fubini's theorem which allows one to interchange the order of integration, as well as integration by parts (in t for the first term and in x for the second term) this equation becomes: (Boundary terms vanish since is zero outside a finite box.) We have shown that equation () implies equation () as long as u is continuously differentiable. The key to the concept of weak solution is that there exist functions u which satisfy equation () for any , but such u may not be differentiable and so cannot satisfy equation (). An example is u(t, x) = |t − x|, as one may check by splitting the integrals over regions x ≥ t and x ≤ t where u is smooth, and reversing the above computation using integration by parts. A weak solution of equation () means any solution u of equation () over all test functions . General case The general idea which follows from this example is that, when solving a differential equation in u, one can rewrite it using a test function , such t
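In one standard presentation (consistent with the example u(t, x) = |t − x| discussed above; the equation numbering below is supplied for readability), equation (1) is the first-order wave equation and equation (2) is its weak formulation against a test function φ:
\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0 \tag{1}
\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} u(t,x)\left(\frac{\partial\varphi}{\partial t}(t,x) + \frac{\partial\varphi}{\partial x}(t,x)\right) dt\,dx = 0 \quad\text{for all smooth, compactly supported } \varphi \tag{2}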
https://en.wikipedia.org/wiki/Convex%20cone
In linear algebra, a cone—sometimes called a linear cone for distinguishing it from other sorts of cones—is a subset of a vector space that is closed under positive scalar multiplication; that is, is a cone if implies for every . When the scalars are real numbers, or belong to an ordered field, one generally calls a cone a subset of a vector space that is closed under multiplication by a positive scalar. In this context, a convex cone is a cone that is closed under addition, or, equivalently, a subset of a vector space that is closed under linear combinations with positive coefficients. It follows that convex cones are convex sets. In this article, only the case of scalars in an ordered field is considered. Definition A subset C of a vector space V over an ordered field F is a cone (or sometimes called a linear cone) if for each x in C and positive scalar α in F, the product αx is in C. Note that some authors define cone with the scalar α ranging over all non-negative scalars (rather than all positive scalars, which does not include 0). A cone C is a convex cone if belongs to C, for any positive scalars α, β, and any x, y in C. A cone C is convex if and only if C + C ⊆ C. This concept is meaningful for any vector space that allows the concept of "positive" scalar, such as spaces over the rational, algebraic, or (more commonly) the real numbers. Also note that the scalars in the definition are positive meaning that the origin does not have to belong to C. Some authors use a definition that ensures the origin belongs to C. Because of the scaling parameters α and β, cones are infinite in extent and not bounded. If C is a convex cone, then for any positive scalar α and any x in C the vector It follows that a convex cone C is a special case of a linear cone. It follows from the above property that a convex cone can also be defined as a linear cone that is closed under convex combinations, or just under additions. More succinctly, a set C is a convex cone if and only if and , for any positive scalar α. Examples For a vector space V, the empty set, the space V, and any linear subspace of V are convex cones. The conical combination of a finite or infinite set of vectors in is a convex cone. The tangent cones of a convex set are convex cones. The set is a cone but not a convex cone. The norm cone is a convex cone. The intersection of two convex cones in the same vector space is again a convex cone, but their union may fail to be one. The class of convex cones is also closed under arbitrary linear maps. In particular, if C is a convex cone, so is its opposite and is the largest linear subspace contained in C. The set of positive semidefinite matrices. The set of nonnegative continuous functions is a convex cone. Special examples Affine convex cones An affine convex cone is the set resulting from applying an affine transformation to a convex cone. A common example is translating a convex cone by a point . Technically,
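Symbolically, in one common notation, the defining conditions and a standard example read:
x \in C,\ \alpha > 0 \;\Longrightarrow\; \alpha x \in C \qquad\text{(cone)}
x, y \in C,\ \alpha, \beta > 0 \;\Longrightarrow\; \alpha x + \beta y \in C \qquad\text{(convex cone)}
\mathbb{R}^{n}_{\ge 0} = \{\, x \in \mathbb{R}^{n} : x_{i} \ge 0 \text{ for all } i \,\} \qquad\text{(the nonnegative orthant, a convex cone)}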
https://en.wikipedia.org/wiki/Ray%20Streater
Raymond Frederick Streater (born 1936) is a British physicist, and professor emeritus of Applied Mathematics at King's College London. He is best known for co-authoring a text on quantum field theory, the 1964 PCT, Spin and Statistics and All That. Life Ray Streater was born on 21 April 1936 in Three Bridges in the parish of Worth, Sussex, England, United Kingdom, the second son of Frederick Arthur Streater (builder) (1905-1965) and Dorothy Beatrice Streater, née Thomas (17 December 1907 - 16 December 1994). He married Mary Patricia née Palmer on 19 September 1962, and they had three children: Alexander Paul (1963); Stephen Bernard (1965); Catherine Jane Mary (1967). Professor Streater's career may be summarised as follows. Jan.-Sep. 1960 – Research Fellow, CERN, Geneva, Switzerland 1960-1961 – Instructor in Physics, Princeton University, NJ, USA 1961-1964 – Assistant Lecturer in Physics, Imperial College, London 1964-1967 – Lecturer in Physics, Imperial College, London 1967-1969 – Senior Lecturer in Mathematics, Imperial College, London 1969-1984 – Professor of Applied Mathematics, Bedford College, London 1984-2001 – Professor of Applied Mathematics, King's College London Oct. 2001 on – Emeritus Professor, King's College London Works Streater co-authored a classic text on mathematical quantum field theory, reprinted as PCT, Spin and Statistics and All That (written jointly with Wightman, A. S.), 2000, Princeton University Press, Landmarks in Mathematics and Physics ( paperback); first published in 1964 by W. A. Benjamin. The title is an homage to 1066 and All That. He has also become interested in the dynamics of quantum systems that are not in a pure state, but are large. This is expressed in Statistical Dynamics: A Stochastic Approach to Nonequilibrium Thermodynamics, 1995, Imperial College Press ( hardback, paperback). This work was simplified and extended in the second edition, published in 2009. References External links Professor Streater's page at King's College, London () 1936 births Living people People from Worth, West Sussex British physicists Academics of Imperial College London Academics of King's College London Princeton University faculty Quantum physicists People associated with CERN
https://en.wikipedia.org/wiki/Statistics%20Poland
Statistics Poland (formerly known in English as the Central Statistical Office (, popularly called GUS)) is Poland's chief government executive agency charged with collecting and publishing statistics related to the country's economy, population, and society, at the national and local levels. The president of Statistics Poland (currently Dominik Rozkrut) reports directly to the Prime Minister of Poland and is considered the equivalent of a Polish government minister. The agency was established on 13 July 1918 by Ludwik Krzywicki, one of the most notable sociologists of his time. Inactive during World War II, GUS was reorganized in March 1945 and as of 31 July 1947 was under control of the Ordinance of the Council of Ministers (along with the Organization of Official Statistics). The office is divided into several separate branches, each responsible for a different set of data. The branches include the Divisions of Coordination of Statistical Surveys, Analyses and Regional Statistics, Dissemination, National Accounts and Finance, Business Statistics and Registers, Social Statistics, Services Statistics, Agriculture and Environment Statistics, International Cooperation, Budgetary, and Personnel. Notable GUS publications include Rocznik Statystyczny (Statistical Yearbook), Mały Rocznik Statystyczny (Concise Statistical Yearbook), Demographic Yearbook of Poland, and Wiadomości Statystyczne (Statistical News). In November 2018 GUS estimated that the average monthly wage in Poland was PLN 4,966 (€1,158, $1,317). According to GUS, during the same month Poland's retail sales increased by 8.2% year-on-year and fell by 2.7% month-on-month while the economy as a whole grew at an annual rate of 5.1%. In December 2018, prices of consumer goods and services increased by 1.1% from the previous year while wages rose 1% from the previous month and unemployment rose .1%. Former presidents 1918–1929 Józef Buzek 1929–1939 Edward Szturm de Sztrem 1939–1945 vacant 1945–1949 Stefan Szulc 1949–1965 Zygmunt Pudowicz 1965–1972 Wincenty Kawalec 1972–1980 Stanisław Kuziński 1980–1989 Wiesław Sadowski 1989–1991 Franciszek Kubiczek 1991–1992 Bohdan Wyżnikiewicz 1992–1995 Józef Oleński 1995–2006 Tadeusz Toczyński 2006–2006 Janusz Witkowski (acting) 2006–2011 Józef Oleński 2011-2016 Janusz Witkowski since 2016 Dominik Rozkrut See also Census in Poland References External links 1918 establishments in Poland Poland Demographics of Poland Government agencies of Poland Organizations established in 1918
https://en.wikipedia.org/wiki/Munish%20Chander%20Puri
Munish Chander Puri (15 August 1939 – 28 December 2005) was Professor Emeritus of Mathematics at IIT Delhi. He was Organizing Chair, Asia Pacific Operational Research Societies (APORS). He was killed in Bangalore in the 2005 Indian Institute of Science shooting. Career Puri did his B.Sc (Hons.) mathematics in 1960, M.Sc. mathematics in 1962 (first position) and Ph.D. in operations research in 1972 from Delhi University. He served at Hans Raj College in Delhi University until 1984 and then served at IITD until 2004. He had supervised thirteen PhD theses, nine MPhil dissertations, six MTech projects and eleven M.Sc. projects. He had written many research articles in various journals of international repute. His areas of specialization included combinatorial optimization, fractional programming, linear programming and network flow problems. He had been on the editorial board of Opsearch, an official journal of ORSI. He was one of the founder members of the "Mathematical Programming Group (MPG)" which was started by retired Prof. R. N. Kaul, Department of Mathematics, University of Delhi in 1972. This group of researchers from various colleges of Delhi University, IITD and Indian Institute of Science Bangalore meets every Wednesday from 3 to 4 p.m. at Delhi University to attend a seminar by an invited speaker or one of the researchers of the group. Professor Puri was one of the members who had attended this weekly seminar since its inception. Puri published 78 papers during his lifetime – 25 in Indian journals and 53 abroad. Indian Institute of Technology Puri first joined IIT, Delhi in 1984. One of the popular undergraduate courses he taught was linear optimization. On 29 December 2005 the Indian Institute of Technology, Delhi mourned the death of its retired professor M C Puri in a shootout in Bangalore, terming it a great loss to the institute. Addressing a condolence meeting at the institute, IIT Delhi Director D. P. Kothari said the staff and students "deeply mourn the tragic, sad and untimely demise of Prof Puri". Kothari described Puri as a "very conscientious, hard working and devoted faculty member of the institute". Puri was an Emeritus Professor of Mathematics. He won several laurels for his work on the subject. He was one of the most senior and influential members of the Operations Research Society of India (ORSI) and he had attended the annual convention of ORSI without fail for 35 years. Death Puri was attending the International Conference on Operations Research in conjunction with the 38th Annual Convention of ORSI, which was organized by the Bangalore Chapter of ORSI from 27–29 December 2005 at J. N. Tata Auditorium, Indian Institute of Science, Bangalore, India. At about 7 p.m., after the convention was over at the auditorium, Puri and three colleagues were walking towards the seminar hall of management studies for the Annual General Body meeting of ORSI when suddenly bullets were fired from the back within the a
https://en.wikipedia.org/wiki/Concave
Concave or concavity may refer to: Science and technology Concave lens Concave mirror Mathematics Concave function, the negative of a convex function Concave polygon, a polygon which is not convex Concave set The concavity of a function, determined by its second derivative See also
https://en.wikipedia.org/wiki/George%20Thibaut
George Frederick William Thibaut (March 20, 1848 – 1914) was a German Indologist notable for his contributions to the understanding of ancient Indian mathematics and astronomy. Life Thibaut was born in Germany, worked briefly in England, and then in 1875, was appointed Professor at the Government Sanskrit College, Varanasi in northern India. From 1888 to 1895, he was professor at Muir Central College in Allahabad. On 6 November 2014, in its column "100 Years Ago" The Statesman reprinted the following obituary on the late Dr. Thibaut: The death is reported at Heidelberg Hospital, Germany of Dr George Thibaut, C.I.E., Ph.D., D.Sc., who recently retired from the Education Service as Registrar of the Calcutta University. Dr. Thibaut who took part in Franco-German War of 1870 as a noncommissioned officer joined the Muir Central College, Allahabad some 22 years ago as Professor of Philosophy. He rose to be the Principal of the College and was appointed Registrar of the Allahabad University, afterwards being transferred to Calcutta. Besides being a well-known student of philosophy Eastern and Western, the late Dr. Thibaut was an eminent Sanskrit scholar. He was appointed CIE in the 1906 New Year Honours. Works Between 1875 and 1878 Thibaut published a detailed essay on the Śulba sūtras, together with a translation of the Baudhāyana Śulba sūtra; he later translated the Pañca Siddhāntikā which he co-edited with Pandit Sudhakar Dwivedi (the latter added a Sanskrit commentary). He also edited and translated the following volumes in Max Müller's Sacred Books of the East: Vol. 34, The Vedanta-Sutras, vol. 1 of 3, with the commentary of Sankaracharya, part 1 of 2. Adhyâya I–II (Pâda I–II). (1890) Vol. 38, The Vedanta-Sutras, vol. 2 of 3, with the commentary of Sankaracharya, part 1 of 2. Adhyâya II (Pâda III–IV)–IV. (1896) It was one of the earliest translation of the Brahma Sutras along with the work of Paul Deussen. Vol. 48, The Vedanta-Sutras, vol. 3 of 3, with the commentary of Râmânuja. (1904) Thibaut contributed a number of Sanskrit manuscripts to the Department of Oriental Collections, Bodleian Library, University of Oxford, where they are archived today. References External links The Vedânta-sutras ... translated by George Thibaut (1890) Thibaut, George Frederick William 1848 births 1914 deaths Hindu astronomy Companions of the Order of the Indian Empire German historians of mathematics Academic staff of the University of Calcutta German male non-fiction writers
https://en.wikipedia.org/wiki/Dai%20Zhen
Dai Zhen (, January 19, 1724 – July 1, 1777) was a Chinese philosopher of the Qing dynasty. Hailing from Xiuning, Anhui Dai was a versatile scholar who made great contributions to mathematics, geography, phonology and philosophy. His philosophical and philological critiques of Neo-Confucianism continue to be influential. In 1733, Dai was recruited by scholar Ji Yun to be one of the editors of the official encyclopedia and collection of books, Siku Quanshu. Dai's philosophical contributions included those to the Han Learning school of Evidential Learning (Evidentialism) which criticized the Song Learning school of Neo-Confucianism. In particular, two criticisms that Dai made were: First, Neo-Confucianism focused too much on introspective self-examination whereas truth was to be found in investigation of the external world. Second, he criticized the Neo-Confucian drive to eliminate human desire as an obstacle to rational investigation. Dai argued that human desire was a good and integral part of the human experience, and that eliminating human desire from philosophy had the bad effect of making it difficult to understand and control one's emotions as well as making it impossible to establish empathy with others. Famous works Faxianglun (On images and patterns) Yuanshan (Tracing the origin of goodness) in three paragraphs Du Yi Xici lun xing (Reading “Appended Words” in The Book of Changes on human nature) Du Meng Zi lun xing (Reading Mencius about human nature) Yuanshan (Tracing the origin of goodness) in three chapters Meng Zi sishulu (Record of Mencius's private virtue) Xuyan (Prefatory words) Daxue buzhu (Additional annotations to the Daxue) Zhongyong buzhu (Additional annotations to the Zhongyong) Meng Zi ziyi shuzheng (Evidential Commentary on the Meaning of the Words of Mencius) Yu mou shu (A letter to a certain person) Yu Peng jinshi Yunchu shu Dingchou zhengyue yu Duan Yucai shu (A letter to Duan Yucai dated in the first month of the year dingchou [February 1777]) References Sources Elman, Benjamin A. From Philosophy to Philology: Intellectual and Social Aspects of Change in Late Imperial China. Cambridge, MA: Council on East Asian Studies, 1984. Tiwald, Justin. Internet Encyclopedia of Philosophy entry on Dai Zhen Encyclopedia of Religion entry on Dai Zhen External links 1724 births 1777 deaths 18th-century Chinese philosophers Qing dynasty classicists People from Huangshan Philosophers from Anhui Chinese mathematicians Chinese philologists
https://en.wikipedia.org/wiki/ZIP%20Code%20Tabulation%20Area
ZIP Code Tabulation Areas (ZCTAs) are statistical entities developed by the United States Census Bureau for tabulating summary statistics. These were introduced with the Census 2000 and continued with the 2010 Census and 5 year American Community Survey data sets. This new entity was developed to overcome the difficulties in precisely defining the land area covered by each ZIP code. Defining the extent of an area is necessary in order to tabulate census data for that area. ZCTAs are generalized area representations of the United States Postal Service (USPS) ZIP code service areas, but are not the same as ZIP codes. Individual USPS ZIP codes can cross state, place, county, census tract, census block group and census block boundaries, so the Census Bureau asserts that "there is no correlation between ZIP codes and Census Bureau geography". Moreover, the USPS frequently realigns, merges, or splits ZIP codes to meet changing needs. These changes are usually not reflected in the annual TIGER releases. Each ZCTA is constructed by aggregating the Census 2010 blocks whose addresses use a given ZIP code. In assembling census statistical units to create ZCTAs, the Census Bureau took the ZIP code used by the majority of addresses in each census unit at the time the data was compiled. As a result, some addresses end up with a ZCTA code that is different from their ZIP code. ZCTAs are not developed for ZIP codes that comprise only a small number of addresses. Several ZCTAs represent ZIPs that no longer exist due to realignment by the USPS. There are approximately 42,000 ZIP Codes and 32,000 ZCTAs. The reason that there is not one ZCTA for every ZIP Code is that only populated areas are included in the Census data, and thus ZIP Codes that only serve PO Boxes have no corresponding ZCTA. See also Census-designated place ZIP Code References External links United States Census Bureau ZIP Code Tabulation Areas (ZCTAs) United States Census Bureau geography ZIP code
https://en.wikipedia.org/wiki/Len%20Martin
Leonard Martin (17 April 1919 – 21 August 1995) was an Australian results reader. He was known in the UK for reading out the football results, associated football pools statistics and horse-racing results on the BBC's Saturday afternoon sports programme, Grandstand. Martin was born in Australia where he began his broadcasting career. He came to England on holiday in 1953 for the Coronation and received a call from the BBC the day before he was due to sail for Australia. He never used his return ticket home, and only twice went back to Australia in 1983 and 1993, on holiday. He performed his role on Grandstand from the programme's very first edition in 1958 until his death in 1995. Martin was well known for his intonation when reading the scores. It was clear from the way in which he presented the home or away team name, followed by number of goals, whether the result was a home win, an away win, a no-score draw or a score draw; this was important for the football pools results. He was succeeded by Tim Gudgin who also used the distinct BBC intonation. In addition to his role on Grandstand, Martin was a voice-over artist heard on Movietone newsreels. He also used to run four flights of stairs at Lime Grove Studios in the late 1960s after Grandstand to introduce Simon Dee's programme, with 'Simon' elongated, in the distinctive manner. External links Death date 1919 births 1995 deaths British sports broadcasters Australian emigrants to England
https://en.wikipedia.org/wiki/Torsion%20group
In group theory, a branch of mathematics, a torsion group or a periodic group is a group in which every element has finite order. The exponent of such a group, if it exists, is the least common multiple of the orders of the elements. For example, it follows from Lagrange's theorem that every finite group is periodic and it has an exponent that divides its order. Infinite examples Examples of infinite periodic groups include the additive group of the ring of polynomials over a finite field, and the quotient group of the rationals by the integers, as well as their direct summands, the Prüfer groups. Another example is the direct sum of all dihedral groups. None of these examples has a finite generating set. Explicit examples of finitely generated infinite periodic groups were constructed by Golod, based on joint work with Shafarevich (see Golod–Shafarevich theorem), and by Aleshin and Grigorchuk using automata. These groups have infinite exponent; examples with finite exponent are given for instance by Tarski monster groups constructed by Olshanskii. Burnside's problem Burnside's problem is a classical question that deals with the relationship between periodic groups and finite groups, when only finitely generated groups are considered: Does specifying an exponent force finiteness? The existence of infinite, finitely generated periodic groups as in the previous paragraph shows that the answer is "no" for an arbitrary exponent. Though much more is known about which exponents can occur for infinite finitely generated groups there are still some for which the problem is open. For some classes of groups, for instance linear groups, the answer to Burnside's problem restricted to the class is positive. Mathematical logic An interesting property of periodic groups is that the definition cannot be formalized in terms of first-order logic. This is because doing so would require an axiom of the form which contains an infinite disjunction and is therefore inadmissible: first order logic permits quantifiers over one type and cannot capture properties or subsets of that type. It is also not possible to get around this infinite disjunction by using an infinite set of axioms: the compactness theorem implies that no set of first-order formulae can characterize the periodic groups. Related notions The torsion subgroup of an abelian group A is the subgroup of A that consists of all elements that have finite order. A torsion abelian group is an abelian group in which every element has finite order. A torsion-free abelian group is an abelian group in which the identity element is the only element with finite order. See also Torsion (algebra) Jordan–Schur theorem References R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means, Izv. Akad. Nauk SSSR Ser. Mat. 48:5 (1984), 939–985 (Russian). Properties of groups
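The inadmissible axiom in question would have to assert that every element has some finite order, which in one standard notation (with e the identity element) requires an infinite disjunction:
\forall x \;\bigl( x = e \;\lor\; x \cdot x = e \;\lor\; x \cdot x \cdot x = e \;\lor\; \cdots \bigr)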
https://en.wikipedia.org/wiki/Hinge%20theorem
In geometry, the hinge theorem (sometimes called the open mouth theorem) states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is given as Proposition 24 in Book I of Euclid's Elements. Scope and generalizations The hinge theorem holds in Euclidean spaces and more generally in simply connected non-positively curved space forms. It can be also extended from plane Euclidean geometry to higher dimension Euclidean spaces (e.g., to tetrahedra and more generally to simplices), as has been done for orthocentric tetrahedra (i.e., tetrahedra in which altitudes are concurrent) and more generally for orthocentric simplices (i.e., simplices in which altitudes are concurrent). Converse The converse of the hinge theorem is also true: If the two sides of one triangle are congruent to two sides of another triangle, and the third side of the first triangle is greater than the third side of the second triangle, then the included angle of the first triangle is larger than the included angle of the second triangle. In some textbooks, the theorem and its converse are written as the SAS Inequality Theorem and the AAS Inequality Theorem respectively. References Elementary geometry Theorems about triangles
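For the Euclidean case, one illustrative argument (not the classical proof of Proposition 24) uses the law of cosines: with the two congruent sides a and b held fixed, the third side c is a strictly increasing function of the included angle γ on (0, π):
c^{2} = a^{2} + b^{2} - 2ab\cos\gamma, \qquad \frac{d}{d\gamma}\,c^{2} = 2ab\sin\gamma > 0 \ \text{ for } 0 < \gamma < \pi.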
https://en.wikipedia.org/wiki/Socle
Socle may refer to: Socle (mathematics), an algebraic object generated by minimal subobjects or by an eigenspace of an automorphism Socle (architecture), a plinth that supports a pedestal, statue, or column
https://en.wikipedia.org/wiki/Socle%20%28mathematics%29
In mathematics, the term socle has several related meanings. Socle of a group In the context of group theory, the socle of a group G, denoted soc(G), is the subgroup generated by the minimal normal subgroups of G. It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups. As an example, consider the cyclic group Z12 with generator u, which has two minimal normal subgroups, one generated by u4 (which gives a normal subgroup with 3 elements) and the other by u6 (which gives a normal subgroup with 2 elements). Thus the socle of Z12 is the group generated by u4 and u6, which is just the group generated by u2. The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however. If a group G is a finite solvable group, then the socle can be expressed as a product of elementary abelian p-groups. Thus, in this case, it is just a product of copies of Z/pZ for various p, where the same p may occur multiple times in the product. Socle of a module In the context of module theory and ring theory the socle of a module M over a ring R is defined to be the sum of the minimal nonzero submodules of M. It can be considered as a dual notion to that of the radical of a module. In set notation, Equivalently, The socle of a ring R can refer to one of two sets in the ring. Considering R as a right R-module, soc(RR) is defined, and considering R as a left R-module, soc(RR) is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal. If M is an Artinian module, soc(M) is itself an essential submodule of M. A module is semisimple if and only if soc(M) = M. Rings for which soc(M) = M for all M are precisely semisimple rings. soc(soc(M)) = soc(M). M is a finitely cogenerated module if and only if soc(M) is finitely generated and soc(M) is an essential submodule of M. Since the sum of semisimple modules is semisimple, the socle of a module could also be defined as the unique maximal semisimple submodule. From the definition of rad(R), it is easy to see that rad(R) annihilates soc(R). If R is a finite-dimensional unital algebra and M a finitely generated R-module then the socle consists precisely of the elements annihilated by the Jacobson radical of R. Socle of a Lie algebra In the context of Lie algebras, a socle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue −1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.) See also Injective hull Radical of a module Cosocle References Module theory Group theory Functional subgroups
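For the cyclic-group example above the computation can be displayed explicitly, and in standard notation the module-theoretic socle is the sum of the minimal nonzero submodules, equivalently the intersection of the essential submodules:
\operatorname{soc}(\mathbb{Z}_{12}) = \langle u^{4}\rangle\,\langle u^{6}\rangle = \langle u^{2}\rangle \cong \mathbb{Z}_{6} \cong \mathbb{Z}_{2} \times \mathbb{Z}_{3}
\operatorname{soc}(M) = \sum \{\, N \subseteq M : N \text{ is a minimal nonzero submodule} \,\} = \bigcap \{\, E \subseteq M : E \text{ is an essential submodule} \,\}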
https://en.wikipedia.org/wiki/Moral%20certainty
Moral certainty is a concept of intuitive probability. It means a very high degree of probability, sufficient for action, but short of absolute or mathematical certainty. Origins The notion of different degrees of certainty can be traced back to a statement in Aristotle's Nicomachean Ethics that one must be content with the kind of certainty appropriate to different subject matters, so that in practical decisions one cannot expect the certainty of mathematics. The Latin phrase moralis certitudo was first used by the French philosopher Jean Gerson about 1400, to provide a basis for moral action that could (if necessary) be less exact than Aristotelian practical knowledge, thus avoiding the dangers of philosophical scepticism and opening the way for a benevolent casuistry. The Oxford English Dictionary mentions occurrences in English from 1637. Law In law, moral (or 'virtual') certainty has been associated with verdicts based on certainty beyond a reasonable doubt. Legal debate about instructions to seek a moral certainty has turned on the changing definitions of the phrase over time. Whereas it can be understood as an equivalent to 'beyond reasonable doubt', in another sense moral certainty refers to a firm conviction which does not correlate but rather opposes evidentiary certainty: i.e. one may have a firm subjective gut feeling of guilt – a feeling of moral certainty – without the evidence necessarily justifying a guilty conviction. See also Argument from ignorance Precautionary principle References Further reading James Franklin, The Science of Conjecture: Evidence and Probability Before Pascal (Johns Hopkins University Press, 2001), ch. 4 External links Legal definition of "moral certainty" "And the moral of the story is...", on the dangers of moral certainty. Ideas, CBC Radio 1. February 17, 2010. Legal doctrines and principles Sociology of law Skepticism American legal terminology Criminal law Criminal procedure Moral psychology Legal reasoning
https://en.wikipedia.org/wiki/Baire%20function
In mathematics, Baire functions are functions obtained from continuous functions by transfinite iteration of the operation of forming pointwise limits of sequences of functions. They were introduced by René-Louis Baire in 1899. A Baire set is a set whose characteristic function is a Baire function. Classification of Baire functions Baire functions of class α, for any countable ordinal number α, form a vector space of real-valued functions defined on a topological space, as follows. The Baire class 0 functions are the continuous functions. The Baire class 1 functions are those functions which are the pointwise limit of a sequence of Baire class 0 functions. In general, the Baire class α functions are all functions which are the pointwise limit of a sequence of functions of Baire class less than α. Some authors define the classes slightly differently, by removing all functions of class less than α from the functions of class α. This means that each Baire function has a well defined class, but the functions of given class no longer form a vector space. Henri Lebesgue proved that (for functions on the unit interval) each Baire class of a countable ordinal number contains functions not in any smaller class, and that there exist functions which are not in any Baire class. Baire class 1 Examples: The derivative of any differentiable function is of class 1. An example of a differentiable function whose derivative is not continuous (at x = 0) is the function equal to when x ≠ 0, and 0 when x = 0. An infinite sum of similar functions (scaled and displaced by rational numbers) can even give a differentiable function whose derivative is discontinuous on a dense set. However, it necessarily has points of continuity, which follows easily from The Baire Characterisation Theorem (below; take K = X = R). The characteristic function of the set of integers, which equals 1 if x is an integer and 0 otherwise. (An infinite number of large discontinuities.) Thomae's function, which is 0 for irrational x and 1/q for a rational number p/q (in reduced form). (A dense set of discontinuities, namely the set of rational numbers.) The characteristic function of the Cantor set, which equals 1 if x is in the Cantor set and 0 otherwise. This function is 0 for an uncountable set of x values, and 1 for an uncountable set. It is discontinuous wherever it equals 1 and continuous wherever it equals 0. It is approximated by the continuous functions , where is the distance of x from the nearest point in the Cantor set. The Baire Characterisation Theorem states that a real valued function f defined on a Banach space X is a Baire-1 function if and only if for every non-empty closed subset K of X, the restriction of f to K has a point of continuity relative to the topology of K. By another theorem of Baire, for every Baire-1 function the points of continuity are a comeager Gδ set . Baire class 2 An example of a Baire class 2 function on the interval [0,1] that is not of class 1
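A standard example of this kind (supplied here for illustration, since the sentence above is cut short) is the Dirichlet function, the characteristic function of the rationals, which is of class 2 but not of class 1 and arises as a double pointwise limit of continuous functions:
\mathbf{1}_{\mathbb{Q}}(x) = \lim_{m\to\infty}\ \lim_{n\to\infty} \bigl(\cos(m!\,\pi x)\bigr)^{2n}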
https://en.wikipedia.org/wiki/The%20Indiana%20College%20Mathematics%20Competition
The Indiana College Mathematics Competition, originally The Friendly Mathematics Competition, is held each year by the Indiana Section of the Mathematical Association of America. History "The Friendly Mathematics Competition" was founded at Wabash College in 1965 by Professor Paul T. Mielke. Today it is known as "The Indiana College Mathematics Competition." The Competition has emphasized collegiality and teamwork from the very beginning, earning its sobriquet "The Friendly Exam" because of the (relatively) noncompetitive ambience created during the contest. Students within a team cooperate and the teams submit one solution per question. Each team determines how to manage its work and time: Some teams are truly collaborative, whereas others carry out a divide and conquer strategy, with different members working on different problems. The number of problems varies from six to eight per year, and no calculators are allowed. Since 1978, the competition has been a part of the spring meeting of the Indiana Section of the MAA. As is consistent with the "friendly" nature of the competition, each year's problems include "some problems everyone should be able to do," along with those that challenge and allow for distinguishing among the problem solvers. (One problem statement on the 1968 exam was false!) Many problems are classics borrowed from various sources. Early Competitions Exam #1 - 1966 The first "friendly competition" was held at Wabash College, located in Crawfordsville, a bit northwest of Indianapolis. Eight schools participated in the competition that year. It was won by the team from Wabash College consisting of James Clynch, Albert Hart Jr., and Larry Haugh. Exam #2 - 1967 This competition was held at Marian College in Indianapolis. The winning team, consisting of David Hafling, Albert Hart Jr., and Robert Spear was again from Wabash College, Sample Question A sample question from 1968: "Let us assume that a given pair of people either know each other or are strangers. If six people enter a room, show that there must be at least three people who know each other pairwise or there must be at least three people who are pairwise strangers." See also Marian College Wabash College References A Friendly Mathematics Competition, Rick Gillman Editor. 2003 External links Past Results Education in Indiana Wabash College
https://en.wikipedia.org/wiki/Helmholtz%20theorem
There are several theorems known as the Helmholtz theorem: Helmholtz decomposition, also known as the fundamental theorem of vector calculus Helmholtz reciprocity in optics Helmholtz theorem (classical mechanics) Helmholtz's theorems in fluid mechanics Helmholtz minimum dissipation theorem See also Helmholtz–Thévenin theorem