https://en.wikipedia.org/wiki/Shou-Wu%20Zhang
|
Shou-Wu Zhang (born October 9, 1962) is a Chinese-American mathematician known for his work in number theory and arithmetic geometry. He is currently a Professor of Mathematics at Princeton University.
Biography
Early life
Shou-Wu Zhang was born in Hexian, Ma'anshan, Anhui, China on October 9, 1962. Zhang grew up in a poor farming household and could not attend school until eighth grade due to the Cultural Revolution. He spent most of his childhood raising ducks in the countryside and self-studying mathematics textbooks that he acquired from sent-down youth in exchange for frogs. By the time he entered junior high school at the age of fourteen, he had taught himself calculus and had become interested in number theory after reading about Chen Jingrun's proof of Chen's theorem, which made substantial progress on Goldbach's conjecture.
Education
Zhang was admitted to the Sun Yat-sen University chemistry department in 1980 after scoring poorly on his mathematics entrance examinations, but he later transferred to the mathematics department after feigning color blindness, and he received his bachelor's degree in mathematics in 1983. He then studied under analytic number theorist Wang Yuan at the Chinese Academy of Sciences, where he received his master's degree in 1986. That year, Dorian M. Goldfeld brought Zhang to the United States to pursue doctoral studies at Columbia University, where he studied under Goldfeld, Hervé Jacquet, Lucien Szpiro, and Gerd Faltings and completed his PhD under Szpiro in 1991.
Career
Zhang was a member of the Institute for Advanced Study and an assistant professor at Princeton University from 1991 to 1996. In 1996, Zhang moved back to Columbia University, where he was a tenured professor until 2013. He has been a professor at Princeton University since 2011 and the Eugene Higgins Professor since 2021.
Zhang is on the editorial boards of: Acta Mathematica Sinica, Algebra & Number Theory, Forum of Mathematics, Journal of Differential Geometry, National Science Review, Pure and Applied Mathematics Quarterly, Science in China, and Research in Number Theory. He has previously served on the editorial boards of: Journal of Number Theory, Journal of the American Mathematical Society, Journal of Algebraic Geometry, and International Journal of Number Theory.
Research
Zhang's doctoral thesis, Positive line bundles on Arithmetic Surfaces, proved a Nakai–Moishezon type theorem in intersection theory using a result from differential geometry already proved in Tian Gang's doctoral thesis. In a series of subsequent papers, he further developed his theory of 'positive line bundles' in Arakelov theory, which culminated in a proof (with Emmanuel Ullmo) of the Bogomolov conjecture.
In a series of works in the 2000s, Zhang proved a generalization of the Gross–Zagier theorem from elliptic curves over the rationals to modular abelian varieties of GL(2) type over totally real fields. In particular, th
|
https://en.wikipedia.org/wiki/Taubes
|
Taubes is a surname. Notable people with the surname include:
Clifford Taubes (born 1954), professor of mathematics at Harvard
Taubes's Gromov invariant, mathematical concept named after Clifford Taubes
Jacob Taubes (1923–1987), sociologist of religion, philosopher, and scholar of Judaism
Gary Taubes, science journalist and author of Good Calories, Bad Calories
Susan Taubes (1928–1969), writer and sociologist of religion, wife of Jacob Taubes
See also
Daub (surname)
Taube (surname)
Taube family
Patronymic surnames
Surnames of Jewish origin
|
https://en.wikipedia.org/wiki/Productivism%20%28art%29
|
Productivism is an early twentieth-century art movement that is characterized by its spare geometry, limited color palette, and Cubist and Futurist influences. Aesthetically, it also looks similar to work by Kazimir Malevich and the Suprematists.
But where Constructivism sought to reflect modern industrial society and urban space and Suprematism sought to create "anti-materialist, abstract art that originated from pure feeling," Productivism's goal was to create accessible art in service to the proletariat, with artists functioning more like "engineers ... than easel painters."
"We declare uncompromising war on art!" Aleksei Gan wrote in a 1922 manifesto. Alexander Rodchenko, Varvara Stepanova, Kazimir Malevich, El Lissitzky, Liubov Popova, and others similarly renounced pure art in favor of serving society, a resolution born of extensive discussion and debate at the Moscow-based Institute of Artistic Culture (INKhUK), the Society of Young Artists, journals of the day and organizations like Higher State Artistic and Technical Workshops (VKhUTEMAS) all of whom agreed on the need for a radical break from the "critical and material radicalization of Constructivism."
Overview
The Constructivist movement reconceptualized the aesthetics of art by stripping it to its fundamentals and rejecting insular precedents. In practice, this meant an emphasis on the fundamentals of geometry (circles, squares, rectangles) and a limited palette: black, occasionally yellow, and red (Russian: красный), a word once "used to describe something beautiful, good or honorable." But the Productivists took things several ground-breaking steps further.
By 1923, Rodchenko was arguing that thematic montage had replaced easel painting. Meanwhile the artist brothers Georgi and Vladimir Stenberg were cultivating new montage techniques to optically indicate motion, energy and rhythm, with "unconventional viewing angles, radical foreshortening, and unsettling close-ups." El Lissitzky, for his part, developed a theory of type that could visually mimic sound and gesture so as to best organize "the people's consciousness." As a group, these innovations made the Productivists persuasive, attention-getting and influential, which is why what began as political messaging was later classified as agitprop and used in commercial advertising. El Lissitzky's insight that "No form of representation is so readily comprehensible to the masses as photography" was borne out by the success of Soviet poster art and by Rodchenko's later work creating "ads for ordinary objects such as beer, pacifiers, cookies, watches, and other consumer products."
Meanwhile, the avant-gardes propagating accessibility "began designing objects and furniture to transform ways of life." They also created "production books" that introduced children to the world of work, and taught them how things were made. Like the Secessionists in Central Europe, they also designed textiles, clothing, ceramics and typography.
By 1926, Bori
|
https://en.wikipedia.org/wiki/Scalar%20%28mathematics%29
|
A scalar is an element of a field which is used to define a vector space.
In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers).
A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space.
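For illustration, here is a short Python sketch (using NumPy, with arbitrary example values) of the two operations just described: scalar multiplication returns another vector, while the scalar (inner) product of two vectors returns a single number.

```python
import numpy as np

k = 2.5                          # a scalar
v = np.array([1.0, -2.0, 4.0])   # a vector in R^3
w = np.array([0.5, 3.0, 1.0])    # another vector in R^3

print(k * v)         # scalar multiplication: another vector, [ 2.5 -5.  10. ]
print(np.dot(v, w))  # scalar (inner) product: a single number, -1.5
```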
A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
The term scalar is also sometimes used informally to mean a vector, matrix, tensor, or other, usually, "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar.
The real component of a quaternion is also called its scalar part.
The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix.
Etymology
The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591):
Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms.
(Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.)
According to a citation in the Oxford English Dictionary the first recorded usage of the term "scalar" in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion:
The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.
Definitions and properties
Scalars of vector spaces
A vector space is defined as a set of vectors (an additive abelian group), a set of scalars (a field), and a scalar multiplication operation that takes a scalar k and a vector v to form another vector kv. For example, in a coordinate space, the scalar multiplication $k(v_1, v_2, \dots, v_n)$ yields $(kv_1, kv_2, \dots, kv_n)$. In a (linear) function space, $kf$ is the function $x \mapsto k(f(x))$.
The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields.
Scalars as vector components
According to a fundament
|
https://en.wikipedia.org/wiki/Richard%20McKelvey
|
Richard Drummond McKelvey (April 27, 1944 – April 22, 2002) was a political scientist, specializing in mathematical theories of voting. He received his BS in Mathematics from Oberlin College, MA in mathematics from Washington University in St. Louis, and PhD in political science from University of Rochester. He was an Econometric Society fellow, and was the Edie and Lew Wasserman Professor of Political Science at the California Institute of Technology until his death, from cancer, in 2002.
McKelvey also wrote several articles about instability. One discussed the topic of agenda manipulation. The McKelvey theorem indicates that almost every possible outcome can be realized through democratic decision-making by cleverly choosing the order, or agenda, in which decisions are taken. The desired result is established by ensuring that at each stage a different composition of the majority determines the outcome of that part of the decision-making procedure. The person who designs the decision-making procedure needs to know the preferences of the participants in order to achieve the most desirable outcome by shifting majorities. The position of agenda-setter is therefore attractive, because it makes it possible to implement one's own preferred choice.
In 2007 John Aldrich (Duke), James Alt (Harvard) and Arthur Lupia (Michigan) published the edited volume Positive Changes in Political Science: The Legacy of Richard D. McKelvey’s Most Influential Writings with the University of Michigan Press. The volume contains reprints of several of Richard McKelvey's classic papers along with original essays by leading names in political science.
Publications
External links
Positive Changes in Political Science press release
Thomas R. Palfrey, "Richard Drummond McKelvey", Biographical Memoirs of the National Academy of Sciences
American political scientists
1944 births
2002 deaths
Oberlin College alumni
Washington University in St. Louis alumni
University of Rochester alumni
Washington University in St. Louis mathematicians
California Institute of Technology faculty
Fellows of the Econometric Society
Members of the United States National Academy of Sciences
20th-century political scientists
|
https://en.wikipedia.org/wiki/Formula%20%28disambiguation%29
|
A formula, in mathematics, is an entity constructed using the symbols and formation rules of a given logical language.
Formula may also refer to:
A concept in the theory of oral-formulaic composition, related to oral poetry
A type of ritual in Roman law
A defunct video game label of Lost Boys Games, a defunct Dutch game developer
Bill of materials
Chemical formula, an expression of the contents of a chemical compound
Dave Formula (born 1946), British musician
Formula (album), a 1995 album by OLD
Formula (boats), a brand of pleasure boats
Formula fiction, literature following a predictable form
Formula language, a Lotus Notes programming language
Formula racing, a type of motorsport
Formulæ (album), a 2016 album by Master's Hammer
Infant formula, a food for infants
Trinitarian formula, a Biblical phrase
Well-formed formula, a word that is part of a formal language, in logic
"[Formula]" (ΔMi−1 = −αΣn=1NDi[n] [Σj∈C[i]Fji[n − 1] + Fexti[n−1]]), the first B-side of "Windowlicker" by Aphex Twin, also known as "[Equation]"
See also
Formulation
The Formula (disambiguation)
|
https://en.wikipedia.org/wiki/Concrete%20Mathematics
|
Concrete Mathematics: A Foundation for Computer Science, by Ronald Graham, Donald Knuth, and Oren Patashnik, first published in 1989, is a textbook that is widely used in computer-science departments as a substantive but light-hearted treatment of the analysis of algorithms.
Contents and history
The book provides mathematical knowledge and skills for computer science, especially for the analysis of algorithms. According to the preface, the topics in Concrete Mathematics are "a blend of CONtinuous and disCRETE mathematics". Calculus is frequently used in the explanations and exercises. The term "concrete mathematics" also denotes a complement to "abstract mathematics".
The book is based on a course begun in 1970 by Knuth at Stanford University. The book expands on the material (approximately 100 pages) in the "Mathematical Preliminaries" section of Knuth's The Art of Computer Programming. Consequently, some readers use it as an introduction to that series of books.
Concrete Mathematics has an informal and often humorous style. The authors reject what they see as the dry style of most mathematics textbooks. The margins contain "mathematical graffiti", comments submitted by the text's first editors: Knuth and Patashnik's students at Stanford.
As with many of Knuth's books, readers are invited to claim a reward for any error found in the book—in this case, whether an error is "technically, historically, typographically, or politically incorrect".
The book popularized some mathematical notation: the Iverson bracket, floor and ceiling functions, and notation for rising and falling factorials.
Typography
Donald Knuth used the first edition of Concrete Mathematics as a test case for the AMS Euler typeface and Concrete Roman font.
Chapter outline
Recurrent Problems
Summation
Integer Functions
Number Theory
Binomial Coefficients
Special Numbers
Generating Functions
Discrete Probability
Asymptotics
Editions
Errata: (1994), (January 1998), (27th printing, May 2013)
References
External links
ToC and blurb for Concrete Mathematics: A Foundation for Computer Science, 2nd ed.
Preface for Concrete Mathematics: A Foundation for Computer Science, 2nd ed.
1988 non-fiction books
Computer science books
Mathematics textbooks
Books by Donald Knuth
Addison-Wesley books
American non-fiction books
|
https://en.wikipedia.org/wiki/Robert%20Calderbank
|
Robert Calderbank (born 28 December 1954) is a professor of Computer Science, Electrical Engineering, and Mathematics and director of the Information Initiative at Duke University. He received a BSc from Warwick University in 1975, an MSc from Oxford in 1976, and a PhD from Caltech in 1980, all in mathematics. He joined Bell Labs in 1980, and retired from AT&T Labs in 2003 as Vice President for Research and Internet and network systems. He then went to Princeton as a professor of Electrical Engineering, Mathematics and Applied and Computational Mathematics, before moving to Duke in 2010 to become Dean of Natural Sciences.
His contributions to coding and information theory won the IEEE Information Theory Society Paper Award in 1995 and 1999.
He was elected as a member into the US National Academy of Engineering in 2005 for leadership in communications research, from advances in algebraic coding theory to signal processing for wire-line and wireless modems. He also became a fellow of the American Mathematical Society in 2012.
Calderbank won the 2013 IEEE Richard W. Hamming Medal and the 2015 Claude E. Shannon Award.
He was named a SIAM Fellow in the 2021 class of fellows, "for deep contributions to information theory".
He is married to Ingrid Daubechies.
References
External links
Dean Profile at Duke.
Faculty Profile at Princeton.
Publications on the DBLP.
Publications from the arXiv.
Publications from Google Scholar.
1954 births
Living people
American electrical engineers
20th-century American mathematicians
21st-century American mathematicians
Alumni of the University of Warwick
Alumni of the University of Oxford
California Institute of Technology alumni
Princeton University faculty
Duke University faculty
Fellows of the American Mathematical Society
Fellows of the Society for Industrial and Applied Mathematics
Members of the United States National Academy of Engineering
|
https://en.wikipedia.org/wiki/Francis%20Sowerby%20Macaulay
|
Francis Sowerby Macaulay FRS (11 February 1862, Witney – 9 February 1937, Cambridge) was an English mathematician who made significant contributions to algebraic geometry. He is known for his 1916 book The Algebraic Theory of Modular Systems (an old term for ideals), which greatly influenced the later course of commutative algebra. Cohen–Macaulay rings, Macaulay duality, the Macaulay resultant and the Macaulay and Macaulay2 computer algebra systems are named for Macaulay.
Macaulay was educated at Kingswood School and graduated with distinction from St John's College, Cambridge. He taught the top mathematics class in St Paul's School in London from 1885 to 1911. His students included J. E. Littlewood and G. N. Watson.
In 1928 Macaulay was elected Fellow of the Royal Society.
Publications
See also
Gorenstein ring
References
1862 births
1937 deaths
19th-century English mathematicians
20th-century English mathematicians
Algebraic geometers
Alumni of St John's College, Cambridge
Fellows of the Royal Society
People educated at Kingswood School, Bath
|
https://en.wikipedia.org/wiki/Relay%20channel
|
In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
General discrete-time memoryless relay channel
A discrete memoryless single-relay channel can be modelled as four finite sets, $\mathcal{X}_1$, $\mathcal{X}_2$, $\mathcal{Y}_1$ and $\mathcal{Y}$, and a conditional probability distribution $p(y, y_1 \mid x_1, x_2)$ on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by $p(x_1, x_2)$.
                   o------------------o
                   |  Relay Encoder   |
                   o------------------o
                       Λ           |
                       | y1     x2 |
                       |           V
o---------o   x1   o------------------o   y   o---------o
| Encoder |------->|  p(y,y1|x1,x2)   |------>| Decoder |
o---------o        o------------------o       o---------o
There exist three main relaying schemes: Decode-and-Forward, Compress-and-Forward and Amplify-and-Forward. The first two schemes were first proposed in the pioneering article by Cover and El-Gamal.
Decode-and-Forward (DF): In this relaying scheme, the relay decodes the source message in one block and transmits the re-encoded message in the following block. The achievable rate of DF is known as $R_{DF} = \max_{p(x_1,x_2)} \min\{\, I(X_1,X_2;Y),\ I(X_1;Y_1 \mid X_2) \,\}$.
Compress-and-Forward (CF): In this relaying scheme, the relay quantizes the received signal in one block and transmits the encoded version of the quantized received signal in the following block. The achievable rate of CF is known as $R_{CF} = \sup I(X_1; Y, \hat{Y}_1 \mid X_2)$ subject to $I(X_2; Y) \geq I(Y_1; \hat{Y}_1 \mid X_2, Y)$.
Amplify-and-Forward (AF): In this relaying scheme, the relay sends an amplified version of the received signal in the last time-slot. Comparing with DF and CF, AF requires much less delay as the relay node operates time-slot by time-slot. Also, AF requires much less computing power as no decoding or quantizing operation is performed at the relay side.
Cut-set upper bound
The first upper bound on the capacity of the relay channel was derived in the pioneering article by Cover and El-Gamal and is known as the cut-set upper bound. This bound says
$C \leq \max_{p(x_1,x_2)} \min\{\, I(X_1; Y, Y_1 \mid X_2),\ I(X_1, X_2; Y) \,\},$
where C is the capacity of the relay channel. The first term and second term in the minimization above are called the broadcast bound and multi-access bound, respectively.
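As an illustrative sketch (not from the sources above), the Python function below evaluates the two terms of the cut-set expression for one given input distribution p(x1, x2) and channel p(y, y1 | x1, x2); the capacity bound itself is the maximum of this value over all input distributions. The array layout and function names are assumptions made for this example.

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits, computed from a joint distribution array p_ab[a, b]."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log2(p_ab[mask] / (pa @ pb)[mask])).sum())

def cut_set_value(p_x1x2, p_yy1_given_x1x2):
    """min{ I(X1;Y,Y1|X2), I(X1,X2;Y) } for one choice of p(x1, x2).

    p_x1x2[a, b] = P(X1=a, X2=b);
    p_yy1_given_x1x2[a, b, y, y1] = P(Y=y, Y1=y1 | X1=a, X2=b)."""
    n1, n2, ny, ny1 = p_yy1_given_x1x2.shape
    joint = p_x1x2[:, :, None, None] * p_yy1_given_x1x2   # P(x1, x2, y, y1)

    # multi-access term: I(X1,X2 ; Y)
    multi_access = mutual_information(joint.sum(axis=3).reshape(n1 * n2, ny))

    # broadcast term: I(X1 ; Y,Y1 | X2) = sum over x2 of P(x2) * I(X1 ; Y,Y1 | X2=x2)
    broadcast = 0.0
    for b in range(n2):
        p_b = joint[:, b, :, :].sum()
        if p_b > 0:
            cond = joint[:, b, :, :].reshape(n1, ny * ny1) / p_b
            broadcast += p_b * mutual_information(cond)

    return min(broadcast, multi_access)
```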
Degraded relay channel
A relay channel is said to be degraded if $y$ depends on $x_1$ only through $y_1$ and $x_2$, i.e., $p(y \mid x_1, x_2, y_1) = p(y \mid y_1, x_2)$. In the article by Cover and El-Gamal it is shown that the capacity of the degraded relay channel can be achieved using the Decode-and-Forward scheme. It turns out that the capacity in this case is equal to the cut-set upper bound.
Reversely degraded relay channel
A relay channel is said to be reversely degraded if $p(y, y_1 \mid x_1, x_2) = p(y \mid x_1, x_2)\, p(y_1 \mid y, x_2)$. Cover and El-Gamal proved that the Direct Transmission Lower Bound (wherein the relay is not used) is tight when the relay channel is reversely degraded.
Feedback relay channel
Relay without delay channel
In a relay-without-delay channel (RWD), each transmitted relay symbol can depend on relay's past as well as present received symbols. Relay Wit
|
https://en.wikipedia.org/wiki/Pronormal%20subgroup
|
In mathematics, especially in the field of group theory, a pronormal subgroup is a subgroup that is embedded in a nice way. Pronormality is a simultaneous generalization of both normal subgroups and abnormal subgroups such as Sylow subgroups.
A subgroup is pronormal if each of its conjugates is conjugate to it already in the subgroup generated by it and its conjugate. That is, H is pronormal in G if for every g in G, there is some k in the subgroup generated by H and H^g such that H^k = H^g. (Here H^g denotes the conjugate subgroup gHg^−1.)
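The definition can be checked directly on small permutation groups. The following self-contained Python sketch (an illustration, not from the article; the brute-force helpers are invented for this example) tests it for a Sylow 2-subgroup of the symmetric group S4, which should be pronormal by the facts listed below.

```python
from itertools import product  # not strictly needed; kept for experimentation

def compose(p, q):            # (p o q)(i) = p[q[i]], permutations as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generate(gens, n):        # subgroup generated by gens, by closure
    identity = tuple(range(n))
    elems, frontier = {identity}, {identity}
    while frontier:
        new = set()
        for a in frontier:
            for g in gens:
                b = compose(a, g)
                if b not in elems:
                    new.add(b)
        elems |= new
        frontier = new
    return elems

def conjugate_set(H, g):      # H^g = { g h g^-1 : h in H }
    ginv = inverse(g)
    return {compose(compose(g, h), ginv) for h in H}

def is_pronormal(H, G, n):
    H = frozenset(H)
    for g in G:
        Hg = frozenset(conjugate_set(H, g))
        J = generate(list(H | Hg), n)          # subgroup generated by H and H^g
        if not any(frozenset(conjugate_set(H, k)) == Hg for k in J):
            return False
    return True

n = 4
S4 = generate([(1, 0, 2, 3), (1, 2, 3, 0)], n)   # a transposition and a 4-cycle
r = (1, 2, 3, 0)                                  # the 4-cycle (0 1 2 3)
s = (2, 1, 0, 3)                                  # the transposition (0 2)
D4 = generate([r, s], n)                          # dihedral group of order 8
print(len(S4), len(D4))                           # 24 8
print(is_pronormal(D4, S4, n))                    # expected True (Sylow subgroups are pronormal)
```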
Here are some relations with other subgroup properties:
Every normal subgroup is pronormal.
Every Sylow subgroup is pronormal.
Every pronormal subnormal subgroup is normal.
Every abnormal subgroup is pronormal.
Every pronormal subgroup is weakly pronormal, that is, it has the Frattini property.
Every pronormal subgroup is paranormal, and hence polynormal.
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Paranormal%20subgroup
|
In mathematics, in the field of group theory, a paranormal subgroup is a subgroup H such that, for every conjugate of H, the subgroup generated by H and that conjugate equals the normal closure of H within it.
In symbols, H is paranormal in G if, given any g in G, the subgroup K generated by H and H^g is equal to the normal closure of H in K. Equivalently, a subgroup is paranormal if its weak closure and normal closure coincide in all intermediate subgroups.
Here are some facts relating paranormality to other subgroup properties:
Every pronormal subgroup, and hence, every normal subgroup and every abnormal subgroup, is paranormal.
Every paranormal subgroup is a polynormal subgroup.
In finite solvable groups, every polynormal subgroup is paranormal.
External links
Subgroup properties
|
https://en.wikipedia.org/wiki/Abnormal%20subgroup
|
In mathematics, specifically group theory, an abnormal subgroup is a subgroup H of a group G such that for all x in G, x lies in the subgroup generated by H and H^x, where H^x denotes the conjugate subgroup xHx^−1.
Here are some facts relating abnormality to other subgroup properties:
Every abnormal subgroup is a self-normalizing subgroup, as well as a contranormal subgroup.
The only normal subgroup that is also abnormal is the whole group.
Every abnormal subgroup is a weakly abnormal subgroup, and every weakly abnormal subgroup is a self-normalizing subgroup.
Every abnormal subgroup is a pronormal subgroup, and hence a weakly pronormal subgroup, a paranormal subgroup, and a polynormal subgroup.
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Contranormal%20subgroup
|
In mathematics, in the field of group theory, a contranormal subgroup is a subgroup whose normal closure in the group is the whole group. Clearly, a contranormal subgroup can be normal only if it is the whole group.
Some facts:
Every subgroup of a finite group is a contranormal subgroup of a subnormal subgroup. In general, every subgroup of a group is a contranormal subgroup of a descendant subgroup.
Every abnormal subgroup is contranormal.
References
Bibliography
Subgroup properties
|
https://en.wikipedia.org/wiki/C-normal%20subgroup
|
In mathematics, in the field of group theory, a subgroup H of a group G is called c-normal if there is a normal subgroup T of G such that G = HT and the intersection of H and T lies inside the normal core of H.
For a weakly c-normal subgroup, we only require T to be subnormal.
Here are some facts about c-normal subgroups:
Every normal subgroup is c-normal
Every retract is c-normal
Every c-normal subgroup is weakly c-normal
References
Y. Wang, c-normality of groups and its properties, Journal of Algebra, Vol. 180 (1996), 954-965
Subgroup properties
|
https://en.wikipedia.org/wiki/Malnormal%20subgroup
|
In mathematics, in the field of group theory, a subgroup H of a group G is termed malnormal if for any x in G but not in H, H and xHx^−1 intersect in the identity element.
Some facts about malnormality:
An intersection of malnormal subgroups is malnormal.
Malnormality is transitive, that is, a malnormal subgroup of a malnormal subgroup is malnormal.
The trivial subgroup and the whole group are malnormal subgroups. A normal subgroup that is also malnormal must be one of these.
Every malnormal subgroup is a special type of C-group called a trivial intersection subgroup or TI subgroup.
When G is finite, a malnormal subgroup H distinct from 1 and G is called a "Frobenius complement". The set N of elements of G which are either equal to 1 or non-conjugate to any element of H is a normal subgroup of G, called the "Frobenius kernel", and G is the semidirect product of H and N (Frobenius' theorem).
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Business%20mathematics
|
Business mathematics is the mathematics used by commercial enterprises to record and manage business operations. Commercial organizations use mathematics in accounting, inventory management, marketing, sales forecasting, and financial analysis.
Mathematics typically used in commerce includes elementary arithmetic, elementary algebra, statistics and probability. For some management problems, more advanced mathematics, such as calculus, matrix algebra, and linear programming, may be applied.
High school
Business mathematics, sometimes called commercial math or consumer math, is a group of practical subjects used in commerce and everyday life. In schools, these subjects are often taught to students who are not planning a university education. In the United States, they are typically offered in high schools and in schools that grant associate's degrees; elsewhere they may be included under business studies. The emphasis in these courses is on computational skills and their practical application, with practice being predominant. These courses often fulfill the general math credit for high school students.
A (U.S.) business math course typically includes a review of elementary arithmetic, including fractions, decimals, and percentages. Elementary algebra is often included as well, in the context of solving practical business problems. The practical applications typically include checking accounts, price discounts, markups and markdowns, payroll calculations, simple and compound interest, consumer and business credit, and mortgages and revenues.
University level
Undergraduate
Business mathematics comprises mathematics credits taken at an undergraduate level by business students.
The course is often organized around the various business sub-disciplines, including the above applications, and usually includes a separate module on interest calculations; the mathematics comprises mainly algebraic techniques. Many programs, as mentioned, extend to more sophisticated mathematics. Common credits are Business Calculus and Business Statistics. Programs may also cover matrix operations, as above.
At many US universities, business students instead study "finite mathematics", a course combining several applicable topics, including basic probability theory, an introduction to linear programming, some theory of matrices, and introductory game theory; the course sometimes includes a high-level treatment of calculus.
Since these courses are focused on problems from the business world, the syllabus is adjusted relative to standard courses in the mathematics or science fields. The calculus course especially emphasizes differentiation, leading to optimization of costs and revenue. Integration is less emphasized, as its business applications are fewer (it is used here in some interest calculations and for theoretically aggregating costs and/or revenue) and it is more technically demanding.
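As a small worked illustration of the differentiation-based optimization just mentioned (the revenue and cost functions below are invented for the example), the profit-maximizing quantity can be located from the first-order condition using Python's SymPy:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
revenue = 50*q - 0.1*q**2          # illustrative revenue function R(q)
cost = 200 + 10*q + 0.05*q**2      # illustrative fixed plus variable cost C(q)
profit = revenue - cost

q_star = sp.solve(sp.diff(profit, q), q)[0]   # first-order condition dP/dq = 0
print(q_star, profit.subs(q, q_star))         # optimal quantity and resulting profit
```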
Relatedly, in a regular calculus course students study trigonometric f
|
https://en.wikipedia.org/wiki/Modular%20subgroup
|
In mathematics, in the field of group theory, a modular subgroup is a subgroup that is a modular element in the lattice of subgroups, where the meet operation is defined by the intersection and the join operation is defined by the subgroup generated by the union of subgroups.
By the modular property of groups, every quasinormal subgroup (that is, a subgroup that permutes with all subgroups) is modular. In particular, every normal subgroup is modular.
References
.
Subgroup properties
|
https://en.wikipedia.org/wiki/T-group
|
T-group may refer to:
T-group (mathematics), a mathematical structure
T-group (social psychology), a group of people learning about human behaviour by interacting with each other
|
https://en.wikipedia.org/wiki/Christian%20Gouri%C3%A9roux
|
Christian Gouriéroux (born 1949) is an econometrician who holds a Doctor of Philosophy in mathematics from the University of Rouen. He holds the title of Professor of exceptional class in France. Gouriéroux is now a professor at the University of Toronto and at CREST, Paris [Center for Research in Economics and Statistics].
Gouriéroux has published in journals worldwide, and was a recipient of the Koopmans Prize (shared with two collaborators) for their project "General Approach to Serial Correlation" (1985–1987). He was also awarded the Silver Medal of the Conseil National de Recherche Scientifique by the French Ministry of Research. He is a fellow of the Econometric Society.
Biography
Gouriéroux completed his undergraduate studies in economics and statistics at ENSAE. He has received Doctorat honoris causa from Université de Mons-Hainaut, Université de Neuchâtel, as well as HEC Montréal.
Works
Gouriéroux has written 17 books and over 160 articles, including 12 in Econometrica. He is known for his work on quasi-maximum likelihood estimation and indirect inference.
Books
Articles/Essays/Papers
References
Christian S. Gouriéroux: IDEAS File
Global Investor Profile
French economists
Living people
Econometricians
1949 births
University of Rouen Normandy alumni
Fellows of the Econometric Society
|
https://en.wikipedia.org/wiki/Bernard%20Carr
|
Bernard J. Carr is a British professor of mathematics and astronomy at Queen Mary University of London (QMUL).
His research interests include the early universe, dark matter, general relativity, primordial black holes, and the anthropic principle.
Education
He completed his BA in mathematics in 1972 at Trinity College, Cambridge. For his doctorate, obtained in 1976, he studied relativity and cosmology under Stephen Hawking at the Institute of Astronomy in Cambridge. He was the president of the Cambridge University Buddhist Society and is friends with Ajahn Brahm.
Academic career
In 1976 he was elected to a Fellowship at Trinity and he also became an advanced SERC fellow at the Institute of Astronomy. In 1979 he was awarded a Lindemann Fellowship for post-doctoral research in America and spent a year working in various universities there. In 1980 he took up a senior research fellowship at the Institute of Astronomy in Cambridge. In 1985 he moved to the then Queen Mary College, University of London, where he is now professor of mathematics and astronomy.
He has held visiting professorships at Kyoto University, Tokyo University and the Fermi National Accelerator Laboratory, and is a frequent visitor to other institutes in America and Canada. He is the author of more than two hundred scientific papers and his monograph, Cosmological Gravitational Waves, won the 1985 Adams Essay Prize.
Interests outside academia
He has interests outside physics, including psychic research. He has been a member of the Society for Psychical Research (SPR) for thirty years, serving as its education officer and the chairman of its research activities committee for various periods. He was president of the SPR from 2000 to 2004. He is also a director of Scientific and Medical Network (SMN) in the UK.
He has been the co-holder of a grant from the John Templeton Foundation for a project entitled Fundamental Physics and the Problem of our Existence. He is the editor of a book based on a series of conferences funded by the Foundation, entitled Universe or Multiverse? Bernard Carr also made an appearance in the documentary film The Trouble with Atheism, where he discussed these concepts, and also appeared in the science documentary film Target...Earth? (1980).
Publications
Bernard Carr (ed.): Universe or Multiverse? Cambridge University Press, 2007,
Media
In 2014, Carr was featured in "The Principle", a documentary examining the Copernican Principle.
References
External links
Page at QMUL
Year of birth missing (living people)
Living people
Academics of Queen Mary University of London
California Institute of Technology alumni
Alumni of Trinity College, Cambridge
Fellows of Trinity College, Cambridge
Members of the Eurasian Astronomical Society
British parapsychologists
|
https://en.wikipedia.org/wiki/ATR
|
ATR may refer to:
Medicine
Acute transfusion reaction
Ataxia telangiectasia and Rad3 related, a protein involved in DNA damage repair
Science and mathematics
Advanced Test Reactor, nuclear research reactor at the Idaho National Laboratory, US
Attenuated total reflectance in infrared spectroscopy
Advanced tongue root, a phonological feature in linguistics
Atractyloside, a toxin and inhibitor of "ADP/ATP translocase"
ATR0, an axiom system in reverse mathematics
Technology
Answer to reset, a message output by a contact Smart Card
Automatic target recognition, recognition ability
Autothermal reforming, a natural gas reforming technology
Transport
ATR (aircraft manufacturer), an Italian-French aircraft manufacturer
ATR 42 airliner
ATR 72 airliner
IATA code for Atar International Airport
Andaman Trunk Road
Air Transport Rack, standards for plug-in electronic modules in aviation and elsewhere; various suppliers e.g. ARINC
Atmore (Amtrak station), Amtrak station code ATR
Music
All That Remains (band), an American heavy metal band
Atari Teenage Riot, a German techno band performing "digital hardcore" music
"ATR" (song), a song by Atari Teenage Riot
Organisations
Absent Teacher Reserve, of teachers in New York City
Americans for Tax Reform
Anglican Theological Review
Other
African Traditional Religion
ATR, the United States Navy hull classification symbol for a rescue tug
ATR: All Terrain Racing, a video game
ATR.1 certificate, in trade between the European Union and Turkey
ATR (company) (Auto-Teile-Ring), Germany
Average True Range, a market volatility indicator
ATR (TV channel), a Crimean Tatar television channel in Ukraine
|
https://en.wikipedia.org/wiki/Central%20Statistical%20Office%20%28United%20Kingdom%29
|
The Central Statistical Office (CSO) was a British government department charged with the collection and publication of economic statistics for the United Kingdom. It preceded the Office for National Statistics.
Establishment of the CSO
During the Second World War, the Prime Minister, Winston Churchill, directed the Cabinet Secretary, Sir Edward Bridges (later Lord Bridges), to advise him on how a central statistical office could be created in the Prime Minister's office in order to consolidate and issue authoritative working statistics. Following consideration, a formal announcement was made to establish the CSO on 27 January 1941, with the purpose of handling the descriptive statistics required for the war effort and developing national income accounts.
Shortly afterward, Harry Campion (later Sir Harry Campion), a member of the Central Economic Information Service in the Cabinet Office, was appointed director. After the war there was an expansion in the work of official statisticians resulting from the aim of managing the economy through controlling government income and expenditure using an integrated system of national accounts and in 1962, comprehensive financial statistics were published for the first time.
Development of the CSO
Following Sir Harry Campion's retirement in March 1967, Claus Moser (now Lord Moser), a professor of statistics at the London School of Economics, was appointed director. Moser had the task of implementing proposals made by the House of Commons Estimates Committee in 1966, including the setting up of the Business Statistics Office to provide a centralised system of obtaining information from industry and the Office for Population, Censuses and Surveys to collect information from individuals and households through programmes of censuses, surveys and registers. He made major improvements in the area of social statistics in close partnership with the Office of Population Censuses and Surveys and paid particular attention to the development of the CSO's role of co-ordinating the statistical activities of individual government departments and the development of the Government Statistical Service (GSS), of which he became the head in 1968. After eleven years of statistical development and reorganisation, Moser resigned on 1 August 1978. The third director of the CSO was John Boreham (later Sir John Boreham), Moser's deputy.
The Rayner Review
In 1979, a new government came into office and undertook a review of the CSO and the Government Statistical Service as an early part of its policy of reducing the size of the Civil Service. This review, conducted by Sir Derek Rayner (later Lord Rayner) and known as the Rayner Review, was published in a government white paper in April 1981 and recommended that 'information should not be collected primarily for publication (but) primarily because government needs it for its own business'. The Government accepted this recommendation and, as a result, the CSO was cut by around 25% but continued to produc
|
https://en.wikipedia.org/wiki/Tim%20Holt%20%28statistician%29
|
David Holt CB (29 October 1943 – 15 November 2022) was a British statistician who was Professor Emeritus of Social Statistics at the University of Southampton. He had been the president of the Royal Statistical Society (2005–2007), the last director of the Central Statistical Office of the United Kingdom, and the first director of the Office for National Statistics (and ex-officio Registrar General).
Background
Holt took a maths degree and a PhD in statistics at Exeter, with a thesis titled Some contributions to the statistical analysis of single and mixed exponential distributions (1970). Throughout his career, his main interests were survey methods, sampling theory and official statistics. He took a particular interest, through his membership of the Royal Statistical Society, in the independence of national statistics from government.
Career
Holt's first job was with Statistics Canada, the national statistics office of Canada, where he spent four years before joining the Department of Social Statistics at the University of Southampton in 1980. He was Leverhulme Professor of Social Statistics from 1980 to 1995 and Deputy Vice-Chancellor from 1990 to 1995. From 1989 to 1991, he was also vice-president of the International Association of Survey Statisticians (IASS).
Holt became the Director of the Central Statistical Office and Head of the Government Statistical Service in 1995 and, subsequently, the first Director of the Office for National Statistics when it was formed on 1 April 1996 from the merger of the Central Statistical Office (CSO) and the Office of Population Censuses and Surveys (OPCS). He was President of the Labour Statistics Congress (ILO) in 1997 and vice-chair of the United Nations Statistical Commission from 1998 to 1999.
Holt returned to the Department of Social Statistics at Southampton in 2000, working part-time as Professor of Social Statistics. He carried out consultancy work for the United Nations, the International Monetary Fund and the World Bank and was elected president of the Royal Statistical Society in 2005.
Awards and honours
In 1990 he was elected as a Fellow of the American Statistical Association.
Holt was appointed a Companion of the Order of the Bath (CB) in the 2000 New Year Honours. He was also the 2003 recipient of the Waksberg Prize in survey methodology.
Personal life and death
Holt died on 15 November 2022, at the age of 79.
References
"Professor Tim Holt: Career", Museum of Learning. Retrieved 20 May 2010.
Tim Holt, Julian Champkin (2007). "Tim Holt", Significance, vol 4 issue 2, pp 75–76.
1943 births
2022 deaths
Directors of the Central Statistical Office (United Kingdom)
Registrars-General for England and Wales
Directors of the Office for National Statistics
Presidents of the Royal Statistical Society
Survey methodologists
Academics of the University of Southampton
British social scientists
20th-century British mathematicians
21st-century British mathematicians
Fellows of the American Sta
|
https://en.wikipedia.org/wiki/Tsallis%20statistics
|
The term Tsallis statistics usually refers to the collection of mathematical functions and associated probability distributions that were originated by Constantino Tsallis. Using that collection, it is possible to derive Tsallis distributions from the optimization of the Tsallis entropic form. A continuous real parameter q can be used to adjust the distributions, so that distributions which have properties intermediate to that of Gaussian and Lévy distributions can be created. The parameter q represents the degree of non-extensivity of the distribution. Tsallis statistics are useful for characterising complex, anomalous diffusion.
Tsallis functions
The q-deformed exponential and logarithmic functions were first introduced in Tsallis statistics in 1994. However, the q-deformation is the Box–Cox transformation for $\lambda = 1 - q$, proposed by George Box and David Cox in 1964.
q-exponential
The q-exponential is a deformation of the exponential function using the real parameter q,
$e_q(x) = [1 + (1-q)x]^{\frac{1}{1-q}}$ (where $1 + (1-q)x > 0$), with $e_q(x) = e^x$ recovered in the limit $q \to 1$.
Note that the q-exponential in Tsallis statistics is different from a version used elsewhere.
q-logarithm
The q-logarithm is the inverse of the q-exponential and a deformation of the logarithm using the real parameter q,
$\ln_q(x) = \frac{x^{1-q} - 1}{1-q}$ (for $x > 0$), with $\ln_q(x) = \ln x$ recovered in the limit $q \to 1$.
Inverses
These functions have the property that $e_q(\ln_q x) = x$ and $\ln_q(e_q(x)) = x$ on the domains where both sides are defined.
Analysis
The limits of the above expressions as $q \to 1$ can be understood by considering
$e^x = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^n$
for the exponential function and
$\ln x = \lim_{n \to \infty} n\left(x^{1/n} - 1\right)$
for the logarithm, with $n = 1/(1-q)$.
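A minimal Python sketch of the two deformed functions, assuming the standard expressions given above (with q = 1 handled as the ordinary exponential and logarithm):

```python
import numpy as np

def q_exponential(x, q):
    """[1 + (1-q)x]^(1/(1-q)) where the bracket is positive, 0 otherwise; exp(x) at q = 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    safe = np.where(base > 0, base, 1.0)          # placeholder to avoid invalid powers
    return np.where(base > 0, safe ** (1.0 / (1.0 - q)), 0.0)

def q_logarithm(x, q):
    """(x^(1-q) - 1) / (1-q) for x > 0; log(x) at q = 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

x = np.array([0.5, 1.0, 1.5])
print(q_logarithm(q_exponential(x, q=1.5), q=1.5))   # recovers x on this domain
print(q_exponential(x, q=1.0))                       # ordinary exp(x)
```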
See also
Tsallis entropy
Tsallis distribution
q-Gaussian
q-exponential distribution
q-Weibull distribution
References
S. Abe, A.K. Rajagopal (2003). Letters, Science (11 April 2003), Vol. 300, issue 5617, 249–251.
S. Abe, Y. Okamoto, Eds. (2001) Nonextensive Statistical Mechanics and its Applications. Springer-Verlag.
G. Kaniadakis, M. Lissia, A. Rapisarda, Eds. (2002) "Special Issue on Nonextensive Thermodynamics and Physical Applications." Physica A 305, 1/2.
External links
Tsallis statistics on arxiv.org
Statistical mechanics
|
https://en.wikipedia.org/wiki/FIFA%20World%20Cup%20records%20and%20statistics
|
As of the 2022 FIFA World Cup, 80 national teams have competed at the finals of the FIFA World Cup. Brazil is the only team to have appeared in all 22 tournaments to date, with Germany having participated in 20, Italy and Argentina in 18 and Mexico in 17. Eight nations have won the tournament. The inaugural winners in 1930 were Uruguay; the current champions are Argentina. The most successful nation is Brazil, which has won the cup on five occasions. Five teams have appeared in FIFA World Cup finals without winning, while twelve more have appeared in the semi-finals.
List of tournaments
Overall team records
The system used in the World Cup up to 1990 was 2 points for a win. In this ranking 3 points are awarded for a win, 1 for a draw and 0 for a loss. As per statistical convention in football, matches decided in extra time are counted as wins and losses, while matches decided by penalty shoot-outs are counted as draws. Teams are ranked by total points, then by goal difference, then by goals scored.
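As an illustration of that ranking convention (the team records below are made up for the example), a few lines of Python suffice:

```python
teams = [
    # (name, wins, draws, losses, goals_for, goals_against) -- invented records
    ("Team A", 10, 3, 2, 30, 12),
    ("Team B", 10, 3, 2, 28, 12),
    ("Team C", 11, 0, 4, 30, 15),
]

def ranking_key(record):
    name, w, d, l, gf, ga = record
    points = 3 * w + d            # 3 points per win, 1 per draw, 0 per loss
    return (points, gf - ga, gf)  # points, then goal difference, then goals scored

for team in sorted(teams, key=ranking_key, reverse=True):
    print(team[0])
```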
Breakdown of successor team records
Finals records by team
Teams statistics
Note: In case there are teams with equal quantities, they will be mentioned in chronological order of tournament history (the teams that attained the quantity first, are listed first). If the quantity was attained by more than one team in the same tournament, the teams will be listed alphabetically.
For a detailed list of top four appearances, see FIFA World Cup results.
Most titles
Brazil – 5 (1958, 1962, 1970, 1994, 2002)
Most finishes in the top two
Germany/West Germany – 8 (1954, 1966, 1974, 1982, 1986 and 1990 as West Germany, 2002 and 2014 as Germany)
Most second-place finishes
Germany/West Germany – 4 (1966, 1982, 1986 as West Germany, 2002 as Germany)
Most World Cup appearances
Brazil – 22 (every tournament)
Most consecutive championships
Italy – 2 (1934–1938)
Brazil – 2 (1958–1962)
Most consecutive finishes in the top two
West Germany – 3 (1982–1990)
Brazil – 3 (1994–2002)
Longest gap between successive titles
Italy – 44 years (nine editions, 1938–1982)
Longest gap between successive appearances in the top two
Argentina – 48 years (10 editions, 1930–1978)
Longest gap between successive appearances at the FIFA World Cup
Wales – 64 years (16 editions, 1958–2022)
Most consecutive failed qualification attempts
Luxembourg – 21 (all 1934–2022)
Worst finish by defending champions
Group stage – Italy (1950)
Group stage – Brazil (1966)
Group stage – France (2002)
Group stage – Italy (2010)
Group stage – Spain (2014)
Group stage – Germany (2018)
Players
Most appearances
Players in bold text are still active with their national team as of the 2022 FIFA World Cup.
Most championships
Pelé – 3 (Brazil; 1958, 1962 and 1970)
Most appearances in a World Cup final
Cafu – 3 (Brazil; 1994, 1998, 2002)
Youngest player
Norman Whiteside – 17 years, 41 days (for Northern Ireland vs. Yugoslavia, 17 June 1982)
Youngest player in a final
Pelé – 17 years, 249 days (for Brazil vs. Sweden, 29 June 1958)
Oldest player
Essam El-Hadary – 45 years, 161 days (for Egypt vs. Saudi Arabia, 25 June 2018)
Oldest player in a final
Dino Zoff – 40 years, 133 days (for Italy vs. West Germany, 11 July 1982)
Goalscoring
Individual
Top goalscorers
Pl
|
https://en.wikipedia.org/wiki/Herzog%E2%80%93Sch%C3%B6nheim%20conjecture
|
In mathematics, the Herzog–Schönheim conjecture is a combinatorial problem in the area of group theory, posed by Marcel Herzog and Jochanan Schönheim in 1974.
Let $G$ be a group, and let
$\mathcal{A} = \{a_1G_1, \ldots, a_kG_k\}$
be a finite system of left cosets of subgroups $G_1, \ldots, G_k$ of $G$.
Herzog and Schönheim conjectured that if $\mathcal{A}$ forms a partition of $G$ with $k > 1$, then the (finite) indices $[G:G_1], \ldots, [G:G_k]$ cannot be distinct. In contrast, if repeated indices are allowed, then partitioning a group into cosets is easy: if $H$ is any subgroup of $G$ with index $k$ then $G$ can be partitioned into $k$ left cosets of $H$.
Subnormal subgroups
In 2004, Zhi-Wei Sun proved an extended version of the Herzog–Schönheim conjecture in the case where $G_1, \ldots, G_k$ are subnormal in $G$. A basic lemma in Sun's proof states that if $G_1, \ldots, G_k$ are subnormal and of finite index in $G$, then
$\left[G : \bigcap_{i=1}^k G_i\right]$ divides $\prod_{i=1}^k [G : G_i]$
and hence
$P\!\left(\left[G : \bigcap_{i=1}^k G_i\right]\right) = \bigcup_{i=1}^k P([G : G_i])$,
where $P(n)$ denotes the set of prime divisors of $n$.
Mirsky–Newman theorem
When is the additive group of integers, the cosets of are the arithmetic progressions.
In this case, the Herzog–Schönheim conjecture states that every covering system, a family of arithmetic progressions that together cover all the integers, must either cover some integers more than once or include at least one pair of progressions that have the same difference as each other. This result was conjectured in 1950 by Paul Erdős and proved soon thereafter by Leon Mirsky and Donald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently by Harold Davenport and Richard Rado.
In 1970, a geometric coloring problem equivalent to the Mirsky–Newman theorem was given in the Soviet mathematical olympiad: suppose that the vertices of a regular polygon are colored in such a way that every color class itself forms the vertices of a regular polygon. Then, there exist two color classes that form congruent polygons.
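For a concrete check of the integer case, the short Python sketch below (an illustration, not from the references; the helper names are invented) verifies that a finite set of residue classes partitions the integers, and then inspects whether its moduli repeat, as the Mirsky–Newman theorem requires for any such exact cover:

```python
from math import lcm
from functools import reduce

def is_exact_cover(progressions):
    """True if the residue classes (a mod d) partition the integers.

    Each progression is a pair (a, d); by periodicity it suffices to check one
    full period whose length is the lcm of the moduli."""
    period = reduce(lcm, (d for _, d in progressions))
    counts = [0] * period
    for a, d in progressions:
        for n in range(a % d, period, d):
            counts[n] += 1
    return all(c == 1 for c in counts)

# An exact cover of the integers: evens, 1 mod 4, and 3 mod 4.
partition = [(0, 2), (1, 4), (3, 4)]
print(is_exact_cover(partition))                          # True
print(len({d for _, d in partition}) == len(partition))   # False: the modulus 4 repeats
```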
References
Combinatorial group theory
Conjectures
Unsolved problems in mathematics
|
https://en.wikipedia.org/wiki/Vertex%20configuration
|
In geometry, a vertex configuration is a shorthand notation for representing the vertex figure of a polyhedron or tiling as the sequence of faces around a vertex. For uniform polyhedra there is only one vertex type and therefore the vertex configuration fully defines the polyhedron. (Chiral polyhedra exist in mirror-image pairs with the same vertex configuration.)
A vertex configuration is given as a sequence of numbers representing the number of sides of the faces going around the vertex. The notation "a.b.c" describes a vertex that has 3 faces around it, faces with a, b, and c sides.
For example, "" indicates a vertex belonging to 4 faces, alternating triangles and pentagons. This vertex configuration defines the vertex-transitive icosidodecahedron. The notation is cyclic and therefore is equivalent with different starting points, so is the same as The order is important, so is different from (the first has two triangles followed by two pentagons). Repeated elements can be collected as exponents so this example is also represented as .
It has variously been called a vertex description, vertex type, vertex symbol, vertex arrangement, vertex pattern, face-vector. It is also called a Cundy and Rollett symbol for its usage for the Archimedean solids in their 1952 book Mathematical Models.
Vertex figures
A vertex configuration can also be represented as a polygonal vertex figure showing the faces around the vertex. This vertex figure has a 3-dimensional structure since the faces are not in the same plane for polyhedra, but for vertex-uniform polyhedra all the neighboring vertices are in the same plane and so this plane projection can be used to visually represent the vertex configuration.
Variations and uses
Different notations are used, sometimes with a comma (,) and sometimes a period (.) separator. The period operator is useful because it looks like a product and an exponent notation can be used. For example, 3.5.3.5 is sometimes written as (3.5)2.
The notation can also be considered an expansive form of the simple Schläfli symbol for regular polyhedra. The Schläfli notation {p,q} means q p-gons around each vertex. So {p,q} can be written as p.p.p... (q times) or pq. For example, an icosahedron is {3,5} = 3.3.3.3.3 or 35.
This notation applies to polygonal tilings as well as polyhedra. A planar vertex configuration denotes a uniform tiling just like a nonplanar vertex configuration denotes a uniform polyhedron.
The notation is ambiguous for chiral forms. For example, the snub cube has clockwise and counterclockwise forms which are identical across mirror images. Both have a 3.3.3.3.4 vertex configuration.
Star polygons
The notation also applies for nonconvex regular faces, the star polygons. For example, a pentagram has the symbol {5/2}, meaning it has 5 sides going around the centre twice.
For example, there are 4 regular star polyhedra with regular polygon or star polygon vertex figures. The small stellated dodecahedron has the Schlä
|
https://en.wikipedia.org/wiki/Kruskal%27s%20tree%20theorem
|
In mathematics, Kruskal's tree theorem states that the set of finite trees over a well-quasi-ordered set of labels is itself well-quasi-ordered under homeomorphic embedding.
History
The theorem was conjectured by Andrew Vázsonyi and proved by Joseph Kruskal in 1960; a short proof was given by Crispin Nash-Williams in 1963. It has since become a prominent example in reverse mathematics as a statement that cannot be proved in ATR0 (a second-order arithmetic theory with a form of arithmetical transfinite recursion).
In 2004, the result was generalized from trees to graphs as the Robertson–Seymour theorem, a result that has also proved important in reverse mathematics and leads to the even-faster-growing SSCG function which dwarfs TREE(3). A finitary application of the theorem gives the existence of the fast-growing TREE function.
Statement
The version given here is that proven by Nash-Williams; Kruskal's formulation is somewhat stronger. All trees we consider are finite.
Given a tree $T$ with a root, and given vertices $v$, $w$, call $w$ a successor of $v$ if the unique path from the root to $w$ contains $v$, and call $w$ an immediate successor of $v$ if additionally the path from $v$ to $w$ contains no other vertex.
Take $X$ to be a partially ordered set. If $T_1$, $T_2$ are rooted trees with vertices labeled in $X$, we say that $T_1$ is inf-embeddable in $T_2$ and write $T_1 \le T_2$ if there is an injective map $F$ from the vertices of $T_1$ to the vertices of $T_2$ such that
For all vertices $v$ of $T_1$, the label of $v$ precedes the label of $F(v)$,
If $w$ is any successor of $v$ in $T_1$, then $F(w)$ is a successor of $F(v)$, and
If $w_1$, $w_2$ are any two distinct immediate successors of $v$, then the path from $F(w_1)$ to $F(w_2)$ in $T_2$ contains $F(v)$.
Kruskal's tree theorem then states: If $X$ is well-quasi-ordered, then the set of rooted trees with labels in $X$ is well-quasi-ordered under the inf-embeddable order defined above. (That is to say, given any infinite sequence $T_1, T_2, \ldots$ of rooted trees labeled in $X$, there is some $i < j$ so that $T_i \le T_j$.)
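For very small trees the inf-embeddability relation can be checked by brute force, directly from the three conditions above. The Python sketch below is an illustration only; the parent-pointer tree encoding and the helper names are assumptions of this example.

```python
from itertools import permutations

class Tree:
    """Rooted tree: parent[i] is the parent of vertex i, with parent[root] = -1."""
    def __init__(self, parent, label):
        self.parent, self.label, self.n = parent, label, len(parent)

    def ancestors(self, v):            # path from v up to the root, inclusive
        path = [v]
        while self.parent[path[-1]] != -1:
            path.append(self.parent[path[-1]])
        return path

    def on_root_path(self, v, w):      # is v on the unique path from the root to w?
        return v in self.ancestors(w)

    def path(self, u, w):              # vertices on the unique path from u to w
        au, aw = self.ancestors(u), self.ancestors(w)
        lca = next(x for x in au if x in aw)
        return au[:au.index(lca) + 1] + aw[:aw.index(lca)]

    def children(self, v):
        return [w for w in range(self.n) if self.parent[w] == v]

def inf_embeddable(t1, t2, label_leq):
    """Brute-force test of T1 <= T2 using the three conditions of inf-embedding."""
    for image in permutations(range(t2.n), t1.n):
        F = dict(enumerate(image))     # injective map vertices(T1) -> vertices(T2)
        if not all(label_leq(t1.label[v], t2.label[F[v]]) for v in range(t1.n)):
            continue
        if not all(t2.on_root_path(F[v], F[w])
                   for w in range(t1.n) for v in range(t1.n)
                   if t1.on_root_path(v, w)):
            continue
        if all(F[v] in t2.path(F[w1], F[w2])
               for v in range(t1.n)
               for w1 in t1.children(v) for w2 in t1.children(v) if w1 != w2):
            return True
    return False

# unlabeled examples (all labels equal)
path2 = Tree(parent=[-1, 0],    label=[0, 0])
star3 = Tree(parent=[-1, 0, 0], label=[0, 0, 0])
path3 = Tree(parent=[-1, 0, 1], label=[0, 0, 0])
leq = lambda a, b: a <= b
print(inf_embeddable(path2, star3, leq))   # True: an edge embeds below the root
print(inf_embeddable(star3, path3, leq))   # False: the images of the two leaves
                                           # would need the root's image between them
```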
Friedman's work
For a countable label set $X$, Kruskal's tree theorem can be expressed and proven using second-order arithmetic. However, like Goodstein's theorem or the Paris–Harrington theorem, some special cases and variants of the theorem can be expressed in subsystems of second-order arithmetic much weaker than the subsystems where they can be proved. This was first observed by Harvey Friedman in the early 1980s, an early success of the then-nascent field of reverse mathematics. In the case where the trees above are taken to be unlabeled (that is, in the case where $X$ has order one), Friedman found that the result was unprovable in ATR0, thus giving the first example of a predicative result with a provably impredicative proof. This case of the theorem is still provable by $\Pi^1_1$-CA0, but by adding a "gap condition" to the definition of the order on trees above, he found a natural variation of the theorem unprovable in this system. Much later, the Robertson–Seymour theorem would give another theorem unprovable by $\Pi^1_1$-CA0.
Ordinal analysis confirms the strength of Kruskal's theorem, with the proof-theoretic ordinal of the theo
|
https://en.wikipedia.org/wiki/Fano%27s%20inequality
|
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.
It is used to find a lower bound on the error probability of any decoder as well as the lower bounds for minimax risks in density estimation.
Let the random variables $X$ and $Y$ represent input and output messages with a joint probability $P(x, y)$. Let $e$ represent an occurrence of error; i.e., that $X \neq \tilde{X}$, with $\tilde{X} = f(Y)$ being an approximate version of $X$. Fano's inequality is
$H(X \mid Y) \leq H_b(e) + P(e)\,\log\bigl(|\mathcal{X}| - 1\bigr),$
where $\mathcal{X}$ denotes the support of $X$,
$H(X \mid Y)$ is the conditional entropy,
$P(e) = P(X \neq \tilde{X})$ is the probability of the communication error, and
$H_b(e) = -P(e)\log P(e) - (1 - P(e))\log(1 - P(e))$ is the corresponding binary entropy.
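A small numerical check of the inequality can be run in a few lines of Python. This is an illustrative sketch, not from the textbook source: the joint distribution is drawn at random, and the maximum a posteriori estimator is chosen here only for the demonstration (the inequality holds for any estimator that is a function of Y).

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def check_fano(p_xy):
    """Compare H(X|Y) with H_b(Pe) + Pe*log2(|X|-1) for the MAP estimator of X from Y."""
    nx, ny = p_xy.shape
    p_y = p_xy.sum(axis=0)

    h_x_given_y = entropy(p_xy.ravel()) - entropy(p_y)     # H(X|Y) = H(X,Y) - H(Y)

    correct = sum(p_xy[:, y].max() for y in range(ny))     # MAP success probability
    pe = 1.0 - correct

    h_b = entropy(np.array([pe, 1 - pe]))                  # binary entropy of the error event
    bound = h_b + pe * np.log2(nx - 1)
    print(f"H(X|Y) = {h_x_given_y:.4f}  <=  {bound:.4f}")

rng = np.random.default_rng(1)
p = rng.random((5, 4))
p /= p.sum()                # a random joint distribution on X x Y
check_fano(p)
```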
Proof
Define an indicator random variable $E$ that indicates the event that our estimate $\tilde{X} = f(Y)$ is in error:
$E = 1$ if $\tilde{X} \neq X$, and $E = 0$ if $\tilde{X} = X$.
Consider $H(E, X \mid \tilde{X})$. We can use the chain rule for entropies to expand this in two different ways:
$H(E, X \mid \tilde{X}) = H(X \mid \tilde{X}) + H(E \mid X, \tilde{X}) = H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}),$
where $H(E \mid X, \tilde{X}) = 0$ since $E$ is a function of $X$ and $\tilde{X}$. Equating the two,
$H(X \mid \tilde{X}) = H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}).$
Expanding the rightmost term,
$H(X \mid E, \tilde{X}) = P(E = 0)\,H(X \mid \tilde{X}, E = 0) + P(E = 1)\,H(X \mid \tilde{X}, E = 1).$
Since $E = 0$ means $X = \tilde{X}$, being given the value of $\tilde{X}$ allows us to know the value of $X$ with certainty. This makes the term $H(X \mid \tilde{X}, E = 0) = 0$.
On the other hand, $E = 1$ means that $X \neq \tilde{X}$, hence given the value of $\tilde{X}$, we can narrow $X$ down to one of $|\mathcal{X}| - 1$ different values, allowing us to upper bound the conditional entropy: $H(X \mid \tilde{X}, E = 1) \leq \log(|\mathcal{X}| - 1)$. Hence
$H(X \mid E, \tilde{X}) \leq P(E = 1)\,\log(|\mathcal{X}| - 1) = P(e)\,\log(|\mathcal{X}| - 1).$
The other term satisfies $H(E \mid \tilde{X}) \leq H(E)$, because conditioning reduces entropy. Because of the way $E$ is defined, $H(E) = H_b(e)$, meaning that $H(E \mid \tilde{X}) \leq H_b(e)$. Putting it all together,
$H(X \mid \tilde{X}) \leq H_b(e) + P(e)\,\log(|\mathcal{X}| - 1).$
Because $X \to Y \to \tilde{X}$ is a Markov chain, we have $I(X; \tilde{X}) \leq I(X; Y)$ by the data processing inequality, and hence $H(X \mid \tilde{X}) \geq H(X \mid Y)$, giving us
$H(X \mid Y) \leq H_b(e) + P(e)\,\log(|\mathcal{X}| - 1).$
Intuition
Fano's inequality can be interpreted as a way of dividing the uncertainty of a conditional distribution into two questions given an arbitrary predictor. The first question, corresponding to the term $H_b(e)$, relates to the uncertainty of the predictor. If the prediction is correct, there is no more uncertainty remaining. If the prediction is incorrect, the uncertainty of any discrete distribution has an upper bound of the entropy of the uniform distribution over all choices besides the incorrect prediction. This has entropy $\log(|\mathcal{X}| - 1)$. Looking at extreme cases, if the predictor is always correct the first and second terms of the inequality are 0, and the existence of a perfect predictor implies $X$ is totally determined by $Y$, and so $H(X \mid Y) = 0$. If the predictor is always wrong, then the first term is 0, and $H(X \mid Y)$ can only be upper bounded by the entropy of a uniform distribution over the remaining choices, $\log(|\mathcal{X}| - 1)$.
Alternative formulation
Let be a random variable with density equal to one of possible densities . Furthermore, the Kullback–Leibler divergence between any pair of densities cannot be too large,
for all
Let be an estimate of the index. Then
where is the probability induced by
Generalization
The following generalization is due to Ibragimov and Khasminskii (1979), Assouad and Birge (1983).
Let F be a class of densities with a subclass of r + 1 densities ƒθ such that for any θ ≠ θ′
Then in the worst c
|
https://en.wikipedia.org/wiki/Generalized%20additive%20model
|
In statistics, a generalized additive model (GAM) is a generalized linear model in which the linear response variable depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.
GAMs were originally developed by Trevor Hastie and Robert Tibshirani to blend properties of generalized linear models with additive models. They can be interpreted as the discriminative generalization of the naive Bayes generative model.
The model relates a univariate response variable, Y, to some predictor variables, xi. An exponential family distribution is specified for Y (for example normal, binomial or Poisson distributions) along with a link function g (for example the identity or log functions) relating the expected value of Y to the predictor variables via a structure such as
g(E(Y)) = β0 + f1(x1) + f2(x2) + ... + fm(xm).
The functions fi may be functions with a specified parametric form (for example a polynomial, or an un-penalized regression spline of a variable) or may be specified non-parametrically, or semi-parametrically, simply as 'smooth functions', to be estimated by non-parametric means. So a typical GAM might use a scatterplot smoothing function, such as a locally weighted mean, for f1(x1), and then use a factor model for f2(x2). This flexibility to allow non-parametric fits with relaxed assumptions on the actual relationship between response and predictor provides the potential for better fits to data than purely parametric models, but arguably with some loss of interpretability.
Theoretical background
It had been known since the 1950s (via the Kolmogorov–Arnold representation theorem) that any multivariate continuous function could be represented as sums and compositions of univariate functions.
Unfortunately, though the Kolmogorov–Arnold representation theorem asserts the existence of a function of this form, it gives no mechanism whereby one could be constructed. Certain constructive proofs exist, but they tend to require highly complicated (i.e. fractal) functions, and thus are not suitable for modeling approaches. Therefore, the generalized additive model drops the outer sum, and demands instead that the function belong to a simpler class,
f(x) = Φ(f1(x1) + f2(x2) + ... + fm(xm)),
where Φ is a smooth monotonic function. Writing g for the inverse of Φ, this is traditionally written as
g(f(x)) = f1(x1) + f2(x2) + ... + fm(xm).
When this function is approximating the expectation of some observed quantity, it could be written as
g(E(Y)) = f1(x1) + f2(x2) + ... + fm(xm),
which is the standard formulation of a generalized additive model. It was then shown that the backfitting algorithm will always converge for these functions.
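To make the additive structure and the backfitting idea concrete, here is a minimal sketch for the identity-link case using a crude running-mean smoother; the smoother, the synthetic data and all names are illustrative assumptions rather than the estimators used by GAM software.

```python
import numpy as np

def running_mean_smoother(x, r, window=15):
    """Smooth residuals r against a single covariate x with a crude moving average."""
    order = np.argsort(x)
    r_sorted = r[order]
    smoothed = np.empty_like(r)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        smoothed[order[i]] = r_sorted[lo:hi].mean()
    return smoothed

def backfit(X, y, n_iter=20):
    """Fit y ~ alpha + f_1(x_1) + ... + f_p(x_p) by backfitting."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove everything except the j-th component.
            partial_residual = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = running_mean_smoother(X[:, j], partial_residual)
            f[:, j] -= f[:, j].mean()  # centre each component for identifiability
    return alpha, f

# Illustrative use on synthetic data: y = sin(x1) + 0.5 * x2**2 + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)
alpha, f = backfit(X, y)
```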
Generality
The GAM model class is quite broad, given that smooth function is a rather broad category. For example, a covariate xj may be multivariate and the corresponding fj a smooth function of several variables, or fj might be the function mapping the level of a factor to the value of a random effect. Another example is a varying coefficient (geographic regression) term such as zj fj(xj) where zj and xj are both cov
|
https://en.wikipedia.org/wiki/Joe%20Harris%20%28mathematician%29
|
Joseph Daniel Harris (born August 17, 1951) is a mathematician at Harvard University working in the field of algebraic geometry. After earning an AB from Harvard College, where he took Math 55, he continued at Harvard to study for a PhD under Phillip Griffiths.
Work
During the 1980s, he was on the faculty of Brown University, moving to Harvard around 1988. He served as chair of the department at Harvard from 2002 to 2005. His work is characterized by its classical geometric flavor: he has claimed that nothing he thinks about could not have been imagined by the Italian geometers of the late 19th and early 20th centuries, and that if he has had greater success than them, it is because he has access to better tools.
Harris is well known for several of his books on algebraic geometry, notable for their informal presentations:
Principles of Algebraic Geometry , with Phillip Griffiths
Geometry of Algebraic Curves, Vol. 1 , with Enrico Arbarello, Maurizio Cornalba, and Phillip Griffiths
, with William Fulton
, with David Eisenbud
Moduli of Curves , with Ian Morrison.
Fat Chance: Probability from 0 to 1, with Benedict Gross and Emily Riehl, 2019
As of 2018, Harris has supervised 50 PhD students, including Brendan Hassett, James McKernan, Rahul Pandharipande, Zvezdelina Stankova, and Ravi Vakil.
References
1951 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Algebraic geometers
Brown University faculty
Central High School (Philadelphia) alumni
Harvard College alumni
Harvard University Department of Mathematics faculty
Harvard University faculty
Harvard Graduate School of Arts and Sciences alumni
|
https://en.wikipedia.org/wiki/LOF
|
LOF may refer to:
In acronyms and codes:
Lack-of-fit test (disambiguation), a concept in statistics
Libbey–Owens–Ford, an automotive and building glass manufacturer
Lloyd's Open Form: a type of salvage agreement offered by Lloyd's of London
Local Outlier Factor, an anomaly detection algorithm
Lok Fu station, Hong Kong (MTR station code)
London & Overseas Freighters, a defunct UK merchant shipping company
London Fields railway station, England (National Rail station code)
Trans States Airlines (ICAO designator)
Leftöver Crack, a NYC crust punk band
In other uses:
Lof, a Chilean ethnic group
Löf, a municipality in Germany
|
https://en.wikipedia.org/wiki/Invader%20potential
|
Ecologically, invader potential is the qualitative and quantitative measure of the probability that a given invasive species will invade a given ecosystem. This is often assessed through climate matching. There are many reasons why a species may invade a new area. The term invader potential may also be used interchangeably with invasiveness. Invader potential is a large threat to global biodiversity: it has been shown that the introduction of species to areas where they are not native causes a loss of ecosystem function.
Invaders are species that, through biomass, abundance, and strong interactions with natives, have significantly altered the structure and composition of the established community. This differs greatly from the term "introduced", which merely refers to species that have been introduced to an environment, disregarding whether or not they have established successfully. They are simply organisms that have been accidentally, or deliberately, placed into an unfamiliar area. Many times, in fact, introduced species do not have a strong impact on the new habitat, either because the newcomers are not abundant or because they are small and unobtrusive.
Understanding the mechanisms of invader potential is important for understanding why species relocate and for predicting future invasions. There are three predicted reasons why species invade an area: adaptation to the physical environment, resource competition and/or utilization, and enemy release. Some of these reasons are relatively simple to understand. For example, species may adapt to a new physical environment through great phenotypic plasticity and environmental tolerance; species with high levels of both find it easier to adapt to new environments. In terms of resources, those with low resource requirements thrive in unfamiliar areas more readily than those with complex resource needs. This is shown directly through Tilman's R* rule: those with fewer needs can competitively exclude those with more complex needs and take over an area. Finally, species with a high reproduction rate and low defense against natural enemies have a better chance of invading other areas. All of these are reasons why species may thrive in places they are not native to, owing to desirable flexibility in their needs.
Climate matching
Climate matching is a technique used to identify extralimital destinations that an invasive species may overtake, based on their similarity to the species' previous native range. Species are more likely to invade areas that match their origin, for ease of use and abundance of resources. Climate matching assesses the invasion risk and heavily prioritizes destination-specific action.
Boiga irregularis, the brown tree snake, is a great example of a species that climate matches. This species is native to northern and eastern Australia, eastern Indonesia, Papua New Guinea and most of the Solom
|
https://en.wikipedia.org/wiki/Russian%20School%20of%20Mathematics
|
The Russian School of Mathematics (RSM) is an after-school program that provides mathematics education to children attending K–12 public and private schools. The school provides children with the opportunity to advance in mathematics beyond the traditional school curriculum. The founder of RSM is Inessa Rifkin and the co-founder is Irene Khavinson.
The focus of RSM is primary school mathematics. The high school level classes offer preparation for standardized tests such as the SAT, SAT II, and AP exams. Each class usually involves intensive reinforcement of topics using many examples and exercises. Accompanied by classwork, all students are given homework to reinforce what they have learned.
History
According to the official website, Inessa Rifkin (born in Minsk, Belarus) and Irina Khavinson (born in Chernigov, Ukraine) left the USSR in search of a better life for their children. Together, they created a math education program based on quality and depth. According to the website, they "Sparked a Movement." In 1997, the first class was held at Ms. Rifkin's kitchen table (other accounts say her living room) outside of Boston, Massachusetts. In October 1999, their first dedicated school building was established in a Boston-area suburb. They currently have approximately 40,000 students.
Locations
The after-school mathematics program was originally established in Boston, inside Inessa Rifkin's living room. Since then, the school has expanded to include more than 50 schools in the US and Canada, as well as online programs. RSM also runs an overnight camp in Sunapee, New Hampshire.
Controversy
On March 20, 2022 the cofounder Inessa Rifkin posted in the private RSM Summer Camp Facebook page stating:
Ukraine had a choice of surrender peacefully with min human casualties and min property loss to Putin. President Zelensky made a choice to fight back. He is risking not only his own life but by now thousands of civilians already lost their lives, among them a lot of children. On top of it almost 3,000,000 people become refugees. Theoretically one could argue that President Zelensky also, not only Putin, is responsible for human suffering. Was he right? Did he have a right to decide for other people’s children?
Rifkin has since elaborated on the statement on her Facebook page, saying that the question was intentionally written in a provocative way not to express a viewpoint but to spark debate within the camp, and that it is similar to other questions about provocative topics, such as Israel and the Holocaust, that have been asked in the past.
References
External links
"A Russian solution to U.S. problem" (Boston Globe article)
Private schools in Massachusetts
Schools in Middlesex County, Massachusetts
Russian-American culture in Massachusetts
Educational institutions established in 1997
1997 establishments in Massachusetts
|
https://en.wikipedia.org/wiki/Hierarchy%20%28mathematics%29
|
In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements.
Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where n ≤ m whenever we can find some other number k so that n + k = m. That is, m is bigger than n only because we can get to m from n using k. This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation n + k = m by writing k = m − n.
A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory.
Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies.
Related terminology
Individual elements of a hierarchy are often called levels and a hierarchy is said to be infinite if it has infinitely many distinct levels but said to collapse if it has only finitely many distinct levels.
Example
In theoretical computer science, the time hierarchy is a classification of decision problems according to the amount of time required to solve them.
See also
Order theory
Nested set collection
Tree structure
Lattice
Polynomial hierarchy
Chomsky hierarchy
Analytical hierarchy
Arithmetical hierarchy
Hyperarithmetical hierarchy
Abstract algebraic hierarchy
Borel hierarchy
Wadge hierarchy
Difference hierarchy
Tree (data structure)
Tree (graph theory)
Tree network
Tree (descriptive set theory)
Tree (set theory)
References
Hierarchy
Set theory
|
https://en.wikipedia.org/wiki/Seminormal%20subgroup
|
In mathematics, in the field of group theory, a subgroup A of a group G is termed seminormal if there is a subgroup B such that AB = G, and for any proper subgroup C of B, AC is a proper subgroup of G.
This definition of seminormal subgroups is due to Xiang Ying Su.
Every normal subgroup is seminormal. For finite groups, every quasinormal subgroup is seminormal.
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Tube%20lemma
|
In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact.
Statement
The lemma uses the following terminology:
If X and Y are topological spaces and X × Y is the product space, endowed with the product topology, a slice in X × Y is a set of the form {x} × Y for x ∈ X.
A tube in X × Y is a subset of the form U × Y, where U is an open subset of X. It contains all the slices {x} × Y for x ∈ U.
The tube lemma states: if Y is compact and N is an open set in X × Y containing a slice {x0} × Y, then there exists a tube U × Y containing that slice and contained in N.
Using the concept of closed maps, this can be rephrased concisely as follows: if X is any topological space and Y a compact space, then the projection map X × Y → X is closed.
Examples and properties
1. Consider ℝ × ℝ in the product topology, that is the Euclidean plane, and the open set N = { (x, y) : |xy| < 1 }. The open set N contains the slice {0} × ℝ but contains no tube, so in this case the tube lemma fails. Indeed, if W × ℝ is a tube containing {0} × ℝ and contained in N, then W must be a subset of (−1/y, 1/y) for all y > 0, which means W = {0}, contradicting the fact that W is open in ℝ (because W × ℝ is a tube). This shows that the compactness assumption is essential.
2. The tube lemma can be used to prove that if and are compact spaces, then is compact as follows:
Let A be an open cover of X × Y. For each x ∈ X, cover the slice {x} × Y by finitely many elements of A (this is possible since {x} × Y is compact, being homeomorphic to Y).
Call the union of these finitely many elements Nx.
By the tube lemma, there is an open set of the form Wx × Y containing {x} × Y and contained in Nx.
The collection of all Wx for x ∈ X is an open cover of X and hence has a finite subcover Wx1, ..., Wxn. Thus the finite collection Wx1 × Y, ..., Wxn × Y covers X × Y.
Using the fact that each Wxi × Y is contained in Nxi and each Nxi is the finite union of elements of A, one gets a finite subcollection of A that covers X × Y; a symbolic summary of this argument is given after this list.
3. By part 2 and induction, one can show that the finite product of compact spaces is compact.
4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products.
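A compact symbolic restatement of the argument in item 2 (a sketch in the notation introduced above, assuming both X and Y are compact):

```latex
% Symbolic sketch of item 2 (X and Y compact).
\begin{aligned}
&\text{Let } \mathcal{A} \text{ be an open cover of } X \times Y.\\
&\text{For each } x \in X:\quad \{x\}\times Y \;\subseteq\; A^x_1 \cup \dots \cup A^x_{n_x} =: N_x,
  \qquad A^x_i \in \mathcal{A} \quad (\text{compactness of } Y).\\
&\text{Tube lemma: there is an open } W_x \ni x \text{ in } X \text{ with } W_x \times Y \subseteq N_x.\\
&\text{Compactness of } X:\quad X = W_{x_1} \cup \dots \cup W_{x_k}
  \;\Longrightarrow\; X \times Y = \bigcup_{i=1}^{k} W_{x_i} \times Y \subseteq \bigcup_{i=1}^{k} N_{x_i},\\
&\text{so the finitely many } A^{x_i}_j \text{ already cover } X \times Y.
\end{aligned}
```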
Proof
The tube lemma follows from the generalized tube lemma by taking and
It therefore suffices to prove the generalized tube lemma.
By the definition of the product topology, for each there are open sets and such that
For any is an open cover of the compact set so this cover has a finite subcover; namely, there is a finite set such that contains where observe that is open in
For every let which is an open in set since is finite.
Moreover, the construction of and implies that
We now essentially repeat the argument to drop the dependence on
Let be a finite subset such that contains and set
It then follows by the above reasoning that and and are open, which completes the proof.
See also
References
(See Chapter 8, Lemma 8.9)
Topology
Lemmas
Articles containing proofs
|
https://en.wikipedia.org/wiki/Oblique%20wing
|
An oblique wing (also called a slewed wing) is a variable geometry wing concept. On an aircraft so equipped, the wing is designed to rotate on center pivot, so that one tip is swept forward while the opposite tip is swept aft. By changing its sweep angle in this way, drag can be reduced at high speed (with the wing swept) without sacrificing low speed performance (with the wing perpendicular). This is a variation on the classic swing-wing design, intended to simplify construction and retain the center of gravity as the sweep angle is changed.
History
The oldest examples of this technology are the unrealized German aircraft projects Blohm & Voss P.202 and Messerschmitt Me P.1009-01 from the year 1944, based on a Messerschmitt patent. After the war, constructor Dr. Richard Vogt was brought to the US during Operation Paperclip.
The oblique wing concept was resurrected by Robert T. Jones, an aeronautical engineer at the NASA Ames Research Center, Moffett Field, California. Analytical and wind tunnel studies initiated by Jones at Ames indicated that a transport-size oblique-wing aircraft, flying at speeds up to Mach 1.4 (1.4 times the speed of sound), would have substantially better aerodynamic performance than aircraft with more conventional wings.
In the 1970s, an unmanned propeller-driven aircraft was constructed and tested at Moffett Field. Known as the NASA Oblique Wing, the project pointed out a craft's unpleasant characteristics at large sweep angles.
So far, only one manned aircraft, the NASA AD-1, has been built to explore this concept. It flew a series of flight tests starting in 1979. This aircraft demonstrated a number of serious roll-coupling modes and further experimentation ended.
Theory
The general idea is to design an aircraft that performs with high efficiency as the Mach number increases from takeoff to cruise conditions (M ~ 0.8, for a commercial aircraft). Since two different types of drag dominate in each of these two flight regimes, uniting high performance designs for each regime into a single airframe is problematic.
At low Mach numbers induced drag dominates drag concerns. Airplanes during takeoff and gliders are most concerned with induced drag. One way to reduce induced drag is to increase the effective wingspan of the lifting surface. This is why gliders have such long, narrow wings. An ideal wing has infinite span and induced drag is reduced to a two–dimensional property. At lower speeds, during takeoffs and landings, an oblique wing would be positioned perpendicular to the fuselage like a conventional wing to provide maximum lift and control qualities. As the aircraft gained speed, the wing would be pivoted to increase the oblique angle, thereby reducing the drag due to wetted area, and decreasing fuel consumption.
Alternatively, at Mach numbers increasing towards the speed of sound and beyond, wave drag dominates design concerns. As the aircraft displaces the air, a sonic wave is generated. Sweeping the wings away
|
https://en.wikipedia.org/wiki/Ume%C3%A5%20School%20of%20Business
|
The Umeå School of Business, Economics and Statistics, USBE, or Handelshögskolan vid Umeå Universitet, is the business school of Umeå University in the north of Sweden, founded in 1989 "to strengthen education in research and business while contributing to the community". About 2000 students currently study at USBE. The School offers one Bachelor program, four undergraduate programs (Civilekonomprogram), seven Master's degree programs (including the Erasmus Mundus Master Program in Strategic Project Management) and doctoral programs.
The international atmosphere is important to the business school: it offers one undergraduate program (the International Business Program) and all Master's programs and doctoral programs entirely in English. USBE also accepts many international students as exchange or degree students.
USBE is located at the very heart of the University campus, a meeting-place for all academic disciplines, improving its opportunities to co-operate across traditional academic boundaries. It also gives USBE students an opportunity to take an active part in the student environment created for the 37,000 students at Umeå University.
Organization
Umeå School of Business, Economics and Statistics has three departments: the Department of Business Administration, the Department of Economics and the Department of Statistics.
USBE Career Center
USBE Career Center concentrates primarily on helping its graduates in the transition between graduation and the business world.
Research
Within the Umeå School of Business, Economics and Statistics, the Umeå Research Institute promotes research and awards funding to prospective researchers.
The School also hosts a group dedicated to research on decision-making in extreme environments. It is named Triple Ed (Research Group on Extreme Environments – Everyday Decisions).
Education
Master's programs
Master's Program in Accounting
Master's Program in Finance
Master's Program in Business Development and Internationalization
Master's Program in Management
Master's Program in Marketing
Master's Program in Economics
Master's Program in Statistical Sciences
Masters in Strategic Project Management (European): offered jointly with Heriot-Watt University and Politecnico di Milano Erasmus Mundus
Undergraduate programs
International Business Program (in English)
Business Administration and Economics Program (in Swedish)
Retail and Supply Chain Management Program (in Swedish)
Service Managementprogramet (in Swedish)
Bachelor's Program in Statistics
Notable alumni
Students
Linus Berg – founder and CEO of "Rest & Fly"
Frida Berglund – founder of the popular blog "Husmusen"
Wilhelm Geijer – former CEO and board member of Öhrlings PricewaterhouseCoopers
Christian Hermelin – CEO, Fabege
Leif Lindmark – former rector, Stockholm School of Economics
Agneta Marell – professor of business administration
Henrik P. Molin – author
Göran Carstedt – Leading the global network "Society for Organizational Learning"
|
https://en.wikipedia.org/wiki/MMCC
|
MMCC may refer to:
Margaret Morrison Carnegie College
Mid Michigan Community College
Multinational Medical Coordination Centre, to coordinate e
Mountfitchet Maths and Computing College, a former school in Stansted Mountfitchet, now Forest Hall School
Mobile Multi-Coloured Composite, 2D colour barcode
2200 in Roman numerals
|
https://en.wikipedia.org/wiki/Unit-weighted%20regression
|
In statistics, unit-weighted regression is a simplified and robust version (Wainer & Thissen, 1976) of multiple regression analysis where only the intercept term is estimated. That is, it fits a model
ŷ = b0 + x1 + x2 + ... + xk,
where b0 is the estimated intercept and each of the xi are binary variables, perhaps multiplied by an arbitrary weight.
Contrast this with the more common multiple regression model, where each predictor has its own estimated coefficient:
ŷ = b0 + b1x1 + b2x2 + ... + bkxk.
In the social sciences, unit-weighted regression is sometimes used for binary classification, i.e. to predict a yes-no answer, where y = 0 indicates "no" and y = 1 "yes". It is easier to interpret than multiple linear regression (known as linear discriminant analysis in the classification case).
Unit weights
Unit-weighted regression is a method of robust regression that proceeds in three steps. First, predictors for the outcome of interest are selected; ideally, there should be good empirical or theoretical reasons for the selection. Second, the predictors are converted to a standard form. Finally, the predictors are added together, and this sum is called the variate, which is used as the predictor of the outcome.
Burgess method
The Burgess method was first presented by the sociologist Ernest W. Burgess in a 1928 study to determine success or failure of inmates placed on parole. First, he selected 21 variables believed to be associated with parole success. Next, he converted each predictor to the standard form of zero or one (Burgess, 1928). When predictors had two values, the value associated with the target outcome was coded as one. Burgess selected success on parole as the target outcome, so a predictor such as a history of theft was coded as "yes" = 0 and "no" = 1. These coded values were then added to create a predictor score, so that higher scores predicted a better chance of success. The scores could possibly range from zero (no predictors of success) to 21 (all 21 predictors scored as predicting success).
For predictors with more than two values, the Burgess method selects a cutoff score based on subjective judgment. As an example, a study using the Burgess method (Gottfredson & Snyder, 2005) selected as one predictor the number of complaints for delinquent behavior. With failure on parole as the target outcome, the number of complaints was coded as follows: "zero to two complaints" = 0, and "three or more complaints" = 1 (Gottfredson & Snyder, 2005. p. 18).
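A minimal sketch of Burgess-style unit-weight scoring follows; the predictors, codings and case below are hypothetical illustrations rather than Burgess's actual 21 variables.

```python
def burgess_score(case: dict, codings: dict) -> int:
    """Sum of 0/1 codes; a predictor is coded 1 when its value points toward the target outcome."""
    return sum(1 if codings[name](value) else 0 for name, value in case.items())

# Hypothetical codings in the spirit of the examples in the text
# (target outcome: success on parole, so higher scores predict success).
codings = {
    "history_of_theft": lambda v: v == "no",       # "no" = 1 (predicts success)
    "num_complaints":   lambda v: v <= 2,          # cutoff chosen a priori: 0-2 complaints = 1
    "steady_employment": lambda v: v == "yes",
}

case = {"history_of_theft": "no", "num_complaints": 1, "steady_employment": "no"}
print(burgess_score(case, codings))   # 2 of the 3 predictors point toward success
```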
Kerby method
The Kerby method is similar to the Burgess method, but differs in two ways. First, while the Burgess method uses subjective judgment to select a cutoff score for a multi-valued predictor with a binary outcome, the Kerby method uses classification and regression tree (CART) analysis. In this way, the selection of the cutoff score is based not on subjective judgment, but on a statistical criterion, such as the point where the chi-square value is a maximum.
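The cutoff-selection criterion can be sketched directly: scan the possible splits of a multi-valued predictor against a binary outcome and keep the split with the largest chi-square value. This is an illustrative re-implementation of the criterion described above, not the CART software itself, and the toy data are invented.

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table (no continuity correction)."""
    table = np.asarray(table, dtype=float)
    row, col, total = table.sum(1, keepdims=True), table.sum(0, keepdims=True), table.sum()
    expected = row * col / total
    return ((table - expected) ** 2 / expected).sum()

def best_cutoff(predictor, outcome):
    """Return (cutoff, statistic) maximizing chi-square for the split (predictor <= cutoff)."""
    predictor, outcome = np.asarray(predictor), np.asarray(outcome)
    best = (None, -np.inf)
    for c in np.unique(predictor)[:-1]:          # candidate cutoffs between observed values
        left = predictor <= c
        table = [[(left & (outcome == 0)).sum(), (left & (outcome == 1)).sum()],
                 [(~left & (outcome == 0)).sum(), (~left & (outcome == 1)).sum()]]
        stat = chi_square_2x2(table)
        if stat > best[1]:
            best = (c, stat)
    return best

# Toy data: number of complaints vs. parole failure (1 = failure).
complaints = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6]
failure    = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(best_cutoff(complaints, failure))   # picks the split "two or fewer" vs. "three or more"
```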
The second difference is that while the Burgess method is applied to a binary outcome, the Kerby method can appl
|
https://en.wikipedia.org/wiki/Kempe%20chain
|
In mathematics, a Kempe chain is a device used mainly in the study of the four colour theorem. Intuitively, it is a connected chain of points on a graph with alternating colors.
History
Kempe chains were first used by Alfred Kempe in his attempted proof of the four colour theorem. Even though his proof turned out to be incomplete, the method of Kempe chains is crucial to the successful modern proofs (Appel & Haken, Robertson et al., etc.). Furthermore, the method is used in the proof of the five-colour theorem by Percy John Heawood, a weaker form of the four-colour theorem.
Formal definition
The term "Kempe chain" is used in two different but related ways.
Suppose G is a graph with vertex set V, and we are given a colouring function
c : V → S,
where S is a finite set of colours, containing at least two distinct colours a and b. If v is a vertex with colour a, then the (a, b)-Kempe chain of G containing v is the maximal connected subset of V which contains v and whose vertices are all coloured either a or b.
The above definition is what Kempe worked with. Typically the set S has four elements (the four colours of the four colour theorem), and c is a proper colouring, that is, each pair of adjacent vertices in V are assigned distinct colours.
A more general definition, which is used in the modern computer-based proofs of the four colour theorem, is the following. Suppose again that G is a graph, with edge set E, and this time we have a colouring function
c : E → S.
If e is an edge assigned colour a, then the (a, b)-Kempe chain of G containing e is the maximal connected subset of E which contains e and whose edges are all coloured either a or b.
This second definition is typically applied where S has three elements, say a, b and c, and where V is a cubic graph, that is, every vertex has three incident edges. If such a graph is properly coloured, then each vertex must have edges of three distinct colours, and Kempe chains end up being paths, which is simpler than in the case of the first definition.
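As an illustration of the first (vertex) definition, a Kempe chain can be computed by a breadth-first search restricted to the two colours; the graph representation and names below are illustrative assumptions.

```python
from collections import deque

def kempe_chain(adjacency, colouring, v, a, b):
    """Return the (a, b)-Kempe chain containing vertex v: the maximal connected set of
    vertices reachable from v through vertices coloured a or b."""
    if colouring[v] not in (a, b):
        raise ValueError("v must be coloured a or b")
    chain, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adjacency[u]:
            if w not in chain and colouring[w] in (a, b):
                chain.add(w)
                queue.append(w)
    return chain

# A 5-cycle with a proper 3-colouring; the (red, blue) chain through vertex 0 is {0, 1, 2, 3}.
adjacency = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colouring = {0: "red", 1: "blue", 2: "red", 3: "blue", 4: "green"}
print(kempe_chain(adjacency, colouring, 0, "red", "blue"))
```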
In terms of maps
Application to the four colour theorem
In the proof of the four color theorem, Kempe was able to show that every planar graph necessarily has a vertex of degree five or less, that is, a vertex that touches at most five other vertices, called its neighbors. As such, to prove the four color theorem, it is sufficient to prove that vertices of degree five or less are all four-colorable. Kempe was able to prove the case of degree four and give a partial proof of degree five using Kempe chains.
In this case, Kempe chains are used to prove the idea that no vertex has to be touching four colors different from itself, i.e., having a degree of 4. First, one can create a graph with a vertex v and four vertices as neighbors. If we remove the vertex v, we can four-color the remaining vertices. We can set the colors as (in clockwise order) red, yellow, blue, and green. In this situation, there can be a Kempe chain joining the red and blue neighbors or a Kempe chain joining the green and yellow
|
https://en.wikipedia.org/wiki/Vmstat
|
vmstat (virtual memory statistics) is a computer system monitoring tool that collects and displays summary information about operating system memory, processes, interrupts, paging and block I/O. Users of vmstat can specify a sampling interval which permits observing system activity in near-real time.
The vmstat tool is available on most Unix and Unix-like operating systems, such as FreeBSD, Linux or Solaris.
Syntax
The syntax and output of vmstat often differs slightly between different operating systems.
# vmstat 2 6
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 2536 21496 185684 1353000 0 0 0 14 1 2 0 0 100 0
0 0 2536 21496 185684 1353000 0 0 0 28 1030 145 0 0 100 0
0 0 2536 21496 185684 1353000 0 0 0 0 1026 132 0 0 100 0
0 0 2536 21520 185684 1353000 0 0 0 0 1033 186 1 0 99 0
0 0 2536 21520 185684 1353000 0 0 0 0 1024 141 0 0 100 0
0 0 2536 21584 185684 1353000 0 0 0 0 1025 131 0 0 100 0
In the above example the tool reports every two seconds for six iterations.
Customized or more detailed output can be obtained by using various options with the vmstat command.
# vmstat -s This option is used to get memory statistics.
# vmstat -d This option is used to get disk statistics.
See also
nmon — a system monitor tool for the AIX and Linux operating systems.
iostat
top
sar
External links
Softpanorama vmstat page
Unix software
System monitors
|
https://en.wikipedia.org/wiki/Iostat
|
iostat (input/output statistics) is a computer system monitor tool used to collect and show operating system storage input and output statistics. It is often used to identify performance issues with storage devices, including local disks, or remote disks accessed over network file systems such as NFS. It can also be used to provide information about terminal (TTY) input and output, and also includes some basic CPU information.
Syntax and availability
iostat -x displays output where each line (row) gives numerical data for one device. The first column lists the device name, and subsequent columns show various statistics for that device. Columns include the average service time (svc_t, which includes not only the time a request is in the service queue, but also the seek time and transfer time), the average busy percentage (%b, essentially the proportion of time that the device is in use), and the percentage of time that the queue is not empty (%w, which means the proportion of time in which requests from the device have not yet been fulfilled).
It is best to run iostat specifying a time interval in seconds (for example iostat -x 30) in order to see the results over time. This is because otherwise, the output will reflect the values over the entire timespan since the system was last rebooted.
The iostat tool is available on most Unix and Unix-like operating systems, such as FreeBSD, macOS (com.apple.pkg.Core package), Linux (sysstat package), and Solaris. The syntax and output of iostat often differs slightly between them.
Output of the command
Sun Microsystems stated that high values in the wait and svc_t fields suggest a lack of overall throughput in the system, indicating that "the system is overloaded with I/O operations". Consistently high values in the kr/s, kw/s, %w and %b fields also indicate "a possible I/O bottleneck".
In versions of Solaris before Solaris 7, iostat can give misleading information in the wait field on multiprocessor systems. This is because iostat can misinterpret one processor being in a state where it is waiting for I/O, as meaning that all processors in the system are having to wait.
It is also advisable to disregard high values in the svc_t field for disks that have very low rates of activity (less than 5%). This is because the fsflush process can force up the average service time when synchronising data on disk with what is in memory.
iostat does not display information about the individual volumes on each disk if a volume manager is used. The vxstat command can be used to show this information instead. In contrast, when using Linux LVM as a volume manager, iostat does display volume information individually, because each logical volume has its own device mapper (dm) device.
See also
Disk-drive performance characteristics
mpstat
netstat
sar (Unix)
systat
vmstat
References
External links
FreeBSD iostat(8) manual page
Solaris iostat(1M) manual page
Linux iostat manual page
Mac OS X iostat manual p
|
https://en.wikipedia.org/wiki/%21%21
|
‼ (a double exclamation mark, Unicode character U+203C) may refer to:
!! (chess), a brilliant move in chess annotation
Double factorial, an operator in mathematics
Retroflex click, a family of click consonants found only in Juu languages and in the Damin ritual jargon
Double-negation translation, !!p = p.
See also
! (disambiguation)
!!! (disambiguation)
|
https://en.wikipedia.org/wiki/Curve%20orientation
|
In mathematics, an orientation of a curve is the choice of one of the two possible directions for travelling on the curve. For example, for Cartesian coordinates, the x-axis is traditionally oriented toward the right, and the y-axis is upward oriented.
In the case of a planar simple closed curve (that is, a curve in the plane whose starting point is also the end point and which has no other self-intersections), the curve is said to be positively oriented or counterclockwise oriented, if one always has the curve interior to the left (and consequently, the curve exterior to the right), when traveling on it. Otherwise, that is if left and right are exchanged, the curve is negatively oriented or clockwise oriented. This definition relies on the fact that every simple closed curve admits a well-defined interior, which follows from the Jordan curve theorem.
The inner loop of a beltway road in a country where people drive on the right side of the road is an example of a negatively oriented (clockwise) curve. In trigonometry, the unit circle is traditionally oriented counterclockwise.
The concept of orientation of a curve is just a particular case of the notion of orientation of a manifold (that is, besides orientation of a curve one may also speak of orientation of a surface, hypersurface, etc.).
Orientation of a curve is associated with parametrization of its points by a real variable. A curve may have equivalent parametrizations when there is a continuous increasing monotonic function relating the parameter of one curve to the parameter of the other. When there is a decreasing continuous function relating the parameters, then the parametric representations are opposite and the orientation of the curve is reversed.
Orientation of a simple polygon
In two dimensions, given an ordered set of three or more connected vertices (points) (such as in connect-the-dots) which forms a simple polygon, the orientation of the resulting polygon is directly related to the sign of the angle at any vertex of the convex hull of the polygon, for example, of the angle ABC in the picture. In computations, the sign of the smaller angle formed by a pair of vectors is typically determined by the sign of the cross product of the vectors. The latter one may be calculated as the sign of the determinant of their orientation matrix. In the particular case when the two vectors are defined by two line segments with common endpoint, such as the sides BA and BC of the angle ABC in our example, the orientation matrix may be defined as follows:
O =
| 1  xA  yA |
| 1  xB  yB |
| 1  xC  yC |
A formula for its determinant may be obtained, e.g., using the method of cofactor expansion:
det(O) = (xB − xA)(yC − yA) − (xC − xA)(yB − yA).
If the determinant is negative, then the polygon is oriented clockwise. If the determinant is positive, the polygon is oriented counterclockwise. The determinant is non-zero if points A, B, and C are non-collinear. In the above example, with points ordered A, B, C, etc., the determinant is negative, and therefore the polygon is clockwise.
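A small sketch of this determinant test follows (an illustrative implementation; the points are arbitrary).

```python
def orientation(a, b, c):
    """Sign of the orientation determinant for points a, b, c given as (x, y) pairs:
    positive = counterclockwise, negative = clockwise, zero = collinear."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    return (det > 0) - (det < 0)

# A triangle listed counterclockwise gives +1; reversing the order gives -1.
print(orientation((0, 0), (1, 0), (0, 1)))   #  1 (counterclockwise)
print(orientation((0, 0), (0, 1), (1, 0)))   # -1 (clockwise)
```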
|
https://en.wikipedia.org/wiki/AMD%20Am2900
|
Am2900 is a family of integrated circuits (ICs) created in 1975 by Advanced Micro Devices (AMD). They were constructed with bipolar devices, in a bit-slice topology, and were designed to be used as modular components each representing a different aspect of a computer control unit (CCU). By using the bit slicing technique, the Am2900 family was able to implement a CCU with data, addresses, and instructions to be any multiple of 4 bits by multiplying the number of ICs. One major problem with this modular technique was that it required a larger number of ICs to implement what could be done on a single CPU IC. The Am2901 chip included an arithmetic logic unit (ALU) and 16 4-bit processor register slices, and was the "core" of the series. It could count using 4 bits and implement binary operations as well as various bit-shifting operations.
The 2901 and some other chips in the family were second sourced by an unusually large number of other manufacturers, starting with Motorola and then Raytheon both in 1975 and also Cypress Semiconductor, National Semiconductor, NEC, Thomson, and Signetics. In the Soviet Union and later Russia the Am2900 family was manufactured as the 1804 series (with e.g. the Am2901 designated as KR1804VS1 / ) which was known to be in production in 2016.
Computers made with Am2900-family chips
There are probably many more, but here are some known machines using these parts:
The Apollo Computer Tern family: DN460, DN660 and DSP160. All used the same system board emulating the Motorola 68010 instruction set.
The Itek Advanced Technology Airborne Computer (ATAC) used on the Galileo Attitude and Articulation Control Computer System and some Navy aircraft had a 16-register, 16-bit word width assembled from 4-bit-wide 2900 series processors. Four special instructions were added to the Galileo version of the ATAC, and later some chips were replaced with radiation-hardened 2901 chips.
Data General Nova 4, which obtained 16-bit word width using four Am2901 ALUs in parallel; one of the boards had 15 Am2901 ALUs on it.
Digital Equipment Corporation (DEC) PDP-11 models PDP-11/23, PDP-11/34, and PDP-11/44 floating-point options (FPF11, FP11-A and FP11-F, respectively).
The DEC VAX 11/730, which used eight Am2901s for the CPU.
Hewlett-Packard 1000 A-series model A600 used four Am2901 ALUs for its 16-bit processor
The Xerox Dandelion, the machine used in the Xerox Star and Xerox 1108 Lisp machine.
Several models of the GEC 4000 series minicomputers: 4060, 4150, 4160 (four Am2901 each, 16-bit ALU), and 4090 and all 418x and 419x systems (eighteen Am2901 each, 32-bit integer ALU or 8-bit exponent, 64-bit Double Precision floating point ALU).
The DEC KS10 PDP-10 model.
The UCSD Pascal P-machine processor designed at NCR by Joel McCormack.
A number of MAI Basic Four machines.
The Tektronix 4052 graphics system computer.
The SM-1420, Soviet clone of PDP-11, used Soviet clone of Am2901 perhaps also used in others.
The Lilith computer
|
https://en.wikipedia.org/wiki/Cubic%20group
|
Cubic group can mean:
The octahedral symmetry group — one of the first 5 groups of the 7 point groups which are not in one of the 7 infinite series
cubic space group
Mathematics disambiguation pages
|
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Bott%20fixed-point%20theorem
|
In mathematics, the Atiyah–Bott fixed-point theorem, proven by Michael Atiyah and Raoul Bott in the 1960s, is a general form of the Lefschetz fixed-point theorem for smooth manifolds M, which uses an elliptic complex on M. This is a system of elliptic differential operators on vector bundles, generalizing the de Rham complex constructed from smooth differential forms which appears in the original Lefschetz fixed-point theorem.
Formulation
The idea is to find the correct replacement for the Lefschetz number, which in the classical result is an integer counting the correct contribution of a fixed point of a smooth mapping f : M → M.
Intuitively, the fixed points are the points of intersection of the graph of f with the diagonal (graph of the identity mapping) in M × M, and the Lefschetz number thereby becomes an intersection number. The Atiyah–Bott theorem is an equation in which the LHS must be the outcome of a global topological (homological) calculation, and the RHS a sum of the local contributions at fixed points of f.
Counting codimensions in M × M, a transversality assumption for the graph of f and the diagonal should ensure that the fixed point set is zero-dimensional. Assuming M a closed manifold should ensure then that the set of intersections is finite, yielding a finite summation as the RHS of the expected formula. Further data needed relates to the elliptic complex of vector bundles Ej, namely a bundle map
φj from the pullback bundle f*Ej to Ej
for each j, such that the resulting maps on sections give rise to an endomorphism T of the elliptic complex. Such an endomorphism T has Lefschetz number L(T),
which by definition is the alternating sum of its traces on each graded part of the homology of the elliptic complex.
The form of the theorem is then
L(T) = Σx Σj (−1)^j tr(φj,x) / |det(1 − dfx)|.
Here trace φj,x means the trace of φj at a fixed point x of f, and det(1 − dfx) is the determinant of the endomorphism 1 − dfx at x, with dfx the derivative of f (the non-vanishing of this is a consequence of transversality). The outer summation is over the fixed points x, and the inner summation over the index j in the elliptic complex.
Specializing the Atiyah–Bott theorem to the de Rham complex of smooth differential forms yields the original Lefschetz fixed-point formula. A famous application of the Atiyah–Bott theorem is a simple proof of the Weyl character formula in the theory of Lie groups.
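As a worked illustration of that specialization (a sketch under the standing transversality assumption, stating only the classical formula): for the de Rham complex the bundle maps are the exterior powers of the transposed derivative, and the alternating sum of traces collapses to a determinant, so each fixed point contributes just a sign.

```latex
% Specialization to the de Rham complex: E_j = \Lambda^j T^*M, \varphi_j = \Lambda^j (df_x)^{\mathsf T}.
\sum_j (-1)^j \operatorname{tr} \Lambda^j (df_x)^{\mathsf T} \;=\; \det(1 - df_x)
\qquad\Longrightarrow\qquad
L(f) \;=\; \sum_{f(x)=x} \frac{\det(1 - df_x)}{\lvert \det(1 - df_x) \rvert}
\;=\; \sum_{f(x)=x} \operatorname{sign} \det(1 - df_x).
```

This is the classical Lefschetz fixed-point formula, with each nondegenerate fixed point counted with a sign.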
History
The early history of this result is entangled with that of the Atiyah–Singer index theorem. There was other input, as is suggested by the alternate name Woods Hole fixed-point theorem that was used in the past (referring properly to the case of isolated fixed points). A 1964 meeting at Woods Hole brought together a varied group:
Eichler started the interaction between fixed-point theorems and automorphic forms. Shimura played an important part in this development by explaining this to Bott at the Woods Hole conference in 1964.
As Atiyah puts it:
[at the conference]...Bott and I learnt of a conjecture of Shimura concerning a generalization of the Lefschetz formula for holomo
|
https://en.wikipedia.org/wiki/Heybridge%2C%20Maldon
|
Heybridge is a large village and civil parish in the Maldon district of Essex, England. It is adjacent to the town of Maldon, near the River Blackwater. The parish had a population of 8,163 in 2021.
Heybridge has a number of residential areas, most recognisable is the newer Bovis housing estates to the west of the town, which were built in 1995. Before building commenced, a full archaeological dig was undertaken and the excavations showed the existence of an important Iron Age settlement and ritual complex, a large Roman settlement and a succeeding Saxon settlement, as well as scattered pre-historic remains. Along the Goldhanger road to the east are situated a number of traditional British holiday campsites, catering for both permanent residents and visitors.
History
Heybridge was originally called Tidwalditun. The name Heybridge came from the high bridge that was built over the River Blackwater in the Middle Ages, at Heybridge Square (the junction of Heybridge Street, Holloway Road, and the Causeway). This was a 5-arched stone bridge and it was replaced in 1870 by a 2-arched brick one. Much of the water flow down this part of the river had, by then, been diverted into the River Chelmer by diversion work done during construction of the Chelmer and Blackwater Navigation.
Some people believe that the River Blackwater at Heybridge, near where the "high bridge" was later constructed, was the site of the Battle of Maldon in 991 AD. This belief, however, is contentious. The site of the battle cannot be unambiguously determined from the poem The Battle of Maldon itself, and over the years, various people have had different theories about where it happened. The key role of an island in the poem would seem to make the traditional site of the battle at Northey Island to the south more likely. The island in question is within shouting distance of the mainland, which would rule out Osea Island to the east.
Heybridge was an agricultural village until the 1970s and 80s, when a considerable proportion of the local farm land was given over to house building. The main industry in Heybridge itself, until it ceased trading in 1984, was the agricultural machinery manufacturer E. H. Bentall & Co. William Bentall, some time between 1760 and 1790 invented the Goldhanger plough, which was put on the market in 1797. The company was established in 1805 on the south bank of the Chelmer and Blackwater Navigation, and grew into a large factory complex that operated for nearly 180 years. Prior to the First World War, Bentalls moved into the new world of the Automobile, producing their first car, the Bentall 9hp in 1908, with production ending in 1912.
By 1914 Ben
|
https://en.wikipedia.org/wiki/Constant%20curvature
|
In mathematics, constant curvature is a concept from differential geometry. Here, curvature refers to the sectional curvature of a space (more precisely a manifold) and is a single number determining its local geometry. The sectional curvature is said to be constant if it has the same value at every point and for every two-dimensional tangent plane at that point. For example, a sphere is a surface of constant positive curvature.
Classification
The Riemannian manifolds of constant curvature can be classified into the following three cases:
elliptic geometry – constant positive sectional curvature
Euclidean geometry – constant vanishing sectional curvature
hyperbolic geometry – constant negative sectional curvature.
Properties
Every space of constant curvature is locally symmetric, i.e. its curvature tensor is parallel .
Every space of constant curvature is locally maximally symmetric, i.e. it has n(n + 1)/2 local isometries, where n is its dimension.
Conversely, there exists a similar but stronger statement: every maximally symmetric space, i.e. a space which has n(n + 1)/2 (global) isometries, has constant curvature.
(Killing–Hopf theorem) The universal cover of a manifold of constant sectional curvature is one of the model spaces:
sphere (sectional curvature positive)
plane (sectional curvature zero)
hyperbolic manifold (sectional curvature negative)
A space of constant curvature which is geodesically complete is called space form and the study of space forms is intimately related to generalized crystallography (see the article on space form for more details).
Two space forms are isomorphic if and only if they have the same dimension, their metrics possess the same signature and their sectional curvatures are equal.
References
Moritz Epple (2003) From Quaternions to Cosmology: Spaces of Constant Curvature ca. 1873 — 1925, invited address to International Congress of Mathematicians
Differential geometry of surfaces
Riemannian geometry
Curvature (mathematics)
|
https://en.wikipedia.org/wiki/162%20%28number%29
|
162 (one hundred [and] sixty-two) is the natural number between 161 and 163.
In mathematics
Having only 2 and 3 as its prime divisors, 162 is a 3-smooth number. 162 is also an abundant number, since the sum of its proper divisors is greater than 162 itself. As the product of numbers three units apart from each other (162 = 3 · 6 · 9), it is a triple factorial number.
There are 162 ways of partitioning seven items into subsets of at least two items per subset. 162^64 + 1 is a prime number.
In religion
Jared was 162 when he became the father of Enoch.
In sports
162 is the total number of baseball games each team plays during a regular season in Major League Baseball.
References
Integers
|
https://en.wikipedia.org/wiki/165%20%28number%29
|
165 (one hundred [and] sixty-five) is the natural number following 164 and preceding 166.
In mathematics
165 is:
an odd number, a composite number, and a deficient number.
a sphenic number.
a tetrahedral number
the sum of the sums of the divisors of the first 14 positive integers.
a self number in base 10.
a palindromic number in binary (10100101_2) and in bases 14 (BB_14), 32 (55_32) and 54 (33_54); see the sketch following this list.
a unique period in base 2.
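The base representations above can be checked mechanically; this small sketch (illustrative only) prints the digit lists and asserts that each reads the same forwards and backwards.

```python
def to_base(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1]

# 165 is a palindrome in bases 2, 14, 32 and 54.
for base in (2, 14, 32, 54):
    digits = to_base(165, base)
    assert digits == digits[::-1], (base, digits)
    print(base, digits)   # e.g. base 2 -> [1, 0, 1, 0, 0, 1, 0, 1]
```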
In astronomy
165 Loreley is a large Main belt asteroid
165P/LINEAR is a periodic comet in the Solar System
The planet Neptune takes about 165 years to orbit the Sun.
In the military
Caproni Ca.165 Italian fighter aircraft developed before World War II
was a United States Navy tanker, part of the U.S. Reserve Fleet, Beaumont, Texas
was a United States Navy Barracuda-class submarine during World War II
was a United States Navy during World War II
was a United States Navy during World War II
USS Counsel (AM-165) was a United States Navy during World War II
was a United States Navy minesweeper during World War II
was a United States Navy Oxford-class technical research ship following World War II
was a United States Navy during World War I
was a United States Navy during World War II
was a United States Navy transport and cargo ship during World War II
was a United States Navy yacht during World War I
The 165 Squadron, Republic of Singapore Air Force Air Defence Operations Command, Republic of Singapore Air Force
In transportation
British Rail Class 165
The Blériot 165 was a French four-engine biplane airliner of the early 1920s
The Cessna 165 single-engine plane of the 1930s
LOT Polish Airlines Flight 165, en route from Warsaw to Cracow Balice airport crashed during a snowstorm on April 2, 1969
In other fields
165 is also:
The year AD 165 or 165 BC
165 AH is a year in the Islamic calendar that corresponds to 781 – 782 CE
The atomic number of an element temporarily called Unhexpentium
G.165 is a Telecommunication Standardization Sector standard for echo cancellers, used in telephony
The human gene Zinc finger protein 165, or ZNF165
See also
List of highways numbered 165
United States Supreme Court cases, Volume 165
United Nations Security Council Resolution 165
Pennsylvania House of Representatives, District 165
U.S. Income Tax sections 165(d) Professional Gamblers and 165(d) Recreational Gamblers
References
External links
Number Facts and Trivia: 165
The Number 165
The Positive Integer 165
VirtueScience: 165
Integers
|
https://en.wikipedia.org/wiki/Giovanni%20Bianchini
|
Giovanni Bianchini (in Latin, Johannes Blanchinus) (1410 – c. 1469) was a professor of mathematics and astronomy at the University of Ferrara and court astrologer of Leonello d'Este. He was an associate of Georg Purbach and Regiomontanus. The letters exchanged with Regiomontanus in 1463–1464 mention works by Bianchini entitled: Primum mobile (astronomical tables included), Flores almagesti, Compositio instrumenti.
Bianchini was the first mathematician in Europe to use decimal positional fractions for his trigonometric tables, at the same time as Al-Kashi in Samarkand. In De arithmetica, part of the Flores almagesti, he uses operations with negative numbers and expresses the Law of Signs.
He was probably the father of the instrument maker Antonio Bianchino.
The crater Blanchinus on the Moon is named after him.
Works
Silvio Magrini (ed.), Joannes de Blanchinis ferrariensis e il suo carteggio scientifico col Regiomontano (1463-64), Zuffi, 1916 — Scientific letters exchanged by Bianchini and Regiomontanus
See also
Giovanni Bianchini should not be confused with two similarly-named Italians with their own lunar craters: Francesco Bianchini (1662–1729) (and the Bianchini crater), and Giuseppe Biancani (1566–1624) (and the Blancanus crater).
External links
Vescovini, Graziella Federici. « Bianchini, Giovanni ». In: Dizionario Biografico degli Italiani
Institute and History of the Museum of Science
Antonio Bianchini
15th-century Italian astronomers
15th-century Italian mathematicians
1410 births
1460s deaths
|
https://en.wikipedia.org/wiki/Doubly%20periodic%20function
|
In mathematics, a doubly periodic function is a function defined on the complex plane and having two "periods", which are complex numbers u and v that are linearly independent as vectors over the field of real numbers. That u and v are periods of a function ƒ means that
ƒ(z + u) = ƒ(z + v) = ƒ(z)
for all values of the complex number z.
The doubly periodic function is thus a two-dimensional extension of the simpler singly periodic function, which repeats itself in a single dimension. Familiar examples of functions with a single period on the real number line include the trigonometric functions like cosine and sine. In the complex plane the exponential function e^z is a singly periodic function, with period 2πi.
Examples
As an arbitrary mapping from pairs of reals (or complex numbers) to reals, a doubly periodic function can be constructed with little effort. For example, assume that the periods are 1 and i, so that the repeating lattice is the set of unit squares with vertices at the Gaussian integers. Values in the prototype square (i.e. x + iy where 0 ≤ x < 1 and 0 ≤ y < 1) can be assigned rather arbitrarily and then 'copied' to adjacent squares. This function will then be necessarily doubly periodic.
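A sketch of this copy-and-tile construction follows (purely illustrative; the prototype function g is an arbitrary choice).

```python
import cmath

def make_doubly_periodic(g):
    """Extend a function g defined on the unit square [0,1) x [0,1) to the whole complex
    plane with periods 1 and i, by reducing z modulo the lattice Z + Zi."""
    def f(z):
        z = complex(z)
        return g(complex(z.real % 1.0, z.imag % 1.0))
    return f

# Any prototype works; here g is an arbitrary choice of values on the unit square.
g = lambda w: cmath.sin(2 * cmath.pi * w.real) + 1j * w.imag ** 2
f = make_doubly_periodic(g)
print(f(0.25 + 0.5j))        # same value ...
print(f(3.25 + 7.5j))        # ... because 3 + 7i lies in the period lattice
```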
If the vectors 1 and i in this example are replaced by linearly independent vectors u and v, the prototype square becomes a prototype parallelogram that still tiles the plane. The "origin" of the lattice of parallelograms does not have to be the point 0: the lattice can start from any point. In other words, we can think of the plane and its associated functional values as remaining fixed, and mentally translate the lattice to gain insight into the function's characteristics.
Use of complex analysis
If a doubly periodic function is also a complex function that satisfies the Cauchy–Riemann equations and provides an analytic function away from some set of isolated poles – in other words, a meromorphic function – then a lot of information about such a function can be obtained by applying some basic theorems from complex analysis.
A non-constant meromorphic doubly periodic function cannot be bounded on the prototype parallelogram. For if it were it would be bounded everywhere, and therefore constant by Liouville's theorem.
Since the function is meromorphic, it has no essential singularities and its poles are isolated. Therefore a translated lattice that does not pass through any pole can be constructed. The contour integral around any parallelogram in the lattice must vanish, because the values assumed by the doubly periodic function along the two pairs of parallel sides are identical, and the two pairs of sides are traversed in opposite directions as we move around the contour. Therefore, by the residue theorem, the function cannot have a single simple pole inside each parallelogram – it must have at least two simple poles within each parallelogram (Jacobian case), or it must have at least one pole of order greater than one (Weierstrassian case).
A similar argument can
|
https://en.wikipedia.org/wiki/Probabilistic%20number%20theory
|
In mathematics, probabilistic number theory is a subfield of number theory, which explicitly uses probability to answer questions about the integers and integer-valued functions. One basic idea underlying it is that different prime numbers are, in some serious sense, like independent random variables. This, however, is not an idea that has a unique useful formal expression.
The founders of the theory were Paul Erdős, Aurel Wintner and Mark Kac during the 1930s, one of the periods of investigation in analytic number theory. Foundational results include the Erdős–Wintner theorem and the Erdős–Kac theorem on additive functions.
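For example, the Erdős–Kac theorem asserts that for large n the number ω(n) of distinct prime factors of n behaves like a normal random variable with mean and variance log log n. The following Python sketch is only a rough empirical check (the convergence is notoriously slow), comparing the sample mean and variance of ω(n) for random n below 10^6 with log log 10^6:

    import math, random

    def omega(n):
        """Number of distinct prime factors of n, by trial division."""
        count, d = 0, 2
        while d * d <= n:
            if n % d == 0:
                count += 1
                while n % d == 0:
                    n //= d
            d += 1
        return count + (1 if n > 1 else 0)

    random.seed(0)
    N = 10**6
    sample = [omega(random.randrange(2, N)) for _ in range(2000)]
    mean = sum(sample) / len(sample)
    var = sum((x - mean) ** 2 for x in sample) / len(sample)
    print("sample mean:", mean, " sample variance:", var)
    print("log log N:  ", math.log(math.log(N)))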
See also
Number theory
Analytic number theory
Areas of mathematics
List of number theory topics
List of probability topics
Probabilistic method
Probable prime
References
Further reading
Number theory
|
https://en.wikipedia.org/wiki/Function%20value
|
Function value may refer to:
In mathematics, the value of a function when applied to an argument.
In computer science, a closure.
|
https://en.wikipedia.org/wiki/Eric%20Priest
|
Eric Ronald Priest (born 7 November 1943) is Emeritus Professor at St Andrews University, where he previously held the Gregory Chair of Mathematics and a Bishop Wardlaw Professorship.
Career and research
Priest is a recognised authority in solar magnetohydrodynamics (or MHD for short), the study of the subtle, and often nonlinear, interaction between the Sun's magnetic field and its plasma interior or atmosphere, treated as a continuous medium. Priest is an applied mathematician and, along with the other members of his research group at St Andrews, is currently investigating a large number of solar phenomena, including sunspots, coronal heating, wave propagation, magnetic reconnection, magnetic instabilities, magnetic structures and helioseismology. This is done using mathematical modelling techniques and observational data from satellites such as SoHO, Yohkoh and TRACE, or ground-based observatories such as Kitt Peak and Big Bear. In 2000 he was the James Arthur Prize Lecturer at Harvard University. Professor Priest has received a number of academic awards for his research, including the Hale Prize of the American Astronomical Society (2002), and was elected a Fellow of the Royal Society in the same year. He is notable in the solar physics community as something of an evangelist for the importance of magnetic reconnection in driving many solar phenomena, and as an explanation of the solar coronal heating problem.
As an applied mathematician, his research interests involve constructing mathematical models for the subtle and complex ways in which magnetic fields interact with plasmas in the atmosphere of the Sun and in more exotic cosmic objects. In particular, he is trying to understand how the corona of the Sun is heated to several million degrees and how magnetic energy is converted into other forms in solar flares.
In the area of science and religion, he considers himself aware of the importance of trying in small ways to encourage dialogue and understanding between Islam and Christianity and recently spoke on science and culture to 850 schoolchildren in Alexandria, Egypt. He has also preached in St Andrews on the tensions between Christianity and science and spoke on "Creativity in Science" at a conference on Creativity and the Imagination. He is active in the local Anglican church and enjoys hill-walking, bridge, singing in a couple of choirs and spending time with his wife Clare and four children.
Priest retired from full-time teaching in 2010, and was awarded a two-year Leverhulme Emeritus Fellowship in 2011. He retains a link with St Andrews as Emeritus Professor in the School of Mathematics and Statistics and remains active in research.
In 2020, Priest received from the European Physical Society the prestigious ESPD Senior Prize for "long-standing leadership via mentoring, supervising and field-defining textbooks and for fundamental contributions in key topics of solar magnetohydrodynamics, particularly magnetic reconnection in the sol
|
https://en.wikipedia.org/wiki/David%20Eisenbud
|
David Eisenbud (born 8 April 1947 in New York City) is an American mathematician. He is a professor of mathematics at the University of California, Berkeley, and former director of the Mathematical Sciences Research Institute (MSRI), now known as the Simons Laufer Mathematical Sciences Institute (SLMath). He served as Director of MSRI from 1997 to 2007, and then again from 2013 to 2022.
Biography
Eisenbud is the son of mathematical physicist Leonard Eisenbud, who was a student and collaborator of the renowned physicist Eugene Wigner. Eisenbud received his Ph.D. in 1970 from the University of Chicago, where he was a student of Saunders Mac Lane and, unofficially, James Christopher Robson. He then taught at Brandeis University from 1970 to 1997, during which time he had visiting positions at Harvard University, Institut des Hautes Études Scientifiques (IHÉS), University of Bonn, and Centre national de la recherche scientifique (CNRS). He joined the staff at MSRI in 1997, and took a position at Berkeley at the same time.
From 2003 to 2005 Eisenbud was President of the American Mathematical Society.
Eisenbud's mathematical interests include commutative and non-commutative algebra, algebraic geometry, topology, and computational methods in these fields. He has written over 150 papers and books with over 60 co-authors. Notable contributions include the theory of matrix factorizations for maximal Cohen–Macaulay modules over hypersurface rings, the Eisenbud–Goto conjecture on degrees of generators of syzygy modules, and the Buchsbaum–Eisenbud criterion for exactness of a complex. He also proposed the Eisenbud–Evans conjecture, which was later settled by the Indian mathematician Neithalath Mohan Kumar.
He has had 31 doctoral students, including Craig Huneke, Mircea Mustaţă, Irena Peeva, and Gregory G. Smith (winner of the Aisenstadt Prize in 2007).
Eisenbud's hobbies are juggling (he has written two papers on the mathematics of juggling) and music. He has appeared in Brady Haran's online video channel "Numberphile".
Eisenbud was elected Fellow of the American Academy of Arts and Sciences in 2006. He was awarded the Leroy P. Steele Prize in 2010. In 2012 he became a fellow of the American Mathematical Society.
Selected publications
Books
Articles
See also
Eisenbud–Levine–Khimshiashvili signature formula
References
External links
Eisenbud's biographical page at MSRI
Algebraists
Algebraic geometers
University of Chicago alumni
University of California, Berkeley College of Letters and Science faculty
Brandeis University faculty
Harvard University staff
Mathematics popularizers
Presidents of the American Mathematical Society
Fellows of the American Mathematical Society
20th-century American mathematicians
21st-century American mathematicians
Mathematicians from New York (state)
1947 births
Living people
|
https://en.wikipedia.org/wiki/Doob%E2%80%93Meyer%20decomposition%20theorem
|
The Doob–Meyer decomposition theorem is a theorem in stochastic calculus stating the conditions under which a submartingale may be decomposed in a unique way as the sum of a martingale and an increasing predictable process. It is named for Joseph L. Doob and Paul-André Meyer.
History
In 1953, Doob published the Doob decomposition theorem which gives a unique decomposition for certain discrete time martingales. He conjectured a continuous time version of the theorem and in two publications in 1962 and 1963 Paul-André Meyer proved such a theorem, which became known as the Doob-Meyer decomposition. In honor of Doob, Meyer used the term "class D" to refer to the class of supermartingales for which his unique decomposition theorem applied.
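In discrete time the decomposition can be written down explicitly: for a submartingale X the predictable part is A_n = Σ_{k≤n} E[X_k − X_{k−1} | F_{k−1}]. As a minimal sketch (the choice of process is illustrative only), take X_n = S_n² for a simple symmetric random walk S_n; each predictable increment equals 1, so A_n = n and M_n = S_n² − n should be a martingale, which the simulation below checks by estimating its mean:

    import random

    random.seed(1)

    def S(n):
        """Value of a simple symmetric random walk after n steps."""
        return sum(random.choice((-1, 1)) for _ in range(n))

    # For X_n = S_n^2 the predictable increment E[X_k - X_{k-1} | F_{k-1}] is 1,
    # so the Doob decomposition is X_n = M_n + A_n with A_n = n.
    n, trials = 1000, 5000
    mean_M = sum(S(n) ** 2 - n for _ in range(trials)) / trials
    print("empirical mean of M_n = S_n^2 - n:", mean_M)   # close to M_0 = 0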
Class D supermartingales
A càdlàg supermartingale Z is of Class D if Z0 = 0 and the collection
{ ZT : T a finite-valued stopping time }
is uniformly integrable.
The theorem
Let Z be a càdlàg supermartingale of class D. Then there exists a unique, increasing, predictable process A with A0 = 0 such that Z + A is a uniformly integrable martingale.
See also
Doob decomposition theorem
Notes
References
Martingale theory
Theorems in statistics
Probability theorems
|
https://en.wikipedia.org/wiki/Linear%20stability
|
In mathematics, in the theory of differential equations and dynamical systems, a particular stationary or quasistationary solution to a nonlinear system is called linearly unstable if the linearization of the equation at this solution has the form dr/dt = Ar, where r is the perturbation to the steady state and A is a linear operator whose spectrum contains eigenvalues with positive real part. If all the eigenvalues have negative real part, then the solution is called linearly stable. Other names for linear stability include exponential stability or stability in terms of first approximation. If there exists an eigenvalue with zero real part then the question about stability cannot be solved on the basis of the first approximation and we approach the so-called "centre and focus problem".
Examples
Ordinary differential equation
The differential equation
dx/dt = x(1 − x)
has two stationary (time-independent) solutions: x = 0 and x = 1.
The linearization at x = 0 has the form
dr/dt = r. The linearized operator is A0 = 1. The only eigenvalue is λ = 1. The solutions to this equation grow exponentially;
the stationary point x = 0 is linearly unstable.
To derive the linearization at x = 1, one writes
x = 1 + r, where r → 0. The linearized equation is then dr/dt = −r; the linearized operator is A1 = −1, the only eigenvalue is λ = −1, hence this stationary point is linearly stable.
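The same conclusion can be checked numerically; the short sketch below evaluates the derivative of f(x) = x(1 − x) at each stationary point, the sign of which plays the role of the eigenvalue of the linearized operator:

    # Linear stability of the stationary points of dx/dt = f(x) = x(1 - x).
    def f(x):
        return x * (1 - x)

    def fprime(x, h=1e-6):
        # central-difference approximation to f'(x); its sign decides stability
        return (f(x + h) - f(x - h)) / (2 * h)

    for x0 in (0.0, 1.0):
        eig = fprime(x0)
        verdict = "linearly unstable" if eig > 0 else "linearly stable"
        print(f"x = {x0}: eigenvalue {eig:+.3f} -> {verdict}")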
Nonlinear Schrödinger Equation
The nonlinear Schrödinger equation
where and , has solitary wave solutions of the form .
To derive the linearization at a solitary wave, one considers the solution in the form
. The linearized equation on is given by
where
with
and
the differential operators.
According to Vakhitov–Kolokolov stability criterion,
when , the spectrum of A has positive point eigenvalues, so that the linearized equation is linearly (exponentially) unstable; for , the spectrum of A is purely imaginary, so that the corresponding solitary waves are linearly stable.
It should be mentioned that linear stability does not automatically imply stability;
in particular, when , the solitary waves are unstable. On the other hand, for , the solitary waves are not only linearly stable but also orbitally stable.
See also
Asymptotic stability
Linearization (stability analysis)
Lyapunov stability
Orbital stability
Stability theory
Vakhitov–Kolokolov stability criterion
References
Stability theory
Solitons
|
https://en.wikipedia.org/wiki/Quasi-isomorphism
|
In homological algebra, a branch of mathematics, a quasi-isomorphism or quism is a morphism A → B of chain complexes (respectively, cochain complexes) such that the induced morphisms
Hn(A) → Hn(B)
of homology groups (respectively, of cohomology groups) are isomorphisms for all n.
In the theory of model categories, quasi-isomorphisms are sometimes used as the class of weak equivalences when the objects of the category are chain or cochain complexes. This results in a homology-local theory, in the sense of Bousfield localization in homotopy theory.
See also
Derived category
References
Gelfand, Sergei I., Manin, Yuri I. Methods of Homological Algebra, 2nd ed. Springer, 2000.
Algebraic topology
Homological algebra
Equivalence (mathematics)
|
https://en.wikipedia.org/wiki/Eureka%20%28University%20of%20Cambridge%20magazine%29
|
Eureka is a journal published annually by The Archimedeans, the mathematical society of Cambridge University. It is one of the oldest recreational mathematics publications still in existence. Eureka includes many mathematical articles on a variety of different topics – written by students and mathematicians from all over the world – as well as a short summary of the activities of the society, problem sets, puzzles, artwork and book reviews.
Eureka has been published 66 times since 1939, and authors include many famous mathematicians and scientists such as Paul Erdős, Martin Gardner, Douglas Hofstadter, G. H. Hardy, Béla Bollobás, John Conway, Stephen Hawking, Roger Penrose, W. T. Tutte (writing with friends under the pseudonym Blanche Descartes), popular maths writer Ian Stewart, Fields Medallist Timothy Gowers and Nobel laureate Paul Dirac.
The journal was formerly distributed free of charge to all current members of the Archimedeans. Today, it is published electronically as well as in print. In 2020, the publication archive was made freely available online.
Eureka is edited by students from the university.
Of the mathematical articles, there is a paper by Freeman Dyson where he defined the rank of a partition in an effort to prove combinatorially the partition congruences earlier discovered by Srinivasa Ramanujan. In the article, Dyson made a series of conjectures that were all eventually resolved.
References
External links
Eureka at the website of The Archimedeans
Archive of old issues
Mathematics education in the United Kingdom
Mathematics journals
Publications associated with the University of Cambridge
Magazines established in 1939
|
https://en.wikipedia.org/wiki/Volterra%20operator
|
In mathematics, in the area of functional analysis and operator theory, the Volterra operator, named after Vito Volterra, is a bounded linear operator on the space L2[0,1] of complex-valued square-integrable functions on the interval [0,1]. On the subspace C[0,1] of continuous functions it represents indefinite integration. It is the operator corresponding to the Volterra integral equations.
Definition
The Volterra operator, V, may be defined for a function f ∈ L2[0,1] and a value t ∈ [0,1], as
(Vf)(t) = ∫0t f(s) ds.
Properties
V is a bounded linear operator between Hilbert spaces, with Hermitian adjoint
(V*f)(t) = ∫t1 f(s) ds.
V is a Hilbert–Schmidt operator, hence in particular is compact.
V has no eigenvalues and therefore, by the spectral theory of compact operators, its spectrum σ(V) = {0}.
V is a quasinilpotent operator (that is, the spectral radius, ρ(V), is zero), but it is not nilpotent.
The operator norm of V is exactly ||V|| = 2⁄π.
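These properties can be checked numerically. The sketch below (assuming NumPy, and using a simple one-sided quadrature rule as the discretization) represents V as a lower-triangular matrix; its largest singular value approximates ||V|| = 2⁄π ≈ 0.6366, while its eigenvalues all sit near 0, consistent with σ(V) = {0}:

    import numpy as np

    n = 1000
    h = 1.0 / n
    # Discretize (Vf)(t) = integral_0^t f(s) ds on a uniform grid:
    # row i of the matrix sums the samples f(t_0), ..., f(t_i) with weight h.
    V = np.tril(np.ones((n, n))) * h

    print("largest singular value:", np.linalg.svd(V, compute_uv=False)[0])
    print("2/pi                  :", 2 / np.pi)
    print("largest |eigenvalue|  :", np.abs(np.linalg.eigvals(V)).max())  # ~ 1/n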
References
Further reading
Operator theory
|
https://en.wikipedia.org/wiki/First%20variation
|
In applied mathematics and the calculus of variations, the first variation of a functional J(y) is defined as the linear functional δJ(y) mapping the function h to
δJ(y)(h) = lim(ε→0) [J(y + εh) − J(y)] / ε = (d/dε) J(y + εh) evaluated at ε = 0,
where y and h are functions, and ε is a scalar. This is recognizable as the Gateaux derivative of the functional.
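As a symbolic illustration (using SymPy, with the functional J(y) = ∫01 y(x)² dx chosen only for this sketch rather than the example treated below), the first variation can be computed by differentiating the integrand with respect to ε and setting ε = 0:

    import sympy as sp

    x, eps = sp.symbols('x epsilon')
    y = sp.Function('y')
    h = sp.Function('h')

    # Perturbed integrand of J(y + eps*h) for J(y) = integral_0^1 y(x)^2 dx.
    integrand = (y(x) + eps * h(x)) ** 2

    # Gateaux derivative: differentiate under the integral sign, then set eps = 0.
    delta_J = sp.Integral(sp.diff(integrand, eps).subs(eps, 0), (x, 0, 1))
    print(delta_J)   # Integral(2*h(x)*y(x), (x, 0, 1))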
Example
Compute the first variation of
From the definition above,
See also
Calculus of variations
Functional derivative
Calculus of variations
|
https://en.wikipedia.org/wiki/Choquet%20theory
|
In mathematics, Choquet theory, named after Gustave Choquet, is an area of functional analysis and convex analysis concerned with measures which have support on the extreme points of a convex set C. Roughly speaking, every vector of C should appear as a weighted average of extreme points, a concept made more precise by generalizing the notion of weighted average from a convex combination to an integral taken over the set E of extreme points. Here C is a subset of a real vector space V, and the main thrust of the theory is to treat the cases where V is an infinite-dimensional (locally convex Hausdorff) topological vector space along lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were in potential theory. Choquet theory has become a general paradigm, particularly for treating convex cones as determined by their extreme rays, and so for many different notions of positivity in mathematics.
The two ends of a line segment determine the points in between: in vector terms the segment from v to w consists of the λv + (1 − λ)w with 0 ≤ λ ≤ 1. The classical result of Hermann Minkowski says that in Euclidean space, a bounded, closed convex set C is the convex hull of its extreme point set E, so that any c in C is a (finite) convex combination of points e of E. Here E may be a finite or an infinite set. In vector terms, by assigning non-negative weights w(e) to the e in E, almost all of them 0, we can represent any c in C as
c = Σe∈E w(e) e
with
Σe∈E w(e) = 1.
In any case the w(e) give a probability measure supported on a finite subset of E. For any affine function f on C, its value at the point c is
f(c) = Σe∈E w(e) f(e).
In the infinite dimensional setting, one would like to make a similar statement.
Choquet's theorem
Choquet's theorem states that for a compact convex subset C of a normed space V, given c in C there exists a probability measure w supported on the set E of extreme points of C such that, for any affine function f on C,
f(c) = ∫ f(e) dw(e).
In practice V will be a Banach space. The original Krein–Milman theorem follows from Choquet's result. Another corollary is the Riesz representation theorem for states on the continuous functions on a metrizable compact Hausdorff space.
More generally, for V a locally convex topological vector space, the Choquet–Bishop–de Leeuw theorem gives the same formal statement.
In addition to the existence of a probability measure supported on the extreme boundary that represents a given point c, one might also consider the uniqueness of such measures. It is easy to see that uniqueness does not hold even in the finite dimensional setting. One can take, for counterexamples, the convex set to be a cube or a ball in R3. Uniqueness does hold, however, when the convex set is a finite dimensional simplex. A finite dimensional simplex is a special case of a Choquet simplex. Any point in a Choquet simplex is represented by a unique probability measure on the extreme points.
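The contrast can be made concrete in coordinates. In the sketch below (using NumPy, with arbitrarily chosen points), the representing weights for a point of a triangle are its barycentric coordinates, obtained by solving a small linear system and therefore unique, while for a square two different weight vectors represent the same point:

    import numpy as np

    def barycentric_weights(vertices, c):
        """Solve sum_i w_i * e_i = c together with sum_i w_i = 1 for the weights."""
        V = np.vstack([np.array(vertices, dtype=float).T, np.ones(len(vertices))])
        rhs = np.append(np.array(c, dtype=float), 1.0)
        w, *_ = np.linalg.lstsq(V, rhs, rcond=None)
        return w

    # Triangle (a 2-simplex): the representing weights are unique.
    triangle = [(0, 0), (1, 0), (0, 1)]
    print(barycentric_weights(triangle, (0.25, 0.25)))   # [0.5, 0.25, 0.25]

    # Square: the centre (0.5, 0.5) is represented by many different weight vectors.
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    for w in ([0.25, 0.25, 0.25, 0.25], [0.5, 0.0, 0.5, 0.0]):
        print(np.array(square, dtype=float).T @ np.array(w))   # both give [0.5, 0.5]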
See also
Notes
References
Convex hulls
Functional analysis
Integral repres
|
https://en.wikipedia.org/wiki/Universal%20logic
|
Originally the expression Universal logic was coined by analogy with the expression Universal algebra. The first idea was to develop Universal logic as a field of logic that studies the features common to all logical systems, aiming to be to logic what Universal algebra is to algebra. A number of approaches to universal logic in this sense have been proposed since the twentieth century, using model-theoretic and categorical approaches. The Universal Logic Project then developed into a broader undertaking that includes this mathematical project as well as many other logical activities (congresses, journals, book series, an encyclopedia, logic prizes, a webinar, and a YouTube channel).
Development of Universal Logic as a General Theory of Logical Systems
The roots of universal logic as general theory of logical systems, may go as far back as some work of Alfred Tarski in the early twentieth century, but the modern notion was first presented in the 1990s by Swiss logician Jean-Yves Béziau. The term 'universal logic' has also been separately used by logicians such as Richard Sylvan and Ross Brady to refer to a new type of (weak) relevant logic.
In the context defined by Béziau, three main approaches to universal logic have been explored in depth:
An abstract model theory system axiomatized by Jon Barwise,
a topological/categorical approach based on sketches (sometimes called categorical model theory),
a categorical approach originating in Computer Science based on Goguen and Burstall's notion of institution.
While logic has been studied for centuries, Mossakowski et al. commented in 2007 that "it is embarrassing that there is no widely acceptable formal definition of 'a logic'". These approaches to universal logic thus aim to address and formalize the nature of what may be called 'logic' as a form of "sound reasoning".
World Congresses and Schools on Universal Logic
Since 2005, Béziau has been organizing world congresses and schools on universal logic. These events bring together hundreds of researchers and students in the field and offer tutorials and research talks on a wide range of subjects.
First World Congress and School on Universal Logic, 26 March–3 April 2005, Montreux, Switzerland. Participants included Béziau, Dov Gabbay, and David Makinson. (Secret Speaker: Saul Kripke.)
Second World Congress and School on Universal Logic, 16–22 August 2007, Xi'an, China.
Third World Congress and School on Universal Logic, 18–25 April 2010, Lisbon, Portugal. (Secret Speaker: Jaakko Hintikka.)
Fourth World Congress and School on Universal Logic, 29 March–7 April 2013, Rio de Janeiro, Brazil.
Fifth World Congress and School on Universal Logic, 20–30 June 2015, Istanbul, Turkey.
Sixth World Congress and School on Universal Logic, 16–26 June 2018, Vichy, France.
Seventh World Congress and School on Universal Logic, 1–11 April 2022, Crete.
Publications in the field
A journal dedicated to the field, Logica Universalis, with Béziau as editor-in-
|
https://en.wikipedia.org/wiki/Formal%20epistemology
|
Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.
History
Though formally oriented epistemologists have been laboring since the emergence of formal logic and probability theory (if not earlier), only recently have they been organized under a common disciplinary title. This gain in popularity may be attributed to the organization of yearly Formal Epistemology Workshops by Branden Fitelson and Sahotra Sarkar, starting in 2004, and the PHILOG-conferences starting in 2002 (The Network for Philosophical Logic and Its Applications) organized by Vincent F. Hendricks. Carnegie Mellon University's Philosophy Department hosts an annual summer school in logic and formal epistemology. In 2010, the department founded the Center for Formal Epistemology.
Bayesian epistemology
Bayesian epistemology is an important theory in the field of formal epistemology. It has its roots in Thomas Bayes' work in the field of probability theory. It is based on the idea that beliefs are held gradually and that the strengths of the beliefs can be described as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretica
|
https://en.wikipedia.org/wiki/Phillip%20Griffiths
|
Phillip Augustus Griffiths IV (born October 18, 1938) is an American mathematician, known for his work in the field of geometry, and in particular for the complex manifold approach to algebraic geometry. He is a major developer in particular of the theory of variation of Hodge structure in Hodge theory and moduli theory, which forms part of transcendental algebraic geometry and which also touches upon major and distant areas of differential geometry. He also worked on partial differential equations and coauthored work on exterior differential systems with Shiing-Shen Chern, Robert Bryant and Robert Gardner.
Professional career
He received his BS from Wake Forest College in 1959 and his PhD from Princeton University in 1962 after completing a doctoral dissertation, titled "On certain homogeneous complex manifolds", under the supervision of Donald Spencer. Afterwards, he held positions at University of California, Berkeley (1962–1967) and Princeton University (1967–1972). Griffiths was a professor of mathematics at Harvard University from 1972 to 1983. He was then a Provost and James B. Duke Professor of Mathematics at Duke University from 1983 to 1991. From 1991 to 2003, he was the Director of the Institute for Advanced Study (IAS) in Princeton, New Jersey. He remained as part of the Faculty of Mathematics at the IAS until June 2009, after which he has been emeritus at the IAS. He has published on algebraic geometry, differential geometry, geometric function theory, and the geometry of partial differential equations.
Griffiths serves as the Chair of the Science Initiative Group. He is co-author, with Joe Harris, of Principles of Algebraic Geometry, a well-regarded textbook on complex algebraic geometry.
Awards and honors
Griffiths was elected to the National Academy of Sciences in 1979 and the American Philosophical Society in 1992. In 2008 he was awarded the Wolf Prize (jointly with Deligne and Mumford) and the Brouwer Medal. In 2012 he became a fellow of the American Mathematical Society. Moreover, in 2014 Griffiths was awarded the Leroy P. Steele Prize for Lifetime Achievement by the American Mathematical Society. Also in 2014, Griffiths was awarded the Chern Medal for lifetime devotion to mathematics and outstanding achievements.
Selected publications
Articles
with Joe Harris:
with S. S. Chern:
Books
Mumford–Tate groups and domains: their geometry and arithmetic, with Mark Green and Matt Kerr, Princeton University Press, 2012,
Exterior differential systems and Euler-Lagrange partial differential equations, with Robert Bryant and Daniel Grossman, University of Chicago Press, 2003, cloth;
Introduction to Algebraic Curves, American Mathematical Society, Providence, RI, 1989,
Differential Systems and Isometric Embeddings, with Gary R. Jensen, Princeton University Press, Princeton, NJ, 1987,
Topics in Transcendental Algebraic Geometry, Princeton University Press, Princeton, NJ, 1984,
Exterior Differential Systems and the C
|
https://en.wikipedia.org/wiki/Cross-multiplication
|
In mathematics, specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable.
The method is also occasionally known as the "cross your heart" method because lines resembling a heart outline can be drawn to remember which things to multiply together.
Given an equation like
a/b = c/d,
where b and d are not zero, one can cross-multiply to get
ad = bc.
In Euclidean geometry the same calculation can be achieved by considering the ratios as those of similar triangles.
Procedure
In practice, the method of cross-multiplying means that we multiply the numerator of each (or one) side by the denominator of the other side, effectively crossing the terms over:
a/b = c/d  becomes  ad = bc.
The mathematical justification for the method is from the following longer mathematical procedure. If we start with the basic equation
a/b = c/d,
we can multiply the terms on each side by the same number, and the terms will remain equal. Therefore, if we multiply the fraction on each side by the product of the denominators of both sides, namely bd, we get
(a/b)·bd = (c/d)·bd.
We can reduce the fractions to lowest terms by noting that the two occurrences of b on the left-hand side cancel, as do the two occurrences of d on the right-hand side, leaving
ad = cb,
and we can divide both sides of the equation by any of the elements; in this case, dividing by d gives
a = cb/d.
Another justification of cross-multiplication is as follows. Starting with the given equation
a/b = c/d,
multiply by d/d = 1 on the left and by b/b = 1 on the right, getting
(a/b)·(d/d) = (c/d)·(b/b)
and so
ad/(bd) = cb/(db).
Cancel the common denominator bd = db, leaving
ad = cb.
Each step in these procedures is based on a single, fundamental property of equations. Cross-multiplication is a shortcut, an easily understandable procedure that can be taught to students.
Use
This is a common procedure in mathematics, used to reduce fractions or calculate a value for a given variable in a fraction. If we have an equation
x/b = c/d,
where x is a variable we are interested in solving for, we can use cross-multiplication to determine that
x = bc/d.
For example, suppose we want to know how far a car will travel in 7 hours, if we know that its speed is constant and that it already travelled 90 miles in the last 3 hours. Converting the word problem into ratios, we get
x miles / 7 hours = 90 miles / 3 hours.
Cross-multiplying yields
3x = 7 × 90 = 630
and so
x = 210: the car will travel 210 miles in 7 hours.
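The same computation can be written as a short script; the Fraction type is used only to keep the arithmetic exact:

    from fractions import Fraction

    # Solve x / 7 = 90 / 3 for x by cross-multiplication: 3 * x = 7 * 90.
    miles, hours_known, hours_wanted = 90, 3, 7
    x = Fraction(miles * hours_wanted, hours_known)
    print(x)   # 210, so the car travels 210 miles in 7 hours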
Note that even simple equations like
are solved using cross-multiplication, since the missing term is implicitly equal to 1:
Any equation containing fractions or rational expressions can be simplified by multiplying both sides by the least common denominator. This step is called clearing fractions.
Rule of three
The rule of three was a historical shorthand version for a particular form of cross-multiplication that could be taught to students by rote. It was considered the height of Colonial maths education and still figures in the French national curriculum for secondary education, and in the primary educa
|
https://en.wikipedia.org/wiki/%E2%88%86
|
∆ may refer to:
Triangle (∆), one of the basic shapes in geometry. Many different mathematical equations include the use of the triangle.
Delta (letter) (Δ), a Greek letter also used in mathematics and computer science
ᐃ, a letter of Canadian Aboriginal Syllabics
Delta baryon (Δ), one of several baryons consisting of up and down quarks.
alt-J (Δ), a British indie band
Laplace operator (Δ), a differential operator
Increment operator (∆)
Symmetric difference, in mathematics, the set of elements which are in either of two sets and not in their intersection
|
https://en.wikipedia.org/wiki/Karamata
|
Karamata may refer to:
Jovan Karamata (1902–1967), Serbian mathematician
Karamata's inequality, named after Jovan Karamata, also known as the majorization inequality, a theorem in elementary algebra for convex and concave real-valued functions, defined on an interval of the real line. It generalizes the discrete form of Jensen's inequality
Karamata Family House, a cultural monument in Belgrade, Serbia
See also
Kalamata, a city in southern Greece
Kalamata (disambiguation)
Karamat (disambiguation)
Surnames of Serbian origin
|
https://en.wikipedia.org/wiki/GRE%20Mathematics%20Test
|
The GRE subject test in mathematics is a standardized test in the United States created by the Educational Testing Service (ETS), and is designed to assess a candidate's potential for graduate or post-graduate study in the field of mathematics. It contains questions from many fields of mathematics; about 50% of the questions come from calculus (including pre-calculus topics, multivariate calculus, and differential equations), 25% come from algebra (including linear algebra, abstract algebra, and number theory), and 25% come from a broad variety of other topics typically encountered in undergraduate mathematics courses, such as point-set topology, probability and statistics, geometry, and real analysis.
Up until the September 2023 administration, the GRE subject test in Mathematics was paper-based, as opposed to the GRE general test which is usually computer-based. Since then, it has been administered online. It contains approximately 66 multiple-choice questions, which are to be answered within 2 hours and 50 minutes. Scores on this exam are required for entrance to most math Ph.D. programs in the United States.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 920 and 400, which correspond to the 99th percentile and the 1st percentile, respectively. The mean score for all test takers from July 1, 2011 to June 30, 2014 was 659, with a standard deviation of 137.
Prior to October 2001, a significant percentage of students were achieving perfect scores on the exam, which made it difficult for competitive programs to differentiate between students in the upper percentiles. As a result, the test was reworked and renamed "The Mathematics Subject Test (Rescaled)". According to ETS, "Scores earned on the test after October 2001 should not be compared to scores earned prior to that date."
Tests generally take place three times per year, within an approximately 14-day window in each of September, October, and April. Students must register for the exam approximately five weeks before the administration of the exam.
See also
Graduate Record Examination
GRE Biochemistry Test
GRE Biology Test
GRE Chemistry Test
GRE Literature in English Test
GRE Physics Test
GRE Psychology Test
Graduate Management Admission Test (GMAT)
Graduate Aptitude Test in Engineering (GATE)
References
Mathematics tests
Year of introduction missing
|
https://en.wikipedia.org/wiki/Set-theoretic%20topology
|
In mathematics, set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC).
Objects studied in set-theoretic topology
Dowker spaces
In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact.
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until M.E. Rudin constructed one in 1971. Rudin's counterexample is a very large space (of cardinality (ℵ_ω)^ω) and is generally not well-behaved. Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was more well-behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality ℵ_{ω+1} that is also Dowker.
Normal Moore spaces
A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
Cardinal functions
Cardinal functions are widely used in topology as a tool for describing various topological properties. Below are some examples. (Note: some authors, arguing that "there are no finite cardinal numbers in general topology", prefer to define the cardinal functions listed below so that they never take on finite cardinal numbers as values; this requires modifying some of the definitions given below, e.g. by adding " + ℵ0" to the right-hand side of the definitions, etc.)
Perhaps the simplest cardinal invariants of a topological space X are its cardinality and the cardinality of its topology, denoted respectively by |X| and o(X).
The weight w(X ) of a topological space X is the smallest possible cardinality of a base for X. When w(X ) = ℵ0 the space X is said to be second countable.
The π-weight of a space X is the smallest cardinality of a π-base for X. (A π-base is a family of nonempty open sets such that every nonempty open set contains a member of the family.)
The character of a topological space X at a point x is the smallest cardinality of a local base for x. The character of the space X is χ(X) = sup { χ(x, X) : x ∈ X }. When χ(X) = ℵ0 the space X is said to be first countable.
The density d(X ) of a space X is the smallest cardinality of a dense subset of X. When d(X ) = ℵ0 the space X is said to be separable.
The Lindelöf number L(X ) of a space X is the smallest infinite cardinality such that every open cover has a subcover of cardinality no more than L(X ). When L(X ) = ℵ0 the space X is said to be a Lindelöf space.
The cellularity of a space X is c(X) = sup { |U| : U is a family of mutually disjoint non-empty open subsets of X }.
The hereditary cellularity (sometimes called spread) is the least upper bound of the cellularities of its subsets: s(X) = sup { c(Y) : Y ⊆ X }, or equivalently s(X) = sup { |D| : D ⊆ X and D with the subspace topology is discrete }.
The tightness t(x, X) of a topological space X at a point x ∈ X is the smallest cardinal number κ such that, whenever x ∈ cl(Y) for some subset Y of X, there exists a subset Z of Y, with |Z |
|
https://en.wikipedia.org/wiki/Pattern%20%28disambiguation%29
|
A pattern is an original object used to make copies, or a set of repeating objects in a decorative design and in other disciplines.
Pattern, patterns, or patterning may also refer to:
Mathematics, science, and technology
Computing
Software design pattern, a standard form for a solution to common problems in software design.
Architectural pattern, for software architecture
Interaction design pattern, used in interaction design / human-computer interaction
Pattern recognition, in machine learning
In machine learning, a non-null finite sequence of constant and variable symbols
Regular expression, often called a pattern
Other
Airfield traffic pattern, the traffic flow immediately surrounding a runway
Design pattern, a standard form for a solution to common problems in design
Pattern book, a book of architectural designs
Pattern (architecture), a standard form (pattern language) for a solution to common problems in architecture
Software design pattern (see above)
Pattern formation, the processes and mechanisms by which patterns such as the stripes of animals form; also, a science dealing with outcomes of self-organisation
Pattern language, a structured method of describing good design practices
Pattern theory, a mathematical formalism to describe knowledge of the world as patterns
Patterns in nature, the visible regularities of form found in nature and explained by science
Pedagogical patterns
In ethnomethodology, a (generally non-rigid) routine
Manufacturing
Multiple patterning, a class of technologies for manufacturing integrated circuits
Pattern (casting), a replica of the object to be cast
Pattern coin, a coin struck to test a new design, alloy, or method of manufacture
Pattern (sewing), the original garment, or a paper template, from which other garments are copied and produced
Fiction
Patterns (Kraft Television Theatre), a 1955 live television drama written by Rod Serling
Patterns (film), a 1956 film based on the TV show
Patterns, a 1989 novel by Pat Cadigan
Music
Patterns, a 1975 album by Kiki Dee
Patterns (album), by Bobby Hutcherson, released in 1980
Patterns (EP), an EP by Repeater
Patterns, an alternative title of the Modern Jazz Quartet's album Music from Odds Against Tomorrow
"Patterns" (Paul Simon song)
"Patterns", a polka song by "Weird Al" Yankovic
Patterns (Small Faces song), 1967
Other uses
Pattern (devotional), in Irish Catholicism, the devotional practices associated with a patron saint
Patterns II, a pencil and paper game designed by Sid Sackson
Patterns (video game), a building game for personal computers
Juggling pattern, a trick performed while juggling
See also
The Pattern (disambiguation)
|
https://en.wikipedia.org/wiki/List%20of%20urban%20areas%20in%20Norway%20by%20population
|
This is a list of urban areas in Norway by population, with population numbers as of 1 January 2017. Towns and cities in Norway are covered in a separate list.
Statistics Norway, the governmental organisation with the task of measuring the Norwegian population, uses the term tettsted (literally "dense place"; meaning urban settlement or urban area), which is defined as a continuous built-up area with a maximum distance of 50 metres between residences unless a greater distance is caused by public areas, cemeteries or similar reasons. The continuously built-up areas in Norway (urban areas) with the highest population are:
See also
List of municipalities of Norway
Metropolitan regions of Norway
List of urban areas in Sweden by population
List of urban areas in Denmark by population
List of urban areas in the Nordic countries
List of urban areas in Finland by population
List of cities and towns in Iceland
Largest metropolitan areas in the Nordic countries
List of metropolitan areas in Sweden
References and notes
Norway
Urban areas
|
https://en.wikipedia.org/wiki/Millennium%20Mathematics%20Project
|
The Millennium Mathematics Project (MMP) was set up within the University of Cambridge in England as a joint project between the Faculties of Mathematics and Education in 1999. The MMP aims to support maths education for pupils of all abilities from ages 5 to 19 and promote the development of mathematical skills and understanding, particularly through enrichment and extension activities beyond the school curriculum, and to enhance the mathematical understanding of the general public. The project was directed by John Barrow from 1999 until September 2020.
Programmes
The MMP includes a range of complementary programmes:
The NRICH website publishes free mathematics education enrichment material for ages 5 to 19. NRICH material focuses on problem-solving, building core mathematical reasoning and strategic thinking skills. In the academic year 2004/5 the website attracted over 1.7 million site visits (more than 49 million hits).
Plus Magazine is a free online maths magazine for age 15+ and the general public. In 2004/5, Plus attracted over 1.3 million website visits (more than 31 million hits). The website won the Webby award in 2001 for the best Science site on the Internet.
The Motivate video-conferencing project links university mathematicians and scientists to primary and secondary schools in areas of the UK from Jersey and Belfast to Glasgow and inner-city London, with international links to Pakistan, South Africa, India and Singapore.
The project has also developed a Hands On Maths Roadshow presenting creative methods of exploring mathematics, and in 2004 took on the running of Simon Singh's Enigma schools workshops, exploring maths through cryptography and codebreaking. Both are taken to primary and secondary schools and public venues such as shopping centres across the UK and Ireland. James Grime is the Enigma Project Officer and gives talks in schools and to the general public about the history and mathematics of code breaking - including the demonstration of a genuine World War II Enigma Machine.
In November 2005, the MMP won the Queen's Anniversary Prize for Higher and Further Education.
References
External links
Physics & Mathematics
1999 establishments in England
Projects established in 1999
Mathematical projects
Distance education institutions based in the United Kingdom
British educational websites
Mathematics education in the United Kingdom
Mathematics websites
Organisations associated with the University of Cambridge
Collaborative projects
Turn of the third millennium
|
https://en.wikipedia.org/wiki/Continuous%20symmetry
|
In mathematics, continuous symmetry is an intuitive idea corresponding to the concept of viewing some symmetries as motions, as opposed to discrete symmetry, e.g. reflection symmetry, which is invariant under a kind of flip from one state to another. However, a discrete symmetry can always be reinterpreted as a subset of some higher-dimensional continuous symmetry, e.g. reflection of a 2-dimensional object in 3-dimensional space can be achieved by continuously rotating that object 180 degrees across a non-parallel plane.
Formalization
The notion of continuous symmetry has largely and successfully been formalised in the mathematical notions of topological group, Lie group and group action. For most practical purposes continuous symmetry is modelled by a group action of a topological group that preserves some structure. Particularly, let f : X → Y be a function, and let G be a group that acts on X; then a subgroup H ⊆ G is a symmetry of f if f(h·x) = f(x) for all h ∈ H and all x ∈ X.
One-parameter subgroups
The simplest motions follow a one-parameter subgroup of a Lie group, such as the Euclidean group of three-dimensional space. For example translation parallel to the x-axis by u units, as u varies, is a one-parameter group of motions. Rotation around the z-axis is also a one-parameter group.
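As a small numerical illustration (the function is chosen for this sketch only), the map f(x, y) = x² + y² is invariant under the one-parameter group of rotations about the origin; the script below checks that f is unchanged along a rotation orbit for several values of the group parameter:

    import math

    def f(x, y):
        return x ** 2 + y ** 2   # invariant under rotations about the origin

    def rotate(x, y, t):
        """Action of the one-parameter rotation group R_t on the plane."""
        return (math.cos(t) * x - math.sin(t) * y,
                math.sin(t) * x + math.cos(t) * y)

    x0, y0 = 1.3, -0.7
    for t in (0.1, 0.5, 2.0, math.pi):
        xr, yr = rotate(x0, y0, t)
        print(t, abs(f(xr, yr) - f(x0, y0)))   # differences are ~0 for every t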
Noether's theorem
Continuous symmetry has a basic role in Noether's theorem in theoretical physics, in the derivation of conservation laws from symmetry principles, specifically for continuous symmetries. The search for continuous symmetries only intensified with the further developments of quantum field theory.
See also
Goldstone's theorem
Infinitesimal transformation
Noether's theorem
Sophus Lie
Motion (geometry)
Circular symmetry
References
Symmetry
Lie groups
Group actions (mathematics)
|
https://en.wikipedia.org/wiki/Otto%20Lehmann%20%28physicist%29
|
Otto Lehmann (13 January 1855 in Konstanz, Germany – 17 June 1922 in Karlsruhe) was a German physicist and "father" of liquid crystal.
Life
Otto was the son of Franz Xavier Lehmann, a mathematics teacher in the Baden-Württemberg school system, with a strong interest in microscopes. Otto learned to experiment and keep records of his findings. Between 1872 and 1877, Lehmann studied natural sciences at the University of Strassburg and obtained the Ph.D. under crystallographer Paul Groth.
Otto used polarizers in a microscope so that he might watch for birefringence appearing in the process of crystallization.
Initially becoming a school teacher for physics, mathematics and chemistry in Mülhausen (Alsace-Lorraine), he started university teaching at the RWTH Aachen University in 1883. In 1889, he succeeded Heinrich Hertz as head of the Institute of Physics in Karlsruhe.
Lehmann received a letter from Friedrich Reinitzer asking for confirmation of some unusual observations. As Dunmur and Sluckin (2011) say
It was Lehmann's jealously guarded and increasingly prestigious microscope, not yet available off the shelf, which had attracted Reinitzer's attention. With Reinitzer's peculiar double-melting liquid, a problem in search of a scientist had met a scientist in search of a problem.
The article "On Flowing Crystals" that Lehmann wrote for Zeitschrift für Physikalische Chemie addresses directly the question of phase of matter involved, and leaves in its wake the science of liquid crystals.
Lehmann was an unsuccessful nominee for a Nobel Prize from 1913 to 1922.
Work
Selbstanfertigung physikalischer Apparate. Leipzig 1885.
Molekularphysik (i.e. Molecular physics). 2 Bde, Leipzig 1888/89.
Die Kristallanalyse (i.e. The Analysis of Crystals). Leipzig 1891.
Elektricität und Licht (i.e. Electricity and Light). Braunschweig 1895.
Flüssige Krystalle (i.e. Liquid Crystals). Leipzig 1904.
Die scheinbar lebenden Krystalle. Eßlingen 1907.
Die wichtigsten Begriffe und Gesetze der Physik. Berlin 1907.
Flüssige Kristalle und ihr scheinbares Leben. Forschungsergebnisse dargestellt in einem Kinofilm. Voss, Leipzig 1921.
References
David Dunmur & Tim Sluckin (2011), Soap, Science, and Flat-screen TVs: a history of liquid crystals, pp 20–7, Oxford University Press .
Michel Mitov (2014), Liquid-Crystal Science from 1888 to 1922: Building a Revolution, in ChemPhysChem, vol. 15, pp 1245–1250.
1855 births
1922 deaths
19th-century German physicists
People from Konstanz
People from the Grand Duchy of Baden
University of Strasbourg alumni
Academic staff of the Karlsruhe Institute of Technology
Academic staff of RWTH Aachen University
20th-century German physicists
|
https://en.wikipedia.org/wiki/Carath%C3%A9odory%E2%80%93Jacobi%E2%80%93Lie%20theorem
|
The Carathéodory–Jacobi–Lie theorem is a theorem in symplectic geometry which generalizes Darboux's theorem.
Statement
Let M be a 2n-dimensional symplectic manifold with symplectic form ω. For p ∈ M and r ≤ n, let f1, f2, ..., fr be smooth functions defined on an open neighborhood V of p whose differentials are linearly independent at each point, or equivalently
df1(p) ∧ ... ∧ dfr(p) ≠ 0,
where {fi, fj} = 0. (In other words, they are pairwise in involution.) Here {–,–} is the Poisson bracket. Then there are functions fr+1, ..., fn, g1, g2, ..., gn defined on an open neighborhood U ⊂ V of p such that (fi, gi) is a symplectic chart of M, i.e., ω is expressed on U as
ω = Σi dfi ∧ dgi.
Applications
As a direct application we have the following. Given a Hamiltonian system (M, ω, H), where M is a symplectic manifold with symplectic form ω and H is the Hamiltonian function, around every point where dH ≠ 0 there is a symplectic chart such that one of its coordinates is H.
References
Symplectic geometry
Theorems in differential geometry
|
https://en.wikipedia.org/wiki/Nilmanifold
|
In mathematics, a nilmanifold is a differentiable manifold which has a transitive nilpotent group of diffeomorphisms acting on it. As such, a nilmanifold is an example of a homogeneous space and is diffeomorphic to the quotient space N/H, the quotient of a nilpotent Lie group N modulo a closed subgroup H. This notion was introduced by Anatoly Mal'cev in 1951.
In the Riemannian category, there is also a good notion of a nilmanifold. A Riemannian manifold is called a homogeneous nilmanifold if there exist a nilpotent group of isometries acting transitively on it. The requirement that the transitive nilpotent group acts by isometries leads to the following rigid characterization: every homogeneous nilmanifold is isometric to a nilpotent Lie group with left-invariant metric (see Wilson).
Nilmanifolds are important geometric objects and often arise as concrete examples with interesting properties; in Riemannian geometry these spaces always have mixed curvature, almost flat spaces arise as quotients of nilmanifolds, and compact nilmanifolds have been used to construct elementary examples of collapse of Riemannian metrics under the Ricci flow.
In addition to their role in geometry, nilmanifolds are increasingly being seen as having a role in arithmetic combinatorics (see Green–Tao) and ergodic theory (see, e.g., Host–Kra).
Compact nilmanifolds
A compact nilmanifold is a nilmanifold which is compact. One way to construct such spaces is to start with a simply connected nilpotent Lie group N and a discrete subgroup Γ. If the subgroup Γ acts cocompactly (via right multiplication) on N, then the quotient manifold N/Γ will be a compact nilmanifold. As Mal'cev has shown, every compact nilmanifold is obtained this way.
Such a subgroup Γ as above is called a lattice in N. It is well known that a nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Malcev's criterion. Not all nilpotent Lie groups admit lattices; for more details, see also M. S. Raghunathan.
A compact Riemannian nilmanifold is a compact Riemannian manifold which is locally isometric to a nilpotent Lie group with left-invariant metric. These spaces are constructed as follows. Let Γ be a lattice in a simply connected nilpotent Lie group N, as above. Endow N with a left-invariant (Riemannian) metric. Then the subgroup Γ acts by isometries on N via left-multiplication. Thus the quotient Γ\N is a compact space locally isometric to N. Note: this space is naturally diffeomorphic to N/Γ.
Compact nilmanifolds also arise as principal bundles. For example, consider a 2-step nilpotent Lie group N which admits a lattice Γ (see above). Let Z = [N, N] be the commutator subgroup of N. Denote by p the dimension of Z and by q the codimension of Z; i.e. the dimension of N is p+q. It is known (see Raghunathan) that Γ ∩ Z is a lattice in Z. Hence, Z/(Γ ∩ Z) is a p-dimensional compact torus. Since Z is central in N, the group G acts on the compact nilmanifold
|
https://en.wikipedia.org/wiki/Fano%20factor
|
In statistics, the Fano factor, like the coefficient of variation, is a measure of the dispersion of a counting process. It was originally used to measure the Fano noise in ion detectors. It is named after Ugo Fano, an Italian American physicist.
The Fano factor after a time t is defined as
F(t) = σN(t)² / μN(t),
where σN(t) is the standard deviation and μN(t) is the mean number of events of a counting process after some time t. The Fano factor can be viewed as a kind of noise-to-signal ratio; it is a measure of the reliability with which the waiting time random variable can be estimated after several random events.
For a Poisson counting process, the variance in the count equals the mean count, so F = 1.
Definition
For a counting process N(t), the Fano factor after a time t is defined as
F(t) = Var(N(t)) / E[N(t)].
Sometimes, the long term limit is also termed the Fano factor,
F = lim(t→∞) F(t).
For a renewal process with holding times distributed similar to a random variable W, we have that
lim(t→∞) F(t) = Var(W) / E[W]².
Since the right-hand side is equal to the square of the coefficient of variation of W, the right-hand side of this equation is sometimes referred to as the Fano factor as well.
Interpretation
When considered as the dispersion of the number of events, the Fano factor roughly corresponds to the width of the peak of the distribution of N(t). As such, the Fano factor is often interpreted as the unpredictability of the underlying process.
Example: Constant Random Variable
When the holding times are constant, then F = 0. As such, if F ≈ 0 then we interpret the renewal process as being very predictable.
Example: Poisson Counting Process
When the likelihood of an event occurring in any time interval is equal for all time, then the holding times must be exponentially distributed, giving a Poisson counting process, for which F(t) = 1.
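A small simulation (assuming NumPy; the parameters are arbitrary) illustrates both examples by estimating F(T) for a renewal counting process, once with exponential holding times (the Poisson case, F ≈ 1) and once with constant holding times (F ≈ 0):

    import numpy as np

    rng = np.random.default_rng(0)

    def fano(holding_time_sampler, T=100.0, trials=2000):
        """Estimate F(T) = Var[N(T)] / E[N(T)] for a renewal counting process."""
        counts = []
        for _ in range(trials):
            t, n = 0.0, 0
            while True:
                t += holding_time_sampler()
                if t > T:
                    break
                n += 1
            counts.append(n)
        counts = np.array(counts)
        return counts.var() / counts.mean()

    print("exponential holding times (Poisson):", fano(lambda: rng.exponential(1.0)))  # ~1
    print("constant holding times:             ", fano(lambda: 1.0))                   # ~0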
Use in particle detection
In particle detectors, the Fano factor results from the energy loss in a collision not being purely statistical. The process giving rise to each individual charge carrier is not independent as the number of ways an atom may be ionized is limited by the discrete electron shells. The net result is a better energy resolution than predicted by purely statistical considerations. For example, if w is the average energy for a particle to produce a charge carrier in a detector, then the relative FWHM resolution for measuring the particle energy E is:
R = FWHM/E = 2.35 √(F w / E),
where the factor of 2.35 relates the standard deviation to the FWHM.
The Fano factor is material specific. Some theoretical values are:
Si: 0.115 (note discrepancy to experimental value)
Ge: 0.13
GaAs: 0.12
Diamond: 0.08
Measuring the Fano factor is difficult because many factors contribute to the resolution, but some experimental values are:
Si: 0.128 ± 0.001 (at 5.9 keV) / 0.159 ± 0.002 (at 122 keV)
Ar (gas): 0.20 ± 0.01/0.02
Xe (gas): 0.13 to 0.29
CZT: 0.089 ± 0.005
Use in neuroscience
The Fano factor is used in neuroscience to describe variability in neural spiking. In this context, the events are the
|
https://en.wikipedia.org/wiki/Leibniz%E2%80%93Newton%20calculus%20controversy
|
In the history of calculus, the calculus controversy (German: Prioritätsstreit, "priority dispute") was an argument between the mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first invented calculus. The question was a major intellectual controversy, which began simmering in 1699 and broke out in full force in 1711. Leibniz had published his work first, but Newton's supporters accused Leibniz of plagiarizing Newton's unpublished ideas. Leibniz died in disfavour in 1716 after his patron, the Elector Georg Ludwig of Hanover, became King George I of Great Britain in 1714. The modern consensus is that the two men developed their ideas independently.
Newton said he had begun working on a form of calculus (which he called "the method of fluxions and fluents") in 1666, at the age of 23, but did not publish it except as a minor annotation in the back of one of his publications decades later (a relevant Newton manuscript of October 1666 is now published among his mathematical papers). Gottfried Leibniz began working on his variant of calculus in 1674, and in 1684 published his first paper employing it, "Nova Methodus pro Maximis et Minimis". L'Hôpital published a text on Leibniz's calculus in 1696 (in which he recognized that Newton's Principia of 1687 was "nearly all about this calculus"). Meanwhile, Newton, though he explained his (geometrical) form of calculus in Section I of Book I of the Principia of 1687, did not explain his eventual fluxional notation for the calculus in print until 1693 (in part) and 1704 (in full).
The prevailing opinion in the 18th century was against Leibniz (in Britain, not in the German-speaking world). Today the consensus is that Leibniz and Newton independently invented and described the calculus in Europe in the 17th century.
One author has identified the dispute as being about "profoundly different" methods:
On the other hand, other authors have emphasized the equivalences and mutual translatability of the methods: here N Guicciardini (2003) appears to confirm L'Hôpital (1696) (already cited):
Scientific priority in the 17th century
In the 17th century, as at the present time, the question of scientific priority was of great importance to scientists. However, during this period, scientific journals had just begun to appear, and the generally accepted mechanism for fixing priority by publishing information about the discovery had not yet been formed. Among the methods used by scientists were anagrams, sealed envelopes placed in a safe place, correspondence with other scientists, or a private message. A letter to the founder of the French Academy of Sciences, Marin Mersenne for a French scientist, or to the secretary of the Royal Society of London, Henry Oldenburg for English, had practically the status of a published article. The discoverer could 'time-stamp' the moment of his discovery, and prove that he knew of it at the point the letter was sealed, and had not copied it from anything subsequently published. Nevertheless, where an idea was subsequently published in conjunction with its u
|
https://en.wikipedia.org/wiki/Bochner%20integral
|
In mathematics, the Bochner integral, named for Salomon Bochner, extends the definition of Lebesgue integral to functions that take values in a Banach space, as the limit of integrals of simple functions.
Definition
Let be a measure space, and be a Banach space. The Bochner integral of a function is defined in much the same way as the Lebesgue integral. First, define a simple function to be any finite sum of the form
where the are disjoint members of the -algebra the are distinct elements of and χE is the characteristic function of If is finite whenever then the simple function is integrable, and the integral is then defined by
exactly as it is for the ordinary Lebesgue integral.
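For reference, the two display formulas above take the following standard form (a reconstruction, since the formulas did not survive extraction; the notation assumes the sets lie in the σ-algebra and the values in the Banach space B):

```latex
s(x) \;=\; \sum_{i=1}^{n} \chi_{E_i}(x)\, b_i ,
\qquad
\int_X s \, d\mu \;=\; \sum_{i=1}^{n} \mu(E_i)\, b_i .
```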
A measurable function is Bochner integrable if there exists a sequence of integrable simple functions such that
where the integral on the left-hand side is an ordinary Lebesgue integral.
In this case, the Bochner integral is defined by
It can be shown that the sequence is a Cauchy sequence in the Banach space, hence the limit on the right exists; furthermore, the limit is independent of the approximating sequence of simple functions. These remarks show that the integral is well-defined (i.e., independent of any choices). It can be shown that a function is Bochner integrable if and only if it lies in the Bochner space
Properties
Elementary properties
Many of the familiar properties of the Lebesgue integral continue to hold for the Bochner integral. Particularly useful is Bochner's criterion for integrability, which states that if is a measure space, then a Bochner-measurable function is Bochner integrable if and only if
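In symbols (the display formula here did not survive extraction; this is the standard criterion, with the norm of B written explicitly):

```latex
\int_X \|f\|_B \, d\mu \;<\; \infty .
```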
Here, a function is called Bochner measurable if it is equal -almost everywhere to a function taking values in a separable subspace of , and such that the inverse image of every open set in belongs to . Equivalently, is the limit -almost everywhere of a sequence of countably-valued simple functions.
Linear operators
If is a continuous linear operator between Banach spaces and , and is Bochner integrable, then it is relatively straightforward to show that is Bochner integrable and integration and the application of may be interchanged:
for all measurable subsets .
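Written out in the standard form (a reconstruction of the missing display formula):

```latex
T\!\left( \int_E f \, d\mu \right) \;=\; \int_E T f \, d\mu
\qquad \text{for every measurable } E .
```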
A non-trivially stronger form of this result, known as Hille's theorem, also holds for closed operators. If is a closed linear operator between Banach spaces and and both and are Bochner integrable, then
for all measurable subsets .
Dominated convergence theorem
A version of the dominated convergence theorem also holds for the Bochner integral. Specifically, if is a sequence of measurable functions on a complete measure space tending almost everywhere to a limit function , and if
for almost every , and , then
as and
for all .
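In the standard formulation (a reconstruction; g denotes the dominating integrable function assumed in the hypothesis):

```latex
\|f_n(x)\|_B \le g(x) \ \text{for a.e. } x
\;\Longrightarrow\;
\int_E \|f_n - f\|_B \, d\mu \to 0
\quad\text{and}\quad
\int_E f_n \, d\mu \to \int_E f \, d\mu
\qquad \text{for every measurable } E .
```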
If is Bochner integrable, then the inequality
holds for all In particular, the set function
defines a countably-additive -valued vector measure on which is absolutely continuous with respect to .
Radon–Ni
|
https://en.wikipedia.org/wiki/Augmented%20matrix
|
In linear algebra, an augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices.
Given the matrices and , where
the augmented matrix (A|B) is written as
This is useful when solving systems of linear equations.
For a given number of unknowns, the number of solutions to a system of linear equations depends only on the rank of the matrix representing the system and the rank of the corresponding augmented matrix. Specifically, according to the Rouché–Capelli theorem, any system of linear equations is inconsistent (has no solutions) if the rank of the augmented matrix is greater than the rank of the coefficient matrix; if, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has a number of free parameters equal to the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions.
An augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
To find the inverse of a matrix
Let be the square 2×2 matrix
To find the inverse of C we create (C|I) where I is the 2×2 identity matrix. We then reduce the part of (C|I) corresponding to C to the identity matrix using only elementary row operations on (C|I).
the right part of which is the inverse of the original matrix.
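A minimal sketch of the Gauss–Jordan procedure just described, assuming NumPy; the 2×2 matrix used below is hypothetical, since the article's matrix C is not reproduced here.

```python
import numpy as np

def inverse_via_augmented(C):
    """Gauss-Jordan elimination on the augmented matrix (C | I).
    A sketch, not a production routine (only exact-zero pivot detection)."""
    n = C.shape[0]
    M = np.hstack([C.astype(float), np.eye(n)])   # build (C | I)
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))       # partial pivoting
        if np.isclose(M[p, i], 0.0):
            raise ValueError("matrix is singular")
        M[[i, p]] = M[[p, i]]                     # swap pivot row into place
        M[i] /= M[i, i]                           # scale pivot row
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]            # eliminate column i elsewhere
    return M[:, n:]                               # right half is C^{-1}

# Hypothetical 2x2 example:
C = np.array([[2.0, 1.0], [1.0, 3.0]])
print(inverse_via_augmented(C))
print(np.linalg.inv(C))                           # agrees with the library inverse
```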
Existence and number of solutions
Consider the system of equations
The coefficient matrix is
and the augmented matrix is
Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions.
In contrast, consider the system
The coefficient matrix is
and the augmented matrix is
In this example the coefficient matrix has rank 2 while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent rows has made the system of equations inconsistent.
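The rank comparison of the Rouché–Capelli theorem can be checked numerically. A small sketch follows, assuming NumPy; the two systems below are hypothetical stand-ins, since the article's example matrices are not reproduced in this extract.

```python
import numpy as np

# Hypothetical systems: the first is consistent with a free parameter,
# the second is inconsistent.
A1 = np.array([[1.0, 1.0, 2.0], [2.0, 1.0, 3.0]])       # 2 equations, 3 unknowns
b1 = np.array([[3.0], [5.0]])
A2 = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 2.0]])     # 3 equations, 2 unknowns
b2 = np.array([[1.0], [2.0], [0.0]])

for A, b in [(A1, b1), (A2, b2)]:
    aug = np.hstack([A, b])                              # augmented matrix (A|b)
    r_A, r_aug = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
    if r_aug > r_A:
        verdict = "inconsistent (no solutions)"
    elif r_A == A.shape[1]:
        verdict = "unique solution"
    else:
        verdict = f"infinitely many solutions ({A.shape[1] - r_A} free parameters)"
    print(r_A, r_aug, verdict)
```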
Solution of a linear system
As used in linear algebra, an augmented matrix is used to represent the coefficients and the solution vector of each equation set.
For the set of equations
the coefficients and constant terms give the matrices
and hence give the augmented matrix
Note that the rank of the coefficient matrix, which is 3, equals the rank of the augmented matrix, so at least one solution exists; and since this rank equals the number of unknowns, there is exactly one solution.
To obtain the solution, row operations can be performed on the augmented matrix to obtain the identity matrix on the left side, yielding
so the solution of the system is .
References
Marvin Marcus and Henryk Minc, A survey of matrix theory and matrix inequal
|
https://en.wikipedia.org/wiki/Aleksandr%20Kurosh
|
Aleksandr Gennadyevich Kurosh (; January 19, 1908 – May 18, 1971) was a Soviet mathematician, known for his work in abstract algebra. He is credited with writing The Theory of Groups, the first modern and high-level text on group theory, published in 1944.
He was born in Yartsevo, in the Dukhovshchinsky Uyezd of the Smolensk Governorate of the Russian Empire and died in Moscow. He received his doctorate from the Moscow State University in 1936 under the direction of Pavel Alexandrov. In 1937 he became a professor there, and from 1949 until his death he held the Chair of Higher Algebra at Moscow State University. In 1938, he was the PhD thesis adviser to his fellow group theory scholar Sergei Chernikov, with whom he would develop important relationships between finite and infinite groups, discover the Kurosh-Chernikov class of groups, and publish several influential papers over the next decades. In all, he had 27 PhD students, including also Vladimir Andrunakievich, Mark Graev, and Anatoly Shirshov.
Selected publications
Teoriya Grupp (Теория групп), 2 vols., Nauk, 1944, 2nd edition 1953.
German translation: Gruppentheorie. 2 vols., 1953, 1956, Akademie Verlag, Berlin, 2nd edition 1970, 1972.
English translation: The Theory of Groups, 2 vols., Chelsea Publishing Company, the Bronx, tr. by K. A. Hirsch, 1950, 2nd edition 1955.
Vorlesungen über Allgemeine Algebra. Verlag Harri Deutsch, Zürich 1964.
Zur Theorie der Kategorien. Deutscher Verlag der Wissenschaften, Berlin 1963.
Kurosch: Zur Zerlegung unendlicher Gruppen. Mathematische Annalen vol. 106, 1932.
Kurosch: Über freie Produkte von Gruppen. Mathematische Annalen vol. 108, 1933.
Kurosch: Die Untergruppen der freien Produkte von beliebigen Gruppen. Mathematische Annalen, vol. 109, 1934.
A. G. Kurosh, S. N. Chernikov, “Solvable and nilpotent groups”, Uspekhi Mat. Nauk, 2:3(19) (1947), 18–59.
A. G. Kurosh, "Curso de Álgebra Superior", Editorial Mir, Moscú 1997, traducción de Emiliano Aparicio Bernardo (in Spanish)
See also
Kurosh subgroup theorem
Kurosh problem
References
External links
1908 births
1971 deaths
20th-century Russian mathematicians
People from Yartsevo
People from Dukhovshchinsky Uyezd
Academic staff of Moscow State University
Academic staff of Saratov State University
Moscow State University alumni
Recipients of the Order of the Red Banner of Labour
Recipients of the USSR State Prize
Algebraists
Number theorists
Topologists
Russian mathematicians
Soviet mathematicians
Burials at Vvedenskoye Cemetery
Russian scientists
|
https://en.wikipedia.org/wiki/Procrustes%20analysis
|
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes () refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off.
In mathematics:
an orthogonal Procrustes problem is a method which can be used to find the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes Superimposition (PS) of an object with respect to another.
a constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is an orthogonal matrix), is a method which can be used to determine the optimal rotation for the PS of an object with respect to another (reflection is not allowed). In some contexts, this method is called the Kabsch algorithm.
When a shape is compared to another, or a set of shapes is compared to an arbitrarily selected reference shape, Procrustes analysis is sometimes further qualified as classical or ordinary, as opposed to generalized Procrustes analysis (GPA), which compares three or more shapes to an optimally determined "mean shape".
Introduction
To compare the shapes of two or more objects, the objects must be first optimally "superimposed". Procrustes superimposition (PS) is performed by optimally translating, rotating and uniformly scaling the objects. In other words, both the placement in space and the size of the objects are freely adjusted. The aim is to obtain a similar placement and size, by minimizing a measure of shape difference called the Procrustes distance between the objects. This is sometimes called full, as opposed to partial PS, in which scaling is not performed (i.e. the size of the objects is preserved). Notice that, after full PS, the objects will exactly coincide if their shape is identical. For instance, with full PS two spheres with different radii will always coincide, because they have exactly the same shape. Conversely, with partial PS they will never coincide. This implies that, by the strict definition of the term shape in geometry, shape analysis should be performed using full PS. A statistical analysis based on partial PS is not a pure shape analysis as it is not only sensitive to shape differences, but also to size differences. Both full and partial PS will never manage to perfectly match two objects with different shape, such as a cube and a sphere, or a right hand and a left hand.
In some cases, both full and partial PS may also include reflection. Reflection allows, for instance, a successful (possibly perfect) superimposition of a right hand to a left hand. Thus, partial PS with reflection enabled preserves size but allows translation, rotation and reflection, while full PS with reflection enabled allows translation, rotation, scaling and reflection.
Optimal translation and scaling are determined with much simpler operations (see below).
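A minimal sketch of full Procrustes superimposition, assuming NumPy: translation by centring, uniform scaling to unit centroid size, and the rotation from the orthogonal Procrustes problem via SVD, with the constrained (det = +1, Kabsch-style) option. The function name and the triangle example are illustrative, not from the article.

```python
import numpy as np

def procrustes_superimpose(X, Y, scaling=True, allow_reflection=False):
    """Superimpose point set Y onto X (both n_points x n_dims) by translation,
    optional uniform scaling, and rotation (optionally reflection)."""
    X0 = X - X.mean(axis=0)                    # optimal translation: centre both sets
    Y0 = Y - Y.mean(axis=0)
    if scaling:                                # full PS: normalise centroid size
        X0 = X0 / np.linalg.norm(X0)
        Y0 = Y0 / np.linalg.norm(Y0)
    U, _, Vt = np.linalg.svd(Y0.T @ X0)        # orthogonal Procrustes problem
    M = U @ Vt                                 # optimal orthogonal map for Y0's rows
    if not allow_reflection and np.linalg.det(M) < 0:
        U[:, -1] *= -1                         # constrained problem: force det(M) = +1
        M = U @ Vt
    Y_fit = Y0 @ M
    return Y_fit, np.linalg.norm(X0 - Y_fit)   # fitted points and a Procrustes distance

# A rotated, scaled, translated copy of a triangle coincides with it after full PS.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
Y = 3.0 * X @ Q.T + np.array([5.0, -2.0])
_, d = procrustes_superimpose(X, Y)
print(round(d, 10))                            # ~0.0: identical shapes coincide
```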
Ordinary Procrustes analysis
Here we just consider ob
|
https://en.wikipedia.org/wiki/Non-negative%20matrix%20factorization
|
Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation, is a group of algorithms in multivariate analysis and linear algebra where a matrix is factorized into (usually) two matrices and , with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically.
NMF finds applications in such fields as astronomy, computer vision, document clustering, missing data imputation, chemometrics, audio signal processing, recommender systems, and bioinformatics.
History
In chemometrics non-negative matrix factorization has a long history under the name "self modeling curve resolution".
In this framework the vectors in the right matrix are continuous curves rather than discrete vectors.
Also early work on non-negative matrix factorizations was performed by a Finnish group of researchers in the 1990s under the name positive matrix factorization.
It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful
algorithms for two types of factorizations.
Background
Let matrix be the product of the matrices and ,
Matrix multiplication can be implemented as computing the column vectors of as linear combinations of the column vectors in using coefficients supplied by columns of . That is, each column of can be computed as follows:
where is the -th column vector of the product matrix and is the -th column vector of the matrix .
When multiplying matrices, the dimensions of the factor matrices may be significantly lower than those of the product matrix and it is this property that forms the basis of NMF. NMF generates factors with significantly reduced dimensions compared to the original matrix. For example, if is an matrix, is an matrix, and is a matrix then can be significantly less than both and .
Here is an example based on a text-mining application:
Let the input matrix (the matrix to be factored) be with 10000 rows and 500 columns where words are in rows and documents are in columns. That is, we have 500 documents indexed by 10000 words. It follows that a column vector in represents a document.
Assume we ask the algorithm to find 10 features in order to generate a features matrix with 10000 rows and 10 columns and a coefficients matrix with 10 rows and 500 columns.
The product of and is a matrix with 10000 rows and 500 columns, the same shape as the input matrix and, if the factorization worked, it is a reasonable approximation to the input matrix .
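A sketch of the text-mining setup described above, using scikit-learn's NMF implementation (scikit-learn is an assumption of this sketch, not something the article prescribes). The term–document matrix here is random, standing in for a real word-count matrix, so only the shapes of the factors are meaningful.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stand-in for the 10000-word x 500-document matrix described above.
rng = np.random.default_rng(0)
V = rng.random((10000, 500))

# Ask for 10 features: W is 10000x10 (word usage per feature),
# H is 10x500 (feature weights per document).
model = NMF(n_components=10, init="nndsvd", max_iter=200, random_state=0)
W = model.fit_transform(V)
H = model.components_

print(W.shape, H.shape)                                  # (10000, 10) (10, 500)
approx = W @ H                                           # same shape as V
print(np.linalg.norm(V - approx) / np.linalg.norm(V))    # relative reconstruction error
```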
From the treatment of matrix multiplication above it follows that each column in the product matrix is a linear combination of the 10 column vecto
|
https://en.wikipedia.org/wiki/John%20Shawe-Taylor
|
John Stewart Shawe-Taylor (born 1953) is Director of the Centre for Computational Statistics and Machine Learning at University College, London (UK). His main research area is statistical learning theory. He has contributed to a number of fields ranging from graph theory through cryptography to statistical learning theory and its applications. However, his main contributions have been in the development of the analysis and subsequent algorithmic definition of principled machine learning algorithms founded in statistical learning theory. This work has helped to drive a fundamental rebirth in the field of machine learning with the introduction of kernel methods and support vector machines, including the mapping of these approaches onto novel domains including work in computer vision, document classification and brain scan analysis. More recently he has worked on interactive learning and reinforcement learning. He has also been instrumental in assembling a series of influential European Networks of Excellence (initially the NeuroCOLT projects and later the PASCAL networks). The scientific coordination of these projects has influenced a generation of researchers and promoted the widespread uptake of machine learning in both science and industry that we are currently witnessing. He has published over 300 papers with over 42000 citations. Two books co-authored with Nello Cristianini have become standard monographs for the study of kernel methods and support vector machines and together have attracted 21000 citations. He is Head of the Computer Science Department at University College London, where he has overseen a significant expansion and witnessed its emergence as the highest ranked Computer Science Department in the UK in the 2014 UK Research Evaluation Framework (REF).
Publications
He has written with Nello Cristianini two books on the theory of support vector machines and kernel methods:
An introduction to support vector machines N.C. and J. S.T. Cambridge University Press, 2000.
Kernel methods for pattern analysis J. S.T and N.C. Cambridge University Press, 2004.
He has published research in neural networks, machine learning, and graph theory.
Schooling
He was educated at Shrewsbury and graduated from the University of Ljubljana, Slovenia.
External links
Professional homepage
1953 births
British computer scientists
Living people
British statisticians
Artificial intelligence researchers
John
University of Ljubljana alumni
|
https://en.wikipedia.org/wiki/Des%20MacHale
|
Desmond MacHale (born 28 January 1946) is Emeritus Professor of Mathematics at University College Cork, Ireland. He is an author and speaker on several subjects, including George Boole, lateral thinking puzzles, and humour. He has published over 80 books, some of which have been translated into languages including Danish, Italian, Norwegian, Spanish, German, Korean, and Japanese.
Biography
Des MacHale was born in Castlebar, County Mayo. He earned his BSc and MSc in mathematical science at University College Galway in 1967 and 1968, and completed his PhD at the University of Keele in 1972 under Hans Liebeck. Since then he has been at University College Cork, where his research has focussed on group and ring theory, especially Boolean rings.
In 1985 MacHale published George Boole: His Life and Work, the first book-length biography of Boole. In 2014, a year ahead of Boole's bicentennial, this was reissued in revised and expanded form as The Life and Work of George Boole: A Prelude to the Digital Age. He is considered the world's leading expert on Boole and in 2018 published another book, New Light on George Boole, co-authored with Yvonne Cohen.
MacHale has also authored books on other subjects, including brainteasers, and he has written more than 30 books of jokes and discussions of humour. His Comic Sections: The Book of Mathematical Jokes, Humour, Wit and Wisdom is a book which combines two of his interests. He has written over a dozen books of lateral thinking problems with author Paul Sloane, and many of these problems are featured on the Futility Closet website. He has written several books about the 1952 American film, The Quiet Man. He has spoken at schools, on radio, and on television on the subjects of mathematics, humour, and puzzles.
MacHale designed the logo of the Irish Mathematical Society.
He is a longtime opponent of smoking, and for decades has played a role within the Irish Association of Non-Smokers. He appeared on RTÉ's The Late Late Show as early as the 1980s in an attempt to educate the public about the dangers of smoking.
His son is the actor Dominic MacHale, known for The Young Offenders.
Superbrain competition
From 1984 to 2007, MacHale ran the Superbrain Competition at University College Cork, including setting the questions and grading the papers. This annual competitive mathematics exam, now run by a committee of mathematics faculty, is open to UCC undergraduate and master's level students, and consists of 10 questions to be done in 3 hours. A book of the questions (with solutions) from 1984 to 2008 was published in 2011 as The First Twenty-Five Years of the Superbrain by Diarmuid Early & Des MacHale.
Selected books
Books by Des MacHale include:
2023 Sip & Solve Two-Minute Brainteasers (with Paul Sloane), Puzzlewright,
2022 Comic Sections Plus, Logic Press,
2020 The Poetry of George Boole, Logic Press,
2020 Number and Letter Puzzles, Logic Press,
2018 New Light on George Boole (with Yvonne Cohen), Cork Univ
|
https://en.wikipedia.org/wiki/Splitting%20circle%20method
|
In mathematics, the splitting circle method is a numerical algorithm for the numerical factorization of a polynomial and, ultimately, for finding its complex roots. It was introduced by Arnold Schönhage in his 1982 paper The fundamental theorem of algebra in terms of computational complexity (Technical report, Mathematisches Institut der Universität Tübingen). A revised algorithm was presented by Victor Pan in 1998. An implementation was provided by Xavier Gourdon in 1996 for the Magma and PARI/GP computer algebra systems.
General description
The fundamental idea of the splitting circle method is to use methods of complex analysis, more precisely the residue theorem, to construct factors of polynomials. With those methods it is possible to construct a factor of a given polynomial for any region of the complex plane with a piecewise smooth boundary. Most of those factors will be trivial, that is constant polynomials. Only regions that contain roots of p(x) result in nontrivial factors that have exactly those roots of p(x) as their own roots, preserving multiplicity.
In the numerical realization of this method one uses disks D(c,r) (center c, radius r) in the complex plane as regions. The boundary circle of a disk splits the set of roots of p(x) in two parts, hence the name of the method. To a given disk one computes approximate factors following the analytical theory and refines them using Newton's method. To avoid numerical instability one has to demand that all roots are well separated from the boundary circle of the disk. So to obtain a good splitting circle it should be embedded in a root free annulus A(c,r,R) (center c, inner radius r, outer radius R) with a large relative width R/r.
Repeating this process for the factors found, one finally arrives at an approximative factorization of the polynomial at a required precision. The factors are either linear polynomials representing well isolated zeros or higher order polynomials representing clusters of zeros.
Details of the analytical construction
Newton's identities are a bijective relation between the elementary symmetric polynomials of a tuple of complex numbers and its sums of powers. Therefore, it is possible to compute the coefficients of a polynomial
(or of a factor of it) from the sums of powers of its zeros
,
by solving the triangular system that is obtained by comparing the powers of u in the following identity of formal power series
If G is a domain with piecewise smooth boundary C and if the zeros of p(x) are pairwise distinct and not on the boundary C, then from the residue theorem one gets
The identity of the left to the right side of this equation also holds for zeros with multiplicities. By using the Newton identities one is able to compute from those sums of powers the factor
of p(x) corresponding to the zeros of p(x) inside G. By polynomial division one also obtains the second factor g(x) in p(x) = f(x)g(x).
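A rough numerical sketch of this construction, assuming NumPy: the contour integrals giving the power sums are approximated on the splitting circle, Newton's identities convert the power sums into coefficients of the inner factor, and polynomial division yields the complementary factor. The polynomial, circle, and discretisation are hypothetical choices for illustration; a real implementation would also need the Newton-iteration refinement and the root-free-annulus checks described above.

```python
import numpy as np

def power_sums_inside(p, center, radius, kmax, n_points=4096):
    """Approximate s_k = sum of alpha^k over roots alpha of p inside the circle,
    via s_k = (1/(2*pi*i)) * contour integral of x^k * p'(x)/p(x) dx, discretised
    with the trapezoidal rule on equally spaced points of the circle."""
    dp = np.polyder(p)
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    x = center + radius * np.exp(1j * theta)
    dx = 1j * radius * np.exp(1j * theta)                 # dx/dtheta
    weights = np.polyval(dp, x) / np.polyval(p, x) * dx
    powers = x[None, :] ** np.arange(kmax + 1)[:, None]   # shape (kmax+1, n_points)
    return (powers * weights).mean(axis=1) / 1j           # (2*pi*mean)/(2*pi*i)

def factor_from_power_sums(s):
    """Newton's identities: power sums -> coefficients of the monic inner factor."""
    m = int(round(s[0].real))                             # s_0 = number of roots inside
    c = np.zeros(m + 1, dtype=complex)
    c[0] = 1.0
    for k in range(1, m + 1):
        c[k] = -(s[k] + sum(c[j] * s[k - j] for j in range(1, k))) / k
    return c                                              # highest power first

# Hypothetical example: p(x) = (x - 0.5)(x + 0.5)(x - 3); the circle |x| = 1.2
# separates the two small roots from the root at 3.
p = np.array([1.0, -3.0, -0.25, 0.75])
s = power_sums_inside(p, center=0.0, radius=1.2, kmax=len(p) - 1)
f = factor_from_power_sums(s)
print(np.round(f.real, 6))                                # ~ [1, 0, -0.25], i.e. x^2 - 1/4
g, remainder = np.polydiv(p, f.real)                      # second factor g(x), p = f*g
print(np.round(g, 6), np.round(remainder, 6))
```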
The commonly used regions are
|
https://en.wikipedia.org/wiki/Recursion%20theorem
|
Recursion theorem can refer to:
The recursion theorem in set theory
Kleene's recursion theorem, also called the fixed point theorem, in computability theory
The master theorem (analysis of algorithms), about the complexity of divide-and-conquer algorithms
|
https://en.wikipedia.org/wiki/DIMACS
|
The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is a collaboration between Rutgers University, Princeton University, and the research firms AT&T, Bell Labs, Applied Communication Sciences, and NEC. It was founded in 1989 with money from the National Science Foundation. Its offices are located on the Rutgers campus, and 250 members from the six institutions form its permanent membership.
DIMACS is devoted to both theoretical development and practical applications of discrete mathematics and theoretical computer science. It engages in a wide variety of evangelism including encouraging, inspiring, and facilitating researchers in these subject areas, and sponsoring conferences and workshops.
Fundamental research in discrete mathematics has applications in diverse fields including Cryptology, Engineering, Networking, and Management Decision Support.
Past directors have included Fred S. Roberts, Daniel Gorenstein, András Hajnal, and Rebecca N. Wright.
The DIMACS Challenges
DIMACS sponsors implementation challenges to determine practical algorithm performance on problems of interest. There have been twelve DIMACS challenges so far.
1990-1991: Network Flows and Matching
1992-1992: NP-Hard Problems: Max Clique, Graph Coloring, and SAT
1993-1994: Parallel Algorithms for Combinatorial Problems
1994-1995: Computational Biology: Fragment Assembly and Genome Rearrangement
1995-1996: Priority Queues, Dictionaries, and Multidimensional Point Sets
1998-1998: Near Neighbor Searches
2000-2000: Semidefinite and Related Optimization Problems
2001-2001: The Traveling Salesman Problem
2005-2005: The Shortest Path Problem
2011-2012: Graph Partitioning and Graph Clustering
2013-2014: Steiner Tree Problems
2020-2021: Vehicle Routing Problems
References
External links
DIMACS Website
1989 establishments in New Jersey
Combinatorics
Discrete mathematics
Rutgers University
Mathematical institutes
|
https://en.wikipedia.org/wiki/List%20of%20mathematics%20journals
|
This is a list of scientific journals covering mathematics with existing Wikipedia articles on them.
Alphabetic list of titles
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
Z
See also
arXiv, an electronic preprint archive
List of computer science journals
List of mathematical physics journals
List of probability journals
List of scientific journals
List of statistics journals
References
External links
A list of formerly only-printed journals, now available in digital form, with links
An essentially complete list of mathematical journals, with abbreviations used by Mathematical Reviews
Lists of academic journals
|
https://en.wikipedia.org/wiki/Empirical%20process
|
In probability theory, an empirical process is a stochastic process that describes the proportion of objects in a system in a given state.
For a process in a discrete state space a population continuous time Markov chain or Markov population model is a process which counts the number of objects in a given state (without rescaling).
In mean field theory, limit theorems (as the number of objects becomes large) are considered and generalise the central limit theorem for empirical measures. Applications of the theory of empirical processes arise in non-parametric statistics.
Definition
For X1, X2, ... Xn independent and identically-distributed random variables in R with common cumulative distribution function F(x), the empirical distribution function is defined by
where IC is the indicator function of the set C.
For every (fixed) x, Fn(x) is a sequence of random variables which converge to F(x) almost surely by the strong law of large numbers. That is, Fn converges to F pointwise. Glivenko and Cantelli strengthened this result by proving uniform convergence of Fn to F by the Glivenko–Cantelli theorem.
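A minimal illustration of this convergence, assuming NumPy and SciPy (neither is mentioned in the article): the sup-distance between the empirical and true CDFs of a standard-normal sample, approximated on a fine grid, shrinks as the sample size grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def ecdf(sample):
    xs = np.sort(sample)
    # F_n(x) = fraction of observations <= x
    return lambda x: np.searchsorted(xs, x, side="right") / len(xs)

grid = np.linspace(-4.0, 4.0, 2001)          # sup-distance approximated on a grid
for n in (100, 1000, 10000):
    F_n = ecdf(rng.standard_normal(n))
    print(n, round(float(np.max(np.abs(F_n(grid) - norm.cdf(grid)))), 4))
```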
A centered and scaled version of the empirical measure is the signed measure
It induces a map on measurable functions f given by
By the central limit theorem, converges in distribution to a normal random variable N(0, P(A)(1 − P(A))) for fixed measurable set A. Similarly, for a fixed function f, converges in distribution to a normal random variable , provided that and exist.
Definition
is called an empirical process indexed by , a collection of measurable subsets of S.
is called an empirical process indexed by , a collection of measurable functions from S to .
A significant result in the area of empirical processes is Donsker's theorem. It has led to a study of Donsker classes: sets of functions with the useful property that empirical processes indexed by these classes converge weakly to a certain Gaussian process. While it can be shown that Donsker classes are Glivenko–Cantelli classes, the converse is not true in general.
Example
As an example, consider empirical distribution functions. For real-valued iid random variables X1, X2, ..., Xn they are given by
In this case, empirical processes are indexed by a class of sets. It has been shown that this class is a Donsker class, in particular,
converges weakly in to a Brownian bridge B(F(x)) .
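A small simulation illustrating the one-dimensional marginal of this limit, assuming NumPy and SciPy: at a fixed x, the scaled deviation sqrt(n)(F_n(x) − F(x)) has variance close to F(x)(1 − F(x)), which is the variance of the Brownian bridge B(F(x)). The sample sizes and the standard-normal choice are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, reps, x = 2000, 5000, 0.3
F_x = norm.cdf(x)                                       # true CDF at the fixed point x
samples = rng.standard_normal((reps, n))
G_n = np.sqrt(n) * ((samples <= x).mean(axis=1) - F_x)  # scaled empirical deviations
print(round(G_n.var(), 4), round(F_x * (1 - F_x), 4))   # empirical vs theoretical variance
```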
See also
Khmaladze transformation
Weak convergence of measures
Glivenko–Cantelli theorem
References
Further reading
External links
Empirical Processes: Theory and Applications, by David Pollard, a textbook available online.
Introduction to Empirical Processes and Semiparametric Inference, by Michael Kosorok, another textbook available online.
Nonparametric statistics
|
https://en.wikipedia.org/wiki/Derby%20Owners%20Club
|
is a horse racing arcade game developed by Sega AM3 and published by Sega. Players are put into the roles of breeder, trainer, jockey, and owner of a thoroughbred racehorse. Statistics are saved on an IC card that can be put into any machine. The first version was released in Japan in 1999 and ran on the NAOMI arcade board.
Gameplay
First-time players create a new horse: the parents are chosen, along with the horse's name and the silks to wear. There are two types of horses, front runners and stretch runners, which have different strengths. Front runners are quick out of the gate and strong around turns. Stretch runners are slow at first, but can overtake the rest through sprinting.
The horse can be trained in 10 exercises and then given a meal, chosen from options such as vegetable salad, camembert cheese or Chinese herbal dumplings. Depending on the horse, the meal will have different effects. It is important to maintain a good relationship with the horse, as this shows in its behaviour. In the races, whip and hold buttons are used to control the horse's speed, based on its condition. Whipping the horse too much will make it lose confidence in the player, and using the whip at the right time, depending on the horse type, is also critical. Based on the horse's performance in the race, the player receives virtual prize money and either encourages or derides the horse. Handicap races are available for out-of-depth horses; a horse's ability is rated by the prize money it has earned. Different tracks, such as dirt tracks, suit different horses as well.
After the race is done, the game asks for another coin to be inserted into the cabinet in order to continue; alternatively, the player can retire the horse in order to breed a second-generation foal. A horse's breeding ability is limited, however.
Development
Horse racing games typically had an image of being difficult to play, as some prior knowledge was required. With Derby Owners Club, however, the goal was to create a game that was easy to understand and play, in order to appeal to a wide range of people at the arcades. A pet-simulator aspect was added so that the player could easily get attached to the horse and the game. The cabinet set-up, with multiple terminals connected to a big screen, was also new, as was the ability to store the player's progress on the card.
Arcade horse racing games usually focus on training and races; the section for feeding and raising the horse was new to arcades. This also strengthened the community aspect that was crucial to Derby Owners Club.
When the game was ported to PC as Derby Owners Club Online, retaining the community aspect was the most difficult part. To do so, a town was added where players interact with others through an avatar and take part in various other activities, similar to an MMORPG.
Reception
The game changed the Japanese arcade market. It charged 300 to 500 yen for a game, but the player could play for over 5 to 10 m
|
https://en.wikipedia.org/wiki/Primality%20certificate
|
In mathematics and computer science, a primality certificate or primality proof is a succinct, formal proof that a number is prime. Primality certificates allow the primality of a number to be rapidly checked without having to run an expensive or unreliable primality test. "Succinct" usually means that the proof should be at most polynomially larger than the number of digits in the number itself (for example, if the number has b bits, the proof might contain roughly b2 bits).
Primality certificates lead directly to proofs that problems such as primality testing and the complement of integer factorization lie in NP, the class of problems verifiable in polynomial time given a solution. These problems already trivially lie in co-NP. This was the first strong evidence that these problems are not NP-complete, since if they were, it would imply that NP is a subset of co-NP, a result widely believed to be false; in fact, this was the first demonstration of a problem in NP ∩ co-NP not known, at the time, to be in P.
Producing certificates for the complement problem, to establish that a number is composite, is straightforward: it suffices to give a nontrivial divisor. Standard probabilistic primality tests such as the Baillie–PSW primality test, the Fermat primality test, and the Miller–Rabin primality test also produce compositeness certificates in the event where the input is composite, but do not produce certificates for prime inputs.
Pratt certificates
The concept of primality certificates was historically introduced by the Pratt certificate, conceived in 1975 by Vaughan Pratt, who described its structure and proved it to have polynomial size and to be verifiable in polynomial time. It is based on the Lucas primality test, which is essentially the converse of Fermat's little theorem with an added condition to make it true:
Lucas' theorem: Suppose we have an integer a such that:
an − 1 ≡ 1 (mod n),
for every prime factor q of n − 1, it is not the case that a(n − 1)/q ≡ 1 (mod n).
Then n is prime.
Given such an a (called a witness) and the prime factorization of n − 1, it's simple to verify the above conditions quickly: we only need to do a linear number of modular exponentiations, since every integer has fewer prime factors than bits, and each of these can be done by exponentiation by squaring in O(log n) multiplications (see big-O notation). Even with grade-school integer multiplication, this is only O((log n)4) time; using the multiplication algorithm with best-known asymptotic running time, the Schönhage–Strassen algorithm, we can lower this to O((log n)3(log log n)(log log log n)) time, or using soft-O notation Õ((log n)3).
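A minimal verifier for the two Lucas conditions, assuming the claimed factor list of n − 1 is supplied alongside the witness (in a full Pratt certificate each listed factor would itself carry a recursive certificate; this sketch trusts the list, which is exactly the weakness illustrated by the example below). The numeric examples are chosen for illustration.

```python
def verify_lucas_witness(n, a, factors_of_n_minus_1):
    """Check Lucas' two conditions for a claimed witness a, given a claimed
    list of prime factors of n - 1."""
    if pow(a, n - 1, n) != 1:                        # condition 1: a^(n-1) = 1 (mod n)
        return False
    for q in set(factors_of_n_minus_1):
        if pow(a, (n - 1) // q, n) == 1:             # condition 2 for each claimed prime q
            return False
    return True

# 101 is prime: 100 = 2^2 * 5^2 and a = 2 is a witness.
print(verify_lucas_witness(101, 2, [2, 2, 5, 5]))    # True
# The bogus certificate for 85 discussed below (a = 4, "factorization" 6 * 14) also
# passes, because 6 and 14 are not prime, hence the need to certify the factors too.
print(verify_lucas_witness(85, 4, [6, 14]))          # True, yet 85 = 5 * 17 is composite
```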
However, it is possible to trick a verifier into accepting a composite number by giving it a "prime factorization" of n − 1 that includes composite numbers. For example, suppose we claim that n = 85 is prime, supplying a = 4 and n − 1 = 6 × 14 as the "prime factorization". Then (using q = 6 and q = 14):
4 is co
|
https://en.wikipedia.org/wiki/Wythoff%20symbol
|
In geometry, the Wythoff symbol is a notation representing a Wythoff construction of a uniform polyhedron or plane tiling within a Schwarz triangle. It was first used by Coxeter, Longuet-Higgins and Miller in their enumeration of the uniform polyhedra. Later the Coxeter diagram was developed to mark uniform polytopes and honeycombs in n-dimensional space within a fundamental simplex.
A Wythoff symbol consists of three numbers and a vertical bar. It represents one uniform polyhedron or tiling, although the same tiling/polyhedron can have different Wythoff symbols from different symmetry generators. For example, the regular cube can be represented by 3 | 2 4 with Oh symmetry, and 2 4 | 2 as a square prism with 2 colors and D4h symmetry, as well as 2 2 2 | with 3 colors and D2h symmetry.
With a slight extension, Wythoff's symbol can be applied to all uniform polyhedra. However, the construction methods do not lead to all uniform tilings in Euclidean or hyperbolic space.
Description
The Wythoff construction begins by choosing a generator point on a fundamental triangle. This point must be chosen at equal distance from all edges that it does not lie on, and a perpendicular line is then dropped from it to each such edge.
The three numbers in Wythoff's symbol, p, q, and r, represent the corners of the Schwarz triangle used in the construction, which are , , and radians respectively. The triangle is also represented with the same numbers, written (p q r). The vertical bar in the symbol specifies a categorical position of the generator point within the fundamental triangle according to the following:
indicates that the generator lies on the corner p,
indicates that the generator lies on the edge between p and q,
indicates that the generator lies in the interior of the triangle.
In this notation the mirrors are labeled by the reflection-order of the opposite vertex. The p, q, r values are listed before the bar if the corresponding mirror is active.
A special use is the symbol which is designated for the case where all mirrors are active, but odd-numbered reflected images are ignored. The resulting figure has rotational symmetry only.
The generator point can either be on or off each mirror, activated or not. This distinction creates 8 (23) possible forms, but the one where the generator point is on all the mirrors is impossible. The symbol that would normally refer to that is reused for the snub tilings.
The Wythoff symbol is functionally similar to the more general Coxeter-Dynkin diagram, in which each node represents a mirror and the arcs between them – marked with numbers – the angles between the mirrors. (An arc representing a right angle is omitted.) A node is circled if the generator point is not on the mirror.
Example spherical, euclidean and hyperbolic tilings on right triangles
The fundamental triangles are drawn in alternating colors as mirror images. The sequence of triangles (p 3 2) change from spherical (p = 3, 4, 5), to Euclid
|
https://en.wikipedia.org/wiki/Formal%20science
|
Formal science is a branch of science studying disciplines concerned with abstract structures described by formal systems, such as logic, mathematics, statistics, theoretical computer science, artificial intelligence, information theory, game theory, systems theory, decision theory, and theoretical linguistics. Whereas the natural sciences and social sciences seek to characterize physical systems and social systems, respectively, using empirical methods, the formal sciences use language tools concerned with characterizing abstract structures described by formal systems. The formal sciences aid the natural and social sciences by providing information about the structures used to describe the physical world, and what inferences may be made about them.
Branches
Principal branches of formal sciences are:
logic (also a branch of philosophy);
mathematics;
and computer science.
Differences from other sciences
Because of their non-empirical nature, formal sciences are construed by outlining a set of axioms and definitions from which other statements (theorems) are deduced. For this reason, in Rudolf Carnap's logical-positivist conception of the epistemology of science, theories belonging to formal sciences are understood to contain no synthetic statements, instead containing only analytic statements.
See also
Philosophy
Science
Rationalism
Abstract structure
Abstraction in mathematics
Abstraction in computer science
Formalism (philosophy of mathematics)
Formal grammar
Formal language
Formal method
Formal system
Form and content
Mathematical model
Mathematics Subject Classification
Semiotics
Theory of forms
References
Further reading
Mario Bunge (1985). Philosophy of Science and Technology. Springer.
Mario Bunge (1998). Philosophy of Science. Rev. ed. of: Scientific research. Berlin, New York: Springer-Verlag, 1967.
C. West Churchman (1940). Elements of Logic and Formal Science, J.B. Lippincott Co., New York.
James Franklin (1994). The formal sciences discover the philosophers' stone. In: Studies in History and Philosophy of Science. Vol. 25, No. 4, pp. 513–533, 1994
Stephen Leacock (1906). Elements of Political Science. Houghton, Mifflin Co, 417 pp.
Bernt P. Stigum (1990). Toward a Formal Science of Economics. MIT Press
Marcus Tomalin (2006), Linguistics and the Formal Sciences. Cambridge University Press
William L. Twining (1997). Law in Context: Enlarging a Discipline. 365 pp.
External links
Interdisciplinary conferences — Foundations of the Formal Sciences
Branches of science
|