source | text
---|---
https://en.wikipedia.org/wiki/Pierre-Louis%20Lions
|
Pierre-Louis Lions (; born 11 August 1956) is a French mathematician. He is known for a number of contributions to the fields of partial differential equations and the calculus of variations. He was a recipient of the 1994 Fields Medal and the 1991 Prize of the Philip Morris tobacco and cigarette company.
Biography
Lions entered the École normale supérieure in 1975, and received his doctorate from the University of Pierre and Marie Curie in 1979. He holds the position of Professor of Partial Differential Equations and their Applications at the Collège de France in Paris, as well as a position at the École Polytechnique. Since 2014, he has also been a visiting professor at the University of Chicago.
In 1979, Lions married Lila Laurenti, with whom he has one son. Lions' parents were Andrée Olivier and the renowned mathematician Jacques-Louis Lions, at the time a professor at the University of Nancy, and from 1991 through 1994 the President of the International Mathematical Union.
Awards and honors
In 1994, while working at the Paris Dauphine University, Lions received the International Mathematical Union's prestigious Fields Medal. He was cited for his contributions to viscosity solutions, the Boltzmann equation, and the calculus of variations. He has also received the French Academy of Science's Prix Paul Doistau–Émile Blutet (in 1986) and Ampère Prize (in 1992).
He was an invited professor at the Conservatoire national des arts et métiers (2000). He is a doctor honoris causa of Heriot-Watt University (Edinburgh), EPFL (2010), Narvik University College (2014), and of the City University of Hong Kong, and is listed as an ISI highly cited researcher.
Mathematical work
Operator theory
Lions' earliest work dealt with the functional analysis of Hilbert spaces. His first published article, in 1977, was a contribution to the vast literature on convergence of certain iterative algorithms to fixed points of a given nonexpansive self-map of a closed convex subset of Hilbert space. In collaboration with his thesis advisor Haïm Brézis, Lions gave new results about maximal monotone operators in Hilbert space, proving one of the first convergence results for Bernard Martinet and R. Tyrrell Rockafellar's proximal point algorithm. In the time since, there have been a large number of modifications and improvements of such results.
With Bertrand Mercier, Lions proposed a "forward-backward splitting algorithm" for finding a zero of the sum of two maximal monotone operators. Their algorithm can be viewed as an abstract version of the well-known Douglas−Rachford and Peaceman−Rachford numerical algorithms for computation of solutions to parabolic partial differential equations. The Lions−Mercier algorithms and their proof of convergence have been particularly influential in the literature on operator theory and its applications to numerical analysis. A similar method was studied at the same time by Gregory Passty.
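As an illustration of the splitting idea (not Lions and Mercier's original operator-theoretic formulation), here is a minimal Python sketch of forward–backward splitting specialized to convex optimization: a gradient (forward) step on a smooth term followed by a proximal (backward) step on a nonsmooth term. The problem instance, step size and function names are illustrative choices, not part of the source:

import numpy as np

def forward_backward(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a forward
    # (gradient) step on the smooth term with a backward (proximal) step
    # on the nonsmooth term.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                                   # forward step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # prox of lam*||.||_1
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 40)), rng.normal(size=20)
x = forward_backward(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)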
Calculus of variations
The mathematical study of the
|
https://en.wikipedia.org/wiki/Derive%20%28computer%20algebra%20system%29
|
Derive was a computer algebra system, developed as a successor to muMATH by the Soft Warehouse in Honolulu, Hawaii, now owned by Texas Instruments. Derive was implemented in muLISP, also by Soft Warehouse. The first release was in 1988 for DOS. It was discontinued on June 29, 2007, in favor of the TI-Nspire CAS. The final version is Derive 6.1 for Windows.
Since Derive required comparably little memory, it was suitable for use on older and smaller machines. It was available for the DOS and Windows platforms and was also used in TI pocket calculators.
Books
Jerry Glynn, Exploring Math from Algebra to Calculus with Derive, A Mathematical Assistant, Mathware Inc, 1992
Leon Magiera, General Physics Problem Solving with CAS Derive, Nova Science Pub Inc, 2001
Vladimir Dyakonov, Handbook on the Application System Derive, Moscow (Russia): Phismatlit, 1996, 320 pp.
Vladimir Dyakonov, Computer Algebra System Derive, Moscow (Russia): SOLON-R, 2002, 320 pp.
See also
List of computer algebra systems
External links
Derive Review at scientific-computing.com
Derive Newsletter from the International Derive Users Group
1988 software
Computer algebra systems
Discontinued software
Lisp (programming language) software
Science software for Windows
|
https://en.wikipedia.org/wiki/Random%20minimum%20spanning%20tree
|
In mathematics, a random minimum spanning tree may be formed by assigning random weights from some distribution to the edges of an undirected graph, and then constructing the minimum spanning tree of the graph.
When the given graph is a complete graph on n vertices, and the edge weights have a continuous distribution function whose derivative at zero is D > 0, then the expected weight of its random minimum spanning trees is bounded by a constant, rather than growing as a function of n. More precisely, this constant tends in the limit (as n goes to infinity) to ζ(3)/D, where ζ is the Riemann zeta function and ζ(3) ≈ 1.202 is Apéry's constant. For instance, for edge weights that are uniformly distributed on the unit interval, the derivative is D = 1, and the limit is just ζ(3).
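A quick Monte-Carlo check of the uniform-weight case (so D = 1) can be run with networkx; the graph size and sample count below are arbitrary choices, and the sample mean should hover near ζ(3) ≈ 1.202:

import random
import networkx as nx

def random_mst_weight(n):
    # Complete graph on n vertices with i.i.d. Uniform(0, 1) edge weights;
    # return the total weight of a minimum spanning tree.
    G = nx.complete_graph(n)
    for u, v in G.edges():
        G[u][v]["weight"] = random.random()
    return nx.minimum_spanning_tree(G).size(weight="weight")

samples = [random_mst_weight(200) for _ in range(20)]
print(sum(samples) / len(samples))   # should be close to 1.202...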
In contrast to uniformly random spanning trees of complete graphs, for which the typical diameter is proportional to the square root of the number of vertices, random minimum spanning trees of complete graphs have typical diameter proportional to the cube root.
Random minimum spanning trees of grid graphs may be used for invasion percolation models of liquid flow through a porous medium, and for maze generation.
References
Spanning tree
|
https://en.wikipedia.org/wiki/Aida%20Yasuaki
|
Aida Yasuaki (1747–1817), also known as Aida Ammei, was a Japanese mathematician in the Edo period.
He made significant contributions to the fields of number theory and geometry, and furthered methods for simplifying continued fractions.
Aida created an original symbol for "equal". This was the first appearance of the notation for equal in East Asia.
Selected works
In a statistical overview derived from writings by and about Aida Yasuaki, OCLC/WorldCat lists roughly 50 works in more than 50 publications in 1 language and more than 50 library holdings.
1784 — OCLC 22057343766
1785 — OCLC 22049703851, Counter-arguments with seiyo sampō
1787 — OCLC 22056510030, Counter-arguments with seiyo sampō, new edition
1788 — OCLC 22056510044
1797 — OCLC 22057185824
1801 — OCLC 22057185770
1811 —
See also
Sangaku, the custom of presenting mathematical problems, carved in wood tablets, to the public in shinto shrines
Soroban, a Japanese abacus
Japanese mathematics
Notes
References
Endō Toshisada (1896). Tōkyō.
Restivo, Sal P. (1992). Mathematics in Society and History: Sociological Inquiries. Dordrecht: Kluwer Academic Publishers.
Selin, Helaine (1997). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Dordrecht: Kluwer/Springer.
Shimodaira, Kazuo (1970). "Aida Yasuaki", Dictionary of Scientific Biography. New York: Charles Scribner's Sons.
David Eugene Smith and Yoshio Mikami (1914). A History of Japanese Mathematics. Chicago: Open Court Publishing. An alternate, full-text copy is online at archive.org.
External links
18th-century Japanese mathematicians
19th-century Japanese mathematicians
Number theorists
Geometers
1747 births
1817 deaths
Japanese writers of the Edo period
|
https://en.wikipedia.org/wiki/Projective%20line%20over%20a%20ring
|
In mathematics, the projective line over a ring is an extension of the concept of projective line over a field. Given a ring A with 1, the projective line P(A) over A consists of points identified by projective coordinates. Let U be the group of units of A; pairs (a, b) and (c, d) from A × A are related when there is a u in U such that ua = c and ub = d. This relation is an equivalence relation. A typical equivalence class is written U[a, b].
The points of P(A) are the classes U[a, b] for which the pair (a, b) generates the unit ideal; that is, U[a, b] is in the projective line if the ideal generated by a and b is all of A.
The projective line P(A) is equipped with a group of homographies. The homographies are expressed through use of the 2 × 2 matrix ring over A and its group of units V as follows: If c is in Z(U), the center of U, then the group action of the scalar matrix cI on P(A) is the same as the action of the identity matrix. Such matrices represent a normal subgroup N of V. The homographies of P(A) correspond to elements of the quotient group V/N.
P(A) is considered an extension of the ring A since it contains a copy of A due to the embedding a ↦ U[a, 1]. The multiplicative inverse mapping u ↦ 1/u, ordinarily restricted to the group of units U of A, is expressed by a homography on P(A) that swaps the coordinates: U[u, 1] ↦ U[1, u] = U[u⁻¹, 1].
Furthermore, for , the mapping can be extended to a homography:
Since u is arbitrary, it may be substituted for u⁻¹.
Homographies on P(A) are called linear-fractional transformations since
Instances
Rings that are fields are most familiar: The projective line over GF(2) has three elements: U[0, 1], U[1, 0], and U[1, 1]. Its homography group is the permutation group on these three.
The ring Z / 3Z, or GF(3), has the elements 1, 0, and −1; its projective line has the four elements U[1, 0], U[1, 1], U[0, 1], and U[1, −1], since both 1 and −1 are units. The homography group on this projective line has 12 elements, also described with matrices or as permutations. For a finite field GF(q), the projective line is the Galois geometry PG(1, q). J. W. P. Hirschfeld has described the harmonic tetrads in the projective lines for q = 4, 5, 7, 8, 9.
Over discrete rings
Consider P(Z/nZ) when n is a composite number. If p and q are distinct primes dividing n, then ⟨p⟩ and ⟨q⟩ are maximal ideals in Z/nZ, and by Bézout's identity there are a and b in Z such that ap + bq = 1, so that U[p, q] is in P(Z/nZ) but it is not an image of an element under the canonical embedding m ↦ U[m, 1]. Beyond the embedded copy of the ring and the point at infinity U[1, 0], P(Z/nZ) therefore contains further points of this kind. For n = 6, 10, and 12, for example, the group of units of the ring is {1, 5}, {1, 3, 7, 9}, and {1, 5, 7, 11} respectively.
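A small enumeration sketch (hypothetical helper names) that lists one representative per equivalence class U[a, b]: a pair (a, b) defines a point exactly when gcd(a, b, n) = 1, and pairs are identified under multiplication by a unit of the ring:

from math import gcd

def projective_line(n):
    # Points of P(Z/nZ): pairs (a, b) generating the unit ideal,
    # i.e. gcd(a, b, n) = 1, identified under multiplication by units.
    units = [u for u in range(n) if gcd(u, n) == 1]
    seen, points = set(), []
    for a in range(n):
        for b in range(n):
            if gcd(gcd(a, b), n) != 1:
                continue                     # <a, b> is a proper ideal: not a point
            orbit = frozenset(((u * a) % n, (u * b) % n) for u in units)
            if orbit not in seen:
                seen.add(orbit)
                points.append((a, b))        # representative of the class U[a, b]
    return points

for n in (6, 10, 12):
    print(n, len(projective_line(n)))        # 12, 18 and 24 points respectively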
The extra points can be associated with the rationals in the extended complex upper-half plane. The group of homographies on P(Z/nZ) is called a principal congruence subgroup.
For the rational numbers Q, homogeneity of coordinates means that every element of P(Q) may
|
https://en.wikipedia.org/wiki/Seifert%20conjecture
|
In mathematics, the Seifert conjecture states that every nonsingular, continuous vector field on the 3-sphere has a closed orbit. It is named after Herbert Seifert. In a 1950 paper, Seifert asked if such a vector field exists, but did not phrase non-existence as a conjecture. He also established the conjecture for perturbations of the Hopf fibration.
The conjecture was disproven in 1974 by Paul Schweitzer, who exhibited a C^1 counterexample. Schweitzer's construction was then modified by Jenny Harrison in 1988 to make a C^(2+δ) counterexample for some δ > 0. The existence of smoother counterexamples remained an open question until 1993, when Krystyna Kuperberg constructed a very different smooth counterexample. Later this construction was shown to have real analytic and piecewise linear versions.
References
V. Ginzburg and B. Gürel, A C^2-smooth counterexample to the Hamiltonian Seifert conjecture in R^4, Ann. of Math. (2) 158 (2003), no. 3, 953–976
P. A. Schweitzer, Counterexamples to the Seifert conjecture and opening closed leaves of foliations, Annals of Mathematics (2) 100 (1974), 386–400.
H. Seifert, Closed integral curves in 3-space and isotopic two-dimensional deformations, Proc. Amer. Math. Soc. 1, (1950). 287–302.
Further reading
K. Kuperberg, Aperiodic dynamical systems. Notices Amer. Math. Soc. 46 (1999), no. 9, 1035–1040.
Differential topology
Disproved conjectures
|
https://en.wikipedia.org/wiki/Stephen%20Stigler
|
Stephen Mack Stigler (born August 10, 1941) is the Ernest DeWitt Burton Distinguished Service Professor at the Department of Statistics of the University of Chicago. He has authored several books on the history of statistics; he is the son of the economist George Stigler.
Stigler is also known for Stigler's law of eponymy, which states that no scientific discovery is named after its original discoverer – a law whose first formulation he credits to sociologist Robert K. Merton.
Biography
Stigler was born in Minneapolis. He received his Ph.D. in 1967 from the University of California, Berkeley. His dissertation was on linear functions of order statistics, and his advisor was Lucien Le Cam. His research has focused on statistical theory of robust estimators and the history of statistics.
Stigler taught at University of Wisconsin–Madison until 1979 when he joined the University of Chicago. In 2006, he was elected to membership of the American Philosophical Society, and is a past president (1994) of the Institute of Mathematical Statistics.
His father was the economist George Stigler, who was a close friend of Milton Friedman.
Bibliography
Books
As editor
Selected articles
Stigler, S. M. (1980). "Stigler's law of eponymy". Transactions of the New York Academy of Sciences, 39: 147–58 (Merton Festschrift Volume, F. Gieryn, ed.)
See also
Chicago school of economics
George Stigler
List of examples of Stigler's law
Milton Friedman
Stigler's law of eponymy
References
External links
Official CV of Stephen M. Stigler (September 2015)
Homepage at the University of Chicago
Mathematics Genealogy Project: Stephen Mack Stigler
University of California, Berkeley alumni
Presidents of the Institute of Mathematical Statistics
Presidents of the International Statistical Institute
Elected Members of the International Statistical Institute
Fellows of the American Statistical Association
American statisticians
American historians of mathematics
University of Chicago faculty
Scientists from Minneapolis
American people of German descent
Living people
1941 births
Mathematicians from Minnesota
Members of the American Philosophical Society
|
https://en.wikipedia.org/wiki/Branched%20surface
|
In mathematics, a branched surface is a generalization of both surfaces and train tracks.
Definition
A surface is a space that locally looks like ℝ² (up to homeomorphism).
Consider, however, the space obtained by taking the quotient of two copies A, B of ℝ² under the identification of a closed half-space of each with a closed half-space of the other. This will be a surface except along a single line. Now, pick another copy C of ℝ² and glue it and A together along half-spaces so that the singular line of this gluing is transverse in A to the previous singular line.
Call this complicated space K. A branched surface is a space that is locally modeled on K.
Weight
A branched manifold can have a weight assigned to various of its subspaces; if this is done, the space is often called a weighted branched manifold. Weights are non-negative real numbers and are assigned to subspaces N that satisfy the following:
N is open.
N does not include any points whose only neighborhoods are the quotient space described above.
N is maximal with respect to the above two conditions.
That is, N is a component of the branched surface minus its branching set. Weights are assigned so that if a component branches into two other components, then the sum of the weights of the two unidentified halfplanes of that neighborhood is the weight of the identified halfplane.
See also
Branched covering
Branched manifold
References
Geometric topology
3-manifolds
Generalized manifolds
|
https://en.wikipedia.org/wiki/Branched%20manifold
|
In mathematics, a branched manifold is a generalization of a differentiable manifold which may have singularities of very restricted type and admits a well-defined tangent space at each point. A branched n-manifold is covered by n-dimensional "coordinate charts", each of which involves one or several "branches" homeomorphically projecting into the same differentiable n-disk in Rn. Branched manifolds first appeared in dynamical systems theory, in connection with one-dimensional hyperbolic attractors constructed by Smale, and were formalized by R. F. Williams in a series of papers on expanding attractors. Special cases of low dimensions are known as train tracks (n = 1) and branched surfaces (n = 2) and play a prominent role in the geometry of three-manifolds after Thurston.
Definition
Let K be a metrizable space, together with:
a collection {Ui} of closed subsets of K;
for each Ui, a finite collection {Dij} of closed subsets of Ui;
for each i, a map πi: Ui → Din to a closed n-disk of class Ck in Rn.
These data must satisfy the following requirements:
∪j Dij = Ui and ∪i Int Ui = K;
the restriction of πi to Dij is a homeomorphism onto its image πi(Dij) which is a closed class Ck n-disk relative to the boundary of Din;
there is a cocycle of diffeomorphisms {αlm} of class Ck (k ≥ 1) such that πl = αlm · πm when defined. The domain of αlm is πm(Ul ∩ Um).
Then the space K is a branched n-manifold of class Ck.
The standard machinery of differential topology can be adapted to the case of branched manifolds. This leads to the definition of the tangent space TpK to a branched n-manifold K at a given point p, which is an n-dimensional real vector space; a natural notion of a Ck differentiable map f: K → L between branched manifolds, its differential df: TpK → Tf(p)L, the germ of f at p, jet spaces, and other related notions.
Examples
Extrinsically, branched n-manifolds are n-dimensional complexes embedded into some Euclidean space such that each point has a well-defined n-dimensional tangent space.
A finite graph whose edges are smoothly embedded arcs in a surface, such that all edges incident to a given vertex v have the same tangent line at v, is a branched one-manifold, or train track (there are several variants of the notion of a train track — here no restriction is placed on the valencies of the vertices). As a specific example, consider the "figure eight" formed by two externally tangent circles in the plane.
A two-complex in R3 consisting of several leaves that may tangentially come together in pairs along certain double curves, or come together in triples at isolated singular points where these double curves intersect transversally, is a branched two-manifold, or branched surface. For example, consider the space K obtained from 3 copies of the Euclidean plane, labelled T (top), M (middle) and B (bottom) by identifying the half-planes y ≤ 0 in T and M and the half-planes x ≤ 0 in M and B. One can imagine M being the flat coord
|
https://en.wikipedia.org/wiki/Train%20track%20%28mathematics%29
|
In the mathematical area of topology, a train track is a family of curves embedded on a surface, meeting the following conditions:
The curves meet at a finite set of vertices called switches.
Away from the switches, the curves are smooth and do not touch each other.
At each switch, three curves meet with the same tangent line, with two curves entering from one direction and one from the other.
The main application of train tracks in mathematics is to study laminations of surfaces, that is, partitions of closed subsets of surfaces into unions of smooth curves. Train tracks have also been used in graph drawing.
Train tracks and laminations
A lamination of a surface is a partition of a closed subset of the surface into smooth curves. The study of train tracks was originally motivated by the following observation: If a generic lamination on a surface is looked at from a distance by a myopic person, it will look like a train track.
A switch in a train track models a point where two families of parallel curves in the lamination merge to become a single family, as shown in the illustration. Although the switch consists of three curves ending in and intersecting at a single point, the curves in the lamination do not have endpoints and do not intersect each other.
For this application of train tracks to laminations, it is often important to constrain the shapes that can be formed by connected components of the surface between the curves of the track. For instance, Penner and Harer require that each such component, when glued to a copy of itself along its boundary to form a smooth surface with cusps, have negative cusped Euler characteristic.
A train track with weights, or weighted train track or measured train track, consists of a train track with a non-negative real number, called a weight, assigned to each branch. The weights can be used to model which of the curves in a parallel family of curves from a lamination are split to which sides of the switch. Weights must satisfy the following switch condition: The weight assigned to the ingoing branch at a switch should equal the sum of the weights assigned to the branches outgoing from that switch.
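A minimal sketch of how the switch condition can be checked for a finite weighted track; the data encoding below (one incoming branch and a list of outgoing branches per switch) is a hypothetical choice made for the example:

def satisfies_switch_conditions(switches, weights):
    # At every switch, the weight of the incoming branch must equal the
    # sum of the weights of the outgoing branches.
    return all(
        weights[incoming] == sum(weights[branch] for branch in outgoing)
        for incoming, outgoing in switches.values()
    )

# One switch where branch "a" splits into branches "b" and "c".
switches = {"s": ("a", ["b", "c"])}
print(satisfies_switch_conditions(switches, {"a": 3.0, "b": 1.0, "c": 2.0}))   # True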
Weights are closely related to the notion of carrying. A train track is said to carry a lamination if there is a train track neighborhood such that every leaf of the lamination is contained in the neighborhood and intersects each vertical fiber transversely. If each vertical fiber has nontrivial intersection with some leaf, then the lamination is fully carried by the train track.
References
Topology
|
https://en.wikipedia.org/wiki/G-structure%20on%20a%20manifold
|
In differential geometry, a G-structure on an n-manifold M, for a given structure group G, is a principal G-subbundle of the tangent frame bundle FM (or GL(M)) of M.
The notion of G-structures includes various classical structures that can be defined on manifolds, which in some cases are tensor fields. For example, for the orthogonal group, an O(n)-structure defines a Riemannian metric, and for the special linear group an SL(n,R)-structure is the same as a volume form. For the trivial group, an {e}-structure consists of an absolute parallelism of the manifold.
Generalising this idea to arbitrary principal bundles on topological spaces, one can ask if a principal G-bundle "comes from" a subgroup H of G. This is called reduction of the structure group (to H).
Several structures on manifolds, such as a complex structure, a symplectic structure, or a Kähler structure, are G-structures with an additional integrability condition.
Reduction of the structure group
One can ask if a principal G-bundle "comes from" a subgroup H of G. This is called reduction of the structure group (to H), and makes sense for any group homomorphism H → G, which need not be an inclusion map (despite the terminology).
Definition
In the following, let X be a topological space, let G and H be topological groups, and let φ : H → G be a group homomorphism.
In terms of concrete bundles
Given a principal G-bundle P over X, a reduction of the structure group (from G to H) is a principal H-bundle Q over X together with an isomorphism of the associated bundle Q ×_H G with the original bundle P.
In terms of classifying spaces
Given a map π : X → BG, where BG is the classifying space for G-bundles, a reduction of the structure group is a map π_H : X → BH and a homotopy between Bφ ∘ π_H and π.
Properties and examples
Reductions of the structure group do not always exist. If they exist, they are usually not essentially unique, since the isomorphism is an important part of the data.
As a concrete example, every even-dimensional real vector space is isomorphic to the underlying real space of a complex vector space: it admits a linear complex structure. A real vector bundle admits an almost complex structure if and only if it is isomorphic to the underlying real bundle of a complex vector bundle. This is then a reduction along the inclusion GL(n,C) → GL(2n,R)
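Concretely, for n = 1 this reduction rests on the standard identification of a complex scalar with a real 2 × 2 matrix,

a + bi ↦ [[a, −b], [b, a]],

so that multiplication by i corresponds to J = [[0, −1], [1, 0]] with J² = −I; a linear (or almost) complex structure on a real bundle amounts to a bundle endomorphism J with J² = −I, and the same block pattern gives the map GL(n, C) → GL(2n, R) for general n.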
In terms of transition maps, a G-bundle can be reduced if and only if the transition maps can be taken to have values in H. Note that the term reduction is misleading: it suggests that H is a subgroup of G, which is often the case, but need not be (for example for spin structures): it's properly called a lifting.
More abstractly, "G-bundles over X" is a functor in G: Given a Lie group homomorphism H → G, one gets a map from H-bundles to G-bundles by inducing (as above). Reduction of the structure group of a G-bundle B is choosing an H-bundle whose image is B.
The inducing map from H-bundles to G-bundles is in general neither onto nor one-to-one, so the structure group cannot always be reduced, and when it can, this reduction need not be uniq
|
https://en.wikipedia.org/wiki/Euler%20brick
|
In mathematics, an Euler brick, named after Leonhard Euler, is a rectangular cuboid whose edges and face diagonals all have integer lengths. A primitive Euler brick is an Euler brick whose edge lengths are relatively prime. A perfect Euler brick is one whose space diagonal is also an integer, but such a brick has not yet been found.
Definition
The definition of an Euler brick in geometric terms is equivalent to a solution to the following system of Diophantine equations:
a² + b² = d²,  a² + c² = e²,  b² + c² = f²,
where a, b, c are the edges and d, e, f are the face diagonals.
Properties
If (a, b, c) is a solution, then (ka, kb, kc) is also a solution for any positive integer k. Consequently, the solutions in rational numbers are all rescalings of integer solutions. Given an Euler brick with edge-lengths (a, b, c), the triple (bc, ac, ab) constitutes an Euler brick as well.
Exactly one edge and two face diagonals of a primitive Euler brick are odd.
At least two edges of an Euler brick are divisible by 3.
At least two edges of an Euler brick are divisible by 4.
At least one edge of an Euler brick is divisible by 11.
Examples
The smallest Euler brick, discovered by Paul Halcke in 1719, has edges (a, b, c) = (44, 117, 240) and face diagonals (d, e, f) = (125, 244, 267). Some other small primitive solutions, given as edges (a, b, c) — face diagonals (d, e, f), are below:
( 85, 132, 720) — ( 157, 725, 732)
(140, 480, 693) — ( 500, 707, 843)
(160, 231, 792) — ( 281, 808, 825)
(187, 1020, 1584) — (1037, 1595, 1884)
(195, 748, 6336) — ( 773, 6339, 6380)
(240, 252, 275) — ( 348, 365, 373)
(429, 880, 2340) — ( 979, 2379, 2500)
(495, 4888, 8160) — (4913, 8175, 9512)
(528, 5796, 6325) — (5820, 6347, 8579)
Generating formula
Euler found at least two parametric solutions to the problem, but neither gives all solutions.
An infinitude of Euler bricks can be generated with Saunderson's parametric formula. Let (u, v, w) be a Pythagorean triple (that is, u² + v² = w².) Then the edges
a = u(4v² − w²),  b = v(4u² − w²),  c = 4uvw
give face diagonals
d = w³,  e = u(4v² + w²),  f = v(4u² + w²).
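A short script (hypothetical function names) that generates a brick from a Pythagorean triple via the formula above and verifies that the three face diagonals are integers:

from math import isqrt

def saunderson_brick(u, v, w):
    # Edges of an Euler brick built from the Pythagorean triple (u, v, w).
    assert u * u + v * v == w * w
    return u * (4 * v * v - w * w), v * (4 * u * u - w * w), 4 * u * v * w

def is_euler_brick(a, b, c):
    # All three face diagonals must be integers.
    def is_square(m):
        r = isqrt(m)
        return r * r == m
    return all(is_square(x * x + y * y) for x, y in ((a, b), (a, c), (b, c)))

a, b, c = saunderson_brick(3, 4, 5)
print((a, b, c), is_euler_brick(a, b, c))   # (117, 44, 240) True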
There are many Euler bricks which are not parametrized as above, for instance the Euler brick with edges (240, 252, 275) and face diagonals (348, 365, 373).
Perfect cuboid
A perfect cuboid (also called a perfect Euler brick or perfect box) is an Euler brick whose space diagonal also has integer length. In other words, the following equation is added to the system of Diophantine equations defining an Euler brick:
a² + b² + c² = g²,
where g is the space diagonal. To date, no example of a perfect cuboid has been found and no one has proven that none exist.
Exhaustive computer searches show that, if a perfect cuboid exists,
the odd edge must be greater than 2.5 × 10¹³,
the smallest edge must be greater than .
the space diagonal must be greater than 9 × 10¹⁵.
Some facts are known about properties that must be satisfied by a primitive perfect cuboid, if one exists, based on modular arithmetic:
One edge, two face diagonals and the space diagonal must be odd, one edge and
|
https://en.wikipedia.org/wiki/Related%20rates
|
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables.
Fundamentally, if a function y is defined such that y = f(x), then the derivative of the function can be taken with respect to another variable. We assume x is a function of t, i.e. x = g(t). Then y = f(g(t)), so by the chain rule
dy/dt = f′(g(t))·g′(t).
Written in Leibniz notation, this is:
dy/dt = (dy/dx)·(dx/dt).
Thus, if it is known how x changes with respect to t, then we can determine how y changes with respect to t, and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc.
For example, if y = x², then dy/dt = 2x·(dx/dt).
Procedure
The most common way to approach related rates problems is the following:
Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order)
Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
Differentiate both sides of the equation with respect to time (or other rate of change). Often, the chain rule is employed at this step.
Substitute the known rates of change and the known quantities into the equation.
Solve for the wanted rate of change.
Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.
Example
A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?
The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.
Step 1:
Step 2:
From the Pythagorean theorem, the equation
x² + y² = h²
describes the relationship between x, y and h for a right triangle. Differentiating both sides of this equation with respect to time, t, yields
2x·(dx/dt) + 2y·(dy/dt) = 2h·(dh/dt).
Step 3:
When solved for the wanted rate of change, dy/dt, this gives
dy/dt = (h·(dh/dt) − x·(dx/dt)) / y.
The ladder's length is constant, so dh/dt = 0 and dy/dt = −(x/y)·(dx/dt). With x = 6, h = 10 (hence y = 8) and dx/dt = 3, the top of the ladder is sliding down the wall at dy/dt = −9/4 = −2.25 meters per second.
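A short numerical check of the example, using the values from the problem statement:

from math import sqrt

h, x, dxdt = 10.0, 6.0, 3.0       # ladder length, base distance, base speed
y = sqrt(h ** 2 - x ** 2)         # from x^2 + y^2 = h^2
dydt = -(x / y) * dxdt            # from 2x dx/dt + 2y dy/dt = 0 (h is constant)
print(y, dydt)                    # 8.0 -2.25: the top slides down at 2.25 m/s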
|
https://en.wikipedia.org/wiki/Boyer
|
Boyer () is a French surname. In rarer cases, it can be a corruption or deliberate alteration of other names.
Origins and statistics
Boyer is found traditionally along the Mediterranean (Provence, Languedoc), the Rhône valley, Auvergne, Limousin, Périgord and more generally in the Southwest of France. It is also found in the north of the country. There are two variant spellings: Boyé (southwest) and Bouyer (Loire-Atlantique, Charente-Maritime).
Boyer currently ranks 55th among the most common surnames in France. For the period 1891–1990 it ranked 34th.
Like many other surnames, it used to be a nickname describing somebody's job: "bullock driver", "cowherd", that is to say Bouvier in common French. It derives mainly from the Occitan buòu "ox", with the suffix -iar / -ier, frenchified phonetically or, further north, sometimes from a variant form in dialectal French bô, bou "ox" corresponding to common French bœuf with the suffix -ier. In French, the modern spelling -oyer avoids confusion between -oi-er and -oier .
In rarer cases, it can be a corruption or deliberate alteration of several other names :
In England, it may come from bowyer, meaning "bow maker" or "bow seller."
In Turkish, the name may come from "boy-er", "boy" meaning "size" or "stature" and "er" meaning "man" or "soldier."
It can also be a corruption or deliberate alteration of German names like Bayer or Bauer.
People with the surname
Abel Boyer (1667–1729), French-English lexicographer and journalist
Alexis de Boyer (1757–1833), French surgeon
Andre Boyer (disambiguation), several people
Angélique Boyer (born 1988), French-Mexican actress
Anita Boyer (1915–1985), American Big Band singer and songwriter
Anise Boyer (1914–2008), American dancer and actress
Anne Boyer (born 1973), American poet and essayist
Antide Boyer (1850-1918), French manual worker, Provençal dialect writer and journalist
Auguste Boyer (1896-1956), French professional golfer prominent on the European circuit
Benjamin Markley Boyer (1823-1887), Democratic member of the U.S. House of Representatives
Bert Boyer, director of the Centre of Alaska Native Health Research
Bill Boyer, American sports team owner
Bill Boyer Jr., American entrepreneur, owner of Mokulele Airlines
Blaine Boyer (born 1981), American baseball player
Blair Boyer (born 1981), Australian politician
Boni Boyer (1958–1996), American vocalist, multi-instrumentalist and composer
Carl Benjamin Boyer (1906–1976), American historian of mathematics
Charles Boyer (1899–1978), French-American actor
Charles-Georges Boyer (1743–1806 or 1807), French music publisher
Charles P. Boyer (born 1942), American mathematician
Christine Boyer (1771-1800), first wife of Lucien Bonaparte
Claude Boyer (1618-1698), French clergyman, playwright, apologist and poet
Claudette Boyer (1938–2013), Canadian politician
Clete Boyer (1937–2007), American baseball player
Denise Boyer-Merdich (born 1962), American soccer player and a part of United States women's national team
Derek B
|
https://en.wikipedia.org/wiki/J%C3%B3zef%20H.%20Przytycki
|
Józef Henryk Przytycki (, ; born 14 October 1953 in Warsaw, Poland), is a Polish mathematician specializing in the fields of knot theory and topology.
Academic background
Przytycki received a Master of Science degree in mathematics from University of Warsaw in 1977 and a PhD in mathematics from Columbia University (1981) advised by Joan Birman. Przytycki then returned to Poland, where he became an assistant professor at the University of Warsaw. From 1986 to 1995 he held visiting positions at the University of British Columbia, the University of Toronto, Michigan State University, the Institute for Advanced Study in Princeton, New Jersey, the University of California, Riverside, Odense University, and the University of California, Berkeley.
In 1995 he joined the Mathematics Department at George Washington University in Washington, D.C., where he became a professor in 1999. According to the Mathematics Genealogy Project, he has supervised 16 PhD students (as of 2022).
Research
Przytycki co-authored more than 100 research papers, 25 conference proceedings and 2 books.
In 1987, Przytycki and Pawel Traczyk published a paper that included a description of what is now called the HOMFLY(PT) polynomial. Postal delays prevented Przytycki and Traczyk from receiving full recognition alongside the other six discoverers. Przytycki also introduced skein modules in a paper published in 1991; see also his entry in the online Encyclopedia of Mathematics.
Przytycki has co-organized the conference Knots in Washington each semester since 1995. He also co-organized several international Knot Theory conferences in Europe, for example Knots in Poland (1995, 2003 and 2010), Knots in Hellas (1998 and 2016), and the Advanced School and Conference on Knot Theory and its Applications to Physics and Biology, Trieste, Italy (2009).
Personal life
Józef Przytycki is married to computational biologist and mathematician Teresa Przytycka (born 1958). They have two sons.
References
External links
1953 births
Living people
Polish emigrants to the United States
20th-century Polish mathematicians
21st-century Polish mathematicians
20th-century American mathematicians
21st-century American mathematicians
Topologists
University of Warsaw alumni
Columbia Graduate School of Arts and Sciences alumni
George Washington University faculty
Academic staff of the University of Warsaw
Institute for Advanced Study visiting scholars
University of British Columbia people
|
https://en.wikipedia.org/wiki/Hilbert%27s%20syzygy%20theorem
|
In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, which were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem that asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings.
Hilbert's syzygy theorem concerns the relations, or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; the theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in n indeterminates over a field, one eventually finds a zero module of relations, after at most n steps.
Hilbert's syzygy theorem is now considered to be an early result of homological algebra. It is the starting point of the use of homological methods in commutative algebra and algebraic geometry.
History
The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890). The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally part III also contains a special case of the Hilbert–Burch theorem.
Syzygies (relations)
Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring.
Given a generating set g₁, ..., g_k of a module M over a ring R, a relation or first syzygy between the generators is a k-tuple (a₁, ..., a_k) of elements of R such that
a₁g₁ + ⋯ + a_kg_k = 0.
Let L₀ be a free module with basis (G₁, ..., G_k). The k-tuple (a₁, ..., a_k) may be identified with the element
a₁G₁ + ⋯ + a_kG_k,
and the relations form the kernel R₁ of the linear map L₀ → M defined by G_i ↦ g_i. In other words, one has an exact sequence
0 → R₁ → L₀ → M → 0.
This first syzygy module R₁ depends on the choice of a generating set, but, if S₁ is the module which is obtained with another generating set, there exist two free modules F₁ and F₂ such that
R₁ ⊕ F₁ ≅ S₁ ⊕ F₂,
where ⊕ denotes the direct sum of modules.
The second syzygy module is the module of the relations between generators of the first syzygy module. By continuing in this way, one may define the kth syzygy module for every positive integer k.
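A standard small example: over R = k[x, y], let M be the ideal generated by g₁ = x and g₂ = y. A relation (a₁, a₂) must satisfy a₁x + a₂y = 0, and every such relation is a multiple of (y, −x), so the first syzygy module is free of rank one and the second syzygy module is zero. This gives the free resolution

0 → R → R² → M → 0,

where R → R² sends 1 to (y, −x) and R² → M sends (a₁, a₂) to a₁x + a₂y; its length is 1, within the bound of n = 2 steps given by the theorem.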
If the th syzygy module is free for some , then by taking a basis as a generating set, the next syzygy module (and every subsequent one) is the zero module. If one does not take a basis as a generating set, then all subsequent syzygy modules are free.
Let be the smallest integer, if any, such that the th syzygy module of a module i
|
https://en.wikipedia.org/wiki/Ain%20Zaatout
|
Ain Zaatout () is the administrative name of a mountainous village in north east Algeria, called Ah Frah in the local Shawi dialect, and Beni Farah (sometimes spelled Beni Ferah) () in Arabic.
It is located at 35.14° North, 5.83° East, at the southern edge of the Saharan Atlas between the provinces of Batna and Biskra. The region is largely rocky with an average altitude of more than 900 metres (2,953 feet) above sea level.
Aïn Zaatout has an estimated population of around 5,000 composed of Farhi people, Muslim Berbers speaking a distinctive variant of the Shawi dialect used in the Aurès.
References
External links
Ain Zaatout in Google Pages
Populated places in Biskra Province
|
https://en.wikipedia.org/wiki/Orthant
|
In geometry, an orthant or hyperoctant is the analogue in n-dimensional Euclidean space of a quadrant in the plane or an octant in three dimensions.
In general an orthant in n-dimensions can be considered the intersection of n mutually orthogonal half-spaces. By independent selections of half-space signs, there are 2ⁿ orthants in n-dimensional space.
More specifically, a closed orthant in Rn is a subset defined by constraining each Cartesian coordinate to be nonnegative or nonpositive. Such a subset is defined by a system of inequalities:
ε₁x₁ ≥ 0,  ε₂x₂ ≥ 0,  …,  εₙxₙ ≥ 0,
where each εi is +1 or −1.
Similarly, an open orthant in Rn is a subset defined by a system of strict inequalities
ε₁x₁ > 0,  ε₂x₂ > 0,  …,  εₙxₙ > 0,
where each εi is +1 or −1.
By dimension:
In one dimension, an orthant is a ray.
In two dimensions, an orthant is a quadrant.
In three dimensions, an orthant is an octant.
John Conway defined the term n-orthoplex from orthant complex as a regular polytope in n-dimensions with 2ⁿ simplex facets, one per orthant.
The nonnegative orthant is the generalization of the first quadrant to n-dimensions and is important in many constrained optimization problems.
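Two one-line operations that come up in this setting, sketched in Python with hypothetical function names: identifying the closed orthant containing a point by its sign vector, and projecting a point onto the nonnegative orthant.

import numpy as np

def orthant_signs(x):
    # Sign vector (+1/-1 per coordinate) of a closed orthant containing x;
    # zero coordinates lie on a boundary and are assigned +1 by convention.
    return np.where(np.asarray(x) >= 0, 1, -1)

def project_nonnegative_orthant(x):
    # Euclidean projection onto {x : x_i >= 0 for all i}.
    return np.maximum(np.asarray(x), 0)

print(orthant_signs([2.0, -1.5, 0.0]))            # [ 1 -1  1]
print(project_nonnegative_orthant([2.0, -1.5]))   # [2. 0.]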
See also
Cross polytope (or orthoplex) – a family of regular polytopes in n-dimensions which can be constructed with one simplex facet in each orthant space.
Measure polytope (or hypercube) – a family of regular polytopes in n-dimensions which can be constructed with one vertex in each orthant space.
Orthotope – generalization of a rectangle in n-dimensions, with one vertex in each orthant.
References
Further reading
The Facts on File Geometry Handbook, Catherine A. Gorini, 2003, p. 113
Euclidean geometry
Linear algebra
|
https://en.wikipedia.org/wiki/Hyperbolic%20triangle
|
In hyperbolic geometry, a hyperbolic triangle is a triangle in the hyperbolic plane. It consists of three line segments called sides or edges and three points called angles or vertices.
Just as in the Euclidean case, three points of a hyperbolic space of an arbitrary dimension always lie on the same plane. Hence planar hyperbolic triangles also describe triangles possible in any higher dimension of hyperbolic spaces.
Definition
A hyperbolic triangle consists of three non-collinear points and the three segments between them.
Properties
Hyperbolic triangles have some properties that are analogous to those of triangles in Euclidean geometry:
Each hyperbolic triangle has an inscribed circle but not every hyperbolic triangle has a circumscribed circle (see below). Its vertices can lie on a horocycle or hypercycle.
Hyperbolic triangles have some properties that are analogous to those of triangles in spherical or elliptic geometry:
Two triangles with the same angle sum are equal in area.
There is an upper bound for the area of triangles.
There is an upper bound for the radius of the inscribed circle.
Two triangles are congruent if and only if they correspond under a finite product of line reflections.
Two triangles with corresponding angles equal are congruent (i.e., all similar triangles are congruent).
Hyperbolic triangles have some properties that are the opposite of the properties of triangles in spherical or elliptic geometry:
The angle sum of a triangle is less than 180°.
The area of a triangle is proportional to the deficit of its angle sum from 180° (see the exact formula below).
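With the curvature of the hyperbolic plane normalized to −1 (other normalizations rescale the constant of proportionality), the relationship mentioned above takes the exact form

Area = π − (α + β + γ),

where α, β and γ are the interior angles measured in radians.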
Hyperbolic triangles also have some properties that are not found in other geometries:
Some hyperbolic triangles have no circumscribed circle: this is the case when at least one of its vertices is an ideal point, or when all of its vertices lie on a horocycle or on a one-sided hypercycle.
Hyperbolic triangles are thin: there is a maximum distance δ from a point on an edge to one of the other two edges. This principle gave rise to δ-hyperbolic space.
Triangles with ideal vertices
The definition of a triangle can be generalized, permitting vertices on the ideal boundary of the plane while keeping the sides within the plane. If a pair of sides is limiting parallel (i.e. the distance between them approaches zero as they tend to the ideal point, but they do not intersect), then they end at an ideal vertex represented as an omega point.
Such a pair of sides may also be said to form an angle of zero.
A triangle with a zero angle is impossible in Euclidean geometry for straight sides lying on distinct lines. However, such zero angles are possible with tangent circles.
A triangle with one ideal vertex is called an omega triangle.
Special triangles with ideal vertices are:
Triangle of parallelism
A triangle where one vertex is an ideal point, one angle is right: the third angle is the angle of parallelism for the length of the side between the right and the third angle.
Schweikart triangle
The tr
|
https://en.wikipedia.org/wiki/Hyperbolic%20metric%20space
|
In mathematics, a hyperbolic metric space is a metric space satisfying certain metric relations (depending quantitatively on a nonnegative real number δ) between points. The definition, introduced by Mikhael Gromov, generalizes the metric properties of classical hyperbolic geometry and of trees. Hyperbolicity is a large-scale property, and is very useful to the study of certain infinite groups called Gromov-hyperbolic groups.
Definitions
In this paragraph we give various definitions of a -hyperbolic space. A metric space is said to be (Gromov-) hyperbolic if it is -hyperbolic for some .
Definition using the Gromov product
Let (X, d) be a metric space. The Gromov product of two points y, z ∈ X with respect to a third one x ∈ X is defined by the formula:
(y, z)_x = ½ ( d(x, y) + d(x, z) − d(y, z) ).
Gromov's definition of a hyperbolic metric space is then as follows: X is δ-hyperbolic if and only if all x, y, z, w ∈ X satisfy the four-point condition
(x, z)_w ≥ min( (x, y)_w, (y, z)_w ) − δ.
Note that if this condition is satisfied for all x, y, z and one fixed base point w₀, then it is satisfied for all w with a constant 2δ. Thus the hyperbolicity condition only needs to be verified for one fixed base point; for this reason, the subscript for the base point is often dropped from the Gromov product.
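For a finite metric space given as a distance table, the smallest δ satisfying the four-point condition can be computed directly; the brute-force sketch below (hypothetical names) simply illustrates the definition:

from itertools import product

def gromov_product(d, x, y, w):
    # (x, y)_w = 1/2 * (d(x, w) + d(y, w) - d(x, y))
    return 0.5 * (d[x][w] + d[y][w] - d[x][y])

def hyperbolicity_constant(d):
    # Smallest delta with (x, z)_w >= min((x, y)_w, (y, z)_w) - delta
    # for all quadruples of points of the finite metric space d.
    delta = 0.0
    for x, y, z, w in product(d, repeat=4):
        gap = min(gromov_product(d, x, y, w),
                  gromov_product(d, y, z, w)) - gromov_product(d, x, z, w)
        delta = max(delta, gap)
    return delta

# Four leaves of a star-shaped tree, pairwise at distance 2: trees are 0-hyperbolic.
d = {a: {b: (0 if a == b else 2) for b in "pqrs"} for a in "pqrs"}
print(hyperbolicity_constant(d))   # 0.0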
Definitions using triangles
Up to changing by a constant multiple, there is an equivalent geometric definition involving triangles when the metric space is geodesic, i.e. any two points are end points of a geodesic segment (an isometric image of a compact subinterval of the reals). Note that the definition via Gromov products does not require the space to be geodesic.
Let x, y, z ∈ X. A geodesic triangle with vertices x, y, z is the union of the three geodesic segments [x, y], [y, z], [z, x] (where [p, q] denotes a segment with endpoints p and q).
If for any point m ∈ [x, y] there is a point in [y, z] ∪ [z, x] at distance less than δ from m, and similarly for points on the other edges, then the triangle is said to be δ-slim.
A definition of a δ-hyperbolic space is then a geodesic metric space all of whose geodesic triangles are δ-slim. This definition is generally credited to Eliyahu Rips.
Another definition can be given using the notion of a δ-approximate center of a geodesic triangle: this is a point which is at distance at most δ from each edge of the triangle (an "approximate" version of the incenter). A space is δ-hyperbolic if every geodesic triangle has a δ-center.
These two definitions of a δ-hyperbolic space using geodesic triangles are not exactly equivalent, but there exists a constant k > 1 such that a δ-hyperbolic space in the first sense is kδ-hyperbolic in the second, and vice versa. Thus the notion of a hyperbolic space is independent of the chosen definition.
Examples
The hyperbolic plane is hyperbolic: in fact the incircle of a geodesic triangle is the circle of largest diameter contained in the triangle and every geodesic triangle lies in the interior of an ideal triangle, all of which are isometric with incircles of diameter 2 log 3. Note that in this case the Gromov product also has a simple interpretation in terms of the incircle of a geodes
|
https://en.wikipedia.org/wiki/Geometry%20template
|
A geometry template is a piece of clear plastic with cut-out shapes for use in mathematics and other subjects in primary school through secondary school. It also has various measurements on its sides to be used like a ruler. In Australia, popular brands include Mathomat and MathAid.
Brands
Mathomat and Mathaid
Mathomat is a trademark used for a plastic stencil developed in Australia by Craig Young in 1969, who originally worked as an engineering tradesperson in the Government Aircraft Factories (GAF) in Melbourne before retraining and working as head of mathematics in a secondary school in Melbourne. Young designed Mathomat to address what he perceived as limitations of traditional mathematics drawing sets in classrooms, mainly caused by students losing parts of the sets. The Mathomat stencil has a large number of geometric shape stencils combined with the functions of a technical drawing set (rulers, set squares, protractor and circle stencils to replace a compass).
The template made use of polycarbonate – a new type of thermoplastic polymer when Mathomat first came out – which was strong and transparent enough to allow a large number of stencil shapes to be included in its design without breaking or tearing. The first template was exhibited in 1970 at a mathematics conference in Melbourne along with a series of popular mathematics teaching lesson plans; it became an immediate success, with a large number of schools specifying it as a required student purchase. As of 2017, the stencil is widely specified in Australian schools, chiefly for students at early secondary school level. The manufacturing of Mathomat was taken over in 1989 by the W&G drawing instrument company, which had a factory in Melbourne for the manufacture of technical drawing instruments. Young also developed MathAid, which he initially produced while living in Ringwood, Victoria. He later sold the company.
W&G published a series of teacher resource books for Mathomat authored by various teachers and academics who were interested in Mathomat as a teaching product.
See also
French curve
Protractor
Ruler
Technical drawing tools
References
Dimensional instruments
Drawing aids
Educational materials
Geometry
Mathematical tools
|
https://en.wikipedia.org/wiki/Harvard%20Science%20Center
|
The Harvard University Science Center is Harvard's main classroom and laboratory building for undergraduate science and mathematics, in addition to housing numerous other facilities and services.
Located just north of Harvard Yard, the Science Center was built in 1972 and opened in 1973 after a design by Josep Lluís Sert (then dean of the Harvard Graduate School of Design).
History
Planning
Harvard had been interested in building an undergraduate science center in the 1950s and 1960s. However, in the midst of an economic decline, funding could not be found. No concrete plans were made until 1968, when Edwin Land, inventor of the Polaroid "Land" camera, made a $12.5 million donation to construct a science center specifically for undergraduates.
Opponents of the plan feared that insufficient monies would be found to complete the project, and that the building's maintenance costs would be unreasonably high. The Biology Department also protested the move of its undergraduate-instruction facilities far from the department's main quarters. Professor George Wald argued that this would degrade the quality of instruction. There was also dissatisfaction with cancellation of plans at that time for a new biochemistry building.
The plan called for demolition of Lawrence Hall, a laboratory and a living space built in 1848. By the time of the scheduled demolition, a commune of students and "street people" calling themselves the "Free University" had taken residence in the unused building. The controversy was rendered moot when fire gutted the building a month later in May 1970.
As part of the project, in 1966–68 the portion of Cambridge Street running along the north edge of Harvard Yard was depressed into a 4-lane motor vehicle underpass, thus allowing unhindered pedestrian movement between the Yard and Harvard facilities to the north, including the new Science Center. Architectural historian Bainbridge Bunting wrote that this was the "most important improvement in Cambridge since the construction of [what would later be called] Memorial Drive in the 1890s".
Construction
Harvard commissioned architects Sert, Jackson and Associates to design and build the facility. Josep Lluis Sert, who had become Dean of the Harvard School of Design in 1953, had designed a number of other Harvard buildings, including Peabody Terrace, Holyoke Center (now the Smith Campus Center), and the Harvard Divinity School's Center for the Study of World Religions. These buildings were part of a modernist movement that sought to break away from the Georgian and related styles used at Harvard for hundreds of years. Thus, the Science Center is largely steel and concrete, with plentiful fenestration admitting natural light. Construction lasted from 1970 to 1972.
From 2001 to 2004 a $22 million renovation created space for the Collection of Historical Scientific Instruments and expanded other facilities. A room-sized historic electromechanical computer built in 1944, the Harvard Mark
|
https://en.wikipedia.org/wiki/Pretzel%20knot
|
A Pretzel knot may refer to:
Pretzel link: a concept in mathematics
Soft pretzel with garlic
Stafford knot: a rope knot used in sailing and heraldry
|
https://en.wikipedia.org/wiki/Correspondence%20theorem
|
In group theory, the correspondence theorem (also the lattice theorem, and variously and ambiguously the third and fourth isomorphism theorem) states that if is a normal subgroup of a group , then there exists a bijection from the set of all subgroups of containing , onto the set of all subgroups of the quotient group . Loosely speaking, the structure of the subgroups of is exactly the same as the structure of the subgroups of containing , with collapsed to the identity element.
Specifically, if
G is a group,
N, a normal subgroup of G,
𝒢 = {A : A is a subgroup of G and A ⊇ N}, the set of all subgroups A of G that contain N, and
𝒩 = {S : S is a subgroup of G/N}, the set of all subgroups of G/N,
then there is a bijective map φ : 𝒢 → 𝒩 such that
φ(A) = A/N for all A ∈ 𝒢.
One further has that if A and B are in 𝒢 then
A ⊆ B if and only if A/N ⊆ B/N;
if A ⊆ B then [B : A] = [B/N : A/N], where [B : A] is the index of A in B (the number of cosets bA of A in B);
⟨A, B⟩/N = ⟨A/N, B/N⟩, where ⟨A, B⟩ is the subgroup of G generated by A ∪ B;
(A ∩ B)/N = A/N ∩ B/N, and
A is a normal subgroup of G if and only if A/N is a normal subgroup of G/N.
This list is far from exhaustive. In fact, most properties of subgroups are preserved in their images under the bijection onto subgroups of a quotient group.
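A small computational illustration for cyclic groups (the encoding below is a hypothetical choice made for the example): with G = Z/12Z and N = ⟨6⟩, the subgroups of G containing N should be in bijection with the subgroups of G/N ≅ Z/6Z.

def cyclic_subgroups(n):
    # Subgroups of Z/nZ: one subgroup <d> = {0, d, 2d, ...} per divisor d of n.
    return {frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0}

G, N, quotient_order = 12, {0, 6}, 6
above_N = [H for H in cyclic_subgroups(G) if N <= H]
print(len(above_N), len(cyclic_subgroups(quotient_order)))   # 4 4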
More generally, there is a monotone Galois connection between the lattice of subgroups of G (not necessarily containing N) and the lattice of subgroups of G/N: the lower adjoint of a subgroup H of G is given by H ↦ HN/N, and the upper adjoint of a subgroup K/N of G/N is given by its preimage K in G. The associated closure operator on subgroups of G is H ↦ HN; the associated kernel operator on subgroups of G/N is the identity. A proof of the correspondence theorem can be found here.
Similar results hold for rings, modules, vector spaces, and algebras. More generally an analogous result that concerns congruence relations instead of normal subgroups holds for any algebraic structure.
See also
Modular lattice
References
Isomorphism theorems
|
https://en.wikipedia.org/wiki/Vertex%20operator%20algebra
|
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence.
The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method.
The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms.
We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld.
Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the ch
|
https://en.wikipedia.org/wiki/Stueckelberg%20action
|
In field theory, the Stueckelberg action (named after Ernst Stueckelberg) describes a massive spin-1 field as an R (the real numbers are the Lie algebra of U(1)) Yang–Mills theory coupled to a real scalar field B. This scalar field takes on values in a real 1D affine representation of R, with the mass m as the coupling strength.
This is a special case of the Higgs mechanism, where, in effect, the mass of the Higgs scalar excitation has been taken to infinity, so the Higgs has decoupled and can be ignored, resulting in a nonlinear, affine representation of the field, instead of a linear representation — in contemporary terminology, a U(1) nonlinear σ-model.
Gauge-fixing B = 0 yields the Proca action.
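The displayed equations of this section did not survive extraction. For orientation, the following is a commonly quoted form of the Stueckelberg Lagrangian, reconstructed under the usual conventions (B the Stueckelberg scalar, m the mass, ε the gauge parameter); it is a sketch of the standard presentation, not a verbatim restoration of the original display:

```latex
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
            + \tfrac{1}{2}\,(\partial_\mu B + m A_\mu)(\partial^\mu B + m A^\mu),
\qquad
A_\mu \to A_\mu - \partial_\mu \varepsilon, \qquad B \to B + m\,\varepsilon .
```

Setting B = 0 then leaves −(1/4)F_{μν}F^{μν} + (1/2)m²A_μA^μ, the Proca terms referred to above.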
This explains why, unlike the case for non-abelian vector fields, quantum electrodynamics with a massive photon is, in fact, renormalizable, even though it is not manifestly gauge invariant (after the Stückelberg scalar has been eliminated in the Proca action).
Stueckelberg extension of the Standard Model
The Stueckelberg extension of the Standard Model (StSM) consists of a gauge invariant kinetic term for a massive U(1) gauge field. Such a term can be implemented into the Lagrangian of the Standard Model
without destroying the renormalizability of the theory and further provides a mechanism for
mass generation that is distinct from the Higgs mechanism in the context of Abelian gauge theories.
The model involves a non-trivial
mixing of the Stueckelberg and the Standard Model sectors by including an additional term in the effective Lagrangian of the Standard Model given by
The first term above is the Stueckelberg field strength, and are topological mass parameters and is the axion.
After symmetry breaking in the electroweak sector the photon remains massless. The model predicts a new type of gauge boson, dubbed the Stueckelberg Z′, which has a very distinct narrow decay width in this model. The St sector of the StSM decouples from the SM in a suitable limit.
Stueckelberg type couplings arise quite naturally in theories involving compactifications of higher-dimensional string theory, in particular, these couplings appear in the dimensional reduction of the ten-dimensional N = 1 supergravity coupled to supersymmetric Yang–Mills gauge fields in the presence of internal gauge fluxes. In the context of intersecting D-brane model building, products of U(N) gauge groups are broken to their SU(N) subgroups via the Stueckelberg couplings and thus the Abelian gauge fields become massive. Further, in a much simpler fashion one may consider a model with only one extra dimension (a type of Kaluza–Klein model) and compactify down to a four-dimensional theory. The resulting Lagrangian will contain massive vector gauge bosons that acquire masses through the Stueckelberg mechanism.
See also
Higgs mechanism#Affine Higgs mechanism
References
The edited PDF files of the physics course of Professor Stueckelberg, openly accessible, with commentary and complete biographical documents.
Review
|
https://en.wikipedia.org/wiki/Higher%20Education%20Statistics%20Agency
|
The Higher Education Statistics Agency (HESA) was the official agency for the collection, analysis and dissemination of quantitative information about higher education in the United Kingdom. HESA became a directorate of Jisc after a merger in 2022.
HESA was set up by agreement between the relevant government departments, the higher education funding councils and the universities and colleges in 1993, following the White Paper "Higher Education: a new framework", which called for more coherence in HE statistics, and the 1992 Higher and Further Education Acts, which established an integrated higher education system throughout the United Kingdom. In 2018 HESA became the Designated Data Body for higher education in England under the Higher Education and Research Act 2017, with designation passing to Jisc after the 2022 merger.
Data Collections
HESA collected data from all publicly funded higher education institutions (HEIs) in the UK as well as a small number of private providers. The annual data collection streams were:
Student data collection – information about students, courses and qualifications at HEIs
AP Student data collection – information about students, courses and qualifications at Alternative Providers of higher education
Staff data collection – information about staff employed by HEIs
Finance record – income and expenditure of HEIs
Graduate Outcomes – survey of graduate activities 15 months after leaving higher education
Aggregate offshore record – count of students studying wholly overseas for UK HE qualifications
HE Business and Community Interaction survey – information about interactions between HEIs and business and the wider community
Estates management record – buildings, estates and environmental information about HEIs
Initial Teacher Training (ITT) record
Unistats collection
Provider Profile collection
Destinations of Leavers from Higher Education – survey of graduate activities six months after leaving HE (2002/03 to 2016/17)
Statistical Outputs
HESA published statistics and analyses based on the data it collects:
Statistical Bulletins – Official Statistics outputs summarising each data stream
Annual open data releases – detailed statistical tables
Performance Indicators – comparative data on the performance of HEIs in widening participation, student retention, learning and teaching outcomes, research output and employment of graduates
Jisc processes HESA data to provide data extracts for research and publication by external users such as League tables of British universities.
See also
Department for Employment and Learning
GuildHE
Office for Students
Higher Education Funding Council for Wales
Jisc
Quality Assurance Agency for Higher Education
Scottish Funding Council
Skills Funding Agency
UCAS
Universities UK
Universities in the United Kingdom
Universities' Statistical Record
External links
Department for Business, Innovation and Skills
Higher education organisations based in the United Kingdom
Jisc
Organizatio
|
https://en.wikipedia.org/wiki/Word%20metric
|
In group theory, a word metric on a discrete group is a way to measure distance between any two elements of . As the name suggests, the word metric is a metric on , assigning to any two elements , of a distance that measures how efficiently their difference can be expressed as a word whose letters come from a generating set for the group. The word metric on G is very closely related to the Cayley graph of G: the word metric measures the length of the shortest path in the Cayley graph between two elements of G.
A generating set for must first be chosen before a word metric on is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done in geometric group theory.
Examples
The group of integers ℤ
The group of integers ℤ is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word that expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integers m and n in the word metric is equal to |m-n|, because the shortest word representing the difference m-n has length equal to |m-n|.
The group ℤ²
For a more illustrative example, the elements of the group ℤ² can be thought of as vectors in the Cartesian plane with integer coefficients. The group ℤ² is generated by the standard unit vectors e₁ = (1,0), e₂ = (0,1) and their inverses −e₁ = (−1,0), −e₂ = (0,−1). The Cayley graph of ℤ² is the so-called taxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point of ℤ² lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vector e₁ or −e₁, depending on whether the segment is travelled in the forward or backward direction, and each vertical segment represents e₂ or −e₂. A car starting from (1,2) and travelling along the streets to (−2,4) can make the trip by many different routes. But no matter what route is taken, the car must travel at least |1 − (−2)| = 3 horizontal blocks and at least |2 − 4| = 2 vertical blocks, for a total trip distance of at least 3 + 2 = 5. If the car goes out of its way the trip may be longer, but the minimal distance travelled by the car, equal in value to the word metric between (1,2) and (−2,4), is therefore equal to 5.
In general, given two elements u = (u₁, u₂) and v = (v₁, v₂) of ℤ², the distance between u and v in the word metric is equal to |u₁ − v₁| + |u₂ − v₂|.
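The formula can be checked directly against a breadth-first search of the Cayley graph. The Python sketch below is illustrative only (the helper names and the search cutoff are ours, not the article's); it confirms that the shortest-word length of the difference of the example points (1,2) and (−2,4) in the standard generators agrees with the taxicab formula.

```python
from collections import deque

GENS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # standard generators of Z^2 and their inverses

def word_length_bfs(target, max_len=50):
    """Length of the shortest word in GENS equal to `target` (BFS on the Cayley graph)."""
    start = (0, 0)
    if target == start:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        for gx, gy in GENS:
            nxt = (x + gx, y + gy)
            if nxt == target:
                return d + 1
            if nxt not in seen and d + 1 < max_len:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    raise ValueError("target not reached within max_len")

def word_metric(u, v):
    """d(u, v) = |u1 - v1| + |u2 - v2|, the taxicab distance described above."""
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

u, v = (1, 2), (-2, 4)
diff = (v[0] - u[0], v[1] - u[1])
assert word_length_bfs(diff) == word_metric(u, v) == 5
print(word_metric(u, v))   # 5
```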
Definition
Let G be a group, let S be a generating set for G, and suppose that S is closed under the inverse operation on G. A word over the set S is just a finite sequence whose entries are elements of S. The integer L is called the length of the word . Using the group operation in G, the entrie
|
https://en.wikipedia.org/wiki/Archimedean
|
Archimedean means of or pertaining to or named in honor of the Greek mathematician Archimedes and may refer to:
Mathematics
Archimedean absolute value
Archimedean circle
Archimedean constant
Archimedean copula
Archimedean field
Archimedean group
Archimedean point
Archimedean property
Archimedean solid
Archimedean spiral
Archimedean tiling
Other uses
Archimedean screw
Claw of Archimedes
The Archimedeans, the mathematical society of the University of Cambridge
Archimedean Dynasty
Archimedean Upper Conservatory
See also
Archimedes (disambiguation)
|
https://en.wikipedia.org/wiki/Charles%20Spearman
|
Charles Edward Spearman, FRS (10 September 1863 – 17 September 1945) was an English psychologist known for work in statistics, as a pioneer of factor analysis, and for Spearman's rank correlation coefficient. He also did seminal work on models for human intelligence, including his theory that disparate cognitive test scores reflect a single general intelligence factor and coining the term g factor.
Biography
Spearman had an unusual background for a psychologist. In his childhood he was ambitious to follow an academic career. He first joined the army as a regular officer of engineers in August 1883, and was promoted to captain on 8 July 1893, serving in the Munster Fusiliers. After 15 years he resigned in 1897 to study for a PhD in experimental psychology. In Britain, psychology was generally seen as a branch of philosophy and Spearman chose to study in Leipzig under Wilhelm Wundt, because it was a center of the "new psychology"—one that used the scientific method instead of metaphysical speculation. As Wundt was often absent due to his multiple duties and popularity, Spearman largely worked with Felix Krueger and Wilhelm Wirth, both of whom he admired. He started in 1897, and after some interruption (he was recalled to the army during the Second Boer war, and served as a Deputy Assistant Adjutant General from February 1900) he obtained his degree in 1906. He had already published his seminal paper on the factor analysis of intelligence (1904). Spearman met and impressed the psychologist William McDougall who arranged for Spearman to replace him when he left his position at University College London. Spearman stayed at University College until he retired in 1931. Initially he was Reader and head of the small psychological laboratory. In 1911 he was promoted to the Grote professorship of the Philosophy of Mind and Logic. His title changed to Professor of Psychology in 1928 when a separate Department of Psychology was created.
When Spearman was elected to the Royal Society in 1924 the citation read: Chief amongst these achievements was the discovery of the general factor in human intelligence, and his subsequent development of a theory of "g" and synthesis of empirical work on ability.
Spearman was strongly influenced by the work of Francis Galton. Galton did pioneering work in psychology and developed correlation, the main statistical tool used by Spearman.
In statistics, Spearman developed rank correlation (1904), a non-parametric version of the conventional Pearson correlation, as well as both the widely used correction for attenuation (1907), and the earliest version of 'factor analysis' (Lovie & Lovie, 1996, p. 81). His statistical work was not appreciated by his University College colleague Karl Pearson and there was a long feud between them.
Although Spearman achieved most recognition in his day for his statistical work, he regarded this work as subordinate to his quest for the fundamental laws of psychology, and he is now similarly re
|
https://en.wikipedia.org/wiki/Walsh%20matrix
|
In mathematics, a Walsh matrix is a specific square matrix of dimensions 2^n × 2^n, where n is some particular natural number. The entries of the matrix are either +1 or −1 and its rows as well as columns are orthogonal, i.e. dot product is zero. The Walsh matrix was proposed by Joseph L. Walsh in 1923. Each row of a Walsh matrix corresponds to a Walsh function.
The Walsh matrices are a special case of Hadamard matrices. The naturally ordered Hadamard matrix is defined by the recursive formula below, and the sequency-ordered Hadamard matrix is formed by rearranging the rows so that the number of sign changes in a row is in increasing order. Confusingly, different sources refer to either matrix as the Walsh matrix.
The Walsh matrix (and Walsh functions) are used in computing the Walsh transform and have applications in the efficient implementation of certain signal processing operations.
Formula
The Hadamard matrices of dimension 2^k for k ∈ N are given by the recursive formula (the lowest order of Hadamard matrix is 2):
and in general
for 2 ≤ k ∈ N, where ⊗ denotes the Kronecker product.
Permutation
Rearrange the rows of the matrix according to the number of sign change of each row. For example, in
the successive rows have 0, 3, 1, and 2 sign changes. If we rearrange the rows in sequency ordering:
then the successive rows have 0, 1, 2, and 3 sign changes.
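A minimal Python/NumPy sketch of this construction (the helper names are ours, for illustration only): it builds the naturally ordered Hadamard matrix by the Kronecker recursion, reproduces the sign-change counts 0, 3, 1, 2 of the 4 × 4 example above, and then reorders the rows into sequency order.

```python
import numpy as np

def hadamard(k):
    """Naturally ordered Hadamard matrix of size 2**k built by the Kronecker recursion."""
    H = np.array([[1]])
    block = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(H, block)
    return H

def sign_changes(row):
    """Number of adjacent sign changes in a +1/-1 row."""
    return int(np.sum(row[:-1] != row[1:]))

H = hadamard(2)                        # the 4 x 4 example from the text
print([sign_changes(r) for r in H])    # [0, 3, 1, 2]

# Sequency (Walsh) ordering: rows rearranged so the sign changes increase 0, 1, 2, 3
W = H[np.argsort([sign_changes(r) for r in H])]
print([sign_changes(r) for r in W])    # [0, 1, 2, 3]
```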
Alternative forms of the Walsh matrix
Sequency ordering
The sequency ordering of the rows of the Walsh matrix can be derived from the ordering of the Hadamard matrix by first applying the bit-reversal permutation and then the Gray-code permutation:
where the successive rows have 0, 1, 2, 3, 4, 5, 6, and 7 sign changes.
Dyadic ordering
where the successive rows have 0, 1, 3, 2, 7, 6, 4, and 5 sign changes.
Natural ordering
where the successive rows have 0, 7, 3, 4, 1, 6, 2, and 5 sign changes.
See also
Haar wavelet
Quincunx matrix
Hadamard transform
Code-division multiple access
() – rows of the (negated) binary Walsh matrices read as reverse binary numbers
– antidiagonals of the negated binary Walsh matrix read as binary numbers
References
Matrices
de:Hadamard-Matrix#Walsh-Matrizen
|
https://en.wikipedia.org/wiki/Jonathan%20ben%20Joseph
|
Jonathan ben Joseph was a Lithuanian rabbi and astronomer who lived in Risenoi, Grodno in the late 17th century and early 18th century. Jonathan studied astronomy and mathematics.
In 1710 Jonathan and his family lived a year in the fields due to a plague at Risenoi. He vowed that, on surviving, he would spread astronomical knowledge among his fellow believers. After he became blind, he went to Germany, where the bibliographer Wolf met him in 1725. Jonathan authored two astronomical commentaries: the Yeshu'ah be-Yisrael, on Maimonides' neomenia laws (Frankfort-on-the-Main, 1720); and Bi'ur, on Abraham ben Ḥiyya's Ẓurat ha-Areẓ (Offenbach, 1720).
References
17th-century astronomers
18th-century Polish–Lithuanian astronomers
Lithuanian astronomers
17th-century births
18th-century deaths
18th-century Lithuanian rabbis
17th-century Lithuanian rabbis
People from Grodno
|
https://en.wikipedia.org/wiki/Property%20P%20conjecture
|
In mathematics, the Property P conjecture is a statement about 3-manifolds obtained by Dehn surgery on a knot in the 3-sphere. A knot in the 3-sphere is said to have Property P if every 3-manifold obtained by performing (non-trivial) Dehn surgery on the knot is not simply-connected. The conjecture states that all knots, except the unknot, have Property P.
Research on Property P was started by R. H. Bing, who popularized the name and conjecture.
This conjecture can be thought of as a first step to resolving the Poincaré conjecture, since the Lickorish–Wallace theorem says any closed, orientable 3-manifold results from Dehn surgery on a link.
If a knot has Property P, then one cannot construct a counterexample to the Poincaré conjecture by surgery along .
A proof was announced in 2004, as the combined result of efforts of mathematicians working in several different fields.
Algebraic Formulation
Let λ, μ denote elements of the knot group corresponding to a preferred longitude and meridian of a tubular neighborhood of the knot K.
K has Property P if and only if its knot group is never trivialised by adjoining a relation of the form μλ^n = 1 for some integer n.
See also
Property R conjecture
References
3-manifolds
Conjectures that have been proved
|
https://en.wikipedia.org/wiki/Georges%20Henri%20Halphen
|
Georges-Henri Halphen (; 30 October 1844, Rouen – 23 May 1889, Versailles) was a French mathematician. He was known for his work in geometry, particularly in enumerative geometry and the singularity theory of algebraic curves, in algebraic geometry. He also worked on invariant theory and projective differential geometry.
Biography
He did his studies at École Polytechnique (X 1862), where he graduated in 1866. He continued his education at École d'Application de l'Artillerie et du Génie de Metz. As a lieutenant of Artillery he was sent first to Auxonne and then to Strasbourg. In 1872, Halphen settled in Paris, where he became a lecturer at the École Polytechnique and began his scientific studies. He completed his dissertation in 1878. In 1872 he married Rose Marguerite Aron, with whom he had eight children, four sons and four daughters. Of the four sons, three joined the military and two of them died in World War I. Louis Halphen (1880–1950) was a French historian who specialized in medieval times; Charles Halphen (1885–1915) was deputy secretary of the Société mathématique de France. One of his grandsons was Étienne Halphen (1911–1954), who did significant work in applied statistics.
Awards
Georges-Henri Halphen received the Steiner Prize of the Prussian Academy of Sciences in 1880, along with Max Noether. In 1881 Halphen received the Grand Prix of the Académie des sciences for his work on linear differential equations: Mémoire sur la Reduction des Equations Différentielles Linéaires aux Formes Intégrales. He received the Prix Poncelet in 1883 and the Prix Petit d'Ormoy in 1885. He was elected to the Académie des sciences in 1886 in the Section de Géométrie, replacing the deceased Jean Claude Bouquet. In 1887 Halphen was elected to the Accademia dei Lincei in Rome.
Works
Oeuvres de G.H. Halphen, in 4 vols. edited by Camille Jordan, Henri Poincaré, Charles Émile Picard with assistance from Ernest Vessiot, 1916, 1918, 1921, 1924, Paris, France: Gauthier-Villars
Traité des fonctions elliptiques et de leurs applications, 3 vols., 1886, 1888, 1891 (in vol. 2 applications to physics, geometry, the theory of integrals, and geodesy; in vol. 3 applications to algebra, especially the quintic equation, number theory — vol. 3 consists merely of fragments)
An overview of Halphen's work is provided by Laurent Gruson and a complete list of the works was compiled by Camille Jordan as part of Halphen's obituary in Journal de Mathématiques Pures et Appliquées.
See also
Bézout's theorem
Cramer's paradox
References
External links
Jewish Encyclopedia
(French) Biography on the Université de Rouen site
A few of Halphen's works available online
1844 births
1889 deaths
19th-century French mathematicians
Algebraic geometers
Differential geometers
École Polytechnique alumni
Members of the French Academy of Sciences
|
https://en.wikipedia.org/wiki/Non-Archimedean
|
In mathematics and physics, non-Archimedean refers to something without the Archimedean property. This includes:
Ultrametric space
notably, p-adic numbers
Non-Archimedean ordered field, namely:
Levi-Civita field
Hyperreal numbers
Surreal numbers
Dehn planes
Non-Archimedean time in theoretical physics
|
https://en.wikipedia.org/wiki/William%20Goldman%20%28mathematician%29
|
William Mark Goldman (born 1955 in Kansas City, Missouri) is a professor of mathematics at the University of Maryland, College Park (since 1986). He received a B.A. in mathematics from Princeton University in 1977, and a Ph.D. in mathematics from the University of California, Berkeley in 1980.
Research contributions
Goldman has investigated geometric structures, in various incarnations, on manifolds since his undergraduate thesis, "Affine manifolds and projective geometry on manifolds", supervised by William Thurston and Dennis Sullivan. This work led to work with Morris Hirsch and David Fried on affine structures on manifolds, and work in real projective structures on compact surfaces. In particular he proved that the space of convex real projective structures on a closed orientable surface of genus is homeomorphic to an open cell of dimension . With Suhyoung Choi, he proved that this space is a connected component (the "Hitchin component") of the space of equivalence classes of representations of the fundamental group in . Combining this result with Suhyoung Choi's convex decomposition theorem, this led to a complete classification of convex real projective structures on compact surfaces.
His doctoral dissertation, "Discontinuous groups and the Euler class" (supervised by Morris W. Hirsch), characterizes discrete embeddings of surface groups in in terms of maximal Euler class, proving a converse to the Milnor–Wood inequality for flat bundles. Shortly thereafter he showed that the space of representations of the fundamental group of a closed orientable surface of genus in has connected components, distinguished by the Euler class.
With David Fried, he classified compact quotients of Euclidean 3-space by discrete groups of affine transformations, showing that all such manifolds are finite quotients of torus bundles over the circle. The noncompact case is much more interesting, as Grigory Margulis found complete affine manifolds with nonabelian free fundamental group. In his 1990 doctoral thesis, Todd Drumm found examples which are solid handlebodies using polyhedra which have since been called "crooked planes."
Goldman found examples (non-Euclidean nilmanifolds and solvmanifolds) of closed 3-manifolds which fail to admit flat conformal structures.
Generalizing Scott Wolpert's work on the Weil–Petersson symplectic structure on the space of hyperbolic structures on surfaces, he found an algebraic-topological description of a symplectic structure on spaces of representations of a surface group in a reductive Lie group. Traces of representations of the corresponding curves on the surfaces generate a Poisson algebra, whose Lie bracket has a topological description in terms of the intersections of curves. Furthermore, the Hamiltonian vector fields of these trace functions define flows generalizing the Fenchel–Nielsen flows on Teichmüller space. This symplectic structure is invariant under the natural action of the mapping class group, and
|
https://en.wikipedia.org/wiki/Topology%20%28disambiguation%29
|
Topology is a branch of mathematics concerned with geometric properties preserved under continuous deformation (stretching without tearing or gluing).
Topology may also refer to:
Math
Topology, the collection of open sets used to define a topological space
Algebraic topology
Differential topology
Discrete topology
General topology
Geometric topology
Grothendieck topology of a category
Lawvere–Tierney topology of a topos
Point set topology
Trivial topology
Electronics
Topology (electronics), a configuration of electronic components
Computing
Network topology, configurations of computer networks
Logical topology, the arrangement of devices on a computer network and how they communicate with one another
Geospatial data
Geospatial topology, the study or science of places with applications in earth science, geography, human geography, and geomorphology
In geographic information systems and their data structures, topology and planar enforcement are the storing of a border line between two neighboring areas (and the border point between two connecting lines) only once. Thus, any rounding errors might move the border, but will not lead to gaps or overlaps between the areas.
Also in cartography, a topological map is a greatly simplified map that preserves the mathematical topology while sacrificing scale and shape
Topology is often confused with the geographic meaning of topography (originally the study of places). The confusion may be a factor in topographies having become confused with terrain or relief, such that they are essentially synonymous.
Biology
The specific orientation of transmembrane proteins
In phylogenetics, the branching pattern of a phylogenetic tree
Music
Topology (musical ensemble), an Australian post-classical quintet
Topology (album), 1981 album by Joe McPhee
Other
Topology (journal), a mathematical journal, with an emphasis on subject areas related to topology and geometry
Spatial effects that cannot be described by topography, i.e., social, economical, spatial, or phenomenological interactions
|
https://en.wikipedia.org/wiki/Institute%20for%20Research%20in%20Fundamental%20Sciences
|
The Institute for Research in Fundamental Sciences (IPM; , Pazhuheshgah-e Daneshhai-ye Boniadi), previously Institute for Studies in Theoretical Physics and Mathematics, is an advanced public research institute in Tehran, Iran. IPM is directed by Mohammad-Javad Larijani, its original founder. The institute was the first Iranian organization to connect to the Internet and provide internet service to the nation. It is the domain name registry of .ir domain names.
The institute's activities are directed along several routes:
The institute conducts research along the lines that led to its inception, both independently and in cooperation with other research institutes inside the country and abroad.
The institute carries out conferences as well as joint research projects, and exchanges researchers to establish links with other research institutes and scientific communities within and outside Iran.
The institute provides facilities as well as financial support and opportunity for sabbaticals for researchers belonging to other institutes and universities.
The institute tries to provide the atmosphere necessary for attracting Iranian researchers and scientists from around the world.
The institute conducts graduate study programs to train researchers in areas where it wishes to increase the available manpower.
The institute publicizes its scientific findings through books, journals, and scientific gatherings.
The institute provides scientific and cultural services that fit within the framework of its activities.
The institute seeks to recognize the basic needs of the country in fundamental sciences.
The institute has established a national scientific network over the intranet in order to connect all scientific and research centers and to develop the corresponding technologies in Iran.
The pillars of the institute are its board of governors, the director of the institute, and the institute's scientific council. The institute has four campuses, all north of Tehran in the Farmanieh district, immediately south of Niavaran. It offers advanced PhD degrees in areas such as mathematical logic, astronomy, particle physics and analytical philosophy, among others. At present the institute comprises nine schools:
School of Astronomy
School of Biological Sciences
School of Cognitive Sciences
School of Computer Science
School of Mathematics
School of Nano Science
School of Particles and Accelerators
School of Philosophy
School of Physics
IPM hosts key science projects namely Iranian National Observatory and the Iranian Light Source Facility
Notable faculty
Farhad Ardalan, physics
Mehdi Golshani, physics
Hamid Vahid-Dastjerdi, philosophy
Hashem Rafii Tabar
See also
Higher education in Iran
IPM School of Cognitive Sciences
Science and technology in Iran
External links
Research institutes in Iran
Physics research institutes
Universities in Iran
Education in Tehran
Research institutes established in 1989
Domain name
|
https://en.wikipedia.org/wiki/NHL%20Plus-Minus%20Award
|
The NHL Plus-Minus Award was a trophy awarded annually by the National Hockey League to the ice hockey "player, having played a minimum of 60 games, who leads the league in plus-minus statistics." It was sponsored by a commercial business, and it had been known under five different names. First given for performance during the season, Wayne Gretzky won the award the most times, with three. Gretzky also led the league once prior to the inception of the award. Bobby Orr has led the NHL the most times in plus-minus, with six, all prior to the inception of the award. The award was discontinued after being awarded to Pavel Datsyuk following the season.
History
The plus/minus statistic was first established during the 1967–68 NHL season. This statistic reflects a player's ability to contribute offensively and defensively. The award was first given at the end of the season. From to , it was known as the Emery Edge Award. During , there was no formal name for the Award. From to , it was known as the Alka-Seltzer Plus Award. From to , it was known as the Bud Ice Plus-Minus Award. Finally, from to , it was known as the Bud Light Plus-Minus Award.
Three-time winner Wayne Gretzky won the award the most times out of any player, and is one of only three repeat winners, joined by two-time winners John LeClair and Chris Pronger. Gretzky recorded the highest single-season result, +100, of all Award winners. The Award was won by players on the Edmonton Oilers and Detroit Red Wings four times each, with three wins by players on the Calgary Flames, Colorado Avalanche, Philadelphia Flyers, Pittsburgh Penguins and St. Louis Blues.
Including the 20 seasons the League tracked plus-minus as a statistic without an award, the Boston Bruins have led the League 11 times (six by Bobby Orr and twice by David Krejci, the only other repeat leaders), Edmonton Oilers five times (four by Wayne Gretzky, including three official awards) and four times each by players on the Detroit Red Wings, Montreal Canadiens and Philadelphia Flyers.
Winners
* Season shortened by the 1994–95 NHL lockout
Bold Player with the best plus-minus ever recorded in a season.
Plus-minus leaders (1967–1982)
Before 1983, there was no award for leading the League in plus-minus. The NHL started counting the statistics in 1967, and this lists all the leaders from the inception of the statistic to the inception of the award.
Plus-minus leaders (2008–present)
* Season shortened by the 2012–13 NHL lockout
† Season shortened by the COVID-19 pandemic
See also
List of National Hockey League awards
List of NHL players
List of NHL statistical leaders
Notes
During the 1990–91 season, there was a tie between Marty McSorley of the Los Angeles Kings and Theoren Fleury of the Calgary Flames.
During the 1998–99 season, Alexander Karpovtsev led the League with a +39 rating. However, he played in 58 games and was ineligible since there is a 60-game minimum. Therefore, LeClair was given the award with a +36 rati
|
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20%CF%80
|
In mathematics, the Leibniz formula for π, named after Gottfried Wilhelm Leibniz, states that
π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ⋯,
an alternating series. It is sometimes called the Madhava–Leibniz series as it was first discovered by the Indian mathematician Madhava of Sangamagrama or his followers in the 14th–15th century (see Madhava series), and was later independently rediscovered by James Gregory in 1671 and Leibniz in 1673. The Taylor series for the inverse tangent function, often called Gregory's series, is:
arctan x = x − x^3/3 + x^5/5 − x^7/7 + ⋯
The Leibniz formula is the special case x = 1, since arctan 1 = π/4.
It also is the Dirichlet L-series of the non-principal Dirichlet character of modulus 4 evaluated at s = 1, and, therefore, the value β(1) of the Dirichlet beta function.
Proofs
Proof 1
Considering only the integral in the last term, we have:
Therefore, by the squeeze theorem, as , we are left with the Leibniz series:
Proof 2
Let , when , the series to be converges uniformly, then
Therefore, if approaches so that it is continuous and converges uniformly, the proof is complete, where, the series to be converges by the Leibniz's test, and also, approaches from within the Stolz angle, so from Abel's theorem this is correct.
Convergence
Leibniz's formula converges extremely slowly: it exhibits sublinear convergence. Calculating π to 10 correct decimal places using direct summation of the series requires precisely five billion terms, because 1/(2k + 1) < 10^(−10) only for k > 5 × 10^9 − 1/2 (one needs to apply the Calabrese error bound). To get 4 correct decimal places (error of 0.00005) one needs 5000 terms. Error bounds even better than those of Calabrese or Johnsonbaugh are available.
However, the Leibniz formula can be used to calculate π to high precision (hundreds of digits or more) using various convergence acceleration techniques. For example, the Shanks transformation, Euler transform or Van Wijngaarden transformation, which are general methods for alternating series, can be applied effectively to the partial sums of the Leibniz series. Further, combining terms pairwise gives the non-alternating series
Σ_{k=0}^{∞} 2/((4k + 1)(4k + 3)) = 2/(1·3) + 2/(5·7) + 2/(9·11) + ⋯ = π/4,
which can be evaluated to high precision from a small number of terms using Richardson extrapolation or the Euler–Maclaurin formula. This series can also be transformed into an integral by means of the Abel–Plana formula and evaluated using techniques for numerical integration.
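As a rough numerical illustration (a Python sketch with term counts chosen by us, not taken from the article), one can compare the raw partial sums, the pairwise-combined series, and a few iterations of the Shanks transformation mentioned above:

```python
import math

def leibniz_partial(n_terms):
    """Direct partial sum: 4 * sum_{k<n} (-1)^k / (2k + 1)."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def pairwise_partial(n_pairs):
    """Terms combined in pairs: 8 * sum_{k<n} 1 / ((4k + 1)(4k + 3))."""
    return 8.0 * sum(1.0 / ((4 * k + 1) * (4 * k + 3)) for k in range(n_pairs))

def shanks(s):
    """One application of the Shanks transformation to a list of partial sums."""
    return [(s[i + 1] * s[i - 1] - s[i] ** 2) / (s[i + 1] + s[i - 1] - 2 * s[i])
            for i in range(1, len(s) - 1)]

for n in (10, 1_000, 100_000):
    print(n, abs(math.pi - leibniz_partial(n)), abs(math.pi - pairwise_partial(n)))
    # Both errors shrink only like 1/n: the pairwise form removes the alternation,
    # not the slow decay, which is why the extrapolation methods named above help.

acc = [leibniz_partial(n) for n in range(1, 12)]
for _ in range(4):                      # iterate the Shanks transformation a few times
    acc = shanks(acc)
print(abs(math.pi - acc[-1]))           # orders of magnitude smaller than the raw error
```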
Unusual behaviour
If the series is truncated at the right time, the decimal expansion of the approximation will agree with that of for many more digits, except for isolated digits or digit groups. For example, taking five million terms yields
where the underlined digits are wrong. The errors can in fact be predicted; they are generated by the Euler numbers according to the asymptotic formula
where N is an integer divisible by 4. If N is chosen to be a power of ten, each term in the right sum becomes a finite decimal fraction. The formula is a special case of the Euler–Boole summation formula for alternating series, providing yet another example of a convergence acceleration technique that can be a
|
https://en.wikipedia.org/wiki/Wallis%20product
|
In mathematics, the Wallis product for π, published in 1656 by John Wallis, states that
π/2 = ∏_{n=1}^{∞} (2n/(2n − 1)) · (2n/(2n + 1)) = (2/1)·(2/3) · (4/3)·(4/5) · (6/5)·(6/7) · ⋯
Proof using integration
Wallis derived this infinite product using interpolation, though his method is not regarded as rigorous. A modern derivation can be found by examining the integrals ∫_0^π sin^n(x) dx for even and odd values of n, and noting that for large n, increasing n by 1 results in a change that becomes ever smaller as n increases. Let
I(n) = ∫_0^π sin^n(x) dx.
(This is a form of Wallis' integrals.) Integrating by parts yields the reduction formula
I(n) = ((n − 1)/n) · I(n − 2).
Now, we make two variable substitutions for convenience to obtain:
We obtain the values I(0) = π and I(1) = 2 for later use.
Now, we calculate I(2n) for even values by repeatedly applying the recurrence relation result from the integration by parts. Eventually, we get down to I(0), which we have calculated.
Repeating the process for odd values ,
We make the following observation, based on the fact that
Dividing by :
, where the equality comes from our recurrence relation.
By the squeeze theorem,
Proof using Laplace's method
See the main page on Gaussian integral.
Proof using Euler's infinite product for the sine function
While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function.
Let x = π/2:
Relation to Stirling's approximation
Stirling's approximation for the factorial function n! asserts that
n! ≈ √(2πn) · (n/e)^n  (as n → ∞).
Consider now the finite approximations to the Wallis product, obtained by taking the first n terms in the product
p_n = ∏_{k=1}^{n} (2k)^2 / ((2k − 1)(2k + 1)),
where p_n can be written as
p_n = 2^(4n) (n!)^4 / (((2n)!)^2 (2n + 1)).
Substituting Stirling's approximation in this expression (both for n! and for (2n)!) one can deduce (after a short calculation) that p_n converges to π/2 as n → ∞.
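A short numerical check (a Python sketch with term counts chosen by us) shows the finite products p_n approaching π/2, with a gap that shrinks roughly like 1/n, in line with the Stirling-based argument above:

```python
import math

def wallis_partial(n):
    """p_n: product of the first n factors (2k)^2 / ((2k - 1)(2k + 1))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) ** 2 / ((2 * k - 1) * (2 * k + 1))
    return p

for n in (10, 1_000, 100_000):
    p = wallis_partial(n)
    print(n, p, math.pi / 2 - p)
# The gap pi/2 - p_n shrinks roughly like pi / (8 n), i.e. slow first-order convergence,
# so many factors are needed for even a few correct digits.
```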
Derivative of the Riemann zeta function at zero
The Riemann zeta function and the Dirichlet eta function can be defined:
ζ(s) = Σ_{n=1}^{∞} 1/n^s  and  η(s) = Σ_{n=1}^{∞} (−1)^(n−1)/n^s.
Applying an Euler transform to the latter series, the following is obtained:
See also
John Wallis, English mathematician who is given partial credit for the development of infinitesimal calculus and pi.
Viète's formula, a different infinite product formula for .
Leibniz formula for , an infinite sum that can be converted into an infinite Euler product for .
Wallis sieve
The Pippenger product formula obtains e by taking roots of terms in the Wallis product.
Notes
External links
Articles containing proofs
Pi algorithms
Infinite products
|
https://en.wikipedia.org/wiki/Spiral%20%28disambiguation%29
|
A spiral is a curve which emanates from a central point, getting progressively farther away as it revolves around the point.
Spiral may also refer to:
Science, mathematics and art
Spiral galaxy, a type of galaxy in astronomy
Spiral Dynamics, a theory of human development
Spiral cleavage, a type of cleavage in embryonic development
Victoria and Albert Museum Spiral, a proposed (abandoned in 2004) controversial extension to the museum
Spiral (arts alliance), an African-American art collective
Spiral (publisher), a New Zealand women's publisher and art collective
Spiral model, a software development process
Transport
Spiral (railway), a technique employed by railways to ascend steep hills
Spiral bridge, a similar technique for roads
9K114 Shturm, an anti-tank missile that is known under the NATO reporting name as AT-6 Spiral
Mikoyan-Gurevich MiG-105 Spiral, a Soviet spaceplane
Spiral dive, a type of generally undesirable and accidental descent manoeuvre in an aircraft
Film and television
Spiral (1978 film), a Polish film
Spiral (1998 film), a Japanese film
Uzumaki (film), or Spiral, a 2000 Japanese film
Spiral (2007 film), an American film
Spiral (2014 film), a Russian film
Spiral (2019 film), a Canadian film
Spiral (2021 film), an American film, part of the Saw horror franchise
Spiral (TV series), English title of French thriller series Engrenages
Spiral: The Bonds of Reasoning, a 2002 Japanese anime series
"Spiral" (Buffy the Vampire Slayer), a 2001 TV series episode
"Spiral", 2010 television series episode of Haven (season 1)
"Spiral", 2015 episode of NCIS: Los Angeles (season 6)
Glen Coroner, a.k.a. Spiral, 2006 contestant in the UK Big Brother TV show
Spiral, a character from the television series Pac-Man and the Ghostly Adventures
Books and comics
Spiral (Suzuki novel), a 1995 Japanese book in the Ring series
Spiral (comics), comic book character
Spiral: The Bonds of Reasoning, a 2002 mystery anime and manga series
Spiral (Tunnels novel), 2011/12 novel by Roderick Gordon
Spiralis/Spiril, the Bokmål/Danish name for the Marsupilami
Uzumaki, a 1998 horror manga series
Music
Spiraling (band)
Albums
Spiral (Allison Crowe album), 2010
Spiral (Andrew Hill album), 1975
Spiral (Bobby Hutcherson album), 1979
Spiral (Darkside album), 2021
Spiral (Hiromi album), 2006
Spiral (Kenny Barron album), 1982
Spiral (Rezz album), 2021
Spiral (Vangelis album), 1977
Spirals (album), by Blood Has Been Shed, 2003
Songs and pieces
Spiral (Norman), a 2018 orchestral composition by Andrew Norman
Spiral (Stockhausen), a 1968 process-music composition by Karlheinz Stockhausen
"Spiral" (Arne Bendiksen song), the Norwegian Eurovision Song Contest 1964 entry by Arne Joachim Bendiksen
"Spiral" (Paul McCartney song), from Working Classical, 1999
"Spiral" (Pendulum song), a 2003 song by Australian drum and bass group Pendulum
"Spiralling" (song), a 2008 song by Keane
Spiral, track four from John Coltrane's Giant Steps
"Spi
|
https://en.wikipedia.org/wiki/Heinrich%20Gr%C3%A4fe
|
Heinrich Gräfe or Graefe (March 3, 1802 – July 22, 1868), German educator, was born at Buttstädt in Saxe-Weimar.
He studied mathematics and theology at Jena, and in 1823 obtained a curacy in the town church of Weimar. He was transferred to Jena as rector of the town school in 1825; in 1840 he was also appointed extraordinary professor of the science of education (Pädagogik) in that university; and in 1842 he became head of the Bürgerschule (middle-class school) in Kassel.
After reorganizing the schools of the town, he became director of the new Realschule in 1843; and, devoting himself to the interests of educational reform in the Electorate of Hesse, he became in 1849 a member of the school commission, and also entered the house of representatives, where he made himself somewhat formidable as an agitator.
In 1852, for having been implicated in the September riots and in the movement against the unpopular minister Hassenpflug, who had dissolved the school commission, he was condemned to three years' imprisonment, a sentence afterwards reduced to one of twelve months. On his release he withdrew to Geneva, where he was engaged at the International Boarding School La Châtelaine (owner and director Achilles Roediger) until 1855, when he was appointed director of the Realschule in der Altstadt at Bremen, a post he held until his death on 21 July 1868. His successor was Franz Georg Philipp Buchenau.
Besides being the author of many text-books and occasional papers on educational subjects, he wrote Das Rechtsverhältnis der Volksschule von innen und aussen (1829); Die Schulreform (1834); Schule und Unterricht (1839); Allgemeine Pädagogik (1845); Die deutsche Volksschule (1847). Together with Naumann, he also edited the Archiv für das praktische Volksschulwesen (1828-1835).
Notes
1802 births
1868 deaths
People from Buttstädt
People from Saxe-Weimar
|
https://en.wikipedia.org/wiki/Irish%20Mathematical%20Society
|
The Irish Mathematical Society () or IMS is the main professional organisation for mathematicians in Ireland. The society aims to further mathematics and mathematical research in Ireland. Its membership is international, but it mainly represents mathematicians in universities and other third level institutes in Ireland. It publishes a bulletin, The Bulletin of the Irish Mathematical Society, twice per year and runs an annual conference in September.
The society was founded on 14 April 1976 at a meeting in Trinity College, Dublin when a constitution drafted by D McQuillan, John T. Lewis and Trevor West was accepted. It is a member organization in the European Mathematical Society. Since 2020, it has been the adhering organization for Ireland's membership of the International Mathematical Union. The logo was designed by Irish mathematician Desmond MacHale.
Bulletin of the Irish Mathematical Society
The Bulletin of the Irish Mathematical Society is a journal that has been published since 1986, and was preceded by the Newsletter of the Irish Mathematical Society. It accepts articles that are of interest to both Society members and the wider mathematical community. Articles include original research articles, expository survey articles, biographical and historical articles, classroom notes and book reviews. It also includes a problem page. Articles are available online.
Officers
Current officers of the society are listed in the bulletin.
References
External links
IMS website
Learned societies of Ireland
Mathematical societies
Professional associations based in Ireland
1976 establishments in Ireland
|
https://en.wikipedia.org/wiki/Round
|
Round or rounds may refer to:
Mathematics and science
The contour of a closed curve or surface with no sharp corners, such as an ellipse, circle, rounded rectangle, cant, or sphere
Rounding, the shortening of a number to reduce the number of significant figures it contains
Round number, a number that ends with one or more zeroes
Roundness (geology), the smoothness of clastic particles
Roundedness, rounding of lips when pronouncing vowels
Labialization, rounding of lips when pronouncing consonants
Music
Round (music), a type of musical composition
Rounds (album), a 2003 album by Four Tet
Places
The Round, a defunct theatre in the Ouseburn Valley, Newcastle upon Tyne, England
Round Point, a point on the north coast of King George Island, South Shetland Islands
Grand Rounds Scenic Byway, a parkway system in Minneapolis
Rounds Mountain, a peak in the Taconic Mountains, United States
Round Mountain (disambiguation), several places
Round Valley (disambiguation), several places
Repeated activities
Round (boxing), a time period within a boxing match
Round (cryptography), a basic crypto transformation
Grand rounds, a ritual in medical education and inpatient care
Round of drinks, a traditional method of paying in a drinking establishment
Funding round, a discrete round of investment in a business
Doing the rounds or patrol, moving through an area at regular or irregular intervals
Round (Theosophy), a planetary cycle of reincarnation in Theosophy
Round (dominoes), period of play in dominoes in which each player plays a piece or passes
A circular walk or run like the Bob Graham Round
Other
Round (surname)
Rounds (surname)
Round shot
Cartridge (firearms), a single unit of ammunition
Round steak, a cut of meat
Cattle
Bullion coins that are not legal tender, e.g. silver rounds
See also
Circle
Roundabout
Around (disambiguation)
Round and Round (disambiguation)
Round Hill (disambiguation)
Roundness (disambiguation)
OR
es:Ronda
|
https://en.wikipedia.org/wiki/Defense%20independent%20pitching%20statistics
|
In baseball, defense-independent pitching statistics (DIPS) (also referred to as fielding-independent pitching, or FIP) is intended to measure a pitcher's effectiveness based only on statistics that do not involve fielders (except the catcher). These include home runs allowed, strikeouts, hit batters, walks, and, more recently, fly ball percentage, ground ball percentage, and (to a much lesser extent) line drive percentage. By focusing on these statistics and ignoring what happens once a ball is put in play, which – on most plays – the pitcher has little control over, DIPS claims to offer a clearer picture of the pitcher's true ability.
The most controversial part of DIPS is the idea that pitchers have little influence over what happens to balls that are put into play. Some people believe this has been well-established (see below), primarily by showing the large variability of most pitchers' BABIP from year to year. However, there is a wide variation in career BABIP among pitchers, and this seems to correlate with career success. For instance, no pitcher in the Hall of Fame has a below-average career BABIP.
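The excerpt does not spell out how BABIP is computed; a commonly used definition is (H − HR) / (AB − K − HR + SF), that is, non-home-run hits divided by balls put in play. The Python sketch below relies on that assumed definition and on an invented stat line, purely for illustration:

```python
def babip(hits, home_runs, at_bats, strikeouts, sac_flies=0):
    """Batting average on balls in play: non-home-run hits divided by balls put in play
    (at-bats minus strikeouts and home runs, plus sacrifice flies)."""
    balls_in_play = at_bats - strikeouts - home_runs + sac_flies
    return (hits - home_runs) / balls_in_play

# Hypothetical season line for a pitcher (not real data): 180 hits allowed,
# 20 home runs, 650 at-bats against, 170 strikeouts, 5 sacrifice flies.
print(round(babip(180, 20, 650, 170, 5), 3))   # ~0.344
```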
Origin of DIPS
In 1999, Voros McCracken became the first to detail and publicize these effects to the baseball research community when he wrote on rec.sport.baseball, "I've been working on a pitching evaluation tool and thought I'd post it here to get some feedback. I call it 'Defensive Independent Pitching' and what it does is evaluate a pitcher base[d] strictly on the statistics his defense has no ability to affect..." Until the publication of a more widely read article in 2001, however, on Baseball Prospectus, most of the baseball research community believed that individual pitchers had an inherent ability to prevent hits on balls in play. McCracken reasoned that if this ability existed, it would be noticeable in a pitcher's 'Batting Average on Balls In Play' (BABIP). His research found the opposite to be true: that while a pitcher's ability to cause strikeouts or prevent home runs remained somewhat constant from season to season, his ability to prevent hits on balls in play did not.
To better evaluate pitchers in light of his theory, McCracken developed "Defense-Independent ERA" (dERA), the most well-known defense-independent pitching statistic. McCracken's formula for dERA is very complicated, with a number of steps. DIPS ERA is not as useful for knuckleballers and other "trick" pitchers, a factor that McCracken mentioned a few days after his original announcement of his research findings in 1999, in a posting on the rec.sport.baseball.analysis Usenet site on November 23, 1999, when he wrote: "Also to [note] is that, anecdotally, I believe pitchers with trick deliveries (e.g. Knuckleballers) might post consistently lower $H numbers than other pitchers. I looked at Tim Wakefield's career and that seems to bear out slightly".
In later postings on the rec.sport.baseball site during 1999 and 2000 (prior to the publication of his widely re
|
https://en.wikipedia.org/wiki/Statistics%20relating%20to%20enlargement%20of%20the%20European%20Union
|
This is a sequence of tables giving statistical data for past and future enlargements of the European Union. All data refer to the populations, land areas, and gross domestic products (GDP) of the respective countries at the time of their accession to the European Union, illustrating historically accurate changes to the Union. The GDP figures are at purchasing power parity, in United States dollars at 1990 prices.
Past enlargements
Foundation
1973 enlargement
1981 enlargement
1986 enlargement
1990 enlargement
1995 enlargement
2004 enlargement
2007 enlargement
2013 enlargement
UK withdrawal
Candidate countries
EU27
Albania
Montenegro
Moldova
North Macedonia
Serbia
Turkey
Ukraine
All Candidates
Note: All data sourced from individual country entries on Wikipedia. Populations usually 2021 estimates; historical/future estimates not used. Figures are approximate due to fluctuations in population and economies.
See also
Demographics of the European Union
Footnotes
1. Algeria was part of France until 1962.
2. German reunification in 1990 led to the inclusion of the territory of the former German Democratic Republic. This enlargement is not explicitly mentioned. Data for Germany in all tables is from current statistics.
3. Greenland left the EC in 1985.
4. Officially the whole of Cyprus lies within the European Union. "In light of Protocol 10 of the Accession Treaty 2003 Cyprus as a whole entered the EU, whereas the acquis is suspended in the northern part of the island ("areas not under effective control of the Government of the Republic of Cyprus"). This means inter alia that these areas are outside the customs and fiscal territory of the EU. The suspension has territorial effect, but does not concern the personal rights of Turkish Cypriots as EU citizens, as they are considered as citizens of the Member State Republic of Cyprus".
References
Citations
Enlargement of the European Union
|
https://en.wikipedia.org/wiki/Mixing%20%28mathematics%29
|
In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world: e.g. mixing paint, mixing drinks, industrial mixing.
The concept appears in ergodic theory—the study of stochastic processes and measure-preserving dynamical systems. Several different definitions for mixing exist, including strong mixing, weak mixing and topological mixing, with the last not requiring a measure to be defined. Some of the different definitions of mixing can be arranged in a hierarchical order; thus, strong mixing implies weak mixing. Furthermore, weak mixing (and thus also strong mixing) implies ergodicity: that is, every system that is weakly mixing is also ergodic (and so one says that mixing is a "stronger" condition than ergodicity).
Informal explanation
The mathematical definition of mixing aims to capture the ordinary every-day process of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, and so on. To provide the mathematical rigor, such descriptions begin with the definition of a measure-preserving dynamical system, written as (X, 𝒜, μ, T).
The set X is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure μ is understood to define the natural volume of the space X and of its subspaces. The collection of subspaces is denoted by 𝒜, and the size of any given subset A ⊆ X is μ(A); the size is its volume. Naively, one could imagine 𝒜 to be the power set of X; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach-Tarski paradox). Thus, conventionally, 𝒜 consists of the measurable subsets—the subsets that do have a volume. It is always taken to be a Borel set—the collection of subsets that can be constructed by taking intersections, unions and set complements; these can always be taken to be measurable.
The time evolution of the system is described by a map T : X → X. Given some subset A ⊆ X, its image T(A) will in general be a deformed version of A – it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set T(A) must have the same volume as A; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).
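Strong mixing, mentioned at the start of the article, requires that μ(T⁻ⁿA ∩ B) → μ(A)·μ(B) as n → ∞ for all measurable sets A and B. The Python sketch below illustrates this with the doubling map T(x) = 2x mod 1 on the unit interval with Lebesgue measure (a standard strongly mixing, measure-preserving example chosen by us; it is not discussed in this excerpt):

```python
import random

def T(x):
    """Doubling map on [0, 1): a standard measure-preserving, strongly mixing map."""
    return (2 * x) % 1.0

A = (0.0, 0.5)     # mu(A) = 0.5
B = (0.2, 0.45)    # mu(B) = 0.25

def estimate(n_iter, samples=200_000):
    """Monte Carlo estimate of mu(T^{-n}A intersect B) = P(x in B and T^n(x) in A)."""
    hits = 0
    for _ in range(samples):
        x = random.random()
        if B[0] <= x < B[1]:
            y = x
            for _ in range(n_iter):
                y = T(y)
            if A[0] <= y < A[1]:
                hits += 1
    return hits / samples

for n in (1, 5, 20):
    print(n, estimate(n), "-> target", 0.5 * 0.25)   # approaches mu(A) * mu(B) = 0.125
```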
A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be with . Worse, a single point has no size. These difficulties can be avoided by working with the inverse map ; it will map any given subset to the parts that were assembled to make it: these parts are . It has the important property of not "losing track" of where things came f
|
https://en.wikipedia.org/wiki/Imputation
|
Imputation can refer to:
Imputation (law), the concept that ignorance of the law does not excuse
Imputation (statistics), substitution of some value for missing data
Imputation (genetics), estimation of unmeasured genotypes
Theory of imputation, the theory that factor prices are determined by output prices
Imputation (game theory), a distribution that benefits each player who cooperates in a game
Imputed righteousness, a concept in Christian theology
Double imputation, a concept in Christian theology
Imputation of sin, a theory for the transmission of original sin from Adam to his progeny
See also
Geo-imputation, a method in geographical information systems
Dividend imputation, a method of attributing a company's income tax to its shareholders
|
https://en.wikipedia.org/wiki/Imputation%20%28statistics%29
|
In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency. Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data. There have been many theories embraced by scientists to account for missing data but the majority of them introduce bias. A few of the well known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.
Listwise (complete case) deletion
By far, the most common means of dealing with missing data is listwise deletion (also known as complete case), which is when all cases with a missing value are deleted. If the data are missing completely at random, then listwise deletion does not add any bias, but it does decrease the power of the analysis by decreasing the effective sample size. For example, if 1000 cases are collected but 80 have missing values, the effective sample size after listwise deletion is 920. If the cases are not missing completely at random, then listwise deletion will introduce bias because the sub-sample of cases represented by the missing data are not representative of the original sample (and if the original sample was itself a representative sample of a population, the complete cases are not representative of that population either). While listwise deletion is unbiased when the missing data is missing completely at random, this is rarely the case in actuality.
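A small pandas sketch (with toy data invented for illustration) makes the sample-size cost of listwise deletion concrete and contrasts it with a simple single mean imputation:

```python
import numpy as np
import pandas as pd

# Toy data frame with missing values (illustrative only).
df = pd.DataFrame({
    "age":    [23, 31, np.nan, 45, 52, np.nan],
    "income": [40, 55, 48, np.nan, 90, 62],
})

# Listwise (complete-case) deletion: every row with any missing value is dropped,
# shrinking the effective sample size from 6 to 3.
complete_cases = df.dropna()
print(len(df), "->", len(complete_cases))

# Simple (single) mean imputation keeps all rows but understates variability.
imputed = df.fillna(df.mean())
print(imputed)
```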
Pairwise deletion (or "available case analysis") involves deleting a case when it is missing a variable required for a particular analysis, but including that case in analyses for which all required variables are present. When pairwise deletion is used, the total N for analysis will not be consistent across parameter estimations. Because of the incomplete N values at some points in time, while still maintaining complete case comparison for other parameters, pairwise deletion can introduce impo
|
https://en.wikipedia.org/wiki/Sphenoid
|
Sphenoid may refer to:
Sphenoid bone, a bone in anatomy
Sphenoid (geometry), a tetrahedron with 2-fold mirror or rotation symmetry
|
https://en.wikipedia.org/wiki/L-space
|
L-space may refer to:
The classical function spaces Lp and ℓp
L-space (topology), a hereditarily Lindelöf space
The Banach lattice, an abstract normed Riesz space
A location in the fictional Discworld setting
|
https://en.wikipedia.org/wiki/Advanced%20z-transform
|
In mathematics and signal processing, the advanced z-transform is an extension of the z-transform, to incorporate ideal delays that are not multiples of the sampling time. It takes the form
F(z, m) = Σ_{k=0}^{∞} f(kT + m) z^{−k}
where
T is the sampling period
m (the "delay parameter") is a fraction of the sampling period
It is also known as the modified z-transform.
The advanced z-transform is widely applied, for example to accurately model processing delays in digital control.
Properties
If the delay parameter, m, is considered fixed then all the properties of the z-transform hold for the advanced z-transform.
Linearity
Time shift
Damping
Time multiplication
Final value theorem
Example
Consider the following example where :
If then reduces to the transform
which is clearly just the z-transform of .
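As a numerical sanity check of the sum F(z, m) = Σ f(kT + m) z^{−k} given above, the sketch below truncates the series for the sample function f(t) = e^{−t} and compares it with the closed form obtained from the geometric series; the choices of f, T, m and z are illustrative assumptions.

```python
# Truncated advanced z-transform sum for f(t) = exp(-t), compared with the
# closed form exp(-m) / (1 - exp(-T)/z) obtained from the geometric series.
import numpy as np

def advanced_z(f, z, T, m, terms=2000):
    k = np.arange(terms)
    return np.sum(f(k * T + m) * z ** (-k.astype(float)))

f = lambda t: np.exp(-t)
T, m, z = 0.5, 0.2, 1.5      # sampling period, delay fraction (0 <= m < T), |z| > exp(-T)

numeric = advanced_z(f, z, T, m)
closed_form = np.exp(-m) / (1.0 - np.exp(-T) / z)
print(numeric, closed_form)  # the two values agree closely
```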
References
Transforms
|
https://en.wikipedia.org/wiki/Origin%20%28mathematics%29
|
In mathematics, the origin of a Euclidean space is a special point, usually denoted by the letter O, used as a fixed point of reference for the geometry of the surrounding space.
In physical problems, the choice of origin is often arbitrary, meaning any choice of origin will ultimately give the same answer. This allows one to pick an origin point that makes the mathematics as simple as possible, often by taking advantage of some kind of geometric symmetry.
Cartesian coordinates
In a Cartesian coordinate system, the origin is the point where the axes of the system intersect. The origin divides each of these axes into two halves, a positive and a negative semiaxis. Points can then be located with reference to the origin by giving their numerical coordinates—that is, the positions of their projections along each axis, either in the positive or negative direction. The coordinates of the origin are always all zero, for example (0,0) in two dimensions and (0,0,0) in three.
Other coordinate systems
In a polar coordinate system, the origin may also be called the pole. It does not itself have well-defined polar coordinates, because the polar coordinates of a point include the angle made by the positive x-axis and the ray from the origin to the point, and this ray is not well-defined for the origin itself.
In Euclidean geometry, the origin may be chosen freely as any convenient point of reference.
The origin of the complex plane can be referred to as the point where the real axis and the imaginary axis intersect each other. In other words, it is the complex number zero.
See also
Null vector, an analogous point of a vector space
Distance from a point to a plane
Pointed space, a topological space with a distinguished point
Radial basis function, a function depending only on the distance from the origin
References
Elementary mathematics
|
https://en.wikipedia.org/wiki/Dirac%20comb
|
In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula
Ш_T(t) := Σ_{k=−∞}^{+∞} δ(t − kT)
for some given period T. Here t is a real variable and the sum extends over all integers k. The Dirac delta function δ and the Dirac comb are tempered distributions. The graph of the function resembles a comb (with the δs as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function.
The symbol Ш(t), where the period is omitted, represents a Dirac comb of unit period. This implies
Ш_T(t) = (1/T) Ш(t/T).
Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel:
Ш_T(t) = (1/T) Σ_{n=−∞}^{+∞} e^{i 2π n t/T}.
The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions which turns out to be the Poisson summation formula, in signal processing, the Dirac comb allows modelling sampling by multiplication with it, but it also allows modelling periodization by convolution with it.
Dirac-comb identity
The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly , or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta . Formally, this yields (; )
where
and
In signal processing, this property on one hand allows sampling a function by multiplication with , and on the other hand it also allows the periodization of by convolution with ().
The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions.
Scaling
The scaling property of the Dirac comb follows from the properties of the Dirac delta function.
Since for positive real numbers , it follows that:
Note that requiring positive scaling numbers instead of negative ones is not a restriction because the negative sign would only reverse the order of the summation within , which does not affect the result.
Fourier series
It is clear that Ш_T(t) is periodic with period T. That is,
Ш_T(t + T) = Ш_T(t)
for all t. The complex Fourier series for such a periodic function is
Ш_T(t) = Σ_{n=−∞}^{+∞} c_n e^{i 2π n t/T},
where the Fourier coefficients are (symbolically)
c_n = (1/T) ∫_{t₀}^{t₀+T} Ш_T(t) e^{−i 2π n t/T} dt = 1/T.
All Fourier coefficients are 1/T resulting in
Ш_T(t) = (1/T) Σ_{n=−∞}^{+∞} e^{i 2π n t/T}.
When the period is one unit, this simplifies to
Ш(x) = Σ_{n=−∞}^{+∞} e^{i 2π n x}.
Remark: Most rigorously, Riemann or Lebesgue integration over any products including a Dirac delta function yields zero. For this reason, the integration above (Fourier series coefficients determination) must be understood "in the generalized functions sense". It means that, instead of using the characteristic function of an interval applied to the Dirac comb, one uses a so-called Lighthill unitary function as cutout function, see , p.62, Theorem 22 for details.
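The Fourier-series representation can also be checked numerically: a truncated partial sum of (1/T) Σ e^{i 2π n t/T} is a Dirichlet kernel whose peaks sit at integer multiples of T. The sketch below only illustrates that limiting behaviour; the grid and truncation order are arbitrary choices.

```python
# Partial sums of (1/T) * sum_{n=-N}^{N} exp(i 2 pi n t / T) form Dirichlet kernels
# that peak at integer multiples of T, illustrating the comb in the limit N -> infinity.
import numpy as np

T, N = 1.0, 50
t = np.linspace(-2.5, 2.5, 2001)
n = np.arange(-N, N + 1).reshape(-1, 1)

partial_sum = (np.exp(1j * 2 * np.pi * n * t / T).sum(axis=0) / T).real
peaks = t[np.argsort(partial_sum)[-5:]]   # the five largest values
print(np.sort(np.round(peaks, 2)))        # [-2. -1.  0.  1.  2.], i.e. multiples of T
```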
Fourier transform
|
https://en.wikipedia.org/wiki/NPN
|
NPN may refer to:
Science and technology
Next Protocol Negotiation, in computer networking
Non-protein nitrogen, an animal feed component
NPN transistor
Normal Polish notation, in mathematics
Organisations
National Party of Nigeria, a former political party
New Politics Network, a UK think tank
Other uses
Natural Health Product Number, required by the Canadian Natural Health Products Directorate
|
https://en.wikipedia.org/wiki/Diagonal%20subgroup
|
In the mathematical discipline of group theory, for a given group G, the diagonal subgroup of the n-fold direct product G^n is the subgroup
{(g, ..., g) ∈ G^n : g ∈ G}.
This subgroup is isomorphic to G.
Properties and applications
If acts on a set the n-fold diagonal subgroup has a natural action on the Cartesian product induced by the action of on defined by
If acts -transitively on then the -fold diagonal subgroup acts transitively on More generally, for an integer if acts -transitively on acts -transitively on
Burnside's lemma can be proved using the action of the twofold diagonal subgroup.
See also
Diagonalizable group
References
Group theory
|
https://en.wikipedia.org/wiki/E7
|
E7, E07, E-7 or E7 may refer to:
Science and engineering
E7 liquid crystal mixture
E7, the Lie group in mathematics
E7 polytope, in geometry
E7 papillomavirus protein
E7 European long distance path
Transport
EMD E7, a diesel locomotive
European route E07, an international road
Peugeot E7, a hackney cab
PRR E7, a steam locomotive
Carbon Motors E7, a police car
E7 series, a Japanese high-speed train
Nihonkai-Tōhoku Expressway and Akita Expressway (between Kawabe JCT and Kosaka JCT), route E7 in Japan
Cheras–Kajang Expressway, route E7 in Malaysia
Other uses
Boeing E-7, either:
Boeing E-7 ARIA, the original designation assigned by the United States Air Force under the Mission Designation System to the EC-18B Advanced Range Instrumentation Aircraft.
Boeing E-7 Wedgetail, the designation assigned by the Royal Australian Air Force to the Boeing 737 AEW&C (airborne early warning and control) aircraft.
Economy 7, an electricity tariff
E-7 enlisted rank in the military of the United States
E7 (countries)
E7, a musical note in the seventh octave
E-7, the original designation for the EC-18 ARIA electronic warfare aircraft
E7, a postcode district in the E postcode area for east London
European Aviation Air Charter, by IATA airline designator
Nokia E7, a smart phone
Samsung Galaxy E7, a smart phone
E07, a numbers station allegedly used by Russia, and nicknamed "The English Man"
|
https://en.wikipedia.org/wiki/152%20%28number%29
|
152 (one hundred [and] fifty-two) is the natural number following 151 and preceding 153.
In mathematics
152 is the sum of four consecutive primes (31 + 37 + 41 + 43). It is a nontotient since there is no integer with 152 coprimes below it.
152 is a refactorable number since it is divisible by the total number of divisors it has, and in base 10 it is divisible by the sum of its digits, making it a Harshad number.
Recently, the smallest repunit probable prime in base 152 was found; it has 589,570 digits.
The number of surface points on a 6 × 6 × 6 cube is 152.
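The elementary properties quoted above are easy to confirm by direct computation; the bounded range used for the nontotient check is an assumption chosen large enough for this case, and sympy is assumed to be available.

```python
# Brute-force checks of the properties of 152 stated above.
from sympy import totient, divisor_count

n = 152
print(31 + 37 + 41 + 43 == n)                          # sum of four consecutive primes
print(all(totient(m) != n for m in range(1, 10 * n)))  # nontotient (bounded search)
print(n % divisor_count(n) == 0)                       # refactorable number
print(n % sum(map(int, str(n))) == 0)                  # Harshad number in base 10
print(6 * 6 * 6 - 4 * 4 * 4 == n)                      # surface points of a 6 x 6 x 6 cube
```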
In the military
Focke-Wulf Ta 152 was a Luftwaffe high-altitude interceptor fighter aircraft during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy supply ship during World War II
was a United States Navy during World War II
was a United States Navy ship during World War II
was a United States Navy during World War II
was a United States Navy during World War II
152.3 (5.9"), common medium artillery (and historically heavy tank destroyer) caliber utilized by Russia, China and former members of the Soviet Union, akin to the 155 mm standard caliber of NATO nations.
In transportation
The Baade 152, the first German jet passenger airliner in 1958
The Cessna 152 airplane
Garuda Indonesia Flight 152 was an Indonesian flight from Jakarta to Medan that crashed on September 26, 1997
London Buses route 152
In TV, radio, games and cinema
The aviation-frequency radio exchange (pronounced one-fifty-two), as 152 is associated with the Cessna 152
"NY152" AOL e-mail account use by Joe in the movie You've Got Mail
In other fields
152 is also:
The year AD 152 or 152 BC
152 AH is a year in the Islamic calendar that corresponds to 759 – 760 CE
152 Atala is a dark type D main belt asteroid
The atomic number of an element temporarily called Unpentbium
Sonnet 152
The Garmin GPS 152, produced in 2001
The Xerox DocuMate 152 Sheetfed Scanner
The number of whole millimeter “ticks” on a six-inch ruler
See also
List of highways numbered 152
United Nations Security Council Resolution 152
United States Supreme Court cases, Volume 152
Psalms 152–155
References
External links
Oklahoma Highway 152
Integers
|
https://en.wikipedia.org/wiki/Secant
|
Secant is a term in mathematics derived from the Latin secare ("to cut"). It may refer to:
a secant line, in geometry
the secant variety, in algebraic geometry
secant (trigonometry) (Latin: secans), the multiplicative inverse (or reciprocal) trigonometric function of the cosine
the secant method, a root-finding algorithm in numerical analysis, based on secant lines to graphs of functions
a secant ogive in nose cone designs
|
https://en.wikipedia.org/wiki/Khieu%20Rada
|
Khieu Rada (born April 15, 1949 in Battambang) is a Cambodian politician. He is the son of Khieu In and Sing Tep.
Education
C final exam (1969), M.G.P. (Physical General mathematics - 1970)
S.P.C.N. (Sciences, Physical, Natural Chemistry), Master es Sciences (1973)
C.N.A.M. (General mathematics - 1982) in France
AFPA of analysis Programming and Teleprocessing (1982) in France
Engineer Conceptor (Cap Gemini)
Politics
President of the UPAKAF (Union of Patriots of the Kampuchea in France) in 1979
Founding member of the Confederation of the Khmers Nationalists with Norodom Sihanouk in 1979
Founding member of the FUNCINPEC in 1981 with Norodom Sihanouk
President Director of the FUNCINPEC Television (Channel 9) in 1992
Vice Minister of Relations with the Parliament of the G.N.P. in 1993
Advisor of the Prime Minister the Prince Norodom Ranariddh from 1993 to 1994
Honorary member of the Royal Cabinet with rank of Minister since 28 January 1994
Under Secretary of State of the Trade Ministry of Cambodia from 1994 to 1995
Delegation Chief of Cambodia at the United Nations Conference on Trade and Development
Secretary General of the Khmer National Party (renamed to Sam Rainsy Party) Cambodia from 1995 to 1997
President of the Khmer Unity Party (KUP) from 23 October 1997 to June 2006
Vice Delegate General of the Sangkum Jatiniyum Front Party of Prince Sisowath Thomico from July 2006 to September 2007
President Adviser of Sam Rainsy Party from October 2007 to January 2008
President Adviser of FUNCINPEC, Vice-President of Kampong Cham Province and President of Stung Trâng (Kampong Cham Province) since February 2008
References
1949 births
Living people
FUNCINPEC politicians
Candlelight Party politicians
Alliance of the National Community politicians
|
https://en.wikipedia.org/wiki/Robot%20kinematics
|
In robotics, robot kinematics applies geometry to the study of the movement of multi-degree of freedom kinematic chains that form the structure of robotic systems. The emphasis on geometry means that the links of the robot are modeled as rigid bodies and its joints are assumed to provide pure rotation or translation.
Robot kinematics studies the relationship between the dimensions and connectivity of kinematic chains and the position, velocity and acceleration of each of the links in the robotic system, in order to plan and control movement and to compute actuator forces and torques. The relationship between mass and inertia properties, motion, and the associated forces and torques is studied as part of robot dynamics.
Kinematic equations
A fundamental tool in robot kinematics is the kinematics equations of the kinematic chains that form the robot. These non-linear equations are used to map the joint parameters to the configuration of the robot system. Kinematics equations are also used in biomechanics of the skeleton and computer animation of articulated characters.
Forward kinematics uses the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters. The reverse process that computes the joint parameters that achieve a specified position of the end-effector is known as inverse kinematics. The dimensions of the robot and its kinematics equations define the volume of space reachable by the robot, known as its workspace.
There are two broad classes of robots and associated kinematics equations: serial manipulators and parallel manipulators. Other types of systems with specialized kinematics equations are air, land, and submersible mobile robots, hyper-redundant, or snake, robots and humanoid robots.
Forward kinematics
Forward kinematics specifies the joint parameters and computes the configuration of the chain. For serial manipulators this is achieved by direct substitution of the joint parameters into the forward kinematics equations for the serial chain. For parallel manipulators substitution of the joint parameters into the kinematics equations requires solution of a set of polynomial constraints to determine the set of possible end-effector locations.
Inverse kinematics
Inverse kinematics specifies the end-effector location and computes the associated joint angles. For serial manipulators this requires solution of a set of polynomials obtained from the kinematics equations and yields multiple configurations for the chain. The case of a general 6R serial manipulator (a serial chain with six revolute joints) yields sixteen different inverse kinematics solutions, which are solutions of a sixteenth degree polynomial. For parallel manipulators, the specification of the end-effector location simplifies the kinematics equations, which yields formulas for the joint parameters.
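For a concrete, low-dimensional illustration of both directions, the sketch below works out the forward and inverse kinematics of a planar arm with two revolute joints (a 2R chain, not the general 6R case discussed above); the link lengths and test angles are assumptions.

```python
# Sketch: forward and inverse kinematics of a planar 2R arm with assumed link lengths.
import numpy as np

L1, L2 = 1.0, 0.7   # assumed link lengths

def forward(theta1, theta2):
    """Joint angles -> end-effector position (x, y)."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """End-effector position -> the two (elbow-up / elbow-down) joint solutions."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    solutions = []
    for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):   # two branches
        theta2 = np.arctan2(s2, c2)
        theta1 = np.arctan2(y, x) - np.arctan2(L2 * s2, L1 + L2 * c2)
        solutions.append((theta1, theta2))
    return solutions

x, y = forward(0.4, 0.9)
for t1, t2 in inverse(x, y):
    print(np.allclose(forward(t1, t2), (x, y)))   # both branches reproduce (x, y)
```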
Robot Jacobian
The time derivative of the kinematics equations yields the Jacobian of the rob
|
https://en.wikipedia.org/wiki/Screw%20theory
|
Screw theory is the algebraic calculation of pairs of vectors, such as forces and moments or angular and linear velocity, that arise in the kinematics and dynamics of rigid bodies. The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics).
Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors.
An important result of screw theory is that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws. This is termed the transfer principle.
Screw theory has become an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics.
This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots).
Fundamental theorems include Poinsot's theorem (Louis Poinsot, 1806) and Chasles' theorem (Michel Chasles, 1832). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips.
Basic concepts
A spatial displacement of a rigid body can be defined by a rotation about a line and a translation along the same line, called a screw displacement. This is known as Chasles' theorem. The six parameters that define a screw displacement are the four independent components of the Plücker vector that defines the screw axis, together with the rotation angle about and linear slide along this line, and form a pair of vectors called a screw. For comparison, the six parameters that define a spatial displacement can also be given by three Euler angles that define the rotation and the three components of the translation vector.
Screw
A screw is a six-dimensional vector constructed from a pair of three-dimensional vectors, such as forces and torques and linear and angular velocity, that arise in the study of spatial rigid body movement. The components of the screw define the Plücker coordinates of a line in space and the magnitudes of the vector along the line and moment about this line.
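As a small numerical illustration of the correspondence between screws and lines, the sketch below builds the twist of a pure rotation from the Plücker coordinates of its axis and checks that it reproduces the velocity field of that rotation; the axis point, direction and test point are arbitrary assumptions.

```python
# Sketch: a pure-rotation twist (omega, v) with v = q x omega has as its two
# three-vectors the Plücker coordinates of the rotation axis (through q, along omega);
# the velocity field induced by the twist matches direct rotation about that axis.
import numpy as np

omega = np.array([0.0, 0.0, 2.0])        # angular velocity along z (assumed values)
q = np.array([1.0, -0.5, 0.3])           # a point on the rotation axis
v = np.cross(q, omega)                   # moment part of the twist / Plücker moment

p = np.array([0.7, 2.0, -1.0])           # an arbitrary body point
vel_from_twist = np.cross(omega, p) + v  # velocity predicted by the twist
vel_direct = np.cross(omega, p - q)      # velocity of rotation about the axis through q
print(np.allclose(vel_from_twist, vel_direct))   # True
```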
Twist
A twist is a screw used to represent the
|
https://en.wikipedia.org/wiki/172%20%28number%29
|
172 (one hundred [and] seventy-two) is the natural number following 171 and preceding 173.
In mathematics
172 is a part of a near-miss for being a counterexample to Fermat's last theorem, as 135³ + 138³ = 172³ − 1. This is only the third near-miss of this form, two cubes adding to one less than a third cube. It is also a "thickened cube number", half an odd cube (7³ = 343) rounded up to the next integer.
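Both observations are easy to verify directly:

```python
# Check the near-miss identity and the "thickened cube" description of 172.
print(135**3 + 138**3 == 172**3 - 1)   # True
print((7**3 + 1) // 2 == 172)          # half of the odd cube 343, rounded up
```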
See also
172 (disambiguation)
References
Integers
|
https://en.wikipedia.org/wiki/MSSM
|
MSSM may refer to:
Maine School of Science and Mathematics
Minimal Supersymmetric Standard Model
Mount Sinai School of Medicine
Master of Science degree in Systems Management
|
https://en.wikipedia.org/wiki/Homotopical%20algebra
|
In mathematics, homotopical algebra is a collection of concepts comprising the nonabelian aspects of homological algebra, and possibly the abelian aspects as special cases. The homotopical nomenclature stems from the fact that a common approach to such generalizations is via abstract homotopy theory, as in nonabelian algebraic topology, and in particular the theory of closed model categories.
This subject has received much attention in recent years due to new foundational work of Vladimir Voevodsky, Eric Friedlander, Andrei Suslin, and others resulting in the A1 homotopy theory for quasiprojective varieties over a field. Voevodsky has used this new algebraic homotopy theory to prove the Milnor conjecture (for which he was awarded the Fields Medal) and later, in collaboration with Markus Rost, the full Bloch–Kato conjecture.
References
See also
Derived algebraic geometry
Derivator
Cotangent complex - one of the first objects discovered using homotopical algebra
L∞ Algebra
A∞ Algebra
Categorical algebra
Nonabelian homological algebra
External links
An abstract for a talk on the proof of the full Bloch–Kato conjecture
Algebraic topology
Topological methods of algebraic geometry
|
https://en.wikipedia.org/wiki/99942%20Apophis
|
99942 Apophis is a near-Earth asteroid and a potentially hazardous object with a diameter of that caused a brief period of concern in December 2004 when initial observations indicated a probability up to 2.7% that it would hit Earth on April 13, 2029. Additional observations provided improved predictions that eliminated the possibility of an impact on Earth in 2029. Until 2006, a small probability nevertheless remained that, during its 2029 close encounter with Earth, Apophis would pass through a gravitational keyhole of no more than about in diameter, which would have set up a future impact exactly seven years later on April 13, 2036. This possibility kept it at Level 1 on the Torino impact hazard scale until August 2006, when the probability that Apophis would pass through the keyhole was determined to be very small and Apophis' rating on the Torino scale was lowered to zero. By 2008, the keyhole had been determined to be less than 1 km wide. During the short time when it had been of greatest concern, Apophis set the record for highest rating ever on the Torino scale, reaching level 4 on December 27, 2004.
The diameter of Apophis is estimated to be approximately 370 metres. Preliminary observations by Goldstone radar in January 2013 effectively ruled out the possibility of an Earth impact by Apophis in 2036. By May 6, 2013 (April 15, 2013, observation arc), the possibility of an impact on April 13, 2036 had been eliminated altogether. In 2036 Apophis will approach the Earth at a third the distance of the Sun in both March and December, but this is about the distance of the planet Venus when it overtakes Earth every 1.6 years. On April 12, 2068, the nominal trajectory has Apophis from Earth. Entering March 2021, six asteroids each had a more notable cumulative Palermo Technical Impact Hazard Scale than Apophis, and none of those has a Torino level above 0. On average, an asteroid the size of Apophis (370 metres) is expected to impact Earth once in about 80,000 years. Observations in 2020 by the Subaru telescope confirmed David Vokrouhlický's 2015 Yarkovsky effect predictions. The Goldstone radar observed Apophis March 3–11, 2021, helping to refine the orbit again, and on March 25, 2021, the Jet Propulsion Laboratory announced that Apophis has no chance of impacting Earth in the next 100 years. The uncertainty in the 2029 approach distance has been reduced from hundreds of kilometers to now just a couple of kilometers, greatly enhancing predictions of future approaches.
Discovery and naming
Apophis was discovered on June 19, 2004, by Roy A. Tucker, David J. Tholen, and Fabrizio Bernardi at the Kitt Peak National Observatory. On December 21, 2004, Apophis passed from Earth. Precovery observations from March 15, 2004, were identified on December 27, and an improved orbit solution was computed. Radar astrometry in January 2005 further refined its orbit solution. The discovery was notable in that it was at a very low solar elongation (56°) and at very
|
https://en.wikipedia.org/wiki/Quasi-projective%20variety
|
In mathematics, a quasi-projective variety in algebraic geometry is a locally closed subset of a projective variety, i.e., the intersection inside some projective space of a Zariski-open and a Zariski-closed subset. A similar definition is used in scheme theory, where a quasi-projective scheme is a locally closed subscheme of some projective space.
Relationship to affine varieties
An affine space is a Zariski-open subset of a projective space, and since any closed affine subset can be expressed as an intersection of the projective completion and the affine space embedded in the projective space, this implies that any affine variety is quasiprojective. There are locally closed subsets of projective space that are not affine, so that quasi-projective is more general than affine. Taking the complement of a single point in projective space of dimension at least 2 gives a non-affine quasi-projective variety. This is also an example of a quasi-projective variety that is neither affine nor projective.
Examples
Since quasi-projective varieties generalize both affine and projective varieties, they are sometimes referred to simply as varieties. Varieties isomorphic to affine algebraic varieties as quasi-projective varieties are called affine varieties; similarly for projective varieties. For example, the complement of a point in the affine line, i.e., X = A¹ ∖ {0}, is isomorphic to the zero set of the polynomial xy − 1 in the affine plane. As a subset of the affine line, X is not closed, since any polynomial that is zero on the complement of the point must be zero on the whole affine line. For another example, the complement of any conic in projective space of dimension 2 is affine. Varieties isomorphic to open subsets of affine varieties are called quasi-affine.
Quasi-projective varieties are locally affine in the same sense that a manifold is locally Euclidean: every point of a quasi-projective variety has a neighborhood which is an affine variety. This yields a basis of affine sets for the Zariski topology on a quasi-projective variety.
See also
Abstract algebraic variety, often synonymous with "quasi-projective variety".
divisorial scheme, a generalization of a quasi-projective variety
Citations
References
Algebraic varieties
|
https://en.wikipedia.org/wiki/Integral%20curve
|
In mathematics, an integral curve is a parametric curve that represents a specific solution to an ordinary differential equation or system of equations.
Name
Integral curves are known by various other names, depending on the nature and interpretation of the differential equation or vector field. In physics, integral curves for an electric field or magnetic field are known as field lines, and integral curves for the velocity field of a fluid are known as streamlines. In dynamical systems, the integral curves for a differential equation that governs a system are referred to as trajectories or orbits.
Definition
Suppose that F is a static vector field, that is, a vector-valued function with Cartesian coordinates (F1,F2,...,Fn), and that x(t) is a parametric curve with Cartesian coordinates (x1(t),x2(t),...,xn(t)). Then x(t) is an integral curve of F if it is a solution of the autonomous system of ordinary differential equations,
dx1/dt = F1(x1, ..., xn), ..., dxn/dt = Fn(x1, ..., xn).
Such a system may be written as a single vector equation,
x′(t) = F(x(t)).
This equation says that the vector tangent to the curve at any point x(t) along the curve is precisely the vector F(x(t)), and so the curve x(t) is tangent at each point to the vector field F.
If a given vector field is Lipschitz continuous, then the Picard–Lindelöf theorem implies that there exists a unique flow for small time.
Examples
If the differential equation is represented as a vector field or slope field, then the corresponding integral curves are tangent to the field at each point.
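As a minimal numerical illustration, the sketch below traces an integral curve of the planar field F(x, y) = (−y, x), whose integral curves are circles centred at the origin; the time span, initial point and tolerance are arbitrary choices, and SciPy is assumed to be available.

```python
# Numerically tracing an integral curve of the planar field F(x, y) = (-y, x),
# whose integral curves are circles about the origin.
import numpy as np
from scipy.integrate import solve_ivp

def F(t, x):                      # autonomous field written in solve_ivp's signature
    return [-x[1], x[0]]

sol = solve_ivp(F, t_span=(0.0, 2 * np.pi), y0=[1.0, 0.0], rtol=1e-9)

radii = np.linalg.norm(sol.y, axis=0)
print(radii.min(), radii.max())   # stays ~1: the computed curve is the unit circle
```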
Generalization to differentiable manifolds
Definition
Let M be a Banach manifold of class Cr with r ≥ 2. As usual, TM denotes the tangent bundle of M with its natural projection πM : TM → M given by
A vector field on M is a cross-section of the tangent bundle TM, i.e. an assignment to every point of the manifold M of a tangent vector to M at that point. Let X be a vector field on M of class Cr−1 and let p ∈ M. An integral curve for X passing through p at time t0 is a curve α : J → M of class Cr−1, defined on an open interval J of the real line R containing t0, such that
α(t0) = p and α′(t) = X(α(t)) for all t ∈ J.
Relationship to ordinary differential equations
The above definition of an integral curve α for a vector field X, passing through p at time t0, is the same as saying that α is a local solution to the ordinary differential equation/initial value problem
It is local in the sense that it is defined only for times in J, and not necessarily for all t ≥ t0 (let alone t ≤ t0). Thus, the problem of proving the existence and uniqueness of integral curves is the same as that of finding solutions to ordinary differential equations/initial value problems and showing that they are unique.
Remarks on the time derivative
In the above, α′(t) denotes the derivative of α at time t, the "direction α is pointing" at time t. From a more abstract viewpoint, this is the Fréchet derivative:
In the special case that M is some open subset of Rn, this is the familiar derivative
where α1, ..., αn are the coordinates for α wi
|
https://en.wikipedia.org/wiki/David%20Singmaster
|
David Breyer Singmaster (14 December 1938 – 13 February 2023) was an American-British mathematician who was emeritus professor of mathematics at London South Bank University, England. He had a huge personal collection of mechanical puzzles and books of brain teasers. He was most famous for being an early adopter and enthusiastic promoter of the Rubik's Cube. His Notes on Rubik's "Magic Cube" which he began compiling in 1979 provided the first mathematical analysis of the Cube as well as providing one of the first published solutions. The book contained his cube notation which allowed the recording of Rubik's Cube moves, and which quickly became the standard.
Singmaster was both a puzzle historian and a composer of puzzles, and many of his puzzles were published in newspapers and magazines. In combinatorial number theory, Singmaster's conjecture states that there is an upper bound on the number of times a number other than 1 can appear in Pascal's triangle.
Career
David Singmaster was a student at the California Institute of Technology in the late 1950s. His intention was to become a civil engineer, but he became interested in chemistry and then physics. However he was thrown out of college in his third year for "lack of academic ability". After a year working, he switched to the University of California, Berkeley. He only became really interested in mathematics in his final year when he took some courses in algebra and number theory. In the autumn semester, his number theory teacher Dick Lehmer posed a prize problem which Singmaster won. In his last semester, his algebra teacher posed a question the teacher didn't know the answer to and Singmaster solved it, eventually leading to two papers. He gained his PhD from Berkeley, in 1966. He taught at the American University of Beirut, and then lived for a while in Cyprus.
Singmaster moved to London in 1970. The "Polytechnic of the South Bank" had been created from a merger of institutions in 1970, and Singmaster became a lecturer in the Department of Mathematical Sciences. His academic interests were in combinatorics and number theory.
In August 1971 he joined an archaeological expedition off the coast of Sicily, acting as photographer. He went off course one day and noticed a timber sticking up out of the sand. This led to the discovery of the Marsala Punic Ship.
Around 1972, he attended the Istituto di Matematica in Pisa for a year having won a research scholarship. He was promoted to a Readership (a Research Professorship) at the South Bank Polytechnic in September 1984. The polytechnic college became London South Bank University in 1992, and Singmaster was the professor of mathematics at the "School of Computing, Information Systems and Mathematics". He retired in 1996. He became an honorary research fellow at University College London. He was designated emeritus at London South Bank University in 2020.
Rubik's Cubes
Singmaster's association with Rubik's Cubes dates from August 1978, when h
|
https://en.wikipedia.org/wiki/Markov%20random%20field
|
In the domain of physics and probability, a Markov random field (MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies Markov properties. The concept originates from the Sherrington–Kirkpatrick model.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
When the joint probability density of the random variables is strictly positive, it is also referred to as a Gibbs random field, because, according to the Hammersley–Clifford theorem, it can then be represented by a Gibbs measure for an appropriate (locally defined) energy function. The prototypical Markov random field is the Ising model; indeed, the Markov random field was introduced as the general setting for the Ising model. In the domain of artificial intelligence, a Markov random field is used to model various low- to mid-level tasks in image processing and computer vision.
Definition
Given an undirected graph G = (V, E), a set of random variables X = (X_v)_{v ∈ V} indexed by V form a Markov random field with respect to G if they satisfy the local Markov properties:
Pairwise Markov property: Any two non-adjacent variables are conditionally independent given all other variables: X_u ⊥ X_v | X_{V ∖ {u, v}} for non-adjacent u, v.
Local Markov property: A variable is conditionally independent of all other variables given its neighbors: X_v ⊥ X_{V ∖ N[v]} | X_{N(v)},
where N(v) is the set of neighbors of v, and N[v] = {v} ∪ N(v) is the closed neighbourhood of v.
Global Markov property: Any two subsets of variables are conditionally independent given a separating subset: X_A ⊥ X_B | X_S,
where every path from a node in A to a node in B passes through S.
The Global Markov property is stronger than the Local Markov property, which in turn is stronger than the Pairwise one. However, the above three Markov properties are equivalent for positive distributions (those that assign only nonzero probabilities to the associated variables).
The relation between the three Markov properties is particularly clear in the following formulation:
Pairwise: For any u, v ∈ V that are not equal or adjacent, X_u ⊥ X_v | X_{V ∖ {u, v}}.
Local: For any v ∈ V and subset A ⊆ V not containing or adjacent to v, X_v ⊥ X_A | X_{V ∖ ({v} ∪ A)}.
Global: For any subsets A, B ⊆ V that are not intersecting or adjacent, X_A ⊥ X_B | X_{V ∖ (A ∪ B)}.
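These conditional independences can be checked by brute force on a toy model. The sketch below builds a three-node chain X1 − X2 − X3 from an assumed edge potential (anticipating the clique factorization described next) and verifies that X1 and X3 are conditionally independent given X2.

```python
# Brute-force check of the Markov property on a 3-node chain MRF X1 - X2 - X3
# with binary variables and an assumed pairwise (edge) potential phi(a, b).
import itertools
import numpy as np

phi = lambda a, b: np.exp(0.8 if a == b else -0.8)   # assumed edge potential

# Unnormalized joint from edge factorization: phi(x1, x2) * phi(x2, x3).
states = list(itertools.product([0, 1], repeat=3))
weights = {s: phi(s[0], s[1]) * phi(s[1], s[2]) for s in states}
Z = sum(weights.values())
p = {s: w / Z for s, w in weights.items()}

def cond(x1, x3, x2):
    """P(X1 = x1, X3 = x3 | X2 = x2) computed from the joint."""
    den = sum(p[(a, x2, b)] for a in (0, 1) for b in (0, 1))
    return p[(x1, x2, x3)] / den

# X1 _||_ X3 | X2: the conditional joint factorizes into its two marginals.
for x2 in (0, 1):
    m1 = [sum(cond(a, b, x2) for b in (0, 1)) for a in (0, 1)]   # P(X1 | X2 = x2)
    m3 = [sum(cond(a, b, x2) for a in (0, 1)) for b in (0, 1)]   # P(X3 | X2 = x2)
    ok = all(np.isclose(cond(a, b, x2), m1[a] * m3[b]) for a in (0, 1) for b in (0, 1))
    print(ok)   # True for both values of x2
```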
Clique factorization
As the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to the cliques of the graph.
Given a set of random variables , let be the probability of a particular field config
|
https://en.wikipedia.org/wiki/Gaspard%20Monge%27s%20mausoleum
|
Gaspard Monge, whose remains are deposited in the burying ground in Père Lachaise Cemetery, at Paris, in a magnificent mausoleum, was professor of geometry in the École polytechnique at Paris, and with Denon accompanied Napoleon Bonaparte on his memorable expedition to Egypt; one to make drawings of the architectural antiquities and sculpture, and the other the geographical delineations of that ancient country. He returned to Paris, where he assisted Denon in the publication of his antiquities. At his decease the pupils of the Polytechnique School erected this mausoleum to his memory, as a testimony of their esteem, after a design made by his friend, Monsieur Denon. The mausoleum is of Egyptian architecture, with which Denon had become familiarly acquainted.
Description
There is a bust of Monge placed on a terminal pedestal underneath a canopy in the upper compartment, which canopy is open in front and in the back. In the cavetto cornice is an Egyptian winged globe, entwined with serpents, emblematical of time and eternity; and on the face below is engraved the following line:—
A GASPARD MONGE.
On each side of the upper compartment is inscribed the following memento mori:
LES ELEVES.
DE L'ECOLE POLYTECHNIQUE.
A G. MONGE.
COMTE DE PELUSE.
Underneath this inscription is carved in sunk work an Egyptian lotus flower in an upright position; on the back of the mausoleum is the year in which Gaspard Monge died. The body is in the cemetery below.
AN. MDCCCXX.
References
Cemetery art
Mausoleums in France
Tombs in France
|
https://en.wikipedia.org/wiki/Yves%20Balasko
|
Yves Balasko is a French economist working in England. He was born in Paris on 9 August 1945 to a Hungarian father and a French mother. After studying mathematics at the École Normale Supérieure in Paris he became interested in economics. He subsequently spent six years at Électricité de France where he was involved in the application of the theory of marginal cost pricing to electricity pricing. While at Électricité de France, he proved his first results on the structure of the equilibrium manifold in the theory of general equilibrium. After completing his dissertation on "L'équilibre économique du point de vue differentiel" (English: "The Economic equilibrium from the differential point of view"), he had positions at the Universities of Paris XII, Paris I, Geneva and York. In 2013, he held a visiting scholar position at Pontifical Catholic University of Rio de Janeiro, in Brazil. Since 2014, he has returned to York University.
He is a Fellow of the Econometric Society since 1980. He is also a Vice President of the Society for Economic Measurement (SEM).
In mathematical economics, Balasko has worked on general equilibrium theory, the overlapping generations model and the theory of incomplete asset markets. In his research, Balasko uses topology.
Books
Foundations of the Theory of General Equilibrium, 1988, .
The Equilibrium Manifold: Postmodern Developments in the Theory of General Economic Equilibrium, 2009,
General Equilibrium Theory of Value, 2011,
External links
Yves Balasko's personal webpage
1945 births
Living people
General equilibrium theorists
Fellows of the Econometric Society
École Normale Supérieure alumni
Mathematical economists
Électricité de France people
20th-century French economists
Academic staff of the Pontifical Catholic University of Rio de Janeiro
21st-century French economists
|
https://en.wikipedia.org/wiki/Hugo%20Dingler
|
Hugo Albert Emil Hermann Dingler (July 7, 1881, Munich – June 29, 1954, Munich) was a German scientist and philosopher.
Life
Hugo Dingler studied mathematics, philosophy, and physics with Felix Klein, Hermann Minkowski, David Hilbert, Edmund Husserl, Woldemar Voigt, and Wilhelm Röntgen at the universities of Göttingen and Munich. He graduated from the University of Munich with a thesis under Aurel Voss. Dingler earned his Ph.D. in mathematics, physics and astronomy in 1906. His doctoral advisor was Ferdinand von Lindemann. In 1910 Dingler's first attempt to earn a Habilitation failed. His second try in 1912 was successful. Dingler then taught as a Privatdozent and held lectures on mathematics, philosophy and the history of science. He became a professor at the University of Munich in 1920. Dingler got a position as Professor ordinarius in Darmstadt in 1932.
In 1934, one year after the Nazis took power, Dingler was dismissed from his teaching position for still unclear reasons. Dingler himself told several interviewers that this was because of his favorable writings concerning Jews. In fact, both philo-semitic and anti-semitic statements by Dingler had been noted.
From 1934 to 1936 he again held a teaching position.
In 1940 Dingler joined the Nazi Party and was again given a teaching position. Of Dingler's 1944 book Aufbau der exakten Fundamentalwissenschaft only thirty copies survived wartime bombing.
Thought
Dingler's position is usually characterized as "conventionalist" by Karl Popper and others. Sometimes he is called a "radical conventionalist" (also referred to as "critical voluntarism" in the secondary literature), as by the early Rudolf Carnap. Dingler himself initially characterized it as "critical conventionalism", to contrast it with the "naïve conventionalism" of other philosophers such as Poincaré, but he himself later ceased to call his position conventionalist. Dingler agrees with the conventionalists that the fundamental assumptions of geometry and physics are not extracted empirically and cannot be given a transcendental deduction. However, Dingler disagrees with conventionalists such as Henri Poincaré in that he does not believe there is freedom to choose alternative assumptions. Dingler believes that one can give a foundation to mathematics and physics by means of operations as building stones. Dingler claims that this operational analysis leads one to Euclidean geometry and Newtonian mechanics, which are the only possible results.
Dingler opposed Albert Einstein's relativity theory and was therefore opposed and snubbed by most of the leaders of the German physics and mathematics community. This opposition, at least to the theory of general relativity, remains in the work of his follower Paul Lorenzen.
Influence
Paul Lorenzen, noted for his work on constructive foundations of mathematics was a follower of Dingler, at least with respect to the foundations of geometry and physics. The so-called Erlangen School of foll
|
https://en.wikipedia.org/wiki/Kiyosi%20It%C3%B4
|
Kiyosi Itô (September 7, 1915 – November 10, 2008) was a Japanese mathematician who made fundamental contributions to probability theory, in particular, the theory of stochastic processes. He invented the concept of the stochastic integral and the stochastic differential equation, and is known as the founder of so-called Itô calculus.
Overview
Itô pioneered the theory of stochastic integration and stochastic differential equations, now known as Itô calculus. Its basic concept is the Itô integral, and among the most important results is a change of variable formula known as Itô's lemma. Itô calculus is a method used in the mathematical study of random events and is applied in various fields, and is perhaps best known for its use in mathematical finance. Itô also made contributions to the study of diffusion processes on manifolds, known as stochastic differential geometry.
Although the standard Hepburn romanization of his name is Kiyoshi Itō, he used the spelling Kiyosi Itô (Kunrei-shiki romanization). The alternative spellings Itoh and Ito are also sometimes seen in the West.
Biography
Itô was born in Hokusei-cho in Mie Prefecture on the main island of Honshū. He graduated with a B.S. (1938) and a Ph.D. (1945) in Mathematics from the University of Tokyo. Between 1938 and 1945, Itô worked for the Japanese National Statistical Bureau, where he published two of his seminal works on probability and stochastic processes, including a series of articles in which he defined the stochastic integral and laid the foundations of the Itô calculus. After that he continued to develop his ideas on stochastic analysis with many important papers on the topic.
In 1952, he became a professor at the University of Kyoto to which he remained affiliated until his retirement in 1979. Starting in the 1950s, Itô spent long periods of time outside Japan, at Cornell, Stanford, the Institute for Advanced Study in Princeton, New Jersey, and Aarhus University in Denmark.
Itô was awarded the inaugural Gauss Prize in 2006 by the International Mathematical Union for his lifetime achievements. As he was unable to travel to Madrid, his youngest daughter, Junko Itô received the Gauss Prize from the King of Spain on his behalf. Later, International Mathematics Union (IMU) President Sir John Ball personally presented the medal to Itô at a special ceremony held in Kyoto.
In October 2008, Itô was honored with Japan's Order of Culture, and an awards ceremony for the Order of Culture was held at the Imperial Palace.
Itô wrote in Japanese, Chinese, German, French and English.
He died on November 10, 2008 in Kyoto, Japan at age 93.
Scientific works of Kiyosi Itô
Notes
References
Obituary at The New York Times
See also
Itô calculus
Itô diffusion
Itô integral
Itô–Nisio theorem
Itô isometry
Itô's lemma
Black–Scholes model
External links
Kiyosi Itô(1915-2008) / Eightieth Birthday Lecture RIMS, Kyoto University, September 1995 / Research Institute for Mathematical Sciences, Kyoto University Kyoto
Bibliography of Kiyosi Itô
Kiyosi Itô at
|
https://en.wikipedia.org/wiki/Richmond%20Mayo-Smith
|
Richmond Mayo-Smith (February 9, 1854 – November 11, 1901) was an American economist noted for his work in statistics. He was born in Troy, Ohio, educated at Amherst College (graduating in 1875), then at Berlin and Heidelberg University. He became assistant professor of economics at Columbia University in 1877. He was an adjunct professor from 1878 to 1883, when he was appointed professor of political economy and social science, a post which he held until his death in 1901.
He devoted himself especially to the study of statistics, and was recognized as one of the foremost authorities on the subject. His works include Emigration and Immigration (1890); Sociology and Statistics (1895), and Statistics and Economics (1899).
Bibliography
References
Further reading
External links
National Academy of Sciences Biographical Memoir
Guide to the Richmond Mayo-Smith Papers 1875-1897 at the University of Chicago Special Collections
1854 births
1901 deaths
Amherst College alumni
People from Troy, Ohio
Economists from Ohio
|
https://en.wikipedia.org/wiki/Keith%20Geddes
|
Keith Oliver Geddes (born 1947) is a professor emeritus in the David R. Cheriton School of Computer Science within the Faculty of Mathematics at the University of Waterloo in Waterloo, Ontario. He is a former director of the Symbolic Computation Group in the School of Computer Science. He received a BA in Mathematics at the University of Saskatchewan in 1968; he completed both his MSc and PhD in Computer Science at the University of Toronto.
Geddes is probably best known for co-founding the Maple computer algebra system, now in widespread academic use around the world. He is also the Scientific Director at the Ontario Research Centre for Computer Algebra, and is a member of the Association for Computing Machinery, as well as the American and Canadian Mathematical Societies.
Research
Geddes' primary research interest is to develop algorithms for the mechanization of mathematics. More specifically, he is interested in the computational aspects of algebra and analysis. Currently, he is focusing on designing hybrid symbolic-numeric algorithms to perform definite integration and solve ordinary and partial differential equations.
Much of his work currently revolves around Maple.
Teaching
Geddes retired from teaching in December 2008.
Geddes taught a mixture of both senior-level symbolic computation courses, at both the undergraduate and graduate level, as well as introductory courses on the principles of computer science.
See also
Maple computer algebra system
Waterloo Maple
Gaston Gonnet — the co-founder of Waterloo Maple
Risch algorithm
Symbolic integration
Derivatives of the incomplete gamma function
List of University of Waterloo people
External links
Keith Geddes' home page
The Symbolic Computation Group
1947 births
Living people
Canadian mathematicians
University of Toronto alumni
Academic staff of the University of Waterloo
|
https://en.wikipedia.org/wiki/Indefinite
|
Indefinite may refer to:
the opposite of definite in grammar
indefinite article
indefinite pronoun
Indefinite integral, another name for the antiderivative
Indefinite forms in algebra, see definite quadratic forms
an indefinite matrix
See also
Eternity
NaN
Undefined (disambiguation)
|
https://en.wikipedia.org/wiki/Open%20problem
|
In science and mathematics, an open problem or an open question is a known problem which can be accurately stated, and which is assumed to have an objective and verifiable solution, but which has not yet been solved (i.e., no solution for it is known).
In the history of science, some of these supposed open problems were "solved" by means of showing that they were not well-defined.
In mathematics, many open problems are concerned with the question of whether a certain definition is or is not consistent.
Two notable examples in mathematics that have been solved and closed by researchers in the late twentieth century are Fermat's Last Theorem and the four-color theorem. An important open mathematics problem solved in the early 21st century is the Poincaré conjecture.
Open problems exist in all scientific fields.
For example, one of the most important open problems in biochemistry is the protein structure prediction problem – how to predict a protein's structure from its sequence.
See also
Lists of unsolved problems (by major field)
Hilbert's problems
Millennium Prize Problems
References
External links
Open Problem Garden, a collection of open problems in mathematics built on the principle of a user-editable ("wiki") site
|
https://en.wikipedia.org/wiki/Tiedemann%20Giese
|
Tiedemann Giese (1 June 1480 – 23 October 1550) was Bishop of Kulm (Chełmno), first canon, later Prince-Bishop of Warmia (Ermland), whose interest in mathematics, astronomy, and theology led him to mentor a number of important young scholars, including Copernicus. He was a prolific writer and correspondent, publishing a number of works on the reformation of the church. Tiedemann was a member of the patrician Giese family of Danzig (Gdańsk) in Poland. The Giese family ancestors originated from Unna in Westphalia, near Dortmund. His father was Albrecht Giese, and his younger brother was the Hanseatic League merchant Georg Giese.
Life and career
Giese was the fifth child of Albrecht Giese and his wife, Elisabeth Langenbeck, both members of wealthy merchant families. His paternal family had emigrated from Cologne to Danzig in the 1430s. His father was the Mayor of Danzig, and his mother's uncle, Johann Ferber, had been Mayor of Danzig.
At the age of 12 years, Tiedemann, along with his cousin, Johann Ferber, entered the University of Leipzig, and subsequently studied at Basel and in Italy. He earned a Master of Theology degree. Giese was one of the best educated scholars in Prussia, well versed in both theology and the sciences. At age 24, he and Mauritius Ferber (possibly a cousin) became priests at the Catholic Church of St. Peter and St. Paul.
He was secretary to the King of Poland, and later appointed canon of Frauenburg (Frombork), where he remained for 30 years. His residence was the Episcopal Castle at Frauenburg. The King appointed him Bishop of Kulm on 22 September 1537 (ratified by the Pope on 11 January 1538). Toward the end of his life, he became Bishop of Ermland.
Giese was supported by Chancellor Lucas David. He was a humanist and a liberal in the Erasmian mould. Although a Catholic, he demonstrated relative tolerance towards Lutherans. He made himself the spokesperson for a group of liberal and tolerant men who wanted to mediate between the "old-believers" and "the new-believers". In his writings, he expressed the aim of reconciling the Catholic and Protestant branches of the church, but ultimately alienated both of them.
Bishop Giese was a lifelong friend and frequent companion of the astronomer and proponent of heliocentrism Nicolaus Copernicus and shared his interest in astronomy. As a very wealthy man, Giese had the best instruments which, from time to time, he loaned to Copernicus. Giese, seven years younger than Copernicus, was sufficiently well educated to be able to follow Copernicus' studies. Giese bought his friend an ingenious sundial, and gave him an instrument with which he could observe the equinoxes. The mathematician, Rheticus, published a list of Giese's astronomical instruments, which he considered to have been made by men who really understood their mathematics.
Giese actively encouraged his friend, Copernicus, to publish his findings in relation to the movement of the planets in the solar system. In turn, Cope
|
https://en.wikipedia.org/wiki/Continued%20fraction%20factorization
|
In number theory, the continued fraction factorization method (CFRAC) is an integer factorization algorithm. It is a general-purpose algorithm, meaning that it is suitable for factoring any integer n, not depending on special form or properties. It was described by D. H. Lehmer and R. E. Powers in 1931, and developed as a computer algorithm by Michael A. Morrison and John Brillhart in 1975.
The continued fraction method is based on Dixon's factorization method. It uses convergents in the regular continued fraction expansion of
√n.
Since this is a quadratic irrational, the continued fraction must be periodic (unless n is square, in which case the factorization is obvious).
It has a time complexity of O(e^√(2 log n log log n)) = L_n[1/2, √2], in the O and L notations.
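The sketch below computes the ingredients the method starts from: the periodic continued fraction expansion of √n and the congruences p_k² ≡ (−1)^(k+1) Q_{k+1} (mod n) furnished by its convergent numerators p_k. It is only a fragment of CFRAC: the sieving of these residues for smooth values and the final combination into a congruence of squares are omitted, and the sample modulus is an arbitrary choice.

```python
# Continued-fraction expansion of sqrt(n) and the congruences CFRAC harvests from it.
from math import isqrt

def cfrac_relations(n, steps=12):
    """Partial quotients of sqrt(n) and checks of p_k^2 ≡ (-1)^(k+1) Q_(k+1) (mod n)."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0                       # convergent numerators p_{-1}, p_0
    out = []
    for k in range(steps):
        m = d * a - m
        d = (n - m * m) // d                # denominator term Q_{k+1}
        a = (a0 + m) // d                   # next partial quotient
        lhs = pow(p, 2, n)                  # p_k^2 mod n
        rhs = (-1) ** (k + 1) * d % n       # (-1)^(k+1) * Q_{k+1} mod n
        out.append((a, lhs == rhs))
        p_prev, p = p, a * p + p_prev       # next convergent numerator p_{k+1}
    return out

n = 13290059                                # illustrative composite, not a square
for a_k, ok in cfrac_relations(n):
    print(a_k, ok)                          # every congruence checks out as True
```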
References
Further reading
Integer factorization algorithms
|
https://en.wikipedia.org/wiki/Butterfly%20theorem
|
The butterfly theorem is a classical result in Euclidean geometry, which can be stated as follows:
Let M be the midpoint of a chord PQ of a circle, through which two other chords AB and CD are drawn; AD and BC intersect chord PQ at X and Y correspondingly. Then M is the midpoint of XY.
Proof
A formal proof of the theorem is as follows:
Let the perpendiculars XX′ and XX″ be dropped from the point X on the straight lines AM and DM respectively. Similarly, let YY′ and YY″ be dropped from the point Y perpendicular to the straight lines BM and CM respectively.
Since
From the preceding equations and the intersecting chords theorem, it can be seen that
since .
So
Cross-multiplying in the latter equation,
Cancelling the common term
from both sides of the resulting equation yields
hence MX = MY, since MX, MY, and PM are all positive, real numbers.
Thus, M is the midpoint of XY.
Other proofs exist, including one using projective geometry.
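The statement is also easy to check numerically. The sketch below places the configuration on the unit circle with an assumed chord height and assumed directions for the two chords through M, and confirms that M bisects XY; all specific coordinates are arbitrary choices.

```python
# Numeric check of the butterfly theorem on the unit circle: chord PQ at height h,
# midpoint M, two chords AB and CD through M; lines AD and BC meet PQ at X and Y.
import numpy as np

def chord_through(M, direction):
    """Endpoints of the chord of the unit circle through M along the given direction."""
    u = direction / np.linalg.norm(direction)
    b, c = M @ u, M @ M - 1.0               # solve |M + t u|^2 = 1 for t
    t1, t2 = -b - np.sqrt(b * b - c), -b + np.sqrt(b * b - c)
    return M + t1 * u, M + t2 * u

def meet_horizontal(P1, P2, h):
    """x-coordinate where the line P1P2 crosses the horizontal line y = h."""
    t = (h - P1[1]) / (P2[1] - P1[1])
    return P1[0] + t * (P2[0] - P1[0])

h = 0.4
M = np.array([0.0, h])                       # midpoint of the chord PQ (y = h)
A, B = chord_through(M, np.array([1.0, 0.7]))
C, D = chord_through(M, np.array([1.0, 0.3]))

x_X = meet_horizontal(A, D, h)               # X = AD ∩ PQ
x_Y = meet_horizontal(B, C, h)               # Y = BC ∩ PQ
print(np.isclose((x_X + x_Y) / 2.0, M[0]))   # True: M is the midpoint of XY
```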
History
Proving the butterfly theorem was posed as a problem by William Wallace in The Gentlemen's Mathematical Companion (1803). Three solutions were published in 1804, and in 1805 Sir William Herschel posed the question again in a letter to Wallace. Rev. Thomas Scurr asked the same question again in 1814 in the Gentlemen's Diary or Mathematical Repository.
References
External links
The Butterfly Theorem at cut-the-knot
A Better Butterfly Theorem at cut-the-knot
Proof of Butterfly Theorem at PlanetMath
The Butterfly Theorem by Jay Warendorff, the Wolfram Demonstrations Project.
Euclidean plane geometry
Theorems about circles
Articles containing proofs
|
https://en.wikipedia.org/wiki/Cartan%20subalgebra
|
In mathematics, a Cartan subalgebra, often abbreviated as CSA, is a nilpotent subalgebra 𝔥 of a Lie algebra 𝔤 that is self-normalising (if [X, H] ∈ 𝔥 for all H ∈ 𝔥, then X ∈ 𝔥). They were introduced by Élie Cartan in his doctoral thesis. It controls the representation theory of a semi-simple Lie algebra 𝔤 over a field of characteristic 0.
In a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic zero (e.g., the complex numbers), a Cartan subalgebra is the same thing as a maximal abelian subalgebra consisting of elements x such that the adjoint endomorphism ad(x) : 𝔤 → 𝔤 is semisimple (i.e., diagonalizable). Sometimes this characterization is simply taken as the definition of a Cartan subalgebra.
In general, a subalgebra is called toral if it consists of semisimple elements. Over an algebraically closed field, a toral subalgebra is automatically abelian. Thus, over an algebraically closed field of characteristic zero, a Cartan subalgebra can also be defined as a maximal toral subalgebra.
Kac–Moody algebras and generalized Kac–Moody algebras also have subalgebras that play the same role as the Cartan subalgebras of semisimple Lie algebras (over a field of characteristic zero).
Existence and uniqueness
Cartan subalgebras exist for finite-dimensional Lie algebras whenever the base field is infinite. One way to construct a Cartan subalgebra is by means of a regular element. Over a finite field, the question of the existence is still open.
For a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic zero, there is a simpler approach: by definition, a toral subalgebra is a subalgebra of that consists of semisimple elements (an element is semisimple if the adjoint endomorphism induced by it is diagonalizable). A Cartan subalgebra of is then the same thing as a maximal toral subalgebra and the existence of a maximal toral subalgebra is easy to see.
In a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero, all Cartan subalgebras are conjugate under automorphisms of the algebra, and in particular are all isomorphic. The common dimension of a Cartan subalgebra is then called the rank of the algebra.
For a finite-dimensional complex semisimple Lie algebra, the existence of a Cartan subalgebra is much simpler to establish, assuming the existence of a compact real form. In that case, may be taken as the complexification of the Lie algebra of a maximal torus of the compact group.
If is a linear Lie algebra (a Lie subalgebra of the Lie algebra of endomorphisms of a finite-dimensional vector space V) over an algebraically closed field, then any Cartan subalgebra of is the centralizer of a maximal toral subalgebra of . If is semisimple and the field has characteristic zero, then a maximal toral subalgebra is self-normalizing, and so is equal to the associated Cartan subalgebra. If in addition is semisimple, then the adjoint representation presents as a linear Lie algebra, so tha
|
https://en.wikipedia.org/wiki/Loop%20algebra
|
In mathematics, loop algebras are certain types of Lie algebras, of particular interest in theoretical physics.
Definition
For a Lie algebra 𝔤 over a field K, if K[t, t⁻¹] is the space of Laurent polynomials, then the loop algebra is
L𝔤 = 𝔤 ⊗ K[t, t⁻¹],
with the inherited bracket
[x ⊗ t^m, y ⊗ t^n] = [x, y] ⊗ t^(m+n).
Geometric definition
If 𝔤 is a Lie algebra, the tensor product of 𝔤 with C∞(S¹), the algebra of (complex) smooth functions over the circle manifold S¹ (equivalently, smooth complex-valued periodic functions of a given period),
is an infinite-dimensional Lie algebra with the Lie bracket given by
[g₁ ⊗ f₁, g₂ ⊗ f₂] = [g₁, g₂] ⊗ f₁f₂.
Here g₁ and g₂ are elements of 𝔤 and f₁ and f₂ are elements of C∞(S¹).
This isn't precisely what would correspond to the direct product of infinitely many copies of 𝔤, one for each point in S¹, because of the smoothness restriction. Instead, it can be thought of in terms of smooth maps from S¹ to 𝔤; a smooth parametrized loop in 𝔤, in other words. This is why it is called the loop algebra.
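An element of the loop algebra with only finitely many Fourier modes can be represented as a dictionary from mode numbers to matrix coefficients. The sketch below does this with sl(2, C) coefficients (an arbitrary choice) and implements the bracket [x ⊗ t^m, y ⊗ t^n] = [x, y] ⊗ t^(m+n).

```python
import numpy as np
from collections import defaultdict

def bracket(x, y):
    """Lie bracket of two loop-algebra elements, each given as a dict
    {mode n: 2x2 matrix coefficient of t^n} with sl(2, C) coefficients."""
    out = defaultdict(lambda: np.zeros((2, 2)))
    for m, a in x.items():
        for n, b in y.items():
            out[m + n] += a @ b - b @ a          # [a (x) t^m, b (x) t^n] = [a, b] (x) t^(m+n)
    return {n: c for n, c in out.items() if np.any(c)}

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

x = {1: e, -2: h}        # e (x) t  +  h (x) t^(-2)
y = {3: f}               # f (x) t^3
print(bracket(x, y))     # {4: h (mode 1+3), 1: -2f (mode -2+3)}
```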
Gradation
Defining to be the linear subspace the bracket restricts to a product
hence giving the loop algebra a -graded Lie algebra structure.
In particular, the bracket restricts to the 'zero-mode' subalgebra .
Derivation
There is a natural derivation on the loop algebra, conventionally denoted d, acting as
d(x ⊗ t^n) = n (x ⊗ t^n),
and so d can be thought of formally as t d/dt.
It is required to define affine Lie algebras, which are used in physics, particularly conformal field theory.
Loop group
Similarly, the set of all smooth maps from S¹ to a Lie group G forms an infinite-dimensional Lie group (a Lie group in the sense that one can define functional derivatives over it) called the loop group. The Lie algebra of a loop group is the corresponding loop algebra.
Affine Lie algebras as central extension of loop algebras
If 𝔤 is a semisimple Lie algebra, then a nontrivial central extension of its loop algebra L𝔤 gives rise to an affine Lie algebra. Furthermore, this central extension is unique.
The central extension is given by adjoining a central element c, that is, [c, x ⊗ t^n] = 0 for all x ⊗ t^n,
and modifying the bracket on the loop algebra to
[x ⊗ t^m, y ⊗ t^n] = [x, y] ⊗ t^(m+n) + m δ_(m+n,0) ⟨x, y⟩ c,
where ⟨·, ·⟩ is the Killing form.
The central extension is, as a vector space, (in its usual definition, as more generally, can be taken to be an arbitrary field).
Cocycle
Using the language of Lie algebra cohomology, the central extension can be described using a 2-cocycle on the loop algebra. This is the map
satisfying
Then the extra term added to the bracket is
Affine Lie algebra
In physics, the central extension is sometimes referred to as the affine Lie algebra. In mathematics, this is insufficient, and the full affine Lie algebra is the vector space
L𝔤 ⊕ Cc ⊕ Cd,
where d is the derivation defined above.
On this space, the Killing form can be extended to a non-degenerate form, and so allows a root system analysis of the affine Lie algebra.
References
Lie algebras
|
https://en.wikipedia.org/wiki/Topological%20property
|
In topology and related areas of mathematics, a topological property or topological invariant is a property of a topological space that is invariant under homeomorphisms. Alternatively, a topological property is a proper class of topological spaces which is closed under homeomorphisms. That is, a property of spaces is a topological property if whenever a space X possesses that property every space homeomorphic to X possesses that property. Informally, a topological property is a property of the space that can be expressed using open sets.
A common problem in topology is to decide whether two topological spaces are homeomorphic or not. To prove that two spaces are not homeomorphic, it is sufficient to find a topological property which is not shared by them.
Properties of topological properties
A property is:
Hereditary, if for every topological space X with property P and every subset S of X, the subspace S has property P
Weakly hereditary, if for every topological space X with property P and every closed subset S of X, the subspace S has property P
Common topological properties
Cardinal functions
The cardinality |X| of the space X.
The cardinality of the topology (the set of open subsets) of the space X.
Weight w(X), the least cardinality of a basis of the topology of the space X.
Density d(X), the least cardinality of a subset of X whose closure is X.
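For a finite space these cardinal functions can be computed by brute force. The sketch below does so for the two-point Sierpiński space; the example and the exhaustive search over subsets are illustrative only and do not scale to larger spaces.

```python
from itertools import combinations

X = frozenset({0, 1})
topology = [frozenset(), frozenset({1}), X]     # the Sierpinski topology

def closure(S):
    """Closure of S: points whose every open neighbourhood meets S."""
    return {x for x in X if all(S & U for U in topology if x in U)}

def is_basis(B):
    """B is a basis if every nonempty open set is a union of members of B."""
    return all(U == frozenset().union(*[b for b in B if b <= U])
               for U in topology if U)

print(len(X), len(topology))                    # cardinality of the space and of the topology
# weight: least cardinality of a basis
print(min(len(B) for r in range(len(topology) + 1)
          for B in combinations(topology, r) if is_basis(B)))       # -> 2
# density: least cardinality of a subset whose closure is X
print(min(len(S) for r in range(len(X) + 1)
          for S in combinations(X, r) if closure(set(S)) == X))     # -> 1
```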
Separation
Note that some of these terms are defined differently in older mathematical literature; see history of the separation axioms.
T0 or Kolmogorov. A space is Kolmogorov if for every pair of distinct points x and y in the space, there is at least either an open set containing x but not y, or an open set containing y but not x.
T1 or Fréchet. A space is Fréchet if for every pair of distinct points x and y in the space, there is an open set containing x but not y. (Compare with T0; here, we are allowed to specify which point will be contained in the open set.) Equivalently, a space is T1 if all its singletons are closed. T1 spaces are always T0.
Sober. A space is sober if every irreducible closed set C has a unique generic point p. In other words, if C is not the (possibly nondisjoint) union of two smaller closed non-empty subsets, then there is a p such that the closure of {p} equals C, and p is the only point with this property.
T2 or Hausdorff. A space is Hausdorff if every two distinct points have disjoint neighbourhoods. T2 spaces are always T1.
T2½ or Urysohn. A space is Urysohn if every two distinct points have disjoint closed neighbourhoods. T2½ spaces are always T2.
Completely T2 or completely Hausdorff. A space is completely T2 if every two distinct points are separated by a function. Every completely Hausdorff space is Urysohn.
Regular. A space is regular if whenever C is a closed set and p is a point not in C, then C and p have disjoint neighbourhoods.
T3 or Regular Hausdorff. A space is regular Hausdorff if it is a regular T0 space. (A regular space is Hausdorff if and only if it is T0, so the terminology is consistent.)
Comple
|
https://en.wikipedia.org/wiki/Cross%20section
|
Cross section may refer to:
Cross section (geometry)
Cross-sectional views in architecture & engineering 3D
Cross section (geology)
Cross section (electronics)
Radar cross section, measure of detectability
Cross section (physics)
Absorption cross section
Nuclear cross section
Neutron cross section
Photoionisation cross section
Gamma ray cross section
Cross Section (album), 1956 musical album by Billy Taylor
See also
Cross section (fiber), microscopic view of textile fibers.
Section (fiber bundle), in differential and algebraic geometry and topology, a section of a fiber bundle or sheaf
Cross-sectional data, in statistics, econometrics, and medical research, a data set drawn from a single point in time
Cross-sectional study, a scientific investigation utilizing cross-sectional data
Cross-sectional regression, a particular statistical technique for carrying out a cross-sectional study
|
https://en.wikipedia.org/wiki/Cross%20section%20%28geometry%29
|
In geometry and science, a cross section is the non-empty intersection of a solid body in three-dimensional space with a plane, or the analog in higher-dimensional spaces. Cutting an object into slices creates many parallel cross-sections. The boundary of a cross-section in three-dimensional space that is parallel to two of the axes, that is, parallel to the plane determined by these axes, is sometimes referred to as a contour line; for example, if a plane cuts through mountains of a raised-relief map parallel to the ground, the result is a contour line in two-dimensional space showing points on the surface of the mountains of equal elevation.
In technical drawing a cross-section, being a projection of an object onto a plane that intersects it, is a common tool used to depict the internal arrangement of a 3-dimensional object in two dimensions. It is traditionally crosshatched, with the style of crosshatching often indicating the types of materials being used.
With computed axial tomography, computers can construct cross-sections from x-ray data.
Definition
If a plane intersects a solid (a 3-dimensional object), then the region common to the plane and the solid is called a cross-section of the solid. A plane containing a cross-section of the solid may be referred to as a cutting plane.
The shape of the cross-section of a solid may depend upon the orientation of the cutting plane to the solid. For instance, while all the cross-sections of a ball are disks, the cross-sections of a cube depend on how the cutting plane is related to the cube. If the cutting plane is perpendicular to a line joining the centers of two opposite faces of the cube, the cross-section will be a square; however, if the cutting plane is perpendicular to a diagonal of the cube joining opposite vertices, the cross-section can be either a point, a triangle or a hexagon.
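This dependence on the cutting plane can be seen computationally by intersecting the plane with each edge of the cube. The sketch below uses the unit cube and two arbitrarily chosen planes; it recovers a hexagonal section for a plane perpendicular to a space diagonal through the centre, and a square one for a plane parallel to a face.

```python
import itertools
import numpy as np

def cube_cross_section(normal, offset):
    """Unordered vertices of the cross-section of the unit cube [0,1]^3 with the
    plane normal . x = offset, found by intersecting the plane with each edge."""
    normal = np.asarray(normal, dtype=float)
    verts = [np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=3)]
    pts = []
    for a, b in itertools.combinations(verts, 2):
        if np.sum(np.abs(a - b)) != 1:          # keep only cube edges
            continue
        da, db = normal @ a - offset, normal @ b - offset
        if da * db <= 0 and da != db:           # the plane crosses this edge
            t = da / (da - db)
            pts.append(a + t * (b - a))
    return np.unique(np.round(pts, 9), axis=0)

print(len(cube_cross_section([1, 1, 1], 1.5)))   # plane perpendicular to a space diagonal -> 6 (hexagon)
print(len(cube_cross_section([0, 0, 1], 0.5)))   # plane parallel to a face -> 4 (square)
```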
Plane sections
A related concept is that of a plane section, which is the curve of intersection of a plane with a surface. Thus, a plane section is the boundary of a cross-section of a solid in a cutting plane.
If a surface in a three-dimensional space is defined by a function of two variables, i.e., z = f(x, y), the plane sections by cutting planes that are parallel to a coordinate plane (a plane determined by two coordinate axes) are called level curves or isolines.
More specifically, cutting planes with equations of the form z = k (planes parallel to the xy-plane) produce plane sections that are often called contour lines in application areas.
Mathematical examples of cross sections and plane sections
A cross section of a polyhedron is a polygon.
The conic sections – circles, ellipses, parabolas, and hyperbolas – are plane sections of a cone with the cutting planes at various different angles, as seen in the diagram at left.
Any cross-section passing through the center of an ellipsoid forms an elliptic region, while the corresponding plane sections are ellipses on its surface. These degenerate to disks and circles, r
|
https://en.wikipedia.org/wiki/Space%20diagonal
|
In geometry, a space diagonal (also interior diagonal or body diagonal) of a polyhedron is a line connecting two vertices that are not on the same face. Space diagonals contrast with face diagonals, which connect vertices on the same face (but not on the same edge) as each other.
For example, a pyramid has no space diagonals, while a cube (shown at right) or more generally a parallelepiped has four space diagonals.
Axial diagonal
An axial diagonal is a space diagonal that passes through the center of a polyhedron.
For example, in a cube with edge length a, all four space diagonals are axial diagonals, of common length a√3. More generally, a cuboid with edge lengths a, b, and c has all four space diagonals axial, with common length √(a² + b² + c²).
A regular octahedron has 3 axial diagonals, of length a√2, with edge length a.
A regular icosahedron has 6 axial diagonals of length a√(φ√5), where φ is the golden ratio (1 + √5)/2.
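A quick numeric check of the cuboid formula (the 2 × 3 × 6 box is chosen only because its space diagonal happens to be an integer):

```python
from math import sqrt

def space_diagonal(a, b, c):
    """Length of a space diagonal of an a x b x c cuboid; all four are axial and equal."""
    return sqrt(a * a + b * b + c * c)

print(space_diagonal(2, 3, 6))    # -> 7.0
print(space_diagonal(1, 1, 1))    # cube: sqrt(3) = 1.732...
```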
Space diagonals of magic cubes
A magic square is an arrangement of numbers in a square grid so that the sum of the numbers along every row, column, and diagonal is the same. Similarly, one may define a magic cube to be an arrangement of numbers in a cubical grid so that the sum of the numbers on the four space diagonals must be the same as the sum of the numbers in each row, each column, and each pillar.
See also
Distance
Face diagonal
Magic cube classes
Hypotenuse
Spacetime interval
References
John R. Hendricks, The Pan-3-Agonal Magic Cube, Journal of Recreational Mathematics 5:1:1972, pp 51–54. First published mention of pan-3-agonals
Hendricks, J. R., Magic Squares to Tesseracts by Computer, 1998, 0-9684700-0-9, page 49
Heinz & Hendricks, Magic Square Lexicon: Illustrated, 2000, 0-9687985-0-0, pages 99,165
Guy, R. K. Unsolved Problems in Number Theory, 2nd ed. New York: Springer-Verlag, p. 173, 1994.
External links
de Winkel Magic Encyclopedia
Heinz - Basic cube parts
John Hendricks Hypercubes
Magic squares
Elementary geometry
|
https://en.wikipedia.org/wiki/Semisimple%20Lie%20algebra
|
In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras. (A simple Lie algebra is a non-abelian Lie algebra without any non-zero proper ideals).
Throughout the article, unless otherwise stated, a Lie algebra is a finite-dimensional Lie algebra over a field of characteristic 0. For such a Lie algebra 𝔤, if nonzero, the following conditions are equivalent:
𝔤 is semisimple;
the Killing form, κ(x,y) = tr(ad(x)ad(y)), is non-degenerate;
𝔤 has no non-zero abelian ideals;
𝔤 has no non-zero solvable ideals;
the radical (maximal solvable ideal) of 𝔤 is zero.
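As a numeric illustration of the Killing-form criterion, the sketch below computes κ on the standard basis (h, e, f) of sl(2, C) (an arbitrary choice of semisimple Lie algebra) and checks that the resulting Gram matrix is non-degenerate; the coordinate bookkeeping is specific to this 2 × 2 example.

```python
import numpy as np

# Basis of sl(2, C): [h,e] = 2e, [h,f] = -2f, [e,f] = h.
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [h, e, f]

def coords(m):
    """Coordinates of a traceless 2x2 matrix m = a*h + b*e + c*f."""
    return np.array([m[0, 0], m[0, 1], m[1, 0]])

def ad(x):
    """Matrix of ad(x) in the basis (h, e, f)."""
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

# Killing form kappa(x, y) = tr(ad(x) ad(y)) as a 3x3 Gram matrix.
kappa = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(kappa)                 # [[8, 0, 0], [0, 0, 4], [0, 4, 0]]
print(np.linalg.det(kappa))  # nonzero, consistent with semisimplicity
```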
Significance
The significance of semisimplicity comes firstly from the Levi decomposition, which states that every finite dimensional Lie algebra is the semidirect product of a solvable ideal (its radical) and a semisimple algebra. In particular, there is no nonzero Lie algebra that is both solvable and semisimple.
Semisimple Lie algebras have a very elegant classification, in stark contrast to solvable Lie algebras. Semisimple Lie algebras over an algebraically closed field of characteristic zero are completely classified by their root system, which are in turn classified by Dynkin diagrams. Semisimple algebras over non-algebraically closed fields can be understood in terms of those over the algebraic closure, though the classification is somewhat more intricate; see real form for the case of real semisimple Lie algebras, which were classified by Élie Cartan.
Further, the representation theory of semisimple Lie algebras is much cleaner than that for general Lie algebras. For example, the Jordan decomposition in a semisimple Lie algebra coincides with the Jordan decomposition in its representation; this is not the case for Lie algebras in general.
If 𝔤 is semisimple, then 𝔤 = [𝔤, 𝔤]. In particular, every linear semisimple Lie algebra is a subalgebra of 𝔰𝔩, the special linear Lie algebra. The study of the structure of 𝔰𝔩 constitutes an important part of the representation theory for semisimple Lie algebras.
History
The semisimple Lie algebras over the complex numbers were first classified by Wilhelm Killing (1888–90), though his proof lacked rigor. His proof was made rigorous by Élie Cartan (1894) in his Ph.D. thesis, who also classified semisimple real Lie algebras. This was subsequently refined, and the present classification by Dynkin diagrams was given by the then 22-year-old Eugene Dynkin in 1947. Some minor modifications have been made (notably by J. P. Serre), but the proof is unchanged in its essentials and can be found in any standard reference.
Basic properties
Every ideal, quotient and product of semisimple Lie algebras is again semisimple.
The center of a semisimple Lie algebra is trivial (since the center is an abelian ideal). In other words, the adjoint representation ad is injective. Moreover, its image turns out to be Der(𝔤), the Lie algebra of derivations on 𝔤. Hence, ad : 𝔤 → Der(𝔤) is an isomorphism. (This is a special case of Whitehead's lemma.)
As the adjoint representation is injective, a
|
https://en.wikipedia.org/wiki/Infinite-dimensional%20holomorphy
|
In mathematics, infinite-dimensional holomorphy is a branch of functional analysis. It is concerned with generalizations of the concept of holomorphic function to functions defined and taking values in complex Banach spaces (or Fréchet spaces more generally), typically of infinite dimension. It is one aspect of nonlinear functional analysis.
Vector-valued holomorphic functions defined in the complex plane
A first step in extending the theory of holomorphic functions beyond one complex dimension is considering so-called vector-valued holomorphic functions, which are still defined in the complex plane C, but take values in a Banach space. Such functions are important, for example, in constructing the holomorphic functional calculus for bounded linear operators.
Definition. A function f : U → X, where U ⊂ C is an open subset and X is a complex Banach space, is called holomorphic if it is complex-differentiable; that is, for each point z ∈ U the following limit exists:
f′(z) = lim_(h→0) (f(z + h) − f(z)) / h.
One may define the line integral of a vector-valued holomorphic function f : U → X along a rectifiable curve γ : [a, b] → U in the same way as for complex-valued holomorphic functions, as the limit of sums of the form
f(γ(t1))(γ(t1) − γ(t0)) + f(γ(t2))(γ(t2) − γ(t1)) + ⋯ + f(γ(tn))(γ(tn) − γ(tn−1)),
where a = t0 < t1 < ... < tn = b is a subdivision of the interval [a, b], as the lengths of the subdivision intervals approach zero.
It is a quick check that the Cauchy integral theorem also holds for vector-valued holomorphic functions. Indeed, if f : U → X is such a function and T : X → C a bounded linear functional, one can show that
T(∫_γ f(z) dz) = ∫_γ (T ∘ f)(z) dz.
Moreover, the composition T ∘ f : U → C is a complex-valued holomorphic function. Therefore, for γ a simple closed curve whose interior is contained in U, the integral on the right is zero, by the classical Cauchy integral theorem. Then, since T is arbitrary, it follows from the Hahn–Banach theorem that
∫_γ f(z) dz = 0,
which proves the Cauchy integral theorem in the vector-valued case.
Using this powerful tool one may then prove Cauchy's integral formula, and, just like in the classical case, that any vector-valued holomorphic function is analytic.
A useful criterion for a function f : U → X to be holomorphic is that T ∘ f : U → C is a holomorphic complex-valued function for every continuous linear functional T : X → C. Such an f is weakly holomorphic. It can be shown that a function defined on an open subset of the complex plane with values in a Fréchet space is holomorphic if, and only if, it is weakly holomorphic.
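A concrete finite-dimensional instance — the setting of the holomorphic functional calculus mentioned above — is the matrix-valued resolvent z ↦ (zI − A)⁻¹. Numerically, its contour integral over a circle enclosing no eigenvalue is zero (the vector-valued Cauchy theorem), while over a circle enclosing a single eigenvalue it gives the spectral projection. The matrix A, the circles, and the quadrature below are arbitrary illustrative choices.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, -1.0]])          # eigenvalues 2 and -1

def resolvent(z):
    """A matrix-valued holomorphic function of z (away from the spectrum of A)."""
    return np.linalg.inv(z * np.eye(2) - A)

def contour_integral(f, center, radius, n=2000):
    """Numerically evaluate (1/(2 pi i)) times the integral of f over a circle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)
    return sum(f(zi) * dzi for zi, dzi in zip(z, dz)) / (2j * np.pi)

# Circle enclosing no eigenvalue: the integral vanishes.
print(np.round(contour_integral(resolvent, 0.5 + 0.0j, 0.3), 6))
# Circle enclosing only the eigenvalue 2: the spectral projection [[1, 1/3], [0, 0]].
print(np.round(contour_integral(resolvent, 2.0 + 0.0j, 0.5), 6))
```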
Holomorphic functions between Banach spaces
More generally, given two complex Banach spaces X and Y and an open set U ⊂ X, f : U → Y is called holomorphic if the Fréchet derivative of f exists at every point in U. One can show that, in this more general context, it is still true that a holomorphic function is analytic, that is, it can be locally expanded in a power series. It is no longer true however that if a function is defined and holomorphic in a ball, its power series around the center of the ball is convergent in the enti
|
https://en.wikipedia.org/wiki/Angle%20of%20parallelism
|
In hyperbolic geometry, angle of parallelism is the angle at the non-right angle vertex of a right hyperbolic triangle having two asymptotic parallel sides. The angle depends on the segment length a between the right angle and the vertex of the angle of parallelism.
Given a point not on a line, drop a perpendicular to the line from the point. Let a be the length of this perpendicular segment, and let Φ be the least angle such that the line drawn through the point at that angle does not intersect the given line. Since the two sides are asymptotically parallel, this least angle is the angle of parallelism: Φ = Π(a).
There are five equivalent expressions that relate Φ and a:
where sinh, cosh, tanh, sech and csch are hyperbolic functions and gd is the Gudermannian function.
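Assuming the standard relations tan(Π(a)/2) = e^(−a), sin Π(a) = sech a, cos Π(a) = tanh a, tan Π(a) = csch a and Π(a) = π/2 − gd(a), the sketch below checks them against one another numerically; the value of a is arbitrary.

```python
import numpy as np

def parallelism_angle(a):
    """Angle of parallelism via Lobachevsky's formula tan(Pi(a)/2) = exp(-a)."""
    return 2.0 * np.arctan(np.exp(-a))

a = 1.3
Pi = parallelism_angle(a)
gd = np.arcsin(np.tanh(a))                       # Gudermannian function
print(np.isclose(np.sin(Pi), 1 / np.cosh(a)))    # sin Pi(a) = sech a
print(np.isclose(np.cos(Pi), np.tanh(a)))        # cos Pi(a) = tanh a
print(np.isclose(np.tan(Pi), 1 / np.sinh(a)))    # tan Pi(a) = csch a
print(np.isclose(Pi, np.pi / 2 - gd))            # Pi(a) = pi/2 - gd(a)
```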
Construction
János Bolyai discovered a construction which gives the asymptotic parallel s to a line r passing through a point A not on r. Drop a perpendicular from A onto B on r. Choose any point C on r different from B. Erect a perpendicular t to r at C. Drop a perpendicular from A onto D on t. Then length DA is longer than CB, but shorter than CA. Draw a circle around C with radius equal to DA. It will intersect the segment AB at a point E. Then the angle BEC is independent of the length BC, depending only on AB; it is the angle of parallelism. Construct s through A at angle BEC from AB.
See Trigonometry of right triangles for the formulas used here.
History
The angle of parallelism was developed in 1840 in the German publication "Geometrische Untersuchungen zur Theorie der Parallellinien" by Nikolai Lobachevsky.
This publication became widely known in English after the Texas professor G. B. Halsted produced a translation in 1891. (Geometrical Researches on the Theory of Parallels)
The following passages define this pivotal concept in hyperbolic geometry:
The angle HAD between the parallel HA and the perpendicular AD is called the parallel angle (angle of parallelism) which we will here designate by Π(p) for AD = p.
Demonstration
In the Poincaré half-plane model of the hyperbolic plane (see Hyperbolic motions), one can establish the relation of Φ to a with Euclidean geometry. Let Q be the semicircle with diameter on the x-axis that passes through the points (1,0) and (0,y), where y > 1. Since Q is tangent to the unit semicircle centered at the origin, the two semicircles represent parallel hyperbolic lines. The y-axis crosses both semicircles, making a right angle with the unit semicircle and a variable angle Φ with Q. The angle at the center of Q subtended by the radius to (0, y) is also Φ because the two angles have sides that are perpendicular, left side to left side, and right side to right side. The semicircle Q has its center at (x, 0), x < 0, so its radius is 1 − x. Thus, the radius squared of Q is
hence
The metric of the Poincaré half-plane model of hyperbolic geometry parametrizes distance on the ray {(0, y) : y > 0 } with logarithmic measure. Let log y = a, so y = e^a where e is the base of the natural
|
https://en.wikipedia.org/wiki/Engel%27s%20theorem
|
In representation theory, a branch of mathematics, Engel's theorem states that a finite-dimensional Lie algebra 𝔤 is a nilpotent Lie algebra if and only if for each X in 𝔤, the adjoint map
ad(X), given by ad(X)(Y) = [X, Y], is a nilpotent endomorphism on 𝔤; i.e., ad(X)^k = 0 for some k. It is a consequence of the theorem, also called Engel's theorem, which says that if a Lie algebra of matrices consists of nilpotent matrices, then the matrices can all be simultaneously brought to a strictly upper triangular form. Note that if we merely have a Lie algebra of matrices which is nilpotent as a Lie algebra, then this conclusion does not follow (i.e. the naïve replacement in Lie's theorem of "solvable" with "nilpotent", and "upper triangular" with "strictly upper triangular", is false; this already fails for the one-dimensional Lie subalgebra of scalar matrices).
The theorem is named after the mathematician Friedrich Engel, who sketched a proof of it in a letter to Wilhelm Killing dated 20 July 1890 . Engel's student K.A. Umlauf gave a complete proof in his 1891 dissertation, reprinted as .
Statements
Let 𝔤𝔩(V) be the Lie algebra of the endomorphisms of a finite-dimensional vector space V and 𝔤 ⊆ 𝔤𝔩(V) a subalgebra. Then Engel's theorem states the following are equivalent:
Each X in 𝔤 is a nilpotent endomorphism on V.
There exists a flag of subspaces 0 = V₀ ⊂ V₁ ⊂ ⋯ ⊂ Vₙ = V with dim Vᵢ = i such that 𝔤 · Vᵢ ⊆ Vᵢ₋₁; i.e., the elements of 𝔤 are simultaneously strictly upper-triangularizable.
Note that no assumption on the underlying base field is required.
We note that Statement 2. for various 𝔤 and V is equivalent to the statement
For each nonzero finite-dimensional vector space V and each subalgebra 𝔤 ⊆ 𝔤𝔩(V) consisting of nilpotent endomorphisms, there exists a nonzero vector v in V such that X(v) = 0 for every X in 𝔤.
This is the form of the theorem proven in #Proof. (This statement is trivially equivalent to Statement 2 since it allows one to inductively construct a flag with the required property.)
In general, a Lie algebra 𝔤 is said to be nilpotent if the lower central series of it vanishes in a finite step; i.e., for 𝔤^(i+1) = [𝔤, 𝔤^i] the (i+1)-th power of 𝔤, there is some k such that 𝔤^k = 0. Then Engel's theorem implies the following theorem (also called Engel's theorem): when 𝔤 has finite dimension,
𝔤 is nilpotent if and only if ad(X) is nilpotent for each X in 𝔤.
Indeed, if consists of nilpotent operators, then by 1. 2. applied to the algebra , there exists a flag such that . Since , this implies is nilpotent. (The converse follows straightforwardly from the definition.)
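As a small numeric illustration of the matrix form of the theorem, the sketch below takes the (nilpotent) Lie algebra of strictly upper triangular 3 × 3 matrices and checks, for one arbitrarily chosen element, that both the element itself and its adjoint map are nilpotent; the basis bookkeeping is specific to this 3 × 3 example.

```python
import numpy as np

# Basis of n(3), the strictly upper triangular 3x3 matrices (a nilpotent Lie algebra).
E = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]   # elementary matrix with 1 at (i, j)
basis = [E(0, 1), E(0, 2), E(1, 2)]

def coords(m):
    """Coordinates of a strictly upper triangular matrix in the basis above."""
    return np.array([m[0, 1], m[0, 2], m[1, 2]])

def ad(x):
    """Matrix of ad(x) = [x, .] in the basis above."""
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

x = 2.0 * E(0, 1) - 1.0 * E(0, 2) + 3.0 * E(1, 2)         # an arbitrary element
print(np.allclose(np.linalg.matrix_power(x, 3), 0))        # x^3 = 0
print(np.allclose(np.linalg.matrix_power(ad(x), 3), 0))    # ad(x)^3 = 0 (here ad(x)^2 = 0 already)
```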
Proof
We prove the following form of the theorem: if 𝔤 ⊆ 𝔤𝔩(V) is a Lie subalgebra such that every X in 𝔤 is a nilpotent endomorphism and if V has positive dimension, then there exists a nonzero vector v in V such that X(v) = 0 for each X in 𝔤.
The proof is by induction on the dimension of and consists of a few steps. (Note the structure of the proof is very similar to that for Lie's theorem, which concerns a solvable algebra.) The basic case is trivial and we assume the dimension of is positive.
Step 1: Find an ideal of codimension one in .
This is the most difficult step. Let be a maximal (proper) subalgebra
|
https://en.wikipedia.org/wiki/Artin%E2%80%93Mazur%20zeta%20function
|
In mathematics, the Artin–Mazur zeta function, named after Michael Artin and Barry Mazur, is a function that is used for studying the iterated functions that occur in dynamical systems and fractals.
It is defined from a given function f as the formal power series
ζ_f(z) = exp( Σ_(n=1)^∞ (|Fix(f^n)| / n) z^n ),
where Fix(f^n) is the set of fixed points of the n-th iterate of the function f, and |Fix(f^n)| is the number of fixed points (i.e. the cardinality of that set).
Note that the zeta function is defined only if the set of fixed points is finite for each . This definition is formal in that the series does not always have a positive radius of convergence.
The Artin–Mazur zeta function is invariant under topological conjugation.
The Milnor–Thurston theorem states that the Artin–Mazur zeta function of an interval map f is the inverse of the kneading determinant of f.
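For a map whose periodic points can be counted, the series can be truncated and compared with a closed form. The sketch below does this for the angle-doubling map of the circle, where the n-th iterate has 2^n − 1 fixed points and the zeta function works out to (1 − z)/(1 − 2z); the truncation order is an arbitrary choice.

```python
import sympy as sp

z = sp.symbols('z')
N = 8                                            # truncation order

def zeta_series(fix_counts, order):
    """Truncated Artin-Mazur zeta function exp(sum_n |Fix(f^n)| z^n / n)."""
    s = sum(sp.Rational(c, n) * z**n for n, c in enumerate(fix_counts, start=1))
    return sp.series(sp.exp(s), z, 0, order)

# Doubling map x -> 2x (mod 1): |Fix(f^n)| = 2^n - 1.
counts = [2**n - 1 for n in range(1, N + 1)]
print(zeta_series(counts, N))                    # matches the expansion below
print(sp.series((1 - z) / (1 - 2*z), z, 0, N))   # closed form (1 - z)/(1 - 2z)
```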
Analogues
The Artin–Mazur zeta function is formally similar to the local zeta function, when a diffeomorphism on a compact manifold replaces the Frobenius mapping for an algebraic variety over a finite field.
The Ihara zeta function of a graph can be interpreted as an example of the Artin–Mazur zeta function.
See also
Lefschetz number
Lefschetz zeta-function
References
Zeta and L-functions
Dynamical systems
Fixed points (mathematics)
|
https://en.wikipedia.org/wiki/Ihara%20zeta%20function
|
In mathematics, the Ihara zeta function is a zeta function associated with a finite graph. It closely resembles the Selberg zeta function, and is used to relate closed walks to the spectrum of the adjacency matrix. The Ihara zeta function was first defined by Yasutaka Ihara in the 1960s in the context of discrete subgroups of the two-by-two p-adic special linear group. Jean-Pierre Serre suggested in his book Trees that Ihara's original definition can be reinterpreted graph-theoretically. It was Toshikazu Sunada who put this suggestion into practice in 1985. As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis.
Definition
The Ihara zeta function is defined as the analytic continuation of the infinite product
ζ_G(u) = ∏_p 1/(1 − u^L(p)),
where L(p) is the length of p.
The product in the definition is taken over all prime closed geodesics p of the graph G, where geodesics which differ by a cyclic rotation are considered equal. A closed geodesic on G (known in graph theory as a "reduced closed walk"; it is not a graph geodesic) is a finite sequence of vertices v₀, v₁, …, v_k = v₀ such that consecutive vertices are adjacent and vᵢ₋₁ ≠ vᵢ₊₁ (no backtracking).
The integer k is the length of the geodesic. The closed geodesic is prime if it cannot be obtained by repeating a shorter closed geodesic m times, for an integer m > 1.
This graph-theoretic formulation is due to Sunada.
Ihara's formula
Ihara (and Sunada in the graph-theoretic setting) showed that for regular graphs the zeta function is a rational function.
If G is a (q+1)-regular graph with adjacency matrix A, then
ζ_G(u)⁻¹ = (1 − u²)^(r−1) det(I − Au + qu²I),
where r is the circuit rank of G. If G is connected and has n vertices, then r − 1 = (q − 1)n/2.
The Ihara zeta-function is in fact always the reciprocal of a graph polynomial:
ζ_G(u)⁻¹ = det(I − Tu),
where T is Ki-ichiro Hashimoto's edge adjacency operator. Hyman Bass gave a determinant formula involving the adjacency operator.
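A small sketch evaluating the three-term determinant formula for a regular graph is given below, using the complete graph K4 (3-regular, so q = 2) as an arbitrary example; the helper assumes the graph is connected so that the circuit rank is |E| − |V| + 1.

```python
import sympy as sp

u = sp.symbols('u')

def ihara_zeta_inverse(A, q):
    """1/zeta_G(u) for a connected (q+1)-regular graph with adjacency matrix A,
    via Ihara's formula (1 - u^2)^(r-1) * det(I - A u + q u^2 I), r = |E| - |V| + 1."""
    A = sp.Matrix(A)
    n = A.rows
    m = sum(A[i, j] for i in range(n) for j in range(n)) // 2   # number of edges
    r = m - n + 1                                               # circuit rank
    Id = sp.eye(n)
    return sp.expand((1 - u**2)**(r - 1) * (Id - A*u + q*u**2*Id).det())

A_K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(sp.factor(ihara_zeta_inverse(A_K4, 2)))
# -> (1 - u**2)**2 * (1 - u) * (1 - 2*u) * (1 + u + 2*u**2)**3, up to factor ordering
```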
Applications
The Ihara zeta function plays an important role in the study of free groups, spectral graph theory, and dynamical systems, especially symbolic dynamics, where the Ihara zeta function is an example of a Ruelle zeta function.
References
Zeta and L-functions
Algebraic graph theory
|
https://en.wikipedia.org/wiki/Lerch%20zeta%20function
|
In mathematics, the Lerch zeta function, sometimes called the Hurwitz–Lerch zeta function, is a special function that generalizes the Hurwitz zeta function and the polylogarithm. It is named after Czech mathematician Mathias Lerch, who published a paper about the function in 1887.
Definition
The Lerch zeta function is given by
L(λ, s, α) = Σ_(n=0)^∞ e^(2πiλn) / (n + α)^s.
A related function, the Lerch transcendent, is given by
Φ(z, s, α) = Σ_(n=0)^∞ z^n / (n + α)^s.
The transcendent only converges for any real number α > 0, where:
|z| < 1, or
|z| = 1, and Re(s) > 1.
The two are related, as
Φ(e^(2πiλ), s, α) = L(λ, s, α).
Integral representations
The Lerch transcendent has an integral representation:
The proof is based on using the integral definition of the Gamma function to write
and then interchanging the sum and integral. The resulting integral representation converges for Re(s) > 0, and Re(a) > 0. This analytically continues to z outside the unit disk. The integral formula also holds if z = 1, Re(s) > 1, and Re(a) > 0; see Hurwitz zeta function.
A contour integral representation is given by
where C is a Hankel contour counterclockwise around the positive real axis, not enclosing any of the points (for integer k) which are poles of the integrand. The integral assumes Re(a) > 0.
Other integral representations
A Hermite-like integral representation is given by
for
and
for
Similar representations include
and
holding for positive z (and more generally wherever the integrals converge). Furthermore,
The last formula is also known as Lipschitz formula.
Special cases
The Lerch zeta function and Lerch transcendent generalize various special functions.
The Hurwitz zeta function is the special case
ζ(s, α) = Φ(1, s, α).
The polylogarithm is another special case:
Li_s(z) = z Φ(z, s, 1).
The Riemann zeta function is a special case of both of the above:
ζ(s) = Φ(1, s, 1).
Other special cases include:
The Dirichlet eta function:
η(s) = Φ(−1, s, 1).
The Dirichlet beta function:
β(s) = 2^(−s) Φ(−1, s, 1/2).
The Legendre chi function:
χ_ν(z) = 2^(−ν) z Φ(z², ν, 1/2).
The polygamma function, for positive integers n:
ψ^(n)(α) = (−1)^(n+1) n! Φ(1, n + 1, α).
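Several of these specializations can be checked numerically with mpmath, whose lerchphi routine evaluates the Lerch transcendent; the parameter values below are arbitrary.

```python
import mpmath as mp

mp.mp.dps = 30
s, a, z = 2.5, 0.75, 0.4

# Lerch transcendent Phi(z, s, a) = sum_{n>=0} z^n / (n + a)^s, and some special cases:
print(mp.lerchphi(1, s, a) - mp.zeta(s, a))          # Hurwitz zeta: Phi(1, s, a) = zeta(s, a)
print(z * mp.lerchphi(z, s, 1) - mp.polylog(s, z))   # polylogarithm: Li_s(z) = z * Phi(z, s, 1)
print(mp.lerchphi(1, s, 1) - mp.zeta(s))             # Riemann zeta: Phi(1, s, 1) = zeta(s)
print(mp.lerchphi(-1, s, 1) - mp.altzeta(s))         # Dirichlet eta: eta(s) = Phi(-1, s, 1)
# all four differences are ~ 0
```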
Identities
For λ rational, the summand is a root of unity, and thus may be expressed as a finite sum over the Hurwitz zeta function. Suppose with and . Then and .
Various identities include:
and
and
Series representations
A series representation for the Lerch transcendent is given by
(Note that is a binomial coefficient.)
The series is valid for all s, and for complex z with Re(z)<1/2. Note a general resemblance to a similar series representation for the Hurwitz zeta function.
A Taylor series in the first parameter was given by Arthur Erdélyi. It may be written as the following series, which is valid for
If n is a positive integer, then
where is the digamma function.
A Taylor series in the third variable is given by
where is the Pochhammer symbol.
Series at a = −n is given by
A special case for n = 0 has the following series
where is the polylogarithm.
An asymptotic series for
for
and
for
An asymptotic series in the incomplete gamma function
for
The representation as a generalized hypergeometric function is
Asymptotic expansion
The polylogarithm function is defined as
Let
For and ,
|
https://en.wikipedia.org/wiki/Seifert%20surface
|
In mathematics, a Seifert surface (named after German mathematician Herbert Seifert) is an orientable surface whose boundary is a given knot or link.
Such surfaces can be used to study the properties of the associated knot or link. For example, many knot invariants are most easily calculated using a Seifert surface. Seifert surfaces are also interesting in their own right, and the subject of considerable research.
Specifically, let L be a tame oriented knot or link in Euclidean 3-space (or in the 3-sphere). A Seifert surface is a compact, connected, oriented surface S embedded in 3-space whose boundary is L such that the orientation on L is just the induced orientation from S.
Note that any compact, connected, oriented surface with nonempty boundary in Euclidean 3-space is the Seifert surface associated to its boundary link. A single knot or link can have many different inequivalent Seifert surfaces. A Seifert surface must be oriented. It is also possible to associate surfaces to knots that are unoriented or even non-orientable.
Examples
The standard Möbius strip has the unknot for a boundary but is not a Seifert surface for the unknot because it is not orientable.
The "checkerboard" coloring of the usual minimal crossing projection of the trefoil knot gives a Mobius strip with three half twists. As with the previous example, this is not a Seifert surface as it is not orientable. Applying Seifert's algorithm to this diagram, as expected, does produce a Seifert surface; in this case, it is a punctured torus of genus g = 1, and the Seifert matrix is
Existence and Seifert matrix
It is a theorem that any link always has an associated Seifert surface. This theorem was first published by Frankl and Pontryagin in 1930. A different proof was published in 1934 by Herbert Seifert and relies on what is now called the Seifert algorithm. The algorithm produces a Seifert surface S, given a projection of the knot or link in question.
Suppose that link has m components (m = 1 for a knot), the diagram has d crossing points, and resolving the crossings (preserving the orientation of the knot) yields f circles. Then the surface S is constructed from f disjoint disks by attaching d bands. The homology group H₁(S) is free abelian on 2g generators, where
g = (2 + d − f − m)/2
is the genus of S. The intersection form Q on H₁(S) is skew-symmetric, and there is a basis of 2g cycles a₁, …, a_(2g) with Q
equal to a direct sum of the g copies of the matrix
((0, −1), (1, 0)).
The 2g × 2g integer Seifert matrix
V = (v(i, j))
has v(i, j) the linking number in Euclidean 3-space (or in the 3-sphere) of aᵢ and the "pushoff" of aⱼ in the positive direction of S. More precisely, recalling that Seifert surfaces are bicollared, meaning that we can extend the embedding of S to an embedding of S × [−1, 1], given some representative loop x which is a homology generator in the interior of S, the positive pushout is x × {1} and the negative pushout is x × {−1}.
With this, we have
V − V∗ = Q,
where V∗ = (v(j, i)) is the transpose matrix. Every integer 2g × 2g matrix V with V − V∗ = Q arises as the Seifert matrix of a knot wit
|
https://en.wikipedia.org/wiki/Uniform%20isomorphism
|
In the mathematical field of topology, a uniform isomorphism or uniform homeomorphism is a special isomorphism between uniform spaces that respects uniform properties. Uniform spaces with uniform maps form a category. An isomorphism between uniform spaces is called a uniform isomorphism.
Definition
A function f between two uniform spaces X and Y is called a uniform isomorphism if it satisfies the following properties
f is a bijection
f is uniformly continuous
the inverse function f⁻¹ is uniformly continuous
In other words, a uniform isomorphism is a uniformly continuous bijection between uniform spaces whose inverse is also uniformly continuous.
If a uniform isomorphism exists between two uniform spaces, they are called uniformly isomorphic or uniformly equivalent.
Uniform embeddings
A uniform embedding is an injective uniformly continuous map i : X → Z between uniform spaces whose inverse i⁻¹ : i(X) → X is also uniformly continuous, where the image i(X) has the subspace uniformity inherited from Z.
Examples
The uniform structures induced by equivalent norms on a vector space are uniformly isomorphic.
See also
Homeomorphism — an isomorphism between topological spaces
Isometry — an isomorphism between metric spaces
References
John L. Kelley, General topology, van Nostrand, 1955. P.181.
Homeomorphisms
Uniform spaces
|
https://en.wikipedia.org/wiki/Mediator
|
Mediator may refer to:
A person who engages in mediation
Business mediator, a mediator in business
Vanishing mediator, a philosophical concept
Mediator variable, in statistics
Chemistry and biology
Mediator (coactivator), a multiprotein complex that functions as a transcriptional coactivator
Endogenous mediator, proteins that enhance and activate the functions of other proteins
Gaseous mediator, chemicals produced by some cells that have biological signalling functions
Mediator, a brand name of benfluorex, a withdrawn appetite suppressant medication
Internet, software, and computer
Mediator pattern, in computer science
A mail server's role in email forwarding
Other
Mediator, guitar pick or plectrum, an accessory for picking strings of musical instruments
The Mediator, a teen book series by Meg Cabot (some under the pseudonym Jenny Carroll)
The Mediator, a television documentary produced by Open Media
Mediator (Christ as Mediator), an office of Jesus Christ
Linesman/Mediator, a radar system in the United Kingdom
HMS Mediator, three ships of the British navy
USS Mediator, a ship of the United States navy
Mediator, brand name for benfluorex, an anorectic and hypolipidemic agent
|
https://en.wikipedia.org/wiki/Position%20%28geometry%29
|
In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents the position of a point P in space in relation to an arbitrary reference origin O. Usually denoted x, r, or s, it corresponds to the straight line segment from O to P.
In other words, it is the displacement or translation that maps the origin to P:
The term position vector is used mostly in the fields of differential geometry, mechanics and occasionally vector calculus.
Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean spaces and affine spaces of any dimension.
Relative position
The relative position of a point Q with respect to point P is the Euclidean vector resulting from the subtraction of the two absolute position vectors (each with respect to the origin):
Δr = s − r,
where s and r are the position vectors of Q and P respectively.
The relative direction between two points is their relative position normalized as a unit vector:
(s − r)/‖s − r‖,
where the denominator is the distance between the two points, ‖s − r‖.
A relative direction is a bound vector, in contrast to an ordinary direction, which is a free vector.
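A minimal numeric illustration of relative position and relative direction (the coordinates are arbitrary):

```python
import numpy as np

P = np.array([1.0, 2.0, 2.0])        # position vector of P (relative to the origin)
Q = np.array([4.0, 6.0, 2.0])        # position vector of Q

rel = Q - P                          # relative position of Q with respect to P
dist = np.linalg.norm(rel)           # distance between the two points
direction = rel / dist               # relative direction as a unit vector
print(rel, dist, direction)          # [3. 4. 0.] 5.0 [0.6 0.8 0. ]
```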
Definition
Three dimensions
In three dimensions, any set of three-dimensional coordinates and their corresponding basis vectors can be used to define the location of a point in space—whichever is the simplest for the task at hand may be used.
Commonly, one uses the familiar Cartesian coordinate system, or sometimes spherical polar coordinates, or cylindrical coordinates:
where t is a parameter, owing to their rectangular or circular symmetry. These different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead and are in contexts like continuum mechanics and general relativity (in the latter case one needs an additional time coordinate).
n dimensions
Linear algebra allows for the abstraction of an n-dimensional position vector. A position vector can be expressed as a linear combination of basis vectors:
The set of all position vectors forms position space (a vector space whose elements are the position vectors), since positions can be added (vector addition) and scaled in length (scalar multiplication) to obtain another position vector in the space. The notion of "space" is intuitive, since each xi (i = 1, 2, …, n) can have any value, the collection of values defines a point in space.
The dimension of the position space is n (also denoted dim(R) = n). The coordinates of the vector r with respect to the basis vectors ei are xi. The vector of coordinates forms the coordinate vector or n-tuple (x1, x2, …, xn).
Each coordinate xi may be parameterized a number of parameters t. One parameter xi(t) would describe a curved 1D path, two parameters xi(t1, t2) describes a curved 2D surface, three xi(t1, t2, t3) describes a curved 3D volume of space, and so on.
The linear span of a basis set B = {e1, e2, …, en} equals the position space R, d
|
https://en.wikipedia.org/wiki/Transfer%20operator
|
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system.
The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
Definition
The iterated function to be studied is a map g : X → X for an arbitrary set X.
The transfer operator is defined as an operator L acting on the space of functions f : X → C as
(Lf)(x) = Σ_(y : g(y) = x) h(y) f(y),
where h : X → C is an auxiliary valuation function. When g has a Jacobian determinant |det Dg|, then h is usually taken to be h = 1/|det Dg|.
The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Frobenius–Perron operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.
As a general rule, the transfer operator can usually be interpreted as a (left-)shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift.
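One standard way to see the leading eigenvalue and the invariant measure numerically is Ulam's method: partition [0, 1) into cells and approximate the transfer operator by the matrix of transition fractions between cells. The sketch below does this for the angle-doubling map, whose invariant density is uniform; the partition size and sample count are arbitrary choices.

```python
import numpy as np

def ulam_matrix(g, N=200, samples=100):
    """Ulam discretization of the transfer operator of a map g on [0, 1):
    entry (i, j) is the fraction of cell j that g sends into cell i."""
    P = np.zeros((N, N))
    for j in range(N):
        xs = (j + (np.arange(samples) + 0.5) / samples) / N   # sample points in cell j
        idx = np.minimum((g(xs) * N).astype(int), N - 1)
        for i in idx:
            P[i, j] += 1.0 / samples
    return P

doubling = lambda x: (2 * x) % 1.0
P = ulam_matrix(doubling)

vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
density = np.abs(vecs[:, k].real)
density /= density.sum()

print(vals[k].real)                    # leading eigenvalue ~ 1
print(density.min(), density.max())    # ~ 1/N everywhere: the invariant measure is uniform
```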
Applications
Whereas the iteration of a function naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.
It is often the case that the transfer operator is positive, has discrete positive real-valued eigenvalues, with the largest eigenvalue being equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator.
The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time a
|