https://en.wikipedia.org/wiki/Berkson%27s%20paradox
|
Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks, and conditioning on a collider in graphical models.
It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.
Examples
Overview
The most common example of Berkson's paradox is a false observation of a negative correlation between two desirable traits, i.e., that members of a population which have some desirable trait tend to lack a second. Berkson's paradox occurs when this observation appears true when in reality the two properties are unrelated—or even positively correlated—because members of the population where both are absent are not equally observed. For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where both were bad, they fail to allow for the large number of restaurants in this category which would weaken or even flip the correlation.
Original illustration
Berkson's original illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. Because samples are taken from a hospital in-patient population, rather than from the general public, this can result in a spurious negative association between the disease and the risk factor. For example, if the risk factor is diabetes and the disease is cholecystitis, a hospital patient without diabetes is more likely to have cholecystitis than a member of the general population, since the patient must have had some non-diabetes (possibly cholecystitis-causing) reason to enter the hospital in the first place. That result will be obtained regardless of whether there is any association between diabetes and cholecystitis in the general population.
Ellenberg example
An example presented by Jordan Ellenberg: Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex's dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex's selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population.
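A short simulation makes the effect concrete. This is a sketch with assumed parameters: the traits are taken as independent standard normals and the selection threshold as 1.0; the names `nice` and `handsome` are illustrative.

```python
# Niceness and handsomeness are independent in the full population, but
# conditioning on their sum exceeding a threshold induces a negative
# correlation within the selected pool (Berkson's paradox).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
nice = rng.normal(size=n)
handsome = rng.normal(size=n)          # independent of `nice` by construction
pool = nice + handsome > 1.0           # Alex's selection rule (assumed)

print(np.corrcoef(nice, handsome)[0, 1])              # ~ 0
print(np.corrcoef(nice[pool], handsome[pool])[0, 1])  # clearly negative
```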
|
https://en.wikipedia.org/wiki/Bethe%20lattice
|
In statistical mechanics and mathematics, the Bethe lattice (also called a regular tree) is an infinite connected cycle-free graph where all vertices have the same number of neighbors. The Bethe lattice was introduced into the physics literature by Hans Bethe in 1935. In such a graph, each node is connected to z neighbors; the number z is called either the coordination number or the degree, depending on the field.
Due to its distinctive topological structure, the statistical mechanics of lattice models on this graph are often easier to solve than on other lattices. The solutions are related to the often used Bethe ansatz for these systems.
Basic Properties
When working with the Bethe lattice, it is often convenient to mark a given vertex as the root, to be used as a reference point when considering local properties of the graph.
Sizes of layers
Once a vertex is marked as the root, we can group the other vertices into layers based on their distance from the root. The number of vertices at a distance $d \geq 1$ from the root is $z(z-1)^{d-1}$, as each vertex other than the root is adjacent to $z-1$ vertices at a distance one greater from the root, and the root is adjacent to $z$ vertices at distance 1.
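As a quick check of the layer-size formula, here is a minimal sketch, with coordination number z = 3 chosen for illustration:

```python
# Layer sizes in a Bethe lattice: z*(z-1)**(d-1) vertices at distance d.
def layer_size(z: int, d: int) -> int:
    """Number of vertices at distance d >= 1 from the root."""
    return z * (z - 1) ** (d - 1)

print([layer_size(3, d) for d in range(1, 6)])  # [3, 6, 12, 24, 48]
```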
In statistical mechanics
The Bethe lattice is of interest in statistical mechanics mainly because lattice models on the Bethe lattice are often easier to solve than on other lattices, such as the two-dimensional square lattice. This is because the lack of cycles removes some of the more complicated interactions. While the Bethe lattice does not as closely approximate the interactions in physical materials as other lattices, it can still provide useful insight.
Exact solutions to the Ising model
The Ising model is a mathematical model of ferromagnetism, in which the magnetic properties of a material are represented by a "spin" at each node in the lattice, which is either +1 or −1. The model is also equipped with a constant $K$ representing the strength of the interaction between adjacent nodes, and a constant $h$ representing an external magnetic field.
The Ising model on the Bethe lattice is defined by the partition function
$$Z = \sum_{\{\sigma\}} \exp\left(K \sum_{\langle i,j \rangle} \sigma_i \sigma_j + h \sum_i \sigma_i\right),$$
where the first sum runs over pairs of adjacent nodes.
Magnetization
In order to compute the local magnetization, we can break the lattice up into several identical parts by removing a vertex. This gives us a recurrence relation which allows us to compute the magnetization of a Cayley tree with n shells (the finite analog to the Bethe lattice) as
where and the values of satisfy the recurrence relation
In the case when the system is ferromagnetic, the above sequence converges, so we may take the limit to evaluate the magnetization on the Bethe lattice. We get
where x is a solution to .
There are either 1 or 3 solutions to this equation. In the case where there are 3, the sequence will converge to the smallest when and the largest when .
Free energy
The free energy f at each site of the lattice in the Ising Model is given by
,
where and is as before.
In mathematics
Return probability of a random walk
|
https://en.wikipedia.org/wiki/Field%20equation
|
In theoretical physics and applied mathematics, a field equation is a partial differential equation which determines the dynamics of a physical field, specifically the time evolution and spatial distribution of the field. The solutions to the equation are mathematical functions which correspond directly to the field, as functions of time and space. Since the field equation is a partial differential equation, there are families of solutions which represent a variety of physical possibilities. Usually, there is not just a single equation, but a set of coupled equations which must be solved simultaneously. Field equations are not ordinary differential equations since a field depends on space and time, which requires at least two variables.
Whereas the "wave equation", the "diffusion equation", and the "continuity equation" all have standard forms (and various special cases or generalizations), there is no single, special equation referred to as "the field equation".
The topic broadly splits into equations of classical field theory and quantum field theory. Classical field equations describe many physical properties like temperature of a substance, velocity of a fluid, stresses in an elastic material, electric and magnetic fields from a current, etc. They also describe the fundamental forces of nature, like electromagnetism and gravity. In quantum field theory, particles or systems of "particles" like electrons and photons are associated with fields, allowing for infinite degrees of freedom (unlike finite degrees of freedom in particle mechanics) and variable particle numbers which can be created or annihilated.
Generalities
Origin
Usually, field equations are postulated (like the Einstein field equations and the Schrödinger equation, which underlies all quantum field equations) or obtained from the results of experiments (like Maxwell's equations). The extent of their validity is their ability to correctly predict and agree with experimental results.
From a theoretical viewpoint, field equations can be formulated in the frameworks of Lagrangian field theory, Hamiltonian field theory, and field theoretic formulations of the principle of stationary action. Given a suitable Lagrangian or Hamiltonian density, a function of the fields in a given system, as well as their derivatives, the principle of stationary action will obtain the field equation.
Symmetry
In both classical and quantum theories, field equations will satisfy the symmetry of the background physical theory. Most of the time Galilean symmetry is enough, for speeds (of propagating fields) much less than light. When particles and fields propagate at speeds close to light, Lorentz symmetry is one of the most common settings because the equation and its solutions are then consistent with special relativity.
Another symmetry arises from gauge freedom, which is intrinsic to the field equations. Fields which correspond to interactions may be gauge fields, which means they can be derived f
|
https://en.wikipedia.org/wiki/Weak%20derivative
|
In mathematics, a weak derivative is a generalization of the concept of the derivative of a function (strong derivative) for functions not assumed differentiable, but only integrable, i.e., to lie in the Lp space $L^1([a,b])$.
The method of integration by parts holds that for differentiable functions $u$ and $\varphi$ we have
$$\int_a^b u(x)\varphi'(x)\,dx = \Big[u(x)\varphi(x)\Big]_a^b - \int_a^b u'(x)\varphi(x)\,dx.$$
A function $u'$ being the weak derivative of $u$ is essentially defined by the requirement that this equation must hold for all infinitely differentiable functions $\varphi$ vanishing at the boundary points ($\varphi(a) = \varphi(b) = 0$).
Definition
Let $u$ be a function in the Lebesgue space $L^1([a,b])$. We say that $v \in L^1([a,b])$ is a weak derivative of $u$ if
$$\int_a^b u(x)\varphi'(x)\,dx = -\int_a^b v(x)\varphi(x)\,dx$$
for all infinitely differentiable functions $\varphi$ with $\varphi(a) = \varphi(b) = 0$.
Generalizing to $n$ dimensions, if $u$ and $v$ are in the space $L^1_{\mathrm{loc}}(U)$ of locally integrable functions for some open set $U \subseteq \mathbb{R}^n$, and if $\alpha$ is a multi-index, we say that $v$ is the $\alpha$-th weak derivative of $u$ if
$$\int_U u\, D^\alpha\varphi\,dx = (-1)^{|\alpha|} \int_U v\,\varphi\,dx$$
for all $\varphi \in C_c^\infty(U)$, that is, for all infinitely differentiable functions $\varphi$ with compact support in $U$. Here $D^\alpha\varphi$ is defined as
$$D^\alpha\varphi = \frac{\partial^{|\alpha|}\varphi}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}.$$
If $u$ has a weak derivative, it is often written $D^\alpha u$, since weak derivatives are unique (at least, up to a set of measure zero, see below).
Examples
The absolute value function $u(x) = |x|$, which is not differentiable at $x = 0$, has a weak derivative $v$ known as the sign function, given by
$$v(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -1 & \text{if } x < 0. \end{cases}$$
This is not the only weak derivative for $u$: any $w$ that is equal to $v$ almost everywhere is also a weak derivative for $u$. (In particular, the definition of $v(0)$ above is superfluous and can be replaced with any desired real number $r$.) Usually, this is not a problem, since in the theory of Lp spaces and Sobolev spaces, functions that are equal almost everywhere are identified.
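The defining identity can be checked numerically for this example. The following is a sketch, not a proof; the smooth bump test function is an illustrative choice that vanishes at the boundary:

```python
# Sanity check that v = sign is a weak derivative of u = |x| on (-1, 1):
# integrate both sides of the defining identity against a smooth bump.
import numpy as np
from scipy.integrate import quad

def phi(x):   # smooth bump supported in (-1, 1); an illustrative choice
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

def dphi(x):  # its classical derivative
    return phi(x) * (-2.0 * x / (1.0 - x**2)**2) if abs(x) < 1 else 0.0

lhs, _ = quad(lambda x: abs(x) * dphi(x), -1, 1)
rhs, _ = quad(lambda x: np.sign(x) * phi(x), -1, 1)
print(lhs, -rhs)  # the two values agree up to quadrature error
```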
The characteristic function $1_{\mathbb{Q}}$ of the rational numbers is nowhere differentiable yet has a weak derivative. Since the Lebesgue measure of the rational numbers is zero,
$$\int 1_{\mathbb{Q}}(t)\,\varphi(t)\,dt = 0.$$
Thus $v(t) = 0$ is a weak derivative of $1_{\mathbb{Q}}$. Note that this does agree with our intuition since, when considered as a member of an Lp space, $1_{\mathbb{Q}}$ is identified with the zero function.
The Cantor function c does not have a weak derivative, despite being differentiable almost everywhere. This is because any weak derivative of c would have to be equal almost everywhere to the classical derivative of c, which is zero almost everywhere. But the zero function is not a weak derivative of c, as can be seen by comparing against an appropriate test function . More theoretically, c does not have a weak derivative because its distributional derivative, namely the Cantor distribution, is a singular measure and therefore cannot be represented by a function.
Properties
If two functions are weak derivatives of the same function, they are equal except on a set with Lebesgue measure zero, i.e., they are equal almost everywhere. If we consider equivalence classes of functions such that two functions are equivalent if they are equal almost everywhere, then the weak derivative is unique.
Also, if u is differentiable in the conventional sense then its weak derivative is identical (in the sense given above) to its conventional (strong) derivative.
|
https://en.wikipedia.org/wiki/Science%20Museum%20of%20Minnesota
|
The Science Museum of Minnesota is an American museum focused on topics in technology, natural history, physical science, and mathematics education. Founded in 1907 and located in Saint Paul, Minnesota, the 501(c)(3) nonprofit institution has 385 employees and is supported by volunteers.
History
The museum was established in 1906 through the efforts of a group of businessmen, led by Charles W. Ames, with the aim of promoting intellectual and scientific growth in St. Paul. Initially known as the St. Paul Institute of Science and Letters, it was initially housed at the St. Paul Auditorium on Fourth Street. A brief merger with the St. Paul School of Fine Arts (now the Minnesota Museum of American Art) occurred in 1909.
In 1927, the museum relocated to Merriam Mansion on Capitol Hill, which had previously been the residence of Col. John Merriam. This new location offered increased exhibit storage space. Due to the museum's continued growth, it moved to the St. Paul-Ramsey Arts and Sciences Center at 30 East Tenth Street in 1964.[3] In 1978, the museum expanded into a new area on Wabasha between 10th and Exchange via a skyway connection, allowing for additional exhibit space and the addition of an IMAX Dome (OMNIMAX) cinema.
In the early 1990s, plans for a new facility, to be located adjacent to the Mississippi River, were formed. With aid from public funding initiatives, the new museum broke ground on May 1, 1997, and opened on December 11, 1999. During the move, 1.75 million artifacts were transported.
In the early 2000s, the museum hosted several exhibits, including BODY WORLDS; Tutankhamun: The Golden King and the Great Pharaohs; Star Wars: Where Science Meets Imagination; Real Pirates: The Untold Story of the Whydah from Slave Ship to Pirate Ship; The Science Behind Pixar, and more. It also added several screen films to its production roster, including Jane Goodall’s Wild Chimpanzees; Tornado Alley; National Parks Adventure; and Ancient Caves, and it built its exhibit production portfolio with exhibits like Robots + Us; A Day in Pompeii; RACE: Are We So Different?; Maya: Hidden Worlds Revealed; SPACE: An Out of Gravity Experience; Sportsology, and more. The Science Museum continues to provide exhibit development, design, and production services for other museums.
Resident exhibits
While offerings change frequently, several exhibits are always in the museum, including:
Dinosaurs & Fossils Gallery showcases several original and replicated dinosaur skeletons, as well as many complete and preserved animals. Some highlights from the Mesozoic include a Triceratops, Diplodocus, Allosaurus, Stegosaurus, and Camptosaurus, while those from the Cenozoic include a giant terror bird, an armoured glyptodont, a giant seabird called Pelagornis sandersi, a hyaenodont, and fossil crocodilians of the era, especially champsosaurs from the sixty-million-year-old Wannagan Creek site in North Dakota, where the museum conducts fieldwork. The gallery also features two sculpted,
|
https://en.wikipedia.org/wiki/Connected%20category
|
In category theory, a branch of mathematics, a connected category is a category in which, for every two objects X and Y, there is a finite sequence of objects
$$X = X_0, X_1, \ldots, X_{n-1}, X_n = Y$$
with morphisms
$$f_i : X_i \to X_{i+1}$$
or
$$f_i : X_{i+1} \to X_i$$
for each 0 ≤ i < n (both directions are allowed in the same sequence). Equivalently, a category J is connected if each functor from J to a discrete category is constant. In some cases it is convenient to not consider the empty category to be connected.
A stronger notion of connectivity would be to require at least one morphism f between any pair of objects X and Y. Any category with this property is connected in the above sense.
A small category is connected if and only if its underlying graph is weakly connected, meaning that it is connected if one disregards the direction of the arrows.
Each category J can be written as a disjoint union (or coproduct) of a collection of connected categories, which are called the connected components of J. Each connected component is a full subcategory of J.
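Since connectedness of a small category reduces to weak connectivity of its underlying graph, its connected components can be computed with a union-find over the objects, ignoring arrow directions. A minimal sketch, with illustrative objects and morphisms:

```python
# Connected components of a finite category's underlying graph.
def connected_components(objects, morphisms):
    """morphisms: iterable of (source, target) pairs; directions ignored."""
    parent = {x: x for x in objects}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for s, t in morphisms:
        parent[find(s)] = find(t)
    comps = {}
    for x in objects:
        comps.setdefault(find(x), []).append(x)
    return list(comps.values())

print(connected_components("ABCD", [("A", "B"), ("C", "B")]))
# [['A', 'B', 'C'], ['D']]
```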
References
Categories in category theory
|
https://en.wikipedia.org/wiki/Incomplete%20polylogarithm
|
In mathematics, the incomplete polylogarithm function is related to the polylogarithm function. It is sometimes known as the incomplete Fermi–Dirac integral or the incomplete Bose–Einstein integral. It may be defined by:
$$\operatorname{Li}_s(b,z) = \frac{1}{\Gamma(s)} \int_b^\infty \frac{x^{s-1}}{e^x/z - 1}\,dx.$$
Expanding about z = 0 and integrating gives a series representation:
$$\operatorname{Li}_s(b,z) = \sum_{k=1}^{\infty} \frac{z^k}{k^s}\,\frac{\Gamma(s, kb)}{\Gamma(s)},$$
where Γ(s) is the gamma function and Γ(s,x) is the upper incomplete gamma function. Since Γ(s,0) = Γ(s), it follows that:
$$\operatorname{Li}_s(0,z) = \operatorname{Li}_s(z),$$
where $\operatorname{Li}_s(\cdot)$ is the polylogarithm function.
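The series is straightforward to evaluate with SciPy's regularized upper incomplete gamma function Q(s, x) = Γ(s, x)/Γ(s). A sketch follows; the truncation length is an arbitrary choice, and at b = 0 the sum is checked against the ordinary polylogarithm via mpmath:

```python
from scipy.special import gammaincc   # regularized Q(s, x) = Γ(s, x)/Γ(s)
from mpmath import polylog

def li_incomplete(s, b, z, terms=2000):
    return sum(z**k / k**s * gammaincc(s, k * b) for k in range(1, terms + 1))

print(li_incomplete(2.0, 0.0, 0.5))  # truncated series at b = 0
print(polylog(2, 0.5))               # reference: Li_2(1/2) ≈ 0.5822405
```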
References
GNU Scientific Library - Reference Manual https://www.gnu.org/software/gsl/manual/gsl-ref.html#SEC117
Special functions
|
https://en.wikipedia.org/wiki/Riemann%E2%80%93Siegel%20theta%20function
|
In mathematics, the Riemann–Siegel theta function is defined in terms of the gamma function as
$$\theta(t) = \arg\Gamma\left(\frac{1}{4} + \frac{it}{2}\right) - \frac{t}{2}\log\pi$$
for real values of t. Here the argument is chosen in such a way that a continuous function is obtained and $\theta(0) = 0$ holds, i.e., in the same way that the principal branch of the log-gamma function is defined.
It has an asymptotic expansion
$$\theta(t) \sim \frac{t}{2}\log\frac{t}{2\pi} - \frac{t}{2} - \frac{\pi}{8} + \frac{1}{48t} + \frac{7}{5760t^3} + \cdots,$$
which is not convergent, but whose first few terms give a good approximation for $t \gg 1$. Its Taylor series at 0, which converges for $|t| < 1/2$, is
$$\theta(t) = -\frac{t}{2}\log\pi + \sum_{k=0}^{\infty} \frac{(-1)^k\,\psi^{(2k)}(1/4)}{(2k+1)!} \left(\frac{t}{2}\right)^{2k+1},$$
where $\psi^{(k)}$ denotes the polygamma function of order $k$.
The Riemann–Siegel theta function is of interest in studying the Riemann zeta function, since it can rotate the Riemann zeta function such that it becomes the totally real valued Z function on the critical line $s = \tfrac{1}{2} + it$.
Curve discussion
The Riemann–Siegel theta function is an odd real analytic function for real values of $t$, with three roots at $t = 0$ and $t \approx \pm 17.8456$. It is an increasing function for $|t| > 6.29$, and has local extrema at $t \approx \pm 6.2898$, with value $\mp 3.5310$. It has a single inflection point at $t = 0$ with $\theta'(0) \approx -2.6861$, which is the minimum of its derivative.
Theta as a function of a complex variable
We have an infinite series expression for the log-gamma function
$$\log\Gamma(z) = -\gamma z - \log z + \sum_{n=1}^{\infty}\left(\frac{z}{n} - \log\left(1 + \frac{z}{n}\right)\right),$$
where γ is Euler's constant. Substituting $z = \tfrac{1}{4} + \tfrac{it}{2}$ and taking the imaginary part termwise gives the following series for θ(t):
$$\theta(t) = -\frac{\gamma + \log\pi}{2}\,t - \arctan 2t + \sum_{n=1}^{\infty}\left(\frac{t}{2n} - \arctan\left(\frac{2t}{4n+1}\right)\right).$$
For values with imaginary part between −1 and 1, the arctangent function is holomorphic, and it is easily seen that the series converges uniformly on compact sets in the region with imaginary part between −1/2 and 1/2, leading to a holomorphic function on this domain. It follows that the Z function is also holomorphic in this region, which is the critical strip.
We may use the identities
$$\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin\pi z} \qquad\text{and}\qquad \Gamma(z)\,\Gamma\left(z + \tfrac{1}{2}\right) = 2^{1-2z}\sqrt{\pi}\;\Gamma(2z)$$
to obtain the closed-form expression
$$\theta(t) = -\frac{i}{2}\left(\log\Gamma\left(\frac{1}{4} + \frac{it}{2}\right) - \log\Gamma\left(\frac{1}{4} - \frac{it}{2}\right)\right) - \frac{\log\pi}{2}\,t,$$
which extends our original definition to a holomorphic function of t. Since the principal branch of log Γ has a single branch cut along the negative real axis, θ(t) in this definition inherits branch cuts along the imaginary axis above i/2 and below −i/2.
Gram points
The Riemann zeta function on the critical line can be written
$$\zeta\left(\tfrac{1}{2} + it\right) = e^{-i\theta(t)} Z(t).$$
If $t$ is a real number, then the Z function $Z(t)$ returns real values.
Hence the zeta function on the critical line will be real either at a zero, corresponding to $Z(t) = 0$, or when $\sin\theta(t) = 0$. Positive real values of $t$ where the latter case occurs are called Gram points, after J. P. Gram, and can of course also be described as the points where $\theta(t)/\pi$ is an integer.
A Gram point is a solution $g_n$ of
$$\theta(g_n) = n\pi.$$
These solutions are approximated by the sequence:
$$g_n \approx \frac{2\pi\left(n + \tfrac{1}{8}\right)}{W\left(\dfrac{n + \tfrac{1}{8}}{e}\right)},$$
where $W$ is the Lambert W function.
Here are the smallest non-negative Gram points:
The choice of the index n is a bit crude. It was historically chosen in such a way that the index is 0 at the first value which is larger than the smallest positive zero (at imaginary part 14.13472515...) of the Riemann zeta function on the critical line. Note that this θ-function oscillates for small absolute values of its real argument and is therefore not uniquely invertible in the interval [−24, 24]; thus the odd theta function has its symmetric Gram point with value 0 at index −3.
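The Lambert W approximation above is easy to evaluate. A small sketch using SciPy's Lambert W, checked against mpmath's grampoint solver (taken here as the reference):

```python
import math
from scipy.special import lambertw
from mpmath import grampoint

def gram_approx(n):
    a = n + 0.125                    # from θ(g_n) = nπ and θ's leading terms
    return 2 * math.pi * a / lambertw(a / math.e).real

for n in range(3):
    print(n, gram_approx(n), grampoint(n))  # g_0 ≈ 17.85, g_1 ≈ 23.17, ...
```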
G
|
https://en.wikipedia.org/wiki/Z%20function
|
In mathematics, the Z function is a function used for studying the Riemann zeta function along the critical line where the argument is one-half. It is also called the Riemann–Siegel Z function, the Riemann–Siegel zeta function, the Hardy function, the Hardy Z function and the Hardy zeta function. It can be defined in terms of the Riemann–Siegel theta function and the Riemann zeta function by
$$Z(t) = e^{i\theta(t)}\,\zeta\left(\tfrac{1}{2} + it\right).$$
It follows from the functional equation of the Riemann zeta function that the Z function is real for real values of t. It is an even function, and real analytic for real values. It follows from the fact that the Riemann-Siegel theta function and the Riemann zeta function are both holomorphic in the critical strip, where the imaginary part of t is between −1/2 and 1/2, that the Z function is holomorphic in the critical strip also. Moreover, the real zeros of Z(t) are precisely the zeros of the zeta function along the critical line, and complex zeros in the Z function critical strip correspond to zeros off the critical line of the Riemann zeta function in its critical strip.
The Riemann–Siegel formula
Calculation of the value of Z(t) for real t, and hence of the zeta function along the critical line, is greatly expedited by the Riemann–Siegel formula. This formula tells us
$$Z(t) = 2 \sum_{n^2 < t/2\pi} n^{-1/2} \cos\big(\theta(t) - t\log n\big) + R(t),$$
where the error term R(t) has a complex asymptotic expression in terms of the function
$$\Psi(z) = \frac{\cos 2\pi\left(z^2 - z - \tfrac{1}{16}\right)}{\cos 2\pi z}$$
and its derivatives. If $N = \lfloor\sqrt{t/2\pi}\rfloor$ and $p = \sqrt{t/2\pi} - N$, then
$$R(t) \approx (-1)^{N-1}\left(\frac{2\pi}{t}\right)^{1/4}\Psi(p) + \cdots,$$
where the ellipsis indicates we may continue on to higher and increasingly complex terms.
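As a sketch of the formula in practice, the following evaluates the main sum with the error term simply dropped, relying on mpmath's siegeltheta and comparing against mpmath's siegelz (which includes the correction terms):

```python
import math
from mpmath import siegeltheta, siegelz

def z_main_sum(t):
    theta = float(siegeltheta(t))
    N = int(math.sqrt(t / (2 * math.pi)))   # sum over n with n² < t/2π
    return 2 * sum(math.cos(theta - t * math.log(n)) / math.sqrt(n)
                   for n in range(1, N + 1))

t = 100.0
print(z_main_sum(t), siegelz(t))  # difference is the small error term R(t)
```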
Other efficient series for Z(t) are known, in particular several using the incomplete gamma function. If
then an especially nice example is
Behavior of the Z function
From the critical line theorem, it follows that the density of the real zeros of the Z function is
$$\frac{c}{2\pi}\log\frac{t}{2\pi}$$
for some constant c > 2/5. Hence, the number of zeros in an interval of a given size slowly increases. If the Riemann hypothesis is true, all of the zeros in the critical strip are real zeros, and the constant c is one. It is also postulated that all of these zeros are simple zeros.
An Omega theorem
Because of the zeros of the Z function, it exhibits oscillatory behavior. It also slowly grows both on average and in peak value. For instance, we have, even without the Riemann hypothesis, the Omega theorem that
$$Z(t) = \Omega\left(\exp\left(\frac{3}{4}\sqrt{\frac{\log t}{\log\log t}}\right)\right),$$
where the notation means that $Z(t)$ divided by the function within the Ω does not tend to zero with increasing t.
Average growth
The average growth of the Z function has also been much studied. We can find the root mean square (abbreviated RMS) average from
$$\frac{1}{T}\int_0^T Z(t)^2\,dt \sim \log T$$
or
$$\frac{1}{T}\int_T^{2T} Z(t)^2\,dt \sim \log T,$$
which tell us that the RMS size of Z(t) grows as $\sqrt{\log t}$.
This estimate can be improved to
$$\int_0^T Z(t)^2\,dt = T\log T + (2\gamma - 1 - \log 2\pi)\,T + O\left(T^{1/2+\varepsilon}\right).$$
If we increase the exponent, we get an average value which depends more on the peak values of Z. For fourth powers, we have
$$\frac{1}{T}\int_0^T Z(t)^4\,dt \sim \frac{1}{2\pi^2}(\log T)^4,$$
from which we may conclude that the fourth root of the mean fourth power grows as
$$\frac{1}{\sqrt[4]{2\pi^2}}\,\log t.$$
The Lindelöf hypothesis
Higher even powers have been much studied, but less is known about the corresponding average value. It is conjectured, and foll
|
https://en.wikipedia.org/wiki/Fitting%20subgroup
|
In mathematics, especially in the area of algebra known as group theory, the Fitting subgroup F of a finite group G, named after Hans Fitting, is the unique largest normal nilpotent subgroup of G. Intuitively, it represents the smallest subgroup which "controls" the structure of G when G is solvable. When G is not solvable, a similar role is played by the generalized Fitting subgroup F*, which is generated by the Fitting subgroup and the components of G.
For an arbitrary (not necessarily finite) group G, the Fitting subgroup is defined to be the subgroup generated by the nilpotent normal subgroups of G. For infinite groups, the Fitting subgroup is not always nilpotent.
The remainder of this article deals exclusively with finite groups.
The Fitting subgroup
The nilpotency of the Fitting subgroup of a finite group is guaranteed by Fitting's theorem which says that the product of a finite collection of normal nilpotent subgroups of G is again a normal nilpotent subgroup. It may also be explicitly constructed as the product of the p-cores of G over all of the primes p dividing the order of G.
If G is a finite non-trivial solvable group then the Fitting subgroup is always non-trivial, i.e. if G≠1 is finite solvable, then F(G)≠1. Similarly the Fitting subgroup of G/F(G) will be nontrivial if G is not itself nilpotent, giving rise to the concept of Fitting length. Since the Fitting subgroup of a finite solvable group contains its own centralizer, this gives a method of understanding finite solvable groups as extensions of nilpotent groups by faithful automorphism groups of nilpotent groups.
In a nilpotent group, every chief factor is centralized by every element. Relaxing the condition somewhat, and taking the subgroup of elements of a general finite group which centralize every chief factor, one simply gets the Fitting subgroup again:
$$\operatorname{Fit}(G) = \bigcap \{\, C_G(H/K) : H/K \text{ a chief factor of } G \,\}.$$
The generalization to p-nilpotent groups is similar.
The generalized Fitting subgroup
A component of a group is a subnormal quasisimple subgroup. (A group is quasisimple if it is a perfect central extension of a simple group.) The layer E(G) or L(G) of a group is the subgroup generated by all components. Any two components of a group commute, so the layer is a perfect central extension of a product of simple groups, and is the largest normal subgroup of G with this structure. The generalized Fitting subgroup F*(G) is the subgroup generated by the layer and the Fitting subgroup. The layer commutes with the Fitting subgroup, so the generalized Fitting subgroup is a central extension of a product of p-groups and simple groups.
The layer is also the maximal normal semisimple subgroup, where a group is called semisimple if it is a perfect central extension of a product of simple groups.
This definition of the generalized Fitting subgroup can be motivated by some of its intended uses. Consider the problem of trying to identify a normal subgroup H of G that contains its own centralizer and the Fitting group.
|
https://en.wikipedia.org/wiki/Tangent%20cone
|
In geometry, the tangent cone is a generalization of the notion of the tangent space to a manifold to the case of certain spaces with singularities.
Definitions in nonlinear analysis
In nonlinear analysis, there are many definitions for a tangent cone, including the adjacent cone, Bouligand's contingent cone, and the Clarke tangent cone. These three cones coincide for a convex set, but they can differ on more general sets.
Clarke tangent cone
Let $A$ be a nonempty closed subset of a Banach space $X$. The Clarke tangent cone to $A$ at $x_0 \in A$, denoted by $\hat{T}_A(x_0)$, consists of all vectors $v \in X$ such that for any sequence $\{t_n\} \subset \mathbb{R}$ tending to zero, and any sequence $\{x_n\} \subset A$ tending to $x_0$, there exists a sequence $\{v_n\} \subset X$ tending to $v$, such that for all $n$ it holds that
$$x_n + t_n v_n \in A.$$
Clarke's tangent cone is always a subset of the corresponding contingent cone (and coincides with it when the set in question is convex). It has the important property of being a closed convex cone.
Definition in convex geometry
Let K be a closed convex subset of a real vector space V and ∂K be the boundary of K. The solid tangent cone to K at a point x ∈ ∂K is the closure of the cone formed by all half-lines (or rays) emanating from x and intersecting K in at least one point y distinct from x. It is a convex cone in V and can also be defined as the intersection of the closed half-spaces of V containing K and bounded by the supporting hyperplanes of K at x. The boundary TK of the solid tangent cone is the tangent cone to K and ∂K at x. If this is an affine subspace of V then the point x is called a smooth point of ∂K and ∂K is said to be differentiable at x and TK is the ordinary tangent space to ∂K at x.
Definition in algebraic geometry
Let X be an affine algebraic variety embedded into the affine space $k^n$, with defining ideal $I \subset k[x_1,\ldots,x_n]$. For any polynomial f, let $\operatorname{in}(f)$ be the homogeneous component of f of the lowest degree, the initial term of f, and let
$$\operatorname{in}(I) \subset k[x_1,\ldots,x_n]$$
be the homogeneous ideal which is formed by the initial terms $\operatorname{in}(f)$ for all $f \in I$, the initial ideal of I. The tangent cone to X at the origin is the Zariski closed subset of $k^n$ defined by the ideal $\operatorname{in}(I)$. By shifting the coordinate system, this definition extends to an arbitrary point of $k^n$ in place of the origin. The tangent cone serves as the extension of the notion of the tangent space to X at a regular point, where X most closely resembles a differentiable manifold, to all of X. (The tangent cone at a point of $k^n$ that is not contained in X is empty.)
For example, the nodal curve
$$C : y^2 = x^3 + x^2$$
is singular at the origin, because both partial derivatives of f(x, y) = y² − x³ − x² vanish at (0, 0). Thus the Zariski tangent space to C at the origin is the whole plane, and has higher dimension than the curve itself (two versus one). On the other hand, the tangent cone is the union of the tangent lines to the two branches of C at the origin,
$$y = x \quad\text{and}\quad y = -x.$$
Its defining ideal is the principal ideal of k[x, y] generated by the initial term of f, namely y² − x² = 0.
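This example can be reproduced symbolically: for a hypersurface, the tangent cone at the origin is cut out by the lowest-degree homogeneous part of the single defining polynomial. A minimal SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = y**2 - x**3 - x**2

poly = sp.Poly(f, x, y)
min_deg = min(sum(m) for m in poly.monoms())
# Collect the terms of lowest total degree (the initial term of f).
initial = sum(c * x**i * y**j
              for (i, j), c in zip(poly.monoms(), poly.coeffs())
              if i + j == min_deg)
print(initial)             # y**2 - x**2
print(sp.factor(initial))  # factors into the two tangent lines y = ±x
```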
The definition of the tangent cone can be extended to abstract algebraic varieties.
|
https://en.wikipedia.org/wiki/Constantin%20Le%20Paige
|
Constantin Marie Le Paige (9 March 1852 – 26 January 1929) was a Belgian mathematician.
Born in Liège, Belgium, Le Paige began studying mathematics in 1869 at the University of Liège. After studying analysis under Professor Eugène Charles Catalan, Le Paige became a professor at the Université de Liège in 1882.
While interested in astronomy and the history of mathematics, Le Paige mainly worked on the theory of algebraic forms, especially algebraic curves and surfaces, and is particularly known for his work on the construction of cubic surfaces. Le Paige remained at the university until his retirement in 1922.
External links
Le Paige biography at www-groups.dcs.st-and.ac.uk
1852 births
1929 deaths
19th-century Belgian mathematicians
20th-century Belgian mathematicians
20th-century Belgian astronomers
University of Liège alumni
Scientists from Liège
Academic staff of the University of Liège
|
https://en.wikipedia.org/wiki/Fuchsian%20model
|
In mathematics, a Fuchsian model is a representation of a hyperbolic Riemann surface R as a quotient of the upper half-plane H by a Fuchsian group. Every hyperbolic Riemann surface admits such a representation. The concept is named after Lazarus Fuchs.
A more precise definition
By the uniformization theorem, every Riemann surface is either elliptic, parabolic or hyperbolic. More precisely this theorem states that a Riemann surface which is not isomorphic to either the Riemann sphere (the elliptic case) or a quotient of the complex plane by a discrete subgroup (the parabolic case) must be a quotient of the hyperbolic plane by a subgroup acting properly discontinuously and freely.
In the Poincaré half-plane model $\mathbb{H}$ for the hyperbolic plane, the group of biholomorphic transformations is the group $\mathrm{PSL}_2(\mathbb{R})$ acting by homographies, and the uniformization theorem means that there exists a discrete, torsion-free subgroup $\Gamma \subset \mathrm{PSL}_2(\mathbb{R})$ such that the Riemann surface $R$ is isomorphic to $\Gamma \backslash \mathbb{H}$. Such a group is called a Fuchsian group, and the isomorphism $R \cong \Gamma \backslash \mathbb{H}$ is called a Fuchsian model for $R$.
Fuchsian models and Teichmüller space
Let $S$ be a closed hyperbolic surface and let $\Gamma$ be a Fuchsian group so that $\Gamma \backslash \mathbb{H}$ is a Fuchsian model for $S$. Let
$$A(\Gamma) = \{\rho : \Gamma \to \mathrm{PSL}_2(\mathbb{R}) : \rho \text{ discrete and faithful}\}$$
and endow this set with the topology of pointwise convergence (sometimes called "algebraic convergence"). In this particular case this topology can most easily be defined as follows: the group $\Gamma$ is finitely generated since it is isomorphic to the fundamental group of $S$. Let $g_1, \ldots, g_r$ be a generating set: then any $\rho \in A(\Gamma)$ is determined by the elements $\rho(g_1), \ldots, \rho(g_r)$ and so we can identify $A(\Gamma)$ with a subset of $\mathrm{PSL}_2(\mathbb{R})^r$ by the map $\rho \mapsto (\rho(g_1), \ldots, \rho(g_r))$. Then we give it the subspace topology.
The Nielsen isomorphism theorem (this is not standard terminology and this result is not directly related to the Dehn–Nielsen theorem) then has the following statement:
The proof is very simple: choose a homeomorphism and lift it to the hyperbolic plane. Taking a diffeomorphism yields a quasiconformal map, since the surface is compact.
This result can be seen as the equivalence between two models for Teichmüller space of : the set of discrete faithful representations of the fundamental group into modulo conjugacy and the set of marked Riemann surfaces where is a quasiconformal homeomorphism modulo a natural equivalence relation.
See also
the Kleinian model, an analogous construction for 3-manifolds
Fundamental polygon
References
Matsuzaki, K.; Taniguchi, M.: Hyperbolic manifolds and Kleinian groups. Oxford (1998).
Hyperbolic geometry
Riemann surfaces
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20Lunenburg
|
The Municipality of the District of Lunenburg is a district municipality in Lunenburg County, Nova Scotia, Canada. Statistics Canada classifies the district municipality as a municipal district.
Lunenburg surrounds the towns of Bridgewater, Lunenburg, and Mahone Bay, which are incorporated separately and not part of the district municipality.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of Lunenburg had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Ethnicity
Language
See also
List of municipalities in Nova Scotia
Notes
References
External links
Communities in Lunenburg County, Nova Scotia
District municipalities in Nova Scotia
|
https://en.wikipedia.org/wiki/Regular%20matrix
|
Regular matrix may refer to:
Mathematics
Regular stochastic matrix, a stochastic matrix such that all the entries of some power of the matrix are positive
The opposite of irregular matrix, a matrix with a different number of entries in each row
Regular Hadamard matrix, a Hadamard matrix whose row and column sums are all equal
A regular element of a Lie algebra, when the Lie algebra is gln
Invertible matrix (this usage is rare)
Other uses
QS Regular Matrix, a quadraphonic sound system developed by Sansui Electric
|
https://en.wikipedia.org/wiki/Mediant%20%28mathematics%29
|
In mathematics, the mediant of two fractions $\frac{a}{c}$ and $\frac{b}{d}$, generally made up of four positive integers $a$, $b$, $c$ and $d$, is defined as
$$\frac{a+b}{c+d}.$$
That is to say, the numerator and denominator of the mediant are the sums of the numerators and denominators of the given fractions, respectively. It is sometimes called the freshman sum, as it is a common mistake in the early stages of learning about addition of fractions.
Technically, this is a binary operation on valid fractions (nonzero denominator), considered as ordered pairs of appropriate integers, a priori disregarding the perspective on rational numbers as equivalence classes of fractions. For example, the mediant of the fractions 1/1 and 1/2 is 2/3. However, if the fraction 1/1 is replaced by the fraction 2/2, which is an equivalent fraction denoting the same rational number 1, the mediant of the fractions 2/2 and 1/2 is 3/4. For a stronger connection to rational numbers the fractions may be required to be reduced to lowest terms, thereby selecting unique representatives from the respective equivalence classes.
The Stern–Brocot tree provides an enumeration of all positive rational numbers via mediants in lowest terms, obtained purely by iterative computation of the mediant according to a simple algorithm.
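A minimal sketch of this iteration follows; the function name and the level-by-level expansion scheme are illustrative choices:

```python
# Iterating the mediant construction from the endpoints 0/1 and the formal
# fraction 1/0 generates the Stern-Brocot tree level by level.
from math import gcd

def stern_brocot(levels):
    seq = [(0, 1), (1, 0)]
    for _ in range(levels):
        out = [seq[0]]
        for (a, c), (b, d) in zip(seq, seq[1:]):
            out += [(a + b, c + d), (b, d)]   # insert the mediant (a+b)/(c+d)
        seq = out
    return seq[1:-1]                          # drop the formal endpoints

fracs = stern_brocot(3)
assert all(gcd(a, c) == 1 for a, c in fracs)  # mediants arrive in lowest terms
print(" ".join(f"{a}/{c}" for a, c in fracs))
# 1/3 1/2 2/3 1/1 3/2 2/1 3/1
```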
Properties
The mediant inequality: An important property (also explaining its name) of the mediant is that it lies strictly between the two fractions of which it is the mediant: If $\frac{a}{c} < \frac{b}{d}$ and $c, d > 0$, then
$$\frac{a}{c} < \frac{a+b}{c+d} < \frac{b}{d}.$$
This property follows from the two relations
$$\frac{a+b}{c+d} - \frac{a}{c} = \frac{bc - ad}{c(c+d)} \qquad\text{and}\qquad \frac{b}{d} - \frac{a+b}{c+d} = \frac{bc - ad}{d(c+d)}.$$
Componendo and dividendo theorems: If $\frac{a}{c} = \frac{b}{d}$ and $c, d \neq 0$, then
Componendo: $\dfrac{a+b}{c+d} = \dfrac{a}{c} = \dfrac{b}{d}$,
Dividendo: $\dfrac{a-b}{c-d} = \dfrac{a}{c} = \dfrac{b}{d}$ (for $c \neq d$).
Assume that the pair of fractions a/c and b/d satisfies the determinant relation $bc - ad = 1$. Then the mediant has the property that it is the simplest fraction in the interval (a/c, b/d), in the sense of being the fraction with the smallest denominator. More precisely, if the fraction $\frac{a'}{c'}$ with positive denominator c' lies (strictly) between a/c and b/d, then its numerator and denominator can be written as
$$a' = \lambda_1 a + \lambda_2 b \qquad\text{and}\qquad c' = \lambda_1 c + \lambda_2 d$$
with two positive real (in fact rational) numbers $\lambda_1, \lambda_2$. To see why the $\lambda_i$ must be positive note that
$$\lambda_1 = bc' - a'd \qquad\text{and}\qquad \lambda_2 = a'c - ac'$$
must be positive. The determinant relation $bc - ad = 1$ then implies that both must be integers, solving the system of linear equations for $\lambda_1, \lambda_2$. Therefore,
$$c' = \lambda_1 c + \lambda_2 d \geq c + d.$$
The converse is also true: assume that the pair of reduced fractions a/c < b/d has the property that the reduced fraction with smallest denominator lying in the interval (a/c, b/d) is equal to the mediant of the two fractions. Then the determinant relation $bc - ad = 1$ holds. This fact may be deduced e.g. with the help of Pick's theorem, which expresses the area of a plane triangle whose vertices have integer coordinates in terms of the number $v_{\mathrm{interior}}$ of lattice points (strictly) inside the triangle and the number $v_{\mathrm{boundary}}$ of lattice points on the boundary of the triangle. Consider the triangle $\Delta$ with the three vertices $v_1 = (0, 0)$, $v_2 = (a, c)$, $v_3 = (b, d)$. Its area is equal to
$$\operatorname{area}(\Delta) = \frac{bc - ad}{2}.$$
A point inside the triangle can be parametrized as
$$\lambda_1 (a, c) + \lambda_2 (b, d), \qquad\text{where}\qquad \lambda_1 > 0, \quad \lambda_2 > 0, \quad \lambda_1 + \lambda_2 < 1.$$
The Pick formula now implies that t
|
https://en.wikipedia.org/wiki/Mathlete
|
A mathlete is a person who competes in mathematics competitions at any level or any age. More specifically, a Mathlete is a student who participates in any of the MATHCOUNTS programs, as Mathlete is a registered trademark of the MATHCOUNTS Foundation in the United States. The term is a portmanteau of the words mathematics and athlete.
Top Mathletes from MATHCOUNTS often go on to compete in the AIME, USAMO, and ARML competitions in the United States. Those in other countries generally participate in national olympiads to qualify for the International Mathematical Olympiad.
Participants in World Math Day also are commonly referred to as mathletes.
Mathletic competitions
The Putnam Exam: The William Lowell Putnam Competition is the preeminent undergraduate-level mathletic competition in the United States. It is administered by the Mathematical Association of America; students compete as individuals and as teams (as chosen by their institution) for scholarships and team prize money. The exam is given on the first Saturday in December.
Mathletic off-season training
The academic off-season (traditionally referred to as "summer") can be especially difficult on mathletes, though various training regimens have been proposed to keep mathletic ability at its peak. Publications such as the MAA's The American Mathematical Monthly and the AMS's Notices of the American Mathematical Society are widely read to maintain and hone mathematical ability. Some coaches suggest seeking research internships or grants, many of which are funded by the National Science Foundation.
At higher levels, mathletes can obtain funding from host institutions to work on summer research projects. For example, the University of Delaware offers the Groups Exploring the Mathematical Sciences project (GEMS project) to first year graduate students. The students act as the principal investigator and work with an undergraduate research assistant and a faculty adviser who will oversee their summer research.
References
External links
World Math Day
Algorithm Olympics
Mathematics competitions
|
https://en.wikipedia.org/wiki/Unbounded%20operator
|
In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases.
The term "unbounded operator" can be misleading, since
"unbounded" should sometimes be understood as "not necessarily bounded";
"operator" should be understood as "linear operator" (as in the case of "bounded operator");
the domain of the operator is a linear subspace, not necessarily the whole space;
this linear subspace is not necessarily closed; often (but not always) it is assumed to be dense;
in the special case of a bounded operator, still, the domain is usually assumed to be the whole space.
In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain.
The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above. The given space is assumed to be a Hilbert space. Some generalizations to Banach spaces and more general topological vector spaces are possible.
Short history
The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics. The theory's development is due to John von Neumann and Marshall Stone. Von Neumann introduced using graphs to analyze unbounded operators in 1932.
Definitions and basic properties
Let $X$ and $Y$ be Banach spaces. An unbounded operator (or simply operator) $T : D(T) \to Y$ is a linear map from a linear subspace $D(T) \subseteq X$ (the domain of $T$) to the space $Y$. Contrary to the usual convention, $T$ may not be defined on the whole space $X$.
An operator $T$ is said to be closed if its graph $\Gamma(T)$ is a closed set. (Here, the graph $\Gamma(T)$ is a linear subspace of the direct sum $X \oplus Y$, defined as the set of all pairs $(x, Tx)$, where $x$ runs over the domain of $T$.) Explicitly, this means that for every sequence $(x_n)$ of points from the domain of $T$ such that $x_n \to x$ and $Tx_n \to y$, it holds that $x$ belongs to the domain of $T$ and $Tx = y$. The closedness can also be formulated in terms of the graph norm: an operator $T$ is closed if and only if its domain $D(T)$ is a complete space with respect to the norm:
$$\|x\|_T = \sqrt{\|x\|^2 + \|Tx\|^2}.$$
An operator $T$ is said to be densely defined if its domain is dense in $X$. This also includes operators defined on the entire space $X$, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if $X$ and $Y$ are Hilbert spaces) and the transpose; see the sections below.
If $T : D(T) \to Y$ is closed, densely defined and continuous on its domain, then its domain is all of $X$.
A densely defined operator $T$ on a Hilbert space $H$ is called bounded from below if $T + a$ is a positive operator for some real number $a$. That is, $\langle Tx \mid x\rangle \geq -a\,\|x\|^2$ for all $x$ in the domain of $T$ (or alternatively $\langle Tx \mid x\rangle \geq a\,\|x\|^2$, since $a$ is arbitrary). If both $T$ and $-T$ are bounded from below then $T$ is bounded.
Example
Let deno
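The example above is cut off; a standard first example (an assumption here, not the article's surviving text) is the differentiation operator, whose unboundedness in the sup norm the following sketch illustrates numerically:

```python
# ||(sin nx)'||∞ / ||sin nx||∞ = n, so no single constant bounds d/dx.
import numpy as np

x = np.linspace(0, 2 * np.pi, 10_001)
for n in (1, 10, 100):
    f = np.sin(n * x)
    df = n * np.cos(n * x)                        # exact derivative
    print(n, np.abs(df).max() / np.abs(f).max())  # ratio grows like n
```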
|
https://en.wikipedia.org/wiki/Joseph%20Tilly
|
Joseph Marie de Tilly (16 August 1837 – 4 August 1906) was a Belgian military man and mathematician.
He was born in Ypres, Belgium. In 1858, he became a teacher in mathematics at the regimental school. He began by studying geometry, particularly Euclid's fifth postulate and non-Euclidean geometry. He found results similar to Lobachevsky's in 1860, but the Russian mathematician had already died by that time. Tilly is better known for his work on non-Euclidean mechanics, a subject he invented. He thus worked alone on this topic until a French mathematician, Jules Hoüel, showed interest in the field. Tilly also wrote on military science and the history of mathematics. He died in Munich, Germany.
References
1837 births
1906 deaths
Belgian mathematicians
People from Ypres
|
https://en.wikipedia.org/wiki/Duality%20principle
|
Duality principle or principle of duality may refer to:
Duality (projective geometry)
Duality (order theory)
Duality principle (Boolean algebra)
Duality principle for sets
Duality principle (optimization theory)
Lagrange duality
Duality principle in functional analysis, used in large sieve method of analytic number theory
Wave–particle duality
See also
Duality (mathematics)
Duality (disambiguation)
Dual (disambiguation)
List of dualities
|
https://en.wikipedia.org/wiki/Annales%20de%20l%27Institut%20Fourier
|
The Annales de l'Institut Fourier is a French mathematical journal publishing papers in all fields of mathematics. It was established in 1949. The journal publishes one volume per year, consisting of six issues. The current editor-in-chief is Hervé Pajot. Articles are published either in English or in French.
The journal is indexed in Mathematical Reviews, Zentralblatt MATH and the Web of Science. According to the Journal Citation Reports, the journal had a 2008 impact factor of 0.804.
References
External links
Mathematics journals
Academic journals established in 1949
Multilingual journals
Bimonthly journals
Open access journals
1949 establishments in France
|
https://en.wikipedia.org/wiki/Differential%20equation
|
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are soluble by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.
Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
History
Differential equations came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Isaac Newton listed three kinds of differential equations:
$$\frac{dy}{dx} = f(x), \qquad \frac{dy}{dx} = f(x, y), \qquad x_1\frac{\partial y}{\partial x_1} + x_2\frac{\partial y}{\partial x_2} = y.$$
In all these cases, $y$ is an unknown function of $x$ (or of $x_1$ and $x_2$), and $f$ is a given function.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form
$$y' + P(x)\,y = Q(x)\,y^n,$$
for which the following year Leibniz obtained solutions by simplifying it.
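A minimal numerical sketch of one Bernoulli equation, with the illustrative choice P = −1, Q = −1, n = 2 (the logistic equation), comparing SciPy's integrator to the closed form obtained from the linearizing substitution w = y^(1−n):

```python
import numpy as np
from scipy.integrate import solve_ivp

y0 = 0.1
sol = solve_ivp(lambda t, y: y - y**2, (0, 5), [y0], dense_output=True)

t = np.linspace(0, 5, 6)
exact = y0 * np.exp(t) / (1 + y0 * (np.exp(t) - 1))  # closed-form solution
print(sol.sol(t)[0])  # numerical solution
print(exact)          # agreement is up to solver tolerance
```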
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.
In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was F
|
https://en.wikipedia.org/wiki/Eternal%20Majesty
|
Eternal Majesty is a French black metal band.
It was formed by four brothers under the original name of Enchantress Moon.
Statistics
Genre: Black metal
Country: France
Status: Active
Time: 1995 -
Discography
Albums
2002 - From War to Darkness (CD)
2003 - From War to Darkness (Picture disk)
2006 - Wounds of Hatred and Slavery (CD)
2020 - Black Metal Excommunication (CD) (Cassette Tape) (Vinyl)
Other Releases
1997 - Dark Empire (Demo tape)
1998 - Split demo with Antaeus
2000 - Evil Consecration (Live tape)
2000 - None Shall Escape the Wrath (Split CD with Krieg, Judas Iscariot, and Macabre Omen)
2001 - Unholy Chants of darkness (Split LP with Temple of Baal)
2001 - SPK Kommando (Split EP with Deviant, Antaeus and Hell Militia)
2005 - Night Evilness (Mcd) label Diahableries
2006 - Wounds of Hatred and Slavery (Album Candlelight/appease me...)
Band members
Navint (Deviant) - Vocals - (1995 - )
Sagoth (Madonagun, Antaeus, Autolyse-Dark Electro) - Bass - (1995 - )
Thorgon (Madonagun, Antaeus, Deviant, Autolyse-Dark Electro) - Drums - (1995 - )
Martyr (Atrox) - Guitars - (1995 - )
External links
https://eternalmajesty.bandcamp.com
https://web.archive.org/web/20060307003643/http://www.candlelightrecords.co.uk/candleweb/redesign/candle_eternal.htm
https://www.instagram.com/eternalmajestyofficial/?hl=fr
French black metal musical groups
Musical groups established in 1995
French musical quartets
|
https://en.wikipedia.org/wiki/Spherical%20space%20form%20conjecture
|
In geometric topology, the spherical space form conjecture (now a theorem) states that a finite group acting on the 3-sphere is conjugate to a group of isometries of the 3-sphere.
History
The conjecture was posed by Heinz Hopf in 1926 after determining the fundamental groups of three-dimensional spherical space forms as a generalization of the Poincaré conjecture to the non-simply connected case.
Status
The conjecture is implied by Thurston's geometrization conjecture, which was proven by Grigori Perelman in 2003. The conjecture was independently proven for groups whose actions have fixed points—this special case is known as the Smith conjecture. It is also proven for various groups acting without fixed points, such as cyclic groups whose orders are a power of two (George Livesay, Robert Myers) and cyclic groups of order 3 (J. Hyam Rubinstein).
See also
Killing–Hopf theorem
References
Conjectures that have been proved
Geometric topology
|
https://en.wikipedia.org/wiki/F%C3%B8lner%20sequence
|
In mathematics, a Følner sequence for a group is a sequence of sets satisfying a particular condition. If a group has a Følner sequence with respect to its action on itself, the group is amenable. A more general notion of Følner nets can be defined analogously, and is suited for the study of uncountable groups. Følner sequences are named for Erling Følner.
Definition
Given a group $G$ that acts on a countable set $X$, a Følner sequence for the action is a sequence of finite subsets $F_1, F_2, \ldots$ of $X$ which exhaust $X$ and which "don't move too much" when acted on by any group element. Precisely,
For every $x \in X$, there exists some $i$ such that $x \in F_j$ for all $j > i$, and
$$\lim_{i \to \infty} \frac{|gF_i \,\triangle\, F_i|}{|F_i|} = 0$$
for all group elements $g$ in $G$.
Explanation of the notation used above:
$gF_i$ is the result of the set $F_i$ being acted on the left by $g$. It consists of elements of the form $gf$ for all $f$ in $F_i$.
$\triangle$ is the symmetric difference operator, i.e., $A \,\triangle\, B$ is the set of elements in exactly one of the sets $A$ and $B$.
$|A|$ is the cardinality of a set $A$.
Thus, what this definition says is that for any group element $g$, the proportion of elements of $F_i$ that are moved away by $g$ goes to 0 as $i$ gets large.
In the setting of a locally compact group $G$ acting on a measure space $(X, \mu)$ there is a more general definition. Instead of being finite, the sets $F_i$ are required to have finite, non-zero measure, and so the Følner requirement will be that
$$\lim_{i \to \infty} \frac{\mu(gF_i \,\triangle\, F_i)}{\mu(F_i)} = 0,$$
analogously to the discrete case. The standard case is that of the group acting on itself by left translation, in which case the measure in question is normally assumed to be the Haar measure.
Examples
Any finite group $G$ trivially has the Følner sequence $F_i = G$ for each $i$.
Consider the group of integers, acting on itself by addition. Let $F_i$ consist of the integers between $-i$ and $i$. Then $gF_i$ consists of the integers between $-i+g$ and $i+g$. For large $i$, the symmetric difference has size $2|g|$, while $F_i$ has size $2i+1$. The resulting ratio is $\frac{2|g|}{2i+1}$, which goes to 0 as $i$ gets large.
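This computation is easy to check numerically; a minimal sketch for the integer example (the helper name is illustrative):

```python
# F_i = {-i, ..., i}; the ratio |gF_i Δ F_i| / |F_i| = 2|g|/(2i+1) -> 0.
def folner_ratio(g, i):
    F = set(range(-i, i + 1))
    gF = {g + x for x in F}        # Z acting on itself by addition
    return len(gF ^ F) / len(F)    # symmetric difference over cardinality

for i in (10, 100, 1000):
    print(i, folner_ratio(3, i))   # 0.2857..., 0.0298..., 0.00299...
```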
With the original definition of Følner sequence, a group has a Følner sequence if and only if it is countable and amenable.
A locally compact group has a Følner sequence (with the generalized definition) if and only if it is amenable and second countable.
Proof of amenability
We have a group $G$ and a Følner sequence $F_i$, and we need to define a measure $\mu$ on $G$, which philosophically speaking says how much of $G$ any subset $A$ takes up. The natural definition that uses our Følner sequence would be
$$\mu(A) = \lim_{i \to \infty} \frac{|A \cap F_i|}{|F_i|}.$$
Of course, this limit doesn't necessarily exist. To overcome this technicality, we take an ultrafilter $U$ on the natural numbers that contains intervals $[n, \infty)$. Then we use an ultralimit instead of the regular limit:
$$\mu(A) = \lim_{U} \frac{|A \cap F_i|}{|F_i|}.$$
It turns out ultralimits have all the properties we need. Namely,
$\mu$ is a probability measure. That is, $\mu(G) = 1$, since the ultralimit coincides with the regular limit when it exists.
$\mu$ is finitely additive. This is since ultralimits commute with addition just as regular limits do.
$\mu$ is left invariant. This is since
$$\left|\frac{|gA \cap F_i|}{|F_i|} - \frac{|A \cap F_i|}{|F_i|}\right| \leq \frac{|g^{-1}F_i \,\triangle\, F_i|}{|F_i|} \longrightarrow 0$$
by the Følner sequence definition.
References
Geometric group theory
|
https://en.wikipedia.org/wiki/Holonomic
|
Holonomic (introduced by Heinrich Hertz in 1894 from the Greek ὅλος meaning whole, entire and νόμ-ος meaning law) may refer to:
Mathematics
Holonomic basis, a set of basis vector fields {e_k} such that some coordinate system {x^k} exists for which $e_k = \frac{\partial}{\partial x^k}$
Holonomic constraints, which are expressible as a function of the coordinates and time
Holonomic module in the theory of D-modules
Holonomic function, a smooth function that is a solution of a linear homogeneous differential equation with polynomial coefficients
Other uses
Holonomic brain theory, model of cognitive function as being guided by a matrix of neurological wave interference patterns
See also
Holonomy in differential geometry
Holon (disambiguation)
Nonholonomic system, in physics, a system whose state depends on the path taken in order to achieve it
|
https://en.wikipedia.org/wiki/Spray%20%28mathematics%29
|
In differential geometry, a spray is a vector field H on the tangent bundle TM that encodes a quasilinear second order system of ordinary differential equations on the base manifold M. Usually a spray is required to be homogeneous in the sense that its integral curves $t \mapsto \Phi_t^H(\xi) \in TM$ obey the rule $\Phi_t^H(\lambda\xi) = \Phi_{\lambda t}^H(\xi)$ under positive reparametrizations. If this requirement is dropped, H is called a semispray.
Sprays arise naturally in Riemannian and Finsler geometry as the geodesic sprays whose integral curves are precisely the tangent curves of locally length minimizing curves.
Semisprays arise naturally as the extremal curves of action integrals in Lagrangian mechanics. Generalizing all these examples, any (possibly nonlinear) connection on M induces a semispray H, and conversely, any semispray H induces a torsion-free nonlinear connection on M. If the original connection is torsion-free it coincides with the connection induced by H, and homogeneous torsion-free connections are in one-to-one correspondence with full sprays.
Formal definitions
Let M be a differentiable manifold and $(TM, \pi_{TM}, M)$ its tangent bundle. Then a vector field H on TM (that is, a section of the double tangent bundle TTM) is a semispray on M, if any of the three following equivalent conditions holds:
$(\pi_{TM})_* H_\xi = \xi$.
$JH = V$, where $J$ is the tangent structure on TM and $V$ is the canonical vector field on $TM \setminus 0$.
$j \circ H = H$, where $j : TTM \to TTM$ is the canonical flip and H is seen as a mapping $TM \to TTM$.
A semispray H on M is a (full) spray if any of the following equivalent conditions hold:
$H_{\lambda\xi} = \lambda_*(\lambda H_\xi)$, where $\lambda_* : TTM \to TTM$ is the push-forward of the multiplication $\lambda : TM \to TM$ by a positive scalar $\lambda > 0$.
The Lie derivative of H along the canonical vector field V satisfies $[V, H] = H$.
The integral curves $t \mapsto \Phi_t^H(\xi) \in TM \setminus 0$ of H satisfy $\Phi_t^H(\lambda\xi) = \lambda\Phi_{\lambda t}^H(\xi)$ for any $\lambda > 0$.
Let $(x^i, \xi^i)$ be the local coordinates on $TM$ associated with the local coordinates $(x^i)$ on $M$ using the coordinate basis on each tangent space. Then $H$ is a semispray on $M$ if it has a local representation of the form
$$H = \xi^i \frac{\partial}{\partial x^i} - 2G^i(x,\xi)\frac{\partial}{\partial \xi^i}$$
on each associated coordinate system on TM. The semispray H is a (full) spray, if and only if the spray coefficients $G^i$ satisfy
$$G^i(x, \lambda\xi) = \lambda^2 G^i(x, \xi), \qquad \lambda > 0.$$
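For a Riemannian metric, the geodesic spray has coefficients $G^i = \tfrac{1}{2}\Gamma^i_{jk}\xi^j\xi^k$. A SymPy sketch for one assumed example metric, the hyperbolic upper half-plane $g = (dx^2 + dy^2)/y^2$:

```python
import sympy as sp

x, y, xi1, xi2 = sp.symbols("x y xi1 xi2")
q, xi = [x, y], [xi1, xi2]
g = sp.Matrix([[1 / y**2, 0], [0, 1 / y**2]])   # upper half-plane metric
ginv = g.inv()

def christoffel(i, j, k):
    # Γ^i_{jk} = (1/2) g^{il} (∂_k g_{lj} + ∂_j g_{lk} - ∂_l g_{jk})
    return sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[l, j], q[k]) + sp.diff(g[l, k], q[j])
                      - sp.diff(g[j, k], q[l]))
        for l in range(2))

G = [sp.simplify(sp.Rational(1, 2) * sum(christoffel(i, j, k) * xi[j] * xi[k]
                                         for j in range(2) for k in range(2)))
     for i in range(2)]
print(G)  # expect -xi1*xi2/y and (xi1**2 - xi2**2)/(2*y)
```

Both coefficients come out homogeneous of degree 2 in ξ, consistent with the spray condition above.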
Semisprays in Lagrangian mechanics
A physical system is modeled in Lagrangian mechanics by a Lagrangian function $L : TM \to \mathbb{R}$ on the tangent bundle of some configuration space M. The dynamical law is obtained from the Hamiltonian principle, which states that the time evolution $\gamma : [a,b] \to M$ of the state of the system is stationary for the action integral
$$\mathcal{S}(\gamma) = \int_a^b L\big(\gamma(t), \dot\gamma(t)\big)\,dt.$$
In the associated coordinates on TM the first variation of the action integral reads as
$$\frac{d}{ds}\bigg|_{s=0}\mathcal{S}(\gamma_s) = \left[\frac{\partial L}{\partial \xi^i}X^i\right]_a^b - \int_a^b \left(\frac{d}{dt}\frac{\partial L}{\partial \xi^i} - \frac{\partial L}{\partial x^i}\right)X^i\,dt,$$
where $X : [a,b] \to \mathbb{R}^n$ is the variation vector field associated with the variation $\gamma_s : [a,b] \to M$ around $\gamma(t) = \gamma_0(t)$. This first variation formula can be recast in a more informative form by introducing the following concepts:
The covector $\alpha_\xi \in T_x^*M$ with $\alpha_\xi = \frac{\partial L}{\partial \xi^i}(x,\xi)\,dx^i$ is the conjugate momentum of $\xi \in T_xM$.
The corresponding one-form $\alpha \in \Omega^1(TM)$ with $\alpha_\xi = \frac{\partial L}{\partial \xi^i}(x,\xi)\,dx^i$ is the Hilbert form associated with the Lagrangian.
The bilinear form $g_\xi$ with $g_\xi = \frac{\partial^2 L}{\partial \xi^i \partial \xi^j}(x,\xi)\,dx^i \otimes dx^j$ is the fundamental tensor of the Lagrangian.
|
https://en.wikipedia.org/wiki/Fulkerson%20Prize
|
The Fulkerson Prize for outstanding papers in the area of discrete mathematics is sponsored jointly by the Mathematical Optimization Society (MOS) and the American Mathematical Society (AMS). Up to three awards of $1,500 each are presented at each (triennial) International Symposium of the MOS. Originally, the prizes were paid out of a memorial fund administered by the AMS that was established by friends of the late Delbert Ray Fulkerson to encourage mathematical excellence in the fields of research exemplified by his work. The prizes are now funded by an endowment administered by the MOS.
Winners
Source: Mathematical Optimization Society
1979:
Richard M. Karp for classifying many important NP-complete problems.
Kenneth Appel and Wolfgang Haken for the four color theorem.
Paul Seymour for generalizing the max-flow min-cut theorem to matroids.
1982:
D.B. Judin, Arkadi Nemirovski, Leonid Khachiyan, Martin Grötschel, László Lovász and Alexander Schrijver for the ellipsoid method in linear programming and combinatorial optimization.
G. P. Egorychev and D. I. Falikman for proving van der Waerden's conjecture that the matrix with all entries equal has the smallest permanent of any doubly stochastic matrix.
1985:
Jozsef Beck for tight bounds on the discrepancy of arithmetic progressions.
H. W. Lenstra Jr. for using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints.
Eugene M. Luks for a polynomial time graph isomorphism algorithm for graphs of bounded maximum degree.
1988:
Éva Tardos for finding minimum cost circulations in strongly polynomial time.
Narendra Karmarkar for Karmarkar's algorithm for linear programming.
1991:
Martin E. Dyer, Alan M. Frieze and Ravindran Kannan for random-walk-based approximation algorithms for the volume of convex bodies.
Alfred Lehman for 0,1-matrix analogues of the theory of perfect graphs.
Nikolai E. Mnev for Mnev's universality theorem, that every semialgebraic set is equivalent to the space of realizations of an oriented matroid.
1994:
Louis Billera for finding bases of piecewise-polynomial function spaces over triangulations of space.
Gil Kalai for making progress on the Hirsch conjecture by proving subexponential bounds on the diameter of d-dimensional polytopes with n facets.
Neil Robertson, Paul Seymour and Robin Thomas for the six-color case of Hadwiger's conjecture.
1997:
Jeong Han Kim for finding the asymptotic growth rate of the Ramsey numbers R(3,t).
2000:
Michel X. Goemans and David P. Williamson for approximation algorithms based on semidefinite programming.
Michele Conforti, Gérard Cornuéjols, and M. R. Rao for recognizing balanced 0-1 matrices in polynomial time.
2003:
J. F. Geelen, A. M. H. Gerards and A. Kapoor for the GF(4) case of Rota's conjecture on matroid minors.
Bertrand Guenin for a forbidden minor characterization of the weakly bipartite graphs (graphs whose bipartite subgraph polytope is 0-1).
|
https://en.wikipedia.org/wiki/Bungoma
|
Bungoma is the headquarters of Bungoma County in Kenya. It was established as a trading centre in the early 20th century. It is located in Kenya's fertile Western region at the foot of Mount Elgon, Kenya's second-tallest mountain. The town and the surrounding areas receive some of Kenya's highest average rainfall, making the area one of the nation's most important food baskets.
Technical colleges in Bungoma County are:
Matili Technical Training Institute
Sang'alo Technical Training Institute
Kisiwa Technical Training Institute
Technical Training Institute
Bungoma North TVC
Webuye West TVC
Sirisia TVC
Naming
Bungoma was named after eng'oma, the Bukusu word for drums. The town was originally a meeting place for Bukusu elders, and the sound of drums would emanate from the meeting venue; this eventually gave the place its name.
There is a second version of the story: in the early days, the area was occupied by the Bungomek, a clan of the Sabaot. The Bungomek were later driven out by the Bukusu, but the name Bungoma, in reference to their earlier occupation, remained.
Economy
Farming is the main economic activity in the county.
Bungoma County is dependent on sugarcane farming, with one of the country's largest sugar factories as well as numerous small-holder sugar mills. Maize is also grown for subsistence, alongside pearl millet and sorghum. Dairy farming is widely practised, as is the raising of poultry. There is a small but important tourist circuit, centring on the biennial circumcision ceremonies that are mostly practised by the Bukusu, Tachoni and Sabaot.
Aside from sugar processing, the town also has a variety of other manufacturing plants, such as maize mills, large bakeries, dairy plants and a plastics factory. Other smaller-scale manufacturing activities include steel crafting, iron-sheet production, and garages and auto repairs, among others.
The services sector is also quite vibrant: there is a busy retail sector dominated by local brands, several banks, insurance companies and large hotels supporting the local tourist circuit.
Overview
The major economic activity in the area is sugarcane farming: more than 67,000 farmers directly depend on Nzoia Sugar Company Ltd. Early businesses were supported by the Kenya–Uganda Railway, which passes through the town. The collapse of the Webuye paper mills and the struggles of Nzoia Sugar Company have created an economic nightmare in the county. Malakisi Ginnery, which depended solely on cotton farming in Bungoma County and neighbouring counties such as Busia, has struggled for years to get back on its feet because of inadequate cotton supplies, as few people have embraced cotton farming. Within Malakisi town, British Amer
|
https://en.wikipedia.org/wiki/Toshio%20Mura
|
Toshio Mura was a professor of engineering.
He was born in Ono, a small port village in Kanazawa, Japan, on December 7, 1925. He received a doctorate from the Department of Applied Mathematics of the University of Tokyo in 1954. He taught at Meiji University, Japan, from 1954 to 1958. In 1958, he went to the United States to work in the Department of Materials Science at Northwestern University in Evanston, Illinois. He became a professor in the Department of Civil Engineering in 1966, remaining there until his retirement in 1996, and also held an appointment in the Department of Mechanical Engineering.
Dr. Mura was appointed Walter P. Murphy Professor in the McCormick School of Engineering at Northwestern, was elected as a member of the National Academy of Engineering (NAE) in 1986 for his contributions to the field of micromechanics, and received many other accolades for his work. He was as much recognized for his academic achievement as for his generosity in opening his home to visiting scholars and graduate students from Japan; weekend gatherings there were a regular occurrence.
Dr. Mura was interested in the micromechanics of solids. Examples of micromechanics include theories of fracture and fatigue of materials, mathematical analysis of dislocations and inclusions in solids, and the mechanical characterization of thin films, ceramics and composite materials.
Professor Mura was also interested in inverse problems. His research aimed to predict inelastic damage in solids from known displacements on the surface of the solids, including the prediction of earthquakes from observations of the Earth's surface. Inverse problems play an important role in quantitative nondestructive evaluation of materials. Most of Professor Mura's research was mathematically oriented but conducted in cooperation with experiments in mechanics and materials science.
Professor Mura died of heart-related complications on August 9, 2009, at the age of 83.
Selected publications
Mura, T. 1969. Mathematical Theory of Dislocations. Proceedings of ASME Symposium, Northwestern University.
Mura, T. 1981. Mechanics of Fatigue. AMD-Vol. 47. Proceedings of ASME Symposium.
Mura, T. 1987. Micromechanics of Defects in Solids (2nd ed.). The Netherlands: Martinus Nijhoff.
Mura, T., and T. Koya. 1992. Variational methods in mechanics. Oxford University Press.
|
https://en.wikipedia.org/wiki/MEI
|
MEI may refer to:
Education
MEI Academy, an international school
Mathematics in Education and Industry, an examination board affiliated with the OCR examination board
Mennonite Educational Institute, an independent grades K-12 school in Abbotsford, British Columbia
Businesses
MEI (company), manufacturer of cash handling systems
Matsushita Electric Industrial Co., Ltd.
Micro Electronics, Inc.
Member of the Energy Institute (MEI)
Meeting delle Etichette Indipendenti (MEI), an annual conference of Italian independent record labels
Government
Ministry of Economy and Innovation, the Portuguese economy ministry
Middle East Institute
Marginal efficiency of investment or internal rate of return, a relationship between interest rate and amount of investment that can be profitable at a given time
Meridian Regional Airport
Montreal Economic Institute
Meridian (Amtrak station), Amtrak station code MEI, Mississippi, United States
Medicare Economic Index
Military
MEI Hellhound (Grenade), low velocity multipurpose grenade
MEI Mercury, a family of grenades developed by Martin Electronics, Inc.
Science
Iodomethane (methyl iodide, MeI), a halomethane
Multivariate ENSO index
4-Methylimidazole (4-MEI), a chemical compound
Technology
Management Engine Interface, a component of Intel Active Management Technology
Music Encoding Initiative, a music encoding format
Other uses
Media and Entertainment International, a former global union federation
OECD Main Economic Indicators, a monthly publication of the Organisation for Economic Co-operation and Development
See also
Mei (disambiguation)
|
https://en.wikipedia.org/wiki/Coefficient%20of%20multiple%20correlation
|
In statistics, the coefficient of multiple correlation is a measure of how well a given variable can be predicted using a linear function of a set of other variables. It is the correlation between the variable's values and the best predictions that can be computed linearly from the predictive variables.
The coefficient of multiple correlation takes values between 0 and 1. Higher values indicate higher predictability of the dependent variable from the independent variables, with a value of 1 indicating that the predictions are exactly correct and a value of 0 indicating that no linear combination of the independent variables is a better predictor than is the fixed mean of the dependent variable.
The coefficient of multiple correlation equals the square root of the coefficient of determination under the particular assumptions that an intercept is included and that the best possible linear predictors are used; the coefficient of determination is defined for more general cases, including those of nonlinear prediction and those in which the predicted values have not been derived from a model-fitting procedure.
Definition
The coefficient of multiple correlation, denoted R, is a scalar that is defined as the Pearson correlation coefficient between the predicted and the actual values of the dependent variable in a linear regression model that includes an intercept.
Computation
The square of the coefficient of multiple correlation can be computed using the vector $\mathbf{c} = (r_{x_1 y}, r_{x_2 y}, \ldots, r_{x_N y})^\top$ of correlations between the predictor variables $x_n$ (independent variables) and the target variable $y$ (dependent variable), and the correlation matrix $R_{xx}$ of correlations between predictor variables. It is given by
$$R^2 = \mathbf{c}^\top R_{xx}^{-1}\,\mathbf{c},$$
where $\mathbf{c}^\top$ is the transpose of $\mathbf{c}$, and $R_{xx}^{-1}$ is the inverse of the matrix of correlations between predictor variables.
If all the predictor variables are uncorrelated, the matrix $R_{xx}$ is the identity matrix and $R^2$ simply equals $\mathbf{c}^\top\mathbf{c}$, the sum of the squared correlations with the dependent variable. If the predictor variables are correlated among themselves, the inverse of the correlation matrix accounts for this.
The squared coefficient of multiple correlation can also be computed as the fraction of variance of the dependent variable that is explained by the independent variables, which in turn is 1 minus the unexplained fraction. The unexplained fraction can be computed as the sum of squares of residuals—that is, the sum of the squares of the prediction errors—divided by the sum of squares of deviations of the values of the dependent variable from its expected value.
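As an illustrative sketch (synthetic data and variable names of our choosing, not from the article), the formula $R^2 = \mathbf{c}^\top R_{xx}^{-1}\mathbf{c}$ can be cross-checked against the definition of $R$ as the correlation between predicted and actual values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                               # predictor variables
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)   # dependent variable

# Vector of predictor-target correlations and predictor correlation matrix
c = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
Rxx = np.corrcoef(X, rowvar=False)
R = np.sqrt(c @ np.linalg.solve(Rxx, c))                  # R^2 = c^T Rxx^{-1} c

# Cross-check: Pearson correlation between fitted and actual values
A = np.column_stack([np.ones(n), X])                      # regression with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(R, np.corrcoef(A @ beta, y)[0, 1])                  # the two values agree
```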
Properties
With more than two variables being related to each other, the value of the coefficient of multiple correlation depends on the choice of dependent variable: a regression of $y$ on $x$ and $z$ will in general have a different $R$ than will a regression of $z$ on $x$ and $y$. For example, suppose that in a particular sample the variable $z$ is uncorrelated with both $x$ and $y$, while $x$ and $y$ are linearly related to each other. Then a regression of $z$ on $y$ and $x$ will yield an $R$ of zero, whi
|
https://en.wikipedia.org/wiki/Potential%20theory
|
In mathematics and mathematical physics, potential theory is the study of harmonic functions.
The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation.
There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of the Laplace equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other.
Modern potential theory is also intimately connected with probability and the theory of Markov chains. In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be seen by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue $I-K$ of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others.
Symmetry
A useful starting point and organizing principle in the study of harmonic functions is a consideration of the symmetries of the Laplace equation. Although it is not a symmetry in the usual sense of the term, we can start with the observation that the Laplace equation is linear. This means that the fundamental object of study in potential theory is a linear space of functions. This observation will prove especially important when we consider function space approaches to the subject in a later section.
As for symmetry in the usual sense of the term, we may start with the theorem that the symmetries of the $n$-dimensional Laplace equation are exactly the conformal symmetries of the $n$-dimensional Euclidean space. This fact has several implications. First of all, one can consider harmonic functions which transform under irreducible representations of the conformal group or of its subgroups (such as the group of rotations or translations). Proceeding in this fashion, one systematically obtains the solutions of the Laplace equation which arise from separation of variables such as spherical harmonic solutions and Fourier series. By taking linear superpo
|
https://en.wikipedia.org/wiki/Mode%20%28statistics%29
|
In statistics, the mode is the value that appears most often in a set of data values. If $X$ is a discrete random variable, the mode is the value $x$ at which the probability mass function takes its maximum value (i.e., $x = \operatorname{argmax}_{x_i} P(X = x_i)$). In other words, it is the value that is most likely to be sampled.
Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions.
The mode is not necessarily unique for a given discrete distribution, since the probability mass function may take the same maximum value at several points $x_1$, $x_2$, etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently.
A mode of a continuous probability distribution is often considered to be any value at which its probability density function has a locally maximum value. When the probability density function of a continuous distribution has multiple local maxima it is common to refer to all of the local maxima as modes of the distribution, so any peak is a mode. Such a continuous distribution is called multimodal (as opposed to unimodal).
In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric unimodal distribution, the sample mean can be used as an estimate of the population mode.
Mode of a sample
The mode of a sample is the element that occurs most often in the collection. For example, the mode of the sample [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17] is 6. Given the list of data [1, 1, 2, 4, 4] its mode is not unique. A dataset, in such a case, is said to be bimodal, while a set with more than two modes may be described as multimodal.
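A minimal sketch of computing the mode of a sample (the helper name is ours); it returns every value attaining the top frequency, so the bimodal list above yields two values:

```python
from collections import Counter

def sample_modes(data):
    """Return all values that occur most often in the sample."""
    counts = Counter(data)
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]

print(sample_modes([1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17]))  # [6]
print(sample_modes([1, 1, 2, 4, 4]))                       # [1, 4]: bimodal
```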
For a sample from a continuous distribution, such as [0.935..., 1.211..., 2.430..., 3.668..., 3.874...], the concept is unusable in its raw form, since no two values will be exactly the same, so each value will occur precisely once. In order to estimate the mode of the underlying distribution, the usual practice is to discretize the data by assigning frequency values to intervals of equal distance, as for making a histogram, effectively replacing the values by the midpoints of the
intervals they are assigned to. The mode is then the value where the histogram reaches its peak. For small or middle-sized samples the outcome of this procedure is sensitive to the choice of interval width if chosen too narrow or too wide; typically one should have a sizable fraction of the data concentrated in a relatively small number of intervals (5 to 10), while the fraction of the data falling outside these intervals is also sizable. An alternate approach is kernel density estimation, which essentially blurs point samples to produce a continuous estimate of th
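A sketch of the histogram-based estimate just described (equal-width bins, with the mode taken as the midpoint of the fullest bin; the function name and bin count are our assumptions):

```python
import numpy as np

def histogram_mode(sample, bins=10):
    """Estimate the mode of a continuous distribution by binning the sample."""
    counts, edges = np.histogram(sample, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])   # midpoint of the fullest bin

rng = np.random.default_rng(1)
print(histogram_mode(rng.normal(loc=2.0, size=500)))  # close to the true mode 2.0
```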
|
https://en.wikipedia.org/wiki/Paul%20Benacerraf
|
Paul Joseph Salomon Benacerraf (born 26 March 1931) is a French-born American philosopher working in the field of the philosophy of mathematics who taught at Princeton University his entire career, from 1960 until his retirement in 2007. He was appointed Stuart Professor of Philosophy in 1974, and retired as the James S. McDonnell Distinguished University Professor of Philosophy.
Life and career
Benacerraf was born in Paris to a Moroccan-Venezuelan father and an Algerian mother. In 1939 the family moved to Caracas and then to New York City.
When the family returned to Caracas, Benacerraf remained in the United States, boarding at the Peddie School in Hightstown, New Jersey. He attended Princeton University for both his undergraduate and graduate studies.
He was elected a fellow of the American Academy of Arts and Sciences in 1998.
His brother was the Venezuelan Nobel Prize-winning immunologist Baruj Benacerraf.
Philosophical work
Benacerraf is perhaps best known for his two papers "What Numbers Could Not Be" (1965) and "Mathematical Truth" (1973), and for his anthology on the philosophy of mathematics, co-edited with Hilary Putnam.
In "What Numbers Could Not Be" (1965), Benacerraf argues against a Platonist view of mathematics, and for structuralism, on the ground that what is important about numbers is the abstract structures they represent rather than the objects that number words ostensibly refer to. In particular, this argument is based on the point that Ernst Zermelo and John von Neumann give distinct, and completely adequate, identifications of natural numbers with sets (see Zermelo ordinals and von Neumann ordinals). This argument is called Benacerraf's identification problem.
In "Mathematical Truth" (1973), he argues that no interpretation of mathematics offers a satisfactory package of epistemology and semantics; it is possible to explain mathematical truth in a way that is consistent with our syntactico-semantical treatment of truth in non-mathematical language, and it is possible to explain our knowledge of mathematics in terms consistent with a causal account of epistemology, but it is in general not possible to accomplish both of these objectives simultaneously (this argument is called Benacerraf's epistemological problem). He argues for this on the grounds that an adequate account of truth in mathematics implies the existence of abstract mathematical objects, but that such objects are epistemologically inaccessible because they are causally inert and beyond the reach of sense perception. On the other hand, an adequate epistemology of mathematics, say one that ties truth-conditions to proof in some way, precludes understanding how and why the truth-conditions have any bearing on truth.
Sexual harassment allegation
Elisabeth Lloyd has alleged that while she was a PhD student at Princeton, Benacerraf "petted and touched" her every day. She said, "It was just an extra price I had to pay, that the men did not have to pay, i
|
https://en.wikipedia.org/wiki/Schwarz%20lemma
|
In mathematics, the Schwarz lemma, named after Hermann Amandus Schwarz, is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than deeper theorems, such as the Riemann mapping theorem, which it helps to prove. It is, however, one of the simplest results capturing the rigidity of holomorphic functions.
Statement
Let $\mathbf{D} = \{z \in \mathbb{C} : |z| < 1\}$ be the open unit disk in the complex plane centered at the origin, and let $f : \mathbf{D} \to \mathbb{C}$ be a holomorphic map such that $f(0) = 0$ and $|f(z)| \le 1$ on $\mathbf{D}$.
Then $|f(z)| \le |z|$ for all $z \in \mathbf{D}$, and $|f'(0)| \le 1$.
Moreover, if $|f(z)| = |z|$ for some non-zero $z$, or if $|f'(0)| = 1$, then $f(z) = az$ for some $a \in \mathbb{C}$ with $|a| = 1$.
Proof
The proof is a straightforward application of the maximum modulus principle to the function
$$g(z) = \begin{cases} \dfrac{f(z)}{z} & \text{if } z \neq 0 \\ f'(0) & \text{if } z = 0, \end{cases}$$
which is holomorphic on the whole of $\mathbf{D}$, including at the origin (because $f$ is differentiable at the origin and fixes zero). Now if $D_r = \{z : |z| \le r\}$ denotes the closed disk of radius $r$ centered at the origin, then the maximum modulus principle implies that, for $r < 1$, given any $z \in D_r$, there exists $z_r$ on the boundary of $D_r$ such that
$$|g(z)| \le |g(z_r)| = \frac{|f(z_r)|}{|z_r|} \le \frac{1}{r}.$$
As $r \to 1$ we get $|g(z)| \le 1$.
Moreover, suppose that $|f(z)| = |z|$ for some non-zero $z \in \mathbf{D}$, or $|f'(0)| = 1$. Then, $|g(z)| = 1$ at some point of $\mathbf{D}$. So by the maximum modulus principle, $g(z)$ is equal to a constant $a$ such that $|a| = 1$. Therefore, $f(z) = az$, as desired.
Schwarz–Pick theorem
A variant of the Schwarz lemma, known as the Schwarz–Pick theorem (after Georg Pick), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself:
Let $f : \mathbf{D} \to \mathbf{D}$ be holomorphic. Then, for all $z_1, z_2 \in \mathbf{D}$,
$$\left|\frac{f(z_1)-f(z_2)}{1-\overline{f(z_1)}\,f(z_2)}\right| \le \left|\frac{z_1-z_2}{1-\overline{z_1}\,z_2}\right|$$
and, for all $z \in \mathbf{D}$,
$$\frac{|f'(z)|}{1-|f(z)|^2} \le \frac{1}{1-|z|^2}.$$
The expression
$$d(z_1,z_2) = \tanh^{-1}\left|\frac{z_1-z_2}{1-\overline{z_1}\,z_2}\right|$$
is the distance of the points $z_1$, $z_2$ in the Poincaré metric, i.e. the metric in the Poincaré disc model for hyperbolic geometry in dimension two. The Schwarz–Pick theorem then essentially states that a holomorphic map of the unit disk into itself decreases the distance of points in the Poincaré metric. If equality holds throughout in one of the two inequalities above (which is equivalent to saying that the holomorphic map preserves the distance in the Poincaré metric), then $f$ must be an analytic automorphism of the unit disc, given by a Möbius transformation mapping the unit disc to itself.
An analogous statement on the upper half-plane can be made as follows:
Let $f : \mathbf{H} \to \mathbf{H}$ be holomorphic, where $\mathbf{H}$ is the upper half-plane. Then, for all $z_1, z_2 \in \mathbf{H}$,
$$\left|\frac{f(z_1)-f(z_2)}{\overline{f(z_1)}-f(z_2)}\right| \le \frac{|z_1-z_2|}{|\overline{z_1}-z_2|}.$$
This is an easy consequence of the Schwarz–Pick theorem mentioned above: one just needs to remember that the Cayley transform
$$W(z) = \frac{z-i}{z+i}$$
maps the upper half-plane $\mathbf{H}$ conformally onto the unit disc $\mathbf{D}$. Then, the map $W \circ f \circ W^{-1}$ is a holomorphic map from $\mathbf{D}$ onto $\mathbf{D}$. Using the Schwarz–Pick theorem on this map, and finally simplifying the results by using the formula for $W$, we get the desired result. Also, for all $z \in \mathbf{H}$,
$$\frac{|f'(z)|}{\operatorname{Im} f(z)} \le \frac{1}{\operatorname{Im} z}.$$
If equality holds for either the one or the other expressions, then $f$ must be a Möbius transformation with real coefficients. That is, if equality holds, then
$$f(z) = \frac{az+b}{cz+d}$$
with $a, b, c, d \in \mathbb{R}$ and $ad - bc > 0$.
Proof of Schwarz–Pick theorem
The proof of the Schwarz–Pick theorem follows from Schwarz's lemma and the fact that a Möbius transformation of the form
$$\frac{z - z_0}{1 - \overline{z_0}\,z}, \qquad |z_0| < 1,$$
maps the unit circle to itself. Fix $z_1$ and define the Möbius transformations
$$M(z) = \frac{z_1 - z}{1 - \overline{z_1}\,z}, \qquad \varphi(z) = \frac{f(z_1) - z}{1 - \overline{f(z_1)}\,z}.$$
Since $M(z_1) = 0$ and the Möb
|
https://en.wikipedia.org/wiki/Autoregressive%20model
|
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation which should not be confused with differential equation). Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root.
Definition
The notation $AR(p)$ indicates an autoregressive model of order $p$. The AR($p$) model is defined as
$$X_t = \sum_{i=1}^p \varphi_i X_{t-i} + \varepsilon_t$$
where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, and $\varepsilon_t$ is white noise. This can be equivalently written using the backshift operator $B$ as
$$X_t = \sum_{i=1}^p \varphi_i B^i X_t + \varepsilon_t$$
so that, moving the summation term to the left side and using polynomial notation, we have
$$\phi[B]\,X_t = \varepsilon_t .$$
An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.
Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with $|\varphi_1| \ge 1$ are not stationary. More generally, for an AR($p$) model to be weak-sense stationary, the roots of the polynomial $\Phi(z) := 1 - \sum_{i=1}^p \varphi_i z^i$ must lie outside the unit circle, i.e., each (complex) root $z_i$ must satisfy $|z_i| > 1$ (see pages 89, 92).
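As a numerical sketch (the AR(2) coefficients below are assumed for illustration), the stationarity condition can be verified by locating the roots of $\Phi(z) = 1 - \varphi_1 z - \varphi_2 z^2$, and the process itself can be simulated straight from the defining recursion:

```python
import numpy as np

phi = [0.6, 0.3]                             # assumed AR(2) coefficients
# np.roots takes the highest-degree coefficient first: -phi2*z^2 - phi1*z + 1
roots = np.roots([-phi[1], -phi[0], 1.0])
print(np.all(np.abs(roots) > 1.0))           # True: all roots outside the unit circle

# Simulate X_t = phi1*X_{t-1} + phi2*X_{t-2} + eps_t with white-noise shocks
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(2, len(x)):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()
print(x.var())                               # variance settles at a finite value
```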
Intertemporal effect of shocks
In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model $X_t = \varphi_1 X_{t-1} + \varepsilon_t$. A non-zero value for $\varepsilon_t$ at, say, time $t=1$ affects $X_1$ by the amount $\varepsilon_1$. Then by the AR equation for $X_2$ in terms of $X_1$, this affects $X_2$ by the amount $\varphi_1 \varepsilon_1$. Then by the AR equation for $X_3$ in terms of $X_2$, this affects $X_3$ by the amount $\varphi_1^2 \varepsilon_1$. Continuing this process shows that the effect of $\varepsilon_1$ never ends, although if the process is stationary then the effect diminishes toward zero in the limit.
Because each shock affects X values infinitely far into the future from when they occur, any given value Xt is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression
(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as
$$X_t = \frac{1}{\phi[B]}\,\varepsilon_t .$$
When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to $\varepsilon_t$ has an infinite order, that is, an infinite number
|
https://en.wikipedia.org/wiki/ELT
|
ELT may refer to:
Education
English language teaching
Expanded learning time, an American education strategy
Kolb's experiential learning theory
Mathematics and science
Ending lamination theorem
Extremely large telescope, a type of telescope
Extremely Large Telescope, an astronomical observatory under construction in Chile
Effective lifetime temperature, used in rehydroxylation dating
Medicine
Endovenous laser treatment
Euglobulin lysis time
Excimer laser trabeculostomy
Music
Every Little Thing (band), a Japanese J-Pop band
"ELT", a song by the band Wilco from their 1999 album Summerteeth
Technology
Emergency locator transmitter
Extract, load, transform, a data processing concept
End-of-life tyre
Transport
East London Transit, a British public transport system
El Tor Airport, in Egypt
Elizabethtown station, Pennsylvania
Other uses
Electrical lighting technician, a stage-lighting technician
Electronic lien and title
Elt Drenth (1949–1998), Dutch swimmer
Evolutionary leadership theory
Executive Leadership Team
|
https://en.wikipedia.org/wiki/Voigt%20notation
|
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application.
For example, a 2×2 symmetric tensor $X$ has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus it can be expressed as the vector
$$\langle x_{11},\, x_{22},\, x_{12}\rangle .$$
As another example:
The stress tensor (in matrix notation) is given as
$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}.$$
In Voigt notation it is simplified to a 6-dimensional vector:
$$\tilde{\boldsymbol{\sigma}} = (\sigma_{xx},\, \sigma_{yy},\, \sigma_{zz},\, \sigma_{yz},\, \sigma_{xz},\, \sigma_{xy}).$$
The strain tensor, similar in nature to the stress tensor (both are symmetric second-order tensors), is given in matrix form as
$$\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_{xx} & \epsilon_{xy} & \epsilon_{xz} \\ \epsilon_{yx} & \epsilon_{yy} & \epsilon_{yz} \\ \epsilon_{zx} & \epsilon_{zy} & \epsilon_{zz} \end{bmatrix}.$$
Its representation in Voigt notation is
$$\tilde{\boldsymbol{\epsilon}} = (\epsilon_{xx},\, \epsilon_{yy},\, \epsilon_{zz},\, \gamma_{yz},\, \gamma_{xz},\, \gamma_{xy}),$$
where $\gamma_{xy} = 2\epsilon_{xy}$, $\gamma_{yz} = 2\epsilon_{yz}$, and $\gamma_{xz} = 2\epsilon_{xz}$ are engineering shear strains.
The benefit of using different representations for stress and strain is that the scalar invariance
$$\boldsymbol{\sigma}:\boldsymbol{\epsilon} = \sigma_{ij}\epsilon_{ij} = \tilde{\boldsymbol{\sigma}}\cdot\tilde{\boldsymbol{\epsilon}}$$
is preserved.
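A minimal sketch of the two mappings in code (the helper names are ours, not a standard API); it confirms numerically that doubling the shear strains preserves the scalar invariant $\sigma_{ij}\epsilon_{ij}$:

```python
import numpy as np

def stress_to_voigt(sigma):
    """Voigt vector of a symmetric 3x3 stress tensor."""
    return np.array([sigma[0, 0], sigma[1, 1], sigma[2, 2],
                     sigma[1, 2], sigma[0, 2], sigma[0, 1]])

def strain_to_voigt(eps):
    """Voigt vector of a strain tensor: engineering (doubled) shear strains."""
    return np.array([eps[0, 0], eps[1, 1], eps[2, 2],
                     2 * eps[1, 2], 2 * eps[0, 2], 2 * eps[0, 1]])

sigma = np.array([[1., 4., 5.], [4., 2., 6.], [5., 6., 3.]])
eps = 0.01 * sigma                        # an arbitrary symmetric strain
# Double contraction sigma_ij * eps_ij equals the Voigt dot product:
print(np.tensordot(sigma, eps), stress_to_voigt(sigma) @ strain_to_voigt(eps))
```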
Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix.
Mnemonic rule
A simple mnemonic rule for memorizing Voigt notation is as follows:
Write down the second order tensor in matrix form (in the example, the stress tensor)
Strike out the diagonal
Continue on the third column
Go back to the first element along the first row.
Voigt indexes are numbered consecutively from the starting point to the end.
Mandel notation
For a symmetric tensor of second rank
$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}$$
only six components are distinct, the three on the diagonal and the others being off-diagonal.
Thus it can be expressed, in Mandel notation, as the vector
$$\tilde{\boldsymbol{\sigma}}^M = (\sigma_{11},\, \sigma_{22},\, \sigma_{33},\, \sqrt{2}\,\sigma_{23},\, \sqrt{2}\,\sigma_{13},\, \sqrt{2}\,\sigma_{12}).$$
The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors, for example:
$$\tilde{\boldsymbol{\sigma}}^M \cdot \tilde{\boldsymbol{\sigma}}^M = \sigma_{ij}\sigma_{ij}.$$
A symmetric tensor of rank four satisfying $D_{ijkl} = D_{jikl}$ and $D_{ijkl} = D_{ijlk}$ has 81 components in three-dimensional space, but only 36 components are distinct. Thus, in Mandel notation, it can be expressed as a 6×6 matrix.
Applications
The notation is named after physicist Woldemar Voigt and the scientist John Nye. It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law, as well as finite element analysis and diffusion MRI.
Hooke's law has a symmetric fourth-order stiffness tensor with 81 components (3×3×3×3), but because the application of such a rank-4 tensor to a symmetric rank-2 tensor must yield another symmetric rank-2 tensor, not all of the 81 elements are independent. Voigt notation enables such a rank-4 tensor to be represented by a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, which in the case of Hooke's law has geometric significance. This explains why weights are introduced (to make the mapping an isometry).
A discussion of inv
|
https://en.wikipedia.org/wiki/General%20algebraic%20modeling%20system
|
The general algebraic modeling system (GAMS) is a high-level modeling system for mathematical optimization. GAMS is designed for modeling and solving linear, nonlinear, and mixed-integer optimization problems. The system is tailored for complex, large-scale modeling applications and allows the user to build large maintainable models that can be adapted to new situations. The system is available for use on various computer platforms. Models are portable from one platform to another.
GAMS was the first algebraic modeling language (AML) and is formally similar to commonly used fourth-generation programming languages. GAMS contains an integrated development environment (IDE) and is connected to a group of third-party optimization solvers. Among these solvers are BARON, COIN-OR solvers, CONOPT, COPT Cardinal Optimizer, CPLEX, DICOPT, MOSEK, SNOPT, SULUM, and XPRESS.
GAMS allows users to implement hybrid algorithms that combine different solvers. Models are described in concise, human-readable algebraic statements. GAMS is among the most popular input formats for the NEOS Server. Although initially designed for applications related to economics and management science, it has a community of users from various backgrounds of engineering and science.
Timeline
1976 GAMS idea is presented at the International Symposium on Mathematical Programming (ISMP), Budapest
1978 Phase I: GAMS supports linear programming. Supported platforms: Mainframes and Unix Workstations
1979 Phase II: GAMS supports nonlinear programming.
1987 GAMS becomes a commercial product
1988 First PC System (16 bit)
1988 Alex Meeraus, the initiator of GAMS and founder of GAMS Development Corporation, is awarded INFORMS Computing Society Prize
1990 32 bit Dos Extender
1990 GAMS moves to Georgetown, Washington, D.C.
1991 Mixed Integer Non-Linear Programs capability (DICOPT)
1994 GAMS supports mixed complementarity problems
1995 MPSGE language is added for CGE modeling
1996 European branch opens in Germany
1998 32 bit native Windows
1998 Stochastic programming capability (OSL/SE, DECIS)
1999 Introduction of the GAMS Integrated development environment (IDE)
2000 End of support for DOS & Win 3.11
2000 GAMS World initiative started
2001 GAMS Data Exchange (GDX) is introduced
2002 GAMS is listed in OR/MS 50th Anniversary list of milestones
2003 Conic programming is added
2003 Global optimization in GAMS
2004 Quality assurance initiative starts
2004 Support for Quadratic Constrained programs
2005 Support for 64 bit PC Operating systems (Mac PowerPC / Linux / Win)
2006 GAMS supports parallel grid computing
2007 GAMS supports open-source solvers from COIN-OR
2007 Support for Solaris on Sparc64
2008 Support for 32 and 64 bit Mac OS X
2009 GAMS available on the Amazon Elastic Compute Cloud
2009 GAMS supports extended mathematical programs (EMP)
2010 GAMS is awarded the company award of the German Society of Operations Research (GOR)
2010 GDXMRW interface betw
|
https://en.wikipedia.org/wiki/134%20%28number%29
|
134 (one hundred [and] thirty-four) is the natural number following 133 and preceding 135.
In mathematics
134 is a nontotient, since there is no integer with exactly 134 coprimes below it, and a noncototient, since there is no integer $n$ for which $n - \varphi(n) = 134$.
134 is .
In Roman numerals, 134 is a Friedman number since CXXXIV = XV * (XC/X) - I.
In the military
was a Mission Buenaventura-class fleet oiler during World War II
was a United States Navy during World War II
was a United States Navy between World War I and World War II
was the lead ship of the United States Navy heavy cruisers during World War II
was a United States Navy General G. O. Squier-class transport ship during World War II
was a United States Navy converted steel-hulled trawler, during World War II
was a United States Navy which saw battle during the Battle of Midway
was a United States Navy during World War II
, was a United States S-class submarine which was later transferred to the Royal Navy
was a United States Navy Crater-class cargo ship during World War II
134 (Bedford) Squadron in the United Kingdom Air Training Corps
The 134th (48th Highlanders) Battalion, CEF was a Toronto, Ontario unit of the Canadian Expeditionary Force during World War I
The 134th Pennsylvania Volunteer Infantry was an infantry regiment in the Union Army during the American Civil War
In sports
Former running back George Reed for the Saskatchewan Roughriders held the career record of 134 rushing touchdowns
In transportation
London Buses route 134 is a Transport for London contracted bus route in London
In other fields
134 is also:
The year AD 134 or 134 BC
134 AH is a year in the Islamic calendar that corresponds to 751 – 752 CE
134 Sophrosyne is a large main belt asteroid with a dark surface and most likely a primitive carbonaceous composition
Caesium-134 has a half-life of 2.0652 years. It is produced directly (at a very small yield) as a fission product, but not via beta decay of other fission-product nuclides of mass 134, since beta decay stops at stable Xe-134
The atomic number of an element temporarily called untriquadium
Article 134 of the American UCMJ is the catch-all article, for offences "not specifically mentioned in this chapter." It has been used to prosecute a wide variety of offences, from cohabitation by personnel not married to each other to statements critical of the U.S. President. Some prisoners at Abu Ghraib were tagged with this number.
Sonnet 134 by William Shakespeare
134 °F (56.7 °C), recorded at Death Valley, California, in July 1913, was the highest naturally occurring air temperature ever recorded on Earth.
United States Immigration Support Form I-134, Affidavit of Support
See also
List of highways numbered 134
United Nations Security Council Resolution 134
United States Supreme Court cases, Volume 134
|
https://en.wikipedia.org/wiki/Samurize
|
Serious Samurize (or simply Samurize) is a freeware system monitoring and desktop enhancement engine for Microsoft Windows.
The core of Samurize is the desktop client that displays PC statistics (similar to a widget or gadget) anywhere on the screen. There is also a taskbar client, a clock client, a server, and a screensaver. The client's main purpose is to display information about the computer, such as CPU usage, available RAM/HD space, network conditions, uptime, etc. It can also be extended using VBScript, JScript, Perl, Python and Ruby scripts and DLL plugins, which provide virtually unlimited possibilities. There are scripts and plugins that can fetch weather reports and news headlines, control music players, etc.
Samurize includes a WYSIWYG config editor used to create configs. A "config" consists of a collection of "meters" and is saved as an INI file in the "configs" folder of the Samurize installation path. Configs can be packed for sharing with other users using an included tool.
History
Work on Samurize started early in 2002. The first iteration of Samurize was version 0.63c, at which point work began on Serious Samurize, which was released as version 0.80a, breaking compatibility with older configurations.
The earliest predecessor of Samurize was NMeter, created in 2000. NMeter was followed by CureInfo in 2001 and in March 2002 Samurize development began. The development proceeded rapidly with almost one new version each month until version 1.0 was published in November 2003. Then the development process slowed somewhat but new versions were regularly released.
In early 2015, it was announced "that the samurize project is officially over." (sic)
|
https://en.wikipedia.org/wiki/Symmetric%20polynomial
|
In mathematics, a symmetric polynomial is a polynomial $P(X_1, X_2, \ldots, X_n)$ in $n$ variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, $P$ is a symmetric polynomial if for any permutation $\sigma$ of the subscripts $1, 2, \ldots, n$ one has $P(X_{\sigma(1)}, X_{\sigma(2)}, \ldots, X_{\sigma(n)}) = P(X_1, X_2, \ldots, X_n)$.
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.
Examples
The following polynomials in two variables X1 and X2 are symmetric:
as is the following polynomial in three variables X1, X2, X3:
There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is
$$\prod_{1 \le i < j \le n} (X_i - X_j)^2,$$
where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).
On the other hand, the polynomial in two variables
$$X_1 - X_2$$
is not symmetric, since if one exchanges $X_1$ and $X_2$ one gets a different polynomial, $X_2 - X_1$. Similarly in three variables
$$X_1^2 X_2 + X_2^2 X_3 + X_3^2 X_1$$
has only symmetry under cyclic permutations of the three variables, which is not sufficient to be a symmetric polynomial. However, the following is symmetric:
$$X_1^2 X_2 + X_2^2 X_3 + X_3^2 X_1 + X_1 X_2^2 + X_2 X_3^2 + X_3 X_1^2.$$
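A small sketch (using sympy; the helper is ours) that tests the permutation-invariance definition on examples of this kind: a fully symmetric polynomial passes, while a merely cyclic one fails:

```python
from itertools import permutations
from sympy import symbols, simplify

x1, x2, x3 = symbols('x1 x2 x3')
variables = (x1, x2, x3)

def is_symmetric(poly, vs):
    """Check invariance of poly under every permutation of the variables vs."""
    return all(
        simplify(poly.subs(list(zip(vs, perm)), simultaneous=True) - poly) == 0
        for perm in permutations(vs)
    )

cyclic = x1**2*x2 + x2**2*x3 + x3**2*x1            # cyclic symmetry only
full = cyclic + x1*x2**2 + x2*x3**2 + x3*x1**2     # its full symmetrization
print(is_symmetric(cyclic, variables), is_symmetric(full, variables))  # False True
```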
Applications
Galois theory
One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree n having n roots in a given field. These n roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function f of the n roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if f is given by a symmetric polynomial.
This yields the approach to solving po
|
https://en.wikipedia.org/wiki/IEP
|
IEP may refer to:
Science and technology
Immunoelectrophoresis, biochemistry method
Inclusion–exclusion principle, in the mathematics branch of combinatorics
Integrated electric propulsion, in marine propulsion
Isoelectric point, the pH where a molecule is electrically neutral
Education and research
Individualized Education Program, in the United States, for children with disabilities
Instituts d'études politiques (Institutes of Political Studies), higher education institutions in France
Institute for Economics and Peace, a think tank
Institute for European Politics, a Berlin research centre
Institute for Political Studies – Catholic University of Portugal ()
Internet Encyclopedia of Philosophy
Other uses
Icahn Enterprises, an American conglomerate
Independent Expert Panel, concerned with misconduct by members of the UK parliament
Institute of Employability Professionals, a British professional association
Intercity Express Programme, a British rail transport initiative
Irish pound, the pre-euro currency of Ireland
|
https://en.wikipedia.org/wiki/Elementary%20symmetric%20polynomial
|
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree $d$ in $n$ variables for each positive integer $d \le n$, and it is formed by adding together all distinct products of $d$ distinct variables.
Definition
The elementary symmetric polynomials in $n$ variables $X_1, \ldots, X_n$, written $e_k(X_1, \ldots, X_n)$ for $k = 1, \ldots, n$, are defined by
$$e_1(X_1, \ldots, X_n) = \sum_{1 \le j \le n} X_j,\qquad e_2(X_1, \ldots, X_n) = \sum_{1 \le j < k \le n} X_j X_k,\qquad e_3(X_1, \ldots, X_n) = \sum_{1 \le j < k < l \le n} X_j X_k X_l,$$
and so forth, ending with
$$e_n(X_1, \ldots, X_n) = X_1 X_2 \cdots X_n.$$
In general, for $k \ge 1$ we define
$$e_k(X_1, \ldots, X_n) = \sum_{1 \le j_1 < j_2 < \cdots < j_k \le n} X_{j_1} X_{j_2} \cdots X_{j_k},$$
so that $e_k(X_1, \ldots, X_n) = 0$ if $k > n$.
(Sometimes, $1 = e_0(X_1, \ldots, X_n)$ is included among the elementary symmetric polynomials, but excluding it allows generally simpler formulation of results and properties.)
Thus, for each positive integer $k$ less than or equal to $n$ there exists exactly one elementary symmetric polynomial of degree $k$ in $n$ variables. To form the one that has degree $k$, we take the sum of all products of $k$-subsets of the $n$ variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
Given an integer partition (that is, a finite non-increasing sequence of positive integers) $\lambda = (\lambda_1, \ldots, \lambda_m)$, one defines the symmetric polynomial $e_\lambda(X_1, \ldots, X_n)$, also called an elementary symmetric polynomial, by
$$e_\lambda(X_1, \ldots, X_n) = e_{\lambda_1}(X_1, \ldots, X_n) \cdot e_{\lambda_2}(X_1, \ldots, X_n) \cdots e_{\lambda_m}(X_1, \ldots, X_n).$$
Sometimes the notation $\sigma_k$ is used instead of $e_k$.
Examples
The following lists the elementary symmetric polynomials for the first four positive values of $n$.
For $n = 1$:
$$e_1(X_1) = X_1.$$
For $n = 2$:
$$e_1(X_1, X_2) = X_1 + X_2,\qquad e_2(X_1, X_2) = X_1 X_2.$$
For $n = 3$:
$$e_1 = X_1 + X_2 + X_3,\qquad e_2 = X_1 X_2 + X_1 X_3 + X_2 X_3,\qquad e_3 = X_1 X_2 X_3.$$
For $n = 4$:
$$e_1 = X_1 + X_2 + X_3 + X_4,\qquad e_2 = X_1 X_2 + X_1 X_3 + X_1 X_4 + X_2 X_3 + X_2 X_4 + X_3 X_4,$$
$$e_3 = X_1 X_2 X_3 + X_1 X_2 X_4 + X_1 X_3 X_4 + X_2 X_3 X_4,\qquad e_4 = X_1 X_2 X_3 X_4.$$
Properties
The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity
$$\prod_{j=1}^n (\lambda - X_j) = \lambda^n - e_1(X_1, \ldots, X_n)\,\lambda^{n-1} + e_2(X_1, \ldots, X_n)\,\lambda^{n-2} - \cdots + (-1)^n e_n(X_1, \ldots, X_n).$$
That is, when we substitute numerical values for the variables $X_1, \ldots, X_n$, we obtain the monic univariate polynomial (with variable $\lambda$) whose roots are the values substituted for $X_1, \ldots, X_n$ and whose coefficients are – up to their sign – the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas.
The characteristic polynomial of a square matrix is an example of application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain – up to their sign – the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of $e_1$, and thus the sum of the eigenvalues. Similarly, the determinant is – up to the sign – the constant term of the characteristic polynomial, i.e. the value of $e_n$. Thus the determinant of a square matrix is the product of the eigenvalues.
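A brief numerical sketch of these facts (numpy; the helper name is ours): the trace and determinant of a matrix agree with $e_1$ and $e_2$ evaluated at its eigenvalues:

```python
import numpy as np
from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial evaluated at the values xs."""
    return sum(prod(c) for c in combinations(xs, k))

A = np.array([[2.0, 1.0], [1.0, 3.0]])
eig = np.linalg.eigvals(A)

print(np.trace(A), e(1, eig))          # trace = sum of eigenvalues = e_1
print(np.linalg.det(A), e(2, eig))     # determinant = product of eigenvalues = e_2
```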
The set of elementary symmetric polynomials in variables generates the ring of symmetric polynomials in
|
https://en.wikipedia.org/wiki/Triangulation%20%28geometry%29
|
In geometry, a triangulation is a subdivision of a planar object into triangles, and by extension the subdivision of a higher-dimension geometric object into simplices. Triangulations of a three-dimensional volume would involve subdividing it into tetrahedra packed together.
In most instances, the triangles of a triangulation are required to meet edge-to-edge and vertex-to-vertex.
Types
Different types of triangulations may be defined, depending both on what geometric object is to be subdivided and on how the subdivision is determined.
A triangulation $T$ of $\mathbb{R}^d$ is a subdivision of $\mathbb{R}^d$ into $d$-dimensional simplices such that any two simplices in $T$ intersect in a common face (a simplex of any lower dimension) or not at all, and any bounded set in $\mathbb{R}^d$ intersects only finitely many simplices in $T$. That is, it is a locally finite simplicial complex that covers the entire space.
A point-set triangulation, i.e., a triangulation of a discrete set of points $\mathcal{P} \subset \mathbb{R}^d$, is a subdivision of the convex hull of the points into simplices such that any two simplices intersect in a common face of any dimension or not at all and such that the set of vertices of the simplices is contained in $\mathcal{P}$. Frequently used and studied point set triangulations include the Delaunay triangulation (for points in general position, the set of simplices that are circumscribed by an open ball that contains no input points) and the minimum-weight triangulation (the point set triangulation minimizing the sum of the edge lengths).
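As a sketch of how such a triangulation is computed in practice (using scipy's Delaunay wrapper around the Qhull library; the random input points are purely illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((12, 2))        # 12 random points in the unit square
tri = Delaunay(points)
print(tri.simplices)                # each row: the vertex indices of one triangle
```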
In cartography, a triangulated irregular network is a point set triangulation of a set of two-dimensional points together with elevations for each point. Lifting each point from the plane to its elevated height lifts the triangles of the triangulation into three-dimensional surfaces, which form an approximation of a three-dimensional landform.
A polygon triangulation is a subdivision of a given polygon into triangles meeting edge-to-edge, again with the property that the set of triangle vertices coincides with the set of vertices of the polygon. Polygon triangulations may be found in linear time and form the basis of several important geometric algorithms, including a simple approximate solution to the art gallery problem. The constrained Delaunay triangulation is an adaptation of the Delaunay triangulation from point sets to polygons or, more generally, to planar straight-line graphs.
A triangulation of a surface consists of a net of triangles with points on a given surface covering the surface partly or totally.
In the finite element method, triangulations are often used as the mesh (in this case, a triangle mesh) underlying a computation. In this case, the triangles must form a subdivision of the domain to be simulated, but instead of restricting the vertices to input points, it is allowed to add additional Steiner points as vertices. In order to be suitable as finite element meshes, a triangulation must have well-shaped triangles, according to criteria that depend on the
|
https://en.wikipedia.org/wiki/Triangulation%20%28topology%29
|
In mathematics, triangulation describes the replacement of topological spaces by piecewise linear spaces, i.e. the choice of a homeomorphism in a suitable simplicial complex. Spaces being homeomorphic to a simplicial complex are called triangulable. Triangulation has various uses in different branches of mathematics, for instance in algebraic topology, in complex analysis or in modeling.
Motivation
On the one hand, it is sometimes useful to forget about superfluous information of topological spaces: The replacement of the original spaces with simplicial complexes may help to recognize crucial properties and to gain a better understanding of the considered object.
On the other hand, simplicial complexes are objects of combinatorial character and therefore one can assign them quantities rising from their combinatorial pattern, for instance, the Euler characteristic. Triangulation allows now to assign such quantities to topological spaces.
Investigations concerning the existence and uniqueness of triangulations established a new branch in topology, namely piecewise linear topology (PL topology for short). Its main purpose is the study of the topological properties of simplicial complexes and their generalization, cell complexes.
Simplicial complexes
Abstract simplicial complexes
An abstract simplicial complex above a set $V$ is a system $\mathcal{T}$ of non-empty subsets of $V$ such that:
$\{v\} \in \mathcal{T}$ for each $v \in V$;
if $F \in \mathcal{T}$ and $\emptyset \neq E \subseteq F$, then $E \in \mathcal{T}$.
The elements of $\mathcal{T}$ are called simplices, the elements of $V$ are called vertices. A simplex with $n+1$ vertices has dimension $n$ by definition. The dimension of an abstract simplicial complex is defined as $\dim(\mathcal{T}) = \sup\{\dim(F) \mid F \in \mathcal{T}\}$.
Abstract simplicial complexes can be thought of as geometric objects too. This requires the notion of a geometric simplex.
Geometric simplices
Let $p_0, \ldots, p_n$ be affinely independent points in $\mathbb{R}^n$, i.e. the vectors $p_1 - p_0, \ldots, p_n - p_0$ are linearly independent. The set
$$\Delta = \left\{ \sum_{i=0}^{n} t_i p_i \;\middle|\; t_i \ge 0,\ \sum_{i=0}^{n} t_i = 1 \right\}$$
is said to be the simplex spanned by $p_0, \ldots, p_n$. It has dimension $n$ by definition. The points $p_0, \ldots, p_n$ are called the vertices of $\Delta$, the simplices spanned by subsets of the vertices are called faces, and the boundary $\partial\Delta$ is defined to be the union of its faces.
The $n$-dimensional standard simplex is the simplex spanned by the unit vectors $e_1, \ldots, e_{n+1}$.
Geometric simplicial complexes
A geometric simplicial complex $\mathcal{S}$ is a union of geometric simplices such that:
If $S$ is a simplex in $\mathcal{S}$, then all its faces are in $\mathcal{S}$.
If $S$ and $T$ are two distinct simplices in $\mathcal{S}$, their interiors are disjoint.
The set $|\mathcal{S}|$ can be realized as a topological space by choosing as closed sets the subsets $A \subseteq |\mathcal{S}|$ such that $A \cap \Delta$ is closed for all $\Delta \in \mathcal{S}$. It should be mentioned that, in general, this construction won't provide the natural topology of $|\mathcal{S}|$. In the case that each point in the complex lies in only finitely many simplices, both topologies coincide.
Each geometric complex can be associated with an abstract complex by choosing as a ground set the set of vertices that appear in any simplex of and as system of subsets the subsets of which correspond to vertex sets of simplices in .
A natural question is if vice versa, any abstract simplicial complex co
|
https://en.wikipedia.org/wiki/Equilibrium%20point%20%28mathematics%29
|
In mathematics, specifically in differential equations, an equilibrium point is a constant solution to a differential equation.
Formal definition
The point $\tilde{\mathbf{x}} \in \mathbb{R}^n$ is an equilibrium point for the differential equation
$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(t, \mathbf{x})$$
if $\mathbf{f}(t, \tilde{\mathbf{x}}) = \mathbf{0}$ for all $t$.
Similarly, the point $\tilde{\mathbf{x}} \in \mathbb{R}^n$ is an equilibrium point (or fixed point) for the difference equation
$$\mathbf{x}_{k+1} = \mathbf{f}(k, \mathbf{x}_k)$$
if $\mathbf{f}(k, \tilde{\mathbf{x}}) = \tilde{\mathbf{x}}$ for $k = 0, 1, 2, \ldots$
Equilibria can be classified by looking at the signs of the eigenvalues of the linearization of the equations about the equilibria. That is to say, by evaluating the Jacobian matrix at each of the equilibrium points of the system, and then finding the resulting eigenvalues, the equilibria can be categorized. Then the behavior of the system in the neighborhood of each equilibrium point can be qualitatively determined, (or even quantitatively determined, in some instances), by finding the eigenvector(s) associated with each eigenvalue.
An equilibrium point is hyperbolic if none of the eigenvalues have zero real part. If all eigenvalues have negative real parts, the point is stable. If at least one has a positive real part, the point is unstable. If at least one eigenvalue has negative real part and at least one has positive real part, the equilibrium is a saddle point and it is unstable. If all the eigenvalues are real and have the same sign the point is called a node.
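A short sketch of this recipe (the damped pendulum below is our illustrative system, not one from the article): evaluate the Jacobian at an equilibrium and inspect the real parts of its eigenvalues:

```python
import numpy as np

# Damped pendulum: x' = y, y' = -sin(x) - 0.5*y, with an equilibrium at the origin.
def jacobian(x, y):
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -0.5]])

eig = np.linalg.eigvals(jacobian(0.0, 0.0))
print(eig, "stable" if np.all(eig.real < 0) else "unstable")   # stable spiral
```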
See also
Autonomous equation
Critical point
Steady state
|
https://en.wikipedia.org/wiki/Transpose%20of%20a%20linear%20map
|
In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces.
The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors.
Definition
Let $X^{\#}$ denote the algebraic dual space of a vector space $X$.
Let $X$ and $Y$ be vector spaces over the same field $\mathcal{K}$.
If $u : X \to Y$ is a linear map, then its algebraic adjoint or dual is the map ${}^{\#}u : Y^{\#} \to X^{\#}$ defined by $f \mapsto f \circ u$.
The resulting functional ${}^{\#}u(f) := f \circ u$ is called the pullback of $f$ by $u$.
The continuous dual space of a topological vector space (TVS) $X$ is denoted by $X'$.
If $X$ and $Y$ are TVSs then a linear map $u : X \to Y$ is weakly continuous if and only if ${}^{\#}u(Y') \subseteq X'$, in which case we let ${}^{t}u : Y' \to X'$ denote the restriction of ${}^{\#}u$ to $Y'$.
The map ${}^{t}u$ is called the transpose or algebraic adjoint of $u$.
The following identity characterizes the transpose of $u$:
$$\langle {}^{t}u(f), x \rangle = \langle f, u(x) \rangle,$$
where $\langle \cdot, \cdot \rangle$ is the natural pairing defined by $\langle h, z \rangle := h(z)$.
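In finite dimensions, where a linear map is a matrix and its transpose is the ordinary matrix transpose, the characterizing identity can be checked numerically (a sketch with assumed random data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))   # a linear map u : R^4 -> R^3
x = rng.normal(size=4)        # a vector in the domain
f = rng.normal(size=3)        # a functional on the codomain, as a vector

# <t_u(f), x> = <f, u(x)>: pairing after A.T equals pairing after A
print(np.dot(A.T @ f, x), np.dot(f, A @ x))   # the two numbers agree
```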
Properties
The assignment $u \mapsto {}^{t}u$ produces an injective linear map between the space of linear operators from $X$ to $Y$ and the space of linear operators from $Y^{\#}$ to $X^{\#}$.
If $X = Y$ then the space of linear maps is an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that ${}^{t}(uv) = {}^{t}v\,{}^{t}u$.
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over to itself.
One can identify with using the natural injection into the double dual.
If $u : X \to Y$ and $v : Y \to Z$ are linear maps then ${}^{t}(v \circ u) = {}^{t}u \circ {}^{t}v$.
If $u : X \to Y$ is a (surjective) vector space isomorphism then so is the transpose ${}^{t}u : Y^{\#} \to X^{\#}$.
If and are normed spaces then
and if the linear operator is bounded then the operator norm of is equal to the norm of ; that is
and moreover,
Polars
Suppose now that is a weakly continuous linear operator between topological vector spaces and with continuous dual spaces and respectively.
Let denote the canonical dual system, defined by where and are said to be if
For any subsets and let
denote the () (resp. ).
If and are convex, weakly closed sets containing the origin then implies
If and then
and
If and are locally convex then
Annihilators
Suppose and are topological vector spaces and is a weakly continuous linear operator (so ). Given subsets and define their (with respect to the canonical dual system) by
and
The kernel of is the subspace of orthogonal to the image of :
The linear map is injective if and only if its image is a weakly dense subset of (that is, the image of is dense in when is given the weak topology induced by ).
The transpose is continuous when both and are endowed with the weak-* topology (resp. both endowed with the strong dual topology, both endowed with the topology of uniform convergence on compact convex subsets, both endowed with the topology of uniform convergence on compact subsets).
(Surjection of Fréchet spaces): If and are Fréchet spaces then the con
|
https://en.wikipedia.org/wiki/Dual%20representation
|
In mathematics, if G is a group and ρ is a linear representation of it on the vector space V, then the dual representation ρ* is defined over the dual vector space V* as follows:
ρ*(g) is the transpose of ρ(g⁻¹), that is, ρ*(g) = ρ(g⁻¹)ᵀ for all g ∈ G.
The dual representation is also known as the contragredient representation.
If 𝔤 is a Lie algebra and π is a representation of it on the vector space V, then the dual representation π* is defined over the dual vector space V* as follows:
π*(X) = −π(X)ᵀ for all X ∈ 𝔤.
The motivation for this definition is that the Lie algebra representation associated to the dual of a Lie group representation is computed by the above formula. But the definition of the dual of a Lie algebra representation makes sense even if it does not come from a Lie group representation.
In both cases, the dual representation is a representation in the usual sense.
Properties
Irreducibility and second dual
If a (finite-dimensional) representation is irreducible, then the dual representation is also irreducible—but not necessarily isomorphic to the original representation. On the other hand, the dual of the dual of any representation is isomorphic to the original representation.
Unitary representations
Consider a unitary representation ρ of a group G, and let us work in an orthonormal basis. Thus, ρ maps G into the group of unitary matrices. Then the abstract transpose in the definition of the dual representation may be identified with the ordinary matrix transpose. Since the adjoint of a matrix is the complex conjugate of the transpose, the transpose is the conjugate of the adjoint. Thus, ρ*(g) is the complex conjugate of the adjoint of the inverse of ρ(g). But since ρ(g) is assumed to be unitary, the adjoint of the inverse of ρ(g) is just ρ(g).
The upshot of this discussion is that when working with unitary representations in an orthonormal basis, ρ*(g) is just the complex conjugate of ρ(g).
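This can be verified numerically. The following sketch is an illustration assuming numpy, using a random unitary matrix built via a QR decomposition; it checks that ρ(g⁻¹)ᵀ equals the entrywise conjugate:

import numpy as np

rng = np.random.default_rng(1)
# Build a random 3x3 unitary matrix via QR of a complex Gaussian matrix.
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(Z)

dual = np.linalg.inv(U).T            # rho*(g) = rho(g^{-1})^T
assert np.allclose(dual, U.conj())   # equals the complex conjugate of U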
The SU(2) and SU(3) cases
In the representation theory of SU(2), the dual of each irreducible representation does turn out to be isomorphic to the representation. But for the representations of SU(3), the dual of the irreducible representation with label (m₁, m₂) is the irreducible representation with label (m₂, m₁). In particular, the standard three-dimensional representation of SU(3) (with highest weight (1, 0)) is not isomorphic to its dual. In the theory of quarks in the physics literature, the standard representation and its dual are called "3" and "3̄."
General semisimple Lie algebras
More generally, in the representation theory of semisimple Lie algebras (or the closely related representation theory of compact Lie groups), the weights of the dual representation are the negatives of the weights of the original representation. Now, for a given Lie algebra, if it should happen that the operator −I is an element of the Weyl group, then the weights of every representation are automatically invariant under the map μ ↦ −μ. For such Lie algebras, every irreducible representation will be isomorphic to its dual. (This is the situation for SU(2),
|
https://en.wikipedia.org/wiki/Complex%20conjugate%20of%20a%20vector%20space
|
In mathematics, the complex conjugate of a complex vector space V is a complex vector space V̄, which has the same elements and additive group structure as V but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of V̄ satisfies
α * v = ᾱ · v
where * is the scalar multiplication of V̄ and · is the scalar multiplication of V.
The letter v stands for a vector in V, α is a complex number, and ᾱ denotes the complex conjugate of α.
More concretely, the complex conjugate vector space is the same underlying vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure (different multiplication by i).
Motivation
If V and W are complex vector spaces, a function f : V → W is antilinear if
f(v + w) = f(v) + f(w) and f(αv) = ᾱ f(v) for all v, w ∈ V and all complex α.
With the use of the conjugate vector space V̄, an antilinear map f : V → W can be regarded as an ordinary linear map of type V̄ → W. The linearity is checked by noting that f(α * v) = f(ᾱ v) = α f(v).
Conversely, any linear map defined on V̄ gives rise to an antilinear map on V.
This is the same underlying principle as in defining the opposite ring Rᵒᵖ so that a right R-module can be regarded as a left Rᵒᵖ-module, or that of an opposite category so that a contravariant functor C → D can be regarded as an ordinary functor of type Cᵒᵖ → D.
Complex conjugation functor
A linear map f : V → W gives rise to a corresponding linear map f̄ : V̄ → W̄ which has the same action as f. Note that f̄ preserves scalar multiplication because f̄(α * v) = f(ᾱ v) = ᾱ f(v) = α * f̄(v).
Thus, complex conjugation V ↦ V̄ and f ↦ f̄ define a functor from the category of complex vector spaces to itself.
If V and W are finite-dimensional and the map f is described by the complex matrix A with respect to the bases of V and of W, then the map f̄ is described by the complex conjugate of A with respect to the corresponding bases of V̄ and of W̄.
Structure of the conjugate
The vector spaces V and V̄ have the same dimension over the complex numbers and are therefore isomorphic as complex vector spaces. However, there is no natural isomorphism from V to V̄.
The double conjugate of V is identical to V.
Complex conjugate of a Hilbert space
Given a Hilbert space H (either finite or infinite dimensional), its complex conjugate H̄ is the same vector space as its continuous dual space H′.
There is a one-to-one antilinear correspondence between continuous linear functionals and vectors.
In other words, any continuous linear functional on H is an inner multiplication by some fixed vector, and vice versa.
Thus, the complex conjugate to a vector v, particularly in the finite dimension case, may be denoted as v† (v-dagger, a row vector which is the conjugate transpose to a column vector v).
In quantum mechanics, the conjugate to a ket vector |ψ⟩ is denoted as ⟨ψ| – a bra vector (see bra–ket notation).
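As a small illustration (a numpy sketch with made-up vectors), the bra associated to a ket is its conjugate transpose, and applying it reproduces the Hermitian inner product:

import numpy as np

v = np.array([1 + 2j, 3 - 1j])   # a ket |v>
bra = v.conj()                   # v-dagger: for a 1-D array, conjugation acts as the bra

w = np.array([0.5j, 2.0])
# The functional <v| applied to |w> is the usual Hermitian inner product.
assert np.isclose(bra @ w, np.vdot(v, w))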
See also
conjugate bundle
References
Further reading
Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. . (complex conjugate vector spaces are discussed in section 3.3, pag. 26).
Linear algebra
Vector space
|
https://en.wikipedia.org/wiki/Complex%20conjugate%20representation
|
In mathematics, if G is a group and Π is a representation of it over the complex vector space V, then the complex conjugate representation Π̄ is defined over the complex conjugate vector space V̄ as follows:
Π̄(g) is the conjugate of Π(g) for all g in G.
Π̄ is also a representation, as one may check explicitly.
If 𝔤 is a real Lie algebra and π is a representation of it over the vector space V, then the conjugate representation π̄ is defined over the conjugate vector space V̄ as follows:
π̄(X) is the conjugate of π(X) for all X in 𝔤.
π̄ is also a representation, as one may check explicitly.
If two real Lie algebras have the same complexification, and we have a complex representation of the complexified Lie algebra, their conjugate representations are still going to be different. See spinor for some examples associated with spinor representations of the spin groups and .
If is a *-Lie algebra (a complex Lie algebra with a * operation which is compatible with the Lie bracket),
is the conjugate of for all in
For a finite-dimensional unitary representation, the dual representation and the conjugate representation coincide. This also holds for pseudounitary representations.
See also
Dual representation
Notes
Representation theory of groups
|
https://en.wikipedia.org/wiki/Dave%20Bayer
|
David Allen Bayer (born November 29, 1955) is an American mathematician known for his contributions in algebra and symbolic computation and for his consulting work in the movie industry. He is a professor of mathematics at Barnard College, Columbia University.
Education and career
Bayer was educated at Swarthmore College as an undergraduate, where he attended a course on combinatorial algorithms given by Herbert Wilf. During that semester, Bayer related several original ideas to Wilf on the subject. These contributions were later incorporated into the second edition of Wilf and Albert Nijenhuis' influential book Combinatorial Algorithms, with a detailed acknowledgement by its authors. Bayer subsequently earned his Ph.D. at Harvard University in 1982 under the direction of Heisuke Hironaka with a dissertation entitled The Division Algorithm and the Hilbert Scheme. He joined Columbia University thereafter.
Bayer is the son of Joan and Bryce Bayer, the inventor of the Bayer filter.
Contributions
Bayer has worked in various areas of algebra and symbolic computation, including Hilbert functions, Betti numbers, and linear programming. He has written a number of highly cited papers in these areas with other notable mathematicians, including Bernd Sturmfels, Jeffrey Lagarias, Persi Diaconis, Irena Peeva, and David Eisenbud. Bayer is one of ten individuals cited in the white paper published by the pseudonymous Satoshi Nakamoto describing the technological underpinnings of Bitcoin. He is cited as a co-author, along with Stuart Haber and W. Scott Stornetta, of a paper to improve on a system for tamper-proofing timestamps by incorporating Merkle trees.
Consulting
Bayer was a mathematics consultant for the film A Beautiful Mind, the biopic of John Nash, and also had a cameo as one of the "Pen Ceremony" professors.
References
External links
Bayer's homepage at Columbia University
Dave and Beautiful Math at Swarthmore College Bulletin
Living people
1955 births
Mathematicians from New York (state)
20th-century American mathematicians
21st-century American mathematicians
Swarthmore College alumni
Harvard University alumni
Barnard College faculty
Algebraists
Combinatorialists
Algebraic geometers
Scientists from Rochester, New York
|
https://en.wikipedia.org/wiki/Natterer
|
Natterer may refer to:
People
Christian Natterer (born 1981), German politician
August Natterer (1868–1933), German artist
Frank Natterer (born 1941), German mathematics professor
Johann Natterer (1787–1843), Austrian explorer and naturalist
Other
Natterer's bat, Myotis nattereri
|
https://en.wikipedia.org/wiki/Structure%20implies%20multiplicity
|
In diatonic set theory structure implies multiplicity is a quality of a collection or scale. For collections or scales which have this property, the interval series formed by the shortest distance around a diatonic circle of fifths between members of a series indicates the number of unique interval patterns (adjacently, rather than around the circle of fifths) formed by diatonic transpositions of that series. Structure refers to the intervals in relation to the circle of fifths; multiplicity refers to the number of times each different (adjacent) interval pattern occurs. The property was first described by John Clough and Gerald Myerson in "Variety and Multiplicity in Diatonic Systems" (1985). ()
Structure implies multiplicity is true of the diatonic collection and the pentatonic scale, and any subset.
For example, cardinality equals variety dictates that a three-member diatonic subset of the C major scale, such as C-D-E, transposed to all scale degrees gives three interval patterns: M2-M2, M2-m2, m2-M2.
On the circle of fifths:
C G D A E B F (C)
1 2 1 2 1 2 3
E and C are three notes apart, C and D are two notes apart, D and E two notes apart. Just as the distances around the circle of fifths between the notes of C-D-E form the interval pattern 3-2-2, M2-M2 occurs three times, M2-m2 occurs twice, and m2-M2 occurs twice.
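This count can be reproduced programmatically. The following Python sketch is illustrative (using integer pitch-class notation, which is an assumption of the sketch); it transposes the series C-D-E to every scale degree and tallies the adjacent-interval patterns:

from collections import Counter

major = [0, 2, 4, 5, 7, 9, 11]          # C major pitch classes
n = len(major)

def pattern(degree):
    # Adjacent semitone intervals of the series C-D-E moved to a scale degree.
    notes = [major[(degree + i) % n] for i in range(3)]
    return tuple((notes[i + 1] - notes[i]) % 12 for i in range(2))

counts = Counter(pattern(d) for d in range(n))
print(counts)
# Counter({(2, 2): 3, (2, 1): 2, (1, 2): 2}): M2-M2 x3, M2-m2 x2, m2-M2 x2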
Cardinality equals variety and structure implies multiplicity are true of all collections with Myhill's property or maximal evenness.
References
Further reading
Clough, John and Myerson, Gerald (1985). "Variety and Multiplicity in Diatonic Systems", Journal of Music Theory 29: 249-70.
Agmon, Eytan (1989). "A Mathematical Model of the Diatonic System", Journal of Music Theory 33: 1-25.
Agmon, Eytan (1996). "Coherent Tone-Systems: A Study in the Theory of Diatonicism", Journal of Music Theory 40: 39-59.
Diatonic set theory
|
https://en.wikipedia.org/wiki/Directional%20statistics
|
Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Euclidean space, Rn), axes (lines through the origin in Rn) or rotations in Rn. More generally, directional statistics deals with observations on compact Riemannian manifolds including the Stiefel manifold.
The fact that 0 degrees and 360 degrees are identical angles, so that for example 180 degrees is not a sensible mean of 2 degrees and 358 degrees, provides one illustration that special statistical methods are required for the analysis of some types of data (in this case, angular data). Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), compass directions, dihedral angles in molecules, orientations, rotations and so on.
Circular distributions
Any probability density function (pdf) p(x) on the line can be "wrapped" around the circumference of a circle of unit radius. That is, the pdf of the wrapped variable
θ = x mod 2π, with θ in (−π, π],
is
p_w(θ) = Σ_{k=−∞}^{∞} p(θ + 2πk).
This concept can be extended to the multivariate context by an extension of the simple sum to a number of sums that cover all dimensions in the feature space:
p_w(θ) = Σ_{k₁=−∞}^{∞} ··· Σ_{k_F=−∞}^{∞} p(θ + 2πk₁e₁ + ··· + 2πk_F e_F)
where e_k is the k-th Euclidean basis vector.
The following sections show some relevant circular distributions.
von Mises circular distribution
The von Mises distribution is a circular distribution which, like any other circular distribution, may be thought of as a wrapping of a certain linear probability distribution around the circle. The underlying linear probability distribution for the von Mises distribution is mathematically intractable; however, for statistical purposes, there is no need to deal with the underlying linear distribution. The usefulness of the von Mises distribution is twofold: it is the most mathematically tractable of all circular distributions, allowing simpler statistical analysis, and it is a close approximation to the wrapped normal distribution, which, analogously to the linear normal distribution, is important because it is the limiting case for the sum of a large number of small angular deviations. In fact, the von Mises distribution is often known as the "circular normal" distribution because of its ease of use and its close relationship to the wrapped normal distribution (Fisher, 1993).
The pdf of the von Mises distribution is:
f(θ; μ, κ) = exp(κ cos(θ − μ)) / (2π I₀(κ))
where I₀ is the modified Bessel function of order 0.
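For a quick numerical check (a sketch assuming scipy; the parameter values are arbitrary), the formula above agrees with scipy.stats.vonmises:

import numpy as np
from scipy.stats import vonmises
from scipy.special import i0

kappa, mu = 2.0, 0.0
theta = np.linspace(-np.pi, np.pi, 5)

# Direct evaluation of the density formula above.
pdf_direct = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))
assert np.allclose(pdf_direct, vonmises.pdf(theta, kappa, loc=mu))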
Circular uniform distribution
The probability density function (pdf) of the circular uniform distribution is given by
U(θ) = 1/(2π).
It can also be thought of as the κ = 0 case of the von Mises distribution above.
Wrapped normal distribution
The pdf of the wrapped normal distribution (WN) is:
WN(θ; μ, σ) = (1/(σ√(2π))) Σ_{k=−∞}^{∞} exp(−(θ − μ + 2πk)² / (2σ²))
where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively. Equivalently, the pdf may be expressed through the Jacobi theta function evaluated at the nome q = e^(−σ²/2) and z = e^(i(θ−μ)).
Wrapped Cauchy distribution
The pdf of the wrapped Cauchy distribution (WC) is:
WC(θ; θ₀, γ) = Σ_{n=−∞}^{∞} γ / (π(γ² + (θ + 2πn − θ₀)²)) = (1/(2π)) sinh γ / (cosh γ − cos(θ − θ₀))
where γ is the scale factor and θ₀ is the peak position.
|
https://en.wikipedia.org/wiki/Bessel%27s%20inequality
|
In mathematics, especially functional analysis, Bessel's inequality is a statement about the coefficients of an element in a Hilbert space with respect to an orthonormal sequence. The inequality was derived by F.W. Bessel in 1828.
Let H be a Hilbert space, and suppose that e₁, e₂, ... is an orthonormal sequence in H. Then, for any x in H one has
Σ_{k=1}^{∞} |⟨x, e_k⟩|² ≤ ‖x‖²
where ⟨·,·⟩ denotes the inner product in the Hilbert space H. If we define the infinite sum
x′ = Σ_{k=1}^{∞} ⟨x, e_k⟩ e_k,
consisting of the "infinite sum" of the vector resolutes of x in the directions e_k, Bessel's inequality tells us that this series converges. One can think of it as saying that there exists a vector x′ that can be described in terms of the potential basis e₁, e₂, ...
For a complete orthonormal sequence (that is, for an orthonormal sequence that is a basis), we have Parseval's identity, which replaces the inequality with an equality (and consequently x′ with x).
Bessel's inequality follows from the identity
0 ≤ ‖x − Σ_{k=1}^{n} ⟨x, e_k⟩ e_k‖² = ‖x‖² − Σ_{k=1}^{n} |⟨x, e_k⟩|²,
which holds for any natural n.
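The inequality is easy to test numerically. The following numpy sketch is illustrative: it draws a random orthonormal sequence from a QR factorization (an arbitrary construction, not from the article) and checks the inequality in R¹⁰:

import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))
e = Q[:, :4]                       # an orthonormal sequence of 4 vectors in R^10
x = rng.standard_normal(10)

coeffs = e.T @ x                   # the coefficients <x, e_k>
assert coeffs @ coeffs <= x @ x    # Bessel: sum |<x, e_k>|^2 <= ||x||^2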
See also
Cauchy–Schwarz inequality
Parseval's theorem
References
External links
Bessel's Inequality the article on Bessel's Inequality on MathWorld.
Hilbert spaces
Inequalities
|
https://en.wikipedia.org/wiki/Generated%20collection
|
In diatonic set theory, a generated collection is a collection or scale formed by repeatedly adding a constant interval in integer notation, the generator, also known as an interval cycle, around the chromatic circle until a complete collection or scale is formed. All scales with the deep scale property can be generated by any interval coprime with (in twelve-tone equal temperament) twelve. (Johnson, 2003, p. 83)
The C major diatonic collection can be generated by adding a cycle of perfect fifths (C7) starting at F: F-C-G-D-A-E-B = C-D-E-F-G-A-B. Using integer notation and modulo 12: 5 + 7 = 0, 0 + 7 = 7, 7 + 7 = 2, 2 + 7 = 9, 9 + 7 = 4, 4 + 7 = 11.
The C major scale could also be generated using a cycle of perfect fourths (C5), as 12 minus any number coprime with twelve is also coprime with twelve: 12 − 7 = 5. B-E-A-D-G-C-F.
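The generation procedure is short to express in code. The following Python sketch is illustrative, reproducing the C major collection from the generator 7 (a perfect fifth) starting at F = 5 in integer notation:

generator = 7                       # a perfect fifth in semitones
start = 5                           # F in integer notation
collection = sorted({(start + k * generator) % 12 for k in range(7)})
print(collection)                   # [0, 2, 4, 5, 7, 9, 11] = C major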
A generated collection for which a single generic interval corresponds to the single generator or interval cycle used is a MOS (for "moment of symmetry") or well formed generated collection. For example, the diatonic collection is well formed, for the perfect fifth (the generic interval 4) corresponds to the generator 7. Though not all fifths in the diatonic collection are perfect (B-F is a diminished fifth, tritone, or 6), a well formed generated collection has only one specific interval between scale members (in this case 6) that corresponds to the generic interval (4, a fifth) but not to the generator (7). The major and minor pentatonic scales are also well formed. (Johnson, 2003, p. 83)
The properties of generatedness and well-formedness were described by Norman Carey and David Clampitt in "Aspects of Well-Formed Scales" (1989). (Johnson, 2003, p. 151.) In earlier (1975) work, the theoretician Erv Wilson defined the properties of the idea and called such a scale a MOS, an acronym for "Moment of Symmetry". While unpublished until it appeared online in 1999, this paper was widely distributed and well known throughout the microtonal music community, which adopted the term. The paper also remains more inclusive of further developments of the concept.
For instance, the three-gap theorem implies that every generated collection has at most three different steps, the intervals between adjacent tones in the collection (Carey 2007).
A degenerate well-formed collection is a scale in which the generator and the interval required to complete the circle or return to the initial note are equivalent and include all scales with equal notes, such as the whole-tone scale. (Johnson, 2003, p. 158, n. 14)
A bisector is a more general concept used to create collections that cannot be generated but includes all collections which can be generated.
See also
833 cents scale
Cyclic group
Distance model
Pythagorean tuning
References
Sources
Carey, Norman and Clampitt, David (1989). "Aspects of Well-Formed Scales", Music Theory Spectrum 11: 187–206.
Clough, Engebretsen, and Kochavi. "Scales, Sets, and Interval Cycles", 79.
Johnson, Timothy (2003). Foundations of Diat
|
https://en.wikipedia.org/wiki/Diatonic%20set%20theory
|
Diatonic set theory is a subdivision or application of musical set theory which applies the techniques and analysis of discrete mathematics to properties of the diatonic collection such as maximal evenness, Myhill's property, well formedness, the deep scale property, cardinality equals variety, and structure implies multiplicity. The name is something of a misnomer as the concepts involved usually apply much more generally, to any periodically repeating scale.
Music theorists working in diatonic set theory include Eytan Agmon, Gerald J. Balzano, Norman Carey, David Clampitt, John Clough, Jay Rahn, and mathematician Jack Douthett. A number of key concepts were first formulated by David Rothenberg (the Rothenberg propriety), who published in the journal Mathematical Systems Theory, and Erv Wilson, working entirely outside of the academic world.
See also
Bisector
Diatonic and chromatic
Generic and specific intervals
Further reading
Balzano, Gerald, "The Pitch Set as a Level of Description for Studying Musical Pitch Perception", Music, Mind and Brain, the Neurophysiology of Music, Manfred Clynes, ed., Plenum Press, 1982.
Carey, Norman and Clampitt, David (1996), "Self-Similar Pitch Structures, Their Duals, and Rhythmic Analogues", Perspectives of New Music 34, no. 2: 62–87.
Grady, Kraig, (2007), "An Introduction to the Moments of Symmetry", Wilson Archives, anaphoria.com
Johnson, Timothy (2003), Foundations of Diatonic Theory: A Mathematically Based Approach to Music Fundamentals, Key College Publishing. .
Precursors
Rahn, Jay (1977), "Some Recurrent Features of Scales", In Theory Only 2, nos. 11–12: 43–52.
Rothenberg, David, (1977), "A Model for Pattern Perception with Musical Applications", Mathematical Systems Theory, part I: 11, 199–234 ; part II: 353–372 ; part III: (1978) 12, 73–101 .
Wilson, Erv (1975), "Handwritten letter to John Chalmers pertaining to 'Moments of Symmetry'/'Tanabe Cycle', 26 April 1975, 27 pages, anaphoria.com
Musicology
|
https://en.wikipedia.org/wiki/Generic%20and%20specific%20intervals
|
In diatonic set theory a generic interval is the number of scale steps between notes of a collection or scale. The largest generic interval is one less than the number of scale members. (Johnson 2003, p. 26)
A specific interval is the clockwise distance between pitch classes on the chromatic circle (interval class), in other words the number of half steps between notes. The largest specific interval is one less than the number of "chromatic" pitches. In twelve tone equal temperament the largest specific interval is 11. (Johnson 2003, p. 26)
In the diatonic collection the generic interval is one less than the corresponding diatonic interval:
Adjacent intervals, seconds, are 1
Thirds = 2
Fourths = 3
Fifths = 4
Sixths = 5
Sevenths = 6
The largest generic interval in the diatonic scale is thus 7 − 1 = 6.
Myhill's property
Myhill's property is the quality of musical scales or collections with exactly two specific intervals for every generic interval, which thus also have the properties of cardinality equals variety, structure implies multiplicity, and being a well formed generated collection. In other words, each generic interval can be made from one of two possible different specific intervals. For example, there are major or minor and perfect or augmented/diminished variants of all the diatonic intervals.
The diatonic and pentatonic collections possess Myhill's property. The concept appears to have been first described by John Clough and Gerald Myerson and named after their associate the mathematician John Myhill. (Johnson 2003, p. 106, 158)
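A brute-force check of Myhill's property is straightforward. The following Python sketch is illustrative (integer pitch-class notation is an assumption of the sketch); for each generic interval in the diatonic collection it lists the specific intervals that can realize it:

major = [0, 2, 4, 5, 7, 9, 11]
n = len(major)

# For each generic interval, collect the specific intervals it can realize.
for generic in range(1, n):
    specifics = {(major[(i + generic) % n] - major[i]) % 12 for i in range(n)}
    print(generic, sorted(specifics))
# Every generic interval yields exactly two specific intervals (Myhill's property).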
Sources
Johnson, Timothy (2003). Foundations of Diatonic Theory: A Mathematically Based Approach to Music Fundamentals. Key College Publishing. .
Further reading
Clough, Engebretsen, and Kochavi. "Scales, Sets, and Interval Cycles": 78–84.
Diatonic set theory
Intervals (music)
|
https://en.wikipedia.org/wiki/Kerr%E2%80%93Newman%20metric
|
The Kerr–Newman metric is the most general asymptotically flat, stationary solution of the Einstein–Maxwell equations in general relativity that describes the spacetime geometry in the region surrounding an electrically charged, rotating mass. It generalizes the Kerr metric by taking into account the field energy of an electromagnetic field, in addition to describing rotation. It is one of a large number of various different electrovacuum solutions, that is, of solutions to the Einstein–Maxwell equations which account for the field energy of an electromagnetic field. Such solutions do not include any electric charges other than that associated with the gravitational field, and are thus termed electrovacuum solutions.
This solution has not been especially useful for describing astrophysical phenomena, because observed astronomical objects do not possess an appreciable net electric charge, and the magnetic fields of stars arise through other processes. As a model of realistic black holes, it omits any description of infalling baryonic matter, light (null dusts) or dark matter, and thus provides at best an incomplete description of stellar mass black holes and active galactic nuclei. The solution is of theoretical and mathematical interest as it does provide a fairly simple cornerstone for further exploration.
The Kerr–Newman solution is a special case of more general exact solutions of the Einstein–Maxwell equations with non-zero cosmological constant.
History
In December 1963 Kerr and Schild found the Kerr–Schild metrics that gave all Einstein spaces that are exact linear perturbations of Minkowski space. In early 1964 Roy Kerr looked for all Einstein–Maxwell spaces with this same property. By February 1964 the special case where the Kerr–Schild spaces were charged (this includes the Kerr–Newman solution) was known, but the general case, where the special directions were not geodesics of the underlying Minkowski space, proved very difficult. The problem was given to George Debney to try to solve but was abandoned by March 1964. About this time Ezra T. Newman found the solution for charged Kerr by guesswork.
In 1965, Ezra "Ted" Newman found the axisymmetric solution of Einstein's field equation for a black hole which is both rotating and electrically charged. This formula for the metric tensor is called the Kerr–Newman metric. It is a generalisation of the Kerr metric for an uncharged spinning point-mass, which had been discovered by Roy Kerr two years earlier.
Four related solutions may be summarized by the following table:

                     Non-rotating (J = 0)     Rotating (J ≠ 0)
Uncharged (Q = 0)    Schwarzschild            Kerr
Charged (Q ≠ 0)      Reissner–Nordström       Kerr–Newman

where Q represents the body's electric charge and J represents its spin angular momentum.
Overview of the solution
Newman's result represents the simplest stationary, axisymmetric, asymptotically flat solution of Einstein's equations in the presence of an electromagnetic field in four dimensions. It is sometimes referred to as an "electrovacuum" solution of Einstein's equations.
Any Kerr–Newman source has its rotation axis aligned with its magnetic axis.
|
https://en.wikipedia.org/wiki/Visibility%20graph
|
In computational geometry and robot motion planning, a visibility graph is a graph of intervisible locations, typically for a set of points and obstacles in the Euclidean plane. Each node in the graph represents a point location, and each edge represents a visible connection between them. That is, if the line segment connecting two locations does not pass through any obstacle, an edge is drawn between them in the graph. When the set of locations lies in a line, this can be understood as an ordered series. Visibility graphs have therefore been extended to the realm of time series analysis.
Applications
Visibility graphs may be used to find Euclidean shortest paths among a set of polygonal obstacles in the plane: the shortest path between two obstacles follows straight line segments except at the vertices of the obstacles, where it may turn, so the Euclidean shortest path is the shortest path in a visibility graph that has as its nodes the start and destination points and the vertices of the obstacles. Therefore, the Euclidean shortest path problem may be decomposed into two simpler subproblems: constructing the visibility graph, and applying a shortest path algorithm such as Dijkstra's algorithm to the graph. For planning the motion of a robot that has non-negligible size compared to the obstacles, a similar approach may be used after expanding the obstacles to compensate for the size of the robot. The visibility graph method for Euclidean shortest paths has been attributed to 1969 research by Nils Nilsson on motion planning for Shakey the robot; a 1973 description of this method by Russian mathematicians M. B. Ignat'yev, F. M. Kulakov, and A. M. Pokrovskiy has also been cited.
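The overall pipeline can be sketched in a few dozen lines. The following simplified Python example is illustrative, not a production implementation: obstacles are modeled as opaque line segments rather than polygons, and all names and coordinates are hypothetical. It builds the visibility graph by brute force and runs Dijkstra's algorithm on it:

import heapq
from itertools import combinations

def ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p, q, a, b):
    # Proper intersection test (shared endpoints do not count as crossing).
    d1, d2 = ccw(a, b, p), ccw(a, b, q)
    d3, d4 = ccw(p, q, a), ccw(p, q, b)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visibility_graph(points, obstacles):
    # Connect every pair of points whose segment crosses no obstacle segment.
    graph = {p: [] for p in points}
    for p, q in combinations(points, 2):
        if not any(segments_cross(p, q, a, b) for a, b in obstacles):
            d = ((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5
            graph[p].append((q, d))
            graph[q].append((p, d))
    return graph

def shortest_path_length(graph, src, dst):
    # Plain Dijkstra over the weighted visibility graph.
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# One wall between start and goal forces a detour through its endpoints.
pts = [(0, 0), (4, 0), (2, -2), (2, 2)]
walls = [((2, -1.5), (2, 1.5))]
g = visibility_graph(pts + [p for w in walls for p in w], walls)
print(shortest_path_length(g, (0, 0), (4, 0)))   # 5.0, via a wall endpoint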
Visibility graphs may also be used to calculate the placement of radio antennas, or as a tool used within architecture and urban planning through visibility graph analysis.
The visibility graph of a set of locations that lie in a line can be interpreted as a graph-theoretical representation of a time series. This particular case builds a bridge between time series, dynamical systems and graph theory.
Characterization
The visibility graph of a simple polygon has the polygon's vertices as its point locations, and the exterior of the polygon as the only obstacle. Visibility graphs of simple polygons must be Hamiltonian graphs: the boundary of the polygon forms a Hamiltonian cycle in the visibility graph. It is known that not all visibility graphs induce a simple polygon. However, an efficient algorithmic characterization of the visibility graphs of simple polygons remains unknown. These graphs do not fall into many known families of well-structured graphs: they might not be perfect graphs, circle graphs, or chordal graphs. An exception to this phenomenon is that the visibility graphs of simple polygons are cop-win graphs.
Related problems
The art gallery problem is the problem of finding a small set of points such that all other non-obstacle points are visible from this set. Certain
|
https://en.wikipedia.org/wiki/Iterated%20function
|
In mathematics, an iterated function is a function (that is, a function from some set to itself) which is obtained by composing another function with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. For example, f ∘ f ∘ f = f³ denotes the third iterate of f,
with ∘ the circle-shaped symbol of function composition.
Iterated functions are objects of study in computer science, fractals, dynamical systems, mathematics and renormalization group physics.
Definition
The formal definition of an iterated function on a set X follows.
Let X be a set and f : X → X be a function.
Define f^n as the n-th iterate of f (a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel), where n is a non-negative integer, by
f^0 := id_X
and
f^(n+1) := f ∘ f^n,
where id_X is the identity function on X and f ∘ g denotes function composition; that is, (f ∘ g)(x) = f(g(x)), which is always associative.
Because the notation f^n may refer to both iteration (composition) of the function f or exponentiation of the function (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f^∘n for the n-th iterate of the function f, as in, for example, f^∘3 meaning f(f(f(x))). For the same purpose, f^[n] was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested ⁿf instead.
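In code, the n-th iterate is just an n-fold composition. A minimal Python sketch (the function names are hypothetical):

def iterate(f, n):
    # Return the n-th iterate of f under composition (n >= 0).
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x
assert iterate(double, 3)(5) == 40   # f^3(5) = 2^3 * 5
# Abelian property below: f^m composed with f^n equals f^(m+n).
assert iterate(double, 2)(iterate(double, 4)(1)) == iterate(double, 6)(1)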
Abelian property and iteration sequences
In general, the following identity holds for all non-negative integers m and n,
f^m ∘ f^n = f^n ∘ f^m = f^(m+n).
This is structurally identical to the property of exponentiation that a^m a^n = a^(m+n).
In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_(mn)(x), since T_n(x) = cos(n arccos x).
The relation (f^m)^n = (f^n)^m = f^(mn) also holds, analogous to the property of exponentiation that (a^m)^n = a^(mn).
The sequence of functions is called a Picard sequence, named after Charles Émile Picard.
For a given x in X, the sequence of values f^n(x) is called the orbit of x.
If f^(n+m)(x) = f^n(x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
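Floyd's tortoise-and-hare algorithm is a standard solution to the cycle detection problem. The following Python sketch is an illustration, with an arbitrary example map; it returns the index of the first periodic point and the period:

def floyd(f, x0):
    # Return (mu, lam): start index and period of the eventual cycle
    # in the orbit x0, f(x0), f(f(x0)), ...
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                 # meet somewhere inside the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0                    # locate the first periodic point
    while tortoise != hare:
        tortoise, hare, mu = f(tortoise), f(hare), mu + 1
    lam, hare = 1, f(tortoise)              # measure the period
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return mu, lam

print(floyd(lambda x: (x * x + 1) % 255, 3))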
Fixed points
If f(x) = x for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.
There are several techniques for convergence acceleration of the sequences prod
|
https://en.wikipedia.org/wiki/Conjugate%20gradient%20method
|
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4, and extensively researched it.
The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.
Description of the problem addressed by conjugate gradients
Suppose we want to solve the system of linear equations
for the vector x, where the known n × n matrix A is symmetric (i.e., Aᵀ = A), positive-definite (i.e., xᵀAx > 0 for all non-zero vectors x in Rⁿ), and real, and b is known as well. We denote the unique solution of this system by x⋆.
Derivation as a direct method
The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. Despite differences in their approaches, these derivations share a common topic—proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method.
We say that two non-zero vectors u and v are conjugate (with respect to A) if
uᵀAv = 0.
Since A is symmetric and positive-definite, the left-hand side defines an inner product
⟨u, v⟩_A := uᵀAv.
Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u. Suppose that
P = {p₁, ..., pₙ}
is a set of n mutually conjugate vectors with respect to A, i.e. pᵢᵀApⱼ = 0 for all i ≠ j.
Then P forms a basis for Rⁿ, and we may express the solution x⋆ of Ax = b in this basis:
x⋆ = Σ_{i=1}^{n} αᵢ pᵢ.
Left-multiplying the problem with the vector pₖᵀ yields
pₖᵀb = pₖᵀA x⋆ = Σᵢ αᵢ pₖᵀApᵢ = αₖ pₖᵀApₖ
and so
αₖ = pₖᵀb / pₖᵀApₖ.
This gives the following method for solving the equation Ax = b: find a sequence of n conjugate directions, and then compute the coefficients αₖ.
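The iterative form of the method, introduced in the next section, is compact enough to sketch here. The following Python implementation is a plain, unpreconditioned version (an illustrative sketch with an arbitrary 2×2 example, not the article's own code); it generates the conjugate directions on the fly:

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    # Plain CG for a symmetric positive-definite matrix A.
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                  # residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next direction, conjugate to previous ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # approx [0.0909, 0.6364]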
As an iterative method
If we choose the conjugate vectors carefully, then we may not need all of them to obtain a good approximation to the solution . So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where n is so large that the direct method would take too much time.
We denote the initial guess for x⋆ by x₀ (we can assume without loss of generality that x₀ = 0; otherwise, consider the system Az = b − Ax₀ instead).
|
https://en.wikipedia.org/wiki/Art%20gallery%20problem
|
The art gallery problem or museum problem is a well-studied visibility problem in computational geometry. It originates from the following real-world problem:
"In an art gallery, what is the minimum number of guards who together can observe the whole gallery?"
In the geometric version of the problem, the layout of the art gallery is represented by a simple polygon and each guard is represented by a point in the polygon. A set S of points is said to guard a polygon if, for every point p in the polygon, there is some q ∈ S such that the line segment between p and q does not leave the polygon.
The art gallery problem can be applied in several domains such as in robotics, when artificial intelligences (AI) need to execute movements depending on their surroundings. Other domains, where this problem is applied, are in image editing, lighting problems of a stage or installation of infrastructures for the warning of natural disasters.
Two dimensions
There are numerous variations of the original problem that are also referred to as the art gallery problem. In some versions guards are restricted to the perimeter, or even to the vertices of the polygon. Some versions require only the perimeter or a subset of the perimeter to be guarded.
Solving the version in which guards must be placed on vertices and only vertices need to be guarded is equivalent to solving the dominating set problem on the visibility graph of the polygon.
Chvátal's art gallery theorem
Chvátal's art gallery theorem, named after Václav Chvátal, gives an upper bound on the minimal number of guards. It states:
"To guard a simple polygon with vertices, guards are always sufficient and sometimes necessary."
History
The question about how many vertices/watchmen/guards were needed was posed to Chvátal by Victor Klee in 1973. Chvátal proved it shortly thereafter. Chvátal's proof was later simplified by Steve Fisk, via a 3-coloring argument. Chvátal's approach is more geometrical, whereas Fisk's uses well-known results from graph theory.
Fisk's short proof
Steve Fisk's proof is so short and elegant that it was chosen for inclusion in Proofs from THE BOOK.
The proof goes as follows:
First, the polygon is triangulated (without adding extra vertices), which is possible because every simple polygon admits a triangulation. The vertices of the resulting triangulation graph may be 3-colored. Clearly, under a 3-coloring, every triangle must have all three colors. The vertices with any one color form a valid guard set, because every triangle of the polygon is guarded by its vertex with that color. Since the three colors partition the n vertices of the polygon, the color with the fewest vertices defines a valid guard set with at most ⌊n/3⌋ guards.
Illustration of the proof
To illustrate the proof, we consider the polygon below. The first step is to triangulate the polygon (see Figure 1). Then, one applies a proper -colouring (Figure 2) and observes that there are
|
https://en.wikipedia.org/wiki/Degree%20matrix
|
In the mathematical field of algebraic graph theory, the degree matrix of an undirected graph is a diagonal matrix which contains information about the degree of each vertex—that is, the number of edges attached to each vertex. It is used together with the adjacency matrix to construct the Laplacian matrix of a graph: the Laplacian matrix is the difference of the degree matrix and the adjacency matrix.
Definition
Given a graph G = (V, E) with |V| = n, the degree matrix D for G is an n × n diagonal matrix defined as
D[i, j] = deg(vᵢ) if i = j, and 0 otherwise,
where the degree deg(vᵢ) of a vertex counts the number of times an edge terminates at that vertex. In an undirected graph, this means that each loop increases the degree of a vertex by two. In a directed graph, the term degree may refer either to indegree (the number of incoming edges at each vertex) or outdegree (the number of outgoing edges at each vertex).
Example
The following undirected graph has a 6x6 degree matrix with values:
Note that in the case of undirected graphs, an edge that starts and ends in the same node increases the corresponding degree value by 2 (i.e. it is counted twice).
Properties
The degree matrix of a k-regular graph has a constant diagonal of k.
According to the degree sum formula, the trace of the degree matrix is twice the number of edges of the considered graph.
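Both facts are easy to confirm numerically. A small numpy sketch (the path graph used here is an arbitrary example, not the article's figure):

import numpy as np

# Adjacency matrix of an undirected path graph on 4 vertices: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

D = np.diag(A.sum(axis=1))                  # degree matrix
L = D - A                                   # Laplacian matrix
assert np.trace(D) == 2 * (A.sum() // 2)    # trace = twice the number of edges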
References
Algebraic graph theory
Matrices
|
https://en.wikipedia.org/wiki/Bidiagonal%20matrix
|
In mathematics, a bidiagonal matrix is a banded matrix with non-zero entries along the main diagonal and either the diagonal above or the diagonal below. This means there are exactly two non-zero diagonals in the matrix.
When the diagonal above the main diagonal has the non-zero entries the matrix is upper bidiagonal. When the diagonal below the main diagonal has the non-zero entries the matrix is lower bidiagonal.
For example, the following matrix is upper bidiagonal:
and the following matrix is lower bidiagonal:
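Both shapes are simplest to see constructed in code. A numpy sketch (the entry values are arbitrary illustrations):

import numpy as np

d = [1, 2, 3, 4]                         # main diagonal
e = [5, 6, 7]                            # off-diagonal entries

upper = np.diag(d) + np.diag(e, k=1)     # upper bidiagonal matrix
lower = np.diag(d) + np.diag(e, k=-1)    # lower bidiagonal matrix
print(upper)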
Usage
One variant of the QR algorithm starts with reducing a general matrix into a bidiagonal one,
and the singular value decomposition (SVD) uses this method as well.
Bidiagonalization
Bidiagonalization allows guaranteed accuracy when using floating-point arithmetic to compute singular values.
See also
List of matrices
LAPACK
Hessenberg form – The Hessenberg form is similar, but has more non-zero diagonal lines than 2.
References
Stewart, G. W. (2001) Matrix Algorithms, Volume II: Eigensystems. Society for Industrial and Applied Mathematics. .
External links
High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
Linear algebra
Sparse matrices
|
https://en.wikipedia.org/wiki/Statistics%20Norway
|
Statistics Norway (Norwegian: Statistisk sentralbyrå, abbreviated to SSB) is the Norwegian statistics bureau. It was established in 1876.
Relying on a staff of about 1,000, Statistics Norway publishes about 1,000 new statistical releases every year on its web site. All releases are published in both Norwegian and English. In addition, a number of edited publications are published, and all are available on the web site for free.
As the central Norwegian office for official government statistics, Statistics Norway provides the public and government with extensive research and analysis activities. It is administratively placed under the Ministry of Finance but operates independently of all government agencies. Statistics Norway has a board appointed by the government. It relies extensively on data from registers, but also collects data from surveys and questionnaires, including from cities and municipalities.
History
Statistics Norway was originally established in 1876. The Statistics Act of 1989 provides the legal framework for Statistics Norway's activities.
Leadership
The agency is led by a Director General.
Geir Axelsen, Director General, (May 2018 - incumbent)
Birger Vikøren, acting Director General (autumn 2017 - May 2018)
Christine Meyer, Director General ( - autumn 2017). In the autumn of 2017, Meyer resigned from that position after Finance Minister Siv Jensen declared that Meyer no longer had her confidence. The conflict concerned the question of how the Research Section should be organised.
References
External links
Government agencies of Norway
1876 establishments in Norway
Government agencies established in 1876
Norway
|
https://en.wikipedia.org/wiki/Nick%20Katz
|
Nicholas Michael Katz (born December 7, 1943) is an American mathematician, working in arithmetic geometry, particularly on p-adic methods, monodromy and moduli problems, and number theory. He is currently a professor of Mathematics at Princeton University and an editor of the journal Annals of Mathematics.
Life and work
Katz graduated from Johns Hopkins University (BA 1964) and from Princeton University, where in 1965 he received his master's degree and in 1966 he received his doctorate under supervision of Bernard Dwork with thesis On the Differential Equations Satisfied by Period Matrices. After that, at Princeton, he was an instructor, an assistant professor in 1968, associate professor in 1971 and professor in 1974. From 2002 to 2005 he was the chairman of faculty there. He was also a visiting scholar at the University of Minnesota, the University of Kyoto, Paris VI, Orsay Faculty of Sciences, the Institute for Advanced Study and the IHES. While in France, he adapted methods of scheme theory and category theory to the theory of modular forms. Subsequently, he has applied geometric methods to various exponential sums.
From 1968 to 1969, he was a NATO Postdoctoral Fellow, from 1975 to 1976 and from 1987–1988 Guggenheim Fellow and from 1971 to 1972 Sloan Fellow. In 1970 he was an invited speaker at the International Congress of Mathematicians in Nice (The regularity theorem in algebraic geometry) and in 1978 in Helsinki (p-adic L functions, Serre-Tate local moduli and ratios of solutions of differential equations).
Since 2003 he has been a member of the American Academy of Arts and Sciences, and since 2004 of the National Academy of Sciences. In 2003 he was awarded, together with Peter Sarnak, the Levi L. Conant Prize of the American Mathematical Society (AMS) for the essay "Zeroes of Zeta Functions and Symmetry" in the Bulletin of the American Mathematical Society. Since 2004 he has been an editor of the Annals of Mathematics. In 2023 he received from the AMS the Leroy P. Steele Prize for Lifetime Achievement.
He played a significant role as a sounding-board for Andrew Wiles when Wiles was developing in secret his proof of Fermat's Last Theorem. Mathematician and cryptographer Neal Koblitz was one of Katz's students.
Katz studied, with Sarnak among others, the connection of the eigenvalue distribution of large random matrices of classical groups to the distribution of the distances of the zeros of various L and zeta functions in algebraic geometry. He also studied trigonometric sums (Gauss sums) with algebro-geometric methods.
He introduced the Katz–Lang finiteness theorem.
Writings
Gauss sums, Kloosterman sums, and monodromy groups. Annals of Mathematical Studies, Princeton 1988.
Exponential sums and differential equations. Annals of Mathematical Studies, Princeton 1990. Manuscript with corrections
Rigid Local Systems. Annals of Mathematical Studies, Princeton 1996.
Twisted -functions and Monodromy. Annals of Mathematical Studies, Princeton 2002.
Moments, Monodromy, and Perversity: A Diophantine Perspective. Annals of Mathematical Studies, Princeton 2005.
|
https://en.wikipedia.org/wiki/Band%20matrix
|
In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.
Band matrix
Bandwidth
Formally, consider an n×n matrix A=(ai,j ). If all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k1 and k2:
ai,j = 0 if j < i − k1 or j > i + k2, with k1, k2 ≥ 0,
then the quantities k1 and k2 are called the lower bandwidth and upper bandwidth, respectively. The bandwidth of the matrix is the maximum of k1 and k2; in other words, it is the number k such that ai,j = 0 if |i − j| > k.
Examples
A band matrix with k1 = k2 = 0 is a diagonal matrix
A band matrix with k1 = k2 = 1 is a tridiagonal matrix
For k1 = k2 = 2 one has a pentadiagonal matrix and so on.
Triangular matrices
For k1 = 0, k2 = n−1, one obtains the definition of an upper triangular matrix
similarly, for k1 = n−1, k2 = 0 one obtains a lower triangular matrix.
Upper and lower Hessenberg matrices
Toeplitz matrices when bandwidth is limited.
Block diagonal matrices
Shift matrices and shear matrices
Matrices in Jordan normal form
A skyline matrix, also called a "variable band matrix", is a generalization of the band matrix.
The inverses of Lehmer matrices are constant tridiagonal matrices, and are thus band matrices.
Applications
In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can be viewed as descriptions of the coupling between the problem variables; the banded property corresponds to the fact that variables are not coupled over arbitrarily large distances. Such matrices can be further divided; for instance, banded matrices exist where every element in the band is nonzero. These often arise when discretising one-dimensional problems.
Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse. For instance, a partial differential equation on a square domain (using central differences) will yield a matrix with a bandwidth equal to the square root of the matrix dimension, but inside the band only 5 diagonals are nonzero. Unfortunately, applying Gaussian elimination (or equivalently an LU decomposition) to such a matrix results in the band being filled in by many non-zero elements.
Band storage
Band matrices are usually stored by storing the diagonals in the band; the rest is implicitly zero.
For example, a tridiagonal matrix has bandwidth 1. The 6-by-6 matrix
is stored as the 6-by-3 matrix
A further saving is possible when the matrix is symmetric. For example, consider a symmetric 6-by-6 matrix with an upper bandwidth of 2:
This matrix is stored as the 6-by-3 matrix:
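The same diagonal-ordered storage convention is what LAPACK-style banded solvers consume. The following scipy sketch is an illustration with an arbitrary tridiagonal system; it stores only the three diagonals and solves with them:

import numpy as np
from scipy.linalg import solve_banded

# Diagonal-ordered form for a tridiagonal matrix: row 0 = superdiagonal
# (padded on the left), row 1 = main diagonal, row 2 = subdiagonal (padded right).
ab = np.array([[0.0, 1.0, 1.0, 1.0],
               [4.0, 4.0, 4.0, 4.0],
               [1.0, 1.0, 1.0, 0.0]])
b = np.ones(4)

x = solve_banded((1, 1), ab, b)   # (k1, k2) = (1, 1) for a tridiagonal matrix
print(x)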
Band form of sparse matrices
From a computational point of view, working with band matrices is always preferential to working with similarly dimensioned square matrices. A band matrix can be likened in complexity to a rectangular matrix whose row dimension is equal to the bandwidth of the band matrix. Thus the work invol
|
https://en.wikipedia.org/wiki/Universal%20graph
|
In mathematics, a universal graph is an infinite graph that contains every finite (or at-most-countable) graph as an induced subgraph. A universal graph of this type was first constructed by Richard Rado and is now called the Rado graph or random graph. More recent work
has focused on universal graphs for a graph family F: that is, an infinite graph belonging to F that contains all finite graphs in F. For instance, the Henson graphs are universal in this sense for the i-clique-free graphs.
A universal graph for a family of graphs can also refer to a member of a sequence of finite graphs that contains all graphs in ; for instance, every finite tree is a subgraph of a sufficiently large hypercube graph
so a hypercube can be said to be a universal graph for trees. However it is not the smallest such graph: it is known that there is a universal graph for n-vertex trees, with only n vertices and O(n log n) edges, and that this is optimal. A construction based on the planar separator theorem can be used to show that n-vertex planar graphs have universal graphs with O(n^(3/2)) edges, and that bounded-degree planar graphs have universal graphs with O(n log n) edges. It is also possible to construct universal graphs for planar graphs that have n^(1+o(1)) vertices.
Sumner's conjecture states that tournaments are universal for polytrees, in the sense that every tournament with 2n − 2 vertices contains every polytree with n vertices as a subgraph.
A family F of graphs has a universal graph of polynomial size, containing every n-vertex graph as an induced subgraph, if and only if it has an adjacency labelling scheme in which vertices may be labeled by O(log n)-bit bitstrings such that an algorithm can determine whether two vertices are adjacent by examining their labels. For, if a universal graph of this type exists, the vertices of any graph in F may be labeled by the identities of the corresponding vertices in the universal graph, and conversely if a labeling scheme exists then a universal graph may be constructed having a vertex for every possible label.
In older mathematical terminology, the phrase "universal graph" was sometimes used to denote a complete graph.
The notion of universal graph has been adapted and used for solving mean payoff games.
References
External links
The panarborial formula, "Theorem of the Day" concerning universal graphs for trees
Graph families
Infinite graphs
|
https://en.wikipedia.org/wiki/Ludwig%20Schl%C3%A4fli
|
Ludwig Schläfli (15 January 1814 – 20 March 1895) was a Swiss mathematician, specialising in geometry and complex analysis (at the time called function theory) who was one of the key figures in developing the notion of higher-dimensional spaces. The concept of multidimensionality is pervasive in mathematics, has come to play a pivotal role in physics, and is a common element in science fiction.
Life and career
Youth and education
Ludwig spent most of his life in Switzerland. He was born in Grasswil (now part of Seeberg), his mother's hometown. The family then moved to the nearby Burgdorf, where his father worked as a tradesman. His father wanted Ludwig to follow in his footsteps, but Ludwig was not cut out for practical work.
In contrast, because of his mathematical gifts, he was allowed to attend the Gymnasium in Bern in 1829. By that time he was already learning differential calculus from Abraham Gotthelf Kästner's Mathematische Anfangsgründe der Analysis des Unendlichen (1761). In 1831 he transferred to the Akademie in Bern for further studies. By 1834 the Akademie had become the new Universität Bern, where he started studying theology.
Teaching
After graduating in 1836, he was appointed a secondary school teacher in Thun. He stayed there until 1847, spending his free time studying mathematics and botany while attending the university in Bern once a week.
A turning point in his life came in 1843. Schläfli had planned to visit Berlin and become acquainted with its mathematical community, especially Jakob Steiner, a well known Swiss mathematician. But unexpectedly Steiner showed up in Bern and they met. Not only was Steiner impressed by Schläfli's mathematical knowledge, he was also very interested in Schläfli's fluency in Italian and French.
Steiner proposed that Schläfli assist his Berlin colleagues Carl Gustav Jacob Jacobi, Peter Gustav Lejeune Dirichlet, Carl Wilhelm Borchardt and himself as an interpreter on a forthcoming trip to Italy. Steiner sold this idea to his friends in the following way, which indicates that Schläfli must have been somewhat clumsy at daily affairs:
... während er den Berliner Freunden den neugeworbenen Reisegefaehrten durch die Worte anpries, der sei ein ländlicher Mathematiker bei Bern, für die Welt ein Esel, aber Sprachen lerne er wie ein Kinderspiel, den wollten sie als Dolmetscher mit sich nehmen. [ADB]
English translation:
... while he (Steiner) praised/recommended the new travel companion to his Berlin friends with the words that he (Schläfli) was a provincial mathematician working near Bern, an 'ass for the world' (i.e., not very practical), but that he learned languages like child's play, and that they should take him with them as a translator.
Schläfli accompanied them to Italy, and benefited much from the trip. They stayed for more than six months, during which time Schläfli even translated some of the others' mathematical works into Italian.
Later life
Schläfli kept up a correspondence with Steiner ti
|
https://en.wikipedia.org/wiki/Gotthold%20Eisenstein
|
Ferdinand Gotthold Max Eisenstein (16 April 1823 – 11 October 1852) was a German mathematician. He specialized in number theory and analysis, and proved several results that eluded even Gauss. Like Galois and Abel before him, Eisenstein died before the age of 30. He was born and died in Berlin, Prussia.
Early life
His parents, Johann Konstantin Eisenstein and Helene Pollack, were of Jewish descent and converted to Protestantism prior to his birth. From an early age, he demonstrated talent in mathematics and music. As a young child he learned to play piano, and he continued to play and compose for piano throughout his life.
He suffered various health problems throughout his life, including meningitis as an infant, a disease that took the lives of all five of his brothers and sisters. In 1837, at the age of 14, he enrolled at Friedrich Wilhelm Gymnasium, and soon thereafter at Friedrich Werder Gymnasium in Berlin. His teachers recognized his talents in mathematics, but by 15 years of age he had already learned all the material taught at the school. He then began to study differential calculus from the works of Leonhard Euler and Joseph-Louis Lagrange.
At 17, still a student, Eisenstein began to attend classes given by Peter Gustav Lejeune Dirichlet and others at the University of Berlin. In 1842, before taking his final exams, he traveled with his mother to England, to search for his father. In 1843 he met William Rowan Hamilton in Dublin, who gave him a copy of his book on Niels Henrik Abel's proof of the impossibility of solving fifth-degree polynomials, a work that would stimulate Eisenstein's interest in mathematical research.
Five remarkable years
In 1843 Eisenstein returned to Berlin, where he passed his graduation exams and enrolled in the University the following autumn. In January 1844 he had already presented his first work to the Berlin Academy, on cubic forms in two variables. The same year he met for the first time with Alexander von Humboldt, who would later become Eisenstein's patron. Humboldt managed to find grants from the King, the government of Prussia, and the Berlin academy to compensate for Eisenstein's extreme poverty. The money, always late and grudgingly given, was earned in full measure by Eisenstein: in 1844 alone he published over 23 papers and two problems in Crelle's Journal, including two proofs of the law of quadratic reciprocity, and the analogous laws of cubic reciprocity and quartic reciprocity.
In June 1844 Eisenstein visited Carl Friedrich Gauss in Göttingen. In 1845, Kummer saw to it that he received an honorary doctorate at the University of Breslau. Jacobi also encouraged the distinction, but later relations between Jacobi and Eisenstein were always rocky, due primarily to a disagreement over the order of discoveries made in 1846. In 1847 Eisenstein habilitated at the University of Berlin, and he began to teach there. Bernhard Riemann attended his classes on elliptic functions.
Imprisonment and death
In
|
https://en.wikipedia.org/wiki/Hankel%20transform
|
In mathematics, the Hankel transform expresses any given function f(r) as the weighted sum of an infinite number of Bessel functions of the first kind . The Bessel functions in the sum are all of the same order ν, but differ in a scaling factor k along the r axis. The necessary coefficient of each Bessel function in the sum, as a function of the scaling factor k constitutes the transformed function. The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval.
Definition
The Hankel transform of order ν of a function f(r) is given by
F_ν(k) = ∫_0^∞ f(r) J_ν(kr) r dr,
where J_ν is the Bessel function of the first kind of order ν with ν ≥ −1/2. The inverse Hankel transform of F_ν(k) is defined as
f(r) = ∫_0^∞ F_ν(k) J_ν(kr) k dk,
which can be readily verified using the orthogonality relationship described below.
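The definition can be checked numerically. The following is a minimal Python sketch, not part of the standard presentation: the helper name hankel_transform, the truncation radius r_max, the sample count, and the test pair f(r) = exp(−r²/2) ↔ F_0(k) = exp(−k²/2) (a known self-reciprocal example of the order-0 transform) are all illustrative assumptions.
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_nu

def hankel_transform(f, nu, ks, r_max=40.0, n=200_001):
    # Approximate F_nu(k) = int_0^inf f(r) J_nu(k r) r dr by truncating
    # the integral at r_max; adequate only for rapidly decaying f.
    r = np.linspace(1e-9, r_max, n)
    fr = f(r)
    return np.array([np.trapz(fr * jv(nu, k * r) * r, r) for k in ks])

ks = np.linspace(0.1, 3.0, 6)
err = hankel_transform(lambda r: np.exp(-r**2 / 2), 0, ks) - np.exp(-ks**2 / 2)
print(np.max(np.abs(err)))  # small: the quadrature reproduces the closed form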
Domain of definition
Inverting a Hankel transform of a function f(r) is valid at every point at which f(r) is continuous, provided that the function is defined in (0, ∞), is piecewise continuous and of bounded variation in every finite subinterval in (0, ∞), and
∫_0^∞ |f(r)| r^{1/2} dr < ∞.
However, like the Fourier transform, the domain can be extended by a density argument to include some functions whose above integral is not finite, for example .
Alternative definition
An alternative definition says that the Hankel transform of g(r) is
h_ν(k) = ∫_0^∞ g(r) J_ν(kr) (kr)^{1/2} dr.
The two definitions are related:
If g(r) = f(r) r^{1/2}, then h_ν(k) = k^{1/2} F_ν(k).
This means that, as with the previous definition, the Hankel transform defined this way is also its own inverse:
The obvious domain now has the condition
but this can be extended. According to the reference given above, we can take the integral as the limit as the upper limit goes to infinity (an improper integral rather than a Lebesgue integral), and in this way the Hankel transform and its inverse work for all functions in L2(0, ∞).
Transforming Laplace's equation
The Hankel transform can be used to transform and solve Laplace's equation expressed in cylindrical coordinates. Under the Hankel transform, the Bessel operator becomes a multiplication by −k². In the axisymmetric case, the partial differential equation is transformed as
which is an ordinary differential equation in the transformed variable .
Orthogonality
The Bessel functions form an orthogonal basis with respect to the weighting factor r:
∫_0^∞ J_ν(kr) J_ν(k'r) r dr = δ(k − k')/k, for k, k' > 0.
The Plancherel theorem and Parseval's theorem
If f(r) and g(r) are such that their Hankel transforms F_ν(k) and G_ν(k) are well defined, then the Plancherel theorem states
∫_0^∞ f(r) g(r) r dr = ∫_0^∞ F_ν(k) G_ν(k) k dk.
Parseval's theorem, which states
∫_0^∞ |f(r)|² r dr = ∫_0^∞ |F_ν(k)|² k dk,
is a special case of the Plancherel theorem. These theorems can be proven using the orthogonality property.
Relation to the multidimensional Fourier transform
The Hankel transform appears when one writes the multidimensional Fourier t
|
https://en.wikipedia.org/wiki/Zn%C3%A1m%27s%20problem
|
In number theory, Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972, although other mathematicians had considered similar problems around the same time.
The initial terms of Sylvester's sequence almost solve this problem, except that the last chosen term equals one plus the product of the others, rather than being a proper divisor. Sun showed that there is at least one solution to the (proper) Znám problem for each k ≥ 5. Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values.
The Znám problem is closely related to Egyptian fractions. It is known that there are only finitely many solutions for any fixed k. It is unknown whether there are any solutions to Znám's problem using only odd numbers, and there remain several other open questions.
The problem
Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. That is, given k, what sets of integers
{n_1, ..., n_k}
are there such that, for each i, n_i divides, but is not equal to,
(∏_{j ≠ i} n_j) + 1 ?
A closely related problem concerns sets of integers in which each integer in the set is a divisor, but not necessarily a proper divisor, of one plus the product of the other integers in the set. This problem does not seem to have been named in the literature, and will be referred to as the improper Znám problem. Any solution to Znám's problem is also a solution to the improper Znám problem, but not necessarily vice versa.
History
Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972. Barbeau had posed the improper Znám problem for k = 3, and Mordell, independently of Znám, found all solutions to the improper problem for k ≤ 5. Skula showed that Znám's problem is unsolvable for k < 5, and credited J. Janák with finding the solution {2, 3, 11, 23, 31} for k = 5.
Examples
Sylvester's sequence is an integer sequence in which each term is one plus the product of the previous terms. The first few terms of the sequence are
2, 3, 7, 43, 1807, 3263443, ... .
Stopping the sequence early produces a set like {2, 3, 7, 43, 1807} that almost meets the conditions of Znám's problem, except that the largest value equals one plus the product of the other terms, rather than being a proper divisor. Thus, it is a solution to the improper Znám problem, but not a solution to Znám's problem as it is usually defined.
One solution to the proper Znám problem, for k = 5, is {2, 3, 7, 47, 395}. A few calculations will show that each element of this set divides, but does not equal, one plus the product of the other four.
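Both claims are easy to verify mechanically. A small Python check follows; the helper name is_znam_solution is ours, not standard terminology.
from math import prod

def is_znam_solution(ns, proper=True):
    # Each n must divide one plus the product of the others and,
    # for the proper problem, must not be equal to it.
    for i, n in enumerate(ns):
        m = prod(ns[:i] + ns[i + 1:]) + 1
        if m % n != 0 or (proper and m == n):
            return False
    return True

print(is_znam_solution([2, 3, 7, 47, 395]))                 # True: proper solution
print(is_znam_solution([2, 3, 7, 43, 1807], proper=False))  # True: improper solution
print(is_znam_solution([2, 3, 7, 43, 1807]))                # False: 1807 = 2*3*7*43 + 1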
Connection to Egyptian fractions
Any solution to the improper Znám problem is equivalent (via division by the product of the x_i's) to a solution to the equation
1/x_1 + ... + 1/x_k + 1/(x_1 x_2 ⋯ x_k) = y,
where y as well as each x_i must be an integer, and conversely any such solution corresponds to a solution to the improper Znám problem. However, all known solutions have y = 1, so they satisfy the equa
|
https://en.wikipedia.org/wiki/Null%20vector
|
In mathematics, given a vector space X with an associated quadratic form q, written (X, q), a null vector or isotropic vector is a non-zero element x of X for which q(x) = 0.
In the theory of real bilinear forms, definite quadratic forms and isotropic quadratic forms are distinct. They are distinguished in that only for the latter does there exist a nonzero null vector.
A quadratic space which has a null vector is called a pseudo-Euclidean space.
A pseudo-Euclidean vector space may be decomposed (non-uniquely) into orthogonal subspaces A and B, X = A + B, where q is positive-definite on A and negative-definite on B. The null cone, or isotropic cone, of X consists of the union of balanced spheres:
⋃_{r ≥ 0} { x = a + b : q(a) = −q(b) = r, a ∈ A, b ∈ B }.
The null cone is also the union of the isotropic lines through the origin.
Split algebras
A composition algebra with a null vector is a split algebra.
In a composition algebra (A, +, ×, *), the quadratic form is q(x) = x x*. When x is a null vector then there is no multiplicative inverse for x, and since x ≠ 0, A is not a division algebra.
In the Cayley–Dickson construction, the split algebras arise in the series bicomplex numbers, biquaternions, and bioctonions, which uses the complex number field as the foundation of this doubling construction due to L. E. Dickson (1919). In particular, these algebras have two imaginary units h and i, which commute, so that their product hi, when squared, yields +1:
(hi)(hi) = h² i² = (−1)(−1) = +1.
Then
q(1 + hi) = (1 + hi)(1 − hi) = 1 − (hi)² = 0,
so 1 + hi is a null vector.
The real subalgebras, split complex numbers, split quaternions, and split-octonions, with their null cones representing the light tracking into and out of 0 ∈ A, suggest spacetime topology.
Examples
The light-like vectors of Minkowski space are null vectors.
The four linearly independent biquaternions , , , and are null vectors and can serve as a basis for the subspace used to represent spacetime. Null vectors are also used in the Newman–Penrose formalism approach to spacetime manifolds.
In the Verma module of a Lie algebra there are null vectors.
Linear algebra
Quadratic forms
|
https://en.wikipedia.org/wiki/Boxcar%20function
|
In mathematics, a boxcar function is any function which is zero over the entire real line except for a single interval where it is equal to a constant, A. The function is named after its graph's resemblance to a boxcar, a type of railroad car. The boxcar function can be expressed in terms of the uniform distribution as
boxcar(x) = (b − a) A f(a, b; x) = A (H(x − a) − H(x − b)),
where f(a, b; x) is the uniform distribution of x for the interval [a, b] and H(x) is the Heaviside step function. As with most such discontinuous functions, there is a question of the value at the transition points. These values are probably best chosen for each individual application.
When a boxcar function is selected as the impulse response of a filter, the result is a simple moving average filter, whose frequency response is a sinc-in-frequency, a type of low-pass filter.
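The moving-average interpretation is easy to demonstrate. A minimal Python sketch follows; the window length N, the test signal, and the FFT length are illustrative assumptions.
import numpy as np

N = 11
h = np.ones(N) / N  # unit-area boxcar impulse response = simple moving average

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)
smoothed = np.convolve(noisy, h, mode="same")  # low-pass filtered copy

# The frequency response is sinc-like: near 1 at DC, decaying with frequency.
H = np.abs(np.fft.rfft(h, 4096))
print(H[0], H[200], H[1000])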
See also
Boxcar averager
Rectangular function
Step function
Top-hat filter
Special functions
|
https://en.wikipedia.org/wiki/Sigma%20approximation
|
In mathematics, σ-approximation adjusts a Fourier summation to greatly reduce the Gibbs phenomenon, which would otherwise occur at discontinuities.
A σ-approximated summation for a series of period T can be written as follows:
s(θ) = (1/2) a_0 + Σ_{k=1}^{m−1} sinc(k/m) [a_k cos(2πkθ/T) + b_k sin(2πkθ/T)],
in terms of the normalized sinc function
sinc(x) = sin(πx)/(πx).
The term
sinc(k/m)
is the Lanczos σ factor, which is responsible for eliminating most of the Gibbs phenomenon. It does not do so entirely, however, but one can square or even cube the expression to serially attenuate Gibbs phenomenon in the most extreme cases.
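The effect is easy to see numerically on a square wave, whose nonzero Fourier coefficients are b_k = 4/(πk) for odd k. The Python sketch below is illustrative; note that numpy's np.sinc is exactly the normalized sinc used above.
import numpy as np

def square_wave_sum(theta, m, sigma=True):
    # Partial Fourier sum of the odd square wave, optionally weighted
    # by the Lanczos factors sinc(k/m).
    s = np.zeros_like(theta)
    for k in range(1, m, 2):
        w = np.sinc(k / m) if sigma else 1.0
        s += w * 4.0 / (np.pi * k) * np.sin(k * theta)
    return s

theta = np.linspace(-np.pi, np.pi, 2001)
print(square_wave_sum(theta, 32, sigma=False).max())  # ~1.18: Gibbs overshoot
print(square_wave_sum(theta, 32, sigma=True).max())   # much closer to 1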
See also
Lanczos resampling
Fourier series
Numerical analysis
|
https://en.wikipedia.org/wiki/Pseudotensor
|
In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinate transformation (e.g., an improper rotation), which is a transformation that can be expressed as a proper rotation followed by reflection. This is a generalization of a pseudovector. To evaluate the sign of a tensor or pseudotensor, it has to be contracted with vectors (as many as its rank) belonging to the space in which the rotation is made, while keeping the tensor coordinates unaffected (differently from what one does in the case of a change of basis). Under an improper rotation a pseudotensor and a proper tensor of the same rank acquire different signs, depending on whether the rank is even or odd. Sometimes inversion of the axes is used as an example of an improper rotation to exhibit the behaviour of a pseudotensor, but this works only if the dimension of the vector space is odd; otherwise inversion is a proper rotation without an additional reflection.
There is a second meaning for pseudotensor (and likewise for pseudovector), restricted to general relativity. Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form.
Definition
Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type is a geometric object whose components in an arbitrary basis are enumerated by indices and obey the transformation rule
under a change of basis.
Here are the components of the pseudotensor in the new and old bases, respectively, is the transition matrix for the contravariant indices, is the transition matrix for the covariant indices, and
This transformation rule differs from the rule for an ordinary tensor only by the presence of the factor
The second context where the word "pseudotensor" is used is general relativity. In that theory, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.
Examples
On non-orientable manifolds, one cannot define a volume form globally due to the non-orientability, but one can define a volume elem
|
https://en.wikipedia.org/wiki/BEST%20theorem
|
In graph theory, a part of discrete mathematics, the BEST theorem gives a product formula for the number of Eulerian circuits in directed (oriented) graphs. The name is an acronym of the names of people who discovered it: de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte.
Precise statement
Let G = (V, E) be a directed graph. An Eulerian circuit is a directed closed path which visits each edge exactly once. In 1736, Euler showed that G has an Eulerian circuit if and only if G is connected and the indegree is equal to outdegree at every vertex. In this case G is called Eulerian. We denote the indegree of a vertex v by deg(v).
The BEST theorem states that the number ec(G) of Eulerian circuits in a connected Eulerian graph G is given by the formula
ec(G) = tw(G) · ∏_{v ∈ V} (deg(v) − 1)!.
Here tw(G) is the number of arborescences, which are trees directed towards the root at a fixed vertex w in G. The number tw(G) can be computed as a determinant, by the version of the matrix tree theorem for directed graphs. It is a property of Eulerian graphs that tv(G) = tw(G) for every two vertices v and w in a connected Eulerian graph G.
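The formula can be checked on a small example. The Python sketch below (helper names are ours) computes tw(G) via the directed matrix tree theorem and compares the BEST count against brute-force enumeration for the complete symmetric digraph on three vertices, where every vertex has deg(v) = 2.
import numpy as np
from math import factorial, prod

def count_in_trees(edges, n, root):
    # Directed matrix-tree theorem: the number of arborescences oriented
    # towards `root` is the determinant of the out-degree Laplacian with
    # the root's row and column deleted.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[u, v] -= 1
    M = np.delete(np.delete(L, root, axis=0), root, axis=1)
    return round(np.linalg.det(M))

def count_eulerian_circuits(edges):
    # Brute force: fix the first edge (each circuit uses it exactly once)
    # and count the ways to traverse the remaining edges back to its tail.
    start, head = edges[0]
    def extend(v, remaining):
        if not remaining:
            return 1 if v == start else 0
        return sum(extend(w, remaining - {i})
                   for i, (u, w) in enumerate(edges) if i in remaining and u == v)
    return extend(head, frozenset(range(1, len(edges))))

edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
tw = count_in_trees(edges, 3, 0)
ec_formula = tw * prod(factorial(2 - 1) for _ in range(3))
print(ec_formula, count_eulerian_circuits(edges))  # both print 3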
Applications
The BEST theorem shows that the number of Eulerian circuits in directed graphs can be computed in polynomial time, a problem which is #P-complete for undirected graphs. It is also used in the asymptotic enumeration of Eulerian circuits of complete and complete bipartite graphs.
History
The BEST theorem is due to van Aardenne-Ehrenfest and de Bruijn
(1951), §6, Theorem 6.
Their proof is bijective and generalizes the de Bruijn sequences. In a "note added in proof", they refer to an earlier result by Smith and Tutte (1941) which proves the formula for graphs with deg(v)=2 at every vertex.
Directed graphs
Theorems in graph theory
|
https://en.wikipedia.org/wiki/Demographics%20of%20Montreal
|
The Demographics of Montreal concern population growth and structure for Montreal, Quebec, Canada. The information is analyzed by Statistics Canada and compiled every five years, with the most recent census having taken place in 2021.
Population history
According to Statistics Canada, at the time of the 2011 Canadian census the city of Montreal proper had 1,649,519 inhabitants. A total of 3,824,221 lived in the Montreal Census Metropolitan Area (CMA) at the same 2011 census, up from 3,635,556 at the 2006 census (within 2006 CMA boundaries), which means a population growth rate of +5.2% between 2006 and 2011. Montreal's 2012-2013 population growth rate was 1.135%, compared with 1.533% for all Canadian CMAs.
In the 2006 census, children under 14 years of age (621,695) constituted 17.1%, while inhabitants over 65 years of age (495,685) numbered 13.6% of the total population.
Future projections
The current estimate of the Montreal CMA population, as of July 1, 2013, according to Statistics Canada is 3,981,802.
According to StatsCan, by 2030, the Greater Montreal Area is expected to number 5,275,000 with 1,722,000 being visible minorities.
Ethnic diversity
City of Montreal
According to the 2021 census, some 38.8% of the population of Montreal and 27.2% that of Metro Montreal, are members of a visible minority (non-white) group. Blacks (198,610 persons or 11.5%) constitute the largest minority group; Montreal has the second-highest number of Black people in Canada after Toronto, as well as the highest concentration of Black people among major Canadian cities. Other groups, such as Arabs (141,935 persons or 8.2%), South Asians (79,670 persons or 4.6%), Latin Americans (78,150 persons or 4.5%), and Chinese (56,935 persons or 3.3%) are also large in number. Visible minorities are defined by the Canadian Employment Equity Act as "persons, other than Aboriginals, who are non-Caucasian in race or non-white in colour."
Metro Montreal
Future projections
Ethnic groups
European
French
Montreal is the cultural centre of Quebec, French-speaking Canada and French-speaking North America as a whole, and an important city in the Francophonie. The majority of the population is francophone. Montreal is the largest French-speaking city in North America, and second in the world after Paris when counting the number of native-language Francophones (third after Paris and Kinshasa when counting second-language speakers). The city is a hub for French language television productions, radio, theatre, circuses, performing arts, film, multimedia and print publishing.
Montreal plays a prominent role in the development of French-Canadian and Québécois culture. Its contribution to culture is therefore more of a society-building endeavour rather than limited to civic influence. The best talents from French Canada and even the French-speaking areas of the United States converge in Montr
|
https://en.wikipedia.org/wiki/Prime%20number%20theory
|
Prime number theory may refer to:
Prime number
Prime number theorem
Number theory
See also
Fundamental theorem of arithmetic, which explains prime factorization.
|
https://en.wikipedia.org/wiki/Bessel
|
Bessel may refer to:
Bessel beam
Bessel ellipsoid
Bessel function in mathematics
Bessel's inequality in mathematics
Bessel's correction in statistics.
Bessel filter, a linear filter often used in audio crossover systems
Bessel Fjord, NE Greenland
Bessel Fjord, NW Greenland
Bessel (crater), a small lunar crater
Bessel transform, also known as Fourier-Bessel transform or Hankel transform
Bessel window, in signal processing
Besselian date, see Epoch (astronomy)#Besselian years
Bessel, a German merchant ship in service 1928–45, latterly for the Kriegsmarine
People
Friedrich Wilhelm Bessel (1784–1846), German mathematician, astronomer, and systematizer of the Bessel functions
See also
Bessell
|
https://en.wikipedia.org/wiki/Nesbitt%27s%20inequality
|
In mathematics, Nesbitt's inequality states that for positive real numbers a, b and c,
a/(b + c) + b/(c + a) + c/(a + b) ≥ 3/2.
It is an elementary special case (N = 3) of the difficult and much studied Shapiro inequality, and was published at least 50 years earlier.
There is no corresponding upper bound as any of the 3 fractions in the inequality can be made arbitrarily large.
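Both points are easy to check numerically. The following Python sketch is illustrative only; the helper name nesbitt and the sampling ranges are assumptions, not part of the statement.
import random

def nesbitt(a, b, c):
    return a / (b + c) + b / (c + a) + c / (a + b)

rng = random.Random(0)
samples = [nesbitt(*(rng.uniform(0.01, 10.0) for _ in range(3)))
           for _ in range(100_000)]
print(min(samples) >= 1.5)   # True: the bound 3/2 is never violated
print(nesbitt(1, 1, 1))      # 1.5: equality at a = b = c
print(nesbitt(1000, 1, 1))   # ~500: the sum has no upper bound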
Proof
First proof: AM-HM inequality
By the AM-HM inequality on the three positive numbers a + b, b + c, c + a,
((a + b) + (b + c) + (c + a))/3 ≥ 3/(1/(a + b) + 1/(b + c) + 1/(c + a)).
Clearing denominators yields
((a + b) + (b + c) + (c + a)) (1/(a + b) + 1/(b + c) + 1/(c + a)) ≥ 9,
from which we obtain
(a + b + c)/(b + c) + (a + b + c)/(c + a) + (a + b + c)/(a + b) ≥ 9/2
by expanding the product and collecting like denominators. Since each summand on the left equals 1 plus one of the three fractions of the inequality, this then simplifies directly to the final result.
Second proof: Rearrangement
Suppose , we have that
define
The scalar product of the two sequences is maximum because of the rearrangement inequality if they are arranged the same way, call and the vector shifted by one and by two, we have:
Addition yields our desired Nesbitt's inequality.
Third proof: Sum of Squares
The following identity is true for all
This clearly proves that the left side is no less than for positive a, b and c.
Note: every rational inequality can be demonstrated by transforming it to the appropriate sum-of-squares identity, see Hilbert's seventeenth problem.
Fourth proof: Cauchy–Schwarz
Invoking the Cauchy–Schwarz inequality on the vectors yields
which can be transformed into the final result as we did in the AM-HM proof.
Fifth proof: AM-GM
Let . We then apply the AM-GM inequality to obtain the following
because
Substituting out the in favor of yields
which then simplifies to the final result.
Sixth proof: Titu's lemma
Titu's lemma, a direct consequence of the Cauchy–Schwarz inequality, states that for any sequence of real numbers and any sequence of positive numbers , .
We use the lemma on and . This gives,
This results in,
i.e.,
Seventh proof: Using homogeneity
As the left side of the inequality is homogeneous, we may assume . Now define , , and . The desired inequality turns into , or, equivalently, . This is clearly true by Titu's Lemma.
Eighth proof: Jensen inequality
Define and consider the function . This function can be shown to be convex in and, invoking Jensen inequality, we get
A straightforward computation yields
Ninth proof: Reduction to a two-variable inequality
By clearing denominators,
It now suffices to prove that for , as summing this three times for and completes the proof.
As we are done.
References
Ion Ionescu, Romanian Mathematical Gazette, Volume XXXII (September 15, 1926 - August 15, 1927), page 120
External links
See AoPS for more proofs of this inequality.
Inequalities
|
https://en.wikipedia.org/wiki/Toy%20theorem
|
In mathematics, a toy theorem is a simplified instance (special case) of a more general theorem, which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem.
In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise.
Toy theorems can also have educational value. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds, by proving a toy version of the theorem.
Examples
A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem.
Another example of toy theorem is Rolle's theorem, which is obtained from the mean value theorem by equating the function values at the endpoints.
See also
Corollary
Fundamental theorem
Lemma (mathematics)
Toy model
Mathematical theorems
Mathematical terminology
|
https://en.wikipedia.org/wiki/Rigidity%20%28mathematics%29
|
In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect.
The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians.
Examples
Some examples include:
Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values.
Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem.
By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point.
Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X.
Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure.
A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique.
Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules.
Alexandrov's uniqueness theorem states that a convex polyhedron in three dimensions is uniquely determined by the metric space of geodesics on its surface.
Rigidity results in K-theory show isomorphisms between various algebraic K-theory groups.
Rigid groups in the inverse Galois problem.
Combinatorial use
In combinatorics, the term rigid is also used to define the notion of a rigid surjection, which is a surjection for which the following equivalent conditions hold:
For every , ;
Considering as an -tuple , the first occurrences of the elements in are in increasing order;
maps initial segments of to initial segments of .
This relates to the above definition of rigid, in that each rigid surjection uniquely defines, and is uniquely defined by, a partition of into pieces. Given a rigid surjection , the partition is defined by . Conversely, given a partition of , order the by letting . If is now the -ordered partition, the function defined by is a rigid surjection.
See also
Uniqueness theorem
Struc
|
https://en.wikipedia.org/wiki/Stationary%20phase%20approximation
|
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential.
This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.
It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.
Basics
The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
Formula
Letting Σ denote the set of critical points of the function f (i.e. points where ∇f = 0), under the assumption that g is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. det(Hess f(x_0)) ≠ 0 for x_0 ∈ Σ) we have the following asymptotic formula for I(k) = ∫_{R^n} g(x) e^{ikf(x)} dx, as k → ∞:
I(k) = Σ_{x_0 ∈ Σ} g(x_0) e^{ikf(x_0)} |det(Hess f(x_0))|^{−1/2} e^{(iπ/4) sgn(Hess f(x_0))} (2π/k)^{n/2} + o(k^{−n/2}).
Here Hess f denotes the Hessian of f, and sgn(Hess f) denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.
For n = 1, this reduces to:
I(k) = Σ_{x_0 ∈ Σ} g(x_0) e^{ikf(x_0)} e^{sgn(f''(x_0)) iπ/4} (2π/(k |f''(x_0)|))^{1/2} + o(k^{−1/2}).
In this case the assumptions on f reduce to all the critical points being non-degenerate.
This is just the Wick-rotated version of the formula for the method of steepest descent.
An example
Consider a function
.
The phase term in this function, , is stationary when
or equivalently,
.
Solutions to this equation yield dominant frequencies for some and . If we expand as a Taylor series about and neglect terms of order higher than , we have
where denotes the second derivative of . When is relatively large, even a small difference will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the limit for a Taylor expansion. If we use the formula,
.
.
This integrates to
.
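The one-dimensional formula can be tested directly. The Python sketch below is an illustrative assumption, not the article's example: it takes f(t) = t² (a single nondegenerate stationary point at t_0 = 0 with f'' = 2) and amplitude g(t) = exp(−t²), and compares the oscillatory integral with the stationary phase prediction.
import numpy as np

t = np.linspace(-8.0, 8.0, 2_000_001)
g = np.exp(-t**2)

for k in (10.0, 100.0, 1000.0):
    I = np.trapz(g * np.exp(1j * k * t**2), t)  # direct oscillatory integral
    # g(0) = 1, f(0) = 0, sgn f''(0) = +1 in the n = 1 formula:
    pred = np.sqrt(2 * np.pi / (2 * k)) * np.exp(1j * np.pi / 4)
    print(k, abs(I - pred) / abs(pred))  # relative error decays roughly like 1/k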
Reduction steps
The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example Riemann–Lebesgue lemma.
The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by
.
The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that
|
https://en.wikipedia.org/wiki/Parallel%20tempering
|
Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures, one may have a stable state different from low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states.
More specifically, parallel tempering (also known as replica exchange MCMC sampling), is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang, then extended by Charles J. Geyer, and later developed further by Giorgio Parisi,
Koji Hukushima and Koji Nemoto,
and others.
Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion one exchanges configurations at different temperatures. The idea of this method
is to make configurations at high temperatures available to the simulations at low temperatures and vice versa.
This results in a very robust ensemble which is able to sample both low and high energy configurations.
In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
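The scheme just described fits in a few lines. The Python sketch below is a minimal illustration under assumed choices (a toy double-well energy, a four-temperature ladder, Gaussian proposals); the replica swap uses the Metropolis acceptance probability min(1, exp((β_i − β_j)(E_i − E_j))).
import numpy as np

rng = np.random.default_rng(1)
energy = lambda x: 4.0 * (x**2 - 1.0) ** 2  # metastable states at x = -1 and x = +1
temps = np.array([0.05, 0.2, 0.8, 3.2])     # cold ... hot
betas = 1.0 / temps
x = np.full(temps.size, -1.0)               # start every replica in the left well

for step in range(20_000):
    # Ordinary Metropolis update within each replica at its own temperature.
    prop = x + rng.normal(0.0, 0.5, size=x.size)
    log_acc = np.minimum(0.0, -betas * (energy(prop) - energy(x)))
    x = np.where(np.log(rng.random(x.size)) < log_acc, prop, x)
    # Attempt one swap between a random pair of neighboring temperatures.
    i = rng.integers(0, temps.size - 1)
    delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if np.log(rng.random()) < min(0.0, delta):
        x[i], x[i + 1] = x[i + 1], x[i]

print(x)  # the cold replica can now be found in either well, not only the starting one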
Background
Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature T. At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down.
If we were to run two simulations at temperatures separated by a ΔT, we would find that if ΔT is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For ΔT = 0 the overlap should approach 1.
Another way to interpret this overlap is to say that system configurations sampled at temperature T1 are likely to appear during a simulation at T2. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at T1 and T2. At a given Mo
|
https://en.wikipedia.org/wiki/Rigidity
|
Rigid or rigidity may refer to:
Mathematics and physics
Stiffness, the property of a solid body to resist deformation, which is sometimes referred to as rigidity
Structural rigidity, a mathematical theory of the stiffness of ensembles of rigid objects connected by hinges
Rigidity (electromagnetism), the resistance of a charged particle to deflection by a magnetic field
Rigidity (mathematics), a property of a collection of mathematical objects (for instance sets or functions)
Rigid body, in physics, a simplification of the concept of an object to allow for modelling
Rigid transformation, in mathematics, a rigid transformation preserves distances between every pair of points
Rigidity (chemistry), the tendency of a substance to retain/maintain its shape when subjected to outside force
(Modulus of) rigidity or shear modulus (material science), the tendency of a substance to retain/maintain its shape when subjected to outside force
Medicine
Rigidity (neurology), an increase in muscle tone leading to a resistance to passive movement throughout the range of motion
Rigidity (psychology), an obstacle to problem solving which arises from over-dependence on prior experiences
Other uses
Real rigidity, and nominal rigidity, the resistance of prices and wages to market changes in macroeconomics
Ridgid, a brand of tools
|
https://en.wikipedia.org/wiki/Chromatic%20polynomial
|
The chromatic polynomial is a graph polynomial studied in algebraic graph theory, a branch of mathematics. It counts the number of graph colorings as a function of the number of colors and was originally defined by George David Birkhoff to study the four color problem. It was generalised to the Tutte polynomial by Hassler Whitney and W. T. Tutte, linking it to the Potts model of statistical physics.
History
George David Birkhoff introduced the chromatic polynomial in 1912, defining it only for planar graphs, in an attempt to prove the four color theorem. If P(G, k) denotes the number of proper colorings of G with k colors then one could establish the four color theorem by showing P(G, 4) > 0 for all planar graphs G. In this way he hoped to apply the powerful tools of analysis and algebra for studying the roots of polynomials to the combinatorial coloring problem.
Hassler Whitney generalised Birkhoff’s polynomial from the planar case to general graphs in 1932. In 1968, Ronald C. Read asked which polynomials are the chromatic polynomials of some graph, a question that remains open, and introduced the concept of chromatically equivalent graphs. Today, chromatic polynomials are one of the central objects of algebraic graph theory.
Definition
For a graph G, P(G, k) counts the number of its (proper) vertex k-colorings.
Other commonly used notations include P_G(k), χ_G(k), or π_G(k).
There is a unique polynomial P(G, x) which evaluated at any integer k ≥ 0 coincides with P(G, k); it is called the chromatic polynomial of G.
For example, to color the path graph P_3 on 3 vertices with k colors, one may choose any of the k colors for the first vertex, any of the k − 1 remaining colors for the second vertex, and lastly for the third vertex, any of the k − 1 colors that are different from the second vertex's choice.
Therefore, P(P_3, k) = k (k − 1)² is the number of k-colorings of P_3.
For a variable x (not necessarily integer), we thus have P(P_3, x) = x (x − 1)².
(Colorings which differ only by permuting colors or by automorphisms of G are still counted as different.)
Deletion–contraction
The fact that the number of k-colorings is a polynomial in k follows from a recurrence relation called the deletion–contraction recurrence or Fundamental Reduction Theorem. It is based on edge contraction: for a pair of vertices u and v the graph G/uv is obtained by merging the two vertices and removing any edges between them.
If u and v are adjacent in G, let G − uv denote the graph obtained by removing the edge uv.
Then the numbers of k-colorings of these graphs satisfy:
P(G, k) = P(G − uv, k) − P(G/uv, k).
Equivalently, if u and v are not adjacent in G and G + uv is the graph with the edge uv added, then
P(G, k) = P(G + uv, k) + P(G/uv, k).
This follows from the observation that every k-coloring of G either gives different colors to u and v, or the same colors. In the first case this gives a (proper) k-coloring of G + uv, while in the second case it gives a coloring of G/uv.
Conversely, every k-coloring of G can be uniquely obtained from a k-coloring of G + uv or G/uv (if u and v are not adjacent in G).
The chromatic polynomial can hence be recursively defined as
P(G, x) = x^n for the edgeless graph on n
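The recursion can be carried out mechanically for small graphs. A minimal Python sketch follows (the helper name P and the graph encoding are ours); it recovers the path-graph polynomial computed above.
from sympy import symbols, expand

k = symbols("k")

def P(vertices, edges):
    # Deletion-contraction: P(G) = P(G - uv) - P(G/uv);
    # an edgeless graph on n vertices has polynomial k**n.
    if not edges:
        return k ** len(vertices)
    (u, v), rest = edges[0], edges[1:]
    deleted = P(vertices, rest)
    # Contract v into u: drop edges parallel to uv, rename v to u,
    # merge resulting multi-edges, and discard degenerate loops.
    merged_vertices = [w for w in vertices if w != v]
    merged = {frozenset(u if w == v else w for w in e)
              for e in rest if frozenset(e) != frozenset((u, v))}
    merged_edges = [tuple(e) for e in merged if len(e) == 2]
    return deleted - P(merged_vertices, merged_edges)

# Path graph on 3 vertices: expect k*(k - 1)**2 = k**3 - 2*k**2 + k.
print(expand(P([1, 2, 3], [(1, 2), (2, 3)])))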
|
https://en.wikipedia.org/wiki/Cyclic%20homology
|
In noncommutative geometry and related branches of mathematics, cyclic homology and cyclic cohomology are certain (co)homology theories for associative algebras which generalize the de Rham (co)homology of manifolds. These notions were independently introduced by Boris Tsygan (homology) and Alain Connes (cohomology) in the 1980s. These invariants have many interesting relationships with several older branches of mathematics, including de Rham theory, Hochschild (co)homology, group cohomology, and the K-theory. Contributors to the development of the theory include Max Karoubi, Yuri L. Daletskii, Boris Feigin, Jean-Luc Brylinski, Mariusz Wodzicki, Jean-Louis Loday, Victor Nistor, Daniel Quillen, Joachim Cuntz, Ryszard Nest, Ralf Meyer, and Michael Puschnigg.
Hints about definition
The first definition of the cyclic homology of a ring A over a field of characteristic zero, denoted
HCn(A) or Hnλ(A),
proceeded by the means of the following explicit chain complex related to the Hochschild homology complex of A, called the Connes complex:
For any natural number n ≥ 0, define the operator which generates the natural cyclic action of on the n-th tensor product of A:
Recall that the Hochschild complex groups of A with coefficients in A itself are given by setting for all n ≥ 0. Then the components of the Connes complex are defined as , and the differential is the restriction of the Hochschild differential to this quotient. One can check that the Hochschild differential does indeed factor through to this space of coinvariants.
Connes later found a more categorical approach to cyclic homology using a notion of cyclic object in an abelian category, which is analogous to the notion of simplicial object. In this way, cyclic homology (and cohomology) may be interpreted as a derived functor, which can be explicitly computed by the means of the (b, B)-bicomplex. If the field k contains the rational numbers, the definition in terms of the Connes complex calculates the same homology.
One of the striking features of cyclic homology is the existence of a long exact sequence connecting
Hochschild and cyclic homology. This long exact sequence is referred to as the periodicity sequence.
Case of commutative rings
Cyclic cohomology of the commutative algebra A of regular functions on an affine algebraic variety over a field k of characteristic zero can be computed in terms of Grothendieck's algebraic de Rham complex. In particular, if the variety V=Spec A is smooth, cyclic cohomology of A is expressed in terms of the de Rham cohomology of V as follows:
This formula suggests a way to define de Rham cohomology for a 'noncommutative spectrum' of a noncommutative algebra A, which was extensively developed by Connes.
Variants of cyclic homology
One motivation of cyclic homology was the need for an approximation of K-theory that is defined, unlike K-theory, as the homology of a chain complex. Cyclic cohomology is in fact endowed with a pairing with K-theory,
|
https://en.wikipedia.org/wiki/Indeterminate%20%28variable%29
|
In mathematics, particularly in formal algebra, an indeterminate is a symbol that is treated as a variable, but does not stand for anything else except itself. It may be used as a placeholder in objects such as polynomials and formal power series. In particular:
It does not designate a constant or a parameter of the problem.
It is not an unknown that could be solved for.
It is not a variable designating a function argument, or a variable being summed or integrated over.
It is not any type of bound variable.
It is just a symbol used in an entirely formal way.
When used as placeholders, a common operation is to substitute mathematical expressions (of an appropriate type) for the indeterminates.
By a common abuse of language, mathematical texts may not clearly distinguish indeterminates from ordinary variables.
Polynomials
A polynomial in an indeterminate X is an expression of the form a_0 + a_1 X + a_2 X² + ⋯ + a_n X^n, where the a_i are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable x may be equal or not at a particular value of x.
For example, the functions
f(x) = 2 + 3x, g(x) = 5 + 2x
are equal when x = 3 and not equal otherwise. But the two polynomials
2 + 3X and 5 + 2X
are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact,
2 + 3X = a + bX
does not hold unless a = 2 and b = 3. This is because X is not, and does not designate, a number.
The distinction is subtle, since a polynomial in X can be changed to a function in x by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working in modulo 2, we have that:
0² − 0 = 0 and 1² − 1 = 0,
so the polynomial function x² − x is identically equal to 0 for x having any value in the modulo-2 system. However, the polynomial X² − X is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero.
Formal power series
A formal power series in an indeterminate X is an expression of the form a_0 + a_1 X + a_2 X² + ⋯, where no value is assigned to the symbol X. This is similar to the definition of a polynomial, except that an infinite number of the coefficients may be nonzero. Unlike the power series encountered in calculus, questions of convergence are irrelevant (since there is no function at play). So power series that would diverge for nonzero values of X, such as 1 + X + 2X² + 6X³ + ⋯ + n! X^n + ⋯, are allowed.
As generators
Indeterminates are useful in abstract algebra for generating mathematical structures. For example, given a field K, the set of polynomials with coefficients in K is the polynomial ring K[X] with polynomial addition and multiplication as operations. In particular, if two indeterminates X and Y are used, then the polynomial ring K[X, Y] also uses these operations, and convention holds that XY = YX.
Indeterminates may also be used to generate a free algebra over a commutative ring A. For instance, with two indeterminates X and Y, the free algebra A⟨X, Y⟩ includes sums of strings in X and Y, with coefficients in A, and with the understanding that XY and YX are not necessarily identical
|
https://en.wikipedia.org/wiki/Yakov%20Perelman
|
Yakov Isidorovich Perelman (; – 16 March 1942) was a Russian Empire and Soviet science writer and author of many popular science books, including Physics Can Be Fun and Mathematics Can Be Fun (both translated from Russian into English).
Life and work
Perelman was born in 1882 in the town of Białystok, Russian Empire. He obtained the Diploma in Forestry from the Imperial Forestry Institute (Now Saint-Petersburg State Forestry University) in Saint Petersburg, in 1909. He was influenced by Ernst Mach and probably the Russian Machist Alexander Bogdanov in his pedagogical approach to popularising science. After the success of "Physics for Entertainment", Perelman set out to produce other books, in which he showed himself to be an imaginative populariser of science. Especially popular were "Arithmetic for entertainment", "Mechanics for entertainment", "Geometry for Entertainment", "Astronomy for entertainment", "Lively Mathematics", " Physics Everywhere", and "Tricks and Amusements".
His famous books on physics and astronomy were translated into various languages by the erstwhile Soviet Union.
The scientist Konstantin Tsiolkovsky thought highly of Perelman's talents and creative genius, writing of him in the preface of Interplanetary Journeys: "The author has long been known by his popular, witty and quite scientific works on physics, astronomy and mathematics, which are, moreover written in a marvelous language and are very readable."
Perelman has also authored a number of textbooks and articles in Soviet popular science magazines.
In addition to his educational and scientific writings, he also worked as an editor of science magazines, including Nature and People and In the Workshop of Nature.
Perelman died from starvation in 1942, during the German Siege of Leningrad. The siege started on 9 September 1941 and lasted 872 days, until
27 January 1944. The Siege of Leningrad was one of the longest, most destructive sieges of a major city in modern history and one of the costliest in terms of casualties (1,117,000).
His older brother Yosif was a writer who published under the pseudonym Osip Dymov. He is not related to the Russian mathematician Grigori Perelman, who was born in 1966 to a different Yakov Perelman. However, Grigori Perelman told The New Yorker that his father gave him Physics for Entertainment, and it inspired his interest in mathematics.
Books
Mathematics Can Be Fun
Astronomia Recreativa
Physics for Entertainment (1913)
Figures for Fun
Algebra Can Be Fun
Fun with Maths & Physics
Arithmetic for Entertainment
Mechanics for Entertainment
Geometry for Entertainment
Astronomy for Entertainment
Lively Mathematics
Physics Everywhere
Tricks and Amusements
Physics Can Be Fun
He has also written several books on interplanetary travel (Interplanetary Journeys, On a Rocket to Stars, and World Expanses).
Physics for Entertainment
In 1913, Russian bookshops began carrying Physics for Entertainment. The educationalist's new book attracted you
|
https://en.wikipedia.org/wiki/Borel%27s%20lemma
|
In mathematics, Borel's lemma, named after Émile Borel, is an important result used in the theory of asymptotic expansions and partial differential equations.
Statement
Suppose U is an open set in the Euclidean space Rn, and suppose that f0, f1, ... is a sequence of smooth functions on U.
If I is any open interval in R containing 0 (possibly I = R), then there exists a smooth function F(t, x) defined on I×U, such that
∂^k F/∂t^k (0, x) = f_k(x),
for k ≥ 0 and x in U.
Proof
Proofs of Borel's lemma can be found in many textbooks on analysis, including those from which the proof below is taken.
Note that it suffices to prove the result for a small interval I = (−ε,ε), since if ψ(t) is a smooth bump function with compact support in (−ε,ε) equal identically to 1 near 0, then ψ(t) ⋅ F(t, x) gives a solution on R × U. Similarly using a smooth partition of unity on Rn subordinate to a covering by open balls with centres at δ⋅Zn, it can be assumed that all the fm have compact support in some fixed closed ball C. For each m, let
where εm is chosen sufficiently small that
for |α| < m. These estimates imply that each sum
is uniformly convergent and hence that
is a smooth function with
By construction
Note: Exactly the same construction can be applied, without the auxiliary space U, to produce a smooth function on the interval I for which the derivatives at 0 form an arbitrary sequence.
Partial differential equations
Lemmas in analysis
Asymptotic analysis
|
https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue%20lemma
|
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an L1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.
Statement
Let f ∈ L¹(R^n) be an integrable function, i.e. f: R^n → C is a measurable function such that
∫_{R^n} |f(x)| dx < ∞,
and let f̂ be the Fourier transform of f, i.e.
f̂(ξ) = ∫_{R^n} f(x) e^{−iξ·x} dx.
Then f̂ vanishes at infinity: |f̂(ξ)| → 0 as |ξ| → ∞.
Because the Fourier transform of an integrable function is continuous, the Fourier transform is a continuous function vanishing at infinity. If denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: The Fourier transformation maps to .
Proof
We will focus on the one-dimensional case n = 1; the proof in higher dimensions is similar. First, suppose that f is continuous and compactly supported. For ξ ≠ 0, the substitution x → x + π/ξ leads to
f̂(ξ) = ∫_R f(x) e^{−iξx} dx = −∫_R f(x + π/ξ) e^{−iξx} dx.
This gives a second formula for f̂(ξ). Taking the mean of both formulas, we arrive at the following estimate:
|f̂(ξ)| ≤ (1/2) ∫_R |f(x) − f(x + π/ξ)| dx.
Because f is continuous, |f(x) − f(x + π/ξ)| converges to 0 as |ξ| → ∞ for all x. Thus, |f̂(ξ)| converges to 0 as |ξ| → ∞ due to the dominated convergence theorem.
If f is an arbitrary integrable function, it may be approximated in the L¹ norm by a compactly supported continuous function. For ε > 0, pick a compactly supported continuous function g such that ‖f − g‖_{L¹} ≤ ε. Then
|f̂(ξ)| ≤ |(f − g)^(ξ)| + |ĝ(ξ)| ≤ ε + |ĝ(ξ)|.
Because this holds for any ε > 0, it follows that |f̂(ξ)| → 0 as |ξ| → ∞.
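The decay is easy to observe numerically. The Python sketch below is an illustrative assumption, not part of the proof: it evaluates the Fourier transform of the indicator function of [0, 1], which equals (1 − e^{−iξ})/(iξ) and so decays like 1/|ξ|.
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)

for xi in (10.0, 100.0, 1000.0):
    fhat = np.trapz(np.exp(-1j * xi * x), x)  # int_0^1 e^{-i xi x} dx
    print(xi, abs(fhat))                       # shrinks toward 0 as xi grows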
Other versions
The Riemann–Lebesgue lemma holds in a variety of other situations.
If f ∈ L¹(0, ∞), then the Riemann–Lebesgue lemma also holds for the Laplace transform of f, that is,
∫_0^∞ f(t) e^{−st} dt → 0
as |s| → ∞ within the half-plane Re(s) ≥ 0.
A version holds for Fourier series as well: if is an integrable function on a bounded interval, then the Fourier coefficients of tend to 0 as . This follows by extending by zero outside the interval, and then applying the version of the Riemann–Lebesgue lemma on the entire real line.
However, the Riemann–Lebesgue lemma does not hold for arbitrary distributions. For example, the Dirac delta function distribution formally has a finite integral over the real line, but its Fourier transform is a constant and does not vanish at infinity.
Applications
The Riemann–Lebesgue lemma can be used to prove the validity of asymptotic approximations for integrals. Rigorous treatments of the method of steepest descent and the method of stationary phase, amongst others, are based on the Riemann–Lebesgue lemma.
Asymptotic analysis
Harmonic analysis
Lemmas in analysis
Theorems in analysis
Theorems in harmonic analysis
Bernhard Riemann
|
https://en.wikipedia.org/wiki/Split-quaternion
|
In abstract algebra, the split-quaternions or coquaternions form an algebraic structure introduced by James Cockle in 1849 under the latter name. They form an associative algebra of dimension four over the real numbers.
After introduction in the 20th century of coordinate-free definitions of rings and algebras, it was proved that the algebra of split-quaternions is isomorphic to the ring of the real matrices. So the study of split-quaternions can be reduced to the study of real matrices, and this may explain why there are few mentions of split-quaternions in the mathematical literature of the 20th and 21st centuries.
Definition
The split-quaternions are the linear combinations (with real coefficients) of four basis elements that satisfy the following product rules:
i² = −1,
j² = +1,
k² = +1,
i j k = 1.
By associativity, these relations imply
i j = k and j i = −k,
j k = −i and k j = i,
k i = j and i k = −j,
so that each pair of distinct basis elements i, j, k anticommutes.
So, the split-quaternions form a real vector space of dimension four with {1, i, j, k} as a basis. They form also a noncommutative ring, by extending the above product rules by distributivity to all split-quaternions.
Consider the square matrices
1 ↦ [[1, 0], [0, 1]], i ↦ [[0, 1], [−1, 0]], j ↦ [[0, 1], [1, 0]], k ↦ [[1, 0], [0, −1]].
They satisfy the same multiplication table as the corresponding split-quaternions. As these matrices form a basis of the two-by-two matrices, the unique linear function that maps 1, i, j, k to them (respectively) induces an algebra isomorphism from the split-quaternions to the two-by-two real matrices.
The above multiplication rules imply that the eight elements form a group under this multiplication, which is isomorphic to the dihedral group D4, the symmetry group of a square. In fact, if one considers a square whose vertices are the points whose coordinates are or , the matrix is the clockwise rotation of the quarter of a turn, is the symmetry around the first diagonal, and is the symmetry around the axis.
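The matrix model makes all of these identities mechanical to check. The Python sketch below is illustrative (the variable names and sample coefficients are ours); it verifies the defining relations and the fact that q q* equals the determinant of the associated matrix.
import numpy as np

one = np.eye(2)
i = np.array([[0.0, 1.0], [-1.0, 0.0]])
j = np.array([[0.0, 1.0], [1.0, 0.0]])
k = np.array([[1.0, 0.0], [0.0, -1.0]])

# Defining relations: i^2 = -1, j^2 = k^2 = +1, i j k = 1.
assert np.allclose(i @ i, -one) and np.allclose(j @ j, one) and np.allclose(k @ k, one)
assert np.allclose(i @ j @ k, one)
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)  # distinct units anticommute

def as_matrix(w, x, y, z):
    # The split-quaternion w + x i + y j + z k as a 2x2 real matrix.
    return w * one + x * i + y * j + z * k

w, x, y, z = 1.0, 2.0, 0.5, -1.5
assert np.isclose(np.linalg.det(as_matrix(w, x, y, z)), w**2 + x**2 - y**2 - z**2)
print("split-quaternion identities verified")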
Properties
Like the quaternions introduced by Hamilton in 1843, they form a four dimensional real associative algebra. But like the real algebra of 2×2 matrices – and unlike the real algebra of quaternions – the split-quaternions contain nontrivial zero divisors, nilpotent elements, and idempotents. (For example, (1 + j)/2 is an idempotent zero-divisor, and i + j is nilpotent.) As an algebra over the real numbers, the algebra of split-quaternions is isomorphic to the algebra of 2×2 real matrices by the above defined isomorphism.
This isomorphism allows identifying each split-quaternion with a 2×2 matrix. So every property of split-quaternions corresponds to a similar property of matrices, which is often named differently.
The conjugate of a split-quaternion
q = w + x i + y j + z k is q* = w − x i − y j − z k. In terms of matrices, the conjugate is the cofactor matrix obtained by exchanging the diagonal entries and changing the sign of the other two entries.
The product of a split-quaternion with its conjugate is the isotropic quadratic form:
N(q) = q q* = w² + x² − y² − z²,
which is called the norm of the split-quaternion or the determinant of the associated matrix.
The real part of a split-quaternion q is w = (q + q*)/2. It equals half the trace of the associated matrix.
Th
|
https://en.wikipedia.org/wiki/Dirichlet%E2%80%93Jordan%20test
|
In mathematics, the Dirichlet–Jordan test gives sufficient conditions for a real-valued, periodic function f to be equal to the sum of its Fourier series at a point of continuity. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). It is one of many conditions for the convergence of Fourier series.
The original test was established by Peter Gustav Lejeune Dirichlet in 1829, for piecewise monotone functions. It was extended in the late 19th century by Camille Jordan to functions of bounded variation (any function of bounded variation is the difference of two increasing functions).
Dirichlet–Jordan test for Fourier series
The Dirichlet–Jordan test states that if a periodic function f is of bounded variation on a period, then the Fourier series S_n f converges, as n → ∞, at each point x of the domain to
(1/2) [f(x+) + f(x−)].
In particular, if f is continuous at x, then the Fourier series converges to f(x). Moreover, if f is continuous everywhere, then the convergence is uniform.
Stated in terms of a periodic function of period 2π, the Fourier series coefficients are defined as
c_k = (1/2π) ∫_{−π}^{π} f(x) e^{−ikx} dx,
and the partial sums of the Fourier series are
S_n f(x) = Σ_{k=−n}^{n} c_k e^{ikx}.
The analogous statement holds irrespective of what the period of f is, or which version of the Fourier series is chosen.
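The midpoint behaviour at a jump is easy to observe numerically. The Python sketch below is illustrative: it uses the square wave sign(sin x), a bounded-variation function whose nonzero Fourier coefficients are b_k = 4/(πk) for odd k, and evaluates partial sums at the jump x = 0 and at a point of continuity.
import numpy as np

def S(x, n):
    # n-th partial Fourier sum of the square wave sign(sin x).
    return sum(4 / (np.pi * k) * np.sin(k * x) for k in range(1, n + 1, 2))

# At the jump x = 0 the sums converge to the midpoint (1 + (-1))/2 = 0;
# at the continuity point x = pi/2 they converge to f(pi/2) = 1.
for n in (11, 101, 1001):
    print(n, S(0.0, n), S(np.pi / 2, n))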
There is also a pointwise version of the test: if is a periodic function in , and is of bounded variation in a neighborhood of , then the Fourier series at converges to the limit as above
Jordan test for Fourier integrals
For the Fourier transform on the real line, there is a version of the test as well. Suppose that is in and of bounded variation in a neighborhood of the point . Then
If is continuous in an open interval, then the integral on the left-hand side converges uniformly in the interval, and the limit on the right-hand side is .
This version of the test (although not satisfying modern demands for rigor) is historically prior to Dirichlet, being due to Joseph Fourier.
Dirichlet conditions in signal processing
In signal processing, the test is often retained in the original form due to Dirichlet: a piecewise monotone bounded periodic function has a convergent Fourier series whose value at each point is the arithmetic mean of the left and right limits of the function. The condition of piecewise monotonicity is equivalent to having only finitely many local extrema, i.e., that the function changes the direction of its variation only finitely many times. (Dirichlet required in addition that the function have only finitely many discontinuities, but this constraint is unnecessarily stringent.) Any signal that can be physically produced in a laboratory satisfies these conditions.
As in the pointwise case of the Jordan test, the condition of boundedness can be relaxed if the function is assumed to be absolutely integrable (i.e., ) over a period, provided it satisfies the other conditions of the test in a neighborhood of the point where the limit i
|
https://en.wikipedia.org/wiki/Progressive%20function
|
In mathematics, a progressive function f ∈ L²(R) is a function whose Fourier transform is supported by positive frequencies only:
supp f̂ ⊆ [0, ∞).
It is called regressive if and only if the time reversed function f(−t) is progressive, or equivalently, if
supp f̂ ⊆ (−∞, 0].
The complex conjugate of a progressive function is regressive, and vice versa.
The space of progressive functions is sometimes denoted H²₊(R), which is known as the Hardy space of the upper half-plane. This is because a progressive function has the Fourier inversion formula
f(t) = (1/2π) ∫_0^∞ e^{iωt} f̂(ω) dω,
and hence extends to a holomorphic function on the upper half-plane {t + iu : t ∈ R, u > 0}
by the formula
f(t + iu) = (1/2π) ∫_0^∞ e^{iω(t + iu)} f̂(ω) dω = (1/2π) ∫_0^∞ e^{iωt} e^{−ωu} f̂(ω) dω.
Conversely, every holomorphic function on the upper half-plane which is uniformly square-integrable on every horizontal line
will arise in this manner.
Regressive functions are similarly associated with the Hardy space on the lower half-plane .
Hardy spaces
Types of functions
|
https://en.wikipedia.org/wiki/Fixed-point%20space
|
In mathematics, a Hausdorff space X is called a fixed-point space if every continuous function has a fixed point.
For example, any closed interval [a, b] in R is a fixed point space, and this can be proved from the intermediate value property of real continuous functions. The open interval (a, b), however, is not a fixed point space. To see it, consider the function
f(x) = (a + x)/2, for example: it maps (a, b) into itself, but its only fixed point, x = a, lies outside the interval.
Any linearly ordered space that is connected and has a top and a bottom element is a fixed point space.
Note that, in the definition, we could easily have disposed of the condition that the space is Hausdorff.
Fixed points (mathematics)
Topology
Topological spaces
|
https://en.wikipedia.org/wiki/Local%20property
|
In mathematics, a mathematical object is said to satisfy a property locally, if the property is satisfied on some limited, immediate portions of the object (e.g., on some sufficiently small or arbitrarily small neighborhoods of points).
Properties of a point on a function
Perhaps the best-known example of the idea of locality lies in the concept of local minimum (or local maximum), which is a point in a function whose functional value is the smallest (resp., largest) within an immediate neighborhood of points. This is to be contrasted with the idea of global minimum (or global maximum), which corresponds to the minimum (resp., maximum) of the function across its entire domain.
Properties of a single space
A topological space is sometimes said to exhibit a property locally, if the property is exhibited "near" each point in one of the following ways:
Each point has a neighborhood exhibiting the property;
Each point has a neighborhood base of sets exhibiting the property.
Here, note that condition (2) is for the most part stronger than condition (1), and that extra caution should be taken to distinguish between the two. For example, some variation in the definition of locally compact can arise as a result of the different choices of these conditions.
Examples
Locally compact topological spaces
Locally connected and Locally path-connected topological spaces
Locally Hausdorff, Locally regular, Locally normal etc...
Locally metrizable
Properties of a pair of spaces
Given some notion of equivalence (e.g., homeomorphism, diffeomorphism, isometry) between topological spaces, two spaces are said to be locally equivalent if every point of the first space has a neighborhood which is equivalent to a neighborhood of the second space.
For instance, the circle and the line are very different objects. One cannot stretch the circle to look like the line, nor compress the line to fit on the circle without gaps or overlaps. However, a small piece of the circle can be stretched and flattened out to look like a small piece of the line. For this reason, one may say that the circle and the line are locally equivalent.
Similarly, the sphere and the plane are locally equivalent. A small enough observer standing on the surface of a sphere (e.g., a person and the Earth) would find it indistinguishable from a plane.
Properties of infinite groups
For an infinite group, a "small neighborhood" is taken to be a finitely generated subgroup. An infinite group is said to be locally P if every finitely generated subgroup is P. For instance, a group is locally finite if every finitely generated subgroup is finite, and a group is locally soluble if every finitely generated subgroup is soluble.
Properties of finite groups
For finite groups, a "small neighborhood" is taken to be a subgroup defined in terms of a prime number p, usually the local subgroups, the normalizers of the nontrivial p-subgroups. In which case, a property is said to be local if it can be
|