source | text
---|---|
https://en.wikipedia.org/wiki/Kazuoki%20Azuma
|
Kazuoki Azuma (born 1939) is a Japanese mathematician. Azuma's inequality in probability theory is named after him.
Publications
References
External links
Partial Bibliography at CiNii (also here, and perhaps at other slightly variant names)
1939 births
Living people
20th-century Japanese mathematicians
21st-century Japanese mathematicians
|
https://en.wikipedia.org/wiki/Casio%20ClassPad%20300
|
The Casio ClassPad 300, ClassPad 330 and fx-CP400 are stylus-based touch-screen graphing calculators. They come with a collection of applications that support self-study, like 3D Graph, Geometry, Spreadsheet, etc. A large 160×240 pixel LCD touch screen enables stylus-based operation. The series resembles Casio's earlier Pocket Viewer line. HP and Texas Instruments attempted to release similar pen-based calculators (the HP Xpander and the PET Project; see TI PLT SHH1), but both were cancelled before release to the market.
The ClassPad 300 allows input of expressions, and displays them as they appear in a textbook. Factorization of expressions, calculation of limit values of functions, and other operations can be performed while viewing the results on a large LCD screen. It also comes with graphing tools for 3D graphing and drawing of geometric figures.
The user interface features a pull-down menu format. Solutions, expressions, and other items can be selected with the tap of the stylus. Drag and drop, copy and paste, and other pen-based operations are also supported. An application allows the creation of so-called eActivities, which can include figures, expressions, and explanations.
In the United States the ClassPad series is banned from standardized tests including the SAT, the ACT, and the AP Calculus test, due to its virtual QWERTY keyboard and stylus usage.
In 2017, the fx-CG500 was released, targeted towards the North American market. While almost entirely identical to the fx-CP400, its removal of the QWERTY keyboard means it is included in the list of allowed calculators on American standardized exams, including the AP and SAT.
History
In 1996, CASIO began work on the CAS (Computer Algebra System) and on geometry software. The CAS was first used in the Casio CFX-9970G, then in the Casio Algebra FX 2.0, and later formed the core math system for the ClassPad.
In 1999, the idea of the eActivity emerged. It was intended to allow all applications to interact from within one application, and to display information in a textbook style.
In 2000, CASIO opened a new office, the CASIO Education Technology M.R.D. Center in Portland, Oregon, USA. They hired many engineers, and started to implement more features.
In 2002, CASIO completed a prototype for the ClassPad. Before the prototype was complete, an emulator was used for testing. The emulator was later included in the software that was being developed for data transfer. The data transfer and emulator software later merged into one product called the ClassPad Manager.
In 2003 and 2005, CASIO released respectively their first and second products: the ClassPad 300 with 4.5 MB of flash memory, and the ClassPad 300 Plus with 5.4 MB of flash memory and other improvements.
ClassPad OS 3.0
In 2006 CASIO released OS 3.0 for the ClassPad. OS 3.0 featured Laplace and Fourier transforms, differential equation graphs, financial functions, AP statistics, and parameterized 3D graphs. Subsequent OS releases were only available to users already running OS 3.
|
https://en.wikipedia.org/wiki/Palais%E2%80%93Smale%20compactness%20condition
|
The Palais–Smale compactness condition, named after Richard Palais and Stephen Smale, is a hypothesis for some theorems of the calculus of variations. It is useful for guaranteeing the existence of certain kinds of critical points, in particular saddle points. The Palais–Smale condition is a condition on the functional that one is trying to extremize.
In finite-dimensional spaces, the Palais–Smale condition for a continuously differentiable real-valued function is satisfied automatically for proper maps: functions which do not take unbounded sets into bounded sets. In the calculus of variations, where one is typically interested in infinite-dimensional function spaces, the condition is necessary because some extra notion of compactness beyond simple boundedness is needed. See, for example, the proof of the mountain pass theorem in section 8.5 of Evans.
Strong formulation
A continuously Fréchet differentiable functional $I \in C^1(H, \mathbb{R})$ from a Hilbert space $H$ to the reals satisfies the Palais–Smale condition if every sequence $\{u_k\}_{k=1}^{\infty} \subset H$ such that:
$\{I[u_k]\}_k$ is bounded, and
$I'[u_k] \to 0$ in $H$ (identifying $H$ with its dual space)
has a convergent subsequence in $H$.
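A standard one-dimensional non-example (added here for illustration; it is not part of the original text) shows that the condition genuinely excludes something:

```latex
% f(x) = e^x on H = R is smooth and bounded below, yet fails Palais-Smale:
% along u_k = -k both f(u_k) = e^{-k} and f'(u_k) = e^{-k} tend to 0,
% so {f(u_k)} is bounded and f'(u_k) -> 0, but the sequence u_k -> -infinity
% has no convergent subsequence; the "critical point" sits at infinity.
\[
  f(x) = e^{x}, \qquad u_k = -k: \quad
  f(u_k) \to 0, \quad f'(u_k) \to 0, \quad u_k \to -\infty .
\]
```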
Weak formulation
Let $X$ be a Banach space and $\Phi \colon X \to \mathbb{R}$ be a Gateaux differentiable functional. The functional $\Phi$ is said to satisfy the weak Palais–Smale condition if for each sequence $\{x_n\}$ in $X$ such that
$\{\Phi(x_n)\}$ is bounded,
$\Phi'(x_n) \to 0$ in $X^*$ with respect to the weak-* topology, that is,
$\langle \Phi'(x_n), h \rangle \to 0$ for all $h \in X$,
there exists a critical point $\overline{x}$ of $\Phi$ with $\liminf_{n \to \infty} \Phi(x_n) \le \Phi(\overline{x}) \le \limsup_{n \to \infty} \Phi(x_n)$.
References
Calculus of variations
|
https://en.wikipedia.org/wiki/Mountain%20pass%20theorem
|
The mountain pass theorem is an existence theorem from the calculus of variations, originally due to Antonio Ambrosetti and Paul Rabinowitz. Given certain conditions on a function, the theorem demonstrates the existence of a saddle point. The theorem is unusual in that there are many other theorems regarding the existence of extrema, but few regarding saddle points.
Statement
The assumptions of the theorem are:
$I \in C^1(H, \mathbb{R})$ is a functional from a Hilbert space $H$ to the reals,
$I'$ exists and is Lipschitz continuous on bounded subsets of $H$,
$I$ satisfies the Palais–Smale compactness condition,
$I[0] = 0$,
there exist positive constants $r$ and $a$ such that $I[u] \ge a$ if $\|u\| = r$, and
there exists an element $v \in H$ with $\|v\| > r$ such that $I[v] \le 0$.
If we define:
$\Gamma = \{ g \in C([0, 1]; H) \mid g(0) = 0,\ g(1) = v \}$
and:
$c = \inf_{g \in \Gamma} \max_{0 \le t \le 1} I[g(t)],$
then the conclusion of the theorem is that $c$ is a critical value of $I$.
Visualization
The intuition behind the theorem is in the name "mountain pass." Consider $I$ as describing elevation. Then we know two low spots in the landscape: the origin, because $I[0] = 0$, and a far-off spot $v$ where $I[v] \le 0$. In between the two lies a range of mountains (at $\|u\| = r$) where the elevation is high (higher than $a > 0$). In order to travel along a path $g$ from the origin to $v$, we must pass over the mountains, that is, we must go up and then down. Since $I$ is somewhat smooth, there must be a critical point somewhere in between. (Think along the lines of the mean-value theorem.) The mountain pass lies along the path that passes at the lowest elevation through the mountains. Note that this mountain pass is almost always a saddle point.
For a proof, see section 8.5 of Evans.
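To see the min-max characterization numerically, here is a small sketch (my own toy construction, not from the article) using the radial functional $I(u) = \|u\|^2 - \|u\|^3$ on $\mathbb{R}^2$, whose mountain-pass level is $c = 4/27$, attained on the ring $\|u\| = 2/3$:

```python
import numpy as np

# Toy mountain-pass landscape on R^2: I(u) = r^2 - r^3 with r = |u|.
# I(0) = 0, the "mountain ring" r = 2/3 has height 4/27, and I < 0 for r > 1,
# so every path from 0 to v = (2, 0) must climb over elevation 4/27.
def I(pts):
    r = np.linalg.norm(pts, axis=-1)
    return r**2 - r**3

v = np.array([2.0, 0.0])
rng = np.random.default_rng(0)

best = np.inf
for _ in range(5000):
    # piecewise-linear path from 0 to v: straight line plus random bumps
    path = np.linspace((0.0, 0.0), v, 50)
    path[1:-1] += rng.normal(0.0, 0.2, size=(48, 2))
    best = min(best, I(path).max())   # highest elevation along this path

print(best, 4 / 27)   # min over paths of the max elevation is ~ 4/27
```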
Weaker formulation
Let $X$ be a Banach space. The assumptions of the theorem are:
$\Phi \in C(X, \mathbb{R})$ and $\Phi$ has a Gateaux derivative $\Phi' \colon X \to X^*$ which is continuous when $X$ and $X^*$ are endowed with the strong topology and the weak-* topology respectively.
There exists $R > 0$ such that one can find a certain $x' \in X$ with $\|x'\| > R$ and
$\max\big(\Phi(0), \Phi(x')\big) < \inf_{\|x\| = R} \Phi(x) =: m(R)$.
$\Phi$ satisfies the weak Palais–Smale condition on $\{ x \in X : m(R) \le \Phi(x) \}$.
In this case there is a critical point $\overline{x} \in X$ of $\Phi$ satisfying $m(R) \le \Phi(\overline{x})$. Moreover, if we define
$\Gamma = \{ c \in C([0, 1], X) : c(0) = 0,\ c(1) = x' \},$
then
$\Phi(\overline{x}) \le \inf_{c \in \Gamma} \max_{0 \le t \le 1} \Phi(c(t)).$
For a proof, see section 5.5 of Aubin and Ekeland.
References
Further reading
Mathematical analysis
Calculus of variations
Theorems in analysis
|
https://en.wikipedia.org/wiki/Wold%27s%20theorem
|
In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, says that every covariance-stationary time series can be written as the sum of two time series, one deterministic and one stochastic.
Formally,
$Y_t = \sum_{j=0}^{\infty} b_j \varepsilon_{t-j} + \eta_t,$
where:
$Y_t$ is the time series being considered,
$\varepsilon_t$ is an uncorrelated sequence which is the innovation process to the process $Y_t$ – that is, a white noise process that is input to the linear filter $\{b_j\}$,
$b$ is the possibly infinite vector of moving average weights (coefficients or parameters),
$\eta_t$ is a deterministic time series, such as one represented by a sine wave.
The moving average coefficients have these properties:
Stable, that is, square summable: $\sum_{j=1}^{\infty} |b_j|^2 < \infty$
Causal (i.e. there are no terms with $j < 0$)
Minimum delay
Constant ($b_j$ independent of $t$)
It is conventional to define $b_0 = 1$.
This theorem can be considered as an existence theorem: any stationary process has this seemingly special representation. Not only is the existence of such a simple linear and exact representation remarkable, but even more so is the special nature of the moving average model. Imagine creating a process that is a moving average but not satisfying these properties 1–4. For example, the coefficients $b_j$ could define an acausal and non-minimum-delay model. Nevertheless the theorem assures the existence of a causal minimum delay moving average that exactly represents this process. How this all works for the case of causality and the minimum delay property is discussed in Scargle (1981), where an extension of the Wold decomposition is discussed.
The usefulness of the Wold Theorem is that it allows the dynamic evolution of a variable to be approximated by a linear model. If the innovations are independent, then the linear model is the only possible representation relating the observed value of to its past evolution. However, when is merely an uncorrelated but not independent sequence, then the linear model exists but it is not the only representation of the dynamic dependence of the series. In this latter case, it is possible that the linear model may not be very useful, and there would be a nonlinear model relating the observed value of to its past evolution. However, in practical time series analysis, it is often the case that only linear predictors are considered, partly on the grounds of simplicity, in which case the Wold decomposition is directly relevant.
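As a small numerical sketch of the decomposition (an illustration added here, not from the article): a stationary AR(1) process $Y_t = \phi Y_{t-1} + \varepsilon_t$ has the purely non-deterministic Wold representation $Y_t = \sum_{j \ge 0} \phi^j \varepsilon_{t-j}$, i.e. $b_j = \phi^j$ with no deterministic part:

```python
import numpy as np

# Wold/MA(infinity) view of a stationary AR(1):
#   y_t = phi * y_{t-1} + e_t   <=>   y_t = sum_{j>=0} phi**j * e_{t-j}
rng = np.random.default_rng(0)
phi, n = 0.7, 100_000
eps = rng.standard_normal(n)

# Simulate the AR(1) recursion directly.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

# Rebuild it from the truncated Wold (moving average) representation.
J = 50                              # phi**50 is negligible
b = phi ** np.arange(J)             # Wold coefficients, b_0 = 1
y_ma = np.convolve(eps, b)[:n]      # sum_{j<J} b_j * eps_{t-j}

print(np.max(np.abs(y[J:] - y_ma[J:])))   # ~1e-7: equal up to truncation
```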
The Wold representation depends on an infinite number of parameters, although in practice they usually decay rapidly. The autoregressive model is an alternative that may have only a few coefficients if the corresponding moving average has many. These two models can be combined into an autoregressive-moving average (ARMA) model, or an autoregressive-integrated-moving average (ARIMA) model if non-stationarity is involved. See Scargle (1981) and references there; in addition, this paper gives an extension of the Wold theorem that allows more
|
https://en.wikipedia.org/wiki/Wold%27s%20decomposition
|
In mathematics, particularly in operator theory, Wold decomposition or Wold–von Neumann decomposition, named after Herman Wold and John von Neumann, is a classification theorem for isometric linear operators on a given Hilbert space. It states that every isometry is a direct sum of copies of the unilateral shift and a unitary operator.
In time series analysis, the theorem implies that any stationary discrete-time stochastic process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.
Details
Let $H$ be a Hilbert space, $L(H)$ be the bounded operators on $H$, and $V \in L(H)$ be an isometry. The Wold decomposition states that every isometry $V$ takes the form
$V = \Big( \bigoplus_{\alpha \in A} S \Big) \oplus U$
for some index set $A$, where $S$ is the unilateral shift on a Hilbert space $H_\alpha$, and $U$ is a unitary operator (possibly vacuous). The family $\{H_\alpha\}_{\alpha \in A}$ consists of isomorphic Hilbert spaces.
A proof can be sketched as follows. Successive applications of $V$ give a descending sequence of copies of $H$ isomorphically embedded in itself:
$H = H_0 \supset H_1 \supset H_2 \supset \cdots,$
where $H_i = V^i(H)$ and $V(H)$ denotes the range of $V$. If one defines
$M_i = H_i \ominus H_{i+1} = V^i(H) \ominus V^{i+1}(H) \quad (i \ge 0),$
then
$H = \Big( \bigoplus_{i \ge 0} M_i \Big) \oplus \bigcap_{i \ge 0} H_i = K_1 \oplus K_2.$
It is clear that $K_1$ and $K_2$ are invariant subspaces of $V$.
So $V(K_2) = K_2$. In other words, $V$ restricted to $K_2$ is a surjective isometry, i.e., a unitary operator $U$.
Furthermore, each $M_i$ is isomorphic to another, with $V$ being an isomorphism between $M_i$ and $M_{i+1}$: $V$ "shifts" $M_i$ to $M_{i+1}$. Suppose the dimension of each $M_i$ is some cardinal number $\alpha$. We see that $K_1$ can be written as a direct sum of Hilbert spaces
$K_1 = \bigoplus_{\alpha \in A} H_\alpha,$
where each $H_\alpha$ is an invariant subspace of $V$ and $V$ restricted to each $H_\alpha$ is the unilateral shift $S$. Therefore
$V = \Big( \bigoplus_{\alpha \in A} S \Big) \oplus U,$
which is a Wold decomposition of $V$.
Remarks
It is immediate from the Wold decomposition that the spectrum of any proper, i.e. non-unitary, isometry is the unit disk in the complex plane.
An isometry $V$ is said to be pure if, in the notation of the above proof, $\bigcap_{i \ge 0} H_i = \{0\}$. The multiplicity of a pure isometry $V$ is the dimension of the kernel of $V^*$, i.e. the cardinality of the index set $A$ in the Wold decomposition of $V$. In other words, a pure isometry of multiplicity $N$ takes the form
$V = \bigoplus_{1 \le \alpha \le N} S.$
In this terminology, the Wold decomposition expresses an isometry as a direct sum of a pure isometry and a unitary operator.
A subspace $M$ is called a wandering subspace of $V$ if $V^n(M) \perp V^m(M)$ for all $n \neq m$. In particular, each $M_i$ defined above is a wandering subspace of $V$.
A sequence of isometries
The decomposition above can be generalized slightly to a sequence of isometries, indexed by the integers.
The C*-algebra generated by an isometry
Consider an isometry V ∈ L(H). Denote by C*(V) the C*-algebra generated by V, i.e. C*(V) is the norm closure of polynomials in V and V*. The Wold decomposition can be applied to characterize C*(V).
Let $C(\mathbf{T})$ be the continuous functions on the unit circle $\mathbf{T}$. We recall that the C*-algebra $C^*(S)$ generated by the unilateral shift $S$ takes the following form:
$C^*(S) = \{ T_f + K : f \in C(\mathbf{T}) \text{ and } K \text{ is a compact operator} \},$
where $T_f$ is a Toeplitz operator with continuous symbol $f$.
|
https://en.wikipedia.org/wiki/Forecast%20error
|
In statistics, a forecast error is the difference between the actual or real and the predicted or forecast value of a time series or any other phenomenon of interest. Since the forecast error is derived from the same scale of data, comparisons between the forecast errors of different series can only be made when the series are on the same scale.
In simple cases, a forecast is compared with an outcome at a single time-point and a summary of forecast errors is constructed over a collection of such time-points. Here the forecast may be assessed using the difference or using a proportional error. By convention, the error is defined using the value of the outcome minus the value of the forecast.
In other cases, a forecast may consist of predicted values over a number of lead-times; in this case an assessment of forecast error may need to consider more general ways of assessing the match between the time-profiles of the forecast and the outcome. If a main application of the forecast is to predict when certain thresholds will be crossed, one possible way of assessing the forecast is to use the timing-error—the difference in time between when the outcome crosses the threshold and when the forecast does so. When there is interest in the maximum value being reached, assessment of forecasts can be done using any of:
the difference of times of the peaks;
the difference in the peak values in the forecast and outcome;
the difference between the peak value of the outcome and the value forecast for that time point.
Forecast error can be a calendar forecast error or a cross-sectional forecast error, when we want to summarize the forecast error over a group of units. If we observe the average forecast error for a time-series of forecasts for the same product or phenomenon, then we call this a calendar forecast error or time-series forecast error. If we observe this for multiple products for the same period, then this is a cross-sectional performance error. Reference class forecasting has been developed to reduce forecast error. Combining forecasts has also been shown to reduce forecast error.
Calculating forecast error
The forecast error is the difference between the observed value and its forecast based on all previous observations. If the error is denoted as $e_t$, then the forecast error can be written as
$e_t = y_t - \hat{y}_{t \mid t-1},$
where:
$y_t$ = the observation,
$\hat{y}_{t \mid t-1}$ = the forecast of $y_t$ based on all previous observations.
Forecast errors can be evaluated using a variety of methods, namely mean percentage error, root mean squared error, mean absolute percentage error, and mean squared error. Other methods include tracking signal and forecast bias.
For forecast errors on training data:
$y_t$ denotes the observation and $\hat{y}_{t \mid t-1}$ is the forecast, giving the error $e_t = y_t - \hat{y}_{t \mid t-1}$.
For forecast errors on test data:
$y_{T+h}$ denotes the actual value of the $h$-step observation and the forecast is denoted as $\hat{y}_{T+h \mid T}$, giving the error $e_{T+h} = y_{T+h} - \hat{y}_{T+h \mid T}$.
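A minimal sketch of the error summaries named above (the function and the sample numbers are illustrative, not from the article):

```python
import numpy as np

def summarize_forecast_errors(actual, forecast):
    """Forecast error e_t = actual - forecast (by convention), summarized."""
    actual = np.asarray(actual, dtype=float)
    e = actual - np.asarray(forecast, dtype=float)
    return {
        "MAE":  np.mean(np.abs(e)),                 # mean absolute error
        "RMSE": np.sqrt(np.mean(e ** 2)),           # root mean squared error
        "MAPE": 100 * np.mean(np.abs(e / actual)),  # mean absolute % error
        "bias": np.mean(e),                         # mean error (forecast bias)
    }

print(summarize_forecast_errors([102, 98, 105, 110], [100, 100, 100, 108]))
```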
Academic literature
Dreman and Berry, in a 1995 Financial Analysts Journal article, argued that securities analysts' forecasts are too optimistic, and that
|
https://en.wikipedia.org/wiki/Semiparametric%20model
|
In statistics, a semiparametric model is a statistical model that has parametric and nonparametric components.
A statistical model is a parameterized family of distributions $\{ P_\theta : \theta \in \Theta \}$, indexed by a parameter $\theta$.
A parametric model is a model in which the indexing parameter $\theta$ is a vector in $k$-dimensional Euclidean space, for some nonnegative integer $k$. Thus, $\theta$ is finite-dimensional, and $\Theta \subseteq \mathbb{R}^k$.
With a nonparametric model, the set of possible values of the parameter $\theta$ is a subset of some space $V$, which is not necessarily finite-dimensional. For example, we might consider the set of all distributions with mean 0. Such spaces are vector spaces with topological structure, but may not be finite-dimensional as vector spaces. Thus, $\Theta \subseteq V$ for some possibly infinite-dimensional space $V$.
With a semiparametric model, the parameter has both a finite-dimensional component and an infinite-dimensional component (often a real-valued function defined on the real line). Thus, $\Theta \subseteq \mathbb{R}^k \times V$, where $V$ is an infinite-dimensional space.
It may appear at first that semiparametric models include nonparametric models, since they have an infinite-dimensional as well as a finite-dimensional component. However, a semiparametric model is considered to be "smaller" than a completely nonparametric model because we are often interested only in the finite-dimensional component of $\theta$. That is, the infinite-dimensional component is regarded as a nuisance parameter. In nonparametric models, by contrast, the primary interest is in estimating the infinite-dimensional parameter. Thus the estimation task is statistically harder in nonparametric models.
These models often use smoothing or kernels.
Example
A well-known example of a semiparametric model is the Cox proportional hazards model. If we are interested in studying the time $T$ to an event such as death due to cancer or failure of a light bulb, the Cox model specifies the following distribution function for $T$:
$F(t) = 1 - \exp\left( -\int_0^t \lambda_0(u)\, e^{\beta' x}\, du \right),$
where $x$ is the covariate vector, and $\beta$ and $\lambda_0(u)$ are unknown parameters, so that $\theta = (\beta, \lambda_0)$. Here $\beta$ is finite-dimensional and is of interest; $\lambda_0$ is an unknown non-negative function of time (known as the baseline hazard function) and is often a nuisance parameter. The set of possible candidates for $\lambda_0$ is infinite-dimensional.
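A standard fact worth recording here (it is not stated in the excerpt above) is why the infinite-dimensional nuisance component can be sidestepped: Cox's partial likelihood for $\beta$ does not involve $\lambda_0$ at all, since the baseline hazard cancels from each factor:

```latex
% Cox partial likelihood (assuming no tied event times): the product runs
% over observed event times t_i; the denominator sums over the risk set,
% i.e. the subjects still under observation at t_i. lambda_0 cancels, so
% beta is estimable without ever modelling the baseline hazard.
\[
  L(\beta) \;=\; \prod_{i \,:\, \text{event at } t_i}
    \frac{e^{\beta' x_i}}{\sum_{j \,:\, t_j \ge t_i} e^{\beta' x_j}} .
\]
```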
See also
Semiparametric regression
Statistical model
Generalized method of moments
Notes
References
Begun, Janet M.; Hall, W. J.; Huang, Wei-Min; Wellner, Jon A. (1983), "Information and asymptotic efficiency in parametric–nonparametric models", Annals of Statistics, 11(2), 432–452
Mathematical and quantitative methods (economics)
|
https://en.wikipedia.org/wiki/Georges%20Matheron
|
Georges François Paul Marie Matheron (2 December 1930 – 7 August 2000) was a French mathematician and civil engineer of mines, known as the founder of geostatistics and a co-founder (together with Jean Serra) of mathematical morphology. In 1968, he created the Centre de Géostatistique et de Morphologie Mathématique at the Paris School of Mines in Fontainebleau. He is known for his contributions on Kriging and mathematical morphology. His seminal work is posted for study and review to the Online Library of the Centre de Géostatistique, Fontainebleau, France.
Early career
Matheron graduated from École Polytechnique and later Ecole des Mines de Paris, where he studied mathematics, physics and probability theory (as a student of Paul Lévy).
From 1954 to 1963, he worked with the French Geological Survey in Algeria and France, and was influenced by the works of Krige, Sichel, and de Wijs, from the South African school, on the gold deposits of the Witwatersrand. This influence led him to develop the major concepts of the theory for estimating resources he named Geostatistics.
Geostatistics
Matheron’s [Formule des Minerais Connexes] became his Note Statistique No 1. In this paper of 25 November 1954, Matheron derived the degree of associative dependence between lead and silver grades of core samples. In his Rectificatif of 13 January 1955, he revised the arithmetic mean lead and silver grades because his core samples varied in length. He did derive the length-weighted average lead and silver grades but failed to derive the variances of his weighted averages. Neither did he derive the degree of associative dependence between metal grades of ordered core samples as a measure for spatial dependence between ordered core samples. He did not disclose his primary data set and worked mostly with symbols rather than real measured values, such as test results for lead and silver in his core samples. Matheron's Interprétations des corrélations entre variables aléatoires lognormales of 29 November 1954 was marked Note statistique No 2. In this paper, Matheron explored lognormal variables and set the stage for statistics by symbols. Primary data would have allowed him to assess whether or not lead and silver grades departed from the lognormal distribution, or displayed spatial dependence along core samples in his borehole.
Matheron coined the eponym krigeage (kriging) for the first time in his 1960 Krigeage d’un Panneau Rectangulaire par sa Périphérie. In this Note géostatistique No 28, Matheron derived k*, his "estimateur" and a precursor to the kriged estimate or kriged estimator. In mathematical statistics, Matheron’s k* is the length-weighted average grade of a single panneau (panel) in his set. What Matheron failed to derive in this paper was var(k*), the variance of his estimateur. Matheron presented his Stationary Random Function at the first colloquium on geostatistics in the USA. He called on Brownian motion to conjecture the continuity of his Riemann integral
|
https://en.wikipedia.org/wiki/Approximations%20of%20%CF%80
|
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 15th century (through the efforts of Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.
The record of manual approximation of π is held by William Shanks, who calculated 527 digits correctly in 1853. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of π). On 8 June 2022, the current record was established by Emma Haruka Iwao with Alexander Yee's y-cruncher with 100 trillion ($10^{14}$) digits.
Early history
The best known approximations to dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
Some Egyptologists have claimed that the ancient Egyptians used an approximation of π as 22/7 = 3.142857... (about 0.04% too high) from as early as the Old Kingdom. This claim has been met with skepticism.
Babylonian mathematics usually approximated π to 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible). The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better approximation of π as 25/8 = 3.125, about 0.528% below the exact value.
At about the same time, the Egyptian Rhind Mathematical Papyrus (dated to the Second Intermediate Period, c. 1600 BCE, although stated to be a copy of an older, Middle Kingdom text) implies an approximation of π as 256/81 ≈ 3.16 (accurate to 0.6 percent) by calculating the area of a circle via approximation with the octagon.
Astronomical calculations in the Shatapatha Brahmana (c. 6th century BCE) use a fractional approximation of 339/108 ≈ 3.139.
The Mahabharata (500 BCE – 300 CE) offers an approximation of 3, in the ratios offered in Bhishma Parva verses: 6.12.40–45.
In the 3rd century BCE, Archimedes proved the sharp inequalities 223/71 < π < 22/7, by means of regular 96-gons (accuracies of 2·10−4 and 4·10−4, respectively).
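Archimedes' bounds can be reproduced with the classical doubling recurrences for the half-perimeters of circumscribed and inscribed regular polygons (a sketch added for illustration; this is the standard Archimedes/Borchardt scheme, not a reconstruction of his original computation):

```python
from math import sqrt

# Half-perimeters of regular n-gons around/inside the unit circle:
# a = circumscribed (upper bound), b = inscribed (lower bound), b < pi < a.
a, b = 2 * sqrt(3), 3.0          # n = 6 (hexagons)
for _ in range(4):               # double the sides: 6 -> 12 -> 24 -> 48 -> 96
    a = 2 * a * b / (a + b)      # harmonic mean gives the new circumscribed
    b = sqrt(a * b)              # geometric mean gives the new inscribed
print(b, a)                      # 3.14103... < pi < 3.14271...
print(223 / 71, 22 / 7)          # Archimedes' rational bounds for comparison
```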
In the 2nd century CE, Ptolemy used the value 377/120, the first known approximation accurate to three decimal places (accuracy 2·10−5). It is equal to 3 + 8/60 + 30/60², which is accurate to two sexagesimal digits.
The Chinese mathematician Liu Hui in
|
https://en.wikipedia.org/wiki/Hopf%20invariant
|
In mathematics, in particular in algebraic topology, the Hopf invariant is a homotopy invariant of certain maps between n-spheres.
Motivation
In 1931 Heinz Hopf used Clifford parallels to construct the Hopf map
$\eta \colon S^3 \to S^2,$
and proved that $\eta$ is essential, i.e., not homotopic to the constant map, by using the fact that the linking number of the circles
$\eta^{-1}(x),\ \eta^{-1}(y) \subset S^3$
is equal to 1, for any $x \neq y \in S^2$.
It was later shown that the homotopy group $\pi_3(S^2)$ is the infinite cyclic group generated by $\eta$. In 1951, Jean-Pierre Serre proved that the rational homotopy groups
$\pi_i(S^n) \otimes \mathbb{Q}$
for an odd-dimensional sphere ($n$ odd) are zero unless $i$ is equal to 0 or $n$. However, for an even-dimensional sphere ($n$ even), there is one more bit of infinite cyclic homotopy in degree $2n - 1$.
Definition
Let $\phi \colon S^{2n-1} \to S^n$ be a continuous map (assume $n > 1$). Then we can form the cell complex
$C_\phi = S^n \cup_\phi D^{2n},$
where $D^{2n}$ is a $2n$-dimensional disc attached to $S^n$ via $\phi$.
The cellular chain groups are just freely generated on the $i$-cells in degree $i$, so they are $\mathbb{Z}$ in degrees 0, $n$ and $2n$ and zero everywhere else. Cellular (co-)homology is the (co-)homology of this chain complex, and since all boundary homomorphisms must be zero (recall that $n > 1$), the cohomology is
$H^i(C_\phi) = \mathbb{Z}$ for $i = 0, n, 2n$, and $0$ otherwise.
Denote the generators of the cohomology groups by
$H^n(C_\phi) = \mathbb{Z}\langle \alpha \rangle$
and
$H^{2n}(C_\phi) = \mathbb{Z}\langle \beta \rangle.$
For dimensional reasons, all cup-products between those classes must be trivial apart from $\alpha \smile \alpha$. Thus, as a ring, the cohomology is
$H^*(C_\phi) = \mathbb{Z}[\alpha, \beta] / \langle \beta \smile \beta = \alpha \smile \beta = 0,\ \alpha \smile \alpha = h(\phi)\beta \rangle.$
The integer $h(\phi)$ is the Hopf invariant of the map $\phi$.
Properties
Theorem: The map $h \colon \pi_{2n-1}(S^n) \to \mathbb{Z}$ is a homomorphism.
If $n$ is odd, $h$ is trivial (since $\pi_{2n-1}(S^n)$ is torsion).
If $n$ is even, the image of $h$ contains $2\mathbb{Z}$. Moreover, the image of the Whitehead product of identity maps equals 2, i.e. $h([i_n, i_n]) = 2$, where $i_n \colon S^n \to S^n$ is the identity map and $[\,\cdot\,,\,\cdot\,]$ is the Whitehead product.
The Hopf invariant is 1 for the Hopf maps (where $n = 1, 2, 4, 8$), corresponding to the real division algebras $\mathbb{A} = \mathbb{R}, \mathbb{C}, \mathbb{H}, \mathbb{O}$, respectively, and to the fibration $S(\mathbb{A}^2) \to \mathbb{AP}^1$ sending a direction on the sphere to the subspace it spans. It is a theorem, proved first by Frank Adams, and subsequently by Adams and Michael Atiyah with methods of topological K-theory, that these are the only maps with Hopf invariant 1.
Whitehead integral formula
J. H. C. Whitehead has proposed the following integral formula for the Hopf invariant.
Given a map $\phi \colon S^{2n-1} \to S^n$, one considers a volume form $\omega$ on $S^n$ such that $\int_{S^n} \omega = 1$.
Since $d\omega = 0$, the pullback $\phi^* \omega$ is a closed differential form: $d(\phi^* \omega) = 0$.
By Poincaré's lemma it is an exact differential form: there exists an $(n-1)$-form $\eta$ on $S^{2n-1}$ such that $d\eta = \phi^* \omega$. The Hopf invariant is then given by
$h(\phi) = \int_{S^{2n-1}} \eta \wedge d\eta.$
Generalisations for stable maps
A very general notion of the Hopf invariant can be defined, but it requires a certain amount of homotopy theoretic groundwork:
Let $V$ denote a vector space and $V^\infty$ its one-point compactification, i.e. $V^\infty = V \cup \{\infty\}$ and
$(V \oplus W)^\infty = V^\infty \wedge W^\infty$
for vector spaces $V$ and $W$.
If $X$ is any pointed space (as it is implicitly in the previous section), and if we take the point at infinity to be the basepoint of $V^\infty$, then we can form the wedge products
$V^\infty \wedge X.$
Now let
$F \colon V^\infty \wedge X \to V^\infty \wedge Y$
be a stable map, i.e. stable under the reduced suspension functor. The (stable) geometric Hopf invariant of $F$ is
$h(F),$
an element of the stable $\mathbb{Z}/2$-equivariant homotopy group
|
https://en.wikipedia.org/wiki/Alternant%20matrix
|
In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.
Generally, if $f_1, f_2, \dots, f_n$ are functions from a set $X$ to a field $F$, and $\alpha_1, \alpha_2, \dots, \alpha_m \in X$, then the alternant matrix has size $m \times n$ and is defined by
$M = \begin{pmatrix} f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\ f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m) \end{pmatrix},$
or, more compactly, $M_{ij} = f_j(\alpha_i)$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_j(\alpha) = \alpha^{j-1}$, and Moore matrices, for which $f_j(\alpha) = \alpha^{q^{j-1}}$.
Properties
The alternant can be used to check the linear independence of the functions in function space. For example, let $f_1(x) = \sin x$, $f_2(x) = \cos x$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi/2$. Then the alternant is the matrix $M = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and the alternant determinant is $-1 \neq 0$. Therefore $M$ is invertible and the vectors form a basis for their spanning set: in particular, $\sin x$ and $\cos x$ are linearly independent.
Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_1(x) = \sin x$, $f_2(x) = \cos x$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi$. Then the alternant is $\begin{pmatrix} 0 & 1 \\ 0 & -1 \end{pmatrix}$ and the alternant determinant is 0, but we have already seen that $\sin x$ and $\cos x$ are linearly independent.
Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers $A$ and $B$ for which $\frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)}$. Choosing $f_1(x) = \frac{1}{x+1}$, $f_2(x) = \frac{1}{x+2}$, $f_3(x) = \frac{1}{(x+1)(x+2)}$ and $(\alpha_1, \alpha_2) = (0, 1)$, we obtain the alternant $M = \begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1/3 & 1/6 \end{pmatrix}$. Therefore, $(1, -1, -1)$ is in the nullspace of the matrix: that is, $f_1 - f_2 - f_3 = 0$. Moving $f_3$ to the other side of the equation gives the partial fraction decomposition
$\frac{1}{(x+1)(x+2)} = \frac{1}{x+1} - \frac{1}{x+2}.$
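In code, the same nullspace computation looks as follows (a sketch using the partial-fraction example above; the functions and sample points are an arbitrary illustrative choice):

```python
import numpy as np

# Recover A, B with A/(x+1) + B/(x+2) = 1/((x+1)*(x+2)) by evaluating the
# basis functions at two sample points and solving the alternant system.
f = (lambda x: 1 / (x + 1), lambda x: 1 / (x + 2))
target = lambda x: 1 / ((x + 1) * (x + 2))

xs = np.array([0.0, 1.0])                          # sample points
M = np.array([[fj(x) for fj in f] for x in xs])    # 2x2 alternant matrix
A, B = np.linalg.solve(M, target(xs))
print(A, B)   # 1.0 -1.0, i.e. 1/((x+1)(x+2)) = 1/(x+1) - 1/(x+2)
```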
If $m = n$ and $\alpha_i = \alpha_j$ for any $i \neq j$, then the alternant determinant is zero (as a row is repeated).
If $m = n$ and the functions $f_j$ are all polynomials, then $(\alpha_j - \alpha_i)$ divides the alternant determinant for all $1 \le i < j \le n$. In particular, if $V$ is a Vandermonde matrix, then $\det V = \prod_{i < j} (\alpha_j - \alpha_i)$ divides such polynomial alternant determinants. The ratio $\det M / \det V$ is therefore a polynomial in $\alpha_1, \dots, \alpha_n$ called the bialternant. The Schur polynomial $s_{(\lambda_1, \dots, \lambda_n)}$ is classically defined as the bialternant of the polynomials $f_j(x) = x^{\lambda_j + n - j}$.
Applications
Alternant matrices are used in coding theory in the construction of alternant codes.
See also
List of matrices
Wronskian
References
Matrices
Determinants
|
https://en.wikipedia.org/wiki/V%C3%ADctor%20Neumann-Lara
|
Víctor Neumann-Lara (1933–2004) was a Mexican mathematician and a pioneer in the field of graph theory in Mexico. His work also covers general topology, game theory and combinatorics.
Biography
Born in the city of Huejutla de Reyes, Hidalgo, Mexico, he soon moved to Mexico City, where he received his bachelor's degree in mathematics from the School of Sciences, UNAM.
His life was greatly devoted to teaching, giving over 100 courses in Mexico and around the world, and introducing new teaching methods. He carried color chalks with him all the time, and was prompt to give graphic explanations.
Work
A full professor at the Institute of Mathematics, UNAM, he directed over 15 theses and taught both in the Institute and in the Faculty of Sciences. Below is a selection of his multiple publications, which earned him over 120 citations from renowned mathematicians in the area of graph theory.
In 1982 he introduced the notion of the dichromatic number of a digraph, which would eventually be used in kernel theory and tournament theory.
Selected publications
Francisco Larrión, Víctor Neumann-Lara, Miguel A. Pizaña, Thomas Dale Porter "A hierarchy of self-clique graphs" Discrete Mathematics 282(1–3): 193–208 (2004)
M. E. Frías-Armenta, Víctor Neumann-Lara, Miguel A. Pizaña "Dismantlings and iterated clique graphs" Discrete Mathematics 282(1–3): 263–265 (2004)
Xueliang Li, Víctor Neumann-Lara, Eduardo Rivera-Campo "On a tree graph defined by a set of cycles" Discrete Mathematics 271(1–3): 303–310 (2003)
Juan José Montellano-Ballesteros, Víctor Neumann-Lara "An Anti-Ramsey Theorem" Combinatorica 22(3): 445–449 (2002)
Francisco Larrión, Víctor Neumann-Lara "On clique divergent graphs with linear growth" Discrete Mathematics 245(1–3): 139–153 (2002)
Francisco Larrión, Víctor Neumann-Lara, Miguel A. Pizaña "Whitney triangulations, local girth and iterated clique graphs" Discrete Mathematics 258(1–3): 123–135 (2002)
Francisco Larrión, Víctor Neumann-Lara, Miguel A. Pizaña "On the homotopy type of the clique graph" J. Braz. Comp. Soc. 7(3): 69–73 (2001)
Francisco Larrión, Víctor Neumann-Lara "Locally C6 graphs are clique divergent" Discrete Mathematics 215: 159–170 (2000)
Manuel Abellanas, G. Hernandez, Rolf Klein, Víctor Neumann-Lara, Jorge Urrutia "A Combinatorial Property of Convex Sets" Discrete & Computational Geometry 17(3): 307–318 (1997)
Manuel Abellanas, G. Hernandez, Rolf Klein, Víctor Neumann-Lara, Jorge Urrutia "Voronoi Diagrams and Containment of Families of Convex Sets on the Plane" Symposium on Computational Geometry 71–78 (1995)
Jorge L. Arocha, Javier Bracho, Víctor Neumann-Lara "Tight and Untight Triangulations of Surfaces by Complete Graphs" J. Comb. Theory, Ser. B 63(2): 185–199 (1995)
Víctor Neumann-Lara, Eduardo Rivera-Campo "Spanning trees with bounded degrees" Combinatorica 11(1): 55–61 (1991)
Roland Häggkvist, Pavol Hell, Donald J. Miller, Víctor Neumann-Lara "On multiplicative graphs and the product conjecture" Combinatorica 8(1): 63
|
https://en.wikipedia.org/wiki/Spelling%20of%20disc
|
Disc and disk are both variants of the English word for objects of a generally thin and cylindrical geometry. The differences in spelling correspond both with regional differences and with different senses of the word. For example, in the case of flat, rotational data storage media the convention is that the spelling disk is used for magnetic storage (e.g., hard disks) while disc is used for optical storage (e.g., compact discs, better known as CDs). When there is no clear convention, the spelling disk is more popular in American English, while the spelling disc is more popular in British English.
Disk
The earlier word is disk, which came into the English language in the middle of the 17th century. In the 19th century, disk became the conventional spelling for audio recordings made on a flat plate, such as the gramophone record. Early BBC technicians differentiated between disks (in-house transcription records) and discs (the colloquial term for commercial gramophone records, or what the BBC dubbed CGRs).
UK versus U.S.
By the 20th century, the "k" spelling was more popular in the United States, while the "c" variant was preferred in the UK. In the 1950s, when the American company IBM pioneered the first hard disk drive storage devices, it used the "k" spelling. Consequently, in computer terminology today it is common for the "k" word to refer mainly to magnetic storage devices (particularly in British English, where the term disk is sometimes regarded as a contraction of diskette, a much later word and actually a diminutive of disk).
Computer discs
Some latter-day competitors to IBM prefer the c-spelling. In 1979, the Dutch company Philips, along with Sony, developed and trademarked the compact disc using the "c" spelling. The "c" spelling is now used consistently for optical media such as the compact disc and similar technologies.
Medical editing
The words disc and disk can appear frequently in medical journals and textbooks, especially those in ophthalmology and orthopedics, and thus style guides often foster consistency by giving rules for which contexts take which spelling. AMA style for this topic is used by many publications. AMA says, "For ophthalmologic terms, use disc (e.g., optic disc); for other anatomical terms, use disk (e.g., lumbar disk). In discussions related to computers, use disk (e.g., floppy disk, disk drive, diskette) (exceptions: compact disc, videodisc)."
Sports
Disc sports, or disc games, are a category of activities which involve throwing and/or catching a flying disc. Participants of disc sports consistently use the "c" spelling when describing the sports equipment used in these activities, which includes team sports such as ultimate or individual sports such as disc golf.
References
Further reading
Apple Support Document HT2300: What's the difference between a "disc" and a "disk?"
Disc
American and British English differences
|
https://en.wikipedia.org/wiki/Filling
|
Filling may refer to:
a food mixture used for stuffing
Frosting used between layers of a cake
Dental restoration
Symplectic filling, a kind of cobordism in mathematics
Part of the leather crusting process
See also
Fill (disambiguation)
|
https://en.wikipedia.org/wiki/Kurt%20Mahler
|
Kurt Mahler FRS (26 July 1903, Krefeld, Germany – 25 February 1988, Canberra, Australia) was a German mathematician who worked in the fields of transcendental number theory, diophantine approximation, p-adic analysis, and the geometry of numbers.
Career
Mahler was a student at the universities in Frankfurt and Göttingen, graduating with a Ph.D. from Johann Wolfgang Goethe University of Frankfurt am Main in 1927; his advisor was Carl Ludwig Siegel.
He left Germany with the rise of Adolf Hitler and accepted an invitation by Louis Mordell to go to Manchester. However, at the start of World War II he was interned as an enemy alien in Central Camp in Douglas, Isle of Man, where he met Kurt Hirsch, although he was released after only three months. He became a British citizen in 1946.
Mahler held the following positions:
University of Groningen
Assistant 1934–1936
University of Manchester
Assistant Lecturer, 1937–1939, 1941–1944
Lecturer, 1944–1947; Senior Lecturer, 1948–1949; Reader, 1949–1952
Professor of Mathematical Analysis, 1952–1963
Professor of Mathematics, Institute of Advanced Studies, Australian National University, 1963–1968 and 1972–1975
Professor of Mathematics, Ohio State University, USA, 1968–1972
Professor Emeritus, Australian National University, from 1975.
Research
Mahler worked in a broad variety of mathematical disciplines, including transcendental number theory, diophantine approximation, p-adic analysis, and the geometry of numbers.
Mahler proved that the Prouhet–Thue–Morse constant and the Champernowne constant 0.1234567891011121314151617181920... are transcendental numbers.
Mahler was the first to give an irrationality measure for pi, in 1953. Although some have suggested the irrationality measure of pi is likely to be 2, the current best estimate is 7.103205334137…, due to Doron Zeilberger and Wadim Zudilin.
Awards
He was elected a member of the Royal Society in 1948 and a member of the Australian Academy of Science in 1965. He was awarded the London Mathematical Society's Senior Berwick Prize in 1950, the De Morgan Medal, 1971, and the Thomas Ranken Lyle Medal, 1977.
Personal life
Mahler spoke fluent Japanese and was an expert photographer.
See also
Mahler's inequality
Mahler measure
Mahler polynomial
Mahler volume
Mahler's theorem
Mahler's compactness theorem
Skolem–Mahler–Lech theorem
References
External links
1903 births
1988 deaths
20th-century German mathematicians
Fellows of the Royal Society
Fellows of the Australian Academy of Science
Mathematical analysts
Ohio State University faculty
German emigrants to Australia
Academics of the Victoria University of Manchester
People from Krefeld
People interned in the Isle of Man during World War II
|
https://en.wikipedia.org/wiki/Ramsj%C3%B6
|
Ramsjö is a village in Ljusdal Municipality, Hälsingland, Gävleborg County, Sweden, with about 306 inhabitants (2004, Statistics Sweden).
References
Populated places in Ljusdal Municipality
Hälsingland
|
https://en.wikipedia.org/wiki/Hennan
|
Hennan is a village in Ljusdal Municipality, Hälsingland, Gävleborg County, Sweden, with about 227 inhabitants (2004, Statistics Sweden).
Populated places in Ljusdal Municipality
Hälsingland
|
https://en.wikipedia.org/wiki/Korskrogen
|
Korskrogen is a village in Ljusdal Municipality, Hälsingland, Gävleborg County, Sweden, with about 202 inhabitants (2004, Statistics Sweden).
Populated places in Ljusdal Municipality
Hälsingland
|
https://en.wikipedia.org/wiki/K%C3%A5rb%C3%B6le
|
Kårböle is a village in Ljusdal Municipality, Hälsingland, Gävleborg County, Sweden with about 134 inhabitants (2004, Statistics Sweden). The Kårböle stave church can be found here.
Populated places in Ljusdal Municipality
Hälsingland
|
https://en.wikipedia.org/wiki/1728%20%28number%29
|
1728 is the natural number following 1727 and preceding 1729. It is a dozen gross, or one great gross (or grand gross). It is also the number of cubic inches in a cubic foot.
In mathematics
1728 is the cube of 12, and therefore equal to the product of the six divisors of 12 (1, 2, 3, 4, 6, 12). It is also the product of the first four composite numbers (4, 6, 8, and 9), which makes it a compositorial. As a cubic perfect power, it is also a highly powerful number, with a record-setting value (18) for the product of the exponents (6 and 3) in its prime factorization $1728 = 2^6 \times 3^3$.
It is also a Jordan–Pólya number, being a product of factorials: $1728 = 2! \cdot (3!)^2 \cdot 4!$
1728 has twenty-eight divisors, which is a perfect count (as with 12, which has six divisors). It also has a Euler totient of 576, or $24^2$, which divides 1728 exactly three times.
1728 is an abundant and semiperfect number, as it is smaller than the sum of its proper divisors yet equal to the sum of a subset of its proper divisors.
It is a practical number as each smaller number is the sum of distinct divisors of 1728, and an integer-perfect number where its divisors can be partitioned into two disjoint sets with equal sum.
1728 is 3-smooth, since its only distinct prime factors are 2 and 3. This also makes 1728 a regular number; regular numbers are most useful in the context of powers of 60, the smallest number with twelve divisors.
1728 is also an untouchable number since there is no number whose sum of proper divisors is 1728.
Many relevant calculations involving 1728 are computed in the duodecimal number system, in which it is represented as "1000".
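The arithmetic facts above are quick to verify (a SymPy sketch added for illustration, not part of the original article):

```python
from sympy import divisors, factorint, totient

n = 1728
print(factorint(n))                        # {2: 6, 3: 3}, i.e. 2**6 * 3**3
print(len(divisors(n)))                    # 28 divisors (a perfect number)
print(totient(n), 24**2, n // totient(n))  # 576, 576, 3: totient divides n
print(sum(divisors(n)) - n > n)            # True: 1728 is abundant
```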
Modular j-invariant
1728 occurs in the algebraic formula for the j-invariant of an elliptic curve, as a function over a complex variable $\tau$ on the upper half-plane $\mathcal{H}$:
$j(\tau) = 1728 \frac{g_2(\tau)^3}{g_2(\tau)^3 - 27 g_3(\tau)^2}.$
Inputting a value of $2i$ for $\tau$, where $i$ is the imaginary number, yields another cubic integer:
$j(2i) = 66^3.$
In moonshine theory, the first few terms in the Fourier $q$-expansion of the normalized j-invariant expand as
$j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^2 + \cdots$
The Griess algebra (which has the friendly giant as its automorphism group) and all subsequent graded parts of its infinite-dimensional moonshine module hold dimensional representations whose values are the Fourier coefficients in this $q$-expansion.
Other properties
The number of directed open knight's tours in minichess is 1728.
1728 is one less than the first taxicab or Hardy–Ramanujan number 1729, which is the smallest number that can be expressed as sums of two positive cubes in two ways.
In culture
1728 is the number of daily chants of the Hare Krishna mantra by a Hare Krishna devotee. The number comes from 16 rounds on a japamala of 108 beads.
See also
The year AD 1728
References
External links
1728 at Numbers Aplenty.
Integers
|
https://en.wikipedia.org/wiki/Polynormal%20subgroup
|
In mathematics, in the field of group theory, a subgroup of a group is said to be polynormal if its closure under conjugation by any element of the group can also be achieved via closure by conjugation by some element in the subgroup generated.
In symbols, a subgroup $H$ of a group $G$ is called polynormal if for any $g \in G$ the subgroup $K = H^{\langle g \rangle}$ is the same as $H^{K}$.
Here are the relationships with other subgroup properties:
Every weakly pronormal subgroup is polynormal.
Every paranormal subgroup is polynormal.
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Theta%20representation
|
In mathematics, the theta representation is a particular representation of the Heisenberg group of quantum mechanics. It gains its name from the fact that the Jacobi theta function is invariant under the action of a discrete subgroup of the Heisenberg group. The representation was popularized by David Mumford.
Construction
The theta representation is a representation of the continuous Heisenberg group over the field of the real numbers. In this representation, the group elements act on a particular Hilbert space. The construction below proceeds first by defining operators that correspond to the Heisenberg group generators. Next, the Hilbert space on which these act is defined, followed by a demonstration of the isomorphism to the usual representations.
Group generators
Let $f(z)$ be a holomorphic function, let $a$ and $b$ be real numbers, and let $\tau$ be a fixed, but arbitrary, complex number in the upper half-plane; that is, so that the imaginary part of $\tau$ is positive. Define the operators $S_a$ and $T_b$ such that they act on holomorphic functions as
$(S_a f)(z) = f(z + a)$
and
$(T_b f)(z) = \exp\left( i\pi b^2 \tau + 2\pi i b z \right) f(z + b\tau).$
It can be seen that each operator generates a one-parameter subgroup:
$S_{a_1}(S_{a_2} f) = S_{a_1 + a_2} f$
and
$T_{b_1}(T_{b_2} f) = T_{b_1 + b_2} f.$
However, $S$ and $T$ do not commute:
$S_a \circ T_b = \exp(2\pi i a b)\; T_b \circ S_a.$
Thus we see that $S$ and $T$ together with a unitary phase form a nilpotent Lie group, the (continuous real) Heisenberg group, parametrizable as $H = U(1) \times \mathbb{R} \times \mathbb{R}$, where $U(1)$ is the unitary group.
A general group element $U_\tau(\lambda, a, b) \in H$ then acts on a holomorphic function $f(z)$ as
$U_\tau(\lambda, a, b)\, f(z) = \lambda\, (S_a \circ T_b f)(z) = \lambda \exp\left( i\pi b^2 \tau + 2\pi i b (z + a) \right) f(z + a + b\tau),$
where $\lambda \in U(1)$. Here $U(1)$ is the center of $H$, the commutator subgroup $[H, H]$. The parameter $\tau$ on $U_\tau(\lambda, a, b)$ serves only to remind us that every different value of $\tau$ gives rise to a different representation of the action of the group.
Hilbert space
The action of the group elements $U_\tau(\lambda, a, b)$ is unitary and irreducible on a certain Hilbert space of functions. For a fixed value of $\tau$, define a norm on entire functions of the complex plane as
$\|f\|_\tau^2 = \int_{\mathbb{C}} \exp\left( \frac{-2\pi y^2}{\Im \tau} \right) |f(x + iy)|^2 \, dx \, dy.$
Here, $\Im \tau$ is the imaginary part of $\tau$, and the domain of integration is the entire complex plane.
Mumford sets the norm with $\exp(-\pi y^2 / \Im \tau)$ in place of the factor above, but in this way $T_b$ is not unitary.
Let $\mathcal{H}_\tau$ be the set of entire functions $f$ with finite norm. The subscript $\tau$ is used only to indicate that the space depends on the choice of parameter $\tau$. This $\mathcal{H}_\tau$ forms a Hilbert space. The action of $U_\tau(\lambda, a, b)$ given above is unitary on $\mathcal{H}_\tau$, that is, it preserves the norm on this space. Finally, the action of $H$ on $\mathcal{H}_\tau$ is irreducible.
This norm is closely related to that used to define Segal–Bargmann space.
Isomorphism
The above theta representation of the Heisenberg group is isomorphic to the canonical Weyl representation of the Heisenberg group. In particular, this implies that $\mathcal{H}_\tau$ and $L^2(\mathbb{R})$ are isomorphic as $H$-modules. Let
$M(a, b, c)$
stand for a general group element of $H$. In the canonical Weyl representation, for every real number $h$, there is a representation acting on $L^2(\mathbb{R})$ as
$M(a, b, c)\, \psi(x) = e^{ihc + ihbx}\, \psi(x + a)$
for $\psi \in L^2(\mathbb{R})$ and $x \in \mathbb{R}$.
Here, $h$ is Planck's constant. Each such representation is unitarily inequivalent. The corresponding theta representation is given by sending $M(a, 0, 0) \mapsto S_a$, $M(0, b, 0) \mapsto T_b$, and $M(0, 0, c)$ to multiplication by the phase $\lambda$.
Discrete subgroup
Define the subgroup $\Gamma_\tau \subset H_\tau$ as
$\Gamma_\tau = \{ U_\tau(1, a, b) : a, b \in \mathbb{Z} \}.$
The Jacobi theta function is defined as
$\vartheta(z; \tau) = \sum_{n = -\infty}^{\infty} \exp\left( \pi i n^2 \tau + 2\pi i n z \right).$
It is an entire function of $z$ that is invariant under $\Gamma_\tau$. This follows
|
https://en.wikipedia.org/wiki/Multinomial%20logistic%20regression
|
In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes. That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
Multinomial logistic regression is known by a variety of other names, including polytomous LR, multiclass LR, softmax regression, multinomial logit (mlogit), the maximum entropy (MaxEnt) classifier, and the conditional maximum entropy model.
Background
Multinomial logistic regression is used when the dependent variable in question is nominal (equivalently categorical, meaning that it falls into any one of a set of categories that cannot be ordered in any meaningful way) and for which there are more than two categories. Some examples would be:
Which major will a college student choose, given their grades, stated likes and dislikes, etc.?
Which blood type does a person have, given the results of various diagnostic tests?
In a hands-free mobile phone dialing application, which person's name was spoken, given various properties of the speech signal?
Which candidate will a person vote for, given particular demographic characteristics?
Which country will a firm locate an office in, given the characteristics of the firm and of the various candidate countries?
These are all statistical classification problems. They all have in common a dependent variable to be predicted that comes from one of a limited set of items that cannot be meaningfully ordered, as well as a set of independent variables (also known as features, explanators, etc.), which are used to predict the dependent variable. Multinomial logistic regression is a particular solution to classification problems that use a linear combination of the observed features and some problem-specific parameters to estimate the probability of each particular value of the dependent variable. The best values of the parameters for a given problem are usually determined from some training data (e.g. some people for whom both the diagnostic test results and blood types are known, or some examples of known words being spoken).
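The "linear combination of the observed features plus problem-specific parameters" recipe is the softmax link. A minimal sketch follows (the weight values are invented for illustration; in practice they come from maximum-likelihood training):

```python
import numpy as np

def predict_proba(X, W, b):
    """Softmax of one linear score per class -> class probabilities."""
    scores = X @ W + b
    scores -= scores.max(axis=1, keepdims=True)   # stabilize the exponentials
    expd = np.exp(scores)
    return expd / expd.sum(axis=1, keepdims=True)

X = np.array([[1.0, 2.0]])            # one observation, two features
W = np.array([[0.5, -0.2, 0.0],       # feature-by-class weights; the last
              [0.1,  0.3, 0.0]])      # class serves as a zero "pivot" baseline
print(predict_proba(X, W, np.zeros(3)))   # three probabilities summing to 1
```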
Assumptions
The multinomial logistic model assumes that data are case-specific; that is, each independent variable has a single value for each case. As with other types of regression, there is no need for the independent variables to be statistically independent from each other (unlike, for example, in a naive Bayes classifier); however, collinearity is assumed to be relatively low, as it becomes difficult to differentiate between the impact of several variables if this is not the case.
If the multinomial logit is used to model choices, it relies on the assumption of independence of irrelevant alternatives (IIA), which is not always desirable. Thi
|
https://en.wikipedia.org/wiki/Martingale%20difference%20sequence
|
In probability theory, a martingale difference sequence (MDS) is related to the concept of the martingale. A stochastic series $X$ is an MDS if its expectation with respect to the past is zero. Formally, consider an adapted sequence $\{X_t, \mathcal{F}_t\}_{-\infty}^{\infty}$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. $X_t$ is an MDS if it satisfies the following two conditions:
$\mathbb{E}\,|X_t| < \infty$, and
$\mathbb{E}\,[X_t \mid \mathcal{F}_{t-1}] = 0$, a.s.,
for all $t$. By construction, this implies that if $Y_t$ is a martingale, then $X_t = Y_t - Y_{t-1}$ will be an MDS, hence the name.
The MDS is an extremely useful construct in modern probability theory because it implies much milder restrictions on the memory of the sequence than independence, yet most limit theorems that hold for an independent sequence will also hold for an MDS.
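A quick simulation (an illustration added here, not from the article) of an MDS that is uncorrelated yet clearly not independent:

```python
import numpy as np

# x_t = e_t * |e_{t-1}| is an MDS: E[x_t | past] = |e_{t-1}| * E[e_t] = 0,
# yet x_t depends on e_{t-1} through its conditional variance.
rng = np.random.default_rng(1)
e = rng.standard_normal(200_000)
x = e[1:] * np.abs(e[:-1])

lag1_corr = lambda s: np.corrcoef(s[1:], s[:-1])[0, 1]
print(round(lag1_corr(x), 3))       # ~0.0: the levels are uncorrelated
print(round(lag1_corr(x**2), 3))    # ~0.25: the squares are correlated, so
                                    # x_t is dependent but still an MDS
```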
A special case of an MDS, denoted as $\{X_t, \mathcal{F}_t\}_0^{\infty}$, is known as the innovative sequence of $S_n$, where $S_n$ and $\mathcal{F}_n$ correspond to a random walk and to the filtration of the random process $\{S_n\}$.
In probability theory, the innovation series is used to emphasize the generality of the Doob representation. In signal processing, the innovation series is used to introduce the Kalman filter. The main difference between the two innovation terminologies is in the applications. The latter application aims to introduce the nuance of samples to the model by random sampling.
References
James Douglas Hamilton (1994), Time Series Analysis, Princeton University Press.
James Davidson (1994), Stochastic Limit Theory, Oxford University Press.
Martingale theory
random walk
filtration
Doob decomposition theorem
signal processing
Kalman filter
Innovation (signal processing)
|
https://en.wikipedia.org/wiki/Fixed%20effects%20model
|
In statistics, a fixed effects model is a statistical model in which the model parameters are fixed or non-random quantities. This is in contrast to random effects models and mixed models in which all or some of the model parameters are random variables. In many applications including econometrics and biostatistics a fixed effects model refers to a regression model in which the group means are fixed (non-random) as opposed to a random effects model in which the group means are a random sample from a population. Generally, data can be grouped according to several observed factors. The group means could be modeled as fixed or random effects for each grouping. In a fixed effects model each group mean is a group-specific fixed quantity.
In panel data where longitudinal observations exist for the same subject, fixed effects represent the subject-specific means. In panel data analysis the term fixed effects estimator (also known as the within estimator) is used to refer to an estimator for the coefficients in the regression model including those fixed effects (one time-invariant intercept for each subject).
Qualitative description
Such models assist in controlling for omitted variable bias due to unobserved heterogeneity when this heterogeneity is constant over time. This heterogeneity can be removed from the data through differencing, for example by subtracting the group-level average over time, or by taking a first difference which will remove any time invariant components of the model.
There are two common assumptions made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual-specific effects are uncorrelated with the independent variables. The fixed effect assumption is that the individual-specific effects are correlated with the independent variables. If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator. However, if this assumption does not hold, the random effects estimator is not consistent. The Durbin–Wu–Hausman test is often used to discriminate between the fixed and the random effects models.
Formal model and assumptions
Consider the linear unobserved effects model for $N$ observations and $T$ time periods:
$y_{it} = X_{it} \beta + \alpha_i + u_{it}$ for $t = 1, \dots, T$ and $i = 1, \dots, N$,
where:
$y_{it}$ is the dependent variable observed for individual $i$ at time $t$,
$X_{it}$ is the time-variant $1 \times k$ (the number of independent variables) regressor vector,
$\beta$ is the $k \times 1$ matrix of parameters,
$\alpha_i$ is the unobserved time-invariant individual effect, for example the innate ability for individuals or historical and institutional factors for countries,
$u_{it}$ is the error term.
Unlike $X_{it}$, $\alpha_i$ cannot be directly observed.
Unlike the random effects model, where the unobserved $\alpha_i$ is independent of $X_{it}$ for all $t = 1, \dots, T$, the fixed effects (FE) model allows $\alpha_i$ to be correlated with the regressor matrix $X_{it}$. Strict exogeneity with respect to the idiosyncratic error term $u_{it}$ is still required.
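A compact numerical sketch of the within (fixed effects) estimator (my own simulated example, not from the article): demeaning each individual's data sweeps out $\alpha_i$, so the slope is recovered even though $\alpha_i$ is correlated with the regressor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_t = 200, 10
i = np.repeat(np.arange(n_i), n_t)             # individual index per row
alpha = rng.standard_normal(n_i)               # individual effects
x = alpha[i] + rng.standard_normal(n_i * n_t)  # regressor correlated with alpha
y = 2.0 * x + alpha[i] + rng.standard_normal(n_i * n_t)

def within(v):
    """Subtract each individual's time average (the 'within' transform)."""
    return v - (np.bincount(i, v) / n_t)[i]

beta_fe = within(x) @ within(y) / (within(x) @ within(x))
beta_pooled = (x @ y) / (x @ x)
print(round(beta_fe, 3), round(beta_pooled, 3))  # ~2.0 vs upward-biased ~2.5
```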
Statistical estimation
|
https://en.wikipedia.org/wiki/Jaroslav%20H%C3%A1jek
|
Jaroslav Hájek (1926–1974) was a Czech mathematician who worked in theoretical and nonparametric statistics. The Hájek projection and the Hájek–Le Cam convolution theorem are named for him (the latter also for his collaborator Lucien Le Cam).
Life
Jaroslav Hájek studied statistical and insurance engineering at the Faculty of Special Sciences of the Czech Technical University in Prague, and in 1950 he successfully completed this study by obtaining an engineering degree. In 1955 he received the title of CSc. for the paper Contributions to the theory of statistical estimation; the supervisor of this thesis was Josef Novák. In 1963 he received a D.Sc.; in the same year he received his habilitation at the Faculty of Mathematics and Physics of Charles University, and in 1966 he was appointed professor at that faculty. In 1973, he was awarded the Klement Gottwald State Prize for his work on the asymptotic theory of rank tests. He died at the age of 48 after a kidney transplant.
References
Further reading
External links
Czech statisticians
Czechoslovak mathematicians
1926 births
1974 deaths
Academic staff of Charles University
Czech Technical University in Prague alumni
Mathematical statisticians
|
https://en.wikipedia.org/wiki/Karel%20Rychl%C3%ADk
|
Karel Rychlík (1885–1968) was a Czechoslovak mathematician who contributed significantly to the fields of algebra, number theory, mathematical analysis, and the history of mathematics.
External links
Extensive Biography
Works
Czechoslovak mathematicians
1885 births
1968 deaths
|
https://en.wikipedia.org/wiki/Mathias%20Lerch
|
Mathias Lerch (Matyáš Lerch; 20 February 1860, Milínov – 3 August 1922, Sušice) was a Czech mathematician who published about 250 papers, largely on mathematical analysis and number theory. He studied in Prague and Berlin, and held teaching positions at the Czech Technical Institute in Prague, the University of Fribourg in Switzerland, the Czech Technical Institute in Brno, and Masaryk University in Brno; he was the first mathematics professor at Masaryk University when it was founded in 1920.
In 1900, he was awarded the Grand Prize of the French Academy of Sciences for his number-theoretic work. The Lerch zeta function is named after him, as is the Appell–Lerch sum. His doctoral students include Michel Plancherel and Otakar Borůvka.
References
External links
1860 births
1922 deaths
Czech mathematicians
Academic staff of Masaryk University
Academic staff of the University of Fribourg
Mathematicians from Austria-Hungary
|
https://en.wikipedia.org/wiki/Automotive%20industry%20in%20India
|
The automotive industry in India is the fourth-largest by production in the world as per 2022 statistics. As of 2023, India is the 3rd largest automobile market in the world in terms of sales. In 2022, India became the fourth largest country in the world by the valuation of its automotive industry.
, India's auto industry is worth more than US$100 billion and accounts for 8% of the country's total exports and 7.1% of India's GDP. According to the 2021 National Family Health Survey, barely 8% of Indian households own an automobile. According to government statistics, India has barely 22 automobiles per 1,000 people.
India's major automobile manufacturing companies include Maruti Suzuki, Hyundai Motor India, Tata Motors, Ashok Leyland, Mahindra & Mahindra, Force Motors, Tractors and Farm Equipment Limited, Eicher Motors, Royal Enfield, Sonalika Tractors, Hindustan Motors, Hradyesh, ICML, Kerala Automobiles Limited, Reva, Pravaig Dynamics, Premier, Tara International and Vehicle Factory Jabalpur.
Brand
History
In 1897, the first car ran on an Indian road. Through the 1930s, cars were imports only, and in small numbers.
An embryonic automotive industry emerged in India in the 1940s. Hindustan Motors was launched in 1942, building Morris products; long-time competitor Premier followed in 1944, building Chrysler Corporation products such as Dodge and Plymouth and, beginning in the 1960s, Fiat products. Mahindra & Mahindra was established by two brothers in 1945 and began assembly of Jeep CJ-3A utility vehicles. In the same year, J. R. D. Tata, the chairman of Tata Group, founded TATA Engineering and Locomotive Company (now Tata Motors) in Jamshedpur. Following independence in 1947, the Government of India and the private sector launched efforts to create an automotive-component manufacturing industry to supply the automobile industry. In 1953, an import substitution programme was launched, and the import of fully built-up cars began to be restricted.
1947–1970
The 1952 Tariff Commission
In 1952, the Indian government appointed the first Tariff Commission, whose purpose was to come out with a feasibility plan for the indigenization of the Indian automobile industry. In 1953, the commission submitted its report, which recommended categorizing existing Indian car companies according to their manufacturing infrastructure, under a licensed capacity to manufacture a certain number of vehicles, with capacity increases allowable, as per demands, in the future. The Tariff Commission recommendations were implemented with new policies that would eventually exclude companies that only imported parts for assembly, as well as those with no Indian partner. In 1954, following the Tariff Commission implementation, General Motors, Ford, and Rootes Group, which had assembly-only plants in Mumbai, decided to move out of India.
The Tariff commission policies, including similar restrictions that applied to other industries, came to be known as the Licence Raj, which pr
|
https://en.wikipedia.org/wiki/Gillespie%20algorithm
|
In probability theory, the Gillespie algorithm (or the Doob–Gillespie algorithm or Stochastic Simulation Algorithm, the SSA) generates a statistically correct trajectory (possible solution) of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others (circa 1945), presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he uses it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power (see stochastic simulation). As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology.
History
The process that led to the algorithm recognizes several important steps. In 1931, Andrei Kolmogorov introduced the differential equations corresponding to the time-evolution of stochastic processes that proceed by jumps, today known as Kolmogorov equations (Markov jump process) (a simplified version is known as master equation in the natural sciences). It was William Feller, in 1940, who found the conditions under which the Kolmogorov equations admitted (proper) probabilities as solutions. In his Theorem I (1940 work) he establishes that the time-to-the-next-jump was exponentially distributed and the probability of the next event is proportional to the rate. As such, he established the relation of Kolmogorov's equations with stochastic processes.
Later, Doob (1942, 1945) extended Feller's solutions beyond the case of pure-jump processes. The method was implemented in computers by David George Kendall (1950) using the Manchester Mark 1 computer and later used by Maurice S. Bartlett (1953) in his studies of epidemic outbreaks. Gillespie (1977) obtained the algorithm in a different manner by making use of a physical argument.
Idea behind the algorithm
Traditional continuous and deterministic biochemical rate equations do not accurately predict cellular reactions since they rely on bulk reactions that require the interactions of millions of molecules. They are typically modeled as a set of coupled ordinary differential equations. In contrast, the Gillespie algorithm allows a discrete and stochastic simulation of a system with few reactants because every reaction is explicitly simulated. A trajectory corresponding to a single Gillespie simulation represents an exact sample from the probability mass function that is the solution of the master equation.
The physical basis of the algorithm is the collision of molecules within a reaction vessel. It is assumed that collisions are frequent, but collisions with the proper orientation and energy are infrequent
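To make the procedure concrete, here is a minimal sketch of the so-called direct method for a simple birth–death system (production at a constant rate, degradation proportional to copy number). The rate constants, initial state, and stopping time are illustrative choices, not values from the text.

```python
import random

# Gillespie direct method for a birth-death process:
#   * -> X at rate k1 (birth), X -> * at rate k2*n (death).
def gillespie_birth_death(n0=0, k1=10.0, k2=0.1, t_max=100.0):
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while t < t_max:
        a1 = k1                      # birth propensity
        a2 = k2 * n                  # death propensity
        a0 = a1 + a2                 # total propensity
        if a0 == 0:
            break
        t += random.expovariate(a0)  # exponential waiting time with rate a0
        # Pick the reaction with probability proportional to its propensity.
        if random.random() * a0 < a1:
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory
```

Each run produces one statistically exact trajectory; averaging many runs approximates the solution of the corresponding master equation.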
|
https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20Buquoy
|
Georg Franz August Graf von Buquoy (; 7 September 1781 in Brussels – 9 or 19 April 1851 in Prague) was a Bohemian aristocrat, mathematician, and inventor. He studied mathematics, natural science, philosophy, and economics at the Prague and Vienna universities. In 1810 he constructed an early steam engine. Above all, he was engaged in the glass works of the Nové Hrady region. On the basis of many experiments he succeeded in inventing an original production process for a black opaque glass called hyalite (1817), and in completing the production process for red hyalite (1819).
See also
Lords of Bucquoy
External links
Short biography
1781 births
1851 deaths
19th-century Czech scientists
|
https://en.wikipedia.org/wiki/Evil%20number
|
In number theory, an evil number is a non-negative integer that has an even number of 1s in its binary expansion. These numbers give the positions of the zero values in the Thue–Morse sequence, and for this reason they have also been called the Thue–Morse set. Non-negative integers that are not evil are called odious numbers.
Examples
The first evil numbers are:
0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39 ...
Equal sums
The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums.
As 19th-century mathematician Eugène Prouhet showed, the partition into evil and odious numbers of the numbers from 0 to 2^(k+1) − 1, for any k, provides a solution to the Prouhet–Tarry–Escott problem of finding sets of numbers whose sums of powers are equal up to the kth power.
In computer science
In computer science, an evil number is said to have even parity.
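This parity characterization is easy to verify directly; the short sketch below reproduces the list of evil numbers given above (the range of 40 is chosen just to match it):

```python
def is_evil(n: int) -> bool:
    # Evil numbers have an even number of 1s in binary (even parity).
    return bin(n).count("1") % 2 == 0

print([n for n in range(40) if is_evil(n)])
# [0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39]
```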
References
Integer sequences
|
https://en.wikipedia.org/wiki/Obiliq
|
Obiliq is a town and municipality in Kosovo. According to the Kosovo Agency of Statistics (KAS) estimate from the 2011 census, there were 21,549 people residing in Obiliq Municipality, with Kosovo Albanians constituting the majority of the population.
Name
Prior to the Balkan Wars, the settlement was known as Globoderica ().
Following the conflict, the settlement was incorporated into Serbia and renamed Obilić as part of the Serbianisation efforts of the early twentieth century when inhabited places within Kosovo were named after heroes from Serbian epic poetry. The placename Obilić refers to the Serbian national hero Miloš Obilić who killed the Ottoman Sultan Murad I at the Battle of Kosovo (1389).
In Albanian, the town is known as Obiliq (a transliteration of the Serbian name), while an alternative name (used by Albanians), Kastriot, was coined by the Albanological Institute after Albanian national hero George Kastrioti Skanderbeg (1405–1468).
Economy
There are three coal mines operating on the territory of Obiliq: Belaćevac, Miraš and Sibovc.
Demography
According to the 2011 census, the municipality had a population of 21,548 inhabitants. Based on the population estimates from the Kosovo Agency of Statistics in 2016, the municipality has 19,440 inhabitants.
The ethnic composition of the town of Obilić:
Notes
References
External links
Municipality of ObiliqOfficial Website
Obiliq
Obiliq
1989 establishments in Europe
Populated places established in 1989
Populated places in Pristina District
|
https://en.wikipedia.org/wiki/Malisheva
|
Malisheva is a town and municipality in Kosovo. According to the Kosovo Agency of Statistics (KAS) estimate from the 2011 census, there were 54,613 people residing in Malisheva Municipality, with Kosovo Albanians constituting the majority of the population.
Geography
Malisheva lies in the central part of Kosovo, in the eastern part of the Dukagjin Plain. The territory of the municipality includes an area of , and it is bordered by the municipalities of Drenas, Lipjan, Suhareka, Rahovec and Klina. The favourable configuration of the terrain gives the municipality good traffic connections with the whole country: most of the main cities of Kosovo lie within a distance of about 40–55 km. As for international connections, the passage of the Durrës–Merdarë highway through the territory of the municipality puts Malisheva in a favourable position. This highway allows an easy and fast connection with Albania and the port of Durrës, while plans to extend it, together with the construction of a new highway to Hani i Elezit, will improve connections with neighbouring states and regional transport corridors.
Climate
The dominant climate is the medium continental climate dominated with various Mediterranean elements, which is characterized by cold and long winters (in the mountain villages); warm summer (in lower villages). The average temperature is in summer and in winter. The average amount of precipitation is close to .
Hydrography
The main river that passes through the municipality of Malisheva is Mirusha, which belongs to the White Drini basin. In the lower part of the Mirusha river, in a length of 2 km, a gorge has been created with 16 river lakes of different sizes, connected to each other with a spectacular waterfall with a length of 21 m called Mirusha waterfalls (the second largest in Kosovo after that in Radavc-Drini i Bardhë). Considering the environmental values that the Mirusha contains in its course and the surrounding spaces, the area around it has been declared a Natural Monument of Special Importance.
History
The population of the town has historically been predominantly Kosovo Albanian. The town was largely destroyed by Serbian forces in 1998. Town residents only returned following the 1998 withdrawal of Serbian paramilitary police and military, in response to international pressures.
Demography
According to the Kosovo Agency of Statistics (KAS) estimate from the 2011 census, there were 54,613 people residing in Malisheva Municipality. With a population density of 178.5 people per square kilometre, its urban population amounted to about 3,300, while the rural population was around 51,200.
The majority of the population in the Municipality of Malisheva is Albanian with 99.8%. Members of the Roma community are 70 inhabitants (0.13%), of the Ashkali community 14 inhabitants (0.03%), while 18 inhabitants (0.03%) are other or undeclared.
See also
Trpeza mine
Notes
Refere
|
https://en.wikipedia.org/wiki/Hyperbolic%20Dehn%20surgery
|
In mathematics, hyperbolic Dehn surgery is an operation by which one can obtain further hyperbolic 3-manifolds from a given cusped hyperbolic 3-manifold. Hyperbolic Dehn surgery exists only in dimension three and is one feature that distinguishes hyperbolic geometry in three dimensions from hyperbolic geometry in other dimensions.
Such an operation is often also called hyperbolic Dehn filling, as Dehn surgery proper refers to a "drill and fill" operation on a link which consists of drilling out a neighborhood of the link and then filling back in with solid tori. Hyperbolic Dehn surgery actually only involves "filling".
We will generally assume that a hyperbolic 3-manifold is complete.
Suppose M is a cusped hyperbolic 3-manifold with n cusps. M can be thought of, topologically, as the interior of a compact manifold with toral boundary. Suppose we have chosen a meridian and longitude for each boundary torus, i.e. simple closed curves that are generators for the fundamental group of the torus. Let M(u_1, u_2, ..., u_n) denote the manifold obtained from M by filling in the i-th boundary torus with a solid torus using the slope u_i = p_i/q_i, where each pair p_i and q_i are coprime integers. We allow u_i to be ∞, which means we do not fill in that cusp, i.e. do the "empty" Dehn filling. So M = M(∞, ..., ∞).
We equip the space H of finite volume hyperbolic 3-manifolds with the geometric topology.
Thurston's hyperbolic Dehn surgery theorem states: M(u_1, u_2, ..., u_n) is hyperbolic as long as a finite set of exceptional slopes E_i is avoided for the i-th cusp for each i. In addition, M(u_1, u_2, ..., u_n) converges to M in H as p_i² + q_i² → ∞ for all (p_i, q_i) corresponding to non-empty Dehn fillings.
This theorem is due to William Thurston and is fundamental to the theory of hyperbolic 3-manifolds. It shows that nontrivial limits exist in H. Troels Jørgensen's study of the geometric topology further shows that all nontrivial limits arise by Dehn filling as in the theorem.
Another important result by Thurston is that volume decreases under hyperbolic Dehn filling. In fact, the theorem states that volume decreases under topological Dehn filling, assuming of course that the Dehn-filled manifold is hyperbolic. The proof relies on basic properties of the Gromov norm.
Jørgensen also showed that the volume function on this space is a continuous, proper function. Thus by the previous results, nontrivial limits in H are taken to nontrivial limits in the set of volumes. In fact, one can further conclude, as did Thurston, that the set of volumes of finite volume hyperbolic 3-manifolds has ordinal type ω^ω. This result is known as the Thurston–Jørgensen theorem. Further work characterizing this set was done by Gromov.
The figure-eight knot and the (-2, 3, 7) pretzel knot are the only two knots whose complements are known to have more than 6 exceptional surgeries; they have 10 and 7, respectively. Cameron Gordon conjectured that 10 is the largest possible number of exceptional surgeries of any hyperbolic knot complement. This was proved by Marc Lackenby and Rob Meyerhoff, who sho
|
https://en.wikipedia.org/wiki/List%20of%20Green%20Bay%20Packers%20records
|
This article details statistics relating to the Green Bay Packers.
Records
Passing
Attempts, career: 8,754 – Brett Favre (1992–07)
Attempts, season: 613 – Brett Favre (2006)
Attempts, game: 61 – Brett Favre (1996), Aaron Rodgers (2015)
Completed, career: 5,377 – Brett Favre (1992–07)
Completed, season: 401 – Aaron Rodgers (2016)
Completed, game: 39 – Aaron Rodgers (2016)
Yards, career: 61,655 – Brett Favre (1992–07)
Yards, season: 4,643 – Aaron Rodgers (2011)
Yards, game: 480 – Matt Flynn (2012), Aaron Rodgers (2013)
Touchdowns, career: 445 – Aaron Rodgers (2005–23)
Touchdowns, season: 48 – Aaron Rodgers (2020)
Touchdowns, game: 6 – Matt Flynn (2012), Aaron Rodgers (2012), (2014), (2019)
Interceptions, career: 286 – Brett Favre (1992–07)
Interceptions, season: 29 – Brett Favre (2005)
Straight completions, game: 20 – Brett Favre (2007)
Single season QB rating, season: 122.5 – Aaron Rodgers (2011)
Rushing
Attempts, career: 1,811 – Jim Taylor (1958–66)
Attempts, season: 355 – Ahman Green (2003)
Attempts, game: 39 – Terdell Middleton (1978)
Yards, career: 8,208 – Ahman Green (2000–06, 2009)
Yards, season: 1,883 – Ahman Green (2003)
Yards, game: 218 – Ahman Green (2003)
Touchdowns, career: 81 – Jim Taylor (1958–66)
Touchdowns, season: 19 – Jim Taylor (1962)
Touchdowns, game: 4 – Jim Taylor (1961), (1962), (1962), Terdell Middleton (1978), Dorsey Levens (2000), Aaron Jones (2019 & 2021)
Receiving
Receptions, career: 729 – Donald Driver (1999–2012)
Receptions, season: 117 – Davante Adams (2021)
Receptions, game: 14 – Don Hutson (1942), Davante Adams (2020)
Yards, career: 10,137 – Donald Driver (1999–2012)
Yards, season: 1,553 – Davante Adams (2021)
Yards, game: 257 – Billy Howton (1956)
Touchdowns, career: 99 – Don Hutson (1935–45)
Touchdowns, season: 18 – Sterling Sharpe (1994), Davante Adams (2020)
Touchdowns, game: 4 – Don Hutson (1945), Sterling Sharpe (1993), (1994)
1000 Yard Seasons, career: 7 – Donald Driver (2002), (2004–09)
Defense
Tackles, career: 1,020 – A. J. Hawk (2005–2014)
Tackles, season: 203 – Blake Martinez (2019)
Sacks, career: 83.5 – Clay Matthews III (2009–2018)
Sacks, season: 19.5 – Tim Harris (1989)
Sacks, game: 5.0 – Vonnie Holliday (2002)
Punting
Punts, career: 495 – David Beverly (1975–80)
Punts, season: 106 – David Beverly (1978)
Punts, game: 11 – Clarke Hinkle (1933), Jug Girard (1950)
Longest punt: 90 – Don Chandler (1965)
Highest average, career: 42.8 – Craig Hentrich (1994–97)
Highest average, season: 45.0 – Craig Hentrich (1997)
Highest average, game: 61.6 – Roy McKay (1945)
Kicking
Attempts, career: 393 – Mason Crosby (2007–present)
Attempts, season: 48 – Chester Marcol (1972)
Attempts, game: 7 – Mason Crosby (2021)
Field goals, career: 317 – Mason Crosby (2007–present)
Field goals, season: 33 – Chester Marcol (1972), Ryan Longwell (2000), Mason Crosby (2013)
Field goals, game: 5 – Chris Jacke (1990), (1996), Ryan Longwell (2000), Mason Crosby (2018)
Highest percentage, career (50 attempts): 81.59 (226/277) – Ryan
|
https://en.wikipedia.org/wiki/Kenneth%20Davidson%20%28mathematician%29
|
Kenneth Ralph Davidson (born 1951 in Edmonton, Alberta) is Professor of Pure Mathematics at the University of Waterloo. He did his undergraduate work at Waterloo and received his Ph.D. under the supervision of William Arveson at the University of California, Berkeley in 1976. Davidson was Director of the Fields Institute from 2001 to 2004. His areas of research include operator theory and C*-algebras. In 2007 he was appointed University Professor at the University of Waterloo.
He is a Fellow of the Royal Society of Canada. He was appointed Fellow of the Fields Institute in 2006. In 2018 the Canadian Mathematical Society listed him in their inaugural class of fellows.
Publications
Real Analysis and Applications, with Allan Donsig, Undergraduate Texts in Mathematics, Springer, 2009.
-Algebras by Example, Fields Institute Monograph 6, AMS, 1996.
Nest Algebras, Pitman Research Notes in Math. 191, Longman, 1988.
See also
List of University of Waterloo people
Notes
External links
Kenneth R. Davidson's Homepage
1951 births
Living people
Fellows of the Canadian Mathematical Society
Fellows of the Royal Society of Canada
University of Waterloo alumni
UC Berkeley College of Letters and Science alumni
Academic staff of the University of Waterloo
|
https://en.wikipedia.org/wiki/EinStein%20w%C3%BCrfelt%20nicht%21
|
EinStein würfelt nicht! (... does not play dice) is a board game, designed by Ingo Althöfer, a professor of applied mathematics in Jena, Germany. It was the official game of an exhibition about Einstein in Germany during the Einstein Year (2005).
The name of the game in German has a double meaning. It is a play on Einstein's famous quote "I am convinced that He (God) does not play dice" and also refers to the fact that when a player has only one cube (ein Stein) remaining, they no longer need to "play dice", and may simply move the cube.
Rules
The game is played on a square board with a 5×5 grid. Each player has six cubes, numbered one to six. During setup, players arrange the cubes within the triangular area of their own color.
The players take turns rolling a six-sided die and then moving the matching cube. If the matching cube is no longer on the board, the player moves a remaining cube whose number is next-higher or next-lower to the rolled number. The player starting in the top-left may move that cube one square to the right, down, or on the diagonal down and to the right; the player starting in the bottom-right may move that cube one square to the left, up, or on the diagonal up and to the left. Any cube which already lies in the target square is removed from the board.
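The cube-selection rule can be stated as a small helper function. The sketch below is an illustrative formalization of the rule just described; the function name and board representation are my own.

```python
def movable_cubes(roll, on_board):
    # If the rolled cube is still on the board, it must be moved.
    if roll in on_board:
        return [roll]
    # Otherwise the player may move the nearest remaining cube numbered
    # below the roll, or the nearest one numbered above it.
    lower = [c for c in on_board if c < roll]
    higher = [c for c in on_board if c > roll]
    return ([max(lower)] if lower else []) + ([min(higher)] if higher else [])

print(movable_cubes(4, {1, 2, 6}))  # roll 4 with cubes 1, 2, 6 left -> [2, 6]
```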
The objective of the game is for a player to either get one of their cubes to the far corner square in the grid (where their opponent started) or to remove all of their opponent's cubes from the board.
Strategy
The fewer cubes a player has, the "more mobile" those cubes are, since more die rolls can result in moving the same piece. Therefore, it can be helpful to remove one's own cubes deliberately, so that each remaining cube can be moved with a higher individual probability than when many cubes remain.
Schwarz tables
The "mobility" of pieces given the removal of one or more pieces can be quantified with probability. The Schwarz score of a set of pieces on the board is defined as the expected number of remaining moves, assuming no capturing takes place, for one of the pieces to reach its goal. In general, players should minimize the Schwarz score of their pieces and maximize the Schwarz score of their opponent's pieces. As there are six pieces, and each piece can be 1, 2, 3, or 4 moves away from their goal, or out of play, there are 15,625 distinct situations in which Schwarz scores can be precomputed.
Variants
The game can also be played on a 6 × 6 board with ten pieces on each side, labeled 2, 3, 4, 5, 6, 8, 9, 10, 11, and 12. A pair of 6-sided dice are rolled. The game can also be played in rounds with a doubling cube as in backgammon.
See also
World Year of Physics 2005
16th Computer Olympiad
External links
Official Website of the World Year of Physics 2005
Ingo Althöfer's website (mixed German/English)
On the origins of the game (mixed German/English)
Little Golem Online two player games including EinStein würfelt nicht (free login required, English)
M
|
https://en.wikipedia.org/wiki/Geometric%20topology%20%28disambiguation%29
|
In mathematics, the phrase geometric topology may refer to:
Geometric topology, the study of manifolds and maps between them, particularly embeddings of one manifold into another
Geometric topology (object), a topology one can put on the set H of hyperbolic 3-manifolds of finite volume
|
https://en.wikipedia.org/wiki/Joseph%20Keller
|
Joseph Bishop Keller (July 31, 1923 – September 7, 2016) was an American mathematician who specialized in applied mathematics. He was best known for his work on the "geometrical theory of diffraction" (GTD).
Early life and education
Born in Paterson, New Jersey on July 31, 1923, Keller attended Eastside High School, where he was a member of the math team. After earning his undergraduate degree in 1943 at New York University, Keller obtained his PhD in 1948 from NYU under the supervision of Richard Courant. He was a professor of mathematics in the Courant Institute at New York University until 1979. Then he was Professor of Mathematics and Mechanical Engineering at Stanford University until 1993, when he became professor emeritus.
Research
Keller worked on the application of mathematics to problems in science and engineering, such as wave propagation. He contributed to the Einstein–Brillouin–Keller method for computing eigenvalues in quantum mechanical systems.
Awards and honors
Keller was awarded a Lester R. Ford Award (shared with David W. McLaughlin) in 1976 and (not shared) in 1977. In 1988 he was awarded the U.S. National Medal of Science, and in 1997 he was awarded the Wolf Prize by the Israel-based Wolf Foundation. In 1996, he was awarded the Nemmers Prize in Mathematics. In 1999 he was awarded the Ig Nobel Prize for calculating how to make a teapot spout that does not drip. With Patrick B. Warren, Robin C. Ball and Raymond E. Goldstein, Keller was awarded an Ig Nobel Prize in 2012 for calculating the forces that shape and move ponytail hair.
In 2012 he became a fellow of the American Mathematical Society.
Personal life
Keller's second wife, Alice S. Whittemore, started her career as a pure mathematician but shifted her interests to epidemiology and biostatistics.
Keller had a brother who was also a mathematician, Herbert B. Keller, who studied numerical analysis, scientific computing, bifurcation theory, path following and homotopy methods, and computational fluid dynamics. Herbert Keller was a professor at Caltech. Both brothers contributed to the fields of electromagnetics and fluid dynamics. Joseph Keller died in Stanford, California on September 7, 2016, from a recurrence of kidney cancer first diagnosed in 2003.
Major publications
J.B. Keller. On solutions of Δu=f(u). Comm. Pure Appl. Math. 10 (1957), 503–510.
Edward W. Larsen and Joseph B. Keller. Asymptotic solution of neutron transport problems for small mean free paths. J. Mathematical Phys. 15 (1974), 75–81.
Joseph B. Keller and Dan Givoli. Exact nonreflecting boundary conditions. J. Comput. Phys. 82 (1989), no. 1, 172–192.
Jacob Rubinstein, Peter Sternberg, and Joseph B. Keller. Fast reaction, slow diffusion, and curve shortening. SIAM J. Appl. Math. 49 (1989), no. 1, 116–133.
Marcus J. Grote and Joseph B. Keller. On nonreflecting boundary conditions. J. Comput. Phys. 122 (1995), no. 2, 231–243.
Leonid Ryzhik, George Papanicolaou, and Joseph B. Keller. Transpo
|
https://en.wikipedia.org/wiki/Moscow%20School%20of%20Mathematics%20and%20Navigation
|
Moscow School of Mathematics and Navigation () was a Russian educational institution founded by Peter the Great in 1701. Situated in the Sukharev Tower, it provided Russians with technical education for the first time and much of its curriculum was devoted to producing sailors, engineers, cartographers and bombardiers to support Peter's expanding navy and army. It is the forerunner of the modern system of technical education of Russia. In 1712, Artillery classes and Engineering classes were moved to Saint Petersburg to found the Engineering school and Artillery school. Abram Petrovich Gannibal was the first chief of engineering school. In 1715 Navigator classes were moved to Saint Petersburg to found the Marine academy. The school closed in 1752.
Sources
Schools in Russia
Schools in Moscow
1701 establishments in Russia
1752 disestablishments
|
https://en.wikipedia.org/wiki/Ethernet%20Automatic%20Protection%20Switching
|
Ethernet Automatic Protection Switching (EAPS) is used to create a fault tolerant topology by configuring a primary and secondary path for each VLAN.
EAPS was invented by Extreme Networks and submitted to the IETF as RFC 3619. The idea is to provide highly available Ethernet switched rings (commonly used in Metro Ethernet) to replace legacy TDM-based transport protection fiber rings. Other implementations include Ethernet Protection Switching Ring (EPSR) by Allied Telesis, which enhanced EAPS to provide fully protected transport of IP Triple Play services (voice, video and internet traffic) for xDSL/FTTx deployments. EAPS/EPSR is the most widely deployed Ethernet protection switching solution, with major multi-vendor interoperability support, and forms the basis of the ITU G.8032 Ethernet Protection recommendation.
Operation
A ring is formed by configuring a domain. Each domain has a single "master node" and many "transit nodes". Each node will have a primary port and a secondary port, both known to be able to send control traffic to the master node. Under normal operation, the secondary port on the master is blocked for all protected VLANs.
When there is a link down situation, the devices that detect the failure send a control message to the master, and the master will then unblock the secondary port and instruct the transits to flush their forwarding databases. The next packets sent by the network can then be flooded and learned out of the (now enabled) secondary port without any network disruption.
Fail-over times are demonstrably in the region of 50 ms.
The same switch can belong to multiple domains and thus multiple rings. However, these act as independent entities and can be controlled individually.
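The failover behaviour described above can be summarized in pseudocode. The sketch below is a hypothetical illustration of the master node's logic only; the class, method, and message names are invented, and vendor implementations differ.

```python
# Hypothetical sketch of an EAPS master node's failover logic.
class EapsMaster:
    def __init__(self, ring):
        self.ring = ring
        self.secondary_blocked = True   # normal state: secondary port blocked

    def on_link_down(self):
        # A transit node reported a failed link: unblock the secondary
        # port and tell transit nodes to flush their forwarding databases,
        # so traffic is relearned over the (now open) secondary path.
        self.secondary_blocked = False
        self.ring.broadcast("FLUSH_FDB")

    def on_ring_restored(self):
        # Health PDUs traverse the ring again: re-block the secondary
        # port to prevent a loop, then flush so paths are relearned.
        self.secondary_blocked = True
        self.ring.broadcast("FLUSH_FDB")
```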
EAPS v2
EAPSv2 is configured and enabled to avoid the potential of super-loops in environments where multiple EAPS domains share a common link. EAPSv2 works using the concept of a controller and partner mechanism. Shared port status is verified using health PDUs exchanged by controller and partner. When a shared link goes down, the configured Controller will open only one segment port for each of the protected VLANs, keeping all other segment ports in a blocking state. This state is maintained as long as the Controller fails to receive the health PDUs over the (broken) shared link.
Although not supported by Extreme Networks, it is possible to complete this shared link with non-EAPS (but tag-aware) switches between the Controller and Partner.
When the shared link is restored, the Controller can then unblock its ports, the masters will see their hello packets, and the rings will be protected by their respective masters.
See also
Rapid Spanning Tree Protocol
Ethernet Ring Protection Switching
References
Further reading
Kwang-Koog Lee, Jeong-dong Ryoo, and Seungwook Min, "An Ethernet Ring Protection Method to Minimize Transient Traffic by Selective FDB Advertisement," ETRI Journal, vol.31, no.5, Oct. 2009, pp.631-633
Kwang-K
|
https://en.wikipedia.org/wiki/S-function
|
In mathematics, S-function may refer to:
sigmoid function
Schur polynomials
A function in the Laplace transformed 's-domain'
In computer science,
It may be a member of a series of graph parameters.
In physics, it may refer to:
action functional
In MATLAB, it may refer to:
A type of dynamically linked subroutine for Simulink.
|
https://en.wikipedia.org/wiki/Geometric%20topology%20%28object%29
|
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume.
Use
Convergence in this topology is a crucial ingredient of hyperbolic Dehn surgery, a fundamental tool in the theory of hyperbolic 3-manifolds.
Definition
The following is a definition due to Troels Jorgensen:
A sequence {M_i} in H converges to M in H if there are
a sequence of positive real numbers ε_i converging to 0, and
a sequence of (1 + ε_i)-bi-Lipschitz diffeomorphisms f_i : (M_i)_[ε_i,∞) → M_[ε_i,∞),
where the domains and ranges of the maps are the ε_i-thick parts of either the M_i's or M.
Alternate definition
There is an alternate definition due to Mikhail Gromov. Gromov's topology utilizes the Gromov–Hausdorff metric and is defined on pointed hyperbolic 3-manifolds. One essentially considers better and better bi-Lipschitz homeomorphisms on larger and larger balls. This results in the same notion of convergence as above, since the thick part is always connected; thus, a large ball will eventually encompass all of the thick part.
On framed manifolds
As a further refinement, Gromov's metric can also be defined on framed hyperbolic 3-manifolds. This gives nothing new but this space can be explicitly identified with torsion-free Kleinian groups with the Chabauty topology.
See also
Algebraic topology (object)
References
William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978-1981).
Canary, R. D.; Epstein, D. B. A.; Green, P., Notes on notes of Thurston. Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984), 3--92, London Math. Soc. Lecture Note Ser., 111, Cambridge Univ. Press, Cambridge, 1987.
3-manifolds
Hyperbolic geometry
Topological spaces
|
https://en.wikipedia.org/wiki/Additive%20number%20theory
|
Additive number theory is the subfield of number theory concerning the study of subsets of integers and their behavior under addition. More abstractly, the field of additive number theory includes the study of abelian groups and commutative semigroups with an operation of addition. Additive number theory has close ties to combinatorial number theory and the geometry of numbers. Two principal objects of study are the sumset of two subsets A and B of elements from an abelian group G,
A + B = {a + b : a ∈ A, b ∈ B},
and the h-fold sumset of A,
hA = A + A + ⋯ + A (with h summands).
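Both constructions are straightforward to compute for finite sets of integers; a minimal sketch (the helper names are mine):

```python
def sumset(A, B):
    # A + B = {a + b : a in A, b in B}
    return {a + b for a in A for b in B}

def h_fold_sumset(A, h):
    # hA = A + A + ... + A, with h summands
    S = {0}
    for _ in range(h):
        S = sumset(S, A)
    return S

squares = {n * n for n in range(10)}
print(sorted(h_fold_sumset(squares, 4))[:10])  # sums of four squares: 0..9
```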
Additive number theory
The field is principally devoted to consideration of direct problems over (typically) the integers, that is, determining the structure of hA from the structure of A: for example, determining which elements can be represented as a sum from hA, where A is a fixed subset. Two classical problems of this type are the Goldbach conjecture (which is the conjecture that 2P contains all even numbers greater than two, where P is the set of primes) and Waring's problem (which asks how large h must be to guarantee that hA_k contains all positive integers, where
A_k = {0^k, 1^k, 2^k, ...}
is the set of k-th powers). Many of these problems are studied using the tools from the Hardy–Littlewood circle method and from sieve methods. For example, Vinogradov proved that every sufficiently large odd number is the sum of three primes, and so every sufficiently large even integer is the sum of four primes. Hilbert proved that, for every integer k > 1, every non-negative integer is the sum of a bounded number of k-th powers. In general, a set A of nonnegative integers is called a basis of order h if hA contains all positive integers, and it is called an asymptotic basis if hA contains all sufficiently large integers. Much current research in this area concerns properties of general asymptotic bases of finite order. For example, a set A is called a minimal asymptotic basis of order h if A is an asymptotic basis of order h but no proper subset of A is an asymptotic basis of order h. It has been proved that minimal asymptotic bases of order h exist for all h, and that there also exist asymptotic bases of order h that contain no minimal asymptotic bases of order h. Another question to be considered is how small the number of representations of n as a sum of h elements in an asymptotic basis can be. This is the content of the Erdős–Turán conjecture on additive bases.
See also
Shapley–Folkman lemma
Multiplicative number theory
References
External links
|
https://en.wikipedia.org/wiki/Cuntz%20algebra
|
In mathematics, the Cuntz algebra O_n, named after Joachim Cuntz, is the universal C*-algebra generated by n isometries of an infinite-dimensional Hilbert space satisfying certain relations. These algebras were introduced as the first concrete examples of a separable infinite simple C*-algebra, meaning that, as a Hilbert space, it is isometric to the sequence space ℓ²(N),
and it has no nontrivial closed ideals. These algebras are fundamental to the study of simple infinite C*-algebras since any such algebra contains, for any given n, a subalgebra that has O_n as quotient.
Definitions
Let n ≥ 2 and H be a separable Hilbert space. Consider the C*-algebra generated by a set
{s_1, ..., s_n}
of isometries (i.e. s_i* s_i = 1) acting on H satisfying
s_1 s_1* + ⋯ + s_n s_n* = 1.
This universal C*-algebra is called the Cuntz algebra, denoted by O_n.
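In display form, the defining relations read as follows (a standard formulation, included here for reference):

```latex
% Cuntz relations: n isometries whose range projections sum to the identity.
s_i^{*} s_i = 1 \quad (1 \le i \le n), \qquad \sum_{i=1}^{n} s_i s_i^{*} = 1 .
```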
A simple C*-algebra is said to be purely infinite if every hereditary C*-subalgebra of it is infinite. O_n is a separable, simple, purely infinite C*-algebra. Any simple infinite C*-algebra contains a subalgebra that has O_n as a quotient.
Properties
Classification
The Cuntz algebras are pairwise non-isomorphic, i.e. O_n and O_m are non-isomorphic for n ≠ m. The K_0 group of O_n is Z/(n − 1), the cyclic group of order n − 1. Since K_0 is a functor, it follows that O_n and O_m are non-isomorphic for n ≠ m.
Relation between concrete C*-algebras and the universal C*-algebra
Theorem. The concrete C*-algebra is isomorphic to the universal C*-algebra generated by n generators s1... sn subject to relations si*si = 1 for all i and ∑ sisi* = 1.
The proof of the theorem hinges on the following fact: any C*-algebra generated by n isometries s_1, ..., s_n with orthogonal ranges contains a copy of the UHF algebra F of type n^∞. Namely, F is spanned by words of the form
s_{i_1} ⋯ s_{i_k} s_{j_k}* ⋯ s_{j_1}*, k ≥ 0.
The *-subalgebra F, being approximately finite-dimensional, has a unique C*-norm. The subalgebra F plays the role of the space of Fourier coefficients for elements of the algebra. A key technical lemma, due to Cuntz, is that an element in the algebra is zero if and only if all its Fourier coefficients vanish. Using this, one can show that the quotient map from the universal algebra to the concrete algebra is injective, which proves the theorem.
The UHF algebra F has a non-unital subalgebra that is canonically isomorphic to F itself: in the M_n stage of the direct system defining F, consider the rank-1 projection e_11, the matrix that is 1 in the upper left corner and zero elsewhere. Propagate this projection through the direct system. At the M_{n^k} stage of the direct system, one has a rank n^(k−1) projection. In the direct limit, this gives a projection P in F. The corner
P F P
is isomorphic to F. The *-endomorphism Φ that maps F onto P F P is implemented by the isometry s_1, i.e. Φ(·) = s_1(·)s_1*. O_n is in fact the crossed product of F with the endomorphism Φ.
Cuntz algebras to represent direct sums
The relations defining the Cuntz algebras align with the definition of the biproduct for preadditive categories. This similarity is made precise in the C*-category of unital *-endomorphisms over C*-algebras. The objects of this category are unital *-endomorphism
|
https://en.wikipedia.org/wiki/Stephen%20B.%20Streater
|
Stephen Bernard Streater (born 1965) is a British technology entrepreneur.
Career
Streater was born in Boston Lying-In Hospital, Massachusetts, United States. He earned a degree in mathematics from Trinity College, Cambridge and then began a PhD on artificial pattern recognition in the physics department at King's College London.
In 1990, he co-founded Eidos, a company specialising in video compression and non-linear editing systems, particularly for computers running the RISC OS operating system. He later sold and left Eidos, which had moved into the computer games market, and founded Blackbird in 2000, where he was the company's R&D Director.
On 21 July 2011, Streater was honoured by the University of Bedfordshire with a Doctor of Science degree in recognition of "outstanding contribution to the development of computer technologies."
Personal life
Streater is married to Victoria Jane (née Fantl) and has three daughters (Sophie, Juliette and Emily). He has a sister (Catherine) and a brother (Alexander). His hobbies include playing classical chamber and orchestral music, Go, new technology, and making videos. Streater's father, Ray Streater, is a professor of mathematics at King's College London.
References
External links
Blackbird plc
Stephen Bernard Streater, inventor – from his father's website
1965 births
Alumni of Trinity College, Cambridge
Alumni of King's College London
English businesspeople
Living people
|
https://en.wikipedia.org/wiki/Solid%20Geometry%20%28film%29
|
Solid Geometry is a 2002 short TV film directed by Denis Lawson and starring his nephew Ewan McGregor and Ruth Millar. It is based on a short story by Ian McEwan published in the collection First Love, Last Rites. It was made for the Scottish Television/Grampian Television New Found Land series, first shown by them on 3 October 2002. Co-financed by Channel 4, it was subsequently shown by them on 28 November 2002. Production on an earlier BBC adaptation was halted at a late stage in 1979.
Plot
Phil (McGregor) is a successful advertising executive and Maisie (Millar) is his young and hedonistic wife, but their lives are thrown into turmoil when Phil inherits his great-great grandfather's secret diaries. He becomes obsessed with research into solid geometry contained in the diaries and is fascinated by the theory of "a plane without a surface." His pursuit of this mythical geometric concept tears his marriage apart. The film is interspersed with flashbacks showing Phil's great-great grandfather discovering the same mysteries of the supernatural side of geometry as Phil is uncovering by reading the diaries.
Eventually Phil follows the instructions buried in the diaries and begins carefully folding a large sheet of paper in on itself like a lotus flower, then the folded paper emits a bright light, folds over itself and disappears. On seeing this, he dedicates his life to understanding more about the diaries and his great-great grandfather's mysterious disappearance.
He goes on to become obsessed with this transformation and the discussion in the book which links it to 'sexual intercourse positions' of which the diary claims there are only 17. After a love-making session with his wife, he proceeds to place another of the paper lotus flowers in the center of her curled body. As she curls into a foetal position unknowingly around the paper flower, she somehow completes the flower folding as had previously been achieved with just the paper. She spins around the paper flower several times and disappears, emitting tones of shock and fear. The film ends with Phil alone in bed.
References
External links
Channel Four Review
2002 films
2002 television films
2000s British films
2002 drama films
2000s English-language films
2000s mystery films
British drama short films
Films based on short fiction
Works by Ian McEwan
British drama television films
|
https://en.wikipedia.org/wiki/Gresham%20Professor%20of%20Geometry
|
The Professor of Geometry at Gresham College, London, gives free educational lectures to the general public. The college was founded for this purpose in 1597, when it appointed seven professors; this has since increased to ten and in addition the college now has visiting professors.
The Professor of Geometry is always appointed by the City of London Corporation.
List of Gresham Professors of Geometry
Note, years given as, say, 1596/7 refer to Old Style and New Style dates.
References
Gresham College old website, Internet Archive List of professors
Gresham College website Profile of current professor and list of past professors
Notes
External links
'400 Years of Geometry at Gresham College', lecture by Robin Wilson at Gresham College, 14 May 2008 (available for download as PDF, audio and video files)
Further reading
Geometry
1596 establishments in England
Professorships in mathematics
Mathematics education in the United Kingdom
|
https://en.wikipedia.org/wiki/Inscribed%20figure
|
In geometry, an inscribed planar shape or solid is one that is enclosed by and "fits snugly" inside another geometric shape or solid. To say that "figure F is inscribed in figure G" means precisely the same thing as "figure G is circumscribed about figure F". A circle or ellipse inscribed in a convex polygon (or a sphere or ellipsoid inscribed in a convex polyhedron) is tangent to every side or face of the outer figure (but see Inscribed sphere for semantic variants). A polygon inscribed in a circle, ellipse, or polygon (or a polyhedron inscribed in a sphere, ellipsoid, or polyhedron) has each vertex on the outer figure; if the outer figure is a polygon or polyhedron, there must be a vertex of the inscribed polygon or polyhedron on each side of the outer figure. An inscribed figure is not necessarily unique in orientation; this can easily be seen, for example, when the given outer figure is a circle, in which case a rotation of an inscribed figure gives another inscribed figure that is congruent to the original one.
Familiar examples of inscribed figures include circles inscribed in triangles or regular polygons, and triangles or regular polygons inscribed in circles. A circle inscribed in any polygon is called its incircle, in which case the polygon is said to be a tangential polygon. A polygon inscribed in a circle is said to be a cyclic polygon, and the circle is said to be its circumscribed circle or circumcircle.
The inradius or filling radius of a given outer figure is the radius of the inscribed circle or sphere, if it exists.
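As a concrete illustration of these notions, the following sketch computes the incircle (center and inradius) of a triangle from its vertices; the formulas are standard, and the 3-4-5 example is my own.

```python
import numpy as np

def incircle(A, B, C):
    # Side lengths opposite to vertices A, B, C.
    a, b, c = (np.linalg.norm(B - C), np.linalg.norm(C - A),
               np.linalg.norm(A - B))
    s = (a + b + c) / 2                               # semiperimeter
    area = np.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    center = (a * A + b * B + c * C) / (a + b + c)    # incenter
    return center, area / s                           # inradius r = area / s

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])
print(incircle(A, B, C))  # center (1, 1), radius 1 for this 3-4-5 triangle
```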
The definition given above assumes that the objects concerned are embedded in two- or three-dimensional Euclidean space, but can easily be generalized to higher dimensions and other metric spaces.
For an alternative usage of the term "inscribed", see the inscribed square problem, in which a square is considered to be inscribed in another figure (even a non-convex one) if all four of its vertices are on that figure.
Properties
Every circle has an inscribed triangle with any three given angle measures (summing of course to 180°), and every triangle can be inscribed in some circle (which is called its circumscribed circle or circumcircle).
Every triangle has an inscribed circle, called the incircle.
Every circle has an inscribed regular polygon of n sides, for any n≥3, and every regular polygon can be inscribed in some circle (called its circumcircle).
Every regular polygon has an inscribed circle (called its incircle), and every circle can be inscribed in some regular polygon of n sides, for any n≥3.
Not every polygon with more than three sides has an inscribed circle; those polygons that do are called tangential polygons. Not every polygon with more than three sides is an inscribed polygon of a circle; those polygons that are so inscribed are called cyclic polygons.
Every triangle can be inscribed in an ellipse, called its Steiner circumellipse or simply its Steiner ellipse, whose center is the triangle's centro
|
https://en.wikipedia.org/wiki/Chamfered%20dodecahedron
|
In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron.
It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-12 truncated rhombic triacontahedron because only the order-12 vertices are truncated.
Structure
These 12 order-5 vertices can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons.
The hexagon faces can be equilateral but not regular with D symmetry. The angles at the two vertices with vertex configuration are and at the remaining four vertices with , they are each.
It is the Goldberg polyhedron , containing pentagonal and hexagonal faces.
It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes.
Chemistry
This is the shape of the fullerene C_80; sometimes this shape is denoted C_80(I_h) to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found to have a skeleton that can be isometrically embedded into an L_1 space.
Related polyhedra
This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons.
The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2).
Chamfered truncated icosahedron
In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces: 110 hexagons and 12 pentagons.
It is constructed by a chamfer operation to the truncated icosahedron, adding new hexagons in place of original edges. It can also be constructed as a zip (= dk = dual of kis of) operation from the chamfered dodecahedron. In other words, raising pentagonal and hexagonal pyramids on a chamfered dodecahedron (kis operation) will yield the (2,2) geodesic polyhedron. Taking the dual of that yields the (2,2) Goldberg polyhedron, which is the chamfered truncated icosahedron, and is also Fullerene C240.
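These counts follow from how chamfering acts on a polyhedron's combinatorics: each original edge becomes a new hexagonal face. A quick sanity check (the counting rule is standard; the helper is mine):

```python
def chamfer(V, E, F):
    # Chamfering keeps old vertices, adds 2 per edge, quadruples edges,
    # and adds one new hexagonal face per original edge.
    return V + 2 * E, 4 * E, F + E

print(chamfer(20, 30, 12))  # dodecahedron -> (80, 120, 42)
print(chamfer(60, 90, 32))  # truncated icosahedron -> (240, 360, 122)
```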
Dual
Its dual, the hexapentakis chamfered dodecahedron, has 240 triangle faces (grouped as 60 and 60 around the 12 five-fold symmetry vertices and 120 around the 20 six-fold symmetry vertices), 360 edges, and 122 vertices.
Hexapentakis chamfered dodecahedron
References
Bibliography
External links
Vertex- and edge-truncation of the Platonic and Archimedean solids leading to vertex-transitive polyhedra Livio Zefiro
VRML polyhedral generator (Conway polyhedron notation)
G
|
https://en.wikipedia.org/wiki/Morgan%20Prize%20%28disambiguation%29
|
The Morgan Prize in Mathematics may refer to:
Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student awarded jointly by the American Mathematical Society, Mathematical Association of America and Society for Industrial and Applied Mathematics
De Morgan Medal awarded by the London Mathematical Society
|
https://en.wikipedia.org/wiki/Cook%27s%20distance
|
In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis. In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity; or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.
Definition
Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis.
For the algebraic expression, first define
y = X β + ε,
where ε ~ N(0, σ² I) is the error term, β = [β_0 β_1 ... β_{p−1}]^T is the coefficient vector, p is the number of covariates or predictors for each observation, and X is the design matrix including a constant. The least squares estimator then is b = (X^T X)^{−1} X^T y, and consequently the fitted (predicted) values for the mean of y are
ŷ = X b = X (X^T X)^{−1} X^T y = H y,
where H ≡ X (X^T X)^{−1} X^T is the projection matrix (or hat matrix). The i-th diagonal element of H, given by h_ii ≡ x_i^T (X^T X)^{−1} x_i, is known as the leverage of the i-th observation. Similarly, the i-th element of the residual vector e = y − ŷ = (I − H) y is denoted by e_i.
Cook's distance D_i of observation i (for i = 1, ..., n) is defined as the sum of all the changes in the regression model when observation i is removed from it:
D_i = Σ_j (ŷ_j − ŷ_{j(i)})² / (p s²),
where p is the rank of the model, ŷ_{j(i)} is the fitted response value obtained when excluding observation i, and s² = e^T e / (n − p) is the mean squared error of the regression model.
Equivalently, it can be expressed using the leverage h_ii:
D_i = (e_i² / (p s²)) · (h_ii / (1 − h_ii)²).
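The leverage form makes the computation straightforward; a minimal sketch for an OLS fit (variable names are mine):

```python
import numpy as np

def cooks_distance(X, y):
    # X: design matrix including the constant column; y: response vector.
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat (projection) matrix
    h = np.diag(H)                         # leverages h_ii
    e = y - H @ y                          # residuals
    s2 = e @ e / (n - p)                   # mean squared error
    return (e**2 / (p * s2)) * h / (1 - h)**2
```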
Detecting highly influential observations
There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an F distribution with p and n − p (as defined for the design matrix X above) degrees of freedom, the median point (i.e., F_{0.5}(p, n − p)) can be used as a cut-off. Since this value is close to 1 for large n, a simple operational guideline of D_i > 1 has been suggested.
The n-dimensional random vector ŷ − ŷ_{(i)}, which is the change of ŷ due to a deletion of the i-th case, has a covariance matrix of rank one and therefore is distributed entirely over a one-dimensional subspace (a line) of the n-dimensional space.
However, in the introduction of Cook's distance, a scaling matrix of full rank is chosen and as a result ŷ − ŷ_{(i)} is treated as if it were a random vector distributed over the whole space of p dimensions.
Hence the Cook's distance measure does not always correctly identify influential observations.
Relationship to other influence measures (and interpretation)
D_i can be expressed using the leverage h_ii and the square of the internally Studentized residual t_i², as follows:
D_i = (t_i² / p) · (h_ii / (1 − h_ii)).
The benefit of the last formulation is that it clearly shows the relationship of t_i² and h_ii to D_i (while p and n are the same for all observations). If
|
https://en.wikipedia.org/wiki/Duhamel%27s%20principle
|
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in R^n. Indicating by u_t the time derivative of u, the initial value problem is
u_t(x, t) − Δu(x, t) = 0, u(x, 0) = g(x),
where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,
u_t(x, t) − Δu(x, t) = f(x, t), u(x, 0) = 0,
corresponds to adding an external heat energy f(x, t) dt at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice t = t_0. By linearity, one can add up (integrate) the resulting solutions through time and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
General considerations
Formally, consider a linear inhomogeneous evolution equation for a function
u : D × (0, ∞) → R,
with spatial domain D in R^n, of the form
u_t(x, t) − L u(x, t) = f(x, t), u|_{∂D} = 0, u(x, 0) = 0,
where L is a linear differential operator that involves no time derivatives.
Duhamel's principle is, formally, that the solution to this problem is
u(x, t) = ∫_0^t (P^s f)(x, t) ds,
where P^s f is the solution of the problem
u_t − L u = 0, u(x, s) = f(x, s).
The integrand is the retarded solution P^s f, evaluated at time t, representing the effect, at the later time t, of an infinitesimal force f(·, s) ds applied at time s.
Duhamel's principle also holds for linear systems (with vector-valued functions ), and this in turn furnishes a generalization to higher t derivatives, such as those appearing in the wave equation (see below). Validity of the principle depends on being able to solve the homogeneous problem in an appropriate function space and that the solution should exhibit reasonable dependence on parameters so that the integral is well-defined. Precise analytic conditions on and depend on the particular application.
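In its simplest setting the principle reduces to a familiar formula. The display below works it out for a scalar ODE (with L acting as multiplication by −a); this is a standard illustration, not an example taken from the text.

```latex
% Duhamel's principle for u'(t) + a\,u(t) = f(t), u(0) = 0:
% the homogeneous flow started at time s from the value f(s) is
%   (P^{s} f)(t) = e^{-a(t-s)} f(s),
% and superposing these contributions over 0 <= s <= t gives
u(t) = \int_0^t e^{-a(t-s)} f(s)\, ds,
% which is exactly variation of parameters for this equation.
```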
Examples
Wave equation
The linear wave equation models the displacement u of an idealized dispersionless one-dimensional string, in terms of derivatives with respect to time t and space x:
u_tt(x, t) − c² u_xx(x, t) = f(x, t).
The function f(x, t), in natural units, represents an external force applied to str
|
https://en.wikipedia.org/wiki/D%27Alembert%27s%20formula
|
In mathematics, and specifically partial differential equations, d'Alembert's formula is the general solution to the one-dimensional wave equation
u_tt(x, t) = c² u_xx(x, t)
for −∞ < x < ∞, t > 0.
It is named after the mathematician Jean le Rond d'Alembert, who derived it in 1747 as a solution to the problem of a vibrating string.
Details
The characteristics of the PDE are x ± ct = const (where the ± sign corresponds to the two solutions of the characteristic quadratic equation), so we can use the change of variables μ = x + ct (for the positive solution) and η = x − ct (for the negative solution) to transform the PDE to u_{μη} = 0. The general solution of this PDE is u(μ, η) = F(μ) + G(η), where F and G are C¹ functions. Back in x, t coordinates,
u(x, t) = F(x + ct) + G(x − ct),
which is C² if F and G are C².
This solution can be interpreted as two waves with constant velocity c moving in opposite directions along the x-axis.
Now consider this solution with the Cauchy data u(x, 0) = g(x), u_t(x, 0) = h(x).
Using u(x, 0) = g(x) we get F(x) + G(x) = g(x).
Using u_t(x, 0) = h(x) we get c F′(x) − c G′(x) = h(x).
We can integrate the last equation to get
c F(x) − c G(x) = ∫_{−∞}^{x} h(ξ) dξ + c_1.
Now we can solve this system of equations to get
F(x) = (1/2) ( g(x) + (1/c) ( ∫_{−∞}^{x} h(ξ) dξ + c_1 ) ),
G(x) = (1/2) ( g(x) − (1/c) ( ∫_{−∞}^{x} h(ξ) dξ + c_1 ) ).
Now, using u(x, t) = F(x + ct) + G(x − ct),
d'Alembert's formula becomes:
u(x, t) = (1/2) ( g(x + ct) + g(x − ct) ) + (1/(2c)) ∫_{x−ct}^{x+ct} h(ξ) dξ.
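The formula is easy to evaluate numerically for given initial data; a minimal sketch (the wave speed, quadrature resolution, and Gaussian example are illustrative choices):

```python
import numpy as np

def dalembert(x, t, g, h, c=1.0, n_quad=201):
    # u(x,t) = (g(x+ct) + g(x-ct))/2 + (1/(2c)) * integral_{x-ct}^{x+ct} h
    xi = np.linspace(x - c * t, x + c * t, n_quad)
    hv = h(xi)
    integral = float(np.sum((hv[:-1] + hv[1:]) * np.diff(xi)) / 2)  # trapezoid
    return 0.5 * (g(x + c * t) + g(x - c * t)) + integral / (2 * c)

# A Gaussian pluck with zero initial velocity splits into two
# half-amplitude pulses travelling in opposite directions.
g = lambda x: np.exp(-x**2)
h = lambda x: np.zeros_like(np.asarray(x, dtype=float))
print(dalembert(0.0, 2.0, g, h))  # ~ exp(-4): the two pulses summed at x = 0
```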
Generalization for inhomogeneous canonical hyperbolic differential equations
The general form of an inhomogeneous canonical hyperbolic differential equation is
u_tt(x, t) − c² u_xx(x, t) = f(x, t)
for x ∈ (−∞, ∞), t > 0.
All second order differential equations with constant coefficients can be transformed into their respective canonical forms. This equation is one of three such cases: the elliptic, parabolic, and hyperbolic partial differential equations.
The only difference between a homogeneous and an inhomogeneous (partial) differential equation is that in the homogeneous form we only allow 0 to stand on the right side (f(x, t) = 0), while the inhomogeneous one is much more general: f(x, t) can be any function as long as it is continuous and can be continuously differentiated twice.
The solution of the above equation is given by the formula:
u(x, t) = (1/2) ( g(x + ct) + g(x − ct) ) + (1/(2c)) ∫_{x−ct}^{x+ct} h(ξ) dξ + (1/(2c)) ∫_0^t ∫_{x−c(t−s)}^{x+c(t−s)} f(ξ, s) dξ ds.
If g = 0, the first part disappears; if h = 0, the second part disappears; and if f = 0, the third part disappears from the solution, since integrating the 0-function between any two bounds always results in 0.
See also
D'Alembert operator
Mechanical wave
Wave equation
Notes
External links
An example of solving a nonhomogeneous wave equation from www.exampleproblems.com
Partial differential equations
|
https://en.wikipedia.org/wiki/C.a.R.
|
C.a.R.– Compass and Ruler (also known as Z.u.L., which stands for the German "Zirkel und Lineal") — is a free and open source interactive geometry app that can do geometrical constructions in Euclidean and non-Euclidean geometry.
The software is Java based.
The author is René Grothmann of the Catholic University of Eichstätt-Ingolstadt.
It is licensed under the terms of the GNU General Public License (GPL).
Assignments
Assignments make it possible to create Java applets for construction exercises.
These applets can be used from the command line using the AppletViewer.
(Previously, they could be run in a browser, but Java support in browsers has been disabled in recent years.)
See also
Graphmatica
GeoGebra
CaRMetal
Compass-and-straightedge construction
External links
C.a.R.
History of the C.a.R.
Free educational software
Free interactive geometry software
Java platform software
|
https://en.wikipedia.org/wiki/Smith%20conjecture
|
In mathematics, the Smith conjecture states that if f is a diffeomorphism of the 3-sphere of finite order, then the fixed point set of f cannot be a nontrivial knot.
Paul A. Smith showed that a non-trivial orientation-preserving diffeomorphism of finite order with fixed points must have a fixed point set equal to a circle, and asked if the fixed point set could be knotted. Friedhelm Waldhausen proved the Smith conjecture for the special case of diffeomorphisms of order 2 (and hence any even order). The proof of the general case was described by John Morgan and Hyman Bass and depended on several major advances in 3-manifold theory, in particular the work of William Thurston on hyperbolic structures on 3-manifolds, and results by William Meeks and Shing-Tung Yau on minimal surfaces in 3-manifolds, with some additional help from Bass, Cameron Gordon, Peter Shalen, and Rick Litherland.
R. H. Bing gave an example of a continuous involution of the 3-sphere whose fixed point set is a wildly embedded circle, so the Smith conjecture is false in the topological (rather than the smooth or PL) category. Charles Giffen showed that the analogue of the Smith conjecture in higher dimensions is false: the fixed point set of a periodic diffeomorphism of a sphere of dimension at least 4 can be a knotted sphere of codimension 2.
See also
Hilbert–Smith conjecture
References
3-manifolds
Conjectures
Diffeomorphisms
Theorems in topology
|
https://en.wikipedia.org/wiki/Paul%20Althaus%20Smith
|
Paul Althaus Smith (May 18, 1900June 13, 1980) was an American mathematician. His name occurs in two significant conjectures in geometric topology: the Smith conjecture, which is now a theorem, and the Hilbert–Smith conjecture, which was proved in dimension 3 in 2013. Smith theory is a theory about homeomorphisms of finite order of manifolds, particularly spheres.
Smith was a student of Solomon Lefschetz at the University of Kansas, moving to Princeton University with Lefschetz in the mid-1920s. He finished his doctorate at Princeton, in 1926. His Ph.D. thesis was published in the Annals of Mathematics that same year. He also worked with George David Birkhoff, with whom he wrote a 1928 paper in ergodic theory, entitled Structure analysis of surface transformations, which appeared in the Journal des Mathématiques.
He subsequently became a professor at Columbia University and at Barnard College. His students at Columbia included Sherman K. Stein and Moses Richardson. He has many academic descendants through Richardson and his student Louis Billera.
Family
Smith was married to a Swiss–American early music pioneer, Suzanne Bloch.
External links
Approximation of curves and surfaces by algebraic curves and surfaces, Annals of Mathematics, 2nd Ser., Vol. 27, No. 3 (Mar., 1926), pp. 224–244.
1900 births
1980 deaths
20th-century American mathematicians
Topologists
Columbia University faculty
Princeton University alumni
|
https://en.wikipedia.org/wiki/Accelerated%20Math
|
Accelerated Math is a daily, progress-monitoring software tool that monitors and manages mathematics skills practice, from preschool math through calculus. It is primarily used by primary and secondary schools, and it is published by Renaissance Learning, Inc. Currently, there are two versions: a desktop version and a web-based version in Renaissance Place, the company's web software for Accelerated Math and a number of other software products (e.g. Accelerated Reader). In Australia and the United Kingdom, the software is referred to as "Accelerated Maths".
Research
Below is a sample of some of the most current research on Accelerated Math.
Sadusky and Brem (2002) studied the impact of first-year implementation of Accelerated Math in a K-6 urban elementary school during the 2001–2002 school year. The researchers found that teachers were able to immediately use data to make decisions about instruction in the classroom. The students in classrooms using Accelerated Math had twice the percentile gains when tested as compared to the control classrooms that did not use Accelerated Math.
Ysseldyke and Tardrew (2003) studied 2,202 students in 125 classrooms encompassing 24 states. The results showed that when students using Accelerated Math were compared to a control group, those students using the software made significant gains on the STAR Math test. Students in grades 3 through 10 that were using Accelerated Math had more than twice the percentile gains on these tests than students in the control group.
Ysseldyke, Betts, Thill, and Hannigan (2004) conducted a quasi-experimental study with third- through sixth-grade Title I students. They found that Title I students who used Accelerated Math outperformed students who did not. Springer, Pugalee, and Algozzine (2005) also discovered a similar pattern. They studied students who had failed the AIMS test, which they needed to pass in order to graduate. Over half of the students passed the test after taking a course in which Accelerated Math was used to improve their achievement.
The What Works Clearinghouse (2008) within the Institute of Educational Sciences concluded that studies they evaluated did not show statistically significant gains when put through the US government's analysis.
For more research, see the link below.
References
External links
Accelerated Math: Entrance Rates, Success Rates, and College Readiness research
Accelerated Math webpage
Accelerated Math research
Renaissance Learning research
2005 and 2006 Readers’ Choice Awards from eSchool News
Alternate usage
For other uses of the term "accelerated math," please see:
Burris (2003), an article on an accelerated mathematics curriculum
Shiran (2000), an article on accelerated math operators in JavaScript programming
Educational math software
Renaissance Learning software
|
https://en.wikipedia.org/wiki/Response%20surface%20methodology
|
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process.
Statistical approaches such as RSM can be employed to maximize the production of a special substance by optimization of operational factors. Recently, RSM, combined with a suitable design of experiments (DoE), has been used extensively for formulation optimization. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques.
Basic approach of response surface methodology
An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as a central composite design can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest.
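A minimal sketch of this basic workflow in Python (illustrative only; the data-generating surface and its optimum are hypothetical):

import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y = 5 - (x1 - 0.3)**2 - 2*(x2 + 0.1)**2 + rng.normal(0, 0.05, 30)

# Second-degree polynomial model:
#   y ~ b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic: solve grad = 0.
Hess = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
grad0 = np.array([b[1], b[2]])
print(np.linalg.solve(Hess, -grad0))   # near the true optimum (0.3, -0.1)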
Important RSM properties and features
Orthogonality: the property that allows individual effects of the k factors to be estimated independently without (or with minimal) confounding. Orthogonality also provides minimum-variance estimates of the model coefficients, so that they are uncorrelated.
Rotatability: the property of rotating points of the design about the center of the factor space. The moments of the distribution of the design points are constant.
Uniformity: a third property of CCD designs, used to control the number of center points, is uniform precision (or uniformity).
Special geometries
Cube
Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane.
Sphere
Spherical designs are discussed by Kiefer and by Hardin and Sloane.
Simplex geometry and mixture experiments
Mixture experiments are discussed in many books on the design of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell.
Extensions
Multiple objective functions
Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extens
|
https://en.wikipedia.org/wiki/Chinese%20restaurant%20process
|
In probability theory, the Chinese restaurant process is a discrete-time stochastic process, analogous to seating customers at tables in a restaurant.
Imagine a restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either sits at the same table as customer 1 or at a new table. This continues, with each customer choosing to either sit at an occupied table with a probability proportional to the number of customers already there (i.e., they are more likely to sit at a table with many customers than few), or an unoccupied table. At time n, the n customers have been partitioned among m ≤ n tables (or blocks of the partition). The results of this process are exchangeable, meaning the order in which the customers sit does not affect the probability of the final distribution. This property greatly simplifies a number of problems in population genetics, linguistic analysis, and image recognition.
The restaurant analogy first appeared in a 1985 write-up by David Aldous, where it was attributed to Jim Pitman (who additionally credits Lester Dubins).
Formal definition
For any positive integer n, let P_n denote the set of all partitions of the set {1, 2, ..., n}. The Chinese restaurant process takes values in the infinite Cartesian product P₁ × P₂ × ⋯.
The value of the process at time n is a partition B_n of the set {1, 2, ..., n}, whose probability distribution is determined as follows. At time n = 1, the trivial partition {{1}} is obtained (with probability one). At time n + 1 the element "n + 1" is either:
added to one of the blocks of the partition B_n, where each block b is chosen with probability |b|/(n + 1), where |b| is the size of the block (i.e. number of elements), or
added to the partition B_n as a new singleton block, with probability 1/(n + 1).
The random partition so generated has some special properties. It is exchangeable in the sense that relabeling does not change the distribution of the partition, and it is consistent in the sense that the law of the partition of obtained by removing the element from the random partition is the same as the law of the random partition .
The probability assigned to any particular partition B of {1, ..., n} (ignoring the order in which customers sit around any particular table) is
Pr(B_n = B) = ∏_{b∈B} (|b| − 1)! / n!,
where b is a block in the partition B and |b| is the size of b.
The definition can be generalized by introducing a parameter θ > 0 which modifies the probability of the new customer sitting at a new table to θ/(n + θ) and correspondingly modifies the probability of them sitting at a table of size |b| to |b|/(n + θ). The vanilla process introduced above can be recovered by setting θ = 1. Intuitively, θ can be interpreted as the effective number of customers sitting at the first empty table.
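A direct simulation of this generalized process (an illustrative sketch; the function name is mine, and the parameterization follows the θ convention above):

import random

def chinese_restaurant_process(n, theta=1.0, seed=0):
    """Seat n customers; return the resulting partition as a list of tables.

    Customer i joins an existing table of size s with probability s/(i + theta)
    and opens a new table with probability theta/(i + theta).
    """
    rng = random.Random(seed)
    tables = []  # tables[k] = list of customers at table k
    for i in range(n):
        # one weight per occupied table, plus theta for a new table
        weights = [len(t) for t in tables] + [theta]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append([i])
        else:
            tables[k].append(i)
    return tables

tables = chinese_restaurant_process(20, theta=2.0)
print([len(t) for t in tables])  # block sizes of the random partition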
Alternative definition
An equivalent, but subtly different way to define the Chinese restaurant process, is to let new customers choose companions rather than tables. Customer chooses to sit at the same table as any one of the seated customers with probability , or chooses to sit at a new, unocc
|
https://en.wikipedia.org/wiki/Fundamental%20matrix%20%28computer%20vision%29
|
In computer vision, the fundamental matrix F is a 3×3 matrix which relates corresponding points in stereo images. In epipolar geometry, with homogeneous image coordinates, x and x′, of corresponding points in a stereo image pair, Fx describes a line (an epipolar line) on which the corresponding point x′ on the other image must lie. That means, for all pairs of corresponding points, x′ᵀ F x = 0 holds.
Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. Its seven parameters represent the only geometric information about cameras that can be obtained through point correspondences alone.
The term "fundamental matrix" was coined by QT Luong in his influential PhD thesis. It is sometimes also referred to as the "bifocal tensor". As a tensor it is a two-point tensor in that it is a bilinear form relating points in distinct coordinate systems.
The above relation which defines the fundamental matrix was published in 1992 by both Olivier Faugeras and Richard Hartley. Although H. Christopher Longuet-Higgins' essential matrix satisfies a similar relationship, the essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in more general and fundamental terms of projective geometry.
This is captured mathematically by the relationship between a fundamental matrix F and its corresponding essential matrix E, which is
E = (K′)ᵀ F K,
K and K′ being the intrinsic calibration matrices of the two images involved.
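A small numerical sketch of these relations (illustrative only; the intrinsics, relative pose, and world point are hypothetical):

import numpy as np

# Hypothetical intrinsics and relative pose (R, t) between two cameras.
K1 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
K2 = K1.copy()
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])  # pure translation along x

def skew(v):
    # cross-product matrix: skew(v) @ w == np.cross(v, w)
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

E = skew(t) @ R                                  # essential matrix
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)  # F = K2^-T E K1^-1

# A world point projected into both views (homogeneous pixel coordinates).
X = np.array([0.2, -0.1, 5.0])
x1 = K1 @ X;           x1 /= x1[2]
x2 = K2 @ (R @ X + t); x2 /= x2[2]

print(x2 @ F @ x1)  # ~0: the epipolar constraint x2' F x1 = 0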
Introduction
The fundamental matrix is a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images. Given the projection of a scene point into one of the images the corresponding point in the other image is constrained to a line, helping the search, and allowing for the detection of wrong correspondences. The relation between corresponding points, which the fundamental matrix represents, is referred to as epipolar constraint, matching constraint, discrete matching constraint, or incidence relation.
Projective reconstruction theorem
The fundamental matrix can be determined by a set of point correspondences. Additionally, these corresponding image points may be triangulated to world points with the help of camera matrices derived directly from this fundamental matrix. The scene composed of these world points is within a projective transformation of the true scene.
Proof
Say that the image point correspondence x ↔ x′ derives from the world point X under the camera matrices (P, P′) as
x = PX,  x′ = P′X.
Say we transform space by a general homography matrix H such that X̃ = HX.
The cameras then transform as
P̃ = PH⁻¹,  P̃′ = P′H⁻¹,
and likewise P̃X̃ = PH⁻¹HX = PX, so the transformed cameras with the transformed world points still get us the same image points.
Derivation of the fundamental matrix using coplanarity condition
The fundamental matrix can also be derived using the coplanarity condition.
For satellite images
The fundamental matrix expresses the epipolar geometry in ster
|
https://en.wikipedia.org/wiki/EGS%20%28program%29
|
The EGS (Electron Gamma Shower) computer code system is a general purpose package for the Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry for particles with energies from a few keV up to several hundreds of GeV. It originated at SLAC, but the National Research Council of Canada and KEK have been involved in its development since the early 1980s.
Development of the original EGS code ended with version EGS4. Since then two groups have re-written the code with new physics:
EGSnrc, maintained by the Ionizing Radiation Standards Group, Measurement Science and Standards, National Research Council of Canada
EGS5, maintained by KEK, the Japanese particle physics research facility.
EGSnrc
EGSnrc is a general-purpose software toolkit that can be applied to build Monte Carlo simulations of coupled electron-photon transport, for particle energies ranging from 1 keV to 10 GeV. It is widely used internationally in a variety of radiation-related fields. The EGSnrc implementation improves the accuracy and precision of the charged particle transport mechanics and the atomic scattering cross-section data. The charged particle multiple scattering algorithm allows for large step sizes without sacrificing accuracy - a key feature of the toolkit that leads to fast simulation speeds. EGSnrc also includes a C++ class library called egs++ that can be used to model elaborate geometries and particle sources.
EGSnrc is open source and distributed on GitHub under the GNU Affero General Public License. Users can download EGSnrc for free, submit bug reports, and contribute pull requests on the project's GitHub page. The documentation for EGSnrc is also available online.
EGSnrc is distributed with a wide range of applications that utilize the radiation transport physics to calculate specific quantities. These codes have been developed by numerous authors over the lifetime of EGSnrc to support the large user community. It is possible to calculate quantities such as absorbed dose, kerma, particle fluence, and much more, with complex geometrical conditions. One of the most well-known EGSnrc applications is BEAMnrc, which was developed as part of the OMEGA project. This was a collaboration between the National Research Council of Canada and a research group at the University of Wisconsin–Madison. All types of medical linear accelerators can be modelled using the BEAMnrc's component module system.
See also
GEANT (program)
Geant4
References
External links
NRC-CNRC page for EGSnrc
KEK page for EGS5
EGSnrc Github page
EGSnrc online documentation
EGSnrc subreddit
Monte Carlo software
Physics software
Medical physics
Radiation therapy
Monte Carlo particle physics software
Free science software
|
https://en.wikipedia.org/wiki/Cer
|
Cer, or CER may refer to:
Environment
Certified Emission Reduction, emission units
Statistics
Control event rate, a statistical value in epidemiology
Crossover error rate, a statistical value in a biometric system
Information technology
Canonical Encoding Rules, encoding format
Customer edge router, in computer networking
CER Computer (Serbian Latin: Cifarski Elektronski Računar, "Digital Electronic Computer"), series of early computers
Geography
Cer (mountain), a mountain in Serbia
Cer, Zvornik, a village in the municipality of Zvornik, Republika Srpska, Bosnia and Herzegovina
Cer, Kičevo, a village in the municipality of Kičevo, North Macedonia
Cherbourg – Maupertus Airport or Aéroport de Cherbourg - Maupertus, an airport in France (IATA airport code: CER)
Chinese Eastern Railway (Chinese: 中東鐵路/中东铁路, also known as the Chinese Far East Railway), a railway in northeastern China
Religion
Keres, Greek goddess of violent death, one of the Greek primordial deities
Medicine
Comparative effectiveness research, comparison of health care intervention effectiveness
Conditioned emotional response, a specific learned behavior or a procedure
Sports
Classic Endurance Racing, a sports car racing series founded in 2004 by Peter Auto Ltd.
Organizations and trade agreements
Conference of European Rabbis, rabbinical alliance in Europe
Closer Economic Relations, a free trade agreement between Australia and New Zealand
Commission for Energy Regulation, a Republic of Ireland energy regulator
Community of European Railway and Infrastructure Companies, railway system
Broadcasting
Astro Ceria Malaysian pay-channel 611
Publications
Comparative Education Review, publication of the Comparative and International Education Society
Finance
Currency exchange rate, rate at which one currency will be exchanged for another currency
|
https://en.wikipedia.org/wiki/Logarithmic%20mean
|
In mathematics, the logarithmic mean is a function of two non-negative numbers which is equal to their difference divided by the logarithm of their quotient.
This calculation is applicable in engineering problems involving heat and mass transfer.
Definition
The logarithmic mean is defined as:
M_lm(x, y) = (y − x)/(ln y − ln x) for x ≠ y, and M_lm(x, x) = x,
for the positive numbers x, y.
Inequalities
The logarithmic mean of two numbers is smaller than the arithmetic mean and the generalized mean with exponent one-third but larger than the geometric mean, unless the numbers are the same, in which case all three means are equal to the numbers.
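A numerical check of this chain of inequalities (an illustrative sketch; the sample values are arbitrary):

import math

def logmean(x, y):
    """Logarithmic mean of two positive numbers (equal arguments -> x)."""
    return x if x == y else (y - x) / (math.log(y) - math.log(x))

x, y = 2.0, 8.0
G = math.sqrt(x * y)                        # geometric mean
L = logmean(x, y)
P = ((x**(1/3) + y**(1/3)) / 2) ** 3        # generalized mean, exponent 1/3
A = (x + y) / 2                             # arithmetic mean
print(G <= L <= P <= A, (G, L, P, A))       # True (4.0, 4.328..., 4.330..., 5.0)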
Toyesh Prakash Sharma generalizes the arithmetic–logarithmic–geometric mean inequality for any n belonging to the whole numbers.
For n = 0 this recovers the arithmetic–logarithmic–geometric mean inequality; similarly, one can also obtain results by putting different values of n, such as n = 1.
For the proof, go through the bibliography.
Derivation
Mean value theorem of differential calculus
From the mean value theorem, there exists a value ξ in the interval between x and y where the derivative f′ equals the slope of the secant line:
f′(ξ) = (f(x) − f(y))/(x − y).
The logarithmic mean is obtained as the value of ξ by substituting ln for f and correspondingly 1/ξ for its derivative:
1/ξ = (ln x − ln y)/(x − y),
and solving for ξ:
ξ = (x − y)/(ln x − ln y).
Integration
The logarithmic mean can also be interpreted as the area under an exponential curve.
The area interpretation allows the easy derivation of some basic properties of the logarithmic mean. Since the exponential function is monotonic, the integral over an interval of length 1 is bounded by x and y. The homogeneity of the integral operator is transferred to the mean operator, that is L(cx, cy) = cL(x, y).
Two other useful integral representations are
1/L(x, y) = ∫₀¹ dt/(t x + (1 − t) y)
and
1/L(x, y) = ∫₀^∞ dt/((t + x)(t + y)).
Generalization
Mean value theorem of differential calculus
One can generalize the mean to variables by considering the mean value theorem for divided differences for the -th derivative of the logarithm.
We obtain
where denotes a divided difference of the logarithm.
For this leads to
Integral
The integral interpretation can also be generalized to more variables, but it leads to a different result. Given the simplex with and an appropriate measure which assigns the simplex a volume of 1, we obtain
This can be simplified using divided differences of the exponential function to
.
Example :
Connection to other means
Arithmetic mean:
Geometric mean:
Harmonic mean:
See also
A different mean which is related to logarithms is the geometric mean.
The logarithmic mean is a special case of the Stolarsky mean.
Logarithmic mean temperature difference
Log semiring
References
Citations
Bibliography
Oilfield Glossary: Term 'logarithmic mean'
Stolarsky, Kenneth B.: Generalizations of the logarithmic mean, Mathematics Magazine, Vol. 48, No. 2, Mar., 1975, pp 87–92
Toyesh Prakash Sharma.: https://www.parabola.unsw.edu.au/files/articles/2020-2029/volume-58-2022/issue-2/vol58_no2_3.pdf "A generalisation of the Arithmetic-Logarithmic-Geometric Mean In
|
https://en.wikipedia.org/wiki/Bregman%20divergence
|
In mathematics, specifically statistics and information geometry, a Bregman divergence or Bregman distance is a measure of difference between two points, defined in terms of a strictly convex function; they form an important class of divergences. When the points are interpreted as probability distributions – notably as either values of the parameter of a parametric model or as a data set of observed values – the resulting distance is a statistical distance. The most basic Bregman divergence is the squared Euclidean distance.
Bregman divergences are similar to metrics, but satisfy neither the triangle inequality (ever) nor symmetry (in general). However, they satisfy a generalization of the Pythagorean theorem, and in information geometry the corresponding statistical manifold is interpreted as a (dually) flat manifold. This allows many techniques of optimization theory to be generalized to Bregman divergences, geometrically as generalizations of least squares.
Bregman divergences are named after Russian mathematician Lev M. Bregman, who introduced the concept in 1967.
Definition
Let F: Ω → R be a continuously-differentiable, strictly convex function defined on a convex set Ω.
The Bregman distance associated with F for points p, q ∈ Ω is the difference between the value of F at point p and the value of the first-order Taylor expansion of F around point q evaluated at point p:
D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩.
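A short sketch instantiating the definition for two classical choices of F (illustrative; here p and q are probability vectors, so the negative-entropy divergence reduces to the Kullback–Leibler divergence):

import numpy as np

def bregman(F, gradF, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - gradF(q) @ (p - q)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

# F(x) = ||x||^2 recovers the squared Euclidean distance.
sq = lambda x: x @ x
print(bregman(sq, lambda x: 2 * x, p, q), np.sum((p - q) ** 2))

# F(x) = sum x_i log x_i (negative entropy) recovers KL divergence
# for probability vectors.
negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1
print(bregman(negent, grad_negent, p, q), np.sum(p * np.log(p / q)))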
Properties
Non-negativity: D_F(p, q) ≥ 0 for all p, q. This is a consequence of the convexity of F.
Positivity: When F is strictly convex, D_F(p, q) = 0 if and only if p = q.
Uniqueness up to affine difference: D_F = D_G if and only if F − G is an affine function.
Convexity: is convex in its first argument, but not necessarily in the second argument. If F is strictly convex, then is strictly convex in its first argument.
For example, one can take f(x) = |x|, smooth it at 0, and construct points for which the resulting divergence fails to be convex in its second argument.
Linearity: If we think of the Bregman distance as an operator on the function F, then it is linear with respect to non-negative coefficients. In other words, for F₁, F₂ strictly convex and differentiable, and λ ≥ 0,
D_{F₁ + λF₂}(p, q) = D_{F₁}(p, q) + λ D_{F₂}(p, q).
Duality: If F is strictly convex, then the function F has a convex conjugate F* which is also strictly convex and continuously differentiable on some convex set Ω*. The Bregman distance defined with respect to F* is dual to D_F as
D_{F*}(p*, q*) = D_F(q, p).
Here, p* = ∇F(p) and q* = ∇F(q) are the dual points corresponding to p and q.
Moreover, using the same notations :
Mean as minimizer: A key result about Bregman divergences is that, given a random vector, the mean vector minimizes the expected Bregman divergence from the random vector. This result generalizes the textbook result that the mean of a set minimizes total squared error to elements in the set. This result was proved for the vector case by (Banerjee et al. 2005), and extended to the case of functions/distributions by (Frigyik et al. 2008). This result is important because it further justifies using a mean as a representative of a random set, particularly in Bayesian estimation.
Bregman balls are bounded, and compact if X is closed: Define Bregman
|
https://en.wikipedia.org/wiki/Leicester%20urban%20area
|
The Leicester Urban Area or Leicester Built Up Area (2011 onwards) is an urban agglomeration defined by the Office for National Statistics (ONS), centred on the City of Leicester in the East Midlands, England. With a population of 559,017 at the time of the 2021 census, increased from 508,916 at the time of the 2011 census, the Built Up Area is the eleventh largest in England and thirteenth largest in the United Kingdom. It comprises Leicester itself and its suburbs, all of which are contiguous with or situated in close proximity to the city.
As of 2011 the Leicester Urban Area was home to 51.8% of the total population of Leicestershire (2001: 48.5%). A 2017 quote from the Leicester City Council website states that "The Greater Leicester urban area is one of the fastest growing in the country, with a population of about 650,000, of which 350,000 live within the city council area".
Analysis: 2011
Analysis: 2001
Analysis: 1991
Analysis: 1981
Analysis: Notes
The Leicester ONS sub-totals are explained with reference to the following:
For 2011, there have been certain reclassifications from 'outer' to 'inner suburbs' thus contributing to the increase in population of Leicester ONS compared to 2001 and prior.
The ONS's definition of "Leicester" in the 2001 census excluded the northern area of the Beaumont Leys ward, which was counted separately, and amalgamated several surrounding towns and villages. Its boundaries and population were not the same as those of the Leicester unitary authority, which had a separate population of 279,921 at the 2001 Census and includes Beaumont Leys. See the 2001 Leicester ONS analysis.
Ongoing refinements in methodology between the two census dates meant that contiguous suburbs, external to the city boundaries and which were subdivided up to 2001, were merged into the 2011 Leicester ONS figure. These, along with infilling at the margins, also accounted for the removal of Ratby and the addition of new subdivisions.
Leicester ONS figures do not include the Ashton Green/Glebelands/Thurcaston Park development (containing 556 residents in 2011). Although within the city boundaries, this incipient addition is in effect a satellite village given that there is 'green separation'. This categorisation is likely to change as future residential expansion planned around the development, converges towards Beaumont Leys and Birstall; both part of the urban area/BUA.
The Kirby Muxloe figures up to 2001 includes the portion of Leicester Forest East which is to the west of the M1 motorway. The eastern section is part of the Leicester ONS subdivision, and is listed separately in the 2001 table.
Although Stoughton is shown as a small leg of the urban area, its figures are not counted.
See also
List of settlements in Leicestershire by population
List of urban areas in the United Kingdom
List of metropolitan areas in the United Kingdom
City regions (United Kingdom)
References
External links
Scalab
|
https://en.wikipedia.org/wiki/O-Matrix
|
O-Matrix is a matrix programming language for mathematics, engineering, science, and financial analysis, marketed by Harmonic Software. The language is designed for use in high-performance computing.
O-Matrix provides an integrated development environment and a matrix-based scripting language. The environment includes mathematical, statistical, engineering and visualization functions. The set of analysis functions is designed for development of complex, computationally intensive scientific, mathematical and engineering applications.
The integrated environment provides a mode that is largely compatible with version 4 of the MATLAB language in the commercial product from MathWorks. Certain features of MATLAB, such as non-numeric data types (structures, cell arrays and objects), error handling with try/catch, and nested and anonymous functions, are missing in O-Matrix.
The O-Matrix environment includes a virtual machine of the O-Matrix language to enable re-distribution of applications.
External links
Array programming languages
Numerical programming languages
|
https://en.wikipedia.org/wiki/Al-Khwarizmi%20%28disambiguation%29
|
Al-Khwarizmi or Muḥammad ibn Mūsā al-Khwārizmī (c. 780 – c. 850) was a Persian scholar who produced works in mathematics, astronomy, and geography.
Al-Khwarizmi may also refer to:
People
Muhammad ibn Ahmad al-Khwarizmi, 10th-century encyclopedist who wrote Mafātīḥ al-ʿulūm ("Key to the Sciences")
Abū Bakr Muḥammad b. al-ʿAbbās al-Khwarizmi, Arabic poet and writer (934–993)
Al-Khwarizmi al-Khati, 11th-century alchemist
Shuja al-Khwarazmi (d. 861), mother of the Abbasid caliph Ja'far al-Mutawakkil
Places
Al-Khwarizmi (crater), a crater on the far-side of the Moon named after Muhammad ibn Musa al-Khwarizmi
Khwarizmi International Award, a research award for achievements in science and technology research
See also
Khwarezmian (disambiguation)
|
https://en.wikipedia.org/wiki/Non-Euclidean%20crystallographic%20group
|
In mathematics, a non-Euclidean crystallographic group, NEC group or N.E.C. group is a discrete group of isometries of the hyperbolic plane. These symmetry groups correspond to the wallpaper groups in Euclidean geometry. An NEC group which contains only orientation-preserving elements is called a Fuchsian group, and any non-Fuchsian NEC group has an index 2 Fuchsian subgroup of orientation-preserving elements.
The hyperbolic triangle groups are notable NEC groups. Others are listed in Orbifold notation.
See also
Non-Euclidean geometry
Isometry group
Fuchsian group
Uniform tilings in hyperbolic plane
References
Non-Euclidean geometry
Hyperbolic geometry
Symmetry
Discrete groups
|
https://en.wikipedia.org/wiki/Order%20type
|
In mathematics, especially in set theory, two ordered sets and are said to have the same order type if they are order isomorphic, that is, if there exists a bijection (each element pairs with exactly one in the other set) such that both and its inverse are monotonic (preserving orders of elements).
In the special case when is totally ordered, monotonicity of already implies monotonicity of its inverse.
One and the same set may be equipped with different orders. Since order-equivalence is an equivalence relation, it partitions the class of all ordered sets into equivalence classes.
Notation
If a set X has order type denoted σ, the order type of the reversed order, the dual of X, is denoted σ*.
The order type of a well-ordered set X is sometimes expressed as ord(X).
Examples
The order type of the integers and rationals is usually denoted ζ and η, respectively. The set of integers and the set of even integers have the same order type, because the mapping n ↦ 2n is a bijection that preserves the order. But the set of integers and the set of rational numbers (with the standard ordering) do not have the same order type, because even though the sets are of the same size (they are both countably infinite), there is no order-preserving bijective mapping between them. The open interval (0, 1) of rationals is order isomorphic to the rationals, since, for example, x ↦ (2x − 1)/(1 − |2x − 1|) is a strictly increasing bijection from the former to the latter. Relevant theorems of this sort are expanded upon below.
More examples can be given now: The set of positive integers (which has a least element), and that of negative integers (which has a greatest element). The natural numbers have order type denoted by ω, as explained below.
The rationals contained in the half-closed intervals [0,1) and (0,1], and the closed interval [0,1], are three additional order type examples.
Order type of well-orderings
Every well-ordered set is order-equivalent to exactly one ordinal number, by definition. The ordinal numbers are taken to be the canonical representatives of their classes, and so the order type of a well-ordered set is usually identified with the corresponding ordinal. Order types thus often take the form of arithmetic expressions of ordinals.
Examples
Firstly, the order type of the set of natural numbers is ω. Any other model of Peano arithmetic, that is any non-standard model, starts with a segment isomorphic to ω but then adds extra numbers. For example, any countable such model has order type ω + (ω* + ω) ⋅ η.
Secondly, consider the set V of even ordinals less than ω ⋅ 2 + 7:
V = {0, 2, 4, …; ω, ω + 2, ω + 4, …; ω ⋅ 2, ω ⋅ 2 + 2, ω ⋅ 2 + 4, ω ⋅ 2 + 6}.
As this comprises two separate counting sequences followed by four elements at the end, the order type is
ord(V) = ω ⋅ 2 + 4.
Rational numbers
With respect to their standard ordering as numbers, the set of rationals is not well-ordered. Neither is the completed set of reals, for that matter.
Any countable totally ordered set can be mapped injectively into the rational numbers in an order-preserving way. When the order is moreover dense and has no highest nor lowest
|
https://en.wikipedia.org/wiki/Corona%20theorem
|
In mathematics, the corona theorem is a result about the spectrum of the bounded holomorphic functions on the open unit disc, conjectured by Shizuo Kakutani (1941) and proved by Lennart Carleson (1962).
The commutative Banach algebra and Hardy space H∞ consists of the bounded holomorphic functions on the open unit disc D. Its spectrum S (the closed maximal ideals) contains D as an open subspace because for each z in D there is a maximal ideal consisting of functions f with
f(z) = 0.
The subspace D cannot make up the entire spectrum S, essentially because the spectrum is a compact space and D is not. The complement of the closure of D in S was called the corona by Newman (1959), and the corona theorem states that the corona is empty, or in other words the open unit disc D is dense in the spectrum. A more elementary formulation is that elements f₁, ..., f_n generate the unit ideal of H∞ if and only if there is some δ > 0 such that
|f₁(z)| + ⋯ + |f_n(z)| ≥ δ
everywhere in the unit ball.
Newman showed that the corona theorem can be reduced to an interpolation problem, which was then proved by Carleson.
In 1979 Thomas Wolff gave a simplified (but unpublished) proof of the corona theorem, described in and .
Cole later showed that this result cannot be extended to all open Riemann surfaces.
As a by-product, of Carleson's work, the Carleson measure was invented which itself is a very useful tool in modern function theory. It remains an open question whether there are versions of the corona theorem for every planar domain or for higher-dimensional domains.
Note that if one assumes the continuity up to the boundary in the corona theorem, then the conclusion follows easily from the theory of commutative Banach algebras.
See also
Corona set
References
.
.
Banach algebras
Hardy spaces
Theorems in complex analysis
|
https://en.wikipedia.org/wiki/Algebraic%20manifold
|
__notoc__
In mathematics, an algebraic manifold is an algebraic variety which is also a manifold. As such, algebraic manifolds are a generalisation of the concept of smooth curves and surfaces defined by polynomials. An example is the sphere, which can be defined as the zero set of the polynomial x² + y² + z² − 1 and hence is an algebraic variety.
For an algebraic manifold, the ground field will be the real numbers or complex numbers; in the case of the real numbers, the manifold of real points is sometimes called a Nash manifold.
Every sufficiently small local patch of an algebraic manifold is isomorphic to km where k is the ground field. Equivalently the variety is smooth (free from singular points). The Riemann sphere is one example of a complex algebraic manifold, since it is the complex projective line.
Examples
Elliptic curves
Grassmannian
See also
Algebraic geometry and analytic geometry
References
(See also Proc. Internat. Congr. Math., 1950, (AMS, 1952), pp. 516–517.)
External links
K-Algebraic manifold at PlanetMath
Algebraic manifold at Mathworld
Lecture notes on algebraic manifolds
Lecture notes on algebraic manifolds
Algebraic varieties
Manifolds
|
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Go%C5%82%C4%85b
|
Stanisław Gołąb (July 26, 1902 – April 30, 1980) was a Polish mathematician from Kraków, working in particular on the field of affine geometry.
In 1932, he proved that the perimeter of the unit disc with respect to a given metric can take any value between 6 and 8, and that these extremal values are attained if and only if the unit disc is an affine regular hexagon or a parallelogram, respectively.
Selected works
S. Gołąb: Quelques problèmes métriques de la géometrie de Minkowski, Trav. de l'Acad. Mines Cracovie 6 (1932), 1–79
Golab, S., Über einen algebraischen Satz, welcher in der Theorie der geometrischen Objekte auftritt, Beiträge zur Algebra und Geometrie 2 (1974) 7–10.
Golab, S.; Swiatak, H.: Note on Inner Products in Vector Spaces. Aequationes Mathematicae (1972) 74.
Golab, S.: Über das Carnotsche Skalarprodukt in schwach normierten Vektorräumen. Aequationes Mathematicae 13 (1975) 9–13.
Golab,S., Sur un problème de la métrique angulaire dans la géometrie de Minkowski, Aequationes Mathematicae (1971) 121.
Golab, S., Über die Grundlagen der affinen Geometrie., Jahresbericht DMV 71 (1969) 138–155.
Notes
External links
List of Golab's articles at U. of Göttingen, Germany
1902 births
1980 deaths
20th-century Polish mathematicians
Geometers
Scientists from Kraków
|
https://en.wikipedia.org/wiki/Minkowski%20plane
|
In mathematics, a Minkowski plane (named after Hermann Minkowski) is one of the Benz planes (the others being Möbius plane and Laguerre plane).
Classical real Minkowski plane
Applying the pseudo-euclidean distance d(P₁, P₂) = (x′₁ − x′₂)² − (y′₁ − y′₂)² on two points Pᵢ = (x′ᵢ, y′ᵢ) (instead of the euclidean distance) we get the geometry of hyperbolas, because a pseudo-euclidean circle {P : d(P, M) = r} is a hyperbola with midpoint M.
By a transformation of coordinates x = x′ + y′, y = x′ − y′, the pseudo-euclidean distance can be rewritten as d(P₁, P₂) = (x₁ − x₂)(y₁ − y₂). The hyperbolas then have asymptotes parallel to the non-primed coordinate axes.
The following completion (see Möbius and Laguerre planes) homogenizes the geometry of hyperbolas:
the set of points:
the set of cycles
The incidence structure is called the classical real Minkowski plane.
The set of points consists of , two copies of and the point .
Any line is completed by point , any hyperbola by the two points (see figure).
Two points can not be connected by a cycle if and only if or .
We define:
Two points are (+)-parallel () if and (−)-parallel () if .
Both these relations are equivalence relations on the set of points.
Two points are called parallel () if
or .
From the definition above we find:
Lemma:
For any pair of non parallel points there is exactly one point with .
For any point and any cycle there are exactly two points with .
For any three points , , , pairwise non parallel, there is exactly one cycle that contains .
For any cycle , any point and any point and there exists exactly one cycle such that , i.e. touches at point P.
Like the classical Möbius and Laguerre planes Minkowski planes can be described as the geometry of plane sections of a suitable quadric. But in this case the quadric lives in projective 3-space: The classical real Minkowski plane is isomorphic to the geometry of plane sections of a hyperboloid of one sheet (not degenerated quadric of index 2).
The axioms of a Minkowski plane
Let be an incidence structure with the set of points, the set of cycles and two equivalence relations ((+)-parallel) and ((−)-parallel) on set . For we define:
and .
An equivalence class or is called (+)-generator and (−)-generator, respectively. (For the space model of the classical Minkowski plane a generator is a line on the hyperboloid.)
Two points are called parallel () if or .
An incidence structure is called Minkowski plane if the following axioms hold:
C1: For any pair of non parallel points there is exactly one point with .
C2: For any point and any cycle there are exactly two points with .
C3: For any three points , pairwise non parallel, there is exactly one cycle which contains .
C4: For any cycle , any point and any point and there exists exactly one cycle such that , i.e., touches at point .
C5: Any cycle contains at least 3 points. There is at least one cycle and a point not in .
For investigations the following statements on parallel classes (equivalent to C1, C2 respectively) are advantageous.
C1′: F
|
https://en.wikipedia.org/wiki/Algebraic%20link
|
In the mathematical field of knot theory, an algebraic link is a link that can be decomposed by Conway spheres into 2-tangles. Algebraic links are also called arborescent links.
Although algebraic links and algebraic tangles were originally defined by John H. Conway as having two pairs of open ends, they were subsequently generalized to more pairs.
References
Links (knot theory)
|
https://en.wikipedia.org/wiki/Integer%20matrix
|
In mathematics, an integer matrix is a matrix whose entries are all integers. Examples include binary matrices, the zero matrix, the matrix of ones, the identity matrix, and the adjacency matrices used in graph theory, amongst many others. Integer matrices find frequent application in combinatorics.
Examples
and
are both examples of integer matrices.
Properties
Invertibility of integer matrices is in general more numerically stable than that of non-integer matrices. The determinant of an integer matrix is itself an integer, thus the numerically smallest possible magnitude of the determinant of an invertible integer matrix is one, hence where inverses exist they do not become excessively large (see condition number). Theorems from matrix theory that infer properties from determinants thus avoid the traps induced by ill conditioned (nearly zero determinant) real or floating point valued matrices.
The inverse of an integer matrix M is again an integer matrix if and only if the determinant of M equals 1 or −1. Integer matrices of determinant ±1 form the group GL(n, Z), which has far-reaching applications in arithmetic and geometry. For n = 2, it is closely related to the modular group.
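A small numerical illustration of the determinant criterion (sketch; the matrices are arbitrary):

import numpy as np

A = np.array([[3, 1], [5, 2]])          # det = 1
print(round(np.linalg.det(A)))          # 1
print(np.linalg.inv(A))                 # [[ 2, -1], [-5, 3]]: again an integer matrix

B = np.array([[2, 0], [0, 1]])          # det = 2
print(np.linalg.inv(B))                 # contains 0.5: not an integer matrix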
The intersection of the integer matrices with the orthogonal group is the group of signed permutation matrices.
The characteristic polynomial of an integer matrix has integer coefficients. Since the eigenvalues of a matrix are the roots of this polynomial, the eigenvalues of an integer matrix are algebraic integers. In dimension less than 5, they can thus be expressed by radicals involving integers.
Integer matrices are sometimes called integral matrices, although this use is discouraged.
See also
GCD matrix
Unimodular matrix
Wilson matrix
External links
Integer Matrix at MathWorld
Matrices
|
https://en.wikipedia.org/wiki/Generalized%20least%20squares
|
In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in the regression model. GLS is employed to improve statistical efficiency and reduce the risk of drawing erroneous inferences compared to conventional least squares and weighted least squares methods. It was first described by Alexander Aitken in 1935.
Method outline
In standard linear regression models, one observes data on n statistical units.
The response values are placed in a vector, y = (y₁, …, y_n)ᵀ, and the predictor values are placed in the design matrix X, where each row is a vector of the predictor variables (including a constant) for the ith data point.
The model assumes that the conditional mean of y given X is a linear function of X and that the conditional variance of the error term given X is a known non-singular covariance matrix Ω. That is,
y = Xβ + ε,  E[ε | X] = 0,  Cov[ε | X] = Ω,
where β is a vector of unknown constants, called "regression coefficients", which are estimated from the data.
If b is a candidate estimate for β, then the residual vector for b is y − Xb. The generalized least squares method estimates β by minimizing the squared Mahalanobis length of this residual vector:
β̂ = argmin_b (y − Xb)ᵀ Ω⁻¹ (y − Xb),
which is equivalent to
β̂ = argmin_b ( yᵀΩ⁻¹y − 2bᵀXᵀΩ⁻¹y + bᵀXᵀΩ⁻¹Xb ),
which is a quadratic programming problem. The stationary point of the objective function occurs when
2XᵀΩ⁻¹Xb − 2XᵀΩ⁻¹y = 0,
so
β̂ = (XᵀΩ⁻¹X)⁻¹ XᵀΩ⁻¹y.
The quantity Ω⁻¹ is known as the precision matrix (or dispersion matrix), a generalization of the diagonal weight matrix.
Properties
The GLS estimator is unbiased, consistent, efficient, and asymptotically normal with
E[β̂ | X] = β  and  Cov[β̂ | X] = (XᵀΩ⁻¹X)⁻¹.
GLS is equivalent to applying ordinary least squares (OLS) to a linearly-transformed version of the data. This can be seen by factoring Ω = CCᵀ using a method like the Cholesky decomposition. Left-multiplying both sides of y = Xβ + ε by C⁻¹ yields an equivalent linear model:
y* = X*β + ε*,  where y* = C⁻¹y, X* = C⁻¹X, ε* = C⁻¹ε.
In this model, Var[ε* | X] = C⁻¹Ω(C⁻¹)ᵀ = I, where I is the identity matrix. Then, β can be efficiently estimated by applying OLS to the transformed data, which requires minimizing the objective
‖y* − X*b‖² = (y − Xb)ᵀΩ⁻¹(y − Xb).
This transformation effectively standardizes the scale of and de-correlates the errors. When OLS is used on data with homoscedastic errors, the Gauss–Markov theorem applies, so the GLS estimate is the best linear unbiased estimator for β.
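A minimal NumPy sketch of both routes, the direct GLS estimator and the whitened OLS (illustrative; the AR(1)-style Ω is a hypothetical choice for demonstration):

import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])

# AR(1)-style covariance for the errors (a hypothetical Omega).
rho = 0.6
Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
eps = rng.multivariate_normal(np.zeros(n), Omega)
y = X @ beta_true + eps

# Direct GLS: b = (X' Omega^-1 X)^-1 X' Omega^-1 y
Oi = np.linalg.inv(Omega)
b_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)

# Equivalent whitened OLS: left-multiply by C^-1 where Omega = C C'.
C = np.linalg.cholesky(Omega)
Xw, yw = np.linalg.solve(C, X), np.linalg.solve(C, y)
b_ols_w = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(np.allclose(b_gls, b_ols_w))  # True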
Weighted least squares
A special case of GLS, called weighted least squares (WLS), occurs when all the off-diagonal entries of Ω are 0. This situation arises when the variances of the observed values are unequal or when heteroscedasticity is present, but no correlations exist among the errors. The weight for unit i is proportional to the reciprocal of the variance of the response for unit i.
Derivation by maximum likelihood estimation
Ordinary least squares can be interpreted as maximum likelihood estimation with the prior that the errors are independent and normally-distributed with zero mean and common variance. In GLS, the prior is generalized to the case where errors may not be independent and may have differing variance
|
https://en.wikipedia.org/wiki/Frame%20%28linear%20algebra%29
|
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering.
Definition and motivation
Motivating example: computing a basis from a linearly dependent set
Suppose we have a set of vectors {e_k} in the vector space V and we want to express an arbitrary element v ∈ V as a linear combination of the vectors {e_k}, that is, we want to find coefficients c_k such that
v = Σ_k c_k e_k.
If the set does not span , then such coefficients do not exist for every such . If spans and also is linearly independent, this set forms a basis of , and the coefficients are uniquely determined by . If, however, spans but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if is of infinite dimension.
Given that spans and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:
Removing arbitrary vectors from the set may cause it to be unable to span before it becomes linearly independent.
Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite.
In some applications, it may be an advantage to use more vectors than necessary to represent . This means that we want to find the coefficients without removing elements in . The coefficients will no longer be uniquely determined by . Therefore, the vector can be represented as a linear combination of in more than one way.
Formal definition
Let V be an inner product space and {e_k} be a set of vectors in V. These vectors satisfy the frame condition if there are positive real numbers A and B such that 0 < A ≤ B < ∞ and for each v in V,
A‖v‖² ≤ Σ_k |⟨v, e_k⟩|² ≤ B‖v‖².
A set of vectors that satisfies the frame condition is a frame for the vector space.
The numbers A and B are called the lower and upper frame bounds, respectively. The frame bounds are not unique because numbers less than A and greater than B are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds.
A frame is called overcomplete (or redundant) if it is not a basis for the vector space.
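As a numerical illustration (a sketch, not from the article): in finite dimensions the optimal frame bounds are the extreme eigenvalues of the frame operator S = Σ_k e_k e_kᵀ. The "Mercedes-Benz" frame, three unit vectors in R² at 120° spacing, is a standard overcomplete example:

import numpy as np

angles = np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])
E = np.stack([np.cos(angles), np.sin(angles)])   # columns are the frame vectors

S = E @ E.T                                      # frame operator
evals = np.linalg.eigvalsh(S)
A, B = evals[0], evals[-1]                       # optimal frame bounds
print(A, B)                                      # 1.5 1.5 -> a tight frame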
Analysis operator
The operator T mapping v ∈ V to its sequence of coefficients is called the analysis operator of the frame. It is defined by:
T: V → ℓ²,  Tv = (⟨v, e_k⟩)_k.
By using this definition we may rewrite the frame condition as
A‖v‖² ≤ ‖Tv‖² ≤ B‖v‖²,
where the left and right norms denote the norm in V and the middle norm is the ℓ² norm.
Synthesis operator
The adjoint operator of the analysis operator is called the synthesis operator of the
|
https://en.wikipedia.org/wiki/Thomas%20Heath
|
Thomas Heath may refer to:
Thomas Heath (classicist) (1861–1940), British civil servant, and historian of ancient Greek mathematics
Thomas Heath (cricketer) (1806–1872), cricketer
Thomas Kurton Heath (1853–1938), vaudeville actor
Tommy Heath (born 1947), musician
Tommy Heath (baseball) (1913–1967), American catcher, scout and baseball manager
Thomas Heath Haviland Sr. (1795–1867), Canadian land owner and banker
Thomas Heath Haviland (1822–1895), Canadian lawyer and politician
Thomas Heath Robinson (1869–1954), English illustrator
Thomas Heth (or Heath) (fl. 1583), English mathematician
|
https://en.wikipedia.org/wiki/Andrea%20Sch%C3%B6pp
|
Andrea Schöpp (born 27 February 1965) is a German curler from Garmisch-Partenkirchen. She lectures part-time in statistics at the University of Munich.
Career
Schöpp is a two-time World champion ( and ), seven-time European champion (, , , , , , ) and 1992 Winter Olympics champion (demonstration). Schöpp has skipped every team she has played for in international events - except when she plays at the European Mixed Curling Championships, where she usually plays third for her brother, Rainer.
Schöpp made her international debut in 1980, at the age of 15. She skipped the German team to a bronze medal at the European championships that year. She also won silver medals at the Worlds in 1986 and 1987 and a bronze in 1989. She continues to curl, although she has had less success in the last decade. Her fourth-place finish at the 2006 Ford World Women's Curling Championship was her highest placement since 1996 at the Worlds. She won the in Swift Current, Saskatchewan, Canada with an extra-end victory over Scotland's Eve Muirhead.
Schöpp won a gold medal at the 2008 European Mixed Curling Championship as a part of the team skipped by her brother Rainer.
Personal life
Schöpp studied statistics at the University of Munich and earned her diploma in 1991. She completed her doctorate in 1996, and has been employed by the University of Munich since 1991.
Schöpp's brother, Rainer Schöpp, is also a curler.
She was born hours before her longtime teammate, Monika Wagner, in the same hospital.
Teammates
Works
Alternative Parametrisierungen Bei Korrelierten Bivariaten Binären Responsevariablen. Vol. 1, Anwendungsorientierte Statistik. 1997.
References
External links
Andrea Schöpp - Player Profile - Curling - Eurosport UK
Living people
1965 births
Sportspeople from Garmisch-Partenkirchen
German female curlers
Olympic gold medalists for Germany
Medalists at the 1992 Winter Olympics
Curlers at the 1988 Winter Olympics
Curlers at the 1992 Winter Olympics
Curlers at the 1998 Winter Olympics
Curlers at the 2010 Winter Olympics
Olympic curlers for Germany
World curling champions
European curling champions
German curling champions
Continental Cup of Curling participants
|
https://en.wikipedia.org/wiki/Nine-point%20center
|
In geometry, the nine-point center is a triangle center, a point defined from a given triangle in a way that does not depend on the placement or scale of the triangle.
It is so called because it is the center of the nine-point circle, a circle that passes through nine significant points of the triangle: the midpoints of the three edges, the feet of the three altitudes, and the points halfway between the orthocenter and each of the three vertices. The nine-point center is listed as point X(5) in Clark Kimberling's Encyclopedia of Triangle Centers.
Properties
The nine-point center N lies on the Euler line of its triangle, at the midpoint between that triangle's orthocenter H and circumcenter O; that is, N = (O + H)/2. The centroid G also lies on the same line, 2/3 of the way from the orthocenter to the circumcenter.
Thus, if any two of these four triangle centers are known, the positions of the other two may be determined from them.
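A short numerical check of these relations (an illustrative sketch; the triangle coordinates are arbitrary, and the Euler-line identity H = A + B + C − 2O is a standard fact used here):

import numpy as np

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])

def circumcenter(a, b, c):
    # Solve |x - a|^2 = |x - b|^2 = |x - c|^2 as a linear system.
    M = 2 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(M, rhs)

O = circumcenter(A, B, C)
H = A + B + C - 2 * O        # orthocenter (Euler line identity)
N = (O + H) / 2              # nine-point center

# N is equidistant from the three side midpoints (radius R/2).
mids = [(A + B) / 2, (B + C) / 2, (C + A) / 2]
print([round(float(np.linalg.norm(N - m)), 6) for m in mids])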
Andrew Guinand proved in 1984, as part of what is now known as Euler's triangle determination problem, that if the positions of these centers are given for an unknown triangle, then the incenter of the triangle lies within the orthocentroidal circle (the circle having the segment from the centroid to the orthocenter as its diameter). The only point inside this circle that cannot be the incenter is the nine-point center, and every other interior point of the circle is the incenter of a unique triangle.
The distance from the nine-point center N to the incenter I satisfies
NI = (R − 2r)/2 < R/2,
where R and r are the circumradius and inradius respectively.
The nine-point center is the circumcenter of the medial triangle of the given triangle, the circumcenter of the orthic triangle of the given triangle, and the circumcenter of the Euler triangle. More generally it is the circumcenter of any triangle defined from three of the nine points defining the nine-point circle.
The nine-point center lies at the centroid of four points: the triangle's three vertices and its orthocenter.
The Euler lines of the four triangles formed by an orthocentric system (a set of four points such that each is the orthocenter of the triangle with vertices at the other three points) are concurrent at the nine-point center common to all of the triangles.
Of the nine points defining the nine-point circle, the three midpoints of line segments between the vertices and the orthocenter are reflections of the triangle's midpoints about its nine-point center. Thus, the nine-point center forms the center of a point reflection that maps the medial triangle to the Euler triangle, and vice versa.
According to Lester's theorem, the nine-point center lies on a common circle with three other points: the two Fermat points and the circumcenter.
The Kosnita point of a triangle, a triangle center associated with Kosnita's theorem, is the isogonal conjugate of the nine-point center.
Coordinates
Trilinear coordinates for the nine-point center are
cos(B − C) : cos(C − A) : cos(A − B).
The barycentric coordinates of the nine-point center are
a cos(B − C) : b cos(C − A) : c cos(A − B).
Thus if
|
https://en.wikipedia.org/wiki/NCAA%20Division%20I%20college%20baseball%20team%20statistics
|
The following is a list of National Collegiate Athletic Association (NCAA) Division I college baseball team statistics as of the conclusion of the 2017 season, including all-time number of wins, losses, and ties; number of seasons played; and percent of games won.
This list includes each team's record as a senior college only, and only teams with 25 or more seasons in Division I are included.
Winningest Baseball Programs as of Conclusion of 2017 Season
See also
Baseball statistics
NCAA Division I Baseball Championship
References
Statistics, team
|
https://en.wikipedia.org/wiki/Quasiprobability%20distribution
|
A quasiprobability distribution is a mathematical object similar to a probability distribution but which relaxes some of Kolmogorov's axioms of probability theory. Quasiprobabilities share several general features with ordinary probabilities, such as, crucially, the ability to yield expectation values with respect to the weights of the distribution. However, they can violate the σ-additivity axiom: integrating over them does not necessarily yield probabilities of mutually exclusive states. Indeed, quasiprobability distributions can, counterintuitively, have regions of negative probability density, contradicting the first axiom. Quasiprobability distributions arise naturally in the study of quantum mechanics when treated in phase space formulation, commonly used in quantum optics, time-frequency analysis, and elsewhere.
Introduction
In the most general form, the dynamics of a quantum-mechanical system are determined by a master equation in Hilbert space: an equation of motion for the density operator (usually written ρ) of the system. The density operator is defined with respect to a complete orthonormal basis. Although it is possible to directly integrate this equation for very small systems (i.e., systems with few particles or degrees of freedom), this quickly becomes intractable for larger systems. However, it is possible to prove that the density operator can always be written in a diagonal form, provided that it is with respect to an overcomplete basis. When the density operator is represented in such an overcomplete basis, then it can be written in a manner more closely resembling an ordinary function, at the expense that the function has the features of a quasiprobability distribution. The evolution of the system is then completely determined by the evolution of the quasiprobability distribution function.
The coherent states, i.e. right eigenstates of the annihilation operator â, serve as the overcomplete basis in the construction described above. By definition, the coherent states have the following property:
â |α⟩ = α |α⟩.
They also have some further interesting properties. For example, no two coherent states are orthogonal. In fact, if |α⟩ and |β⟩ are a pair of coherent states, then
|⟨α|β⟩|² = e^(−|α − β|²) ≠ 0.
Note that these states are, however, correctly normalized, with ⟨α|α⟩ = 1. Owing to the completeness of the basis of Fock states, the choice of the basis of coherent states must be overcomplete.
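As a quick numerical illustration (my own, not from the article), coherent states built in a truncated Fock basis are normalized yet mutually non-orthogonal, with overlap |⟨α|β⟩|² = exp(−|α − β|²):

```python
import numpy as np
from math import factorial

def coherent(alpha, nmax=60):
    # Fock-basis amplitudes of |alpha>: exp(-|a|^2/2) * a^n / sqrt(n!)
    return np.exp(-abs(alpha) ** 2 / 2) * np.array(
        [alpha ** k / np.sqrt(factorial(k)) for k in range(nmax)])

a, b = 0.7 + 0.2j, -0.3 + 0.5j
ket_a, ket_b = coherent(a), coherent(b)

print(abs(np.vdot(ket_a, ket_a)))    # ~1.0: correctly normalized
print(abs(np.vdot(ket_a, ket_b)) ** 2)   # nonzero overlap: not orthogonal
print(np.exp(-abs(a - b) ** 2))          # matches the line above
```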
In the coherent states basis, however, it is always possible to express the density operator in the diagonal form
ρ = ∫ f(α, α*) |α⟩⟨α| d²α,
where f is a representation of the phase space distribution. This function f is considered a quasiprobability density because it has the following properties:
∫ f(α, α*) d²α = 1 (normalization)
If Â is an operator that can be expressed as a power series of the creation and annihilation operators in an ordering Ω, then its expectation value can be obtained by integrating f against the corresponding phase-space function of α and α* (the optical equivalence theorem).
The function f is not unique. There exists a family of different representations, each connected to a different ordering Ω.
|
https://en.wikipedia.org/wiki/Phase%20portrait
|
In mathematics, a phase portrait is a geometric representation of the orbits of a dynamical system in the phase plane. Each set of initial conditions is represented by a different point or curve.
Phase portraits are an invaluable tool in studying dynamical systems. They consist of a plot of typical trajectories in the phase space. This reveals information such as whether an attractor, a repeller, or a limit cycle is present for the chosen parameter value. The concept of topological equivalence is important in classifying the behaviour of systems by specifying when two different phase portraits represent the same qualitative dynamic behavior. An attractor is a stable point, also called a "sink"; a repeller is an unstable point, also known as a "source".
A phase portrait graph of a dynamical system depicts the system's trajectories (with arrows), stable steady states (with dots), and unstable steady states (with circles) in a phase space. The axes correspond to the state variables.
Examples
Simple pendulum, see picture (right).
Simple harmonic oscillator where the phase portrait is made up of ellipses centred at the origin, which is a fixed point.
Damped harmonic motion, see animation (right).
Van der Pol oscillator see picture (bottom right).
Visualizing the behavior of ordinary differential equations
A phase portrait represents the directional behavior of a system of ordinary differential equations (ODEs). The phase portrait can indicate the stability of the system.
The phase portrait behavior of a linear system of ODEs can be determined by the eigenvalues of its coefficient matrix, or equivalently by its trace and determinant (trace = λ1 + λ2, determinant = λ1·λ2).
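For instance, the following sketch of my own (not from the article, using matplotlib) draws the phase portrait of the undamped simple pendulum θ″ = −sin θ as a stream plot over the (θ, ω) phase plane:

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of states: angle theta and angular velocity omega.
theta, omega = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 40),
                           np.linspace(-3, 3, 40))
dtheta = omega              # theta' = omega
domega = -np.sin(theta)     # omega' = -sin(theta)

plt.streamplot(theta, omega, dtheta, domega, density=1.2)
plt.xlabel("theta (angle)")
plt.ylabel("omega (angular velocity)")
plt.title("Phase portrait of the simple pendulum")
plt.show()
```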
See also
Phase space
Phase plane
References
External links
Linear Phase Portraits, an MIT Mathlet.
Dynamical systems
Diagrams
|
https://en.wikipedia.org/wiki/3/8
|
3/8 or ⅜ may refer to:
3rd Battalion, 8th Marines
the calendar date March 8 (United States)
the calendar date August 3 (Gregorian calendar)
the fraction (mathematics), three eighths or 0.375 in decimal
a time signature
3/8 (album), a 2007 album by Kay Tse
|
https://en.wikipedia.org/wiki/Join-calculus
|
The join-calculus is a process calculus developed at INRIA. The join-calculus was developed to provide a formal basis for the design of distributed programming languages, and therefore intentionally avoids communications constructs found in other process calculi, such as rendezvous communications, which are difficult to implement in a distributed setting. Despite this limitation, the join-calculus is as expressive as the full π-calculus. Encodings of the π-calculus in the join-calculus, and vice versa, have been demonstrated.
The join-calculus is a member of the π-calculus family of process calculi, and can be considered, at its core, an asynchronous π-calculus with several strong restrictions:
Scope restriction, reception, and replicated reception are syntactically merged into a single construct, the definition;
Communication occurs only on defined names;
For every defined name there is exactly one replicated reception.
However, as a language for programming, the join-calculus offers at least one convenience over the π-calculus — namely the use of multi-way join patterns, the ability to match against messages from multiple channels simultaneously.
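As a rough illustration (a toy sketch of my own, not a real join-calculus implementation), the following Python snippet fires a reaction only when a message is present on both of two channels, mimicking a two-way join pattern such as put(x) & get(r) -> r(x); the class name Join2 is invented for this example:

```python
import queue
import threading
import time

class Join2:
    def __init__(self, reaction):
        self.a = queue.Queue()
        self.b = queue.Queue()
        self.reaction = reaction
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            # Block until one message from each channel is available,
            # then consume both and fire the joined reaction.
            x = self.a.get()
            y = self.b.get()
            self.reaction(x, y)

j = Join2(lambda x, y: print(f"joined {x!r} with {y!r}"))
j.a.put("hello")     # nothing fires yet: channel b is still empty
j.b.put(42)          # now both channels hold a message -> reaction fires
time.sleep(0.1)      # give the worker thread a moment to print
```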
Implementations
Languages based on the join-calculus
The join-calculus programming language is a language based on the join-calculus process calculus. It is implemented as an interpreter written in OCaml, and supports statically typed distributed programming, transparent remote communication, agent-based mobility, and some failure detection.
Though not explicitly based on join-calculus, the rule system of CLIPS implements it if every rule deletes its inputs when triggered (retracts the relevant facts when fired).
Many implementations of the join-calculus were made as extensions of existing programming languages:
JoCaml is a version of OCaml extended with join-calculus primitives
Polyphonic C# and its successor Cω extend C#
MC# and Parallel C# extend Polyphonic C#
Join Java extends Java
A Concurrent Basic proposal that uses Join-calculus
JErlang (the J is for Join, erjang is Erlang for the JVM)
Embeddings in other programming languages
These implementations do not change the underlying programming language but introduce join calculus operations through a custom library or DSL:
The ScalaJoins and the Chymyst libraries are in Scala
JoinHs by Einar Karttunen and syallop/Join-Language by Samuel Yallop are DSLs for Join calculus in Haskell
Joinads - various implementations of join calculus in F#
CocoaJoin is an experimental implementation in Objective-C for iOS and Mac OS X
The Join Python library in Python 3
C++ via Boost (referring to Boost circa 2009, around version 1.40; as of December 2019 the current release was 1.72).
References
External links
INRIA, Join Calculus homepage
Microsoft Research, The Join Calculus: a Language for Distributed Mobile Programming
Process calculi
|
https://en.wikipedia.org/wiki/M-sequence
|
An M-sequence may refer to:
Regular sequence, which is an important topic in commutative algebra.
A maximum length sequence, which is a type of pseudorandom binary sequence.
|
https://en.wikipedia.org/wiki/Golden%20triangle%20%28mathematics%29
|
A golden triangle, also called a sublime triangle, is an isosceles triangle in which the duplicated side is in the golden ratio to the base side:
a/b = φ = (1 + √5)/2 ≈ 1.618.
Angles
The vertex angle is:
θ = cos⁻¹(φ/2) = 36° = π/5 radians.
Hence the golden triangle is an acute (isosceles) triangle.
Since the angles of a triangle sum to π radians (180°), each of the base angles (CBX and CXB) is:
β = (180° − 36°)/2 = 72°.
Note: The golden triangle is uniquely identified as the only triangle to have its three angles in the ratio 1 : 2 : 2 (36°, 72°, 72°).
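The following short Python check (my own illustration, not part of the article) confirms the vertex angle numerically via the law of cosines, taking legs of length φ and a base of length 1:

```python
import math

phi = (1 + math.sqrt(5)) / 2
# Law of cosines with legs phi, phi and base 1: 1 = 2*phi^2 - 2*phi^2*cos(theta)
cos_apex = (2 * phi ** 2 - 1) / (2 * phi ** 2)
apex = math.degrees(math.acos(cos_apex))

print(apex)                                  # 36.0 degrees
print(math.cos(math.radians(36)), phi / 2)   # cos 36 degrees equals phi/2
```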
In other geometric figures
Golden triangles can be found in the spikes of regular pentagrams.
Golden triangles can also be found in a regular decagon, an equiangular and equilateral ten-sided polygon, by connecting any two adjacent vertices to the center. This is because the interior angle of a regular decagon is 180(10−2)/10 = 144°, and bisecting it with a line through the vertex and the center gives 144/2 = 72°.
Also, golden triangles are found in the nets of several stellations of dodecahedrons and icosahedrons.
Logarithmic spiral
The golden triangle is used to form some points of a logarithmic spiral. By bisecting one of the base angles, a new point is created that in turn, makes another golden triangle. The bisection process can be continued indefinitely, creating an infinite number of golden triangles. A logarithmic spiral can be drawn through the vertices. This spiral is also known as an equiangular spiral, a term coined by René Descartes. "If a straight line is drawn from the pole to any point on the curve, it cuts the curve at precisely the same angle," hence equiangular. This spiral is different from the golden spiral: the golden spiral grows by a factor of the golden ratio in each quarter-turn, whereas the spiral through these golden triangles takes an angle of 108° to grow by the same factor.
Golden gnomon
Closely related to the golden triangle is the golden gnomon, which is the isosceles triangle in which the ratio of the equal side lengths to the base length is the reciprocal of the golden ratio, 1/φ.
"The golden triangle has a ratio of base length to side length equal to the golden section φ, whereas the golden gnomon has the ratio of side length to base length equal to the golden section φ."
Angles
(The distances AX and CX are both a′ = a = φ, and the distance AC is b′ = φ², as seen in the figure.)
The apex angle AXC is:
θ = 108° = 3π/5 radians.
Hence the golden gnomon is an obtuse (isosceles) triangle.
Since the angles of the triangle AXC sum to π radians (180°), each of the base angles CAX and ACX is:
β = (180° − 108°)/2 = 36°.
Note: The golden gnomon is uniquely identified as a triangle having its three angles in the ratio 1 : 1 : 3 (36°, 36°, 108°). Its base angles are 36° each, which is the same as the apex of the golden triangle.
Bisections
By bisecting one of its base angles, a golden triangle can be subdivided into a golden triangle and a golden gnomon.
By trisecting its apex angle, a golden gnomon can be subdivided into a golden triangle and a golden gnomon.
A golden gnomon and a golden triangle with their equal sides matching eac
|
https://en.wikipedia.org/wiki/Sinogram
|
Sinogram may refer to:
Sinograph, a Chinese character (Hanzi), especially when used in a different language
Radon transform, a type of integral transform in mathematics
A visual representation of the raw data obtained in the operation of computed tomography
See also
Sonogram (disambiguation)
|
https://en.wikipedia.org/wiki/Indeterminate%20equation
|
In mathematics, particularly in algebra, an indeterminate equation is an equation for which there is more than one solution. For example, the equation is a simple indeterminate equation, as is . Indeterminate equations cannot be solved uniquely; in some cases they may even have infinitely many solutions. Some of the prominent examples of indeterminate equations include:
Univariate polynomial equation:
an·x^n + an−1·x^(n−1) + ⋯ + a1·x + a0 = 0,
which has multiple solutions for the variable x in the complex plane—unless it can be rewritten in the form an·(x − b)^n = 0.
Non-degenerate conic equation:
a·x² + b·xy + c·y² + dx + ey + f = 0,
where at least one of the given parameters a, b, and c is non-zero, and x and y are real variables.
Pell's equation:
x² − n·y² = 1,
where n is a given integer that is not a square number, and in which the variables x and y are required to be integers (a brute-force search for small solutions is sketched after this list).
The equation of Pythagorean triples:
x² + y² = z²,
in which the variables x, y, and z are required to be positive integers.
The equation of the Fermat–Catalan conjecture:
a^m + b^n = c^k,
in which the variables a, b, c are required to be coprime positive integers, and the variables m, n, and k are required to be positive integers satisfying the following inequality:
1/m + 1/n + 1/k < 1.
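As an illustration of the Pell case above (my own sketch, not part of the article; the helper name pell_fundamental is invented), a naive brute-force search finds the fundamental solution for small n, though practical solvers use the continued fraction expansion of √n:

```python
import math

def pell_fundamental(n, y_limit=10**6):
    # Search for the smallest y > 0 such that n*y^2 + 1 is a perfect square.
    for y in range(1, y_limit):
        x2 = n * y * y + 1
        x = math.isqrt(x2)
        if x * x == x2:
            return x, y
    return None   # no solution found within the search bound

print(pell_fundamental(2))   # (3, 2):  3^2 - 2*2^2 = 1
print(pell_fundamental(7))   # (8, 3):  8^2 - 7*3^2 = 1
# For n = 61 the fundamental solution is (1766319049, 226153980), far beyond
# any reasonable brute-force bound -- one reason better algorithms exist.
```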
See also
Indeterminate form
Indeterminate system
Indeterminate (variable)
Linear algebra
References
Algebra
|
https://en.wikipedia.org/wiki/Four-vertex%20theorem
|
The four-vertex theorem of geometry states that the curvature along a simple, closed, smooth plane curve has at least four local extrema (specifically, at least two local maxima and at least two local minima). The name of the theorem derives from the convention of calling an extreme point of the curvature function a vertex. This theorem has many generalizations, including a version for space curves where a vertex is defined as a point of vanishing torsion.
Definition and examples
The curvature at any point of a smooth curve in the plane can be defined as the reciprocal of the radius of an osculating circle at that point, or as the norm of the second derivative of a parametric representation of the curve, parameterized consistently with the length along the curve. For the vertices of a curve to be well-defined, the curvature itself should vary continuously, as happens for curves of smoothness C². A vertex is then a local maximum or local minimum of curvature. If the curvature is constant over an arc of the curve, all points of that arc are considered to be vertices. The four-vertex theorem states that a smooth closed curve always has at least four vertices.
An ellipse has exactly four vertices: two local maxima of curvature where it is crossed by the major axis of the ellipse, and two local minima of curvature where it is crossed by the minor axis. In a circle, every point is both a local maximum and a local minimum of curvature, so there are infinitely many vertices.
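A quick numerical sanity check of the ellipse example (my own, not from the article): sampling the curvature κ(t) = ab/(a² sin²t + b² cos²t)^(3/2) of the parametric ellipse (a cos t, b sin t) and counting local extrema yields exactly two maxima and two minima:

```python
import numpy as np

a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
# Curvature of the parametric ellipse (a*cos t, b*sin t):
k = a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

# A sample is a local max/min if it beats both circular neighbors.
is_max = (k > np.roll(k, 1)) & (k > np.roll(k, -1))
is_min = (k < np.roll(k, 1)) & (k < np.roll(k, -1))
print(is_max.sum(), is_min.sum())   # 2 maxima (major axis), 2 minima (minor axis)
```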
Every curve of constant width has at least six vertices. Although many curves of constant width, such as the Reuleaux triangle, are non-smooth or have circular arcs on their boundaries, there exist smooth curves of constant width that have exactly six vertices.
History
The four-vertex theorem was first proved for convex curves (i.e. curves with strictly positive curvature) in 1909 by Syamadas Mukhopadhyaya. His proof utilizes the fact that a point on the curve is an extremum of the curvature function if and only if the osculating circle at that point has fourth-order contact with the curve; in general the osculating circle has only third-order contact with the curve. The four-vertex theorem was proved for more general curves by Adolf Kneser in 1912 using a projective argument.
Proof
For many years the proof of the four-vertex theorem remained difficult, but a simple and conceptual proof was given by , based on the idea of the minimum enclosing circle. This is a circle that contains the given curve and has the smallest possible radius. If the curve includes an arc of the circle, it has infinitely many vertices. Otherwise, the curve and circle must be tangent at at least two points, because a circle that touched the curve at fewer points could be reduced in size while still enclosing it. At each tangency, the curvature of the curve is greater than that of the circle, for otherwise the curve would continue from the tangency outside the circle rather than inside. However, between ea
|
https://en.wikipedia.org/wiki/Leaky%20integrator
|
In mathematics, a leaky integrator equation is a specific differential equation, used to describe a component or system that takes the integral of an input, but gradually leaks a small amount of input over time. It appears commonly in hydraulics, electronics, and neuroscience where it can represent either a single neuron or a local population of neurons.
Equation
The equation is of the form
dx/dt = −A·x + C,
where C is the input and A is the rate of the 'leak'.
General solution
The equation is a nonhomogeneous first-order linear differential equation. For constant C its solution is
x(t) = k·e^(−At) + C/A,
where k is a constant encoding the initial condition.
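A minimal simulation sketch (my own, assuming a forward-Euler discretization) comparing the numerical solution against the exact one:

```python
import numpy as np

A, C, x0 = 2.0, 1.0, 0.0
dt, steps = 1e-3, 5000

x = x0
for _ in range(steps):
    x += dt * (-A * x + C)          # Euler step: leak -A*x plus input C

t = steps * dt
exact = C / A + (x0 - C / A) * np.exp(-A * t)
print(x, exact)                     # both approach the steady state C/A = 0.5
```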
References
Differential equations
|
https://en.wikipedia.org/wiki/Tyersal
|
Tyersal is a village east of Bradford and west of Leeds and has a population of 2,605 according to Bradford Community Statistics Project.
The district is split between both City of Bradford metropolitan borough and the City of Leeds metropolitan borough, with east Tyersal sitting in the Pudsey ward of Leeds City Council.
Tyersal joined Bradford in 1882 and part of it became part of the Leeds metropolitan district in 1974.
Shops
On Tyersal Road there are six shops, including a newsagent, a pharmacy, a sandwich shop, a mortgage broker, and a takeaway.
Transport
Currently there is the 630 service, operated by First Bradford, which terminates at the top of the village. Service 508 from Leeds to Halifax, also operated by First Bradford, runs every hour along Dick Lane at the bottom of Tyersal. Previously, service 66 (operated by First Leeds and then Centrebus) provided four buses daily to Leeds and back, but this service was withdrawn in 2010, leaving service 508 as the only remaining bus to Leeds.
New Pudsey railway station is around one and a half miles north-east of the village by road, where services are operated by Northern to Manchester Victoria, Blackpool North, Wakefield Westgate, York and Selby.
Pubs and clubs
Tyersal Residents Association Community Centre
The Quarry Gap public house (Now Quarry Cafe)
Tyersal Park Bowling Club, a crown green bowling club that plays in the Bradford Crown Green Bowling Association
See also
Listed buildings in Pudsey
External links
Tyersal Action Group - Neighbourhood Action Plan
The Ancient Parish of Calverley at GENUKI: Tyersal, previously spelled "Tyresall", was in this parish
Areas of Bradford
Places in Leeds
Pudsey
|
https://en.wikipedia.org/wiki/Sylvester%27s%20sequence
|
In number theory, Sylvester's sequence is an integer sequence in which each term is the product of the previous terms, plus one. The first few terms of the sequence are
2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443 .
Sylvester's sequence is named after James Joseph Sylvester, who first investigated it in 1880. Its values grow doubly exponentially, and the sum of its reciprocals forms a series of unit fractions that converges to 1 more rapidly than any other series of unit fractions. The recurrence by which it is defined allows the numbers in the sequence to be factored more easily than other numbers of the same magnitude, but, due to the rapid growth of the sequence, complete prime factorizations are known only for a few of its terms. Values derived from this sequence have also been used to construct finite Egyptian fraction representations of 1, Sasakian Einstein manifolds, and hard instances for online algorithms.
Formal definitions
Formally, Sylvester's sequence can be defined by the formula
sn = 1 + s0·s1·⋯·sn−1.
The product of the empty set is 1, so s0 = 2.
Alternatively, one may define the sequence by the recurrence
si+1 = si(si − 1) + 1,
with s0 = 2.
It is straightforward to show by induction that this is equivalent to the other definition.
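A short Python check (my own, not part of the article) that the product formula and the recurrence generate the same values:

```python
import math

s_prod = [2]                                  # s0 = 2 (empty product is 1)
for _ in range(6):
    s_prod.append(math.prod(s_prod) + 1)      # product formula

s_rec = [2]
for _ in range(6):
    s_rec.append(s_rec[-1] * (s_rec[-1] - 1) + 1)   # recurrence

print(s_prod)   # [2, 3, 7, 43, 1807, 3263443, 10650056950807]
assert s_prod == s_rec
```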
Closed form formula and asymptotics
The Sylvester numbers grow doubly exponentially as a function of n. Specifically, it can be shown that
sn = ⌊E^(2^(n+1)) + 1/2⌋
for a number E that is approximately 1.26408473530530... . This formula is equivalent to the following algorithm:
s0 is the nearest integer to E²; s1 is the nearest integer to E⁴; s2 is the nearest integer to E⁸; in general, sn is the nearest integer to E^(2^(n+1)) (take E², square it n more times, and take the nearest integer).
This would only be a practical algorithm if we had a better way of calculating E to the requisite number of places than calculating sn and taking its repeated square root.
The double-exponential growth of the Sylvester sequence is unsurprising if one compares it to the sequence of Fermat numbers Fn; the Fermat numbers are usually defined by a doubly exponential formula, Fn = 2^(2^n) + 1, but they can also be defined by a product formula very similar to that defining Sylvester's sequence:
Fn = 2 + F0·F1·⋯·Fn−1.
Connection with Egyptian fractions
The unit fractions formed by the reciprocals of the values in Sylvester's sequence generate an infinite series:
1/2 + 1/3 + 1/7 + 1/43 + 1/1807 + ⋯
The partial sums of this series have a simple form,
1/s0 + 1/s1 + ⋯ + 1/sj−1 = 1 − 1/(sj − 1) = (sj − 2)/(sj − 1).
This may be proved by induction, or more directly by noting that the recursion implies that
1/(si − 1) − 1/(si+1 − 1) = 1/si,
so the sum telescopes:
1/s0 + 1/s1 + ⋯ + 1/sj−1 = 1/(s0 − 1) − 1/(sj − 1) = 1 − 1/(sj − 1).
Since this sequence of partial sums (sj − 2)/(sj − 1) converges to one, the overall series forms an infinite Egyptian fraction representation of the number one:
1 = 1/2 + 1/3 + 1/7 + 1/43 + 1/1807 + ⋯
One can find finite Egyptian fraction representations of one, of any length, by truncating this series and subtracting one from the last denominator; for instance,
1 = 1/2 + 1/3 + 1/7 + 1/42.
The sum of the first k terms of the infinite series provides the closest possible underestimate of 1 by any k-term Egyptian fraction. For example, the first four terms
|
https://en.wikipedia.org/wiki/Semiparametric%20regression
|
In statistics, semiparametric regression includes regression models that combine parametric and nonparametric models. They are often used in situations where the fully nonparametric model may not perform well or when the researcher wants to use a parametric model but the functional form with respect to a subset of the regressors or the density of the errors is not known. Semiparametric regression models are a particular type of semiparametric modelling and, since semiparametric models contain a parametric component, they rely on parametric assumptions and may be misspecified and inconsistent, just like a fully parametric model.
Methods
Many different semiparametric regression methods have been proposed and developed. The most popular methods are the partially linear, index and varying coefficient models.
Partially linear models
A partially linear model is given by
Yi = Xi′β + g(Zi) + ui,
where Yi is the dependent variable, Xi is a p × 1 vector of explanatory variables, β is a p × 1 vector of unknown parameters, and Zi is a vector of covariates entering nonparametrically. The parametric part of the partially linear model is given by the parameter vector β while the nonparametric part is the unknown function g(Zi). The data is assumed to be i.i.d. with E(ui | Xi, Zi) = 0 and the model allows for a conditionally heteroskedastic error process of unknown form. This type of model was proposed by Robinson (1988) and extended to handle categorical covariates by Racine and Li (2007).
This method is implemented by obtaining a consistent estimator of β and then deriving an estimator of g(Zi) from the nonparametric regression of Yi − Xi′β̂ on Zi using an appropriate nonparametric regression method.
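The following toy sketch (my own, loosely following the double-residual idea in Robinson (1988); the helper ksmooth and all data are invented for illustration) estimates β by kernel-smoothing Y and X on Z and regressing the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
Z = rng.uniform(-2, 2, n)
X = Z + rng.normal(size=n)                                   # X correlated with Z
Y = 1.5 * X + np.sin(np.pi * Z) + 0.1 * rng.normal(size=n)   # true beta = 1.5

def ksmooth(z0, z, v, h=0.2):
    # Nadaraya-Watson kernel regression of v on z, evaluated at z0.
    w = np.exp(-0.5 * ((z0[:, None] - z[None, :]) / h) ** 2)
    return (w * v).sum(axis=1) / w.sum(axis=1)

eY = Y - ksmooth(Z, Z, Y)          # residual of Y given Z
eX = X - ksmooth(Z, Z, X)          # residual of X given Z
beta_hat = (eX @ eY) / (eX @ eX)   # OLS on the residuals
print(beta_hat)                    # close to the true value 1.5
```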
Index models
A single index model takes the form
Y = g(X′β0) + u,
where Y, X and β0 are defined as earlier and the error term u satisfies E(u | X) = 0. The single index model takes its name from the parametric part of the model, the scalar single index X′β0. The nonparametric part is the unknown function g(·).
Ichimura's method
The single index model method developed by Ichimura (1993) is as follows. Consider the situation in which Y is continuous. Given a known form for the function g(·), β0 could be estimated using the nonlinear least squares method to minimize the function
Σi (Yi − g(Xi′β))².
Since the functional form of g(·) is not known, we need to estimate it. For a given value for β, an estimate of the function
G(Xi′β) = E(Yi | Xi′β)
can be obtained using the kernel method. Ichimura (1993) proposes estimating g(Xi′β) with
Ĝ−i(Xi′β),
the leave-one-out nonparametric kernel estimator of G(Xi′β).
Klein and Spady's estimator
If the dependent variable Y is binary and X and u are assumed to be independent, Klein and Spady (1993) propose a technique for estimating β using maximum likelihood methods. The log-likelihood function is given by
L(β) = Σi (1 − Yi) ln(1 − ĝ−i(Xi′β)) + Σi Yi ln(ĝ−i(Xi′β)),
where ĝ−i(Xi′β) is the leave-one-out estimator.
Smooth coefficient/varying coefficient models
Hastie and Tibshirani (1993) propose a smooth coefficient model given by
Yi = α(Zi) + Xi′β(Zi) + ui = Wi′γ(Zi) + ui,
where Xi is a k × 1 vector, Wi = (1, Xi′)′, and β(z) is a vector of unspecified smooth functions of z.
γ(z) may be expressed as
γ(z) = (E[Wi Wi′ | Zi = z])⁻¹ E[Wi Yi | Zi = z].
See also
Nonparametric regression
Effective degree of freedom
Notes
References
Nonparame
|
https://en.wikipedia.org/wiki/Uniform%20algebra
|
In functional analysis, a uniform algebra A on a compact Hausdorff topological space X is a closed (with respect to the uniform norm) subalgebra of the C*-algebra C(X) (the continuous complex-valued functions on X) with the following properties:
the constant functions are contained in A
for every x, y ∈ X with x ≠ y, there is f ∈ A with f(x) ≠ f(y). This is called separating the points of X.
As a closed subalgebra of the commutative Banach algebra C(X), a uniform algebra is itself a unital commutative Banach algebra (when equipped with the uniform norm). Hence it is, by definition, a Banach function algebra.
A uniform algebra A on X is said to be natural if the maximal ideals of A are precisely the ideals of functions vanishing at a point x in X.
Abstract characterization
If A is a unital commutative Banach algebra such that ‖a²‖ = ‖a‖² for all a in A, then there is a compact Hausdorff space X such that A is isomorphic as a Banach algebra to a uniform algebra on X. This result follows from the spectral radius formula and the Gelfand representation.
Notes
References
Functional analysis
Banach algebras
|
https://en.wikipedia.org/wiki/Textile%20design
|
Textile design, also known as textile geometry, is the creative and technical process by which thread or yarn fibers are interlaced to form a piece of cloth or fabric, which is subsequently printed upon, or otherwise adorned. Textile design is further broken down into three major disciplines: printed textile design, woven textile design, and mixed media textile design, each of which uses different methods to produce a fabric for variable uses and markets. Textile design as an industry is involved in other disciplines such as fashion, interior design, and fine arts.
Overview
Textile designing is the creative technique in which thread or yarn fibers are woven together to form a piece of fabric. Clothing, carpets, drapes, and towels are some products resulting from textile design. Textile design requires understanding of the technical aspects of production and the properties of fiber, yarn, and dyes.
Textile design disciplines
Printed textile design
Printed textile designs are produced by applying different printing processes to fabric, cloth, and other media. Printed textile design is one of the three major disciplines of textile design. Printed textile designers are predominantly involved with home interior design (designing patterns for carpets, wallpapers, or even ceramics), the fashion and clothing industries, and the paper industry (designing stationary or gift wrap).
There are numerous established printed styles and designs that can be broken down into four major categories: floral, geometric, world cultures, and conversational. Floral designs include flowers, plants, or any botanical themes. Geometric designs feature themes (both inorganic and abstract) such as tessellations. Designs surrounding world cultures may be traced to a specific geographic, ethnic, or anthropological source. Finally, conversational design are designs that fit less easily into the other categories; they may be described as presenting "imagery that references popular icons of a particular time period or season, or which is unique and challenges our perceptions in some way." Each category contains sundry, which includes more specific individual styles and designs.
Different fabrics, moreover, require different dyes: silk, wool, and other protein-based fabrics require acid dyes, whereas synthetic fabrics require specialized disperse dyes.
The advent of computer-aided design software, such as Adobe Photoshop or Illustrator, has allowed each discipline of textile design to evolve and innovate new practices and processes, but has most influenced the production of printed textile designs. Digital tools have influenced the process of creating repeating patterns or motifs, or repeats. Repeats are used to create patterns both visible and invisible to the eye: geometric patterns are intended to depict clear, intentional patterns, whereas floral or organic designs are intended to create unbroken repeats that are ideally undetectable. Digital tools have also a
|
https://en.wikipedia.org/wiki/De%20Rham%20curve
|
In mathematics, a de Rham curve is a certain type of fractal curve named in honor of Georges de Rham.
The Cantor function, Cesàro curve, Minkowski's question mark function, the Lévy C curve, the blancmange curve, and Koch curve are all special cases of the general de Rham curve.
Construction
Consider some complete metric space M (generally ℝ² with the usual Euclidean distance), and a pair of contracting maps on M:
d0 : M → M and d1 : M → M.
By the Banach fixed-point theorem, these have fixed points p0 and p1 respectively. Let x be a real number in the interval [0, 1], having binary expansion
x = b1/2 + b2/4 + b3/8 + ⋯,
where each bk is 0 or 1. Consider the map
cx : M → M
defined by
cx = d_b1 ∘ d_b2 ∘ d_b3 ∘ ⋯,
where ∘ denotes function composition. It can be shown that each cx will map the common basin of attraction of d0 and d1 to a single point px in M. The collection of points px, parameterized by a single real parameter x, is known as the de Rham curve.
Continuity condition
When the fixed points are paired such that
d0(p1) = d1(p0),
then it may be shown that the resulting curve x ↦ px is a continuous function of x. When the curve is continuous, it is not in general differentiable.
In the remainder of this article, we will assume the curves are continuous.
Properties
De Rham curves are by construction self-similar, since
p(x/2) = d0(p(x)) for 0 ≤ x ≤ 1, and
p(1/2 + x/2) = d1(p(x)) for 0 ≤ x ≤ 1.
The self-symmetries of all of the de Rham curves are given by the monoid that describes the symmetries of the infinite binary tree or Cantor set. This so-called period-doubling monoid is a subset of the modular group.
The image of the curve, i.e. the set of points {px}, can be obtained by an iterated function system using the set of contraction mappings {d0, d1}. But the result of an iterated function system with two contraction mappings is a de Rham curve if and only if the contraction mappings satisfy the continuity condition.
Detailed, worked examples of the self-similarities can be found in the articles on the Cantor function and on Minkowski's question-mark function. Precisely the same monoid of self-similarities, the dyadic monoid, apply to every de Rham curve.
Classification and examples
Cesàro curves
Cesàro curves (or Cesàro–Faber curves) are De Rham curves generated by affine transformations conserving orientation, with fixed points p0 = 0 and p1 = 1.
Because of these constraints, Cesàro curves are uniquely determined by a complex number a such that |a| < 1 and |1 − a| < 1.
The contraction mappings d0 and d1 are then defined as complex functions in the complex plane by:
d0(z) = az and d1(z) = a + (1 − a)z.
For the value of a = (1 + i)/2, the resulting curve is the Lévy C curve.
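As an illustration (my own sketch, not from the article), one can plot a Cesàro curve by composing d0 and d1 along the binary digits of dyadic x; the choice a = (1 + i)/2 reproduces the Lévy C curve:

```python
import matplotlib.pyplot as plt

a = 0.5 + 0.5j
d0 = lambda z: a * z
d1 = lambda z: a + (1 - a) * z

def point(x, depth=20):
    # Extract the first `depth` binary digits of x.
    bits = []
    for _ in range(depth):
        x *= 2
        bit = int(x)
        bits.append(bit)
        x -= bit
    # c_x = d_b1 o d_b2 o ... : apply the innermost (last) digit first.
    z = 0.0 + 0.0j          # any point of the basin works in the limit
    for b in reversed(bits):
        z = d0(z) if b == 0 else d1(z)
    return z

pts = [point(k / 2**12) for k in range(2**12)]
plt.plot([p.real for p in pts], [p.imag for p in pts], lw=0.5)
plt.gca().set_aspect("equal")
plt.show()
```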
Koch–Peano curves
In a similar way, we can define the Koch–Peano family of curves as the set of De Rham curves generated by affine transformations reversing orientation, with fixed points p0 = 0 and p1 = 1.
These mappings are expressed in the complex plane as functions of z̄, the complex conjugate of z:
d0(z) = a·z̄ and d1(z) = a + (1 − a)·z̄.
The name of the family comes from its two most famous members. The Koch curve is obtained by setting:
a = 1/2 + i·√3/6,
while the Peano curve corresponds to:
a = (1 + i)/2.
General affine maps
The Cesàro–Faber and Peano–Koch curves are both special cases of the general case of a pa
|