In mathematics, the structure tensor, also referred to as the second-moment matrix, is a matrix derived from the gradient of a function. It describes the distribution of the gradient in a specified neighborhood around a point and makes the information invariant to the observing coordinates. The structure tensor is often used in image processing and computer vision.[1][2][3]
For a function $I$ of two variables $p = (x, y)$, the structure tensor is the 2×2 matrix
$$S_w(p) = \begin{bmatrix} \int w(r)\,(I_x(p-r))^2\,dr & \int w(r)\,I_x(p-r)\,I_y(p-r)\,dr \\ \int w(r)\,I_x(p-r)\,I_y(p-r)\,dr & \int w(r)\,(I_y(p-r))^2\,dr \end{bmatrix}$$
where $I_x$ and $I_y$ are the partial derivatives of $I$ with respect to $x$ and $y$; the integrals range over the plane $\mathbb{R}^2$; and $w$ is some fixed "window function" (such as a Gaussian blur), a distribution on two variables. Note that the matrix $S_w$ is itself a function of $p = (x, y)$.
The formula above can also be written as $S_w(p) = \int w(r)\,S_0(p-r)\,dr$, where $S_0$ is the matrix-valued function defined by
$$S_0(p) = \begin{bmatrix} (I_x(p))^2 & I_x(p)\,I_y(p) \\ I_x(p)\,I_y(p) & (I_y(p))^2 \end{bmatrix}$$
If the gradient $\nabla I = (I_x, I_y)^{\text{T}}$ of $I$ is viewed as a 2×1 (single-column) matrix, where $(\cdot)^{\text{T}}$ denotes the transpose operation, turning a row vector into a column vector, the matrix $S_0$ can be written as the matrix product $(\nabla I)(\nabla I)^{\text{T}}$, also called the tensor or outer product $\nabla I \otimes \nabla I$. Note, however, that the structure tensor $S_w(p)$ cannot in general be factored in this way, except if $w$ is a Dirac delta function.
In image processing and other similar applications, the function $I$ is usually given as a discrete array of samples $I[p]$, where $p$ is a pair of integer indices. The 2D structure tensor at a given pixel is usually taken to be the discrete sum
$$S_w[p] = \begin{bmatrix} \sum_r w[r]\,(I_x[p-r])^2 & \sum_r w[r]\,I_x[p-r]\,I_y[p-r] \\ \sum_r w[r]\,I_x[p-r]\,I_y[p-r] & \sum_r w[r]\,(I_y[p-r])^2 \end{bmatrix}$$
Here the summation index $r$ ranges over a finite set of index pairs (the "window", typically $\{-m,\ldots,+m\} \times \{-m,\ldots,+m\}$ for some $m$), and $w[r]$ is a fixed "window weight" that depends on $r$, such that the sum of all weights is 1. The values $I_x[p], I_y[p]$ are the partial derivatives sampled at pixel $p$, which may, for instance, be estimated from $I$ by finite difference formulas.
The formula of the structure tensor can also be written as $S_w[p] = \sum_r w[r]\,S_0[p-r]$, where $S_0$ is the matrix-valued array such that
$$S_0[p] = \begin{bmatrix} (I_x[p])^2 & I_x[p]\,I_y[p] \\ I_x[p]\,I_y[p] & (I_y[p])^2 \end{bmatrix}$$
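As a concrete illustration, the discrete structure tensor can be computed in a few lines of NumPy/SciPy. The choices below (`np.gradient` as the finite-difference derivative estimate and a Gaussian filter as the window $w$) are illustrative, not canonical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_2d(image, sigma=1.0):
    """Discrete 2D structure tensor with a Gaussian window w.

    Returns the three distinct components (Sxx, Sxy, Syy); the full
    matrix at each pixel is [[Sxx, Sxy], [Sxy, Syy]].
    """
    # Finite-difference estimates of the partial derivatives I_x, I_y
    # (np.gradient returns derivatives along axis 0 = y, axis 1 = x).
    Iy, Ix = np.gradient(image.astype(float))
    # Window-weighted averages of the derivative products.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    return Sxx, Sxy, Syy
```

On a pure horizontal ramp $I[y,x] = x$ the gradient is constant, so $S_{xx} \approx 1$ everywhere while the mixed and vertical components vanish.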
The importance of the 2D structure tensor $S_w$ stems from the fact that its eigenvalues $\lambda_1, \lambda_2$ (which can be ordered so that $\lambda_1 \geq \lambda_2 \geq 0$) and the corresponding eigenvectors $e_1, e_2$ summarize the distribution of the gradient $\nabla I = (I_x, I_y)$ of $I$ within the window defined by $w$ centered at $p$.[1][2][3]
Namely, if $\lambda_1 > \lambda_2$, then $e_1$ (or $-e_1$) is the direction that is maximally aligned with the gradient within the window.
In particular, if $\lambda_1 > 0$ and $\lambda_2 = 0$, then the gradient is always a multiple of $e_1$ (positive, negative or zero); this is the case if and only if $I$ within the window varies along the direction $e_1$ but is constant along $e_2$. This condition on the eigenvalues is also called the linear symmetry condition, because then the iso-curves of $I$ consist of parallel lines, i.e. there exists a one-dimensional function $g$ that generates the two-dimensional function $I$ as $I(x,y) = g(d^{\text{T}} p)$ for some constant vector $d = (d_x, d_y)^{\text{T}}$ and the coordinates $p = (x, y)^{\text{T}}$.
If $\lambda_1 = \lambda_2$, on the other hand, the gradient in the window has no predominant direction; this happens, for instance, when the image has rotational symmetry within that window. This condition on the eigenvalues is also called the balanced body, or directional equilibrium, condition, because it holds when all gradient directions in the window are equally frequent/probable.
Furthermore, the condition $\lambda_1 = \lambda_2 = 0$ holds if and only if the function $I$ is constant ($\nabla I = (0,0)$) within the window.
More generally, the value of $\lambda_k$, for $k = 1$ or $k = 2$, is the $w$-weighted average, in the neighborhood of $p$, of the square of the directional derivative of $I$ along $e_k$. The relative discrepancy between the two eigenvalues of $S_w$ is an indicator of the degree of anisotropy of the gradient in the window, namely how strongly it is biased towards a particular direction (and its opposite).[4][5] This attribute can be quantified by the coherence, defined as
$$c_w = \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \right)^2$$
if $\lambda_2 > 0$. This quantity is 1 when the gradient is totally aligned, and 0 when it has no preferred direction. The formula is undefined, even in the limit, when the image is constant in the window ($\lambda_1 = \lambda_2 = 0$). Some authors define it as 0 in that case.
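For a symmetric 2×2 matrix the eigenvalues have a closed form, so the coherence can be computed directly from the three tensor components. A small sketch (the constant-window case is defined as 0, following the convention mentioned above):

```python
import math

def eigen_coherence(sxx, sxy, syy):
    """Eigenvalues (lam1 >= lam2) and coherence of [[sxx, sxy], [sxy, syy]]."""
    half_trace = 0.5 * (sxx + syy)
    # Half the eigenvalue gap, from the characteristic polynomial.
    disc = math.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    lam1, lam2 = half_trace + disc, half_trace - disc
    # Coherence in [0, 1]; defined as 0 for a constant window.
    c = ((lam1 - lam2) / (lam1 + lam2)) ** 2 if lam1 + lam2 > 0 else 0.0
    return lam1, lam2, c
```

A tensor from a purely horizontal gradient gives coherence 1; an isotropic tensor (equal diagonal, zero off-diagonal) gives coherence 0.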
Note that the average of the gradient $\nabla I$ inside the window is not a good indicator of anisotropy. Aligned but oppositely oriented gradient vectors would cancel out in this average, whereas in the structure tensor they are properly added together.[6] This is why $(\nabla I)(\nabla I)^{\text{T}}$ is used in the averaging of the structure tensor to optimize the direction, instead of $\nabla I$.
By expanding the effective radius of the window function $w$ (that is, increasing its variance), one can make the structure tensor more robust in the face of noise, at the cost of diminished spatial resolution.[5][7] The formal basis for this property is described in more detail below, where it is shown that a multi-scale formulation of the structure tensor, referred to as the multi-scale structure tensor, constitutes a true multi-scale representation of directional data under variations of the spatial extent of the window function.
The interpretation and implementation of the 2D structure tensor becomes particularly accessible using complex numbers.[2] The structure tensor consists of three real numbers
$$S_w(p) = \begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix}$$
where $\mu_{20} = \int w(r)\,(I_x(p-r))^2\,dr$, $\mu_{02} = \int w(r)\,(I_y(p-r))^2\,dr$ and $\mu_{11} = \int w(r)\,I_x(p-r)\,I_y(p-r)\,dr$, in which the integrals can be replaced by summations in the discrete representation. Using Parseval's identity, it is clear that these three real numbers are the second-order moments of the power spectrum of $I$. The following second-order complex moment of the power spectrum of $I$ can then be written as
$$\kappa_{20} = \mu_{20} - \mu_{02} + i\,2\mu_{11} = \int w(r)\,\bigl(I_x(p-r) + i\,I_y(p-r)\bigr)^2\,dr = (\lambda_1 - \lambda_2)\exp(i 2\phi)$$
where $i = \sqrt{-1}$ and $\phi = \angle e_1$ is the direction angle of the most significant eigenvector of the structure tensor, whereas $\lambda_1$ and $\lambda_2$ are the most and least significant eigenvalues. From this it follows that $\kappa_{20}$ contains both a certainty $|\kappa_{20}| = \lambda_1 - \lambda_2$ and the optimal direction in double-angle representation, since it is a complex number consisting of two real numbers. It also follows that if the gradient is represented as a complex number and is remapped by squaring (i.e. the argument angle of the complex gradient is doubled), then averaging acts as an optimizer in the mapped domain, since it directly delivers both the optimal direction (in double-angle representation) and the associated certainty. The complex number thus represents how much linear structure (linear symmetry) there is in the image $I$, and it is obtained directly by averaging the gradient in its (complex) double-angle representation, without computing the eigenvalues and eigenvectors explicitly.
Likewise, the following second-order complex moment of the power spectrum of $I$, which happens to be always real because $I$ is real,
$$\kappa_{11} = \mu_{20} + \mu_{02} = \int w(r)\,\bigl|I_x(p-r) + i\,I_y(p-r)\bigr|^2\,dr = \lambda_1 + \lambda_2$$
can be obtained, with $\lambda_1$ and $\lambda_2$ being the eigenvalues as before. Notice that this time the magnitude of the complex gradient is squared (which is always real).
However, decomposing the structure tensor into its eigenvectors yields its tensor components as
$$S_w(p) = \lambda_1 e_1 e_1^{\text{T}} + \lambda_2 e_2 e_2^{\text{T}} = (\lambda_1 - \lambda_2)\,e_1 e_1^{\text{T}} + \lambda_2 \bigl(e_1 e_1^{\text{T}} + e_2 e_2^{\text{T}}\bigr) = (\lambda_1 - \lambda_2)\,e_1 e_1^{\text{T}} + \lambda_2 E$$
where $E$ is the identity matrix in 2D, because the two eigenvectors are orthogonal and of unit length. The first term in the last expression of the decomposition, $(\lambda_1 - \lambda_2)\,e_1 e_1^{\text{T}}$, represents the linear symmetry component of the structure tensor, containing all directional information (as a rank-1 matrix), whereas the second term represents the balanced body component of the tensor, which lacks any directional information (containing the identity matrix $E$). Knowing how much directional information there is in $I$ then amounts to checking how large $\lambda_1 - \lambda_2$ is compared to $\lambda_2$.
Evidently, $\kappa_{20}$ is the complex equivalent of the first term in the tensor decomposition, whereas $\tfrac{1}{2}(\kappa_{11} - |\kappa_{20}|) = \lambda_2$ is the equivalent of the second term. Thus the two scalars, comprising three real numbers,
$$\begin{aligned} \kappa_{20} &= (\lambda_1 - \lambda_2)\exp(i 2\phi) &&= w * (h * I)^2 \\ \kappa_{11} &= \lambda_1 + \lambda_2 &&= w * |h * I|^2 \end{aligned}$$
where $h(x,y) = (x + iy)\exp\bigl(-(x^2 + y^2)/(2\sigma^2)\bigr)$ is the (complex) gradient filter and $*$ denotes convolution, constitute a complex representation of the 2D structure tensor. As discussed here and elsewhere, $w$ defines the local neighborhood and is usually a Gaussian (with a certain variance defining the outer scale), and $\sigma$ is the (inner scale) parameter determining the effective frequency range in which the orientation $2\phi$ is to be estimated.
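The double-angle computation can be sketched directly: the complex gradient is squared pointwise and then window-averaged, yielding $\kappa_{20}$ and $\kappa_{11}$ without any eigendecomposition. Plain finite differences stand in for the Gaussian-derivative filter $h$ of the text, and a Gaussian filter plays the role of $w$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_structure_tensor(image, sigma_window=2.0):
    """kappa_20 and kappa_11 from the squared complex gradient."""
    Iy, Ix = np.gradient(image.astype(float))
    g = Ix + 1j * Iy                      # gradient as a complex number
    g2 = g * g                            # double-angle mapping (arg doubled)
    # Average real and imaginary parts separately under the window.
    k20 = gaussian_filter(g2.real, sigma_window) \
        + 1j * gaussian_filter(g2.imag, sigma_window)   # (l1 - l2) e^{i 2 phi}
    k11 = gaussian_filter(np.abs(g) ** 2, sigma_window)  # l1 + l2
    return k20, k11
```

For a diagonal ramp $I = x + y$ the gradient points at 45°, so $\kappa_{20} = 2i$ (i.e. $2\phi = \pi/2$) and $\kappa_{11} = 2$; the equality $|\kappa_{20}| = \kappa_{11}$ confirms full linear symmetry ($\lambda_2 = 0$).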
The elegance of the complex representation stems from the fact that the two components of the structure tensor can be obtained as averages, and independently of each other. In turn, this means that $\kappa_{20}$ and $\kappa_{11}$ can be used in a scale space representation to describe the evidence for the presence of a unique orientation and the evidence for the alternative hypothesis, the presence of multiple balanced orientations, without computing the eigenvectors and eigenvalues. A functional, such as squaring the complex numbers, has to date not been shown to exist for structure tensors of dimension higher than two. In Bigun 91, it has been put forward with due argument that this is because the complex numbers form a commutative algebra, whereas the quaternions, the possible candidate with which to construct such a functional, constitute a non-commutative algebra.[8]
The complex representation of the structure tensor is frequently used in fingerprint analysis to obtain direction maps containing certainties, which in turn are used to enhance the fingerprints, to find the locations of the global (cores and deltas) and local (minutiae) singularities, and to automatically evaluate the quality of the fingerprints.
The structure tensor can also be defined for a function $I$ of three variables $p = (x, y, z)$ in an entirely analogous way. Namely, in the continuous version we have $S_w(p) = \int w(r)\,S_0(p-r)\,dr$, where
$$S_0(p) = \begin{bmatrix} (I_x(p))^2 & I_x(p)\,I_y(p) & I_x(p)\,I_z(p) \\ I_x(p)\,I_y(p) & (I_y(p))^2 & I_y(p)\,I_z(p) \\ I_x(p)\,I_z(p) & I_y(p)\,I_z(p) & (I_z(p))^2 \end{bmatrix}$$
where $I_x, I_y, I_z$ are the three partial derivatives of $I$, and the integral ranges over $\mathbb{R}^3$.
In the discrete version, $S_w[p] = \sum_r w[r]\,S_0[p-r]$, where
$$S_0[p] = \begin{bmatrix} (I_x[p])^2 & I_x[p]\,I_y[p] & I_x[p]\,I_z[p] \\ I_x[p]\,I_y[p] & (I_y[p])^2 & I_y[p]\,I_z[p] \\ I_x[p]\,I_z[p] & I_y[p]\,I_z[p] & (I_z[p])^2 \end{bmatrix}$$
and the sum ranges over a finite set of 3D indices, usually $\{-m,\ldots,+m\} \times \{-m,\ldots,+m\} \times \{-m,\ldots,+m\}$ for some $m$.
As in the two-dimensional case, the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $S_w[p]$, and the corresponding eigenvectors $\hat{e}_1, \hat{e}_2, \hat{e}_3$, summarize the distribution of gradient directions within the neighborhood of $p$ defined by the window $w$. This information can be visualized as an ellipsoid whose semi-axes are equal to the eigenvalues and directed along their corresponding eigenvectors.[9][10]
In particular, if the ellipsoid is stretched along one axis only, like a cigar (that is, if $\lambda_1$ is much larger than both $\lambda_2$ and $\lambda_3$), it means that the gradient in the window is predominantly aligned with the direction $e_1$, so that the isosurfaces of $I$ tend to be flat and perpendicular to that vector. This situation occurs, for instance, when $p$ lies on a thin plate-like feature, or on the smooth boundary between two regions with contrasting values.
If the ellipsoid is flattened in one direction only, like a pancake (that is, if $\lambda_3$ is much smaller than both $\lambda_1$ and $\lambda_2$), it means that the gradient directions are spread out but perpendicular to $e_3$, so that the isosurfaces tend to be like tubes parallel to that vector. This situation occurs, for instance, when $p$ lies on a thin line-like feature, or on a sharp corner of the boundary between two regions with contrasting values.
Finally, if the ellipsoid is roughly spherical (that is, if $\lambda_1 \approx \lambda_2 \approx \lambda_3$), it means that the gradient directions in the window are more or less evenly distributed, with no marked preference, so that the function $I$ is mostly isotropic in that neighborhood. This happens, for instance, when the function has spherical symmetry in the neighborhood of $p$. In particular, if the ellipsoid degenerates to a point (that is, if the three eigenvalues are zero), it means that $I$ is constant (has zero gradient) within the window.
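The three cases above (cigar, pancake, sphere) can be turned into a rough classifier of a 3×3 structure tensor. The threshold ratio below is an arbitrary illustrative choice, not a standard constant:

```python
import numpy as np

def classify_neighborhood(S, ratio=10.0):
    """Shape of a 3x3 structure tensor neighborhood:
    'surface', 'line', 'isotropic', or 'constant'."""
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lam1 >= lam2 >= lam3 >= 0
    l1, l2, l3 = lam
    if l1 <= 1e-12:
        return "constant"      # all eigenvalues ~ 0: zero gradient
    if l1 > ratio * l2:
        return "surface"       # cigar-shaped: flat, plate-like isosurfaces
    if l2 > ratio * l3:
        return "line"          # pancake-shaped: tube-like isosurfaces
    return "isotropic"         # roughly spherical distribution
```

For instance, a tensor close to `diag(1, 0, 0)` classifies as a surface point, `diag(1, 1, 0)` as a line point, and the identity matrix as isotropic.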
The structure tensor is an important tool in scale space analysis. The multi-scale structure tensor (or multi-scale second moment matrix) of a function $I$ is, in contrast to other one-parameter scale-space features, an image descriptor that is defined over two scale parameters.
One scale parameter, referred to as the local scale $t$, is needed for determining the amount of pre-smoothing when computing the image gradient $(\nabla I)(x; t)$. Another scale parameter, referred to as the integration scale $s$, is needed for specifying the spatial extent of the window function $w(\xi; s)$ that determines the weights for the region in space over which the components of the outer product of the gradient with itself, $(\nabla I)(\nabla I)^{\text{T}}$, are accumulated.
More precisely, suppose that $I$ is a real-valued signal defined over $\mathbb{R}^k$. For any local scale $t > 0$, let a multi-scale representation $I(x; t)$ of this signal be given by $I(x; t) = h(x; t) * I(x)$, where $h(x; t)$ represents a pre-smoothing kernel. Furthermore, let $(\nabla I)(x; t)$ denote the gradient of the scale space representation.
Then, the multi-scale structure tensor/second-moment matrix is defined by[7][11][12]
$$\mu(x; t, s) = \int_{\xi \in \mathbb{R}^k} (\nabla I)(x - \xi; t)\,(\nabla I)^{\text{T}}(x - \xi; t)\,w(\xi; s)\,d\xi$$
Conceptually, one may ask whether it would be sufficient to use any self-similar families of smoothing functions $h(x; t)$ and $w(\xi; s)$. If one naively applied, for example, a box filter, however, non-desirable artifacts could easily occur. If one wants the multi-scale structure tensor to be well-behaved over both increasing local scales $t$ and increasing integration scales $s$, then it can be shown that both the smoothing function and the window function have to be Gaussian.[7] The conditions that specify this uniqueness are similar to the scale-space axioms that are used for deriving the uniqueness of the Gaussian kernel for a regular Gaussian scale space of image intensities.
There are different ways of handling the two-parameter scale variations in this family of image descriptors. If we keep the local scale parameter $t$ fixed and apply increasingly broadened versions of the window function by increasing the integration scale parameter $s$ only, then we obtain a true formal scale space representation of the directional data computed at the given local scale $t$.[7] If we couple the local scale and the integration scale by a relative integration scale $r \geq 1$, such that $s = r\,t$, then for any fixed value of $r$ we obtain a reduced self-similar one-parameter variation, which is frequently used to simplify computational algorithms, for example in corner detection, interest point detection, texture analysis and image matching.
By varying the relative integration scale $r \geq 1$ in such a self-similar scale variation, we obtain another alternative way of parameterizing the multi-scale nature of directional data obtained by increasing the integration scale.
A conceptually similar construction can be performed for discrete signals, with the convolution integral replaced by a convolution sum and with the continuous Gaussian kernel $g(x; t)$ replaced by the discrete Gaussian kernel $T(n; t)$:
$$\mu(x; t, s) = \sum_{n \in \mathbb{Z}^k} (\nabla I)(x - n; t)\,(\nabla I)^{\text{T}}(x - n; t)\,w(n; s)$$
When quantizing the scale parameters $t$ and $s$ in an actual implementation, a finite geometric progression $\alpha^i$ is usually used, with $i$ ranging from 0 to some maximum scale index $m$. Thus, the discrete scale levels will bear certain similarities to an image pyramid, although spatial subsampling may not necessarily be used, in order to preserve more accurate data for subsequent processing stages.
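A minimal sketch of the discrete two-parameter computation, using SciPy's sampled Gaussian as a stand-in for the discrete Gaussian kernel $T(n; t)$. The scale parameters are treated as Gaussian variances, so the filter standard deviations are $\sqrt{t}$ and $\sqrt{s}$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(image, t, s):
    """mu(x; t, s) for a 2D image: pre-smooth at local scale t, then
    window-average the gradient outer product at integration scale s."""
    # Local scale: gradient of the scale-space representation I(x; t).
    smoothed = gaussian_filter(image.astype(float), np.sqrt(t))
    Iy, Ix = np.gradient(smoothed)
    # Integration scale: the window w(.; s) averages the outer product.
    w = lambda a: gaussian_filter(a, np.sqrt(s))
    return w(Ix * Ix), w(Ix * Iy), w(Iy * Iy)
```

Increasing `s` alone broadens the window at a fixed local scale (the "true scale space" variation of the text); coupling `s = r * t` gives the self-similar one-parameter family.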
The eigenvalues of the structure tensor play a significant role in many image processing algorithms, for problems like corner detection, interest point detection, and feature tracking.[9][13][14][15][16][17][18] The structure tensor also plays a central role in the Lucas–Kanade optical flow algorithm and in its extensions to estimate affine shape adaptation,[11] where the magnitude of $\lambda_2$ is an indicator of the reliability of the computed result. The tensor has been used for scale space analysis,[7] estimation of local surface orientation from monocular or binocular cues,[12] non-linear fingerprint enhancement,[19] diffusion-based image processing,[20][21][22][23] and several other image processing problems. The structure tensor can also be applied in geology to filter seismic data.[24]
The three-dimensional structure tensor has been used to analyze three-dimensional video data (viewed as a function of $x$, $y$, and time $t$).[4] If one in this context aims at image descriptors that are invariant under Galilean transformations, to make it possible to compare image measurements that have been obtained under variations of a priori unknown image velocities $v = (v_x, v_y)^{\text{T}}$,
$$\begin{bmatrix} x' \\ y' \\ t' \end{bmatrix} = G \begin{bmatrix} x \\ y \\ t \end{bmatrix} = \begin{bmatrix} x - v_x\,t \\ y - v_y\,t \\ t \end{bmatrix},$$
it is, however, from a computational viewpoint preferable to parameterize the components in the structure tensor/second-moment matrix $S$ using the notion of Galilean diagonalization[25]
$$S' = R_{\text{space}}^{-\text{T}}\,G^{-\text{T}}\,S\,G^{-1}\,R_{\text{space}}^{-1} = \begin{bmatrix} \nu_1 & & \\ & \nu_2 & \\ & & \nu_3 \end{bmatrix}$$
where $G$ denotes a Galilean transformation of spacetime and $R_{\text{space}}$ a two-dimensional rotation over the spatial domain,
compared to the above-mentioned use of eigenvalues of a 3-D structure tensor, which corresponds to an eigenvalue decomposition and a (non-physical) three-dimensional rotation of spacetime
$$S'' = R_{\text{spacetime}}^{-\text{T}}\,S\,R_{\text{spacetime}}^{-1} = \begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{bmatrix}.$$
To obtain true Galilean invariance, however, the shape of the spatio-temporal window function also needs to be adapted,[25][26] corresponding to the transfer of affine shape adaptation[11] from spatial to spatio-temporal image data.
In combination with local spatio-temporal histogram descriptors,[27] these concepts together allow for Galilean invariant recognition of spatio-temporal events.[28]
https://en.wikipedia.org/wiki/Structure_tensor
In vector calculus, the surface gradient is a vector differential operator that is similar to the conventional gradient. The distinction is that the surface gradient takes effect along a surface.
For a surface $S$ in a scalar field $u$, the surface gradient is defined and notated as
$$\nabla_S u = \nabla u - \hat{\mathbf{n}}\,(\hat{\mathbf{n}} \cdot \nabla u)$$
where $\hat{\mathbf{n}}$ is a unit normal to the surface.[1] Examining the definition shows that the surface gradient is the (conventional) gradient with the component normal to the surface removed (subtracted), hence this gradient is tangent to the surface. In other words, the surface gradient is the orthographic projection of the gradient onto the surface.
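The projection reading of the definition, $\nabla_S u = \nabla u - \hat{\mathbf{n}}(\hat{\mathbf{n}} \cdot \nabla u)$, is essentially one line of NumPy:

```python
import numpy as np

def surface_gradient(grad_u, n_hat):
    """Project out the normal component: grad_S u = grad u - n (n . grad u)."""
    n = np.asarray(n_hat, dtype=float)
    n = n / np.linalg.norm(n)            # ensure the normal is a unit vector
    g = np.asarray(grad_u, dtype=float)
    return g - n * np.dot(n, g)
```

The result is always tangent to the surface: its dot product with the normal is zero, as the definition requires.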
The surface gradient arises whenever the gradient of a quantity over a surface is important. In the study of capillary surfaces, for example, the ordinary gradient of a spatially varying surface tension is not very meaningful, whereas the surface gradient is, and serves certain purposes.
https://en.wikipedia.org/wiki/Surface_gradient
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality.
Let $X$ be a real topological vector space and let $X^*$ be the dual space to $X$. Denote by
$$\langle \cdot, \cdot \rangle : X^* \times X \to \mathbb{R}$$
the canonical dual pairing, which is defined by $\langle x^*, x \rangle \mapsto x^*(x)$.
For a function $f : X \to \mathbb{R} \cup \{-\infty, +\infty\}$ taking values on the extended real number line, its convex conjugate is the function
$$f^* : X^* \to \mathbb{R} \cup \{-\infty, +\infty\}$$
whose value at $x^* \in X^*$ is defined to be the supremum:
$$f^*(x^*) = \sup \left\{ \langle x^*, x \rangle - f(x) : x \in X \right\},$$
or, equivalently, in terms of the infimum:
$$f^*(x^*) = -\inf \left\{ f(x) - \langle x^*, x \rangle : x \in X \right\}.$$
This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.[1]
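Numerically, the supremum in the definition can be approximated by maximizing over a grid. For the self-conjugate function $f(x) = x^2/2$ (whose conjugate is $p^2/2$) this is easy to check; the grid bounds and resolution below are illustrative choices:

```python
import numpy as np

def conjugate_on_grid(f, xs, p):
    """Approximate f*(p) = sup_x [p*x - f(x)] by a maximum over the grid xs."""
    return np.max(p * xs - f(xs))

f = lambda x: 0.5 * x ** 2               # f(x) = x^2/2 is its own conjugate
xs = np.linspace(-10.0, 10.0, 100001)    # grid wide enough to contain the max
```

For instance, `conjugate_on_grid(f, xs, 3.0)` is close to $3^2/2 = 4.5$, matching $f^*(p) = p^2/2$. The approximation is only valid for slopes $p$ whose maximizer lies inside the grid.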
For more examples, see § Table of selected convex conjugates.
The convex conjugate and the Legendre transform of the exponential function agree, except that the domain of the convex conjugate is strictly larger, as the Legendre transform is only defined for positive real numbers.
See this article for an example.
Let $F$ denote a cumulative distribution function of a random variable $X$. Then (integrating by parts),
$$f(x) := \int_{-\infty}^{x} F(u)\,du = \operatorname{E}\left[\max(0, x - X)\right] = x - \operatorname{E}\left[\min(x, X)\right]$$
has the convex conjugate
$$f^*(p) = \int_0^p F^{-1}(q)\,dq = (p - 1)F^{-1}(p) + \operatorname{E}\left[\min(F^{-1}(p), X)\right] = p\,F^{-1}(p) - \operatorname{E}\left[\max(0, F^{-1}(p) - X)\right].$$
A particular interpretation has the transform
$$f^{\text{inc}}(x) := \arg\sup_t \; t \cdot x - \int_0^1 \max\{t - f(u), 0\}\,du,$$
as this is a nondecreasing rearrangement of the initial function $f$; in particular, $f^{\text{inc}} = f$ for $f$ nondecreasing.
The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.
Declare that $f \leq g$ if and only if $f(x) \leq g(x)$ for all $x$. Then convex conjugation is order-reversing, which by definition means that if $f \leq g$ then $f^* \geq g^*$.
For a family of functions $(f_\alpha)_\alpha$, it follows from the fact that suprema may be interchanged that
$$\left( \inf_\alpha f_\alpha \right)^*(x^*) = \sup_\alpha f_\alpha^*(x^*),$$
and from the max–min inequality that
$$\left( \sup_\alpha f_\alpha \right)^*(x^*) \leq \inf_\alpha f_\alpha^*(x^*).$$
The convex conjugate of a function is always lower semi-continuous. The biconjugate $f^{**}$ (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with $f^{**} \leq f$. For proper functions $f$, $f = f^{**}$ if and only if $f$ is convex and lower semi-continuous, by the Fenchel–Moreau theorem.
For any function $f$ and its convex conjugate $f^*$, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every $x \in X$ and $p \in X^*$:
$$\langle p, x \rangle \leq f(x) + f^*(p).$$
Furthermore, the equality holds only when $p \in \partial f(x)$.
The proof follows from the definition of the convex conjugate:
$$f^*(p) = \sup_{\tilde{x}} \left\{ \langle p, \tilde{x} \rangle - f(\tilde{x}) \right\} \geq \langle p, x \rangle - f(x).$$
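For $f(x) = x^2/2$, which is its own conjugate, the Fenchel gap has the closed form $(x - p)^2/2$, which makes the inequality and its equality case $p = f'(x) = x$ easy to verify:

```python
def fenchel_gap(x, p):
    """Gap f(x) + f*(p) - p*x for f(x) = x^2/2, whose conjugate is p^2/2.

    Algebraically the gap equals (x - p)^2 / 2, so it is nonnegative
    and vanishes exactly when p = f'(x) = x.
    """
    return 0.5 * x ** 2 + 0.5 * p ** 2 - p * x
```

Evaluating the gap at a few points confirms Fenchel's inequality: it is zero when `p == x` and strictly positive otherwise.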
For two functions $f_0$ and $f_1$ and a number $0 \leq \lambda \leq 1$, the convexity relation
$$\left( (1 - \lambda) f_0 + \lambda f_1 \right)^* \leq (1 - \lambda) f_0^* + \lambda f_1^*$$
holds. The $*$ operation is a convex mapping itself.
The infimal convolution (or epi-sum) of two functions $f$ and $g$ is defined as
$$(f \,\Box\, g)(x) = \inf \left\{ f(x - y) + g(y) : y \in X \right\}.$$
Let $f_1, \ldots, f_m$ be proper, convex and lower semicontinuous functions on $\mathbb{R}^n$. Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper),[2] and satisfies
$$\left( f_1 \,\Box\, \cdots \,\Box\, f_m \right)^* = f_1^* + \cdots + f_m^*.$$
The infimal convolution of two functions has a geometric interpretation: the (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.[3]
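The infimum in the epi-sum can likewise be approximated on a grid. For $f = g = x^2/2$ the infimal convolution is $x^2/4$: minimizing $(x - y)^2/2 + y^2/2$ over $y$ gives the minimizer $y = x/2$. A sketch with an illustrative grid:

```python
import numpy as np

def inf_convolution(f, g, ys, x):
    """(f box g)(x) = inf_y [f(x - y) + g(y)], minimized over the grid ys."""
    return np.min(f(x - ys) + g(ys))

q = lambda x: 0.5 * x ** 2               # q box q should equal x^2 / 4
ys = np.linspace(-10.0, 10.0, 100001)
```

At $x = 2$ the epi-sum evaluates to $2^2/4 = 1$, consistent with the conjugate identity above ($q^* = q$, and $(q \,\Box\, q)^* = 2 q^*$ corresponds to $x^2/4$ in the primal).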
If the function $f$ is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate:
$$f^{*\prime}(p) = x(p) := \arg\sup_x \bigl( \langle p, x \rangle - f(x) \bigr),$$
hence
$$p = f'\bigl(x(p)\bigr), \quad \text{i.e. } x(p) = (f')^{-1}(p),$$
and moreover
$$f''\bigl(x(p)\bigr) \cdot f^{*\prime\prime}(p) = 1.$$
If for some $\gamma > 0$, $g(x) = \alpha + \beta x + \gamma \cdot f(\lambda x + \delta)$, then
$$g^*(y) = -\alpha - \delta\,\frac{y - \beta}{\lambda} + \gamma\,f^*\!\left( \frac{y - \beta}{\gamma \lambda} \right).$$
Let $A : X \to Y$ be a bounded linear operator. For any convex function $f$ on $X$,
$$(A f)^* = f^* A^*$$
where
$$(A f)(y) = \inf \{ f(x) : x \in X, \; A x = y \}$$
is the preimage of $f$ with respect to $A$, and $A^*$ is the adjoint operator of $A$.[4]
A closed convex function $f$ is symmetric with respect to a given set $G$ of orthogonal linear transformations,
$$f(A x) = f(x) \quad \text{for all } x \text{ and all } A \in G,$$
if and only if its convex conjugate $f^*$ is symmetric with respect to $G$.
The following table provides Legendre transforms for many common functions as well as a few useful properties.[5]
https://en.wikipedia.org/wiki/Convex_duality
In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of $A$ is $B$, then the dual of $B$ is $A$. In other cases the dual of the dual – the double dual or bidual – is not necessarily identical to the original (also called primal). Such involutions sometimes have fixed points, so that the dual of $A$ is $A$ itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry.
In mathematical contexts, duality has numerous meanings.[1] It has been described as "a very pervasive and important concept in (modern) mathematics"[2] and "an important general theme that has manifestations in almost every area of mathematics".[3]
Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, linear algebra duality corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the duality between distributions and the associated test functions corresponds to the pairing in which one integrates a distribution against a test function, and Poincaré duality corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold.[4]
From acategory theoryviewpoint, duality can also be seen as afunctor, at least in the realm of vector spaces. This functor assigns to each space its dual space, and thepullbackconstruction assigns to each arrowf:V→Wits dualf∗:W∗→V∗.
In the words ofMichael Atiyah,
Duality in mathematics is not a theorem, but a "principle".[5]
The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case.
A simple duality arises from consideringsubsetsof a fixed setS. To any subsetA⊆S, thecomplementA∁[6]consists of all those elements inSthat are not contained inA. It is again a subset ofS. Taking the complement has the following properties:
This duality appears intopologyas a duality betweenopenandclosed subsetsof some fixed topological spaceX: a subsetUofXis closed if and only if its complement inXis open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed.[7]Theinteriorof a set is the largest open set contained in it, and theclosureof the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any setUis equal to the closure of the complement ofU.
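The complement duality and De Morgan's laws can be checked directly on finite sets; the following Python sketch (the sets and names are illustrative, not from the article) verifies the involution and order-reversal properties:

```python
# Duality by set complement: A -> S \ A on subsets of a fixed finite set S.
S = frozenset(range(8))

def complement(A):
    """Dual of a subset A of S under the complement duality."""
    return S - A

A = frozenset({1, 2, 3})
B = frozenset({3, 4})

# The duality is an involution: the dual of the dual is the original set.
assert complement(complement(A)) == A

# It reverses inclusions: A <= A|B implies complement(A|B) <= complement(A).
assert A <= (A | B) and complement(A | B) <= complement(A)

# De Morgan's laws: the complement swaps unions and intersections.
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
```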
A duality ingeometryis provided by thedual coneconstruction. Given a setC{\displaystyle C}of points in the planeR2{\displaystyle \mathbb {R} ^{2}}(or more generally points inRn{\displaystyle \mathbb {R} ^{n}}),the dual cone is defined as the setC∗⊆R2{\displaystyle C^{*}\subseteq \mathbb {R} ^{2}}consisting of those points(x1,x2){\displaystyle (x_{1},x_{2})}satisfyingx1c1+x2c2≥0{\displaystyle x_{1}c_{1}+x_{2}c_{2}\geq 0}for all points(c1,c2){\displaystyle (c_{1},c_{2})}inC{\displaystyle C}, as illustrated in the diagram.
Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set C{\displaystyle C}. Instead, C∗∗{\displaystyle C^{**}} is the smallest cone[8] containing C{\displaystyle C}, which may be bigger than C{\displaystyle C}. Therefore this duality is weaker than the one above, in that C ⊆ C∗∗.
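A small numerical sketch (the generators are chosen for illustration) shows both directions: a hand-computed dual cone of a two-point set, and the fact that the bidual contains, and exceeds, the original set:

```python
import numpy as np

# For a finite set C, the dual cone is C* = { x : <x, c> >= 0 for all c in C }.
def in_dual(x, gens):
    """Membership of x in the dual cone of the finite set `gens`."""
    return all(np.dot(x, g) >= -1e-12 for g in gens)

C = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
# By hand, C* is the cone generated by (0, 1) and (1, -1):
C_star = [np.array([0.0, 1.0]), np.array([1.0, -1.0])]
assert all(in_dual(g, C) for g in C_star)

# C** = dual of C*: every point of C lies in C**, but C** is the whole
# cone generated by C, so it contains points outside the two-element set C.
assert all(in_dual(c, C_star) for c in C)
assert in_dual(np.array([2.0, 1.0]), C_star)   # in C**, not in C itself
```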
The other two properties carry over without change:
A very important example of a duality arises inlinear algebraby associating to anyvector spaceVitsdual vector spaceV*. Its elements are thelinear functionalsφ:V→K{\displaystyle \varphi :V\to K}, whereKis thefieldover whichVis defined.
The three properties of the dual cone carry over to this type of duality by replacing subsets ofR2{\displaystyle \mathbb {R} ^{2}}by vector space and inclusions of such subsets by linear maps. That is:
A particular feature of this duality is that V and V* are isomorphic for certain objects, namely finite-dimensional vector spaces. However, this is in a sense a lucky coincidence, for giving such an isomorphism requires a certain choice, for example the choice of a basis of V. This also holds when V is a Hilbert space, via the Riesz representation theorem.
In all the dualities discussed before, the dual of an object is of the same kind as the object itself. For example, the dual of a vector space is again a vector space. Many duality statements are not of this kind. Instead, such dualities reveal a close relation between objects of seemingly different nature. One example of such a more general duality is fromGalois theory. For a fixedGalois extensionK/F, one may associate theGalois groupGal(K/E)to any intermediate fieldE(i.e.,F⊆E⊆K). This group is a subgroup of the Galois groupG= Gal(K/F). Conversely, to any such subgroupH⊆Gthere is the fixed fieldKHconsisting of elements fixed by the elements inH.
Compared to the above, this duality has the following features:
Given aposetP= (X, ≤)(short for partially ordered set; i.e., a set that has a notion of ordering but in which two elements cannot necessarily be placed in order relative to each other), thedualposetPd= (X, ≥)comprises the same ground set but theconverse relation. Familiar examples of dual partial orders include
Aduality transformis aninvolutive antiautomorphismfof apartially ordered setS, that is, anorder-reversinginvolutionf:S→S.[9][10]In several important cases these simple properties determine the transform uniquely up to some simple symmetries. For example, iff1,f2are two duality transforms then theircompositionis anorder automorphismofS; thus, any two duality transforms differ only by an order automorphism. For example, all order automorphisms of apower setS= 2Rare induced by permutations ofR.
A concept defined for a partial orderPwill correspond to adual concepton the dual posetPd. For instance, aminimal elementofPwill be amaximal elementofPd: minimality and maximality are dual concepts in order theory. Other pairs of dual concepts areupper and lower bounds,lower setsandupper sets, andidealsandfilters.
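These dual pairs can be checked computationally; the sketch below (the divisibility order and the helper names are ours, chosen for illustration) verifies that minimal elements of the dual poset are exactly the maximal elements of the original:

```python
# Order-theoretic duality: the dual poset keeps the ground set and
# reverses the relation, so minimal and maximal elements swap roles.
def minimal(X, leq):
    """Elements of X with no strictly smaller element under leq."""
    return {x for x in X if not any(leq(y, x) and y != x for y in X)}

def maximal(X, leq):
    """Elements of X with no strictly larger element under leq."""
    return {x for x in X if not any(leq(x, y) and y != x for y in X)}

# Divisibility order on {2, 3, 4, 12}: leq(a, b) means "a divides b".
X = {2, 3, 4, 12}
leq = lambda a, b: b % a == 0
dual_leq = lambda a, b: leq(b, a)          # the converse relation

assert minimal(X, leq) == {2, 3}
assert maximal(X, leq) == {12}
# Minimal elements of the dual poset are the maximal elements of P.
assert minimal(X, dual_leq) == maximal(X, leq)
assert maximal(X, dual_leq) == minimal(X, leq)
```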
In topology,open setsandclosed setsare dual concepts: the complement of an open set is closed, and vice versa. Inmatroidtheory, the family of sets complementary to the independent sets of a given matroid themselves form another matroid, called thedual matroid.
There are many distinct but interrelated dualities in which geometric or topological objects correspond to other objects of the same type, but with a reversal of the dimensions of the features of the objects. A classical example of this is the duality of thePlatonic solids, in which the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form a dual pair, and the tetrahedron is self-dual. Thedual polyhedronof any of these polyhedra may be formed as theconvex hullof the center points of each face of the primal polyhedron, so theverticesof the dual correspond one-for-one with the faces of the primal. Similarly, each edge of the dual corresponds to an edge of the primal, and each face of the dual corresponds to a vertex of the primal. These correspondences are incidence-preserving: if two parts of the primal polyhedron touch each other, so do the corresponding two parts of thedual polyhedron. More generally, using the concept ofpolar reciprocation, anyconvex polyhedron, or more generally anyconvex polytope, corresponds to adual polyhedronor dual polytope, with ani-dimensional feature of ann-dimensional polytope corresponding to an(n−i− 1)-dimensional feature of the dual polytope. The incidence-preserving nature of the duality is reflected in the fact that theface latticesof the primal and dual polyhedra or polytopes are themselvesorder-theoretic duals. Duality of polytopes and order-theoretic duality are bothinvolutions: the dual polytope of the dual polytope of any polytope is the original polytope, and reversing all order-relations twice returns to the original order. Choosing a different center of polarity leads to geometrically different dual polytopes, but all have the same combinatorial structure.
From any three-dimensional polyhedron, one can form aplanar graph, the graph of its vertices and edges. The dual polyhedron has adual graph, a graph with one vertex for each face of the polyhedron and with one edge for every two adjacent faces. The same concept of planar graph duality may be generalized to graphs that are drawn in the plane but that do not come from a three-dimensional polyhedron, or more generally tograph embeddingson surfaces of higher genus: one may draw a dual graph by placing one vertex within each region bounded by a cycle of edges in the embedding, and drawing an edge connecting any two regions that share a boundary edge. An important example of this type comes fromcomputational geometry: the duality for any finite setSof points in the plane between theDelaunay triangulationofSand theVoronoi diagramofS. As with dual polyhedra and dual polytopes, the duality of graphs on surfaces is a dimension-reversing involution: each vertex in the primal embedded graph corresponds to a region of the dual embedding, each edge in the primal is crossed by an edge in the dual, and each region of the primal corresponds to a vertex of the dual. The dual graph depends on how the primal graph is embedded: different planar embeddings of a single graph may lead to different dual graphs.Matroid dualityis an algebraic extension of planar graph duality, in the sense that the dual matroid of the graphic matroid of a planar graph is isomorphic to the graphic matroid of the dual graph.
A kind of geometric duality also occurs inoptimization theory, but not one that reverses dimensions. Alinear programmay be specified by a system of real variables (the coordinates for a point in Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}),a system of linear constraints (specifying that the point lie in ahalfspace; the intersection of these halfspaces is a convex polytope, the feasible region of the program), and a linear function (what to optimize). Every linear program has adual problemwith the same optimal solution, but the variables in the dual problem correspond to constraints in the primal problem and vice versa.
In logic, functions or relationsAandBare considered dual ifA(¬x) = ¬B(x), where ¬ islogical negation. The basic duality of this type is the duality of the ∃ and ∀quantifiersin classical logic. These are dual because∃x.¬P(x)and¬∀x.P(x)are equivalent for all predicatesPin classical logic: if there exists anxfor whichPfails to hold, then it is false thatPholds for allx(but the converse does not hold constructively). From this fundamental logical duality follow several others:
Other analogous dualities follow from these:
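The underlying quantifier duality is easy to check on a finite domain, where ∃ and ∀ become Python's any and all (the predicate is chosen for illustration):

```python
# De Morgan duality of quantifiers on a finite domain:
# "exists x, not P(x)" is classically equivalent to "not (forall x, P(x))".
domain = range(10)
P = lambda x: x % 2 == 0

assert any(not P(x) for x in domain) == (not all(P(x) for x in domain))
# Dually: "forall x, not P(x)" is equivalent to "not (exists x, P(x))".
assert all(not P(x) for x in domain) == (not any(P(x) for x in domain))
```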
The dual of the dual, called thebidualordouble dual, depending on context, is often identical to the original (also calledprimal), and duality is an involution. In this case the bidual is not usually distinguished, and instead one only refers to the primal and dual. For example, the dual poset of the dual poset is exactly the original poset, since the converse relation is defined by an involution.
In other cases, the bidual is not identical with the primal, though there is often a close connection. For example, the dual cone of the dual cone of a set contains the primal set (it is the smallest cone containing the primal set), and is equal if and only if the primal set is a cone.
An important case is for vector spaces, where there is a map from the primal space to the double dual, V → V**, known as the "canonical evaluation map". For finite-dimensional vector spaces this is an isomorphism, but these are not identical spaces: they are different sets. In category theory, this is generalized by § Dual objects, and a "natural transformation" from the identity functor to the double dual functor. For vector spaces (considered algebraically), this is always an injection; see Dual space § Injection into the double-dual. This can be generalized algebraically to a dual module. There is still a canonical evaluation map, but it is not always injective; if it is, this is known as a torsionless module; if it is an isomorphism, the module is called reflexive.
Fortopological vector spaces(includingnormed vector spaces), there is a separate notion of atopological dual, denotedV′{\displaystyle V'}to distinguish from the algebraic dualV*, with different possible topologies on the dual, each of which defines a different bidual spaceV″{\displaystyle V''}. In these cases the canonical evaluation mapV→V″{\displaystyle V\to V''}is not in general an isomorphism. If it is, this is known (for certainlocally convexvector spaces with thestrong dual spacetopology) as areflexive space.
In other cases, showing a relation between the primal and bidual is a significant result, as inPontryagin duality(alocally compact abelian groupis naturally isomorphic to its bidual).
A group of dualities can be described by endowing, for any mathematical objectX, the set of morphismsHom (X,D)into some fixed objectD, with a structure similar to that ofX. This is sometimes calledinternal Hom. In general, this yields a true duality only for specific choices ofD, in which caseX*= Hom (X,D)is referred to as thedualofX. There is always a map fromXto thebidual, that is to say, the dual of the dual,X→X∗∗:=(X∗)∗=Hom(Hom(X,D),D).{\displaystyle X\to X^{**}:=(X^{*})^{*}=\operatorname {Hom} (\operatorname {Hom} (X,D),D).}It assigns to somex∈Xthe map that associates to any mapf:X→D(i.e., an element inHom(X,D)) the valuef(x).
Depending on the concrete duality considered and also depending on the objectX, this map may or may not be an isomorphism.
The construction of the dual vector spaceV∗=Hom(V,K){\displaystyle V^{*}=\operatorname {Hom} (V,K)}mentioned in the introduction is an example of such a duality. Indeed, the set of morphisms, i.e.,linear maps, forms a vector space in its own right. The mapV→V**mentioned above is always injective. It is surjective, and therefore an isomorphism, if and only if thedimensionofVis finite. This fact characterizes finite-dimensional vector spaces without referring to a basis.
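For V = R^n the canonical map into the bidual can be made concrete; in the sketch below (dual-basis coordinates, illustrative names), each vector becomes the evaluation functional, and injectivity is visible because the vector can be read back off the dual basis:

```python
import numpy as np

# Canonical map V -> V** for V = R^n: v goes to the functional f |-> f(v).
# In dual-basis coordinates a functional is a covector, and evaluation is
# the dot product.
def eval_map(v):
    """Image of v under the canonical evaluation map V -> V**."""
    return lambda f: np.dot(f, v)

v = np.array([2.0, -1.0, 3.0])
f = np.array([1.0, 0.0, 1.0])      # the functional x |-> x1 + x3
assert eval_map(v)(f) == 5.0

# Injectivity: v is recovered by evaluating eval_v on the dual basis.
recovered = np.array([eval_map(v)(e) for e in np.eye(3)])
assert np.allclose(recovered, v)
```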
A vector space V is isomorphic to V∗ precisely if V is finite-dimensional. In this case, such an isomorphism is equivalent to a non-degenerate bilinear form φ : V × V → K, and V is then called an inner product space.
For example, ifKis the field ofrealorcomplex numbers, anypositive definitebilinear form gives rise to such an isomorphism. InRiemannian geometry,Vis taken to be thetangent spaceof amanifoldand such positive bilinear forms are calledRiemannian metrics. Their purpose is to measure angles and distances. Thus, duality is a foundational basis of this branch of geometry. Another application of inner product spaces is theHodge starwhich provides a correspondence between the elements of theexterior algebra. For ann-dimensional vector space, the Hodge star operator mapsk-formsto(n−k)-forms. This can be used to formulateMaxwell's equations. In this guise, the duality inherent in the inner product space exchanges the role ofmagneticandelectric fields.
In someprojective planes, it is possible to findgeometric transformationsthat map each point of the projective plane to a line, and each line of the projective plane to a point, in an incidence-preserving way.[11]For such planes there arises a general principle ofduality in projective planes: given any theorem in such a plane projective geometry, exchanging the terms "point" and "line" everywhere results in a new, equally valid theorem.[12]A simple example is that the statement "two points determine a unique line, the line passing through these points" has the dual statement that "two lines determine a unique point, theintersection pointof these two lines". For further examples, seeDual theorems.
A conceptual explanation of this phenomenon in some planes (notably field planes) is offered by the dual vector space. In fact, the points in the projective planeRP2{\displaystyle \mathbb {RP} ^{2}}correspond to one-dimensional subvector spacesV⊂R3{\displaystyle V\subset \mathbb {R} ^{3}}[13]while the lines in the projective plane correspond to subvector spacesW{\displaystyle W}of dimension 2. The duality in such projective geometries stems from assigning to a one-dimensionalV{\displaystyle V}the subspace of(R3)∗{\displaystyle (\mathbb {R} ^{3})^{*}}consisting of those linear mapsf:R3→R{\displaystyle f:\mathbb {R} ^{3}\to \mathbb {R} }which satisfyf(V)=0{\displaystyle f(V)=0}. As a consequence of thedimension formulaoflinear algebra, this space is two-dimensional, i.e., it corresponds to a line in the projective plane associated to(R3)∗{\displaystyle (\mathbb {R} ^{3})^{*}}.
The (positive definite) bilinear form⟨⋅,⋅⟩:R3×R3→R,⟨x,y⟩=∑i=13xiyi{\displaystyle \langle \cdot ,\cdot \rangle :\mathbb {R} ^{3}\times \mathbb {R} ^{3}\to \mathbb {R} ,\langle x,y\rangle =\sum _{i=1}^{3}x_{i}y_{i}}yields an identification of this projective plane with theRP2{\displaystyle \mathbb {RP} ^{2}}. Concretely, the duality assigns toV⊂R3{\displaystyle V\subset \mathbb {R} ^{3}}itsorthogonal{w∈R3,⟨v,w⟩=0for allv∈V}{\displaystyle \left\{w\in \mathbb {R} ^{3},\langle v,w\rangle =0{\text{ for all }}v\in V\right\}}. The explicit formulas induality in projective geometryarise by means of this identification.
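This orthogonal-complement description can be checked numerically; the sketch below (illustrative, using the SVD to compute a nullspace basis) confirms that the dual of a one-dimensional V ⊂ R^3 is two-dimensional:

```python
import numpy as np

# Projective duality via the standard inner product on R^3: a point of
# RP^2 is a line V = span{v}; its dual line is the orthogonal plane of v.
v = np.array([1.0, 2.0, -1.0])

# An orthonormal basis of the orthogonal complement via the SVD of the
# 1x3 matrix v^T: the last two right-singular vectors span the nullspace.
_, _, Vt = np.linalg.svd(v.reshape(1, 3))
W = Vt[1:]                       # rows span the plane orthogonal to v
assert W.shape == (2, 3)         # the dual object is 2-dimensional
assert np.allclose(W @ v, 0.0)   # every w in the plane satisfies <v, w> = 0
```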
In the realm oftopological vector spaces, a similar construction exists, replacing the dual by thetopological dualvector space. There are several notions of topological dual space, and each of them gives rise to a certain concept of duality. A topological vector spaceX{\displaystyle X}that is canonically isomorphic to its bidualX″{\displaystyle X''}is called areflexive space:X≅X″.{\displaystyle X\cong X''.}
Examples:
Thedual latticeof alatticeLis given byHom(L,Z),{\displaystyle \operatorname {Hom} (L,\mathbb {Z} ),}the set of linear functions on thereal vector spacecontaining the lattice that map the points of the lattice to the integersZ{\displaystyle \mathbb {Z} }. This is used in the construction oftoric varieties.[16]ThePontryagin dualoflocally compacttopological groupsGis given byHom(G,S1),{\displaystyle \operatorname {Hom} (G,S^{1}),}continuousgroup homomorphismswith values in the circle (with multiplication of complex numbers as group operation).
In another group of dualities, the objects of one theory are translated into objects of another theory and the maps between objects in the first theory are translated into morphisms in the second theory, but with direction reversed. Using the parlance of category theory, this amounts to a contravariant functor between two categories C and D:
F : C → D,
which for any two objects X and Y of C gives a map
Hom_C(X, Y) → Hom_D(F(Y), F(X)).
That functor may or may not be anequivalence of categories. There are various situations, where such a functor is an equivalence between theopposite categoryCopofC, andD. Using a duality of this type, every statement in the first theory can be translated into a "dual" statement in the second theory, where the direction of all arrows has to be reversed.[17]Therefore, any duality between categoriesCandDis formally the same as an equivalence betweenCandDop(CopandD). However, in many circumstances the opposite categories have no inherent meaning, which makes duality an additional, separate concept.[18]
A category that is equivalent to its dual is calledself-dual. An example of self-dual category is the category ofHilbert spaces.[19]
Many category-theoretic notions come in pairs in the sense that they correspond to each other while considering the opposite category. For example, Cartesian products Y1 × Y2 and disjoint unions Y1 ⊔ Y2 of sets are dual to each other in the sense that
Hom(X, Y1 × Y2) ≅ Hom(X, Y1) × Hom(X, Y2)
and
Hom(Y1 ⊔ Y2, X) ≅ Hom(Y1, X) × Hom(Y2, X)
for any set X. This is a particular case of a more general duality phenomenon, under which limits in a category C correspond to colimits in the opposite category C^op; further concrete examples of this are epimorphisms vs. monomorphisms, in particular factor modules (or groups etc.) vs. submodules, and direct products vs. direct sums (also called coproducts to emphasize the duality aspect). Therefore, in some cases, proofs of certain statements can be halved, using such a duality phenomenon. Further notions related by such a categorical duality are projective and injective modules in homological algebra,[20] fibrations and cofibrations in topology and more generally model categories.[21]
Two functors F : C → D and G : D → C are adjoint if for all objects c in C and d in D
Hom_D(F(c), d) ≅ Hom_C(c, G(d))
in a natural way. Actually, the correspondence of limits and colimits is an example of adjoints, since there is an adjunction
colim ⊣ Δ
between the colimit functor that assigns to any diagram in C indexed by some category I its colimit and the diagonal functor that maps any object c of C to the constant diagram which has c at all places. Dually, the diagonal functor is left adjoint to the limit functor.
Gelfand duality is a duality between commutative C*-algebras A and compact Hausdorff spaces X: it assigns to X the space of continuous functions (which vanish at infinity) from X to C, the complex numbers. Conversely, the space X can be reconstructed from A as the spectrum of A. Both Gelfand and Pontryagin duality can be deduced in a largely formal, category-theoretic way.[22]
In a similar vein there is a duality in algebraic geometry between commutative rings and affine schemes: to every commutative ring A there is an affine spectrum, Spec A. Conversely, given an affine scheme S, one gets back a ring by taking global sections of the structure sheaf OS. In addition, ring homomorphisms are in one-to-one correspondence with morphisms of affine schemes, so there is an equivalence between the opposite category of commutative rings and the category of affine schemes. Affine schemes are the local building blocks of schemes. The previous result therefore says that the local theory of schemes is the same as commutative algebra, the study of commutative rings.
Noncommutative geometrydraws inspiration from Gelfand duality and studies noncommutative C*-algebras as if they were functions on some imagined space.Tannaka–Krein dualityis a non-commutative analogue of Pontryagin duality.[24]
In a number of situations, the two categories which are dual to each other are actually arising frompartially orderedsets, i.e., there is some notion of an object "being smaller" than another one. A duality that respects the orderings in question is known as aGalois connection. An example is the standard duality inGalois theorymentioned in the introduction: a bigger field extension corresponds—under the mapping that assigns to any extensionL⊃K(inside some fixed bigger field Ω) the Galois group Gal (Ω /L) —to a smaller group.[25]
The collection of all open subsets of a topological spaceXforms a completeHeyting algebra. There is a duality, known asStone duality, connectingsober spacesand spatiallocales.
Pontryagin duality gives a duality on the category of locally compact abelian groups: given any such group G, the character group
G^ = Hom(G, S1),
given by continuous group homomorphisms from G to the circle group S1, can be endowed with the compact-open topology. Pontryagin duality states that the character group is again locally compact abelian and that
G ≅ (G^)^.
Moreover, discrete groups correspond to compact abelian groups; finite groups correspond to finite groups. On the one hand, Pontryagin duality is a special case of Gelfand duality. On the other hand, it is the conceptual basis of Fourier analysis; see below.
Inanalysis, problems are frequently solved by passing to the dual description of functions and operators.
Fourier transformswitches between functions on a vector space and its dual:f^(ξ):=∫−∞∞f(x)e−2πixξdx,{\displaystyle {\widehat {f}}(\xi ):=\int _{-\infty }^{\infty }f(x)\ e^{-2\pi ix\xi }\,dx,}and converselyf(x)=∫−∞∞f^(ξ)e2πixξdξ.{\displaystyle f(x)=\int _{-\infty }^{\infty }{\widehat {f}}(\xi )\ e^{2\pi ix\xi }\,d\xi .}Iffis anL2-functiononRorRN, say, then so isf^{\displaystyle {\widehat {f}}}andf(−x)=f^^(x){\displaystyle f(-x)={\widehat {\widehat {f}}}(x)}. Moreover, the transform interchanges operations of multiplication andconvolutionon the correspondingfunction spaces. A conceptual explanation of the Fourier transform is obtained by the aforementioned Pontryagin duality, applied to the locally compact groupsR(orRNetc.): any character ofRis given byξ↦e−2πixξ. The dualizing character of Fourier transform has many other manifestations, for example, in alternative descriptions ofquantum mechanicalsystems in terms of coordinate and momentum representations.
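The identity f(−x) = (f^)^(x) has a discrete analogue that can be checked with the DFT (conventions follow NumPy's fft; the example is illustrative):

```python
import numpy as np

# Applying the DFT twice reverses the signal, up to indexing modulo N and
# an overall factor of N, mirroring f(-x) = (f^)^(x) for the Fourier
# transform on the real line.
N = 8
f = np.arange(N, dtype=float)
F2 = np.fft.fft(np.fft.fft(f)) / N        # DFT applied twice, normalized
reversed_f = f[(-np.arange(N)) % N]       # f(-x), indices taken mod N
assert np.allclose(F2, reversed_f)
```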
Theorems showing that certain objects of interest are the dual spaces (in the sense of linear algebra) of other objects of interest are often called dualities. Many of these dualities are given by a bilinear pairing of two K-vector spaces
A ⊗ B → K.
Forperfect pairings, there is, therefore, an isomorphism ofAto thedualofB.
Poincaré duality of a smooth compact complex manifold X is given by a pairing of singular cohomology with C-coefficients (equivalently, sheaf cohomology of the constant sheaf C)
H^k(X) ⊗ H^(2n−k)(X) → C,
where n is the (complex) dimension of X.[27] Poincaré duality can also be expressed as a relation of singular homology and de Rham cohomology, by asserting that the map
(γ, ω) ↦ ∫γ ω
(integrating a differentialk-form over a (2n−k)-(real-)dimensional cycle) is a perfect pairing.
Poincaré duality also reverses dimensions; it corresponds to the fact that, if a topologicalmanifoldis represented as acell complex, then the dual of the complex (a higher-dimensional generalization of the planar graph dual) represents the same manifold. In Poincaré duality, this homeomorphism is reflected in an isomorphism of thekthhomologygroup and the (n−k)thcohomologygroup.
The same duality pattern holds for a smooth projective variety over a separably closed field, using l-adic cohomology with Qℓ-coefficients instead.[28] This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality.[29] Serre duality or coherent duality is similar to the statements above, but applies to cohomology of coherent sheaves instead.[30]
With increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of these dualities can be done usingderived categoriesand certaindirect and inverse image functors of sheaves(with respect to the classical analytical topology on manifolds for Poincaré duality, l-adic sheaves and theétale topologyin the second case, and with respect to coherent sheaves for coherent duality).
Yet another group of similar duality statements is encountered in arithmetic: étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group G(Fq) of a finite field, for example, is isomorphic to Ẑ, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any G-module M)
H^n(G, M) × H^(1−n)(G, Hom(M, Q/Z)) → Q/Z
is a direct consequence ofPontryagin dualityof finite groups. For local and global fields, similar statements exist (local dualityand global orPoitou–Tate duality).[32]
|
https://en.wikipedia.org/wiki/Duality_(mathematics)
|
Inmathematical optimizationand related fields,relaxationis amodeling strategy. A relaxation is anapproximationof a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.
For example, alinear programmingrelaxation of aninteger programmingproblem removes the integrality constraint and so allows non-integer rational solutions. ALagrangian relaxationof a complicated problem in combinatorial optimization penalizes violations of some constraints, allowing an easier relaxed problem to be solved. Relaxation techniques complement or supplementbranch and boundalgorithms of combinatorial optimization; linear programming and Lagrangian relaxations are used to obtain bounds in branch-and-bound algorithms for integer programming.[1]
The modeling strategy of relaxation should not be confused withiterative methodsofrelaxation, such assuccessive over-relaxation(SOR); iterative methods of relaxation are used in solving problems indifferential equations,linear least-squares, andlinear programming.[2][3][4]However, iterative methods of relaxation have been used to solve Lagrangian relaxations.[a]
A relaxation of the minimization problem
z = min { c(x) : x ∈ X ⊆ R^n }
is another minimization problem of the form
zR = min { cR(x) : x ∈ XR ⊆ R^n }
with these two properties: X ⊆ XR, and cR(x) ≤ c(x) for all x ∈ X.
The first property states that the original problem's feasible domain is a subset of the relaxed problem's feasible domain. The second property states that the original problem's objective-function is greater than or equal to the relaxed problem's objective-function.[1]
Ifx∗{\displaystyle x^{*}}is an optimal solution of the original problem, thenx∗∈X⊆XR{\displaystyle x^{*}\in X\subseteq X_{R}}andz=c(x∗)≥cR(x∗)≥zR{\displaystyle z=c(x^{*})\geq c_{R}(x^{*})\geq z_{R}}. Therefore,x∗∈XR{\displaystyle x^{*}\in X_{R}}provides an upper bound onzR{\displaystyle z_{R}}.
If in addition to the previous assumptions,cR(x)=c(x){\displaystyle c_{R}(x)=c(x)},∀x∈X{\displaystyle \forall x\in X}, the following holds: If an optimal solution for the relaxed problem is feasible for the original problem, then it is optimal for the original problem.[1]
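A concrete instance is the LP relaxation of a 0/1 knapsack problem: dropping integrality yields the fractional knapsack, which is solvable greedily by value/weight ratio (the data and helper name below are ours, for illustration):

```python
from itertools import product

# Original (hard) problem: maximize sum(v_i x_i) with sum(w_i x_i) <= W
# and x_i in {0, 1}.  Relaxed problem: same, but 0 <= x_i <= 1.
values, weights, W = [5, 4], [6, 5], 10

def fractional_knapsack(values, weights, W):
    """Optimal value of the LP relaxation (greedy by value/weight ratio)."""
    items = sorted(zip(values, weights),
                   key=lambda t: t[0] / t[1], reverse=True)
    total, cap = 0.0, float(W)
    for v, w in items:
        take = min(1.0, cap / w)   # take a fractional amount of the item
        total += v * take
        cap -= w * take
        if cap <= 0:
            break
    return total

z_relaxed = fractional_knapsack(values, weights, W)
# Best integer solution by brute-force enumeration (the original problem).
z_int = max(sum(v * x for v, x in zip(values, xs))
            for xs in product((0, 1), repeat=2)
            if sum(w * x for w, x in zip(weights, xs)) <= W)

# Defining property of a relaxation: its optimum bounds the original's.
assert z_relaxed >= z_int
assert abs(z_relaxed - 8.2) < 1e-9 and z_int == 5
```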
|
https://en.wikipedia.org/wiki/Relaxation_(approximation)
|
Inoperations research, theBig M methodis a method of solvinglinear programmingproblems using thesimplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints. It does so by associating the constraints with large negative constants which would not be part of any optimal solution, if it exists.
The simplex algorithm is the original and still one of the most widely used methods for solving linear maximization problems. When an optimal solution exists, it is attained at a vertex of the polytope that forms the feasible region of an LP (linear program). Points at the vertices of the polytope are represented as bases. So, to apply the simplex algorithm, which improves the basis until a global optimum is reached, one needs to find a feasible basis first.
The trivial basis (all problem variables equal to 0) is not always feasible. It is feasible if and only if all the constraints (except non-negativity) are less-than constraints with positive constants on the right-hand side. The Big M method introduces surplus and artificial variables to convert all inequalities into that form, and thereby extends the problem into higher dimensions so that the trivial basis is feasible. It is always a vertex, due to the non-negativity constraint on the problem variables inherent in the standard formulation of an LP. The "Big M" refers to a large number associated with the artificial variables, represented by the letter M.
The steps in the algorithm are as follows:
For example,x+y≤ 100 becomesx+y+s1= 100, whilstx+y≥ 100 becomesx+y− s1+a1= 100. The artificial variables must be shown to be 0. The function to be maximised is rewritten to include the sum of all the artificial variables. Thenrow reductionsare applied to gain a final solution.
The value of M must be chosen sufficiently large so that the artificial variable would not be part of any feasible solution.
For a sufficiently large M, the optimal solution contains any artificial variables in the basis (i.e. positive values) if and only if the problem is not feasible.
However, the a priori selection of an appropriate value for M is not trivial. A way to overcome the need to specify a value of M is described in [1]. Other ways to find an initial basis for the simplex algorithm involve solving another linear program in an initial phase.
When used in the objective function, the Big M method sometimes refers to formulations of linear optimization problems in which violations of a constraint or set of constraints are associated with a large positive penalty constant, M.
In mixed integer linear optimization the term Big M can also refer to the use of a large term in the constraints themselves. For example, the logical constraint z = 0 ⟺ x = y, where z is a binary (0 or 1) variable, refers to ensuring equality of variables only when a certain binary variable takes on one value, but to leave the variables "open" if the binary variable takes on its opposite value. For a sufficiently large M and binary z, the constraints
x − y ≤ M z and y − x ≤ M z
ensure that whenz=0{\displaystyle z=0}thenx=y{\displaystyle x=y}. Otherwise, whenz=1{\displaystyle z=1}, then−M≤x−y≤M{\displaystyle -M\leq x-y\leq M}, indicating that the variables x and y can have any values so long as the absolute value of their difference is bounded byM{\displaystyle M}(hence the need for M to be "large enough.") Thus it is possible to "encode" the logical constraint into a MILP problem.
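The linking constraints can be checked directly (the function name and the value of M below are illustrative):

```python
# Big-M linking constraint  -Mz <= x - y <= Mz:
# with z = 0 it forces x = y; with z = 1 it only bounds |x - y| by M.
M = 1000.0

def feasible(x, y, z):
    """True if (x, y, z) satisfies the Big-M pair of constraints."""
    return -M * z <= x - y <= M * z

assert feasible(3.0, 3.0, 0)          # z = 0 requires x == y
assert not feasible(3.0, 4.0, 0)
assert feasible(3.0, 500.0, 1)        # z = 1 leaves x, y essentially free
assert not feasible(0.0, 2 * M, 1)    # ...but only up to the chosen M
```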
Bibliography
Discussion
|
https://en.wikipedia.org/wiki/Big_M_method
|
Inmathematical optimization,Dantzig'ssimplex algorithm(orsimplex method) is a popularalgorithmforlinear programming.[1]
The name of the algorithm is derived from the concept of asimplexand was suggested byT. S. Motzkin.[2]Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicialcones, and these become proper simplices with an additional constraint.[3][4][5][6]The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called apolytope. The shape of this polytope is defined by theconstraintsapplied to the objective function.
George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief; however, at that time he didn't include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.[7] Development of the simplex method was evolutionary and happened over a period of about a year.[8]
After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that he had mistaken as homework in his professor Jerzy Neyman's class (and actually later solved) was applicable to finding an algorithm for linear programs. This problem involved finding the existence of Lagrange multipliers for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of Lebesgue integrals. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the simplex method would be very efficient.[9]
The simplex algorithm operates on linear programs in the canonical form
with {\displaystyle \mathbf {c} =(c_{1},\,\dots ,\,c_{n})} the coefficients of the objective function, {\displaystyle (\cdot )^{\mathrm {T} }} the matrix transpose, {\displaystyle \mathbf {x} =(x_{1},\,\dots ,\,x_{n})} the variables of the problem, {\displaystyle A} a p×n matrix, and {\displaystyle \mathbf {b} =(b_{1},\,\dots ,\,b_{p})}. There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality.
In geometric terms, the feasible region defined by all values of {\displaystyle \mathbf {x} } such that {\textstyle A\mathbf {x} \leq \mathbf {b} } and {\displaystyle \forall i,x_{i}\geq 0} is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points.[10]This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.[11]
It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point.[12]If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.[12]
The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.[13][14][15]
The transformation of a linear program to one in standard form may be accomplished as follows.[16]First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint
a new variable,y1{\displaystyle y_{1}}, is introduced with
The second equation may be used to eliminatex1{\displaystyle x_{1}}from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.
Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities
are replaced with
It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears, such as the second one, some authors refer to the variable introduced as a surplus variable.
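As a small sketch of this conversion (NumPy, with hypothetical 2×2 data not taken from the source), appending one slack variable per ≤ constraint turns Ax ≤ b into the equality system [A I][x; s] = b with s ≥ 0:

```python
import numpy as np

# hypothetical inequalities: x1 + x2 <= 4 and 2*x1 + x2 <= 5
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 5.0])

# augment with one slack variable per row: [A | I] [x; s] = b, s >= 0
A_eq = np.hstack([A, np.eye(2)])

# a feasible point x = (1, 2): the slacks absorb the gap between Ax and b
x = np.array([1.0, 2.0])
s = b - A @ x                      # s = (1, 1) >= 0, so x is feasible
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
```

For a ≥ constraint the same construction subtracts a non-negative surplus variable instead of adding a slack.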
Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. The other is to replace the variable with the difference of two restricted variables. For example, if {\displaystyle z_{1}} is unrestricted then write
The equation may be used to eliminate {\displaystyle z_{1}} from the linear program.
When this process is complete the feasible region will be in the form
It is also useful to assume that the rank of {\displaystyle \mathbf {A} } is the number of rows. This results in no loss of generality since otherwise either the system {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.[17]
A linear program in standard form can be represented as a tableau of the form
The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vector {\displaystyle \mathbf {b} } (different authors use different conventions as to the exact layout). If the columns of {\displaystyle \mathbf {A} } can be rearranged so that it contains the identity matrix of order {\displaystyle p} (the number of rows in {\displaystyle \mathbf {A} }) then the tableau is said to be in canonical form.[18] The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in {\displaystyle \mathbf {b} } and this solution is a basic feasible solution. The algebraic interpretation here is that the coefficients of the linear equation represented by each row are either {\displaystyle 0}, {\displaystyle 1}, or some other number. Each row will have {\displaystyle 1} column with value {\displaystyle 1}, {\displaystyle p-1} columns with coefficients {\displaystyle 0}, and the remaining columns with some other coefficients (these other variables represent our non-basic variables). By setting the values of the non-basic variables to zero we ensure in each row that the value of the variable represented by a {\displaystyle 1} in its column is equal to the {\displaystyle b} value at that row.
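Reading a basic feasible solution off a canonical tableau can be sketched as follows (NumPy, hypothetical constraint data): the slack columns form the identity block, so setting the nonbasic variables to zero leaves the basic variables equal to the entries of b.

```python
import numpy as np

# hypothetical constraints x1 + x2 <= 4, x1 + 2*x2 <= 5 with slacks s1, s2:
# the slack columns form an identity block, so this system is canonical
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])

basic = [2, 3]          # columns of the identity block (s1, s2)
x = np.zeros(4)         # nonbasic variables x1, x2 set to 0 ...
x[basic] = b            # ... so the basic variables read off b directly
assert np.allclose(A @ x, b)       # a basic feasible solution
```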
Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.[19]
Let
be a tableau in canonical form. Additional row-addition transformations can be applied to remove the coefficients cTB from the objective function. This process is called pricing out and results in a canonical tableau
where zB is the value of the objective function at the corresponding basic feasible solution. The updated coefficients, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables.[14]
The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation. First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the entering variable, and the variable being replaced leaves the set of basic variables and is called the leaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element.[13][14]
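The pivot operation just described can be sketched in a few lines of NumPy (the function name and the toy tableau are illustrative):

```python
import numpy as np

def pivot(T, r, c):
    """Pivot the tableau T on element (r, c): scale row r so T[r, c] == 1,
    then eliminate every other entry in column c, as described above."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

# after the pivot, column c is the r-th column of the identity matrix
T = np.array([[2.0, 1.0, 4.0],
              [4.0, 3.0, 10.0]])
T2 = pivot(T, 0, 0)
assert np.allclose(T2[:, 0], [1.0, 0.0])
```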
Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations, each of which gives an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.
Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is increased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive.
If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several entering variable choice rules[20] such as the Devex algorithm[21] have been developed.
If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form
By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the minimum of the objective function rather than the maximum.
Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any non-negative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum.
Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column is c, then the pivot row r is chosen so that
is the minimum over all r so that arc > 0. This is called the minimum ratio test.[20] If there is more than one row for which the minimum is achieved then a dropping variable choice rule[22] can be used to make the determination.
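The minimum ratio test can be sketched as a short helper (plain Python; the function name is illustrative, and the sample numbers are the ratios 10/1 and 15/3 quoted in the worked example that follows):

```python
def min_ratio_row(col, rhs):
    """Minimum ratio test: among rows with a positive pivot-column entry,
    pick one minimizing rhs[r] / col[r] (returns None if unbounded)."""
    candidates = [(rhs[r] / col[r], r) for r in range(len(col)) if col[r] > 0]
    return min(candidates)[1] if candidates else None

# ratios 10/1 = 10 and 15/3 = 5, so the second row (index 1) is the pivot row
assert min_ratio_row([1.0, 3.0], [10.0, 15.0]) == 1
# no positive entries: the entering variable can grow without bound
assert min_ratio_row([-1.0, 0.0], [10.0, 15.0]) is None
```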
Consider the linear program
With the addition of slack variables s and t, this is represented by the canonical tableau
where columns 5 and 6 represent the basic variables s and t and the corresponding basic feasible solution is
Columns 2, 3, and 4 can be selected as pivot columns; for this example column 4 is selected. The values of z resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces
Now columns 4 and 5 represent the basic variables z and s and the corresponding basic feasible solution is
For the next step, there are no positive entries in the objective row and in fact
so the minimum value of Z is −20.
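The full problem data is not reproduced in this extract; assuming the standard textbook instance consistent with the quantities quoted above (right-hand sides 10 and 15, ratios 10/1 and 15/3, minimum −20 at x = y = 0, z = 5), the result can be checked with SciPy's LP solver:

```python
from scipy.optimize import linprog

# Assumed instance (not fully given in the text): minimize
# Z = -2x - 3y - 4z  subject to  3x + 2y + z <= 10,  2x + 5y + 3z <= 15,
# with x, y, z >= 0 (linprog's default bounds).
c = [-2.0, -3.0, -4.0]
A_ub = [[3.0, 2.0, 1.0], [2.0, 5.0, 3.0]]
b_ub = [10.0, 15.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
assert res.status == 0
assert abs(res.fun - (-20.0)) < 1e-9    # minimum value of Z is -20
```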
In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of artificial variables. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the artificial variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the Phase I problem.[23]
The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called Phase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution.[13][14][24]
Consider the linear program
It differs from the previous example by having equality instead of inequality constraints. The previous solution {\displaystyle x=y=0,\,z=5} violates the first constraint.
This new problem is represented by the (non-canonical) tableau
Introduce artificial variables u and v and objective function W = u + v, giving a new tableau
The equation defining the original objective function is retained in anticipation of Phase II.
By construction, u and v are both basic variables since they are part of the initial identity matrix. However, the objective function W currently assumes that u and v are both 0. In order to adjust the objective function to be the correct value where u = 10 and v = 15, add the third and fourth rows to the first row giving
Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is
Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get
The artificial variables are now 0 and they may be dropped giving a canonical tableau equivalent to the original problem:
This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7. This value is "worse" than −20, which is to be expected for a problem which is more constrained.
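Assuming the same instance as before but with the inequalities tightened to equalities (again inferred rather than fully given in this extract), the value −130/7 can be verified with SciPy:

```python
from scipy.optimize import linprog

# Assumed instance: minimize Z = -2x - 3y - 4z subject to the equalities
# 3x + 2y + z = 10 and 2x + 5y + 3z = 15, with x, y, z >= 0.
c = [-2.0, -3.0, -4.0]
A_eq = [[3.0, 2.0, 1.0], [2.0, 5.0, 3.0]]
b_eq = [10.0, 15.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
assert res.status == 0
assert abs(res.fun - (-130.0 / 7.0)) < 1e-9   # worse than -20, as expected
```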
The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m + 1)-by-(m + n + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of B being a subset of the columns of [A, I]. This implementation is referred to as the "standard simplex algorithm". The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems.
In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand side. The latter can be updated using the pivotal column, and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix B and a matrix-vector product using A. These observations motivate the "revised simplex algorithm", for which implementations are distinguished by their invertible representation of B.[25]
In large linear-programming problems A is typically a sparse matrix and, when the resulting sparsity of B is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm.[24][25][26][27][28]
If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the basic variables is zero are called degenerate and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable.
Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare in practice. A discussion of an example of practical cycling occurs in Padberg.[24] Bland's rule prevents cycling and thus guarantees that the simplex algorithm always terminates.[24][29][30] Another pivoting algorithm, the criss-cross algorithm, never cycles on linear programs.[31]
History-based pivot rules such as Zadeh's rule and Cunningham's rule also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used, and then favoring variables that have been used least often.
The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as Fourier–Motzkin elimination. However, in 1972, Klee and Minty[32] gave an example, the Klee–Minty cube, showing that the worst-case complexity of the simplex method as formulated by Dantzig is exponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question if there is a variation with polynomial time, although sub-exponential pivot rules are known.[33]
In 2014, it was proved[citation needed] that a particular variant of the simplex method is NP-mighty, i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both NP-hard problems.[34] At about the same time it was shown that there exists an artificial pivot rule for which computing its output is PSPACE-complete.[35] In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is PSPACE-complete.[36]
Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-time average-case complexity under various probability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the random matrices.[37][38] Another approach to studying "typical phenomena" uses Baire category theory from general topology to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.[citation needed]
Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation: are worst-case scenarios stable under a small change (in the sense of structural stability), or do they become tractable? This area of research, called smoothed analysis, was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations.[39][40]
Other algorithms for solving linear-programming problems are described in the linear-programming article. Another basis-exchange pivoting algorithm is the criss-cross algorithm.[41][42] There are polynomial-time algorithms for linear programming that use interior point methods: these include Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and path-following algorithms.[15] The Big-M method is an alternative strategy for solving a linear program, using a single-phase simplex.
Linear–fractional programming (LFP) is a generalization of linear programming (LP). In LP the objective function is a linear function, while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a fractional–linear program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm[43][44][45][46] or by the criss-cross algorithm.[47]
These introductions are written for students of computer science and operations research:
|
https://en.wikipedia.org/wiki/Simplex_algorithm
|
Interior-point methods (also referred to as barrier methods or IPMs) are algorithms for solving linear and non-linear convex optimization problems. IPMs combine two advantages of previously-known algorithms:
In contrast to the simplex method, which traverses the boundary of the feasible region, and the ellipsoid method, which bounds the feasible region from outside, an IPM reaches a best solution by traversing the interior of the feasible region, hence the name.
An interior point method was discovered by Soviet mathematician I. I. Dikin in 1967.[1] The method was reinvented in the U.S. in the mid-1980s. In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm,[2] which runs in provably polynomial time ({\displaystyle O(n^{3.5}L)} operations on L-bit numbers, where n is the number of variables), and is also very efficient in practice. Karmarkar's paper created a surge of interest in interior point methods. Two years later, James Renegar invented the first path-following interior-point method, with run-time {\displaystyle O(n^{3}L)}. The method was later extended from linear to convex optimization problems, based on a self-concordant barrier function used to encode the convex set.[3]
Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set by converting to the epigraph form.[4]: 143 The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).
Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.[5][3]
The class of primal-dual path-following interior-point methods is considered the most successful. Mehrotra's predictor–corrector algorithm provides the basis for most implementations of this class of methods.[6]
We are given a convex program of the form: {\displaystyle {\begin{aligned}{\underset {x\in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(x)\\{\text{subject to}}\quad &x\in G.\end{aligned}}} where f is a convex function and G is a convex set. Without loss of generality, we can assume that the objective f is a linear function. Usually, the convex set G is represented by a set of convex inequalities and linear equalities; the linear equalities can be eliminated using linear algebra, so for simplicity we assume there are only convex inequalities, and the program can be described as follows, where the gi are convex functions: {\displaystyle {\begin{aligned}{\underset {x\in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(x)\\{\text{subject to}}\quad &g_{i}(x)\leq 0{\text{ for }}i=1,\dots ,m.\\\end{aligned}}} We assume that the constraint functions belong to some family (e.g. quadratic functions), so that the program can be represented by a finite vector of coefficients (e.g. the coefficients to the quadratic functions). The dimension of this coefficient vector is called the size of the program. A numerical solver for a given family of programs is an algorithm that, given the coefficient vector, generates a sequence of approximate solutions xt for t = 1, 2, ..., using finitely many arithmetic operations. A numerical solver is called convergent if, for any program from the family and any positive ε > 0, there is some T (which may depend on the program and on ε) such that, for any t > T, the approximate solution xt is ε-approximate, that is: {\displaystyle {\begin{aligned}&f(x_{t})-f^{*}\leq \epsilon ,\\&g_{i}(x_{t})\leq \epsilon \quad {\text{for}}\quad i=1,\dots ,m,\\&x\in G,\end{aligned}}} where {\displaystyle f^{*}} is the optimal solution. A solver is called polynomial if the total number of arithmetic operations in the first T steps is at most
poly(problem-size) * log(V/ε),
where V is some data-dependent constant, e.g., the difference between the largest and smallest value in the feasible set. In other words, V/ε is the "relative accuracy" of the solution (the accuracy w.r.t. the largest coefficient), and log(V/ε) represents the number of "accuracy digits". Therefore, a solver is "polynomial" if each additional digit of accuracy requires a number of operations that is polynomial in the problem size.
Types of interior point methods include:
Given a convex optimization program (P) with constraints, we can convert it to an unconstrained program by adding a barrier function. Specifically, let b be a smooth convex function defined in the interior of the feasible region G, such that for any sequence {xj in interior(G)} whose limit is on the boundary of G: {\displaystyle \lim _{j\to \infty }b(x_{j})=\infty }. We also assume that b is non-degenerate, that is: {\displaystyle b''(x)} is positive definite for all x in interior(G). Now, consider the family of programs:
(Pt) minimize t * f(x) + b(x)
Technically the program is restricted, since b is defined only in the interior of G. But practically, it is possible to solve it as an unconstrained program, since any solver trying to minimize the function will not approach the boundary, where b approaches infinity. Therefore, (Pt) has a unique solution; denote it by x*(t). The function x*(t) is a continuous function of t, which is called the central path. All limit points of x*(t), as t approaches infinity, are optimal solutions of the original program (P).
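The central path can be made concrete on a toy one-dimensional problem (my own illustrative example, not from the source): minimize f(x) = x over G = [0, 1] with the log-barrier b(x) = −ln(x) − ln(1 − x). Setting the derivative of t·x + b(x) to zero gives the quadratic t·x² − (t + 2)·x + 1 = 0, whose interior root is x*(t):

```python
import math

# x*(t) solves t - 1/x + 1/(1-x) = 0, i.e. t*x^2 - (t+2)*x + 1 = 0;
# the smaller root is the one lying strictly inside (0, 1)
def central_path(t):
    return ((t + 2) - math.sqrt((t + 2) ** 2 - 4 * t)) / (2 * t)

# as t grows, x*(t) traces the central path toward the true optimum x* = 0
for t in [1.0, 10.0, 100.0, 1000.0]:
    assert 0.0 < central_path(t) < 1.0    # always strictly interior
assert central_path(1000.0) < 0.01        # approaching the boundary optimum
```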
A path-following method is a method of tracking the function x*(t) along a certain increasing sequence t1, t2, ..., that is: computing a good-enough approximation xi to the point x*(ti), such that the difference xi − x*(ti) approaches 0 as i approaches infinity; then the sequence xi approaches the optimal solution of (P). This requires specifying three things:
The main challenge in proving that the method is polytime is that, as the penalty parameter grows, the solution gets near the boundary, and the function becomes steeper. The run-time of solvers such asNewton's methodbecomes longer, and it is hard to prove that the total runtime is polynomial.
Renegar[7] and Gonzaga[8] proved that a specific instance of a path-following method is polytime:
They proved that, in this case, the difference xi − x*(ti) remains at most 0.01, and f(xi) − f* is at most 2*m/ti. Thus, the solution accuracy is proportional to 1/ti, so to add a single accuracy digit it is sufficient to multiply ti by 2 (or any other constant factor), which requires O(sqrt(m)) Newton steps. Since each Newton step takes O(m n2) operations, the total complexity is O(m3/2 n2) operations per accuracy digit.
Yuri Nesterov extended the idea from linear to non-linear programs. He noted that the main property of the logarithmic barrier, used in the above proofs, is that it is self-concordant with a finite barrier parameter. Therefore, many other classes of convex programs can be solved in polytime using a path-following method, if we can find a suitable self-concordant barrier function for their feasible region.[3]: Sec.1
We are given a convex optimization problem (P) in "standard form":
minimize cTx s.t. x in G,
where G is convex and closed. We can also assume that G is bounded (we can easily make it bounded by adding a constraint |x| ≤ R for some sufficiently large R).[3]: Sec.4
To use the interior-point method, we need a self-concordant barrier for G. Let b be an M-self-concordant barrier for G, where M ≥ 1 is the self-concordance parameter. We assume that we can compute efficiently the value of b, its gradient, and its Hessian, for every point x in the interior of G.
For every t > 0, we define the penalized objective ft(x) := t cTx + b(x). We define the path of minimizers by: x*(t) := arg min ft(x). We approximate this path along an increasing sequence ti. The sequence is initialized by a certain non-trivial two-phase initialization procedure. Then, it is updated according to the following rule: {\displaystyle t_{i+1}:=\mu \cdot t_{i}}.
For each ti, we find an approximate minimum of fti, denoted by xi. The approximate minimum is chosen to satisfy the following "closeness condition" (where L is the path tolerance):
{\displaystyle {\sqrt {[\nabla _{x}f_{t}(x_{i})]^{T}[\nabla _{x}^{2}f_{t}(x_{i})]^{-1}[\nabla _{x}f_{t}(x_{i})]}}\leq L}.
To find xi+1, we start with xi and apply the damped Newton method. We apply several steps of this method, until the above "closeness relation" is satisfied. The first point that satisfies this relation is denoted by xi+1.[3]: Sec.4
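The update loop above can be sketched for a toy one-dimensional instance (my own illustrative example, not from the source): minimize t·x over 0 ≤ x ≤ 1 with the barrier b(x) = −ln(x) − ln(1 − x), so M = 2. The closeness condition reduces to the scalar Newton decrement |g|/√h ≤ L, and the damped Newton step shortens the full step by 1/(1 + λ):

```python
import math

def grad_hess(x, t):
    # f_t(x) = t*x - ln(x) - ln(1-x): gradient and Hessian of the penalized
    # objective for the barrier of the interval [0, 1]
    g = t - 1.0 / x + 1.0 / (1.0 - x)
    h = 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2
    return g, h

def damped_newton(x, t, L=0.1):
    while True:
        g, h = grad_hess(x, t)
        lam = abs(g) / math.sqrt(h)        # Newton decrement
        if lam <= L:
            return x                       # closeness condition satisfied
        x -= g / (h * (1.0 + lam))         # damped Newton step

t, x, mu = 1.0, 0.5, 1.2                   # mu plays the role of 1 + r/sqrt(M)
while t < 4000.0:                          # run until 2*M/t is small
    x = damped_newton(x, t)
    t *= mu
assert 0.0 < x < 0.01                      # near the true optimum x* = 0
```

Each damped step shrinks toward the minimizer while provably staying strictly inside (0, 1), which is the self-concordance property the convergence analysis relies on.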
The convergence rate of the method is given by the following formula, for every i:[3]: Prop.4.4.1
{\displaystyle c^{T}x_{i}-c^{*}\leq {\frac {2M}{t_{0}}}\mu ^{-i}}
Taking {\displaystyle \mu =\left(1+r/{\sqrt {M}}\right)}, the number of Newton steps required to go from xi to xi+1 is at most a fixed number, that depends only on r and L. In particular, the total number of Newton steps required to find an ε-approximate solution (i.e., finding x in G such that cTx − c* ≤ ε) is at most:[3]: Thm.4.4.1
{\displaystyle O(1)\cdot {\sqrt {M}}\cdot \ln \left({\frac {M}{t_{0}\varepsilon }}+1\right)}
where the constant factor O(1) depends only on r and L. The number of Newton steps required for the two-phase initialization procedure is at most:[3]: Thm.4.5.1
O(1)⋅M⋅ln(M1−πxf∗(x¯)+1)+O(1)⋅M⋅ln(MVarG(c)ϵ+1){\displaystyle O(1)\cdot {\sqrt {M}}\cdot \ln \left({\frac {M}{1-\pi _{x_{f}^{*}}({\bar {x}})}}+1\right)+O(1)\cdot {\sqrt {M}}\cdot \ln \left({\frac {M{\text{Var}}_{G}(c)}{\epsilon }}+1\right)}[clarification needed]
where the constant factor O(1) depends only on r and L, VarG(c):=maxx∈GcTx−minx∈GcTx{\displaystyle {\text{Var}}_{G}(c):=\max _{x\in G}c^{T}x-\min _{x\in G}c^{T}x}, and x¯{\displaystyle {\bar {x}}} is some point in the interior of G. Overall, the Newton complexity of finding an ε-approximate solution is at most
O(1)⋅M⋅ln(Vε+1){\displaystyle O(1)\cdot {\sqrt {M}}\cdot \ln \left({\frac {V}{\varepsilon }}+1\right)}, where V is some problem-dependent constant:V=VarG(c)1−πxf∗(x¯){\displaystyle V={\frac {{\text{Var}}_{G}(c)}{1-\pi _{x_{f}^{*}({\bar {x}})}}}}.
Each Newton step takes O(n^3) arithmetic operations.
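As an illustration of the path-following loop above, here is a minimal numeric sketch for an assumed one-dimensional LP — minimize c·x subject to 0 ≤ x ≤ 1, with the barrier b(x) = −ln(x) − ln(1−x), so M = 2. The tolerances r, L and ε are illustrative choices, not prescribed values; the full two-phase initialization is skipped by starting near the analytic center.

```python
import math

# Illustrative 1-D problem (assumed): minimize c*x subject to 0 <= x <= 1.
# Barrier: b(x) = -ln(x) - ln(1 - x), self-concordant with parameter M = 2.
c, M = 1.0, 2.0
L, r, eps = 0.25, 0.25, 1e-3        # path tolerance, step rate, accuracy (assumed)
mu = 1.0 + r / math.sqrt(M)         # short-step update t_{i+1} = mu * t_i

def grad(x, t):                     # f_t'(x) for f_t(x) = t*c*x + b(x)
    return t * c - 1.0 / x + 1.0 / (1.0 - x)

def hess(x):                        # f_t''(x) = b''(x)  (the c-term is linear)
    return 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2

x, t = 0.5, 1.0                     # start near the analytic center
while 2.0 * M / t > eps:            # stop once the gap bound 2M/t is small enough
    while True:                     # damped Newton until the closeness condition holds
        g, h = grad(x, t), hess(x)
        lam = abs(g) / math.sqrt(h) # Newton decrement = sqrt(g * h^{-1} * g)
        if lam <= L:
            break
        x -= (g / h) / (1.0 + lam)  # damped step keeps x strictly inside (0, 1)
    t *= mu

print(x)  # close to the optimum x* = 0
```

The damped step length 1/(1+λ) is what guarantees the iterate never leaves the domain of the barrier, so no explicit feasibility check is needed.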
To initialize the path-following methods, we need a point in the relative interior of the feasible region G. In other words: if G is defined by the inequalities g_i(x) ≤ 0, then we need some x for which g_i(x) < 0 for all i in 1,...,m. If we do not have such a point, we need to find one using a so-called phase I method.[4]: 11.4 A simple phase-I method is to solve the following convex program:minimizessubject togi(x)≤sfori=1,…,m{\displaystyle {\begin{aligned}{\text{minimize}}\quad &s\\{\text{subject to}}\quad &g_{i}(x)\leq s{\text{ for }}i=1,\dots ,m\end{aligned}}}Denote the optimal solution by x*, s*.
For this program it is easy to get an interior point: we can take, for example, x = 0, and take s to be any number larger than max(g_1(0),...,g_m(0)). Therefore, it can be solved using interior-point methods. However, the run-time is proportional to log(1/s*). As s* approaches 0, it becomes harder and harder to find an exact solution to the phase-I problem, and thus harder to decide whether the original problem is feasible.
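The phase-I construction can be sketched as follows; the constraint functions here are made-up examples, and the point x = 0 is simply the assumed starting guess from the text:

```python
# Illustrative constraints (assumed): the feasible set is {x : g_i(x) <= 0}.
constraints = [
    lambda x: x[0] + x[1] - 1.0,   # g1(x) = x1 + x2 - 1
    lambda x: -x[0],               # g2(x) = -x1
    lambda x: x[1] - 2.0,          # g3(x) = x2 - 2
]

x0 = [0.0, 0.0]                    # an arbitrary starting point
# Any s larger than max_i g_i(x0) makes (x0, s) strictly feasible for the
# phase-I program  min s  s.t.  g_i(x) <= s,  so an interior-point method
# can be started from it.
s0 = max(g(x0) for g in constraints) + 1.0
print(s0)
```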
The theoretic guarantees assume that the penalty parameter is increased at the rate μ=(1+r/M){\displaystyle \mu =\left(1+r/{\sqrt {M}}\right)}, so the worst-case number of required Newton steps is O(M){\displaystyle O({\sqrt {M}})}. In theory, if μ is larger (e.g. 2 or more), then the worst-case number of required Newton steps is in O(M){\displaystyle O(M)}. However, in practice, a larger μ leads to much faster convergence. These methods are called long-step methods.[3]: Sec.4.6 In practice, if μ is between 3 and 100, then the program converges within 20–40 Newton steps, regardless of the number of constraints (though the runtime of each Newton step of course grows with the number of constraints). The exact value of μ within this range has little effect on the performance.[4]: chpt.11
For potential-reduction methods, the problem is presented in the conic form:[3]: Sec.5
minimize c^T x   s.t.   x ∈ (b + L) ∩ K,
where b is a vector in R^n, L is a linear subspace in R^n (so b + L is an affine plane), and K is a closed pointed convex cone with a nonempty interior. Every convex program can be converted to the conic form. To use the potential-reduction method (specifically, the extension of Karmarkar's algorithm to convex programming), we need the following assumptions:[3]: Sec.6
Assumptions A, B and D are needed in most interior-point methods. Assumption C is specific to Karmarkar's approach; it can be alleviated by using a "sliding objective value". It is possible to further reduce the program to the Karmarkar format:
minimize s^T x   s.t.   x ∈ M ∩ K and e^T x = 1
where M is a linear subspace of R^n, and the optimal objective value is 0.
The method is based on the following scalar potential function:
v(x) = F(x) + M ln(s^T x)
where F is the M-self-concordant barrier for the feasible cone. It is possible to prove that, when x is strictly feasible and v(x) is very small (i.e., very negative), x is approximately optimal. The idea of the potential-reduction method is to modify x such that the potential at each iteration drops by at least a fixed constant X (specifically, X = 1/3 − ln(4/3)). This implies that, after i iterations, the difference between the objective value and the optimal objective value is at most V·exp(−iX/M), where V is a data-dependent constant. Therefore, the number of Newton steps required for an ε-approximate solution is at most O(1)⋅M⋅ln(Vε+1)+1{\displaystyle O(1)\cdot M\cdot \ln \left({\frac {V}{\varepsilon }}+1\right)+1}.
Note that in path-following methods the corresponding factor is M{\displaystyle {\sqrt {M}}} rather than M, which is better in theory. But in practice, Karmarkar's method allows taking much larger steps towards the goal, so it may converge much faster than the theoretical guarantees suggest.
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization.[9][10] For simplicity, consider the following nonlinear optimization problem with inequality constraints:
minimizef(x)subject tox∈Rn,ci(x)≥0fori=1,…,m,wheref:Rn→R,ci:Rn→R.(1){\displaystyle {\begin{aligned}\operatorname {minimize} \quad &f(x)\\{\text{subject to}}\quad &x\in \mathbb {R} ^{n},\\&c_{i}(x)\geq 0{\text{ for }}i=1,\ldots ,m,\\{\text{where}}\quad &f:\mathbb {R} ^{n}\to \mathbb {R} ,\ c_{i}:\mathbb {R} ^{n}\to \mathbb {R} .\end{aligned}}\quad (1)}
This inequality-constrained optimization problem is solved by converting it into an unconstrained objective function whose minimum we hope to find efficiently.
Specifically, the logarithmic barrier function associated with (1) is B(x,μ)=f(x)−μ∑i=1mlog(ci(x)).(2){\displaystyle B(x,\mu )=f(x)-\mu \sum _{i=1}^{m}\log(c_{i}(x)).\quad (2)}
Hereμ{\displaystyle \mu }is a small positive scalar, sometimes called the "barrier parameter". Asμ{\displaystyle \mu }converges to zero the minimum ofB(x,μ){\displaystyle B(x,\mu )}should converge to a solution of (1).
The gradient of a differentiable function h:Rn→R{\displaystyle h:\mathbb {R} ^{n}\to \mathbb {R} } is denoted ∇h{\displaystyle \nabla h}.
The gradient of the barrier function is∇B(x,μ)=∇f(x)−μ∑i=1m1ci(x)∇ci(x).(3){\displaystyle \nabla B(x,\mu )=\nabla f(x)-\mu \sum _{i=1}^{m}{\frac {1}{c_{i}(x)}}\nabla c_{i}(x).\quad (3)}
In addition to the original ("primal") variable x{\displaystyle x}, we introduce a Lagrange multiplier-inspired dual variable λ∈Rm{\displaystyle \lambda \in \mathbb {R} ^{m}} satisfying ci(x)λi=μ,∀i=1,…,m.(4){\displaystyle c_{i}(x)\lambda _{i}=\mu ,\quad \forall i=1,\ldots ,m.\quad (4)}
Equation (4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in KKT conditions.
We try to find those (xμ,λμ){\displaystyle (x_{\mu },\lambda _{\mu })} for which the gradient of the barrier function is zero.
Substituting 1/ci(x)=λi/μ{\displaystyle 1/c_{i}(x)=\lambda _{i}/\mu } from (4) into (3), we get an equation for the gradient: ∇B(xμ,λμ)=∇f(xμ)−J(xμ)Tλμ=0,(5){\displaystyle \nabla B(x_{\mu },\lambda _{\mu })=\nabla f(x_{\mu })-J(x_{\mu })^{T}\lambda _{\mu }=0,\quad (5)} where the matrix J{\displaystyle J} is the Jacobian of the constraints c(x){\displaystyle c(x)}.
The intuition behind (5) is that the gradient of f(x){\displaystyle f(x)} should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" condition (4) with small μ{\displaystyle \mu } can be understood as requiring that the solution either lie near the boundary ci(x)=0{\displaystyle c_{i}(x)=0}, or that the projection of the gradient ∇f{\displaystyle \nabla f} onto the normal of the constraint component ci(x){\displaystyle c_{i}(x)} be almost zero.
Let (px,pλ){\displaystyle (p_{x},p_{\lambda })} be the search direction for iteratively updating (x,λ){\displaystyle (x,\lambda )}.
Applying Newton's method to (4) and (5), we get an equation for (px,pλ){\displaystyle (p_{x},p_{\lambda })}: (H(x,λ)−J(x)Tdiag(λ)J(x)diag(c(x)))(pxpλ)=(−∇f(x)+J(x)Tλμ1−diag(c(x))λ),{\displaystyle {\begin{pmatrix}H(x,\lambda )&-J(x)^{T}\\\operatorname {diag} (\lambda )J(x)&\operatorname {diag} (c(x))\end{pmatrix}}{\begin{pmatrix}p_{x}\\p_{\lambda }\end{pmatrix}}={\begin{pmatrix}-\nabla f(x)+J(x)^{T}\lambda \\\mu 1-\operatorname {diag} (c(x))\lambda \end{pmatrix}},}
whereH{\displaystyle H}is theHessian matrixofB(x,μ){\displaystyle B(x,\mu )},diag(λ){\displaystyle \operatorname {diag} (\lambda )}is adiagonal matrixofλ{\displaystyle \lambda }, anddiag(c(x)){\displaystyle \operatorname {diag} (c(x))}is the diagonal matrix ofc(x){\displaystyle c(x)}.
Because of (1) and (4), the condition c(x) > 0, λ > 0
should be enforced at each step. This can be done by choosing an appropriate step length α{\displaystyle \alpha } for the update (x, λ) → (x + αp_x, λ + αp_λ).
Here are some special cases of convex programs that can be solved efficiently by interior-point methods.[3]: Sec.10
Consider a linear program of the form:minimizec⊤xsubject toAx≤b..{\displaystyle {\begin{aligned}\operatorname {minimize} \quad &c^{\top }x\\{\text{subject to}}\quad &Ax\leq b.\end{aligned}}.}We can apply path-following methods with the barrier b(x):=−∑j=1mln(bj−ajTx).{\displaystyle b(x):=-\sum _{j=1}^{m}\ln(b_{j}-a_{j}^{T}x).} The function b{\displaystyle b} is self-concordant with parameter M = m (the number of constraints). Therefore, the path-following method needs O(m^{1/2}) Newton steps, each taking O(mn^2) arithmetic operations, and the total runtime complexity is O(m^{3/2}n^2).
Given a quadratically constrained quadratic program of the form:minimized⊤xsubject tofj(x):=x⊤Ajx+bj⊤x+cj≤0for allj=1,…,m,{\displaystyle {\begin{aligned}\operatorname {minimize} \quad &d^{\top }x\\{\text{subject to}}\quad &f_{j}(x):=x^{\top }A_{j}x+b_{j}^{\top }x+c_{j}\leq 0\quad {\text{ for all }}j=1,\dots ,m,\end{aligned}}}where all matrices A_j are positive semidefinite.
We can apply path-following methods with the barrier b(x):=−∑j=1mln(−fj(x)).{\displaystyle b(x):=-\sum _{j=1}^{m}\ln(-f_{j}(x)).} The function b{\displaystyle b} is a self-concordant barrier with parameter M = m. The Newton complexity is O((m+n)n^2), and the total runtime complexity is O(m^{1/2}(m+n)n^2).
Consider a problem of the formminimize∑j|vj−uj⊤x|p,{\displaystyle {\begin{aligned}\operatorname {minimize} \quad &\sum _{j}|v_{j}-u_{j}^{\top }x|_{p}\end{aligned}},}where each uj{\displaystyle u_{j}} is a vector, each vj{\displaystyle v_{j}} is a scalar, and |⋅|p{\displaystyle |\cdot |_{p}} is an Lp norm with 1<p<∞.{\displaystyle 1<p<\infty .} After converting to the standard form, we can apply path-following methods with a self-concordant barrier with parameter M = 4m. The Newton complexity is O((m+n)n^2), and the total runtime complexity is O(m^{1/2}(m+n)n^2).
Consider the problem
minimizef0(x):=∑i=1kci0exp(ai⊤x)subject tofj(x):=∑i=1kcijexp(ai⊤x)≤djfor allj=1,…,m.{\displaystyle {\begin{aligned}\operatorname {minimize} \quad &f_{0}(x):=\sum _{i=1}^{k}c_{i0}\exp(a_{i}^{\top }x)\\{\text{subject to}}\quad &f_{j}(x):=\sum _{i=1}^{k}c_{ij}\exp(a_{i}^{\top }x)\leq d_{j}\quad {\text{ for all }}j=1,\dots ,m.\end{aligned}}}
There is a self-concordant barrier with parameter 2k + m. The path-following method has Newton complexity O(mk^2 + k^3 + n^3) and total complexity O((k+m)^{1/2}[mk^2 + k^3 + n^3]).
Interior point methods can be used to solve semidefinite programs.[3]: Sec.11
|
https://en.wikipedia.org/wiki/Interior-point_method
|
In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality constraint. A non-negativity constraint on the slack variable is also added.[1]: 131
Slack variables are used in particular in linear programming. As with the other variables in the augmented constraints, the slack variable cannot take on negative values, as the simplex algorithm requires them to be zero or positive.[2]
Slack variables are also used in the Big M method.
By introducing the slack variables≥0{\displaystyle \mathbf {s} \geq \mathbf {0} }, the inequalityAx≤b{\displaystyle \mathbf {A} \mathbf {x} \leq \mathbf {b} }can be converted to the equationAx+s=b{\displaystyle \mathbf {A} \mathbf {x} +\mathbf {s} =\mathbf {b} }.
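The conversion can be sketched as follows, with illustrative data for A, b and a candidate x:

```python
# Illustrative data (assumed) for the system Ax <= b.
A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [10.0, 12.0]
x = [2.0, 3.0]                     # a candidate point

# The slack of each row is s_i = b_i - a_i . x; then Ax + s = b exactly,
# and x satisfies Ax <= b if and only if every slack is non-negative.
s = [bi - sum(aij * xj for aij, xj in zip(row, x))
     for row, bi in zip(A, b)]
print(s)
```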
Slack variables give an embedding of a polytope P↪(R≥0)f{\displaystyle P\hookrightarrow (\mathbf {R} _{\geq 0})^{f}} into the standard f-orthant, where f{\displaystyle f} is the number of constraints (facets of the polytope). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized), and is expressed in terms of the constraints (linear functionals, covectors).
Slack variables are dual to generalized barycentric coordinates: whereas generalized barycentric coordinates are not unique but can all be realized, slack variables are uniquely determined but cannot all be realized.
Dually, generalized barycentric coordinates express a polytope with n{\displaystyle n} vertices (dual to facets), regardless of dimension, as the image of the standard (n−1){\displaystyle (n-1)}-simplex, which has n{\displaystyle n} vertices – the map is onto: Δn−1↠P,{\displaystyle \Delta ^{n-1}\twoheadrightarrow P,} and expresses points in terms of the vertices (points, vectors). The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; otherwise, a point may not have unique generalized barycentric coordinates.
|
https://en.wikipedia.org/wiki/Slack_variable
|
In mathematics, Slater's condition (or Slater condition) is a sufficient condition for strong duality to hold for a convex optimization problem, named after Morton L. Slater.[1] Informally, Slater's condition states that the feasible region must have an interior point (see technical details below).
Slater's condition is a specific example of a constraint qualification.[2] In particular, if Slater's condition holds for the primal problem, then the duality gap is 0, and if the dual value is finite then it is attained.
Let f1,…,fm{\displaystyle f_{1},\ldots ,f_{m}} be real-valued functions on some subset D{\displaystyle D} of Rn{\displaystyle \mathbb {R} ^{n}}. We say that the functions satisfy the Slater condition if there exists some x{\displaystyle x} in the relative interior of D{\displaystyle D}, for which fi(x)<0{\displaystyle f_{i}(x)<0} for all i{\displaystyle i} in 1,…,m{\displaystyle 1,\ldots ,m}. We say that the functions satisfy the relaxed Slater condition if:[3]
Consider the optimization problem
where f0,…,fm{\displaystyle f_{0},\ldots ,f_{m}} are convex functions. This is an instance of convex programming. Slater's condition for convex programming states that there exists an x∗{\displaystyle x^{*}} that is strictly feasible, that is, all m constraints are satisfied, with the nonlinear constraints satisfied with strict inequalities.
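Slater's condition can be checked numerically on a toy example; the constraints below (a unit disc intersected with a half-plane) and the helper name are our assumptions:

```python
# Assumed toy constraints: f1(x) = x1^2 + x2^2 - 1 <= 0  (unit disc),
#                          f2(x) = x1 - 0.5 <= 0         (half-plane).
fs = [
    lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
    lambda x: x[0] - 0.5,
]

def is_slater_point(x):
    # Slater's condition asks for a point where every constraint
    # holds with strict inequality.
    return all(f(x) < 0 for f in fs)

print(is_slater_point([0.0, 0.0]))   # strictly feasible
print(is_slater_point([0.5, 0.0]))   # f2 is active here, so not a Slater point
```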
If a convex program satisfies Slater's condition (or the relaxed condition), and it is bounded from below, then strong duality holds. Mathematically, this states that strong duality holds if there exists an x∗∈relint(D){\displaystyle x^{*}\in \operatorname {relint} (D)} (where relint denotes the relative interior of the convex set D:=∩i=0mdom(fi){\displaystyle D:=\cap _{i=0}^{m}\operatorname {dom} (f_{i})}) such that
Given the problem
where f0{\displaystyle f_{0}} is convex and fi{\displaystyle f_{i}} is Ki{\displaystyle K_{i}}-convex for each i{\displaystyle i}. Then Slater's condition says that if there exists an x∗∈relint(D){\displaystyle x^{*}\in \operatorname {relint} (D)} such that
then strong duality holds.[4]
|
https://en.wikipedia.org/wiki/Slater%27s_condition
|
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:[1][2]
The formal mathematical (standard form) statement of the design optimization problem is[3]
minimizef(x)subjecttohi(x)=0,i=1,…,m1gj(x)≤0,j=1,…,m2andx∈X⊆Rn{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h_{i}(x)=0,\quad i=1,\dots ,m_{1}\\&&&g_{j}(x)\leq 0,\quad j=1,\dots ,m_{2}\\&\operatorname {and} &&x\in X\subseteq R^{n}\end{aligned}}}
where
The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem.
We can introduce the vector-valued functions
h=(h1,h2,…,hm1)andg=(g1,g2,…,gm2){\displaystyle {\begin{aligned}&&&{h=(h_{1},h_{2},\dots ,h_{m1})}\\\operatorname {and} \\&&&{g=(g_{1},g_{2},\dots ,g_{m2})}\end{aligned}}}
to rewrite the above statement in the compact expression
minimizef(x)subjecttoh(x)=0,g(x)≤0,x∈X⊆Rn{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h(x)=0,\quad g(x)\leq 0,\quad x\in X\subseteq R^{n}\\\end{aligned}}}
We call h,g{\displaystyle h,g} the set or system of (functional) constraints and X{\displaystyle X} the set constraint.
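The negative null form can be mirrored directly as a small feasibility check; the function name and the example constraints below are ours, chosen only for illustration:

```python
# Negative null form as data (names assumed): equalities h(x) = 0 and
# inequalities g(x) <= 0, each with zero on the right-hand side.
def feasible(x, h, g, tol=1e-9):
    """Check a candidate design x against the constraint system."""
    return (all(abs(hi(x)) <= tol for hi in h)
            and all(gi(x) <= tol for gi in g))

# Illustrative constraints: x1 + x2 = 1 and x1 >= 0.
h = [lambda x: x[0] + x[1] - 1.0]
g = [lambda x: -x[0]]
print(feasible([0.25, 0.75], h, g))
print(feasible([2.0, 0.0], h, g))
```

Expressing every constraint with zero on the right-hand side is exactly what lets a generic solver consume the lists h and g without problem-specific bookkeeping.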
Design optimization applies the methods of mathematical optimization to design problem formulations, and it is sometimes used interchangeably with the term engineering optimization. When the objective function f is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum.
Optimization Checklist[2]
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design.
Practical design optimization problems are typically solved numerically, and many optimization software packages exist in academic and commercial forms.[4] There are several domain-specific applications of design optimization posing their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
One modern application of design optimization is structural design optimization (SDO) in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives. These advancements aim to optimize the configuration and dimensions of structures to augment strength, minimize material usage, reduce costs, enhance energy efficiency, improve sustainability, and meet several other performance criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human errors, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM) technology, machine learning (ML), and artificial intelligence (AI), as well as integrating finite element analysis (FEA) with simulation tools.[5]
|
https://en.wikipedia.org/wiki/Design_Optimization
|
In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.
A functional problem P{\displaystyle P} is defined by a relation R{\displaystyle R} over strings of an arbitrary alphabet Σ{\displaystyle \Sigma }:
An algorithm solves P{\displaystyle P} if for every input x{\displaystyle x} such that there exists a y{\displaystyle y} satisfying (x,y)∈R{\displaystyle (x,y)\in R}, the algorithm produces one such y{\displaystyle y}, and if there are no such y{\displaystyle y}, it rejects.
A promise function problem is allowed to do anything (thus may not terminate) if no such y{\displaystyle y} exists.
A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows:
In this case the relation R{\displaystyle R} is given by tuples of suitably encoded boolean formulas and satisfying assignments.
While a SAT algorithm, fed with a formula φ{\displaystyle \varphi }, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case.
Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors.
Consider an arbitrary decision problem L{\displaystyle L} in the class NP. By the definition of NP, each problem instance x{\displaystyle x} that is answered 'yes' has a polynomial-size certificate y{\displaystyle y} which serves as a proof for the 'yes' answer. Thus, the set of these tuples (x,y){\displaystyle (x,y)} forms a relation, representing the function problem "given x{\displaystyle x} in L{\displaystyle L}, find a certificate y{\displaystyle y} for x{\displaystyle x}". This function problem is called the function variant of L{\displaystyle L}; it belongs to the class FNP.
FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) verified, but not necessarily efficiently found. In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time.
Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: An algorithm can first ask whether the formula φ{\displaystyle \varphi } is satisfiable. After that the algorithm can fix variable x1{\displaystyle x_{1}} to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keeps x1{\displaystyle x_{1}} fixed to TRUE and continues to fix x2{\displaystyle x_{2}}; otherwise it decides that x1{\displaystyle x_{1}} has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy),[1] while the integer factorization problem is believed to be hard for a classical computer.
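The oracle-based procedure above can be sketched directly. The DIMACS-style clause encoding and the brute-force oracle are our assumptions; any SAT decision procedure could play the oracle's role:

```python
from itertools import product

# Clause encoding (assumed, DIMACS-style): a clause is a list of non-zero
# ints, where k means variable x_k and -k means its negation.
def sat_oracle(clauses, n):
    """Brute-force SAT decision; stands in for any SAT-deciding oracle."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
        for bits in product([False, True], repeat=n)
    )

def fsat(clauses, n):
    """Self-reduction: extract an assignment with at most n + 1 oracle calls."""
    if not sat_oracle(clauses, n):
        return None                      # reject unsatisfiable formulas
    assignment = []
    for v in range(1, n + 1):
        if sat_oracle(clauses + [[v]], n):
            clauses = clauses + [[v]]    # fixing x_v = TRUE keeps it satisfiable
            assignment.append(True)
        else:
            clauses = clauses + [[-v]]   # otherwise x_v must be FALSE
            assignment.append(False)
    return assignment

# (x1 or x2) and (not x1 or x2): x2 is forced to TRUE
print(fsat([[1, 2], [-1, 2]], 2))
```

Replacing the brute-force oracle with a polynomial-time SAT decider (if one existed) would make `fsat` polynomial as well, which is exactly the self-reducibility argument.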
There are several (slightly different) notions of self-reducibility.[2][3][4]
Function problems can be reduced much like decision problems: Given function problems ΠR{\displaystyle \Pi _{R}} and ΠS{\displaystyle \Pi _{S}}, we say that ΠR{\displaystyle \Pi _{R}} reduces to ΠS{\displaystyle \Pi _{S}} if there exist polynomial-time computable functions f{\displaystyle f} and g{\displaystyle g} such that for all instances x{\displaystyle x} of R{\displaystyle R} and possible solutions y{\displaystyle y} of S{\displaystyle S}, it holds that
It is therefore possible to define FNP-complete problems analogous to the NP-complete problems:
A problem ΠR{\displaystyle \Pi _{R}} is FNP-complete if every problem in FNP can be reduced to ΠR{\displaystyle \Pi _{R}}. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that P=NP{\displaystyle \mathbf {P} =\mathbf {NP} } if and only if FP=FNP{\displaystyle \mathbf {FP} =\mathbf {FNP} }.
The relation R(x,y){\displaystyle R(x,y)} used to define function problems has the drawback of being incomplete: Not every input x{\displaystyle x} has a counterpart y{\displaystyle y} such that (x,y)∈R{\displaystyle (x,y)\in R}. Therefore the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations, yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem, it follows that NP=co-NP{\displaystyle \mathbf {NP} ={\textbf {co-NP}}}.
|
https://en.wikipedia.org/wiki/Function_problem
|
In operations research, the glove problem[1] (also known as the condom problem[2]) is an optimization problem used as an example that the cheapest capital cost often leads to a dramatic increase in operational time, but that the shortest operational time need not be given by the most expensive capital cost.[3]
M doctors are each to examine each of N patients, wearing gloves to avoid contamination. Each glove can be used any number of times, but the same side of one glove cannot be exposed to more than one person. Gloves can be re-used any number of times, and more than one can be used simultaneously.
Given M doctors and N patients, the minimum number of gloves G(M,N) required for all the doctors to examine all the patients is given by:
A naive approach would be to estimate the number of gloves as simply G(M,N) = MN. But this number can be significantly reduced by exploiting the fact that each glove has two sides, and it is not necessary to use both sides simultaneously.
A better solution can be found by assigning each person his or her own glove, which is to be used for the entire operation. Every pairwise encounter is then protected by a double layer. Note that the outer surface of the doctor's gloves meets only the inner surface of the patient's gloves. This gives an answer of M + N gloves, which is significantly lower than MN.
The makespan with this scheme is K · max(M, N), where K is the duration of one pairwise encounter. Note that this is exactly the same makespan as if MN gloves were used. Clearly in this case, increasing capital cost has not produced a shorter operation time.
The number G(M,N) may be refined further by allowing asymmetry in the initial distribution of gloves. The best scheme is given by:
This scheme uses (1 · N) + ((M − 1 − 1) · 1) + (1 · 0) = M + N − 2 gloves. This number cannot be reduced further.
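The glove counts of the three schemes discussed above can be tabulated with a small helper; the function name is ours, and the formulas are those given in the text, assuming M, N ≥ 2 so the refined scheme applies:

```python
# Glove counts for the three schemes above (helper name assumed; M, N >= 2).
def glove_counts(M, N):
    naive = M * N            # a fresh glove per encounter
    symmetric = M + N        # one glove per person, double layer everywhere
    refined = M + N - 2      # asymmetric initial distribution
    return naive, symmetric, refined

print(glove_counts(3, 4))
```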
The makespan is then given by:
Makespan: K · (2N + max(M − 2, N)).
Clearly, the minimum G(M,N) increases the makespan significantly, sometimes by a factor of 3, while the saving in the number of gloves is only 2 units.
One or the other solution may be preferred depending on the relative cost of a glove judged against the longer operation time. In theory, the intermediate solution with (M + N − 1) gloves should also occur as a candidate solution, but this requires such narrow windows on M, N and the cost parameters to be optimal that it is often ignored.
The statement of the problem does not make it clear that the principle of contagion applies, i.e. if the inside of one glove has been touched by the outside of another that previously touched some person, then that inside also counts as touched by that person.
Also, medical gloves are reversible; therefore a better solution exists, which uses
gloves where the less numerous group are equipped with a glove each, and the more numerous in pairs. The first of each pair uses a clean interface; the second reverses the glove.
|
https://en.wikipedia.org/wiki/Glove_problem
|
Operations research (British English: operational research) (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve management and decision-making.[1][2] Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis.
Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.[3]
Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process.[4] Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem).
The major sub-disciplines in modern operational research, as identified by the journal Operations Research[5] and The Journal of the Operational Research Society,[6] include:
In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research.[3]
In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead.[7] Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge.[8] Beginning in the 20th century, the study of inventory management could be considered the origin of modern operations research, with economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences.[9]
Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe, and Robert Watson-Watt.[10] Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.[11]
Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig), looked for ways to make better decisions in such areas as logistics and training schedules.
The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control".[12] Other names for it included operational analysis (UK Ministry of Defence from 1962)[13] and quantitative management.[14]
During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army.[15]
Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment (RAE), he set up a team known as the "Circus", which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941.[16]
In 1941, Blackett moved from the RAE to the Navy, working first with RAF Coastal Command and then, early in 1942, with the Admiralty.[17] Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel Prize winners and many other people who went on to be pre-eminent in their fields.[18][19] They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than on the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.[20]
While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command, they were painted black for night-time operations. At the suggestion of CC-ORS, a test was run to see whether that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change implied that 30% more submarines would be attacked and sunk for the same number of sightings.[21] As a result of these findings, Coastal Command changed its aircraft to white undersurfaces.
Other work by the CC-ORS indicated that, on average, if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target, then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target, it had time to alter course under water, so the chance of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface, when the targets' locations were better known, than to attempt their destruction at greater depths, when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed that "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".[22]
Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted, and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted, because the fact that the aircraft were able to return with these areas damaged indicated that the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. The team's suggestion to remove some of the crew, so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers that returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain: the areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft.[23] This story has been disputed,[24] with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University,[25] the result of work done by Abraham Wald.[26]
When Germany organized its air defences into the Kammhuber Line, the British realized that if RAF bombers flew in a bomber stream they could overwhelm the night fighters, who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses.[27]
The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60mineslaid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes.[28]
Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Mariana Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; and revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and that a smooth paint finish increased airspeed by reducing skin friction.[28]
On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) landed in Normandy in 1944 and followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing, and anti-tank shooting.
In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR:
With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational matters but was extended to encompass equipment procurement, training, logistics, and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The simplex algorithm for linear programming was developed in 1947.[29]
In the 1950s, the term operations research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queueing theory, simulation, and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and the Institute for Management Science (TIMS) in 1953.[30] Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming.
In the 1950s and 1960s, chairs of operations research were established in the management faculties of universities in the U.S. and the United Kingdom (from 1964 in Lancaster); further influences from the U.S. on the development of operations research in Western Europe can be traced here. Authoritative OR textbooks from the U.S. were published in German in Germany and in French in France, such as George Dantzig's "Linear Programming" (1963) and C. West Churchman et al.'s "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, opening operations research to Latin American readers. NATO gave important impetus to the spread of operations research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s (the one in 1956 had 120 participants), bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966.
With the development of computers over the following decades, operations research can now solve problems with hundreds of thousands of variables and constraints; moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently.[29] Much of operations research (known today as 'analytics') relies upon stochastic variables and therefore on access to truly random numbers. The cybernetics field required the same level of randomness, and the development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimization across industry and the economy, and counterterrorism planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized as a collection of mathematical models lacking an empirical basis of data collection for applications; how to collect data is not presented in the textbooks, and because of this lack of data, the textbooks also contain no computer applications.[31]
Operational research is also used extensively in government where evidence-based policy is used.
The field of management science (MS) applies operations research models in business.[34] Stafford Beer characterized this in 1967.[35] Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links to economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods, including mathematical modeling, statistics, and numerical algorithms, to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.
The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups.
Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.[36]
Some of the fields that have considerable overlap with Operations Research and Management Science include:[37]
Applications are abundant, such as in airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes:[36]
Management is also concerned with so-called soft operational analysis, which concerns methods for strategic planning, strategic decision support, and problem structuring.
In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, over the past several decades, a number of non-quantified modeling methods have been developed. These include:
The International Federation of Operational Research Societies (IFORS)[39] is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies, including those in the US,[40] UK,[41] France,[42] Germany, Italy,[43] Canada,[44] Australia,[45] New Zealand,[46] the Philippines,[47] India,[48] Japan, and South Africa.[49] The foundation of IFORS in 1960 was of decisive importance for the institutionalization of operations research, stimulating the foundation of national OR societies in Austria, Switzerland, and Germany. IFORS has held major international conferences every three years since 1957.[50] The constituent members of IFORS form regional groups, such as the Association of European Operational Research Societies (EURO) in Europe.[51] Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO)[52] and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).[53]
In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better,[54] which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR.[55]
The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class according to the 2005 Journal Citation Reports.[56] They are:
These are listed in alphabetical order of their titles.
https://en.wikipedia.org/wiki/Operations_research
Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met, without necessarily maximizing any specific objective.[1] The term satisficing, a portmanteau of satisfy and suffice,[2] was introduced by Herbert A. Simon in 1956,[3][4] although the concept was first posited in his 1947 book Administrative Behavior.[5][6] Simon used satisficing to explain the behavior of decision makers under circumstances in which an optimal solution cannot be determined. He maintained that many natural problems are characterized by computational intractability or a lack of information, both of which preclude the use of mathematical optimization procedures. He observed in his Nobel Prize in Economics speech that "decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist in the world of management science".[7]
Simon formulated the concept within a novel approach to rationality, which posits that rational choice theory is an unrealistic description of human decision processes and calls for psychological realism. He referred to this approach as bounded rationality. Moral satisficing is a branch of bounded rationality that views moral behavior as based on pragmatic social heuristics rather than on moral rules or optimization principles. These heuristics are neither good nor bad per se, but only in relation to the environments in which they are used.[8] Some consequentialist theories in moral philosophy use the concept of satisficing in a similar sense, though most call for optimization instead.
Two traditions of satisficing exist in decision-making research: Simon's program of studying how individuals or institutions rely on heuristic solutions in the real world, and the program of finding optimal solutions to problems simplified by convenient mathematical assumptions (so that optimization is possible).[9]
Heuristic satisficing refers to the use of aspiration levels when choosing from different paths of action. By this account, decision-makers select the first option that meets a given need or select the option that seems to address most needs rather than the "optimal" solution. The basic model of aspiration-level adaptation is as follows:[10]
Step 1: Set an aspiration level α.
Step 2: Choose the first option that meets or exceeds α.
Step 3: If no option has satisfied α after time β, then change α by an amount γ and continue until a satisfying option is found.
Example: Consider pricing commodities. An analysis of 628 used car dealers showed that 97% relied on a form of satisficing.[11] Most set the initial price α in the middle of the price range of comparable cars and lowered the price by about 3% (γ) if the car was not sold after 24 days (β). A minority (19%), mostly smaller dealerships, set a low initial price and kept it unchanged (no Step 3). The car dealers adapted the parameters to their business environment; for instance, they decreased the waiting time β by about 3% for each additional competitor in the area.
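The three-step aspiration-level model above can be sketched in code. This is a minimal illustrative sketch, not taken from the literature: the stream of option values, the parameter names, and the rule of lowering α after every β rejected options are assumptions made for the example.

```python
def satisfice(options, alpha, beta, gamma):
    """Aspiration-level adaptation (Steps 1-3 above):
    accept the first option whose value meets the aspiration level alpha;
    after every beta rejected options, lower alpha by gamma."""
    for t, value in enumerate(options, start=1):
        if value >= alpha:          # Step 2: option meets the aspiration level
            return value
        if t % beta == 0:           # Step 3: no success after beta tries...
            alpha -= gamma          # ...so relax the aspiration level
    return None                     # no satisfying option was found
```

For instance, `satisfice([5, 6, 10], alpha=8, beta=2, gamma=3)` rejects 5 and 6, lowers α to 5, and then accepts 10.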
Note that aspiration-level adaptation is a process model of actual behavior rather than an as-if optimization model, and accordingly requires an analysis of how people actually make decisions. It allows for predicting surprising effects such as the "cheap twin paradox", where two similar cars have substantially different price tags in the same dealership.[4] The reason is that one car entered the dealership earlier and had at least one change in price at the time the second car arrived.
A crucial determinant of a satisficing decision strategy concerns the construction of the aspiration level. In many circumstances, the individual may be uncertain about the aspiration level.
Another key issue concerns the evaluation of satisficing strategies. Although often regarded as an inferior decision strategy, specific satisficing strategies for inference have been shown to be ecologically rational; that is, in particular decision environments they can outperform alternative decision strategies.[15]
Satisficing also occurs in consensus building when the group looks towards a solution everyone can agree on even if it may not be the best.
One popular way of rationalizing satisficing is to view it as optimization when all costs, including the cost of the optimization calculations themselves and the cost of getting information for use in those calculations, are considered. As a result, the eventual choice is usually sub-optimal with regard to the main goal of the optimization, i.e., different from the optimum in the case that the costs of choosing are not taken into account.
Alternatively, satisficing can be considered to be just constraint satisfaction: the process of finding a solution satisfying a set of constraints, without concern for finding an optimum. Any such satisficing problem can be formulated as an (equivalent) optimization problem using the indicator function of the satisficing requirements as an objective function. More formally, if $X$ denotes the set of all options and $S\subseteq X$ denotes the set of "satisficing" options, then selecting a satisficing solution (an element of $S$) is equivalent to the following optimization problem: $\max_{s\in X} I_S(s)$, where $I_S$ denotes the indicator function of $S$, that is $I_S(s)=1$ if $s\in S$ and $I_S(s)=0$ otherwise.
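This reformulation can be made concrete with a small sketch; the finite option set, the satisficing set, and the brute-force maximization below are illustrative assumptions.

```python
def indicator(S):
    """Indicator function I_S: 1 on the satisficing set S, 0 elsewhere."""
    return lambda s: 1 if s in S else 0

def optimize_indicator(X, S):
    """Maximize I_S over X: any maximizer is a satisficing option,
    provided S is non-empty."""
    I_S = indicator(S)
    return max(X, key=I_S)
```

With `X = range(10)` and `S = {3, 7}`, `optimize_indicator(X, S)` returns an element of `S`, i.e. the "optimization" here does nothing more than find a satisficing option.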
A solutions∈Xto this optimization problem is optimal if, and only if, it is a satisficing option (an element ofS). Thus, from a decision theory point of view, the distinction between "optimizing" and "satisficing" is essentially a stylistic issue (that can nevertheless be very important in certain applications) rather than a substantive issue. What is important to determine iswhatshould be optimized andwhatshould be satisficed. The following quote from Jan Odhnoff's 1965 paper is appropriate:[16]
In my opinion there is room for both 'optimizing' and 'satisficing' models in business economics. Unfortunately, the difference between 'optimizing' and 'satisficing' is often referred to as a difference in the quality of a certain choice. It is a triviality that an optimal result in an optimization can be an unsatisfactory result in a satisficing model. The best things would therefore be to avoid a general use of these two words.
In economics, satisficing is a behavior which attempts to achieve at least some minimum level of a particular variable, but which does not necessarily maximize its value.[17] The most common application of the concept in economics is in the behavioral theory of the firm, which, unlike traditional accounts, postulates that producers treat profit not as a goal to be maximized, but as a constraint. Under these theories, a critical level of profit must be achieved by firms; thereafter, priority is attached to the attainment of other goals.
More formally, as before, let $X$ denote the set of all options $s$, and let $U(s)$ be the payoff function giving the payoff enjoyed by the agent for each option. Define the optimum payoff $U^{*}$ as the solution to $U^{*}=\max_{s\in X}U(s)$, with the optimum actions being the set $O$ of options such that $U(s^{*})=U^{*}$ (i.e., the set of all options that yield the maximum payoff). Assume that the set $O$ has at least one element.
The idea of the aspiration level was introduced by Herbert A. Simon and developed in economics by Richard Cyert and James March in their 1963 book A Behavioral Theory of the Firm.[18] The aspiration level is the payoff that the agent aspires to: if the agent achieves at least this level, it is satisfied; if it does not, it is not satisfied. Let us define the aspiration level A and assume that A ≤ U*. Clearly, whilst it is possible that someone can aspire to something better than the optimum, it is in a sense irrational to do so, so we require the aspiration level to be at or below the optimum payoff.
We can then define the set of satisficing options S as all those options that yield at least A: s ∈ S if and only if A ≤ U(s). Clearly, since A ≤ U*, it follows that O ⊆ S. That is, the set of optimum actions is a subset of the set of satisficing options. So, when an agent satisfices, she will choose from a larger set of actions than the agent who optimizes. One way of looking at this is that the satisficing agent is not putting in the effort to get to the precise optimum, or is unable to exclude actions that are below the optimum but still above the aspiration level.
An equivalent way of looking at satisficing is epsilon-optimization (choosing actions whose payoff is within epsilon of the optimum). If we define the "gap" between the optimum and the aspiration as ε, where ε = U* − A, then the set of satisficing options S(ε) can be defined as all those options s such that U(s) ≥ U* − ε.
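The equivalence between aspiration-level satisficing and epsilon-optimization can be checked numerically; the payoff function, the option set, and the aspiration level below are illustrative assumptions.

```python
def satisficing_set(X, U, A):
    """Options whose payoff meets the aspiration level A."""
    return {s for s in X if U(s) >= A}

def epsilon_optimal_set(X, U, eps):
    """Options within eps of the optimum payoff U*."""
    U_star = max(U(s) for s in X)
    return {s for s in X if U(s) >= U_star - eps}

X = [1, 2, 3, 4]
U = lambda s: s * s      # payoff: U* = 16, attained at s = 4
A = 9                    # aspiration level, with A <= U*
# Setting eps = U* - A makes the two sets coincide.
same = satisficing_set(X, U, A) == epsilon_optimal_set(X, U, 16 - A)
```

Here both constructions select {3, 4}: the options whose payoff reaches the aspiration level 9 are exactly those within ε = 7 of the optimum.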
Apart from the behavioral theory of the firm, applications of the idea of satisficing behavior in economics include the Akerlof and Yellen model of menu costs, popular in New Keynesian macroeconomics.[19][20] Also, in economics and game theory there is the notion of an epsilon-equilibrium, a generalization of the standard Nash equilibrium in which each player is within ε of his or her optimal payoff (the standard Nash equilibrium being the special case where ε = 0).[21]
What determines the aspiration level may be derived from past experience (some function of an agent's or firm's previous payoffs) or from organizational or market institutions. For example, if we think of managerial firms, the managers will be expected to earn normal profits by their shareholders. Other institutions may have specific targets imposed externally (for example, state-funded universities in the UK have targets for student recruitment).
An economic example is the Dixon model of an economy consisting of many firms operating in different industries, where each industry is a duopoly.[22] The endogenous aspiration level is the average profit in the economy. This represents the power of the financial markets: in the long run, firms need to earn normal profits or they die (as Armen Alchian once said, "This is the criterion by which the economic system selects survivors: those who realize positive profits are the survivors; those who suffer losses disappear"[23]). We can then think about what happens over time. If firms are earning profits at or above their aspiration level, they just keep doing what they are doing (unlike the optimizing firm, which would always strive to earn the highest profits possible). However, if firms are earning below aspiration, they try something else until they reach a situation where they attain their aspiration level. It can be shown that in this economy, satisficing leads to collusion amongst firms: competition between firms lowers the profits of one or both firms in a duopoly, which means that competition is unstable; one or both of the firms will fail to achieve their aspirations and hence try something else. The only stable situation is one where all firms achieve their aspirations, which can only happen when all firms earn average profits. In general, this will only happen if all firms earn the joint-profit-maximizing or collusive profit.[24]
Some research has suggested that satisficing/maximizing and other decision-making strategies, like personality traits, have a strong genetic component and endure over time. This genetic influence on decision-making behaviors has been found through classical twin studies, in which decision-making tendencies are self-reported by each member of a twinned pair and then compared between monozygotic and dizygotic twins.[25] This implies that people can be categorized into "maximizers" and "satisficers", with some people landing in between.
The distinction between satisficing and maximizing lies not only in the decision-making process, but also in the post-decision evaluation. Maximizers tend to use a more exhaustive approach to their decision-making process: they seek and evaluate more options than satisficers do to achieve greater satisfaction. However, whereas satisficers tend to be relatively pleased with their decisions, maximizers tend to be less happy with their decision outcomes. This is thought to be due to the limited cognitive resources people have when their options are vast, forcing maximizers not to make an optimal choice. Because maximization is unrealistic and usually impossible in everyday life, maximizers often feel regretful in their post-choice evaluation.[26]
As an example of satisficing, in the field of social cognition, Jon Krosnick proposed a theory of statistical survey satisficing, which says that optimal question answering by a survey respondent involves a great deal of cognitive work and that some people would use satisficing to reduce that burden.[27][28] Some people may shortcut their cognitive processes in two ways:
Likelihood to satisfice is linked to respondent ability, respondent motivation, and task difficulty.
Regarding survey answers, satisficing manifests in:
https://en.wikipedia.org/wiki/Satisficing
In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case the constraints are typically parameterized.[1]
The problem can be stated simply as $\min_{x}\,f(x)$ subject to $g(x,y)\leq 0$ for all $y\in Y$, where $f:\mathbb{R}^{n}\to\mathbb{R}$ is the objective function, $g:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}$ is the constraint function, and $Y\subseteq\mathbb{R}^{m}$ is the (infinite) index set parameterizing the constraints.
SIP can be seen as a special case of bilevel programs in which the lower-level variables do not participate in the objective function.
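A common first approach to such problems, sketched below, is to discretize the infinite index set Y and enforce the constraint only on a finite grid; the particular objective, constraint, and grids are illustrative assumptions, not part of the article. Note that discretization only enforces the constraint at grid points, so in general the result is an approximation that finer grids (or exchange methods) tighten.

```python
import math

def solve_sip_by_discretization(f, g, xs, ys):
    """Approximate min f(x) s.t. g(x, y) <= 0 for all y in Y by checking
    the constraint only on a finite grid ys drawn from Y, and searching a
    finite candidate grid xs for the decision variable x."""
    feasible = [x for x in xs if all(g(x, y) <= 0 for y in ys)]
    return min(feasible, key=f) if feasible else None

# Example: minimize (x - 2)^2 subject to x - 1 - sin(y) <= 0 for all y in [0, pi].
# The binding constraint is at y = 0, where sin(y) = 0, forcing x <= 1.
f = lambda x: (x - 2) ** 2
g = lambda x, y: x - 1 - math.sin(y)
xs = [i * 0.25 for i in range(9)]            # candidate x values in [0, 2]
ys = [i * math.pi / 10 for i in range(11)]   # grid over the index set [0, pi]
best = solve_sip_by_discretization(f, g, xs, ys)   # -> 1.0
```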
https://en.wikipedia.org/wiki/Semi-infinite_programming
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method,[1] the reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956.[2] In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain).
Suppose $\mathcal{D}$ is a compact convex set in a vector space and $f\colon\mathcal{D}\to\mathbb{R}$ is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem $\min_{\mathbf{x}\in\mathcal{D}} f(\mathbf{x})$.
While competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set.
The convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function relative to the optimum is $O(1/k)$ after $k$ iterations, so long as the gradient is Lipschitz continuous with respect to some norm. The same convergence rate can also be shown if the sub-problems are only solved approximately.[3]
The iterates of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in machine learning and signal processing problems,[4] as well as, for example, the optimization of minimum-cost flows in transportation networks.[5]
If the feasible set is given by a set of linear constraints, then the subproblem to be solved in each iteration becomes a linear program.
While the worst-case convergence rate of $O(1/k)$ cannot be improved in general, faster convergence can be obtained for special problem classes, such as some strongly convex problems.[6]
Since $f$ is convex, for any two points $\mathbf{x},\mathbf{y}\in\mathcal{D}$ we have $f(\mathbf{y})\geq f(\mathbf{x})+(\mathbf{y}-\mathbf{x})^{T}\nabla f(\mathbf{x})$.
This also holds for the (unknown) optimal solution $\mathbf{x}^{*}$. That is, $f(\mathbf{x}^{*})\geq f(\mathbf{x})+(\mathbf{x}^{*}-\mathbf{x})^{T}\nabla f(\mathbf{x})$. The best lower bound with respect to a given point $\mathbf{x}$ is therefore given by $f(\mathbf{x}^{*})\geq\min_{\mathbf{s}\in\mathcal{D}}\left\{f(\mathbf{x})+(\mathbf{s}-\mathbf{x})^{T}\nabla f(\mathbf{x})\right\}$.
The latter optimization problem is solved in every iteration of the Frank–Wolfe algorithm, therefore the solution $\mathbf{s}_{k}$ of the direction-finding subproblem of the $k$-th iteration can be used to determine increasing lower bounds $l_{k}$ during each iteration by setting $l_{0}=-\infty$ and $l_{k}:=\max\left(l_{k-1},\,f(\mathbf{x}_{k})+(\mathbf{s}_{k}-\mathbf{x}_{k})^{T}\nabla f(\mathbf{x}_{k})\right)$.
Such lower bounds on the unknown optimal value are important in practice because they can be used as a stopping criterion, and give an efficient certificate of the approximation quality in every iteration, since alwayslk≤f(x∗)≤f(xk){\displaystyle l_{k}\leq f(\mathbf {x} ^{*})\leq f(\mathbf {x} _{k})}.
It has been shown that this correspondingduality gap, that is the difference betweenf(xk){\displaystyle f(\mathbf {x} _{k})}and the lower boundlk{\displaystyle l_{k}}, decreases with the same convergence rate, i.e.f(xk)−lk=O(1/k).{\displaystyle f(\mathbf {x} _{k})-l_{k}=O(1/k).}
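The procedure above can be sketched in a few lines. The following is a minimal illustration, not a reference implementation: it minimizes an assumed example objective f(x) = ||x − b||² over the probability simplex, where the direction-finding subproblem is solved exactly by picking the simplex vertex with the smallest gradient coordinate, and the duality gap (x_k − s_k)ᵀ∇f(x_k) is used as the stopping criterion.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, tol=1e-3, max_iter=50_000):
    """Frank-Wolfe over the probability simplex with duality-gap stopping."""
    x = x0.copy()
    gap = np.inf
    for k in range(max_iter):
        g = grad(x)
        # Direction-finding subproblem: minimize s . g over the simplex,
        # whose solution is always a vertex (a sparse extreme point).
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        # Duality gap: f(x_k) - l_k <= (x_k - s_k)^T grad f(x_k)
        gap = (x - s) @ g
        if gap < tol:
            break
        # Standard step size 2/(k+2); the iterate stays in the simplex.
        x += 2.0 / (k + 2) * (s - x)
    return x, gap

# Example: minimize f(x) = ||x - b||^2, so grad f(x) = 2(x - b).
# Since b lies in the simplex, the optimum is x* = b.
b = np.array([0.2, 0.3, 0.5])
x_opt, gap = frank_wolfe_simplex(lambda x: 2 * (x - b), np.array([1.0, 0.0, 0.0]))
```

Because every step moves toward a single vertex, the iterate after k steps is a convex combination of at most k + 1 extreme points, which is the sparsity property mentioned above.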
|
https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm
|
A separation oracle (also called a cutting-plane oracle) is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm. Separation oracles are used as input to ellipsoid methods.[1]: 87, 96, 98
Let K be a convex and compact set in Rⁿ. A strong separation oracle for K is an oracle (black box) that, given a vector y in Rⁿ, returns one of the following:[1]: 48 either an assertion that y is in K, or a vector c in Rⁿ such that c · y > c · x for all x in K, i.e., a hyperplane separating y from K.
A strong separation oracle is completely accurate, and thus may be hard to construct. For practical reasons, a weaker version is considered, which allows for small errors in the boundary of K and the inequalities. Given a small error tolerance d > 0, we say that:
The weak version also considers rational numbers, which have a representation of finite length, rather than arbitrary real numbers. A weak separation oracle for K is an oracle that, given a vector y in Qⁿ and a rational number d > 0, returns one of the following:[1]: 51
A special case of a convex set is a set represented by linear inequalities: K = {x | Ax ≤ b}. Such a set is called a convex polytope. A strong separation oracle for a convex polytope can be implemented, but its run-time depends on the input format.
If the matrix A and the vector b are given as input, so that K = {x | Ax ≤ b}, then a strong separation oracle can be implemented as follows.[2] Given a point y, compute Ay: if Ay ≤ b holds componentwise, then y is in K; otherwise, some row a_i of A satisfies a_i · y > b_i, and c := a_i is a separating vector, since a_i · x ≤ b_i < a_i · y for all x in K.
This oracle runs in polynomial time as long as the number of constraints is polynomial.
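The inequality-representation oracle described above is short enough to write out directly. The following sketch checks Ay ≤ b and, on failure, returns the first violated row as the separating vector; the unit-square example is an illustrative choice, not from the article.

```python
import numpy as np

def strong_separation_oracle(A, b, y):
    """Strong separation oracle for K = {x : Ax <= b}.
    Returns (True, None) if y is in K; otherwise (False, c), where c is a
    violated row of A, so that c . y > b_i >= c . x for every x in K."""
    Ay = A @ y
    violated = np.where(Ay > b)[0]
    if violated.size == 0:
        return True, None
    return False, A[violated[0]]

# Example polytope: the unit square {x : 0 <= x_i <= 1}, written as Ax <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])

inside, _ = strong_separation_oracle(A, b, np.array([0.5, 0.5]))   # in K
outside, c = strong_separation_oracle(A, b, np.array([2.0, 0.5]))  # not in K
```

The run-time is one matrix-vector product, linear in the number of constraints, matching the polynomial-time claim above.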
Suppose the set of vertices of K is given as an input, so that K = conv(v_1, …, v_k), the convex hull of its vertices. Then, deciding whether y is in K requires checking whether y is a convex combination of the input vectors, that is, whether there exist coefficients z_1, ..., z_k such that:[1]: 49

z_1 v_1 + ⋯ + z_k v_k = y,  z_1 + ⋯ + z_k = 1,  z_i ≥ 0 for all i.

This is a linear program with k variables and n equality constraints (one for each element of y). If y is not in K, then the above program has no solution, and the separation oracle needs to find a vector c such that

c · y > c · v_i for all i = 1, …, k.
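The membership side of this oracle is a feasibility LP, which can be sketched with an off-the-shelf solver. The snippet below assumes SciPy is available and uses `scipy.optimize.linprog` with a zero objective; the square example is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def in_hull(vertices, y):
    """Decide whether y is a convex combination of the given vertices by
    solving the feasibility LP: z >= 0, sum(z) = 1, V^T z = y."""
    V = np.asarray(vertices, dtype=float)        # k vertices, each in R^n
    k, n = V.shape
    A_eq = np.vstack([V.T, np.ones((1, k))])     # n equality rows plus sum(z)=1
    b_eq = np.append(np.asarray(y, dtype=float), 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success                           # infeasible -> y not in K

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```

When the LP is infeasible, LP duality produces a certificate from which the separating vector c above can be read off; the sketch only reports membership.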
Note that the two above representations can be very different in size: it is possible that a polytope can be represented by a small number of inequalities, but has exponentially many vertices (for example, an n-dimensional cube). Conversely, it is possible that a polytope has a small number of vertices, but requires exponentially many inequalities (for example, the convex hull of the 2n vectors of the form (0, ..., ±1, ..., 0)).
In some linear optimization problems, even though the number of constraints is exponential, one can still write a custom separation oracle that works in polynomial time. Some examples are:
Let f be a convex function on Rⁿ. The set K = {(x, t) | f(x) ≤ t} is a convex set in Rⁿ⁺¹. Given an evaluation oracle for f (a black box that returns the value of f for every given point), one can easily check whether a vector (y, t) is in K. In order to get a separation oracle, we need also an oracle to evaluate the subgradient of f.[1]: 49 Suppose some vector (y, s) is not in K, so f(y) > s. Let g be the subgradient of f at y (g is a vector in Rⁿ). Denote c := (g, −1). Then, c · (y, s) = g · y − s > g · y − f(y), and for all (x, t) in K: c · (x, t) = g · x − t ≤ g · x − f(x). By definition of a subgradient: f(x) ≥ f(y) + g · (x − y) for all x in Rⁿ. Therefore, g · y − f(y) ≥ g · x − f(x), so c · (y, s) > c · (x, t), and c represents a separating hyperplane.
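The subgradient construction above translates directly into code. The following sketch assumes an example function f(x) = ||x||², whose (sub)gradient is 2x; both the function and the test point are illustrative choices.

```python
import numpy as np

def epigraph_separation(f, subgrad, y, s):
    """Separation oracle for the epigraph K = {(x, t) : f(x) <= t},
    built from an evaluation oracle f and a subgradient oracle."""
    if f(y) <= s:
        return True, None                 # (y, s) is in K
    g = subgrad(y)
    return False, np.append(g, -1.0)      # c = (g, -1) separates (y, s) from K

# Example: f(x) = ||x||^2 with gradient 2x.
f = lambda x: float(x @ x)
subgrad = lambda x: 2.0 * x

y, s = np.array([1.0, 1.0]), 1.0          # f(y) = 2 > 1, so (y, s) is not in K
inside, c = epigraph_separation(f, subgrad, y, s)
```

As a sanity check, c · (y, s) should exceed c · (x, f(x)) for every x, which the test below verifies on a few sample points.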
A strong separation oracle can be given as an input to the ellipsoid method for solving a linear program. Consider the linear program: maximize c · x subject to Ax ≤ b, x ≥ 0. The ellipsoid method maintains an ellipsoid that initially contains the entire feasible domain Ax ≤ b. At each iteration t, it takes the center x_t of the current ellipsoid, and sends it to the separation oracle:
After making a cut, we construct a new, smaller ellipsoid that contains the remaining region. It can be shown that this process converges to an approximate solution, in time polynomial in the required accuracy.
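The cut-and-shrink loop can be sketched for the simpler feasibility problem (finding any point of K). This is a minimal central-cut ellipsoid method under assumed parameters (a starting ball of radius R around the origin, an illustrative box as the target set); it is not a production solver and ignores the careful numerics a real implementation needs.

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, R=10.0, max_iter=500):
    """Central-cut ellipsoid method: find a point in a convex set K using a
    strong separation oracle, starting from the ball of radius R around 0.
    The ellipsoid is {z : (z - x)^T P^{-1} (z - x) <= 1}."""
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)
    for _ in range(max_iter):
        inside, c = oracle(x)
        if inside:
            return x
        # Cut off the half-space c . z > c . x, which contains no point of K,
        # then take the smallest ellipsoid around the remaining half.
        g = c / np.sqrt(c @ P @ c)
        Pg = P @ g
        x = x - Pg / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return None  # no feasible point found within the iteration budget

# Target set (illustrative): the box {x : 1 <= x_i <= 2}, as Ax <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 2.0, -1.0, -1.0])

def box_oracle(y):
    v = A @ y - b
    i = int(np.argmax(v))
    return (True, None) if v[i] <= 0 else (False, A[i])

point = ellipsoid_feasibility(box_oracle, 2)
```

Each cut shrinks the ellipsoid's volume by a constant factor, which is the source of the polynomial-time convergence claim above.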
Given a weak separation oracle for a polyhedron, it is possible to construct a strong separation oracle by a careful method of rounding, or by Diophantine approximations.[1]: 159
|
https://en.wikipedia.org/wiki/Separation_oracle
|
In Bayesian statistics, a credible interval is an interval used to characterize a probability distribution. It is defined such that an unobserved parameter value has a particular probability γ to fall within it. For example, in an experiment that determines the distribution of possible values of the parameter μ, if the probability that μ lies between 35 and 45 is γ = 0.95, then 35 ≤ μ ≤ 45 is a 95% credible interval.
Credible intervals are typically used to characterize posterior probability distributions or predictive probability distributions.[1] Their generalization to disconnected or multivariate sets is called a credible set or credible region.
Credible intervals are a Bayesian analog to confidence intervals in frequentist statistics.[2] The two concepts arise from different philosophies:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.
Credible sets are not unique, as any given probability distribution has an infinite number of γ-credible sets, i.e. sets of probability γ. For example, in the univariate case, there are multiple definitions for a suitable interval or set:
One may also define an interval for which the mean is the central point, assuming that the mean exists.
γ-smallest credible sets (γ-SCS) can easily be generalized to the multivariate case, and are bounded by probability density contour lines.[4] They always contain the mode, but not necessarily the mean, the coordinate-wise median, nor the geometric median.
Credible intervals can also be estimated through the use of simulation techniques such as Markov chain Monte Carlo.[5]
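With posterior samples in hand (from MCMC or, as in this illustrative conjugate example, drawn directly), credible intervals reduce to order statistics. The sketch below assumes a coin-flip experiment with a Beta(1, 1) prior and 70 heads in 100 flips, so the posterior is Beta(71, 31); it computes both an equal-tailed interval and a shortest (highest-density) interval from the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior: Beta(1 + 70, 1 + 30) for a coin's heads probability.
samples = rng.beta(1 + 70, 1 + 30, size=100_000)

# Equal-tailed 95% credible interval: 2.5th and 97.5th percentiles.
lo, hi = np.percentile(samples, [2.5, 97.5])

# Shortest 95% interval: slide a window covering 95% of the sorted samples
# and keep the narrowest one (a sample-based highest-density interval).
s = np.sort(samples)
m = int(0.95 * len(s))
widths = s[m:] - s[: len(s) - m]
j = int(np.argmin(widths))
hpd = (s[j], s[j + m])
```

Both intervals have probability content 0.95 but generally differ; the shortest interval is never wider than the equal-tailed one, illustrating the non-uniqueness of credible sets discussed above.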
A frequentist 95% confidence interval means that with a large number of repeated samples, 95% of such calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).
Bayesian credible intervals differ from frequentist confidence intervals by two major aspects:
For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form Pr(x|μ) = f(x − μ)), with a prior that is a uniform flat distribution;[6] and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form Pr(x|s) = f(x/s)), with a Jeffreys' prior Pr(s|I) ∝ 1/s;[6] the latter follows because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution.
But these are distinctly special (albeit important) cases; in general no such equivalence can be made.
|
https://en.wikipedia.org/wiki/Credible_interval
|
In clinical trials and other scientific studies, an interim analysis is an analysis of data that is conducted before data collection has been completed. Clinical trials are unusual in that enrollment of subjects is a continual process staggered in time. If a treatment can be proven to be clearly beneficial or harmful compared to the concurrent control, or to be obviously futile, based on a pre-defined analysis of an incomplete data set while the study is ongoing, the investigators may stop the study early.
The design of many clinical trials includes some strategy for early stopping if an interim analysis reveals large differences between treatment groups, or shows obvious futility such that there is no chance that continuing to the end would show a clinically meaningful effect. In addition to saving time and resources, such a design feature can reduce study participants' exposure to an inferior or useless treatment. However, when repeated significance testing on accumulating data is done, some adjustment of the usual hypothesis testing procedure must be made to maintain an overall significance level.[1][2] The methods described by Pocock[3][4] and O'Brien & Fleming,[5] among others,[6][7][8] are popular implementations of group sequential testing for clinical trials.[9][10][11] Sometimes interim analyses are equally spaced in terms of calendar time or the information available from the data, but this assumption can be relaxed to allow for unplanned or unequally spaced analyses.
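The need for adjustment is easy to demonstrate by simulation. The sketch below uses assumed parameters (five equally spaced looks, twenty observations per look, data generated under the null hypothesis, an unadjusted two-sided z-test at each look) and estimates the overall false-positive rate, which substantially exceeds the nominal 5% level; this motivates corrected boundaries such as Pocock's or O'Brien–Fleming's.

```python
import numpy as np

rng = np.random.default_rng(0)
Z95 = 1.96                                   # two-sided 5% critical value
n_sims, n_looks, n_per_look = 4000, 5, 20

false_positives = 0
for _ in range(n_sims):
    # Data generated under the null hypothesis: true mean 0, known sigma 1.
    data = rng.standard_normal(n_looks * n_per_look)
    for k in range(1, n_looks + 1):
        chunk = data[: k * n_per_look]       # accumulating data at look k
        z = chunk.mean() * np.sqrt(len(chunk))
        if abs(z) > Z95:                     # unadjusted repeated testing
            false_positives += 1             # trial stopped "significant"
            break

rate = false_positives / n_sims              # overall type-I error rate
```

With five unadjusted looks the simulated overall error rate lands well above 0.05 (around 0.14 in the classical calculation), which is exactly the inflation that group sequential boundaries are designed to remove.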
The second Multicenter Automatic Defibrillator Implantation Trial (MADIT II) was conducted to help better identify patients with coronary heart disease who would benefit from an ICD. MADIT II is the latest in a series of trials involving the use of ICDs to improve management and clinical treatment of arrhythmia patients. The Antiarrhythmics versus Implantable Defibrillators (AVID) Trial compared ICDs with antiarrhythmic-drug therapy (amiodarone or sotalol, predominantly the former) in patients who had survived life-threatening ventricular arrhythmias. After inclusion of 1,232 patients, the MADIT II study was terminated when interim analysis showed a significant (31%) reduction in all-cause death in patients assigned to ICD therapy.[12]
|
https://en.wikipedia.org/wiki/Interim_analysis
|
Bayesian statistics (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation, which views probability as the limit of the relative frequency of an event after many trials.[2] More concretely, analysis in Bayesian methods codifies prior knowledge in the form of a prior distribution.
Bayesian statistical methods use Bayes' theorem to compute and update probabilities after obtaining new data. Bayes' theorem describes the conditional probability of an event based on data as well as prior information or beliefs about the event or conditions related to the event.[3][4] For example, in Bayesian inference, Bayes' theorem can be used to estimate the parameters of a probability distribution or statistical model. Since Bayesian statistics treats probability as a degree of belief, Bayes' theorem can directly assign a probability distribution that quantifies the belief to the parameter or set of parameters.[2][3]
Bayesian statistics is named after Thomas Bayes, who formulated a specific case of Bayes' theorem in a paper published in 1763. In several papers spanning from the late 18th to the early 19th centuries, Pierre-Simon Laplace developed the Bayesian interpretation of probability.[5] Laplace used methods now considered Bayesian to solve a number of statistical problems. While many Bayesian methods were developed by later authors, the term "Bayesian" was not commonly used to describe these methods until the 1950s. Throughout much of the 20th century, Bayesian methods were viewed unfavorably by many statisticians due to philosophical and practical considerations. Many of these methods required much computation, and most widely used approaches during that time were based on the frequentist interpretation. However, with the advent of powerful computers and new algorithms like Markov chain Monte Carlo, Bayesian methods have gained increasing prominence in statistics in the 21st century.[2][6]
Bayes's theorem is used in Bayesian methods to update probabilities, which are degrees of belief, after obtaining new data. Given two events A and B, the conditional probability of A given that B is true is expressed as follows:[7]

P(A | B) = P(B | A) P(A) / P(B)

where P(B) ≠ 0. Although Bayes's theorem is a fundamental result of probability theory, it has a specific interpretation in Bayesian statistics. In the above equation, A usually represents a proposition (such as the statement that a coin lands on heads fifty percent of the time) and B represents the evidence, or new data that is to be taken into account (such as the result of a series of coin flips). P(A) is the prior probability of A which expresses one's beliefs about A before evidence is taken into account. The prior probability may also quantify prior knowledge or information about A. P(B | A) is the likelihood function, which can be interpreted as the probability of the evidence B given that A is true. The likelihood quantifies the extent to which the evidence B supports the proposition A. P(A | B) is the posterior probability, the probability of the proposition A after taking the evidence B into account. Essentially, Bayes's theorem updates one's prior beliefs P(A) after considering the new evidence B.[2]
The probability of the evidence P(B) can be calculated using the law of total probability. If {A_1, A_2, …, A_n} is a partition of the sample space, which is the set of all outcomes of an experiment, then,[2][7]

P(B) = P(B | A_1) P(A_1) + P(B | A_2) P(A_2) + ⋯ + P(B | A_n) P(A_n) = Σ_i P(B | A_i) P(A_i)

When there are an infinite number of outcomes, it is necessary to integrate over all outcomes to calculate P(B) using the law of total probability. Often, P(B) is difficult to calculate as the calculation would involve sums or integrals that would be time-consuming to evaluate, so often only the product of the prior and likelihood is considered, since the evidence does not change in the same analysis. The posterior is proportional to this product:[2]

P(A | B) ∝ P(B | A) P(A)
The maximum a posteriori, which is the mode of the posterior and is often computed in Bayesian statistics using mathematical optimization methods, remains the same. The posterior can be approximated even without computing the exact value of P(B) with methods such as Markov chain Monte Carlo or variational Bayesian methods.[2]
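The update rule and the law of total probability can be checked with a small worked example. The numbers below are illustrative (a diagnostic test, not from the article): a 1% prevalence prior, 99% sensitivity, and a 5% false-positive rate.

```python
# Prior, likelihood of a positive result given disease, and given no disease.
p_d = 0.01          # P(D): prior probability of disease
p_pos_d = 0.99      # P(+|D): sensitivity
p_pos_not_d = 0.05  # P(+|not D): false-positive rate

# Law of total probability for the evidence:
# P(+) = P(+|D) P(D) + P(+|not D) P(not D)
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)

# Bayes' theorem: posterior P(D|+) = P(+|D) P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos
```

Despite the accurate test, the posterior is only about 17%, because the prior P(D) = 0.01 is small: most positive results come from the much larger disease-free group.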
The general set of statistical techniques can be divided into a number of activities, many of which have special Bayesian versions.
Bayesian inference refers to statistical inference where uncertainty in inferences is quantified using probability.[8] In classical frequentist inference, model parameters and hypotheses are considered to be fixed. Probabilities are not assigned to parameters or hypotheses in frequentist inference. For example, it would not make sense in frequentist inference to directly assign a probability to an event that can only happen once, such as the result of the next flip of a fair coin. However, it would make sense to state that the proportion of heads approaches one-half as the number of coin flips increases.[9]
Statistical models specify a set of statistical assumptions and processes that represent how the sample data are generated. Statistical models have a number of parameters that can be modified. For example, a coin can be represented as samples from a Bernoulli distribution, which models two possible outcomes. The Bernoulli distribution has a single parameter equal to the probability of one outcome, which in most cases is the probability of landing on heads. Devising a good model for the data is central in Bayesian inference. In most cases, models only approximate the true process, and may not take into account certain factors influencing the data.[2] In Bayesian inference, probabilities can be assigned to model parameters. Parameters can be represented as random variables. Bayesian inference uses Bayes' theorem to update probabilities after more evidence is obtained or known.[2][10] Furthermore, Bayesian methods allow for placing priors on entire models and calculating their posterior probabilities using Bayes' theorem. These posterior probabilities are proportional to the product of the prior and the marginal likelihood, where the marginal likelihood is the integral of the sampling density over the prior distribution of the parameters. In complex models, marginal likelihoods are generally computed numerically.[11]
The formulation of statistical models using Bayesian statistics has the identifying feature of requiring the specification of prior distributions for any unknown parameters. Indeed, parameters of prior distributions may themselves have prior distributions, leading to Bayesian hierarchical modeling,[12][13][14] also known as multi-level modeling. A special case is Bayesian networks.
For conducting a Bayesian statistical analysis, best practices are discussed by van de Schoot et al.[15]
For reporting the results of a Bayesian statistical analysis, Bayesian analysis reporting guidelines (BARG) are provided in an open-access article by John K. Kruschke.[16]
The Bayesian design of experiments includes a concept called 'influence of prior beliefs'. This approach uses sequential analysis techniques to include the outcome of earlier experiments in the design of the next experiment. This is achieved by updating 'beliefs' through the use of prior and posterior distributions. This allows the design of experiments to make good use of resources of all types. An example of this is the multi-armed bandit problem.
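The multi-armed bandit illustrates this prior-to-posterior loop concretely. The sketch below uses Thompson sampling, one common Bayesian bandit strategy (an illustrative choice, not singled out by the article): each arm's success rate gets a Beta prior, the next pull is chosen by sampling from the current posteriors, and the observed outcome updates the chosen arm's posterior. The success rates are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.2, 0.5, 0.8]       # unknown Bernoulli success rate of each arm
wins = np.ones(3)              # Beta(1, 1) prior on each arm's rate
losses = np.ones(3)
counts = np.zeros(3, dtype=int)

for _ in range(2000):
    # Sample one plausible success rate per arm from its current posterior...
    theta = rng.beta(wins, losses)
    arm = int(np.argmax(theta))            # ...and play the arm that looks best.
    reward = int(rng.random() < true_p[arm])
    # Conjugate update: the Beta posterior gains a win or a loss.
    wins[arm] += reward
    losses[arm] += 1 - reward
    counts[arm] += 1
```

Early on the posteriors are wide, so all arms get explored; as evidence accumulates, the posterior for the best arm concentrates and almost all later pulls go to it, which is the efficient use of resources described above.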
Exploratory analysis of Bayesian models is an adaptation or extension of the exploratory data analysis approach to the needs and peculiarities of Bayesian modeling. In the words of Persi Diaconis:[17]
Exploratory data analysis seeks to reveal structure, or simple descriptions in data. We look at numbers or graphs and try to find patterns. We pursue leads suggested by background information, imagination, patterns perceived, and experience with other data analyses.
The inference process generates a posterior distribution, which has a central role in Bayesian statistics, together with other distributions like the posterior predictive distribution and the prior predictive distribution. The correct visualization, analysis, and interpretation of these distributions is key to properly answering the questions that motivate the inference process.[18]
When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself:
All these tasks are part of the exploratory analysis of Bayesian models approach, and successfully performing them is central to the iterative and interactive modeling process. These tasks require both numerical and visual summaries.[19][20][21]
|
https://en.wikipedia.org/wiki/Bayesian_statistics
|
The word "probability" has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.
There are two broad categories[1][a][2] of probability interpretations which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn,[3] Reichenbach[4] and von Mises)[5] and propensity accounts (such as those of Popper, Miller, Giere and Fetzer).[6]
Evidential probability, also called Bayesian probability, can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's)[7] interpretation, the subjective interpretation (de Finetti[8] and Savage),[9] the epistemic or inductive interpretation (Ramsey,[10] Cox)[11] and the logical interpretation (Keynes[12] and Carnap).[13] There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies[14] and Rowbottom).[6]
Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there is less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference.
The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). Also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities.
It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.
The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century,[15] and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the philosophy of mathematics as are shared by other mathematical statements.[16][17]
The mathematical analysis originated in observations of the behaviour of game equipment such as playing cards and dice, which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of indifference. This is not the only way probabilistic statements are used in ordinary human language: when people say that "it will probably rain", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. Likewise, when it is written that "the most probable explanation" of the name of Ludlow, Massachusetts "is that it was named after Roger Ludlow", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations.
Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held.
Though probability initially had somewhat mundane motivations, its modern influence and use is widespread, ranging from evidence-based medicine, through six sigma, all the way to the probabilistically checkable proof and the string theory landscape.
The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice) it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely.[1](3.1)
The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.
This can be represented mathematically as follows:
If a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by

P(A) = N_A / N.
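The classical definition amounts to counting cases, which is easy to do exactly by enumeration. The example below (two fair dice, an illustrative choice) counts the outcomes whose sum is 7 out of the 36 equally likely outcomes.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

# Favorable cases for the event A = "the sum is 7".
favorable = [o for o in outcomes if sum(o) == 7]

# Classical definition: P(A) = N_A / N.
p = Fraction(len(favorable), len(outcomes))  # 6/36 = 1/6
```

Note that the computation leans entirely on the assumption that the 36 outcomes are equally likely, which is exactly the premise the classical definition requires.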
There are two clear limitations to the classical definition.[18] Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random experiments, such as tossing a coin until it shows heads, give rise to an infinite set of outcomes. And secondly, it requires an a priori determination that all possible outcomes are equally likely without falling in a trap of circular reasoning by relying on the notion of probability. (In using the terminology "we may be equally undecided", Laplace assumed, by what has been called the "principle of insufficient reason", that all possible outcomes are equally likely if there is no known reason to assume otherwise, for which there is no obvious justification.[19][20])
Frequentists posit that the probability of an event is its relative frequency over time,[1](3.4) i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting a heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.
If we denote by n_a the number of occurrences of an event A in n trials, then if lim_{n→+∞} n_a/n = p, we say that P(A) = p.
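This limiting behaviour can be illustrated (though of course not proven) by simulating a long run of trials. The sketch below simulates 100,000 fair-coin tosses with a fixed seed and tracks the running relative frequency n_a/n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100,000 fair-coin tosses (True = heads).
flips = rng.random(100_000) < 0.5

# Running relative frequency n_a / n after each toss.
freqs = np.cumsum(flips) / np.arange(1, len(flips) + 1)

# After all trials, the relative frequency is close to p = 1/2.
final_freq = freqs[-1]
```

Any finite run only gets close to 1/2, and different runs give slightly different values, which is precisely the circularity worry about the frequency definition raised below.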
The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we can only measure a probability with some error of measurement attached, we still get into problems as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see for example "What is the Chance of an Earthquake?"[21]
Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey[10](p 182) and de Finetti[8](p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent.[22] Evidence casts doubt that humans will have coherent beliefs.[23][24] The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or lesser than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities, known as the reference class problem. The "sunrise problem" provides an example.
Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome.[25]This kind of objective probability is sometimes called 'chance'.
Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates, which are known as propensities or chances. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives (see "single case possible" in the table above).[2] In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. This law allows that stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time.
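The law of large numbers argument above is easy to check numerically. The sketch below (plain Python; the function name is illustrative) tosses a simulated coin with a fixed single-case probability of heads and watches the relative frequency settle near that probability.

```python
import random

def relative_frequency(p, n_tosses, seed=0):
    """Toss a coin with P(heads) = p independently n_tosses times and
    return the observed relative frequency of heads."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n_tosses) if rng.random() < p)
    return heads / n_tosses

# As the number of independent tosses grows, the relative frequency
# approaches the single-case probability p (here p = 0.3).
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.3, n))
```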
The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.
A propensity theory of probability was given by Charles Sanders Peirce.[26][27][28][29] A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of C. S. Peirce, however.[26][27] Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. For Popper then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) only exist for genuinely nondeterministic experiments.
A number of other philosophers, includingDavid MillerandDonald A. Gillies, have proposed propensity theories somewhat similar to Popper's.
Other propensity theorists (e.g. Ronald Giere[30]) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.
What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle,[1](3.3 & 3.5) a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents.
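The 32-cent answer is just the expected payoff under the known chance; a minimal sketch (the function name is an illustrative assumption):

```python
def fair_price(chance_heads, payoff_heads=1.0, payoff_tails=0.0):
    """Expected payoff of the gamble when the chance of heads is known;
    by the Principal Principle this is also the fair price."""
    return chance_heads * payoff_heads + (1.0 - chance_heads) * payoff_tails

print(fair_price(0.32))  # 0.32, i.e. 32 cents on a $1 payoff
```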
It is widely recognized that the term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the earth. Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) empirical evidence (E, say) supports H to a high degree. This degree of support of H by E has been called the logical, or epistemic, or inductive probability of H given E.
The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief. Logical probabilities are conceived (for example in Keynes' Treatise on Probability[12]) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) entailment, or degrees of logical consequence, not degrees of belief. (They do, nevertheless, dictate proper degrees of belief, as is discussed below.) Frank P. Ramsey, on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief".[10](p 157) In other words, Ramsey held that epistemic probabilities simply are degrees of rational belief, rather than being logical relations that merely constrain degrees of rational belief.
Another point of disagreement concerns the uniqueness of evidential probability, relative to a given state of knowledge. Rudolf Carnap held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. Rational people, in other words, may differ somewhat in their degrees of belief, even if they all have the same information.
An alternative account of probability emphasizes the role of prediction – predicting future observations on the basis of past observations, not on unobservable parameters. In its modern form, it is mainly in the Bayesian vein. This was the main function of probability before the 20th century,[31] but fell out of favor compared to the parametric approach, which modeled phenomena as a physical system that was observed with error, such as in celestial mechanics.
The modern predictive approach was pioneered by Bruno de Finetti, with the central idea of exchangeability – that future observations should behave like past observations.[31] This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book,[31] and has since been propounded by such statisticians as Seymour Geisser.
The mathematics of probability can be developed on an entirely axiomatic basis that is independent of any interpretation: see the articles onprobability theoryandprobability axiomsfor a detailed treatment.
|
https://en.wikipedia.org/wiki/Probability_interpretations
|
Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation.[1][2][3][4] These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states.[1][2] A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property.[5][6][7] The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model.[8]
The theory of mean-field interacting particle models had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics.[5][9] The mathematical foundations of these classes of models were developed from the mid-1980s to the mid-1990s by several mathematicians, including Werner Braun, Klaus Hepp,[10] Karl Oelschläger,[11][12][13] Gérard Ben Arous and Marc Brunaud,[14] Donald Dawson, Jean Vaillancourt[15] and Jürgen Gärtner,[16][17] Christian Léonard,[18] Sylvie Méléard, Sylvie Roelly,[6] Alain-Sol Sznitman[7][19] and Hiroshi Tanaka[20] for diffusion type models; F. Alberto Grünbaum,[21] Tokuzo Shiga, Hiroshi Tanaka,[22] Sylvie Méléard and Carl Graham[23][24][25] for general classes of interacting jump-diffusion processes.
We also quote an earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies.[26] Mean-field genetic type particle methods are also used as heuristic natural search algorithms (a.k.a. metaheuristics) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines[27] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[28][29] The Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms.[30]
Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods, can also be interpreted as a mean-field particle approximation of Feynman-Kac path integrals.[3][4][31][32][33][34][35] The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer, who developed in 1948 a mean field particle interpretation of neutron-chain reactions,[36] but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[35] In molecular chemistry, the use of genetic heuristic-like particle methods (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[37]
The first pioneering articles on the applications of these heuristic-like particle methods in nonlinear filtering problems were the independent studies of Neil Gordon, David Salmond and Adrian Smith (bootstrap filter),[38] Genshiro Kitagawa (Monte Carlo filter),[39] and the one by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut[40] published in the 1990s. The term interacting "particle filters" was first coined in 1996 by Del Moral.[41] Particle filters were also developed in signal processing in 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems.[42][43][44][45][46][47]
The foundations and the first rigorous analysis on the convergence of genetic type models and mean field Feynman-Kac particle methods are due to Pierre Del Moral[48][49] in 1996. Branching type particle methods with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons,[50][51][52] and by Dan Crisan, Pierre Del Moral and Terry Lyons.[53] The first uniform convergence results with respect to the time parameter for mean field particle models were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet[54][55] for interacting jump type processes, and by Florent Malrieu for nonlinear diffusion type processes.[56]
New classes of mean field particle simulation techniques for Feynman-Kac path-integration problems include genealogical tree based models,[2][3][57] backward particle models,[2][58] adaptive mean field particle models,[59] island type particle models,[60][61] and particle Markov chain Monte Carlo methods.[62][63]
Inphysics, and more particularly instatistical mechanics, these nonlinear evolution equations are often used to describe the statistical behavior of microscopic interacting particles in a fluid or in some condensed matter. In this context, the random evolution of a virtual fluid or a gas particle is represented byMcKean-Vlasov diffusion processes,reaction–diffusion systems, orBoltzmann type collision processes.[11][12][13][25][64]As its name indicates, the mean field particle model represents the collective behavior of microscopic particles weakly interacting with their occupation measures. The macroscopic behavior of these many-body particle systems is encapsulated in the limiting model obtained when the size of the population tends to infinity. Boltzmann equations represent the macroscopic evolution of colliding particles in rarefied gases, while McKean Vlasov diffusions represent the macroscopic behavior of fluid particles and granular gases.
In computational physics, and more specifically in quantum mechanics, the ground state energies of quantum systems are associated with the top of the spectrum of Schrödinger's operators. The Schrödinger equation is the quantum mechanics version of Newton's second law of motion of classical mechanics (the mass times the acceleration is the sum of the forces). This equation represents the wave function (a.k.a. the quantum state) evolution of some physical system, including molecular, atomic, or subatomic systems, as well as macroscopic systems like the universe.[65] The solution of the imaginary time Schrödinger equation (a.k.a. the heat equation) is given by a Feynman-Kac distribution associated with a free evolution Markov process (often represented by Brownian motions) in the set of electronic or macromolecular configurations and some potential energy function. The long time behavior of these nonlinear semigroups is related to top eigenvalues and ground state energies of Schrödinger's operators.[3][32][33][34][35][66] The genetic type mean field interpretation of these Feynman-Kac models is termed Resampled Monte Carlo, or Diffusion Monte Carlo methods. These branching type evolutionary algorithms are based on mutation and selection transitions. During the mutation transition, the walkers evolve randomly and independently in a potential energy landscape on particle configurations. The mean field selection process (a.k.a. quantum teleportation, population reconfiguration, resampled transition) is associated with a fitness function that reflects the particle absorption in an energy well. Configurations with low relative energy are more likely to duplicate. In molecular chemistry and statistical physics, mean field particle methods are also used to sample Boltzmann-Gibbs measures associated with some cooling schedule, and to compute their normalizing constants (a.k.a. free energies, or partition functions).[2][67][68][69]
Incomputational biology, and more specifically inpopulation genetics, spatialbranching processeswith competitive selection and migration mechanisms can also be represented by mean field genetic typepopulation dynamics models.[4][70]The first moments of the occupation measures of a spatial branching process are given by Feynman-Kac distribution flows.[71][72]The mean field genetic type approximation of these flows offers a fixed population size interpretation of these branching processes.[2][3][73]Extinction probabilities can be interpreted as absorption probabilities of some Markov process evolving in some absorbing environment. These absorption models are represented by Feynman-Kac models.[74][75][76][77]The long time behavior of these processes conditioned on non-extinction can be expressed in an equivalent way byquasi-invariant measures,Yaglomlimits,[78]or invariant measures of nonlinear normalized Feynman-Kac flows.[2][3][54][55][66][79]
In computer science, and more particularly in artificial intelligence, these mean field type genetic algorithms are used as random search heuristics that mimic the process of evolution to generate useful solutions to complex optimization problems.[80][81][82] These stochastic search algorithms belong to the class of evolutionary models. The idea is to propagate a population of feasible candidate solutions using mutation and selection mechanisms. The mean field interaction between the individuals is encapsulated in the selection and the cross-over mechanisms.
In mean field games and multi-agent interacting systems theories, mean field particle processes are used to represent the collective behavior of complex systems with interacting individuals.[83][84][85][86][87][88][89][90] In this context, the mean field interaction is encapsulated in the decision process of interacting agents. The limiting model as the number of agents tends to infinity is sometimes called the continuum model of agents.[91]
In information theory, and more specifically in statistical machine learning and signal processing, mean field particle methods are used to sample sequentially from the conditional distributions of some random process with respect to a sequence of observations or a cascade of rare events.[2][3][73][92] In discrete time nonlinear filtering problems, the conditional distributions of the random states of a signal given partial and noisy observations satisfy a nonlinear updating-prediction evolution equation. The updating step is given by Bayes' rule, and the prediction step is a Chapman-Kolmogorov transport equation. The mean field particle interpretation of these nonlinear filtering equations is a genetic type selection-mutation particle algorithm.[48] During the mutation step, the particles evolve independently of one another according to the Markov transitions of the signal. During the selection stage, particles with small relative likelihood values are killed, while the ones with high relative values are multiplied.[93][94] These mean field particle techniques are also used to solve multiple-object tracking problems, and more specifically to estimate association measures.[2][73][95]
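The selection-mutation recipe just described can be sketched as a minimal bootstrap particle filter. The linear-Gaussian signal and observation model below is an illustrative assumption chosen for simplicity, and all names are hypothetical.

```python
import math
import random

def bootstrap_filter(ys, n_particles, seed=0):
    """Minimal bootstrap particle filter for the illustrative model
        signal:      x_n = 0.9 * x_{n-1} + v_n,  v_n ~ N(0, 1)
        observation: y_n = x_n + w_n,            w_n ~ N(0, 1)
    Mutation: propagate each particle through the signal dynamics.
    Selection: resample particles with weights given by the likelihood.
    Returns the filtered mean estimate of x_n at each step."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # mutation step: independent moves under the signal transition
        xs = [0.9 * x + rng.gauss(0.0, 1.0) for x in xs]
        # selection step: weight by the observation likelihood, then resample
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in xs]
        xs = rng.choices(xs, weights=ws, k=n_particles)
        means.append(sum(xs) / n_particles)
    return means
```

With a constant observation sequence, the filtered mean is pulled from the prior toward the observed value, exactly the killing/multiplying behavior described above.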
The continuous time version of these particle models are mean field Moran type particle interpretations of the robust optimal filter evolution equations or the Kushner-Stratonovich stochastic partial differential equation.[4][31][94] These genetic type mean field particle algorithms, also termed Particle Filters and Sequential Monte Carlo methods, are extensively and routinely used in operations research and statistical inference.[96][97][98] The term "particle filters" was first coined in 1996 by Del Moral,[41] and the term "sequential Monte Carlo" by Liu and Chen in 1998. Subset simulation and Monte Carlo splitting[99] techniques are particular instances of genetic particle schemes and Feynman-Kac particle models equipped with Markov chain Monte Carlo mutation transitions.[67][100][101]
To motivate the mean field simulation algorithm, we start with S a finite or countable state space and let P(S) denote the set of all probability measures on S. Consider a sequence of probability distributions (η0,η1,⋯){\displaystyle (\eta _{0},\eta _{1},\cdots )} on S satisfying an evolution equation:
for some, possibly nonlinear, mappingΦ:P(S)→P(S).{\displaystyle \Phi :P(S)\to P(S).}These distributions are given by vectors
that satisfy:
Therefore, Φ{\displaystyle \Phi } is a mapping from the (s−1){\displaystyle (s-1)}-unit simplex into itself, where s stands for the cardinality of the set S. When s is too large, solving equation (1) is intractable or computationally very costly. One natural way to approximate these evolution equations is to reduce sequentially the state space using a mean field particle model. One of the simplest mean field simulation schemes is defined by the Markov chain
on the product spaceSN{\displaystyle S^{N}}, starting withNindependent random variables with probability distributionη0{\displaystyle \eta _{0}}and elementary transitions
with theempirical measure
where1x{\displaystyle 1_{x}}is theindicator functionof the statex.
In other words, givenξn(N){\displaystyle \xi _{n}^{(N)}}the samplesξn+1(N){\displaystyle \xi _{n+1}^{(N)}}are independent random variables with probability distributionΦ(ηnN){\displaystyle \Phi \left(\eta _{n}^{N}\right)}. The rationale behind this mean field simulation technique is the following: We expect that whenηnN{\displaystyle \eta _{n}^{N}}is a good approximation ofηn{\displaystyle \eta _{n}}, thenΦ(ηnN){\displaystyle \Phi \left(\eta _{n}^{N}\right)}is an approximation ofΦ(ηn)=ηn+1{\displaystyle \Phi \left(\eta _{n}\right)=\eta _{n+1}}. Thus, sinceηn+1N{\displaystyle \eta _{n+1}^{N}}is the empirical measure ofNconditionally independent random variables with common probability distributionΦ(ηnN){\displaystyle \Phi \left(\eta _{n}^{N}\right)}, we expectηn+1N{\displaystyle \eta _{n+1}^{N}}to be a good approximation ofηn+1{\displaystyle \eta _{n+1}}.
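The scheme above (sample N particles, form the empirical measure, push it through Φ, and draw the next generation from the result) can be sketched in Python, under the assumption that distributions on the finite state space are represented as dicts mapping states to masses; all names are illustrative.

```python
import random
from collections import Counter

def mean_field_simulation(phi, eta0, n_particles, n_steps, seed=0):
    """Simulate the mean field particle model: at each step the empirical
    measure of the N particles is plugged into Phi, and the next generation
    is drawn i.i.d. from the resulting distribution."""
    rng = random.Random(seed)
    states, weights = zip(*eta0.items())
    particles = rng.choices(states, weights=weights, k=n_particles)
    for _ in range(n_steps):
        counts = Counter(particles)
        eta_n = {x: c / n_particles for x, c in counts.items()}  # empirical measure
        target = phi(eta_n)                                      # Phi(eta_n^N)
        s, w = zip(*target.items())
        particles = rng.choices(s, weights=w, k=n_particles)
    counts = Counter(particles)
    return {x: c / n_particles for x, c in counts.items()}

# Sanity check with a *linear* Phi given by a stochastic matrix M, for which
# the scheme reduces to plain Monte Carlo simulation of a Markov chain whose
# stationary distribution is (2/3, 1/3).
M = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
def phi(eta):
    out = {0: 0.0, 1: 0.0}
    for x, mass in eta.items():
        for y, p in M[x].items():
            out[y] += mass * p
    return out

eta = mean_field_simulation(phi, {0: 0.5, 1: 0.5}, n_particles=5000, n_steps=50)
```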
Another strategy is to find a collection
ofstochastic matricesindexed byηn∈P(S){\displaystyle \eta _{n}\in P(S)}such that
This formula allows us to interpret the sequence(η0,η1,⋯){\displaystyle (\eta _{0},\eta _{1},\cdots )}as the probability distributions of the random states(X¯0,X¯1,⋯){\displaystyle \left({\overline {X}}_{0},{\overline {X}}_{1},\cdots \right)}of the nonlinear Markov chain model with elementary transitions
A collection of Markov transitionsKηn{\displaystyle K_{\eta _{n}}}satisfying the equation (1) is called a McKean interpretation of the sequence of measuresηn{\displaystyle \eta _{n}}.
The mean field particle interpretation of (2) is now defined by the Markov chain
on the product spaceSN{\displaystyle S^{N}}, starting withNindependent random copies ofX0{\displaystyle X_{0}}and elementary transitions
with the empirical measure
Under some weak regularity conditions[2]on the mappingΦ{\displaystyle \Phi }for any functionf:S→R{\displaystyle f:S\to \mathbf {R} }, we have the almost sure convergence
These nonlinear Markov processes and their mean field particle interpretation can be extended to time non homogeneous models on generalmeasurablestate spaces.[2]
To illustrate the abstract models presented above, we consider a stochastic matrixM=(M(x,y))x,y∈S{\displaystyle M=(M(x,y))_{x,y\in S}}and some functionG:S→(0,1){\displaystyle G:S\to (0,1)}. We associate with these two objects the mapping
and the Boltzmann-Gibbs measuresΨG(ηn)(x){\displaystyle \Psi _{G}(\eta _{n})(x)}defined by
We denote byKηn=(Kηn(x,y))x,y∈S{\displaystyle K_{\eta _{n}}=\left(K_{\eta _{n}}(x,y)\right)_{x,y\in S}}the collection of stochastic matrices indexed byηn∈P(S){\displaystyle \eta _{n}\in P(S)}given by
for some parameterϵ∈[0,1]{\displaystyle \epsilon \in [0,1]}. It is readily checked that the equation (2) is satisfied. In addition, we can also show (cf. for instance[3]) that the solution of (1) is given by the Feynman-Kac formula
with a Markov chainXn{\displaystyle X_{n}}with initial distributionη0{\displaystyle \eta _{0}}and Markov transitionM.
For any functionf:S→R{\displaystyle f:S\to \mathbf {R} }we have
IfG(x)=1{\displaystyle G(x)=1}is the unit function andϵ=1{\displaystyle \epsilon =1}, then we have
And the equation (2) reduces to theChapman-Kolmogorov equation
The mean field particle interpretation of this Feynman-Kac model is defined by sampling sequentiallyNconditionally independent random variablesξn+1(N,i){\displaystyle \xi _{n+1}^{(N,i)}}with probability distribution
In other words, with a probability ϵG(ξn(N,i)){\displaystyle \epsilon G\left(\xi _{n}^{(N,i)}\right)} the particle ξn(N,i){\displaystyle \xi _{n}^{(N,i)}} evolves to a new state ξn+1(N,i)=y{\displaystyle \xi _{n+1}^{(N,i)}=y} randomly chosen with the probability distribution M(ξn(N,i),y){\displaystyle M\left(\xi _{n}^{(N,i)},y\right)}; otherwise, ξn(N,i){\displaystyle \xi _{n}^{(N,i)}} jumps to a new location ξn(N,j){\displaystyle \xi _{n}^{(N,j)}} randomly chosen with a probability proportional to G(ξn(N,j)){\displaystyle G\left(\xi _{n}^{(N,j)}\right)} and evolves to a new state ξn+1(N,i)=y{\displaystyle \xi _{n+1}^{(N,i)}=y} randomly chosen with the probability distribution M(ξn(N,j),y).{\displaystyle M\left(\xi _{n}^{(N,j)},y\right).} If G(x)=1{\displaystyle G(x)=1} is the unit function and ϵ=1{\displaystyle \epsilon =1}, the interaction between the particles vanishes and the particle model reduces to a sequence of independent copies of the Markov chain Xn{\displaystyle X_{n}}. When ϵ=0{\displaystyle \epsilon =0} the mean field particle model described above reduces to a simple mutation-selection genetic algorithm with fitness function G and mutation transition M. These nonlinear Markov chain models and their mean field particle interpretation can be extended to time non homogeneous models on general measurable state spaces (including transition states, path spaces and random excursion spaces) and continuous time models.[1][2][3]
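The accept-or-teleport transition described in this paragraph can be sketched as follows; the dict-based representations of M and G, and all names, are illustrative assumptions.

```python
import random

def feynman_kac_step(particles, M, G, eps, rng):
    """One mutation-selection transition of the genetic type particle model:
    with probability eps*G(x) a particle mutates from its own state via M;
    otherwise it first jumps to a particle chosen with probability
    proportional to G, then mutates from there."""
    g = [G[x] for x in particles]
    new = []
    for x in particles:
        if rng.random() < eps * G[x]:
            source = x  # accepted: mutate from the current state
        else:
            # selection: teleport to a particle chosen proportionally to G
            source = rng.choices(particles, weights=g)[0]
        ys, ps = zip(*M[source].items())
        new.append(rng.choices(ys, weights=ps)[0])
    return new

rng = random.Random(0)
M = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # mutation kernel
G = {0: 0.9, 1: 0.3}                             # potential favouring state 0
particles = [rng.choice([0, 1]) for _ in range(2000)]
for _ in range(20):
    particles = feynman_kac_step(particles, M, G, eps=0.5, rng=rng)
```

Since both the mutation kernel and the potential favour state 0 here, the empirical measure concentrates above the mutation-only stationary mass of 2/3 on that state.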
We consider a sequence of real valued random variables(X¯0,X¯1,⋯){\displaystyle \left({\overline {X}}_{0},{\overline {X}}_{1},\cdots \right)}defined sequentially by the equations
with a collectionWn{\displaystyle W_{n}}of independentstandard Gaussianrandom variables, a positive parameterσ, some functionsa,b,c:R→R,{\displaystyle a,b,c:\mathbf {R} \to \mathbf {R} ,}and some standard Gaussian initial random stateX¯0{\displaystyle {\overline {X}}_{0}}. We letηn{\displaystyle \eta _{n}}be the probability distribution of the random stateX¯n{\displaystyle {\overline {X}}_{n}}; that is, for any boundedmeasurable functionf, we have
with
The integral is theLebesgue integral, anddxstands for an infinitesimal neighborhood of the statex. TheMarkov transitionof the chain is given for any bounded measurable functionsfby the formula
with
Using the tower property ofconditional expectationswe prove that the probability distributionsηn{\displaystyle \eta _{n}}satisfy the nonlinear equation
for any bounded measurable functionsf. This equation is sometimes written in the more synthetic form
The mean field particle interpretation of this model is defined by the Markov chain
on the product spaceRN{\displaystyle \mathbf {R} ^{N}}by
where
stand forNindependent copies ofX¯0{\displaystyle {\overline {X}}_{0}}andWn;n⩾1,{\displaystyle W_{n};n\geqslant 1,}respectively. For regular models (for instance for bounded Lipschitz functionsa,b,c) we have the almost sure convergence
with the empirical measure
for any bounded measurable functionsf(cf. for instance[2]). In the above display,δx{\displaystyle \delta _{x}}stands for theDirac measureat the statex.
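Since the displayed recursion is not reproduced here, the sketch below uses a representative McKean-Vlasov type recursion in which each particle's drift depends on the empirical mean of the population; the specific drift is an illustrative assumption, not the article's equation.

```python
import random
import statistics

def mckean_vlasov_particles(n_particles, n_steps, h=0.01, sigma=1.0, seed=0):
    """Mean field particle scheme for the illustrative recursion
        X_{n+1} = X_n - h*(X_n - m_n) + sigma*sqrt(h)*W_{n+1},
    where m_n = E[c(X_n)] with c(x) = x is replaced by the empirical
    mean of the N particles."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    for _ in range(n_steps):
        m = statistics.fmean(xs)  # empirical measure enters through its mean
        xs = [x - h * (x - m) + sigma * (h ** 0.5) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs
```

Averages of bounded functions of these particles converge, as N grows, to the corresponding integrals against the law of the nonlinear process, which is the almost sure convergence stated above.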
We consider astandard Brownian motionW¯tn{\displaystyle {\overline {W}}_{t_{n}}}(a.k.a.Wiener Process) evaluated on a time mesh sequencet0=0<t1<⋯<tn<⋯{\displaystyle t_{0}=0<t_{1}<\cdots <t_{n}<\cdots }with a given time steptn−tn−1=h{\displaystyle t_{n}-t_{n-1}=h}. We choosec(x)=x{\displaystyle c(x)=x}in equation (1), we replaceb(x){\displaystyle b(x)}andσbyb(x)×h{\displaystyle b(x)\times h}andσ×h{\displaystyle \sigma \times {\sqrt {h}}}, and we writeX¯tn{\displaystyle {\overline {X}}_{t_{n}}}instead ofX¯n{\displaystyle {\overline {X}}_{n}}the values of the random states evaluated at the time steptn.{\displaystyle t_{n}.}Recalling that(W¯tn+1−W¯tn){\displaystyle \left({\overline {W}}_{t_{n+1}}-{\overline {W}}_{t_{n}}\right)}are independent centered Gaussian random variables with variancetn−tn−1=h,{\displaystyle t_{n}-t_{n-1}=h,}the resulting equation can be rewritten in the following form
When h → 0, the above equation converges to the nonlinear diffusion process
The mean field continuous time model associated with these nonlinear diffusions is the (interacting) diffusion processξt(N)=(ξt(N,i))1⩽i⩽N{\displaystyle \xi _{t}^{(N)}=\left(\xi _{t}^{(N,i)}\right)_{1\leqslant i\leqslant N}}on the product spaceRN{\displaystyle \mathbf {R} ^{N}}defined by
where
areNindependent copies ofX¯0{\displaystyle {\overline {X}}_{0}}andW¯t.{\displaystyle {\overline {W}}_{t}.}For regular models (for instance for bounded Lipschitz functionsa,b) we have the almost sure convergence
withηt=Law(X¯t),{\displaystyle \eta _{t}={\text{Law}}\left({\overline {X}}_{t}\right),}and the empirical measure
for any bounded measurable functionsf(cf. for instance.[7]). These nonlinear Markov processes and their mean field particle interpretation can be extended to interacting jump-diffusion processes[1][2][23][25]
|
https://en.wikipedia.org/wiki/Mean-field_particle_methods
|
Metropolis light transport(MLT) is aglobal illuminationapplication of aMonte Carlo methodcalled theMetropolis–Hastings algorithmto therendering equationfor generating images from detailed physical descriptions ofthree-dimensionalscenes.[1][2]
The procedure constructs paths from the eye to a light source usingbidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.[citation needed]
Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediateprobability distributionstep.[3]
Renderers using MLT:
This computing article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Metropolis_light_transport
|
Multiple-try Metropolis (MTM) is a sampling method that is a modified form of the Metropolis–Hastings method, first presented by Liu, Liang, and Wong in 2000. It is designed to help the sampling trajectory converge faster, by increasing both the step size and the acceptance rate.
InMarkov chain Monte Carlo, theMetropolis–Hastings algorithm(MH) can be used to sample from aprobability distributionwhich is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary. In many cases, one uses a Gaussian distribution centered on the current point in the probability space, of the formQ(x′;xt)=N(xt;σ2I){\displaystyle Q(x';x^{t})={\mathcal {N}}(x^{t};\sigma ^{2}I)\,}. This proposal distribution is convenient to sample from and may be the best choice if one has little knowledge about the target distribution,π(x){\displaystyle \pi (x)\,}. If desired, one can use the more generalmultivariate normal distribution,Q(x′;xt)=N(xt;Σ){\displaystyle Q(x';x^{t})={\mathcal {N}}(x^{t};\mathbf {\Sigma } )}, whereΣ{\displaystyle \mathbf {\Sigma } }is the covariance matrix which the user believes is similar to the target distribution.
Although this method must converge to the stationary distribution in the limit of infinite sample size, in practice the progress can be exceedingly slow. If σ2{\displaystyle \sigma ^{2}\,} is too large, almost all steps under the MH algorithm will be rejected. On the other hand, if σ2{\displaystyle \sigma ^{2}\,} is too small, almost all steps will be accepted, and the Markov chain will be similar to a random walk through the probability space. In the simpler case of Q(x′;xt)=N(xt;I){\displaystyle Q(x';x^{t})={\mathcal {N}}(x^{t};I)\,}, we see that N{\displaystyle N\,} steps only take us a distance of about N{\displaystyle {\sqrt {N}}\,}. In this event, the Markov chain will not fully explore the probability space in any reasonable amount of time. Thus the MH algorithm requires reasonable tuning of the scale parameter (σ2{\displaystyle \sigma ^{2}\,} or Σ{\displaystyle \mathbf {\Sigma } }).
Even if the scale parameter is well-tuned, as the dimensionality of the problem increases, progress can still remain exceedingly slow. To see this, again consider Q(x′;xt)=N(xt;I){\displaystyle Q(x';x^{t})={\mathcal {N}}(x^{t};I)\,}. In one dimension, this corresponds to a Gaussian distribution with mean 0 and variance 1. This distribution has a mean step of zero; however, the mean squared step size is given by
⟨x2⟩=∫−∞∞x212πe−x2/2dx=1.{\displaystyle \langle x^{2}\rangle =\int _{-\infty }^{\infty }x^{2}\,{\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}\,dx=1.}
As the number of dimensions increases, the expected step size becomes larger and larger. In N{\displaystyle N\,} dimensions, the probability of moving a radial distance r{\displaystyle r\,} is related to the Chi distribution, and is given by
P(r)∝rN−1e−r2/2.{\displaystyle P(r)\propto r^{N-1}e^{-r^{2}/2}.}
This distribution is peaked at r=N−1{\displaystyle r={\sqrt {N-1}}\,}, which is ≈N{\displaystyle \approx {\sqrt {N}}\,} for large N{\displaystyle N\,}. This means that the step size will increase roughly as the square root of the number of dimensions. For the MH algorithm, large steps will almost always land in regions of low probability, and therefore be rejected.
If we now add the scale parameter σ2{\displaystyle \sigma ^{2}\,} back in, we find that to retain a reasonable acceptance rate, we must make the transformation σ2→σ2/N{\displaystyle \sigma ^{2}\rightarrow \sigma ^{2}/N}. In this situation, the acceptance rate can now be made reasonable, but the exploration of the probability space becomes increasingly slow. To see this, consider a slice along any one dimension of the problem. By making the scale transformation above, the expected step size in any one dimension is not σ{\displaystyle \sigma \,} but instead σ/N{\displaystyle \sigma /{\sqrt {N}}}. As this step size is much smaller than the "true" scale of the probability distribution (assuming that σ{\displaystyle \sigma \,} is somehow known a priori, which is the best possible case), the algorithm executes a random walk along every parameter.
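The acceptance-rate collapse described above is easy to observe numerically. The sketch below (an illustrative toy, not from the source: the standard-normal target, step scale, and step count are arbitrary choices) runs a random-walk MH chain with fixed σ = 1 in increasing dimension and reports the empirical acceptance rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_acceptance_rate(dim, sigma, steps=2000):
    """Empirical acceptance rate of random-walk MH targeting N(0, I_dim)."""
    x = np.zeros(dim)
    log_pi = lambda v: -0.5 * float(v @ v)  # log-density up to a constant
    accepted = 0
    for _ in range(steps):
        y = x + sigma * rng.standard_normal(dim)
        # Metropolis acceptance: min(1, pi(y)/pi(x)) via log comparison
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
            accepted += 1
    return accepted / steps

for dim in (1, 10, 100):
    print(dim, mh_acceptance_rate(dim, sigma=1.0))
```

With σ held fixed, the acceptance rate falls sharply as the dimension grows, which is exactly why the σ² → σ²/N rescaling (and the resulting slow per-coordinate random walk) becomes necessary.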
Suppose Q(x,y){\displaystyle Q(\mathbf {x} ,\mathbf {y} )} is an arbitrary proposal function. We require that Q(x,y)>0{\displaystyle Q(\mathbf {x} ,\mathbf {y} )>0} only if Q(y,x)>0{\displaystyle Q(\mathbf {y} ,\mathbf {x} )>0}. Additionally, π(x){\displaystyle \pi (\mathbf {x} )} is the likelihood function.
Define w(x,y)=π(x)Q(x,y)λ(x,y){\displaystyle w(\mathbf {x} ,\mathbf {y} )=\pi (\mathbf {x} )Q(\mathbf {x} ,\mathbf {y} )\lambda (\mathbf {x} ,\mathbf {y} )}, where λ(x,y){\displaystyle \lambda (\mathbf {x} ,\mathbf {y} )} is a non-negative symmetric function in x{\displaystyle \mathbf {x} } and y{\displaystyle \mathbf {y} } that can be chosen by the user.
Now suppose the current state is x{\displaystyle \mathbf {x} }. The MTM algorithm is as follows:
1) Draw k independent trial proposals y1,…,yk{\displaystyle \mathbf {y} _{1},\ldots ,\mathbf {y} _{k}} from Q(x,⋅){\displaystyle Q(\mathbf {x} ,.)}. Compute the weights w(yj,x){\displaystyle w(\mathbf {y} _{j},\mathbf {x} )} for each of these.
2) Select y{\displaystyle \mathbf {y} } from the yi{\displaystyle \mathbf {y} _{i}} with probability proportional to the weights.
3) Now produce a reference set by drawing x1,…,xk−1{\displaystyle \mathbf {x} _{1},\ldots ,\mathbf {x} _{k-1}} from the distribution Q(y,⋅){\displaystyle Q(\mathbf {y} ,.)}. Set xk=x{\displaystyle \mathbf {x} _{k}=\mathbf {x} } (the current point).
4) Accept y{\displaystyle \mathbf {y} } with probability
rg=min(1,w(y1,x)+⋯+w(yk,x)w(x1,y)+⋯+w(xk,y)).{\displaystyle r_{g}=\min \left(1,{\frac {w(\mathbf {y} _{1},\mathbf {x} )+\cdots +w(\mathbf {y} _{k},\mathbf {x} )}{w(\mathbf {x} _{1},\mathbf {y} )+\cdots +w(\mathbf {x} _{k},\mathbf {y} )}}\right).}
It can be shown that this method satisfies the detailed balance property and therefore produces a reversible Markov chain with π(x){\displaystyle \pi (\mathbf {x} )} as the stationary distribution.
If Q(x,y){\displaystyle Q(\mathbf {x} ,\mathbf {y} )} is symmetric (as is the case for the multivariate normal distribution), then one can choose λ(x,y)=1Q(x,y){\displaystyle \lambda (\mathbf {x} ,\mathbf {y} )={\frac {1}{Q(\mathbf {x} ,\mathbf {y} )}}}, which gives w(x,y)=π(x){\displaystyle w(\mathbf {x} ,\mathbf {y} )=\pi (\mathbf {x} )}.
Multiple-try Metropolis needs to compute the energy of2k−1{\displaystyle 2k-1}other states at every step.
If the slow part of the process is calculating the energy, then this method can be slower.
If the slow part of the process is finding neighbors of a given point, or generating random numbers, then again this method can be slower.
It can be argued that this method only appears faster because it puts much more computation into a "single step" than Metropolis-Hastings does.
|
https://en.wikipedia.org/wiki/Multiple-try_Metropolis
|
Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures, one may have a stable state different from low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states.
More specifically, parallel tempering (also known as replica exchange MCMC sampling) is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang,[1] then extended by Charles J. Geyer,[2] and later developed further by Giorgio Parisi,[3] Koji Hukushima and Koji Nemoto,[4] and others.[5][6] Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.[7]
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion, one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa. This results in a very robust ensemble which is able to sample both low and high energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature T. At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down.
If we were to run two simulations at temperatures separated by a ΔT, we would find that if ΔT is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For ΔT = 0 the overlap should approach 1.
Another way to interpret this overlap is to say that system configurations sampled at temperature T1 are likely to appear during a simulation at T2. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at T1 and T2. At a given Monte Carlo step we can update the global system by swapping the configuration of the two systems, or alternatively trading the two temperatures. The update is accepted according to the Metropolis–Hastings criterion with probability
p=min(1,exp⁡[(1kT1−1kT2)(E1−E2)]),{\displaystyle p=\min \left(1,\exp \left[\left({\frac {1}{kT_{1}}}-{\frac {1}{kT_{2}}}\right)(E_{1}-E_{2})\right]\right),}
and otherwise the update is rejected. The detailed balance condition is satisfied by ensuring that the reverse update is equally likely, all else being equal. This can be ensured by appropriately choosing regular Monte Carlo updates or parallel tempering updates with probabilities that are independent of the configurations of the two systems or of the Monte Carlo step.[8]
This update can be generalized to more than two systems.
By a careful choice of temperatures and number of systems one can achieve an improvement in the mixing properties of a set of Monte Carlo simulations that exceeds the extra computational cost of running parallel simulations.
Other considerations: increasing the number of different temperatures can have a detrimental effect, as the 'lateral' movement of a given system across temperatures behaves as a diffusion process. The setup is important, as there must be a practical histogram overlap to achieve a reasonable probability of lateral moves.
The parallel tempering method can be used as a super simulated annealing that does not need restarting, since a system at high temperature can feed new local optimizers to a system at low temperature, allowing tunneling between metastable states and improving convergence to a global optimum.
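The scheme can be sketched in a few lines of plain Python. This is an illustrative toy, not from the source: the one-dimensional double-well energy, the temperature ladder, the step size, and the sweep count are arbitrary choices; the swap test implements the acceptance probability min(1, exp[(1/kT_i − 1/kT_j)(E_i − E_j)]) with k = 1.

```python
import math
import random

random.seed(0)

def energy(x):
    """Double-well potential: minima near x = ±1, barrier at x = 0."""
    return 8.0 * (x * x - 1.0) ** 2

def parallel_tempering(temps, sweeps=4000, step=0.3):
    """Replica-exchange MCMC on a 1-d double well; returns cold-replica samples."""
    betas = [1.0 / t for t in temps]
    xs = [1.0 for _ in temps]          # all replicas start in the right well
    cold_samples = []
    for _ in range(sweeps):
        # ordinary Metropolis update within each replica
        for i, beta in enumerate(betas):
            y = xs[i] + random.uniform(-step, step)
            if random.random() < math.exp(min(0.0, -beta * (energy(y) - energy(xs[i])))):
                xs[i] = y
        # attempt to swap a random pair of neighbouring temperatures
        i = random.randrange(len(temps) - 1)
        delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if random.random() < math.exp(min(0.0, delta)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])
    return cold_samples

samples = parallel_tempering([0.2, 0.5, 1.0, 2.5])
# the cold replica should visit both wells thanks to the exchanges
print(sum(1 for s in samples if s < 0) / len(samples))
```

At the coldest temperature alone the barrier is essentially impassable; the hot replica crosses it freely, and the swap moves let those crossings propagate down the ladder, which is the tunneling effect described above.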
|
https://en.wikipedia.org/wiki/Parallel_tempering
|
In mathematics, Weyl's lemma, named after Hermann Weyl, states that every weak solution of Laplace's equation is a smooth solution. This contrasts with the wave equation, for example, which has weak solutions that are not smooth solutions. Weyl's lemma is a special case of elliptic or hypoelliptic regularity.
Let Ω{\displaystyle \Omega } be an open subset of n{\displaystyle n}-dimensional Euclidean space Rn{\displaystyle \mathbb {R} ^{n}}, and let Δ{\displaystyle \Delta } denote the usual Laplace operator. Weyl's lemma[1] states that if a locally integrable function u∈Lloc1(Ω){\displaystyle u\in L_{\mathrm {loc} }^{1}(\Omega )} is a weak solution of Laplace's equation, in the sense that
∫ΩuΔφdx=0{\displaystyle \int _{\Omega }u\,\Delta \varphi \,dx=0}
for every test function (smooth function with compact support) φ∈Cc∞(Ω){\displaystyle \varphi \in C_{c}^{\infty }(\Omega )}, then (up to redefinition on a set of measure zero) u∈C∞(Ω){\displaystyle u\in C^{\infty }(\Omega )} is smooth and satisfies Δu=0{\displaystyle \Delta u=0} pointwise in Ω{\displaystyle \Omega }.
This result implies the interior regularity of harmonic functions in Ω{\displaystyle \Omega }, but it does not say anything about their regularity on the boundary ∂Ω{\displaystyle \partial \Omega }.
To prove Weyl's lemma, one convolves the function u{\displaystyle u} with an appropriate mollifier φε{\displaystyle \varphi _{\varepsilon }} and shows that the mollification uε=φε∗u{\displaystyle u_{\varepsilon }=\varphi _{\varepsilon }\ast u} satisfies Laplace's equation, which implies that uε{\displaystyle u_{\varepsilon }} has the mean value property. Taking the limit as ε→0{\displaystyle \varepsilon \to 0} and using the properties of mollifiers, one finds that u{\displaystyle u} also has the mean value property,[2] which implies that it is a smooth solution of Laplace's equation.[3][4] Alternative proofs use the smoothness of the fundamental solution of the Laplacian or suitable a priori elliptic estimates.
Let (ρε)ε>0{\displaystyle \left(\rho _{\varepsilon }\right)_{\varepsilon >0}} be the standard mollifier.
Fix a compact set Ω′⋐Ω{\displaystyle \Omega ^{\prime }\Subset \Omega } and let ε0=dist(Ω′,∂Ω){\displaystyle \varepsilon _{0}=\operatorname {dist} \left(\Omega ^{\prime },\partial \Omega \right)} be the distance between Ω′{\displaystyle \Omega ^{\prime }} and the boundary of Ω{\displaystyle \Omega }.
For each x∈Ω′{\displaystyle x\in \Omega ^{\prime }} and ε∈(0,ε0){\displaystyle \varepsilon \in \left(0,\varepsilon _{0}\right)} the function
y↦ρε(x−y){\displaystyle y\mapsto \rho _{\varepsilon }(x-y)}
belongs to the test functions D(Ω){\displaystyle {\mathcal {D}}(\Omega )} and so we may consider
⟨u,ρε(x−⋅)⟩=∫Ωu(y)ρε(x−y)dy.{\displaystyle \left\langle u,\rho _{\varepsilon }(x-\cdot )\right\rangle =\int _{\Omega }u(y)\,\rho _{\varepsilon }(x-y)\,dy.}
We assert that it is independent of ε∈(0,ε0){\displaystyle \varepsilon \in \left(0,\varepsilon _{0}\right)}. To prove it we calculate ddερε(x−y){\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x-y)} for x,y∈Rn{\displaystyle x,y\in \mathbb {R} ^{n}}.
Recall that
ρε(x)=ε−nρ(x/ε),{\displaystyle \rho _{\varepsilon }(x)=\varepsilon ^{-n}\rho (x/\varepsilon ),}
where ρ{\displaystyle \rho } is the standard mollifier kernel on Rn{\displaystyle \mathbb {R} ^{n}}, the usual bump function supported in the closed unit ball. If we put
θ(t)=cexp⁡(1t−1){\displaystyle \theta (t)=c\,\exp \left({\frac {1}{t-1}}\right)} for t<1{\displaystyle t<1} and θ(t)=0{\displaystyle \theta (t)=0} for t⩾1{\displaystyle t\geqslant 1} (with c{\displaystyle c} the normalizing constant),
then ρ(x)=θ(|x|2){\displaystyle \rho (x)=\theta \left(|x|^{2}\right)}.
Clearly θ∈C∞(R){\displaystyle \theta \in \mathrm {C} ^{\infty }(\mathbb {R} )} satisfies θ(t)=0{\displaystyle \theta (t)=0} for t⩾1{\displaystyle t\geqslant 1}. Now calculate
ddερε(x)=−ε−n−1(nρ(x/ε)+∇ρ(x/ε)⋅(x/ε)).{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x)=-\varepsilon ^{-n-1}\left(n\rho (x/\varepsilon )+\nabla \rho (x/\varepsilon )\cdot (x/\varepsilon )\right).}
Put K(x)=−nρ(x)−∇ρ(x)⋅x{\displaystyle K(x)=-n\rho (x)-\nabla \rho (x)\cdot x} so that
ddερε(x)=ε−n−1K(x/ε).{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x)=\varepsilon ^{-n-1}K(x/\varepsilon ).}
In terms of ρ(x)=θ(|x|2){\displaystyle \rho (x)=\theta \left(|x|^{2}\right)} we get
K(x)=−nθ(|x|2)−2|x|2θ′(|x|2),{\displaystyle K(x)=-n\theta \left(|x|^{2}\right)-2|x|^{2}\theta ^{\prime }\left(|x|^{2}\right),}
and if we set
Θ(t)=12∫t1θ(s)ds,{\displaystyle \Theta (t)={\frac {1}{2}}\int _{t}^{1}\theta (s)\,ds,}
then Θ∈C∞(R){\displaystyle \Theta \in \mathrm {C} ^{\infty }(\mathbb {R} )} with Θ(t)=0{\displaystyle \Theta (t)=0} for t⩾1{\displaystyle t\geqslant 1}, and Θ′(t)=−12θ(t){\displaystyle \Theta ^{\prime }(t)=-{\frac {1}{2}}\theta (t)}. Consequently
∇(Θ(|x|2))=2Θ′(|x|2)x=−θ(|x|2)x,{\displaystyle \nabla \left(\Theta \left(|x|^{2}\right)\right)=2\Theta ^{\prime }\left(|x|^{2}\right)x=-\theta \left(|x|^{2}\right)x,}
and so K(x)=div⁡∇(Θ(|x|2))=(ΔΦ)(x){\displaystyle K(x)=\operatorname {div} \nabla \left(\Theta \left(|x|^{2}\right)\right)=(\Delta \Phi )(x)}, where Φ(x)=Θ(|x|2){\displaystyle \Phi (x)=\Theta \left(|x|^{2}\right)}. Observe that Φ∈D(B1(0)¯){\displaystyle \Phi \in {\mathcal {D}}\left({\overline {B_{1}(0)}}\right)}, and
ddερε(x)=ε−n−1(ΔΦ)(x/ε)=Δx(ε1−nΦ(x/ε)).{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x)=\varepsilon ^{-n-1}(\Delta \Phi )(x/\varepsilon )=\Delta _{x}\left(\varepsilon ^{1-n}\Phi (x/\varepsilon )\right).}
Here y↦ε1−nΦ(x−yε){\displaystyle y\mapsto \varepsilon ^{1-n}\Phi \left({\frac {x-y}{\varepsilon }}\right)} is supported in Bε(x)¯⊂Ω{\displaystyle {\overline {B_{\varepsilon }(x)}}\subset \Omega }, and so by assumption
⟨u,Δy(ε1−nΦ(x−⋅ε))⟩=0.{\displaystyle \left\langle u,\Delta _{y}\left(\varepsilon ^{1-n}\Phi \left({\frac {x-\cdot }{\varepsilon }}\right)\right)\right\rangle =0.}
Now by considering difference quotients we see that
ddε⟨u,ρε(x−⋅)⟩=⟨u,ddερε(x−⋅)⟩.{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\left\langle u,\rho _{\varepsilon }(x-\cdot )\right\rangle =\left\langle u,{\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x-\cdot )\right\rangle .}
Indeed, for ε,ε′>0{\displaystyle \varepsilon ,\varepsilon ^{\prime }>0} we have
ρε(x−y)−ρε′(x−y)ε−ε′→ddερε(x−y){\displaystyle {\frac {\rho _{\varepsilon }(x-y)-\rho _{\varepsilon ^{\prime }}(x-y)}{\varepsilon -\varepsilon ^{\prime }}}\rightarrow {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\rho _{\varepsilon }(x-y)} as ε′→ε{\displaystyle \varepsilon ^{\prime }\rightarrow \varepsilon }
in D(Ω){\displaystyle {\mathcal {D}}(\Omega )} with respect to y{\displaystyle y}, provided x∈Ω′{\displaystyle x\in \Omega ^{\prime }} and 0<ε<ε0{\displaystyle 0<\varepsilon <\varepsilon _{0}} (since we may differentiate both sides with respect to y{\displaystyle y}). But then ddε⟨u,ρε(x−⋅)⟩=0{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \varepsilon }}\left\langle u,\rho _{\varepsilon }(x-\cdot )\right\rangle =0}, and so ⟨u,ρε(x−⋅)⟩=⟨u,ρε1(x−⋅)⟩{\displaystyle \left\langle u,\rho _{\varepsilon }(x-\cdot )\right\rangle =\left\langle u,\rho _{\varepsilon _{1}}(x-\cdot )\right\rangle } for all ε∈(0,ε0){\displaystyle \varepsilon \in \left(0,\varepsilon _{0}\right)}, where ε1∈(0,ε0){\displaystyle \varepsilon _{1}\in \left(0,\varepsilon _{0}\right)}. Now let φ∈D(Ω′){\displaystyle \varphi \in {\mathcal {D}}\left(\Omega ^{\prime }\right)}. Then, by the usual trick when convolving distributions with test functions,
⟨ρε1∗u,φ⟩=⟨u,ρε1∗φ⟩{\displaystyle \left\langle \rho _{\varepsilon _{1}}*u,\varphi \right\rangle =\left\langle u,\rho _{\varepsilon _{1}}*\varphi \right\rangle } (using that ρ{\displaystyle \rho } is even), and so for ε∈(0,ε1){\displaystyle \varepsilon \in \left(0,\varepsilon _{1}\right)} we have
⟨ρε1∗u,φ⟩=⟨u,ρε∗φ⟩.{\displaystyle \left\langle \rho _{\varepsilon _{1}}*u,\varphi \right\rangle =\left\langle u,\rho _{\varepsilon }*\varphi \right\rangle .}
Hence, as ρε∗φ→φ{\displaystyle \rho _{\varepsilon }*\varphi \rightarrow \varphi } in D(Ω){\displaystyle {\mathcal {D}}(\Omega )} as ε↘0{\displaystyle \varepsilon \searrow 0}, we get
⟨ρε1∗u,φ⟩=⟨u,φ⟩.{\displaystyle \left\langle \rho _{\varepsilon _{1}}*u,\varphi \right\rangle =\left\langle u,\varphi \right\rangle .}
Consequently u|Ω′=(ρε1∗u)|Ω′∈C∞(Ω′){\displaystyle \left.u\right|_{\Omega ^{\prime }}=\left.\left(\rho _{\varepsilon _{1}}*u\right)\right|_{\Omega ^{\prime }}\in \mathrm {C} ^{\infty }\left(\Omega ^{\prime }\right)}, and since Ω′{\displaystyle \Omega ^{\prime }} was arbitrary, we are done.
More generally, the same result holds for every distributional solution of Laplace's equation: If T∈D′(Ω){\displaystyle T\in D'(\Omega )} satisfies ⟨T,Δφ⟩=0{\displaystyle \langle T,\Delta \varphi \rangle =0} for every φ∈Cc∞(Ω){\displaystyle \varphi \in C_{c}^{\infty }(\Omega )}, then T{\displaystyle T} is a regular distribution associated with a smooth solution u∈C∞(Ω){\displaystyle u\in C^{\infty }(\Omega )} of Laplace's equation.[5]
Weyl's lemma follows from more general results concerning the regularity properties of elliptic or hypoelliptic operators.[6] A linear partial differential operator P{\displaystyle P} with smooth coefficients is hypoelliptic if the singular support of Pu{\displaystyle Pu} is equal to the singular support of u{\displaystyle u} for every distribution u{\displaystyle u}. The Laplace operator is hypoelliptic, so if Δu=0{\displaystyle \Delta u=0}, then the singular support of u{\displaystyle u} is empty since the singular support of 0{\displaystyle 0} is empty, meaning that u∈C∞(Ω){\displaystyle u\in C^{\infty }(\Omega )}. In fact, since the Laplacian is elliptic, a stronger result is true, and solutions of Δu=0{\displaystyle \Delta u=0} are real-analytic.
|
https://en.wikipedia.org/wiki/Weyl%27s_lemma_(Laplace_equation)
|
Autocorrelation, sometimes known as serial correlation in the discrete time case, measures the correlation of a signal with a delayed copy of itself. Essentially, it quantifies the similarity between observations of a random variable at different points in time. The analysis of autocorrelation is a mathematical tool for identifying repeating patterns or hidden periodicities within a signal obscured by noise. Autocorrelation is widely used in signal processing, time domain and time series analysis to understand the behavior of data over time.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Various time series models incorporate autocorrelation, such as unit root processes, trend-stationary processes, autoregressive processes, and moving average processes.
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let {Xt}{\displaystyle \left\{X_{t}\right\}} be a random process, and t{\displaystyle t} be any point in time (t{\displaystyle t} may be an integer for a discrete-time process or a real number for a continuous-time process). Then Xt{\displaystyle X_{t}} is the value (or realization) produced by a given run of the process at time t{\displaystyle t}. Suppose that the process has mean μt{\displaystyle \mu _{t}} and variance σt2{\displaystyle \sigma _{t}^{2}} at time t{\displaystyle t}, for each t{\displaystyle t}. Then the definition of the autocorrelation function between times t1{\displaystyle t_{1}} and t2{\displaystyle t_{2}} is[1]: p.388[2]: p.165
RXX(t1,t2)=E[Xt1X¯t2]{\displaystyle \operatorname {R} _{XX}(t_{1},t_{2})=\operatorname {E} \left[X_{t_{1}}{\overline {X}}_{t_{2}}\right]}
where E{\displaystyle \operatorname {E} } is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times t1{\displaystyle t_{1}} and t2{\displaystyle t_{2}}:[1]: p.392[2]: p.168
KXX(t1,t2)=E[(Xt1−μt1)(Xt2−μt2)¯]=E[Xt1X¯t2]−μt1μ¯t2=RXX(t1,t2)−μt1μ¯t2{\displaystyle {\begin{aligned}\operatorname {K} _{XX}(t_{1},t_{2})&=\operatorname {E} \left[(X_{t_{1}}-\mu _{t_{1}}){\overline {(X_{t_{2}}-\mu _{t_{2}})}}\right]\\&=\operatorname {E} \left[X_{t_{1}}{\overline {X}}_{t_{2}}\right]-\mu _{t_{1}}{\overline {\mu }}_{t_{2}}\\&=\operatorname {R} _{XX}(t_{1},t_{2})-\mu _{t_{1}}{\overline {\mu }}_{t_{2}}\end{aligned}}}
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
If {Xt}{\displaystyle \left\{X_{t}\right\}} is a wide-sense stationary process then the mean μ{\displaystyle \mu } and the variance σ2{\displaystyle \sigma ^{2}} are time-independent, and further the autocovariance function depends only on the lag between t1{\displaystyle t_{1}} and t2{\displaystyle t_{2}}: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and autocorrelation can be expressed as a function of the time-lag, and that this would be an even function of the lag τ=t2−t1{\displaystyle \tau =t_{2}-t_{1}}. This gives the more familiar forms for the autocorrelation function[1]: p.395
RXX(τ)=E[Xt+τX¯t]{\displaystyle \operatorname {R} _{XX}(\tau )=\operatorname {E} \left[X_{t+\tau }{\overline {X}}_{t}\right]}
and the auto-covariance function:
KXX(τ)=E[(Xt+τ−μ)(Xt−μ)¯]=E[Xt+τX¯t]−μμ¯=RXX(τ)−μμ¯{\displaystyle {\begin{aligned}\operatorname {K} _{XX}(\tau )&=\operatorname {E} \left[(X_{t+\tau }-\mu ){\overline {(X_{t}-\mu )}}\right]\\&=\operatorname {E} \left[X_{t+\tau }{\overline {X}}_{t}\right]-\mu {\overline {\mu }}\\&=\operatorname {R} _{XX}(\tau )-\mu {\overline {\mu }}\end{aligned}}}
In particular, note that
KXX(0)=σ2.{\displaystyle \operatorname {K} _{XX}(0)=\sigma ^{2}.}
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the autocorrelation coefficient of a stochastic process is[2]: p.169
ρXX(t1,t2)=KXX(t1,t2)σt1σt2=E[(Xt1−μt1)(Xt2−μt2)¯]σt1σt2.{\displaystyle \rho _{XX}(t_{1},t_{2})={\frac {\operatorname {K} _{XX}(t_{1},t_{2})}{\sigma _{t_{1}}\sigma _{t_{2}}}}={\frac {\operatorname {E} \left[(X_{t_{1}}-\mu _{t_{1}}){\overline {(X_{t_{2}}-\mu _{t_{2}})}}\right]}{\sigma _{t_{1}}\sigma _{t_{2}}}}.}
If the function ρXX{\displaystyle \rho _{XX}} is well defined, its value must lie in the range [−1,1]{\displaystyle [-1,1]}, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a wide-sense stationary (WSS) process, the definition is
ρXX(τ)=KXX(τ)σ2=E[(Xt+τ−μ)(Xt−μ)¯]σ2{\displaystyle \rho _{XX}(\tau )={\frac {\operatorname {K} _{XX}(\tau )}{\sigma ^{2}}}={\frac {\operatorname {E} \left[(X_{t+\tau }-\mu ){\overline {(X_{t}-\mu )}}\right]}{\sigma ^{2}}}}.
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
The fact that the autocorrelation function RXX{\displaystyle \operatorname {R} _{XX}} is an even function can be stated as[2]: p.171RXX(t1,t2)=RXX(t2,t1)¯{\displaystyle \operatorname {R} _{XX}(t_{1},t_{2})={\overline {\operatorname {R} _{XX}(t_{2},t_{1})}}} respectively, for a WSS process:[2]: p.173RXX(τ)=RXX(−τ)¯.{\displaystyle \operatorname {R} _{XX}(\tau )={\overline {\operatorname {R} _{XX}(-\tau )}}.}
For a WSS process:[2]: p.174|RXX(τ)|≤RXX(0){\displaystyle \left|\operatorname {R} _{XX}(\tau )\right|\leq \operatorname {R} _{XX}(0)} Notice that RXX(0){\displaystyle \operatorname {R} _{XX}(0)} is always real.
The Cauchy–Schwarz inequality for stochastic processes:[1]: p.392|RXX(t1,t2)|2≤E[|Xt1|2]E[|Xt2|2]{\displaystyle \left|\operatorname {R} _{XX}(t_{1},t_{2})\right|^{2}\leq \operatorname {E} \left[|X_{t_{1}}|^{2}\right]\operatorname {E} \left[|X_{t_{2}}|^{2}\right]}
The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at τ=0{\displaystyle \tau =0} and will be exactly 0{\displaystyle 0} for all other τ{\displaystyle \tau }.
The Wiener–Khinchin theorem relates the autocorrelation function RXX{\displaystyle \operatorname {R} _{XX}} to the power spectral density SXX{\displaystyle S_{XX}} via the Fourier transform:
RXX(τ)=∫−∞∞SXX(f)ei2πfτdf{\displaystyle \operatorname {R} _{XX}(\tau )=\int _{-\infty }^{\infty }S_{XX}(f)e^{i2\pi f\tau }\,{\rm {d}}f}
SXX(f)=∫−∞∞RXX(τ)e−i2πfτdτ.{\displaystyle S_{XX}(f)=\int _{-\infty }^{\infty }\operatorname {R} _{XX}(\tau )e^{-i2\pi f\tau }\,{\rm {d}}\tau .}
For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:
RXX(τ)=∫−∞∞SXX(f)cos(2πfτ)df{\displaystyle \operatorname {R} _{XX}(\tau )=\int _{-\infty }^{\infty }S_{XX}(f)\cos(2\pi f\tau )\,{\rm {d}}f}
SXX(f)=∫−∞∞RXX(τ)cos(2πfτ)dτ.{\displaystyle S_{XX}(f)=\int _{-\infty }^{\infty }\operatorname {R} _{XX}(\tau )\cos(2\pi f\tau )\,{\rm {d}}\tau .}
The (potentially time-dependent) autocorrelation matrix (also called second moment) of a (potentially time-dependent) random vector X=(X1,…,Xn)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})^{\rm {T}}} is an n×n{\displaystyle n\times n} matrix containing as elements the autocorrelations of all pairs of elements of the random vector X{\displaystyle \mathbf {X} }. The autocorrelation matrix is used in various digital signal processing algorithms.
For a random vector X=(X1,…,Xn)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})^{\rm {T}}} containing random elements whose expected value and variance exist, the autocorrelation matrix is defined by[3]: p.190[1]: p.334
RXX≜E[XXT]{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }\triangleq \ \operatorname {E} \left[\mathbf {X} \mathbf {X} ^{\rm {T}}\right]}
where T{\displaystyle {}^{\rm {T}}} denotes transposition; the resulting matrix has dimensions n×n{\displaystyle n\times n}.
Written component-wise:
RXX=[E[X1X1]E[X1X2]⋯E[X1Xn]E[X2X1]E[X2X2]⋯E[X2Xn]⋮⋮⋱⋮E[XnX1]E[XnX2]⋯E[XnXn]]{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }={\begin{bmatrix}\operatorname {E} [X_{1}X_{1}]&\operatorname {E} [X_{1}X_{2}]&\cdots &\operatorname {E} [X_{1}X_{n}]\\\\\operatorname {E} [X_{2}X_{1}]&\operatorname {E} [X_{2}X_{2}]&\cdots &\operatorname {E} [X_{2}X_{n}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\operatorname {E} [X_{n}X_{1}]&\operatorname {E} [X_{n}X_{2}]&\cdots &\operatorname {E} [X_{n}X_{n}]\\\\\end{bmatrix}}}
If Z{\displaystyle \mathbf {Z} } is a complex random vector, the autocorrelation matrix is instead defined by
RZZ≜E[ZZH].{\displaystyle \operatorname {R} _{\mathbf {Z} \mathbf {Z} }\triangleq \ \operatorname {E} [\mathbf {Z} \mathbf {Z} ^{\rm {H}}].}
Here H{\displaystyle {}^{\rm {H}}} denotes the Hermitian transpose.
For example, ifX=(X1,X2,X3)T{\displaystyle \mathbf {X} =\left(X_{1},X_{2},X_{3}\right)^{\rm {T}}}is a random vector, thenRXX{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }}is a3×3{\displaystyle 3\times 3}matrix whose(i,j){\displaystyle (i,j)}-th entry isE[XiXj]{\displaystyle \operatorname {E} [X_{i}X_{j}]}.
In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient[4] or autocovariance function.
Given a signal f(t){\displaystyle f(t)}, the continuous autocorrelation Rff(τ){\displaystyle R_{ff}(\tau )} is most often defined as the continuous cross-correlation integral of f(t){\displaystyle f(t)} with itself, at lag τ{\displaystyle \tau }.[1]: p.411
Rff(τ)=∫−∞∞f(t+τ)f(t)¯dt=∫−∞∞f(t)f(t−τ)¯dt{\displaystyle R_{ff}(\tau )=\int _{-\infty }^{\infty }f(t+\tau ){\overline {f(t)}}\,{\rm {d}}t=\int _{-\infty }^{\infty }f(t){\overline {f(t-\tau )}}\,{\rm {d}}t}
where f(t)¯{\displaystyle {\overline {f(t)}}} represents the complex conjugate of f(t){\displaystyle f(t)}. Note that the parameter t{\displaystyle t} in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning.
The discrete autocorrelationR{\displaystyle R}at lagℓ{\displaystyle \ell }for a discrete-time signaly(n){\displaystyle y(n)}is
Ryy(ℓ)=∑n∈Zy(n)y(n−ℓ)¯{\displaystyle R_{yy}(\ell )=\sum _{n\in Z}y(n)\,{\overline {y(n-\ell )}}}
The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as
Rff(τ)=E[f(t)f(t−τ)¯]Ryy(ℓ)=E[y(n)y(n−ℓ)¯].{\displaystyle {\begin{aligned}R_{ff}(\tau )&=\operatorname {E} \left[f(t){\overline {f(t-\tau )}}\right]\\R_{yy}(\ell )&=\operatorname {E} \left[y(n)\,{\overline {y(n-\ell )}}\right].\end{aligned}}}
For processes that are not stationary, these will also be functions of t{\displaystyle t}, or n{\displaystyle n}.
For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to[4]
Rff(τ)=limT→∞1T∫0Tf(t+τ)f(t)¯dtRyy(ℓ)=limN→∞1N∑n=0N−1y(n)y(n−ℓ)¯.{\displaystyle {\begin{aligned}R_{ff}(\tau )&=\lim _{T\rightarrow \infty }{\frac {1}{T}}\int _{0}^{T}f(t+\tau ){\overline {f(t)}}\,{\rm {d}}t\\R_{yy}(\ell )&=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{n=0}^{N-1}y(n)\,{\overline {y(n-\ell )}}.\end{aligned}}}
These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes.
Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.)
If f{\displaystyle f} is a continuous periodic function of period T{\displaystyle T}, the integration from −∞{\displaystyle -\infty } to ∞{\displaystyle \infty } is replaced by integration over any interval [t0,t0+T]{\displaystyle [t_{0},t_{0}+T]} of length T{\displaystyle T}:
Rff(τ)≜∫t0t0+Tf(t+τ)f(t)¯dt{\displaystyle R_{ff}(\tau )\triangleq \int _{t_{0}}^{t_{0}+T}f(t+\tau ){\overline {f(t)}}\,dt}
which is equivalent to
Rff(τ)≜∫t0t0+Tf(t)f(t−τ)¯dt{\displaystyle R_{ff}(\tau )\triangleq \int _{t_{0}}^{t_{0}+T}f(t){\overline {f(t-\tau )}}\,dt}
In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes.[5]
Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be
R(j,k,ℓ)=∑n,q,rxn,q,rx¯n−j,q−k,r−ℓ.{\displaystyle R(j,k,\ell )=\sum _{n,q,r}x_{n,q,r}\,{\overline {x}}_{n-j,q-k,r-\ell }.}
When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function.
For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition Rxx(j)=∑nxnx¯n−j{\displaystyle R_{xx}(j)=\sum _{n}x_{n}\,{\overline {x}}_{n-j}} can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence x=(2,3,−1){\displaystyle x=(2,3,-1)} (i.e. x0=2,x1=3,x2=−1{\displaystyle x_{0}=2,x_{1}=3,x_{2}=-1}, and xi=0{\displaystyle x_{i}=0} for all other values of i) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values:{\displaystyle {\begin{array}{rrrrrr}&2&3&-1\\\times &2&3&-1\\\hline &-2&-3&1\\&&6&9&-3\\+&&&4&6&-2\\\hline &-2&3&14&3&-2\end{array}}}
Thus the required autocorrelation sequence is Rxx=(−2,3,14,3,−2){\displaystyle R_{xx}=(-2,3,14,3,-2)}, where Rxx(0)=14,{\displaystyle R_{xx}(0)=14,} Rxx(−1)=Rxx(1)=3,{\displaystyle R_{xx}(-1)=R_{xx}(1)=3,} and Rxx(−2)=Rxx(2)=−2,{\displaystyle R_{xx}(-2)=R_{xx}(2)=-2,} the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition, as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. x=(…,2,3,−1,2,3,−1,…),{\displaystyle x=(\ldots ,2,3,-1,2,3,-1,\ldots ),} then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give Rxx=(…,14,1,1,14,1,1,…){\displaystyle R_{xx}=(\ldots ,14,1,1,14,1,1,\ldots )} which has the same period as the signal sequence x.{\displaystyle x.} The procedure can be regarded as an application of the convolution property of the Z-transform of a discrete signal.
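The hand calculation above can be checked with a few lines of Python implementing the brute-force definition directly (`autocorr_brute` is a hypothetical helper name; the example signal is the one from the text):

```python
def autocorr_brute(x):
    """Brute-force R_xx(j) = sum_n x[n] * x[n-j] for a finite real sequence.

    For complex signals the second factor would be conjugated.
    """
    n = len(x)
    lags = range(-(n - 1), n)
    return {j: sum(x[i] * x[i - j] for i in range(n) if 0 <= i - j < n)
            for j in lags}

print(autocorr_brute([2, 3, -1]))
# lag 0 -> 14, lags +/-1 -> 3, lags +/-2 -> -2, matching (-2, 3, 14, 3, -2)
```

The symmetry R_xx(−j) = R_xx(j) for real signals is visible in the output, which is the halving of work mentioned above.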
While the brute force algorithm is order n2, several efficient algorithms exist which can compute the autocorrelation in order n log(n). For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data X(t) with two fast Fourier transforms (FFT):[6]
FR(f)=FFT[X(t)]S(f)=FR(f)FR∗(f)R(τ)=IFFT[S(f)]{\displaystyle {\begin{aligned}F_{R}(f)&=\operatorname {FFT} [X(t)]\\S(f)&=F_{R}(f)F_{R}^{*}(f)\\R(\tau )&=\operatorname {IFFT} [S(f)]\end{aligned}}}
where IFFT denotes the inverse fast Fourier transform. The asterisk denotes the complex conjugate.
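As a sketch, the three FFT steps can be written directly in NumPy; zero-padding to length 2n makes the circular correlation delivered by the FFT agree with the ordinary linear autocorrelation of the example sequence from above:

```python
import numpy as np

x = np.array([2.0, 3.0, -1.0])   # the example sequence from above
n = len(x)
F = np.fft.fft(x, 2 * n)         # F_R(f) = FFT[X(t)], zero-padded to 2n
S = F * np.conj(F)               # S(f) = F_R(f) F_R*(f)
r = np.fft.ifft(S).real          # R(tau) = IFFT[S(f)]
print(r[:n])                     # lags 0..n-1: [14.  3. -2.]
```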
Alternatively, a multiple τ correlation can be performed by using brute force calculation for low τ values, and then progressively binning the X(t) data with a logarithmic density to compute higher values, resulting in the same n log(n) efficiency, but with lower memory requirements.[7][8]
For a discrete process with known mean and variance for which we observe n{\displaystyle n} observations {X1,X2,…,Xn}{\displaystyle \{X_{1},\,X_{2},\,\ldots ,\,X_{n}\}}, an estimate of the autocorrelation coefficient may be obtained as
R^(k)=1(n−k)σ2∑t=1n−k(Xt−μ)(Xt+k−μ){\displaystyle {\hat {R}}(k)={\frac {1}{(n-k)\sigma ^{2}}}\sum _{t=1}^{n-k}(X_{t}-\mu )(X_{t+k}-\mu )}
for any positive integer k<n{\displaystyle k<n}. When the true mean μ{\displaystyle \mu } and variance σ2{\displaystyle \sigma ^{2}} are known, this estimate is unbiased. If the true mean and variance of the process are not known, there are several possibilities:
The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of k{\displaystyle k}, then forms a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the X{\displaystyle X}'s, the variance calculated may turn out to be negative.[11]
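A minimal sketch of the estimator above, assuming the true mean and variance are known (the function name is illustrative):

```python
import numpy as np

def acf_hat(x, k, mu, sigma2):
    # R_hat(k) = 1/((n-k) sigma^2) * sum_{t=1}^{n-k} (X_t - mu)(X_{t+k} - mu)
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.sum((x[: n - k] - mu) * (x[k:] - mu)) / ((n - k) * sigma2)

# An alternating sequence with known mean 0 and variance 1 gives R_hat(1) = -1.
print(acf_hat([1.0, -1.0, 1.0, -1.0], 1, mu=0.0, sigma2=1.0))  # -1.0
```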
In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin–Watson statistic can, however, be linearly mapped to the Pearson correlation between values and their lags.[12] A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where k is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR2, where T is the sample size and R2 is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as χ2{\displaystyle \chi ^{2}} with k degrees of freedom.
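The auxiliary regression behind the TR2 statistic can be sketched with ordinary least squares. This is an illustrative implementation, not a substitute for a statistics package, and it adopts the common convention of setting pre-sample residual lags to zero:

```python
import numpy as np

def breusch_godfrey_stat(resid, X, k):
    # Regress the residuals on the original regressors X plus k lags of the
    # residuals; the statistic is T * R^2, asymptotically chi^2(k) under H0.
    T = len(resid)
    lags = np.column_stack([np.concatenate([np.zeros(j), resid[: T - j]])
                            for j in range(1, k + 1)])
    Z = np.column_stack([X, lags])
    beta, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    ss_res = np.sum((resid - Z @ beta) ** 2)
    ss_tot = np.sum((resid - resid.mean()) ** 2)
    return T * (1.0 - ss_res / ss_tot)
```

Here X is assumed to contain an intercept column, so the auxiliary R2 lies between 0 and 1 and the statistic is nonnegative.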
Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).[13]
In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have R(τ)≠0{\displaystyle R(\tau )\neq 0} for τ=0,1,…,q{\displaystyle \tau =0,1,\ldots ,q}, and R(τ)=0{\displaystyle R(\tau )=0} for τ>q{\displaystyle \tau >q}.
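This cut-off property can be seen in the theoretical ACF of an MA(q) process, whose autocovariances are correlations of the coefficient vector with itself (a small sketch; the function name is illustrative):

```python
import numpy as np

def ma_acf(theta):
    # Theoretical R(0..q) for x_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q};
    # R(tau) = 0 for every tau > q.
    b = np.concatenate([[1.0], np.asarray(theta, dtype=float)])
    gamma = np.correlate(b, b, mode="full")[len(b) - 1:]
    return gamma / gamma[0]

# MA(1) with theta = 0.5: R(1) = theta / (1 + theta^2) = 0.4
print(ma_acf([0.5]))  # [1.  0.4]
```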
Autocorrelation's ability to find repeating patterns in data yields many applications, including:
Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields, however, the two terms are used as synonyms.
A time series of a random variable has serial dependence if the value at some time t{\displaystyle t} in the series is statistically dependent on the value at another time s{\displaystyle s}. A series is serially independent if there is no dependence between any pair.
If a time series {Xt}{\displaystyle \left\{X_{t}\right\}} is stationary, then statistical dependence between the pair (Xt,Xs){\displaystyle (X_{t},X_{s})} would imply that there is statistical dependence between all pairs of values at the same lag τ=s−t{\displaystyle \tau =s-t}.
|
https://en.wikipedia.org/wiki/Autocorrelation_function
|
A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables.[1] If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as an autocorrelation function, which is made up of autocorrelations. Correlation functions of different random variables are sometimes called cross-correlation functions to emphasize that different variables are being considered and because they are made up of cross-correlations.
Correlation functions are a useful indicator of dependencies as a function of distance in time or space, and they can be used to assess the distance required between sample points for the values to be effectively uncorrelated. In addition, they can form the basis of rules for interpolating values at points for which there are no observations.
Correlation functions used in astronomy, financial analysis, econometrics, and statistical mechanics differ only in the particular stochastic processes they are applied to. In quantum field theory there are correlation functions over quantum distributions.
For possibly distinct random variables X(s) and Y(t) at different points s and t of some space, the correlation function is C(s,t)=corr(X(s),Y(t)),{\displaystyle C(s,t)=\operatorname {corr} (X(s),Y(t)),}
where corr{\displaystyle \operatorname {corr} } is described in the article on correlation. In this definition, it has been assumed that the stochastic variables are scalar-valued. If they are not, then more complicated correlation functions can be defined. For example, if X(s) is a random vector with n elements and Y(t) is a vector with q elements, then an n×q matrix of correlation functions is defined with i,j{\displaystyle i,j} element Cij(s,t)=corr(Xi(s),Yj(t)).{\displaystyle C_{ij}(s,t)=\operatorname {corr} (X_{i}(s),Y_{j}(t)).}
When n=q, sometimes the trace of this matrix is focused on. If the probability distributions have any target space symmetries, i.e. symmetries in the value space of the stochastic variable (also called internal symmetries), then the correlation matrix will have induced symmetries. Similarly, if there are symmetries of the space (or time) domain in which the random variables exist (also called spacetime symmetries), then the correlation function will have corresponding space or time symmetries. Examples of important spacetime symmetries are translational symmetry and rotational symmetry.
Higher order correlation functions are often defined. A typical correlation function of order n is (the angle brackets represent the expectation value) Ci1i2⋯in(s1,s2,…,sn)=⟨Xi1(s1)Xi2(s2)⋯Xin(sn)⟩.{\displaystyle C_{i_{1}i_{2}\cdots i_{n}}(s_{1},s_{2},\dots ,s_{n})=\left\langle X_{i_{1}}(s_{1})X_{i_{2}}(s_{2})\cdots X_{i_{n}}(s_{n})\right\rangle .}
If the random vector has only one component variable, then the indices i,j{\displaystyle i,j} are redundant. If there are symmetries, then the correlation function can be broken up into irreducible representations of the symmetries, both internal and spacetime.
With these definitions, the study of correlation functions is similar to the study of probability distributions. Many stochastic processes can be completely characterized by their correlation functions; the most notable example is the class of Gaussian processes.
Probability distributions defined on a finite number of points can always be normalized, but when these are defined over continuous spaces, then extra care is called for. The study of such distributions started with the study of random walks and led to the notion of the Itō calculus.
The Feynman path integral in Euclidean space generalizes this to other problems of interest to statistical mechanics. Any probability distribution which obeys a condition on correlation functions called reflection positivity leads to a local quantum field theory after Wick rotation to Minkowski spacetime (see Osterwalder–Schrader axioms). The operation of renormalization is a specified set of mappings from the space of probability distributions to itself. A quantum field theory is called renormalizable if this mapping has a fixed point which gives a quantum field theory.
|
https://en.wikipedia.org/wiki/Correlation_function
|
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x{\displaystyle x} and y{\displaystyle y} directions contain all of the necessary information; a 2×2{\displaystyle 2\times 2} matrix would be necessary to fully characterize the two-dimensional variation.
Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).
The covariance matrix of a random vectorX{\displaystyle \mathbf {X} }is typically denoted byKXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }},Σ{\displaystyle \Sigma }orS{\displaystyle S}.
Throughout this article, boldfaced unsubscripted X{\displaystyle \mathbf {X} } and Y{\displaystyle \mathbf {Y} } are used to refer to random vectors, and Roman subscripted Xi{\displaystyle X_{i}} and Yi{\displaystyle Y_{i}} are used to refer to scalar random variables.
If the entries in thecolumn vectorX=(X1,X2,…,Xn)T{\displaystyle \mathbf {X} =(X_{1},X_{2},\dots ,X_{n})^{\mathsf {T}}}arerandom variables, each with finitevarianceandexpected value, then the covariance matrixKXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }}is the matrix whose(i,j){\displaystyle (i,j)}entry is thecovariance[1]: 177KXiXj=cov[Xi,Xj]=E[(Xi−E[Xi])(Xj−E[Xj])]{\displaystyle \operatorname {K} _{X_{i}X_{j}}=\operatorname {cov} [X_{i},X_{j}]=\operatorname {E} [(X_{i}-\operatorname {E} [X_{i}])(X_{j}-\operatorname {E} [X_{j}])]}where the operatorE{\displaystyle \operatorname {E} }denotes the expected value (mean) of its argument.
Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications,[2] call the matrix KXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }} the variance of the random vector X{\displaystyle \mathbf {X} }, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector X{\displaystyle \mathbf {X} }.var(X)=cov(X,X)=E[(X−E[X])(X−E[X])T].{\displaystyle \operatorname {var} (\mathbf {X} )=\operatorname {cov} (\mathbf {X} ,\mathbf {X} )=\operatorname {E} \left[(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathsf {T}}\right].}
Both forms are quite standard, and there is no ambiguity between them. The matrixKXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }}is also often called thevariance-covariance matrix, since the diagonal terms are in fact variances.
By comparison, the notation for the cross-covariance matrix between two vectors is cov(X,Y)=KXY=E[(X−E[X])(Y−E[Y])T].{\displaystyle \operatorname {cov} (\mathbf {X} ,\mathbf {Y} )=\operatorname {K} _{\mathbf {X} \mathbf {Y} }=\operatorname {E} \left[(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])^{\mathsf {T}}\right].}
The auto-covariance matrixKXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }}is related to theautocorrelation matrixRXX{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }}byKXX=E[(X−E[X])(X−E[X])T]=RXX−E[X]E[X]T{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathsf {T}}]=\operatorname {R} _{\mathbf {X} \mathbf {X} }-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {X} ]^{\mathsf {T}}}where the autocorrelation matrix is defined asRXX=E[XXT]{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }=\operatorname {E} [\mathbf {X} \mathbf {X} ^{\mathsf {T}}]}.
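The identity relating the two matrices is easy to verify on sample data (columns are observations; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1000))                 # 3 variables, 1000 observations
R = X @ X.T / X.shape[1]                       # sample version of R_XX = E[X X^T]
mu = X.mean(axis=1, keepdims=True)
K = R - mu @ mu.T                              # K_XX = R_XX - E[X] E[X]^T
# K agrees with the directly computed (biased) sample covariance np.cov(X, bias=True)
```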
An entity closely related to the covariance matrix is the matrix ofPearson product-moment correlation coefficientsbetween each of the random variables in the random vectorX{\displaystyle \mathbf {X} }, which can be written ascorr(X)=(diag(KXX))−12KXX(diag(KXX))−12,{\displaystyle \operatorname {corr} (\mathbf {X} )={\big (}\operatorname {diag} (\operatorname {K} _{\mathbf {X} \mathbf {X} }){\big )}^{-{\frac {1}{2}}}\,\operatorname {K} _{\mathbf {X} \mathbf {X} }\,{\big (}\operatorname {diag} (\operatorname {K} _{\mathbf {X} \mathbf {X} }){\big )}^{-{\frac {1}{2}}},}wherediag(KXX){\displaystyle \operatorname {diag} (\operatorname {K} _{\mathbf {X} \mathbf {X} })}is the matrix of the diagonal elements ofKXX{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }}(i.e., adiagonal matrixof the variances ofXi{\displaystyle X_{i}}fori=1,…,n{\displaystyle i=1,\dots ,n}).
Equivalently, the correlation matrix can be seen as the covariance matrix of thestandardized random variablesXi/σ(Xi){\displaystyle X_{i}/\sigma (X_{i})}fori=1,…,n{\displaystyle i=1,\dots ,n}.corr(X)=[1E[(X1−μ1)(X2−μ2)]σ(X1)σ(X2)⋯E[(X1−μ1)(Xn−μn)]σ(X1)σ(Xn)E[(X2−μ2)(X1−μ1)]σ(X2)σ(X1)1⋯E[(X2−μ2)(Xn−μn)]σ(X2)σ(Xn)⋮⋮⋱⋮E[(Xn−μn)(X1−μ1)]σ(Xn)σ(X1)E[(Xn−μn)(X2−μ2)]σ(Xn)σ(X2)⋯1].{\displaystyle \operatorname {corr} (\mathbf {X} )={\begin{bmatrix}1&{\frac {\operatorname {E} [(X_{1}-\mu _{1})(X_{2}-\mu _{2})]}{\sigma (X_{1})\sigma (X_{2})}}&\cdots &{\frac {\operatorname {E} [(X_{1}-\mu _{1})(X_{n}-\mu _{n})]}{\sigma (X_{1})\sigma (X_{n})}}\\\\{\frac {\operatorname {E} [(X_{2}-\mu _{2})(X_{1}-\mu _{1})]}{\sigma (X_{2})\sigma (X_{1})}}&1&\cdots &{\frac {\operatorname {E} [(X_{2}-\mu _{2})(X_{n}-\mu _{n})]}{\sigma (X_{2})\sigma (X_{n})}}\\\\\vdots &\vdots &\ddots &\vdots \\\\{\frac {\operatorname {E} [(X_{n}-\mu _{n})(X_{1}-\mu _{1})]}{\sigma (X_{n})\sigma (X_{1})}}&{\frac {\operatorname {E} [(X_{n}-\mu _{n})(X_{2}-\mu _{2})]}{\sigma (X_{n})\sigma (X_{2})}}&\cdots &1\end{bmatrix}}.}
Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and +1 inclusive.
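The rescaling formula above amounts to three matrix products (example covariance hypothetical):

```python
import numpy as np

K = np.array([[4.0, 2.0],
              [2.0, 9.0]])
D = np.diag(1.0 / np.sqrt(np.diag(K)))   # diag(K)^(-1/2)
corr = D @ K @ D                         # unit diagonal; off-diagonal 2/(2*3) = 1/3
print(corr)
```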
The inverse of this matrix, KXX−1{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }^{-1}}, if it exists, is the inverse covariance matrix, also known as the precision matrix (or concentration matrix).[3]
Just as the covariance matrix can be written as the rescaling of a correlation matrix by the marginal variances:cov(X)=[σx10σx2⋱0σxn][1ρx1,x2⋯ρx1,xnρx2,x11⋯ρx2,xn⋮⋮⋱⋮ρxn,x1ρxn,x2⋯1][σx10σx2⋱0σxn]{\displaystyle \operatorname {cov} (\mathbf {X} )={\begin{bmatrix}\sigma _{x_{1}}&&&0\\&\sigma _{x_{2}}\\&&\ddots \\0&&&\sigma _{x_{n}}\end{bmatrix}}{\begin{bmatrix}1&\rho _{x_{1},x_{2}}&\cdots &\rho _{x_{1},x_{n}}\\\rho _{x_{2},x_{1}}&1&\cdots &\rho _{x_{2},x_{n}}\\\vdots &\vdots &\ddots &\vdots \\\rho _{x_{n},x_{1}}&\rho _{x_{n},x_{2}}&\cdots &1\\\end{bmatrix}}{\begin{bmatrix}\sigma _{x_{1}}&&&0\\&\sigma _{x_{2}}\\&&\ddots \\0&&&\sigma _{x_{n}}\end{bmatrix}}}
So, using the idea ofpartial correlation, and partial variance, the inverse covariance matrix can be expressed analogously:cov(X)−1=[1σx1|x2...01σx2|x1,x3...⋱01σxn|x1...xn−1][1−ρx1,x2∣x3...⋯−ρx1,xn∣x2...xn−1−ρx2,x1∣x3...1⋯−ρx2,xn∣x1,x3...xn−1⋮⋮⋱⋮−ρxn,x1∣x2...xn−1−ρxn,x2∣x1,x3...xn−1⋯1][1σx1|x2...01σx2|x1,x3...⋱01σxn|x1...xn−1]{\displaystyle \operatorname {cov} (\mathbf {X} )^{-1}={\begin{bmatrix}{\frac {1}{\sigma _{x_{1}|x_{2}...}}}&&&0\\&{\frac {1}{\sigma _{x_{2}|x_{1},x_{3}...}}}\\&&\ddots \\0&&&{\frac {1}{\sigma _{x_{n}|x_{1}...x_{n-1}}}}\end{bmatrix}}{\begin{bmatrix}1&-\rho _{x_{1},x_{2}\mid x_{3}...}&\cdots &-\rho _{x_{1},x_{n}\mid x_{2}...x_{n-1}}\\-\rho _{x_{2},x_{1}\mid x_{3}...}&1&\cdots &-\rho _{x_{2},x_{n}\mid x_{1},x_{3}...x_{n-1}}\\\vdots &\vdots &\ddots &\vdots \\-\rho _{x_{n},x_{1}\mid x_{2}...x_{n-1}}&-\rho _{x_{n},x_{2}\mid x_{1},x_{3}...x_{n-1}}&\cdots &1\\\end{bmatrix}}{\begin{bmatrix}{\frac {1}{\sigma _{x_{1}|x_{2}...}}}&&&0\\&{\frac {1}{\sigma _{x_{2}|x_{1},x_{3}...}}}\\&&\ddots \\0&&&{\frac {1}{\sigma _{x_{n}|x_{1}...x_{n-1}}}}\end{bmatrix}}}This duality motivates a number of other dualities between marginalizing and conditioning for Gaussian random variables.
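The partial correlations appearing in this factorization can be read off the precision matrix P = K⁻¹ via ρ_{ij∣rest} = −P_ij / √(P_ii P_jj); a small sketch on a hypothetical 3×3 covariance:

```python
import numpy as np

K = np.array([[4.0, 2.0, 1.0],
              [2.0, 9.0, 3.0],
              [1.0, 3.0, 5.0]])
P = np.linalg.inv(K)                 # precision matrix
d = 1.0 / np.sqrt(np.diag(P))
partial = -np.outer(d, d) * P        # off-diagonal entries are partial correlations
np.fill_diagonal(partial, 1.0)
```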
ForKXX=var(X)=E[(X−E[X])(X−E[X])T]{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }=\operatorname {var} (\mathbf {X} )=\operatorname {E} \left[\left(\mathbf {X} -\operatorname {E} [\mathbf {X} ]\right)\left(\mathbf {X} -\operatorname {E} [\mathbf {X} ]\right)^{\mathsf {T}}\right]}andμX=E[X]{\displaystyle {\boldsymbol {\mu }}_{\mathbf {X} }=\operatorname {E} [{\textbf {X}}]}, whereX=(X1,…,Xn)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})^{\mathsf {T}}}is ann{\displaystyle n}-dimensional random variable, the following basic properties apply:[4]
Indeed, from property 4 it follows that under a linear transformation of the random variable X{\displaystyle \mathbf {X} } with covariance matrix ΣX=cov(X){\displaystyle \mathbf {\Sigma _{X}} =\mathrm {cov} (\mathbf {X} )} by a linear operator A{\displaystyle \mathbf {A} } such that Y=AX{\displaystyle \mathbf {Y} =\mathbf {A} \mathbf {X} }, the covariance matrix is transformed as ΣY=AΣXA⊤.{\displaystyle \mathbf {\Sigma _{Y}} =\mathbf {A} \mathbf {\Sigma _{X}} \mathbf {A} ^{\top }.}
Since by property 3 the matrix ΣX{\displaystyle \mathbf {\Sigma _{X}} } is symmetric, it can be diagonalized by a linear orthogonal transformation; that is, there exists an orthogonal matrix A{\displaystyle \mathbf {A} } (so that A⊤=A−1{\displaystyle \mathbf {A} ^{\top }=\mathbf {A} ^{-1}}) such that AΣXA⊤{\displaystyle \mathbf {A} \mathbf {\Sigma _{X}} \mathbf {A} ^{\top }} is a diagonal matrix.
The joint meanμ{\displaystyle {\boldsymbol {\mu }}}andjoint covariance matrixΣ{\displaystyle {\boldsymbol {\Sigma }}}ofX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }can be written in block formμ=[μXμY],Σ=[KXXKXYKYXKYY]{\displaystyle {\boldsymbol {\mu }}={\begin{bmatrix}{\boldsymbol {\mu }}_{X}\\{\boldsymbol {\mu }}_{Y}\end{bmatrix}},\qquad {\boldsymbol {\Sigma }}={\begin{bmatrix}\operatorname {K} _{\mathbf {XX} }&\operatorname {K} _{\mathbf {XY} }\\\operatorname {K} _{\mathbf {YX} }&\operatorname {K} _{\mathbf {YY} }\end{bmatrix}}}whereKXX=var(X){\displaystyle \operatorname {K} _{\mathbf {XX} }=\operatorname {var} (\mathbf {X} )},KYY=var(Y){\displaystyle \operatorname {K} _{\mathbf {YY} }=\operatorname {var} (\mathbf {Y} )}andKXY=KYXT=cov(X,Y){\displaystyle \operatorname {K} _{\mathbf {XY} }=\operatorname {K} _{\mathbf {YX} }^{\mathsf {T}}=\operatorname {cov} (\mathbf {X} ,\mathbf {Y} )}.
KXX{\displaystyle \operatorname {K} _{\mathbf {XX} }} and KYY{\displaystyle \operatorname {K} _{\mathbf {YY} }} can be identified as the variance matrices of the marginal distributions for X{\displaystyle \mathbf {X} } and Y{\displaystyle \mathbf {Y} } respectively.
IfX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }arejointly normally distributed,X,Y∼N(μ,Σ),{\displaystyle \mathbf {X} ,\mathbf {Y} \sim \ {\mathcal {N}}({\boldsymbol {\mu }},\operatorname {\boldsymbol {\Sigma }} ),}then theconditional distributionforY{\displaystyle \mathbf {Y} }givenX{\displaystyle \mathbf {X} }is given by[5]Y∣X∼N(μY|X,KY|X),{\displaystyle \mathbf {Y} \mid \mathbf {X} \sim \ {\mathcal {N}}({\boldsymbol {\mu }}_{\mathbf {Y|X} },\operatorname {K} _{\mathbf {Y|X} }),}defined byconditional meanμY|X=μY+KYXKXX−1(X−μX){\displaystyle {\boldsymbol {\mu }}_{\mathbf {Y} |\mathbf {X} }={\boldsymbol {\mu }}_{\mathbf {Y} }+\operatorname {K} _{\mathbf {YX} }\operatorname {K} _{\mathbf {XX} }^{-1}\left(\mathbf {X} -{\boldsymbol {\mu }}_{\mathbf {X} }\right)}andconditional varianceKY|X=KYY−KYXKXX−1KXY.{\displaystyle \operatorname {K} _{\mathbf {Y|X} }=\operatorname {K} _{\mathbf {YY} }-\operatorname {K} _{\mathbf {YX} }\operatorname {K} _{\mathbf {XX} }^{-1}\operatorname {K} _{\mathbf {XY} }.}
The matrix KYXKXX−1{\displaystyle \operatorname {K} _{\mathbf {YX} }\operatorname {K} _{\mathbf {XX} }^{-1}} is known as the matrix of regression coefficients, while in linear algebra KY|X{\displaystyle \operatorname {K} _{\mathbf {Y|X} }} is the Schur complement of KXX{\displaystyle \operatorname {K} _{\mathbf {XX} }} in Σ{\displaystyle {\boldsymbol {\Sigma }}}.
The matrix of regression coefficients may often be given in transpose form, KXX−1KXY{\displaystyle \operatorname {K} _{\mathbf {XX} }^{-1}\operatorname {K} _{\mathbf {XY} }}, suitable for post-multiplying a row vector of explanatory variables XT{\displaystyle \mathbf {X} ^{\mathsf {T}}} rather than pre-multiplying a column vector X{\displaystyle \mathbf {X} }. In this form the coefficients correspond to those obtained by inverting the matrix of the normal equations of ordinary least squares (OLS).
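The conditional-mean and Schur-complement formulas above can be sketched numerically (all block entries hypothetical):

```python
import numpy as np

Kxx = np.array([[2.0, 0.3],
                [0.3, 1.0]])
Kxy = np.array([[0.5],
                [0.2]])
Kyy = np.array([[1.5]])
mu_x, mu_y = np.array([0.0, 1.0]), np.array([2.0])
x_obs = np.array([1.0, 0.5])

B = Kxy.T @ np.linalg.inv(Kxx)        # regression coefficients K_YX K_XX^{-1}
mu_cond = mu_y + B @ (x_obs - mu_x)   # conditional mean of Y given X = x_obs
K_cond = Kyy - B @ Kxy                # conditional variance (Schur complement)
```

Conditioning can only shrink the variance, so K_cond is positive but no larger than Kyy.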
A covariance matrix with all non-zero elements tells us that all the individual random variables are interrelated. This means that the variables are not only directly correlated, but also correlated via other variables indirectly. Often such indirect, common-mode correlations are trivial and uninteresting. They can be suppressed by calculating the partial covariance matrix, that is, the part of the covariance matrix that shows only the interesting part of the correlations.
If two vectors of random variablesX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }are correlated via another vectorI{\displaystyle \mathbf {I} }, the latter correlations are suppressed in a matrix[6]KXY∣I=pcov(X,Y∣I)=cov(X,Y)−cov(X,I)cov(I,I)−1cov(I,Y).{\displaystyle \operatorname {K} _{\mathbf {XY\mid I} }=\operatorname {pcov} (\mathbf {X} ,\mathbf {Y} \mid \mathbf {I} )=\operatorname {cov} (\mathbf {X} ,\mathbf {Y} )-\operatorname {cov} (\mathbf {X} ,\mathbf {I} )\operatorname {cov} (\mathbf {I} ,\mathbf {I} )^{-1}\operatorname {cov} (\mathbf {I} ,\mathbf {Y} ).}The partial covariance matrixKXY∣I{\displaystyle \operatorname {K} _{\mathbf {XY\mid I} }}is effectively the simple covariance matrixKXY{\displaystyle \operatorname {K} _{\mathbf {XY} }}as if the uninteresting random variablesI{\displaystyle \mathbf {I} }were held constant.
The standard deviation matrix S{\displaystyle \mathbf {S} } is the extension of the standard deviation to multiple dimensions. It is the symmetric square root of the covariance matrix Σ{\displaystyle \mathbf {\Sigma } }.[7]
If a column vectorX{\displaystyle \mathbf {X} }ofn{\displaystyle n}possibly correlated random variables isjointly normally distributed, or more generallyelliptically distributed, then itsprobability density functionf(X){\displaystyle \operatorname {f} (\mathbf {X} )}can be expressed in terms of the covariance matrixΣ{\displaystyle {\boldsymbol {\Sigma }}}as follows[6]f(X)=(2π)−n/2|Σ|−1/2exp(−12(X−μ)TΣ−1(X−μ)),{\displaystyle \operatorname {f} (\mathbf {X} )=(2\pi )^{-n/2}|{\boldsymbol {\Sigma }}|^{-1/2}\exp \left(-{\tfrac {1}{2}}\mathbf {(X-\mu )^{\mathsf {T}}\Sigma ^{-1}(X-\mu )} \right),}whereμ=E[X]{\displaystyle {\boldsymbol {\mu }}=\operatorname {E} [\mathbf {X} ]}and|Σ|{\displaystyle |{\boldsymbol {\Sigma }}|}is thedeterminantofΣ{\displaystyle {\boldsymbol {\Sigma }}}.
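Written out directly, the density is a few lines of NumPy (the function name is illustrative; np.linalg.solve avoids forming the inverse explicitly):

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    # f(x) = (2 pi)^(-n/2) |Sigma|^(-1/2) exp(-1/2 (x-mu)^T Sigma^{-1} (x-mu))
    n = len(mu)
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    quad = diff @ np.linalg.solve(Sigma, diff)
    return float((2 * np.pi) ** (-n / 2)
                 * np.linalg.det(Sigma) ** (-0.5)
                 * np.exp(-0.5 * quad))

# Standard normal density at the origin: 1/sqrt(2 pi) ~= 0.3989
print(mvn_pdf([0.0], [0.0], np.eye(1)))
```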
Applied to one vector, the covariance matrix maps a linear combinationcof the random variablesXonto a vector of covariances with those variables:cTΣ=cov(cTX,X){\displaystyle \mathbf {c} ^{\mathsf {T}}\Sigma =\operatorname {cov} (\mathbf {c} ^{\mathsf {T}}\mathbf {X} ,\mathbf {X} )}. Treated as abilinear form, it yields the covariance between the two linear combinations:dTΣc=cov(dTX,cTX){\displaystyle \mathbf {d} ^{\mathsf {T}}{\boldsymbol {\Sigma }}\mathbf {c} =\operatorname {cov} (\mathbf {d} ^{\mathsf {T}}\mathbf {X} ,\mathbf {c} ^{\mathsf {T}}\mathbf {X} )}. The variance of a linear combination is thencTΣc{\displaystyle \mathbf {c} ^{\mathsf {T}}{\boldsymbol {\Sigma }}\mathbf {c} }, its covariance with itself.
Similarly, the (pseudo-)inverse covariance matrix provides an inner product ⟨c−μ|Σ+|c−μ⟩{\displaystyle \langle c-\mu |\Sigma ^{+}|c-\mu \rangle }, which induces the Mahalanobis distance, a measure of the "unlikelihood" of c.
From basic property 4. above, letb{\displaystyle \mathbf {b} }be a(p×1){\displaystyle (p\times 1)}real-valued vector, thenvar(bTX)=bTvar(X)b,{\displaystyle \operatorname {var} (\mathbf {b} ^{\mathsf {T}}\mathbf {X} )=\mathbf {b} ^{\mathsf {T}}\operatorname {var} (\mathbf {X} )\mathbf {b} ,\,}which must always be nonnegative, since it is thevarianceof a real-valued random variable, so a covariance matrix is always apositive-semidefinite matrix.
The above argument can be expanded as follows:wTE[(X−E[X])(X−E[X])T]w=E[wT(X−E[X])(X−E[X])Tw]=E[(wT(X−E[X]))2]≥0,{\displaystyle {\begin{aligned}&w^{\mathsf {T}}\operatorname {E} \left[(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathsf {T}}\right]w=\operatorname {E} \left[w^{\mathsf {T}}(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathsf {T}}w\right]\\&=\operatorname {E} {\big [}{\big (}w^{\mathsf {T}}(\mathbf {X} -\operatorname {E} [\mathbf {X} ]){\big )}^{2}{\big ]}\geq 0,\end{aligned}}}where the last inequality follows from the observation thatwT(X−E[X]){\displaystyle w^{\mathsf {T}}(\mathbf {X} -\operatorname {E} [\mathbf {X} ])}is a scalar.
Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose M{\displaystyle M} is a p×p{\displaystyle p\times p} symmetric positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that M{\displaystyle M} has a nonnegative symmetric square root, which can be denoted by M1/2. Let X{\displaystyle \mathbf {X} } be any p×1{\displaystyle p\times 1} column vector-valued random variable whose covariance matrix is the p×p{\displaystyle p\times p} identity matrix. Then var(M1/2X)=M1/2var(X)M1/2=M.{\displaystyle \operatorname {var} (\mathbf {M} ^{1/2}\mathbf {X} )=\mathbf {M} ^{1/2}\,\operatorname {var} (\mathbf {X} )\,\mathbf {M} ^{1/2}=\mathbf {M} .}
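The nonnegative symmetric square root used in this argument can be computed from the eigendecomposition guaranteed by the spectral theorem (example matrix hypothetical):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # symmetric positive-definite
w, V = np.linalg.eigh(M)                  # M = V diag(w) V^T
M_half = V @ np.diag(np.sqrt(w)) @ V.T    # symmetric square root M^{1/2}
# M_half @ M_half reproduces M
```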
The variance of a complex scalar-valued random variable with expected value μ{\displaystyle \mu } is conventionally defined using complex conjugation: var(Z)=E[(Z−μZ)(Z−μZ)¯],{\displaystyle \operatorname {var} (Z)=\operatorname {E} \left[(Z-\mu _{Z}){\overline {(Z-\mu _{Z})}}\right],} where the complex conjugate of a complex number z{\displaystyle z} is denoted z¯{\displaystyle {\overline {z}}}; thus the variance of a complex random variable is a real number.
IfZ=(Z1,…,Zn)T{\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{n})^{\mathsf {T}}}is a column vector of complex-valued random variables, then theconjugate transposeZH{\displaystyle \mathbf {Z} ^{\mathsf {H}}}is formed bybothtransposing and conjugating. In the following expression, the product of a vector with its conjugate transpose results in a square matrix called thecovariance matrix, as its expectation:[8]: 293KZZ=cov[Z,Z]=E[(Z−μZ)(Z−μZ)H],{\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }=\operatorname {cov} [\mathbf {Z} ,\mathbf {Z} ]=\operatorname {E} \left[(\mathbf {Z} -{\boldsymbol {\mu }}_{\mathbf {Z} })(\mathbf {Z} -{\boldsymbol {\mu }}_{\mathbf {Z} })^{\mathsf {H}}\right],}The matrix so obtained will beHermitianpositive-semidefinite,[9]with real numbers in the main diagonal and complex numbers off-diagonal.
For complex random vectors, another kind of second central moment, thepseudo-covariance matrix(also calledrelation matrix) is defined as follows:JZZ=cov[Z,Z¯]=E[(Z−μZ)(Z−μZ)T]{\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }=\operatorname {cov} [\mathbf {Z} ,{\overline {\mathbf {Z} }}]=\operatorname {E} \left[(\mathbf {Z} -{\boldsymbol {\mu }}_{\mathbf {Z} })(\mathbf {Z} -{\boldsymbol {\mu }}_{\mathbf {Z} })^{\mathsf {T}}\right]}
In contrast to the covariance matrix defined above, in this definition the Hermitian transpose is replaced by the ordinary transpose.
Its diagonal elements may be complex valued; it is acomplex symmetric matrix.
IfMX{\displaystyle \mathbf {M} _{\mathbf {X} }}andMY{\displaystyle \mathbf {M} _{\mathbf {Y} }}are centereddata matricesof dimensionp×n{\displaystyle p\times n}andq×n{\displaystyle q\times n}respectively, i.e. withncolumns of observations ofpandqrows of variables, from which the row means have been subtracted, then, if the row means were estimated from the data, sample covariance matricesQXX{\displaystyle \mathbf {Q} _{\mathbf {XX} }}andQXY{\displaystyle \mathbf {Q} _{\mathbf {XY} }}can be defined to beQXX=1n−1MXMXT,QXY=1n−1MXMYT{\displaystyle \mathbf {Q} _{\mathbf {XX} }={\frac {1}{n-1}}\mathbf {M} _{\mathbf {X} }\mathbf {M} _{\mathbf {X} }^{\mathsf {T}},\qquad \mathbf {Q} _{\mathbf {XY} }={\frac {1}{n-1}}\mathbf {M} _{\mathbf {X} }\mathbf {M} _{\mathbf {Y} }^{\mathsf {T}}}or, if the row means were known a priori,QXX=1nMXMXT,QXY=1nMXMYT.{\displaystyle \mathbf {Q} _{\mathbf {XX} }={\frac {1}{n}}\mathbf {M} _{\mathbf {X} }\mathbf {M} _{\mathbf {X} }^{\mathsf {T}},\qquad \mathbf {Q} _{\mathbf {XY} }={\frac {1}{n}}\mathbf {M} _{\mathbf {X} }\mathbf {M} _{\mathbf {Y} }^{\mathsf {T}}.}
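For the auto-covariance case with row means estimated from the data, the formula is a one-liner that agrees with np.cov (sample data illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 50))                  # p = 2 variables, n = 50 observations
M = X - X.mean(axis=1, keepdims=True)         # centered data matrix M_X
Q = M @ M.T / (X.shape[1] - 1)                # Q_XX = M_X M_X^T / (n - 1)
# Q agrees with np.cov(X), which applies the same n-1 normalization
```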
These empirical sample covariance matrices are the most straightforward and most often used estimators for the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties.
The covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data[10] or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices).
This is called principal component analysis (PCA) and the Karhunen–Loève transform (KL-transform).
The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.
The evolution strategy, a particular family of randomized search heuristics, fundamentally relies on a covariance matrix in its mechanism. The characteristic mutation operator draws the update step from a multivariate normal distribution using an evolving covariance matrix. There is a formal proof that the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix of the search landscape, up to a scalar factor and small random fluctuations (proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation).[11] Intuitively, this result is supported by the rationale that the optimal covariance distribution can offer mutation steps whose equidensity probability contours match the level sets of the landscape, and so they maximize the progress rate.
In covariance mapping the values of the cov(X, Y) or pcov(X, Y | I) matrix are plotted as a 2-dimensional map. When the vectors X and Y are discrete random functions, the map shows statistical relations between different regions of the random functions. Statistically independent regions of the functions show up on the map as zero-level flatland, while positive or negative correlations show up, respectively, as hills or valleys.
In practice the column vectors X, Y, and I are acquired experimentally as rows of n samples, e.g.

[X_1, X_2, …, X_n] = ⎡ X_1(t_1)  X_2(t_1)  ⋯  X_n(t_1) ⎤
                     ⎢ X_1(t_2)  X_2(t_2)  ⋯  X_n(t_2) ⎥
                     ⎢    ⋮         ⋮      ⋱     ⋮     ⎥
                     ⎣ X_1(t_m)  X_2(t_m)  ⋯  X_n(t_m) ⎦,

where X_j(t_i) is the i-th discrete value in sample j of the random function X(t). The expected values needed in the covariance formula are estimated using the sample mean, e.g.

⟨X⟩ = (1/n) ∑_{j=1}^{n} X_j,

and the covariance matrix is estimated by the sample covariance matrix

cov(X, Y) ≈ ⟨X Yᵀ⟩ − ⟨X⟩⟨Yᵀ⟩,

where the angular brackets denote sample averaging as before, except that Bessel's correction should be made to avoid bias. Using this estimate the partial covariance matrix can be calculated as

pcov(X, Y | I) = cov(X, Y) − cov(X, I) (cov(I, I) \ cov(I, Y)),

where the backslash denotes the left matrix division operator, which bypasses the requirement to invert a matrix and is available in some computational packages such as Matlab.[12]
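The estimators above can be sketched in a few lines. This is an illustrative implementation (the simulated common-mode fluctuation and coefficients are invented; Bessel's correction is omitted for brevity): the sample covariance is built from averaged outer products, and the MATLAB-style left division `A \ B` becomes `np.linalg.solve(A, B)`.

```python
import numpy as np

def cov(A, B):
    # A: m x n (m points per sample, n samples); likewise B.
    n = A.shape[1]
    return (A @ B.T) / n - A.mean(axis=1, keepdims=True) @ B.mean(axis=1, keepdims=True).T

def pcov(X, Y, I):
    # cov(X,Y) - cov(X,I) * (cov(I,I) \ cov(I,Y))
    return cov(X, Y) - cov(X, I) @ np.linalg.solve(cov(I, I), cov(I, Y))

rng = np.random.default_rng(1)
I = rng.standard_normal((1, 10000))            # common-mode fluctuation
X = 2.0 * I + 0.1 * rng.standard_normal((3, 10000))
Y = -1.5 * I + 0.1 * rng.standard_normal((2, 10000))

print(np.abs(cov(X, Y)).max() > 1.0)           # map dominated by common mode
print(np.abs(pcov(X, Y, I)).max() < 0.1)       # common mode suppressed
```

Using `solve` instead of explicitly inverting cov(I, I) mirrors the left-division operator described in the text.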
Fig. 1 illustrates how a partial covariance map is constructed on an example of an experiment performed at the FLASH free-electron laser in Hamburg.[13] The random function X(t) is the time-of-flight spectrum of ions from a Coulomb explosion of nitrogen molecules multiply ionised by a laser pulse. Since only a few hundred molecules are ionised at each laser pulse, the single-shot spectra are highly fluctuating. However, collecting typically m = 10^4 such spectra, X_j(t), and averaging them over j produces a smooth spectrum ⟨X(t)⟩, which is shown in red at the bottom of Fig. 1. The average spectrum ⟨X⟩ reveals several nitrogen ions in the form of peaks broadened by their kinetic energy, but finding the correlations between the ionisation stages and the ion momenta requires calculating a covariance map.
In the example of Fig. 1 spectra X_j(t) and Y_j(t) are the same, except that the range of the time-of-flight t differs. Panel a shows ⟨X Yᵀ⟩, panel b shows ⟨X⟩⟨Yᵀ⟩ and panel c shows their difference, which is cov(X, Y) (note a change in the colour scale). Unfortunately, this map is overwhelmed by uninteresting, common-mode correlations induced by the laser intensity fluctuating from shot to shot. To suppress such correlations the laser intensity I_j is recorded at every shot, put into I, and pcov(X, Y | I) is calculated as panels d and e show. The suppression of the uninteresting correlations is, however, imperfect because there are other sources of common-mode fluctuations than the laser intensity, and in principle all these sources should be monitored in the vector I. Yet in practice it is often sufficient to overcompensate the partial covariance correction, as panel f shows, where interesting correlations of ion momenta are now clearly visible as straight lines centred on ionisation stages of atomic nitrogen.
Two-dimensional infrared spectroscopy employs correlation analysis to obtain 2D spectra of the condensed phase. There are two versions of this analysis: synchronous and asynchronous. Mathematically, the former is expressed in terms of the sample covariance matrix and the technique is equivalent to covariance mapping.[14]
|
https://en.wikipedia.org/wiki/Covariance_matrix
|
In probability theory, for a probability measure P on a Hilbert space H with inner product ⟨·,·⟩, the covariance of P is the bilinear form Cov: H × H → R given by

Cov(x, y) = ∫_H ⟨x, z⟩⟨y, z⟩ dP(z)

for all x and y in H. The covariance operator C is then defined by

⟨Cx, y⟩ = Cov(x, y)

(from the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is self-adjoint.
Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the algebraic dual B^#, defined by

Cov(x, y) = ∫_B ⟨x, z⟩⟨y, z⟩ dP(z),

where ⟨x, z⟩ is now the value of the linear functional x on the element z.
Quite similarly, the covariance function of a function-valued random element (in special cases called a random process or random field) z is

Cov(x, y) = E[z(x) z(y)],

where z(x) is now the value of the function z at the point x, i.e., the value of the linear functional u ↦ u(x) evaluated at z.
|
https://en.wikipedia.org/wiki/Covariance_operator
|
In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, embedding problems, information theory, and other areas.
Let X be a nonempty set, sometimes referred to as the index set. A symmetric function K: X × X → R is called a positive-definite (p.d.) kernel on X if

∑_{i=1}^{n} ∑_{j=1}^{n} c_i c_j K(x_i, x_j) ≥ 0     (1.1)

holds for all x_1, …, x_n ∈ X, n ∈ N, and c_1, …, c_n ∈ R.
In probability theory, a distinction is sometimes made between positive-definite kernels, for which equality in (1.1) implies c_i = 0 for all i, and positive semi-definite (p.s.d.) kernels, which do not impose this condition. Note that this is equivalent to requiring that every finite matrix constructed by pairwise evaluation, K_ij = K(x_i, x_j), has either entirely positive (p.d.) or nonnegative (p.s.d.) eigenvalues.
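The eigenvalue characterization above is easy to check numerically. This sketch (the helper names and the choice of Gaussian RBF test kernel are illustrative) builds the Gram matrix K_ij = K(x_i, x_j) on a finite point set and verifies that its eigenvalues are nonnegative up to numerical tolerance.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel, a standard positive-definite example.
    return np.exp(-gamma * (x - y) ** 2)

def is_psd_on(points, kernel, tol=1e-10):
    # Build the Gram matrix and test its smallest eigenvalue.
    G = np.array([[kernel(a, b) for b in points] for a in points])
    return np.linalg.eigvalsh(G).min() >= -tol

pts = np.linspace(-2.0, 2.0, 10)
print(is_psd_on(pts, rbf))   # True: the Gaussian kernel is positive definite
```

A single finite point set can only refute positive definiteness, not prove it; a negative eigenvalue on any point set already shows the kernel is not p.d.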
In the mathematical literature, kernels are usually complex-valued functions. That is, a complex-valued function K: X × X → C is called a Hermitian kernel if K(x, y) is the complex conjugate of K(y, x), and positive definite if for every finite set of points x_1, …, x_n ∈ X and any complex numbers ξ_1, …, ξ_n ∈ C,

∑_{i=1}^{n} ∑_{j=1}^{n} ξ_i ξ̄_j K(x_i, x_j) ≥ 0,

where ξ̄_j denotes the complex conjugate.[1] In the rest of this article we assume real-valued functions, which is the common practice in applications of p.d. kernels.
The sigmoid kernel, or hyperbolic tangent kernel, is defined as K(x, y) = tanh(γ xᵀy + r), x, y ∈ R^d, where γ and r are real parameters. The kernel is not p.d., but has sometimes been used in kernel algorithms.[3]
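The indefiniteness claimed above can be demonstrated with a tiny counterexample (the parameter values γ = 1, r = −1 and the point set are chosen for illustration): for these parameters the Gram matrix already has a negative diagonal entry, K(0, 0) = tanh(−1) < 0, so it cannot be positive semi-definite.

```python
import numpy as np

def sigmoid_kernel(x, y, gamma=1.0, r=-1.0):
    # Hyperbolic tangent ("sigmoid") kernel in one dimension.
    return np.tanh(gamma * x * y + r)

pts = np.array([0.0, 0.5, 1.0])
G = np.array([[sigmoid_kernel(a, b) for b in pts] for a in pts])
print(np.linalg.eigvalsh(G).min() < 0)   # True: the Gram matrix is indefinite
```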
Positive-definite kernels, as defined in (1.1), appeared first in 1909 in a paper on integral equations by James Mercer.[4] Several other authors made use of this concept in the following two decades, but none of them explicitly used kernels K(x, y) = f(x − y), i.e. p.d. functions (indeed M. Mathias and S. Bochner seem not to have been aware of the study of p.d. kernels). Mercer's work arose from Hilbert's paper of 1904[5] on Fredholm integral equations of the second kind:

f(s) = φ(s) − λ ∫_a^b K(s, t) φ(t) dt.     (1.2)
In particular, Hilbert had shown that

∫_a^b ∫_a^b K(s, t) x(s) x(t) ds dt = ∑_n (1/λ_n) (∫_a^b x(s) ψ_n(s) ds)²,

where K is a continuous real symmetric kernel, x is continuous, {ψ_n} is a complete system of orthonormal eigenfunctions, and the λ_n's are the corresponding eigenvalues of (1.2). Hilbert defined a "definite" kernel as one for which the double integral J(x) = ∫_a^b ∫_a^b K(s, t) x(s) x(t) ds dt satisfies J(x) > 0 except for x(t) = 0. The original object of Mercer's paper was to characterize the kernels which are definite in the sense of Hilbert, but Mercer soon found that the class of such functions was too restrictive to characterize in terms of determinants. He therefore defined a continuous real symmetric kernel K(s, t) to be of positive type (i.e. positive-definite) if J(x) ≥ 0 for all real continuous functions x on [a, b], and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion K(s, t) = ∑_n ψ_n(s) ψ_n(t) / λ_n holds absolutely and uniformly.
At about the same time W. H. Young,[6] motivated by a different question in the theory of integral equations, showed that for continuous kernels condition (1.1) is equivalent to J(x) ≥ 0 for all x ∈ L¹[a, b].
E. H. Moore[7][8] initiated the study of a very general kind of p.d. kernel. If E is an abstract set, he calls functions K(x, y) defined on E × E "positive Hermitian matrices" if they satisfy (1.1) for all x_i ∈ E. Moore was interested in the generalization of integral equations and showed that to each such K there is a Hilbert space H of functions such that, for each f ∈ H, f(y) = (f, K(·, y))_H. This property is called the reproducing property of the kernel and turns out to be important in the solution of boundary-value problems for elliptic partial differential equations.
Another line of development in which p.d. kernels played a large role was the theory of harmonics on homogeneous spaces as begun by E. Cartan in 1929, and continued by H. Weyl and S. Ito. The most comprehensive theory of p.d. kernels in homogeneous spaces is that of M. Krein,[9] which includes as special cases the work on p.d. functions and irreducible unitary representations of locally compact groups.
In probability theory, p.d. kernels arise as covariance kernels of stochastic processes.[10]
Positive-definite kernels provide a framework that encompasses some basic Hilbert space constructions. In the following we present a tight relationship between positive-definite kernels and two mathematical objects, namely reproducing kernel Hilbert spaces and feature maps.
Let X be a set, H a Hilbert space of functions f: X → R, and (·,·)_H : H × H → R the corresponding inner product on H. For any x ∈ X the evaluation functional e_x : H → R is defined by f ↦ e_x(f) = f(x).
We first define a reproducing kernel Hilbert space (RKHS):
Definition: The space H is called a reproducing kernel Hilbert space if the evaluation functionals are continuous.
Every RKHS has a special function associated to it, namely the reproducing kernel:
Definition: A reproducing kernel is a function K: X × X → R such that

1. K(·, x) ∈ H for all x ∈ X, and
2. (f, K(·, x))_H = f(x) for all f ∈ H and x ∈ X.
The latter property is called the reproducing property.
The following result shows equivalence between RKHS and reproducing kernels:
Theorem—Every reproducing kernel K induces a unique RKHS, and every RKHS has a unique reproducing kernel.
Now the connection between positive-definite kernels and RKHS is given by the following theorem:
Theorem—Every reproducing kernel is positive-definite, and every positive definite kernel defines a unique RKHS, of which it is the unique reproducing kernel.
Thus, given a positive-definite kernel K, it is possible to build an associated RKHS with K as a reproducing kernel.
As stated earlier, positive-definite kernels can be constructed from inner products. This fact can be used to connect p.d. kernels with another interesting object that arises in machine learning applications, namely the feature map. Let F be a Hilbert space and (·,·)_F the corresponding inner product. Any map Φ: X → F is called a feature map; in this case we call F the feature space. It is easy to see[11] that every feature map defines a unique p.d. kernel by K(x, y) = (Φ(x), Φ(y))_F. Indeed, positive definiteness of K follows from the p.d. property of the inner product. On the other hand, every p.d. kernel, and its corresponding RKHS, have many associated feature maps. For example: let F = H and Φ(x) = K_x for all x ∈ X. Then (Φ(x), Φ(y))_F = (K_x, K_y)_H = K(x, y), by the reproducing property.
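The feature-map construction above can be made concrete with a small example (the particular map Φ is chosen for illustration): for the polynomial feature map Φ(x) = (x², √2·x, 1) in one dimension, the induced kernel (Φ(x), Φ(y)) equals the degree-2 polynomial kernel (xy + 1)².

```python
import numpy as np

def phi(x):
    # Feature map into R^3: Phi(x) = (x^2, sqrt(2)*x, 1).
    return np.array([x * x, np.sqrt(2.0) * x, 1.0])

def kernel(x, y):
    # Induced kernel: inner product in the feature space.
    return phi(x) @ phi(y)

x, y = 0.7, -1.3
# phi(x).phi(y) = x^2 y^2 + 2xy + 1 = (xy + 1)^2
print(np.isclose(kernel(x, y), (x * y + 1.0) ** 2))   # True
```

Positive definiteness of this kernel is automatic, since any Gram matrix is of the form AᵀA for the matrix A whose columns are the feature vectors.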
This suggests a new look at p.d. kernels as inner products in appropriate Hilbert spaces; in other words, p.d. kernels can be viewed as similarity maps which effectively quantify how similar two points x and y are through the value K(x, y). Moreover, through the equivalence of p.d. kernels and their corresponding RKHSs, every feature map can be used to construct an RKHS.
Kernel methods are often compared to distance-based methods such as nearest neighbors. In this section we discuss parallels between their two respective ingredients, namely kernels K and distances d.
Here by a distance function between each pair of elements of some set X, we mean a metric defined on that set, i.e. any nonnegative-valued function d on X × X which satisfies

1. d(x, y) = 0 if and only if x = y,
2. d(x, y) = d(y, x), and
3. d(x, z) ≤ d(x, y) + d(y, z).
One link between distances and p.d. kernels is given by a particular kind of kernel, called a negative definite kernel, defined as follows:
Definition: A symmetric function ψ: X × X → R is called a negative definite (n.d.) kernel on X if

∑_{i=1}^{n} ∑_{j=1}^{n} c_i c_j ψ(x_i, x_j) ≤ 0

holds for any n ∈ N, x_1, …, x_n ∈ X, and c_1, …, c_n ∈ R such that ∑_{i=1}^{n} c_i = 0.
The parallel between n.d. kernels and distances is the following: whenever an n.d. kernel vanishes on the set {(x, x): x ∈ X}, and is zero only on this set, then its square root is a distance for X.[12] At the same time, not every distance corresponds to an n.d. kernel. This is only true for Hilbertian distances, where a distance d is called Hilbertian if one can embed the metric space (X, d) isometrically into some Hilbert space.
On the other hand, n.d. kernels can be identified with a subfamily of p.d. kernels known as infinitely divisible kernels. A nonnegative-valued kernel K is said to be infinitely divisible if for every n ∈ N there exists a positive-definite kernel K_n such that K = (K_n)ⁿ.
Another link is that a p.d. kernel induces a pseudometric, where the first constraint on the distance function is loosened to allow d(x, y) = 0 for x ≠ y. Given a positive-definite kernel K, we can define a distance function as d(x, y) = √(K(x, x) − 2K(x, y) + K(y, y)).
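The induced (pseudo)metric above is straightforward to compute; this sketch uses the Gaussian RBF kernel as an illustrative p.d. kernel (the `max(..., 0.0)` guard only protects against tiny negative values from floating-point rounding).

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    # Gaussian RBF kernel, an illustrative positive-definite kernel.
    return np.exp(-gamma * (x - y) ** 2)

def kernel_dist(x, y, k=rbf):
    # d(x, y) = sqrt(K(x,x) - 2 K(x,y) + K(y,y))
    return np.sqrt(max(k(x, x) - 2.0 * k(x, y) + k(y, y), 0.0))

print(kernel_dist(1.0, 1.0) == 0.0)                    # d(x, x) = 0
print(kernel_dist(0.0, 2.0) == kernel_dist(2.0, 0.0))  # symmetry
print(kernel_dist(0.0, 2.0) > kernel_dist(0.0, 0.1))   # farther points, larger d
```

Geometrically, this is the feature-space distance ‖Φ(x) − Φ(y)‖ for any feature map Φ associated with K.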
Positive-definite kernels, through their equivalence with reproducing kernel Hilbert spaces (RKHS), are particularly important in the field of statistical learning theory because of the celebrated representer theorem, which states that every minimizer function in an RKHS can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies the empirical risk minimization problem from an infinite-dimensional to a finite-dimensional optimization problem.
There are several different ways in which kernels arise in probability theory.
Assume now that a noise variable ε(x), with zero mean and variance σ², is added to x, such that the noise is independent for different x and independent of Z; then the problem of finding a good estimate for f is identical to the one above, but with a modified kernel given by K(x, y) = E[Z(x)·Z(y)] + σ² δ_xy.
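The modified kernel above amounts to adding σ² to the diagonal of the Gram matrix. This sketch uses an illustrative base covariance kernel E[Z(x)Z(y)] (an RBF, chosen for the example); the noise term only affects the diagonal entries, where x = y.

```python
import numpy as np

def noisy_gram(points, base_kernel, sigma2):
    # Gram matrix of K(x,y) = E[Z(x)Z(y)] + sigma^2 * delta_xy.
    G = np.array([[base_kernel(a, b) for b in points] for a in points])
    return G + sigma2 * np.eye(len(points))

rbf = lambda x, y: np.exp(-0.5 * (x - y) ** 2)   # illustrative base kernel
pts = np.linspace(0.0, 3.0, 5)
G = noisy_gram(pts, rbf, sigma2=0.1)
print(np.allclose(np.diag(G), 1.1))   # diagonal lifted from 1.0 by sigma^2
```

In Gaussian process regression this diagonal "jitter" is exactly the observation-noise term, and it also improves the conditioning of the Gram matrix.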
One of the greatest application areas of so-called meshfree methods is the numerical solution of PDEs. Some of the popular meshfree methods are closely related to positive-definite kernels, such as the meshless local Petrov–Galerkin (MLPG) method, the reproducing kernel particle method (RKPM) and smoothed-particle hydrodynamics (SPH). These methods use a radial basis kernel for collocation.[13]
In the literature on computer experiments[14] and other engineering experiments, one increasingly encounters models based on p.d. kernels, RBFs or kriging. One such topic is response surface methodology. Other types of applications that boil down to data fitting are rapid prototyping and computer graphics. Here one often uses implicit surface models to approximate or interpolate point cloud data.
Applications of p.d. kernels in various other branches of mathematics are in multivariate integration, multivariate optimization, and in numerical analysis and scientific computing, where one studies fast, accurate and adaptive algorithms ideally implemented in high-performance computing environments.[15]
|
https://en.wikipedia.org/wiki/Positive-definite_kernel
|
In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as Rⁿ). That is, it is a function f(x) that takes on a random value at each point x ∈ Rⁿ (or some other domain). It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set. That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real- or integer-valued "time" but can instead take values that are multidimensional vectors or points on some manifold.[1]
Given a probability space (Ω, F, P), an X-valued random field is a collection of X-valued random variables indexed by elements in a topological space T. That is, a random field F is a collection

{F_t : t ∈ T},
where eachFt{\displaystyle F_{t}}is anX-valued random variable.
In its discrete version, a random field is a list of random numbers whose indices are identified with a discrete set of points in a space (for example, n-dimensional Euclidean space). Suppose there are four random variables, X_1, X_2, X_3, and X_4, located on a 2D grid at (0,0), (0,2), (2,2), and (2,0), respectively. Suppose each random variable can take the value −1 or 1, and the probability of each random variable's value depends on its immediately adjacent neighbours. This is a simple example of a discrete random field.
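The four-variable example above can be sketched by enumerating all 2⁴ configurations. The Ising-style coupling strength J and the exponential weighting are illustrative choices (not specified in the text): each ±1 configuration is weighted by how well neighbouring sites agree, and the four grid corners form a cycle in which each site has two adjacent neighbours.

```python
import itertools
import numpy as np

J = 1.0                                         # illustrative coupling strength
neighbours = [(0, 1), (1, 2), (2, 3), (3, 0)]   # adjacency of the four corners

configs = list(itertools.product([-1, 1], repeat=4))
weights = np.array([np.exp(J * sum(s[i] * s[j] for i, j in neighbours))
                    for s in configs])
probs = weights / weights.sum()                 # normalized distribution

print(np.isclose(probs.sum(), 1.0))             # a valid probability distribution
aligned = configs.index((1, 1, 1, 1))
mixed = configs.index((1, -1, 1, -1))
print(probs[aligned] > probs[mixed])            # aligned states are more likely
```

For large grids this exhaustive enumeration is infeasible, which is why sampling schemes such as Gibbs sampling are used instead.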
More generally, the values each X_i can take on might be defined over a continuous domain. In larger grids, it can also be useful to think of the random field as a "function-valued" random variable as described above. In quantum field theory the notion is generalized to a random functional, one that takes on random values over a space of functions (see Feynman integral).
Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field, conditional random field (CRF), and Gaussian random field. In 1974, Julian Besag proposed an approximation method relying on the relation between MRFs and Gibbs RFs.[citation needed]
An MRF exhibits the Markov property

P(X_i = x_i | X_j = x_j, j ≠ i) = P(X_i = x_i | X_j = x_j, j ∈ ∂i)

for each choice of values (x_j)_j. Here each ∂i is the set of neighbors of i. In other words, the probability that a random variable assumes a value depends only on its immediately neighboring random variables. The probability of a random variable in an MRF[clarification needed] is given by
where the sum (which can be an integral) is over the possible values of k.[clarification needed] It is sometimes difficult to compute this quantity exactly.
When used in the natural sciences, values in a random field are often spatially correlated. For example, adjacent values (i.e. values with adjacent indices) do not differ as much as values that are further apart. This is an example of a covariance structure, many different types of which may be modeled in a random field. One example is the Ising model, where sometimes only nearest-neighbor interactions are included as a simplification to better understand the model.
A common use of random fields is in the generation of computer graphics, particularly those that mimic natural surfaces such as water and earth. Random fields have also been used in subsurface ground models, as in [2].
In neuroscience, particularly in task-related functional brain imaging studies using PET or fMRI, statistical analysis of random fields is one common alternative to correction for multiple comparisons for finding regions with truly significant activation.[3] More generally, random fields can be used to correct for the look-elsewhere effect in statistical testing, where the domain is the parameter space being searched.[4]
They are also used in machine learning applications (see graphical models).
Random fields are of great use in studying natural processes by the Monte Carlo method, in which the random fields correspond to naturally spatially varying properties. This leads to tensor-valued random fields[clarification needed] in which the key role is played by a statistical volume element (SVE), a spatial box over which properties can be averaged; when the SVE becomes sufficiently large, its properties become deterministic and one recovers the representative volume element (RVE) of deterministic continuum physics. The second type of random fields that appear in continuum theories are those of dependent quantities (temperature, displacement, velocity, deformation, rotation, body and surface forces, stress, etc.).[5][clarification needed]
|
https://en.wikipedia.org/wiki/Random_field
|
In probability theory and related fields, a stochastic (/stəˈkæstɪk/) or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule.[1][4][5] Stochastic processes have applications in many disciplines such as biology,[6] chemistry,[7] ecology,[8] neuroscience,[9] physics,[10] image processing, signal processing,[11] control theory,[12] information theory,[13] computer science,[14] and telecommunications.[15] Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.[16][17][18]
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process,[a] used by Louis Bachelier to study price changes on the Paris Bourse,[21] and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time.[22] These two stochastic processes are considered the most important and central in the theory of stochastic processes,[1][4][23] and were invented repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.[21][24]
The term random function is also used to refer to a stochastic or random process,[25][26] because a stochastic process can also be interpreted as a random element in a function space.[27][28] The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables.[27][29] But often these two terms are used when the random variables are indexed by the integers or an interval of the real line.[5][29] If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.[5][30] The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.[5][28]
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks,[31] martingales,[32] Markov processes,[33] Lévy processes,[34] Gaussian processes,[35] random fields,[36] renewal processes, and branching processes.[37] The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology[38][39][40] as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis.[41][42][43] The theory of stochastic processes is considered to be an important contribution to mathematics[44] and it continues to be an active topic of research for both theoretical reasons and applications.[45][46][47]
A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set.[4][5] The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time.[1] Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or n-dimensional Euclidean space.[1][5] An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time.[48][49] A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.[28][50]
A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space.[51][52][53]
When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time.[54][55] If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes.[48][56][57] Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable.[58][59] If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence.[55]
If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is n-dimensional Euclidean space, then the stochastic process is called an n-dimensional vector process or n-vector process.[51][52]
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence.[60] In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics".[61] This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz,[62] who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob.[60] For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin,[63][64] though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931.[65]
According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888.[66]
The definition of a stochastic process varies,[67]but a stochastic process is traditionally defined as a collection of random variables indexed by some set.[68][69]The termsrandom processandstochastic processare considered synonyms and are used interchangeably, without the index set being precisely specified.[27][29][30][70][71][72]Both "collection",[28][70]or "family" are used[4][73]while instead of "index set", sometimes the terms "parameter set"[28]or "parameter space"[30]are used.
The termrandom functionis also used to refer to a stochastic or random process,[5][74][75]though sometimes it is only used when the stochastic process takes real values.[28][73]This term is also used when the index sets are mathematical spaces other than the real line,[5][76]while the termsstochastic processandrandom processare usually used when the index set is interpreted as time,[5][76][77]and other terms are used such asrandom fieldwhen the index set isn{\displaystyle n}-dimensional Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}or amanifold.[5][28][30]
A stochastic process can be denoted, among other ways, by{X(t)}t∈T{\displaystyle \{X(t)\}_{t\in T}},[56]{Xt}t∈T{\displaystyle \{X_{t}\}_{t\in T}},[69]{Xt}{\displaystyle \{X_{t}\}}[78]{X(t)}{\displaystyle \{X(t)\}}or simply asX{\displaystyle X}. The notationX(t){\displaystyle X(t)}is also used for the whole process, although this is regarded as anabuse of function notation,[79]sinceX(t){\displaystyle X(t)}orXt{\displaystyle X_{t}}properly refer to the random variable with the indext{\displaystyle t}, and not the entire stochastic process.[78]If the index set isT=[0,∞){\displaystyle T=[0,\infty )}, then one can write, for example,(Xt,t≥0){\displaystyle (X_{t},t\geq 0)}to denote the stochastic process.[29]
One of the simplest stochastic processes is theBernoulli process,[80]which is a sequence ofindependent and identically distributed(iid) random variables, where each random variable takes either the value one or zero, say one with probabilityp{\displaystyle p}and zero with probability1−p{\displaystyle 1-p}. This process can be linked to an idealisation of repeatedly flipping a coin, where the probability of obtaining a head is taken to bep{\displaystyle p}and its value is one, while the value of a tail is zero.[81]In other words, a Bernoulli process is a sequence of iid Bernoulli random variables,[82]where each idealised coin flip is an example of aBernoulli trial.[83]
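As a concrete illustration, a Bernoulli process can be simulated directly; the sketch below (the `bernoulli_process` helper is illustrative, not from any particular library) draws a sequence of iid 0/1 random variables with success probability p.

```python
import random

def bernoulli_process(p, n, seed=0):
    """Simulate n steps of a Bernoulli process: iid random variables
    taking the value 1 with probability p and 0 with probability 1 - p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

flips = bernoulli_process(p=0.5, n=10_000)
# By the law of large numbers, the running average is close to p.
print(sum(flips) / len(flips))
```

Each entry of `flips` is one idealised coin flip, i.e. one Bernoulli trial.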
Random walksare stochastic processes that are usually defined as sums ofiidrandom variables or random vectors in Euclidean space, so they are processes that change in discrete time.[84][85][86][87][88]But some also use the term to refer to processes that change in continuous time,[89]particularly the Wiener process used in financial models, which has led to some confusion, resulting in its criticism.[90]There are various other types of random walks, defined so their state spaces can be other mathematical objects, such as lattices and groups, and in general they are highly studied and have many applications in different disciplines.[89][91]
A classic example of a random walk is known as thesimple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say,p{\displaystyle p}, or decreases by one with probability1−p{\displaystyle 1-p}, so the index set of this random walk is the natural numbers, while its state space is the integers. Ifp=0.5{\displaystyle p=0.5}, this random walk is called a symmetric random walk.[92][93]
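The simple random walk described above can be sketched in a few lines of Python (the `simple_random_walk` helper is illustrative only):

```python
import random

def simple_random_walk(n, p=0.5, seed=1):
    """Simple random walk on the integers: start at 0, then at each step
    move +1 with probability p and -1 with probability 1 - p."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n):
        position += 1 if rng.random() < p else -1
        path.append(position)
    return path

# With p = 0.5 this is the symmetric random walk.
path = simple_random_walk(1000)
print(path[-1])
```

The index set here is the natural numbers (the step count), while the state space is the integers, matching the description above.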
The Wiener process is a stochastic process withstationaryandindependent incrementsthat arenormally distributedbased on the size of the increments.[2][94]The Wiener process is named afterNorbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model forBrownian movementin liquids.[95][96][97]
Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes.[1][2][3][98][99][100][101]Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both continuous index set and state space.[102]But the process can be defined more generally so its state space can ben{\displaystyle n}-dimensional Euclidean space.[91][99][103]If themeanof any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constantμ{\displaystyle \mu }, which is a real number, then the resulting stochastic process is said to have driftμ{\displaystyle \mu }.[104][105][106]
Almost surely, a sample path of a Wiener process is continuous everywhere butnowhere differentiable. It can be considered as a continuous version of the simple random walk.[49][105]The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled,[107][108]which is the subject ofDonsker's theoremor invariance principle, also known as the functional central limit theorem.[109][110][111]
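The rescaling behind Donsker's theorem can be illustrated numerically: a symmetric random walk with n steps, scaled by 1/√n, approximates a Wiener path on [0, 1]. The sketch below is a minimal illustration, not a rigorous construction.

```python
import random, math

def rescaled_walk(n, seed=2):
    """Approximate a Wiener path on [0, 1] by rescaling a symmetric
    random walk: W(k/n) ~ S_k / sqrt(n), as in Donsker's theorem."""
    rng = random.Random(seed)
    s, path = 0.0, [0.0]
    for _ in range(n):
        s += 1.0 if rng.random() < 0.5 else -1.0
        path.append(s / math.sqrt(n))
    return path

w = rescaled_walk(100_000)
# The endpoint approximates W(1), which is normally distributed
# with mean 0 and variance 1.
print(w[-1])
```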
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes.[2][49]The process also has many applications and is the main stochastic process used in stochastic calculus.[112][113]It plays a central role in quantitative finance,[114][115]where it is used, for example, in the Black–Scholes–Merton model.[116]The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.[3][117][118]
The Poisson process is a stochastic process that has different forms and definitions.[119][120]It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.[119]
If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process.[119][121]The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes.[49]
The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process.[122][123]If the parameter constant of the Poisson process is replaced with some non-negative integrable function oft{\displaystyle t}, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant.[124]Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.[125][126]
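A homogeneous Poisson process on the half-line can be simulated via the standard fact that its interarrival times are iid exponential random variables with mean 1/rate; the `poisson_arrivals` helper below is an illustrative sketch of this construction.

```python
import random

def poisson_arrivals(rate, horizon, seed=3):
    """Arrival times of a homogeneous Poisson process with intensity
    `rate` on [0, horizon], built from iid exponential interarrivals."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

arrivals = poisson_arrivals(rate=2.0, horizon=1000.0)
# The number of points per unit time should be close to the rate.
print(len(arrivals) / 1000.0)
```

The count of arrivals in [0, t] is then a Poisson random variable with mean rate·t, matching the counting-process definition above.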
Defined on the real line, the Poisson process can be interpreted as a stochastic process,[49][127]among other random objects.[128][129]But it can also be defined on then{\displaystyle n}-dimensional Euclidean space or other mathematical spaces,[130]where it is often interpreted as a random set or a random counting measure, instead of a stochastic process.[128][129]In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons.[22][131]But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces.[131][132]
A stochastic process is defined as a collection of random variables defined on a commonprobability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}, whereΩ{\displaystyle \Omega }is asample space,F{\displaystyle {\mathcal {F}}}is aσ{\displaystyle \sigma }-algebra, andP{\displaystyle P}is aprobability measure; and the random variables, indexed by some setT{\displaystyle T}, all take values in the same mathematical spaceS{\displaystyle S}, which must bemeasurablewith respect to someσ{\displaystyle \sigma }-algebraΣ{\displaystyle \Sigma }.[28]
In other words, for a given probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}and a measurable space(S,Σ){\displaystyle (S,\Sigma )}, a stochastic process is a collection ofS{\displaystyle S}-valued random variables, which can be written as:[80]{X(t):t∈T}.{\displaystyle \{X(t):t\in T\}.}
Historically, in many problems from the natural sciences a pointt∈T{\displaystyle t\in T}had the meaning of time, soX(t){\displaystyle X(t)}is a random variable representing a value observed at timet{\displaystyle t}.[133]A stochastic process can also be written as{X(t,ω):t∈T}{\displaystyle \{X(t,\omega ):t\in T\}}to reflect that it is actually a function of two variables,t∈T{\displaystyle t\in T}andω∈Ω{\displaystyle \omega \in \Omega }.[28][134]
There are other ways to consider a stochastic process, with the above definition being considered the traditional one.[68][69]For example, a stochastic process can be interpreted or defined as aST{\displaystyle S^{T}}-valued random variable, whereST{\displaystyle S^{T}}is the space of all the possiblefunctionsfrom the setT{\displaystyle T}into the spaceS{\displaystyle S}.[27][68]However this alternative definition as a "function-valued random variable" in general requires additional regularity assumptions to be well-defined.[135]
The setT{\displaystyle T}is called theindex set[4][51]orparameter set[28][136]of the stochastic process. Often this set is some subset of thereal line, such as thenatural numbersor an interval, giving the setT{\displaystyle T}the interpretation of time.[1]In addition to these sets, the index setT{\displaystyle T}can be another set with atotal orderor a more general set,[1][54]such as the Cartesian planeR2{\displaystyle \mathbb {R} ^{2}}orn{\displaystyle n}-dimensional Euclidean space, where an elementt∈T{\displaystyle t\in T}can represent a point in space.[48][137]That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.[138]
Themathematical spaceS{\displaystyle S}of a stochastic process is called itsstate space. This mathematical space can be defined usingintegers,real lines,n{\displaystyle n}-dimensionalEuclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.[1][5][28][51][56]
Asample functionis a singleoutcomeof a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process.[28][139]More precisely, if{X(t,ω):t∈T}{\displaystyle \{X(t,\omega ):t\in T\}}is a stochastic process, then for any pointω∈Ω{\displaystyle \omega \in \Omega }, themappingX(⋅,ω):T→S{\displaystyle X(\cdot ,\omega )\colon T\rightarrow S}
is called a sample function, arealization, or, particularly whenT{\displaystyle T}is interpreted as time, asample pathof the stochastic process{X(t,ω):t∈T}{\displaystyle \{X(t,\omega ):t\in T\}}.[50]This means that for a fixedω∈Ω{\displaystyle \omega \in \Omega }, there exists a sample function that maps the index setT{\displaystyle T}to the state spaceS{\displaystyle S}.[28]Other names for a sample function of a stochastic process includetrajectory,path function[140]orpath.[141]
Anincrementof a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if{X(t):t∈T}{\displaystyle \{X(t):t\in T\}}is a stochastic process with state spaceS{\displaystyle S}and index setT=[0,∞){\displaystyle T=[0,\infty )}, then for any two non-negative numberst1∈[0,∞){\displaystyle t_{1}\in [0,\infty )}andt2∈[0,∞){\displaystyle t_{2}\in [0,\infty )}such thatt1≤t2{\displaystyle t_{1}\leq t_{2}}, the differenceXt2−Xt1{\displaystyle X_{t_{2}}-X_{t_{1}}}is aS{\displaystyle S}-valued random variable known as an increment.[48][49]When interested in the increments, often the state spaceS{\displaystyle S}is the real line or the natural numbers, but it can ben{\displaystyle n}-dimensional Euclidean space or more abstract spaces such asBanach spaces.[49]
For a stochastic processX:Ω→ST{\displaystyle X\colon \Omega \rightarrow S^{T}}defined on the probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}, thelawof stochastic processX{\displaystyle X}is defined as thepushforward measure:μ=P∘X−1.{\displaystyle \mu =P\circ X^{-1}.}
whereP{\displaystyle P}is a probability measure, the symbol∘{\displaystyle \circ }denotes function composition andX−1{\displaystyle X^{-1}}is the pre-image of the measurable function or, equivalently, theST{\displaystyle S^{T}}-valued random variableX{\displaystyle X}, whereST{\displaystyle S^{T}}is the space of all the possibleS{\displaystyle S}-valued functions oft∈T{\displaystyle t\in T}, so the law of a stochastic process is a probability measure.[27][68][142][143]
For a measurable subsetB{\displaystyle B}ofST{\displaystyle S^{T}}, the pre-image ofX{\displaystyle X}givesX−1(B)={ω∈Ω:X(ω)∈B},{\displaystyle X^{-1}(B)=\{\omega \in \Omega :X(\omega )\in B\},}
so the law ofX{\displaystyle X}can be written as:[28]μ(B)=P({ω∈Ω:X(ω)∈B}).{\displaystyle \mu (B)=P(\{\omega \in \Omega :X(\omega )\in B\}).}
The law of a stochastic process or a random variable is also called theprobability law,probability distribution, or thedistribution.[133][142][144][145][146]
For a stochastic processX{\displaystyle X}with lawμ{\displaystyle \mu }, itsfinite-dimensional distributionfort1,…,tn∈T{\displaystyle t_{1},\dots ,t_{n}\in T}is defined as:μt1,…,tn=P∘(X(t1),…,X(tn))−1.{\displaystyle \mu _{t_{1},\dots ,t_{n}}=P\circ (X(t_{1}),\dots ,X(t_{n}))^{-1}.}
This measureμt1,..,tn{\displaystyle \mu _{t_{1},..,t_{n}}}is the joint distribution of the random vector(X(t1),…,X(tn)){\displaystyle (X({t_{1}}),\dots ,X({t_{n}}))}; it can be viewed as a "projection" of the lawμ{\displaystyle \mu }onto a finite subset ofT{\displaystyle T}.[27][147]
For any measurable subsetC{\displaystyle C}of then{\displaystyle n}-foldCartesian powerSn=S×⋯×S{\displaystyle S^{n}=S\times \dots \times S}, the finite-dimensional distributions of a stochastic processX{\displaystyle X}can be written as:[28]μt1,…,tn(C)=P(ω∈Ω:(Xt1(ω),…,Xtn(ω))∈C).{\displaystyle \mu _{t_{1},\dots ,t_{n}}(C)=P(\omega \in \Omega :(X_{t_{1}}(\omega ),\dots ,X_{t_{n}}(\omega ))\in C).}
The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions.[57]
Stationarityis a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, ifX{\displaystyle X}is a stationary stochastic process, then for anyt∈T{\displaystyle t\in T}the random variableXt{\displaystyle X_{t}}has the same distribution, which means that for any set ofn{\displaystyle n}index set valuest1,…,tn{\displaystyle t_{1},\dots ,t_{n}}, the correspondingn{\displaystyle n}random variablesXt1,…,Xtn{\displaystyle X_{t_{1}},\dots ,X_{t_{n}}}
all have the sameprobability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line.[148][149]But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time.[148][150][151]
When the index setT{\displaystyle T}can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations.[148]The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same.[152]A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.[148]
A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is a discrete-time or continuous-time stochastic processX{\displaystyle X}that is stationary in the wide sense, meaning that the processX{\displaystyle X}has a finite second moment for allt∈T{\displaystyle t\in T}and the covariance of the two random variablesXt{\displaystyle X_{t}}andXt+h{\displaystyle X_{t+h}}depends only on the numberh{\displaystyle h}for allt∈T{\displaystyle t\in T}.[152][153]Khinchinintroduced the related concept ofstationarity in the wide sense, which has other names includingcovariance stationarityorstationarity in the broad sense.[153][154]
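Wide-sense stationarity can be checked empirically on simulated data: for an iid Gaussian sequence (which is strictly, and hence wide-sense, stationary), the sample autocovariance depends only on the lag h, equalling the variance at h = 0 and roughly zero otherwise. A minimal sketch, with an illustrative `autocovariance` helper:

```python
import random

def autocovariance(xs, lag):
    """Empirical autocovariance at a given lag for a single sample path."""
    n = len(xs) - lag
    mean = sum(xs) / len(xs)
    return sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n)) / n

rng = random.Random(4)
# iid standard Gaussians: stationary, with covariance depending only on the lag.
noise = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
print(round(autocovariance(noise, 0), 2))  # close to the variance, 1.0
print(round(autocovariance(noise, 5), 2))  # close to 0 for any nonzero lag
```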
Afiltrationis an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has sometotal orderrelation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration{Ft}t∈T{\displaystyle \{{\mathcal {F}}_{t}\}_{t\in T}}, on a probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}is a family of sigma-algebras such thatFs⊆Ft⊆F{\displaystyle {\mathcal {F}}_{s}\subseteq {\mathcal {F}}_{t}\subseteq {\mathcal {F}}}for alls≤t{\displaystyle s\leq t}, wheret,s∈T{\displaystyle t,s\in T}and≤{\displaystyle \leq }denotes the total order of the index setT{\displaystyle T}.[51]With the concept of a filtration, it is possible to study the amount of information contained in a stochastic processXt{\displaystyle X_{t}}att∈T{\displaystyle t\in T}, which can be interpreted as timet{\displaystyle t}.[51][155]The intuition behind a filtrationFt{\displaystyle {\mathcal {F}}_{t}}is that as timet{\displaystyle t}passes, more and more information onXt{\displaystyle X_{t}}is known or available, which is captured inFt{\displaystyle {\mathcal {F}}_{t}}, resulting in finer and finer partitions ofΩ{\displaystyle \Omega }.[156][157]
Amodificationof a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic processX{\displaystyle X}that has the same index setT{\displaystyle T}, state spaceS{\displaystyle S}, and probability space(Ω,F,P){\displaystyle (\Omega ,{\cal {F}},P)}as another stochastic processY{\displaystyle Y}is said to be a modification ofY{\displaystyle Y}if for allt∈T{\displaystyle t\in T}the followingP(Xt=Yt)=1{\displaystyle P(X_{t}=Y_{t})=1}
holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law[158]and they are said to bestochastically equivalentorequivalent.[159]
Instead of modification, the termversionis also used,[150][160][161][162]however some authors use the term version when two stochastic processes have the same finite-dimensional distributions, but they may be defined on different probability spaces, so two processes that are modifications of each other, are also versions of each other, in the latter sense, but not the converse.[163][142]
If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then theKolmogorov continuity theoremsays that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version.[161][162][164]The theorem can also be generalized to random fields so the index set isn{\displaystyle n}-dimensional Euclidean space[165]as well as to stochastic processes withmetric spacesas their state spaces.[166]
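The moment condition in the Kolmogorov continuity theorem is commonly stated as follows (one standard formulation; the exact constants and Hölder exponents vary by author): if there exist positive constantsα{\displaystyle \alpha },β{\displaystyle \beta },C{\displaystyle C}such that

```latex
\mathbb{E}\!\left[\,|X_t - X_s|^{\alpha}\,\right] \le C\,|t-s|^{1+\beta}
\qquad \text{for all } s, t,
```

thenX{\displaystyle X}has a modification whose sample paths are, with probability one, locally Hölder continuous of every orderγ{\displaystyle \gamma }with0<γ<β/α{\displaystyle 0<\gamma <\beta /\alpha }.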
Two stochastic processesX{\displaystyle X}andY{\displaystyle Y}defined on the same probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}with the same index setT{\displaystyle T}and state spaceS{\displaystyle S}are said to beindistinguishableif the followingP(Xt=Ytfor allt∈T)=1{\displaystyle P(X_{t}=Y_{t}{\text{ for all }}t\in T)=1}
holds.[142][158]If two stochastic processesX{\displaystyle X}andY{\displaystyle Y}are modifications of each other and arealmost surely continuous, thenX{\displaystyle X}andY{\displaystyle Y}are indistinguishable.[167]
Separabilityis a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be aseparable space,[b]which means that the index set has a dense countable subset.[150][168]
More precisely, a real-valued continuous-time stochastic processX{\displaystyle X}with a probability space(Ω,F,P){\displaystyle (\Omega ,{\cal {F}},P)}is separable if its index setT{\displaystyle T}has a dense countable subsetU⊂T{\displaystyle U\subset T}and there is a setΩ0⊂Ω{\displaystyle \Omega _{0}\subset \Omega }of probability zero, soP(Ω0)=0{\displaystyle P(\Omega _{0})=0}, such that for every open setG⊂T{\displaystyle G\subset T}and every closed setF⊂R=(−∞,∞){\displaystyle F\subset \textstyle R=(-\infty ,\infty )}, the two events{Xt∈Ffor allt∈G∩U}{\displaystyle \{X_{t}\in F{\text{ for all }}t\in G\cap U\}}and{Xt∈Ffor allt∈G}{\displaystyle \{X_{t}\in F{\text{ for all }}t\in G\}}differ from each other at most on a subset ofΩ0{\displaystyle \Omega _{0}}.[169][170][171]The definition of separability[c]can also be stated for other index sets and state spaces,[174]such as in the case of random fields, where the index set as well as the state space can ben{\displaystyle n}-dimensional Euclidean space.[30][150]
The concept of separability of a stochastic process was introduced byJoseph Doob.[168]The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process.[172]Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable.[175]A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification.[168][170][176]Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.[136]
Two stochastic processesX{\displaystyle X}andY{\displaystyle Y}defined on the same probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}with the same index setT{\displaystyle T}are said to beindependentif for alln∈N{\displaystyle n\in \mathbb {N} }and for every choice of epochst1,…,tn∈T{\displaystyle t_{1},\ldots ,t_{n}\in T}, the random vectors(X(t1),…,X(tn)){\displaystyle \left(X(t_{1}),\ldots ,X(t_{n})\right)}and(Y(t1),…,Y(tn)){\displaystyle \left(Y(t_{1}),\ldots ,Y(t_{n})\right)}are independent.[177]: p. 515
Two stochastic processes{Xt}{\displaystyle \left\{X_{t}\right\}}and{Yt}{\displaystyle \left\{Y_{t}\right\}}are calleduncorrelatedif their cross-covarianceKXY(t1,t2)=E[(X(t1)−μX(t1))(Y(t2)−μY(t2))]{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} \left[\left(X(t_{1})-\mu _{X}(t_{1})\right)\left(Y(t_{2})-\mu _{Y}(t_{2})\right)\right]}is zero for all times.[178]: p. 142Formally:KXY(t1,t2)=0for allt1,t2.{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad {\text{for all }}t_{1},t_{2}.}
If two stochastic processesX{\displaystyle X}andY{\displaystyle Y}are independent, then they are also uncorrelated.[178]: p. 151
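This implication can be seen numerically: sampling values standing in for X(t1) and Y(t2) independently across many realisations yields an empirical cross-covariance near zero. The `cross_covariance` helper below is illustrative.

```python
import random

def cross_covariance(xs, ys):
    """Empirical cross-covariance of paired samples of X(t1) and Y(t2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n

rng = random.Random(7)
# Independent Gaussian samples standing in for the two processes.
x = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
y = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
# Independence implies zero cross-covariance, so this is near 0.
print(round(cross_covariance(x, y), 3))
```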
Two stochastic processes{Xt}{\displaystyle \left\{X_{t}\right\}}and{Yt}{\displaystyle \left\{Y_{t}\right\}}are calledorthogonalif their cross-correlationRXY(t1,t2)=E[X(t1)Y(t2)¯]{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} [X(t_{1}){\overline {Y(t_{2})}}]}is zero for all times.[178]: p. 142Formally:RXY(t1,t2)=0for allt1,t2.{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad {\text{for all }}t_{1},t_{2}.}
ASkorokhod space, also written asSkorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as[0,1]{\displaystyle [0,1]}or[0,∞){\displaystyle [0,\infty )}, and take values on the real line or on some metric space.[179][180][181]Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrasecontinue à droite, limite à gauche.[179][182]A Skorokhod function space, introduced byAnatoliy Skorokhod,[181]is often denoted with the letterD{\displaystyle D},[179][180][181][182]so the function space is also referred to as spaceD{\displaystyle D}.[179][183][184]The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example,D[0,1]{\displaystyle D[0,1]}denotes the space of càdlàg functions defined on theunit interval[0,1]{\displaystyle [0,1]}.[182][184][185]
Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space.[181][183]Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space.[184][186]
In the context of mathematical construction of stochastic processes, the termregularityis used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues.[187][188]For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.[189][190]
Markov processes are stochastic processes, traditionally indiscrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.[191][192]
The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes[193]in continuous time, whilerandom walkson the integers and thegambler's ruinproblem are examples of Markov processes in discrete time.[194][195]
A Markov chain is a type of Markov process that has either discretestate spaceor discrete index set (often representing time), but the precise definition of a Markov chain varies.[196]For example, it is common to define a Markov chain as a Markov process in eitherdiscrete or continuous timewith a countable state space (thus regardless of the nature of time),[197][198][199][200]but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[196]It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers likeJoseph DoobandKai Lai Chung.[201]
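A discrete-time Markov chain on a countable (here finite) state space is straightforward to simulate, because the next state is drawn using only the current state. The transition probabilities and state names below are made up for illustration.

```python
import random

def simulate_chain(transition, start, steps, seed=5):
    """Simulate a discrete-time Markov chain.
    `transition[s]` maps state s to a dict of next-state probabilities."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, prob in transition[state].items():
            acc += prob
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

# A hypothetical two-state chain: tomorrow depends only on today.
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}
path = simulate_chain(P, "sunny", 10_000)
# The long-run fraction of "sunny" approaches the stationary probability 5/6.
print(path.count("sunny") / len(path))
```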
Markov processes form an important class of stochastic processes and have applications in many areas.[39][202]For example, they are the basis for a general stochastic simulation method known asMarkov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application inBayesian statistics.[203][204]
The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such asn{\displaystyle n}-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.[205][206][207]
A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued,[208][209][155]but they can also be complex-valued[210]or even more general.[211]
A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time.[208][209]For asequenceofindependent and identically distributedrandom variablesX1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\dots }with zero mean, the stochastic process formed from the successive partial sumsX1,X1+X2,X1+X2+X3,…{\displaystyle X_{1},X_{1}+X_{2},X_{1}+X_{2}+X_{3},\dots }is a discrete-time martingale.[212]In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables.[213]
Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called thecompensated Poisson process.[209]Martingales can also be built from other martingales.[212]For example, there are continuous-time martingales based on the Wiener process, which is itself a martingale.[208][214]
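The compensated Poisson process N(t) − λt illustrates how a martingale arises from such a transformation. Since a martingale started at zero has constant (zero) mean, the empirical mean of N(t) − λt over many realisations should be near zero; the sketch below checks this necessary consequence (it is not a proof of the full martingale property).

```python
import random

def compensated_poisson_at(rate, t, rng):
    """Sample N(t) - rate*t, where N is a homogeneous Poisson process
    of the given rate, simulated via exponential interarrival times."""
    count, clock = 0, 0.0
    while True:
        clock += rng.expovariate(rate)
        if clock > t:
            return count - rate * t
        count += 1

rng = random.Random(6)
samples = [compensated_poisson_at(rate=3.0, t=5.0, rng=rng) for _ in range(20_000)]
# A martingale started at 0 has mean 0 at every time, so this is near 0.
print(sum(samples) / len(samples))
```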
Martingales mathematically formalize the idea of a 'fair game' where it is possible to form reasonable expectations for payoffs,[215]and they were originally developed to show that it is not possible to gain an 'unfair' advantage in such a game.[216]But now they are used in many areas of probability, which is one of the main reasons for studying them.[155][216][217]Many problems in probability have been solved by finding a martingale in the problem and studying it.[218]Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely tomartingale convergence theorems.[213][219][220]
Martingales have many applications in statistics, but it has been remarked that its use and application are not as widespread as it could be in the field of statistics, particularly statistical inference.[221]They have found applications in areas in probability theory such as queueing theory and Palm calculus[222]and other fields such as economics[223]and finance.[17]
Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time.[49][224]These processes have many applications in fields such as finance, fluid mechanics, physics and biology.[225][226]The main defining characteristics of these processes are their stationarity and independence properties, so they were known asprocesses with stationary and independent increments. In other words, a stochastic processX{\displaystyle X}is a Lévy process if forn{\displaystyle n}non-negative numbers,0≤t1≤⋯≤tn{\displaystyle 0\leq t_{1}\leq \dots \leq t_{n}}, the correspondingn−1{\displaystyle n-1}incrementsXt2−Xt1,…,Xtn−Xtn−1{\displaystyle X_{t_{2}}-X_{t_{1}},\dots ,X_{t_{n}}-X_{t_{n-1}}}
are all independent of each other, and the distribution of each increment only depends on the difference in time.[49]
A Lévy process can be defined such that its state space is some abstract mathematical space, such as aBanach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, soI=[0,∞){\displaystyle I=[0,\infty )}, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), andsubordinatorsare all Lévy processes.[49][224]
A random field is a collection of random variables indexed by an{\displaystyle n}-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line.[30]But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions.[5][28][227]If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of stochastic process.[228]
A point process is a collection of points randomly located on some mathematical space such as the real line,n{\displaystyle n}-dimensional Euclidean space, or more abstract spaces. Sometimes the termpoint processis not preferred, as historically the wordprocessdenoted an evolution of some system in time, so a point process is also called arandom point field.[229]There are different interpretations of a point process, such as a random counting measure or a random set.[230][231]Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process,[232][233]though it has been remarked that the difference between point processes and stochastic processes is not clear.[233]
Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space[d]on which it is defined, such as the real line orn{\displaystyle n}-dimensional Euclidean space.[236][237]Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.[238][233]
Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago,[239]but very little analysis on them was done in terms of probability.[240]The year 1654 is often considered the birth of probability theory when French mathematiciansPierre FermatandBlaise Pascalhad a written correspondence on probability, motivated by agambling problem.[241][242]But there was earlier mathematical work done on the probability of gambling games such asLiber de Ludo AleaebyGerolamo Cardano, written in the 16th century but posthumously published later in 1663.[243]
After Cardano,Jakob Bernoulli[e]wroteArs Conjectandi, which is considered a significant event in the history of probability theory. Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability.[245][246]But despite some renowned mathematicians contributing to probability theory, such asPierre-Simon Laplace,Abraham de Moivre,Carl Gauss,Siméon PoissonandPafnuty Chebyshev,[247][248]most of the mathematical community[f]did not consider probability theory to be part of mathematics until the 20th century.[247][249][250][251]
In the physical sciences, scientists developed in the 19th century the discipline ofstatistical mechanics, where physical systems, such as containers filled with gases, are regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such asRudolf Clausius, most of the work had little or no randomness.[252][253]This changed in 1859 whenJames Clerk Maxwellcontributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he modelled the gas particles as moving in random directions at random velocities.[254][255]The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius,Ludwig BoltzmannandJosiah Gibbs, which would later have an influence onAlbert Einstein's mathematical model forBrownian movement.[256]
At theInternational Congress of MathematiciansinParisin 1900,David Hilbertpresented a list ofmathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involvingaxioms.[248]Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, where two of the founders were French mathematicians,Henri LebesgueandÉmile Borel. In 1925, another French mathematicianPaul Lévypublished the first probability book that used ideas from measure theory.[248]
In the 1920s, fundamental contributions to probability theory were made in the Soviet Union by mathematicians such asSergei Bernstein,Aleksandr Khinchin,[g]andAndrei Kolmogorov.[251]Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory.[257]In the early 1930s, Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such asEugene SlutskyandNikolai Smirnov,[258]and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line.[63][259][h]
In 1933, Andrei Kolmogorov published, in German, his book on the foundations of probability theory, titledGrundbegriffe der Wahrscheinlichkeitsrechnung,[i]where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics.[248][251]
After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such asJoseph Doob,William Feller,Maurice Fréchet,Paul Lévy,Wolfgang Doeblin, andHarald Cramér.[248][251]Decades later, Cramér referred to the 1930s as the "heroic period of mathematical probability theory".[251]World War IIgreatly interrupted the development of probability theory, causing, for example, the migration of Feller fromSwedento theUnited States of America[251]and the death of Doeblin, considered now a pioneer in stochastic processes.[261]
After World War II, the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas.[251][264]Starting in the 1940s,Kiyosi Itôpublished papers developing the field ofstochastic calculus, which involves stochasticintegralsand stochasticdifferential equationsbased on the Wiener or Brownian motion process.[265]
Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field ofpotential theory, with early ideas byShizuo Kakutaniand then later work by Joseph Doob.[264]Further work, considered pioneering, was done byGilbert Huntin the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô.[21][266][267]
In 1953, Doob published his bookStochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability.[264][263]Doob also chiefly developed the theory of martingales, with later substantial contributions byPaul-André Meyer. Earlier work had been carried out bySergei Bernstein,Paul LévyandJean Ville, the latter adopting the term martingale for the stochastic process.[268][269]Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes.[264]
Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations.[264]The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s, fundamental work was done by Alexander Wentzell in the Soviet Union andMonroe D. DonskerandSrinivasa Varadhanin the United States of America,[270]which would later result in Varadhan winning the 2007 Abel Prize.[271]In the 1990s and 2000s the theories ofSchramm–Loewner evolution[272]andrough paths[142]were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted inFields Medalsbeing awarded toWendelin Werner[273]in 2008 and toMartin Hairerin 2014.[274]
The theory of stochastic processes still continues to be a focus of research, with yearly international conferences on the topic of stochastic processes.[45][225]
Although Khinchin gave mathematical definitions of stochastic processes in the 1930s,[63][259]specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process.[21][24]Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries.[275]
The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied.[81]The process is a sequence of independent Bernoulli trials,[82]which are named afterJacob Bernoulliwho used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens.[276]Bernoulli's work, including the Bernoulli process, was published in his bookArs Conjectandiin 1713.[277]
In 1905,Karl Pearsoncoined the termrandom walkwhile posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks.[89][277]For example, the problem known as theGambler's ruinis based on a simple random walk,[195][278]and is an example of a random walk with absorbing barriers.[241][279]Pascal, Fermat and Huygens all gave numerical solutions to this problem without detailing their methods,[280]and then more detailed solutions were presented by Jakob Bernoulli andAbraham de Moivre.[281]
For random walks inn{\displaystyle n}-dimensional integerlattices,George Pólyapublished, in 1919 and 1921, work where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions.[282][283]
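Pólya's dichotomy can be seen in a small Monte Carlo experiment. The sketch below (function names and parameters are illustrative) simulates symmetric lattice walks and estimates how often a walk revisits the origin within a fixed number of steps; the 1D fraction should be close to one, while the 3D fraction stays well below it:

```python
import random

def returns_to_origin(dim, steps, rng):
    """Run one symmetric walk on the integer lattice Z^dim and report
    whether it revisits the origin within the given number of steps."""
    pos = [0] * dim
    for _ in range(steps):
        axis = rng.randrange(dim)        # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))  # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def return_fraction(dim, steps=500, trials=2000, seed=1):
    """Fraction of simulated walks that return to the origin."""
    rng = random.Random(seed)
    return sum(returns_to_origin(dim, steps, rng) for _ in range(trials)) / trials
```

Running `return_fraction(1)` and `return_fraction(3)` gives estimates consistent with recurrence in one dimension (return probability one) and transience in three dimensions (return probability about 0.34).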
TheWiener processor Brownian motion process has its origins in different fields including statistics, finance and physics.[21]In 1880, Danish astronomerThorvald Thielewrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis.[284][285][286]The work is now considered as an early discovery of the statistical method known asKalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time.[286]
The French mathematicianLouis Bachelierused a Wiener process in his 1900 thesis[287][288]in order to model price changes on theParis Bourse, astock exchange,[289]without knowing the work of Thiele.[21]It has been speculated that Bachelier drew ideas from the random walk model ofJules Regnault, but Bachelier did not cite him,[290]and Bachelier's thesis is now considered pioneering in the field of financial mathematics.[289][290]
It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas,[290]which was cited by mathematicians including Doob, Feller[290]and Kolmogorov.[21]The book continued to be cited, but then starting in the 1960s, the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.[290]
In 1905,Albert Einsteinpublished a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from thekinetic theory of gases. Einstein derived adifferential equation, known as adiffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement,Marian Smoluchowskipublished work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method.[291]
Einstein's work, as well as experimental results obtained byJean Perrin, later inspired Norbert Wiener in the 1920s[292]to use a type of measure theory, developed byPercy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object.[21]
The Poisson process is named afterSiméon Poisson, due to its definition involving thePoisson distribution, but Poisson never studied the process.[22][293]There are a number of claims for early uses or discoveries of the Poisson process.[22][24]At the beginning of the 20th century, the Poisson process would arise independently in different situations.[22][24]In Sweden in 1903,Filip Lundbergpublished athesiscontaining work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process.[294][295]
Another discovery occurred inDenmarkin 1909 whenA.K. Erlangderived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the numbers of phone calls arriving in disjoint intervals of time were independent of each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.[22]
In 1910,Ernest RutherfordandHans Geigerpublished experimental results on counting alpha particles. Motivated by their work,Harry Batemanstudied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process.[22]After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists.[22]
Markov processes and Markov chains are named afterAndrey Markovwho studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving aweak law of large numberswithout the independence assumption,[296][297][298]which had been commonly regarded as a requirement for such mathematical laws to hold.[298]Markov later used Markov chains to study the distribution of vowels inEugene Onegin, written byAlexander Pushkin, and proved acentral limit theoremfor such chains.
In 1912, Poincaré studied Markov chains onfinite groupswith an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced byPaulandTatyana Ehrenfestin 1907, and a branching process, introduced byFrancis GaltonandHenry William Watsonin 1873, preceding the work of Markov.[296][297]After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier byIrénée-Jules Bienaymé.[299]Starting in 1928,Maurice Fréchetbecame interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[296][300]
Andrei Kolmogorovdeveloped in a 1931 paper a large part of the early theory of continuous-time Markov processes.[251][257]Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well asNorbert Wiener's work on Einstein's model of Brownian movement.[257][301]He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[257][302]Independent of Kolmogorov's work,Sydney Chapmanderived in a 1928 paper an equation, now called theChapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[303]The differential equations are now called the Kolmogorov equations[304]or the Kolmogorov–Chapman equations.[305]Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[251]
Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s,[225]but they have connections toinfinitely divisible distributionsgoing back to the 1920s.[224]In a 1932 paper, Kolmogorov derived acharacteristic functionfor random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937.[251][306]In addition to Lévy, Khinchin and Kolomogrov, early fundamental contributions to the theory of Lévy processes were made byBruno de FinettiandKiyosi Itô.[224]
In mathematics, constructions of mathematical objects are needed, which is also the case for stochastic processes, to prove that they exist mathematically.[57]There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions.[307]
Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then usingKolmogorov's existence theorem[j]to prove a corresponding stochastic process exists.[57][307]This theorem, which is an existence theorem for measures on infinite product spaces,[311]says that if any finite-dimensional distributions satisfy two conditions, known asconsistency conditions, then there exists a stochastic process with those finite-dimensional distributions.[57]
When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes.[58][59]One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions.[312]This means that the distribution of the stochastic process does not necessarily uniquely specify the properties of the sample functions of the stochastic process.[307][313]
Another problem is that functionals of continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined.[168]For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable.[30][59]For a continuous-time stochastic processX{\displaystyle X}, other characteristics that depend on an uncountable number of points of the index setT{\displaystyle T}include:[168]
where the symbol∈can be read "a member of the set", as int{\displaystyle t}a member of the setT{\displaystyle T}.
To overcome the two difficulties described above, i.e., "more than one..." and "functionals of...", different assumptions and approaches are possible.[69]
One approach for avoiding mathematical construction issues of stochastic processes, proposed byJoseph Doob, is to assume that the stochastic process is separable.[314]Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set.[315]Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied.[168][315]
Another approach is possible, originally developed byAnatoliy SkorokhodandAndrei Kolmogorov,[316]for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, which is usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now more used than the separability assumption,[69][262]but such a stochastic process based on this approach will be automatically separable.[317]
Although less used, the separability assumption is considered more general because every stochastic process has a separable version.[262]It is also used when it is not possible to construct a stochastic process in a Skorokhod space.[173]For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such asn{\displaystyle n}-dimensional Euclidean space.[30][318]
One of the most famous applications of stochastic processes in finance is theBlack-Scholes modelfor option pricing. Developed byFischer Black,Myron Scholes, andRobert Merton, this model usesGeometric Brownian motion, a specific type of stochastic process, to describe the dynamics of asset prices.[319][320]The model assumes that the price of a stock follows a continuous-time stochastic process and provides a closed-form solution for pricing European-style options. The Black-Scholes formula has had a profound impact on financial markets, forming the basis for much of modern options trading.
The key assumption of the Black-Scholes model is that the price of a financial asset, such as a stock, follows alog-normal distribution, with its continuous returns following a normal distribution. Although the model has limitations, such as the assumption of constant volatility, it remains widely used due to its simplicity and practical relevance.
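The closed-form solution mentioned above is the standard Black-Scholes call price; a minimal sketch follows (function and parameter names are illustrative, and the formula is the textbook one, not taken from this article):

```python
import math

def norm_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, rate, sigma, maturity):
    """Closed-form Black-Scholes price of a European call option.

    spot: current asset price; strike: exercise price; rate: risk-free
    rate; sigma: volatility of log-returns; maturity: time in years.
    """
    d1 = (math.log(spot / strike) + (rate + 0.5 * sigma ** 2) * maturity) / (
        sigma * math.sqrt(maturity))
    d2 = d1 - sigma * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

# A standard textbook check: at-the-money call, 5% rate, 20% vol, 1 year.
price = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

For these parameters the price is approximately 10.45, a value commonly quoted in derivations of the formula.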
Another significant application of stochastic processes in finance is instochastic volatility models, which aim to capture the time-varying nature of market volatility. TheHeston model[321]is a popular example, allowing for the volatility of asset prices to follow its own stochastic process. Unlike the Black-Scholes model, which assumes constant volatility, stochastic volatility models provide a more flexible framework for modeling market dynamics, particularly during periods of high uncertainty or market stress.
One of the primary applications of stochastic processes in biology is inpopulation dynamics. In contrast todeterministic models, which assume that populations change in predictable ways, stochastic models account for the inherent randomness in births, deaths, and migration. Thebirth-death process,[322]a simple stochastic model, describes how populations fluctuate over time due to random births and deaths. These models are particularly important when dealing with small populations, where random events can have large impacts, such as in the case of endangered species or small microbial populations.
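A linear birth-death process can be simulated exactly with the Gillespie algorithm; the sketch below is a minimal illustration (names and rates are illustrative), in which each individual independently gives birth and dies at constant per-capita rates:

```python
import random

def birth_death(n0, birth_rate, death_rate, t_max, seed=0):
    """Gillespie simulation of a linear birth-death process: each
    individual gives birth at rate `birth_rate` and dies at rate
    `death_rate`. Returns the jump times and population sizes."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, pops = [0.0], [n0]
    while t < t_max and n > 0:
        total_rate = n * (birth_rate + death_rate)
        t += rng.expovariate(total_rate)      # exponential waiting time
        if t >= t_max:
            break
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                            # a birth occurred
        else:
            n -= 1                            # a death occurred
        times.append(t)
        pops.append(n)
    return times, pops
```

For small initial populations the sample paths fluctuate strongly and can hit zero (extinction), which is exactly the behavior deterministic models cannot capture.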
Another example is thebranching process,[322]which models the growth of a population where each individual reproduces independently. The branching process is often used to describe population extinction or explosion, particularly in epidemiology, where it can model the spread of infectious diseases within a population.
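Extinction in a branching process can be estimated by simulation. The sketch below assumes a Galton-Watson process with Poisson offspring (a common textbook choice, not specified by this article) and uses Knuth's sampler for small means; all names are illustrative:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for one Poisson(lam) draw (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def extinction_probability(offspring_mean, generations=30, trials=500, seed=7):
    """Monte Carlo estimate of the probability that a Galton-Watson
    process with Poisson offspring, started from one individual, dies out."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        for _ in range(generations):
            if n == 0 or n > 1000:   # extinct, or clearly surviving
                break
            n = sum(poisson_sample(offspring_mean, rng) for _ in range(n))
        extinct += (n == 0)
    return extinct / trials
```

For an offspring mean of 1.5 the true extinction probability solves q = e^{1.5(q-1)}, roughly 0.42, and the estimate lands near that value.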
Stochastic processes play a critical role in computer science, particularly in the analysis and development ofrandomized algorithms. These algorithms utilize random inputs to simplify problem-solving or enhance performance in complex computational tasks. For instance, Markov chains are widely used in probabilistic algorithms for optimization and sampling tasks, such as those employed in search engines like Google's PageRank.[323]These methods balance computational efficiency with accuracy, making them invaluable for handling large datasets. Randomized algorithms are also extensively applied in areas such as cryptography, large-scale simulations, and artificial intelligence, where uncertainty must be managed effectively.[323]
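The PageRank computation mentioned above amounts to power iteration on a Markov chain over web pages; a minimal sketch on a toy three-page graph follows (graph and function names are illustrative), including the standard damping factor and uniform redistribution for dangling pages:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration for PageRank on a dict mapping node -> out-links."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}          # uniform start
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}   # teleportation mass
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share             # spread rank along links
            else:
                for w in nodes:                 # dangling node: spread evenly
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

The ranks form a probability distribution over pages, and pages with more incoming link mass (here "a") score higher.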
Another significant application of stochastic processes in computer science is inqueuing theory, which models the random arrival and service of tasks in a system.[324]This is particularly relevant in network traffic analysis and server management. For instance, queuing models help predict delays, manage resource allocation, and optimize throughput in web servers and communication networks. The flexibility of stochastic models allows researchers to simulate and improve the performance of high-traffic environments. For example, queueing theory is crucial for designing efficient data centers and cloud computing infrastructures.[325]
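For the simplest queueing model, the M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ, one server), the steady-state performance measures have closed forms; the sketch below computes them (function and key names are illustrative):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable unless arrival rate < service rate")
    rho = lam / mu                  # server utilization
    L = rho / (1.0 - rho)           # mean number in system
    W = 1.0 / (mu - lam)            # mean time in system (Little's law: L = lam * W)
    Lq = rho ** 2 / (1.0 - rho)     # mean number waiting in queue
    Wq = rho / (mu - lam)           # mean waiting time in queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}
```

For example, with arrivals at half the service rate (λ = 0.5, μ = 1) the server is busy half the time, and on average one task is in the system.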
https://en.wikipedia.org/wiki/Stochastic_process
Inspatial statisticsthe theoreticalvariogram, denoted2γ(s1,s2){\displaystyle 2\gamma (\mathbf {s} _{1},\mathbf {s} _{2})}, is a function describing the degree ofspatial dependenceof a spatialrandom fieldorstochastic processZ(s){\displaystyle Z(\mathbf {s} )}. Thesemivariogramγ(s1,s2){\displaystyle \gamma (\mathbf {s} _{1},\mathbf {s} _{2})}is half the variogram.
For example, ingold mining, a variogram will give a measure of how much two samples taken from the mining area will vary in gold percentage depending on the distance between those samples. Samples taken far apart will vary more than samples taken close to each other.
Thesemivariogramγ(h){\displaystyle \gamma (h)}was first defined by Matheron (1963) as half the average squared difference between a function and a translated copy of the function separated by distanceh{\displaystyle h}.[1][2]Formally,

γ(h)=12V∭V(f(M+h)−f(M))2dV{\displaystyle \gamma (h)={\frac {1}{2V}}\iiint _{V}\left(f(M+h)-f(M)\right)^{2}\,dV}
whereM{\displaystyle M}is a point in the geometric fieldV{\displaystyle V}, andf(M){\displaystyle f(M)}is the value at that point. The triple integral is over 3 dimensions.h{\displaystyle h}is the separation distance (e.g., in meters or km) of interest.
For example, the valuef(M){\displaystyle f(M)}could represent the iron content in soil, at some locationM{\displaystyle M}(withgeographic coordinatesof latitude, longitude, and elevation) over some regionV{\displaystyle V}with element of volumedV{\displaystyle dV}.
To obtain the semivariogramγ(h){\displaystyle \gamma (h)}at a given lagh{\displaystyle h}, all pairs of points separated by exactly that distance would be sampled. In practice it is impossible to sample everywhere, so theempirical variogramis used instead.
The variogram is twice the semivariogram and can be defined, differently, as thevarianceof the difference between field values at two locations (s1{\displaystyle \mathbf {s} _{1}}ands2{\displaystyle \mathbf {s} _{2}}, note change of notation fromM{\displaystyle M}tos{\displaystyle \mathbf {s} }andf{\displaystyle f}toZ{\displaystyle Z}) across realizations of the field (Cressie 1993):

2γ(s1,s2)=var⁡(Z(s1)−Z(s2)){\displaystyle 2\gamma (\mathbf {s} _{1},\mathbf {s} _{2})=\operatorname {var} \left(Z(\mathbf {s} _{1})-Z(\mathbf {s} _{2})\right)}
If the spatial random field has constant meanμ{\displaystyle \mu }, this is equivalent to the expectation for the squared increment of the values between locationss1{\displaystyle \mathbf {s} _{1}}ands2{\displaystyle \mathbf {s} _{2}}(Wackernagel 2003) (wheres1{\displaystyle \mathbf {s} _{1}}ands2{\displaystyle \mathbf {s} _{2}}are points in space and possibly time):

2γ(s1,s2)=E⁡[(Z(s1)−Z(s2))2]{\displaystyle 2\gamma (\mathbf {s} _{1},\mathbf {s} _{2})=\operatorname {E} \left[\left(Z(\mathbf {s} _{1})-Z(\mathbf {s} _{2})\right)^{2}\right]}
In the case of astationary process, the variogram and semivariogram can be represented as a functionγs(h)=γ(0,0+h){\displaystyle \gamma _{s}(h)=\gamma (0,0+h)}of the differenceh=s2−s1{\displaystyle h=\mathbf {s} _{2}-\mathbf {s} _{1}}between locations only, by the following relation (Cressie 1993):

γ(s1,s2)=γs(s2−s1){\displaystyle \gamma (\mathbf {s} _{1},\mathbf {s} _{2})=\gamma _{s}(\mathbf {s} _{2}-\mathbf {s} _{1})}
If the process is furthermoreisotropic, then the variogram and semivariogram can be represented by a functionγi(h):=γs(he1){\displaystyle \gamma _{i}(h):=\gamma _{s}(he_{1})}of the distanceh=‖s2−s1‖{\displaystyle h=\|\mathbf {s} _{2}-\mathbf {s} _{1}\|}only (Cressie 1993):

γ(s1,s2)=γi(‖s2−s1‖){\displaystyle \gamma (\mathbf {s} _{1},\mathbf {s} _{2})=\gamma _{i}(\|\mathbf {s} _{2}-\mathbf {s} _{1}\|)}
The indexesi{\displaystyle i}ors{\displaystyle s}are typically not written. The terms are used for all three forms of the function. Moreover, the term "variogram" is sometimes used to denote the semivariogram, and the symbolγ{\displaystyle \gamma }is sometimes used for the variogram, which brings some confusion.[3]
According to (Cressie 1993, Chiles and Delfiner 1999, Wackernagel 2003) the theoretical variogram has the following properties:
In summary, the following parameters are often used to describe variograms: thenuggetn{\displaystyle n}, the height of the jump of the semivariogram at the discontinuity at the origin; thesills{\displaystyle s}, the limit of the variogram as the lag distance tends to infinity; and theranger{\displaystyle r}, the distance at which the difference of the variogram from the sill becomes negligible.
Generally, anempirical variogramis needed for measured data, because sample informationZ{\displaystyle Z}is not available for every location. The sample information for example could be concentration of iron in soil samples, or pixel intensity on a camera. Each piece of sample information has coordinatess=(x,y){\displaystyle \mathbf {s} =(x,y)}for a 2D sample space wherex{\displaystyle x}andy{\displaystyle y}are geographical coordinates. In the case of the iron in soil, the sample space could be 3 dimensional. If there is temporal variability as well (e.g., phosphorus content in a lake) thens{\displaystyle \mathbf {s} }could be a 4 dimensional vector(x,y,z,t){\displaystyle (x,y,z,t)}. For the case where dimensions have different units (e.g., distance and time) then a scaling factorB{\displaystyle B}can be applied to each to obtain a modified Euclidean distance.[4]
Sample observations are denotedZ(si)=zi{\displaystyle Z(\mathbf {s} _{i})=z_{i}}. Observations may be taken atM{\displaystyle M}total different locations (thesample size). This would provide a set of observationsz1,…,zM{\displaystyle z_{1},\ldots ,z_{M}}at locationss1,…,sM{\displaystyle \mathbf {s} _{1},\ldots ,\mathbf {s} _{M}}. Generally, plots show the semivariogram values as a function of separation distancehk{\displaystyle h_{k}}for multiple stepsk=1,…{\displaystyle k=1,\ldots }. In the case of empirical semivariogram, separation distance intervalhk±δ{\displaystyle h_{k}\pm \delta }is used rather than exact distances, and usually isotropic conditions are assumed (i.e., thatγ{\displaystyle \gamma }is only a function ofh{\displaystyle h}and does not depend on other variables such as center position). Then, the empirical semivariogramγ^(h±δ){\displaystyle {\hat {\gamma }}(h\pm \delta )}can be calculated for eachbin:

γ^(hk±δ)=12Nk∑(i,j)∈Sk|zi−zj|2{\displaystyle {\hat {\gamma }}(h_{k}\pm \delta )={\frac {1}{2N_{k}}}\sum _{(i,j)\in S_{k}}|z_{i}-z_{j}|^{2}}
Or in other words, each pair of points separated byhk{\displaystyle h_{k}}(plus or minus some bin width tolerance rangeδ{\displaystyle \delta }) is found. These form the set of points

Sk={(i,j):‖si−sj‖∈[hk−δ,hk+δ]}.{\displaystyle S_{k}=\{(i,j):\|\mathbf {s} _{i}-\mathbf {s} _{j}\|\in [h_{k}-\delta ,h_{k}+\delta ]\}.}
The number of these points in this bin isNk=|Sk|{\displaystyle N_{k}=|S_{k}|}(theset size). Then for each pair of pointsi,j{\displaystyle i,j}, the square of the difference in the observation (e.g., soil sample content or pixel intensity) is found (|zi−zj|2{\displaystyle |z_{i}-z_{j}|^{2}}). These squared differences are added together and normalized by the natural numberNk{\displaystyle N_{k}}. By definition the result is divided by 2 for the semivariogram at this separation.
For computational speed, only the unique pairs of points are needed. For example, for two observation pairs [(za,zb),(zc,zd){\displaystyle (z_{a},z_{b}),(z_{c},z_{d})}] taken from locations with separationh±δ{\displaystyle h\pm \delta }only [(za,zb),(zc,zd){\displaystyle (z_{a},z_{b}),(z_{c},z_{d})}] need to be considered, as the pairs [(zb,za),(zd,zc){\displaystyle (z_{b},z_{a}),(z_{d},z_{c})}] do not provide any additional information.
The empirical variogram cannot be computed at every lag distanceh{\displaystyle h}, and due to variation in the estimation it is not guaranteed to be a valid variogram, as defined above. However, somegeostatisticalmethods such askrigingneed valid semivariograms. In applied geostatistics the empirical variograms are thus often approximated by model functions that ensure validity (Chilès & Delfiner 1999). Some important models are (Chilès & Delfiner 1999, Cressie 1993):
The parametera{\displaystyle a}has different values in different references, due to the ambiguity in the definition of the range. For example,a=1/3{\displaystyle a=1/3}is the value used in (Chilès & Delfiner 1999). The1A(h){\displaystyle 1_{A}(h)}function is 1 ifh∈A{\displaystyle h\in A}and 0 otherwise.
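As an illustration of such model functions, here is a sketch of three commonly used valid models (spherical, exponential, and Gaussian) in a standard sill (s), range (r), and nugget (n) parameterization; the scale parameter a follows the a = 1/3 convention discussed above, and the nugget enters through the indicator 1_{(0,∞)}(h) so that γ(0) = 0:

```python
import math

def spherical(h, r, s, n=0.0):
    """Spherical model: reaches the sill s exactly at the range r."""
    if h == 0:
        return 0.0
    if h >= r:
        return s
    return n + (s - n) * (1.5 * h / r - 0.5 * (h / r) ** 3)

def exponential(h, r, s, n=0.0, a=1/3):
    """Exponential model; with a = 1/3 it reaches about 95% of the sill at h = r."""
    if h == 0:
        return 0.0
    return n + (s - n) * (1 - math.exp(-h / (r * a)))

def gaussian(h, r, s, n=0.0, a=1/3):
    """Gaussian model, with the same range convention as the exponential."""
    if h == 0:
        return 0.0
    return n + (s - n) * (1 - math.exp(-(h ** 2) / (r ** 2 * a)))
```

Other references rescale a, so these expressions should be checked against whichever convention a given text adopts.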
Three functions are used ingeostatisticsfor describing the spatial or the temporal correlation of observations: these are thecorrelogram, thecovariance, and thesemivariogram. The last is also more simply calledvariogram.
The variogram is the key function in geostatistics as it will be used to fit a model of the temporal/spatial correlationof the observed phenomenon. One is thus making a distinction between theexperimental variogramthat is a visualization of a possible spatial/temporal correlation and thevariogram modelthat is further used to define the weights of thekrigingfunction. Note that the experimental variogram is an empirical estimate of thecovarianceof aGaussian process. As such, it may not bepositive definiteand hence not directly usable in kriging, without constraints or further processing. This explains why only a limited number of variogram models are used: most commonly, the linear, the spherical, the Gaussian, and the exponential models.
The empirical variogram is used ingeostatisticsas a first estimate of the variogram model needed for spatial interpolation bykriging.
The squared term in the variogram, for instance(Z(s1)−Z(s2))2{\displaystyle (Z(\mathbf {s} _{1})-Z(\mathbf {s} _{2}))^{2}}, can be replaced with different powers: Amadogramis defined with theabsolute difference,|Z(s1)−Z(s2)|{\displaystyle |Z(\mathbf {s} _{1})-Z(\mathbf {s} _{2})|}, and arodogramis defined with thesquare rootof the absolute difference,|Z(s1)−Z(s2)|0.5{\displaystyle |Z(\mathbf {s} _{1})-Z(\mathbf {s} _{2})|^{0.5}}.Estimatorsbased on these lower powers are said to be moreresistanttooutliers. They can be generalized as a "variogram of orderα",
in which a variogram is of order 2, a madogram is a variogram of order 1, and a rodogram is a variogram of order 0.5.[8]
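Sticking with the binning notation above, the variogram of order α differs from the ordinary estimator only in the power applied to the differences; a sketch (the helper below assumes the observation pairs of a bin have already been collected):

```python
def variogram_of_order(z_pairs, alpha):
    """Variogram of order alpha over the observation pairs of one bin:
    alpha = 2 gives the variogram, 1 the madogram, 0.5 the rodogram."""
    return sum(abs(zi - zj) ** alpha for zi, zj in z_pairs) / (2 * len(z_pairs))
```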
When a variogram is used to describe the correlation of different variables it is calledcross-variogram. Cross-variograms are used inco-kriging.
Should the variable be binary or represent classes of values, one is then talking aboutindicator variograms. Indicator variograms are used inindicator kriging.
|
https://en.wikipedia.org/wiki/Variogram
|
Inmathematics, adifferential equationis anequationthat relates one or more unknownfunctionsand theirderivatives.[1]In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common inmathematical modelsandscientific laws; therefore, differential equations play a prominent role in many disciplines includingengineering,physics,economics, andbiology.
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.
Often when aclosed-form expressionfor the solutions is not available, solutions may be approximated numerically using computers, and manynumerical methodshave been developed to determine solutions with a given degree of accuracy. Thetheory of dynamical systemsanalyzes thequalitativeaspects of solutions, such as theiraverage behaviorover a long time interval.
Differential equations came into existence with theinvention of calculusbyIsaac NewtonandGottfried Leibniz. In Chapter 2 of his 1671 workMethodus fluxionum et Serierum Infinitarum,[2]Newton listed three kinds of differential equations:
In all these cases,yis an unknown function ofx(or ofx1andx2), andfis a given function.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulliproposed theBernoulli differential equationin 1695.[3]This is anordinary differential equationof the form
for which the following year Leibniz obtained solutions by simplifying it.[4]
Historically, the problem of a vibrating string such as that of amusical instrumentwas studied byJean le Rond d'Alembert,Leonhard Euler,Daniel Bernoulli, andJoseph-Louis Lagrange.[5][6][7][8]In 1746, d’Alembert discovered the one-dimensionalwave equation, and within ten years Euler discovered the three-dimensional wave equation.[9]
TheEuler–Lagrange equationwas developed in the 1750s by Euler and Lagrange in connection with their studies of thetautochroneproblem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it tomechanics, which led to the formulation ofLagrangian mechanics.
In 1822,Fourierpublished his work onheat flowinThéorie analytique de la chaleur(The Analytic Theory of Heat),[10]in which he based his reasoning onNewton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of hisheat equationfor conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum.
Inclassical mechanics, the motion of a body is described by its position and velocity as the time value varies.Newton's lawsallow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.
In some cases, this differential equation (called anequation of motion) may be solved explicitly.
An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
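This model can be made concrete. Assuming drag proportional to velocity, the equation is dv/dt = g − kv (downward positive); the sketch below compares its closed-form solution with a forward-Euler approximation, using illustrative parameter values of our own choosing:

```python
import math

g, k = 9.81, 0.5   # gravity (m/s^2) and an illustrative drag coefficient (1/s)

def v_exact(t, v0=0.0):
    """Closed-form solution of dv/dt = g - k*v: v(t) = g/k + (v0 - g/k)e^{-kt}."""
    return g / k + (v0 - g / k) * math.exp(-k * t)

def v_euler(t, steps=50_000, v0=0.0):
    """Forward-Euler integration of the same equation."""
    dt, v = t / steps, v0
    for _ in range(steps):
        v += dt * (g - k * v)
    return v
```

As t grows, both approach the terminal velocity g/k, where gravity and drag balance.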
Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.
Anordinary differential equation(ODE) is an equation containing an unknownfunction of one real or complex variablex, its derivatives, and some given functions ofx. The unknown function is generally represented by avariable(often denotedy), which, therefore,dependsonx. Thusxis often called theindependent variableof the equation. The term "ordinary" is used in contrast with the termpartial differential equation, which may be with respect tomore thanone independent variable.
Linear differential equationsare the differential equations that arelinearin the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms ofintegrals.
Most ODEs that are encountered inphysicsare linear. Therefore, mostspecial functionsmay be defined as solutions of linear differential equations (seeHolonomic function).
As, in general, the solutions of a differential equation cannot be expressed by aclosed-form expression,numerical methodsare commonly used for solving differential equations on a computer.
Apartial differential equation(PDE) is a differential equation that contains unknownmultivariable functionsand theirpartial derivatives. (This is in contrast toordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevantcomputer model.
PDEs can be used to describe a wide variety of phenomena in nature such assound,heat,electrostatics,electrodynamics,fluid flow,elasticity, orquantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensionaldynamical systems, partial differential equations often modelmultidimensional systems.Stochastic partial differential equationsgeneralize partial differential equations for modelingrandomness.
Anon-linear differential equationis a differential equation that is not alinear equationin the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particularsymmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic ofchaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf.Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[11]
Linear differential equations frequently appear asapproximationsto nonlinear equations. These approximations are only valid under restricted conditions. For example, theharmonic oscillatorequation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
Theorder of the differential equationis the highestorder of derivativeof the unknown function that appears in the differential equation.
For example, an equation containing onlyfirst-order derivativesis afirst-order differential equation, an equation containing thesecond-order derivativeis asecond-order differential equation, and so on.[12][13]
When it is written as apolynomial equationin the unknown function and its derivatives, thedegree of the differential equationis, depending on the context, thepolynomial degreein the highest derivative of the unknown function,[14]or itstotal degreein the unknown function and its derivatives. In particular, alinear differential equationhas degree one for both meanings, but the non-linear differential equationy′+y2=0{\displaystyle y'+y^{2}=0}is of degree one for the first meaning but not for the second one.
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as thethin-film equation, which is a fourth order partial differential equation.
In the first group of examplesuis an unknown function ofx, andcandωare constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing betweenlinearandnonlineardifferential equations, and betweenhomogeneousdifferential equationsandheterogeneousones.
In the next group of examples, the unknown functionudepends on two variablesxandtorxandy.
Solvingdifferential equations is not like solvingalgebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest.
For first order initial value problems, thePeano existence theoremgives one set of circumstances in which a solution exists. Given any point(a,b){\displaystyle (a,b)}in the xy-plane, define some rectangular regionZ{\displaystyle Z}, such thatZ=[l,m]×[n,p]{\displaystyle Z=[l,m]\times [n,p]}and(a,b){\displaystyle (a,b)}is in the interior ofZ{\displaystyle Z}. If we are given a differential equationdydx=g(x,y){\textstyle {\frac {dy}{dx}}=g(x,y)}and the condition thaty=b{\displaystyle y=b}whenx=a{\displaystyle x=a}, then there is locally a solution to this problem providedg(x,y){\displaystyle g(x,y)}is continuous onZ{\displaystyle Z}. This solution exists on some interval with its center ata{\displaystyle a}. The solution may not be unique; if, in addition,∂g∂y{\textstyle {\frac {\partial g}{\partial y}}}is continuous onZ{\displaystyle Z}, the Picard–Lindelöf theorem guarantees uniqueness. (SeeOrdinary differential equationfor other results.)
However, this only helps us with first orderinitial value problems. Suppose we had a linear initial value problem of the nth order:
such that
For any nonzerofn(x){\displaystyle f_{n}(x)}, if{f0,f1,…}{\displaystyle \{f_{0},f_{1},\ldots \}}andg{\displaystyle g}are continuous on some interval containingx0{\displaystyle x_{0}}, the solutiony{\displaystyle y}exists and is unique.[15]
The theory of differential equations is closely related to the theory ofdifference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
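As a minimal illustration of this correspondence, the forward-Euler scheme replaces the differential equation y′ = f(x, y) with the difference equation y_{k+1} = y_k + h·f(x_k, y_k), whose solution approaches that of the differential equation as the step h shrinks:

```python
import math

def solve_difference_eq(f, x0, y0, h, n):
    """Iterate the difference equation y_{k+1} = y_k + h*f(x_k, y_k),
    the simplest discrete counterpart of y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y
```

For y′ = y with y(0) = 1, the iterate at x = 1 is (1 + h)^{1/h}, which tends to e as h → 0.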
The study of differential equations is a wide field inpureandapplied mathematics,physics, andengineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not haveclosed formsolutions. Instead, solutions can be approximated usingnumerical methods.
Many fundamental laws ofphysicsandchemistrycan be formulated as differential equations. Inbiologyandeconomics, differential equations are used tomodelthe behavior ofcomplex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-orderpartial differential equation, thewave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed byJoseph Fourier, is governed by another second-order partial differential equation, theheat equation. It turns out that manydiffusionprocesses, while seemingly different, are described by the same equation; theBlack–Scholesequation in finance is, for instance, related to the heat equation.
The number of differential equations that have received a name in various scientific areas is a testament to the importance of the topic. SeeList of named differential equations.
SomeCASsoftware can solve differential equations. These are the commands used in the leading programs:
|
https://en.wikipedia.org/wiki/Differential_equation
|
Inmathematics, anintegro-differential equationis anequationthat involves bothintegralsandderivativesof afunction.
The general first-order, linear (only with respect to the term involving derivative) integro-differential equation is of the form
As is typical withdifferential equations, obtaining a closed-form solution can often be difficult. In the relatively few cases where a solution can be found, it is often by some kind of integral transform, where the problem is first transformed into an algebraic setting. In such situations, the solution of the problem may be derived by applying the inverse transform to the solution of this algebraic equation.
Consider the following second-order problem,
where
is theHeaviside step function. TheLaplace transformis defined by,
Upon taking term-by-term Laplace transforms, and utilising the rules for derivatives and integrals, the integro-differential equation is converted into the following algebraic equation,
Thus,
Inverting the Laplace transform usingcontour integral methodsthen gives
Alternatively, one cancomplete the squareand use a table ofLaplace transforms("exponentially decaying sine wave") or recall from memory to proceed:
Integro-differential equations model many situations fromscienceandengineering, such as in circuit analysis. ByKirchhoff's second law, the net voltage drop across a closed loop equals the voltage impressedE(t){\displaystyle E(t)}. (It is essentially an application ofenergy conservation.) An RLC circuit therefore obeysLddtI(t)+RI(t)+1C∫0tI(τ)dτ=E(t),{\displaystyle L{\frac {d}{dt}}I(t)+RI(t)+{\frac {1}{C}}\int _{0}^{t}I(\tau )d\tau =E(t),}whereI(t){\displaystyle I(t)}is the current as a function of time,R{\displaystyle R}is the resistance,L{\displaystyle L}the inductance, andC{\displaystyle C}the capacitance.[1]
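One way to treat this equation numerically is to introduce the accumulated charge Q(t) = ∫₀ᵗ I(τ)dτ as an extra state variable, which converts the integro-differential equation into the first-order system Q′ = I, L·I′ = E − R·I − Q/C. A sketch (the component values in the test are illustrative):

```python
def simulate_rlc(L, R, C, E, t_end, dt=1e-4):
    """Integrate L*I' + R*I + Q/C = E(t) with Q' = I and I(0) = Q(0) = 0,
    using a simple semi-implicit Euler step."""
    I, Q, t = 0.0, 0.0, 0.0
    while t < t_end:
        I += dt * (E(t) - R * I - Q / C) / L
        Q += dt * I           # Q is the running integral of the current
        t += dt
    return I, Q
```

For a constant source E₀, the current decays to zero while the capacitor charges to Q = C·E₀, as expected physically.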
The activity of interactinginhibitoryandexcitatoryneuronscan be described by a system of integro-differential equations, see for example theWilson-Cowan model.
TheWhitham equationis used to model nonlinear dispersive waves in fluid dynamics.[2]
Integro-differential equations have found applications inepidemiology, the mathematical modeling ofepidemics, particularly when the models containage-structure[3]or describe spatial epidemics.[4]TheKermack-McKendrick theoryof infectious disease transmission is one particular example where age-structure in the population is incorporated into the modeling framework.
|
https://en.wikipedia.org/wiki/Integro-differential_equation
|
Inactuarial scienceandapplied probability,ruin theory(sometimesrisk theory[1]orcollective risk theory) uses mathematical models to describe an insurer's vulnerability to insolvency/ruin. In such models key quantities of interest are the probability of ruin, distribution of surplus immediately prior to ruin and deficit at time of ruin.
The theoretical foundation of ruin theory, known as the Cramér–Lundberg model (or classical compound-Poisson risk model, classical risk process[2]or Poisson risk process) was introduced in 1903 by the Swedish actuaryFilip Lundberg.[3]Lundberg's work was republished in the 1930s byHarald Cramér.[4]
The model describes an insurance company that experiences two opposing cash flows: incoming cash premiums and outgoing claims. Premiums arrive at a constant ratec>0{\textstyle c>0}from customers, and claims arrive according to aPoisson processNt{\displaystyle N_{t}}with intensityλ{\textstyle \lambda }; the claim sizes areindependent and identically distributednon-negative random variablesξi{\displaystyle \xi _{i}}with distributionF{\textstyle F}and meanμ{\textstyle \mu }(together they form acompound Poisson process). So for an insurer who starts with initial surplusx{\textstyle x}, the aggregate assetsXt{\displaystyle X_{t}}are given by:[5]
The central object of the model is to investigate the probability that the insurer's surplus level eventually falls below zero (making the firm bankrupt). This quantity, called the probability of ultimate ruin, is defined as
where the time of ruin isτ=inf{t>0:X(t)<0}{\displaystyle \tau =\inf\{t>0\,:\,X(t)<0\}}with the convention thatinf∅=∞{\displaystyle \inf \varnothing =\infty }. This can be computed exactly using thePollaczek–Khinchine formulaas[6](the ruin function here is equivalent to the tail function of the stationary distribution of waiting time in anM/G/1 queue[7])
whereFl{\displaystyle F_{l}}is the transform of the tail distribution ofF{\displaystyle F},
and⋅∗n{\displaystyle \cdot ^{\ast n}}denotes then{\displaystyle n}-foldconvolution.
In the case where the claim sizes are exponentially distributed, this simplifies to[7]
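The simplified expression is the classical closed form ψ(u) = ρ·e^{−(1−ρ)u/μ} with ρ = λμ/c, valid for Exp(mean μ) claim sizes under the net profit condition c > λμ; a sketch (the function name is ours):

```python
import math

def ruin_prob_exponential(u, lam, mu, c):
    """Ultimate ruin probability in the Cramer-Lundberg model with
    exponentially distributed claims of mean mu, claim intensity lam,
    and premium rate c; requires the net profit condition c > lam*mu."""
    rho = lam * mu / c            # expected claims per unit premium
    if rho >= 1:
        return 1.0                # without positive safety loading, ruin is certain
    return rho * math.exp(-(1 - rho) * u / mu)
```

Note ψ(0) = ρ, matching the Pollaczek–Khinchine formula at zero initial surplus.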
E. Sparre Andersen extended the classical model in 1957[8]by allowing claim inter-arrival times to have arbitrary distribution functions.[9]
where the claim number process(Nt)t≥0{\displaystyle (N_{t})_{t\geq 0}}is arenewal processand(ξi)i∈N{\displaystyle (\xi _{i})_{i\in \mathbb {N} }}are independent and identically distributed random variables.
The model furthermore assumes thatξi>0{\displaystyle \xi _{i}>0}almost surely and that(Nt)t≥0{\displaystyle (N_{t})_{t\geq 0}}and(ξi)i∈N{\displaystyle (\xi _{i})_{i\in \mathbb {N} }}are independent. The model is also known as the renewal risk model.
Michael R. Powers[10]and Gerber and Shiu[11]analyzed the behavior of the insurer's surplus through theexpected discounted penalty function, which is commonly referred to as Gerber-Shiu function in the ruin literature and named after actuarial scientistsElias S.W. ShiuandHans-Ulrich Gerber. It is arguable whether the function should have been called Powers-Gerber-Shiu function due to the contribution of Powers.[10]
InPowers' notation, this is defined as
whereδ{\displaystyle \delta }is the discounting force of interest,Kτ{\displaystyle K_{\tau }}is a general penalty function reflecting the economic costs to the insurer at the time of ruin, and the expectationEx{\displaystyle \mathbb {E} ^{x}}corresponds to the probability measurePx{\displaystyle \mathbb {P} ^{x}}. The function is called expected discounted cost of insolvency by Powers.[10]
In Gerber and Shiu's notation, it is given as
whereδ{\displaystyle \delta }is the discounting force of interest andw(Xτ−,Xτ){\displaystyle w(X_{\tau -},X_{\tau })}is a penalty function capturing the economic costs to the insurer at the time of ruin (assumed to depend on the surplus prior to ruinXτ−{\displaystyle X_{\tau -}}and the deficit at ruinXτ{\displaystyle X_{\tau }}), and the expectationEx{\displaystyle \mathbb {E} ^{x}}corresponds to the probability measurePx{\displaystyle \mathbb {P} ^{x}}. Here the indicator functionI(τ<∞){\displaystyle \mathbb {I} (\tau <\infty )}emphasizes that the penalty is exercised only when ruin occurs.
It is quite intuitive to interpret the expected discounted penalty function. Since the function measures the actuarial present value of the penalty that occurs atτ{\displaystyle \tau }, the penalty function is multiplied by the discounting factore−δτ{\displaystyle e^{-\delta \tau }}, and then averaged over the probability distribution of the waiting time toτ{\displaystyle \tau }. While Gerber and Shiu[11]applied this function to the classical compound-Poisson model, Powers[10]argued that an insurer's surplus is better modeled by a family of diffusion processes.
There are a great variety of ruin-related quantities that fall into the category of the expected discounted penalty function.
Other finance-related quantities belonging to the class of the expected discounted penalty function include the perpetual American put option,[12]the contingent claim at optimal exercise time, and more.
|
https://en.wikipedia.org/wiki/Ruin_theory
|
Inmathematics, theVolterra integral equationsare a special type ofintegral equations.[1]They are divided into two groups referred to as the first and the second kind.
A linear Volterra equation of the first kind is
wherefis a given function andxis an unknown function to be solved for. A linear Volterra equation of the second kind is
Inoperator theory, and inFredholm theory, the corresponding operators are calledVolterra operators. A useful method to solve such equations, theAdomian decomposition method, is due toGeorge Adomian.
A linear Volterra integral equation is aconvolutionequation if
The functionK{\displaystyle K}in the integral is called thekernel. Such equations can be analyzed and solved by means ofLaplace transformtechniques.
For a weakly singular kernel of the formK(t,s)=(t2−s2)−α{\displaystyle K(t,s)=(t^{2}-s^{2})^{-\alpha }}with0<α<1{\displaystyle 0<\alpha <1}, a Volterra integral equation of the first kind can conveniently be transformed into a classical Abel integral equation.
The Volterra integral equations were introduced byVito Volterraand then studied byTraian Lalescuin his 1908 thesis,Sur les équations de Volterra, written under the direction ofÉmile Picard. In 1911, Lalescu wrote the first book ever on integral equations.
Volterra integral equations find application indemographyasLotka's integral equation,[2]the study ofviscoelasticmaterials,
inactuarial sciencethrough therenewal equation,[3]and influid mechanicsto describe the flow behavior near finite-sized boundaries.[4][5]
A linear Volterra equation of the first kind can always be reduced to a linear Volterra equation of the second kind, assuming thatK(t,t)≠0{\displaystyle K(t,t)\neq 0}. Taking the derivative of the first kind Volterra equation gives us:dfdt=∫at∂K∂tx(s)ds+K(t,t)x(t){\displaystyle {df \over {dt}}=\int _{a}^{t}{\partial K \over {\partial t}}x(s)ds+K(t,t)x(t)}Dividing through byK(t,t){\displaystyle K(t,t)}yields:x(t)=1K(t,t)dfdt−∫at1K(t,t)∂K∂tx(s)ds{\displaystyle x(t)={1 \over {K(t,t)}}{df \over {dt}}-\int _{a}^{t}{1 \over {K(t,t)}}{\partial K \over {\partial t}}x(s)ds}Definingf~(t)=1K(t,t)dfdt{\textstyle {\widetilde {f}}(t)={1 \over {K(t,t)}}{df \over {dt}}}andK~(t,s)=−1K(t,t)∂K∂t{\textstyle {\widetilde {K}}(t,s)=-{1 \over {K(t,t)}}{\partial K \over {\partial t}}}completes the transformation of the first kind equation into a linear Volterra equation of the second kind.
A standard method for computing the numerical solution of a linear Volterra equation of the second kind is thetrapezoidal rule, which for equally-spaced subintervalsΔx{\displaystyle \Delta x}is given by:∫abf(x)dx≈Δx2[f(x0)+2∑i=1n−1f(xi)+f(xn)]{\displaystyle \int _{a}^{b}f(x)dx\approx {\Delta x \over {2}}\left[f(x_{0})+2\sum _{i=1}^{n-1}f(x_{i})+f(x_{n})\right]}Assuming equal spacing for the subintervals, the integral component of the Volterra equation may be approximated by:∫atK(t,s)x(s)ds≈Δs2[K(t,s0)x(s0)+2K(t,s1)x(s1)+⋯+2K(t,sn−1)x(sn−1)+K(t,sn)x(sn)]{\displaystyle \int _{a}^{t}K(t,s)x(s)ds\approx {\Delta s \over {2}}\left[K(t,s_{0})x(s_{0})+2K(t,s_{1})x(s_{1})+\cdots +2K(t,s_{n-1})x(s_{n-1})+K(t,s_{n})x(s_{n})\right]}Definingxi=x(si){\displaystyle x_{i}=x(s_{i})},fi=f(ti){\displaystyle f_{i}=f(t_{i})}, andKij=K(ti,sj){\displaystyle K_{ij}=K(t_{i},s_{j})}, we have the system of linear equations:x0=f0x1=f1+Δs2(K10x0+K11x1)x2=f2+Δs2(K20x0+2K21x1+K22x2)⋮xn=fn+Δs2(Kn0x0+2Kn1x1+⋯+2Kn,n−1xn−1+Knnxn){\displaystyle {\begin{aligned}x_{0}&=f_{0}\\x_{1}&=f_{1}+{\Delta s \over {2}}\left(K_{10}x_{0}+K_{11}x_{1}\right)\\x_{2}&=f_{2}+{\Delta s \over {2}}\left(K_{20}x_{0}+2K_{21}x_{1}+K_{22}x_{2}\right)\\&\vdots \\x_{n}&=f_{n}+{\Delta s \over {2}}\left(K_{n0}x_{0}+2K_{n1}x_{1}+\cdots +2K_{n,n-1}x_{n-1}+K_{nn}x_{n}\right)\end{aligned}}}This is equivalent to thematrixequation:x=f+Mx⟹x=(I−M)−1f{\displaystyle x=f+Mx\implies x=(I-M)^{-1}f}For well-behaved kernels, the trapezoidal rule tends to work well.
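Because each row of the system involves only x_0, …, x_i, the unknowns can be recovered one at a time by forward substitution; a sketch:

```python
def solve_volterra2(K, f, a, b, n):
    """Solve x(t) = f(t) + integral_a^t K(t, s) x(s) ds by the trapezoidal
    rule on n equal subintervals, using forward substitution on the
    lower-triangular discretized system."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    x = [f(t[0])]                                  # x_0 = f_0
    for i in range(1, n + 1):
        acc = K(t[i], t[0]) * x[0] + 2 * sum(K(t[i], t[j]) * x[j] for j in range(1, i))
        # row i: x_i = f_i + (h/2) * (acc + K_ii * x_i), solved for x_i
        x.append((f(t[i]) + h / 2 * acc) / (1 - h / 2 * K(t[i], t[i])))
    return t, x
```

For K ≡ 1 and f ≡ 1 the equation is x(t) = 1 + ∫₀ᵗ x(s)ds, whose exact solution is eᵗ.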
One area where Volterra integral equations appear is inruin theory, the study of the risk of insolvency in actuarial science. The objective is to quantify the probability of ruinψ(u)=P[τ(u)<∞]{\displaystyle \psi (u)=\mathbb {P} [\tau (u)<\infty ]}, whereu{\displaystyle u}is the initial surplus andτ(u){\displaystyle \tau (u)}is the time of ruin. In theclassical modelof ruin theory, the net cash positionXt{\displaystyle X_{t}}is a function of the initial surplus, premium income earned at ratec{\displaystyle c}, and outgoing claimsξ{\displaystyle \xi }:Xt=u+ct−∑i=1Ntξi,t≥0{\displaystyle X_{t}=u+ct-\sum _{i=1}^{N_{t}}\xi _{i},\quad t\geq 0}whereNt{\displaystyle N_{t}}is aPoisson processfor the number of claims with intensityλ{\displaystyle \lambda }. Under these circumstances, the ruin probability may be represented by a Volterra integral equation of the form[6]:ψ(u)=λc∫u∞S(x)dx+λc∫0uψ(u−x)S(x)dx{\displaystyle \psi (u)={\lambda \over {c}}\int _{u}^{\infty }S(x)dx+{\lambda \over {c}}\int _{0}^{u}\psi (u-x)S(x)dx}whereS(⋅){\displaystyle S(\cdot )}is thesurvival functionof the claims distribution.
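In convolution form this reads ψ(u) = f(u) + (λ/c)∫₀ᵘ S(u−s)ψ(s)ds with f(u) = (λ/c)∫_u^∞ S(x)dx, a linear Volterra equation of the second kind. For exponential claims the answer is known in closed form, which lets a trapezoidal-rule solution be checked directly (parameter values are illustrative):

```python
import math

lam, mu, c = 1.0, 1.0, 1.25          # illustrative parameters; rho = lam*mu/c = 0.8
rho = lam * mu / c

def psi_exact(u):
    """Known closed form for Exp(mean mu) claims: rho * exp(-(1 - rho) u / mu)."""
    return rho * math.exp(-(1 - rho) * u / mu)

def psi_numeric(u_max, n):
    """Trapezoidal-rule solution of
    psi(u) = (lam/c)*mu*e^{-u/mu} + (lam/c) * int_0^u e^{-(u-s)/mu} psi(s) ds."""
    h = u_max / n
    u = [i * h for i in range(n + 1)]
    f = lambda t: (lam / c) * mu * math.exp(-t / mu)       # tail integral of S
    K = lambda t, s: (lam / c) * math.exp(-(t - s) / mu)   # convolution kernel
    psi = [f(0.0)]
    for i in range(1, n + 1):
        acc = K(u[i], u[0]) * psi[0] + 2 * sum(K(u[i], u[j]) * psi[j] for j in range(1, i))
        psi.append((f(u[i]) + h / 2 * acc) / (1 - h / 2 * K(u[i], u[i])))
    return u, psi
```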
|
https://en.wikipedia.org/wiki/Volterra_integral_equation
|
Inmathematics, specificallyfunctional analysis,Mercer's theoremis a representation of a symmetricpositive-definitefunction on a square as a sum of a convergent sequence of product functions. This theorem, presented in (Mercer 1909), is one of the most notable results of the work ofJames Mercer(1883–1932). It is an important theoretical tool in the theory ofintegral equations; it is used in theHilbert spacetheory ofstochastic processes, for example theKarhunen–Loève theorem; and it is also used in thereproducing kernel Hilbert spacetheory where it characterizes a symmetricpositive-definite kernelas a reproducing kernel.[1]
To explain Mercer's theorem, we first consider an important special case; seebelowfor a more general formulation.
Akernel, in this context, is asymmetriccontinuous function
whereK(x,y)=K(y,x){\displaystyle K(x,y)=K(y,x)}for allx,y∈[a,b]{\displaystyle x,y\in [a,b]}.
Kis said to be apositive-definite kernelif and only if
for all finite sequences of pointsx1, ...,xnof [a,b] and all choices of real numbersc1, ...,cn. Note that the term "positive-definite" is well-established in literature despite the weak inequality in the definition.[2][3]
The fundamental characterization of stationary positive-definite kernels (whereK(x,y)=K(x−y){\displaystyle K(x,y)=K(x-y)}) is given byBochner's theorem. It states that a continuous functionK(x−y){\displaystyle K(x-y)}is positive-definite if and only if it can be expressed as theFourier transformof a finite non-negative measureμ{\displaystyle \mu }:
This spectral representation reveals the connection between positive definiteness and harmonic analysis, providing a stronger and more direct characterization of positive definiteness than the abstract definition in terms of inequalities when the kernel is stationary, i.e., when it can be expressed as a one-variable function of the difference between points rather than a two-variable function of the positions of pairs of points.
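The two characterizations can be illustrated together: the Gaussian kernel exp(−(x−y)²/2ℓ²) is (up to normalization) the Fourier transform of a Gaussian measure, hence positive-definite by Bochner's theorem, so every quadratic form from the defining inequality is nonnegative. A sketch with arbitrary points and coefficients:

```python
import math
import random

def gaussian_kernel(x, y, ell=1.0):
    """Gaussian (RBF) kernel: the Fourier transform of a Gaussian measure,
    hence positive-definite by Bochner's theorem."""
    return math.exp(-((x - y) ** 2) / (2 * ell ** 2))

# The defining inequality: sum_ij c_i c_j K(x_i, x_j) >= 0 for any points
# and real coefficients (the points/coefficients below are arbitrary test data).
random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(20)]
cs = [random.uniform(-1.0, 1.0) for _ in range(20)]
quad = sum(ci * cj * gaussian_kernel(xi, xj)
           for ci, xi in zip(cs, xs)
           for cj, xj in zip(cs, xs))
```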
Associated to K is a linear operator (more specifically a Hilbert–Schmidt integral operator when the interval is compact) on functions, defined by the integral

[T_K φ](x) = ∫_a^b K(x, s) φ(s) ds.

We assume φ can range through the space of real-valued square-integrable functions L²[a, b]; however, in many cases the associated RKHS can be strictly larger than L²[a, b]. Since T_K is a linear operator, the eigenvalues and eigenfunctions of T_K exist.
Theorem. Suppose K is a continuous symmetric positive-definite kernel. Then there is an orthonormal basis {e_i}_i of L²[a, b] consisting of eigenfunctions of T_K such that the corresponding sequence of eigenvalues {λ_i}_i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on [a, b] and K has the representation

K(s, t) = ∑_{j=1}^∞ λ_j e_j(s) e_j(t),

where the convergence is absolute and uniform.
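The expansion can be seen concretely (an illustrative sketch, not from the original text) for the classical example K(x, y) = min(x, y) on [0, 1], whose eigenvalues 1/((n − 1/2)π)² and eigenfunctions √2 sin((n − 1/2)πx) are known in closed form; partial sums of the Mercer series converge to the kernel:

```python
import math

def mercer_partial_sum(x, y, n_terms=5000):
    # Partial sum of the Mercer expansion of K(x, y) = min(x, y) on [0, 1].
    # Known spectrum of T_K: eigenvalues 1/((n - 1/2) pi)^2, with
    # eigenfunctions e_n(t) = sqrt(2) sin((n - 1/2) pi t).
    total = 0.0
    for n in range(1, n_terms + 1):
        nu = (n - 0.5) * math.pi
        total += (1.0 / nu ** 2) * 2.0 * math.sin(nu * x) * math.sin(nu * y)
    return total
```

With a few thousand terms the partial sum matches min(x, y) to three or four decimal places, consistent with the absolute and uniform convergence stated in the theorem.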
We now explain in greater detail the structure of the proof of Mercer's theorem, particularly how it relates to the spectral theory of compact operators.
To show compactness, one shows that the image of the unit ball of L²[a, b] under T_K is equicontinuous and applies Ascoli's theorem to conclude that the image of the unit ball is relatively compact in C([a, b]) with the uniform norm, and a fortiori in L²[a, b].
Now apply the spectral theorem for compact operators on Hilbert spaces to T_K to show the existence of the orthonormal basis {e_i}_i of L²[a, b].
If λ_i ≠ 0, the eigenvector (eigenfunction) e_i is seen to be continuous on [a, b]. Now the sequence of partial sums

∑_{i=1}^{n} λ_i e_i(s) e_i(t)

converges absolutely and uniformly to a kernel K_0, which is easily seen to define the same operator as the kernel K. Hence K = K_0, from which Mercer's theorem follows.
Finally, to show non-negativity of the eigenvalues, one can write λ⟨f, f⟩ = ⟨f, T_K f⟩ and express the right-hand side as an integral well approximated by its Riemann sums, which are non-negative by the positive-definiteness of K, implying λ⟨f, f⟩ ≥ 0 and hence λ ≥ 0.
The following is immediate:
Theorem. Suppose K is a continuous symmetric positive-definite kernel; T_K has a sequence of nonnegative eigenvalues {λ_i}_i. Then

∫_a^b K(t, t) dt = ∑_i λ_i.

This shows that the operator T_K is a trace class operator and tr(T_K) = ∫_a^b K(t, t) dt.
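For the classical example K(x, y) = min(x, y) on [0, 1] (an illustrative sketch), the trace identity can be checked numerically: ∫₀¹ K(t, t) dt = ∫₀¹ t dt = 1/2, while the known eigenvalues 1/((n − 1/2)π)² sum to the same value:

```python
import math

# Trace identity for K(x, y) = min(x, y) on [0, 1]: the diagonal integral
# equals 1/2, and so must the sum of the eigenvalues 1/((n - 1/2) pi)^2
# of the associated integral operator (a standard, known spectrum).
eigen_sum = sum(1.0 / (((n - 0.5) * math.pi) ** 2) for n in range(1, 200000))
```

The truncated sum agrees with 1/2 up to the (slowly decaying) tail of the series.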
Mercer's theorem itself is a generalization of the result that any symmetric positive-semidefinite matrix is the Gramian matrix of a set of vectors.
The first generalization replaces the interval [a, b] with any compact Hausdorff space X; Lebesgue measure on [a, b] is replaced by a finite countably additive measure μ on the Borel algebra of X whose support is X. This means that μ(U) > 0 for any nonempty open subset U of X.
A recent generalization replaces these conditions by the following: the set X is a first-countable topological space endowed with a Borel (complete) measure μ. X is the support of μ and, for all x in X, there is an open set U containing x and having finite measure. Then essentially the same result holds:
Theorem. Suppose K is a continuous symmetric positive-definite kernel on X. If the function κ is in L¹_μ(X), where κ(x) = K(x, x) for all x in X, then there is an orthonormal set {e_i}_i of L²_μ(X) consisting of eigenfunctions of T_K such that the corresponding sequence of eigenvalues {λ_i}_i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on X and K has the representation

K(s, t) = ∑_{j=1}^∞ λ_j e_j(s) e_j(t),

where the convergence is absolute and uniform on compact subsets of X.
The next generalization deals with representations of measurable kernels.
Let (X, M, μ) be a σ-finite measure space. An L² (or square-integrable) kernel on X is a function K : X × X → ℝ with

∫_X ∫_X |K(x, y)|² dμ(x) dμ(y) < ∞.

L² kernels define a bounded operator T_K by the formula

[T_K φ](x) = ∫_X K(x, y) φ(y) dμ(y).
T_K is a compact operator (actually it is even a Hilbert–Schmidt operator). If the kernel K is symmetric, by the spectral theorem, T_K has an orthonormal basis of eigenvectors. Those eigenvectors that correspond to non-zero eigenvalues can be arranged in a sequence {e_i}_i (regardless of separability).
Theorem. If K is a symmetric positive-definite kernel on (X, M, μ), then

K(x, y) = ∑_i λ_i e_i(x) e_i(y),

where the convergence is in the L² norm. Note that when continuity of the kernel is not assumed, the expansion no longer converges uniformly.
In mathematics, a real-valued function K(x, y) is said to fulfill Mercer's condition if for all square-integrable functions g(x) one has

∬ g(x) K(x, y) g(y) dx dy ≥ 0.

This is analogous to the definition of a positive-semidefinite matrix: a matrix K of dimension N which satisfies, for all vectors g, the property

g^T K g = ∑_{i=1}^N ∑_{j=1}^N g_i K_{ij} g_j ≥ 0.
A positive constant function K(x, y) = c > 0 satisfies Mercer's condition, as then the integral becomes, by Fubini's theorem,

∬ g(x) c g(y) dx dy = c ∫ g(x) dx ∫ g(y) dy = c (∫ g(x) dx)²,

which is indeed non-negative.
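This computation can be verified numerically (an illustrative sketch; the test function g is arbitrary):

```python
# Numerical check that a positive constant kernel K(x, y) = c satisfies
# Mercer's condition: the double integral factors, by Fubini, into
# c * (integral of g)^2 >= 0.
def riemann(f, a, b, n=2000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

c = 2.5
g = lambda x: x - 0.3   # an arbitrary square-integrable test function
single = riemann(g, 0.0, 1.0)
double = riemann(lambda x: riemann(lambda y: c * g(x) * g(y), 0, 1, 200),
                 0, 1, 200)
```

Note that g was chosen to change sign on [0, 1], so the non-negativity of the double integral is not obvious from the integrand alone.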
https://en.wikipedia.org/wiki/Mercer%27s_theorem
The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics.
In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted.[1] Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).
For many distributions, the kernel can be written in closed form, but not the normalization constant.
An example is the normal distribution. Its probability density function is

p(x | μ, σ²) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}

and the associated kernel is

p(x | μ, σ²) ∝ e^{−(x−μ)²/(2σ²)}.

Note that the factor in front of the exponential has been omitted, even though it contains the parameter σ², because it is not a function of the domain variable x.
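As a sketch of how the normalization factor can be reinstated, one can integrate the kernel numerically; the result recovers √(2πσ²) (the values of μ and σ² here are illustrative):

```python
import math

# Reinstating the normalization factor of a Gaussian kernel numerically:
# integrating exp(-(x - mu)^2 / (2 sigma^2)) over the real line recovers
# sqrt(2 pi sigma^2), so kernel / integral is the full pdf again.
def kernel(x, mu=1.0, sigma2=4.0):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma2))

def integrate(f, a, b, n=200000):
    # Midpoint rule over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

z = integrate(kernel, -60.0, 60.0)   # wide enough to capture the mass
expected = math.sqrt(2.0 * math.pi * 4.0)
```

Dividing the kernel by z gives back a properly normalized density.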
The kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning.
In nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time series, in the use of the periodogram to estimate the spectral density, where they are known as window functions. An additional use is in the estimation of a time-varying intensity for a point process, where window functions (kernels) are convolved with time-series data.
Commonly, kernel widths must also be specified when running a non-parametric estimation.
A kernel is a non-negative real-valued integrable function K. For most applications, it is desirable to define the function to satisfy two additional requirements:

- normalization: ∫_{−∞}^{+∞} K(u) du = 1;
- symmetry: K(−u) = K(u) for all values of u.
The first requirement ensures that the method of kernel density estimation results in aprobability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used.
IfKis a kernel, then so is the functionK* defined byK*(u) = λK(λu), where λ > 0. This can be used to select a scale that is appropriate for the data.
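A quick numerical check (illustrative, with an arbitrary λ) that the rescaled kernel K* still has unit mass:

```python
# Check numerically that rescaling K*(u) = lambda * K(lambda * u)
# preserves unit mass, using the Epanechnikov kernel as an example.
def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def scaled(u, lam=2.5):
    return lam * epanechnikov(lam * u)

def integrate(f, a, b, n=100000):
    # Midpoint rule over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

mass = integrate(scaled, -1.0, 1.0)   # support shrinks to |u| <= 1/lambda
```

The substitution v = λu shows why: the factor λ exactly compensates for the shrunken support.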
Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov,[2]quartic (biweight), tricube,[3]triweight, Gaussian, quadratic[4]and cosine.
In the list below, if K is given with a bounded support, then K(u) = 0 for values of u lying outside the support. All of the kernels listed here except the Gaussian have support |u| ≤ 1:

- Uniform ("boxcar function"): K(u) = 1/2
- Triangular: K(u) = 1 − |u|
- Epanechnikov (parabolic): K(u) = (3/4)(1 − u²)
- Quartic (biweight): K(u) = (15/16)(1 − u²)²
- Triweight: K(u) = (35/32)(1 − u²)³
- Tricube: K(u) = (70/81)(1 − |u|³)³
- Gaussian: K(u) = (1/√(2π)) e^{−u²/2} (support: all of ℝ)
- Cosine: K(u) = (π/4) cos(πu/2)
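A numerical sanity check (an illustrative sketch) that some of the common kernels named above integrate to 1 over their support:

```python
import math

# Each kernel is integrated exactly over its support [-1, 1]; since the
# integration range equals the support, no explicit support check is needed.
def integrate(f, a, b, n=100000):
    # Midpoint rule over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

kernels = {
    "uniform": lambda u: 0.5,
    "epanechnikov": lambda u: 0.75 * (1.0 - u * u),
    "quartic": lambda u: (15.0 / 16.0) * (1.0 - u * u) ** 2,
    "cosine": lambda u: (math.pi / 4.0) * math.cos(math.pi * u / 2.0),
}
masses = {name: integrate(k, -1.0, 1.0) for name, k in kernels.items()}
```

All four masses come out as 1 to high precision, confirming the normalization requirement.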
https://en.wikipedia.org/wiki/Kernel_(statistics)
A kernel smoother is a statistical technique to estimate a real-valued function f : ℝ^p → ℝ as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter.
Kernel smoothing is a type of weighted moving average.
Let K_{h_λ}(X_0, X) be a kernel defined by

K_{h_λ}(X_0, X) = D(‖X − X_0‖ / h_λ(X_0))

where:
- ‖·‖ is the Euclidean norm;
- h_λ(X_0) is a parameter (kernel radius);
- D(t) is typically a positive real-valued function whose value decreases (or does not increase) as the distance between X and X_0 increases.
Popular kernels used for smoothing include parabolic (Epanechnikov), tricube, and Gaussian kernels.
Let Y(X) : ℝ^p → ℝ be a continuous function of X. For each X_0 ∈ ℝ^p, the Nadaraya–Watson kernel-weighted average (smooth Y(X) estimation) is defined by

Ŷ(X_0) = ∑_{i=1}^N K_{h_λ}(X_0, X_i) Y(X_i) / ∑_{i=1}^N K_{h_λ}(X_0, X_i)

where:
- N is the number of observed points;
- Y(X_i) are the observations at the points X_i.
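A minimal one-dimensional sketch of the Nadaraya–Watson average, assuming a Gaussian kernel and a hypothetical bandwidth b:

```python
import math

# Nadaraya-Watson smoother in one dimension with a Gaussian kernel.
# The bandwidth b is an illustrative tuning parameter.
def nw_smooth(x0, xs, ys, b=0.5):
    weights = [math.exp(-((x - x0) ** 2) / (2.0 * b * b)) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]   # data lying exactly on the line y = x
```

At an interior point the symmetric weights recover the true value, while at the boundary x = 0 the estimate is pulled above the true value 0 — the boundary bias discussed below for the kernel average smoother.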
In the following sections, we describe some particular cases of kernel smoothers.
The Gaussian kernel is one of the most widely used kernels, and is expressed with the equation below:

K(x*, x_i) = exp(−(x* − x_i)² / (2b²)).

Here, b is the length scale for the input space.
The k-nearest neighbor algorithm can be used for defining a k-nearest neighbor smoother as follows. For each point X_0, take m nearest neighbors and estimate the value of Y(X_0) by averaging the values of these neighbors.
Formally, h_m(X_0) = ‖X_0 − X_[m]‖, where X_[m] is the m-th closest neighbor of X_0, and

D(t) = 1/m if |t| ≤ 1, and 0 otherwise.
In this example,Xis one-dimensional. For each X0, theY^(X0){\displaystyle {\hat {Y}}(X_{0})}is an average value of 16 closest toX0points (denoted by red).
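A minimal sketch of such a smoother in one dimension (the data and the choice m = 3 are illustrative):

```python
# k-nearest-neighbor smoother: estimate Y(x0) as the average of the
# m observations closest to x0.
def knn_smooth(x0, xs, ys, m=3):
    ranked = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x0))
    nearest = ranked[:m]
    return sum(y for _, y in nearest) / m

xs = [0.0, 1.0, 2.0, 3.0, 10.0]
ys = [1.0, 2.0, 3.0, 4.0, 100.0]
```

Note that the distant outlier at x = 10 has no influence on the estimate near x = 1, because it is never among the m nearest neighbors there.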
The idea of the kernel average smoother is the following. For each data point X_0, choose a constant distance size λ (kernel radius, or window width for p = 1 dimension), and compute a weighted average for all data points that are closer than λ to X_0 (the closer a point is to X_0, the higher its weight).
Formally,hλ(X0)=λ=constant,{\displaystyle h_{\lambda }(X_{0})=\lambda ={\text{constant}},}andD(t) is one of the popular kernels.
For eachX0the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimation is smooth, but the boundary points are biased. The reason for that is the non-equal number of points (from the right and from the left to theX0) in the window, when theX0is close enough to the boundary.
In the two previous sections we assumed that the underlying Y(X) function is locally constant, therefore we were able to use the weighted average for the estimation. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions), and not the constant (horizontal line). After fitting the line, the estimationY^(X0){\displaystyle {\hat {Y}}(X_{0})}is provided by the value of this line atX0point. By repeating this procedure for eachX0, one can get the estimation functionY^(X){\displaystyle {\hat {Y}}(X)}.
Like in previous section, the window width is constanthλ(X0)=λ=constant.{\displaystyle h_{\lambda }(X_{0})=\lambda ={\text{constant}}.}Formally, the local linear regression is computed by solving a weighted least square problem.
For one dimension (p= 1):
minα(X0),β(X0)∑i=1NKhλ(X0,Xi)(Y(Xi)−α(X0)−β(X0)Xi)2⇓Y^(X0)=α(X0)+β(X0)X0{\displaystyle {\begin{aligned}&\min _{\alpha (X_{0}),\beta (X_{0})}\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left(Y(X_{i})-\alpha (X_{0})-\beta (X_{0})X_{i}\right)^{2}}\\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\Downarrow \\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\hat {Y}}(X_{0})=\alpha (X_{0})+\beta (X_{0})X_{0}\\\end{aligned}}}
The closed form solution is given by:

Ŷ(X_0) = (1, X_0) (B^T W(X_0) B)^{−1} B^T W(X_0) y

where:
- y = (Y(X_1), …, Y(X_N))^T is the vector of observations;
- W(X_0) is the N × N diagonal matrix of kernel weights K_{h_λ}(X_0, X_i);
- B is the N × 2 matrix whose i-th row is (1, X_i).
The resulting function is smooth, and the problem with the biased boundary points is reduced.
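A minimal sketch of the one-dimensional local linear smoother, assuming a Gaussian weight function; on exactly linear data the fit is exact even at the boundary, illustrating the reduced boundary bias:

```python
import math

# Local linear smoother for p = 1: solve the kernel-weighted least
# squares problem at x0 and evaluate the fitted line at x0.
def local_linear(x0, xs, ys, b=1.0):
    w = [math.exp(-((x - x0) ** 2) / (2.0 * b * b)) for x in xs]
    # Weighted normal equations for (alpha, beta).
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, xs))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    alpha = (s2 * t0 - s1 * t1) / det
    beta = (s0 * t1 - s1 * t0) / det
    return alpha + beta * x0

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly linear data: y = 2x + 1
```

A kernel average smoother would be biased at x = 0, but the local linear fit recovers the intercept exactly.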
Local linear regression can be applied to any-dimensional space, though the question of what is a local neighborhood becomes more complicated. It is common to use k nearest training points to a test point to fit the local linear regression. This can lead to high variance of the fitted function. To bound the variance, the set of training points should contain the test point in their convex hull (see Gupta et al. reference).
Instead of fitting locally linear functions, one can fit polynomial functions.
For p = 1, one should minimize:

min_{α(X_0), β_j(X_0)} ∑_{i=1}^N K_{h_λ}(X_0, X_i) (Y(X_i) − α(X_0) − ∑_{j=1}^d β_j(X_0) X_i^j)²

with Ŷ(X_0) = α(X_0) + ∑_{j=1}^d β_j(X_0) X_0^j.
In the general case (p > 1), one should minimize the analogous kernel-weighted least squares criterion with a multivariate polynomial basis in place of the powers of X_i.
https://en.wikipedia.org/wiki/Kernel_smoothing
In statistics, adaptive or "variable-bandwidth" kernel density estimation is a form of kernel density estimation in which the size of the kernels used in the estimate is varied depending upon either the location of the samples or the location of the test point. It is a particularly effective technique when the sample space is multi-dimensional.[1]
Given a set of samples, {x⃗_i}, we wish to estimate the density, P(x⃗), at a test point, x⃗:

P(x⃗) ≈ (1/n) ∑_{i=1}^n (1/h^D) K((x⃗ − x⃗_i)/h)

where n is the number of samples, K is the "kernel", h is its width and D is the number of dimensions in x⃗.
The kernel can be thought of as a simple,linear filter.
Using a fixed filter width may mean that in regions of low density, all samples
will fall in the tails of the filter with very low weighting, while regions of high
density will find an excessive number of samples in the central region with weighting
close to unity. To fix this problem, we vary the width of the kernel in different
regions of the sample space.
There are two methods of doing this: balloon and pointwise estimation.
In a balloon estimator, the kernel width is varied depending on the location
of the test point. In a pointwise estimator, the kernel width is varied depending
on the location of the sample.[1]
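A minimal one-dimensional sketch of a balloon estimator, where the bandwidth at the test point is derived from a pilot fixed-bandwidth estimate (the constants h0 and k are illustrative, not from the text):

```python
import math

# Balloon estimator sketch in one dimension (D = 1): the bandwidth used
# at a test point is chosen from a pilot fixed-bandwidth estimate,
# wider where the pilot density is low.
def gaussian(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, samples, h):
    # Fixed-bandwidth kernel density estimate.
    return sum(gaussian((x - xi) / h) for xi in samples) / (len(samples) * h)

def balloon_kde(x, samples, h0=0.5, k=0.2):
    pilot = kde(x, samples, h0)
    h = k / max(pilot, 1e-12)    # wider kernel where the pilot density is low
    return kde(x, samples, h)

samples = [-1.0, -0.5, 0.0, 0.5, 1.0]
```

In the tails the widened kernel still reaches the samples, so the estimate stays positive instead of collapsing to (numerically) zero.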
For multivariate estimators, the parameter,h, can be generalized to
vary not just the size, but also the shape of the kernel. This more complicated approach
will not be covered here.
A common method of varying the kernel width is to make it inversely proportional to the density at the test point:
wherekis a constant.
If we back-substitute the estimated PDF, assuming a Gaussian kernel function, we can show that W is a constant:[2]
A similar derivation holds for any kernel whose normalising function is of the order h^D, although with a different constant factor in place of the (2π)^{D/2} term. This produces a generalization of the k-nearest neighbour algorithm.
That is, a uniform kernel function will return the KNN technique.[2]
There are two components to the error: a variance term and a bias term. The variance term is given as:[1]
The bias term is found by evaluating the approximated function in the limit as the kernel
width becomes much larger than the sample spacing. By using a Taylor expansion for the real function, the bias term drops out:
An optimal kernel width that minimizes the error of each estimate can thus be derived.
The method is particularly effective when applied to statistical classification.
There are two ways we can proceed: the first is to compute the PDFs of
each class separately, using different bandwidth parameters,
and then compare them as in Taylor.[3]Alternatively, we can divide up the sum based on the class of each sample:
where c_i is the class of the i-th sample.
The class of the test point may be estimated through maximum likelihood.
https://en.wikipedia.org/wiki/Variable_kernel_density_estimation
Head/tail breaks is a clustering algorithm for data with a heavy-tailed distribution such as power laws and lognormal distributions. The heavy-tailed distribution can be simply described as the scaling pattern of far more small things than large ones, or alternatively numerous smallest, a very few largest, and some in between the smallest and largest. The classification is done by dividing things into large (called the head) and small (called the tail) things around the arithmetic mean or average, and then recursively continuing the division process for the large things (the head) until the notion of far more small things than large ones is no longer valid, or until only more or less similar things are left.[1] Head/tail breaks is not just for classification, but also for visualization of big data by keeping the head, since the head is self-similar to the whole. Head/tail breaks can be applied not only to vector data such as points, lines and polygons, but also to raster data like a digital elevation model (DEM).
The head/tail breaks method is motivated by the inability of conventional classification methods, such as equal intervals, quantiles, geometric progressions, standard deviation, and natural breaks (commonly known as Jenks natural breaks optimization or k-means clustering), to reveal the underlying scaling or living structure with the inherent hierarchy (or heterogeneity) characterized by the recurring notion of far more small things than large ones.[2][3] Note that the notion of far more small things than large ones refers not only to geometric properties, but also to topological and semantic properties. In this connection, the notion should be interpreted as far more unpopular (or less-connected) things than popular (or well-connected) ones, or far more meaningless things than meaningful ones. Head/tail breaks uses the mean or average to dichotomize a dataset into small and large values, rather than characterizing classes by average values, which is unlike k-means clustering or natural breaks. Through the head/tail breaks, a dataset is seen as a living structure with an inherent hierarchy with far more smalls than larges, or recursively perceived as the head of the head of the head and so on. It opens up new avenues of analyzing data from a holistic and organic point of view while considering different types of scales and scaling in spatial analysis.[4]
Given some variable X that demonstrates a heavy-tailed distribution, there are far more small x than large ones. Take the average of all x_i and obtain the first mean m1. Then calculate the second mean for those x_i greater than m1, and obtain m2. In the same recursive way, we can get m3, and so on, until the ending condition of no longer having far more small x than large ones is met. For simplicity, we assume there are three means, m1, m2, and m3. This classification leads to four classes: [minimum, m1], (m1, m2], (m2, m3], (m3, maximum]. In general, it can be represented as a recursive procedure.
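The recursive procedure can be sketched as follows (an illustrative implementation; the example array and the relaxed 50% head threshold are both taken from the discussion later in this section):

```python
def head_tail_breaks(data, threshold=0.4):
    # Recursively split around the mean; keep recursing on the head as
    # long as it remains a minority (head fraction <= threshold).
    breaks = []
    part = list(data)
    while len(part) > 1:
        m = sum(part) / len(part)
        head = [x for x in part if x > m]
        if not head or len(head) / len(part) > threshold:
            break
        breaks.append(m)
        part = head
    return breaks

# Example array used elsewhere in the text; with the relaxed 50%
# threshold the method yields two breaks (5.0 and 10.0), i.e. three
# classes, and therefore ht-index 3.
values = [19, 8, 7, 6, 2, 1, 1, 1, 0]
breaks = head_tail_breaks(values, threshold=0.5)
ht_index = len(breaks) + 1
```

With the stricter default threshold of 40%, the first head (4 of 9 values, about 44%) would already fail the minority test, which is exactly the kind of case the relaxed threshold of head/tail breaks 2.0 is meant to accommodate.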
The resulting number of classes is referred to as the ht-index, an alternative index to fractal dimension for characterizing the complexity of fractals or geographic features: the higher the ht-index, the more complex the fractals.[5]
The criterion to stop the iterative classification process using the head/tail breaks method is that the remaining data (i.e., the head part) are not heavy-tailed, or simply, the head part is no longer a minority (i.e., the proportion of the head part is no longer less than a threshold such as 40%). This threshold is suggested to be 40% by Jiang et al. (2013),[6] i.e., length(head)/length(data) ≤ 40%. This process is called head/tail breaks 1.0. But sometimes a larger threshold, for example 50% or more, can be used, as Jiang and Yin (2014)[5] noted in another article: "this condition can be relaxed for many geographic features, such as 50 percent or even more". However, all heads' percentage on average must be smaller than 40% (or 41, 42%), indicating far more small things than large ones. Many real-world data cannot be fit into a perfect long-tailed distribution, therefore the threshold can be relaxed structurally. In head/tail breaks 2.0 the threshold only applies to the overall heads' percentage.[7] This means that the percentages of all heads related to the tails should be around 40% on average. Individual classes can have any percentage split around the average, as long as this averages out as a whole. For example, if there is data distributed in such a way that it has a clearly defined head and tail during the first and second iteration (length(head)/length(data) < 20%) but a much less well-defined long-tailed distribution for the third iteration (60% in the head), head/tail breaks 2.0 allows the iteration to continue into the fourth iteration, which can be distributed 30% head to 70% tail again, and so on. As long as the overall threshold is not surpassed, the head/tail breaks classification holds.
A good tool to display the scaling pattern, or the heavy-tailed distribution, is the rank-size plot, which is a scatter plot to display a set of values according to their ranks. With this tool, a new index[8]termed as the ratio of areas (RA) in a rank-size plot was defined to characterize the scaling pattern. The RA index has been successfully used in the estimation of traffic conditions. However, the RA index can only be used as a complementary method to the ht-index, because it is ineffective to capture the scaling structure of geographic features.
In addition to the ht-index, the following indices are also derived with the head/tail breaks.
Instead of more or less similar things, there are far more small things than large ones surrounding us. Given the ubiquity of the scaling pattern, head/tail breaks is found to be of use to statistical mapping, map generalization, cognitive mapping and even the perception of beauty.[6][12][13] It helps visualize big data, since big data are likely to show the scaling property of far more small things than large ones. Essentially, geographic phenomena can be scaleful or scale-free. Scaleful phenomena can be explained by conventional mathematical or geographical operations, but scale-free phenomena cannot. Head/tail breaks can be used to characterize the scale-free phenomena, which are in the majority.[14] The visualization strategy is to recursively drop out the tail parts until the head parts are clear or visible enough.[15][16] In addition, it helps delineate cities, or natural cities to be more precise, from various geographic information such as street networks, social media geolocation data, and nighttime images.
As the head/tail breaks method can be used iteratively to obtain head parts of a data set, this method actually captures the underlying hierarchy of the data set. For example, if we divide the array (19, 8, 7, 6, 2, 1, 1, 1, 0) with the head/tail breaks method, we can get two head parts, i.e., the first head part (19, 8, 7, 6) and the second head part (19). These two head parts as well as the original array form a three-level hierarchy:
The number of levels of the above-mentioned hierarchy is actually a characterization of the imbalance of the example array, and this number of levels has been termed as the ht-index.[5]With the ht-index, we are able to compare degrees of imbalance of two data sets. For example, the ht-index of the example array (19, 8, 7, 6, 2, 1, 1, 1, 0) is 3, and the ht-index of another array (19, 8, 8, 8, 8, 8, 8, 8, 8) is 2. Therefore, the degree of imbalance of the former array is higher than that of the latter array.
The use of fractals in modelling human geography has for a longer period been seen as useful in measuring the spatial distribution of human settlements.[17] Head/tail breaks can be used to do just that with a concept called natural cities. The term 'natural cities' refers to the human settlements or human activities in general on Earth's surface that are naturally or objectively defined and delineated from massive geographic information based on the head/tail division rule, a non-recursive form of head/tail breaks.[18][19] Such geographic information could be from various sources, such as massive street junctions[19] and street ends, a massive number of street blocks, nighttime imagery and social media users' locations, etc. Based on these, the different urban forms and configurations detected in cities can be derived.[20] Distinct from conventional cities, the adjective 'natural' could be explained not only by the sources of natural cities, but also by the approach to derive them.[1] Natural cities are derived from a meaningful cutoff averaged from a massive number of units extracted from geographic information.[15] Those units vary according to different kinds of geographic information; for example, the units could be area units for the street blocks and pixel values for the nighttime images.[21] A natural cities model has been created using ArcGIS model builder;[22] it follows the same process of deriving natural cities from location-based social media,[18] namely, building up a huge triangular irregular network (TIN) based on the point features (street nodes in this case) and regarding the triangles which are smaller than a mean value as the natural cities.
These natural cities can also be created from other open-access information like OpenStreetMap and further be used as an alternative delineation of administrative boundaries.[23] Scaling law can also at the same time correctly be identified, and the administrative borders can be created to respect this by the delineation of the natural cities.[24][25] This type of methodology can help urban geographers and planners by correctly identifying the effective urban territorial scope of the areas they work in.[26]
Natural cities can vary depending on the scale on which the natural cities are delineated, which is why optimally they have to be based on data from the whole world. Because that is computationally impossible, a country or county scale is suggested as an alternative.[27] Due to the scale-free nature of natural cities and the data they are based on, there are also possibilities to use the natural cities method for further measurements. One of the main advantages of natural cities is that they are derived bottom-up instead of top-down. That means that the borders are determined by the data of something physical rather than determined by an administrative government or administration.[28] For example, by calculating the natural cities of a natural city recursively, the dense areas within a natural city are identified. These can be seen as city centers, for example. By using the natural cities method in this way, further border delineations can be made dependent on the scale the natural cities were generated from.[29] Natural cities derived from smaller regional areas will provide less accurate but still usable results in certain analyses, for example determining urban expansion over time.[30] As mentioned before, though, optimally natural cities should be based on a massive amount of, for example, street intersections for an entire country or even the world. This is because natural cities are based on the wisdom of crowds thinking, which needs the biggest set of available data for the best results. Also note that the structure of natural cities can be considered to be fractal in nature.[31]
It is important, when head/tail breaks are being used to generate natural cities, that the data is not aggregated afterwards. For example, the amount of generated natural cities can only be known after they are generated. It is not possible to use a pre-defined number of cities for an area or country and aggregate the results of the natural cities to administratively determined city borders. Naturally, natural cities should follow Zipf's law; if they do not, the area is most likely too small, or the data has probably been processed wrongly. An example of this is seen in a research project where head/tail breaks were used to extract natural cities, but they were aggregated to administrative borders, which then led to the conclusion that the cities do not follow Zipf's law.[32] This happens more often in science, where published papers produce results which are actually false.[33]
Current color renderings for DEM or density maps are essentially based on conventional classifications such as natural breaks or equal intervals, so they disproportionately exaggerate high elevations or high densities. As a matter of fact, there are not so many high elevations or high-density locations.[34] It was found that coloring based on head/tail breaks is more favorable than coloring by other classifications.[35][36][2]
The pattern of far more small things than large ones frequently recurs in geographical data. A spiral layout inspired by the golden ratio or Fibonacci sequence can help visualize this recursive notion of scaling hierarchy and the different levels of scale.[37][38]In other words, from the smallest to the largest scale, a map can be seen as a map of a map of a map, and so on.
Other applications of Head/tail breaks:
The following implementations are available under Free/Open Source Software licenses.
https://en.wikipedia.org/wiki/Head/tail_Breaks
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems.[1] The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user input according to the representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2] Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
Kernel methods can be thought of asinstance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" thei{\displaystyle i}-th training example(xi,yi){\displaystyle (\mathbf {x} _{i},y_{i})}and learn for it a corresponding weightwi{\displaystyle w_{i}}. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of asimilarity functionk{\displaystyle k}, called akernel, between the unlabeled inputx′{\displaystyle \mathbf {x'} }and each of the training inputsxi{\displaystyle \mathbf {x} _{i}}. For instance, a kernelizedbinary classifiertypically computes a weighted sum of similaritiesy^=sgn∑i=1nwiyik(xi,x′),{\displaystyle {\hat {y}}=\operatorname {sgn} \sum _{i=1}^{n}w_{i}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),}where
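A minimal sketch of such a kernelized classifier with a Gaussian (RBF) kernel; the uniform weights w_i = 1 are an illustrative choice (a kernel perceptron or SVM would learn them from data):

```python
import math

# Kernelized binary classifier of the form shown above:
# y_hat = sgn( sum_i w_i y_i k(x_i, x') ), with an RBF kernel.
def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def predict(x_new, xs, ys, ws, kernel=rbf):
    s = sum(w * y * kernel(x, x_new) for w, y, x in zip(ws, ys, xs))
    return 1 if s >= 0 else -1

xs = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]   # two well-separated groups
ys = [-1, -1, 1, 1]
ws = [1.0, 1.0, 1.0, 1.0]
```

Because the RBF kernel decays with distance, the weighted sum is dominated by nearby training points, so a query near either group takes that group's label.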
Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron.[3] They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all x{\displaystyle \mathbf {x} } and x′{\displaystyle \mathbf {x'} } in the input space X{\displaystyle {\mathcal {X}}}, certain functions k(x,x′){\displaystyle k(\mathbf {x} ,\mathbf {x'} )} can be expressed as an inner product in another space V{\displaystyle {\mathcal {V}}}. The function k:X×X→R{\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.
Certain problems in machine learning have more structure than an arbitrary weighting function k{\displaystyle k}. The computation is made much simpler if the kernel can be written in the form of a "feature map" φ:X→V{\displaystyle \varphi \colon {\mathcal {X}}\to {\mathcal {V}}} which satisfies k(x,x′)=⟨φ(x),φ(x′)⟩V.{\displaystyle k(\mathbf {x} ,\mathbf {x'} )=\langle \varphi (\mathbf {x} ),\varphi (\mathbf {x'} )\rangle _{\mathcal {V}}.} The key restriction is that ⟨⋅,⋅⟩V{\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {V}}} must be a proper inner product. On the other hand, an explicit representation for φ{\displaystyle \varphi } is not necessary, as long as V{\displaystyle {\mathcal {V}}} is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function φ{\displaystyle \varphi } exists whenever the space X{\displaystyle {\mathcal {X}}} can be equipped with a suitable measure ensuring the function k{\displaystyle k} satisfies Mercer's condition.
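A concrete feature map makes the identity k(x, x′) = ⟨φ(x), φ(x′)⟩ tangible. The sketch below uses the standard degree-2 homogeneous polynomial kernel on R²; the feature map φ(x) = (x₁², x₂², √2·x₁x₂) is the textbook choice for this kernel, shown here purely for illustration.

```python
import math

def poly2_kernel(x, z):
    """(x . z)^2, computed directly in the 2-D input space."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map into R^3 for the degree-2 homogeneous kernel."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def inner(u, v):
    """Euclidean inner product."""
    return sum(a * b for a, b in zip(u, v))
```

Evaluating `poly2_kernel` costs one 2-D dot product and a squaring; evaluating `inner(phi(x), phi(z))` requires first mapping into R³, yet both yield the same number, which is the point of the kernel trick.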
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure μ(T)=|T|{\displaystyle \mu (T)=|T|} for all T⊂X{\displaystyle T\subset X}, which counts the number of points inside the set T{\displaystyle T}, then the integral in Mercer's theorem reduces to a summation ∑i=1n∑j=1nk(xi,xj)cicj≥0.{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}k(\mathbf {x} _{i},\mathbf {x} _{j})c_{i}c_{j}\geq 0.} If this summation holds for all finite sequences of points (x1,…,xn){\displaystyle (\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n})} in X{\displaystyle {\mathcal {X}}} and all choices of n{\displaystyle n} real-valued coefficients (c1,…,cn){\displaystyle (c_{1},\dots ,c_{n})} (cf. positive definite kernel), then the function k{\displaystyle k} satisfies Mercer's condition.
Some algorithms that depend on arbitrary relationships in the native space X{\displaystyle {\mathcal {X}}} would, in fact, have a linear interpretation in a different setting: the range space of φ{\displaystyle \varphi }. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute φ{\displaystyle \varphi } directly during computation, as is the case with support-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, a Gram matrix K∈Rn×n{\displaystyle \mathbf {K} \in \mathbb {R} ^{n\times n}} with respect to {x1,…,xn}{\displaystyle \{\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n}\}} (sometimes also called a "kernel matrix"[4]), where Kij=k(xi,xj){\displaystyle K_{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})}, must be positive semi-definite (PSD).[5] Empirically, for machine learning heuristics, choices of a function k{\displaystyle k} that do not satisfy Mercer's condition may still perform reasonably if k{\displaystyle k} at least approximates the intuitive idea of similarity.[6] Regardless of whether k{\displaystyle k} is a Mercer kernel, k{\displaystyle k} may still be referred to as a "kernel".
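The finite-sample form of Mercer's condition is exactly the statement that every Gram matrix has a non-negative quadratic form. A small sketch (the RBF kernel and the Monte Carlo spot-check of random coefficient vectors are illustrative choices, not a full PSD test):

```python
import math

def rbf(x, z, gamma=0.5):
    """Gaussian kernel; a known Mercer kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def gram_matrix(points, kernel=rbf):
    """K_ij = k(x_i, x_j) for a finite sample of points."""
    return [[kernel(p, q) for q in points] for p in points]

def quadratic_form(K, c):
    """sum_i sum_j K_ij c_i c_j -- Mercer's condition requires this >= 0."""
    n = len(K)
    return sum(K[i][j] * c[i] * c[j] for i in range(n) for j in range(n))
```

Checking `quadratic_form(K, c) >= 0` for sampled vectors `c` is only a necessary-condition probe; a definitive PSD check would examine the eigenvalues of K.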
If the kernel function k{\displaystyle k} is also a covariance function as used in Gaussian processes, then the Gram matrix K{\displaystyle \mathbf {K} } can also be called a covariance matrix.[7]
Application areas of kernel methods are diverse and include geostatistics,[8] kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.
https://en.wikipedia.org/wiki/Kernel_methods
Learning to rank[1] or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems.[2] Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and online advertising.
A possible architecture of a machine-learned search engine is shown in the accompanying figure.
Training data consists of queries and documents matching them, together with the relevance degree of each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine the relevance of each result. It is not feasible to check the relevance of all documents, and so typically a technique called pooling is used — only the top few documents, retrieved by some existing ranking models, are checked. This technique may introduce selection bias. Alternatively, training data may be derived automatically by analyzing clickthrough logs (i.e. search results which got clicks from users),[3] query chains,[4] or features of search engines such as Google's (since-replaced) SearchWiki. Clickthrough logs can be biased by the tendency of users to click on the top search results on the assumption that they are already well-ranked.
Training data is used by a learning algorithm to produce a ranking model which computes the relevance of documents for actual queries.
Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, and so a two-phase scheme is used.[5] First, a small number of potentially relevant documents are identified using simpler retrieval models which permit fast query evaluation, such as the vector space model, Boolean model, weighted AND,[6] or BM25. This phase is called top-k{\displaystyle k} document retrieval and many heuristics were proposed in the literature to accelerate it, such as using a document's static quality score and tiered indexes.[7] In the second phase, a more accurate but computationally expensive machine-learned model is used to re-rank these documents.
Learning to rank algorithms have been applied in areas other than information retrieval:
For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag of words model and vector space model used in information retrieval for representation of documents.
Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples):
Some examples of features, which were used in the well-known LETOR dataset:
Selecting and designing good features is an important area in machine learning, which is called feature engineering.
There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics.
Examples of ranking quality measures:
DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used.[11] Other metrics, such as MAP, MRR and precision, are defined only for binary judgments.
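DCG and NDCG can be computed in a few lines. Note this is a sketch of one common formulation, which uses the exponential gain 2^rel − 1; the original DCG formulation uses the raw relevance grade instead, so treat the gain function as an assumption.

```python
import math

def dcg(relevances):
    """DCG = sum_i (2^rel_i - 1) / log2(i + 2), for positions i = 0, 1, ..."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A ranking already in descending order of relevance scores NDCG = 1; any misordering lowers it, with swaps near the top of the list penalized most because of the logarithmic position discount.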
Several new evaluation metrics have recently been proposed which claim to model the user's satisfaction with search results better than the DCG metric:
Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document than after a less relevant document.
Learning to rank approaches are often categorized using one of three approaches: pointwise (where individual documents are ranked), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents is ordered).
Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval.[1] He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approach. In practice, listwise approaches often outperform pairwise approaches and pointwise approaches. This statement was further supported by a large scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets.[14]
In this section, unless otherwise noted, x{\displaystyle x} denotes an object to be evaluated, for example, a document or an image, f(x){\displaystyle f(x)} denotes a single-value hypothesis, h(⋅){\displaystyle h(\cdot )} denotes a bi-variate or multi-variate function and L(⋅){\displaystyle L(\cdot )} denotes the loss function.
In this case, it is assumed that each query-document pair in the training data has a numerical or ordinal score. Then the learning-to-rank problem can be approximated by a regression problem — given a single query-document pair, predict its score. Formally speaking, the pointwise approach aims at learning a functionf(x){\displaystyle f(x)}predicting the real-value or ordinal score of a documentx{\displaystyle x}using the loss functionL(f;xj,yj){\displaystyle L(f;x_{j},y_{j})}.
A number of existing supervised machine learning algorithms can be readily used for this purpose. Ordinal regression and classification algorithms can also be used in the pointwise approach when they are used to predict the score of a single query-document pair, and it takes a small, finite number of values.
In this case, the learning-to-rank problem is approximated by a classification problem — learning a binary classifier h(xu,xv){\displaystyle h(x_{u},x_{v})} that can tell which document is better in a given pair of documents. The classifier takes two documents as its input and the goal is to minimize a loss function L(h;xu,xv,yu,v){\displaystyle L(h;x_{u},x_{v},y_{u,v})}. The loss function typically reflects the number and magnitude of inversions in the induced ranking.
In many cases, the binary classifier h(xu,xv){\displaystyle h(x_{u},x_{v})} is implemented with a scoring function f(x){\displaystyle f(x)}. As an example, RankNet[15] adopts a probability model and defines h(xu,xv){\displaystyle h(x_{u},x_{v})} as the estimated probability that the document xu{\displaystyle x_{u}} has higher quality than xv{\displaystyle x_{v}}: h(xu,xv)=CDF(f(xu)−f(xv)),{\displaystyle h(x_{u},x_{v})={\text{CDF}}(f(x_{u})-f(x_{v})),}
where CDF(⋅){\displaystyle {\text{CDF}}(\cdot )} is a cumulative distribution function, for example, the standard logistic CDF, i.e. CDF(x)=11+exp⁡(−x).{\displaystyle {\text{CDF}}(x)={\frac {1}{1+\exp(-x)}}.}
These algorithms try to directly optimize the value of one of the above evaluation measures, averaged over all queries in the training data. This is often difficult in practice because most evaluation measures are not continuous functions with respect to the ranking model's parameters, and so continuous approximations or bounds on the evaluation measures have to be used; the SoftRank algorithm is one example.[16] LambdaMART is a pairwise algorithm which has been empirically shown to approximate listwise objective functions.[17]
A partial list of published learning-to-rank algorithms is shown below with years of first publication of each method:
Regularized least-squares based ranking. The work is extended in[26] to learning to rank from general preference graphs.
Note: as most supervised learning-to-rank algorithms can be applied to pointwise, pairwise and listwise case, only those methods which are specifically designed with ranking in mind are shown above.
Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation;[49] a specific variant of this approach (using polynomial regression) had been published by him three years earlier.[18] Bill Cooper proposed logistic regression for the same purpose in 1992[19] and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al.[50] suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques.
Several conferences, such as NeurIPS, SIGIR and ICML, have had workshops devoted to the learning-to-rank problem since the mid-2000s.
Commercial web search engines began using machine-learned ranking systems in the 2000s. One of the first search engines to start using it was AltaVista (later its technology was acquired by Overture, and then Yahoo), which launched a gradient boosting-trained ranking function in April 2003.[51][52]
Bing's search is said to be powered by the RankNet algorithm,[53][when?] which was invented at Microsoft Research in 2005.
In November 2009 the Russian search engine Yandex announced[54] that it had significantly increased its search quality due to deployment of a new proprietary MatrixNet algorithm, a variant of the gradient boosting method which uses oblivious decision trees.[55] Recently they have also sponsored a machine-learned ranking competition "Internet Mathematics 2009"[56] based on their own search engine's production data. Yahoo announced a similar competition in 2010.[57]
As of 2008, Google's Peter Norvig denied that their search engine exclusively relies on machine-learned ranking.[58] Cuil's CEO, Tom Costello, suggests that they prefer hand-built models because they can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, which is because machine-learned models "learn what people say they like, not what people actually like".[59]
In January 2017, the technology was included in the open source search engine Apache Solr.[60] It is also available in the open source OpenSearch and Elasticsearch.[61][62] These implementations make learning to rank widely accessible for enterprise search.
Similar to recognition applications in computer vision, recent neural network based ranking algorithms are also found to be susceptible to covert adversarial attacks, both on the candidates and the queries.[63] With small perturbations imperceptible to human beings, ranking order can be arbitrarily altered. In addition, model-agnostic transferable adversarial examples are found to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations.[63][64]
Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense.[65]
https://en.wikipedia.org/wiki/Learning_to_rank
In geometry, an intersection is a point, line, or curve common to two or more objects (such as lines, curves, planes, and surfaces). The simplest case in Euclidean geometry is the line–line intersection between two distinct lines, which either is one point (sometimes called a vertex) or does not exist (if the lines are parallel). Other types of geometric intersection include:
Determination of the intersection of flats – linear geometric objects embedded in a higher-dimensional space – is a simple task of linear algebra, namely the solution of a system of linear equations. In general the determination of an intersection leads to non-linear equations, which can be solved numerically, for example using Newton iteration. Intersection problems between a line and a conic section (circle, ellipse, parabola, etc.) or a quadric (sphere, cylinder, hyperboloid, etc.) lead to quadratic equations that can be easily solved. Intersections between quadrics lead to quartic equations that can be solved algebraically.
For the determination of the intersection point of two non-parallel lines
a1x+b1y=c1,a2x+b2y=c2{\displaystyle a_{1}x+b_{1}y=c_{1},\ a_{2}x+b_{2}y=c_{2}}
one gets, from Cramer's rule or by substituting out a variable, the coordinates of the intersection point (xs,ys){\displaystyle (x_{s},y_{s})}: xs=c1b2−c2b1a1b2−a2b1,ys=a1c2−a2c1a1b2−a2b1.{\displaystyle x_{s}={\frac {c_{1}b_{2}-c_{2}b_{1}}{a_{1}b_{2}-a_{2}b_{1}}},\quad y_{s}={\frac {a_{1}c_{2}-a_{2}c_{1}}{a_{1}b_{2}-a_{2}b_{1}}}.}
(If a1b2−a2b1=0{\displaystyle a_{1}b_{2}-a_{2}b_{1}=0} the lines are parallel and these formulas cannot be used because they involve dividing by 0.)
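The Cramer's-rule solution, including the parallel-line guard on the determinant, can be sketched directly:

```python
def line_intersection(a1, b1, c1, a2, b2, c2):
    """Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2 via Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel (or identical) lines: no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)
```

For example, the lines x + y = 2 and x − y = 0 meet at (1, 1), while x + y = 2 and 2x + 2y = 5 are parallel and yield no point.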
For two non-parallel line segments (x1,y1),(x2,y2){\displaystyle (x_{1},y_{1}),(x_{2},y_{2})} and (x3,y3),(x4,y4){\displaystyle (x_{3},y_{3}),(x_{4},y_{4})} there is not necessarily an intersection point (see diagram), because the intersection point (x0,y0){\displaystyle (x_{0},y_{0})} of the corresponding lines need not be contained in the line segments. In order to check the situation one uses parametric representations of the lines: (x(s),y(s))=(x1+s(x2−x1),y1+s(y2−y1)),{\displaystyle (x(s),y(s))=(x_{1}+s(x_{2}-x_{1}),\,y_{1}+s(y_{2}-y_{1})),} (x(t),y(t))=(x3+t(x4−x3),y3+t(y4−y3)).{\displaystyle (x(t),y(t))=(x_{3}+t(x_{4}-x_{3}),\,y_{3}+t(y_{4}-y_{3})).}
The line segments intersect only in a common point (x0,y0){\displaystyle (x_{0},y_{0})} of the corresponding lines if the corresponding parameters s0,t0{\displaystyle s_{0},t_{0}} fulfill the condition 0≤s0,t0≤1{\displaystyle 0\leq s_{0},t_{0}\leq 1}.
The parameters s0,t0{\displaystyle s_{0},t_{0}} are the solution of the linear system s(x2−x1)−t(x4−x3)=x3−x1,s(y2−y1)−t(y4−y3)=y3−y1.{\displaystyle s(x_{2}-x_{1})-t(x_{4}-x_{3})=x_{3}-x_{1},\quad s(y_{2}-y_{1})-t(y_{4}-y_{3})=y_{3}-y_{1}.}
It can be solved for s and t using Cramer's rule (see above). If the condition 0≤s0,t0≤1{\displaystyle 0\leq s_{0},t_{0}\leq 1} is fulfilled one inserts s0{\displaystyle s_{0}} or t0{\displaystyle t_{0}} into the corresponding parametric representation and gets the intersection point (x0,y0){\displaystyle (x_{0},y_{0})}.
Example: For the line segments (1,1),(3,2){\displaystyle (1,1),(3,2)} and (1,4),(2,−1){\displaystyle (1,4),(2,-1)} one gets the linear system 2s−t=0,s+5t=3,{\displaystyle 2s-t=0,\quad s+5t=3,}
and s0=311,t0=611{\displaystyle s_{0}={\tfrac {3}{11}},t_{0}={\tfrac {6}{11}}}. That means: the lines intersect at point (1711,1411){\displaystyle ({\tfrac {17}{11}},{\tfrac {14}{11}})}.
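The segment test above (solve the 2×2 system, then check the parameters lie in [0, 1]) can be sketched as:

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, or None.

    Solves s*(p2-p1) - t*(p4-p3) = p3 - p1 by Cramer's rule
    and accepts the point only if 0 <= s, t <= 1.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    ax, ay = x2 - x1, y2 - y1          # direction of the first segment
    bx, by = x4 - x3, y4 - y3          # direction of the second segment
    det = ax * (-by) - (-bx) * ay
    if det == 0:
        return None                    # parallel segments
    rx, ry = x3 - x1, y3 - y1
    s = (rx * (-by) - (-bx) * ry) / det
    t = (ax * ry - rx * ay) / det
    if 0 <= s <= 1 and 0 <= t <= 1:
        return (x1 + s * ax, y1 + s * ay)
    return None
```

Run on the worked example, the segments (1,1)-(3,2) and (1,4)-(2,-1) give s = 3/11, t = 6/11 and the point (17/11, 14/11), matching the text.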
Remark: Considering lines, instead of segments, determined by pairs of points, each condition 0≤s0,t0≤1{\displaystyle 0\leq s_{0},t_{0}\leq 1} can be dropped and the method yields the intersection point of the lines (see above).
For the intersection of a line ax+by=c{\displaystyle ax+by=c} and a circle x2+y2=r2,{\displaystyle x^{2}+y^{2}=r^{2},}
one solves the line equation for x or y and substitutes it into the equation of the circle and gets for the solution (using the formula of a quadratic equation) (x1,y1),(x2,y2){\displaystyle (x_{1},y_{1}),(x_{2},y_{2})} with x1/2=ac±br2(a2+b2)−c2a2+b2,y1/2=bc∓ar2(a2+b2)−c2a2+b2,{\displaystyle x_{1/2}={\frac {ac\pm b{\sqrt {r^{2}(a^{2}+b^{2})-c^{2}}}}{a^{2}+b^{2}}},\quad y_{1/2}={\frac {bc\mp a{\sqrt {r^{2}(a^{2}+b^{2})-c^{2}}}}{a^{2}+b^{2}}},}
if r2(a2+b2)−c2>0.{\displaystyle r^{2}(a^{2}+b^{2})-c^{2}>0\ .} If this condition holds with strict inequality, there are two intersection points; in this case the line is called a secant line of the circle, and the line segment connecting the intersection points is called a chord of the circle.
If r2(a2+b2)−c2=0{\displaystyle r^{2}(a^{2}+b^{2})-c^{2}=0} holds, there exists only one intersection point and the line is tangent to the circle. If the weak inequality does not hold, the line does not intersect the circle.
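The case analysis on the sign of r²(a² + b²) − c² translates directly into code; this sketch assumes, as in the text, a circle centered at the origin.

```python
import math

def line_circle_intersection(a, b, c, r):
    """Intersections of a*x + b*y = c with x^2 + y^2 = r^2 (circle at origin)."""
    d = r * r * (a * a + b * b) - c * c   # sign decides secant/tangent/miss
    n = a * a + b * b
    if d < 0:
        return []                          # line misses the circle
    root = math.sqrt(d)
    p1 = ((a * c + b * root) / n, (b * c - a * root) / n)
    if d == 0:
        return [p1]                        # tangent line: one point
    p2 = ((a * c - b * root) / n, (b * c + a * root) / n)
    return [p1, p2]                        # secant line: two points
```

For instance, the x-axis (a = 0, b = 1, c = 0) cuts the circle of radius 2 at (±2, 0), and the line x = 1 is tangent to the unit circle at (1, 0).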
If the circle's midpoint is not the origin, see [1]. The intersection of a line and a parabola or hyperbola may be treated analogously.
The determination of the intersection points of two circles (x−x1)2+(y−y1)2=r12,(x−x2)2+(y−y2)2=r22{\displaystyle (x-x_{1})^{2}+(y-y_{1})^{2}=r_{1}^{2},\quad (x-x_{2})^{2}+(y-y_{2})^{2}=r_{2}^{2}}
can be reduced to the previous case of intersecting a line and a circle. By subtraction of the two given equations one gets the line equation: 2(x2−x1)x+2(y2−y1)y=r12−r22−x12+x22−y12+y22.{\displaystyle 2(x_{2}-x_{1})x+2(y_{2}-y_{1})y=r_{1}^{2}-r_{2}^{2}-x_{1}^{2}+x_{2}^{2}-y_{1}^{2}+y_{2}^{2}.}
This special line is the radical line of the two circles.
Special case x1=y1=y2=0{\displaystyle \;x_{1}=y_{1}=y_{2}=0}: In this case the origin is the center of the first circle and the second center lies on the x-axis (see diagram). The equation of the radical line simplifies to 2x2x=r12−r22+x22{\displaystyle \;2x_{2}x=r_{1}^{2}-r_{2}^{2}+x_{2}^{2}\;} and the points of intersection can be written as (x0,±y0){\displaystyle (x_{0},\pm y_{0})} with x0=r12−r22+x222x2,y0=r12−x02.{\displaystyle x_{0}={\frac {r_{1}^{2}-r_{2}^{2}+x_{2}^{2}}{2x_{2}}},\quad y_{0}={\sqrt {r_{1}^{2}-x_{0}^{2}}}.}
In case of r12<x02{\displaystyle r_{1}^{2}<x_{0}^{2}} the circles have no points in common. In case of r12=x02{\displaystyle r_{1}^{2}=x_{0}^{2}} the circles have one point in common and the radical line is a common tangent.
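The special case (first circle at the origin, second center on the x-axis) reduces to computing x₀ from the radical line and then testing the sign of r₁² − x₀², as a short sketch shows:

```python
import math

def circle_circle_special(r1, r2, x2):
    """Intersections of x^2+y^2 = r1^2 and (x-x2)^2+y^2 = r2^2, with x2 != 0.

    The radical line 2*x2*x = r1^2 - r2^2 + x2^2 fixes the x-coordinate x0;
    the circles then meet where y0^2 = r1^2 - x0^2 is non-negative.
    """
    x0 = (r1 * r1 - r2 * r2 + x2 * x2) / (2 * x2)
    h = r1 * r1 - x0 * x0
    if h < 0:
        return []                    # no common points
    if h == 0:
        return [(x0, 0.0)]           # circles touch; radical line is tangent
    y0 = math.sqrt(h)
    return [(x0, y0), (x0, -y0)]
```

Two unit circles with centers a distance 1 apart intersect at (1/2, ±√3/2); with centers a distance 2 apart they touch at a single point.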
Any general case as written above can be transformed by a shift and a rotation into the special case.
The intersection of two disks (the interiors of the two circles) forms a shape called a lens.
The problem of intersection of an ellipse/hyperbola/parabola with another conic section leads to a system of quadratic equations, which can be solved in special cases easily by elimination of one coordinate. Special properties of conic sections may be used to obtain a solution. In general the intersection points can be determined by solving the equation by a Newton iteration. If a) both conics are given implicitly (by an equation), a 2-dimensional Newton iteration is needed; if b) one is given implicitly and the other parametrically, a 1-dimensional Newton iteration suffices. See the next section.
Two curves in R2{\displaystyle \mathbb {R} ^{2}} (two-dimensional space), which are continuously differentiable (i.e. there is no sharp bend),
have an intersection point, if they have a point of the plane in common and have at this point (see diagram): a) different tangent lines (a transversal intersection), or b) the tangent line in common while crossing each other (a touching intersection).
If both the curves have a point S and the tangent line there in common but do not cross each other, they are just touching at point S.
Because touching intersections appear rarely and are difficult to deal with, the following considerations omit this case. In every case below, all necessary differential conditions are presupposed. The determination of intersection points always leads to one or two non-linear equations which can be solved by Newton iteration. A list of the appearing cases follows:
Any Newton iteration needs convenient starting values, which can be derived by a visualization of both the curves. A parametrically or explicitly given curve can easily be visualized, because to any parameter t or x respectively it is easy to calculate the corresponding point. For implicitly given curves this task is not as easy. In this case one has to determine a curve point with help of starting values and an iteration. See [2].
Examples:
If one wants to determine the intersection points of two polygons, one can check the intersection of any pair of line segments of the polygons (see above). For polygons with many segments this method is rather time-consuming. In practice one accelerates the intersection algorithm by using window tests. In this case one divides the polygons into small sub-polygons and determines the smallest window (rectangle with sides parallel to the coordinate axes) for any sub-polygon. Before starting the time-consuming determination of the intersection point of two line segments any pair of windows is tested for common points. See [3].
In 3-dimensional space there are intersection points (common points) between curves and surfaces. In the following sections we consider transversal intersection only.
The intersection of a line and a plane in general position in three dimensions is a point.
Commonly a line in space is represented parametrically (x(t),y(t),z(t)){\displaystyle (x(t),y(t),z(t))} and a plane by an equation ax+by+cz=d{\displaystyle ax+by+cz=d}. Inserting the parameter representation into the equation yields the linear equation ax(t)+by(t)+cz(t)=d{\displaystyle ax(t)+by(t)+cz(t)=d}
for parameter t0{\displaystyle t_{0}} of the intersection point (x(t0),y(t0),z(t0)){\displaystyle (x(t_{0}),y(t_{0}),z(t_{0}))}.
If the linear equation has no solution, the line either lies on the plane or is parallel to it.
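For a line given as a point p plus t times a direction v, the linear equation above solves to t₀ = (d − n·p)/(n·v), where n = (a, b, c) is the plane normal; a small sketch:

```python
def line_plane_intersection(p, v, n, d):
    """Line p + t*v versus plane n . x = d.  Returns the point or None."""
    nv = sum(a * b for a, b in zip(n, v))
    if nv == 0:
        return None  # line parallel to the plane (possibly contained in it)
    t0 = (d - sum(a * b for a, b in zip(n, p))) / nv
    return tuple(pi + t0 * vi for pi, vi in zip(p, v))
```

A vertical line through the origin meets the plane z = 5 at (0, 0, 5); a horizontal line never does, which the `nv == 0` guard reports as `None`.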
If a line is defined by two intersecting planes εi:n→i⋅x→=di,i=1,2{\displaystyle \varepsilon _{i}:\ {\vec {n}}_{i}\cdot {\vec {x}}=d_{i},\ i=1,2} and should be intersected by a third plane ε3:n→3⋅x→=d3{\displaystyle \varepsilon _{3}:\ {\vec {n}}_{3}\cdot {\vec {x}}=d_{3}}, the common intersection point of the three planes has to be evaluated.
Three planes εi:n→i⋅x→=di,i=1,2,3{\displaystyle \varepsilon _{i}:\ {\vec {n}}_{i}\cdot {\vec {x}}=d_{i},\ i=1,2,3} with linearly independent normal vectors n→1,n→2,n→3{\displaystyle {\vec {n}}_{1},{\vec {n}}_{2},{\vec {n}}_{3}} have the intersection point p→0=d1(n→2×n→3)+d2(n→3×n→1)+d3(n→1×n→2)n→1⋅(n→2×n→3).{\displaystyle {\vec {p}}_{0}={\frac {d_{1}({\vec {n}}_{2}\times {\vec {n}}_{3})+d_{2}({\vec {n}}_{3}\times {\vec {n}}_{1})+d_{3}({\vec {n}}_{1}\times {\vec {n}}_{2})}{{\vec {n}}_{1}\cdot ({\vec {n}}_{2}\times {\vec {n}}_{3})}}.}
For the proof one should establish n→i⋅p→0=di,i=1,2,3,{\displaystyle {\vec {n}}_{i}\cdot {\vec {p}}_{0}=d_{i},\ i=1,2,3,} using the rules of a scalar triple product. If the scalar triple product equals 0, then the planes either do not have a triple intersection or it is a line (or a plane, if all three planes are the same).
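The closed-form point of three planes, p₀ = (d₁(n₂×n₃) + d₂(n₃×n₁) + d₃(n₁×n₂)) / (n₁·(n₂×n₃)), with the scalar triple product as the degeneracy test, can be sketched as:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def three_plane_intersection(n1, d1, n2, d2, n3, d3):
    """Common point of the planes n_i . x = d_i (normals linearly independent)."""
    triple = dot(n1, cross(n2, n3))  # scalar triple product
    if triple == 0:
        return None  # normals coplanar: no unique intersection point
    terms = [tuple(d1 * c for c in cross(n2, n3)),
             tuple(d2 * c for c in cross(n3, n1)),
             tuple(d3 * c for c in cross(n1, n2))]
    return tuple(sum(t[i] for t in terms) / triple for i in range(3))
```

The coordinate planes shifted to x = 1, y = 2, z = 3 meet at (1, 2, 3), as expected.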
Analogously to the plane case the following cases lead to non-linear systems, which can be solved using a 1- or 3-dimensional Newton iteration.[4]
Example:
A line–sphere intersection is a simple special case.
Like the case of a line and a plane, the intersection of a curve and a surface in general position consists of discrete points, but a curve may be partly or totally contained in a surface.
Two transversally intersecting surfaces give an intersection curve. The simplest case is the intersection line of two non-parallel planes.
When the intersection of a sphere and a plane is not empty or a single point, it is a circle. This can be seen as follows:
Let S be a sphere with center O, and P a plane which intersects S. Draw OE perpendicular to P and meeting P at E. Let A and B be any two different points in the intersection. Then AOE and BOE are right triangles with a common side, OE, and hypotenuses AO and BO equal. Therefore, the remaining sides AE and BE are equal. This proves that all points in the intersection are the same distance from the point E in the plane P; in other words all points in the intersection lie on a circle C with center E.[5] This proves that the intersection of P and S is contained in C. Note that OE is the axis of the circle.
Now consider a point D of the circle C. Since C lies in P, so does D. On the other hand, the triangles AOE and DOE are right triangles with a common side, OE, and legs EA and ED equal. Therefore, the hypotenuses AO and DO are equal, and equal to the radius of S, so that D lies in S. This proves that C is contained in the intersection of P and S.
As a corollary, on a sphere there is exactly one circle that can be drawn through three given points.[6]
The proof can be extended to show that the points on a circle are all a common angular distance from one of its poles.[7]
Compare also conic sections, which can produce ovals.
To show that a non-trivial intersection of two spheres is a circle, assume (without loss of generality) that one sphere (with radius R{\displaystyle R}) is centered at the origin. Points on this sphere satisfy x2+y2+z2=R2.{\displaystyle x^{2}+y^{2}+z^{2}=R^{2}.}
Also without loss of generality, assume that the second sphere, with radius r{\displaystyle r}, is centered at a point on the positive x-axis, at distance a{\displaystyle a} from the origin. Its points satisfy (x−a)2+y2+z2=r2.{\displaystyle (x-a)^{2}+y^{2}+z^{2}=r^{2}.}
The intersection of the spheres is the set of points satisfying both equations. Subtracting the equations gives 2ax−a2=R2−r2,that is,x=a2+R2−r22a.{\displaystyle 2ax-a^{2}=R^{2}-r^{2},\quad {\text{that is,}}\quad x={\frac {a^{2}+R^{2}-r^{2}}{2a}}.}
In the singular case a=0{\displaystyle a=0}, the spheres are concentric. There are two possibilities: if R=r{\displaystyle R=r}, the spheres coincide, and the intersection is the entire sphere; if R≠r{\displaystyle R\not =r}, the spheres are disjoint and the intersection is empty.
When a is nonzero, the intersection lies in a vertical plane with this x-coordinate, which may intersect both of the spheres, be tangent to both spheres, or external to both spheres.
The result follows from the previous proof for sphere-plane intersections.
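The algebra above (subtract the sphere equations to get the plane x-coordinate, then read off the circle radius from the first sphere) can be sketched as:

```python
import math

def sphere_sphere_intersection(R, r, a):
    """Spheres x^2+y^2+z^2 = R^2 and (x-a)^2+y^2+z^2 = r^2, with a > 0.

    Subtracting the equations gives the plane x = (a^2 + R^2 - r^2) / (2a);
    within that plane the intersection is a circle of radius rho, provided
    rho^2 = R^2 - x^2 is non-negative.
    Returns (plane x-coordinate, circle radius), or None if the spheres miss.
    """
    x = (a * a + R * R - r * r) / (2 * a)
    rho_sq = R * R - x * x
    if rho_sq < 0:
        return None                      # spheres do not meet
    return (x, math.sqrt(rho_sq))        # radius 0 means they are tangent
```

Two unit spheres with centers a distance 1 apart meet in a circle of radius √3/2 in the plane x = 1/2; at center distance 2 they are tangent (radius 0), and beyond that they are disjoint.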
https://en.wikipedia.org/wiki/Line_segment_intersection
In mathematics, a projective plane is a geometric structure that extends the concept of a plane. In the ordinary Euclidean plane, two lines typically intersect at a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect at exactly one point.
Renaissance artists, in developing the techniques of drawing in perspective, laid the groundwork for this mathematical topic. The archetypical example is the real projective plane, also known as the extended Euclidean plane.[1] This example, in slightly different guises, is important in algebraic geometry, topology and projective geometry, where it may be denoted variously by PG(2,R), RP2, or P2(R), among other notations. There are many other projective planes, both infinite, such as the complex projective plane, and finite, such as the Fano plane.
A projective plane is a 2-dimensional projective space. Not all projective planes can be embedded in 3-dimensional projective spaces; such embeddability is a consequence of a property known as Desargues' theorem, not shared by all projective planes.
A projective plane is a rank 2 incidence structure (P,L,I){\displaystyle ({\mathcal {P}},{\mathcal {L}},I)} consisting of a set of points P{\displaystyle {\mathcal {P}}}, a set of lines L{\displaystyle {\mathcal {L}}}, and a symmetric relation I{\displaystyle I} on the set P∪L{\displaystyle {\mathcal {P}}\cup {\mathcal {L}}} called incidence, having the following properties:[2]
1. Given any two distinct points, there is exactly one line incident with both of them.
2. Given any two distinct lines, there is exactly one point incident with both of them.
3. There are four points such that no line is incident with more than two of them.
The second condition means that there are no parallel lines. The last condition excludes the so-called degenerate cases (see below). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point P is incident with line ℓ" is used instead of either "P is on ℓ" or "ℓ passes through P".
It follows from the definition that the number of points s+1{\displaystyle s+1} incident with any given line in a projective plane is the same as the number of lines incident with any given point. The (possibly infinite) cardinal number s{\displaystyle s} is called the order of the plane.
To turn the ordinary Euclidean plane into a projective plane, proceed as follows:
1. To each parallel class of lines (a maximal set of mutually parallel lines) associate a single new point, called a point at infinity, regarded as incident with every line of its class.
2. Add a single new line, called the line at infinity, regarded as incident with all the points at infinity (and with no other points).
The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane. The process outlined above, used to obtain it, is called "projective completion" or projectivization. This plane can also be constructed by starting from R3 viewed as a vector space, see § Vector space construction below.
The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged. Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative x-coordinates, but the rest of their points are replaced with the points of the line with the same y-intercept but twice the slope wherever their x-coordinate is positive.
The Moulton plane has parallel classes of lines and is an affine plane. It can be projectivized, as in the previous example, to obtain the projective Moulton plane. Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane.
This example has just thirteen points and thirteen lines. We label the points P1, ..., P13 and the lines m1, ..., m13. The incidence relation (which points are on which lines) can be given by the following incidence matrix. The rows are labelled by the points and the columns are labelled by the lines. A 1 in row i and column j means that the point Pi is on the line mj, while a 0 (which we represent here by a blank cell for ease of reading) means that they are not incident. The matrix is in Paige–Wexler normal form.
To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1s appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1s appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P1, P4, P5, and P8, for example, will satisfy the third condition. This example is known as the projective plane of order three.
Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. In this construction, each "point" of the real projective plane is the one-dimensional subspace (a geometric line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a (geometric) plane through the origin in the 3-space. This idea can be generalized and made more precise as follows.[3]
Let K be any division ring (skewfield). Let K^3 denote the set of all triples x = (x0, x1, x2) of elements of K (a Cartesian product viewed as a vector space). For any nonzero x in K^3, the minimal subspace of K^3 containing x (which may be visualized as all the vectors in a line through the origin) is the subset
of K^3. Similarly, let x and y be linearly independent elements of K^3, meaning that kx + my = 0 implies that k = m = 0. The minimal subspace of K^3 containing x and y (which may be visualized as all the vectors in a plane through the origin) is the subset
of K^3. This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing k and m and taking the multiples of the resulting vector. Different choices of k and m that are in the same ratio will give the same line.
The projective plane over K, denoted PG(2, K) or KP^2, has a set of points consisting of all the 1-dimensional subspaces in K^3. A subset L of the points of PG(2, K) is a line in PG(2, K) if there exists a 2-dimensional subspace of K^3 whose set of 1-dimensional subspaces is exactly L.
Verifying that this construction produces a projective plane is usually left as a linear algebra exercise.
An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set K^3 \ {(0, 0, 0)} modulo the equivalence relation
Lines in the projective plane are defined exactly as above.
The coordinates (x0, x1, x2) of a point in PG(2, K) are called homogeneous coordinates. Each triple (x0, x1, x2) represents a well-defined point in PG(2, K), except for the triple (0, 0, 0), which represents no point. Each point in PG(2, K), however, is represented by many triples.
If K is a topological space, then KP^2 inherits a topology via the product, subspace, and quotient topologies.
The real projective plane RP^2 arises when K is taken to be the real numbers, R. As a closed, non-orientable real 2-manifold, it serves as a fundamental example in topology.[4]
In this construction, consider the unit sphere centered at the origin in R^3. Each of the lines through the origin in this construction intersects the sphere at two antipodal points. Since each such line represents a point of RP^2, we will obtain the same model of RP^2 by identifying the antipodal points of the sphere. The lines of RP^2 will be the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry.
The complex projective plane CP^2 arises when K is taken to be the complex numbers, C. It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes) serve as fundamental examples in algebraic geometry.[5]
The quaternionic projective plane HP^2 is also of independent interest.[6]
By Wedderburn's theorem, a finite division ring must be commutative and so be a field. Thus, the finite examples of this construction are known as "field planes". Taking K to be the finite field of q = p^n elements with prime p produces a projective plane of q^2 + q + 1 points. The field planes are usually denoted by PG(2, q) where PG stands for projective geometry, the "2" is the dimension and q is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2, 2). The third example above is the projective plane PG(2, 3).
The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the figure at right, the seven points are shown as small balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a collineation or symmetry of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group (PΓL(3, 2) = PGL(3, 2)) has 168 elements.
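The count of 168 collineations is small enough to confirm by brute force over all permutations of the seven points. The labelling of the lines below is one standard choice (any isomorphic labelling gives the same count); this is an illustrative sketch, not part of the article:

```python
from itertools import permutations

# The seven lines of the Fano plane, as index triples over points 0..6
# (one standard labelling: every pair of points lies on exactly one line).
LINES = {frozenset(l) for l in
         [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]}

# A permutation of the points is a collineation iff it maps the line set
# onto itself; count all such permutations among the 7! = 5040 candidates.
count = 0
for perm in permutations(range(7)):
    image = {frozenset(perm[p] for p in line) for line in LINES}
    if image == LINES:
        count += 1

print(count)  # 168
```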
The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above.[7] These planes are called Desarguesian planes, named after Girard Desargues. The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that cannot be constructed in this manner are called non-Desarguesian planes, and the Moulton plane given above is an example of one. The PG(2, K) notation is reserved for the Desarguesian planes. When K is a field, a very common case, they are also known as field planes and if the field is a finite field they can be called Galois planes.
A subplane of a projective plane (P, L, I) is a pair of subsets (P′, L′) where P′ ⊆ P, L′ ⊆ L and (P′, L′, I′) is itself a projective plane with respect to the restriction I′ of the incidence relation I to (P′ ∪ L′) × (P′ ∪ L′).
(Bruck 1955) proves the following theorem. Let Π be a finite projective plane of order N with a proper subplane Π0 of order M. Then either N = M^2 or N ≥ M^2 + M.
A subplane (P′, L′) of (P, L, I) is a Baer subplane if every line in L ∖ L′ is incident with exactly one point in P′ and every point in P ∖ P′ is incident with exactly one line of L′.
A finite Desarguesian projective plane of order q admits Baer subplanes (all necessarily Desarguesian) if and only if q is a square; in this case the order of the Baer subplanes is √q.
In the finite Desarguesian planes PG(2, p^n), the subplanes have orders which are the orders of the subfields of the finite field GF(p^n), that is, p^i where i is a divisor of n. In non-Desarguesian planes however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order M in a plane of order N with M^2 + M = N is an open question. If such subplanes existed there would be projective planes of composite (non-prime power) order.
AFano subplaneis a subplane isomorphic to PG(2, 2), the unique projective plane of order 2.
Consider a quadrangle (a set of 4 points, no three collinear) in this plane. The points determine six of the lines of the plane. The remaining three points (called the diagonal points of the quadrangle) are the points where pairs of opposite sides of the quadrangle (lines that do not meet at a point of the quadrangle) intersect. The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle).
In finite Desarguesian planes, PG(2, q), Fano subplanes exist if and only if q is even (that is, a power of 2). The situation in non-Desarguesian planes is unsettled. They could exist in any non-Desarguesian plane of order greater than 6, and indeed, they have been found in all non-Desarguesian planes in which they have been looked for (in both odd and even orders).
An open question, apparently due to Hanna Neumann though not published by her, is: Does every non-Desarguesian plane contain a Fano subplane?
A theorem concerning Fano subplanes due to (Gleason 1956) is:
Projectivization of the Euclidean plane produced the real projective plane. The inverse operation, starting with a projective plane and removing one line together with all the points incident with that line, produces an affine plane.
More formally an affine plane consists of a set of lines and a set of points, and a relation between points and lines called incidence, having the following properties:
The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines".
The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. Theorderof a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2,q) are denoted by AG(2,q).
There is a projective plane of order N if and only if there is an affine plane of order N. When there is only one affine plane of order N there is only one projective plane of order N, but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well.
The affine plane K^2 over K embeds into KP^2 via the map which sends affine (non-homogeneous) coordinates to homogeneous coordinates,
The complement of the image is the set of points of the form (0, x1, x2). From the point of view of the embedding just given, these points are the points at infinity. They constitute a line in KP^2, namely, the line arising from the plane
in K^3, called the line at infinity. The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, x1, x2) is where all lines of slope x2/x1 intersect. Consider for example the two lines
in the affine plane K^2. These lines have slope 0 and do not intersect. They can be regarded as subsets of KP^2 via the embedding above, but these subsets are not lines in KP^2. Add the point (0, 1, 0) to each subset; that is, let
These are lines in KP^2; ū arises from the plane
in K^3, while ȳ arises from the plane
The projective lines ū and ȳ intersect at (0, 1, 0). In fact, all lines in K^2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in KP^2.
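The meeting of parallel lines at a point at infinity can be illustrated numerically. The sketch below (not from the article) assumes the embedding (x, y) → (1, x, y), consistent with the complement (0, x1, x2) described above, and uses the standard cross-product formulas for joins and meets in homogeneous coordinates over the rationals; two parallel lines of slope 2 are shown to meet at (0, 1, 2):

```python
from fractions import Fraction

def cross(u, v):
    # In homogeneous coordinates over a field, the line joining two points
    # (and dually the meet of two lines) is given by the cross product.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def embed(x, y):
    # The embedding K^2 -> KP^2 assumed here: (x, y) -> (1, x, y).
    return (Fraction(1), Fraction(x), Fraction(y))

def normalize(p):
    # Scale so the first nonzero coordinate is 1 (one representative per point).
    s = next(c for c in p if c != 0)
    return tuple(c / s for c in p)

# Two distinct parallel lines of slope 2: y = 2x and y = 2x + 1.
line1 = cross(embed(0, 0), embed(1, 2))
line2 = cross(embed(0, 1), embed(1, 3))

meet = normalize(cross(line1, line2))  # their common point at infinity
```

Running this gives meet = (0, 1, 2), the point at infinity for slope 2, matching the pattern (0, x1, x2) with slope x2/x1.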
The embedding of K^2 into KP^2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding
has as its complement those points of the form (x0, 0, x2), which are then regarded as points at infinity.
When an affine plane does not have the form ofK2withKa division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra".
One can construct a coordinate "ring", a so-called planar ternary ring (not a genuine ring), corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane (OP^2), a projective plane over the octonions, is one of these because the octonions do not form a division ring.[8]
Conversely, given a planar ternary ring (R,T), a projective plane can be constructed (see below). The relationship is not one to one. A projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operatorTcan be used to produce two binary operators on the setR, by:
The ternary operator is linear if T(x, m, k) = x⋅m + k. When the set of coordinates of a projective plane actually form a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring.
Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring, while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a Pappian plane. Alternative, not necessarily associative, division algebras like the octonions correspond to Moufang planes.
There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus's theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; Bamberg & Penttila (2015) give a proof that uses only more "elementary" algebraic facts about division rings.
To describe a finite projective plane of orderN(≥ 2) using non-homogeneous coordinates and a planar ternary ring:
On these points, construct the following lines:
For example, for N = 2 we can use the symbols {0, 1} associated with the finite field of order 2. The ternary operation defined by T(x, m, k) = xm + k with the operations on the right being the multiplication and addition in the field yields the following:
Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane according to (Albert & Sandler 1968). They are:
These seven cases are not independent; the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way):
1) For any number of pointsP1, ...,Pn, and linesL1, ...,Lm,
2) For any number of pointsP1, ...,Pn, and linesL1, ...,Ln, (same number of points as lines)
A collineation of a projective plane is a bijective map of the plane to itself which maps points to points and lines to lines and preserves incidence, meaning that if σ is a bijection and point P is on line m, then Pσ is on mσ.[9]
If σ is a collineation of a projective plane, a point P with P = Pσ is called a fixed point of σ, and a line m with m = mσ is called a fixed line of σ. The points on a fixed line need not be fixed points; their images under σ are just constrained to lie on this line. The collection of fixed points and fixed lines of a collineation form a closed configuration, which is a system of points and lines that satisfy the first two but not necessarily the third condition in the definition of a projective plane. Thus, the fixed point and fixed line structure for any collineation either form a projective plane by themselves, or a degenerate plane. Collineations whose fixed structure forms a plane are called planar collineations.
A homography (or projective transformation) of PG(2, K) is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates they can be represented by invertible 3 × 3 matrices over K which act on the points of PG(2, K) by y = Mx^T, where x and y are points in K^3 (vectors) and M is an invertible 3 × 3 matrix over K.[10] Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices; it is called the projective linear group.
Another type of collineation of PG(2, K) is induced by any automorphism of K; these are called automorphic collineations. If α is an automorphism of K, then the collineation given by (x0, x1, x2) → (x0^α, x1^α, x2^α) is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2, K) are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations.
A projective plane is defined axiomatically as an incidence structure, in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines. As P and L are only sets one can interchange their roles and define a plane dual structure.
By interchanging the role of "points" and "lines" in
we obtain the dual structure
where I* is the converse relation of I.
In a projective plane, a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments are necessary is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line" is "Two lines meet at a unique point". Forming the plane dual of a statement is known as dualizing the statement.
If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a statement of the proof "in C*".
In the projective plane C, it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C* is also a projective plane, called the dual plane of C.
If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2, K) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes, and some that are, such as the Hughes planes.
The Principle of plane duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C.
A duality is a map from a projective plane C = (P, L, I) to its dual plane C* = (L, P, I*) (see above) which preserves incidence. That is, a duality σ will map points to lines and lines to points (Pσ = L and Lσ = P) in such a way that if a point Q is on a line m (denoted by Q I m) then Qσ I* mσ ⇔ mσ I Qσ. A duality which is an isomorphism is called a correlation.[11] If a correlation exists then the projective plane C is self-dual.
In the special case that the projective plane is of the PG(2, K) type, with K a division ring, a duality is called a reciprocity.[12] These planes are always self-dual. By the fundamental theorem of projective geometry a reciprocity is the composition of an automorphic collineation (induced by an automorphism of K) and a homography. If the automorphism involved is the identity, then the reciprocity is called a projective correlation.
A correlation of order two (an involution) is called a polarity. If a correlation φ is not a polarity then φ^2 is a nontrivial collineation.
It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer N ≥ 2 such that the plane has
The number N is called the order of the projective plane.
The projective plane of order 2 is called theFano plane. See also the article onfinite geometry.
Using the vector space construction with finite fields there exists a projective plane of order N = p^n for each prime power p^n. In fact, for all known finite projective planes, the order N is a prime power.[citation needed]
The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck–Ryser–Chowla theorem that if the order N is congruent to 1 or 2 mod 4, it must be the sum of two squares. This rules out N = 6. The next case N = 10 has been ruled out by massive computer calculations.[13] Nothing more is known; in particular, the question of whether there exists a finite projective plane of order N = 12 is still open.[citation needed]
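The Bruck–Ryser–Chowla condition is easy to test mechanically. The following sketch (function names are illustrative) checks which orders up to 12 are excluded; note that it rules out 6 but not 10, which, as described above, required computer search:

```python
from math import isqrt

def passes_bruck_ryser(n):
    # Bruck–Ryser–Chowla: if n ≡ 1 or 2 (mod 4), a projective plane of
    # order n can exist only if n is a sum of two integer squares.
    if n % 4 not in (1, 2):
        return True  # the theorem imposes no restriction in this case
    return any(isqrt(n - a * a) ** 2 == n - a * a
               for a in range(isqrt(n) + 1))

excluded = [n for n in range(2, 13) if not passes_bruck_ryser(n)]
print(excluded)  # [6]
```

Note that 10 = 1^2 + 3^2 passes the test, which is why ruling it out required an entirely different (computational) argument.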
Another longstanding open problem is whether there exist finite projective planes of prime order which are not finite field planes (equivalently, whether there exists a non-Desarguesian projective plane of prime order).[citation needed]
A projective plane of order N is a Steiner S(2, N + 1, N^2 + N + 1) system (see Steiner system). Conversely, one can prove that all Steiner systems of this form (λ = 2) are projective planes.
The number of automorphisms (collineations) of PG(n, k), with k = p^m and p prime, is m(k^(n+1) − 1)(k^(n+1) − k)(k^(n+1) − k^2)⋯(k^(n+1) − k^n)/(k − 1).
The number of mutually orthogonal Latin squares of order N is at most N − 1. A set of N − 1 exists if and only if there is a projective plane of order N.
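For prime N a complete set of N − 1 mutually orthogonal Latin squares has a classical construction, L_k(i, j) = ki + j (mod N). A sketch verifying mutual orthogonality for N = 5 (illustrative code, not from the article):

```python
N = 5  # a prime order, so a projective plane of order N exists

# For prime N, the squares L_k(i, j) = (k*i + j) mod N, for k = 1..N-1,
# form a complete set of N - 1 mutually orthogonal Latin squares.
squares = [[[(k * i + j) % N for j in range(N)] for i in range(N)]
           for k in range(1, N)]

def orthogonal(A, B):
    # Two squares are orthogonal when superimposing them yields every
    # ordered pair of symbols exactly once.
    pairs = {(A[i][j], B[i][j]) for i in range(N) for j in range(N)}
    return len(pairs) == N * N

assert len(squares) == N - 1
assert all(orthogonal(squares[a], squares[b])
           for a in range(len(squares)) for b in range(a + 1, len(squares)))
```

Orthogonality follows because, for k ≠ m, the pair (ki + j, mi + j) determines i (since k − m is invertible mod the prime N) and then j.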
While the classification of all projective planes is far from complete, results are known for small orders:
Projective planes may be thought of as projective geometries of dimension two.[15] Higher-dimensional projective geometries can be defined in terms of incidence relations in a manner analogous to the definition of a projective plane.
The smallest projective space of dimension 3 is PG(3, 2).
These turn out to be "tamer" than the projective planes since the extra degrees of freedom permit Desargues' theorem to be proved geometrically in the higher-dimensional geometry. This means that the coordinate "ring" associated to the geometry must be a division ring (skewfield) K, and the projective geometry is isomorphic to the one constructed from the vector space K^(d+1), i.e. PG(d, K). As in the construction given earlier, the points of the d-dimensional projective space PG(d, K) are the lines through the origin in K^(d+1) and a line in PG(d, K) corresponds to a plane through the origin in K^(d+1). In fact, each i-dimensional object in PG(d, K), with i < d, is an (i + 1)-dimensional (algebraic) vector subspace of K^(d+1) ("goes through the origin"). The projective spaces in turn generalize to the Grassmannian spaces.
It can be shown that if Desargues' theorem holds in a projective space of dimension greater than two, then it must also hold in all planes that are contained in that space. Since there are projective planes in which Desargues' theorem fails (non-Desarguesian planes), these planes cannot be embedded in a higher-dimensional projective space. Only the planes from the vector space construction PG(2, K) can appear in projective spaces of higher dimension. Some disciplines in mathematics restrict the meaning of projective plane to only this type of projective plane since otherwise general statements about projective spaces would always have to mention the exceptions when the geometric dimension is two.[16]
|
https://en.wikipedia.org/wiki/Projective_plane#Lines_joining_points_and_intersection_of_lines_.28using_duality.29
|
The distance between two parallel lines in the plane is the minimum distance between any two points, one on each line.
Because the lines are parallel, the perpendicular distance between them is a constant, so it does not matter which point is chosen to measure the distance. Given the equations of two non-vertical parallel lines
the distance between the two lines is the distance between the two intersection points of these lines with the perpendicular line
This distance can be found by first solving the linear systems
and
to get the coordinates of the intersection points. The solutions to the linear systems are the points
and
The distance between the points is
which reduces to
When the lines are given by
the distance between them can be expressed as
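Assuming the standard forms of the two resulting formulas (for a pair of lines y = mx + c1, y = mx + c2 in slope form, and ax + by = c1, ax + by = c2 in general form), the computation can be sketched as follows; the function names are illustrative:

```python
from math import hypot

def dist_parallel_slope(m, c1, c2):
    # Distance between the parallel lines y = m*x + c1 and y = m*x + c2.
    return abs(c2 - c1) / hypot(1, m)

def dist_parallel_general(a, b, c1, c2):
    # Distance between a*x + b*y = c1 and a*x + b*y = c2
    # (the same coefficients a, b in both equations).
    return abs(c2 - c1) / hypot(a, b)

print(dist_parallel_slope(0, 0, 3))       # horizontal lines y = 0 and y = 3
print(dist_parallel_general(3, 4, 0, 10)) # 3x + 4y = 0 and 3x + 4y = 10
```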
|
https://en.wikipedia.org/wiki/Distance_between_two_parallel_lines
|
The distance (or perpendicular distance) from a point to a line is the shortest distance from a fixed point to any point on a fixed infinite line in Euclidean geometry. It is the length of the line segment which joins the point to the line and is perpendicular to the line. The formula for calculating it can be derived and expressed in several ways.
Knowing the shortest distance from a point to a line can be useful in various situations, for example finding the shortest distance to reach a road, or quantifying the scatter on a graph. In Deming regression, a type of linear curve fitting, if the dependent and independent variables have equal variance this results in orthogonal regression, in which the degree of imperfection of the fit is measured for each data point as the perpendicular distance of the point from the regression line.
In the case of a line in the plane given by the equation ax + by + c = 0, where a, b and c are real constants with a and b not both zero, the distance from the line to a point (x0, y0) is[1][2]: p.14
The point on this line which is closest to (x0,y0) has coordinates:[3]
Horizontal and vertical lines
In the general equation of a line, ax + by + c = 0, a and b cannot both be zero unless c is also zero, in which case the equation does not define a line. If a = 0 and b ≠ 0, the line is horizontal and has equation y = −c/b. The distance from (x0, y0) to this line is measured along a vertical line segment of length |y0 − (−c/b)| = |by0 + c| / |b| in accordance with the formula. Similarly, for vertical lines (b = 0) the distance between the same point and the line is |ax0 + c| / |a|, as measured along a horizontal line segment.
If the line passes through two points P1 = (x1, y1) and P2 = (x2, y2) then the distance of (x0, y0) from the line is:
The denominator of this expression is the distance between P1 and P2. The numerator is twice the area of the triangle with its vertices at the three points, (x0, y0), P1 and P2. See: Area of a triangle § Using coordinates. The expression is equivalent to h = 2A/b, which can be obtained by rearranging the standard formula for the area of a triangle: A = (1/2)bh, where b is the length of a side, and h is the perpendicular height from the opposite vertex.
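Both formulas can be sketched in code; the function names are illustrative, and the two-point version uses the standard expansion of twice the triangle area in the numerator:

```python
from math import hypot

def dist_point_line(a, b, c, x0, y0):
    # Distance from (x0, y0) to the line a*x + b*y + c = 0.
    return abs(a * x0 + b * y0 + c) / hypot(a, b)

def dist_point_two_point_line(x1, y1, x2, y2, x0, y0):
    # Distance from (x0, y0) to the line through (x1, y1) and (x2, y2):
    # twice the triangle area (numerator) over the base length (denominator).
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / hypot(x2 - x1, y2 - y1)

print(dist_point_line(3, 4, -10, 0, 0))            # 3x + 4y - 10 = 0 vs origin
print(dist_point_two_point_line(0, 0, 1, 1, 1, 0)) # line y = x vs (1, 0)
```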
If the line passes through the point P = (Px, Py) with angle θ, then the distance of some point (x0, y0) to the line is
This proof is valid only if the line is neither vertical nor horizontal; that is, we assume that neither a nor b in the equation of the line is zero.
The line with equation ax + by + c = 0 has slope −a/b, so any line perpendicular to it will have slope b/a (the negative reciprocal). Let (m, n) be the point of intersection of the line ax + by + c = 0 and the line perpendicular to it which passes through the point (x0, y0). The line through these two points is perpendicular to the original line, so
Thus,a(y0−n)−b(x0−m)=0,{\displaystyle a(y_{0}-n)-b(x_{0}-m)=0,}and by squaring this equation we obtain:
Now consider,
using the above squared equation. But we also have,
since (m,n) is onax+by+c= 0.
Thus,
and we obtain the length of the line segment determined by these two points,
This proof is valid only if the line is not horizontal or vertical.[5]
Drop a perpendicular from the point P with coordinates (x0, y0) to the line with equation Ax + By + C = 0. Label the foot of the perpendicular R. Draw the vertical line through P and label its intersection with the given line S. At any point T on the line, draw a right triangle TVU whose sides are horizontal and vertical line segments with hypotenuse TU on the given line and horizontal side of length |B| (see diagram). The vertical side of ∆TVU will have length |A| since the line has slope −A/B.
∆PRS and ∆TVU are similar triangles, since they are both right triangles and ∠PSR ≅ ∠TUV since they are corresponding angles of a transversal to the parallel lines PS and UV (both are vertical lines).[6] Corresponding sides of these triangles are in the same ratio, so:
If point S has coordinates (x0, m) then |PS| = |y0 − m| and the distance from P to the line is:
SinceSis on the line, we can find the value of m,
and finally obtain:[7]
A variation of this proof is to place V at P and compute the area of the triangle ∆UVT two ways to obtain that D|TU| = |VU||VT|, where D is the altitude of ∆UVT drawn to the hypotenuse of ∆UVT from P. The distance formula can then be used to express |TU|, |VU|, and |VT| in terms of the coordinates of P and the coefficients of the equation of the line to get the indicated formula.[citation needed]
Let P be the point with coordinates (x0, y0) and let the given line have equation ax + by + c = 0. Also, let Q = (x1, y1) be any point on this line and n the vector (a, b) starting at point Q. The vector n is perpendicular to the line, and the distance d from point P to the line is equal to the length of the orthogonal projection of the vector QP on n. The length of this projection is given by:
Now,
thus
SinceQis a point on the line,c=−ax1−by1{\displaystyle c=-ax_{1}-by_{1}}, and so,[8]
It is possible to produce another expression to find the shortest distance of a point to a line. This derivation also requires that the line is not vertical or horizontal.
The point P is given with coordinates (x0, y0).
The equation of the line is given by y = mx + k. The equation of the normal to that line which passes through the point P is given by y = (x0 − x)/m + y0.
The point at which these two lines intersect is the closest point on the original line to the point P. Hence:
We can solve this equation forx,
Theycoordinate of the point of intersection can be found by substituting this value ofxinto the equation of the original line,
Using the equation for finding the distance between 2 points,d=(X2−X1)2+(Y2−Y1)2{\displaystyle d={\sqrt {(X_{2}-X_{1})^{2}+(Y_{2}-Y_{1})^{2}}}}, we can deduce that the formula to find the shortest distance between a line and a point is the following:
Recalling that m = −a/b and k = −c/b for the line with equation ax + by + c = 0, a little algebraic simplification reduces this to the standard expression.[9]
The equation of a line can be given in vector form:
Here a is the position of a point on the line, and n is a unit vector in the direction of the line. Then as scalar t varies, x gives the locus of the line.
The distance of an arbitrary point p to this line is given by
This formula can be derived as follows: a − p is a vector from p to the point a on the line. Then (a − p) · n is the projected length onto the line and so
is a vector that is the projection of a − p onto the line. Thus
is the component of a − p perpendicular to the line. The distance from the point to the line is then just the norm of that vector.[10] This more general formula is not restricted to two dimensions.
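The derivation above translates directly into code that works in any dimension; a sketch (illustrative names), assuming n is a unit vector as stated:

```python
import math

def dist_point_line_vec(p, a, n):
    # Line x = a + t*n with n a unit direction vector; distance from point p.
    d = [ac - pc for ac, pc in zip(a, p)]          # the vector a - p
    t = sum(dc * nc for dc, nc in zip(d, n))       # (a - p) . n, projected length
    perp = [dc - t * nc for dc, nc in zip(d, n)]   # component perpendicular to the line
    return math.sqrt(sum(c * c for c in perp))     # its norm is the distance

print(dist_point_line_vec((3, 4), (0, 0), (1, 0)))       # point vs the x-axis, 2-D
print(dist_point_line_vec((0, 3, 4), (0, 0, 0), (1, 0, 0)))  # same idea in 3-D
```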
If the line (l) goes through point A and has a direction vector u, the distance between point P and line (l) is
where AP × u is the cross product of the vectors AP and u and where ‖u‖ is the vector norm of u.
Here AP = P − A.
Note that cross products only exist in dimensions 3 and 7 and trivially in dimensions 0 and 1 (where the cross product is constant 0).
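A sketch of the cross-product formula in three dimensions (function names are illustrative; u need not be a unit vector, since the formula divides by its norm):

```python
import math

def cross(u, v):
    # Standard 3-D cross product.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dist_point_line_3d(P, A, u):
    # Distance from P to the line through A with direction vector u:
    # ||AP x u|| / ||u||, the area of the parallelogram over its base.
    AP = tuple(p - a for p, a in zip(P, A))
    return norm(cross(AP, u)) / norm(u)

print(dist_point_line_3d((0, 3, 4), (0, 0, 0), (2, 0, 0)))  # point vs the x-axis
```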
|
https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line
|
In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point.
Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection.
In vector notation, a plane can be expressed as the set of points {\displaystyle \mathbf {p} } for which
where {\displaystyle \mathbf {n} } is a normal vector to the plane and {\displaystyle \mathbf {p_{0}} } is a point on the plane. (The notation {\displaystyle \mathbf {a} \cdot \mathbf {b} } denotes the dot product of the vectors {\displaystyle \mathbf {a} } and {\displaystyle \mathbf {b} }.)
The vector equation for a line is
where {\displaystyle \mathbf {l} } is a unit vector in the direction of the line, {\displaystyle \mathbf {l_{0}} } is a point on the line, and {\displaystyle d} is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives
Expanding gives
Solving for {\displaystyle d} gives
If {\displaystyle \mathbf {l} \cdot \mathbf {n} =0}, then the line and plane are parallel. There are two cases: if {\displaystyle (\mathbf {p_{0}} -\mathbf {l_{0}} )\cdot \mathbf {n} =0}, then the line is contained in the plane; that is, the line intersects the plane at every point of the line. Otherwise, the line and plane have no intersection.
If {\displaystyle \mathbf {l} \cdot \mathbf {n} \neq 0}, there is a single point of intersection. The value of {\displaystyle d} can be calculated and the point of intersection, {\displaystyle \mathbf {p} }, is given by
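The algebra above can be sketched as follows; this is a minimal illustration under the stated vector-form conventions, with hypothetical function names and a tolerance of my own choosing:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def line_plane_intersection(n, p0, l, l0, eps=1e-9):
    """Intersect the line x = l0 + d*l with the plane (p - p0) . n = 0.
    Returns the intersection point, or None if l . n = 0 (the line is
    parallel to the plane: contained in it or disjoint from it)."""
    denom = dot(l, n)
    if abs(denom) < eps:
        return None  # parallel case
    d = dot([a - b for a, b in zip(p0, l0)], n) / denom
    return tuple(a + d * b for a, b in zip(l0, l))
```

For instance, the vertical line through the origin meets the plane z = 1 at (0, 0, 1).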
A line is described by all points that lie in a given direction from a point. A general point on the line passing through points {\displaystyle \mathbf {l} _{a}=(x_{a},y_{a},z_{a})} and {\displaystyle \mathbf {l} _{b}=(x_{b},y_{b},z_{b})} can be represented as
where {\displaystyle \mathbf {l} _{ab}=\mathbf {l} _{b}-\mathbf {l} _{a}} is the vector pointing from {\displaystyle \mathbf {l} _{a}} to {\displaystyle \mathbf {l} _{b}}.
Similarly, a general point on the plane determined by the triangle defined by the points {\displaystyle \mathbf {p} _{0}=(x_{0},y_{0},z_{0})}, {\displaystyle \mathbf {p} _{1}=(x_{1},y_{1},z_{1})} and {\displaystyle \mathbf {p} _{2}=(x_{2},y_{2},z_{2})} can be represented as
where {\displaystyle \mathbf {p} _{01}=\mathbf {p} _{1}-\mathbf {p} _{0}} is the vector pointing from {\displaystyle \mathbf {p} _{0}} to {\displaystyle \mathbf {p} _{1}}, and {\displaystyle \mathbf {p} _{02}=\mathbf {p} _{2}-\mathbf {p} _{0}} is the vector pointing from {\displaystyle \mathbf {p} _{0}} to {\displaystyle \mathbf {p} _{2}}.
The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation:
This can be rewritten as
which can be expressed in matrix form as
where the vectors are written as column vectors.
This produces a system of linear equations which can be solved for {\displaystyle t}, {\displaystyle u} and {\displaystyle v}. If the solution satisfies the condition {\displaystyle t\in [0,1]}, then the intersection point is on the line segment between {\displaystyle \mathbf {l} _{a}} and {\displaystyle \mathbf {l} _{b}}; otherwise it is elsewhere on the line. Likewise, if the solution satisfies {\displaystyle u,v\in [0,1]}, then the intersection point is in the parallelogram formed by the point {\displaystyle \mathbf {p} _{0}} and the vectors {\displaystyle \mathbf {p} _{01}} and {\displaystyle \mathbf {p} _{02}}. If the solution additionally satisfies {\displaystyle (u+v)\leq 1}, then the intersection point lies in the triangle formed by the three points {\displaystyle \mathbf {p} _{0}}, {\displaystyle \mathbf {p} _{1}} and {\displaystyle \mathbf {p} _{2}}.
The determinant of the matrix can be calculated as
If the determinant is zero, then there is no unique solution; the line is either in the plane or parallel to it.
If a unique solution exists (determinant is not 0), then it can be found byinvertingthe matrix and rearranging:
which expands to
and then to
thus giving the solutions:
The point of intersection is then equal to
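The matrix method above can be sketched as a direct 3×3 solve by Cramer's rule, using scalar triple products for the determinants (the function name, helpers, and tolerance are my own choices):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def line_triangle_params(la, lb, p0, p1, p2, eps=1e-9):
    """Solve la + t*(lb - la) = p0 + u*(p1 - p0) + v*(p2 - p0) for (t, u, v).
    The matrix has columns (la - lb, p1 - p0, p2 - p0); each determinant
    is a scalar triple product (Cramer's rule). Returns None if singular."""
    c0, c1, c2 = sub(la, lb), sub(p1, p0), sub(p2, p0)
    rhs = sub(la, p0)
    det = dot(c0, cross(c1, c2))
    if abs(det) < eps:
        return None  # line parallel to the plane of the triangle
    t = dot(rhs, cross(c1, c2)) / det
    u = dot(c0, cross(rhs, c2)) / det
    v = dot(c0, cross(c1, rhs)) / det
    return t, u, v
```

Checking t in [0, 1], u, v in [0, 1] and u + v <= 1 on the returned parameters then classifies the intersection exactly as in the text.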
In the ray tracing method of computer graphics, a surface can be represented as a set of pieces of planes. The intersection of a ray of light with each plane is used to produce an image of the surface. In vision-based 3D reconstruction, a subfield of computer vision, depth values are commonly measured by the so-called triangulation method, which finds the intersection between the light plane and the ray reflected toward the camera.
The algorithm can be generalised to cover intersection with other planar figures, in particular, the intersection of a polyhedron with a line.
|
https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
|
In geometry, the parallel postulate is the fifth postulate in Euclid's Elements and a distinctive axiom in Euclidean geometry. It states that, in two-dimensional geometry:
If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
This postulate does not specifically talk about parallel lines;[1] it is only a postulate related to parallelism. Euclid gave the definition of parallel lines in Book I, Definition 23,[2] just before the five postulates.[3]
Euclidean geometry is the study of geometry that satisfies all of Euclid's axioms, including the parallel postulate.
The postulate was long considered to be obvious or inevitable, but proofs were elusive. Eventually, it was discovered that negating the postulate gave valid, albeit different, geometries. A geometry where the parallel postulate does not hold is known as a non-Euclidean geometry. Geometry that is independent of Euclid's fifth postulate (i.e., one that only assumes the modern equivalent of the first four postulates) is known as absolute geometry (or sometimes "neutral geometry").
Probably the best-known equivalent of Euclid's parallel postulate, contingent on his other postulates, is Playfair's axiom, named after the Scottish mathematician John Playfair, which states:
In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.[4]
This axiom by itself is not logically equivalent to the Euclidean parallel postulate, since there are geometries in which one is true and the other is not. However, in the presence of the remaining axioms which give Euclidean geometry, either one can be used to prove the other, so they are equivalent in the context of absolute geometry.[5]
Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming so self-evident that they were unconsciously assumed by people who claimed to have proven the parallel postulate from Euclid's other postulates. These equivalent statements include:
However, the alternatives which employ the word "parallel" cease appearing so simple when one is obliged to explain which of the four common definitions of "parallel" is meant – constant separation, never meeting, same angles where crossed by some third line, or same angles where crossed by any third line – since the equivalence of these four is itself one of the unconsciously obvious assumptions equivalent to Euclid's fifth postulate. In the list above, it is always taken to refer to non-intersecting lines. For example, if the word "parallel" in Playfair's axiom is taken to mean 'constant separation' or 'same angles where crossed by any third line', then it is no longer equivalent to Euclid's fifth postulate, and is provable from the first four (the axiom says 'There is at most one line...', which is consistent with there being no such lines). However, if the definition is taken so that parallel lines are lines that do not intersect, or that have some line intersecting them in the same angles, Playfair's axiom is contextually equivalent to Euclid's fifth postulate and is thus logically independent of the first four postulates. Note that the latter two definitions are not equivalent, because in hyperbolic geometry the second definition holds only for ultraparallel lines.
From the beginning, the postulate came under attack as being provable, and therefore not a postulate, and for more than two thousand years, many attempts were made to prove (derive) the parallel postulate using Euclid's first four postulates.[10]The main reason that such a proof was so highly sought after was that, unlike the first four postulates, the parallel postulate is not self-evident. If the order in which the postulates were listed in the Elements is significant, it indicates that Euclid included this postulate only when he realised he could not prove it or proceed without it.[11]Many attempts were made to prove the fifth postulate from the other four, many of them being accepted as proofs for long periods until the mistake was found. Invariably the mistake was assuming some 'obvious' property which turned out to be equivalent to the fifth postulate (Playfair's axiom). Although known from the time of Proclus, this became known as Playfair's Axiom after John Playfair wrote a famous commentary on Euclid in 1795 in which he proposed replacing Euclid's fifth postulate by his own axiom. Today, over two thousand two hundred years later, Euclid's fifth postulate remains a postulate.
Proclus (410–485) wrote a commentary on The Elements in which he comments on attempted proofs to deduce the fifth postulate from the other four; in particular, he notes that Ptolemy had produced a false 'proof'. Proclus then goes on to give a false proof of his own. However, he did give a postulate which is equivalent to the fifth postulate.
Ibn al-Haytham (Alhazen) (965–1039), an Arab mathematician, made an attempt at proving the parallel postulate using a proof by contradiction,[12] in the course of which he introduced the concepts of motion and transformation into geometry.[13] He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral",[14] and his attempted proof contains elements similar to those found in Lambert quadrilaterals and Playfair's axiom.[15]
The Persian mathematician, astronomer, philosopher, and poet Omar Khayyám (1050–1123) attempted to prove the fifth postulate from another explicitly given postulate (based on the fourth of the five principles due to the Philosopher (Aristotle)), namely, "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge."[16] He derived some of the earlier results belonging to elliptical geometry and hyperbolic geometry, though his postulate excluded the latter possibility.[17] The Saccheri quadrilateral was also first considered by Omar Khayyám in the late 11th century in Book I of Explanations of the Difficulties in the Postulates of Euclid.[14] Unlike many commentators on Euclid before and after him (including Giovanni Girolamo Saccheri), Khayyám was not trying to prove the parallel postulate as such but to derive it from his equivalent postulate. He recognized that three possibilities arose from omitting Euclid's fifth postulate: if two perpendiculars to one line cross another line, judicious choice of the last can make the internal angles where it meets the two perpendiculars equal (it is then parallel to the first line). If those equal internal angles are right angles, we get Euclid's fifth postulate; otherwise, they must be either acute or obtuse. He showed that the acute and obtuse cases led to contradictions using his postulate, but his postulate is now known to be equivalent to the fifth postulate.
Nasir al-Din al-Tusi (1201–1274), in his Al-risala al-shafiya'an al-shakk fi'l-khutut al-mutawaziya (Discussion Which Removes Doubt about Parallel Lines) (1250), wrote detailed critiques of the parallel postulate and of Khayyám's attempted proof a century earlier. Nasir al-Din attempted to derive a proof by contradiction of the parallel postulate.[18] He also considered the cases of what are now known as elliptical and hyperbolic geometry, though he ruled out both of them.[17]
Nasir al-Din's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), wrote a book on the subject in 1298, based on his father's later thoughts, which presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements."[18][19] His work was published in Rome in 1594 and was studied by European geometers. This work marked the starting point for Saccheri's work on the subject,[18] which opened with a criticism of Sadr al-Din's work and the work of Wallis.[20]
Giordano Vitale (1633–1711), in his book Euclide restituo (1680, 1686), used the Khayyam–Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant. Girolamo Saccheri (1667–1733) pursued the same line of reasoning more thoroughly, correctly obtaining absurdity from the obtuse case (proceeding, like Euclid, from the implicit assumption that lines can be extended indefinitely and have infinite length), but failing to refute the acute case (although he managed to wrongly persuade himself that he had).
In 1766 Johann Lambert wrote, but did not publish, Theorie der Parallellinien, in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure that today we call a Lambert quadrilateral, a quadrilateral with three right angles (which can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyám, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.[21]
Where Khayyám and Saccheri had attempted to prove Euclid's fifth postulate by disproving the only possible alternatives, the nineteenth century finally saw mathematicians exploring those alternatives and discovering the logically consistent geometries that result. In 1829, Nikolai Ivanovich Lobachevsky published an account of acute geometry in an obscure Russian journal (later re-published in 1840 in German). In 1831, János Bolyai included, in a book by his father, an appendix describing acute geometry, which, doubtlessly, he had developed independently of Lobachevsky. Carl Friedrich Gauss had also studied the problem, but he did not publish any of his results. Upon hearing of Bolyai's results in a letter from Bolyai's father, Farkas Bolyai, Gauss stated:
If I commenced by saying that I am unable to praise this work, you would certainly be surprised for a moment. But I cannot say otherwise. To praise it would be to praise myself. Indeed the whole contents of the work, the path taken by your son, the results to which he is led, coincide almost entirely with my meditations, which have occupied my mind partly for the last thirty or thirty-five years.[22]
The resulting geometries were later developed by Lobachevsky, Riemann and Poincaré into hyperbolic geometry (the acute case) and elliptic geometry (the obtuse case). The independence of the parallel postulate from Euclid's other axioms was finally demonstrated by Eugenio Beltrami in 1868.
Euclid did not postulate the converse of his fifth postulate, which is one way to distinguish Euclidean geometry from elliptic geometry. The Elements contains the proof of an equivalent statement (Book I, Proposition 27): If a straight line falling on two straight lines make the alternate angles equal to one another, the straight lines will be parallel to one another. As De Morgan[23] pointed out, this is logically equivalent to (Book I, Proposition 16). These results do not depend upon the fifth postulate, but they do require the second postulate,[24] which is violated in elliptic geometry.
Attempts to logically prove the parallel postulate, rather than the eighth axiom,[25] were criticized by Arthur Schopenhauer in The World as Will and Idea. However, the argument used by Schopenhauer was that the postulate is evident by perception, not that it was not a logical consequence of the other axioms.[26]
The parallel postulate is equivalent to the conjunction of the Lotschnittaxiom and of Aristotle's axiom.[27][28] The former states that the perpendiculars to the sides of a right angle intersect, while the latter states that there is no upper bound for the lengths of the distances from the leg of an angle to the other leg. As shown in,[29] the parallel postulate is equivalent to the conjunction of the following incidence-geometric forms of the Lotschnittaxiom and of Aristotle's axiom:
Given three parallel lines, there is a line that intersects all three of them.
Given a line a and two distinct intersecting lines m and n, each different from a, there exists a line g which intersects a and m, but not n.
The splitting of the parallel postulate into the conjunction of these incidence-geometric axioms is possible only in the presence of absolute geometry.[30]
In effect, this method characterized parallel lines as lines always equidistant from one another and also introduced the concept of motion into geometry.
"Khayyam's postulate had excluded the case of the hyperbolic geometry whereas al-Tusi's postulate ruled out both the hyperbolic and elliptic geometries."
"But in a manuscript probably written by his son Sadr al-Din in 1298, based on Nasir al-Din's later thoughts on the subject, there is a new argument based on another hypothesis, also equivalent to Euclid's, [...] The importance of this latter work is that it was published in Rome in 1594 and was studied by European geometers. In particular, it became the starting point for the work of Saccheri and ultimately for the discovery of non-Euclidean geometry."
"In Pseudo-Tusi's Exposition of Euclid, [...] another statement is used instead of a postulate. It was independent of the Euclidean postulate V and easy to prove. [...] He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements."
Eder, Michelle (2000), Views of Euclid's Parallel Postulate in Ancient Greece and in Medieval Islam, Rutgers University, retrieved 2008-01-23
|
https://en.wikipedia.org/wiki/Parallel_postulate
|
In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two or more images. In order to solve this problem it is necessary to know the parameters of the camera projection function from 3D to 2D for the cameras involved, in the simplest case represented by the camera matrices. Triangulation is sometimes also referred to as reconstruction or intersection.
The triangulation problem is in principle trivial. Each point in an image corresponds to a line in 3D space: all points on that line project to the same point in the image. If a pair of corresponding points in two or more images can be found, they must be the projections of a common 3D point x. The set of lines generated by the image points must intersect at x, and the algebraic formulation of the coordinates of x can be computed in a variety of ways, as presented below.
In practice, however, the coordinates of image points cannot be measured with arbitrary accuracy. Instead, various types of noise, such as geometric noise from lens distortion or interest-point detection error, lead to inaccuracies in the measured image coordinates. As a consequence, the lines generated by the corresponding image points do not always intersect in 3D space. The problem, then, is to find a 3D point which optimally fits the measured image points. In the literature there are multiple proposals for how to define optimality and how to find the optimal 3D point. Since they are based on different optimality criteria, the various methods produce different estimates of the 3D point x when noise is involved.
In the following, it is assumed that triangulation is made on corresponding image points from two views generated by pinhole cameras.
The image to the left illustrates the epipolar geometry of a pair of stereo cameras of pinhole model. A point x in 3D space is projected onto each image plane along a line (green) which goes through the camera's focal point, {\displaystyle \mathbf {O} _{1}} and {\displaystyle \mathbf {O} _{2}}, resulting in the two corresponding image points {\displaystyle \mathbf {y} _{1}} and {\displaystyle \mathbf {y} _{2}}. If {\displaystyle \mathbf {y} _{1}} and {\displaystyle \mathbf {y} _{2}} are given and the geometry of the two cameras is known, the two projection lines (green lines) can be determined, and it must be the case that they intersect at the point x. Using basic linear algebra that intersection point can be determined in a straightforward way.
The image to the right shows the real case. The positions of the image points {\displaystyle \mathbf {y} _{1}} and {\displaystyle \mathbf {y} _{2}} cannot be measured exactly. The reason is a combination of factors such as
As a consequence, the measured image points are {\displaystyle \mathbf {y} '_{1}} and {\displaystyle \mathbf {y} '_{2}} instead of {\displaystyle \mathbf {y} _{1}} and {\displaystyle \mathbf {y} _{2}}. Their projection lines (blue) need not intersect in 3D space or come close to x. In fact, these lines intersect if and only if {\displaystyle \mathbf {y} '_{1}} and {\displaystyle \mathbf {y} '_{2}} satisfy the epipolar constraint defined by the fundamental matrix. Given the measurement noise in {\displaystyle \mathbf {y} '_{1}} and {\displaystyle \mathbf {y} '_{2}}, it is rather likely that the epipolar constraint is not satisfied and the projection lines do not intersect.
This observation leads to the problem which is solved in triangulation: which 3D point x_est is the best estimate of x given {\displaystyle \mathbf {y} '_{1}} and {\displaystyle \mathbf {y} '_{2}} and the geometry of the cameras? The answer is often found by defining an error measure which depends on x_est and then minimizing this error. In the following sections, some of the various methods for computing x_est presented in the literature are briefly described.
All triangulation methods produce x_est = x in the case that {\displaystyle \mathbf {y} _{1}=\mathbf {y} '_{1}} and {\displaystyle \mathbf {y} _{2}=\mathbf {y} '_{2}}, that is, when the epipolar constraint is satisfied (except for singular points, see below). It is what happens when the constraint is not satisfied which differs between the methods.
A triangulation method can be described in terms of a function {\displaystyle \tau } such that
where {\displaystyle \mathbf {y} '_{1},\mathbf {y} '_{2}} are the homogeneous coordinates of the detected image points and {\displaystyle \mathbf {C} _{1},\mathbf {C} _{2}} are the camera matrices. x is the homogeneous representation of the resulting 3D point. The {\displaystyle \sim } sign implies that {\displaystyle \tau } is only required to produce a vector which is equal to x up to a multiplication by a non-zero scalar, since homogeneous vectors are involved.
Before looking at the specific methods, that is, specific functions {\displaystyle \tau }, there are some general concepts related to the methods that need to be explained. Which triangulation method is chosen for a particular problem depends to some extent on these characteristics.
Some of the methods fail to correctly compute an estimate of x if it lies in a certain subset of 3D space, corresponding to some combination of {\displaystyle \mathbf {y} '_{1},\mathbf {y} '_{2},\mathbf {C} _{1},\mathbf {C} _{2}}. A point in this subset is then a singularity of the triangulation method. The reason for the failure can be that some equation system to be solved is under-determined or that the projective representation of x_est becomes the zero vector for the singular points.
In some applications, it is desirable that the triangulation is independent of the coordinate system used to represent 3D points; if the triangulation problem is formulated in one coordinate system and then transformed into another, the resulting estimate x_est should transform in the same way. This property is commonly referred to as invariance. Not every triangulation method assures invariance, at least not for general types of coordinate transformations.
For a homogeneous representation of 3D coordinates, the most general transformation is a projective transformation, represented by a {\displaystyle 4\times 4} matrix {\displaystyle \mathbf {T} }. If the homogeneous coordinates are transformed according to
then the camera matrices ({\displaystyle \mathbf {C} _{k}}) must transform as
to produce the same homogeneous image coordinates ({\displaystyle \mathbf {y} _{k}})
If the triangulation function {\displaystyle \tau } is invariant to {\displaystyle \mathbf {T} }, then the following relation must be valid
from which follows that
For each triangulation method, it can be determined if this last relation is valid. If it is, it may be satisfied only for a subset of the projective transformations, for example, rigid or affine transformations.
The function {\displaystyle \tau } is only an abstract representation of a computation which, in practice, may be relatively complex. Some methods result in a {\displaystyle \tau } which is a closed-form continuous function, while others need to be decomposed into a series of computational steps involving, for example, SVD or finding the roots of a polynomial. Yet another class of methods results in a {\displaystyle \tau } which must rely on iterative estimation of some parameters. This means that both the computation time and the complexity of the operations involved may vary between the different methods.
Each of the two image points {\displaystyle \mathbf {y} '_{1}} and {\displaystyle \mathbf {y} '_{2}} has a corresponding projection line (blue in the right image above), here denoted {\displaystyle \mathbf {L} '_{1}} and {\displaystyle \mathbf {L} '_{2}}, which can be determined given the camera matrices {\displaystyle \mathbf {C} _{1},\mathbf {C} _{2}}. Let {\displaystyle d} be a distance function between a 3D line L and a 3D point x such that {\displaystyle d(\mathbf {L} ,\mathbf {x} )} is the Euclidean distance between {\displaystyle \mathbf {L} } and {\displaystyle \mathbf {x} }.
The midpoint method finds the point x_est which minimizes
It turns out that x_est lies exactly at the middle of the shortest line segment which joins the two projection lines.
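A minimal sketch of the midpoint method; the function name and the normal-equation derivation for the closest points are my own, not taken from the article:

```python
def midpoint_triangulation(o1, d1, o2, d2):
    """Midpoint method: given two projection lines x = o1 + s*d1 and
    x = o2 + t*d2, minimise the squared distance between points on the
    two lines and return the midpoint of the shortest joining segment."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    w0 = tuple(a - b for a, b in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    den = a * c - b * b
    if den == 0:
        raise ValueError("projection lines are parallel")
    s = (b * e - c * d) / den   # parameter of the closest point on line 1
    t = (a * e - b * d) / den   # parameter of the closest point on line 2
    p1 = tuple(o + s * v for o, v in zip(o1, d1))
    p2 = tuple(o + t * v for o, v in zip(o2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))
```

For two skew lines the result is exactly the midpoint of the common perpendicular segment; for intersecting lines it is the intersection point itself.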
The problem to be solved there is how to compute {\displaystyle (x_{1},x_{2},x_{3})} given corresponding normalized image coordinates {\displaystyle (y_{1},y_{2})} and {\displaystyle (y'_{1},y'_{2})}. If the essential matrix is known and the corresponding rotation and translation transformations have been determined, this algorithm (described in Longuet-Higgins' paper) provides a solution.
Let {\displaystyle \mathbf {r} _{k}} denote row k of the rotation matrix {\displaystyle \mathbf {R} }:
Combining the above relations between 3D coordinates in the two coordinate systems and the mapping between 3D and 2D points described earlier gives
or
Once {\displaystyle x_{3}} is determined, the other two coordinates can be computed as
The above derivation is not unique. It is also possible to start with an expression for {\displaystyle y'_{2}} and derive an expression for {\displaystyle x_{3}} according to
In the ideal case, when the camera maps the 3D points according to a perfect pinhole camera and the resulting 2D points can be detected without any noise, the two expressions for {\displaystyle x_{3}} are equal. In practice, however, they are not, and it may be advantageous to combine the two estimates of {\displaystyle x_{3}}, for example, in terms of some sort of average.
There are also other possible extensions of the above computations. The derivation started with an expression of the primed image coordinates and derived 3D coordinates in the unprimed system. It is also possible to start with unprimed image coordinates and obtain primed 3D coordinates, which finally can be transformed into unprimed 3D coordinates. Again, in the ideal case the result should be equal to the above expressions, but in practice they may deviate.
A final remark relates to the fact that if the essential matrix is determined from corresponding image coordinates, which often is the case when 3D points are determined in this way, the translation vector {\displaystyle \mathbf {t} } is known only up to an unknown positive scaling. As a consequence, the reconstructed 3D points, too, are undetermined with respect to a positive scaling.
|
https://en.wikipedia.org/wiki/Triangulation_(computer_vision)
|
In geometry, an intersection is a point, line, or curve common to two or more objects (such as lines, curves, planes, and surfaces). The simplest case in Euclidean geometry is the line–line intersection between two distinct lines, which either is one point (sometimes called a vertex) or does not exist (if the lines are parallel). Other types of geometric intersection include:
Determination of the intersection of flats – linear geometric objects embedded in a higher-dimensional space – is a simple task of linear algebra, namely the solution of a system of linear equations. In general the determination of an intersection leads to non-linear equations, which can be solved numerically, for example using Newton iteration. Intersection problems between a line and a conic section (circle, ellipse, parabola, etc.) or a quadric (sphere, cylinder, hyperboloid, etc.) lead to quadratic equations that can be easily solved. Intersections between quadrics lead to quartic equations that can be solved algebraically.
For the determination of the intersection point of two non-parallel lines
a1x+b1y=c1,a2x+b2y=c2{\displaystyle a_{1}x+b_{1}y=c_{1},\ a_{2}x+b_{2}y=c_{2}}
one gets, from Cramer's rule or by substituting out a variable, the coordinates of the intersection point {\displaystyle (x_{s},y_{s})}:
(If {\displaystyle a_{1}b_{2}-a_{2}b_{1}=0}, the lines are parallel and these formulas cannot be used because they involve dividing by 0.)
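The Cramer's-rule formulas can be sketched as follows (the function name and tolerance are my own):

```python
def line_intersection(a1, b1, c1, a2, b2, c2, eps=1e-12):
    """Intersection point of a1*x + b1*y = c1 and a2*x + b2*y = c2
    by Cramer's rule. Returns None when the determinant a1*b2 - a2*b1
    vanishes, i.e. the lines are parallel."""
    det = a1 * b2 - a2 * b1
    if abs(det) < eps:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

For instance, x + y = 2 and x − y = 0 meet at (1, 1).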
For two non-parallel line segments {\displaystyle (x_{1},y_{1}),(x_{2},y_{2})} and {\displaystyle (x_{3},y_{3}),(x_{4},y_{4})} there is not necessarily an intersection point (see diagram), because the intersection point {\displaystyle (x_{0},y_{0})} of the corresponding lines need not be contained in the line segments. In order to check the situation one uses parametric representations of the lines:
The line segments intersect only in a common point {\displaystyle (x_{0},y_{0})} of the corresponding lines if the corresponding parameters {\displaystyle s_{0},t_{0}} fulfill the condition {\displaystyle 0\leq s_{0},t_{0}\leq 1}.
The parameters {\displaystyle s_{0},t_{0}} are the solution of the linear system
It can be solved for s and t using Cramer's rule (see above). If the condition {\displaystyle 0\leq s_{0},t_{0}\leq 1} is fulfilled, one inserts {\displaystyle s_{0}} or {\displaystyle t_{0}} into the corresponding parametric representation and gets the intersection point {\displaystyle (x_{0},y_{0})}.
Example: For the line segments (1,1),(3,2){\displaystyle (1,1),(3,2)} and (1,4),(2,−1){\displaystyle (1,4),(2,-1)} one gets the linear system
and s0=3/11, t0=6/11 {\displaystyle s_{0}={\tfrac {3}{11}},t_{0}={\tfrac {6}{11}}}. That means: the lines intersect at the point (17/11, 14/11){\displaystyle ({\tfrac {17}{11}},{\tfrac {14}{11}})}.
Remark: Considering lines, instead of segments, determined by pairs of points, each condition 0≤s0,t0≤1{\displaystyle 0\leq s_{0},t_{0}\leq 1} can be dropped and the method yields the intersection point of the lines (see above).
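The segment test can be sketched as follows; `segment_intersection` is a hypothetical helper that solves the 2×2 system for the parameters s, t by Cramer's rule and then applies the condition 0 ≤ s, t ≤ 1.

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, if any.

    Solves p1 + s*(p2 - p1) = p3 + t*(p4 - p3) for s, t by Cramer's rule
    and returns the common point only when 0 <= s, t <= 1."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    # System: s*(x2-x1) - t*(x4-x3) = x3-x1, and the same for y.
    a, b = x2 - x1, -(x4 - x3)
    c, d = y2 - y1, -(y4 - y3)
    det = a * d - b * c
    if det == 0:
        return None  # parallel segments
    s = ((x3 - x1) * d - b * (y3 - y1)) / det
    t = (a * (y3 - y1) - (x3 - x1) * c) / det
    if 0 <= s <= 1 and 0 <= t <= 1:
        return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))
    return None  # the lines cross outside the segments
```

For the example segments above this yields s = 3/11, t = 6/11 and the point (17/11, 14/11).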
For the intersection of
one solves the line equation for x or y and substitutes it into the equation of the circle and gets, using the formula for a quadratic equation, the solutions (x1,y1),(x2,y2){\displaystyle (x_{1},y_{1}),(x_{2},y_{2})} with
if r2(a2+b2)−c2>0.{\displaystyle r^{2}(a^{2}+b^{2})-c^{2}>0\ .} If this condition holds with strict inequality, there are two intersection points; in this case the line is called a secant line of the circle, and the line segment connecting the intersection points is called a chord of the circle.
If r2(a2+b2)−c2=0{\displaystyle r^{2}(a^{2}+b^{2})-c^{2}=0} holds, there exists only one intersection point and the line is tangent to the circle. If the weak inequality does not hold, the line does not intersect the circle.
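A numerical sketch, for a circle centered at the origin: the closed form used here (foot of the perpendicular from the origin, shifted along the line) is one standard way to write the two solutions, and the sign of the discriminant r²(a²+b²)−c² distinguishes the secant, tangent, and empty cases as described above. The helper name is hypothetical.

```python
import math

def line_circle_intersection(a, b, c, r):
    """Points where the line a*x + b*y = c meets the circle
    x**2 + y**2 = r**2 (centered at the origin).

    disc = r^2*(a^2 + b^2) - c^2: positive means a secant (two points),
    zero a tangent (one point), negative no intersection."""
    n2 = a * a + b * b
    disc = r * r * n2 - c * c
    if disc < 0:
        return []  # line misses the circle
    d = math.sqrt(disc)
    # Foot of the perpendicular from the origin, offset along the line.
    p1 = ((a * c + b * d) / n2, (b * c - a * d) / n2)
    if disc == 0:
        return [p1]  # tangent point
    p2 = ((a * c - b * d) / n2, (b * c + a * d) / n2)
    return [p1, p2]
```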
If the circle's midpoint is not the origin, see.[1] The intersection of a line and a parabola or hyperbola may be treated analogously.
The determination of the intersection points of two circles
can be reduced to the previous case of intersecting a line and a circle. By subtraction of the two given equations one gets the line equation:
This special line is the radical line of the two circles.
Special case x1=y1=y2=0{\displaystyle \;x_{1}=y_{1}=y_{2}=0}: In this case the origin is the center of the first circle and the second center lies on the x-axis (see diagram). The equation of the radical line simplifies to 2x2x=r12−r22+x22{\displaystyle \;2x_{2}x=r_{1}^{2}-r_{2}^{2}+x_{2}^{2}\;} and the points of intersection can be written as (x0,±y0){\displaystyle (x_{0},\pm y_{0})} with
In case of r12<x02{\displaystyle r_{1}^{2}<x_{0}^{2}} the circles have no points in common. In case of r12=x02{\displaystyle r_{1}^{2}=x_{0}^{2}} the circles have one point in common and the radical line is a common tangent.
Any general case as written above can be transformed by a shift and a rotation into the special case.
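The reduction to the special case can be sketched numerically: shift the first center to the origin and work along the center line, where x0 = (r1² − r2² + d²)/(2d) and y0 = √(r1² − x0²), with d the center distance. The helper name is hypothetical.

```python
import math

def circle_circle_intersection(c1, r1, c2, r2):
    """Intersection points of two circles with centers c1, c2 and
    radii r1, r2, via the radical-line reduction described above."""
    (a1, b1), (a2, b2) = c1, c2
    dx, dy = a2 - a1, b2 - b1
    d = math.hypot(dx, dy)
    if d == 0:
        return []  # concentric circles: no isolated intersection points
    x0 = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h2 = r1 * r1 - x0 * x0
    if h2 < 0:
        return []  # r1^2 < x0^2: the circles have no point in common
    h = math.sqrt(h2)
    ux, uy = dx / d, dy / d              # unit vector from c1 towards c2
    px, py = a1 + x0 * ux, b1 + x0 * uy  # point where the radical line crosses the center line
    if h == 0:
        return [(px, py)]  # radical line is a common tangent
    return [(px - h * uy, py + h * ux), (px + h * uy, py - h * ux)]
```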
The intersection of two disks (the interiors of the two circles) forms a shape called a lens.
The problem of intersection of an ellipse/hyperbola/parabola with another conic section leads to a system of quadratic equations, which can be solved in special cases easily by elimination of one coordinate. Special properties of conic sections may be used to obtain a solution. In general the intersection points can be determined by solving the equations by a Newton iteration. If both conics are given implicitly (by an equation), a 2-dimensional Newton iteration is necessary; if one is given implicitly and the other parametrically, a 1-dimensional Newton iteration suffices. See the next section.
Two curves in R2{\displaystyle \mathbb {R} ^{2}} (two-dimensional space), which are continuously differentiable (i.e. there is no sharp bend),
have an intersection point, if they have a point of the plane in common and have at this point (see diagram):
If both the curves have a point S and the tangent line there in common but do not cross each other, they are just touching at point S.
Because touching intersections appear rarely and are difficult to deal with, the following considerations omit this case. In any case below all necessary differential conditions are presupposed. The determination of intersection points always leads to one or two non-linear equations which can be solved by Newton iteration. A list of the appearing cases follows:
Any Newton iteration needs convenient starting values, which can be derived by a visualization of both the curves. A parametrically or explicitly given curve can easily be visualized, because for any parameter t or x respectively it is easy to calculate the corresponding point. For implicitly given curves this task is not as easy. In this case one has to determine a curve point with help of starting values and an iteration. See.[2]
Examples:
If one wants to determine the intersection points of two polygons, one can check the intersection of any pair of line segments of the polygons (see above). For polygons with many segments this method is rather time-consuming. In practice one accelerates the intersection algorithm by using window tests. In this case one divides the polygons into small sub-polygons and determines the smallest window (rectangle with sides parallel to the coordinate axes) for any sub-polygon. Before starting the time-consuming determination of the intersection point of two line segments any pair of windows is tested for common points. See.[3]
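The window test above amounts to a cheap axis-aligned rectangle overlap check; a minimal sketch, assuming each window is given as (xmin, ymin, xmax, ymax), with a hypothetical helper name:

```python
def windows_overlap(w1, w2):
    """Rejection test for two windows (axis-aligned rectangles),
    each given as (xmin, ymin, xmax, ymax).  Only if the windows
    overlap can the enclosed sub-polygons possibly intersect, so the
    expensive segment-segment tests are skipped otherwise."""
    return (w1[0] <= w2[2] and w2[0] <= w1[2] and
            w1[1] <= w2[3] and w2[1] <= w1[3])
```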
In 3-dimensional space there are intersection points (common points) between curves and surfaces. In the following sections we consider transversal intersection only.
The intersection of a line and a plane in general position in three dimensions is a point.
Commonly a line in space is represented parametrically (x(t),y(t),z(t)){\displaystyle (x(t),y(t),z(t))} and a plane by an equation ax+by+cz=d{\displaystyle ax+by+cz=d}. Inserting the parameter representation into the equation yields the linear equation
for the parameter t0{\displaystyle t_{0}} of the intersection point (x(t0),y(t0),z(t0)){\displaystyle (x(t_{0}),y(t_{0}),z(t_{0}))}.
If the linear equation has no solution, the line is parallel to the plane; if it is satisfied identically, the line lies in the plane.
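A minimal sketch of this substitution, assuming the line is given by a point p0 and direction u and the plane by a normal n and offset d; the helper name is hypothetical.

```python
def line_plane_intersection(p0, u, n, d, eps=1e-12):
    """Intersection of the line p(t) = p0 + t*u with the plane n . x = d.

    Substituting the parametric line into the plane equation gives the
    linear equation (n . u) * t = d - n . p0 for the parameter t."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(n, u)
    if abs(denom) < eps:
        # n . u == 0: the line is parallel to the plane
        # (it lies in the plane exactly when n . p0 == d).
        return None
    t = (d - dot(n, p0)) / denom
    return tuple(p + t * c for p, c in zip(p0, u))
```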
If a line is defined by two intersecting planes εi:n→i⋅x→=di,i=1,2{\displaystyle \varepsilon _{i}:\ {\vec {n}}_{i}\cdot {\vec {x}}=d_{i},\ i=1,2} and should be intersected by a third plane ε3:n→3⋅x→=d3{\displaystyle \varepsilon _{3}:\ {\vec {n}}_{3}\cdot {\vec {x}}=d_{3}}, the common intersection point of the three planes has to be evaluated.
Three planes εi:n→i⋅x→=di,i=1,2,3{\displaystyle \varepsilon _{i}:\ {\vec {n}}_{i}\cdot {\vec {x}}=d_{i},\ i=1,2,3} with linearly independent normal vectors n→1,n→2,n→3{\displaystyle {\vec {n}}_{1},{\vec {n}}_{2},{\vec {n}}_{3}} have the intersection point
For the proof one should establish n→i⋅p→0=di,i=1,2,3,{\displaystyle {\vec {n}}_{i}\cdot {\vec {p}}_{0}=d_{i},\ i=1,2,3,} using the rules of a scalar triple product. If the scalar triple product equals 0, the planes either have no common point or their intersection is a line (or a plane, if all three planes are the same).
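A sketch of the three-plane formula, assuming the standard closed form p0 = (d1 n2×n3 + d2 n3×n1 + d3 n1×n2) / (n1 · n2×n3); all function names are hypothetical.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def three_planes_point(n1, d1, n2, d2, n3, d3):
    """Common point of the planes n_i . x = d_i for linearly
    independent normals, via the scalar triple product n1 . (n2 x n3)."""
    triple = dot(n1, cross(n2, n3))
    if triple == 0:
        return None  # normals linearly dependent: no unique point
    terms = [cross(n2, n3), cross(n3, n1), cross(n1, n2)]
    return tuple((d1 * u + d2 * v + d3 * w) / triple
                 for u, v, w in zip(*terms))
```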
Analogously to the plane case the following cases lead to non-linear systems, which can be solved using a 1- or 3-dimensional Newton iteration.[4]
Example:
A line–sphere intersection is a simple special case.
Like the case of a line and a plane, the intersection of a curve and a surface in general position consists of discrete points, but a curve may be partly or totally contained in a surface.
Two transversally intersecting surfaces give an intersection curve. The simplest case is the intersection line of two non-parallel planes.
When the intersection of a sphere and a plane is not empty or a single point, it is a circle. This can be seen as follows:
Let S be a sphere with center O, P a plane which intersects S. Draw OE perpendicular to P and meeting P at E. Let A and B be any two different points in the intersection. Then AOE and BOE are right triangles with a common side, OE, and hypotenuses AO and BO equal. Therefore, the remaining sides AE and BE are equal. This proves that all points in the intersection are the same distance from the point E in the plane P; in other words, all points in the intersection lie on a circle C with center E.[5] This proves that the intersection of P and S is contained in C. Note that OE is the axis of the circle.
Now consider a point D of the circle C. Since C lies in P, so does D. On the other hand, the triangles AOE and DOE are right triangles with a common side, OE, and legs EA and ED equal. Therefore, the hypotenuses AO and DO are equal, and equal to the radius of S, so that D lies in S. This proves that C is contained in the intersection of P and S.
As a corollary, on a sphere there is exactly one circle that can be drawn through three given points.[6]
The proof can be extended to show that the points on a circle are all a common angular distance from one of its poles.[7]
Compare also conic sections, which can produce ovals.
To show that a non-trivial intersection of two spheres is a circle, assume (without loss of generality) that one sphere (with radius R{\displaystyle R}) is centered at the origin. Points on this sphere satisfy
Also without loss of generality, assume that the second sphere, with radius r{\displaystyle r}, is centered at a point on the positive x-axis, at distance a{\displaystyle a} from the origin. Its points satisfy
The intersection of the spheres is the set of points satisfying both equations. Subtracting the equations gives
In the singular case a=0{\displaystyle a=0}, the spheres are concentric. There are two possibilities: if R=r{\displaystyle R=r}, the spheres coincide, and the intersection is the entire sphere; if R≠r{\displaystyle R\not =r}, the spheres are disjoint and the intersection is empty.
When a is nonzero, the intersection lies in a vertical plane with this x-coordinate, which may intersect both of the spheres, be tangent to both spheres, or be external to both spheres.
The result follows from the previous proof for sphere-plane intersections.
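The reduction above can be sketched numerically; subtracting the two sphere equations gives the plane x = (R² − r² + a²)/(2a), and the circle radius follows from ρ² = R² − x². The helper name is hypothetical.

```python
import math

def sphere_sphere_circle(R, r, a):
    """Intersection circle of x^2 + y^2 + z^2 = R^2 with the sphere of
    radius r centered at (a, 0, 0), a > 0.

    Returns (x, rho): the x-coordinate of the plane containing the
    circle and the circle radius, or None if the spheres do not meet."""
    x = (R * R - r * r + a * a) / (2 * a)
    rho2 = R * R - x * x
    if rho2 < 0:
        return None  # the plane is external to the first sphere
    return x, math.sqrt(rho2)
```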
https://en.wikipedia.org/wiki/Intersection_(Euclidean_geometry)#Two_line_segments
Linear least squares (LLS) is the least squares approximation of linear functions to data.
It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.
Consider the linear equation
where A∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}} and b∈Rm{\displaystyle b\in \mathbb {R} ^{m}} are given and x∈Rn{\displaystyle x\in \mathbb {R} ^{n}} is the variable to be computed. When m>n,{\displaystyle m>n,} it is generally the case that (1) has no solution.
For example, there is no value of x{\displaystyle x} that satisfies [100111]x=[110],{\displaystyle {\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}x={\begin{bmatrix}1\\1\\0\end{bmatrix}},} because the first two rows require that x=(1,1),{\displaystyle x=(1,1),} but then the third row is not satisfied.
Thus, for m>n,{\displaystyle m>n,} the goal of solving (1) exactly is typically replaced by finding the value of x{\displaystyle x} that minimizes some error.
There are many ways that the error can be defined, but one of the most common is to define it as ‖Ax−b‖2.{\displaystyle \|Ax-b\|^{2}.} This produces a minimization problem, called a least squares problem
The solution to the least squares problem (1) is computed by solving thenormal equation[1]
where A⊤{\displaystyle A^{\top }} denotes the transpose of A{\displaystyle A}.
Continuing the example above, with A=[100111]andb=[110],{\displaystyle A={\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}\quad {\text{and}}\quad b={\begin{bmatrix}1\\1\\0\end{bmatrix}},} we find A⊤A=[101011][100111]=[2112]{\displaystyle A^{\top }A={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}={\begin{bmatrix}2&1\\1&2\end{bmatrix}}} and A⊤b=[101011][110]=[11].{\displaystyle A^{\top }b={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1\\1\\0\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}.} Solving the normal equation gives x=(1/3,1/3).{\displaystyle x=(1/3,1/3).}
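This normal-equation computation can be sketched for the two-column case; `lstsq_2x2` is a hypothetical helper that forms A⊤A and A⊤b explicitly and solves the resulting 2×2 system by Cramer's rule.

```python
def lstsq_2x2(A, b):
    """Solve the normal equations (A^T A) x = A^T b for a matrix A
    with two columns, using Cramer's rule on the 2x2 system."""
    m = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    x0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
    x1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
    return x0, x1

A = [[1, 0], [0, 1], [1, 1]]
b = [1, 1, 0]
print(lstsq_2x2(A, b))  # approximately (0.333, 0.333), i.e. x = (1/3, 1/3)
```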
The three main linear least squares formulations are:
Other formulations include:
In OLS (i.e., assuming unweighted observations), the optimal value of the objective function is found by substituting the optimal expression for the coefficient vector: S=yT(I−H)T(I−H)y=yT(I−H)y,{\displaystyle S=\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} =\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} ,} where H=X(XTX)−1XT{\displaystyle \mathbf {H} =\mathbf {X} (\mathbf {X} ^{\mathsf {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\mathsf {T}}}, the latter equality holding since (I−H){\displaystyle (\mathbf {I} -\mathbf {H} )} is symmetric and idempotent. It can be shown from this[9] that under an appropriate assignment of weights the expected value of S is m−n{\textstyle m-n}. If instead unit weights are assumed, the expected value of S is (m−n)σ2{\displaystyle (m-n)\sigma ^{2}}, where σ2{\displaystyle \sigma ^{2}} is the variance of each observation.
If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared (χ2{\displaystyle \chi ^{2}}) distribution with m−n degrees of freedom. Some illustrative percentile values of χ2{\displaystyle \chi ^{2}} are given in the following table.[10]
These values can be used for a statistical criterion as to the goodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.
For WLS, the ordinary objective function above is replaced by a weighted sum of squared residuals.
In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system.
Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations Ax = b, where b is not an element of the column space of the matrix A. The approximate solution is realized as an exact solution to Ax = b', where b' is the projection of b onto the column space of A. The best approximation is then that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called linear least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator.
In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic.
If the experimental errors, ε{\displaystyle \varepsilon }, are uncorrelated, have a mean of zero and a constant variance, σ{\displaystyle \sigma }, the Gauss–Markov theorem states that the least-squares estimator, β^{\displaystyle {\hat {\boldsymbol {\beta }}}}, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased.
For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss–Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also a maximum likelihood estimator.[11]
These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.
An assumption underlying the treatment given above is that the independent variable, x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares or more generally errors-in-variables models, or rigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[12][13]
In some cases the (weighted) normal equations matrix XTX is ill-conditioned. When fitting polynomials the normal equations matrix is a Vandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.[citation needed] In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate.[citation needed] Various regularization techniques can be applied in such cases, the most common of which is called ridge regression. If further information about the parameters is known, for example, a range of possible values of β^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, then various techniques can be used to increase the stability of the solution. For example, see constrained least squares.
Another drawback of the least squares estimator is the fact that the norm of the residuals, ‖y−Xβ^‖{\displaystyle \|\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}}\|}, is minimized, whereas in some cases one is truly interested in obtaining small error in the parameter β^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, e.g., a small value of ‖β−β^‖{\displaystyle \|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|}.[citation needed] However, since the true parameter β{\displaystyle {\boldsymbol {\beta }}} is necessarily unknown, this quantity cannot be directly minimized. If a prior probability on β^{\displaystyle {\hat {\boldsymbol {\beta }}}} is known, then a Bayes estimator can be used to minimize the mean squared error, E{‖β−β^‖2}{\displaystyle E\left\{\|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|^{2}\right\}}. The least squares method is often applied when no prior is known. When several parameters are being estimated jointly, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James–Stein estimator. This is an example of more general shrinkage estimators that have been applied to regression problems.
The primary application of linear least squares is in data fitting. Given a set of m data points y1,y2,…,ym,{\displaystyle y_{1},y_{2},\dots ,y_{m},} consisting of experimentally measured values taken at m values x1,x2,…,xm{\displaystyle x_{1},x_{2},\dots ,x_{m}} of an independent variable (xi{\displaystyle x_{i}} may be scalar or vector quantities), and given a model function y=f(x,β),{\displaystyle y=f(x,{\boldsymbol {\beta }}),} with β=(β1,β2,…,βn),{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\beta _{2},\dots ,\beta _{n}),} it is desired to find the parameters βj{\displaystyle \beta _{j}} such that the model function "best" fits the data. In linear least squares, linearity is meant to be with respect to parameters βj,{\displaystyle \beta _{j},} so f(x,β)=∑j=1nβjφj(x).{\displaystyle f(x,{\boldsymbol {\beta }})=\sum _{j=1}^{n}\beta _{j}\varphi _{j}(x).}
Here, the functions φj{\displaystyle \varphi _{j}} may be nonlinear with respect to the variable x.
Ideally, the model function fits the data exactly, so yi=f(xi,β){\displaystyle y_{i}=f(x_{i},{\boldsymbol {\beta }})} for all i=1,2,…,m.{\displaystyle i=1,2,\dots ,m.} This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of the residuals ri(β)=yi−f(xi,β),(i=1,2,…,m){\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f(x_{i},{\boldsymbol {\beta }}),\ (i=1,2,\dots ,m)} so as to minimize the function S(β)=∑i=1mri2(β).{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}^{2}({\boldsymbol {\beta }}).}
After substituting for ri{\displaystyle r_{i}} and then for f{\displaystyle f}, this minimization problem becomes the quadratic minimization problem above with Xij=φj(xi),{\displaystyle X_{ij}=\varphi _{j}(x_{i}),} and the best fit can be found by solving the normal equations.
A hypothetical researcher conducts an experiment and obtains four (x,y){\displaystyle (x,y)} data points: (1,6),{\displaystyle (1,6),} (2,5),{\displaystyle (2,5),} (3,7),{\displaystyle (3,7),} and (4,10){\displaystyle (4,10)} (shown in red in the diagram on the right). Because of exploratory data analysis or prior knowledge of the subject matter, the researcher suspects that the y{\displaystyle y}-values depend on the x{\displaystyle x}-values systematically. The x{\displaystyle x}-values are assumed to be exact, but the y{\displaystyle y}-values contain some uncertainty or "noise", because of the phenomenon being studied, imperfections in the measurements, etc.
One of the simplest possible relationships betweenx{\displaystyle x}andy{\displaystyle y}is a liney=β1+β2x{\displaystyle y=\beta _{1}+\beta _{2}x}. The interceptβ1{\displaystyle \beta _{1}}and the slopeβ2{\displaystyle \beta _{2}}are initially unknown. The researcher would like to find values ofβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}that cause the line to pass through the four data points. In other words, the researcher would like to solve the system of linear equationsβ1+1β2=6,β1+2β2=5,β1+3β2=7,β1+4β2=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}&&\;=\;&&10.&\\\end{alignedat}}}With four equations in two unknowns, this system is overdetermined. There is no exact solution. To consider approximate solutions, one introducesresidualsr1{\displaystyle r_{1}},r2{\displaystyle r_{2}},r3{\displaystyle r_{3}},r4{\displaystyle r_{4}}into the equations:β1+1β2+r1=6,β1+2β2+r2=5,β1+3β2+r3=7,β1+4β2+r4=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}+r_{1}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}+r_{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}+r_{3}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}+r_{4}&&\;=\;&&10.&\\\end{alignedat}}}Thei{\displaystyle i}th residualri{\displaystyle r_{i}}is the misfit between thei{\displaystyle i}th observationyi{\displaystyle y_{i}}and thei{\displaystyle i}th predictionβ1+β2xi{\displaystyle \beta _{1}+\beta _{2}x_{i}}:r1=6−(β1+1β2),r2=5−(β1+2β2),r3=7−(β1+3β2),r4=10−(β1+4β2).{\displaystyle {\begin{alignedat}{3}r_{1}&&\;=\;&&6-(\beta _{1}+1\beta _{2}),&\\r_{2}&&\;=\;&&5-(\beta _{1}+2\beta _{2}),&\\r_{3}&&\;=\;&&7-(\beta _{1}+3\beta _{2}),&\\r_{4}&&\;=\;&&10-(\beta _{1}+4\beta _{2}).&\\\end{alignedat}}}Among all approximate solutions, the researcher would like to find the one that is "best" in some sense.
Inleast squares, one focuses on the sumS{\displaystyle S}of the squared residuals:S(β1,β2)=r12+r22+r32+r42=[6−(β1+1β2)]2+[5−(β1+2β2)]2+[7−(β1+3β2)]2+[10−(β1+4β2)]2=4β12+30β22+20β1β2−56β1−154β2+210.{\displaystyle {\begin{aligned}S(\beta _{1},\beta _{2})&=r_{1}^{2}+r_{2}^{2}+r_{3}^{2}+r_{4}^{2}\\[6pt]&=[6-(\beta _{1}+1\beta _{2})]^{2}+[5-(\beta _{1}+2\beta _{2})]^{2}+[7-(\beta _{1}+3\beta _{2})]^{2}+[10-(\beta _{1}+4\beta _{2})]^{2}\\[6pt]&=4\beta _{1}^{2}+30\beta _{2}^{2}+20\beta _{1}\beta _{2}-56\beta _{1}-154\beta _{2}+210.\\[6pt]\end{aligned}}}The best solution is defined to be the one thatminimizesS{\displaystyle S}with respect toβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}. The minimum can be calculated by setting thepartial derivativesofS{\displaystyle S}to zero:0=∂S∂β1=8β1+20β2−56,{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=8\beta _{1}+20\beta _{2}-56,}0=∂S∂β2=20β1+60β2−154.{\displaystyle 0={\frac {\partial S}{\partial \beta _{2}}}=20\beta _{1}+60\beta _{2}-154.}Thesenormal equationsconstitute a system of two linear equations in two unknowns. The solution isβ1=3.5{\displaystyle \beta _{1}=3.5}andβ2=1.4{\displaystyle \beta _{2}=1.4}, and the best-fit line is thereforey=3.5+1.4x{\displaystyle y=3.5+1.4x}.
The residuals are1.1,{\displaystyle 1.1,}−1.3,{\displaystyle -1.3,}−0.7,{\displaystyle -0.7,}and0.9{\displaystyle 0.9}(see the diagram on the right). The minimum value of the sum of squared residuals isS(3.5,1.4)=1.12+(−1.3)2+(−0.7)2+0.92=4.2.{\displaystyle S(3.5,1.4)=1.1^{2}+(-1.3)^{2}+(-0.7)^{2}+0.9^{2}=4.2.}
This calculation can be expressed in matrix notation as follows. The original system of equations isy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, wherey=[65710],X=[11121314],β=[β1β2].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{cc}1&1\\1&2\\1&3\\1&4\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\\\beta _{2}\end{array}}\right].}Intuitively,y=Xβ⇒X⊤y=X⊤Xβ⇒β=(X⊤X)−1X⊤y=[3.51.4].{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} ^{\top }\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}3.5\\1.4\end{array}}\right].}More rigorously, ifX⊤X{\displaystyle \mathbf {X} ^{\top }\mathbf {X} }is invertible, then the matrixX(X⊤X)−1X⊤{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }}represents orthogonal projection onto the column space ofX{\displaystyle \mathbf {X} }. Therefore, among all vectors of the formXβ{\displaystyle \mathbf {X} \mathbf {\beta } }, the one closest toy{\displaystyle \mathbf {y} }isX(X⊤X)−1X⊤y{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }. SettingX(X⊤X)−1X⊤y=Xβ,{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} \mathbf {\beta } ,}it is evident thatβ=(X⊤X)−1X⊤y{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }is a solution.
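The straight-line fit above can be sketched in a few lines; `fit_line` is a hypothetical helper that solves the 2×2 normal equations for the design matrix with columns 1 and x directly.

```python
def fit_line(xs, ys):
    """Least-squares line y = b1 + b2*x via the normal equations
    for the design matrix X = [[1, x_i]]."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = m * sxx - sx * sx
    b1 = (sxx * sy - sx * sxy) / det  # intercept
    b2 = (m * sxy - sx * sy) / det    # slope
    return b1, b2

# The four data points from the example:
print(fit_line([1, 2, 3, 4], [6, 5, 7, 10]))  # (3.5, 1.4)
```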
Suppose that the hypothetical researcher wishes to fit a parabola of the formy=β1x2{\displaystyle y=\beta _{1}x^{2}}. Importantly, this model is still linear in the unknown parameters (now justβ1{\displaystyle \beta _{1}}), so linear least squares still applies. The system of equations incorporating residuals is6=β1(1)2+r15=β1(2)2+r27=β1(3)2+r310=β1(4)2+r4{\displaystyle {\begin{alignedat}{2}6&&\;=\beta _{1}(1)^{2}+r_{1}\\5&&\;=\beta _{1}(2)^{2}+r_{2}\\7&&\;=\beta _{1}(3)^{2}+r_{3}\\10&&\;=\beta _{1}(4)^{2}+r_{4}\\\end{alignedat}}}
The sum of squared residuals isS(β1)=(6−β1)2+(5−4β1)2+(7−9β1)2+(10−16β1)2.{\displaystyle S(\beta _{1})=(6-\beta _{1})^{2}+(5-4\beta _{1})^{2}+(7-9\beta _{1})^{2}+(10-16\beta _{1})^{2}.}There is just one partial derivative to set to 0:0=∂S∂β1=708β1−498.{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=708\beta _{1}-498.}The solution isβ1=0.703{\displaystyle \beta _{1}=0.703}, and the fit model isy=0.703x2{\displaystyle y=0.703x^{2}}.
In matrix notation, the equations without residuals are againy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, where nowy=[65710],X=[14916],β=[β1].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{c}1\\4\\9\\16\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\end{array}}\right].}By the same logic as above, the solution isβ=(X⊤X)−1X⊤y=[0.703].{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}0.703\end{array}}\right].}
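For a single basis function the normal equation collapses to one scalar quotient, which can be sketched as follows; `fit_scaled` is a hypothetical helper name.

```python
def fit_scaled(phis, ys):
    """One-parameter least squares y ~ b1 * phi(x): the normal equation
    (X^T X) b1 = X^T y reduces to b1 = sum(phi_i*y_i) / sum(phi_i**2)."""
    return sum(p * y for p, y in zip(phis, ys)) / sum(p * p for p in phis)

# phi(x) = x**2 at x = 1..4, as in the parabola example:
b1 = fit_scaled([1, 4, 9, 16], [6, 5, 7, 10])
print(round(b1, 3))  # 0.703
```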
The figure shows an extension to fitting the three parameter parabola using a design matrixX{\displaystyle \mathbf {X} }with three columns (one forx0{\displaystyle x^{0}},x1{\displaystyle x^{1}}, andx2{\displaystyle x^{2}}), and one row for each of the red data points.
More generally, one can have n{\displaystyle n} regressors xj{\displaystyle x_{j}}, and a linear model y=β0+∑j=1nβjxj.{\displaystyle y=\beta _{0}+\sum _{j=1}^{n}\beta _{j}x_{j}.}
https://en.wikipedia.org/wiki/Linear_least_squares
Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are breakpoints.
Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression.
Segmented linear regression with two segments separated by a breakpoint can be useful to quantify an abrupt change of the response function (Yr) of a varying influential factor (x). The breakpoint can be interpreted as a critical, safe, or threshold value beyond or below which (un)desired effects occur. The breakpoint can be important in decision making.[1]
The figures illustrate some of the results and regression types obtainable.
A segmented regression analysis is based on the presence of a set of (y, x) data, in which y is the dependent variable and x the independent variable.
The least squares method applied separately to each segment, by which the two regression lines are made to fit the data set as closely as possible while minimizing the sum of squares of the differences (SSD) between observed (y) and calculated (Yr) values of the dependent variable, results in the following two equations:
where:
The data may show many types of trends,[2] see the figures.
The method also yields two correlation coefficients (R):
and
where:
and
In the determination of the most suitable trend, statistical tests must be performed to ensure that this trend is reliable (significant).
When no significant breakpoint can be detected, one must fall back on a regression without breakpoint.
For the blue figure at the right, which gives the relation between yield of mustard (Yr = Ym, t/ha) and soil salinity (x = Ss, expressed as electric conductivity of the soil solution EC in dS/m), it is found that:[3]
BP = 4.93, A1= 0, K1= 1.74, A2= −0.129, K2= 2.38, R12= 0.0035 (insignificant), R22= 0.395 (significant) and:
indicating that soil salinities < 4.93 dS/m are safe and soil salinities > 4.93 dS/m reduce the yield at a rate of 0.129 t/ha per unit increase of soil salinity.
The figure also shows confidence intervals and uncertainty as elaborated hereunder.
The following statistical tests are used to determine the type of trend:
In addition, use is made of the correlation coefficient of all data (Ra), the coefficient of determination or coefficient of explanation, confidence intervals of the regression functions, and ANOVA analysis.[5]
The coefficient of determination for all data (Cd), that is to be maximized under the conditions set by the significance tests, is found from:
where Yr is the expected (predicted) value ofyaccording to the former regression equations and Ya is the average of allyvalues.
The Cd coefficient ranges from 0 (no explanation at all) to 1 (full explanation, perfect match). In a pure, unsegmented, linear regression, the values of Cd and Ra² are equal. In a segmented regression, Cd needs to be significantly larger than Ra² to justify the segmentation.
The optimal value of the breakpoint may be found as the value at which the Cd coefficient is maximal.
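The breakpoint search can be sketched as a brute-force scan: for each candidate breakpoint, fit the two segments by ordinary least squares and keep the breakpoint with the highest coefficient of determination. This is an illustrative sketch, not the article's own code; it assumes Cd is computed as 1 − SSD/SStot, the usual definition, and uses synthetic data.

```python
# Sketch: find the breakpoint that maximizes the coefficient of
# determination Cd of a two-segment least-squares fit.
# Assumption: Cd = 1 - SSD/SStot (not reproduced verbatim in the text).

def ols_fit(xs, ys):
    """Ordinary least squares slope and intercept for one segment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return slope, my - slope * mx

def cd_for_breakpoint(data, bp):
    """Cd of a two-segment fit split at candidate breakpoint bp."""
    left = [(x, y) for x, y in data if x <= bp]
    right = [(x, y) for x, y in data if x > bp]
    if len(left) < 2 or len(right) < 2:
        return float("-inf")        # not enough points to fit a segment
    ya = sum(y for _, y in data) / len(data)
    sstot = sum((y - ya) ** 2 for _, y in data)
    ssd = 0.0
    for seg in (left, right):
        a, k = ols_fit([x for x, _ in seg], [y for _, y in seg])
        ssd += sum((y - (a * x + k)) ** 2 for x, y in seg)
    return 1.0 - ssd / sstot

def best_breakpoint(data):
    """Scan the observed x values and return (Cd, breakpoint)."""
    xs = sorted(set(x for x, _ in data))
    return max((cd_for_breakpoint(data, bp), bp) for bp in xs)

# Synthetic data: flat response up to x = 5, then a decline of 0.3 per unit
data = [(x, 2.0 if x <= 5 else 2.0 - 0.3 * (x - 5)) for x in range(11)]
cd, bp = best_breakpoint(data)
```

On this synthetic data the scan recovers the built-in breakpoint at x = 5 with a near-perfect fit.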
Segmented regression is often used to detect the range over which an explanatory variable (X) has no effect on the dependent variable (Y), while beyond that range there is a clear response, be it positive or negative.
The range of no effect may be found at the initial part of the X domain or, conversely, at its last part. For the "no effect" analysis, application of the least squares method for segmented regression[6] may not be the most appropriate technique, because the aim is rather to find the longest stretch over which the Y-X relation can be considered to possess zero slope, while beyond that stretch the slope is significantly different from zero; knowledge about the best value of this slope is not material. The method to find the no-effect range is progressive partial regression[7] over the range, extending the range in small steps until the regression coefficient becomes significantly different from zero.
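The progressive partial regression idea can be sketched as follows: extend the fitted range point by point and stop once the slope becomes significant. This is an illustrative sketch on synthetic data; the |t| > 2 cutoff is a rough stand-in for the 5% critical value of the t-distribution, which a real analysis would use.

```python
# Sketch of progressive partial regression for the "no effect" range.
import math

def slope_and_t(points):
    """Slope of a simple linear regression and its t-statistic."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b = sum((x - mx) * (y - my) for x, y in points) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in points)
    se = math.sqrt(sse / ((n - 2) * sxx)) if n > 2 else 0.0
    if se == 0.0:                      # perfect fit: t is 0 or infinite
        return b, 0.0 if b == 0.0 else float("inf")
    return b, b / se

def no_effect_range(points, t_crit=2.0):
    """Extend the range stepwise; stop when the slope turns significant."""
    pts = sorted(points)
    end = 2                            # start with the first three points
    while end < len(pts):
        _, t = slope_and_t(pts[: end + 1])
        if abs(t) > t_crit:
            break
        end += 1
    return pts[end - 1][0]             # last x before significance

# Synthetic data: flat up to x = 6, then declining by 0.2 per unit
points = [(x, 1.0) for x in range(7)] + [(7, 0.8), (8, 0.6), (9, 0.4), (10, 0.2)]
```

On this data the slope stays insignificant up to x = 7 and turns significant when x = 8 is included, so the detected no-effect range ends later than an abrupt least-squares breakpoint would, mirroring the comparison in the text.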
In the next figure the break point is found at X=7.9 while for the same data (see blue figure above for mustard yield), the least squares method yields a break point only at X=4.9. The latter value is lower, but the fit of the data beyond the break point is better. Hence, it will depend on the purpose of the analysis which method needs to be employed.
https://en.wikipedia.org/wiki/Linear_segmented_regression
Least-squares support-vector machines (LS-SVM), used in statistics and statistical modeling, are least-squares versions of support-vector machines (SVM), a set of related supervised learning methods that analyze data and recognize patterns, and which are used for classification and regression analysis. In this version one finds the solution by solving a set of linear equations instead of the convex quadratic programming (QP) problem of classical SVMs. Least-squares SVM classifiers were proposed by Johan Suykens and Joos Vandewalle.[1] LS-SVMs are a class of kernel-based learning methods.
Given a training set {xi,yi}i=1N{\displaystyle \{x_{i},y_{i}\}_{i=1}^{N}} with input data xi∈Rn{\displaystyle x_{i}\in \mathbb {R} ^{n}} and corresponding binary class labels yi∈{−1,+1}{\displaystyle y_{i}\in \{-1,+1\}}, the SVM[2] classifier, according to Vapnik's original formulation, satisfies the following conditions:
which is equivalent to
where ϕ(x){\displaystyle \phi (x)} is the nonlinear map from the original space to a high- or infinite-dimensional space.
In case such a separating hyperplane does not exist, we introduce so-called slack variables ξi{\displaystyle \xi _{i}} such that
According to the structural risk minimization principle, the risk bound is minimized by the following minimization problem:
To solve this problem, we construct the Lagrangian function:
where αi≥0,βi≥0(i=1,…,N){\displaystyle \alpha _{i}\geq 0,\ \beta _{i}\geq 0\ (i=1,\ldots ,N)} are the Lagrange multipliers. The optimal point will be at the saddle point of the Lagrangian function, and then we obtain
By substituting w{\displaystyle w} with its expression in the Lagrangian formed from the appropriate objective and constraints, we get the following quadratic programming problem:
where K(xi,xj)=⟨ϕ(xi),ϕ(xj)⟩{\displaystyle K(x_{i},x_{j})=\left\langle \phi (x_{i}),\phi (x_{j})\right\rangle } is called the kernel function. Solving this QP problem subject to the constraints in (1), we get the hyperplane in the high-dimensional space and hence the classifier in the original space.
The least-squares version of the SVM classifier is obtained by reformulating the minimization problem as
subject to the equality constraints
The least-squares SVM (LS-SVM) classifier formulation above implicitly corresponds to a regression interpretation with binary targets yi=±1{\displaystyle y_{i}=\pm 1}.
Usingyi2=1{\displaystyle y_{i}^{2}=1}, we have
with ei=yi−(wTϕ(xi)+b).{\displaystyle e_{i}=y_{i}-(w^{T}\phi (x_{i})+b).} Notice that this error would also make sense for least-squares data fitting, so that the same end result holds for the regression case.
Hence the LS-SVM classifier formulation is equivalent to
withEW=12wTw{\displaystyle E_{W}={\frac {1}{2}}w^{T}w}andED=12∑i=1Nei2=12∑i=1N(yi−(wTϕ(xi)+b))2.{\displaystyle E_{D}={\frac {1}{2}}\sum \limits _{i=1}^{N}e_{i}^{2}={\frac {1}{2}}\sum \limits _{i=1}^{N}\left(y_{i}-(w^{T}\phi (x_{i})+b)\right)^{2}.}
Both μ{\displaystyle \mu } and ζ{\displaystyle \zeta } should be considered as hyperparameters to tune the amount of regularization versus the sum squared error. The solution depends only on the ratio γ=ζ/μ{\displaystyle \gamma =\zeta /\mu }, therefore the original formulation uses only γ{\displaystyle \gamma } as a tuning parameter. We use both μ{\displaystyle \mu } and ζ{\displaystyle \zeta } as parameters in order to provide a Bayesian interpretation of LS-SVM.
The solution of the LS-SVM regressor will be obtained after we construct the Lagrangian function:
where αi∈R{\displaystyle \alpha _{i}\in \mathbb {R} } are the Lagrange multipliers. The conditions for optimality are
Elimination of w{\displaystyle w} and e{\displaystyle e} will yield a linear system instead of a quadratic programming problem:
withY=[y1,…,yN]T{\displaystyle Y=[y_{1},\ldots ,y_{N}]^{T}},1N=[1,…,1]T{\displaystyle 1_{N}=[1,\ldots ,1]^{T}}andα=[α1,…,αN]T{\displaystyle \alpha =[\alpha _{1},\ldots ,\alpha _{N}]^{T}}. Here,IN{\displaystyle I_{N}}is anN×N{\displaystyle N\times N}identity matrix, andΩ∈RN×N{\displaystyle \Omega \in \mathbb {R} ^{N\times N}}is the kernel matrix defined byΩij=ϕ(xi)Tϕ(xj)=K(xi,xj){\displaystyle \Omega _{ij}=\phi (x_{i})^{T}\phi (x_{j})=K(x_{i},x_{j})}.
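The linear system above can be solved directly with a generic linear solver. Below is a minimal sketch with numpy; the toy data, the γ and σ values, and the choice of an RBF kernel are illustrative assumptions, not part of the original formulation.

```python
# Minimal LS-SVM sketch: solve [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; Y]
# with an RBF kernel (illustrative data and hyperparameters).
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Build and solve the (N+1) x (N+1) KKT linear system."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                                   # 1_N^T row
    A[1:, 0] = 1.0                                   # 1_N column
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                           # b, alpha

def lssvm_decision(Xtrain, alpha, b, Xtest, sigma=1.0):
    """Decision values sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(Xtest, Xtrain, sigma) @ alpha + b

# Toy data: two well-separated clusters with labels -1 and +1
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
preds = np.sign(lssvm_decision(X, alpha, b, X))
```

For classification, the sign of the decision value is taken; on this separable toy set the training points are reproduced with the correct labels.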
For the kernel function K(·, ·) one typically has the following choices:
where d{\displaystyle d}, c{\displaystyle c}, σ{\displaystyle \sigma }, k{\displaystyle k} and θ{\displaystyle \theta } are constants. Notice that the Mercer condition holds for all c,σ∈R+{\displaystyle c,\sigma \in \mathbb {R} ^{+}} and d∈N{\displaystyle d\in N} values in the polynomial and RBF case, but not for all possible choices of k{\displaystyle k} and θ{\displaystyle \theta } in the MLP case. The scale parameters c{\displaystyle c}, σ{\displaystyle \sigma } and k{\displaystyle k} determine the scaling of the inputs in the polynomial, RBF and MLP kernel functions. This scaling is related to the bandwidth of the kernel in statistics, where it is shown that the bandwidth is an important parameter for the generalization behavior of a kernel method.
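For scalar inputs, the kernels discussed here might be written out as below. This is a hedged sketch: the 2σ² convention for the RBF kernel and the tanh form of the MLP kernel are common choices, and, as noted above, the MLP kernel satisfies Mercer's condition only for some (k, θ) pairs.

```python
# The kernel choices discussed above, for scalar inputs; d, c, sigma, k,
# theta are user-chosen constants.
import math

def polynomial_kernel(x, z, d=2, c=1.0):
    """Polynomial kernel (x*z + c)^d."""
    return (x * z + c) ** d

def rbf_kernel(x, z, sigma=1.0):
    """RBF kernel exp(-(x - z)^2 / (2 sigma^2)); one common convention."""
    return math.exp(-((x - z) ** 2) / (2.0 * sigma ** 2))

def mlp_kernel(x, z, k=1.0, theta=0.0):
    """MLP (tanh) kernel tanh(k*x*z + theta); Mercer only for some (k, theta)."""
    return math.tanh(k * x * z + theta)
```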
A Bayesian interpretation of the SVM has been proposed by Smola et al. They showed that the use of different kernels in SVM can be regarded as defining different prior probability distributions on the functional space, as P[f]∝exp(−β‖P^f‖2){\displaystyle P[f]\propto \exp \left({-\beta \left\|{{\hat {P}}f}\right\|^{2}}\right)}. Here β>0{\displaystyle \beta >0} is a constant and P^{\displaystyle {\hat {P}}} is the regularization operator corresponding to the selected kernel.
A general Bayesian evidence framework was developed by MacKay,[3][4][5] who applied it to the problems of regression, feed-forward neural networks, and classification networks. Given a data set D{\displaystyle D}, a model M{\displaystyle \mathbb {M} } with parameter vector w{\displaystyle w}, and a so-called hyperparameter or regularization parameter λ{\displaystyle \lambda }, Bayesian inference is constructed with 3 levels of inference:
The Bayesian evidence framework is thus a unified theory for learning the model and for model selection.
Kwok used the Bayesian evidence framework to interpret the formulation of SVM and model selection, and also applied it to support vector regression.
Now, given the data points {xi,yi}i=1N{\displaystyle \{x_{i},y_{i}\}_{i=1}^{N}} and the hyperparameters μ{\displaystyle \mu } and ζ{\displaystyle \zeta } of the model M{\displaystyle \mathbb {M} }, the model parameters w{\displaystyle w} and b{\displaystyle b} are estimated by maximizing the posterior p(w,b|D,logμ,logζ,M){\displaystyle p(w,b|D,\log \mu ,\log \zeta ,\mathbb {M} )}. Applying Bayes’ rule, we obtain
where p(D|logμ,logζ,M){\displaystyle p(D|\log \mu ,\log \zeta ,\mathbb {M} )} is a normalizing constant such that the integral over all possible w{\displaystyle w} and b{\displaystyle b} is equal to 1.
We assume w{\displaystyle w} and b{\displaystyle b} are independent of the hyperparameter ζ{\displaystyle \zeta }, and are conditionally independent, i.e., we assume
When σb→∞{\displaystyle \sigma _{b}\to \infty }, the distribution of b{\displaystyle b} will approximate a uniform distribution. Furthermore, assuming w{\displaystyle w} and b{\displaystyle b} are Gaussian distributed, we obtain the a priori distribution of w{\displaystyle w} and b{\displaystyle b} with σb→∞{\displaystyle \sigma _{b}\to \infty } to be
Here nf{\displaystyle n_{f}} is the dimensionality of the feature space, the same as the dimensionality of w{\displaystyle w}.
The probability p(D|w,b,logμ,logζ,M){\displaystyle p(D|w,b,\log \mu ,\log \zeta ,\mathbb {M} )} is assumed to depend only on w,b,ζ{\displaystyle w,b,\zeta } and M{\displaystyle \mathbb {M} }. We assume that the data points are independently and identically distributed (i.i.d.), so that:
In order to obtain the least square cost function, it is assumed that the probability of a data point is proportional to:
A Gaussian distribution is taken for the errors ei=yi−(wTϕ(xi)+b){\displaystyle e_{i}=y_{i}-(w^{T}\phi (x_{i})+b)} as:
It is assumed that w{\displaystyle w} and b{\displaystyle b} are determined in such a way that the class centers m^−{\displaystyle {\hat {m}}_{-}} and m^+{\displaystyle {\hat {m}}_{+}} are mapped onto the targets −1 and +1, respectively. The projections wTϕ(x)+b{\displaystyle w^{T}\phi (x)+b} of the class elements ϕ(x){\displaystyle \phi (x)} follow a multivariate Gaussian distribution with variance 1/ζ{\displaystyle 1/\zeta }.
Combining the preceding expressions, and neglecting all constants, Bayes’ rule becomes
The maximum posterior density estimates wMP{\displaystyle w_{MP}} and bMP{\displaystyle b_{MP}} are then obtained by minimizing the negative logarithm of (26), so we arrive at (10).
https://en.wikipedia.org/wiki/Least_squares_support_vector_machine
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables, conditional on the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.
Let n, m, and p be positive integers. Let X be a subset of Rn (usually a box-constrained one), let f, gi, and hj be real-valued functions on X for each i in {1, ..., m} and each j in {1, ..., p}, with at least one of f, gi, and hj being nonlinear.
A nonlinear programming problem is an optimization problem of the form
Depending on the constraint set, there are several possibilities:
Most realistic applications feature feasible problems, with infeasible or unbounded problems seen as a failure of an underlying model. In some cases, infeasible problems are handled by minimizing a sum of feasibility violations.
Some special cases of nonlinear programming have specialized solution methods:
A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size, the cost functions may have discontinuities in addition to smooth changes.
In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems are also nonlinear. Typically, one has a theoretical model of the system under study with variable parameters in it and a model of the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.
Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. If some of the functions are non-differentiable, subdifferential versions of the KKT conditions are available.[1]
Under convexity, the KKT conditions are sufficient for a global optimum. Without convexity, these conditions are sufficient only for a local optimum. In some cases, the number of local optima is small, and one can find all of them analytically and select the one for which the objective value is smallest.[2]
In most realistic cases, it is very hard to solve the KKT conditions analytically, and so the problems are solved using numerical methods. These methods are iterative: they start with an initial point, and then proceed to points that are supposed to be closer to the optimal point, using some update rule. There are three kinds of update rules:[2]: 5.1.2
Third-order routines (and higher) are theoretically possible, but not used in practice, due to the higher computational load and little theoretical benefit.
Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex (minimization problem) or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance of the best point found; such points are called ε-optimal. Terminating at ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimation.
There exist numerous nonlinear programming solvers, including open source:
A simple problem (shown in the diagram) can be defined by the constraintsx1≥0x2≥0x12+x22≥1x12+x22≤2{\displaystyle {\begin{aligned}x_{1}&\geq 0\\x_{2}&\geq 0\\x_{1}^{2}+x_{2}^{2}&\geq 1\\x_{1}^{2}+x_{2}^{2}&\leq 2\end{aligned}}}with an objective function to be maximizedf(x)=x1+x2{\displaystyle f(\mathbf {x} )=x_{1}+x_{2}}wherex= (x1,x2).
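The first example can be checked numerically. Since the objective x1 + x2 grows with the radius, the maximum must lie on the outer circle x1² + x2² = 2 with x1, x2 ≥ 0, so a one-dimensional search over the angle suffices. This is an illustrative check, not a general NLP solver.

```python
# Numerical check of the first example: maximize x1 + x2 on the quarter
# annulus 1 <= x1^2 + x2^2 <= 2, x1 >= 0, x2 >= 0.
import math

def objective(x1, x2):
    return x1 + x2

# The maximum lies on the outer boundary, parameterized by
# x = sqrt(2) * (cos t, sin t) with t in [0, pi/2].
r = math.sqrt(2)
candidates = []
for i in range(1001):
    t = i * (math.pi / 2) / 1000
    x1, x2 = r * math.cos(t), r * math.sin(t)
    candidates.append((objective(x1, x2), x1, x2))

f_max, x1_opt, x2_opt = max(candidates)
# maximum value 2 at (x1, x2) = (1, 1), i.e. t = pi/4
```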
Another simple problem (see diagram) can be defined by the constraintsx12−x22+x32≤2x12+x22+x32≤10{\displaystyle {\begin{aligned}x_{1}^{2}-x_{2}^{2}+x_{3}^{2}&\leq 2\\x_{1}^{2}+x_{2}^{2}+x_{3}^{2}&\leq 10\end{aligned}}}with an objective function to be maximizedf(x)=x1x2+x2x3{\displaystyle f(\mathbf {x} )=x_{1}x_{2}+x_{2}x_{3}}wherex= (x1,x2,x3).
https://en.wikipedia.org/wiki/Nonlinear_programming
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives.[1][2] It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering[3] to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.[4][5]
In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.[6]
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem can be represented in the following way:
Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework.
Since the following is valid:
it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too.
Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization,[7] speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function, where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error.
Typically, A is some subset of the Euclidean space Rn{\displaystyle \mathbb {R} ^{n}}, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions.
The function f is variously called an objective function, criterion function, loss function, cost function (minimization),[8] utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution.
In mathematics, conventional optimization problems are usually stated in terms of minimization.
A local minimum x* is defined as an element for which there exists some δ > 0 such that, for all x within distance δ of x*,
the expression f(x*) ≤ f(x) holds;
that is to say, on some region around x* all of the function values are greater than or equal to the value at that element.
Local maxima are defined similarly.
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element.
Generally, unless the objective function is convex in a minimization problem, there may be several local minima.
In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum, not all of which need be global minima.
A large number of algorithms proposed for solving nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.
Optimization problems are often expressed with special notation. Here are some examples:
Consider the following notation:
This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers R{\displaystyle \mathbb {R} }. The minimum value in this case is 1, occurring at x = 0.
Similarly, the notation
asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".
Consider the following notation:
or equivalently
This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set.
Similarly,
or equivalently
represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers.
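The notation examples above can be checked numerically on coarse grids. This is an illustrative sketch; the grids are deliberately chosen so the known optima are included exactly.

```python
# Numerical checks of the min / arg min / arg max notation examples.
import math

# min over R of x^2 + 1: minimum value 1, attained at x = 0
xs = [i / 100 for i in range(-500, 501)]
min_val = min(x * x + 1 for x in xs)

# arg min over (-inf, -1] of x^2 + 1: x = -1, since x = 0 is infeasible
feasible = [x for x in xs if x <= -1]
arg_min = min(feasible, key=lambda x: x * x + 1)

# arg max of x*cos(y) with x in [-5, 5]: pairs such as {5, 2k*pi} and
# {-5, (2k+1)*pi}, value 5. Since |x*cos(y)| is maximized at |x| = 5,
# only the endpoints x = +/-5 need to be scanned.
grid = [(x, k * math.pi / 2) for x in (-5.0, 5.0) for k in range(-4, 5)]
arg_max = max(grid, key=lambda p: p[0] * math.cos(p[1]))
```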
Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum.
Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum.
The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time.[9]
Other notable researchers in mathematical optimization include the following:
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier.
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal.
The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker.
Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering.
Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.
Classical optimization techniques due to their iterative approach do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm.
Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing.
The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal.
Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative.
The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum values. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum.
One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.
Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.
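As a small worked example of the Lagrange multiplier method (not from the article): maximize f(x, y) = xy subject to the equality constraint x + y = 1.

```latex
\text{Maximize } f(x,y) = xy \text{ subject to } g(x,y) = x + y - 1 = 0.
\]
Setting \( \nabla f = \lambda \nabla g \) gives
\[
y = \lambda, \qquad x = \lambda \;\Rightarrow\; x = y,
\]
and the constraint \( x + y = 1 \) then yields
\[
x = y = \tfrac{1}{2}, \qquad f\!\left(\tfrac{1}{2},\tfrac{1}{2}\right) = \tfrac{1}{4}.
```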
While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints, called the bordered Hessian, in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.
The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.
The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.
For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, which arise in loss-function minimization for neural networks. Positive-negative momentum estimation helps to avoid local minima and converge to the global minimum of the objective function.[10]
Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
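The classification above can be sketched for the two-variable case, where the definiteness of the symmetric 2×2 Hessian follows from its determinant and trace. This is a sketch; semidefinite cases are left inconclusive, since the second-derivative test is then silent.

```python
# Classify a critical point of f(x, y) by the definiteness of its 2x2
# Hessian [[fxx, fxy], [fxy, fyy]]. For a symmetric 2x2 matrix:
# positive definite  <=> det > 0 and trace > 0,
# negative definite  <=> det > 0 and trace < 0,
# indefinite         <=> det < 0.

def classify_2x2_hessian(fxx, fxy, fyy):
    det = fxx * fyy - fxy * fxy
    tr = fxx + fyy
    if det > 0 and tr > 0:
        return "local minimum"
    if det > 0 and tr < 0:
        return "local maximum"
    if det < 0:
        return "saddle point"
    return "inconclusive (semidefinite)"

# f(x, y) = x^2 + y^2 at (0, 0): Hessian diag(2, 2)  -> local minimum
# f(x, y) = x^2 - y^2 at (0, 0): Hessian diag(2, -2) -> saddle point
```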
Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems.
When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.
More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).
The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.
One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.
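The N+1 evaluation count for a gradient approximation can be made explicit with a forward-difference sketch that counts function calls; the step size h and the test function are illustrative choices.

```python
# Forward-difference gradient: N+1 function evaluations for N variables,
# as the text notes. An evaluation counter makes the cost explicit.

def fd_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x, counting function evaluations."""
    evals = 0

    def feval(p):
        nonlocal evals
        evals += 1
        return f(p)

    f0 = feval(x)                    # 1 baseline evaluation
    grad = []
    for i in range(len(x)):          # plus N perturbed evaluations
        xp = list(x)
        xp[i] += h
        grad.append((feval(xp) - f0) / h)
    return grad, evals               # evals == len(x) + 1

def sphere(p):
    """f(x) = sum of x_i^2; exact gradient is 2x."""
    return sum(v * v for v in p)

g, n_evals = fd_gradient(sphere, [1.0, 2.0, 3.0])
```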
Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics:
Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve an ordinary differential equation on a constraint manifold;[11] the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.
Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.
This approach may be applied in cosmology and astrophysics.[12]
Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses.[13] Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.
In microeconomics, theutility maximization problemand itsdual problem, theexpenditure minimization problem, are economic optimization problems. Insofar as they behave consistently,consumersare assumed to maximize theirutility, whilefirmsare usually assumed to maximize theirprofit. Also, agents are often modeled as beingrisk-averse, thereby preferring to avoid risk.Asset pricesare also modeled using optimization theory, though the underlying mathematics relies on optimizingstochastic processesrather than on static optimization.International trade theoryalso uses optimization to explain trade patterns between nations. The optimization ofportfoliosis an example of multi-objective optimization in economics.
Since the 1970s, economists have modeled dynamic decisions over time usingcontrol theory.[14]For example, dynamicsearch modelsare used to studylabor-market behavior.[15]A crucial distinction is between deterministic and stochastic models.[16]Macroeconomistsbuilddynamic stochastic general equilibrium (DSGE)models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[17][18]
Some common applications of optimization techniques inelectrical engineeringincludeactive filterdesign,[19]stray field reduction in superconducting magnetic energy storage systems,space mappingdesign ofmicrowavestructures,[20]handset antennas,[21][22][23]electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empiricalsurrogate modelandspace mappingmethodologies since the discovery ofspace mappingin 1993.[24][25]Optimization techniques are also used inpower-flow analysis.[26]
Optimization has been widely used in civil engineering.Construction managementandtransportation engineeringare among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures,[27]resource leveling,[28][29]water resource allocation,trafficmanagement[30]and schedule optimization.
Another field that uses optimization techniques extensively isoperations research.[31]Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research usesstochastic programmingto model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization andstochastic optimizationmethods.
Mathematical optimization is used in much modern controller design. High-level controllers such asmodel predictive control(MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.
Optimization techniques are regularly used ingeophysicalparameter estimation problems. Given a set of geophysical measurements, e.g.seismic recordings, it is common to solve for thephysical propertiesandgeometrical shapesof the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used.
Nonlinear optimization methods are widely used inconformational analysis.
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology.[32]Linear programminghas been applied to calculate the maximal possible yields of fermentation products,[32]and to infer gene regulatory networks from multiple microarray datasets[33]as well as transcriptional regulatory networks from high-throughput data.[34]Nonlinear programminghas been used to analyze energy metabolism[35]and has been applied to metabolic engineering and parameter estimation in biochemical pathways.[36]
https://en.wikipedia.org/wiki/Optimization_(mathematics)
Thepartial least squares path modelingorpartial least squares structural equation modeling(PLS-PM,PLS-SEM)[1][2][3]is a method forstructural equation modelingthat allows estimation of complex cause-effect relationships in path models withlatent variables.
PLS-PM[4][5]is a component-based estimation approach that differs from covariance-based structural equation modeling. Unlike covariance-based approaches to structural equation modeling, PLS-PM does not fit a common factor model to the data; rather, it fits a composite model.[6][7]In doing so, it maximizes the amount of variance explained (though what this means from a statistical point of view is unclear, and PLS-PM users do not agree on how this goal might be achieved).
In addition, by an adjustment PLS-PM is capable of consistently estimating certain parameters of common factor models as well, through an approach called consistent PLS-PM (PLSc-PM).[8]A further related development is factor-based PLS-PM (PLSF), a variation of which employs PLSc-PM as a basis for the estimation of the factors in common factor models; this method significantly increases the number of common factor model parameters that can be estimated, effectively bridging the gap between classic PLS-PM and covariance‐based structural equation modeling.[9]
The PLS-PM structural equation model is composed of two sub-models: the measurement models and the structural model. The measurement models represent the relationships between the observed data and thelatent variables. The structural model represents the relationships between the latent variables.
An iterative algorithm solves the structural equation model by estimating thelatent variablesby using the measurement and structural model in alternating steps, hence the procedure's name, partial. The measurement model estimates the latent variables as a weighted sum of its manifest variables. The structural model estimates the latent variables by means of simple or multiplelinear regressionbetween the latent variables estimated by the measurement model. This algorithm repeats itself until convergence is achieved.
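A toy version of this alternation for two connected latent variables, each measured by two indicators, might look as follows (a sketch on simulated data; Mode A outer estimation and the centroid inner weighting scheme are assumptions of the illustration, not the only choices in PLS-PM):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
eta = rng.normal(size=n)                                        # common latent driver
X1 = np.outer(eta, [0.9, 0.8]) + 0.3 * rng.normal(size=(n, 2))  # indicators of LV 1
X2 = np.outer(eta, [0.7, 0.9]) + 0.3 * rng.normal(size=(n, 2))  # indicators of LV 2

def std(a):
    return (a - a.mean(0)) / a.std(0)

X1, X2 = std(X1), std(X2)
w1 = np.ones(2)
w2 = np.ones(2)

for _ in range(100):
    Y1, Y2 = std(X1 @ w1), std(X2 @ w2)   # outer step: scores from the measurement model
    s = np.sign(Y1 @ Y2)                  # inner step (centroid scheme)
    Z1, Z2 = s * Y2, s * Y1               # inner proxies from the structural model
    w1_new, w2_new = X1.T @ Z1 / n, X2.T @ Z2 / n   # Mode A outer weight update
    done = max(np.abs(w1_new - w1).max(), np.abs(w2_new - w2).max()) < 1e-10
    w1, w2 = w1_new, w2_new
    if done:
        break

# structural path coefficient: simple regression between the estimated latent scores
path = (std(X1 @ w1) @ std(X2 @ w2)) / n
```

With the strong loadings simulated here, the estimated path coefficient recovers a large positive association between the two composites.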
PLS is viewed critically by several methodological researchers.[10][11]A major point of contention has been the claim that PLS-PM can always be used with very small sample sizes.[12]A recent study suggests that this claim is generally unjustified, and proposes two methods for minimum sample size estimation in PLS-PM.[13][14]Another point of contention is the ad hoc way in which PLS-PM has been developed and the lack of analytic proofs to support its main feature: the sampling distribution of PLS-PM weights. However, PLS-PM is still considered preferable (over covariance‐based structural equation modeling) when it is unknown whether the data's nature is common factor- or composite-based.[15]
https://en.wikipedia.org/wiki/Partial_least_squares_path_modeling
Instatistics,projection pursuit regression (PPR)is astatistical modeldeveloped byJerome H. FriedmanandWerner Stuetzlethat extendsadditive models. This model adapts the additive models in that it first projects thedata matrixofexplanatory variablesin the optimal direction before applying smoothing functions to these explanatory variables.
The model consists oflinear combinationsofridge functions: non-linear transformations of linear combinations of the explanatory variables. The basic model takes the formyi=∑j=1rfj(βjTxi){\displaystyle y_{i}=\sum _{j=1}^{r}f_{j}(\beta _{j}^{\text{T}}x_{i})}
wherexiis a 1 ×prow of thedesign matrixcontaining the explanatory variables for examplei,yiis a 1 × 1 prediction, {βj} is a collection ofrvectors (each a unit vector of lengthp) which contain the unknown parameters, {fj} is a collection ofrinitially unknown smooth functions that map fromR→R{\displaystyle \mathbb {R} \rightarrow \mathbb {R} }, andris a hyperparameter. Good values forrcan be determined throughcross-validationor a forward stage-wise strategy which stops when the model fit cannot be significantly improved. Asrapproaches infinity and with an appropriate set of functions {fj}, the PPR model is auniversal estimator, as it can approximate any continuous function inRp{\displaystyle \mathbb {R} ^{p}}.
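The basic model form can be made concrete in a few lines (a sketch; the directions βj and ridge functions fj below are illustrative placeholders, not fitted values):

```python
import numpy as np

p, r = 3, 2
betas = [np.array([1.0, 0.0, 0.0]),   # unit-length projection directions beta_j
         np.array([0.0, 1.0, 0.0])]
fs = [np.tanh, np.square]             # smooth ridge functions f_j

def ppr_predict(x):
    """y = sum_j f_j(beta_j^T x)."""
    return sum(f(b @ x) for f, b in zip(fs, betas))

x = np.array([0.5, 2.0, -1.0])
print(ppr_predict(x))                 # tanh(0.5) + 2.0**2, about 4.4621
```

Each term only sees the one-dimensional projection `b @ x`, which is what makes the per-term fitting problem low-dimensional.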
For a given set of data{(yi,xi)}i=1n{\displaystyle \{(y_{i},x_{i})\}_{i=1}^{n}}, the goal is to minimize the error functionS=∑i=1n[yi−∑j=1rfj(βjTxi)]2{\displaystyle S=\sum _{i=1}^{n}\left[y_{i}-\sum _{j=1}^{r}f_{j}(\beta _{j}^{\text{T}}x_{i})\right]^{2}}
over the functionsfj{\displaystyle f_{j}}and vectorsβj{\displaystyle \beta _{j}}. No method exists for solving over all variables at once, but it can be solved viaalternating optimization. First, consider each(fj,βj){\displaystyle (f_{j},\beta _{j})}pair individually: Let all other parameters be fixed, and find a "residual", the part of the output not accounted for by those other parameters, given byri=yi−∑l≠jfl(βlTxi){\displaystyle r_{i}=y_{i}-\sum _{l\neq j}f_{l}(\beta _{l}^{\text{T}}x_{i})}
The task of minimizing the error function now reduces to minimizingS′=∑i=1n[ri−fj(βjTxi)]2{\displaystyle S'=\sum _{i=1}^{n}\left[r_{i}-f_{j}(\beta _{j}^{\text{T}}x_{i})\right]^{2}}
for eachjin turn. Typically new(fj,βj){\displaystyle (f_{j},\beta _{j})}pairs are added to the model in a forward stage-wise fashion.
Aside: Previously fitted pairs can be readjusted after new fit-pairs are determined by an algorithm known asbackfitting, which entails reconsidering a previous pair, recalculating the residual given how other pairs have changed, refitting to account for that new information, and then cycling through all fit-pairs this way until parameters converge. This process typically results in a model that performs better with fewer fit-pairs, though it takes longer to train, and it is usually possible to achieve the same performance by skipping backfitting and simply adding more fits to the model (increasingr).
Solving the simplified error function to determine an(fj,βj){\displaystyle (f_{j},\beta _{j})}pair can be done with alternating optimization, where first a randomβj{\displaystyle \beta _{j}}is used to projectX{\displaystyle X}into 1D space, and then the optimalfj{\displaystyle f_{j}}is found to describe the relationship between that projection and the residuals via any one-dimensional scatter-plot regression method. Then, iffj{\displaystyle f_{j}}is held constant and assumed to be once differentiable, the optimal updated weightsβj{\displaystyle \beta _{j}}can be found via theGauss–Newton method, a quasi-Newton method in which the part of the Hessian involving the second derivative is discarded. To derive this, firstTaylor expandfj(βjTxi)≈fj(βj,oldTxi)+fj˙(βj,oldTxi)(βjTxi−βj,oldTxi){\displaystyle f_{j}(\beta _{j}^{T}x_{i})\approx f_{j}(\beta _{j,old}^{T}x_{i})+{\dot {f_{j}}}(\beta _{j,old}^{T}x_{i})(\beta _{j}^{T}x_{i}-\beta _{j,old}^{T}x_{i})}, then plug the expansion back into the simplified error functionS′{\displaystyle S'}and do some algebraic manipulation to put it in the formS′≈∑i=1nfj˙(βj,oldTxi)2[βj,oldTxi+ri−fj(βj,oldTxi)fj˙(βj,oldTxi)−βjTxi]2{\displaystyle S'\approx \sum _{i=1}^{n}{\dot {f_{j}}}(\beta _{j,old}^{T}x_{i})^{2}\left[\beta _{j,old}^{T}x_{i}+{\frac {r_{i}-f_{j}(\beta _{j,old}^{T}x_{i})}{{\dot {f_{j}}}(\beta _{j,old}^{T}x_{i})}}-\beta _{j}^{T}x_{i}\right]^{2}}
This is aweighted least squaresproblem. If we solve for all weightsw{\displaystyle w}and put them in a diagonal matrixW{\displaystyle W}, stack all the new targetsb^{\displaystyle {\hat {b}}}into a vector, and use the full data matrixX{\displaystyle X}instead of a single examplexi{\displaystyle x_{i}}, then the optimalβj{\displaystyle \beta _{j}}is given by the closed-formβj=(XTWX)−1XTWb^{\displaystyle \beta _{j}=\left(X^{\text{T}}WX\right)^{-1}X^{\text{T}}W{\hat {b}}}
Use this updatedβj{\displaystyle \beta _{j}}to find a new projection ofX{\displaystyle X}and refitfj{\displaystyle f_{j}}to the new scatter plot. Then use that newfj{\displaystyle f_{j}}to updateβj{\displaystyle \beta _{j}}by resolving the above, and continue this alternating process until(fj,βj){\displaystyle (f_{j},\beta _{j})}converges.
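One round of this alternation can be sketched as follows (illustrative data; fj = sin with derivative cos is an assumption of the sketch, standing in for whatever smoother was actually fitted):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # data matrix, one row per example
y = np.sin(X @ np.array([0.8, -0.6, 0.0]))   # residuals this pair should explain

f, fdot = np.sin, np.cos                     # current ridge function and its derivative
beta_old = np.array([1.0, 0.0, 0.0])         # current direction estimate

# Gauss-Newton step written as weighted least squares:
#   weights  w_i = fdot(z_i)^2,  targets  b_i = z_i + (y_i - f(z_i)) / fdot(z_i)
z = X @ beta_old
W = np.diag(fdot(z) ** 2)
b = z + (y - f(z)) / fdot(z)
beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ b)

# next: refit f_j on the new projection X @ beta_new, then repeat until convergence
```

Each pass alternates this weight update with a one-dimensional refit of fj on the new projection.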
It has been shown that the convergence rate, the bias and the variance are affected by the estimation ofβj{\displaystyle \beta _{j}}andfj{\displaystyle f_{j}}.
The PPR model takes the form of a basic additive model but with the additionalβj{\displaystyle \beta _{j}}component, so eachfj{\displaystyle f_{j}}fits a scatter plot ofβjTXT{\displaystyle \beta _{j}^{T}X^{T}}vs theresidual(unexplained variance) during training rather than using the raw inputs themselves. This constrains the problem of finding eachfj{\displaystyle f_{j}}to low dimension, making it solvable with common least squares or spline fitting methods and sidestepping thecurse of dimensionalityduring training. Becausefj{\displaystyle f_{j}}is taken of a projection ofX{\displaystyle X}, the result looks like a "ridge" orthogonal to the projection dimension, so{fj}{\displaystyle \{f_{j}\}}are often called "ridge functions". The directionsβj{\displaystyle \beta _{j}}are chosen to optimize the fit of their corresponding ridge functions.
Note that because PPR attempts to fit projections of the data, it can be difficult to interpret the fitted model as a whole, because each input variable has been accounted for in a complex and multifaceted way. This can make the model more useful for prediction than for understanding the data, though visualizing individual ridge functions and considering which projections the model is discovering can yield some insight.
Both projection pursuit regression and fully connectedneural networkswith a single hidden layer project the input vector onto a one-dimensional subspace and then apply a nonlinear transformation of the input variables that are then added in a linear fashion. Thus both follow the same steps to overcome the curse of dimensionality. The main difference is that the functionsfj{\displaystyle f_{j}}being fitted in PPR can be different for each combination of input variables and are estimated one at a time and then updated with the weights, whereas in NN these are all specified upfront and estimated simultaneously.
Thus, in PPR the transformations of the variables are data driven, whereas in a single-layer neural network these transformations are fixed.
https://en.wikipedia.org/wiki/Projection_pursuit_regression
Inmathematics, ahomogeneous functionis afunction of several variablessuch that the following holds: If each of the function's arguments is multiplied by the samescalar, then the function's value is multiplied by some power of this scalar; the power is called thedegree of homogeneity, or simply thedegree. That is, ifkis an integer, a functionfofnvariables is homogeneous of degreekiff(sx1,…,sxn)=skf(x1,…,xn){\displaystyle f(sx_{1},\ldots ,sx_{n})=s^{k}f(x_{1},\ldots ,x_{n})}
for everyx1,…,xn,{\displaystyle x_{1},\ldots ,x_{n},}ands≠0.{\displaystyle s\neq 0.}This is also referred to as akth-degreeorkth-orderhomogeneous function.
For example, ahomogeneous polynomialof degreekdefines a homogeneous function of degreek.
The above definition extends to functions whosedomainandcodomainarevector spacesover afieldF: a functionf:V→W{\displaystyle f:V\to W}between twoF-vector spaces ishomogeneousof degreek{\displaystyle k}iff(sv)=skf(v){\displaystyle f(s\mathbf {v} )=s^{k}f(\mathbf {v} )}
for all nonzeros∈F{\displaystyle s\in F}andv∈V.{\displaystyle v\in V.}This definition is often further generalized to functions whose domain is notV, but aconeinV, that is, a subsetCofVsuch thatv∈C{\displaystyle \mathbf {v} \in C}impliessv∈C{\displaystyle s\mathbf {v} \in C}for every nonzero scalars.
In the case offunctions of several real variablesandreal vector spaces, a slightly more general form of homogeneity calledpositive homogeneityis often considered, by requiring only that the above identities hold fors>0,{\displaystyle s>0,}and allowing any real numberkas a degree of homogeneity. Every homogeneous real function ispositively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point.
Anormover a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is theabsolute valueof real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition ofprojective schemes.
The concept of a homogeneous function was originally introduced forfunctions of several real variables. With the definition ofvector spacesat the end of the 19th century, the concept has been naturally extended to functions between vector spaces, since atupleof variable values can be considered as acoordinate vector. It is this more general point of view that is described in this article.
There are two commonly used definitions. The general one works for vector spaces over arbitraryfields, and is restricted to degrees of homogeneity that areintegers.
The second one applies to vector spaces over the field ofreal numbers, or, more generally, over anordered field. This definition restricts the scaling factor that occurs in the definition to positive values, and is therefore calledpositive homogeneity, the qualifierpositiveoften being omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, theabsolute valueand allnormsare positively homogeneous functions that are not homogeneous.
The restriction of the scaling factor to positive real values also allows considering homogeneous functions whose degree of homogeneity is any real number.
LetVandWbe twovector spacesover afieldF. Alinear coneinVis a subsetCofVsuch thatsx∈C{\displaystyle sx\in C}for allx∈C{\displaystyle x\in C}and all nonzeros∈F.{\displaystyle s\in F.}
Ahomogeneous functionffromVtoWis apartial functionfromVtoWthat has a linear coneCas itsdomain, and satisfiesf(sx)=skf(x){\displaystyle f(sx)=s^{k}f(x)}
for someintegerk, everyx∈C,{\displaystyle x\in C,}and every nonzeros∈F.{\displaystyle s\in F.}The integerkis called thedegree of homogeneity, or simply thedegreeoff.
A typical example of a homogeneous function of degreekis the function defined by ahomogeneous polynomialof degreek. Therational functiondefined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; itscone of definitionis the linear cone of the points where the value of denominator is not zero.
Homogeneous functions play a fundamental role inprojective geometrysince any homogeneous functionffromVtoWdefines a well-defined function between theprojectivizationsofVandW. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomial of the same degree) play an essential role in theProj constructionofprojective schemes.
When working over thereal numbers, or more generally over anordered field, it is commonly convenient to considerpositive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzeros" replaced by "s> 0" in the definitions of a linear cone and a homogeneous function.
This change allows considering (positively) homogeneous functions with any real number as their degree, sinceexponentiationwith a positive real base is well defined.
Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of theabsolute valuefunction andnorms, which are all positively homogeneous of degree1. They are not homogeneous since|−x|=|x|≠−|x|{\displaystyle |-x|=|x|\neq -|x|}ifx≠0.{\displaystyle x\neq 0.}This remains true in thecomplexcase, since the field of the complex numbersC{\displaystyle \mathbb {C} }and every complex vector space can be considered as real vector spaces.
Euler's homogeneous function theoremis a characterization of positively homogeneousdifferentiable functions, which may be considered as thefundamental theorem on homogeneous functions.
The functionf(x,y)=x2+y2{\displaystyle f(x,y)=x^{2}+y^{2}}is homogeneous of degree 2:f(tx,ty)=(tx)2+(ty)2=t2(x2+y2)=t2f(x,y).{\displaystyle f(tx,ty)=(tx)^{2}+(ty)^{2}=t^{2}\left(x^{2}+y^{2}\right)=t^{2}f(x,y).}
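This identity can be verified numerically (a minimal check; the test points are arbitrary):

```python
def f(x, y):
    return x**2 + y**2

# f(t*x, t*y) == t**2 * f(x, y) for every scalar t, including negative t
x, y = 3.0, -4.0
for t in (2.0, -1.5, 0.25):
    assert abs(f(t * x, t * y) - t**2 * f(x, y)) < 1e-9
```

Note that the check passes for negative t as well, which is what distinguishes homogeneity from mere positive homogeneity.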
Theabsolute valueof areal numberis a positively homogeneous function of degree1, which is not homogeneous, since|sx|=s|x|{\displaystyle |sx|=s|x|}ifs>0,{\displaystyle s>0,}and|sx|=−s|x|{\displaystyle |sx|=-s|x|}ifs<0.{\displaystyle s<0.}
The absolute value of acomplex numberis a positively homogeneous function of degree1{\displaystyle 1}over the real numbers (that is, when considering the complex numbers as avector spaceover the real numbers). It is not homogeneous, either over the real numbers or over the complex numbers.
More generally, everynormandseminormis a positively homogeneous function of degree1which is not a homogeneous function. As with the absolute value, if the norm or seminorm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
Anylinear mapf:V→W{\displaystyle f:V\to W}betweenvector spacesover afieldFis homogeneous of degree 1, by the definition of linearity:f(αv)=αf(v){\displaystyle f(\alpha \mathbf {v} )=\alpha f(\mathbf {v} )}for allα∈F{\displaystyle \alpha \in {F}}andv∈V.{\displaystyle v\in V.}
Similarly, anymultilinear functionf:V1×V2×⋯×Vn→W{\displaystyle f:V_{1}\times V_{2}\times \cdots \times V_{n}\to W}is homogeneous of degreen,{\displaystyle n,}by the definition of multilinearity:f(αv1,…,αvn)=αnf(v1,…,vn){\displaystyle f\left(\alpha \mathbf {v} _{1},\ldots ,\alpha \mathbf {v} _{n}\right)=\alpha ^{n}f(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})}for allα∈F{\displaystyle \alpha \in {F}}andv1∈V1,v2∈V2,…,vn∈Vn.{\displaystyle v_{1}\in V_{1},v_{2}\in V_{2},\ldots ,v_{n}\in V_{n}.}
Monomialsinn{\displaystyle n}variables define homogeneous functionsf:Fn→F.{\displaystyle f:\mathbb {F} ^{n}\to \mathbb {F} .}For example,f(x,y,z)=x5y2z3{\displaystyle f(x,y,z)=x^{5}y^{2}z^{3}\,}is homogeneous of degree 10 sincef(αx,αy,αz)=(αx)5(αy)2(αz)3=α10x5y2z3=α10f(x,y,z).{\displaystyle f(\alpha x,\alpha y,\alpha z)=(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3}=\alpha ^{10}x^{5}y^{2}z^{3}=\alpha ^{10}f(x,y,z).\,}The degree is the sum of the exponents on the variables; in this example,10=5+2+3.{\displaystyle 10=5+2+3.}
Ahomogeneous polynomialis apolynomialmade up of a sum of monomials of the same degree. For example,x5+2x3y2+9xy4{\displaystyle x^{5}+2x^{3}y^{2}+9xy^{4}}is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
Given a homogeneous polynomial of degreek{\displaystyle k}with real coefficients that takes only positive values, one gets a positively homogeneous function of degreek/d{\displaystyle k/d}by raising it to the power1/d.{\displaystyle 1/d.}So for example, the following function is positively homogeneous of degree 1 but not homogeneous:(x2+y2+z2)12.{\displaystyle \left(x^{2}+y^{2}+z^{2}\right)^{\frac {1}{2}}.}
For every set of weightsw1,…,wn,{\displaystyle w_{1},\dots ,w_{n},}the following functions are positively homogeneous of degree 1, but not homogeneous:
Rational functionsformed as the ratio of twohomogeneouspolynomials are homogeneous functions in theirdomain, that is, off of thelinear coneformed by thezerosof the denominator. Thus, iff{\displaystyle f}is homogeneous of degreem{\displaystyle m}andg{\displaystyle g}is homogeneous of degreen,{\displaystyle n,}thenf/g{\displaystyle f/g}is homogeneous of degreem−n{\displaystyle m-n}away from the zeros ofg.{\displaystyle g.}
The homogeneousreal functionsof a single variable have the formx↦cxk{\displaystyle x\mapsto cx^{k}}for some constantc. So, theaffine functionx↦x+5,{\displaystyle x\mapsto x+5,}thenatural logarithmx↦ln(x),{\displaystyle x\mapsto \ln(x),}and theexponential functionx↦ex{\displaystyle x\mapsto e^{x}}are not homogeneous.
Roughly speaking,Euler's homogeneous function theoremasserts that the positively homogeneous functions of a given degree are exactly the solution of a specificpartial differential equation. More precisely:
Euler's homogeneous function theorem—Iffis a(partial) functionofnreal variables that is positively homogeneous of degreek, andcontinuously differentiablein some open subset ofRn,{\displaystyle \mathbb {R} ^{n},}then it satisfies in this open set thepartial differential equationkf(x1,…,xn)=∑i=1nxi∂f∂xi(x1,…,xn).{\displaystyle k\,f(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(x_{1},\ldots ,x_{n}).}
Conversely, every maximal continuously differentiable solution of this partial differential equation is a positively homogeneous function of degreek, defined on a positive cone (here,maximalmeans that the solution cannot be extended to a function with a larger domain).
For simpler formulas, we setx=(x1,…,xn).{\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{n}).}The first part follows by using thechain ruleto differentiate both sides of the equationf(sx)=skf(x){\displaystyle f(s\mathbf {x} )=s^{k}f(\mathbf {x} )}with respect tos,{\displaystyle s,}and taking the limit of the result asstends to1.
The converse is proved by integrating a simpledifferential equation.
Letx{\displaystyle \mathbf {x} }be in the interior of the domain off. Forssufficiently close to1, the functiong(s)=f(sx){\textstyle g(s)=f(s\mathbf {x} )}is well defined. The partial differential equation implies thatsg′(s)=kf(sx)=kg(s).{\displaystyle sg'(s)=kf(s\mathbf {x} )=kg(s).}The solutions of thislinear differential equationhave the formg(s)=g(1)sk.{\displaystyle g(s)=g(1)s^{k}.}Therefore,f(sx)=g(s)=skg(1)=skf(x),{\displaystyle f(s\mathbf {x} )=g(s)=s^{k}g(1)=s^{k}f(\mathbf {x} ),}ifsis sufficiently close to1. If this solution of the partial differential equation were not defined for all positives, then thefunctional equationwould allow the solution to be extended, and the partial differential equation implies that this extension is unique. So, the domain of a maximal solution of the partial differential equation is a linear cone, and the solution is positively homogeneous of degreek.◻{\displaystyle \square }
As a consequence, iff:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is continuously differentiable and homogeneous of degreek,{\displaystyle k,}its first-orderpartial derivatives∂f/∂xi{\displaystyle \partial f/\partial x_{i}}are homogeneous of degreek−1.{\displaystyle k-1.}This results from Euler's theorem by differentiating the partial differential equation with respect to one variable.
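Euler's identity kf(x) = Σ xi ∂f/∂xi can be checked numerically for a concrete case (a sketch; f(x, y) = x³ + xy², homogeneous of degree k = 3, and a central-difference gradient are assumptions of the illustration):

```python
import numpy as np

def f(v):
    x, y = v
    return x**3 + x * y**2          # homogeneous of degree k = 3

def grad(f, v, h=1e-6):
    """Central-difference approximation of the gradient of f at v."""
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = h
        g[i] = (f(v + e) - f(v - e)) / (2 * h)
    return g

v = np.array([1.2, -0.7])
print(3 * f(v), v @ grad(f, v))     # both sides of k*f(x) = sum_i x_i df/dx_i agree
```

Analytically, x·∂f/∂x + y·∂f/∂y = 3x³ + xy² + 2xy² = 3f(x, y), matching the numeric check.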
In the case of a function of a single real variable (n=1{\displaystyle n=1}), the theorem implies that a continuously differentiable and positively homogeneous function of degreekhas the formf(x)=c+xk{\displaystyle f(x)=c_{+}x^{k}}forx>0{\displaystyle x>0}andf(x)=c−xk{\displaystyle f(x)=c_{-}x^{k}}forx<0.{\displaystyle x<0.}The constantsc+{\displaystyle c_{+}}andc−{\displaystyle c_{-}}are not necessarily the same, as is the case for theabsolute value.
The substitutionv=y/x{\displaystyle v=y/x}converts theordinary differential equationI(x,y)dydx+J(x,y)=0,{\displaystyle I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,}whereI{\displaystyle I}andJ{\displaystyle J}are homogeneous functions of the same degree, into theseparable differential equationxdvdx=−J(1,v)I(1,v)−v.{\displaystyle x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.}
The definitions given above are all specialized cases of the following more general notion of homogeneity in whichX{\displaystyle X}can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of amonoid.
LetM{\displaystyle M}be amonoidwith identity element1∈M,{\displaystyle 1\in M,}letX{\displaystyle X}andY{\displaystyle Y}be sets, and suppose that on bothX{\displaystyle X}andY{\displaystyle Y}there are defined monoid actions ofM.{\displaystyle M.}Letk{\displaystyle k}be a non-negative integer and letf:X→Y{\displaystyle f:X\to Y}be a map. Thenf{\displaystyle f}is said to behomogeneous of degreek{\displaystyle k}overM{\displaystyle M}if for everyx∈X{\displaystyle x\in X}andm∈M,{\displaystyle m\in M,}f(mx)=mkf(x).{\displaystyle f(mx)=m^{k}f(x).}If in addition there is a functionM→M,{\displaystyle M\to M,}denoted bym↦|m|,{\displaystyle m\mapsto |m|,}called anabsolute valuethenf{\displaystyle f}is said to beabsolutely homogeneous of degreek{\displaystyle k}overM{\displaystyle M}if for everyx∈X{\displaystyle x\in X}andm∈M,{\displaystyle m\in M,}f(mx)=|m|kf(x).{\displaystyle f(mx)=|m|^{k}f(x).}
A function ishomogeneous overM{\displaystyle M}(resp.absolutely homogeneous overM{\displaystyle M}) if it is homogeneous of degree1{\displaystyle 1}overM{\displaystyle M}(resp. absolutely homogeneous of degree1{\displaystyle 1}overM{\displaystyle M}).
More generally, it is possible for the symbolsmk{\displaystyle m^{k}}to be defined form∈M{\displaystyle m\in M}withk{\displaystyle k}being something other than an integer (for example, ifM{\displaystyle M}is the real numbers andk{\displaystyle k}is a non-zero real number thenmk{\displaystyle m^{k}}is defined even thoughk{\displaystyle k}is not an integer). If this is the case thenf{\displaystyle f}will be calledhomogeneous of degreek{\displaystyle k}overM{\displaystyle M}if the same equality holds:f(mx)=mkf(x)for everyx∈Xandm∈M.{\displaystyle f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.}
The notion of beingabsolutely homogeneous of degreek{\displaystyle k}overM{\displaystyle M}is generalized similarly.
A continuous functionf{\displaystyle f}onRn{\displaystyle \mathbb {R} ^{n}}is homogeneous of degreek{\displaystyle k}if and only if∫Rnf(tx)φ(x)dx=tk∫Rnf(x)φ(x)dx{\displaystyle \int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx}for allcompactly supportedtest functionsφ{\displaystyle \varphi }; and nonzero realt.{\displaystyle t.}Equivalently, making achange of variabley=tx,{\displaystyle y=tx,}f{\displaystyle f}is homogeneous of degreek{\displaystyle k}if and only ift−n∫Rnf(y)φ(yt)dy=tk∫Rnf(y)φ(y)dy{\displaystyle t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy}for allt{\displaystyle t}and all test functionsφ.{\displaystyle \varphi .}The last display makes it possible to define homogeneity ofdistributions. A distributionS{\displaystyle S}is homogeneous of degreek{\displaystyle k}ift−n⟨S,φ∘μt⟩=tk⟨S,φ⟩{\displaystyle t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle }for all nonzero realt{\displaystyle t}and all test functionsφ.{\displaystyle \varphi .}Here the angle brackets denote the pairing between distributions and test functions, andμt:Rn→Rn{\displaystyle \mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}is the mapping of scalar division by the real numbert.{\displaystyle t.}
Letf:X→Y{\displaystyle f:X\to Y}be a map between twovector spacesover a fieldF{\displaystyle \mathbb {F} }(usually thereal numbersR{\displaystyle \mathbb {R} }orcomplex numbersC{\displaystyle \mathbb {C} }). IfS{\displaystyle S}is a set of scalars, such asZ,{\displaystyle \mathbb {Z} ,}[0,∞),{\displaystyle [0,\infty ),}orR{\displaystyle \mathbb {R} }for example, thenf{\displaystyle f}is said to behomogeneous overS{\displaystyle S}iff(sx)=sf(x){\textstyle f(sx)=sf(x)}for everyx∈X{\displaystyle x\in X}and scalars∈S.{\displaystyle s\in S.}For instance, everyadditive mapbetween vector spaces ishomogeneous over the rational numbersS:=Q{\displaystyle S:=\mathbb {Q} }although itmight not behomogeneous over the real numbersS:=R.{\displaystyle S:=\mathbb {R} .}
The following commonly encountered special cases and variations of this definition have their own terminology:
All of the above definitions can be generalized by replacing the conditionf(rx)=rf(x){\displaystyle f(rx)=rf(x)}withf(rx)=|r|f(x),{\displaystyle f(rx)=|r|f(x),}in which case that definition is prefixed with the word"absolute"or"absolutely."For example,
Ifk{\displaystyle k}is a fixed real number then the above definitions can be further generalized by replacing the conditionf(rx)=rf(x){\displaystyle f(rx)=rf(x)}withf(rx)=rkf(x){\displaystyle f(rx)=r^{k}f(x)}(and similarly, by replacingf(rx)=|r|f(x){\displaystyle f(rx)=|r|f(x)}withf(rx)=|r|kf(x){\displaystyle f(rx)=|r|^{k}f(x)}for conditions using the absolute value, etc.), in which case the homogeneity is said to be"of degreek{\displaystyle k}"(where in particular, all of the above definitions are"of degree1{\displaystyle 1}").
For instance,
A nonzerocontinuous functionthat is homogeneous of degreek{\displaystyle k}onRn∖{0}{\displaystyle \mathbb {R} ^{n}\backslash \lbrace 0\rbrace }extends continuously toRn{\displaystyle \mathbb {R} ^{n}}if and only ifk>0.{\displaystyle k>0.}
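The degree-k{\displaystyle k} condition f(rx)=rkf(x){\displaystyle f(rx)=r^{k}f(x)} can be spot-checked numerically. The function below is an assumed example (f(x, y) = x³ + xy², homogeneous of degree 3), chosen here for illustration only:

```python
import math

# Assumed example: f(x, y) = x**3 + x*y**2 is homogeneous of degree 3
def f(x, y):
    return x**3 + x * y**2

k = 3
for (x, y) in [(1.0, 2.0), (-0.5, 3.0)]:
    for r in [2.0, 0.1, -4.0]:
        # f(r*x, r*y) should equal r**k * f(x, y)
        assert math.isclose(f(r * x, r * y), r**k * f(x, y), rel_tol=1e-9)
print("degree-3 homogeneity holds at the sampled points")
```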
Proofs
|
https://en.wikipedia.org/wiki/Homogeneous_function
|
Inmathematicsandscience, anonlinear system(or anon-linear system) is asystemin which the change of the output is notproportionalto the change of the input.[1][2]Nonlinear problems are of interest toengineers,biologists,[3][4][5]physicists,[6][7]mathematicians, and many otherscientistssince most systems are inherently nonlinear in nature.[8]Nonlineardynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simplerlinear systems.
Typically, the behavior of a nonlinear system is described in mathematics by anonlinear system of equations, which is a set of simultaneousequationsin which the unknowns (or the unknown functions in the case ofdifferential equations) appear as variables of apolynomialof degree higher than one or in the argument of afunctionwhich is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as alinear combinationof the unknownvariablesorfunctionsthat appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation islinearif it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such assolitons,chaos,[9]andsingularitiesare hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemblerandombehavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the termnonlinear sciencefor the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
Inmathematics, alinear map(orlinear function)f(x){\displaystyle f(x)}is one which satisfies both of the following properties:
Additivity implies homogeneity for anyrationalα, and, forcontinuous functions, for anyrealα. For acomplexα, homogeneity does not follow from additivity. For example, anantilinear mapis additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
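The two conditions can be verified numerically for a concrete linear map; the matrix, vectors, and scalar below are arbitrary illustrative choices. An affine map (a linear map plus a constant) is included to show the superposition principle failing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def f_linear(x):
    return A @ x          # a linear map

def f_affine(x):
    return A @ x + 1.0    # affine, hence not linear

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

# Superposition f(alpha*x + y) == alpha*f(x) + f(y) holds for the linear map...
assert np.allclose(f_linear(alpha * x + y), alpha * f_linear(x) + f_linear(y))
# ...but fails for the affine one (the constant term breaks homogeneity)
assert not np.allclose(f_affine(alpha * x + y), alpha * f_affine(x) + f_affine(y))
print("superposition holds for the linear map only")
```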
An equation written asf(x)=C{\displaystyle f(x)=C}is calledlineariff(x){\displaystyle f(x)}is a linear map (as defined above) andnonlinearotherwise. The equation is calledhomogeneousifC=0{\displaystyle C=0}andf(x){\displaystyle f(x)}is ahomogeneous function.
The definitionf(x)=C{\displaystyle f(x)=C}is very general in thatx{\displaystyle x}can be any sensible mathematical object (number, vector, function, etc.), and the functionf(x){\displaystyle f(x)}can literally be anymapping, including integration or differentiation with associated constraints (such asboundary values). Iff(x){\displaystyle f(x)}containsdifferentiationwith respect tox{\displaystyle x}, the result will be adifferential equation.
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not alinear equation.
For a single equation of the formf(x)=0,{\displaystyle f(x)=0,}many methods have been designed; seeRoot-finding algorithm. In the case wherefis apolynomial, one has apolynomial equationsuch asx2+x−1=0.{\displaystyle x^{2}+x-1=0.}General root-finding algorithms can be applied to polynomial roots, but they generally do not find all of the roots, and when they fail to find a root, this does not imply that no root exists. Specific methods for polynomials allow finding all roots or therealroots; seereal-root isolation.
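Both routes can be sketched for the example x² + x − 1 = 0: a polynomial-specific method (here `np.roots`, via the companion matrix) finds all roots at once, while a general iteration such as Newton's method finds one root depending on the starting point. This is an illustrative sketch, not code from the article:

```python
import numpy as np

# Polynomial-specific: all roots of x**2 + x - 1 at once
print(np.roots([1, 1, -1]))        # the two roots (-1 ± sqrt(5)) / 2

# General root-finding: Newton's method converges to one root,
# depending on the starting point
def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**2 + x - 1
df = lambda x: 2 * x + 1
print(newton(f, df, 1.0))          # approx. 0.618..., the positive root
print(newton(f, df, -2.0))         # approx. -1.618..., the negative root
```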
Solvingsystems of polynomial equations, that is finding the common zeros of a set of several polynomials in several variables is a difficult problem for which elaborate algorithms have been designed, such asGröbner basealgorithms.[11]
For the general case of system of equations formed by equating to zero severaldifferentiable functions, the main method isNewton's methodand its variants. Generally they may provide a solution, but do not provide any information on the number of solutions.
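A minimal sketch of Newton's method for a system of two differentiable equations; the particular system (a circle meeting a parabola) and starting point are illustrative choices, not taken from the article:

```python
import numpy as np

# Newton's method for a system F(x) = 0, given the Jacobian J
def newton_system(F, J, x0, tol=1e-12, maxit=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        delta = np.linalg.solve(J(x), -F(x))  # solve J * delta = -F
        x += delta
        if np.linalg.norm(delta) < tol:
            break
    return x

# Example system: x**2 + y**2 = 1 and y = x**2 (circle meets parabola)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]**2])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-2 * v[0], 1.0]])

sol = newton_system(F, J, [1.0, 1.0])
print(sol, F(sol))   # one intersection point; residual near zero
```

As the text notes, the iteration finds one solution near the starting point and says nothing about how many others exist.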
A nonlinearrecurrence relationdefines successive terms of asequenceas a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are thelogistic mapand the relations that define the variousHofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the relatednonlinear system identificationand analysis procedures.[12]These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
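The logistic map can be iterated directly; the parameter values below are illustrative (at r = 2.5 the orbit settles to the fixed point 1 − 1/r = 0.6, while r = 4.0 lies in the chaotic regime, where tiny changes in the initial condition produce very different orbits):

```python
# The logistic map x -> r*x*(1 - x), a nonlinear recurrence relation
def logistic_orbit(r, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: the orbit settles to the fixed point 1 - 1/r = 0.6
print(logistic_orbit(2.5, 0.2, 100)[-1])

# r = 4.0: chaotic regime -- a 1e-6 change in x0 gives very
# different orbits after a few dozen iterations
a = logistic_orbit(4.0, 0.200000, 50)[-1]
b = logistic_orbit(4.0, 0.200001, 50)[-1]
print(abs(a - b))
```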
Asystemofdifferential equationsis said to be nonlinear if it is not asystem of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are theNavier–Stokes equationsin fluid dynamics and theLotka–Volterra equationsin biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family oflinearly independentsolutions can be used to construct general solutions through thesuperposition principle. A good example of this is one-dimensional heat transport withDirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions.
First orderordinary differential equationsare often exactly solvable byseparation of variables, especially for autonomous equations. For example, the nonlinear equationdudx=−u2{\displaystyle {\frac {du}{dx}}=-u^{2}}
hasu=1x+C{\displaystyle u={\frac {1}{x+C}}}as a general solution (and also the special solutionu=0,{\displaystyle u=0,}corresponding to the limit of the general solution whenCtends to infinity). The equation is nonlinear because it may be written asdudx+u2=0{\displaystyle {\frac {du}{dx}}+u^{2}=0}
and the left-hand side of the equation is not a linear function ofu{\displaystyle u}and its derivatives. Note that if theu2{\displaystyle u^{2}}term were replaced withu{\displaystyle u}, the problem would be linear (theexponential decayproblem).
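The general solution can be checked symbolically against the equation du/dx = −u² (the form implied by the u² term discussed above); a minimal sketch using sympy:

```python
import sympy as sp

x, C = sp.symbols('x C')
u = 1 / (x + C)

# The residual du/dx + u**2 should simplify to zero
print(sp.simplify(sp.diff(u, x) + u**2))

# dsolve recovers the same one-parameter family of solutions
w = sp.Function('u')
print(sp.dsolve(sp.Eq(w(x).diff(x) + w(x)**2, 0), w(x)))
```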
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yieldclosed-formsolutions, though implicit solutions and solutions involvingnonelementary integralsare encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
The most common basic approach to studying nonlinearpartial differential equationsis to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or moreordinary differential equations, as seen inseparation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to usescale analysisto simplify a general, natural equation in a certain specificboundary value problem. For example, the (very) nonlinearNavier-Stokes equationscan be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining thecharacteristicsand using the methods outlined above for ordinary differential equations.
A classic, extensively studied nonlinear problem is the dynamics of a frictionlesspendulumunder the influence ofgravity. UsingLagrangian mechanics, it may be shown[14]that the motion of a pendulum can be described by thedimensionlessnonlinear equationd2θdt2+sin⁡(θ)=0{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0}
where gravity points "downwards" andθ{\displaystyle \theta }is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to usedθ/dt{\displaystyle d\theta /dt}as anintegrating factor, which would eventually yield∫dθC0+2cos⁡θ=t+C1{\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos \theta }}}=t+C_{1}}
which is an implicit solution involving anelliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in thenonelementary integral(nonelementary unlessC0=2{\displaystyle C_{0}=2}).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest throughTaylor expansions. For example, the linearization atθ=0{\displaystyle \theta =0}, called the small angle approximation, isd2θdt2+θ=0{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0}
sincesin(θ)≈θ{\displaystyle \sin(\theta )\approx \theta }forθ≈0{\displaystyle \theta \approx 0}. This is asimple harmonic oscillatorcorresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be atθ=π{\displaystyle \theta =\pi }, corresponding to the pendulum being straight up:d2θdt2+π−θ=0{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0}
sincesin(θ)≈π−θ{\displaystyle \sin(\theta )\approx \pi -\theta }forθ≈π{\displaystyle \theta \approx \pi }. The solution to this problem involveshyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that|θ|{\displaystyle |\theta |}will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible aroundθ=π/2{\displaystyle \theta =\pi /2}, around whichsin(θ)≈1{\displaystyle \sin(\theta )\approx 1}:d2θdt2+1=0{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0}
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact)phase portraitsand approximate periods.
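The contrast between the full equation and its small-angle linearization can be made concrete by numerical integration. The sketch below assumes the standard dimensionless form θ'' + sin(θ) = 0 and uses a hand-rolled classical RK4 step; the initial angles and time horizon are illustrative choices:

```python
import math

# RK4 integration of theta'' = rhs(theta) from rest, returning theta(t_end)
def simulate(rhs, theta0, n_steps=20000, t_end=20.0):
    dt = t_end / n_steps
    th, om = theta0, 0.0
    for _ in range(n_steps):
        # classical RK4 on the first-order system (theta, omega)
        k1 = (om, rhs(th))
        k2 = (om + 0.5 * dt * k1[1], rhs(th + 0.5 * dt * k1[0]))
        k3 = (om + 0.5 * dt * k2[1], rhs(th + 0.5 * dt * k2[0]))
        k4 = (om + dt * k3[1], rhs(th + dt * k3[0]))
        th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        om += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return th

for theta0 in (0.1, 2.0):
    full = simulate(lambda t: -math.sin(t), theta0)    # full pendulum
    linear = simulate(lambda t: -t, theta0)            # small-angle model
    print(theta0, abs(full - linear))
# For theta0 = 0.1 the two agree closely; for theta0 = 2.0 the
# small-angle approximation breaks down.
```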
|
https://en.wikipedia.org/wiki/Nonlinear_system
|
Inmathematics, apiecewise linearorsegmented functionis areal-valued functionof a real variable, whosegraphis composed of straight-line segments.[1]
A piecewise linear function is a function defined on a (possibly unbounded)intervalofreal numbers, such that there is a collection of intervals on each of which the function is anaffine function. (Thus "piecewise linear" is actually defined to mean "piecewiseaffine".) If the domain of the function iscompact, there needs to be a finite collection of such intervals; if the domain is not compact, it may either be required to be finite or to belocally finitein the reals.
The function defined by
is piecewise linear with four pieces. The graph of this function is shown to the right. Since the graph of an affine(*) function is aline, the graph of a piecewise linear function consists ofline segmentsandrays. Thexvalues (in the above example −3, 0, and 3) where the slope changes are typically called breakpoints, changepoints, threshold values or knots. As in many applications, this function is also continuous. The graph of a continuous piecewise linear function on a compact interval is apolygonal chain.
(*) Alinear functionsatisfies by definitionf(λx)=λf(x){\displaystyle f(\lambda x)=\lambda f(x)}and therefore in particularf(0)=0{\displaystyle f(0)=0}; functions whose graph is a straight line areaffinerather thanlinear.
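A continuous piecewise linear function with the article's breakpoints −3, 0, and 3 can be evaluated by linear interpolation between knots. The knot values below are hypothetical, since the article's exact function is not reproduced here; note also that `np.interp` holds the boundary values constant outside the outermost knots, whereas the article's function continues along rays:

```python
import numpy as np

# Hypothetical continuous piecewise linear function with breakpoints at
# x = -3, 0, 3 (the y-values at the knots are illustrative choices)
knots_x = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
knots_y = np.array([ 0.0,  3.0, 1.0, 4.0, 2.0])

def pwl(x):
    # linear interpolation between knots; clamped outside [-6, 6]
    return float(np.interp(x, knots_x, knots_y))

print(pwl(-3.0), pwl(1.5), pwl(3.0))   # 3.0 2.5 4.0
```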
There are other examples of piecewise linear functions:
An approximation to a known curve can be found by sampling the curve and interpolating linearly between the points. An algorithm for computing the most significant points subject to a given error tolerance has been published.[3]
If partitions, and then breakpoints, are already known,linear regressioncan be performed independently on these partitions.
However, continuity is not preserved in that case, and also there is no unique reference model underlying the observed data. A stable algorithm with this case has been derived.[4]
If partitions are not known, theresidual sum of squarescan be used to choose optimal separation points.[5]However efficient computation and joint estimation of all model parameters (including the breakpoints) may be obtained by an iterative procedure[6]currently implemented in the packagesegmented[7]for theR language.
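When the breakpoint is unknown, the residual-sum-of-squares criterion can be sketched by brute force: fit a least-squares line on each side of every candidate split and keep the split with the smallest total residual. This is a simple stand-in for illustration, not the iterative algorithm of the segmented package:

```python
import numpy as np

# Pick the single breakpoint minimizing the total residual sum of squares
# of two independently fitted least-squares lines
def best_breakpoint(x, y, candidates):
    def rss(xs, ys):
        _, res, *_ = np.polyfit(xs, ys, 1, full=True)
        return res[0] if len(res) else 0.0
    scores = {c: rss(x[x <= c], y[x <= c]) + rss(x[x > c], y[x > c])
              for c in candidates}
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
# Synthetic data: slope 2 up to x = 4, flat afterwards, plus noise
y = np.where(x <= 4, 2 * x, 8.0) + 0.1 * rng.standard_normal(x.size)
print(best_breakpoint(x, y, candidates=np.arange(1.0, 9.0, 0.5)))
```

Note that, as the text says, the two fitted segments need not join at the breakpoint; continuity would require a constrained (joint) fit.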
A variant ofdecision tree learningcalledmodel treeslearns piecewise linear functions.[8]
The notion of a piecewise linear function makes sense in several different contexts. Piecewise linear functions may be defined onn-dimensionalEuclidean space, or more generally anyvector spaceoraffine space, as well as onpiecewise linear manifoldsandsimplicial complexes(seesimplicial map). In each case, the function may bereal-valued, or it may take values from a vector space, an affine space, a piecewise linear manifold, or a simplicial complex. (In these contexts, the term “linear” does not refer solely tolinear transformations, but to more generalaffine linearfunctions.)
In dimensions higher than one, it is common to require the domain of each piece to be apolygonorpolytope. This guarantees that the graph of the function will be composed of polygonal or polytopal pieces.
Splinesgeneralize piecewise linear functions to higher-order polynomials, which are in turn contained in the category of piecewise-differentiable functions,PDIFF.
Important sub-classes of piecewise linear functions include thecontinuouspiecewise linear functions and theconvexpiecewise linear functions.
In general, for everyn-dimensional continuous piecewise linear functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }, there is a finite family of affine functionsg1,…,gk{\displaystyle g_{1},\ldots ,g_{k}}and a family of subsetsS1,…,Sm⊆{1,…,k}{\displaystyle S_{1},\ldots ,S_{m}\subseteq \{1,\ldots ,k\}}such thatf(x)=max1≤i≤mminj∈Sigj(x){\displaystyle f(x)=\max _{1\leq i\leq m}\min _{j\in S_{i}}g_{j}(x)}for allx.{\displaystyle x.}
Iff{\displaystyle f}is convex and continuous, then there is a finite family of affine functionsg1,…,gk{\displaystyle g_{1},\ldots ,g_{k}}such thatf(x)=max1≤i≤kgi(x){\displaystyle f(x)=\max _{1\leq i\leq k}g_{i}(x)}for allx.{\displaystyle x.}
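The convex case, a pointwise maximum of affine functions, and the general max–min form can both be evaluated directly; the particular slopes and intercepts below are arbitrary examples:

```python
# A convex piecewise linear function as a pointwise maximum of affine maps
lines = [(-1.0, -2.0), (0.5, 0.0), (2.0, -3.0)]   # (slope, intercept) pairs

def f(x):
    return max(a * x + b for a, b in lines)

print(f(-4.0), f(0.0), f(4.0))   # 2.0 0.0 5.0

# The general (possibly non-convex) case takes a max of mins, e.g. a
# "tent" capped below by zero: max(min(x + 1, 1 - x), 0)
def tent(x):
    return max(min(x + 1.0, 1.0 - x), 0.0)

print(tent(0.0), tent(2.0))      # 1.0 0.0
```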
Inagriculturepiecewiseregression analysisof measured data is used to detect the range over which growth factors affect the yield and the range over which the crop is not sensitive to changes in these factors.
The image on the left shows that at shallowwatertablesthe yield declines, whereas at deeper (> 7 dm) watertables the yield is unaffected. The graph is made using the method ofleast squaresto find the two segments with thebest fit.
The graph on the right reveals that crop yieldstolerateasoil salinityup to ECe = 8 dS/m (ECe is the electric conductivity of an extract of a saturated soil sample), while beyond that value the crop production reduces. The graph is made with the method of partial regression to find the longest range of "no effect", i.e. where the line is horizontal. The two segments need not join at the same point. Only for the second segment method of least squares is used.
|
https://en.wikipedia.org/wiki/Piecewise_linear_function
|
Inmathematics,linear mapsform an important class of "simple"functionswhich preserve the algebraic structure oflinear spacesand are often used as approximations to more general functions (seelinear approximation). If the spaces involved are alsotopological spaces(that is,topological vector spaces), then it makes sense to ask whether all linear maps arecontinuous. It turns out that for maps defined on infinite-dimensionaltopological vector spaces (e.g., infinite-dimensionalnormed spaces), the answer is generally no: there existdiscontinuous linear maps. If the domain of definition iscomplete, it is trickier; such maps can be proven to exist, but the proof relies on theaxiom of choiceand does not provide an explicit example.
LetXandYbe two normed spaces andf:X→Y{\displaystyle f:X\to Y}a linear map fromXtoY. IfXisfinite-dimensional, choose a basis(e1,e2,…,en){\displaystyle \left(e_{1},e_{2},\ldots ,e_{n}\right)}inXwhich may be taken to be unit vectors. Then,f(x)=∑i=1nxif(ei),{\displaystyle f(x)=\sum _{i=1}^{n}x_{i}f(e_{i}),}and so by thetriangle inequality,‖f(x)‖=‖∑i=1nxif(ei)‖≤∑i=1n|xi|‖f(ei)‖.{\displaystyle \|f(x)\|=\left\|\sum _{i=1}^{n}x_{i}f(e_{i})\right\|\leq \sum _{i=1}^{n}|x_{i}|\|f(e_{i})\|.}LettingM=supi{‖f(ei)‖},{\displaystyle M=\sup _{i}\{\|f(e_{i})\|\},}and using the fact that∑i=1n|xi|≤C‖x‖{\displaystyle \sum _{i=1}^{n}|x_{i}|\leq C\|x\|}for someC>0 which follows from the fact thatany two norms on a finite-dimensional space are equivalent, one finds‖f(x)‖≤(∑i=1n|xi|)M≤CM‖x‖.{\displaystyle \|f(x)\|\leq \left(\sum _{i=1}^{n}|x_{i}|\right)M\leq CM\|x\|.}Thus,f{\displaystyle f}is abounded linear operatorand so is continuous. In fact, to see this, simply note thatfis linear,
and therefore‖f(x)−f(x′)‖=‖f(x−x′)‖≤K‖x−x′‖{\displaystyle \|f(x)-f(x')\|=\|f(x-x')\|\leq K\|x-x'\|}for some universal constantK. Thus for anyϵ>0,{\displaystyle \epsilon >0,}we can chooseδ≤ϵ/K{\displaystyle \delta \leq \epsilon /K}so thatf(B(x,δ))⊆B(f(x),ϵ){\displaystyle f(B(x,\delta ))\subseteq B(f(x),\epsilon )}(B(x,δ){\displaystyle B(x,\delta )}andB(f(x),ϵ){\displaystyle B(f(x),\epsilon )}are the normed balls aroundx{\displaystyle x}andf(x){\displaystyle f(x)}), which gives continuity.
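The finite-dimensional bound ‖f(x)‖ ≤ CM‖x‖ can be checked numerically for a random matrix. With the Euclidean norm one may take C = √n, since ‖x‖₁ ≤ √n ‖x‖₂ by Cauchy–Schwarz; the matrix and sample vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))   # a linear map f(x) = A @ x on R^n
f = lambda x: A @ x

# M = max over basis vectors of ||f(e_i)||; here f(e_i) is column i of A
M = max(np.linalg.norm(A[:, i]) for i in range(n))

# ||f(x)|| <= M * ||x||_1 <= M * sqrt(n) * ||x||_2
C = np.sqrt(n)
for _ in range(100):
    x = rng.standard_normal(n)
    assert np.linalg.norm(f(x)) <= C * M * np.linalg.norm(x) + 1e-9
print("bound K = C*M =", C * M, "holds on all samples")
```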
IfXis infinite-dimensional, this proof will fail as there is no guarantee that thesupremumMexists. IfYis the zero space {0}, the only map betweenXandYis the zero map which is trivially continuous. In all other cases, whenXis infinite-dimensional andYis not the zero space, one can find a discontinuous map fromXtoY.
Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequenceei{\displaystyle e_{i}}of linearly independent vectors which does not have a limit, there is a linear operatorT{\displaystyle T}such that the quantities‖T(ei)‖/‖ei‖{\displaystyle \|T(e_{i})\|/\|e_{i}\|}grow without bound. In a sense, the linear operators are not continuous because the space has "holes".
For example, consider the spaceX{\displaystyle X}of real-valuedsmooth functionson the interval [0, 1] with theuniform norm, that is,‖f‖=supx∈[0,1]|f(x)|.{\displaystyle \|f\|=\sup _{x\in [0,1]}|f(x)|.}Thederivative-at-a-pointmap, given byT(f)=f′(0){\displaystyle T(f)=f'(0)\,}defined onX{\displaystyle X}and with real values, is linear, but not continuous. Indeed, consider the sequencefn(x)=sin(n2x)n{\displaystyle f_{n}(x)={\frac {\sin(n^{2}x)}{n}}}forn≥1{\displaystyle n\geq 1}. This sequence converges uniformly to the constantly zero function, butT(fn)=n2cos(n2⋅0)n=n→∞{\displaystyle T(f_{n})={\frac {n^{2}\cos(n^{2}\cdot 0)}{n}}=n\to \infty }
asn→∞{\displaystyle n\to \infty }instead ofT(fn)→T(0)=0{\displaystyle T(f_{n})\to T(0)=0}, as would hold for a continuous map. Note thatT{\displaystyle T}is real-valued, and so is actually alinear functionalonX{\displaystyle X}(an element of the algebraicdual spaceX∗{\displaystyle X^{*}}). The linear mapX→X{\displaystyle X\to X}which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it isclosed.
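The failure of continuity can be seen numerically with the sequence above, f_n(x) = sin(n²x)/n: the uniform norm is at most 1/n and tends to zero, while T(f_n) = f_n'(0) = n grows without bound:

```python
import numpy as np

# f_n(x) = sin(n**2 * x) / n converges to 0 in the uniform norm on [0, 1],
# yet the derivative-at-0 functional gives T(f_n) = n
xs = np.linspace(0.0, 1.0, 10001)
for n in (1, 10, 100):
    fn = np.sin(n**2 * xs) / n
    sup_norm = np.abs(fn).max()      # at most 1/n, tending to 0
    T_fn = n**2 * np.cos(0.0) / n    # exact derivative at 0: equals n
    print(n, sup_norm, T_fn)
```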
The fact that the domain is not complete here is important: discontinuous operators on complete spaces require a little more work.
An algebraic basis for thereal numbersas a vector space over therationalsis known as aHamel basis(note that some authors use this term in a broader sense to mean an algebraic basis ofanyvector space). Note that any twononcommensurablenumbers, say 1 andπ{\displaystyle \pi }, are linearly independent. One may find a Hamel basis containing them, and define a mapf:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }so thatf(π)=0,{\displaystyle f(\pi )=0,}facts as the identity on the rest of the Hamel basis, and extends to all ofR{\displaystyle \mathbb {R} }by linearity. Let {rn}nbe any sequence of rationals which converges toπ{\displaystyle \pi }. Then limnf(rn) = π, butf(π)=0.{\displaystyle f(\pi )=0.}By construction,fis linear overQ{\displaystyle \mathbb {Q} }(not overR{\displaystyle \mathbb {R} }), but not continuous. Note thatfis also notmeasurable; anadditivereal function is linear if and only if it is measurable, so for every such function there is aVitali set. The construction offrelies on the axiom of choice.
This example can be extended into a general theorem about the existence of discontinuous linear maps on any infinite-dimensional normed space (as long as the codomain is not trivial).
Discontinuous linear maps can be proven to exist more generally, even if the space is complete. LetXandYbenormed spacesover the fieldKwhereK=R{\displaystyle K=\mathbb {R} }orK=C.{\displaystyle K=\mathbb {C} .}Assume thatXis infinite-dimensional andYis not the zero space. We will find a discontinuous linear mapffromXtoK, which will imply the existence of a discontinuous linear mapgfromXtoYgiven by the formulag(x)=f(x)y0{\displaystyle g(x)=f(x)y_{0}}wherey0{\displaystyle y_{0}}is an arbitrary nonzero vector inY.
IfXis infinite-dimensional, to show the existence of a linear functional which is not continuous then amounts to constructingfwhich is not bounded. For that, consider asequence(en)n(n≥1{\displaystyle n\geq 1}) oflinearly independentvectors inX, which we normalize. Then, we definef(en)=n‖en‖{\displaystyle f(e_{n})=n\|e_{n}\|\,}for eachn=1,2,…{\displaystyle n=1,2,\ldots }Complete this sequence of linearly independent vectors to avector space basisofXby definingfat the other vectors in the basis to be zero.fso defined will extend uniquely to a linear map onX, and since it is clearly not bounded, it is not continuous.
Notice that by using the fact that any set of linearly independent vectors can be completed to a basis, we implicitly used the axiom of choice, which was not needed for the concrete example in the previous section.
As noted above, theaxiom of choice(AC) is used in the general existence theorem of discontinuous linear maps. In fact, there are no constructive examples of discontinuous linear maps with complete domain (for example,Banach spaces). In analysis as it is usually practiced by working mathematicians, the axiom of choice is always employed (it is an axiom ofZFCset theory); thus, to the analyst, all infinite-dimensional topological vector spaces admit discontinuous linear maps.
On the other hand, in 1970Robert M. Solovayexhibited amodelofset theoryin which every set of reals is measurable.[1]This implies that there are no discontinuous linear real functions. Clearly AC does not hold in the model.
Solovay's result shows that it is not necessary to assume that all infinite-dimensional vector spaces admit discontinuous linear maps, and there are schools of analysis which adopt a moreconstructivistviewpoint. For example, H. G. Garnir, in searching for so-called "dream spaces" (topological vector spaces on which every linear map into a normed space is continuous), was led to adopt ZF +DC+BP(dependent choice is a weakened form and theBaire propertyis a negation of strong AC) as his axioms to prove theGarnir–Wright closed graph theoremwhich states, among other things, that any linear map from anF-spaceto a TVS is continuous. Going to the extreme ofconstructivism, there isCeitin's theorem, which states thateveryfunction is continuous (this is to be understood in the terminology of constructivism, according to which only representable functions are considered to be functions).[2]Such stances are held by only a small minority of working mathematicians.
The upshot is that the existence of discontinuous linear maps depends on AC; it is consistent with set theory without AC that there are no discontinuous linear maps on complete spaces. In particular, no concrete construction such as the derivative can succeed in defining a discontinuous linear map everywhere on a complete space.
Many naturally occurring linear discontinuous operators areclosed, a class of operators which share some of the features of continuous operators. It makes sense to ask which linear operators on a given space are closed. Theclosed graph theoremasserts that aneverywhere-definedclosed operator on a complete domain is continuous, so to obtain a discontinuous closed operator, one must permit operators which are not defined everywhere.
To be more concrete, letT{\displaystyle T}be a map fromX{\displaystyle X}toY{\displaystyle Y}with domainDom(T),{\displaystyle \operatorname {Dom} (T),}writtenT:Dom(T)⊆X→Y.{\displaystyle T:\operatorname {Dom} (T)\subseteq X\to Y.}We don't lose much if we replaceXby the closure ofDom(T).{\displaystyle \operatorname {Dom} (T).}That is, in studying operators that are not everywhere-defined, one may restrict one's attention todensely defined operatorswithout loss of generality.
If the graphΓ(T){\displaystyle \Gamma (T)}ofT{\displaystyle T}is closed inX×Y,{\displaystyle X\times Y,}we callTclosed. Otherwise, consider its closureΓ(T)¯{\displaystyle {\overline {\Gamma (T)}}}inX×Y.{\displaystyle X\times Y.}IfΓ(T)¯{\displaystyle {\overline {\Gamma (T)}}}is itself the graph of some operatorT¯,{\displaystyle {\overline {T}},}T{\displaystyle T}is calledclosable, andT¯{\displaystyle {\overline {T}}}is called theclosureofT.{\displaystyle T.}
So the natural question to ask about linear operators that are not everywhere-defined is whether they are closable. The answer is, "not necessarily"; indeed, every infinite-dimensional normed space admits linear operators that are not closable. As in the case of discontinuous operators considered above, the proof requires the axiom of choice and so is in general nonconstructive, though again, ifXis not complete, there are constructible examples.
In fact, there is even an example of a linear operator whose graph has closureallofX×Y.{\displaystyle X\times Y.}Such an operator is not closable. LetXbe the space ofpolynomial functionsfrom [0,1] toR{\displaystyle \mathbb {R} }andYthe space of polynomial functions from [2,3] toR{\displaystyle \mathbb {R} }. They are subspaces ofC([0,1]) andC([2,3]) respectively, and so normed spaces. Define an operatorTwhich takes the polynomial functionx↦p(x) on [0,1] to the same function on [2,3]. As a consequence of theStone–Weierstrass theorem, the graph of this operator is dense inX×Y,{\displaystyle X\times Y,}so this provides a sort of maximally discontinuous linear map (confernowhere continuous function). Note thatXis not complete here, as must be the case when there is such a constructible map.
Thedual spaceof a topological vector space is the collection of continuous linear maps from the space into the underlying field. Thus the failure of some linear maps to be continuous for infinite-dimensional normed spaces implies that for these spaces, one needs to distinguish the algebraic dual space from the continuous dual space which is then a proper subset. It illustrates the fact that an extra dose of caution is needed in doing analysis on infinite-dimensional spaces as compared to finite-dimensional ones.
The argument for the existence of discontinuous linear maps on normed spaces can be generalized to all metrizable topological vector spaces, especially to all Fréchet spaces, but there exist infinite-dimensional locally convex topological vector spaces such that every functional is continuous.[3]On the other hand, theHahn–Banach theorem, which applies to all locally convex spaces, guarantees the existence of many continuous linear functionals, and so a large dual space. In fact, to every convex set, theMinkowski gaugeassociates a continuouslinear functional. The upshot is that spaces with fewer convex sets have fewer functionals, and in the worst-case scenario, a space may have no functionals at all other than the zero functional. This is the case for theLp(R,dx){\displaystyle L^{p}(\mathbb {R} ,dx)}spaces with0<p<1,{\displaystyle 0<p<1,}from which it follows that these spaces are nonconvex. Here,dx{\displaystyle dx}denotes theLebesgue measureon the real line. There are otherLp{\displaystyle L^{p}}spaces with0<p<1{\displaystyle 0<p<1}which do have nontrivial dual spaces.
Another such example is the space of real-valuedmeasurable functionson the unit interval withquasinormgiven by‖f‖=∫I|f(x)|1+|f(x)|dx.{\displaystyle \|f\|=\int _{I}{\frac {|f(x)|}{1+|f(x)|}}dx.}This non-locally convex space has a trivial dual space.
One can consider even more general spaces. For example, the existence of a discontinuous homomorphism between complete separable metricgroupscan also be shown nonconstructively.
|
https://en.wikipedia.org/wiki/Discontinuous_linear_map
|
Open innovationis a term used to promote anInformation Agemindset toward innovation that runs counter to thesecrecyandsilo mentalityof traditional corporate research labs. The benefits and driving forces behind increased openness have been noted and discussed as far back as the 1960s, especially as it pertains to interfirm cooperation in R&D.[1]Use of the term 'open innovation' in reference to the increasing embrace of external cooperation in a complex world has been promoted in particular byHenry Chesbrough, adjunct professor and faculty director of the Center for Open Innovation of theHaas School of Businessat the University of California, and Maire Tecnimont Chair of Open Innovation atLuiss.[2][3]
The term was originally referred to as "a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology".[3]More recently, it is defined as "a distributed innovation process based on purposively managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization's business model".[4]This more recent definition acknowledges that open innovation is not solely firm-centric: it also includescreative consumers[5]and communities of user innovators.[6]The boundaries between a firm and its environment have become more permeable; innovations can easily transfer inward and outward between firms and other firms and between firms and creative consumers, resulting in impacts at the level of the consumer, the firm, an industry, and society.[7]
Because innovations tend to be produced by outsiders and founders in startups, rather than existing organizations, the central idea behind open innovation is that, in a world of widely distributed knowledge, companies cannot afford to rely entirely on their own research, but should instead buy or license processes or inventions (i.e. patents) from other companies. This is termed inbound open innovation.[8] In addition, internal inventions not being used in a firm's business should be taken outside the company (e.g. through licensing, joint ventures or spin-offs).[9] This is called outbound open innovation.
The open innovation paradigm can be interpreted to go beyond just using external sources of innovation such as customers, rival companies, and academic institutions, and can be as much a change in the use, management, and employment of intellectual property as it is in the technical and research-driven generation of intellectual property.[10] In this sense, it is understood as the systematic encouragement and exploration of a wide range of internal and external sources for innovative opportunities, the integration of this exploration with firm capabilities and resources, and the exploitation of these opportunities through multiple channels.[11]
Moreover, as open innovation explores a wide range of internal and external sources, it can be analyzed not only at the level of the company, but also at the inter-organizational, intra-organizational and extra-organizational levels, as well as at the industrial, regional and societal levels.[12]
Open innovation offers several benefits to companies operating on a program of global collaboration:
Implementing a model of open innovation is naturally associated with a number of risks and challenges, including:
In the UK, knowledge transfer partnerships (KTP) are a funding mechanism encouraging partnership between a firm and a knowledge-based partner.[15] A KTP is a collaboration program between a knowledge-based partner (i.e. a research institution), a company partner and one or more associates (i.e. recently qualified persons such as graduates). KTP initiatives aim to deliver significant improvement in business partners' profitability as a direct result of the partnership, through enhanced quality and operations, increased sales and access to new markets. At the end of their KTP project, the three actors involved have to prepare a final report that describes how the KTP initiative supported the achievement of the project's innovation goals.[15]
Open innovation has allowed startup companies to produce innovation comparable to that of large companies.[16]Although startups tend to have limited resources and experience, they can overcome this disadvantage by leveraging external resources and knowledge.[17]To do so, startups can work in tandem with other institutions including large companies, incubators, VC firms, and higher education systems. Collaborating with these institutions provides startups with the proper resources and support to successfully bring new innovations to the market.[18]
The collaboration between startups and large companies, in particular, has been used to exemplify the fruits of open innovation. In this collaboration, startups can assume one of two roles: that of inbound open innovation, where the startup utilizes innovation from the large company, or that of outbound open innovation, where the startup provides internal innovation for the large company. In the inbound open innovation model, startups can gain access to technology that will allow them to create successful products. In the outbound innovation model, startups can capitalize on their technology without making large investments to do so. The licensing of technology between startups and large companies is beneficial for both parties, but it is more significant for startups since they face larger obstacles in their pursuit of innovation.[17]
This approach involves developing and introducing a partially completed product, for the purpose of providing a framework or tool-kit for contributors to access, customize, and exploit. The goal is for the contributors to extend the platform product's functionality while increasing the overall value of the product for everyone involved.
Readily available software frameworks such as a software development kit (SDK) or an application programming interface (API) are common examples of product platforms. This approach is common in markets with strong network effects, where demand for the product implementing the framework (such as a mobile phone, or an online application) increases with the number of developers that are attracted to use the platform tool-kit. The high scalability of platforming often results in an increased complexity of administration and quality assurance.[13]
This model entails implementing a system that encourages competitiveness among contributors by rewarding successful submissions. Developer competitions such as hackathon events and many crowdsourcing initiatives fall under this category of open innovation. This method provides organizations with inexpensive access to a large quantity of innovative ideas, while also providing a deeper insight into the needs of their customers and contributors.[13]
While mostly oriented toward the end of the product development cycle, this technique involves extensive customer interaction through employees of the host organization. Companies are thus able to accurately incorporate customer input, while also allowing them to be more closely involved in the design process and product management cycle.[13]
Similarly to product platforming, an organization incorporates its contributors into the development of the product. This differs from platforming in the sense that, in addition to providing the framework on which contributors develop, the hosting organization still controls and maintains the eventual products developed in collaboration with its contributors. This method gives organizations more control by ensuring that the correct product is developed as fast as possible, while reducing the overall cost of development.[13] Dr. Henry Chesbrough recently supported this model for open innovation in the optics and photonics industry.[19]
Similarly to idea competitions, an organization leverages a network of contributors in the design process by offering a reward in the form of an incentive. The difference relates to the fact that the network of contributors is used to develop solutions to identified problems within the development process, as opposed to new products.[13] Emphasis needs to be placed on assessing organisational capabilities to ensure value creation in open innovation.[20]
In Austria, the Ludwig Boltzmann Gesellschaft started a project named "Tell us!" about mental health issues and used the concept of open innovation to crowdsource research questions.[21][22] The institute also launched the first "Lab for Open Innovation in Science" to teach 20 selected scientists the concept of open innovation over the course of one year.
Innovation intermediaries are persons or organizations that facilitate innovation by linking multiple independent players in order to encourage collaboration and open innovation, thus strengthening the innovation capacity of companies, industries, regions, or nations.[23]As such, they may be key players for the transformation from closed to open modes of innovation.[24]
The paradigm of closed innovation holds that successful innovation requires control. In particular, a company should control the generation of its own ideas, as well as production, marketing, distribution, servicing, financing, and support. What drove this idea is that, in the early twentieth century, academic and government institutions were not involved in the commercial application of science. As a result, it was left up to corporations to take the new product development cycle into their own hands. There simply was not time to wait for the scientific community to become more involved in the practical application of science, nor to wait for other companies to start producing some of the components required in their final product. These companies became relatively self-sufficient, with little communication directed outward to other companies or universities.
Throughout the years several factors emerged that paved the way for open innovation paradigms:
These four factors have resulted in a new market of knowledge. Knowledge is no longer proprietary to the company; it resides in employees, suppliers, customers, competitors and universities. If companies do not use the knowledge they have inside, someone else will. Innovation can be generated either by means of closed innovation or by open innovation paradigms.[3][9] Some research argues that the potential of open innovation is exaggerated, while the merits of closed innovation are overlooked.[25] There is an ongoing debate on which paradigm will dominate in the future.
Modern research of open innovation is divided into two groups, which have several names, but are similar in their essence (discovery and exploitation; outside-in and inside-out; inbound and outbound). The common factor for different names is the direction of innovation, whether from outside the company in, or from inside the company out:[26]
This type of open innovation is when a company freely shares its resources with other partners, without an instant financial reward. The source of profit has an indirect nature and is manifested as a new type of business model.
In this type of open innovation a company commercialises its inventions and technology through selling or licensing technology to a third party.
This type of open innovation occurs when companies use freely available external knowledge as a source of internal innovation. Before starting any internal R&D project, a company should monitor the external environment in search of existing solutions; in this case, internal R&D becomes a tool to absorb external ideas for internal needs.
In this type of open innovation a company buys innovation from its partners through licensing or other procedures involving monetary reward for external knowledge.
Open source and open innovation might conflict on patent issues. This conflict is particularly apparent when considering technologies that may save lives, or other open-source-appropriate technologies that may assist in poverty reduction or sustainable development.[27] However, open source and open innovation are not mutually exclusive, because participating companies can donate their patents to an independent organization, put them in a common pool, or grant unlimited license use to anybody. Hence some open-source initiatives can merge these two concepts: this is the case, for instance, for IBM with its Eclipse platform, which the company presents as a case of open innovation where competing companies are invited to cooperate inside an open-innovation network.[28]
In 1997, Eric Raymond, writing about the open-source software movement, coined the term the cathedral and the bazaar. The cathedral represented the conventional method of employing a group of experts to design and develop software (though it could apply to any large-scale creative or innovative work). The bazaar represented the open-source approach. This idea has been amplified by many people, notably Don Tapscott and Anthony D. Williams in their book Wikinomics. Eric Raymond himself is also quoted as saying that 'one cannot code from the ground up in bazaar style. One can test, debug, and improve in bazaar style, but it would be very hard to originate a project in bazaar mode'. In the same vein, Raymond is also quoted as saying 'The individual wizard is where successful bazaar projects generally start'.[29]
In 2014, Chesbrough and Bogers described open innovation as a distributed innovation process that is based on purposefully managed knowledge flows across enterprise boundaries.[30] Open innovation is hardly aligned with ecosystem theory and is not a linear process. Fasnacht's adaptation for financial services uses open innovation as a basis and includes alternative forms of mass collaboration; this makes it complex, iterative, non-linear, and barely controllable.[31] The increasing interactions between business partners, competitors, suppliers, customers, and communities create constant growth of data and cognitive tools. Open innovation ecosystems bring together the symbiotic forces of all supportive firms from various sectors and businesses that collectively seek to create differentiated offerings. Accordingly, the value captured from a network of multiple actors, combined with the linear value chain of individual firms, creates the new delivery model that Fasnacht calls a "value constellation".
The term Open Innovation Ecosystem consists of three parts that describe the foundations of the approach: open innovation, innovation systems and business ecosystems.[1]
While James F. Moore researched business ecosystems in manufacturing around a specific business or branch, the open model of innovation with the ecosystem theory has recently been studied in various industries. Traitler et al. researched it in 2010 and applied it to R&D, stating that global innovation needs alliances based on compatible differences. Innovation partnerships based on sharing knowledge represent a paradigm shift toward accelerating co-development of sustainable innovation.[32] West researched open innovation ecosystems in the software industry,[33] following studies in the food industry that show how a small firm thrived and became a business success by building an ecosystem that shares knowledge, encourages individuals' growth, and embeds trust among participants such as suppliers, alumni chefs and staff, and food writers.[34] Other adoptions include the telecom industry[35] and smart cities.[36]
Ecosystems foster collaboration and accelerate the dissemination of knowledge through the network effect; value creation increases with each actor in the ecosystem, which in turn nurtures the ecosystem as such.
A digital platform is essential to make the innovation ecosystem work, as it aligns various actors to achieve a mutually beneficial purpose. Parker explained this with the platform revolution and described how networked markets are transforming the economy.[37] Basically there are three dimensions that increasingly converge, i.e. e-commerce, social media, and logistics and finance, termed by Daniel Fasnacht the golden triangle of ecosystems.[38]
Business ecosystems are increasingly used and drive digital growth.[3] Pioneering firms in China use their technological capabilities and link client data to historical transactions and social behaviour to offer tailored financial services alongside luxury goods or health services. Such an open collaborative environment changes the client experience and adds value to consumers. The drawback is that it also threatens incumbent banks from the U.S. and Europe, owing to their legacies and lack of agility and flexibility.[39]
Bogers, M., Zobel, A-K., Afuah, A., Almirall, E., Brunswicker, S., Dahlander, L., Frederiksen, L.,Gawer, A., Gruber, M., Haefliger, S., Hagedoorn, J., Hilgers, D., Laursen, K., Magnusson, M.G., Majchrzak, A., McCarthy, I.P., Moeslein, K.M., Nambisan, S., Piller, F.T., Radziwon, A., Rossi-Lamastra, C., Sims, J. & Ter Wal, A.J. (2017). The open innovation research landscape: Established perspectives and emerging themes across different levels of analysis. Industry & Innovation, 24(1), 8-40.
|
https://en.wikipedia.org/wiki/Open_innovation
|
An innovation competition is a method or process of industrial process, product or business development. It is a form of social engineering which focuses on the creation and elaboration of the best and most sustainable ideas coming from the best innovators.
There are few major works, such as Terwiesch and Ulrich,[1][2] that exclusively focus on innovation competitions. They argue that while innovation is seen as a largely creative endeavor, it can also be rigorously managed by viewing and structuring the innovation process as a collection of “opportunities”. Profitable innovation comes not from increasing investments in R&D, but from systematically identifying more exceptional opportunities. Terwiesch and Ulrich show how to design and run innovation tournaments: pitting competing opportunities against one another, and then consistently filtering out the weakest ones until only those with the highest profit potential remain.
The aims and the design principles of the innovation competitions are noted in the literature as follows:
In September 2009, Netflix awarded US$1 million to the winner of the Netflix Prize: the team that improved by 10% the accuracy of predictions about how much people will enjoy a movie, based on their past movie preferences.[9]
On November 16, 2010, General Electric announced the winners of its multimillion-dollar challenge[10] to find new, breakthrough ideas to create cleaner, more efficient and economically viable grid technologies, and to accelerate the adoption of smart grid technologies.
Since 2010, the IXL Innovation Olympics has provided a platform to source breakthrough ideas for CEOs from Fortune 100 companies, using teams from MBA, engineering, social sciences and other programs across the globe. As an example, IBM used this platform to source breakthrough ideas for increasing accessibility to screens and devices for the aging population.[11]
|
https://en.wikipedia.org/wiki/Innovation_competition
|
An inducement prize contest (IPC) is a competition that awards a cash prize for the accomplishment of a feat, usually of engineering. IPCs are typically designed to extend the limits of human ability. Some of the most famous IPCs include the Longitude prize (1714–1765), the Orteig Prize (1919–1927) and prizes from enterprises such as Challenge Works and the X Prize Foundation.
IPCs are distinct from recognition prizes, such as the Nobel Prize, in that IPCs have prospectively defined criteria for what feat is to be achieved to win the prize, while recognition prizes may be based on the beneficial effects of the feat.
Throughout history, there have been instances where IPCs were successfully utilized to push the boundaries of what would have been considered state-of-the-art at the time.[1]
The Longitude Prize was a reward offered by the British government for a simple and practical method for the precise determination of a ship's longitude. The prize, established through an Act of Parliament (the Longitude Act) in 1714, was administered by the Board of Longitude.
Another example happened during the first years of the Napoleonic Wars. The French government offered a hefty cash award of 12,000 francs to any inventor who could devise a cheap and effective method of preserving large amounts of food. The larger armies of the period required increased, regular supplies of quality food. Limited food availability was among the factors limiting military campaigns to the summer and autumn months. In 1809, a French confectioner and brewer, Nicolas Appert, observed that food cooked inside a jar did not spoil unless the seals leaked, and developed a method of sealing food in glass jars.[2] The reason for the lack of spoilage was unknown at the time, since it would be another 50 years before Louis Pasteur demonstrated the role of microbes in food spoilage.
Yet another example is the Orteig Prize, a $25,000 reward offered on May 19, 1919, by New York hotel owner Raymond Orteig to the first Allied aviator(s) to fly non-stop from New York City to Paris or vice versa. On offer for five years, it attracted no competitors. Orteig renewed the offer for another five years in 1924, when the state of aviation technology had advanced to the point that numerous competitors vied for the prize. Several famous aviators made unsuccessful attempts at the New York–Paris flight before the relatively unknown American Charles Lindbergh won the prize in 1927 in his aircraft Spirit of St. Louis.
One of the leading organizations for IPCs is Challenge Works. This social enterprise, originating from Nesta (charity), uses IPCs, or 'Challenge Prizes', to catalyse innovative solutions to the world's largest problems. Their work includes the continuation of Longitude rewards, for example the Longitude Prize on Dementia, which seeks to use artificial intelligence and other emerging technologies to help those with dementia to manage their symptoms and live independently. Their work in social innovations revolves around four key pillars: Climate Response, Global Health, Resilient Society and Technology Frontiers. They run a number of inducement prizes and continue to conduct research into areas where further innovations can make a positive difference.
Another organization which develops and manages IPCs is the X PRIZE Foundation. Its mission is to bring about "radical breakthroughs for the benefit of humanity" through incentivized competition. It fosters high-profile competitions that motivate individuals, companies and organizations across all disciplines to develop innovative ideas and technologies that help solve the grand challenges that restrict humanity's progress. The most high-profile X PRIZE to date was the Ansari X PRIZE relating to spacecraft development, awarded in 2004. This prize was intended to inspire research and development into technology for space exploration. Indeed, the X Prize has inspired other "letter"-named inducement prize competitions such as the H-Prize, N-Prize, and so forth. In 2006, there was much interest in prizes for automotive achievement, such as the 250 mpg car.[citation needed]
In Europe there has been a re-emergence of challenge prizes that follow in the tradition of the Longitude Prize for solutions which have an impact on social problems. Nesta Challenges, based in London, is an example of this, running prizes for innovations that, for example, reduce social isolation or make renewable energy generators accessible to off-the-grid refugees and returnees.[3]
In some literature on the subject, it has been stated that well-designed IPCs can garner economic activity on the order of 10 to 20 times the amount of the prize face value.[citation needed]
Inducement prizes have a long history as a policy tool for promoting innovation and solving various technical and societal challenges. These prizes offer a compensation reward, which can be in the form of monetary or non-monetary benefits, and aim to engage diverse groups of actors to develop solutions with low barriers to entry.[4]The primary objectives of inducement prizes are to direct research efforts and incentivize the creation of desired technologies.
In recent years, national and regional policymakers have increasingly utilized inducement prizes to stimulate innovation. These prizes can be implemented at various territorial levels, such as supranational with H2020 prizes, national with the challenge.gov platform, or local with Tampere hackathons. Inducement prizes provide policy flexibility and a non-prescriptive approach that allows regional policymakers to also address specific societal challenges and concerns related to directionality, legitimacy, and responsibility.[5] Overall, inducement prizes can be an effective policy tool with a challenge-oriented approach for addressing diverse societal challenges.[6]
|
https://en.wikipedia.org/wiki/Inducement_prize_contest
|
Kaggle is a data science competition platform and online community for data scientists and machine learning practitioners under Google LLC. Kaggle enables users to find and publish datasets, explore and build models in a web-based data science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges.[1]
Kaggle was founded by Anthony Goldbloom in April 2010.[2] Jeremy Howard, one of the first Kaggle users, joined in November 2010 and served as the President and Chief Scientist.[3] Also on the team was Nicholas Gruen, serving as the founding chair.[4] In 2011, the company raised $12.5 million and Max Levchin became the chairman.[5] On March 8, 2017, Fei-Fei Li, Chief Scientist at Google, announced that Google was acquiring Kaggle.[6]
In June 2017, Kaggle surpassed 1 million registered users, and as of October 2023, it has over 15 million users in 194 countries.[7][8][9]
In 2022, founders Goldbloom and Hamner stepped down from their positions and D. Sculley became the CEO.[10]
In February 2023, Kaggle introduced Models, allowing users to discover and use pre-trained models through deep integrations with the rest of Kaggle’s platform.[11]
In April 2025, Kaggle partnered with the Wikimedia Foundation.[12]
Many machine-learning competitions have been run on Kaggle since the company was founded. Notable competitions include gesture recognition for Microsoft Kinect,[13] making a football AI for Manchester City, coding a trading algorithm for Two Sigma Investments,[14] and improving the search for the Higgs boson at CERN.[15]
The competition host prepares the data and a description of the problem; the host may choose whether the competition will be rewarded with money or be unpaid. Participants experiment with different techniques and compete against each other to produce the best models. Work is shared publicly through Kaggle Kernels to achieve a better benchmark and to inspire new ideas. Submissions can be made through Kaggle Kernels, via manual upload or using the Kaggle API. For most competitions, submissions are scored immediately (based on their predictive accuracy relative to a hidden solution file) and summarized on a live leaderboard. After the deadline passes, the competition host pays the prize money in exchange for "a worldwide, perpetual, irrevocable and royalty-free license [...] to use the winning Entry", i.e. the algorithm, software and related intellectual property developed, which is "non-exclusive unless otherwise specified".[16]
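The scoring-and-leaderboard flow described above can be sketched in a few lines. This is a minimal illustration only, not Kaggle's actual implementation; the team names, labels, and the choice of plain accuracy as the metric are invented for the example.

```python
# Hypothetical sketch of host-side scoring: compare each submission against
# a hidden solution file and rank teams on a leaderboard.
hidden_solution = [1, 0, 1, 1, 0, 1]  # labels only the host can see

def accuracy(predictions, truth):
    """Fraction of predictions that match the hidden labels."""
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

submissions = {
    "team_alpha": [1, 0, 1, 0, 0, 1],  # one label wrong
    "team_beta":  [1, 0, 1, 1, 0, 1],  # perfect match
}

# Sort teams by score, best first, as a live leaderboard would.
leaderboard = sorted(
    ((team, accuracy(pred, hidden_solution)) for team, pred in submissions.items()),
    key=lambda entry: entry[1],
    reverse=True,
)
print(leaderboard[0])  # ('team_beta', 1.0)
```

Real competitions use task-specific metrics (log loss, AUC, RMSE, and so on), but the pattern of scoring against a withheld answer file and sorting into a public ranking is the same.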
Alongside its public competitions, Kaggle also offers private competitions, which are limited to Kaggle's top participants. Kaggle offers a free tool for data science teachers to run academic machine-learning competitions.[17] Kaggle also hosts recruiting competitions in which data scientists compete for a chance to interview at leading data science companies like Facebook, Winton Capital, and Walmart.
Kaggle's competitions have resulted in successful projects such as furthering HIV research,[18] chess ratings[19] and traffic forecasting.[20] Geoffrey Hinton and George Dahl used deep neural networks to win a competition hosted by Merck.[citation needed] Vlad Mnih (one of Hinton's students) used deep neural networks to win a competition hosted by Adzuna.[citation needed] This resulted in the technique being taken up by others in the Kaggle community. Tianqi Chen from the University of Washington also used Kaggle to show the power of XGBoost, which has since replaced Random Forest as one of the main methods used to win Kaggle competitions.[citation needed]
Several academic papers have been published based on findings from Kaggle competitions.[21]A contributor to this is the live leaderboard, which encourages participants to continue innovating beyond existing best practices.[22]The winning methods are frequently written on the Kaggle Winner's Blog.
Kaggle has implemented a progression system to recognize and reward users based on their contributions and achievements within the platform. This system consists of five tiers: Novice, Contributor, Expert, Master, and Grandmaster. Each tier is achieved by meeting specific criteria in competitions, datasets, kernels (code-sharing), and discussions.[23]
The highest tier, Kaggle Grandmaster, is awarded to users who have ranked at the top of multiple competitions, including a high ranking achieved as a solo team. As of April 2, 2025, out of 23.29 million Kaggle accounts, 2,973 have achieved Kaggle Master status and 612 have achieved Kaggle Grandmaster status.[24]
|
https://en.wikipedia.org/wiki/Kaggle
|
This list of computer science awards is an index to articles on notable awards related to computer science. It includes lists of awards by the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, other computer science and information science awards, and a list of computer science competitions.
The top computer science award is the ACM Turing Award, generally regarded as the Nobel Prize equivalent for computer science.[1] Other highly regarded top computer science awards include the IEEE John von Neumann Medal, awarded by the IEEE Board of Directors, and Japan's Kyoto Prize for Information Science.
The Association for Computing Machinery (ACM) gives out many computer science awards, often run by one of their Special Interest Groups.
A number of awards are given by the Institute of Electrical and Electronics Engineers (IEEE), the IEEE Computer Society or the IEEE Information Theory Society.
|
https://en.wikipedia.org/wiki/List_of_computer_science_awards
|
The black box model of a power converter, also called a behavior model, is a method of system identification used to represent the characteristics of a power converter, which is regarded as a black box. There are two types of black box models of power converters: when the model includes the load, it is called a terminated model; otherwise, an un-terminated model. The type of black box model is chosen based on the goal of the modeling. This black box model of a power converter can be a tool for filter design in a system integrating power converters.
To successfully implement a black box model of a power converter, the equivalent circuit of the converter is assumed a priori, with the assumption that this equivalent circuit remains constant under different operating conditions. The equivalent circuit of the black box model is built by measuring the stimulus/response of the power converter.
Different modeling methods of power converters can be applied in different circumstances. The white box model of a power converter is suitable when all the inner components are known, which can be quite difficult due to the complex nature of the power converter. The grey box model combines features of both the black box and the white box model, and is used when some components are known or when the relationship between physical elements and the equivalent circuit is investigated.
Since the power converter contains power semiconductor device switches, it is a nonlinear and time-variant system.[1] One assumption of the black box model of a power converter is that the system can be regarded as a linear system when the filter is designed properly to avoid saturation and nonlinear effects. Another strong assumption is that the equivalent circuit model is invariant under different operating conditions, since in the modeling procedure the circuit components are determined under different operating conditions.
The expression of a black box model of a power converter is the assumed equivalent circuit model (in the frequency domain), which can easily be integrated into the circuit of a system in order to facilitate filter design, control system design and pulse-width modulation design. In general, the equivalent circuit contains two main parts: active components such as voltage/current sources, and passive components such as impedances. The process of black box modeling is essentially an approach to determine this equivalent circuit for the converter.
The active components in the equivalent circuit are voltage/current sources. There are usually at least two sources, the choice of which depends on the analysis approach: two voltage sources, two current sources, or one voltage and one current source.
The passive components, containing resistors, capacitors and inductors, can be expressed as a combination of several impedances or admittances. Another approach is to regard the passive components of the power converter as a two-port network and use a Y-matrix or Z-matrix to describe their characteristics.
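As a small numerical illustration of the two-port description, a Z-matrix at a single frequency can be converted to the equivalent Y-matrix by matrix inversion. The element values below are invented for the example, not taken from a real converter.

```python
import numpy as np

# Illustrative two-port impedance matrix of the passive network at one
# frequency (ohms); a reciprocal network has Z12 == Z21.
Z = np.array([[50.0 + 10.0j, 5.0 + 1.0j],
              [5.0 + 1.0j,  30.0 - 5.0j]])

# The admittance description of the same network is simply Y = Z^-1 (siemens).
Y = np.linalg.inv(Z)

# Round trip recovers the identity: both matrices describe the same network.
print(np.allclose(Z @ Y, np.eye(2)))  # True
```

In practice one Z (or Y) matrix is obtained per measurement frequency, so the model is a frequency-indexed family of such matrices rather than a single one.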
Different modeling methods can be used to define the equivalent circuit, depending on the chosen equivalent circuit and the available measurement techniques. However, many modeling methods need at least one of the assumptions mentioned above in order to regard the system as a linear time-invariant system or a periodically switched linear system.
This method is based on the two assumptions mentioned in the section Assumption, so the system is regarded as a linear time-invariant system. Under these assumptions, the equivalent circuit can be derived from several equations obtained under different operating conditions. The equivalent circuit model is defined as containing three impedances and two current sources, so five unknown parameters need to be determined. Three sets of operating conditions are established by changing the external impedance, and the corresponding currents and voltages at the terminals of the power converter are measured or simulated as known parameters. For each condition, two equations containing the five unknown variables can be derived according to Kirchhoff's circuit laws and nodal analysis. In total, six equations are available to solve for these five unknowns, and the equivalent circuit can be determined in this way.
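The idea of recovering equivalent-circuit parameters from terminal measurements under several operating conditions can be sketched on a simplified one-port (Thévenin) analogue. The circuit, values and variable names below are hypothetical illustrations, not the three-impedance/two-source circuit described above:

```python
import numpy as np

# hypothetical terminal measurements under three load conditions
I = np.array([0.5, 1.0, 2.0])      # measured terminal currents (A)
Vs_true, Zs_true = 12.0, 1.5       # source and impedance we hope to recover
V = Vs_true - Zs_true * I          # measured terminal voltages (V)

# each condition gives one equation V = Vs - Zs*I, linear in (Vs, Zs);
# stacking them yields an overdetermined system solved by least squares
A = np.column_stack([np.ones_like(I), -I])
x, *_ = np.linalg.lstsq(A, V, rcond=None)
print(x)  # [12.0, 1.5]
```

In the full method, the six Kirchhoff/nodal equations in the five unknowns would be stacked and solved the same way.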
There are many methods for determining the passive elements. The conventional method is to switch off the power converter and measure the impedance with an impedance analyzer, or to measure the scattering parameters with a vector network analyzer and compute the impedance afterwards. These conventional methods assume that the impedance of the power converter is the same in the operating and switched-off conditions.
Many state-of-the-art methods have been investigated to measure the impedance while the power converter is operating. One method is to place two clamp-on current probes in the system, one acting as a receiving probe and the other as an injecting probe.[2] The outputs of the two probes are connected to a vector network analyzer, and the impedance of the power converter is measured after some calibration procedures in common-mode (CM) and differential-mode (DM) measurement setups. This method is limited by its delicate calibration procedure.
Another state-of-the-art method is to use a transformer and an impedance analyzer in two different setups in order to measure the CM and DM impedances separately.[3] The measurement range of this method is limited by the characteristics of the transformer.
|
https://en.wikipedia.org/wiki/Black_box_model_of_power_converter
|
Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on experimental data collected from the plant.[1]
In many control applications, writing a mathematical model of the plant is a hard task, requiring time and effort from the process and control engineers. This problem is overcome by data-driven methods, which fit a system model to the experimental data collected, choosing it from a specific class of models. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system that includes only those dynamics of the system that are of interest for the control specifications. Direct data-driven methods allow one to tune a controller belonging to a given class without the need for an identified model of the system. In this way, one can also simply weight the process dynamics of interest inside the control cost function and exclude the dynamics that are not of interest.
The standard approach to control systems design is organized in two steps:
Typical objectives ofsystem identificationare to haveG^{\displaystyle {\widehat {G}}}as close as possible toG0{\displaystyle G_{0}}, and to haveΓ{\displaystyle \Gamma }as small as possible. However, from anidentification for controlperspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model.
One way to deal with uncertainty is to design a controller that has an acceptable performance with all models inΓ{\displaystyle \Gamma }, includingG0{\displaystyle G_{0}}. This is the main idea behind the robust control design procedure, which aims at building frequency-domain uncertainty descriptions of the process. However, being based on worst-case assumptions rather than on the idea of averaging out the noise, this approach typically leads to conservative uncertainty sets. Data-driven techniques, instead, deal with uncertainty by working on experimental data, avoiding excessive conservatism.
In the following, the main classifications of data-driven control systems are presented.
Many different methods are available for designing such control systems.
The fundamental distinction is between indirect and direct controller design methods. The former group of techniques still retains the standard two-step approach, i.e. first a model is identified, then a controller is tuned based on that model. The main issue in doing so is that the controller is computed from the estimated modelG^{\displaystyle {\widehat {G}}}(according to the certainty equivalence principle), but in practiceG^≠G0{\displaystyle {\widehat {G}}\neq G_{0}}. To overcome this problem, the idea behind the latter group of techniques is to map the experimental data directly onto the controller, without any model being identified in between.
Another important distinction is between iterative and noniterative (or one-shot) methods. In the former group, repeated iterations are needed to estimate the controller parameters; the optimization problem is solved based on the results of the previous iteration, and the estimate is expected to become more and more accurate at each iteration. This approach also lends itself to on-line implementations (see below). In the latter group, the (optimal) controller parametrization is obtained from a single optimization problem. This is particularly important for systems in which iterations or repetitions of data-collection experiments are limited or not allowed (for example, due to economic constraints). In such cases, one should select a design technique capable of delivering a controller from a single data set. This approach is often implemented off-line (see below).
Since, in practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected on the plant. Off-line approaches, instead, work on batches of data, which may be collected only once, or multiple times at a regular (but rather long) interval of time.
The iterative feedback tuning (IFT) method was introduced in 1994,[2]starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle.
IFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation.
Letyd{\displaystyle y^{d}}be a desired output to the reference signalr{\displaystyle r}; the error between the achieved and desired response isy~(ρ)=y(ρ)−yd{\displaystyle {\tilde {y}}(\rho )=y(\rho )-y^{d}}. The control design objective can be formulated as the minimization of the objective function:
Given the objective function to minimize, thequasi-Newton methodcan be applied, i.e. a gradient-based minimization using a gradient search of the type:
The valueγi{\displaystyle \gamma _{i}}is the step size,Ri{\displaystyle R_{i}}is an appropriate positive definite matrix anddJ^dρ{\displaystyle {\frac {d{\widehat {J}}}{d\rho }}}is an approximation of the gradient; the true value of the gradient is given by the following:
The value ofδyδρ(t,ρ){\displaystyle {\frac {\delta y}{\delta \rho }}(t,\rho )}is obtained through the following three-step methodology:
A crucial factor for the convergence speed of the algorithm is the choice ofRi{\displaystyle R_{i}}; wheny~{\displaystyle {\tilde {y}}}is small, a good choice is the approximation given by the Gauss–Newton direction:
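The resulting quasi-Newton iteration, ρ(i+1) = ρ(i) − γ(i) R(i)⁻¹ (dĴ/dρ), can be sketched on a toy quadratic objective. The objective function and the finite-difference gradient below are illustrative stand-ins for the experiment-based estimates used in real IFT, not the actual method of computing the gradient from closed-loop data:

```python
import numpy as np

def J(rho):                      # toy objective standing in for 0.5*E[y~(rho)^2]
    return 0.5 * np.sum((rho - np.array([1.0, -2.0])) ** 2)

def grad_est(J, rho, h=1e-5):    # finite-difference stand-in for the gradient estimate
    g = np.zeros_like(rho)
    for k in range(len(rho)):
        e = np.zeros_like(rho)
        e[k] = h
        g[k] = (J(rho + e) - J(rho - e)) / (2 * h)
    return g

rho = np.zeros(2)                # initial controller parameters
gamma, R = 0.5, np.eye(2)        # step size and positive-definite scaling matrix
for i in range(50):
    rho = rho - gamma * np.linalg.solve(R, grad_est(J, rho))
print(rho)   # converges to the minimizer [1, -2]
```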
Noniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller.[3]It provides a one-shot method to directly synthesize a controller based on a single dataset.
Suppose thatG{\displaystyle G}denotes an unknown LTI stable SISO plant,M{\displaystyle M}a user-defined reference model andF{\displaystyle F}a user-defined weighting function. An LTI fixed-order controller is indicated asK(ρ)=βTρ{\displaystyle K(\rho )=\beta ^{T}\rho }, whereρ∈Rn{\displaystyle \rho \in \mathbb {R} ^{n}}, andβ{\displaystyle \beta }is a vector of LTI basis functions. Finally,K∗{\displaystyle K^{*}}is an ideal LTI controller of any structure, guaranteeing a closed-loop functionM{\displaystyle M}when applied toG{\displaystyle G}.
The goal is to minimize the following objective function:
J(ρ){\displaystyle J(\rho )}is a convex approximation of the objective function obtained from a model reference problem, supposing that1(1+K(ρ)G)≈1(1+K∗G){\displaystyle {\frac {1}{(1+K(\rho )G)}}\approx {\frac {1}{(1+K^{*}G)}}}.
WhenG{\displaystyle G}is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm ofε(t){\displaystyle \varepsilon (t)}in the scheme in figure.
The input signalr(t){\displaystyle r(t)}is supposed to be a persistently exciting input signal andv(t){\displaystyle v(t)}to be generated by a stable data-generation mechanism. The two signals are thus uncorrelated in an open-loop experiment; hence, the ideal errorε(t,ρ∗){\displaystyle \varepsilon (t,\rho ^{*})}is uncorrelated withr(t){\displaystyle r(t)}. The control objective thus consists in findingρ{\displaystyle \rho }such thatr(t){\displaystyle r(t)}andε(t,ρ∗){\displaystyle \varepsilon (t,\rho ^{*})}are uncorrelated.
The vector ofinstrumental variablesζ(t){\displaystyle \zeta (t)}is defined as:
whereℓ1{\displaystyle \ell _{1}}is large enough andrW(t)=Wr(t){\displaystyle r_{W}(t)=Wr(t)}, whereW{\displaystyle W}is an appropriate filter.
The correlation function is:
and the optimization problem becomes:
Denoting withϕr(ω){\displaystyle \phi _{r}(\omega )}the spectrum ofr(t){\displaystyle r(t)}, it can be demonstrated that, under some assumptions, ifW{\displaystyle W}is selected as:
then, the following holds:
There is no guarantee that the controllerK{\displaystyle K}that minimizesJN,ℓ1{\displaystyle J_{N,\ell _{1}}}is stable. Instability may occur in the following cases:
Consider a stabilizing controllerKs{\displaystyle K_{s}}and the closed loop transfer functionMs=KsG1+KsG{\displaystyle M_{s}={\frac {K_{s}G}{1+K_{s}G}}}.
Define:
Condition 1. is enforced when:
The model reference design with stability constraint becomes:
Aconvex data-driven estimationofδ(ρ){\displaystyle \delta (\rho )}can be obtained through thediscrete Fourier transform.
Define the following:
Forstable minimum phase plants, the followingconvex data-driven optimization problemis given:
Virtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.
VRFT was first proposed in[4]and then extended to LPV systems.[5]VRFT also builds on ideas given in[6]asVRD2{\displaystyle VRD^{2}}.
The main idea is to define a desired closed loop modelM{\displaystyle M}and to use its inverse dynamics to obtain a virtual referencerv(t){\displaystyle r_{v}(t)}from the measured output signaly(t){\displaystyle y(t)}.
The virtual signals arerv(t)=M−1y(t){\displaystyle r_{v}(t)=M^{-1}y(t)}andev(t)=rv(t)−y(t).{\displaystyle e_{v}(t)=r_{v}(t)-y(t).}
The optimal controller is obtained from noiseless data by solving the following optimization problem:
where the optimization function is given as follows:
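The end-to-end VRFT procedure can be sketched for a first-order plant and reference model. The plant, the reference model M(z) = (1−a)/(z−a), and the PI-type controller structure below are hypothetical choices, made so that the ideal controller lies in the chosen controller class and is recovered exactly from noiseless data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
p, b = 0.8, 0.5            # hypothetical plant: y(t+1) = p*y(t) + b*u(t)
a = 0.6                    # reference model M(z) = (1-a)/(z-a)

u = rng.standard_normal(N)          # persistently exciting open-loop input
y = np.zeros(N + 1)
for t in range(N):
    y[t + 1] = p * y[t] + b * u[t]

# virtual error e_v = M^{-1} y - y; for this M it reduces to a difference
e_v = (y[1:] - y[:-1]) / (1 - a)    # e_v[t] for t = 0..N-1

# fit a PI-type controller u(t) - u(t-1) = th1*e_v(t) + th2*e_v(t-1)
t = np.arange(1, N)
Phi = np.column_stack([e_v[t], e_v[t - 1]])
theta, *_ = np.linalg.lstsq(Phi, u[t] - u[t - 1], rcond=None)
print(theta)   # expect [(1-a)/b, -p*(1-a)/b] = [0.8, -0.64]
```

Because the data are noiseless and the ideal controller is in the class, the least-squares fit recovers it exactly; with noisy data, instrumental variables or filtering are needed.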
|
https://en.wikipedia.org/wiki/Data-driven_control_system
|
In mathematics, statistics, and computational modelling, a grey box model[1][2][3][4] combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results to models that need only parameter values taken from data or the existing literature.[5] Thus, almost all models are grey box models, as opposed to black box models, where no model form is assumed, or white box models, which are purely theoretical. Some models assume a special form, such as a linear regression[6][7] or neural network,[8][9] which have special analysis methods. In particular, linear regression techniques[10] are much more efficient than most non-linear techniques.[11][12] The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use.
The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with differing theoretical structures need to be evaluated individually,[1][13][14] possibly using simulated annealing or genetic algorithms.
Within a particular model structure, parameters[14][15] or variable parameter relations[5][16] may need to be found. For a particular structure it is arbitrarily assumed that the data consist of sets of feed vectors f, product vectors p, and operating condition vectors c.[5] Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form:[5][17][18]

0 = m(f, p, q)
where the vector functionmgives the errors between the datap, and the model predictions. The vectorqgives some variable parameters that are the model's unknown parts.
The parametersqvary with the operating conditionscin a manner to be determined.[5][17]This relation can be specified asq=AcwhereAis a matrix of unknown coefficients, andcas inlinear regression[6][7]includes aconstant termand possibly transformed values of the original operating conditions to obtain non-linear relations[19][20]between the original operating conditions andq. It is then a matter of selecting which terms inAare non-zero and assigning their values. The model completion becomes anoptimizationproblem to determine the non-zero values inAthat minimizes the error termsm(f,p,Ac)over the data.[1][16][21][22][23]
Once a selection of non-zero values is made, the remaining coefficients inAcan be determined by minimizingm(f,p,Ac)over the data with respect to the nonzero values inA, typically bynon-linear least squares. Selection of the nonzero terms can be done by optimization methods such assimulated annealingandevolutionary algorithms. Also thenon-linear least squarescan provide accuracy estimates[11][15]for the elements ofAthat can be used to determine if they are significantly different from zero, thus providing a method ofterm selection.[24][25]
It is sometimes possible to calculate values ofqfor each data set, directly or bynon-linear least squares. Then the more efficientlinear regressioncan be used to predictqusingcthus selecting the non-zero values inAand estimating their values. Once the non-zero values are locatednon-linear least squarescan be used on the original modelm(f,p,Ac)to refine these values .[16][21][22]
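The two-stage procedure — estimating q for each data set and then regressing the estimates on c to find A — can be sketched on a toy model p = q·f with q = Ac. All names, values and the model form below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([2.0, -0.5])              # true coefficients relating q to c

q_hat, C = [], []
for _ in range(20):                         # 20 synthetic data sets
    c = np.array([1.0, rng.uniform(0, 10)]) # operating conditions (with constant term)
    q = A_true @ c                          # parameter value under these conditions
    f = rng.standard_normal(50)             # feed data
    p = q * f                               # product data from the toy model p = q*f
    q_hat.append(f @ p / (f @ f))           # stage 1: least-squares estimate of q
    C.append(c)

# stage 2: linear regression of the estimated q values on the conditions c
A_est, *_ = np.linalg.lstsq(np.array(C), np.array(q_hat), rcond=None)
print(A_est)  # close to [2.0, -0.5]
```

With noiseless data the recovery is exact; on real data the estimated A would then be refined by non-linear least squares on the original model, as described above.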
A third method is model inversion,[5][17][18] which converts the non-linear m(f,p,Ac) into an approximate linear form in the elements of A that can be examined using efficient term selection[24][25] and evaluation of the linear regression.[10] Consider the simple case of a single q value (q = aTc) and an estimate q* of q. Putting dq = aTc − q* gives
so that aT now enters linearly with all other terms known, and thus can be analyzed by linear regression techniques. For more than one parameter the method extends in a direct manner.[5][18][17] After checking that the model has been improved, this process can be repeated until convergence. This approach has the advantage that it does not need the parameters q to be determinable from an individual data set, and the linear regression is on the original error terms.[5]
Where sufficient data is available, division of the data into a separate model construction set and one or two evaluation sets is recommended. This can be repeated using multiple selections of the construction set, and the resulting models averaged or used to evaluate prediction differences.
A statistical test such as chi-squared on the residuals is not particularly useful.[26] The chi-squared test requires known standard deviations, which are seldom available, and failed tests give no indication of how to improve the model.[11] There are a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data.
An attempt to predict the residuals m(f, p, Ac) with the operating conditions c using linear regression will show whether the residuals can be predicted.[21][22] Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions.[5] Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance.[21]
The model inversion technique above can be used as a method of determining whether a model can be improved. In this case selection of nonzero terms is not so important andlinear predictioncan be done using the significanteigenvectorsof theregression matrix. The values inAdetermined in this manner need to be substituted into the nonlinear model to assess improvements in the model errors. The absence of a significant improvement indicates the available data is not able to improve the current model form using the defined parameters.[5]Extra parameters can be inserted into the model to make this test more comprehensive.
|
https://en.wikipedia.org/wiki/Grey_box_completion_and_validation
|
Hysteresisis the dependence of the state of a system on its history. For example, amagnetmay have more than one possiblemagnetic momentin a givenmagnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in ahard disk driveand theremanencethat retains a record of theEarth's magnetic fieldmagnitude in the past. Hysteresis occurs inferromagneticandferroelectricmaterials, as well as in thedeformationofrubber bandsandshape-memory alloysand many other natural phenomena. In natural systems, it is often associated withirreversible thermodynamic changesuch asphase transitionsand withinternal friction; anddissipationis a common side effect.
Hysteresis can be found inphysics,chemistry,engineering,biology, andeconomics. It is incorporated in many artificial systems: for example, inthermostatsandSchmitt triggers, it prevents unwanted frequent switching.
Hysteresis can be a dynamiclagbetween an input and an output that disappears if the input is varied more slowly; this is known asrate-dependenthysteresis. However, phenomena such as the magnetic hysteresis loops are mainlyrate-independent, which makes a durable memory possible.
Systems with hysteresis arenonlinear, and can be mathematically challenging to model. Some hysteretic models, such as thePreisach model(originally applied to ferromagnetism) and theBouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as theJiles–Atherton modelfor ferromagnetism.
It is difficult to define hysteresis precisely.Isaak D. Mayergoyzwrote "...the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity.".[1]
The term "hysteresis" is derived fromὑστέρησις, anAncient Greekword meaning "deficiency" or "lagging behind". It was coined in 1881 bySir James Alfred Ewingto describe the behaviour of magnetic materials.[2]
Some early work on describing hysteresis in mechanical systems was performed byJames Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works ofFerenc Preisach(Preisach model of hysteresis),Louis NéelandDouglas Hugh Everettin connection with magnetism and absorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led byMark Krasnosel'skii.
One type of hysteresis is a lag between input and output. An example is a sinusoidal input X(t) that results in a sinusoidal output Y(t), but with a phase lag φ:

X(t) = X0 sin(ωt),  Y(t) = Y0 sin(ωt − φ)
Such behavior can occur in linear systems, and a more general form of response is

Y(t) = χi X(t) + ∫0∞ Φd(τ) X(t − τ) dτ
whereχi{\displaystyle \chi _{\text{i}}}is the instantaneous response andΦd(τ){\displaystyle \Phi _{d}(\tau )}is theimpulse responseto an impulse that occurredτ{\displaystyle \tau }time units in the past. In thefrequency domain, input and output are related by a complexgeneralized susceptibilitythat can be computed fromΦd{\displaystyle \Phi _{d}}; it is mathematically equivalent to atransfer functionin linear filter theory and analogue signal processing.[3]
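The lagged response Y(t) = χi X(t) + ∫ Φd(τ) X(t − τ) dτ can be evaluated numerically as a discrete convolution. The first-order impulse response Φd(τ) = e^(−τ/τ0)/τ0 and the signal parameters below are illustrative assumptions (with χi = 0):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.sin(np.pi * t)                      # sinusoidal input X(t), 0.5 Hz
tau0 = 0.3
phi = np.exp(-t / tau0) / tau0             # impulse response of a first-order lag

# Y(t) = integral of Phi_d(tau) * X(t - tau), as a discrete convolution
y = dt * np.convolve(phi, x)[: len(t)]

# the output's peak occurs later than the input's: a rate-dependent lag
print(t[np.argmax(x)], t[np.argmax(y)])
```

Slowing the input (reducing its frequency) shrinks the phase lag toward zero, as the text describes for rate-dependent hysteresis.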
This kind of hysteresis is often referred to asrate-dependent hysteresis. If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases.[3]
When rate-dependent hysteresis is due todissipativeeffects likefriction, it is associated with power loss.[3]
Systems withrate-independent hysteresishave apersistentmemory of the past that remains after the transients have died out.[4]The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variableX(t)cycles fromX0toX1and back again, the outputY(t)may beY0initially but a different valueY2upon return. The values ofY(t)depend on the path of values thatX(t)passes through but not on the speed at which it traverses the path.[3]Many authors restrict the term hysteresis to mean only rate-independent hysteresis.[5]Hysteresis effects can be characterized using thePreisach modeland the generalizedPrandtl−Ishlinskii model.[6]
In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, athermostatcontrolling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C).
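The thermostat logic just described can be sketched directly. The two thresholds follow the 18 °C / 22 °C example from the text; the class name is an illustrative choice:

```python
class HysteresisThermostat:
    """Switches a heater with hysteresis: on below `low`, off above `high`."""

    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high
        self.heating = False

    def update(self, temperature):
        # the state changes only when a threshold is crossed; between the
        # two thresholds the previous state is kept (the system's memory)
        if temperature < self.low:
            self.heating = True
        elif temperature > self.high:
            self.heating = False
        return self.heating

t = HysteresisThermostat()
print(t.update(17.0))  # True  (below 18 degrees: heater switches on)
print(t.update(20.0))  # True  (between thresholds: state retained)
print(t.update(23.0))  # False (above 22 degrees: heater switches off)
print(t.update(20.0))  # False (between thresholds: state retained)
```

The same two-threshold pattern applies to the pressure switch and Schmitt trigger examples below.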
Similarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds.
Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. This and similar techniques are used to compensate forcontact bouncein switches, ornoisein an electrical signal.
ASchmitt triggeris a simple electronic circuit that exhibits this property.
Alatching relayuses asolenoidto actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated.
Some positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits.
Hysteresis is essential to the workings of somememristors(circuit components which "remember" changes in the current passing through them by changing their resistance).[7]
Hysteresis can be used when connecting arrays of elements such asnanoelectronics,electrochrome cellsandmemory effectdevices usingpassive matrix addressing. Shortcuts are made between adjacent components (seecrosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually.
In the field of audio electronics, anoise gateoften implements hysteresis intentionally to prevent the gate from "chattering" when signals close to its threshold are applied.
A hysteresis is sometimes intentionally added tocomputer algorithms. The field ofuser interface designhas borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior.
Inaerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack.[8]
Hysteresis can be observed in the stage-flow relationship of a river during rapidly changing conditions such as passing of a flood wave. It is most pronounced in low gradient streams with steep leading edge hydrographs.[9]
Moving parts within machines, such as the components of agear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part.[10]This unwanted delay is normally kept as small as practicable, and is usually calledbacklash. The amount of backlash will increase with time as the surfaces of moving parts wear.
In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to materialinternal friction.
Elastic hysteresis was one of the first types of hysteresis to be examined.[11][12]
The effect can be demonstrated using arubber bandwith weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights areloadedonto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, orunloaded, the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obeyHooke's lawperfectly. The hysteresis loop of an idealized rubber band is shown in the figure.
In terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy.
Elastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly.[13]Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. Materials such as rubber exhibit a high degree of elastic hysteresis.
When the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched, it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band isthermallyisolated.
Small vehicle suspensions usingrubber(or otherelastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound.Mountain bikeshave made use of elastomer suspension, as did the originalMinicar.
The primary cause ofrolling resistancewhen a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. This is attributed to theviscoelastic characteristicsof the material of the rolling body.
The contact angle formed between a liquid and a solid phase can take a range of possible values. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases, the downhill contact angle will increase and represents the advancing contact angle, while the uphill contact angle will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis.
The second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without theinterfacial areadecreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is theadvancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle.
The equilibrium shapes ofbubblesexpanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of themaximum capillary pressureto ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system.[14]The bubble shape hysteresis is a consequence of gascompressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume resulting in an asymmetry across expansion and contraction. The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis.
The existence of the bubble shape hysteresis has important consequences ininterfacial rheologyexperiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis.[14][15]
Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking.
In physical adsorption, hysteresis is evidence of mesoporosity: indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius.[16] An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is.[17] Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while at a point on the loop. The resulting scans are called "crossing", "converging", or "returning", depending on the shape of the isotherm at this point.[18]
The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψm) are converted to volumetric water content (θ) measurements based on a site- or soil-specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in the wetting behaviour that causes a dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a matric potential (Ψm) of 5 kPa, the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%.[19]
Tensiometers are directly influenced by this type of hysteresis. Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. As of 2002[update], only desorption curves are usually measured during calibration of soil moisture sensors. Despite the fact that it can be a source of significant error, the sensor-specific effect of hysteresis is generally ignored.[20]
When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive.
The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized (H = M = 0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the H-M relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section is twice the coercivity of the material.[21]
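The retention of magnetization at zero field can be illustrated with a minimal ideal-relay model: a single square hysteresis loop that only switches when the field crosses the coercive field. The coercive field h_c, saturation m_s, and field sweep below are illustrative values, not material data.

```python
def relay(h_sequence, h_c=1.0, m_s=1.0, state=-1):
    """Ideal magnetic relay (square hysteresis loop).

    Magnetization saturates at +/- m_s and flips only when the
    applied field H crosses the coercive field +/- h_c.
    """
    out = []
    for h in h_sequence:
        if h >= h_c:
            state = +1
        elif h <= -h_c:
            state = -1
        # between -h_c and +h_c the previous state is retained
        out.append(state * m_s)
    return out

# Sweep the field up past saturation, then back down to zero:
print(relay([0.0, 0.5, 1.5, 0.5, 0.0]))  # magnetization stays at +m_s
```

After the field returns to zero, the output stays at +m_s: the nonzero value at H = 0 plays the role of the remanence.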
A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.[22]
Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.[23]
The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording).
Larger magnets are divided into regions called domains. Across each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation).
The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, they lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model with a more consistent thermodynamic foundation is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011).[24]
There is a great variety of applications of hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high coercivity) like iron are desirable, such that as much energy as possible is absorbed during the write operation and the resulting magnetized information is not easily erased.
On the other hand, magnetically soft (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. The low energy loss during a hysteresis loop is the reason why soft iron is used for transformer cores and electric motors.
Electrical hysteresis typically occurs in ferroelectric materials, where domains of polarization contribute to the total polarization. Polarization is the electrical dipole moment (either C·m−2 or C·m). The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis.
Hysteresis manifests itself in state transitions when the melting temperature and the freezing temperature do not agree. For example, agar melts at 85 °C (185 °F) and solidifies from 32 to 40 °C (90 to 104 °F). That is to say, once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C. Therefore, between 40 and 85 °C, agar can be either solid or liquid, depending on which state it was in before.
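This history dependence can be sketched as a tiny state machine using the melting (85 °C) and solidification (40 °C) thresholds from the text:

```python
def agar_state(temps_c, state="solid"):
    """Track the phase of agar along a temperature history.

    Melts at 85 C and resolidifies at 40 C (values from the text);
    between 40 and 85 C the phase depends on prior history.
    """
    history = []
    for t in temps_c:
        if t >= 85:
            state = "liquid"
        elif t <= 40:
            state = "solid"
        # 40 C < t < 85 C: state is retained
        history.append(state)
    return history

# Same final temperature (60 C), different histories, different phases:
print(agar_state([20, 60]))      # never melted -> still solid
print(agar_state([20, 90, 60]))  # melted at 90 C -> still liquid
```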
Hysteresis in cell biology often follows bistable systems, where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise.
In the case of mitosis, irreversibility is essential to maintain the overall integrity of the system, and there are three designated checkpoints to account for this: G1/S, G2/M, and the spindle checkpoint.[25] Irreversible hysteresis in this context ensures that once a cell commits to a specific phase (e.g., entering mitosis or DNA replication), it does not revert to a previous phase, even if conditions or regulatory signals change. Based on the irreversible hysteresis curve, there does exist an input at which the cell jumps to the next stable state, but there is no input that allows the cell to revert to its previous stable state, even when the input is 0, demonstrating irreversibility. Positive feedback is critical for generating hysteresis in the cell cycle. For example, in the G2/M transition, active CDK1 promotes the activation of more CDK1 molecules by inhibiting Wee1 (an inhibitor) and activating Cdc25 (a phosphatase that activates CDK1).[26] These loops lock the cell into its current state and amplify the activation of CDK1. Positive feedback also serves to create a bistable system in which CDK1 is either fully inactivated or fully activated. Hysteresis prevents the cell from oscillating between these two states due to small perturbations in signal (input).
A biochemical system that is under the control of reversible hysteresis has both forward and reverse trajectories. The system generally requires a higher [input] to proceed forward into the next bistable state than to exit from that state. For example, cells undergoing cell division exhibit reversible hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun.[27][28] Additionally, because the [cyclin] required to reverse the cell back to the G2 phase is much lower than the [cyclin] required to enter mitosis, this improves the bistability of mitosis, making it more resistant to weak or transient signals. Small perturbations in the [input] will be unable to push the cell out of mitosis so easily.
In systems with bistability, the same input level can correspond to two distinct stable states (e.g., "low output" and "high output"). The actual state of the system depends on its history: whether the input level was increasing (forward trajectory) or decreasing (backward trajectory). Thus, it is difficult to determine which state a cell is in if given only a bistability curve. The cell's ability to "remember" its prior state ensures stability and prevents it from switching states unnecessarily due to minor fluctuations in input.[29] This memory is often maintained through molecular feedback loops, such as positive feedback in signaling pathways, or the persistence of regulatory molecules like proteins or phosphorylated components.[30] For example, the refractory period in action potentials is primarily controlled by history. The absolute refractory period prevents a voltage-gated sodium channel from reactivating immediately after it has fired.[31] Following the absolute refractory period, the neuron remains less excitable due to hyperpolarization caused by potassium efflux. This molecular inhibitory feedback creates a memory for the neuron or cell, so that the neuron does not fire too soon. As time passes, the neuron or cell slowly loses the memory of having fired and begins to fire again. Thus, memory is time-dependent, which is important in maintaining homeostasis and regulating many different biological processes.
Cells advancing through the cell cycle must make an irreversible commitment to mitosis, ensuring they do not revert to interphase before successfully segregating their chromosomes. A mathematical model of cell-cycle progression in cell-free egg extracts from frogs suggests that hysteresis in the molecular control system drives these irreversible transitions into and out of mitosis.[32] Here, Cdc2 (Cyclin-dependent kinase 1, or CDK1) is responsible for mitotic entry and exit, such that binding of cyclin B forms a complex called Maturation-Promoting Factor (MPF).[33] The activation threshold for mitotic entry was found to be between 32 and 40 nM cyclin B in the frog extracts, while the inactivation threshold for exiting mitosis was lower, between 16 and 24 nM cyclin B. The higher threshold for mitotic entry compared to the lower threshold for mitotic exit indicates hysteresis, a hallmark of history-dependent behavior in the system. Concentrations between 24 and 32 nM cyclin B demonstrated bistability, where the system could exist in either interphase or mitosis, depending on its prior state (history). Though the cell cycle is not completely irreversible, the difference in thresholds is enough for the growth and survival of the cells.
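The threshold asymmetry described above behaves like a relay with separate up- and down-switching points. The sketch below uses the outer bounds of the measured ranges (entry at 40 nM, exit at 16 nM) purely for illustration; the simple two-state relay is a caricature of the real biochemistry.

```python
def mitotic_state(cyclin_b_history, state="interphase",
                  entry_nM=40.0, exit_nM=16.0):
    """History-dependent mitotic switch (illustrative relay model).

    entry_nM / exit_nM are taken as the outer bounds of the threshold
    ranges quoted in the text (32-40 nM entry, 16-24 nM exit).
    """
    states = []
    for c in cyclin_b_history:
        if c >= entry_nM:
            state = "mitosis"
        elif c <= exit_nM:
            state = "interphase"
        # exit_nM < c < entry_nM: bistable region, state is retained
        states.append(state)
    return states

# 28 nM lies in the bistable window: the outcome depends on history.
print(mitotic_state([10, 28]))  # never crossed entry -> interphase
print(mitotic_state([45, 28]))  # entered mitosis at 45 nM -> mitosis
```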
Hysteretic thresholds in biological systems are not definite and can be recalibrated. For example, unreplicated DNA or chromosomes inhibit Cdc25 phosphatase and maintain Wee1 kinase activity.[34] This prevents the activation of Cyclin B-Cdc2, effectively raising the threshold for mitotic entry. As a result, the cell delays the transition to mitosis until replication is complete, ensuring genomic integrity. Other examples include DNA damage and unattached chromosomes during the spindle assembly checkpoint.[35]
Biochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of cell cycle arrest in yeast exposed to mating pheromone.[36] The proposed model is that α-factor, a yeast mating pheromone, binds to its receptor on another yeast cell, promoting transcription of Fus3 and promoting mating. Fus3 further promotes Far1, which inhibits Cln1/2, activators of the cell cycle. This is representative of a coherent feedforward loop that can be modeled as a hysteresis curve.
Far1 transcription is the primary mechanism responsible for the hysteresis observed in cell-cycle reentry.[37] The history of pheromone exposure influences the accumulation of Far1, which, in turn, determines the delay in cell-cycle reentry. Previous pulse experiments demonstrated that after exposure to high pheromone concentrations, cells enter a stabilized arrested state where reentry thresholds are elevated due to increased Far1-dependent inhibition of CDK activity. Even when pheromone levels drop to concentrations that would allow naive cells to reenter the cell cycle, pre-exposed cells take longer to resume proliferation. This delay reflects the history-dependent nature of hysteresis, where past exposure to high pheromone concentrations influences the current state. Hysteresis ensures that cells make robust and irreversible decisions about mating and proliferation in response to pheromone signals. It allows cells to "remember" high pheromone exposure, and this helps yeast cells adapt and stabilize their responses to environmental conditions, avoiding premature reentry into the cell cycle the moment the pheromone signal dies down.
Additionally, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments in this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter.
Darlington, in his classic works on genetics,[38][39] discussed hysteresis of the chromosomes, by which he meant "failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral", as they lie in a somewhat rigid medium in the limited space of the cell nucleus.
In developmental biology, cell type diversity is regulated by long-range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a 'memory' of previous exposure to Shh.[40] In neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling.[41] In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins and allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression, then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh.
There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells.[42] Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras, as compared to a naïve cell.
The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis.
Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition.
Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface-tension forces during inspiration to recruit and inflate additional alveoli.[43]
The transpulmonary pressure vs. volume curve of inhalation is different from the pressure vs. volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation.[44]
A hysteresis effect may be observed in voicing onset versus offset.[45] The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller.
Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system cannot be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral effects on disease transmission.[46]
It is commonly examined in relation to critical transitions between ecosystem or community types, in which dominant competitors or entire landscapes can change in a largely irreversible fashion.[47][48]
Complex ocean and climate models rely on the principle.[49][50]
Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of fixed transportation costs, it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going.
When some negative shock reduces employment in a company or industry, fewer employed workers remain. As the employed workers usually have the power to set wages, their reduced number incentivizes them to bargain for higher wages when the economy improves, instead of letting the wage settle at the equilibrium wage level, where the supply and demand of workers would match. This causes hysteresis: unemployment becomes permanently higher after negative shocks.[51][52]
The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate.[53] According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job), to find that their skills have become obsolete, or to become demotivated, disillusioned or depressed, or to lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. Then, in times of an economic upturn, recovery, or "boom", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment "structural", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher "natural" rate of unemployment (more scientifically known as the NAIRU). This occurs first because inflationary expectations are "sticky" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment.
The existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long-run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution.[51] However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which reduces the transition costs out of unemployment and into paid employment.
Hysteretic models are mathematical models capable of simulating the complex nonlinear behavior (hysteresis) characterizing mechanical systems and materials used in different fields of engineering, such as aerospace, civil, and mechanical engineering. Some examples of mechanical systems and materials having hysteretic behavior are:
Each subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis.[55][56][57] An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays.[55] Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons.
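A minimal discrete sketch of this idea: each non-ideal relay (hysteron) switches up at a threshold α and down at β < α, and the output is a weighted superposition of relay states. The thresholds and weights below are arbitrary illustrative values.

```python
def preisach(input_seq, hysterons, states=None):
    """Discrete Preisach model: weighted superposition of non-ideal relays.

    hysterons: list of (alpha, beta, weight) with beta < alpha; each
    relay switches up when the input reaches alpha and down at beta.
    """
    if states is None:
        states = [-1] * len(hysterons)      # all relays start "down"
    outputs = []
    for u in input_seq:
        for i, (alpha, beta, _) in enumerate(hysterons):
            if u >= alpha:
                states[i] = +1
            elif u <= beta:
                states[i] = -1
            # for beta < u < alpha the relay keeps its previous state
        outputs.append(sum(s * w for s, (_, _, w) in zip(states, hysterons)))
    return outputs

relays = [(0.5, -0.5, 1.0), (1.0, -1.0, 0.5)]
# Raising the input to 0.7 flips only the first relay; returning to 0
# leaves it flipped, so the output "remembers" the excursion:
print(preisach([0.0, 0.7, 0.0], relays))
```

Because each relay remembers its last switching event, the superposition reproduces minor loops and history dependence that no single-valued function of the input could.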
A simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model.[56][57] Along with smooth loops, substitution of trapezoidal, triangular or rectangular pulses for the harmonic functions allows piecewise-linear hysteresis loops, frequently used in discrete automatics, to be built in the model. There are implementations of the hysteresis loop model in Mathcad[57] and in the R programming language.[58]
The Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc[59][60] and extended by Wen,[61] who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture, in analytical form, a range of shapes of hysteretic cycles which match the behaviour of a wide class of hysteretic systems; therefore, given its versatility and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems, two- and three-dimensional continua, and soil liquefaction, among others. The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber.[citation needed] The most important extension of the Bouc–Wen model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named BWBN, can reproduce the complex shear pinching or slip-lock phenomenon that the earlier model could not reproduce. The BWBN model has been used in a wide spectrum of applications, and implementations are available in software such as OpenSees.
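As a sketch, the standard single-degree-of-freedom Bouc–Wen evolution law for the hysteretic variable z can be integrated with explicit Euler. All parameter values here (A, β, γ, n, k, α) are illustrative, not a fitted model.

```python
import math

def bouc_wen_force(x_path, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0,
                   k=1.0, alpha=0.5):
    """Explicit-Euler integration of the Bouc-Wen hysteretic variable z:

        z' = A*x' - beta*|x'|*|z|**(n-1)*z - gamma*x'*|z|**n

    Restoring force: f = alpha*k*x + (1 - alpha)*k*z.
    """
    z, forces = 0.0, []
    for i in range(1, len(x_path)):
        xdot = (x_path[i] - x_path[i - 1]) / dt
        zdot = (A * xdot
                - beta * abs(xdot) * abs(z) ** (n - 1) * z
                - gamma * xdot * abs(z) ** n)
        z += zdot * dt
        forces.append(alpha * k * x_path[i] + (1 - alpha) * k * z)
    return forces

# A sinusoidal displacement cycle traces a hysteresis loop in the
# (displacement, force) plane:
N = 200
x = [0.5 * math.sin(2 * math.pi * i / N) for i in range(N + 1)]
f = bouc_wen_force(x, dt=0.01)
```

Plotting f against x over one cycle would show the loop; the enclosed area is the energy dissipated per cycle.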
Hysteretic models may have a generalized displacement u as input variable and a generalized force f as output variable, or vice versa. In particular, in rate-independent hysteretic models, the output variable does not depend on the rate of variation of the input.[62][63]
Rate-independent hysteretic models can be classified into four different categories depending on the type of equation that needs to be solved to compute the output variable:
Some notable hysteretic models are listed below, along with their associated fields.
When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph.
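Given sampled loop data, the work per cycle can be estimated with the shoelace formula for the enclosed area. The xs/ys arrays below are hypothetical samples of the two conjugate variables around one full cycle.

```python
def loop_area(xs, ys):
    """Area enclosed by a closed hysteresis loop (shoelace formula).

    xs, ys: the intensive and extensive variable sampled around one
    full cycle; the result is the work done per cycle.
    """
    n = len(xs)
    return 0.5 * abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]
                         for i in range(n)))

# A loop traversing the unit square encloses area 1:
print(loop_area([0, 1, 1, 0], [0, 0, 1, 1]))  # 1.0
```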
https://en.wikipedia.org/wiki/Hysteresis
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t), where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.[2]
Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy[citation needed], and many other technical areas where systems of ordinary differential equations present themselves.
The defining properties of any LTI system are linearity and time invariance.
The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system y(t) is simply the convolution of the input to the system x(t) with the system's impulse response h(t). This is called a continuous-time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: y_i = x_i ∗ h_i, where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral.
LTI systems can also be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform A_s e^{st} for some complex amplitude A_s and complex frequency s, the output will be some complex constant times the input, say B_s e^{st} for some new complex amplitude B_s. The ratio B_s/A_s is the transfer function at frequency s.
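The eigenfunction property can be checked numerically for a discrete-time analogue: feeding x[n] = z^n through a filter yields H(z)·z^n, the same exponential scaled by the transfer function. The impulse response below is an arbitrary illustrative example.

```python
import cmath

# An arbitrary short FIR impulse response (illustrative values):
h = [0.5, 0.25, 0.125]
z = cmath.exp(1j * 0.3)                 # complex exponential, |z| = 1

# Transfer function evaluated at z: H(z) = sum_k h[k] * z**(-k)
H = sum(hk * z ** -k for k, hk in enumerate(h))

def output(n):
    """Convolution of h with the doubly-infinite input x[n] = z**n."""
    return sum(h[k] * z ** (n - k) for k in range(len(h)))

# The output equals H * z**n for every n: the input is an eigenfunction.
for n in range(5):
    assert abs(output(n) - H * z ** n) < 1e-12
```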
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady state. LTI systems cannot produce frequency components that are not in the input.
LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.
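A quick numerical check that such a system is linear, using a semi-implicit Euler simulation of a spring–mass–damper; the m, c, k values are illustrative:

```python
def smd_response(force, dt, m=1.0, c=0.5, k=2.0):
    """Semi-implicit Euler simulation of a spring-mass-damper system:
    m*x'' + c*x' + k*x = f(t).  Returns the displacement samples."""
    x, v, out = 0.0, 0.0, []
    for f in force:
        a = (f - c * v - k * x) / m
        v += a * dt
        x += v * dt
        out.append(x)
    return out

# Linearity check: doubling the input force doubles the output.
f1 = [1.0] * 100
y1 = smd_response(f1, dt=0.01)
y2 = smd_response([2.0 * f for f in f1], dt=0.01)
assert all(abs(b - 2 * a) < 1e-9 for a, b in zip(y1, y2))
```

The same scaling and superposition checks would pass for the equivalent series RLC circuit, since both obey a linear constant-coefficient differential equation.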
Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzingfilter banksandMIMOsystems, it is often useful to considervectorsof signals.
A linear system that is not time-invariant can be solved using other approaches such as theGreen functionmethod.
The behavior of a linear, continuous-time, time-invariant system with input signalx(t) and output signaly(t) is described by the convolution integral:[3]y(t)=(x∗h)(t)=∫−∞∞x(τ)⋅h(t−τ)dτ{\displaystyle y(t)=(x*h)(t)=\int _{-\infty }^{\infty }x(\tau )\cdot h(t-\tau )\,\mathrm {d} \tau }
whereh(t){\textstyle h(t)}is the system's response to animpulse:x(τ)=δ(τ){\textstyle x(\tau )=\delta (\tau )}.y(t){\textstyle y(t)}is therefore proportional to a weighted average of the input functionx(τ){\textstyle x(\tau )}. The weighting function ish(−τ){\textstyle h(-\tau )}, simply shifted by amountt{\textstyle t}. Ast{\textstyle t}changes, the weighting function emphasizes different parts of the input function. Whenh(τ){\textstyle h(\tau )}is zero for all negativeτ{\textstyle \tau },y(t){\textstyle y(t)}depends only on values ofx{\textstyle x}prior to timet{\textstyle t}, and the system is said to becausal.
To understand why the convolution produces the output of an LTI system, let the notation{x(u−τ);u}{\textstyle \{x(u-\tau );\ u\}}represent the functionx(u−τ){\textstyle x(u-\tau )}with variableu{\textstyle u}and constantτ{\textstyle \tau }. And let the shorter notation{x}{\textstyle \{x\}}represent{x(u);u}{\textstyle \{x(u);\ u\}}. Then a continuous-time system transforms an input function,{x},{\textstyle \{x\},}into an output function,{y}{\textstyle \{y\}}. And in general, every value of the output can depend on every value of the input. This concept is represented by:y(t)=defOt{x},{\displaystyle y(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{x\},}whereOt{\textstyle O_{t}}is the transformation operator for timet{\textstyle t}. In a typical system,y(t){\textstyle y(t)}depends most heavily on the values ofx{\textstyle x}that occurred near timet{\textstyle t}. Unless the transform itself changes witht{\textstyle t}, the output function is just constant, and the system is uninteresting.
For a linear system,O{\textstyle O}must satisfyEq.2:Ot{∫−∞∞cτxτ(u)dτ;u}=∫−∞∞cτyτ(t)dτ,{\displaystyle O_{t}\left\{\int _{-\infty }^{\infty }c_{\tau }\,x_{\tau }(u)\,\mathrm {d} \tau ;\ u\right\}=\int _{-\infty }^{\infty }c_{\tau }\,y_{\tau }(t)\,\mathrm {d} \tau ,}whereyτ(t)=defOt{xτ}{\textstyle y_{\tau }(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{x_{\tau }\}}.
And the time-invariance requirement is:Ot{x(u−τ);u}=y(t−τ)=defOt−τ{x}.{\displaystyle O_{t}\{x(u-\tau );\ u\}=y(t-\tau )\mathrel {\stackrel {\text{def}}{=}} O_{t-\tau }\{x\}.}
In this notation, we can write theimpulse responseash(t)=defOt{δ(u);u}.{\textstyle h(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{\delta (u);\ u\}.}
Similarly:h(t−τ)=defOt−τ{δ(u);u}=Ot{δ(u−τ);u}.{\displaystyle h(t-\tau )\mathrel {\stackrel {\text{def}}{=}} O_{t-\tau }\{\delta (u);\ u\}=O_{t}\{\delta (u-\tau );\ u\}.}
Substituting this result into the convolution integral:(x∗h)(t)=∫−∞∞x(τ)⋅h(t−τ)dτ=∫−∞∞x(τ)⋅Ot{δ(u−τ);u}dτ,{\displaystyle {\begin{aligned}(x*h)(t)&=\int _{-\infty }^{\infty }x(\tau )\cdot h(t-\tau )\,\mathrm {d} \tau \\[4pt]&=\int _{-\infty }^{\infty }x(\tau )\cdot O_{t}\{\delta (u-\tau );\ u\}\,\mathrm {d} \tau ,\,\end{aligned}}}
which has the form of the right side ofEq.2for the casecτ=x(τ){\textstyle c_{\tau }=x(\tau )}andxτ(u)=δ(u−τ).{\textstyle x_{\tau }(u)=\delta (u-\tau ).}
Eq.2then allows this continuation:(x∗h)(t)=Ot{∫−∞∞x(τ)⋅δ(u−τ)dτ;u}=Ot{x(u);u}=defy(t).{\displaystyle {\begin{aligned}(x*h)(t)&=O_{t}\left\{\int _{-\infty }^{\infty }x(\tau )\cdot \delta (u-\tau )\,\mathrm {d} \tau ;\ u\right\}\\[4pt]&=O_{t}\left\{x(u);\ u\right\}\\&\mathrel {\stackrel {\text{def}}{=}} y(t).\,\end{aligned}}}
In summary, the input function,{x}{\textstyle \{x\}}, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown atEq.1. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulseresponses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral.
The mathematical operations above have a simple graphical simulation.[4]
Aneigenfunctionis a function for which the output of the operator is a scaled version of the same function. That is,Hf=λf,{\displaystyle {\mathcal {H}}f=\lambda f,}wherefis the eigenfunction andλ{\displaystyle \lambda }is theeigenvalue, a constant.
Theexponential functionsAest{\displaystyle Ae^{st}}, whereA,s∈C{\displaystyle A,s\in \mathbb {C} }, areeigenfunctionsof alinear,time-invariantoperator. A simple proof illustrates this concept. Suppose the input isx(t)=Aest{\displaystyle x(t)=Ae^{st}}. The output of the system with impulse responseh(t){\displaystyle h(t)}is then∫−∞∞h(t−τ)Aesτdτ{\displaystyle \int _{-\infty }^{\infty }h(t-\tau )Ae^{s\tau }\,\mathrm {d} \tau }which, by the commutative property ofconvolution, is equivalent to∫−∞∞h(τ)Aes(t−τ)dτ⏞Hf=∫−∞∞h(τ)Aeste−sτdτ=Aest∫−∞∞h(τ)e−sτdτ=Aest⏟Input⏞fH(s)⏟Scalar⏞λ,{\displaystyle {\begin{aligned}\overbrace {\int _{-\infty }^{\infty }h(\tau )\,Ae^{s(t-\tau )}\,\mathrm {d} \tau } ^{{\mathcal {H}}f}&=\int _{-\infty }^{\infty }h(\tau )\,Ae^{st}e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=Ae^{st}\int _{-\infty }^{\infty }h(\tau )\,e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=\overbrace {\underbrace {Ae^{st}} _{\text{Input}}} ^{f}\overbrace {\underbrace {H(s)} _{\text{Scalar}}} ^{\lambda },\\\end{aligned}}}
where the scalarH(s)=def∫−∞∞h(t)e−stdt{\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t)e^{-st}\,\mathrm {d} t}is dependent only on the parameters.
So the system's response is a scaled version of the input. In particular, for anyA,s∈C{\displaystyle A,s\in \mathbb {C} }, the system output is the product of the inputAest{\displaystyle Ae^{st}}and the constantH(s){\displaystyle H(s)}. Hence,Aest{\displaystyle Ae^{st}}is aneigenfunctionof an LTI system, and the correspondingeigenvalueisH(s){\displaystyle H(s)}.
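This eigenfunction property can be checked numerically. Assuming, purely for illustration, the causal systemh(t)=e^{−t}u(t), whose transfer function isH(s)=1/(s+1), the convolution output at one instant should equalH(s)e^{st}:

```python
import numpy as np

s = 2j                                  # a pure complex frequency
dt = 1e-3
tau = np.arange(0, 20, dt)              # h(tau) is negligible beyond tau = 20

t0 = 5.0                                # evaluate the output at one instant
# y(t0) = integral over tau of h(tau) * x(t0 - tau), with x(t) = e^{s t}
y_t0 = np.sum(np.exp(-tau) * np.exp(s * (t0 - tau))) * dt

H = 1 / (s + 1)                         # Laplace transform of e^{-t} u(t)
expected = H * np.exp(s * t0)
```

The Riemann-sum approximation agrees with the analytic eigenvalue relation to within the step size.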
It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems.
Letv(t)=eiωt{\displaystyle v(t)=e^{i\omega t}}be some complex exponential andva(t)=eiω(t+a){\displaystyle v_{a}(t)=e^{i\omega (t+a)}}a time-shifted version of it.
H[va](t)=eiωaH[v](t){\displaystyle H[v_{a}](t)=e^{i\omega a}H[v](t)}by linearity with respect to the constanteiωa{\displaystyle e^{i\omega a}}.
H[va](t)=H[v](t+a){\displaystyle H[v_{a}](t)=H[v](t+a)}by time invariance ofH{\displaystyle H}.
SoH[v](t+a)=eiωaH[v](t){\displaystyle H[v](t+a)=e^{i\omega a}H[v](t)}. Settingt=0{\displaystyle t=0}and renaming the variable, we getH[v](τ)=eiωτH[v](0){\displaystyle H[v](\tau )=e^{i\omega \tau }H[v](0)}; that is, a complex exponential inputeiωτ{\displaystyle e^{i\omega \tau }}produces a complex exponential output of the same frequency.
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sidedLaplace transformH(s)=defL{h(t)}=def∫0∞h(t)e−stdt{\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}\{h(t)\}\mathrel {\stackrel {\text{def}}{=}} \int _{0}^{\infty }h(t)e^{-st}\,\mathrm {d} t}is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the formejωt{\displaystyle e^{j\omega t}}whereω∈R{\displaystyle \omega \in \mathbb {R} }andj=def−1{\displaystyle j\mathrel {\stackrel {\text{def}}{=}} {\sqrt {-1}}}). TheFourier transformH(jω)=F{h(t)}{\displaystyle H(j\omega )={\mathcal {F}}\{h(t)\}}gives the eigenvalues for pure complex sinusoids. Both ofH(s){\displaystyle H(s)}andH(jω){\displaystyle H(j\omega )}are called thesystem function,system response, ortransfer function.
The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values oftless than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as thebilateral Laplace transform).
The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are notsquare integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via theWiener–Khinchin theoremeven when Fourier transforms of the signals do not exist.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms existy(t)=(h∗x)(t)=def∫−∞∞h(t−τ)x(τ)dτ=defL−1{H(s)X(s)}.{\displaystyle y(t)=(h*x)(t)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t-\tau )x(\tau )\,\mathrm {d} \tau \mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}^{-1}\{H(s)X(s)\}.}
One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequencys=jω, whereω= 2πf, we obtain |H(s)| which is the system gain for frequencyf. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)).
When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variables.L{ddtx(t)}=sX(s){\displaystyle {\mathcal {L}}\left\{{\frac {\mathrm {d} }{\mathrm {d} t}}x(t)\right\}=sX(s)}
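For the one-sided transform, the full rule is L{x′}(s) = sX(s) − x(0), which reduces to multiplication bysonly when the initial value is zero. A numerical check, using the assumed examplex(t) = e^{−2t}(soX(s) = 1/(s+2)andx(0) = 1):

```python
import numpy as np

s = 1.0
dt = 1e-4
t = np.arange(0, 40, dt)

x = np.exp(-2 * t)
dx = -2 * np.exp(-2 * t)                # x'(t), computed analytically

# One-sided Laplace transforms as Riemann sums:
X = np.sum(x * np.exp(-s * t)) * dt     # ~ 1/(s + 2)
dX = np.sum(dx * np.exp(-s * t)) * dt   # ~ s*X - x(0) = -2/(s + 2)
```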
Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time, however this restriction is not present in other cases such as image processing.
A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality ish(t)=0∀t<0,{\displaystyle h(t)=0\quad \forall t<0,}
whereh(t){\displaystyle h(t)}is the impulse response. It is not possible in general to determine causality from thetwo-sided Laplace transform. However, when working in the time domain, one normally uses theone-sided Laplace transformwhich requires causality.
A system isbounded-input, bounded-output stable(BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying‖x(t)‖∞<∞{\displaystyle \ \|x(t)\|_{\infty }<\infty }
leads to an output satisfying‖y(t)‖∞<∞{\displaystyle \ \|y(t)\|_{\infty }<\infty }
(that is, a finitemaximum absolute valueofx(t){\displaystyle x(t)}implies a finite maximum absolute value ofy(t){\displaystyle y(t)}), then the system is stable. A necessary and sufficient condition is thath(t){\displaystyle h(t)}, the impulse response, is inL1(has a finite L1norm):‖h(t)‖1=∫−∞∞|h(t)|dt<∞.{\displaystyle \|h(t)\|_{1}=\int _{-\infty }^{\infty }|h(t)|\,\mathrm {d} t<\infty .}
In the frequency domain, theregion of convergencemust contain the imaginary axiss=jω{\displaystyle s=j\omega }.
As an example, the ideallow-pass filterwith impulse response equal to asinc functionis not BIBO stable, because the sinc function does not have a finite L1norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero fort<0{\displaystyle t<0}and equal to a sinusoid at thecut-off frequencyfort>0{\displaystyle t>0}, then the output will be unbounded for all times other than the zero crossings.
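The contrast between the two cases can be seen numerically: the partial L1norm ofe^{−t}u(t)converges, while that of the sinc impulse response keeps growing (roughly like log T). The integration grids below are arbitrary choices:

```python
import numpy as np

dt = 0.01
t_short = np.arange(dt, 100, dt)
t_long = np.arange(dt, 10000, dt)

# h(t) = e^{-t} u(t): the L1 norm converges (to 1), so the system is stable.
l1_exp = np.sum(np.abs(np.exp(-t_long))) * dt

# Ideal low-pass impulse response ~ sinc(t): the partial L1 norm does not
# settle; extending the horizon keeps adding mass.
l1_sinc_short = np.sum(np.abs(np.sinc(t_short))) * dt
l1_sinc_long = np.sum(np.abs(np.sinc(t_long))) * dt
```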
Almost everything in continuous-time systems has a counterpart in discrete-time systems.
In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.
In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. Ifx(t){\displaystyle x(t)}is a CT signal, then thesampling circuitused before ananalog-to-digital converterwill transform it to a DT signal:xn=defx(nT)∀n∈Z,{\displaystyle x_{n}\mathrel {\stackrel {\text{def}}{=}} x(nT)\qquad \forall \,n\in \mathbb {Z} ,}whereTis thesampling period. Before sampling, the input signal is normally run through a so-calledNyquist filterwhich removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency componentabovethe folding frequency (orNyquist frequency) isaliasedto a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
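Aliasing is easy to demonstrate. With sampling periodT = 1(folding frequency 0.5), a 0.9 Hz cosine produces exactly the same samples as a 0.1 Hz cosine, so the two are indistinguishable after sampling (frequencies chosen only for illustration):

```python
import numpy as np

T = 1.0                                 # sampling period; folding freq = 0.5
n = np.arange(50)

# cos(2*pi*0.9*n) = cos(2*pi*n - 2*pi*0.1*n) = cos(2*pi*0.1*n) for integer n.
high = np.cos(2 * np.pi * 0.9 * n * T)  # 0.9 Hz, above the folding frequency
low = np.cos(2 * np.pi * 0.1 * n * T)   # 0.1 Hz alias
```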
Let{x[m−k];m}{\displaystyle \{x[m-k];\ m\}}represent the sequence{x[m−k];for all integer values ofm}.{\displaystyle \{x[m-k];{\text{ for all integer values of }}m\}.}
And let the shorter notation{x}{\displaystyle \{x\}}represent{x[m];m}.{\displaystyle \{x[m];\ m\}.}
A discrete system transforms an input sequence,{x}{\displaystyle \{x\}}into an output sequence,{y}.{\displaystyle \{y\}.}In general, every element of the output can depend on every element of the input. Representing the transformation operator byO{\displaystyle O}, we can write:y[n]=defOn{x}.{\displaystyle y[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{x\}.}
Note that unless the transform itself changes withn, the output sequence is just constant, and the system is uninteresting. (Thus the subscript,n.) In a typical system,y[n] depends most heavily on the elements ofxwhose indices are nearn.
For the special case of theKronecker delta function,x[m]=δ[m],{\displaystyle x[m]=\delta [m],}the output sequence is theimpulse response:h[n]=defOn{δ[m];m}.{\displaystyle h[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{\delta [m];\ m\}.}
For a linear system,O{\displaystyle O}must satisfyEq.4:On{∑k=−∞∞ckxk[m];m}=∑k=−∞∞ckyk[n],{\displaystyle O_{n}\left\{\sum _{k=-\infty }^{\infty }c_{k}\,x_{k}[m];\ m\right\}=\sum _{k=-\infty }^{\infty }c_{k}\,y_{k}[n],}whereyk[n]=defOn{xk}{\displaystyle y_{k}[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{x_{k}\}}.
And the time-invariance requirement is (Eq.5):On{x[m−k];m}=y[n−k]=defOn−k{x}.{\displaystyle O_{n}\{x[m-k];\ m\}=y[n-k]\mathrel {\stackrel {\text{def}}{=}} O_{n-k}\{x\}.}
In such a system, the impulse response,{h}{\displaystyle \{h\}}, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:x[m]≡∑k=−∞∞x[k]⋅δ[m−k],{\displaystyle x[m]\equiv \sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k],}
which expresses{x}{\displaystyle \{x\}}in terms of a sum of weighted delta functions.
Therefore:y[n]=On{x}=On{∑k=−∞∞x[k]⋅δ[m−k];m}=∑k=−∞∞x[k]⋅On{δ[m−k];m},{\displaystyle {\begin{aligned}y[n]=O_{n}\{x\}&=O_{n}\left\{\sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k];\ m\right\}\\&=\sum _{k=-\infty }^{\infty }x[k]\cdot O_{n}\{\delta [m-k];\ m\},\,\end{aligned}}}
where we have invokedEq.4for the caseck=x[k]{\displaystyle c_{k}=x[k]}andxk[m]=δ[m−k]{\displaystyle x_{k}[m]=\delta [m-k]}.
And because ofEq.5, we may write:On{δ[m−k];m}=On−k{δ[m];m}=defh[n−k].{\displaystyle {\begin{aligned}O_{n}\{\delta [m-k];\ m\}&\mathrel {\stackrel {\quad }{=}} O_{n-k}\{\delta [m];\ m\}\\&\mathrel {\stackrel {\text{def}}{=}} h[n-k].\end{aligned}}}
Therefore:y[n]=∑k=−∞∞x[k]⋅h[n−k],{\displaystyle y[n]=\sum _{k=-\infty }^{\infty }x[k]\cdot h[n-k],}
which is the familiar discrete convolution formula. The operatorOn{\displaystyle O_{n}}can therefore be interpreted as proportional to a weighted average of the functionx[k].
The weighting function ish[−k], simply shifted by amountn. Asnchanges, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse atn=0 is a "time" reversed copy of the unshifted weighting function. Whenh[k] is zero for all negativek, the system is said to becausal.
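The discrete weighted-average formula can be implemented directly from its definition and checked against a library convolution; the input and the two-tap causal impulse response below are arbitrary illustrative values:

```python
import numpy as np

def lti_output(x, h, n):
    """y[n] = sum_k x[k] * h[n - k], with h causal (h[m] = 0 for m < 0)."""
    return sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))

x = [1.0, 2.0, 3.0]
h = [0.5, 0.25]                         # causal FIR impulse response
y = [lti_output(x, h, n) for n in range(4)]
```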
Aneigenfunctionis a function for which the output of the operator is the same function, scaled by some constant. In symbols,Hf=λf,{\displaystyle {\mathcal {H}}f=\lambda f,}
wherefis the eigenfunction andλ{\displaystyle \lambda }is theeigenvalue, a constant.
Theexponential functionszn=esTn{\displaystyle z^{n}=e^{sTn}}, wheren∈Z{\displaystyle n\in \mathbb {Z} }, areeigenfunctionsof alinear,time-invariantoperator.T∈R{\displaystyle T\in \mathbb {R} }is the sampling interval, andz=esT,z,s∈C{\displaystyle z=e^{sT},\ z,s\in \mathbb {C} }. A simple proof illustrates this concept.
Suppose the input isx[n]=zn{\displaystyle x[n]=z^{n}}. The output of the system with impulse responseh[n]{\displaystyle h[n]}is then∑m=−∞∞h[n−m]zm{\displaystyle \sum _{m=-\infty }^{\infty }h[n-m]\,z^{m}}
which is equivalent to the following by the commutative property ofconvolution∑m=−∞∞h[m]z(n−m)=zn∑m=−∞∞h[m]z−m=znH(z){\displaystyle \sum _{m=-\infty }^{\infty }h[m]\,z^{(n-m)}=z^{n}\sum _{m=-\infty }^{\infty }h[m]\,z^{-m}=z^{n}H(z)}whereH(z)=def∑m=−∞∞h[m]z−m{\displaystyle H(z)\mathrel {\stackrel {\text{def}}{=}} \sum _{m=-\infty }^{\infty }h[m]z^{-m}}is dependent only on the parameterz.
Sozn{\displaystyle z^{n}}is aneigenfunctionof an LTI system because the system response is the same as the input times the constantH(z){\displaystyle H(z)}.
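A numerical sketch of this eigenfunction relation, using a hypothetical three-tap FIR system (coefficients chosen arbitrarily): the response tox[n] = zⁿatanyindex equalszⁿ · H(z).

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])          # assumed FIR impulse response
z = 0.8 * np.exp(1j * 0.3)              # an arbitrary complex number

n = 10                                  # any output index
# y[n] = sum_m h[m] * z^(n - m), since x[n] = z^n for all n
y_n = sum(h[m] * z ** (n - m) for m in range(len(h)))
H = sum(h[m] * z ** (-m) for m in range(len(h)))   # H(z)
```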
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. TheZ transformH(z)=Z{h[n]}=∑n=−∞∞h[n]z−n{\displaystyle H(z)={\mathcal {Z}}\{h[n]\}=\sum _{n=-\infty }^{\infty }h[n]z^{-n}}
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the formejωn{\displaystyle e^{j\omega n}}, whereω∈R{\displaystyle \omega \in \mathbb {R} }. These can also be written aszn{\displaystyle z^{n}}withz=ejω{\displaystyle z=e^{j\omega }}. Thediscrete-time Fourier transform(DTFT)H(ejω)=F{h[n]}{\displaystyle H(e^{j\omega })={\mathcal {F}}\{h[n]\}}gives the eigenvalues of pure sinusoids. Both ofH(z){\displaystyle H(z)}andH(ejω){\displaystyle H(e^{j\omega })}are called thesystem function,system response, ortransfer function.
Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for t<0. The discrete-time Fourier series may be used for analyzing periodic signals.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,y[n]=(h∗x)[n]=∑m=−∞∞h[n−m]x[m]=Z−1{H(z)X(z)}.{\displaystyle y[n]=(h*x)[n]=\sum _{m=-\infty }^{\infty }h[n-m]x[m]={\mathcal {Z}}^{-1}\{H(z)X(z)\}.}
Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.
The Z transform of the delay operator is a simple multiplication byz−1. That is,Z{x[n−1]}=z−1X(z).{\displaystyle {\mathcal {Z}}\{x[n-1]\}=z^{-1}X(z).}
The input-output characteristics of discrete-time LTI system are completely described by its impulse responseh[n]{\displaystyle h[n]}.
Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer functionisstable.
A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input.[5]A necessary and sufficient condition for causality ish[n]=0∀n<0,{\displaystyle h[n]=0\ \forall n<0,}whereh[n]{\displaystyle h[n]}is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique. When aregion of convergenceis specified, then causality can be determined.
A system isbounded input, bounded output stable(BIBO stable) if, for every bounded input, the output is finite. Mathematically, if‖x[n]‖∞<∞{\displaystyle \|x[n]\|_{\infty }<\infty }
implies that‖y[n]‖∞<∞{\displaystyle \|y[n]\|_{\infty }<\infty }
(that is, if bounded input implies bounded output, in the sense that themaximum absolute valuesofx[n]{\displaystyle x[n]}andy[n]{\displaystyle y[n]}are finite), then the system is stable. A necessary and sufficient condition is thath[n]{\displaystyle h[n]}, the impulse response, satisfies‖h[n]‖1=def∑n=−∞∞|h[n]|<∞.{\displaystyle \|h[n]\|_{1}\mathrel {\stackrel {\text{def}}{=}} \sum _{n=-\infty }^{\infty }|h[n]|<\infty .}
In the frequency domain, theregion of convergencemust contain theunit circle(i.e., thelocussatisfying|z|=1{\displaystyle |z|=1}for complexz).
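For example, the geometric impulse responseh[n] = aⁿu[n]is BIBO stable exactly when|a| < 1, in which case its L1norm is1/(1 − |a|). A quick numeric confirmation (the valuea = 0.9is an arbitrary choice):

```python
import numpy as np

a = 0.9
n = np.arange(500)                      # 500 terms: the tail is negligible
l1 = np.sum(np.abs(a ** n))             # partial L1 norm of h[n] = a^n u[n]
```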
https://en.wikipedia.org/wiki/LTI_system_theory
Intime seriesmodeling, anonlinear autoregressive exogenous model(NARX) is anonlinearautoregressive modelwhich hasexogenousinputs. This means that the model relates the current value of a time series to both: past values of the same series; and current and past values of the driving (exogenous) series, an externally determined series that influences the series of interest.
In addition, the model contains an error term which relates to the fact that knowledge of other terms will not enable the current value of the time series to be predicted exactly.
Such a model can be stated algebraically asyt=F(yt−1,yt−2,yt−3,…,ut,ut−1,ut−2,ut−3,…)+εt{\displaystyle y_{t}=F(y_{t-1},y_{t-2},y_{t-3},\ldots ,u_{t},u_{t-1},u_{t-2},u_{t-3},\ldots )+\varepsilon _{t}}
Hereyis the variable of interest, anduis the externally determined variable. In this scheme, information aboutuhelps predicty, as do previous values ofyitself. Hereεis theerrorterm (sometimes called noise). For example,ymay be air temperature at noon, andumay be the day of the year (day-number within year).
The functionFis some nonlinear function, such as apolynomial.Fcan be aneural network, awavelet network, asigmoid networkand so on. To test for non-linearity in a time series, theBDS test(Brock-Dechert-Scheinkman test) developed foreconometricscan be used.
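A minimal NARX simulation sketch, assuming a polynomialFwith made-up coefficients and the noise term set to zero (nothing here comes from a fitted model):

```python
import numpy as np

# Hypothetical NARX model with one lag of y and u and no noise:
#   y[t] = 0.5*y[t-1] + 0.1*y[t-1]**2 + 0.8*u[t-1]
def simulate_narx(u, y0=0.0):
    y = [y0]
    for t in range(1, len(u)):
        y.append(0.5 * y[-1] + 0.1 * y[-1] ** 2 + 0.8 * u[t - 1])
    return np.array(y)

u = np.ones(5)                          # a constant exogenous input
y = simulate_narx(u)
```

In practice the coefficients (or the network weights playing the role ofF) would be estimated from data rather than fixed by hand.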
https://en.wikipedia.org/wiki/Nonlinear_autoregressive_exogenous_model
Anopen systemis a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of anisolated systemwhich exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system.
The concept of an open system was formalized within a framework that enabled one to interrelate thetheory of the organism,thermodynamics, andevolutionary theory.[1]This concept was expanded upon with the advent ofinformation theoryand subsequentlysystems theory. Today the concept has its applications in the natural and social sciences.
In thenatural sciencesan open system is one whose border is permeable to bothenergyandmass.[2]By contrast, aclosed systemis permeable to energy but not to matter.
The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is theradiant energysystem, which receives its energy fromsolar radiation– an energy source that can be regarded as inexhaustible for all practical purposes.
In thesocial sciencesan open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopherKostas Axelosargued that seeing the"world system"as inherently open (though unified) would solve many of the problems in the social sciences, including that ofpraxis(the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes tomakingit closed, and is thus a conservative approach.[3][need quotation to verify]TheAlthusserianconcept ofoverdetermination(drawing on Sigmund Freud) posits that there are always multiple causes in every event.[4]
David Harveyuses this to argue that when systems such ascapitalismenter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation.[5]Looking at the crisis in accumulation, Harvey argues that phenomena such asforeign direct investment,privatizationof state-owned resources, andaccumulation by dispossessionact as necessary outlets when capital has overaccumulated too much in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the Asian and South-East Asian financial crisis of 1997–8, involving "hedge fund raising" of national currencies, as examples of this.[6]
Structural functionalistssuch asTalcott Parsonsand neofunctionalists such asNiklas Luhmannhave incorporated system theory to describe society and its components.
Thesociology of religionfinds both open and closed systems within the field ofreligion.[7][8]
https://en.wikipedia.org/wiki/Open_system_(systems_theory)
Estimation theoryis a branch ofstatisticsthat deals with estimating the values ofparametersbased on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. Anestimatorattempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:[1]the probabilistic approach, which assumes that the measured data are random with a probability distribution dependent on the parameters of interest; and the set-membership approach, which assumes that the measured data vector belongs to a set which depends on the parameter vector.
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, inradarthe aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with anoisysignal.
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is astatistical sample– a set of data points taken from arandom vector(RV) of sizeN. Put into avector,x=[x[0]x[1]⋮x[N−1]].{\displaystyle \mathbf {x} ={\begin{bmatrix}x[0]\\x[1]\\\vdots \\x[N-1]\end{bmatrix}}.}Secondly, there areMparametersθ=[θ1θ2⋮θM],{\displaystyle {\boldsymbol {\theta }}={\begin{bmatrix}\theta _{1}\\\theta _{2}\\\vdots \\\theta _{M}\end{bmatrix}},}whose values are to be estimated. Third, the continuousprobability density function(pdf) or its discrete counterpart, theprobability mass function(pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:p(x|θ).{\displaystyle p(\mathbf {x} |{\boldsymbol {\theta }}).\,}It is also possible for the parameters themselves to have a probability distribution (e.g.,Bayesian statistics). It is then necessary to define theBayesian probabilityπ(θ).{\displaystyle \pi ({\boldsymbol {\theta }}).\,}After the model is formed, the goal is to estimate the parameters, with the estimates commonly denotedθ^{\displaystyle {\hat {\boldsymbol {\theta }}}}, where the "hat" indicates the estimate.
One common estimator is theminimum mean squared error(MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameterse=θ^−θ{\displaystyle \mathbf {e} ={\hat {\boldsymbol {\theta }}}-{\boldsymbol {\theta }}}as the basis for optimality. This error term is then squared and theexpected valueof this squared value is minimized for the MMSE estimator.
Commonly used estimators (estimation methods) and topics related to them include:
Consider a receiveddiscrete signal,x[n]{\displaystyle x[n]}, ofN{\displaystyle N}independentsamplesthat consists of an unknown constantA{\displaystyle A}withadditive white Gaussian noise(AWGN)w[n]{\displaystyle w[n]}with zeromeanand knownvarianceσ2{\displaystyle \sigma ^{2}}(i.e.,N(0,σ2){\displaystyle {\mathcal {N}}(0,\sigma ^{2})}).
Since the variance is known then the only unknown parameter isA{\displaystyle A}.
The model for the signal is thenx[n]=A+w[n]n=0,1,…,N−1{\displaystyle x[n]=A+w[n]\quad n=0,1,\dots ,N-1}
Two possible (of many) estimators for the parameterA{\displaystyle A}are:A^1=x[0]{\displaystyle {\hat {A}}_{1}=x[0]}, andA^2=1N∑n=0N−1x[n]{\displaystyle {\hat {A}}_{2}={\frac {1}{N}}\sum _{n=0}^{N-1}x[n]}, which is thesample mean.
Both of these estimators have ameanofA{\displaystyle A}, which can be shown through taking theexpected valueof each estimatorE[A^1]=E[x[0]]=A{\displaystyle \mathrm {E} \left[{\hat {A}}_{1}\right]=\mathrm {E} \left[x[0]\right]=A}andE[A^2]=E[1N∑n=0N−1x[n]]=1N[∑n=0N−1E[x[n]]]=1N[NA]=A{\displaystyle \mathrm {E} \left[{\hat {A}}_{2}\right]=\mathrm {E} \left[{\frac {1}{N}}\sum _{n=0}^{N-1}x[n]\right]={\frac {1}{N}}\left[\sum _{n=0}^{N-1}\mathrm {E} \left[x[n]\right]\right]={\frac {1}{N}}\left[NA\right]=A}
At this point, these two estimators would appear to perform the same.
However, the difference between them becomes apparent when comparing the variances.var(A^1)=var(x[0])=σ2{\displaystyle \mathrm {var} \left({\hat {A}}_{1}\right)=\mathrm {var} \left(x[0]\right)=\sigma ^{2}}andvar(A^2)=var(1N∑n=0N−1x[n])=independence1N2[∑n=0N−1var(x[n])]=1N2[Nσ2]=σ2N{\displaystyle \mathrm {var} \left({\hat {A}}_{2}\right)=\mathrm {var} \left({\frac {1}{N}}\sum _{n=0}^{N-1}x[n]\right){\overset {\text{independence}}{=}}{\frac {1}{N^{2}}}\left[\sum _{n=0}^{N-1}\mathrm {var} (x[n])\right]={\frac {1}{N^{2}}}\left[N\sigma ^{2}\right]={\frac {\sigma ^{2}}{N}}}
It would seem that the sample mean is a better estimator since its variance is lower for everyN> 1.
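A Monte Carlo sketch of this comparison (the valuesA = 3,σ = 1,N = 10, and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, trials = 3.0, 1.0, 10, 20000

# Each row is one experiment of N samples x[n] = A + w[n], w ~ N(0, sigma^2).
x = A + sigma * rng.standard_normal((trials, N))
A1 = x[:, 0]                            # first-sample estimator
A2 = x.mean(axis=1)                     # sample-mean estimator

var1 = A1.var()                         # ~ sigma^2
var2 = A2.var()                         # ~ sigma^2 / N
```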
Continuing the example using themaximum likelihoodestimator, theprobability density function(pdf) of the noise for one samplew[n]{\displaystyle w[n]}isp(w[n])=1σ2πexp(−12σ2w[n]2){\displaystyle p(w[n])={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}w[n]^{2}\right)}and the probability ofx[n]{\displaystyle x[n]}becomes (x[n]{\displaystyle x[n]}can be thought of asN(A,σ2){\displaystyle {\mathcal {N}}(A,\sigma ^{2})})p(x[n];A)=1σ2πexp(−12σ2(x[n]−A)2){\displaystyle p(x[n];A)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}(x[n]-A)^{2}\right)}Byindependence, the probability ofx{\displaystyle \mathbf {x} }becomesp(x;A)=∏n=0N−1p(x[n];A)=1(σ2π)Nexp(−12σ2∑n=0N−1(x[n]−A)2){\displaystyle p(\mathbf {x} ;A)=\prod _{n=0}^{N-1}p(x[n];A)={\frac {1}{\left(\sigma {\sqrt {2\pi }}\right)^{N}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}\sum _{n=0}^{N-1}(x[n]-A)^{2}\right)}Taking thenatural logarithmof the pdflnp(x;A)=−Nln(σ2π)−12σ2∑n=0N−1(x[n]−A)2{\displaystyle \ln p(\mathbf {x} ;A)=-N\ln \left(\sigma {\sqrt {2\pi }}\right)-{\frac {1}{2\sigma ^{2}}}\sum _{n=0}^{N-1}(x[n]-A)^{2}}and the maximum likelihood estimator isA^=argmaxlnp(x;A){\displaystyle {\hat {A}}=\arg \max \ln p(\mathbf {x} ;A)}
Taking the firstderivativeof the log-likelihood function∂∂Alnp(x;A)=1σ2[∑n=0N−1(x[n]−A)]=1σ2[∑n=0N−1x[n]−NA]{\displaystyle {\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}(x[n]-A)\right]={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]}and setting it to zero0=1σ2[∑n=0N−1x[n]−NA]=∑n=0N−1x[n]−NA{\displaystyle 0={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]=\sum _{n=0}^{N-1}x[n]-NA}
This results in the maximum likelihood estimatorA^=1N∑n=0N−1x[n]{\displaystyle {\hat {A}}={\frac {1}{N}}\sum _{n=0}^{N-1}x[n]}which is simply the sample mean.
From this example, it was found that the sample mean is the maximum likelihood estimator forN{\displaystyle N}samples of a fixed, unknown parameter corrupted by AWGN.
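The same conclusion can be reached without calculus by maximizing the log-likelihood numerically over a grid of candidate values (data, seed, and grid are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = 2.5 + rng.standard_normal(100)      # data: A = 2.5, sigma = 1 (assumed)

# Up to constants, the Gaussian log-likelihood in A is -sum((x - A)^2).
A_grid = np.linspace(1.0, 4.0, 30001)
log_lik = -np.sum((x[None, :] - A_grid[:, None]) ** 2, axis=1)
A_hat = A_grid[np.argmax(log_lik)]      # grid maximizer ~ sample mean
```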
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number {\displaystyle {\mathcal {I}}(A)=\mathrm {E} \left(\left[{\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)\right]^{2}\right)=-\mathrm {E} \left[{\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)\right]} and, copying from above, {\displaystyle {\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]}
Taking the second derivative {\displaystyle {\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}(-N)={\frac {-N}{\sigma ^{2}}}} and finding the negative expected value is trivial since it is now a deterministic constant {\displaystyle -\mathrm {E} \left[{\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)\right]={\frac {N}{\sigma ^{2}}}}
Finally, putting the Fisher information into {\displaystyle \mathrm {var} \left({\hat {A}}\right)\geq {\frac {1}{\mathcal {I}}}} results in {\displaystyle \mathrm {var} \left({\hat {A}}\right)\geq {\frac {\sigma ^{2}}{N}}}
Comparing this to the variance of the sample mean (determined previously) shows that the variance of the sample mean is equal to the Cramér–Rao lower bound for all values of {\displaystyle N} and {\displaystyle A}.
In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
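The efficiency claim can be illustrated with a small Monte Carlo sketch; trial counts and parameter values here are illustrative assumptions:

```python
import random

# Sketch: Monte Carlo check that the variance of the sample mean attains
# the CRLB sigma^2 / N. Parameter values and trial count are illustrative.
random.seed(1)
A, sigma, N, trials = 3.0, 2.0, 50, 10000
estimates = []
for _ in range(trials):
    x = [A + random.gauss(0.0, sigma) for _ in range(N)]
    estimates.append(sum(x) / N)

mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / trials
crlb = sigma ** 2 / N  # = 0.08 with these values
print(var_est, crlb)   # empirical variance sits at the bound
```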
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution {\displaystyle 1,2,\dots ,N} with unknown maximum, the UMVU estimator for the maximum is given by {\displaystyle {\frac {k+1}{k}}m-1=m+{\frac {m}{k}}-1} where m is the sample maximum and k is the sample size, sampling without replacement.[2][3] This problem is commonly known as the German tank problem, due to the application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as the sample maximum plus the average gap between observations in the sample, the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[note 1]
This has a variance of[2] {\displaystyle {\frac {1}{k}}{\frac {(N-k)(N+1)}{(k+2)}}\approx {\frac {N^{2}}{k^{2}}}{\text{ for small samples }}k\ll N} so a standard deviation of approximately {\displaystyle N/k}, the (population) average size of a gap between samples; compare {\displaystyle {\frac {m}{k}}} above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
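A quick simulation sketch compares the biased sample maximum with the UMVU estimator; the population maximum, sample size and trial count are illustrative choices:

```python
import random

# Sketch: compare the biased sample maximum (the MLE) with the UMVU estimator
# (k+1)/k * m - 1 for the maximum N of a discrete uniform distribution,
# sampling without replacement. N_true, k and trials are illustrative.
random.seed(2)
N_true, k, trials = 1000, 10, 20000
mle_sum = 0.0   # accumulates the sample maxima (biased low)
umvu_sum = 0.0  # accumulates the UMVU estimates
for _ in range(trials):
    m = max(random.sample(range(1, N_true + 1), k))  # without replacement
    mle_sum += m
    umvu_sum += (k + 1) / k * m - 1

print(mle_sum / trials, umvu_sum / trials)  # MLE averages well below N_true; UMVU near it
```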
Numerous fields require the use of estimation theory.
Some of these fields include:
Measured data are likely to be subject to noise or uncertainty, and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
Source: https://en.wikipedia.org/wiki/Parameter_estimation
In the area of system identification, a dynamical system is structurally identifiable if it is possible to infer its unknown parameters by measuring its output over time. This problem arises in many branches of applied mathematics, since dynamical systems (such as the ones described by ordinary differential equations) are commonly utilized to model physical processes, and these models contain unknown parameters that are typically estimated using experimental data.[1][2][3]
However, in certain cases, the model structure may not permit a unique solution for this estimation problem, even when the data is continuous and free from noise. To avoid potential issues, it is recommended to verify the uniqueness of the solution in advance, prior to conducting any actual experiments.[4] The lack of structural identifiability implies that there are multiple solutions for the problem of system identification, and the impossibility of distinguishing between these solutions suggests that the system has poor forecasting power as a model.[5] On the other hand, control systems have been proposed with the goal of rendering the closed-loop system unidentifiable, decreasing its susceptibility to covert attacks targeting cyber-physical systems.[6]
Source[2]
Consider a linear time-invariant system with the following state-space representation:
x˙1(t)=−θ1x1,x˙2(t)=θ1x1,y(t)=θ2x2,{\displaystyle {\begin{aligned}{\dot {x}}_{1}(t)&=-\theta _{1}x_{1},\\{\dot {x}}_{2}(t)&=\theta _{1}x_{1},\\y(t)&=\theta _{2}x_{2},\end{aligned}}}
and with initial conditions given by {\displaystyle x_{1}(0)=\theta _{3}} and {\displaystyle x_{2}(0)=0}. The solution for the output {\displaystyle y} is
y(t)=θ2θ3e−θ1t(eθ1t−1),{\displaystyle y(t)=\theta _{2}\theta _{3}e^{-\theta _{1}t}\left(e^{\theta _{1}t}-1\right),}
which implies that the parameters {\displaystyle \theta _{2}} and {\displaystyle \theta _{3}} are not structurally identifiable. For instance, the parameters {\displaystyle \theta _{1}=1,\theta _{2}=1,\theta _{3}=1} generate the same output as the parameters {\displaystyle \theta _{1}=1,\theta _{2}=2,\theta _{3}=0.5}.
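This indistinguishability can be checked directly from the closed-form output above (a small sketch; the time grid is arbitrary):

```python
import math

# Sketch: the output y(t) = theta2 * theta3 * e^{-theta1 t} (e^{theta1 t} - 1)
# depends on theta2 and theta3 only through their product, so distinct
# parameter sets produce identical outputs.
def output(theta1, theta2, theta3, t):
    return theta2 * theta3 * math.exp(-theta1 * t) * (math.exp(theta1 * t) - 1)

ts = [0.1 * i for i in range(51)]
y_a = [output(1.0, 1.0, 1.0, t) for t in ts]   # (theta1, theta2, theta3) = (1, 1, 1)
y_b = [output(1.0, 2.0, 0.5, t) for t in ts]   # (theta1, theta2, theta3) = (1, 2, 0.5)
print(max(abs(a - b) for a, b in zip(y_a, y_b)))  # zero up to rounding
```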
Source[7]
A model of a possible glucose homeostasis mechanism is given by the differential equations[8]
G˙=u(0)+u−(c+siI)G,β˙=β(1.4583⋅10−51+(8.4G)1.7−1.7361⋅10−51+(G8.4)8.5),I˙=pβG2α2+G2−γI,{\displaystyle {\begin{aligned}&{\dot {G}}=u(0)+u-(c+s_{\mathrm {i} }\,I)G,\\&{\dot {\beta }}=\beta \left({\frac {1.4583\cdot 10^{-5}}{1+\left({\frac {8.4}{G}}\right)^{1.7}}}-{\frac {1.7361\cdot 10^{-5}}{1+\left({\frac {G}{8.4}}\right)^{8.5}}}\right),\\&{\dot {I}}=p\,\beta \,{\frac {G^{2}}{\alpha ^{2}+G^{2}}}-\gamma \,I,\end{aligned}}}
where (c, si, p, α, γ) are parameters of the system, and the states are the plasma glucose concentration G, the plasma insulin concentration I, and the beta-cell functional mass β. It is possible to show that the parameters p and si are not structurally identifiable: any numerical choices of p and si that have the same product p·si are indistinguishable.[7]
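A forward-Euler sketch of the model makes the p–si trade-off visible. All numeric values here (the inputs u(0) and u, the parameters c, α, γ, and the initial states, with I(0) = 0 so the rescaled insulin trajectory stays proportional) are illustrative assumptions, not values from the source:

```python
# Sketch of the glucose model above with forward-Euler integration.
# It demonstrates that scaling p -> 2p and si -> si/2 (same product p*si)
# leaves the measured glucose output G unchanged: insulin I simply doubles,
# so the term si*I in the G equation is invariant.
def simulate(p, si, T=200.0, dt=0.01):
    u0, u, c, alpha, gamma = 1.0, 0.5, 0.1, 100.0, 0.5  # assumed values
    G, beta, I = 100.0, 1000.0, 0.0                     # assumed initial states
    out = []
    for _ in range(int(T / dt)):
        dG = u0 + u - (c + si * I) * G
        dbeta = beta * (1.4583e-5 / (1 + (8.4 / G) ** 1.7)
                        - 1.7361e-5 / (1 + (G / 8.4) ** 8.5))
        dI = p * beta * G ** 2 / (alpha ** 2 + G ** 2) - gamma * I
        G += dt * dG
        beta += dt * dbeta
        I += dt * dI
        out.append(G)
    return out

g1 = simulate(p=1e-4, si=1e-2)
g2 = simulate(p=2e-4, si=5e-3)  # same product p*si
print(max(abs(a - b) for a, b in zip(g1, g2)))  # glucose trajectories coincide
```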
Structural identifiability is assessed by analyzing the dynamical equations of the system, and does not take into account possible noise in the measurement of the output. In contrast, practical non-identifiability also takes noise into account.[1][9]
The notion of structural identifiability is closely related to observability, which refers to the capacity of inferring the state of the system by measuring the trajectories of the system output. It is also closely related to data informativity, which refers to the proper selection of inputs that enables the inference of the unknown parameters.[10][11]
The (lack of) structural identifiability is also important in the context of dynamical compensation of physiological control systems. These systems should ensure a precise dynamical response despite variations in certain parameters.[12][13] In other words, while in the field of system identification unidentifiability is considered a negative property, in the context of dynamical compensation, unidentifiability becomes a desirable property.[13]
Identifiability also appears in the context of inverse optimal control, where one assumes that the data come from the solution of an optimal control problem with unknown parameters in the objective function. Identifiability then refers to the possibility of inferring those parameters in the objective function from the measured data.[14]
There are many software tools that can be used for analyzing the identifiability of a system, including non-linear systems:[15]
Source: https://en.wikipedia.org/wiki/Structural_identifiability
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.[1]
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design.[2]
Convenient graphical user interface (GUI) system dynamics software developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best-known SD model is probably 1972's The Limits to Growth. This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.
System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that, because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
System dynamics was created during the mid-1950s[3] by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.[2]
During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language, called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations), in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field, titled Industrial Dynamics, in 1961.[2]
From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics.[2] In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle",[4] which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life to teaching what he called "Economic Dynamics", which could be considered a precursor of modern non-equilibrium economics.[5]
The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.[2]
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays.
As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram.[6] A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.[7]
The causal loop diagram of the new product introduction may look as follows:
There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow.
The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters.
Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.[8]
Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software.
A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock.
In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this.
The steps involved in a simulation are:
In this example, the equations that change the two stocks via the flow are:
List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15:
The dynamic simulation results show that the behaviour of the system would be to have growth in adopters that follows a classic s-curve shape. The increase in adopters is very slow initially, then there is exponential growth for a period, followed ultimately by saturation.
To get intermediate values and better accuracy, the model can run in continuous time: we multiply the number of time steps and proportionally divide the values that change stock levels. In this example we multiply the 15 years by 4 to obtain 60 quarters, and we divide the value of the flow by 4. Dividing the value is simplest with the Euler method, but other methods could be employed instead, such as Runge–Kutta methods.
List of the equations in continuous time, for quarters 1 to 60:
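The discrete- and continuous-time runs described above can be sketched in a few lines. The source's own equation list is not reproduced in this text, so the numeric constants (population size, contact rate, adoption fraction) are illustrative assumptions:

```python
# Sketch of the adopters stock-and-flow model: two stocks (Potential adopters,
# Adopters) and one flow (New adopters), driven by word of mouth.
def simulate(years=15, steps_per_year=1):
    dt = 1.0 / steps_per_year
    potential_adopters = 10_000.0   # stock (assumed population)
    adopters = 1.0                  # stock, seeded with a single adopter
    total = potential_adopters + adopters
    contact_rate = 100.0            # contacts per person per year (assumed)
    adoption_fraction = 0.01        # chance a contact leads to adoption (assumed)
    history = []
    for _ in range(years * steps_per_year):
        # Flow: word-of-mouth adoptions, limited by the shrinking pool of potentials.
        new_adopters = (adopters * contact_rate * adoption_fraction
                        * potential_adopters / total)
        potential_adopters -= new_adopters * dt
        adopters += new_adopters * dt
        history.append(adopters)
    return history

yearly = simulate()                     # discrete time: 15 yearly steps
quarterly = simulate(steps_per_year=4)  # Euler refinement: 60 quarters, dt = 1/4
```

Both runs trace the s-curve: slow start, rapid middle, saturation near the total population; the quarterly run is simply the Euler refinement discussed above.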
System dynamics has found application in a wide range of areas, for example population, agriculture,[10] epidemiological, ecological and economic systems, which usually interact strongly with each other.
System dynamics has various "back of the envelope" management applications. It is a potent tool to:
Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies.[11]
System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.[12][13]
A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen.[14] This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the 2008 financial crisis.
The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for Counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third is that thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.[15]
Source: https://en.wikipedia.org/wiki/System_dynamics
In systems theory, a realization of a state space model is an implementation of a given input-output behavior. That is, given an input-output relationship, a realization is a quadruple of (time-varying) matrices {\displaystyle [A(t),B(t),C(t),D(t)]} such that
with {\displaystyle (u(t),y(t))} describing the input and output of the system at time {\displaystyle t}.
For a linear time-invariant system specified by a transfer matrix {\displaystyle H(s)}, a realization is any quadruple of matrices {\displaystyle (A,B,C,D)} such that {\displaystyle H(s)=C(sI-A)^{-1}B+D}.
Any given transfer function which is strictly proper can easily be converted into state-space form by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
The coefficients can now be inserted directly into the state-space model by the following approach:
This state-space realization is called controllable canonical form (also known as phase variable canonical form) because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
The transfer function coefficients can also be used to construct another type of canonical form
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
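The coefficient matrices referenced above are not reproduced in this text, so the following sketch uses the standard companion-matrix layout for both canonical forms and checks numerically, at a sample complex frequency, that both realize the same (illustrative) strictly proper 4th-order transfer function:

```python
import numpy as np

# Sketch: controllable and observable canonical realizations of
#   H(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0).
# Coefficient values below are illustrative choices.
a = [5.0, 4.0, 3.0, 2.0]   # denominator coefficients a0..a3
b = [1.0, 2.0, 0.5, 1.5]   # numerator coefficients b0..b3

# Controllable canonical form: companion matrix, control enters the integrator chain.
Ac = np.zeros((4, 4))
Ac[:3, 1:] = np.eye(3)
Ac[3, :] = [-ai for ai in a]
Bc = np.array([[0.0], [0.0], [0.0], [1.0]])
Cc = np.array([b])
D = np.zeros((1, 1))

# Observable canonical form is the dual (transpose) realization.
Ao, Bo, Co = Ac.T, Cc.T, Bc.T

def H_ss(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    return (C @ np.linalg.solve(s * np.eye(4) - A, B) + D)[0, 0]

def H_tf(s):
    num = sum(bi * s ** i for i, bi in enumerate(b))
    den = s ** 4 + sum(ai * s ** i for i, ai in enumerate(a))
    return num / den

s = 1.0 + 2.0j
print(H_tf(s), H_ss(Ac, Bc, Cc, D, s), H_ss(Ao, Bo, Co, D, s))  # all three agree
```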
If we have an input {\displaystyle u(t)}, an output {\displaystyle y(t)}, and a weighting pattern {\displaystyle T(t,\sigma )}, then a realization is any triple of matrices {\displaystyle [A(t),B(t),C(t)]} such that {\displaystyle T(t,\sigma )=C(t)\phi (t,\sigma )B(\sigma )} where {\displaystyle \phi } is the state-transition matrix associated with the realization.[1]
System identification techniques take the experimental data from a system and output a realization. Such techniques can utilize both input and output data (e.g. the eigensystem realization algorithm) or can include only the output data (e.g. frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available.
Source: https://en.wikipedia.org/wiki/System_realization
In mathematics, particularly differential geometry, a Finsler manifold is a differentiable manifold M where a (possibly asymmetric) Minkowski norm F(x, −) is provided on each tangent space TxM, which enables one to define the length of any smooth curve γ: [a, b] → M as
Finsler manifolds are more general than Riemannian manifolds since the tangent norms need not be induced by inner products.
Every Finsler manifold becomes an intrinsic quasimetric space when the distance between two points is defined as the infimum length of the curves that join them.
Élie Cartan (1933) named Finsler manifolds after Paul Finsler, who studied this geometry in his dissertation (Finsler 1918).
A Finsler manifold is a differentiable manifold M together with a Finsler metric, which is a continuous nonnegative function F: TM → [0, +∞) defined on the tangent bundle so that for each point x of M,
In other words, F(x, −) is an asymmetric norm on each tangent space TxM. The Finsler metric F is also required to be smooth, more precisely:
The subadditivity axiom may then be replaced by the following strong convexity condition:
Here the Hessian of F2 at v is the symmetric bilinear form
also known as the fundamental tensor of F at v. Strong convexity of F implies the subadditivity with a strict inequality if u⁄F(u) ≠ v⁄F(v). If F is strongly convex, then it is a Minkowski norm on each tangent space.
A Finsler metric is reversible if, in addition,
A reversible Finsler metric defines a norm (in the usual sense) on each tangent space.
Let {\displaystyle (M,a)} be a Riemannian manifold and b a differential one-form on M with
where {\displaystyle \left(a^{ij}\right)} is the inverse matrix of {\displaystyle (a_{ij})} and the Einstein notation is used. Then
defines a Randers metric on M, and {\displaystyle (M,F)} is a Randers manifold, a special case of a non-reversible Finsler manifold.[1]
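As a concrete illustration (not taken from the source), one can take the Euclidean metric on the plane together with a constant one-form:

```latex
% Illustrative Randers metric on M = \mathbb{R}^2:
% take a_{ij} = \delta_{ij} (the Euclidean metric) and b = \beta\,dx^1
% with a constant |\beta| < 1. For a tangent vector v = (v^1, v^2),
F(x,v) = \sqrt{(v^1)^2 + (v^2)^2} + \beta\, v^1 .
% Since \|b\|_a = |\beta| < 1, we have F(x,v) > 0 for all v \neq 0,
% while F(x,-v) \neq F(x,v) whenever \beta \neq 0 and v^1 \neq 0,
% so F is a non-reversible Finsler metric.
```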
Let (M, d) be a quasimetric so that M is also a differentiable manifold and d is compatible with the differential structure of M in the following sense:
Then one can define a Finsler function F: TM → [0, ∞] by
where γ is any curve in M with γ(0) = x and γ'(0) = v. The Finsler function F obtained in this way restricts to an asymmetric (typically non-Minkowski) norm on each tangent space of M. The induced intrinsic metric dL: M × M → [0, ∞] of the original quasimetric can be recovered from
and in fact any Finsler function F: TM → [0, ∞) defines an intrinsic quasimetric dL on M by this formula.
Due to the homogeneity of F, the length
of a differentiable curve γ: [a, b] → M in M is invariant under positively oriented reparametrizations. A constant speed curve γ is a geodesic of a Finsler manifold if its short enough segments γ|[c,d] are length-minimizing in M from γ(c) to γ(d). Equivalently, γ is a geodesic if it is stationary for the energy functional
in the sense that its functional derivative vanishes among differentiable curves γ: [a, b] → M with fixed endpoints γ(a) = x and γ(b) = y.
The Euler–Lagrange equation for the energy functional E[γ] reads in the local coordinates (x1, ..., xn, v1, ..., vn) of TM as
where k = 1, ..., n and gij is the coordinate representation of the fundamental tensor, defined as
Assuming the strong convexity of F2(x, v) with respect to v ∈ TxM, the matrix gij(x, v) is invertible and its inverse is denoted by gij(x, v). Then γ: [a, b] → M is a geodesic of (M, F) if and only if its tangent curve γ': [a, b] → TM∖{0} is an integral curve of the smooth vector field H on TM∖{0} locally defined by
where the local spray coefficients Gi are given by
The vector field H on TM∖{0} satisfies JH = V and [V, H] = H, where J and V are the canonical endomorphism and the canonical vector field on TM∖{0}. Hence, by definition, H is a spray on M. The spray H defines a nonlinear connection on the fibre bundle TM∖{0} → M through the vertical projection
In analogy with the Riemannian case, there is a version
of the Jacobi equation for a general spray structure (M, H) in terms of the Ehresmann curvature and nonlinear covariant derivative.
By the Hopf–Rinow theorem there always exist length-minimizing curves (at least in small enough neighborhoods) on (M, F). Length-minimizing curves can always be positively reparametrized to be geodesics, and any geodesic must satisfy the Euler–Lagrange equation for E[γ]. Assuming the strong convexity of F2, there exists a unique maximal geodesic γ with γ(0) = x and γ'(0) = v for any (x, v) ∈ TM∖{0} by the uniqueness of integral curves.
If F2 is strongly convex, geodesics γ: [0, b] → M are length-minimizing among nearby curves until the first point γ(s) conjugate to γ(0) along γ, and for t > s there always exist shorter curves from γ(0) to γ(t) near γ, as in the Riemannian case.
Source: https://en.wikipedia.org/wiki/Finsler_manifold
In mathematics, two metrics on the same underlying set are said to be equivalent if the resulting metric spaces share certain properties. Equivalence is a weaker notion than isometry; equivalent metrics do not have to be literally the same. Instead, it is one of several ways of generalizing equivalence of norms to general metric spaces.
Throughout the article, {\displaystyle X} will denote a non-empty set, and {\displaystyle d_{1}} and {\displaystyle d_{2}} will denote two metrics on {\displaystyle X}.
The two metrics {\displaystyle d_{1}} and {\displaystyle d_{2}} are said to be topologically equivalent if they generate the same topology on {\displaystyle X}. The adverb topologically is often dropped.[1] There are multiple ways of expressing this condition:
The following are sufficient but not necessary conditions for topological equivalence:
Two metrics {\displaystyle d_{1}} and {\displaystyle d_{2}} on X are strongly or bilipschitz equivalent or uniformly equivalent if and only if there exist positive constants {\displaystyle \alpha } and {\displaystyle \beta } such that, for every {\displaystyle x,y\in X},
In contrast to the sufficient condition for topological equivalence listed above, strong equivalence requires that there is a single set of constants that holds for every pair of points in {\displaystyle X}, rather than potentially different constants associated with each point of {\displaystyle X}.
Strong equivalence of two metrics implies topological equivalence, but not vice versa. For example, the metrics {\displaystyle d_{1}(x,y)=|x-y|} and {\displaystyle d_{2}(x,y)=|\tan(x)-\tan(y)|} on the interval {\displaystyle \left(-{\frac {\pi }{2}},{\frac {\pi }{2}}\right)} are topologically equivalent, but not strongly equivalent. In fact, this interval is bounded under one of these metrics but not the other. On the other hand, strong equivalences always take bounded sets to bounded sets.
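The failure of strong equivalence in this example can be seen numerically: no single constant β can satisfy d2 ≤ β·d1 for all pairs, because the ratio d2/d1 blows up near the endpoint (a small sketch; the sample points are arbitrary):

```python
import math

# Sketch: on (-pi/2, pi/2), d1(x, y) = |x - y| and d2(x, y) = |tan x - tan y|
# generate the same topology, but the ratio d2/d1 is unbounded near pi/2,
# so the metrics are not strongly equivalent.
d1 = lambda x, y: abs(x - y)
d2 = lambda x, y: abs(math.tan(x) - math.tan(y))

ratios = []
for k in range(1, 6):
    x = math.pi / 2 - 10.0 ** (-k)       # points approaching the endpoint
    y = math.pi / 2 - 10.0 ** (-k) / 2
    ratios.append(d2(x, y) / d1(x, y))
print(ratios)  # grows without bound as the points approach pi/2
```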
When X is a vector space and the two metrics {\displaystyle d_{1}} and {\displaystyle d_{2}} are those induced by norms {\displaystyle \|\cdot \|_{A}} and {\displaystyle \|\cdot \|_{B}}, respectively, then strong equivalence is equivalent to the condition that, for all {\displaystyle x\in X}, {\displaystyle \alpha \|x\|_{A}\leq \|x\|_{B}\leq \beta \|x\|_{A}} For linear operators between normed vector spaces, Lipschitz continuity is equivalent to continuity; an operator satisfying either of these conditions is called bounded.[3] Therefore, in this case, {\displaystyle d_{1}} and {\displaystyle d_{2}} are topologically equivalent if and only if they are strongly equivalent; the norms {\displaystyle \|\cdot \|_{A}} and {\displaystyle \|\cdot \|_{B}} are simply said to be equivalent.
In finite-dimensional vector spaces, all metrics induced by a norm, including the Euclidean metric, the taxicab metric, and the Chebyshev distance, are equivalent.[4]
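The equivalence can be illustrated by checking the standard two-sided bounds between these norms on random vectors (a sketch; the dimension and sample count are arbitrary):

```python
import math, random

# Sketch: numerically illustrate norm equivalence on R^n. For the Chebyshev,
# Euclidean and taxicab norms the standard bounds are
#   max|x_i| <= ||x||_2 <= sqrt(n) * max|x_i|
#   ||x||_2  <= ||x||_1 <= sqrt(n) * ||x||_2
random.seed(3)
n = 8
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    cheb = max(abs(v) for v in x)
    eucl = math.sqrt(sum(v * v for v in x))
    taxi = sum(abs(v) for v in x)
    assert cheb <= eucl <= math.sqrt(n) * cheb
    assert eucl <= taxi <= math.sqrt(n) * eucl
print("bounds hold for all sampled vectors")
```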
Source: https://en.wikipedia.org/wiki/Equivalence_of_metrics
In functional analysis, an F-space is a vector space {\displaystyle X} over the real or complex numbers together with a metric {\displaystyle d:X\times X\to \mathbb {R} } such that
The operation {\displaystyle x\mapsto \|x\|:=d(0,x)} is called an F-norm, although in general an F-norm is not required to be homogeneous. By translation-invariance, the metric is recoverable from the F-norm. Thus, a real or complex F-space is equivalently a real or complex vector space equipped with a complete F-norm.
Some authors use the term Fréchet space rather than F-space, but usually the term "Fréchet space" is reserved for locally convex F-spaces.
Some other authors use the term "F-space" as a synonym of "Fréchet space", by which they mean a locally convex complete metrizable topological vector space.
The metric may or may not be part of the structure on an F-space; many authors only require that such a space be metrizable in a manner that satisfies the above properties.
All Banach spaces and Fréchet spaces are F-spaces. In particular, a Banach space is an F-space with the additional requirement that {\displaystyle d(ax,0)=|a|d(x,0).}[1]
The Lp spaces can be made into F-spaces for all {\displaystyle p\geq 0}, and for {\displaystyle p\geq 1} they can be made into locally convex and thus Fréchet spaces and even Banach spaces.
{\displaystyle L^{\frac {1}{2}}[0,\,1]} is an F-space. It admits no continuous seminorms and no nonzero continuous linear functionals; it has trivial dual space.
Let {\displaystyle W_{p}(\mathbb {D} )} be the space of all complex-valued Taylor series {\displaystyle f(z)=\sum _{n\geq 0}a_{n}z^{n}} on the unit disc {\displaystyle \mathbb {D} } such that {\displaystyle \sum _{n}\left|a_{n}\right|^{p}<\infty .} Then for {\displaystyle 0<p<1,} the spaces {\displaystyle W_{p}(\mathbb {D} )} are F-spaces under the p-norm: {\displaystyle \|f\|_{p}=\sum _{n}\left|a_{n}\right|^{p}\qquad (0<p<1).}
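A concrete element of such a space can be computed directly. The coefficient choice below is illustrative, not from the source:

```python
# Sketch: an element of W_p(D) for p = 1/2, with Taylor coefficients
# a_n = 2^{-n} (an illustrative choice). The p-norm sum_n |a_n|^p is a
# convergent geometric series, and evaluation at a point of the closed
# unit disc is controlled by the coefficients.
p = 0.5
N = 200                                 # truncation; the tail is negligible here
a = [2.0 ** (-n) for n in range(N)]

p_norm = sum(abs(an) ** p for an in a)  # ~ 1 / (1 - 2^{-1/2})
zeta = 0.5 + 0.5j                       # |zeta| <= 1
f_zeta = sum(a[n] * zeta ** n for n in range(N))
print(p_norm, abs(f_zeta))
```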
In fact, {\displaystyle W_{p}} is a quasi-Banach algebra. Moreover, for any {\displaystyle \zeta } with {\displaystyle |\zeta |\leq 1,} the map {\displaystyle f\mapsto f(\zeta )} is a bounded linear multiplicative functional on {\displaystyle W_{p}(\mathbb {D} ).}
Theorem[2][3] (Klee (1952)): Let {\displaystyle d} be any[note 1] metric on a vector space {\displaystyle X} such that the topology {\displaystyle \tau } induced by {\displaystyle d} on {\displaystyle X} makes {\displaystyle (X,\tau )} into a topological vector space. If {\displaystyle (X,d)} is a complete metric space, then {\displaystyle (X,\tau )} is a complete topological vector space.
The open mapping theorem implies that if {\displaystyle \tau } and {\displaystyle \tau _{2}} are topologies on {\displaystyle X} that make both {\displaystyle (X,\tau )} and {\displaystyle \left(X,\tau _{2}\right)} into complete metrizable topological vector spaces (for example, Banach or Fréchet spaces) and if one topology is finer or coarser than the other, then they must be equal (that is, if {\displaystyle \tau \subseteq \tau _{2}} or {\displaystyle \tau _{2}\subseteq \tau } then {\displaystyle \tau =\tau _{2}}).[4]
|
https://en.wikipedia.org/wiki/F-space
|
Infunctional analysisand related areas ofmathematics,Fréchet spaces, named afterMaurice Fréchet, are specialtopological vector spaces.
They are generalizations ofBanach spaces(normed vector spacesthat arecompletewith respect to themetricinduced by thenorm).
AllBanachandHilbert spacesare Fréchet spaces.
Spaces ofinfinitely differentiablefunctionsare typical examples of Fréchet spaces, many of which are typicallynotBanach spaces.
A Fréchet spaceX{\displaystyle X}is defined to be alocally convexmetrizabletopological vector space(TVS) that iscomplete as a TVS,[1]meaning that everyCauchy sequenceinX{\displaystyle X}converges to some point inX{\displaystyle X}(see footnote for more details).[note 1]
The topology of every Fréchet space is induced by sometranslation-invariantcomplete metric.
Conversely, if the topology of a locally convex spaceX{\displaystyle X}is induced by a translation-invariant complete metric thenX{\displaystyle X}is a Fréchet space.
Fréchetwas the first to use the term "Banach space", andBanachin turn coined the term "Fréchet space" to mean acompletemetrizable topological vector space, without the local convexity requirement (such a space is today often called an "F-space").[1]The local convexity requirement was added later byNicolas Bourbaki.[1]Note that a sizable number of authors (for example, Schaefer) use "F-space" to mean a (locally convex) Fréchet space, while others do not require that a "Fréchet space" be locally convex.
Moreover, some authors even use "F-space" and "Fréchet space" interchangeably.
When reading mathematical literature, it is recommended that a reader always check whether the book's or article's definition of "F-space" and "Fréchet space" requires local convexity.[1]
Fréchet spaces can be defined in two equivalent ways: the first employs atranslation-invariantmetric, the second acountablefamily ofseminorms.
A topological vector spaceX{\displaystyle X}is aFréchet spaceif and only if it satisfies the following three properties: (1) it is locally convex, (2) its topology can be induced by a translation-invariant metric, and (3) some (equivalently, every) translation-invariant metric inducing its topology is complete.
Note there is no natural notion of distance between two points of a Fréchet space: many different translation-invariant metrics may induce the same topology.
The alternative and somewhat more practical definition is the following: a topological vector spaceX{\displaystyle X}is aFréchet spaceif and only if it satisfies the following three properties: (1) it is a Hausdorff space, (2) its topology may be induced by a countable family of seminorms‖⋅‖k{\displaystyle \|\cdot \|_{k}}fork=0,1,2,…,{\displaystyle k=0,1,2,\ldots ,}and (3) it is complete with respect to this family of seminorms (that is, every sequence that is Cauchy with respect to each seminorm converges inX{\displaystyle X}).
A familyP{\displaystyle {\mathcal {P}}}of seminorms onX{\displaystyle X}yields a Hausdorff topology if and only if[2]⋂‖⋅‖∈P{x∈X:‖x‖=0}={0}.{\displaystyle \bigcap _{\|\cdot \|\in {\mathcal {P}}}\{x\in X:\|x\|=0\}=\{0\}.}
A sequence(xn)n∈N{\displaystyle \left(x_{n}\right)_{n\in \mathbb {N} }}inX{\displaystyle X}converges tox{\displaystyle x}in the Fréchet space defined by a family of seminorms if and only if it converges tox{\displaystyle x}with respect to each of the given seminorms.
Theorem[3](de Wilde 1978)—Atopological vector spaceX{\displaystyle X}is a Fréchet space if and only if it is both awebbed spaceand aBaire space.
In contrast toBanach spaces, the complete translation-invariant metric need not arise from a norm.
The topology of a Fréchet space does, however, arise from both atotal paranormand anF-norm(theFstands for Fréchet).
Even though thetopological structureof Fréchet spaces is more complicated than that of Banach spaces due to the potential lack of a norm, many important results in functional analysis, like theopen mapping theorem, theclosed graph theorem, and theBanach–Steinhaus theorem, still hold.
Recall that a seminorm‖⋅‖{\displaystyle \|\cdot \|}is a function from a vector spaceX{\displaystyle X}to the real numbers satisfying three properties.
For allx,y∈X{\displaystyle x,y\in X}and all scalarsc,{\displaystyle c,}‖x‖≥0{\displaystyle \|x\|\geq 0}‖x+y‖≤‖x‖+‖y‖{\displaystyle \|x+y\|\leq \|x\|+\|y\|}‖c⋅x‖=|c|‖x‖{\displaystyle \|c\cdot x\|=|c|\|x\|}
If‖x‖=0⟺x=0{\displaystyle \|x\|=0\iff x=0}, then‖⋅‖{\displaystyle \|\cdot \|}is in fact a norm.
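For instance, on the plane the map taking a vector to the absolute value of its first coordinate is a seminorm that is not a norm, since it vanishes on the whole second axis. A minimal numerical check:

```python
# Sketch: p(x, y) = |x| is a seminorm on R^2 but not a norm, since it
# vanishes on the entire y-axis.

def p(v):
    return abs(v[0])

u, w, c = (3.0, -1.0), (-2.0, 5.0), -4.0
assert p(u) >= 0                                          # nonnegative
assert p((u[0] + w[0], u[1] + w[1])) <= p(u) + p(w)       # subadditive
assert p((c * u[0], c * u[1])) == abs(c) * p(u)           # homogeneous

# Not a norm: p vanishes at nonzero vectors.
assert p((0.0, 7.0)) == 0.0
```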
However, seminorms are useful in that they enable us to construct Fréchet spaces, as follows:
To construct a Fréchet space, one typically starts with a vector spaceX{\displaystyle X}and defines a countable family of seminorms‖⋅‖k{\displaystyle \|\cdot \|_{k}}onX{\displaystyle X}with the following two properties: (1) ifx∈X{\displaystyle x\in X}and‖x‖k=0{\displaystyle \|x\|_{k}=0}for allk,{\displaystyle k,}thenx=0,{\displaystyle x=0,}and (2) if(xn){\displaystyle (x_{n})}is a sequence inX{\displaystyle X}that is Cauchy with respect to each seminorm, then there existsx∈X{\displaystyle x\in X}such that(xn){\displaystyle (x_{n})}converges tox{\displaystyle x}with respect to each seminorm.
Then the topology induced by these seminorms (as explained above) turnsX{\displaystyle X}into a Fréchet space; the first property ensures that it is Hausdorff, and the second property ensures that it is complete.
A translation-invariant complete metric inducing the same topology onX{\displaystyle X}can then be defined byd(x,y)=∑k=0∞2−k‖x−y‖k1+‖x−y‖kx,y∈X.{\displaystyle d(x,y)=\sum _{k=0}^{\infty }2^{-k}{\frac {\|x-y\|_{k}}{1+\|x-y\|_{k}}}\qquad x,y\in X.}
The functionu↦u1+u{\displaystyle u\mapsto {\frac {u}{1+u}}}maps[0,∞){\displaystyle [0,\infty )}monotonically to[0,1),{\displaystyle [0,1),}and so the above definition ensures thatd(x,y){\displaystyle d(x,y)}is "small" if and only if there existsK{\displaystyle K}"large" such that‖x−y‖k{\displaystyle \|x-y\|_{k}}is "small" fork=0,…,K.{\displaystyle k=0,\ldots ,K.}
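As an illustration, the metric above can be computed for a finite family of seminorms. The sketch below (an illustrative assumption: coordinate seminorms ‖x‖_k = |x_k| on sequences truncated to three terms) checks translation invariance, the triangle inequality, and the bound d < Σ_k 2^{-k}:

```python
# Sketch: the metric d built from seminorms (illustrative assumption:
# coordinate seminorms ||x||_k = |x_k|, truncated to three coordinates).

def d(x, y):
    return sum(
        2.0 ** (-k) * abs(a - b) / (1.0 + abs(a - b))
        for k, (a, b) in enumerate(zip(x, y))
    )

x, y = [1.0, 2.0, 3.0], [0.0, 2.0, 5.0]
z, t = [4.0, 4.0, 4.0], [9.0, 9.0, 9.0]

# Translation invariance: d(x + t, y + t) = d(x, y).
xt = [a + b for a, b in zip(x, t)]
yt = [a + b for a, b in zip(y, t)]
assert abs(d(xt, yt) - d(x, y)) < 1e-12

# Triangle inequality (u -> u/(1+u) is monotone and subadditive).
assert d(x, z) <= d(x, y) + d(y, z) + 1e-12

# d is bounded by sum_k 2^{-k} = 2, however far apart the points are.
assert d(x, t) < 2.0
```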
Not all vector spaces with complete translation-invariant metrics are Fréchet spaces. An example is thespaceLp([0,1]){\displaystyle L^{p}([0,1])}withp<1.{\displaystyle p<1.}Although this space fails to be locally convex, it is anF-space.
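The failure of local convexity can already be seen in two coordinates: the "unit ball" of the p-norm with p = 1/2 is not convex. A quick numerical check:

```python
# Sketch: in two coordinates, the set { a : ||a||_{1/2} <= 1 } fails
# to be convex, so the topology of L^p (p < 1) is not locally convex.

def p_norm(a, p=0.5):
    return sum(abs(t) ** p for t in a)

e1, e2 = (1.0, 0.0), (0.0, 1.0)
assert p_norm(e1) == 1.0 and p_norm(e2) == 1.0   # both on the unit "sphere"

mid = (0.5, 0.5)             # midpoint of e1 and e2
assert p_norm(mid) > 1.0     # equals sqrt(2), outside the unit ball
```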
If a Fréchet space admits a continuous norm then all of the seminorms used to define it can be replaced with norms by adding this continuous norm to each of them.
A Banach space,C∞([a,b]),{\displaystyle C^{\infty }([a,b]),}C∞(X,V){\displaystyle C^{\infty }(X,V)}withX{\displaystyle X}compact, andH{\displaystyle H}all admit norms, whileRω{\displaystyle \mathbb {R} ^{\omega }}andC(R){\displaystyle C(\mathbb {R} )}do not.
A closed subspace of a Fréchet space is a Fréchet space.
A quotient of a Fréchet space by a closed subspace is a Fréchet space.
The direct sum of a finite number of Fréchet spaces is a Fréchet space.
A product ofcountably manyFréchet spaces is always once again a Fréchet space. However, an arbitrary product of Fréchet spaces will be a Fréchet space if and only if allexceptfor at most countably many of them are trivial (that is, have dimension 0). Consequently, a product of uncountably many non-trivial Fréchet spaces cannot be a Fréchet space (indeed, such a product is not even metrizable because its origin cannot have a countable neighborhood basis). So for example, ifI≠∅{\displaystyle I\neq \varnothing }is any set andX{\displaystyle X}is any non-trivial Fréchet space (such asX=R{\displaystyle X=\mathbb {R} }for instance), then the productXI=∏i∈IX{\displaystyle X^{I}=\prod _{i\in I}X}is a Fréchet space if and only ifI{\displaystyle I}is a countable set.
Several important tools of functional analysis which are based on theBaire category theoremremain true in Fréchet spaces; examples are theclosed graph theoremand theopen mapping theorem.
Theopen mapping theoremimplies that ifτandτ2{\displaystyle \tau {\text{ and }}\tau _{2}}are topologies onX{\displaystyle X}that make both(X,τ){\displaystyle (X,\tau )}and(X,τ2){\displaystyle \left(X,\tau _{2}\right)}intocompletemetrizable TVSs(such as Fréchet spaces) and if one topology isfiner or coarserthan the other then they must be equal (that is, ifτ⊆τ2orτ2⊆τthenτ=τ2{\displaystyle \tau \subseteq \tau _{2}{\text{ or }}\tau _{2}\subseteq \tau {\text{ then }}\tau =\tau _{2}}).[4]
Everyboundedlinear operator from a Fréchet space into anothertopological vector space(TVS) is continuous.[5]
There exists a Fréchet spaceX{\displaystyle X}having aboundedsubsetB{\displaystyle B}and also a dense vector subspaceM{\displaystyle M}such thatB{\displaystyle B}isnotcontained in the closure (inX{\displaystyle X}) of any bounded subset ofM.{\displaystyle M.}[6]
All Fréchet spaces arestereotype spaces. In the theory of stereotype spaces Fréchet spaces are dual objects toBrauner spaces.
AllmetrizableMontel spacesareseparable.[7]AseparableFréchet space is a Montel space if and only if eachweak-* convergentsequence in its continuous dual isstrongly convergent.[7]
Thestrong dual spaceXb′{\displaystyle X_{b}^{\prime }}of a Fréchet space (and more generally, of any metrizable locally convex space[8])X{\displaystyle X}is aDF-space.[9]The strong dual of a DF-space is a Fréchet space.[10]The strong dual of areflexiveFréchet space is abornological space[8]and aPtak space. Every Fréchet space is a Ptak space.
The strong bidual (that is, thestrong dual spaceof the strong dual space) of a metrizable locally convex space is a Fréchet space.[11]
IfX{\displaystyle X}is a locally convex space then the topology ofX{\displaystyle X}can be defined by a family of continuousnormsonX{\displaystyle X}(anormis apositive-definiteseminorm) if and only if there existsat least onecontinuousnormonX.{\displaystyle X.}[12]Even if a Fréchet space has a topology that is defined by a (countable) family ofnorms(all norms are also seminorms), it may nevertheless still fail to be anormable space(meaning that its topology cannot be defined by any single norm).
Thespace of all sequencesKN{\displaystyle \mathbb {K} ^{\mathbb {N} }}(with the product topology) is a Fréchet space. There does not exist any Hausdorfflocally convextopology onKN{\displaystyle \mathbb {K} ^{\mathbb {N} }}that isstrictly coarserthan this product topology.[13]The spaceKN{\displaystyle \mathbb {K} ^{\mathbb {N} }}is notnormable, which means that its topology can not be defined by anynorm.[13]Also, there does not existanycontinuousnorm onKN.{\displaystyle \mathbb {K} ^{\mathbb {N} }.}In fact, as the following theorem shows, wheneverX{\displaystyle X}is a Fréchet space on which there does not exist any continuous norm, then this is due entirely to the presence ofKN{\displaystyle \mathbb {K} ^{\mathbb {N} }}as a subspace.
Theorem[13]—LetX{\displaystyle X}be a Fréchet space over the fieldK.{\displaystyle \mathbb {K} .}Then the following are equivalent: (1)X{\displaystyle X}does not admit a continuous norm, (2)X{\displaystyle X}contains a vector subspace that is TVS-isomorphic toKN,{\displaystyle \mathbb {K} ^{\mathbb {N} },}and (3)X{\displaystyle X}contains a complemented vector subspace that is TVS-isomorphic toKN.{\displaystyle \mathbb {K} ^{\mathbb {N} }.}
IfX{\displaystyle X}is a non-normable Fréchet space on which there exists a continuous norm, thenX{\displaystyle X}contains a closed vector subspace that has notopological complement.[14]
A metrizablelocally convexspace isnormableif and only if itsstrong dual spaceis aFréchet–Urysohnlocally convex space.[9]In particular, if a locally convex metrizable spaceX{\displaystyle X}(such as a Fréchet space) isnotnormable (which can only happen ifX{\displaystyle X}is infinite dimensional) then itsstrong dual spaceXb′{\displaystyle X_{b}^{\prime }}is not aFréchet–Urysohn spaceand consequently, thiscompleteHausdorff locally convex spaceXb′{\displaystyle X_{b}^{\prime }}is also neither metrizable nor normable.
Thestrong dual spaceof a Fréchet space (and more generally, ofbornological spacessuch as metrizable TVSs) is always acomplete TVSand so like any complete TVS, it isnormableif and only if its topology can be induced by acomplete norm(that is, if and only if it can be made into aBanach spacethat has the same topology).
IfX{\displaystyle X}is a Fréchet space thenX{\displaystyle X}isnormableif (and only if) there exists a completenormon its continuous dual spaceX′{\displaystyle X'}such that the norm induced topology onX′{\displaystyle X'}isfinerthan the weak-* topology.[15]Consequently, if a Fréchet space isnotnormable (which can only happen if it is infinite dimensional) then neither is its strong dual space.
Anderson–Kadec theorem—Every infinite-dimensional, separable real Fréchet space is homeomorphic toRN,{\displaystyle \mathbb {R} ^{\mathbb {N} },}theCartesian productofcountably manycopies of the real lineR.{\displaystyle \mathbb {R} .}
Note that the homeomorphism described in the Anderson–Kadec theorem isnotnecessarily linear.
Eidelheittheorem—A Fréchet space is either isomorphic to a Banach space, or has a quotient space isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.}
IfX{\displaystyle X}andY{\displaystyle Y}are Fréchet spaces, then the spaceL(X,Y){\displaystyle L(X,Y)}consisting of allcontinuouslinear mapsfromX{\displaystyle X}toY{\displaystyle Y}isnota Fréchet space in any natural manner. This is a major difference between the theory of Banach spaces and that of Fréchet spaces and necessitates a different definition for continuous differentiability of functions defined on Fréchet spaces, theGateaux derivative:
SupposeU{\displaystyle U}is an open subset of a Fréchet spaceX,{\displaystyle X,}P:U→Y{\displaystyle P:U\to Y}is a function valued in a Fréchet spaceY,{\displaystyle Y,}x∈U{\displaystyle x\in U}andh∈X.{\displaystyle h\in X.}The mapP{\displaystyle P}isdifferentiable atx{\displaystyle x}in the directionh{\displaystyle h}if thelimitD(P)(x)(h)=limt→01t(P(x+th)−P(x)){\displaystyle D(P)(x)(h)=\lim _{t\to 0}\,{\frac {1}{t}}\left(P(x+th)-P(x)\right)}exists.
The mapP{\displaystyle P}is said to becontinuously differentiableinU{\displaystyle U}if the mapD(P):U×X→Y{\displaystyle D(P):U\times X\to Y}is continuous. Since theproductof Fréchet spaces is again a Fréchet space, we can then try to differentiateD(P){\displaystyle D(P)}and define the higher derivatives ofP{\displaystyle P}in this fashion.
The derivative operatorP:C∞([0,1])→C∞([0,1]){\displaystyle P:C^{\infty }([0,1])\to C^{\infty }([0,1])}defined byP(f)=f′{\displaystyle P(f)=f'}is itself infinitely differentiable. The first derivative is given byD(P)(f)(h)=h′{\displaystyle D(P)(f)(h)=h'}for any two elementsf,h∈C∞([0,1]).{\displaystyle f,h\in C^{\infty }([0,1]).}This is a major advantage of the Fréchet spaceC∞([0,1]){\displaystyle C^{\infty }([0,1])}over the Banach spaceCk([0,1]){\displaystyle C^{k}([0,1])}for finitek.{\displaystyle k.}
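The limit defining D(P)(x)(h) can be approximated by finite differences. The sketch below uses a purely illustrative nonlinear map, the pointwise square P(f) = f², on functions sampled at finitely many points; analytically its Gateaux derivative is D(P)(f)(h) = 2fh:

```python
# Sketch: finite-difference approximation of the Gateaux limit for the
# (illustrative, nonlinear) pointwise-square map P(f) = f^2 on sampled
# functions, whose derivative is D(P)(f)(h) = 2 f h.

def P(f):
    return [v * v for v in f]

def gateaux(P, f, h, t=1e-6):
    """Approximate lim_{t->0} (P(f + t h) - P(f)) / t."""
    ft = [a + t * b for a, b in zip(f, h)]
    return [(u - v) / t for u, v in zip(P(ft), P(f))]

f = [0.0, 1.0, 2.0, 3.0]
h = [1.0, -1.0, 0.5, 2.0]

approx = gateaux(P, f, h)
exact = [2.0 * a * b for a, b in zip(f, h)]
assert max(abs(u - v) for u, v in zip(approx, exact)) < 1e-4
```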
IfP:U→Y{\displaystyle P:U\to Y}is a continuously differentiable function, then thedifferential equationx′(t)=P(x(t)),x(0)=x0∈U{\displaystyle x'(t)=P(x(t)),\quad x(0)=x_{0}\in U}need not have any solutions, and even if it does, the solutions need not be unique. This is in stark contrast to the situation in Banach spaces.
In general, theinverse function theoremis not true in Fréchet spaces, although a partial substitute is theNash–Moser theorem.
One may defineFréchet manifoldsas spaces that "locally look like" Fréchet spaces (just like ordinary manifolds are defined as spaces that locally look likeEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}), and one can then extend the concept ofLie groupto these manifolds.
This is useful because for a given (ordinary) compactC∞{\displaystyle C^{\infty }}manifoldM,{\displaystyle M,}the set of allC∞{\displaystyle C^{\infty }}diffeomorphismsf:M→M{\displaystyle f:M\to M}forms a generalized Lie group in this sense, and this Lie group captures the symmetries ofM.{\displaystyle M.}Some of the relations betweenLie algebrasand Lie groups remain valid in this setting.
Another important example of a Fréchet Lie group is the loop group of a compact Lie groupG,{\displaystyle G,}the smooth (C∞{\displaystyle C^{\infty }}) mappingsγ:S1→G,{\displaystyle \gamma :S^{1}\to G,}with pointwise multiplication(γ1γ2)(t)=γ1(t)γ2(t).{\displaystyle \left(\gamma _{1}\gamma _{2}\right)(t)=\gamma _{1}(t)\gamma _{2}(t).}[16][17]
If we drop the requirement for the space to be locally convex, we obtainF-spaces: vector spaces with complete translation-invariant metrics.
LF-spacesare countable inductive limits of Fréchet spaces.
|
https://en.wikipedia.org/wiki/Fr%C3%A9chet_space
|
Inmathematics, the concept of ageneralised metricis a generalisation of that of ametric, in which the distance is not areal numberbut taken from an arbitraryordered field.
In general, when one defines ametric space, the distance function is taken to be a real-valuedfunction. The real numbers form an ordered field which isArchimedeanandorder complete. Such metric spaces have some convenient properties: for example, in a metric space,compactness,sequential compactnessandcountable compactnessare equivalent. These properties may not hold, however, if the distance function takes values in an arbitrary ordered field instead of inR.{\displaystyle \scriptstyle \mathbb {R} .}
Let(F,+,⋅,<){\displaystyle (F,+,\cdot ,<)}be an arbitrary ordered field, andM{\displaystyle M}a nonempty set; a functiond:M×M→F+∪{0}{\displaystyle d:M\times M\to F^{+}\cup \{0\}}is called a metric onM{\displaystyle M}if the following conditions hold: (1)d(x,y)=0{\displaystyle d(x,y)=0}if and only ifx=y,{\displaystyle x=y,}(2)d(x,y)=d(y,x){\displaystyle d(x,y)=d(y,x)}(symmetry), and (3)d(x,y)≤d(x,z)+d(z,y){\displaystyle d(x,y)\leq d(x,z)+d(z,y)}(triangle inequality).
It is not difficult to verify that the open ballsB(x,δ):={y∈M:d(x,y)<δ}{\displaystyle B(x,\delta )\;:=\{y\in M\;:d(x,y)<\delta \}}form a basis for a suitable topology, the latter called themetric topologyonM,{\displaystyle M,}with the metric inF.{\displaystyle F.}
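A simple instance takes F = Q, realized exactly with Python's fractions module; distances and ball radii then live in the ordered field Q rather than in R. A minimal sketch:

```python
from fractions import Fraction

# Sketch: the ordered field F = Q, realized exactly with Fraction;
# distances take values in F rather than in R.

def d(x, y):
    return abs(x - y)

def ball(x, delta):
    """Membership test for the open ball B(x, delta)."""
    return lambda y: d(x, y) < delta

x = Fraction(1, 3)
B = ball(x, Fraction(1, 6))
assert d(x, Fraction(1, 4)) == Fraction(1, 12)   # an exact element of Q
assert B(Fraction(1, 4))                         # 1/12 < 1/6
assert not B(Fraction(2, 3))                     # 1/3 >= 1/6
```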
SinceF{\displaystyle F}in itsorder topologyismonotonically normal, we would expectM{\displaystyle M}to be at leastregular.
However, under theaxiom of choice, every general metric ismonotonically normal: givenx∈G,{\displaystyle x\in G,}whereG{\displaystyle G}is open, there is an open ballB(x,δ){\displaystyle B(x,\delta )}such thatx∈B(x,δ)⊆G.{\displaystyle x\in B(x,\delta )\subseteq G.}Takeμ(x,G)=B(x,δ/2),{\displaystyle \mu (x,G)=B\left(x,\delta /2\right),}and one can verify the conditions for monotone normality.
Remarkably, even without choice, general metrics aremonotonically normal.
Proof.
Case I:F{\displaystyle F}is anArchimedean field.
Now, ifx∈G{\displaystyle x\in G}withG{\displaystyle G}open, we may takeμ(x,G):=B(x,1/(2n(x,G))),{\displaystyle \mu (x,G):=B(x,1/(2n(x,G))),}wheren(x,G):=min{n∈N:B(x,1/n)⊆G},{\displaystyle n(x,G):=\min\{n\in \mathbb {N} :B(x,1/n)\subseteq G\},}and the construction requires no choice.
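For F = R (with open intervals as the open sets G, an illustrative assumption), the choice-free recipe n(x, G) and μ(x, G) can be computed directly:

```python
# Sketch of the choice-free Case I recipe over F = R (an illustrative
# assumption: open sets are open intervals G = (lo, hi)).

def n_of(x, G):
    """n(x, G) = min{ n in N : B(x, 1/n) is contained in G }."""
    lo, hi = G
    n = 1
    while not (lo <= x - 1.0 / n and x + 1.0 / n <= hi):
        n += 1
    return n

def mu(x, G):
    """mu(x, G) = B(x, 1/(2 n(x, G)))."""
    r = 1.0 / (2 * n_of(x, G))
    return (x - r, x + r)

G = (0.0, 1.0)
x = 0.3
assert n_of(x, G) == 4       # 1/3 is too big, 1/4 fits
lo, hi = mu(x, G)            # the ball of radius 1/8 around 0.3
assert G[0] < lo and hi < G[1]
```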
Case II:F{\displaystyle F}is a non-Archimedean field.
For givenx∈G{\displaystyle x\in G}whereG{\displaystyle G}is open, consider the setA(x,G):={a∈F:for alln∈N,B(x,n⋅a)⊆G}.{\displaystyle A(x,G):=\{a\in F:{\text{ for all }}n\in \mathbb {N} ,B(x,n\cdot a)\subseteq G\}.}
The setA(x,G){\displaystyle A(x,G)}is non-empty: asG{\displaystyle G}is open, there is an open ballB(x,k){\displaystyle B(x,k)}withinG.{\displaystyle G.}Now, asF{\displaystyle F}is non-Archimedean,NF{\displaystyle \mathbb {N} _{F}}(the copy of the natural numbers inF{\displaystyle F}) is not bounded above, hence there is someξ∈F{\displaystyle \xi \in F}such that for alln∈N,{\displaystyle n\in \mathbb {N} ,}n⋅1≤ξ.{\displaystyle n\cdot 1\leq \xi .}Puttinga=k⋅(2ξ)−1,{\displaystyle a=k\cdot (2\xi )^{-1},}we see thata{\displaystyle a}is inA(x,G).{\displaystyle A(x,G).}
Now defineμ(x,G)=⋃{B(x,a):a∈A(x,G)}.{\displaystyle \mu (x,G)=\bigcup \{B(x,a):a\in A(x,G)\}.}We will show that with respect to this mu operator, the space is monotonically normal. Note thatμ(x,G)⊆G.{\displaystyle \mu (x,G)\subseteq G.}
Ify{\displaystyle y}is not inG{\displaystyle G}(open set containingx{\displaystyle x}) andx{\displaystyle x}is not inH{\displaystyle H}(open set containingy{\displaystyle y}), then we show thatμ(x,G)∩μ(y,H){\displaystyle \mu (x,G)\cap \mu (y,H)}is empty. If not, sayz{\displaystyle z}is in the intersection. Then∃a∈A(x,G):d(x,z)<a;∃b∈A(y,H):d(z,y)<b.{\displaystyle \exists a\in A(x,G)\colon d(x,z)<a;\;\;\exists b\in A(y,H)\colon d(z,y)<b.}
From the above, we get thatd(x,y)≤d(x,z)+d(z,y)<2⋅max{a,b},{\displaystyle d(x,y)\leq d(x,z)+d(z,y)<2\cdot \max\{a,b\},}which is impossible since this would imply that eithery{\displaystyle y}belongs toμ(x,G)⊆G{\displaystyle \mu (x,G)\subseteq G}orx{\displaystyle x}belongs toμ(y,H)⊆H.{\displaystyle \mu (y,H)\subseteq H.}This completes the proof.
|
https://en.wikipedia.org/wiki/Generalised_metric
|
Inmathematics, more specifically infunctional analysis, aK-spaceis anF-spaceV{\displaystyle V}such that every extension of F-spaces (or twisted sum) of the form0→R→X→V→0.{\displaystyle 0\rightarrow \mathbb {R} \rightarrow X\rightarrow V\rightarrow 0.\,\!}is equivalent to the trivial one[1]0→R→R×V→V→0.{\displaystyle 0\rightarrow \mathbb {R} \rightarrow \mathbb {R} \times V\rightarrow V\rightarrow 0.\,\!}whereR{\displaystyle \mathbb {R} }is thereal line.
Theℓp{\displaystyle \ell ^{p}}spacesfor0<p<1{\displaystyle 0<p<1}are K-spaces,[1]as are all finite dimensionalBanach spaces.
N. J. Kalton and N. P. Roberts proved that the Banach spaceℓ1{\displaystyle \ell ^{1}}is not a K-space.[1]
|
https://en.wikipedia.org/wiki/K-space_(functional_analysis)
|
Infunctional analysisand related areas ofmathematics,locally convex topological vector spaces(LCTVS) orlocally convex spacesare examples oftopological vector spaces(TVS) that generalizenormed spaces. They can be defined astopologicalvector spaces whose topology isgeneratedby translations ofbalanced,absorbent,convex sets. Alternatively they can be defined as avector spacewith afamilyofseminorms, and a topology can be defined in terms of that family. Although in general such spaces are not necessarilynormable, the existence of a convexlocal basefor thezero vectoris strong enough for theHahn–Banach theoremto hold, yielding a sufficiently rich theory of continuouslinear functionals.
Fréchet spacesare locally convex topological vector spaces that arecompletely metrizable(with a choice of complete metric). They are generalizations ofBanach spaces, which are complete vector spaces with respect to a metric generated by anorm.
Metrizable topologies on vector spaces have been studied since their introduction inMaurice Fréchet's1902 PhD thesisSur quelques points du calcul fonctionnel(wherein the notion of ametricwas first introduced).
After the notion of a general topological space was defined byFelix Hausdorffin 1914,[1]locally convex topologies were used implicitly by some mathematicians, but up to 1934 onlyJohn von Neumannappears to have explicitly defined theweak topologyon Hilbert spaces and thestrong operator topologyon operators on Hilbert spaces.[2][3]Finally, in 1935 von Neumann introduced the general definition of a locally convex space (called aconvex spaceby him).[4][5]
A notable example of a result which had to wait for the development and dissemination of general locally convex spaces (amongst other notions and results, likenets, theproduct topologyandTychonoff's theorem) to be proven in its full generality, is theBanach–Alaoglu theoremwhichStefan Banachfirst established in 1932 by an elementarydiagonal argumentfor the case of separable normed spaces[6](in which case theunit ball of the dual is metrizable).
SupposeX{\displaystyle X}is a vector space overK,{\displaystyle \mathbb {K} ,}asubfieldof thecomplex numbers(normallyC{\displaystyle \mathbb {C} }itself orR{\displaystyle \mathbb {R} }).
A locally convex space is defined either in terms of convex sets, or equivalently in terms of seminorms.
Atopological vector space(TVS) is calledlocally convexif it has aneighborhood basis(that is, a local base) at the origin consisting of balanced,convex sets.[7]The termlocally convex topological vector spaceis sometimes shortened tolocally convex spaceorLCTVS.
A subsetC{\displaystyle C}inX{\displaystyle X}is called
In fact, every locally convex TVS has a neighborhood basis of the origin consisting ofabsolutely convexsets (that is, disks), where this neighborhood basis can further be chosen to also consist entirely of open sets or entirely of closed sets.[8]Every TVS has a neighborhood basis at the origin consisting of balanced sets, but only a locally convex TVS has a neighborhood basis at the origin consisting of sets that are both balancedandconvex. It is possible for a TVS to havesomeneighborhoods of the origin that are convex and yet not be locally convex because it has no neighborhood basis at the origin consisting entirely of convex sets (that is, every neighborhood basis at the origin contains some non-convex set); for example, every non-locally convex TVSX{\displaystyle X}has itself (that is,X{\displaystyle X}) as a convex neighborhood of the origin.
Because translation is continuous (by definition oftopological vector space), all translations arehomeomorphisms, so every base for the neighborhoods of the origin can be translated to a base for the neighborhoods of any given vector.
AseminormonX{\displaystyle X}is a mapp:X→R{\displaystyle p:X\to \mathbb {R} }such that (1)p{\displaystyle p}is nonnegative:p(x)≥0{\displaystyle p(x)\geq 0}for allx∈X,{\displaystyle x\in X,}(2)p{\displaystyle p}is absolutely homogeneous:p(sx)=|s|p(x){\displaystyle p(sx)=|s|p(x)}for every scalars,{\displaystyle s,}and (3)p{\displaystyle p}is subadditive:p(x+y)≤p(x)+p(y).{\displaystyle p(x+y)\leq p(x)+p(y).}
Ifp{\displaystyle p}satisfies positive definiteness, which states that ifp(x)=0{\displaystyle p(x)=0}thenx=0,{\displaystyle x=0,}thenp{\displaystyle p}is anorm.
While in general seminorms need not be norms, there is an analogue of this criterion for families of seminorms, separatedness, defined below.
IfX{\displaystyle X}is a vector space andP{\displaystyle {\mathcal {P}}}is a family of seminorms onX{\displaystyle X}then a subsetQ{\displaystyle {\mathcal {Q}}}ofP{\displaystyle {\mathcal {P}}}is called abase of seminormsforP{\displaystyle {\mathcal {P}}}if for allp∈P{\displaystyle p\in {\mathcal {P}}}there exists aq∈Q{\displaystyle q\in {\mathcal {Q}}}and a realr>0{\displaystyle r>0}such thatp≤rq.{\displaystyle p\leq rq.}[9]
Definition(second version): Alocally convex spaceis defined to be a vector spaceX{\displaystyle X}along with afamilyP{\displaystyle {\mathcal {P}}}of seminorms onX.{\displaystyle X.}
Suppose thatX{\displaystyle X}is a vector space overK,{\displaystyle \mathbb {K} ,}whereK{\displaystyle \mathbb {K} }is either the real or complex numbers.
A family of seminormsP{\displaystyle {\mathcal {P}}}on the vector spaceX{\displaystyle X}induces a canonical vector space topology onX{\displaystyle X}, called theinitial topologyinduced by the seminorms, making it into atopological vector space(TVS). By definition, it is thecoarsesttopology onX{\displaystyle X}for which all maps inP{\displaystyle {\mathcal {P}}}are continuous.
It is possible for a locally convex topology on a spaceX{\displaystyle X}to be induced by a family of norms but forX{\displaystyle X}tonotbenormable(that is, to have its topology be induced by a single norm).
A basic open neighborhood of the origin inR≥0{\displaystyle \mathbb {R} _{\geq 0}}has the form[0,r){\displaystyle [0,r)}, wherer{\displaystyle r}is a positive real number. The family ofpreimagesp−1([0,r))={x∈X:p(x)<r}{\displaystyle p^{-1}\left([0,r)\right)=\{x\in X:p(x)<r\}}asp{\displaystyle p}ranges over a family of seminormsP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}ranges over the positive real numbers
is asubbasis at the originfor the topology induced byP{\displaystyle {\mathcal {P}}}. These sets are convex, as follows from properties 2 and 3 of seminorms.
Intersections of finitely many such sets are then also convex, and since the collection of all such finite intersections is abasis at the originit follows that the topology is locally convex in the sense of thefirstdefinition given above.
Recall that the topology of a TVS is translation invariant, meaning that ifS{\displaystyle S}is any subset ofX{\displaystyle X}containing the origin then for anyx∈X,{\displaystyle x\in X,}S{\displaystyle S}is a neighborhood of the origin if and only ifx+S{\displaystyle x+S}is a neighborhood ofx{\displaystyle x};
thus it suffices to define the topology at the origin.
A base of neighborhoods ofy{\displaystyle y}for this topology is obtained in the following way: for every finite subsetF{\displaystyle F}ofP{\displaystyle {\mathcal {P}}}and everyr>0,{\displaystyle r>0,}letUF,r(y):={x∈X:p(x−y)<rfor allp∈F}.{\displaystyle U_{F,r}(y):=\{x\in X:p(x-y)<r\ {\text{ for all }}p\in F\}.}
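Concretely, with the coordinate seminorms p_k(x) = |x_k| on R³ (an illustrative choice), a basic neighborhood U_{F,r}(y) constrains only the coordinates named by the finite set F:

```python
# Sketch: basic neighborhoods U_{F,r}(y) for the (illustrative)
# coordinate seminorms p_k(x) = |x_k| on R^3.

def U(F, r, y):
    """Membership test for U_{F,r}(y) = {x : p(x - y) < r for all p in F}."""
    def member(x):
        diff = [a - b for a, b in zip(x, y)]
        return all(p(diff) < r for p in F)
    return member

p0 = lambda x: abs(x[0])     # seminorm ignoring the other coordinates
p1 = lambda x: abs(x[1])

inside = U([p0, p1], 0.5, [0.0, 0.0, 0.0])
assert inside([0.2, -0.3, 100.0])   # third coordinate is unconstrained
assert not inside([0.6, 0.0, 0.0])  # p0 >= 0.5 fails
```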
IfX{\displaystyle X}is a locally convex space and ifP{\displaystyle {\mathcal {P}}}is a collection of continuous seminorms onX{\displaystyle X}, thenP{\displaystyle {\mathcal {P}}}is called abase of continuous seminormsif it is a base of seminorms for the collection ofallcontinuous seminorms onX{\displaystyle X}.[9]Explicitly, this means that for all continuous seminormsp{\displaystyle p}onX{\displaystyle X}, there exists aq∈P{\displaystyle q\in {\mathcal {P}}}and a realr>0{\displaystyle r>0}such thatp≤rq.{\displaystyle p\leq rq.}[9]IfP{\displaystyle {\mathcal {P}}}is a base of continuous seminorms for a locally convex TVSX{\displaystyle X}then the family of all sets of the form{x∈X:q(x)<r}{\displaystyle \{x\in X:q(x)<r\}}asq{\displaystyle q}varies overP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}varies over the positive real numbers, is abaseof neighborhoods of the origin inX{\displaystyle X}(not just a subbasis, so there is no need to take finite intersections of such sets).[9][proof 1]
A familyP{\displaystyle {\mathcal {P}}}of seminorms on a vector spaceX{\displaystyle X}is calledsaturatedif for anyp{\displaystyle p}andq{\displaystyle q}inP,{\displaystyle {\mathcal {P}},}the seminorm defined byx↦max{p(x),q(x)}{\displaystyle x\mapsto \max\{p(x),q(x)\}}belongs toP.{\displaystyle {\mathcal {P}}.}
IfP{\displaystyle {\mathcal {P}}}is a saturated family of continuous seminorms that induces the topology onX{\displaystyle X}then the collection of all sets of the form{x∈X:p(x)<r}{\displaystyle \{x\in X:p(x)<r\}}asp{\displaystyle p}ranges overP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}ranges over all positive real numbers, forms a neighborhood basis at the origin consisting of convex open sets;[9]This forms a basis at the origin rather than merely a subbasis so that in particular, there isnoneed to take finite intersections of such sets.[9]
The following theorem implies that ifX{\displaystyle X}is a locally convex space then the topology ofX{\displaystyle X}can be defined by a family of continuousnormsonX{\displaystyle X}(anormis aseminorms{\displaystyle s}wheres(x)=0{\displaystyle s(x)=0}impliesx=0{\displaystyle x=0}) if and only if there existsat least onecontinuousnormonX{\displaystyle X}.[10]This is because the sum of a norm and a seminorm is a norm, so if a locally convex space is defined by some familyP{\displaystyle {\mathcal {P}}}of seminorms (each of which is necessarily continuous) then the familyP+n:={p+n:p∈P}{\displaystyle {\mathcal {P}}+n:=\{p+n:p\in {\mathcal {P}}\}}of (also continuous) norms obtained by adding some given continuous normn{\displaystyle n}to each element will necessarily be a family of norms that defines this same locally convex topology.
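A small sketch of the fact that seminorm + norm is a norm, using the illustrative choices p(x, y) = |x| (a seminorm) and the max norm on R²:

```python
# Sketch: adding a continuous norm to a seminorm yields a norm
# (illustrative choices: p(x, y) = |x| and the max norm on R^2).

def p(v):                     # a seminorm vanishing on the y-axis
    return abs(v[0])

def n(v):                     # a genuine norm
    return max(abs(v[0]), abs(v[1]))

def q(v):                     # seminorm + norm is again a norm
    return p(v) + n(v)

assert q((0.0, 0.0)) == 0.0
assert q((0.0, 7.0)) == 7.0   # p alone would vanish here; q does not

# q keeps the seminorm axioms, e.g. subadditivity:
u, w = (1.0, -2.0), (-3.0, 4.0)
uw = (u[0] + w[0], u[1] + w[1])
assert q(uw) <= q(u) + q(w)
```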
If there exists a continuous norm on a topological vector spaceX{\displaystyle X}thenX{\displaystyle X}is necessarily Hausdorff but the converse is not in general true (not even for locally convex spaces orFréchet spaces).
Theorem[11]—LetX{\displaystyle X}be a Fréchet space over the fieldK.{\displaystyle \mathbb {K} .}Then the following are equivalent: (1)X{\displaystyle X}does not admit a continuous norm, (2)X{\displaystyle X}contains a vector subspace that is TVS-isomorphic toKN,{\displaystyle \mathbb {K} ^{\mathbb {N} },}and (3)X{\displaystyle X}contains a complemented vector subspace that is TVS-isomorphic toKN.{\displaystyle \mathbb {K} ^{\mathbb {N} }.}
Suppose that the topology of a locally convex spaceX{\displaystyle X}is induced by a familyP{\displaystyle {\mathcal {P}}}of continuous seminorms onX{\displaystyle X}.
Ifx∈X{\displaystyle x\in X}and ifx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}is anetinX{\displaystyle X}, thenx∙→x{\displaystyle x_{\bullet }\to x}inX{\displaystyle X}if and only if for allp∈P,{\displaystyle p\in {\mathcal {P}},}p(x∙−x)=(p(xi−x))i∈I→0.{\displaystyle p\left(x_{\bullet }-x\right)=\left(p\left(x_{i}-x\right)\right)_{i\in I}\to 0.}[12]Moreover, ifx∙{\displaystyle x_{\bullet }}is Cauchy inX{\displaystyle X}, then so isp(x∙)=(p(xi))i∈I{\displaystyle p\left(x_{\bullet }\right)=\left(p\left(x_{i}\right)\right)_{i\in I}}for everyp∈P.{\displaystyle p\in {\mathcal {P}}.}[12]
Although the definition in terms of a neighborhood base gives a better geometric picture, the definition in terms of seminorms is easier to work with in practice.
The equivalence of the two definitions follows from a construction known as theMinkowski functionalor Minkowski gauge.
The key feature of seminorms which ensures the convexity of theirε{\displaystyle \varepsilon }-ballsis thetriangle inequality.
For an absorbing setC{\displaystyle C}such that ifx∈C,{\displaystyle x\in C,}thentx∈C{\displaystyle tx\in C}whenever0≤t≤1,{\displaystyle 0\leq t\leq 1,}define the Minkowski functional ofC{\displaystyle C}to beμC(x)=inf{r>0:x∈rC}.{\displaystyle \mu _{C}(x)=\inf\{r>0:x\in rC\}.}
From this definition it follows thatμC{\displaystyle \mu _{C}}is a seminorm ifC{\displaystyle C}is balanced and convex (it is also absorbent by assumption). Conversely, given a family of seminorms, the sets{x:pα1(x)<ε1,…,pαn(x)<εn}{\displaystyle \left\{x:p_{\alpha _{1}}(x)<\varepsilon _{1},\ldots ,p_{\alpha _{n}}(x)<\varepsilon _{n}\right\}}form a base of convex absorbent balanced sets.
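As an illustrative numerical sketch (the helper `minkowski_functional` and its bisection scheme are my own, not from the text), the infimum definingμC{\displaystyle \mu _{C}}can be approximated given only a membership test for a convex, balanced, absorbing setC{\displaystyle C}; forC{\displaystyle C}the closed Euclidean unit ball,μC{\displaystyle \mu _{C}}recovers the Euclidean norm.

```python
def minkowski_functional(x, in_C, hi=1e6, tol=1e-9):
    """Approximate mu_C(x) = inf{r > 0 : x in r*C} by bisection.

    in_C: membership oracle for the convex, balanced, absorbing set C.
    Assumes x is absorbed by C at some scale below `hi`.
    """
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # x lies in mid*C  iff  x/mid lies in C (for mid > 0)
        if mid > 0 and in_C(tuple(c / mid for c in x)):
            hi = mid  # mid works; try a smaller radius
        else:
            lo = mid
    return hi

# For C the closed Euclidean unit ball, mu_C is the Euclidean norm.
unit_ball = lambda p: sum(c * c for c in p) <= 1.0
mu = minkowski_functional((3.0, 4.0), unit_ball)  # ≈ 5.0
```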
Theorem[7]—Suppose thatX{\displaystyle X}is a (real or complex) vector space and letB{\displaystyle {\mathcal {B}}}be afilter baseof subsets ofX{\displaystyle X}such that:
ThenB{\displaystyle {\mathcal {B}}}is aneighborhood baseat 0 for a locally convex TVS topology onX.{\displaystyle X.}
Theorem[7]—Suppose thatX{\displaystyle X}is a (real or complex) vector space and letL{\displaystyle {\mathcal {L}}}be a non-empty collection of convex,balanced, andabsorbingsubsets ofX.{\displaystyle X.}Then the set of all positive scalar multiples of finite intersections of sets inL{\displaystyle {\mathcal {L}}}forms a neighborhood base at the origin for a locally convex TVS topology onX.{\displaystyle X.}
Example: auxiliary normed spaces
IfW{\displaystyle W}isconvexandabsorbinginX{\displaystyle X}then thesymmetric setD:=⋂|u|=1uW{\displaystyle D:=\bigcap _{|u|=1}uW}will be convex andbalanced(also known as anabsolutely convex setor adisk) in addition to being absorbing inX.{\displaystyle X.}This guarantees that theMinkowski functionalpD:X→R{\displaystyle p_{D}:X\to \mathbb {R} }ofD{\displaystyle D}will be aseminormonX,{\displaystyle X,}thereby making(X,pD){\displaystyle \left(X,p_{D}\right)}into aseminormed spacethat carries its canonicalpseudometrizabletopology. The set of scalar multiplesrD{\displaystyle rD}asr{\displaystyle r}ranges over{12,13,14,…}{\displaystyle \left\{{\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{4}},\ldots \right\}}(or over any other set of non-zero scalars having0{\displaystyle 0}as a limit point) forms a neighborhood basis of absorbingdisksat the origin for this locally convex topology. IfX{\displaystyle X}is atopological vector spaceand if this convex absorbing subsetW{\displaystyle W}is also abounded subsetofX,{\displaystyle X,}then the absorbing diskD:=⋂|u|=1uW{\displaystyle D:=\bigcap _{|u|=1}uW}will also be bounded, in which casepD{\displaystyle p_{D}}will be anormand(X,pD){\displaystyle \left(X,p_{D}\right)}will form what is known as anauxiliary normed space. If this normed space is aBanach spacethenD{\displaystyle D}is called aBanach disk.
LetX{\displaystyle X}be a TVS.
Say that a vector subspaceM{\displaystyle M}ofX{\displaystyle X}hasthe extension propertyif any continuous linear functional onM{\displaystyle M}can be extended to a continuous linear functional onX{\displaystyle X}.[13]Say thatX{\displaystyle X}has theHahn-Banachextension property(HBEP) if every vector subspace ofX{\displaystyle X}has the extension property.[13]
TheHahn-Banach theoremguarantees that every Hausdorff locally convex space has the HBEP.
For completemetrizable TVSsthere is a converse:
Theorem[13](Kalton)—Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.
If a vector spaceX{\displaystyle X}has uncountable dimension and if we endow it with thefinest vector topologythen this is a TVS with the HBEP that is neither locally convex nor metrizable.[13]
Throughout,P{\displaystyle {\mathcal {P}}}is a family of continuous seminorms that generate the topology ofX.{\displaystyle X.}
Topological closure
IfS⊆X{\displaystyle S\subseteq X}andx∈X,{\displaystyle x\in X,}thenx∈clS{\displaystyle x\in \operatorname {cl} S}if and only if for everyr>0{\displaystyle r>0}and every finite collectionp1,…,pn∈P{\displaystyle p_{1},\ldots ,p_{n}\in {\mathcal {P}}}there exists somes∈S{\displaystyle s\in S}such that∑i=1npi(x−s)<r.{\displaystyle \sum _{i=1}^{n}p_{i}(x-s)<r.}[14]The closure of{0}{\displaystyle \{0\}}inX{\displaystyle X}is equal to⋂p∈Pp−1(0).{\displaystyle \bigcap _{p\in {\mathcal {P}}}p^{-1}(0).}[15]
Topology of Hausdorff locally convex spaces
Every Hausdorff locally convex space ishomeomorphicto a vector subspace of a product ofBanach spaces.[16]TheAnderson–Kadec theoremstates that every infinite–dimensionalseparableFréchet spaceishomeomorphicto theproduct space∏i∈NR{\textstyle \prod _{i\in \mathbb {N} }\mathbb {R} }of countably many copies ofR{\displaystyle \mathbb {R} }(this homeomorphism need not be alinear map).[17]
Algebraic properties of convex subsets
A subsetC{\displaystyle C}is convex if and only iftC+(1−t)C⊆C{\displaystyle tC+(1-t)C\subseteq C}for all0≤t≤1{\displaystyle 0\leq t\leq 1}[18]or equivalently, if and only if(s+t)C=sC+tC{\displaystyle (s+t)C=sC+tC}for all positive reals>0andt>0,{\displaystyle s>0{\text{ and }}t>0,}[19]where because(s+t)C⊆sC+tC{\displaystyle (s+t)C\subseteq sC+tC}always holds, theequals sign={\displaystyle \,=\,}can be replaced with⊇.{\displaystyle \,\supseteq .\,}IfC{\displaystyle C}is a convex set that contains the origin thenC{\displaystyle C}isstar shapedat the origin and for all non-negative reals≥0andt≥0,{\displaystyle s\geq 0{\text{ and }}t\geq 0,}(sC)∩(tC)=(min{s,t})C.{\displaystyle (sC)\cap (tC)=(\min _{}\{s,t\})C.}
TheMinkowski sumof two convex sets is convex; furthermore, the scalar multiple of a convex set is again convex.[20]
Topological properties of convex subsets
For any subsetS{\displaystyle S}of a TVSX,{\displaystyle X,}theconvex hull(respectively,closed convex hull,balanced hull,convex balanced hull) ofS,{\displaystyle S,}denoted bycoS{\displaystyle \operatorname {co} S}(respectively,co¯S,{\displaystyle {\overline {\operatorname {co} }}S,}balS,{\displaystyle \operatorname {bal} S,}cobalS{\displaystyle \operatorname {cobal} S}), is the smallest convex (respectively, closed convex, balanced, convex balanced) subset ofX{\displaystyle X}containingS.{\displaystyle S.}
Any vector spaceX{\displaystyle X}endowed with thetrivial topology(also called theindiscrete topology) is a locally convex TVS (and of course, it is the coarsest such topology).
This topology is Hausdorff if and only ifX={0}.{\displaystyle X=\{0\}.}The indiscrete topology makes any vector space into acompletepseudometrizablelocally convex TVS.
In contrast, thediscrete topologyforms a vector topology onX{\displaystyle X}if and only ifX={0}.{\displaystyle X=\{0\}.}This follows from the fact that everytopological vector spaceis aconnected space.
IfX{\displaystyle X}is a real or complex vector space and ifP{\displaystyle {\mathcal {P}}}is the set of all seminorms onX{\displaystyle X}then the locally convex TVS topology, denoted byτlc,{\displaystyle \tau _{\operatorname {lc} },}thatP{\displaystyle {\mathcal {P}}}induces onX{\displaystyle X}is called thefinest locally convex topologyonX.{\displaystyle X.}[37]This topology may also be described as the TVS-topology onX{\displaystyle X}having as a neighborhood base at the origin the set of allabsorbingdisksinX.{\displaystyle X.}[37]Any locally convex TVS-topology onX{\displaystyle X}is necessarily a subset ofτlc.{\displaystyle \tau _{\operatorname {lc} }.}(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}isHausdorff.[15]Every linear map from(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}into another locally convex TVS is necessarily continuous.[15]In particular, every linear functional on(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is continuous and every vector subspace ofX{\displaystyle X}is closed in(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)};[15]therefore, ifX{\displaystyle X}is infinite dimensional then(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is not pseudometrizable (and thus not metrizable).[37]Moreover,τlc{\displaystyle \tau _{\operatorname {lc} }}is theonlyHausdorff locally convex topology onX{\displaystyle X}with the property that any linear map from it into any Hausdorff locally convex space is continuous.[38]The space(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is abornological space.[39]
Every normed space is a Hausdorff locally convex space, and much of the theory of locally convex spaces generalizes parts of the theory of normed spaces.
The family of seminorms can be taken to be the single norm.
Every Banach space is a complete Hausdorff locally convex space, in particular, theLp{\displaystyle L^{p}}spaceswithp≥1{\displaystyle p\geq 1}are locally convex.
More generally, every Fréchet space is locally convex.
A Fréchet space can be defined as a complete locally convex space with a separated countable family of seminorms.
The spaceRω{\displaystyle \mathbb {R} ^{\omega }}ofreal valued sequenceswith the family of seminorms given bypi({xn}n)=|xi|,i∈N{\displaystyle p_{i}\left(\left\{x_{n}\right\}_{n}\right)=\left|x_{i}\right|,\qquad i\in \mathbb {N} }is locally convex. This countable family of seminorms is separating and the space is complete, so this is a Fréchet space, which is not normable. This is also thelimit topologyof the spacesRn,{\displaystyle \mathbb {R} ^{n},}embedded inRω{\displaystyle \mathbb {R} ^{\omega }}in the natural way, by padding finite sequences with infinitely many zeros.
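A small sketch (names hypothetical, not from the text) of why convergence for these seminorms is exactly coordinatewise: eachpi{\displaystyle p_{i}}sees only thei{\displaystyle i}-th entry, so a sequence of sequences can tend to zero in everypi{\displaystyle p_{i}}while its sup-norm stays 1, which is one way to see that no single norm gives this topology.

```python
# Seminorm p_i on the sequence space: pick out the i-th coordinate.
def p(i, x):
    return abs(x(i))

# e_k = the k-th standard "unit sequence": 1 in slot k, 0 elsewhere.
def e(k):
    return lambda n: 1.0 if n == k else 0.0

# For each fixed i, p_i(e_k) = 0 once k > i, so e_k -> 0 in this
# topology -- even though sup_n |e_k(n)| = 1 for every k.
for i in range(5):
    assert all(p(i, e(k)) == 0.0 for k in range(i + 1, i + 10))
    assert p(i, e(i)) == 1.0
```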
Given any vector spaceX{\displaystyle X}and a collectionF{\displaystyle F}of linear functionals on it,X{\displaystyle X}can be made into a locally convex topological vector space by giving it the weakest topology making all linear functionals inF{\displaystyle F}continuous. This is known as theweak topologyor theinitial topologydetermined byF.{\displaystyle F.}The collectionF{\displaystyle F}may be thealgebraic dualofX{\displaystyle X}or any other collection.
The family of seminorms in this case is given bypf(x)=|f(x)|{\displaystyle p_{f}(x)=|f(x)|}for allf{\displaystyle f}inF.{\displaystyle F.}
Spaces of differentiable functions give other non-normable examples. Consider the space ofsmooth functionsf:Rn→C{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {C} }such thatsupx|xaDbf|<∞,{\displaystyle \sup _{x}\left|x^{a}D_{b}f\right|<\infty ,}wherea{\displaystyle a}andb{\displaystyle b}aremulti-indices.
The family of seminorms defined bypa,b(f)=supx|xaDbf(x)|{\displaystyle p_{a,b}(f)=\sup _{x}\left|x^{a}D_{b}f(x)\right|}is separating and countable, and the space is complete, so this metrizable space is a Fréchet space.
It is known as theSchwartz space, or the space of functions of rapid decrease, and itsdual spaceis the space oftempered distributions.
An importantfunction spacein functional analysis is the spaceD(U){\displaystyle D(U)}of smooth functions withcompact supportinU⊆Rn.{\displaystyle U\subseteq \mathbb {R} ^{n}.}A more detailed construction is needed for the topology of this space because the spaceC0∞(U){\displaystyle C_{0}^{\infty }(U)}is not complete in the uniform norm. The topology onD(U){\displaystyle D(U)}is defined as follows: for any fixedcompact setK⊆U,{\displaystyle K\subseteq U,}the spaceC0∞(K){\displaystyle C_{0}^{\infty }(K)}of functionsf∈C0∞{\displaystyle f\in C_{0}^{\infty }}withsupp(f)⊆K{\displaystyle \operatorname {supp} (f)\subseteq K}is aFréchet spacewith countable family of seminorms‖f‖m=supk≤msupx|Dkf(x)|{\displaystyle \|f\|_{m}=\sup _{k\leq m}\sup _{x}\left|D^{k}f(x)\right|}(these are actually norms, and the completion of the spaceC0∞(K){\displaystyle C_{0}^{\infty }(K)}with the‖⋅‖m{\displaystyle \|\cdot \|_{m}}norm is a Banach spaceDm(K){\displaystyle D^{m}(K)}).
Given any collection(Ka)a∈A{\displaystyle \left(K_{a}\right)_{a\in A}}of compact sets, directed by inclusion and such that their union equalsU,{\displaystyle U,}theC0∞(Ka){\displaystyle C_{0}^{\infty }\left(K_{a}\right)}form adirect system, andD(U){\displaystyle D(U)}is defined to be the limit of this system. Such a limit of Fréchet spaces is known as anLF space. More concretely,D(U){\displaystyle D(U)}is the union of all theC0∞(Ka){\displaystyle C_{0}^{\infty }\left(K_{a}\right)}with the strongestlocally convextopology which makes eachinclusion mapC0∞(Ka)↪D(U){\displaystyle C_{0}^{\infty }\left(K_{a}\right)\hookrightarrow D(U)}continuous.
This space is locally convex and complete. However, it is not metrizable, and so it is not a Fréchet space. The dual space ofD(Rn){\displaystyle D\left(\mathbb {R} ^{n}\right)}is the space ofdistributionsonRn.{\displaystyle \mathbb {R} ^{n}.}
More abstractly, given atopological spaceX,{\displaystyle X,}the spaceC(X){\displaystyle C(X)}of continuous (not necessarily bounded) functions onX{\displaystyle X}can be given the topology ofuniform convergenceon compact sets. This topology is defined by semi-normsφK(f)=max{|f(x)|:x∈K}{\displaystyle \varphi _{K}(f)=\max\{|f(x)|:x\in K\}}(asK{\displaystyle K}varies over thedirected setof all compact subsets ofX{\displaystyle X}). WhenX{\displaystyle X}is locally compact (for example, an open set inRn{\displaystyle \mathbb {R} ^{n}}) theStone–Weierstrass theoremapplies—in the case of real-valued functions, any subalgebra ofC(X){\displaystyle C(X)}that separates points and contains the constant functions (for example, the subalgebra of polynomials) isdense.
Many topological vector spaces are locally convex. Examples of spaces that lack local convexity include the following:

- the spacesLp{\displaystyle L^{p}}for0<p<1{\displaystyle 0<p<1};
- the space of measurable functions on the unit interval (identifying functions that agree almost everywhere) with the topology ofconvergence in measure.
Both examples have the property that any continuous linear map to thereal numbersis0.{\displaystyle 0.}In particular, theirdual spaceis trivial, that is, it contains only the zero functional.
Theorem[40]—LetT:X→Y{\displaystyle T:X\to Y}be a linear operator between TVSs whereY{\displaystyle Y}is locally convex (note thatX{\displaystyle X}neednotbe locally convex). ThenT{\displaystyle T}is continuous if and only if for every continuous seminormq{\displaystyle q}onY{\displaystyle Y}, there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such thatq∘T≤p.{\displaystyle q\circ T\leq p.}
Because locally convex spaces are topological spaces as well as vector spaces, the natural functions to consider between two locally convex spaces arecontinuous linear maps.
Using the seminorms, a necessary and sufficient criterion for thecontinuityof a linear map can be given that closely resembles the more familiarboundedness conditionfound for Banach spaces.
Given locally convex spacesX{\displaystyle X}andY{\displaystyle Y}with families of seminorms(pα)α{\displaystyle \left(p_{\alpha }\right)_{\alpha }}and(qβ)β{\displaystyle \left(q_{\beta }\right)_{\beta }}respectively, a linear mapT:X→Y{\displaystyle T:X\to Y}is continuous if and only if for everyβ,{\displaystyle \beta ,}there existα1,…,αn{\displaystyle \alpha _{1},\ldots ,\alpha _{n}}andM>0{\displaystyle M>0}such that for allv∈X,{\displaystyle v\in X,}qβ(Tv)≤M(pα1(v)+⋯+pαn(v)).{\displaystyle q_{\beta }(Tv)\leq M\left(p_{\alpha _{1}}(v)+\dotsb +p_{\alpha _{n}}(v)\right).}
In other words, each seminorm of the range ofT{\displaystyle T}isboundedabove by some finite sum of seminorms in thedomain. If the family(pα)α{\displaystyle \left(p_{\alpha }\right)_{\alpha }}is a directed family, and it can always be chosen to be directed as explained above, then the formula becomes even simpler and more familiar:qβ(Tv)≤Mpα(v).{\displaystyle q_{\beta }(Tv)\leq Mp_{\alpha }(v).}
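As a toy instance of this criterion (the operator below is my own illustrative choice, not from the text), takeT(x)=x0+x1{\displaystyle T(x)=x_{0}+x_{1}}from the sequence space with seminormspi(x)=|xi|{\displaystyle p_{i}(x)=\left|x_{i}\right|}intoR{\displaystyle \mathbb {R} }with the absolute value: the bound holds withM=1{\displaystyle M=1}and the two seminormsp0,p1{\displaystyle p_{0},p_{1}}.

```python
def p(i, x):
    """Domain seminorm p_i(x) = |x_i|."""
    return abs(x[i])

def T(x):
    """A linear map into R: T(x) = x_0 + x_1."""
    return x[0] + x[1]

# Continuity criterion: q(Tx) <= M * (p_0(x) + p_1(x)) with q = abs
# and M = 1, which here is just the triangle inequality.
for x in [(3.0, -4.0, 7.0), (1.5, 2.5), (-2.0, -3.0, 0.0, 9.0)]:
    assert abs(T(x)) <= p(0, x) + p(1, x)
```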
Theclassof all locally convex topological vector spaces forms acategorywith continuous linear maps asmorphisms.
Theorem[40]—IfX{\displaystyle X}is a TVS (not necessarily locally convex) and iff{\displaystyle f}is a linear functional onX{\displaystyle X}, thenf{\displaystyle f}is continuous if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|f|≤p.{\displaystyle |f|\leq p.}
IfX{\displaystyle X}is a real or complex vector space,f{\displaystyle f}is a linear functional onX{\displaystyle X}, andp{\displaystyle p}is a seminorm onX{\displaystyle X}, then|f|≤p{\displaystyle |f|\leq p}if and only iff≤p.{\displaystyle f\leq p.}[41]Iff{\displaystyle f}is a nonzero linear functional on a real vector spaceX{\displaystyle X}and ifp{\displaystyle p}is a seminorm onX{\displaystyle X}, thenf≤p{\displaystyle f\leq p}if and only iff−1(1)∩{x∈X:p(x)<1}=∅.{\displaystyle f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .}[15]
Letn≥1{\displaystyle n\geq 1}be an integer,X1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}be TVSs (not necessarily locally convex), letY{\displaystyle Y}be a locally convex TVS whose topology is determined by a familyQ{\displaystyle {\mathcal {Q}}}of continuous seminorms, and letM:∏i=1nXi→Y{\displaystyle M:\prod _{i=1}^{n}X_{i}\to Y}be amultilinear operatorthat is linear in each of itsn{\displaystyle n}coordinates.
The following are equivalent:

1.M{\displaystyle M}is continuous;
2.M{\displaystyle M}is continuous at the origin;
3. for everyq∈Q,{\displaystyle q\in {\mathcal {Q}},}there exist continuous seminormsp1,…,pn{\displaystyle p_{1},\ldots ,p_{n}}onX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}respectively such thatq(M(x1,…,xn))≤p1(x1)⋯pn(xn){\displaystyle q\left(M\left(x_{1},\ldots ,x_{n}\right)\right)\leq p_{1}\left(x_{1}\right)\cdots p_{n}\left(x_{n}\right)}for allxi∈Xi.{\displaystyle x_{i}\in X_{i}.}
https://en.wikipedia.org/wiki/Locally_convex_topological_vector_space
Inmathematics, ametric spaceis asettogether with a notion ofdistancebetween itselements, usually calledpoints. The distance is measured by afunctioncalled ametricordistance function.[1]Metric spaces are a general setting for studying many of the concepts ofmathematical analysisandgeometry.
The most familiar example of a metric space is3-dimensional Euclidean spacewith its usual notion of distance. Other well-known examples are asphereequipped with theangular distanceand thehyperbolic plane. A metric may correspond to ametaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with theHamming distance, which measures the number of characters that need to be changed to get from one string to another.
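For instance, the Hamming distance between two equal-length strings simply counts the positions at which they differ; a minimal sketch:

```python
def hamming(s, t):
    """Hamming distance: number of positions where equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(a != b for a, b in zip(s, t))

# "karolin" and "kathrin" differ in positions 2, 3, 4.
d = hamming("karolin", "kathrin")  # 3
```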
Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, includingRiemannian manifolds,normed vector spaces, andgraphs. Inabstract algebra, thep-adic numbersarise as elements of thecompletionof a metric structure on therational numbers. Metric spaces are also studied in their own right inmetric geometry[2]andanalysis on metric spaces.[3]
Many of the basic notions ofmathematical analysis, includingballs,completeness, as well asuniform,Lipschitz, andHölder continuity, can be defined in the setting of metric spaces. Other notions, such ascontinuity,compactness, andopenandclosed sets, can be defined for metric spaces, but also in the even more general setting oftopological spaces.
To see the utility of different notions of distance, consider thesurface of the Earthas a set of points. We can measure the distance between two such points by the length of theshortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural inseismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points.
The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts.
Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as withWasserstein metricson spaces ofmeasures) or the degree of difference between two objects (for example, theHamming distancebetween two strings of characters, or theGromov–Hausdorff distancebetween metric spaces themselves).
Formally, ametric spaceis anordered pair(M,d)whereMis a set anddis ametriconM, i.e., afunctiond:M×M→R{\displaystyle d\,\colon M\times M\to \mathbb {R} }satisfying the following axioms for all pointsx,y,z∈M{\displaystyle x,y,z\in M}:[4][5]

1. The distance from a point to itself is zero:d(x,x)=0.{\displaystyle d(x,x)=0.}
2. (Positivity) The distance between two distinct points is always positive: ifx≠y{\displaystyle x\neq y}, thend(x,y)>0.{\displaystyle d(x,y)>0.}
3. (Symmetry) The distance fromxtoyis always the same as the distance fromytox:d(x,y)=d(y,x).{\displaystyle d(x,y)=d(y,x).}
4. (Triangle inequality)d(x,z)≤d(x,y)+d(y,z).{\displaystyle d(x,z)\leq d(x,y)+d(y,z).}
If the metricdis unambiguous, one often refers byabuse of notationto "the metric spaceM".
By taking all axioms except the second, one can show that distance is always non-negative:0=d(x,x)≤d(x,y)+d(y,x)=2d(x,y){\displaystyle 0=d(x,x)\leq d(x,y)+d(y,x)=2d(x,y)}Therefore the second axiom can be weakened toIfx≠y, thend(x,y)≠0{\textstyle {\text{If }}x\neq y{\text{, then }}d(x,y)\neq 0}and combined with the first to maked(x,y)=0⟺x=y{\textstyle d(x,y)=0\iff x=y}.[6]
Thereal numberswith the distance functiond(x,y)=|y−x|{\displaystyle d(x,y)=|y-x|}given by theabsolute differenceform a metric space. Many properties of metric spaces and functions between them are generalizations of concepts inreal analysisand coincide with those concepts when applied to the real line.
The Euclidean planeR2{\displaystyle \mathbb {R} ^{2}}can be equipped with many different metrics. TheEuclidean distancefamiliar from school mathematics can be defined byd2((x1,y1),(x2,y2))=(x2−x1)2+(y2−y1)2.{\displaystyle d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.}
ThetaxicaborManhattandistanceis defined byd1((x1,y1),(x2,y2))=|x2−x1|+|y2−y1|{\displaystyle d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|}and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article.
Themaximum,L∞{\displaystyle L^{\infty }}, orChebyshev distanceis defined byd∞((x1,y1),(x2,y2))=max{|x2−x1|,|y2−y1|}.{\displaystyle d_{\infty }((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.}This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves akingwould have to make on achessboardto travel from one point to another on the given space.
In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formulad∞(p,q)≤d2(p,q)≤d1(p,q)≤2d∞(p,q),{\displaystyle d_{\infty }(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty }(p,q),}which holds for every pair of pointsp,q∈R2{\displaystyle p,q\in \mathbb {R} ^{2}}.
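The comparison chain can be spot-checked numerically; the following sketch (function names are my own) samples random point pairs and verifies thatd∞≤d2≤d1≤2d∞{\displaystyle d_{\infty }\leq d_{2}\leq d_{1}\leq 2d_{\infty }}holds at each of them.

```python
import math
import random

def d1(p, q):    # taxicab / Manhattan
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # maximum / Chebyshev
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

random.seed(0)
eps = 1e-12  # tolerance for floating-point rounding
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert dinf(p, q) <= d2(p, q) + eps
    assert d2(p, q) <= d1(p, q) + eps
    assert d1(p, q) <= 2 * dinf(p, q) + eps
```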
A radically different distance can be defined by settingd(p,q)={0,ifp=q,1,otherwise.{\displaystyle d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}}UsingIverson brackets,d(p,q)=[p≠q]{\displaystyle d(p,q)=[p\neq q]}In thisdiscrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points.
All of these metrics make sense onRn{\displaystyle \mathbb {R} ^{n}}as well asR2{\displaystyle \mathbb {R} ^{2}}.
Given a metric space(M,d)and asubsetA⊆M{\displaystyle A\subseteq M}, we can considerAto be a metric space by measuring distances the same way we would inM. Formally, theinduced metriconAis a functiondA:A×A→R{\displaystyle d_{A}:A\times A\to \mathbb {R} }defined bydA(x,y)=d(x,y).{\displaystyle d_{A}(x,y)=d(x,y).}For example, if we take the two-dimensional sphereS2as a subset ofR3{\displaystyle \mathbb {R} ^{3}}, the Euclidean metric onR3{\displaystyle \mathbb {R} ^{3}}induces the straight-line metric onS2described above. Two more useful examples are the open interval(0, 1)and the closed interval[0, 1]thought of as subspaces of the real line.
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. Hisdistancewas given by the logarithm of across ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models forelliptic geometryandhyperbolic geometry, andFelix Klein, in several publications, established the field ofnon-Euclidean geometrythrough the use of theCayley–Klein metric.
The idea of an abstract space with metric properties was addressed in 1906 byRené Maurice Fréchet[7]and the termmetric spacewas coined byFelix Hausdorffin 1914.[8][9][10]
Fréchet's work laid the foundation for understandingconvergence,continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff andStefan Banachfurther refined and expanded the framework of metric spaces. Hausdorff introducedtopological spacesas a generalization of metric spaces. Banach's work infunctional analysisheavily relied on the metric structure. Over time, metric spaces became a central part ofmodern mathematics. They have influenced various fields includingtopology,geometry, andapplied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts.
A distance function is enough to define notions of closeness and convergence that were first developed inreal analysis. Properties that depend on the structure of a metric space are referred to asmetric properties. Every metric space is also atopological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are reallytopological properties.
For any pointxin a metric spaceMand any real numberr> 0, theopen ballof radiusraroundxis defined to be the set of points that are strictly less than distancerfromx:Br(x)={y∈M:d(x,y)<r}.{\displaystyle B_{r}(x)=\{y\in M:d(x,y)<r\}.}This is a natural way to define a set of points that are relatively close tox. Therefore, a setN⊆M{\displaystyle N\subseteq M}is aneighborhoodofx(informally, it contains all points "close enough" tox) if it contains an open ball of radiusraroundxfor somer> 0.
Anopen setis a set which is a neighborhood of all its points. It follows that the open balls form abasefor a topology onM. In other words, the open sets ofMare exactly the unions of open balls. As in any topology,closed setsare the complements of open sets. Sets may be both open and closed as well as neither open nor closed.
This topology does not carry all the information about the metric space. For example, the distancesd1,d2, andd∞defined above all induce the same topology onR2{\displaystyle \mathbb {R} ^{2}}, although they behave differently in many respects. Similarly,R{\displaystyle \mathbb {R} }with the Euclidean metric and its subspace the interval(0, 1)with the induced metric arehomeomorphicbut have very different metric properties.
Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are calledmetrizableand are particularly well-behaved in many ways: in particular, they areparacompact[11]Hausdorff spaces(hencenormal) andfirst-countable.[a]TheNagata–Smirnov metrization theoremgives a characterization of metrizability in terms of other topological properties, without reference to metrics.
Convergence of sequencesin Euclidean space is defined as follows: a sequence(xn)converges to a pointxif for everyε > 0there is an integerNsuch that for alln>N,d(xn,x) < ε.
Convergence of sequences in a topological space is defined as follows: a sequence(xn)converges to a pointxif for every open setUcontainingxthere is an integerNsuch that for alln>N,xn∈U.{\displaystyle x_{n}\in U.}
In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern fortopological propertiesof metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis.
Informally, a metric space iscompleteif it has no "missing points": every sequence that looks like it should converge to something actually converges.
To make this precise: a sequence(xn)in a metric spaceMisCauchyif for everyε > 0there is an integerNsuch that for allm,n>N,d(xm,xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: ifxmandxnare both less thanεaway from the limit, then they are less than2εaway from each other. If the converse is true—every Cauchy sequence inMconverges—thenMis complete.
Euclidean spaces are complete, as isR2{\displaystyle \mathbb {R} ^{2}}with the other metrics described above. Two examples of spaces which are not complete are(0, 1)and the rationals, each with the metric induced fromR{\displaystyle \mathbb {R} }. One can think of(0, 1)as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it inR{\displaystyle \mathbb {R} }(for example, its successive decimal approximations). These examples show that completeness isnota topological property, sinceR{\displaystyle \mathbb {R} }is complete but the homeomorphic space(0, 1)is not.
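The rationals' failure of completeness can be made concrete: the decimal truncations of2{\displaystyle {\sqrt {2}}}(computed exactly with integer square roots; helper names are my own) form a Cauchy sequence of rationals whose only possible limit inR{\displaystyle \mathbb {R} }is irrational.

```python
from fractions import Fraction
from math import isqrt

def a(n):
    """n-th decimal truncation of sqrt(2), as an exact rational."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

# Cauchy: for m, n >= N the truncations agree in the first N decimal
# places, so |a(m) - a(n)| < 10**-N.
assert abs(a(30) - a(20)) < Fraction(1, 10 ** 20)

# Every term is rational, but the limit in R is the irrational sqrt(2),
# so the sequence has no limit inside Q: Q is not complete.
assert a(5) == Fraction(141421, 100000)
```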
This notion of "missing points" can be made precise. In fact, every metric space has a uniquecompletion, which is a complete space that contains the given space as adensesubset. For example,[0, 1]is the completion of(0, 1), and the real numbers are the completion of the rationals.
Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, thep-adic numbersare defined as the completion of the rationals under a different metric. Completion is particularly common as a tool infunctional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example,weak solutionstodifferential equationstypically live in a completion (aSobolev space) rather than the original space of nice functions for which the differential equation actually makes sense.
A metric spaceMisboundedif there is anrsuch that no pair of points inMis more than distancerapart.[b]The least suchris called thediameterofM.
The spaceMis calledprecompactortotally boundedif for everyr> 0there is a finitecoverofMby open balls of radiusr. Every totally bounded space is bounded. To see this, start with a finite cover byr-balls for some arbitraryr. Since the subset ofMconsisting of the centers of these balls is finite, it has finite diameter, sayD. By the triangle inequality, the diameter of the whole space is at mostD+ 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded isR2{\displaystyle \mathbb {R} ^{2}}(or any other infinite set) with the discrete metric.
Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces:

1. A metric spaceMis compact if every open cover has a finite subcover (the usual topological definition).
2.Mis compact if every sequence has a convergent subsequence (sequential compactness).
3.Mis compact if it is complete and totally bounded.
One example of a compact space is the closed interval[0, 1].
Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool isLebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover.
Unlike in the case of topological spaces or algebraic structures such asgroupsorrings, there is no single "right" type ofstructure-preserving functionbetween metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}are two metric spaces. The words "function" and "map" are used interchangeably.
One interpretation of a "structure-preserving" map is one that fully preserves the distance function: a functionf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isdistance-preservingif for every pair of pointsxandyinM1,{\displaystyle M_{1},}d2(f(x),f(y))=d1(x,y).{\displaystyle d_{2}(f(x),f(y))=d_{1}(x,y).}
It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called anisometry.[13]One perhaps non-obvious example of an isometry between spaces described in this article is the mapf:(R2,d1)→(R2,d∞){\displaystyle f:(\mathbb {R} ^{2},d_{1})\to (\mathbb {R} ^{2},d_{\infty })}defined byf(x,y)=(x+y,x−y).{\displaystyle f(x,y)=(x+y,x-y).}
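That this map is distance-preserving follows from the identitymax{|a+b|,|a−b|}=|a|+|b|{\displaystyle \max\{|a+b|,|a-b|\}=|a|+|b|}applied to the coordinate differences; a numerical spot-check (a sketch, with my own helper names):

```python
import random

def d1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dinf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def f(p):
    """Candidate isometry (R^2, d1) -> (R^2, d_inf)."""
    x, y = p
    return (x + y, x - y)

random.seed(1)
for _ in range(1000):
    p = (random.uniform(-9, 9), random.uniform(-9, 9))
    q = (random.uniform(-9, 9), random.uniform(-9, 9))
    # d_inf(f(p), f(q)) = max(|dx+dy|, |dx-dy|) = |dx| + |dy| = d1(p, q)
    assert abs(dinf(f(p), f(q)) - d1(p, q)) < 1e-9
```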
If there is an isometry between the spacesM1andM2, they are said to beisometric. Metric spaces that are isometric areessentially identical.
On the other end of the spectrum, one can forget entirely about the metric structure and studycontinuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are:
Ahomeomorphismis a continuous bijection whose inverse is also continuous; if there is a homeomorphism betweenM1andM2, they are said to behomeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example,R{\displaystyle \mathbb {R} }is unbounded and complete, while(0, 1)is bounded but not complete.
A functionf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isuniformly continuousif for every real numberε > 0there existsδ > 0such that for all pointsxandyinM1such thatd(x,y)<δ{\displaystyle d(x,y)<\delta }, we haved2(f(x),f(y))<ε.{\displaystyle d_{2}(f(x),f(y))<\varepsilon .}
The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the pointx. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences inM1to Cauchy sequences inM2. In other words, uniform continuity preserves some metric properties which are not purely topological.
On the other hand, theHeine–Cantor theoremstates that ifM1is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces.
ALipschitz mapis one that stretches distances by at most a bounded factor. Formally, given a real numberK> 0, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isK-Lipschitzifd2(f(x),f(y))≤Kd1(x,y)for allx,y∈M1.{\displaystyle d_{2}(f(x),f(y))\leq Kd_{1}(x,y)\quad {\text{for all}}\quad x,y\in M_{1}.}Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric.[14]For example, a curve in a metric space isrectifiable(has finite length) if and only if it has a Lipschitz reparametrization.
A 1-Lipschitz map is sometimes called anonexpandingormetric map. Metric maps are commonly taken to be the morphisms of thecategory of metric spaces.
AK-Lipschitz map forK< 1is called acontraction. TheBanach fixed-point theoremstates that ifMis a complete metric space, then every contractionf:M→M{\displaystyle f:M\to M}admits a uniquefixed point. If the metric spaceMis compact, the result holds for a slightly weaker condition onf: a mapf:M→M{\displaystyle f:M\to M}admits a unique fixed point ifd(f(x),f(y))<d(x,y)for allx≠y∈M.{\displaystyle d(f(x),f(y))<d(x,y)\quad {\mbox{for all}}\quad x\neq y\in M.}
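The proof of the Banach theorem is constructive: iterating f from any starting point yields a Cauchy sequence converging to the fixed point. A minimal Python sketch (the contraction f(x) = x/2 + 1 and the tolerance are illustrative choices, not from the source):

```python
def banach_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x -> f(x); for a contraction on a complete space this converges."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# f(x) = x/2 + 1 is (1/2)-Lipschitz on R; its unique fixed point is x = 2.
fp = banach_fixed_point(lambda x: x / 2 + 1, x0=0.0)
print(fp)  # approximately 2.0
```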
Aquasi-isometryis a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example,R2{\displaystyle \mathbb {R} ^{2}}and its subspaceZ2{\displaystyle \mathbb {Z} ^{2}}are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important ingeometric group theory: theŠvarc–Milnor lemmastates that all spaces on which a groupacts geometricallyare quasi-isometric.[15]
Formally, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}is aquasi-isometric embeddingif there exist constantsA≥ 1andB≥ 0such that1Ad2(f(x),f(y))−B≤d1(x,y)≤Ad2(f(x),f(y))+Bfor allx,y∈M1.{\displaystyle {\frac {1}{A}}d_{2}(f(x),f(y))-B\leq d_{1}(x,y)\leq Ad_{2}(f(x),f(y))+B\quad {\text{ for all }}\quad x,y\in M_{1}.}It is aquasi-isometryif in addition it isquasi-surjective, i.e. there is a constantC≥ 0such that every point inM2{\displaystyle M_{2}}is at distance at mostCfrom some point in the imagef(M1){\displaystyle f(M_{1})}.
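A concrete quasi-isometry witnessing the equivalence of R² and Z² is coordinate-wise rounding, which moves every point by at most √2/2 in the Euclidean metric. The Python check below (an illustrative sketch) verifies the embedding inequality with A = 1 and B = √2:

```python
import math, random

def d2(p, q):  # Euclidean distance on R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

def f(p):  # send a point of R^2 to the nearest integer point of Z^2
    return (round(p[0]), round(p[1]))

A, B = 1.0, math.sqrt(2)  # rounding displaces each point by at most sqrt(2)/2
random.seed(1)
for _ in range(1000):
    x = (random.uniform(-50, 50), random.uniform(-50, 50))
    y = (random.uniform(-50, 50), random.uniform(-50, 50))
    dxy, dfxy = d2(x, y), d2(f(x), f(y))
    # quasi-isometric embedding inequality with constants A = 1, B = sqrt(2)
    assert dfxy / A - B <= dxy <= A * dfxy + B
```

The map is also quasi-surjective with C = √2/2, since every point of R² is within that distance of its rounding.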
Given two metric spaces(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}:
Anormed vector spaceis a vector space equipped with anorm, which is a function that measures the length of vectors. The norm of a vectorvis typically denoted by‖v‖{\displaystyle \lVert v\rVert }. Any normed vector space can be equipped with a metric in which the distance between two vectorsxandyis given byd(x,y):=‖x−y‖.{\displaystyle d(x,y):=\lVert x-y\rVert .}The metricdis said to beinducedby the norm‖⋅‖{\displaystyle \lVert {\cdot }\rVert }. Conversely,[16]if a metricdon avector spaceXis translation invariant (d(x+a,y+a)=d(x,y){\displaystyle d(x+a,y+a)=d(x,y)}for all vectorsx,y, anda) and absolutely homogeneous (d(αx,αy)=|α|d(x,y){\displaystyle d(\alpha x,\alpha y)=|\alpha |\,d(x,y)}for all vectorsx,yand scalarsα),
then it is the metric induced by the norm‖x‖:=d(x,0).{\displaystyle \lVert x\rVert :=d(x,0).}A similar relationship holds betweenseminormsandpseudometrics.
Among examples of metrics induced by a norm are the metricsd1,d2, andd∞onR2{\displaystyle \mathbb {R} ^{2}}, which are induced by theManhattan norm, theEuclidean norm, and themaximum norm, respectively. More generally, theKuratowski embeddingallows one to see any metric space as a subspace of a normed vector space.
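The three norm-induced metrics differ only in the norm applied to the coordinate differences. A small Python sketch (illustrative helper names) computes all three for the same pair of points:

```python
import math

def norm_1(v):    return abs(v[0]) + abs(v[1])        # Manhattan norm
def norm_2(v):    return math.hypot(v[0], v[1])       # Euclidean norm
def norm_max(v):  return max(abs(v[0]), abs(v[1]))    # maximum norm

def induced_metric(norm):
    """The metric d(x, y) = ||x - y|| induced by a norm on R^2."""
    return lambda x, y: norm((x[0] - y[0], x[1] - y[1]))

d1, d2_, d_inf = map(induced_metric, (norm_1, norm_2, norm_max))
x, y = (0, 0), (3, 4)
print(d1(x, y), d2_(x, y), d_inf(x, y))  # 7 5.0 4
```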
Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied infunctional analysis. Completeness is particularly important in this context: a complete normed vector space is known as aBanach space. An unusual property of normed vector spaces is thatlinear transformationsbetween them are continuous if and only if they are Lipschitz. Such transformations are known asbounded operators.
Acurvein a metric space(M,d)is a continuous functionγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}. Thelengthofγis measured byL(γ)=sup0=x0<x1<⋯<xn=T{∑k=1nd(γ(xk−1),γ(xk))}.{\displaystyle L(\gamma )=\sup _{0=x_{0}<x_{1}<\cdots <x_{n}=T}\left\{\sum _{k=1}^{n}d(\gamma (x_{k-1}),\gamma (x_{k}))\right\}.}In general, this supremum may be infinite; a curve of finite length is calledrectifiable.[17]Suppose that the length of the curveγis equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length,γbecomes ageodesic: a curve which is a distance-preserving function.[15]A geodesic is a shortest possible path between any two of its points.[c]
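The supremum in the length formula can be approached with inscribed polygons on uniform partitions. The Python sketch below (illustrative; the semicircle is an example curve with known length π) shows the partial sums increasing toward the true length:

```python
import math

def curve_length(gamma, T, n):
    """Inscribed-polygon approximation of L(gamma) on n uniform subdivisions of [0, T]."""
    pts = [gamma(T * k / n) for k in range(n + 1)]
    return sum(math.dist(pts[k - 1], pts[k]) for k in range(1, n + 1))

gamma = lambda t: (math.cos(t), math.sin(t))  # upper unit semicircle, length pi
for n in (4, 16, 64, 256):
    print(n, curve_length(gamma, math.pi, n))
# the sums increase monotonically toward the true length pi = 3.14159...
```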
Ageodesic metric spaceis a metric space which admits a geodesic between any two of its points. The spaces(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}and(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}are both geodesic metric spaces. In(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}, geodesics are unique, but in(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article.
The spaceMis alength space(or the metricdisintrinsic) if the distance between any two pointsxandyis the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points(1, 0)and(-1, 0)can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface.
Given any metric space(M,d), one can define a new, intrinsic distance functiondintrinsiconMby setting the distance between pointsxandyto be the infimum of thed-lengths of paths between them. For instance, ifdis the straight-line distance on the sphere, thendintrinsicis the great-circle distance. However, in some casesdintrinsicmay have infinite values. For example, ifMis theKoch snowflakewith the subspace metricdinduced fromR2{\displaystyle \mathbb {R} ^{2}}, then the resulting intrinsic distance is infinite for any pair of distinct points.
ARiemannian manifoldis a space equipped with a Riemannianmetric tensor, which determines lengths oftangent vectorsat every point. This can be thought of as defining a notion of distance infinitesimally. In particular, a differentiable pathγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}in a Riemannian manifoldMhas length defined as the integral of the length of the tangent vector to the path:L(γ)=∫0T|γ˙(t)|dt.{\displaystyle L(\gamma )=\int _{0}^{T}|{\dot {\gamma }}(t)|\,dt.}On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such assub-RiemannianandFinsler metrics.
The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is aCAT(k)space(a synthetic condition which depends purely on the metric) if and only if itssectional curvatureis bounded above byk.[20]ThusCAT(k)spaces generalize upper curvature bounds to general metric spaces.
Real analysis makes use of both the metric onRn{\displaystyle \mathbb {R} ^{n}}and theLebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside inmetric measure spaces: spaces that have both ameasureand a metric which are compatible with each other. Formally, ametric measure spaceis a metric space equipped with aBorel regular measuresuch that every ball has positive measure.[21]For example, Euclidean spaces of dimensionn, and more generallyn-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with theLebesgue measure. Certainfractalmetric spaces such as theSierpiński gasketcan be equipped with the α-dimensionalHausdorff measurewhere α is theHausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure.
One application of metric measure spaces is generalizing the notion ofRicci curvaturebeyond Riemannian manifolds. Just asCAT(k)andAlexandrov spacesgeneralize sectional curvature bounds,RCD spacesare a class of metric measure spaces which generalize lower bounds on Ricci curvature.[22]
A metric space isdiscreteif its induced topology is thediscrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular,finite metric spaces(those having afinitenumber of points) are studied incombinatoricsandtheoretical computer science.[23]Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can beisometrically embeddedin a Euclidean space or inHilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.[24][25]
For anyundirected connected graphG, the setVof vertices ofGcan be turned into a metric space by defining thedistancebetween verticesxandyto be the length of the shortest edge path connecting them. This is also calledshortest-path distanceorgeodesic distance. Ingeometric group theorythis construction is applied to theCayley graphof a (typically infinite)finitely-generated group, yielding theword metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.[15]
An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics.
A significant result in this area is that any finite metric space can be probabilistically embedded into atree metricwith an expected distortion ofO(log⁡n){\displaystyle O(\log n)}, wheren{\displaystyle n}is the number of points in the metric space.[26]
This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound ofΩ(log⁡n){\displaystyle \Omega (\log n)}. The tree metrics produced in this embeddingdominatethe original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure.
The result has significant implications for various computational problems:
The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. TheO(log⁡n){\displaystyle O(\log n)}distortion bound has led to improvedapproximation ratiosin several algorithmic problems, demonstrating the practical significance of this theoretical result.
In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:
The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves.HausdorffandGromov–Hausdorff distancedefine metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively.
Suppose(M,d)is a metric space, and letSbe a subset ofM. Thedistance fromSto a pointxofMis, informally, the distance fromxto the closest point ofS. However, since there may not be a single closest point, it is defined via aninfimum:d(x,S)=inf{d(x,s):s∈S}.{\displaystyle d(x,S)=\inf\{d(x,s):s\in S\}.}In particular,d(x,S)=0{\displaystyle d(x,S)=0}if and only ifxbelongs to theclosureofS. Furthermore, distances between points and sets satisfy a version of the triangle inequality:d(x,S)≤d(x,y)+d(y,S),{\displaystyle d(x,S)\leq d(x,y)+d(y,S),}and therefore the mapdS:M→R{\displaystyle d_{S}:M\to \mathbb {R} }defined bydS(x)=d(x,S){\displaystyle d_{S}(x)=d(x,S)}is continuous. Incidentally, this shows that metric spaces arecompletely regular.
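For a finite set S the infimum is an ordinary minimum, so the point-to-set distance and its triangle inequality are directly computable. A Python sketch (illustrative names):

```python
def dist_to_set(x, S, d):
    """d(x, S) = inf over s in S of d(x, s); for finite S the inf is a min."""
    return min(d(x, s) for s in S)

d = lambda a, b: abs(a - b)     # the usual metric on R
S = {1.0, 4.0, 9.0}
print(dist_to_set(5.0, S, d))   # 1.0 (the closest point of S is 4.0)

# triangle inequality for point-set distances: d(x, S) <= d(x, y) + d(y, S)
x, y = 5.0, 7.0
assert dist_to_set(x, S, d) <= d(x, y) + dist_to_set(y, S, d)
```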
Given two subsetsSandTofM, theirHausdorff distanceisdH(S,T)=max{sup{d(s,T):s∈S},sup{d(t,S):t∈T}}.{\displaystyle d_{H}(S,T)=\max\{\sup\{d(s,T):s\in S\},\sup\{d(t,S):t\in T\}\}.}Informally, two setsSandTare close to each other in the Hausdorff distance if no element ofSis too far fromTand vice versa. For example, ifSis an open set in Euclidean space andTis anε-netinsideS, thendH(S,T)<ε{\displaystyle d_{H}(S,T)<\varepsilon }. In general, the Hausdorff distancedH(S,T){\displaystyle d_{H}(S,T)}can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets ofM.
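For finite sets both suprema are maxima, so the Hausdorff distance can be computed directly. A Python sketch (illustrative):

```python
def hausdorff(S, T, d):
    """max of the two directed distances sup_{s in S} d(s, T) and sup_{t in T} d(t, S)."""
    d_point_set = lambda x, A: min(d(x, a) for a in A)
    return max(max(d_point_set(s, T) for s in S),
               max(d_point_set(t, S) for t in T))

d = lambda a, b: abs(a - b)
print(hausdorff({0, 1, 2}, {0, 4}, d))  # 2: the point 4 is at distance 2 from {0, 1, 2}
```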
The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. TheGromov–Hausdorff distancebetween compact spacesXandYis the infimum of the Hausdorff distance over all metric spacesZthat containXandYas subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications.
If(M1,d1),…,(Mn,dn){\displaystyle (M_{1},d_{1}),\ldots ,(M_{n},d_{n})}are metric spaces, andNis theEuclidean normonRn{\displaystyle \mathbb {R} ^{n}}, then(M1×⋯×Mn,d×){\displaystyle {\bigl (}M_{1}\times \cdots \times M_{n},d_{\times }{\bigr )}}is a metric space, where theproduct metricis defined byd×((x1,…,xn),(y1,…,yn))=N(d1(x1,y1),…,dn(xn,yn)),{\displaystyle d_{\times }{\bigl (}(x_{1},\ldots ,x_{n}),(y_{1},\ldots ,y_{n}){\bigr )}=N{\bigl (}d_{1}(x_{1},y_{1}),\ldots ,d_{n}(x_{n},y_{n}){\bigr )},}and the induced topology agrees with theproduct topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained ifNis thetaxicab norm, ap-norm, themaximum norm, or any other norm which is non-decreasing as the coordinates of a positiven-tuple increase (yielding the triangle inequality).
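The product construction can be sketched generically in Python: combine the component distances with a norm on Rⁿ. With the Euclidean norm and two copies of the usual metric on R, the result is the Euclidean metric on R² (helper names are illustrative):

```python
import math

def product_metric(metrics, norm):
    """Combine component metrics d_1, ..., d_n with a norm N on R^n."""
    def d(x, y):
        return norm([m(xi, yi) for m, xi, yi in zip(metrics, x, y)])
    return d

d_abs = lambda a, b: abs(a - b)                       # usual metric on R
euclid = lambda v: math.sqrt(sum(t * t for t in v))   # Euclidean norm on R^n

d_times = product_metric([d_abs, d_abs], euclid)
print(d_times((0, 0), (3, 4)))  # 5.0, i.e. the Euclidean metric on R^2
```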
Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metricd(x,y)=∑i=1∞12idi(xi,yi)1+di(xi,yi).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.}
The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies ofR{\displaystyle \mathbb {R} }is notfirst-countableand thus is not metrizable.
IfMis a metric space with metricd, and∼{\displaystyle \sim }is anequivalence relationonM, then we can endow the quotient setM/∼{\displaystyle M/\!\sim }with a pseudometric. The distance between two equivalence classes[x]{\displaystyle [x]}and[y]{\displaystyle [y]}is defined asd′([x],[y])=inf{d(p1,q1)+d(p2,q2)+⋯+d(pn,qn)},{\displaystyle d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb +d(p_{n},q_{n})\},}where theinfimumis taken over all finite sequences(p1,p2,…,pn){\displaystyle (p_{1},p_{2},\dots ,p_{n})}and(q1,q2,…,qn){\displaystyle (q_{1},q_{2},\dots ,q_{n})}withp1∼x{\displaystyle p_{1}\sim x},qn∼y{\displaystyle q_{n}\sim y},qi∼pi+1,i=1,2,…,n−1{\displaystyle q_{i}\sim p_{i+1},i=1,2,\dots ,n-1}.[30]In general this will only define apseudometric, i.e.d′([x],[y])=0{\displaystyle d'([x],[y])=0}does not necessarily imply that[x]=[y]{\displaystyle [x]=[y]}. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces),d′{\displaystyle d'}is a metric.
The quotient metricd′{\displaystyle d'}is characterized by the followinguniversal property. Iff:(M,d)→(X,δ){\displaystyle f\,\colon (M,d)\to (X,\delta )}is a metric (i.e. 1-Lipschitz) map between metric spaces satisfyingf(x) =f(y)wheneverx∼y{\displaystyle x\sim y}, then the induced functionf¯:M/∼→X{\displaystyle {\overline {f}}\,\colon {M/\sim }\to X}, given byf¯([x])=f(x){\displaystyle {\overline {f}}([x])=f(x)}, is a metric mapf¯:(M/∼,d′)→(X,δ).{\displaystyle {\overline {f}}\,\colon (M/\sim ,d')\to (X,\delta ).}
The quotient metric does not always induce thequotient topology. For example, the topological quotient of the metric spaceN×[0,1]{\displaystyle \mathbb {N} \times [0,1]}identifying all points of the form(n,0){\displaystyle (n,0)}is not metrizable since it is notfirst-countable, but the quotient metric is a well-defined metric on the same set which induces acoarser topology. Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.[31]
A topological space issequentialif and only if it is a (topological) quotient of a metric space.[32]
There are several notions of spaces which have less structure than a metric space, but more than a topological space.
There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, infunctional analysispseudometrics often come fromseminormson vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term intopology.
Some authors define metrics so as to allow the distance functiondto attain the value ∞, i.e. distances are non-negative numbers on theextended real number line.[4]Such a function is also called anextended metricor "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using asubadditivemonotonically increasing bounded function which is zero at zero, e.g.d′(x,y)=d(x,y)/(1+d(x,y)){\displaystyle d'(x,y)=d(x,y)/(1+d(x,y))}ord″(x,y)=min(1,d(x,y)){\displaystyle d''(x,y)=\min(1,d(x,y))}.
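The replacement d′ = d/(1 + d) can be sketched in Python; the two-component extended metric below is an illustrative example, with the infinite distance mapped to the bound 1:

```python
import math

def bound(d):
    """d'(x, y) = d(x, y) / (1 + d(x, y)); an infinite distance becomes 1."""
    def d_prime(x, y):
        t = d(x, y)
        return 1.0 if math.isinf(t) else t / (1.0 + t)
    return d_prime

# an extended metric: two "components" of R at infinite distance from each other
def d(x, y):
    same_component = (x >= 0) == (y >= 0)
    return abs(x - y) if same_component else math.inf

d_prime = bound(d)
print(d_prime(1, 3))   # 2/3
print(d_prime(-1, 1))  # 1.0 (was infinite)
```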
The requirement that the metric take values in[0,∞){\displaystyle [0,\infty )}can be relaxed to consider metrics with values in other structures, including:
These generalizations still induce auniform structureon the space.
ApseudometriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) onlyd(x,x)=0{\displaystyle d(x,x)=0}for allx{\displaystyle x}is required.[34]In other words, the axioms for a pseudometric are:
In some contexts, pseudometrics are referred to assemimetrics[35]because of their relation toseminorms.
Occasionally, aquasimetricis defined as a function that satisfies all axioms for a metric with the possible exception of symmetry.[36]The name of this generalisation is not entirely standardized.[37]
Quasimetrics are common in real life. For example, given a setXof mountain villages, the typical walking times between elements ofXform a quasimetric because travel uphill takes longer than travel downhill. Another example is thelength of car ridesin a city with one-way streets: here, a shortest path from pointAto pointBgoes along a different set of streets than a shortest path fromBtoAand may have a different length.
A quasimetric on the reals can be defined by settingd(x,y)={x−yifx≥y,1otherwise.{\displaystyle d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}}The 1 may be replaced, for example, by infinity or by1+y−x{\displaystyle 1+{\sqrt {y-x}}}or any othersubadditivefunction ofy-x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size byfiling it down, but it is difficult or impossible to grow it.
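This metal-stick quasimetric is a one-liner in Python, and the failure of symmetry is easy to exhibit:

```python
def d(x, y):
    """Cost of turning a stick of length x into one of length y (illustrative)."""
    return x - y if x >= y else 1.0

print(d(5, 3))  # 2: shortening is cheap, proportional to the difference
print(d(3, 5))  # 1.0: lengthening carries a fixed cost
assert d(5, 3) != d(3, 5)        # symmetry fails: d is a quasimetric, not a metric
assert d(7, 2) <= d(7, 4) + d(4, 2)  # the triangle inequality still holds
```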
Given a quasimetric onX, one can define anR-ball aroundxto be the set{y∈X|d(x,y)≤R}{\displaystyle \{y\in X|d(x,y)\leq R\}}. As in the case of a metric, such balls form a basis for a topology onX, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed)Sorgenfrey line.
In ametametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are:
Metametrics appear in the study ofGromov hyperbolic metric spacesand their boundaries. Thevisual metametricon such a space satisfiesd(x,x)=0{\displaystyle d(x,x)=0}for pointsx{\displaystyle x}on the boundary, but otherwised(x,x){\displaystyle d(x,x)}is approximately the distance fromx{\displaystyle x}to the boundary. Metametrics were first defined by Jussi Väisälä.[38]In other work, a function satisfying these axioms is called apartial metric[39][40]or adislocated metric.[34]
AsemimetriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }that satisfies the first three axioms, but not necessarily the triangle inequality:
Some authors work with a weaker form of the triangle inequality, such as:
The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to asquasimetrics,[41]nearmetrics[42]orinframetrics.[43]
The ρ-inframetric inequalities were introduced to modelround-trip delay timesin theinternet.[43]The triangle inequality implies the 2-inframetric inequality, and theultrametric inequalityis exactly the 1-inframetric inequality.
Relaxing the last three axioms leads to the notion of apremetric, i.e. a function satisfying the following conditions:
This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics[44]or pseudometrics;[45]in translations of Russian books it sometimes appears as "prametric".[46]A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.[47]
Any premetric gives rise to a topology as follows. For a positive realr{\displaystyle r}, ther{\displaystyle r}-ballcentered at a pointp{\displaystyle p}is defined asBr(p)={x:d(p,x)<r}.{\displaystyle B_{r}(p)=\{x:d(p,x)<r\}.}
A set is calledopenif for any pointp{\displaystyle p}in the set there is anr{\displaystyle r}-ballcentered atp{\displaystyle p}which is contained in the set. Every premetric space is a topological space, and in fact asequential space.
In general, ther{\displaystyle r}-ballsthemselves need not be open sets with respect to this topology.
As for metrics, the distance between two setsA{\displaystyle A}andB{\displaystyle B}is defined asd(A,B)=inf{d(a,b):a∈A,b∈B}.{\displaystyle d(A,B)=\inf\{d(a,b):a\in A,\,b\in B\}.}
This defines a premetric on thepower setof a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric.
Any premetric gives rise to apreclosure operatorcl{\displaystyle cl}as follows:cl(A)={x:d(x,A)=0}.{\displaystyle cl(A)=\{x:d(x,A)=0\}.}
The prefixespseudo-,quasi-andsemi-can also be combined, e.g., apseudoquasimetric(sometimes calledhemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the openr{\displaystyle r}-ballsform a basis of open sets. A very basic example of a pseudoquasimetric space is the set{0,1}{\displaystyle \{0,1\}}with the premetric given byd(0,1)=1{\displaystyle d(0,1)=1}andd(1,0)=0.{\displaystyle d(1,0)=0.}The associated topological space is theSierpiński space.
Sets equipped with an extended pseudoquasimetric were studied byWilliam Lawvereas "generalized metric spaces".[48]From acategoricalpoint of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of themetric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients.
Lawvere also gave an alternate definition of such spaces asenriched categories. The ordered set(R,≥){\displaystyle (\mathbb {R} ,\geq )}can be seen as acategorywith onemorphisma→b{\displaystyle a\to b}ifa≥b{\displaystyle a\geq b}and none otherwise. Using+as thetensor productand 0 as theidentitymakes this category into amonoidal categoryR∗{\displaystyle R^{*}}.
Every (extended pseudoquasi-)metric space(M,d){\displaystyle (M,d)}can now be viewed as a categoryM∗{\displaystyle M^{*}}enriched overR∗{\displaystyle R^{*}}: the objects ofM∗{\displaystyle M^{*}}are the points ofM, the hom-object between pointsxandyis the distanced(x,y){\displaystyle d(x,y)}, composition is witnessed by the triangle inequalityd(x,y)+d(y,z)≥d(x,z){\displaystyle d(x,y)+d(y,z)\geq d(x,z)}, and the identity at each point by0≥d(x,x).{\displaystyle 0\geq d(x,x).}
The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. Amultisetis a generalization of the notion of asetin which an element can occur more than once. Define the multiset unionU=XY{\displaystyle U=XY}as follows: if an elementxoccursmtimes inXandntimes inYthen it occursm+ntimes inU. A functiondon the set of nonempty finite multisets of elements of a setMis a metric[49]if
By considering the cases of axioms 1 and 2 in which the multisetXhas two elements and the case of axiom 3 in which the multisetsX,Y, andZhave one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements.
A simple example is the set of all nonempty finite multisetsX{\displaystyle X}of integers withd(X)=max(X)−min(X){\displaystyle d(X)=\max(X)-\min(X)}. More complex examples areinformation distancein multisets;[49]andnormalized compression distance(NCD) in multisets.[50]
https://en.wikipedia.org/wiki/Metric_space
Inmathematics, apseudometric spaceis ageneralizationof ametric spacein which the distance between two distinct points can be zero. Pseudometric spaces were introduced byĐuro Kurepa[1][2]in 1934. In the same way as everynormed spaceis ametric space, everyseminormed spaceis a pseudometric space. Because of this analogy, the termsemimetric space(which has a different meaning intopology) is sometimes used as a synonym, especially infunctional analysis.
When a topology is generated using a family of pseudometrics, the space is called agauge space.
A pseudometric space(X,d){\displaystyle (X,d)}is a setX{\displaystyle X}together with a non-negativereal-valued functiond:X×X⟶R≥0,{\displaystyle d:X\times X\longrightarrow \mathbb {R} _{\geq 0},}called apseudometric, such that for everyx,y,z∈X,{\displaystyle x,y,z\in X,}d(x,x)=0{\displaystyle d(x,x)=0};d(x,y)=d(y,x){\displaystyle d(x,y)=d(y,x)}(symmetry); andd(x,z)≤d(x,y)+d(y,z){\displaystyle d(x,z)\leq d(x,y)+d(y,z)}(triangle inequality).
Unlike a metric space, points in a pseudometric space need not bedistinguishable; that is, one may haved(x,y)=0{\displaystyle d(x,y)=0}for distinct valuesx≠y.{\displaystyle x\neq y.}
Any metric space is a pseudometric space.
Pseudometrics arise naturally infunctional analysis. Consider the spaceF(X){\displaystyle {\mathcal {F}}(X)}of real-valued functionsf:X→R{\displaystyle f:X\to \mathbb {R} }together with a special pointx0∈X.{\displaystyle x_{0}\in X.}This point then induces a pseudometric on the space of functions, given byd(f,g)=|f(x0)−g(x0)|{\displaystyle d(f,g)=\left|f(x_{0})-g(x_{0})\right|}forf,g∈F(X){\displaystyle f,g\in {\mathcal {F}}(X)}
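The evaluation pseudometric is a one-liner in Python, and it makes clear why the identity of indiscernibles fails: two different functions that agree at x0 are at distance zero (names are illustrative):

```python
def evaluation_pseudometric(x0):
    """d(f, g) = |f(x0) - g(x0)|: a pseudometric on real-valued functions."""
    return lambda f, g: abs(f(x0) - g(x0))

d = evaluation_pseudometric(0.0)
f = lambda x: x * x
g = lambda x: x        # f and g are distinct, but f(0) == g(0)
print(d(f, g))         # 0.0 even though f != g
```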
Aseminormp{\displaystyle p}induces the pseudometricd(x,y)=p(x−y){\displaystyle d(x,y)=p(x-y)}. This is aconvex functionof anaffine functionofx{\displaystyle x}(in particular, atranslation), and therefore convex inx{\displaystyle x}. (Likewise fory{\displaystyle y}.)
Conversely, a homogeneous, translation-invariant pseudometric induces a seminorm.
Pseudometrics also arise in the theory ofhyperboliccomplex manifolds: seeKobayashi metric.
Everymeasure space(Ω,A,μ){\displaystyle (\Omega ,{\mathcal {A}},\mu )}can be viewed as a complete pseudometric space by definingd(A,B):=μ(A△B){\displaystyle d(A,B):=\mu (A\vartriangle B)}for allA,B∈A,{\displaystyle A,B\in {\mathcal {A}},}where the triangle denotessymmetric difference.
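With the counting measure on finite sets, this symmetric-difference pseudometric is directly computable; a Python sketch (the sets are illustrative):

```python
def d(A, B):
    """mu(A symmetric-difference B), with mu the counting measure on finite sets."""
    return len(A ^ B)

A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
print(d(A, B))  # 2 (the symmetric difference is {1, 4})
assert d(A, A) == 0
assert d(A, B) == d(B, A)                 # symmetry
assert d(A, C) <= d(A, B) + d(B, C)       # triangle inequality
```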
Iff:X1→X2{\displaystyle f:X_{1}\to X_{2}}is a function andd2is a pseudometric onX2, thend1(x,y):=d2(f(x),f(y)){\displaystyle d_{1}(x,y):=d_{2}(f(x),f(y))}gives a pseudometric onX1. Ifd2is a metric andfisinjective, thend1is a metric.
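This pullback construction is also a one-liner; with the non-injective f(x) = x², the resulting pseudometric puts distinct points at distance zero (illustrative sketch):

```python
def pullback(f, d2):
    """d1(x, y) = d2(f(x), f(y)): a pseudometric on the domain of f."""
    return lambda x, y: d2(f(x), f(y))

d2 = lambda a, b: abs(a - b)
d1 = pullback(lambda x: x * x, d2)  # f(x) = x^2 is not injective
print(d1(-1.0, 1.0))  # 0.0: distinct points at pseudodistance zero
print(d1(1.0, 2.0))   # 3.0
```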
Thepseudometric topologyis thetopologygenerated by theopen ballsBr(p)={x∈X:d(p,x)<r},{\displaystyle B_{r}(p)=\{x\in X:d(p,x)<r\},}which form abasisfor the topology.[3]A topological space is said to be apseudometrizable space[4]if the space can be given a pseudometric such that the pseudometric topology coincides with the given topology on the space.
The difference between pseudometrics and metrics is entirely topological. That is, a pseudometric is a metric if and only if the topology it generates isT0(that is, distinct points aretopologically distinguishable).
The definitions ofCauchy sequencesandmetric completionfor metric spaces carry over to pseudometric spaces unchanged.[5]
The vanishing of the pseudometric induces anequivalence relation, called themetric identification, that converts the pseudometric space into a full-fledgedmetric space. This is done by definingx∼y{\displaystyle x\sim y}ifd(x,y)=0{\displaystyle d(x,y)=0}. LetX∗=X/∼{\displaystyle X^{*}=X/{\sim }}be thequotient spaceofX{\displaystyle X}by this equivalence relation and defined∗:(X/∼)×(X/∼)⟶R≥0d∗([x],[y])=d(x,y){\displaystyle {\begin{aligned}d^{*}:(X/\sim )&\times (X/\sim )\longrightarrow \mathbb {R} _{\geq 0}\\d^{*}([x],[y])&=d(x,y)\end{aligned}}}This is well defined because for anyx′∈[x]{\displaystyle x'\in [x]}we have thatd(x,x′)=0{\displaystyle d(x,x')=0}and sod(x′,y)≤d(x,x′)+d(x,y)=d(x,y){\displaystyle d(x',y)\leq d(x,x')+d(x,y)=d(x,y)}and vice versa. Thend∗{\displaystyle d^{*}}is a metric onX∗{\displaystyle X^{*}}and(X∗,d∗){\displaystyle (X^{*},d^{*})}is a well-defined metric space, called themetric space induced by the pseudometric space(X,d){\displaystyle (X,d)}.[6][7]
The metric identification preserves the induced topologies. That is, a subsetA⊆X{\displaystyle A\subseteq X}is open (or closed) in(X,d){\displaystyle (X,d)}if and only ifπ(A)=[A]{\displaystyle \pi (A)=[A]}is open (or closed) in(X∗,d∗){\displaystyle \left(X^{*},d^{*}\right)}andA{\displaystyle A}issaturated. The topological identification is theKolmogorov quotient.
An example of this construction is thecompletion of a metric spaceby itsCauchy sequences.
https://en.wikipedia.org/wiki/Pseudometric_space
In mathematics, particularly infunctional analysisandconvex analysis, theUrsescu theoremis a theorem that generalizes theclosed graph theorem, theopen mapping theorem, and theuniform boundedness principle.
The following notation and notions are used, whereR:X⇉Y{\displaystyle {\mathcal {R}}:X\rightrightarrows Y}is aset-valued functionandS{\displaystyle S}is a non-empty subset of atopological vector spaceX{\displaystyle X}:
Theorem[1](Ursescu)—LetX{\displaystyle X}be acompletesemi-metrizablelocally convextopological vector spaceandR:X⇉Y{\displaystyle {\mathcal {R}}:X\rightrightarrows Y}be aclosedconvexmultifunction with non-empty domain.
Assume thatspan(ImR−y){\displaystyle \operatorname {span} (\operatorname {Im} {\mathcal {R}}-y)}is abarrelled spacefor some/everyy∈ImR.{\displaystyle y\in \operatorname {Im} {\mathcal {R}}.}Assume thaty0∈i(ImR){\displaystyle y_{0}\in {}^{i}(\operatorname {Im} {\mathcal {R}})}and letx0∈R−1(y0){\displaystyle x_{0}\in {\mathcal {R}}^{-1}\left(y_{0}\right)}(so thaty0∈R(x0){\displaystyle y_{0}\in {\mathcal {R}}\left(x_{0}\right)}).
Then for every neighborhoodU{\displaystyle U}ofx0{\displaystyle x_{0}}inX,{\displaystyle X,}y0{\displaystyle y_{0}}belongs to the relative interior ofR(U){\displaystyle {\mathcal {R}}(U)}inaff(ImR){\displaystyle \operatorname {aff} (\operatorname {Im} {\mathcal {R}})}(that is,y0∈intaff(ImR)R(U){\displaystyle y_{0}\in \operatorname {int} _{\operatorname {aff} (\operatorname {Im} {\mathcal {R}})}{\mathcal {R}}(U)}).
In particular, ifib(ImR)≠∅{\displaystyle {}^{ib}(\operatorname {Im} {\mathcal {R}})\neq \varnothing }thenib(ImR)=i(ImR)=rint(ImR).{\displaystyle {}^{ib}(\operatorname {Im} {\mathcal {R}})={}^{i}(\operatorname {Im} {\mathcal {R}})=\operatorname {rint} (\operatorname {Im} {\mathcal {R}}).}
Closed graph theorem—LetX{\displaystyle X}andY{\displaystyle Y}beFréchet spacesandT:X→Y{\displaystyle T:X\to Y}be a linear map. ThenT{\displaystyle T}is continuous if and only if the graph ofT{\displaystyle T}is closed inX×Y.{\displaystyle X\times Y.}
For the non-trivial direction, assume that the graph ofT{\displaystyle T}is closed and letR:=T−1:Y⇉X.{\displaystyle {\mathcal {R}}:=T^{-1}:Y\rightrightarrows X.}It is easy to see thatgrR{\displaystyle \operatorname {gr} {\mathcal {R}}}is closed and convex and that its image isX.{\displaystyle X.}Givenx∈X,{\displaystyle x\in X,}(Tx,x){\displaystyle (Tx,x)}belongs toY×X{\displaystyle Y\times X}so that for every open neighborhoodV{\displaystyle V}ofTx{\displaystyle Tx}inY,{\displaystyle Y,}R(V)=T−1(V){\displaystyle {\mathcal {R}}(V)=T^{-1}(V)}is a neighborhood ofx{\displaystyle x}inX.{\displaystyle X.}ThusT{\displaystyle T}is continuous atx.{\displaystyle x.}Q.E.D.
Uniform boundedness principle—LetX{\displaystyle X}andY{\displaystyle Y}beFréchet spacesandT:X→Y{\displaystyle T:X\to Y}be a bijective linear map. ThenT{\displaystyle T}is continuous if and only ifT−1:Y→X{\displaystyle T^{-1}:Y\to X}is continuous. Furthermore, ifT{\displaystyle T}is continuous thenT{\displaystyle T}is an isomorphism ofFréchet spaces.
Apply the closed graph theorem toT{\displaystyle T}andT−1.{\displaystyle T^{-1}.}Q.E.D.
Open mapping theorem—LetX{\displaystyle X}andY{\displaystyle Y}beFréchet spacesandT:X→Y{\displaystyle T:X\to Y}be a continuous surjective linear map. Then T is anopen map.
Clearly,T{\displaystyle T}is a closed and convex relation whose image isY.{\displaystyle Y.}LetU{\displaystyle U}be a non-empty open subset ofX,{\displaystyle X,}lety{\displaystyle y}be inT(U),{\displaystyle T(U),}and letx{\displaystyle x}inU{\displaystyle U}be such thaty=Tx.{\displaystyle y=Tx.}From the Ursescu theorem it follows thatT(U){\displaystyle T(U)}is a neighborhood ofy.{\displaystyle y.}Q.E.D.
The following notation and notions are used for these corollaries, whereR:X⇉Y{\displaystyle {\mathcal {R}}:X\rightrightarrows Y}is a set-valued function andS{\displaystyle S}is a non-empty subset of atopological vector spaceX{\displaystyle X}:
Corollary—LetX{\displaystyle X}be a barreledfirst countable spaceand letC{\displaystyle C}be a subset ofX.{\displaystyle X.}Then:
Simons'theorem[2]—LetX{\displaystyle X}andY{\displaystyle Y}befirst countablewithX{\displaystyle X}locally convex. Suppose thatR:X⇉Y{\displaystyle {\mathcal {R}}:X\rightrightarrows Y}is a multimap with non-empty domain that satisfiescondition (Hwx)or else assume thatX{\displaystyle X}is aFréchet spaceand thatR{\displaystyle {\mathcal {R}}}islower ideally convex.
Assume thatspan(ImR−y){\displaystyle \operatorname {span} (\operatorname {Im} {\mathcal {R}}-y)}isbarreledfor some/everyy∈ImR.{\displaystyle y\in \operatorname {Im} {\mathcal {R}}.}Assume thaty0∈i(ImR){\displaystyle y_{0}\in {}^{i}(\operatorname {Im} {\mathcal {R}})}and letx0∈R−1(y0).{\displaystyle x_{0}\in {\mathcal {R}}^{-1}\left(y_{0}\right).}Then for every neighborhoodU{\displaystyle U}ofx0{\displaystyle x_{0}}inX,{\displaystyle X,}y0{\displaystyle y_{0}}belongs to the relative interior ofR(U){\displaystyle {\mathcal {R}}(U)}inaff(ImR){\displaystyle \operatorname {aff} (\operatorname {Im} {\mathcal {R}})}(i.e.y0∈intaff(ImR)R(U){\displaystyle y_{0}\in \operatorname {int} _{\operatorname {aff} (\operatorname {Im} {\mathcal {R}})}{\mathcal {R}}(U)}).
In particular, ifib(ImR)≠∅{\displaystyle {}^{ib}(\operatorname {Im} {\mathcal {R}})\neq \varnothing }thenib(ImR)=i(ImR)=rint(ImR).{\displaystyle {}^{ib}(\operatorname {Im} {\mathcal {R}})={}^{i}(\operatorname {Im} {\mathcal {R}})=\operatorname {rint} (\operatorname {Im} {\mathcal {R}}).}
The implication (1)⟹{\displaystyle \implies }(2) in the following theorem is known as the Robinson–Ursescu theorem.[3]
Robinson–Ursescu theorem[3]—Let(X,‖⋅‖){\displaystyle (X,\|\,\cdot \,\|)}and(Y,‖⋅‖){\displaystyle (Y,\|\,\cdot \,\|)}benormed spacesandR:X⇉Y{\displaystyle {\mathcal {R}}:X\rightrightarrows Y}be a multimap with non-empty domain.
Suppose thatY{\displaystyle Y}is abarreled space, the graph ofR{\displaystyle {\mathcal {R}}}satisfiescondition (Hwx), and that(x0,y0)∈grR.{\displaystyle (x_{0},y_{0})\in \operatorname {gr} {\mathcal {R}}.}LetCX{\displaystyle C_{X}}(resp.CY{\displaystyle C_{Y}}) denote the closed unit ball inX{\displaystyle X}(resp.Y{\displaystyle Y}) (soCX={x∈X:‖x‖≤1}{\displaystyle C_{X}=\{x\in X:\|x\|\leq 1\}}).
Then the following are equivalent:
https://en.wikipedia.org/wiki/Ursescu_theorem
Infunctional analysisand related areas ofmathematics, ametrizable(resp.pseudometrizable)topological vector space(TVS) is a TVS whose topology is induced by a metric (resp.pseudometric). AnLM-spaceis aninductive limitof a sequence oflocally convexmetrizable TVS.
Apseudometricon a setX{\displaystyle X}is a mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }satisfying the following properties:
d(x,x)=0{\displaystyle d(x,x)=0}for allx∈X{\displaystyle x\in X};
Symmetry:d(x,y)=d(y,x){\displaystyle d(x,y)=d(y,x)}for allx,y∈X{\displaystyle x,y\in X};
Subadditivity (triangle inequality):d(x,z)≤d(x,y)+d(y,z){\displaystyle d(x,z)\leq d(x,y)+d(y,z)}for allx,y,z∈X.{\displaystyle x,y,z\in X.}
A pseudometric is called ametricif it satisfies:
Identity of indiscernibles:d(x,y)=0{\displaystyle d(x,y)=0}impliesx=y.{\displaystyle x=y.}
Ultrapseudometric
A pseudometricd{\displaystyle d}onX{\displaystyle X}is called anultrapseudometricor astrong pseudometricif it satisfies:
Strong triangle inequality:d(x,z)≤max{d(x,y),d(y,z)}{\displaystyle d(x,z)\leq \max\{d(x,y),d(y,z)\}}for allx,y,z∈X.{\displaystyle x,y,z\in X.}
Pseudometric space
Apseudometric spaceis a pair(X,d){\displaystyle (X,d)}consisting of a setX{\displaystyle X}and a pseudometricd{\displaystyle d}onX{\displaystyle X}such thatX{\displaystyle X}'s topology is identical to the topology onX{\displaystyle X}induced byd.{\displaystyle d.}We call a pseudometric space(X,d){\displaystyle (X,d)}ametric space(resp.ultrapseudometric space) whend{\displaystyle d}is a metric (resp. ultrapseudometric).
Ifd{\displaystyle d}is a pseudometric on a setX{\displaystyle X}then the collection ofopen balls:Br(z):={x∈X:d(x,z)<r}{\displaystyle B_{r}(z):=\{x\in X:d(x,z)<r\}}asz{\displaystyle z}ranges overX{\displaystyle X}andr>0{\displaystyle r>0}ranges over the positive real numbers,
forms a basis for a topology onX{\displaystyle X}that is called thed{\displaystyle d}-topologyor thepseudometric topologyonX{\displaystyle X}induced byd.{\displaystyle d.}
Pseudometrizable space
A topological space(X,τ){\displaystyle (X,\tau )}is calledpseudometrizable(resp.metrizable,ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric)d{\displaystyle d}onX{\displaystyle X}such thatτ{\displaystyle \tau }is equal to the topology induced byd.{\displaystyle d.}[1]
An additivetopological groupis an additive group endowed with a topology, called agroup topology, under which addition and negation become continuous operators.
A topologyτ{\displaystyle \tau }on a real or complex vector spaceX{\displaystyle X}is called avector topologyor aTVS topologyif it makes the operations of vector addition and scalar multiplication continuous (that is, if it makesX{\displaystyle X}into atopological vector space).
Everytopological vector space(TVS)X{\displaystyle X}is an additive commutative topological group but not all group topologies onX{\displaystyle X}are vector topologies.
This is because despite it making addition and negation continuous, a group topology on a vector spaceX{\displaystyle X}may fail to make scalar multiplication continuous.
For instance, thediscrete topologyon any non-trivial vector space makes addition and negation continuous but does not make scalar multiplication continuous.
IfX{\displaystyle X}is an additive group then we say that a pseudometricd{\displaystyle d}onX{\displaystyle X}istranslation invariantor justinvariantif it satisfies any of the following equivalent conditions:
IfX{\displaystyle X}is atopological groupthen avalueorG-seminormonX{\displaystyle X}(theGstands for Group) is a real-valued mapp:X→R{\displaystyle p:X\rightarrow \mathbb {R} }with the following properties:[2]
where we call a G-seminorm aG-normif it satisfies the additional condition:
p(x)=0{\displaystyle p(x)=0}impliesx=0.{\displaystyle x=0.}
Ifp{\displaystyle p}is a value on a vector spaceX{\displaystyle X}then:
Theorem[2]—Suppose thatX{\displaystyle X}is an additive commutative group.
Ifd{\displaystyle d}is a translation invariant pseudometric onX{\displaystyle X}then the mapp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a value onX{\displaystyle X}calledthe value associated withd{\displaystyle d}, and moreover,d{\displaystyle d}generates a group topology onX{\displaystyle X}(i.e. thed{\displaystyle d}-topology onX{\displaystyle X}makesX{\displaystyle X}into a topological group).
Conversely, ifp{\displaystyle p}is a value onX{\displaystyle X}then the mapd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}and the value associated withd{\displaystyle d}is justp.{\displaystyle p.}
Theorem[2]—If(X,τ){\displaystyle (X,\tau )}is an additive commutativetopological groupthen the following are equivalent:
If(X,τ){\displaystyle (X,\tau )}is Hausdorff then the word "pseudometric" in the above statement may be replaced by the word "metric."
A commutative topological group is metrizable if and only if it is Hausdorff and pseudometrizable.
LetX{\displaystyle X}be a non-trivial (i.e.X≠{0}{\displaystyle X\neq \{0\}}) real or complex vector space and letd{\displaystyle d}be the translation-invarianttrivial metriconX{\displaystyle X}defined byd(x,x)=0{\displaystyle d(x,x)=0}andd(x,y)=1for allx,y∈X{\displaystyle d(x,y)=1{\text{ for all }}x,y\in X}such thatx≠y.{\displaystyle x\neq y.}The topologyτ{\displaystyle \tau }thatd{\displaystyle d}induces onX{\displaystyle X}is thediscrete topology, which makes(X,τ){\displaystyle (X,\tau )}into a commutative topological group under addition but doesnotform a vector topology onX{\displaystyle X}because(X,τ){\displaystyle (X,\tau )}isdisconnectedbut every vector topology is connected.
What fails is that scalar multiplication isn't continuous on(X,τ).{\displaystyle (X,\tau ).}
This example shows that a translation-invariant (pseudo)metric isnotenough to guarantee a vector topology, which leads us to define paranorms andF-seminorms.
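The failure of continuity in this example can be checked numerically. The sketch below (an illustrative demo of the trivial metric, not code from the source) shows that t·x stays at distance 1 from the origin for every scalar t ≠ 0, so t·x does not converge to 0 as t → 0, even though it would under any vector topology:

```python
def d(x, y):
    """Trivial (discrete) translation-invariant metric."""
    return 0.0 if x == y else 1.0

x = 3.0
for t in [0.5, 0.1, 0.01, 0.0001]:
    # t*x never enters the ball of radius 1/2 around the origin
    assert d(t * x, 0.0) == 1.0
assert d(0.0 * x, 0.0) == 0.0  # only t = 0 lands exactly at the origin
```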
A collectionN{\displaystyle {\mathcal {N}}}of subsets of a vector space is calledadditive[5]if for everyN∈N,{\displaystyle N\in {\mathcal {N}},}there exists someU∈N{\displaystyle U\in {\mathcal {N}}}such thatU+U⊆N.{\displaystyle U+U\subseteq N.}
Continuity of addition at 0—If(X,+){\displaystyle (X,+)}is agroup(as all vector spaces are),τ{\displaystyle \tau }is a topology onX,{\displaystyle X,}andX×X{\displaystyle X\times X}is endowed with theproduct topology, then the addition mapX×X→X{\displaystyle X\times X\to X}(i.e. the map(x,y)↦x+y{\displaystyle (x,y)\mapsto x+y}) is continuous at the origin ofX×X{\displaystyle X\times X}if and only if the set ofneighborhoodsof the origin in(X,τ){\displaystyle (X,\tau )}is additive. This statement remains true if the word "neighborhood" is replaced by "open neighborhood."[5]
Each of the above conditions is consequently necessary for a topology to be a vector topology.
Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valuedsubadditivefunctions.
These functions can then be used to prove many of the basic properties of topological vector spaces and also show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additivetopological groups.
Theorem—LetU∙=(Ui)i=0∞{\displaystyle U_{\bullet }=\left(U_{i}\right)_{i=0}^{\infty }}be a collection of subsets of a vector space such that0∈Ui{\displaystyle 0\in U_{i}}andUi+1+Ui+1⊆Ui{\displaystyle U_{i+1}+U_{i+1}\subseteq U_{i}}for alli≥0.{\displaystyle i\geq 0.}For allu∈U0,{\displaystyle u\in U_{0},}letS(u):={n∙=(n1,…,nk):k≥1,ni≥0for alli,andu∈Un1+⋯+Unk}.{\displaystyle \mathbb {S} (u):=\left\{n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)~:~k\geq 1,n_{i}\geq 0{\text{ for all }}i,{\text{ and }}u\in U_{n_{1}}+\cdots +U_{n_{k}}\right\}.}
Definef:X→[0,1]{\displaystyle f:X\to [0,1]}byf(x)=1{\displaystyle f(x)=1}ifx∉U0{\displaystyle x\not \in U_{0}}and otherwise letf(x):=inf{2−n1+⋯+2−nk:n∙=(n1,…,nk)∈S(x)}.{\displaystyle f(x):=\inf _{}\left\{2^{-n_{1}}+\cdots +2^{-n_{k}}~:~n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)\in \mathbb {S} (x)\right\}.}
Thenf{\displaystyle f}issubadditive(meaningf(x+y)≤f(x)+f(y)for allx,y∈X{\displaystyle f(x+y)\leq f(x)+f(y){\text{ for all }}x,y\in X}) andf=0{\displaystyle f=0}on⋂i≥0Ui,{\displaystyle \bigcap _{i\geq 0}U_{i},}so in particularf(0)=0.{\displaystyle f(0)=0.}If allUi{\displaystyle U_{i}}aresymmetric setsthenf(−x)=f(x){\displaystyle f(-x)=f(x)}and if allUi{\displaystyle U_{i}}are balanced thenf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}and allx∈X.{\displaystyle x\in X.}IfX{\displaystyle X}is a topological vector space and if allUi{\displaystyle U_{i}}are neighborhoods of the origin thenf{\displaystyle f}is continuous, where if in additionX{\displaystyle X}is Hausdorff andU∙{\displaystyle U_{\bullet }}forms a basis of balanced neighborhoods of the origin inX{\displaystyle X}thend(x,y):=f(x−y){\displaystyle d(x,y):=f(x-y)}is a metric defining the vector topology onX.{\displaystyle X.}
Assume thatn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}always denotes a finite sequence of non-negative integers and use the notation:∑2−n∙:=2−n1+⋯+2−nkand∑Un∙:=Un1+⋯+Unk.{\displaystyle \sum 2^{-n_{\bullet }}:=2^{-n_{1}}+\cdots +2^{-n_{k}}\quad {\text{ and }}\quad \sum U_{n_{\bullet }}:=U_{n_{1}}+\cdots +U_{n_{k}}.}
For any integersn≥0{\displaystyle n\geq 0}andd>2,{\displaystyle d>2,}Un⊇Un+1+Un+1⊇Un+1+Un+2+Un+2⊇Un+1+Un+2+⋯+Un+d+Un+d+1+Un+d+1.{\displaystyle U_{n}\supseteq U_{n+1}+U_{n+1}\supseteq U_{n+1}+U_{n+2}+U_{n+2}\supseteq U_{n+1}+U_{n+2}+\cdots +U_{n+d}+U_{n+d+1}+U_{n+d+1}.}
From this it follows that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of distinct positive integers then∑Un∙⊆U−1+min(n∙).{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{-1+\min \left(n_{\bullet }\right)}.}
It will now be shown by induction onk{\displaystyle k}that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of non-negative integers such that∑2−n∙≤2−M{\displaystyle \sum 2^{-n_{\bullet }}\leq 2^{-M}}for some integerM≥0{\displaystyle M\geq 0}then∑Un∙⊆UM.{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{M}.}This is clearly true fork=1{\displaystyle k=1}andk=2{\displaystyle k=2}so assume thatk>2,{\displaystyle k>2,}which implies that allni{\displaystyle n_{i}}are positive.
If allni{\displaystyle n_{i}}are distinct then this step is done, and otherwise pick distinct indicesi<j{\displaystyle i<j}such thatni=nj{\displaystyle n_{i}=n_{j}}and constructm∙=(m1,…,mk−1){\displaystyle m_{\bullet }=\left(m_{1},\ldots ,m_{k-1}\right)}fromn∙{\displaystyle n_{\bullet }}by replacing eachni{\displaystyle n_{i}}withni−1{\displaystyle n_{i}-1}and deleting thejth{\displaystyle j^{\text{th}}}element ofn∙{\displaystyle n_{\bullet }}(all other elements ofn∙{\displaystyle n_{\bullet }}are transferred tom∙{\displaystyle m_{\bullet }}unchanged).
Observe that∑2−n∙=∑2−m∙{\displaystyle \sum 2^{-n_{\bullet }}=\sum 2^{-m_{\bullet }}}and∑Un∙⊆∑Um∙{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}}(becauseUni+Unj⊆Uni−1{\displaystyle U_{n_{i}}+U_{n_{j}}\subseteq U_{n_{i}-1}}) so by appealing to the inductive hypothesis we conclude that∑Un∙⊆∑Um∙⊆UM,{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}\subseteq U_{M},}as desired.
It is clear thatf(0)=0{\displaystyle f(0)=0}and that0≤f≤1{\displaystyle 0\leq f\leq 1}so to prove thatf{\displaystyle f}is subadditive, it suffices to prove thatf(x+y)≤f(x)+f(y){\displaystyle f(x+y)\leq f(x)+f(y)}whenx,y∈X{\displaystyle x,y\in X}are such thatf(x)+f(y)<1,{\displaystyle f(x)+f(y)<1,}which implies thatx,y∈U0.{\displaystyle x,y\in U_{0}.}This is an exercise.
If allUi{\displaystyle U_{i}}are symmetric thenx∈∑Un∙{\displaystyle x\in \sum U_{n_{\bullet }}}if and only if−x∈∑Un∙{\displaystyle -x\in \sum U_{n_{\bullet }}}from which it follows thatf(−x)≤f(x){\displaystyle f(-x)\leq f(x)}andf(−x)≥f(x).{\displaystyle f(-x)\geq f(x).}If allUi{\displaystyle U_{i}}are balanced then the inequalityf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}is proved similarly.
Becausef{\displaystyle f}is a nonnegative subadditive function satisfyingf(0)=0,{\displaystyle f(0)=0,}as described in the article onsublinear functionals,f{\displaystyle f}is uniformly continuous onX{\displaystyle X}if and only iff{\displaystyle f}is continuous at the origin.
If allUi{\displaystyle U_{i}}are neighborhoods of the origin then for any realr>0,{\displaystyle r>0,}pick an integerM>1{\displaystyle M>1}such that2−M<r{\displaystyle 2^{-M}<r}so thatx∈UM{\displaystyle x\in U_{M}}impliesf(x)≤2−M<r.{\displaystyle f(x)\leq 2^{-M}<r.}If the set of allUi{\displaystyle U_{i}}forms a basis of balanced neighborhoods of the origin then it may be shown that for anyn>1,{\displaystyle n>1,}there exists some0<r≤2−n{\displaystyle 0<r\leq 2^{-n}}such thatf(x)<r{\displaystyle f(x)<r}impliesx∈Un.{\displaystyle x\in U_{n}.}◼{\displaystyle \blacksquare }
IfX{\displaystyle X}is a vector space over the real or complex numbers then aparanormonX{\displaystyle X}is a G-seminorm (defined above)p:X→R{\displaystyle p:X\rightarrow \mathbb {R} }onX{\displaystyle X}that satisfies any of the following additional conditions, each of which begins with "for all sequencesx∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}and all convergent sequences of scalarss∙=(si)i=1∞{\displaystyle s_{\bullet }=\left(s_{i}\right)_{i=1}^{\infty }}":[6]
A paranorm is calledtotalif in addition it satisfies:
Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then the mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}that defines avector topologyonX.{\displaystyle X.}[8]
Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then:
IfX{\displaystyle X}is a vector space over the real or complex numbers then anF-seminormonX{\displaystyle X}(theF{\displaystyle F}stands forFréchet) is a real-valued mapp:X→R{\displaystyle p:X\to \mathbb {R} }with the following four properties:[11]
AnF-seminorm is called anF-normif in addition it satisfies:
p(x)=0{\displaystyle p(x)=0}impliesx=0.{\displaystyle x=0.}
AnF-seminorm is calledmonotoneif it satisfies:
AnF-seminormed space(resp.F-normed space)[12]is a pair(X,p){\displaystyle (X,p)}consisting of a vector spaceX{\displaystyle X}and anF-seminorm (resp.F-norm)p{\displaystyle p}onX.{\displaystyle X.}
If(X,p){\displaystyle (X,p)}and(Z,q){\displaystyle (Z,q)}areF-seminormed spaces then a mapf:X→Z{\displaystyle f:X\to Z}is called anisometric embedding[12]ifq(f(x)−f(y))=p(x−y)for allx,y∈X.{\displaystyle q(f(x)-f(y))=p(x-y){\text{ for all }}x,y\in X.}
Every isometric embedding of oneF-seminormed space into another is atopological embedding, but the converse is not true in general.[12]
EveryF-seminorm is a paranorm and every paranorm is equivalent to someF-seminorm.[7]EveryF-seminorm on a vector spaceX{\displaystyle X}is a value onX.{\displaystyle X.}In particular,p(0)=0,{\displaystyle p(0)=0,}andp(x)=p(−x){\displaystyle p(x)=p(-x)}for allx∈X.{\displaystyle x\in X.}
Theorem[11]—Letp{\displaystyle p}be anF-seminorm on a vector spaceX.{\displaystyle X.}Then the mapd:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation invariant pseudometric onX{\displaystyle X}that defines a vector topologyτ{\displaystyle \tau }onX.{\displaystyle X.}Ifp{\displaystyle p}is anF-norm thend{\displaystyle d}is a metric.
WhenX{\displaystyle X}is endowed with this topology thenp{\displaystyle p}is a continuous map onX.{\displaystyle X.}
The balanced sets{x∈X:p(x)≤r},{\displaystyle \{x\in X~:~p(x)\leq r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of closed sets.
Similarly, the balanced sets{x∈X:p(x)<r},{\displaystyle \{x\in X~:~p(x)<r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of open sets.
Suppose thatL{\displaystyle {\mathcal {L}}}is a non-empty collection ofF-seminorms on a vector spaceX{\displaystyle X}and for any finite subsetF⊆L{\displaystyle {\mathcal {F}}\subseteq {\mathcal {L}}}and anyr>0,{\displaystyle r>0,}letUF,r:=⋂p∈F{x∈X:p(x)<r}.{\displaystyle U_{{\mathcal {F}},r}:=\bigcap _{p\in {\mathcal {F}}}\{x\in X:p(x)<r\}.}
The set{UF,r:r>0,F⊆L,Ffinite}{\displaystyle \left\{U_{{\mathcal {F}},r}~:~r>0,{\mathcal {F}}\subseteq {\mathcal {L}},{\mathcal {F}}{\text{ finite }}\right\}}forms a filter base onX{\displaystyle X}that also forms a neighborhood basis at the origin for a vector topology onX{\displaystyle X}denoted byτL.{\displaystyle \tau _{\mathcal {L}}.}[12]EachUF,r{\displaystyle U_{{\mathcal {F}},r}}is abalancedandabsorbingsubset ofX.{\displaystyle X.}[12]These sets satisfy[12]UF,r/2+UF,r/2⊆UF,r.{\displaystyle U_{{\mathcal {F}},r/2}+U_{{\mathcal {F}},r/2}\subseteq U_{{\mathcal {F}},r}.}
Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negative subadditive functions on a vector spaceX.{\displaystyle X.}
TheFréchet combination[8]ofp∙{\displaystyle p_{\bullet }}is defined to be the real-valued mapp(x):=∑i=1∞pi(x)2i[1+pi(x)].{\displaystyle p(x):=\sum _{i=1}^{\infty }{\frac {p_{i}(x)}{2^{i}\left[1+p_{i}(x)\right]}}.}
Assume thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is an increasing sequence of seminorms onX{\displaystyle X}and letp{\displaystyle p}be the Fréchet combination ofp∙.{\displaystyle p_{\bullet }.}Thenp{\displaystyle p}is anF-seminorm onX{\displaystyle X}that induces the same locally convex topology as the familyp∙{\displaystyle p_{\bullet }}of seminorms.[13]
Sincep∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is increasing, a basis of open neighborhoods of the origin consists of all sets of the form{x∈X:pi(x)<r}{\displaystyle \left\{x\in X~:~p_{i}(x)<r\right\}}asi{\displaystyle i}ranges over all positive integers andr>0{\displaystyle r>0}ranges over all positive real numbers.
Thetranslation invariantpseudometriconX{\displaystyle X}induced by thisF-seminormp{\displaystyle p}isd(x,y)=∑i=1∞12ipi(x−y)1+pi(x−y).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {p_{i}(x-y)}{1+p_{i}(x-y)}}.}
This metric was discovered byFréchetin his 1906 thesis for the spaces of real and complex sequences with pointwise operations.[14]
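A minimal numerical sketch (with a hypothetical choice of seminorms, not taken from the source): using the increasing seminorms p_i(x) = max(|x_1|, …, |x_i|) on finitely supported real sequences and truncating the infinite sum (the tail beyond N terms is below 2^-N), one can check the F-seminorm properties of the Fréchet combination numerically:

```python
N = 50  # truncation depth; the neglected tail is at most 2^-50

def p_i(x, i):
    """Increasing seminorms: sup of the first i coordinates."""
    return max((abs(c) for c in x[:i]), default=0.0)

def frechet(x):
    """Fréchet combination p(x) = sum_i p_i(x) / (2^i (1 + p_i(x)))."""
    return sum(p_i(x, i) / (2 ** i * (1 + p_i(x, i))) for i in range(1, N + 1))

x, y = [1.0, -2.0, 0.5], [0.25, 1.0]
xy = [a + b for a, b in zip(x, y)] + x[len(y):]  # coordinatewise sum

# F-seminorm properties, checked numerically (not a proof):
assert frechet([]) == 0.0                                # p(0) = 0
assert frechet(xy) <= frechet(x) + frechet(y) + 1e-12    # subadditivity
assert frechet([0.5 * c for c in x]) <= frechet(x)       # |s| <= 1 shrinks p
assert frechet(x) < 1.0                                  # p is bounded by 1
```

Each summand p_i/(1 + p_i) is subadditive because t ↦ t/(1+t) is concave, increasing, and vanishes at 0, which is why the combination is again an F-seminorm.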
If eachpi{\displaystyle p_{i}}is a paranorm then so isp{\displaystyle p}and moreover,p{\displaystyle p}induces the same topology onX{\displaystyle X}as the familyp∙{\displaystyle p_{\bullet }}of paranorms.[8]This is also true of the following paranorms onX{\displaystyle X}:
The Fréchet combination can be generalized by use of a bounded remetrization function.
Abounded remetrization function[15]is a continuous non-negative non-decreasing mapR:[0,∞)→[0,∞){\displaystyle R:[0,\infty )\to [0,\infty )}that has a bounded range, issubadditive(meaning thatR(s+t)≤R(s)+R(t){\displaystyle R(s+t)\leq R(s)+R(t)}for alls,t≥0{\displaystyle s,t\geq 0}), and satisfiesR(s)=0{\displaystyle R(s)=0}if and only ifs=0.{\displaystyle s=0.}
Examples of bounded remetrization functions includearctant,{\displaystyle \arctan t,}tanht,{\displaystyle \tanh t,}t↦min{t,1},{\displaystyle t\mapsto \min\{t,1\},}andt↦t1+t.{\displaystyle t\mapsto {\frac {t}{1+t}}.}[15]Ifd{\displaystyle d}is a pseudometric (respectively, metric) onX{\displaystyle X}andR{\displaystyle R}is a bounded remetrization function thenR∘d{\displaystyle R\circ d}is a bounded pseudometric (respectively, bounded metric) onX{\displaystyle X}that is uniformly equivalent tod.{\displaystyle d.}[15]
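For instance, with R(t) = t/(1+t) and the usual metric on the real line, one can verify numerically that R∘d is a bounded metric that vanishes exactly on the diagonal (a small illustrative sketch, not from the source):

```python
def d(x, y):
    """Usual metric on R."""
    return abs(x - y)

def R(t):
    """Bounded remetrization function: subadditive, increasing, R(t)=0 iff t=0."""
    return t / (1 + t)

def rd(x, y):
    """The remetrized metric R∘d, bounded by 1 and uniformly equivalent to d."""
    return R(d(x, y))

pts = [-2.0, 0.0, 0.3, 10.0]
for x in pts:
    for y in pts:
        assert rd(x, y) < 1.0                    # bounded
        assert (rd(x, y) == 0.0) == (x == y)     # vanishes only on the diagonal
        for z in pts:
            # triangle inequality, inherited from the subadditivity of R
            assert rd(x, z) <= rd(x, y) + rd(y, z) + 1e-12
```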
Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negativeF-seminorms on a vector spaceX,{\displaystyle X,}R{\displaystyle R}is a bounded remetrization function, andr∙=(ri)i=1∞{\displaystyle r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }}is a sequence of positive real numbers whose sum is finite.
Thenp(x):=∑i=1∞riR(pi(x)){\displaystyle p(x):=\sum _{i=1}^{\infty }r_{i}R\left(p_{i}(x)\right)}defines a boundedF-seminorm that is uniformly equivalent to thep∙.{\displaystyle p_{\bullet }.}[16]It has the property that for any netx∙=(xa)a∈A{\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}inX,{\displaystyle X,}p(x∙)→0{\displaystyle p\left(x_{\bullet }\right)\to 0}if and only ifpi(x∙)→0{\displaystyle p_{i}\left(x_{\bullet }\right)\to 0}for alli.{\displaystyle i.}[16]p{\displaystyle p}is anF-norm if and only if thep∙{\displaystyle p_{\bullet }}separate points onX.{\displaystyle X.}[16]
A pseudometric (resp. metric)d{\displaystyle d}is induced by a seminorm (resp. norm) on a vector spaceX{\displaystyle X}if and only ifd{\displaystyle d}is translation invariant andabsolutely homogeneous, which means thatd(sx,sy)=|s|d(x,y){\displaystyle d(sx,sy)=|s|d(x,y)}for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function defined byp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a seminorm (resp. norm) and the pseudometric (resp. metric) induced byp{\displaystyle p}is equal tod.{\displaystyle d.}
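A quick sketch (illustrative, using the Euclidean norm on the plane as an assumed example) checking translation invariance and absolute homogeneity of a norm-induced metric, and recovering the norm back via p(x) = d(x,0):

```python
import math

def norm(x):
    """Euclidean norm on R^2."""
    return math.hypot(x[0], x[1])

def d(x, y):
    """The metric induced by the norm: d(x, y) = ||x - y||."""
    return norm((x[0] - y[0], x[1] - y[1]))

x, y, z, s = (1.0, 2.0), (-3.0, 0.5), (0.25, -1.0), -2.5

# translation invariance: d(x + z, y + z) = d(x, y)
assert math.isclose(
    d((x[0] + z[0], x[1] + z[1]), (y[0] + z[0], y[1] + z[1])), d(x, y))
# absolute homogeneity: d(sx, sy) = |s| d(x, y)
assert math.isclose(
    d((s * x[0], s * x[1]), (s * y[0], s * y[1])), abs(s) * d(x, y))
# the recovered seminorm p(x) = d(x, 0) is the norm itself
assert math.isclose(d(x, (0.0, 0.0)), norm(x))
```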
If(X,τ){\displaystyle (X,\tau )}is atopological vector space(TVS) (where note in particular thatτ{\displaystyle \tau }is assumed to be a vector topology) then the following are equivalent:[11]
If(X,τ){\displaystyle (X,\tau )}is a TVS then the following are equivalent:
Birkhoff–Kakutani theorem—If(X,τ){\displaystyle (X,\tau )}is a topological vector space then the following three conditions are equivalent:[17][note 1]
By the Birkhoff–Kakutani theorem, it follows that there is anequivalent metricthat is translation-invariant.
If(X,τ){\displaystyle (X,\tau )}is TVS then the following are equivalent:[13]
LetM{\displaystyle M}be a vector subspace of a topological vector space(X,τ).{\displaystyle (X,\tau ).}
IfX{\displaystyle X}is Hausdorff locally convex TVS thenX{\displaystyle X}with thestrong topology,(X,b(X,X′)),{\displaystyle \left(X,b\left(X,X^{\prime }\right)\right),}is metrizable if and only if there exists a countable setB{\displaystyle {\mathcal {B}}}of bounded subsets ofX{\displaystyle X}such that every bounded subset ofX{\displaystyle X}is contained in some element ofB.{\displaystyle {\mathcal {B}}.}[22]
Thestrong dual spaceXb′{\displaystyle X_{b}^{\prime }}of a metrizable locally convex space (such as aFréchet space[23])X{\displaystyle X}is aDF-space.[24]The strong dual of a DF-space is aFréchet space.[25]The strong dual of areflexiveFréchet space is abornological space.[24]The strong bidual (that is, thestrong dual spaceof the strong dual space) of a metrizable locally convex space is a Fréchet space.[26]IfX{\displaystyle X}is a metrizable locally convex space then its strong dualXb′{\displaystyle X_{b}^{\prime }}has one of the following properties, if and only if it has all of these properties: (1)bornological, (2)infrabarreled, (3)barreled.[26]
A topological vector space isseminormableif and only if it has aconvexbounded neighborhood of the origin.
Moreover, a TVS isnormableif and only if it isHausdorffand seminormable.[14]Every metrizable TVS on a finite-dimensionalvector space is a normablelocally convexcomplete TVS, beingTVS-isomorphictoEuclidean space. Consequently, any metrizable TVS that isnotnormable must be infinite dimensional.
IfM{\displaystyle M}is a metrizablelocally convex TVSthat possess acountablefundamental system of bounded sets, thenM{\displaystyle M}is normable.[27]
IfX{\displaystyle X}is a Hausdorfflocally convex spacethen the following are equivalent:
and if this locally convex spaceX{\displaystyle X}is also metrizable, then the following may be appended to this list:
In particular, if a metrizable locally convex spaceX{\displaystyle X}(such as aFréchet space) isnotnormable then itsstrong dual spaceXb′{\displaystyle X_{b}^{\prime }}is not aFréchet–Urysohn spaceand consequently, thiscompleteHausdorff locally convex spaceXb′{\displaystyle X_{b}^{\prime }}is also neither metrizable nor normable.
Another consequence of this is that ifX{\displaystyle X}is areflexivelocally convexTVS whose strong dualXb′{\displaystyle X_{b}^{\prime }}is metrizable thenXb′{\displaystyle X_{b}^{\prime }}is necessarily a reflexive Fréchet space,X{\displaystyle X}is aDF-space, bothX{\displaystyle X}andXb′{\displaystyle X_{b}^{\prime }}are necessarilycompleteHausdorffultrabornologicaldistinguishedwebbed spaces, and moreover,Xb′{\displaystyle X_{b}^{\prime }}is normable if and only ifX{\displaystyle X}is normable if and only ifX{\displaystyle X}is Fréchet–Urysohn if and only ifX{\displaystyle X}is metrizable. In particular, such a spaceX{\displaystyle X}is either aBanach spaceor else it is not even a Fréchet–Urysohn space.
Suppose that(X,d){\displaystyle (X,d)}is a pseudometric space andB⊆X.{\displaystyle B\subseteq X.}The setB{\displaystyle B}ismetrically boundedord{\displaystyle d}-boundedif there exists a real numberR>0{\displaystyle R>0}such thatd(x,y)≤R{\displaystyle d(x,y)\leq R}for allx,y∈B{\displaystyle x,y\in B};
the smallest suchR{\displaystyle R}is then called thediameterord{\displaystyle d}-diameterofB.{\displaystyle B.}[14]IfB{\displaystyle B}isboundedin a pseudometrizable TVSX{\displaystyle X}then it is metrically bounded;
the converse is in general false but it is true forlocally convexmetrizable TVSs.[14]
Theorem[29]—All infinite-dimensionalseparablecomplete metrizable TVS arehomeomorphic.
Everytopological vector space(and more generally, atopological group) has a canonicaluniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it.
IfX{\displaystyle X}is a metrizable TVS andd{\displaystyle d}is a metric that definesX{\displaystyle X}'s topology, then it is possible thatX{\displaystyle X}is complete as a TVS (i.e. relative to its uniformity) but the metricd{\displaystyle d}isnotacomplete metric(such metrics exist even forX=R{\displaystyle X=\mathbb {R} }).
Thus, ifX{\displaystyle X}is a TVS whose topology is induced by a pseudometricd,{\displaystyle d,}then the notion of completeness ofX{\displaystyle X}(as a TVS) and the notion of completeness of the pseudometric space(X,d){\displaystyle (X,d)}are not always equivalent.
The next theorem gives a condition for when they are equivalent:
Theorem—IfX{\displaystyle X}is a pseudometrizable TVS whose topology is induced by atranslation invariantpseudometricd,{\displaystyle d,}thend{\displaystyle d}is a complete pseudometric onX{\displaystyle X}if and only ifX{\displaystyle X}is complete as a TVS.[36]
Theorem[37][38](Klee)—Letd{\displaystyle d}beany[note 2]metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is a complete-TVS.
Theorem—IfX{\displaystyle X}is a TVS whose topology is induced by a paranormp,{\displaystyle p,}thenX{\displaystyle X}is complete if and only if for every sequence(xi)i=1∞{\displaystyle \left(x_{i}\right)_{i=1}^{\infty }}inX,{\displaystyle X,}if∑i=1∞p(xi)<∞{\displaystyle \sum _{i=1}^{\infty }p\left(x_{i}\right)<\infty }then∑i=1∞xi{\displaystyle \sum _{i=1}^{\infty }x_{i}}converges inX.{\displaystyle X.}[39]
IfM{\displaystyle M}is a closed vector subspace of a complete pseudometrizable TVSX,{\displaystyle X,}then the quotient spaceX/M{\displaystyle X/M}is complete.[40]IfM{\displaystyle M}is acompletevector subspace of a metrizable TVSX{\displaystyle X}and if the quotient spaceX/M{\displaystyle X/M}is complete then so isX.{\displaystyle X.}[40]IfX{\displaystyle X}is not complete thenM:=X{\displaystyle M:=X}is a closed but not complete vector subspace ofX.{\displaystyle X.}
ABaireseparabletopological groupis metrizable if and only if it is cosmic.[23]
Banach-Saks theorem[45]—If(xn)n=1∞{\displaystyle \left(x_{n}\right)_{n=1}^{\infty }}is a sequence in alocally convexmetrizable TVS(X,τ){\displaystyle (X,\tau )}that convergesweaklyto somex∈X,{\displaystyle x\in X,}then there exists a sequencey∙=(yi)i=1∞{\displaystyle y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}such thaty∙→x{\displaystyle y_{\bullet }\to x}in(X,τ){\displaystyle (X,\tau )}and eachyi{\displaystyle y_{i}}is a convex combination of finitely manyxn.{\displaystyle x_{n}.}
Mackey's countability condition[14]—Suppose thatX{\displaystyle X}is a locally convex metrizable TVS and that(Bi)i=1∞{\displaystyle \left(B_{i}\right)_{i=1}^{\infty }}is a countable sequence of bounded subsets ofX.{\displaystyle X.}Then there exists a bounded subsetB{\displaystyle B}ofX{\displaystyle X}and a sequence(ri)i=1∞{\displaystyle \left(r_{i}\right)_{i=1}^{\infty }}of positive real numbers such thatBi⊆riB{\displaystyle B_{i}\subseteq r_{i}B}for alli.{\displaystyle i.}
Generalized series
As describedin this article's section on generalized series, for anyI{\displaystyle I}-indexed family(ri)i∈I{\displaystyle \left(r_{i}\right)_{i\in I}}of vectors from a TVSX,{\displaystyle X,}it is possible to define their sum∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}as the limit of thenetof finite partial sumsF∈FiniteSubsets(I)↦∑i∈Fri{\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}}where the domainFiniteSubsets(I){\displaystyle \operatorname {FiniteSubsets} (I)}isdirectedby⊆.{\displaystyle \,\subseteq .\,}IfI=N{\displaystyle I=\mathbb {N} }andX=R,{\displaystyle X=\mathbb {R} ,}for instance, then the generalized series∑i∈Nri{\displaystyle \textstyle \sum \limits _{i\in \mathbb {N} }r_{i}}converges if and only if∑i=1∞ri{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}}converges unconditionallyin the usual sense (which for real numbers,is equivalenttoabsolute convergence).
If a generalized series∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}converges in a metrizable TVS, then the set{i∈I:ri≠0}{\displaystyle \left\{i\in I:r_{i}\neq 0\right\}}is necessarilycountable(that is, either finite orcountably infinite);[proof 1]in other words, all but at most countably manyri{\displaystyle r_{i}}will be zero and so this generalized series∑i∈Iri=∑ri≠0i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}}is actually a sum of at most countably many non-zero terms.
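For real scalars, this distinction can be seen numerically. The sketch below (illustrative only, not from the source) contrasts the alternating harmonic series, whose usual ordering converges to ln 2, with a greedy Riemann-style rearrangement of the same terms steered toward 1.5; since rearranging changes the limit, the convergence is only conditional and the corresponding generalized series over the naturals does not converge.

```python
import math

# Illustrative sketch: over I = N with real scalars, the generalized series
# converges iff the ordinary series converges unconditionally. The
# alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2, but a
# Riemann-style rearrangement of the SAME terms can be steered toward 1.5.

def alt_harmonic_prefix(n):
    """Sum of the first n terms (-1)^(i+1)/i in the usual order."""
    return sum((-1) ** (i + 1) / i for i in range(1, n + 1))

def rearranged_prefix(n, target=1.5):
    """Greedy rearrangement: emit unused positive terms while the running
    sum is below `target`, unused negative terms while it is above."""
    total, emitted = 0.0, 0
    next_pos, next_neg = 1, 2  # odd denominators are +, even are -
    while emitted < n:
        if total < target:
            total += 1.0 / next_pos
            next_pos += 2
        else:
            total -= 1.0 / next_neg
            next_neg += 2
        emitted += 1
    return total

usual = alt_harmonic_prefix(200_000)   # close to ln 2 ~ 0.693147
steered = rearranged_prefix(200_000)   # close to 1.5, by construction
print(usual, steered)
assert abs(usual - math.log(2)) < 1e-4
assert abs(steered - 1.5) < 1e-3
```

The two prefix sums use exactly the same multiset of terms up to sign bookkeeping, yet settle near different values, which is what rules out unconditional convergence here.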
IfA:X→Y{\displaystyle A:X\to Y}is a linear map between TVSs,X{\displaystyle X}is a pseudometrizable TVS, andA{\displaystyle A}maps bounded subsets ofX{\displaystyle X}to bounded subsets ofY,{\displaystyle Y,}thenA{\displaystyle A}is continuous.[14]Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS.[46]Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to itsalgebraic dual space.[46]
IfF:X→Y{\displaystyle F:X\to Y}is a linear map between TVSs andX{\displaystyle X}is metrizable then the following are equivalent:
Open and almost open maps
A vector subspaceM{\displaystyle M}of a TVSX{\displaystyle X}hasthe extension propertyif any continuous linear functional onM{\displaystyle M}can be extended to a continuous linear functional onX.{\displaystyle X.}[22]Say that a TVSX{\displaystyle X}has theHahn-Banachextension property(HBEP) if every vector subspace ofX{\displaystyle X}has the extension property.[22]
TheHahn-Banach theoremguarantees that every Hausdorff locally convex space has the HBEP.
For complete metrizable TVSs there is a converse:
Theorem(Kalton)—Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.[22]
If a vector spaceX{\displaystyle X}has uncountable dimension and if we endow it with thefinest vector topologythen this is a TVS with the HBEP that is neither locally convex nor metrizable.[22]
Proofs
|
https://en.wikipedia.org/wiki/Metrizable_topological_vector_space
|
Inpsychology,number senseis the term used for the hypothesis that some animals, particularly humans, have a biologically determined ability that allows them to represent and manipulate large numerical quantities. The term was popularized byStanislas Dehaenein his 1997 book "The Number Sense," but it was originally coined by the mathematicianTobias Dantzigin his 1930 textNumber: The Language of Science.
Psychologists believe that the number sense in humans can be differentiated into theapproximate number system, a system that supports the estimation of themagnitude, and theparallel individuation system, which allows the tracking of individual objects, typically for quantities below 4.[1]
There are also some differences in how number sense is defined in mathcognition. For example, Gersten and Chard say number sense "refers to a child's fluidity and flexibility with numbers, the sense of what numbers mean and an ability to perform mental mathematics and to look at the world and make comparisons."[2][3][4]
In non-human animals, number sense is not the ability to count, but the ability to perceive changes in the number of things in a collection.[5]All mammals, and most birds, will notice if there is a change in the number of their young nearby. Many birds can distinguish two from three.[6]
Researchers consider number sense to be of prime importance for children in earlyelementary education, and theNational Council of Teachers of Mathematicshas made number sense a focus area of pre-K through 2nd grade mathematics education.[7]An active area of research is to create and test teaching strategies to develop children's number sense. Number sense also refers to the contest hosted by theUniversity Interscholastic League. This contest is a ten-minute test where contestants solve math problems mentally—no calculators, scratch-work, or mark-outs are allowed.[8]
The termnumber senseinvolves several concepts ofmagnitude,ranking,comparison,measurement,rounding,percents, andestimation, including:[9]
Those concepts are taught in elementary-level education.
|
https://en.wikipedia.org/wiki/Number_sense
|
Inmathematics, thecardinalityof asetis the number of its elements. The cardinality of a set may also be called itssize, when no confusion with other notions of size is possible.[a]Beginning in the late 19th century, this concept of size was generalized toinfinite sets, allowing one to distinguish between different types of infinity and to performarithmeticon them. Nowadays, infinite sets are encountered in almost all parts of mathematics, even those that may seem to be unrelated. Familiar examples are provided by mostnumber systemsandalgebraic structures(natural numbers,rational numbers,real numbers,vector spaces, etc.), as well as in geometry, bylines,line segmentsandcurves, which are considered as the sets of their points.
There are two approaches to describing cardinality: one which usescardinal numbersand another which compares sets directly using functions between them, eitherbijectionsorinjections.
The former states the size as a number; the latter compares their relative size and led to the discovery of different sizes of infinity.[1]For example, the setsA={1,2,3}{\displaystyle A=\{1,2,3\}}andB={2,4,6}{\displaystyle B=\{2,4,6\}}are the same size as they each contain 3elements(the first approach) and there is a bijection between them (the second approach).
The cardinality, orcardinal number, of a setA{\displaystyle A}is generally denoted by|A|,{\displaystyle |A|,}with avertical baron each side.[2](This is the same notation as forabsolute value; the meaning depends on context.) The notation|A|=|B|{\displaystyle |A|=|B|}means that the two setsA{\displaystyle A}andB{\displaystyle B}have the same cardinality. The cardinal number of a setA{\displaystyle A}may also be denoted byn(A),{\displaystyle n(A),}A¯¯{\displaystyle {\overline {\overline {A}}}},card(A),{\displaystyle \operatorname {card} (A),}#A,{\displaystyle \#A,}etc.
It is conventional to recognize three kinds of cardinality:
In English, the termcardinalityoriginates from thepost-classical Latincardinalis, meaning "principal" or "chief", which derives fromcardo, a noun meaning "hinge". In Latin,cardoreferred to something central or pivotal, both literally and metaphorically. This concept of centrality passed intomedieval Latinand then into English, wherecardinalcame to describe things considered to be, in some sense, fundamental, such ascardinal virtues,cardinal sins,cardinal directions, and (in the grammatical sense)cardinal numbers.[4][5]The last of these referred to numbers used for counting (e.g., one, two, three),[6]as opposed toordinal numbers, which express order (e.g., first, second, third),[7]andnominal numbersused for labeling without meaning (e.g.,jersey numbersandserial numbers).[8]
In mathematics, the notion of cardinality was first introduced byGeorg Cantorin the late 19th century, wherein he used the termMächtigkeit, which may be translated as "magnitude" or "power", though Cantor credited the term to a work byJakob Steineronprojective geometry.[9][10][11]The termscardinalityandcardinal numberwere eventually adopted from the grammatical sense, and later translations would use these terms.[12][13]Similarly, the terms forcountableanduncountable setscome fromcountableanduncountable nouns.[citation needed]
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago.[14]Human expression of cardinality is seen as early as40000years ago, with equating the size of a group with a group of recorded notches, or a representative collection of other things, such as sticks and shells.[15]The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerianmathematicsand the manipulation of numbers without reference to a specific group of things or events.[16]
From the 6th century BCE, the writings of Greek philosophers show hints of infinite cardinality. While they generally considered infinity as an endless series of actions, such as adding 1 to a number repeatedly, they rarely considered infinite sets (actual infinity), and, if they did, they considered infinity as a unique cardinality.[17]The ancient Greek notion of infinity also considered the division of things into parts repeated without limit.
One of the earliest explicit uses of a one-to-one correspondence is recorded inAristotle'sMechanics(c.350 BC), known asAristotle's wheel paradox. The paradox can be briefly described as follows: A wheel is depicted as twoconcentric circles. The larger, outer circle is tangent to a horizontal line (e.g. a road that it rolls on), while the smaller, inner circle is rigidly affixed to the larger. Assuming the larger circle rolls along the line without slipping (or skidding) for one full revolution, the distances moved by both circles are the same: thecircumferenceof the larger circle. Further, the lines traced by the bottom-most point of each are the same length.[18]Since the smaller wheel does not skip any points, and no point on the smaller wheel is used more than once, there is a one-to-one correspondence between the two circles.
Galileo Galileipresented what was later coinedGalileo's paradoxin his bookTwo New Sciences(1638), where he attempts to show that infinite quantities cannot be called greater or less than one another. He presents the paradox roughly as follows: asquare numberis one which is the product of another number with itself, such as 4 and 9, which are the squares of 2 and 3 respectively. Then thesquare rootof a square number is that multiplicand. He then notes that there are as many square numbers as there are square roots, since every square has its own root and every root its own square, while no square has more than one root and no root more than one square. But there are as many square roots as there are numbers, since every number is the square root of some square. He, however, concluded that this meant we could not compare the sizes of infinite sets, missing the opportunity to discover cardinality.[19]
Bernard Bolzano'sParadoxes of the Infinite(Paradoxien des Unendlichen, 1851) is often considered the first systematic attempt to introduce the concept of sets intomathematical analysis. In this work, Bolzano defended the notion ofactual infinity, examined various properties of infinite collections, including an early formulation of what would later be recognized as one-to-one correspondence between infinite sets, and proposed to base mathematics on a notion similar to sets. He discussed examples such as the pairing between theintervals[0,5]{\displaystyle [0,5]}and[0,12]{\displaystyle [0,12]}by the relation5y=12x.{\displaystyle 5y=12x.}Bolzano also revisited and extended Galileo's paradox. However, he too resisted saying that these sets were, in that sense, the same size. Thus, whileParadoxes of the Infiniteanticipated several ideas central to later set theory, the work had little influence on contemporary mathematics, in part due to itsposthumous publicationand limited circulation.[20][21][22]
Other, more minor contributions includeDavid Hume's inA Treatise of Human Nature(1739), where he said"When two numbers are so combined, as that the one has always a unit answering to every unit of the other, we pronounce them equal",[23]now calledHume's principle, which was used extensively byGottlob Fregelater during the rise of set theory.[24]Another isJakob Steiner's, to whomGeorg Cantorcredits the original term for cardinality,Mächtigkeit(1867).[9][10][11]Peter Gustav Lejeune Dirichletis commonly credited as the first to explicitly formulate thepigeonhole principle, in 1834,[25]though it was used at least two centuries earlier byJean Leurechonin 1624.[26]
To better understand infinite sets, a notion of cardinality was formulatedc.1880byGeorg Cantor, the originator ofset theory. He examined the process of equating two sets with abijection, a one-to-one correspondence between the elements of two sets. In 1891, with the publication ofhis diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e., there are "uncountable sets" that contain more elements than there are in the infinite set of natural numbers.[27]
While the cardinality of a finite set is simply its number of elements, extending that notion to infinite sets usually starts with defining comparison of sizes of arbitrary sets (some of which are possibly infinite).
Two sets have the same cardinality if there exists a one-to-one correspondence between the elements ofA{\displaystyle A}and those ofB{\displaystyle B}(that is, abijectionfromA{\displaystyle A}toB{\displaystyle B}).[3]Such sets are said to beequipotent,equipollent, orequinumerous. For example, the setE={0,2,4,6,...}{\displaystyle E=\{0,2,4,6,{\text{...}}\}}of non-negativeeven numbershas the same cardinality as the setN={0,1,2,3,...}{\displaystyle \mathbb {N} =\{0,1,2,3,{\text{...}}\}}ofnatural numbers, since the functionf(n)=2n{\displaystyle f(n)=2n}is a bijection fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}.
For finite setsA{\displaystyle A}andB{\displaystyle B}, ifsomebijection exists fromA{\displaystyle A}toB{\displaystyle B}, theneachinjective or surjective function fromA{\displaystyle A}toB{\displaystyle B}is a bijection. This is no longer true for infiniteA{\displaystyle A}andB{\displaystyle B}. For example, the functiong{\displaystyle g}fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}, defined byg(n)=4n{\displaystyle g(n)=4n}is injective, but not surjective since 2, for instance, is not mapped to, andh{\displaystyle h}fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}, defined byh(n)=2floor(n/2){\displaystyle h(n)=2\operatorname {floor} (n/2)}(see:floor function) is surjective, but not injective, since 0 and 1 for instance both map to 0. Neitherg{\displaystyle g}norh{\displaystyle h}can challenge|E|=|N|,{\displaystyle |E|=|\mathbb {N} |,}which was established by the existence off{\displaystyle f}.
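The three maps above can be checked concretely on an initial segment of the naturals. The window {0, ..., 19} below is an arbitrary illustrative choice, not part of the source.

```python
# A finite-window check of the three maps between N and the evens E:
# f(n) = 2n is a bijection onto the evens below 40, g(n) = 4n is injective
# but not surjective (it never hits 2), and h(n) = 2*floor(n/2) is
# surjective onto the evens below 20 but not injective (h(0) == h(1)).

N = range(20)

f = {n: 2 * n for n in N}
g = {n: 4 * n for n in N}
h = {n: 2 * (n // 2) for n in N}

# f: injective and onto the evens below 40 on this window.
assert len(set(f.values())) == len(N)
assert set(f.values()) == set(range(0, 40, 2))

# g: injective, but the even number 2 is never mapped to.
assert len(set(g.values())) == len(N)
assert 2 not in g.values()

# h: onto the evens below 20, but 0 and 1 collide.
assert set(h.values()) == set(range(0, 20, 2))
assert h[0] == h[1] == 0
print("f bijective on window; g injective only; h surjective only")
```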
A fundamental result often used for cardinality is that of anequivalence relation. A binaryrelationis an equivalence relation if it satisfies the three basic properties of equality:reflexivity,symmetry, andtransitivity. A relationR{\displaystyle R}is reflexive if, for anya,{\displaystyle a,}aRa{\displaystyle aRa}(read:a{\displaystyle a}isR{\displaystyle R}-related toa{\displaystyle a}); symmetric if, for anya{\displaystyle a}andb,{\displaystyle b,}ifaRb,{\displaystyle aRb,}thenbRa{\displaystyle bRa}(read: ifa{\displaystyle a}is related tob,{\displaystyle b,}thenb{\displaystyle b}is related toa{\displaystyle a}); and transitive if, for anya,{\displaystyle a,}b,{\displaystyle b,}andc,{\displaystyle c,}ifaRb{\displaystyle aRb}andbRc,{\displaystyle bRc,}thenaRc.{\displaystyle aRc.}
Given any setA,{\displaystyle A,}there is a bijection fromA{\displaystyle A}to itself by theidentity function, therefore cardinality is reflexive. Given any setsA{\displaystyle A}andB,{\displaystyle B,}such that there is a bijectionf{\displaystyle f}fromA{\displaystyle A}toB,{\displaystyle B,}then there is aninverse functionf−1{\displaystyle f^{-1}}fromB{\displaystyle B}toA,{\displaystyle A,}which is also bijective, therefore cardinality is symmetric. Finally, given any setsA,{\displaystyle A,}B,{\displaystyle B,}andC{\displaystyle C}such that there is a bijectionf{\displaystyle f}fromA{\displaystyle A}toB,{\displaystyle B,}andg{\displaystyle g}fromB{\displaystyle B}toC,{\displaystyle C,}then theircompositiong∘f{\displaystyle g\circ f}(read:g{\displaystyle g}afterf{\displaystyle f}) is a bijection fromA{\displaystyle A}toC,{\displaystyle C,}and so cardinality is transitive. Thus, cardinality forms an equivalence relation. This means that cardinalitypartitions setsintoequivalence classes, and one may assign a representative to denote this class. This motivates the notion of acardinal number.
Somewhat more formally, a relation must be a certain set ofordered pairs. Since there is noset of all setsin standard set theory (see:§ Cantor's paradox), cardinality is not a relation in the usual sense, but apredicateor a relation overclasses.
A setA{\displaystyle A}is not larger than a setB{\displaystyle B}if it can be mapped intoB{\displaystyle B}without overlap. That is, the cardinality ofA{\displaystyle A}is less than or equal to the cardinality ofB{\displaystyle B}if there is aninjective functionfromA{\displaystyle A}toB{\displaystyle B}. This is writtenA⪯B,{\displaystyle A\preceq B,}or|A|≤|B|.{\displaystyle |A|\leq |B|.}IfA⪯B,{\displaystyle A\preceq B,}but there is no injection fromB{\displaystyle B}toA,{\displaystyle A,}thenA{\displaystyle A}is said to bestrictlysmaller thanB,{\displaystyle B,}written without the underline asA≺B{\displaystyle A\prec B}or|A|<|B|.{\displaystyle |A|<|B|.}For example, ifA{\displaystyle A}has four elements andB{\displaystyle B}has five, then the following are trueA⪯A,{\displaystyle A\preceq A,}A⪯B,{\displaystyle A\preceq B,}andA≺B.{\displaystyle A\prec B.}
For example, the setN{\displaystyle \mathbb {N} }of allnatural numbershas cardinality strictly less than itspower setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, becauseg(n)={n}{\displaystyle g(n)=\{n\}}is an injective function fromN{\displaystyle \mathbb {N} }toP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, and it can be shown that no function fromN{\displaystyle \mathbb {N} }toP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}can be bijective. By a similar argument,N{\displaystyle \mathbb {N} }has cardinality strictly less than the cardinality of the setR{\displaystyle \mathbb {R} }of allreal numbers. For proofs, seeCantor's diagonal argumentorCantor's first uncountability proof.
If|A|≤|B|{\displaystyle |A|\leq |B|}and|B|≤|A|,{\displaystyle |B|\leq |A|,}then|A|=|B|{\displaystyle |A|=|B|}(a fact known as theSchröder–Bernstein theorem). Theaxiom of choiceis equivalent to the statement that|A|≤|B|{\displaystyle |A|\leq |B|}or|B|≤|A|{\displaystyle |B|\leq |A|}for everyA{\displaystyle A}andB{\displaystyle B}.[28][29]
A set is calledcountableif it isfiniteor has a bijection with the set ofnatural numbers(N),{\displaystyle (\mathbb {N} ),}in which case it is calledcountably infinite. The termdenumerableis also sometimes used for countably infinite sets. For example, the set of all even natural numbers is countable, and therefore has the same cardinality as the whole set of natural numbers, even though it is aproper subset. Similarly, the set ofsquare numbersis countable, which was considered paradoxical for hundreds of years before modern set theory (see:§ Pre-Cantorian Set theory). However, several other examples have historically been considered surprising or initially unintuitive since the rise of set theory.
Therational numbers(Q){\displaystyle (\mathbb {Q} )}are those which can be expressed as thequotientorfractionpq{\displaystyle {\tfrac {p}{q}}}of twointegers. The rational numbers can be shown to be countable by considering the set of fractions as the set of allordered pairsof integers, denotedZ×Z,{\displaystyle \mathbb {Z} \times \mathbb {Z} ,}which can be visualized as the set of allinteger pointson a grid. Then, an intuitive function can be described by drawing a line in a repeating pattern, or spiral, which eventually goes through each point in the grid. For example, going through each diagonal on the grid for positive fractions, or through a lattice spiral for all integer pairs. These enumerations technically overcount the rationals, since, for example, the rational number12{\textstyle {\frac {1}{2}}}gets mapped to by all the fractions24,36,48,…,{\textstyle {\frac {2}{4}},\,{\frac {3}{6}},\,{\frac {4}{8}},\,\dots ,}as the grid method treats these all as distinct ordered pairs. So this function shows|Q|≤|N|{\displaystyle |\mathbb {Q} |\leq |\mathbb {N} |}not|Q|=|N|.{\displaystyle |\mathbb {Q} |=|\mathbb {N} |.}This can be corrected by "skipping over" these numbers in the grid, or by designing a function which does this naturally, but these methods are usually more complicated.
A number is calledalgebraicif it is a solution of somepolynomialequation (with integercoefficients). For example, thesquare root of two2{\displaystyle {\sqrt {2}}}is a solution tox2−2=0,{\displaystyle x^{2}-2=0,}and the rational numberp/q{\displaystyle p/q}is the solution toqx−p=0.{\displaystyle qx-p=0.}Conversely, a number which cannot be the root of any polynomial is calledtranscendental. Two examples includeEuler's number(e) andpi (π). In general, proving a number is transcendental is considered to be very difficult, and only a few classes of transcendental numbers are known. However, it can be shown that the set of algebraic numbers is countable (for example, seeCantor's first set theory article § The proofs). Since the set of algebraic numbers is countable while the real numbers are uncountable (shown in the following section), the transcendental numbers must form the vast majority of real numbers, even though they are individually much harder to identify. That is to say,almost allreal numbers are transcendental.
A set is calleduncountableif it is not countable. That is, it is infinite and strictly larger than the set of natural numbers. The usual first example of this is the set ofreal numbers(R){\displaystyle (\mathbb {R} )}, which can be understood as the set of all numbers on thenumber line. One method of proving that the reals are uncountable is calledCantor's diagonal argument, credited to Cantor for his 1891 proof,[30]though his differs from the more common presentation.
It begins by assuming,by contradiction, that there is some one-to-one mapping between the natural numbers and the set of real numbers between 0 and 1 (the interval[0,1]{\displaystyle [0,1]}). Then, take thedecimal expansionsof each real number, which looks like0.d1d2d3...{\displaystyle 0.d_{1}d_{2}d_{3}...}Considering these real numbers in a column, create a new number such that the first digit of the new number is different from that of the first number in the column, the second digit is different from the second number in the column and so on. We also need to make sure that the number we create has a unique decimal representation, that is, it cannot end inrepeated nines. For example, if the digit isn't a 7, make the digit of the new number a 7, and if it was a seven, make it a 3.[31]Then, this new number will be different from each of the numbers in the list by at least one digit, and therefore must not be in the list. This shows that the real numbers cannot be put into a one-to-one correspondence with the naturals, and thus must be strictly larger.[32]
Another classical example of an uncountable set, established using related reasoning, is thepower setof the natural numbers, denotedP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}. This is the set of allsubsetsofN{\displaystyle \mathbb {N} }, including theempty setandN{\displaystyle \mathbb {N} }itself. The method is much closer to Cantor's original diagonal argument. Consider any functionf:N→P(N){\displaystyle f:\mathbb {N} \to {\mathcal {P}}(\mathbb {N} )}. One may define a subsetT⊆N{\displaystyle T\subseteq \mathbb {N} }which cannot be in the image off{\displaystyle f}by: if1∈f(1){\displaystyle 1\in f(1)}, then1∉T{\displaystyle 1\notin T}, and if2∉f(2){\displaystyle 2\notin f(2)}, then2∈T{\displaystyle 2\in T}, and in general, for each natural numbern{\displaystyle n},n∈T{\displaystyle n\in T}if and only ifn∉f(n){\displaystyle n\notin f(n)}. Then if the subsetT=f(t){\displaystyle T=f(t)}was in the image off{\displaystyle f}, thent∈f(t)⟺t∉f(t){\displaystyle t\in f(t)\iff t\notin f(t)}, a contradiction. Sof{\displaystyle f}cannot be surjective. Therefore no bijection can exist betweenN{\displaystyle \mathbb {N} }andP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}. ThusP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}must not be countable. The two sets,R{\displaystyle \mathbb {R} }andP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}can be shown to have the same cardinality (by, for example, assigning each subset to a decimal expansion). Whether there exists a setA{\displaystyle A}with cardinality between these two sets|N|<|A|<|R|{\displaystyle |\mathbb {N} |<|A|<|\mathbb {R} |}is known as thecontinuum hypothesis.
Cantor's theoremgeneralizes the second theorem above, showing that every set is strictly smaller than its powerset. The proof roughly goes as follows: Given a setA{\displaystyle A}, iff{\displaystyle f}is a function fromA{\displaystyle A}toP(A){\displaystyle {\mathcal {P}}(A)}, let the subsetT⊆A{\displaystyle T\subseteq A}be given byT={a∈A:a∉f(a)}{\displaystyle T=\{a\in A:a\notin f(a)\}}. IfT=f(t){\displaystyle T=f(t)}, thent∈f(t)⟺t∉f(t){\displaystyle t\in f(t)\iff t\notin f(t)}a contradiction. Sof{\displaystyle f}cannot be surjective and thus cannot be a bijection. So|A|<|P(A)|{\displaystyle |A|<|{\mathcal {P}}(A)|}. (Notice that a trivial injection exists -- mapa{\displaystyle a}to{a}{\displaystyle \{a\}}.) Further, sinceP(A){\displaystyle {\mathcal {P}}(A)}is itself a set, the argument can be repeated to show|A|<|P(A)|<|P(P(A))|{\displaystyle |A|<|{\mathcal {P}}(A)|<|{\mathcal {P}}({\mathcal {P}}(A))|}. TakingA=N{\displaystyle A=\mathbb {N} }, this shows thatP(P(N)){\displaystyle {\mathcal {P}}({\mathcal {P}}(\mathbb {N} ))}is even larger thanP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, which was already shown to be uncountable. Repeating this argument shows that there are infinitely many "sizes" of infinity.
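For a tiny finite set the diagonal argument can even be verified exhaustively, since all functions from the set to its power set can be enumerated. A brute-force sketch, using only the standard library:

```python
from itertools import combinations, product

# Brute-force check of Cantor's theorem on a tiny set A = {0, 1, 2}: for
# EVERY function f : A -> P(A), the diagonal set T = {a : a not in f(a)}
# is missed by f, so no f is surjective and |A| < |P(A)|.

A = [0, 1, 2]
powerset = [frozenset(c) for r in range(len(A) + 1)
            for c in combinations(A, r)]  # 2^3 = 8 subsets

every_f_misses_T = True
for values in product(powerset, repeat=len(A)):  # all 8^3 = 512 functions
    f = dict(zip(A, values))
    T = frozenset(a for a in A if a not in f[a])
    if T in f.values():
        every_f_misses_T = False  # would contradict the diagonal argument

print(len(powerset), every_f_misses_T)  # 8 True
```

Of course the exhaustive check only works for finite sets; the point is that the diagonal set construction is the same one used in the general proof.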
In the above section, "cardinality" of a set was defined relationally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
Given a basic sense ofnatural numbers, a set is said to have cardinalityn{\displaystyle n}if it can be put in one-to-one correspondence with the set{1,2,…,n}.{\displaystyle \{1,\,2,\,\dots ,\,n\}.}For example, the setS={A,B,C,D}{\displaystyle S=\{A,B,C,D\}}has a natural correspondence with the set{1,2,3,4},{\displaystyle \{1,2,3,4\},}and therefore is said to have cardinality 4. Other terminologies include "Its cardinality is 4" or "Its cardinal number is 4". While this definition uses a basic sense of natural numbers, it may be that cardinality is used to define the natural numbers, in which case, a simple construction of objects satisfying thePeano axiomscan be used as a substitute. Most commonly, theVon Neumann ordinalsare used.
Showing that such a correspondence exists is not always trivial, which is the subject matter ofcombinatorics.
An intuitive property of finite sets is that, for example, if a set has cardinality 4, then it does not also have cardinality 5; that is, a set cannot have both exactly 4 elements and exactly 5 elements. However, this is not so obvious to prove. The following proof is adapted fromAnalysis IbyTerence Tao.[33]
Lemma: If a setX{\displaystyle X}has cardinalityn≥1,{\displaystyle n\geq 1,}andx0∈X,{\displaystyle x_{0}\in X,}then the setX−{x0}{\displaystyle X-\{x_{0}\}}(i.e.X{\displaystyle X}with the elementx0{\displaystyle x_{0}}removed) has cardinalityn−1.{\displaystyle n-1.}
Proof: GivenX{\displaystyle X}as above, sinceX{\displaystyle X}has cardinalityn,{\displaystyle n,}there is a bijectionf{\displaystyle f}fromX{\displaystyle X}to{1,2,…,n}.{\displaystyle \{1,\,2,\,\dots ,\,n\}.}Then, sincex0∈X,{\displaystyle x_{0}\in X,}there must be some numberf(x0){\displaystyle f(x_{0})}in{1,2,…,n}.{\displaystyle \{1,\,2,\,\dots ,\,n\}.}We need to find a bijection fromX−{x0}{\displaystyle X-\{x_{0}\}}to{1,…n−1}{\displaystyle \{1,\dots n-1\}}(which may be empty). Define a functiong{\displaystyle g}such thatg(x)=f(x){\displaystyle g(x)=f(x)}iff(x)<f(x0),{\displaystyle f(x)<f(x_{0}),}andg(x)=f(x)−1{\displaystyle g(x)=f(x)-1}iff(x)>f(x0).{\displaystyle f(x)>f(x_{0}).}Theng{\displaystyle g}is a bijection fromX−{x0}{\displaystyle X-\{x_{0}\}}to{1,…n−1}.{\displaystyle \{1,\dots n-1\}.}
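The bijection g built in this proof is a concrete "shift down" of the values above f(x0). A small illustrative implementation, representing a bijection as a Python dict:

```python
# The proof's bijection g, made concrete: given a bijection f from X onto
# {1, ..., n} (here as a dict) and a removed element x0, keep values below
# f(x0) and shift values above it down by one, giving a bijection from
# X - {x0} onto {1, ..., n-1}.

def restrict_bijection(f, x0):
    pivot = f[x0]
    return {x: (v if v < pivot else v - 1)
            for x, v in f.items() if x != x0}

f = {"a": 1, "b": 2, "c": 3, "d": 4}  # X = {a, b, c, d}, cardinality 4
g = restrict_bijection(f, "b")
print(g)  # {'a': 1, 'c': 2, 'd': 3}

assert sorted(g.values()) == [1, 2, 3]  # onto {1, ..., n-1}
assert len(set(g.values())) == len(g)   # injective
```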
Theorem: If a setX{\displaystyle X}has cardinalityn,{\displaystyle n,}then it cannot have any other cardinality. That is,X{\displaystyle X}cannot also have cardinalitym≠n.{\displaystyle m\neq n.}
Proof: IfX{\displaystyle X}is empty (has cardinality 0), then there cannot exist a bijection fromX{\displaystyle X}to any nonempty setY,{\displaystyle Y,}since nothing maps toy0∈Y.{\displaystyle y_{0}\in Y.}Assume, byinduction, that the result has been proven up to some cardinalityn.{\displaystyle n.}IfX{\displaystyle X}has cardinalityn+1,{\displaystyle n+1,}assume it also has cardinalitym.{\displaystyle m.}We want to show thatm=n+1.{\displaystyle m=n+1.}By the lemma above,X−{x0}{\displaystyle X-\{x_{0}\}}must have cardinalityn{\displaystyle n}andm−1.{\displaystyle m-1.}Since, by induction, cardinality is unique for sets with cardinalityn,{\displaystyle n,}it must be thatm−1=n,{\displaystyle m-1=n,}and thusm=n+1.{\displaystyle m=n+1.}
Thealeph numbersare a sequence of cardinal numbers that denote the size ofinfinite sets, denoted with analephℵ,{\displaystyle \aleph ,}the first letter of theHebrew alphabet. The first aleph number isℵ0,{\displaystyle \aleph _{0},}called "aleph-nought", "aleph-zero", or "aleph-null", which represents the cardinality of the set of allnatural numbers:ℵ0=|N|=|{0,1,2,3,⋯}|.{\displaystyle \aleph _{0}=|\mathbb {N} |=|\{0,1,2,3,\cdots \}|.}Then,ℵ1{\displaystyle \aleph _{1}}represents the next largest cardinality. The most common way this is formalized in set theory is throughVon Neumann ordinals, known asVon Neumann cardinal assignment.
Ordinal numbersgeneralize the notion oforderto infinite sets. For example, 2 comes after 1, denoted1<2,{\displaystyle 1<2,}and 3 comes after both, denoted1<2<3.{\displaystyle 1<2<3.}Then, one defines a new number,ω,{\displaystyle \omega ,}which comes after every natural number, denoted1<2<3<⋯<ω.{\displaystyle 1<2<3<\cdots <\omega .}Furtherω<ω+1,{\displaystyle \omega <\omega +1,}and so on. More formally, these ordinal numbers can be defined as follows:
0:={},{\displaystyle 0:=\{\},}theempty set,1:={0},{\displaystyle 1:=\{0\},}2:={0,1},{\displaystyle 2:=\{0,1\},}3:={0,1,2},{\displaystyle 3:=\{0,1,2\},}and so on. Then one can definem<n, ifm∈n,{\displaystyle m<n{\text{, if }}\,m\in n,}for example,2∈{0,1,2}=3,{\displaystyle 2\in \{0,1,2\}=3,}therefore2<3.{\displaystyle 2<3.}Further, definingω:={0,1,2,3,⋯}{\displaystyle \omega :=\{0,1,2,3,\cdots \}}(alimit ordinal) givesω{\displaystyle \omega }the desired property of being the smallest ordinal greater than all finite ordinal numbers.
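A minimal sketch of this construction in Python, using frozenset as a stand-in for sets so that ordinals can be elements of other ordinals (the function name is an illustrative assumption):

```python
# A finite fragment of the von Neumann construction: each natural number
# is the set of all smaller ones, and m < n is literally m ∈ n.

def finite_ordinals(k):
    """Return the list [0, 1, ..., k] as von Neumann ordinals."""
    ordinals = [frozenset()]                   # 0 := {}
    for _ in range(k):
        ordinals.append(frozenset(ordinals))   # n+1 := {0, 1, ..., n}
    return ordinals

ords = finite_ordinals(4)
two, three = ords[2], ords[3]
# 2 < 3 because 2 ∈ 3, and each finite ordinal n has exactly n elements
```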
Sinceω∼N{\displaystyle \omega \sim \mathbb {N} }by the natural correspondence, one may defineℵ0{\displaystyle \aleph _{0}}as the set of all finite ordinals. That is,ℵ0:=ω.{\displaystyle \aleph _{0}:=\omega .}Then,ℵ1{\displaystyle \aleph _{1}}is the set of all countable ordinals (all ordinalsκ{\displaystyle \kappa }with cardinality|κ|≤ℵ0{\displaystyle |\kappa |\leq \aleph _{0}}), thefirst uncountable ordinal. Since a set cannot contain itself,ℵ1{\displaystyle \aleph _{1}}must have a strictly larger cardinality:ℵ0<ℵ1.{\displaystyle \aleph _{0}<\aleph _{1}.}Furthermore,ℵ2{\displaystyle \aleph _{2}}is the set of all ordinals with cardinality at mostℵ1,{\displaystyle \aleph _{1},}and so on. By thewell-ordering theorem, there cannot exist any set with cardinality betweenℵ0{\displaystyle \aleph _{0}}andℵ1,{\displaystyle \aleph _{1},}and every infinite set has some cardinality corresponding to some alephℵα,{\displaystyle \aleph _{\alpha },}for some ordinalα.{\displaystyle \alpha .}
The cardinality of thereal numbersis denoted by "c{\displaystyle {\mathfrak {c}}}" (a lowercasefraktur script"c"), and is also referred to as thecardinality of the continuum. Cantor showed, using thediagonal argument, thatc>ℵ0.{\displaystyle {\mathfrak {c}}>\aleph _{0}.}We can show thatc=2ℵ0,{\displaystyle {\mathfrak {c}}=2^{\aleph _{0}},}this also being the cardinality of the set of all subsets of the natural numbers.
Thecontinuum hypothesissays thatℵ1=2ℵ0,{\displaystyle \aleph _{1}=2^{\aleph _{0}},}i.e.2ℵ0{\displaystyle 2^{\aleph _{0}}}is the smallest cardinal number bigger thanℵ0,{\displaystyle \aleph _{0},}i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis isindependentofZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC—provided that ZFC is consistent.[34][35][36]
One of Cantor's most important results was that thecardinality of the continuum(c{\displaystyle {\mathfrak {c}}}) is greater than that of the natural numbers (ℵ0{\displaystyle \aleph _{0}}); that is, there are more real numbersRthan natural numbersN. Namely, Cantor showed thatc=2ℵ0=ℶ1{\displaystyle {\mathfrak {c}}=2^{\aleph _{0}}=\beth _{1}}(seeBeth one) satisfies:|(a,b)|=|R|{\displaystyle |(a,b)|=|\mathbb {R} |}for any real numbersa<b,{\displaystyle a<b,}and|Rn|=|R|{\displaystyle |\mathbb {R} ^{n}|=|\mathbb {R} |}for everyn≥1.{\displaystyle n\geq 1.}
Thecontinuum hypothesisstates that there is nocardinal numberbetween the cardinality of the reals and the cardinality of the natural numbers, that is,c=ℵ1.{\displaystyle {\mathfrak {c}}=\aleph _{1}.}
However, this hypothesis can neither be proved nor disproved within the widely acceptedZFCaxiomatic set theory, if ZFC is consistent.
The first of these results is apparent by considering, for instance, thetangent function, which provides aone-to-one correspondencebetween theinterval(−π/2,π/2) andR.
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, whenGiuseppe Peanointroduced thespace-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, orhypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtainsuch a proof.
Cantor also showed that sets with cardinality strictly greater thanc{\displaystyle {\mathfrak {c}}}exist (see hisgeneralized diagonal argumentandtheorem). They include, for instance: the setP(R){\displaystyle {\mathcal {P}}(\mathbb {R} )}of allsubsetsofR,{\displaystyle \mathbb {R} ,}and the setRR{\displaystyle \mathbb {R} ^{\mathbb {R} }}of allfunctionsfromR{\displaystyle \mathbb {R} }toR.{\displaystyle \mathbb {R} .}
Both have cardinality2c=ℶ2{\displaystyle 2^{\mathfrak {c}}=\beth _{2}}(seeBeth two).
Thecardinal equalitiesc2=c,{\displaystyle {\mathfrak {c}}^{2}={\mathfrak {c}},}cℵ0=c,{\displaystyle {\mathfrak {c}}^{\aleph _{0}}={\mathfrak {c}},}andcc=2c{\displaystyle {\mathfrak {c}}^{\mathfrak {c}}=2^{\mathfrak {c}}}can be demonstrated usingcardinal arithmetic:
During the rise of set theory came along severalparadoxes(see:Paradoxes of set theory). These can be divided into two kinds:real paradoxesandapparent paradoxes. Apparent paradoxes are those which follow a series of reasonable steps and arrive at a conclusion which seems impossible or incorrect according to one'sintuition, but aren't necessarily logically impossible. Two historical examples have been given,Galileo's ParadoxandAristotle's Wheel, in§ History. Real paradoxes are those which, through reasonable steps, prove alogical contradiction. The real paradoxes here apply tonaive set theoryor otherwise informal statements, and have been resolved by restating the problem in terms of aformalized set theory, such asZermelo–Fraenkel set theory.
Hilbert's Hotelis athought experimentdevised by the German mathematicianDavid Hilbertto illustrate a counterintuitive property of infinite sets (assuming the axiom of choice), allowing them to have the same cardinality as aproper subsetof themselves. The scenario begins by imagining a hotel with an infinite number of rooms, all of which are occupied. But then a new guest walks in asking for a room. The hotel accommodates by moving the occupant of room 1 to room 2, the occupant of room 2 to room 3, room 3 to room 4, and in general room n to room n+1. Then every guest still has a room, but room 1 opens up for the new guest.[37]
Then, the scenario continues by imagining an infinite bus of new guests seeking a room. The hotel accommodates by moving the person in room 1 to room 2, room 2 to room 4, and in general room n to room 2n. Thus all the even-numbered rooms are occupied, but all the odd-numbered rooms are vacant, leaving room for the infinite bus of new guests. The scenario continues by assuming an infinite number of these infinite buses arrive at the hotel, and showing that the hotel is still able to accommodate. Finally, an infinite bus which has a seat for everyreal numberarrives, and the hotel is no longer able to accommodate.[37]
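The two reassignment rules described above can be sketched as follows; inspecting only a finite window of rooms is an illustrative assumption, while the maps themselves are total on all of N:

```python
# Hilbert's Hotel reassignments: n -> n+1 frees room 1 for one new guest;
# n -> 2n frees every odd-numbered room for an infinite bus of guests.

def one_new_guest(room):       # room n -> room n + 1
    return room + 1

def infinite_bus(room):        # room n -> room 2n
    return 2 * room

rooms = range(1, 11)
shifted = [one_new_guest(r) for r in rooms]   # room 1 is now free
doubled = [infinite_bus(r) for r in rooms]    # all odd rooms are now free
```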
Inmodel theory, amodelcorresponds to a specific interpretation of aformal languageortheory. It consists of adomain(a set of objects) and aninterpretationof the symbols and formulas in the language, such that the axioms of the theory are satisfied within this structure. TheLöwenheim–Skolem theoremshows that any model of set theory infirst-order logic, if it isconsistent, has an equivalentmodelwhich is countable. This appears contradictory, becauseGeorg Cantorproved that there exist sets which are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets,satisfiesthe first-order sentence that intuitively states "there are uncountable sets".[38]
A mathematical explanation of the paradox, showing that it is not a true contradiction in mathematics, was first given in 1922 byThoralf Skolem. He explained that the countability of a set is not absolute, but relative to the model in which the cardinality is measured. Skolem's work was harshly received byErnst Zermelo, who argued against the limitations of first-order logic and Skolem's notion of "relativity", but the result quickly came to be accepted by the mathematical community.[39][38]
Cantor's theoremstates that, for any setA,{\displaystyle A,}possibly infinite, itspowersetP(A){\displaystyle {\mathcal {P}}(A)}has a strictly greater cardinality. For example, this means there is no bijection fromN{\displaystyle \mathbb {N} }toP(N)∼R.{\displaystyle {\mathcal {P}}(\mathbb {N} )\sim \mathbb {R} .}Cantor's paradoxis a paradox innaive set theory, which proves there is no "set of all sets" or "universe set". It starts by assuming there is some set of all sets,U:={x|xis a set},{\displaystyle U:=\{x\;|\;x\,{\text{ is a set}}\},}then it must be thatU{\displaystyle U}is strictly smaller thanP(U),{\displaystyle {\mathcal {P}}(U),}thus|U|≤|P(U)|.{\displaystyle |U|\leq |{\mathcal {P}}(U)|.}But sinceU{\displaystyle U}contains all sets, we must have thatP(U)⊆U,{\displaystyle {\mathcal {P}}(U)\subseteq U,}and thus|P(U)|≤|U|.{\displaystyle |{\mathcal {P}}(U)|\leq |U|.}Therefore|P(U)|=|U|,{\displaystyle |{\mathcal {P}}(U)|=|U|,}contradicting Cantor's theorem. This was one of the original paradoxes that added to the need for a formalized set theory to avoid these paradoxes. This paradox is usually resolved in formal set theories by disallowingunrestricted comprehensionand the existence of a universe set.
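The diagonal construction behind Cantor's theorem can be sketched for a finite set; the particular X and f below are illustrative assumptions:

```python
# Cantor's diagonal set for a map f : X -> P(X): the set
# D = {x in X : x not in f(x)} differs from every f(x) at the element x
# itself, so D is never in the image of f and f cannot be surjective.

def diagonal_set(X, f):
    """Cantor's diagonal set D = {x in X : x not in f(x)}."""
    return frozenset(x for x in X if x not in f(x))

X = {0, 1, 2}
images = {0: frozenset(), 1: frozenset({0, 1}), 2: frozenset({2})}
f = images.get
D = diagonal_set(X, f)
# D is missed by f: it disagrees with f(x) about membership of x
```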
Similar to Cantor's paradox, the paradox of the set of all cardinal numbers is a result due to unrestricted comprehension. It often uses the definition of cardinal numbers as ordinal numbers for representatives. It is related to theBurali-Forti paradox. It begins by assuming there is some setS:={X|Xis a cardinal number}.{\displaystyle S:=\{X\,|X{\text{ is a cardinal number}}\}.}Then, if there is some largest elementℵ∈S,{\displaystyle \aleph \in S,}the powersetP(ℵ){\displaystyle {\mathcal {P}}(\aleph )}is strictly greater, and thus not inS.{\displaystyle S.}Conversely, if there is no largest element, then theunion⋃S{\displaystyle \bigcup S}contains the elements of all elements ofS,{\displaystyle S,}and is therefore greater than or equal to each element. Since there is no largest element inS,{\displaystyle S,}for any elementx∈S,{\displaystyle x\in S,}there is another elementy∈S{\displaystyle y\in S}such that|x|<|y|{\displaystyle |x|<|y|}and|y|≤|⋃S|.{\displaystyle |y|\leq {\Bigl |}\bigcup S{\Bigr |}.}Thus, for anyx∈S,{\displaystyle x\in S,}|x|<|⋃S|,{\displaystyle |x|<{\Bigl |}\bigcup S{\Bigr |},}and so|⋃S|∉S.{\displaystyle {\Bigl |}\bigcup S{\Bigr |}\notin S.}
IfAandBaredisjoint sets, then|A∪B|=|A|+|B|.{\displaystyle |A\cup B|=|A|+|B|.}
From this, one can show that in general, the cardinalities ofunionsandintersectionsare related by the following equation:[40]|A∪B|+|A∩B|=|A|+|B|.{\displaystyle |A\cup B|+|A\cap B|=|A|+|B|.}
|
https://en.wikipedia.org/wiki/Set_size
|
Infunctional analysis, thedual normis a measure of size for acontinuouslinear functionaldefined on anormed vector space.
LetX{\displaystyle X}be anormed vector spacewith norm‖⋅‖{\displaystyle \|\cdot \|}and letX∗{\displaystyle X^{*}}denote itscontinuous dual space. Thedual normof a continuouslinear functionalf{\displaystyle f}belonging toX∗{\displaystyle X^{*}}is the non-negative real number defined[1]by any of the following equivalent formulas:‖f‖=sup{|f(x)|:‖x‖≤1andx∈X}=sup{|f(x)|:‖x‖<1andx∈X}=inf{c∈[0,∞):|f(x)|≤c‖x‖for allx∈X}=sup{|f(x)|:‖x‖=1or0andx∈X}=sup{|f(x)|:‖x‖=1andx∈X}this equality holds if and only ifX≠{0}=sup{|f(x)|‖x‖:x≠0andx∈X}this equality holds if and only ifX≠{0}{\displaystyle {\begin{alignedat}{5}\|f\|&=\sup &&\{\,|f(x)|&&~:~\|x\|\leq 1~&&~{\text{ and }}~&&x\in X\}\\&=\sup &&\{\,|f(x)|&&~:~\|x\|<1~&&~{\text{ and }}~&&x\in X\}\\&=\inf &&\{\,c\in [0,\infty )&&~:~|f(x)|\leq c\|x\|~&&~{\text{ for all }}~&&x\in X\}\\&=\sup &&\{\,|f(x)|&&~:~\|x\|=1{\text{ or }}0~&&~{\text{ and }}~&&x\in X\}\\&=\sup &&\{\,|f(x)|&&~:~\|x\|=1~&&~{\text{ and }}~&&x\in X\}\;\;\;{\text{ this equality holds if and only if }}X\neq \{0\}\\&=\sup &&{\bigg \{}\,{\frac {|f(x)|}{\|x\|}}~&&~:~x\neq 0&&~{\text{ and }}~&&x\in X{\bigg \}}\;\;\;{\text{ this equality holds if and only if }}X\neq \{0\}\\\end{alignedat}}}wheresup{\displaystyle \sup }andinf{\displaystyle \inf }denote thesupremum and infimum, respectively.
The constant0{\displaystyle 0}map is the origin of the vector spaceX∗{\displaystyle X^{*}}and it always has norm‖0‖=0.{\displaystyle \|0\|=0.}IfX={0}{\displaystyle X=\{0\}}then the only linear functional onX{\displaystyle X}is the constant0{\displaystyle 0}map and moreover, the sets in the last two rows will both be empty and consequently, theirsupremumswill equalsup∅=−∞{\displaystyle \sup \varnothing =-\infty }instead of the correct value of0.{\displaystyle 0.}
Importantly, a linear functionalf{\displaystyle f}is not, in general, guaranteed to achieve its norm‖f‖=sup{|f(x)|:‖x‖≤1,x∈X}{\displaystyle \|f\|=\sup\{|f(x)|:\|x\|\leq 1,x\in X\}}on the closed unit ball{x∈X:‖x‖≤1},{\displaystyle \{x\in X:\|x\|\leq 1\},}meaning that there might not exist any vectoru∈X{\displaystyle u\in X}of norm‖u‖≤1{\displaystyle \|u\|\leq 1}such that‖f‖=|fu|{\displaystyle \|f\|=|fu|}(if such a vector does exist and iff≠0,{\displaystyle f\neq 0,}thenu{\displaystyle u}would necessarily have unit norm‖u‖=1{\displaystyle \|u\|=1}).
R.C. James provedJames's theoremin 1964, which states that aBanach spaceX{\displaystyle X}isreflexiveif and only if every bounded linear functionalf∈X∗{\displaystyle f\in X^{*}}achieves its norm on the closed unit ball.[2]It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball.
However, theBishop–Phelps theoremguarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of aBanach spaceis a norm-dense subsetof thecontinuous dual space.[3][4]
The mapf↦‖f‖{\displaystyle f\mapsto \|f\|}defines anormonX∗.{\displaystyle X^{*}.}(See Theorems 1 and 2 below.)
The dual norm is a special case of theoperator normdefined for each (bounded) linear map between normed vector spaces.
Since theground fieldofX{\displaystyle X}(R{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) iscomplete,X∗{\displaystyle X^{*}}is aBanach space.
The topology onX∗{\displaystyle X^{*}}induced by‖⋅‖{\displaystyle \|\cdot \|}turns out to be stronger than theweak-* topologyonX∗.{\displaystyle X^{*}.}
Thedouble dual(or second dual)X∗∗{\displaystyle X^{**}}ofX{\displaystyle X}is the dual of the normed vector spaceX∗{\displaystyle X^{*}}. There is a natural mapφ:X→X∗∗{\displaystyle \varphi :X\to X^{**}}. Indeed, for eachw∗{\displaystyle w^{*}}inX∗{\displaystyle X^{*}}defineφ(v)(w∗):=w∗(v).{\displaystyle \varphi (v)(w^{*}):=w^{*}(v).}
The mapφ{\displaystyle \varphi }islinear,injective, anddistance preserving.[5]In particular, ifX{\displaystyle X}is complete (i.e. a Banach space), thenφ{\displaystyle \varphi }is an isometry onto a closed subspace ofX∗∗{\displaystyle X^{**}}.[6]
In general, the mapφ{\displaystyle \varphi }is not surjective. For example, ifX{\displaystyle X}is the Banach spaceL∞{\displaystyle L^{\infty }}consisting of bounded functions on the real line with the supremum norm, then the mapφ{\displaystyle \varphi }is not surjective. (SeeLp{\displaystyle L^{p}}space). Ifφ{\displaystyle \varphi }is surjective, thenX{\displaystyle X}is said to be areflexive Banach space. If1<p<∞,{\displaystyle 1<p<\infty ,}then thespaceLp{\displaystyle L^{p}}is a reflexive Banach space.
TheFrobenius normdefined by‖A‖F=∑i=1m∑j=1n|aij|2=trace(A∗A)=∑i=1min{m,n}σi2{\displaystyle \|A\|_{\text{F}}={\sqrt {\sum _{i=1}^{m}\sum _{j=1}^{n}\left|a_{ij}\right|^{2}}}={\sqrt {\operatorname {trace} (A^{*}A)}}={\sqrt {\sum _{i=1}^{\min\{m,n\}}\sigma _{i}^{2}}}}is self-dual, i.e., its dual norm is‖⋅‖F′=‖⋅‖F.{\displaystyle \|\cdot \|'_{\text{F}}=\|\cdot \|_{\text{F}}.}
Thespectral norm, a special case of theinduced normwhenp=2{\displaystyle p=2}, is defined by the maximumsingular valueof a matrix, that is,‖A‖2=σmax(A).{\displaystyle \|A\|_{2}=\sigma _{\max }(A).}Its dual norm is the nuclear norm, defined by‖B‖2′=∑iσi(B),{\displaystyle \|B\|'_{2}=\sum _{i}\sigma _{i}(B),}for any matrixB{\displaystyle B}whereσi(B){\displaystyle \sigma _{i}(B)}denote the singular values[citation needed].
Ifp,q∈[1,∞]{\displaystyle p,q\in [1,\infty ]}theSchattenℓp{\displaystyle \ell ^{p}}-normon matrices is dual to the Schattenℓq{\displaystyle \ell ^{q}}-norm.
Let‖⋅‖{\displaystyle \|\cdot \|}be a norm onRn.{\displaystyle \mathbb {R} ^{n}.}The associateddual norm, denoted‖⋅‖∗,{\displaystyle \|\cdot \|_{*},}is defined as‖z‖∗=sup{z⊺x:‖x‖≤1}.{\displaystyle \|z\|_{*}=\sup\{z^{\intercal }x:\|x\|\leq 1\}.}
(This can be shown to be a norm.) The dual norm can be interpreted as theoperator normofz⊺,{\displaystyle z^{\intercal },}interpreted as a1×n{\displaystyle 1\times n}matrix, with the norm‖⋅‖{\displaystyle \|\cdot \|}onRn{\displaystyle \mathbb {R} ^{n}}, and the absolute value onR{\displaystyle \mathbb {R} }:‖z‖∗=sup{|z⊺x|:‖x‖≤1}.{\displaystyle \|z\|_{*}=\sup\{|z^{\intercal }x|:\|x\|\leq 1\}.}
From the definition of dual norm we have the inequalityz⊺x=‖x‖(z⊺x‖x‖)≤‖x‖‖z‖∗{\displaystyle z^{\intercal }x=\|x\|\left(z^{\intercal }{\frac {x}{\|x\|}}\right)\leq \|x\|\|z\|_{*}}which holds for allx{\displaystyle x}andz.{\displaystyle z.}[7][8]The dual of the dual norm is the original norm: we have‖x‖∗∗=‖x‖{\displaystyle \|x\|_{**}=\|x\|}for allx.{\displaystyle x.}(This need not hold in infinite-dimensional vector spaces.)
The dual of theEuclidean normis the Euclidean norm, sincesup{z⊺x:‖x‖2≤1}=‖z‖2.{\displaystyle \sup\{z^{\intercal }x:\|x\|_{2}\leq 1\}=\|z\|_{2}.}
(This follows from theCauchy–Schwarz inequality; for nonzeroz,{\displaystyle z,}the value ofx{\displaystyle x}that maximisesz⊺x{\displaystyle z^{\intercal }x}over‖x‖2≤1{\displaystyle \|x\|_{2}\leq 1}isz‖z‖2.{\displaystyle {\tfrac {z}{\|z\|_{2}}}.})
The dual of theℓ∞{\displaystyle \ell ^{\infty }}-norm is theℓ1{\displaystyle \ell ^{1}}-norm:sup{z⊺x:‖x‖∞≤1}=∑i=1n|zi|=‖z‖1,{\displaystyle \sup\{z^{\intercal }x:\|x\|_{\infty }\leq 1\}=\sum _{i=1}^{n}|z_{i}|=\|z\|_{1},}and the dual of theℓ1{\displaystyle \ell ^{1}}-norm is theℓ∞{\displaystyle \ell ^{\infty }}-norm.
More generally,Hölder's inequalityshows that the dual of theℓp{\displaystyle \ell ^{p}}-normis theℓq{\displaystyle \ell ^{q}}-norm, whereq{\displaystyle q}satisfies1p+1q=1,{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1,}that is,q=pp−1.{\displaystyle q={\tfrac {p}{p-1}}.}
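This duality can be checked numerically: the standard Hölder maximizer with components sign(z_i)|z_i|^(q−1) attains the supremum in the dual-norm definition, so the dual value equals the ℓq-norm. A sketch; the specific p and z below are illustrative assumptions:

```python
# Verifying l^p / l^q duality: for x* with components sign(z_i)|z_i|^(q-1),
# one has z.x* / ||x*||_p == ||z||_q exactly (up to floating point),
# because z.x* = sum |z_i|^q and ||x*||_p = (sum |z_i|^q)^(1/p).
import numpy as np

def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

p = 3.0
q = p / (p - 1.0)            # conjugate exponent, 1/p + 1/q = 1
z = np.array([1.0, -2.0, 0.5])

x_star = np.sign(z) * np.abs(z) ** (q - 1.0)   # Hölder maximizer
dual_value = z @ x_star / lp_norm(x_star, p)
# dual_value agrees with ||z||_q
```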
As another example, consider theℓ2{\displaystyle \ell ^{2}}- or spectral norm onRm×n{\displaystyle \mathbb {R} ^{m\times n}}. The associated dual norm is‖Z‖2∗=sup{tr(Z⊺X):‖X‖2≤1},{\displaystyle \|Z\|_{2*}=\sup\{\mathbf {tr} (Z^{\intercal }X):\|X\|_{2}\leq 1\},}which turns out to be the sum of the singular values,‖Z‖2∗=σ1(Z)+⋯+σr(Z)=tr(Z⊺Z),{\displaystyle \|Z\|_{2*}=\sigma _{1}(Z)+\cdots +\sigma _{r}(Z)=\mathbf {tr} ({\sqrt {Z^{\intercal }Z}}),}wherer=rankZ.{\displaystyle r=\mathbf {rank} Z.}This norm is sometimes called thenuclear norm.[9]
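A numerical sketch of this spectral/nuclear duality using NumPy's SVD; the random matrices below are illustrative assumptions:

```python
# The nuclear norm is the sum of singular values, and for any X with
# spectral norm at most 1 the pairing tr(Z^T X) is bounded by ||Z||_nuc.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 3))

nuclear = np.linalg.svd(Z, compute_uv=False).sum()   # sum of singular values
spectral = np.linalg.norm(Z, 2)                      # largest singular value

X = rng.standard_normal((4, 3))
X /= np.linalg.norm(X, 2)          # now ||X||_2 == 1, so X is feasible
pairing = np.trace(Z.T @ X)
# pairing <= nuclear, and spectral <= nuclear
```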
Forp∈[1,∞],{\displaystyle p\in [1,\infty ],}p-norm (also calledℓp{\displaystyle \ell _{p}}-norm) of vectorx=(xn)n{\displaystyle \mathbf {x} =(x_{n})_{n}}is‖x‖p:=(∑i=1n|xi|p)1/p.{\displaystyle \|\mathbf {x} \|_{p}~:=~\left(\sum _{i=1}^{n}\left|x_{i}\right|^{p}\right)^{1/p}.}
Ifp,q∈[1,∞]{\displaystyle p,q\in [1,\infty ]}satisfy1/p+1/q=1{\displaystyle 1/p+1/q=1}then theℓp{\displaystyle \ell ^{p}}andℓq{\displaystyle \ell ^{q}}norms are dual to each other and the same is true of theLp{\displaystyle L^{p}}andLq{\displaystyle L^{q}}norms, where(X,Σ,μ),{\displaystyle (X,\Sigma ,\mu ),}is somemeasure space.
In particular theEuclidean normis self-dual sincep=q=2.{\displaystyle p=q=2.}For the norm‖x‖=xTQx{\displaystyle \|x\|={\sqrt {x^{\mathrm {T} }Qx}}}withQ{\displaystyle Q}positive definite, the dual norm is‖y‖∗=yTQ−1y.{\displaystyle \|y\|_{*}={\sqrt {y^{\mathrm {T} }Q^{-1}y}}.}
Forp=2,{\displaystyle p=2,}the‖⋅‖2{\displaystyle \|\,\cdot \,\|_{2}}-norm is even induced by a canonicalinner product⟨⋅,⋅⟩,{\displaystyle \langle \,\cdot ,\,\cdot \rangle ,}meaning that‖x‖2=⟨x,x⟩{\displaystyle \|\mathbf {x} \|_{2}={\sqrt {\langle \mathbf {x} ,\mathbf {x} \rangle }}}for all vectorsx.{\displaystyle \mathbf {x} .}This inner product can be expressed in terms of the norm by using thepolarization identity.
Onℓ2,{\displaystyle \ell ^{2},}this is theEuclidean inner productdefined by⟨(xn)n,(yn)n⟩ℓ2=∑nxnyn¯{\displaystyle \langle \left(x_{n}\right)_{n},\left(y_{n}\right)_{n}\rangle _{\ell ^{2}}~=~\sum _{n}x_{n}{\overline {y_{n}}}}while for the spaceL2(X,μ){\displaystyle L^{2}(X,\mu )}associated with ameasure space(X,Σ,μ),{\displaystyle (X,\Sigma ,\mu ),}which consists of allsquare-integrable functions, this inner product is⟨f,g⟩L2=∫Xf(x)g(x)¯dx.{\displaystyle \langle f,g\rangle _{L^{2}}=\int _{X}f(x){\overline {g(x)}}\,\mathrm {d} x.}The norms of the continuous dual spaces ofℓ2{\displaystyle \ell ^{2}}andL2(X,μ){\displaystyle L^{2}(X,\mu )}satisfy thepolarization identity, and so these dual norms can be used to define inner products. With this inner product, this dual space is also aHilbert space.
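For a real inner-product norm, the polarization identity mentioned above reads ⟨x,y⟩ = ¼(‖x+y‖² − ‖x−y‖²); a quick NumPy sketch, with arbitrarily chosen vectors:

```python
# Recovering the Euclidean inner product from the norm alone via the
# real polarization identity.
import numpy as np

def polarize(x, y):
    """Recover <x, y> from the Euclidean norm via polarization."""
    n = np.linalg.norm
    return 0.25 * (n(x + y) ** 2 - n(x - y) ** 2)

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, -1.0, 3.0])
# polarize(x, y) matches the dot product x @ y
```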
Given normed vector spacesX{\displaystyle X}andY,{\displaystyle Y,}letL(X,Y){\displaystyle L(X,Y)}[10]be the collection of allbounded linear mappings(oroperators) ofX{\displaystyle X}intoY.{\displaystyle Y.}ThenL(X,Y){\displaystyle L(X,Y)}can be given a canonical norm.
Theorem 1—LetX{\displaystyle X}andY{\displaystyle Y}be normed spaces. Assigning to each continuous linear operatorf∈L(X,Y){\displaystyle f\in L(X,Y)}the scalar‖f‖=sup{‖f(x)‖:x∈X,‖x‖≤1}{\displaystyle \|f\|=\sup\{\|f(x)\|:x\in X,\|x\|\leq 1\}}defines a norm‖⋅‖:L(X,Y)→R{\displaystyle \|\cdot \|~:~L(X,Y)\to \mathbb {R} }onL(X,Y){\displaystyle L(X,Y)}that makesL(X,Y){\displaystyle L(X,Y)}into a normed space. Moreover, ifY{\displaystyle Y}is a Banach space then so isL(X,Y).{\displaystyle L(X,Y).}[11]
A subset of a normed space is boundedif and only ifit lies in some multiple of theunit sphere; thus‖f‖<∞{\displaystyle \|f\|<\infty }for everyf∈L(X,Y).{\displaystyle f\in L(X,Y).}Ifα{\displaystyle \alpha }is a scalar, then(αf)(x)=α⋅fx{\displaystyle (\alpha f)(x)=\alpha \cdot fx}so that‖αf‖=|α|‖f‖.{\displaystyle \|\alpha f\|=|\alpha |\|f\|.}
Thetriangle inequalityinY{\displaystyle Y}shows that‖(f1+f2)x‖=‖f1x+f2x‖≤‖f1x‖+‖f2x‖≤(‖f1‖+‖f2‖)‖x‖≤‖f1‖+‖f2‖{\displaystyle {\begin{aligned}\|\left(f_{1}+f_{2}\right)x\|~&=~\|f_{1}x+f_{2}x\|\\&\leq ~\|f_{1}x\|+\|f_{2}x\|\\&\leq ~\left(\|f_{1}\|+\|f_{2}\|\right)\|x\|\\&\leq ~\|f_{1}\|+\|f_{2}\|\end{aligned}}}
for everyx∈X{\displaystyle x\in X}satisfying‖x‖≤1.{\displaystyle \|x\|\leq 1.}This fact together with the definition of‖⋅‖:L(X,Y)→R{\displaystyle \|\cdot \|~:~L(X,Y)\to \mathbb {R} }implies the triangle inequality:‖f+g‖≤‖f‖+‖g‖.{\displaystyle \|f+g\|\leq \|f\|+\|g\|.}
Since{‖f(x)‖:x∈X,‖x‖≤1}{\displaystyle \{\|f(x)\|:x\in X,\|x\|\leq 1\}}is a non-empty set of non-negative real numbers,‖f‖=sup{‖f(x)‖:x∈X,‖x‖≤1}{\displaystyle \|f\|=\sup \left\{\|f(x)\|:x\in X,\|x\|\leq 1\right\}}is a non-negative real number.
Iff≠0{\displaystyle f\neq 0}thenfx0≠0{\displaystyle fx_{0}\neq 0}for somex0∈X,{\displaystyle x_{0}\in X,}which implies that‖fx0‖>0{\displaystyle \left\|fx_{0}\right\|>0}and consequently‖f‖>0.{\displaystyle \|f\|>0.}This shows that(L(X,Y),‖⋅‖){\displaystyle \left(L(X,Y),\|\cdot \|\right)}is a normed space.[12]
Assume now thatY{\displaystyle Y}is complete and we will show that(L(X,Y),‖⋅‖){\displaystyle (L(X,Y),\|\cdot \|)}is complete. Letf∙=(fn)n=1∞{\displaystyle f_{\bullet }=\left(f_{n}\right)_{n=1}^{\infty }}be aCauchy sequenceinL(X,Y),{\displaystyle L(X,Y),}so by definition‖fn−fm‖→0{\displaystyle \left\|f_{n}-f_{m}\right\|\to 0}asn,m→∞.{\displaystyle n,m\to \infty .}This fact together with the relation‖fnx−fmx‖=‖(fn−fm)x‖≤‖fn−fm‖‖x‖{\displaystyle \left\|f_{n}x-f_{m}x\right\|=\left\|\left(f_{n}-f_{m}\right)x\right\|\leq \left\|f_{n}-f_{m}\right\|\|x\|}
implies that(fnx)n=1∞{\displaystyle \left(f_{n}x\right)_{n=1}^{\infty }}is a Cauchy sequence inY{\displaystyle Y}for everyx∈X.{\displaystyle x\in X.}It follows that for everyx∈X,{\displaystyle x\in X,}the limitlimn→∞fnx{\displaystyle \lim _{n\to \infty }f_{n}x}exists inY{\displaystyle Y}and so we will denote this (necessarily unique) limit byfx,{\displaystyle fx,}that is:fx=limn→∞fnx.{\displaystyle fx~=~\lim _{n\to \infty }f_{n}x.}
It can be shown thatf:X→Y{\displaystyle f:X\to Y}is linear. Ifε>0{\displaystyle \varepsilon >0}, then‖fn−fm‖‖x‖≤ε‖x‖{\displaystyle \left\|f_{n}-f_{m}\right\|\|x\|~\leq ~\varepsilon \|x\|}for all sufficiently large integersnandm. It follows that‖fx−fmx‖≤ε‖x‖{\displaystyle \left\|fx-f_{m}x\right\|~\leq ~\varepsilon \|x\|}for all sufficiently largem.{\displaystyle m.}Hence‖fx‖≤(‖fm‖+ε)‖x‖,{\displaystyle \|fx\|\leq \left(\left\|f_{m}\right\|+\varepsilon \right)\|x\|,}so thatf∈L(X,Y){\displaystyle f\in L(X,Y)}and‖f−fm‖≤ε.{\displaystyle \left\|f-f_{m}\right\|\leq \varepsilon .}This shows thatfm→f{\displaystyle f_{m}\to f}in the norm topology ofL(X,Y).{\displaystyle L(X,Y).}This establishes the completeness ofL(X,Y).{\displaystyle L(X,Y).}[13]
WhenY{\displaystyle Y}is ascalar field(i.e.Y=C{\displaystyle Y=\mathbb {C} }orY=R{\displaystyle Y=\mathbb {R} }),L(X,Y){\displaystyle L(X,Y)}is thedual spaceX∗{\displaystyle X^{*}}ofX.{\displaystyle X.}
Theorem 2—LetX{\displaystyle X}be a normed space and for everyx∗∈X∗{\displaystyle x^{*}\in X^{*}}let‖x∗‖:=sup{|⟨x,x∗⟩|:x∈Xwith‖x‖≤1}{\displaystyle \left\|x^{*}\right\|~:=~\sup \left\{|\langle x,x^{*}\rangle |~:~x\in X{\text{ with }}\|x\|\leq 1\right\}}where by definition⟨x,x∗⟩:=x∗(x){\displaystyle \langle x,x^{*}\rangle ~:=~x^{*}(x)}is a scalar.
Then
LetB={x∈X:‖x‖≤1}{\displaystyle B~=~\{x\in X~:~\|x\|\leq 1\}}denote the closed unit ball of a normed spaceX.{\displaystyle X.}WhenY{\displaystyle Y}is thescalar fieldthenL(X,Y)=X∗{\displaystyle L(X,Y)=X^{*}}so part (a) is a corollary of Theorem 1. Fixx∈X.{\displaystyle x\in X.}There exists[15]y∗∈B∗{\displaystyle y^{*}\in B^{*}}such that⟨x,y∗⟩=‖x‖.{\displaystyle \langle {x,y^{*}}\rangle =\|x\|.}But|⟨x,x∗⟩|≤‖x‖‖x∗‖≤‖x‖{\displaystyle |\langle {x,x^{*}}\rangle |\leq \|x\|\|x^{*}\|\leq \|x\|}for everyx∗∈B∗{\displaystyle x^{*}\in B^{*}}. (b) follows from the above. Since the open unit ballU{\displaystyle U}ofX{\displaystyle X}is dense inB{\displaystyle B}, the definition of‖x∗‖{\displaystyle \|x^{*}\|}shows thatx∗∈B∗{\displaystyle x^{*}\in B^{*}}if and only if|⟨x,x∗⟩|≤1{\displaystyle |\langle {x,x^{*}}\rangle |\leq 1}for everyx∈U{\displaystyle x\in U}. The proof for (c)[16]now follows directly.[17]
As usual, letd(x,y):=‖x−y‖{\displaystyle d(x,y):=\|x-y\|}denote the canonicalmetricinduced by the norm onX,{\displaystyle X,}and denote the distance from a pointx{\displaystyle x}to the subsetS⊆X{\displaystyle S\subseteq X}byd(x,S):=infs∈Sd(x,s)=infs∈S‖x−s‖.{\displaystyle d(x,S)~:=~\inf _{s\in S}d(x,s)~=~\inf _{s\in S}\|x-s\|.}Iff{\displaystyle f}is a bounded linear functional on a normed spaceX,{\displaystyle X,}then for every vectorx∈X,{\displaystyle x\in X,}[18]|f(x)|=‖f‖d(x,kerf),{\displaystyle |f(x)|=\|f\|\,d(x,\ker f),}wherekerf={k∈X:f(k)=0}{\displaystyle \ker f=\{k\in X:f(k)=0\}}denotes thekerneloff.{\displaystyle f.}
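A concrete sketch of |f(x)| = ‖f‖ d(x, ker f) in R² with the Euclidean norm: for f(x) = a·x the dual norm is ‖a‖₂, and ker f is the line orthogonal to a. The particular a and x below are illustrative assumptions:

```python
# Verifying |f(x)| = ||f|| * d(x, ker f) for a linear functional on R^2.
import numpy as np

a = np.array([3.0, 4.0])              # f(x) = a @ x, so ||f|| = ||a||_2 = 5
x = np.array([2.0, -1.0])

f_x = a @ x                           # value of the functional at x
f_norm = np.linalg.norm(a)            # dual norm of f

k = np.array([-a[1], a[0]]) / f_norm  # unit vector spanning ker f
proj = (x @ k) * k                    # nearest point of ker f to x
dist_to_kernel = np.linalg.norm(x - proj)
# |f(x)| equals ||f|| * d(x, ker f)
```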
|
https://en.wikipedia.org/wiki/Dual_norm
|
In mathematics, thelogarithmic normis a real-valuedfunctionalonoperators, and is derived from either aninner product, a vector norm, or its inducedoperator norm. The logarithmic norm was independently introduced byGermund Dahlquist[1]and Sergei Lozinskiĭ in 1958, for squarematrices. It has since been extended to nonlinear operators andunbounded operatorsas well.[2]The logarithmic norm has a wide range of applications, in particular in matrix theory,differential equationsandnumerical analysis. In the finite-dimensional setting, it is also referred to as the matrix measure or the Lozinskiĭ measure.
LetA{\displaystyle A}be a square matrix and‖⋅‖{\displaystyle \|\cdot \|}be an induced matrix norm. The associated logarithmic normμ{\displaystyle \mu }ofA{\displaystyle A}is defined byμ(A)=limh→0+‖I+hA‖−1h.{\displaystyle \mu (A)=\lim _{h\to 0^{+}}{\frac {\|I+hA\|-1}{h}}.}
HereI{\displaystyle I}is theidentity matrixof the same dimension asA{\displaystyle A}, andh{\displaystyle h}is a real, positive number. The limit ash→0−{\displaystyle h\rightarrow 0^{-}}equals−μ(−A){\displaystyle -\mu (-A)}, and is in general different from the logarithmic normμ(A){\displaystyle \mu (A)}, as−μ(−A)≤μ(A){\displaystyle -\mu (-A)\leq \mu (A)}for all matrices.
The matrix norm‖A‖{\displaystyle \|A\|}is always positive ifA≠0{\displaystyle A\neq 0}, but the logarithmic normμ(A){\displaystyle \mu (A)}may also take negative values, e.g. whenA{\displaystyle A}isnegative definite. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The namelogarithmic norm,which does not appear in the original reference, seems to originate from estimating the logarithm of the norm of solutions to the differential equationx˙=Ax.{\displaystyle {\dot {x}}=Ax.}
The maximal growth rate oflog‖x‖{\displaystyle \log \|x\|}isμ(A){\displaystyle \mu (A)}. This is expressed by the differential inequalityddt+‖x‖≤μ(A)‖x‖,{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t^{+}}}\|x\|\leq \mu (A)\|x\|,}
whered/dt+{\displaystyle \mathrm {d} /\mathrm {d} t^{+}}is theupper right Dini derivative. Usinglogarithmic differentiationthe differential inequality can also be writtenddt+log⁡‖x‖≤μ(A),{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t^{+}}}\log \|x\|\leq \mu (A),}
showing its direct relation toGrönwall's lemma. In fact, it can be shown that the norm of the state transition matrixΦ(t,t0){\displaystyle \Phi (t,t_{0})}associated to the differential equationx˙=A(t)x{\displaystyle {\dot {x}}=A(t)x}is bounded by[3][4]‖Φ(t,t0)‖≤e∫t0tμ(A(s))ds{\displaystyle \|\Phi (t,t_{0})\|\leq \mathrm {e} ^{\int _{t_{0}}^{t}\mu (A(s))\,\mathrm {d} s}}
for allt≥t0{\displaystyle t\geq t_{0}}.
If the vector norm is an inner product norm, as in aHilbert space, then the logarithmic norm is the smallest numberμ(A){\displaystyle \mu (A)}such that for allx{\displaystyle x}Re⁡⟨x,Ax⟩≤μ(A)‖x‖2.{\displaystyle \operatorname {Re} \langle x,Ax\rangle \leq \mu (A)\|x\|^{2}.}
Unlike the original definition, the latter expression also allowsA{\displaystyle A}to be unbounded. Thusdifferential operatorstoo can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products orduality. Both the operator norm and the logarithmic norm are then associated with extremal values ofquadratic formsas follows:
Basic properties of the logarithmic norm of a matrix include:
The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas,aij{\displaystyle a_{ij}}represents the element on thei{\displaystyle i}th row andj{\displaystyle j}th column of a matrixA{\displaystyle A}.[5]
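The standard closed forms (μ₁ column-wise, μ∞ row-wise, μ₂ via the largest eigenvalue of the symmetric part) can be sketched as follows; restricting to real matrices and the example matrix itself are illustrative assumptions:

```python
# Closed-form logarithmic norms for the three common induced norms
# (real matrices assumed, so Re(a_ii) = a_ii):
#   mu_1(A)   = max_j ( a_jj + sum_{i != j} |a_ij| )
#   mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )
#   mu_2(A)   = lambda_max( (A + A^T) / 2 )
import numpy as np

def mu_1(A):
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def mu_inf(A):
    n = A.shape[0]
    return max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i)
               for i in range(n))

def mu_2(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
# all three logarithmic norms are negative here, so x' = Ax is contractive
```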
The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that−μ(−A)≤Re⁡⟨x,Ax⟩⟨x,x⟩≤μ(A),{\displaystyle -\mu (-A)\leq {\frac {\operatorname {Re} \langle x,Ax\rangle }{\langle x,x\rangle }}\leq \mu (A),}
and both extreme values are taken for some vectorsx≠0{\displaystyle x\neq 0}. This also means that every eigenvalueλk{\displaystyle \lambda _{k}}ofA{\displaystyle A}satisfies−μ(−A)≤Re⁡λk≤μ(A).{\displaystyle -\mu (-A)\leq \operatorname {Re} \lambda _{k}\leq \mu (A).}
More generally, the logarithmic norm is related to thenumerical rangeof a matrix.
A matrix with−μ(−A)>0{\displaystyle -\mu (-A)>0}is positive definite, and one withμ(A)<0{\displaystyle \mu (A)<0}is negative definite. Such matrices haveinverses. The inverse of a negative definite matrix is bounded by‖A−1‖≤1−μ(A).{\displaystyle \|A^{-1}\|\leq {\frac {1}{-\mu (A)}}.}
Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, ifR{\displaystyle R}is a rational function with the property|R(z)|≤1forRe⁡z≤0,{\displaystyle |R(z)|\leq 1\quad {\text{for}}\quad \operatorname {Re} z\leq 0,}
then, for inner product norms,‖R(A)‖≤1wheneverμ(A)≤0.{\displaystyle \|R(A)\|\leq 1\quad {\text{whenever}}\quad \mu (A)\leq 0.}
Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices.
The logarithmic norm plays an important role in the stability analysis of a continuous dynamical systemx˙=Ax{\displaystyle {\dot {x}}=Ax}. Its role is analogous to that of the matrix norm for a discrete dynamical systemxn+1=Axn{\displaystyle x_{n+1}=Ax_{n}}.
In the simplest case, whenA{\displaystyle A}is a scalar complex constantλ{\displaystyle \lambda }, the discrete dynamical system has stable solutions when|λ|≤1{\displaystyle |\lambda |\leq 1}, while the differential equation has stable solutions whenℜλ≤0{\displaystyle \Re \,\lambda \leq 0}. WhenA{\displaystyle A}is a matrix, the discrete system has stable solutions if‖A‖≤1{\displaystyle \|A\|\leq 1}. In the continuous system, the solutions are of the formetAx(0){\displaystyle \mathrm {e} ^{tA}x(0)}. They are stable if‖etA‖≤1{\displaystyle \|\mathrm {e} ^{tA}\|\leq 1}for allt≥0{\displaystyle t\geq 0}, which follows from property 7 above, ifμ(A)≤0{\displaystyle \mu (A)\leq 0}. In the latter case,‖x‖{\displaystyle \|x\|}is aLyapunov functionfor the system.
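The bound ‖e^{tA}‖ ≤ e^{tμ(A)} behind this stability argument can be checked numerically for the Euclidean norm; the Taylor-series matrix exponential and the example matrix are illustrative assumptions:

```python
# Checking ||exp(tA)||_2 <= exp(t * mu_2(A)) on a small example,
# using a plain truncated Taylor series for the matrix exponential.
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by truncated Taylor series (fine for small M)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-1.0, 2.0],
              [0.0, -1.5]])
mu2 = np.linalg.eigvalsh((A + A.T) / 2).max()   # logarithmic 2-norm

t = 0.7
lhs = np.linalg.norm(expm_taylor(t * A), 2)     # spectral norm of exp(tA)
rhs = np.exp(t * mu2)
# lhs <= rhs, as guaranteed by the logarithmic norm bound
```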
Runge–Kutta methodsfor the numerical solution ofx˙=Ax{\displaystyle {\dot {x}}=Ax}replace the differential equation by a discrete equationxn+1=R(hA)⋅xn{\displaystyle x_{n+1}=R(hA)\cdot x_{n}}, where the rational functionR{\displaystyle R}is characteristic of the method, andh{\displaystyle h}is the time step size. If|R(z)|≤1{\displaystyle |R(z)|\leq 1}wheneverℜ(z)≤0{\displaystyle \Re \,(z)\leq 0}, then a stable differential equation, havingμ(A)≤0{\displaystyle \mu (A)\leq 0}, will always result in a stable (contractive) numerical method, as‖R(hA)‖≤1{\displaystyle \|R(hA)\|\leq 1}. Runge-Kutta methods having this property are called A-stable.
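As a concrete check, the implicit (backward) Euler method has R(z) = 1/(1 − z), which satisfies |R(z)| ≤ 1 whenever Re z ≤ 0 and is therefore A-stable. A small numerical sketch with an arbitrary matrix of non-positive logarithmic norm:

```python
import numpy as np

def log_norm_2(A):
    """mu_2(A) = largest eigenvalue of the symmetric part of A."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Implicit Euler: x_{n+1} = R(hA) x_n with R(hA) = (I - hA)^{-1}.
# A-stability: mu(A) <= 0 implies ||(I - hA)^{-1}|| <= 1 for every step size h > 0.
A = np.array([[-2.0, 1.0],
              [-1.0, -1.0]])
assert log_norm_2(A) <= 0

I = np.eye(2)
for h in [0.01, 0.1, 1.0, 10.0]:
    R = np.linalg.inv(I - h * A)
    assert np.linalg.norm(R, 2) <= 1.0 + 1e-12
```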
Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as tosemigrouptheory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem iswell posed. Similar results also apply in the stability analysis incontrol theory, where there is a need to discriminate between positive and negative feedback.
In connection with differential operators it is common to use inner products andintegration by parts. In the simplest case we consider functions satisfyingu(0)=u(1)=0{\displaystyle u(0)=u(1)=0}with inner product
⟨u,v⟩=∫01uvdx.{\displaystyle \langle u,v\rangle =\int _{0}^{1}uv\,\mathrm {d} x.}
Then it holds that
⟨u,u″⟩=−⟨u′,u′⟩≤−π2⟨u,u⟩,{\displaystyle \langle u,u''\rangle =-\langle u',u'\rangle \leq -\pi ^{2}\langle u,u\rangle ,}
where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality[citation needed]. In the latter, equality is attained for the functionsin⁡πx{\displaystyle \sin \,\pi x}, implying that the constant−π2{\displaystyle -\pi ^{2}}is the best possible. Thus
μ(A)=−π2{\displaystyle \mu (A)=-\pi ^{2}}
for the differential operatorA=d2/dx2{\displaystyle A=\mathrm {d} ^{2}/\mathrm {d} x^{2}}, which implies that
‖etA‖≤e−π2t.{\displaystyle \|\mathrm {e} ^{tA}\|\leq \mathrm {e} ^{-\pi ^{2}t}.}
As an operator satisfying⟨u,Au⟩>0{\displaystyle \langle u,Au\rangle >0}is calledelliptic, the logarithmic norm quantifies the (strong) ellipticity of−d2/dx2{\displaystyle -\mathrm {d} ^{2}/\mathrm {d} x^{2}}. Thus, ifA{\displaystyle A}is strongly elliptic, thenμ(−A)<0{\displaystyle \mu (-A)<0}, andA{\displaystyle A}is invertible given proper data.
If a finite difference method is used to solve−u″=f{\displaystyle -u''=f}, the problem is replaced by an algebraic equationTu=f{\displaystyle Tu=f}. The matrixT{\displaystyle T}will typically inherit the ellipticity, i.e.,−μ(−T)>0{\displaystyle -\mu (-T)>0}, showing thatT{\displaystyle T}is positive definite and therefore invertible.
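A sketch of this discrete inheritance, using the standard central-difference matrix on a uniform grid (the grid size is an arbitrary choice): the smallest eigenvalue of T is positive, so T is positive definite, and it approaches the sharp constant π² from the Sobolev inequality as the grid is refined.

```python
import numpy as np

# Discretize -u'' = f on (0,1) with u(0) = u(1) = 0 by central differences:
# T = (1/h^2) tridiag(-1, 2, -1).
n = 200
h = 1.0 / (n + 1)
T = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Since T is symmetric, -mu_2(-T) equals the smallest eigenvalue of T.
lam_min = np.linalg.eigvalsh(T).min()
assert lam_min > 0                          # T is positive definite, hence invertible
assert abs(lam_min - np.pi**2) < 0.01 * np.pi**2   # approaches pi^2 on fine grids
```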
These results carry over to thePoisson equationas well as to other numerical methods such as theFinite element method.
For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities
l(f)⋅‖u−v‖≤‖f(u)−f(v)‖≤L(f)⋅‖u−v‖,{\displaystyle l(f)\cdot \|u-v\|\leq \|f(u)-f(v)\|\leq L(f)\cdot \|u-v\|,}
whereL(f){\displaystyle L(f)}is the least upper boundLipschitz constantoff{\displaystyle f}, andl(f){\displaystyle l(f)}is the greatest lower bound Lipschitz constant; and
m(f)⋅‖u−v‖2≤⟨u−v,f(u)−f(v)⟩≤M(f)⋅‖u−v‖2,{\displaystyle m(f)\cdot \|u-v\|^{2}\leq \langle u-v,f(u)-f(v)\rangle \leq M(f)\cdot \|u-v\|^{2},}
whereu{\displaystyle u}andv{\displaystyle v}are in the domainD{\displaystyle D}off{\displaystyle f}. HereM(f){\displaystyle M(f)}is the least upper bound logarithmic Lipschitz constant off{\displaystyle f}, andm(f){\displaystyle m(f)}is the greatest lower bound logarithmic Lipschitz constant. It holds thatm(f)=−M(−f){\displaystyle m(f)=-M(-f)}(compare above) and, analogously,l(f)=L(f−1)−1{\displaystyle l(f)=L(f^{-1})^{-1}}, whereL(f−1){\displaystyle L(f^{-1})}is defined on the image off{\displaystyle f}.
For nonlinear operators that are Lipschitz continuous, it further holds that
M(f)=limh→0+L(I+hf)−1h.{\displaystyle M(f)=\lim _{h\to 0^{+}}{\frac {L(I+hf)-1}{h}}.}
Iff{\displaystyle f}is differentiable and its domainD{\displaystyle D}is convex, then
L(f)=supx∈D‖f′(x)‖andM(f)=supx∈Dμ(f′(x)).{\displaystyle L(f)=\sup _{x\in D}\|f'(x)\|\quad {\text{and}}\quad M(f)=\sup _{x\in D}\mu (f'(x)).}
Heref′(x){\displaystyle f'(x)}is theJacobian matrixoff{\displaystyle f}, linking the nonlinear extension to the matrix norm and logarithmic norm.
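A numerical sketch of this link, with an illustrative map f(x) = −x − x³ acting componentwise. Its Jacobian is f′(x) = −I − 3 diag(x²), so μ₂(f′(x)) = −1 − 3 min_i x_i² and M(f) = sup_x μ₂(f′(x)) = −1; the monotonicity inequality ⟨u−v, f(u)−f(v)⟩ ≤ M(f)‖u−v‖² is then checked on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A uniformly monotone (decreasing) map: f(x) = -x - x^3, componentwise.
    return -x - x**3

# Least upper bound logarithmic Lipschitz constant via the Jacobian:
# M(f) = sup_x mu_2(f'(x)) = sup_x (-1 - 3 min_i x_i^2) = -1.
M = -1.0

for _ in range(1000):
    u, v = rng.normal(size=3), rng.normal(size=3)
    lhs = np.dot(u - v, f(u) - f(v))
    assert lhs <= M * np.dot(u - v, u - v) + 1e-12
```

Since M(f) < 0, this f is uniformly monotone in the sense of the following paragraph.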
An operator having eitherm(f)>0{\displaystyle m(f)>0}orM(f)<0{\displaystyle M(f)<0}is called uniformly monotone. An operator satisfyingL(f)<1{\displaystyle L(f)<1}is calledcontractive. This extension offers many connections to fixed point theory, and critical point theory.
The theory becomes analogous to that of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case with unbounded operators. Property 8 of the logarithmic norm above carries over, independently of the choice of vector norm, and it holds that
m(f)>0⇒L(f−1)≤1m(f),{\displaystyle m(f)>0\Rightarrow L(f^{-1})\leq {\frac {1}{m(f)}},}
which quantifies theUniform Monotonicity Theoremdue to Browder & Minty (1963).
|
https://en.wikipedia.org/wiki/Logarithmic_norm
|
Infunctional analysis, a branch of mathematics, two methods of constructingnormed spacesfromdiskswere systematically employed byAlexander Grothendieckto definenuclear operatorsandnuclear spaces.[1]One method is used if the diskD{\displaystyle D}is bounded: in this case, theauxiliary normed spaceisspanD{\displaystyle \operatorname {span} D}with normpD(x):=infx∈rD,r>0r.{\displaystyle p_{D}(x):=\inf _{x\in rD,r>0}r.}The other method is used if the diskD{\displaystyle D}isabsorbing: in this case, the auxiliary normed space is thequotient spaceX/pD−1(0).{\displaystyle X/p_{D}^{-1}(0).}If the disk is both bounded and absorbing then the two auxiliary normed spaces are canonically isomorphic (astopological vector spacesand asnormed spaces).
Throughout this article,X{\displaystyle X}will be a real or complex vector space (not necessarily a TVS, yet) andD{\displaystyle D}will be adiskinX.{\displaystyle X.}
LetX{\displaystyle X}be a real or complex vector space. For any subsetD{\displaystyle D}ofX,{\displaystyle X,}theMinkowski functionalofD{\displaystyle D}is defined by:
pD(x):=infx∈rD,r>0r.{\displaystyle p_{D}(x):=\inf _{x\in rD,r>0}r.}
LetX{\displaystyle X}be a real or complex vector space. For any subsetD{\displaystyle D}ofX{\displaystyle X}such that the Minkowski functionalpD{\displaystyle p_{D}}is aseminormonspanD,{\displaystyle \operatorname {span} D,}letXD{\displaystyle X_{D}}denoteXD:=(spanD,pD){\displaystyle X_{D}:=\left(\operatorname {span} D,p_{D}\right)}which is called theseminormed spaceinduced byD,{\displaystyle D,}where ifpD{\displaystyle p_{D}}is anormthen it is called thenormed spaceinduced byD.{\displaystyle D.}
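A small computational sketch of the Minkowski functional on an illustrative disk in R²: for the elliptical disk D = {(x, y) : x²/4 + y² ≤ 1}, the induced p_D is the norm √(x²/4 + y²), and bisection on r in the defining infimum recovers it.

```python
import math

def minkowski(D_contains, x, hi=1e6, tol=1e-9):
    """p_D(x) = inf{ r > 0 : x in rD }, computed by bisection on r.
    Assumes D is a disk, so membership of x in rD is monotone in r."""
    lo = 0.0
    if not D_contains(x[0] / hi, x[1] / hi):   # x in rD  iff  x/r in D
        return math.inf
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if D_contains(x[0] / mid, x[1] / mid):
            hi = mid
        else:
            lo = mid
    return hi

# Illustrative disk D = {(x, y) : x^2/4 + y^2 <= 1}; here span D = R^2 and
# p_D(x, y) = sqrt(x^2/4 + y^2) is a norm, so X_D is a normed space.
D = lambda x, y: x**2 / 4 + y**2 <= 1.0
assert abs(minkowski(D, (2.0, 0.0)) - 1.0) < 1e-6
assert abs(minkowski(D, (2.0, 1.0)) - math.sqrt(2.0)) < 1e-6
```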
Assumption(Topology):XD=spanD{\displaystyle X_{D}=\operatorname {span} D}is endowed with the seminorm topology induced bypD,{\displaystyle p_{D},}which will be denoted byτD{\displaystyle \tau _{D}}orτpD{\displaystyle \tau _{p_{D}}}
Importantly, this topology stemsentirelyfrom the setD,{\displaystyle D,}the algebraic structure ofX,{\displaystyle X,}and the usual topology onR{\displaystyle \mathbb {R} }(sincepD{\displaystyle p_{D}}is defined usingonlythe setD{\displaystyle D}and scalar multiplication). This justifies the study of Banach disks and is part of the reason why they play an important role in the theory ofnuclear operatorsandnuclear spaces.
The inclusion mapInD:XD→X{\displaystyle \operatorname {In} _{D}:X_{D}\to X}is called thecanonical map.[1]
Suppose thatD{\displaystyle D}is a disk.
ThenspanD=⋃n=1∞nD{\textstyle \operatorname {span} D=\bigcup _{n=1}^{\infty }nD}so thatD{\displaystyle D}isabsorbinginspanD,{\displaystyle \operatorname {span} D,}thelinear spanofD.{\displaystyle D.}The set{rD:r>0}{\displaystyle \{rD:r>0\}}of all positive scalar multiples ofD{\displaystyle D}forms a basis of neighborhoods at the origin for alocally convex topological vector spacetopologyτD{\displaystyle \tau _{D}}onspanD.{\displaystyle \operatorname {span} D.}TheMinkowski functionalpD{\displaystyle p_{D}}of the diskD{\displaystyle D}inspanD{\displaystyle \operatorname {span} D}is well-defined and forms aseminormonspanD.{\displaystyle \operatorname {span} D.}[3]The locally convex topology induced by this seminorm is the topologyτD{\displaystyle \tau _{D}}that was defined before.
AboundeddiskD{\displaystyle D}in atopological vector spaceX{\displaystyle X}such that(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is aBanach spaceis called aBanach disk,infracomplete, or abounded completantinX.{\displaystyle X.}
If it is shown that(spanD,pD){\displaystyle \left(\operatorname {span} D,p_{D}\right)}is a Banach space thenD{\displaystyle D}will be a Banach disk inanyTVS that containsD{\displaystyle D}as a bounded subset.
This is because the Minkowski functionalpD{\displaystyle p_{D}}is defined in purely algebraic terms.
Consequently, the question of whether or not(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}forms a Banach space is dependent only on the diskD{\displaystyle D}and the Minkowski functionalpD,{\displaystyle p_{D},}and not on any particular TVS topology thatX{\displaystyle X}may carry.
Thus the requirement that a Banach disk in a TVSX{\displaystyle X}be a bounded subset ofX{\displaystyle X}is the only property that ties a Banach disk's topology to the topology of its containing TVSX.{\displaystyle X.}
Bounded disks
The following result explains why Banach disks are required to be bounded.
Theorem[4][5][1]—IfD{\displaystyle D}is a disk in atopological vector space(TVS)X,{\displaystyle X,}thenD{\displaystyle D}isboundedinX{\displaystyle X}if and only if the inclusion mapInD:XD→X{\displaystyle \operatorname {In} _{D}:X_{D}\to X}is continuous.
If the diskD{\displaystyle D}is bounded in the TVSX{\displaystyle X}then for all neighborhoodsU{\displaystyle U}of the origin inX,{\displaystyle X,}there exists somer>0{\displaystyle r>0}such thatrD⊆U∩XD.{\displaystyle rD\subseteq U\cap X_{D}.}It follows that in this case the topology of(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is finer than the subspace topology thatXD{\displaystyle X_{D}}inherits fromX,{\displaystyle X,}which implies that the inclusion mapInD:XD→X{\displaystyle \operatorname {In} _{D}:X_{D}\to X}is continuous.
Conversely, ifX{\displaystyle X}has a TVS topology such thatInD:XD→X{\displaystyle \operatorname {In} _{D}:X_{D}\to X}is continuous, then for every neighborhoodU{\displaystyle U}of the origin inX{\displaystyle X}there exists somer>0{\displaystyle r>0}such thatrD⊆U∩XD,{\displaystyle rD\subseteq U\cap X_{D},}which shows thatD{\displaystyle D}is bounded inX.{\displaystyle X.}
Hausdorffness
The space(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}isHausdorffif and only ifpD{\displaystyle p_{D}}is a norm, which happens if and only ifD{\displaystyle D}does not contain any non-trivial vector subspace.[6]In particular, if there exists a Hausdorff TVS topology onX{\displaystyle X}such thatD{\displaystyle D}is bounded inX{\displaystyle X}thenpD{\displaystyle p_{D}}is a norm.
An example whereXD{\displaystyle X_{D}}is not Hausdorff is obtained by lettingX=R2{\displaystyle X=\mathbb {R} ^{2}}and lettingD{\displaystyle D}be thex{\displaystyle x}-axis.
Convergence of nets
Suppose thatD{\displaystyle D}is a disk inX{\displaystyle X}such thatXD{\displaystyle X_{D}}is Hausdorff and letx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}be a net inXD.{\displaystyle X_{D}.}Thenx∙→0{\displaystyle x_{\bullet }\to 0}inXD{\displaystyle X_{D}}if and only if there exists a netr∙=(ri)i∈I{\displaystyle r_{\bullet }=\left(r_{i}\right)_{i\in I}}of real numbers such thatr∙→0{\displaystyle r_{\bullet }\to 0}andxi∈riD{\displaystyle x_{i}\in r_{i}D}for alli{\displaystyle i};
moreover, in this case it may be assumed without loss of generality thatri≥0{\displaystyle r_{i}\geq 0}for alli.{\displaystyle i.}
Relationship between disk-induced spaces
IfC⊆D⊆X{\displaystyle C\subseteq D\subseteq X}thenspanC⊆spanD{\displaystyle \operatorname {span} C\subseteq \operatorname {span} D}andpD≤pC{\displaystyle p_{D}\leq p_{C}}onspanC,{\displaystyle \operatorname {span} C,}so define the following continuous[5]linear map:
InCD:XC→XD.{\displaystyle \operatorname {In} _{C}^{D}:X_{C}\to X_{D}.}
IfC{\displaystyle C}andD{\displaystyle D}are disks inX{\displaystyle X}withC⊆D{\displaystyle C\subseteq D}then call the inclusion mapInCD:XC→XD{\displaystyle \operatorname {In} _{C}^{D}:X_{C}\to X_{D}}thecanonical inclusionofXC{\displaystyle X_{C}}intoXD.{\displaystyle X_{D}.}
In particular, the subspace topology thatspanC{\displaystyle \operatorname {span} C}inherits from(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is weaker than(XC,pC){\displaystyle \left(X_{C},p_{C}\right)}'s seminorm topology.[5]
The disk as the closed unit ball
The diskD{\displaystyle D}is a closed subset of(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}if and only ifD{\displaystyle D}is the closed unit ball of the seminormpD{\displaystyle p_{D}}; that is,D={x∈spanD:pD(x)≤1}.{\displaystyle D=\left\{x\in \operatorname {span} D:p_{D}(x)\leq 1\right\}.}
IfD{\displaystyle D}is a disk in a vector spaceX{\displaystyle X}and if there exists a TVS topologyτ{\displaystyle \tau }onspanD{\displaystyle \operatorname {span} D}such thatD{\displaystyle D}is a closed and bounded subset of(spanD,τ),{\displaystyle \left(\operatorname {span} D,\tau \right),}thenD{\displaystyle D}is the closed unit ball of(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}(that is,D={x∈spanD:pD(x)≤1}{\displaystyle D=\left\{x\in \operatorname {span} D:p_{D}(x)\leq 1\right\}}) (see footnote for proof).[note 2]
The following theorem may be used to establish that(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is a Banach space.
Once this is established,D{\displaystyle D}will be a Banach disk in any TVS in whichD{\displaystyle D}is bounded.
Theorem[7]—LetD{\displaystyle D}be a disk in a vector spaceX.{\displaystyle X.}If there exists a Hausdorff TVS topologyτ{\displaystyle \tau }onspanD{\displaystyle \operatorname {span} D}such thatD{\displaystyle D}is a boundedsequentially completesubset of(spanD,τ),{\displaystyle (\operatorname {span} D,\tau ),}then(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is a Banach space.
Assume without loss of generality thatX=spanD{\displaystyle X=\operatorname {span} D}and letp:=pD{\displaystyle p:=p_{D}}be theMinkowski functionalofD.{\displaystyle D.}SinceD{\displaystyle D}is a bounded subset of a Hausdorff TVS,D{\displaystyle D}does not contain any non-trivial vector subspace, which implies thatp{\displaystyle p}is a norm.
LetτD{\displaystyle \tau _{D}}denote the norm topology onX{\displaystyle X}induced byp{\displaystyle p}where sinceD{\displaystyle D}is a bounded subset of(X,τ),{\displaystyle (X,\tau ),}τD{\displaystyle \tau _{D}}is finer thanτ.{\displaystyle \tau .}
BecauseD{\displaystyle D}is convex and balanced, for any0<m<n{\displaystyle 0<m<n}
2−(n+1)D+⋯+2−(m+2)D=2−(m+1)(1−2m−n)D⊆2−(m+1)D.{\displaystyle 2^{-(n+1)}D+\cdots +2^{-(m+2)}D=2^{-(m+1)}\left(1-2^{m-n}\right)D\subseteq 2^{-(m+1)}D.}
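The scalar identity behind this step is the geometric sum 2^{−(n+1)} + ⋯ + 2^{−(m+2)} = 2^{−(m+1)}(1 − 2^{m−n}), which a few lines of arithmetic confirm:

```python
# Sanity check of the geometric sum used in the proof: for 0 < m < n,
# sum_{k=m+2}^{n+1} 2^{-k} = 2^{-(m+1)} * (1 - 2^{m-n}) <= 2^{-(m+1)}.
for m in range(1, 6):
    for n in range(m + 1, 10):
        s = sum(2.0**(-k) for k in range(m + 2, n + 2))
        assert abs(s - 2.0**(-(m + 1)) * (1 - 2.0**(m - n))) < 1e-15
        assert s <= 2.0**(-(m + 1))
```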
Letx∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}be a Cauchy sequence in(XD,p).{\displaystyle \left(X_{D},p\right).}By replacingx∙{\displaystyle x_{\bullet }}with a subsequence, we may assume without loss of generality†that for alli,{\displaystyle i,}xi+1−xi∈12i+2D.{\displaystyle x_{i+1}-x_{i}\in {\frac {1}{2^{i+2}}}D.}
This implies that for any0<m<n,{\displaystyle 0<m<n,}xn−xm=(xn−xn−1)+⋯+(xm+1−xm)∈2−(n+1)D+⋯+2−(m+2)D⊆2−(m+1)D{\displaystyle x_{n}-x_{m}=\left(x_{n}-x_{n-1}\right)+\cdots +\left(x_{m+1}-x_{m}\right)\in 2^{-(n+1)}D+\cdots +2^{-(m+2)}D\subseteq 2^{-(m+1)}D}so that in particular, by takingm=1{\displaystyle m=1}it follows thatx∙{\displaystyle x_{\bullet }}is contained inx1+2−2D.{\displaystyle x_{1}+2^{-2}D.}SinceτD{\displaystyle \tau _{D}}is finer thanτ,{\displaystyle \tau ,}x∙{\displaystyle x_{\bullet }}is a Cauchy sequence in(X,τ).{\displaystyle (X,\tau ).}For allm>0,{\displaystyle m>0,}2−(m+1)D{\displaystyle 2^{-(m+1)}D}is a Hausdorff sequentially complete subset of(X,τ).{\displaystyle (X,\tau ).}In particular, this is true forx1+2−2D{\displaystyle x_{1}+2^{-2}D}so there exists somex∈x1+2−2D{\displaystyle x\in x_{1}+2^{-2}D}such thatx∙→x{\displaystyle x_{\bullet }\to x}in(X,τ).{\displaystyle (X,\tau ).}
Sincexn−xm∈2−(m+1)D{\displaystyle x_{n}-x_{m}\in 2^{-(m+1)}D}for all0<m<n,{\displaystyle 0<m<n,}by fixingm{\displaystyle m}and taking the limit (in(X,τ){\displaystyle (X,\tau )}) asn→∞,{\displaystyle n\to \infty ,}it follows thatx−xm∈2−(m+1)D{\displaystyle x-x_{m}\in 2^{-(m+1)}D}for eachm>0.{\displaystyle m>0.}This implies thatp(x−xm)→0{\displaystyle p\left(x-x_{m}\right)\to 0}asm→∞,{\displaystyle m\to \infty ,}which says exactly thatx∙→x{\displaystyle x_{\bullet }\to x}in(XD,p).{\displaystyle \left(X_{D},p\right).}This shows that(XD,p){\displaystyle \left(X_{D},p\right)}is complete.
†This assumption is allowed becausex∙{\displaystyle x_{\bullet }}is a Cauchy sequence in a metric space (so the limits of all subsequences are equal) and a sequence in a metric space converges if and only if every subsequence has a sub-subsequence that converges.
Note that even ifD{\displaystyle D}is not a bounded and sequentially complete subset of any Hausdorff TVS, one might still be able to conclude that(XD,pD){\displaystyle \left(X_{D},p_{D}\right)}is a Banach space by applying this theorem to some diskK{\displaystyle K}satisfying{x∈spanD:pD(x)<1}⊆K⊆{x∈spanD:pD(x)≤1}{\displaystyle \left\{x\in \operatorname {span} D:p_{D}(x)<1\right\}\subseteq K\subseteq \left\{x\in \operatorname {span} D:p_{D}(x)\leq 1\right\}}becausepD=pK.{\displaystyle p_{D}=p_{K}.}
The following are consequences of the above theorem:
Suppose thatD{\displaystyle D}is a bounded disk in a TVSX.{\displaystyle X.}
LetX{\displaystyle X}be a TVS and letD{\displaystyle D}be a bounded disk inX.{\displaystyle X.}
IfD{\displaystyle D}is a bounded Banach disk in a Hausdorff locally convex spaceX{\displaystyle X}and ifT{\displaystyle T}is a barrel inX{\displaystyle X}thenT{\displaystyle T}absorbsD{\displaystyle D}(that is, there is a numberr>0{\displaystyle r>0}such thatD⊆rT{\displaystyle D\subseteq rT}).[4]
IfU{\displaystyle U}is a convex balanced closed neighborhood of the origin inX{\displaystyle X}then the collection of all neighborhoodsrU,{\displaystyle rU,}wherer>0{\displaystyle r>0}ranges over the positive real numbers, induces a topological vector space topology onX.{\displaystyle X.}WhenX{\displaystyle X}has this topology, it is denoted byXU.{\displaystyle X_{U}.}Since this topology is not necessarily Hausdorff nor complete, the completion of the Hausdorff spaceX/pU−1(0){\displaystyle X/p_{U}^{-1}(0)}is denoted byXU¯{\displaystyle {\overline {X_{U}}}}so thatXU¯{\displaystyle {\overline {X_{U}}}}is a complete Hausdorff space andpU(x):=infx∈rU,r>0r{\displaystyle p_{U}(x):=\inf _{x\in rU,r>0}r}is a norm on this space makingXU¯{\displaystyle {\overline {X_{U}}}}into a Banach space. The polar ofU,{\displaystyle U,}U∘,{\displaystyle U^{\circ },}is a weakly compact bounded equicontinuous disk inX′{\displaystyle X^{\prime }}and so is infracomplete.
IfX{\displaystyle X}is ametrizablelocally convexTVS then for everyboundedsubsetB{\displaystyle B}ofX,{\displaystyle X,}there exists a boundeddiskD{\displaystyle D}inX{\displaystyle X}such thatB⊆XD,{\displaystyle B\subseteq X_{D},}and bothX{\displaystyle X}andXD{\displaystyle X_{D}}induce the samesubspace topologyonB.{\displaystyle B.}[5]
Suppose thatX{\displaystyle X}is a topological vector space andV{\displaystyle V}is aconvexbalancedandradialset.
Then{1nV:n=1,2,…}{\displaystyle \left\{{\tfrac {1}{n}}V:n=1,2,\ldots \right\}}is a neighborhood basis at the origin for some locally convex topologyτV{\displaystyle \tau _{V}}onX.{\displaystyle X.}This TVS topologyτV{\displaystyle \tau _{V}}is given by theMinkowski functionalformed byV,{\displaystyle V,}pV:X→R,{\displaystyle p_{V}:X\to \mathbb {R} ,}which is a seminorm onX{\displaystyle X}defined bypV(x):=infx∈rV,r>0r.{\displaystyle p_{V}(x):=\inf _{x\in rV,r>0}r.}The topologyτV{\displaystyle \tau _{V}}is Hausdorff if and only ifpV{\displaystyle p_{V}}is a norm, or equivalently, if and only ifpV−1(0)={0},{\displaystyle p_{V}^{-1}(0)=\{0\},}for which it suffices thatV{\displaystyle V}beboundedinX.{\displaystyle X.}The topologyτV{\displaystyle \tau _{V}}need not be Hausdorff butX/pV−1(0){\displaystyle X/p_{V}^{-1}(0)}is Hausdorff.
A norm onX/pV−1(0){\displaystyle X/p_{V}^{-1}(0)}is given by‖x+pV−1(0)‖:=pV(x),{\displaystyle \left\|x+p_{V}^{-1}(0)\right\|:=p_{V}(x),}where this value is in fact independent of the representativex{\displaystyle x}of the equivalence classx+pV−1(0){\displaystyle x+p_{V}^{-1}(0)}chosen.
The normed space(X/pV−1(0),‖⋅‖){\displaystyle \left(X/p_{V}^{-1}(0),\|\cdot \|\right)}is denoted byXV{\displaystyle X_{V}}and its completion is denoted byXV¯.{\displaystyle {\overline {X_{V}}}.}
If in additionV{\displaystyle V}is bounded inX{\displaystyle X}then the seminormpV:X→R{\displaystyle p_{V}:X\to \mathbb {R} }is a norm so in particular,pV−1(0)={0}.{\displaystyle p_{V}^{-1}(0)=\{0\}.}In this case, we takeXV{\displaystyle X_{V}}to be the vector spaceX{\displaystyle X}instead ofX/{0}{\displaystyle X/\{0\}}so that the notationXV{\displaystyle X_{V}}is unambiguous (whetherXV{\displaystyle X_{V}}denotes the space induced by a radial disk or the space induced by a bounded disk).[1]
Thequotient topologyτQ{\displaystyle \tau _{Q}}onX/pV−1(0){\displaystyle X/p_{V}^{-1}(0)}(inherited fromX{\displaystyle X}'s original topology) is finer (in general, strictly finer) than the norm topology.
Thecanonical mapis thequotient mapqV:X→XV=X/pV−1(0),{\displaystyle q_{V}:X\to X_{V}=X/p_{V}^{-1}(0),}which is continuous whenXV{\displaystyle X_{V}}has either the norm topology or the quotient topology.[1]
IfU{\displaystyle U}andV{\displaystyle V}are radial disks such thatU⊆V{\displaystyle U\subseteq V}thenpU−1(0)⊆pV−1(0){\displaystyle p_{U}^{-1}(0)\subseteq p_{V}^{-1}(0)}so there is a continuous linear surjectivecanonical mapqV,U:X/pU−1(0)→X/pV−1(0)=XV{\displaystyle q_{V,U}:X/p_{U}^{-1}(0)\to X/p_{V}^{-1}(0)=X_{V}}defined by sendingx+pU−1(0)∈XU=X/pU−1(0){\displaystyle x+p_{U}^{-1}(0)\in X_{U}=X/p_{U}^{-1}(0)}to the equivalence classx+pV−1(0),{\displaystyle x+p_{V}^{-1}(0),}where one may verify that the definition does not depend on the representative of the equivalence classx+pU−1(0){\displaystyle x+p_{U}^{-1}(0)}that is chosen.[1]This canonical map has norm≤1{\displaystyle \,\leq 1}[1]and it has a unique continuous linear canonical extension toXU¯{\displaystyle {\overline {X_{U}}}}that is denoted bygV,U¯:XU¯→XV¯.{\displaystyle {\overline {g_{V,U}}}:{\overline {X_{U}}}\to {\overline {X_{V}}}.}
Suppose that in additionB≠∅{\displaystyle B\neq \varnothing }andC{\displaystyle C}are bounded disks inX{\displaystyle X}withB⊆C{\displaystyle B\subseteq C}so thatXB⊆XC{\displaystyle X_{B}\subseteq X_{C}}and the inclusionInBC:XB→XC{\displaystyle \operatorname {In} _{B}^{C}:X_{B}\to X_{C}}is a continuous linear map.
LetInB:XB→X,{\displaystyle \operatorname {In} _{B}:X_{B}\to X,}InC:XC→X,{\displaystyle \operatorname {In} _{C}:X_{C}\to X,}andInBC:XB→XC{\displaystyle \operatorname {In} _{B}^{C}:X_{B}\to X_{C}}be the canonical maps.
ThenInB=InC∘InBC:XB→X{\displaystyle \operatorname {In} _{B}=\operatorname {In} _{C}\circ \operatorname {In} _{B}^{C}:X_{B}\to X}andqV=qV,U∘qU.{\displaystyle q_{V}=q_{V,U}\circ q_{U}.}[1]
Suppose thatS{\displaystyle S}is a bounded radial disk.
SinceS{\displaystyle S}is a bounded disk, ifD:=S{\displaystyle D:=S}then we may create the auxiliary normed spaceXD=spanD{\displaystyle X_{D}=\operatorname {span} D}with normpD(x):=infx∈rD,r>0r{\displaystyle p_{D}(x):=\inf _{x\in rD,r>0}r}; sinceS{\displaystyle S}is radial,XS=X.{\displaystyle X_{S}=X.}SinceS{\displaystyle S}is a radial disk, ifV:=S{\displaystyle V:=S}then we may create the auxiliary seminormed spaceX/pV−1(0){\displaystyle X/p_{V}^{-1}(0)}with the seminormpV(x):=infx∈rV,r>0r{\displaystyle p_{V}(x):=\inf _{x\in rV,r>0}r}; becauseS{\displaystyle S}is bounded, this seminorm is a norm andpV−1(0)={0}{\displaystyle p_{V}^{-1}(0)=\{0\}}soX/pV−1(0)=X/{0}=X.{\displaystyle X/p_{V}^{-1}(0)=X/\{0\}=X.}Thus, in this case the two auxiliary normed spaces produced by these two different methods result in the same normed space.
Suppose thatH{\displaystyle H}is a weakly closed equicontinuous disk inX′{\displaystyle X^{\prime }}(this implies thatH{\displaystyle H}is weakly compact) and letU:=H∘={x∈X:|h(x)|≤1for allh∈H}{\displaystyle U:=H^{\circ }=\{x\in X:|h(x)|\leq 1{\text{ for all }}h\in H\}}be thepolarofH.{\displaystyle H.}BecauseU∘=H∘∘=H{\displaystyle U^{\circ }=H^{\circ \circ }=H}by thebipolar theorem, it follows that a continuous linear functionalf{\displaystyle f}belongs toXH′=spanH{\displaystyle X_{H}^{\prime }=\operatorname {span} H}if and only iff{\displaystyle f}belongs to the continuous dual space of(X,pU),{\displaystyle \left(X,p_{U}\right),}wherepU{\displaystyle p_{U}}is theMinkowski functionalofU{\displaystyle U}defined bypU(x):=infx∈rU,r>0r.{\displaystyle p_{U}(x):=\inf _{x\in rU,r>0}r.}[9]
A disk in a TVS is calledinfrabornivorous[5]if itabsorbsall Banach disks.
A linear map between two TVSs is calledinfrabounded[5]if it maps Banach disks to bounded disks.
A sequencex∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}in a TVSX{\displaystyle X}is said to befast convergent[5]to a pointx∈X{\displaystyle x\in X}if there exists a Banach diskD{\displaystyle D}such that bothx{\displaystyle x}and the sequence is (eventually) contained inspanD{\displaystyle \operatorname {span} D}andx∙→x{\displaystyle x_{\bullet }\to x}in(XD,pD).{\displaystyle \left(X_{D},p_{D}\right).}
Every fast convergent sequence isMackey convergent.[5]
|
https://en.wikipedia.org/wiki/Auxiliary_normed_space
|
Cauchy's functional equationis thefunctional equation:f(x+y)=f(x)+f(y).{\displaystyle f(x+y)=f(x)+f(y).\ }
A functionf{\displaystyle f}that solves this equation is called anadditive function. Over therational numbers, it can be shown usingelementary algebrathat there is a single family of solutions, namelyf:x↦cx{\displaystyle f\colon x\mapsto cx}for any rational constantc.{\displaystyle c.}Over thereal numbers, the family oflinear mapsf:x↦cx,{\displaystyle f:x\mapsto cx,}now withc{\displaystyle c}an arbitrary real constant, is likewise a family of solutions; however there can exist other solutions not of this form that are extremely complicated. However, any of a number of regularity conditions, some of them quite weak, will preclude the existence of thesepathologicalsolutions. For example, an additive functionf:R→R{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }islinearif it iscontinuous(continuity at even a single point suffices), if it ismonotonicon any interval, if it isboundedon any interval, or if it isLebesgue measurable.
On the other hand, if no further conditions are imposed onf,{\displaystyle f,}then (assuming theaxiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 byGeorg HamelusingHamel bases. Such functions are sometimes calledHamel functions.[1]
Thefifth problemonHilbert's listis a generalisation of this equation. Functions where there exists a real numberc{\displaystyle c}such thatf(cx)≠cf(x){\displaystyle f(cx)\neq cf(x)}are known as Cauchy-Hamel functions and are used in Dehn-Hadwiger invariants which are used in the extension ofHilbert's third problemfrom 3D to higher dimensions.[2]
This equation is sometimes referred to asCauchy's additive functional equationto distinguish it from the other functional equations introduced by Cauchy in 1821, theexponential functional equationf(x+y)=f(x)f(y),{\displaystyle f(x+y)=f(x)f(y),}thelogarithmic functional equationf(xy)=f(x)+f(y),{\displaystyle f(xy)=f(x)+f(y),}and themultiplicative functional equationf(xy)=f(x)f(y).{\displaystyle f(xy)=f(x)f(y).}
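The classical continuous solutions of the four equations are cx, e^{cx}, c log x, and x^c respectively; they can be verified directly (c = 2 is an arbitrary choice here):

```python
import math

# Classical solutions of Cauchy's four functional equations, with c = 2:
add_f = lambda x: 2 * x            # f(x+y) = f(x) + f(y)
exp_f = lambda x: math.exp(2 * x)  # f(x+y) = f(x) f(y)
log_f = lambda x: 2 * math.log(x)  # f(xy)  = f(x) + f(y)
pow_f = lambda x: x ** 2           # f(xy)  = f(x) f(y)

for x, y in [(0.5, 1.5), (2.0, 3.0)]:
    assert math.isclose(add_f(x + y), add_f(x) + add_f(y))
    assert math.isclose(exp_f(x + y), exp_f(x) * exp_f(y))
    assert math.isclose(log_f(x * y), log_f(x) + log_f(y))
    assert math.isclose(pow_f(x * y), pow_f(x) * pow_f(y))
```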
A simple argument, involving only elementary algebra, demonstrates that the set of additive mapsf:V→W{\displaystyle f\colon V\to W}, whereV,W{\displaystyle V,W}are vector spaces over an extension field ofQ{\displaystyle \mathbb {Q} }, is identical to the set ofQ{\displaystyle \mathbb {Q} }-linear maps fromV{\displaystyle V}toW{\displaystyle W}.
Theorem:Letf:V→W{\displaystyle f\colon V\to W}be an additive function. Thenf{\displaystyle f}isQ{\displaystyle \mathbb {Q} }-linear.
Proof:We want to prove that any solutionf:V→W{\displaystyle f\colon V\to W}to Cauchy’s functional equation,f(x+y)=f(x)+f(y){\displaystyle f(x+y)=f(x)+f(y)}, satisfiesf(qv)=qf(v){\displaystyle f(qv)=qf(v)}for anyq∈Q{\displaystyle q\in \mathbb {Q} }andv∈V{\displaystyle v\in V}. Letv∈V{\displaystyle v\in V}.
First notef(0)=f(0+0)=f(0)+f(0){\displaystyle f(0)=f(0+0)=f(0)+f(0)}, hencef(0)=0{\displaystyle f(0)=0}, and therewith0=f(0)=f(v+(−v))=f(v)+f(−v){\displaystyle 0=f(0)=f(v+(-v))=f(v)+f(-v)}from which followsf(−v)=−f(v){\displaystyle f(-v)=-f(v)}.
Via induction,f(mv)=mf(v){\displaystyle f(mv)=mf(v)}is proved for anym∈N∪{0}{\displaystyle m\in \mathbb {N} \cup \{0\}}.
For any negative integerm∈Z{\displaystyle m\in \mathbb {Z} }we know−m∈N{\displaystyle -m\in \mathbb {N} }, thereforef(mv)=f((−m)(−v))=(−m)f(−v)=(−m)(−f(v))=mf(v){\displaystyle f(mv)=f((-m)(-v))=(-m)f(-v)=(-m)(-f(v))=mf(v)}. Thus far we have proved
Letn∈N{\displaystyle n\in \mathbb {N} }, thenf(v)=f(nn−1v)=nf(n−1v){\displaystyle f(v)=f(nn^{-1}v)=nf(n^{-1}v)}and hencef(n−1v)=n−1f(v).{\displaystyle f(n^{-1}v)=n^{-1}f(v).}
Finally, anyq∈Q{\displaystyle q\in \mathbb {Q} }has a representationq=mn{\displaystyle q={\frac {m}{n}}}withm∈Z{\displaystyle m\in \mathbb {Z} }andn∈N{\displaystyle n\in \mathbb {N} }, so, putting things together,
f(qv)=f(mnv)=f(m(n−1v))=mf(n−1v)=mn−1f(v)=qf(v).{\displaystyle f(qv)=f\left({\tfrac {m}{n}}v\right)=f\left(m\left(n^{-1}v\right)\right)=mf\left(n^{-1}v\right)=mn^{-1}f(v)=qf(v).}
We prove below that any other solutions must be highlypathologicalfunctions.
In particular, it is shown that any other solution must have the property that itsgraph{(x,f(x))|x∈R}{\displaystyle \{(x,f(x))\vert x\in \mathbb {R} \}}isdenseinR2,{\displaystyle \mathbb {R} ^{2},}that is, that any disk in the plane (however small) contains a point from the graph.
From this it is easy to prove the various conditions given in the introductory paragraph.
Lemma—Lett>0{\displaystyle t>0}. Iff{\displaystyle f}satisfies the Cauchy functional equation on the interval[0,t]{\displaystyle [0,t]}, but is not linear, then its graph is dense on the strip[0,t]×R{\displaystyle [0,t]\times \mathbb {R} }.
WLOG, scalef{\displaystyle f}on the x-axis and y-axis, so thatf{\displaystyle f}satisfies the Cauchy functional equation on[0,1]{\displaystyle [0,1]}, andf(1)=1{\displaystyle f(1)=1}.
It suffices to show that the graph off{\displaystyle f}is dense in(0,1)×R{\displaystyle (0,1)\times \mathbb {R} }, which is dense in[0,1]×R{\displaystyle [0,1]\times \mathbb {R} }.
Sincef{\displaystyle f}is not linear, we havef(a)≠a{\displaystyle f(a)\neq a}for somea∈(0,1){\displaystyle a\in (0,1)}.
Claim: The lattice defined byL:={(r1+r2a,r1+r2f(a)):r1,r2∈Q}{\displaystyle L:=\{(r_{1}+r_{2}a,r_{1}+r_{2}f(a)):r_{1},r_{2}\in \mathbb {Q} \}}is dense inR2{\displaystyle \mathbb {R} ^{2}}.
Consider the linear transformationA:R2→R2{\displaystyle A:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}defined by
A(x,y)=[1a1f(a)][xy]{\displaystyle A(x,y)={\begin{bmatrix}1&a\\1&f(a)\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}}
With this transformation, we haveL=A(Q2){\displaystyle L=A(\mathbb {Q} ^{2})}.
SincedetA=f(a)−a≠0{\displaystyle \det A=f(a)-a\neq 0}, the transformation is invertible, thus it is bicontinuous. SinceQ2{\displaystyle \mathbb {Q} ^{2}}is dense inR2{\displaystyle \mathbb {R} ^{2}}, so isL{\displaystyle L}.◻{\displaystyle \square }
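This density can be illustrated numerically: since the matrix is invertible, any target point is approximated by lattice points whose rational coordinates come from rounding the real preimage. The values a = 0.5 and f(a) = 0.7 below are arbitrary stand-ins with f(a) ≠ a:

```python
import numpy as np

# The lattice L = {(r1 + r2*a, r1 + r2*f(a)) : r1, r2 rational} is the image of
# Q^2 under an invertible linear map, hence dense in R^2.
a, fa = 0.5, 0.7                  # illustrative values with f(a) != a
A = np.array([[1.0, a],
              [1.0, fa]])
assert abs(np.linalg.det(A)) > 0  # det A = f(a) - a = 0.2

target = np.array([np.pi, -np.e])
r = np.linalg.solve(A, target)           # real preimage (r1, r2) of the target
r_approx = np.round(r * 10**6) / 10**6   # rational approximation (denominator 10^6)
lattice_pt = A @ r_approx                # a point of L near the target
assert np.linalg.norm(lattice_pt - target) < 1e-5
```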
Claim: ifr1,r2∈Q{\displaystyle r_{1},r_{2}\in \mathbb {Q} }, andr1+r2a∈(0,1){\displaystyle r_{1}+r_{2}a\in (0,1)}, thenf(r1+r2a)=r1+r2f(a){\displaystyle f(r_{1}+r_{2}a)=r_{1}+r_{2}f(a)}.
Ifr1,r2≥0{\displaystyle r_{1},r_{2}\geq 0}, then it is true by additivity. Ifr1,r2<0{\displaystyle r_{1},r_{2}<0}, thenr1+r2a<0{\displaystyle r_{1}+r_{2}a<0}, contradiction.
Ifr1≥0,r2<0{\displaystyle r_{1}\geq 0,r_{2}<0}, then sincer1+r2a>0{\displaystyle r_{1}+r_{2}a>0}, we haver1>0{\displaystyle r_{1}>0}. Letk{\displaystyle k}be a positive integer large enough such thatr1k,−r2ak∈(0,1){\displaystyle {\frac {r_{1}}{k}},{\frac {-r_{2}a}{k}}\in (0,1)}. Then we have by additivity:
f(r1k+r2ak)+f(−r2ak)=f(r1k){\displaystyle f\left({\frac {r_{1}}{k}}+{\frac {r_{2}a}{k}}\right)+f\left({\frac {-r_{2}a}{k}}\right)=f\left({\frac {r_{1}}{k}}\right)}
That is,
1kf(r1+r2a)+−r2kf(a)=r1k{\displaystyle {\frac {1}{k}}f\left(r_{1}+r_{2}a\right)+{\frac {-r_{2}}{k}}f\left(a\right)={\frac {r_{1}}{k}}}◻{\displaystyle \square }
Thus, the graph off{\displaystyle f}containsL∩((0,1)×R){\displaystyle L\cap ((0,1)\times \mathbb {R} )}, which is dense in(0,1)×R{\displaystyle (0,1)\times \mathbb {R} }.
The linearity proof given above also applies tof:αQ→R,{\displaystyle f\colon \alpha \mathbb {Q} \to \mathbb {R} ,}whereαQ{\displaystyle \alpha \mathbb {Q} }is a scaled copy of the rationals. This shows that only linear solutions are permitted when thedomainoff{\displaystyle f}is restricted to such sets. Thus, in general, we havef(αq)=f(α)q{\displaystyle f(\alpha q)=f(\alpha )q}for allα∈R{\displaystyle \alpha \in \mathbb {R} }andq∈Q.{\displaystyle q\in \mathbb {Q} .}However, as we will demonstrate below, highly pathological solutions can be found for functionsf:R→R{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }based on these linear solutions, by viewing the reals as avector spaceover thefieldof rational numbers. Note, however, that this method is nonconstructive, relying as it does on the existence of a(Hamel) basisfor any vector space, a statement proved usingZorn's lemma. (In fact, the existence of a basis for every vector space is logically equivalent to theaxiom of choice.) There exist models, such as theSolovay model, consistent with ZF +DC, in which all sets of reals are measurable; in such models all solutions are linear.[3]
To show that solutions other than the ones defined byf(x)=f(1)x{\displaystyle f(x)=f(1)x}exist, we first note that because every vector space has a basis, there is a basis forR{\displaystyle \mathbb {R} }over the fieldQ,{\displaystyle \mathbb {Q} ,}i.e. a setB⊂R{\displaystyle {\mathcal {B}}\subset \mathbb {R} }with the property that anyx∈R{\displaystyle x\in \mathbb {R} }can be expressed uniquely asx=∑i∈Iλixi,{\textstyle x=\sum _{i\in I}{\lambda _{i}x_{i}},}where{xi}i∈I{\displaystyle \{x_{i}\}_{i\in I}}is a finitesubsetofB,{\displaystyle {\mathcal {B}},}and eachλi{\displaystyle \lambda _{i}}is inQ.{\displaystyle \mathbb {Q} .}We note that because no explicit basis forR{\displaystyle \mathbb {R} }overQ{\displaystyle \mathbb {Q} }can be written down, the pathological solutions defined below likewise cannot be expressed explicitly.
As argued above, the restriction off{\displaystyle f}toxiQ{\displaystyle x_{i}\mathbb {Q} }must be a linear map for eachxi∈B.{\displaystyle x_{i}\in {\mathcal {B}}.}Moreover, becausexiq↦f(xi)q{\displaystyle x_{i}q\mapsto f(x_{i})q}forq∈Q,{\displaystyle q\in \mathbb {Q} ,}it is clear thatf(xi)xi{\displaystyle f(x_{i}) \over x_{i}}is the constant of proportionality. In other words,f:xiQ→R{\displaystyle f\colon x_{i}\mathbb {Q} \to \mathbb {R} }is the mapξ↦[f(xi)/xi]ξ.{\displaystyle \xi \mapsto [f(x_{i})/x_{i}]\xi .}Since anyx∈R{\displaystyle x\in \mathbb {R} }can be expressed as a unique (finite) linear combination of thexi{\displaystyle x_{i}}s, andf:R→R{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }is additive,f(x){\displaystyle f(x)}is well-defined for allx∈R{\displaystyle x\in \mathbb {R} }and is given by:f(x)=f(∑i∈Iλixi)=∑i∈If(xiλi)=∑i∈If(xi)λi.{\displaystyle f(x)=f{\Big (}\sum _{i\in I}\lambda _{i}x_{i}{\Big )}=\sum _{i\in I}f(x_{i}\lambda _{i})=\sum _{i\in I}f(x_{i})\lambda _{i}.}
It is easy to check thatf{\displaystyle f}is a solution to Cauchy's functional equation given a definition off{\displaystyle f}on the basis elements,f:B→R.{\displaystyle f\colon {\mathcal {B}}\to \mathbb {R} .}Moreover, it is clear that every solution is of this form. In particular, the solutions of the functional equation are linearif and only iff(xi)xi{\displaystyle f(x_{i}) \over x_{i}}is constant over allxi∈B.{\displaystyle x_{i}\in {\mathcal {B}}.}Thus, in a sense, despite the inability to exhibit a nonlinear solution, "most" (in the sense of cardinality[4]) solutions to the Cauchy functional equation are actually nonlinear and pathological.
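The basis construction above can be made concrete on a small Q-vector subspace of the reals. The sketch below (an illustration assumed here, not taken from the article) works in Q + Q·√2, representing each element exactly as a pair of rationals, and chooses f freely on the basis {1, √2}; the resulting map is additive and Q-linear, yet not of the form x ↦ f(1)x.

```python
from fractions import Fraction

# Hypothetical sketch: the two-dimensional Q-vector space Q + Q*sqrt(2),
# with an element represented exactly as a pair (p, q) meaning p + q*sqrt(2).
# Choosing f independently on the basis {1, sqrt(2)} yields an additive,
# Q-linear map.  Since f(1) = 1 but f(sqrt(2)) = 5 != sqrt(2)*1, this f is
# not of the form x -> c*x, mirroring the pathological solutions above.
C1 = Fraction(1)   # f(1), chosen freely
C2 = Fraction(5)   # f(sqrt(2)), chosen freely

def add(x, y):
    """Addition of p + q*sqrt(2) elements, coordinatewise on (p, q)."""
    return (x[0] + y[0], x[1] + y[1])

def f(x):
    """The additive map determined by its values on the basis."""
    p, q = x
    return C1 * p + C2 * q

x = (Fraction(1, 3), Fraction(2))    # 1/3 + 2*sqrt(2)
y = (Fraction(-4), Fraction(7, 5))   # -4 + (7/5)*sqrt(2)

# Additivity f(x + y) = f(x) + f(y) holds exactly in rational arithmetic.
assert f(add(x, y)) == f(x) + f(y)
```

Because `Fraction` arithmetic is exact, the additivity check is an identity rather than a floating-point approximation.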
|
https://en.wikipedia.org/wiki/Cauchy%27s_functional_equation
|
Infunctional analysisand related areas ofmathematics,locally convex topological vector spaces(LCTVS) orlocally convex spacesare examples oftopological vector spaces(TVS) that generalizenormed spaces. They can be defined astopologicalvector spaces whose topology isgeneratedby translations ofbalanced,absorbent,convex sets. Alternatively they can be defined as avector spacewith afamilyofseminorms, and a topology can be defined in terms of that family. Although in general such spaces are not necessarilynormable, the existence of a convexlocal basefor thezero vectoris strong enough for theHahn–Banach theoremto hold, yielding a sufficiently rich theory of continuouslinear functionals.
Fréchet spacesare locally convex topological vector spaces that arecompletely metrizable(with a choice of complete metric). They are generalizations ofBanach spaces, which are complete vector spaces with respect to a metric generated by anorm.
Metrizable topologies on vector spaces have been studied since their introduction inMaurice Fréchet's1902 PhD thesisSur quelques points du calcul fonctionnel(wherein the notion of ametricwas first introduced).
After the notion of a general topological space was defined byFelix Hausdorffin 1914,[1]locally convex topologies were implicitly used by some mathematicians, but it seems that before 1934 onlyJohn von Neumannhad explicitly defined theweak topologyon Hilbert spaces and thestrong operator topologyon operators on Hilbert spaces.[2][3]Finally, in 1935 von Neumann introduced the general definition of a locally convex space (called aconvex spaceby him).[4][5]
A notable example of a result which had to wait for the development and dissemination of general locally convex spaces (amongst other notions and results, likenets, theproduct topologyandTychonoff's theorem) to be proven in its full generality, is theBanach–Alaoglu theoremwhichStefan Banachfirst established in 1932 by an elementarydiagonal argumentfor the case of separable normed spaces[6](in which case theunit ball of the dual is metrizable).
SupposeX{\displaystyle X}is a vector space overK,{\displaystyle \mathbb {K} ,}asubfieldof thecomplex numbers(normallyC{\displaystyle \mathbb {C} }itself orR{\displaystyle \mathbb {R} }).
A locally convex space is defined either in terms of convex sets, or equivalently in terms of seminorms.
Atopological vector space(TVS) is calledlocally convexif it has aneighborhood basis(that is, a local base) at the origin consisting of balanced,convex sets.[7]The termlocally convex topological vector spaceis sometimes shortened tolocally convex spaceorLCTVS.
A subsetC{\displaystyle C}inX{\displaystyle X}is called
In fact, every locally convex TVS has a neighborhood basis of the origin consisting ofabsolutely convexsets (that is, disks), where this neighborhood basis can further be chosen to also consist entirely of open sets or entirely of closed sets.[8]Every TVS has a neighborhood basis at the origin consisting of balanced sets, but only a locally convex TVS has a neighborhood basis at the origin consisting of sets that are both balancedandconvex. It is possible for a TVS to havesomeneighborhoods of the origin that are convex and yet not be locally convex because it has no neighborhood basis at the origin consisting entirely of convex sets (that is, every neighborhood basis at the origin contains some non-convex set); for example, every non-locally convex TVSX{\displaystyle X}has itself (that is,X{\displaystyle X}) as a convex neighborhood of the origin.
Because translation is continuous (by definition oftopological vector space), all translations arehomeomorphisms, so every base for the neighborhoods of the origin can be translated to a base for the neighborhoods of any given vector.
AseminormonX{\displaystyle X}is a mapp:X→R{\displaystyle p:X\to \mathbb {R} }such that
Ifp{\displaystyle p}satisfies positive definiteness, which states that ifp(x)=0{\displaystyle p(x)=0}thenx=0,{\displaystyle x=0,}thenp{\displaystyle p}is anorm.
While in general seminorms need not be norms, there is an analogue of this criterion for families of seminorms, separatedness, defined below.
IfX{\displaystyle X}is a vector space andP{\displaystyle {\mathcal {P}}}is a family of seminorms onX{\displaystyle X}then a subsetQ{\displaystyle {\mathcal {Q}}}ofP{\displaystyle {\mathcal {P}}}is called abase of seminormsforP{\displaystyle {\mathcal {P}}}if for allp∈P{\displaystyle p\in {\mathcal {P}}}there exists aq∈Q{\displaystyle q\in {\mathcal {Q}}}and a realr>0{\displaystyle r>0}such thatp≤rq.{\displaystyle p\leq rq.}[9]
Definition(second version): Alocally convex spaceis defined to be a vector spaceX{\displaystyle X}along with afamilyP{\displaystyle {\mathcal {P}}}of seminorms onX.{\displaystyle X.}
Suppose thatX{\displaystyle X}is a vector space overK,{\displaystyle \mathbb {K} ,}whereK{\displaystyle \mathbb {K} }is either the real or complex numbers.
A family of seminormsP{\displaystyle {\mathcal {P}}}on the vector spaceX{\displaystyle X}induces a canonical vector space topology onX{\displaystyle X}, called theinitial topologyinduced by the seminorms, making it into atopological vector space(TVS). By definition, it is thecoarsesttopology onX{\displaystyle X}for which all maps inP{\displaystyle {\mathcal {P}}}are continuous.
It is possible for a locally convex topology on a spaceX{\displaystyle X}to be induced by a family of norms but forX{\displaystyle X}tonotbenormable(that is, to have its topology be induced by a single norm).
A basic open neighborhood of0{\displaystyle 0}inR≥0{\displaystyle \mathbb {R} _{\geq 0}}has the form[0,r){\displaystyle [0,r)}, wherer{\displaystyle r}is a positive real number. The family ofpreimagesp−1([0,r))={x∈X:p(x)<r}{\displaystyle p^{-1}\left([0,r)\right)=\{x\in X:p(x)<r\}}asp{\displaystyle p}ranges over a family of seminormsP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}ranges over the positive real numbers
is asubbasis at the originfor the topology induced byP{\displaystyle {\mathcal {P}}}. These sets are convex, as follows from properties 2 and 3 of seminorms.
Intersections of finitely many such sets are then also convex, and since the collection of all such finite intersections is abasis at the originit follows that the topology is locally convex in the sense of thefirstdefinition given above.
Recall that the topology of a TVS is translation invariant, meaning that ifS{\displaystyle S}is any subset ofX{\displaystyle X}containing the origin then for anyx∈X,{\displaystyle x\in X,}S{\displaystyle S}is a neighborhood of the origin if and only ifx+S{\displaystyle x+S}is a neighborhood ofx{\displaystyle x};
thus it suffices to define the topology at the origin.
A base of neighborhoods ofy{\displaystyle y}for this topology is obtained in the following way: for every finite subsetF{\displaystyle F}ofP{\displaystyle {\mathcal {P}}}and everyr>0,{\displaystyle r>0,}letUF,r(y):={x∈X:p(x−y)<rfor allp∈F}.{\displaystyle U_{F,r}(y):=\{x\in X:p(x-y)<r\ {\text{ for all }}p\in F\}.}
IfX{\displaystyle X}is a locally convex space and ifP{\displaystyle {\mathcal {P}}}is a collection of continuous seminorms onX{\displaystyle X}, thenP{\displaystyle {\mathcal {P}}}is called abase of continuous seminormsif it is a base of seminorms for the collection ofallcontinuous seminorms onX{\displaystyle X}.[9]Explicitly, this means that for all continuous seminormsp{\displaystyle p}onX{\displaystyle X}, there exists aq∈P{\displaystyle q\in {\mathcal {P}}}and a realr>0{\displaystyle r>0}such thatp≤rq.{\displaystyle p\leq rq.}[9]IfP{\displaystyle {\mathcal {P}}}is a base of continuous seminorms for a locally convex TVSX{\displaystyle X}then the family of all sets of the form{x∈X:q(x)<r}{\displaystyle \{x\in X:q(x)<r\}}asq{\displaystyle q}varies overP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}varies over the positive real numbers, is abaseof neighborhoods of the origin inX{\displaystyle X}(not just a subbasis, so there is no need to take finite intersections of such sets).[9][proof 1]
A familyP{\displaystyle {\mathcal {P}}}of seminorms on a vector spaceX{\displaystyle X}is calledsaturatedif for anyp{\displaystyle p}andq{\displaystyle q}inP,{\displaystyle {\mathcal {P}},}the seminorm defined byx↦max{p(x),q(x)}{\displaystyle x\mapsto \max\{p(x),q(x)\}}belongs toP.{\displaystyle {\mathcal {P}}.}
IfP{\displaystyle {\mathcal {P}}}is a saturated family of continuous seminorms that induces the topology onX{\displaystyle X}then the collection of all sets of the form{x∈X:p(x)<r}{\displaystyle \{x\in X:p(x)<r\}}asp{\displaystyle p}ranges overP{\displaystyle {\mathcal {P}}}andr{\displaystyle r}ranges over all positive real numbers, forms a neighborhood basis at the origin consisting of convex open sets;[9]This forms a basis at the origin rather than merely a subbasis so that in particular, there isnoneed to take finite intersections of such sets.[9]
The following theorem implies that ifX{\displaystyle X}is a locally convex space then the topology ofX{\displaystyle X}can be a defined by a family of continuousnormsonX{\displaystyle X}(anormis aseminorms{\displaystyle s}wheres(x)=0{\displaystyle s(x)=0}impliesx=0{\displaystyle x=0}) if and only if there existsat least onecontinuousnormonX{\displaystyle X}.[10]This is because the sum of a norm and a seminorm is a norm so if a locally convex space is defined by some familyP{\displaystyle {\mathcal {P}}}of seminorms (each of which is necessarily continuous) then the familyP+n:={p+n:p∈P}{\displaystyle {\mathcal {P}}+n:=\{p+n:p\in {\mathcal {P}}\}}of (also continuous) norms obtained by adding some given continuous normn{\displaystyle n}to each element, will necessarily be a family of norms that defines this same locally convex topology.
If there exists a continuous norm on a topological vector spaceX{\displaystyle X}thenX{\displaystyle X}is necessarily Hausdorff but the converse is not in general true (not even for locally convex spaces orFréchet spaces).
Theorem[11]—LetX{\displaystyle X}be a Fréchet space over the fieldK.{\displaystyle \mathbb {K} .}Then the following are equivalent:
Suppose that the topology of a locally convex spaceX{\displaystyle X}is induced by a familyP{\displaystyle {\mathcal {P}}}of continuous seminorms onX{\displaystyle X}.
Ifx∈X{\displaystyle x\in X}and ifx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}is anetinX{\displaystyle X}, thenx∙→x{\displaystyle x_{\bullet }\to x}inX{\displaystyle X}if and only if for allp∈P,{\displaystyle p\in {\mathcal {P}},}p(x∙−x)=(p(xi−x))i∈I→0.{\displaystyle p\left(x_{\bullet }-x\right)=\left(p\left(x_{i}-x\right)\right)_{i\in I}\to 0.}[12]Moreover, ifx∙{\displaystyle x_{\bullet }}is Cauchy inX{\displaystyle X}, then so isp(x∙)=(p(xi))i∈I{\displaystyle p\left(x_{\bullet }\right)=\left(p\left(x_{i}\right)\right)_{i\in I}}for everyp∈P.{\displaystyle p\in {\mathcal {P}}.}[12]
Although the definition in terms of a neighborhood base gives a better geometric picture, the definition in terms of seminorms is easier to work with in practice.
The equivalence of the two definitions follows from a construction known as theMinkowski functionalor Minkowski gauge.
The key feature of seminorms which ensures the convexity of theirε{\displaystyle \varepsilon }-ballsis thetriangle inequality.
For an absorbing setC{\displaystyle C}such that ifx∈C,{\displaystyle x\in C,}thentx∈C{\displaystyle tx\in C}whenever0≤t≤1,{\displaystyle 0\leq t\leq 1,}define the Minkowski functional ofC{\displaystyle C}to beμC(x)=inf{r>0:x∈rC}.{\displaystyle \mu _{C}(x)=\inf\{r>0:x\in rC\}.}
From this definition it follows thatμC{\displaystyle \mu _{C}}is a seminorm ifC{\displaystyle C}is balanced and convex (it is also absorbent by assumption). Conversely, given a family of seminorms, the sets{x:pα1(x)<ε1,…,pαn(x)<εn}{\displaystyle \left\{x:p_{\alpha _{1}}(x)<\varepsilon _{1},\ldots ,p_{\alpha _{n}}(x)<\varepsilon _{n}\right\}}form a base of convex absorbent balanced sets.
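As a concrete sketch (the ellipse is an example assumed here for illustration, not from the article), the Minkowski functional of a convex, balanced, absorbing set can be computed by bisection on r in the definition μC(x) = inf{r > 0 : x ∈ rC}, and compared with its closed form:

```python
import math

# Sketch: the Minkowski functional of the ellipse
# C = {(x, y) : (x/2)^2 + (y/3)^2 <= 1}, a convex balanced absorbing set.
def in_scaled_C(x, y, r):
    """Membership test for the scaled set rC, r > 0."""
    return (x / (2 * r)) ** 2 + (y / (3 * r)) ** 2 <= 1

def minkowski(x, y, hi=1e6, tol=1e-12):
    """mu_C(x, y) = inf{r > 0 : (x, y) in rC}, found by bisection."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2          # mid > 0 always, since hi > lo >= 0
        if in_scaled_C(x, y, mid):
            hi = mid
        else:
            lo = mid
    return hi

# For this C the functional has the closed form sqrt((x/2)^2 + (y/3)^2),
# which is exactly the (semi)norm whose unit ball is C.
x, y = 1.0, 2.0
assert abs(minkowski(x, y) - math.sqrt((x / 2) ** 2 + (y / 3) ** 2)) < 1e-6
```

The bisection only uses the membership test for rC, so the same sketch works for any convex balanced absorbing set for which membership can be decided.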
Theorem[7]—Suppose thatX{\displaystyle X}is a (real or complex) vector space and letB{\displaystyle {\mathcal {B}}}be afilter baseof subsets ofX{\displaystyle X}such that:
ThenB{\displaystyle {\mathcal {B}}}is aneighborhood baseat 0 for a locally convex TVS topology onX.{\displaystyle X.}
Theorem[7]—Suppose thatX{\displaystyle X}is a (real or complex) vector space and letL{\displaystyle {\mathcal {L}}}be a non-empty collection of convex,balanced, andabsorbingsubsets ofX.{\displaystyle X.}Then the set of all positive scalar multiples of finite intersections of sets inL{\displaystyle {\mathcal {L}}}forms a neighborhood base at the origin for a locally convex TVS topology onX.{\displaystyle X.}
Example: auxiliary normed spaces
IfW{\displaystyle W}isconvexandabsorbinginX{\displaystyle X}then thesymmetric setD:=⋂|u|=1uW{\displaystyle D:=\bigcap _{|u|=1}uW}will be convex andbalanced(also known as anabsolutely convex setor adisk) in addition to being absorbing inX.{\displaystyle X.}This guarantees that theMinkowski functionalpD:X→R{\displaystyle p_{D}:X\to \mathbb {R} }ofD{\displaystyle D}will be aseminormonX,{\displaystyle X,}thereby making(X,pD){\displaystyle \left(X,p_{D}\right)}into aseminormed spacethat carries its canonicalpseudometrizabletopology. The set of scalar multiplesrD{\displaystyle rD}asr{\displaystyle r}ranges over{12,13,14,…}{\displaystyle \left\{{\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{4}},\ldots \right\}}(or over any other set of non-zero scalars having0{\displaystyle 0}as a limit point) forms a neighborhood basis of absorbingdisksat the origin for this locally convex topology. IfX{\displaystyle X}is atopological vector spaceand if this convex absorbing subsetW{\displaystyle W}is also abounded subsetofX,{\displaystyle X,}then the absorbing diskD:=⋂|u|=1uW{\displaystyle D:=\bigcap _{|u|=1}uW}will also be bounded, in which casepD{\displaystyle p_{D}}will be anormand(X,pD){\displaystyle \left(X,p_{D}\right)}will form what is known as anauxiliary normed space. If this normed space is aBanach spacethenD{\displaystyle D}is called aBanach disk.
LetX{\displaystyle X}be a TVS.
Say that a vector subspaceM{\displaystyle M}ofX{\displaystyle X}hasthe extension propertyif any continuous linear functional onM{\displaystyle M}can be extended to a continuous linear functional onX{\displaystyle X}.[13]Say thatX{\displaystyle X}has theHahn-Banachextension property(HBEP) if every vector subspace ofX{\displaystyle X}has the extension property.[13]
TheHahn-Banach theoremguarantees that every Hausdorff locally convex space has the HBEP.
For completemetrizable TVSsthere is a converse:
Theorem[13](Kalton)—Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.
If a vector spaceX{\displaystyle X}has uncountable dimension and if we endow it with thefinest vector topologythen this is a TVS with the HBEP that is neither locally convex nor metrizable.[13]
Throughout,P{\displaystyle {\mathcal {P}}}is a family of continuous seminorms that generate the topology ofX.{\displaystyle X.}
Topological closure
IfS⊆X{\displaystyle S\subseteq X}andx∈X,{\displaystyle x\in X,}thenx∈clS{\displaystyle x\in \operatorname {cl} S}if and only if for everyr>0{\displaystyle r>0}and every finite collectionp1,…,pn∈P{\displaystyle p_{1},\ldots ,p_{n}\in {\mathcal {P}}}there exists somes∈S{\displaystyle s\in S}such that∑i=1npi(x−s)<r.{\displaystyle \sum _{i=1}^{n}p_{i}(x-s)<r.}[14]The closure of{0}{\displaystyle \{0\}}inX{\displaystyle X}is equal to⋂p∈Pp−1(0).{\displaystyle \bigcap _{p\in {\mathcal {P}}}p^{-1}(0).}[15]
Topology of Hausdorff locally convex spaces
Every Hausdorff locally convex space ishomeomorphicto a vector subspace of a product ofBanach spaces.[16]TheAnderson–Kadec theoremstates that every infinite–dimensionalseparableFréchet spaceishomeomorphicto theproduct space∏i∈NR{\textstyle \prod _{i\in \mathbb {N} }\mathbb {R} }of countably many copies ofR{\displaystyle \mathbb {R} }(this homeomorphism need not be alinear map).[17]
Algebraic properties of convex subsets
A subsetC{\displaystyle C}is convex if and only iftC+(1−t)C⊆C{\displaystyle tC+(1-t)C\subseteq C}for all0≤t≤1{\displaystyle 0\leq t\leq 1}[18]or equivalently, if and only if(s+t)C=sC+tC{\displaystyle (s+t)C=sC+tC}for all positive reals>0andt>0,{\displaystyle s>0{\text{ and }}t>0,}[19]where because(s+t)C⊆sC+tC{\displaystyle (s+t)C\subseteq sC+tC}always holds, theequals sign={\displaystyle \,=\,}can be replaced with⊇.{\displaystyle \,\supseteq .\,}IfC{\displaystyle C}is a convex set that contains the origin thenC{\displaystyle C}isstar shapedat the origin and for all non-negative reals≥0andt≥0,{\displaystyle s\geq 0{\text{ and }}t\geq 0,}(sC)∩(tC)=(min{s,t})C.{\displaystyle (sC)\cap (tC)=(\min _{}\{s,t\})C.}
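The identity (sC) ∩ (tC) = min(s, t)C can be checked in a minimal one-dimensional sketch (the interval is an example assumed here): C = [−1, 2] is convex and contains the origin, and sC = [−s, 2s] for s ≥ 0.

```python
# Sketch: for the convex set C = [-1, 2] containing 0, verify
# (sC) ∩ (tC) = min(s, t)·C on interval endpoints.

def scale(s):
    """Endpoints of sC for C = [-1, 2] and s >= 0."""
    return (-s, 2.0 * s)

def intersect(a, b):
    """Intersection of two intervals given as (lo, hi) pairs."""
    return (max(a[0], b[0]), min(a[1], b[1]))

for s, t in [(1.0, 3.0), (2.5, 0.5), (4.0, 4.0)]:
    assert intersect(scale(s), scale(t)) == scale(min(s, t))
```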
TheMinkowski sumof two convex sets is convex; furthermore, the scalar multiple of a convex set is again convex.[20]
Topological properties of convex subsets
For any subsetS{\displaystyle S}of a TVSX,{\displaystyle X,}theconvex hull(respectively,closed convex hull,balanced hull,convex balanced hull) ofS,{\displaystyle S,}denoted bycoS{\displaystyle \operatorname {co} S}(respectively,co¯S,{\displaystyle {\overline {\operatorname {co} }}S,}balS,{\displaystyle \operatorname {bal} S,}cobalS{\displaystyle \operatorname {cobal} S}), is the smallest convex (respectively, closed convex, balanced, convex balanced) subset ofX{\displaystyle X}containingS.{\displaystyle S.}
Any vector spaceX{\displaystyle X}endowed with thetrivial topology(also called theindiscrete topology) is a locally convex TVS (and of course, it is the coarsest such topology).
This topology is Hausdorff if and only ifX={0}.{\displaystyle X=\{0\}.}The indiscrete topology makes any vector space into acompletepseudometrizablelocally convex TVS.
In contrast, thediscrete topologyforms a vector topology onX{\displaystyle X}if and only ifX={0}.{\displaystyle X=\{0\}.}This follows from the fact that everytopological vector spaceis aconnected space.
IfX{\displaystyle X}is a real or complex vector space and ifP{\displaystyle {\mathcal {P}}}is the set of all seminorms onX{\displaystyle X}then the locally convex TVS topology, denoted byτlc,{\displaystyle \tau _{\operatorname {lc} },}thatP{\displaystyle {\mathcal {P}}}induces onX{\displaystyle X}is called thefinest locally convex topologyonX.{\displaystyle X.}[37]This topology may also be described as the TVS-topology onX{\displaystyle X}having as a neighborhood base at the origin the set of allabsorbingdisksinX.{\displaystyle X.}[37]Any locally convex TVS-topology onX{\displaystyle X}is necessarily a subset ofτlc.{\displaystyle \tau _{\operatorname {lc} }.}(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}isHausdorff.[15]Every linear map from(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}into another locally convex TVS is necessarily continuous.[15]In particular, every linear functional on(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is continuous and every vector subspace ofX{\displaystyle X}is closed in(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)};[15]therefore, ifX{\displaystyle X}is infinite dimensional then(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is not pseudometrizable (and thus not metrizable).[37]Moreover,τlc{\displaystyle \tau _{\operatorname {lc} }}is theonlyHausdorff locally convex topology onX{\displaystyle X}with the property that any linear map from it into any Hausdorff locally convex space is continuous.[38]The space(X,τlc){\displaystyle \left(X,\tau _{\operatorname {lc} }\right)}is abornological space.[39]
Every normed space is a Hausdorff locally convex space, and much of the theory of locally convex spaces generalizes parts of the theory of normed spaces.
The family of seminorms can be taken to be the single norm.
Every Banach space is a complete Hausdorff locally convex space, in particular, theLp{\displaystyle L^{p}}spaceswithp≥1{\displaystyle p\geq 1}are locally convex.
More generally, every Fréchet space is locally convex.
A Fréchet space can be defined as a complete locally convex space with a separated countable family of seminorms.
The spaceRω{\displaystyle \mathbb {R} ^{\omega }}ofreal valued sequenceswith the family of seminorms given bypi({xn}n)=|xi|,i∈N{\displaystyle p_{i}\left(\left\{x_{n}\right\}_{n}\right)=\left|x_{i}\right|,\qquad i\in \mathbb {N} }is locally convex. The countable family of seminorms is separated and the space is complete, so this is a Fréchet space, which is not normable. This is also thelimit topologyof the spacesRn,{\displaystyle \mathbb {R} ^{n},}embedded inRω{\displaystyle \mathbb {R} ^{\omega }}in the natural way, by extending finite sequences with infinitely many zeros.
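The Fréchet-space structure can be made concrete by folding the countable seminorm family into a single translation-invariant metric, d(x, y) = Σᵢ 2⁻ⁱ min(1, pᵢ(x − y)); this is the standard construction for metrizable locally convex spaces, sketched here under the assumption that finite lists stand for sequences padded by zeros.

```python
# Sketch: the seminorms p_i(x) = |x_i| on R^omega, combined into the usual
# translation-invariant metric d(x, y) = sum_i 2^{-i} min(1, p_i(x - y)),
# which induces the same (non-normable) product topology.

def seminorm(i, x):
    """p_i(x) = |x_i|; finite lists stand for sequences padded with 0."""
    return abs(x[i]) if i < len(x) else 0.0

def diff(x, y):
    """Coordinatewise difference of two zero-padded finite sequences."""
    n = max(len(x), len(y))
    xp = x + [0.0] * (n - len(x))
    yp = y + [0.0] * (n - len(y))
    return [a - b for a, b in zip(xp, yp)]

def metric(x, y, terms=60):
    """Truncated series for the combined metric (exact once i > len)."""
    z = diff(x, y)
    return sum(2.0 ** (-i) * min(1.0, seminorm(i, z)) for i in range(terms))

# The unit coordinate sequences e_n = (0, ..., 0, 1, 0, ...) converge to 0
# in this topology: d(e_n, 0) = 2^{-n} -> 0, even though p_n(e_n) = 1.
e = lambda n: [0.0] * n + [1.0]
dists = [metric(e(n), []) for n in range(10)]
assert all(dists[n] == 2.0 ** (-n) for n in range(10))
```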
Given any vector spaceX{\displaystyle X}and a collectionF{\displaystyle F}of linear functionals on it,X{\displaystyle X}can be made into a locally convex topological vector space by giving it the weakest topology making all linear functionals inF{\displaystyle F}continuous. This is known as theweak topologyor theinitial topologydetermined byF.{\displaystyle F.}The collectionF{\displaystyle F}may be thealgebraic dualofX{\displaystyle X}or any other collection.
The family of seminorms in this case is given bypf(x)=|f(x)|{\displaystyle p_{f}(x)=|f(x)|}for allf{\displaystyle f}inF.{\displaystyle F.}
Spaces of differentiable functions give other non-normable examples. Consider the space ofsmooth functionsf:Rn→C{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {C} }such thatsupx|xaDbf|<∞,{\displaystyle \sup _{x}\left|x^{a}D_{b}f\right|<\infty ,}wherea{\displaystyle a}andb{\displaystyle b}aremulti-indices.
The family of seminorms defined bypa,b(f)=supx|xaDbf(x)|{\displaystyle p_{a,b}(f)=\sup _{x}\left|x^{a}D_{b}f(x)\right|}is separated and countable, and the space is complete, so this metrizable space is a Fréchet space.
It is known as theSchwartz space, or the space of functions of rapid decrease, and itsdual spaceis the space oftempered distributions.
An importantfunction spacein functional analysis is the spaceD(U){\displaystyle D(U)}of smooth functions withcompact supportinU⊆Rn.{\displaystyle U\subseteq \mathbb {R} ^{n}.}A more detailed construction is needed for the topology of this space because the spaceC0∞(U){\displaystyle C_{0}^{\infty }(U)}is not complete in the uniform norm. The topology onD(U){\displaystyle D(U)}is defined as follows: for any fixedcompact setK⊆U,{\displaystyle K\subseteq U,}the spaceC0∞(K){\displaystyle C_{0}^{\infty }(K)}of functionsf∈C0∞{\displaystyle f\in C_{0}^{\infty }}withsupp(f)⊆K{\displaystyle \operatorname {supp} (f)\subseteq K}is aFréchet spacewith countable family of seminorms‖f‖m=supk≤msupx|Dkf(x)|{\displaystyle \|f\|_{m}=\sup _{k\leq m}\sup _{x}\left|D^{k}f(x)\right|}(these are actually norms, and the completion of the spaceC0∞(K){\displaystyle C_{0}^{\infty }(K)}with the‖⋅‖m{\displaystyle \|\cdot \|_{m}}norm is a Banach spaceDm(K){\displaystyle D^{m}(K)}).
Given any collection(Ka)a∈A{\displaystyle \left(K_{a}\right)_{a\in A}}of compact sets, directed by inclusion and such that their union equalsU,{\displaystyle U,}theC0∞(Ka){\displaystyle C_{0}^{\infty }\left(K_{a}\right)}form adirect system, andD(U){\displaystyle D(U)}is defined to be the limit of this system. Such a limit of Fréchet spaces is known as anLF space. More concretely,D(U){\displaystyle D(U)}is the union of all theC0∞(Ka){\displaystyle C_{0}^{\infty }\left(K_{a}\right)}with the strongestlocally convextopology which makes eachinclusion mapC0∞(Ka)↪D(U){\displaystyle C_{0}^{\infty }\left(K_{a}\right)\hookrightarrow D(U)}continuous.
This space is locally convex and complete. However, it is not metrizable, and so it is not a Fréchet space. The dual space ofD(Rn){\displaystyle D\left(\mathbb {R} ^{n}\right)}is the space ofdistributionsonRn.{\displaystyle \mathbb {R} ^{n}.}
More abstractly, given atopological spaceX,{\displaystyle X,}the spaceC(X){\displaystyle C(X)}of continuous (not necessarily bounded) functions onX{\displaystyle X}can be given the topology ofuniform convergenceon compact sets. This topology is defined by semi-normsφK(f)=max{|f(x)|:x∈K}{\displaystyle \varphi _{K}(f)=\max\{|f(x)|:x\in K\}}(asK{\displaystyle K}varies over thedirected setof all compact subsets ofX{\displaystyle X}). WhenX{\displaystyle X}is locally compact (for example, an open set inRn{\displaystyle \mathbb {R} ^{n}}) theStone–Weierstrass theoremapplies—in the case of real-valued functions, any subalgebra ofC(X){\displaystyle C(X)}that separates points and contains the constant functions (for example, the subalgebra of polynomials) isdense.
Many topological vector spaces are locally convex. Examples of spaces that lack local convexity include the following:
Both examples have the property that any continuous linear map to thereal numbersis0.{\displaystyle 0.}In particular, theirdual spaceis trivial, that is, it contains only the zero functional.
Theorem[40]—LetT:X→Y{\displaystyle T:X\to Y}be a linear operator between TVSs whereY{\displaystyle Y}is locally convex (note thatX{\displaystyle X}neednotbe locally convex). ThenT{\displaystyle T}is continuous if and only if for every continuous seminormq{\displaystyle q}onY{\displaystyle Y}, there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such thatq∘T≤p.{\displaystyle q\circ T\leq p.}
Because locally convex spaces are topological spaces as well as vector spaces, the natural functions to consider between two locally convex spaces arecontinuous linear maps.
Using the seminorms, a necessary and sufficient criterion for thecontinuityof a linear map can be given that closely resembles the more familiarboundedness conditionfound for Banach spaces.
Given locally convex spacesX{\displaystyle X}andY{\displaystyle Y}with families of seminorms(pα)α{\displaystyle \left(p_{\alpha }\right)_{\alpha }}and(qβ)β{\displaystyle \left(q_{\beta }\right)_{\beta }}respectively, a linear mapT:X→Y{\displaystyle T:X\to Y}is continuous if and only if for everyβ,{\displaystyle \beta ,}there existα1,…,αn{\displaystyle \alpha _{1},\ldots ,\alpha _{n}}andM>0{\displaystyle M>0}such that for allv∈X,{\displaystyle v\in X,}qβ(Tv)≤M(pα1(v)+⋯+pαn(v)).{\displaystyle q_{\beta }(Tv)\leq M\left(p_{\alpha _{1}}(v)+\dotsb +p_{\alpha _{n}}(v)\right).}
In other words, each seminorm of the range ofT{\displaystyle T}isboundedabove by some finite sum of seminorms in thedomain. If the family(pα)α{\displaystyle \left(p_{\alpha }\right)_{\alpha }}is a directed family, and it can always be chosen to be directed as explained above, then the formula becomes even simpler and more familiar:qβ(Tv)≤Mpα(v).{\displaystyle q_{\beta }(Tv)\leq Mp_{\alpha }(v).}
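This boundedness condition can be illustrated on the sequence space with seminorms pᵢ(x) = |xᵢ| (a hypothetical example, not from the text): a functional that depends on finitely many coordinates is dominated by finitely many seminorms, hence continuous.

```python
import random

# Sketch: on sequences with seminorms p_i(x) = |x_i|, the linear functional
# T(x) = x_0 + 2*x_3 satisfies |T(x)| <= 2*(p_0(x) + p_3(x)), the continuity
# criterion with M = 2 and the two seminorms p_0, p_3.

def p(i, x):
    """p_i(x) = |x_i|; finite lists stand for sequences padded with 0."""
    return abs(x[i]) if i < len(x) else 0.0

def T(x):
    """A linear functional depending on finitely many coordinates."""
    get = lambda i: x[i] if i < len(x) else 0.0
    return get(0) + 2.0 * get(3)

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(random.randrange(8))]
    # |x_0 + 2*x_3| <= |x_0| + 2|x_3| <= 2(|x_0| + |x_3|)
    assert abs(T(x)) <= 2.0 * (p(0, x) + p(3, x)) + 1e-12
```

By contrast, a functional such as x ↦ Σᵢ xᵢ depends on all coordinates and admits no such finite bound, so it is not continuous for this topology on the subspace of finitely supported sequences.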
Theclassof all locally convex topological vector spaces forms acategorywith continuous linear maps asmorphisms.
Theorem[40]—IfX{\displaystyle X}is a TVS (not necessarily locally convex) and iff{\displaystyle f}is a linear functional onX{\displaystyle X}, thenf{\displaystyle f}is continuous if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|f|≤p.{\displaystyle |f|\leq p.}
IfX{\displaystyle X}is a real or complex vector space,f{\displaystyle f}is a linear functional onX{\displaystyle X}, andp{\displaystyle p}is a seminorm onX{\displaystyle X}, then|f|≤p{\displaystyle |f|\leq p}if and only iff≤p.{\displaystyle f\leq p.}[41]Iff{\displaystyle f}is a non-0 linear functional on a real vector spaceX{\displaystyle X}and ifp{\displaystyle p}is a seminorm onX{\displaystyle X}, thenf≤p{\displaystyle f\leq p}if and only iff−1(1)∩{x∈X:p(x)<1}=∅.{\displaystyle f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .}[15]
Letn≥1{\displaystyle n\geq 1}be an integer,X1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}be TVSs (not necessarily locally convex), letY{\displaystyle Y}be a locally convex TVS whose topology is determined by a familyQ{\displaystyle {\mathcal {Q}}}of continuous seminorms, and letM:∏i=1nXi→Y{\displaystyle M:\prod _{i=1}^{n}X_{i}\to Y}be amultilinear operatorthat is linear in each of itsn{\displaystyle n}coordinates.
The following are equivalent:
|
https://en.wikipedia.org/wiki/Finest_locally_convex_topology
|
Inintegral geometry(otherwise called geometric probability theory),Hadwiger's theoremcharacterises thevaluationsonconvex bodiesinRn.{\displaystyle \mathbb {R} ^{n}.}It was proved byHugo Hadwiger.
LetKn{\displaystyle \mathbb {K} ^{n}}be the collection of all compact convex sets inRn.{\displaystyle \mathbb {R} ^{n}.}Avaluationis a functionv:Kn→R{\displaystyle v:\mathbb {K} ^{n}\to \mathbb {R} }such thatv(∅)=0{\displaystyle v(\varnothing )=0}and for everyS,T∈Kn{\displaystyle S,T\in \mathbb {K} ^{n}}that satisfyS∪T∈Kn,{\displaystyle S\cup T\in \mathbb {K} ^{n},}v(S)+v(T)=v(S∩T)+v(S∪T).{\displaystyle v(S)+v(T)=v(S\cap T)+v(S\cup T)~.}
A valuation is called continuous if it is continuous with respect to theHausdorff metric. A valuation is called invariant under rigid motions ifv(φ(S))=v(S){\displaystyle v(\varphi (S))=v(S)}wheneverS∈Kn{\displaystyle S\in \mathbb {K} ^{n}}andφ{\displaystyle \varphi }is either atranslationor arotationofRn.{\displaystyle \mathbb {R} ^{n}.}
The quermassintegralsWj:Kn→R{\displaystyle W_{j}:\mathbb {K} ^{n}\to \mathbb {R} }are defined via Steiner's formulaVoln(K+tB)=∑j=0n(nj)Wj(K)tj,{\displaystyle \mathrm {Vol} _{n}(K+tB)=\sum _{j=0}^{n}{\binom {n}{j}}W_{j}(K)t^{j}~,}whereB{\displaystyle B}is the Euclidean ball. For example,W0{\displaystyle W_{0}}is the volume,W1{\displaystyle W_{1}}is proportional to thesurface measure,Wn−1{\displaystyle W_{n-1}}is proportional to themean width, andWn{\displaystyle W_{n}}is the constantVoln(B).{\displaystyle \operatorname {Vol} _{n}(B).}
Wj{\displaystyle W_{j}}is a valuation which ishomogeneousof degreen−j,{\displaystyle n-j,}that is,Wj(tK)=tn−jWj(K),t≥0.{\displaystyle W_{j}(tK)=t^{n-j}W_{j}(K)~,\quad t\geq 0~.}
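Steiner's formula can be verified directly in the plane (the unit-square example is an assumption for illustration): growing K = [0,1]² by t adds four side strips of total area 4t and four quarter-disk corners of total area πt², which matches the formula with W₀ = 1 (the area), W₁ = 2 (half the perimeter), and W₂ = π = Vol₂(B).

```python
import math

# Sketch: Steiner's formula Vol_2(K + tB) = sum_j binom(2, j) W_j(K) t^j
# for the unit square K = [0, 1]^2 in the plane.

def area_parallel_body(t):
    """Exact area of [0,1]^2 + t*B: square + 4 side strips + 4 quarter-disks."""
    return 1.0 + 4.0 * t + math.pi * t * t

# Quermassintegrals of the unit square (area, half-perimeter, Vol_2(B)).
W0, W1, W2 = 1.0, 2.0, math.pi

for t in (0.0, 0.5, 1.0, 2.5):
    steiner = W0 + 2.0 * W1 * t + W2 * t * t   # binom(2,1) = 2
    assert abs(area_parallel_body(t) - steiner) < 1e-12
```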
Any continuous valuationv{\displaystyle v}onKn{\displaystyle \mathbb {K} ^{n}}that is invariant under rigid motions can be represented asv(S)=∑j=0ncjWj(S){\displaystyle v(S)=\sum _{j=0}^{n}c_{j}W_{j}(S)~}for some constantsc0,…,cn.{\displaystyle c_{0},\ldots ,c_{n}.}
Any continuous valuationv{\displaystyle v}onKn{\displaystyle \mathbb {K} ^{n}}that is invariant under rigid motions and homogeneous of degreej{\displaystyle j}is a multiple ofWn−j.{\displaystyle W_{n-j}.}
An account and a proof of Hadwiger's theorem may be found in Klain and Rota'sIntroduction to Geometric Probability; an elementary and self-contained proof was given by Beifang Chen.
|
https://en.wikipedia.org/wiki/Hadwiger%27s_theorem
|