In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate.[1] The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.[2][3][4]

If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.[5]

From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior distribution that is uniform in the region of interest. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.

We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector $\theta = [\theta_1, \theta_2, \ldots, \theta_k]^{\mathsf{T}}$ so that this distribution falls within a parametric family $\{f(\cdot\,;\theta) \mid \theta \in \Theta\}$, where $\Theta$ is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ gives a real-valued function,

$$\mathcal{L}_n(\theta) = \mathcal{L}_n(\theta;\mathbf{y}) = f_n(\mathbf{y};\theta),$$

which is called the likelihood function. For independent random variables, $f_n(\mathbf{y};\theta)$ will be the product of univariate density functions:

$$f_n(\mathbf{y};\theta) = \prod_{k=1}^{n} f_k^{\mathsf{univar}}(y_k;\theta).$$

The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space,[6] that is,

$$\hat{\theta} = \underset{\theta\in\Theta}{\operatorname{arg\,max}}\ \mathcal{L}_n(\theta;\mathbf{y}).$$

Intuitively, this selects the parameter values that make the observed data most probable. The specific value $\hat{\theta} = \hat{\theta}_n(\mathbf{y}) \in \Theta$ that maximizes the likelihood function $\mathcal{L}_n$ is called the maximum likelihood estimate. Further, if the function $\hat{\theta}_n : \mathbb{R}^n \to \Theta$ so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument.
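As an illustration of these definitions, here is a minimal sketch (not from the article) of maximizing a likelihood numerically in Python; the sample, the exponential model $f(y;\theta)=\theta e^{-\theta y}$, and the use of `scipy.optimize.minimize_scalar` are assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical observed sample, assumed i.i.d. exponential with rate theta.
y = np.array([0.8, 1.7, 0.4, 2.9, 1.1])

def neg_log_likelihood(theta):
    # l(theta; y) = sum of ln f(y_k; theta) with f(y; theta) = theta * exp(-theta * y);
    # we minimize the negative log-likelihood.
    return -np.sum(np.log(theta) - theta * y)

# Maximize L_n(theta) over the parameter space (0, infinity).
result = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 100.0), method="bounded")
theta_hat = result.x                 # numerical arg max of the likelihood
print(theta_hat, 1.0 / y.mean())     # for this model the MLE is 1 / sample mean
```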
A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space $\Theta$ that is compact.[7] For an open $\Theta$ the likelihood function may increase without ever reaching a supremum value.

In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:

$$\ell(\theta;\mathbf{y}) = \ln \mathcal{L}_n(\theta;\mathbf{y}).$$

Since the logarithm is a monotonic function, the maximum of $\ell(\theta;\mathbf{y})$ occurs at the same value of $\theta$ as does the maximum of $\mathcal{L}_n$.[8] If $\ell(\theta;\mathbf{y})$ is differentiable in $\Theta$, necessary conditions for the occurrence of a maximum (or a minimum) are

$$\frac{\partial \ell}{\partial \theta_1} = 0, \quad \frac{\partial \ell}{\partial \theta_2} = 0, \quad \ldots, \quad \frac{\partial \ell}{\partial \theta_k} = 0,$$

known as the likelihood equations. For some models, these equations can be explicitly solved for $\hat\theta$, but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations.[9] Whether the identified root $\hat\theta$ of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix

$$\mathbf{H}\bigl(\hat\theta\bigr) = \begin{bmatrix} \left.\dfrac{\partial^2 \ell}{\partial \theta_1^2}\right|_{\theta=\hat\theta} & \left.\dfrac{\partial^2 \ell}{\partial \theta_1\,\partial \theta_2}\right|_{\theta=\hat\theta} & \dots & \left.\dfrac{\partial^2 \ell}{\partial \theta_1\,\partial \theta_k}\right|_{\theta=\hat\theta} \\ \left.\dfrac{\partial^2 \ell}{\partial \theta_2\,\partial \theta_1}\right|_{\theta=\hat\theta} & \left.\dfrac{\partial^2 \ell}{\partial \theta_2^2}\right|_{\theta=\hat\theta} & \dots & \left.\dfrac{\partial^2 \ell}{\partial \theta_2\,\partial \theta_k}\right|_{\theta=\hat\theta} \\ \vdots & \vdots & \ddots & \vdots \\ \left.\dfrac{\partial^2 \ell}{\partial \theta_k\,\partial \theta_1}\right|_{\theta=\hat\theta} & \left.\dfrac{\partial^2 \ell}{\partial \theta_k\,\partial \theta_2}\right|_{\theta=\hat\theta} & \dots & \left.\dfrac{\partial^2 \ell}{\partial \theta_k^2}\right|_{\theta=\hat\theta} \end{bmatrix},$$

is negative semi-definite at $\hat\theta$, as this indicates local concavity. Conveniently, most common probability distributions (in particular the exponential family) are logarithmically concave.[10][11]
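A small sketch (an assumption of this edit, not part of the article) of the likelihood equations in practice: for a Bernoulli sample, SymPy can solve $\partial\ell/\partial p = 0$ symbolically and confirm that the second derivative is negative at the root, so the root is a maximum.

```python
import sympy as sp

p, n, s = sp.symbols("p n s", positive=True)   # s successes in n trials
ell = s * sp.log(p) + (n - s) * sp.log(1 - p)  # Bernoulli log-likelihood

root = sp.solve(sp.diff(ell, p), p)            # likelihood equation dl/dp = 0
print(root)                                    # [s/n]

# 1x1 "Hessian" at the root: negative for 0 < s < n, hence a maximum.
hess = sp.diff(ell, p, 2).subs(p, root[0])
print(sp.simplify(hess))
```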
While the domain of the likelihood function, the parameter space, is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as

$$\Theta = \left\{\theta : \theta \in \mathbb{R}^k,\ h(\theta) = 0\right\},$$

where $h(\theta) = [h_1(\theta), h_2(\theta), \ldots, h_r(\theta)]$ is a vector-valued function mapping $\mathbb{R}^k$ into $\mathbb{R}^r$. Estimating the true parameter $\theta$ belonging to $\Theta$ then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint $h(\theta) = 0$.

Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is, "filling out" the restrictions $h_1, h_2, \ldots, h_r$ to a set $h_1, h_2, \ldots, h_r, h_{r+1}, \ldots, h_k$ in such a way that $h^\ast = [h_1, h_2, \ldots, h_k]$ is a one-to-one function from $\mathbb{R}^k$ to itself, and reparameterize the likelihood function by setting $\phi_i = h_i(\theta_1, \theta_2, \ldots, \theta_k)$.[12] Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13] For instance, in a multivariate normal distribution the covariance matrix $\Sigma$ must be positive-definite; this restriction can be imposed by replacing $\Sigma = \Gamma^{\mathsf{T}}\Gamma$, where $\Gamma$ is a real upper triangular matrix and $\Gamma^{\mathsf{T}}$ is its transpose.[14]

In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations

$$\frac{\partial \ell}{\partial \theta} - \frac{\partial h(\theta)^{\mathsf{T}}}{\partial \theta}\lambda = 0 \quad\text{and}\quad h(\theta) = 0,$$

where $\lambda = [\lambda_1, \lambda_2, \ldots, \lambda_r]^{\mathsf{T}}$ is a column vector of Lagrange multipliers and $\frac{\partial h(\theta)^{\mathsf{T}}}{\partial \theta}$ is the $k \times r$ Jacobian matrix of partial derivatives.[12] Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15] This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test. Nonparametric maximum likelihood estimation can be performed using the empirical likelihood.
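As a hedged illustration of the substitution approach mentioned above (the data, the zero-mean model, and the optimizer choice are assumptions of this edit), the covariance of a bivariate normal can be kept positive semi-definite by construction by optimizing over $\Gamma$ with $\Sigma = \Gamma^{\mathsf{T}}\Gamma$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.6], [0.6, 1.0]], size=500)

def neg_log_likelihood(params):
    # Parameterize Sigma = Gamma^T Gamma with Gamma upper triangular,
    # so Sigma is positive semi-definite by construction.
    g11, g12, g22 = params
    gamma = np.array([[g11, g12], [0.0, g22]])
    sigma = gamma.T @ gamma
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:                      # guard against a degenerate Gamma
        return np.inf
    quad = np.einsum("ij,jk,ik->", x, np.linalg.inv(sigma), x)
    return 0.5 * (x.shape[0] * logdet + quad)   # constants dropped; mean assumed zero

res = minimize(neg_log_likelihood, x0=[1.0, 0.0, 1.0], method="Nelder-Mead")
g11, g12, g22 = res.x
gamma = np.array([[g11, g12], [0.0, g22]])
print(gamma.T @ gamma)
print(x.T @ x / len(x))   # the known closed-form MLE for zero-mean data
```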
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of $\theta$, the objective function $\hat\ell(\theta;x)$. If the data are independent and identically distributed, then we have

$$\hat\ell(\theta;x) = \frac{1}{n}\sum_{i=1}^n \ln f(x_i \mid \theta),$$

this being the sample analogue of the expected log-likelihood $\ell(\theta) = \mathbb{E}[\ln f(x_i \mid \theta)]$, where this expectation is taken with respect to the true density.

Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter value.[16] However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators have the properties described below.

Under the conditions outlined below, the maximum likelihood estimator is consistent. Consistency means that if the data were generated by $f(\cdot\,;\theta_0)$ and we have a sufficiently large number of observations $n$, then it is possible to find the value of $\theta_0$ with arbitrary precision. In mathematical terms this means that as $n$ goes to infinity the estimator $\hat\theta$ converges in probability to its true value:

$$\hat\theta_{\mathrm{mle}} \xrightarrow{\ \text{p}\ } \theta_0.$$

Under slightly stronger conditions, the estimator converges almost surely (or strongly):

$$\hat\theta_{\mathrm{mle}} \xrightarrow{\ \text{a.s.}\ } \theta_0.$$

In practical applications, data is never generated by $f(\cdot\,;\theta_0)$. Rather, $f(\cdot\,;\theta_0)$ is a model, often in idealized form, of the process generating the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.

To establish consistency, the following conditions are sufficient.[17]

Identification:

$$\theta \neq \theta_0 \quad\Leftrightarrow\quad f(\cdot \mid \theta) \neq f(\cdot \mid \theta_0).$$

In other words, different parameter values $\theta$ correspond to different distributions within the model. If this condition did not hold, there would be some value $\theta_1$ such that $\theta_0$ and $\theta_1$ generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data; these parameters would have been observationally equivalent. The identification condition establishes that the log-likelihood has a unique global maximum.

Compactness: compactness implies that the likelihood cannot approach its maximum value arbitrarily closely at some other point. Compactness is only a sufficient condition and not a necessary condition; it can be replaced by some other conditions, such as almost-sure continuity of the log-density in $\theta$:

$$\mathbb{P}\bigl[\ln f(x \mid \theta) \in C^0(\Theta)\bigr] = 1.$$

The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, uniform convergence in probability can be checked by showing that the sequence $\hat\ell(\theta \mid x)$ is stochastically equicontinuous.
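A quick empirical sketch of consistency (the model, true parameter, and seed are assumptions of this edit): the Bernoulli MLE $\hat p = s/n$ tightens around the true $p_0$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
p0 = 0.3                                  # assumed "true" parameter theta_0
for n in (10, 100, 10_000, 1_000_000):
    x = rng.binomial(1, p0, size=n)
    print(n, x.mean())                    # the MLE s/n converges to p0 in probability
```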
If one wants to demonstrate that the ML estimator $\hat\theta$ converges to $\theta_0$ almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:

$$\sup_{\theta\in\Theta} \left\| \hat\ell(\theta \mid x) - \ell(\theta) \right\| \xrightarrow{\ \text{a.s.}\ } 0.$$

Additionally, if (as assumed above) the data were generated by $f(\cdot\,;\theta_0)$, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically,[18]

$$\sqrt{n}\left(\hat\theta_{\mathrm{mle}} - \theta_0\right) \xrightarrow{\ d\ } \mathcal{N}\left(0,\ I^{-1}\right),$$

where $I$ is the Fisher information matrix.

The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators as the corresponding components of the MLE of the complete parameter. Consistent with this, if $\hat\theta$ is the MLE for $\theta$, and if $g(\theta)$ is any transformation of $\theta$, then the MLE for $\alpha = g(\theta)$ is by definition[19]

$$\hat\alpha = g\bigl(\hat\theta\bigr).$$

It maximizes the so-called profile likelihood:

$$\bar{L}(\alpha) = \sup_{\theta:\,\alpha = g(\theta)} L(\theta).$$

The MLE is also equivariant with respect to certain transformations of the data. If $y = g(x)$ where $g$ is one-to-one and does not depend on the parameters to be estimated, then the density functions satisfy

$$f_Y(y) = f_X\bigl(g^{-1}(y)\bigr)\,\bigl|(g^{-1}(y))'\bigr|,$$

and hence the likelihood functions for $X$ and $Y$ differ only by a factor that does not depend on the model parameters. For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case, if $X \sim \mathcal{N}(0,1)$, then $Y = g(X) = e^X$ follows a log-normal distribution; the density of $Y$ follows with $f_X$ standard normal and $g^{-1}(y) = \log(y)$, $|(g^{-1}(y))'| = \frac{1}{y}$ for $y > 0$.
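The equivariance point can be checked numerically. This is a hedged sketch (the data and the particular SciPy calls are assumptions of this edit) comparing the normal MLE on $\log y$ with SciPy's direct log-normal fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.5, sigma=1.2, size=2000)

# Normal MLE on the log of the data (mu_hat = mean, sigma_hat^2 = (1/n) sum (x - xbar)^2).
mu_hat = np.log(y).mean()
sigma_hat = np.log(y).std()            # ddof=0: the MLE, not the unbiased estimator

# Direct log-normal fit; shape plays the role of sigma, scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(y, floc=0)
print(mu_hat, np.log(scale))           # agree: equivariance of the MLE
print(sigma_hat, shape)
```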
As assumed above, if the data were generated by $f(\cdot\,;\theta_0)$, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is $\sqrt{n}$-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically,[18]

$$\sqrt{n}\left(\hat\theta_{\text{mle}} - \theta_0\right) \xrightarrow{\ d\ } \mathcal{N}\left(0,\ \mathcal{I}^{-1}\right),$$

where $\mathcal{I}$ is the Fisher information matrix:

$$\mathcal{I}_{jk} = \mathbb{E}\left[-\frac{\partial^2 \ln f_{\theta_0}(X_t)}{\partial\theta_j\,\partial\theta_k}\right].$$

In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order $1/\sqrt{n}$. However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that $\hat\theta_{\text{mle}}$ has bias of order $1/n$. This bias is equal to (componentwise)[20]

$$b_h \equiv \mathbb{E}\left[\left(\hat\theta_{\mathrm{mle}} - \theta_0\right)_h\right] = \frac{1}{n}\sum_{i,j,k=1}^m \mathcal{I}^{hi}\,\mathcal{I}^{jk}\left(\tfrac{1}{2}K_{ijk} + J_{j,ik}\right),$$

where $\mathcal{I}^{jk}$ (with superscripts) denotes the $(j,k)$-th component of the inverse Fisher information matrix $\mathcal{I}^{-1}$, and

$$\tfrac{1}{2}K_{ijk} + J_{j,ik} = \mathbb{E}\left[\frac{1}{2}\frac{\partial^3 \ln f_{\theta_0}(X_t)}{\partial\theta_i\,\partial\theta_j\,\partial\theta_k} + \frac{\partial \ln f_{\theta_0}(X_t)}{\partial\theta_j}\,\frac{\partial^2 \ln f_{\theta_0}(X_t)}{\partial\theta_i\,\partial\theta_k}\right].$$

Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator and correct for that bias by subtracting it:

$$\hat\theta^{*}_{\text{mle}} = \hat\theta_{\text{mle}} - \hat{b}.$$

This estimator is unbiased up to the terms of order $1/n$, and is called the bias-corrected maximum likelihood estimator. The bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to terms of order $1/n^2$. It is possible to continue this process, that is, to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is not third-order efficient.[21]

A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter $\theta$ that maximizes the probability of $\theta$ given the data, given by Bayes' theorem:

$$\mathbb{P}(\theta \mid x_1, x_2, \ldots, x_n) = \frac{f(x_1, x_2, \ldots, x_n \mid \theta)\,\mathbb{P}(\theta)}{\mathbb{P}(x_1, x_2, \ldots, x_n)},$$

where $\mathbb{P}(\theta)$ is the prior distribution for the parameter $\theta$ and $\mathbb{P}(x_1, x_2, \ldots, x_n)$ is the probability of the data averaged over all parameters.
Since the denominator is independent of $\theta$, the Bayesian estimator is obtained by maximizing $f(x_1, x_2, \ldots, x_n \mid \theta)\,\mathbb{P}(\theta)$ with respect to $\theta$. If we further assume that the prior $\mathbb{P}(\theta)$ is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function $f(x_1, x_2, \ldots, x_n \mid \theta)$. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution $\mathbb{P}(\theta)$.

In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation. Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier is minimizing the error over the whole distribution.[22] Thus, the Bayes decision rule is stated as "decide $w_1$ if $\mathbb{P}(w_1 \mid x) > \mathbb{P}(w_2 \mid x)$; otherwise decide $w_2$," where $w_1, w_2$ are predictions of different classes. From a perspective of minimizing error, it can also be stated as

$$w = \underset{w}{\operatorname{arg\,min}} \int_{-\infty}^{\infty} \mathbb{P}(\text{error} \mid x)\,\mathbb{P}(x)\,\mathrm{d}x,$$

where $\mathbb{P}(\text{error} \mid x) = \mathbb{P}(w_1 \mid x)$ if we decide $w_2$ and $\mathbb{P}(\text{error} \mid x) = \mathbb{P}(w_2 \mid x)$ if we decide $w_1$.

By applying Bayes' theorem, $\mathbb{P}(w_i \mid x) = \frac{\mathbb{P}(x \mid w_i)\,\mathbb{P}(w_i)}{\mathbb{P}(x)}$, and if we further assume the zero-or-one loss function, which assigns the same loss to all errors, the Bayes decision rule can be reformulated as

$$h_{\text{Bayes}} = \underset{w}{\operatorname{arg\,max}}\,\bigl[\mathbb{P}(x \mid w)\,\mathbb{P}(w)\bigr],$$

where $h_{\text{Bayes}}$ is the prediction and $\mathbb{P}(w)$ is the prior probability.
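A tiny numerical sketch of the MAP-MLE coincidence (the sample, the grid, and the $\mathcal{N}(\theta,1)$ model are assumptions of this edit): on a grid of $\theta$ values, a flat prior leaves the arg max of the posterior equal to the arg max of the likelihood.

```python
import numpy as np
from scipy import stats

x = np.array([1.2, 0.7, 2.3, 1.9, 0.4])          # assumed sample
theta_grid = np.linspace(0.01, 5.0, 1000)        # candidate means

# Log-likelihood of a N(theta, 1) model on a grid of theta.
log_lik = np.array([stats.norm.logpdf(x, mu, 1).sum() for mu in theta_grid])

flat_prior = np.zeros_like(theta_grid)           # log of a uniform prior
log_post = log_lik + flat_prior                  # posterior proportional to likelihood * prior

print(theta_grid[np.argmax(log_lik)], theta_grid[np.argmax(log_post)])  # identical
```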
Finding $\hat\theta$ that maximizes the likelihood is asymptotically equivalent to finding the $\hat\theta$ that defines a probability distribution $Q_{\hat\theta}$ that has a minimal distance, in terms of Kullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated by $P_{\theta_0}$).[23] In an ideal world, P and Q are the same (and the only thing unknown is $\theta$, which defines P), but even if they are not and the model we use is misspecified, the MLE will still give us the "closest" distribution (within the restriction of a model Q that depends on $\hat\theta$) to the real distribution $P_{\theta_0}$.[24] For simplicity of notation, let us assume that P = Q.

Let there be $n$ i.i.d. data samples $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ from some probability distribution $y \sim P_{\theta_0}$ that we try to estimate by finding the $\hat\theta$ that will maximize the likelihood using $P_\theta$. Then:

$$\begin{aligned}
\hat\theta &= \underset{\theta}{\operatorname{arg\,max}}\, L_{P_\theta}(\mathbf{y}) = \underset{\theta}{\operatorname{arg\,max}}\, P_\theta(\mathbf{y}) = \underset{\theta}{\operatorname{arg\,max}}\, P(\mathbf{y} \mid \theta) \\
&= \underset{\theta}{\operatorname{arg\,max}}\, \prod_{i=1}^n P(y_i \mid \theta) = \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \log P(y_i \mid \theta) \\
&= \underset{\theta}{\operatorname{arg\,max}}\, \left(\sum_{i=1}^n \log P(y_i \mid \theta) - \sum_{i=1}^n \log P(y_i \mid \theta_0)\right) = \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \bigl(\log P(y_i \mid \theta) - \log P(y_i \mid \theta_0)\bigr) \\
&= \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \log\frac{P(y_i \mid \theta)}{P(y_i \mid \theta_0)} = \underset{\theta}{\operatorname{arg\,min}}\, \sum_{i=1}^n \log\frac{P(y_i \mid \theta_0)}{P(y_i \mid \theta)} = \underset{\theta}{\operatorname{arg\,min}}\, \frac{1}{n}\sum_{i=1}^n \log\frac{P(y_i \mid \theta_0)}{P(y_i \mid \theta)} \\
&= \underset{\theta}{\operatorname{arg\,min}}\, \frac{1}{n}\sum_{i=1}^n h_\theta(y_i) \quad \underset{n\to\infty}{\longrightarrow} \quad \underset{\theta}{\operatorname{arg\,min}}\, \mathbb{E}\bigl[h_\theta(y)\bigr] \\
&= \underset{\theta}{\operatorname{arg\,min}}\, \int P_{\theta_0}(y)\,h_\theta(y)\,\mathrm{d}y = \underset{\theta}{\operatorname{arg\,min}}\, \int P_{\theta_0}(y)\,\log\frac{P(y \mid \theta_0)}{P(y \mid \theta)}\,\mathrm{d}y \\
&= \underset{\theta}{\operatorname{arg\,min}}\, D_{\text{KL}}\bigl(P_{\theta_0} \parallel P_\theta\bigr),
\end{aligned}$$

where $h_\theta(x) = \log\frac{P(x \mid \theta_0)}{P(x \mid \theta)}$. Using $h$ helps see how we are using the law of large numbers to move from the average of $h(x)$ to its expectation, via the law of the unconscious statistician. The first several transitions rely on laws of logarithms and on the fact that the $\hat\theta$ that maximizes some function is also the one that maximizes any monotonic transformation of that function (e.g., one obtained by adding or multiplying by a constant). Since cross entropy is just Shannon's entropy plus KL divergence, and since the entropy of $P_{\theta_0}$ is constant, the MLE also asymptotically minimizes cross entropy.[25]
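A hedged numerical sketch of the equivalence above (the Gaussian model, sample, and grid are assumptions of this edit): for $\mathcal{N}(\theta_0, 1)$ data, the average negative log-likelihood over a grid of $\theta$ and the closed-form KL divergence $D_{\text{KL}}(P_{\theta_0} \parallel P_\theta) = (\theta_0 - \theta)^2/2$ are minimized at (approximately) the same $\theta$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta0 = 1.5
y = rng.normal(theta0, 1.0, size=50_000)
grid = np.linspace(0.0, 3.0, 601)

# Average NLL: the sample analogue of E[-log P(y | theta)].
nll = np.array([-stats.norm.logpdf(y, t, 1).mean() for t in grid])

# Closed-form KL between N(theta0, 1) and N(theta, 1).
kl = (theta0 - grid) ** 2 / 2

print(grid[np.argmin(nll)], grid[np.argmin(kl)])   # both approximately theta0
```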
Consider a case where $n$ tickets numbered from 1 to $n$ are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If $n$ is unknown, then the maximum likelihood estimator $\hat{n}$ of $n$ is the number $m$ on the drawn ticket. (The likelihood is 0 for $n < m$, $1/n$ for $n \geq m$, and this is greatest when $n = m$. Note that the maximum likelihood estimate of $n$ occurs at the lower extreme of possible values $\{m, m+1, \ldots\}$, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number $m$ on the drawn ticket, and therefore the expected value of $\hat{n}$, is $(n+1)/2$. As a result, with a sample size of 1, the maximum likelihood estimator for $n$ will systematically underestimate $n$ by $(n-1)/2$.

Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' $p$. The goal then becomes to determine $p$. Suppose the coin is tossed 80 times: i.e. the sample might be something like $x_1 = \mathrm{H}$, $x_2 = \mathrm{T}$, ..., $x_{80} = \mathrm{T}$, and the count of the number of heads "H" is observed. The probability of tossing tails is $1 - p$ (so here $p$ is $\theta$ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability $p = \tfrac{1}{3}$, one which gives heads with probability $p = \tfrac{1}{2}$, and another which gives heads with probability $p = \tfrac{2}{3}$. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number of successes equal to 49, but for different values of $p$ (the "probability of success"), the likelihood function (defined below) takes one of three values:

$$\begin{aligned}
\mathbb{P}\bigl[\mathrm{H}=49 \mid p=\tfrac{1}{3}\bigr] &= \binom{80}{49}\bigl(\tfrac{1}{3}\bigr)^{49}\bigl(1-\tfrac{1}{3}\bigr)^{31} \approx 0.000, \\
\mathbb{P}\bigl[\mathrm{H}=49 \mid p=\tfrac{1}{2}\bigr] &= \binom{80}{49}\bigl(\tfrac{1}{2}\bigr)^{49}\bigl(1-\tfrac{1}{2}\bigr)^{31} \approx 0.012, \\
\mathbb{P}\bigl[\mathrm{H}=49 \mid p=\tfrac{2}{3}\bigr] &= \binom{80}{49}\bigl(\tfrac{2}{3}\bigr)^{49}\bigl(1-\tfrac{2}{3}\bigr)^{31} \approx 0.054.
\end{aligned}$$

The likelihood is maximized when $p = \tfrac{2}{3}$, and so this is the maximum likelihood estimate for $p$.

Now suppose that there was only one coin but its $p$ could have been any value $0 \leq p \leq 1$. The likelihood function to be maximized is

$$L(p) = f_D(\mathrm{H}=49 \mid p) = \binom{80}{49} p^{49} (1-p)^{31},$$

and the maximization is over all possible values $0 \leq p \leq 1$. One way to maximize this function is by differentiating with respect to $p$ and setting to zero:

$$\begin{aligned}
0 &= \frac{\partial}{\partial p}\left(\binom{80}{49} p^{49}(1-p)^{31}\right), \\
0 &= 49 p^{48}(1-p)^{31} - 31 p^{49}(1-p)^{30} \\
&= p^{48}(1-p)^{30}\bigl[49(1-p) - 31p\bigr] \\
&= p^{48}(1-p)^{30}\bigl[49 - 80p\bigr].
\end{aligned}$$

This is a product of three terms. The first term is 0 when $p = 0$. The second is 0 when $p = 1$. The third is zero when $p = \tfrac{49}{80}$. The solution that maximizes the likelihood is clearly $p = \tfrac{49}{80}$ (since $p = 0$ and $p = 1$ result in a likelihood of 0). Thus the maximum likelihood estimator for $p$ is $\tfrac{49}{80}$.
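The coin computation above can be reproduced directly; a minimal sketch, where the only assumption beyond the article's numbers is the use of SciPy's binomial pmf:

```python
from scipy.stats import binom

n_trials, heads = 80, 49
for p in (1/3, 1/2, 2/3):
    print(p, binom.pmf(heads, n_trials, p))   # approx. 0.000, 0.012, 0.054

# Continuous case: the analytic arg max derived above is s/n.
print(heads / n_trials)                        # 0.6125 = 49/80
```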
This result is easily generalized by substituting a letter such as $s$ in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as $n$ in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields $s/n$, which is the maximum likelihood estimator for any sequence of $n$ Bernoulli trials resulting in $s$ 'successes'.

For the normal distribution $\mathcal{N}(\mu, \sigma^2)$, which has probability density function

$$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),$$

the corresponding probability density function for a sample of $n$ independent identically distributed normal random variables (the likelihood) is

$$f(x_1, \ldots, x_n \mid \mu, \sigma^2) = \prod_{i=1}^n f(x_i \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left(-\frac{\sum_{i=1}^n (x_i-\mu)^2}{2\sigma^2}\right).$$

This family of distributions has two parameters: $\theta = (\mu, \sigma)$; so we maximize the likelihood, $\mathcal{L}(\mu, \sigma^2) = f(x_1, \ldots, x_n \mid \mu, \sigma^2)$, over both parameters simultaneously, or if possible, individually.

Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:

$$\log\bigl(\mathcal{L}(\mu, \sigma^2)\bigr) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2.$$

(Note: the log-likelihood is closely related to information entropy and Fisher information.)

We now compute the derivatives of this log-likelihood as follows:

$$0 = \frac{\partial}{\partial\mu}\log\bigl(\mathcal{L}(\mu, \sigma^2)\bigr) = 0 - \frac{-2n(\bar{x} - \mu)}{2\sigma^2},$$

where $\bar{x}$ is the sample mean. This is solved by

$$\hat\mu = \bar{x} = \sum_{i=1}^n \frac{x_i}{n}.$$

This is indeed the maximum of the function, since it is the only turning point in $\mu$ and the second derivative is strictly less than zero. Its expected value is equal to the parameter $\mu$ of the given distribution,

$$\mathbb{E}\bigl[\hat\mu\bigr] = \mu,$$

which means that the maximum likelihood estimator $\hat\mu$ is unbiased.
Similarly we differentiate the log-likelihood with respect to $\sigma$ and equate to zero:

$$0 = \frac{\partial}{\partial\sigma}\log\bigl(\mathcal{L}(\mu, \sigma^2)\bigr) = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^n (x_i - \mu)^2,$$

which is solved by

$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2.$$

Inserting the estimate $\mu = \hat\mu$ we obtain

$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^n x_i^2 - \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n x_i x_j.$$

To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical errors) $\delta_i \equiv \mu - x_i$. Expressing the estimate in these variables yields

$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (\mu - \delta_i)^2 - \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n (\mu - \delta_i)(\mu - \delta_j).$$

Simplifying the expression above, utilizing the facts that $\mathbb{E}[\delta_i] = 0$ and $\mathbb{E}[\delta_i^2] = \sigma^2$, allows us to obtain

$$\mathbb{E}\bigl[\hat\sigma^2\bigr] = \frac{n-1}{n}\sigma^2.$$

This means that the estimator $\hat\sigma^2$ is biased for $\sigma^2$. It can also be shown that $\hat\sigma$ is biased for $\sigma$, but that both $\hat\sigma^2$ and $\hat\sigma$ are consistent.

Formally we say that the maximum likelihood estimator for $\theta = (\mu, \sigma^2)$ is

$$\hat\theta = \left(\hat\mu, \hat\sigma^2\right).$$

In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.

The normal log-likelihood at its maximum takes a particularly simple form:

$$\log\bigl(\mathcal{L}(\hat\mu, \hat\sigma)\bigr) = -\frac{n}{2}\bigl(\log(2\pi\hat\sigma^2) + 1\bigr).$$

This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
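A brief sketch (the parameters, sample size, and seed are assumptions of this edit) confirming the normal-variance MLE and its $(n-1)/n$ bias factor by simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, n = 2.0, 9.0, 10

# Many replications of sigma2_hat = (1/n) sum (x_i - xbar)^2 to estimate its expectation.
reps = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n))
sigma2_hat = reps.var(axis=1)                   # ddof=0 gives the MLE
print(sigma2_hat.mean(), (n - 1) / n * sigma2)  # both approx. 8.1 = (n-1)/n * sigma^2
```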
It may be the case that variables are correlated, or more generally, not independent. Two random variables $y_1$ and $y_2$ are independent only if their joint probability density function is the product of the individual probability density functions, i.e.

$$f(y_1, y_2) = f(y_1)\,f(y_2).$$

Suppose one constructs an order-$n$ Gaussian vector out of random variables $(y_1, \ldots, y_n)$, where each variable has means given by $(\mu_1, \ldots, \mu_n)$. Furthermore, let the covariance matrix be denoted by $\Sigma$. The joint probability density function of these $n$ random variables then follows a multivariate normal distribution given by:

$$f(y_1, \ldots, y_n) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Sigma)}} \exp\left(-\frac{1}{2}\left[y_1 - \mu_1, \ldots, y_n - \mu_n\right]\Sigma^{-1}\left[y_1 - \mu_1, \ldots, y_n - \mu_n\right]^{\mathrm{T}}\right).$$

In the bivariate case, the joint probability density function is given by:

$$f(y_1, y_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{(y_1-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(y_1-\mu_1)(y_2-\mu_2)}{\sigma_1\sigma_2} + \frac{(y_2-\mu_2)^2}{\sigma_2^2}\right)\right].$$

In this and other cases where a joint density function exists, the likelihood function is defined as above, using this density.

Suppose $X_1, X_2, \ldots, X_m$ are counts in cells/boxes 1 up to $m$; each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to be $n$: $x_1 + x_2 + \cdots + x_m = n$. The probability of each box is $p_i$, with a constraint: $p_1 + p_2 + \cdots + p_m = 1$. This is a case in which the $X_i$'s are not independent; the joint probability of a vector $x_1, x_2, \ldots, x_m$ is called the multinomial and has the form:

$$f(x_1, x_2, \ldots, x_m \mid p_1, p_2, \ldots, p_m) = \frac{n!}{\prod x_i!}\prod p_i^{x_i} = \binom{n}{x_1, x_2, \ldots, x_m} p_1^{x_1} p_2^{x_2} \cdots p_m^{x_m}.$$

Each box taken separately against all the other boxes is a binomial, and this is an extension thereof. The log-likelihood of this is:

$$\ell(p_1, p_2, \ldots, p_m) = \log n! - \sum_{i=1}^m \log x_i! + \sum_{i=1}^m x_i \log p_i.$$

The constraint has to be taken into account, using Lagrange multipliers:

$$L(p_1, p_2, \ldots, p_m, \lambda) = \ell(p_1, p_2, \ldots, p_m) + \lambda\left(1 - \sum_{i=1}^m p_i\right).$$

By setting all the derivatives to 0, the most natural estimate is derived:

$$\hat{p}_i = \frac{x_i}{n}.$$
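A quick sketch (the cell counts are assumed) checking the multinomial result $\hat p_i = x_i/n$ against a constrained numerical maximization:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([12, 30, 58])            # assumed cell counts, n = 100
n = x.sum()

def neg_log_lik(p):
    return -np.sum(x * np.log(p))     # constant terms of the log-likelihood dropped

res = minimize(
    neg_log_lik,
    x0=np.full(3, 1 / 3),
    constraints={"type": "eq", "fun": lambda p: p.sum() - 1},  # sum of p_i = 1
    bounds=[(1e-9, 1)] * 3,
    method="SLSQP",
)
print(res.x, x / n)                   # both approx. [0.12, 0.30, 0.58]
```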
Maximizing the log-likelihood, with and without constraints, can be an unsolvable problem in closed form; then we have to use iterative procedures. Except for special cases, the likelihood equations

$$\frac{\partial\ell(\theta;\mathbf{y})}{\partial\theta} = 0$$

cannot be solved explicitly for an estimator $\hat\theta = \hat\theta(\mathbf{y})$. Instead, they need to be solved iteratively: starting from an initial guess of $\theta$ (say $\hat\theta_1$), one seeks to obtain a convergent sequence $\{\hat\theta_r\}$. Many methods for this kind of optimization problem are available,[26][27] but the most commonly used ones are algorithms based on an updating formula of the form

$$\hat\theta_{r+1} = \hat\theta_r + \eta_r\,\mathbf{d}_r\bigl(\hat\theta\bigr),$$

where the vector $\mathbf{d}_r(\hat\theta)$ indicates the descent direction of the $r$th "step," and the scalar $\eta_r$ captures the "step length,"[28][29] also known as the learning rate.[30] (Note: here it is a maximization problem, so the sign before the gradient is flipped.)

Gradient descent uses a step size $\eta_r \in \mathbb{R}^+$ that is small enough for convergence, and the direction

$$\mathbf{d}_r\bigl(\hat\theta\bigr) = \nabla\ell\bigl(\hat\theta_r;\mathbf{y}\bigr).$$

The gradient descent method requires calculating the gradient at the $r$th iteration, but not the inverse of the second-order derivative, i.e., the Hessian matrix. Therefore, it is computationally faster than the Newton–Raphson method.

The Newton–Raphson method uses $\eta_r = 1$ and

$$\mathbf{d}_r\bigl(\hat\theta\bigr) = -\mathbf{H}_r^{-1}\bigl(\hat\theta\bigr)\,\mathbf{s}_r\bigl(\hat\theta\bigr),$$

where $\mathbf{s}_r(\hat\theta)$ is the score and $\mathbf{H}_r^{-1}(\hat\theta)$ is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the $r$th iteration.[31][32] But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that

$$\mathbf{d}_r\bigl(\hat\theta\bigr) = -\left[\frac{1}{n}\sum_{t=1}^n \frac{\partial\ell(\theta;\mathbf{y})}{\partial\theta}\left(\frac{\partial\ell(\theta;\mathbf{y})}{\partial\theta}\right)^{\mathsf{T}}\right]^{-1} \mathbf{s}_r\bigl(\hat\theta\bigr).$$

Other quasi-Newton methods use more elaborate secant updates to approximate the Hessian matrix. The Davidon–Fletcher–Powell (DFP) formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative:

$$\mathbf{H}_{k+1} = \left(I - \gamma_k y_k s_k^{\mathsf{T}}\right)\mathbf{H}_k\left(I - \gamma_k s_k y_k^{\mathsf{T}}\right) + \gamma_k y_k y_k^{\mathsf{T}},$$

where

$$y_k = \nabla\ell(x_k + s_k) - \nabla\ell(x_k), \qquad \gamma_k = \frac{1}{y_k^{\mathsf{T}} s_k}, \qquad s_k = x_{k+1} - x_k.$$

The Broyden–Fletcher–Goldfarb–Shanno (BFGS) method also gives a solution that is symmetric and positive-definite:

$$B_{k+1} = B_k + \frac{y_k y_k^{\mathsf{T}}}{y_k^{\mathsf{T}} s_k} - \frac{B_k s_k s_k^{\mathsf{T}} B_k^{\mathsf{T}}}{s_k^{\mathsf{T}} B_k s_k},$$

where

$$y_k = \nabla\ell(x_k + s_k) - \nabla\ell(x_k), \qquad s_k = x_{k+1} - x_k.$$
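Returning to the Newton–Raphson update defined above, here is a compact sketch (the Poisson model, data, and starting point are assumptions of this edit) of $\hat\theta_{r+1} = \hat\theta_r - H^{-1}s$ for a Poisson rate, where the score and Hessian are available in closed form:

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.poisson(4.0, size=200)        # assumed Poisson(lambda) sample

lam = 1.0                             # initial guess theta_hat_1
for _ in range(20):
    score = y.sum() / lam - len(y)    # s(lambda) = dl/dlambda
    hess = -y.sum() / lam**2          # H(lambda) = d^2 l / dlambda^2
    step = -score / hess              # Newton-Raphson direction, eta = 1
    lam += step
    if abs(step) < 1e-12:
        break
print(lam, y.mean())                  # converges to the closed-form MLE, the sample mean
```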
The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.

Another popular method is to replace the Hessian with the Fisher information matrix, $\mathcal{I}(\theta) = \mathbb{E}\bigl[\mathbf{H}_r(\hat\theta)\bigr]$, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models.

Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum,[33] but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.[34]

Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth.[35][36] It was Ronald Fisher, however, who between 1912 and 1922 singlehandedly created the modern version of the method.[37][38]

Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem.[39] The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ²-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher.[40] Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[41]

Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[42][43][44][45][46][47][48][49]
Source: https://en.wikipedia.org/wiki/Maximum_likelihood_estimation
In regression analysis, least squares is a parameter estimation method in which the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) is minimized. The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.

Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the model functions are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve.

When the observations come from an exponential family with identity as its natural sufficient statistic and mild conditions are satisfied (e.g. for normal, exponential, Poisson and binomial distributions), standardized least-squares estimates and maximum-likelihood estimates are identical.[1] The method of least squares can also be derived as a method of moments estimator.

The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model.

The least-squares method was officially discovered and published by Adrien-Marie Legendre (1805),[2] though it is usually also co-credited to Carl Friedrich Gauss (1809),[3][4] who contributed significant theoretical advances to the method[4] and may have also used it in his earlier work in 1794 and 1795.[5][4]

The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation. The method was the culmination of several advances that took place during the course of the eighteenth century.[6]

The first clear and concise exposition of the method of least squares was published by Legendre in 1805.[10] The technique is described as an algebraic procedure for fitting linear equations to data, and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the Earth. Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique.[6]

In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795.[11] This naturally led to a priority dispute with Legendre.
However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and the normal distribution. He managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and to define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as the estimate of the location parameter. In this attempt, he invented the normal distribution.

An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed the Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis.

In 1810, after reading Gauss's work, Laplace, after proving the central limit theorem, used it to give a large-sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, normally distributed, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. An extended version of this result is known as the Gauss–Markov theorem.

The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares.[12]

The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of $n$ points (data pairs) $(x_i, y_i)$, $i = 1, \ldots, n$, where $x_i$ is an independent variable and $y_i$ is a dependent variable whose value is found by observation. The model function has the form $f(x, \boldsymbol\beta)$, where the $m$ adjustable parameters are held in the vector $\boldsymbol\beta$. The goal is to find the parameter values for the model that "best" fit the data. The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model:

$$r_i = y_i - f(x_i, \boldsymbol\beta).$$

The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, $S$:[13]

$$S = \sum_{i=1}^n r_i^2.$$

In the simplest case $f(x_i, \boldsymbol\beta) = \beta$, and the result of the least-squares method is the arithmetic mean of the input data.
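A tiny sketch (the observations are assumed) of the simplest case above: fitting the constant model $f(x_i, \beta) = \beta$ by minimizing $S$ over a grid recovers the arithmetic mean.

```python
import numpy as np

y = np.array([3.1, 2.9, 3.4, 2.7, 3.0])          # assumed observations

beta_grid = np.linspace(2.0, 4.0, 2001)
S = ((y[:, None] - beta_grid[None, :]) ** 2).sum(axis=0)  # S(beta) = sum (y_i - beta)^2
print(beta_grid[np.argmin(S)], y.mean())          # both approx. 3.02
```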
An example of a model in two dimensions is that of the straight line. Denoting the y-intercept as $\beta_0$ and the slope as $\beta_1$, the model function is given by $f(x, \boldsymbol\beta) = \beta_0 + \beta_1 x$. See linear least squares for a fully worked out example of this model.

A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, $x$ and $z$, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point.

A residual plot showing random fluctuations about $r_i = 0$ indicates that a linear model $(Y_i = \beta_0 + \beta_1 x_i + U_i)$ is appropriate, where $U_i$ is an independent, random variable.[13] If the residual points had some sort of a shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape, a parabolic model $(Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + U_i)$ would be appropriate for the data. The residuals for a parabolic model can be calculated via $r_i = y_i - \hat\beta_0 - \hat\beta_1 x_i - \hat\beta_2 x_i^2$.[13]

This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications.

The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains $m$ parameters, there are $m$ gradient equations:

$$\frac{\partial S}{\partial \beta_j} = 2\sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0,\quad j = 1, \ldots, m,$$

and since $r_i = y_i - f(x_i, \boldsymbol\beta)$, the gradient equations become

$$-2\sum_i r_i \frac{\partial f(x_i, \boldsymbol\beta)}{\partial \beta_j} = 0,\quad j = 1, \ldots, m.$$

The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives.[15]

A regression model is a linear one when the model comprises a linear combination of the parameters, i.e.,

$$f(x, \boldsymbol\beta) = \sum_{j=1}^m \beta_j \phi_j(x),$$

where each $\phi_j$ is a function of $x$.[15]

Letting $X_{ij} = \phi_j(x_i)$ and putting the independent and dependent variables in matrices $X$ and $Y$, respectively, we can compute the least squares in the following way.
Note that $D$ is the set of all data.[15][16]

$$L(D, \boldsymbol\beta) = \left\|Y - X\boldsymbol\beta\right\|^2 = (Y - X\boldsymbol\beta)^{\mathsf{T}}(Y - X\boldsymbol\beta) = Y^{\mathsf{T}}Y - 2Y^{\mathsf{T}}X\boldsymbol\beta + \boldsymbol\beta^{\mathsf{T}}X^{\mathsf{T}}X\boldsymbol\beta.$$

The gradient of the loss is:

$$\frac{\partial L(D, \boldsymbol\beta)}{\partial \boldsymbol\beta} = \frac{\partial\left(Y^{\mathsf{T}}Y - 2Y^{\mathsf{T}}X\boldsymbol\beta + \boldsymbol\beta^{\mathsf{T}}X^{\mathsf{T}}X\boldsymbol\beta\right)}{\partial \boldsymbol\beta} = -2X^{\mathsf{T}}Y + 2X^{\mathsf{T}}X\boldsymbol\beta.$$

Setting the gradient of the loss to zero and solving for $\boldsymbol\beta$, we get:[16][15]

$$-2X^{\mathsf{T}}Y + 2X^{\mathsf{T}}X\boldsymbol\beta = 0 \quad\Rightarrow\quad X^{\mathsf{T}}Y = X^{\mathsf{T}}X\boldsymbol\beta,$$

$$\hat{\boldsymbol\beta} = \left(X^{\mathsf{T}}X\right)^{-1}X^{\mathsf{T}}Y.$$
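A compact sketch (synthetic data and "true" coefficients assumed) of the closed-form solution $\hat{\boldsymbol\beta} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}Y$ for a straight-line fit, with basis functions $\phi_1(x) = 1$ and $\phi_2(x) = x$:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.5 * x + rng.normal(0, 0.5, size=x.size)   # assumed beta0=1, beta1=2.5

X = np.column_stack([np.ones_like(x), x])             # design matrix X_ij = phi_j(x_i)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)          # normal equations X^T X beta = X^T y
print(beta_hat)                                       # approx. [1.0, 2.5]
```

Solving the normal equations with `np.linalg.solve` rather than forming the explicit inverse is the usual numerically safer choice.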
The residuals are given byri=yi−fk(xi,β)−∑k=1mJikΔβk=Δyi−∑j=1mJijΔβj.{\displaystyle r_{i}=y_{i}-f^{k}(x_{i},{\boldsymbol {\beta }})-\sum _{k=1}^{m}J_{ik}\,\Delta \beta _{k}=\Delta y_{i}-\sum _{j=1}^{m}J_{ij}\,\Delta \beta _{j}.} To minimize the sum of squares ofri{\displaystyle r_{i}}, the gradient equation is set to zero and solved forΔβj{\displaystyle \Delta \beta _{j}}:−2∑i=1nJij(Δyi−∑k=1mJikΔβk)=0,{\displaystyle -2\sum _{i=1}^{n}J_{ij}\left(\Delta y_{i}-\sum _{k=1}^{m}J_{ik}\,\Delta \beta _{k}\right)=0,}which, on rearrangement, becomemsimultaneous linear equations, thenormal equations:∑i=1n∑k=1mJijJikΔβk=∑i=1nJijΔyi(j=1,…,m).{\displaystyle \sum _{i=1}^{n}\sum _{k=1}^{m}J_{ij}J_{ik}\,\Delta \beta _{k}=\sum _{i=1}^{n}J_{ij}\,\Delta y_{i}\qquad (j=1,\ldots ,m).} The normal equations are written in matrix notation as(JTJ)Δβ=JTΔy.{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)\Delta {\boldsymbol {\beta }}=\mathbf {J} ^{\mathsf {T}}\Delta \mathbf {y} .} These are the defining equations of theGauss–Newton algorithm. The differences between linear and nonlinear least squares must be considered whenever the solution to a nonlinear least squares problem is sought.[15] Consider a simple example drawn from physics. A spring should obeyHooke's law, which states that the extension of a springyis proportional to the force,F, applied to it.y=f(F,k)=kF{\displaystyle y=f(F,k)=kF}constitutes the model, whereFis the independent variable. In order to estimate theforce constant,k, we conduct a series ofnmeasurements with different forces to produce a set of data,(Fi,yi),i=1,…,n{\displaystyle (F_{i},y_{i}),\ i=1,\dots ,n\!}, whereyiis a measured spring extension.[17]Each experimental observation will contain some error,ε{\displaystyle \varepsilon }, and so we may specify an empirical model for our observations,yi=kFi+εi.{\displaystyle y_{i}=kF_{i}+\varepsilon _{i}.} There are many methods we might use to estimate the unknown parameterk. Since our data comprise anoverdetermined systemofnequations in one unknown, we estimatekusing least squares. The sum of squares to be minimized is[15]S=∑i=1n(yi−kFi)2.{\displaystyle S=\sum _{i=1}^{n}\left(y_{i}-kF_{i}\right)^{2}.} The least squares estimate of the force constant,k, is given byk^=∑iFiyi∑iFi2.{\displaystyle {\hat {k}}={\frac {\sum _{i}F_{i}y_{i}}{\sum _{i}F_{i}^{2}}}.} We assume that applying forcecausesthe spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law. In a least squares calculation with unit weights, or in linear regression, the variance on thejth parameter, denotedvar⁡(β^j){\displaystyle \operatorname {var} ({\hat {\beta }}_{j})}, is usually estimated withvar⁡(β^j)=σ2([XTX]−1)jj≈σ^2Cjj,{\displaystyle \operatorname {var} ({\hat {\beta }}_{j})=\sigma ^{2}\left(\left[X^{\mathsf {T}}X\right]^{-1}\right)_{jj}\approx {\hat {\sigma }}^{2}C_{jj},}σ^2≈Sn−m{\displaystyle {\hat {\sigma }}^{2}\approx {\frac {S}{n-m}}}C=(XTX)−1,{\displaystyle C=\left(X^{\mathsf {T}}X\right)^{-1},}where the true error varianceσ2is replaced by an estimate, thereduced chi-squared statistic, based on the minimized value of theresidual sum of squares(objective function),S. The denominator,n−m, is thestatistical degrees of freedom; seeeffective degrees of freedomfor generalizations.[15]Cis thecovariance matrix. If theprobability distributionof the parameters is known or an asymptotic approximation is made,confidence limitscan be found.
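The spring example can be carried out numerically. The following is a minimal sketch (Python with NumPy, on simulated data with an arbitrarily chosen true force constant; all numbers are illustrative assumptions, not values from the text) of the closed-form estimate k̂ = ΣFᵢyᵢ / ΣFᵢ² and the variance formula var(k̂) ≈ σ̂²C₁₁ just described.

```python
import numpy as np

# Hypothetical spring data: forces F_i and measured extensions y_i
# (simulated here for illustration; true k chosen arbitrarily as 0.45).
rng = np.random.default_rng(0)
F = np.linspace(1.0, 10.0, 20)
y = 0.45 * F + rng.normal(scale=0.05, size=F.size)

# Least squares estimate for the one-parameter model y = k F:
# k_hat = sum(F_i * y_i) / sum(F_i^2)
k_hat = np.sum(F * y) / np.sum(F ** 2)

# Variance estimate: sigma2_hat = S / (n - m), var(k_hat) = sigma2_hat * C_11,
# with C = (X^T X)^{-1}; here X is the n-by-1 column of forces, so C_11 = 1 / sum(F_i^2).
n, m = F.size, 1
S = np.sum((y - k_hat * F) ** 2)          # minimized residual sum of squares
sigma2_hat = S / (n - m)                  # reduced chi-squared estimate of the error variance
var_k_hat = sigma2_hat / np.sum(F ** 2)

print(k_hat, np.sqrt(var_k_hat))          # estimate and its standard error
```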
Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inference is straightforward if the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables.[15] It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. Thecentral limit theoremsupports the idea that this is a good approximation in many cases. However, suppose the errors are not normally distributed. In that case, acentral limit theoremoften nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution. A special case ofgeneralized least squarescalledweighted least squaresoccurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; thevariancesof the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms,heteroscedasticityis when the variance ofYi{\displaystyle Y_{i}}depends on the value ofxi{\displaystyle x_{i}}, which causes the residual plot to show a "fanning out" effect towards largerYi{\displaystyle Y_{i}}values. On the other hand,homoscedasticityis assuming that the variance ofYi{\displaystyle Y_{i}}and the variance ofUi{\displaystyle U_{i}}are equal.[13] The firstprincipal componentabout the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in they{\displaystyle y}direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally. The statisticianSara van de Geerusedempirical process theoryand theVapnik–Chervonenkis dimensionto prove that a least-squares estimator can be interpreted as ameasureon the space ofsquare-integrable functions.[19] In some contexts, aregularizedversion of the least squares solution may be preferable.Tikhonov regularization(orridge regression) adds a constraint that‖β‖22{\displaystyle \left\|\beta \right\|_{2}^{2}}, the squaredℓ2{\displaystyle \ell _{2}}-normof the parameter vector, is not greater than a given value to the least squares formulation, leading to a constrained minimization problem.
This is equivalent to the unconstrained minimization problem where the objective function is the residual sum of squares plus a penalty termα‖β‖22{\displaystyle \alpha \left\|\beta \right\|_{2}^{2}}, whereα{\displaystyle \alpha }is a tuning parameter (this is theLagrangianform of the constrained minimization problem).[20] In aBayesiancontext, this is equivalent to placing a zero-mean normally distributedprioron the parameter vector. An alternativeregularizedversion of least squares isLasso(least absolute shrinkage and selection operator), which uses the constraint that‖β‖1{\displaystyle \|\beta \|_{1}}, theL1-normof the parameter vector, is no greater than a given value.[21][22][23](As above, one can show using Lagrange multipliers that this is equivalent to an unconstrained minimization of the least-squares penalty withα‖β‖1{\displaystyle \alpha \|\beta \|_{1}}added.) In aBayesiancontext, this is equivalent to placing a zero-meanLaplaceprior distributionon the parameter vector.[24]The optimization problem may be solved usingquadratic programmingor more generalconvex optimizationmethods, as well as by specific algorithms such as theleast angle regressionalgorithm. One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects the more relevant features and discards the others, whereas ridge regression never fully discards any features. Somefeature selectiontechniques have been developed based on the LASSO, including Bolasso, which bootstraps samples,[25]and FeaLect, which analyzes the regression coefficients corresponding to different values ofα{\displaystyle \alpha }to score all the features.[26] TheL1-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables.[21]For this reason, the Lasso and its variants are fundamental to the field ofcompressed sensing. An extension of this approach iselastic net regularization.
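To make the ridge form concrete, here is a minimal sketch (Python with NumPy; the simulated data, the true coefficients, and the value of α are arbitrary illustrative assumptions) of the closed-form ridge solution β̂ = (XᵀX + αI)⁻¹XᵀY that follows from adding the penalty α‖β‖₂² to the residual sum of squares. The Lasso, by contrast, has no such closed form in general and is typically fit by convex-optimization or least-angle-regression routines.

```python
import numpy as np

# Hypothetical regression data with one truly zero coefficient.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, 0.0, -1.0])       # assumed coefficients
Y = X @ beta_true + rng.normal(scale=0.1, size=50)

alpha = 0.5                                   # tuning parameter (arbitrary)
p = X.shape[1]

# Ridge: solve (X^T X + alpha I) beta = X^T Y.
beta_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ Y)

# alpha = 0 recovers ordinary least squares; larger alpha shrinks all
# coefficients toward zero without setting any of them exactly to zero.
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_ols, beta_ridge)
```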
https://en.wikipedia.org/wiki/Least_squares
The parameter space can be expressed asΘ={θ:θ∈Rk,h(θ)=0},{\displaystyle \Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,} whereh(θ)=[h1(θ),h2(θ),…,hr(θ)]{\displaystyle \;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;}is avector-valued functionmappingRk{\displaystyle \,\mathbb {R} ^{k}\,}intoRr.{\displaystyle \;\mathbb {R} ^{r}~.}Estimating the true parameterθ{\displaystyle \theta }belonging toΘ{\displaystyle \Theta }then, as a practical matter, means to find the maximum of the likelihood function subject to theconstrainth(θ)=0.{\displaystyle ~h(\theta )=0~.} Theoretically, the most natural approach to thisconstrained optimizationproblem is the method of substitution, that is "filling out" the restrictionsh1,h2,…,hr{\displaystyle \;h_{1},h_{2},\ldots ,h_{r}\;}to a seth1,h2,…,hr,hr+1,…,hk{\displaystyle \;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;}in such a way thath∗=[h1,h2,…,hk]{\displaystyle \;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;}is aone-to-one functionfromRk{\displaystyle \mathbb {R} ^{k}}to itself, and reparameterize the likelihood function by settingϕi=hi(θ1,θ2,…,θk).{\displaystyle \;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.}[12]Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13]For instance, in amultivariate normal distributionthecovariance matrixΣ{\displaystyle \,\Sigma \,}must bepositive-definite; this restriction can be imposed by replacingΣ=ΓTΓ,{\displaystyle \;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,}whereΓ{\displaystyle \Gamma }is a realupper triangular matrixandΓT{\displaystyle \Gamma ^{\mathsf {T}}}is itstranspose.[14] In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to therestricted likelihood equations∂ℓ∂θ−∂h(θ)T∂θλ=0{\displaystyle {\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0}andh(θ)=0,{\displaystyle h(\theta )=0\;,} whereλ=[λ1,λ2,…,λr]T{\displaystyle ~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~}is a column-vector ofLagrange multipliersand∂h(θ)T∂θ{\displaystyle \;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;}is thek × rJacobian matrixof partial derivatives.[12]Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15]This in turn allows for a statistical test of the "validity" of the constraint, known as theLagrange multiplier test. Nonparametric maximum likelihood estimation can be performed using theempirical likelihood. A maximum likelihood estimator is anextremum estimatorobtained by maximizing, as a function ofθ, theobjective functionℓ^(θ;x){\displaystyle {\widehat {\ell \,}}(\theta \,;x)}. If the data areindependent and identically distributed, then we haveℓ^(θ;x)=∑i=1nln⁡f(xi∣θ),{\displaystyle {\widehat {\ell \,}}(\theta \,;x)=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}this being the sample analogue of the expected log-likelihoodℓ(θ)=E⁡[ln⁡f(xi∣θ)]{\displaystyle \ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]}, where this expectation is taken with respect to the true density. 
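As a concrete illustration of maximizing the sample log-likelihood as an extremum estimator, the following minimal sketch (Python with NumPy and SciPy; the simulated data, starting values, and the substitution σ = exp(η) are illustrative assumptions) numerically maximizes the log-likelihood of an i.i.d. normal sample, using a one-to-one reparameterization to impose the restriction σ > 0 in the spirit of the substitution method described above. By the equivariance of the MLE, the estimate of σ is the exponential of the estimate of η.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical i.i.d. normal sample with true mu = 3, sigma = 2.
rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=2.0, size=500)

def neg_log_likelihood(params):
    # Work with eta = log(sigma) so the constraint sigma > 0 holds automatically.
    mu, eta = params
    sigma = np.exp(eta)
    # Negative log-likelihood up to an additive constant (n/2) log(2*pi).
    return 0.5 * np.sum((x - mu) ** 2) / sigma ** 2 + x.size * np.log(sigma)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])   # quasi-Newton search
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # close to the analytic MLEs: sample mean and RMS deviation
```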
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[16]However, like other estimation methods, maximum likelihood estimation possesses a number of attractivelimiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators are consistent, equivariant under transformations, and asymptotically efficient, as discussed below. Under the conditions outlined below, the maximum likelihood estimator isconsistent. The consistency means that if the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}and we have a sufficiently large number of observationsn, then it is possible to find the value ofθ0with arbitrary precision. In mathematical terms this means that asngoes to infinity the estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges in probabilityto its true value:θ^mle→pθ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.} Under slightly stronger conditions, the estimator convergesalmost surely(orstrongly):θ^mle→a.s.θ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.} In practical applications, data is never generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}. Rather,f(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics thatall models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have. To establish consistency, the following conditions are sufficient.[17] First, identification of the model:θ≠θ0⇔f(⋅∣θ)≠f(⋅∣θ0).{\displaystyle \theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).}In other words, different parameter valuesθcorrespond to different distributions within the model. If this condition did not hold, there would be some valueθ1such thatθ0andθ1generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have beenobservationally equivalent. The identification condition establishes that the log-likelihood has a unique global maximum. Second, compactness of the parameter spaceΘ{\displaystyle \,\Theta \,}. Compactness implies that the likelihood cannot approach the maximum value arbitrarily closely at some other point. Compactness is only a sufficient condition and not a necessary condition, and it can be replaced by some other conditions. Third, continuity: the functionln⁡f(x∣θ)is almost surely continuous inθ,P⁡[ln⁡f(x∣θ)∈C0(Θ)]=1.{\displaystyle \operatorname {\mathbb {P} } {\Bigl [}\;\ln f(x\mid \theta )\;\in \;C^{0}(\Theta )\;{\Bigr ]}=1.} Finally, a dominance condition, requiring an integrable function that bounds|ln⁡f(x∣θ)|uniformly overΘ, can be employed in the case ofi.i.d.observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequenceℓ^(θ∣x){\displaystyle {\widehat {\ell \,}}(\theta \mid x)}isstochastically equicontinuous.
If one wants to demonstrate that the ML estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges toθ0almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:supθ∈Θ‖ℓ^(θ∣x)−ℓ(θ)‖→a.s.0.{\displaystyle \sup _{\theta \in \Theta }\left\|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right\|\ \xrightarrow {\text{a.s.}} \ 0.} Additionally, if (as assumed above) the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}, then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. Specifically,[18]n(θ^mle−θ0)→dN(0,I−1){\displaystyle {\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)}whereIis theFisher information matrix. The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, ifθ^{\displaystyle {\widehat {\theta \,}}}is the MLE forθ{\displaystyle \theta }, and ifg(θ){\displaystyle g(\theta )}is any transformation ofθ{\displaystyle \theta }, then the MLE forα=g(θ){\displaystyle \alpha =g(\theta )}is by definition[19] α^=g(θ^).{\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,} It maximizes the so-calledprofile likelihood: L¯(α)=supθ:α=g(θ)L(θ).{\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,} The MLE is also equivariant with respect to certain transformations of the data. Ify=g(x){\displaystyle y=g(x)}whereg{\displaystyle g}is one to one and does not depend on the parameters to be estimated, then the density functions satisfy fY(y)=fX(g−1(y))|(g−1(y))′|{\displaystyle f_{Y}(y)=f_{X}(g^{-1}(y))\,|(g^{-1}(y))^{\prime }|} and hence the likelihood functions forX{\displaystyle X}andY{\displaystyle Y}differ only by a factor that does not depend on the model parameters. For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case ifX∼N(0,1){\displaystyle X\sim {\mathcal {N}}(0,1)}, thenY=g(X)=eX{\displaystyle Y=g(X)=e^{X}}follows alog-normal distribution. The density of Y follows withfX{\displaystyle f_{X}}standardNormalandg−1(y)=log⁡(y){\displaystyle g^{-1}(y)=\log(y)},|(g−1(y))′|=1y{\displaystyle |(g^{-1}(y))^{\prime }|={\frac {1}{y}}}fory>0{\displaystyle y>0}. As assumed above, if the data were generated byf(⋅;θ0),{\displaystyle ~f(\cdot \,;\theta _{0})~,}then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. It is√n-consistent and asymptotically efficient, meaning that it reaches theCramér–Rao bound. 
Specifically,[18] n(θ^mle−θ0)→dN(0,I−1),{\displaystyle {\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,}whereI{\displaystyle ~{\mathcal {I}}~}is theFisher information matrix:Ijk=E[−∂2ln⁡fθ0(Xt)∂θj∂θk].{\displaystyle {\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.} In particular, it means that thebiasof the maximum likelihood estimator is equal to zero up to the order⁠1/√n⁠. However, when we consider the higher-order terms in theexpansionof the distribution of this estimator, it turns out thatθmlehas bias of order1⁄n. This bias is equal to (componentwise)[20] bh≡E⁡[(θ^mle−θ0)h]=1n∑i,j,k=1mIhiIjk(12Kijk+Jj,ik){\displaystyle b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)} whereIjk{\displaystyle {\mathcal {I}}^{jk}}(with superscripts) denotes the (j,k)-th component of theinverseFisher information matrixI−1{\displaystyle {\mathcal {I}}^{-1}}, and 12Kijk+Jj,ik=E[12∂3ln⁡fθ0(Xt)∂θi∂θj∂θk+∂ln⁡fθ0(Xt)∂θj∂2ln⁡fθ0(Xt)∂θi∂θk].{\displaystyle {\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.} Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, andcorrectfor that bias by subtracting it:θ^mle∗=θ^mle−b^.{\displaystyle {\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.}This estimator is unbiased up to the terms of order⁠1/n⁠, and is called thebias-corrected maximum likelihood estimator. This bias-corrected estimator issecond-order efficient(at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order⁠1/n2⁠. It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator isnotthird-order efficient.[21] A maximum likelihood estimator coincides with themost probableBayesian estimatorgiven auniformprior distributionon theparameters. Indeed, themaximum a posteriori estimateis the parameterθthat maximizes the probability ofθgiven the data, given by Bayes' theorem: P⁡(θ∣x1,x2,…,xn)=f(x1,x2,…,xn∣θ)P⁡(θ)P⁡(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}} whereP⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is the prior distribution for the parameterθand whereP⁡(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}is the probability of the data averaged over all parameters. 
Since the denominator is independent ofθ, the Bayesian estimator is obtained by maximizingf(x1,x2,…,xn∣θ)P⁡(θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}with respect toθ. If we further assume that the priorP⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood functionf(x1,x2,…,xn∣θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distributionP⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}. In many practical applications inmachine learning, maximum-likelihood estimation is used as the method for parameter estimation. Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier is minimizing the error over the whole distribution.[22] Thus, the Bayes decision rule is stated as "decidew1{\displaystyle \;w_{1}\;}ifP⁡(w1∣x)>P⁡(w2∣x){\displaystyle \operatorname {\mathbb {P} } (w_{1}\mid x)>\operatorname {\mathbb {P} } (w_{2}\mid x)}; otherwise decidew2{\displaystyle \;w_{2}\;}", wherew1,w2{\displaystyle \;w_{1}\,,w_{2}\;}are predictions of different classes. From a perspective of minimizing error, it can also be stated asw=argminw∫−∞∞P⁡(error∣x)P⁡(x)d⁡x{\displaystyle w={\underset {w}{\operatorname {arg\;min} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~}whereP⁡(error∣x)=P⁡(w1∣x){\displaystyle \operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~}if we decidew2{\displaystyle \;w_{2}\;}andP⁡(error∣x)=P⁡(w2∣x){\displaystyle \;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;}if we decidew1.{\displaystyle \;w_{1}\;.} By applyingBayes' theoremP⁡(wi∣x)=P⁡(x∣wi)P⁡(wi)P⁡(x){\displaystyle \operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}}, and if we further assume the zero-or-one loss function, which assigns the same loss to all errors, the Bayes decision rule can be reformulated as:hBayes=argmaxw[P⁡(x∣w)P⁡(w)],{\displaystyle h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,}wherehBayes{\displaystyle h_{\text{Bayes}}}is the prediction andP⁡(w){\displaystyle \;\operatorname {\mathbb {P} } (w)\;}is theprior probability. Findingθ^{\displaystyle {\hat {\theta }}}that maximizes the likelihood is asymptotically equivalent to finding theθ^{\displaystyle {\hat {\theta }}}that defines a probability distribution (Qθ^{\displaystyle Q_{\hat {\theta }}}) that has a minimal distance, in terms ofKullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated byPθ0{\displaystyle P_{\theta _{0}}}).[23]In an ideal world, P and Q are the same (and the only thing unknown isθ{\displaystyle \theta }that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends onθ^{\displaystyle {\hat {\theta }}}) to the real distributionPθ0{\displaystyle P_{\theta _{0}}}.[24] For simplicity of notation, let's assume that P=Q.
Let there beni.i.ddata samplesy=(y1,y2,…,yn){\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})}from some probabilityy∼Pθ0{\displaystyle y\sim P_{\theta _{0}}}, that we try to estimate by findingθ^{\displaystyle {\hat {\theta }}}that will maximize the likelihood usingPθ{\displaystyle P_{\theta }}, then:θ^=argmaxθLPθ(y)=argmaxθPθ(y)=argmaxθP(y∣θ)=argmaxθ∏i=1nP(yi∣θ)=argmaxθ∑i=1nlog⁡P(yi∣θ)=argmaxθ(∑i=1nlog⁡P(yi∣θ)−∑i=1nlog⁡P(yi∣θ0))=argmaxθ∑i=1n(log⁡P(yi∣θ)−log⁡P(yi∣θ0))=argmaxθ∑i=1nlog⁡P(yi∣θ)P(yi∣θ0)=argminθ∑i=1nlog⁡P(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nlog⁡P(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nhθ(yi)⟶n→∞argminθE[hθ(y)]=argminθ∫Pθ0(y)hθ(y)dy=argminθ∫Pθ0(y)log⁡P(y∣θ0)P(y∣θ)dy=argminθDKL(Pθ0∥Pθ){\displaystyle {\begin{aligned}{\hat {\theta }}&={\underset {\theta }{\operatorname {arg\,max} }}\,L_{P_{\theta }}(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P_{\theta }(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P(\mathbf {y} \mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\prod _{i=1}^{n}P(y_{i}\mid \theta )={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log P(y_{i}\mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\left(\sum _{i=1}^{n}\log P(y_{i}\mid \theta )-\sum _{i=1}^{n}\log P(y_{i}\mid \theta _{0})\right)={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\left(\log P(y_{i}\mid \theta )-\log P(y_{i}\mid \theta _{0})\right)\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta )}{P(y_{i}\mid \theta _{0})}}={\underset {\theta }{\operatorname {arg\,min} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}\\&={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}h_{\theta }(y_{i})\quad {\underset {n\to \infty }{\longrightarrow }}\quad {\underset {\theta }{\operatorname {arg\,min} }}\,E[h_{\theta }(y)]\\&={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)h_{\theta }(y)dy={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)\log {\frac {P(y\mid \theta _{0})}{P(y\mid \theta )}}dy\\&={\underset {\theta }{\operatorname {arg\,min} }}\,D_{\text{KL}}(P_{\theta _{0}}\parallel P_{\theta })\end{aligned}}} Wherehθ(x)=log⁡P(x∣θ0)P(x∣θ){\displaystyle h_{\theta }(x)=\log {\frac {P(x\mid \theta _{0})}{P(x\mid \theta )}}}. Usinghhelps see how we are using thelaw of large numbersto move from the average ofh(x) to theexpectancyof it using thelaw of the unconscious statistician. The first several transitions have to do with laws oflogarithmand that findingθ^{\displaystyle {\hat {\theta }}}that maximizes some function will also be the one that maximizes some monotonic transformation of that function (i.e.: adding/multiplying by a constant). Sincecross entropyis justShannon's entropyplus KL divergence, and since the entropy ofPθ0{\displaystyle P_{\theta _{0}}}is constant, then the MLE is also asymptotically minimizing cross entropy.[25] Consider a case wherentickets numbered from 1 tonare placed in a box and one is selected at random (seeuniform distribution); thus, the sample size is 1. Ifnis unknown, then the maximum likelihood estimatorn^{\displaystyle {\widehat {n}}}ofnis the numbermon the drawn ticket. (The likelihood is 0 forn<m,1⁄nforn≥m, and this is greatest whenn=m. 
Note that the maximum likelihood estimate ofnoccurs at the lower extreme of possible values {m,m+ 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) Theexpected valueof the numbermon the drawn ticket, and therefore the expected value ofn^{\displaystyle {\widehat {n}}}, is (n+ 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator fornwill systematically underestimatenby (n− 1)/2. Suppose one wishes to determine just how biased anunfair coinis. Call the probability of tossing a 'head'p. The goal then becomes to determinep. Suppose the coin is tossed 80 times: i.e. the sample might be something likex1= H,x2= T, ...,x80= T, and the count of the number ofheads"H" is observed. The probability of tossingtailsis 1 −p(so herepisθabove). Suppose the outcome is 49 heads and 31tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probabilityp=1⁄3, one which gives heads with probabilityp=1⁄2and another which gives heads with probabilityp=2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using theprobability mass functionof thebinomial distributionwith sample size equal to 80, number of successes equal to 49, but for different values ofp(the "probability of success"), the likelihood function (defined below) takes one of three values: P⁡[H=49∣p=13]=(8049)(13)49(1−13)31≈0.000,P⁡[H=49∣p=12]=(8049)(12)49(1−12)31≈0.012,P⁡[H=49∣p=23]=(8049)(23)49(1−23)31≈0.054.{\displaystyle {\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}} The likelihood is maximized whenp=2⁄3, and so this is themaximum likelihood estimateforp. Now suppose that there was only one coin but itspcould have been any value0 ≤p≤ 1. The likelihood function to be maximized isL(p)=fD(H=49∣p)=(8049)p49(1−p)31,{\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,} and the maximization is over all possible values0 ≤p≤ 1. One way to maximize this function is bydifferentiatingwith respect topand setting to zero: 0=∂∂p((8049)p49(1−p)31),0=49p48(1−p)31−31p49(1−p)30=p48(1−p)30[49(1−p)−31p]=p48(1−p)30[49−80p].{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}} This is a product of three terms. The first term is 0 whenp= 0. The second is 0 whenp= 1. The third is zero whenp=49⁄80. The solution that maximizes the likelihood is clearlyp=49⁄80(sincep= 0 andp= 1 result in a likelihood of 0). Thus themaximum likelihood estimatorforpis49⁄80.
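The arithmetic of this example is easy to check in code. Below is a minimal sketch (Python, standard library only; the grid resolution is an arbitrary illustrative choice) that evaluates the binomial likelihood at the three candidate values of p and then locates the unrestricted maximizer by a simple grid search.

```python
from math import comb

# Likelihood of observing 49 heads in 80 tosses as a function of p.
def likelihood(p, heads=49, n=80):
    return comb(n, heads) * p ** heads * (1 - p) ** (n - heads)

# The three labeled coins from the example:
for p in (1 / 3, 1 / 2, 2 / 3):
    print(p, likelihood(p))        # approx. 0.000, 0.012, 0.054, as in the text

# Grid search over 0 <= p <= 1 confirms the analytic maximizer 49/80.
grid = [i / 10000 for i in range(10001)]
p_hat = max(grid, key=likelihood)
print(p_hat)                       # 0.6125 = 49/80
```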
This result is easily generalized by substituting a letter such assin the place of 49 to represent the observed number of 'successes' of ourBernoulli trials, and a letter such asnin the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yieldss⁄nwhich is the maximum likelihood estimator for any sequence ofnBernoulli trials resulting ins'successes'. For thenormal distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}which hasprobability density function f(x∣μ,σ2)=12πσ2exp⁡(−(x−μ)22σ2),{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),} the correspondingprobability density functionfor a sample ofnindependent identically distributednormal random variables (the likelihood) is f(x1,…,xn∣μ,σ2)=∏i=1nf(xi∣μ,σ2)=(12πσ2)n/2exp⁡(−∑i=1n(xi−μ)22σ2).{\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).} This family of distributions has two parameters:θ= (μ,σ); so we maximize the likelihood,L(μ,σ2)=f(x1,…,xn∣μ,σ2){\displaystyle {\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})}, over both parameters simultaneously, or if possible, individually. Since thelogarithmfunction itself is acontinuousstrictly increasingfunction over therangeof the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows: log⁡(L(μ,σ2))=−n2log⁡(2πσ2)−12σ2∑i=1n(xi−μ)2{\displaystyle \log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}} (Note: the log-likelihood is closely related toinformation entropyandFisher information.) We now compute the derivatives of this log-likelihood as follows. 0=∂∂μlog⁡(L(μ,σ2))=0−−2n(x¯−μ)2σ2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}}wherex¯{\displaystyle {\bar {x}}}is thesample mean. This is solved by μ^=x¯=∑i=1nxin.{\displaystyle {\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.} This is indeed the maximum of the function, since it is the only turning point inμand the second derivative is strictly less than zero. Itsexpected valueis equal to the parameterμof the given distribution, E⁡[μ^]=μ,{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,} which means that the maximum likelihood estimatorμ^{\displaystyle {\widehat {\mu }}}is unbiased. 
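A quick simulation makes this unbiasedness visible: averaging the estimate over many samples recovers the true mean. This is a sketch in Python with NumPy; the true μ and σ, the sample size, and the replication count are arbitrary illustrative choices.

```python
import numpy as np

# Draw many independent samples of size n from N(mu, sigma^2) and
# compute the MLE of mu (the sample mean) for each sample.
rng = np.random.default_rng(3)
mu, sigma, n = 1.5, 2.0, 30
mu_hats = rng.normal(mu, sigma, size=(10000, n)).mean(axis=1)

print(mu_hats.mean())   # close to 1.5, consistent with E[mu_hat] = mu
```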
Similarly we differentiate the log-likelihood with respect toσand equate to zero: 0=∂∂σlog⁡(L(μ,σ2))=−nσ+1σ3∑i=1n(xi−μ)2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}} which is solved by σ^2=1n∑i=1n(xi−μ)2.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.} Inserting the estimateμ=μ^{\displaystyle \mu ={\widehat {\mu }}}we obtain σ^2=1n∑i=1n(xi−x¯)2=1n∑i=1nxi2−1n2∑i=1n∑j=1nxixj.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.} To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error)δi≡μ−xi{\displaystyle \delta _{i}\equiv \mu -x_{i}}. Expressing the estimate in these variables yields σ^2=1n∑i=1n(μ−δi)2−1n2∑i=1n∑j=1n(μ−δi)(μ−δj).{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).} Simplifying the expression above, utilizing the facts thatE⁡[δi]=0{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0}andE⁡[δi2]=σ2{\displaystyle \operatorname {E} {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}}, allows us to obtain E⁡[σ^2]=n−1nσ2.{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.} This means that the estimatorσ^2{\displaystyle {\widehat {\sigma }}^{2}}is biased forσ2{\displaystyle \sigma ^{2}}. It can also be shown thatσ^{\displaystyle {\widehat {\sigma }}}is biased forσ{\displaystyle \sigma }, but that bothσ^2{\displaystyle {\widehat {\sigma }}^{2}}andσ^{\displaystyle {\widehat {\sigma }}}are consistent. Formally we say that themaximum likelihood estimatorforθ=(μ,σ2){\displaystyle \theta =(\mu ,\sigma ^{2})}is θ^=(μ^,σ^2).{\displaystyle {\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).} In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously. The normal log-likelihood at its maximum takes a particularly simple form: log⁡(L(μ^,σ^))=−n2(log⁡(2πσ^2)+1){\displaystyle \log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}} This maximum log-likelihood can be shown to be the same for more generalleast squares, even fornon-linear least squares. This is often used in determining likelihood-based approximateconfidence intervalsandconfidence regions, which are generally more accurate than those using the asymptotic normality discussed above. It may be the case that variables are correlated, or more generally, not independent. Two random variablesy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}are independent only if their joint probability density function is the product of the individual probability density functions, i.e. f(y1,y2)=f(y1)f(y2){\displaystyle f(y_{1},y_{2})=f(y_{1})f(y_{2})\,} Suppose one constructs an order-nGaussian vector out of random variables(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}, where each variable has means given by(μ1,…,μn){\displaystyle (\mu _{1},\ldots ,\mu _{n})}. 
Furthermore, let thecovariance matrixbe denoted byΣ{\displaystyle {\mathit {\Sigma }}}. The joint probability density function of thesenrandom variables then follows amultivariate normal distributiongiven by: f(y1,…,yn)=1(2π)n/2det(Σ)exp⁡(−12[y1−μ1,…,yn−μn]Σ−1[y1−μ1,…,yn−μn]T){\displaystyle f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)} In thebivariatecase, the joint probability density function is given by: f(y1,y2)=12πσ1σ21−ρ2exp⁡[−12(1−ρ2)((y1−μ1)2σ12−2ρ(y1−μ1)(y2−μ2)σ1σ2+(y2−μ2)2σ22)]{\displaystyle f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]} In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density. As another example, supposeX1,X2,…,Xm{\displaystyle X_{1},\ X_{2},\ldots ,\ X_{m}}are counts in boxes 1 up tom, where each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to ben{\displaystyle n}:x1+x2+⋯+xm=n{\displaystyle x_{1}+x_{2}+\cdots +x_{m}=n}. The probability of each box ispi{\displaystyle p_{i}}, with a constraint:p1+p2+⋯+pm=1{\displaystyle p_{1}+p_{2}+\cdots +p_{m}=1}. This is a case in which theXi{\displaystyle X_{i}}are not independent. The joint probability of a vectorx1,x2,…,xm{\displaystyle x_{1},\ x_{2},\ldots ,x_{m}}is called the multinomial and has the form: f(x1,x2,…,xm∣p1,p2,…,pm)=n!∏xi!∏pixi=(nx1,x2,…,xm)p1x1p2x2⋯pmxm{\displaystyle f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}} Each box taken separately against all the other boxes is a binomial, and this is an extension thereof. The log-likelihood of this is: ℓ(p1,p2,…,pm)=log⁡n!−∑i=1mlog⁡xi!+∑i=1mxilog⁡pi{\displaystyle \ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}} The constraint has to be taken into account, usingLagrange multipliers: L(p1,p2,…,pm,λ)=ℓ(p1,p2,…,pm)+λ(1−∑i=1mpi){\displaystyle L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)} Setting all the derivatives to 0, the most natural estimate is derived: p^i=xin{\displaystyle {\hat {p}}_{i}={\frac {x_{i}}{n}}} Maximizing the log-likelihood, with and without constraints, may have no closed-form solution, in which case iterative procedures must be used. Except for special cases, the likelihood equations∂ℓ(θ;y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0} cannot be solved explicitly for an estimatorθ^=θ^(y){\displaystyle {\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )}. Instead, they need to be solvediteratively: starting from an initial guess ofθ{\displaystyle \theta }(sayθ^1{\displaystyle {\widehat {\theta }}_{1}}), one seeks to obtain a convergent sequence{θ^r}{\displaystyle \left\{{\widehat {\theta }}_{r}\right\}}.
Many methods for this kind ofoptimization problemare available,[26][27]but the most commonly used ones are algorithms based on an updating formula of the formθ^r+1=θ^r+ηrdr(θ^){\displaystyle {\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)} where the vectordr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)}indicates thedescent directionof therth "step," and the scalarηr{\displaystyle \eta _{r}}captures the "step length,"[28][29]also known as thelearning rate.[30] For the gradient descent method (applied here to a maximization problem, so the sign before the gradient is flipped), one takesηr∈R+{\displaystyle \eta _{r}\in \mathbb {R} ^{+}}that is small enough for convergence anddr(θ^)=∇ℓ(θ^r;y){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)}. The gradient descent method requires calculating the gradient at therth iteration, but not the inverse of the second-order derivative, i.e., the Hessian matrix. Therefore, each step is computationally faster than in the Newton–Raphson method. For theNewton–Raphson method, one takesηr=1{\displaystyle \eta _{r}=1}anddr(θ^)=−Hr−1(θ^)sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)}, wheresr(θ^){\displaystyle \mathbf {s} _{r}({\widehat {\theta }})}is thescoreandHr−1(θ^){\displaystyle \mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)}is theinverseof theHessian matrixof the log-likelihood function, both evaluated at therth iteration.[31][32]But because the calculation of the Hessian matrix iscomputationally costly, numerous alternatives have been proposed. The popularBerndt–Hall–Hall–Hausman algorithmapproximates the Hessian with theouter productof the expected gradient, such that dr(θ^)=−[1n∑t=1n∂ℓ(θ;y)∂θ(∂ℓ(θ;y)∂θ)T]−1sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)} Other quasi-Newton methods use more elaborate secant updates to give an approximation of the Hessian matrix. The DFP (Davidon–Fletcher–Powell) formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative:Hk+1=(I−γkykskT)Hk(I−γkskykT)+γkykykT,{\displaystyle \mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},} where yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}γk=1ykTsk,{\displaystyle \gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.} The BFGS method also gives a solution that is symmetric and positive-definite: Bk+1=Bk+ykykTykTsk−BkskskTBkTskTBksk,{\displaystyle B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,} where yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.} The BFGS method is not guaranteed to converge unless the function has a quadraticTaylor expansionnear an optimum.
However, BFGS can have acceptable performance even for non-smooth optimization instances. Another popular method is to replace the Hessian with theFisher information matrix,I(θ)=−E⁡[Hr(θ^)]{\displaystyle {\mathcal {I}}(\theta )=-\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]}, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such asgeneralized linear models. Although popular, quasi-Newton methods may converge to astationary pointthat is not necessarily a local or global maximum,[33]but rather a local minimum or asaddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is bothnegative definiteandwell-conditioned.[34] Early users of maximum likelihood includeCarl Friedrich Gauss,Pierre-Simon Laplace,Thorvald N. Thiele, andFrancis Ysidro Edgeworth.[35][36]It wasRonald Fisher, however, who between 1912 and 1922 single-handedly created the modern version of the method.[37][38] Maximum-likelihood estimation finally transcendedheuristicjustification in a proof published bySamuel S. Wilksin 1938, now calledWilks' theorem.[39]The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptoticallyχ2-distributed, which enables convenient determination of aconfidence regionaround any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of theFisher informationmatrix, which is provided by a theorem proven by Fisher.[40]Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[41] Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[42][43][44][45][46][47][48][49]
https://en.wikipedia.org/wiki/Maximum_likelihood
Inprobability theoryandstatistics, anormal distributionorGaussian distributionis a type ofcontinuous probability distributionfor areal-valuedrandom variable. The general form of itsprobability density functionis[2][3]f(x)=12πσ2e−(x−μ)22σ2.{\displaystyle f(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\,.} The parameter⁠μ{\displaystyle \mu }⁠is themeanorexpectationof the distribution (and also itsmedianandmode), while the parameterσ2{\textstyle \sigma ^{2}}is thevariance. Thestandard deviationof the distribution is⁠σ{\displaystyle \sigma }⁠(sigma). A random variable with a Gaussian distribution is said to benormally distributed, and is called anormal deviate. Normal distributions are important instatisticsand are often used in thenaturalandsocial sciencesto represent real-valuedrandom variableswhose distributions are not known.[4][5]Their importance is partly due to thecentral limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distributionconvergesto a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such asmeasurement errors, often have distributions that are nearly normal.[6] Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, anylinear combinationof a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such aspropagation of uncertaintyandleast squares[7]parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed. A normal distribution is sometimes informally called abell curve.[8]However, many other distributions arebell-shaped(such as theCauchy,Student'st, andlogisticdistributions). Theunivariate probability distributionis generalized forvectorsin themultivariate normal distributionand for matrices in thematrix normal distribution. The simplest case of a normal distribution is known as thestandard normal distributionorunit normal distribution. This is a special case whenμ=0{\textstyle \mu =0}andσ2=1{\textstyle \sigma ^{2}=1}, and it is described by thisprobability density function(or density):φ(z)=e−z2/22π.{\displaystyle \varphi (z)={\frac {e^{-z^{2}/2}}{\sqrt {2\pi }}}\,.}The variable⁠z{\displaystyle z}⁠has a mean of 0 and a variance and standard deviation of 1. The densityφ(z){\textstyle \varphi (z)}has its peak12π{\textstyle {\frac {1}{\sqrt {2\pi }}}}atz=0{\textstyle z=0}andinflection pointsatz=+1{\textstyle z=+1}and⁠z=−1{\displaystyle z=-1}⁠.
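The two facts just stated about the standard normal density are easy to verify numerically. This is a minimal sketch in Python with NumPy; it relies only on the closed form of φ(z) and on the identity φ″(z) = (z² − 1)φ(z), which follows from differentiating the density twice.

```python
import numpy as np

# Standard normal density.
def phi(z):
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

# Peak value at z = 0 equals 1 / sqrt(2*pi).
print(phi(0.0), 1 / np.sqrt(2 * np.pi))

# Second derivative is (z^2 - 1) * phi(z): it vanishes exactly at z = +/-1,
# the inflection points, and changes sign there.
z = np.array([-1.5, -1.0, 0.0, 1.0, 1.5])
print((z ** 2 - 1) * phi(z))
```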
Although the density above is most commonly known as thestandard normal,a few authors have used that term to describe other versions of the normal distribution.Carl Friedrich Gauss, for example, once defined the standard normal asφ(z)=e−z2π,{\displaystyle \varphi (z)={\frac {e^{-z^{2}}}{\sqrt {\pi }}},}which has a variance of⁠12{\displaystyle {\frac {1}{2}}}⁠, andStephen Stigler[9]once defined the standard normal asφ(z)=e−πz2,{\displaystyle \varphi (z)=e^{-\pi z^{2}},}which has a simple functional form and a variance ofσ2=12π.{\textstyle \sigma ^{2}={\frac {1}{2\pi }}.} Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor⁠σ{\displaystyle \sigma }⁠(the standard deviation) and then translated by⁠μ{\displaystyle \mu }⁠(the mean value): f(x∣μ,σ2)=1σφ(x−μσ).{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sigma }}\varphi \left({\frac {x-\mu }{\sigma }}\right)\,.} The probability density must be scaled by1/σ{\textstyle 1/\sigma }so that theintegralis still 1. If⁠Z{\displaystyle Z}⁠is astandard normal deviate, thenX=σZ+μ{\textstyle X=\sigma Z+\mu }will have a normal distribution with expected value⁠μ{\displaystyle \mu }⁠and standard deviation⁠σ{\displaystyle \sigma }⁠. This is equivalent to saying that the standard normal distribution⁠Z{\displaystyle Z}⁠can be scaled/stretched by a factor of⁠σ{\displaystyle \sigma }⁠and shifted by⁠μ{\displaystyle \mu }⁠to yield a different normal distribution, called⁠X{\displaystyle X}⁠. Conversely, if⁠X{\displaystyle X}⁠is a normal deviate with parameters⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}, then this⁠X{\displaystyle X}⁠distribution can be re-scaled and shifted via the formulaZ=(X−μ)/σ{\textstyle Z=(X-\mu )/\sigma }to convert it to the standard normal distribution. This variate is also called the standardized form of⁠X{\displaystyle X}⁠. The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter⁠ϕ{\displaystyle \phi }⁠(phi).[10]The alternative form of the Greek letter phi,⁠φ{\displaystyle \varphi }⁠, is also used quite often. The normal distribution is often referred to asN(μ,σ2){\textstyle N(\mu ,\sigma ^{2})}or⁠N(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}⁠.[11]Thus when a random variable⁠X{\displaystyle X}⁠is normally distributed with mean⁠μ{\displaystyle \mu }⁠and standard deviation⁠σ{\displaystyle \sigma }⁠, one may write X∼N(μ,σ2).{\displaystyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2}).} Some authors advocate using theprecision⁠τ{\displaystyle \tau }⁠as the parameter defining the width of the distribution, instead of the standard deviation⁠σ{\displaystyle \sigma }⁠or the variance⁠σ2{\displaystyle \sigma ^{2}}⁠. The precision is normally defined as the reciprocal of the variance,⁠1/σ2{\displaystyle 1/\sigma ^{2}}⁠.[12]The formula for the distribution then becomes f(x)=τ2πe−τ(x−μ)2/2.{\displaystyle f(x)={\sqrt {\frac {\tau }{2\pi }}}e^{-\tau (x-\mu )^{2}/2}.} This choice is claimed to have advantages in numerical computations when⁠σ{\displaystyle \sigma }⁠is very close to zero, and simplifies formulas in some contexts, such as in theBayesian inferenceof variables withmultivariate normal distribution. 
Alternatively, the reciprocal of the standard deviationτ′=1/σ{\textstyle \tau '=1/\sigma }might be defined as theprecision, in which case the expression of the normal distribution becomes f(x)=τ′2πe−(τ′)2(x−μ)2/2.{\displaystyle f(x)={\frac {\tau '}{\sqrt {2\pi }}}e^{-(\tau ')^{2}(x-\mu )^{2}/2}.} According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for thequantilesof the distribution. Normal distributions form anexponential familywithnatural parametersθ1=μσ2{\textstyle \textstyle \theta _{1}={\frac {\mu }{\sigma ^{2}}}}andθ2=−12σ2{\textstyle \textstyle \theta _{2}={\frac {-1}{2\sigma ^{2}}}}, and natural statisticsxandx2. The dual expectation parameters for normal distribution areη1=μandη2=μ2+σ2. Thecumulative distribution function(CDF) of the standard normal distribution, usually denoted with the capital Greek letter⁠Φ{\displaystyle \Phi }⁠, is the integral Φ(x)=12π∫−∞xe−t2/2dt.{\displaystyle \Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-t^{2}/2}\,dt\,.} The relatederror functionerf⁡(x){\textstyle \operatorname {erf} (x)}gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2 falling in the range⁠[−x,x]{\displaystyle [-x,x]}⁠. That is: erf⁡(x)=1π∫−xxe−t2dt=2π∫0xe−t2dt.{\displaystyle \operatorname {erf} (x)={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt\,.} These integrals cannot be expressed in terms of elementary functions, and are often said to bespecial functions. However, many numerical approximations are known; seebelowfor more. The two functions are closely related, namely Φ(x)=12[1+erf⁡(x2)].{\displaystyle \Phi (x)={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\right]\,.} For a generic normal distribution with density⁠f{\displaystyle f}⁠, mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, the cumulative distribution function is F(x)=Φ(x−μσ)=12[1+erf⁡(x−μσ2)].{\displaystyle F(x)=\Phi {\left({\frac {x-\mu }{\sigma }}\right)}={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x-\mu }{\sigma {\sqrt {2}}}}\right)\right]\,.} The complement of the standard normal cumulative distribution function,Q(x)=1−Φ(x){\textstyle Q(x)=1-\Phi (x)}, is often called theQ-function, especially in engineering texts.[13][14]It gives the probability that the value of a standard normal random variable⁠X{\displaystyle X}⁠will exceed⁠x{\displaystyle x}⁠:⁠P(X>x){\displaystyle P(X>x)}⁠. Other definitions of the⁠Q{\displaystyle Q}⁠-function, all of which are simple transformations of⁠Φ{\displaystyle \Phi }⁠, are also used occasionally.[15] Thegraphof the standard normal cumulative distribution function⁠Φ{\displaystyle \Phi }⁠has 2-foldrotational symmetryaround the point (0,1/2); that is,⁠Φ(−x)=1−Φ(x){\displaystyle \Phi (-x)=1-\Phi (x)}⁠. Itsantiderivative(indefinite integral) can be expressed as follows:∫Φ(x)dx=xΦ(x)+φ(x)+C.{\displaystyle \int \Phi (x)\,dx=x\Phi (x)+\varphi (x)+C.} The cumulative distribution function of the standard normal distribution can be expanded byintegration by partsinto a series: Φ(x)=12+12π⋅e−x2/2[x+x33+x53⋅5+⋯+x2n+1(2n+1)!!+⋯].{\displaystyle \Phi (x)={\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}\cdot e^{-x^{2}/2}\left[x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{3\cdot 5}}+\cdots +{\frac {x^{2n+1}}{(2n+1)!!}}+\cdots \right]\,.} where!!{\textstyle !!}denotes thedouble factorial. 
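Since math.erf is available in the Python standard library, the identity Φ(x) = ½[1 + erf(x/√2)] gives an easy way to evaluate the normal cumulative distribution function, and standardisation handles the generic case. The following sketch includes a brute-force midpoint integration of the density purely as an independent check; the grid size and lower cutoff are arbitrary assumptions.

```python
import math

def Phi(x):
    """Standard normal CDF via Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F(x, mu=0.0, sigma=1.0):
    """CDF of a generic normal via standardisation: F(x) = Phi((x - mu) / sigma)."""
    return Phi((x - mu) / sigma)

def Phi_numeric(x, lo=-10.0, n=200_000):
    """Midpoint-rule integration of the density, as an independent check."""
    h = (x - lo) / n
    s = sum(math.exp(-((lo + (i + 0.5) * h) ** 2) / 2.0) for i in range(n))
    return s * h / math.sqrt(2.0 * math.pi)

for x in (-1.96, 0.0, 2.5):
    print(x, Phi(x), Phi_numeric(x))   # the two columns agree to many decimals
print(F(12.0, mu=10.0, sigma=2.0))     # equals Phi(1.0), about 0.8413
```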
An asymptotic expansion of the cumulative distribution function for large x can also be derived using integration by parts. For more, see Error function § Asymptotic expansion.[16] A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation: Φ(x)≈12+12π∑k=0n(−1)kx(2k+1)2kk!(2k+1).{\displaystyle \Phi (x)\approx {\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}\sum _{k=0}^{n}{\frac {(-1)^{k}x^{(2k+1)}}{2^{k}k!(2k+1)}}\,.} The recursive nature of the e^{ax²} family of derivatives may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution, Φ(x₀): Φ(x)=∑n=0∞Φ(n)(x0)n!(x−x0)n,{\displaystyle \Phi (x)=\sum _{n=0}^{\infty }{\frac {\Phi ^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}\,,} where: Φ(0)(x0)=12π∫−∞x0e−t2/2dtΦ(1)(x0)=12πe−x02/2Φ(n)(x0)=−(x0Φ(n−1)(x0)+(n−2)Φ(n−2)(x0)),n≥2.{\displaystyle {\begin{aligned}\Phi ^{(0)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x_{0}}e^{-t^{2}/2}\,dt\\\Phi ^{(1)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}e^{-x_{0}^{2}/2}\\\Phi ^{(n)}(x_{0})&=-\left(x_{0}\Phi ^{(n-1)}(x_{0})+(n-2)\Phi ^{(n-2)}(x_{0})\right),&n\geq 2\,.\end{aligned}}} An application of the above Taylor series expansion is to use Newton's method to reverse the computation. That is, if we have a value for the cumulative distribution function, Φ(x), but do not know the x needed to obtain it, we can use Newton's method to find x, and use the Taylor series expansion above to minimize the number of computations. Newton's method is well suited to this problem because the first derivative of Φ(x) is simply the standard normal density, which is readily available for use in the Newton's method solution. To solve, select a known approximate solution, x₀, to the desired Φ(x); x₀ may be a value from a distribution table, or an intelligent estimate followed by a computation of Φ(x₀) using any desired means. Use this value of x₀ and the Taylor series expansion above to minimize computations. Repeat the following process until the difference between the computed Φ(xₙ) and the desired Φ, which we will call Φ(desired), is below a chosen acceptably small error, such as 10⁻⁵, 10⁻¹⁵, etc.: xn+1=xn−Φ(xn,x0,Φ(x0))−Φ(desired)Φ′(xn),{\displaystyle x_{n+1}=x_{n}-{\frac {\Phi (x_{n},x_{0},\Phi (x_{0}))-\Phi ({\text{desired}})}{\Phi '(x_{n})}}\,,} where Φ′(xn)=12πe−xn2/2.{\displaystyle \Phi '(x_{n})={\frac {1}{\sqrt {2\pi }}}e^{-x_{n}^{2}/2}\,.} When the repeated computations converge to an error below the chosen acceptably small value, x will be the value needed to obtain a Φ(x) of the desired value, Φ(desired). About 68% of values drawn from a normal distribution are within one standard deviation σ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[8] This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.
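The Newton iteration just described is short in code. In the minimal sketch below, math.erf stands in for the Taylor-series evaluation of Φ purely for brevity; the starting point x₀ = 0 and the stopping tolerance are assumptions, while the Newton step itself is exactly the one in the text.

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Phi'(x) is just the standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def probit_newton(p, x0=0.0, tol=1e-14, max_iter=100):
    """Solve Phi(x) = p for x by Newton's method: x <- x - (Phi(x) - p) / phi(x)."""
    x = x0
    for _ in range(max_iter):
        step = (Phi(x) - p) / phi(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(probit_newton(0.975))  # about 1.96, the familiar 95% two-sided quantile
print(probit_newton(0.5))    # 0 exactly
```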
More precisely, the probability that a normal deviate lies in the range between μ−nσ and μ+nσ is given by F(μ+nσ)−F(μ−nσ)=Φ(n)−Φ(−n)=erf⁡(n/√2).{\displaystyle F(\mu +n\sigma )-F(\mu -n\sigma )=\Phi (n)-\Phi (-n)=\operatorname {erf} \left({\frac {n}{\sqrt {2}}}\right).} To 12 significant digits, the values for n = 1, 2, …, 6 are 0.682689492137, 0.954499736104, 0.997300203937, 0.999936657516, 0.999999426697, and 0.999999998027, respectively. For large n, one can use the approximation 1−p≈e−n2/2nπ/2{\textstyle 1-p\approx {\frac {e^{-n^{2}/2}}{n{\sqrt {\pi /2}}}}}.

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function: Φ−1(p)=√2 erf−1⁡(2p−1),p∈(0,1).{\displaystyle \Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).} For a normal random variable with mean μ and variance σ², the quantile function is F−1(p)=μ+σΦ−1(p)=μ+σ√2 erf−1⁡(2p−1),p∈(0,1).{\displaystyle F^{-1}(p)=\mu +\sigma \Phi ^{-1}(p)=\mu +\sigma {\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).} The quantile Φ⁻¹(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable X will exceed μ+z_pσ with probability 1−p, and will lie outside the interval μ±z_pσ with probability 2(1−p). In particular, the quantile z_{0.975} is 1.96; therefore a normal random variable will lie outside the interval μ±1.96σ in only 5% of cases.

The quantile z_p such that X will lie in the range μ±z_pσ with a specified two-sided probability p is √2 erf⁻¹(p) = Φ⁻¹((p+1)/2), not Φ⁻¹(p) as defined above; for example, z_p is approximately 1.645 for p = 0.90, 1.960 for p = 0.95, and 2.576 for p = 0.99. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.[17]

For small p, the quantile function has the useful asymptotic expansion Φ−1(p)=−ln⁡1p2−ln⁡ln⁡1p2−ln⁡(2π)+o(1).{\textstyle \Phi ^{-1}(p)=-{\sqrt {\ln {\frac {1}{p^{2}}}-\ln \ln {\frac {1}{p^{2}}}-\ln(2\pi )}}+{\mathcal {o}}(1).}[citation needed]

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.[18][19] Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[20][21]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share.
Such variables may be better described by other distributions, such as thelog-normal distributionor thePareto distribution. The value of the normal density is practically zero when the value⁠x{\displaystyle x}⁠lies more than a fewstandard deviationsaway from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction ofoutliers—values that lie many standard deviations away from the mean—and least squares and otherstatistical inferencemethods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a moreheavy-taileddistribution should be assumed and the appropriaterobust statistical inferencemethods applied. The Gaussian distribution belongs to the family ofstable distributionswhich are the attractors of sums ofindependent, identically distributeddistributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being theCauchy distributionand theLévy distribution. The normal distribution with densityf(x){\textstyle f(x)}(mean⁠μ{\displaystyle \mu }⁠and varianceσ2>0{\textstyle \sigma ^{2}>0}) has the following properties: Furthermore, the density⁠φ{\displaystyle \varphi }⁠of the standard normal distribution (i.e.μ=0{\textstyle \mu =0}andσ=1{\textstyle \sigma =1}) also has the following properties: The plain and absolutemomentsof a variable⁠X{\displaystyle X}⁠are the expected values ofXp{\textstyle X^{p}}and|X|p{\textstyle |X|^{p}}, respectively. If the expected value⁠μ{\displaystyle \mu }⁠of⁠X{\displaystyle X}⁠is zero, these parameters are calledcentral moments;otherwise, these parameters are callednon-central moments.Usually we are interested only in moments with integer order⁠p{\displaystyle p}⁠. If⁠X{\displaystyle X}⁠has a normal distribution, the non-central moments exist and are finite for any⁠p{\displaystyle p}⁠whose real part is greater than −1. For any non-negative integer⁠p{\displaystyle p}⁠, the plain central moments are:[25]E⁡[(X−μ)p]={0ifpis odd,σp(p−1)!!ifpis even.{\displaystyle \operatorname {E} \left[(X-\mu )^{p}\right]={\begin{cases}0&{\text{if }}p{\text{ is odd,}}\\\sigma ^{p}(p-1)!!&{\text{if }}p{\text{ is even.}}\end{cases}}}Heren!!{\textstyle n!!}denotes thedouble factorial, that is, the product of all numbers from⁠n{\displaystyle n}⁠to 1 that have the same parity asn.{\textstyle n.} The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. 
For any non-negative integerp,{\textstyle p,} E⁡[|X−μ|p]=σp(p−1)!!⋅{2πifpis odd1ifpis even=σp⋅2p/2Γ(p+12)π.{\displaystyle {\begin{aligned}\operatorname {E} \left[|X-\mu |^{p}\right]&=\sigma ^{p}(p-1)!!\cdot {\begin{cases}{\sqrt {\frac {2}{\pi }}}&{\text{if }}p{\text{ is odd}}\\1&{\text{if }}p{\text{ is even}}\end{cases}}\\&=\sigma ^{p}\cdot {\frac {2^{p/2}\Gamma \left({\frac {p+1}{2}}\right)}{\sqrt {\pi }}}.\end{aligned}}}The last formula is valid also for any non-integerp>−1.{\textstyle p>-1.}When the meanμ≠0,{\textstyle \mu \neq 0,}the plain and absolute moments can be expressed in terms ofconfluent hypergeometric functions1F1{\textstyle {}_{1}F_{1}}andU.{\textstyle U.}[26] E⁡[Xp]=σp⋅(−i2)pU(−p2,12,−μ22σ2),E⁡[|X|p]=σp⋅2p/2Γ(1+p2)π1F1(−p2,12,−μ22σ2).{\displaystyle {\begin{aligned}\operatorname {E} \left[X^{p}\right]&=\sigma ^{p}\cdot {\left(-i{\sqrt {2}}\right)}^{p}\,U{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)},\\\operatorname {E} \left[|X|^{p}\right]&=\sigma ^{p}\cdot 2^{p/2}{\frac {\Gamma {\left({\frac {1+p}{2}}\right)}}{\sqrt {\pi }}}\,{}_{1}F_{1}{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)}.\end{aligned}}} These expressions remain valid even if⁠p{\displaystyle p}⁠is not an integer. See alsogeneralized Hermite polynomials. The expectation of⁠X{\displaystyle X}⁠conditioned on the event that⁠X{\displaystyle X}⁠lies in an interval[a,b]{\textstyle [a,b]}is given byE⁡[X∣a<X<b]=μ−σ2f(b)−f(a)F(b)−F(a),{\displaystyle \operatorname {E} \left[X\mid a<X<b\right]=\mu -\sigma ^{2}{\frac {f(b)-f(a)}{F(b)-F(a)}}\,,}where⁠f{\displaystyle f}⁠and⁠F{\displaystyle F}⁠respectively are the density and the cumulative distribution function of⁠X{\displaystyle X}⁠. Forb=∞{\textstyle b=\infty }this is known as theinverse Mills ratio. Note that above, density⁠f{\displaystyle f}⁠of⁠X{\displaystyle X}⁠is used instead of standard normal density as in inverse Mills ratio, so here we haveσ2{\textstyle \sigma ^{2}}instead of⁠σ{\displaystyle \sigma }⁠. TheFourier transformof a normal density⁠f{\displaystyle f}⁠with mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}is[27] f^(t)=∫−∞∞f(x)e−itxdx=e−iμte−12(σt)2,{\displaystyle {\hat {f}}(t)=\int _{-\infty }^{\infty }f(x)e^{-itx}\,dx=e^{-i\mu t}e^{-{\frac {1}{2}}(\sigma t)^{2}}\,,} where⁠i{\displaystyle i}⁠is theimaginary unit. If the meanμ=0{\textstyle \mu =0}, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on thefrequency domain, with mean 0 and variance⁠1/σ2{\displaystyle 1/\sigma ^{2}}⁠. In particular, the standard normal distribution⁠φ{\displaystyle \varphi }⁠is aneigenfunctionof the Fourier transform. In probability theory, the Fourier transform of the probability distribution of a real-valued random variable⁠X{\displaystyle X}⁠is closely connected to thecharacteristic functionφX(t){\textstyle \varphi _{X}(t)}of that variable, which is defined as theexpected valueofeitX{\textstyle e^{itX}}, as a function of the real variable⁠t{\displaystyle t}⁠(thefrequencyparameter of the Fourier transform). This definition can be analytically extended to a complex-value variable⁠t{\displaystyle t}⁠.[28]The relation between both is:φX(t)=f^(−t).{\displaystyle \varphi _{X}(t)={\hat {f}}(-t)\,.} Themoment generating functionof a real random variable⁠X{\displaystyle X}⁠is the expected value ofetX{\textstyle e^{tX}}, as a function of the real parameter⁠t{\displaystyle t}⁠. 
For a normal distribution with density⁠f{\displaystyle f}⁠, mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, the moment generating function exists and is equal to M(t)=E⁡[etX]=f^(it)=eμteσ2t2/2.{\displaystyle M(t)=\operatorname {E} \left[e^{tX}\right]={\hat {f}}(it)=e^{\mu t}e^{\sigma ^{2}t^{2}/2}\,.}For any⁠k{\displaystyle k}⁠, the coefficient of⁠tk/k!{\displaystyle t^{k}/k!}⁠in the moment generating function (expressed as anexponential power seriesin⁠t{\displaystyle t}⁠) is the normal distribution's expected value⁠E⁡[Xk]{\displaystyle \operatorname {E} [X^{k}]}⁠. Thecumulant generating functionis the logarithm of the moment generating function, namely g(t)=ln⁡M(t)=μt+12σ2t2.{\displaystyle g(t)=\ln M(t)=\mu t+{\tfrac {1}{2}}\sigma ^{2}t^{2}\,.} The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in⁠t{\displaystyle t}⁠, only the first twocumulantsare nonzero, namely the mean⁠μ{\displaystyle \mu }⁠and the variance⁠σ2{\displaystyle \sigma ^{2}}⁠. Some authors prefer to instead work with thecharacteristic functionE[eitX] =eiμt−σ2t2/2andln E[eitX] =iμt−⁠1/2⁠σ2t2. WithinStein's methodthe Stein operator and class of a random variableX∼N(μ,σ2){\textstyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2})}areAf(x)=σ2f′(x)−(x−μ)f(x){\textstyle {\mathcal {A}}f(x)=\sigma ^{2}f'(x)-(x-\mu )f(x)}andF{\textstyle {\mathcal {F}}}the class of all absolutely continuous functions⁠f:R→R{\displaystyle \textstyle f:\mathbb {R} \to \mathbb {R} }⁠such that⁠E⁡[|f′(X)|]<∞{\displaystyle \operatorname {E} [\vert f'(X)\vert ]<\infty }⁠. In thelimitwhenσ2{\textstyle \sigma ^{2}}tends to zero, the probability densityf(x){\textstyle f(x)}eventually tends to zero at anyx≠μ{\textstyle x\neq \mu }, but grows without limit ifx=μ{\textstyle x=\mu }, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinaryfunctionwhen⁠σ2=0{\displaystyle \sigma ^{2}=0}⁠. However, one can define the normal distribution with zero variance as ageneralized function; specifically, as aDirac delta function⁠δ{\displaystyle \delta }⁠translated by the mean⁠μ{\displaystyle \mu }⁠, that isf(x)=δ(x−μ).{\textstyle f(x)=\delta (x-\mu ).}Its cumulative distribution function is then theHeaviside step functiontranslated by the mean⁠μ{\displaystyle \mu }⁠, namelyF(x)={0ifx<μ1ifx≥μ.{\displaystyle F(x)={\begin{cases}0&{\text{if }}x<\mu \\1&{\text{if }}x\geq \mu \,.\end{cases}}} Of all probability distributions over the reals with a specified finite mean⁠μ{\displaystyle \mu }⁠and finite variance⁠σ2{\displaystyle \sigma ^{2}}⁠, the normal distributionN(μ,σ2){\textstyle N(\mu ,\sigma ^{2})}is the one withmaximum entropy.[29]To see this, let⁠X{\displaystyle X}⁠be acontinuous random variablewithprobability density⁠f(x){\displaystyle f(x)}⁠. The entropy of⁠X{\displaystyle X}⁠is defined as[30][31][32]H(X)=−∫−∞∞f(x)ln⁡f(x)dx,{\displaystyle H(X)=-\int _{-\infty }^{\infty }f(x)\ln f(x)\,dx\,,} wheref(x)log⁡f(x){\textstyle f(x)\log f(x)}is understood to be zero whenever⁠f(x)=0{\displaystyle f(x)=0}⁠. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by usingvariational calculus. 
A function with threeLagrange multipliersis defined: L=−∫−∞∞f(x)ln⁡f(x)dx−λ0(1−∫−∞∞f(x)dx)−λ1(μ−∫−∞∞f(x)xdx)−λ2(σ2−∫−∞∞f(x)(x−μ)2dx).{\displaystyle L=-\int _{-\infty }^{\infty }f(x)\ln f(x)\,dx-\lambda _{0}\left(1-\int _{-\infty }^{\infty }f(x)\,dx\right)-\lambda _{1}\left(\mu -\int _{-\infty }^{\infty }f(x)x\,dx\right)-\lambda _{2}\left(\sigma ^{2}-\int _{-\infty }^{\infty }f(x)(x-\mu )^{2}\,dx\right)\,.} At maximum entropy, a small variationδf(x){\textstyle \delta f(x)}aboutf(x){\textstyle f(x)}will produce a variationδL{\textstyle \delta L}about⁠L{\displaystyle L}⁠which is equal to 0: 0=δL=∫−∞∞δf(x)(−ln⁡f(x)−1+λ0+λ1x+λ2(x−μ)2)dx.{\displaystyle 0=\delta L=\int _{-\infty }^{\infty }\delta f(x)\left(-\ln f(x)-1+\lambda _{0}+\lambda _{1}x+\lambda _{2}(x-\mu )^{2}\right)\,dx\,.} Since this must hold for any small⁠δf(x){\displaystyle \delta f(x)}⁠, the factor multiplying⁠δf(x){\displaystyle \delta f(x)}⁠must be zero, and solving for⁠f(x){\displaystyle f(x)}⁠yields: f(x)=exp⁡(−1+λ0+λ1x+λ2(x−μ)2).{\displaystyle f(x)=\exp \left(-1+\lambda _{0}+\lambda _{1}x+\lambda _{2}(x-\mu )^{2}\right)\,.} The Lagrange constraints that⁠f(x){\displaystyle f(x)}⁠is properly normalized and has the specified mean and variance are satisfied if and only if⁠λ0{\displaystyle \lambda _{0}}⁠,⁠λ1{\displaystyle \lambda _{1}}⁠, and⁠λ2{\displaystyle \lambda _{2}}⁠are chosen so thatf(x)=12πσ2e−(x−μ)22σ2.{\displaystyle f(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\,.}The entropy of a normal distributionX∼N(μ,σ2){\textstyle X\sim N(\mu ,\sigma ^{2})}is equal toH(X)=12(1+ln⁡2σ2π),{\displaystyle H(X)={\tfrac {1}{2}}(1+\ln 2\sigma ^{2}\pi )\,,}which is independent of the mean⁠μ{\displaystyle \mu }⁠. The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, whereX1,…,Xn{\textstyle X_{1},\ldots ,X_{n}}areindependent and identically distributedrandom variables with the same arbitrary distribution, zero mean, and varianceσ2{\textstyle \sigma ^{2}}and⁠Z{\displaystyle Z}⁠is their mean scaled byn{\textstyle {\sqrt {n}}}Z=n(1n∑i=1nXi){\displaystyle Z={\sqrt {n}}\left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)}Then, as⁠n{\displaystyle n}⁠increases, the probability distribution of⁠Z{\displaystyle Z}⁠will tend to the normal distribution with zero mean and variance⁠σ2{\displaystyle \sigma ^{2}}⁠. The theorem can be extended to variables(Xi){\textstyle (X_{i})}that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions. Manytest statistics,scores, andestimatorsencountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use ofinfluence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example: Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. 
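As a small simulation of the central limit theorem as stated above (the sample size n = 30, the replication count, the seed, and the choice of uniform summands are illustrative assumptions), centred uniform variables with variance 1/12 are averaged and scaled by √n; the resulting draws should have standard deviation near √(1/12) ≈ 0.2887, and roughly 68.3% of them should fall within one standard deviation of zero.

```python
import math
import random
import statistics

rng = random.Random(1)

def z_draw(n):
    # X_i = U_i - 1/2 has zero mean and variance 1/12; Z = sqrt(n) * mean(X_i).
    xs = [rng.random() - 0.5 for _ in range(n)]
    return math.sqrt(n) * (sum(xs) / n)

zs = [z_draw(30) for _ in range(50_000)]
sigma = math.sqrt(1.0 / 12.0)

print(statistics.stdev(zs))                        # close to sigma = 0.2887
print(sum(abs(z) <= sigma for z in zs) / len(zs))  # close to 0.6827
```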
A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions. This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise. See AWGN.

The probability density, cumulative distribution, and inverse cumulative distribution of any function of one or more independent or correlated normal variables can be computed with the numerical method of ray-tracing.[41] In the following sections we look at some special cases.

If X is distributed normally with mean μ and variance σ², then the affine transform aX + b (for a ≠ 0) is again normally distributed, with mean aμ + b and variance a²σ²; exp(X) has a log-normal distribution; and ((X − μ)/σ)² has a chi-squared distribution with one degree of freedom.

If X1 and X2 are two independent standard normal random variables with mean 0 and variance 1, then their sum X1 + X2 is normal with mean 0 and variance 2; their ratio X1/X2 follows the standard Cauchy distribution; and X1² + X2² has a chi-squared distribution with two degrees of freedom.

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.

For any positive integer n, any normal distribution with mean μ and variance σ² is the distribution of the sum of n independent normal deviates, each with mean μ/n and variance σ²/n. This property is called infinite divisibility.[47]

Conversely, if X1 and X2 are independent random variables and their sum X1 + X2 has a normal distribution, then both X1 and X2 must be normal deviates.[48] This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[33]

The Kac–Bernstein theorem states that if X and Y are independent and X + Y and X − Y are also independent, then both X and Y must necessarily have normal distributions.[49][50] More generally, if X1, …, Xn are independent random variables, then two distinct linear combinations ∑a_kX_k and ∑b_kX_k will be independent if and only if all X_k are normal and ∑a_kb_kσ_k² = 0, where σ_k² denotes the variance of X_k.[49]

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

A random variable X has a two-piece normal distribution if it has a distribution fX(x)={N(μ,σ12),ifx≤μN(μ,σ22),ifx≥μ{\displaystyle f_{X}(x)={\begin{cases}N(\mu ,\sigma _{1}^{2}),&{\text{ if }}x\leq \mu \\N(\mu ,\sigma _{2}^{2}),&{\text{ if }}x\geq \mu \end{cases}}} where μ is the mean and σ1² and σ2² are the variances of the distribution to the left and right of the mean respectively.
The meanE(X), varianceV(X), and third central momentT(X)of this distribution have been determined[51] E⁡(X)=μ+2π(σ2−σ1),V⁡(X)=(1−2π)(σ2−σ1)2+σ1σ2,T⁡(X)=2π(σ2−σ1)[(4π−1)(σ2−σ1)2+σ1σ2].{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\mu +{\sqrt {\frac {2}{\pi }}}(\sigma _{2}-\sigma _{1}),\\\operatorname {V} (X)&=\left(1-{\frac {2}{\pi }}\right)(\sigma _{2}-\sigma _{1})^{2}+\sigma _{1}\sigma _{2},\\\operatorname {T} (X)&={\sqrt {\frac {2}{\pi }}}(\sigma _{2}-\sigma _{1})\left[\left({\frac {4}{\pi }}-1\right)(\sigma _{2}-\sigma _{1})^{2}+\sigma _{1}\sigma _{2}\right].\end{aligned}}} One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are: It is often the case that we do not know the parameters of the normal distribution, but instead want toestimatethem. That is, having a sample(x1,…,xn){\textstyle (x_{1},\ldots ,x_{n})}from a normalN(μ,σ2){\textstyle {\mathcal {N}}(\mu ,\sigma ^{2})}population we would like to learn the approximate values of parameters⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}. The standard approach to this problem is themaximum likelihoodmethod, which requires maximization of thelog-likelihood function:ln⁡L(μ,σ2)=∑i=1nln⁡f(xi∣μ,σ2)=−n2ln⁡(2π)−n2ln⁡σ2−12σ2∑i=1n(xi−μ)2.{\displaystyle \ln {\mathcal {L}}(\mu ,\sigma ^{2})=\sum _{i=1}^{n}\ln f(x_{i}\mid \mu ,\sigma ^{2})=-{\frac {n}{2}}\ln(2\pi )-{\frac {n}{2}}\ln \sigma ^{2}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}Taking derivatives with respect to⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}and solving the resulting system of first order conditions yields themaximum likelihood estimates:μ^=x¯≡1n∑i=1nxi,σ^2=1n∑i=1n(xi−x¯)2.{\displaystyle {\hat {\mu }}={\overline {x}}\equiv {\frac {1}{n}}\sum _{i=1}^{n}x_{i},\qquad {\hat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}.} Thenln⁡L(μ^,σ^2){\textstyle \ln {\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}^{2})}is as follows: ln⁡L(μ^,σ^2)=(−n/2)[ln⁡(2πσ^2)+1]{\displaystyle \ln {\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}^{2})=(-n/2)[\ln(2\pi {\hat {\sigma }}^{2})+1]} Estimatorμ^{\displaystyle \textstyle {\hat {\mu }}}is called thesample mean, since it is the arithmetic mean of all observations. The statisticx¯{\displaystyle \textstyle {\overline {x}}}iscompleteandsufficientfor⁠μ{\displaystyle \mu }⁠, and therefore by theLehmann–Scheffé theorem,μ^{\displaystyle \textstyle {\hat {\mu }}}is theuniformly minimum variance unbiased(UMVU) estimator.[52]In finite samples it is distributed normally:μ^∼N(μ,σ2/n).{\displaystyle {\hat {\mu }}\sim {\mathcal {N}}(\mu ,\sigma ^{2}/n).}The variance of this estimator is equal to theμμ-element of the inverseFisher information matrixI−1{\displaystyle \textstyle {\mathcal {I}}^{-1}}. This implies that the estimator isfinite-sample efficient. Of practical importance is the fact that thestandard errorofμ^{\displaystyle \textstyle {\hat {\mu }}}is proportional to1/n{\displaystyle \textstyle 1/{\sqrt {n}}}, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials inMonte Carlo simulations. 
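A minimal sketch of these maximum likelihood estimates on synthetic data (the true parameters and the seed are assumptions); it also confirms the closed-form value of the maximised log-likelihood quoted above.

```python
import math
import random

rng = random.Random(7)
data = [rng.gauss(5.0, 2.0) for _ in range(10_000)]
n = len(data)

mu_hat = sum(data) / n                                  # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n   # MLE divides by n, not n - 1

# The log-likelihood at the maximum equals (-n/2) * (ln(2*pi*sigma2_hat) + 1).
loglik = sum(-0.5 * math.log(2 * math.pi * sigma2_hat)
             - (x - mu_hat) ** 2 / (2 * sigma2_hat) for x in data)
closed_form = (-n / 2) * (math.log(2 * math.pi * sigma2_hat) + 1)

print(mu_hat, math.sqrt(sigma2_hat))   # near 5.0 and 2.0
print(loglik, closed_form)             # equal up to floating-point error
```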
From the standpoint of theasymptotic theory,μ^{\displaystyle \textstyle {\hat {\mu }}}isconsistent, that is, itconverges in probabilityto⁠μ{\displaystyle \mu }⁠asn→∞{\textstyle n\rightarrow \infty }. The estimator is alsoasymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:n(μ^−μ)→dN(0,σ2).{\displaystyle {\sqrt {n}}({\hat {\mu }}-\mu )\,\xrightarrow {d} \,{\mathcal {N}}(0,\sigma ^{2}).} The estimatorσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is called thesample variance, since it is the variance of the sample ((x1,…,xn){\textstyle (x_{1},\ldots ,x_{n})}). In practice, another estimator is often used instead of theσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}. This other estimator is denoteds2{\textstyle s^{2}}, and is also called thesample variance, which represents a certain ambiguity in terminology; its square root⁠s{\displaystyle s}⁠is called thesample standard deviation. The estimators2{\textstyle s^{2}}differs fromσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}by having(n− 1)instead ofnin the denominator (the so-calledBessel's correction):s2=nn−1σ^2=1n−1∑i=1n(xi−x¯)2.{\displaystyle s^{2}={\frac {n}{n-1}}{\hat {\sigma }}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}.}The difference betweens2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}becomes negligibly small for largen's. In finite samples however, the motivation behind the use ofs2{\textstyle s^{2}}is that it is anunbiased estimatorof the underlying parameterσ2{\textstyle \sigma ^{2}}, whereasσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is biased. Also, by the Lehmann–Scheffé theorem the estimators2{\textstyle s^{2}}is uniformly minimum variance unbiased (UMVU),[52]which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimatorσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is better than thes2{\textstyle s^{2}}in terms of themean squared error(MSE) criterion. In finite samples boths2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}have scaledchi-squared distributionwith(n− 1)degrees of freedom:s2∼σ2n−1⋅χn−12,σ^2∼σ2n⋅χn−12.{\displaystyle s^{2}\sim {\frac {\sigma ^{2}}{n-1}}\cdot \chi _{n-1}^{2},\qquad {\hat {\sigma }}^{2}\sim {\frac {\sigma ^{2}}{n}}\cdot \chi _{n-1}^{2}.}The first of these expressions shows that the variance ofs2{\textstyle s^{2}}is equal to2σ4/(n−1){\textstyle 2\sigma ^{4}/(n-1)}, which is slightly greater than theσσ-element of the inverse Fisher information matrixI−1{\displaystyle \textstyle {\mathcal {I}}^{-1}}, which is2σ4/n{\textstyle 2\sigma ^{4}/n}. Thus,s2{\textstyle s^{2}}is not an efficient estimator forσ2{\textstyle \sigma ^{2}}, and moreover, sinces2{\textstyle s^{2}}is UMVU, we can conclude that the finite-sample efficient estimator forσ2{\textstyle \sigma ^{2}}does not exist. Applying the asymptotic theory, both estimatorss2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}are consistent, that is they converge in probability toσ2{\textstyle \sigma ^{2}}as the sample sizen→∞{\textstyle n\rightarrow \infty }. The two estimators are also both asymptotically normal:n(σ^2−σ2)≃n(s2−σ2)→dN(0,2σ4).{\displaystyle {\sqrt {n}}({\hat {\sigma }}^{2}-\sigma ^{2})\simeq {\sqrt {n}}(s^{2}-\sigma ^{2})\,\xrightarrow {d} \,{\mathcal {N}}(0,2\sigma ^{4}).}In particular, both estimators are asymptotically efficient forσ2{\textstyle \sigma ^{2}}. 
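A quick Monte Carlo comparison of the two variance estimators under the mean-squared-error criterion (the sample size, replication count, and seed are assumptions), illustrating the claim above that the biased 1/n estimator beats the Bessel-corrected s² in MSE.

```python
import random

rng = random.Random(3)
true_var = 4.0
n, reps = 10, 100_000

mse_mle = mse_unbiased = 0.0
for _ in range(reps):
    xs = [rng.gauss(0.0, true_var ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    mse_mle += (ss / n - true_var) ** 2              # biased MLE, divides by n
    mse_unbiased += (ss / (n - 1) - true_var) ** 2   # Bessel-corrected s^2

print(mse_mle / reps, mse_unbiased / reps)   # the 1/n estimator has the lower MSE
```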
ByCochran's theorem, for normal distributions the sample meanμ^{\displaystyle \textstyle {\hat {\mu }}}and the sample variances2areindependent, which means there can be no gain in considering theirjoint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence betweenμ^{\displaystyle \textstyle {\hat {\mu }}}andscan be employed to construct the so-calledt-statistic:t=μ^−μs/n=x¯−μ1n(n−1)∑(xi−x¯)2∼tn−1{\displaystyle t={\frac {{\hat {\mu }}-\mu }{s/{\sqrt {n}}}}={\frac {{\overline {x}}-\mu }{\sqrt {{\frac {1}{n(n-1)}}\sum (x_{i}-{\overline {x}})^{2}}}}\sim t_{n-1}}This quantitythas theStudent's t-distributionwith(n− 1)degrees of freedom, and it is anancillary statistic(independent of the value of the parameters). Inverting the distribution of thist-statistics will allow us to construct theconfidence intervalforμ;[53]similarly, inverting theχ2distribution of the statistics2will give us the confidence interval forσ2:[54]μ∈[μ^−tn−1,1−α/2sn,μ^+tn−1,1−α/2sn]{\displaystyle \mu \in \left[{\hat {\mu }}-t_{n-1,1-\alpha /2}{\frac {s}{\sqrt {n}}},\,{\hat {\mu }}+t_{n-1,1-\alpha /2}{\frac {s}{\sqrt {n}}}\right]}σ2∈[n−1χn−1,1−α/22s2,n−1χn−1,α/22s2]{\displaystyle \sigma ^{2}\in \left[{\frac {n-1}{\chi _{n-1,1-\alpha /2}^{2}}}s^{2},\,{\frac {n-1}{\chi _{n-1,\alpha /2}^{2}}}s^{2}\right]}wheretk,pandχ2k,pare thepthquantilesof thet- andχ2-distributions respectively. These confidence intervals are of theconfidence level1 −α, meaning that the true valuesμandσ2fall outside of these intervals with probability (orsignificance level)α. In practice people usually takeα= 5%, resulting in the 95% confidence intervals. The confidence interval forσcan be found by taking the square root of the interval bounds forσ2. Approximate formulas can be derived from the asymptotic distributions ofμ^{\displaystyle \textstyle {\hat {\mu }}}ands2:μ∈[μ^−|zα/2|ns,μ^+|zα/2|ns]{\displaystyle \mu \in \left[{\hat {\mu }}-{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s,\,{\hat {\mu }}+{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s\right]}σ2∈[s2−2|zα/2|ns2,s2+2|zα/2|ns2]{\displaystyle \sigma ^{2}\in \left[s^{2}-{\sqrt {2}}{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s^{2},\,s^{2}+{\sqrt {2}}{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s^{2}\right]}The approximate formulas become valid for large values ofn, and are more convenient for the manual calculation since the standard normal quantileszα/2do not depend onn. In particular, the most popular value ofα= 5%, results in|z0.025| =1.96. Normality tests assess the likelihood that the given data set {x1, ...,xn} comes from a normal distribution. Typically thenull hypothesisH0is that the observations are distributed normally with unspecified meanμand varianceσ2, versus the alternativeHathat the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below: Diagnostic plotsare more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis. Goodness-of-fit tests: Moment-based tests: Tests based on the empirical distribution function: Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered: The formulas for the non-linear-regression cases are summarized in theconjugate priorarticle. 
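A sketch of the approximate large-n confidence intervals given above, using only the standard library (the true parameters, seed, and α = 5% are assumptions). The exact finite-sample intervals would use Student's t and χ² quantiles in place of the standard normal quantile z.

```python
import math
import random
from statistics import NormalDist

rng = random.Random(11)
data = [rng.gauss(5.0, 2.0) for _ in range(400)]
n = len(data)

xbar = sum(data) / n
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)   # unbiased sample variance
s = math.sqrt(s2)

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)             # |z_{alpha/2}| = 1.96 for alpha = 5%

mu_ci = (xbar - z * s / math.sqrt(n), xbar + z * s / math.sqrt(n))
var_ci = (s2 - math.sqrt(2) * z * s2 / math.sqrt(n),
          s2 + math.sqrt(2) * z * s2 / math.sqrt(n))

print(mu_ci)    # should cover mu = 5 in about 95% of repetitions
print(var_ci)   # should cover sigma^2 = 4 in about 95% of repetitions (large n)
```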
The following auxiliary formula is useful for simplifying theposteriorupdate equations, which otherwise become fairly tedious. a(x−y)2+b(x−z)2=(a+b)(x−ay+bza+b)2+aba+b(y−z)2{\displaystyle a(x-y)^{2}+b(x-z)^{2}=(a+b)\left(x-{\frac {ay+bz}{a+b}}\right)^{2}+{\frac {ab}{a+b}}(y-z)^{2}} This equation rewrites the sum of two quadratics inxby expanding the squares, grouping the terms inx, andcompleting the square. Note the following about the complex constant factors attached to some of the terms: A similar formula can be written for the sum of two vector quadratics: Ifx,y,zare vectors of lengthk, andAandBaresymmetric,invertible matricesof sizek×k{\textstyle k\times k}, then (y−x)′A(y−x)+(x−z)′B(x−z)=(x−c)′(A+B)(x−c)+(y−z)′(A−1+B−1)−1(y−z){\displaystyle {\begin{aligned}&(\mathbf {y} -\mathbf {x} )'\mathbf {A} (\mathbf {y} -\mathbf {x} )+(\mathbf {x} -\mathbf {z} )'\mathbf {B} (\mathbf {x} -\mathbf {z} )\\={}&(\mathbf {x} -\mathbf {c} )'(\mathbf {A} +\mathbf {B} )(\mathbf {x} -\mathbf {c} )+(\mathbf {y} -\mathbf {z} )'(\mathbf {A} ^{-1}+\mathbf {B} ^{-1})^{-1}(\mathbf {y} -\mathbf {z} )\end{aligned}}} where c=(A+B)−1(Ay+Bz){\displaystyle \mathbf {c} =(\mathbf {A} +\mathbf {B} )^{-1}(\mathbf {A} \mathbf {y} +\mathbf {B} \mathbf {z} )} The formx′Axis called aquadratic formand is ascalar:x′Ax=∑i,jaijxixj{\displaystyle \mathbf {x} '\mathbf {A} \mathbf {x} =\sum _{i,j}a_{ij}x_{i}x_{j}}In other words, it sums up all possible combinations of products of pairs of elements fromx, with a separate coefficient for each. In addition, sincexixj=xjxi{\textstyle x_{i}x_{j}=x_{j}x_{i}}, only the sumaij+aji{\textstyle a_{ij}+a_{ji}}matters for any off-diagonal elements ofA, and there is no loss of generality in assuming thatAissymmetric. Furthermore, ifAis symmetric, then the formx′Ay=y′Ax.{\textstyle \mathbf {x} '\mathbf {A} \mathbf {y} =\mathbf {y} '\mathbf {A} \mathbf {x} .} Another useful formula is as follows:∑i=1n(xi−μ)2=∑i=1n(xi−x¯)2+n(x¯−μ)2{\displaystyle \sum _{i=1}^{n}(x_{i}-\mu )^{2}=\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}wherex¯=1n∑i=1nxi.{\textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.} For a set ofi.i.d.normally distributed data pointsXof sizenwhere each individual pointxfollowsx∼N(μ,σ2){\textstyle x\sim {\mathcal {N}}(\mu ,\sigma ^{2})}with knownvarianceσ2, theconjugate priordistribution is also normally distributed. This can be shown more easily by rewriting the variance as theprecision, i.e. using τ = 1/σ2. Then ifx∼N(μ,1/τ){\textstyle x\sim {\mathcal {N}}(\mu ,1/\tau )}andμ∼N(μ0,1/τ0),{\textstyle \mu \sim {\mathcal {N}}(\mu _{0},1/\tau _{0}),}we proceed as follows. 
First, thelikelihood functionis (using the formula above for the sum of differences from the mean): p(X∣μ,τ)=∏i=1nτ2πexp⁡(−12τ(xi−μ)2)=(τ2π)n/2exp⁡(−12τ∑i=1n(xi−μ)2)=(τ2π)n/2exp⁡[−12τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)].{\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{i=1}^{n}{\sqrt {\frac {\tau }{2\pi }}}\exp \left(-{\frac {1}{2}}\tau (x_{i}-\mu )^{2}\right)\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left(-{\frac {1}{2}}\tau \sum _{i=1}^{n}(x_{i}-\mu )^{2}\right)\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left[-{\frac {1}{2}}\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right].\end{aligned}}} Then, we proceed as follows: p(μ∣X)∝p(X∣μ)p(μ)=(τ2π)n/2exp⁡[−12τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)]τ02πexp⁡(−12τ0(μ−μ0)2)∝exp⁡(−12(τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)+τ0(μ−μ0)2))∝exp⁡(−12(nτ(x¯−μ)2+τ0(μ−μ0)2))=exp⁡(−12(nτ+τ0)(μ−nτx¯+τ0μ0nτ+τ0)2+nττ0nτ+τ0(x¯−μ0)2)∝exp⁡(−12(nτ+τ0)(μ−nτx¯+τ0μ0nτ+τ0)2){\displaystyle {\begin{aligned}p(\mu \mid \mathbf {X} )&\propto p(\mathbf {X} \mid \mu )p(\mu )\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left[-{\frac {1}{2}}\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]{\sqrt {\frac {\tau _{0}}{2\pi }}}\exp \left(-{\frac {1}{2}}\tau _{0}(\mu -\mu _{0})^{2}\right)\\&\propto \exp \left(-{\frac {1}{2}}\left(\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)+\tau _{0}(\mu -\mu _{0})^{2}\right)\right)\\&\propto \exp \left(-{\frac {1}{2}}\left(n\tau ({\bar {x}}-\mu )^{2}+\tau _{0}(\mu -\mu _{0})^{2}\right)\right)\\&=\exp \left(-{\frac {1}{2}}(n\tau +\tau _{0})\left(\mu -{\dfrac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\right)^{2}+{\frac {n\tau \tau _{0}}{n\tau +\tau _{0}}}({\bar {x}}-\mu _{0})^{2}\right)\\&\propto \exp \left(-{\frac {1}{2}}(n\tau +\tau _{0})\left(\mu -{\dfrac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\right)^{2}\right)\end{aligned}}} In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involvingμ. The result is thekernelof a normal distribution, with meannτx¯+τ0μ0nτ+τ0{\textstyle {\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}}and precisionnτ+τ0{\textstyle n\tau +\tau _{0}}, i.e. p(μ∣X)∼N(nτx¯+τ0μ0nτ+τ0,1nτ+τ0){\displaystyle p(\mu \mid \mathbf {X} )\sim {\mathcal {N}}\left({\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}},{\frac {1}{n\tau +\tau _{0}}}\right)} This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters: τ0′=τ0+nτμ0′=nτx¯+τ0μ0nτ+τ0x¯=1n∑i=1nxi{\displaystyle {\begin{aligned}\tau _{0}'&=\tau _{0}+n\tau \\[5pt]\mu _{0}'&={\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\\[5pt]{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\end{aligned}}} That is, to combinendata points with total precision ofnτ(or equivalently, total variance ofn/σ2) and mean of valuesx¯{\textstyle {\bar {x}}}, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through aprecision-weighted average, i.e. aweighted averageof the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. 
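These update equations translate directly into code. A minimal sketch of the precision-weighted posterior for the mean with known variance (the prior hyperparameters, true parameters, and seed are assumptions):

```python
import random

def posterior_mean_update(data, tau, mu0, tau0):
    """Conjugate update for an unknown mean with known precision tau = 1/sigma^2."""
    n = len(data)
    xbar = sum(data) / n
    tau_post = tau0 + n * tau                                   # precisions add
    mu_post = (n * tau * xbar + tau0 * mu0) / (n * tau + tau0)  # precision-weighted average
    return mu_post, tau_post

rng = random.Random(5)
sigma = 2.0
data = [rng.gauss(3.0, sigma) for _ in range(50)]

mu_post, tau_post = posterior_mean_update(data, tau=1 / sigma**2, mu0=0.0, tau0=0.25)
print(mu_post, 1 / tau_post)   # posterior mean pulled toward the data mean near 3
```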
(For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.) The above formula reveals why it is more convenient to doBayesian analysisofconjugate priorsfor the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas σ02′=1nσ2+1σ02μ0′=nx¯σ2+μ0σ02nσ2+1σ02x¯=1n∑i=1nxi{\displaystyle {\begin{aligned}{\sigma _{0}^{2}}'&={\frac {1}{{\frac {n}{\sigma ^{2}}}+{\frac {1}{\sigma _{0}^{2}}}}}\\[5pt]\mu _{0}'&={\frac {{\frac {n{\bar {x}}}{\sigma ^{2}}}+{\frac {\mu _{0}}{\sigma _{0}^{2}}}}{{\frac {n}{\sigma ^{2}}}+{\frac {1}{\sigma _{0}^{2}}}}}\\[5pt]{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\end{aligned}}} For a set ofi.i.d.normally distributed data pointsXof sizenwhere each individual pointxfollowsx∼N(μ,σ2){\textstyle x\sim {\mathcal {N}}(\mu ,\sigma ^{2})}with known mean μ, theconjugate priorof thevariancehas aninverse gamma distributionor ascaled inverse chi-squared distribution. The two are equivalent except for having differentparameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ2is as follows: p(σ2∣ν0,σ02)=(σ02ν02)ν0/2Γ(ν02)exp⁡[−ν0σ022σ2](σ2)1+ν02∝exp⁡[−ν0σ022σ2](σ2)1+ν02{\displaystyle p(\sigma ^{2}\mid \nu _{0},\sigma _{0}^{2})={\frac {(\sigma _{0}^{2}{\frac {\nu _{0}}{2}})^{\nu _{0}/2}}{\Gamma \left({\frac {\nu _{0}}{2}}\right)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\propto {\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}} Thelikelihood functionfrom above, written in terms of the variance, is: p(X∣μ,σ2)=(12πσ2)n/2exp⁡[−12σ2∑i=1n(xi−μ)2]=(12πσ2)n/2exp⁡[−S2σ2]{\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\sigma ^{2})&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}\right]\\&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {S}{2\sigma ^{2}}}\right]\end{aligned}}} where S=∑i=1n(xi−μ)2.{\displaystyle S=\sum _{i=1}^{n}(x_{i}-\mu )^{2}.} Then: p(σ2∣X)∝p(X∣σ2)p(σ2)=(12πσ2)n/2exp⁡[−S2σ2](σ02ν02)ν02Γ(ν02)exp⁡[−ν0σ022σ2](σ2)1+ν02∝(1σ2)n/21(σ2)1+ν02exp⁡[−S2σ2+−ν0σ022σ2]=1(σ2)1+ν0+n2exp⁡[−ν0σ02+S2σ2]{\displaystyle {\begin{aligned}p(\sigma ^{2}\mid \mathbf {X} )&\propto p(\mathbf {X} \mid \sigma ^{2})p(\sigma ^{2})\\&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {S}{2\sigma ^{2}}}\right]{\frac {(\sigma _{0}^{2}{\frac {\nu _{0}}{2}})^{\frac {\nu _{0}}{2}}}{\Gamma \left({\frac {\nu _{0}}{2}}\right)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\\&\propto \left({\frac {1}{\sigma ^{2}}}\right)^{n/2}{\frac {1}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\exp \left[-{\frac {S}{2\sigma ^{2}}}+{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]\\&={\frac {1}{(\sigma ^{2})^{1+{\frac {\nu _{0}+n}{2}}}}}\exp \left[-{\frac {\nu _{0}\sigma 
_{0}^{2}+S}{2\sigma ^{2}}}\right]\end{aligned}}} The above is also a scaled inverse chi-squared distribution where ν0′=ν0+nν0′σ02′=ν0σ02+∑i=1n(xi−μ)2{\displaystyle {\begin{aligned}\nu _{0}'&=\nu _{0}+n\\\nu _{0}'{\sigma _{0}^{2}}'&=\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-\mu )^{2}\end{aligned}}} or equivalently ν0′=ν0+nσ02′=ν0σ02+∑i=1n(xi−μ)2ν0+n{\displaystyle {\begin{aligned}\nu _{0}'&=\nu _{0}+n\\{\sigma _{0}^{2}}'&={\frac {\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{\nu _{0}+n}}\end{aligned}}} Reparameterizing in terms of aninverse gamma distribution, the result is: α′=α+n2β′=β+∑i=1n(xi−μ)22{\displaystyle {\begin{aligned}\alpha '&=\alpha +{\frac {n}{2}}\\\beta '&=\beta +{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2}}\end{aligned}}} For a set ofi.i.d.normally distributed data pointsXof sizenwhere each individual pointxfollowsx∼N(μ,σ2){\textstyle x\sim {\mathcal {N}}(\mu ,\sigma ^{2})}with unknown mean μ and unknownvarianceσ2, a combined (multivariate)conjugate prioris placed over the mean and variance, consisting of anormal-inverse-gamma distribution. Logically, this originates as follows: The priors are normally defined as follows: p(μ∣σ2;μ0,n0)∼N(μ0,σ2/n0)p(σ2;ν0,σ02)∼Iχ2(ν0,σ02)=IG(ν0/2,ν0σ02/2){\displaystyle {\begin{aligned}p(\mu \mid \sigma ^{2};\mu _{0},n_{0})&\sim {\mathcal {N}}(\mu _{0},\sigma ^{2}/n_{0})\\p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})&\sim I\chi ^{2}(\nu _{0},\sigma _{0}^{2})=IG(\nu _{0}/2,\nu _{0}\sigma _{0}^{2}/2)\end{aligned}}} The update equations can be derived, and look as follows: x¯=1n∑i=1nxiμ0′=n0μ0+nx¯n0+nn0′=n0+nν0′=ν0+nν0′σ02′=ν0σ02+∑i=1n(xi−x¯)2+n0nn0+n(μ0−x¯)2{\displaystyle {\begin{aligned}{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\\\mu _{0}'&={\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\\n_{0}'&=n_{0}+n\\\nu _{0}'&=\nu _{0}+n\\\nu _{0}'{\sigma _{0}^{2}}'&=\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\end{aligned}}} The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update forν0′σ02′{\textstyle \nu _{0}'{\sigma _{0}^{2}}'}is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean. 
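A sketch of these normal-inverse-gamma update equations in code (the hyperparameter values, true parameters, and seed are assumptions):

```python
import random

def nig_update(data, mu0, n0, nu0, s0sq):
    """Normal-inverse-gamma conjugate update following the equations above."""
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)
    mu0_new = (n0 * mu0 + n * xbar) / (n0 + n)
    n0_new = n0 + n
    nu0_new = nu0 + n
    # The interaction term handles the deviation between prior mean and data mean.
    s0sq_new = (nu0 * s0sq + ss + (n0 * n / (n0 + n)) * (mu0 - xbar) ** 2) / nu0_new
    return mu0_new, n0_new, nu0_new, s0sq_new

rng = random.Random(9)
data = [rng.gauss(1.0, 3.0) for _ in range(200)]
print(nig_update(data, mu0=0.0, n0=1.0, nu0=1.0, s0sq=1.0))
# posterior location near 1, posterior scale parameter near sigma^2 = 9
```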
The prior distributions arep(μ∣σ2;μ0,n0)∼N(μ0,σ2/n0)=12πσ2n0exp⁡(−n02σ2(μ−μ0)2)∝(σ2)−1/2exp⁡(−n02σ2(μ−μ0)2)p(σ2;ν0,σ02)∼Iχ2(ν0,σ02)=IG(ν0/2,ν0σ02/2)=(σ02ν0/2)ν0/2Γ(ν0/2)exp⁡[−ν0σ022σ2](σ2)1+ν0/2∝(σ2)−(1+ν0/2)exp⁡[−ν0σ022σ2].{\displaystyle {\begin{aligned}p(\mu \mid \sigma ^{2};\mu _{0},n_{0})&\sim {\mathcal {N}}(\mu _{0},\sigma ^{2}/n_{0})={\frac {1}{\sqrt {2\pi {\frac {\sigma ^{2}}{n_{0}}}}}}\exp \left(-{\frac {n_{0}}{2\sigma ^{2}}}(\mu -\mu _{0})^{2}\right)\\&\propto (\sigma ^{2})^{-1/2}\exp \left(-{\frac {n_{0}}{2\sigma ^{2}}}(\mu -\mu _{0})^{2}\right)\\p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})&\sim I\chi ^{2}(\nu _{0},\sigma _{0}^{2})=IG(\nu _{0}/2,\nu _{0}\sigma _{0}^{2}/2)\\&={\frac {(\sigma _{0}^{2}\nu _{0}/2)^{\nu _{0}/2}}{\Gamma (\nu _{0}/2)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+\nu _{0}/2}}}\\&\propto {(\sigma ^{2})^{-(1+\nu _{0}/2)}}\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right].\end{aligned}}} Therefore, the joint prior is p(μ,σ2;μ0,n0,ν0,σ02)=p(μ∣σ2;μ0,n0)p(σ2;ν0,σ02)∝(σ2)−(ν0+3)/2exp⁡[−12σ2(ν0σ02+n0(μ−μ0)2)].{\displaystyle {\begin{aligned}p(\mu ,\sigma ^{2};\mu _{0},n_{0},\nu _{0},\sigma _{0}^{2})&=p(\mu \mid \sigma ^{2};\mu _{0},n_{0})\,p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})\\&\propto (\sigma ^{2})^{-(\nu _{0}+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+n_{0}(\mu -\mu _{0})^{2}\right)\right].\end{aligned}}} Thelikelihood functionfrom the section above with known variance is: p(X∣μ,σ2)=(12πσ2)n/2exp⁡[−12σ2(∑i=1n(xi−μ)2)]{\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\sigma ^{2})&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\sum _{i=1}^{n}(x_{i}-\mu )^{2}\right)\right]\end{aligned}}} Writing it in terms of variance rather than precision, we get:p(X∣μ,σ2)=(12πσ2)n/2exp⁡[−12σ2(∑i=1n(xi−x¯)2+n(x¯−μ)2)]∝σ2−n/2exp⁡[−12σ2(S+n(x¯−μ)2)]{\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\sigma ^{2})&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]\\&\propto {\sigma ^{2}}^{-n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(S+n({\bar {x}}-\mu )^{2}\right)\right]\end{aligned}}}whereS=∑i=1n(xi−x¯)2.{\textstyle S=\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}.} Therefore, the posterior is (dropping the hyperparameters as conditioning factors):p(μ,σ2∣X)∝p(μ,σ2)p(X∣μ,σ2)∝(σ2)−(ν0+3)/2exp⁡[−12σ2(ν0σ02+n0(μ−μ0)2)]σ2−n/2exp⁡[−12σ2(S+n(x¯−μ)2)]=(σ2)−(ν0+n+3)/2exp⁡[−12σ2(ν0σ02+S+n0(μ−μ0)2+n(x¯−μ)2)]=(σ2)−(ν0+n+3)/2exp⁡[−12σ2(ν0σ02+S+n0nn0+n(μ0−x¯)2+(n0+n)(μ−n0μ0+nx¯n0+n)2)]∝(σ2)−1/2exp⁡[−n0+n2σ2(μ−n0μ0+nx¯n0+n)2]×(σ2)−(ν0/2+n/2+1)exp⁡[−12σ2(ν0σ02+S+n0nn0+n(μ0−x¯)2)]=Nμ∣σ2(n0μ0+nx¯n0+n,σ2n0+n)⋅IGσ2(12(ν0+n),12(ν0σ02+S+n0nn0+n(μ0−x¯)2)).{\displaystyle {\begin{aligned}p(\mu ,\sigma ^{2}\mid \mathbf {X} )&\propto p(\mu ,\sigma ^{2})\,p(\mathbf {X} \mid \mu ,\sigma ^{2})\\&\propto (\sigma ^{2})^{-(\nu _{0}+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+n_{0}(\mu -\mu _{0})^{2}\right)\right]{\sigma ^{2}}^{-n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(S+n({\bar {x}}-\mu )^{2}\right)\right]\\&=(\sigma ^{2})^{-(\nu _{0}+n+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+n_{0}(\mu -\mu _{0})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]\\&=(\sigma ^{2})^{-(\nu _{0}+n+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar 
{x}})^{2}+(n_{0}+n)\left(\mu -{\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\right)^{2}\right)\right]\\&\propto (\sigma ^{2})^{-1/2}\exp \left[-{\frac {n_{0}+n}{2\sigma ^{2}}}\left(\mu -{\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\right)^{2}\right]\\&\quad \times (\sigma ^{2})^{-(\nu _{0}/2+n/2+1)}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\right)\right]\\&={\mathcal {N}}_{\mu \mid \sigma ^{2}}\left({\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}},{\frac {\sigma ^{2}}{n_{0}+n}}\right)\cdot {\rm {IG}}_{\sigma ^{2}}\left({\frac {1}{2}}(\nu _{0}+n),{\frac {1}{2}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\right)\right).\end{aligned}}}

In other words, the posterior distribution has the form of a product of a normal distribution over p(μ|σ²) times an inverse gamma distribution over p(σ²), with parameters that are the same as the update equations above.

The occurrence of the normal distribution in practical problems can be loosely classified into four categories: distributions that are exactly normal, distributions that are approximately normal because of the central limit theorem, distributions that are merely assumed or modeled as normal, and regression problems in which the errors are treated as normal.

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell; examples of such quantities are the velocity components of the molecules in an ideal gas and the position of a particle undergoing diffusion.

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

There are statistical methods to empirically test that assumption; see the above Normality tests section.

John Ioannidis argued that using normally distributed standard deviations as standards for validating research findings leaves falsifiable predictions about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way, and phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed, since the portion of falsifiable predictions against which there is evidence may lie, and in some cases does lie, in the non-normally distributed parts of the range of falsifiable predictions. It also baselessly dismisses hypotheses for which none of the falsifiable predictions are normally distributed, treating them as if they were unfalsifiable, when in fact they do make falsifiable predictions.
It is argued by Ioannidis that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the failure of the journals to take into account empirical falsifications of non-normally distributed predictions, and not because mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[58]

In computer simulations, especially in applications of the Monte Carlo method, it is often desirable to generate values that are normally distributed. Algorithms for this purpose typically generate standard normal deviates, since a N(μ, σ²) variate can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

The standard normal cumulative distribution function is widely used in scientific and statistical computing. The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy. By symmetry, Φ(−x) = 1 − Φ(x), so an approximation valid for x ≥ 0 extends to the whole real line.

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p = Φ(z), the simplest approximation for the quantile function is
{\displaystyle z=\Phi ^{-1}(p)=5.5556\left[1-\left({\frac {1-p}{p}}\right)^{0.1186}\right],\qquad p\geq 1/2.}

This approximation delivers a maximum absolute error for z of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719). For p < 1/2, replace p by 1 − p and change sign. Another, somewhat less accurate, approximation is the single-parameter approximation
{\displaystyle z=-0.4115\left\{{\frac {1-p}{p}}+\log \left[{\frac {1-p}{p}}\right]-1\right\},\qquad p\geq 1/2.}

The latter has served to derive a simple approximation for the loss integral of the normal distribution, defined by
{\displaystyle {\begin{aligned}L(z)&=\int _{z}^{\infty }(u-z)\varphi (u)\,du=\int _{z}^{\infty }[1-\Phi (u)]\,du\\[5pt]L(z)&\approx {\begin{cases}0.4115\left({\dfrac {p}{1-p}}\right)-z,&p<1/2,\\\\0.4115\left({\dfrac {1-p}{p}}\right),&p\geq 1/2.\end{cases}}\\[5pt]{\text{or, equivalently,}}\\L(z)&\approx {\begin{cases}0.4115\left\{1-\log \left[{\frac {p}{1-p}}\right]\right\},&p<1/2,\\\\0.4115{\dfrac {1-p}{p}},&p\geq 1/2.\end{cases}}\end{aligned}}}

This approximation is particularly accurate for the far right tail (maximum error of 10−3 for z ≥ 1.4). Highly accurate approximations for the cumulative distribution function, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005). Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, a small relative error on the whole domain, for the cumulative distribution function Φ and the quantile function Φ−1 as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
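Shore's simplest quantile approximation above is easy to implement and spot-check; here is a small Python sketch (the reference quantiles are standard tabulated values, hard-coded for comparison):

```python
def shore_quantile(p):
    """Shore's (1982) simple approximation to the standard normal
    quantile z = Phi^{-1}(p), as given in the formula above."""
    if p < 0.5:                      # lower tail: use symmetry
        return -shore_quantile(1.0 - p)
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)

# Spot-check against well-known quantiles; errors stay within ~0.026.
for p, z_true in [(0.90, 1.2816), (0.975, 1.9600), (0.999, 3.0902)]:
    z = shore_quantile(p)
    print(f"p={p}: approx={z:.4f}  true={z_true:.4f}  error={abs(z - z_true):.4f}")
```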
Some authors[68][69] attribute the discovery of the normal distribution to de Moivre, who in 1738[note 2] published in the second edition of his The Doctrine of Chances the study of the coefficients in the binomial expansion of (a + b)^n. De Moivre proved that the middle term in this expansion has the approximate magnitude of {\textstyle 2^{n}/{\sqrt {2\pi n}}}, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is {\textstyle -{\frac {2\ell \ell }{n}}}."[70] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[71]

In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, ... to denote the measurements of some unknown quantity V, and sought the most probable estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(M′ − V) · φ(M′′ − V) · ... of obtaining the observed experimental results. In his notation, φΔ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[note 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[72]
{\displaystyle \varphi {\mathit {\Delta }}={\frac {h}{\surd \pi }}\,e^{-\mathrm {hh} \Delta \Delta },}
where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method.[73]

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[note 4] It was Laplace who first posed the problem of aggregating several observations in 1774,[74] although his own solution led to the Laplacian distribution.
It was Laplace who first calculated the value of the integral ∫e^{−t²} dt = √π in 1782, providing the normalization constant for the normal distribution.[75] For this accomplishment, Gauss acknowledged the priority of Laplace.[76] Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[77]

It is of interest to note that in 1809 the Irish-American mathematician Robert Adrain published two insightful but flawed derivations of the normal probability law, simultaneously and independently from Gauss.[78] His works remained largely unnoticed by the scientific community, until in 1871 they were exhumed by Abbe.[79]

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[80] the number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is
{\displaystyle \operatorname {N} {\frac {1}{\alpha \;{\sqrt {\pi }}}}\;e^{-{\frac {x^{2}}{\alpha ^{2}}}}\,dx.}

Today, the concept is usually known in English as the normal distribution or Gaussian distribution. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual.[81] However, by the end of the 19th century some authors[note 5] had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus normal. Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[82] Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[83]

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation. Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:
{\displaystyle df={\frac {1}{\sqrt {2\sigma ^{2}\pi }}}e^{-(x-m)^{2}/(2\sigma ^{2})}\,dx.}

The term "standard normal", which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947) Introduction to Mathematical Statistics and A. M. Mood (1950) Introduction to the Theory of Statistics.[84]
https://en.wikipedia.org/wiki/Gaussian_distribution
In machine learning, backpropagation is a gradient estimation method commonly used for training a neural network, in which the gradient is used to compute the network's parameter updates. It is an efficient application of the chain rule to neural networks. Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived through dynamic programming.[1][2][3]

Strictly speaking, the term backpropagation refers only to an algorithm for efficiently computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm – including how the gradient is used, such as by stochastic gradient descent, or as an intermediate step in a more complicated optimizer, such as Adaptive Moment Estimation.[4] The main disadvantages of these optimization algorithms are convergence to local minima, exploding gradients, vanishing gradients, and weak control of the learning rate. Hessian and quasi-Hessian optimizers address only the local-minimum convergence problem, at the cost of longer computation. These problems caused researchers to develop hybrid[5] and fractional[6] optimization algorithms.

Backpropagation had multiple discoveries and partial discoveries, with a tangled history and terminology. See the history section for details. Some other names for the technique include "reverse mode of automatic differentiation" or "reverse accumulation".[7]

Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function. Denote by x the input, y the target output, C the loss function, L the number of layers, W^l = (w^l_{jk}) the weights between layer l − 1 and layer l, and f^l the activation function at layer l. In the derivation of backpropagation, other intermediate quantities are used by introducing them as needed below. Bias terms are not treated specially since they correspond to a weight with a fixed input of 1. For backpropagation the specific loss function and activation functions do not matter as long as they and their derivatives can be evaluated efficiently. Traditional activation functions include sigmoid, tanh, and ReLU. Swish,[8] mish,[9] and other activation functions have since been proposed as well.

The overall network is a combination of function composition and matrix multiplication:
{\displaystyle g(x):=f^{L}(W^{L}f^{L-1}(W^{L-1}\cdots f^{1}(W^{1}x)\cdots )).}

For a training set there will be a set of input–output pairs, {\displaystyle \left\{(x_{i},y_{i})\right\}}. For each input–output pair {\displaystyle (x_{i},y_{i})} in the training set, the loss of the model on that pair is the cost of the difference between the predicted output {\displaystyle g(x_{i})} and the target output {\displaystyle y_{i}}: {\displaystyle C(y_{i},g(x_{i}))}.

Note the distinction: during model evaluation the weights are fixed while the inputs vary (and the target output may be unknown), and the network ends with the output layer (it does not include the loss function). During model training the input–output pair is fixed while the weights vary, and the network ends with the loss function. Backpropagation computes the gradient for a fixed input–output pair {\displaystyle (x_{i},y_{i})}, where the weights {\displaystyle w_{jk}^{l}} can vary. Each individual component of the gradient, {\displaystyle \partial C/\partial w_{jk}^{l},} can be computed by the chain rule; but doing this separately for each weight is inefficient.
Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values, by computing the gradient of each layer – specifically the gradient of the weighted input of each layer, denoted by {\displaystyle \delta ^{l}} – from back to front. Informally, the key point is that the only way a weight in {\displaystyle W^{l}} affects the loss is through its effect on the next layer, and it does so linearly; {\displaystyle \delta ^{l}} is therefore the only data needed to compute the gradients of the weights at layer {\displaystyle l}, after which the gradients of the weights of the previous layer can be computed from {\displaystyle \delta ^{l-1}}, and so on recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layer {\displaystyle l}, it is unnecessary to recompute all derivatives on later layers {\displaystyle l+1,l+2,\ldots } each time. Second, it avoids unnecessary intermediate calculations, because at each stage it directly computes the gradient of the weights with respect to the ultimate output (the loss), rather than unnecessarily computing the derivatives of the values of hidden layers with respect to changes in weights {\displaystyle \partial a_{j'}^{l'}/\partial w_{jk}^{l}}.

Backpropagation can be expressed for simple feedforward networks in terms of matrix multiplication, or more generally in terms of the adjoint graph. For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication.[c] Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer from right to left – "backwards" – with the gradient of the weights between each layer being a simple modification of the partial products (the "backwards propagated error"). Given an input–output pair {\displaystyle (x,y)}, the loss is
{\displaystyle C(y,f^{L}(W^{L}f^{L-1}(W^{L-1}\cdots f^{1}(W^{1}x)\cdots ))).}

To compute this, one starts with the input {\displaystyle x} and works forward; denote the weighted input of each hidden layer as {\displaystyle z^{l}} and the output of hidden layer {\displaystyle l} as the activation {\displaystyle a^{l}}. For backpropagation, the activations {\displaystyle a^{l}} as well as the derivatives {\displaystyle (f^{l})'} (evaluated at {\displaystyle z^{l}}) must be cached for use during the backwards pass. The derivative of the loss in terms of the inputs is given by the chain rule; note that each term is a total derivative, evaluated at the value of the network (at each node) on the input {\displaystyle x}:
{\displaystyle {\frac {dC}{dx}}={\frac {dC}{da^{L}}}\cdot {\frac {da^{L}}{dz^{L}}}\cdot {\frac {dz^{L}}{da^{L-1}}}\cdot {\frac {da^{L-1}}{dz^{L-1}}}\cdots {\frac {da^{1}}{dz^{1}}}\cdot {\frac {dz^{1}}{dx}}={\frac {dC}{da^{L}}}\cdot {\frac {da^{L}}{dz^{L}}}\cdot W^{L}\cdot {\frac {da^{L-1}}{dz^{L-1}}}\cdot W^{L-1}\cdots {\frac {da^{1}}{dz^{1}}}\cdot W^{1},}
where {\displaystyle {\frac {da^{L}}{dz^{L}}}} is a diagonal matrix.
These terms are: the derivative of the loss function {\displaystyle dC/da^{L}};[d] the derivatives of the activation functions {\displaystyle (f^{L})',\ldots ,(f^{1})'}, evaluated at the weighted inputs;[e] and the matrices of weights {\displaystyle W^{L},\ldots ,W^{1}}.[f]

The gradient {\displaystyle \nabla } is the transpose of the derivative of the output in terms of the input, so the matrices are transposed and the order of multiplication is reversed, but the entries are the same:
{\displaystyle \nabla _{x}C=(W^{1})^{T}\cdot (f^{1})'\cdots (W^{L-1})^{T}\cdot (f^{L-1})'\cdot (W^{L})^{T}\cdot (f^{L})'\cdot \nabla _{a^{L}}C.}

Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights is not just a subexpression: there is an extra multiplication. Introducing the auxiliary quantity {\displaystyle \delta ^{l}} for the partial products (multiplying from right to left), interpreted as the "error at level {\displaystyle l}" and defined as the gradient of the input values at level {\displaystyle l}:
{\displaystyle \delta ^{l}:=(f^{l})'\cdot (W^{l+1})^{T}\cdots (W^{L-1})^{T}\cdot (f^{L-1})'\cdot (W^{L})^{T}\cdot (f^{L})'\cdot \nabla _{a^{L}}C.}

Note that {\displaystyle \delta ^{l}} is a vector, of length equal to the number of nodes in level {\displaystyle l}; each component is interpreted as the "cost attributable to (the value of) that node". The gradient of the weights in layer {\displaystyle l} is then
{\displaystyle \nabla _{W^{l}}C=\delta ^{l}(a^{l-1})^{T}.}

The factor of {\displaystyle a^{l-1}} is because the weights {\displaystyle W^{l}} between level {\displaystyle l-1} and {\displaystyle l} affect level {\displaystyle l} proportionally to the inputs (activations): the inputs are fixed, the weights vary. The {\displaystyle \delta ^{l}} can easily be computed recursively, going from right to left, as
{\displaystyle \delta ^{l-1}:=(f^{l-1})'\cdot (W^{l})^{T}\cdot \delta ^{l}.}

The gradients of the weights can thus be computed using a few matrix multiplications for each level; this is backpropagation. Compared with naively computing each component of the gradient forwards, there are two key differences with backpropagation: the recursion for {\displaystyle \delta ^{l-1}} avoids repeating the multiplications through the layers beyond {\displaystyle l}, and starting the multiplication from {\displaystyle \nabla _{a^{L}}C} means that each step multiplies a vector by a matrix rather than a matrix by a matrix, which is much cheaper. For more general graphs, and other advanced variations, backpropagation can be understood in terms of automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode").[7]
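As a concrete, unoptimized illustration of these matrix equations, the following Python/NumPy sketch performs one cached forward pass and one backward pass for a small fully connected network with sigmoid activations and squared-error loss. The function name, shapes, and the omission of biases are simplifying assumptions of this sketch, not a standard API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(Ws, x, y):
    """Gradients dC/dW^l for a sigmoid network with loss
    C = 0.5 * ||a^L - y||^2, following the equations above."""
    # Forward pass: cache weighted inputs z^l and activations a^l.
    a, activations, zs = x, [x], []
    for W in Ws:
        z = W @ a
        zs.append(z)
        a = sigmoid(z)
        activations.append(a)
    # Backward pass: delta^L, then delta^{l-1} = f'(z^{l-1}) * (W^l)^T delta^l.
    delta = sigmoid(zs[-1]) * (1 - sigmoid(zs[-1])) * (activations[-1] - y)
    grads = [None] * len(Ws)
    grads[-1] = np.outer(delta, activations[-2])   # dC/dW^L = delta^L (a^{L-1})^T
    for l in range(len(Ws) - 2, -1, -1):
        delta = sigmoid(zs[l]) * (1 - sigmoid(zs[l])) * (Ws[l + 1].T @ delta)
        grads[l] = np.outer(delta, activations[l])
    return grads

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
g = backprop(Ws, rng.normal(size=3), rng.normal(size=2))
print([gi.shape for gi in g])   # [(4, 3), (2, 4)] -- one gradient per W^l
```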
The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output.[10]

To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuition about the relationship between the actual output of a neuron and the correct output for a particular training example. Consider a simple neural network with two input units, one output unit and no hidden units, in which each neuron uses a linear output (unlike most work on neural networks, in which the mapping from inputs to outputs is non-linear)[g] that is the weighted sum of its inputs. Initially, before training, the weights will be set randomly. Then the neuron learns from training examples, which in this case consist of a set of tuples {\displaystyle (x_{1},x_{2},t)} where {\displaystyle x_{1}} and {\displaystyle x_{2}} are the inputs to the network and t is the correct output (the output the network should produce given those inputs, when it has been trained). The initial network, given {\displaystyle x_{1}} and {\displaystyle x_{2}}, will compute an output y that likely differs from t (given random weights).

A loss function {\displaystyle L(t,y)} is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function; for classification the categorical cross-entropy can be used. As an example, consider a regression problem using the square error as a loss:
{\displaystyle E=L(t,y)=(t-y)^{2},}
where E is the discrepancy or error.

Consider the network on a single training case: {\displaystyle (1,1,0)}. Thus, the inputs {\displaystyle x_{1}} and {\displaystyle x_{2}} are 1 and 1 respectively and the correct output, t, is 0. Now if the relation is plotted between the network's output y on the horizontal axis and the error E on the vertical axis, the result is a parabola. The minimum of the parabola corresponds to the output y which minimizes the error E. For a single training case, the minimum also touches the horizontal axis, which means the error will be zero and the network can produce an output y that exactly matches the target output t. Therefore, the problem of mapping inputs to outputs can be reduced to an optimization problem of finding a function that will produce the minimal error. However, the output of a neuron depends on the weighted sum of all its inputs:
{\displaystyle y=x_{1}w_{1}+x_{2}w_{2},}
where {\displaystyle w_{1}} and {\displaystyle w_{2}} are the weights on the connection from the input units to the output unit. Therefore, the error also depends on the incoming weights to the neuron, which is ultimately what needs to be changed in the network to enable learning.

In this example, upon injecting the training data {\displaystyle (1,1,0)}, the loss function becomes
{\displaystyle E=(t-y)^{2}=y^{2}=(x_{1}w_{1}+x_{2}w_{2})^{2}=(w_{1}+w_{2})^{2}.}

Then, the loss function {\displaystyle E} takes the form of a parabolic cylinder with its base directed along {\displaystyle w_{1}=-w_{2}}. Since all sets of weights that satisfy {\displaystyle w_{1}=-w_{2}} minimize the loss function, in this case additional constraints are required to converge to a unique solution; the sketch below illustrates this degeneracy numerically. Additional constraints could either be generated by setting specific conditions to the weights, or by injecting additional training data.
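A tiny numerical sketch of this degeneracy (the learning rate and starting weights are chosen arbitrarily):

```python
# Gradient descent on E = (w1 + w2)^2 for the training case (1, 1, 0).
# The iterates converge to *some* point on the line w1 = -w2,
# which point depending on the initialization.
w1, w2, eta = 3.0, 1.0, 0.1
for _ in range(100):
    grad = 2.0 * (w1 + w2)        # dE/dw1 = dE/dw2 = 2*(w1 + w2)
    w1 -= eta * grad
    w2 -= eta * grad
print(w1, w2, (w1 + w2) ** 2)     # w1 + w2 -> 0, but w1 - w2 stays at 2.0
```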
One commonly used algorithm to find the set of weights that minimizes the error is gradient descent. By backpropagation, the steepest descent direction of the loss function with respect to the present synaptic weights is calculated. Then, the weights can be modified along the steepest descent direction, and the error is minimized in an efficient way.

The gradient descent method involves calculating the derivative of the loss function with respect to the weights of the network. This is normally done using backpropagation. Assuming one output neuron,[h] the squared error function is
{\displaystyle E=L(t,y)=(t-y)^{2},}
where t is the target output and y the actual output. For each neuron {\displaystyle j}, its output {\displaystyle o_{j}} is defined as
{\displaystyle o_{j}=\varphi ({\text{net}}_{j})=\varphi \left(\sum _{k=1}^{n}w_{kj}o_{k}\right),}
where the activation function {\displaystyle \varphi } is non-linear and differentiable over the activation region (the ReLU is not differentiable at one point). A historically used activation function is the logistic function:
{\displaystyle \varphi (z)={\frac {1}{1+e^{-z}}},}
which has a convenient derivative of
{\displaystyle \varphi '(z)=\varphi (z)(1-\varphi (z)).}

The input {\displaystyle {\text{net}}_{j}} to a neuron is the weighted sum of outputs {\displaystyle o_{k}} of previous neurons. If the neuron is in the first layer after the input layer, the {\displaystyle o_{k}} of the input layer are simply the inputs {\displaystyle x_{k}} to the network. The number of input units to the neuron is {\displaystyle n}. The variable {\displaystyle w_{kj}} denotes the weight between neuron {\displaystyle k} of the previous layer and neuron {\displaystyle j} of the current layer.

Calculating the partial derivative of the error with respect to a weight {\displaystyle w_{ij}} is done using the chain rule twice:
{\displaystyle {\frac {\partial E}{\partial w_{ij}}}={\frac {\partial E}{\partial o_{j}}}\cdot {\frac {\partial o_{j}}{\partial {\text{net}}_{j}}}\cdot {\frac {\partial {\text{net}}_{j}}{\partial w_{ij}}}.}

In the last factor of the right-hand side of the above, only one term in the sum {\displaystyle {\text{net}}_{j}} depends on {\displaystyle w_{ij}}, so that
{\displaystyle {\frac {\partial {\text{net}}_{j}}{\partial w_{ij}}}=o_{i}.}

If the neuron is in the first layer after the input layer, {\displaystyle o_{i}} is just {\displaystyle x_{i}}. The derivative of the output of neuron {\displaystyle j} with respect to its input is simply the partial derivative of the activation function:
{\displaystyle {\frac {\partial o_{j}}{\partial {\text{net}}_{j}}}=\varphi '({\text{net}}_{j}),}
which for the logistic activation function is
{\displaystyle {\frac {\partial o_{j}}{\partial {\text{net}}_{j}}}=o_{j}(1-o_{j}).}

This is the reason why backpropagation requires that the activation function be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g. in AlexNet.)

The first factor is straightforward to evaluate if the neuron is in the output layer, because then {\displaystyle o_{j}=y}. If half of the square error is used as loss function we can rewrite it as
{\displaystyle {\frac {\partial E}{\partial o_{j}}}={\frac {\partial }{\partial y}}\,{\frac {1}{2}}(t-y)^{2}=y-t.}

However, if {\displaystyle j} is in an arbitrary inner layer of the network, finding the derivative of {\displaystyle E} with respect to {\displaystyle o_{j}} is less obvious. Considering {\displaystyle E} as a function with the inputs being all neurons {\displaystyle L=\{u,v,\dots ,w\}} receiving input from neuron {\displaystyle j}, and taking the total derivative with respect to {\displaystyle o_{j}}, a recursive expression for the derivative is obtained:
{\displaystyle {\frac {\partial E}{\partial o_{j}}}=\sum _{\ell \in L}\left({\frac {\partial E}{\partial o_{\ell }}}\cdot {\frac {\partial o_{\ell }}{\partial {\text{net}}_{\ell }}}\cdot {\frac {\partial {\text{net}}_{\ell }}{\partial o_{j}}}\right)=\sum _{\ell \in L}\left({\frac {\partial E}{\partial o_{\ell }}}\cdot {\frac {\partial o_{\ell }}{\partial {\text{net}}_{\ell }}}\cdot w_{j\ell }\right).}

Therefore, the derivative with respect to {\displaystyle o_{j}} can be calculated if all the derivatives with respect to the outputs {\displaystyle o_{\ell }} of the next layer – the ones closer to the output neuron – are known. [Note, if any of the neurons in set {\displaystyle L} were not connected to neuron {\displaystyle j}, they would be independent of {\displaystyle w_{ij}} and the corresponding partial derivative under the summation would vanish to 0.]

Combining the factors above, we obtain
{\displaystyle {\frac {\partial E}{\partial w_{ij}}}=o_{i}\,\delta _{j}}
with
{\displaystyle \delta _{j}={\frac {\partial E}{\partial o_{j}}}\cdot {\frac {\partial o_{j}}{\partial {\text{net}}_{j}}}={\begin{cases}(o_{j}-t_{j})\,o_{j}(1-o_{j})&{\text{if }}j{\text{ is an output neuron,}}\\\left(\sum _{\ell \in L}w_{j\ell }\,\delta _{\ell }\right)o_{j}(1-o_{j})&{\text{if }}j{\text{ is an inner neuron,}}\end{cases}}}
if {\displaystyle \varphi } is the logistic function and the error is the square error.

To update the weight {\displaystyle w_{ij}} using gradient descent, one must choose a learning rate, {\displaystyle \eta >0}. The change in weight needs to reflect the impact on {\displaystyle E} of an increase or decrease in {\displaystyle w_{ij}}. If {\displaystyle {\frac {\partial E}{\partial w_{ij}}}>0}, an increase in {\displaystyle w_{ij}} increases {\displaystyle E}; conversely, if {\displaystyle {\frac {\partial E}{\partial w_{ij}}}<0}, an increase in {\displaystyle w_{ij}} decreases {\displaystyle E}. The new {\displaystyle \Delta w_{ij}} is added to the old weight, and the product of the learning rate and the gradient, multiplied by {\displaystyle -1}, guarantees that {\displaystyle w_{ij}} changes in a way that always decreases {\displaystyle E}.
In other words, in the equation immediately below, {\displaystyle -\eta {\frac {\partial E}{\partial w_{ij}}}} always changes {\displaystyle w_{ij}} in such a way that {\displaystyle E} is decreased:
{\displaystyle \Delta w_{ij}=-\eta {\frac {\partial E}{\partial w_{ij}}}=-\eta \,o_{i}\,\delta _{j}.}

Using a Hessian matrix of second-order derivatives of the error function, the Levenberg–Marquardt algorithm often converges faster than first-order gradient descent, especially when the topology of the error function is complicated.[11][12] It may also find solutions in smaller node counts for which other methods might not converge.[12] The Hessian can be approximated by the Fisher information matrix.[13]

As an example, consider a simple feedforward network. At the {\displaystyle l}-th layer, we have
{\displaystyle x_{i}^{(l)},\quad a_{i}^{(l)}=f(x_{i}^{(l)}),\quad x_{i}^{(l+1)}=\sum _{j}W_{ij}a_{j}^{(l)},}
where {\displaystyle x} are the pre-activations, {\displaystyle a} are the activations, and {\displaystyle W} is the weight matrix. Given a loss function {\displaystyle L}, the first-order backpropagation states that
{\displaystyle {\frac {\partial L}{\partial a_{j}^{(l)}}}=\sum _{i}W_{ij}{\frac {\partial L}{\partial x_{i}^{(l+1)}}},\quad {\frac {\partial L}{\partial x_{j}^{(l)}}}=f'(x_{j}^{(l)}){\frac {\partial L}{\partial a_{j}^{(l)}}},}
and the second-order backpropagation states that
{\displaystyle {\frac {\partial ^{2}L}{\partial a_{j_{1}}^{(l)}\partial a_{j_{2}}^{(l)}}}=\sum _{i_{1}i_{2}}W_{i_{1}j_{1}}W_{i_{2}j_{2}}{\frac {\partial ^{2}L}{\partial x_{i_{1}}^{(l+1)}\partial x_{i_{2}}^{(l+1)}}},\quad {\frac {\partial ^{2}L}{\partial x_{j_{1}}^{(l)}\partial x_{j_{2}}^{(l)}}}=f'(x_{j_{1}}^{(l)})f'(x_{j_{2}}^{(l)}){\frac {\partial ^{2}L}{\partial a_{j_{1}}^{(l)}\partial a_{j_{2}}^{(l)}}}+\delta _{j_{1}j_{2}}f''(x_{j_{1}}^{(l)}){\frac {\partial L}{\partial a_{j_{1}}^{(l)}}},}
where {\displaystyle \delta _{j_{1}j_{2}}} is the Kronecker delta. Arbitrary-order derivatives in arbitrary computational graphs can be computed with backpropagation, but with more complex expressions for higher orders.

The loss function is a function that maps values of one or more variables onto a real number intuitively representing some "cost" associated with those values. For backpropagation, the loss function calculates the difference between the network output and its expected output, after a training example has propagated through the network. The mathematical expression of the loss function must fulfill two conditions in order for it to be possibly used in backpropagation.[14] The first is that it can be written as an average {\textstyle E={\frac {1}{n}}\sum _{x}E_{x}} over error functions {\textstyle E_{x}}, for {\textstyle n} individual training examples, {\textstyle x}. The reason for this assumption is that the backpropagation algorithm calculates the gradient of the error function for a single training example, which needs to be generalized to the overall error function. The second assumption is that it can be written as a function of the outputs from the neural network. Let {\displaystyle y,y'} be vectors in {\displaystyle \mathbb {R} ^{n}}. Select an error function {\displaystyle E(y,y')} measuring the difference between two outputs.
The standard choice is the square of the Euclidean distance between the vectors {\displaystyle y} and {\displaystyle y'}:
{\displaystyle E(y,y')={\tfrac {1}{2}}\lVert y-y'\rVert ^{2}.}
The error function over {\textstyle n} training examples can then be written as an average of losses over individual examples:
{\displaystyle E={\frac {1}{2n}}\sum _{x}\lVert (y(x)-y'(x))\rVert ^{2}.}

Backpropagation had been derived repeatedly, as it is essentially an efficient application of the chain rule (first written down by Gottfried Wilhelm Leibniz in 1676)[17][18] to neural networks. The terminology "back-propagating error correction" was introduced in 1962 by Frank Rosenblatt, but he did not know how to implement this.[19] In any case, he only studied neurons whose outputs were discrete levels, which only had zero derivatives, making backpropagation impossible.

Precursors to backpropagation appeared in optimal control theory since the 1950s. Yann LeCun et al. credit 1950s work by Pontryagin and others in optimal control theory, especially the adjoint state method, for being a continuous-time version of backpropagation.[20] Hecht-Nielsen[21] credits the Robbins–Monro algorithm (1951)[22] and Arthur Bryson and Yu-Chi Ho's Applied Optimal Control (1969) as presages of backpropagation. Other precursors were Henry J. Kelley 1960,[1] and Arthur E. Bryson (1961).[2] In 1962, Stuart Dreyfus published a simpler derivation based only on the chain rule.[23][24][25] In 1973, he adapted parameters of controllers in proportion to error gradients.[26] Unlike modern backpropagation, these precursors used standard Jacobian matrix calculations from one stage to the previous one, neither addressing direct links across several stages nor potential additional efficiency gains due to network sparsity.[27]

The ADALINE (1960) learning algorithm was gradient descent with a squared error loss for a single layer. The first multilayer perceptron (MLP) with more than one layer trained by stochastic gradient descent[22] was published in 1967 by Shun'ichi Amari.[28] The MLP had 5 layers, with 2 learnable layers, and it learned to classify patterns not linearly separable.[27]

Modern backpropagation was first published by Seppo Linnainmaa as "reverse mode of automatic differentiation" (1970)[29] for discrete connected networks of nested differentiable functions.[30][31][32]

In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard.[33][34] Werbos described how he developed backpropagation in an interview. In 1971, during his PhD work, he developed backpropagation to mathematicize Freud's "flow of psychic energy". He faced repeated difficulty in publishing the work, only managing in 1981.[35] He also claimed that "the first practical application of back-propagation was for estimating a dynamic model to predict nationalism and social communications in 1974" by him.[36]

Around 1982,[35]: 376 David E. Rumelhart independently developed[37]: 252 backpropagation and taught the algorithm to others in his research circle. He did not cite previous work as he was unaware of them.
He published the algorithm first in a 1985 paper, then in a 1986 Nature paper presenting an experimental analysis of the technique.[38] These papers became highly cited, contributed to the popularization of backpropagation, and coincided with the resurging research interest in neural networks during the 1980s.[10][39][40]

In 1985, the method was also described by David Parker.[41][42] Yann LeCun proposed an alternative form of backpropagation for neural networks in his PhD thesis in 1987.[43]

Gradient descent took a considerable amount of time to reach acceptance. Some early objections were: there were no guarantees that gradient descent could reach a global minimum, only a local minimum; and neurons were "known" by physiologists to emit discrete signals (0/1), not continuous ones, and with discrete signals there is no gradient to take. See the interview with Geoffrey Hinton,[35] who was awarded the 2024 Nobel Prize in Physics for his contributions to the field.[44]

Contributing to the acceptance were several applications in training neural networks via backpropagation, sometimes achieving popularity outside the research circles. In 1987, NETtalk learned to convert English text into pronunciation. Sejnowski tried training it with both backpropagation and a Boltzmann machine, but found backpropagation significantly faster, so he used it for the final NETtalk.[35]: 324 The NETtalk program became a popular success, appearing on the Today show.[45]

In 1989, Dean A. Pomerleau published ALVINN, a neural network trained to drive autonomously using backpropagation.[46] The LeNet was published in 1989 to recognize handwritten zip codes. In 1992, TD-Gammon achieved top human level play in backgammon. It was a reinforcement learning agent with a two-layer neural network, trained by backpropagation.[47] In 1993, Eric Wan won an international pattern recognition contest through backpropagation.[48][49]

During the 2000s backpropagation fell out of favour,[citation needed] but returned in the 2010s, benefiting from cheap, powerful GPU-based computing systems. This has been especially so in speech recognition, machine vision, natural language processing, and language structure learning research, in which it has been used to explain a variety of phenomena related to first[50] and second language learning.[51][52]

Error backpropagation has been suggested to explain human brain event-related potential (ERP) components like the N400 and P600.[53] In 2023, a backpropagation algorithm was implemented on a photonic processor by a team at Stanford University.[54]
https://en.wikipedia.org/wiki/Backpropagation
Aneural networkis a group of interconnected units calledneuronsthat send signals to one another. Neurons can be eitherbiological cellsormathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks. In the context of biology, a neural network is a population of biologicalneuronschemically connected to each other bysynapses. A given neuron can be connected to hundreds of thousands of synapses.[1]Each neuron sends and receiveselectrochemicalsignals calledaction potentialsto its connected neighbors. A neuron can serve anexcitatoryrole, amplifying and propagating signals it receives, or aninhibitoryrole, suppressing signals instead.[1] Populations of interconnected neurons that are smaller than neural networks are calledneural circuits. Very large interconnected networks are calledlarge scale brain networks, and many of these together formbrainsandnervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and acrossneuromuscular junctionstomuscle cells, where they cause contraction and thereby motion.[2] In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3]today they are almost always implemented insoftware. Neuronsin an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4]The "signal" input to each neuron is a number, specifically alinear combinationof the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to itsactivation function. The behavior of the network depends on the strengths (orweights) of the connections between neurons. A network is trained by modifying these weights throughempirical risk minimizationorbackpropagationin order to fit some preexisting dataset.[5] The termdeep neural networkrefers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems inartificial intelligence, and have thereby found applications in many disciplines, includingpredictive modeling,adaptive control,facial recognition,handwriting recognition,general game playing, andgenerative AI. The theoretical base for contemporary neural networks was independently proposed byAlexander Bainin 1873[6]andWilliam Jamesin 1890.[7]Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949,Donald HebbdescribedHebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8] Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach ofconnectionism. However, starting with the invention of theperceptron, a simple artificial neural network, byWarren McCullochandWalter Pittsin 1943,[9]followed by the implementation of one in hardware byFrank Rosenblattin 1957,[3]artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
https://en.wikipedia.org/wiki/Neural_network#Limitations_of_backpropagation
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors. Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation[1]
{\displaystyle \left(A-\lambda I\right)^{k}\mathbf {v} =0,}
where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real. When k = 1, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, Av = λv. Any eigenvalue λ of A has ordinary[note 1] eigenvectors associated to it, for if k is the smallest integer such that (A − λI)^k v = 0 for a generalized eigenvector v, then (A − λI)^{k−1} v is an ordinary eigenvector. The value k can always be taken as less than or equal to n. In particular, (A − λI)^n v = 0 for all generalized eigenvectors v associated with λ.

For each eigenvalue λ of A, the kernel ker(A − λI) consists of all eigenvectors associated with λ (along with 0), called the eigenspace of λ, while the vector space ker((A − λI)^n) consists of all generalized eigenvectors, and is called the generalized eigenspace. The geometric multiplicity of λ is the dimension of its eigenspace. The algebraic multiplicity of λ is the dimension of its generalized eigenspace. The latter terminology is justified by the equation
{\displaystyle p_{A}(z)=\det(zI-A)=\prod _{i=1}^{k}(z-\lambda _{i})^{\alpha _{i}},}
where det is the determinant function, the λ_i are all the distinct eigenvalues of A and the α_i are the corresponding algebraic multiplicities. The function p_A(z) is the characteristic polynomial of A. So the algebraic multiplicity is the multiplicity of the eigenvalue as a zero of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to n, the degree of the characteristic polynomial. The equation p_A(z) = 0 is called the characteristic equation, as its roots are exactly the eigenvalues of A. By the Cayley–Hamilton theorem, A itself obeys the same equation: p_A(A) = 0.[note 2] As a consequence, the columns of the matrix {\textstyle \prod _{i\neq j}(A-\lambda _{i}I)^{\alpha _{i}}} must be either 0 or generalized eigenvectors of the eigenvalue λ_j, since they are annihilated by {\displaystyle (A-\lambda _{j}I)^{\alpha _{j}}}. In fact, the column space is the generalized eigenspace of λ_j.

Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of C^n can be chosen consisting of generalized eigenvectors. More particularly, this basis {v_i}^n_{i=1} can be chosen and organized so that if v_i and v_j have the same eigenvalue, then so does v_k for each k between i and j; and so that if v_i is not an ordinary eigenvector, with eigenvalue λ_i, then (A − λ_i I)v_i = v_{i−1} (in particular, v_1 must be an ordinary eigenvector). If these basis vectors are placed as the column vectors of a matrix V = [v_1 v_2 ⋯ v_n], then V can be used to convert A to its Jordan normal form:
{\displaystyle V^{-1}AV={\begin{bmatrix}\lambda _{1}&\beta _{1}&0&\cdots &0\\0&\lambda _{2}&\beta _{2}&\cdots &0\\\vdots &&\ddots &\ddots &\vdots \\0&\cdots &0&\lambda _{n-1}&\beta _{n-1}\\0&\cdots &\cdots &0&\lambda _{n}\end{bmatrix}},}
where the λ_i are the eigenvalues, β_i = 1 if (A − λ_{i+1})v_{i+1} = v_i and β_i = 0 otherwise.

More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then (W^{−1}AW − λI)^k W^{−1}v = 0. Thus λ is an eigenvalue of W^{−1}AW with generalized eigenvector W^{−1}v. That is, similar matrices have the same eigenvalues.

The adjoint M* of a complex matrix M is the transpose of the conjugate of M: M* = (M̄)^T. A square matrix A is called normal if it commutes with its adjoint: A*A = AA*. It is called Hermitian if it is equal to its adjoint: A* = A. All Hermitian matrices are normal. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric.
When applied to column vectors, the adjoint can be used to define the canonical inner product on C^n: w ⋅ v = w*v.[note 3] Normal, Hermitian, and real-symmetric matrices have several useful properties: every generalized eigenvector of a normal matrix is an ordinary eigenvector; eigenvectors of distinct eigenvalues of a normal matrix are orthogonal; every eigenvalue of a Hermitian matrix is real; and a real-symmetric matrix has an orthonormal basis of real eigenvectors. It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.

Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario: it reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.

For the problem of solving the linear equation Av = b where A is invertible, the matrix condition number κ(A^{−1}, b) is given by ||A||_op ||A^{−1}||_op, where || · ||_op is the operator norm subordinate to the normal Euclidean norm on C^n. Since this number is independent of b and is the same for A and A^{−1}, it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the absolute value of the ratio of the largest singular value of A to its smallest. If A is unitary, then ||A||_op = ||A^{−1}||_op = 1, so κ(A) = 1. For general matrices, the operator norm is often difficult to calculate. For this reason, other matrix norms are commonly used to estimate the condition number.

For the eigenvalue problem, Bauer and Fike proved that if λ is an eigenvalue for a diagonalizable n × n matrix A with eigenvector matrix V, then the absolute error in calculating λ is bounded by the product of κ(V) and the absolute error in A.[2] As a result, the condition number for finding λ is κ(λ, A) = κ(V) = ||V||_op ||V^{−1}||_op. If A is normal, then V is unitary, and κ(λ, A) = 1. Thus the eigenvalue problem for all normal matrices is well-conditioned. The condition number for the problem of finding the eigenspace of a normal matrix A corresponding to an eigenvalue λ has been shown to be inversely proportional to the minimum distance between λ and the other distinct eigenvalues of A.[3] In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.

The most reliable and most widely used algorithm for computing eigenvalues is John G. F. Francis' and Vera N. Kublanovskaya's QR algorithm, considered one of the top ten algorithms of the 20th century.[4] Any monic polynomial is the characteristic polynomial of its companion matrix. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials, as in the sketch below.
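The following Python snippet sketches that equivalence: it builds the companion matrix of an arbitrarily chosen cubic and reads off its eigenvalues as the polynomial's roots (this is also the approach numpy.roots takes internally):

```python
import numpy as np

# Roots of p(z) = z^3 - 6z^2 + 11z - 6 = (z-1)(z-2)(z-3) as the
# eigenvalues of its companion matrix.
coeffs = [1.0, -6.0, 11.0, -6.0]        # monic: z^3 - 6z^2 + 11z - 6
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)              # subdiagonal of ones
C[:, -1] = -np.array(coeffs[:0:-1])     # last column: -a_0, -a_1, -a_2
print(np.sort(np.linalg.eigvals(C)))    # approximately [1. 2. 3.]
```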
The Abel–Ruffini theorem shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are iterative, producing better approximate solutions with each iteration.

Some algorithms produce every eigenvalue, others will produce a few, or only one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue λ of a matrix A has been identified, it can be used to either direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has λ as a solution. Redirection is usually accomplished by shifting: replacing A with A − μI for some constant μ. The eigenvalue found for A − μI must have μ added back in to get an eigenvalue for A. For example, for power iteration, μ = λ. Power iteration finds the largest eigenvalue in absolute value, so even when λ is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, inverse iteration based methods find the lowest eigenvalue, so μ is chosen well away from λ and hopefully closer to some other eigenvalue. Reduction can be accomplished by restricting A to the column space of the matrix A − λI, which A carries to itself. Since A − λI is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.

If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with μ set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to μ. For small matrices, an alternative is to look at the column space of the product of A − λ′I for each of the other eigenvalues λ′.

A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.[5][6][7][8][9] If A is an {\textstyle n\times n} normal matrix with eigenvalues λ_i(A) and corresponding unit eigenvectors v_i whose component entries are v_{i,j}, let A_j be the {\textstyle (n-1)\times (n-1)} matrix obtained by removing the j-th row and column from A, and let λ_k(A_j) be its k-th eigenvalue. Then
{\displaystyle |v_{i,j}|^{2}\prod _{k=1,k\neq i}^{n}(\lambda _{i}(A)-\lambda _{k}(A))=\prod _{k=1}^{n-1}(\lambda _{i}(A)-\lambda _{k}(A_{j}))}

If {\displaystyle p,p_{j}} are the characteristic polynomials of {\displaystyle A} and {\displaystyle A_{j}}, the formula can be re-written as
{\displaystyle |v_{i,j}|^{2}={\frac {p_{j}(\lambda _{i}(A))}{p'(\lambda _{i}(A))}},}
assuming the derivative {\displaystyle p'} is not zero at {\displaystyle \lambda _{i}(A)}.
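The identity is easy to check numerically. In the Python sketch below (the matrix and the indices i, j are arbitrary choices), a random symmetric matrix plays the role of the normal matrix A:

```python
import numpy as np

# Check: |v_{i,j}|^2 * prod_{k != i}(lam_i - lam_k)
#        == prod_k (lam_i - lam_k(A_j))  for a normal (here symmetric) A.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                           # symmetric => normal
lam, V = np.linalg.eigh(A)                  # columns of V: unit eigenvectors
i, j = 2, 0
Aj = np.delete(np.delete(A, j, axis=0), j, axis=1)
lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(4) if k != i])
rhs = np.prod(lam[i] - np.linalg.eigvalsh(Aj))
print(lhs, rhs)                             # agree up to rounding error
```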
Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like Gaussian elimination to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An upper Hessenberg matrix is a square matrix for which all entries below the subdiagonal are zero. A lower Hessenberg matrix is one for which all entries above the superdiagonal are zero. Matrices that are both upper and lower Hessenberg are tridiagonal. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or Hermitian, then the resulting matrix will be tridiagonal. When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix. For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.[11]

Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.

While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include the following.

Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then {\textstyle \det(\lambda I-T)=\prod _{i}(\lambda -T_{ii})}. Thus the eigenvalues of T are its diagonal entries.

If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots. For example, a projection is a square matrix P satisfying P² = P. The roots of the corresponding scalar polynomial equation, λ² = λ, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.

Another example is a matrix A that satisfies A² = α²I for some scalar α. The eigenvalues must be ±α. The projection operators
{\displaystyle P_{+}={\frac {1}{2}}\left(I+{\frac {A}{\alpha }}\right),\qquad P_{-}={\frac {1}{2}}\left(I-{\frac {A}{\alpha }}\right)}
satisfy
{\displaystyle AP_{+}=\alpha P_{+},\qquad AP_{-}=-\alpha P_{-}}
and
{\displaystyle P_{+}P_{-}=P_{-}P_{+}=0,\qquad P_{+}+P_{-}=I.}
The column spaces of P₊ and P₋ are the eigenspaces of A corresponding to +α and −α, respectively.

For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While a common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive. For the 2×2 matrix
{\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}},}
the characteristic polynomial is
{\displaystyle \det(\lambda I-A)=\lambda ^{2}-(a+d)\lambda +(ad-bc)=\lambda ^{2}-\lambda \,{\rm {tr}}(A)+\det(A).}
Thus the eigenvalues can be found by using the quadratic formula:
{\displaystyle \lambda ={\frac {{\rm {tr}}(A)\pm {\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}}{2}}.}
Defining {\textstyle {\rm {gap}}\left(A\right)={\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}} to be the distance between the two eigenvalues, it is straightforward to calculate
{\displaystyle {\frac {\partial \lambda }{\partial a}}={\frac {1}{2}}\left(1\pm {\frac {a-d}{{\rm {gap}}(A)}}\right),\qquad {\frac {\partial \lambda }{\partial b}}=\pm {\frac {c}{{\rm {gap}}(A)}},}
with similar formulas for c and d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.

Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If λ₁, λ₂ are the eigenvalues, then (A − λ₁I)(A − λ₂I) = (A − λ₂I)(A − λ₁I) = 0, so the columns of (A − λ₂I) are annihilated by (A − λ₁I) and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)
For example, suppose
{\displaystyle A={\begin{bmatrix}4&3\\-2&-3\end{bmatrix}};}
then tr(A) = 4 − 3 = 1 and det(A) = 4(−3) − 3(−2) = −6, so the characteristic equation is
{\displaystyle \lambda ^{2}-\lambda -6=(\lambda -3)(\lambda +2)=0,}
and the eigenvalues are 3 and −2. Now,
{\displaystyle A-3I={\begin{bmatrix}1&3\\-2&-6\end{bmatrix}},\qquad A+2I={\begin{bmatrix}6&3\\-2&-1\end{bmatrix}}.}
In both matrices, the columns are multiples of each other, so either column can be used. Thus, (1, −2) can be taken as an eigenvector associated with the eigenvalue −2, and (3, −1) as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.

The characteristic equation of a symmetric 3×3 matrix A is a cubic in λ. This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and β is an eigenvalue of B if and only if α = pβ + q is an eigenvalue of A. Letting {\textstyle q={\rm {tr}}(A)/3} and {\textstyle p=\left({\rm {tr}}\left((A-qI)^{2}\right)/6\right)^{1/2}} gives
{\displaystyle B={\frac {1}{p}}(A-qI),\qquad \det(\beta I-B)=\beta ^{3}-3\beta -\det(B).}
The substitution β = 2cos θ and some simplification using the identity cos 3θ = 4cos³θ − 3cos θ reduces the equation to cos 3θ = det(B)/2. Thus
{\displaystyle \beta _{k}=2\cos \left({\frac {1}{3}}\left[\arccos \left({\frac {\det(B)}{2}}\right)+2\pi k\right]\right),\qquad k=0,1,2.}
If det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm;[17] a sketch of it in code is given at the end of this section.

Once again, the eigenvectors of A can be obtained by recourse to the Cayley–Hamilton theorem. If α₁, α₂, α₃ are distinct eigenvalues of A, then (A − α₁I)(A − α₂I)(A − α₃I) = 0. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if α₃ = α₁, then (A − α₁I)²(A − α₂I) = 0 and (A − α₂I)(A − α₁I)² = 0. Thus the generalized eigenspace of α₁ is spanned by the columns of A − α₂I while the ordinary eigenspace is spanned by the columns of (A − α₁I)(A − α₂I). The ordinary eigenspace of α₂ is spanned by the columns of (A − α₁I)².

For example, for one such matrix with eigenvalues 1 (of multiplicity 2) and −1, the products above yield (−4, −4, 4) as an eigenvector for −1, and (4, 2, −2) as an eigenvector for 1; (2, 3, −1) and (6, 5, −3) are both generalized eigenvectors associated with 1, either one of which could be combined with (−4, −4, 4) and (4, 2, −2) to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.

If a 3×3 matrix {\displaystyle A} is normal, then the cross-product can be used to find eigenvectors. If {\displaystyle \lambda } is an eigenvalue of {\displaystyle A}, then the null space of {\displaystyle A-\lambda I} is perpendicular to its column space. The cross product of two independent columns of {\displaystyle A-\lambda I} will be in the null space. That is, it will be an eigenvector associated with {\displaystyle \lambda }. Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it.

If {\displaystyle A-\lambda I} does not contain two independent columns but is not 0, the cross-product can still be used. In this case {\displaystyle \lambda } is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose {\displaystyle \mathbf {v} } is a non-zero column of {\displaystyle A-\lambda I}. Choose an arbitrary vector {\displaystyle \mathbf {u} } not parallel to {\displaystyle \mathbf {v} }.
Thenv×u{\displaystyle \mathbf {v} \times \mathbf {u} }and(v×u)×v{\displaystyle (\mathbf {v} \times \mathbf {u} )\times \mathbf {v} }will be perpendicular tov{\displaystyle \mathbf {v} }and thus will be eigenvectors ofλ{\displaystyle \lambda }. This does not work whenA{\displaystyle A}is not normal, as the null space and column space do not need to be perpendicular for such matrices.
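The trigonometric method and the cross-product trick above can be sketched in a few lines of Python. This is an illustrative, unhardened implementation (it assumes p ≠ 0, i.e. A is not a multiple of the identity, and uses an isolated eigenvalue for the cross-product step); the example matrix is arbitrary:

```python
import numpy as np

def symmetric_3x3_eigenvalues(A):
    """Eigenvalues of a real symmetric 3x3 matrix via the trigonometric
    method above: shift by q = tr(A)/3, scale by p, solve
    cos(3*theta) = det(B)/2, then alpha = p*beta + q."""
    q = np.trace(A) / 3.0
    p = np.sqrt(np.trace((A - q * np.eye(3)) @ (A - q * np.eye(3))) / 6.0)
    B = (A - q * np.eye(3)) / p
    phi = np.arccos(np.clip(np.linalg.det(B) / 2.0, -1.0, 1.0)) / 3.0
    betas = 2.0 * np.cos(phi + 2.0 * np.pi * np.arange(3) / 3.0)
    return np.sort(p * betas + q)

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
lam = symmetric_3x3_eigenvalues(A)
print(lam, np.sort(np.linalg.eigvalsh(A)))  # the two results should agree

# Cross-product trick for the eigenvector of an isolated eigenvalue:
M = A - lam[2] * np.eye(3)
v = np.cross(M[:, 0], M[:, 1])              # lies in the null space of M
print(A @ v - lam[2] * v)                   # approximately the zero vector
```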
https://en.wikipedia.org/wiki/Eigenvalue_algorithm
In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane. The problem of determining if a pair of sets is linearly separable, and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept.

Let {\displaystyle X_{0}} and {\displaystyle X_{1}} be two sets of points in an n-dimensional Euclidean space. Then {\displaystyle X_{0}} and {\displaystyle X_{1}} are linearly separable if there exist n + 1 real numbers {\displaystyle w_{1},w_{2},\ldots ,w_{n},k}, such that every point {\displaystyle x\in X_{0}} satisfies {\displaystyle \sum _{i=1}^{n}w_{i}x_{i}>k} and every point {\displaystyle x\in X_{1}} satisfies {\displaystyle \sum _{i=1}^{n}w_{i}x_{i}<k}, where {\displaystyle x_{i}} is the {\displaystyle i}-th component of {\displaystyle x}. Equivalently, two sets are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap).[1] In two dimensions this can also be pictured as a linear map sending the points to a line, on which there exists a threshold value k with one set of points mapping above it and the other set below it.

Three non-collinear points in two classes ('+' and '−') are always linearly separable in two dimensions, however the labels are assigned (the all-'+' case is similar to the all-'−' case). However, not all sets of four points, no three collinear, are linearly separable in two dimensions: four points in an XOR-like configuration, with each diagonally opposite pair sharing a label, would need two straight lines and thus are not linearly separable. Notice that three points which are collinear and of the form "+ ··· − ··· +" are also not linearly separable.

Let {\displaystyle T(N,K)} be the number of ways to linearly separate N points (in general position) in K dimensions; then[2]
{\displaystyle T(N,K)=\left\{{\begin{array}{cc}2^{N}&K\geq N\\2\sum _{k=0}^{K-1}{\binom {N-1}{k}}&K<N\end{array}}\right.}
When K is large, {\displaystyle T(N,K)/2^{N}} is very close to one when {\displaystyle N\leq 2K}, but very close to zero when {\displaystyle N>2K}. In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on N points when {\displaystyle N\leq 2K}, but almost certainly not when {\displaystyle N>2K}.

A Boolean function in n variables can be thought of as an assignment of 0 or 1 to each vertex of a Boolean hypercube in n dimensions. This gives a natural division of the vertices into two sets. The Boolean function is said to be linearly separable provided these two sets of points are linearly separable. The number of distinct Boolean functions is {\displaystyle 2^{2^{n}}} where n is the number of variables passed into the function.[3] Such functions are also called linear threshold logic, or perceptrons.
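The counting formula above is short enough to evaluate directly. A small Python sketch (the choice K = 10 is arbitrary) exhibits the sharp transition at N = 2K:

```python
from math import comb

def T(N, K):
    """Number of ways to linearly separate N points in general
    position in K dimensions (the counting formula above)."""
    if K >= N:
        return 2 ** N
    return 2 * sum(comb(N - 1, k) for k in range(K))

# Fraction of all 2^N labelings that are linearly separable, K = 10:
for N in (10, 20, 40):
    print(N, T(N, 10) / 2 ** N)   # 1.0 at N = K, 0.5 at N = 2K, ~0 beyond
```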
The classical theory is summarized in [4], as Knuth claims.[5] The value is known exactly only up to then=9{\displaystyle n=9}case, but the order of magnitude is known quite exactly: it has upper bound2n2−nlog2⁡n+O(n){\displaystyle 2^{n^{2}-n\log _{2}n+O(n)}}and lower bound2n2−nlog2⁡n−O(n){\displaystyle 2^{n^{2}-n\log _{2}n-O(n)}}.[6]

It isco-NP-completeto decide whether a Boolean function given indisjunctiveorconjunctive normal formis linearly separable.[6]

A linear threshold logic gate is a Boolean function defined byn{\displaystyle n}weightsw1,…,wn{\displaystyle w_{1},\dots ,w_{n}}and a thresholdθ{\displaystyle \theta }. It takesn{\displaystyle n}binary inputsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}, and outputs 1 if∑iwixi>θ{\displaystyle \sum _{i}w_{i}x_{i}>\theta }, and otherwise outputs 0. For any fixedn{\displaystyle n}, because there are only finitely many Boolean functions that can be computed by a threshold logic unit, it is possible to set allw1,…,wn,θ{\displaystyle w_{1},\dots ,w_{n},\theta }to be integers. LetW(n){\displaystyle W(n)}be the smallest numberW{\displaystyle W}such that every possible real threshold function ofn{\displaystyle n}variables can be realized using integer weights of absolute value≤W{\displaystyle \leq W}. It is known that[8]12nlog⁡n−2n+o(n)≤log2⁡W(n)≤12nlog⁡n−n+o(n){\displaystyle {\frac {1}{2}}n\log n-2n+o(n)\leq \log _{2}W(n)\leq {\frac {1}{2}}n\log n-n+o(n)}See[9]: Section 11.10for a literature review.

Classifying datais a common task inmachine learning. Suppose some data points, each belonging to one of two sets, are given and we wish to create a model that will decide which set anewdata point will be in. In the case ofsupport vector machines, a data point is viewed as ap-dimensional vector (a list ofpnumbers), and we want to know whether we can separate such points with a (p− 1)-dimensionalhyperplane. This is called alinear classifier. There are many hyperplanes that might classify (separate) the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two sets. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as themaximum-margin hyperplaneand the linear classifier it defines is known as amaximum-margin classifier.

More formally, given some training dataD{\displaystyle {\mathcal {D}}}, a set ofnpoints of the formD={(xi,yi)∣xi∈Rp,yi∈{−1,1}}i=1n{\displaystyle {\mathcal {D}}=\left\{(\mathbf {x} _{i},y_{i})\mid \mathbf {x} _{i}\in \mathbb {R} ^{p},\,y_{i}\in \{-1,1\}\right\}_{i=1}^{n}}where eachyiis either 1 or −1, indicating the set to which the pointxi{\displaystyle \mathbf {x} _{i}}belongs. Eachxi{\displaystyle \mathbf {x} _{i}}is ap-dimensionalrealvector. We want to find the maximum-margin hyperplane that divides the points havingyi=1{\displaystyle y_{i}=1}from those havingyi=−1{\displaystyle y_{i}=-1}. Any hyperplane can be written as the set of pointsx{\displaystyle \mathbf {x} }satisfyingw⋅x−b=0,{\displaystyle \mathbf {w} \cdot \mathbf {x} -b=0,}where⋅{\displaystyle \cdot }denotes thedot productandw{\displaystyle {\mathbf {w} }}the (not necessarily normalized)normal vectorto the hyperplane. The parameterb‖w‖{\displaystyle {\tfrac {b}{\|\mathbf {w} \|}}}determines the offset of the hyperplane from the origin along the normal vectorw{\displaystyle {\mathbf {w} }}. If the training data are linearly separable, we can select two hyperplanes in such a way that they separate the data and there are no points between them, and then try to maximize their distance.
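For small n these counts can be verified by brute force. The sketch below, our own illustration, enumerates all 2^(2^2) = 16 Boolean functions of two variables and tests each against every integer threshold gate with weights and threshold in {−2, …, 2} (a range that is an assumption, but one that suffices for n = 2, consistent with the small W(n) bound quoted above). It reports 14 linearly separable functions; the two exceptions are XOR and XNOR.

    from itertools import product

    inputs = list(product([0, 1], repeat=2))   # (0,0), (0,1), (1,0), (1,1)

    def realizable(truth_table):
        """Does some integer threshold gate w1*x1 + w2*x2 > theta compute this?"""
        for w1, w2, theta in product(range(-2, 3), repeat=3):
            if all((w1 * x1 + w2 * x2 > theta) == bool(out)
                   for (x1, x2), out in zip(inputs, truth_table)):
                return True
        return False

    tables = list(product([0, 1], repeat=4))   # all 16 Boolean functions of 2 variables
    print(sum(realizable(t) for t in tables))  # 14: all but XOR and XNOR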
https://en.wikipedia.org/wiki/Linear_separability
Theroot mean square deviation(RMSD) orroot mean square error(RMSE) is either one of two closely related and frequently used measures of the differences between true or predicted values on the one hand and observed values or anestimatoron the other. Thedeviationis typically simply a difference ofscalars; it can also be generalized to thevector lengthsof adisplacement, as in thebioinformaticsconcept ofroot mean square deviation of atomic positions.

The RMSD of asampleis thequadratic meanof the differences between the observed values and predicted ones. Thesedeviationsare calledresidualswhen the calculations are performed over the data sample that was used for estimation (and are therefore always in reference to an estimate) and are callederrors(or prediction errors) when computed out-of-sample (that is, on the full set, referencing a true value rather than an estimate). The RMSD serves to aggregate the magnitudes of the errors in predictions for various data points into a single measure of predictive power. RMSD is a measure ofaccuracy, to compare forecasting errors of different models for a particular dataset and not between datasets, as it is scale-dependent.[1]

RMSD is always non-negative, and a value of 0 (almost never achieved in practice) would indicate a perfect fit to the data. In general, a lower RMSD is better than a higher one. However, comparisons across different types of data would be invalid because the measure is dependent on the scale of the numbers used. RMSD is the square root of the average of squared errors. The effect of each error on RMSD is proportional to the size of the squared error; thus larger errors have a disproportionately large effect on RMSD. Consequently, RMSD is sensitive tooutliers.[2][3]

The RMSD of anestimatorθ^{\displaystyle {\hat {\theta }}}with respect to an estimated parameterθ{\displaystyle \theta }is defined as the square root of themean squared error:RMSD⁡(θ^)=MSE⁡(θ^)=E⁡((θ^−θ)2).{\displaystyle \operatorname {RMSD} ({\hat {\theta }})={\sqrt {\operatorname {MSE} ({\hat {\theta }})}}={\sqrt {\operatorname {E} (({\hat {\theta }}-\theta )^{2})}}.}For anunbiased estimator, the RMSD is the square root of thevariance, known as thestandard deviation.

IfX1, ...,Xnis a sample of a population with true mean valuex0{\displaystyle x_{0}}, then the RMSD of the sample isRMSD=1n∑i=1n(Xi−x0)2.{\displaystyle \operatorname {RMSD} ={\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-x_{0})^{2}}}.}

The RMSD of predicted valuesy^t{\displaystyle {\hat {y}}_{t}}for timestof aregression'sdependent variableyt,{\displaystyle y_{t},}with variables observed overTtimes, is computed forTdifferent predictions as the square root of the mean of the squares of the deviations:RMSD=∑t=1T(y^t−yt)2T.{\displaystyle \operatorname {RMSD} ={\sqrt {\frac {\sum _{t=1}^{T}({\hat {y}}_{t}-y_{t})^{2}}{T}}}.}(For regressions oncross-sectional data, the subscripttis replaced byiandTis replaced byn.)

In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the "standard". For example, when measuring the average difference between two time seriesx1,t{\displaystyle x_{1,t}}andx2,t{\displaystyle x_{2,t}}, the formula becomesRMSD=∑t=1T(x1,t−x2,t)2T.{\displaystyle \operatorname {RMSD} ={\sqrt {\frac {\sum _{t=1}^{T}(x_{1,t}-x_{2,t})^{2}}{T}}}.}

Normalizing the RMSD facilitates the comparison between datasets or models with different scales. Though there is no consistent means of normalization in the literature, common choices are the mean or the range (defined as the maximum value minus the minimum value) of the measured data:[4]NRMSD=RMSDymax−yminorNRMSD=RMSDy¯.{\displaystyle \mathrm {NRMSD} ={\frac {\mathrm {RMSD} }{y_{\max }-y_{\min }}}\quad {\text{or}}\quad \mathrm {NRMSD} ={\frac {\mathrm {RMSD} }{\bar {y}}}.}This value is commonly referred to as thenormalized root mean square deviationorerror(NRMSD or NRMSE), and often expressed as a percentage, where lower values indicate less residual variance. This is also calledCoefficient of VariationorPercent RMS. In many cases, especially for smaller samples, the sample range is likely to be affected by the size of sample which would hamper comparisons.
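The formulas above translate directly into code. A minimal Python sketch (the helper names and sample data are our own):

    import numpy as np

    def rmsd(predicted, observed):
        """Root mean square deviation between paired value arrays."""
        predicted, observed = np.asarray(predicted), np.asarray(observed)
        return np.sqrt(np.mean((predicted - observed) ** 2))

    def nrmsd(predicted, observed, mode="range"):
        """Normalized RMSD: divide by the range or by the mean of the observations."""
        observed = np.asarray(observed)
        denom = observed.max() - observed.min() if mode == "range" else observed.mean()
        return rmsd(predicted, observed) / denom

    y_true = [2.0, 4.0, 6.0, 8.0]
    y_pred = [2.5, 3.5, 6.5, 7.5]
    print(rmsd(y_pred, y_true))             # 0.5
    print(nrmsd(y_pred, y_true, "range"))   # 0.5 / 6 = 0.0833...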
Another possible method to make the RMSD a more useful comparison measure is to divide the RMSD by theinterquartile range(IQR). Dividing the RMSD by the IQR makes the normalized value less sensitive to extreme values in the target variable:NRMSD=RMSDIQR,IQR=Q3−Q1,{\displaystyle \mathrm {NRMSD} ={\frac {\mathrm {RMSD} }{IQR}},\quad IQR=Q_{3}-Q_{1},}withQ1=CDF−1(0.25){\displaystyle Q_{1}={\text{CDF}}^{-1}(0.25)}andQ3=CDF−1(0.75),{\displaystyle Q_{3}={\text{CDF}}^{-1}(0.75),}where CDF−1is thequantile function.

When normalizing by the mean value of the measurements, the termcoefficient of variation of the RMSD, CV(RMSD)may be used to avoid ambiguity.[5]This is analogous to thecoefficient of variationwith the RMSD taking the place of thestandard deviation.

Some researchers have recommended the use of themean absolute error(MAE) instead of the root mean square deviation. MAE possesses advantages in interpretability over RMSD. MAE is the average of the absolute values of the errors. MAE is fundamentally easier to understand than the square root of the average of squared errors. Furthermore, each error influences MAE in direct proportion to the absolute value of the error, which is not the case for RMSD.[2]
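The differing sensitivity of MAE and RMSD to outliers is easy to see numerically. In the sketch below (our own toy data), a single large error leaves MAE at the average absolute size but inflates RMSD far more:

    import numpy as np

    errors = np.array([1.0, 1.0, 1.0, 1.0])        # four unit-sized errors
    with_outlier = np.array([1.0, 1.0, 1.0, 9.0])  # one large error

    mae = lambda e: np.mean(np.abs(e))
    rmse = lambda e: np.sqrt(np.mean(e ** 2))

    print(mae(errors), rmse(errors))               # 1.0 1.0
    print(mae(with_outlier), rmse(with_outlier))   # 3.0 4.58...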
https://en.wikipedia.org/wiki/Root_mean_square_deviation
Inmathematics, areal-valued functionis calledconvexif theline segmentbetween any two distinct points on thegraph of the functionlies above or on the graph between the two points. Equivalently, a function is convex if itsepigraph(the set of points on or above the graph of the function) is aconvex set. In simple terms, a convex function graph is shaped like a cup∪{\displaystyle \cup }(or a straight line like a linear function), while aconcave function's graph is shaped like a cap∩{\displaystyle \cap }.

A twice-differentiablefunction of a single variable is convexif and only ifitssecond derivativeis nonnegative on its entiredomain.[1]Well-known examples of convex functions of a single variable include alinear functionf(x)=cx{\displaystyle f(x)=cx}(wherec{\displaystyle c}is areal number), aquadratic functioncx2{\displaystyle cx^{2}}(c{\displaystyle c}as a nonnegative real number) and anexponential functioncex{\displaystyle ce^{x}}(c{\displaystyle c}as a nonnegative real number).

Convex functions play an important role in many areas of mathematics. They are especially important in the study ofoptimizationproblems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on anopen sethas no more than oneminimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in thecalculus of variations. Inprobability theory, a convex function applied to theexpected valueof arandom variableis always bounded above by the expected value of the convex function of the random variable. This result, known asJensen's inequality, can be used to deduceinequalitiessuch as thearithmetic–geometric mean inequalityandHölder's inequality.

LetX{\displaystyle X}be aconvex subsetof a realvector spaceand letf:X→R{\displaystyle f:X\to \mathbb {R} }be a function. Thenf{\displaystyle f}is calledconvexif and only if any of the following equivalent conditions hold:

1. For all0≤t≤1{\displaystyle 0\leq t\leq 1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}:f(tx1+(1−t)x2)≤tf(x1)+(1−t)f(x2).{\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right).}

2. For all0<t<1{\displaystyle 0<t<1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}such thatx1≠x2{\displaystyle x_{1}\neq x_{2}}:f(tx1+(1−t)x2)≤tf(x1)+(1−t)f(x2).{\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right).}

The second statement characterizing convex functions that are valued in the real lineR{\displaystyle \mathbb {R} }is also the statement used to defineconvex functionsthat are valued in theextended real number line[−∞,∞]=R∪{±∞},{\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},}where such a functionf{\displaystyle f}is allowed to take±∞{\displaystyle \pm \infty }as a value. The first statement is not used because it permitst{\displaystyle t}to take0{\displaystyle 0}or1{\displaystyle 1}as a value, in which case, iff(x1)=±∞{\displaystyle f\left(x_{1}\right)=\pm \infty }orf(x2)=±∞,{\displaystyle f\left(x_{2}\right)=\pm \infty ,}respectively, thentf(x1)+(1−t)f(x2){\displaystyle tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}would be undefined (because the multiplications0⋅∞{\displaystyle 0\cdot \infty }and0⋅(−∞){\displaystyle 0\cdot (-\infty )}are undefined). The sum−∞+∞{\displaystyle -\infty +\infty }is also undefined so a convex extended real-valued function is typically only allowed to take exactly one of−∞{\displaystyle -\infty }and+∞{\displaystyle +\infty }as a value.
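The defining inequality can be spot-checked numerically. The sketch below (our own illustration; the tolerance and sampling ranges are assumptions) samples random point pairs and interpolation weights and looks for violations of f(tx1 + (1−t)x2) ≤ tf(x1) + (1−t)f(x2); it finds none for the convex exponential but does for the non-convex sine.

    import numpy as np

    def convexity_violated(f, lo, hi, trials=10000, seed=0):
        """Search for a violation of f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)."""
        rng = np.random.default_rng(seed)
        x1, x2 = rng.uniform(lo, hi, (2, trials))
        t = rng.uniform(0, 1, trials)
        lhs = f(t * x1 + (1 - t) * x2)
        rhs = t * f(x1) + (1 - t) * f(x2)
        return bool(np.any(lhs > rhs + 1e-12))

    print(convexity_violated(np.exp, -5, 5))   # False: exp is convex
    print(convexity_violated(np.sin, -5, 5))   # True: sin is not convex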
The second statement can also be modified to get the definition ofstrict convexity, where the latter is obtained by replacing≤{\displaystyle \,\leq \,}with the strict inequality<.{\displaystyle \,<.}Explicitly, the mapf{\displaystyle f}is calledstrictly convexif and only if for all real0<t<1{\displaystyle 0<t<1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}such thatx1≠x2{\displaystyle x_{1}\neq x_{2}}:f(tx1+(1−t)x2)<tf(x1)+(1−t)f(x2){\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)<tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}

A strictly convex functionf{\displaystyle f}is thus a function such that the straight line between any pair of points on the curvef{\displaystyle f}lies above the curvef{\displaystyle f}except at the two points themselves. An example of a function which is convex but not strictly convex isf(x,y)=x2+y{\displaystyle f(x,y)=x^{2}+y}. This function is not strictly convex because the chord between any two points sharing an x coordinate lies on the graph itself, whereas strict convexity would require every chord to lie strictly above the graph between its endpoints.

The functionf{\displaystyle f}is said to beconcave(resp.strictly concave) if−f{\displaystyle -f}(f{\displaystyle f}multiplied by −1) is convex (resp. strictly convex).

The termconvexis often referred to asconvex downorconcave upward, and the termconcaveis often referred to asconcave downorconvex upward.[3][4][5]If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup-shaped graph∪{\displaystyle \cup }. As an example,Jensen's inequalityrefers to an inequality involving a convex (convex-down) function.[6]

Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below the properties for the case of many variables, as some of them are not listed for functions of one variable.

For example, a convex functionf{\displaystyle f}withf(0)≤0{\displaystyle f(0)\leq 0}is superadditive on the nonnegative reals. Sincef{\displaystyle f}is convex, by using one of the convex function definitions above and lettingx2=0,{\displaystyle x_{2}=0,}it follows that for all real0≤t≤1,{\displaystyle 0\leq t\leq 1,}f(tx1)=f(tx1+(1−t)⋅0)≤tf(x1)+(1−t)f(0)≤tf(x1).{\displaystyle {\begin{aligned}f(tx_{1})&=f(tx_{1}+(1-t)\cdot 0)\\&\leq tf(x_{1})+(1-t)f(0)\\&\leq tf(x_{1}).\\\end{aligned}}}Fromf(tx1)≤tf(x1){\displaystyle f(tx_{1})\leq tf(x_{1})}, it follows that for nonnegativea{\displaystyle a}andb{\displaystyle b},f(a)+f(b)=f((a+b)aa+b)+f((a+b)ba+b)≤aa+bf(a+b)+ba+bf(a+b)=f(a+b).{\displaystyle {\begin{aligned}f(a)+f(b)&=f\left((a+b){\frac {a}{a+b}}\right)+f\left((a+b){\frac {b}{a+b}}\right)\\&\leq {\frac {a}{a+b}}f(a+b)+{\frac {b}{a+b}}f(a+b)\\&=f(a+b).\\\end{aligned}}}Namely,f(a)+f(b)≤f(a+b){\displaystyle f(a)+f(b)\leq f(a+b)}.

The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly-convex function is a function that grows at least as fast as a quadratic function.[11]A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional functionf{\displaystyle f}is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:

f{\displaystyle f}is convex if and only iff″(x)≥0{\displaystyle f''(x)\geq 0}for allx{\displaystyle x};
f{\displaystyle f}is strictly convex iff″(x)>0{\displaystyle f''(x)>0}for allx{\displaystyle x}(this is sufficient, but not necessary);
f{\displaystyle f}is strongly convex if and only iff″(x)≥m>0{\displaystyle f''(x)\geq m>0}for allx{\displaystyle x}.

For example, letf{\displaystyle f}be strictly convex, and suppose there is a sequence of points(xn){\displaystyle (x_{n})}such thatf″(xn)=1n{\displaystyle f''(x_{n})={\tfrac {1}{n}}}. Even thoughf″(xn)>0{\displaystyle f''(x_{n})>0}, the function is not strongly convex becausef″(x){\displaystyle f''(x)}will become arbitrarily small.
More generally, a differentiable functionf{\displaystyle f}is called strongly convex with parameterm>0{\displaystyle m>0}if the following inequality holds for all pointsx,y{\displaystyle x,y}in its domain:[12](∇f(x)−∇f(y))T(x−y)≥m‖x−y‖22{\displaystyle (\nabla f(x)-\nabla f(y))^{T}(x-y)\geq m\|x-y\|_{2}^{2}}or, more generally,⟨∇f(x)−∇f(y),x−y⟩≥m‖x−y‖2{\displaystyle \langle \nabla f(x)-\nabla f(y),x-y\rangle \geq m\|x-y\|^{2}}where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is anyinner product, and‖⋅‖{\displaystyle \|\cdot \|}is the correspondingnorm. Some authors, such as[13]refer to functions satisfying this inequality asellipticfunctions. An equivalent condition is the following:[14]f(y)≥f(x)+∇f(x)T(y−x)+m2‖y−x‖22{\displaystyle f(y)\geq f(x)+\nabla f(x)^{T}(y-x)+{\frac {m}{2}}\|y-x\|_{2}^{2}} It is not necessary for a function to be differentiable in order to be strongly convex. A third definition[14]for a strongly convex function, with parameterm,{\displaystyle m,}is that, for allx,y{\displaystyle x,y}in the domain andt∈[0,1],{\displaystyle t\in [0,1],}f(tx+(1−t)y)≤tf(x)+(1−t)f(y)−12mt(1−t)‖x−y‖22{\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-{\frac {1}{2}}mt(1-t)\|x-y\|_{2}^{2}} Notice that this definition approaches the definition for strict convexity asm→0,{\displaystyle m\to 0,}and is identical to the definition of a convex function whenm=0.{\displaystyle m=0.}Despite this, functions exist that are strictly convex but are not strongly convex for anym>0{\displaystyle m>0}(see example below). If the functionf{\displaystyle f}is twice continuously differentiable, then it is strongly convex with parameterm{\displaystyle m}if and only if∇2f(x)⪰mI{\displaystyle \nabla ^{2}f(x)\succeq mI}for allx{\displaystyle x}in the domain, whereI{\displaystyle I}is the identity and∇2f{\displaystyle \nabla ^{2}f}is theHessian matrix, and the inequality⪰{\displaystyle \succeq }means that∇2f(x)−mI{\displaystyle \nabla ^{2}f(x)-mI}ispositive semi-definite. This is equivalent to requiring that the minimumeigenvalueof∇2f(x){\displaystyle \nabla ^{2}f(x)}be at leastm{\displaystyle m}for allx.{\displaystyle x.}If the domain is just the real line, then∇2f(x){\displaystyle \nabla ^{2}f(x)}is just the second derivativef″(x),{\displaystyle f''(x),}so the condition becomesf″(x)≥m{\displaystyle f''(x)\geq m}. Ifm=0{\displaystyle m=0}then this means the Hessian is positive semidefinite (or if the domain is the real line, it means thatf″(x)≥0{\displaystyle f''(x)\geq 0}), which implies the function is convex, and perhaps strictly convex, but not strongly convex. Assuming still that the function is twice continuously differentiable, one can show that the lower bound of∇2f(x){\displaystyle \nabla ^{2}f(x)}implies that it is strongly convex. UsingTaylor's Theoremthere existsz∈{tx+(1−t)y:t∈[0,1]}{\displaystyle z\in \{tx+(1-t)y:t\in [0,1]\}}such thatf(y)=f(x)+∇f(x)T(y−x)+12(y−x)T∇2f(z)(y−x){\displaystyle f(y)=f(x)+\nabla f(x)^{T}(y-x)+{\frac {1}{2}}(y-x)^{T}\nabla ^{2}f(z)(y-x)}Then(y−x)T∇2f(z)(y−x)≥m(y−x)T(y−x){\displaystyle (y-x)^{T}\nabla ^{2}f(z)(y-x)\geq m(y-x)^{T}(y-x)}by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above. A functionf{\displaystyle f}is strongly convex with parametermif and only if the functionx↦f(x)−m2‖x‖2{\displaystyle x\mapsto f(x)-{\frac {m}{2}}\|x\|^{2}}is convex. 
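For a twice continuously differentiable function, the Hessian criterion above is easy to check numerically. In the sketch below (the quadratic and its matrix are our own example), f(x) = ½ xᵀQx has Hessian Q everywhere, so its strong convexity parameter m is the smallest eigenvalue of Q; consistently with the last equivalence above, subtracting (m/2)‖x‖² leaves a function whose Hessian Q − mI is only positive semi-definite, i.e. a merely convex function.

    import numpy as np

    # f(x) = 0.5 * x^T Q x  has Hessian Q at every x.
    Q = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    m = np.linalg.eigvalsh(Q).min()
    print(m)   # ~2.38: f is strongly convex with this parameter

    # x -> f(x) - (m/2)*||x||^2 is convex but not strongly convex:
    # the smallest eigenvalue of its Hessian Q - m*I is 0 (up to rounding).
    print(np.linalg.eigvalsh(Q - m * np.eye(2)).min())   # ~0.0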
A twice continuously differentiable functionf{\displaystyle f}on a compact domainX{\displaystyle X}that satisfiesf″(x)>0{\displaystyle f''(x)>0}for allx∈X{\displaystyle x\in X}is strongly convex. The proof of this statement follows from theextreme value theorem, which states that a continuous function on a compact set has a maximum and minimum. Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets. Iffis a strongly-convex function with parameterm, then:[15]: Prop.6.1.4 A uniformly convex function,[16][17]with modulusϕ{\displaystyle \phi }, is a functionf{\displaystyle f}that, for allx,y{\displaystyle x,y}in the domain andt∈[0,1],{\displaystyle t\in [0,1],}satisfiesf(tx+(1−t)y)≤tf(x)+(1−t)f(y)−t(1−t)ϕ(‖x−y‖){\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-t(1-t)\phi (\|x-y\|)}whereϕ{\displaystyle \phi }is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by takingϕ(α)=m2α2{\displaystyle \phi (\alpha )={\tfrac {m}{2}}\alpha ^{2}}we recover the definition of strong convexity. It is worth noting that some authors require the modulusϕ{\displaystyle \phi }to be an increasing function,[17]but this condition is not required by all authors.[16]
https://en.wikipedia.org/wiki/Convex_function
Inmathematics, thecomposition operator∘{\displaystyle \circ }takes twofunctions,f{\displaystyle f}andg{\displaystyle g}, and returns a new functionh(x):=(g∘f)(x)=g(f(x)){\displaystyle h(x):=(g\circ f)(x)=g(f(x))}. Thus, the functiongisappliedafter applyingftox.(g∘f){\displaystyle (g\circ f)}is pronounced "the composition ofgandf".[1] Reverse composition, sometimes denotedf↦g{\displaystyle f\mapsto g}, applies the operation in the opposite order, applyingf{\displaystyle f}first andg{\displaystyle g}second. Intuitively, reverse composition is a chaining process in which the output of functionffeeds the input of functiong. The composition of functions is a special case of thecomposition of relations, sometimes also denoted by∘{\displaystyle \circ }. As a result, all properties of composition of relations are true of composition of functions,[2]such asassociativity. The composition of functions is alwaysassociative—a property inherited from thecomposition of relations.[2]That is, iff,g, andhare composable, thenf∘ (g∘h) = (f∘g) ∘h.[3]Since the parentheses do not change the result, they are generally omitted. In a strict sense, the compositiong∘fis only meaningful if the codomain offequals the domain ofg; in a wider sense, it is sufficient that the former be an impropersubsetof the latter.[nb 1]Moreover, it is often convenient to tacitly restrict the domain off, such thatfproduces only values in the domain ofg. For example, the compositiong∘fof the functionsf:R→(−∞,+9]defined byf(x) = 9 −x2andg:[0,+∞)→Rdefined byg(x)=x{\displaystyle g(x)={\sqrt {x}}}can be defined on theinterval[−3,+3]. The functionsgandfare said tocommutewith each other ifg∘f=f∘g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example,|x| + 3 = |x+ 3|only whenx≥ 0. The picture shows another example. The composition ofone-to-one(injective) functions is always one-to-one. Similarly, the composition ofonto(surjective) functions is always onto. It follows that the composition of twobijectionsis also a bijection. Theinverse functionof a composition (assumed invertible) has the property that(f∘g)−1=g−1∘f−1.[4] Derivativesof compositions involving differentiable functions can be found using thechain rule.Higher derivativesof such functions are given byFaà di Bruno's formula.[3] Composition of functions is sometimes described as a kind ofmultiplicationon a function space, but has very different properties frompointwisemultiplication of functions (e.g. composition is notcommutative).[5] Suppose one has two (or more) functionsf:X→X,g:X→Xhaving the same domain and codomain; these are often calledtransformations. Then one can form chains of transformations composed together, such asf∘f∘g∘f. Such chains have thealgebraic structureof amonoid, called atransformation monoidor (much more seldom) acomposition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is thede Rham curve. The set ofallfunctionsf:X→Xis called thefull transformation semigroup[6]orsymmetric semigroup[7]onX. (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.[8]) If the given transformations arebijective(and thus invertible), then the set of all possible combinations of these functions forms atransformation group(also known as apermutation group); and one says that the group isgeneratedby these functions. 
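In code, composition is just a higher-order function. A minimal Python sketch (the helper name is our own) that also exercises associativity and the failure of commutativity:

    def compose(g, f):
        """Return the composition g ∘ f, i.e. x -> g(f(x))."""
        return lambda x: g(f(x))

    f = lambda x: 9 - x ** 2     # as in the domain example above
    g = lambda x: x ** 0.5
    print(compose(g, f)(2))      # sqrt(9 - 4) = 2.236...

    # Associativity: a ∘ (b ∘ c) == (a ∘ b) ∘ c pointwise.
    a, b, c = (lambda x: x + 1), (lambda x: 2 * x), (lambda x: x ** 2)
    print(compose(a, compose(b, c))(3) == compose(compose(a, b), c)(3))  # True

    # Composition is generally not commutative.
    print(compose(a, b)(3), compose(b, a)(3))   # 7 8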
The set of all bijective functionsf:X→X(calledpermutations) forms a group with respect to function composition. This is thesymmetric group, also sometimes called thecomposition group. A fundamental result in group theory,Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up toisomorphism).[9]

In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is aregular semigroup.[10]

IfY⊆X, thenf:X→Y{\displaystyle f:X\to Y}may compose with itself; this is sometimes denoted asf2{\displaystyle f^{2}}. That is:f2=f∘f.{\displaystyle f^{2}=f\circ f.}More generally, for anynatural numbern≥ 2, thenthfunctionalpowercan be defined inductively byfn=f∘fn−1=fn−1∘f, a notation introduced byHans Heinrich Bürmann[11][12]andJohn Frederick William Herschel.[13][11][14][12]Repeated composition of such a function with itself is calledfunction iteration.

Note:Ifftakes its values in aring(in particular for real or complex-valuedf), there is a risk of confusion, asfncould also stand for then-fold product off, e.g.f2(x) =f(x) ·f(x).[12]For trigonometric functions, usually the latter is meant, at least for positive exponents.[12]For example, intrigonometry, this superscript notation represents standardexponentiationwhen used withtrigonometric functions: sin2(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g.,tan−1= arctan ≠ 1/tan.

In some cases, when, for a given functionf, the equationg∘g=fhas a unique solutiong, that function can be defined as thefunctional square rootoff, then written asg=f1/2. More generally, whengn=fhas a unique solution for some natural numbern> 0, thenfm/ncan be defined asgm. Under additional restrictions, this idea can be generalized so that theiteration countbecomes a continuous parameter; in this case, such a system is called aflow, specified through solutions ofSchröder's equation. Iterated functions and flows occur naturally in the study offractalsanddynamical systems.

To avoid ambiguity, some mathematicians choose to use∘to denote the compositional meaning, writingf∘n(x)for then-th iterate of the functionf(x), as in, for example,f∘3(x)meaningf(f(f(x))). For the same purpose,f[n](x)was used byBenjamin Peirce[15][12]whereasAlfred PringsheimandJules Molksuggestednf(x)instead.[16][12][nb 2]

Many mathematicians, particularly ingroup theory, omit the composition symbol, writinggfforg∘f.[17]

During the mid-20th century, some mathematicians adoptedpostfix notation, writingxfforf(x)and(xf)gforg(f(x)).[18]This can be more natural thanprefix notationin many cases, such as inlinear algebrawhenxis arow vectorandfandgdenotematricesand the composition is bymatrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. Mathematicians who use postfix notation may write "fg", meaning first applyfand then applyg, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f;g" for this,[19]thereby disambiguating the order of composition.
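Returning to functional powers: iteration is a loop over composition. A small Python sketch (the helper name is our own); note the contrast with the pointwise product mentioned above, where f²(x) would instead mean f(x)·f(x):

    def iterate(f, n):
        """Return the n-th functional power f ∘ f ∘ ... ∘ f (n >= 1 factors)."""
        def fn(x):
            for _ in range(n):
                x = f(x)
            return x
        return fn

    double = lambda x: 2 * x
    print(iterate(double, 3)(5))    # 40: f(f(f(5)))
    print(double(5) * double(5))    # 100: the n-fold *product*, a different thing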
To distinguish the left composition operator from a text semicolon, in theZ notationthe ⨾ character is used for leftrelation composition.[20]Since all functions arebinary relations, it is correct to use the [fat] semicolon for function composition as well (see the article oncomposition of relationsfor further details on this notation). Given a functiong, thecomposition operatorCgis defined as thatoperatorwhich maps functions to functions asCgf=f∘g.{\displaystyle C_{g}f=f\circ g.}Composition operators are studied in the field ofoperator theory. Function composition appears in one form or another in numerousprogramming languages. Partial composition is possible formultivariate functions. The function resulting when some argumentxiof the functionfis replaced by the functiongis called a composition offandgin some computer engineering contexts, and is denotedf|xi=gf|xi=g=f(x1,…,xi−1,g(x1,x2,…,xn),xi+1,…,xn).{\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} Whengis a simple constantb, composition degenerates into a (partial) valuation, whose result is also known asrestrictionorco-factor.[21] f|xi=b=f(x1,…,xi−1,b,xi+1,…,xn).{\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition ofprimitive recursive function. Givenf, an-ary function, andnm-ary functionsg1, ...,gn, the composition offwithg1, ...,gn, is them-ary functionh(x1,…,xm)=f(g1(x1,…,xm),…,gn(x1,…,xm)).{\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called thegeneralized compositeorsuperpositionoffwithg1, ...,gn.[22]The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosenprojection functions. Hereg1, ...,gncan be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.[23] A set of finitaryoperationson some base setXis called acloneif it contains all projections and is closed under generalized composition. A clone generally contains operations of variousarities.[22]The notion of commutation also finds an interesting generalization in the multivariate case; a functionfof aritynis said to commute with a functiongof aritymiffis ahomomorphismpreservingg, and vice versa, that is:[22]f(g(a11,…,a1m),…,g(an1,…,anm))=g(f(a11,…,an1),…,f(a1m,…,anm)).{\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is calledmedial or entropic.[22] Compositioncan be generalized to arbitrarybinary relations. IfR⊆X×YandS⊆Y×Zare two binary relations, then their composition amounts to R∘S={(x,z)∈X×Z:(∃y∈Y)((x,y)∈R∧(y,z)∈S)}{\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}}. Considering a function as a special case of a binary relation (namelyfunctional relations), function composition satisfies the definition for relation composition. A small circleR∘Shas been used for theinfix notation of composition of relations, as well as functions. 
When used to represent composition of functions(g∘f)(x)=g(f(x)){\displaystyle (g\circ f)(x)\ =\ g(f(x))}however, the text sequence is reversed to illustrate the different operation sequences accordingly. The composition is defined in the same way forpartial functionsand Cayley's theorem has its analogue called theWagner–Preston theorem.[24] Thecategory of setswith functions asmorphismsis the prototypicalcategory. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition.[25]The structures given by composition are axiomatized and generalized incategory theorywith the concept ofmorphismas the category-theoretical replacement of functions. The reversed order of composition in the formula(f∘g)−1= (g−1∘f−1)applies forcomposition of relationsusingconverse relations, and thus ingroup theory. These structures formdagger categories. The standard "foundation" for mathematics starts withsets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms(like functions)form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. -Saunders Mac Lane,Mathematics: Form and Function[26] The composition symbol∘is encoded asU+2218∘RING OPERATOR(&compfn;, &SmallCircle;); see theDegree symbolarticle for similar-appearing Unicode characters. InTeX, it is written\circ.
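As a final illustration, the generalized composite (superposition) of multivariate functions defined earlier, with partial composition recovered by plugging in projection functions, can be sketched as follows (all names are our own):

    def superpose(f, *gs):
        """Generalized composite: (x1..xm) -> f(g1(x1..xm), ..., gn(x1..xm))."""
        return lambda *xs: f(*(g(*xs) for g in gs))

    def proj(i):
        """Projection onto the i-th argument."""
        return lambda *xs: xs[i]

    f = lambda a, b: a + 10 * b
    g = lambda x, y: x * y

    # Partial composition of f in its second argument: every other argument
    # function is a projection, exactly as described above.
    h = superpose(f, proj(0), g)     # h(x, y) = f(x, g(x, y))
    print(h(2, 3))                   # 2 + 10 * (2*3) = 62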
https://en.wikipedia.org/wiki/Composition_of_functions
Aneural networkis a group of interconnected units calledneuronsthat send signals to one another. Neurons can be eitherbiological cellsormathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks.

In the context of biology, a neural network is a population of biologicalneuronschemically connected to each other bysynapses. A given neuron can be connected to hundreds of thousands of synapses.[1]Each neuron sends and receiveselectrochemicalsignals, calledaction potentials, to and from its connected neighbors. A neuron can serve anexcitatoryrole, amplifying and propagating signals it receives, or aninhibitoryrole, suppressing signals instead.[1]

Populations of interconnected neurons that are smaller than neural networks are calledneural circuits. Very large interconnected networks are calledlarge scale brain networks, and many of these together formbrainsandnervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and acrossneuromuscular junctionstomuscle cells, where they cause contraction and thereby motion.[2]

In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3]today they are almost always implemented insoftware. Neuronsin an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4]The "signal" input to each neuron is a number, specifically alinear combinationof the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to itsactivation function. The behavior of the network depends on the strengths (orweights) of the connections between neurons. A network is trained by modifying these weights throughempirical risk minimizationorbackpropagationin order to fit some preexisting dataset.[5] The termdeep neural networkrefers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers.

Neural networks are used to solve problems inartificial intelligence, and have thereby found applications in many disciplines, includingpredictive modeling,adaptive control,facial recognition,handwriting recognition,general game playing, andgenerative AI.

The theoretical base for contemporary neural networks was independently proposed byAlexander Bainin 1873[6]andWilliam Jamesin 1890.[7]Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949,Donald HebbdescribedHebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8]

Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach ofconnectionism. However, starting with the artificial neuron model ofWarren McCullochandWalter Pittsin 1943,[9]followed byFrank Rosenblatt'sperceptron, a simple artificial neural network implemented in hardware in 1957,[3]artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
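The layered "linear combination plus activation function" description translates into a few lines of numpy. The sketch below is our own illustration (random weights, a ReLU activation, and layer sizes chosen arbitrarily), computing one forward pass through a network with a single hidden layer:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        return np.maximum(0.0, z)

    # One hidden layer: input (3) -> hidden (4) -> output (2).
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

    def forward(x):
        h = relu(W1 @ x + b1)    # each hidden unit: activation of a linear
        return W2 @ h + b2       # combination of the previous layer's outputs

    print(forward(np.array([1.0, 0.5, -0.2])))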
https://en.wikipedia.org/wiki/Neural_network
Inmachine learningandpattern recognition, afeatureis an individual measurable property or characteristic of a data set.[1]Choosing informative, discriminating, and independent features is crucial to produce effectivealgorithmsforpattern recognition,classification, andregressiontasks. Features are usually numeric, but other types such asstringsandgraphsare used insyntactic pattern recognition, after some pre-processing step such asone-hot encoding. The concept of "features" is related to that ofexplanatory variablesused in statistical techniques such aslinear regression.

In feature engineering, two types of features are commonly used: numerical and categorical. Numerical features are continuous values that can be measured on a scale. Examples of numerical features include age, height, weight, and income. Numerical features can be used in machine learning algorithms directly. Categorical featuresare discrete values that can be grouped into categories. Examples of categorical features include gender, color, and zip code. Categorical features typically need to be converted to numerical features before they can be used in machine learning algorithms. This can be done using a variety of techniques, such as one-hot encoding, label encoding, and ordinal encoding. The type of feature that is used in feature engineering depends on the specific machine learning algorithm that is being used. Some machine learning algorithms, such as decision trees, can handle both numerical and categorical features. Other machine learning algorithms, such as linear regression, can only handle numerical features.

A numeric feature can be conveniently described by a feature vector. One way to achievebinary classificationis using alinear predictor function(related to theperceptron) with a feature vector as input. The method consists of calculating thescalar productbetween the feature vector and a vector of weights, qualifying those observations whose result exceeds a threshold. Algorithms for classification from a feature vector includenearest neighbor classification,neural networks, andstatistical techniquessuch asBayesian approaches.

Incharacter recognition, features may includehistogramscounting the number of black pixels along horizontal and vertical directions, number of internal holes, stroke detection and many others. Inspeech recognition, features for recognizingphonemescan include noise ratios, length of sounds, relative power, filter matches and many others. Inspamdetection algorithms, features may include the presence or absence of certain email headers, the email structure, the language, the frequency of specific terms, the grammatical correctness of the text. Incomputer vision, there are a large number of possiblefeatures, such as edges and objects.

Inpattern recognitionandmachine learning, afeature vectoris an n-dimensionalvectorof numerical features that represent some object. Manyalgorithmsin machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms. Feature vectors are equivalent to the vectors ofexplanatory variablesused instatisticalprocedures such aslinear regression.
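One-hot encoding, mentioned above as a standard way to convert categorical features, can be sketched in a few lines (the helper name and the data are our own):

    import numpy as np

    def one_hot(values, categories):
        """Encode a categorical feature as indicator columns, one per category."""
        return np.array([[v == c for c in categories] for v in values], dtype=float)

    colors = ["red", "blue", "red", "green"]
    X_color = one_hot(colors, ["red", "green", "blue"])
    print(X_color)
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [1. 0. 0.]
    #  [0. 1. 0.]]

    # Numerical features can be used directly; stack them with the encoded ones.
    age = np.array([[23.0], [31.0], [47.0], [52.0]])
    X = np.hstack([age, X_color])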
Feature vectors are often combined with weights using adot productin order to construct alinear predictor functionthat is used to determine a score for making a prediction. Thevector spaceassociated with these vectors is often called thefeature space. In order to reduce the dimensionality of the feature space, a number ofdimensionality reductiontechniques can be employed. Higher-level features can be obtained from already available features and added to the feature vector; for example, for the study of diseases the feature 'Age' is useful and is defined asAge = 'Year of death' minus 'Year of birth'. This process is referred to asfeature construction.[2][3]Feature construction is the application of a set of constructive operators to a set of existing features resulting in construction of new features. Examples of such constructive operators include checking for the equality conditions {=, ≠}, the arithmetic operators {+,−,×, /}, the array operators {max(S), min(S), average(S)} as well as other more sophisticated operators, for example count(S,C)[4]that counts the number of features in the feature vector S satisfying some condition C or, for example, distances to other recognition classes generalized by some accepting device. Feature construction has long been considered a powerful tool for increasing both accuracy and understanding of structure, particularly in high-dimensional problems.[5]Applications include studies of disease andemotion recognitionfrom speech.[6] The initial set of raw features can be redundant and large enough that estimation and optimization is made difficult or ineffective. Therefore, a preliminary step in many applications ofmachine learningandpattern recognitionconsists ofselectinga subset of features, orconstructinga new and reduced set of features to facilitate learning, and to improve generalization and interpretability.[7] Extracting or selecting features is a combination of art and science; developing systems to do so is known asfeature engineering. It requires the experimentation of multiple possibilities and the combination of automated techniques with the intuition and knowledge of thedomain expert. Automating this process isfeature learning, where a machine not only uses features for learning, but learns the features itself.
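Both the dot-product scoring and the feature construction described above are one-liners in practice. A small Python sketch (the weights, threshold, and data are our own illustration):

    import numpy as np

    # Linear predictor: score each feature vector with a dot product and
    # qualify the observations whose score exceeds a threshold.
    X = np.array([[1.0, 0.2], [0.3, 0.9], [0.8, 0.8]])
    w = np.array([1.5, -0.5])
    threshold = 0.5
    print(X @ w > threshold)              # [ True False  True]

    # Feature construction: derive 'age' from two existing features,
    # as in the 'Year of death' minus 'Year of birth' example above.
    year_of_birth = np.array([1901, 1923, 1950])
    year_of_death = np.array([1975, 2001, 2020])
    age = year_of_death - year_of_birth   # new constructed feature
    print(age)                            # [74 78 70]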
https://en.wikipedia.org/wiki/Feature_space
Instatisticsandmachine learning, thebias–variance tradeoffdescribes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as we increase the number of tunable parameters in a model, it becomes more flexible, and can better fit a training data set. It is said to have lower error, orbias. However, for more flexible models, there will tend to be greatervarianceto the model fit each time we take a set ofsamplesto create a new training data set. It is said that there is greatervariancein the model'sestimatedparameters.

Thebias–variance dilemmaorbias–variance problemis the conflict in trying to simultaneously minimize these two sources oferrorthat preventsupervised learningalgorithms from generalizing beyond theirtraining set:[1][2]

Thebiaserror is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
Thevarianceis an error from sensitivity to small fluctuations in the training set. High variance may result from an algorithm modeling the random noise in the training data (overfitting).

Thebias–variance decompositionis a way of analyzing a learning algorithm'sexpectedgeneralization errorwith respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called theirreducible error, resulting from noise in the problem itself.

The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants tochoose a modelthat both accurately captures the regularities in its training data, but alsogeneralizeswell to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data.

It is an often-madefallacy[3][4]to assume that complex models must have high variance. High variance models are "complex" in some sense, but the reverse need not be true.[5]In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by an example adapted from [6]: The modelfa,b(x)=asin⁡(bx){\displaystyle f_{a,b}(x)=a\sin(bx)}has only two parameters (a,b{\displaystyle a,b}) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance.

An analogy can be made to the relationship betweenaccuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from onlylocalinformation. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words,test datamay not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting).
This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can besmoothedvia explicitregularization, such asshrinkage.

Suppose that we have a training set consisting of a set of pointsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and real-valued labelsyi{\displaystyle y_{i}}associated with the pointsxi{\displaystyle x_{i}}. We assume that the data is generated by a functionf(x){\displaystyle f(x)}such asy=f(x)+ε{\displaystyle y=f(x)+\varepsilon }, where the noise,ε{\displaystyle \varepsilon }, has zero mean and varianceσ2{\displaystyle \sigma ^{2}}. That is,yi=f(xi)+εi{\displaystyle y_{i}=f(x_{i})+\varepsilon _{i}}, whereεi{\displaystyle \varepsilon _{i}}is a noise sample.

We want to find a functionf^(x;D){\displaystyle {\hat {f}}(x;D)}, that approximates the true functionf(x){\displaystyle f(x)}as well as possible, by means of some learning algorithm based on a training dataset (sample)D={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}. We make "as well as possible" precise by measuring themean squared errorbetweeny{\displaystyle y}andf^(x;D){\displaystyle {\hat {f}}(x;D)}: we want(y−f^(x;D))2{\displaystyle (y-{\hat {f}}(x;D))^{2}}to be minimal, both forx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and for points outside of our sample. Of course, we cannot hope to do so perfectly, since theyi{\displaystyle y_{i}}contain noiseε{\displaystyle \varepsilon }; this means we must be prepared to accept anirreducible errorin any function we come up with.

Finding anf^{\displaystyle {\hat {f}}}that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever functionf^{\displaystyle {\hat {f}}}we select, we can decompose itsexpectederror on an unseen samplex{\displaystyle x}(i.e. conditional to x) as follows:[7]: 34[8]: 223

ED,ε[(y−f^(x;D))2]=(BiasD⁡[f^(x;D)])2+VarD⁡[f^(x;D)]+σ2{\displaystyle \mathbb {E} _{D,\varepsilon }{\Big [}{\big (}y-{\hat {f}}(x;D){\big )}^{2}{\Big ]}={\Big (}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}{\Big )}^{2}+\operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}+\sigma ^{2}}

where

BiasD⁡[f^(x;D)]=ED[f^(x;D)]−f(x){\displaystyle \operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}=\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}-f(x)}

and

VarD⁡[f^(x;D)]=ED[(ED[f^(x;D)]−f^(x;D))2]{\displaystyle \operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}=\mathbb {E} _{D}{\Big [}{\big (}\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}-{\hat {f}}(x;D){\big )}^{2}{\Big ]}}

andσ2{\displaystyle \sigma ^{2}}is the variance of the noiseε{\displaystyle \varepsilon }.

The expectation ranges over different choices of the training setD={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}, all sampled from the same joint distributionP(x,y){\displaystyle P(x,y)}which can for example be done viabootstrapping. The three terms represent: the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method; the variance of the learning method, or how much the learned function will move around its mean as the training set varies; and the irreducible errorσ2{\displaystyle \sigma ^{2}}. Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.[7]: 34

The more complex the modelf^(x){\displaystyle {\hat {f}}(x)}is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger. The derivation of the bias–variance decomposition for squared error proceeds as follows.[9][10]For convenience, we drop theD{\displaystyle D}subscript in the following lines, such thatf^(x;D)=f^(x){\displaystyle {\hat {f}}(x;D)={\hat {f}}(x)}.
Let us write the mean-squared error of our model:

MSE=E[(y−f^(x))2]=E[(f(x)+ε−f^(x))2]=E[(f(x)−f^(x))2]+2E[(f(x)−f^(x))ε]+E[ε2]{\displaystyle {\begin{aligned}{\text{MSE}}&=\mathbb {E} {\big [}(y-{\hat {f}}(x))^{2}{\big ]}=\mathbb {E} {\big [}(f(x)+\varepsilon -{\hat {f}}(x))^{2}{\big ]}\\&=\mathbb {E} {\big [}(f(x)-{\hat {f}}(x))^{2}{\big ]}\,+\,2\ \mathbb {E} {\big [}(f(x)-{\hat {f}}(x))\varepsilon {\big ]}\,+\,\mathbb {E} {\big [}\varepsilon ^{2}{\big ]}\end{aligned}}}

We can show that the second term of this equation is null:

E[(f(x)−f^(x))ε]=E[f(x)−f^(x)]E[ε]sinceεis independent fromx=0sinceE[ε]=0{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}\varepsilon {\Big ]}&=\mathbb {E} {\big [}f(x)-{\hat {f}}(x){\big ]}\ \mathbb {E} {\big [}\varepsilon {\big ]}&&{\text{since }}\varepsilon {\text{ is independent from }}x\\&=0&&{\text{since }}\mathbb {E} {\big [}\varepsilon {\big ]}=0\end{aligned}}}

Moreover, the third term of this equation is nothing butσ2{\displaystyle \sigma ^{2}}, the variance ofε{\displaystyle \varepsilon }.

Let us now expand the remaining term:

E[(f(x)−f^(x))2]=E[(f(x)−E[f^(x)]+E[f^(x)]−f^(x))2]=E[(f(x)−E[f^(x)])2]+2E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]+E[(E[f^(x)]−f^(x))2]{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}^{2}{\Big ]}&=\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\\&={\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}\,+\,2\ {\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}\,+\,\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\end{aligned}}}

We show that:

E[(f(x)−E[f^(x)])2]=E[f(x)2]−2E[f(x)E[f^(x)]]+E[E[f^(x)]2]=f(x)2−2f(x)E[f^(x)]+E[f^(x)]2=(f(x)−E[f^(x)])2{\displaystyle {\begin{aligned}{\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}&=\mathbb {E} {\big [}f(x)^{2}{\big ]}\,-\,2\ \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}\,+\,\mathbb {E} {\Big [}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}{\Big ]}\\&=f(x)^{2}\,-\,2\ f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}\end{aligned}}}

This last series of equalities comes from the fact thatf(x){\displaystyle f(x)}is not a random variable, but a fixed, deterministic function ofx{\displaystyle x}. Therefore,E[f(x)]=f(x){\displaystyle \mathbb {E} {\big [}f(x){\big ]}=f(x)}. SimilarlyE[f(x)2]=f(x)2{\displaystyle \mathbb {E} {\big [}f(x)^{2}{\big ]}=f(x)^{2}}, andE[f(x)E[f^(x)]]=f(x)E[E[f^(x)]]=f(x)E[f^(x)]{\displaystyle \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\Big [}\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}}.
Using the same reasoning, we can expand the second term and show that it is null:

E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]=E[f(x)E[f^(x)]−f(x)f^(x)−E[f^(x)]2+E[f^(x)]f^(x)]=f(x)E[f^(x)]−f(x)E[f^(x)]−E[f^(x)]2+E[f^(x)]2=0{\displaystyle {\begin{aligned}{\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}&=\mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x){\hat {f}}(x)\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}\ {\hat {f}}(x){\Big ]}\\&=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&=0\end{aligned}}}

Eventually, we plug our derivations back into the original equation, and identify each term:

MSE=(f(x)−E[f^(x)])2+E[(E[f^(x)]−f^(x))2]+σ2=Bias⁡(f^(x))2+Var⁡[f^(x)]+σ2{\displaystyle {\begin{aligned}{\text{MSE}}&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}+\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}+\sigma ^{2}\\&=\operatorname {Bias} {\big (}{\hat {f}}(x){\big )}^{2}\,+\,\operatorname {Var} {\big [}{\hat {f}}(x){\big ]}\,+\,\sigma ^{2}\end{aligned}}}

Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value overx∼P{\displaystyle x\sim P}:MSE=Ex[Bias⁡(f^(x))2+Var⁡[f^(x)]]+σ2{\displaystyle {\text{MSE}}=\mathbb {E} _{x}{\Big [}\operatorname {Bias} {\big (}{\hat {f}}(x){\big )}^{2}\,+\,\operatorname {Var} {\big [}{\hat {f}}(x){\big ]}{\Big ]}+\sigma ^{2}}

Dimensionality reductionandfeature selectioncan decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example, the numberkof neighbors ink-nearest neighbors regression or the strength of the penalty in the regularization methods discussed below trades off bias against variance. One way of resolving the trade-off is to usemixture modelsandensemble learning.[14][15]For example,boostingcombines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, whilebaggingcombines "strong" learners in a way that reduces their variance. Model validationmethods such ascross-validation (statistics)can be used to tune models so as to optimize the trade-off.

In the case ofk-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, aclosed-form expressionexists that relates the bias–variance decomposition to the parameterk:[8]: 37, 223E⁡[(y−f^(x))2∣X=x]=(f(x)−1k∑i=1kf(Ni(x)))2+σ2k+σ2{\displaystyle \operatorname {E} {\big [}(y-{\hat {f}}(x))^{2}\mid X=x{\big ]}=\left(f(x)-{\frac {1}{k}}\sum _{i=1}^{k}f(N_{i}(x))\right)^{2}+{\frac {\sigma ^{2}}{k}}+\sigma ^{2}}whereN1(x),…,Nk(x){\displaystyle N_{1}(x),\dots ,N_{k}(x)}are theknearest neighbors ofxin the training set. The bias (first term) is a monotone rising function ofk, while the variance (second term) drops off askis increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.[12]

The bias–variance decomposition forms the conceptual basis for regressionregularizationmethods such asLASSOandridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to theordinary least squares (OLS)solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance. The bias–variance decomposition was originally formulated for least-squares regression.
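The decomposition can be checked by simulation under the model y = f(x) + ε used above. The sketch below (our own setup: a sine target, Gaussian noise, and polynomial fits evaluated at a fixed test point) estimates the squared bias and the variance over many freshly drawn training sets and compares bias² + variance + σ² with the directly measured expected squared error; low-degree fits show high bias, high-degree fits high variance.

    import numpy as np

    rng = np.random.default_rng(0)
    f, sigma, x0, n_train = np.sin, 0.3, 1.0, 20

    def fit_predict(degree):
        """Draw a fresh training set D and return the fit's prediction at x0."""
        x = rng.uniform(0, 2 * np.pi, n_train)
        y = f(x) + rng.normal(0, sigma, n_train)
        return np.polyval(np.polyfit(x, y, degree), x0)

    for degree in (1, 3, 9):
        preds = np.array([fit_predict(degree) for _ in range(3000)])
        bias2 = (preds.mean() - f(x0)) ** 2
        var = preds.var()
        # Expected squared error against fresh noisy samples y0 at x0:
        y0 = f(x0) + rng.normal(0, sigma, preds.size)
        mse = np.mean((y0 - preds) ** 2)
        print(degree, round(bias2, 3), round(var, 3),
              round(bias2 + var + sigma ** 2, 3), round(mse, 3))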
For the case of classification under the 0–1 loss (misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label.[16][17] Alternatively, if the classification problem can be phrased as probabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form.

It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.[18]

Even though the bias–variance decomposition does not directly apply in reinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.[19]

While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such as Markov chain Monte Carlo, are only asymptotically unbiased, at best.[20] Convergence diagnostics can be used to control bias via burn-in removal, but due to a limited computational budget, a bias–variance trade-off arises,[21] leading to a wide range of approaches in which a controlled bias is accepted if it allows a dramatic reduction in variance, and hence in the overall estimation error.[22][23][24]

While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context of human cognition, most notably by Gerd Gigerenzer and co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly characterized training sets provided by experience by adopting high-bias/low-variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.[25]

Geman et al.[12] argue that the bias–variance dilemma implies that abilities such as generic object recognition cannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff
For supervised learning applications in machine learning and statistical learning theory, generalization error[1] (also known as the out-of-sample error[2] or the risk) is a measure of how accurately an algorithm is able to predict outcomes for previously unseen data. As learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about the algorithm's predictive ability on new, unseen data. The generalization error can be minimized by avoiding overfitting in the learning algorithm. The performance of machine learning algorithms is commonly visualized by learning curve plots that show estimates of the generalization error throughout the learning process.

In a learning problem, the goal is to develop a function {\displaystyle f_{n}({\vec {x}})} that predicts output values y for each input datum {\displaystyle {\vec {x}}}. The subscript n indicates that the function {\displaystyle f_{n}} is developed based on a data set of n data points. The generalization error or expected loss or risk {\displaystyle I[f]} of a particular function f over all possible values of {\displaystyle {\vec {x}}} and y is the expected value of the loss function {\displaystyle V(f)}:[1]

{\displaystyle I[f]=\int _{X\times Y}V{\big (}f({\vec {x}}),y{\big )}\,\rho ({\vec {x}},y)\,d{\vec {x}}\,dy,}

where {\displaystyle \rho ({\vec {x}},y)} is the unknown joint probability distribution for {\displaystyle {\vec {x}}} and y.

Without knowing the joint probability distribution ρ, it is impossible to compute {\displaystyle I[f]}. Instead, we can compute the error on sample data, which is called empirical error (or empirical risk). Given n data points, the empirical error of a candidate function f is:

{\displaystyle I_{n}[f]={\frac {1}{n}}\sum _{i=1}^{n}V{\big (}f({\vec {x}}_{i}),y_{i}{\big )}}

An algorithm is said to generalize if:

{\displaystyle \lim _{n\rightarrow \infty }I[f]-I_{n}[f]=0}

Of particular importance is the generalization error {\displaystyle I[f_{n}]} of the data-dependent function {\displaystyle f_{n}} that is found by a learning algorithm based on the sample. Again, for an unknown probability distribution, {\displaystyle I[f_{n}]} cannot be computed. Instead, the aim of many problems in statistical learning theory is to bound or characterize the difference of the generalization error and the empirical error in probability:

{\displaystyle P{\big (}I[f_{n}]-I_{n}[f_{n}]\leq \epsilon {\big )}\geq 1-\delta _{n}}

That is, the goal is to characterize the probability {\displaystyle 1-\delta _{n}} that the generalization error is less than the empirical error plus some error bound ϵ (generally dependent on δ and n). For many types of algorithms, it has been shown that an algorithm has generalization bounds if it meets certain stability criteria. Specifically, if an algorithm is symmetric (the order of inputs does not affect the result), has bounded loss and meets two stability conditions, it will generalize. The first stability condition, leave-one-out cross-validation stability, says that to be stable, the prediction error for each data point when leave-one-out cross-validation is used must converge to zero as {\displaystyle n\rightarrow \infty }.
The second condition, expected-to-leave-one-out error stability (also known as hypothesis stability if operating in the {\displaystyle L_{1}} norm), is met if the prediction on a left-out data point does not change when a single data point is removed from the training dataset.[3]

These conditions can be formalized as follows, writing S for the training set, {\displaystyle S^{|i}} for S with point i removed, and V for the loss.

An algorithm L has {\displaystyle CVloo} stability if for each n there exist a {\displaystyle \beta _{CV}^{(n)}} and a {\displaystyle \delta _{CV}^{(n)}} such that:

{\displaystyle \forall i\in \{1,\dots ,n\},\qquad \mathbb {P} _{S}{\Big \{}{\big |}V(f_{S},z_{i})-V(f_{S^{|i}},z_{i}){\big |}\leq \beta _{CV}^{(n)}{\Big \}}\geq 1-\delta _{CV}^{(n)},}

and {\displaystyle \beta _{CV}^{(n)}} and {\displaystyle \delta _{CV}^{(n)}} go to zero as n goes to infinity.[3]

An algorithm L has {\displaystyle Eloo_{err}} stability if for each n there exist a {\displaystyle \beta _{EL}^{(n)}} and a {\displaystyle \delta _{EL}^{(n)}} such that:

{\displaystyle \mathbb {P} _{S}{\Big \{}{\Big |}I[f_{S}]-{\frac {1}{n}}\sum _{i=1}^{n}V(f_{S^{|i}},z_{i}){\Big |}\leq \beta _{EL}^{(n)}{\Big \}}\geq 1-\delta _{EL}^{(n)},}

with {\displaystyle \beta _{EL}^{(n)}} and {\displaystyle \delta _{EL}^{(n)}} going to zero for {\displaystyle n\rightarrow \infty }. For leave-one-out stability in the {\displaystyle L_{1}} norm, this is the same as hypothesis stability:

{\displaystyle \mathbb {E} _{S,z}{\Big [}{\big |}V(f_{S},z)-V(f_{S^{|i}},z){\big |}{\Big ]}\leq \beta _{H}^{(n)},}

with {\displaystyle \beta _{H}^{(n)}} going to zero as n goes to infinity.[3]

A number of algorithms have been proven to be stable and as a result have bounds on their generalization error. A list of these algorithms, and the papers that proved stability, is available in the literature on algorithmic stability.

The concepts of generalization error and overfitting are closely related. Overfitting occurs when the learned function {\displaystyle f_{S}} becomes sensitive to the noise in the sample. As a result, the function will perform well on the training set but not perform well on other data from the joint probability distribution of x and y. Thus, the more overfitting occurs, the larger the generalization error.

The amount of overfitting can be tested using cross-validation methods, which split the sample into simulated training samples and testing samples. The model is then trained on a training sample and evaluated on the testing sample. The testing sample is previously unseen by the algorithm and so represents a random sample from the joint probability distribution of x and y. This test sample allows us to approximate the expected error, and as a result approximate a particular form of the generalization error.

Many algorithms exist to prevent overfitting. The minimization algorithm can penalize more complex functions (known as Tikhonov regularization), or the hypothesis space can be constrained, either explicitly in the form of the functions or by adding constraints to the minimization function (Ivanov regularization).

The approach to finding a function that does not overfit is at odds with the goal of finding a function that is sufficiently complex to capture the particular characteristics of the data. This is known as the bias–variance tradeoff. Keeping a function simple to avoid overfitting may introduce a bias in the resulting predictions, while allowing it to be more complex leads to overfitting and a higher variance in the predictions. It is impossible to minimize both simultaneously.
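A minimal sketch of these quantities, under an assumed data-generating distribution and a deliberately flexible learner (a degree-7 polynomial fit by least squares): the empirical error {\displaystyle I_{n}[f_{n}]} is computed on the training sample, a large fresh sample stands in for the unknown risk {\displaystyle I[f_{n}]}, and the per-point leave-one-out errors are the quantities appearing in the CVloo condition:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n):
    """Draws from an assumed joint distribution rho(x, y)."""
    x = rng.uniform(-1, 1, n)
    return x, x**3 + rng.normal(0, 0.2, n)

x, y = sample(50)
coeffs = np.polyfit(x, y, deg=7)        # the learned function f_n

def err(c, x, y):
    return np.mean((np.polyval(c, x) - y) ** 2)   # average squared loss

print("empirical error I_n[f_n]:", err(coeffs, x, y))
x_big, y_big = sample(200_000)          # large fresh sample approximates I[f_n]
print("risk I[f_n] (approx.):  ", err(coeffs, x_big, y_big))

# Leave-one-out prediction errors: retrain without point i, test on point i.
loo = []
for i in range(len(x)):
    mask = np.arange(len(x)) != i
    c_i = np.polyfit(x[mask], y[mask], deg=7)
    loo.append((np.polyval(c_i, x[i]) - y[i]) ** 2)
print("mean leave-one-out error:", np.mean(loo))
```

The gap between the first two printed numbers is the generalization gap that the bounds above aim to control.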
https://en.wikipedia.org/wiki/Generalization_error
In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. Mixture models are used for clustering, under the name model-based clustering, and also for density estimation.

Mixture models should not be confused with models for compositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where the total population size has been normalized to 1.

A typical finite-dimensional mixture model is a hierarchical model consisting of the following components: N observed random variables, each distributed according to a mixture of K components, with the components belonging to the same parametric family of distributions but with different parameters; N corresponding unobserved latent variables specifying the identity of the mixture component of each observation; a set of K mixture weights, which are probabilities summing to 1; and a set of K parameters, each specifying the parameter of the corresponding mixture component.

In addition, in a Bayesian setting, the mixture weights and parameters will themselves be random variables, and prior distributions will be placed over the variables. In such a case, the weights are typically viewed as a K-dimensional random vector drawn from a Dirichlet distribution (the conjugate prior of the categorical distribution), and the parameters will be distributed according to their respective conjugate priors.

Mathematically, a basic parametric mixture model can be described as a weighted sum of component densities:

{\displaystyle p(x)=\sum _{i=1}^{K}\phi _{i}\,f(x;\theta _{i}),\qquad \sum _{i=1}^{K}\phi _{i}=1,\quad \phi _{i}\geq 0.}

In a Bayesian setting, all parameters are in addition associated with random variables drawn from their prior distributions. This characterization uses F and H to describe arbitrary distributions over observations and parameters, respectively. Typically H will be the conjugate prior of F. The two most common choices of F are Gaussian aka "normal" (for real-valued observations) and categorical (for discrete observations), though other component distributions are also in common use.

A typical non-Bayesian Gaussian mixture model takes the form above with Gaussian component densities; a Bayesian version of a Gaussian mixture model additionally places a Dirichlet prior on the mixture weights and conjugate priors on the component means and variances.

A Bayesian Gaussian mixture model is commonly extended to fit a vector of unknown parameters (denoted in bold), or multivariate normal distributions. In a multivariate distribution (i.e. one modelling a vector {\displaystyle {\boldsymbol {x}}} with N random variables) one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given by

{\displaystyle p({\boldsymbol {\theta }})=\sum _{i=1}^{K}\phi _{i}{\mathcal {N}}({\boldsymbol {\mu }}_{i},{\boldsymbol {\Sigma }}_{i}),}

where the i-th vector component is characterized by normal distributions with weights {\displaystyle \phi _{i}}, means {\displaystyle {\boldsymbol {\mu }}_{i}} and covariance matrices {\displaystyle {\boldsymbol {\Sigma }}_{i}}.
To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distribution {\displaystyle p({\boldsymbol {x}}\mid {\boldsymbol {\theta }})} of the data {\displaystyle {\boldsymbol {x}}} conditioned on the parameters {\displaystyle {\boldsymbol {\theta }}} to be estimated. With this formulation, the posterior distribution {\displaystyle p({\boldsymbol {\theta }}\mid {\boldsymbol {x}})} is also a Gaussian mixture model of the form

{\displaystyle p({\boldsymbol {\theta }}\mid {\boldsymbol {x}})=\sum _{i=1}^{K}{\tilde {\phi }}_{i}{\mathcal {N}}({\boldsymbol {{\tilde {\mu }}}}_{i},{\boldsymbol {\tilde {\Sigma }}}_{i})}

with new parameters {\displaystyle {\tilde {\phi }}_{i},{\boldsymbol {\tilde {\mu }}}_{i}} and {\displaystyle {\boldsymbol {\tilde {\Sigma }}}_{i}} that are updated using the EM algorithm.[2] Although EM-based parameter updates are well-established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimations of the random variable {\displaystyle {\boldsymbol {\theta }}} may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution.

Such distributions are useful for assuming patch-wise shapes of images and clusters, for example. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matrices {\displaystyle {\boldsymbol {\Sigma }}_{i}}. One Gaussian distribution of the set is fit to each patch (usually of size 8×8 pixels) in the image. Notably, any distribution of points around a cluster (see k-means) may be accurately modeled given enough Gaussian components, but rarely more than K = 20 components are needed to accurately model a given image distribution or cluster of data.

Analogous mixture models can be written with categorical observations: in the non-Bayesian version the component distributions are categorical over the V possible outcomes, and in the Bayesian version Dirichlet priors are additionally placed over both the mixture weights and the component distributions.

Financial returns often behave differently in normal situations and during crisis times. A mixture model[3] for return data seems reasonable. Sometimes the model used is a jump-diffusion model, or a mixture of two normal distributions. See Financial economics § Challenges and criticism and Financial risk management § Banking for further context.

Assume that we observe the prices of N different houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e.g., a three-bedroom house in a moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model with K different components, each distributed as a normal distribution with unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)
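As an illustration of the weighted-sum form {\displaystyle p(x)=\sum _{i}\phi _{i}f(x;\theta _{i})} introduced above, the following sketch evaluates the density of a hypothetical three-component univariate Gaussian mixture; the weights, means and standard deviations are made-up values:

```python
import numpy as np

# Hypothetical mixture parameters (illustrative only).
phi = np.array([0.5, 0.3, 0.2])    # mixture weights, sum to 1
mu = np.array([-2.0, 0.0, 3.0])    # component means
sd = np.array([0.5, 1.0, 0.8])     # component standard deviations

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def mixture_pdf(x):
    """Density of the pooled population: a weighted sum of component densities."""
    return sum(p * normal_pdf(x, m, s) for p, m, s in zip(phi, mu, sd))

xs = np.linspace(-5, 6, 5)
print(mixture_pdf(xs))
```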
Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model. Note that expectation maximization applied to such a model will typically fail to produce realistic results, due (among other things) to the excessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results, usually in the form of additional components added to the model, such as the prior distributions placed over the per-document topic weights in latent Dirichlet allocation (discussed further below).

The following example is based on an example in Christopher M. Bishop, Pattern Recognition and Machine Learning.[4] Imagine that we are given an N×N black-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model with K = 10 different components, where each component is a vector of size N² of Bernoulli distributions (one per pixel). Such a model can be trained with the expectation-maximization algorithm on an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability.

Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ within the multiple projectiles. An example might be shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model.[5] Further, a well-known measure of accuracy for a group of projectiles is the circular error probable (CEP), which is the number R such that, on average, half of the group of projectiles falls within the circle of radius R about the target point. The mixture model can be used to determine (or estimate) the value R. The mixture model properly captures the different types of projectiles.

The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component.

In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibility. For example, a mixture of two normal distributions with different means may result in a density with two modes, which is not modeled by standard parametric distributions. Another example is given by the possibility of mixture distributions to model fatter tails than the basic Gaussian ones, so as to be a candidate for modeling more extreme events. Mixture model-based clustering is also predominantly used in identifying the state of the machine in predictive maintenance.
Density plots are used to analyze the density of high-dimensional features. If multi-modal densities are observed, then it is assumed that a finite set of densities is formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k groups, where k represents each state of the machine. The machine state can be a normal state, power-off state, or faulty state.[6] Each formed cluster can be diagnosed using techniques such as spectral analysis. In recent years, this approach has also been widely used in other areas such as early fault detection.[7]

In image processing and computer vision, traditional image segmentation models often assign to one pixel only one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain "ownership" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e.g., phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods.[8]

Probabilistic mixture models such as Gaussian mixture models (GMM) are used to resolve point set registration problems in image processing and computer vision fields. For pair-wise point set registration, one point set is regarded as the centroids of mixture models, and the other point set is regarded as data points (observations). State-of-the-art methods are e.g. coherent point drift (CPD)[9] and Student's t-distribution mixture models (TMM).[10] Recent research results demonstrate the superiority of hybrid mixture models[11] (e.g. combining Student's t-distribution and Watson distribution/Bingham distribution to model spatial positions and axes orientations separately) compared to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity.

Identifiability refers to the existence of a unique characterization for any one of the models in the class (family) being considered. Estimation procedures may not be well-defined and asymptotic theory may not hold if a model is not identifiable.

Let J be the class of all binomial distributions with n = 2. Then a mixture of two members of J would have

{\displaystyle {\begin{aligned}p_{0}&=\pi {\left(1-\theta _{1}\right)}^{2}+\left(1-\pi \right){\left(1-\theta _{2}\right)}^{2}\\[1ex]p_{1}&=2\pi \theta _{1}\left(1-\theta _{1}\right)+2\left(1-\pi \right)\theta _{2}\left(1-\theta _{2}\right)\end{aligned}}}

and p₂ = 1 − p₀ − p₁. Clearly, given p₀ and p₁, it is not possible to determine the above mixture model uniquely, as there are three parameters (π, θ₁, θ₂) to be determined.

Consider a mixture of parametric distributions of the same class. Let

{\displaystyle J=\{f(\cdot ;\theta ):\theta \in \Omega \}}

be the class of all component distributions. Then the convex hull K of J defines the class of all finite mixtures of distributions in J:

{\displaystyle K=\left\{p(\cdot ):p(\cdot )=\sum _{i=1}^{n}a_{i}f_{i}(\cdot ;\theta _{i}),a_{i}>0,\sum _{i=1}^{n}a_{i}=1,f_{i}(\cdot ;\theta _{i})\in J\ \forall i,n\right\}}

K is said to be identifiable if all its members are unique, that is, given two members p and p′ in K, being mixtures of k distributions and k′ distributions respectively in J, we have p = p′ if and only if, first of all, k = k′ and secondly we can reorder the summations such that a_i = a_i′ and f_i = f_i′ for all i.
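The binomial non-identifiability above is easy to exhibit numerically. Note that p₀, p₁ and p₂ depend on (π, θ₁, θ₂) only through the first two moments m₁ = πθ₁ + (1 − π)θ₂ and m₂ = πθ₁² + (1 − π)θ₂², so any two triples matching these moments give the same distribution; the second triple below is obtained by fixing a new weight and solving the resulting quadratic (a sketch, with the algebra noted in comments):

```python
import numpy as np

def mixture_pmf(pi, t1, t2):
    """(p0, p1, p2) of a mixture of two Binomial(2, theta) distributions."""
    p0 = pi * (1 - t1) ** 2 + (1 - pi) * (1 - t2) ** 2
    p1 = 2 * pi * t1 * (1 - t1) + 2 * (1 - pi) * t2 * (1 - t2)
    return p0, p1, 1 - p0 - p1

# First parametrisation and its theta-moments.
pi, t1, t2 = 0.5, 0.2, 0.6
m1 = pi * t1 + (1 - pi) * t2          # = 0.4
m2 = pi * t1**2 + (1 - pi) * t2**2    # = 0.2

# Second parametrisation: choose pi2 = 0.25; then a = (m1 - 0.75*b)/0.25 and
# matching m2 gives the quadratic 3*b**2 - 2.4*b + (0.64 - m2) = 0.
pi2 = 0.25
b = np.roots([3, -2.4, 0.64 - m2]).max()
a = (m1 - (1 - pi2) * b) / pi2

print(mixture_pmf(pi, t1, t2))    # (0.4, 0.4, 0.2)
print(mixture_pmf(pi2, a, b))     # identical pmf, different parameters
```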
Parametric mixture models are often used when we know the distribution Y and we can sample from X, but we would like to determine the a_i and θ_i values. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations.

It is common to think of probability mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions.

A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such as expectation maximization (EM) or maximum a posteriori (MAP) estimation. Generally these methods consider separately the questions of system identification and parameter estimation; methods to determine the number and functional form of components within a mixture are distinguished from methods to estimate the corresponding parameter values. Some notable departures are the graphical methods as outlined in Tarter and Lock[12] and more recently minimum message length (MML) techniques such as Figueiredo and Jain[13] and, to some extent, the moment matching pattern analysis routines suggested by McWilliam and Loh (2009).[14]

Expectation maximization (EM) is seemingly the most popular technique used to determine the parameters of a mixture with an a priori given number of components. This is a particular way of implementing maximum likelihood estimation for this problem. EM is of particular appeal for finite normal mixtures where closed-form expressions are possible, such as in the following iterative algorithm by Dempster et al. (1977),[15] in which the weights, means and covariances are updated as

{\displaystyle w_{s}^{(j+1)}={\frac {1}{N}}\sum _{t=1}^{N}h_{s}^{(j)}(t),\qquad \mu _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)\,x^{(t)}}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}},\qquad \Sigma _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)\,{\big (}x^{(t)}-\mu _{s}^{(j+1)}{\big )}{\big (}x^{(t)}-\mu _{s}^{(j+1)}{\big )}^{\mathsf {T}}}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}},}

with the posterior probabilities

{\displaystyle h_{s}^{(j)}(t)={\frac {w_{s}^{(j)}\,f{\big (}x^{(t)};\mu _{s}^{(j)},\Sigma _{s}^{(j)}{\big )}}{\sum _{i=1}^{n}w_{i}^{(j)}\,f{\big (}x^{(t)};\mu _{i}^{(j)},\Sigma _{i}^{(j)}{\big )}}}~.}

Thus on the basis of the current estimate for the parameters, the conditional probability for a given observation x^{(t)} being generated from state s is determined for each t = 1, …, N; N being the sample size. The parameters are then updated such that the new component weights correspond to the average conditional probability and each component mean and covariance is the component-specific weighted average of the mean and covariance of the entire sample.

Dempster[15] also showed that each successive EM iteration will not decrease the likelihood, a property not shared by other gradient-based maximization techniques. Moreover, EM naturally embeds within it constraints on the probability vector, and for sufficiently large sample sizes positive definiteness of the covariance iterates. This is a key advantage since explicitly constrained methods incur extra computational costs to check and maintain appropriate values. Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984)[full citation needed] make this point, arguing in favour of superlinear and second-order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not.
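A minimal univariate sketch of this iteration for two normal components (synthetic data, scalar variances in place of covariance matrices, and all starting values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic data from a hypothetical two-component Gaussian mixture.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

w = np.array([0.5, 0.5])        # initial weights
mu = np.array([-1.0, 1.0])      # initial means
var = np.array([1.0, 1.0])      # initial variances

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(100):
    # E step: posterior probabilities h_s(t) of each observation per component.
    dens = np.array([w[s] * normal_pdf(x, mu[s], var[s]) for s in range(2)])
    h = dens / dens.sum(axis=0)

    # M step: weights = average conditional probability; means/variances =
    # component-specific weighted averages over the sample.
    w = h.mean(axis=1)
    mu = (h * x).sum(axis=1) / h.sum(axis=1)
    var = (h * (x - mu[:, None]) ** 2).sum(axis=1) / h.sum(axis=1)

print(w, mu, var)   # should approach (0.3, 0.7), (-2, 3), (1, 2.25)
```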
The relative merits of EM and other algorithms vis-à-vis convergence have been discussed in other literature.[16]

Other common objections to the use of EM are that it has a propensity to spuriously identify local maxima, as well as displaying sensitivity to initial values.[17][18] One may address these problems by evaluating EM at several initial points in the parameter space, but this is computationally costly and other approaches, such as the annealing EM method of Ueda and Nakano (1998) (in which the initial components are essentially forced to overlap, providing a less heterogeneous basis for initial guesses), may be preferable.

Figueiredo and Jain[13] note that convergence to 'meaningless' parameter values obtained at the boundary (where regularity conditions break down, e.g., Ghosh and Sen (1985)) is frequently observed when the number of model components exceeds the optimal/true one. On this basis they suggest a unified approach to estimation and identification in which the initial n is chosen to greatly exceed the expected optimal value. Their optimization routine is constructed via a minimum message length (MML) criterion that effectively eliminates a candidate component if there is insufficient information to support it. In this way it is possible to systematize reductions in n and consider estimation and identification jointly.

With initial guesses for the parameters of our mixture model, "partial membership" of each data point in each constituent distribution is computed by calculating expectation values for the membership variables of each data point. That is, for each data point x_j and distribution Y_i, the membership value y_{i,j} is:

{\displaystyle y_{i,j}={\frac {a_{i}\,f(x_{j};\theta _{i})}{\sum _{k}a_{k}\,f(x_{j};\theta _{k})}}~.}

With expectation values in hand for group membership, plug-in estimates are recomputed for the distribution parameters. The mixing coefficients a_i are the means of the membership values over the N data points. The component model parameters θ_i are also calculated by expectation maximization using data points x_j that have been weighted using the membership values. For example, if θ is a mean μ:

{\displaystyle \mu _{i}={\frac {\sum _{j}y_{i,j}\,x_{j}}{\sum _{j}y_{i,j}}}~.}

With new estimates for a_i and the θ_i's, the expectation step is repeated to recompute new membership values. The entire procedure is repeated until model parameters converge.

As an alternative to the EM algorithm, the mixture model parameters can be deduced using posterior sampling as indicated by Bayes' theorem. This is still regarded as an incomplete data problem in which membership of data points is the missing data. A two-step iterative procedure known as Gibbs sampling can be used. The previous example of a mixture of two Gaussian distributions can demonstrate how the method works. As before, initial guesses of the parameters for the mixture model are made. Instead of computing partial memberships for each elemental distribution, a membership value for each data point is drawn from a Bernoulli distribution (that is, it will be assigned to either the first or the second Gaussian). The Bernoulli parameter θ is determined for each data point on the basis of one of the constituent distributions.[vague] Draws from the distribution generate membership associations for each data point. Plug-in estimators can then be used as in the M step of EM to generate a new set of mixture model parameters, and the binomial draw step is then repeated.

The method of moment matching is one of the oldest techniques for determining the mixture parameters, dating back to Karl Pearson's seminal work of 1894.
In this approach the parameters of the mixture are determined such that the composite distribution has moments matching some given value. In many instances extraction of solutions to the moment equations may present non-trivial algebraic or computational problems. Moreover, numerical analysis by Day[19] has indicated that such methods may be inefficient compared to EM. Nonetheless, there has been renewed interest in this method, e.g., Craigmile and Titterington (1998) and Wang.[20]

McWilliam and Loh (2009) consider the characterisation of a hyper-cuboid normal mixture copula in large dimensional systems for which EM would be computationally prohibitive. Here a pattern analysis routine is used to generate multivariate tail-dependencies consistent with a set of univariate and (in some sense) bivariate moments. The performance of this method is then evaluated using equity log-return data, with Kolmogorov–Smirnov test statistics suggesting a good descriptive fit.

Some problems in mixture model estimation can be solved using spectral methods. In particular it becomes useful if data points x_i are points in high-dimensional real space, and the hidden distributions are known to be log-concave (such as the Gaussian distribution or the exponential distribution). Spectral methods of learning mixture models are based on the use of singular value decomposition of a matrix which contains data points. The idea is to consider the top k singular vectors, where k is the number of distributions to be learned. The projection of each data point to a linear subspace spanned by those vectors groups points originating from the same distribution very close together, while points from different distributions stay far apart. One distinctive feature of the spectral method is that it allows us to prove that if distributions satisfy a certain separation condition (e.g., not too close), then the estimated mixture will be very close to the true one with high probability.

Tarter and Lock[12] describe a graphical approach to mixture identification in which a kernel function is applied to an empirical frequency plot so as to reduce intra-component variance. In this way one may more readily identify components having differing means. While this λ-method does not require prior knowledge of the number or functional form of the components, its success does rely on the choice of the kernel parameters, which to some extent implicitly embeds assumptions about the component structure.

Some of these methods can even provably learn mixtures of heavy-tailed distributions, including those with infinite variance (see the references below). In this setting, EM-based methods would not work, since the expectation step would diverge due to the presence of outliers.

To simulate a sample of size N that is from a mixture of distributions F_i, i = 1 to n, with probabilities p_i ({\displaystyle \textstyle \sum _{i}p_{i}=1}), perform the following for each of the N draws (this recipe is implemented in the sketch below):

1. Generate a random number u from a uniform distribution on (0, 1).
2. Find the component i such that {\displaystyle \textstyle \sum _{j=1}^{i-1}p_{j}\leq u<\sum _{j=1}^{i}p_{j}}.
3. Generate an observation from the distribution F_i.

In a Bayesian setting, additional levels can be added to the graphical model defining the mixture model. For example, in the common latent Dirichlet allocation topic model, the observations are sets of words drawn from D different documents and the K mixture components represent topics that are shared across documents. Each document has a different set of mixture weights, which specify the topics prevalent in that document. All sets of mixture weights share common hyperparameters.

A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables.
The resulting model is termed a hidden Markov model and is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the corresponding article for more information.

Mixture distributions and the problem of mixture decomposition, that is, the identification of its constituent components and the parameters thereof, have been cited in the literature as far back as 1846 (Quetelet, in McLachlan,[17] 2000), although common reference is made to the work of Karl Pearson (1894)[21] as the first author to explicitly address the decomposition problem, in characterising non-normal attributes of forehead-to-body-length ratios in female shore crab populations. The motivation for this work was provided by the zoologist Walter Frank Raphael Weldon, who had speculated in 1893 (in Tarter and Lock[12]) that asymmetry in the histogram of these ratios could signal evolutionary divergence. Pearson's approach was to fit a univariate mixture of two normals to the data by choosing the five parameters of the mixture such that the empirical moments matched those of the model. While his work was successful in identifying two potentially distinct sub-populations and in demonstrating the flexibility of mixtures as a moment matching tool, the formulation required the solution of a 9th-degree (nonic) polynomial, which at the time posed a significant computational challenge.

Subsequent works focused on addressing these problems, but it was not until the advent of the modern computer and the popularisation of maximum likelihood (MLE) parameterisation techniques that research really took off.[22] Since that time there has been a vast body of research on the subject, spanning areas such as fisheries research, agriculture, botany, economics, medicine, genetics, psychology, palaeontology, electrophoresis, finance, geology and zoology.[23]
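The n-step simulation recipe given earlier translates directly into code. A sketch, where the component distributions F_i are arbitrary stand-ins chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
p = np.array([0.2, 0.5, 0.3])                   # mixture probabilities, sum to 1
samplers = [lambda: rng.normal(-3, 1),          # F_1
            lambda: rng.normal(0, 0.5),         # F_2
            lambda: rng.exponential(2.0)]       # F_3 (components may differ in form)

def simulate(N):
    out = np.empty(N)
    cdf = np.cumsum(p)
    for t in range(N):
        u = rng.uniform()                        # step 1: u ~ U(0, 1)
        i = np.searchsorted(cdf, u)              # step 2: smallest i with u < p_1+...+p_i
        out[t] = samplers[i]()                   # step 3: draw from F_i
    return out

print(simulate(5))
```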
https://en.wikipedia.org/wiki/Gaussian_mixture_model
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.[1] The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.[2]

The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin.[3] They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith.[4] Another was proposed by H.O. Hartley in 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5] Another one is by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977.[6] Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[10][11][12][13][14] The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).

The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed, and a correct convergence analysis was published by C. F. Jeff Wu in 1983.[15] Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.[15]

The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically.
One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point.[15] In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.

Given the statistical model which generates a set {\displaystyle \mathbf {X} } of observed data, a set of unobserved latent data or missing values {\displaystyle \mathbf {Z} }, and a vector of unknown parameters {\displaystyle {\boldsymbol {\theta }}}, along with a likelihood function {\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})}, the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data

{\displaystyle L({\boldsymbol {\theta }};\mathbf {X} )=p(\mathbf {X} \mid {\boldsymbol {\theta }})=\int p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})\,d\mathbf {Z} .}

However, this quantity is often intractable since {\displaystyle \mathbf {Z} } is unobserved and the distribution of {\displaystyle \mathbf {Z} } is unknown before attaining {\displaystyle {\boldsymbol {\theta }}}.

The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps. Expectation step (E step): define {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})=\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]}, the expected value of the complete-data log-likelihood under the current conditional distribution of the latent variables. Maximization step (M step): find the parameters that maximize this quantity, {\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\ Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}.

More succinctly, we can write it as one equation:

{\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,}

The typical models to which EM is applied use {\displaystyle \mathbf {Z} } as a latent variable indicating membership in one of a set of groups. However, it is possible to apply EM to other sorts of models.

The motivation is as follows. If the value of the parameters {\displaystyle {\boldsymbol {\theta }}} is known, usually the value of the latent variables {\displaystyle \mathbf {Z} } can be found by maximizing the log-likelihood over all possible values of {\displaystyle \mathbf {Z} }, either simply by iterating over {\displaystyle \mathbf {Z} } or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables {\displaystyle \mathbf {Z} }, we can find an estimate of the parameters {\displaystyle {\boldsymbol {\theta }}} fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both {\displaystyle {\boldsymbol {\theta }}} and {\displaystyle \mathbf {Z} } are unknown: initialize the parameters to some random values, compute the best value of the latent variables given these parameter values, use the just-computed latent values to compute a better estimate for the parameters, and iterate the last two steps until convergence.

The algorithm as just described monotonically approaches a local minimum of the cost function (the negative of the marginal log-likelihood).
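A sketch of this alternation on a classic toy problem (all specifics are assumed for illustration): each row of flips comes from one of two coins of unknown bias, and which coin was used is the latent variable; the E step uses soft membership probabilities, as in EM proper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: 50 batches of 20 tosses, each batch from one of two
# coins with unknown biases; the coin identity of each batch is latent.
true_bias = [0.35, 0.8]
coin = rng.integers(0, 2, 50)
flips = rng.binomial(20, np.take(true_bias, coin))   # heads per batch

theta = np.array([0.4, 0.6])                         # arbitrary starting values
for _ in range(50):
    # E step: posterior probability of each coin for each batch, under the
    # current bias estimates (uniform prior over the two coins).
    like = np.array([b**flips * (1 - b)**(20 - flips) for b in theta])
    resp = like / like.sum(axis=0)

    # M step: re-estimate each bias from the responsibility-weighted counts.
    theta = (resp * flips).sum(axis=1) / (resp * 20).sum(axis=1)

print(theta)   # approaches the true biases, up to label order
```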
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates {\displaystyle {\boldsymbol {\theta }}^{(t)}}), or applying simulated annealing methods.

EM is especially useful when the likelihood is an exponential family; see Sundberg (2019, Ch. 8) for a comprehensive treatment:[16] the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expressions for the updates in each step, using the Sundberg formula[17] (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf).[8][9][11][12][13][14]

The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.

Expectation-Maximization works to improve {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} rather than directly improving {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}. Here it is shown that improvements to the former imply improvements to the latter.[18]

For any {\displaystyle \mathbf {Z} } with non-zero probability {\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})}, we can write

{\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})=\log p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})-\log p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}).}

We take the expectation over possible values of the unknown data {\displaystyle \mathbf {Z} } under the current parameter estimate {\displaystyle \theta ^{(t)}} by multiplying both sides by {\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})} and summing (or integrating) over {\displaystyle \mathbf {Z} }. The left-hand side is the expectation of a constant, so we get:

{\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})=Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})+H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)}),}

where {\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})=-\sum _{\mathbf {Z} }p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})\log p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})} is defined by the negated sum it is replacing. This last equation holds for every value of {\displaystyle {\boldsymbol {\theta }}} including {\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}}, and subtracting this last equation from the previous equation gives

{\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})-\log p(\mathbf {X} \mid {\boldsymbol {\theta }}^{(t)})=Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-Q({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})+H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)}).}

However, Gibbs' inequality tells us that {\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})}, so we can conclude that

{\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})-\log p(\mathbf {X} \mid {\boldsymbol {\theta }}^{(t)})\geq Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-Q({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)}).}

In words, choosing {\displaystyle {\boldsymbol {\theta }}} to improve {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} causes {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})} to improve at least as much.

The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent.[19][20] Consider the function:

{\displaystyle F(q,\theta )=\operatorname {E} _{q}[\log L(\theta ;x,Z)]+H(q),}

where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q.
This function can be written as

{\displaystyle F(q,\theta )=-D_{\mathrm {KL} }{\big (}q\parallel p_{Z\mid X}(\cdot \mid x;\theta ){\big )}+\log L(\theta ;x),}

where {\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )} is the conditional distribution of the unobserved data given the observed data x and {\displaystyle D_{KL}} is the Kullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as an expectation step, which chooses q to maximize F, {\displaystyle q^{(t)}={\underset {q}{\operatorname {arg\,max} }}\ F(q,\theta ^{(t)})}, followed by a maximization step, which chooses θ to maximize F, {\displaystyle \theta ^{(t+1)}={\underset {\theta }{\operatorname {arg\,max} }}\ F(q^{(t)},\theta )}.

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: a filter or smoother designed with the current parameter estimates is run to obtain updated state estimates, and the updated state estimates are then used within maximum-likelihood calculations to obtain updated parameter estimates.

Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation

{\displaystyle {\widehat {\sigma }}_{v}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{(z_{k}-{\widehat {x}}_{k})}^{2},}

where {\displaystyle {\widehat {x}}_{k}} are scalar output estimates calculated by a filter or a smoother from N scalar measurements {\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by

{\displaystyle {\widehat {\sigma }}_{w}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{({\widehat {x}}_{k+1}-{\widehat {F}}{\widehat {x}}_{k})}^{2},}

where {\displaystyle {\widehat {x}}_{k}} and {\displaystyle {\widehat {x}}_{k+1}} are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via

{\displaystyle {\widehat {F}}={\frac {\sum _{k=1}^{N}{\widehat {x}}_{k+1}{\widehat {x}}_{k}}{\sum _{k=1}^{N}{\widehat {x}}_{k}^{2}}}.}

The convergence of parameter estimates such as those above is well studied.[26][27][28][29]

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson).[30] Also, EM can be used with constrained estimation methods.

The parameter-expanded expectation maximization (PX-EM) algorithm often provides speed-up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[31]

Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed.[32] It can itself be extended into the expectation conditional maximization either (ECME) algorithm.[33]

This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and the M step, as described in the "As a maximization–maximization procedure" section.[19] GEM is further developed in a distributed environment and shows promising results.[34]

It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[35] and therefore use any machinery developed in the more general case.

The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step.
This pair is called the α-EM algorithm,[36] which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm α-HMM.[37]

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.

In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.

Let {\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})} be a sample of n independent observations from a mixture of two multivariate normal distributions of dimension d, and let {\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})} be the latent variables that determine the component from which the observation originates,[20] with {\displaystyle \operatorname {P} (Z_{i}=1)=\tau _{1}} and {\displaystyle \operatorname {P} (Z_{i}=2)=\tau _{2}=1-\tau _{1}}.

The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:

{\displaystyle \theta ={\big (}{\boldsymbol {\tau }},{\boldsymbol {\mu }}_{1},{\boldsymbol {\mu }}_{2},\Sigma _{1},\Sigma _{2}{\big )},}

where the incomplete-data likelihood function is

{\displaystyle L(\theta ;\mathbf {x} )=\prod _{i=1}^{n}\sum _{j=1}^{2}\tau _{j}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j}),}

and the complete-data likelihood function is

{\displaystyle L(\theta ;\mathbf {x} ,\mathbf {z} )=p(\mathbf {x} ,\mathbf {z} \mid \theta )=\prod _{i=1}^{n}\prod _{j=1}^{2}{\big [}f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j})\,\tau _{j}{\big ]}^{\mathbb {I} (z_{i}=j)},}

where {\displaystyle \mathbb {I} } is an indicator function and f is the probability density function of a multivariate normal. In the last equality, for each i, one indicator {\displaystyle \mathbb {I} (z_{i}=j)} is equal to zero, and one indicator is equal to one. The inner product thus reduces to one term.

Given our current estimate of the parameters θ^{(t)}, the conditional distribution of the Z_i is determined by Bayes' theorem to be the proportional height of the normal density weighted by τ:

{\displaystyle T_{j,i}^{(t)}:=\operatorname {P} (Z_{i}=j\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})={\frac {\tau _{j}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j}^{(t)},\Sigma _{j}^{(t)})}{\tau _{1}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{1}^{(t)},\Sigma _{1}^{(t)})+\tau _{2}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{2}^{(t)},\Sigma _{2}^{(t)})}}.}

These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function defined below).

This E step corresponds with setting up this function for Q:

{\displaystyle Q(\theta \mid \theta ^{(t)})=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\theta ^{(t)}}{\big [}\log L(\theta ;\mathbf {x} ,\mathbf {Z} ){\big ]}=\sum _{i=1}^{n}\sum _{j=1}^{2}T_{j,i}^{(t)}{\Big [}\log \tau _{j}-{\tfrac {1}{2}}\log |\Sigma _{j}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})^{\top }\Sigma _{j}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})-{\tfrac {d}{2}}\log(2\pi ){\Big ]}.}

The expectation of {\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})} inside the sum is taken with respect to the probability density function {\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for each {\displaystyle \mathbf {x} _{i}} of the training set. Everything in the E step is known before the step is taken except {\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently. {\displaystyle Q(\theta \mid \theta ^{(t)})} being quadratic in form means that determining the maximizing values of θ is relatively straightforward. Also, τ, (μ₁, Σ₁) and (μ₂, Σ₂) may all be maximized independently since they all appear in separate linear terms.

To begin, consider τ, which has the constraint τ₁ + τ₂ = 1:

{\displaystyle {\boldsymbol {\tau }}^{(t+1)}={\underset {\boldsymbol {\tau }}{\operatorname {arg\,max} }}\ \left\{\left[\sum _{i=1}^{n}T_{1,i}^{(t)}\right]\log \tau _{1}+\left[\sum _{i=1}^{n}T_{2,i}^{(t)}\right]\log \tau _{2}\right\}.}

This has the same form as the maximum likelihood estimate for the binomial distribution, so

{\displaystyle \tau _{j}^{(t+1)}={\frac {1}{n}}\sum _{i=1}^{n}T_{j,i}^{(t)}.}

For the next estimates of (μ₁, Σ₁):

{\displaystyle {\boldsymbol {\mu }}_{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}\,\mathbf {x} _{i}}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}\qquad {\text{and}}\qquad \Sigma _{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}\,(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})^{\top }}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}.}

This has the same form as a weighted maximum likelihood estimate for a normal distribution, and, by symmetry, the analogous expressions hold for (μ₂, Σ₂) with {\displaystyle T_{2,i}^{(t)}}.

Conclude the iterative process if {\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon } for ε below some preset threshold.

The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.

The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[38] Special cases of this model include censored or truncated observations from one normal distribution.[38]

EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches[39] or the so-called spectral techniques.[40][41] Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
Minimax (sometimes Minmax, MM[1] or saddle point[2]) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.

The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is:[3]

{\displaystyle {\underline {v_{i}}}=\max _{a_{i}}\min _{a_{-i}}v_{i}(a_{i},a_{-i}),}

where i is the index of the player of interest, −i denotes all other players, {\displaystyle a_{i}} is the action taken by player i, {\displaystyle a_{-i}} denotes the actions taken by the other players, and {\displaystyle v_{i}} is the value function of player i.

Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions – the one that gives player i the smallest value. Then, we determine which action player i can take in order to make sure that this smallest value is the highest possible.

For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled T, M, or B, and the second player ("column player") may choose either of two moves, L or R. The result of the combination of both moves is expressed in a payoff table:

              L            R
  T        (3, 1)      (2, −20)
  M        (5, 0)     (−10, 1)
  B      (−100, 2)      (4, 4)

(where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player). For the sake of example, we consider only pure strategies. Check each player in turn: the row player can guarantee a value of 2 by playing T (the smallest payoff in row T), which is better than the guaranteed −10 of M or the −100 of B; the column player can guarantee a value of 0 by playing L, which is better than the −20 guaranteed by R.

If both players play their respective maximin strategies (T, L), the payoff vector is (3, 1).

The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they know the actions of the other players. Its formal definition is:[3]

{\displaystyle {\overline {v_{i}}}=\min _{a_{-i}}\max _{a_{i}}v_{i}(a_{i},a_{-i}).}

The definition is very similar to that of the maximin value – only the order of the maximum and minimum operators is inverted. In the above example, the row player can be held down to a minimax value of 4 (the column player plays R, against which the row player's best reply is B with payoff 4), and the column player can be held down to a minimax value of 1 (the row player plays T or M, in either of which the column player's best payoff is 1).

For every player i, the maximin is at most the minimax:

{\displaystyle {\underline {v_{i}}}\leq {\overline {v_{i}}}.}

Intuitively, in maximin the maximization comes after the minimization, so player i tries to maximize their value before knowing what the others will do; in minimax the maximization comes before the minimization, so player i is in a much better position – they maximize their value knowing what the others did.

Another way to understand the notation is by reading from right to left. The initial set of outcomes {\displaystyle \ v_{i}(a_{i},a_{-i})\ } depends on both {\displaystyle \ {a_{i}}\ } and {\displaystyle \ {a_{-i}}\ .} We first marginalize away {\displaystyle {a_{i}}} from {\displaystyle v_{i}(a_{i},a_{-i})}, by maximizing over {\displaystyle \ {a_{i}}\ } (for every possible value of {\displaystyle {a_{-i}}}) to yield a set of marginal outcomes {\displaystyle \ v'_{i}(a_{-i})\,,} which depends only on {\displaystyle \ {a_{-i}}\ .} We then minimize over {\displaystyle \ {a_{-i}}\ } over these outcomes. (Conversely for maximin.)
Although it is always the case thatvrow_≤vrow¯{\displaystyle \ {\underline {v_{row}}}\leq {\overline {v_{row}}}\ }andvcol_≤vcol¯,{\displaystyle \ {\underline {v_{col}}}\leq {\overline {v_{col}}}\,,}the payoff vector resulting from both players playing their minimax strategies,(2,−20){\displaystyle \ (2,-20)\ }in the case of(T,R){\displaystyle \ (T,R)\ }or(−10,1){\displaystyle (-10,1)}in the case of(M,R),{\displaystyle \ (M,R)\,,}cannot similarly be ranked against the payoff vector(3,1){\displaystyle \ (3,1)\ }resulting from both players playing their maximin strategy. In two-playerzero-sum games, the minimax solution is the same as theNash equilibrium. In the context of zero-sum games, theminimax theoremis equivalent to:[4][failed verification] For every two-personzero-sumgame with finitely many strategies, there exists a valueVand a mixed strategy for each player, such that Equivalently, Player 1's strategy guarantees them a payoff ofVregardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The nameminimaxarises because each player minimizes the maximum payoff possible for the other – since the game is zero-sum, they also minimize their own maximum loss (i.e., maximize their minimum payoff). See alsoexample of a game without a value. The following example of a zero-sum game, whereAandBmake simultaneous moves, illustratesmaximinsolutions. Suppose each player has three choices and consider thepayoff matrixforAdisplayed on the table ("Payoff matrix for player A"). Assume the payoff matrix forBis the same matrix with the signs reversed (i.e., if the choices are A1 and B1 thenBpays 3 toA). Then, the maximin choice forAis A2 since the worst possible result is then having to pay 1, while the simple maximin choice forBis B2 since the worst possible result is then no payment. However, this solution is not stable, since ifBbelievesAwill choose A2 thenBwill choose B1 to gain 1; then ifAbelievesBwill choose B1 thenAwill choose A1 to gain 3; and thenBwill choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed. Some choices aredominatedby others and can be eliminated:Awill not choose A3 since either A1 or A2 will produce a better result, no matter whatBchooses;Bwill not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter whatAchooses. PlayerAcan avoid having to make an expected payment of more than⁠1/3⁠by choosing A1 with probability⁠1/6⁠and A2 with probability⁠5/6⁠:The expected payoff forAwould be3 ×⁠1/6⁠− 1 ×⁠5/6⁠=⁠−+1/3⁠in caseBchose B1 and−2 ×⁠1/6⁠+ 0 ×⁠5/6⁠=⁠−+1/3⁠in caseBchose B2. Similarly,Bcan ensure an expected gain of at least⁠1/3⁠, no matter whatAchooses, by using a randomized strategy of choosing B1 with probability⁠1/3⁠and B2 with probability⁠2/3⁠. Thesemixedminimax strategies cannot be improved and are now stable. Frequently, in game theory,maximinis distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In azero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain. "Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as theNash equilibriumstrategy. The minimax values are very important in the theory ofrepeated games. 
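The mixed-strategy claims above can be verified with a few lines of arithmetic over the payoff matrix for A (as reconstructed above):

```python
import numpy as np

# Payoff matrix for player A in the zero-sum example (rows A1..A3, cols B1..B3).
A = np.array([[ 3, -2,  2],
              [-1,  0,  4],
              [-4, -3,  1]])

p = np.array([1/6, 5/6, 0])   # A plays A1 w.p. 1/6 and A2 w.p. 5/6
q = np.array([1/3, 2/3, 0])   # B plays B1 w.p. 1/3 and B2 w.p. 2/3

print(p @ A)        # A's expected payoff vs each pure B reply: [-1/3, -1/3, 11/3]
print(A @ q)        # A's expected payoff per pure A move vs B's mix: [-1/3, -1/3, -10/3]
print(p @ A @ q)    # value of the game for A: -1/3
```

Against either undominated reply, A's expected payment is exactly 1/3 (payoff −1/3), and B's mix holds every A move to at most −1/3, which is why neither mixed strategy can be improved.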
One of the central theorems in this theory, the folk theorem, relies on the minimax values.

In combinatorial game theory, there is a minimax algorithm for game solutions. A simple version of the minimax algorithm, stated below, deals with games such as tic-tac-toe, where each player can win, lose, or draw. If player A can win in one move, their best move is that winning move. If player B knows that one move will lead to the situation where player A can win in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw. Late in the game, it's easy to see what the "best" move is. The minimax algorithm helps find the best move, by working backwards from the end of the game. At each step it assumes that player A is trying to maximize the chances of A winning, while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of winning).

A minimax algorithm[5] is a recursive algorithm for choosing the next move in an n-player game, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of a position evaluation function and it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it is A's turn to move, A gives a value to each of their legal moves.

A possible allocation method consists of assigning a certain win for A as +1 and for B as −1. This leads to combinatorial game theory as developed by John H. Conway. An alternative is using a rule that if the result of a move is an immediate win for A, it is assigned positive infinity and if it is an immediate win for B, negative infinity. The value to A of any other move is the maximum of the values resulting from each of B's possible replies. For this reason, A is called the maximizing player and B is called the minimizing player, hence the name minimax algorithm. The above algorithm will assign a value of positive or negative infinity to any position, since the value of every position will be the value of some final winning or losing position. In practice, this is only possible towards the very end of complicated games such as chess or go, since it is not computationally feasible to look ahead as far as the completion of the game; instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another.

This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computer Deep Blue (the first one to beat a reigning world champion, Garry Kasparov at that time) looked ahead at least 12 plies, then applied a heuristic evaluation function.[6]

The algorithm can be thought of as exploring the nodes of a game tree. The effective branching factor of the tree is the average number of children of each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if evaluating forced moves or repeated positions).
The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to completely analyze games such as chess using the minimax algorithm. The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use of alpha–beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the unpruned search. A naïve minimax algorithm may be trivially modified to additionally return an entire principal variation along with a minimax score.

The pseudocode for the depth-limited minimax algorithm is given below (a sketch appears at the end of this passage). The minimax function returns a heuristic value for leaf nodes (terminal nodes and nodes at the maximum search depth). Non-leaf nodes inherit their value from a descendant leaf node. The heuristic value is a score measuring the favorability of the node for the maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player. The heuristic values for terminal (game-ending) leaf nodes are scores corresponding to a win, loss, or draw for the maximizing player. For non-terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node. The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result.

Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation that max(a,b)=−min(−a,−b),{\displaystyle \ \max(a,b)=-\min(-a,-b)\ ,} minimax may often be simplified into the negamax algorithm.

Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates a game tree in which circles represent the moves of the player running the algorithm (maximizing player), and squares represent the moves of the opponent (minimizing player). Because of the limitation of computational resources, as explained above, the tree is limited to a look-ahead of 4 moves. The algorithm evaluates each leaf node using a heuristic evaluation function. The moves where the maximizing player wins are assigned positive infinity, while the moves that lead to a win of the minimizing player are assigned negative infinity. At level 3, the algorithm chooses, for each node, the smallest of the child node values, and assigns it to that same node (e.g. a node whose children evaluate to "10" and "+∞" takes the minimum, "10", as its own value). The next step, in level 2, consists of choosing for each node the largest of the child node values. Once again, the values are assigned to each parent node. The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the root node, where it chooses the move with the largest value. This is the move that the player should make in order to minimize the maximum possible loss.

Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost, which will be wasted if the minerals are not present, but will bring major rewards if they are.
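Since the original pseudocode listing did not survive, here is a hedged Python rendering of the standard depth-limited minimax recursion. The interface (children and heuristic callables, the dict-based toy tree) is our own illustrative choice, not a fixed API:

```python
import math

def minimax(node, depth, maximizing_player, children, heuristic):
    """Depth-limited minimax. children(node) lists successor positions;
    heuristic(node) scores a position from the maximizing player's view.
    A node with no children is terminal."""
    succ = children(node)
    if depth == 0 or not succ:
        return heuristic(node)
    if maximizing_player:
        return max(minimax(c, depth - 1, False, children, heuristic) for c in succ)
    return min(minimax(c, depth - 1, True, children, heuristic) for c in succ)

# A toy 2-ply game: the maximizer moves first, the minimizer replies.
tree = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
scores = {"LL": 3, "LR": 5, "RL": 2, "RR": math.inf}   # inf marks a forced win
children = lambda n: tree.get(n, [])
heuristic = lambda n: scores.get(n, 0)

print(minimax("root", 2, True, children, heuristic))   # max(min(3,5), min(2,inf)) = 3
```

Following the negamax observation above, the two branches of the if/else could be collapsed into a single call that negates the returned score at each ply.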
One approach is to treat this as a game against nature (see move by nature), and using a similar mindset as Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games. In addition, expectiminimax trees have been developed, for two-player games in which chance (for example, dice) is a factor.

In classical statistical decision theory, we have an estimator δ{\displaystyle \ \delta \ } that is used to estimate a parameter θ∈Θ,{\displaystyle \ \theta \in \Theta \ ,} and we also assume a risk function R(θ,δ),{\displaystyle \ R(\theta ,\delta )\ ,} usually specified as the integral of a loss function. In this framework, δ~{\displaystyle \ {\tilde {\delta }}\ } is called minimax if it satisfies {\displaystyle \sup _{\theta }R(\theta ,{\tilde {\delta }})=\inf _{\delta }\,\sup _{\theta }R(\theta ,\delta )~.} An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution Π.{\displaystyle \Pi \ .} An estimator is Bayes if it minimizes the average risk {\displaystyle \int _{\Theta }R(\theta ,\delta )\,d\Pi (\theta )~.}

A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, in contrast to these other decision techniques. Various extensions of this non-probabilistic approach exist, notably minimax regret and info-gap decision theory.

Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not interval measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is: "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form: "This strategy yields ℰ(X) = n." Minimax thus can be used on ordinal data, and can be more transparent.

The concept of "lesser evil" voting (LEV) can be seen as a form of the minimax strategy where voters, when faced with two or more candidates, choose the one they perceive as the least harmful or the "lesser evil". To do so, "voting should not be viewed as a form of personal self-expression or moral judgement directed in retaliation towards major party candidates who fail to reflect our values, or of a corrupt system designed to limit choices to those acceptable to corporate elites," but rather as an opportunity to reduce harm or loss.[7]

In philosophy, the term "maximin" is often used in the context of John Rawls's A Theory of Justice, where he refers to it in the context of the difference principle.[8] Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society".[9][10]
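The contrast between the minimax and Bayes criteria is easy to see on a toy "game against nature". The loss numbers below are purely illustrative (loosely echoing the prospecting example), as is the prior:

```python
import numpy as np

# Loss for each action (rows) under each state of nature (columns).
loss = np.array([[1.0, 9.0],    # prospect: small cost if minerals, big loss if not
                 [4.0, 4.0]])   # don't prospect: forgo the reward either way

worst = loss.max(axis=1)                 # worst-case loss of each action: [9, 4]
minimax_action = worst.argmin()          # minimize the maximum loss -> action 1

prior = np.array([0.7, 0.3])             # a prior over the states of nature
bayes_action = (loss @ prior).argmin()   # minimize average risk: [3.4, 4.0] -> 0

print(minimax_action, bayes_action)      # 1 0: the two criteria disagree here
```

The disagreement illustrates the non-probabilistic character of minimax: it ignores the prior entirely and judges each action only by its worst row entry.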
https://en.wikipedia.org/wiki/Minimax
In a statistical-classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class. A decision boundary is the region of a problem space in which the output label of a classifier is ambiguous.[1]

If the decision surface is a hyperplane, then the classification problem is linear, and the classes are linearly separable.

Decision boundaries are not always clear cut. That is, the transition from one class in the feature space to another is not discontinuous, but gradual. This effect is common in fuzzy logic based classification algorithms, where membership in one class or another is ambiguous.

Decision boundaries can be approximations of optimal stopping boundaries.[2] For a linear classifier, the decision boundary is the set of points at which the classifier's score function passes through zero.[3] For example, the dot product between the classifier's weight vector and a point must be zero for points that are on or close to the decision boundary.[4]

Decision boundary instability can be incorporated with generalization error as a standard for selecting the most accurate and stable classifier.[5]

In the case of backpropagation based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of Rn, as shown by the universal approximation theorem; thus it can have an arbitrary decision boundary.

In particular, support vector machines find a hyperplane that separates the feature space into two classes with the maximum margin. If the problem is not originally linearly separable, the kernel trick can be used to turn it into a linearly separable one, by increasing the number of dimensions. Thus a general hypersurface in a small dimension space is turned into a hyperplane in a space with much larger dimensions.

Neural networks try to learn the decision boundary which minimizes the empirical error, while support vector machines try to learn the decision boundary which maximizes the empirical margin between the decision boundary and data points.
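As a minimal sketch of the linear case, consider a classifier with (illustrative, not learned) weights w and bias b; its decision boundary is the line where the score wᵀx + b vanishes:

```python
import numpy as np

# A linear classifier assigns sign(w @ x + b); its decision boundary is the
# hyperplane {x : w @ x + b = 0}. These weights are illustrative only.
w = np.array([2.0, -1.0])
b = -0.5

def classify(x):
    return 1 if w @ x + b > 0 else 0

# Points on opposite sides of the line 2*x1 - x2 - 0.5 = 0:
print(classify(np.array([1.0, 0.0])))   # 2 - 0 - 0.5 = 1.5 > 0 -> class 1
print(classify(np.array([0.0, 1.0])))   # 0 - 1 - 0.5 < 0      -> class 0
# A point lying exactly on the boundary has score zero:
print(w @ np.array([0.25, 0.0]) + b)    # 0.0
```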
https://en.wikipedia.org/wiki/Decision_boundary
In vector calculus, the gradient of a scalar-valued differentiable function f{\displaystyle f} of several variables is the vector field (or vector-valued function) ∇f{\displaystyle \nabla f} whose value at a point p{\displaystyle p} gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of f{\displaystyle f}. If the gradient of a function is non-zero at a point p{\displaystyle p}, the direction of the gradient is the direction in which the function increases most quickly from p{\displaystyle p}, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative.[1] Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent.

In coordinate-free terms, the gradient of a function f(r){\displaystyle f(\mathbf {r} )} may be defined by: df=∇f⋅dr{\displaystyle df=\nabla f\cdot d\mathbf {r} } where df{\displaystyle df} is the total infinitesimal change in f{\displaystyle f} for an infinitesimal displacement dr{\displaystyle d\mathbf {r} }, and is seen to be maximal when dr{\displaystyle d\mathbf {r} } is in the direction of the gradient ∇f{\displaystyle \nabla f}. The nabla symbol ∇{\displaystyle \nabla }, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector[a] whose components are the partial derivatives of f{\displaystyle f} at p{\displaystyle p}.[2] That is, for f:Rn→R{\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} }, its gradient ∇f:Rn→Rn{\displaystyle \nabla f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}} is defined at the point p=(x1,…,xn){\displaystyle p=(x_{1},\ldots ,x_{n})} in n-dimensional space as the vector[b] ∇f(p)=[∂f∂x1(p)⋮∂f∂xn(p)].{\displaystyle \nabla f(p)={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)\\\vdots \\{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}}.}

Note that the above definition is valid only if f{\displaystyle f} is differentiable at p{\displaystyle p}. There can be functions for which partial derivatives exist in every direction but which fail to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account. For example, the function f(x,y)=x2yx2+y2{\displaystyle f(x,y)={\frac {x^{2}y}{x^{2}+y^{2}}}}, except at the origin where f(0,0)=0{\displaystyle f(0,0)=0}, is not differentiable at the origin as it does not have a well-defined tangent plane despite having well-defined partial derivatives in every direction at the origin.[3] In this particular example, under rotation of the x-y coordinate system, the above formula for the gradient fails to transform like a vector (it becomes dependent on the choice of basis for the coordinate system) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions where the formula for the gradient holds, it can be shown to always transform as a vector under transformation of the basis, so as to always point towards the fastest increase.
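The vector-of-partials definition is easy to check numerically. The sketch below uses the same sample function that appears later in the Cartesian-coordinates example, and compares the analytic gradient against central finite differences:

```python
import numpy as np

def f(p):                                   # f(x, y, z) = 2x + 3y^2 - sin(z)
    x, y, z = p
    return 2*x + 3*y**2 - np.sin(z)

def grad_f(p):                              # its gradient, componentwise
    x, y, z = p
    return np.array([2.0, 6*y, -np.cos(z)])

def numerical_grad(fun, p, h=1e-6):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (fun(p + e) - fun(p - e)) / (2 * h)
    return g

p = np.array([1.0, -2.0, 0.5])
print(grad_f(p))                            # [ 2.  -12.  -0.8776...]
print(numerical_grad(f, p))                 # agrees to ~1e-9
```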
The gradient is dual to the total derivative df{\displaystyle df}: the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear functional on vectors.[c] They are related in that the dot product of the gradient of f{\displaystyle f} at a point p{\displaystyle p} with another tangent vector v{\displaystyle \mathbf {v} } equals the directional derivative of f{\displaystyle f} at p{\displaystyle p} of the function along v{\displaystyle \mathbf {v} }; that is, ∇f(p)⋅v=∂f∂v(p)=dfp(v){\textstyle \nabla f(p)\cdot \mathbf {v} ={\frac {\partial f}{\partial \mathbf {v} }}(p)=df_{p}(\mathbf {v} )}. The gradient admits multiple generalizations to more general functions on manifolds; see § Generalizations.

Consider a room where the temperature is given by a scalar field, T, so at each point (x,y,z) the temperature is T(x,y,z), independent of time. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, moving away from (x,y,z). The magnitude of the gradient will determine how fast the temperature rises in that direction.

Consider a surface whose height above sea level at point (x,y) is H(x,y). The gradient of H at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector. The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, as the dot product measures how much the unit vector along the road aligns with the steepest slope,[d] which is 40% times the cosine of 60°, or 20%. More generally, if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of H along the unit vector.

The gradient of a function f{\displaystyle f} at point a{\displaystyle a} is usually written as ∇f(a){\displaystyle \nabla f(a)}; other common notations include grad⁡f(a){\displaystyle \operatorname {grad} f(a)} and ∇→f(a){\displaystyle {\vec {\nabla }}f(a)}. The gradient (or gradient vector field) of a scalar function f(x1,x2,x3, …,xn) is denoted ∇f or ∇→f where ∇ (nabla) denotes the vector differential operator, del. The notation grad f is also commonly used to represent the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector v at each point x is the directional derivative of f along v. That is, (∇f(x))⋅v=Dvf(x){\displaystyle {\big (}\nabla f(x){\big )}\cdot \mathbf {v} =D_{\mathbf {v} }f(x)} where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is dual to the gradient; see relationship with derivative.

When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).
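The hill example's arithmetic (40% × cos 60° = 20%) is just a dot product with a unit vector, as a short check shows; the gradient components here are illustrative:

```python
import numpy as np

grad_H = np.array([0.4, 0.0])                # steepest slope 40%, uphill along x
theta = np.deg2rad(60)                       # road at 60 degrees from uphill
u = np.array([np.cos(theta), np.sin(theta)]) # unit vector along the road

slope_along_road = grad_H @ u                # directional derivative of H along u
print(slope_along_road)                      # 0.2, i.e. a 20% grade
```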
The magnitude and direction of the gradient vector areindependentof the particularcoordinate representation.[4][5] In the three-dimensionalCartesian coordinate systemwith aEuclidean metric, the gradient, if it exists, is given by ∇f=∂f∂xi+∂f∂yj+∂f∂zk,{\displaystyle \nabla f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} ,} wherei,j,kare thestandardunit vectors in the directions of thex,yandzcoordinates, respectively. For example, the gradient of the functionf(x,y,z)=2x+3y2−sin⁡(z){\displaystyle f(x,y,z)=2x+3y^{2}-\sin(z)}is∇f(x,y,z)=2i+6yj−cos⁡(z)k.{\displaystyle \nabla f(x,y,z)=2\mathbf {i} +6y\mathbf {j} -\cos(z)\mathbf {k} .}or∇f(x,y,z)=[26y−cos⁡z].{\displaystyle \nabla f(x,y,z)={\begin{bmatrix}2\\6y\\-\cos z\end{bmatrix}}.} In some applications it is customary to represent the gradient as arow vectororcolumn vectorof its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector. Incylindrical coordinates, the gradient is given by:[6] ∇f(ρ,φ,z)=∂f∂ρeρ+1ρ∂f∂φeφ+∂f∂zez,{\displaystyle \nabla f(\rho ,\varphi ,z)={\frac {\partial f}{\partial \rho }}\mathbf {e} _{\rho }+{\frac {1}{\rho }}{\frac {\partial f}{\partial \varphi }}\mathbf {e} _{\varphi }+{\frac {\partial f}{\partial z}}\mathbf {e} _{z},} whereρis the axial distance,φis the azimuthal or azimuth angle,zis the axial coordinate, andeρ,eφandezare unit vectors pointing along the coordinate directions. Inspherical coordinateswith a Euclidean metric, the gradient is given by:[6] ∇f(r,θ,φ)=∂f∂rer+1r∂f∂θeθ+1rsin⁡θ∂f∂φeφ,{\displaystyle \nabla f(r,\theta ,\varphi )={\frac {\partial f}{\partial r}}\mathbf {e} _{r}+{\frac {1}{r}}{\frac {\partial f}{\partial \theta }}\mathbf {e} _{\theta }+{\frac {1}{r\sin \theta }}{\frac {\partial f}{\partial \varphi }}\mathbf {e} _{\varphi },} whereris the radial distance,φis the azimuthal angle andθis the polar angle, ander,eθandeφare again local unit vectors pointing in the coordinate directions (that is, the normalizedcovariant basis). For the gradient in otherorthogonal coordinate systems, seeOrthogonal coordinates (Differential operators in three dimensions). We considergeneral coordinates, which we write asx1, …,xi, …,xn, wherenis the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, sox2refers to the second component—not the quantityxsquared. The index variableirefers to an arbitrary elementxi. UsingEinstein notation, the gradient can then be written as: ∇f=∂f∂xigijej{\displaystyle \nabla f={\frac {\partial f}{\partial x^{i}}}g^{ij}\mathbf {e} _{j}}(Note that itsdualisdf=∂f∂xiei{\textstyle \mathrm {d} f={\frac {\partial f}{\partial x^{i}}}\mathbf {e} ^{i}}), whereei=dxi{\displaystyle \mathbf {e} ^{i}=\mathrm {d} x^{i}}andei=∂x/∂xi{\displaystyle \mathbf {e} _{i}=\partial \mathbf {x} /\partial x^{i}}refer to the unnormalized localcovariant and contravariant basesrespectively,gij{\displaystyle g^{ij}}is theinverse metric tensor, and the Einstein summation convention implies summation overiandj. 
If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as e^i{\displaystyle {\hat {\mathbf {e} }}_{i}} and e^i{\displaystyle {\hat {\mathbf {e} }}^{i}}, using the scale factors (also known as Lamé coefficients) hi=‖ei‖=gii=1/‖ei‖{\displaystyle h_{i}=\lVert \mathbf {e} _{i}\rVert ={\sqrt {g_{ii}}}=1\,/\lVert \mathbf {e} ^{i}\rVert }: ∇f=∂f∂xigije^jgjj=∑i=1n∂f∂xi1hie^i{\displaystyle \nabla f={\frac {\partial f}{\partial x^{i}}}g^{ij}{\hat {\mathbf {e} }}_{j}{\sqrt {g_{jj}}}=\sum _{i=1}^{n}\,{\frac {\partial f}{\partial x^{i}}}{\frac {1}{h_{i}}}\mathbf {\hat {e}} _{i}} (and df=∑i=1n∂f∂xi1hie^i{\textstyle \mathrm {d} f=\sum _{i=1}^{n}\,{\frac {\partial f}{\partial x^{i}}}{\frac {1}{h_{i}}}\mathbf {\hat {e}} ^{i}}), where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, e^i{\displaystyle \mathbf {\hat {e}} _{i}}, e^i{\displaystyle \mathbf {\hat {e}} ^{i}}, and hi{\displaystyle h_{i}} are neither contravariant nor covariant. The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.

The gradient is closely related to the total derivative (total differential) df{\displaystyle df}: they are transpose (dual) to each other. Using the convention that vectors in Rn{\displaystyle \mathbb {R} ^{n}} are represented by column vectors, and that covectors (linear maps Rn→R{\displaystyle \mathbb {R} ^{n}\to \mathbb {R} }) are represented by row vectors,[a] the gradient ∇f{\displaystyle \nabla f} and the derivative df{\displaystyle df} are expressed as a column and row vector, respectively, with the same components, but transpose of each other: ∇f(p)=[∂f∂x1(p)⋮∂f∂xn(p)];{\displaystyle \nabla f(p)={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)\\\vdots \\{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}};} dfp=[∂f∂x1(p)⋯∂f∂xn(p)].{\displaystyle df_{p}={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)&\cdots &{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}}.}

While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, ∇f(p)∈TpRn{\displaystyle \nabla f(p)\in T_{p}\mathbb {R} ^{n}}, while the derivative is a map from the tangent space to the real numbers, dfp:TpRn→R{\displaystyle df_{p}\colon T_{p}\mathbb {R} ^{n}\to \mathbb {R} }. The tangent spaces at each point of Rn{\displaystyle \mathbb {R} ^{n}} can be "naturally" identified[e] with the vector space Rn{\displaystyle \mathbb {R} ^{n}} itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space (Rn)∗{\displaystyle (\mathbb {R} ^{n})^{*}} of covectors; thus the value of the gradient at a point can be thought of as a vector in the original Rn{\displaystyle \mathbb {R} ^{n}}, not just as a tangent vector.
Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient: (dfp)(v)=[∂f∂x1(p)⋯∂f∂xn(p)][v1⋮vn]=∑i=1n∂f∂xi(p)vi=[∂f∂x1(p)⋮∂f∂xn(p)]⋅[v1⋮vn]=∇f(p)⋅v{\displaystyle (df_{p})(v)={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)&\cdots &{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}}{\begin{bmatrix}v_{1}\\\vdots \\v_{n}\end{bmatrix}}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}(p)v_{i}={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)\\\vdots \\{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}}\cdot {\begin{bmatrix}v_{1}\\\vdots \\v_{n}\end{bmatrix}}=\nabla f(p)\cdot v}

The best linear approximation to a differentiable function f:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } at a point x{\displaystyle x} in Rn{\displaystyle \mathbb {R} ^{n}} is a linear map from Rn{\displaystyle \mathbb {R} ^{n}} to R{\displaystyle \mathbb {R} } which is often denoted by dfx{\displaystyle df_{x}} or Df(x){\displaystyle Df(x)} and called the differential or total derivative of f{\displaystyle f} at x{\displaystyle x}. The function df{\displaystyle df}, which maps x{\displaystyle x} to dfx{\displaystyle df_{x}}, is called the total differential or exterior derivative of f{\displaystyle f} and is an example of a differential 1-form. Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function,[7] the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.

The gradient is related to the differential by the formula (∇f)x⋅v=dfx(v){\displaystyle (\nabla f)_{x}\cdot v=df_{x}(v)} for any v∈Rn{\displaystyle v\in \mathbb {R} ^{n}}, where ⋅{\displaystyle \cdot } is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector. If Rn{\displaystyle \mathbb {R} ^{n}} is viewed as the space of (dimension n{\displaystyle n}) column vectors (of real numbers), then one can regard df{\displaystyle df} as the row vector with components (∂f∂x1,…,∂f∂xn),{\displaystyle \left({\frac {\partial f}{\partial x_{1}}},\dots ,{\frac {\partial f}{\partial x_{n}}}\right),} so that dfx(v){\displaystyle df_{x}(v)} is given by matrix multiplication. Assuming the standard Euclidean metric on Rn{\displaystyle \mathbb {R} ^{n}}, the gradient is then the corresponding column vector, that is, {\displaystyle \nabla f=df^{\mathsf {T}}~.}

The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function f{\displaystyle f} from the Euclidean space Rn{\displaystyle \mathbb {R} ^{n}} to R{\displaystyle \mathbb {R} } at any particular point x0{\displaystyle x_{0}} in Rn{\displaystyle \mathbb {R} ^{n}} characterizes the best linear approximation to f{\displaystyle f} at x0{\displaystyle x_{0}}. The approximation is as follows: f(x)≈f(x0)+(∇f)x0⋅(x−x0){\displaystyle f(x)\approx f(x_{0})+(\nabla f)_{x_{0}}\cdot (x-x_{0})} for x{\displaystyle x} close to x0{\displaystyle x_{0}}, where (∇f)x0{\displaystyle (\nabla f)_{x_{0}}} is the gradient of f{\displaystyle f} computed at x0{\displaystyle x_{0}}, and the dot denotes the dot product on Rn{\displaystyle \mathbb {R} ^{n}}. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f{\displaystyle f} at x0{\displaystyle x_{0}}.
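The first-order approximation above is easy to observe numerically; the particular function and points below are arbitrary illustrations:

```python
import numpy as np

def f(p):                                       # f(x, y) = e^x * sin(y)
    x, y = p
    return np.exp(x) * np.sin(y)

def grad_f(p):
    x, y = p
    return np.array([np.exp(x) * np.sin(y), np.exp(x) * np.cos(y)])

x0 = np.array([0.0, 1.0])
x  = np.array([0.05, 1.02])

linear = f(x0) + grad_f(x0) @ (x - x0)          # first-order Taylor approximation
print(f(x), linear)                             # close; the error is O(|x - x0|^2)
```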
Let U be an open set in Rn. If the function f:U→R is differentiable, then the differential of f is the Fréchet derivative of f. Thus ∇f is a function from U to the space Rn such that lim h→0 |f(x+h)−f(x)−∇f(x)⋅h| / ‖h‖ = 0,{\displaystyle \lim _{h\to 0}{\frac {|f(x+h)-f(x)-\nabla f(x)\cdot h|}{\|h\|}}=0,} where · is the dot product. As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative. In particular, the gradient is linear ({\displaystyle \nabla (\alpha f+\beta g)=\alpha \nabla f+\beta \nabla g} for constants α and β), it satisfies the product rule ({\displaystyle \nabla (fg)=f\nabla g+g\nabla f}), and it obeys a chain rule: if g:I→U is differentiable at c∈I⊂R with g(c)=a, then {\displaystyle (f\circ g)'(c)=\nabla f(a)\cdot g'(c).} More generally, if instead I⊂Rk, then the following holds: ∇(f∘g)(c)=(Dg(c))T(∇f(a)),{\displaystyle \nabla (f\circ g)(c)={\big (}Dg(c){\big )}^{\mathsf {T}}{\big (}\nabla f(a){\big )},} where (Dg)T denotes the transpose Jacobian matrix. For the second form of the chain rule, suppose that h:I→R is a real-valued function on a subset I of R, and that h is differentiable at the point f(a) ∈ I. Then ∇(h∘f)(a)=h′(f(a))∇f(a).{\displaystyle \nabla (h\circ f)(a)=h'{\big (}f(a){\big )}\nabla f(a).}

A level surface, or isosurface, is the set of all points where some function has a given value. If f is differentiable, then the dot product (∇f)x⋅v of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f. For example, a level surface in three-dimensional space is defined by an equation of the form F(x,y,z) = c. The gradient of F is then normal to the surface. More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form F(P) = 0 such that dF is nowhere zero. The gradient of F is then normal to the hypersurface. Similarly, an affine algebraic hypersurface may be defined by an equation F(x1, ..., xn) = 0, where F is a polynomial. The gradient of F is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.

The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.

The gradient of a function f:Rn→R{\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} } at point x is also the direction of its steepest ascent, i.e. it maximizes its directional derivative: Let v∈Rn{\displaystyle v\in \mathbb {R} ^{n}} be an arbitrary unit vector. With the directional derivative defined as ∇vf(x)=limh→0f(x+vh)−f(x)h,{\displaystyle \nabla _{v}f(x)=\lim _{h\rightarrow 0}{\frac {f(x+vh)-f(x)}{h}},} we get, by substituting the function f(x+vh){\displaystyle f(x+vh)} with its Taylor series, ∇vf(x)=limh→0(f(x)+∇f⋅vh+R)−f(x)h,{\displaystyle \nabla _{v}f(x)=\lim _{h\rightarrow 0}{\frac {(f(x)+\nabla f\cdot vh+R)-f(x)}{h}},} where R{\displaystyle R} denotes higher order terms in vh{\displaystyle vh}.
Dividing byh{\displaystyle h}, and taking the limit yields a term which is bounded from above by theCauchy-Schwarz inequality[8] |∇vf(x)|=|∇f⋅v|≤|∇f||v|=|∇f|.{\displaystyle |\nabla _{v}f(x)|=|\nabla f\cdot v|\leq |\nabla f||v|=|\nabla f|.} Choosingv∗=∇f/|∇f|{\displaystyle v^{*}=\nabla f/|\nabla f|}maximizes the directional derivative, and equals the upper bound |∇v∗f(x)|=|(∇f)2/|∇f||=|∇f|.{\displaystyle |\nabla _{v^{*}}f(x)|=|(\nabla f)^{2}/|\nabla f||=|\nabla f|.} TheJacobian matrixis the generalization of the gradient for vector-valued functions of several variables anddifferentiable mapsbetweenEuclidean spacesor, more generally,manifolds.[9][10]A further generalization for a function betweenBanach spacesis theFréchet derivative. Supposef:Rn→Rmis a function such that each of its first-order partial derivatives exist onℝn. Then the Jacobian matrix offis defined to be anm×nmatrix, denoted byJf(x){\displaystyle \mathbf {J} _{\mathbb {f} }(\mathbb {x} )}or simplyJ{\displaystyle \mathbf {J} }. The(i,j)th entry isJij=∂fi/∂xj{\textstyle \mathbf {J} _{ij}={\partial f_{i}}/{\partial x_{j}}}. ExplicitlyJ=[∂f∂x1⋯∂f∂xn]=[∇Tf1⋮∇Tfm]=[∂f1∂x1⋯∂f1∂xn⋮⋱⋮∂fm∂x1⋯∂fm∂xn].{\displaystyle \mathbf {J} ={\begin{bmatrix}{\dfrac {\partial \mathbf {f} }{\partial x_{1}}}&\cdots &{\dfrac {\partial \mathbf {f} }{\partial x_{n}}}\end{bmatrix}}={\begin{bmatrix}\nabla ^{\mathsf {T}}f_{1}\\\vdots \\\nabla ^{\mathsf {T}}f_{m}\end{bmatrix}}={\begin{bmatrix}{\dfrac {\partial f_{1}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{1}}{\partial x_{n}}}\\\vdots &\ddots &\vdots \\{\dfrac {\partial f_{m}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{m}}{\partial x_{n}}}\end{bmatrix}}.} Since thetotal derivativeof a vector field is alinear mappingfrom vectors to vectors, it is atensorquantity. In rectangular coordinates, the gradient of a vector fieldf= (f1,f2,f3)is defined by: ∇f=gjk∂fi∂xjei⊗ek,{\displaystyle \nabla \mathbf {f} =g^{jk}{\frac {\partial f^{i}}{\partial x^{j}}}\mathbf {e} _{i}\otimes \mathbf {e} _{k},} (where theEinstein summation notationis used and thetensor productof the vectorseiandekis adyadic tensorof type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix: ∂fi∂xj=∂(f1,f2,f3)∂(x1,x2,x3).{\displaystyle {\frac {\partial f^{i}}{\partial x^{j}}}={\frac {\partial (f^{1},f^{2},f^{3})}{\partial (x^{1},x^{2},x^{3})}}.} In curvilinear coordinates, or more generally on a curvedmanifold, the gradient involvesChristoffel symbols: ∇f=gjk(∂fi∂xj+Γijlfl)ei⊗ek,{\displaystyle \nabla \mathbf {f} =g^{jk}\left({\frac {\partial f^{i}}{\partial x^{j}}}+{\Gamma ^{i}}_{jl}f^{l}\right)\mathbf {e} _{i}\otimes \mathbf {e} _{k},} wheregjkare the components of the inversemetric tensorand theeiare the coordinate basis vectors. Expressed more invariantly, the gradient of a vector fieldfcan be defined by theLevi-Civita connectionand metric tensor:[11] ∇afb=gac∇cfb,{\displaystyle \nabla ^{a}f^{b}=g^{ac}\nabla _{c}f^{b},} where∇cis the connection. For anysmooth functionfon a Riemannian manifold(M,g), the gradient offis the vector field∇fsuch that for any vector fieldX,g(∇f,X)=∂Xf,{\displaystyle g(\nabla f,X)=\partial _{X}f,}that is,gx((∇f)x,Xx)=(∂Xf)(x),{\displaystyle g_{x}{\big (}(\nabla f)_{x},X_{x}{\big )}=(\partial _{X}f)(x),}wheregx( , )denotes theinner productof tangent vectors atxdefined by the metricgand∂Xfis the function that takes any pointx∈Mto the directional derivative offin the directionX, evaluated atx. 
In other words, in acoordinate chartφfrom an open subset ofMto an open subset ofRn,(∂Xf)(x)is given by:∑j=1nXj(φ(x))∂∂xj(f∘φ−1)|φ(x),{\displaystyle \sum _{j=1}^{n}X^{j}{\big (}\varphi (x){\big )}{\frac {\partial }{\partial x_{j}}}(f\circ \varphi ^{-1}){\Bigg |}_{\varphi (x)},}whereXjdenotes thejth component ofXin this coordinate chart. So, the local form of the gradient takes the form: ∇f=gik∂f∂xkei.{\displaystyle \nabla f=g^{ik}{\frac {\partial f}{\partial x^{k}}}{\textbf {e}}_{i}.} Generalizing the caseM=Rn, the gradient of a function is related to its exterior derivative, since(∂Xf)(x)=(df)x(Xx).{\displaystyle (\partial _{X}f)(x)=(df)_{x}(X_{x}).}More precisely, the gradient∇fis the vector field associated to the differential 1-formdfusing themusical isomorphism♯=♯g:T∗M→TM{\displaystyle \sharp =\sharp ^{g}\colon T^{*}M\to TM}(called "sharp") defined by the metricg. The relation between the exterior derivative and the gradient of a function onRnis a special case of this in which the metric is the flat metric given by the dot product. [1]
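Returning to Euclidean space, the steepest-ascent property established earlier (the Cauchy–Schwarz bound attained at v* = ∇f/|∇f|) can be checked numerically. The gradient vector below is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.array([3.0, -4.0])                 # an illustrative gradient vector, |g| = 5

# Directional derivatives g . v over random unit vectors never exceed |g| ...
best = -np.inf
for _ in range(100_000):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    best = max(best, g @ v)
print(best, np.linalg.norm(g))            # best approaches 5 from below

# ... and the Cauchy-Schwarz bound is attained in the gradient direction:
v_star = g / np.linalg.norm(g)
print(g @ v_star)                         # 5.0
```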
https://en.wikipedia.org/wiki/Gradient
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.

Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.

Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way.

As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, f(x1,x2,x3){\displaystyle f(x_{1},x_{2},x_{3})}, the gradient is given by the vector equation {\displaystyle \operatorname {grad} f={\frac {\partial f}{\partial x_{1}}}{\hat {x}}_{1}+{\frac {\partial f}{\partial x_{2}}}{\hat {x}}_{2}+{\frac {\partial f}{\partial x_{3}}}{\hat {x}}_{3}~,} where x^i{\displaystyle {\hat {x}}_{i}} represents a unit vector in the xi{\displaystyle x_{i}} direction for 1≤i≤3{\displaystyle 1\leq i\leq 3}. This type of generalized derivative can be seen as the derivative of a scalar, f, with respect to a vector, x{\displaystyle \mathbf {x} }, and its result can be easily collected in vector form. More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an n-vector of dependent variables, or functions, of m independent variables we might consider the derivative of the dependent vector with respect to the independent vector. The result could be collected in an m×n matrix consisting of all of the possible derivative combinations. There are a total of nine possibilities using scalars, vectors, and matrices.
Notice that as we consider higher numbers of components in each of the independent and dependent variables we can be left with a very large number of possibilities. The six kinds of derivatives that can be most neatly organized in matrix form are the following:[1] scalar-by-scalar (∂y/∂x), vector-by-scalar (∂y/∂x, the tangent vector), matrix-by-scalar (∂Y/∂x, the tangent matrix), scalar-by-vector (∂y/∂x, the gradient), vector-by-vector (∂y/∂x, the Jacobian matrix), and scalar-by-matrix (∂y/∂X, the gradient matrix). Here, we have used the term "matrix" in its most general sense, recognizing that vectors are simply matrices with one column (and scalars are simply vectors with one row). Moreover, we have used bold letters to indicate vectors and bold capital letters for matrices. This notation is used throughout.

Notice that we could also talk about the derivative of a vector with respect to a matrix, or any of the other unfilled cells in our table. However, these derivatives are most naturally organized in a tensor of rank higher than 2, so that they do not fit neatly into a matrix. In the following three sections we will define each one of these derivatives and relate them to other branches of mathematics. See the layout conventions section for a more detailed table.

The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as approximating linear mapping.

Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of the Kalman filter, the Wiener filter, and the expectation-maximization algorithm for Gaussian mixtures.

The vector and matrix derivatives presented in the sections to follow take full advantage of matrix notation, using a single variable to represent a large number of variables. In what follows we will distinguish scalars, vectors and matrices by their typeface. We will let M(n,m) denote the space of real n×m matrices with n rows and m columns. Such matrices will be denoted using bold capital letters: A, X, Y, etc. An element of M(n,1), that is, a column vector, is denoted with a boldface lowercase letter: a, x, y, etc. An element of M(1,1) is a scalar, denoted with lowercase italic typeface: a, t, x, etc. XT denotes matrix transpose, tr(X) is the trace, and det(X) or |X| is the determinant. All functions are assumed to be of differentiability class C1 unless otherwise noted. Generally letters from the first half of the alphabet (a, b, c, ...) will be used to denote constants, and from the second half (t, x, y, ...) to denote variables.

NOTE: As mentioned above, there are competing notations for laying out systems of partial derivatives in vectors and matrices, and no standard appears to be emerging yet. The next two introductory sections use the numerator layout convention simply for the purposes of convenience, to avoid overly complicating the discussion. It is important to realize that the choice of layout convention determines the shape of every derivative written below, and that results stated in the two conventions differ by a transpose, so results from different sources cannot be combined blindly.

The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. All of the work here can be done in this notation without use of the single-variable matrix notation.
However, many problems in estimation theory and other areas of applied mathematics would result in too many indices to properly keep track of, pointing in favor of matrix calculus in those areas. Also, Einstein notation can be very useful in proving the identities presented here (see the section on differentiation) as an alternative to typical element notation, which can become cumbersome when the explicit sums are carried around. Note that a matrix can be considered a tensor of rank two.

Because vectors are matrices with only one column, the simplest matrix derivatives are vector derivatives. The notations developed here can accommodate the usual operations of vector calculus by identifying the space M(n,1) of n-vectors with the Euclidean space Rn, and the scalar M(1,1) is identified with R. The corresponding concept from vector calculus is indicated at the end of each subsection.

NOTE: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.

The derivative of a vector y=[y1y2⋯ym]T{\displaystyle \mathbf {y} ={\begin{bmatrix}y_{1}&y_{2}&\cdots &y_{m}\end{bmatrix}}^{\mathsf {T}}}, by a scalar x is written (in numerator layout notation) as {\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x}}\\{\frac {\partial y_{2}}{\partial x}}\\\vdots \\{\frac {\partial y_{m}}{\partial x}}\end{bmatrix}}~.} In vector calculus the derivative of a vector y with respect to a scalar x is known as the tangent vector of the vector y, ∂y∂x{\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}}. Notice here that y:R1→Rm.

Example: simple examples of this include the velocity vector in Euclidean space, which is the tangent vector of the position vector (considered as a function of time). Also, the acceleration is the tangent vector of the velocity.

The derivative of a scalar y by a vector x=[x1x2⋯xn]T{\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{n}\end{bmatrix}}^{\mathsf {T}}}, is written (in numerator layout notation) as {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}={\begin{bmatrix}{\frac {\partial y}{\partial x_{1}}}&{\frac {\partial y}{\partial x_{2}}}&\cdots &{\frac {\partial y}{\partial x_{n}}}\end{bmatrix}}~.} In vector calculus, the gradient of a scalar field f:Rn→R (whose independent coordinates are the components of x) is the transpose of the derivative of a scalar by a vector. By example, in physics, the electric field is the negative vector gradient of the electric potential.

The directional derivative of a scalar function f(x) of the space vector x in the direction of the unit vector u (represented in this case as a column vector) is defined using the gradient as {\displaystyle \nabla _{\mathbf {u} }{f}(\mathbf {x} )=\nabla f(\mathbf {x} )\cdot \mathbf {u} ~.} Using the notation just defined for the derivative of a scalar with respect to a vector we can re-write the directional derivative as ∇uf=∂f∂xu.{\displaystyle \nabla _{\mathbf {u} }f={\frac {\partial f}{\partial \mathbf {x} }}\mathbf {u} .} This type of notation will be nice when proving product rules and chain rules that come out looking similar to what we are familiar with for the scalar derivative.

Each of the previous two cases can be considered as an application of the derivative of a vector with respect to a vector, using a vector of size one appropriately. Similarly we will find that the derivatives involving matrices will reduce to derivatives involving vectors in a corresponding way.
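The two vector derivatives just defined are easy to approximate numerically; the functions below are illustrative choices:

```python
import numpy as np

# Vector-by-scalar: position y(x) = (cos x, sin x, x) as a function of time x.
def y(x):
    return np.array([np.cos(x), np.sin(x), x])

def dy_dx(x, h=1e-6):
    # The tangent (velocity) vector, by central differences.
    return (y(x + h) - y(x - h)) / (2 * h)

print(dy_dx(0.0))   # ~ [0, 1, 1], i.e. (-sin x, cos x, 1) at x = 0

# Scalar-by-vector: f(x) = x1^2 + 3*x2. In numerator layout this derivative is
# the row vector [2*x1, 3], the transpose of the gradient.
def f(v):
    return v[0]**2 + 3*v[1]

v0, h = np.array([2.0, -1.0]), 1e-6
row = np.array([(f(v0 + h*e) - f(v0 - h*e)) / (2*h) for e in np.eye(2)])
print(row)          # ~ [4, 3]
```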
The derivative of a vector function (a vector whose components are functions) y=[y1y2⋯ym]T{\displaystyle \mathbf {y} ={\begin{bmatrix}y_{1}&y_{2}&\cdots &y_{m}\end{bmatrix}}^{\mathsf {T}}}, with respect to an input vector, x=[x1x2⋯xn]T{\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{n}\end{bmatrix}}^{\mathsf {T}}}, is written (in numerator layout notation) as {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{1}}{\partial x_{n}}}\\\vdots &\ddots &\vdots \\{\frac {\partial y_{m}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{n}}}\end{bmatrix}}~.} In vector calculus, the derivative of a vector function y with respect to a vector x whose components represent a space is known as the pushforward (or differential), or the Jacobian matrix. The pushforward along a vector function f with respect to vector v in Rn is given by df(v)=∂f∂vdv.{\displaystyle d\mathbf {f} (\mathbf {v} )={\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}d\mathbf {v} .}

There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors.

Note: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.

The derivative of a matrix function Y by a scalar x is known as the tangent matrix and is given (in numerator layout notation) by {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}}={\begin{bmatrix}{\frac {\partial y_{11}}{\partial x}}&\cdots &{\frac {\partial y_{1n}}{\partial x}}\\\vdots &\ddots &\vdots \\{\frac {\partial y_{m1}}{\partial x}}&\cdots &{\frac {\partial y_{mn}}{\partial x}}\end{bmatrix}}~.}

The derivative of a scalar function y, with respect to a p×q matrix X of independent variables, is given (in numerator layout notation) by {\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}={\begin{bmatrix}{\frac {\partial y}{\partial x_{11}}}&{\frac {\partial y}{\partial x_{21}}}&\cdots &{\frac {\partial y}{\partial x_{p1}}}\\\vdots &\ddots &\vdots \\{\frac {\partial y}{\partial x_{1q}}}&{\frac {\partial y}{\partial x_{2q}}}&\cdots &{\frac {\partial y}{\partial x_{pq}}}\end{bmatrix}}~,} a q×p matrix laid out according to XT. Important examples of scalar functions of matrices include the trace of a matrix and the determinant. In analog with vector calculus this derivative is often written as ∇Xy(X){\displaystyle \nabla _{\mathbf {X} }y(\mathbf {X} )}. Also in analog with vector calculus, the directional derivative of a scalar f(X) of a matrix X in the direction of matrix Y is given by {\displaystyle \nabla _{\mathbf {Y} }f=\operatorname {tr} \left({\frac {\partial f}{\partial \mathbf {X} }}\mathbf {Y} \right)~.}

It is the gradient matrix, in particular, that finds many uses in minimization problems in estimation theory, particularly in the derivation of the Kalman filter algorithm, which is of great importance in the field.

The three types of derivatives that have not been considered are those involving vectors-by-matrices, matrices-by-vectors, and matrices-by-matrices. These are not as widely considered and a notation is not widely agreed upon.

This section discusses the similarities and differences between notational conventions that are used in the various fields that take advantage of matrix calculus. Although there are largely two consistent conventions, some authors find it convenient to mix the two conventions in forms that are discussed below. After this section, equations will be listed in both competing forms separately.

The fundamental issue is that the derivative of a vector with respect to a vector, i.e. ∂y∂x{\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}}, is often written in two competing ways. If the numerator y is of size m and the denominator x of size n, then the result can be laid out as either an m×n matrix or n×m matrix, i.e. the m elements of y laid out in rows and the n elements of x laid out in columns, or vice versa. This leads to the following possibilities: numerator layout, which lays the result out according to y and xT (an m×n matrix, the Jacobian formulation), and denominator layout, which lays it out according to yT and x (an n×m matrix, the Hessian formulation). When handling the gradient ∂y∂x{\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}} and the opposite case ∂y∂x,{\displaystyle {\frac {\partial \mathbf {y} }{\partial x}},} we have the same issues.
To be consistent, we should do one of the following: either choose numerator layout for ∂y∂x{\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}} and then lay out the gradient ∂y∂x{\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}} as a row vector and ∂y∂x{\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}} as a column vector, or choose denominator layout and then lay out the gradient as a column vector and ∂y∂x{\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}} as a row vector. Not all math textbooks and papers are consistent in this respect throughout. That is, sometimes different conventions are used in different contexts within the same book or paper. For example, some choose denominator layout for gradients (laying them out as column vectors), but numerator layout for the vector-by-vector derivative ∂y∂x.{\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}.}

Similarly, when it comes to scalar-by-matrix derivatives ∂y∂X{\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}} and matrix-by-scalar derivatives ∂Y∂x,{\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}},} then consistent numerator layout lays out according to Y and XT, while consistent denominator layout lays out according to YT and X. In practice, however, following a denominator layout for ∂Y∂x,{\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}},} and laying the result out according to YT, is rarely seen because it makes for ugly formulas that do not correspond to the scalar formulas. As a result, the following layouts can often be found: consistent numerator layout, which lays out ∂Y/∂x according to Y and ∂y/∂X according to XT, and a mixed layout, which lays out ∂Y/∂x according to Y and ∂y/∂X according to X.

In the following formulas, we handle the five possible combinations ∂y∂x,∂y∂x,∂y∂x,∂y∂X{\displaystyle {\frac {\partial y}{\partial \mathbf {x} }},{\frac {\partial \mathbf {y} }{\partial x}},{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }},{\frac {\partial y}{\partial \mathbf {X} }}} and ∂Y∂x{\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}}} separately. We also handle cases of scalar-by-scalar derivatives that involve an intermediate vector or matrix. (This can arise, for example, if a multi-dimensional parametric curve is defined in terms of a scalar variable, and then a derivative of a scalar function of the curve is taken with respect to the scalar that parameterizes the curve.) For each of the various combinations, we give numerator-layout and denominator-layout results, except in the cases above where denominator layout rarely occurs. In cases involving matrices where it makes sense, we give numerator-layout and mixed-layout results. As noted above, cases where vector and matrix denominators are written in transpose notation are equivalent to numerator layout with the denominators written without the transpose.

Keep in mind that various authors use different combinations of numerator and denominator layouts for different types of derivatives, and there is no guarantee that an author will consistently use either numerator or denominator layout for all types. Match up the formulas below with those quoted in the source to determine the layout used for that particular type of derivative, but be careful not to assume that derivatives of other types necessarily follow the same kind of layout.

When taking derivatives with an aggregate (vector or matrix) denominator in order to find a maximum or minimum of the aggregate, it should be kept in mind that using numerator layout will produce results that are transposed with respect to the aggregate. For example, in attempting to find the maximum likelihood estimate of a multivariate normal distribution using matrix calculus, if the domain is a k×1 column vector, then the result using the numerator layout will be in the form of a 1×k row vector. Thus, either the results should be transposed at the end or the denominator layout (or mixed layout) should be used. The results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
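A tiny sketch makes the two layouts tangible for a linear map y = A x (all values illustrative): the numerator-layout derivative is the Jacobian itself, and the denominator-layout derivative is its transpose.

```python
import numpy as np

# y = A @ x with A of shape (m, n) = (2, 3).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

J_numerator   = A       # m x n: rows follow y, columns follow x (Jacobian form)
J_denominator = A.T     # n x m: rows follow x, columns follow y (gradient form)

print(J_numerator.shape, J_denominator.shape)   # (2, 3) (3, 2)
# Mixing results from the two conventions without transposing is the classic error
# warned about above.
```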
Using numerator-layout notation, we have:[1]

{\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}=\left[{\frac {\partial y}{\partial x_{1}}}\;{\frac {\partial y}{\partial x_{2}}}\;\cdots \;{\frac {\partial y}{\partial x_{n}}}\right],\qquad {\frac {\partial \mathbf {y} }{\partial x}}={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x}}\\{\frac {\partial y_{2}}{\partial x}}\\\vdots \\{\frac {\partial y_{m}}{\partial x}}\end{bmatrix}},\qquad {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{1}}{\partial x_{n}}}\\\vdots &\ddots &\vdots \\{\frac {\partial y_{m}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{n}}}\end{bmatrix}}.}

The following definitions are only provided in numerator-layout notation:

{\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}={\begin{bmatrix}{\frac {\partial y}{\partial x_{11}}}&\cdots &{\frac {\partial y}{\partial x_{p1}}}\\\vdots &\ddots &\vdots \\{\frac {\partial y}{\partial x_{1q}}}&\cdots &{\frac {\partial y}{\partial x_{pq}}}\end{bmatrix}},\qquad d\mathbf {X} ={\begin{bmatrix}dx_{11}&\cdots &dx_{1q}\\\vdots &\ddots &\vdots \\dx_{p1}&\cdots &dx_{pq}\end{bmatrix}}.}

Using denominator-layout notation, we have:[2]

{\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}={\begin{bmatrix}{\frac {\partial y}{\partial x_{1}}}\\{\frac {\partial y}{\partial x_{2}}}\\\vdots \\{\frac {\partial y}{\partial x_{n}}}\end{bmatrix}},\qquad {\frac {\partial \mathbf {y} }{\partial x}}=\left[{\frac {\partial y_{1}}{\partial x}}\;{\frac {\partial y_{2}}{\partial x}}\;\cdots \;{\frac {\partial y_{m}}{\partial x}}\right],\qquad {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{1}}}\\\vdots &\ddots &\vdots \\{\frac {\partial y_{1}}{\partial x_{n}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{n}}}\end{bmatrix}}.}

As noted above, in general, the results of operations will be transposed when switching between numerator-layout and denominator-layout notation.

To help make sense of all the identities below, keep in mind the most important rules: the chain rule, product rule and sum rule. The sum rule applies universally, and the product rule applies in most of the cases below, provided that the order of matrix products is maintained, since matrix products are not commutative. The chain rule applies in some of the cases, but unfortunately does not apply in matrix-by-scalar derivatives or scalar-by-matrix derivatives (in the latter case, mostly involving the trace operator applied to matrices). In the latter case, the product rule can't quite be applied directly, either, but the equivalent can be done with a bit more work using the differential identities.

The following identities adopt the following conventions: the scalars a, b and c, the vectors a and b, and the matrices A, B and C are constant with respect to the variable of differentiation, while the scalars u and v, the vectors u and v, and the matrices U and V denote differentiable functions of it.

This is presented first because all of the operations that apply to vector-by-vector differentiation apply directly to vector-by-scalar or scalar-by-vector differentiation simply by reducing the appropriate vector in the numerator or denominator to a scalar. The fundamental identities are placed above the thick black line.

[Tables of vector-by-vector, vector-by-scalar, and scalar-by-vector identities appeared here, with the derivatives {\displaystyle {\frac {\partial \mathbf {u} }{\partial \mathbf {x} }}} and {\displaystyle {\frac {\partial \mathbf {v} }{\partial \mathbf {x} }}} given in both numerator layout and denominator layout.]

NOTE: The formulas involving the vector-by-vector derivatives {\displaystyle {\frac {\partial \mathbf {g} (\mathbf {u} )}{\partial \mathbf {u} }}} and {\displaystyle {\frac {\partial \mathbf {f} (\mathbf {g} )}{\partial \mathbf {g} }}} (whose outputs are matrices) assume the matrices are laid out consistent with the vector layout, i.e. numerator-layout matrix when numerator-layout vector and vice versa; otherwise, transpose the vector-by-vector derivatives.

Note that exact equivalents of the scalar product rule and chain rule do not exist when applied to matrix-valued functions of matrices.
However, the product rule of this sort does apply to the differential form (see below), and this is the way to derive many of the identities below involving the trace function, combined with the fact that the trace function allows transposing and cyclic permutation, i.e.:

{\displaystyle \operatorname {tr} (\mathbf {A} )=\operatorname {tr} \left(\mathbf {A} ^{\top }\right),\qquad \operatorname {tr} (\mathbf {ABC} )=\operatorname {tr} (\mathbf {CAB} )=\operatorname {tr} (\mathbf {BCA} ).}

For example, to compute {\displaystyle {\frac {\partial \operatorname {tr} (\mathbf {AXBX^{\top }C} )}{\partial \mathbf {X} }}:}

{\displaystyle {\begin{aligned}d\operatorname {tr} (\mathbf {AXBX^{\top }C} )&=d\operatorname {tr} \left(\mathbf {CAXBX^{\top }} \right)=\operatorname {tr} \left(d\left(\mathbf {CAXBX^{\top }} \right)\right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAX} \,d\left(\mathbf {BX^{\top }} \right)+d\left(\mathbf {CAX} \right)\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAX} d\left(\mathbf {BX^{\top }} \right)\right)+\operatorname {tr} \left(d(\mathbf {CAX} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAXB} d\left(\mathbf {X^{\top }} \right)\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAXB} (d\mathbf {X} )^{\top }\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {CAXB} (d\mathbf {X} )^{\top }\right)^{\top }\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left((d\mathbf {X} )\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} \right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} (d\mathbf {X} )\right)+\operatorname {tr} \left(\mathbf {BX^{\top }} \mathbf {CA} (d\mathbf {X} )\right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} +\mathbf {BX^{\top }} \mathbf {CA} \right)d\mathbf {X} \right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {CAXB} +\mathbf {A^{\top }C^{\top }XB^{\top }} \right)^{\top }d\mathbf {X} \right)\end{aligned}}}

Therefore, in numerator layout,

{\displaystyle {\frac {\partial \operatorname {tr} (\mathbf {AXBX^{\top }C} )}{\partial \mathbf {X} }}=\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} +\mathbf {BX^{\top }CA} .}

(For the last step, see the Conversion from differential to derivative form section.) The transposed result {\displaystyle \mathbf {CAXB} +\mathbf {A^{\top }C^{\top }XB^{\top }} }, i.e. mixed layout, is obtained if denominator layout for X is being used.

It is often easier to work in differential form and then convert back to normal derivatives. This only works well using the numerator layout. In these rules, a is a scalar.

Among the differential identities, the rule for a matrix function of a diagonalizable matrix carries the side conditions {\displaystyle \mathbf {P} _{i}\mathbf {P} _{j}=\delta _{ij}\mathbf {P} _{i}} and that f is differentiable at every eigenvalue {\displaystyle \lambda _{i}}. Here, {\displaystyle \delta _{ij}} is the Kronecker delta and {\displaystyle (\mathbf {P} _{k})_{ij}=(\mathbf {Q} )_{ik}(\mathbf {Q} ^{-1})_{kj}} is the set of orthogonal projection operators that project onto the k-th eigenvector of X. Q is the matrix of eigenvectors of {\displaystyle \mathbf {X} =\mathbf {Q} {\boldsymbol {\Lambda }}\mathbf {Q} ^{-1}}, and {\displaystyle ({\boldsymbol {\Lambda }})_{ii}=\lambda _{i}} are the eigenvalues.
The matrix function {\displaystyle f(\mathbf {X} )} is defined in terms of the scalar function {\displaystyle f(x)} for diagonalizable matrices by {\displaystyle f(\mathbf {X} )=\sum _{i}f(\lambda _{i})\mathbf {P} _{i}} where {\displaystyle \mathbf {X} =\sum _{i}\lambda _{i}\mathbf {P} _{i}} with {\displaystyle \mathbf {P} _{i}\mathbf {P} _{j}=\delta _{ij}\mathbf {P} _{i}}.

To convert to normal derivative form, first convert the differential to one of the following canonical forms, and then use these identities (stated in numerator layout; transpose for denominator layout):

{\displaystyle dy=a\,dx\;\Rightarrow \;{\frac {dy}{dx}}=a,\qquad dy=\mathbf {a} ^{\top }d\mathbf {x} \;\Rightarrow \;{\frac {\partial y}{\partial \mathbf {x} }}=\mathbf {a} ^{\top },\qquad dy=\operatorname {tr} (\mathbf {A} \,d\mathbf {X} )\;\Rightarrow \;{\frac {\partial y}{\partial \mathbf {X} }}=\mathbf {A} ,}

{\displaystyle d\mathbf {y} =\mathbf {a} \,dx\;\Rightarrow \;{\frac {\partial \mathbf {y} }{\partial x}}=\mathbf {a} ,\qquad d\mathbf {y} =\mathbf {A} \,d\mathbf {x} \;\Rightarrow \;{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}=\mathbf {A} ,\qquad d\mathbf {Y} =\mathbf {A} \,dx\;\Rightarrow \;{\frac {\partial \mathbf {Y} }{\partial x}}=\mathbf {A} .}

Matrix differential calculus is used in statistics and econometrics, particularly for the statistical analysis of multivariate distributions, especially the multivariate normal distribution and other elliptical distributions.[8][9][10] It is used in regression analysis to compute, for example, the ordinary least squares regression formula for the case of multiple explanatory variables.[11] It is also used in random matrices, statistical moments, local sensitivity and statistical diagnostics.[12][13]
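The trace identity derived in the example above is easy to check numerically. The following sketch (ours; shapes are restricted to square matrices for simplicity) compares the closed-form gradient, laid out with the shape of X (denominator layout), against central finite differences:

```python
# Minimal numerical check of d tr(A X B X^T C)/dX, assuming square matrices.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, X = (rng.standard_normal((n, n)) for _ in range(4))

f = lambda M: np.trace(A @ M @ B @ M.T @ C)

# Closed form from the derivation above, laid out with the shape of X
# (denominator layout), i.e. the transpose of the numerator-layout result:
G_closed = C @ A @ X @ B + A.T @ C.T @ X @ B.T

# Central finite differences, entry by entry.
G_fd = np.zeros_like(X)
h = 1e-6
for i in range(n):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = h
        G_fd[i, j] = (f(X + E) - f(X - E)) / (2 * h)

assert np.allclose(G_closed, G_fd, atol=1e-5)
```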
https://en.wikipedia.org/wiki/Matrix_calculus
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms,[1] whereas mathematical optimization is in general NP-hard.[2][3][4]

A convex optimization problem is defined by two ingredients:[5][6]

1. the objective function, which is a real-valued convex function {\displaystyle f:{\mathcal {D}}\subseteq \mathbb {R} ^{n}\to \mathbb {R} }; and
2. the feasible set, which is a convex subset {\displaystyle C\subseteq \mathbb {R} ^{n}}.

The goal of the problem is to find some {\displaystyle \mathbf {x^{\ast }} \in C} attaining

{\displaystyle \inf\{f(\mathbf {x} ):\mathbf {x} \in C\}.}

In general, there are three options regarding the existence of a solution:[7]: chpt.4

1. If such a point x* exists, it is referred to as an optimal point or solution, the set of all optimal points is called the optimal set, and the problem is called solvable.
2. If f is unbounded below over C, or the infimum is not attained, then the optimization problem is said to be unbounded.
3. Otherwise, if C is the empty set, the problem is said to be infeasible.

A convex optimization problem is in standard form if it is written as

{\displaystyle {\begin{aligned}&{\underset {\mathbf {x} }{\operatorname {minimize} }}&&f(\mathbf {x} )\\&\operatorname {subject\ to} &&g_{i}(\mathbf {x} )\leq 0,\quad i=1,\dots ,m\\&&&h_{i}(\mathbf {x} )=0,\quad i=1,\dots ,p,\end{aligned}}}

where:[7]: chpt.4

1. {\displaystyle \mathbf {x} \in \mathbb {R} ^{n}} is the vector of optimization variables;
2. the objective function f and the inequality-constraint functions {\displaystyle g_{i}} are convex; and
3. the equality-constraint functions {\displaystyle h_{i}} are affine.

The feasible set {\displaystyle C} of the optimization problem consists of all points {\displaystyle \mathbf {x} \in {\mathcal {D}}} satisfying the inequality and the equality constraints. This set is convex because {\displaystyle {\mathcal {D}}} is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.[7]: chpt.2

Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function {\displaystyle f} can be re-formulated equivalently as the problem of minimizing the convex function {\displaystyle -f}. The problem of maximizing a concave function over a convex set is commonly called a convex optimization problem.[8]

In the standard form it is possible to assume, without loss of generality, that the objective function f is a linear function. This is because any program with a general objective can be transformed into a program with a linear objective by adding a single variable t and a single constraint, as follows:[9]: 1.4

{\displaystyle {\begin{aligned}&{\underset {\mathbf {x} ,t}{\operatorname {minimize} }}&&t\\&\operatorname {subject\ to} &&f(\mathbf {x} )-t\leq 0\\&&&g_{i}(\mathbf {x} )\leq 0,\quad i=1,\dots ,m\\&&&h_{i}(\mathbf {x} )=0,\quad i=1,\dots ,p.\end{aligned}}}

Every convex program can be presented in a conic form, which means minimizing a linear objective over the intersection of an affine plane and a convex cone:[9]: 5.1

{\displaystyle {\underset {\mathbf {x} }{\operatorname {minimize} }}\ \mathbf {c} ^{\top }\mathbf {x} \quad \operatorname {subject\ to} \quad \mathbf {x} \in (\mathbf {b} +L)\cap K,}

where K is a closed pointed convex cone, L is a linear subspace of Rn, and b is a vector in Rn. A linear program in standard form is the special case in which K is the nonnegative orthant of Rn.

It is possible to convert a convex program in standard form to a convex program with no equality constraints.[7]: 132 Denote the equality constraints hi(x)=0 as Ax=b, where A has n columns. If Ax=b is infeasible, then of course the original problem is infeasible. Otherwise, it has some solution x0, and the set of all solutions can be presented as: Fz+x0, where z is in Rk, k=n−rank(A), and F is an n-by-k matrix whose columns span the null space of A. Substituting x = Fz+x0 in the original problem gives:

{\displaystyle {\begin{aligned}&{\underset {\mathbf {z} }{\operatorname {minimize} }}&&f(\mathbf {F} \mathbf {z} +\mathbf {x} _{0})\\&\operatorname {subject\ to} &&g_{i}(\mathbf {F} \mathbf {z} +\mathbf {x} _{0})\leq 0,\quad i=1,\dots ,m\\\end{aligned}}}

where the variables are z. Note that there are rank(A) fewer variables. This means that, in principle, one can restrict attention to convex optimization problems without equality constraints. In practice, however, it is often preferred to retain the equality constraints, since they might make some algorithms more efficient, and also make the problem easier to understand and analyze. A small numerical sketch of this substitution is given below.
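The following sketch (ours; it assumes SciPy is available) carries out the elimination just described: it builds a particular solution x0 of Ax = b and a null-space basis F, and checks that every z yields a feasible point:

```python
# Equality-constraint elimination: every solution of A x = b has the form
# F z + x0, where the columns of F span the null space of A.
import numpy as np
from scipy.linalg import lstsq, null_space

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x0, *_ = lstsq(A, b)      # one particular solution of A x = b
F = null_space(A)          # n-by-k matrix, k = n - rank(A) = 1 here

z = np.array([0.7])        # any z in R^k gives a feasible point
x = F @ z + x0
assert np.allclose(A @ x, b)
```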
The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:[7]: chpt.4[10] least squares, linear programming, convex quadratic minimization with linear constraints, second-order cone programming, semidefinite programming, geometric programming, and entropy maximization with appropriate constraints. Other special cases include least absolute deviations (reducible to linear programming) and convex quadratically constrained quadratic programming.

The following are useful properties of convex optimization problems:[11][7]: chpt.4

1. every local minimum is a global minimum;
2. the optimal set is convex;
3. if the objective function is strictly convex, then the problem has at most one optimal point.

These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.[citation needed]

The convex programs easiest to solve are the unconstrained problems, or the problems with only equality constraints. As the equality constraints are all linear, they can be eliminated with linear algebra and integrated into the objective, thus converting an equality-constrained problem into an unconstrained one.

In the class of unconstrained (or equality-constrained) problems, the simplest ones are those in which the objective is quadratic. For these problems, the KKT conditions (which are necessary for optimality) are all linear, so they can be solved analytically.[7]: chpt.11

For unconstrained (or equality-constrained) problems with a general convex objective that is twice-differentiable, Newton's method can be used. It can be seen as reducing a general unconstrained convex problem to a sequence of quadratic problems.[7]: chpt.11 Newton's method can be combined with line search for an appropriate step size, and it can be mathematically proven to converge quickly. Other efficient algorithms for unconstrained minimization are gradient descent (a special case of steepest descent).

The more challenging problems are those with inequality constraints. A common way to solve them is to reduce them to unconstrained problems by adding a barrier function, enforcing the inequality constraints, to the objective function. Such methods are called interior point methods.[7]: chpt.11 They have to be initialized by finding a feasible interior point using so-called phase I methods, which either find a feasible point or show that none exist. Phase I methods generally consist of reducing the search in question to a simpler convex optimization problem.[7]: chpt.11

Convex optimization problems can also be solved by the following contemporary methods:[12] bundle methods, subgradient projection methods, interior-point methods, and cutting-plane methods.

Subgradient methods can be implemented simply and so are widely used.[15] Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.[citation needed]

Consider a convex minimization problem given in standard form by a cost function {\displaystyle f(x)} and inequality constraints {\displaystyle g_{i}(x)\leq 0} for {\displaystyle 1\leq i\leq m}. Then the domain {\displaystyle {\mathcal {X}}} is:

{\displaystyle {\mathcal {X}}=\left\{x\in \operatorname {dom} f:g_{1}(x)\leq 0,\ldots ,g_{m}(x)\leq 0\right\}.}

The Lagrangian function for the problem is[16]

{\displaystyle L(x,\lambda _{0},\lambda _{1},\ldots ,\lambda _{m})=\lambda _{0}f(x)+\lambda _{1}g_{1}(x)+\cdots +\lambda _{m}g_{m}(x).}

For each point {\displaystyle x} in {\displaystyle X} that minimizes {\displaystyle f} over {\displaystyle X}, there exist real numbers {\displaystyle \lambda _{0},\lambda _{1},\ldots ,\lambda _{m},} called Lagrange multipliers, that satisfy these conditions simultaneously:

1. {\displaystyle x} minimizes {\displaystyle L(y,\lambda _{0},\lambda _{1},\ldots ,\lambda _{m})} over all {\displaystyle y\in X};
2. {\displaystyle \lambda _{0}\geq 0,\lambda _{1}\geq 0,\ldots ,\lambda _{m}\geq 0}, with at least one {\displaystyle \lambda _{k}>0};
3. {\displaystyle \lambda _{1}g_{1}(x)=0,\ldots ,\lambda _{m}g_{m}(x)=0} (complementary slackness).

If there exists a "strictly feasible point", that is, a point {\displaystyle z} satisfying

{\displaystyle g_{1}(z)<0,\ldots ,g_{m}(z)<0,}

then the statement above can be strengthened to require that {\displaystyle \lambda _{0}=1}.
Conversely, if some {\displaystyle x} in {\displaystyle X} satisfies (1)–(3) for scalars {\displaystyle \lambda _{0},\ldots ,\lambda _{m}} with {\displaystyle \lambda _{0}=1}, then {\displaystyle x} is certain to minimize {\displaystyle f} over {\displaystyle X}.

There is a large software ecosystem for convex optimization. This ecosystem has two main categories: solvers on the one hand and modeling tools (or interfaces) on the other. Solvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization problem in higher-level syntax. They manage all transformations to and from the user's high-level model and the solver's input/output format. Below are two tables: the first shows modeling tools (such as CVXPY and JuMP.jl) and the second solvers (such as SCS and MOSEK). They are by no means exhaustive. A minimal example with one such modeling tool is sketched below.

Convex optimization can be used to model problems in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[7]: 17 data analysis and modeling, finance, statistics (optimal experimental design),[22] and structural optimization, where the approximation concept has proven to be efficient.[7][23]

Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.[citation needed]
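To make the split between modeling tools and solvers concrete, here is a minimal sketch (ours; the problem data are made up) using the CVXPY modeling tool mentioned above. CVXPY translates the high-level model into a solver's input format behind the scenes:

```python
# An inequality-constrained least-squares problem in standard form:
# convex objective, convex feasible set.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective f
constraints = [x >= 0, cp.sum(x) == 1]               # convex feasible set C
prob = cp.Problem(objective, constraints)
prob.solve()            # the modeling tool hands the problem off to a solver

print(prob.status, prob.value)
print(x.value)
```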
https://en.wikipedia.org/wiki/Convex_optimization
The posterior probability is a type of conditional probability that results from updating the prior probability with information summarized by the likelihood via an application of Bayes' rule.[1] From an epistemological perspective, the posterior probability contains everything there is to know about an uncertain proposition (such as a scientific hypothesis, or parameter values), given prior knowledge and a mathematical model describing the observations available at a particular time.[2] After the arrival of new information, the current posterior probability may serve as the prior in another round of Bayesian updating.[3]

In the context of Bayesian statistics, the posterior probability distribution usually describes the epistemic uncertainty about statistical parameters conditional on a collection of observed data. From a given posterior distribution, various point and interval estimates can be derived, such as the maximum a posteriori (MAP) or the highest posterior density interval (HPDI).[4] But while conceptually simple, the posterior distribution is generally not tractable and therefore needs to be either analytically or numerically approximated.[5]

In Bayesian statistics, the posterior probability is the probability of the parameters {\displaystyle \theta } given the evidence {\displaystyle X}, and is denoted {\displaystyle p(\theta |X)}. It contrasts with the likelihood function, which is the probability of the evidence given the parameters: {\displaystyle p(X|\theta )}. The two are related as follows. Given a prior belief that a probability distribution function is {\displaystyle p(\theta )} and that the observations {\displaystyle x} have a likelihood {\displaystyle p(x|\theta )}, then the posterior probability is defined as

{\displaystyle p(\theta |x)={\frac {p(x|\theta )p(\theta )}{p(x)}},}

where {\displaystyle p(x)} is the normalizing constant and is calculated as

{\displaystyle p(x)=\int p(x|\theta )p(\theta )\,d\theta }

for continuous {\displaystyle \theta }, or by summing {\displaystyle p(x|\theta )p(\theta )} over all possible values of {\displaystyle \theta } for discrete {\displaystyle \theta }.[7]

The posterior probability is therefore proportional to the product Likelihood · Prior probability.[8]

Suppose there is a school with 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; all boys wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem.

The event G is that the student observed is a girl, and the event T is that the student observed is wearing trousers. To compute the posterior probability {\displaystyle P(G|T)}, we first need to know:

1. P(G), the probability that the student is a girl regardless of any other information, which is 0.4;
2. P(B), the probability that the student is a boy regardless of any other information, which is 0.6;
3. P(T|G), the probability of the student wearing trousers given that the student is a girl, which is 0.5;
4. P(T|B), the probability of the student wearing trousers given that the student is a boy, which is 1;
5. P(T), the probability of a (randomly selected) student wearing trousers regardless of any other information, which is P(T) = P(T|G)P(G) + P(T|B)P(B) = 0.5×0.4 + 1×0.6 = 0.8.

Given all this information, the posterior probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula:

{\displaystyle P(G|T)={\frac {P(T|G)P(G)}{P(T)}}={\frac {0.5\times 0.4}{0.8}}=0.25.}

An intuitive way to solve this is to assume the school has N students. Number of boys = 0.6N and number of girls = 0.4N. If N is sufficiently large, total number of trouser wearers = 0.6N + 50% of 0.4N. And number of girl trouser wearers = 50% of 0.4N. Therefore, in the population of trousers, girls are (50% of 0.4N)/(0.6N + 50% of 0.4N) = 25%. In other words, if you separated out the group of trouser wearers, a quarter of that group will be girls. Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students where 25% are girls. And by definition, the chance of this random student being a girl is 25%.
Every Bayes-theorem problem can be solved in this way.[9]

The posterior probability distribution of one random variable given the value of another can be calculated with Bayes' theorem by multiplying the prior probability distribution by the likelihood function, and then dividing by the normalizing constant, as follows:

{\displaystyle f_{X\mid Y=y}(x)={\frac {f_{X}(x)\,L_{X\mid Y=y}(x)}{\int _{-\infty }^{\infty }f_{X}(u)\,L_{X\mid Y=y}(u)\,du}}}

gives the posterior probability density function for a random variable {\displaystyle X} given the data {\displaystyle Y=y}, where

1. {\displaystyle f_{X}(x)} is the prior density of {\displaystyle X},
2. {\displaystyle L_{X\mid Y=y}(x)=f_{Y\mid X=x}(y)} is the likelihood function as a function of {\displaystyle x},
3. the integral in the denominator is the normalizing constant.

Posterior probability is a conditional probability conditioned on randomly observed data. Hence it is a random variable. For a random variable, it is important to summarize its amount of uncertainty. One way to achieve this goal is to provide a credible interval of the posterior probability.[11]

In classification, posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also class-membership probabilities. While statistical classification methods by definition generate posterior probabilities, machine learning models often supply membership values which do not induce any probabilistic confidence. It is desirable to transform or rescale membership values to class-membership probabilities, since they are comparable and additionally more easily applicable for post-processing.[12]
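Both the discrete classroom example and the continuous density formula above are mechanical to compute. The following sketch (ours; the variable names are illustrative) reproduces the 25% answer and then approximates a posterior density on a grid:

```python
import numpy as np

# Discrete case: the trouser example, P(G|T) = P(T|G) P(G) / P(T).
p_g, p_b = 0.4, 0.6
p_t_given_g, p_t_given_b = 0.5, 1.0
p_t = p_t_given_g * p_g + p_t_given_b * p_b      # normalizing constant, 0.8
print(p_t_given_g * p_g / p_t)                   # posterior P(G|T) = 0.25

# Continuous case: grid approximation of p(theta | x) for a coin's
# head-probability theta, with a uniform prior and 7 heads in 10 flips.
theta = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(theta)
likelihood = theta**7 * (1.0 - theta)**3
unnormalized = likelihood * prior
posterior = unnormalized / np.trapz(unnormalized, theta)  # divide by p(x)
```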
https://en.wikipedia.org/wiki/Posterior_probability
Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks.[1] A survey from May 2020 found that practitioners commonly feel that machine learning systems in industrial applications need better protection.[2]

Machine learning techniques are mostly designed to work on specific problem sets, under the assumption that the training and test data are generated from the same statistical distribution (IID). However, this assumption is often dangerously violated in practical high-stakes applications, where users may intentionally supply fabricated data that violates the statistical assumption. The most common attacks in adversarial machine learning include evasion attacks,[3] data poisoning attacks,[4] Byzantine attacks[5] and model extraction.[6]

At the MIT Spam Conference in January 2004, John Graham-Cumming showed that a machine-learning spam filter could be used to defeat another machine-learning spam filter by automatically learning which words to add to a spam email to get the email classified as not spam.[7]

In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks" as spammers inserted "good words" into their spam emails. (Around 2007, some spammers added random noise to fuzz words within "image spam" in order to defeat OCR-based filters.) In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. As late as 2013 many researchers continued to hope that non-linear classifiers (such as support vector machines and neural networks) might be robust to adversaries, until Battista Biggio and others demonstrated the first gradient-based attacks on such machine-learning models (2012[8]–2013[9]). In 2012, deep neural networks began to dominate computer vision problems; starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries, again using a gradient-based attack to craft adversarial perturbations.[10][11]

Recently, it was observed that adversarial attacks are harder to produce in the practical world due to the different environmental constraints that cancel out the effect of noise.[12][13] For example, any small rotation or slight illumination change on an adversarial image can destroy the adversariality. In addition, researchers such as Google Brain's Nicholas Frosst point out that it is much easier to make self-driving cars[14] miss stop signs by physically removing the sign itself, rather than by creating adversarial examples.[15] Frosst also believes that the adversarial machine learning community incorrectly assumes that models trained on a certain data distribution will also perform well on a completely different data distribution.
He suggests that a new approach to machine learning should be explored, and is currently working on a unique neural network that has characteristics more similar to human perception than state-of-the-art approaches.[15]

While adversarial machine learning continues to be heavily rooted in academia, large tech companies such as Google, Microsoft, and IBM have begun curating documentation and open source code bases to allow others to concretely assess the robustness of machine learning models and minimize the risk of adversarial attacks.[16][17][18]

Examples include attacks in spam filtering, where spam messages are obfuscated through the misspelling of "bad" words or the insertion of "good" words;[19][20] attacks in computer security, such as obfuscating malware code within network packets or modifying the characteristics of a network flow to mislead intrusion detection;[21][22] and attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user,[23] or to compromise users' template galleries that adapt to updated traits over time.

Researchers showed that by changing only one pixel it was possible to fool deep learning algorithms.[24] Others 3-D printed a toy turtle with a texture engineered to make Google's object detection AI classify it as a rifle regardless of the angle from which the turtle was viewed.[25] Creating the turtle required only low-cost commercially available 3-D printing technology.[26]

A machine-tweaked image of a dog was shown to look like a cat to both computers and humans.[27] A 2019 study reported that humans can guess how machines will classify adversarial images.[28] Researchers discovered methods for perturbing the appearance of a stop sign such that an autonomous vehicle classified it as a merge or speed limit sign.[14][29]

McAfee attacked Tesla's former Mobileye system, fooling it into driving 50 mph over the speed limit, simply by adding a two-inch strip of black tape to a speed limit sign.[30][31]

Adversarial patterns on glasses or clothing designed to deceive facial-recognition systems or license-plate readers have led to a niche industry of "stealth streetwear".[32]

An adversarial attack on a neural network can allow an attacker to inject algorithms into the target system.[33] Researchers can also create adversarial audio inputs to disguise commands to intelligent assistants in benign-seeming audio;[34] a parallel literature explores human perception of such stimuli.[35][36]

Clustering algorithms are used in security applications. Malware and computer virus analysis aims to identify malware families, and to generate specific detection signatures.[37][38]

Attacks against (supervised) machine learning algorithms have been categorized along three primary axes:[39] the attack's influence on the classifier, the security violation it causes, and its specificity. This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data/system components, and attack strategy.[41][42] This taxonomy has further been extended to include dimensions for defense strategies against adversarial attacks.[43]

Below are some of the most commonly encountered attack scenarios.

Poisoning consists of contaminating the training dataset with data designed to increase errors in the output. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms with potentially malicious intent.
Concerns have been raised especially about user-generated training data, e.g. for content recommendation or natural language models. The ubiquity of fake accounts offers many opportunities for poisoning. Facebook reportedly removes around 7 billion fake accounts per year.[44][45] Poisoning has been reported as the leading concern for industrial applications.[2]

On social media, disinformation campaigns attempt to bias recommendation and moderation algorithms in order to push certain content over others.

A particular case of data poisoning is the backdoor attack,[46] which aims to teach a specific behavior for inputs with a given trigger, e.g. a small defect on images, sounds, videos or texts.

For instance, intrusion detection systems are often trained using collected data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining.[41][42][39][48][49]

Data poisoning techniques can also be applied to text-to-image models to alter their output, which is used by artists to defend their copyrighted works or their artistic style against imitation.[50]

Data poisoning can also happen unintentionally through model collapse, where models are trained on synthetic data.[51]

As machine learning is scaled, it often relies on multiple computing machines. In federated learning, for instance, edge devices collaborate with a central server, typically by sending gradients or model parameters. However, some of these devices may deviate from their expected behavior, e.g. to harm the central server's model[52] or to bias algorithms towards certain behaviors (e.g., amplifying the recommendation of disinformation content). On the other hand, if the training is performed on a single machine, then the model is very vulnerable to a failure of the machine, or an attack on the machine; the machine is a single point of failure.[53] In fact, the machine owner may themselves insert provably undetectable backdoors.[54]

The current leading solutions to make (distributed) learning algorithms provably resilient to a minority of malicious (a.k.a. Byzantine) participants are based on robust gradient aggregation rules.[55][56][57][58][59][60] The robust aggregation rules do not always work, especially when the data across participants has a non-IID distribution. Nevertheless, in the context of heterogeneous honest participants, such as users with different consumption habits for recommendation algorithms or writing styles for language models, there are provable impossibility theorems on what any robust learning algorithm can guarantee.[5][61]

Evasion attacks[9][41][42][62] consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam, in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems.[23]

Evasion attacks can be generally split into two different categories: black box attacks and white box attacks.[17]

Model extraction involves an adversary probing a black box machine learning system in order to extract the data it was trained on.[63][64] This can cause issues when either the training data or the model itself is sensitive and confidential.
For example, model extraction could be used to extract a proprietary stock trading model which the adversary could then use for their own financial benefit. In the extreme case, model extraction can lead to model stealing, which corresponds to extracting a sufficient amount of data from the model to enable the complete reconstruction of the model.

On the other hand, membership inference is a targeted model extraction attack, which infers whether a given data point was part of the model's training set, often by leveraging the overfitting resulting from poor machine learning practices.[65] Concerningly, this is sometimes achievable even without knowledge or access to a target model's parameters, raising security concerns for models trained on sensitive data, including but not limited to medical records and/or personally identifiable information. With the emergence of transfer learning and public accessibility of many state-of-the-art machine learning models, tech companies are increasingly drawn to create models based on public ones, giving attackers freely accessible information about the structure and type of model being used.[65]

There is a growing literature about adversarial attacks in linear models. Indeed, since the seminal work of Goodfellow et al.[66] studying these models, linear models have been an important tool for understanding how adversarial attacks affect machine learning models. The analysis of these models is simplified because the computation of adversarial attacks can be simplified in linear regression and classification problems. Moreover, adversarial training is convex in this case.[67]

Linear models allow for analytical analysis while still reproducing phenomena observed in state-of-the-art models. One prime example of that is how this model can be used to explain the trade-off between robustness and accuracy.[68] Diverse work indeed provides analysis of adversarial attacks in linear models, including asymptotic analysis for classification[69] and for linear regression,[70][71] as well as finite-sample analysis based on Rademacher complexity.[72]

A result from studying adversarial attacks in linear models is that they relate closely to regularization.[73] Under certain conditions, it has been shown that adversarial training of a linear model is equivalent to a penalized (regularized) estimation procedure; for l∞-bounded attacks the penalty is of l1 (lasso) type. A numerical illustration of this connection is sketched below, after the attack overview.

Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area, some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations.[74][75] While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies.[76]

Adversarial attacks on speech recognition have been introduced for speech-to-text applications, in particular for Mozilla's implementation of DeepSpeech.[77]

There is a large variety of different adversarial attacks that can be used against machine learning systems. Many of these work on both deep learning systems as well as traditional machine learning models such as SVMs[8] and linear regression.[78] A high-level sample of these attack types includes adversarial examples, backdoor (trojan) attacks, model inversion, and membership inference.

An adversarial example refers to specially crafted input that is designed to look "normal" to humans but causes misclassification to a machine learning model. Often, a form of specially designed "noise" is used to elicit the misclassifications.
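The regularization connection mentioned above can be seen directly in a one-dimensional slice of the problem: for a linear predictor under an l∞-bounded perturbation, the worst-case attack has a closed form, and the worst-case absolute error equals the clean error plus an l1 penalty on the weights. A small numerical check (ours; the data are random stand-ins):

```python
# For a linear predictor w, max over ||delta||_inf <= eps of
# |y - w.(x + delta)| equals |y - w.x| + eps * ||w||_1, which is the
# l1-regularization link mentioned above.
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(5)
x = rng.standard_normal(5)
y, eps = 0.3, 0.1

residual = y - w @ x
# Worst-case perturbation: push each coordinate against the sign of w,
# in the direction that inflates the residual.
delta_star = -eps * np.sign(w) * np.sign(residual)
worst_case = abs(y - w @ (x + delta_star))
closed_form = abs(residual) + eps * np.linalg.norm(w, 1)
assert np.isclose(worst_case, closed_form)
```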
Below are some current techniques for generating adversarial examples in the literature (by no means an exhaustive list).

Black box attacks in adversarial machine learning assume that the adversary can only get outputs for provided inputs and has no knowledge of the model structure or parameters.[17][87] In this case, the adversarial example is generated either using a model created from scratch, or without any model at all (excluding the ability to query the original model). In either case, the objective of these attacks is to create adversarial examples that are able to transfer to the black box model in question.[88]

Simple Black-box Adversarial Attacks is a query-efficient way to attack black-box image classifiers.[89]

Take a random orthonormal basis {\displaystyle v_{1},v_{2},\dots ,v_{d}} in {\displaystyle \mathbb {R} ^{d}}. The authors suggested the discrete cosine transform of the standard basis (the pixels). For a correctly classified image {\displaystyle x}, try {\displaystyle x+\epsilon v_{1},x-\epsilon v_{1}}, and compare the amount of error in the classifier upon {\displaystyle x+\epsilon v_{1},x,x-\epsilon v_{1}}. Pick the one that causes the largest amount of error. Repeat this for {\displaystyle v_{2},v_{3},\dots } until the desired level of error in the classifier is reached. The method was discovered when the authors designed a simple baseline to compare with a previous black-box adversarial attack algorithm based on Gaussian processes, and were surprised that the baseline worked even better.[90]

The Square Attack was introduced in 2020 as a black box evasion adversarial attack based on querying classification scores without the need of gradient information.[91] As a score-based black box attack, this adversarial approach is able to query probability distributions across model output classes, but has no other access to the model itself. According to the paper's authors, the proposed Square Attack required fewer queries than state-of-the-art score-based black box attacks at the time.[91]

To describe the function objective, the attack defines the classifier as {\displaystyle f:[0,1]^{d}\rightarrow \mathbb {R} ^{K}}, with {\displaystyle d} representing the dimensions of the input and {\displaystyle K} as the total number of output classes. {\displaystyle f_{k}(x)} returns the score (or a probability between 0 and 1) that the input {\displaystyle x} belongs to class {\displaystyle k}, which allows the classifier's class output for any input {\displaystyle x} to be defined as {\displaystyle {\text{argmax}}_{k=1,...,K}f_{k}(x)}. The goal of this attack is as follows:[91]

{\displaystyle {\text{argmax}}_{k=1,...,K}f_{k}({\hat {x}})\neq y,\quad ||{\hat {x}}-x||_{p}\leq \epsilon \ {\text{ and }}\ {\hat {x}}\in [0,1]^{d}.}

In other words, finding some perturbed adversarial example {\displaystyle {\hat {x}}} such that the classifier incorrectly classifies it to some other class under the constraint that {\displaystyle {\hat {x}}} and {\displaystyle x} are similar. The paper then defines loss {\displaystyle L} as {\displaystyle L(f({\hat {x}}),y)=f_{y}({\hat {x}})-\max _{k\neq y}f_{k}({\hat {x}})} and proposes the solution to finding an adversarial example {\displaystyle {\hat {x}}} as solving the below constrained optimization problem:[91]

{\displaystyle \min _{{\hat {x}}\in [0,1]^{d}}L(f({\hat {x}}),y),\quad {\text{s.t. }}||{\hat {x}}-x||_{p}\leq \epsilon .}
The result in theory is an adversarial example that is highly confident in the incorrect class but is also very similar to the original image. To find such an example, Square Attack utilizes the iterative random search technique to randomly perturb the image in hopes of improving the objective function. In each step, the algorithm perturbs only a small square section of pixels, hence the name Square Attack, and it terminates as soon as an adversarial example is found in order to improve query efficiency. Finally, since the attack algorithm uses scores and not gradient information, the authors of the paper indicate that this approach is not affected by gradient masking, a common technique formerly used to prevent evasion attacks.[91]

The HopSkipJump attack was also proposed as a query-efficient attack, but one that relies solely on access to any input's predicted output class. In other words, the HopSkipJump attack does not require the ability to calculate gradients or access to score values like the Square Attack, and will require just the model's class prediction output (for any given input). The proposed attack is split into two different settings, targeted and untargeted, but both are built from the general idea of adding minimal perturbations that lead to a different model output. In the targeted setting, the goal is to cause the model to misclassify the perturbed image to a specific target label (that is not the original label). In the untargeted setting, the goal is to cause the model to misclassify the perturbed image to any label that is not the original label. The attack objectives for both are as follows, where {\displaystyle x} is the original image, {\displaystyle x^{\prime }} is the adversarial image, {\displaystyle d} is a distance function between images, {\displaystyle c^{*}} is the target label, and {\displaystyle C} is the model's classification class label function:[92]

{\displaystyle {\textbf {Targeted:}}\min _{x^{\prime }}d(x^{\prime },x){\text{ subject to }}C(x^{\prime })=c^{*}}

{\displaystyle {\textbf {Untargeted:}}\min _{x^{\prime }}d(x^{\prime },x){\text{ subject to }}C(x^{\prime })\neq C(x)}

To solve this problem, the attack proposes the following boundary function {\displaystyle S} for both the untargeted and targeted setting:[92]

{\displaystyle S(x^{\prime }):={\begin{cases}\max _{c\neq C(x)}{F(x^{\prime })_{c}}-F(x^{\prime })_{C(x)},&{\text{(Untargeted)}}\\F(x^{\prime })_{c^{*}}-\max _{c\neq c^{*}}{F(x^{\prime })_{c}},&{\text{(Targeted)}}\end{cases}}}

This can be further simplified to better visualize the boundary between different potential adversarial examples:[92]

{\displaystyle S(x^{\prime })>0\iff {\begin{cases}\operatorname {argmax} _{c}F(x^{\prime })\neq C(x),&{\text{(Untargeted)}}\\\operatorname {argmax} _{c}F(x^{\prime })=c^{*},&{\text{(Targeted)}}\end{cases}}}

With this boundary function, the attack then follows an iterative algorithm to find adversarial examples {\displaystyle x^{\prime }} for a given image {\displaystyle x} that satisfy the attack objectives. Boundary search uses a modified binary search to find the point at which the boundary (as defined by {\displaystyle S}) intersects with the line between {\displaystyle x} and {\displaystyle x^{\prime }}.
The next step involves calculating the gradient for {\displaystyle x}, and updating the original {\displaystyle x} using this gradient and a pre-chosen step size. The HopSkipJump authors prove that this iterative algorithm will converge, leading {\displaystyle x} to a point right along the boundary that is very close in distance to the original image.[92]

However, since HopSkipJump is a proposed black box attack and the iterative algorithm above requires the calculation of a gradient in the second iterative step (which black box attacks do not have access to), the authors propose a solution to gradient calculation that requires only the model's output predictions alone.[92] By generating many random vectors in all directions, denoted as {\displaystyle u_{b}}, an approximation of the gradient can be calculated using the average of these random vectors weighted by the sign of the boundary function on the image {\displaystyle x^{\prime }+\delta _{u_{b}}}, where {\displaystyle \delta _{u_{b}}} is the size of the random vector perturbation:[92]

{\displaystyle \nabla S(x^{\prime },\delta )\approx {\frac {1}{B}}\sum _{b=1}^{B}\phi (x^{\prime }+\delta _{u_{b}})u_{b}}

The result of the equation above gives a close approximation of the gradient required in step 2 of the iterative algorithm, completing HopSkipJump as a black box attack.[93][94][92]

White box attacks assume that the adversary has access to model parameters on top of being able to get labels for provided inputs.[88]

One of the first attacks for generating adversarial examples was proposed by Google researchers Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.[95] The attack was called the fast gradient sign method (FGSM), and it consists of adding a linear amount of imperceptible noise to the image and causing a model to incorrectly classify it. This noise is calculated by multiplying the sign of the gradient with respect to the image we want to perturb by a small constant epsilon. As epsilon increases, the model is more likely to be fooled, but the perturbations become easier to identify as well. Shown below is the equation to generate an adversarial example, where {\displaystyle x} is the original image, {\displaystyle \epsilon } is a very small number, {\displaystyle \Delta _{x}} is the gradient function, {\displaystyle J} is the loss function, {\displaystyle \theta } is the model weights, and {\displaystyle y} is the true label.[96]

{\displaystyle adv_{x}=x+\epsilon \cdot sign(\Delta _{x}J(\theta ,x,y))}

One important property of this equation is that the gradient is calculated with respect to the input image since the goal is to generate an image that maximizes the loss for the original image of true label {\displaystyle y}. In traditional gradient descent (for model training), the gradient is used to update the weights of the model since the goal is to minimize the loss for the model on a ground truth dataset.
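The FGSM equation above only needs a model whose input gradient can be written down. The sketch below (ours) uses a hand-coded logistic-regression "model" as a stand-in, since for it the input gradient of the cross-entropy loss has a simple closed form; this is not the networks attacked in the cited papers:

```python
# Minimal FGSM sketch on a toy logistic-regression model; everything here
# (weights, input, label) is an illustrative stand-in.
import numpy as np

def loss_and_grad_x(theta, x, y):
    # Binary cross-entropy J(theta, x, y) and its gradient w.r.t. the INPUT x.
    p = 1.0 / (1.0 + np.exp(-theta @ x))
    J = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * theta          # dJ/dx for the logistic model
    return J, grad_x

theta = np.array([2.0, -1.0, 0.5])   # fixed, already-trained weights
x = np.array([0.1, 0.4, -0.2])       # original input
y = 1.0                              # true label
eps = 0.05

_, g = loss_and_grad_x(theta, x, y)
adv_x = x + eps * np.sign(g)         # the FGSM step from the equation above

# The loss on adv_x is at least as large as the loss on x.
print(loss_and_grad_x(theta, x, y)[0], loss_and_grad_x(theta, adv_x, y)[0])
```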
The Fast Gradient Sign Method was proposed as a fast way to generate adversarial examples to evade the model, based on the hypothesis that neural networks cannot resist even linear amounts of perturbation to the input.[97][96][95] FGSM has been shown to be effective in adversarial attacks for image classification and skeletal action recognition.[98]

In an effort to analyze existing adversarial attacks and defenses, researchers at the University of California, Berkeley, Nicholas Carlini and David Wagner, in 2016 proposed a faster and more robust method to generate adversarial examples.[99]

The attack proposed by Carlini and Wagner begins with trying to solve a difficult non-linear optimization equation:[64]

{\displaystyle \min(||\delta ||_{p}){\text{ subject to }}C(x+\delta )=t,\quad x+\delta \in [0,1]^{n}}

Here the objective is to minimize the noise ({\displaystyle \delta }), added to the original input {\displaystyle x}, such that the machine learning algorithm ({\displaystyle C}) predicts the original input with delta (or {\displaystyle x+\delta }) as some other class {\displaystyle t}. However, instead of solving the above equation directly, Carlini and Wagner propose using a new function {\displaystyle f} such that:[64]

{\displaystyle C(x+\delta )=t\iff f(x+\delta )\leq 0}

This condenses the first equation to the problem below:[64]

{\displaystyle \min(||\delta ||_{p}){\text{ subject to }}f(x+\delta )\leq 0,\quad x+\delta \in [0,1]^{n}}

and even more to the equation below:[64]

{\displaystyle \min(||\delta ||_{p}+c\cdot f(x+\delta )),\quad x+\delta \in [0,1]^{n}}

Carlini and Wagner then propose the use of the below function in place of {\displaystyle f}, using {\displaystyle Z}, a function that determines class probabilities for a given input {\displaystyle x}. When substituted in, this equation can be thought of as finding a target class that is more confident than the next likeliest class by some constant amount:[64]

{\displaystyle f(x)=([\max _{i\neq t}Z(x)_{i}]-Z(x)_{t})^{+}}

When solved using gradient descent, this equation is able to produce stronger adversarial examples when compared to the fast gradient sign method, and is also able to bypass defensive distillation, a defense that was once proposed to be effective against adversarial examples.[100][101][99][64]

Researchers have proposed a multi-step approach to protecting machine learning.[11] A number of defense mechanisms against evasion, poisoning, and privacy attacks have been proposed, including adversarial training, defensive distillation, gradient masking, input sanitization and preprocessing, and certified robustness methods; a schematic sketch of adversarial training follows.
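A schematic sketch of the first defense listed, adversarial training, again on the toy logistic model (ours, with made-up data): each update step is taken on FGSM-perturbed copies of the training inputs rather than on the clean inputs.

```python
# Schematic adversarial-training loop for a toy logistic model: at every
# step, attack the current model with FGSM, then descend on the loss over
# the perturbed inputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)

theta = np.zeros(3)
eps, lr = 0.05, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad_x = (p - y)[:, None] * theta        # per-example input gradient
    X_adv = X + eps * np.sign(grad_x)        # FGSM on the current model
    p_adv = 1.0 / (1.0 + np.exp(-X_adv @ theta))
    grad_theta = X_adv.T @ (p_adv - y) / len(y)
    theta -= lr * grad_theta                 # descend on the adversarial loss
```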
https://en.wikipedia.org/wiki/Adversarial_machine_learning
In mathematics, subderivatives (or subgradients) generalize the derivative to convex functions which are not necessarily differentiable. The set of subderivatives at a point is called the subdifferential at that point.[1] Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.

Let {\displaystyle f:I\to \mathbb {R} } be a real-valued convex function defined on an open interval of the real line. Such a function need not be differentiable at all points: for example, the absolute value function {\displaystyle f(x)=|x|} is non-differentiable when {\displaystyle x=0}. However, for any {\displaystyle x_{0}} in the domain of the function one can draw a line which goes through the point {\displaystyle (x_{0},f(x_{0}))} and which is everywhere either touching or below the graph of f, even at kinks like that of the absolute value function. The slope of such a line is called a subderivative.

Rigorously, a subderivative of a convex function {\displaystyle f:I\to \mathbb {R} } at a point {\displaystyle x_{0}} in the open interval {\displaystyle I} is a real number {\displaystyle c} such that {\displaystyle f(x)-f(x_{0})\geq c(x-x_{0})} for all {\displaystyle x\in I}. By the converse of the mean value theorem, the set of subderivatives at {\displaystyle x_{0}} for a convex function is a nonempty closed interval {\displaystyle [a,b]}, where {\displaystyle a} and {\displaystyle b} are the one-sided limits {\displaystyle a=\lim _{x\to x_{0}^{-}}{\frac {f(x)-f(x_{0})}{x-x_{0}}},\qquad b=\lim _{x\to x_{0}^{+}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}.} The interval {\displaystyle [a,b]} of all subderivatives is called the subdifferential of the function {\displaystyle f} at {\displaystyle x_{0}}, denoted by {\displaystyle \partial f(x_{0})}. If {\displaystyle f} is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential at {\displaystyle x_{0}} contains exactly one subderivative, then {\displaystyle f} is differentiable at {\displaystyle x_{0}} and {\displaystyle \partial f(x_{0})=\{f'(x_{0})\}}.[2]

Consider the function {\displaystyle f(x)=|x|}, which is convex. Then the subdifferential at the origin is the interval {\displaystyle [-1,1]}. The subdifferential at any point {\displaystyle x_{0}<0} is the singleton set {\displaystyle \{-1\}}, while the subdifferential at any point {\displaystyle x_{0}>0} is the singleton set {\displaystyle \{1\}}. This is similar to the sign function, but is not single-valued at {\displaystyle 0}, instead including all possible subderivatives.

The concepts of subderivative and subdifferential can be generalized to functions of several variables. If {\displaystyle f:U\to \mathbb {R} } is a real-valued convex function defined on a convex open set in the Euclidean space {\displaystyle \mathbb {R} ^{n}}, a vector {\displaystyle v} in that space is called a subgradient at {\displaystyle x_{0}\in U} if for any {\displaystyle x\in U} one has that

{\displaystyle f(x)-f(x_{0})\geq v\cdot (x-x_{0}),}

where the dot denotes the dot product. The set of all subgradients at {\displaystyle x_{0}} is called the subdifferential at {\displaystyle x_{0}} and is denoted {\displaystyle \partial f(x_{0})}. The subdifferential is always a nonempty convex compact set.
These concepts generalize further to convex functions {\displaystyle f:U\to \mathbb {R} } on a convex set in a locally convex space {\displaystyle V}. A functional {\displaystyle v^{*}} in the dual space {\displaystyle V^{*}} is called a subgradient at {\displaystyle x_{0}} in {\displaystyle U} if for all {\displaystyle x\in U},

{\displaystyle f(x)-f(x_{0})\geq v^{*}(x-x_{0}).}

The set of all subgradients at {\displaystyle x_{0}} is called the subdifferential at {\displaystyle x_{0}} and is again denoted {\displaystyle \partial f(x_{0})}. The subdifferential is always a convex closed set. It can be an empty set; consider for example an unbounded operator, which is convex, but has no subgradient. If {\displaystyle f} is continuous, the subdifferential is nonempty.

The subdifferential on convex functions was introduced by Jean Jacques Moreau and R. Tyrrell Rockafellar in the early 1960s. The generalized subdifferential for nonconvex functions was introduced by Francis H. Clarke and R. Tyrrell Rockafellar in the early 1980s.[4]
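Subgradients are exactly what the subgradient methods mentioned in the convex-optimization section consume. A minimal sketch (ours) minimizes the nondifferentiable convex function f(x) = |x − 3| + 0.5|x| by stepping along an element of the subdifferential with a diminishing step size:

```python
# Subgradient method on f(x) = |x - 3| + 0.5*|x|, whose minimizer is x = 3
# (0 lies in the subdifferential there). np.sign returns 0 at a kink, which
# is a valid subderivative since 0 is in the subdifferential [-1, 1] of |.|.
import numpy as np

def subgradient(x):
    return np.sign(x - 3) + 0.5 * np.sign(x)

x = 10.0
for k in range(1, 2001):
    x -= (1.0 / k) * subgradient(x)   # classical diminishing step size

print(x)   # approaches the minimizer x = 3
```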
https://en.wikipedia.org/wiki/Subgradient
In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function-theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, embedding problems, information theory, and other areas.

Let {\displaystyle {\mathcal {X}}} be a nonempty set, sometimes referred to as the index set. A symmetric function {\displaystyle K:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is called a positive-definite (p.d.) kernel on {\displaystyle {\mathcal {X}}} if

{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})\geq 0}    (1.1)

holds for all {\displaystyle x_{1},\dots ,x_{n}\in {\mathcal {X}}}, {\displaystyle n\in \mathbb {N} ,c_{1},\dots ,c_{n}\in \mathbb {R} }.

In probability theory, a distinction is sometimes made between positive-definite kernels, for which equality in (1.1) implies {\displaystyle c_{i}=0\;(\forall i)}, and positive semi-definite (p.s.d.) kernels, which do not impose this condition. Note that this is equivalent to requiring that every finite matrix constructed by pairwise evaluation, {\displaystyle \mathbf {K} _{ij}=K(x_{i},x_{j})}, has either entirely positive (p.d.) or nonnegative (p.s.d.) eigenvalues.

In mathematical literature, kernels are usually complex-valued functions. That is, a complex-valued function {\displaystyle K:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {C} } is called a Hermitian kernel if {\displaystyle K(x,y)={\overline {K(y,x)}}} and positive definite if for every finite set of points {\displaystyle x_{1},\dots ,x_{n}\in {\mathcal {X}}} and any complex numbers {\displaystyle \xi _{1},\dots ,\xi _{n}\in \mathbb {C} },

{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}\xi _{i}{\overline {\xi }}_{j}K(x_{i},x_{j})\geq 0,}

where {\displaystyle {\overline {\xi }}_{j}} denotes the complex conjugate.[1] In the rest of this article we assume real-valued functions, which is the common practice in applications of p.d. kernels.

The sigmoid kernel, or hyperbolic tangent kernel, is defined as {\displaystyle K(\mathbf {x} ,\mathbf {y} )=\tanh(\gamma \mathbf {x} ^{T}\mathbf {y} +r),\quad \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{d}} where {\displaystyle \gamma ,r} are real parameters. The kernel is not p.d., but has sometimes been used for kernel algorithms.[3]

Positive-definite kernels, as defined in (1.1), appeared first in 1909 in a paper on integral equations by James Mercer.[4] Several other authors made use of this concept in the following two decades, but none of them explicitly used kernels {\displaystyle K(x,y)=f(x-y)}, i.e. p.d. functions (indeed M. Mathias and S. Bochner seem not to have been aware of the study of p.d. kernels). Mercer's work arose from Hilbert's paper of 1904[5] on Fredholm integral equations of the second kind:

{\displaystyle f(s)=\varphi (s)-\lambda \int _{a}^{b}K(s,t)\varphi (t)\,\mathrm {d} t.}    (1.2)

In particular, Hilbert had shown that

{\displaystyle \int _{a}^{b}\!\!\int _{a}^{b}K(s,t)x(s)x(t)\,\mathrm {d} s\,\mathrm {d} t=\sum _{n}{\frac {1}{\lambda _{n}}}\left(\int _{a}^{b}x(s)\psi _{n}(s)\,\mathrm {d} s\right)^{2},}

where {\displaystyle K} is a continuous real symmetric kernel, {\displaystyle x} is continuous, {\displaystyle \{\psi _{n}\}} is a complete system of orthonormal eigenfunctions, and {\displaystyle \lambda _{n}}'s are the corresponding eigenvalues of (1.2).
Hilbert defined a "definite" kernel as one for which the double integral {\displaystyle J(x)=\int _{a}^{b}\int _{a}^{b}K(s,t)x(s)x(t)\ \mathrm {d} s\;\mathrm {d} t} satisfies {\displaystyle J(x)>0} except for {\displaystyle x(t)=0}. The original object of Mercer's paper was to characterize the kernels which are definite in the sense of Hilbert, but Mercer soon found that the class of such functions was too restrictive to characterize in terms of determinants. He therefore defined a continuous real symmetric kernel {\displaystyle K(s,t)} to be of positive type (i.e. positive-definite) if {\displaystyle J(x)\geq 0} for all real continuous functions {\displaystyle x} on {\displaystyle [a,b]}, and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion {\displaystyle K(s,t)=\sum _{n}{\frac {\psi _{n}(s)\psi _{n}(t)}{\lambda _{n}}}} holds absolutely and uniformly.

At about the same time W. H. Young,[6] motivated by a different question in the theory of integral equations, showed that for continuous kernels condition (1.1) is equivalent to {\displaystyle J(x)\geq 0} for all {\displaystyle x\in L^{1}[a,b]}.

E. H. Moore[7][8] initiated the study of a very general kind of p.d. kernel. If {\displaystyle E} is an abstract set, he calls functions {\displaystyle K(x,y)} defined on {\displaystyle E\times E} "positive Hermitian matrices" if they satisfy (1.1) for all {\displaystyle x_{i}\in E}. Moore was interested in the generalization of integral equations and showed that to each such {\displaystyle K} there is a Hilbert space {\displaystyle H} of functions such that, for each {\displaystyle f\in H,f(y)=(f,K(\cdot ,y))_{H}}. This property is called the reproducing property of the kernel and turns out to have importance in the solution of boundary-value problems for elliptic partial differential equations.

Another line of development in which p.d. kernels played a large role was the theory of harmonics on homogeneous spaces as begun by E. Cartan in 1929, and continued by H. Weyl and S. Ito. The most comprehensive theory of p.d. kernels in homogeneous spaces is that of M. Krein,[9] which includes as special cases the work on p.d. functions and irreducible unitary representations of locally compact groups.

In probability theory, p.d. kernels arise as covariance kernels of stochastic processes.[10]

Positive-definite kernels provide a framework that encompasses some basic Hilbert space constructions. In the following we present a tight relationship between positive-definite kernels and two mathematical objects, namely reproducing kernel Hilbert spaces and feature maps.

Let {\displaystyle X} be a set, {\displaystyle H} a Hilbert space of functions {\displaystyle f:X\to \mathbb {R} }, and {\displaystyle (\cdot ,\cdot )_{H}:H\times H\to \mathbb {R} } the corresponding inner product on {\displaystyle H}. For any {\displaystyle x\in X} the evaluation functional {\displaystyle e_{x}:H\to \mathbb {R} } is defined by {\displaystyle f\mapsto e_{x}(f)=f(x)}. We first define a reproducing kernel Hilbert space (RKHS):

Definition: The space {\displaystyle H} is called a reproducing kernel Hilbert space if the evaluation functionals are continuous.
Every RKHS has a special function associated to it, namely the reproducing kernel: Definition: A reproducing kernel is a function K:X×X→R{\displaystyle K:X\times X\to \mathbb {R} } such that (1) Kx:=K(⋅,x)∈H{\displaystyle K_{x}:=K(\cdot ,x)\in H} for every x∈X{\displaystyle x\in X}, and (2) f(x)=(f,Kx)H{\displaystyle f(x)=(f,K_{x})_{H}} for every f∈H{\displaystyle f\in H} and every x∈X{\displaystyle x\in X}. The latter property is called the reproducing property. The following result shows the equivalence between RKHSs and reproducing kernels: Theorem: Every reproducing kernel K{\displaystyle K} induces a unique RKHS, and every RKHS has a unique reproducing kernel. Now the connection between positive-definite kernels and RKHSs is given by the following theorem. Theorem: Every reproducing kernel is positive-definite, and every positive-definite kernel defines a unique RKHS, of which it is the unique reproducing kernel. Thus, given a positive-definite kernel K{\displaystyle K}, it is possible to build an associated RKHS with K{\displaystyle K} as a reproducing kernel. As stated earlier, positive-definite kernels can be constructed from inner products. This fact can be used to connect p.d. kernels with another interesting object that arises in machine learning applications, namely the feature map. Let F{\displaystyle F} be a Hilbert space, and (⋅,⋅)F{\displaystyle (\cdot ,\cdot )_{F}} the corresponding inner product. Any map Φ:X→F{\displaystyle \Phi :X\to F} is called a feature map. In this case we call F{\displaystyle F} the feature space. It is easy to see[11] that every feature map defines a unique p.d. kernel by K(x,y)=(Φ(x),Φ(y))F.{\displaystyle K(x,y)=(\Phi (x),\Phi (y))_{F}.} Indeed, positive definiteness of K{\displaystyle K} follows from the p.d. property of the inner product. On the other hand, every p.d. kernel, and its corresponding RKHS, have many associated feature maps. For example: let F=H{\displaystyle F=H}, and Φ(x)=Kx{\displaystyle \Phi (x)=K_{x}} for all x∈X{\displaystyle x\in X}. Then (Φ(x),Φ(y))F=(Kx,Ky)H=K(x,y){\displaystyle (\Phi (x),\Phi (y))_{F}=(K_{x},K_{y})_{H}=K(x,y)}, by the reproducing property. This suggests a new look at p.d. kernels as inner products in appropriate Hilbert spaces; in other words, p.d. kernels can be viewed as similarity maps which quantify how similar two points x{\displaystyle x} and y{\displaystyle y} are through the value K(x,y){\displaystyle K(x,y)}. Moreover, through the equivalence of p.d. kernels and their corresponding RKHSs, every feature map can be used to construct an RKHS. Kernel methods are often compared to distance-based methods such as nearest neighbors. In this section we discuss parallels between their two respective ingredients, namely kernels K{\displaystyle K} and distances d{\displaystyle d}. Here by a distance function between each pair of elements of some set X{\displaystyle X}, we mean a metric defined on that set, i.e. any nonnegative-valued function d{\displaystyle d} on X×X{\displaystyle {\mathcal {X}}\times {\mathcal {X}}} which satisfies (1) d(x,y)=0{\displaystyle d(x,y)=0} if and only if x=y{\displaystyle x=y}, (2) d(x,y)=d(y,x){\displaystyle d(x,y)=d(y,x)}, and (3) d(x,z)≤d(x,y)+d(y,z){\displaystyle d(x,z)\leq d(x,y)+d(y,z)}. One link between distances and p.d. kernels is given by a particular kind of kernel, called a negative definite kernel, and defined as follows. Definition: A symmetric function ψ:X×X→R{\displaystyle \psi :{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is called a negative definite (n.d.) kernel on X{\displaystyle {\mathcal {X}}} if ∑i=1n∑j=1ncicjψ(xi,xj)≤0{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}\psi (x_{i},x_{j})\leq 0} holds for any n∈N,x1,…,xn∈X,{\displaystyle n\in \mathbb {N} ,x_{1},\dots ,x_{n}\in {\mathcal {X}},} and c1,…,cn∈R{\displaystyle c_{1},\dots ,c_{n}\in \mathbb {R} } such that ∑i=1nci=0{\textstyle \sum _{i=1}^{n}c_{i}=0}. The parallel between n.d. kernels and distances is in the following: whenever an n.d.
kernel vanishes on the set {(x,x):x∈X}{\displaystyle \{(x,x):x\in {\mathcal {X}}\}}, and is zero only on this set, then its square root is a distance for X{\displaystyle {\mathcal {X}}}.[12] At the same time, not every distance corresponds to an n.d. kernel; this holds only for Hilbertian distances, where a distance d{\displaystyle d} is called Hilbertian if one can embed the metric space (X,d){\displaystyle ({\mathcal {X}},d)} isometrically into some Hilbert space. On the other hand, n.d. kernels can be identified with a subfamily of p.d. kernels known as infinitely divisible kernels. A nonnegative-valued kernel K{\displaystyle K} is said to be infinitely divisible if for every n∈N{\displaystyle n\in \mathbb {N} } there exists a positive-definite kernel Kn{\displaystyle K_{n}} such that K=(Kn)n{\displaystyle K=(K_{n})^{n}}. Another link is that a p.d. kernel induces a pseudometric, where the first constraint on the distance function is loosened to allow d(x,y)=0{\displaystyle d(x,y)=0} for x≠y{\displaystyle x\neq y}. Given a positive-definite kernel K{\displaystyle K}, we can define a distance function as: d(x,y)=K(x,x)−2K(x,y)+K(y,y){\displaystyle d(x,y)={\sqrt {K(x,x)-2K(x,y)+K(y,y)}}} Positive-definite kernels, through their equivalence with reproducing kernel Hilbert spaces (RKHS), are particularly important in the field of statistical learning theory because of the celebrated representer theorem, which states that every minimizer function in an RKHS can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result, as it effectively reduces the empirical risk minimization problem from an infinite-dimensional to a finite-dimensional optimization problem. There are several different ways in which kernels arise in probability theory, for example as the covariance kernels K(x,y)=E[Z(x)⋅Z(y)]{\displaystyle K(x,y)=E[Z(x)\cdot Z(y)]} of a zero-mean stochastic process Z{\displaystyle Z}, in which setting one seeks a good estimate of an underlying function f{\displaystyle f} from observed values. Assume now that a noise variable ϵ(x){\displaystyle \epsilon (x)}, with zero mean and variance σ2{\displaystyle \sigma ^{2}}, is added to x{\displaystyle x}, such that the noise is independent for different x{\displaystyle x} and independent of Z{\displaystyle Z}; then the problem of finding a good estimate for f{\displaystyle f} is identical to the noiseless one, but with a modified kernel given by K(x,y)=E[Z(x)⋅Z(y)]+σ2δxy{\displaystyle K(x,y)=E[Z(x)\cdot Z(y)]+\sigma ^{2}\delta _{xy}}. One of the greatest application areas of so-called meshfree methods is in the numerical solution of PDEs. Some of the popular meshfree methods are closely related to positive-definite kernels (such as meshless local Petrov Galerkin (MLPG), reproducing kernel particle method (RKPM) and smoothed-particle hydrodynamics (SPH)). These methods use radial basis kernels for collocation.[13] In the literature on computer experiments[14] and other engineering experiments, one increasingly encounters models based on p.d. kernels, RBFs or kriging. One such topic is response surface methodology. Other types of applications that boil down to data fitting are rapid prototyping and computer graphics. Here one often uses implicit surface models to approximate or interpolate point cloud data. Applications of p.d. kernels in various other branches of mathematics are in multivariate integration, multivariate optimization, and in numerical analysis and scientific computing, where one studies fast, accurate and adaptive algorithms ideally implemented in high-performance computing environments.[15]
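Both correspondences described in this article can be made concrete in a few lines. In the sketch below (Python with NumPy; the quadratic kernel and the two sample points are arbitrary choices), an explicit feature map φ for the homogeneous polynomial kernel K(x, y) = (x⋅y)² reproduces the kernel as an inner product, and the same kernel induces the distance function defined above:

```python
import numpy as np

def phi(x):
    # explicit feature map for the homogeneous quadratic kernel on R^2:
    # phi(x) = (x1^2, sqrt(2) x1 x2, x2^2), so <phi(x), phi(y)> = (x . y)^2
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def K(x, y):
    return np.dot(x, y) ** 2

def d(x, y):
    # kernel-induced pseudometric; equals ||phi(x) - phi(y)|| in feature space
    return np.sqrt(max(K(x, x) - 2 * K(x, y) + K(y, y), 0.0))

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(np.dot(phi(x), phi(y)), K(x, y))   # both print 2.25
print(d(x, x), d(x, y))                   # 0.0 and ||phi(x) - phi(y)||
```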
https://en.wikipedia.org/wiki/Positive_definite_kernel
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems.[1] The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map; in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines may be infinite-dimensional, but by the representer theorem only a finite-dimensional matrix of pairwise kernel evaluations of the user's data is required. Without parallel processing, kernel machines are slow to train on datasets larger than a few thousand examples. Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2] Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors. Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others. Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity). Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the i{\displaystyle i}-th training example (xi,yi){\displaystyle (\mathbf {x} _{i},y_{i})} and learn for it a corresponding weight wi{\displaystyle w_{i}}. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function k{\displaystyle k}, called a kernel, between the unlabeled input x′{\displaystyle \mathbf {x'} } and each of the training inputs xi{\displaystyle \mathbf {x} _{i}}. For instance, a kernelized binary classifier typically computes a weighted sum of similarities y^=sgn⁡∑i=1nwiyik(xi,x′),{\displaystyle {\hat {y}}=\operatorname {sgn} \sum _{i=1}^{n}w_{i}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),} where y^∈{−1,+1}{\displaystyle {\hat {y}}\in \{-1,+1\}} is the predicted label, the wi{\displaystyle w_{i}} are the weights learned for the training examples, yi∈{−1,+1}{\displaystyle y_{i}\in \{-1,+1\}} is the label of the i{\displaystyle i}-th training example xi{\displaystyle \mathbf {x} _{i}}, k{\displaystyle k} is the kernel function, and sgn{\displaystyle \operatorname {sgn} } is the sign function. Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron.[3] They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition. The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary.
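As a concrete instance of this prediction rule, the following sketch (Python with NumPy) evaluates the weighted sum of similarities with an RBF kernel; the four-point training set and the kernel-perceptron-style weight update are illustrative choices, not something prescribed by the sources above:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def predict(w, X_train, y_train, x_new, k):
    # y_hat = sgn( sum_i w_i y_i k(x_i, x_new) )
    s = sum(w[i] * y_train[i] * k(X_train[i], x_new) for i in range(len(w)))
    return 1 if s >= 0 else -1

# toy training set: class +1 near the origin, class -1 farther out
X = np.array([[0.0, 0.1], [0.1, -0.1], [2.0, 2.0], [-2.0, 2.2]])
y = np.array([1, 1, -1, -1])
w = np.zeros(len(X))

# kernel perceptron: bump the weight of every example currently misclassified
for _ in range(10):
    for i in range(len(X)):
        if predict(w, X, y, X[i], rbf) != y[i]:
            w[i] += 1.0

print(predict(w, X, y, np.array([0.0, 0.0]), rbf))   # expected: +1
print(predict(w, X, y, np.array([2.0, 2.1]), rbf))   # expected: -1
```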
For allx{\displaystyle \mathbf {x} }andx′{\displaystyle \mathbf {x'} }in the input spaceX{\displaystyle {\mathcal {X}}}, certain functionsk(x,x′){\displaystyle k(\mathbf {x} ,\mathbf {x'} )}can be expressed as aninner productin another spaceV{\displaystyle {\mathcal {V}}}. The functionk:X×X→R{\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} }is often referred to as akernelor akernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum orintegral. Certain problems in machine learning have more structure than an arbitrary weighting functionk{\displaystyle k}. The computation is made much simpler if the kernel can be written in the form of a "feature map"φ:X→V{\displaystyle \varphi \colon {\mathcal {X}}\to {\mathcal {V}}}which satisfiesk(x,x′)=⟨φ(x),φ(x′)⟩V.{\displaystyle k(\mathbf {x} ,\mathbf {x'} )=\langle \varphi (\mathbf {x} ),\varphi (\mathbf {x'} )\rangle _{\mathcal {V}}.}The key restriction is that⟨⋅,⋅⟩V{\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {V}}}must be a proper inner product. On the other hand, an explicit representation forφ{\displaystyle \varphi }is not necessary, as long asV{\displaystyle {\mathcal {V}}}is aninner product space. The alternative follows fromMercer's theorem: an implicitly defined functionφ{\displaystyle \varphi }exists whenever the spaceX{\displaystyle {\mathcal {X}}}can be equipped with a suitablemeasureensuring the functionk{\displaystyle k}satisfiesMercer's condition. Mercer's theorem is similar to a generalization of the result from linear algebra thatassociates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure thecounting measureμ(T)=|T|{\displaystyle \mu (T)=|T|}for allT⊂X{\displaystyle T\subset X}, which counts the number of points inside the setT{\displaystyle T}, then the integral in Mercer's theorem reduces to a summation∑i=1n∑j=1nk(xi,xj)cicj≥0.{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}k(\mathbf {x} _{i},\mathbf {x} _{j})c_{i}c_{j}\geq 0.}If this summation holds for all finite sequences of points(x1,…,xn){\displaystyle (\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n})}inX{\displaystyle {\mathcal {X}}}and all choices ofn{\displaystyle n}real-valued coefficients(c1,…,cn){\displaystyle (c_{1},\dots ,c_{n})}(cf.positive definite kernel), then the functionk{\displaystyle k}satisfies Mercer's condition. Some algorithms that depend on arbitrary relationships in the native spaceX{\displaystyle {\mathcal {X}}}would, in fact, have a linear interpretation in a different setting: the range space ofφ{\displaystyle \varphi }. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to computeφ{\displaystyle \varphi }directly during computation, as is the case withsupport-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms. 
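The finite-sum form of Mercer's condition can be probed by brute force, as in the following sketch (Python with NumPy; the inhomogeneous polynomial kernel and the number of random draws are arbitrary choices). A single negative value of the quadratic form would refute positive semi-definiteness, while uniformly nonnegative values are merely consistent with it:

```python
import numpy as np

def k(x, y):
    # inhomogeneous polynomial kernel, known to be positive definite
    return (np.dot(x, y) + 1.0) ** 2

rng = np.random.default_rng(2)
worst = np.inf
for _ in range(1000):
    X = rng.normal(size=(5, 2))          # five random points in R^2
    c = rng.normal(size=5)               # random real coefficients
    G = np.array([[k(a, b) for b in X] for a in X])
    worst = min(worst, c @ G @ c)        # sum_ij c_i c_j k(x_i, x_j)

print("smallest quadratic form seen:", worst)   # nonnegative up to round-off
```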
Theoretically, aGram matrixK∈Rn×n{\displaystyle \mathbf {K} \in \mathbb {R} ^{n\times n}}with respect to{x1,…,xn}{\displaystyle \{\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n}\}}(sometimes also called a "kernel matrix"[4]), whereKij=k(xi,xj){\displaystyle K_{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})}, must bepositive semi-definite (PSD).[5]Empirically, for machine learning heuristics, choices of a functionk{\displaystyle k}that do not satisfy Mercer's condition may still perform reasonably ifk{\displaystyle k}at least approximates the intuitive idea of similarity.[6]Regardless of whetherk{\displaystyle k}is a Mercer kernel,k{\displaystyle k}may still be referred to as a "kernel". If the kernel functionk{\displaystyle k}is also acovariance functionas used inGaussian processes, then the Gram matrixK{\displaystyle \mathbf {K} }can also be called acovariance matrix.[7] Application areas of kernel methods are diverse and includegeostatistics,[8]kriging,inverse distance weighting,3D reconstruction,bioinformatics,cheminformatics,information extractionandhandwriting recognition.
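As an illustration of the covariance reading of the Gram matrix, the following sketch (Python with NumPy; the grid, length-scale and jitter term are arbitrary choices) draws sample paths of a zero-mean Gaussian process whose covariance matrix is the Gram matrix of an RBF kernel on a grid:

```python
import numpy as np

def rbf(x, y, length=0.5):
    return np.exp(-((x - y) ** 2) / (2 * length ** 2))

xs = np.linspace(0.0, 1.0, 50)
K = np.array([[rbf(a, b) for b in xs] for a in xs])
K += 1e-8 * np.eye(len(xs))   # small jitter for numerical positive definiteness

rng = np.random.default_rng(3)
samples = rng.multivariate_normal(np.zeros(len(xs)), K, size=3)
print(samples.shape)          # (3, 50): three smooth random functions on the grid
```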
https://en.wikipedia.org/wiki/Kernel_method
In machine learning, a linear classifier makes a classification decision for each object based on a linear combination of its features. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.[1] If the input feature vector to the classifier is a real vector x→{\displaystyle {\vec {x}}}, then the output score is y=f(w→⋅x→),{\displaystyle y=f({\vec {w}}\cdot {\vec {x}}),} where w→{\displaystyle {\vec {w}}} is a real vector of weights and f is a function that converts the dot product of the two vectors into the desired output. (In other words, w→{\displaystyle {\vec {w}}} is a one-form or linear functional mapping x→{\displaystyle {\vec {x}}} onto R.) The weight vector w→{\displaystyle {\vec {w}}} is learned from a set of labeled training samples. Often f is a threshold function, which maps all values of w→⋅x→{\displaystyle {\vec {w}}\cdot {\vec {x}}} above a certain threshold to the first class and all other values to the second class; e.g., f(x)={1if wTx>θ,0otherwise.{\displaystyle f(\mathbf {x} )={\begin{cases}1&{\text{if }}\mathbf {w} ^{\mathsf {T}}\mathbf {x} >\theta ,\\0&{\text{otherwise}}.\end{cases}}} The superscript T indicates the transpose and θ{\displaystyle \theta } is a scalar threshold. A more complex f might give the probability that an item belongs to a certain class. For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no". A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when x→{\displaystyle {\vec {x}}} is sparse. Also, linear classifiers often work very well when the number of dimensions in x→{\displaystyle {\vec {x}}} is large, as in document classification, where each element in x→{\displaystyle {\vec {x}}} is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized. There are two broad classes of methods for determining the parameters of a linear classifier w→{\displaystyle {\vec {w}}}: generative and discriminative models.[2][3] Methods of the former model the joint probability distribution, whereas methods of the latter model conditional density functions P(class|x→){\displaystyle P({\rm {class}}|{\vec {x}})}. Examples of such generative algorithms include linear discriminant analysis (LDA), which assumes a Gaussian conditional density model, and the naive Bayes classifier. The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include logistic regression, the perceptron, and the support vector machine (SVM). Note: Despite its name, LDA does not belong to the class of discriminative models in this taxonomy. However, its name makes sense when we compare LDA to the other main linear dimensionality reduction algorithm: principal components analysis (PCA). LDA is a supervised learning algorithm that utilizes the labels of the data, while PCA is an unsupervised learning algorithm that ignores the labels. To summarize, the name is a historical artifact.[5] Discriminative training often yields higher accuracy than modeling the conditional density functions[citation needed]. However, handling missing data is often easier with conditional density models[citation needed].
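A minimal sketch of the scoring-and-threshold rule defined above follows (Python with NumPy); the weight vector and threshold here are made-up values for illustration rather than learned parameters:

```python
import numpy as np

w = np.array([0.8, -0.4, 0.2])   # hypothetical weight vector
theta = 0.5                       # scalar threshold

def classify(x):
    # score = w . x, then the threshold function f picks one of two classes
    score = np.dot(w, x)
    return 1 if score > theta else 0

print(classify(np.array([1.0, 0.0, 1.0])))   # score 1.0  > 0.5 -> class 1
print(classify(np.array([0.0, 1.0, 0.0])))   # score -0.4 <= 0.5 -> class 0
```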
All of the linear classifier algorithms listed above can be converted into non-linear algorithms operating on a different input space φ(x→){\displaystyle \varphi ({\vec {x}})}, using the kernel trick. Discriminative training of linear classifiers usually proceeds in a supervised way, by means of an optimization algorithm that is given a training set with desired outputs and a loss function that measures the discrepancy between the classifier's outputs and the desired outputs. Thus, the learning algorithm solves an optimization problem of the form[1] argminwR(w)+C∑i=1NL(yi,wTxi),{\displaystyle {\underset {\mathbf {w} }{\operatorname {arg\,min} }}\;R(\mathbf {w} )+C\sum _{i=1}^{N}L(y_{i},\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}),} where w{\displaystyle \mathbf {w} } is the vector of classifier parameters, L(yi,wTxi){\displaystyle L(y_{i},\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i})} is a loss function that measures the discrepancy between the classifier's prediction and the true output yi{\displaystyle y_{i}} for the i-th training example, R(w){\displaystyle R(\mathbf {w} )} is a regularization function that prevents the parameters from getting too large (causing overfitting), and C{\displaystyle C} is a scalar constant (set by the user of the learning algorithm) that controls the balance between the regularization and the loss function. Popular loss functions include the hinge loss (for linear SVMs) and the log loss (for linear logistic regression). If the regularization function R is convex, then the above is a convex problem.[1] Many algorithms exist for solving such problems; popular ones for linear classification include (stochastic) gradient descent, L-BFGS, coordinate descent and Newton methods.
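A small concrete instance of this optimization problem, sketched below under arbitrary choices of synthetic data, learning rate and regularization strength, is L2-regularized logistic regression (log loss) fitted by batch gradient descent:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # linearly separable labels

w = np.zeros(2)
lam, lr = 0.1, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    grad = X.T @ (p - y) / len(y) + lam * w    # gradient of log loss + L2 term
    w -= lr * grad                             # gradient descent step

accuracy = np.mean((X @ w > 0) == (y == 1))
print("weights:", w, "training accuracy:", accuracy)
```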
https://en.wikipedia.org/wiki/Linear_classifier
Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods. Consider the linear equation Ax=b,(1){\displaystyle Ax=b,\qquad (1)} where A∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}} and b∈Rm{\displaystyle b\in \mathbb {R} ^{m}} are given and x∈Rn{\displaystyle x\in \mathbb {R} ^{n}} is the variable to be computed. When m>n,{\displaystyle m>n,} it is generally the case that (1) has no solution. For example, there is no value of x{\displaystyle x} that satisfies [100111]x=[110],{\displaystyle {\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}x={\begin{bmatrix}1\\1\\0\end{bmatrix}},} because the first two rows require that x=(1,1),{\displaystyle x=(1,1),} but then the third row is not satisfied. Thus, for m>n,{\displaystyle m>n,} the goal of solving (1) exactly is typically replaced by finding the value of x{\displaystyle x} that minimizes some error. There are many ways that the error can be defined, but one of the most common is to define it as ‖Ax−b‖2.{\displaystyle \|Ax-b\|^{2}.} This produces a minimization problem, called a least squares problem: minx‖Ax−b‖2.(2){\displaystyle \min _{x}\|Ax-b\|^{2}.\qquad (2)} The solution to the least squares problem (2) is computed by solving the normal equation[1] A⊤Ax=A⊤b,(3){\displaystyle A^{\top }Ax=A^{\top }b,\qquad (3)} where A⊤{\displaystyle A^{\top }} denotes the transpose of A{\displaystyle A}. Continuing the example above, with A=[100111]andb=[110],{\displaystyle A={\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}\quad {\text{and}}\quad b={\begin{bmatrix}1\\1\\0\end{bmatrix}},} we find A⊤A=[101011][100111]=[2112]{\displaystyle A^{\top }A={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}={\begin{bmatrix}2&1\\1&2\end{bmatrix}}} and A⊤b=[101011][110]=[11].{\displaystyle A^{\top }b={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1\\1\\0\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}.} Solving the normal equation gives x=(1/3,1/3).{\displaystyle x=(1/3,1/3).} The three main linear least squares formulations are ordinary least squares (OLS), weighted least squares (WLS), and generalized least squares (GLS); other, more specialized formulations also exist. In OLS (i.e., assuming unweighted observations), the optimal value of the objective function is found by substituting the optimal expression for the coefficient vector: S=yT(I−H)T(I−H)y=yT(I−H)y,{\displaystyle S=\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} =\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} ,} where H=X(XTX)−1XT{\displaystyle \mathbf {H} =\mathbf {X} (\mathbf {X} ^{\mathsf {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\mathsf {T}}}, the latter equality holding since (I−H){\displaystyle (\mathbf {I} -\mathbf {H} )} is symmetric and idempotent. It can be shown from this[9] that under an appropriate assignment of weights the expected value of S is m−n{\textstyle m-n}. If instead unit weights are assumed, the expected value of S is (m−n)σ2{\displaystyle (m-n)\sigma ^{2}}, where σ2{\displaystyle \sigma ^{2}} is the variance of each observation. If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared (χ2{\displaystyle \chi ^{2}}) distribution with m−n degrees of freedom. Some illustrative percentile values of χ2{\displaystyle \chi ^{2}} are given in the following table.[10] These values can be used for a statistical criterion as to the goodness of fit.
When unit weights are used, the numbers should be divided by the variance of an observation. For WLS, the ordinary objective function above is replaced by a weighted sum of squared residuals. In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system. Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations Ax = b, where b is not an element of the column space of the matrix A. The approximate solution is realized as an exact solution to Ax = b', where b' is the projection of b onto the column space of A. The best approximation is then that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called linear least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator. In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression, which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic. If the experimental errors, ε{\displaystyle \varepsilon }, are uncorrelated, have a mean of zero and a constant variance, σ2{\displaystyle \sigma ^{2}}, the Gauss–Markov theorem states that the least-squares estimator, β^{\displaystyle {\hat {\boldsymbol {\beta }}}}, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased. For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss–Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also amaximum likelihoodestimator.[11] These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid. An assumption underlying the treatment given above is that the independent variable,x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case,total least squaresor more generallyerrors-in-variables models, orrigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[12][13] In some cases the (weighted) normal equations matrixXTXisill-conditioned. When fitting polynomials the normal equations matrix is aVandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.[citation needed]In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate.[citation needed]Variousregularizationtechniques can be applied in such cases, the most common of which is calledridge regression. If further information about the parameters is known, for example, a range of possible values ofβ^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, then various techniques can be used to increase the stability of the solution. For example, seeconstrained least squares. Another drawback of the least squares estimator is the fact that the norm of the residuals,‖y−Xβ^‖{\displaystyle \|\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}}\|}is minimized, whereas in some cases one is truly interested in obtaining small error in the parameterβ^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, e.g., a small value of‖β−β^‖{\displaystyle \|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|}.[citation needed]However, since the true parameterβ{\displaystyle {\boldsymbol {\beta }}}is necessarily unknown, this quantity cannot be directly minimized. If aprior probabilityonβ^{\displaystyle {\hat {\boldsymbol {\beta }}}}is known, then aBayes estimatorcan be used to minimize themean squared error,E{‖β−β^‖2}{\displaystyle E\left\{\|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|^{2}\right\}}. The least squares method is often applied when no prior is known. When several parameters are being estimated jointly, better estimators can be constructed, an effect known asStein's phenomenon. For example, if the measurement error isGaussian, several estimators are known whichdominate, or outperform, the least squares technique; the best known of these is theJames–Stein estimator. This is an example of more generalshrinkage estimatorsthat have been applied to regression problems. The primary application of linear least squares is indata fitting. 
Given a set ofmdata pointsy1,y2,…,ym,{\displaystyle y_{1},y_{2},\dots ,y_{m},}consisting of experimentally measured values taken atmvaluesx1,x2,…,xm{\displaystyle x_{1},x_{2},\dots ,x_{m}}of an independent variable (xi{\displaystyle x_{i}}may be scalar or vector quantities), and given a model functiony=f(x,β),{\displaystyle y=f(x,{\boldsymbol {\beta }}),}withβ=(β1,β2,…,βn),{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\beta _{2},\dots ,\beta _{n}),}it is desired to find the parametersβj{\displaystyle \beta _{j}}such that the model function "best" fits the data. In linear least squares, linearity is meant to be with respect to parametersβj,{\displaystyle \beta _{j},}sof(x,β)=∑j=1nβjφj(x).{\displaystyle f(x,{\boldsymbol {\beta }})=\sum _{j=1}^{n}\beta _{j}\varphi _{j}(x).} Here, the functionsφj{\displaystyle \varphi _{j}}may benonlinearwith respect to the variablex. Ideally, the model function fits the data exactly, soyi=f(xi,β){\displaystyle y_{i}=f(x_{i},{\boldsymbol {\beta }})}for alli=1,2,…,m.{\displaystyle i=1,2,\dots ,m.}This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of theresidualsri(β)=yi−f(xi,β),(i=1,2,…,m){\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f(x_{i},{\boldsymbol {\beta }}),\ (i=1,2,\dots ,m)}so to minimize the functionS(β)=∑i=1mri2(β).{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}^{2}({\boldsymbol {\beta }}).} After substituting forri{\displaystyle r_{i}}and then forf{\displaystyle f}, this minimization problem becomes the quadratic minimization problem above withXij=φj(xi),{\displaystyle X_{ij}=\varphi _{j}(x_{i}),}and the best fit can be found by solving the normal equations. A hypothetical researcher conducts an experiment and obtains four(x,y){\displaystyle (x,y)}data points:(1,6),{\displaystyle (1,6),}(2,5),{\displaystyle (2,5),}(3,7),{\displaystyle (3,7),}and(4,10){\displaystyle (4,10)}(shown in red in the diagram on the right). Because of exploratory data analysis or prior knowledge of the subject matter, the researcher suspects that they{\displaystyle y}-values depend on thex{\displaystyle x}-values systematically. Thex{\displaystyle x}-values are assumed to be exact, but they{\displaystyle y}-values contain some uncertainty or "noise", because of the phenomenon being studied, imperfections in the measurements, etc. One of the simplest possible relationships betweenx{\displaystyle x}andy{\displaystyle y}is a liney=β1+β2x{\displaystyle y=\beta _{1}+\beta _{2}x}. The interceptβ1{\displaystyle \beta _{1}}and the slopeβ2{\displaystyle \beta _{2}}are initially unknown. The researcher would like to find values ofβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}that cause the line to pass through the four data points. In other words, the researcher would like to solve the system of linear equationsβ1+1β2=6,β1+2β2=5,β1+3β2=7,β1+4β2=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}&&\;=\;&&10.&\\\end{alignedat}}}With four equations in two unknowns, this system is overdetermined. There is no exact solution. 
To consider approximate solutions, one introducesresidualsr1{\displaystyle r_{1}},r2{\displaystyle r_{2}},r3{\displaystyle r_{3}},r4{\displaystyle r_{4}}into the equations:β1+1β2+r1=6,β1+2β2+r2=5,β1+3β2+r3=7,β1+4β2+r4=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}+r_{1}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}+r_{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}+r_{3}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}+r_{4}&&\;=\;&&10.&\\\end{alignedat}}}Thei{\displaystyle i}th residualri{\displaystyle r_{i}}is the misfit between thei{\displaystyle i}th observationyi{\displaystyle y_{i}}and thei{\displaystyle i}th predictionβ1+β2xi{\displaystyle \beta _{1}+\beta _{2}x_{i}}:r1=6−(β1+1β2),r2=5−(β1+2β2),r3=7−(β1+3β2),r4=10−(β1+4β2).{\displaystyle {\begin{alignedat}{3}r_{1}&&\;=\;&&6-(\beta _{1}+1\beta _{2}),&\\r_{2}&&\;=\;&&5-(\beta _{1}+2\beta _{2}),&\\r_{3}&&\;=\;&&7-(\beta _{1}+3\beta _{2}),&\\r_{4}&&\;=\;&&10-(\beta _{1}+4\beta _{2}).&\\\end{alignedat}}}Among all approximate solutions, the researcher would like to find the one that is "best" in some sense. Inleast squares, one focuses on the sumS{\displaystyle S}of the squared residuals:S(β1,β2)=r12+r22+r32+r42=[6−(β1+1β2)]2+[5−(β1+2β2)]2+[7−(β1+3β2)]2+[10−(β1+4β2)]2=4β12+30β22+20β1β2−56β1−154β2+210.{\displaystyle {\begin{aligned}S(\beta _{1},\beta _{2})&=r_{1}^{2}+r_{2}^{2}+r_{3}^{2}+r_{4}^{2}\\[6pt]&=[6-(\beta _{1}+1\beta _{2})]^{2}+[5-(\beta _{1}+2\beta _{2})]^{2}+[7-(\beta _{1}+3\beta _{2})]^{2}+[10-(\beta _{1}+4\beta _{2})]^{2}\\[6pt]&=4\beta _{1}^{2}+30\beta _{2}^{2}+20\beta _{1}\beta _{2}-56\beta _{1}-154\beta _{2}+210.\\[6pt]\end{aligned}}}The best solution is defined to be the one thatminimizesS{\displaystyle S}with respect toβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}. The minimum can be calculated by setting thepartial derivativesofS{\displaystyle S}to zero:0=∂S∂β1=8β1+20β2−56,{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=8\beta _{1}+20\beta _{2}-56,}0=∂S∂β2=20β1+60β2−154.{\displaystyle 0={\frac {\partial S}{\partial \beta _{2}}}=20\beta _{1}+60\beta _{2}-154.}Thesenormal equationsconstitute a system of two linear equations in two unknowns. The solution isβ1=3.5{\displaystyle \beta _{1}=3.5}andβ2=1.4{\displaystyle \beta _{2}=1.4}, and the best-fit line is thereforey=3.5+1.4x{\displaystyle y=3.5+1.4x}. The residuals are1.1,{\displaystyle 1.1,}−1.3,{\displaystyle -1.3,}−0.7,{\displaystyle -0.7,}and0.9{\displaystyle 0.9}(see the diagram on the right). The minimum value of the sum of squared residuals isS(3.5,1.4)=1.12+(−1.3)2+(−0.7)2+0.92=4.2.{\displaystyle S(3.5,1.4)=1.1^{2}+(-1.3)^{2}+(-0.7)^{2}+0.9^{2}=4.2.} This calculation can be expressed in matrix notation as follows. 
The original system of equations isy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, wherey=[65710],X=[11121314],β=[β1β2].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{cc}1&1\\1&2\\1&3\\1&4\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\\\beta _{2}\end{array}}\right].}Intuitively,y=Xβ⇒X⊤y=X⊤Xβ⇒β=(X⊤X)−1X⊤y=[3.51.4].{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} ^{\top }\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}3.5\\1.4\end{array}}\right].}More rigorously, ifX⊤X{\displaystyle \mathbf {X} ^{\top }\mathbf {X} }is invertible, then the matrixX(X⊤X)−1X⊤{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }}represents orthogonal projection onto the column space ofX{\displaystyle \mathbf {X} }. Therefore, among all vectors of the formXβ{\displaystyle \mathbf {X} \mathbf {\beta } }, the one closest toy{\displaystyle \mathbf {y} }isX(X⊤X)−1X⊤y{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }. SettingX(X⊤X)−1X⊤y=Xβ,{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} \mathbf {\beta } ,}it is evident thatβ=(X⊤X)−1X⊤y{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }is a solution. Suppose that the hypothetical researcher wishes to fit a parabola of the formy=β1x2{\displaystyle y=\beta _{1}x^{2}}. Importantly, this model is still linear in the unknown parameters (now justβ1{\displaystyle \beta _{1}}), so linear least squares still applies. The system of equations incorporating residuals is6=β1(1)2+r15=β1(2)2+r27=β1(3)2+r310=β1(4)2+r4{\displaystyle {\begin{alignedat}{2}6&&\;=\beta _{1}(1)^{2}+r_{1}\\5&&\;=\beta _{1}(2)^{2}+r_{2}\\7&&\;=\beta _{1}(3)^{2}+r_{3}\\10&&\;=\beta _{1}(4)^{2}+r_{4}\\\end{alignedat}}} The sum of squared residuals isS(β1)=(6−β1)2+(5−4β1)2+(7−9β1)2+(10−16β1)2.{\displaystyle S(\beta _{1})=(6-\beta _{1})^{2}+(5-4\beta _{1})^{2}+(7-9\beta _{1})^{2}+(10-16\beta _{1})^{2}.}There is just one partial derivative to set to 0:0=∂S∂β1=708β1−498.{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=708\beta _{1}-498.}The solution isβ1=0.703{\displaystyle \beta _{1}=0.703}, and the fit model isy=0.703x2{\displaystyle y=0.703x^{2}}. In matrix notation, the equations without residuals are againy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, where nowy=[65710],X=[14916],β=[β1].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{c}1\\4\\9\\16\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\end{array}}\right].}By the same logic as above, the solution isβ=(X⊤X)−1X⊤y=[0.703].{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}0.703\end{array}}\right].} The figure shows an extension to fitting the three parameter parabola using a design matrixX{\displaystyle \mathbf {X} }with three columns (one forx0{\displaystyle x^{0}},x1{\displaystyle x^{1}}, andx2{\displaystyle x^{2}}), and one row for each of the red data points. 
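Both fits in this example can be reproduced numerically from the normal equations, as in the following sketch (Python with NumPy):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])

# straight line y = beta1 + beta2 * x: design matrix with columns [1, x]
X_line = np.column_stack([np.ones_like(x), x])
beta_line = np.linalg.solve(X_line.T @ X_line, X_line.T @ y)
r = y - X_line @ beta_line
print(beta_line)   # [3.5, 1.4]
print(r)           # [1.1, -1.3, -0.7, 0.9]
print(r @ r)       # 4.2

# one-parameter parabola y = beta1 * x^2: a single design column x^2
X_par = (x ** 2).reshape(-1, 1)
beta_par = np.linalg.solve(X_par.T @ X_par, X_par.T @ y)
print(beta_par)    # [0.7033...] = 498 / 708
```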
More generally, one can haven{\displaystyle n}regressorsxj{\displaystyle x_{j}}, and a linear modely=β0+∑j=1nβjxj.{\displaystyle y=\beta _{0}+\sum _{j=1}^{n}\beta _{j}x_{j}.}
https://en.wikipedia.org/wiki/Normal_equation
Word2vecis a technique innatural language processing(NLP) for obtainingvectorrepresentations of words. These vectors capture information about the meaning of the word based on the surrounding words. The word2vec algorithm estimates these representations by modeling text in a largecorpus. Once trained, such a model can detectsynonymouswords or suggest additional words for a partial sentence. Word2vec was developed byTomáš Mikolov, Kai Chen, Greg Corrado,Ilya SutskeverandJeff Deanat Google, and published in 2013.[1][2] Word2vec represents a word as a high-dimensionvectorof numbers which capture relationships between words. In particular, words which appear in similar contexts are mapped to vectors which are nearby as measured bycosine similarity. This indicates the level ofsemantic similaritybetween the words, so for example the vectors forwalkandranare nearby, as are those for "but" and "however", and "Berlin" and "Germany". Word2vec is a group of related models that are used to produceword embeddings. These models are shallow, two-layerneural networksthat are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a largecorpus of textand produces a mapping of the set of words to avector space, typically of several hundreddimensions, with each unique word in thecorpusbeing assigned a vector in the space. Word2vec can use either of two model architectures to produce thesedistributed representationsof words: continuousbag of words(CBOW) or continuously sliding skip-gram. In both architectures, word2vec considers both individual words and a sliding context window as it iterates over the corpus. The CBOW can be viewed as a ‘fill in the blank’ task, where the word embedding represents the way the word influences the relative probabilities of other words in the context window. Words which are semantically similar should influence these probabilities in similar ways, because semantically similar words should be used in similar contexts. The order of context words does not influence prediction (bag of words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words.[1][2]The skip-gram architecture weighs nearby context words more heavily than more distant context words. According to the authors' note,[3]CBOW is faster while skip-gram does a better job for infrequent words. After the model is trained, the learned word embeddings are positioned in the vector space such that words that share common contexts in the corpus — that is, words that aresemanticallyand syntactically similar — are located close to one another in the space.[1]More dissimilar words are located farther from one another in the space.[1] This section is based on expositions.[4][5] A corpus is a sequence of words. Both CBOW and skip-gram are methods to learn one vector per word appearing in the corpus. LetV{\displaystyle V}("vocabulary") be the set of all words appearing in the corpusC{\displaystyle C}. Our goal is to learn one vectorvw∈Rn{\displaystyle v_{w}\in \mathbb {R} ^{n}}for each wordw∈V{\displaystyle w\in V}. The idea of skip-gram is that the vector of a word should be close to the vector of each of its neighbors. The idea of CBOW is that the vector-sum of a word's neighbors should be close to the vector of the word. In the original publication, "closeness" is measured bysoftmax, but the framework allows other ways to measure closeness. 
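The dot-product-softmax measure of closeness used in the original formulation is short to write down in code. The following sketch (Python with NumPy; the five-word vocabulary and random 4-dimensional vectors are stand-ins for trained parameters) computes log Pr(w | center word) in the form derived in the next paragraphs:

```python
import numpy as np

rng = np.random.default_rng(5)
vocab = ["the", "cat", "sat", "on", "mat"]
V = {w: rng.normal(size=4) for w in vocab}   # one vector per vocabulary word

def log_prob(context_word, center_word):
    # log Pr(context | center) = v_context . v_center - log sum_w exp(v_w . v_center)
    vi = V[center_word]
    logits = np.array([V[w] @ vi for w in vocab])
    log_Z = np.log(np.sum(np.exp(logits)))   # normalizer over the vocabulary
    return V[context_word] @ vi - log_Z

print(log_prob("cat", "sat"))
# In real training, this normalizer over the whole vocabulary is the expensive
# term, which motivates hierarchical softmax and negative sampling.
```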
Suppose we want each word in the corpus to be predicted by every other word in a small span of 4 words. The set of relative indexes of neighbor words will be:N={−4,−3,−2,−1,+1,+2,+3,+4}{\displaystyle N=\{-4,-3,-2,-1,+1,+2,+3,+4\}}. Then the training objective is to maximize the following quantity:∏i∈CPr(wi|wj:j∈N+i){\displaystyle \prod _{i\in C}\Pr(w_{i}|w_{j}:j\in N+i)}That is, we want to maximize the total probability for the corpus, as seen by a probability model that uses word neighbors to predict words. Products are numerically unstable, so we convert it by taking the logarithm:∑i∈Clog⁡(Pr(wi|wj:j∈N+i)){\displaystyle \sum _{i\in C}\log(\Pr(w_{i}|w_{j}:j\in N+i))}That is, we maximize the log-probability of the corpus. Our probability model is as follows: Given words{wj:j∈N+i}{\displaystyle \{w_{j}:j\in N+i\}}, it takes their vector sumv:=∑j∈N+ivwj{\displaystyle v:=\sum _{j\in N+i}v_{w_{j}}}, then take the dot-product-softmax with every other vector sum (this step is similar to the attention mechanism in Transformers), to obtain the probability:Pr(w|wj:j∈N+i):=evw⋅v∑w∈Vevw⋅v{\displaystyle \Pr(w|w_{j}:j\in N+i):={\frac {e^{v_{w}\cdot v}}{\sum _{w\in V}e^{v_{w}\cdot v}}}}The quantity to be maximized is then after simplifications:∑i∈C,j∈N+i(vwi⋅vwj−ln⁡∑w∈Vevw⋅vwj){\displaystyle \sum _{i\in C,j\in N+i}\left(v_{w_{i}}\cdot v_{w_{j}}-\ln \sum _{w\in V}e^{v_{w}\cdot v_{w_{j}}}\right)}The quantity on the left is fast to compute, but the quantity on the right is slow, as it involves summing over the entire vocabulary set for each word in the corpus. Furthermore, to use gradient ascent to maximize the log-probability requires computing the gradient of the quantity on the right, which is intractable. This prompted the authors to use numerical approximation tricks. For skip-gram, the training objective is∏i∈CPr(wj:j∈N+i|wi){\displaystyle \prod _{i\in C}\Pr(w_{j}:j\in N+i|w_{i})}That is, we want to maximize the total probability for the corpus, as seen by a probability model that uses words to predict its word neighbors. We predict each word-neighbor independently, thusPr(wj:j∈N+i|wi)=∏j∈N+iPr(wj|wi){\displaystyle \Pr(w_{j}:j\in N+i|w_{i})=\prod _{j\in N+i}\Pr(w_{j}|w_{i})}. Products are numerically unstable, so we convert it by taking the logarithm:∑i∈C,j∈N+iln⁡Pr(wj|wi){\displaystyle \sum _{i\in C,j\in N+i}\ln \Pr(w_{j}|w_{i})}The probability model is still the dot-product-softmax model, so the calculation proceeds as before.∑i∈C,j∈N+i(vwi⋅vwj−ln⁡∑w∈Vevw⋅vwi){\displaystyle \sum _{i\in C,j\in N+i}\left(v_{w_{i}}\cdot v_{w_{j}}-\ln \sum _{w\in V}e^{v_{w}\cdot v_{w_{\color {red}i}}}\right)}There is only a single difference from the CBOW equation, highlighted in red. During the 1980s, there were some early attempts at using neural networks to represent words and concepts as vectors.[6][7][8] In 2010,Tomáš Mikolov(then atBrno University of Technology) with co-authors applied a simplerecurrent neural networkwith a single hidden layer to language modelling.[9] Word2vec was created, patented,[10]and published in 2013 by a team of researchers led by Mikolov atGoogleover two papers.[1][2]The original paper was rejected by reviewers forICLR conference2013. 
It also took months for the code to be approved for open-sourcing.[11]Other researchers helped analyse and explain the algorithm.[4] Embedding vectors created using the Word2vec algorithm have some advantages compared to earlier algorithms[1]such as those using n-grams andlatent semantic analysis.GloVewas developed by a team at Stanford specifically as a competitor, and the original paper noted multiple improvements of GloVe over word2vec.[12]Mikolov argued that the comparison was unfair as GloVe was trained on more data, and that thefastTextproject showed that word2vec is superior when trained on the same data.[13][11] As of 2022, the straight Word2vec approach was described as "dated".Transformer-based models, such asELMoandBERT, which add multiple neural-network attention layers on top of a word embedding model similar to Word2vec, have come to be regarded as the state of the art in NLP.[14] Results of word2vec training can be sensitive toparametrization. The following are some important parameters in word2vec training. A Word2vec model can be trained with hierarchicalsoftmaxand/or negative sampling. To approximate the conditional log-likelihood a model seeks to maximize, the hierarchical softmax method uses aHuffman treeto reduce calculation. The negative sampling method, on the other hand, approaches the maximization problem by minimizing thelog-likelihoodof sampled negative instances. According to the authors, hierarchical softmax works better for infrequent words while negative sampling works better for frequent words and better with low dimensional vectors.[3]As training epochs increase, hierarchical softmax stops being useful.[15] High-frequency and low-frequency words often provide little information. Words with a frequency above a certain threshold, or below a certain threshold, may be subsampled or removed to speed up training.[16] Quality of word embedding increases with higher dimensionality. But after reaching some point, marginal gain diminishes.[1]Typically, the dimensionality of the vectors is set to be between 100 and 1,000. The size of the context window determines how many words before and after a given word are included as context words of the given word. According to the authors' note, the recommended value is 10 for skip-gram and 5 for CBOW.[3] There are a variety of extensions to word2vec. doc2vec, generates distributed representations ofvariable-lengthpieces of texts, such as sentences, paragraphs, or entire documents.[17][18]doc2vec has been implemented in theC,PythonandJava/Scalatools (see below), with the Java and Python versions also supporting inference of document embeddings on new, unseen documents. doc2vec estimates the distributed representations of documents much like how word2vec estimates representations of words: doc2vec utilizes either of two model architectures, both of which are allegories to the architectures used in word2vec. The first, Distributed Memory Model of Paragraph Vectors (PV-DM), is identical to CBOW other than it also provides a unique document identifier as a piece of additional context. 
The second architecture, Distributed Bag of Words version of Paragraph Vector (PV-DBOW), is identical to the skip-gram model except that it attempts to predict the window of surrounding context words from the paragraph identifier instead of the current word.[17] doc2vec also has the ability to capture the semantic ‘meanings’ for additional pieces of  ‘context’ around words; doc2vec can estimate the semantic embeddings for speakers or speaker attributes, groups, and periods of time. For example, doc2vec has been used to estimate the political positions of political parties in various Congresses and Parliaments in the U.S. and U.K.,[19]respectively, and various governmental institutions.[20] Another extension of word2vec is top2vec, which leverages both document and word embeddings to estimate distributed representations of topics.[21][22]top2vec takes document embeddings learned from a doc2vec model andreducesthem into a lower dimension (typically usingUMAP). The space of documents is then scanned usingHDBSCAN,[23]and clusters of similar documents are found. Next, the centroid of documents identified in a cluster is considered to be that cluster's topic vector. Finally, top2vec searches the semantic space for word embeddings located near to the topic vector to ascertain the 'meaning' of the topic.[21]The word with embeddings most similar to the topic vector might be assigned as the topic's title, whereas far away word embeddings may be considered unrelated. As opposed to other topic models such asLDA, top2vec provides canonical ‘distance’ metrics between two topics, or between a topic and another embeddings (word, document, or otherwise). Together with results from HDBSCAN, users can generate topic hierarchies, or groups of related topics and subtopics. Furthermore, a user can use the results of top2vec to infer the topics of out-of-sample documents. After inferring the embedding for a new document, must only search the space of topics for the closest topic vector. An extension of word vectors for n-grams inbiologicalsequences (e.g.DNA,RNA, andproteins) forbioinformaticsapplications has been proposed by Asgari and Mofrad.[24]Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics. The results suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns.[24]A similar variant, dna2vec, has shown that there is correlation betweenNeedleman–Wunschsimilarity score andcosine similarityof dna2vec word vectors.[25] An extension of word vectors for creating a dense vector representation of unstructured radiology reports has been proposed by Banerjee et al.[26]One of the biggest challenges with Word2vec is how to handle unknown orout-of-vocabulary(OOV) words and morphologically similar words. If the Word2vec model has not encountered a particular word before, it will be forced to use a random vector, which is generally far from its ideal representation. This can particularly be an issue in domains like medicine where synonyms and related words can be used depending on the preferred style of radiologist, and words may have been used infrequently in a large corpus. 
IWE combines Word2vec with a semantic dictionary mapping technique to tackle the major challenges ofinformation extractionfrom clinical texts, which include ambiguity of free text narrative style, lexical variations, use of ungrammatical and telegraphic phases, arbitrary ordering of words, and frequent appearance of abbreviations and acronyms. Of particular interest, the IWE model (trained on the one institutional dataset) successfully translated to a different institutional dataset which demonstrates good generalizability of the approach across institutions. The reasons for successfulword embeddinglearning in the word2vec framework are poorly understood. Goldberg and Levy point out that the word2vec objective function causes words that occur in similar contexts to have similar embeddings (as measured bycosine similarity) and note that this is in line with J. R. Firth'sdistributional hypothesis. However, they note that this explanation is "very hand-wavy" and argue that a more formal explanation would be preferable.[4] Levy et al. (2015)[27]show that much of the superior performance of word2vec or similar embeddings in downstream tasks is not a result of the models per se, but of the choice of specific hyperparameters. Transferring these hyperparameters to more 'traditional' approaches yields similar performances in downstream tasks. Arora et al. (2016)[28]explain word2vec and related algorithms as performing inference for a simplegenerative modelfor text, which involves a random walk generation process based upon loglinear topic model. They use this to explain some properties of word embeddings, including their use to solve analogies. The word embedding approach is able to capture multiple different degrees of similarity between words. Mikolov et al. (2013)[29]found that semantic and syntactic patterns can be reproduced using vector arithmetic. Patterns such as "Man is to Woman as Brother is to Sister" can be generated through algebraic operations on the vector representations of these words such that the vector representation of "Brother" - "Man" + "Woman" produces a result which is closest to the vector representation of "Sister" in the model. Such relationships can be generated for a range of semantic relations (such as Country–Capital) as well as syntactic relations (e.g. present tense–past tense). This facet of word2vec has been exploited in a variety of other contexts. For example, word2vec has been used to map a vector space of words in one language to a vector space constructed from another language. Relationships between translated words in both spaces can be used to assist withmachine translationof new words.[30] Mikolov et al. (2013)[1]developed an approach to assessing the quality of a word2vec model which draws on the semantic and syntactic patterns discussed above. They developed a set of 8,869 semantic relations and 10,675 syntactic relations which they use as a benchmark to test the accuracy of a model. When assessing the quality of a vector model, a user may draw on this accuracy test which is implemented in word2vec,[31]or develop their own test set which is meaningful to the corpora which make up the model. This approach offers a more challenging test than simply arguing that the words most similar to a given test word are intuitively plausible.[1] The use of different model parameters and different corpus sizes can greatly affect the quality of a word2vec model. 
Accuracy can be improved in a number of ways, including the choice of model architecture (CBOW or Skip-Gram), increasing the training data set, increasing the number of vector dimensions, and increasing the window size of words considered by the algorithm. Each of these improvements comes with the cost of increased computational complexity and therefore increased model generation time.[1] In models using large corpora and a high number of dimensions, the skip-gram model yields the highest overall accuracy, and consistently produces the highest accuracy on semantic relationships, as well as yielding the highest syntactic accuracy in most cases. However, the CBOW is less computationally expensive and yields similar accuracy results.[1] Overall, accuracy increases with the number of words used and the number of dimensions. Mikolov et al.[1]report that doubling the amount of training data results in an increase in computational complexity equivalent to doubling the number of vector dimensions. Altszyler and coauthors (2017) studied Word2vec performance in two semantic tests for different corpus size.[32]They found that Word2vec has a steeplearning curve, outperforming another word-embedding technique,latent semantic analysis(LSA), when it is trained with medium to large corpus size (more than 10 million words). However, with a small training corpus, LSA showed better performance. Additionally they show that the best parameter setting depends on the task and the training corpus. Nevertheless, for skip-gram models trained in medium size corpora, with 50 dimensions, a window size of 15 and 10 negative samples seems to be a good parameter setting.
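In practice such settings are supplied as parameters to an off-the-shelf implementation. The sketch below uses the Gensim library's Word2Vec class (parameter names as in Gensim 4.x; this reflects an assumption about a third-party API rather than the sources above), with a two-sentence toy corpus that serves only as a smoke test:

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality of the word vectors
    window=10,        # context window (the authors suggest 10 for skip-gram)
    sg=1,             # 1 = skip-gram; 0 = CBOW
    negative=5,       # negative sampling (hs=1 would use hierarchical softmax)
    min_count=1,      # keep rare words; real corpora use a higher threshold
)

print(model.wv["cat"].shape)          # (100,)
print(model.wv.most_similar("cat"))   # nearest neighbours by cosine similarity
# On a real corpus, analogies can be probed the same way, e.g.
# model.wv.most_similar(positive=["woman", "brother"], negative=["man"])
```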
https://en.wikipedia.org/wiki/Word2vec
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known. The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ(x)=C(t−x)2{\displaystyle \lambda (x)=C(t-x)^{2}} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[1] Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. In statistics and decision theory, a frequently used loss function is the 0-1 loss function L(y^,y)=[y^≠y],{\displaystyle L({\hat {y}},y)=[{\hat {y}}\neq y],} written using Iverson bracket notation, i.e. it evaluates to 1 when y^≠y{\displaystyle {\hat {y}}\neq y}, and 0 otherwise.
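The losses discussed above are simple to state in code. In the sketch below (Python with NumPy), the error is written as a = ŷ − t; the Huber loss is included as one commonly used outlier-robust alternative of the kind just mentioned, and its threshold δ = 1 is an arbitrary illustrative choice:

```python
import numpy as np

def squared_loss(a):
    return a ** 2

def absolute_loss(a):
    return np.abs(a)

def zero_one_loss(y_hat, y):
    return float(y_hat != y)   # Iverson bracket [y_hat != y]

def huber_loss(a, delta=1.0):
    # quadratic near zero, linear in the tails: less dominated by outliers
    return np.where(np.abs(a) <= delta,
                    0.5 * a ** 2,
                    delta * (np.abs(a) - 0.5 * delta))

a = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(squared_loss(a), absolute_loss(a), huber_loss(a), sep="\n")
print(zero_one_loss(1, 0), zero_one_loss(1, 1))   # 1.0 and 0.0
```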
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (called alsoutilityfunction) in a form suitable for optimization — the problem thatRagnar Frischhas highlighted in hisNobel Prizelecture.[4]The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6]In particular,Andranik Tangianshowed that the most usable objective functions — quadratic and additive — are determined by a fewindifferencepoints. He used this property in the models for constructing these objective functions from eitherordinalorcardinaldata that were elicited through computer-assisted interviews with decision makers.[7][8]Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities[9]and the European subsidies for equalizing unemployment rates among 271 German regions.[10] In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variableX. BothfrequentistandBayesianstatistical theory involve making a decision based on theexpected valueof the loss function; however, this quantity is defined differently under the two paradigms. We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to theprobability distribution,Pθ, of the observed data,X. This is also referred to as therisk function[11][12][13][14]of the decision ruleδand the parameterθ. Here the decision rule depends on the outcome ofX. The risk function is given by:R(θ,δ)=Eθ⁡L(θ,δ(X))=∫XL(θ,δ(x))dPθ(x).{\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x)~.}Here,θis a fixed but possibly unknown state of nature,Xis a vector of observations stochastically drawn from apopulation,Eθ{\displaystyle \operatorname {E} _{\theta }}is the expectation over all population values ofX,dPθis aprobability measureover the event space ofX(parametrized byθ) and the integral is evaluated over the entiresupportofX. In a Bayesian approach, the expectation is calculated using theprior distributionπ*of the parameterθ:ρ(π∗,δ)=∫Θ∫XL(θ,δ(x))f(x∣θ)dxdπ∗(θ)=∫X[∫ΘL(θ,δ(x))π∗(θ∣x)dθ]m(x)dx,{\displaystyle \rho (\pi ^{*},\delta )=\int _{\Theta }\int _{X}L{\big (}\theta ,\delta (x){\big )}\,f(x\mid \theta )\,\mathrm {d} x\,\mathrm {d} \pi ^{*}(\theta )=\int _{X}\left[\int _{\Theta }L{\big (}\theta ,\delta (x){\big )}\,\pi ^{*}(\theta \mid x)\,\mathrm {d} \theta \right]m(x)\,\mathrm {d} x\;,}where m(x) is known as thepredictive likelihoodwherein θ has been "integrated out,"π*(θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the actiona*which minimises this expected loss, which is referred to asBayes Risk. In the latter equation, the integrand inside dx is known as thePosterior Risk, and minimising it with respect to decisionaalso minimizes the overall Bayes Risk. This optimal decision,a*is known as theBayes (decision) Rule- it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. In economics, decision-making under uncertainty is often modelled using thevon Neumann–Morgenstern utility functionof the uncertain variable of interest, such as end-of-period wealth. 
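As a small numerical sketch of choosing the Bayes action (the loss matrix and posterior values are invented for illustration): with discrete states and actions, the posterior risk of each action is a dot product, and the Bayes action is its argmin.

import numpy as np

# States of nature (rows) and actions (columns), with an asymmetric loss.
loss = np.array([[0.0, 4.0],   # L(theta=0, a) for a = 0, 1
                 [1.0, 0.0]])  # L(theta=1, a) for a = 0, 1

posterior = np.array([0.3, 0.7])  # pi*(theta | x), assumed already computed

posterior_risk = posterior @ loss      # expected loss of each action
bayes_action = int(np.argmin(posterior_risk))
print(posterior_risk, bayes_action)    # [0.7 1.2] 0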
Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. Adecision rulemakes a choice using an optimality criterion. Some commonly used criteria are minimax (choose the decision rule whose worst-case expected loss is smallest) and minimization of the expected loss (the Bayes rule discussed above). Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15] A common example involves estimating "location". Under typical statistical assumptions, themeanor average is the statistic for estimating location that minimizes the expected loss experienced under thesquared-errorloss function, while themedianis the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent isrisk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. Forrisk-averseorrisk-lovingagents, loss is measured as the negative of autility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for examplemortalityormorbidityin the field ofpublic healthorsafety engineering. For mostoptimization algorithms, it is desirable to have a loss function that is globallycontinuousanddifferentiable. Two very commonly used loss functions are thesquared loss,L(a)=a2{\displaystyle L(a)=a^{2}}, and theabsolute loss,L(a)=|a|{\displaystyle L(a)=|a|}. However, the absolute loss has the disadvantage that it is not differentiable ata=0{\displaystyle a=0}. The squared loss has the disadvantage that it has the tendency to be dominated byoutliers—when summing over a set ofa{\displaystyle a}'s (as in∑i=1nL(ai){\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly largea-values, rather than an expression of the averagea-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[16]Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case ofi.i.d.observations, the principle of complete information, and some others. W. Edwards DemingandNassim Nicholas Talebargue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. 
These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.[17]
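Returning to the "location" example above, a short numerical sketch (assuming numpy and an invented skewed sample) shows the mean and median emerging as the minimizers of the empirical squared-error and absolute-difference losses:

import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=10_000)  # a skewed sample

candidates = np.linspace(data.min(), data.max(), 2_000)
sq_loss  = [np.mean((data - c) ** 2) for c in candidates]
abs_loss = [np.mean(np.abs(data - c)) for c in candidates]

print(candidates[np.argmin(sq_loss)],  np.mean(data))    # nearly equal
print(candidates[np.argmin(abs_loss)], np.median(data))  # nearly equal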
https://en.wikipedia.org/wiki/Loss_function#L1_loss
https://en.wikipedia.org/wiki/Loss_function#L2_loss
Matrix completionis the task of filling in the missing entries of a partially observed matrix, which is equivalent to performing dataimputationin statistics. A wide range of datasets are naturally organized in matrix form. One example is the movie-ratings matrix, as appears in theNetflix problem: Given a ratings matrix in which each entry(i,j){\displaystyle (i,j)}represents the rating of moviej{\displaystyle j}by customeri{\displaystyle i}if customeri{\displaystyle i}has watched moviej{\displaystyle j}, and is otherwise missing, we would like to predict the remaining entries in order to make good recommendations to customers on what to watch next. Another example is thedocument-term matrix: The frequencies of words used in a collection of documents can be represented as a matrix, where each entry corresponds to the number of times the associated term appears in the indicated document. Without any restrictions on the number ofdegrees of freedomin the completed matrix, this problem isunderdeterminedsince the hidden entries could be assigned arbitrary values. Thus, we require some assumption on the matrix to create awell-posed problem, such as assuming it has maximal determinant, is positive definite, or is low-rank.[1][2] For example, one may assume the matrix has low-rank structure, and then seek to find the lowestrankmatrix or, if the rank of the completed matrix is known, a matrix ofrankr{\displaystyle r}that matches the known entries. The illustration shows that a partially revealed rank-1 matrix (on the left) can be completed with zero-error (on the right) since all the rows with missing entries should be the same as the third row. In the case of the Netflix problem the ratings matrix is expected to be low-rank since user preferences can often be described by a few factors, such as the movie genre and time of release. Other applications include computer vision, where missing pixels in images need to be reconstructed, detecting the global positioning of sensors in a network from partial distance information, andmulticlass learning. The matrix completion problem is in generalNP-hard, but under additional assumptions there are efficient algorithms that achieve exact reconstruction with high probability. From the statistical learning point of view, the matrix completion problem is an application ofmatrix regularizationwhich is a generalization of vectorregularization. For example, in the low-rank matrix completion problem one may apply the regularization penalty taking the form of a nuclear normR(X)=λ‖X‖∗{\displaystyle R(X)=\lambda \|X\|_{*}} One of the variants of the matrix completion problem is to find the lowestrankmatrixX{\displaystyle X}which matches the matrixM{\displaystyle M}, which we wish to recover, for all entries in the setE{\displaystyle E}of observed entries. The mathematical formulation of this problem is as follows:minXrank⁡(X)subject toXij=Mij∀(i,j)∈E{\displaystyle {\begin{aligned}&{\underset {X}{\text{min}}}&&\operatorname {rank} (X)\\&{\text{subject to}}&&X_{ij}=M_{ij}\;\;\forall (i,j)\in E\end{aligned}}}Candès and Recht[3]proved that with assumptions on the sampling of the observed entries and sufficiently many sampled entries this problem has a unique solution with high probability. An equivalent formulation, given that the matrixM{\displaystyle M}to be recovered is known to be ofrankr{\displaystyle r}, is to solve forX{\displaystyle X}whereXij=Mij∀i,j∈E{\displaystyle X_{ij}=M_{ij}\;\;\forall i,j\in E} A number of assumptions on the sampling of the observed entries and the number of sampled entries are frequently made to simplify the analysis and to ensure the problem is notunderdetermined. 
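As a concrete illustration of the nuclear-norm convex relaxation discussed further below, here is a minimal sketch assuming the cvxpy library and a synthetic rank-2 matrix (both are assumptions of this example, not part of the cited work):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
mask = rng.random(M.shape) < 0.5                                 # observed set E

X = cp.Variable(M.shape)
# Agree with M on every observed entry, minimize the nuclear norm.
constraints = [X[i, j] == M[i, j] for i, j in zip(*np.nonzero(mask))]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))  # small if recovery succeeds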
To make the analysis tractable, it is often assumed that the setE{\displaystyle E}of observed entries, of fixedcardinality, is sampled uniformly at random from the collection of all subsets of entries of cardinality|E|{\displaystyle |E|}. To further simplify the analysis, it is instead assumed thatE{\displaystyle E}is constructed byBernoulli sampling, i.e. that each entry is observed with probabilityp{\displaystyle p}. Ifp{\displaystyle p}is set toNmn{\displaystyle {\frac {N}{mn}}}whereN{\displaystyle N}is the desired expectedcardinalityofE{\displaystyle E}, andm,n{\displaystyle m,\;n}are the dimensions of the matrix (letm<n{\displaystyle m<n}without loss of generality),|E|{\displaystyle |E|}is withinO(nlog⁡n){\displaystyle O(n\log n)}ofN{\displaystyle N}with high probability, thusBernoulli samplingis a good approximation for uniform sampling.[3]Another simplification is to assume that entries are sampled independently and with replacement.[4] Suppose them{\displaystyle m}byn{\displaystyle n}matrixM{\displaystyle M}(withm<n{\displaystyle m<n}) we are trying to recover hasrankr{\displaystyle r}. There is an information theoretic lower bound on how many entries must be observed beforeM{\displaystyle M}can be uniquely reconstructed. The set ofm{\displaystyle m}byn{\displaystyle n}matrices with rank less than or equal tor{\displaystyle r}is an algebraic variety inCm×n{\displaystyle {\mathbb {C} }^{m\times n}}with dimension(n+m)r−r2{\displaystyle (n+m)r-r^{2}}. Using this result, one can show that at least4nr−4r2{\displaystyle 4nr-4r^{2}}entries must be observed for matrix completion inCn×n{\displaystyle {\mathbb {C} }^{n\times n}}to have a unique solution whenr≤n/2{\displaystyle r\leq n/2}.[5] Secondly, there must be at least one observed entry per row and column ofM{\displaystyle M}. Thesingular value decompositionofM{\displaystyle M}is given byUΣV†{\displaystyle U\Sigma V^{\dagger }}. If rowi{\displaystyle i}is unobserved, it is easy to see theith{\displaystyle i^{\text{th}}}right singular vector ofM{\displaystyle M},vi{\displaystyle v_{i}}, can be changed to some arbitrary value and still yield a matrix matchingM{\displaystyle M}over the set of observed entries. Similarly, if columnj{\displaystyle j}is unobserved, thejth{\displaystyle j^{\text{th}}}left singular vector ofM{\displaystyle M},uj{\displaystyle u_{j}}, can be arbitrary. If we assume Bernoulli sampling of the set of observed entries, theCoupon collector effectimplies that entries on the order ofO(nlog⁡n){\displaystyle O(n\log n)}must be observed to ensure that there is an observation from each row and column with high probability.[6] Combining the necessary conditions and assuming thatr≪m,n{\displaystyle r\ll m,n}(a valid assumption for many practical applications), the lower bound on the number of observed entries required to prevent the problem of matrix completion from being underdetermined is on the order ofnrlog⁡n{\displaystyle nr\log n}. The concept of incoherence arose incompressed sensing. It is introduced in the context of matrix completion to ensure the singular vectors ofM{\displaystyle M}are not too "sparse" in the sense that all coordinates of each singular vector are of comparable magnitude instead of just a few coordinates having significantly larger magnitudes.[7][8]The standard basis vectors are then undesirable as singular vectors, and the vector1n[11⋮1]{\displaystyle {\frac {1}{\sqrt {n}}}{\begin{bmatrix}1\\1\\\vdots \\1\end{bmatrix}}}inRn{\displaystyle \mathbb {R} ^{n}}is desirable. 
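A minimal sketch of the Bernoulli-sampling approximation just described (the dimensions and N are invented):

import numpy as np

m, n, N = 200, 500, 30_000     # matrix dimensions and desired expected |E|
p = N / (m * n)                # per-entry observation probability

rng = np.random.default_rng(0)
mask = rng.random((m, n)) < p  # Bernoulli sampling of observed entries
print(mask.sum(), N)           # |E| concentrates around N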
As an example of what could go wrong if the singular vectors are sufficiently "sparse", consider them{\displaystyle m}byn{\displaystyle n}matrix[10⋯0⋮⋮0000]{\displaystyle {\begin{bmatrix}1&0&\cdots &0\\\vdots &&\vdots \\0&0&0&0\end{bmatrix}}}withsingular value decompositionIm[10⋯0⋮⋮0000]In{\displaystyle I_{m}{\begin{bmatrix}1&0&\cdots &0\\\vdots &&\vdots \\0&0&0&0\end{bmatrix}}I_{n}}. Almost all the entries ofM{\displaystyle M}must be sampled before it can be reconstructed. Candès and Recht[3]define the coherence of a matrixU{\displaystyle U}withcolumn spaceanr−{\displaystyle r-}dimensional subspace ofRn{\displaystyle \mathbb {R} ^{n}}asμ(U)=nrmaxi<n‖PUei‖2{\displaystyle \mu (U)={\frac {n}{r}}\max _{i<n}\|P_{U}e_{i}\|^{2}}, wherePU{\displaystyle P_{U}}is the orthogonalprojectionontoU{\displaystyle U}. Incoherence then asserts that given thesingular value decompositionUΣV†{\displaystyle U\Sigma V^{\dagger }}of them{\displaystyle m}byn{\displaystyle n}matrixM{\displaystyle M}, the coherences ofU{\displaystyle U}andV{\displaystyle V}are bounded byμ0{\displaystyle \mu _{0}}and the entries ofUV†{\displaystyle UV^{\dagger }}are bounded in absolute value byμ1r/(mn){\displaystyle \mu _{1}{\sqrt {r/(mn)}}}for someμ0,μ1{\displaystyle \mu _{0},\;\mu _{1}}. In real-world applications, one often observes only a few entries, corrupted at least by a small amount of noise. For example, in the Netflix problem, the ratings are uncertain. Candès and Plan[9]showed that it is possible to fill in the many missing entries of large low-rank matrices from just a few noisy samples by nuclear norm minimization. The noisy model assumes that we observeYij=Mij+Zij,(i,j)∈Ω,{\displaystyle Y_{ij}=M_{ij}+Z_{ij}\;,\quad (i,j)\in \Omega \;,}whereZij:(i,j)∈Ω{\displaystyle {Z_{ij}:(i,j)\in \Omega }}is a noise term. Note that the noise can be either stochastic or deterministic. Alternatively the model can be expressed asPΩ(Y)=PΩ(M)+PΩ(Z),{\displaystyle P_{\Omega }(Y)=P_{\Omega }(M)+P_{\Omega }(Z)\;,}whereZ{\displaystyle Z}is ann×n{\displaystyle n\times n}matrix with entriesZij{\displaystyle Z_{ij}}for(i,j)∈Ω{\displaystyle (i,j)\in \Omega }assuming that‖PΩ(Z)‖F≤δ{\displaystyle \|P_{\Omega }(Z)\|_{F}\leq \delta }for someδ>0{\displaystyle \delta >0}. To recover the incomplete matrix, we try to solve the following optimization problem: among all matrices consistent with the data, find the one with minimum nuclear norm, that is, minimize‖X‖∗{\displaystyle \|X\|_{*}}subject to‖PΩ(X−Y)‖F≤δ{\displaystyle \|P_{\Omega }(X-Y)\|_{F}\leq \delta }. Candès and Plan[9]have shown that this reconstruction is accurate. They have proved that when perfect noiseless recovery occurs, then matrix completion is stable vis a vis perturbations. The error is proportional to the noise levelδ{\displaystyle \delta }. Therefore, when the noise level is small, the error is small. Here the matrix completion problem does not obey the restricted isometry property (RIP). For matrices, the RIP would assume that the sampling operator acts as a near-isometry for all matricesX{\displaystyle X}with sufficiently small rank andδ<1{\displaystyle \delta <1}sufficiently small. The methods are also applicable to sparse signal recovery problems in which the RIP does not hold. High-rank matrix completion is in generalNP-hard. However, with certain assumptions, some incomplete high rank matrix or even full rank matrix can be completed. Eriksson, Balzano and Nowak[10]have considered the problem of completing a matrix with the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. Since the columns belong to a union of subspaces, the problem may be viewed as a missing-data version of thesubspace clusteringproblem. LetX{\displaystyle X}be ann×N{\displaystyle n\times N}matrix whose (complete) columns lie in a union of at mostk{\displaystyle k}subspaces, each ofrank≤r<n{\displaystyle \operatorname {rank} \leq r<n}, and assumeN≫kn{\displaystyle N\gg kn}. 
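A short sketch of computing the coherence from the definition above. It uses the fact that, when U has orthonormal columns, ‖P_U e_i‖² equals the squared norm of the i-th row of U; the test matrix is invented for illustration.

import numpy as np

def coherence(U):
    # U: n-by-r matrix with orthonormal columns spanning the subspace.
    n, r = U.shape
    return (n / r) * np.max(np.sum(U**2, axis=1))  # (n/r) * max_i ||P_U e_i||^2

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 100))  # rank 3
U, _, _ = np.linalg.svd(A, full_matrices=False)
# Small values indicate incoherence; the possible range is [1, n/r].
print(coherence(U[:, :3]))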
Eriksson, Balzano and Nowak[10]showed that under mild assumptions each column ofX{\displaystyle X}can be perfectly recovered with high probability from an incomplete version so long as at leastCrNlog2⁡(n){\displaystyle CrN\log ^{2}(n)}entries ofX{\displaystyle X}are observed uniformly at random, withC>1{\displaystyle C>1}a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The algorithm involves several steps: (1) local neighborhoods; (2) local subspaces; (3) subspace refinement; (4) full matrix completion. This method can be applied to Internet distance matrix completion and topology identification. Various matrix completion algorithms have been proposed.[8]These include convex relaxation-based algorithms,[3]gradient-based algorithms,[11]alternating minimization-based algorithms,[12]and discrete-aware algorithms.[13] The rank minimization problem isNP-hard. One approach, proposed by Candès and Recht, is to form aconvexrelaxation of the problem and minimize the nuclearnorm‖M‖∗{\displaystyle \|M\|_{*}}(which gives the sum of thesingular valuesofM{\displaystyle M}) instead ofrank(M){\displaystyle {\text{rank}}(M)}(which counts the number of non zerosingular valuesofM{\displaystyle M}).[3]This is analogous to minimizing the L1-normrather than the L0-normfor vectors. Theconvexrelaxation can be solved usingsemidefinite programming(SDP) by noticing that the optimization problem is equivalent tominX,W1,W212(trace⁡(W1)+trace⁡(W2))subject to[W1XXTW2]⪰0,Xij=Mij∀(i,j)∈E.{\displaystyle {\begin{aligned}&{\underset {X,W_{1},W_{2}}{\text{min}}}&&{\tfrac {1}{2}}\left(\operatorname {trace} (W_{1})+\operatorname {trace} (W_{2})\right)\\&{\text{subject to}}&&{\begin{bmatrix}W_{1}&X\\X^{\mathsf {T}}&W_{2}\end{bmatrix}}\succeq 0\;,\quad X_{ij}=M_{ij}\;\;\forall (i,j)\in E\end{aligned}}}The complexity of usingSDPto solve the convex relaxation isO(max(m,n)4){\displaystyle O({\text{max}}(m,n)^{4})}. State of the art solvers like SDPT3 can only handle matrices of size up to 100 by 100.[14]An alternative first order method that approximately solves the convex relaxation is the Singular Value Thresholding Algorithm introduced by Cai, Candès and Shen.[14] Candès and Recht show, using the study of random variables onBanach spaces, that if the number of observed entries is on the order ofmax{μ12,μ0μ1,μ0n0.25}nrlog⁡n{\displaystyle \max {\{\mu _{1}^{2},{\sqrt {\mu _{0}}}\mu _{1},\mu _{0}n^{0.25}\}}nr\log n}(assume without loss of generalitym<n{\displaystyle m<n}), the rank minimization problem has a unique solution which also happens to be the solution of its convex relaxation with probability1−cn3{\displaystyle 1-{\frac {c}{n^{3}}}}for some constantc{\displaystyle c}. If the rank ofM{\displaystyle M}is small (r≤n0.2μ0{\displaystyle r\leq {\frac {n^{0.2}}{\mu _{0}}}}), the size of the set of observations reduces to the order ofμ0n1.2rlog⁡n{\displaystyle \mu _{0}n^{1.2}r\log n}. These results are near optimal, since the minimum number of entries that must be observed for the matrix completion problem to not be underdetermined is on the order ofnrlog⁡n{\displaystyle nr\log n}. This result has been improved by Candès and Tao.[6]They achieve bounds that differ from the optimal bounds only bypolylogarithmicfactors by strengthening the assumptions. Instead of the incoherence property, they assume the strong incoherence property with parameterμ3{\displaystyle \mu _{3}}. 
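A minimal sketch of the singular-value-thresholding idea just mentioned (the step size, iteration count, and threshold are illustrative assumptions, not the tuned choices of Cai, Candès and Shen):

import numpy as np

def svt_complete(M_obs, mask, tau, delta=1.2, iters=200):
    # M_obs holds the observed values (zeros elsewhere); mask is the Boolean
    # indicator of the observed set Omega. Each iteration soft-thresholds the
    # singular values of the running iterate, then steps on the observed residual.
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # singular value shrinkage
        Y += delta * mask * (M_obs - X)          # re-enforce observed entries
    return X

Here tau controls the shrinkage (one common heuristic is tau on the order of 5*sqrt(m*n)) and delta is the step size.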
Intuitively, strong incoherence of a matrixU{\displaystyle U}asserts that the orthogonal projections of the standard basis vectors ontoU{\displaystyle U}have magnitudes close to those expected if the singular vectors were distributed randomly.[7] Candès and Tao find that whenr{\displaystyle r}isO(1){\displaystyle O(1)}and the number of observed entries is on the order ofμ34n(log⁡n)2{\displaystyle \mu _{3}^{4}n(\log n)^{2}}, the rank minimization problem has a unique solution which also happens to be the solution of its convex relaxation with probability1−cn3{\displaystyle 1-{\frac {c}{n^{3}}}}for some constantc{\displaystyle c}. For arbitraryr{\displaystyle r}, the number of observed entries sufficient for this assertion to hold true is on the order ofμ32nr(log⁡n)6{\displaystyle \mu _{3}^{2}nr(\log n)^{6}}. Another convex relaxation approach[15]is to minimize the Frobenius squared norm under a rank constraint. This is equivalent to solvingminX‖PΩ(X−M)‖F2subject torank⁡(X)≤k.{\displaystyle {\begin{aligned}&{\underset {X}{\text{min}}}&&\|P_{\Omega }(X-M)\|_{F}^{2}\\&{\text{subject to}}&&\operatorname {rank} (X)\leq k\end{aligned}}}By introducing an orthogonal projection matrixY{\displaystyle Y}(meaningY2=Y,Y=Y′{\displaystyle Y^{2}=Y,Y=Y'}) to model the rank ofX{\displaystyle X}viaX=YX,trace(Y)≤k{\displaystyle X=YX,{\text{trace}}(Y)\leq k}and taking this problem's convex relaxation, we obtain a semidefinite program. IfYis a projection matrix (i.e., has binary eigenvalues) in this relaxation, then the relaxation is tight. Otherwise, it gives a valid lower bound on the overall objective. Moreover, it can be converted into a feasible solution with a (slightly) larger objective by rounding the eigenvalues ofYgreedily.[15]Remarkably, this convex relaxation can be solved by alternating minimization onXandYwithout solving any SDPs, and thus it scales beyond the typical numerical limits of state-of-the-art SDP solvers like SDPT3 or Mosek. This approach is a special case of a more general reformulation technique, which can be applied to obtain a valid lower bound on any low-rank problem with a trace-matrix-convex objective.[16] Keshavan, Montanari and Oh[11]consider a variant of matrix completion where therankof them{\displaystyle m}byn{\displaystyle n}matrixM{\displaystyle M}, which is to be recovered, is known to ber{\displaystyle r}. They assumeBernoulli samplingof entries, constant aspect ratiomn{\displaystyle {\frac {m}{n}}}, bounded magnitude of entries ofM{\displaystyle M}(let the upper bound beMmax{\displaystyle M_{\text{max}}}), and constantcondition numberσ1σr{\displaystyle {\frac {\sigma _{1}}{\sigma _{r}}}}(whereσ1{\displaystyle \sigma _{1}}andσr{\displaystyle \sigma _{r}}are the largest and smallestsingular valuesofM{\displaystyle M}respectively). Further, they assume the two incoherence conditions are satisfied withμ0{\displaystyle \mu _{0}}andμ1σ1σr{\displaystyle \mu _{1}{\frac {\sigma _{1}}{\sigma _{r}}}}whereμ0{\displaystyle \mu _{0}}andμ1{\displaystyle \mu _{1}}are constants. LetME{\displaystyle M^{E}}be a matrix that matchesM{\displaystyle M}on the setE{\displaystyle E}of observed entries and is 0 elsewhere. They then propose the following algorithm: (1) trimME{\displaystyle M^{E}}by zeroing out over-represented rows and columns; (2) projectTr(ME){\displaystyle {\text{Tr}}(M^{E})}, the trimmed matrix, onto its firstr{\displaystyle r}principal components; (3) refine the estimate by minimizing the misfit to the observed entries over rank-r{\displaystyle r}factorizations. Steps 1 and 2 of the algorithm yield a matrixTr(ME){\displaystyle {\text{Tr}}(M^{E})}very close to the true matrixM{\displaystyle M}(as measured by theroot mean square error (RMSE)) with high probability. 
In particular, with probability1−1n3{\displaystyle 1-{\frac {1}{n^{3}}}},1mnMmax2‖M−Tr(ME)‖F2≤Crm|E|mn{\displaystyle {\frac {1}{mnM_{\text{max}}^{2}}}\|M-{\text{Tr}}(M^{E})\|_{F}^{2}\leq C{\frac {r}{m|E|}}{\sqrt {\frac {m}{n}}}}for some constantC{\displaystyle C}.‖⋅‖F{\displaystyle \|\cdot \|_{F}}denotes the Frobeniusnorm. Note that the full suite of assumptions is not needed for this result to hold. The incoherence condition, for example, only comes into play in exact reconstruction. Finally, although trimming may seem counter intuitive as it involves throwing out information, it ensures projectingME{\displaystyle M^{E}}onto its firstr{\displaystyle r}principal componentsgives more information about the underlying matrixM{\displaystyle M}than about the observed entries. In Step 3, the space of candidate matricesX,Y{\displaystyle X,\;Y}can be reduced by noticing that the inner minimization problem has the same solution for(X,Y){\displaystyle (X,Y)}as for(XQ,YR){\displaystyle (XQ,YR)}whereQ{\displaystyle Q}andR{\displaystyle R}areorthonormalr{\displaystyle r}byr{\displaystyle r}matrices. Thengradient descentcan be performed over thecross productof twoGrassmann manifolds. Ifr≪m,n{\displaystyle r\ll m,\;n}and the observed entry set is in the order ofnrlog⁡n{\displaystyle nr\log n}, the matrix returned by Step 3 is exactlyM{\displaystyle M}. Then the algorithm is order optimal, since we know that for the matrix completion problem to not beunderdeterminedthe number of entries must be in the order ofnrlog⁡n{\displaystyle nr\log n}. Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix problem. In the alternating minimization approach, the low-rank target matrix is written in abilinear form: X=UVT{\displaystyle X=UV^{T}}; the algorithm then alternates between finding the bestU{\displaystyle U}and the bestV{\displaystyle V}. While the overall problem is non-convex, each sub-problem is typically convex and can be solved efficiently. Jain, Netrapalli and Sanghavi[12]have given one of the first guarantees for performance of alternating minimization for both matrix completion and matrix sensing. The alternating minimization algorithm can be viewed as an approximate way to solve the following non-convex problem: minU,V∈Rn×k‖PΩ(UVT)−PΩ(M)‖F2{\displaystyle {\begin{aligned}&{\underset {U,V\in \mathbb {R} ^{n\times k}}{\text{min}}}&\|P_{\Omega }(UV^{T})-P_{\Omega }(M)\|_{F}^{2}\\\end{aligned}}} The AltMinComplete algorithm proposed by Jain, Netrapalli and Sanghavi alternates such least-squares updates of the two factors;[12]a sketch of this scheme is given below. They showed that by observing|Ω|=O((σ1∗σk∗)6k7log⁡nlog⁡(k‖M‖F/ϵ)){\displaystyle |\Omega |=O(({\frac {\sigma _{1}^{*}}{\sigma _{k}^{*}}})^{6}k^{7}\log n\log(k\|M\|_{F}/\epsilon ))}random entries of an incoherent matrixM{\displaystyle M}, AltMinComplete algorithm can recoverM{\displaystyle M}inO(log⁡(1/ϵ)){\displaystyle O(\log(1/\epsilon ))}steps. In terms of sample complexity (|Ω|{\displaystyle |\Omega |}), theoretically, Alternating Minimization may require a biggerΩ{\displaystyle \Omega }than Convex Relaxation. However, empirically this seems not to be the case, which implies that the sample complexity bounds can be further tightened. 
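A minimal sketch of such an alternating least-squares scheme (the random initialization and iteration count are illustrative; this follows the general scheme described above, not the exact AltMinComplete listing):

import numpy as np

def altmin_complete(M_obs, mask, r, iters=50):
    # Alternate: fix V and solve least squares for each row of U over its
    # observed entries, then do the same for V with U fixed.
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                       # update each row of U
            cols = mask[i]
            if cols.any():
                U[i] = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)[0]
        for j in range(n):                       # update each row of V
            rows = mask[:, j]
            if rows.any():
                V[j] = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)[0]
    return U @ V.T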
In terms of time complexity, they showed that AltMinComplete needs time O(|Ω|k2log⁡(1/ϵ)){\displaystyle O(|\Omega |k^{2}\log(1/\epsilon ))}. It is worth noting that, although convex relaxation based methods have rigorous analysis, alternating minimization based algorithms are more successful in practice.[citation needed] In applications such as recommender systems, where matrix entries are discrete (e.g., integer ratings from 1 to 5), incorporating this discreteness into the matrix completion problem can improve performance. Discrete-aware matrix completion approaches introduce a regularizer that encourages the completed matrix entries to align with a finite discrete alphabet. An early method in this domain utilized theℓ1{\displaystyle \ell _{1}}-norm as a convex relaxation of theℓ0{\displaystyle \ell _{0}}-norm to enforce discreteness, enabling efficient optimization using proximal gradient methods. Building upon this, Führling et al. (2023)[13]replace theℓ1{\displaystyle \ell _{1}}-norm with a continuous and differentiable approximation of theℓ0{\displaystyle \ell _{0}}-norm, making the problem more tractable and improving performance. The discrete-aware matrix completion problem is formulated by adding such a discrete-space regularizer to the low-rank completion objective. To solve this non-convex problem, theℓ0{\displaystyle \ell _{0}}-norm is approximated by a continuous function. This approximation is convexified using fractional programming, transforming the problem into a series of convex subproblems. The algorithm iteratively updates the matrix estimate by applying proximal operations to the discrete-space regularizer and singular value thresholding to enforce the low-rank constraint. Initializing the process with the solution from theℓ1{\displaystyle \ell _{1}}-norm-based method can accelerate convergence. Simulation results, tested on datasets like MovieLens-100k, demonstrate that this method outperforms both itsℓ1{\displaystyle \ell _{1}}-norm-based predecessor and other state-of-the-art techniques, particularly when the ratio of observed entries is low (e.g., 20% to 60%).[13] Several applications of matrix completion are summarized by Candès and Plan[9]as follows: Collaborative filteringis the task of making automatic predictions about the interests of a user by collecting taste information from many users. Companies like Apple, Amazon, Barnes and Noble, and Netflix are trying to predict their user preferences from partial knowledge. In this kind of matrix completion problem, the unknown full matrix is often considered low rank because only a few factors typically contribute to an individual's tastes or preference. In control, one would like to fit a discrete-time linear time-invariant state-space model to a sequence of inputsu(t)∈Rm{\displaystyle u(t)\in \mathbb {R} ^{m}}and outputsy(t)∈Rp,t=0,…,N{\displaystyle y(t)\in \mathbb {R} ^{p},t=0,\ldots ,N}. The vectorx(t)∈Rn{\displaystyle x(t)\in \mathbb {R} ^{n}}is the state of the system at timet{\displaystyle t}andn{\displaystyle n}is the order of the system model. From the input/output pair, one would like to recover the matricesA,B,C,D{\displaystyle A,B,C,D}and the initial statex(0){\displaystyle x(0)}. This problem can also be viewed as a low-rank matrix completion problem. The localization (or global positioning) problem emerges naturally in IoT sensor networks. The problem is to recover the sensor map inEuclidean spacefrom a local or partial set of pairwise distances. 
Thus it is a matrix completion problem with rank two if the sensors are located in a 2-D plane and three if they are in a 3-D space.[17] Most of the real-world social networks have low-rank distance matrices. When we are not able to measure the complete network, which can be due to reasons such as private nodes, limited storage or compute resources, we only have a fraction of distance entries known. Criminal networks are a good example of such networks. Low-rank Matrix Completion can be used to recover these unobserved distances.[18]
https://en.wikipedia.org/wiki/Matrix_completion
Aneural networkis a group of interconnected units calledneuronsthat send signals to one another. Neurons can be eitherbiological cellsormathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks. In the context of biology, a neural network is a population of biologicalneuronschemically connected to each other bysynapses. A given neuron can be connected to hundreds of thousands of synapses.[1]Each neuron sends and receiveselectrochemicalsignals calledaction potentialsto its connected neighbors. A neuron can serve anexcitatoryrole, amplifying and propagating signals it receives, or aninhibitoryrole, suppressing signals instead.[1] Populations of interconnected neurons that are smaller than neural networks are calledneural circuits. Very large interconnected networks are calledlarge scale brain networks, and many of these together formbrainsandnervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and acrossneuromuscular junctionstomuscle cells, where they cause contraction and thereby motion.[2] In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3]today they are almost always implemented insoftware. Neuronsin an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4]The "signal" input to each neuron is a number, specifically alinear combinationof the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to itsactivation function. The behavior of the network depends on the strengths (orweights) of the connections between neurons. A network is trained by modifying these weights throughempirical risk minimizationorbackpropagationin order to fit some preexisting dataset.[5] The termdeep neural networkrefers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems inartificial intelligence, and have thereby found applications in many disciplines, includingpredictive modeling,adaptive control,facial recognition,handwriting recognition,general game playing, andgenerative AI. The theoretical base for contemporary neural networks was independently proposed byAlexander Bainin 1873[6]andWilliam Jamesin 1890.[7]Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949,Donald HebbdescribedHebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8] Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach ofconnectionism. However, starting with the invention of theperceptron, a simple artificial neural network, byWarren McCullochandWalter Pittsin 1943,[9]followed by the implementation of one in hardware byFrank Rosenblattin 1957,[3]artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
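Returning to the layered computation described above, here is a toy sketch (the shapes, random weights, and ReLU activation are illustrative assumptions): each neuron applies its activation function to a linear combination of the previous layer's outputs.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                         # input layer: 4 features
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # hidden layer: 8 neurons
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)  # output layer: 2 neurons

hidden = relu(W1 @ x + b1)  # linear combination of inputs, then activation
output = W2 @ hidden + b2   # signal of the output layer
print(output)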
https://en.wikipedia.org/wiki/Neural_network#Training_neural_networks
Indeep learning, amultilayer perceptron(MLP) is a name for a modernfeedforwardneural networkconsisting of fully connected neurons with nonlinearactivation functions, organized in layers, notable for being able to distinguish data that is notlinearly separable.[1] Modern neural networks are trained usingbackpropagation[2][3][4][5][6]and are colloquially referred to as "vanilla" networks.[7]MLPs grew out of an effort to improvesingle-layer perceptrons, which could only be applied to linearly separable data. A perceptron traditionally used aHeaviside step functionas its nonlinear activation function. However, the backpropagation algorithm requires that modern MLPs usecontinuousactivation functions such assigmoidorReLU.[8] Multilayer perceptrons form the basis of deep learning,[9]and areapplicableacross a vast set of diverse domains.[10] If a multilayer perceptron has a linearactivation functionin all neurons, that is, a linear function that maps theweighted inputsto the output of each neuron, thenlinear algebrashows that any number of layers can be reduced to a two-layer input-output model. In MLPs some neurons use anonlinearactivation function that was developed to model the frequency ofaction potentials, or firing, of biological neurons. The two historically common activation functions are bothsigmoids, and are described byy(vi)=tanh⁡(vi)andy(vi)=(1+e−vi)−1.{\displaystyle y(v_{i})=\tanh(v_{i})\quad {\text{and}}\quad y(v_{i})=(1+e^{-v_{i}})^{-1}~.}The first is ahyperbolic tangentthat ranges from −1 to 1, while the other is thelogistic function, which is similar in shape but ranges from 0 to 1. Hereyi{\displaystyle y_{i}}is the output of thei{\displaystyle i}th node (neuron) andvi{\displaystyle v_{i}}is the weighted sum of the input connections. Alternative activation functions have been proposed, including therectifier and softplusfunctions. More specialized activation functions includeradial basis functions(used inradial basis networks, another class of supervised neural network models). In recent developments ofdeep learningtherectified linear unit (ReLU)is more frequently used as one of the possible ways to overcome the numericalproblemsrelated to the sigmoids. The MLP consists of three or more layers (an input and an output layer with one or morehidden layers) of nonlinearly-activating nodes. Since MLPs are fully connected, each node in one layer connects with a certain weightwij{\displaystyle w_{ij}}to every node in the following layer. Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example ofsupervised learning, and is carried out throughbackpropagation, a generalization of theleast mean squares algorithmin the linear perceptron. We can represent the degree of error in an output nodej{\displaystyle j}in then{\displaystyle n}th data point (training example) byej(n)=dj(n)−yj(n){\displaystyle e_{j}(n)=d_{j}(n)-y_{j}(n)}, wheredj(n){\displaystyle d_{j}(n)}is the desired target value forn{\displaystyle n}th data point at nodej{\displaystyle j}, andyj(n){\displaystyle y_{j}(n)}is the value produced by the perceptron at nodej{\displaystyle j}when then{\displaystyle n}th data point is given as an input. 
The node weights can then be adjusted based on corrections that minimize the error in the entire output for then{\displaystyle n}th data point, given byE(n)=12∑jej2(n).{\displaystyle {\mathcal {E}}(n)={\frac {1}{2}}\sum _{j}e_{j}^{2}(n)~.}Usinggradient descent, the change in each weightwij{\displaystyle w_{ij}}isΔwji(n)=−η∂E(n)∂vj(n)yi(n),{\displaystyle \Delta w_{ji}(n)=-\eta {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}y_{i}(n)\;,}whereyi(n){\displaystyle y_{i}(n)}is the output of the previous neuroni{\displaystyle i}, andη{\displaystyle \eta }is thelearning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression,∂E(n)∂vj(n){\displaystyle {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}}denotes the partial derivative of the errorE(n){\displaystyle {\mathcal {E}}(n)}with respect to the weighted sumvj(n){\displaystyle v_{j}(n)}of the input connections of neuronj{\displaystyle j}. The derivative to be calculated depends on the induced local fieldvj{\displaystyle v_{j}}, which itself varies. It is easy to prove that for an output node this derivative can be simplified to−∂E(n)∂vj(n)=ej(n)ϕ′(vj(n)),{\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=e_{j}(n)\phi ^{\prime }(v_{j}(n))\;,}whereϕ′{\displaystyle \phi ^{\prime }}is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is−∂E(n)∂vj(n)=ϕ′(vj(n))∑k(−∂E(n)∂vk(n))wkj(n).{\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=\phi ^{\prime }(v_{j}(n))\sum _{k}\left(-{\frac {\partial {\mathcal {E}}(n)}{\partial v_{k}(n)}}\right)w_{kj}(n)~.}This depends on the change in weights of thek{\displaystyle k}th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.[26]
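A compact sketch of these update rules for one hidden layer with logistic activations and squared error, trained on XOR (the architecture, learning rate, and iteration count are illustrative assumptions):

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0], [1], [1], [0]], dtype=float)       # desired outputs d_j(n)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)    # hidden -> output
eta = 0.5                                             # learning rate

for _ in range(20_000):
    v1 = X @ W1 + b1; y1 = sigmoid(v1)                # hidden layer
    v2 = y1 @ W2 + b2; y2 = sigmoid(v2)               # output layer
    e = d - y2                                        # error e_j(n)
    delta2 = e * y2 * (1 - y2)                        # e_j(n) * phi'(v_j(n))
    delta1 = (delta2 @ W2.T) * y1 * (1 - y1)          # backpropagated derivative
    W2 += eta * y1.T @ delta2; b2 += eta * delta2.sum(axis=0)
    W1 += eta * X.T @ delta1;  b1 += eta * delta1.sum(axis=0)

print(y2.round(3).ravel())  # typically approaches [0, 1, 1, 0]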
https://en.wikipedia.org/wiki/Multilayer_perceptron
Inmathematics, anormis afunctionfrom a real or complexvector spaceto the non-negative real numbers that behaves in certain ways like the distance from theorigin: itcommuteswith scaling, obeys a form of thetriangle inequality, and zero is only at the origin. In particular, theEuclidean distancein aEuclidean spaceis defined by a norm on the associatedEuclidean vector space, called theEuclidean norm, the2-norm, or, sometimes, themagnitudeorlengthof the vector. This norm can be defined as thesquare rootof theinner productof a vector with itself. Aseminormsatisfies the first two properties of a norm but may be zero for vectors other than the origin.[1]A vector space with a specified norm is called anormed vector space. In a similar manner, a vector space with a seminorm is called aseminormed vector space. The termpseudonormhas been used for several related meanings. It may be a synonym of "seminorm".[1]It can also refer to a norm that can take infinite values[2]or to certain functions parametrised by adirected set.[3] Given avector spaceX{\displaystyle X}over asubfieldF{\displaystyle F}of the complex numbersC,{\displaystyle \mathbb {C} ,}anormonX{\displaystyle X}is areal-valued functionp:X→R{\displaystyle p:X\to \mathbb {R} }with the following properties, where|s|{\displaystyle |s|}denotes the usualabsolute valueof a scalars{\displaystyle s}:[4](1.)triangle inequality:p(x+y)≤p(x)+p(y){\displaystyle p(x+y)\leq p(x)+p(y)}for allx,y∈X{\displaystyle x,y\in X}; (2.)absolute homogeneity:p(sx)=|s|p(x){\displaystyle p(sx)=|s|\,p(x)}for allx∈X{\displaystyle x\in X}and all scalarss{\displaystyle s}; (3.)positive definiteness:p(x)=0{\displaystyle p(x)=0}impliesx=0{\displaystyle x=0}. AseminormonX{\displaystyle X}is a functionp:X→R{\displaystyle p:X\to \mathbb {R} }that has properties (1.) and (2.)[6]so that in particular, every norm is also a seminorm (and thus also asublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that ifp{\displaystyle p}is a norm (or more generally, a seminorm) thenp(0)=0{\displaystyle p(0)=0}and thatp{\displaystyle p}also has the following property: non-negativity:p(x)≥0{\displaystyle p(x)\geq 0}for everyx∈X{\displaystyle x\in X}. Some authors include non-negativity as part of the definition of "norm", although this is not necessary. Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative";[7]these definitions are not equivalent. Suppose thatp{\displaystyle p}andq{\displaystyle q}are two norms (or seminorms) on a vector spaceX.{\displaystyle X.}Thenp{\displaystyle p}andq{\displaystyle q}are calledequivalent, if there exist two positive real constantsc{\displaystyle c}andC{\displaystyle C}such that for every vectorx∈X,{\displaystyle x\in X,}cq(x)≤p(x)≤Cq(x).{\displaystyle cq(x)\leq p(x)\leq Cq(x).}The relation "p{\displaystyle p}is equivalent toq{\displaystyle q}" isreflexive,symmetric(cq≤p≤Cq{\displaystyle cq\leq p\leq Cq}implies1Cp≤q≤1cp{\displaystyle {\tfrac {1}{C}}p\leq q\leq {\tfrac {1}{c}}p}), andtransitiveand thus defines anequivalence relationon the set of all norms onX.{\displaystyle X.}The normsp{\displaystyle p}andq{\displaystyle q}are equivalent if and only if they induce the same topology onX.{\displaystyle X.}[8]Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces.[8] If a normp:X→R{\displaystyle p:X\to \mathbb {R} }is given on a vector spaceX,{\displaystyle X,}then the norm of a vectorz∈X{\displaystyle z\in X}is usually denoted by enclosing it within double vertical lines:‖z‖=p(z){\displaystyle \|z\|=p(z)}, as proposed byStefan Banachin his doctoral thesis from 1920. Such notation is also sometimes used ifp{\displaystyle p}is only a seminorm. 
For the length of a vector in Euclidean space (which is an example of a norm, asexplained below), the notation|x|{\displaystyle |x|}with single vertical lines is also widespread. Every (real or complex) vector space admits a norm: Ifx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}is aHamel basisfor a vector spaceX{\displaystyle X}then the real-valued map that sendsx=∑i∈Isixi∈X{\displaystyle x=\sum _{i\in I}s_{i}x_{i}\in X}(where all but finitely many of the scalarssi{\displaystyle s_{i}}are0{\displaystyle 0}) to∑i∈I|si|{\displaystyle \sum _{i\in I}\left|s_{i}\right|}is a norm onX.{\displaystyle X.}[9]There are also a large number of norms that exhibit additional properties that make them useful for specific problems. Theabsolute value|x|{\displaystyle |x|}is a norm on the vector space formed by therealorcomplex numbers. The complex numbers form aone-dimensional vector spaceover themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures. Any normp{\displaystyle p}on a one-dimensional vector spaceX{\displaystyle X}is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preservingisomorphismof vector spacesf:F→X,{\displaystyle f:\mathbb {F} \to X,}whereF{\displaystyle \mathbb {F} }is eitherR{\displaystyle \mathbb {R} }orC,{\displaystyle \mathbb {C} ,}and norm-preserving means that|x|=p(f(x)).{\displaystyle |x|=p(f(x)).}This isomorphism is given by sending1∈F{\displaystyle 1\in \mathbb {F} }to a vector of norm1,{\displaystyle 1,}which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm. On then{\displaystyle n}-dimensionalEuclidean spaceRn,{\displaystyle \mathbb {R} ^{n},}the intuitive notion of length of the vectorx=(x1,x2,…,xn){\displaystyle {\boldsymbol {x}}=\left(x_{1},x_{2},\ldots ,x_{n}\right)}is captured by the formula[10]‖x‖2:=x12+⋯+xn2.{\displaystyle \|{\boldsymbol {x}}\|_{2}:={\sqrt {x_{1}^{2}+\cdots +x_{n}^{2}}}.} This is theEuclidean norm, which gives the ordinary distance from the origin to the pointX—a consequence of thePythagorean theorem. This operation may also be referred to as "SRSS", which is an acronym for thesquareroot of thesum ofsquares.[11] The Euclidean norm is by far the most commonly used norm onRn,{\displaystyle \mathbb {R} ^{n},}[10]but there are other norms on this vector space as will be shown below. However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces. Theinner productof two vectors of aEuclidean vector spaceis thedot productof theircoordinate vectorsover anorthonormal basis. Hence, the Euclidean norm can be written in a coordinate-free way as‖x‖:=x⋅x.{\displaystyle \|{\boldsymbol {x}}\|:={\sqrt {{\boldsymbol {x}}\cdot {\boldsymbol {x}}}}.} The Euclidean norm is also called thequadratic norm,L2{\displaystyle L^{2}}norm,[12]ℓ2{\displaystyle \ell ^{2}}norm,2-norm, orsquare norm; seeLp{\displaystyle L^{p}}space. It defines adistance functioncalled theEuclidean length,L2{\displaystyle L^{2}}distance, orℓ2{\displaystyle \ell ^{2}}distance. The set of vectors inRn+1{\displaystyle \mathbb {R} ^{n+1}}whose Euclidean norm is a given positive constant forms ann{\displaystyle n}-sphere. 
The Euclidean norm of acomplex numberis theabsolute value(also called themodulus) of it, if thecomplex planeis identified with theEuclidean planeR2.{\displaystyle \mathbb {R} ^{2}.}This identification of the complex numberx+iy{\displaystyle x+iy}as a vector in the Euclidean plane, makes the quantityx2+y2{\textstyle {\sqrt {x^{2}+y^{2}}}}(as first suggested by Euler) the Euclidean norm associated with the complex number. Forz=x+iy{\displaystyle z=x+iy}, the norm can also be written asz¯z{\displaystyle {\sqrt {{\bar {z}}z}}}wherez¯{\displaystyle {\bar {z}}}is thecomplex conjugateofz.{\displaystyle z\,.} There are exactly fourEuclidean Hurwitz algebrasover thereal numbers. These are the real numbersR,{\displaystyle \mathbb {R} ,}the complex numbersC,{\displaystyle \mathbb {C} ,}thequaternionsH,{\displaystyle \mathbb {H} ,}and lastly theoctonionsO,{\displaystyle \mathbb {O} ,}where the dimensions of these spaces over the real numbers are1,2,4,and8,{\displaystyle 1,2,4,{\text{ and }}8,}respectively. The canonical norms onR{\displaystyle \mathbb {R} }andC{\displaystyle \mathbb {C} }are theirabsolute valuefunctions, as discussed previously. The canonical norm onH{\displaystyle \mathbb {H} }ofquaternionsis defined by‖q‖=qq∗=q∗q=a2+b2+c2+d2{\displaystyle \lVert q\rVert ={\sqrt {\,qq^{*}~}}={\sqrt {\,q^{*}q~}}={\sqrt {\,a^{2}+b^{2}+c^{2}+d^{2}~}}}for every quaternionq=a+bi+cj+dk{\displaystyle q=a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} }inH.{\displaystyle \mathbb {H} .}This is the same as the Euclidean norm onH{\displaystyle \mathbb {H} }considered as the vector spaceR4.{\displaystyle \mathbb {R} ^{4}.}Similarly, the canonical norm on theoctonionsis just the Euclidean norm onR8.{\displaystyle \mathbb {R} ^{8}.} On ann{\displaystyle n}-dimensionalcomplex spaceCn,{\displaystyle \mathbb {C} ^{n},}the most common norm is‖z‖:=|z1|2+⋯+|zn|2=z1z¯1+⋯+znz¯n.{\displaystyle \|{\boldsymbol {z}}\|:={\sqrt {\left|z_{1}\right|^{2}+\cdots +\left|z_{n}\right|^{2}}}={\sqrt {z_{1}{\bar {z}}_{1}+\cdots +z_{n}{\bar {z}}_{n}}}.} In this case, the norm can be expressed as thesquare rootof theinner productof the vector and itself:‖x‖:=xHx,{\displaystyle \|{\boldsymbol {x}}\|:={\sqrt {{\boldsymbol {x}}^{H}~{\boldsymbol {x}}}},}wherex{\displaystyle {\boldsymbol {x}}}is represented as acolumn vector[x1x2…xn]T{\displaystyle {\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{n}\end{bmatrix}}^{\rm {T}}}andxH{\displaystyle {\boldsymbol {x}}^{H}}denotes itsconjugate transpose. This formula is valid for anyinner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to thecomplex dot product. Hence the formula in this case can also be written using the following notation:‖x‖:=x⋅x.{\displaystyle \|{\boldsymbol {x}}\|:={\sqrt {{\boldsymbol {x}}\cdot {\boldsymbol {x}}}}.} ‖x‖1:=∑i=1n|xi|.{\displaystyle \|{\boldsymbol {x}}\|_{1}:=\sum _{i=1}^{n}\left|x_{i}\right|.}The name relates to the distance a taxi has to drive in a rectangularstreet grid(like that of theNew Yorkborough ofManhattan) to get from the origin to the pointx.{\displaystyle x.} The set of vectors whose 1-norm is a given constant forms the surface of across polytope, which has dimension equal to the dimension of the vector space minus 1. The Taxicab norm is also called theℓ1{\displaystyle \ell ^{1}}norm. The distance derived from this norm is called theManhattan distanceorℓ1{\displaystyle \ell ^{1}}distance. The 1-norm is simply the sum of the absolute values of the vector's components. 
In contrast,∑i=1nxi{\displaystyle \sum _{i=1}^{n}x_{i}}is not a norm because it may yield negative results. Letp≥1{\displaystyle p\geq 1}be a real number. Thep{\displaystyle p}-norm (also calledℓp{\displaystyle \ell ^{p}}-norm) of vectorx=(x1,…,xn){\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{n})}is[10]‖x‖p:=(∑i=1n|xi|p)1/p.{\displaystyle \|\mathbf {x} \|_{p}:={\biggl (}\sum _{i=1}^{n}\left|x_{i}\right|^{p}{\biggr )}^{1/p}.}Forp=1,{\displaystyle p=1,}we get thetaxicab norm, forp=2{\displaystyle p=2}we get theEuclidean norm, and asp{\displaystyle p}approaches∞{\displaystyle \infty }thep{\displaystyle p}-norm approaches theinfinity normormaximum norm:‖x‖∞:=maxi|xi|.{\displaystyle \|\mathbf {x} \|_{\infty }:=\max _{i}\left|x_{i}\right|.}Thep{\displaystyle p}-norm is related to thegeneralized meanor power mean. Forp=2,{\displaystyle p=2,}the‖⋅‖2{\displaystyle \|\,\cdot \,\|_{2}}-norm is even induced by a canonicalinner product⟨⋅,⋅⟩,{\displaystyle \langle \,\cdot ,\,\cdot \rangle ,}meaning that‖x‖2=⟨x,x⟩{\textstyle \|\mathbf {x} \|_{2}={\sqrt {\langle \mathbf {x} ,\mathbf {x} \rangle }}}for all vectorsx.{\displaystyle \mathbf {x} .}This inner product can be expressed in terms of the norm by using thepolarization identity. Onℓ2,{\displaystyle \ell ^{2},}this inner product is theEuclidean inner productdefined by⟨(xn)n,(yn)n⟩ℓ2=∑nxn¯yn{\displaystyle \langle \left(x_{n}\right)_{n},\left(y_{n}\right)_{n}\rangle _{\ell ^{2}}~=~\sum _{n}{\overline {x_{n}}}y_{n}}while for the spaceL2(X,μ){\displaystyle L^{2}(X,\mu )}associated with ameasure space(X,Σ,μ),{\displaystyle (X,\Sigma ,\mu ),}which consists of allsquare-integrable functions, this inner product is⟨f,g⟩L2=∫Xf(x)¯g(x)dx.{\displaystyle \langle f,g\rangle _{L^{2}}=\int _{X}{\overline {f(x)}}g(x)\,\mathrm {d} x.} This definition is still of some interest for0<p<1,{\displaystyle 0<p<1,}but the resulting function does not define a norm,[13]because it violates thetriangle inequality. What is true for this case of0<p<1,{\displaystyle 0<p<1,}even in the measurable analog, is that the correspondingLp{\displaystyle L^{p}}class is a vector space, and it is also true that the function∫X|f(x)−g(x)|pdμ{\displaystyle \int _{X}|f(x)-g(x)|^{p}~\mathrm {d} \mu }(withoutp{\displaystyle p}th root) defines a distance that makesLp(X){\displaystyle L^{p}(X)}into a complete metrictopological vector space. These spaces are of great interest infunctional analysis,probability theoryandharmonic analysis. However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional. The partial derivative of thep{\displaystyle p}-norm is given by∂∂xk‖x‖p=xk|xk|p−2‖x‖pp−1.{\displaystyle {\frac {\partial }{\partial x_{k}}}\|\mathbf {x} \|_{p}={\frac {x_{k}\left|x_{k}\right|^{p-2}}{\|\mathbf {x} \|_{p}^{p-1}}}.} The derivative with respect tox,{\displaystyle x,}therefore, is∂‖x‖p∂x=x∘|x|p−2‖x‖pp−1.{\displaystyle {\frac {\partial \|\mathbf {x} \|_{p}}{\partial \mathbf {x} }}={\frac {\mathbf {x} \circ |\mathbf {x} |^{p-2}}{\|\mathbf {x} \|_{p}^{p-1}}}.}where∘{\displaystyle \circ }denotesHadamard productand|⋅|{\displaystyle |\cdot |}is used for absolute value of each component of the vector. 
For the special case of p=2,{\displaystyle p=2,} this becomes ∂∂xk‖x‖2=xk‖x‖2,{\displaystyle {\frac {\partial }{\partial x_{k}}}\|\mathbf {x} \|_{2}={\frac {x_{k}}{\|\mathbf {x} \|_{2}}},} or ∂∂x‖x‖2=x‖x‖2.{\displaystyle {\frac {\partial }{\partial \mathbf {x} }}\|\mathbf {x} \|_{2}={\frac {\mathbf {x} }{\|\mathbf {x} \|_{2}}}.}

If x{\displaystyle \mathbf {x} } is some vector such that x=(x1,x2,…,xn),{\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{n}),} then: ‖x‖∞:=max(|x1|,…,|xn|).{\displaystyle \|\mathbf {x} \|_{\infty }:=\max \left(\left|x_{1}\right|,\ldots ,\left|x_{n}\right|\right).} The set of vectors whose infinity norm is a given constant, c,{\displaystyle c,} forms the surface of a hypercube with edge length 2c.{\displaystyle 2c.}

The energy norm[14] of a vector x=(x1,x2,…,xn)∈Rn{\displaystyle {\boldsymbol {x}}=\left(x_{1},x_{2},\ldots ,x_{n}\right)\in \mathbb {R} ^{n}} is defined in terms of a symmetric positive definite matrix A∈Rn×n{\displaystyle A\in \mathbb {R} ^{n\times n}} as ‖x‖A:=xT⋅A⋅x.{\displaystyle {\|{\boldsymbol {x}}\|}_{A}:={\sqrt {{\boldsymbol {x}}^{T}\cdot A\cdot {\boldsymbol {x}}}}.} It is clear that if A{\displaystyle A} is the identity matrix, this norm corresponds to the Euclidean norm. If A{\displaystyle A} is diagonal, this norm is also called a weighted norm. The energy norm is induced by the inner product given by ⟨x,y⟩A:=xT⋅A⋅y{\displaystyle \langle {\boldsymbol {x}},{\boldsymbol {y}}\rangle _{A}:={\boldsymbol {x}}^{T}\cdot A\cdot {\boldsymbol {y}}} for x,y∈Rn{\displaystyle {\boldsymbol {x}},{\boldsymbol {y}}\in \mathbb {R} ^{n}}. In general, the value of the norm is dependent on the spectrum of A{\displaystyle A}: For a vector x{\displaystyle {\boldsymbol {x}}} with a Euclidean norm of one, the value of ‖x‖A{\displaystyle {\|{\boldsymbol {x}}\|}_{A}} is bounded from below and above by the square roots of the smallest and largest eigenvalues of A{\displaystyle A} respectively, where the bounds are achieved if x{\displaystyle {\boldsymbol {x}}} coincides with the corresponding (normalized) eigenvectors. Based on the symmetric matrix square root A1/2{\displaystyle A^{1/2}}, the energy norm of a vector can be written in terms of the standard Euclidean norm as ‖x‖A=‖A1/2x‖2.{\displaystyle {\|{\boldsymbol {x}}\|}_{A}={\|A^{1/2}{\boldsymbol {x}}\|}_{2}.}

In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F–norm (xn)↦∑n2−nxn/(1+xn).{\textstyle (x_{n})\mapsto \sum _{n}{2^{-n}x_{n}/(1+x_{n})}.}[15] Here we mean by F-norm some real-valued function ‖⋅‖{\displaystyle \lVert \cdot \rVert } on an F-space with distance d,{\displaystyle d,} such that ‖x‖=d(x,0).{\displaystyle \lVert x\rVert =d(x,0).} The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.

In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness.
When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.

In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of x{\displaystyle x} is simply the number of non-zero coordinates of x,{\displaystyle x,} or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of p{\displaystyle p}-norms as p{\displaystyle p} approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the L0{\displaystyle L^{0}} norm, echoing the notation for the Lebesgue space of measurable functions.

The generalization of the above norms to an infinite number of components leads to ℓp{\displaystyle \ell ^{p}} and Lp{\displaystyle L^{p}} spaces for p≥1,{\displaystyle p\geq 1\,,} with norms ‖x‖p=(∑i∈N|xi|p)1/pand‖f‖p,X=(∫X|f(x)|pdx)1/p{\displaystyle \|x\|_{p}={\bigg (}\sum _{i\in \mathbb {N} }\left|x_{i}\right|^{p}{\bigg )}^{1/p}{\text{ and }}\ \|f\|_{p,X}={\bigg (}\int _{X}|f(x)|^{p}~\mathrm {d} x{\bigg )}^{1/p}} for complex-valued sequences and functions on X⊆Rn{\displaystyle X\subseteq \mathbb {R} ^{n}} respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as p→+∞{\displaystyle p\rightarrow +\infty }, giving a supremum norm, and are called ℓ∞{\displaystyle \ell ^{\infty }} and L∞.{\displaystyle L^{\infty }\,.} Any inner product induces in a natural way the norm ‖x‖:=⟨x,x⟩.{\textstyle \|x\|:={\sqrt {\langle x,x\rangle }}.}

Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article. Generally, these norms do not give the same topologies. For example, an infinite-dimensional ℓp{\displaystyle \ell ^{p}} space gives a strictly finer topology than an infinite-dimensional ℓq{\displaystyle \ell ^{q}} space when p<q.{\displaystyle p<q\,.}

Other norms on Rn{\displaystyle \mathbb {R} ^{n}} can be constructed by combining the above; for example ‖x‖:=2|x1|+3|x2|2+max(|x3|,2|x4|)2{\displaystyle \|x\|:=2\left|x_{1}\right|+{\sqrt {3\left|x_{2}\right|^{2}+\max(\left|x_{3}\right|,2\left|x_{4}\right|)^{2}}}} is a norm on R4.{\displaystyle \mathbb {R} ^{4}.}

For any norm and any injective linear transformation A{\displaystyle A} we can define a new norm of x,{\displaystyle x,} equal to ‖Ax‖.{\displaystyle \|Ax\|.} In 2D, with A{\displaystyle A} a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each A{\displaystyle A} applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation. In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).

There are examples of norms that are not defined by "entrywise" formulas.
For instance, the Minkowski functional of a centrally-symmetric convex body in Rn{\displaystyle \mathbb {R} ^{n}} (centered at zero) defines a norm on Rn{\displaystyle \mathbb {R} ^{n}} (see § Classification of seminorms: absolutely convex absorbing sets below). All the above formulas also yield norms on Cn{\displaystyle \mathbb {C} ^{n}} without modification. There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.

Let E{\displaystyle E} be a finite extension of a field k{\displaystyle k} of inseparable degree pμ,{\displaystyle p^{\mu },} and let k{\displaystyle k} have algebraic closure K.{\displaystyle K.} If the distinct embeddings of E{\displaystyle E} are {σj}j,{\displaystyle \left\{\sigma _{j}\right\}_{j},} then the Galois-theoretic norm of an element α∈E{\displaystyle \alpha \in E} is the value (∏jσj(α))pμ.{\textstyle \left(\prod _{j}{\sigma _{j}(\alpha )}\right)^{p^{\mu }}.} As that function is homogeneous of degree [E:k]{\displaystyle [E:k]}, the Galois-theoretic norm is not a norm in the sense of this article. However, the [E:k]{\displaystyle [E:k]}-th root of the norm (assuming that concept makes sense) is a norm.[16]

The concept of norm N(z){\displaystyle N(z)} in composition algebras does not share the usual properties of a norm since null vectors are allowed. A composition algebra (A,∗,N){\displaystyle (A,{}^{*},N)} consists of an algebra over a field A,{\displaystyle A,} an involution ∗,{\displaystyle {}^{*},} and a quadratic form N(z)=zz∗{\displaystyle N(z)=zz^{*}} called the "norm". The characteristic feature of composition algebras is the homomorphism property of N{\displaystyle N}: for the product wz{\displaystyle wz} of two elements w{\displaystyle w} and z{\displaystyle z} of the composition algebra, its norm satisfies N(wz)=N(w)N(z).{\displaystyle N(wz)=N(w)N(z).} In the case of division algebras R,{\displaystyle \mathbb {R} ,} C,{\displaystyle \mathbb {C} ,} H,{\displaystyle \mathbb {H} ,} and O{\displaystyle \mathbb {O} } the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.

For any norm p:X→R{\displaystyle p:X\to \mathbb {R} } on a vector space X,{\displaystyle X,} the reverse triangle inequality holds: p(x±y)≥|p(x)−p(y)|for allx,y∈X.{\displaystyle p(x\pm y)\geq |p(x)-p(y)|{\text{ for all }}x,y\in X.} If u:X→Y{\displaystyle u:X\to Y} is a continuous linear map between normed spaces, then the norm of u{\displaystyle u} and the norm of the transpose of u{\displaystyle u} are equal.[17]

For the Lp{\displaystyle L^{p}} norms, we have Hölder's inequality[18] |⟨x,y⟩|≤‖x‖p‖y‖q1p+1q=1.{\displaystyle |\langle x,y\rangle |\leq \|x\|_{p}\|y\|_{q}\qquad {\frac {1}{p}}+{\frac {1}{q}}=1.} A special case of this is the Cauchy–Schwarz inequality:[18] |⟨x,y⟩|≤‖x‖2‖y‖2.{\displaystyle \left|\langle x,y\rangle \right|\leq \|x\|_{2}\|y\|_{2}.}

Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.

The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any p{\displaystyle p}-norm, it is a superellipse with congruent axes (see the accompanying illustration).
Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and p≥1{\displaystyle p\geq 1} for a p{\displaystyle p}-norm).

In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors {vn}{\displaystyle \{v_{n}\}} is said to converge in norm to v,{\displaystyle v,} if ‖vn−v‖→0{\displaystyle \left\|v_{n}-v\right\|\to 0} as n→∞.{\displaystyle n\to \infty .} Equivalently, the topology consists of all sets that can be represented as a union of open balls. If (X,‖⋅‖){\displaystyle (X,\|\cdot \|)} is a normed space then[19] ‖x−y‖=‖x−z‖+‖z−y‖for allx,y∈Xandz∈[x,y].{\displaystyle \|x-y\|=\|x-z\|+\|z-y\|{\text{ for all }}x,y\in X{\text{ and }}z\in [x,y].}

Two norms ‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }} and ‖⋅‖β{\displaystyle \|\cdot \|_{\beta }} on a vector space X{\displaystyle X} are called equivalent if they induce the same topology,[8] which happens if and only if there exist positive real numbers C{\displaystyle C} and D{\displaystyle D} such that for all x∈X{\displaystyle x\in X} C‖x‖α≤‖x‖β≤D‖x‖α.{\displaystyle C\|x\|_{\alpha }\leq \|x\|_{\beta }\leq D\|x\|_{\alpha }.} For instance, if p>r≥1{\displaystyle p>r\geq 1} on Cn,{\displaystyle \mathbb {C} ^{n},} then[20] ‖x‖p≤‖x‖r≤n(1/r−1/p)‖x‖p.{\displaystyle \|x\|_{p}\leq \|x\|_{r}\leq n^{(1/r-1/p)}\|x\|_{p}.} In particular, ‖x‖2≤‖x‖1≤n‖x‖2{\displaystyle \|x\|_{2}\leq \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}} ‖x‖∞≤‖x‖2≤n‖x‖∞{\displaystyle \|x\|_{\infty }\leq \|x\|_{2}\leq {\sqrt {n}}\|x\|_{\infty }} ‖x‖∞≤‖x‖1≤n‖x‖∞,{\displaystyle \|x\|_{\infty }\leq \|x\|_{1}\leq n\|x\|_{\infty },} That is, ‖x‖∞≤‖x‖2≤‖x‖1≤n‖x‖2≤n‖x‖∞.{\displaystyle \|x\|_{\infty }\leq \|x\|_{2}\leq \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}\leq n\|x\|_{\infty }.} If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent. Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise, the uniform structure defined by equivalent norms on the vector space is uniformly isomorphic.

All seminorms on a vector space X{\displaystyle X} can be classified in terms of absolutely convex absorbing subsets A{\displaystyle A} of X.{\displaystyle X.} To each such subset corresponds a seminorm pA{\displaystyle p_{A}} called the gauge of A,{\displaystyle A,} defined as pA(x):=inf{r∈R:r>0,x∈rA}{\displaystyle p_{A}(x):=\inf\{r\in \mathbb {R} :r>0,x\in rA\}} where inf{\displaystyle \inf _{}} is the infimum, with the property that {x∈X:pA(x)<1}⊆A⊆{x∈X:pA(x)≤1}.{\displaystyle \left\{x\in X:p_{A}(x)<1\right\}~\subseteq ~A~\subseteq ~\left\{x\in X:p_{A}(x)\leq 1\right\}.} Conversely: Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family (p){\displaystyle (p)} of seminorms p{\displaystyle p} that separates points: the collection of all finite intersections of sets {p<1/n}{\displaystyle \{p<1/n\}} turns the space into a locally convex topological vector space so that every p is continuous. Such a method is used to design weak and weak* topologies.
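A short Python sketch pulling together several of the norms discussed above (all function names are our own, written from the stated definitions rather than any library API). It evaluates the p-norms, the maximum norm, Donoho's zero "norm", and an energy norm, and spot-checks the chain of equivalence inequalities just quoted for n = 4:

    import math

    def p_norm(x, p):
        # p-norm for p >= 1; p = 1 is the taxicab norm, p = 2 the Euclidean norm
        return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

    def max_norm(x):
        # limit of the p-norms as p -> infinity
        return max(abs(xi) for xi in x)

    def count_nonzero(x):
        # Donoho's zero "norm": the number of non-zero components (not homogeneous)
        return sum(1 for xi in x if xi != 0)

    def energy_norm(x, A):
        # sqrt(x^T A x) for a symmetric positive definite matrix A (list of rows)
        Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
        return math.sqrt(sum(x[i] * Ax[i] for i in range(len(x))))

    x = [1.0, -2.0, 0.0, 3.0]
    n = len(x)
    one, two, inf = p_norm(x, 1), p_norm(x, 2), max_norm(x)

    # the chain ||x||_inf <= ||x||_2 <= ||x||_1 <= sqrt(n)||x||_2 <= n||x||_inf
    assert inf <= two <= one <= math.sqrt(n) * two <= n * inf

    # with the identity matrix, the energy norm reduces to the Euclidean norm
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    assert abs(energy_norm(x, I) - two) < 1e-12

    print(one, two, inf, count_nonzero(x))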
https://en.wikipedia.org/wiki/Norm_(mathematics)
In mathematics, the logarithm of a number is the exponent by which another fixed value, the base, must be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the 3rd power: 1000 = 10^3 = 10 × 10 × 10. More generally, if x = b^y, then y is the logarithm of x to base b, written log_b x, so log_10 1000 = 3. As a single-variable function, the logarithm to base b is the inverse of exponentiation with base b.

The logarithm base 10 is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics because of its very simple derivative. The binary logarithm uses base 2 and is widely used in computer science, information theory, music theory, and photography. When the base is unambiguous from the context or irrelevant it is often omitted, and the logarithm is written log x.

Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations.[1] They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors: logb⁡(xy)=logb⁡x+logb⁡y,{\displaystyle \log _{b}(xy)=\log _{b}x+\log _{b}y,} provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms.[2]

Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratios as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.

The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.

Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. The inverse of addition is subtraction, and the inverse of multiplication is division. Similarly, a logarithm is the inverse operation of exponentiation. Exponentiation is when a number b, the base, is raised to a certain power y, the exponent, to give a value x; this is denoted by=x.{\displaystyle b^{y}=x.} For example, raising 2 to the power of 3 gives 8: 23=8.{\displaystyle 2^{3}=8.}

The logarithm of base b is the inverse operation, that provides the output y from the input x. That is, y=logb⁡x{\displaystyle y=\log _{b}x} is equivalent to x=by{\displaystyle x=b^{y}} if b is a positive real number.
(Ifbis not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.) One of the main historical motivations of introducing logarithms is the formulalogb⁡(xy)=logb⁡x+logb⁡y,{\displaystyle \log _{b}(xy)=\log _{b}x+\log _{b}y,}by whichtables of logarithmsallow multiplication and division to be reduced to addition and subtraction, a great aid to calculations before the invention of computers. Given a positivereal numberbsuch thatb≠ 1, thelogarithmof a positive real numberxwith respect to baseb[nb 1]is the exponent by whichbmust be raised to yieldx. In other words, the logarithm ofxto basebis the unique real numberysuch thatby=x{\displaystyle b^{y}=x}.[3] The logarithm is denoted "logbx" (pronounced as "the logarithm ofxto baseb", "thebase-blogarithm ofx", or most commonly "the log, baseb, ofx"). An equivalent and more succinct definition is that the functionlogbis theinverse functionto the functionx↦bx{\displaystyle x\mapsto b^{x}}. Several important formulas, sometimes calledlogarithmic identitiesorlogarithmic laws, relate logarithms to one another.[4] The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of thep-th power of a number isptimes the logarithm of the number itself; the logarithm of ap-th root is the logarithm of the number divided byp. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitionsx=blogb⁡x{\displaystyle x=b^{\,\log _{b}x}}ory=blogb⁡y{\displaystyle y=b^{\,\log _{b}y}}in the left hand sides. In the following formulas,⁠x{\displaystyle x}⁠and⁠y{\displaystyle y}⁠arepositive real numbersand⁠p{\displaystyle p}⁠is an integer greater than 1. The logarithmlogbxcan be computed from the logarithms ofxandbwith respect to an arbitrary basekusing the following formula:[nb 2]logb⁡x=logk⁡xlogk⁡b.{\displaystyle \log _{b}x={\frac {\log _{k}x}{\log _{k}b}}.} Typicalscientific calculatorscalculate the logarithms to bases 10 ande.[5]Logarithms with respect to any basebcan be determined using either of these two logarithms by the previous formula:logb⁡x=log10⁡xlog10⁡b=loge⁡xloge⁡b.{\displaystyle \log _{b}x={\frac {\log _{10}x}{\log _{10}b}}={\frac {\log _{e}x}{\log _{e}b}}.} Given a numberxand its logarithmy= logbxto an unknown baseb, the base is given by: b=x1y,{\displaystyle b=x^{\frac {1}{y}},} which can be seen from taking the defining equationx=blogb⁡x=by{\displaystyle x=b^{\,\log _{b}x}=b^{y}}to the power of1y.{\displaystyle {\tfrac {1}{y}}.} Among all choices for the base, three are particularly common. These areb= 10,b=e(theirrationalmathematical constante≈ 2.71828183),andb= 2(thebinary logarithm). Inmathematical analysis, the logarithm baseeis widespread because of analytical properties explained below. On the other hand,base 10logarithms (thecommon logarithm) are easy to use for manual calculations in thedecimalnumber system:[6] log10(10x)=log10⁡10+log10⁡x=1+log10⁡x.{\displaystyle \log _{10}\,(\,10\,x\,)\ =\;\log _{10}10\ +\;\log _{10}x\ =\ 1\,+\,\log _{10}x\,.} Thus,log10(x)is related to the number ofdecimal digitsof a positive integerx: The number of digits is the smallestintegerstrictly bigger thanlog10(x).[7]For example,log10(5986)is approximately 3.78 . Thenext integer aboveit is 4, which is the number of digits of 5986. 
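The change-of-base formula, the base-recovery identity b = x^(1/y), and the digit count via log_10 are all easy to check numerically. A minimal sketch with Python's math module (the specific numbers are just examples):

    import math

    x, b = 100.0, 2.0
    # change of base: log_b(x) = ln(x) / ln(b); math.log(x, b) computes the same
    assert abs(math.log(x) / math.log(b) - math.log(x, b)) < 1e-12

    # recovering an unknown base from x and y = log_b(x):  b = x**(1/y)
    y = math.log(x, b)
    print(x ** (1.0 / y))                  # 2.0 (up to rounding)

    # number of decimal digits of a positive integer n: floor(log10(n)) + 1
    n = 5986
    print(math.log10(n))                   # about 3.78
    print(math.floor(math.log10(n)) + 1)   # 4, the number of digits of 5986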
Both the natural logarithm and the binary logarithm are used ininformation theory, corresponding to the use ofnatsorbitsas the fundamental units of information, respectively.[8]Binary logarithms are also used incomputer science, where thebinary systemis ubiquitous; inmusic theory, where a pitch ratio of two (theoctave) is ubiquitous and the number ofcentsbetween any two pitches is a scaled version of the binary logarithm, or log 2 times 1200, of the pitch ratio (that is, 100 cents persemitoneinconventional equal temperament), or equivalently the log base21/1200;and inphotographyrescaled base 2 logarithms are used to measureexposure values,light levels,exposure times, lensapertures, andfilm speedsin "stops".[9] The abbreviationlogxis often used when the intended base can be inferred based on the context or discipline, or when the base is indeterminate or immaterial. Common logarithms (base 10), historically used in logarithm tables and slide rules, are a basic tool for measurement and computation in many areas of science and engineering; in these contextslogxstill often means the base ten logarithm.[10]In mathematicslogxusually refers to the natural logarithm (basee).[11]In computer science and information theory,logoften refers to binary logarithms (base 2).[12]The following table lists common notations for logarithms to these bases. The "ISO notation" column lists designations suggested by theInternational Organization for Standardization.[13] The history of logarithms in seventeenth-century Europe saw the discovery of a newfunctionthat extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded byJohn Napierin 1614, in a book titledMirifici Logarithmorum Canonis Descriptio(Description of the Wonderful Canon of Logarithms).[19][20]Prior to Napier's invention, there had been other techniques of similar scopes, such as theprosthaphaeresisor the use of tables of progressions, extensively developed byJost Bürgiaround 1600.[21][22]Napier coined the term for logarithm in Middle Latin,logarithmus, literally meaning'ratio-number', derived from the Greeklogos'proportion, ratio, word'+arithmos'number'. Thecommon logarithmof a number is the index of that power of ten which equals the number.[23]Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to byArchimedesas the "order of a number".[24]The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities.[25]Such methods are calledprosthaphaeresis. Invention of thefunctionnow known as thenatural logarithmbegan as an attempt to perform aquadratureof a rectangularhyperbolabyGrégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had writtenThe Quadrature of the Parabolain the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between ageometric progressionin itsargumentand anarithmetic progressionof values, promptedA. A. de Sarasato make the connection of Saint-Vincent's quadrature and the tradition of logarithms inprosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated byChristiaan Huygens, andJames Gregory. 
The notation Log y was adopted by Gottfried Wilhelm Leibniz in 1675,[26] and the next year he connected it to the integral ∫dyy.{\textstyle \int {\frac {dy}{y}}.} Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that[27] log⁡(cos⁡θ+isin⁡θ)=iθ.{\displaystyle \log(\cos \theta +i\sin \theta )=i\theta .}

By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms "[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations."[28]

As the function f(x) = b^x is the inverse function of log_b x, it has been called an antilogarithm.[29] Nowadays, this function is more commonly called an exponential function.

A key tool that enabled the practical use of logarithms was the table of logarithms.[30] The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from 1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of log_10 x for any number x in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of x can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point.[31] The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by log10⁡3542=log10⁡(1000⋅3.542)=3+log10⁡3.542≈3+log10⁡3.54{\displaystyle {\begin{aligned}\log _{10}3542&=\log _{10}(1000\cdot 3.542)\\&=3+\log _{10}3.542\\&\approx 3+\log _{10}3.54\end{aligned}}} Greater accuracy can be obtained by interpolation: log10⁡3542≈3+log10⁡3.54+0.2(log10⁡3.55−log10⁡3.54){\displaystyle \log _{10}3542\approx {}3+\log _{10}3.54+0.2(\log _{10}3.55-\log _{10}3.54)} The value of 10^x can be determined by reverse look up in the same table, since the logarithm is a monotonic function.

The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, via the same table: cd=10log10⁡c10log10⁡d=10log10⁡c+log10⁡d{\displaystyle cd=10^{\,\log _{10}c}\,10^{\,\log _{10}d}=10^{\,\log _{10}c\,+\,\log _{10}d}} and cd=cd−1=10log10⁡c−log10⁡d.{\displaystyle {\frac {c}{d}}=cd^{-1}=10^{\,\log _{10}c\,-\,\log _{10}d}.} For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities.
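The characteristic/mantissa split and multiplication-by-addition can be simulated in a few lines of Python, using math.log10 in place of a printed table (a sketch only; real tables supplied the mantissa at a fixed precision):

    import math

    def characteristic_and_mantissa(x):
        # common logarithm split into integer part (characteristic)
        # and fractional part (mantissa)
        lg = math.log10(x)
        c = math.floor(lg)
        return c, lg - c

    print(characteristic_and_mantissa(3542))    # (3, 0.549...)
    print(characteristic_and_mantissa(354.2))   # (2, same mantissa)

    # multiplying via logarithms: c * d = 10**(log10(c) + log10(d))
    c, d = 3542.0, 2.718
    product = 10 ** (math.log10(c) + math.log10(d))
    print(product, c * d)                       # agree up to rounding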
Calculations of powers androotsare reduced to multiplications or divisions and lookups bycd=(10log10⁡c)d=10dlog10⁡c{\displaystyle c^{d}=\left(10^{\,\log _{10}c}\right)^{d}=10^{\,d\log _{10}c}} andcd=c1d=101dlog10⁡c.{\displaystyle {\sqrt[{d}]{c}}=c^{\frac {1}{d}}=10^{{\frac {1}{d}}\log _{10}c}.} Trigonometric calculations were facilitated by tables that contained the common logarithms oftrigonometric functions. Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale,Gunter's rule, was invented shortly after Napier's invention.William Oughtredenhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here: For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables.[32] A deeper study of logarithms requires the concept of afunction. A function is a rule that, given one number, produces another number.[33]An example is the function producing thex-th power ofbfrom any real numberx, where the basebis a fixed number. This function is written asf(x) =bx. Whenbis positive and unequal to 1, we show below thatfis invertible when considered as a function from the reals to the positive reals. Letbbe a positive real number not equal to 1 and letf(x) =bx. It is a standard result in real analysis that any continuous strictly monotonic function is bijective between its domain and range. This fact follows from theintermediate value theorem.[34]Now,fisstrictly increasing(forb> 1), or strictly decreasing (for0 <b< 1),[35]is continuous, has domainR{\displaystyle \mathbb {R} }, and has rangeR>0{\displaystyle \mathbb {R} _{>0}}. Therefore,fis a bijection fromR{\displaystyle \mathbb {R} }toR>0{\displaystyle \mathbb {R} _{>0}}. In other words, for each positive real numbery, there is exactly one real numberxsuch thatbx=y{\displaystyle b^{x}=y}. We letlogb:R>0→R{\displaystyle \log _{b}\colon \mathbb {R} _{>0}\to \mathbb {R} }denote the inverse off. That is,logbyis the unique real numberxsuch thatbx=y{\displaystyle b^{x}=y}. This function is called the base-blogarithm functionorlogarithmic function(or justlogarithm). The functionlogbxcan also be essentially characterized by the product formulalogb⁡(xy)=logb⁡x+logb⁡y.{\displaystyle \log _{b}(xy)=\log _{b}x+\log _{b}y.}More precisely, the logarithm to any baseb> 1is the onlyincreasing functionffrom the positive reals to the reals satisfyingf(b) = 1and[36]f(xy)=f(x)+f(y).{\displaystyle f(xy)=f(x)+f(y).} As discussed above, the functionlogbis the inverse to the exponential functionx↦bx{\displaystyle x\mapsto b^{x}}. Therefore, theirgraphscorrespond to each other upon exchanging thex- and they-coordinates (or upon reflection at the diagonal linex=y), as shown at the right: a point(t,u=bt)on the graph offyields a point(u,t= logbu)on the graph of the logarithm and vice versa. As a consequence,logb(x)diverges to infinity(gets bigger than any given number) ifxgrows to infinity, provided thatbis greater than one. 
In that case,logb(x)is anincreasing function. Forb< 1,logb(x)tends to minus infinity instead. Whenxapproaches zero,logbxgoes to minus infinity forb> 1(plus infinity forb< 1, respectively). Analytic properties of functions pass to their inverses.[34]Thus, asf(x) =bxis a continuous anddifferentiable function, so islogby. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as thederivativeoff(x)evaluates toln(b)bxby the properties of theexponential function, thechain ruleimplies that the derivative oflogbxis given by[35][37]ddxlogb⁡x=1xln⁡b.{\displaystyle {\frac {d}{dx}}\log _{b}x={\frac {1}{x\ln b}}.}That is, theslopeof thetangenttouching the graph of thebase-blogarithm at the point(x, logb(x))equals1/(xln(b)). The derivative ofln(x)is1/x; this implies thatln(x)is the uniqueantiderivativeof1/xthat has the value 0 forx= 1. It is this very simple formula that motivated to qualify as "natural" the natural logarithm; this is also one of the main reasons of the importance of theconstante. The derivative with a generalized functional argumentf(x)isddxln⁡f(x)=f′(x)f(x).{\displaystyle {\frac {d}{dx}}\ln f(x)={\frac {f'(x)}{f(x)}}.}The quotient at the right hand side is called thelogarithmic derivativeoff. Computingf'(x)by means of the derivative ofln(f(x))is known aslogarithmic differentiation.[38]The antiderivative of thenatural logarithmln(x)is:[39]∫ln⁡(x)dx=xln⁡(x)−x+C.{\displaystyle \int \ln(x)\,dx=x\ln(x)-x+C.}Related formulas, such as antiderivatives of logarithms to other bases can be derived from this equation using the change of bases.[40] Thenatural logarithmoftcan be defined as thedefinite integral: ln⁡t=∫1t1xdx.{\displaystyle \ln t=\int _{1}^{t}{\frac {1}{x}}\,dx.}This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral,ln(t)equals the area between thex-axis and the graph of the function1/x, ranging fromx= 1tox=t. This is a consequence of thefundamental theorem of calculusand the fact that the derivative ofln(x)is1/x. Product and power logarithm formulas can be derived from this definition.[41]For example, the product formulaln(tu) = ln(t) + ln(u)is deduced as: ln⁡(tu)=∫1tu1xdx=(1)∫1t1xdx+∫ttu1xdx=(2)ln⁡(t)+∫1u1wdw=ln⁡(t)+ln⁡(u).{\displaystyle {\begin{aligned}\ln(tu)&=\int _{1}^{tu}{\frac {1}{x}}\,dx\\&{\stackrel {(1)}{=}}\int _{1}^{t}{\frac {1}{x}}\,dx+\int _{t}^{tu}{\frac {1}{x}}\,dx\\&{\stackrel {(2)}{=}}\ln(t)+\int _{1}^{u}{\frac {1}{w}}\,dw\\&=\ln(t)+\ln(u).\end{aligned}}} The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w=x/t). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factortand shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the functionf(x) = 1/xagain. Therefore, the left hand blue area, which is the integral off(x)fromttotuis the same as the integral from 1 tou. This justifies the equality (2) with a more geometric proof. 
The power formulaln(tr) =rln(t)may be derived in a similar way: ln⁡(tr)=∫1tr1xdx=∫1t1wr(rwr−1dw)=r∫1t1wdw=rln⁡(t).{\displaystyle {\begin{aligned}\ln(t^{r})&=\int _{1}^{t^{r}}{\frac {1}{x}}dx\\&=\int _{1}^{t}{\frac {1}{w^{r}}}\left(rw^{r-1}\,dw\right)\\&=r\int _{1}^{t}{\frac {1}{w}}\,dw\\&=r\ln(t).\end{aligned}}}The second equality uses a change of variables (integration by substitution),w=x1/r. The sum over the reciprocals of natural numbers,1+12+13+⋯+1n=∑k=1n1k,{\displaystyle 1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{n}}=\sum _{k=1}^{n}{\frac {1}{k}},}is called theharmonic series. It is closely tied to thenatural logarithm: asntends toinfinity, the difference,∑k=1n1k−ln⁡(n),{\displaystyle \sum _{k=1}^{n}{\frac {1}{k}}-\ln(n),}converges(i.e. gets arbitrarily close) to a number known as theEuler–Mascheroni constantγ= 0.5772.... This relation aids in analyzing the performance of algorithms such asquicksort.[42] Real numbersthat are notalgebraicare calledtranscendental;[43]for example,πandeare such numbers, but2−3{\displaystyle {\sqrt {2-{\sqrt {3}}}}}is not.Almost allreal numbers are transcendental. The logarithm is an example of atranscendental function. TheGelfond–Schneider theoremasserts that logarithms usually take transcendental, i.e. "difficult" values.[44] Logarithms are easy to compute in some cases, such aslog10(1000) = 3. In general, logarithms can be calculated usingpower seriesor thearithmetic–geometric mean, or be retrieved from a precalculatedlogarithm tablethat provides a fixed precision.[45][46]Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently.[47]Using look-up tables,CORDIC-like methods can be used to compute logarithms by using only the operations of addition andbit shifts.[48][49]Moreover, thebinary logarithm algorithmcalculateslb(x)recursively, based on repeated squarings ofx, taking advantage of the relationlog2⁡(x2)=2log2⁡|x|.{\displaystyle \log _{2}\left(x^{2}\right)=2\log _{2}|x|.} For any real numberzthat satisfies0 <z≤ 2, the following formula holds:[nb 4][50] ln⁡(z)=(z−1)11−(z−1)22+(z−1)33−(z−1)44+⋯=∑k=1∞(−1)k+1(z−1)kk.{\displaystyle {\begin{aligned}\ln(z)&={\frac {(z-1)^{1}}{1}}-{\frac {(z-1)^{2}}{2}}+{\frac {(z-1)^{3}}{3}}-{\frac {(z-1)^{4}}{4}}+\cdots \\&=\sum _{k=1}^{\infty }(-1)^{k+1}{\frac {(z-1)^{k}}{k}}.\end{aligned}}} Equating the functionln(z)to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known aspartial sums): (z−1),(z−1)−(z−1)22,(z−1)−(z−1)22+(z−1)33,…{\displaystyle (z-1),\ \ (z-1)-{\frac {(z-1)^{2}}{2}},\ \ (z-1)-{\frac {(z-1)^{2}}{2}}+{\frac {(z-1)^{3}}{3}},\ \ldots } For example, withz= 1.5the third approximation yields0.4167, which is about0.011greater thanln(1.5) = 0.405465, and the ninth approximation yields0.40553, which is only about0.0001greater. Thenth partial sum can approximateln(z)with arbitrary precision, provided the number of summandsnis large enough. In elementary calculus, the series is said toconvergeto the functionln(z), and the function is thelimitof the series. It is theTaylor seriesof thenatural logarithmatz= 1. 
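The partial sums quoted above are easy to reproduce. A minimal Python sketch (ln_taylor is our own helper) that also spot-checks the integral definition of ln from the preceding discussion with a crude midpoint rule:

    import math

    def ln_taylor(z, n_terms):
        # partial sum of the Taylor series of ln(z) at z = 1 (valid for 0 < z <= 2)
        return sum((-1) ** (k + 1) * (z - 1) ** k / k
                   for k in range(1, n_terms + 1))

    print(ln_taylor(1.5, 3))    # 0.41666..., about 0.011 above ln(1.5)
    print(ln_taylor(1.5, 9))    # 0.40553..., within about 0.0001
    print(math.log(1.5))        # 0.405465...

    # ln(t) as the integral of 1/x from 1 to t (midpoint rule, crude check)
    t, N = 1.5, 10000
    h = (t - 1) / N
    approx = sum(h / (1 + (i + 0.5) * h) for i in range(N))
    print(approx)               # also close to 0.405465...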
The Taylor series of ln(z) provides a particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then ln⁡(1+z)=z−z22+z33−⋯≈z.{\displaystyle \ln(1+z)=z-{\frac {z^{2}}{2}}+{\frac {z^{3}}{3}}-\cdots \approx z.} For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.

Another series is based on the inverse hyperbolic tangent function: ln⁡(z)=2⋅artanhz−1z+1=2(z−1z+1+13(z−1z+1)3+15(z−1z+1)5+⋯),{\displaystyle \ln(z)=2\cdot \operatorname {artanh} \,{\frac {z-1}{z+1}}=2\left({\frac {z-1}{z+1}}+{\frac {1}{3}}{\left({\frac {z-1}{z+1}}\right)}^{3}+{\frac {1}{5}}{\left({\frac {z-1}{z+1}}\right)}^{5}+\cdots \right),} for any real number z > 0.[nb 5][50] Using sigma notation, this is also written as ln⁡(z)=2∑k=0∞12k+1(z−1z+1)2k+1.{\displaystyle \ln(z)=2\sum _{k=0}^{\infty }{\frac {1}{2k+1}}\left({\frac {z-1}{z+1}}\right)^{2k+1}.} This series can be derived from the above Taylor series. It converges quicker than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3×10^−6. The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting A=zexp⁡(y),{\displaystyle A={\frac {z}{\exp(y)}},} the logarithm of z is: ln⁡(z)=y+ln⁡(A).{\displaystyle \ln(z)=y+\ln(A).} The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. A can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10^b, so that ln(z) = ln(a) + b · ln(10).

A closely related method can be used to compute the logarithm of integers. Putting z=n+1n{\textstyle z={\frac {n+1}{n}}} in the above series, it follows that: ln⁡(n+1)=ln⁡(n)+2∑k=0∞12k+1(12n+1)2k+1.{\displaystyle \ln(n+1)=\ln(n)+2\sum _{k=0}^{\infty }{\frac {1}{2k+1}}\left({\frac {1}{2n+1}}\right)^{2k+1}.} If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n+1), with a rate of convergence of (12n+1)2{\textstyle \left({\frac {1}{2n+1}}\right)^{2}}.

The arithmetic–geometric mean yields high-precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula (due to Carl Friedrich Gauss):[51][52] ln⁡(x)≈π2M(1,22−m/x)−mln⁡(2).{\displaystyle \ln(x)\approx {\frac {\pi }{2\,\mathrm {M} \!\left(1,2^{2-m}/x\right)}}-m\ln(2).} Here M(x,y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x+y)/2 (arithmetic mean) and xy{\textstyle {\sqrt {xy}}} (geometric mean) of x and y and then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit which is the value of M(x,y). m is chosen such that x2m>2p/2{\displaystyle x\,2^{m}>2^{p/2}.\,} to ensure the required precision. A larger m makes the M(x,y) calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants π and ln(2) can be calculated with quickly converging series.
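The Gauss formula above is short enough to try directly. A minimal Python sketch (agm and ln_agm are our own names; the method's real use is at hundreds of digits with arbitrary-precision arithmetic, so in ordinary double precision only a limited number of digits survive):

    import math

    def agm(a, b):
        # arithmetic-geometric mean M(a, b); convergence is quadratic,
        # so a bounded number of iterations suffices in double precision
        for _ in range(40):
            a, b = (a + b) / 2, math.sqrt(a * b)
            if a == b:
                break
        return a

    def ln_agm(x, p=53):
        # ln(x) ~ pi / (2 M(1, 2**(2-m)/x)) - m ln(2), with x * 2**m > 2**(p/2)
        m = max(0, math.ceil(p / 2 - math.log2(x)) + 1)
        return math.pi / (2 * agm(1.0, 2.0 ** (2 - m) / x)) - m * math.log(2)

    print(ln_agm(10.0))     # close to 2.302585...
    print(math.log(10.0))   # reference value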
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm to compute the logarithm that is similar to long division and was later used in the Connection Machine. The algorithm relies on the fact that every real number x where 1 < x < 2 can be represented as a product of distinct factors of the form 1 + 2^−k. The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^−k) < x, then it changes P to P · (1 + 2^−k). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^−k) corresponding to those k for which the factor 1 + 2^−k was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^−k) for all k. Any base may be used for the logarithm table.[53]

Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral.[54] Benford's law on the distribution of leading digits can also be explained by scale invariance.[55] Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions.[56] The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.

Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios: 10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the attenuation or amplification of electrical signals,[57] to describe power levels of sounds in acoustics,[58] and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels.[59] In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.[60]

The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0.[61] Apparent magnitude measures the brightness of stars logarithmically.[62] In chemistry the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p.[63] For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H+ take in water).[64] The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3.
The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.

Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a·b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a·x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws.[65]

Logarithms occur in several laws describing human perception:[66][67] Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have.[68] Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the ratio between the distance to a target and the size of the target.[69] In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying.[70] (This "law", however, is less realistic than more recent models, such as Stevens's power law.[71])

Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.[72][73]

Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm.[74]

Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution.[75] Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.[76]

Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables.[77]

Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is d (from 1 to 9) equals log_10(d + 1) − log_10(d), regardless of the unit of measurement.[78] Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc.
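The Benford probabilities just quoted follow directly from the formula; a short Python check:

    import math

    # Benford's law: P(first digit = d) = log10(d + 1) - log10(d)
    for d in range(1, 10):
        p = math.log10(d + 1) - math.log10(d)
        print(d, round(100 * p, 1))   # 1: 30.1%, 2: 17.6%, ..., 9: 4.6%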
Auditors examine deviations from Benford's law to detect fraudulent accounting.[79] Thelogarithm transformationis a type ofdata transformationused to bring the empirical distribution closer to the assumed one. Analysis of algorithmsis a branch ofcomputer sciencethat studies theperformanceofalgorithms(computer programs solving a certain problem).[80]Logarithms are valuable for describing algorithms thatdivide a probleminto smaller ones, and join the solutions of the subproblems.[81] For example, to find a number in a sorted list, thebinary search algorithmchecks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average,log2(N)comparisons, whereNis the list's length.[82]Similarly, themerge sortalgorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a timeapproximately proportional toN· log(N).[83]The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standarduniform cost model.[84] A functionf(x)is said togrow logarithmicallyiff(x)is (exactly or approximately) proportional to the logarithm ofx. (Biological descriptions of organism growth, however, use this term for an exponential function.[85]) For example, anynatural numberNcan be represented inbinary formin no more thanlog2N+ 1bits. In other words, the amount ofmemoryneeded to storeNgrows logarithmically withN. Entropyis broadly a measure of the disorder of some system. Instatistical thermodynamics, the entropySof some physical system is defined asS=−k∑ipiln⁡(pi).{\displaystyle S=-k\sum _{i}p_{i}\ln(p_{i}).\,}The sum is over all possible statesiof the system in question, such as the positions of gas particles in a container. Moreover,piis the probability that the stateiis attained andkis theBoltzmann constant. Similarly,entropy in information theorymeasures the quantity of information. If a message recipient may expect any one ofNpossible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified aslog2Nbits.[86] Lyapunov exponentsuse logarithms to gauge the degree of chaoticity of adynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems arechaoticin adeterministicway, because small measurement errors of the initial state predictably lead to largely different final states.[87]At least one Lyapunov exponent of a deterministically chaotic system is positive. Logarithms occur in definitions of thedimensionoffractals.[88]Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. TheSierpinski triangle(pictured) can be covered by three copies of itself, each having sides half the original length. This makes theHausdorff dimensionof this structureln(3)/ln(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained bycounting the number of boxesneeded to cover the fractal in question. Logarithms are related to musical tones andintervals. Inequal temperamenttunings, the frequency ratio depends only on the interval between two tones, not on the specific frequency, orpitch, of the individual tones. 
In the 12-tone equal temperament tuning common in modern Western music, each octave (doubling of frequency) is broken into twelve equally spaced intervals called semitones. For example, if the note A has a frequency of 440 Hz then the note B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree: 466440≈493466≈1.059≈212.{\displaystyle {\frac {466}{440}}\approx {\frac {493}{466}}\approx 1.059\approx {\sqrt[{12}]{2}}.} Intervals between arbitrary pitches can be measured in octaves by taking the base-2 logarithm of the frequency ratio, can be measured in equally tempered semitones by taking the base-2^(1/12) logarithm (12 times the base-2 logarithm), or can be measured in cents, hundredths of a semitone, by taking the base-2^(1/1200) logarithm (1200 times the base-2 logarithm). The latter is used for finer encoding, as it is needed for finer measurements or non-equal temperaments.[89]

Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by xln⁡(x),{\displaystyle {\frac {x}{\ln(x)}},} in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity.[90] As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by Li(x)=∫2x1ln⁡(t)dt.{\displaystyle \mathrm {Li} (x)=\int _{2}^{x}{\frac {1}{\ln(t)}}\,dt.} The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x).[91] The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.

The logarithm of n factorial, n! = 1 · 2 · ... · n, is given by ln⁡(n!)=ln⁡(1)+ln⁡(2)+⋯+ln⁡(n).{\displaystyle \ln(n!)=\ln(1)+\ln(2)+\cdots +\ln(n).} This can be used to obtain Stirling's formula, an approximation of n! for large n.[92]

All the complex numbers a that solve the equation ea=z{\displaystyle e^{a}=z} are called complex logarithms of z, when z is (considered as) a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number z by its absolute value, that is, the (positive, real) distance r to the origin, and an angle between the real (x) axis Re and the line passing through both the origin and z. This angle is called the argument of z. The absolute value r of z is given by r=x2+y2.{\displaystyle \textstyle r={\sqrt {x^{2}+y^{2}}}.} Using the geometrical interpretation of sine and cosine and their periodicity in 2π, any complex number z may be denoted as z=x+iy=r(cos⁡φ+isin⁡φ)=r(cos⁡(φ+2kπ)+isin⁡(φ+2kπ)),{\displaystyle {\begin{aligned}z&=x+iy\\&=r(\cos \varphi +i\sin \varphi )\\&=r(\cos(\varphi +2k\pi )+i\sin(\varphi +2k\pi )),\end{aligned}}} for any integer number k. Evidently the argument of z is not uniquely specified: both φ and φ' = φ + 2kπ are valid arguments of z for all integers k, because adding 2kπ radians or k⋅360°[nb 6] to φ corresponds to "winding" around the origin counter-clock-wise by k turns. The resulting complex number is always z, as illustrated at the right for k = 1.
One may select exactly one of the possible arguments ofzas the so-calledprincipal argument, denotedArg(z), with a capitalA, by requiringφto belong to one, conveniently selected turn, e.g.−π<φ≤π[93]or0 ≤φ< 2π.[94]These regions, where the argument ofzis uniquely determined are calledbranchesof the argument function. Euler's formulaconnects thetrigonometric functionssineandcosineto thecomplex exponential:eiφ=cos⁡φ+isin⁡φ.{\displaystyle e^{i\varphi }=\cos \varphi +i\sin \varphi .} Using this formula, and again the periodicity, the following identities hold:[95] z=r(cos⁡φ+isin⁡φ)=r(cos⁡(φ+2kπ)+isin⁡(φ+2kπ))=rei(φ+2kπ)=eln⁡(r)ei(φ+2kπ)=eln⁡(r)+i(φ+2kπ)=eak,{\displaystyle {\begin{aligned}z&=r\left(\cos \varphi +i\sin \varphi \right)\\&=r\left(\cos(\varphi +2k\pi )+i\sin(\varphi +2k\pi )\right)\\&=re^{i(\varphi +2k\pi )}\\&=e^{\ln(r)}e^{i(\varphi +2k\pi )}\\&=e^{\ln(r)+i(\varphi +2k\pi )}=e^{a_{k}},\end{aligned}}} whereln(r)is the unique real natural logarithm,akdenote the complex logarithms ofz, andkis an arbitrary integer. Therefore, the complex logarithms ofz, which are all those complex valuesakfor which theak-thpower ofeequalsz, are the infinitely many valuesak=ln⁡(r)+i(φ+2kπ),{\displaystyle a_{k}=\ln(r)+i(\varphi +2k\pi ),}for arbitrary integersk. Takingksuch thatφ+ 2kπis within the defined interval for the principal arguments, thenakis called theprincipal valueof the logarithm, denotedLog(z), again with a capitalL. The principal argument of any positive real numberxis 0; henceLog(x)is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powersdonotgeneralizeto the principal value of the complex logarithm.[96] The illustration at the right depictsLog(z), confining the arguments ofzto the interval(−π, π]. This way the corresponding branch of the complex logarithm has discontinuities all along the negative realxaxis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e. not changing to the correspondingk-value of the continuously neighboring branch. Such a locus is called abranch cut. Dropping the range restrictions on the argument makes the relations "argument ofz", and consequently the "logarithm ofz",multi-valued functions. Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, thelogarithm of a matrixis the (multi-valued) inverse function of thematrix exponential.[97]Another example is thep-adic logarithm, the inverse function of thep-adic exponential. Both are defined via Taylor series analogous to the real case.[98]In the context ofdifferential geometry, theexponential mapmaps thetangent spaceat a point of amanifoldto aneighborhoodof that point. Its inverse is also called the logarithmic (or log) map.[99] In the context offinite groupsexponentiation is given by repeatedly multiplying one group elementbwith itself. Thediscrete logarithmis the integernsolving the equationbn=x,{\displaystyle b^{n}=x,}wherexis an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. 
This asymmetry has important applications inpublic key cryptography, such as for example in theDiffie–Hellman key exchange, a routine that allows secure exchanges ofcryptographickeys over unsecured information channels.[100]Zech's logarithmis related to the discrete logarithm in the multiplicative group of non-zero elements of afinite field.[101] Further logarithm-like inverse functions include thedouble logarithmln(ln(x)), thesuper- or hyper-4-logarithm(a slight variation of which is callediterated logarithmin computer science), theLambert W function, and thelogit. They are the inverse functions of thedouble exponential function,tetration, off(w) =wew,[102]and of thelogistic function, respectively.[103] From the perspective ofgroup theory, the identitylog(cd) = log(c) + log(d)expresses agroup isomorphismbetween positiverealsunder multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups.[104]By means of that isomorphism, theHaar measure(Lebesgue measure)dxon the reals corresponds to the Haar measuredx/xon the positive reals.[105]The non-negative reals not only have a multiplication, but also have addition, and form asemiring, called theprobability semiring; this is in fact asemifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving anisomorphismof semirings between the probability semiring and thelog semiring. Logarithmic one-formsdf/fappear incomplex analysisandalgebraic geometryasdifferential formswith logarithmicpoles.[106] Thepolylogarithmis the function defined byLis⁡(z)=∑k=1∞zkks.{\displaystyle \operatorname {Li} _{s}(z)=\sum _{k=1}^{\infty }{z^{k} \over k^{s}}.}It is related to thenatural logarithmbyLi1(z) = −ln(1 −z). Moreover,Lis(1)equals theRiemann zeta functionζ(s).[107]
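The identity Li1(z) = −ln(1 − z) can be checked numerically from the defining series. A minimal Python sketch, assuming |z| < 1 and an arbitrary truncation point:

```python
import math

def polylog(s, z, terms=200):
    """Partial sum of the defining series Li_s(z) = sum_{k>=1} z**k / k**s."""
    return sum(z**k / k**s for k in range(1, terms + 1))

z = 0.5
print(polylog(1, z))       # ~0.693147..., from the truncated series
print(-math.log(1 - z))    # -ln(1 - z), which Li_1(z) equals
```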
https://en.wikipedia.org/wiki/Logarithm
Inmathematics,sineandcosinearetrigonometric functionsof anangle. The sine and cosine of an acuteangleare defined in the context of aright triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of thetriangle(thehypotenuse), and the cosine is theratioof the length of the adjacent leg to that of thehypotenuse. For an angleθ{\displaystyle \theta }, the sine and cosine functions are denoted assin⁡(θ){\displaystyle \sin(\theta )}andcos⁡(θ){\displaystyle \cos(\theta )}. The definitions of sine and cosine have been extended to anyrealvalue in terms of the lengths of certain line segments in aunit circle. More modern definitions express the sine and cosine asinfinite series, or as the solutions of certaindifferential equations, allowing their extension to arbitrary positive and negative values and even tocomplex numbers. The sine and cosine functions are commonly used to modelperiodicphenomena such assoundandlight waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to thejyāandkoṭi-jyāfunctions used inIndian astronomyduring theGupta period. To define the sine and cosine of an acute angleα{\displaystyle \alpha }, start with aright trianglethat contains an angle of measureα{\displaystyle \alpha }; in the accompanying figure, angleα{\displaystyle \alpha }in a right triangleABC{\displaystyle ABC}is the angle of interest. The three sides of the triangle are named as follows:[1]the opposite side is the side opposite the angle of interest; thehypotenuseis the side opposite the right angle and is always the longest side; and the adjacent side is the remaining side, between the angle of interest and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse:[1]sin⁡(α)=oppositehypotenuse,cos⁡(α)=adjacenthypotenuse.{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, thetangentis the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. Thereciprocalof sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as:[1]tan⁡(θ)=sin⁡(θ)cos⁡(θ)=oppositeadjacent,cot⁡(θ)=1tan⁡(θ)=adjacentopposite,csc⁡(θ)=1sin⁡(θ)=hypotenuseopposite,sec⁡(θ)=1cos⁡(θ)=hypotenuseadjacent.{\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the valuessin⁡(α){\displaystyle \sin(\alpha )}andcos⁡(α){\displaystyle \cos(\alpha )}appear to depend on the choice of a right triangle containing an angle of measureα{\displaystyle \alpha }. However, this is not the case as all such triangles aresimilar, and so the ratios are the same for each of them.
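As a quick numerical illustration of this similarity argument, the following Python sketch scales a 3-4-5 right triangle (an arbitrary illustrative choice) and checks that the side ratios match math.sin and math.cos at every scale:

```python
import math

# A 3-4-5 right triangle: the ratios equal sin and cos of the angle
# opposite the side of length 3, regardless of the scale factor.
for scale in (1.0, 2.5, 10.0):
    opp, adj = 3.0 * scale, 4.0 * scale
    hyp = math.hypot(opp, adj)         # 5 * scale
    alpha = math.atan2(opp, adj)       # the angle of interest
    print(opp / hyp, math.sin(alpha))  # both 0.6
    print(adj / hyp, math.cos(alpha))  # both 0.8
```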
For example, eachlegof the 45-45-90 right triangle is 1 unit, and its hypotenuse is2{\displaystyle {\sqrt {2}}}; therefore,sin⁡45∘=cos⁡45∘=22{\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}}.[2]The special values of sine at the five standard angles in the range 0 ≤ α ≤ π/2 are sin 0 = 0, sin(π/6) = 1/2, sin(π/4) = √2/2, sin(π/3) = √3/2, and sin(π/2) = 1; cosine takes the same values in the reverse order. These angles may be expressed in degrees or in radians; the sines and cosines of other angles can be obtained by using a calculator.[3][4] Thelaw of sinesis useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known.[5]Given a triangleABC{\displaystyle ABC}with sidesa{\displaystyle a},b{\displaystyle b}, andc{\displaystyle c}, and angles opposite those sidesα{\displaystyle \alpha },β{\displaystyle \beta }, andγ{\displaystyle \gamma }, the law states,sin⁡αa=sin⁡βb=sin⁡γc.{\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.}This is equivalent to the equality of the first three expressions below:asin⁡α=bsin⁡β=csin⁡γ=2R,{\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,}whereR{\displaystyle R}is the triangle'scircumradius. Thelaw of cosinesis useful for computing the length of an unknown side if two other sides and an angle are known.[5]The law states,a2+b2−2abcos⁡(γ)=c2{\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}}In the case whereγ=π/2{\displaystyle \gamma =\pi /2}from whichcos⁡(γ)=0{\displaystyle \cos(\gamma )=0}, the resulting equation becomes thePythagorean theorem.[6] Thecross productanddot productare operations on twovectorsinEuclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. Ifa{\displaystyle \mathbb {a} }andb{\displaystyle \mathbb {b} }are vectors, andθ{\displaystyle \theta }is the angle betweena{\displaystyle \mathbb {a} }andb{\displaystyle \mathbb {b} }, then sine and cosine can be defined as:sin⁡(θ)=|a×b||a||b|,cos⁡(θ)=a⋅b|a||b|.{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using theunit circle, a circle of radius one centered at the origin(0,0){\displaystyle (0,0)}, formulated as the equationx2+y2=1{\displaystyle x^{2}+y^{2}=1}in theCartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle ofθ{\displaystyle \theta }with the positive half of thex{\displaystyle x}-axis. Thex{\displaystyle x}-andy{\displaystyle y}-coordinates of this point of intersection are equal tocos⁡(θ){\displaystyle \cos(\theta )}andsin⁡(θ){\displaystyle \sin(\theta )}, respectively; that is,[7]sin⁡(θ)=y,cos⁡(θ)=x.{\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when0<θ<π2{\textstyle 0<\theta <{\frac {\pi }{2}}}because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply they{\displaystyle y}-coordinate.
A similar argument can be made for the cosine function to show that the cosine of an angle is thex{\displaystyle x}-coordinate when0<θ<π2{\textstyle 0<\theta <{\frac {\pi }{2}}}, even under the new definition using the unit circle.[8][9] Using the unit circle definition has the advantage that a graph of the sine and cosine functions can be drawn by rotating a point counterclockwise along the circumference of a circle, depending on the inputθ>0{\displaystyle \theta >0}. For the sine function, if the input isθ=π2{\textstyle \theta ={\frac {\pi }{2}}}, the point has rotated counterclockwise to lie exactly on they{\displaystyle y}-axis. Ifθ=π{\displaystyle \theta =\pi }, the point is halfway around the circle. Ifθ=2π{\displaystyle \theta =2\pi }, the point has returned to its starting position. It follows that both the sine and cosine functions have therange−1≤y≤1{\displaystyle -1\leq y\leq 1}.[10] Extending the angle to the whole real domain, the point rotates counterclockwise continuously. This can be done similarly for the cosine function as well, although the point starts its rotation from they{\displaystyle y}-coordinate. In other words, both sine and cosine functions areperiodic: adding one full turn of the circle to the angle leaves their values unchanged. Mathematically,[11]sin⁡(θ+2π)=sin⁡(θ),cos⁡(θ+2π)=cos⁡(θ).{\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A functionf{\displaystyle f}is said to beoddiff(−x)=−f(x){\displaystyle f(-x)=-f(x)}, and is said to beeveniff(−x)=f(x){\displaystyle f(-x)=f(x)}. The sine function is odd, whereas the cosine function is even.[12]The two functions are otherwise similar, differing by a shift ofπ2{\textstyle {\frac {\pi }{2}}}. This means,[13]sin⁡(θ)=cos⁡(π2−θ),cos⁡(θ)=sin⁡(π2−θ).{\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only realfixed pointof the sine function; in other words, the only intersection of the sine function and theidentity functionissin⁡(0)=0{\displaystyle \sin(0)=0}. The only real fixed point of the cosine function is called theDottie number. The Dottie number is the unique real root of the equationcos⁡(x)=x{\displaystyle \cos(x)=x}.
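Because cosine is a contraction near this fixed point, the Dottie number can be approximated by naive fixed-point iteration; a minimal Python sketch with an arbitrary iteration count:

```python
import math

# Repeatedly applying cosine converges to the fixed point cos(x) = x.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # approaches the Dottie number
```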
The decimal expansion of the Dottie number is approximately 0.739085.[14] The sine and cosine functions are infinitely differentiable.[15]The derivative of sine is cosine, and the derivative of cosine is negative sine:[16]ddxsin⁡(x)=cos⁡(x),ddxcos⁡(x)=−sin⁡(x).{\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).}Continuing to higher-order derivatives cycles through the same four functions; for example, the fourth derivative of sine is sine itself.[15]These derivatives can be used in thefirst derivative test, which determines themonotonicityof a function from the sign of its first derivative.[17]They can also be used in thesecond derivative test, which determines theconcavityof a function from the sign of its second derivative.[18]Over the quadrant intervals, sine is increasing where cosine is positive and decreasing where cosine is negative, while cosine is increasing where sine is negative and decreasing where sine is positive; likewise, sine is concave down where it is positive and concave up where it is negative, and the same holds for cosine.[19] Both sine and cosine functions can be defined by using differential equations. The pair of(cos⁡θ,sin⁡θ){\displaystyle (\cos \theta ,\sin \theta )}is the solution(x(θ),y(θ)){\displaystyle (x(\theta ),y(\theta ))}to the two-dimensional system ofdifferential equationsy′(θ)=x(θ){\displaystyle y'(\theta )=x(\theta )}andx′(θ)=−y(θ){\displaystyle x'(\theta )=-y(\theta )}with theinitial conditionsy(0)=0{\displaystyle y(0)=0}andx(0)=1{\displaystyle x(0)=1}. One could interpret the unit circle in the above definitions as thephase space trajectoryof this system of differential equations, starting from the given initial conditions. The area under these curves over a bounded interval can be obtained using theintegral. Their antiderivatives are:∫sin⁡(x)dx=−cos⁡(x)+C∫cos⁡(x)dx=sin⁡(x)+C,{\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,}whereC{\displaystyle C}denotes theconstant of integration.[20]These antiderivatives may be used to compute properties of the sine and cosine curves over a given interval. For example, thearc lengthof the sine curve between0{\displaystyle 0}andt{\displaystyle t}is∫0t1+cos2⁡(x)dx=2E⁡(t,12),{\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),}whereE⁡(φ,k){\displaystyle \operatorname {E} (\varphi ,k)}is theincomplete elliptic integral of the second kindwith modulusk{\displaystyle k}.
It cannot be expressed usingelementary functions.[21]In the case of a full period, its arc length isL=42π3Γ(1/4)2+Γ(1/4)22π=2πϖ+2ϖ≈7.6404…{\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots }whereΓ{\displaystyle \Gamma }is thegamma functionandϖ{\displaystyle \varpi }is thelemniscate constant.[22] Theinverse functionof sine is arcsine or inverse sine, denoted as "arcsin", "asin", orsin−1{\displaystyle \sin ^{-1}}.[23]The inverse function of cosine is arccosine, denoted as "arccos", "acos", orcos−1{\displaystyle \cos ^{-1}}.[a]As sine and cosine are notinjective, their inverses are not exact inverse functions, but partial inverse functions. For example,sin⁡(0)=0{\displaystyle \sin(0)=0}, but alsosin⁡(π)=0{\displaystyle \sin(\pi )=0},sin⁡(2π)=0{\displaystyle \sin(2\pi )=0}, and so on. It follows that the arcsine function is multivalued:arcsin⁡(0)=0{\displaystyle \arcsin(0)=0}, but alsoarcsin⁡(0)=π{\displaystyle \arcsin(0)=\pi },arcsin⁡(0)=2π{\displaystyle \arcsin(0)=2\pi }, and so on. When only one value is desired, the function may be restricted to itsprincipal branch. With this restriction, for eachx{\displaystyle x}in the domain, the expressionarcsin⁡(x){\displaystyle \arcsin(x)}will evaluate only to a single value, called itsprincipal value. The standard range of principal values for arcsin is from−π2{\textstyle -{\frac {\pi }{2}}}toπ2{\textstyle {\frac {\pi }{2}}}, and the standard range for arccos is from0{\displaystyle 0}toπ{\displaystyle \pi }.[24] The inverse functions of sine and cosine are defined as:θ=arcsin⁡(oppositehypotenuse)=arccos⁡(adjacenthypotenuse),{\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),}where for some integerk{\displaystyle k},sin⁡(y)=x⟺y=arcsin⁡(x)+2πk,ory=π−arcsin⁡(x)+2πkcos⁡(y)=x⟺y=arccos⁡(x)+2πk,ory=−arccos⁡(x)+2πk{\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}}By definition, both functions satisfy the equations:sin⁡(arcsin⁡(x))=xcos⁡(arccos⁡(x))=x{\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x}andarcsin⁡(sin⁡(θ))=θfor−π2≤θ≤π2arccos⁡(cos⁡(θ))=θfor0≤θ≤π{\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to thePythagorean theorem, the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse yields thePythagorean trigonometric identity: the sum of the squared sine and the squared cosine equals 1:[25][b]sin2⁡(θ)+cos2⁡(θ)=1.{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas:[26]sin⁡(2θ)=2sin⁡(θ)cos⁡(θ),cos⁡(2θ)=cos2⁡(θ)−sin2⁡(θ)=2cos2⁡(θ)−1=1−2sin2⁡(θ){\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin2and cos2are, themselves, shifted and scaled sine waves.
Specifically,[27]sin2⁡(θ)=1−cos⁡(2θ)2cos2⁡(θ)=1+cos⁡(2θ)2{\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}}Plotted together, the sine and sine-squared curves have the same shape but different ranges of values and different periods: sine squared takes only nonnegative values and has twice the number of periods. Both sine and cosine functions can be defined by using aTaylor series, apower seriesinvolving the higher-order derivatives. As mentioned in§ Continuity and differentiation, thederivativeof sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives ofsin⁡(x){\displaystyle \sin(x)}arecos⁡(x){\displaystyle \cos(x)},−sin⁡(x){\displaystyle -\sin(x)},−cos⁡(x){\displaystyle -\cos(x)},sin⁡(x){\displaystyle \sin(x)}, continuing to repeat those four functions. The(4n+k){\displaystyle (4n+k)}-th derivative, evaluated at the point 0:sin(4n+k)⁡(0)={0whenk=01whenk=10whenk=2−1whenk=3{\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}}where the superscript represents repeated differentiation. This implies the following Taylor series expansion atx=0{\displaystyle x=0}. One can then use the theory ofTaylor seriesto show that the following identities hold for allreal numbersx{\displaystyle x}—wherex{\displaystyle x}is the angle in radians.[28]More generally, for allcomplex numbers:[29]sin⁡(x)=x−x33!+x55!−x77!+⋯=∑n=0∞(−1)n(2n+1)!x2n+1{\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}}Taking the derivative of each term gives the Taylor series for cosine:[28][29]cos⁡(x)=1−x22!+x44!−x66!+⋯=∑n=0∞(−1)n(2n)!x2n{\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sines and cosines of multiple angles may appear in alinear combination, resulting in a polynomial known as atrigonometric polynomial. Trigonometric polynomials have ample applications, for example intrigonometric interpolationand in the expansion of a periodic function known as theFourier series. Letan{\displaystyle a_{n}}andbn{\displaystyle b_{n}}be any coefficients; then the trigonometric polynomial of degreeN{\displaystyle N}—denoted asT(x){\displaystyle T(x)}—is defined as:[30][31]T(x)=a0+∑n=1Nancos⁡(nx)+∑n=1Nbnsin⁡(nx).{\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} Thetrigonometric seriescan be defined analogously to the trigonometric polynomial as its infinite version.
LetAn{\displaystyle A_{n}}andBn{\displaystyle B_{n}}be any coefficients; then the trigonometric series can be defined as:[32]12A0+∑n=1∞Ancos⁡(nx)+Bnsin⁡(nx).{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).}In the case of a Fourier series with a given integrable functionf{\displaystyle f}, the coefficients of a trigonometric series are:[33]An=1π∫02πf(x)cos⁡(nx)dx,Bn=1π∫02πf(x)sin⁡(nx)dx.{\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via thecomplex numbers, numbers composed of bothrealandimaginaryparts. For real numberθ{\displaystyle \theta }, the definition of both sine and cosine functions can be extended in acomplex planein terms of anexponential functionas follows:[34]sin⁡(θ)=eiθ−e−iθ2i,cos⁡(θ)=eiθ+e−iθ2,{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms ofEuler's formula:[34]eiθ=cos⁡(θ)+isin⁡(θ),e−iθ=cos⁡(θ)−isin⁡(θ).{\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on thecomplex plane, the functioneix{\displaystyle e^{ix}}for real values ofx{\displaystyle x}traces out theunit circlein the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts ofeiθ{\displaystyle e^{i\theta }}as:[35]sin⁡θ=Im⁡(eiθ),cos⁡θ=Re⁡(eiθ).{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} Whenz=x+iy{\displaystyle z=x+iy}for real valuesx{\displaystyle x}andy{\displaystyle y}, wherei=−1{\displaystyle i={\sqrt {-1}}}, both sine and cosine functions can be expressed in terms of real sines, cosines, andhyperbolic functionsas:sin⁡z=sin⁡xcosh⁡y+icos⁡xsinh⁡y,cos⁡z=cos⁡xcosh⁡y−isin⁡xsinh⁡y.{\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of acomplex numberwith itspolar coordinates(r,θ){\displaystyle (r,\theta )}:z=r(cos⁡(θ)+isin⁡(θ)),{\displaystyle z=r(\cos(\theta )+i\sin(\theta )),}and the real and imaginary parts areRe⁡(z)=rcos⁡(θ),Im⁡(z)=rsin⁡(θ),{\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}}wherer{\displaystyle r}andθ{\displaystyle \theta }represent the magnitude and angle of the complex numberz{\displaystyle z}. For any real numberθ{\displaystyle \theta }, Euler's formula in terms of polar coordinates is stated asz=reiθ{\textstyle z=re^{i\theta }}. Applying the series definition of the sine and cosine to a complex argumentzgives sin(z) = −i sinh(iz) and cos(z) = cosh(iz), where sinh and cosh are thehyperbolic sine and cosine. These areentire functions.
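These exponential expressions can be checked against a library implementation; the following Python sketch compares sin(z) = (e^{iz} − e^{−iz})/(2i) with cmath.sin at an arbitrarily chosen complex argument:

```python
import cmath

z = 1.5 + 0.75j  # arbitrary complex argument
via_exp = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
print(via_exp, cmath.sin(z))                # the two values agree
print(abs(via_exp - cmath.sin(z)) < 1e-12)  # True, up to rounding error
```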
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument: sin(x + iy) = sin(x)cosh(y) + i cos(x)sinh(y) and cos(x + iy) = cos(x)cosh(y) − i sin(x)sinh(y). Using the partial fraction expansion technique incomplex analysis, one can find that the infinite series∑n=−∞∞(−1)nz−n=1z−2z∑n=1∞(−1)nn2−z2{\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}}both converge and are equal toπsin⁡(πz){\textstyle {\frac {\pi }{\sin(\pi z)}}}. Similarly, one can show thatπ2sin2⁡(πz)=∑n=−∞∞1(z−n)2.{\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using product expansion technique, one can derivesin⁡(πz)=πz∏n=1∞(1−z2n2).{\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin(z) is found in thefunctional equationfor theGamma function, Γ(s)Γ(1 − s) = π/sin(πs), which in turn is found in thefunctional equationfor theRiemann zeta-function, ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s). As aholomorphic function, sinzis a 2D solution ofLaplace's equation, Δu(x, y) = 0. The complex sine function is also related to the level curves ofpendulums.[36] The wordsineis derived, indirectly, from theSanskritwordjyā'bow-string' or more specifically its synonymjīvá(both adopted fromAncient Greekχορδή'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (seejyā, koti-jyā and utkrama-jyā;sineandchordare closely related in a circle of unit diameter, seePtolemy’s Theorem). This wastransliteratedinArabicasjība, which is meaningless in that language and written asjb(جب). Since Arabic is written without short vowels,jbwas interpreted as thehomographjayb(جيب), which means 'bosom', 'pocket', or 'fold'.[37][38]When the Arabic texts ofAl-Battaniandal-Khwārizmīwere translated intoMedieval Latinin the 12th century byGerard of Cremona, he used the Latin equivalentsinus(which also means 'bay' or 'fold', and more specifically 'the hanging fold of atogaover the breast').[39][40][41]Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage.[42][43]The English formsinewas introduced inThomas Fale's 1593Horologiographia.[44] The wordcosinederives from an abbreviation of the Latincomplementi sinus'sine of thecomplementary angle' ascosinusinEdmund Gunter'sCanon triangulorum(1620), which also includes a similar definition ofcotangens.[45] While the early study of trigonometry can be traced to antiquity, thetrigonometric functionsas they are in use today were developed in the medieval period. Thechordfunction was discovered byHipparchusofNicaea(180–125 BCE) andPtolemyofRoman Egypt(90–165 CE).[46] The sine and cosine functions are closely related to thejyāandkoṭi-jyāfunctions used inIndian astronomyduring theGupta period(AryabhatiyaandSurya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[39][47] All six trigonometric functions in current use were known inIslamic mathematicsby the 9th century, as was thelaw of sines, used insolving triangles.[48]Al-Khwārizmī(c.
780–850) produced tables of sines, cosines and tangents.[49][50]Muhammad ibn Jābir al-Harrānī al-Battānī(853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.[50] In the early 17th century, the French mathematicianAlbert Girardpublished the first use of the abbreviationssin,cos, andtan; these were further promulgated by Euler (see below). TheOpus palatinum de triangulisofGeorg Joachim Rheticus, a student ofCopernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682,Leibnizproved that sinxis not analgebraic functionofx.[51]Roger Cotescomputed the derivative of sine in hisHarmonia Mensurarum(1722).[52]Leonhard Euler'sIntroductio in analysin infinitorum(1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviationssin.,cos.,tang.,cot.,sec., andcosec.[39] There is no standard algorithm for calculating sine and cosine.IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs.[53] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²). A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, orlinearly interpolatebetween the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. TheCORDICalgorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated tosinandcos. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages,sinandcosare typically either a built-in function or found within the language's standard math library. For example, theC standard librarydefines sine functions withinmath.h:sin(double),sinf(float), andsinl(long double). The parameter of each is afloating pointvalue, specifying the angle in radians. Each function returns the samedata typeas it accepts. Many other trigonometric functions are also defined inmath.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly,Pythondefinesmath.sin(x)andmath.cos(x)within the built-inmathmodule. Complex sine and cosine functions are also available within thecmathmodule, e.g.cmath.sin(z).CPython's math functions call theCmathlibrary, and use adouble-precision floating-point format.
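A few illustrative calls to the standard-library functions just mentioned (angles in radians):

```python
import math
import cmath

print(math.sin(math.pi / 6))  # 0.49999999999999994, i.e. 1/2 up to rounding
print(math.cos(0.0))          # 1.0
print(cmath.sin(1 + 1j))      # complex sine, from the cmath module
```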
Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees orπ{\displaystyle \pi }radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases.[54][55]These functions are calledsinpiandcospiin MATLAB,[54]OpenCL,[56]R,[55]Julia,[57]CUDA,[58]and ARM.[59]For example,sinpi(x)would evaluate tosin⁡(πx),{\displaystyle \sin(\pi x),}wherexis expressed in half-turns, and consequently the final input to the function,πxcan be interpreted in radians bysin. The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing2π{\displaystyle 2\pi },π{\displaystyle \pi }, andπ2{\textstyle {\frac {\pi }{2}}}in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be losslessly and efficiently computed in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing moduloπ2{\textstyle {\frac {\pi }{2}}}involves inaccuracies in representingπ2{\textstyle {\frac {\pi }{2}}}. For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution.[60]If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation toπ2048{\textstyle {\frac {\pi }{2048}}}would be incurred.
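Python's math module has no built-in sinpi, but the scheme is easy to sketch: reduce the argument modulo 2 half-turns exactly, then evaluate sin only on a small final angle. The function below is an illustrative sketch of this idea, not any particular library's implementation:

```python
import math

def sinpi(x: float) -> float:
    """Sketch of sin(pi*x), with x in half-turns and exact range reduction."""
    x = math.remainder(x, 2.0)      # exact in binary floating point
    if x in (1.0, 0.0, -1.0):       # whole multiples of pi map exactly to 0
        return math.copysign(0.0, x)
    if x > 0.5:                     # fold into [-1/2, 1/2] using symmetry
        x = 1.0 - x
    elif x < -0.5:
        x = -1.0 - x
    return math.sin(math.pi * x)    # |pi*x| <= pi/2, where sin is accurate

print(sinpi(1e16))                  # exactly 0.0
print(math.sin(math.pi * 1e16))    # generally not 0.0: pi*1e16 is inexact
```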
https://en.wikipedia.org/wiki/Sine
Inmathematics, theexponential functionis the uniquereal functionwhich mapszerotooneand has aderivativeeverywhere equal to its value. The exponential of a variable⁠x{\displaystyle x}⁠is denoted⁠exp⁡x{\displaystyle \exp x}⁠or⁠ex{\displaystyle e^{x}}⁠, with the two notations used interchangeably. It is calledexponentialbecause its argument can be seen as anexponentto which a constantnumbere≈ 2.718, the base, is raised. There are several other definitions of the exponential function, which are all equivalent although being of very different nature. The exponential function converts sums to products: it maps theadditive identity0to themultiplicative identity1, and the exponential of a sum is equal to the product of separate exponentials,⁠exp⁡(x+y)=exp⁡x⋅exp⁡y{\displaystyle \exp(x+y)=\exp x\cdot \exp y}⁠. Itsinverse function, thenatural logarithm,⁠ln{\displaystyle \ln }⁠or⁠log{\displaystyle \log }⁠, converts products to sums:⁠ln⁡(x⋅y)=ln⁡x+ln⁡y{\displaystyle \ln(x\cdot y)=\ln x+\ln y}⁠. The exponential function is occasionally called thenatural exponential function, matching the namenatural logarithm, for distinguishing it from some other functions that are also commonly calledexponential functions. These functions include the functions of the form⁠f(x)=bx{\displaystyle f(x)=b^{x}}⁠, which isexponentiationwith a fixed base⁠b{\displaystyle b}⁠. More generally, and especially in applications, functions of the general form⁠f(x)=abx{\displaystyle f(x)=ab^{x}}⁠are also called exponential functions. Theygrowordecayexponentially in that the rate that⁠f(x){\displaystyle f(x)}⁠changes when⁠x{\displaystyle x}⁠is increased isproportionalto the current value of⁠f(x){\displaystyle f(x)}⁠. The exponential function can be generalized to acceptcomplex numbersas arguments. This reveals relations between multiplication of complex numbers, rotations in thecomplex plane, andtrigonometry.Euler's formula⁠exp⁡iθ=cos⁡θ+isin⁡θ{\displaystyle \exp i\theta =\cos \theta +i\sin \theta }⁠expresses and summarizes these relations. The exponential function can be even further generalized to accept other types of arguments, such asmatricesand elements ofLie algebras. Thegraphofy=ex{\displaystyle y=e^{x}}is upward-sloping, and increases faster than every power of⁠x{\displaystyle x}⁠.[1]The graph always lies above thex-axis, but becomes arbitrarily close to it for large negativex; thus, thex-axis is a horizontalasymptote. The equationddxex=ex{\displaystyle {\tfrac {d}{dx}}e^{x}=e^{x}}means that theslopeof thetangentto the graph at each point is equal to its height (itsy-coordinate) at that point. There are several equivalent definitions of the exponential function, although of very different nature. One of the simplest definitions is: Theexponential functionis theuniquedifferentiable functionthat equals itsderivative, and takes the value1for the value0of its variable. This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function. Uniqueness:If⁠f(x){\displaystyle f(x)}⁠and⁠g(x){\displaystyle g(x)}⁠are two functions satisfying the above definition, then the derivative of⁠f/g{\displaystyle f/g}⁠is zero everywhere because of thequotient rule. It follows that⁠f/g{\displaystyle f/g}⁠is constant; this constant is1since⁠f(0)=g(0)=1{\displaystyle f(0)=g(0)=1}⁠. Existenceis proved in each of the two following sections. 
The exponential function is theinverse functionof thenatural logarithm.Theinverse function theoremimplies that the natural logarithm has an inverse function that satisfies the above definition. This is a first proof of existence. Therefore, one has ln(exp x) = x for everyreal numberxand exp(ln y) = y for every positive real numbery. The exponential function is the sum of thepower series[2][3]exp⁡(x)=1+x+x22!+x33!+⋯=∑n=0∞xnn!,{\displaystyle {\begin{aligned}\exp(x)&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}},\end{aligned}}} wheren!{\displaystyle n!}is thefactorialofn(the product of thenfirst positive integers). This series isabsolutely convergentfor everyx{\displaystyle x}per theratio test. So, the derivative of the sum can be computed by term-by-term differentiation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and shows, as a byproduct, that the exponential function is defined for every⁠x{\displaystyle x}⁠, and is everywhere the sum of itsMaclaurin series. The exponential satisfies the functional equation:exp⁡(x+y)=exp⁡(x)⋅exp⁡(y).{\displaystyle \exp(x+y)=\exp(x)\cdot \exp(y).}This results from the uniqueness and the fact that the functionf(x)=exp⁡(x+y)/exp⁡(y){\displaystyle f(x)=\exp(x+y)/\exp(y)}satisfies the above definition. It can be proved that a function that satisfies this functional equation has the form⁠x↦exp⁡(cx){\displaystyle x\mapsto \exp(cx)}⁠if it is eithercontinuousormonotonic. It is thusdifferentiable, and equals the exponential function if its derivative at0is1. The exponential function is thelimit, as the integerngoes to infinity,[4][3]exp⁡(x)=limn→+∞(1+xn)n.{\displaystyle \exp(x)=\lim _{n\to +\infty }\left(1+{\frac {x}{n}}\right)^{n}.}By continuity of the logarithm, this can be proved by taking logarithms and provingx=limn→∞ln⁡(1+xn)n=limn→∞nln⁡(1+xn),{\displaystyle x=\lim _{n\to \infty }\ln \left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }n\ln \left(1+{\frac {x}{n}}\right),}for example withTaylor's theorem. Reciprocal:The functional equation implies⁠exe−x=1{\displaystyle e^{x}e^{-x}=1}⁠. Therefore⁠ex≠0{\displaystyle e^{x}\neq 0}⁠for every⁠x{\displaystyle x}⁠and1ex=e−x.{\displaystyle {\frac {1}{e^{x}}}=e^{-x}.} Positiveness:⁠ex>0{\displaystyle e^{x}>0}⁠for every real number⁠x{\displaystyle x}⁠. This results from theintermediate value theorem, since⁠e0=1{\displaystyle e^{0}=1}⁠and, if one hadex<0{\displaystyle e^{x}<0}⁠for some⁠x{\displaystyle x}⁠, there would be ay{\displaystyle y}⁠such that⁠ey=0{\displaystyle e^{y}=0}⁠between⁠0{\displaystyle 0}⁠and⁠x{\displaystyle x}⁠. Since the exponential function equals its derivative, this implies that the exponential function ismonotonically increasing. Extension ofexponentiationto positive real bases:Letbbe a positive real number.
The exponential function and the natural logarithm being inverses of each other, one hasb=exp⁡(ln⁡b).{\displaystyle b=\exp(\ln b).}Ifnis an integer, the functional equation of the logarithm impliesbn=exp⁡(ln⁡bn)=exp⁡(nln⁡b).{\displaystyle b^{n}=\exp(\ln b^{n})=\exp(n\ln b).}Since the right-most expression is defined ifnis any real number, this allows defining⁠bx{\displaystyle b^{x}}⁠for every positive real numberband every real numberx:bx=exp⁡(xln⁡b).{\displaystyle b^{x}=\exp(x\ln b).}In particular, ifbisEuler's numbere=exp⁡(1),{\displaystyle e=\exp(1),}one hasln⁡e=1{\displaystyle \ln e=1}(inverse function) and thusex=exp⁡(x).{\displaystyle e^{x}=\exp(x).}This shows the equivalence of the two notations for the exponential function. A function is commonly calledan exponential function—with an indefinite article—if it has the form⁠x↦bx{\displaystyle x\mapsto b^{x}}⁠, that is, if it is obtained fromexponentiationby fixing the base and letting theexponentvary. More generally and especially in applied contexts, the termexponential functionis commonly used for functions of the form⁠f(x)=abx{\displaystyle f(x)=ab^{x}}⁠. This may be motivated by the fact that, if the values of the function representquantities, a change ofmeasurement unitchanges the value of⁠a{\displaystyle a}⁠, and so, it is nonsensical to impose⁠a=1{\displaystyle a=1}⁠. These most general exponential functions are thedifferentiable functionsfthat satisfy the following equivalent characterizations:fcan be written asf(x) = a·b^xfor some constantsaandbwithb> 0;fcan be written asf(x) = a·e^{kx}for some constantsaandk; the quotientf′(x)/f(x)is constant; or the quotient(f(x+d)/f(x))^{1/d}does not depend onxord. Thebaseof an exponential function is thebaseof theexponentiationthat appears in it when written as⁠x→abx{\displaystyle x\to ab^{x}}⁠, namely⁠b{\displaystyle b}⁠.[6]The base is⁠ek{\displaystyle e^{k}}⁠in the second characterization,exp⁡f′(x)f(x){\textstyle \exp {\frac {f'(x)}{f(x)}}}in the third one, and(f(x+d)f(x))1/d{\textstyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}}in the last one. The last characterization is important inempirical sciences, as it allows a directexperimentaltest of whether a function is an exponential function. Exponentialgrowthorexponential decay—where the variable change isproportionalto the variable value—are thus modeled with exponential functions. Examples are unlimited population growth leading toMalthusian catastrophe,continuously compounded interest, andradioactive decay. If the modeling function has the form⁠x↦aekx,{\displaystyle x\mapsto ae^{kx},}⁠or, equivalently, is a solution of the differential equation⁠y′=ky{\displaystyle y'=ky}⁠, the constant⁠k{\displaystyle k}⁠is called, depending on the context, thedecay constant,disintegration constant,[7]rate constant,[8]ortransformation constant.[9] For proving the equivalence of the above properties, one can proceed as follows.
The first two characterizations are equivalent, since, if⁠b=ek{\displaystyle b=e^{k}}⁠and⁠k=ln⁡b{\displaystyle k=\ln b}⁠, one hasekx=(ek)x=bx.{\displaystyle e^{kx}=(e^{k})^{x}=b^{x}.}The basic properties of the exponential function (derivative and functional equation) immediately imply the third and the last conditions. Suppose that the third condition is verified, and let⁠k{\displaystyle k}⁠be the constant value off′(x)/f(x).{\displaystyle f'(x)/f(x).}Since∂ekx∂x=kekx,{\textstyle {\frac {\partial e^{kx}}{\partial x}}=ke^{kx},}thequotient rulefor derivation implies that∂∂xf(x)ekx=0,{\displaystyle {\frac {\partial }{\partial x}}\,{\frac {f(x)}{e^{kx}}}=0,}and thus that there is a constant⁠a{\displaystyle a}⁠such thatf(x)=aekx.{\displaystyle f(x)=ae^{kx}.} If the last condition is verified, letφ(d)=f(x+d)/f(x),{\textstyle \varphi (d)=f(x+d)/f(x),}which is independent of⁠x{\displaystyle x}⁠. Using⁠φ(0)=1{\displaystyle \varphi (0)=1}⁠, one getsf(x+d)−f(x)d=f(x)φ(d)−φ(0)d.{\displaystyle {\frac {f(x+d)-f(x)}{d}}=f(x)\,{\frac {\varphi (d)-\varphi (0)}{d}}.}Taking the limit when⁠d{\displaystyle d}⁠tends to zero, one gets that the third condition is verified with⁠k=φ′(0){\displaystyle k=\varphi '(0)}⁠. It follows therefore that⁠f(x)=aekx{\displaystyle f(x)=ae^{kx}}⁠for some⁠a,{\displaystyle a,}⁠and⁠φ(d)=ekd.{\displaystyle \varphi (d)=e^{kd}.}⁠As a byproduct, one gets that(f(x+d)f(x))1/d=ek{\displaystyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}=e^{k}}is independent of both⁠x{\displaystyle x}⁠and⁠d{\displaystyle d}⁠. The earliest occurrence of the exponential function was inJacob Bernoulli's study ofcompound interestin 1683.[10]It is this study that led Bernoulli to consider the numberlimn→∞(1+1n)n{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}}now known asEuler's numberand denoted⁠e{\displaystyle e}⁠. The exponential function is involved as follows in the computation ofcontinuously compounded interest. If a principal amount of 1 earns interest at an annual rate ofxcompounded monthly, then the interest earned each month is⁠x/12⁠times the current value, so each month the total value is multiplied by(1 +⁠x/12⁠), and the value at the end of the year is(1 +⁠x/12⁠)12. If instead interest is compounded daily, this becomes(1 +⁠x/365⁠)365. Letting the number of time intervals per year grow without bound leads to thelimitdefinition of the exponential function,exp⁡x=limn→∞(1+xn)n{\displaystyle \exp x=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}}first given byLeonhard Euler.[4] Exponential functions occur very often in solutions ofdifferential equations. The exponential functions can be defined as solutions ofdifferential equations. Indeed, the exponential function is a solution of the simplest possible differential equation, namely⁠y′=y{\displaystyle y'=y}⁠. Every other exponential function, of the form⁠y=abx{\displaystyle y=ab^{x}}⁠, is a solution of the differential equation⁠y′=ky{\displaystyle y'=ky}⁠, and every solution of this differential equation has this form. The solutions of an equation of the formy′+ky=f(x){\displaystyle y'+ky=f(x)}involve exponential functions in a more sophisticated way, since they have the formy=ce−kx+e−kx∫f(x)ekxdx,{\displaystyle y=ce^{-kx}+e^{-kx}\int f(x)e^{kx}dx,}where⁠c{\displaystyle c}⁠is an arbitrary constant and the integral denotes anyantiderivativeof its argument.
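As a concrete instance of this solution formula, take k = 2 and f(x) = x, so that ∫f(x)e^{kx}dx = e^{2x}(x/2 − 1/4) and y = ce^{−2x} + x/2 − 1/4. The Python sketch below spot-checks y′ + 2y = x with a central difference (the constants are arbitrary illustrative choices):

```python
import math

c, k, h = 3.0, 2.0, 1e-6
y = lambda x: x / 2 - 0.25 + c * math.exp(-k * x)

for x in (0.0, 1.0, 2.5):
    dy = (y(x + h) - y(x - h)) / (2 * h)  # numerical derivative y'(x)
    print(dy + k * y(x), x)               # y' + 2y equals f(x) = x, up to rounding
```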
More generally, the solutions of every linear differential equation with constant coefficients can be expressed in terms of exponential functions and, when they are not homogeneous, antiderivatives. This holds true also for systems of linear differential equations with constant coefficients. The exponential function can be naturally extended to acomplex function, which is a function with thecomplex numbersasdomainandcodomain, such that itsrestrictionto the reals is the above-defined exponential function, calledreal exponential functionin what follows. This function is also calledthe exponential function, and also denoted⁠ez{\displaystyle e^{z}}⁠or⁠exp⁡(z){\displaystyle \exp(z)}⁠. For distinguishing the complex case from the real one, the extended function is also calledcomplex exponential functionor simplycomplex exponential. Most of the definitions of the exponential function can be used verbatim for defining the complex exponential function, and the proof of their equivalence is the same as in the real case. The complex exponential function can be defined in several equivalent ways that are the same as in the real case. Thecomplex exponentialis the unique complex function that equals itscomplex derivativeand takes the value⁠1{\displaystyle 1}⁠for the argument⁠0{\displaystyle 0}⁠:dezdz=ezande0=1.{\displaystyle {\frac {de^{z}}{dz}}=e^{z}\quad {\text{and}}\quad e^{0}=1.} Thecomplex exponential functionis the sum of theseriesez=∑k=0∞zkk!.{\displaystyle e^{z}=\sum _{k=0}^{\infty }{\frac {z^{k}}{k!}}.}This series isabsolutely convergentfor every complex number⁠z{\displaystyle z}⁠. So, the complex exponential function is anentire function. The complex exponential function is thelimitez=limn→∞(1+zn)n{\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}} The functional equationew+z=ewez{\displaystyle e^{w+z}=e^{w}e^{z}}holds for all complex numbers⁠w{\displaystyle w}⁠and⁠z{\displaystyle z}⁠. The complex exponential is the uniquecontinuous functionthat satisfies this functional equation and has the value⁠1{\displaystyle 1}⁠for⁠z=0{\displaystyle z=0}⁠. Thecomplex logarithmis aright-inverse functionof the complex exponential:elog⁡z=z.{\displaystyle e^{\log z}=z.}However, since the complex logarithm is amultivalued function, one haslog⁡ez={z+2ikπ∣k∈Z},{\displaystyle \log e^{z}=\{z+2ik\pi \mid k\in \mathbb {Z} \},}and it is difficult to define the complex exponential from the complex logarithm. Conversely, it is the complex logarithm that is often defined from the complex exponential. The complex exponential has the following properties:1ez=e−z{\displaystyle {\frac {1}{e^{z}}}=e^{-z}}andez≠0for everyz∈C.{\displaystyle e^{z}\neq 0\quad {\text{for every }}z\in \mathbb {C} .}It is aperiodic functionof period⁠2iπ{\displaystyle 2i\pi }⁠; that isez+2ikπ=ezfor everyk∈Z.{\displaystyle e^{z+2ik\pi }=e^{z}\quad {\text{for every }}k\in \mathbb {Z} .}This results fromEuler's identity⁠eiπ=−1{\displaystyle e^{i\pi }=-1}⁠and the functional equation. Thecomplex conjugateof the complex exponential isez¯=ez¯.{\displaystyle {\overline {e^{z}}}=e^{\overline {z}}.}Its modulus is|ez|=eℜ(z),{\displaystyle |e^{z}|=e^{\Re (z)},}where⁠ℜ(z){\displaystyle \Re (z)}⁠denotes the real part of⁠z{\displaystyle z}⁠.
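Both the 2iπ-periodicity and the modulus identity can be observed numerically; a brief Python check at an arbitrarily chosen point:

```python
import cmath
import math

z = 0.5 + 2.0j  # arbitrary complex argument
print(cmath.exp(z))                          # e^z
print(cmath.exp(z + 2j * math.pi))           # equal up to rounding: period 2*pi*i
print(abs(cmath.exp(z)), math.exp(z.real))   # modulus |e^z| = e^{Re(z)}
```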
Complex exponential andtrigonometric functionsare strongly related byEuler's formula:eit=cos⁡(t)+isin⁡(t).{\displaystyle e^{it}=\cos(t)+i\sin(t).} This formula provides the decomposition of complex exponential intoreal and imaginary parts:ex+iy=excos⁡y+iexsin⁡y.{\displaystyle e^{x+iy}=e^{x}\,\cos y+ie^{x}\,\sin y.} The trigonometric functions can be expressed in terms of complex exponentials:cos⁡x=eix+e−ix2sin⁡x=eix−e−ix2itan⁡x=i1−e2ix1+e2ix{\displaystyle {\begin{aligned}\cos x&={\frac {e^{ix}+e^{-ix}}{2}}\\\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\\tan x&=i\,{\frac {1-e^{2ix}}{1+e^{2ix}}}\end{aligned}}} In these formulas,⁠x,y,t{\displaystyle x,y,t}⁠are commonly interpreted as real variables, but the formulas remain valid if the variables are interpreted as complex variables. These formulas may be used to define trigonometric functions of a complex variable.[11] Considering the complex exponential function as a function involving four real variables:v+iw=exp⁡(x+iy){\displaystyle v+iw=\exp(x+iy)}the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of thexy{\displaystyle xy}domain, the following are depictions of the graph as variously projected into two or three dimensions. The second image shows how the domain complex plane is mapped into the range complex plane: The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image. The third image shows the graph extended along the realx{\displaystyle x}axis. It shows the graph is a surface of revolution about thex{\displaystyle x}axis of the graph of the real exponential function, producing a horn or funnel shape. The fourth image shows the graph extended along the imaginaryy{\displaystyle y}axis. It shows that the graph's surface for positive and negativey{\displaystyle y}values doesn't really meet along the negative realv{\displaystyle v}axis, but instead forms a spiral surface about they{\displaystyle y}axis. Because itsy{\displaystyle y}values have been extended to±2π, this image also better depicts the 2π periodicity in the imaginaryy{\displaystyle y}value. The power series definition of the exponential function makes sense for squarematrices(for which the function is called thematrix exponential) and more generally in any unitalBanach algebraB. In this setting,e0= 1, andexis invertible with inversee−xfor anyxinB. Ifxy=yx, thenex+y=exey, but this identity can fail for noncommutingxandy. Some alternative definitions lead to the same function. For instance,excan be defined aslimn→∞(1+xn)n.{\displaystyle \lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} Orexcan be defined asfx(1), wherefx:R→Bis the solution to the differential equation⁠dfx/dt⁠(t) =xfx(t), with initial conditionfx(0) = 1; it follows thatfx(t) =etxfor everytinR. Given aLie groupGand its associatedLie algebrag{\displaystyle {\mathfrak {g}}}, theexponential mapis a mapg{\displaystyle {\mathfrak {g}}}↦Gsatisfying similar properties. In fact, sinceRis the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie groupGL(n,R)of invertiblen×nmatrices has as Lie algebraM(n,R), the space of alln×nmatrices, the exponential function for square matrices is a special case of the Lie algebra exponential map. 
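The failure of this identity for noncommuting arguments can be seen with a small numerical experiment on 2×2 matrices. The truncated power series below is only an illustrative sketch (a production implementation would use a scaling-and-squaring routine such as scipy.linalg.expm):

```python
import numpy as np

def expm(M, terms=30):
    """Truncated power-series sketch of the matrix exponential."""
    result, term = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        term = term @ M / n          # next series term M**n / n!
        result = result + term
    return result

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
# X @ Y != Y @ X, so exp(X + Y) differs from exp(X) @ exp(Y):
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))  # False
```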
The identityexp⁡(x+y)=exp⁡(x)exp⁡(y){\displaystyle \exp(x+y)=\exp(x)\exp(y)}can fail for Lie algebra elementsxandythat do not commute; theBaker–Campbell–Hausdorff formulasupplies the necessary correction terms. The functionezis atranscendental function, which means that it is not arootof a polynomial over theringof therational fractionsC(z).{\displaystyle \mathbb {C} (z).} Ifa1, ...,anare distinct complex numbers, thenea1z, ...,eanzare linearly independent overC(z){\displaystyle \mathbb {C} (z)}, and henceezistranscendentaloverC(z){\displaystyle \mathbb {C} (z)}. The Taylor series definition above is generally efficient for computing (an approximation of)ex{\displaystyle e^{x}}. However, when computing near the argumentx=0{\displaystyle x=0}, the result will be close to 1, and computing the value of the differenceex−1{\displaystyle e^{x}-1}withfloating-point arithmeticmay lead to the loss of (possibly all)significant figures, producing a large relative error, possibly even a meaningless result. Following a proposal byWilliam Kahan, it may thus be useful to have a dedicated routine, often calledexpm1, which computesex− 1directly, bypassing computation ofex. For example, one may use the Taylor series:ex−1=x+x22+x36+⋯+xnn!+⋯.{\displaystyle e^{x}-1=x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots +{\frac {x^{n}}{n!}}+\cdots .} This was first implemented in 1979 in theHewlett-PackardHP-41Ccalculator, and provided by several calculators,[12][13]operating systems(for exampleBerkeley UNIX 4.3BSD[14]),computer algebra systems, and programming languages (for exampleC99).[15] In addition to basee, theIEEE 754-2008standard defines similar exponential functions near 0 for base 2 and 10:2x−1{\displaystyle 2^{x}-1}and10x−1{\displaystyle 10^{x}-1}. A similar approach has been used for the logarithm; seelog1p. An identity in terms of thehyperbolic tangent,expm1⁡(x)=ex−1=2tanh⁡(x/2)1−tanh⁡(x/2),{\displaystyle \operatorname {expm1} (x)=e^{x}-1={\frac {2\tanh(x/2)}{1-\tanh(x/2)}},}gives a high-precision value for small values ofxon systems that do not implementexpm1(x). The exponential function can also be computed withcontinued fractions. A continued fraction forexcan be obtained viaan identity of Euler:ex=1+x1−xx+2−2xx+3−3xx+4−⋱{\displaystyle e^{x}=1+{\cfrac {x}{1-{\cfrac {x}{x+2-{\cfrac {2x}{x+3-{\cfrac {3x}{x+4-\ddots }}}}}}}}} The followinggeneralized continued fractionforezconverges more quickly:[16]ez=1+2z2−z+z26+z210+z214+⋱{\displaystyle e^{z}=1+{\cfrac {2z}{2-z+{\cfrac {z^{2}}{6+{\cfrac {z^{2}}{10+{\cfrac {z^{2}}{14+\ddots }}}}}}}}} or, by applying the substitutionz=⁠x/y⁠:exy=1+2x2y−x+x26y+x210y+x214y+⋱{\displaystyle e^{\frac {x}{y}}=1+{\cfrac {2x}{2y-x+{\cfrac {x^{2}}{6y+{\cfrac {x^{2}}{10y+{\cfrac {x^{2}}{14y+\ddots }}}}}}}}}with a special case forz= 2:e2=1+40+226+2210+2214+⋱=7+25+17+19+111+⋱{\displaystyle e^{2}=1+{\cfrac {4}{0+{\cfrac {2^{2}}{6+{\cfrac {2^{2}}{10+{\cfrac {2^{2}}{14+\ddots }}}}}}}}=7+{\cfrac {2}{5+{\cfrac {1}{7+{\cfrac {1}{9+{\cfrac {1}{11+\ddots }}}}}}}}} This formula also converges, though more slowly, forz> 2. For example:e3=1+6−1+326+3210+3214+⋱=13+547+914+918+922+⋱{\displaystyle e^{3}=1+{\cfrac {6}{-1+{\cfrac {3^{2}}{6+{\cfrac {3^{2}}{10+{\cfrac {3^{2}}{14+\ddots }}}}}}}}=13+{\cfrac {54}{7+{\cfrac {9}{14+{\cfrac {9}{18+{\cfrac {9}{22+\ddots }}}}}}}}}
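Returning to the loss-of-significance issue that motivates expm1: its effect is easy to demonstrate in Python, whose math.expm1 provides such a dedicated routine:

```python
import math

for x in (1e-5, 1e-10, 1e-15):
    naive = math.exp(x) - 1.0      # cancellation destroys significant digits
    print(x, naive, math.expm1(x)) # expm1 keeps full precision for small x
```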
https://en.wikipedia.org/wiki/Exponential_function
Ridge regression(also known asTikhonov regularization, named forAndrey Tikhonov) is a method of estimating thecoefficientsof multiple-regression modelsin scenarios where the independent variables are highly correlated.[1]It has been used in many fields including econometrics, chemistry, and engineering.[2]It is a method ofregularizationofill-posed problems.[a]It is particularly useful to mitigate the problem ofmulticollinearityinlinear regression, which commonly occurs in models with large numbers of parameters.[3]In general, the method provides improvedefficiencyin parameter estimation problems in exchange for a tolerable amount ofbias(seebias–variance tradeoff).[4] The theory was first introduced by Hoerl and Kennard in 1970 in theirTechnometricspapers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems".[5][6][1] Ridge regression was developed as a possible solution to the imprecision of least square estimators when linear regression models have some multicollinear (highly correlated) independent variables—by creating a ridge regression estimator (RR). This provides a more precise ridge parameters estimate, as its variance and mean square estimator are often smaller than the least square estimators previously derived.[7][2] In the simplest case, the problem of anear-singularmoment matrixXTX{\displaystyle \mathbf {X} ^{\mathsf {T}}\mathbf {X} }is alleviated by adding positive elements to thediagonals, thereby decreasing itscondition number. Analogous to theordinary least squaresestimator, the simple ridge estimator is then given byβ^R=(XTX+λI)−1XTy{\displaystyle {\hat {\boldsymbol {\beta }}}_{R}=\left(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +\lambda \mathbf {I} \right)^{-1}\mathbf {X} ^{\mathsf {T}}\mathbf {y} }wherey{\displaystyle \mathbf {y} }is theregressand,X{\displaystyle \mathbf {X} }is thedesign matrix,I{\displaystyle \mathbf {I} }is theidentity matrix, and the ridge parameterλ≥0{\displaystyle \lambda \geq 0}serves as the constant shifting the diagonals of the moment matrix.[8]It can be shown that this estimator is the solution to theleast squaresproblem subject to theconstraintβTβ=c{\displaystyle {\boldsymbol {\beta }}^{\mathsf {T}}{\boldsymbol {\beta }}=c}, which can be expressed as a Lagrangian minimization:β^R=argminβ(y−Xβ)T(y−Xβ)+λ(βTβ−c){\displaystyle {\hat {\boldsymbol {\beta }}}_{R}={\text{argmin}}_{\boldsymbol {\beta }}\,\left(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right)^{\mathsf {T}}\left(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right)+\lambda \left({\boldsymbol {\beta }}^{\mathsf {T}}{\boldsymbol {\beta }}-c\right)}which shows thatλ{\displaystyle \lambda }is nothing but theLagrange multiplierof the constraint.[9]In fact, there is a one-to-one relationship betweenc{\displaystyle c}andβ{\displaystyle \beta }and since, in practice, we do not knowc{\displaystyle c}, we defineλ{\displaystyle \lambda }heuristically or find it via additional data-fitting strategies, seeDetermination of the Tikhonov factor. Note that, whenλ=0{\displaystyle \lambda =0}, in which case theconstraint is non-binding, the ridge estimator reduces toordinary least squares. A more general approach to Tikhonov regularization is discussed below. Tikhonov regularization was invented independently in many different contexts. It became widely known through its application to integral equations in the works ofAndrey Tikhonov[10][11][12][13][14]and David L. 
Phillips.[15]Some authors use the termTikhonov–Phillips regularization. The finite-dimensional case was expounded byArthur E. Hoerl, who took a statistical approach,[16]and by Manus Foster, who interpreted this method as aWiener–Kolmogorov (Kriging)filter.[17]Following Hoerl, it is known in the statistical literature as ridge regression,[18]named after ridge analysis ("ridge" refers to the path from the constrained maximum).[19] Suppose that for a knownreal matrixA{\displaystyle A}and vectorb{\displaystyle \mathbf {b} }, we wish to find a vectorx{\displaystyle \mathbf {x} }such thatAx=b,{\displaystyle A\mathbf {x} =\mathbf {b} ,}wherex{\displaystyle \mathbf {x} }andb{\displaystyle \mathbf {b} }may be of different sizes andA{\displaystyle A}may be non-square. The standard approach isordinary least squareslinear regression.[clarification needed]However, if nox{\displaystyle \mathbf {x} }satisfies the equation or more than onex{\displaystyle \mathbf {x} }does—that is, the solution is not unique—the problem is said to beill posed. In such cases, ordinary least squares estimation leads to anoverdetermined, or more often anunderdeterminedsystem of equations. Most real-world phenomena have the effect oflow-pass filters[clarification needed]in the forward direction whereA{\displaystyle A}mapsx{\displaystyle \mathbf {x} }tob{\displaystyle \mathbf {b} }. Therefore, in solving the inverse-problem, the inverse mapping operates as ahigh-pass filterthat has the undesirable tendency of amplifying noise (eigenvalues/ singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version ofx{\displaystyle \mathbf {x} }that is in the null-space ofA{\displaystyle A}, rather than allowing for a model to be used as a prior forx{\displaystyle \mathbf {x} }. Ordinary least squares seeks to minimize the sum of squaredresiduals, which can be compactly written as‖Ax−b‖22,{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{2}^{2},}where‖⋅‖2{\displaystyle \|\cdot \|_{2}}is theEuclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:‖Ax−b‖22+‖Γx‖22=‖(AΓ)x−(b0)‖22{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{2}^{2}+\left\|\Gamma \mathbf {x} \right\|_{2}^{2}=\left\|{\begin{pmatrix}A\\\Gamma \end{pmatrix}}\mathbf {x} -{\begin{pmatrix}\mathbf {b} \\{\boldsymbol {0}}\end{pmatrix}}\right\|_{2}^{2}}for some suitably chosenTikhonov matrixΓ{\displaystyle \Gamma }. In many cases, this matrix is chosen as a scalar multiple of theidentity matrix(Γ=αI{\displaystyle \Gamma =\alpha I}), giving preference to solutions with smallernorms; this is known asL2regularization.[20]In other cases, high-pass operators (e.g., adifference operatoror a weightedFourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. 
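Since the regularized objective is an ordinary least-squares objective for the stacked system shown above, any least-squares routine solves it. A minimal numpy sketch, assuming Γ = αI with an arbitrary α and random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)
alpha = 0.5
Gamma = alpha * np.eye(5)              # Gamma = alpha * I: plain L2 regularization

A_aug = np.vstack([A, Gamma])          # stack A on top of Gamma
b_aug = np.concatenate([b, np.zeros(5)])
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

# the normal equations of the stacked system are (A^T A + Gamma^T Gamma) x = A^T b
direct = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)
print(np.allclose(x_hat, direct))      # True
```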
An explicit solution, denoted byx^{\displaystyle {\hat {\mathbf {x} }}}, is given byx^=(ATA+ΓTΓ)−1ATb=((AΓ)T(AΓ))−1(AΓ)T(b0).{\displaystyle {\hat {\mathbf {x} }}=\left(A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma \right)^{-1}A^{\mathsf {T}}\mathbf {b} =\left({\begin{pmatrix}A\\\Gamma \end{pmatrix}}^{\mathsf {T}}{\begin{pmatrix}A\\\Gamma \end{pmatrix}}\right)^{-1}{\begin{pmatrix}A\\\Gamma \end{pmatrix}}^{\mathsf {T}}{\begin{pmatrix}\mathbf {b} \\{\boldsymbol {0}}\end{pmatrix}}.}The effect of regularization may be varied by the scale of matrixΓ{\displaystyle \Gamma }. ForΓ=0{\displaystyle \Gamma =0}this reduces to the unregularized least-squares solution, provided that (ATA)−1exists. Note that in case of acomplex matrixA{\displaystyle A}, as usual the transposeAT{\displaystyle A^{\mathsf {T}}}has to be replaced by theHermitian transposeAH{\displaystyle A^{\mathsf {H}}}. L2regularization is used in many contexts aside from linear regression, such asclassificationwithlogistic regressionorsupport vector machines,[21]and matrix factorization.[22] Since Tikhonov Regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularised optimisation has taken place. E.g., if the above problem withΓ=0{\displaystyle \Gamma =0}yields the solutionx^0{\displaystyle {\hat {\mathbf {x} }}_{0}}, the solution in the presence ofΓ≠0{\displaystyle \Gamma \neq 0}can be expressed as:x^=Bx^0,{\displaystyle {\hat {\mathbf {x} }}=B{\hat {\mathbf {x} }}_{0},}with the "regularisation matrix"B=(ATA+ΓTΓ)−1ATA{\displaystyle B=\left(A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma \right)^{-1}A^{\mathsf {T}}A}. If the parameter fit comes with a covariance matrix of the estimated parameter uncertaintiesV0{\displaystyle V_{0}}, then the regularisation matrix will beB=(V0−1+ΓTΓ)−1V0−1,{\displaystyle B=(V_{0}^{-1}+\Gamma ^{\mathsf {T}}\Gamma )^{-1}V_{0}^{-1},}and the regularised result will have a new covarianceV=BV0BT.{\displaystyle V=BV_{0}B^{\mathsf {T}}.} In the context of arbitrary likelihood fits, this is valid, as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.[23] For general multivariate normal distributions forx{\displaystyle \mathbf {x} }and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek anx{\displaystyle \mathbf {x} }to minimize‖Ax−b‖P2+‖x−x0‖Q2,{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{P}^{2}+\left\|\mathbf {x} -\mathbf {x} _{0}\right\|_{Q}^{2},}where we have used‖x‖Q2{\displaystyle \left\|\mathbf {x} \right\|_{Q}^{2}}to stand for the weighted norm squaredxTQx{\displaystyle \mathbf {x} ^{\mathsf {T}}Q\mathbf {x} }(compare with theMahalanobis distance). In the Bayesian interpretationP{\displaystyle P}is the inversecovariance matrixofb{\displaystyle \mathbf {b} },x0{\displaystyle \mathbf {x} _{0}}is theexpected valueofx{\displaystyle \mathbf {x} }, andQ{\displaystyle Q}is the inverse covariance matrix ofx{\displaystyle \mathbf {x} }. The Tikhonov matrix is then given as a factorization of the matrixQ=ΓTΓ{\displaystyle Q=\Gamma ^{\mathsf {T}}\Gamma }(e.g. theCholesky factorization) and is considered awhitening filter. 
This generalized problem has an optimal solutionx∗{\displaystyle \mathbf {x} ^{*}}which can be written explicitly using the formulax∗=(ATPA+Q)−1(ATPb+Qx0),{\displaystyle \mathbf {x} ^{*}=\left(A^{\mathsf {T}}PA+Q\right)^{-1}\left(A^{\mathsf {T}}P\mathbf {b} +Q\mathbf {x} _{0}\right),}or equivalently, whenQisnota null matrix:x∗=x0+(ATPA+Q)−1(ATP(b−Ax0)).{\displaystyle \mathbf {x} ^{*}=\mathbf {x} _{0}+\left(A^{\mathsf {T}}PA+Q\right)^{-1}\left(A^{\mathsf {T}}P\left(\mathbf {b} -A\mathbf {x} _{0}\right)\right).} In some situations, one can avoid using the transposeAT{\displaystyle A^{\mathsf {T}}}, as proposed byMikhail Lavrentyev.[24]For example, ifA{\displaystyle A}is symmetric positive definite, i.e.A=AT>0{\displaystyle A=A^{\mathsf {T}}>0}, so is its inverseA−1{\displaystyle A^{-1}}, which can thus be used to set up the weighted norm squared‖x‖P2=xTA−1x{\displaystyle \left\|\mathbf {x} \right\|_{P}^{2}=\mathbf {x} ^{\mathsf {T}}A^{-1}\mathbf {x} }in the generalized Tikhonov regularization, leading to minimizing‖Ax−b‖A−12+‖x−x0‖Q2{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{A^{-1}}^{2}+\left\|\mathbf {x} -\mathbf {x} _{0}\right\|_{Q}^{2}}or, equivalently up to a constant term,xT(A+Q)x−2xT(b+Qx0).{\displaystyle \mathbf {x} ^{\mathsf {T}}\left(A+Q\right)\mathbf {x} -2\mathbf {x} ^{\mathsf {T}}\left(\mathbf {b} +Q\mathbf {x} _{0}\right).} This minimization problem has an optimal solutionx∗{\displaystyle \mathbf {x} ^{*}}which can be written explicitly using the formulax∗=(A+Q)−1(b+Qx0),{\displaystyle \mathbf {x} ^{*}=\left(A+Q\right)^{-1}\left(\mathbf {b} +Q\mathbf {x} _{0}\right),}which is nothing but the solution of the generalized Tikhonov problem whereA=AT=P−1.{\displaystyle A=A^{\mathsf {T}}=P^{-1}.} The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrixA+Q{\displaystyle A+Q}can be better conditioned, i.e., have a smallercondition number, compared to the Tikhonov matrixATA+ΓTΓ.{\displaystyle A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma .} Typically discrete linear ill-conditioned problems result from discretization ofintegral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpretA{\displaystyle A}as acompact operatoronHilbert spaces, andx{\displaystyle x}andb{\displaystyle b}as elements in the domain and range ofA{\displaystyle A}. The operatorA∗A+ΓTΓ{\displaystyle A^{*}A+\Gamma ^{\mathsf {T}}\Gamma }is then aself-adjointbounded invertible operator. WithΓ=αI{\displaystyle \Gamma =\alpha I}, this least-squares solution can be analyzed in a special way using thesingular-value decomposition. Given the singular value decompositionA=UΣVT{\displaystyle A=U\Sigma V^{\mathsf {T}}}with singular valuesσi{\displaystyle \sigma _{i}}, the Tikhonov regularized solution can be expressed asx^=VDUTb,{\displaystyle {\hat {x}}=VDU^{\mathsf {T}}b,}whereD{\displaystyle D}has diagonal valuesDii=σiσi2+α2{\displaystyle D_{ii}={\frac {\sigma _{i}}{\sigma _{i}^{2}+\alpha ^{2}}}}and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on thecondition numberof the regularized problem. 
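The filter-factor representation is easy to verify against the normal-equations form; a numpy sketch with Γ = αI (so ΓᵀΓ = α²I) and arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(12, 6))
b = rng.normal(size=12)
alpha = 0.3

U, s, Vt = np.linalg.svd(A, full_matrices=False)
d = s / (s**2 + alpha**2)              # the diagonal D_ii = sigma_i / (sigma_i^2 + alpha^2)
x_svd = Vt.T @ (d * (U.T @ b))         # x = V D U^T b

x_direct = np.linalg.solve(A.T @ A + alpha**2 * np.eye(6), A.T @ b)
print(np.allclose(x_svd, x_direct))    # True
```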
For the generalized case, a similar representation can be derived using ageneralized singular-value decomposition.[25] Finally, it is related to theWiener filter:x^=∑i=1qfiuiTbσivi,{\displaystyle {\hat {x}}=\sum _{i=1}^{q}f_{i}{\frac {u_{i}^{\mathsf {T}}b}{\sigma _{i}}}v_{i},}where the Wiener weights arefi=σi2σi2+α2{\displaystyle f_{i}={\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}}andq{\displaystyle q}is therankofA{\displaystyle A}. The optimal regularization parameterα{\displaystyle \alpha }is usually unknown and often in practical problems is determined by anad hocmethod. A possible approach relies on the Bayesian interpretation described below. Other approaches include thediscrepancy principle,cross-validation,L-curve method,[26]restricted maximum likelihoodandunbiased predictive risk estimator.Grace Wahbaproved that the optimal parameter, in the sense ofleave-one-out cross-validationminimizes[27][28]G=RSSτ2=‖Xβ^−y‖2[Tr⁡(I−X(XTX+α2I)−1XT)]2,{\displaystyle G={\frac {\operatorname {RSS} }{\tau ^{2}}}={\frac {\left\|X{\hat {\beta }}-y\right\|^{2}}{\left[\operatorname {Tr} \left(I-X\left(X^{\mathsf {T}}X+\alpha ^{2}I\right)^{-1}X^{\mathsf {T}}\right)\right]^{2}}},}whereRSS{\displaystyle \operatorname {RSS} }is theresidual sum of squares, andτ{\displaystyle \tau }is theeffective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:RSS=‖y−∑i=1q(ui′b)ui‖2+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\left\|y-\sum _{i=1}^{q}(u_{i}'b)u_{i}\right\|^{2}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}RSS=RSS0+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\operatorname {RSS} _{0}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}andτ=m−∑i=1qσi2σi2+α2=m−q+∑i=1qα2σi2+α2.{\displaystyle \tau =m-\sum _{i=1}^{q}{\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}=m-q+\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}.} The probabilistic formulation of aninverse problemintroduces (when all uncertainties are Gaussian) a covariance matrixCM{\displaystyle C_{M}}representing thea prioriuncertainties on the model parameters, and a covariance matrixCD{\displaystyle C_{D}}representing the uncertainties on the observed parameters.[29]In the special case when these two matrices are diagonal and isotropic,CM=σM2I{\displaystyle C_{M}=\sigma _{M}^{2}I}andCD=σD2I{\displaystyle C_{D}=\sigma _{D}^{2}I}, and, in this case, the equations of inverse theory reduce to the equations above, withα=σD/σM{\displaystyle \alpha ={\sigma _{D}}/{\sigma _{M}}}.[30][31] Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrixΓ{\displaystyle \Gamma }seems rather arbitrary, the process can be justified from aBayesian point of view.[32]Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, theprior probabilitydistribution ofx{\displaystyle x}is sometimes taken to be amultivariate normal distribution.[33]For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the samestandard deviationσx{\displaystyle \sigma _{x}}. The data are also subject to errors, and the errors inb{\displaystyle b}are also assumed to beindependentwith zero mean and standard deviationσb{\displaystyle \sigma _{b}}. 
Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of x, according to Bayes' theorem.[34]

If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.[35]
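To close the loop on the bias–variance tradeoff mentioned at the outset, a small numpy sketch (synthetic, nearly collinear regressors; λ = 1 is arbitrary) of how the simple ridge estimator trades a little bias for a large variance reduction relative to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)     # true coefficients are (1, 1)

lam = 1.0
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print(beta_ols)    # individually unstable; only the sum of the two is pinned down
print(beta_ridge)  # both close to 1: a little bias buys a large variance reduction
```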
https://en.wikipedia.org/wiki/Ridge_regression
In mathematics, statistics, finance,[1] and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer to a problem to a simpler one. It is often used in solving ill-posed problems or to prevent overfitting.[2]

Although regularization procedures can be divided in many ways, a particularly helpful delineation is between explicit and implicit regularization. In explicit regularization, independent of the problem or model, there is always a data term, which corresponds to a likelihood of the measurement, and a regularization term, which corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned to the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization, to justify the choice. It can also be physically motivated by common sense or intuition.

In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score of the trained model on the evaluation set (testing data) rather than the training data.[3]

One of the earliest uses of regularization is Tikhonov regularization (ridge regression), related to the method of least squares.

In machine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressing overfitting, where a model memorizes training data details but cannot generalize to new data. Several techniques are designed to prevent overfitting and underfitting, thereby improving model generalization.[4] Early stopping halts training when validation performance deteriorates, preventing overfitting by stopping before the model memorizes the training data.[4] L1 and L2 regularization add penalty terms to the cost function to discourage complex models. In the context of neural networks, the dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization.[4]

Empirical learning of classifiers (from a finite data set) is always an underdetermined problem, because it attempts to infer a function of any x given only examples x_1, x_2, \ldots, x_n. A regularization term (or regularizer) R(f) is added to a loss function:

\min_f \sum_{i=1}^{n} V(f(x_i), y_i) + \lambda R(f)

where V is an underlying loss function that describes the cost of predicting f(x) when the label is y, such as the square loss or hinge loss, and λ is a parameter which controls the importance of the regularization term. R(f) is typically chosen to impose a penalty on the complexity of f.
Concrete notions of complexity used include restrictions forsmoothnessand bounds on thevector space norm.[5][page needed] A theoretical justification for regularization is that it attempts to imposeOccam's razoron the solution (as depicted in the figure above, where the green function, the simpler one, may be preferred). From aBayesianpoint of view, many regularization techniques correspond to imposing certainpriordistributions on model parameters.[6] Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure[clarification needed]into the learning problem. The same idea arose in many fields ofscience. A simple form of regularization applied tointegral equations(Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, includingtotal variation regularization, have become popular. Regularization can be motivated as a technique to improve the generalizability of a learned model. The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a functionfn{\displaystyle f_{n}}is:I[fn]=∫X×YV(fn(x),y)ρ(x,y)dxdy{\displaystyle I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy}whereX{\displaystyle X}andY{\displaystyle Y}are the domains of input datax{\displaystyle x}and their labelsy{\displaystyle y}respectively. Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over theN{\displaystyle N}available samples:IS[fn]=1n∑i=1NV(fn(x^i),y^i){\displaystyle I_{S}[f_{n}]={\frac {1}{n}}\sum _{i=1}^{N}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})}Without bounds on the complexity of the function space (formally, thereproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. ofxi{\displaystyle x_{i}}) were made with noise, this model may suffer fromoverfittingand display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization. These techniques are named forAndrey Nikolayevich Tikhonov, who applied regularization tointegral equationsand made important contributions in many other areas. When learning a linear functionf{\displaystyle f}, characterized by an unknownvectorw{\displaystyle w}such thatf(x)=w⋅x{\displaystyle f(x)=w\cdot x}, one can add theL2{\displaystyle L_{2}}-norm of the vectorw{\displaystyle w}to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as:minw∑i=1nV(x^i⋅w,y^i)+λ‖w‖22,{\displaystyle \min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \left\|w\right\|_{2}^{2},}where(x^i,y^i),1≤i≤n,{\displaystyle ({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,}would represent samples used for training. 
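For the linear case just described, the objective can be minimized either by gradient descent or in closed form (the analytic minimizer is derived in the next passage); a small numpy sketch, with arbitrary synthetic data, checking that the two agree:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

lam, eta = 0.1, 0.05
w = np.zeros(d)
for _ in range(2000):                 # gradient descent on (1/n)||Xw - y||^2 + lam ||w||^2
    grad = (2 / n) * X.T @ (X @ w - y) + 2 * lam * w
    w = w - eta * grad

# the analytic minimizer (X^T X + lam n I)^{-1} X^T y
w_closed = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)
print(np.allclose(w, w_closed))       # True
```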
In the case of a general function, the norm of the function in itsreproducing kernel Hilbert spaceis:minf∑i=1nV(f(x^i),y^i)+λ‖f‖H2{\displaystyle \min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \left\|f\right\|_{\mathcal {H}}^{2}} As theL2{\displaystyle L_{2}}norm isdifferentiable, learning can be advanced bygradient descent. The learning problem with theleast squaresloss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimalw{\displaystyle w}is the one for which the gradient of the loss function with respect tow{\displaystyle w}is 0.minw1n(X^w−Y)T(X^w−Y)+λ‖w‖22{\displaystyle \min _{w}{\frac {1}{n}}\left({\hat {X}}w-Y\right)^{\mathsf {T}}\left({\hat {X}}w-Y\right)+\lambda \left\|w\right\|_{2}^{2}}∇w=2nX^T(X^w−Y)+2λw{\displaystyle \nabla _{w}={\frac {2}{n}}{\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+2\lambda w}0=X^T(X^w−Y)+nλw{\displaystyle 0={\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+n\lambda w}w=(X^TX^+λnI)−1(X^TY){\displaystyle w=\left({\hat {X}}^{\mathsf {T}}{\hat {X}}+\lambda nI\right)^{-1}\left({\hat {X}}^{\mathsf {T}}Y\right)}where the third statement is afirst-order condition. By construction of the optimization problem, other values ofw{\displaystyle w}give larger values for the loss function. This can be verified by examining thesecond derivative∇ww{\displaystyle \nabla _{ww}}. During training, this algorithm takesO(d3+nd2){\displaystyle O(d^{3}+nd^{2})}time. The terms correspond to the matrix inversion and calculatingXTX{\displaystyle X^{\mathsf {T}}X}, respectively. Testing takesO(nd){\displaystyle O(nd)}time. Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization. Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set. Consider the finite approximation ofNeumann seriesfor an invertible matrixAwhere‖I−A‖<1{\displaystyle \left\|I-A\right\|<1}:∑i=0T−1(I−A)i≈A−1{\displaystyle \sum _{i=0}^{T-1}\left(I-A\right)^{i}\approx A^{-1}} This can be used to approximate the analytical solution of unregularized least squares, ifγis introduced to ensure the norm is less than one.wT=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}} The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limitingT, the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization. The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical riskIs[w]=12n‖X^w−Y^‖Rn2{\displaystyle I_{s}[w]={\frac {1}{2n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|_{\mathbb {R} ^{n}}^{2}}with the gradient descent update:w0=0wt+1=(I−γnX^TX^)wt+γnX^TY^{\displaystyle {\begin{aligned}w_{0}&=0\\[1ex]w_{t+1}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} The base case is trivial. 
The inductive case is proved as follows:wT=(I−γnX^TX^)γn∑i=0T−2(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=1T−1(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle {\begin{aligned}w_{T}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right){\frac {\gamma }{n}}\sum _{i=0}^{T-2}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} Assume that a dictionaryϕj{\displaystyle \phi _{j}}with dimensionp{\displaystyle p}is given such that a function in the function space can be expressed as:f(x)=∑j=1pϕj(x)wj{\displaystyle f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}} Enforcing a sparsity constraint onw{\displaystyle w}can lead to simpler and more interpretable models. This is useful in many real-life applications such ascomputational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power. A sensible sparsity constraint is theL0{\displaystyle L_{0}}norm‖w‖0{\displaystyle \|w\|_{0}}, defined as the number of non-zero elements inw{\displaystyle w}. Solving aL0{\displaystyle L_{0}}regularized learning problem, however, has been demonstrated to beNP-hard.[7] TheL1{\displaystyle L_{1}}norm(see alsoNorms) can be used to approximate the optimalL0{\displaystyle L_{0}}norm via convex relaxation. It can be shown that theL1{\displaystyle L_{1}}norm induces sparsity. In the case of least squares, this problem is known asLASSOin statistics andbasis pursuitin signal processing.minw∈Rp1n‖X^w−Y^‖2+λ‖w‖1{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left\|w\right\|_{1}} L1{\displaystyle L_{1}}regularization can occasionally produce non-unique solutions. A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. This can be problematic for certain applications, and is overcome by combiningL1{\displaystyle L_{1}}withL2{\displaystyle L_{2}}regularization inelastic net regularization, which takes the following form:minw∈Rp1n‖X^w−Y^‖2+λ(α‖w‖1+(1−α)‖w‖22),α∈[0,1]{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left(\alpha \left\|w\right\|_{1}+(1-\alpha )\left\|w\right\|_{2}^{2}\right),\alpha \in [0,1]} Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights. Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries. While theL1{\displaystyle L_{1}}norm does not result in an NP-hard problem, theL1{\displaystyle L_{1}}norm is convex but is not strictly differentiable due to the kink at x = 0.Subgradient methodswhich rely on thesubderivativecan be used to solveL1{\displaystyle L_{1}}regularized learning problems. However, faster convergence can be achieved through proximal methods. 
For a problemminw∈HF(w)+R(w){\displaystyle \min _{w\in H}F(w)+R(w)}such thatF{\displaystyle F}is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), andR{\displaystyle R}is convex, continuous, and proper, then the proximal method to solve the problem is as follows. First define theproximal operatorproxR⁡(v)=argminw∈RD⁡{R(w)+12‖w−v‖2},{\displaystyle \operatorname {prox} _{R}(v)=\mathop {\operatorname {argmin} } _{w\in \mathbb {R} ^{D}}\left\{R(w)+{\frac {1}{2}}\left\|w-v\right\|^{2}\right\},}and then iteratewk+1=proxγ,R⁡(wk−γ∇F(wk)){\displaystyle w_{k+1}=\mathop {\operatorname {prox} } _{\gamma ,R}\left(w_{k}-\gamma \nabla F(w_{k})\right)} The proximal method iteratively performs gradient descent and then projects the result back into the space permitted byR{\displaystyle R}. WhenR{\displaystyle R}is theL1regularizer, the proximal operator is equivalent to the soft-thresholding operator,Sλ(v)f(n)={vi−λ,ifvi>λ0,ifvi∈[−λ,λ]vi+λ,ifvi<−λ{\displaystyle S_{\lambda }(v)f(n)={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}} This allows for efficient computation. Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem. In the case of a linear model with non-overlapping known groups, a regularizer can be defined:R(w)=∑g=1G‖wg‖2,{\displaystyle R(w)=\sum _{g=1}^{G}\left\|w_{g}\right\|_{2},}where‖wg‖2=∑j=1|Gg|(wgj)2{\displaystyle \|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}\left(w_{g}^{j}\right)^{2}}}} This can be viewed as inducing a regularizer over theL2{\displaystyle L_{2}}norm over members of each group followed by anL1{\displaystyle L_{1}}norm over groups. This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function: proxλ,R,g⁡(wg)={(1−λ‖wg‖2)wg,if‖wg‖2>λ0,if‖wg‖2≤λ{\displaystyle \operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}\left(1-{\dfrac {\lambda }{\left\|w_{g}\right\|_{2}}}\right)w_{g},&{\text{if }}\left\|w_{g}\right\|_{2}>\lambda \\[1ex]0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}} The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements. If it is desired to preserve the group structure, a new regularizer can be defined:R(w)=inf{∑g=1G‖wg‖2:w=∑g=1Gw¯g}{\displaystyle R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}} For eachwg{\displaystyle w_{g}},w¯g{\displaystyle {\bar {w}}_{g}}is defined as the vector such that the restriction ofw¯g{\displaystyle {\bar {w}}_{g}}to the groupg{\displaystyle g}equalswg{\displaystyle w_{g}}and all other entries ofw¯g{\displaystyle {\bar {w}}_{g}}are zero. The regularizer finds the optimal disintegration ofw{\displaystyle w}into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration. When labels are more expensive to gather than input examples, semi-supervised learning can be useful. 
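A minimal sketch of the proximal method above for the L1 regularizer, i.e. ISTA, with soft-thresholding as the proximal operator (problem sizes, λ, and the sparse ground truth are all illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(6)
n, p = 80, 20
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [3.0, -2.0, 1.5]              # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 5.0                                  # weight of the L1 penalty
L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the smooth gradient
w = np.zeros(p)
for _ in range(500):                       # ISTA: gradient step, then proximal step
    grad = X.T @ (X @ w - y)               # gradient of F(w) = 0.5 * ||Xw - y||^2
    w = soft_threshold(w - grad / L, lam / L)

print(np.nonzero(w)[0])                    # typically just the first three indices
```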
Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrixW{\displaystyle W}is given, a regularizer can be defined:R(f)=∑i,jwij(f(xi)−f(xj))2{\displaystyle R(f)=\sum _{i,j}w_{ij}\left(f(x_{i})-f(x_{j})\right)^{2}} IfWij{\displaystyle W_{ij}}encodes the result of some distance metric for pointsxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}, it is desirable thatf(xi)≈f(xj){\displaystyle f(x_{i})\approx f(x_{j})}. This regularizer captures this intuition, and is equivalent to:R(f)=f¯TLf¯{\displaystyle R(f)={\bar {f}}^{\mathsf {T}}L{\bar {f}}}whereL=D−W{\displaystyle L=D-W}is theLaplacian matrixof the graph induced byW{\displaystyle W}. The optimization problemminf∈RmR(f),m=u+l{\displaystyle \min _{f\in \mathbb {R} ^{m}}R(f),m=u+l}can be solved analytically if the constraintf(xi)=yi{\displaystyle f(x_{i})=y_{i}}is applied for all supervised samples. The labeled part of the vectorf{\displaystyle f}is therefore obvious. The unlabeled part off{\displaystyle f}is solved for by:minfu∈RufTLf=minfu∈Ru{fuTLuufu+flTLlufu+fuTLulfl}{\displaystyle \min _{f_{u}\in \mathbb {R} ^{u}}f^{\mathsf {T}}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\left\{f_{u}^{\mathsf {T}}L_{uu}f_{u}+f_{l}^{\mathsf {T}}L_{lu}f_{u}+f_{u}^{\mathsf {T}}L_{ul}f_{l}\right\}}∇fu=2Luufu+2LulY{\displaystyle \nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y}fu=Luu†(LulY){\displaystyle f_{u}=L_{uu}^{\dagger }\left(L_{ul}Y\right)}The pseudo-inverse can be taken becauseLul{\displaystyle L_{ul}}has the same range asLuu{\displaystyle L_{uu}}. In the case of multitask learning,T{\displaystyle T}problems are considered simultaneously, each related in some way. The goal is to learnT{\displaystyle T}functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrixW:T×D{\displaystyle W:T\times D}. R(w)=∑i=1D‖W‖2,1{\displaystyle R(w)=\sum _{i=1}^{D}\left\|W\right\|_{2,1}} This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods. R(w)=‖σ(W)‖1{\displaystyle R(w)=\left\|\sigma (W)\right\|_{1}}whereσ(W){\displaystyle \sigma (W)}is theeigenvaluesin thesingular value decompositionofW{\displaystyle W}. R(f1⋯fT)=∑t=1T‖ft−1T∑s=1Tfs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\left\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\right\|_{H_{k}}^{2}} This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual. R(f1⋯fT)=∑r=1C∑t∈I(r)‖ft−1I(r)∑s∈I(r)fs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\left\|f_{t}-{\frac {1}{I(r)}}\sum _{s\in I(r)}f_{s}\right\|_{H_{k}}^{2}}whereI(r){\displaystyle I(r)}is a cluster of tasks. This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predictNetflixrecommendations. A cluster would correspond to a group of people who share similar preferences. More generally than above, similarity between tasks can be defined by a function. 
The regularizer encourages the model to learn similar functions for similar tasks:

R(f_1 \cdots f_T) = \sum_{t,s=1,\, t \neq s}^{T} \left\| f_t - f_s \right\|^2 M_{ts}

for a given symmetric similarity matrix M.

Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation.
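Returning to the graph-Laplacian regularizer for semi-supervised learning above: setting the displayed gradient 2 L_uu f_u + 2 L_ul Y to zero gives L_uu f_u = −L_ul Y, so the unlabeled values are f_u = −L_uu†(L_ul Y). A tiny sketch on a made-up four-node chain graph:

```python
import numpy as np

# a tiny chain graph 0 - 1 - 2 - 3: nodes 0 and 3 are labeled, 1 and 2 are not
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W            # graph Laplacian L = D - W

labeled, unlabeled = [0, 3], [1, 2]
f_l = np.array([0.0, 1.0])                # known labels

L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
# first-order condition 2 L_uu f_u + 2 L_ul f_l = 0  =>  f_u = -L_uu^+ (L_ul f_l)
f_u = -np.linalg.pinv(L_uu) @ (L_ul @ f_l)
print(f_u)                                # [1/3, 2/3]: labels interpolate smoothly
```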
https://en.wikipedia.org/wiki/Regularization_(machine_learning)
Inmachine learning, a common task is the study and construction ofalgorithmsthat can learn from and make predictions ondata.[1]Such algorithms function by making data-driven predictions or decisions,[2]through building amathematical modelfrom input data. These input data used to build the model are usually divided into multipledata sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets. The model is initially fit on atraining data set,[3]which is a set of examples used to fit the parameters (e.g. weights of connections between neurons inartificial neural networks) of the model.[4]The model (e.g. anaive Bayes classifier) is trained on the training data set using asupervised learningmethod, for example using optimization methods such asgradient descentorstochastic gradient descent. In practice, the training data set often consists of pairs of an inputvector(or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as thetarget(orlabel). The current model is run with the training data set and produces a result, which is then compared with thetarget, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include bothvariable selectionand parameterestimation. Successively, the fitted model is used to predict the responses for the observations in a second data set called thevalidation data set.[3]The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model'shyperparameters[5](e.g. the number of hidden units—layers and layer widths—in a neural network[4]). Validation data sets can be used forregularizationbyearly stopping(stopping training when the error on the validation data set increases, as this is a sign ofover-fittingto the training data set).[6]This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun.[6] Finally, thetest data setis a data set used to provide an unbiased evaluation of afinalmodel fit on the training data set.[5]If the data in the test data set has never been used in training (for example incross-validation), the test data set is also called aholdout data set. 
The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set).[5] Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.[7] A training data set is adata setof examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, aclassifier.[9][10] For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a goodpredictive model.[11]The goal is to produce a trained (fitted) model that generalizes well to new, unknown data.[12]The fitted model is evaluated using “new” examples from the held-out data sets (validation and test data sets) to estimate the model’s accuracy in classifying new data.[5]To reduce the risk of issues such as over-fitting, the examples in the validation and test data sets should not be used to train the model.[5] Most approaches that search through training data for empirical relationships tend tooverfitthe data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general. When a training set is continuously expanded with new data, then this isincremental learning. A validation data set is adata setof examples used to tune thehyperparameters(i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set".[13]An example of a hyperparameter forartificial neural networksincludes the number of hidden units in each layer.[9][10]It, as well as the testing set (as mentioned below), should follow the same probability distribution as the training data set. In order to avoid overfitting, when anyclassificationparameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test data sets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such asaccuracy,sensitivity,specificity,F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing. The basic process of using a validation data set formodel selection(as part of training data set, validation data set, and test data set) is:[10][14] Since our goal is to find the network having the best performance on new data, the simplest approach to the comparison of different networks is to evaluate the error function using data which is independent of that used for training. Various networks are trained by minimization of an appropriate error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function using an independent validation set, and the network having the smallest error with respect to the validation set is selected. This approach is called thehold outmethod. 
Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set. An application of this process is inearly stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows, choosing the previous model (the one with minimum error). A test data set is adata setthat isindependentof the training data set, but that follows the sameprobability distributionas the training data set. If a model fit to the training data set also fits the test data set well, minimaloverfittinghas taken place (see figure below). A better fitting of the training data set as opposed to the test data set usually points to over-fitting. A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier.[9][10]To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy.[11] In a scenario where both validation and test data sets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test data sets), the test data set might assess the model only once (e.g., in theholdout method).[15]Note that some sources advise against such a method.[12]However, when using a method such ascross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.[5][12] Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment" according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render valid" Collaborative International Dictionary of English). With this perspective, the most common use of the termstest setandvalidation setis the one here described. However, in both industry and academia, they are sometimes used interchanged, by considering that the internal process is testing different models to improve (test set as a development set) and the final model is the one that needs to be validated before real use with an unseen data (validation set). "The literature on machine learning often reverses the meaning of 'validation' and 'test' sets. This is the most blatant example of the terminological confusion that pervades artificial intelligence research."[16]Nevertheless, the important concept that must be kept is that the final set, whether called test or validation, should only be used in the final experiment. In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into several training and a validation data sets. This is known ascross-validation. To confirm the model's performance, an additional test data set held out from cross-validation is normally used. It is possible to use cross-validation on training and validation sets, andwithineach training set have further cross-validation for a test set for hyperparameter tuning. This is known asnested cross-validation. 
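A minimal numpy sketch of the three-way split and hold-out selection described above (the data, the split proportions, and the polynomial-degree "hyperparameter" are all made up):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.2 * rng.normal(size=300)

idx = rng.permutation(300)                 # 60% train / 20% validation / 20% test
tr, va, te = idx[:180], idx[180:240], idx[240:]

def mse(coef, rows):
    return np.mean((np.polyval(coef, X[rows, 0]) - y[rows]) ** 2)

# fit candidate models (polynomial degrees) on the training set only
candidates = {d: np.polyfit(X[tr, 0], y[tr], d) for d in (1, 3, 5, 9)}
# select by validation error, then score the winner once on the held-out test set
best = min(candidates, key=lambda d: mse(candidates[d], va))
print(best, mse(candidates[best], te))
```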
Omissions in the training of algorithms are a major cause of erroneous outputs.[17] Types of such omissions include the omission of particular circumstances and the use of relatively irrelevant input.[17] An example of an omission of particular circumstances is a case where a boy was able to unlock the phone because his mother had registered her face under indoor, nighttime lighting, a condition that was not appropriately included in the training of the system.[17][18] Usage of relatively irrelevant input can include situations where algorithms use the background rather than the object of interest for object detection, such as being trained on pictures of sheep on grasslands, leading to a risk that a different object standing on a grassland will be interpreted as a sheep.[17]
https://en.wikipedia.org/wiki/Training_set
Inlinear algebra, aneigenvector(/ˈaɪɡən-/EYE-gən-) orcharacteristic vectoris avectorthat has itsdirectionunchanged (or reversed) by a givenlinear transformation. More precisely, an eigenvectorv{\displaystyle \mathbf {v} }of a linear transformationT{\displaystyle T}isscaled by a constant factorλ{\displaystyle \lambda }when the linear transformation is applied to it:Tv=λv{\displaystyle T\mathbf {v} =\lambda \mathbf {v} }. The correspondingeigenvalue,characteristic value, orcharacteristic rootis the multiplying factorλ{\displaystyle \lambda }(possibly negative). Geometrically, vectorsare multi-dimensionalquantities with magnitude and direction, often pictured as arrows. A linear transformationrotates,stretches, orshearsthe vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.[1] The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, fromgeologytoquantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is thesteady stateof the system. For ann×n{\displaystyle n{\times }n}matrixAand a nonzero vectorv{\displaystyle \mathbf {v} }of lengthn{\displaystyle n}, if multiplyingAbyv{\displaystyle \mathbf {v} }(denotedAv{\displaystyle A\mathbf {v} }) simply scalesv{\displaystyle \mathbf {v} }by a factorλ, whereλis ascalar, thenv{\displaystyle \mathbf {v} }is called an eigenvector ofA, andλis the corresponding eigenvalue. This relationship can be expressed as:Av=λv{\displaystyle A\mathbf {v} =\lambda \mathbf {v} }.[2] Given ann-dimensional vector spaceand a choice ofbasis, there is a direct correspondence between linear transformations from the vector space into itself andn-by-nsquare matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language ofmatrices.[3][4] Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefixeigen-is adopted from theGermanwordeigen(cognatewith theEnglishwordown) for 'proper', 'characteristic', 'own'.[5][6]Originally used to studyprincipal axesof the rotational motion ofrigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example instability analysis,vibration analysis,atomic orbitals,facial recognition, andmatrix diagonalization. In essence, an eigenvectorvof a linear transformationTis a nonzero vector that, whenTis applied to it, does not change direction. ApplyingTto the eigenvector only scales the eigenvector by the scalar valueλ, called an eigenvalue. This condition can be written as the equationT(v)=λv,{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}referred to as theeigenvalue equationoreigenequation. In general,λmay be anyscalar. For example,λmay be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero orcomplex. 
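The defining relation Av = λv can be checked directly with numpy's eigensolver; the matrix here is arbitrary:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# each column of `eigenvectors` pairs with the corresponding entry of `eigenvalues`
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))   # A v = lambda v holds for each pair
```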
The example here, based on theMona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called ashear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Pointsalongthe horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be adifferential operatorlikeddx{\displaystyle {\tfrac {d}{dx}}}, in which case the eigenvectors are functions calledeigenfunctionsthat are scaled by that differential operator, such asddxeλx=λeλx.{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}Alternatively, the linear transformation could take the form of annbynmatrix, in which case the eigenvectors arenby 1 matrices. If the linear transformation is expressed in the form of annbynmatrixA, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplicationAv=λv,{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}where the eigenvectorvis annby 1 matrix. For a matrix, eigenvalues and eigenvectors can be used todecompose the matrix—for example bydiagonalizingit. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefixeigen-is applied liberally when naming them: Eigenvalues are often introduced in the context oflinear algebraormatrix theory. Historically, however, they arose in the study ofquadratic formsanddifferential equations. 
In the 18th century,Leonhard Eulerstudied the rotational motion of arigid body, and discovered the importance of theprincipal axes.[a]Joseph-Louis Lagrangerealized that the principal axes are the eigenvectors of the inertia matrix.[10] In the early 19th century,Augustin-Louis Cauchysaw how their work could be used to classify thequadric surfaces, and generalized it to arbitrary dimensions.[11]Cauchy also coined the termracine caractéristique(characteristic root), for what is now calledeigenvalue; his term survives incharacteristic equation.[b] Later,Joseph Fourierused the work of Lagrange andPierre-Simon Laplaceto solve theheat equationbyseparation of variablesin his 1822 treatiseThe Analytic Theory of Heat (Théorie analytique de la chaleur).[12]Charles-François Sturmelaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that realsymmetric matriceshave real eigenvalues.[11]This was extended byCharles Hermitein 1855 to what are now calledHermitian matrices.[13] Around the same time,Francesco Brioschiproved that the eigenvalues oforthogonal matriceslie on theunit circle,[11]andAlfred Clebschfound the corresponding result forskew-symmetric matrices.[13]Finally,Karl Weierstrassclarified an important aspect in thestability theorystarted by Laplace, by realizing thatdefective matricescan cause instability.[11] In the meantime,Joseph Liouvillestudied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now calledSturm–Liouville theory.[14]Schwarzstudied the first eigenvalue ofLaplace's equationon general domains towards the end of the 19th century, whilePoincaréstudiedPoisson's equationa few years later.[15] At the start of the 20th century,David Hilbertstudied the eigenvalues ofintegral operatorsby viewing the operators as infinite matrices.[16]He was the first to use theGermanwordeigen, which means "own",[6]to denote eigenvalues and eigenvectors in 1904,[c]though he may have been following a related usage byHermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[17] The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, whenRichard von Misespublished thepower method. One of the most popular methods today, theQR algorithm, was proposed independently byJohn G. F. Francis[18]andVera Kublanovskaya[19]in 1961.[20][21] Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[22][23]Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4]which is especially common in numerical and computational applications.[24] Considern-dimensional vectors that are formed as a list ofnscalars, such as the three-dimensional vectorsx=[1−34]andy=[−2060−80].{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.} These vectors are said to bescalar multiplesof each other, orparallelorcollinear, if there is a scalarλsuch thatx=λy.{\displaystyle \mathbf {x} =\lambda \mathbf {y} .} In this case,λ=−120{\displaystyle \lambda =-{\frac {1}{20}}}. 
Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, {\displaystyle A\mathbf {v} =\mathbf {w} ,} or {\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}} where, for each row, {\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.}

If it occurs that v and w are scalar multiples, that is if {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} (1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2) where I is the n by n identity matrix and 0 is the zero vector. Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation {\displaystyle \det(A-\lambda I)=0.} (3)

Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always {\displaystyle (-1)^{n}\lambda ^{n}}. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),} (4) where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.

As a brief example, which is described in more detail in the examples section later, consider the matrix {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} Taking the determinant of (A − λI), the characteristic polynomial of A is {\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} }. In this example, the eigenvectors are any nonzero scalar multiples of {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.}

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
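The chain from equation (1) to equation (4) can be traced numerically for the 2×2 example above; a minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(A - λI) = λ² - 4λ + 3, highest degree first
coeffs = np.poly(A)
print(coeffs)                     # [ 1. -4.  3.]

# Its roots are the eigenvalues, as in equation (3)
print(sorted(np.roots(coeffs)))   # [1.0, 3.0]

# np.linalg.eig solves the eigenvalue equation (1) directly
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))            # [1.0, 3.0]
```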
The non-real roots of a real polynomial with real coefficients can be grouped into pairs ofcomplex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by theintermediate value theoremat least one of the roots is real. Therefore, anyreal matrixwith odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. Thespectrumof a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as thespectral radiusof the matrix. Letλibe an eigenvalue of annbynmatrixA. Thealgebraic multiplicityμA(λi) of the eigenvalue is itsmultiplicity as a rootof the characteristic polynomial, that is, the largest integerksuch that (λ−λi)kdivides evenlythat polynomial.[9][25][26] Suppose a matrixAhas dimensionnandd≤ndistinct eigenvalues. Whereas equation (4) factors the characteristic polynomial ofAinto the product ofnlinear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product ofdterms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,det(A−λI)=(λ1−λ)μA(λ1)(λ2−λ)μA(λ2)⋯(λd−λ)μA(λd).{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} Ifd=nthen the right-hand side is the product ofnlinear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimensionnas1≤μA(λi)≤n,μA=∑i=1dμA(λi)=n.{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} IfμA(λi) = 1, thenλiis said to be asimple eigenvalue.[26]IfμA(λi) equals the geometric multiplicity ofλi,γA(λi), defined in the next section, thenλiis said to be asemisimple eigenvalue. Given a particular eigenvalueλof thenbynmatrixA, define thesetEto be all vectorsvthat satisfy equation (2),E={v:(A−λI)v=0}.{\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely thekernelor nullspace of the matrix(A−λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector ofAassociated withλ. So, the setEis theunionof the zero vector with the set of all eigenvectors ofAassociated withλ, andEequals the nullspace of(A−λI).Eis called theeigenspaceorcharacteristic spaceofAassociated withλ.[27][9]In generalλis a complex number and the eigenvectors are complexnby 1 matrices. A property of the nullspace is that it is alinear subspace, soEis a linear subspace ofCn{\displaystyle \mathbb {C} ^{n}}. Because the eigenspaceEis a linear subspace, it isclosedunder addition. That is, if two vectorsuandvbelong to the setE, writtenu,v∈E, then(u+v) ∈Eor equivalentlyA(u+v) =λ(u+v). This can be checked using thedistributive propertyof matrix multiplication. Similarly, becauseEis a linear subspace, it is closed under scalar multiplication. That is, ifv∈Eandαis a complex number,(αv) ∈Eor equivalentlyA(αv) =λ(αv). 
This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity {\displaystyle \gamma _{A}(\lambda )}. Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as {\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).}

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. {\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n}

To prove the inequality {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )}, consider how the definition of geometric multiplicity implies the existence of {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}}, such that {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}}. We can therefore find a (unitary) matrix V whose first {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating {\displaystyle D:=V^{T}AV}, we get a matrix whose top left block is the diagonal matrix {\displaystyle \lambda I_{\gamma _{A}(\lambda )}}. This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding {\displaystyle -\xi V} on both sides, we get {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, {\displaystyle A-\xi I} is similar to {\displaystyle D-\xi I}, and {\displaystyle \det(A-\xi I)=\det(D-\xi I)}. But from the definition of D, we know that {\displaystyle \det(D-\xi I)} contains a factor {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}}, which means that the algebraic multiplicity of λ must satisfy {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )}.

Suppose A has {\displaystyle d\leq n} distinct eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{d}}, where the geometric multiplicity of {\displaystyle \lambda _{i}} is {\displaystyle \gamma _{A}(\lambda _{i})}. The total geometric multiplicity of A, {\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}} is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If {\displaystyle \gamma _{A}=n}, then the eigenspaces together span the whole space, so A has a basis of n linearly independent eigenvectors and is diagonalizable.

Let {\displaystyle A} be an arbitrary {\displaystyle n\times n} matrix of complex numbers with eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{n}}.
Each eigenvalue appears {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: the trace of A, defined as the sum of its diagonal elements, equals the sum of all the eigenvalues; the determinant of A equals the product of all the eigenvalues; the eigenvalues of the kth power of A are {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}}; and A is invertible if and only if every eigenvalue is nonzero.

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the {\displaystyle n\times n} matrix {\displaystyle A} in the defining equation, equation (1), {\displaystyle A\mathbf {v} =\lambda \mathbf {v} .}

The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix {\displaystyle A}. In this formulation, the defining equation is {\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} where {\displaystyle \kappa } is a scalar and {\displaystyle u} is a {\displaystyle 1\times n} matrix. Any row vector {\displaystyle u} satisfying this equation is called a left eigenvector of {\displaystyle A} and {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation, {\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector of {\displaystyle A} is the same as the transpose of a right eigenvector of {\displaystyle A^{\textsf {T}}}, with the same eigenvalue. Furthermore, since the characteristic polynomial of {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of {\displaystyle A}, the left and right eigenvectors of {\displaystyle A} are associated with the same eigenvalues.

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, {\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.} Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, {\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.} With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then {\displaystyle AQ=Q\Lambda .} Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, {\displaystyle A=Q\Lambda Q^{-1},} or by instead left multiplying both sides by Q−1, {\displaystyle \Lambda =Q^{-1}AQ.}

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
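A minimal sketch of this eigendecomposition for the 2×2 example used earlier (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(A)     # columns of Q are right eigenvectors
Lam = np.diag(eigvals)

# A = Q Λ Q⁻¹: the similarity transformation that diagonalizes A
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True

# Λ = Q⁻¹ A Q: the same transformation read in the eigenvector basis
print(np.allclose(Lam, np.linalg.inv(Q) @ A @ Q))   # True
```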
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of {\displaystyle H} is the maximum value of the quadratic form {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value of {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector.

Consider the matrix {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The eigenvectors v of this matrix satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A, {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A.

For λ=1, equation (2) becomes, {\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}} {\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector with v1 = −v2 solves this equation. Therefore, {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.

For λ=3, equation (2) becomes {\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector with v1 = v2 solves this equation. Therefore, {\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.

Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively.
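The variational characterization mentioned above can be probed numerically; in this sketch (NumPy assumed), random directions never push the quadratic form past the largest eigenvalue 3 of the symmetric example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
rng = np.random.default_rng(0)

# Rayleigh quotient xᵀAx / xᵀx, sampled over random nonzero directions
samples = []
for _ in range(10_000):
    x = rng.standard_normal(2)
    samples.append((x @ A @ x) / (x @ x))

# Bounded by the extreme eigenvalues 1 and 3; the max is attained at x ∝ [1, 1]
print(min(samples) >= 1 - 1e-9, max(samples) <= 3 + 1e-9)   # True True
print(max(samples))                                          # ≈ 3
```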
Consider the matrixA=[200034049].{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial ofAisdet(A−λI)=|[200034049]−λ[100010001]|=|2−λ0003−λ4049−λ|,=(2−λ)[(3−λ)(9−λ)−16]=−λ3+14λ2−35λ+22.{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues ofA. These eigenvalues correspond to the eigenvectors[100]T{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}},[0−21]T{\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}},and[012]T{\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}},or any nonzero multiple thereof. Consider thecyclic permutation matrixA=[010001100].{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 −λ3, whose roots areλ1=1λ2=−12+i32λ3=λ2∗=−12−i32{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}wherei{\displaystyle i}is animaginary unitwithi2=−1{\displaystyle i^{2}=-1}. For the real eigenvalueλ1= 1, any vector with three equal nonzero entries is an eigenvector. For example,A[555]=[555]=1⋅[555].{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of imaginary eigenvalues,λ2λ3=1,λ22=λ3,λ32=λ2.{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} ThenA[1λ2λ3]=[λ2λ31]=λ2⋅[1λ2λ3],{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}andA[1λ3λ2]=[λ3λ21]=λ3⋅[1λ3λ2].{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors ofAare complex and arevλ2=[1λ2λ3]T{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}}andvλ3=[1λ3λ2]T{\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}}with eigenvaluesλ2andλ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,vλ2=vλ3∗.{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} Matrices with entries only along the main diagonal are calleddiagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrixA=[100020003].{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial ofAisdet(A−λI)=(1−λ)(2−λ)(3−λ),{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the rootsλ1= 1,λ2= 2, andλ3= 3. 
These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors.

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix, {\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial of A is {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors.

As in the previous example, the lower triangular matrix {\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},} has a characteristic polynomial that is the product of its diagonal elements, {\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector {\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities were defined in an earlier section.
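These multiplicities can be checked numerically with the rank formula γA(λ) = n − rank(A − λI) from the earlier section; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]

for lam in (2.0, 3.0):
    geom = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, geom)   # each double root has geometric multiplicity only 1
```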
For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, {\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},} where {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix.[33][34][35] This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.[34][36]

The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation {\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

Consider the derivative operator {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function {\displaystyle f(t)=f(0)e^{\lambda t},} is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples.

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, {\displaystyle T:V\to V.} We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .} This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[37][38]

Given an eigenvalue λ, consider the set {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.[39]

By definition of a linear transformation, {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication.
The eigenspace E associated with λ is therefore a linear subspace of V.[40] If that subspace has dimension 1, it is sometimes called an eigenline.[41]

The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[9][26][42] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[d]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence.

The simplest difference equations have the form {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.} The solution of this equation for x in terms of t is found by using its characteristic equation {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,} which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.} A similar procedure is used for solving a differential equation of the form {\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.}

The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point.
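Before turning to how eigenvalues are computed in practice, the difference-equation discussion above can be made concrete: the sketch below (NumPy assumed) solves the Fibonacci recurrence x_t = x_{t−1} + x_{t−2} through the eigendecomposition of its companion matrix.

```python
import numpy as np

# Companion matrix of the Fibonacci recurrence x_t = x_{t-1} + x_{t-2};
# its characteristic roots solve λ² - λ - 1 = 0 (the golden ratio pair)
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam, V = np.linalg.eig(C)

# The stacked state s_t = [x_{t+1}, x_t]ᵀ satisfies s_t = Cᵗ s_0,
# and diagonalization gives Cᵗ = V diag(λᵗ) V⁻¹
s0 = np.array([1.0, 0.0])            # x₁ = 1, x₀ = 0
t = 10
st = V @ np.diag(lam**t) @ np.linalg.solve(V, s0)
print(round(st[1].real))             # 55, the 10th Fibonacci number
```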
The eigenvalues of a matrixA{\displaystyle A}can be determined by finding the roots of the characteristic polynomial. This is easy for2×2{\displaystyle 2\times 2}matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any requiredaccuracy.[43]However, this approach is not viable in practice because the coefficients would be contaminated by unavoidableround-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified byWilkinson's polynomial).[43]Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is thedeterminant, which for ann×n{\displaystyle n\times n}matrix is a sum ofn!{\displaystyle n!}different products.[e] Explicitalgebraic formulasfor the roots of a polynomial exist only if the degreen{\displaystyle n}is 4 or less. According to theAbel–Ruffini theoremthere is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degreen{\displaystyle n}is the characteristic polynomial of somecompanion matrixof ordern{\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximatenumerical methods. Even theexact formulafor the roots of a degree 3 polynomial is numerically impractical. Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes asystem of linear equationswith known coefficients. For example, once it is known that 6 is an eigenvalue of the matrixA=[4163]{\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equationAv=6v{\displaystyle Av=6v}, that is[4163][xy]=6⋅[xy]{\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to twolinear equations{4x+y=6x6x+3y=6y{\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.}that is{−2x+y=06x−3y=0{\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equationy=2x{\displaystyle y=2x}. Therefore, any vector of the form[a2a]T{\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real numbera{\displaystyle a}, is an eigenvector ofA{\displaystyle A}with eigenvalueλ=6{\displaystyle \lambda =6}. The matrixA{\displaystyle A}above has another eigenvalueλ=1{\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of3x+y=0{\displaystyle 3x+y=0}, that is, any vector of the form[b−3b]T{\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real numberb{\displaystyle b}. The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. 
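The next passage describes the simplest such eigenvector-first scheme, the power method; as a preview, a minimal sketch (NumPy assumed) applied to the example matrix above:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

rng = np.random.default_rng(0)
v = rng.standard_normal(2)           # arbitrary starting vector
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)           # normalize to keep entries reasonable

lam = (v @ A @ v) / (v @ v)          # eigenvalue estimate from the vector
print(lam)                           # ≈ 6, the dominant eigenvalue
print(v / v[0])                      # ≈ [1, 2], matching the text above
```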
The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by {\displaystyle (A-\mu I)^{-1}}; this causes it to converge to an eigenvector of the eigenvalue closest to {\displaystyle \mu \in \mathbb {C} }.

If {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of {\displaystyle A}, then the corresponding eigenvalue can be computed as {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }},} where {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of {\displaystyle \mathbf {v} }.

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.[43] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[43]

Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.

Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes, such as scalings, rotations, and shears in the plane. The characteristic equation for a rotation is a quadratic equation with discriminant {\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, {\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics.
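A sketch of PCA on synthetic data (the data and its correlation structure are made up for illustration; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 synthetic samples with two correlated features
X = rng.standard_normal((200, 2)) @ np.array([[2.0, 0.8],
                                              [0.0, 0.5]])
X -= X.mean(axis=0)

C = np.cov(X, rowvar=False)             # sample covariance matrix (PSD)
var, components = np.linalg.eigh(C)     # eigenvalues in ascending order

# Eigenvalues = variance explained by each principal component
print((var / var.sum())[::-1])          # explained-variance ratios
print(components[:, ::-1])              # columns: principal directions
```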
InQ methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment ofpracticalsignificance (which differs from thestatistical significanceofhypothesis testing; cf.criteria for determining the number of factors). More generally, principal component analysis can be used as a method offactor analysisinstructural equation modeling. Inspectral graph theory, an eigenvalue of agraphis defined as an eigenvalue of the graph'sadjacency matrixA{\displaystyle A}, or (increasingly) of the graph'sLaplacian matrixdue to itsdiscrete Laplace operator, which is eitherD−A{\displaystyle D-A}(sometimes called thecombinatorial Laplacian) orI−D−1/2AD−1/2{\displaystyle I-D^{-1/2}AD^{-1/2}}(sometimes called thenormalized Laplacian), whereD{\displaystyle D}is a diagonal matrix withDii{\displaystyle D_{ii}}equal to the degree of vertexvi{\displaystyle v_{i}}, and inD−1/2{\displaystyle D^{-1/2}}, thei{\displaystyle i}th diagonal entry is1/deg⁡(vi){\textstyle 1/{\sqrt {\deg(v_{i})}}}. Thek{\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to thek{\displaystyle k}th largest ork{\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure thecentralityof its vertices. An example isGoogle'sPageRankalgorithm. The principal eigenvector of a modifiedadjacency matrixof the World Wide Web graph gives the page ranks as its components. This vector corresponds to thestationary distributionof theMarkov chainrepresented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, viaspectral clustering. Other methods are also available for clustering. AMarkov chainis represented by a matrix whose entries are thetransition probabilitiesbetween states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. ThePerron–Frobenius theoremgives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with manydegrees of freedom. The eigenvalues are thenatural frequencies(oreigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed bymx¨+kx=0{\displaystyle m{\ddot {x}}+kx=0}ormx¨=−kx{\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expectx{\displaystyle x}to be sinusoidal in time). Inn{\displaystyle n}dimensions,m{\displaystyle m}becomes amass matrixandk{\displaystyle k}astiffness matrix. Admissible solutions are then a linear combination of solutions to thegeneralized eigenvalue problemkx=ω2mx{\displaystyle kx=\omega ^{2}mx}whereω2{\displaystyle \omega ^{2}}is the eigenvalue andω{\displaystyle \omega }is the (imaginary)angular frequency. The principalvibration modesare different from the principal compliance modes, which are the eigenvectors ofk{\displaystyle k}alone. 
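A sketch of the generalized eigenvalue problem kx = ω²mx for a hypothetical two-degree-of-freedom system (the mass and stiffness values are made up; SciPy assumed):

```python
import numpy as np
from scipy.linalg import eigh

m = np.diag([2.0, 1.0])                  # hypothetical mass matrix
k = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])             # hypothetical stiffness matrix

omega_sq, modes = eigh(k, m)             # solves k x = ω² m x
print(np.sqrt(omega_sq))                 # natural frequencies ω
print(modes)                             # columns are the mode shapes
```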
Furthermore, damped vibration, governed by {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, {\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, an approach that neatly generalizes the solution of scalar-valued vibration problems.

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

An example of an eigenvalue equation where the transformation {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: {\displaystyle H\psi _{E}=E\psi _{E},} where {\displaystyle H}, the Hamiltonian, is a second-order differential operator and {\displaystyle \psi _{E}}, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue {\displaystyle E}, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which {\displaystyle \psi _{E}} and {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.

The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by {\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is: {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle ,} where {\displaystyle |\Psi _{E}\rangle } is an eigenstate of {\displaystyle H} and {\displaystyle E} represents the eigenvalue. {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation {\displaystyle H} to {\displaystyle |\Psi _{E}\rangle }.

Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix {\displaystyle \mathbf {t} }.[44][45] The eigenvectors of the transmission operator {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system.
The eigenvalues, τ, of {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with {\displaystyle \tau _{\max }=1} and {\displaystyle \tau _{\min }=0}.[45] Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.[46]

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram.[47][48] A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.[49]

The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues {\displaystyle E_{1}\geq E_{2}\geq E_{3}};[50] {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, {\displaystyle \mathbf {v} _{2}} is the secondary and {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of {\displaystyle E_{1}}, {\displaystyle E_{2}}, and {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If {\displaystyle E_{1}=E_{2}=E_{3}}, the fabric is said to be isotropic. If {\displaystyle E_{1}=E_{2}>E_{3}}, the fabric is said to be planar. If {\displaystyle E_{1}>E_{2}>E_{3}}, the fabric is said to be linear.[51]

The basic reproduction number ({\displaystyle R_{0}}) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect.
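A sketch of such an orientation-tensor analysis on synthetic clast directions (the data are made up; a real analysis would use field measurements; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic unit vectors clustered around the x-axis (a made-up lineation)
v = rng.standard_normal((500, 3))
v[:, 0] *= 3.0
v /= np.linalg.norm(v, axis=1, keepdims=True)

T = (v.T @ v) / len(v)                    # orientation tensor
E = np.sort(np.linalg.eigvalsh(T))[::-1]  # ordered E1 >= E2 >= E3
print(E)   # E1 well above E2 and E3: a linear (clustered) fabric
```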
The generation time of an infection is the time, {\displaystyle t_{G}}, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time {\displaystyle t_{G}} has passed. The value {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix.[52][53]

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[54] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been conducted on eigen vision systems that determine hand gestures.

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
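Returning to the next-generation matrix described above, a sketch of the spectral-radius computation with purely illustrative numbers for a hypothetical two-group population (NumPy assumed):

```python
import numpy as np

# Hypothetical next-generation matrix: entry (i, j) is the expected number
# of infections in group i caused by one infected individual in group j
K = np.array([[1.2, 0.4],
              [0.3, 0.8]])

R0 = np.max(np.abs(np.linalg.eigvals(K)))
print(R0)   # 1.4 here; a value above 1 indicates epidemic growth
```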
https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
Nearest neighbor search(NNS), as a form ofproximity search, is theoptimization problemof finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the lesssimilarthe objects, the larger the function values. Formally, the nearest-neighbor (NN) search problem is defined as follows: given a setSof points in a spaceMand a query pointq∈M, find the closest point inStoq.Donald Knuthin vol. 3 ofThe Art of Computer Programming(1973) called it thepost-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is ak-NN search, where we need to find thekclosest points. Most commonlyMis ametric spaceand dissimilarity is expressed as adistance metric, which is symmetric and satisfies thetriangle inequality. Even more common,Mis taken to be thed-dimensionalvector spacewhere dissimilarity is measured using theEuclidean distance,Manhattan distanceor otherdistance metric. However, the dissimilarity function can be arbitrary. One example is asymmetricBregman divergence, for which the triangle inequality does not hold.[1] The nearest neighbor search problem arises in numerous fields of application, including: Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as thecurse of dimensionalitystates that there is no general-purpose exact solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time. The simplest solution to the NNS problem is to compute the distance from the query point to every other point in the database, keeping track of the "best so far". This algorithm, sometimes referred to as the naive approach, has arunning timeofO(dN), whereNis thecardinalityofSanddis the dimensionality ofS. There are no search data structures to maintain, so the linear search has no space complexity beyond the storage of the database. Naive search can, on average, outperform space partitioning approaches on higher dimensional spaces.[5] The absolute distance is not required for distance comparison, only the relative distance. In geometric coordinate systems the distance calculation can be sped up considerably by omitting the square root calculation from the distance calculation between two coordinates. The distance comparison will still yield identical results. Since the 1970s, thebranch and boundmethodology has been applied to the problem. In the case of Euclidean space, this approach encompassesspatial indexor spatial access methods. Severalspace-partitioningmethods have been developed for solving the NNS problem. Perhaps the simplest is thek-d tree, which iteratively bisects the search space into two regions containing half of the points of the parent region. Queries are performed via traversal of the tree from the root to a leaf by evaluating the query point at each split. Depending on the distance specified in the query, neighboring branches that might contain hits may also need to be evaluated. 
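Before the tree-based methods are analyzed further, here is a minimal sketch of the naive linear scan described above (NumPy assumed), using the squared-distance shortcut noted earlier:

```python
import numpy as np

def nearest_neighbor(S, q):
    """Naive linear scan, O(dN): compare squared distances, which
    preserve the ordering, so the square root is needed only once."""
    d2 = np.sum((S - q) ** 2, axis=1)
    best = int(np.argmin(d2))
    return best, float(np.sqrt(d2[best]))

rng = np.random.default_rng(3)
S = rng.random((10_000, 3))          # the database of points
q = np.array([0.5, 0.5, 0.5])        # the query point
print(nearest_neighbor(S, q))        # (index of closest point, distance)
```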
For constant dimension query time, average complexity is O(log N)[6] in the case of randomly distributed points; worst case complexity is {\displaystyle O(kN^{1-1/k})}.[7] Alternatively, the R-tree data structure was designed to support nearest neighbor search in a dynamic context, as it has efficient algorithms for insertions and deletions such as the R* tree.[8] R-trees can yield nearest neighbors not only for Euclidean distance, but can also be used with other distances.

In the case of general metric space, the branch-and-bound approach is known as the metric tree approach. Particular examples include vp-tree and BK-tree methods.

Using a set of points taken from a 3-dimensional space and put into a BSP tree, and given a query point taken from the same space, a possible solution to the problem of finding the nearest point-cloud point to the query point is given in the following description of an algorithm. (Strictly speaking, no such point may exist, because it may not be unique. But in practice, usually we only care about finding any one of the subset of all point-cloud points that exist at the shortest distance to a given query point.) The idea is, for each branching of the tree, guess that the closest point in the cloud resides in the half-space containing the query point. This may not be the case, but it is a good heuristic. After having recursively gone through all the trouble of solving the problem for the guessed half-space, now compare the distance returned by this result with the shortest distance from the query point to the partitioning plane. This latter distance is that between the query point and the closest possible point that could exist in the half-space not searched. If this distance is greater than that returned in the earlier result, then clearly there is no need to search the other half-space. If there is such a need, then you must go through the trouble of solving the problem for the other half space, and then compare its result to the former result, and then return the proper result. The performance of this algorithm is nearer to logarithmic time than linear time when the query point is near the cloud, because as the distance between the query point and the closest point-cloud point nears zero, the algorithm needs only perform a look-up using the query point as a key to get the correct result.

An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most {\displaystyle c} times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter.[9]

Proximity graph methods (such as navigable small world graphs[10] and HNSW[11][12]) are considered the current state-of-the-art for the approximate nearest neighbors search. The methods are based on greedy traversing in proximity neighborhood graphs {\displaystyle G(V,E)} in which every point {\displaystyle x_{i}\in S} is uniquely associated with vertex {\displaystyle v_{i}\in V}. The search for the nearest neighbors to a query q in the set S takes the form of searching for the vertex in the graph {\displaystyle G(V,E)}.
The basic algorithm – greedy search – works as follows: search starts from an enter-point vertex {\displaystyle v_{i}\in V} by computing the distances from the query q to each vertex of its neighborhood {\displaystyle \{v_{j}:(v_{i},v_{j})\in E\}}, and then finds a vertex with the minimal distance value. If the distance value between the query and the selected vertex is smaller than the one between the query and the current element, then the algorithm moves to the selected vertex, and it becomes the new enter-point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex that is closer to the query than the vertex itself.

The idea of proximity neighborhood graphs was exploited in multiple publications, including the seminal paper by Arya and Mount,[13] in the VoroNet system for the plane,[14] in the RayNet system for the {\displaystyle \mathbb {E} ^{n}},[15] and in the Navigable Small World,[10] Metrized Small World[16] and HNSW[11][12] algorithms for the general case of spaces with a distance function. These works were preceded by a pioneering paper by Toussaint, in which he introduced the concept of a relative neighborhood graph.[17]

Locality sensitive hashing (LSH) is a technique for grouping points in space into 'buckets' based on some distance metric operating on the points. Points that are close to each other under the chosen metric are mapped to the same bucket with high probability.[18]

The cover tree has a theoretical bound that is based on the dataset's doubling constant. The bound on search time is {\displaystyle O(c^{12}\log n)} where {\displaystyle c} is the expansion constant of the dataset.

In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem. This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries. These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general. In practice this technique has an average search time of O(1) or O(K) for the k-nearest neighbor problem when applied to real world stereo vision data.[4]

In high-dimensional spaces, tree indexing structures become useless because an increasing percentage of the nodes need to be examined anyway. To speed up linear search, a compressed version of the feature vectors stored in RAM is used to prefilter the datasets in a first run. The final candidates are determined in a second stage using the uncompressed data from the disk for distance calculation.[19]

The VA-file approach is a special case of a compression based search, where each feature component is compressed uniformly and independently. The optimal compression technique in multidimensional spaces is Vector Quantization (VQ), implemented through clustering. The database is clustered and the most "promising" clusters are retrieved. Huge gains over VA-File, tree-based indexes and sequential scan have been observed.[20][21] Also note the parallels between clustering and LSH.

There are numerous variants of the NNS problem and the two most well-known are the k-nearest neighbor search and the ε-approximate nearest neighbor search. k-nearest neighbor search identifies the top k nearest neighbors to the query.
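Before turning to the problem variants, a minimal sketch of the greedy search described above (plain Python with NumPy; the toy graph and entry point are made up for illustration):

```python
import numpy as np

def greedy_search(neighbors, points, q, enter_point):
    """Move to whichever neighbor is closest to the query; stop at a
    local minimum, i.e. when no neighbor improves on the current vertex."""
    current = enter_point
    d_cur = np.linalg.norm(points[current] - q)
    while True:
        cand = min(neighbors[current],
                   key=lambda v: np.linalg.norm(points[v] - q))
        d_cand = np.linalg.norm(points[cand] - q)
        if d_cand >= d_cur:        # local minimum reached
            return current
        current, d_cur = cand, d_cand

# Toy proximity graph: five points on a line, linked to adjacent points
points = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(greedy_search(neighbors, points, np.array([2.6]), enter_point=0))  # 3
```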
This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors.

In some applications it may be acceptable to retrieve a "good guess" of the nearest neighbor. In those cases, we can use an algorithm which doesn't guarantee to return the actual nearest neighbor in every case, in return for improved speed or memory savings. Often such an algorithm will find the nearest neighbor in a majority of cases, but this depends strongly on the dataset being queried. Algorithms that support the approximate nearest neighbor search include locality-sensitive hashing, best bin first and balanced box-decomposition tree based search.[22]

Nearest neighbor distance ratio applies the threshold not to the direct distance from the original point to the challenger neighbor, but to a ratio of it that depends on the distance to the previous neighbor. It is used in CBIR to retrieve pictures through a "query by example" using the similarity between local features. More generally it is involved in several matching problems.

Fixed-radius near neighbors is the problem where one wants to efficiently find all points given in Euclidean space within a given fixed distance from a specified point. The distance is assumed to be fixed, but the query point is arbitrary.

For some applications (e.g. entropy estimation), we may have N data points and wish to know which is the nearest neighbor for every one of those N points. This could, of course, be achieved by running a nearest-neighbor search once for every point, but an improved strategy would be an algorithm that exploits the information redundancy between these N queries to produce a more efficient search. As a simple example: when we find the distance from point X to point Y, that also tells us the distance from point Y to point X, so the same calculation can be reused in two different queries. Given a fixed dimension, a positive semi-definite norm (thereby including every Lp norm), and n points in this space, the nearest neighbour of every point can be found in O(n log n) time and the m nearest neighbours of every point can be found in O(mn log n) time.[23][24]
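As a concrete illustration of k-nearest-neighbor and all-nearest-neighbor queries, the sketch below uses SciPy's k-d tree; the data is random and purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))      # point cloud in the unit cube
tree = cKDTree(points)                # build the spatial index once

query = np.array([0.5, 0.5, 0.5])
dists, idx = tree.query(query, k=5)   # 5 nearest neighbors of the query
print(idx, dists)

# All-nearest-neighbors: querying every point with k=2 returns each point
# itself (distance 0) plus its nearest other point, reusing one index
# rather than rebuilding it per query.
d_all, i_all = tree.query(points, k=2)
nearest_other = i_all[:, 1]
```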
https://en.wikipedia.org/wiki/Nearest_neighbor_search
In statistics and machine learning, the bias–variance tradeoff describes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as we increase the number of tunable parameters in a model, it becomes more flexible and can better fit a training data set; it is said to have lower error, or bias. However, for more flexible models, there will tend to be greater variance in the model fit each time we take a set of samples to create a new training data set; it is said that there is greater variance in the model's estimated parameters.

The bias–variance dilemma or bias–variance problem is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set: the bias error, arising from erroneous assumptions in the learning algorithm, and the variance error, arising from sensitivity to small fluctuations in the training set.[1][2]

The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms: the bias, the variance, and a quantity called the irreducible error, resulting from noise in the problem itself.

The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data and generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data.

It is a common fallacy[3][4] to assume that complex models must have high variance. High-variance models are "complex" in some sense, but the reverse need not be true.[5] In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by an example adapted from [6]: the model f_{a,b}(x) = a sin(bx) has only two parameters (a, b) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and a high variance.

An analogy can be made to the relationship between accuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from only local information. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words, test data may not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting).
This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can be smoothed via explicit regularization, such as shrinkage.

Suppose that we have a training set consisting of a set of points x₁, …, xₙ and real-valued labels yᵢ associated with the points xᵢ. We assume that the data is generated by a function f(x) such that y = f(x) + ε, where the noise, ε, has zero mean and variance σ². That is, yᵢ = f(xᵢ) + εᵢ, where εᵢ is a noise sample.

We want to find a function f̂(x; D) that approximates the true function f(x) as well as possible, by means of some learning algorithm based on a training dataset (sample) D = {(x₁, y₁), …, (xₙ, yₙ)}. We make "as well as possible" precise by measuring the mean squared error between y and f̂(x; D): we want (y − f̂(x; D))² to be minimal, both for x₁, …, xₙ and for points outside of our sample. Of course, we cannot hope to do so perfectly, since the yᵢ contain noise ε; this means we must be prepared to accept an irreducible error in any function we come up with.

Finding an f̂ that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever function f̂ we select, we can decompose its expected error on an unseen sample x (i.e. conditional on x) as follows:[7]: 34 [8]: 223

E_{D,ε}[(y − f̂(x; D))²] = (Bias_D[f̂(x; D)])² + Var_D[f̂(x; D)] + σ²,

where

Bias_D[f̂(x; D)] = E_D[f̂(x; D)] − f(x),

Var_D[f̂(x; D)] = E_D[(f̂(x; D) − E_D[f̂(x; D)])²],

and σ² is the variance of the noise ε, the irreducible error. The expectation ranges over different choices of the training set D = {(x₁, y₁), …, (xₙ, yₙ)}, all sampled from the same joint distribution P(x, y), which can for example be done via bootstrapping. The three terms represent the squared bias (the error caused by the simplifying assumptions built into the method), the variance (how much the learned f̂ moves around its mean as the training set varies), and the irreducible error σ². Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.[7]: 34

The more complex the model f̂(x) is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger.

The derivation of the bias–variance decomposition for squared error proceeds as follows.[9][10] For convenience, we drop the D subscript in the following lines, such that f̂(x; D) = f̂(x).
Let us write the mean-squared error of our model:

MSE = E[(y − f̂(x))²] = E[(f(x) + ε − f̂(x))²] = E[(f(x) − f̂(x))²] + 2 E[(f(x) − f̂(x)) ε] + E[ε²].

We can show that the second term of this equation is null:

E[(f(x) − f̂(x)) ε] = E[f(x) − f̂(x)] E[ε]  (since ε is independent from x)
                    = 0                     (since E[ε] = 0).

Moreover, the third term of this equation is nothing but σ², the variance of ε.

Let us now expand the remaining term:

E[(f(x) − f̂(x))²] = E[(f(x) − E[f̂(x)] + E[f̂(x)] − f̂(x))²]
                   = E[(f(x) − E[f̂(x)])²] + 2 E[(f(x) − E[f̂(x)])(E[f̂(x)] − f̂(x))] + E[(E[f̂(x)] − f̂(x))²].

We show that:

E[(f(x) − E[f̂(x)])²] = E[f(x)²] − 2 E[f(x) E[f̂(x)]] + E[E[f̂(x)]²]
                      = f(x)² − 2 f(x) E[f̂(x)] + E[f̂(x)]²
                      = (f(x) − E[f̂(x)])².

This last series of equalities comes from the fact that f(x) is not a random variable, but a fixed, deterministic function of x. Therefore, E[f(x)] = f(x). Similarly E[f(x)²] = f(x)², and E[f(x) E[f̂(x)]] = f(x) E[E[f̂(x)]] = f(x) E[f̂(x)].
Using the same reasoning, we can expand the second term and show that it is null:

E[(f(x) − E[f̂(x)])(E[f̂(x)] − f̂(x))] = E[f(x) E[f̂(x)] − f(x) f̂(x) − E[f̂(x)]² + E[f̂(x)] f̂(x)]
                                      = f(x) E[f̂(x)] − f(x) E[f̂(x)] − E[f̂(x)]² + E[f̂(x)]²
                                      = 0.

Eventually, we plug our derivations back into the original equation and identify each term:

MSE = (f(x) − E[f̂(x)])² + E[(E[f̂(x)] − f̂(x))²] + σ²
    = Bias(f̂(x))² + Var[f̂(x)] + σ².

Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value over x ∼ P:

MSE = E_{x∼P}[Bias(f̂(x))² + Var[f̂(x)]] + σ².

Dimensionality reduction and feature selection can decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control this balance.

One way of resolving the trade-off is to use mixture models and ensemble learning.[14][15] For example, boosting combines many "weak" (high-bias) models in an ensemble that has lower bias than the individual models, while bagging combines "strong" learners in a way that reduces their variance. Model validation methods such as cross-validation can be used to tune models so as to optimize the trade-off.

In the case of k-nearest neighbors regression, when the expectation is taken over the possible labelings of a fixed training set, a closed-form expression exists that relates the bias–variance decomposition to the parameter k:[8]: 37, 223

E[(y − f̂(x))²] = (f(x) − (1/k) Σᵢ₌₁ᵏ f(Nᵢ(x)))² + σ²/k + σ²,

where N₁(x), …, N_k(x) are the k nearest neighbors of x in the training set. The bias (first term) is a monotone rising function of k, while the variance (second term) drops off as k is increased. In fact, under "reasonable assumptions" the bias of the first-nearest-neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.[12]

The bias–variance decomposition forms the conceptual basis for regression regularization methods such as LASSO and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides unbiased regression estimates, the lower-variance solutions produced by regularization techniques provide superior MSE performance. The bias–variance decomposition was originally formulated for least-squares regression.
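The decomposition can be checked empirically. The sketch below repeatedly draws training sets from y = f(x) + ε, fits polynomials of a given degree with NumPy, and estimates the squared bias and variance at one test point; the target function, sample sizes, and degrees are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.sin                      # "true" function
sigma = 0.3                     # noise standard deviation
x_test, n, trials = 1.0, 30, 2000

for degree in (1, 3, 9):
    preds = np.empty(trials)
    for t in range(trials):
        x = rng.uniform(0, np.pi, n)
        y = f(x) + rng.normal(0, sigma, n)
        coef = np.polynomial.polynomial.polyfit(x, y, degree)
        preds[t] = np.polynomial.polynomial.polyval(x_test, coef)
    bias2 = (preds.mean() - f(x_test)) ** 2
    var = preds.var()
    # Expected squared error at x_test should be close to bias2 + var + sigma**2.
    print(f"degree {degree}: bias^2={bias2:.4f}  variance={var:.4f}")
```

Low-degree fits show larger bias and small variance; high-degree fits show the reverse, as the decomposition predicts.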
For the case of classification under the 0-1 loss (misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label.[16][17] Alternatively, if the classification problem can be phrased as probabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form.

It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.[18]

Even though the bias–variance decomposition does not directly apply in reinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.[19]

While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such as Markov chain Monte Carlo, are only asymptotically unbiased, at best.[20] Convergence diagnostics can be used to control bias via burn-in removal, but due to a limited computational budget, a bias–variance trade-off arises,[21] leading to a wide range of approaches in which a controlled bias is accepted if it allows the variance, and hence the overall estimation error, to be dramatically reduced.[22][23][24]

While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context of human cognition, most notably by Gerd Gigerenzer and co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly characterized training sets provided by experience by adopting high-bias/low-variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.[25]

Geman et al.[12] argue that the bias–variance dilemma implies that abilities such as generic object recognition cannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
https://en.wikipedia.org/wiki/Bias-variance_tradeoff
In mathematics, a concave function is one for which the function value at any convex combination of elements in the domain is greater than or equal to the same convex combination of the function values at those elements. Equivalently, a concave function is any function for which the hypograph is convex. The class of concave functions is in a sense the opposite of the class of convex functions. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap, or upper convex.

A real-valued function f on an interval (or, more generally, a convex set in a vector space) is said to be concave if, for any x and y in the interval and for any α ∈ [0, 1],[1]

f(αx + (1 − α)y) ≥ α f(x) + (1 − α) f(y).

A function is called strictly concave if

f(αx + (1 − α)y) > α f(x) + (1 − α) f(y)

for any α ∈ (0, 1) and x ≠ y.

For a function f : ℝ → ℝ, this second definition merely states that for every z strictly between x and y, the point (z, f(z)) on the graph of f is above the straight line joining the points (x, f(x)) and (y, f(y)).

A function f is quasiconcave if the upper contour sets of the function S(a) = {x : f(x) ≥ a} are convex sets.[2]
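As a quick numerical illustration of the defining inequality, the sketch below samples random triples for the concave function f(x) = log x; the choice of function, sampling range, and tolerance are arbitrary.

```python
import math, random

random.seed(0)
f = math.log    # log is concave on (0, infinity)
for _ in range(100_000):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    a = random.random()
    # Concavity: f at the convex combination dominates the combination of values
    # (a small tolerance absorbs floating-point rounding).
    assert f(a * x + (1 - a) * y) >= a * f(x) + (1 - a) * f(y) - 1e-12
print("log satisfies the concavity inequality on all sampled triples")
```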
https://en.wikipedia.org/wiki/Concave_function
In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle θ, the sine and cosine functions are denoted as sin(θ) and cos(θ).

The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle. More modern definitions express the sine and cosine as infinite series, or as the solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.

The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period.

To define the sine and cosine of an acute angle α, start with a right triangle that contains an angle of measure α; in the accompanying figure, angle α in a right triangle ABC is the angle of interest. The three sides of the triangle are named as follows:[1] the opposite side is the side opposite the angle of interest; the hypotenuse is the side opposite the right angle, and is always the longest side; the adjacent side is the remaining side, lying between the angle of interest and the right angle.

Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse:[1]

sin(α) = opposite / hypotenuse,  cos(α) = adjacent / hypotenuse.

The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides, or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as:[1]

tan(θ) = sin(θ)/cos(θ) = opposite/adjacent,
cot(θ) = 1/tan(θ) = adjacent/opposite,
csc(θ) = 1/sin(θ) = hypotenuse/opposite,
sec(θ) = 1/cos(θ) = hypotenuse/adjacent.

As stated, the values sin(α) and cos(α) appear to depend on the choice of a right triangle containing an angle of measure α. However, this is not the case, as all such triangles are similar, and so the ratios are the same for each of them.
For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is √2; therefore, sin 45° = cos 45° = √2/2.[2] The following table shows the special values of sine and cosine for inputs between 0 and π/2, in both degrees and radians; angles other than these five can be obtained by using a calculator.[3][4]

angle α:  0° (0)   30° (π/6)   45° (π/4)   60° (π/3)   90° (π/2)
sin(α):   0        1/2         √2/2        √3/2        1
cos(α):   1        √3/2        √2/2        1/2         0

The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known.[5] Given a triangle ABC with sides a, b, and c, and angles opposite those sides α, β, and γ, the law states

sin α / a = sin β / b = sin γ / c.

This is equivalent to the equality of the first three expressions below:

a / sin α = b / sin β = c / sin γ = 2R,

where R is the triangle's circumradius.

The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known.[5] The law states

a² + b² − 2ab cos(γ) = c².

In the case where γ = π/2, from which cos(γ) = 0, the resulting equation becomes the Pythagorean theorem.[6]

The cross product and dot product are operations on two vectors in Euclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. If a and b are vectors, and θ is the angle between a and b, then sine and cosine can be defined as:

sin(θ) = |a × b| / (|a| |b|),  cos(θ) = (a · b) / (|a| |b|).

The sine and cosine functions may also be defined in a more general way by using the unit circle, a circle of radius one centered at the origin (0, 0), formulated as the equation x² + y² = 1 in the Cartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle of θ with the positive half of the x-axis. The x- and y-coordinates of this point of intersection are equal to cos(θ) and sin(θ), respectively; that is,[7]

sin(θ) = y,  cos(θ) = x.

This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π/2, because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y-coordinate. A similar argument can be made for the cosine function, whose value equals the x-coordinate, even under the new definition using the unit circle.[8][9]
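To make the vector definitions given above concrete, here is a brief NumPy sketch; the two vectors are arbitrary choices that happen to form a 45-degree angle.

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

norm = np.linalg.norm
sin_theta = norm(np.cross(a, b)) / (norm(a) * norm(b))   # |a x b| / (|a||b|)
cos_theta = np.dot(a, b) / (norm(a) * norm(b))           # (a . b) / (|a||b|)

print(sin_theta, cos_theta)                    # both ~0.7071
print(np.degrees(np.arctan2(sin_theta, cos_theta)))  # ~45.0
```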
Using the unit circle definition has the advantage of allowing the graphs of the sine and cosine functions to be drawn by rotating a point counterclockwise along the circumference of the circle, depending on the input θ > 0. In a sine function, if the input is θ = π/2, the point has been rotated counterclockwise exactly onto the y-axis. If θ = π, the point is halfway around the circle. If θ = 2π, the point has returned to its starting position. It follows that both the sine and cosine functions have range −1 ≤ y ≤ 1.[10]

Extending the angle to any real value, the point rotates counterclockwise continuously. This can be done similarly for the cosine function as well, although its value is read from the x-coordinate of the rotating point. In other words, both sine and cosine functions are periodic, meaning that adding one full revolution to any angle leaves their values unchanged. Mathematically,[11]

sin(θ + 2π) = sin(θ),  cos(θ + 2π) = cos(θ).

A function f is said to be odd if f(−x) = −f(x), and is said to be even if f(−x) = f(x). The sine function is odd, whereas the cosine function is even.[12] The two functions are otherwise similar, differing by a shift of π/2. This means,[13]

sin(θ) = cos(π/2 − θ),  cos(θ) = sin(π/2 − θ).

Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin(0) = 0. The only real fixed point of the cosine function is called the Dottie number: the unique real root of the equation cos(x) = x, with decimal expansion approximately 0.739085.[14]
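A quick way to see the Dottie number numerically is fixed-point iteration, sketched below; the starting value and iteration count are arbitrary.

```python
import math

x = 1.0
for _ in range(100):
    x = math.cos(x)       # iterate x <- cos(x); converges to the fixed point
print(x)                  # ~0.7390851332151607, the Dottie number
```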
The sine and cosine functions are infinitely differentiable.[15] The derivative of sine is cosine, and the derivative of cosine is negative sine:[16]

d/dx sin(x) = cos(x),  d/dx cos(x) = −sin(x).

Continuing to higher-order derivatives produces the same four functions over again; in particular, the fourth derivative of sine is the sine itself.[15] These derivatives can be applied in the first derivative test, according to which the monotonicity of a function on an interval is determined by the sign of its first derivative,[17] and in the second derivative test, according to which the concavity of a function is determined by the sign of its second derivative.[18] The following table shows the sign, monotonicity and convexity of both sine and cosine in each quarter period; the positive sign (+) denotes that a graph is increasing (going upward) and the negative sign (−) that it is decreasing (going downward).[19] This information can be represented as a Cartesian coordinate system divided into four quadrants.

quadrant   interval         sine: sign, monotonicity, convexity   cosine: sign, monotonicity, convexity
I          0 < x < π/2      +, increasing, concave                +, decreasing, concave
II         π/2 < x < π      +, decreasing, concave                −, decreasing, convex
III        π < x < 3π/2     −, decreasing, convex                 −, increasing, convex
IV         3π/2 < x < 2π    −, increasing, convex                 +, increasing, concave

Both sine and cosine functions can be defined by using differential equations. The pair (cos θ, sin θ) is the solution (x(θ), y(θ)) to the two-dimensional system of differential equations y′(θ) = x(θ) and x′(θ) = −y(θ) with the initial conditions y(0) = 0 and x(0) = 1. One could interpret the unit circle in the above definitions as defining the phase space trajectory of this system of differential equations, starting from the given initial conditions.

The area under their curves over a bounded interval can be obtained by using the integral. Their antiderivatives are:

∫ sin(x) dx = −cos(x) + C,  ∫ cos(x) dx = sin(x) + C,

where C denotes the constant of integration.[20] These antiderivatives may be applied to compute geometric properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 and t is

∫₀ᵗ √(1 + cos²(x)) dx = √2 E(t, 1/√2),

where E(φ, k) is the incomplete elliptic integral of the second kind with modulus k. It cannot be expressed using elementary functions.[21] In the case of a full period, the arc length is

L = 4√(2π³)/Γ(1/4)² + Γ(1/4)²/√(2π) = 2π/ϖ + 2ϖ ≈ 7.6404…,

where Γ is the gamma function and ϖ is the lemniscate constant.[22]
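A numerical cross-check of the full-period arc length is straightforward with SciPy; this is a sketch with default quadrature tolerances.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Arc length of one full period of the sine curve by direct quadrature.
arc, _ = quad(lambda x: np.sqrt(1.0 + np.cos(x) ** 2), 0.0, 2.0 * np.pi)
print(arc)  # ~7.6404

# The closed form in terms of the gamma function gives the same value.
g = gamma(0.25)
print(4 * np.sqrt(2 * np.pi**3) / g**2 + g**2 / np.sqrt(2 * np.pi))
```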
The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin⁻¹.[23] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos⁻¹.[a] As sine and cosine are not injective, their inverses are not exact inverse functions, but partial inverse functions. For example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, and so on. It follows that the arcsine function is multivalued: arcsin(0) = 0, but also arcsin(0) = π, arcsin(0) = 2π, and so on. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain, the expression arcsin(x) will evaluate only to a single value, called its principal value. The standard range of principal values for arcsin is from −π/2 to π/2, and the standard range for arccos is from 0 to π.[24]

The inverse functions of sine and cosine are defined as:

θ = arcsin(opposite/hypotenuse) = arccos(adjacent/hypotenuse),

where, for some integer k,

sin(y) = x ⟺ y = arcsin(x) + 2πk, or y = π − arcsin(x) + 2πk,
cos(y) = x ⟺ y = arccos(x) + 2πk, or y = −arccos(x) + 2πk.

By definition, both functions satisfy the equations

sin(arcsin(x)) = x,  cos(arccos(x)) = x,

and

arcsin(sin(θ)) = θ for −π/2 ≤ θ ≤ π/2,  arccos(cos(θ)) = θ for 0 ≤ θ ≤ π.

According to the Pythagorean theorem, the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of this formula by the squared hypotenuse results in the Pythagorean trigonometric identity: the sum of a squared sine and a squared cosine equals 1:[25][b]

sin²(θ) + cos²(θ) = 1.

Sine and cosine satisfy the following double-angle formulas:[26]

sin(2θ) = 2 sin(θ) cos(θ),
cos(2θ) = cos²(θ) − sin²(θ) = 2cos²(θ) − 1 = 1 − 2sin²(θ).

The cosine double-angle formula implies that sin² and cos² are, themselves, shifted and scaled sine waves.
Specifically,[27]

sin²(θ) = (1 − cos(2θ))/2,  cos²(θ) = (1 + cos(2θ))/2.

The sine and sine squared curves have the same shape, but sine squared has only positive values and twice the number of periods.

Both sine and cosine functions can be defined by using a Taylor series, a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation, the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin(x) are cos(x), −sin(x), −cos(x), sin(x), continuing to repeat those four functions. The (4n+k)-th derivative, evaluated at the point 0, is

sin⁽⁴ⁿ⁺ᵏ⁾(0) = 0 when k = 0,  1 when k = 1,  0 when k = 2,  −1 when k = 3,

where the superscript represents repeated differentiation. This implies the following Taylor series expansion at x = 0. One can then use the theory of Taylor series to show that the following identities hold for all real numbers x, where x is the angle in radians,[28] and more generally for all complex numbers:[29]

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ = Σₙ₌₀^∞ (−1)ⁿ x²ⁿ⁺¹ / (2n+1)!.

Taking the derivative of each term gives the Taylor series for cosine:[28][29]

cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯ = Σₙ₌₀^∞ (−1)ⁿ x²ⁿ / (2n)!.

Sine and cosine functions with multiple angles may appear in a linear combination, resulting in a polynomial. Such a polynomial is known as a trigonometric polynomial. Trigonometric polynomials have ample applications, for instance in trigonometric interpolation and in the extension to periodic functions known as the Fourier series. Let aₙ and bₙ be any coefficients; then the trigonometric polynomial of degree N, denoted T(x), is defined as:[30][31]

T(x) = a₀ + Σₙ₌₁^N aₙ cos(nx) + Σₙ₌₁^N bₙ sin(nx).

The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite version. Let Aₙ and Bₙ be any coefficients; then the trigonometric series can be defined as:[32]

(1/2)A₀ + Σₙ₌₁^∞ (Aₙ cos(nx) + Bₙ sin(nx)).

In the case of a Fourier series with a given integrable function f, the coefficients of the trigonometric series are:[33]

Aₙ = (1/π) ∫₀^{2π} f(x) cos(nx) dx,
Bₙ = (1/π) ∫₀^{2π} f(x) sin(nx) dx.
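The coefficient integrals above are easy to approximate numerically. The sketch below uses simple Riemann sums with NumPy for the sample function f(x) = x on [0, 2π), whose classical Fourier series is x = π − 2 Σ sin(nx)/n; the function and grid size are arbitrary illustrative choices.

```python
import numpy as np

N = 100_000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = x                                 # illustrative integrable function
dx = 2.0 * np.pi / N

def A(n):  # A_n = (1/pi) * integral of f(x) cos(nx) dx over one period
    return (1.0 / np.pi) * np.sum(f * np.cos(n * x)) * dx

def B(n):  # B_n = (1/pi) * integral of f(x) sin(nx) dx over one period
    return (1.0 / np.pi) * np.sum(f * np.sin(n * x)) * dx

# For f(x) = x, theory predicts A(n) ~ 0 and B(n) ~ -2/n for n >= 1.
print(A(1), B(1), B(2))   # ~0.0, ~-2.0, ~-1.0
```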
Both sine and cosine can be extended further via complex numbers, the set of numbers composed of both real and imaginary parts. For a real number θ, the definitions of the sine and cosine functions can be extended to the complex plane in terms of an exponential function as follows:[34]

sin(θ) = (e^{iθ} − e^{−iθ}) / (2i),  cos(θ) = (e^{iθ} + e^{−iθ}) / 2.

Alternatively, both functions can be defined in terms of Euler's formula:[34]

e^{iθ} = cos(θ) + i sin(θ),  e^{−iθ} = cos(θ) − i sin(θ).

When plotted on the complex plane, the function e^{ix} for real values of x traces out the unit circle in the complex plane. Both sine and cosine functions may be obtained as the imaginary and real parts of e^{iθ}:[35]

sin θ = Im(e^{iθ}),  cos θ = Re(e^{iθ}).

When z = x + iy for real values x and y, where i = √−1, both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as

sin z = sin x cosh y + i cos x sinh y,
cos z = cos x cosh y − i sin x sinh y.

Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates (r, θ):

z = r(cos(θ) + i sin(θ)),

and the real and imaginary parts are

Re(z) = r cos(θ),  Im(z) = r sin(θ),

where r and θ represent the magnitude and angle of the complex number z. For any real number θ, Euler's formula in terms of polar coordinates is stated as z = r e^{iθ}.

Applying the series definitions of sine and cosine to a complex argument z gives

sin(z) = −i sinh(iz),  cos(z) = cosh(iz),

where sinh and cosh are the hyperbolic sine and cosine. These are entire functions.
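Both complex-argument identities above can be spot-checked with Python's standard cmath module; the test point is arbitrary.

```python
import cmath, math

z = complex(1.2, -0.7)
x, y = z.real, z.imag

# sin z = sin x cosh y + i cos x sinh y
lhs = cmath.sin(z)
rhs = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
print(abs(lhs - rhs))                                   # ~0

# sin z = -i sinh(iz)
print(abs(cmath.sin(z) - (-1j) * cmath.sinh(1j * z)))   # ~0
```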
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument:

sin(x + iy) = sin(x) cosh(y) + i cos(x) sinh(y),
cos(x + iy) = cos(x) cosh(y) − i sin(x) sinh(y).

Using the partial fraction expansion technique in complex analysis, one can find that the infinite series

Σₙ₌₋∞^∞ (−1)ⁿ/(z − n) = 1/z − 2z Σₙ₌₁^∞ (−1)ⁿ/(n² − z²)

both converge and are equal to π/sin(πz). Similarly, one can show that

π²/sin²(πz) = Σₙ₌₋∞^∞ 1/(z − n)².

Using the product expansion technique, one can derive

sin(πz) = πz Πₙ₌₁^∞ (1 − z²/n²).

sin(z) is found in the functional equation for the Gamma function,

Γ(z) Γ(1 − z) = π / sin(πz),

which in turn is found in the functional equation for the Riemann zeta function,

ζ(s) = 2^s π^{s−1} sin(πs/2) Γ(1 − s) ζ(1 − s).

As a holomorphic function, sin z is a 2D solution of Laplace's equation:

Δu(x, y) = u_{xx} + u_{yy} = 0.

The complex sine function is also related to the level curves of pendulums.[36]

The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string', or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to the visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā; sine and chord are closely related in a circle of unit diameter, see Ptolemy's theorem). This was transliterated in Arabic as jība, which is meaningless in that language and written as jb (جب). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb (جيب), which means 'bosom', 'pocket', or 'fold'.[37][38] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast').[39][40][41] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him, and there is evidence of even earlier usage.[42][43] The English form sine was introduced in Thomas Fale's 1593 Horologiographia.[44]

The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle', as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens.[45]

While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE).[46]

The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period (Aryabhatiya and Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[39][47]

All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles.[48]
Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents.[49][50] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.[50]

In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin, cos, and tan; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first work in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.

In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x.[51] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722).[52] Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.[39]

There is no standard algorithm for calculating sine and cosine. IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs.[53]

Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²).

A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage.

The CORDIC algorithm is commonly used in scientific calculators.

The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387.

In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating-point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library and use a double-precision floating-point format.
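The lookup-table approach described above is easy to sketch; the one-entry-per-degree table size and the simple wrap-around are arbitrary choices for illustration.

```python
import math

TABLE = [math.sin(math.radians(d)) for d in range(361)]  # one entry per degree

def sin_lut(degrees: float) -> float:
    """Approximate sine from a precomputed table with linear interpolation."""
    d = degrees % 360.0
    i = int(d)                       # lower table index
    frac = d - i                     # position between the two nearest entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

print(sin_lut(30.0), math.sin(math.radians(30.0)))    # 0.5 vs 0.5
print(sin_lut(47.3) - math.sin(math.radians(47.3)))   # small interpolation error
```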
Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees or π radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases.[54][55] These functions are called sinpi and cospi in MATLAB,[54] OpenCL,[56] R,[55] Julia,[57] CUDA,[58] and ARM.[59] For example, sinpi(x) would evaluate to sin(πx), where x is expressed in half-turns; consequently the final input to the function, πx, can be interpreted in radians by sin.

The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating point or fixed point. In contrast, representing 2π, π, and π/2 in binary floating point or binary scaled fixed point always involves a loss of accuracy, since irrational numbers cannot be represented with finitely many binary digits.

Turns also have accuracy and efficiency advantages for reduction modulo one period. Computing modulo 1 turn or modulo 2 half-turns can be done losslessly and efficiently in both floating point and fixed point. For example, computing modulo 1 or modulo 2 for a binary-point-scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π/2 involves inaccuracies in representing π/2.

For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution.[60] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π/2048 would be incurred.
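A hedged sketch of a sinpi-style function follows: the argument, given in half-turns, is reduced exactly modulo 2 before scaling by π, and integer and quarter-turn inputs are special-cased so they return exact values. The exact set of special cases is a design choice, not a standardized behavior.

```python
import math

def sinpi(x: float) -> float:
    """sin(pi * x), with x in half-turns; exact at integer and quarter turns."""
    r = math.fmod(x, 2.0)            # exact range reduction in binary floating point
    if r == math.floor(r):           # whole half-turns: sine is exactly 0
        return 0.0
    if abs(r) == 0.5:                # quarter turns: exactly +1 or -1
        return math.copysign(1.0, r)
    return math.sin(math.pi * r)     # general case: fall back to radians

print(sinpi(1.0), math.sin(math.pi * 1.0))  # 0.0 vs ~1.22e-16
print(sinpi(0.5))                            # exactly 1.0
```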
https://en.wikipedia.org/wiki/Cosine#Properties
A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent approach to generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014.[1] In a GAN, two neural networks compete with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.

Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning,[2] fully supervised learning,[3] and reinforcement learning.[4]

The core idea of a GAN is based on "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, and which is itself updated dynamically.[5] This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner. GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.

The original GAN is defined as the following game:[1]

Each probability space (Ω, μ_ref) defines a GAN game. There are two players: the generator and the discriminator. The generator's strategy set is P(Ω), the set of all probability measures μ_G on Ω. The discriminator's strategy set is the set of Markov kernels μ_D : Ω → P[0,1], where P[0,1] is the set of probability measures on [0,1].

The GAN game is a zero-sum game, with objective function

L(μ_G, μ_D) := E_{x∼μ_ref, y∼μ_D(x)}[ln y] + E_{x∼μ_G, y∼μ_D(x)}[ln(1 − y)].

The generator aims to minimize the objective, and the discriminator aims to maximize the objective.

The generator's task is to approach μ_G ≈ μ_ref, that is, to match its own output distribution as closely as possible to the reference distribution. The discriminator's task is to output a value close to 1 when the input appears to be from the reference distribution, and to output a value close to 0 when the input looks like it came from the generator distribution.

The generative network generates candidates while the discriminative network evaluates them.[1] The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network, i.e. to "fool" the discriminator by producing novel candidates that the discriminator judges to be real, part of the true data distribution.[1][6]

A known dataset serves as the initial training data for the discriminator.
Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically, the generator is seeded with randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Independent backpropagation procedures are applied to both networks so that the generator produces better samples, while the discriminator becomes more skilled at flagging synthetic samples.[7] When used for image generation, the generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.

GANs are implicit generative models,[8] which means that they do not explicitly model the likelihood function, nor provide a means for finding the latent variable corresponding to a given sample, unlike alternatives such as flow-based generative models.

Compared to fully visible belief networks such as WaveNet and PixelRNN, and autoregressive models in general, GANs can generate one complete sample in one pass, rather than requiring multiple passes through the network. Compared to Boltzmann machines and linear ICA, there is no restriction on the type of function used by the network. Since neural networks are universal approximators, GANs are asymptotically consistent. Variational autoencoders might be universal approximators, but this had not been proven as of 2017.[9]

This section provides some of the mathematical theory behind these methods.

In modern probability theory based on measure theory, a probability space also needs to be equipped with a σ-algebra. As a result, a more rigorous definition of the GAN game would make the following changes: each probability space (Ω, B, μ_ref) defines a GAN game; the generator's strategy set is P(Ω, B), the set of all probability measures μ_G on the measure space (Ω, B); and the discriminator's strategy set is the set of Markov kernels μ_D : (Ω, B) → P([0,1], B([0,1])), where B([0,1]) is the Borel σ-algebra on [0,1]. Since issues of measurability never arise in practice, these will not concern us further.

In the most generic version of the GAN game described above, the strategy set for the discriminator contains all Markov kernels μ_D : Ω → P[0,1], and the strategy set for the generator contains arbitrary probability distributions μ_G on Ω. However, as shown below, the optimal discriminator strategy against any μ_G is deterministic, so there is no loss of generality in restricting the discriminator's strategies to deterministic functions D : Ω → [0,1]. In most applications, D is a deep neural network function.

As for the generator, while μ_G could theoretically be any computable probability distribution, in practice it is usually implemented as a pushforward: μ_G = μ_Z ∘ G⁻¹.
That is, start with a random variablez∼μZ{\displaystyle z\sim \mu _{Z}}, whereμZ{\displaystyle \mu _{Z}}is a probability distribution that is easy to compute (such as theuniform distribution, or theGaussian distribution), then define a functionG:ΩZ→Ω{\displaystyle G:\Omega _{Z}\to \Omega }. Then the distributionμG{\displaystyle \mu _{G}}is the distribution ofG(z){\displaystyle G(z)}. Consequently, the generator's strategy is usually defined as justG{\displaystyle G}, leavingz∼μZ{\displaystyle z\sim \mu _{Z}}implicit. In this formalism, the GAN game objective isL(G,D):=Ex∼μref⁡[ln⁡D(x)]+Ez∼μZ⁡[ln⁡(1−D(G(z)))].{\displaystyle L(G,D):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z)))].} The GAN architecture has two main components. One is casting optimization into a game, of formminGmaxDL(G,D){\displaystyle \min _{G}\max _{D}L(G,D)}, which is different from the usual kind of optimization, of formminθL(θ){\displaystyle \min _{\theta }L(\theta )}. The other is the decomposition ofμG{\displaystyle \mu _{G}}intoμZ∘G−1{\displaystyle \mu _{Z}\circ G^{-1}}, which can be understood as a reparametrization trick. To see its significance, one must compare GAN with previous methods for learning generative models, which were plagued with "intractable probabilistic computations that arise in maximum likelihood estimation and related strategies".[1] At the same time, Kingma and Welling[10]and Rezende et al.[11]developed the same idea of reparametrization into a general stochastic backpropagation method. Among its first applications was thevariational autoencoder. In the original paper, as well as most subsequent papers, it is usually assumed that the generatormoves first, and the discriminatormoves second, thus giving the following minimax game:minμGmaxμDL(μG,μD):=Ex∼μref,y∼μD(x)⁡[ln⁡y]+Ex∼μG,y∼μD(x)⁡[ln⁡(1−y)].{\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\operatorname {E} _{x\sim \mu _{G},y\sim \mu _{D}(x)}[\ln(1-y)].} If both the generator's and the discriminator's strategy sets are spanned by a finite number of strategies, then by theminimax theorem,minμGmaxμDL(μG,μD)=maxμDminμGL(μG,μD){\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=\max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})}that is, the move order does not matter. However, since neither strategy set is finitely spanned, the minimax theorem does not apply, and the idea of an "equilibrium" becomes delicate. To wit, there are the following different concepts of equilibrium: the sequential equilibrium in which the generator moves first and the discriminator moves second, attaining the valueminμGmaxμDL(μG,μD){\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})}; the sequential equilibrium in which the discriminator moves first and the generator moves second, attaining the valuemaxμDminμGL(μG,μD){\displaystyle \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})}; and the Nash equilibrium(μ^D,μ^G){\displaystyle ({\hat {\mu }}_{D},{\hat {\mu }}_{G})}, which is stable under simultaneous moves, each strategy being an optimal reply to the other. For general games, these equilibria do not have to agree, or even to exist. For the original GAN game, these equilibria all exist, and are all equal. However, for more general GAN games, these do not necessarily exist, or agree.[12] The original GAN paper proved the following two theorems:[1] Theorem(the optimal discriminator computes the Jensen–Shannon divergence)—For any fixed generator strategyμG{\displaystyle \mu _{G}}, let the optimal reply beD∗=arg⁡maxDL(μG,D){\displaystyle D^{*}=\arg \max _{D}L(\mu _{G},D)}, then D∗(x)=dμrefd(μref+μG)L(μG,D∗)=2DJS(μref;μG)−2ln⁡2{\displaystyle {\begin{aligned}D^{*}(x)&={\frac {d\mu _{\text{ref}}}{d(\mu _{\text{ref}}+\mu _{G})}}\\[6pt]L(\mu _{G},D^{*})&=2D_{JS}(\mu _{\text{ref}};\mu _{G})-2\ln 2\end{aligned}}} where the derivative is theRadon–Nikodym derivative, andDJS{\displaystyle D_{JS}}is theJensen–Shannon divergence.
By Jensen's inequality, Ex∼μref,y∼μD(x)⁡[ln⁡y]≤Ex∼μref⁡[ln⁡Ey∼μD(x)⁡[y]]{\displaystyle \operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]\leq \operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln \operatorname {E} _{y\sim \mu _{D}(x)}[y]]}and similarly for the other term. Therefore, the optimal reply can be deterministic, i.e.μD(x)=δD(x){\displaystyle \mu _{D}(x)=\delta _{D(x)}}for some functionD:Ω→[0,1]{\displaystyle D:\Omega \to [0,1]}, in which case L(μG,μD):=Ex∼μref⁡[ln⁡D(x)]+Ex∼μG⁡[ln⁡(1−D(x))].{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))].} To define suitable density functions, we define a base measureμ:=μref+μG{\displaystyle \mu :=\mu _{\text{ref}}+\mu _{G}}, which allows us to take the Radon–Nikodym derivatives ρref=dμrefdμρG=dμGdμ{\displaystyle \rho _{\text{ref}}={\frac {d\mu _{\text{ref}}}{d\mu }}\quad \rho _{G}={\frac {d\mu _{G}}{d\mu }}}withρref+ρG=1{\displaystyle \rho _{\text{ref}}+\rho _{G}=1}. We then have L(μG,μD):=∫μ(dx)[ρref(x)ln⁡(D(x))+ρG(x)ln⁡(1−D(x))].{\displaystyle L(\mu _{G},\mu _{D}):=\int \mu (dx)\left[\rho _{\text{ref}}(x)\ln(D(x))+\rho _{G}(x)\ln(1-D(x))\right].} The integrand is just the negativecross-entropybetween two Bernoulli random variables with parametersρref(x){\displaystyle \rho _{\text{ref}}(x)}andD(x){\displaystyle D(x)}. We can write this as−H(ρref(x))−DKL(ρref(x)∥D(x)){\displaystyle -H(\rho _{\text{ref}}(x))-D_{KL}(\rho _{\text{ref}}(x)\parallel D(x))}, whereH{\displaystyle H}is thebinary entropy function, so L(μG,μD)=−∫μ(dx)(H(ρref(x))+DKL(ρref(x)∥D(x))).{\displaystyle L(\mu _{G},\mu _{D})=-\int \mu (dx)(H(\rho _{\text{ref}}(x))+D_{KL}(\rho _{\text{ref}}(x)\parallel D(x))).} This means that the optimal strategy for the discriminator isD(x)=ρref(x){\displaystyle D(x)=\rho _{\text{ref}}(x)}, withL(μG,μD∗)=−∫μ(dx)H(ρref(x))=2DJS(μref∥μG)−2ln⁡2{\displaystyle L(\mu _{G},\mu _{D}^{*})=-\int \mu (dx)H(\rho _{\text{ref}}(x))=2D_{JS}(\mu _{\text{ref}}\parallel \mu _{G})-2\ln 2} after routine calculation. Interpretation: For any fixed generator strategyμG{\displaystyle \mu _{G}}, the optimal discriminator keeps track of the likelihood ratio between the reference distribution and the generator distribution:D(x)1−D(x)=dμrefdμG(x)=μref(dx)μG(dx);D(x)=σ(ln⁡μref(dx)−ln⁡μG(dx)){\displaystyle {\frac {D(x)}{1-D(x)}}={\frac {d\mu _{\text{ref}}}{d\mu _{G}}}(x)={\frac {\mu _{\text{ref}}(dx)}{\mu _{G}(dx)}};\quad D(x)=\sigma (\ln \mu _{\text{ref}}(dx)-\ln \mu _{G}(dx))}whereσ{\displaystyle \sigma }is thelogistic function.
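On a finite sample space the theorem can be checked numerically. A small NumPy sketch (the two distributions are arbitrary choices):

import numpy as np

mu_ref = np.array([0.1, 0.2, 0.3, 0.4])
mu_G   = np.array([0.4, 0.3, 0.2, 0.1])

# Optimal discriminator: the Radon-Nikodym derivative d mu_ref / d(mu_ref + mu_G).
D_star = mu_ref / (mu_ref + mu_G)

value = np.sum(mu_ref * np.log(D_star)) + np.sum(mu_G * np.log(1 - D_star))

m   = 0.5 * (mu_ref + mu_G)                    # midpoint distribution
kl  = lambda p, q: np.sum(p * np.log(p / q))   # Kullback-Leibler divergence
jsd = 0.5 * kl(mu_ref, m) + 0.5 * kl(mu_G, m)  # Jensen-Shannon divergence

print(np.isclose(value, 2 * jsd - 2 * np.log(2)))  # True

When mu_G equals mu_ref, D_star is identically 1/2 and the value reduces to −2 ln 2, matching the equilibrium theorem below.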
In particular, if the prior probability for an imagex{\displaystyle x}to come from the reference distribution is equal to12{\displaystyle {\frac {1}{2}}}, thenD(x){\displaystyle D(x)}is just the posterior probability thatx{\displaystyle x}came from the reference distribution:D(x)=Pr(xcame from reference distribution∣x).{\displaystyle D(x)=\Pr(x{\text{ came from reference distribution}}\mid x).} Theorem(the unique equilibrium point)—For any GAN game, there exists a pair(μ^D,μ^G){\displaystyle ({\hat {\mu }}_{D},{\hat {\mu }}_{G})}that is both a sequential equilibrium and a Nash equilibrium: L(μ^G,μ^D)=minμGmaxμDL(μG,μD)=maxμDminμGL(μG,μD)=−2ln⁡2μ^D∈arg⁡maxμDminμGL(μG,μD),μ^G∈arg⁡minμGmaxμDL(μG,μD)μ^D∈arg⁡maxμDL(μ^G,μD),μ^G∈arg⁡minμGL(μG,μ^D)∀x∈Ω,μ^D(x)=δ12,μ^G=μref{\displaystyle {\begin{aligned}&L({\hat {\mu }}_{G},{\hat {\mu }}_{D})=\min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=&\max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=-2\ln 2\\[6pt]&{\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D}),&\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})\\[6pt]&{\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}L({\hat {\mu }}_{G},\mu _{D}),&\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}L(\mu _{G},{\hat {\mu }}_{D})\\[6pt]&\forall x\in \Omega ,{\hat {\mu }}_{D}(x)=\delta _{\frac {1}{2}},&\quad {\hat {\mu }}_{G}=\mu _{\text{ref}}\end{aligned}}} That is, the generator perfectly mimics the reference, and the discriminator outputs12{\displaystyle {\frac {1}{2}}}deterministically on all inputs. From the previous proposition, arg⁡minμGmaxμDL(μG,μD)=μref;minμGmaxμDL(μG,μD)=−2ln⁡2.{\displaystyle \arg \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=\mu _{\text{ref}};\quad \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=-2\ln 2.} For any fixed discriminator strategyμD{\displaystyle \mu _{D}}, anyμG{\displaystyle \mu _{G}}concentrated on the set {x∣Ey∼μD(x)⁡[ln⁡(1−y)]=infxEy∼μD(x)⁡[ln⁡(1−y)]}{\displaystyle \{x\mid \operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)]=\inf _{x}\operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)]\}}is an optimal strategy for the generator. Thus, arg⁡maxμDminμGL(μG,μD)=arg⁡maxμDEx∼μref,y∼μD(x)⁡[ln⁡y]+infxEy∼μD(x)⁡[ln⁡(1−y)].{\displaystyle \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=\arg \max _{\mu _{D}}\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\inf _{x}\operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)].} By Jensen's inequality, the discriminator can only improve by adopting the deterministic strategy of always playingD(x)=Ey∼μD(x)⁡[y]{\displaystyle D(x)=\operatorname {E} _{y\sim \mu _{D}(x)}[y]}. 
Therefore, arg⁡maxμDminμGL(μG,μD)=arg⁡maxDEx∼μref⁡[ln⁡D(x)]+infxln⁡(1−D(x)){\displaystyle \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=\arg \max _{D}\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\inf _{x}\ln(1-D(x))} By Jensen's inequality, ln⁡Ex∼μref⁡[D(x)]+infxln⁡(1−D(x))=ln⁡Ex∼μref⁡[D(x)]+ln⁡(1−supxD(x))=ln⁡[Ex∼μref⁡[D(x)](1−supxD(x))]≤ln⁡[supxD(x)(1−supxD(x))]≤ln⁡14,{\displaystyle {\begin{aligned}&\ln \operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]+\inf _{x}\ln(1-D(x))\\[6pt]={}&\ln \operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]+\ln(1-\sup _{x}D(x))\\[6pt]={}&\ln[\operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)](1-\sup _{x}D(x))]\leq \ln[\sup _{x}D(x)(1-\sup _{x}D(x))]\leq \ln {\frac {1}{4}},\end{aligned}}} with equality ifD(x)=12{\displaystyle D(x)={\frac {1}{2}}}, so ∀x∈Ω,μ^D(x)=δ12;maxμDminμGL(μG,μD)=−2ln⁡2.{\displaystyle \forall x\in \Omega ,{\hat {\mu }}_{D}(x)=\delta _{\frac {1}{2}};\quad \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=-2\ln 2.} Finally, to check that this is a Nash equilibrium, note that whenμG=μref{\displaystyle \mu _{G}=\mu _{\text{ref}}}, we have L(μG,μD):=Ex∼μref,y∼μD(x)⁡[ln⁡(y(1−y))]{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln(y(1-y))]}which is always maximized byy=12{\displaystyle y={\frac {1}{2}}}. When∀x∈Ω,μD(x)=δ12{\displaystyle \forall x\in \Omega ,\mu _{D}(x)=\delta _{\frac {1}{2}}}, any strategy is optimal for the generator. While the GAN game has a unique global equilibrium point when both the generator and discriminator have access to their entire strategy sets, the equilibrium is no longer guaranteed when they have a restricted strategy set.[12] In practice, the generator has access only to measures of formμZ∘Gθ−1{\displaystyle \mu _{Z}\circ G_{\theta }^{-1}}, whereGθ{\displaystyle G_{\theta }}is a function computed by a neural network with parametersθ{\displaystyle \theta }, andμZ{\displaystyle \mu _{Z}}is an easily sampled distribution, such as the uniform or normal distribution. Similarly, the discriminator has access only to functions of formDζ{\displaystyle D_{\zeta }}, a function computed by a neural network with parametersζ{\displaystyle \zeta }. These restricted strategy sets take up avanishingly small proportionof their entire strategy sets.[13] Further, even if an equilibrium still exists, it can only be found by searching in the high-dimensional space of all possible neural network functions. The standard strategy of usinggradient descentto find the equilibrium often does not work for GANs, and often the game "collapses" into one of several failure modes. To improve the convergence stability, some training strategies start with an easier task, such as generating low-resolution images[14]or simple images (one object with uniform background),[15]and gradually increase the difficulty of the task during training. This essentially translates to applying a curriculum learning scheme.[16] GANs often suffer frommode collapse, where they fail to generalize properly, missing entire modes from the input data. For example, a GAN trained on theMNISTdataset containing many samples of each digit might only generate pictures of the digit 0. This was termed "the Helvetica scenario".[1] One way this can happen is if the generator learns too fast compared to the discriminator.
If the discriminatorD{\displaystyle D}is held constant, then the optimal generator would only output elements ofarg⁡maxxD(x){\displaystyle \arg \max _{x}D(x)}.[17]For example, if during GAN training on the MNIST dataset the discriminator happens for a few epochs to prefer the digit 0 slightly over the other digits, the generator may seize the opportunity to generate only the digit 0, and then be unable to escape the local minimum after the discriminator improves. Some researchers perceive the root problem to be a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice ofobjective function. Many solutions have been proposed, but it is still an open problem.[18][19] Even the state-of-the-art architecture, BigGAN (2019), could not avoid mode collapse. The authors resorted to "allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results".[20] Thetwo time-scale update rule (TTUR)was proposed to make GAN convergence more stable by making the learning rate of the generator lower than that of the discriminator. The authors argued that the generator should move slower than the discriminator, so that it does not "drive the discriminator steadily into new regions without capturing its gathered information". They proved that a general class of games that included the GAN game, when trained under TTUR, "converges under mild assumptions to a stationary local Nash equilibrium".[21] They also proposed using theAdamstochastic optimizer[22]to avoid mode collapse, as well as theFréchet inception distancefor evaluating GAN performance. Conversely, if the discriminator learns too fast compared to the generator, then the discriminator could almost perfectly distinguishμGθ,μref{\displaystyle \mu _{G_{\theta }},\mu _{\text{ref}}}. In that case, the generatorGθ{\displaystyle G_{\theta }}could be stuck with a very high loss no matter which direction it changes itsθ{\displaystyle \theta }, meaning that the gradient∇θL(Gθ,Dζ){\displaystyle \nabla _{\theta }L(G_{\theta },D_{\zeta })}would be close to zero. In that case, the generator cannot learn, a case of thevanishing gradientproblem.[13] Intuitively speaking, the discriminator is too good, and since the generator cannot take any small step (only small steps are considered in gradient descent) to improve its payoff, it does not even try. One important method for solving this problem is theWasserstein GAN. GANs are usually evaluated byInception score(IS), which measures how varied the generator's outputs are (as classified by an image classifier, usuallyInception-v3), orFréchet inception distance(FID), which measures how similar the generator's outputs are to a reference set (as measured by a learned image featurizer, such as Inception-v3 without its final layer). Many papers that propose new GAN architectures for image generation report how their architectures break thestate of the arton FID or IS.
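Once each feature set is summarized by a Gaussian, FID has a closed form: for means μ1, μ2 and covariances Σ1, Σ2 of the features of the reference and generated images, FID = ‖μ1 − μ2‖² + tr(Σ1 + Σ2 − 2(Σ1Σ2)^(1/2)). A sketch, assuming the Inception features have already been extracted into two arrays:

import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    # Fit a Gaussian to each feature set.
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # sqrtm may return tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)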
Another evaluation method is the Learned Perceptual Image Patch Similarity (LPIPS), which starts with a learned image featurizerfθ:Image→Rn{\displaystyle f_{\theta }:{\text{Image}}\to \mathbb {R} ^{n}}, and finetunes it by supervised learning on a set of(x,x′,perceptualdifference⁡(x,x′)){\displaystyle (x,x',\operatorname {perceptual~difference} (x,x'))}, wherex{\displaystyle x}is an image,x′{\displaystyle x'}is a perturbed version of it, andperceptualdifference⁡(x,x′){\displaystyle \operatorname {perceptual~difference} (x,x')}is how much they differ, as reported by human subjects. The model is finetuned so that it can approximate‖fθ(x)−fθ(x′)‖≈perceptualdifference⁡(x,x′){\displaystyle \|f_{\theta }(x)-f_{\theta }(x')\|\approx \operatorname {perceptual~difference} (x,x')}. This finetuned model is then used to defineLPIPS⁡(x,x′):=‖fθ(x)−fθ(x′)‖{\displaystyle \operatorname {LPIPS} (x,x'):=\|f_{\theta }(x)-f_{\theta }(x')\|}.[23] Other evaluation methods are reviewed elsewhere.[24] There is a veritable zoo of GAN variants.[25]Some of the most prominent are as follows: Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN. The generator in a GAN game generatesμG{\displaystyle \mu _{G}}, a probability distribution on the probability spaceΩ{\displaystyle \Omega }. This leads to the idea of a conditional GAN, where instead of generating one probability distribution onΩ{\displaystyle \Omega }, the generator generates a different probability distributionμG(c){\displaystyle \mu _{G}(c)}onΩ{\displaystyle \Omega }, for each given class labelc{\displaystyle c}. For example, for generating images that look likeImageNet, the generator should be able to generate a picture of a cat when given the class label "cat". In the original paper,[1]the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator. Concretely, the conditional GAN game is just the GAN game with class labels provided:L(μG,D):=Ec∼μC,x∼μref(c)⁡[ln⁡D(x,c)]+Ec∼μC,x∼μG(c)⁡[ln⁡(1−D(x,c))]{\displaystyle L(\mu _{G},D):=\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{\text{ref}}(c)}[\ln D(x,c)]+\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{G}(c)}[\ln(1-D(x,c))]}whereμC{\displaystyle \mu _{C}}is a probability distribution over classes,μref(c){\displaystyle \mu _{\text{ref}}(c)}is the probability distribution of real images of classc{\displaystyle c}, andμG(c){\displaystyle \mu _{G}(c)}is the probability distribution of images generated by the generator when given class labelc{\displaystyle c}. In 2017, a conditional GAN learned to generate 1000 image classes ofImageNet.[26] The GAN game is a general framework and can be run with any reasonable parametrization of the generatorG{\displaystyle G}and discriminatorD{\displaystyle D}. In the original paper, the authors demonstrated it usingmultilayer perceptronnetworks andconvolutional neural networks. Many alternative architectures have been tried. Deep convolutional GAN (DCGAN):[27]For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks.[28] Self-attention GAN (SAGAN):[29]Starts with the DCGAN, then adds residually-connected standardself-attention modulesto the generator and discriminator.
Variational autoencoder GAN (VAEGAN):[30]Uses avariational autoencoder(VAE) for the generator. Transformer GAN (TransGAN):[31]Uses the puretransformerarchitecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers. Flow-GAN:[32]Uses aflow-based generative modelfor the generator, allowing efficient computation of the likelihood function. Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator. Original GAN: We recast the original GAN objective into a form more convenient for comparison:{minDLD(D,μG)=−Ex∼μref⁡[ln⁡D(x)]−Ex∼μG⁡[ln⁡(1−D(x))]minGLG(D,μG)=Ex∼μG⁡[ln⁡(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}} Original GAN, non-saturating loss: This objective for the generator was recommended in the original paper for faster convergence.[1]LG=−Ex∼μG⁡[ln⁡D(x)]{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]}The effect of using this objective is analyzed in Section 2.2.2 of Arjovsky et al.[33] Original GAN, maximum likelihood: LG=−Ex∼μG⁡[(exp∘σ−1∘D)(x)]{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[({\exp }\circ \sigma ^{-1}\circ D)(x)]}whereσ{\displaystyle \sigma }is the logistic function. When the discriminator is optimal, the generator gradient is the same as inmaximum likelihood estimation, even though GANs cannot perform maximum likelihood estimation themselves.[34][35] Hinge lossGAN:[36]LD=−Ex∼pref⁡[min(0,−1+D(x))]−Ex∼μG⁡[min(0,−1−D(x))]{\displaystyle L_{D}=-\operatorname {E} _{x\sim p_{\text{ref}}}\left[\min \left(0,-1+D(x)\right)\right]-\operatorname {E} _{x\sim \mu _{G}}\left[\min \left(0,-1-D\left(x\right)\right)\right]}LG=−Ex∼μG⁡[D(x)]{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[D(x)]}Least squares GAN:[37]LD=Ex∼μref⁡[(D(x)−b)2]+Ex∼μG⁡[(D(x)−a)2]{\displaystyle L_{D}=\operatorname {E} _{x\sim \mu _{\text{ref}}}[(D(x)-b)^{2}]+\operatorname {E} _{x\sim \mu _{G}}[(D(x)-a)^{2}]}LG=Ex∼μG⁡[(D(x)−c)2]{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[(D(x)-c)^{2}]}wherea,b,c{\displaystyle a,b,c}are parameters to be chosen. The authors recommendeda=−1,b=1,c=0{\displaystyle a=-1,b=1,c=0}. The Wasserstein GAN modifies the GAN game at two points: the discriminator's strategy set is restricted to Lipschitz-continuous real-valued functionsD:Ω→R{\displaystyle D:\Omega \to \mathbb {R} }rather than functions valued in[0,1]{\displaystyle [0,1]}, and the objective contains no logarithms, becomingLWGAN(μG,D):=Ex∼μref⁡[D(x)]−Ex∼μG⁡[D(x)]{\displaystyle L_{WGAN}(\mu _{G},D):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]-\operatorname {E} _{x\sim \mu _{G}}[D(x)]}. One of its purposes is to solve the problem of mode collapse (see above).[13]The authors claim "In no experiment did we see evidence of mode collapse for the WGAN algorithm". An adversarial autoencoder (AAE)[38]is more autoencoder than GAN. The idea is to start with a plainautoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution). In conditional GAN, the generator receives both a noise vectorz{\displaystyle z}and a labelc{\displaystyle c}, and produces an imageG(z,c){\displaystyle G(z,c)}. The discriminator receives image-label pairs(x,c){\displaystyle (x,c)}, and computesD(x,c){\displaystyle D(x,c)}. When the training dataset is unlabeled, conditional GAN does not work directly.
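The label conditioning used by conditional GANs is commonly implemented by embedding the class label and concatenating the embedding to the inputs of both networks. A minimal PyTorch sketch; all sizes are illustrative assumptions:

import torch
import torch.nn as nn

n_classes, latent_dim, data_dim, embed_dim = 10, 64, 784, 16

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(nn.Linear(latent_dim + embed_dim, 256), nn.ReLU(),
                                 nn.Linear(256, data_dim), nn.Tanh())
    def forward(self, z, c):  # computes G(z, c)
        return self.net(torch.cat([z, self.embed(c)], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(nn.Linear(data_dim + embed_dim, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1), nn.Sigmoid())
    def forward(self, x, c):  # computes D(x, c)
        return self.net(torch.cat([x, self.embed(c)], dim=1))

Training then proceeds exactly as in the unconditional game, with labels sampled alongside the data.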
The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as(z,c){\displaystyle (z,c)}: an incompressible noise partz{\displaystyle z}, and an informative label partc{\displaystyle c}, and encourage the generator to comply with the decree, by encouraging it to maximizeI(c,G(z,c)){\displaystyle I(c,G(z,c))}, themutual informationbetweenc{\displaystyle c}andG(z,c){\displaystyle G(z,c)}, while making no demands on the mutual information betweenz{\displaystyle z}andG(z,c){\displaystyle G(z,c)}. Unfortunately,I(c,G(z,c)){\displaystyle I(c,G(z,c))}is intractable in general. The key idea of InfoGAN is Variational Mutual Information Maximization:[39]to indirectly maximize it by maximizing a lower boundI^(G,Q)=Ez∼μZ,c∼μC[ln⁡Q(c∣G(z,c))];I(c,G(z,c))≥supQI^(G,Q){\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))];\quad I(c,G(z,c))\geq \sup _{Q}{\hat {I}}(G,Q)}whereQ{\displaystyle Q}ranges over allMarkov kernelsof typeQ:ΩY→P(ΩC){\displaystyle Q:\Omega _{Y}\to {\mathcal {P}}(\Omega _{C})}. The InfoGAN game is defined as follows:[40] Three probability spaces define an InfoGAN game: the space of images with the reference distribution(ΩX,μref){\displaystyle (\Omega _{X},\mu _{\text{ref}})}, the fixed random noise generator(ΩZ,μZ){\displaystyle (\Omega _{Z},\mu _{Z})}, and the fixed random information generator(ΩC,μC){\displaystyle (\Omega _{C},\mu _{C})}. There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team. The objective function isL(G,Q,D)=LGAN(G,D)−λI^(G,Q){\displaystyle L(G,Q,D)=L_{GAN}(G,D)-\lambda {\hat {I}}(G,Q)}whereLGAN(G,D)=Ex∼μref⁡[ln⁡D(x)]+Ez∼μZ⁡[ln⁡(1−D(G(z,c)))]{\displaystyle L_{GAN}(G,D)=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z,c)))]}is the original GAN game objective, andI^(G,Q)=Ez∼μZ,c∼μC[ln⁡Q(c∣G(z,c))]{\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))]} The generator-Q team aims to minimize the objective, and the discriminator aims to maximize it:minG,QmaxDL(G,Q,D){\displaystyle \min _{G,Q}\max _{D}L(G,Q,D)} The standard GAN generator is a function of typeG:ΩZ→ΩX{\displaystyle G:\Omega _{Z}\to \Omega _{X}}, that is, it is a mapping from a latent spaceΩZ{\displaystyle \Omega _{Z}}to the image spaceΩX{\displaystyle \Omega _{X}}. This can be understood as a "decoding" process, whereby every latent vectorz∈ΩZ{\displaystyle z\in \Omega _{Z}}is a code for an imagex∈ΩX{\displaystyle x\in \Omega _{X}}, and the generator performs the decoding. This naturally leads to the idea of training another network that performs "encoding", creating anautoencoderout of the encoder-generator pair. Already in the original paper,[1]the authors noted that "Learned approximate inference can be performed by training an auxiliary network to predictz{\displaystyle z}givenx{\displaystyle x}". The bidirectional GAN architecture performs exactly this.[41] The BiGAN is defined as follows: Two probability spaces define a BiGAN game: the image space(ΩX,μX){\displaystyle (\Omega _{X},\mu _{X})}and the latent space(ΩZ,μZ){\displaystyle (\Omega _{Z},\mu _{Z})}. There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team. The generator's strategies are functionsG:ΩZ→ΩX{\displaystyle G:\Omega _{Z}\to \Omega _{X}}, and the encoder's strategies are functionsE:ΩX→ΩZ{\displaystyle E:\Omega _{X}\to \Omega _{Z}}. The discriminator's strategies are functionsD:ΩX×ΩZ→[0,1]{\displaystyle D:\Omega _{X}\times \Omega _{Z}\to [0,1]}.
The objective function isL(G,E,D)=Ex∼μX[ln⁡D(x,E(x))]+Ez∼μZ[ln⁡(1−D(G(z),z))]{\displaystyle L(G,E,D)=\mathbb {E} _{x\sim \mu _{X}}[\ln D(x,E(x))]+\mathbb {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z),z))]} The generator-encoder team aims to minimize the objective, and the discriminator aims to maximize it:minG,EmaxDL(G,E,D){\displaystyle \min _{G,E}\max _{D}L(G,E,D)} In the paper, they gave a more abstract definition of the objective as:L(G,E,D)=E(x,z)∼μE,X[ln⁡D(x,z)]+E(x,z)∼μG,Z[ln⁡(1−D(x,z))]{\displaystyle L(G,E,D)=\mathbb {E} _{(x,z)\sim \mu _{E,X}}[\ln D(x,z)]+\mathbb {E} _{(x,z)\sim \mu _{G,Z}}[\ln(1-D(x,z))]}whereμE,X(dx,dz)=μX(dx)⋅δE(x)(dz){\displaystyle \mu _{E,X}(dx,dz)=\mu _{X}(dx)\cdot \delta _{E(x)}(dz)}is the probability distribution onΩX×ΩZ{\displaystyle \Omega _{X}\times \Omega _{Z}}obtained bypushingμX{\displaystyle \mu _{X}}forwardviax↦(x,E(x)){\displaystyle x\mapsto (x,E(x))}, andμG,Z(dx,dz)=δG(z)(dx)⋅μZ(dz){\displaystyle \mu _{G,Z}(dx,dz)=\delta _{G(z)}(dx)\cdot \mu _{Z}(dz)}is the probability distribution onΩX×ΩZ{\displaystyle \Omega _{X}\times \Omega _{Z}}obtained by pushingμZ{\displaystyle \mu _{Z}}forward viaz↦(G(z),z){\displaystyle z\mapsto (G(z),z)}. Applications of bidirectional models includesemi-supervised learning,[42]interpretable machine learning,[43]andneural machine translation.[44] CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities. The CycleGAN game is defined as follows:[45] There are two probability spaces(ΩX,μX),(ΩY,μY){\displaystyle (\Omega _{X},\mu _{X}),(\Omega _{Y},\mu _{Y})}, corresponding to the two domains to be translated back and forth. There are 4 players in 2 teams: generatorsGX:ΩX→ΩY,GY:ΩY→ΩX{\displaystyle G_{X}:\Omega _{X}\to \Omega _{Y},G_{Y}:\Omega _{Y}\to \Omega _{X}}, and discriminatorsDX:ΩX→[0,1],DY:ΩY→[0,1]{\displaystyle D_{X}:\Omega _{X}\to [0,1],D_{Y}:\Omega _{Y}\to [0,1]}. The objective function isL(GX,GY,DX,DY)=LGAN(GX,DX)+LGAN(GY,DY)+λLcycle(GX,GY){\displaystyle L(G_{X},G_{Y},D_{X},D_{Y})=L_{GAN}(G_{X},D_{X})+L_{GAN}(G_{Y},D_{Y})+\lambda L_{cycle}(G_{X},G_{Y})} whereλ{\displaystyle \lambda }is a positive adjustable parameter,LGAN{\displaystyle L_{GAN}}is the GAN game objective, andLcycle{\displaystyle L_{cycle}}is thecycle consistency loss:Lcycle(GX,GY)=Ex∼μX‖GY(GX(x))−x‖+Ey∼μY‖GX(GY(y))−y‖{\displaystyle L_{cycle}(G_{X},G_{Y})=E_{x\sim \mu _{X}}\|G_{Y}(G_{X}(x))-x\|+E_{y\sim \mu _{Y}}\|G_{X}(G_{Y}(y))-y\|}The generators aim to minimize the objective, and the discriminators aim to maximize it:minGX,GYmaxDX,DYL(GX,GY,DX,DY){\displaystyle \min _{G_{X},G_{Y}}\max _{D_{X},D_{Y}}L(G_{X},G_{Y},D_{X},D_{Y})} Unlike previous work like pix2pix,[46]which requires paired training data, cycleGAN requires no paired data. For example, to train a pix2pix model to turn a summer scenery photo into a winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; cycleGAN would only need a set of summer scenery photos, and an unrelated set of winter scenery photos.
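The cycle consistency term above is typically computed with the L1 norm, the choice made in the CycleGAN paper. A sketch, where G_X and G_Y are any callables mapping batches between the two domains:

import torch

def cycle_loss(G_X, G_Y, x, y):
    # x -> Y -> back to X should reproduce x, and symmetrically for y.
    loss_x = (G_Y(G_X(x)) - x).abs().mean()
    loss_y = (G_X(G_Y(y)) - y).abs().mean()
    return loss_x + loss_y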
The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge.[20][47] When there is insufficient training data, the reference distributionμref{\displaystyle \mu _{\text{ref}}}cannot be well-approximated by theempirical distributiongiven by the training dataset. In such cases,data augmentationcan be applied, to allow training GANs on smaller datasets. Naïve data augmentation, however, brings problems of its own. Consider the original GAN game, slightly reformulated as follows:{minDLD(D,μG)=−Ex∼μref⁡[ln⁡D(x)]−Ex∼μG⁡[ln⁡(1−D(x))]minGLG(D,μG)=Ex∼μG⁡[ln⁡(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}Now we use data augmentation by randomly sampling semantic-preserving transformsT:Ω→Ω{\displaystyle T:\Omega \to \Omega }and applying them to the dataset, to obtain the reformulated GAN game:{minDLD(D,μG)=−Ex∼μref,T∼μtrans⁡[ln⁡D(T(x))]−Ex∼μG⁡[ln⁡(1−D(x))]minGLG(D,μG)=Ex∼μG⁡[ln⁡(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}This is equivalent to a GAN game with a different distributionμref′{\displaystyle \mu _{\text{ref}}'}, sampled byT(x){\displaystyle T(x)}, withx∼μref,T∼μtrans{\displaystyle x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}. For example, ifμref{\displaystyle \mu _{\text{ref}}}is the distribution of images in ImageNet, andμtrans{\displaystyle \mu _{\text{trans}}}samples identity-transform with probability 0.5, and horizontal-reflection with probability 0.5, thenμref′{\displaystyle \mu _{\text{ref}}'}is the distribution of images in ImageNet and horizontally-reflected ImageNet, combined. The result of such training would be a generator that mimicsμref′{\displaystyle \mu _{\text{ref}}'}. For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping. The solution is to apply data augmentation to both generated and real images:{minDLD(D,μG)=−Ex∼μref,T∼μtrans⁡[ln⁡D(T(x))]−Ex∼μG,T∼μtrans⁡[ln⁡(1−D(T(x)))]minGLG(D,μG)=Ex∼μG,T∼μtrans⁡[ln⁡(1−D(T(x)))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\end{cases}}}The authors demonstrated high-quality generation using datasets of just 100 pictures.[48] The StyleGAN-2-ADA paper makes a further point about data augmentation: it must beinvertible.[49]Continue with the example of generating ImageNet pictures. If the data augmentation is "randomly rotate the picture by 0, 90, 180, 270 degrees withequalprobability", then there is no way for the generator to know which is the true orientation: Consider two generatorsG,G′{\displaystyle G,G'}, such that for any latentz{\displaystyle z}, the generated imageG(z){\displaystyle G(z)}is a 90-degree rotation ofG′(z){\displaystyle G'(z)}.
They would have exactly the same expected loss, and so neither is preferred over the other. The solution is to only use invertible data augmentation: instead of "randomly rotate the picture by 0, 90, 180, 270 degrees withequalprobability", use "randomly rotate the picture by 90, 180, 270 degrees with 0.1 probability, and keep the picture as it is with 0.7 probability". This way, the generator is still rewarded for keeping images oriented the same way as un-augmented ImageNet pictures. Abstractly, the effect of randomly sampling transformationsT:Ω→Ω{\displaystyle T:\Omega \to \Omega }from the distributionμtrans{\displaystyle \mu _{\text{trans}}}is to define a Markov kernelKtrans:Ω→P(Ω){\displaystyle K_{\text{trans}}:\Omega \to {\mathcal {P}}(\Omega )}. Then, the data-augmented GAN game pushes the generator to find someμ^G∈P(Ω){\displaystyle {\hat {\mu }}_{G}\in {\mathcal {P}}(\Omega )}, such thatKtrans∗μref=Ktrans∗μ^G{\displaystyle K_{\text{trans}}*\mu _{\text{ref}}=K_{\text{trans}}*{\hat {\mu }}_{G}}where∗{\displaystyle *}is theMarkov kernel convolution. A data-augmentation method is defined to beinvertibleif its Markov kernelKtrans{\displaystyle K_{\text{trans}}}satisfiesKtrans∗μ=Ktrans∗μ′⟹μ=μ′∀μ,μ′∈P(Ω){\displaystyle K_{\text{trans}}*\mu =K_{\text{trans}}*\mu '\implies \mu =\mu '\quad \forall \mu ,\mu '\in {\mathcal {P}}(\Omega )}Immediately by definition, we see that composing multiple invertible data-augmentation methods results in yet another invertible method. Also by definition, if the data-augmentation method is invertible, then using it in a GAN game does not change the optimal strategyμ^G{\displaystyle {\hat {\mu }}_{G}}for the generator, which is stillμref{\displaystyle \mu _{\text{ref}}}. There are two prototypical examples of invertible Markov kernels: Discrete case: Invertiblestochastic matrices, whenΩ{\displaystyle \Omega }is finite. For example, ifΩ={↑,↓,←,→}{\displaystyle \Omega =\{\uparrow ,\downarrow ,\leftarrow ,\rightarrow \}}is the set of four images of an arrow, pointing in 4 directions, and the data augmentation is "randomly rotate the picture by 90, 180, 270 degrees with probabilityp{\displaystyle p}, and keep the picture as it is with probability(1−3p){\displaystyle (1-3p)}", then the Markov kernelKtrans{\displaystyle K_{\text{trans}}}can be represented as a stochastic matrix:[Ktrans]=[(1−3p)pppp(1−3p)pppp(1−3p)pppp(1−3p)]{\displaystyle [K_{\text{trans}}]={\begin{bmatrix}(1-3p)&p&p&p\\p&(1-3p)&p&p\\p&p&(1-3p)&p\\p&p&p&(1-3p)\end{bmatrix}}}andKtrans{\displaystyle K_{\text{trans}}}is an invertible kernel iff[Ktrans]{\displaystyle [K_{\text{trans}}]}is an invertible matrix, that is,p≠1/4{\displaystyle p\neq 1/4}. Continuous case: The gaussian kernel, whenΩ=Rn{\displaystyle \Omega =\mathbb {R} ^{n}}for somen≥1{\displaystyle n\geq 1}. For example, ifΩ=R2562{\displaystyle \Omega =\mathbb {R} ^{256^{2}}}is the space of 256x256 images, and the data-augmentation method is "generate a gaussian noisez∼N(0,I2562){\displaystyle z\sim {\mathcal {N}}(0,I_{256^{2}})}, then addϵz{\displaystyle \epsilon z}to the image", thenKtrans{\displaystyle K_{\text{trans}}}is just convolution by the density function ofN(0,ϵ2I2562){\displaystyle {\mathcal {N}}(0,\epsilon ^{2}I_{256^{2}})}.
This is invertible, because convolution by a gaussian is just convolution by theheat kernel, so given anyμ∈P(Rn){\displaystyle \mu \in {\mathcal {P}}(\mathbb {R} ^{n})}, the convolved distributionKtrans∗μ{\displaystyle K_{\text{trans}}*\mu }can be obtained by heating upRn{\displaystyle \mathbb {R} ^{n}}precisely according toμ{\displaystyle \mu }, then waiting for timeϵ2/4{\displaystyle \epsilon ^{2}/4}. With that, we can recoverμ{\displaystyle \mu }by running theheat equationbackwards in time forϵ2/4{\displaystyle \epsilon ^{2}/4}. More examples of invertible data augmentations are found in the paper.[49] SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline. The generatorG{\displaystyle G}is decomposed into a pyramid of generatorsG=G1∘G2∘⋯∘GN{\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, with the lowest one generating the imageGN(zN){\displaystyle G_{N}(z_{N})}at the lowest resolution, then the generated image is scaled up tor(GN(zN)){\displaystyle r(G_{N}(z_{N}))}, and fed to the next level to generate an imageGN−1(zN−1+r(GN(zN))){\displaystyle G_{N-1}(z_{N-1}+r(G_{N}(z_{N})))}at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well.[50] The StyleGAN family is a series of architectures published byNvidia's research division. Progressive GAN[14]is a method for training GAN for large-scale image generation stably, by growing a GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator asG=G1∘G2∘⋯∘GN{\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, and the discriminator asD=D1∘D2∘⋯∘DN{\displaystyle D=D_{1}\circ D_{2}\circ \cdots \circ D_{N}}. During training, at first onlyGN,DN{\displaystyle G_{N},D_{N}}are used in a GAN game to generate 4x4 images. ThenGN−1,DN−1{\displaystyle G_{N-1},D_{N-1}}are added to reach the second stage of GAN game, to generate 8x8 images, and so on, until we reach a GAN game to generate 1024x1024 images. To avoid shock between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper[14]). For example, when the second stage begins, the contribution of the newly added 8x8 layers is linearly interpolated with an upscaled version of the 4x4 output, with the blending weight gradually increasing from 0 to 1 over the course of training. StyleGAN-1 is designed as a combination of Progressive GAN withneural style transfer.[51] The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant4×4×512{\displaystyle 4\times 4\times 512}array, and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via affine transform ("adaptive instance normalization"), similar to how neural style transfer usesGramian matrix. It then adds noise, and normalizes (subtracts the mean, then divides by the standard deviation). At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector). After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles. Style-mixing between two imagesx,x′{\displaystyle x,x'}can be performed as well.
First, run a gradient descent to findz,z′{\displaystyle z,z'}such thatG(z)≈x,G(z′)≈x′{\displaystyle G(z)\approx x,G(z')\approx x'}. This is called "projecting an image back to style latent space". Then,z{\displaystyle z}can be fed to the lower style blocks, andz′{\displaystyle z'}to the higher style blocks, to generate a composite image that has the large-scale style ofx{\displaystyle x}, and the fine-detail style ofx′{\displaystyle x'}. Multiple images can also be composed this way. StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.[52] This was updated by the StyleGAN-2-ADA ("ADA" stands for "adaptive"),[49]which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive". StyleGAN-3[53]improves upon StyleGAN-2 by solving the "texture sticking" problem, which can be seen in the official videos.[54]They analyzed the problem by theNyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon. To solve this, they proposed imposing strictlowpass filtersbetween each generator's layers, so that the generator is forced to operate on the pixels in a wayfaithfulto the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using moresignal filters. The resulting StyleGAN-3 is able to solve the texture sticking problem, as well as generating images that rotate and translate smoothly. Beyond generative and discriminative modelling of data, GANs have been applied to a variety of other tasks. GANs have been used fortransfer learningto enforce the alignment of the latent feature space, such as indeep reinforcement learning.[55]This works by feeding the embeddings of the source and target task to the discriminator, which tries to guess the context. The resulting loss is then (inversely) backpropagated through the encoder. GAN-generated molecules were validated experimentally in mice.[72][73] One of the major concerns in medical imaging is preserving patient privacy. For this reason, researchers often face difficulties in obtaining medical images for their research purposes. GANs have been used to generatesynthetic medical images, such asMRIandPETimages, to address this challenge.[74] GANs can be used to detectglaucomatousimages, aiding early diagnosis, which is essential to avoid partial or total loss of vision.[75] GANs have been used to createforensic facial reconstructionsof deceased historical figures.[76] Concerns have been raised about the potential use of GAN-basedhuman image synthesisfor sinister purposes, e.g., to produce fake, possibly incriminating, photographs and videos.[77]GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles.[78] In 2019 the state of California considered[79]and passed on October 3, 2019, thebill AB-602, which bans the use of human image synthesis technologies to make fake pornography without the consent of the people depicted, andbill AB-730, which prohibits distribution of manipulated videos of a political candidate within 60 days of an election.
Both bills were authored by Assembly memberMarc Bermanand signed by GovernorGavin Newsom. The laws went into effect in 2020.[80] DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.[81] GANs can be used to generate art;The Vergewrote in March 2019 that "The images created by GANs have become the defining look of contemporary AI art."[82] Some have used GANs for artistic creativity, as a "creative adversarial network".[88][89]A GAN, trained on a set of 15,000 portraits fromWikiArtfrom the 14th to the 19th century, created the 2018 paintingEdmond de Belamy,which sold for US$432,500.[90] GANs were used by thevideo game moddingcommunity toup-scalelow-resolution 2D textures in old video games by recreating them in4kor higher resolutions via image training, and then down-sampling them to fit the game's native resolution (resemblingsupersamplinganti-aliasing).[91] In 2020,Artbreederwas used to create the main antagonist in the sequel to the psychological web horror seriesBen Drowned. The author would later go on to praise GAN applications for their ability to help generate assets for independent artists who are short on budget and manpower.[92][93] In May 2020,Nvidiaresearchers taught an AI system (termed "GameGAN") to recreate the game ofPac-Mansimply by watching it being played.[94][95] In August 2019, a large dataset consisting of 12,197 MIDI songs each with paired lyrics and melody alignment was created for neural melody generation from lyrics using conditional GAN-LSTM (refer to sources at GitHubAI Melody Generation from Lyrics).[96] In 1991,Juergen Schmidhuberpublished "artificial curiosity",neural networksin azero-sum game.[108]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set.[109] Other researchers had similar ideas but did not develop them in the same way. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo.[110]This idea was never implemented and did not involvestochasticityin the generator and thus was not a generative model. It is now known as a conditional GAN or cGAN.[111]An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.[112] Another inspiration for GANs was noise-contrastive estimation,[113]which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014. Adversarial machine learninghas other uses besides generative modeling and can be applied to models other than neural networks. In control theory, adversarial learning based on neural networks was used in 2006 to train robust controllers in a game theoretic sense, by alternating the iterations between a minimizer policy, the controller, and a maximizer policy, the disturbance.[114][115] In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel-accuracy, producing a higher image quality at high magnification.[116]In 2017, the first GAN-generated faces were produced.[117]These were exhibited in February 2018 at the Grand Palais.[118][119]Faces generated byStyleGAN[120]in 2019 drew comparisons withDeepfakes.[121][122][123]
https://en.wikipedia.org/wiki/Generative_adversarial_network
Discriminative models, also referred to asconditional models, are a class of models frequently used forclassification. They are typically used to solvebinary classificationproblems, i.e. assign labels, such as pass/fail, win/lose, alive/dead or healthy/sick, to existing datapoints. Types of discriminative models includelogistic regression(LR),conditional random fields(CRFs), anddecision trees, among many others.Generative modelapproaches, which use a joint probability distribution instead, includenaive Bayes classifiers,Gaussian mixture models,variational autoencoders,generative adversarial networks, and others. Unlike generative modelling, which studies thejoint probabilityP(x,y){\displaystyle P(x,y)}, discriminative modeling studies the conditional probabilityP(y|x){\displaystyle P(y|x)}, or directly maps an observed variablex{\displaystyle x}to a class labely{\displaystyle y}. For example, inobject recognition,x{\displaystyle x}is likely to be a vector of raw pixels (or features extracted from the raw pixels of the image). Within a probabilistic framework, this is done by modeling theconditional probability distributionP(y|x){\displaystyle P(y|x)}, which can be used for predictingy{\displaystyle y}fromx{\displaystyle x}. Note that there is still a distinction between the conditional model and the discriminative model, though more often they are simply categorised as discriminative models. Aconditional modelmodels the conditionalprobability distribution, while the traditional discriminative model aims to optimize on mapping the input around the most similar trained samples.[1] The following approach assumes that we are given the training data setD={(xi;yi)|i≤N∈Z}{\displaystyle D=\{(x_{i};y_{i})|i\leq N\in \mathbb {Z} \}}, whereyi{\displaystyle y_{i}}is the corresponding output for the inputxi{\displaystyle x_{i}}.[2] We intend to use a functionf(x){\displaystyle f(x)}to simulate the behavior observed in the training data set via thelinear classifiermethod. Using the joint feature vectorϕ(x,y){\displaystyle \phi (x,y)}, the decision function is defined as:f(x;w)=arg⁡maxywTϕ(x,y){\displaystyle f(x;w)=\arg \max _{y}w^{T}\phi (x,y)} According to Memisevic's interpretation,[2]wTϕ(x,y){\displaystyle w^{T}\phi (x,y)}, which is alsoc(x,y;w){\displaystyle c(x,y;w)}, computes a score which measures the compatibility of the inputx{\displaystyle x}with the potential outputy{\displaystyle y}. Then thearg⁡max{\displaystyle \arg \max }determines the class with the highest score. Since the0-1 loss functionis a commonly used one in decision theory, the conditionalprobability distributionP(y|x;w){\displaystyle P(y|x;w)}, wherew{\displaystyle w}is a parameter vector for optimizing the training data, could be reconsidered as follows for the logistic regression model:P(y|x;w)=exp⁡(wTϕ(x,y))∑y′exp⁡(wTϕ(x,y′)){\displaystyle P(y|x;w)={\frac {\exp(w^{T}\phi (x,y))}{\sum _{y'}\exp(w^{T}\phi (x,y'))}}} The equation above representslogistic regression. Notice that a major distinction between models is their way of introducing posterior probability. Posterior probability is inferred from the parametric model. We can then maximize the likelihood over the parameters with the following equation:w∗=arg⁡maxw∑i=1Nlog⁡P(yi|xi;w){\displaystyle w^{*}=\arg \max _{w}\sum _{i=1}^{N}\log P(y_{i}|x_{i};w)} It could also be replaced by thelog-lossequation below, which is minimized:w∗=arg⁡minw∑i=1N−log⁡P(yi|xi;w){\displaystyle w^{*}=\arg \min _{w}\sum _{i=1}^{N}-\log P(y_{i}|x_{i};w)} Since thelog-lossis differentiable, a gradient-based method can be used to optimize the model. A global optimum is guaranteed because the objective function is convex. The gradient of the log likelihood is represented by:∂∂w∑i=1Nlog⁡P(yi|xi;w)=∑i=1N(ϕ(xi,yi)−Ep(y|xi;w)[ϕ(xi,y)]){\displaystyle {\frac {\partial }{\partial w}}\sum _{i=1}^{N}\log P(y_{i}|x_{i};w)=\sum _{i=1}^{N}\left(\phi (x_{i},y_{i})-E_{p(y|x_{i};w)}[\phi (x_{i},y)]\right)}whereEp(y|xi;w){\displaystyle E_{p(y|x_{i};w)}}denotes the expectation ofϕ(xi,y){\displaystyle \phi (x_{i},y)}underp(y|xi;w){\displaystyle p(y|x_{i};w)}. The above method provides efficient computation for a relatively small number of classes.
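The training procedure above can be implemented directly. A NumPy sketch for the multiclass case, with the common choice of joint feature vector ϕ(x, y) = x stacked into the block of class y, so that there is one weight vector per class; this feature map is an assumption for illustration:

import numpy as np

def softmax(scores):
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def fit_logistic(X, y, n_classes, lr=0.1, steps=500):
    n, d = X.shape
    W = np.zeros((n_classes, d))       # one weight vector per class
    Y = np.eye(n_classes)[y]           # one-hot labels
    for _ in range(steps):
        P = softmax(X @ W.T)           # P(y | x; w) for every sample
        # Gradient of the log-likelihood:
        # sum_i phi(x_i, y_i) - E_{p(y | x_i; w)}[phi(x_i, y)]
        grad = (Y - P).T @ X
        W += lr * grad / n             # gradient ascent on a concave objective
    return W

def predict(W, X):
    return np.argmax(X @ W.T, axis=1)  # arg max_y of w^T phi(x, y)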
Let's say we are givenm{\displaystyle m}class labels (classification) andn{\displaystyle n}feature variables,Y:{y1,y2,…,ym},X:{x1,x2,…,xn}{\displaystyle Y:\{y_{1},y_{2},\ldots ,y_{m}\},X:\{x_{1},x_{2},\ldots ,x_{n}\}}, as the training samples. A generative model takes the joint probabilityP(x,y){\displaystyle P(x,y)}, wherex{\displaystyle x}is the input andy{\displaystyle y}is the label, and predicts the most probable known labely~∈Y{\displaystyle {\widetilde {y}}\in Y}for the unknown variablex~{\displaystyle {\widetilde {x}}}usingBayes' theorem.[3] Discriminative models, as opposed togenerative models, do not allow one to generate samples from thejoint distributionof observed and target variables. However, for tasks such asclassificationandregressionthat do not require the joint distribution, discriminative models can yield superior performance (in part because they have fewer variables to compute).[4][5][3]On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherentlysupervisedand cannot easily supportunsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model. Discriminative models and generative models also differ in how they introduce theposterior probability.[6]To attain the least expected loss, the misclassification rate should be minimized. In the discriminative model, the posterior probabilities,P(y|x){\displaystyle P(y|x)}, are inferred from a parametric model, where the parameters come from the training data. Point estimates of the parameters are obtained from the maximization of likelihood or distribution computation over the parameters. On the other hand, considering that generative models focus on the joint probability, the class priorP(k){\displaystyle P(k)}enters the class posterior throughBayes' theorem, which givesP(k|x)=P(x|k)P(k)∑k′P(x|k′)P(k′){\displaystyle P(k|x)={\frac {P(x|k)P(k)}{\sum _{k'}P(x|k')P(k')}}} In repeated experiments in which logistic regression and naive Bayes were applied to binary classification tasks, discriminative learning resulted in lower asymptotic error, while the generative model approached its (higher) asymptotic error faster.[3]However, in Ulusoy and Bishop's joint work,Comparison of Generative and Discriminative Techniques for Object Detection and Classification, they state that the above statement is true only when the model is the appropriate one for the data (i.e.the data distribution is correctly modeled by the generative model). Both ways of modeling have significant advantages and disadvantages, so combining the two approaches can be good modeling in practice. For example, in Marras' articleA Joint Discriminative Generative Model for Deformable Model Construction and Classification,[7]he and his coauthors apply a combination of the two modelings to face classification, and achieve higher accuracy than the traditional approach. Similarly, Kelm[8]also proposed the combination of the two modelings for pixel classification in his articleCombining Generative and Discriminative Methods for Pixel Classification with Multi-Conditional Learning. During the process of extracting discriminative features prior to clustering,principal component analysis(PCA), though commonly used, is not necessarily a discriminative approach.
In contrast,linear discriminant analysis(LDA) is a discriminative one.[9]LDA provides an efficient way of eliminating the disadvantage listed above: the discriminative model needs a combination of multiple subtasks before classification, and LDA provides an appropriate solution to this problem by reducing dimensionality. Examples of discriminative models include:logistic regression,support vector machines, boosting methods,conditional random fields,decision treesandrandom forests, andneural networks.
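To make the generative/discriminative contrast concrete, the following sketch fits a simple generative classifier, Gaussian class-conditional densities with independent features (Gaussian naive Bayes), and classifies through Bayes' theorem; the discriminative counterpart is the logistic regression sketch above. The model choice is an illustrative assumption:

import numpy as np

def fit_gaussian_nb(X, y, n_classes, eps=1e-6):
    priors, means, variances = [], [], []
    for k in range(n_classes):
        Xk = X[y == k]
        priors.append(len(Xk) / len(X))         # class prior P(k)
        means.append(Xk.mean(axis=0))           # mean of P(x | k)
        variances.append(Xk.var(axis=0) + eps)  # eps avoids division by zero
    return np.array(priors), np.array(means), np.array(variances)

def predict(priors, means, variances, X):
    # log P(x | k) + log P(k) per class; the shared normalizer P(x) is dropped.
    log_post = []
    for p, m, v in zip(priors, means, variances):
        ll = -0.5 * (np.log(2 * np.pi * v) + (X - m) ** 2 / v).sum(axis=1)
        log_post.append(ll + np.log(p))
    return np.argmax(np.stack(log_post, axis=1), axis=1)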
https://en.wikipedia.org/wiki/Discriminative_model
Inprobabilityandstatistics, anexponential familyis aparametricset ofprobability distributionsof a certain form, specified below. This special form is chosen for mathematical convenience, as it enables the user to calculate expectations and covariances by differentiation, thanks to some useful algebraic properties; it is also chosen for generality, as exponential families are in a sense very natural sets of distributions to consider. The termexponential classis sometimes used in place of "exponential family",[1]or the older termKoopman–Darmois family. Sometimes loosely referred to astheexponential family, this class of distributions is distinct because they all possess a variety of desirable properties, most importantly the existence of asufficient statistic. The concept of exponential families is credited to[2]E. J. G. Pitman,[3]G. Darmois,[4]andB. O. Koopman[5]in 1935–1936. Exponential families of distributions provide a general framework for selecting a possible alternative parameterisation of aparametric familyof distributions, in terms of natural parameters, and for defining usefulsample statistics, called the natural sufficient statistics of the family. The terms "distribution" and "family" are often used loosely: Specifically,anexponential family is asetof distributions, where the specific distribution varies with the parameter;[a]however, a parametricfamilyof distributions is often referred to as "adistribution" (like "the normal distribution", meaning "the family of normal distributions"), and the set of all exponential families is sometimes loosely referred to as "the" exponential family. Most of the commonly used distributions form an exponential family or subset of an exponential family, listed in the subsection below. The subsections following it are a sequence of increasingly more general mathematical definitions of an exponential family. A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family ofdiscreteorcontinuousprobability distributions. Exponential families include many of the most common distributions. Among many others, exponential families include the following:[6]thenormal,exponential,gamma,chi-squared,beta,Dirichlet,Bernoulli,categorical,Poisson,Wishart, andinverse Wishartdistributions. A number of common distributions are exponential families, but only when certain parameters are fixed and known. For example: thebinomialdistribution (with a fixed number of trials), themultinomialdistribution (with a fixed number of trials), and thenegative binomialdistribution (with a fixed number of failures). Note that in each case, the parameters which must be fixed are those that set a limit on the range of values that can possibly be observed. Examples of common distributions that arenotexponential families areStudent'st, mostmixture distributions, and even the family ofuniform distributionswhen the bounds are not fixed. See the section below onexamplesfor more discussion. The value ofθ{\displaystyle \theta }is called theparameterof the family. A single-parameter exponential family is a set of probability distributions whoseprobability density function(orprobability mass function, for the case of adiscrete distribution) can be expressed in the form fX(x|θ)=h(x)exp⁡[η(θ)⋅T(x)−A(θ)]{\displaystyle f_{X}{\left(x\,{\big |}\,\theta \right)}=h(x)\,\exp \left[\eta (\theta )\cdot T(x)-A(\theta )\right]} whereT(x),h(x),η(θ), andA(θ)are known functions. The functionh(x)must be non-negative.
An alternative, equivalent form often given is fX(x|θ)=h(x)g(θ)exp⁡[η(θ)⋅T(x)]{\displaystyle f_{X}{\left(x\ {\big |}\ \theta \right)}=h(x)\,g(\theta )\,\exp \left[\eta (\theta )\cdot T(x)\right]} or equivalently fX(x|θ)=exp⁡[η(θ)⋅T(x)−A(θ)+B(x)].{\displaystyle f_{X}{\left(x\ {\big |}\ \theta \right)}=\exp \left[\eta (\theta )\cdot T(x)-A(\theta )+B(x)\right].} In terms oflog probability,log⁡(fX(x|θ))=η(θ)⋅T(x)−A(θ)+B(x).{\displaystyle \log(f_{X}{\left(x\ {\big |}\ \theta \right)})=\eta (\theta )\cdot T(x)-A(\theta )+B(x).} Note thatg(θ)=e−A(θ){\displaystyle g(\theta )=e^{-A(\theta )}}andh(x)=eB(x){\displaystyle h(x)=e^{B(x)}}. Importantly, thesupportoffX(x|θ){\displaystyle f_{X}{\left(x{\big |}\theta \right)}}(all the possiblex{\displaystyle x}values for whichfX(x|θ){\displaystyle f_{X}\!\left(x{\big |}\theta \right)}is greater than0{\displaystyle 0}) is required tonotdepend onθ.{\displaystyle \theta ~.}[7]This requirement can be used to exclude a parametric family distribution from being an exponential family. For example: ThePareto distributionhas a pdf which is defined forx≥xm{\displaystyle x\geq x_{\mathsf {m}}}(the minimum value,xm,{\displaystyle x_{m}\ ,}being the scale parameter) and its support, therefore, has a lower limit ofxm.{\displaystyle x_{\mathsf {m}}~.}Since the support offα,xm(x){\displaystyle f_{\alpha ,x_{m}}\!(x)}is dependent on the value of the parameter, the family ofPareto distributionsdoes not form an exponential family of distributions (at least whenxm{\displaystyle x_{m}}is unknown). Another example:Bernoulli-typedistributions –binomial,negative binomial,geometric distribution, and similar – can only be included in the exponential class if the number ofBernoulli trials,n, is treated as a fixed constant – excluded from the free parameter(s)θ{\displaystyle \theta }– since the allowed number of trials sets the limits for the number of "successes" or "failures" that can be observed in a set of trials. Oftenx{\displaystyle x}is a vector of measurements, in which caseT(x){\displaystyle T(x)}may be a function from the space of possible values ofx{\displaystyle x}to the real numbers. More generally,η(θ){\displaystyle \eta (\theta )}andT(x){\displaystyle T(x)}can each be vector-valued such thatη(θ)⋅T(x){\displaystyle \eta (\theta )\cdot T(x)}is real-valued. However, see the discussion below onvector parameters, regarding thecurvedexponential family. Ifη(θ)=θ,{\displaystyle \eta (\theta )=\theta \ ,}then the exponential family is said to be incanonical form. By defining a transformed parameterη=η(θ),{\displaystyle \eta =\eta (\theta )\ ,}it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, sinceη(θ){\displaystyle \eta (\theta )}can be multiplied by any nonzero constant, provided thatT(x)is multiplied by that constant's reciprocal, or a constantccan be added toη(θ){\displaystyle \eta (\theta )}andh(x)multiplied byexp⁡[−c⋅T(x)]{\displaystyle \exp \left[{-c}\cdot T(x)\,\right]}to offset it. In the special case thatη(θ)=θ{\displaystyle \eta (\theta )=\theta }andT(x) =x, then the family is called anatural exponential family. Even whenx{\displaystyle x}is a scalar, and there is only a single parameter, the functionsη(θ){\displaystyle \eta (\theta )}andT(x){\displaystyle T(x)}can still be vectors, as described below. 
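As a concrete check of the canonical form, the Poisson family with mean λ is a natural exponential family with h(x) = 1/x!, T(x) = x, natural parameter η = ln λ, and A(η) = e^η = λ. A short sketch verifying the identity numerically:

import math

lam = 3.7
eta = math.log(lam)
for x in range(10):
    direct = lam ** x * math.exp(-lam) / math.factorial(x)  # standard Poisson pmf
    ef = (1 / math.factorial(x)) * math.exp(eta * x - math.exp(eta))
    assert math.isclose(direct, ef)
print("Poisson pmf matches its exponential-family form")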
The functionA(θ),{\displaystyle A(\theta )\ ,}or equivalentlyg(θ),{\displaystyle g(\theta )\ ,}is automatically determined once the other functions have been chosen, since it must assume a form that causes the distribution to benormalized(sum or integrate to one over the entire domain). Furthermore, both of these functions can always be written as functions ofη,{\displaystyle \eta \ ,}even whenη(θ){\displaystyle \eta (\theta )}is not aone-to-onefunction, i.e. two or more different values ofθ{\displaystyle \theta }map to the same value ofη(θ),{\displaystyle \eta (\theta )\ ,}and henceη(θ){\displaystyle \eta (\theta )}cannot be inverted. In such a case, all values ofθ{\displaystyle \theta }mapping to the sameη(θ){\displaystyle \eta (\theta )}will also have the same value forA(θ){\displaystyle A(\theta )}andg(θ).{\displaystyle g(\theta )~.} What is important to note, and what characterizes all exponential family variants, is that the parameter(s) and the observation variable(s) mustfactorize(can be separated into products each of which involves only one type of variable), either directly or within either part (the base or exponent) of anexponentiationoperation. Generally, this means that all of the factors constituting the density or mass function must be of one of the following forms: f(x),cf(x),[f(x)]c,[f(x)]g(θ),[f(x)]h(x)g(θ),g(θ),cg(θ),[g(θ)]c,[g(θ)]f(x),or[g(θ)]h(x)j(θ),{\displaystyle {\begin{aligned}f(x),&&c^{f(x)},&&{[f(x)]}^{c},&&{[f(x)]}^{g(\theta )},&&{[f(x)]}^{h(x)g(\theta )},\\g(\theta ),&&c^{g(\theta )},&&{[g(\theta )]}^{c},&&{[g(\theta )]}^{f(x)},&&~~{\mathsf {or}}~~{[g(\theta )]}^{h(x)j(\theta )},\end{aligned}}} wherefandhare arbitrary functions ofx, the observed statistical variable;gandjare arbitrary functions ofθ,{\displaystyle \theta ,}the fixed parameters defining the shape of the distribution; andcis any arbitrary constant expression (i.e. a number or an expression that does not change with eitherxorθ{\displaystyle \theta }). There are further restrictions on how many such factors can occur. For example, the two expressions: [f(x)g(θ)]h(x)j(θ),[f(x)]h(x)j(θ)[g(θ)]h(x)j(θ),{\displaystyle {[f(x)g(\theta )]}^{h(x)j(\theta )},\qquad {[f(x)]}^{h(x)j(\theta )}{[g(\theta )]}^{h(x)j(\theta )},} are the same, i.e. a product of two "allowed" factors. However, when rewritten into the factorized form, [f(x)g(θ)]h(x)j(θ)=[f(x)]h(x)j(θ)[g(θ)]h(x)j(θ)=exp⁡{[h(x)log⁡f(x)]j(θ)+h(x)[j(θ)log⁡g(θ)]},{\displaystyle {\begin{aligned}{\left[f(x)g(\theta )\right]}^{h(x)j(\theta )}&={\left[f(x)\right]}^{h(x)j(\theta )}{\left[g(\theta )\right]}^{h(x)j(\theta )}\\[4pt]&=\exp \left\{{[h(x)\log f(x)]j(\theta )+h(x)[j(\theta )\log g(\theta )]}\right\},\end{aligned}}} it can be seen that it cannot be expressed in the required form. (However, a form of this sort is a member of acurved exponential family, which allows multiple factorized terms in the exponent.[citation needed]) To see why an expression of the form [f(x)]g(θ){\displaystyle {[f(x)]}^{g(\theta )}} qualifies,[f(x)]g(θ)=eg(θ)log⁡f(x){\displaystyle {[f(x)]}^{g(\theta )}=e^{g(\theta )\log f(x)}} and hence factorizes inside of the exponent. Similarly, [f(x)]h(x)g(θ)=eh(x)g(θ)log⁡f(x)=e[h(x)log⁡f(x)]g(θ){\displaystyle {[f(x)]}^{h(x)g(\theta )}=e^{h(x)g(\theta )\log f(x)}=e^{[h(x)\log f(x)]g(\theta )}} and again factorizes inside of the exponent. A factor consisting of a sum where both types of variables are involved (e.g. 
a factor of the form1+f(x)g(θ){\displaystyle 1+f(x)g(\theta )}) cannot be factorized in this fashion (except in some cases where occurring directly in an exponent); this is why, for example, theCauchy distributionandStudent'stdistributionare not exponential families. The definition in terms of onereal-numberparameter can be extended to onereal-vectorparameter θ≡[θ1θ2⋯θs]T.{\displaystyle {\boldsymbol {\theta }}\equiv {\begin{bmatrix}\theta _{1}&\theta _{2}&\cdots &\theta _{s}\end{bmatrix}}^{\mathsf {T}}.} A family of distributions is said to belong to a vector exponential family if the probability density function (or probability mass function, for discrete distributions) can be written as fX(x∣θ)=h(x)exp⁡(∑i=1sηi(θ)Ti(x)−A(θ)),{\displaystyle f_{X}(x\mid {\boldsymbol {\theta }})=h(x)\,\exp \left(\sum _{i=1}^{s}\eta _{i}({\boldsymbol {\theta }})T_{i}(x)-A({\boldsymbol {\theta }})\right)~,} or in a more compact form, fX(x∣θ)=h(x)exp⁡[η(θ)⋅T(x)−A(θ)]{\displaystyle f_{X}(x\mid {\boldsymbol {\theta }})=h(x)\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)-A({\boldsymbol {\theta }})\right]} This form writes the sum as adot productof vector-valued functionsη(θ){\displaystyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})}andT(x). An alternative, equivalent form often seen is fX(x∣θ)=h(x)g(θ)exp⁡[η(θ)⋅T(x)]{\displaystyle f_{X}(x\mid {\boldsymbol {\theta }})=h(x)\,g({\boldsymbol {\theta }})\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)\right]} As in the scalar valued case, the exponential family is said to be incanonical formif ηi(θ)=θi,∀i.{\displaystyle \eta _{i}({\boldsymbol {\theta }})=\theta _{i}~,\quad \forall i\,.} A vector exponential family is said to becurvedif the dimension of θ≡[θ1θ2⋯θd]T{\displaystyle {\boldsymbol {\theta }}\equiv {\begin{bmatrix}\theta _{1}&\theta _{2}&\cdots &\theta _{d}\end{bmatrix}}^{\mathsf {T}}} is less than the dimension of the vector η(θ)≡[η1(θ)η2(θ)⋯ηs(θ)]T.{\displaystyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})\equiv {\begin{bmatrix}\eta _{1}{\!({\boldsymbol {\theta }})}&\eta _{2}{\!({\boldsymbol {\theta }})}&\cdots &\eta _{s}{\!({\boldsymbol {\theta }})}\end{bmatrix}}^{\mathsf {T}}~.} That is, if thedimension,d, of the parameter vector is less than thenumber of functions,s, of the parameter vector in the above representation of the probability density function. Most common distributions in the exponential family arenotcurved, and many algorithms designed to work with any exponential family implicitly or explicitly assume that the distribution is not curved. Just as in the case of a scalar-valued parameter, the functionA(θ){\displaystyle A({\boldsymbol {\theta }})}or equivalentlyg(θ){\displaystyle g({\boldsymbol {\theta }})}is automatically determined by the normalization constraint, once the other functions have been chosen. Even ifη(θ){\displaystyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})}is not one-to-one, functionsA(η){\displaystyle A({\boldsymbol {\eta }})}andg(η){\displaystyle g({\boldsymbol {\eta }})}can be defined by requiring that the distribution is normalized for each value of the natural parameterη{\displaystyle {\boldsymbol {\eta }}}. 
This yields thecanonical form fX(x∣η)=h(x)exp⁡[η⋅T(x)−A(η)],{\displaystyle f_{X}(x\mid {\boldsymbol {\eta }})=h(x)\exp \left[{\boldsymbol {\eta }}\cdot \mathbf {T} (x)-A({\boldsymbol {\eta }})\right],} or equivalently fX(x∣η)=h(x)g(η)exp⁡[η⋅T(x)].{\displaystyle f_{X}(x\mid {\boldsymbol {\eta }})=h(x)g({\boldsymbol {\eta }})\exp \left[{\boldsymbol {\eta }}\cdot \mathbf {T} (x)\right].} The above forms may sometimes be seen withηTT(x){\displaystyle {\boldsymbol {\eta }}^{\mathsf {T}}\mathbf {T} (x)}in place ofη⋅T(x){\displaystyle {\boldsymbol {\eta }}\cdot \mathbf {T} (x)\,}. These are exactly equivalent formulations, merely using different notation for thedot product. The vector-parameter form over a single scalar-valued random variable can be trivially expanded to cover a joint distribution over a vector of random variables. The resulting distribution is simply the same as the above distribution for a scalar-valued random variable with each occurrence of the scalarxreplaced by the vector x=[x1x2⋯xk]T.{\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{k}\end{bmatrix}}^{\mathsf {T}}.} The dimensionskof the random variable need not match the dimensiondof the parameter vector, nor (in the case of a curved exponential family) the dimensionsof the natural parameterη{\displaystyle {\boldsymbol {\eta }}}andsufficient statisticT(x). The distribution in this case is written as fX(x∣θ)=h(x)exp[∑i=1sηi(θ)Ti(x)−A(θ)]{\displaystyle f_{X}{\left(\mathbf {x} \mid {\boldsymbol {\theta }}\right)}=h(\mathbf {x} )\,\exp \!\left[\sum _{i=1}^{s}\eta _{i}({\boldsymbol {\theta }})T_{i}(\mathbf {x} )-A({\boldsymbol {\theta }})\right]} Or more compactly as fX(x∣θ)=h(x)exp⁡[η(θ)⋅T(x)−A(θ)]{\displaystyle f_{X}{\left(\mathbf {x} \mid {\boldsymbol {\theta }}\right)}=h(\mathbf {x} )\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (\mathbf {x} )-A({\boldsymbol {\theta }})\right]} Or alternatively as fX(x∣θ)=g(θ)h(x)exp⁡[η(θ)⋅T(x)]{\displaystyle f_{X}{\left(\mathbf {x} \mid {\boldsymbol {\theta }}\right)}=g({\boldsymbol {\theta }})\,h(\mathbf {x} )\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (\mathbf {x} )\right]} We usecumulative distribution functions(CDF) in order to encompass both discrete and continuous distributions. SupposeHis a non-decreasing function of a real variable. ThenLebesgue–Stieltjes integralswith respect todH(x){\displaystyle dH(\mathbf {x} )}are integrals with respect to thereference measureof the exponential family generated byH. Any member of that exponential family has cumulative distribution function dF(x∣θ)=exp⁡[η(θ)⋅T(x)−A(θ)]dH(x).{\displaystyle dF{\left(\mathbf {x} \mid {\boldsymbol {\theta }}\right)}=\exp \left[{\boldsymbol {\eta }}(\theta )\cdot \mathbf {T} (\mathbf {x} )-A({\boldsymbol {\theta }})\right]~dH(\mathbf {x} )\,.} H(x)is aLebesgue–Stieltjes integratorfor the reference measure. When the reference measure is finite, it can be normalized andHis actually thecumulative distribution functionof a probability distribution. IfFis absolutely continuous with a densityf(x){\displaystyle f(x)}with respect to a reference measuredx{\displaystyle dx}(typicallyLebesgue measure), one can writedF(x)=f(x)dx{\displaystyle dF(x)=f(x)\,dx}. In this case,His also absolutely continuous and can be writtendH(x)=h(x)dx{\displaystyle dH(x)=h(x)\,dx}so the formulas reduce to that of the previous paragraphs. IfFis discrete, thenHis astep function(with steps on thesupportofF).
Alternatively, we can write the probability measure directly as P(dx∣θ)=exp⁡[η(θ)⋅T(x)−A(θ)]μ(dx).{\displaystyle P\left(d\mathbf {x} \mid {\boldsymbol {\theta }}\right)=\exp \left[{\boldsymbol {\eta }}(\theta )\cdot \mathbf {T} (\mathbf {x} )-A({\boldsymbol {\theta }})\right]~\mu (d\mathbf {x} )\,.} for some reference measureμ{\displaystyle \mu \,}. In the definitions above, the functionsT(x),η(θ), andA(η)were arbitrary. However, these functions have important interpretations in the resulting probability distribution. The functionAis important in its own right, because themean,varianceand othermomentsof the sufficient statisticT(x)can be derived simply by differentiatingA(η). For example, becauselog(x)is one of the components of the sufficient statistic of thegamma distribution,E⁡[log⁡x]{\displaystyle \operatorname {E} [\log x]}can be easily determined for this distribution usingA(η). Technically, this is true becauseK(u∣η)=A(η+u)−A(η){\displaystyle K{\left(u\mid \eta \right)}=A(\eta +u)-A(\eta )}is thecumulant generating functionof the sufficient statistic. Exponential families have a large number of properties that make them extremely useful for statistical analysis. In many cases, it can be shown thatonlyexponential families have these properties. Examples include the existence of asufficient statisticwhose dimension does not grow with the sample size, and the existence ofconjugate priors; both are discussed below. Given an exponential family defined byfX(x∣θ)=h(x)exp⁡[θ⋅T(x)−A(θ)]{\displaystyle f_{X}{\!(x\mid \theta )}=h(x)\exp \left[\theta \cdot T(x)-A(\theta )\right]}, whereΘ{\displaystyle \Theta }is the parameter space withθ∈Θ⊂Rk{\displaystyle \theta \in \Theta \subset \mathbb {R} ^{k}}, several of these characterizations can be stated precisely. It is critical, when considering the examples in this section, to remember the discussion above about what it means to say that a "distribution" is an exponential family, and in particular to keep in mind that the set of parameters that are allowed to vary is critical in determining whether a "distribution" is or is not an exponential family. Thenormal,exponential,log-normal,gamma,chi-squared,beta,Dirichlet,Bernoulli,categorical,Poisson,geometric,inverse Gaussian,ALAAM,von Mises, andvon Mises-Fisherdistributions are all exponential families. Some distributions are exponential families only if some of their parameters are held fixed. The family ofPareto distributionswith a fixed minimum boundxmforms an exponential family. The families ofbinomialandmultinomialdistributions with fixed number of trialsnbut unknown probability parameter(s) are exponential families. The family ofnegative binomial distributionswith fixed number of failures (a.k.a. stopping-time parameter)ris an exponential family. However, when any of the above-mentioned fixed parameters are allowed to vary, the resulting family is not an exponential family. As mentioned above, as a general rule, thesupportof an exponential family must remain the same across all parameter settings in the family. This is why the above cases (e.g. binomial with varying number of trials, Pareto with varying minimum bound) are not exponential families — in all of the cases, the parameter in question affects the support (particularly, changing the minimum or maximum possible value). For similar reasons, neither thediscrete uniform distributionnorcontinuous uniform distributionare exponential families as one or both bounds vary. TheWeibull distributionwith fixed shape parameterkis an exponential family.
Unlike in the previous examples, the shape parameter does not affect the support; the fact that allowing it to vary makes the Weibull non-exponential is due rather to the particular form of the Weibull'sprobability density function(kappears in the exponent of an exponent). In general, distributions that result from a finite or infinitemixtureof other distributions, e.g.mixture modeldensities andcompound probability distributions, arenotexponential families. Examples are typical Gaussianmixture modelsas well as manyheavy-tailed distributionsthat result fromcompounding(i.e. infinitely mixing) a distribution with aprior distributionover one of its parameters, e.g. theStudent'st-distribution(compounding anormal distributionover agamma-distributedprecision prior), and thebeta-binomialandDirichlet-multinomialdistributions. Other examples of distributions that are not exponential families are theF-distribution,Cauchy distribution,hypergeometric distributionandlogistic distribution. Following are some detailed examples of the representation of some useful distribution as exponential families. As a first example, consider a random variable distributed normally with unknown meanμandknownvarianceσ2. The probability density function is then fσ(x;μ)=12πσ2e−(x−μ)2/2σ2.{\displaystyle f_{\sigma }(x;\mu )={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(x-\mu )^{2}/2\sigma ^{2}}.} This is a single-parameter exponential family, as can be seen by setting Tσ(x)=xσ,hσ(x)=12πσ2e−x2/2σ2,Aσ(μ)=μ22σ2,ησ(μ)=μσ.{\displaystyle {\begin{aligned}T_{\sigma }(x)&={\frac {x}{\sigma }},&h_{\sigma }(x)&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-x^{2}/2\sigma ^{2}},\\[4pt]A_{\sigma }(\mu )&={\frac {\mu ^{2}}{2\sigma ^{2}}},&\eta _{\sigma }(\mu )&={\frac {\mu }{\sigma }}.\end{aligned}}} Ifσ= 1this is in canonical form, as thenη(μ) =μ. Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then f(y;μ,σ2)=12πσ2e−(y−μ)2/2σ2.{\displaystyle f(y;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(y-\mu )^{2}/2\sigma ^{2}}.} This is an exponential family which can be written in canonical form by defining h(y)=12π,η=[μσ2,−12σ2],T(y)=(y,y2)T,A(η)=μ22σ2+log⁡|σ|=−η124η2+12log⁡|12η2|{\displaystyle {\begin{aligned}h(y)&={\frac {1}{\sqrt {2\pi }}},&{\boldsymbol {\eta }}&=\left[{\frac {\mu }{\sigma ^{2}}},~-{\frac {1}{2\sigma ^{2}}}\right],\\T(y)&=\left(y,y^{2}\right)^{\mathsf {T}},&A({\boldsymbol {\eta }})&={\frac {\mu ^{2}}{2\sigma ^{2}}}+\log |\sigma |=-{\frac {\eta _{1}^{2}}{4\eta _{2}}}+{\frac {1}{2}}\log \left|{\frac {1}{2\eta _{2}}}\right|\end{aligned}}} As an example of a discrete exponential family, consider thebinomial distributionwithknownnumber of trialsn. Theprobability mass functionfor this distribution isf(x)=(nx)px(1−p)n−x,x∈{0,1,2,…,n}.{\displaystyle f(x)={\binom {n}{x}}p^{x}{\left(1-p\right)}^{n-x},\quad x\in \{0,1,2,\ldots ,n\}.}This can equivalently be written asf(x)=(nx)exp⁡[xlog⁡(p1−p)+nlog⁡(1−p)],{\displaystyle f(x)={\binom {n}{x}}\exp \left[x\log \left({\frac {p}{1-p}}\right)+n\log(1-p)\right],}which shows that the binomial distribution is an exponential family, whose natural parameter isη=log⁡p1−p.{\displaystyle \eta =\log {\frac {p}{1-p}}.}This function ofpis known aslogit. The following table shows how to rewrite a number of common distributions as exponential-family distributions with natural parameters. Refer to the flashcards[12]for main exponential families. 
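Before turning to the general forms below, the binomial example just given can be verified numerically; a sketch under the assumption that SciPy is available:

```python
import numpy as np
from scipy.stats import binom
from scipy.special import comb

# Binomial(n, p) with known n, rewritten as
#   f(x) = C(n, x) * exp(x * eta + n * log(1 - p)),  eta = log(p / (1 - p)).
n, p = 10, 0.3
eta = np.log(p / (1 - p))               # natural parameter: the logit of p
for x in range(n + 1):
    ef = comb(n, x) * np.exp(x * eta + n * np.log(1 - p))
    assert np.isclose(ef, binom.pmf(x, n, p))
```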
For a scalar variable and scalar parameter, the form is as follows: fX(x∣θ)=h(x)exp⁡[η(θ)T(x)−A(η)]{\displaystyle f_{X}(x\mid \theta )=h(x)\exp \left[\eta ({\theta })T(x)-A(\eta )\right]} For a scalar variable and vector parameter: fX(x∣θ)=h(x)exp⁡[η(θ)⋅T(x)−A(η)]fX(x∣θ)=h(x)g(θ)exp⁡[η(θ)⋅T(x)]{\displaystyle {\begin{aligned}f_{X}(x\mid {\boldsymbol {\theta }})&=h(x)\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)-A({\boldsymbol {\eta }})\right]\\[4pt]f_{X}(x\mid {\boldsymbol {\theta }})&=h(x)\,g({\boldsymbol {\theta }})\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)\right]\end{aligned}}} For a vector variable and vector parameter: fX(x∣θ)=h(x)exp⁡[η(θ)⋅T(x)−A(η)]{\displaystyle f_{X}(\mathbf {x} \mid {\boldsymbol {\theta }})=h(\mathbf {x} )\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (\mathbf {x} )-A({\boldsymbol {\eta }})\right]} The above formulas choose the functional form of the exponential-family with a log-partition functionA(η){\displaystyle A({\boldsymbol {\eta }})}. The reason for this is so that themoments of the sufficient statisticscan be calculated easily, simply by differentiating this function. Alternative forms involve either parameterizing this function in terms of the normal parameterθ{\displaystyle {\boldsymbol {\theta }}}instead of the natural parameter, and/or using a factorg(η){\displaystyle g({\boldsymbol {\eta }})}outside of the exponential. The relation between the latter and the former is:A(η)=−log⁡g(η),g(η)=e−A(η){\displaystyle {\begin{aligned}A({\boldsymbol {\eta }})&=-\log g({\boldsymbol {\eta }}),\\[2pt]g({\boldsymbol {\eta }})&=e^{-A({\boldsymbol {\eta }})}\end{aligned}}}To convert between the representations involving the two types of parameter, one writes each type of parameter in terms of the other. For thecategorical distribution, for example, the mapping from the natural parameters back to the standard parameters is thesoftmax function, a generalization of thelogistic function:pi=eηi/C{\displaystyle p_{i}=e^{\eta _{i}}/C}whereC=∑i=1keηi{\textstyle C=\sum \limits _{i=1}^{k}e^{\eta _{i}}}, or, in the variant withk−1{\displaystyle k-1}free parameters, 1C2[eη1⋮eηk−11]{\displaystyle {\frac {1}{C_{2}}}{\begin{bmatrix}e^{\eta _{1}}\\[5pt]\vdots \\[5pt]e^{\eta _{k-1}}\\[5pt]1\end{bmatrix}}} whereC2=1+∑i=1k−1eηi{\textstyle C_{2}=1+\sum \limits _{i=1}^{k-1}e^{\eta _{i}}}. The inverse mapping is the inversesoftmax function, a generalization of thelogit function. Three variants with different parameterizations are given, to facilitate computing moments of the sufficient statistics. The three variants of thecategorical distributionandmultinomial distributionare due to the fact that the parameterspi{\displaystyle p_{i}}are constrained, such that ∑i=1kpi=1.{\displaystyle \sum _{i=1}^{k}p_{i}=1\,.} Thus, there are onlyk−1{\displaystyle k-1}independent parameters. Variants 1 and 2 are not actually standard exponential families at all. Rather they arecurved exponential families, i.e. there arek−1{\displaystyle k-1}independent parameters embedded in ak{\displaystyle k}-dimensional parameter space.[13]Many of the standard results for exponential families do not apply to curved exponential families.
An example is the log-partition functionA(η){\displaystyle A(\eta )}, which has the value of 0 in the curved cases. In standard exponential families, the derivatives of this function correspond to the moments (more technically, thecumulants) of the sufficient statistics, e.g. the mean and variance. However, a value of 0 suggests that the mean and variance of all the sufficient statistics are uniformly 0, whereas in fact the mean of thei{\displaystyle i}th sufficient statistic should bepi{\displaystyle p_{i}}. (This does emerge correctly when using the form ofA(η){\displaystyle A(\eta )}shown in variant 3.) We start with the normalization of the probability distribution. In general, any non-negative functionf(x) that serves as thekernelof a probability distribution (the part encoding all dependence onx) can be made into a proper distribution bynormalizing: i.e. p(x)=1Zf(x){\displaystyle p(x)={\frac {1}{Z}}f(x)} where Z=∫xf(x)dx.{\displaystyle Z=\int _{x}f(x)\,dx.} The factorZis sometimes termed thenormalizerorpartition function, based on an analogy tostatistical physics. In the case of an exponential family wherep(x;η)=g(η)h(x)eη⋅T(x),{\displaystyle p(x;{\boldsymbol {\eta }})=g({\boldsymbol {\eta }})h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)},} the kernel isK(x)=h(x)eη⋅T(x){\displaystyle K(x)=h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}}and the partition function isZ=∫xh(x)eη⋅T(x)dx.{\displaystyle Z=\int _{x}h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx.} Since the distribution must be normalized, we have 1=∫xg(η)h(x)eη⋅T(x)dx=g(η)∫xh(x)eη⋅T(x)dx=g(η)Z.{\displaystyle {\begin{aligned}1&=\int _{x}g({\boldsymbol {\eta }})h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx\\&=g({\boldsymbol {\eta }})\int _{x}h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx\\[1ex]&=g({\boldsymbol {\eta }})Z.\end{aligned}}} In other words,g(η)=1Z{\displaystyle g({\boldsymbol {\eta }})={\frac {1}{Z}}}or equivalentlyA(η)=−log⁡g(η)=log⁡Z.{\displaystyle A({\boldsymbol {\eta }})=-\log g({\boldsymbol {\eta }})=\log Z.} This justifies callingAthelog-normalizerorlog-partition function. Now, themoment-generating functionofT(x)is MT(u)≡E⁡[exp⁡(uTT(x))∣η]=∫xh(x)exp⁡[(η+u)TT(x)−A(η)]dx=eA(η+u)−A(η){\displaystyle {\begin{aligned}M_{T}(u)&\equiv \operatorname {E} \left[\exp \left(u^{\mathsf {T}}T(x)\right)\mid \eta \right]\\&=\int _{x}h(x)\,\exp \left[(\eta +u)^{\mathsf {T}}T(x)-A(\eta )\right]\,dx\\[1ex]&=e^{A(\eta +u)-A(\eta )}\end{aligned}}} proving the earlier statement that K(u∣η)=A(η+u)−A(η){\displaystyle K(u\mid \eta )=A(\eta +u)-A(\eta )} is thecumulant generating functionforT. An important subclass of exponential families are thenatural exponential families, which have a similar form for the moment-generating function for the distribution ofx. In particular, using the properties of the cumulant generating function, E⁡(Tj)=∂A(η)∂ηj{\displaystyle \operatorname {E} (T_{j})={\frac {\partial A(\eta )}{\partial \eta _{j}}}} and cov⁡(Ti,Tj)=∂2A(η)∂ηi∂ηj.{\displaystyle \operatorname {cov} \left(T_{i},\,T_{j}\right)={\frac {\partial ^{2}A(\eta )}{\partial \eta _{i}\,\partial \eta _{j}}}.} The first two raw moments and all mixed second moments can be recovered from these two identities. Higher-order moments and cumulants are obtained by higher derivatives. This technique is often useful whenTis a complicated function of the data, whose moments are difficult to calculate by integration.
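These identities are easy to check symbolically for a concrete family. For the Poisson family in natural form (η = log λ, A(η) = e^η; a standard parameterization, assumed here rather than taken from the table), differentiating A recovers the familiar mean and variance. A sketch using SymPy:

```python
import sympy as sp

eta, lam = sp.symbols('eta lambda', positive=True)
A = sp.exp(eta)                 # log-partition of the Poisson, eta = log(lambda)
mean = sp.diff(A, eta).subs(eta, sp.log(lam))      # E[T] = A'(eta)
var = sp.diff(A, eta, 2).subs(eta, sp.log(lam))    # var(T) = A''(eta)
print(mean, var)                # both simplify to lambda
```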
Another way to see this that does not rely on the theory ofcumulantsis to begin from the fact that the distribution of an exponential family must be normalized, and differentiate. We illustrate using the simple case of a one-dimensional parameter, but an analogous derivation holds more generally. In the one-dimensional case, we havep(x)=g(η)h(x)eηT(x).{\displaystyle p(x)=g(\eta )h(x)e^{\eta T(x)}.} This must be normalized, so 1=∫xp(x)dx=∫xg(η)h(x)eηT(x)dx=g(η)∫xh(x)eηT(x)dx.{\displaystyle 1=\int _{x}p(x)\,dx=\int _{x}g(\eta )h(x)e^{\eta T(x)}\,dx=g(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx.} Take thederivativeof both sides with respect toη: 0=g(η)ddη∫xh(x)eηT(x)dx+g′(η)∫xh(x)eηT(x)dx=g(η)∫xh(x)(ddηeηT(x))dx+g′(η)∫xh(x)eηT(x)dx=g(η)∫xh(x)eηT(x)T(x)dx+g′(η)∫xh(x)eηT(x)dx=∫xT(x)g(η)h(x)eηT(x)dx+g′(η)g(η)∫xg(η)h(x)eηT(x)dx=∫xT(x)p(x)dx+g′(η)g(η)∫xp(x)dx=E⁡[T(x)]+g′(η)g(η)=E⁡[T(x)]+ddηlog⁡g(η){\displaystyle {\begin{aligned}0&=g(\eta ){\frac {d}{d\eta }}\int _{x}h(x)e^{\eta T(x)}\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=g(\eta )\int _{x}h(x)\left({\frac {d}{d\eta }}e^{\eta T(x)}\right)\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=g(\eta )\int _{x}h(x)e^{\eta T(x)}T(x)\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=\int _{x}T(x)g(\eta )h(x)e^{\eta T(x)}\,dx+{\frac {g'(\eta )}{g(\eta )}}\int _{x}g(\eta )h(x)e^{\eta T(x)}\,dx\\[1ex]&=\int _{x}T(x)p(x)\,dx+{\frac {g'(\eta )}{g(\eta )}}\int _{x}p(x)\,dx\\[1ex]&=\operatorname {E} [T(x)]+{\frac {g'(\eta )}{g(\eta )}}\\[1ex]&=\operatorname {E} [T(x)]+{\frac {d}{d\eta }}\log g(\eta )\end{aligned}}} Therefore,E⁡[T(x)]=−ddηlog⁡g(η)=ddηA(η).{\displaystyle \operatorname {E} [T(x)]=-{\frac {d}{d\eta }}\log g(\eta )={\frac {d}{d\eta }}A(\eta ).} As an introductory example, consider thegamma distribution, whose distribution is defined by p(x)=βαΓ(α)xα−1e−βx.{\displaystyle p(x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.} Referring to the above table, we can see that the natural parameter is given by η1=α−1,η2=−β,{\displaystyle {\begin{aligned}\eta _{1}&=\alpha -1,\\\eta _{2}&=-\beta ,\end{aligned}}} the reverse substitutions are α=η1+1,β=−η2,{\displaystyle {\begin{aligned}\alpha &=\eta _{1}+1,\\\beta &=-\eta _{2},\end{aligned}}} the sufficient statistics are(logx, x), and the log-partition function is A(η1,η2)=log⁡Γ(η1+1)−(η1+1)log⁡(−η2).{\displaystyle A(\eta _{1},\eta _{2})=\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2}).} We can find the mean of the sufficient statistics as follows. First, forη1: E⁡[log⁡x]=∂∂η1A(η1,η2)=∂∂η1[log⁡Γ(η1+1)−(η1+1)log⁡(−η2)]=ψ(η1+1)−log⁡(−η2)=ψ(α)−log⁡β,{\displaystyle {\begin{aligned}\operatorname {E} [\log x]&={\frac {\partial }{\partial \eta _{1}}}A(\eta _{1},\eta _{2})\\[0.5ex]&={\frac {\partial }{\partial \eta _{1}}}\left[\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2})\right]\\[1ex]&=\psi (\eta _{1}+1)-\log(-\eta _{2})\\[1ex]&=\psi (\alpha )-\log \beta ,\end{aligned}}} Whereψ(x){\displaystyle \psi (x)}is thedigamma function(derivative of log gamma), and we used the reverse substitutions in the last step. 
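The closed form just obtained forE⁡[log⁡x]can be compared against simulation; a sketch assuming SciPy and NumPy (the parameter values, sample size and seed are arbitrary choices):

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import digamma

alpha, beta = 3.0, 2.0
rng = np.random.default_rng(0)
x = gamma(a=alpha, scale=1 / beta).rvs(size=1_000_000, random_state=rng)
print(np.mean(np.log(x)))                # Monte Carlo estimate of E[log x]
print(digamma(alpha) - np.log(beta))     # psi(alpha) - log(beta)
```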
Now, forη2: E⁡[x]=∂∂η2A(η1,η2)=∂∂η2[log⁡Γ(η1+1)−(η1+1)log⁡(−η2)]=−(η1+1)1−η2(−1)=η1+1−η2=αβ,{\displaystyle {\begin{aligned}\operatorname {E} [x]&={\frac {\partial }{\partial \eta _{2}}}A(\eta _{1},\eta _{2})\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2})\right]\\[1ex]&=-(\eta _{1}+1){\frac {1}{-\eta _{2}}}(-1)={\frac {\eta _{1}+1}{-\eta _{2}}}={\frac {\alpha }{\beta }},\end{aligned}}} again making the reverse substitution in the last step. To compute the variance ofx, we just differentiate again: Var⁡(x)=∂2∂η22A(η1,η2)=∂∂η2η1+1−η2=η1+1η22=αβ2.{\displaystyle {\begin{aligned}\operatorname {Var} (x)&={\frac {\partial ^{2}}{\partial \eta _{2}^{2}}}A{\left(\eta _{1},\eta _{2}\right)}={\frac {\partial }{\partial \eta _{2}}}{\frac {\eta _{1}+1}{-\eta _{2}}}\\[1ex]&={\frac {\eta _{1}+1}{\eta _{2}^{2}}}={\frac {\alpha }{\beta ^{2}}}.\end{aligned}}} All of these calculations can be done using integration, making use of various properties of thegamma function, but this requires significantly more work. As another example consider a real valued random variableXwith density pθ(x)=θe−x(1+e−x)θ+1{\displaystyle p_{\theta }(x)={\frac {\theta e^{-x}}{\left(1+e^{-x}\right)^{\theta +1}}}} indexed by shape parameterθ∈(0,∞){\displaystyle \theta \in (0,\infty )}(this is called theskew-logistic distribution). The density can be rewritten as e−x1+e−xexp⁡[−θlog⁡(1+e−x)+log⁡(θ)]{\displaystyle {\frac {e^{-x}}{1+e^{-x}}}\exp[-\theta \log \left(1+e^{-x})+\log(\theta )\right]} Notice this is an exponential family with natural parameter η=−θ,{\displaystyle \eta =-\theta ,} sufficient statistic T=log⁡(1+e−x),{\displaystyle T=\log \left(1+e^{-x}\right),} and log-partition function A(η)=−log⁡(θ)=−log⁡(−η){\displaystyle A(\eta )=-\log(\theta )=-\log(-\eta )} So using the first identity, E⁡[log⁡(1+e−X)]=E⁡(T)=∂A(η)∂η=∂∂η[−log⁡(−η)]=1−η=1θ,{\displaystyle \operatorname {E} \left[\log \left(1+e^{-X}\right)\right]=\operatorname {E} (T)={\frac {\partial A(\eta )}{\partial \eta }}={\frac {\partial }{\partial \eta }}[-\log(-\eta )]={\frac {1}{-\eta }}={\frac {1}{\theta }},} and using the second identity var⁡[log⁡(1+e−X)]=∂2A(η)∂η2=∂∂η[1−η]=1(−η)2=1θ2.{\displaystyle \operatorname {var} \left[\log \left(1+e^{-X}\right)\right]={\frac {\partial ^{2}A(\eta )}{\partial \eta ^{2}}}={\frac {\partial }{\partial \eta }}\left[{\frac {1}{-\eta }}\right]={\frac {1}{{\left(-\eta \right)}^{2}}}={\frac {1}{\theta ^{2}}}.} This example illustrates a case where using this method is very simple, but the direct calculation would be nearly impossible. The final example is one where integration would be extremely difficult. This is the case of theWishart distribution, which is defined over matrices. Even taking derivatives is a bit tricky, as it involvesmatrix calculus, but the respective identities are listed in that article. From the above table, we can see that the natural parameter is given by η1=−12V−1,η2=−12(n−p−1),{\displaystyle {\begin{aligned}{\boldsymbol {\eta }}_{1}&=-{\tfrac {1}{2}}\mathbf {V} ^{-1},\\\eta _{2}&={\hphantom {-}}{\tfrac {1}{2}}\left(n-p-1\right),\end{aligned}}} the reverse substitutions are V=−12η1−1,n=2η2+p+1,{\displaystyle {\begin{aligned}\mathbf {V} &=-{\tfrac {1}{2}}{\boldsymbol {\eta }}_{1}^{-1},\\n&=2\eta _{2}+p+1,\end{aligned}}} and the sufficient statistics are(X,log⁡|X|).{\displaystyle (\mathbf {X} ,\log |\mathbf {X} |).} The log-partition function is written in various forms in the table, to facilitate differentiation and back-substitution. 
We use the following forms: A(η1,n)=−n2log⁡|−η1|+log⁡Γp(n2),A(V,η2)=(η2+p+12)log⁡(2p|V|)+log⁡Γp(η2+p+12).{\displaystyle {\begin{aligned}A({\boldsymbol {\eta }}_{1},n)&=-{\frac {n}{2}}\log \left|-{\boldsymbol {\eta }}_{1}\right|+\log \Gamma _{p}{\left({\frac {n}{2}}\right)},\\[1ex]A(\mathbf {V} ,\eta _{2})&=\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)+\log \Gamma _{p}{\left(\eta _{2}+{\frac {p+1}{2}}\right)}.\end{aligned}}} To differentiate with respect toη1, we need the followingmatrix calculusidentity: ∂log⁡|aX|∂X=(X−1)T{\displaystyle {\frac {\partial \log |a\mathbf {X} |}{\partial \mathbf {X} }}=(\mathbf {X} ^{-1})^{\mathsf {T}}} Then: E⁡[X]=∂∂η1A(η1,…)=∂∂η1[−n2log⁡|−η1|+log⁡Γp(n2)]=−n2(η1−1)T=n2(−η1−1)T=n(V)T=nV{\displaystyle {\begin{aligned}\operatorname {E} [\mathbf {X} ]&={\frac {\partial }{\partial {\boldsymbol {\eta }}_{1}}}A\left({\boldsymbol {\eta }}_{1},\ldots \right)\\[1ex]&={\frac {\partial }{\partial {\boldsymbol {\eta }}_{1}}}\left[-{\frac {n}{2}}\log \left|-{\boldsymbol {\eta }}_{1}\right|+\log \Gamma _{p}{\left({\frac {n}{2}}\right)}\right]\\[1ex]&=-{\frac {n}{2}}({\boldsymbol {\eta }}_{1}^{-1})^{\mathsf {T}}\\[1ex]&={\frac {n}{2}}(-{\boldsymbol {\eta }}_{1}^{-1})^{\mathsf {T}}\\[1ex]&=n(\mathbf {V} )^{\mathsf {T}}\\[1ex]&=n\mathbf {V} \end{aligned}}} The last line uses the fact thatVis symmetric, and therefore it is the same when transposed. Now, forη2, we first need to expand the part of the log-partition function that involves themultivariate gamma function: log⁡Γp(a)=log⁡(πp(p−1)4∏j=1pΓ(a+1−j2))=p(p−1)4log⁡π+∑j=1plog⁡Γ(a+1−j2){\displaystyle {\begin{aligned}\log \Gamma _{p}(a)&=\log \left(\pi ^{\frac {p(p-1)}{4}}\prod _{j=1}^{p}\Gamma {\left(a+{\frac {1-j}{2}}\right)}\right)\\&={\frac {p(p-1)}{4}}\log \pi +\sum _{j=1}^{p}\log \Gamma {\left(a+{\frac {1-j}{2}}\right)}\end{aligned}}} We also need thedigamma function: ψ(x)=ddxlog⁡Γ(x).{\displaystyle \psi (x)={\frac {d}{dx}}\log \Gamma (x).} Then: E⁡[log⁡|X|]=∂∂η2A(…,η2)=∂∂η2[(η2+p+12)log⁡(2p|V|)+log⁡Γp(η2+p+12)]=∂∂η2[(η2+p+12)log⁡(2p|V|)]+∂∂η2[p(p−1)4log⁡π]=+∂∂η2∑j=1plog⁡Γ(η2+p+12+1−j2)=plog⁡2+log⁡|V|+∑j=1pψ(η2+p+12+1−j2)=plog⁡2+log⁡|V|+∑j=1pψ(n−p−12+p+12+1−j2)=plog⁡2+log⁡|V|+∑j=1pψ(n+1−j2){\displaystyle {\begin{aligned}\operatorname {E} [\log |\mathbf {X} |]&={\frac {\partial }{\partial \eta _{2}}}A\left(\ldots ,\eta _{2}\right)\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)+\log \Gamma _{p}{\left(\eta _{2}+{\frac {p+1}{2}}\right)}\right]\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)\right]+{\frac {\partial }{\partial \eta _{2}}}\left[{\frac {p(p-1)}{4}}\log \pi \right]\\&{\hphantom {=}}+{\frac {\partial }{\partial \eta _{2}}}\sum _{j=1}^{p}\log \Gamma {\left(\eta _{2}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left(\eta _{2}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left({\frac {n-p-1}{2}}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left({\frac {n+1-j}{2}}\right)}\end{aligned}}} This latter formula is listed in theWishart distributionarticle.
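The resulting formula can be verified by simulation; a sketch assuming SciPy (the particular V, n, sample size and seed are arbitrary choices):

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import digamma

p, n = 3, 7
V = np.diag([1.0, 2.0, 0.5])
# E[log |X|] = p log 2 + log |V| + sum_{j=1}^{p} psi((n + 1 - j) / 2)
formula = p * np.log(2) + np.linalg.slogdet(V)[1] + sum(
    digamma((n + 1 - j) / 2) for j in range(1, p + 1))
rng = np.random.default_rng(2)
samples = wishart(df=n, scale=V).rvs(size=100_000, random_state=rng)
mc = np.mean([np.linalg.slogdet(S)[1] for S in samples])
print(formula, mc)              # the two values should agree closely
```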
Both of these expectations are needed when deriving thevariational Bayesupdate equations in aBayes networkinvolving a Wishart distribution (which is theconjugate priorof themultivariate normal distribution). Computing these formulas using integration would be much more difficult. The first one, for example, would require matrix integration. Therelative entropy(Kullback–Leibler divergence, KL divergence) of two distributions in an exponential family has a simple expression as theBregman divergencebetween the natural parameters with respect to the log-normalizer.[14]The relative entropy is defined in terms of an integral, while the Bregman divergence is defined in terms of a derivative and inner product, and thus is easier to calculate and has aclosed-form expression(assuming the derivative has a closed-form expression). Further, the Bregman divergence in terms of the natural parameters and the log-normalizer equals the Bregman divergence of the dual parameters (expectation parameters), in the opposite order, for theconvex conjugatefunction.[15] Fixing an exponential family with log-normalizerA{\displaystyle A}(with convex conjugateA∗{\displaystyle A^{*}}), writingPA,θ{\displaystyle P_{A,\theta }}for the distribution in this family corresponding to a fixed value of the natural parameterθ{\displaystyle \theta }(writingθ′{\displaystyle \theta '}for another value, and withη,η′{\displaystyle \eta ,\eta '}for the corresponding dual expectation/moment parameters), writingKLfor the KL divergence, andBA{\displaystyle B_{A}}for the Bregman divergence, the divergences are related as:KL⁡(PA,θ∥PA,θ′)=BA(θ′∥θ)=BA∗(η∥η′).{\displaystyle \operatorname {KL} (P_{A,\theta }\parallel P_{A,\theta '})=B_{A}(\theta '\parallel \theta )=B_{A^{*}}(\eta \parallel \eta ').} The KL divergence is conventionally written with respect to thefirstparameter, while the Bregman divergence is conventionally written with respect to thesecondparameter, and thus this can be read as "the relative entropy is equal to the Bregman divergence defined by the log-normalizer on the swapped natural parameters", or equivalently as "equal to the Bregman divergence defined by the dual to the log-normalizer on the expectation parameters". Exponential families arise naturally as the answer to the following question: what is themaximum-entropydistribution consistent with given constraints on expected values? Theinformation entropyof a probability distributiondF(x)can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and bothmeasuresmust be mutuallyabsolutely continuous. Accordingly, we need to pick areference measuredH(x)with the same support asdF(x). The entropy ofdF(x)relative todH(x)is S[dF∣dH]=−∫dFdHlog⁡dFdHdH{\displaystyle S[dF\mid dH]=-\int {\frac {dF}{dH}}\log {\frac {dF}{dH}}\,dH} or S[dF∣dH]=∫log⁡dHdFdF{\displaystyle S[dF\mid dH]=\int \log {\frac {dH}{dF}}\,dF} wheredF/dHanddH/dFareRadon–Nikodym derivatives. The ordinary definition of entropy for a discrete distribution supported on a setI, namely S=−∑i∈Ipilog⁡pi{\displaystyle S=-\sum _{i\in I}p_{i}\log p_{i}} assumes, though this is seldom pointed out, thatdHis chosen to be thecounting measureonI. Consider now a collection of observable quantities (random variables)Ti. The probability distributiondFwhose entropy with respect todHis greatest, subject to the conditions that the expected value ofTibe equal toti, is an exponential family withdHas reference measure and(T1, ...,Tn)as sufficient statistic.
The derivation is a simplevariational calculationusingLagrange multipliers. Normalization is imposed by lettingT0= 1be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated toT0. For examples of such derivations, seeMaximum entropy probability distribution. According to thePitman–Koopman–Darmoistheorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there asufficient statisticwhose dimension remains bounded as sample size increases. Less tersely, supposeXk, (wherek= 1, 2, 3, ...n) areindependent, identically distributed random variables. Only if their distribution is one of theexponential familyof distributions is there asufficient statisticT(X1, ...,Xn)whosenumberofscalar componentsdoes not increase as the sample sizenincreases; the statisticTmay be avectoror asingle scalar number, but whatever it is, itssizewill neither grow nor shrink when more data are obtained. As a counterexample if these conditions are relaxed, the family ofuniform distributions(eitherdiscreteorcontinuous, with either or both bounds unknown) has a sufficient statistic, namely the sample maximum, sample minimum, and sample size, but does not form an exponential family, as the domain varies with the parameters. Exponential families are also important inBayesian statistics. In Bayesian statistics aprior distributionis multiplied by alikelihood functionand then normalised to produce aposterior distribution. In the case of a likelihood which belongs to an exponential family there exists aconjugate prior, which is often also in an exponential family. A conjugate prior π for the parameterη{\displaystyle {\boldsymbol {\eta }}}of an exponential family f(x∣η)=h(x)exp⁡[ηTT(x)−A(η)]{\displaystyle f(x\mid {\boldsymbol {\eta }})=h(x)\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}\mathbf {T} (x)-A({\boldsymbol {\eta }})\right]} is given by pπ(η∣χ,ν)=f(χ,ν)exp⁡[ηTχ−νA(η)],{\displaystyle p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )=f({\boldsymbol {\chi }},\nu )\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }}-\nu A({\boldsymbol {\eta }})\right],} or equivalently pπ(η∣χ,ν)=f(χ,ν)g(η)νexp⁡(ηTχ),χ∈Rs{\displaystyle p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )=f({\boldsymbol {\chi }},\nu )\,g({\boldsymbol {\eta }})^{\nu }\,\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }}\right),\qquad {\boldsymbol {\chi }}\in \mathbb {R} ^{s}} wheresis the dimension ofη{\displaystyle {\boldsymbol {\eta }}}andν>0{\displaystyle \nu >0}andχ{\displaystyle {\boldsymbol {\chi }}}arehyperparameters(parameters controlling parameters).ν{\displaystyle \nu }corresponds to the effective number of observations that the prior distribution contributes, andχ{\displaystyle {\boldsymbol {\chi }}}corresponds to the total amount that these pseudo-observations contribute to thesufficient statisticover all observations and pseudo-observations.f(χ,ν){\displaystyle f({\boldsymbol {\chi }},\nu )}is anormalization constantthat is automatically determined by the remaining functions and serves to ensure that the given function is aprobability density function(i.e. it isnormalized).A(η){\displaystyle A({\boldsymbol {\eta }})}and equivalentlyg(η){\displaystyle g({\boldsymbol {\eta }})}are the same functions as in the definition of the distribution over which π is the conjugate prior. 
A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of aPoisson distributionthe use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. However, if one's belief about the likely value of the theta parameter of a binomial is represented by (say) a bimodal (two-humped) prior distribution, then this cannot be represented by a beta distribution. It can however be represented by using amixture densityas the prior, here a combination of two beta distributions; this is a form ofhyperprior. An arbitrary likelihood will not belong to an exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods. To show that the above prior distribution is a conjugate prior, we can derive the posterior. First, assume that the probability of a single observation follows an exponential family, parameterized using its natural parameter: pF(x∣η)=h(x)g(η)exp⁡[ηTT(x)]{\displaystyle p_{F}(x\mid {\boldsymbol {\eta }})=h(x)\,g({\boldsymbol {\eta }})\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}\mathbf {T} (x)\right]} Then, for dataX=(x1,…,xn){\displaystyle \mathbf {X} =(x_{1},\ldots ,x_{n})}, the likelihood is computed as follows: p(X∣η)=(∏i=1nh(xi))g(η)nexp⁡(ηT∑i=1nT(xi)){\displaystyle p(\mathbf {X} \mid {\boldsymbol {\eta }})=\left(\prod _{i=1}^{n}h(x_{i})\right)g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)} Then, for the above conjugate prior: pπ(η∣χ,ν)=f(χ,ν)g(η)νexp⁡(ηTχ)∝g(η)νexp⁡(ηTχ){\displaystyle {\begin{aligned}p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )&=f({\boldsymbol {\chi }},\nu )g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\propto g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\end{aligned}}} We can then compute the posterior as follows: p(η∣X,χ,ν)∝p(X∣η)pπ(η∣χ,ν)=(∏i=1nh(xi))g(η)nexp⁡(ηT∑i=1nT(xi))f(χ,ν)g(η)νexp⁡(ηTχ)∝g(η)nexp⁡(ηT∑i=1nT(xi))g(η)νexp⁡(ηTχ)∝g(η)ν+nexp⁡(ηT(χ+∑i=1nT(xi))){\displaystyle {\begin{aligned}p({\boldsymbol {\eta }}\mid \mathbf {X} ,{\boldsymbol {\chi }},\nu )&\propto p(\mathbf {X} \mid {\boldsymbol {\eta }})p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )\\&=\left(\prod _{i=1}^{n}h(x_{i})\right)g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)f({\boldsymbol {\chi }},\nu )g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\\&\propto g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\\&\propto g({\boldsymbol {\eta }})^{\nu +n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\left({\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)\right)\end{aligned}}} The last line is thekernelof the posterior distribution, i.e. 
p(η∣X,χ,ν)=pπ(η|χ+∑i=1nT(xi),ν+n){\displaystyle p({\boldsymbol {\eta }}\mid \mathbf {X} ,{\boldsymbol {\chi }},\nu )=p_{\pi }\left({\boldsymbol {\eta }}\left|~{\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i}),\nu +n\right.\right)} This shows that the posterior has the same form as the prior. The dataXenters into this equationonlyin the expression T(X)=∑i=1nT(xi),{\displaystyle \mathbf {T} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {T} (x_{i}),} which is termed thesufficient statisticof the data. That is, the value of the sufficient statistic is sufficient to completely determine the posterior distribution. The actual data points themselves are not needed, and all sets of data points with the same sufficient statistic will have the same distribution. This is important because the dimension of the sufficient statistic does not grow with the data size — it has only as many components as the components ofη{\displaystyle {\boldsymbol {\eta }}}(equivalently, the number of parameters of the distribution of a single data point). The update equations are as follows: χ′=χ+T(X)=χ+∑i=1nT(xi)ν′=ν+n{\displaystyle {\begin{aligned}{\boldsymbol {\chi }}'&={\boldsymbol {\chi }}+\mathbf {T} (\mathbf {X} )\\&={\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i})\\\nu '&=\nu +n\end{aligned}}} This shows that the update equations can be written simply in terms of the number of data points and thesufficient statisticof the data. This can be seen clearly in the various examples of update equations shown in theconjugate priorpage. Because of the way that the sufficient statistic is computed, it necessarily involves sums of components of the data (in some cases disguised as products or other forms — a product can be written in terms of a sum oflogarithms). The cases where the update equations for particular distributions don't exactly match the above forms are cases where the conjugate prior has been expressed using a differentparameterizationthan the one that produces a conjugate prior of the above form — often specifically because the above form is defined over the natural parameterη{\displaystyle {\boldsymbol {\eta }}}while conjugate priors are usually defined over the actual parameterθ.{\displaystyle {\boldsymbol {\theta }}.} If the likelihoodz|η∼eηzf1(η)f0(z){\displaystyle z|\eta \sim e^{\eta z}f_{1}(\eta )f_{0}(z)}is an exponential family, then the unbiased estimator ofη{\displaystyle \eta }is−ddzln⁡f0(z){\displaystyle -{\frac {d}{dz}}\ln f_{0}(z)}.[16] A one-parameter exponential family has a monotone non-decreasing likelihood ratio in thesufficient statisticT(x), provided thatη(θ)is non-decreasing. As a consequence, there exists auniformly most powerful testfortesting the hypothesisH0:θ≥θ0vs.H1:θ<θ0. Exponential families form the basis for the distribution functions used ingeneralized linear models(GLM), a class of model that encompasses many of the commonly used regression models in statistics. Examples includelogistic regressionusing the binomial family andPoisson regression.
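As a concrete illustration of these update equations, consider i.i.d. Bernoulli observations, for whichT(x) =x. Under the change of variablesη= logit(p), the conjugate prior above with hyperparameters (χ,ν) corresponds to a Beta(χ,ν−χ) distribution onp; this identification is assumed here for the sake of the sketch rather than taken from the text. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.7, size=100)   # Bernoulli data, T(x) = x

# Hyperparameters of the conjugate prior on the natural parameter
# eta = logit(p); (chi, nu) = (1, 2) corresponds to a Beta(1, 1) prior on p.
chi, nu = 1.0, 2.0
chi_post = chi + x.sum()             # chi' = chi + sum_i T(x_i)
nu_post = nu + len(x)                # nu'  = nu + n
alpha, beta = chi_post, nu_post - chi_post
print(f"posterior on p: Beta({alpha}, {beta})")
```

The posterior depends on the data only through the sufficient statistic sum(x) and the count n, exactly as stated above.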
Inprobability theoryandstatistics, thePoisson distribution(/ˈpwɑːsɒn/) is adiscrete probability distributionthat expresses the probability of a given number ofeventsoccurring in a fixed interval of time if these events occur with a known constant mean rate andindependentlyof the time since the last event.[1]It can also be used for the number of events in other types of intervals than time, and in dimension greater than 1 (e.g., number of events in a given area or volume). The Poisson distribution is named afterFrenchmathematicianSiméon Denis Poisson. It plays an important role fordiscrete-stable distributions. Under a Poisson distribution with theexpectationofλevents in a given interval, the probability ofkevents in the same interval is:[2]: 60 λke−λk!.{\displaystyle {\frac {\lambda ^{k}e^{-\lambda }}{k!}}.} For instance, consider a call center which receives an average ofλ =3 calls per minute at all times of day. If the calls are independent, receiving one does not change the probability of when the next one will arrive. Under these assumptions, the numberkof calls received during any minute has a Poisson probability distribution. Receivingk =1 to 4 calls then has a probability of about 0.77, while receiving 0 or at least 5 calls has a probability of about 0.23. A classic example used to motivate the Poisson distribution is the number ofradioactive decayevents during a fixed observation period.[3] The distribution was first introduced bySiméon Denis Poisson(1781–1840) and published together with his probability theory in his workRecherches sur la probabilité des jugements en matière criminelle et en matière civile(1837).[4]: 205-207The work theorized about the number of wrongful convictions in a given country by focusing on certainrandom variablesNthat count, among other things, the number of discrete occurrences (sometimes called "events" or "arrivals") that take place during atime-interval of given length. The result had already been given in 1711 byAbraham de MoivreinDe Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus.[5]: 219[6]: 14-15[7]: 193[8]: 157This makes it an example ofStigler's lawand it has prompted some authors to argue that the Poisson distribution should bear the name of de Moivre.[9][10] In 1860,Simon Newcombfitted the Poisson distribution to the number of stars found in a unit of space.[11]A further practical application was made byLadislaus Bortkiewiczin 1898. Bortkiewicz showed that the frequency with which soldiers in the Prussian army were accidentally killed by horse kicks could be well modeled by a Poisson distribution.[12]: 23-25 A discreterandom variableXis said to have a Poisson distribution with parameterλ>0{\displaystyle \lambda >0}if it has aprobability mass functiongiven by:[2]: 60 f(k;λ)=Pr(X=k)=λke−λk!,{\displaystyle f(k;\lambda )=\Pr(X=k)={\frac {\lambda ^{k}e^{-\lambda }}{k!}},} wherekis the number of occurrences (k=0,1,2,…{\displaystyle k=0,1,2,\ldots }),eisEuler's number, andk!is thefactorial. The positivereal numberλis equal to theexpected valueofXand also to itsvariance.[13] Itscumulative distribution functionisQ(⌊k+1⌋,λ)=Γ(⌊k+1⌋,λ)⌊k⌋!=e−λ∑j=0⌊k⌋λjj!,{\displaystyle Q(\lfloor k+1\rfloor ,\lambda )={\frac {\Gamma (\lfloor k+1\rfloor ,\lambda )}{\lfloor k\rfloor !}}=e^{-\lambda }\sum _{j=0}^{\lfloor k\rfloor }{\frac {\lambda ^{j}}{j!}},}whereΓ{\displaystyle \Gamma }is theupper incomplete gamma functionandQ{\displaystyle Q}theregularized gamma function, and itsentropyisλ[1−log⁡(λ)]+e−λ∑k=0∞λklog⁡(k!)k!.{\displaystyle \lambda {\Bigl [}1-\log(\lambda ){\Bigr ]}+e^{-\lambda }\sum _{k=0}^{\infty }{\frac {\lambda ^{k}\log(k!)}{k!}}.} The Poisson distribution can be applied to systems with alarge number of possible events, each of which is rare. The number of such events that occur during a fixed time interval is, under the right circumstances, a random number with a Poisson distribution.
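A direct check of this mass function against a library implementation (a sketch, assuming SciPy is available):

```python
import math
from scipy.stats import poisson

lam = 3.0
for k in range(8):
    # f(k; lam) = lam**k * exp(-lam) / k!
    direct = lam**k * math.exp(-lam) / math.factorial(k)
    assert abs(direct - poisson.pmf(k, lam)) < 1e-12
```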
The equation can be adapted if, instead of the average number of eventsλ,{\displaystyle \lambda ,}we are given the average rater{\displaystyle r}at which events occur. Thenλ=rt,{\displaystyle \lambda =rt,}and:[14]P(kevents in intervalt)=(rt)ke−rtk!.{\displaystyle P(k{\text{ events in interval }}t)={\frac {(rt)^{k}e^{-rt}}{k!}}.} The Poisson distribution may be useful to model events such as the number of overflow floods in a century, the number of goals in a soccer match, or the number of meteorite strikes in a given period (all treated in the examples below). Examples of the occurrence of random points in space are: the locations of asteroid impacts with earth (2-dimensional), the locations of imperfections in a material (3-dimensional), and the locations of trees in a forest (2-dimensional).[15] The Poisson distribution is an appropriate model if the following assumptions are true: individual events occur independently of one another; the average rate at which events occur is constant; and two events cannot occur at exactly the same instant. If these conditions are true, thenkis a Poisson random variable; the distribution ofkis a Poisson distribution. The Poisson distribution is also thelimitof abinomial distribution, for which the probability of success for each trial equalsλdivided by the number of trials, as the number of trials approaches infinity (seeRelated distributions). On a particular river, overflow floods occur once every 100 years on average. Calculate the probability ofk= 0, 1, 2, 3, 4, 5, or 6 overflow floods in a 100-year interval, assuming the Poisson model is appropriate. Because the average event rate is one overflow flood per 100 years,λ= 1, soP(kfloods) =e−1/k!, approximately 0.368, 0.368, 0.184, 0.061, 0.015, 0.0031 and 0.0005 fork= 0, 1, …, 6. The probability for 0 to 6 overflow floods in a 100-year period. In this example, it is reported that the average number of goals in a World Cup soccer match is approximately 2.5 and the Poisson model is appropriate.[16]Because the average event rate is 2.5 goals per match,λ= 2.5, soP(kgoals) =e−2.52.5k/k!, approximately 0.082, 0.205, 0.257, 0.214, 0.134, 0.067, 0.028 and 0.010 fork= 0, 1, …, 7. The probability for 0 to 7 goals in a match. Suppose that astronomers estimate that large meteorites (above a certain size) hit the earth on average once every 100 years (λ= 1event per 100 years), and that the number of meteorite hits follows a Poisson distribution. What is the probability ofk= 0meteorite hits in the next 100 years? Under these assumptions, the probability that no large meteorites hit the earth in the next 100 years isP(k= 0) =λ0e−λ/0! =e−1≈ 0.37. The remaining1 − 0.37 = 0.63is the probability of 1, 2, 3, or more large meteorite hits in the next 100 years. In an example above, an overflow flood occurred once every 100 years(λ= 1).The probability of no overflow floods in 100 years was roughly 0.37, by the same calculation. In general, if an event occurs on average once per interval (λ= 1), and the events follow a Poisson distribution, thenP(0 events in next interval) = 0.37.In addition,P(exactly one event in next interval) = 0.37,as shown above for overflow floods. The number of students who arrive at thestudent unionper minute will likely not follow a Poisson distribution, because the rate is not constant (low rate during class time, high rate between class times) and the arrivals of individual students are not independent (students tend to come in groups). The non-constant arrival rate may be modeled as amixed Poisson distribution, and the arrival of groups rather than individual students as acompound Poisson process. The number of magnitude 5 earthquakes per year in a country may not follow a Poisson distribution, if one large earthquake increases the probability of aftershocks of similar magnitude. Examples in which at least one event is guaranteed are not Poisson distributed; but may be modeled using azero-truncated Poisson distribution. Count distributions in which the number of intervals with zero events is higher than predicted by a Poisson model may be modeled using azero-inflated model.
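The probabilities quoted in these examples are straightforward to reproduce; a sketch assuming SciPy:

```python
from scipy.stats import poisson

# Call-center example: lambda = 3 calls per minute.
p_1_to_4 = poisson.pmf(range(1, 5), 3).sum()
print(p_1_to_4)          # P(1 <= k <= 4) ~ 0.77
print(1 - p_1_to_4)      # P(k = 0 or k >= 5) ~ 0.23
# Flood / meteorite examples: lambda = 1 event per century.
print(poisson.pmf(0, 1)) # e**-1 ~ 0.37, hence P(k >= 1) ~ 0.63
```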
Bounds for the median (ν{\displaystyle \nu }) of the distribution are known and aresharp:[18]λ−ln⁡2≤ν<λ+13.{\displaystyle \lambda -\ln 2\leq \nu <\lambda +{\frac {1}{3}}.} The higher non-centeredmomentsmkof the Poisson distribution areTouchard polynomialsinλ:mk=∑i=0kλi{ki},{\displaystyle m_{k}=\sum _{i=0}^{k}\lambda ^{i}{\begin{Bmatrix}k\\i\end{Bmatrix}},}where the braces { } denoteStirling numbers of the second kind.[19][1]: 6In other words,E[X]=λ,E[X(X−1)]=λ2,E[X(X−1)(X−2)]=λ3,⋯{\displaystyle E[X]=\lambda ,\quad E[X(X-1)]=\lambda ^{2},\quad E[X(X-1)(X-2)]=\lambda ^{3},\cdots }When the expected value is set toλ =1,Dobinski's formulaimplies that then‑th moment is equal to the number ofpartitions of a setof sizen. A simple upper bound is:[20]mk=E[Xk]≤(klog⁡(k/λ+1))k≤λkexp⁡(k22λ).{\displaystyle m_{k}=E[X^{k}]\leq \left({\frac {k}{\log(k/\lambda +1)}}\right)^{k}\leq \lambda ^{k}\exp \left({\frac {k^{2}}{2\lambda }}\right).} IfXi∼Pois⁡(λi){\displaystyle X_{i}\sim \operatorname {Pois} (\lambda _{i})}fori=1,…,n{\displaystyle i=1,\dotsc ,n}areindependent, then∑i=1nXi∼Pois⁡(∑i=1nλi).{\textstyle \sum _{i=1}^{n}X_{i}\sim \operatorname {Pois} \left(\sum _{i=1}^{n}\lambda _{i}\right).}[21]: 65A converse isRaikov's theorem, which says that if the sum of two independent random variables is Poisson-distributed, then so is each of those two independent random variables.[22][23] It is amaximum-entropy distributionamong the set of generalized binomial distributionsBn(λ){\displaystyle B_{n}(\lambda )}with meanλ{\displaystyle \lambda }andn→∞{\displaystyle n\rightarrow \infty },[24]where a generalized binomial distribution is defined as a distribution of the sum of N independent but not identically distributed Bernoulli variables. TheKullback–Leibler divergencebetweenP=Pois⁡(λ){\displaystyle P=\operatorname {Pois} (\lambda )}andP0=Pois⁡(λ0){\displaystyle P_{0}=\operatorname {Pois} (\lambda _{0})}is given by DKL⁡(P∥P0)=λ0−λ+λlog⁡λλ0.{\displaystyle \operatorname {D} _{\text{KL}}(P\parallel P_{0})=\lambda _{0}-\lambda +\lambda \log {\frac {\lambda }{\lambda _{0}}}.} An upper bound on the tail probability of a Poisson random variableX∼Pois⁡(λ){\displaystyle X\sim \operatorname {Pois} (\lambda )}, expressed in terms of relative entropy, is P(X≥x)≤e−DKL⁡(Q∥P)max(2,4πDKL⁡(Q∥P)),forx>λ,{\displaystyle P(X\geq x)\leq {\frac {e^{-\operatorname {D} _{\text{KL}}(Q\parallel P)}}{\max {(2,{\sqrt {4\pi \operatorname {D} _{\text{KL}}(Q\parallel P)}}})}},{\text{ for }}x>\lambda ,}whereDKL⁡(Q∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q\parallel P)}is the Kullback–Leibler divergence ofQ=Pois⁡(x){\displaystyle Q=\operatorname {Pois} (x)}fromP=Pois⁡(λ){\displaystyle P=\operatorname {Pois} (\lambda )}. The cumulative distribution function can be bounded on both sides: Φ(sign⁡(k−λ)2DKL⁡(Q−∥P))<P(X≤k)<Φ(sign⁡(k+1−λ)2DKL⁡(Q+∥P)),fork>0,{\displaystyle \Phi \left(\operatorname {sign} (k-\lambda ){\sqrt {2\operatorname {D} _{\text{KL}}(Q_{-}\parallel P)}}\right)<P(X\leq k)<\Phi \left(\operatorname {sign} (k+1-\lambda ){\sqrt {2\operatorname {D} _{\text{KL}}(Q_{+}\parallel P)}}\right),{\text{ for }}k>0,}whereDKL⁡(Q−∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q_{-}\parallel P)}is the Kullback–Leibler divergence ofQ−=Pois⁡(k){\displaystyle Q_{-}=\operatorname {Pois} (k)}fromP=Pois⁡(λ){\displaystyle P=\operatorname {Pois} (\lambda )}andDKL⁡(Q+∥P){\displaystyle \operatorname {D} _{\text{KL}}(Q_{+}\parallel P)}is the Kullback–Leibler divergence ofQ+=Pois⁡(k+1){\displaystyle Q_{+}=\operatorname {Pois} (k+1)}fromP{\displaystyle P}.
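The moment formulas above are easy to sanity-check numerically; a sketch assuming SciPy (the Touchard expansions for k = 2 and k = 3 are written out by hand):

```python
from scipy.stats import poisson

lam = 2.0
# Touchard polynomials: m2 = lam + lam**2, m3 = lam + 3*lam**2 + lam**3.
print(poisson.moment(2, lam), lam + lam**2)
print(poisson.moment(3, lam), lam + 3 * lam**2 + lam**3)
```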
Let X∼Pois⁡(λ){\displaystyle X\sim \operatorname {Pois} (\lambda )} and Y∼Pois⁡(μ){\displaystyle Y\sim \operatorname {Pois} (\mu )} be independent random variables, with λ<μ;{\displaystyle \lambda <\mu ;} then we have that e−(μ−λ)2(λ+μ)2−e−(λ+μ)2λμ−e−(λ+μ)4λμ≤P(X−Y≥0)≤e−(μ−λ)2{\displaystyle {\frac {e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}}{(\lambda +\mu )^{2}}}-{\frac {e^{-(\lambda +\mu )}}{2{\sqrt {\lambda \mu }}}}-{\frac {e^{-(\lambda +\mu )}}{4\lambda \mu }}\leq P(X-Y\geq 0)\leq e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}}

The upper bound is proved using a standard Chernoff bound. The lower bound can be proved by noting that P(X−Y≥0∣X+Y=i){\displaystyle P(X-Y\geq 0\mid X+Y=i)} is the probability that Z≥i2,{\textstyle Z\geq {\frac {i}{2}},} where Z∼Bin⁡(i,λλ+μ),{\textstyle Z\sim \operatorname {Bin} \left(i,{\frac {\lambda }{\lambda +\mu }}\right),} which is bounded below by 1(i+1)2e−iD(0.5‖λλ+μ),{\textstyle {\frac {1}{(i+1)^{2}}}e^{-iD\left(0.5\|{\frac {\lambda }{\lambda +\mu }}\right)},} where D{\displaystyle D} is relative entropy (see the entry on bounds on tails of binomial distributions for details). Further noting that X+Y∼Pois⁡(λ+μ),{\displaystyle X+Y\sim \operatorname {Pois} (\lambda +\mu ),} and computing a lower bound on the unconditional probability gives the result. More details can be found in the appendix of Kamath et al.[30]

The Poisson distribution can be derived as a limiting case of the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed — see law of rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. The Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n ≥ 100 and n p ≤ 10.[31] Letting FB{\displaystyle F_{\mathrm {B} }} and FP{\displaystyle F_{\mathrm {P} }} be the respective cumulative distribution functions of the binomial and Poisson distributions, one has: FB(k;n,p)≈FP(k;λ=np).{\displaystyle F_{\mathrm {B} }(k;n,p)\ \approx \ F_{\mathrm {P} }(k;\lambda =np).}

One derivation of this uses probability-generating functions.[32] Consider a Bernoulli trial (coin-flip) whose probability of one success (or expected number of successes) is λ≤1{\displaystyle \lambda \leq 1} within a given interval. Split the interval into n parts, and perform a trial in each subinterval with probability λn{\displaystyle {\tfrac {\lambda }{n}}}.
The probability ofksuccesses out ofntrials over the entire interval is then given by the binomial distribution pk(n)=(nk)(λn)k(1−λn)n−k,{\displaystyle p_{k}^{(n)}={\binom {n}{k}}\left({\frac {\lambda }{n}}\right)^{\!k}\left(1{-}{\frac {\lambda }{n}}\right)^{\!n-k},} whose generating function is: P(n)(x)=∑k=0npk(n)xk=(1−λn+λnx)n.{\displaystyle P^{(n)}(x)=\sum _{k=0}^{n}p_{k}^{(n)}x^{k}=\left(1-{\frac {\lambda }{n}}+{\frac {\lambda }{n}}x\right)^{n}.} Taking the limit asnincreases to infinity (withxfixed) and applying the product limit definition of theexponential function, this reduces to the generating function of the Poisson distribution: limn→∞P(n)(x)=limn→∞(1+λ(x−1)n)n=eλ(x−1)=∑k=0∞e−λλkk!xk.{\displaystyle \lim _{n\to \infty }P^{(n)}(x)=\lim _{n\to \infty }\left(1{+}{\tfrac {\lambda (x-1)}{n}}\right)^{n}=e^{\lambda (x-1)}=\sum _{k=0}^{\infty }e^{-\lambda }{\frac {\lambda ^{k}}{k!}}x^{k}.} AssumeX1∼Pois⁡(λ1),X2∼Pois⁡(λ2),…,Xn∼Pois⁡(λn){\displaystyle X_{1}\sim \operatorname {Pois} (\lambda _{1}),X_{2}\sim \operatorname {Pois} (\lambda _{2}),\dots ,X_{n}\sim \operatorname {Pois} (\lambda _{n})}whereλ1+λ2+⋯+λn=1,{\displaystyle \lambda _{1}+\lambda _{2}+\dots +\lambda _{n}=1,}then[38](X1,X2,…,Xn){\displaystyle (X_{1},X_{2},\dots ,X_{n})}ismultinomially distributed(X1,X2,…,Xn)∼Mult⁡(N,λ1,λ2,…,λn){\displaystyle (X_{1},X_{2},\dots ,X_{n})\sim \operatorname {Mult} (N,\lambda _{1},\lambda _{2},\dots ,\lambda _{n})}conditioned onN=X1+X2+…Xn.{\displaystyle N=X_{1}+X_{2}+\dots X_{n}.} This means[27]: 101-102, among other things, that for any nonnegative functionf(x1,x2,…,xn),{\displaystyle f(x_{1},x_{2},\dots ,x_{n}),}if(Y1,Y2,…,Yn)∼Mult⁡(m,p){\displaystyle (Y_{1},Y_{2},\dots ,Y_{n})\sim \operatorname {Mult} (m,\mathbf {p} )}is multinomially distributed, thenE⁡[f(Y1,Y2,…,Yn)]≤emE⁡[f(X1,X2,…,Xn)]{\displaystyle \operatorname {E} [f(Y_{1},Y_{2},\dots ,Y_{n})]\leq e{\sqrt {m}}\operatorname {E} [f(X_{1},X_{2},\dots ,X_{n})]}where(X1,X2,…,Xn)∼Pois⁡(p).{\displaystyle (X_{1},X_{2},\dots ,X_{n})\sim \operatorname {Pois} (\mathbf {p} ).} The factor ofem{\displaystyle e{\sqrt {m}}}can be replaced by 2 iff{\displaystyle f}is further assumed to be monotonically increasing or decreasing. 
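The convergence asserted by this derivation is easy to observe numerically. A brief sketch (SciPy assumed; λ = 2 and the evaluation point x = 0.7 are arbitrary choices) comparing both the cumulative distribution functions and the generating functions:

import numpy as np
from scipy.stats import binom, poisson

lam, x = 2.0, 0.7
ks = np.arange(30)
for n in [10, 100, 1000, 10000]:
    p = lam / n
    # Largest gap between F_B(k; n, p) and F_P(k; lam) over k = 0..29
    cdf_gap = np.max(np.abs(binom.cdf(ks, n, p) - poisson.cdf(ks, lam)))
    # (1 + lam*(x-1)/n)^n -> exp(lam*(x-1)) as n grows
    pgf_gap = abs((1 + lam * (x - 1) / n) ** n - np.exp(lam * (x - 1)))
    print(n, cdf_gap, pgf_gap)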
This distribution has been extended to thebivariatecase.[39]Thegenerating functionfor this distribution isg(u,v)=exp⁡[(θ1−θ12)(u−1)+(θ2−θ12)(v−1)+θ12(uv−1)]{\displaystyle g(u,v)=\exp[(\theta _{1}-\theta _{12})(u-1)+(\theta _{2}-\theta _{12})(v-1)+\theta _{12}(uv-1)]} withθ1,θ2>θ12>0{\displaystyle \theta _{1},\theta _{2}>\theta _{12}>0} The marginal distributions are Poisson(θ1) and Poisson(θ2) and the correlation coefficient is limited to the range0≤ρ≤min{θ1θ2,θ2θ1}{\displaystyle 0\leq \rho \leq \min \left\{{\sqrt {\frac {\theta _{1}}{\theta _{2}}}},{\sqrt {\frac {\theta _{2}}{\theta _{1}}}}\right\}} A simple way to generate a bivariate Poisson distributionX1,X2{\displaystyle X_{1},X_{2}}is to take three independent Poisson distributionsY1,Y2,Y3{\displaystyle Y_{1},Y_{2},Y_{3}}with meansλ1,λ2,λ3{\displaystyle \lambda _{1},\lambda _{2},\lambda _{3}}and then setX1=Y1+Y3,X2=Y2+Y3.{\displaystyle X_{1}=Y_{1}+Y_{3},X_{2}=Y_{2}+Y_{3}.}The probability function of the bivariate Poisson distribution isPr(X1=k1,X2=k2)=exp⁡(−λ1−λ2−λ3)λ1k1k1!λ2k2k2!∑k=0min(k1,k2)(k1k)(k2k)k!(λ3λ1λ2)k{\displaystyle \Pr(X_{1}=k_{1},X_{2}=k_{2})=\exp \left(-\lambda _{1}-\lambda _{2}-\lambda _{3}\right){\frac {\lambda _{1}^{k_{1}}}{k_{1}!}}{\frac {\lambda _{2}^{k_{2}}}{k_{2}!}}\sum _{k=0}^{\min(k_{1},k_{2})}{\binom {k_{1}}{k}}{\binom {k_{2}}{k}}k!\left({\frac {\lambda _{3}}{\lambda _{1}\lambda _{2}}}\right)^{k}} The free Poisson distribution[40]with jump sizeα{\displaystyle \alpha }and rateλ{\displaystyle \lambda }arises infree probabilitytheory as the limit of repeatedfree convolution((1−λN)δ0+λNδα)⊞N{\displaystyle \left(\left(1-{\frac {\lambda }{N}}\right)\delta _{0}+{\frac {\lambda }{N}}\delta _{\alpha }\right)^{\boxplus N}}asN→ ∞. In other words, letXN{\displaystyle X_{N}}be random variables so thatXN{\displaystyle X_{N}}has valueα{\displaystyle \alpha }with probabilityλN{\textstyle {\frac {\lambda }{N}}}and value 0 with the remaining probability. Assume also that the familyX1,X2,…{\displaystyle X_{1},X_{2},\ldots }arefreely independent. Then the limit asN→∞{\displaystyle N\to \infty }of the law ofX1+⋯+XN{\displaystyle X_{1}+\cdots +X_{N}}is given by the Free Poisson law with parametersλ,α.{\displaystyle \lambda ,\alpha .} This definition is analogous to one of the ways in which the classical Poisson distribution is obtained from a (classical) Poisson process. The measure associated to the free Poisson law is given by[41]μ={(1−λ)δ0+ν,if0≤λ≤1ν,ifλ>1,{\displaystyle \mu ={\begin{cases}(1-\lambda )\delta _{0}+\nu ,&{\text{if }}0\leq \lambda \leq 1\\\nu ,&{\text{if }}\lambda >1,\end{cases}}}whereν=12παt4λα2−(t−α(1+λ))2dt{\displaystyle \nu ={\frac {1}{2\pi \alpha t}}{\sqrt {4\lambda \alpha ^{2}-(t-\alpha (1+\lambda ))^{2}}}\,dt}and has support[α(1−λ)2,α(1+λ)2].{\displaystyle [\alpha (1-{\sqrt {\lambda }})^{2},\alpha (1+{\sqrt {\lambda }})^{2}].} This law also arises inrandom matrixtheory as theMarchenko–Pastur law. Itsfree cumulantsare equal toκn=λαn.{\displaystyle \kappa _{n}=\lambda \alpha ^{n}.} We give values of some important transforms of the free Poisson law; the computation can be found in e.g. in the bookLectures on the Combinatorics of Free Probabilityby A. Nica and R. 
Speicher.[42]

The R-transform of the free Poisson law is given by R(z)=λα1−αz.{\displaystyle R(z)={\frac {\lambda \alpha }{1-\alpha z}}.} The Cauchy transform (which is the negative of the Stieltjes transformation) is given by G(z)=z+α−λα−(z−α(1+λ))2−4λα22αz{\displaystyle G(z)={\frac {z+\alpha -\lambda \alpha -{\sqrt {(z-\alpha (1+\lambda ))^{2}-4\lambda \alpha ^{2}}}}{2\alpha z}}} The S-transform is given by S(z)=1z+λ{\displaystyle S(z)={\frac {1}{z+\lambda }}} in the case that α=1.{\displaystyle \alpha =1.}

Poisson's probability mass function f(k;λ){\displaystyle f(k;\lambda )} can be expressed in a form similar to the product distribution of a Weibull distribution and a variant form of the stable count distribution. The variable (k+1){\displaystyle (k+1)} can be regarded as the inverse of Lévy's stability parameter in the stable count distribution: f(k;λ)=∫0∞1uWk+1(λu)[(k+1)ukN1k+1(uk+1)]du,{\displaystyle f(k;\lambda )=\int _{0}^{\infty }{\frac {1}{u}}\,W_{k+1}\left({\frac {\lambda }{u}}\right)\left[(k+1)u^{k}\,{\mathfrak {N}}_{\frac {1}{k+1}}(u^{k+1})\right]\,du,} where Nα(ν){\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} is a standard stable count distribution of shape α=1/(k+1),{\displaystyle \alpha =1/(k+1),} and Wk+1(x){\displaystyle W_{k+1}(x)} is a standard Weibull distribution of shape k+1.{\displaystyle k+1.}

Given a sample of n measured values ki∈{0,1,…},{\displaystyle k_{i}\in \{0,1,\dots \},} for i = 1, ..., n, we wish to estimate the value of the parameter λ of the Poisson population from which the sample was drawn. The maximum likelihood estimate is[43] {\displaystyle {\widehat {\lambda }}_{\mathrm {MLE} }={\frac {1}{n}}\sum _{i=1}^{n}k_{i}~.} Since each observation has expectation λ, so does the sample mean. Therefore, the maximum likelihood estimate is an unbiased estimator of λ. It is also an efficient estimator since its variance achieves the Cramér–Rao lower bound (CRLB).[44] Hence it is minimum-variance unbiased. Also it can be proven that the sum (and hence the sample mean, as it is a one-to-one function of the sum) is a complete and sufficient statistic for λ.

To prove sufficiency we may use the factorization theorem. Consider partitioning the probability mass function of the joint Poisson distribution for the sample into two parts: one that depends solely on the sample x{\displaystyle \mathbf {x} }, called h(x){\displaystyle h(\mathbf {x} )}, and one that depends on the parameter λ{\displaystyle \lambda } and the sample x{\displaystyle \mathbf {x} } only through the function T(x).{\displaystyle T(\mathbf {x} ).} Then T(x){\displaystyle T(\mathbf {x} )} is a sufficient statistic for λ.{\displaystyle \lambda .} For the Poisson sample this factorization is {\displaystyle P(\mathbf {x} )=\prod _{i=1}^{n}{\frac {\lambda ^{x_{i}}e^{-\lambda }}{x_{i}!}}=\left(\prod _{i=1}^{n}{\frac {1}{x_{i}!}}\right)\left(\lambda ^{\sum _{i=1}^{n}x_{i}}e^{-n\lambda }\right).} The first term h(x){\displaystyle h(\mathbf {x} )} depends only on x{\displaystyle \mathbf {x} }. The second term g(T(x)|λ){\displaystyle g(T(\mathbf {x} )|\lambda )} depends on the sample only through T(x)=∑i=1nxi.{\textstyle T(\mathbf {x} )=\sum _{i=1}^{n}x_{i}.} Thus, T(x){\displaystyle T(\mathbf {x} )} is sufficient.

To find the parameter λ that maximizes the probability function for the Poisson population, we can use the logarithm of the likelihood function: {\displaystyle \ell (\lambda )=\ln \prod _{i=1}^{n}f(k_{i}\mid \lambda )=-n\lambda +\left(\sum _{i=1}^{n}k_{i}\right)\ln(\lambda )-\sum _{i=1}^{n}\ln(k_{i}!).} We take the derivative of ℓ{\displaystyle \ell } with respect to λ and compare it to zero: {\displaystyle {\frac {d}{d\lambda }}\ell (\lambda )=-n+\left(\sum _{i=1}^{n}k_{i}\right){\frac {1}{\lambda }}=0.} Solving for λ gives a stationary point: {\displaystyle \lambda ={\frac {\sum _{i=1}^{n}k_{i}}{n}}.} So λ is the average of the ki values. Obtaining the sign of the second derivative of ℓ at the stationary point will determine what kind of extreme value λ is. Evaluating the second derivative at the stationary point gives: {\displaystyle {\frac {\partial ^{2}\ell }{\partial \lambda ^{2}}}=-\lambda ^{-2}\sum _{i=1}^{n}k_{i}=-{\frac {n^{2}}{\sum _{i=1}^{n}k_{i}}},} which is the negative of n times the reciprocal of the average of the ki. This expression is negative when the average is positive. If this is satisfied, then the stationary point maximizes the probability function.
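A quick numerical illustration of the estimator, assuming NumPy and SciPy; the true rate 3.2 is an arbitrary choice. The sample mean maximizes the log-likelihood:

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
k = rng.poisson(lam=3.2, size=10_000)        # simulated sample, true lambda = 3.2

lam_hat = k.mean()                           # the MLE is the sample mean
grid = np.linspace(2.5, 4.0, 301)
loglik = [poisson.logpmf(k, mu=m).sum() for m in grid]
print(lam_hat, grid[np.argmax(loglik)])      # the two agree to grid precision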
For completeness, a family of distributions is said to be complete if and only if E(g(T))=0{\displaystyle E(g(T))=0} implies that Pλ(g(T)=0)=1{\displaystyle P_{\lambda }(g(T)=0)=1} for all λ.{\displaystyle \lambda .} If the individual Xi{\displaystyle X_{i}} are iid Po(λ),{\displaystyle \mathrm {Po} (\lambda ),} then T(x)=∑i=1nXi∼Po(nλ).{\textstyle T(\mathbf {x} )=\sum _{i=1}^{n}X_{i}\sim \mathrm {Po} (n\lambda ).} Knowing the distribution we want to investigate, it is easy to see that the statistic is complete: {\displaystyle E(g(T))=\sum _{t=0}^{\infty }g(t){\frac {(n\lambda )^{t}e^{-n\lambda }}{t!}}=0.} For this equality to hold, g(t){\displaystyle g(t)} must be 0. This follows from the fact that none of the other terms will be 0 for all t{\displaystyle t} in the sum and for all possible values of λ.{\displaystyle \lambda .} Hence, E(g(T))=0{\displaystyle E(g(T))=0} for all λ{\displaystyle \lambda } implies that Pλ(g(T)=0)=1,{\displaystyle P_{\lambda }(g(T)=0)=1,} and the statistic has been shown to be complete.

The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean μ, a confidence interval for μ with confidence level 1 – α is {\displaystyle {\tfrac {1}{2}}\chi ^{2}(\alpha /2;2k)\leq \mu \leq {\tfrac {1}{2}}\chi ^{2}(1-\alpha /2;2k+2),} or equivalently, {\displaystyle F^{-1}(\alpha /2;k,1)\leq \mu \leq F^{-1}(1-\alpha /2;k+1,1),} where χ2(p;n){\displaystyle \chi ^{2}(p;n)} is the quantile function (corresponding to a lower tail area p) of the chi-squared distribution with n degrees of freedom and F−1(p;n,1){\displaystyle F^{-1}(p;n,1)} is the quantile function of a gamma distribution with shape parameter n and scale parameter 1.[8]: 176-178[45] This interval is 'exact' in the sense that its coverage probability is never less than the nominal 1 – α.

When quantiles of the gamma distribution are not available, an accurate approximation to this exact interval has been proposed (based on the Wilson–Hilferty transformation):[46] {\displaystyle k\left(1-{\frac {1}{9k}}-{\frac {z_{\alpha /2}}{3{\sqrt {k}}}}\right)^{3}<\mu <(k+1)\left(1-{\frac {1}{9(k+1)}}+{\frac {z_{\alpha /2}}{3{\sqrt {k+1}}}}\right)^{3},} where zα/2{\displaystyle z_{\alpha /2}} denotes the standard normal deviate with upper tail area α / 2. For application of these formulae in the same context as above (given a sample of n measured values ki each drawn from a Poisson distribution with mean λ), one would set {\displaystyle k=\sum _{i=1}^{n}k_{i},} calculate an interval for μ = n λ, and then derive the interval for λ.

In Bayesian inference, the conjugate prior for the rate parameter λ of the Poisson distribution is the gamma distribution.[47] Let {\displaystyle \lambda \sim \mathrm {Gamma} (\alpha ,\beta )} denote that λ is distributed according to the gamma density g parameterized in terms of a shape parameter α and an inverse scale parameter β: {\displaystyle g(\lambda \mid \alpha ,\beta )={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}\;\lambda ^{\alpha -1}\;e^{-\beta \lambda }\qquad {\text{for }}\lambda >0.} Then, given the same sample of n measured values ki as before, and a prior of Gamma(α, β), the posterior distribution is {\displaystyle \lambda \sim \mathrm {Gamma} \left(\alpha +\sum _{i=1}^{n}k_{i},\ \beta +n\right).} Note that the posterior mean is linear and is given by {\displaystyle \operatorname {E} [\lambda \mid k_{1},\ldots ,k_{n}]={\frac {\alpha +\sum _{i=1}^{n}k_{i}}{\beta +n}}.} It can be shown that the gamma distribution is the only prior that induces linearity of the conditional mean. Moreover, a converse result exists which states that if the conditional mean is close to a linear function in the L2{\displaystyle L_{2}} distance then the prior distribution of λ must be close to a gamma distribution in Lévy distance.[48]

The posterior mean E[λ] approaches the maximum likelihood estimate λ^MLE{\displaystyle {\widehat {\lambda }}_{\mathrm {MLE} }} in the limit as α→0,β→0,{\displaystyle \alpha \to 0,\beta \to 0,} which follows immediately from the general expression of the mean of the gamma distribution. The posterior predictive distribution for a single additional observation is a negative binomial distribution,[49]: 53 sometimes called a gamma–Poisson distribution.
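Both interval constructions translate directly into code. A sketch with SciPy; the observed count and the prior hyperparameters are hypothetical choices for illustration:

import numpy as np
from scipy.stats import chi2, gamma

k, alpha = 12, 0.05

# Exact interval via chi-squared quantiles (the lower limit is 0 when k = 0)
lo = 0.5 * chi2.ppf(alpha / 2, 2 * k) if k > 0 else 0.0
hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * k + 2)
print(lo, hi)

# Equivalent form via gamma quantiles with shapes k and k + 1, scale 1
print(gamma.ppf(alpha / 2, k), gamma.ppf(1 - alpha / 2, k + 1))

# Conjugate Bayesian update: Gamma(a, b) prior and n observations k_i
a, b = 1.0, 1.0                              # hypothetical prior
ks = np.array([2, 4, 3, 5, 1])
a_post, b_post = a + ks.sum(), b + len(ks)
print("posterior mean:", a_post / b_post)    # (a + sum k_i) / (b + n)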
SupposeX1,X2,…,Xp{\displaystyle X_{1},X_{2},\dots ,X_{p}}is a set of independent random variables from a set ofp{\displaystyle p}Poisson distributions, each with a parameterλi,{\displaystyle \lambda _{i},}i=1,…,p,{\displaystyle i=1,\dots ,p,}and we would like to estimate these parameters. Then, Clevenson and Zidek show that under the normalized squared error lossL(λ,λ^)=∑i=1pλi−1(λ^i−λi)2,{\textstyle L(\lambda ,{\hat {\lambda }})=\sum _{i=1}^{p}\lambda _{i}^{-1}({\hat {\lambda }}_{i}-\lambda _{i})^{2},}whenp>1,{\displaystyle p>1,}then, similar as inStein's examplefor the Normal means, the MLE estimatorλ^i=Xi{\displaystyle {\hat {\lambda }}_{i}=X_{i}}isinadmissible.[50] In this case, a family ofminimax estimatorsis given for any0<c≤2(p−1){\displaystyle 0<c\leq 2(p-1)}andb≥(p−2+p−1){\displaystyle b\geq (p-2+p^{-1})}as[51] Some applications of the Poisson distribution tocount data(number of events):[52] More examples of counting events that may be modelled as Poisson processes include: Inprobabilistic number theory,Gallaghershowed in 1976 that, if a certain version of the unprovedprime r-tuple conjectureholds,[61]then the counts ofprime numbersin short intervals would obey a Poisson distribution.[62] The rate of an event is related to the probability of an event occurring in some small subinterval (of time, space or otherwise). In the case of the Poisson distribution, one assumes that there exists a small enough subinterval for which the probability of an event occurring twice is "negligible". With this assumption one can derive the Poisson distribution from the binomial one, given only the information of expected number of total events in the whole interval. Let the total number of events in the whole interval be denoted byλ.{\displaystyle \lambda .}Divide the whole interval inton{\displaystyle n}subintervalsI1,…,In{\displaystyle I_{1},\dots ,I_{n}}of equal size, such thatn>λ{\displaystyle n>\lambda }(since we are interested in only very small portions of the interval this assumption is meaningful). This means that the expected number of events in each of thensubintervals is equal toλ/n.{\displaystyle \lambda /n.} Now we assume that the occurrence of an event in the whole interval can be seen as a sequence ofnBernoulli trials, where thei{\displaystyle i}-thBernoulli trialcorresponds to looking whether an event happens at the subintervalIi{\displaystyle I_{i}}with probabilityλ/n.{\displaystyle \lambda /n.}The expected number of total events inn{\displaystyle n}such trials would beλ,{\displaystyle \lambda ,}the expected number of total events in the whole interval. Hence for each subdivision of the interval we have approximated the occurrence of the event as a Bernoulli process of the formB(n,λ/n).{\displaystyle {\textrm {B}}(n,\lambda /n).}As we have noted before we want to consider only very small subintervals. Therefore, we take the limit asn{\displaystyle n}goes to infinity. In this case thebinomial distributionconverges to what is known as the Poisson distribution by thePoisson limit theorem. In several of the above examples — such as the number of mutations in a given sequence of DNA — the events being counted are actually the outcomes of discrete trials, and would more precisely be modelled using thebinomial distribution, that isX∼B(n,p).{\displaystyle X\sim {\textrm {B}}(n,p).} In such casesnis very large andpis very small (and so the expectationn pis of intermediate magnitude). 
Then the distribution may be approximated by the less cumbersome Poisson distribution X∼Pois(np).{\displaystyle X\sim {\textrm {Pois}}(np).} This approximation is sometimes known as the law of rare events,[63]: 5 since each of the n individual Bernoulli events rarely occurs. The name "law of rare events" may be misleading because the total count of success events in a Poisson process need not be rare if the parameter n p is not small. For example, the number of telephone calls to a busy switchboard in one hour follows a Poisson distribution with the events appearing frequent to the operator, but they are rare from the point of view of the average member of the population, who is very unlikely to make a call to that switchboard in that hour. The variance of the binomial distribution is 1 − p times that of the Poisson distribution, so almost equal when p is very small.

The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the "law of small numbers" because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz about the Poisson distribution, published in 1898.[12][64]

The Poisson distribution arises as the number of points of a Poisson point process located in some finite region. More specifically, if D is some region space, for example Euclidean space Rd, for which |D|, the area, volume or, more generally, the Lebesgue measure of the region is finite, and if N(D) denotes the number of points in D, then {\displaystyle P(N(D)=k)={\frac {(\lambda |D|)^{k}e^{-\lambda |D|}}{k!}}.}

Poisson regression and negative binomial regression are useful for analyses where the dependent (response) variable is the count (0, 1, 2, ... ) of the number of events or occurrences in an interval. The Luria–Delbrück experiment tested against the hypothesis of Lamarckian evolution, which should result in a Poisson distribution.

Katz and Miledi measured the membrane potential with and without the presence of acetylcholine (ACh).[65] When ACh is present, ion channels on the membrane would be open randomly at a small fraction of the time. As there are a large number of ion channels each open for a small fraction of the time, the total number of ion channels open at any moment is Poisson distributed. When ACh is not present, effectively no ion channels are open. The membrane potential is V=NopenVion+V0+Vnoise{\displaystyle V=N_{\text{open}}V_{\text{ion}}+V_{0}+V_{\text{noise}}}. Subtracting the effect of noise, Katz and Miledi found the mean and variance of membrane potential to be 8.5×10−3V,(29.2×10−6V)2{\displaystyle 8.5\times 10^{-3}\;\mathrm {V} ,(29.2\times 10^{-6}\;\mathrm {V} )^{2}}, giving Vion=10−7V{\displaystyle V_{\text{ion}}=10^{-7}\;\mathrm {V} }. (pp. 94-95[66])

During each cellular replication event, the number of mutations is roughly Poisson distributed.[67] For example, the HIV virus has 10,000 base pairs, and has a mutation rate of about 1 per 30,000 base pairs, meaning the number of mutations per replication event is distributed as Pois(1/3){\displaystyle \mathrm {Pois} (1/3)}. (p. 64[66])

In a Poisson process, the number of observed occurrences fluctuates about its mean λ with a standard deviation σk=λ.{\displaystyle \sigma _{k}={\sqrt {\lambda }}.} These fluctuations are denoted as Poisson noise or (particularly in electronics) as shot noise.
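The defining signature of Poisson noise, variance equal to mean, is easy to see in simulation. A short sketch with NumPy; the rates are chosen arbitrarily:

import numpy as np

rng = np.random.default_rng(1)
for lam in [0.5, 5.0, 50.0]:
    counts = rng.poisson(lam, size=100_000)
    # Index of dispersion var/mean is ~1, and the std is ~sqrt(lam)
    print(lam, counts.mean(), counts.var(), counts.std(), np.sqrt(lam))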
The correlation of the mean and standard deviation in counting independent discrete occurrences is useful scientifically. By monitoring how the fluctuations vary with the mean signal, one can estimate the contribution of a single occurrence, even if that contribution is too small to be detected directly. For example, the charge e on an electron can be estimated by correlating the magnitude of an electric current with its shot noise. If N electrons pass a point in a given time t on the average, the mean current is I=eN/t{\displaystyle I=eN/t}; since the current fluctuations should be of the order σI=eN/t{\displaystyle \sigma _{I}=e{\sqrt {N}}/t} (i.e., the standard deviation of the Poisson process), the charge e{\displaystyle e} can be estimated from the ratio tσI2/I.{\displaystyle t\sigma _{I}^{2}/I.}[citation needed]

An everyday example is the graininess that appears as photographs are enlarged; the graininess is due to Poisson fluctuations in the number of reduced silver grains, not to the individual grains themselves. By correlating the graininess with the degree of enlargement, one can estimate the contribution of an individual grain (which is otherwise too small to be seen unaided).[citation needed]

In causal set theory the discrete elements of spacetime follow a Poisson distribution in the volume. The Poisson distribution also appears in quantum mechanics, especially quantum optics. Namely, for a quantum harmonic oscillator system in a coherent state, the probability of measuring a particular energy level has a Poisson distribution.

The Poisson distribution poses two different tasks for dedicated software libraries: evaluating the distribution P(k;λ){\displaystyle P(k;\lambda )}, and drawing random numbers according to that distribution. Computing P(k;λ){\displaystyle P(k;\lambda )} for given k{\displaystyle k} and λ{\displaystyle \lambda } is a trivial task that can be accomplished by using the standard definition of P(k;λ){\displaystyle P(k;\lambda )} in terms of exponential, power, and factorial functions. However, the conventional definition of the Poisson distribution contains two terms that can easily overflow on computers: λ^k and k!. The fraction λ^k/k! can also produce a rounding error that is very large compared to e^−λ, and therefore give an erroneous result. For numerical stability the Poisson probability mass function should therefore be evaluated as {\displaystyle \!f(k;\lambda )=\exp \left[k\ln \lambda -\lambda -\ln \Gamma (k+1)\right],} which is mathematically equivalent but numerically stable. The natural logarithm of the Gamma function can be obtained using the lgamma function in the C standard library (C99 version) or R, the gammaln function in MATLAB or SciPy, or the log_gamma function in Fortran 2008 and later. Some computing languages also provide built-in functions to evaluate the Poisson distribution directly.

The less trivial task is to draw integer random variates from the Poisson distribution with given λ.{\displaystyle \lambda .} Solutions are provided by standard statistical libraries. A simple algorithm to generate random Poisson-distributed numbers (pseudo-random number sampling) has been given by Knuth:[70]: 137-138 initialize L ← e^−λ, k ← 0 and p ← 1; then repeatedly increment k, draw a uniform random number u in (0,1) and set p ← p·u, until p ≤ L; finally return k − 1. The complexity is linear in the returned value k, which is λ on average. There are many other algorithms to improve this. Some are given in Ahrens & Dieter, see § References below. For large values of λ, the value of L = e^−λ may be so small that it is hard to represent. This can be solved by a change to the algorithm which uses an additional parameter STEP such that e^−STEP does not underflow (see the code sketch below).[citation needed] The choice of STEP depends on the threshold of overflow.
For the double-precision floating-point format the threshold is near e^700, so 500 should be a safe STEP. Other solutions for large values of λ include rejection sampling and using a Gaussian approximation. Inverse transform sampling is simple and efficient for small values of λ, and requires only one uniform random number u per sample. Cumulative probabilities are examined in turn until one exceeds u.
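The stable evaluation and the samplers discussed above can be sketched in plain Python as follows. This is an illustration of the cited algorithms, not a library-grade implementation; the STEP variant follows the chunked-exponent idea described above:

import math
import random

def poisson_pmf(k, lam):
    # exp(k*ln(lam) - lam - ln(k!)) with ln(k!) via lgamma,
    # avoiding overflow of lam**k and k! for large arguments
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def poisson_knuth(lam):
    # Knuth: multiply uniforms until the product drops below e^{-lam};
    # O(lam) expected time, and e^{-lam} underflows for large lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def poisson_knuth_step(lam, step=500.0):
    # STEP variant: whenever the running product gets small, multiply it by
    # e^{STEP} and subtract STEP from the remaining rate, so only e^{-STEP}
    # (not e^{-lam}) ever needs to be representable
    lam_left = lam
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        while p < 1.0 and lam_left > 0.0:
            if lam_left > step:
                p *= math.exp(step)
                lam_left -= step
            else:
                p *= math.exp(lam_left)
                lam_left = 0.0
        if p <= 1.0 and lam_left <= 0.0:
            return k - 1

def poisson_inverse_transform(lam):
    # For small lam: walk the CDF until it exceeds a single uniform u,
    # using the recurrence p_{k+1} = p_k * lam / (k + 1)
    u = random.random()
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

print(poisson_pmf(500, 500.0))    # evaluates where 500**500 and 500! overflow
print(poisson_knuth(4.0), poisson_knuth_step(5000.0), poisson_inverse_transform(0.3))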
https://en.wikipedia.org/wiki/Poisson_distribution
The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution f(x;x0,γ){\displaystyle f(x;x_{0},\gamma )} is the distribution of the x-intercept of a ray issuing from (x0,γ){\displaystyle (x_{0},\gamma )} with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero.

The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see § Moments below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist.[1] The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. It is one of the few stable distributions with a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution.

Here are the most important constructions. If one stands in front of a line and kicks a ball at a uniformly distributed random angle towards the line, then the distribution of the point where the ball hits the line is a Cauchy distribution. For example, consider a point at (x0,γ){\displaystyle (x_{0},\gamma )} in the x-y plane, and select a line passing through the point, with its direction (angle with the x{\displaystyle x}-axis) chosen uniformly (between −180° and 0°) at random. The intersection of the line with the x-axis follows a Cauchy distribution with location x0{\displaystyle x_{0}} and scale γ{\displaystyle \gamma }. This definition gives a simple way to sample from the standard Cauchy distribution. Let u{\displaystyle u} be a sample from a uniform distribution on [0,1]{\displaystyle [0,1]}; then we can generate a sample x{\displaystyle x} from the standard Cauchy distribution using x=tan⁡(π(u−12)).{\displaystyle x=\tan \left(\pi (u-{\tfrac {1}{2}})\right).} When U{\displaystyle U} and V{\displaystyle V} are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U/V{\displaystyle U/V} has the standard Cauchy distribution. More generally, if (U,V){\displaystyle (U,V)} is a rotationally symmetric distribution on the plane, then the ratio U/V{\displaystyle U/V} has the standard Cauchy distribution.

The Cauchy distribution is the probability distribution with the following probability density function (PDF)[1][2]f(x;x0,γ)=1πγ[1+(x−x0γ)2]=1π[γ(x−x0)2+γ2],{\displaystyle f(x;x_{0},\gamma )={\frac {1}{\pi \gamma \left[1+\left({\frac {x-x_{0}}{\gamma }}\right)^{2}\right]}}={1 \over \pi }\left[{\gamma \over (x-x_{0})^{2}+\gamma ^{2}}\right],} where x0{\displaystyle x_{0}} is the location parameter, specifying the location of the peak of the distribution, and γ{\displaystyle \gamma } is the scale parameter which specifies the half-width at half-maximum (HWHM); alternatively 2γ{\displaystyle 2\gamma } is the full width at half maximum (FWHM). γ{\displaystyle \gamma } is also equal to half the interquartile range and is sometimes called the probable error.
This function is also known as aLorentzian function,[3]and an example of anascent delta function, and therefore approaches aDirac delta functionin the limit asγ→0{\displaystyle \gamma \to 0}.Augustin-Louis Cauchyexploited such a density function in 1827 with aninfinitesimalscale parameter, defining thisDirac delta function. The maximum value or amplitude of the Cauchy PDF is1πγ{\displaystyle {\frac {1}{\pi \gamma }}}, located atx=x0{\displaystyle x=x_{0}}. It is sometimes convenient to express the PDF in terms of the complex parameterψ=x0+iγ{\displaystyle \psi =x_{0}+i\gamma } f(x;ψ)=1πIm(1x−ψ)=1πRe(−ix−ψ){\displaystyle f(x;\psi )={\frac {1}{\pi }}\,{\textrm {Im}}\left({\frac {1}{x-\psi }}\right)={\frac {1}{\pi }}\,{\textrm {Re}}\left({\frac {-i}{x-\psi }}\right)} The special case whenx0=0{\displaystyle x_{0}=0}andγ=1{\displaystyle \gamma =1}is called thestandard Cauchy distributionwith the probability density function[4][5]f(x;0,1)=1π(1+x2).{\displaystyle f(x;0,1)={\frac {1}{\pi \left(1+x^{2}\right)}}.} In physics, a three-parameter Lorentzian function is often used:f(x;x0,γ,I)=I[1+(x−x0γ)2]=I[γ2(x−x0)2+γ2],{\displaystyle f(x;x_{0},\gamma ,I)={\frac {I}{\left[1+{\left({\frac {x-x_{0}}{\gamma }}\right)}^{2}\right]}}=I\left[{\frac {\gamma ^{2}}{{\left(x-x_{0}\right)}^{2}+\gamma ^{2}}}\right],}whereI{\displaystyle I}is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case whereI=1πγ.{\displaystyle I={\frac {1}{\pi \gamma }}.\!} The Cauchy distribution is the probability distribution with the followingcumulative distribution function(CDF):F(x;x0,γ)=1πarctan⁡(x−x0γ)+12{\displaystyle F(x;x_{0},\gamma )={\frac {1}{\pi }}\arctan \left({\frac {x-x_{0}}{\gamma }}\right)+{\frac {1}{2}}} and thequantile function(inversecdf) of the Cauchy distribution isQ(p;x0,γ)=x0+γtan⁡[π(p−12)].{\displaystyle Q(p;x_{0},\gamma )=x_{0}+\gamma \,\tan \left[\pi \left(p-{\tfrac {1}{2}}\right)\right].}It follows that the first and third quartiles are(x0−γ,x0+γ){\displaystyle (x_{0}-\gamma ,x_{0}+\gamma )}, and hence theinterquartile rangeis2γ{\displaystyle 2\gamma }. For the standard distribution, the cumulative distribution function simplifies toarctangent functionarctan⁡(x){\displaystyle \arctan(x)}:F(x;0,1)=1πarctan⁡(x)+12{\displaystyle F(x;0,1)={\frac {1}{\pi }}\arctan \left(x\right)+{\frac {1}{2}}} The standard Cauchy distribution is theStudent'st-distributionwith one degree of freedom, and so it may be constructed by any method that constructs the Student's t-distribution.[6] IfΣ{\displaystyle \Sigma }is ap×p{\displaystyle p\times p}positive-semidefinite covariance matrix with strictly positive diagonal entries, then forindependent and identically distributedX,Y∼N(0,Σ){\displaystyle X,Y\sim N(0,\Sigma )}and any randomp{\displaystyle p}-vectorw{\displaystyle w}independent ofX{\displaystyle X}andY{\displaystyle Y}such thatw1+⋯+wp=1{\displaystyle w_{1}+\cdots +w_{p}=1}andwi≥0,i=1,…,p,{\displaystyle w_{i}\geq 0,i=1,\ldots ,p,}(defining acategorical distribution) it holds that[7]∑j=1pwjXjYj∼Cauchy(0,1).{\displaystyle \sum _{j=1}^{p}w_{j}{\frac {X_{j}}{Y_{j}}}\sim \mathrm {Cauchy} (0,1).} The Cauchy distribution is an example of a distribution which has nomean,varianceor highermomentsdefined. Itsmodeandmedianare well defined and are both equal tox0{\displaystyle x_{0}}. The Cauchy distribution is aninfinitely divisible probability distribution. 
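The density, distribution function, and quantile function above, together with the two sampling constructions, fit in a few lines. A sketch assuming NumPy/SciPy, with an arbitrary location and scale:

import numpy as np
from scipy import stats

x0, gam = 2.0, 1.5                      # hypothetical location and scale

def pdf(x): return 1.0 / (np.pi * gam * (1.0 + ((x - x0) / gam) ** 2))
def cdf(x): return np.arctan((x - x0) / gam) / np.pi + 0.5
def ppf(p): return x0 + gam * np.tan(np.pi * (p - 0.5))

print(ppf(0.25), ppf(0.75))             # quartiles x0 -/+ gam, so IQR = 2*gam

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
s1 = ppf(u)                             # route 1: x0 + gam*tan(pi*(u - 1/2))
s2 = x0 + gam * rng.standard_normal(100_000) / rng.standard_normal(100_000)
                                        # route 2: ratio of independent normals
print(np.median(s1), np.median(s2), stats.cauchy.median(loc=x0, scale=gam))

Both samples have median near x0, matching scipy's reference implementation of the Cauchy distribution.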
It is also a strictlystabledistribution.[8] Like all stable distributions, thelocation-scale familyto which the Cauchy distribution belongs is closed underlinear transformationswithrealcoefficients. In addition, the family of Cauchy-distributed random variables is closed underlinear fractional transformationswith real coefficients.[9]In this connection, see alsoMcCullagh's parametrization of the Cauchy distributions. IfX1,X2,…,Xn{\displaystyle X_{1},X_{2},\ldots ,X_{n}}are anIIDsample from the standard Cauchy distribution, then theirsample meanX¯=1n∑iXi{\textstyle {\bar {X}}={\frac {1}{n}}\sum _{i}X_{i}}is also standard Cauchy distributed. In particular, the average does not converge to the mean, and so the standard Cauchy distribution does not follow the law of large numbers. This can be proved by repeated integration with the PDF, or more conveniently, by using thecharacteristic functionof the standard Cauchy distribution (see below):φX(t)=E⁡[eiXt]=e−|t|.{\displaystyle \varphi _{X}(t)=\operatorname {E} \left[e^{iXt}\right]=e^{-|t|}.}With this, we haveφ∑iXi(t)=e−n|t|{\displaystyle \varphi _{\sum _{i}X_{i}}(t)=e^{-n|t|}}, and soX¯{\displaystyle {\bar {X}}}has a standard Cauchy distribution. More generally, ifX1,X2,…,Xn{\displaystyle X_{1},X_{2},\ldots ,X_{n}}are independent and Cauchy distributed with location parametersx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}and scalesγ1,…,γn{\displaystyle \gamma _{1},\ldots ,\gamma _{n}}, anda1,…,an{\displaystyle a_{1},\ldots ,a_{n}}are real numbers, then∑iaiXi{\textstyle \sum _{i}a_{i}X_{i}}is Cauchy distributed with location∑iaixi{\textstyle \sum _{i}a_{i}x_{i}}and scale∑i|ai|γi{\textstyle \sum _{i}|a_{i}|\gamma _{i}}. We see that there is no law of large numbers for any weighted sum of independent Cauchy distributions. This shows that the condition of finite variance in thecentral limit theoremcannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of allstable distributions, of which the Cauchy distribution is a special case. IfX1,X2,…{\displaystyle X_{1},X_{2},\ldots }are an IID sample with PDFρ{\displaystyle \rho }such thatlimc→∞1c∫−ccx2ρ(x)dx=2γπ{\textstyle \lim _{c\to \infty }{\frac {1}{c}}\int _{-c}^{c}x^{2}\rho (x)\,dx={\frac {2\gamma }{\pi }}}is finite, but nonzero, then1n∑i=1nXi{\textstyle {\frac {1}{n}}\sum _{i=1}^{n}X_{i}}converges in distribution to a Cauchy distribution with scaleγ{\displaystyle \gamma }.[10] LetX{\displaystyle X}denote a Cauchy distributed random variable. Thecharacteristic functionof the Cauchy distribution is given by φX(t)=E⁡[eiXt]=∫−∞∞f(x;x0,γ)eixtdx=eix0t−γ|t|.{\displaystyle \varphi _{X}(t)=\operatorname {E} \left[e^{iXt}\right]=\int _{-\infty }^{\infty }f(x;x_{0},\gamma )e^{ixt}\,dx=e^{ix_{0}t-\gamma |t|}.} which is just theFourier transformof the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: f(x;x0,γ)=12π∫−∞∞φX(t;x0,γ)e−ixtdt{\displaystyle f(x;x_{0},\gamma )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi _{X}(t;x_{0},\gamma )e^{-ixt}\,dt\!} Thenth moment of a distribution is thenth derivative of the characteristic function evaluated att=0{\displaystyle t=0}. Observe that the characteristic function is notdifferentiableat the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment. 
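The failure of the law of large numbers described above shows up immediately in simulation. A sketch with NumPy comparing running means of Cauchy and normal samples:

import numpy as np

rng = np.random.default_rng(42)
n = 10_000
run_c = np.cumsum(rng.standard_cauchy(n)) / np.arange(1, n + 1)
run_n = np.cumsum(rng.standard_normal(n)) / np.arange(1, n + 1)
for i in [9, 99, 999, 9999]:
    # The normal running mean settles toward 0; the Cauchy one keeps jumping,
    # since the mean of n standard Cauchy draws is again standard Cauchy
    print(i + 1, run_n[i], run_c[i])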
TheKullback–Leibler divergencebetween two Cauchy distributions has the following symmetric closed-form formula:[11]KL(px0,1,γ1:px0,2,γ2)=log⁡(γ1+γ2)2+(x0,1−x0,2)24γ1γ2.{\displaystyle \mathrm {KL} \left(p_{x_{0,1},\gamma _{1}}:p_{x_{0,2},\gamma _{2}}\right)=\log {\frac {{\left(\gamma _{1}+\gamma _{2}\right)}^{2}+{\left(x_{0,1}-x_{0,2}\right)}^{2}}{4\gamma _{1}\gamma _{2}}}.} Anyf-divergencebetween two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence.[12]Closed-form expression for thetotal variation,Jensen–Shannon divergence,Hellinger distance, etc. are available. The entropy of the Cauchy distribution is given by: H(γ)=−∫−∞∞f(x;x0,γ)log⁡(f(x;x0,γ))dx=log⁡(4πγ){\displaystyle {\begin{aligned}H(\gamma )&=-\int _{-\infty }^{\infty }f(x;x_{0},\gamma )\log(f(x;x_{0},\gamma ))\,dx\\[6pt]&=\log(4\pi \gamma )\end{aligned}}} The derivative of thequantile function, the quantile density function, for the Cauchy distribution is: Q′(p;γ)=γπsec2⁡[π(p−12)].{\displaystyle Q'(p;\gamma )=\gamma \pi \,\sec ^{2}\left[\pi \left(p-{\tfrac {1}{2}}\right)\right].} Thedifferential entropyof a distribution can be defined in terms of its quantile density,[13]specifically: H(γ)=∫01log(Q′(p;γ))dp=log⁡(4πγ){\displaystyle H(\gamma )=\int _{0}^{1}\log \,(Q'(p;\gamma ))\,\mathrm {d} p=\log(4\pi \gamma )} The Cauchy distribution is themaximum entropy probability distributionfor a random variateX{\displaystyle X}for which[14] E⁡[log⁡(1+(X−x0γ)2)]=log⁡4{\displaystyle \operatorname {E} \left[\log \left(1+{\left({\frac {X-x_{0}}{\gamma }}\right)}^{2}\right)\right]=\log 4} The Cauchy distribution is usually used as an illustrative counterexample in elementary probability courses, as a distribution with no well-defined (or "indefinite") moments. If we take an IID sampleX1,X2,…{\displaystyle X_{1},X_{2},\ldots }from the standard Cauchy distribution, then the sequence of their sample mean isSn=1n∑i=1nXi{\textstyle S_{n}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}}, which also has the standard Cauchy distribution. Consequently, no matter how many terms we take, the sample average does not converge. Similarly, the sample varianceVn=1n∑i=1n(Xi−Sn)2{\textstyle V_{n}={\frac {1}{n}}\sum _{i=1}^{n}{\left(X_{i}-S_{n}\right)}^{2}}also does not converge. A typical trajectory ofS1,S2,...{\displaystyle S_{1},S_{2},...}looks like long periods of slow convergence to zero, punctuated by large jumps away from zero, but never getting too far away. A typical trajectory ofV1,V2,...{\displaystyle V_{1},V_{2},...}looks similar, but the jumps accumulate faster than the decay, diverging to infinity. These two kinds of trajectories are plotted in the figure. Moments of sample lower than order 1 would converge to zero. Moments of sample higher than order 2 would diverge to infinity even faster than sample variance. If aprobability distributionhas adensity functionf(x){\displaystyle f(x)}, then the mean, if it exists, is given by We may evaluate this two-sidedimproper integralby computing the sum of two one-sided improper integrals. That is, for an arbitrary real numbera{\displaystyle a}. For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in this sum (2) are infinite and have opposite sign. 
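Both closed forms can be verified by direct numerical integration. A sketch using scipy.integrate.quad, with arbitrarily chosen parameters:

import numpy as np
from scipy.integrate import quad

def cauchy_pdf(x, x0, g):
    return g / (np.pi * ((x - x0) ** 2 + g ** 2))

x0, g = 0.0, 2.0
h, _ = quad(lambda x: -cauchy_pdf(x, x0, g) * np.log(cauchy_pdf(x, x0, g)),
            -np.inf, np.inf)
print(h, np.log(4 * np.pi * g))          # entropy vs log(4*pi*gamma)

x1, g1, x2, g2 = 0.0, 1.0, 1.0, 2.0
kl, _ = quad(lambda x: cauchy_pdf(x, x1, g1)
             * np.log(cauchy_pdf(x, x1, g1) / cauchy_pdf(x, x2, g2)),
             -np.inf, np.inf)
closed = np.log(((g1 + g2) ** 2 + (x1 - x2) ** 2) / (4 * g1 * g2))
print(kl, closed)                        # numerical vs closed-form KL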
Hence (1) is undefined, and thus so is the mean.[15]When the mean of a probability distribution function (PDF) is undefined, no one can compute a reliable average over the experimental data points, regardless of the sample's size. Note that theCauchy principal valueof the mean of the Cauchy distribution islima→∞∫−aaxf(x)dx{\displaystyle \lim _{a\to \infty }\int _{-a}^{a}xf(x)\,dx}which is zero. On the other hand, the related integrallima→∞∫−2aaxf(x)dx{\displaystyle \lim _{a\to \infty }\int _{-2a}^{a}xf(x)\,dx}isnotzero, as can be seen by computing the integral. This again shows that the mean (1) cannot exist. Various results in probability theory aboutexpected values, such as thestrong law of large numbers, fail to hold for the Cauchy distribution.[15] The absolute moments forp∈(−1,1){\displaystyle p\in (-1,1)}are defined. ForX∼Cauchy(0,γ){\displaystyle X\sim \mathrm {Cauchy} (0,\gamma )}we haveE⁡[|X|p]=γpsec(πp/2).{\displaystyle \operatorname {E} [|X|^{p}]=\gamma ^{p}\mathrm {sec} (\pi p/2).} The Cauchy distribution does not have finite moments of any order. Some of the higherraw momentsdo exist and have a value of infinity, for example, the raw second moment: E⁡[X2]∝∫−∞∞x21+x2dx=∫−∞∞1−11+x2dx=∫−∞∞dx−∫−∞∞11+x2dx=∫−∞∞dx−π=∞.{\displaystyle {\begin{aligned}\operatorname {E} [X^{2}]&\propto \int _{-\infty }^{\infty }{\frac {x^{2}}{1+x^{2}}}\,dx=\int _{-\infty }^{\infty }1-{\frac {1}{1+x^{2}}}\,dx\\[8pt]&=\int _{-\infty }^{\infty }dx-\int _{-\infty }^{\infty }{\frac {1}{1+x^{2}}}\,dx=\int _{-\infty }^{\infty }dx-\pi =\infty .\end{aligned}}} By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to∞−∞{\displaystyle \infty -\infty }since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of thecentral momentsandstandardized momentsare undefined since they are all based on the mean. The variance—which is the second central moment—is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow fromHölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. Consider thetruncated distributiondefined by restricting the standard Cauchy distribution to the interval[−10100, 10100]. Such a truncated distribution has all moments (and the central limit theorem applies fori.i.d.observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.[16] Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed.[19]For example, if an i.i.d. 
sample of sizenis taken from a Cauchy distribution, one may calculate the sample mean as: x¯=1n∑i=1nxi{\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} Although the sample valuesxi{\displaystyle x_{i}}will be concentrated about the central valuex0{\displaystyle x_{0}}, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator ofx0{\displaystyle x_{0}}than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central valuex0{\displaystyle x_{0}}and the scaling parameterγ{\displaystyle \gamma }are needed. One simple method is to take the median value of the sample as an estimator ofx0{\displaystyle x_{0}}and half the sampleinterquartile rangeas an estimator ofγ{\displaystyle \gamma }. Other, more precise and robust methods have been developed.[20][21]For example, thetruncated meanof the middle 24% of the sampleorder statisticsproduces an estimate forx0{\displaystyle x_{0}}that is more efficient than using either the sample median or the full sample mean.[22][23]However, because of thefat tailsof the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.[22][23] Maximum likelihoodcan also be used to estimate the parametersx0{\displaystyle x_{0}}andγ{\displaystyle \gamma }. However, this tends to be complicated by the fact that this requires finding the roots of a high degree polynomial, and there can be multiple roots that represent local maxima.[24]Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.[25][26]The log-likelihood function for the Cauchy distribution for sample sizen{\displaystyle n}is: ℓ^(x1,…,xn∣x0,γ)=−nlog⁡(γπ)−∑i=1nlog⁡(1+(xi−x0γ)2){\displaystyle {\hat {\ell }}(x_{1},\dotsc ,x_{n}\mid \!x_{0},\gamma )=-n\log(\gamma \pi )-\sum _{i=1}^{n}\log \left(1+\left({\frac {x_{i}-x_{0}}{\gamma }}\right)^{2}\right)} Maximizing the log likelihood function with respect tox0{\displaystyle x_{0}}andγ{\displaystyle \gamma }by taking the first derivative produces the following system of equations: dℓdx0=∑i=1n2(xi−x0)γ2+(xi−x0)2=0{\displaystyle {\frac {d\ell }{dx_{0}}}=\sum _{i=1}^{n}{\frac {2(x_{i}-x_{0})}{\gamma ^{2}+\left(x_{i}-\!x_{0}\right)^{2}}}=0}dℓdγ=∑i=1n2(xi−x0)2γ(γ2+(xi−x0)2)−nγ=0{\displaystyle {\frac {d\ell }{d\gamma }}=\sum _{i=1}^{n}{\frac {2\left(x_{i}-x_{0}\right)^{2}}{\gamma (\gamma ^{2}+\left(x_{i}-x_{0}\right)^{2})}}-{\frac {n}{\gamma }}=0} Note that ∑i=1n(xi−x0)2γ2+(xi−x0)2{\displaystyle \sum _{i=1}^{n}{\frac {\left(x_{i}-x_{0}\right)^{2}}{\gamma ^{2}+\left(x_{i}-x_{0}\right)^{2}}}} is a monotone function inγ{\displaystyle \gamma }and that the solutionγ{\displaystyle \gamma }must satisfy min|xi−x0|≤γ≤max|xi−x0|.{\displaystyle \min |x_{i}-x_{0}|\leq \gamma \leq \max |x_{i}-x_{0}|.} Solving just forx0{\displaystyle x_{0}}requires solving a polynomial of degree2n−1{\displaystyle 2n-1},[24]and solving just forγ{\displaystyle \,\!\gamma }requires solving a polynomial of degree2n{\displaystyle 2n}. 
Therefore, whether solving for one parameter or for both parameters simultaneously, anumericalsolution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimatingx0{\displaystyle x_{0}}using the sample median is only about 81% as asymptotically efficient as estimatingx0{\displaystyle x_{0}}by maximum likelihood.[23][27]The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator ofx0{\displaystyle x_{0}}as the maximum likelihood estimate.[23]WhenNewton's methodis used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution forx0{\displaystyle x_{0}}. The shape can be estimated using the median of absolute values, since for location 0 Cauchy variablesX∼Cauchy(0,γ){\displaystyle X\sim \mathrm {Cauchy} (0,\gamma )}, themedian⁡(|X|)=γ{\displaystyle \operatorname {median} (|X|)=\gamma }the shape parameter. The Cauchy distribution is thestable distributionof index 1. TheLévy–Khintchine representationof such a stable distribution of parameterγ{\displaystyle \gamma }is given, forX∼Stable⁡(γ,0,0){\displaystyle X\sim \operatorname {Stable} (\gamma ,0,0)\,}by: E⁡(eixX)=exp⁡(∫R(eixy−1)Πγ(dy)){\displaystyle \operatorname {E} \left(e^{ixX}\right)=\exp \left(\int _{\mathbb {R} }(e^{ixy}-1)\Pi _{\gamma }(dy)\right)} where Πγ(dy)=(c1,γ1y1+γ1{y>0}+c2,γ1|y|1+γ1{y<0})dy{\displaystyle \Pi _{\gamma }(dy)=\left(c_{1,\gamma }{\frac {1}{y^{1+\gamma }}}1_{\left\{y>0\right\}}+c_{2,\gamma }{\frac {1}{|y|^{1+\gamma }}}1_{\left\{y<0\right\}}\right)\,dy} andc1,γ,c2,γ{\displaystyle c_{1,\gamma },c_{2,\gamma }}can be expressed explicitly.[28]In the caseγ=1{\displaystyle \gamma =1}of the Cauchy distribution, one hasc1,γ=c2,γ{\displaystyle c_{1,\gamma }=c_{2,\gamma }}. This last representation is a consequence of the formula π|x|=PV⁡∫R∖{0}(1−eixy)dyy2{\displaystyle \pi |x|=\operatorname {PV} \int _{\mathbb {R} \smallsetminus \lbrace 0\rbrace }(1-e^{ixy})\,{\frac {dy}{y^{2}}}} Arandom vectorX=(X1,…,Xk)T{\displaystyle X=(X_{1},\ldots ,X_{k})^{T}}is said to have the multivariate Cauchy distribution if every linear combination of its componentsY=a1X1+⋯+akXk{\displaystyle Y=a_{1}X_{1}+\cdots +a_{k}X_{k}}has a Cauchy distribution. That is, for any constant vectora∈Rk{\displaystyle a\in \mathbb {R} ^{k}}, the random variableY=aTX{\displaystyle Y=a^{T}X}should have a univariate Cauchy distribution.[29]The characteristic function of a multivariate Cauchy distribution is given by: φX(t)=eix0(t)−γ(t),{\displaystyle \varphi _{X}(t)=e^{ix_{0}(t)-\gamma (t)},\!} wherex0(t){\displaystyle x_{0}(t)}andγ(t){\displaystyle \gamma (t)}are real functions withx0(t){\displaystyle x_{0}(t)}ahomogeneous functionof degree one andγ(t){\displaystyle \gamma (t)}a positive homogeneous function of degree one.[29]More formally:[29] x0(at)=ax0(t),γ(at)=|a|γ(t),{\displaystyle {\begin{aligned}x_{0}(at)&=ax_{0}(t),\\\gamma (at)&=|a|\gamma (t),\end{aligned}}} for allt{\displaystyle t}. 
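Returning to the univariate estimation problem discussed above, the robust median/interquartile starting values and a numerical maximum likelihood fit can be sketched as follows. NumPy/SciPy assumed; the optimizer choice and parameter values are illustrative, not prescriptive:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = 3.0 + 2.0 * rng.standard_cauchy(5_000)     # true x0 = 3, gamma = 2

x0_init = np.median(x)                         # robust location estimate
g_init = 0.5 * (np.quantile(x, 0.75) - np.quantile(x, 0.25))  # half the IQR

def neg_loglik(theta):
    # Negative of the log-likelihood given above; gamma is kept
    # positive by optimizing its logarithm
    x0, log_g = theta
    g = np.exp(log_g)
    return len(x) * np.log(g * np.pi) + np.sum(np.log1p(((x - x0) / g) ** 2))

res = minimize(neg_loglik, [x0_init, np.log(g_init)], method="Nelder-Mead")
print("median/IQR:", x0_init, g_init)
print("MLE:       ", res.x[0], np.exp(res.x[1]))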
An example of a bivariate Cauchy distribution can be given by:[30]f(x,y;x0,y0,γ)=12πγ((x−x0)2+(y−y0)2+γ2)3/2.{\displaystyle f(x,y;x_{0},y_{0},\gamma )={\frac {1}{2\pi }}\,{\frac {\gamma }{{\left({\left(x-x_{0}\right)}^{2}+{\left(y-y_{0}\right)}^{2}+\gamma ^{2}\right)}^{3/2}}}.}Note that in this example, even though the covariance betweenx{\displaystyle x}andy{\displaystyle y}is 0,x{\displaystyle x}andy{\displaystyle y}are notstatistically independent.[30] We also can write this formula for complex variable. Then the probability density function of complex Cauchy is : f(z;z0,γ)=12πγ(|z−z0|2+γ2)3/2.{\displaystyle f(z;z_{0},\gamma )={\frac {1}{2\pi }}\,{\frac {\gamma }{{\left({\left|z-z_{0}\right|}^{2}+\gamma ^{2}\right)}^{3/2}}}.} Like how the standard Cauchy distribution is the Student t-distribution with one degree of freedom, the multidimensional Cauchy density is themultivariate Student distributionwith one degree of freedom. The density of ak{\displaystyle k}dimension Student distribution with one degree of freedom is: f(x;μ,Σ,k)=Γ(1+k2)Γ(12)πk2|Σ|12[1+(x−μ)TΣ−1(x−μ)]1+k2.{\displaystyle f(\mathbf {x} ;{\boldsymbol {\mu }},\mathbf {\Sigma } ,k)={\frac {\Gamma {\left({\frac {1+k}{2}}\right)}}{\Gamma ({\frac {1}{2}})\pi ^{\frac {k}{2}}\left|\mathbf {\Sigma } \right|^{\frac {1}{2}}\left[1+({\mathbf {x} }-{\boldsymbol {\mu }})^{\mathsf {T}}{\mathbf {\Sigma } }^{-1}({\mathbf {x} }-{\boldsymbol {\mu }})\right]^{\frac {1+k}{2}}}}.} The properties of multidimensional Cauchy distribution are then special cases of the multivariate Student distribution. Innuclearandparticle physics, the energy profile of aresonanceis described by therelativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.[citation needed] A function with the form of the density function of the Cauchy distribution was studied geometrically byFermatin 1659, and later was known as thewitch of Agnesi, afterMaria Gaetana Agnesiincluded it as an example in her 1748 calculus textbook. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematicianPoissonin 1824, with Cauchy only becoming associated with it during an academic controversy in 1853.[36]Poisson noted that if the mean of observations following such a distribution were taken, thestandard deviationdid not converge to any finite number. As such,Laplace's use of thecentral limit theoremwith such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast toBienaymé, who was to engage Cauchy in a long dispute over the matter.
https://en.wikipedia.org/wiki/Cauchy_distribution
In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one[clarification needed] effects of a linear function of a set of explanatory variables) by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being observed) in the input dataset and the output of the (linear) function of the independent variables. Some sources consider OLS to be linear regression.[1]

Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.

The OLS estimator is consistent for the level-one fixed effects when the regressors are exogenous and there is no perfect collinearity (rank condition), consistent for the variance estimate of the residuals when regressors have finite fourth moments[2] and—by the Gauss–Markov theorem—optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed with zero mean, OLS is the maximum likelihood estimator that outperforms any non-linear unbiased estimator.

Suppose the data consists of n{\displaystyle n} observations {xi,yi}i=1n{\displaystyle \left\{\mathbf {x} _{i},y_{i}\right\}_{i=1}^{n}}. Each observation i{\displaystyle i} includes a scalar response yi{\displaystyle y_{i}} and a column vector xi{\displaystyle \mathbf {x} _{i}} of p{\displaystyle p} parameters (regressors), i.e., xi=[xi1,xi2,…,xip]T{\displaystyle \mathbf {x} _{i}=\left[x_{i1},x_{i2},\dots ,x_{ip}\right]^{\operatorname {T} }}. In a linear regression model, the response variable, yi{\displaystyle y_{i}}, is a linear function of the regressors: {\displaystyle y_{i}=\beta _{1}x_{i1}+\beta _{2}x_{i2}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i},} or in vector form, {\displaystyle y_{i}=\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }}+\varepsilon _{i},} where xi{\displaystyle \mathbf {x} _{i}}, as introduced previously, is a column vector of the i{\displaystyle i}-th observation of all the explanatory variables; β{\displaystyle {\boldsymbol {\beta }}} is a p×1{\displaystyle p\times 1} vector of unknown parameters; and the scalar εi{\displaystyle \varepsilon _{i}} represents unobserved random variables (errors) of the i{\displaystyle i}-th observation. εi{\displaystyle \varepsilon _{i}} accounts for the influences upon the responses yi{\displaystyle y_{i}} from sources other than the explanatory variables xi{\displaystyle \mathbf {x} _{i}}. This model can also be written in matrix notation as {\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},} where y{\displaystyle \mathbf {y} } and ε{\displaystyle {\boldsymbol {\varepsilon }}} are n×1{\displaystyle n\times 1} vectors of the response variables and the errors of the n{\displaystyle n} observations, and X{\displaystyle \mathbf {X} } is an n×p{\displaystyle n\times p} matrix of regressors, also sometimes called the design matrix, whose row i{\displaystyle i} is xiT{\displaystyle \mathbf {x} _{i}^{\operatorname {T} }} and contains the i{\displaystyle i}-th observations on all the explanatory variables.
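To make the notation concrete, here is a minimal sketch (NumPy assumed; all numbers hypothetical) that generates data from the model y = Xβ + ε with an intercept column:

import numpy as np

rng = np.random.default_rng(0)
n = 50

# Design matrix X: a column of ones (intercept) plus two regressors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([1.0, 2.0, -0.5])            # hypothetical true parameters
eps = rng.normal(scale=0.3, size=n)          # unobserved errors
y = X @ beta + eps                           # the model in matrix notation
print(X.shape, y.shape)                      # (n, p) and (n,)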
Typically, a constant term is included in the set of regressors X{\displaystyle \mathbf {X} }, say, by taking xi1=1{\displaystyle x_{i1}=1} for all i=1,…,n{\displaystyle i=1,\dots ,n}. The coefficient β1{\displaystyle \beta _{1}} corresponding to this regressor is called the intercept. Without the intercept, the fitted line is forced to cross the origin when xi=0→{\displaystyle x_{i}={\vec {0}}}.

Regressors do not have to be independent for estimation to be consistent: e.g., they may be non-linearly dependent. Short of perfect multicollinearity, parameter estimates may still be consistent; however, as multicollinearity rises the standard error around such estimates increases and reduces the precision of such estimates. When there is perfect multicollinearity, it is no longer possible to obtain unique estimates for the coefficients to the related regressors; estimation for these parameters cannot converge (thus, it cannot be consistent). As a concrete example where regressors are non-linearly dependent yet estimation may still be consistent, we might suspect the response depends linearly both on a value and its square, in which case we would include one regressor whose value is just the square of another regressor. In that case, the model would be quadratic in the second regressor, but nonetheless is still considered a linear model because the model is still linear in the parameters (β{\displaystyle {\boldsymbol {\beta }}}).

Consider an overdetermined system of n{\displaystyle n} linear equations in p{\displaystyle p} unknown coefficients, β1,β2,…,βp{\displaystyle \beta _{1},\beta _{2},\dots ,\beta _{p}}, with n>p{\displaystyle n>p}. This can be written in matrix form as {\displaystyle \mathbf {X} {\boldsymbol {\beta }}=\mathbf {y} ,} where {\displaystyle \mathbf {X} ={\begin{bmatrix}X_{11}&X_{12}&\cdots &X_{1p}\\X_{21}&X_{22}&\cdots &X_{2p}\\\vdots &\vdots &\ddots &\vdots \\X_{n1}&X_{n2}&\cdots &X_{np}\end{bmatrix}},\qquad {\boldsymbol {\beta }}={\begin{bmatrix}\beta _{1}\\\beta _{2}\\\vdots \\\beta _{p}\end{bmatrix}},\qquad \mathbf {y} ={\begin{bmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{bmatrix}}.} (Note: for a linear model as above, not all elements in X{\displaystyle \mathbf {X} } contain information on the data points. The first column is populated with ones, Xi1=1{\displaystyle X_{i1}=1}. Only the other columns contain actual data. So here p{\displaystyle p} is equal to the number of regressors plus one).

Such a system usually has no exact solution, so the goal is instead to find the coefficients β{\displaystyle {\boldsymbol {\beta }}} which fit the equations "best", in the sense of solving the quadratic minimization problem {\displaystyle {\hat {\boldsymbol {\beta }}}={\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}\,S({\boldsymbol {\beta }}),} where the objective function S{\displaystyle S} is given by {\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{n}\left|y_{i}-\sum _{j=1}^{p}X_{ij}\beta _{j}\right|^{2}=\left\|\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right\|^{2}.} A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the p{\displaystyle p} columns of the matrix X{\displaystyle \mathbf {X} } are linearly independent, given by solving the so-called normal equations: {\displaystyle \left(\mathbf {X} ^{\operatorname {T} }\mathbf {X} \right){\hat {\boldsymbol {\beta }}}=\mathbf {X} ^{\operatorname {T} }\mathbf {y} .} The matrix XTX{\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {X} } is known as the normal matrix or Gram matrix and the matrix XTy{\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {y} } is known as the moment matrix of regressand by regressors.[3] Finally, β^{\displaystyle {\hat {\boldsymbol {\beta }}}} is the coefficient vector of the least-squares hyperplane, expressed as {\displaystyle {\hat {\boldsymbol {\beta }}}=\left(\mathbf {X} ^{\operatorname {T} }\mathbf {X} \right)^{-1}\mathbf {X} ^{\operatorname {T} }\mathbf {y} .}

Suppose b is a "candidate" value for the parameter vector β. The quantity yi − xiTb, called the residual for the i-th observation, measures the vertical distance between the data point (xi, yi) and the hyperplane y = xTb, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS))[4] is a measure of the overall model fit: {\displaystyle S(b)=\sum _{i=1}^{n}\left(y_{i}-\mathbf {x} _{i}^{\operatorname {T} }b\right)^{2}=\left(\mathbf {y} -\mathbf {X} b\right)^{\operatorname {T} }\left(\mathbf {y} -\mathbf {X} b\right),} where T denotes the matrix transpose, and the rows of X, denoting the values of all the independent variables associated with a particular value of the dependent variable, are Xi = xiT.
The value of b which minimizes this sum is called the OLS estimator for β. The function S(b) is quadratic in b with positive-definite Hessian, and therefore this function possesses a unique global minimum at b = β̂, which can be given by the explicit formula[5][proof]

β̂ = (XᵀX)⁻¹ Xᵀy.

The product N = XᵀX is a Gram matrix, and its inverse, Q = N⁻¹, is the cofactor matrix of β,[6][7][8] closely related to its covariance matrix, C_β. The matrix (XᵀX)⁻¹Xᵀ = QXᵀ is called the Moore–Penrose pseudoinverse matrix of X. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse).

After we have estimated β, the fitted values (or predicted values) from the regression will be

ŷ = Xβ̂ = Py,

where P = X(XᵀX)⁻¹Xᵀ is the projection matrix onto the space V spanned by the columns of X. This matrix P is also sometimes called the hat matrix because it "puts a hat" onto the variable y. Another matrix, closely related to P, is the annihilator matrix M = I_n − P; this is a projection matrix onto the space orthogonal to V. Both matrices P and M are symmetric and idempotent (meaning that P² = P and M² = M), and relate to the data matrix X via the identities PX = X and MX = 0.[9] Matrix M creates the residuals from the regression:

ε̂ = y − ŷ = My = M(Xβ + ε) = Mε.

The variances of the predicted values s²_ŷᵢ are found in the main diagonal of the variance-covariance matrix of predicted values:

Var̂(ŷ) = s²P,

where P is the projection matrix and s² is the sample variance.[10] The full matrix is very large; its diagonal elements can be calculated individually as:

s²_ŷᵢ = s² X_i (XᵀX)⁻¹ X_iᵀ,

where X_i is the i-th row of matrix X. Using these residuals we can estimate the sample variance s² using the reduced chi-squared statistic:

s² = ε̂ᵀε̂ / (n − p),  σ̂² = ε̂ᵀε̂ / n.

The denominator, n − p, is the statistical degrees of freedom. The first quantity, s², is the OLS estimate for σ², whereas the second, σ̂², is the MLE estimate for σ². The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice s² is used more often, since it is more convenient for hypothesis testing. The square root of s² is called the regression standard error,[11] standard error of the regression,[12][13] or standard error of the equation.[9]

It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto X. The coefficient of determination R² is defined as a ratio of "explained" variance to the "total" variance of the dependent variable y, in the cases where the regression sum of squares equals the sum of squares of residuals:[14]

R² = Σ(ŷᵢ − ȳ)² / Σ(yᵢ − ȳ)² = 1 − ε̂ᵀε̂ / (yᵀLy) = 1 − RSS/TSS,

where TSS is the total sum of squares for the dependent variable, L = I_n − (1/n)J_n, and J_n is an n×n matrix of ones. (L is a centering matrix which is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order for R² to be meaningful, the matrix X of data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case, R² will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit.

If the data matrix X contains only two variables, a constant and a scalar regressor x_i, then this is called the "simple regression model".
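Before specializing to that case, here is a hedged sketch collecting the quantities just introduced (hat matrix, residuals, s², σ̂², R²); it assumes X contains a column of ones so that R² lies in [0, 1].

```python
import numpy as np

def ols_diagnostics(X, y):
    """Fitted values, residuals, s^2, MLE variance, and R^2 (a sketch)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    P = X @ XtX_inv @ X.T            # projection ("hat") matrix
    y_hat = P @ y                    # fitted values  y_hat = P y
    resid = y - y_hat                # residuals      eps_hat = M y
    s2 = resid @ resid / (n - p)     # unbiased estimate of sigma^2
    sigma2_mle = resid @ resid / n   # biased MLE counterpart
    r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return y_hat, resid, s2, sigma2_mle, r2
```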
This case is often considered in introductory statistics classes, as it provides much simpler formulas, suitable even for manual calculation. The parameters are commonly denoted as (α, β):

y_i = α + βx_i + ε_i.

The least squares estimates in this case are given by the simple formulas

β̂ = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)²,  α̂ = ȳ − β̂x̄.

In the previous section the least squares estimator β̂ was obtained as a value that minimizes the sum of squared residuals of the model. However, it is also possible to derive the same estimator from other approaches. In all cases the formula for the OLS estimator remains the same: β̂ = (XᵀX)⁻¹Xᵀy; the only difference is in how we interpret this result.

For mathematicians, OLS is an approximate solution to an overdetermined system of linear equations Xβ ≈ y, where β is the unknown. Assuming the system cannot be solved exactly (the number of equations n being much larger than the number of unknowns p), we are looking for a solution that provides the smallest discrepancy between the right- and left-hand sides. In other words, we are looking for the solution that satisfies

β̂ = arg min_β ‖y − Xβ‖,

where ‖·‖ is the standard L² norm in the n-dimensional Euclidean space Rⁿ. The predicted quantity Xβ is just a certain linear combination of the vectors of regressors. Thus, the residual vector y − Xβ will have the smallest length when y is projected orthogonally onto the linear subspace spanned by the columns of X. The OLS estimator β̂ in this case can be interpreted as the coefficients of the vector decomposition of ŷ = Py along the basis of X.

In other words, the gradient equations at the minimum can be written as:

(y − Xβ̂)ᵀX = 0.

A geometrical interpretation of these equations is that the vector of residuals, y − Xβ̂, is orthogonal to the column space of X, since the dot product (y − Xβ̂)·Xv is equal to zero for any vector v. This means that y − Xβ̂ is the shortest of all possible vectors y − Xβ, that is, the variance of the residuals is the minimum possible.

Introducing γ̂ and a matrix K with the assumption that the matrix [X K] is non-singular and KᵀX = 0 (cf. Orthogonal projections), the residual vector should satisfy the following equation:

y − Xβ̂ = Kγ̂.

The equation and solution of linear least squares are thus described as follows:

y = [X K] [β̂ᵀ γ̂ᵀ]ᵀ,  [β̂ᵀ γ̂ᵀ]ᵀ = [X K]⁻¹ y.

Another way of looking at it is to consider the regression line to be a weighted average of the lines passing through the combination of any two points in the dataset.[15] Although this way of calculation is more computationally expensive, it provides a better intuition on OLS.

The OLS estimator is identical to the maximum likelihood estimator (MLE) under the normality assumption for the error terms.[16][proof] This normality assumption has historical importance, as it provided the basis for the early work in linear regression analysis by Yule and Pearson. From the properties of MLE, we can infer that the OLS estimator is asymptotically efficient (in the sense of attaining the Cramér–Rao bound for variance) if the normality assumption is satisfied.[17]

In the iid case the OLS estimator can also be viewed as a GMM estimator arising from the moment conditions

E[x_i (y_i − x_iᵀβ)] = 0.

These moment conditions state that the regressors should be uncorrelated with the errors.
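A small sketch of the closed-form simple-regression estimates above, together with a numerical check of the gradient/moment condition Xᵀ(y − Xβ̂) = 0 discussed in this section. The data values here are illustrative, not from the text.

```python
import numpy as np

def simple_ols(x, y):
    """Closed-form simple regression y ≈ alpha + beta*x (a sketch)."""
    beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

x = np.array([0.0, 1.0, 2.0, 3.0])        # illustrative data
y = np.array([1.1, 2.9, 5.2, 6.8])
alpha, beta = simple_ols(x, y)
X = np.column_stack([np.ones_like(x), x])
resid = y - X @ np.array([alpha, beta])
print(np.allclose(X.T @ resid, 0.0))      # moment condition X^T (y - Xb) = 0
```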
Since x_i is a p-vector, the number of moment conditions is equal to the dimension of the parameter vector β, and thus the system is exactly identified. This is the so-called classical GMM case, when the estimator does not depend on the choice of the weighting matrix.

Note that the original strict exogeneity assumption E[ε_i | x_i] = 0 implies a far richer set of moment conditions than stated above. In particular, this assumption implies that for any vector-function ƒ, the moment condition E[ƒ(x_i)·ε_i] = 0 will hold. However, it can be shown using the Gauss–Markov theorem that the optimal choice of function ƒ is to take ƒ(x) = x, which results in the moment equation posted above.

There are several different frameworks in which the linear regression model can be cast in order to make the OLS technique applicable. Each of these settings produces the same formulas and same results. The only difference is the interpretation and the assumptions which have to be imposed in order for the method to give meaningful results. The choice of the applicable framework depends mostly on the nature of the data at hand, and on the inference task which has to be performed.

One of the lines of difference in interpretation is whether to treat the regressors as random variables, or as predefined constants. In the first case (random design) the regressors x_i are random and sampled together with the y_i's from some population, as in an observational study. This approach allows for a more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors X are treated as known constants set by a design, and y is sampled conditionally on the values of X as in an experiment. For practical purposes, this distinction is often unimportant, since estimation and inference is carried out while conditioning on X. All results stated in this article are within the random design framework.

The classical model focuses on the "finite sample" estimation and inference, meaning that the number of observations n is fixed. This contrasts with the other approaches, which study the asymptotic behavior of OLS, where the behavior at a large number of samples is studied. In some applications, especially with cross-sectional data, an additional assumption is imposed — that all observations are independent and identically distributed. This means that all observations are taken from a random sample, which makes all the assumptions listed earlier simpler and easier to interpret. Also this framework allows one to state asymptotic results (as the sample size n → ∞), which are understood as a theoretical possibility of fetching new independent observations from the data generating process. The list of assumptions in this case is:

First of all, under the strict exogeneity assumption the OLS estimators β̂ and s² are unbiased, meaning that their expected values coincide with the true values of the parameters:[24][proof]

E[β̂ | X] = β,  E[s² | X] = σ².

If the strict exogeneity does not hold (as is the case with many time series models, where exogeneity is assumed only with respect to the past shocks but not the future ones), then these estimators will be biased in finite samples.

The variance-covariance matrix (or simply covariance matrix) of β̂ is equal to[25]

Var[β̂ | X] = σ²(XᵀX)⁻¹.

In particular, the standard error of each coefficient β̂_j is equal to the square root of the j-th diagonal element of this matrix.
The estimate of this standard error is obtained by replacing the unknown quantity σ² with its estimate s². Thus,

se(β̂_j) = √( s² [(XᵀX)⁻¹]_jj ).

It can also be easily shown that the estimator β̂ is uncorrelated with the residuals from the model:[25]

Cov[β̂, ε̂ | X] = 0.

The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors should be uncorrelated and homoscedastic) the estimator β̂ is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). Efficiency should be understood as if we were to find some other estimator β̃ which would be linear in y and unbiased, then[25]

Var[β̃ | X] − Var[β̂ | X] ≥ 0

in the sense that this is a nonnegative-definite matrix. This theorem establishes optimality only in the class of linear unbiased estimators, which is quite restrictive. Depending on the distribution of the error terms ε, other, non-linear estimators may provide better results than OLS.

The properties listed so far are all valid regardless of the underlying distribution of the error terms. However, if one is willing to assume that the normality assumption holds (that is, that ε ~ N(0, σ²I_n)), then additional properties of the OLS estimators can be stated.

The estimator β̂ is normally distributed, with mean and variance as given before:[26]

β̂ ~ N(β, σ²(XᵀX)⁻¹).

This estimator reaches the Cramér–Rao bound for the model, and thus is optimal in the class of all unbiased estimators.[17] Note that unlike the Gauss–Markov theorem, this result establishes optimality among both linear and non-linear estimators, but only in the case of normally distributed error terms.

The estimator s² will be proportional to the chi-squared distribution:[27]

s² ~ (σ² / (n − p)) · χ²_{n−p}.

The variance of this estimator is equal to 2σ⁴/(n − p), which does not attain the Cramér–Rao bound of 2σ⁴/n. However, it was shown that there are no unbiased estimators of σ² with variance smaller than that of the estimator s².[28] If we are willing to allow biased estimators, and consider the class of estimators that are proportional to the sum of squared residuals (SSR) of the model, then the best (in the sense of the mean squared error) estimator in this class will be σ̃² = SSR/(n − p + 2), which even beats the Cramér–Rao bound in the case when there is only one regressor (p = 1).[29]

Moreover, the estimators β̂ and s² are independent,[30] a fact which comes in useful when constructing the t- and F-tests for the regression.

As was mentioned before, the estimator β̂ is linear in y, meaning that it represents a linear combination of the dependent variables y_i. The weights in this linear combination are functions of the regressors X, and generally are unequal. The observations with high weights are called influential because they have a more pronounced effect on the value of the estimator.

To analyze which observations are influential we remove a specific j-th observation and consider how much the estimated quantities are going to change (similarly to the jackknife method). It can be shown that the change in the OLS estimator for β will be equal to[31]

β̂^(j) − β̂ = −(1 / (1 − h_j)) (XᵀX)⁻¹ x_j ε̂_j,

where h_j = x_jᵀ(XᵀX)⁻¹x_j is the j-th diagonal element of the hat matrix P, and x_j is the vector of regressors corresponding to the j-th observation.
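A numerical check of this deletion formula, on simulated illustrative data: the change predicted by the formula matches a direct refit with the j-th observation removed.

```python
import numpy as np

# Sketch: verify beta_hat^(j) - beta_hat = -(X'X)^{-1} x_j e_j / (1 - h_j).
rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)     # leverages h_j = diag(P)

j = 0
delta = -XtX_inv @ X[j] * resid[j] / (1 - h[j])  # formula prediction
Xj, yj = np.delete(X, j, axis=0), np.delete(y, j)
beta_j = np.linalg.solve(Xj.T @ Xj, Xj.T @ yj)   # direct refit without row j
print(np.allclose(beta_j - beta, delta))         # True
```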
Similarly, the change in the predicted value for the j-th observation resulting from omitting that observation from the dataset will be equal to[31]

ŷ_j^(j) − ŷ_j = x_jᵀ(β̂^(j) − β̂) = −(h_j / (1 − h_j)) ε̂_j.

From the properties of the hat matrix, 0 ≤ h_j ≤ 1, and they sum up to p, so that on average h_j ≈ p/n. These quantities h_j are called the leverages, and observations with high h_j are called leverage points.[32] Usually the observations with high leverage ought to be scrutinized more carefully, in case they are erroneous, or outliers, or in some other way atypical of the rest of the dataset.

Sometimes the variables and corresponding parameters in the regression can be logically split into two groups, so that the regression takes the form

y = X₁β₁ + X₂β₂ + ε,

where X₁ and X₂ have dimensions n×p₁, n×p₂, and β₁, β₂ are p₁×1 and p₂×1 vectors, with p₁ + p₂ = p. The Frisch–Waugh–Lovell theorem states that in this regression the residuals ε̂ and the OLS estimate β̂₂ will be numerically identical to the residuals and the OLS estimate for β₂ in the following regression:[33]

M₁y = M₁X₂β₂ + η,

where M₁ is the annihilator matrix for regressors X₁. The theorem can be used to establish a number of theoretical results. For example, having a regression with a constant and another regressor is equivalent to subtracting the means from the dependent variable and the regressor and then running the regression for the de-meaned variables but without the constant term.

Suppose it is known that the coefficients in the regression satisfy a system of linear equations

A: Qᵀβ = c,

where Q is a p×q matrix of full rank, and c is a q×1 vector of known constants, where q < p. In this case least squares estimation is equivalent to minimizing the sum of squared residuals of the model subject to the constraint A. The constrained least squares (CLS) estimator can be given by an explicit formula:[34]

β̂ᶜ = β̂ − (XᵀX)⁻¹Q( Qᵀ(XᵀX)⁻¹Q )⁻¹( Qᵀβ̂ − c ).

This expression for the constrained estimator is valid as long as the matrix XᵀX is invertible. It was assumed from the beginning of this article that this matrix is of full rank, and it was noted that when the rank condition fails, β will not be identifiable. However, it may happen that adding the restriction A makes β identifiable, in which case one would like to find the formula for the estimator. The estimator is equal to[35]

β̂ᶜ = Q(QᵀQ)⁻¹c + R( RᵀXᵀXR )⁻¹RᵀXᵀ( y − XQ(QᵀQ)⁻¹c ),

where R is a p×(p−q) matrix such that the matrix [Q R] is non-singular, and RᵀQ = 0. Such a matrix can always be found, although generally it is not unique. The second formula coincides with the first in the case when XᵀX is invertible.[35]

The least squares estimators are point estimates of the linear regression model parameters β. However, generally we also want to know how close those estimates might be to the true values of parameters. In other words, we want to construct the interval estimates. Since we have not made any assumption about the distribution of the error term ε_i, it is impossible to infer the distribution of the estimators β̂ and σ̂². Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as the sample size n goes to infinity. While the sample size is necessarily finite, it is customary to assume that n is "large enough" so that the true distribution of the OLS estimator is close to its asymptotic limit.
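The finite-sample standard errors described earlier (square roots of s² times the diagonal of (XᵀX)⁻¹) underlie the interval estimates that follow; a hedged sketch of their computation:

```python
import numpy as np

def ols_standard_errors(X, y):
    """Estimates and standard errors sqrt(s^2 [(X^T X)^{-1}]_jj) (a sketch)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)          # unbiased variance estimate
    return beta_hat, np.sqrt(s2 * np.diag(XtX_inv))
```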
We can show that under the model assumptions, the least squares estimator for β is consistent (that is, β̂ converges in probability to β) and asymptotically normal:[proof]

√n (β̂ − β) →d N(0, σ²Q_xx⁻¹),

where Q_xx = E[x_i x_iᵀ] = plim (1/n)(XᵀX). Using this asymptotic distribution, approximate two-sided confidence intervals for the j-th component of the vector β̂ can be constructed as

β_j ∈ [ β̂_j ± q_{1−α/2} √( s² [(XᵀX)⁻¹]_jj ) ]  at the 1 − α confidence level,

where q denotes the quantile function of the standard normal distribution, and [·]_jj is the j-th diagonal element of a matrix.

Similarly, the least squares estimator for σ² is also consistent and asymptotically normal (provided that the fourth moment of ε_i exists) with limiting distribution

√n (σ̂² − σ²) →d N(0, E[ε_i⁴] − σ⁴).

These asymptotic distributions can be used for prediction, testing hypotheses, constructing other estimators, etc. As an example consider the problem of prediction. Suppose x₀ is some point within the domain of distribution of the regressors, and one wants to know what the response variable would have been at that point. The mean response is the quantity y₀ = x₀ᵀβ, whereas the predicted response is ŷ₀ = x₀ᵀβ̂. Clearly the predicted response is a random variable; its distribution can be derived from that of β̂:

√n (ŷ₀ − y₀) →d N(0, σ² x₀ᵀQ_xx⁻¹x₀),

which allows confidence intervals for the mean response y₀ to be constructed:

y₀ ∈ [ x₀ᵀβ̂ ± q_{1−α/2} √( s² x₀ᵀ(XᵀX)⁻¹x₀ ) ]  at the 1 − α confidence level.

Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test. If the calculated F-value is found to be large enough to exceed its critical value for the pre-chosen level of significance, the null hypothesis is rejected and the alternative hypothesis, that the regression has explanatory power, is accepted. Otherwise, the null hypothesis of no explanatory power is accepted.

Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero—that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. This hypothesis is tested by computing the coefficient's t-statistic, as the ratio of the coefficient estimate to its standard error. If the t-statistic is larger than a predetermined value, the null hypothesis is rejected and the variable is found to have explanatory power, with its coefficient significantly different from zero. Otherwise, the null hypothesis of a zero value of the true coefficient is accepted.

In addition, the Chow test is used to test whether two subsamples both have the same underlying true coefficient values. The sum of squared residuals of regressions on each of the subsets and on the combined data set are compared by computing an F-statistic; if this exceeds a critical value, the null hypothesis of no difference between the two subsets is rejected; otherwise, it is accepted.

The following data set gives average heights and weights for American women aged 30–39 (source: The World Almanac and Book of Facts, 1975).
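Before turning to the example, a hedged sketch of the approximate confidence intervals just described. It uses the standard normal quantile; in small samples the Student t quantile with n − p degrees of freedom is customary instead.

```python
import numpy as np
from scipy.stats import norm

def ols_confint(X, y, alpha=0.05):
    """Approximate two-sided CIs beta_j ± q_{1-a/2} * se_j (a sketch)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(XtX_inv))
    q = norm.ppf(1 - alpha / 2)          # standard normal quantile
    return np.column_stack([beta_hat - q * se, beta_hat + q * se])
```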
When only one dependent variable is being modeled, a scatterplot will suggest the form and strength of the relationship between the dependent variable and regressors. It might also reveal outliers, heteroscedasticity, and other aspects of the data that may complicate the interpretation of a fitted regression model. The scatterplot suggests that the relationship is strong and can be approximated as a quadratic function. OLS can handle non-linear relationships by introducing the regressor HEIGHT². The regression model then becomes a multiple linear model:

w_i = β₁ + β₂h_i + β₃h_i² + ε_i.

The output from most popular statistical packages will look similar to this:

Ordinary least squares analysis often includes the use of diagnostic plots designed to detect departures of the data from the assumed form of the model.

An important consideration when carrying out statistical inference using regression models is how the data were sampled. In this example, the data are averages rather than measurements on individual women. The fit of the model is very good, but this does not imply that the weight of an individual woman can be predicted with high accuracy based only on her height.

This example also demonstrates that coefficients determined by these calculations are sensitive to how the data are prepared. The heights were originally given rounded to the nearest inch and have been converted and rounded to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm, this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric without rounding. If this is done the results become:

Using either of these equations to predict the weight of a 5' 6" (1.6764 m) woman gives similar values: 62.94 kg with rounding vs. 62.98 kg without rounding. Thus a seemingly small variation in the data has a real effect on the coefficients but a small effect on the results of the equation. While this may look innocuous in the middle of the data range, it could become significant at the extremes or in the case where the fitted model is used to project outside the data range (extrapolation).

This highlights a common error: this example is an abuse of OLS, which inherently requires that the errors in the independent variable (in this case height) are zero or at least negligible. The initial rounding to the nearest inch plus any actual measurement errors constitute a finite and non-negligible error. As a result, the fitted parameters are not the best estimates they are presumed to be. Though not totally spurious, the error in the estimation will depend upon the relative sizes of the x and y errors.

We can use the least-squares mechanism to figure out the equation of a two-body orbit in polar coordinates. The equation typically used is r(θ) = p / (1 − e·cos(θ)), where r(θ) is the distance from the object to one of the bodies. In the equation the parameters p and e are used to determine the path of the orbit. We have measured the following data. We need to find the least-squares approximation of e and p for the given data. First we need to represent e and p in a linear form, so we rewrite the equation r(θ) as

1/r(θ) = 1/p − (e/p)·cos(θ).
Furthermore, one could fit for apsides by expanding cos(θ) with an extra parameter as cos(θ − θ₀) = cos(θ)cos(θ₀) + sin(θ)sin(θ₀), which is linear in both cos(θ) and in the extra basis function sin(θ), and can be used to extract tan θ₀ = sin(θ₀)/cos(θ₀). We use the original two-parameter form to represent our observational data as

AᵀA (x, y)ᵀ = Aᵀb,

where x is 1/p, y is e/p, A is constructed with its first column being the coefficient of 1/p and its second column being the coefficient of e/p, and b is the vector of the respective values of 1/r(θ), so

A =
[ 1  −0.731354 ]
[ 1  −0.707107 ]
[ 1  −0.615661 ]
[ 1   0.052336 ]
[ 1   0.309017 ]
[ 1   0.438371 ]

and

b = [ 0.21220, 0.21958, 0.24741, 0.45071, 0.52883, 0.56820 ]ᵀ.

On solving we get

(x, y)ᵀ = (0.43478, 0.30435)ᵀ,

so p = 1/x = 2.3000 and e = p·y = 0.70001.
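This computation is easy to reproduce; the sketch below solves the normal equations with the A and b given above and recovers p and e.

```python
import numpy as np

# Reproducing the orbit fit: solve A^T A (x, y)^T = A^T b, then p = 1/x, e = p*y.
A = np.array([[1, -0.731354],
              [1, -0.707107],
              [1, -0.615661],
              [1,  0.052336],
              [1,  0.309017],
              [1,  0.438371]])
b = np.array([0.21220, 0.21958, 0.24741, 0.45071, 0.52883, 0.56820])
x, y = np.linalg.solve(A.T @ A, A.T @ b)
p, e = 1 / x, (1 / x) * y
print(round(x, 5), round(y, 5))   # ≈ 0.43478, 0.30435
print(round(p, 4), round(e, 5))   # ≈ 2.3000, 0.70001
```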
https://en.wikipedia.org/wiki/Ordinary_least_squares
Inmathematics, thecomposition operator∘{\displaystyle \circ }takes twofunctions,f{\displaystyle f}andg{\displaystyle g}, and returns a new functionh(x):=(g∘f)(x)=g(f(x)){\displaystyle h(x):=(g\circ f)(x)=g(f(x))}. Thus, the functiongisappliedafter applyingftox.(g∘f){\displaystyle (g\circ f)}is pronounced "the composition ofgandf".[1] Reverse composition, sometimes denotedf↦g{\displaystyle f\mapsto g}, applies the operation in the opposite order, applyingf{\displaystyle f}first andg{\displaystyle g}second. Intuitively, reverse composition is a chaining process in which the output of functionffeeds the input of functiong. The composition of functions is a special case of thecomposition of relations, sometimes also denoted by∘{\displaystyle \circ }. As a result, all properties of composition of relations are true of composition of functions,[2]such asassociativity. The composition of functions is alwaysassociative—a property inherited from thecomposition of relations.[2]That is, iff,g, andhare composable, thenf∘ (g∘h) = (f∘g) ∘h.[3]Since the parentheses do not change the result, they are generally omitted. In a strict sense, the compositiong∘fis only meaningful if the codomain offequals the domain ofg; in a wider sense, it is sufficient that the former be an impropersubsetof the latter.[nb 1]Moreover, it is often convenient to tacitly restrict the domain off, such thatfproduces only values in the domain ofg. For example, the compositiong∘fof the functionsf:R→(−∞,+9]defined byf(x) = 9 −x2andg:[0,+∞)→Rdefined byg(x)=x{\displaystyle g(x)={\sqrt {x}}}can be defined on theinterval[−3,+3]. The functionsgandfare said tocommutewith each other ifg∘f=f∘g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example,|x| + 3 = |x+ 3|only whenx≥ 0. The picture shows another example. The composition ofone-to-one(injective) functions is always one-to-one. Similarly, the composition ofonto(surjective) functions is always onto. It follows that the composition of twobijectionsis also a bijection. Theinverse functionof a composition (assumed invertible) has the property that(f∘g)−1=g−1∘f−1.[4] Derivativesof compositions involving differentiable functions can be found using thechain rule.Higher derivativesof such functions are given byFaà di Bruno's formula.[3] Composition of functions is sometimes described as a kind ofmultiplicationon a function space, but has very different properties frompointwisemultiplication of functions (e.g. composition is notcommutative).[5] Suppose one has two (or more) functionsf:X→X,g:X→Xhaving the same domain and codomain; these are often calledtransformations. Then one can form chains of transformations composed together, such asf∘f∘g∘f. Such chains have thealgebraic structureof amonoid, called atransformation monoidor (much more seldom) acomposition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is thede Rham curve. The set ofallfunctionsf:X→Xis called thefull transformation semigroup[6]orsymmetric semigroup[7]onX. (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.[8]) If the given transformations arebijective(and thus invertible), then the set of all possible combinations of these functions forms atransformation group(also known as apermutation group); and one says that the group isgeneratedby these functions. 
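In a programming language, the composition operator is a one-liner; the sketch below uses the f and g from the example above (f(x) = 9 − x², g(x) = √x), with g∘f defined on [−3, 3].

```python
def compose(g, f):
    """Return g ∘ f, the function x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: 9 - x ** 2      # f : R -> (-inf, 9]
g = lambda x: x ** 0.5        # g : [0, inf) -> R
h = compose(g, f)             # g ∘ f, defined on [-3, 3]
print(h(2))                   # g(f(2)) = sqrt(5) ≈ 2.2360
# Associativity: compose(f, compose(g, h)) and compose(compose(f, g), h)
# agree pointwise, so the parentheses are generally omitted.
```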
The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism).[9]

In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup.[10]

If Y ⊆ X, then f: X → Y may compose with itself; this is sometimes denoted as f². That is:

(f ∘ f)(x) = f(f(x)) = f²(x).

More generally, for any natural number n ≥ 2, the n-th functional power can be defined inductively by fⁿ = f ∘ fⁿ⁻¹ = fⁿ⁻¹ ∘ f, a notation introduced by Hans Heinrich Bürmann[11][12] and John Frederick William Herschel.[13][11][14][12] Repeated composition of such a function with itself is called function iteration.

Note: If f takes its values in a ring (in particular for real or complex-valued f), there is a risk of confusion, as fⁿ could also stand for the n-fold product of f, e.g. f²(x) = f(x) · f(x).[12] For trigonometric functions, usually the latter is meant, at least for positive exponents.[12] For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: sin²(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan⁻¹ = arctan ≠ 1/tan.

In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f^(1/2). More generally, when gⁿ = f has a unique solution for some natural number n > 0, then f^(m/n) can be defined as gᵐ. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems.

To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘ⁿ(x) for the n-th iterate of the function f(x), as in, for example, f∘³(x) meaning f(f(f(x))). For the same purpose, f^[n](x) was used by Benjamin Peirce[15][12] whereas Alfred Pringsheim and Jules Molk suggested ⁿf(x) instead.[16][12][nb 2]

Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f.[17]

During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)).[18] This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence.

Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this,[19] thereby disambiguating the order of composition.
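The inductive definition of the n-th functional power translates directly into code; a small sketch:

```python
def iterate(f, n):
    """n-th functional power f^n = f ∘ f^(n-1), for n >= 1 (a sketch)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x
print(iterate(double, 3)(5))   # f^3(5) = f(f(f(5))) = 40
```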
To distinguish the left composition operator from a text semicolon, in theZ notationthe ⨾ character is used for leftrelation composition.[20]Since all functions arebinary relations, it is correct to use the [fat] semicolon for function composition as well (see the article oncomposition of relationsfor further details on this notation). Given a functiong, thecomposition operatorCgis defined as thatoperatorwhich maps functions to functions asCgf=f∘g.{\displaystyle C_{g}f=f\circ g.}Composition operators are studied in the field ofoperator theory. Function composition appears in one form or another in numerousprogramming languages. Partial composition is possible formultivariate functions. The function resulting when some argumentxiof the functionfis replaced by the functiongis called a composition offandgin some computer engineering contexts, and is denotedf|xi=gf|xi=g=f(x1,…,xi−1,g(x1,x2,…,xn),xi+1,…,xn).{\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} Whengis a simple constantb, composition degenerates into a (partial) valuation, whose result is also known asrestrictionorco-factor.[21] f|xi=b=f(x1,…,xi−1,b,xi+1,…,xn).{\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition ofprimitive recursive function. Givenf, an-ary function, andnm-ary functionsg1, ...,gn, the composition offwithg1, ...,gn, is them-ary functionh(x1,…,xm)=f(g1(x1,…,xm),…,gn(x1,…,xm)).{\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called thegeneralized compositeorsuperpositionoffwithg1, ...,gn.[22]The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosenprojection functions. Hereg1, ...,gncan be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.[23] A set of finitaryoperationson some base setXis called acloneif it contains all projections and is closed under generalized composition. A clone generally contains operations of variousarities.[22]The notion of commutation also finds an interesting generalization in the multivariate case; a functionfof aritynis said to commute with a functiongof aritymiffis ahomomorphismpreservingg, and vice versa, that is:[22]f(g(a11,…,a1m),…,g(an1,…,anm))=g(f(a11,…,an1),…,f(a1m,…,anm)).{\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is calledmedial or entropic.[22] Compositioncan be generalized to arbitrarybinary relations. IfR⊆X×YandS⊆Y×Zare two binary relations, then their composition amounts to R∘S={(x,z)∈X×Z:(∃y∈Y)((x,y)∈R∧(y,z)∈S)}{\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}}. Considering a function as a special case of a binary relation (namelyfunctional relations), function composition satisfies the definition for relation composition. A small circleR∘Shas been used for theinfix notation of composition of relations, as well as functions. 
When used to represent composition of functions(g∘f)(x)=g(f(x)){\displaystyle (g\circ f)(x)\ =\ g(f(x))}however, the text sequence is reversed to illustrate the different operation sequences accordingly. The composition is defined in the same way forpartial functionsand Cayley's theorem has its analogue called theWagner–Preston theorem.[24] Thecategory of setswith functions asmorphismsis the prototypicalcategory. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition.[25]The structures given by composition are axiomatized and generalized incategory theorywith the concept ofmorphismas the category-theoretical replacement of functions. The reversed order of composition in the formula(f∘g)−1= (g−1∘f−1)applies forcomposition of relationsusingconverse relations, and thus ingroup theory. These structures formdagger categories. The standard "foundation" for mathematics starts withsets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms(like functions)form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. -Saunders Mac Lane,Mathematics: Form and Function[26] The composition symbol∘is encoded asU+2218∘RING OPERATOR(&compfn;, &SmallCircle;); see theDegree symbolarticle for similar-appearing Unicode characters. InTeX, it is written\circ.
https://en.wikipedia.org/wiki/Composition_of_functions#Convexity
In mathematical analysis, the maximum and minimum[a] of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum,[b] they may be defined either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema) of a function.[1][2][3] Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.

As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. In statistics, the corresponding concept is the sample maximum and minimum.

A real-valued function f defined on a domain X has a global (or absolute) maximum point at x∗ if f(x∗) ≥ f(x) for all x in X. Similarly, the function has a global (or absolute) minimum point at x∗ if f(x∗) ≤ f(x) for all x in X. The value of the function at a maximum point is called the maximum value of the function, denoted max(f(x)), and the value of the function at a minimum point is called the minimum value of the function (denoted min(f(x)) for clarity). Symbolically, this can be written as follows:

x∗ ∈ X is a global maximum point of f: X → ℝ if (∀x ∈ X) f(x∗) ≥ f(x).

The definition of global minimum point also proceeds similarly.

If the domain X is a metric space, then f is said to have a local (or relative) maximum point at the point x∗ if there exists some ε > 0 such that f(x∗) ≥ f(x) for all x in X within distance ε of x∗. Similarly, the function has a local minimum point at x∗ if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows:

x∗ ∈ X is a local maximum point of f if (∃ε > 0) such that (∀x ∈ X with d(x, x∗) < ε) f(x∗) ≥ f(x).

The definition of local minimum point can also proceed similarly.

In both the global and local cases, the concept of a strict extremum can be defined. For example, x∗ is a strict global maximum point if for all x in X with x ≠ x∗, we have f(x∗) > f(x), and x∗ is a strict local maximum point if there exists some ε > 0 such that, for all x in X within distance ε of x∗ with x ≠ x∗, we have f(x∗) > f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.

A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and bounded interval of real numbers.

Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one. For differentiable functions, Fermat's theorem states that local extrema in the interior of a domain must occur at critical points (or points where the derivative equals zero).[4] However, not all critical points are extrema.
One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability.[5]

For any function that is defined piecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least).

For a practical example,[6] assume a situation where someone has 200 feet of fencing and is trying to maximize the square footage of a rectangular enclosure, where x is the length, y is the width, and xy is the area:

2x + 2y = 200,
y = 100 − x,
xy = x(100 − x).

The derivative with respect to x is:

d/dx [x(100 − x)] = 100 − 2x.

Setting this equal to 0 reveals that x = 50 is our only critical point. Now retrieve the endpoints by determining the interval to which x is restricted. Since width is positive, then x > 0, and since x = 100 − y, that implies that x < 100. Plug in the critical point 50, as well as the endpoints 0 and 100, into xy = x(100 − x), and the results are 2500, 0, and 0 respectively. Therefore, the greatest area attainable with a rectangle of 200 feet of fencing is 50 × 50 = 2500.[6]

For functions of more than one variable, similar conditions apply. For example, the necessary conditions for a local maximum are similar to those of a function with only one variable. The first partial derivatives as to z (the variable to be maximized) are zero at the maximum. The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function z must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum.

In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by contradiction). In two and more dimensions, this argument fails. This is illustrated by the function

f(x, y) = x² + y²(1 − x)³,

whose only critical point is at (0,0), which is a local minimum with f(0,0) = 0. However, it cannot be a global one, because f(2,3) = −5.

If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of a functional), then the extremum is found using the calculus of variations.

Maxima and minima can also be defined for sets. In general, if an ordered set S has a greatest element m, then m is a maximal element of the set, also denoted as max(S). Furthermore, if S is a subset of an ordered set T and m is the greatest element of S (with respect to the order induced by T), then m is a least upper bound of S in T. Similar results hold for the least element, minimal element and greatest lower bound.
The maximum and minimum function for sets are used indatabases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions. In the case of a generalpartial order, aleast element(i.e., one that is less than all others) should not be confused with theminimal element(nothing is lesser). Likewise, agreatest elementof apartially ordered set(poset) is anupper boundof the set which is contained within the set, whereas themaximal elementmof a posetAis an element ofAsuch that ifm≤b(for anybinA), thenm=b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable. In atotally orderedset, orchain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the termsminimumandmaximum. If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set ofnatural numbershas no maximum, though it has a minimum. If an infinite chainSis bounded, then theclosureCl(S) of the set occasionally has a minimum and a maximum, in which case they are called thegreatest lower boundand theleast upper boundof the setS, respectively.
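As a concrete check of the fencing example earlier in this article: the area A(x) = x(100 − x) has derivative 100 − 2x, which vanishes at the critical point x = 50, and comparing that point with the endpoints confirms the global maximum.

```python
# Numerical check of the fencing example: maximize A(x) = x*(100 - x).
area = lambda x: x * (100 - x)
candidates = [0, 50, 100]          # endpoints plus the critical point
best = max(candidates, key=area)
print(best, area(best))            # 50 2500 (square feet)
```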
https://en.wikipedia.org/wiki/Global_minimum
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.

In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.

Leonard J. Savage argued that, using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known.

The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is

λ(x) = C(t − x)²

for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[1]

Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function.

The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, log-cosh and SMAE losses are used when the data has many large outliers.

In statistics and decision theory, a frequently used loss function is the 0-1 loss function, written with Iverson bracket notation as

L(ŷ, y) = [ŷ ≠ y],

i.e. it evaluates to 1 when ŷ ≠ y, and 0 otherwise.
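The two losses just defined are one-liners; a hedged sketch:

```python
def quadratic_loss(x, t, C=1.0):
    """Squared-error loss C*(t - x)^2; the scale C can be set to 1."""
    return C * (t - x) ** 2

def zero_one_loss(y_hat, y):
    """0-1 loss via the Iverson bracket: 1 if y_hat != y, else 0."""
    return int(y_hat != y)
```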
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — the problem that Ragnar Frisch highlighted in his Nobel Prize lecture.[4] The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6] In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers.[7][8] Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities[9] and the European subsidies for equalizing unemployment rates among 271 German regions.[10]

In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms.

We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, P_θ, of the observed data, X. This is also referred to as the risk function[11][12][13][14] of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by:

R(θ, δ) = E_θ[ L(θ, δ(X)) ] = ∫_X L(θ, δ(x)) dP_θ(x).

Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E_θ is the expectation over all population values of X, dP_θ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X.

In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ:

∫_Θ ∫_X L(θ, δ(x)) dP_θ(x) dπ*(θ) = ∫_X [ ∫_Θ L(θ, δ(x)) π*(θ | x) dθ ] m(x) dx,

where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π*(θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a*, is known as the Bayes (decision) Rule - it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ.

In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth.
Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. Adecision rulemakes a choice using an optimality criterion. Some commonly used criteria are: Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15] A common example involves estimating "location". Under typical statistical assumptions, themeanor average is the statistic for estimating location that minimizes the expected loss experienced under thesquared-errorloss function, while themedianis the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent isrisk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. Forrisk-averseorrisk-lovingagents, loss is measured as the negative of autility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for examplemortalityormorbidityin the field ofpublic healthorsafety engineering. For mostoptimization algorithms, it is desirable to have a loss function that is globallycontinuousanddifferentiable. Two very commonly used loss functions are thesquared loss,L(a)=a2{\displaystyle L(a)=a^{2}}, and theabsolute loss,L(a)=|a|{\displaystyle L(a)=|a|}. However the absolute loss has the disadvantage that it is not differentiable ata=0{\displaystyle a=0}. The squared loss has the disadvantage that it has the tendency to be dominated byoutliers—when summing over a set ofa{\displaystyle a}'s (as in∑i=1nL(ai){\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly largea-values, rather than an expression of the averagea-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[16]Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case ofi.i.d.observations, the principle of complete information, and some others. W. Edwards DemingandNassim Nicholas Talebargue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. 
These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases.[17]
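The claim earlier in this article that the mean minimizes expected squared-error loss while the median minimizes expected absolute-difference loss can be checked empirically; the data values below are illustrative.

```python
import numpy as np

# Empirical risk over a grid of candidate location estimates.
data = np.array([1.0, 2.0, 2.5, 3.0, 10.0])
grid = np.linspace(0, 12, 24001)
sq_risk = ((data[None, :] - grid[:, None]) ** 2).mean(axis=1)
abs_risk = np.abs(data[None, :] - grid[:, None]).mean(axis=1)
print(grid[sq_risk.argmin()], data.mean())        # both 3.7 (the mean)
print(grid[abs_risk.argmin()], np.median(data))   # both 2.5 (the median)
```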
https://en.wikipedia.org/wiki/Loss_function#Mean_squared_error
In number theory, the Fermat quotient of an integer a with respect to an odd prime p is defined as[1][2][3][4]

q_p(a) = (a^(p−1) − 1)/p,

or

δ_p(a) = (a − a^p)/p.

This article is about the former; for the latter see p-derivation. The quotient is named after Pierre de Fermat.

If the base a is coprime to the exponent p, then Fermat's little theorem says that q_p(a) will be an integer. If the base a is also a generator of the multiplicative group of integers modulo p, then q_p(a) will be a cyclic number, and p will be a full reptend prime.

From the definition, it is obvious that q_p(1) = 0.

In 1850, Gotthold Eisenstein proved that if a and b are both coprime to p, then:[5]

q_p(ab) ≡ q_p(a) + q_p(b) (mod p),
q_p(a^r) ≡ r·q_p(a) (mod p),
q_p(p − 1) ≡ 1 (mod p),
q_p(p + 1) ≡ −1 (mod p).

Eisenstein likened the first two of these congruences to properties of logarithms. These properties imply, for example, that q_p(−a) ≡ q_p(a) (mod p), since q_p(−1) = 0 for odd p.

In 1895, Dmitry Mirimanoff pointed out that an iteration of Eisenstein's rules gives the corollary:[6]

q_p(a + np) ≡ q_p(a) − n·a^(−1) (mod p).

From this, it follows that:[7]

q_p(a + np²) ≡ q_p(a) (mod p).

M. Lerch proved in 1905 that the Fermat quotients sum to the Wilson quotient modulo p:[8][9][10]

∑_{a=1}^{p−1} q_p(a) ≡ W_p (mod p).

Here W_p = ((p − 1)! + 1)/p is the Wilson quotient.

Eisenstein discovered that the Fermat quotient with base 2 could be expressed in terms of the sum of the reciprocals modulo p of the numbers lying in the first half of the range {1, ..., p − 1}:

q_p(2) ≡ −(1/2) ∑_{k=1}^{(p−1)/2} 1/k (mod p).

Later writers showed that the number of terms required in such a representation could be reduced from 1/2 to 1/4, 1/5, or even 1/6 of the range. Eisenstein's series also has an increasingly complex connection to the Fermat quotients with other bases.

If q_p(a) ≡ 0 (mod p) then a^(p−1) ≡ 1 (mod p²). Primes for which this is true for a = 2 are called Wieferich primes. In general they are called Wieferich primes base a. Known solutions of q_p(a) ≡ 0 (mod p) for small values of a, and the smallest such solutions for each base a = n, have been tabulated.[2] For more information, see [17][18][19] and [20].

A pair (p, r) of prime numbers such that q_p(r) ≡ 0 (mod p) and q_r(p) ≡ 0 (mod r) is called a Wieferich pair.
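The quotient and the Wieferich condition are straightforward to check by machine. A minimal Python sketch under the definitions above (the helper names fermat_quotient, wieferich_base and is_prime are illustrative, not a standard API):

```python
# Fermat quotients q_p(a) = (a^(p-1) - 1)/p, and a search for base-a
# Wieferich primes, i.e. the primes p with q_p(a) ≡ 0 (mod p).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n**0.5) + 1))

def fermat_quotient(a: int, p: int) -> int:
    return (pow(a, p - 1) - 1) // p       # an integer when gcd(a, p) = 1

def wieferich_base(a: int, bound: int):
    # q_p(a) ≡ 0 (mod p) is equivalent to a^(p-1) ≡ 1 (mod p^2).
    return [p for p in range(3, bound) if is_prime(p)
            and a % p and pow(a, p - 1, p * p) == 1]

print(fermat_quotient(2, 7))          # (2^6 - 1)/7 = 9
print(wieferich_base(2, 4000))        # [1093, 3511]: the known Wieferich primes
```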
https://en.wikipedia.org/wiki/Fermat_quotient
In commutative algebra and field theory, the Frobenius endomorphism (after Ferdinand Georg Frobenius) is a special endomorphism of commutative rings with prime characteristic p, an important class that includes finite fields. The endomorphism maps every element to its p-th power. In certain contexts it is an automorphism, but this is not true in general.

Let R be a commutative ring with prime characteristic p (an integral domain of positive characteristic always has prime characteristic, for example). The Frobenius endomorphism F is defined by

F(r) = r^p

for all r in R. It respects the multiplication of R:

F(rs) = (rs)^p = r^p s^p = F(r)F(s),

and F(1) is 1 as well. Moreover, it also respects the addition of R. The expression (r + s)^p can be expanded using the binomial theorem. Because p is prime, it divides p! but not any q! for q < p; it therefore will divide the numerator, but not the denominator, of the explicit formula of the binomial coefficients

C(p, k) = p! / (k!(p − k)!),

if 1 ≤ k ≤ p − 1. Therefore, the coefficients of all the terms except r^p and s^p are divisible by p, and hence they vanish.[1] Thus

F(r + s) = (r + s)^p = r^p + s^p = F(r) + F(s).

This shows that F is a ring homomorphism.

If φ : R → S is a homomorphism of rings of characteristic p, then

φ(x^p) = φ(x)^p.

If F_R and F_S are the Frobenius endomorphisms of R and S, then this can be rewritten as:

φ ∘ F_R = F_S ∘ φ.

This means that the Frobenius endomorphism is a natural transformation from the identity functor on the category of characteristic p rings to itself.

If the ring R is a ring with no nilpotent elements, then the Frobenius endomorphism is injective: F(r) = 0 means r^p = 0, which by definition means that r is nilpotent of order at most p. In fact, this is necessary and sufficient, because if r is any nilpotent, then one of its powers will be nilpotent of order at most p. In particular, if R is a field then the Frobenius endomorphism is injective.

The Frobenius morphism is not necessarily surjective, even when R is a field. For example, let K = F_p(t) be the field obtained from the finite field of p elements by adjoining a single transcendental element; equivalently, K is the field of rational functions with coefficients in F_p. Then the image of F does not contain t. If it did, then there would be a rational function q(t)/r(t) whose p-th power q(t)^p/r(t)^p would equal t. But the degree of this p-th power (the difference between the degrees of its numerator and denominator) is p·deg(q) − p·deg(r), which is a multiple of p. In particular, it can't be 1, which is the degree of t. This is a contradiction; so t is not in the image of F.

A field K is called perfect if either it is of characteristic zero or it is of positive characteristic and its Frobenius endomorphism is an automorphism. For example, all finite fields are perfect.

Consider the finite field F_p. By Fermat's little theorem, every element x of F_p satisfies x^p = x. Equivalently, it is a root of the polynomial X^p − X. The elements of F_p therefore determine p roots of this equation, and because this equation has degree p it has no more than p roots over any extension. In particular, if K is an algebraic extension of F_p (such as the algebraic closure or another finite field), then F_p is the fixed field of the Frobenius automorphism of K.

Let R be a ring of characteristic p > 0. If R is an integral domain, then by the same reasoning, the fixed points of Frobenius are the elements of the prime field. However, if R is not a domain, then X^p − X may have more than p roots; for example, this happens if R = F_p × F_p.
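Both displayed identities, and the statement about fixed points, can be verified numerically. In the sketch below the prime p = 7 and the model of F_9 as F_3[i]/(i² + 1) are arbitrary choices for illustration:

```python
# Numerical checks of the Frobenius identities above.
p = 7  # any prime

# Freshman's dream in Z/pZ: (r + s)^p = r^p + s^p and (rs)^p = r^p s^p.
print(all(pow(r + s, p, p) == (pow(r, p, p) + pow(s, p, p)) % p and
          pow(r * s, p, p) == (pow(r, p, p) * pow(s, p, p)) % p
          for r in range(p) for s in range(p)))            # True

# Fixed points in a larger field: model F_9 as F_3[i]/(i^2 + 1), with
# pairs (a, b) standing for a + b*i.  Frobenius x -> x^3 fixes exactly F_3.
P = 3
def mul(x, y):
    a, b = x; c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)      # uses i^2 = -1

frob = lambda x: mul(mul(x, x), x)                         # x -> x^3
elems = [(a, b) for a in range(P) for b in range(P)]
print([x for x in elems if frob(x) == x])  # [(0,0), (1,0), (2,0)]: the prime field
print(all(frob(frob(x)) == x for x in elems))              # True: F has order 2
```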
A similar property is enjoyed on the finite fieldFpn{\displaystyle \mathbf {F} _{p^{n}}}by thenth iterate of the Frobenius automorphism: Every element ofFpn{\displaystyle \mathbf {F} _{p^{n}}}is a root ofXpn−X{\displaystyle X^{p^{n}}-X}, so ifKis an algebraic extension ofFpn{\displaystyle \mathbf {F} _{p^{n}}}andFis the Frobenius automorphism ofK, then the fixed field ofFnisFpn{\displaystyle \mathbf {F} _{p^{n}}}. IfRis a domain that is anFpn{\displaystyle \mathbf {F} _{p^{n}}}-algebra, then the fixed points of thenth iterate of Frobenius are the elements of the image ofFpn{\displaystyle \mathbf {F} _{p^{n}}}. Iterating the Frobenius map gives a sequence of elements inR: This sequence of iterates is used in defining theFrobenius closureand thetight closureof an ideal. TheGalois groupof an extension of finite fields is generated by an iterate of the Frobenius automorphism. First, consider the case where the ground field is the prime fieldFp. LetFqbe the finite field ofqelements, whereq=pn. The Frobenius automorphismFofFqfixes the prime fieldFp, so it is an element of the Galois groupGal(Fq/Fp). In fact, sinceFq×{\displaystyle \mathbf {F} _{q}^{\times }}iscyclic withq− 1elements, we know that the Galois group is cyclic andFis a generator. The order ofFisnbecauseFjacts on an elementxby sending it toxpj, andxpj=x{\displaystyle x^{p^{j}}=x}can only havepj{\displaystyle p^{j}}many roots, since we are in a field. Every automorphism ofFqis a power ofF, and the generators are the powersFiwithicoprime ton. Now consider the finite fieldFqfas an extension ofFq, whereq=pnas above. Ifn> 1, then the Frobenius automorphismFofFqfdoes not fix the ground fieldFq, but itsnth iterateFndoes. The Galois groupGal(Fqf/Fq)is cyclic of orderfand is generated byFn. It is the subgroup ofGal(Fqf/Fp)generated byFn. The generators ofGal(Fqf/Fq)are the powersFniwhereiis coprime tof. The Frobenius automorphism is not a generator of theabsolute Galois group because this Galois group is isomorphic to theprofinite integers which are not cyclic. However, because the Frobenius automorphism is a generator of the Galois group of every finite extension ofFq, it is a generator of every finite quotient of the absolute Galois group. Consequently, it is a topological generator in the usual Krull topology on the absolute Galois group. There are several different ways to define the Frobenius morphism for ascheme. The most fundamental is the absolute Frobenius morphism. However, the absolute Frobenius morphism behaves poorly in the relative situation because it pays no attention to the base scheme. There are several different ways of adapting the Frobenius morphism to the relative situation, each of which is useful in certain situations. Suppose thatXis a scheme of characteristicp> 0. Choose an open affine subsetU= SpecAofX. The ringAis anFp-algebra, so it admits a Frobenius endomorphism. IfVis an open affine subset ofU, then by the naturality of Frobenius, the Frobenius morphism onU, when restricted toV, is the Frobenius morphism onV. Consequently, the Frobenius morphism glues to give an endomorphism ofX. This endomorphism is called theabsolute Frobenius morphismofX, denotedFX. By definition, it is a homeomorphism ofXwith itself. The absolute Frobenius morphism is a natural transformation from the identity functor on the category ofFp-schemes to itself. IfXis anS-scheme and the Frobenius morphism ofSis the identity, then the absolute Frobenius morphism is a morphism ofS-schemes. In general, however, it is not. 
For example, consider the ring A = F_{p²}. Let X and S both equal Spec A with the structure map X → S being the identity. The Frobenius morphism on A sends a to a^p. It is not a morphism of F_{p²}-algebras. If it were, then multiplying by an element b in F_{p²} would commute with applying the Frobenius endomorphism. But this is not true because:

b · F(a) = b a^p, whereas F(b · a) = b^p a^p.

The former is the action of b in the F_{p²}-algebra structure that A begins with, and the latter is the action of F_{p²} induced by Frobenius; they differ whenever b^p ≠ b, that is, whenever b lies outside the prime field. Consequently, the Frobenius morphism on Spec A is not a morphism of F_{p²}-schemes.

The absolute Frobenius morphism is a purely inseparable morphism of degree p. Its differential is zero. It preserves products, meaning that for any two schemes X and Y, F_{X×Y} = F_X × F_Y.

Suppose that φ : X → S is the structure morphism for an S-scheme X. The base scheme S has a Frobenius morphism F_S. Composing φ with F_S results in an S-scheme X_F called the restriction of scalars by Frobenius. The restriction of scalars is actually a functor, because an S-morphism X → Y induces an S-morphism X_F → Y_F.

For example, consider a ring A of characteristic p > 0 and a finitely presented algebra over A:

R = A[X_1, ..., X_n] / (f_1, ..., f_m).

The action of A on R is given by:

c · ∑ a_α X^α = ∑ (c a_α) X^α,

where α is a multi-index. Let X = Spec R. Then X_F is the affine scheme Spec R, but its structure morphism Spec R → Spec A, and hence the action of A on R, is different:

c · ∑ a_α X^α = ∑ (c^p a_α) X^α.

Because restriction of scalars by Frobenius is simply composition, many properties of X are inherited by X_F under appropriate hypotheses on the Frobenius morphism. For example, if X and S_F are both finite type, then so is X_F.

The extension of scalars by Frobenius is defined to be:

X^(p) = X ×_S S_F.

The projection onto the S factor makes X^(p) an S-scheme. If S is not clear from the context, then X^(p) is denoted by X^(p/S). Like restriction of scalars, extension of scalars is a functor: An S-morphism X → Y determines an S-morphism X^(p) → Y^(p).

As before, consider a ring A and a finitely presented algebra R over A, and again let X = Spec R. Then:

X^(p) = Spec (R ⊗_A A_F),

where A_F denotes A regarded as an A-algebra through the Frobenius endomorphism. A global section of X^(p) is of the form:

∑_i (∑_α a_{iα} X^α) ⊗ b_i,

where α is a multi-index and every a_{iα} and b_i is an element of A. The action of an element c of A on this section is:

c · ∑_i (∑_α a_{iα} X^α) ⊗ b_i = ∑_i (∑_α a_{iα} X^α) ⊗ (b_i c).

Consequently, X^(p) is isomorphic to:

Spec A[X_1, ..., X_n] / (f_1^(p), ..., f_m^(p)),

where, if:

f = ∑_α a_α X^α,

then:

f^(p) = ∑_α a_α^p X^α.

A similar description holds for arbitrary A-algebras R.

Because extension of scalars is base change, it preserves limits and coproducts. This implies in particular that if X has an algebraic structure defined in terms of finite limits (such as being a group scheme), then so does X^(p). Furthermore, being a base change means that extension of scalars preserves properties such as being of finite type, finite presentation, separated, affine, and so on.

Extension of scalars is well-behaved with respect to base change: Given a morphism S′ → S, there is a natural isomorphism:

X^(p/S) ×_S S′ ≅ (X ×_S S′)^(p/S′).

Let X be an S-scheme with structure morphism φ. The relative Frobenius morphism of X is the morphism:

F_{X/S} : X → X^(p),

defined, via the universal property of the pullback X^(p), by the pair of morphisms F_X : X → X and φ : X → S. Because the absolute Frobenius morphism is natural, the relative Frobenius morphism is a morphism of S-schemes.

Consider, for example, the A-algebra:

R = A[X_1, ..., X_n] / (f_1, ..., f_m).

We have:

R^(p) = A[X_1, ..., X_n] / (f_1^(p), ..., f_m^(p)).

The relative Frobenius morphism is the homomorphism R^(p) → R defined by:

∑_α a_α X^α ↦ ∑_α a_α X^{pα}.

Relative Frobenius is compatible with base change in the sense that, under the natural isomorphism of X^(p/S) ×_S S′ and (X ×_S S′)^(p/S′), we have:

F_{X/S} × 1_{S′} = F_{(X ×_S S′)/S′}.

Relative Frobenius is a universal homeomorphism. If X → S is an open immersion, then it is the identity.
IfX→Sis a closed immersion determined by an ideal sheafIofOS, thenX(p)is determined by the ideal sheafIpand relative Frobenius is the augmentation mapOS/Ip→OS/I. Xis unramified overSif and only ifFX/Sis unramified and if and only ifFX/Sis a monomorphism.Xis étale overSif and only ifFX/Sis étale and if and only ifFX/Sis an isomorphism. Thearithmetic Frobenius morphismof anS-schemeXis a morphism: defined by: That is, it is the base change ofFSby 1X. Again, if: then the arithmetic Frobenius is the homomorphism: If we rewriteR(p)as: then this homomorphism is: Assume that the absolute Frobenius morphism ofSis invertible with inverseFS−1{\displaystyle F_{S}^{-1}}. LetSF−1{\displaystyle S_{F^{-1}}}denote theS-schemeFS−1:S→S{\displaystyle F_{S}^{-1}:S\to S}. Then there is an extension of scalars ofXbyFS−1{\displaystyle F_{S}^{-1}}: If: then extending scalars byFS−1{\displaystyle F_{S}^{-1}}gives: If: then we write: and then there is an isomorphism: Thegeometric Frobenius morphismof anS-schemeXis a morphism: defined by: It is the base change ofFS−1{\displaystyle F_{S}^{-1}}by1X. Continuing our example ofAandRabove, geometric Frobenius is defined to be: After rewritingR(1/p)in terms of{fj(1/p)}{\displaystyle \{f_{j}^{(1/p)}\}}, geometric Frobenius is: Suppose that the Frobenius morphism ofSis an isomorphism. Then it generates a subgroup of the automorphism group ofS. IfS= Speckis the spectrum of a finite field, then its automorphism group is the Galois group of the field over the prime field, and the Frobenius morphism and its inverse are both generators of the automorphism group. In addition,X(p)andX(1/p)may be identified withX. The arithmetic and geometric Frobenius morphisms are then endomorphisms ofX, and so they lead to an action of the Galois group ofkonX. Consider the set ofK-pointsX(K). This set comes with a Galois action: Each such pointxcorresponds to a homomorphismOX→Kfrom the structure sheaf toK, which factors viak(x), the residue field atx, and the action of Frobenius onxis the application of the Frobenius morphism to the residue field. This Galois action agrees with the action of arithmetic Frobenius: The composite morphism is the same as the composite morphism: by the definition of the arithmetic Frobenius. Consequently, arithmetic Frobenius explicitly exhibits the action of the Galois group on points as an endomorphism ofX. Given anunramifiedfinite extensionL/Koflocal fields, there is a concept ofFrobenius endomorphismthat induces the Frobenius endomorphism in the corresponding extension ofresidue fields.[2] SupposeL/Kis an unramified extension of local fields, withring of integersOKofKsuch that the residue field, the integers ofKmodulo their unique maximal idealφ, is a finite field of orderq, whereqis a power of a prime. IfΦis a prime ofLlying overφ, thatL/Kis unramified means by definition that the integers ofLmoduloΦ, the residue field ofL, will be a finite field of orderqfextending the residue field ofKwherefis the degree ofL/K. We may define the Frobenius map for elements of the ring of integersOLofLas an automorphismsΦofLsuch that Inalgebraic number theory,Frobenius elementsare defined for extensionsL/Kofglobal fieldsthat are finiteGalois extensionsforprime idealsΦofLthat are unramified inL/K. Since the extension is unramified thedecomposition groupofΦis the Galois group of the extension of residue fields. The Frobenius element then can be defined for elements of the ring of integers ofLas in the local case, by whereqis the order of the residue fieldOK/(Φ ∩OK). 
Lifts of the Frobenius are in correspondence with p-derivations.

The polynomial has discriminant coprime to 3 and so is unramified at the prime 3; it is also irreducible mod 3. Hence adjoining a root ρ of it to the field of 3-adic numbers Q3 gives an unramified extension Q3(ρ) of Q3. We may find the image of ρ under the Frobenius map by locating the root nearest to ρ³, which we may do by Newton's method. We obtain an element of the ring of integers Z3[ρ] in this way; this is a polynomial of degree four in ρ with coefficients in the 3-adic integers Z3, which can be computed explicitly modulo 3⁸. (A sketch of such a computation, for a simpler quadratic polynomial, is given below.) This is algebraic over Q and is the correct global Frobenius image in terms of the embedding of Q into Q3; moreover, the coefficients are algebraic and the result can be expressed algebraically. However, they are of degree 120, the order of the Galois group, illustrating the fact that explicit computations are much more easily accomplished if p-adic results will suffice.

If L/K is an abelian extension of global fields, we get a much stronger congruence, since it depends only on the prime φ in the base field K. For an example, consider the extension Q(β) of Q obtained by adjoining a root β satisfying

β⁵ + β⁴ − 4β³ − 3β² + 3β + 1 = 0

to Q. This extension is cyclic of order five, with roots 2cos(2πn/11) for integer n. It has roots that are Chebyshev polynomials of β:

β² − 2, β³ − 3β, β⁵ − 5β³ + 5β

give the result of the Frobenius map for the primes 2, 3 and 5, and so on for larger primes not equal to 11 or of the form 22n + 1 (which split). It is immediately apparent how the Frobenius map gives a result equal mod p to the p-th power of the root β.
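The p-adic Newton computation sketched above fits in a few lines. The article's degree-four polynomial is not reproduced here, so the following sketch substitutes the simpler f(x) = x² − x − 1, which is likewise irreducible mod 3 with discriminant (5) prime to 3, and works to the same precision 3⁸; all names are illustrative:

```python
# Frobenius lift in the unramified extension Z_3[ρ]/(ρ^2 - ρ - 1), to
# precision 3^8.  Elements are pairs (a, b) meaning a + b*ρ, with ρ^2 = ρ + 1.
M = 3**8                         # working precision, as in the article

def mul(x, y):
    a, b = x; c, d = y
    return ((a*c + b*d) % M, (a*d + b*c + b*d) % M)

def inv(x):
    a, b = x                     # conjugate of a + bρ is (a+b) - bρ, and
    norm = (a*a + a*b - b*b) % M # N(a + bρ) = a^2 + ab - b^2 is a unit here
    n_inv = pow(norm, -1, M)
    return ((a + b) * n_inv % M, (-b) * n_inv % M)

def f(x):                        # f(x) = x^2 - x - 1
    a, b = mul(x, x)
    return ((a - x[0] - 1) % M, (b - x[1]) % M)

def fprime(x):                   # f'(x) = 2x - 1
    return ((2*x[0] - 1) % M, (2*x[1]) % M)

rho = (0, 1)
x = mul(mul(rho, rho), rho)      # start Newton at ρ^3, a root of f modulo 3
for _ in range(3):               # precision doubles: mod 3^2, 3^4, then 3^8
    corr = mul(f(x), inv(fprime(x)))
    x = ((x[0] - corr[0]) % M, (x[1] - corr[1]) % M)

print(x)                         # (1, 6560), i.e. 1 - ρ: the conjugate root,
                                 # which is the Frobenius image of ρ
```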
https://en.wikipedia.org/wiki/Frobenius_endomorphism
In mathematics, more specifically differential algebra, a p-derivation (for p a prime number) on a ring R is a mapping from R to R that satisfies certain conditions outlined directly below. The notion of a p-derivation is related to that of a derivation in differential algebra.

Let p be a prime number. A p-derivation or Buium derivative on a ring R is a map δ : R → R that satisfies the following "product rule":

δ(xy) = x^p δ(y) + y^p δ(x) + p δ(x) δ(y),

and "sum rule":

δ(x + y) = δ(x) + δ(y) + (x^p + y^p − (x + y)^p)/p,

as well as

δ(1) = 0.

Note that in the "sum rule" we are not really dividing by p, since all the relevant binomial coefficients in the numerator are divisible by p, so this definition applies in the case when R has p-torsion.

A map σ : R → R is a lift of the Frobenius endomorphism provided σ(x) = x^p (mod pR). An example of such a lift could come from the Artin map.

If (R, δ) is a ring with a p-derivation, then the map σ(x) := x^p + pδ(x) defines a ring endomorphism which is a lift of the Frobenius endomorphism. When the ring R is p-torsion free the correspondence is a bijection.

For example, on R = Z the map δ(x) = (x − x^p)/p defines a p-derivation; the quotient is well-defined because of Fermat's little theorem.
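For R = Z, the axioms above can be checked numerically. A minimal sketch with the arbitrary choice p = 5:

```python
# Verify the p-derivation axioms for the standard example δ(x) = (x - x^p)/p
# on the integers.
p = 5

def delta(x: int) -> int:
    return (x - x**p) // p          # exact: x ≡ x^p (mod p) by Fermat

def product_rule(x, y):
    return delta(x*y) == x**p * delta(y) + y**p * delta(x) + p * delta(x) * delta(y)

def sum_rule(x, y):
    return delta(x + y) == delta(x) + delta(y) + (x**p + y**p - (x + y)**p) // p

print(all(product_rule(x, y) and sum_rule(x, y)
          for x in range(-6, 7) for y in range(-6, 7)))    # True
print(delta(1) == 0)                                       # True
```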
https://en.wikipedia.org/wiki/P-derivation
A repeating decimal or recurring decimal is a decimal representation of a number whose digits are eventually periodic (that is, after some place, the same sequence of digits is repeated forever); if this sequence consists only of zeros (that is, if there is only a finite number of nonzero digits), the decimal is said to be terminating, and is not considered as repeating. It can be shown that a number is rational if and only if its decimal representation is repeating or terminating. For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is 3227/555, whose decimal becomes periodic at the second digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... Another example of this is 593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830....

The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros.[1] Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. 1.585 = 1585/1000); it may also be written as a ratio of the form k/(2^n·5^m) (e.g. 1.585 = 317/(2³·5²)). However, every number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit "9". This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are 1.000... = 0.999... and 1.585000... = 1.584999.... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.[2])

Any number that cannot be expressed as a ratio of two integers is said to be irrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition (see § Every rational number is either a terminating or repeating decimal). Examples of such irrational numbers are √2 and π.[3]

There are several notational conventions for representing repeating decimals. None of them are accepted universally.

In English, there are various ways to read repeating decimals aloud. For example, 1.234 (with repetend "34", i.e. 1.2343434...) may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, 11.1886792452830 (with the 13-digit repetend "1886792452830") may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", "eleven point repeated one double eight six seven nine two four five two eight three zero", "eleven point recurring one double eight six seven nine two four five two eight three zero", "eleven point repetend one double eight six seven nine two four five two eight three zero" or "eleven point into infinity one double eight six seven nine two four five two eight three zero".

In order to convert a rational number represented as a fraction into decimal form, one may use long division. For example, consider the rational number 5/74. At each step of the long division we have a remainder; the successive remainders are 56, 42, 50.
When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 0.0675675675....

For any integer fraction A/B, the remainder at step k, for any positive integer k, is A × 10^k (modulo B). For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.

If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called the "repetend", which has a certain length greater than 0, also called the "period".[5]

In base 10, a fraction has a repeating decimal if and only if, in lowest terms, its denominator has any prime factors besides 2 or 5, or in other words, cannot be expressed as 2^m 5^n, where m and n are non-negative integers.

Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. In the example above, α = 5.8144144144... satisfies the equation 10000α − 10α = 58144.144144... − 58.144144..., that is, 9990α = 58086, so that α = 58086/9990 = 3227/555. The process of how to find these integer coefficients is described below.

Given a repeating decimal x=a.bc¯{\displaystyle x=a.b{\overline {c}}} where a{\displaystyle a}, b{\displaystyle b}, and c{\displaystyle c} are groups of digits, let n=⌈log10⁡b⌉{\displaystyle n=\lceil {\log _{10}b}\rceil }, the number of digits of b{\displaystyle b}. Multiplying by 10n{\displaystyle 10^{n}} separates the repeating and terminating groups:

10nx=ab.c¯.{\displaystyle 10^{n}x=ab.{\bar {c}}.}

If the decimals terminate (c=0{\displaystyle c=0}), the proof is complete.[6] For c≠0{\displaystyle c\neq 0} with k∈N{\displaystyle k\in \mathbb {N} } digits, let x=y.c¯{\displaystyle x=y.{\bar {c}}} where y∈Z{\displaystyle y\in \mathbb {Z} } is a terminating group of digits. Then,

c=d1d2...dk{\displaystyle c=d_{1}d_{2}\,...d_{k}}

where di{\displaystyle d_{i}} denotes the i-th digit, and

x=y+∑n=1∞c(10k)n=y+(c∑n=0∞1(10k)n)−c.{\displaystyle x=y+\sum _{n=1}^{\infty }{\frac {c}{{(10^{k})}^{n}}}=y+\left(c\sum _{n=0}^{\infty }{\frac {1}{{(10^{k})}^{n}}}\right)-c.}

Since ∑n=0∞1(10k)n=11−10−k{\displaystyle \textstyle \sum _{n=0}^{\infty }{\frac {1}{{(10^{k})}^{n}}}={\frac {1}{1-10^{-k}}}},[7]

x=y−c+10kc10k−1.{\displaystyle x=y-c+{\frac {10^{k}c}{10^{k}-1}}.}

Since x{\displaystyle x} is the sum of an integer (y−c{\displaystyle y-c}) and a rational number (10kc10k−1{\textstyle {\frac {10^{k}c}{10^{k}-1}}}), x{\displaystyle x} is also rational.[8]

Thereby the fraction is the unit fraction 1/n and ℓ10 is the length of the (decimal) repetend. The lengths ℓ10(n) of the decimal repetends of 1/n, n = 1, 2, 3, ..., are:

For comparison, the lengths ℓ2(n) of the binary repetends of the fractions 1/n, n = 1, 2, 3, ..., are:

The decimal repetends of 1/n, n = 1, 2, 3, ..., are:

The decimal repetend lengths of 1/p, p = 2, 3, 5, ...
(nth prime), are: The least primespfor which⁠1/p⁠has decimal repetend lengthn,n= 1, 2, 3, ..., are: The least primespfor which⁠k/p⁠hasndifferent cycles (1 ≤k≤p−1),n= 1, 2, 3, ..., are: A fractionin lowest termswith aprimedenominator other than 2 or 5 (i.e.coprimeto 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of⁠1/p⁠is equal to theorderof 10 modulop. If 10 is aprimitive rootmodulop, then the repetend length is equal top− 1; if not, then the repetend length is a factor ofp− 1. This result can be deduced fromFermat's little theorem, which states that10p−1≡ 1 (modp). The base-10digital rootof the repetend of the reciprocal of any prime number greater than 5 is 9.[9] If the repetend length of⁠1/p⁠for primepis equal top− 1 then the repetend, expressed as an integer, is called acyclic number. Examples of fractions belonging to this group are: The list can go on to include the fractions⁠1/109⁠,⁠1/113⁠,⁠1/131⁠,⁠1/149⁠,⁠1/167⁠,⁠1/179⁠,⁠1/181⁠,⁠1/193⁠,⁠1/223⁠,⁠1/229⁠, etc. (sequenceA001913in theOEIS). Everypropermultiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation: The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of⁠1/7⁠: the sequential remainders are the cyclic sequence{1, 3, 2, 6, 4, 5}. See also the article142,857for more properties of this cyclic number. A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences innines' complementform. For example⁠1/7⁠starts '142' and is followed by '857' while⁠6/7⁠(by rotation) starts '857' followed byitsnines' complement '142'. The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142.... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known. Aproper primeis a primepwhich ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with lengthp− 1. In such primes, each digit 0, 1,..., 9 appears in the repeating sequence the same number of times as does each other digit (namely,⁠p− 1/10⁠times). They are:[10]: 166 A prime is a proper prime if and only if it is afull reptend primeandcongruentto 1 mod 10. If a primepis bothfull reptend primeandsafe prime, then⁠1/p⁠will produce a stream ofp− 1pseudo-random digits. Those primes are Some reciprocals of primes that do not generate cyclic numbers are: (sequenceA006559in theOEIS) The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc. To find the period of⁠1/p⁠, we can check whether the primepdivides some number 999...999 in which the number of digits dividesp− 1. Since the period is never greater thanp− 1, we can obtain this by calculating⁠10p−1− 1/p⁠. For example, for 11 we get and then by inspection find the repetend 09 and period of 2. Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of⁠1/13⁠can be divided into two sets, with different repetends. The first set is: where the repetend of each fraction is a cyclic re-arrangement of 076923. The second set is: where the repetend of each fraction is a cyclic re-arrangement of 153846. 
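The repetend lengths just listed are multiplicative orders and are easy to compute directly. A minimal Python sketch (repetend_length is an illustrative name, not a library function):

```python
def repetend_length(n: int) -> int:
    """Order of 10 modulo n = repetend length of 1/n, for gcd(n, 10) = 1."""
    assert n > 1 and n % 2 != 0 and n % 5 != 0
    k, power = 1, 10 % n
    while power != 1:
        power = power * 10 % n
        k += 1
    return k

print(repetend_length(3))      # 1: 0.333...
print(repetend_length(7))      # 6: 0.142857... (7 is a full reptend prime)
print(repetend_length(11))     # 2: 0.090909...
print(repetend_length(13))     # 6: two cycles, 076923 and 153846
print(repetend_length(7*17))   # 48 = lcm(6, 16), as discussed further below
# Full reptend primes have repetend length p - 1:
print([p for p in (7, 17, 19, 23, 29, 47, 59, 61, 97)
       if repetend_length(p) == p - 1])    # all of them
```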
In general, the set of proper multiples of reciprocals of a primepconsists ofnsubsets, each with repetend lengthk, wherenk=p− 1. For an arbitrary integern, the lengthL(n) of the decimal repetend of⁠1/n⁠dividesφ(n), whereφis thetotient function. The length is equal toφ(n)if and only if 10 is aprimitive root modulon.[11] In particular, it follows thatL(p) =p− 1if and only ifpis a prime and 10 is a primitive root modulop. Then, the decimal expansions of⁠n/p⁠forn= 1, 2, ...,p− 1, all have periodp− 1 and differ only by a cyclic permutation. Such numberspare calledfull repetend primes. Ifpis a prime other than 2 or 5, the decimal representation of the fraction⁠1/p2⁠repeats: The period (repetend length)L(49) must be a factor ofλ(49) = 42, whereλ(n) is known as theCarmichael function. This follows fromCarmichael's theoremwhich states that ifnis a positive integer thenλ(n) is the smallest integermsuch that for every integerathat iscoprimeton. The period of⁠1/p2⁠is usuallypTp, whereTpis the period of⁠1/p⁠. There are three known primes for which this is not true, and for those the period of⁠1/p2⁠is the same as the period of⁠1/p⁠becausep2divides 10p−1−1. These three primes are 3, 487, and 56598313 (sequenceA045616in theOEIS).[12] Similarly, the period of⁠1/pk⁠is usuallypk–1Tp Ifpandqare primes other than 2 or 5, the decimal representation of the fraction⁠1/pq⁠repeats. An example is⁠1/119⁠: where LCM denotes theleast common multiple. The periodTof⁠1/pq⁠is a factor ofλ(pq) and it happens to be 48 in this case: The periodTof⁠1/pq⁠is LCM(Tp,Tq), whereTpis the period of⁠1/p⁠andTqis the period of⁠1/q⁠. Ifp,q,r, etc. are primes other than 2 or 5, andk,ℓ,m, etc. are positive integers, then is a repeating decimal with a period of whereTpk,Tqℓ,Trm,... are respectively the period of the repeating decimals⁠1/pk⁠,⁠1/qℓ⁠,⁠1/rm⁠,... as defined above. An integer that is not coprime to 10 but has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, but with a non-repeating sequence of digits that precede the repeating part. The reciprocal can be expressed as: whereaandbare not both zero. This fraction can also be expressed as: ifa>b, or as ifb>a, or as ifa=b. The decimal has: For example⁠1/28⁠= 0.03571428: Given a repeating decimal, it is possible to calculate the fraction that produces it. For example: Another example: The procedure below can be applied in particular if the repetend hasndigits, all of which are 0 except the final one which is 1. For instance forn= 7: So this particular repeating decimal corresponds to the fraction⁠1/10n− 1⁠, where the denominator is the number written asn9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason: or It is possible to get a general formula expressing a repeating decimal with ann-digit period (repetend length), beginning right after the decimal point, as a fraction: More explicitly, one gets the following cases: If the repeating decimal is between 0 and 1, and the repeating block isndigits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer number represented by then-digit block divided by the one represented byn9s. 
If the repeating decimal is as above, except that there are k (extra) digits 0 between the decimal point and the repeating n-digit block, then one can simply add k digits 0 after the n digits 9 of the denominator (and, as before, the fraction may subsequently be simplified). Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). An even faster method is to ignore the decimal point completely and work directly with the digit strings, as in the scheme below. It follows that any repeating decimal with period n, and k digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is (10^n − 1)10^k.

Conversely, the period of the repeating decimal of a fraction c/d will be (at most) the smallest number n such that 10^n − 1 is divisible by d. For example, the fraction 2/7 has d = 7, and the smallest n that makes 10^n − 1 divisible by 7 is n = 6, because 999999 = 7 × 142857. The period of the fraction 2/7 is therefore 6.

The following scheme compresses the above shortcut. Thereby I{\displaystyle \mathbf {I} } represents the digits of the integer part of the decimal number (to the left of the decimal point), A{\displaystyle \mathbf {A} } makes up the string of digits of the preperiod and #A{\displaystyle \#\mathbf {A} } its length, and P{\displaystyle \mathbf {P} } is the string of repeated digits (the period) with length #P{\displaystyle \#\mathbf {P} }, which is nonzero. In the generated fraction, the digit 9{\displaystyle 9} will be repeated #P{\displaystyle \#\mathbf {P} } times, and the digit 0{\displaystyle 0} will be repeated #A{\displaystyle \#\mathbf {A} } times. Note that in the absence of an integer part in the decimal, I{\displaystyle \mathbf {I} } will be represented by zero, which, being to the left of the other digits, will not affect the final result, and may be omitted in the calculation of the generating function.
Examples: 3.254444…=3.254¯={I=3A=25P=4#A=2#P=1}=3254−325900=29299000.512512…=0.512¯={I=0A=∅P=512#A=0#P=3}=512−0999=5129991.09191…=1.091¯={I=1A=0P=91#A=1#P=2}=1091−10990=10819901.333…=1.3¯={I=1A=∅P=3#A=0#P=1}=13−19=129=430.3789789…=0.3789¯={I=0A=3P=789#A=1#P=3}=3789−39990=37869990=6311665{\displaystyle {\begin{array}{lllll}3.254444\ldots &=3.25{\overline {4}}&={\begin{Bmatrix}\mathbf {I} =3&\mathbf {A} =25&\mathbf {P} =4\\&\#\mathbf {A} =2&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {3254-325}{900}}&={\dfrac {2929}{900}}\\\\0.512512\ldots &=0.{\overline {512}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =\emptyset &\mathbf {P} =512\\&\#\mathbf {A} =0&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {512-0}{999}}&={\dfrac {512}{999}}\\\\1.09191\ldots &=1.0{\overline {91}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =0&\mathbf {P} =91\\&\#\mathbf {A} =1&\#\mathbf {P} =2\end{Bmatrix}}&={\dfrac {1091-10}{990}}&={\dfrac {1081}{990}}\\\\1.333\ldots &=1.{\overline {3}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =\emptyset &\mathbf {P} =3\\&\#\mathbf {A} =0&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {13-1}{9}}&={\dfrac {12}{9}}&={\dfrac {4}{3}}\\\\0.3789789\ldots &=0.3{\overline {789}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =3&\mathbf {P} =789\\&\#\mathbf {A} =1&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {3789-3}{9990}}&={\dfrac {3786}{9990}}&={\dfrac {631}{1665}}\end{array}}} The symbol∅{\displaystyle \emptyset }in the examples above denotes the absence of digits of partA{\displaystyle \mathbf {A} }in the decimal, and therefore#A=0{\displaystyle \#\mathbf {A} =0}and a corresponding absence in the generated fraction. A repeating decimal can also be expressed as aninfinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example, The above series is ageometric serieswith the first term as⁠1/10⁠and the common factor⁠1/10⁠. Because the absolute value of the common factor is less than 1, we can say that the geometric seriesconvergesand find the exact value in the form of a fraction by using the following formula whereais the first term of the series andris the common factor. Similarly, The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which arecyclically permutedwhen multiplied by certain numbers. For example,102564 × 4 = 410256. 102564 is the repetend of⁠4/39⁠and 410256 the repetend of⁠16/39⁠. Various properties of repetend lengths (periods) are given by Mitchell[13]and Dickson.[14] For some other properties of repetends, see also.[15] Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10: For example, induodecimal,⁠1/2⁠= 0.6,⁠1/3⁠= 0.4,⁠1/4⁠= 0.3 and⁠1/6⁠= 0.2 all terminate;⁠1/5⁠= 0.2497repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2;⁠1/7⁠= 0.186A35has period 6 in duodecimal, just as it does in decimal. Ifbis an integer base andkis an integer, then For example 1/7 in duodecimal:17=(1101+5102+21103+A5104+441105+1985106+⋯)base 12{\displaystyle {\frac {1}{7}}=\left({\frac {1}{10^{\phantom {1}}}}+{\frac {5}{10^{2}}}+{\frac {21}{10^{3}}}+{\frac {A5}{10^{4}}}+{\frac {441}{10^{5}}}+{\frac {1985}{10^{6}}}+\cdots \right)_{\text{base 12}}} which is 0.186A35base12. 10base12is 12base10, 102base12is 144base10, 21base12is 25base10, A5base12is 125base10. 
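The rule behind these examples translates directly into code. A minimal sketch using Python's fractions module (the function name and string-based digit handling are illustrative choices):

```python
from fractions import Fraction

def repeating_to_fraction(i: str, a: str, p: str) -> Fraction:
    """Value of the decimal I.A(P): (IAP - IA) / (#P nines followed by #A zeros)."""
    iap = int(i + a + p)                     # digits of I, A, P concatenated
    ia = int(i + a)                          # digits of I and A concatenated
    return Fraction(iap - ia, (10**len(p) - 1) * 10**len(a))

print(repeating_to_fraction("3", "25", "4"))    # 2929/900:  3.25444...
print(repeating_to_fraction("0", "", "512"))    # 512/999:   0.512512...
print(repeating_to_fraction("1", "0", "91"))    # 1081/990:  1.09191...
print(repeating_to_fraction("1", "", "3"))      # 4/3:       1.333...
print(repeating_to_fraction("0", "3", "789"))   # 631/1665:  0.3789789...
```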
For a rational 0 < p/q < 1 (and base b ∈ N_{>1}) there is the following algorithm producing the repetend together with its length; a reconstruction in code is sketched below. Each step first calculates the next digit z and then the new remainder p′ of the division modulo the denominator q. As a consequence of the floor function, we have

z = ⌊b·p / q⌋ and p′ = b·p mod q,

thus

b·p = z·q + p′

and

0 ≤ p′ < q.

Because all these remainders p are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs. The new digit z depends only on the current remainder p, which is the only non-constant in the loop. The length L of the repetend equals the number of the remainders (see also section Every rational number is either a terminating or repeating decimal).

Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications.[16] In these applications repeating decimals to base 2 are generally used, which gives rise to binary sequences. The maximum length binary sequence for 1/p (when 2 is a primitive root of p) is given by:[17]

a(i) = (2^i mod p) mod 2.
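Since the original listing is not reproduced here, the following Python sketch reconstructs the algorithm from the description above (digit z, remainder p′, and the associative array occurs):

```python
def repetend(p: int, q: int, b: int = 10):
    """Preperiod and repetend digits of p/q in base b, for 0 < p/q < 1."""
    occurs = {}                   # remainder -> position of its first occurrence
    digits = []
    while p != 0 and p not in occurs:
        occurs[p] = len(digits)
        z = (b * p) // q          # the next digit z = floor(b*p / q)
        p = (b * p) % q           # the new remainder p' modulo the denominator q
        digits.append(z)
    if p == 0:                    # expansion terminates: period length 0
        return digits, []
    start = occurs[p]             # recurrence detected via the occurs map
    return digits[:start], digits[start:]     # preperiod, repetend

print(repetend(5, 74))    # ([0], [6, 7, 5]):       5/74 = 0.0675675...
print(repetend(1, 7))     # ([], [1, 4, 2, 8, 5, 7])
print(repetend(1, 4))     # ([2, 5], []):           terminating
print(repetend(1, 7, 2))  # ([], [0, 0, 1]): 1/7 in base 2 has period 3
```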
https://en.wikipedia.org/wiki/Recurring_decimal#Fractions_with_prime_denominators
TheRSA(Rivest–Shamir–Adleman)cryptosystemis apublic-key cryptosystem, one of the oldest widely used for secure data transmission. Theinitialism"RSA" comes from the surnames ofRon Rivest,Adi ShamirandLeonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 atGovernment Communications Headquarters(GCHQ), the Britishsignals intelligenceagency, by the English mathematicianClifford Cocks. That system wasdeclassifiedin 1997.[2] In a public-keycryptosystem, theencryption keyis public and distinct from thedecryption key, which is kept secret (private). An RSA user creates and publishes a public key based on two largeprime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decrypted by someone who knows the private key.[1] The security of RSA relies on the practical difficulty offactoringthe product of two largeprime numbers, the "factoring problem". Breaking RSA encryption is known as theRSA problem. Whether it is as difficult as the factoring problem is an open question.[3]There are no published methods to defeat the system if a large enough key is used. RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys forsymmetric-keycryptography, which are then used for bulk encryption–decryption. The idea of an asymmetric public-private key cryptosystem is attributed toWhitfield DiffieandMartin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.[4]Moreover, likeDiffie-Hellman, RSA is based onmodular exponentiation. Ron Rivest,Adi Shamir, andLeonard Adlemanat theMassachusetts Institute of Technologymade several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements.[5]In April 1977, they spentPassoverat the house of a student and drank a good deal of wine before returning to their homes at around midnight.[6]Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in same order as their paper.[7] Clifford Cocks, an Englishmathematicianworking for theBritishintelligence agencyGovernment Communications Headquarters(GCHQ), described a similar system in an internal document in 1973.[8]However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to its top-secret classification. 
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.[9][10][11][12][13]

A patent describing the RSA algorithm was granted to MIT on 20 September 1983: U.S. patent 4,405,829 "Cryptographic communications system and method". From DWPI's abstract of the patent:

The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).

A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column.[7] This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks' work been publicly known, a patent in the United States would not have been legal either.

When the patent was issued, terms of patent were 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000.[14]

The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption.

A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that for all integers m (0 ≤ m < n), both (me)d{\displaystyle (m^{e})^{d}} and m{\displaystyle m} have the same remainder when divided by n{\displaystyle n} (they are congruent modulo n{\displaystyle n}): (me)d≡m(modn).{\displaystyle (m^{e})^{d}\equiv m{\pmod {n}}.} However, when given only e and n, it is extremely difficult to find d. The integers n and e comprise the public key, d represents the private key, and m represents the message. The modular exponentiation to e and d corresponds to encryption and decryption, respectively. In addition, because the two exponents can be swapped, the private and public key can also be swapped, allowing for message signing and verification using the same algorithm.

The keys for the RSA algorithm are generated in the following way:

1. Choose two large prime numbers p and q, which are kept secret.
2. Compute n = pq; n is released as part of the public key and serves as the modulus for both the public and private keys.
3. Compute λ(n) = lcm(p − 1, q − 1), where λ is Carmichael's totient function; this value is kept secret.
4. Choose an integer e such that 1 < e < λ(n) and gcd(e, λ(n)) = 1.
5. Determine d as d ≡ e^(−1) (mod λ(n)), the modular multiplicative inverse of e modulo λ(n).

The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed.[16]

In the original RSA paper,[1] the Euler totient function φ(n) = (p − 1)(q − 1) is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using the Euler totient function results also from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d·e ≡ 1 (mod φ(n)) also satisfies d·e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)).
Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponentdat all, rather than using the optimized decryption methodbased on the Chinese remainder theoremdescribed below), but some standards such asFIPS 186-4(Section B.3.1) may require thatd<λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced moduloλ(n)to obtain a smaller equivalent exponent. Since any common factors of(p− 1)and(q− 1)are present in the factorisation ofn− 1=pq− 1=(p− 1)(q− 1) + (p− 1) + (q− 1),[17][self-published source?]it is recommended that(p− 1)and(q− 1)have only very small common factors, if any, besides the necessary 2.[1][18][19][failed verification][20][failed verification] Note: The authors of the original RSA paper carry out the key generation by choosingdand then computingeas themodular multiplicative inverseofdmoduloφ(n), whereas most current implementations of RSA, such as those followingPKCS#1, do the reverse (chooseeand computed). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.[1][21] Suppose thatBobwants to send information toAlice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message. To enable Bob to send his encrypted messages, Alice transmits her public key(n,e)to Bob via a reliable, but not necessarily secret, route. Alice's private key(d)is never distributed. After Bob obtains Alice's public key, he can send a messageMto Alice. To do it, he first turnsM(strictly speaking, the un-padded plaintext) into an integerm(strictly speaking, thepaddedplaintext), such that0 ≤m<nby using an agreed-upon reversible protocol known as apadding scheme. He then computes the ciphertextc, using Alice's public keye, corresponding to c≡me(modn).{\displaystyle c\equiv m^{e}{\pmod {n}}.} This can be done reasonably quickly, even for very large numbers, usingmodular exponentiation. Bob then transmitscto Alice. Note that at least nine values ofmwill yield a ciphertextcequal tom,[a]but this is very unlikely to occur in practice. Alice can recovermfromcby using her private key exponentdby computing cd≡(me)d≡m(modn).{\displaystyle c^{d}\equiv (m^{e})^{d}\equiv m{\pmod {n}}.} Givenm, she can recover the original messageMby reversing the padding scheme. Here is an example of RSA encryption and decryption:[b] Thepublic keyis(n= 3233,e= 17). For a paddedplaintextmessagem, the encryption function isc(m)=memodn=m17mod3233.{\displaystyle {\begin{aligned}c(m)&=m^{e}{\bmod {n}}\\&=m^{17}{\bmod {3}}233.\end{aligned}}} Theprivate keyis(n= 3233,d= 413). For an encryptedciphertextc, the decryption function ism(c)=cdmodn=c413mod3233.{\displaystyle {\begin{aligned}m(c)&=c^{d}{\bmod {n}}\\&=c^{413}{\bmod {3}}233.\end{aligned}}} For instance, in order to encryptm= 65, one calculatesc=6517mod3233=2790.{\displaystyle c=65^{17}{\bmod {3}}233=2790.} To decryptc= 2790, one calculatesm=2790413mod3233=65.{\displaystyle m=2790^{413}{\bmod {3}}233=65.} Both of these calculations can be computed efficiently using thesquare-and-multiply algorithmformodular exponentiation. 
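The toy numbers above can be reproduced directly (Python 3.9+ for math.lcm and the three-argument pow with exponent −1; variable names are illustrative). The block also computes the Chinese-remainder quantities d_p, d_q and q_inv that appear in the optimized decryption below:

```python
import math

p, q, e = 61, 53, 17
n = p * q                            # 3233
lam = math.lcm(p - 1, q - 1)         # λ(n) = lcm(60, 52) = 780
d = pow(e, -1, lam)                  # 413 = e^(-1) mod λ(n)

m = 65
c = pow(m, e, n)                     # encryption: 65^17 mod 3233 = 2790
assert (c, pow(c, d, n)) == (2790, 65)     # plain decryption recovers m

# Decryption via the Chinese remainder theorem:
d_p, d_q = d % (p - 1), d % (q - 1)        # 53, 49
q_inv = pow(q, -1, p)                      # 38, since 38*53 ≡ 1 (mod 61)
m1, m2 = pow(c, d_p, p), pow(c, d_q, q)    # 4, 12
h = q_inv * (m1 - m2) % p                  # 1
assert m2 + h * q == 65                    # 12 + 1*53 = 65
```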
In real-life situations the primes selected would be much larger; in our example it would be trivial to factorn= 3233(obtained from the freely available public key) back to the primespandq.e, also from the public key, is then inverted to getd, thus acquiring the private key. Practical implementations use theChinese remainder theoremto speed up the calculation using modulus of factors (modpqusing modpand modq). The valuesdp,dqandqinv, which are part of the private key are computed as follows:dp=dmod(p−1)=413mod(61−1)=53,dq=dmod(q−1)=413mod(53−1)=49,qinv=q−1modp=53−1mod61=38⇒(qinv×q)modp=38×53mod61=1.{\displaystyle {\begin{aligned}d_{p}&=d{\bmod {(}}p-1)=413{\bmod {(}}61-1)=53,\\d_{q}&=d{\bmod {(}}q-1)=413{\bmod {(}}53-1)=49,\\q_{\text{inv}}&=q^{-1}{\bmod {p}}=53^{-1}{\bmod {6}}1=38\\&\Rightarrow (q_{\text{inv}}\times q){\bmod {p}}=38\times 53{\bmod {6}}1=1.\end{aligned}}} Here is howdp,dqandqinvare used for efficient decryption (encryption is efficient by choice of a suitabledandepair):m1=cdpmodp=279053mod61=4,m2=cdqmodq=279049mod53=12,h=(qinv×(m1−m2))modp=(38×−8)mod61=1,m=m2+h×q=12+1×53=65.{\displaystyle {\begin{aligned}m_{1}&=c^{d_{p}}{\bmod {p}}=2790^{53}{\bmod {6}}1=4,\\m_{2}&=c^{d_{q}}{\bmod {q}}=2790^{49}{\bmod {5}}3=12,\\h&=(q_{\text{inv}}\times (m_{1}-m_{2})){\bmod {p}}=(38\times -8){\bmod {6}}1=1,\\m&=m_{2}+h\times q=12+1\times 53=65.\end{aligned}}} SupposeAliceusesBob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used tosigna message. Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces ahash valueof the message, raises it to the power ofd(modulon) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power ofe(modulon) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent. This works because ofexponentiationrules:h=hash⁡(m),{\displaystyle h=\operatorname {hash} (m),}(he)d=hed=hde=(hd)e≡h(modn).{\displaystyle (h^{e})^{d}=h^{ed}=h^{de}=(h^{d})^{e}\equiv h{\pmod {n}}.} Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to: The proof of the correctness of RSA is based onFermat's little theorem, stating thatap− 1≡ 1 (modp)for any integeraand primep, not dividinga.[note 1] We want to show that(me)d≡m(modpq){\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}}for every integermwhenpandqare distinct prime numbers andeanddare positive integers satisfyinged≡ 1 (modλ(pq)). 
Since λ(pq) = lcm(p − 1, q − 1) is, by construction, divisible by both p − 1 and q − 1, we can write

ed−1=h(p−1)=k(q−1){\displaystyle ed-1=h(p-1)=k(q-1)}

for some nonnegative integers h and k.[note 2]

To check whether two numbers, such as m^{ed} and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately.[note 3]

To show m^{ed} ≡ m (mod p), we consider two cases:

1. If m ≡ 0 (mod p), then m^{ed} is a multiple of p, so m^{ed} ≡ 0 ≡ m (mod p).
2. If m ≢ 0 (mod p), then Fermat's little theorem gives m^{p−1} ≡ 1 (mod p), and hence m^{ed} = m^{ed−1}·m = m^{h(p−1)}·m = (m^{p−1})^h·m ≡ 1^h·m ≡ m (mod p).

The verification that m^{ed} ≡ m (mod q) proceeds in a completely analogous way. This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod λ(pq)), (me)d≡m(modpq).{\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}.}

Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem.

We want to show that m^{ed} ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have

med=m1+hφ(n)=m(mφ(n))h≡m(1)h≡m(modn),{\displaystyle m^{ed}=m^{1+h\varphi (n)}=m(m^{\varphi (n)})^{h}\equiv m(1)^{h}\equiv m{\pmod {n}},}

where the second-last congruence follows from Euler's theorem.

More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^{λ(n)} ≡ 1 (mod n) for all m relatively prime to n. When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case, the desired congruence is still true. Either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof.

There are a number of attacks against plain RSA as described below. To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller.

RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al.[25] showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS). Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted (U.S.
patent 6,266,771 and U.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents.[original research?] Note that using different RSA key pairs for encryption and signing is potentially more secure.[26]

For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use for decryption and signing the following optimization based on the Chinese remainder theorem.[27][citation needed] The following values are precomputed and stored as part of the private key: d_P = d mod (p − 1), d_Q = d mod (q − 1), and q_inv = q^(−1) mod p.

These values allow the recipient to compute the exponentiation m = c^d (mod pq) more efficiently as follows: m1=cdP(modp){\displaystyle m_{1}=c^{d_{P}}{\pmod {p}}}, m2=cdQ(modq){\displaystyle m_{2}=c^{d_{Q}}{\pmod {q}}}, h=qinv(m1−m2)(modp){\displaystyle h=q_{\text{inv}}(m_{1}-m_{2}){\pmod {p}}},[c] m=m2+hq{\displaystyle m=m_{2}+hq}.

This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.

The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.[28]

The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key, and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem.

The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[29] By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required and about 2.5 gigabytes of RAM for the sieving process.

Rivest, Shamir, and Adleman noted[1] that Miller has shown that – assuming the truth of the extended Riemann hypothesis – finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference).[30] However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.

As of 2020[update], the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250).[31] Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long.
In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010.[32] As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[33] It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits were shown to be practically breakable in 1999, when RSA-155 was factored by using several hundred computers; such keys can now be factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011.[34] A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.[32]

In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.

Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes. The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^(1/4) (where n = p·q, which even for "small" 1024-bit values of n is 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded.

It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e.[35]

There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA, specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryption (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.

In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.[36]

A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.[37][38][self-published source?] They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key and n′ = p′q′ is another, then if by chance p = p′ (but q is not equal to q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys.
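The shared-factor failure just described can be demonstrated in a few lines. A minimal sketch in Python, with illustrative toy primes standing in for real key material:

from math import gcd

# Two key pairs generated by a faulty RNG that reused the prime p.
p = 10007                      # shared prime (toy size)
q1, q2 = 10009, 10037          # distinct second primes
n1, n2 = p * q1, p * q2        # the two public moduli

# An attacker holding only n1 and n2 recovers the shared factor:
shared = gcd(n1, n2)
assert shared == p

# ...and with p known, both moduli factor immediately:
assert (shared, n1 // shared) == (p, q1)
assert (shared, n2 // shared) == (p, q2)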
Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently.

Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n′ they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division the GCD problem is of normal size. Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem.[39]

Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.

Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver).[40] This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.

One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e·c)^d (mod n). The result of this computation, after applying Euler's theorem, is r·c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.

In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1.5 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, making it possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys.
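A minimal sketch of the blinding countermeasure described above (Python, with toy key sizes; not a hardened implementation):

import secrets
from math import gcd

p, q = 61, 53                        # toy key, for illustration only
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))

c = pow(42, e, n)                    # a ciphertext to decrypt

# Pick a fresh blinding factor r that is invertible mod n.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break

# The exponentiation input (r^e * c) mod n is unrelated to c,
# so its running time reveals nothing about c.
blinded = (pow(r, e, n) * c) % n
m_blinded = pow(blinded, d, n)       # equals r * c^d mod n
m = (m_blinded * pow(r, -1, n)) % n  # strip the blinding factor

assert m == pow(c, d, n) == 42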
As a result of Bleichenbacher's work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks. A variant of this attack, dubbed "BERserk", came back in 2014.[41][42] It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.

A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors. Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper "On the Power of Simple Branch Prediction Analysis",[43] the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.

A power-fault attack on RSA implementations was described in 2010.[44] The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.

There are many details to keep in mind in order to implement RSA securely (strong PRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.[45] A number of cryptography libraries nevertheless provide support for RSA.
https://en.wikipedia.org/wiki/RSA_(algorithm)
In number theory, a congruence is an equivalence relation on the integers. The following sections list important or interesting prime-related congruences. There are other prime-related congruences that provide necessary and sufficient conditions on the primality of certain subsequences of the natural numbers. Many of these alternate statements characterizing primality are related to Wilson's theorem, or are restatements of this classical result given in terms of other special variants of generalized factorial functions. For instance, new variants of Wilson's theorem stated in terms of the hyperfactorials, subfactorials, and superfactorials are given in the literature.[1]

For integers k≥1{\displaystyle k\geq 1}, we have the following form of Wilson's theorem: If p{\displaystyle p} is odd, we have that

Clement's congruence-based theorem characterizes the twin prime pairs of the form (p,p+2){\displaystyle (p,p+2)} through the following conditions: P. A. Clement's original 1949 paper[2] provides a proof of this interesting elementary number-theoretic criterion for twin primality based on Wilson's theorem. Another characterization, given in Lin and Zhipeng's article, provides that

The prime pairs of the form (p,p+2k){\displaystyle (p,p+2k)} for some k≥1{\displaystyle k\geq 1} include the special cases of the cousin primes (when k=2{\displaystyle k=2}) and the sexy primes (when k=3{\displaystyle k=3}). We have elementary congruence-based characterizations of the primality of such pairs, proved for instance in the article.[3] Examples of congruences characterizing these prime pairs include and the alternate characterization when p{\displaystyle p} is odd such that p⧸∣(2k−1)!!2{\displaystyle p\not {\mid }(2k-1)!!^{2}} given by

Still other congruence-based characterizations of the primality of triples, and more general prime clusters (or prime tuples), exist and are typically proved starting from Wilson's theorem.[4]
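Clement's criterion lends itself to a direct numerical check. A minimal sketch in Python, assuming the standard statement of Clement's theorem: (p, p + 2) is a twin prime pair if and only if 4((p − 1)! + 1) + p ≡ 0 (mod p(p + 2)).

from math import factorial

def is_prime(n):
    # Trial division; adequate for the small n tested here.
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def clement_twin(p):
    # Clement's criterion: (p, p+2) are twin primes iff
    # 4*((p-1)! + 1) + p == 0 (mod p*(p+2)).
    return (4 * (factorial(p - 1) + 1) + p) % (p * (p + 2)) == 0

for p in range(2, 60):
    assert clement_twin(p) == (is_prime(p) and is_prime(p + 2))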
https://en.wikipedia.org/wiki/Table_of_congruences
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m.[1] In the standard notation of modular arithmetic this congruence is written as

which is the shorthand way of writing the statement that m divides (evenly) the quantity ax − 1, or, put another way, the remainder after dividing ax by the integer m is 1. If a does have an inverse modulo m, then there are infinitely many solutions of this congruence, which form a congruence class with respect to this modulus. Furthermore, any integer that is congruent to a (i.e., in a's congruence class) has any element of x's congruence class as a modular multiplicative inverse. Using the notation w¯{\displaystyle {\overline {w}}} to indicate the congruence class containing w, this can be expressed by saying that the modular multiplicative inverse of the congruence class a¯{\displaystyle {\overline {a}}} is the congruence class x¯{\displaystyle {\overline {x}}} such that:

where the symbol ⋅m{\displaystyle \cdot _{m}} denotes the multiplication of equivalence classes modulo m.[2] Written in this way, the analogy with the usual concept of a multiplicative inverse in the set of rational or real numbers is clearly represented, replacing the numbers by congruence classes and altering the binary operation appropriately. As with the analogous operation on the real numbers, a fundamental use of this operation is in solving, when possible, linear congruences of the form

Finding modular multiplicative inverses also has practical applications in the field of cryptography, e.g. public-key cryptography and the RSA algorithm.[3][4][5] A benefit for the computer implementation of these applications is that there exists a very fast algorithm (the extended Euclidean algorithm) that can be used for the calculation of modular multiplicative inverses.

For a given positive integer m, two integers, a and b, are said to be congruent modulo m if m divides their difference. This binary relation is denoted by,

This is an equivalence relation on the set of integers, Z{\displaystyle \mathbb {Z} }, and the equivalence classes are called congruence classes modulo m or residue classes modulo m. Let a¯{\displaystyle {\overline {a}}} denote the congruence class containing the integer a,[6] then

A linear congruence is a modular congruence of the form

Unlike linear equations over the reals, linear congruences may have zero, one or several solutions. If x is a solution of a linear congruence then every element in x¯{\displaystyle {\overline {x}}} is also a solution, so, when speaking of the number of solutions of a linear congruence, we are referring to the number of different congruence classes that contain solutions. If d is the greatest common divisor of a and m, then the linear congruence ax ≡ b (mod m) has solutions if and only if d divides b. If d divides b, then there are exactly d solutions.[7]

A modular multiplicative inverse of an integer a with respect to the modulus m is a solution of the linear congruence

The previous result says that a solution exists if and only if gcd(a, m) = 1, that is, a and m must be relatively prime (i.e. coprime). Furthermore, when this condition holds, there is exactly one solution, i.e., when it exists, a modular multiplicative inverse is unique:[8] if b and b′ are both modular multiplicative inverses of a with respect to the modulus m, then ab ≡ ab′ ≡ 1 (mod m), so m divides a(b − b′); since gcd(a, m) = 1, it follows that m divides b − b′, and therefore b ≡ b′ (mod m). If a ≡ 0 (mod m), then gcd(a, m) = m, and a won't even have a modular multiplicative inverse.
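The solution count just stated is easy to confirm by brute force for small moduli. A minimal sketch in Python:

from math import gcd

def solutions(a, b, m):
    # All x in {0, ..., m-1} with a*x congruent to b (mod m).
    return [x for x in range(m) if (a * x - b) % m == 0]

for m in range(2, 20):
    for a in range(1, m):
        for b in range(m):
            d = gcd(a, m)
            # d | b  <=>  solutions exist, and then there are exactly d of them
            assert len(solutions(a, b, m)) == (d if b % d == 0 else 0)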
Whenax≡ 1 (modm)has a solution it is often denoted in this way − but this can be considered anabuse of notationsince it could be misinterpreted as thereciprocalofa{\displaystyle a}(which, contrary to the modular multiplicative inverse, is not an integer except whenais 1 or −1). The notation would be proper ifais interpreted as a token standing for the congruence classa¯{\displaystyle {\overline {a}}}, as the multiplicative inverse of a congruence class is a congruence class with the multiplication defined in the next section. The congruence relation, modulom, partitions the set of integers intomcongruence classes. Operations of addition and multiplication can be defined on thesemobjects in the following way: To either add or multiply two congruence classes, first pick a representative (in any way) from each class, then perform the usual operation for integers on the two representatives and finally take the congruence class that the result of the integer operation lies in as the result of the operation on the congruence classes. In symbols, with+m{\displaystyle +_{m}}and⋅m{\displaystyle \cdot _{m}}representing the operations on congruence classes, these definitions are and These operations arewell-defined, meaning that the end result does not depend on the choices of representatives that were made to obtain the result. Themcongruence classes with these two defined operations form aring, called thering of integers modulom. There are several notations used for these algebraic objects, most oftenZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }orZ/m{\displaystyle \mathbb {Z} /m}, but several elementary texts and application areas use a simplified notationZm{\displaystyle \mathbb {Z} _{m}}when confusion with other algebraic objects is unlikely. The congruence classes of the integers modulomwere traditionally known asresidue classes modulo m, reflecting the fact that all the elements of a congruence class have the same remainder (i.e., "residue") upon being divided bym. Any set ofmintegers selected so that each comes from a different congruence class modulo m is called acomplete system of residues modulom.[9]Thedivision algorithmshows that the set of integers,{0, 1, 2, ...,m− 1}form a complete system of residues modulom, known as theleast residue system modulom. In working with arithmetic problems it is sometimes more convenient to work with a complete system of residues and use the language of congruences while at other times the point of view of the congruence classes of the ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is more useful.[10] Not every element of a complete residue system modulomhas a modular multiplicative inverse, for instance, zero never does. After removing the elements of a complete residue system that are not relatively prime tom, what is left is called areduced residue system, all of whose elements have modular multiplicative inverses. The number of elements in a reduced residue system isϕ(m){\displaystyle \phi (m)}, whereϕ{\displaystyle \phi }is theEuler totient function, i.e., the number of positive integers less thanmthat are relatively prime tom. In a generalring with unitynot every element has amultiplicative inverseand those that do are calledunits. As the product of two units is a unit, the units of a ring form agroup, thegroup of units of the ringand often denoted byR×ifRis the name of the ring. The group of units of the ring of integers modulomis called themultiplicative group of integers modulom, and it isisomorphicto a reduced residue system. 
In particular, it hasorder(size),ϕ(m){\displaystyle \phi (m)}. In the case thatmis aprime, sayp, thenϕ(p)=p−1{\displaystyle \phi (p)=p-1}and all the non-zero elements ofZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }have multiplicative inverses, thusZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }is afinite field. In this case, the multiplicative group of integers modulopform acyclic groupof orderp− 1. For any integern>1{\displaystyle n>1}, it's always the case thatn2−n+1{\displaystyle n^{2}-n+1}is the modular multiplicative inverse ofn+1{\displaystyle n+1}with respect to the modulusn2{\displaystyle n^{2}}, since(n+1)(n2−n+1)=n3+1{\displaystyle (n+1)(n^{2}-n+1)=n^{3}+1}. Examples are3×3≡1(mod4){\displaystyle 3\times 3\equiv 1{\pmod {4}}},4×7≡1(mod9){\displaystyle 4\times 7\equiv 1{\pmod {9}}},5×13≡1(mod16){\displaystyle 5\times 13\equiv 1{\pmod {16}}}and so on. The following example uses the modulus 10: Two integers are congruent mod 10 if and only if their difference is divisible by 10, for instance Some of the ten congruence classes with respect to this modulus are: The linear congruence4x≡ 5 (mod 10)has no solutions since the integers that are congruent to 5 (i.e., those in5¯{\displaystyle {\overline {5}}}) are all odd while4xis always even. However, the linear congruence4x≡ 6 (mod 10)has two solutions, namely,x= 4andx= 9. Thegcd(4, 10) = 2and 2 does not divide 5, but does divide 6. Sincegcd(3, 10) = 1, the linear congruence3x≡ 1 (mod 10)will have solutions, that is, modular multiplicative inverses of 3 modulo 10 will exist. In fact, 7 satisfies this congruence (i.e., 21 − 1 = 20). However, other integers also satisfy the congruence, for instance 17 and −3 (i.e., 3(17) − 1 = 50 and 3(−3) − 1 = −10). In particular, every integer in7¯{\displaystyle {\overline {7}}}will satisfy the congruence since these integers have the form7 + 10rfor some integerrand is divisible by 10. This congruence has only this one congruence class of solutions. The solution in this case could have been obtained by checking all possible cases, but systematic algorithms would be needed for larger moduli and these will be given in the next section. The product of congruence classes5¯{\displaystyle {\overline {5}}}and8¯{\displaystyle {\overline {8}}}can be obtained by selecting an element of5¯{\displaystyle {\overline {5}}}, say 25, and an element of8¯{\displaystyle {\overline {8}}}, say −2, and observing that their product (25)(−2) = −50 is in the congruence class0¯{\displaystyle {\overline {0}}}. Thus,5¯⋅108¯=0¯{\displaystyle {\overline {5}}\cdot _{10}{\overline {8}}={\overline {0}}}. Addition is defined in a similar way. The ten congruence classes together with these operations of addition and multiplication of congruence classes form the ring of integers modulo 10, i.e.,Z/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }. A complete residue system modulo 10 can be the set {10, −9, 2, 13, 24, −15, 26, 37, 8, 9} where each integer is in a different congruence class modulo 10. The unique least residue system modulo 10 is {0, 1, 2, ..., 9}. A reduced residue system modulo 10 could be {1, 3, 7, 9}. The product of any two congruence classes represented by these numbers is again one of these four congruence classes. This implies that these four congruence classes form a group, in this case the cyclic group of order four, having either 3 or 7 as a (multiplicative) generator. The represented congruence classes form the group of units of the ringZ/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }. 
These congruence classes are precisely the ones which have modular multiplicative inverses. A modular multiplicative inverse ofamodulomcan be found by using the extended Euclidean algorithm. TheEuclidean algorithmdetermines the greatest common divisor (gcd) of two integers, sayaandm. Ifahas a multiplicative inverse modulom, this gcd must be 1. The last of several equations produced by the algorithm may be solved for this gcd. Then, using a method called "back substitution", an expression connecting the original parameters and this gcd can be obtained. In other words, integersxandycan be found to satisfyBézout's identity, Rewritten, this is that is, so, a modular multiplicative inverse ofahas been calculated. A more efficient version of the algorithm is the extended Euclidean algorithm, which, by using auxiliary equations, reduces two passes through the algorithm (back substitution can be thought of as passing through the algorithm in reverse) to just one. Inbig O notation, this algorithm runs in timeO(log2(m)), assuming|a| <m, and is considered to be very fast and generally more efficient than its alternative, exponentiation. As an alternative to the extended Euclidean algorithm, Euler's theorem may be used to compute modular inverses.[11] According toEuler's theorem, ifaiscoprimetom, that is,gcd(a,m) = 1, then whereϕ{\displaystyle \phi }isEuler's totient function. This follows from the fact thatabelongs to the multiplicative group(Z/mZ){\displaystyle (\mathbb {Z} /m\mathbb {Z} )}×if and only ifaiscoprimetom. Therefore, a modular multiplicative inverse can be found directly: In the special case wheremis a prime,ϕ(m)=m−1{\displaystyle \phi (m)=m-1}and a modular inverse is given by This method is generally slower than the extended Euclidean algorithm, but is sometimes used when an implementation for modular exponentiation is already available. Some disadvantages of this method include: One notableadvantageof this technique is that there are no conditional branches which depend on the value ofa, and thus the value ofa, which may be an important secret inpublic-key cryptography, can be protected fromside-channel attacks. For this reason, the standard implementation ofCurve25519uses this technique to compute an inverse. It is possible to compute the inverse of multiple numbersai, modulo a commonm, with a single invocation of the Euclidean algorithm and three multiplications per additional input.[12]The basic idea is to form the product of all theai, invert that, then multiply byajfor allj≠ito leave only the desireda−1i. More specifically, the algorithm is (all arithmetic performed modulom): It is possible to perform the multiplications in a tree structure rather than linearly to exploitparallel computing. Finding a modular multiplicative inverse has many applications in algorithms that rely on the theory of modular arithmetic. For instance, in cryptography the use of modular arithmetic permits some operations to be carried out more quickly and with fewer storage requirements, while other operations become more difficult.[13]Both of these features can be used to advantage. In particular, in the RSA algorithm, encrypting and decrypting a message is done using a pair of numbers that are multiplicative inverses with respect to a carefully selected modulus. One of these numbers is made public and can be used in a rapid encryption procedure, while the other, used in the decryption procedure, is kept hidden. 
Determining the hidden number from the public number is considered to be computationally infeasible and this is what makes the system work to ensure privacy.[14] As another example in a different context, consider the exact division problem in computer science where you have a list of odd word-sized numbers each divisible bykand you wish to divide them all byk. One solution is as follows: On many machines, particularly those without hardware support for division, division is a slower operation than multiplication, so this approach can yield a considerable speedup. The first step is relatively slow but only needs to be done once. Modular multiplicative inverses are used to obtain a solution of a system of linear congruences that is guaranteed by theChinese Remainder Theorem. For example, the system has common solutions since 5,7 and 11 are pairwisecoprime. A solution is given by where Thus, and in its unique reduced form since 385 is theLCMof 5,7 and 11. Also, the modular multiplicative inverse figures prominently in the definition of theKloosterman sum.
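As a concrete illustration of the two applications above, the following sketch in Python implements the extended Euclidean algorithm, uses it to recover the inverse of 3 modulo 10 from the earlier example, and then solves an illustrative system with the same moduli 5, 7, and 11 as the example above (the residues 2, 3, 5 are chosen here purely for demonstration):

def egcd(a, b):
    # Return (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("inverse exists only when gcd(a, m) == 1")
    return x % m

assert modinv(3, 10) == 7           # the mod-10 example above

# Chinese remainder theorem via modular inverses, for the system
# x = 2 (mod 5), x = 3 (mod 7), x = 5 (mod 11):
residues, moduli = [2, 3, 5], [5, 7, 11]
M = 1
for m in moduli:
    M *= m                          # M = 385, the product of the coprime moduli
x = sum(r * (M // m) * modinv(M // m, m)
        for r, m in zip(residues, moduli)) % M
assert all(x % m == r for r, m in zip(residues, moduli))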
https://en.wikipedia.org/wiki/Modular_multiplicative_inverse
Incombinatorialmathematics, theBell polynomials, named in honor ofEric Temple Bell, are used in the study of set partitions. They are related toStirlingandBell numbers. They also occur in many applications, such as inFaà di Bruno's formula. Thepartialorincompleteexponential Bell polynomials are atriangular arrayof polynomials given by where the sum is taken over all sequencesj1,j2,j3, ...,jn−k+1of non-negative integers such that these two conditions are satisfied: The sum is called thenthcomplete exponential Bell polynomial. Likewise, the partialordinaryBell polynomial is defined by where the sum runs over all sequencesj1,j2,j3, ...,jn−k+1of non-negative integers such that Thanks to the first condition on indices, we can rewrite the formula as where we have used themultinomial coefficient. The ordinary Bell polynomials can be expressed in the terms of exponential Bell polynomials: In general, Bell polynomial refers to the exponential Bell polynomial, unless otherwise explicitly stated. The exponential Bell polynomial encodes the information related to the ways a set can be partitioned. For example, if we consider a set {A, B, C}, it can be partitioned into two non-empty, non-overlapping subsets, which are also referred to as parts or blocks, in 3 different ways: Thus, we can encode the information regarding these partitions as Here, the subscripts ofB3,2tell us that we are considering the partitioning of a set with 3 elements into 2 blocks. The subscript of eachxiindicates the presence of a block withielements (or block of sizei) in a given partition. So here,x2indicates the presence of a block with two elements. Similarly,x1indicates the presence of a block with a single element. The exponent ofxijindicates that there arejsuch blocks of sizeiin a single partition. Here, the fact that bothx1andx2have exponent 1 indicates that there is only one such block in a given partition. The coefficient of themonomialindicates how many such partitions there are. Here, there are 3 partitions of a set with 3 elements into 2 blocks, where in each partition the elements are divided into two blocks of sizes 1 and 2. Since any set can be divided into a single block in only one way, the above interpretation would mean thatBn,1=xn. Similarly, since there is only one way that a set withnelements be divided intonsingletons,Bn,n=x1n. As a more complicated example, consider This tells us that if a set with 6 elements is divided into 2 blocks, then we can have 6 partitions with blocks of size 1 and 5, 15 partitions with blocks of size 4 and 2, and 10 partitions with 2 blocks of size 3. The sum of the subscripts in a monomial is equal to the total number of elements. Thus, the number of monomials that appear in the partial Bell polynomial is equal to the number of ways the integerncan be expressed as a summation ofkpositive integers. This is the same as theinteger partitionofnintokparts. For instance, in the above examples, the integer 3 can be partitioned into two parts as 2+1 only. Thus, there is only one monomial inB3,2. However, the integer 6 can be partitioned into two parts as 5+1, 4+2, and 3+3. Thus, there are three monomials inB6,2. Indeed, the subscripts of the variables in a monomial are the same as those given by the integer partition, indicating the sizes of the different blocks. The total number of monomials appearing in a complete Bell polynomialBnis thus equal to the total number of integer partitions ofn. 
Also the degree of each monomial, which is the sum of the exponents of each variable in the monomial, is equal to the number of blocks the set is divided into. That is,j1+j2+ ... =k. Thus, given a complete Bell polynomialBn, we can separate the partial Bell polynomialBn,kby collecting all those monomials with degreek. Finally, if we disregard the sizes of the blocks and put allxi=x, then the summation of the coefficients of the partial Bell polynomialBn,kwill give the total number of ways that a set withnelements can be partitioned intokblocks, which is the same as theStirling numbers of the second kind. Also, the summation of all the coefficients of the complete Bell polynomialBnwill give us the total number of ways a set withnelements can be partitioned into non-overlapping subsets, which is the same as the Bell number. In general, if the integernispartitionedinto a sum in which "1" appearsj1times, "2" appearsj2times, and so on, then the number ofpartitions of a setof sizenthat collapse to that partition of the integernwhen the members of the set become indistinguishable is the corresponding coefficient in the polynomial. For example, we have because the ways to partition a set of 6 elements as 2 blocks are Similarly, because the ways to partition a set of 6 elements as 3 blocks are Below is atriangular arrayof the incomplete Bell polynomialsBn,k(x1,x2,…,xn−k+1){\displaystyle B_{n,k}(x_{1},x_{2},\dots ,x_{n-k+1})}: The exponential partial Bell polynomials can be defined by the double series expansion of its generating function: In other words, by what amounts to the same, by the series expansion of thek-th power: The complete exponential Bell polynomial is defined byΦ(t,1){\displaystyle \Phi (t,1)}, or in other words: Thus, then-th complete Bell polynomial is given by Likewise, theordinarypartial Bell polynomial can be defined by the generating function Or, equivalently, by series expansion of thek-th power: See alsogenerating function transformationsfor Bell polynomial generating function expansions of compositions of sequencegenerating functionsandpowers,logarithms, andexponentialsof a sequence generating function. Each of these formulas is cited in the respective sections of Comtet.[1] The complete Bell polynomials can berecurrentlydefined as with the initial valueB0=1{\displaystyle B_{0}=1}. The partial Bell polynomials can also be computed efficiently by a recurrence relation: where In addition:[2] When1≤a<n{\displaystyle 1\leq a<n}, The complete Bell polynomials also satisfy the following recurrence differential formula:[3] The partial derivatives of the complete Bell polynomials are given by[4] Similarly, the partial derivatives of the partial Bell polynomials are given by If the arguments of the Bell polynomials are one-dimensional functions, the chain rule can be used to obtain The value of the Bell polynomialBn,k(x1,x2,...) on the sequence offactorialsequals an unsignedStirling number of the first kind: The sum of these values gives the value of the complete Bell polynomial on the sequence of factorials: The value of the Bell polynomialBn,k(x1,x2,...) on the sequence of ones equals aStirling number of the second kind: The sum of these values gives the value of the complete Bell polynomial on the sequence of ones: which is thenthBell number. which gives theLah number. 
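The recurrence relation for the partial Bell polynomials mentioned above makes them straightforward to evaluate numerically. A minimal sketch in Python, assuming the standard form of the recurrence, B_{n,k} = Σ_i C(n−1, i−1) x_i B_{n−i,k−1}, checked against the set-partition counts discussed in this section:

from math import comb
from functools import lru_cache

def bell_partial(n, k, x):
    # Evaluate the partial exponential Bell polynomial B_{n,k} at the
    # sequence x = (x1, x2, ...), via the recurrence
    # B_{n,k} = sum_i C(n-1, i-1) * x_i * B_{n-i,k-1}.
    @lru_cache(maxsize=None)
    def B(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return sum(comb(n - 1, i - 1) * x[i - 1] * B(n - i, k - 1)
                   for i in range(1, n - k + 2))
    return B(n, k)

ones = [1] * 10
assert bell_partial(3, 2, ones) == 3    # B_{3,2} = 3 x1 x2, evaluated at x = 1
assert bell_partial(6, 2, ones) == 31   # Stirling number of the second kind S(6, 2)
assert sum(bell_partial(6, k, ones) for k in range(7)) == 203  # Bell number B_6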
Touchard polynomialTn(x)=∑k=0n{nk}⋅xk{\displaystyle T_{n}(x)=\sum _{k=0}^{n}\left\{{n \atop k}\right\}\cdot x^{k}}can be expressed as the value of the complete Bell polynomial on all arguments beingx: If we define then we have the inverse relationship More generally,[5][6]given some functionf{\displaystyle f}admitting an inverseg=f−1{\displaystyle g=f^{-1}}, yn=∑k=0nf(k)(a)Bn,k(x1,…,xn−k+1)⇔xn=∑k=0ng(k)(f(a))Bn,k(y1,…,yn−k+1).{\displaystyle y_{n}=\sum _{k=0}^{n}f^{(k)}(a)\,B_{n,k}(x_{1},\ldots ,x_{n-k+1})\quad \Leftrightarrow \quad x_{n}=\sum _{k=0}^{n}g^{(k)}{\big (}f(a){\big )}\,B_{n,k}(y_{1},\ldots ,y_{n-k+1}).} The complete Bell polynomial can be expressed asdeterminants: and For sequencesxn,yn,n= 1, 2, ..., define aconvolutionby: The bounds of summation are 1 andn− 1, not 0 andn. Letxnk♢{\displaystyle x_{n}^{k\diamondsuit }\,}be thenth term of the sequence Then[2] For example, let us computeB4,3(x1,x2){\displaystyle B_{4,3}(x_{1},x_{2})}. We have and thus, The first few complete Bell polynomials are: Faà di Bruno's formulamay be stated in terms of Bell polynomials as follows: Similarly, a power-series version of Faà di Bruno's formula may be stated using Bell polynomials as follows. Suppose Then In particular, the complete Bell polynomials appear in the exponential of aformal power series: which also represents theexponential generating functionof the complete Bell polynomials on a fixed sequence of argumentsa1,a2,…{\displaystyle a_{1},a_{2},\dots }. Let two functionsfandgbe expressed in formalpower seriesas such thatgis the compositional inverse offdefined byg(f(w)) =worf(g(z)) =z. Iff0= 0 andf1≠ 0, then an explicit form of the coefficients of the inverse can be given in term of Bell polynomials as[8] withf^k=fk+1(k+1)f1,{\displaystyle {\hat {f}}_{k}={\frac {f_{k+1}}{(k+1)f_{1}}},}andnk¯=n(n+1)⋯(n+k−1){\displaystyle n^{\bar {k}}=n(n+1)\cdots (n+k-1)}is the rising factorial, andg1=1f1.{\displaystyle g_{1}={\frac {1}{f_{1}}}.} Consider the integral of the form where (a,b) is a real (finite or infinite) interval, λ is a large positive parameter and the functionsfandgare continuous. Letfhave a single minimum in [a,b] which occurs atx=a. Assume that asx→a+, withα> 0, Re(β) > 0; and that the expansion offcan be term wise differentiated. Then, Laplace–Erdelyi theorem states that the asymptotic expansion of the integralI(λ) is given by where the coefficientscnare expressible in terms ofanandbnusing partialordinaryBell polynomials, as given by Campbell–Froman–Walles–Wojdylo formula: Theelementary symmetric polynomialen{\displaystyle e_{n}}and thepower sum symmetric polynomialpn{\displaystyle p_{n}}can be related to each other using Bell polynomials as: These formulae allow one to express the coefficients of monic polynomials in terms of the Bell polynomials of its zeroes. For instance, together withCayley–Hamilton theoremthey lead to expression of the determinant of an×nsquare matrixAin terms of the traces of its powers: Thecycle indexof thesymmetric groupSn{\displaystyle S_{n}}can be expressed in terms of complete Bell polynomials as follows: The sum is thenth rawmomentof aprobability distributionwhose firstncumulantsareκ1, ...,κn. In other words, thenth moment is thenth complete Bell polynomial evaluated at the firstncumulants. Likewise, thenth cumulant can be given in terms of the moments as Hermite polynomialscan be expressed in terms of Bell polynomials as wherexi= 0 for alli> 2; thus allowing for a combinatorial interpretation of the coefficients of the Hermite polynomials. 
This can be seen by comparing the generating function of the Hermite polynomials with that of the Bell polynomials.

For any sequence a1, a2, …, an of scalars, let

Then this polynomial sequence is of binomial type, i.e. it satisfies the binomial identity

More generally, we have this result: If we define a formal power series

then for all n,

Bell polynomials are implemented in several computer algebra systems.
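The moment-cumulant relation discussed earlier also admits a quick numerical check, using the recurrence for the complete Bell polynomials cited above, B_{n+1} = Σ_i C(n, i) B_{n−i} x_{i+1}. A sketch in Python, feeding in the cumulants of a Gaussian (κ1 = μ, κ2 = σ², all higher cumulants zero) and comparing against its known raw moments:

from math import comb

def complete_bell(n, x):
    # Complete exponential Bell polynomials B_0..B_n at x = (x1, x2, ...),
    # via the recurrence B_{m+1} = sum_i C(m, i) * B_{m-i} * x_{i+1}.
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, i) * B[m - i] * x[i] for i in range(m + 1)))
    return B

# Cumulants of a normal distribution with mean 2 and variance 3:
mu, var = 2, 3
kappa = [mu, var, 0, 0]
moments = complete_bell(4, kappa)
# Raw moments of N(2, 3): E[X] = 2, E[X^2] = var + mu^2 = 7,
# E[X^3] = mu^3 + 3*mu*var = 26, E[X^4] = mu^4 + 6*mu^2*var + 3*var^2 = 115
assert moments[1:] == [2, 7, 26, 115]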
https://en.wikipedia.org/wiki/Bell_polynomials
TheCatalan numbersare asequenceofnatural numbersthat occur in variouscounting problems, often involvingrecursivelydefined objects. They are named afterEugène Catalan, though they were previously discovered in the 1730s byMinggatu. Then-th Catalan number can be expressed directly in terms of thecentral binomial coefficientsby The first Catalan numbers forn= 0, 1, 2, 3, ...are An alternative expression forCnis which is equivalent to the expression given above because(2nn+1)=nn+1(2nn){\displaystyle {\tbinom {2n}{n+1}}={\tfrac {n}{n+1}}{\tbinom {2n}{n}}}. This expression shows thatCnis aninteger, which is not immediately obvious from the first formula given. This expression forms the basis for aproof of the correctness of the formula. Another alternative expression is which can be directly interpreted in terms of thecycle lemma; see below. The Catalan numbers satisfy therecurrence relations and Asymptotically, the Catalan numbers grow asCn∼4nn3/2π,{\displaystyle C_{n}\sim {\frac {4^{n}}{n^{3/2}{\sqrt {\pi }}}}\,,}in the sense that the quotient of then-th Catalan number and the expression on the right tends towards 1 asnapproaches infinity. This can be proved by using theasymptotic growth of the central binomial coefficients, byStirling's approximationforn!{\displaystyle n!}, orvia generating functions. The only Catalan numbersCnthat are odd are those for whichn= 2k− 1; all others are even. The only prime Catalan numbers areC2= 2andC3= 5.[1]More generally, the multiplicity with which a primepdividesCncan be determined by first expressingn+ 1in basep. Forp= 2, the multiplicity is the number of 1 bits, minus 1. Forpan odd prime, count all digits greater than(p+ 1) / 2; also count digits equal to(p+ 1) / 2unless final; and count digits equal to(p− 1) / 2if not final and the next digit is counted.[2]The only known odd Catalan numbers that do not have last digit 5 areC0= 1,C1= 1,C7= 429,C31,C127andC255. The odd Catalan numbers,Cnforn= 2k− 1, do not have last digit 5 ifn+ 1has a base 5 representation containing 0, 1 and 2 only, except in the least significant place, which could also be a 3.[3] The Catalan numbers have the integral representations[4][5] which immediately yields∑n=0∞Cn4n=2{\displaystyle \sum _{n=0}^{\infty }{\frac {C_{n}}{4^{n}}}=2}. This has a simple probabilistic interpretation. Consider a random walk on the integer line, starting at 0. Let -1 be a "trap" state, such that if the walker arrives at -1, it will remain there. The walker can arrive at the trap state at times 1, 3, 5, 7..., and the number of ways the walker can arrive at the trap state at time2k+1{\displaystyle 2k+1}isCk{\displaystyle C_{k}}. Since the 1D random walk is recurrent, the probability that the walker eventually arrives at -1 is∑n=0∞Cn22n+1=1{\displaystyle \sum _{n=0}^{\infty }{\frac {C_{n}}{2^{2n+1}}}=1}. There are many counting problems incombinatoricswhose solution is given by the Catalan numbers. The bookEnumerative Combinatorics: Volume 2by combinatorialistRichard P. Stanleycontains a set of exercises which describe 66 different interpretations of the Catalan numbers. Following are some examples, with illustrations of the casesC3= 5andC4= 14. The following diagrams show the casen= 4: This can be represented by listing the Catalan elements by column height:[8] There are several ways of explaining why the formula solves the combinatorial problems listed above. The first proof below uses agenerating function. 
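Before turning to those proofs, the closed forms and recurrences above are easy to verify numerically. A minimal sketch in Python:

from math import comb

def catalan(n):
    # n-th Catalan number via the central binomial coefficient.
    return comb(2 * n, n) // (n + 1)

assert [catalan(n) for n in range(8)] == [1, 1, 2, 5, 14, 42, 132, 429]

for n in range(1, 12):
    # Alternative expression: C_n = C(2n, n) - C(2n, n+1)
    assert catalan(n) == comb(2 * n, n) - comb(2 * n, n + 1)
    # Segner's recurrence: C_n = sum_{i=0}^{n-1} C_i * C_{n-1-i}
    assert catalan(n) == sum(catalan(i) * catalan(n - 1 - i) for i in range(n))
    # First-order recurrence: (n + 1) C_n = 2 (2n - 1) C_{n-1}
    assert (n + 1) * catalan(n) == 2 * (2 * n - 1) * catalan(n - 1)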
The other proofs are examples ofbijective proofs; they involve literally counting a collection of some kind of object to arrive at the correct formula. We first observe that all of the combinatorial problems listed above satisfySegner's[9]recurrence relation For example, every Dyck wordwof length ≥ 2 can be written in a unique way in the form with (possibly empty) Dyck wordsw1andw2. Thegenerating functionfor the Catalan numbers is defined by The recurrence relation given above can then be summarized in generating function form by the relation in other words, this equation follows from the recurrence relation by expanding both sides intopower series. On the one hand, the recurrence relation uniquely determines the Catalan numbers; on the other hand, interpretingxc2−c+ 1 = 0as aquadratic equationofcand using thequadratic formula, the generating function relation can be algebraically solved to yield two solution possibilities From the two possibilities, the second must be chosen because only the second gives The square root term can be expanded as a power series using thebinomial series 1−1−4x=−∑n=1∞(1/2n)(−4x)n=−∑n=1∞(−1)n−1(2n−3)!!2nn!(−4x)n=−∑n=0∞(−1)n(2n−1)!!2n+1(n+1)!(−4x)n+1=∑n=0∞2n+1(2n−1)!!(n+1)!xn+1=∑n=0∞2(2n)!(n+1)!n!xn+1=∑n=0∞2n+1(2nn)xn+1.{\displaystyle {\begin{aligned}1-{\sqrt {1-4x}}&=-\sum _{n=1}^{\infty }{\binom {1/2}{n}}(-4x)^{n}=-\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}(2n-3)!!}{2^{n}n!}}(-4x)^{n}\\&=-\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n-1)!!}{2^{n+1}(n+1)!}}(-4x)^{n+1}=\sum _{n=0}^{\infty }{\frac {2^{n+1}(2n-1)!!}{(n+1)!}}x^{n+1}\\&=\sum _{n=0}^{\infty }{\frac {2(2n)!}{(n+1)!n!}}x^{n+1}=\sum _{n=0}^{\infty }{\frac {2}{n+1}}{\binom {2n}{n}}x^{n+1}\,.\end{aligned}}}Thus,c(x)=1−1−4x2x=∑n=0∞1n+1(2nn)xn.{\displaystyle c(x)={\frac {1-{\sqrt {1-4x}}}{2x}}=\sum _{n=0}^{\infty }{\frac {1}{n+1}}{\binom {2n}{n}}x^{n}\,.} We count the number of paths which start and end on the diagonal of ann×ngrid. All such paths havenright andnup steps. Since we can choose which of the2nsteps are up or right, there are in total(2nn){\displaystyle {\tbinom {2n}{n}}}monotonic paths of this type. Abadpath crosses the main diagonal and touches the next higher diagonal (red in the illustration). The part of the path after the higher diagonal is then flipped about that diagonal, as illustrated with the red dotted line. This swaps all the right steps to up steps and vice versa. In the section of the path that is not reflected, there is one more up step than right steps, so therefore the remaining section of the bad path has one more right step than up steps. When this portion of the path is reflected, it will have one more up step than right steps. Since there are still2nsteps, there are nown+ 1up steps andn− 1right steps. So, instead of reaching(n,n), all bad paths after reflection end at(n− 1,n+ 1). Because every monotonic path in the(n− 1) × (n+ 1)grid meets the higher diagonal, and because the reflection process is reversible, the reflection is therefore a bijection between bad paths in the original grid and monotonic paths in the new grid. The number of bad paths is therefore: and the number of Catalan paths (i.e. good paths) is obtained by removing the number of bad paths from the total number of monotonic paths of the original grid, In terms of Dyck words, we start with a (non-Dyck) sequence ofnX's andnY's and interchange all X's and Y's after the first Y that violates the Dyck condition. After this Y, note that there is exactly one more Y than there are Xs. 
This bijective proof provides a natural explanation for the termn+ 1appearing in the denominator of the formula forCn. A generalized version of this proof can be found in a paper of Rukavicka Josef (2011).[10] Given a monotonic path, theexceedanceof the path is defined to be the number ofverticaledges above the diagonal. For example, in Figure 2, the edges above the diagonal are marked in red, so the exceedance of this path is 5. Given a monotonic path whose exceedance is not zero, we apply the following algorithm to construct a new path whose exceedance is1less than the one we started with. In Figure 3, the black dot indicates the point where the path first crosses the diagonal. The black edge isX, and we place the last lattice point of the red portion in the top-right corner, and the first lattice point of the green portion in the bottom-left corner, and place X accordingly, to make a new path, shown in the second diagram. The exceedance has dropped from3to2. In fact, the algorithm causes the exceedance to decrease by1for any path that we feed it, because the first vertical step starting on the diagonal (at the point marked with a black dot) is the only vertical edge that changes from being above the diagonal to being below it when we apply the algorithm - all the other vertical edges stay on the same side of the diagonal. It can be seen that this process isreversible: given any pathPwhose exceedance is less thann, there is exactly one path which yieldsPwhen the algorithm is applied to it. Indeed, the (black) edgeX, which originally was the first horizontal step ending on the diagonal, has become thelasthorizontal stepstartingon the diagonal. Alternatively, reverse the original algorithm to look for the first edge that passesbelowthe diagonal. This implies that the number of paths of exceedancenis equal to the number of paths of exceedancen− 1, which is equal to the number of paths of exceedancen− 2, and so on, down to zero. In other words, we have split up the set ofallmonotonic paths inton+ 1equally sized classes, corresponding to the possible exceedances between 0 andn. Since there are(2nn){\displaystyle \textstyle {2n \choose n}}monotonic paths, we obtain the desired formulaCn=1n+1(2nn).{\displaystyle \textstyle C_{n}={\frac {1}{n+1}}{2n \choose n}.} Figure 4 illustrates the situation forn= 3. Each of the 20 possible monotonic paths appears somewhere in the table. The first column shows all paths of exceedance three, which lie entirely above the diagonal. The columns to the right show the result of successive applications of the algorithm, with the exceedance decreasing one unit at a time. There are five rows, that isC3= 5, and the last column displays all paths no higher than the diagonal. Using Dyck words, start with a sequence from(2nn){\displaystyle \textstyle {\binom {2n}{n}}}. LetXd{\displaystyle X_{d}}be the firstXthat brings an initial subsequence to equality, and configure the sequence as(F)Xd(L){\displaystyle (F)X_{d}(L)}. The new sequence isLXF{\displaystyle LXF}. This proof uses the triangulation definition of Catalan numbers to establish a relation betweenCnandCn+1. Given a polygonPwithn+ 2sides and a triangulation, mark one of its sides as the base, and also orient one of its2n+ 1total edges. There are(4n+ 2)Cnsuch marked triangulations for a given base. Given a polygonQwithn+ 3sides and a (different) triangulation, again mark one of its sides as the base. Mark one of the sides other than the base side (and not an inner triangle edge). 
There are(n+ 2)Cn+ 1such marked triangulations for a given base. There is a simple bijection between these two marked triangulations: We can either collapse the triangle inQwhose side is marked (in two ways, and subtract the two that cannot collapse the base), or, in reverse, expand the oriented edge inPto a triangle and mark its new side. Thus Write4n−2n+1Cn−1=Cn.{\displaystyle \textstyle {\frac {4n-2}{n+1}}C_{n-1}=C_{n}.} Because we have Applying the recursion withC0=1{\displaystyle C_{0}=1}gives the result. This proof is based on theDyck wordsinterpretation of the Catalan numbers, soCn{\displaystyle C_{n}}is the number of ways to correctly matchnpairs of brackets. We denote a (possibly empty) correct string withcand its inverse withc'. Since anyccan be uniquely decomposed intoc=(c1)c2{\displaystyle c=(c_{1})c_{2}}, summing over the possible lengths ofc1{\displaystyle c_{1}}immediately gives the recursive definition Letbbe a balanced string of length2n, i.e.bcontains an equal number of({\displaystyle (}and){\displaystyle )}, soBn=(2nn){\displaystyle \textstyle B_{n}={2n \choose n}}. A balanced string can also be uniquely decomposed into either(c)b{\displaystyle (c)b}or)c′(b{\displaystyle )c'(b}, so Any incorrect (non-Catalan) balanced string starts withc){\displaystyle c)}, and the remaining string has one more({\displaystyle (}than){\displaystyle )}, so Also, from the definitions, we have: Therefore, as this is true for alln, This proof is based on theDyck wordsinterpretation of the Catalan numbers and uses thecycle lemmaof Dvoretzky and Motzkin.[11][12] We call a sequence of X's and Y'sdominatingif, reading from left to right, the number of X's is always strictly greater than the number of Y's. The cycle lemma[13]states that any sequence ofm{\displaystyle m}X's andn{\displaystyle n}Y's, wherem>n{\displaystyle m>n}, has preciselym−n{\displaystyle m-n}dominatingcircular shifts. To see this, arrange the given sequence ofm+n{\displaystyle m+n}X's and Y's in a circle. Repeatedly removing XY pairs leaves exactlym−n{\displaystyle m-n}X's. Each of these X's was the start of a dominating circular shift before anything was removed. For example, considerXXYXY{\displaystyle {\mathit {XXYXY}}}. This sequence is dominating, but none of its circular shiftsXYXYX{\displaystyle {\mathit {XYXYX}}},YXYXX{\displaystyle {\mathit {YXYXX}}},XYXXY{\displaystyle {\mathit {XYXXY}}}andYXXYX{\displaystyle {\mathit {YXXYX}}}are. A string is a Dyck word ofn{\displaystyle n}X's andn{\displaystyle n}Y's if and only if prepending an X to the Dyck word gives a dominating sequence withn+1{\displaystyle n+1}X's andn{\displaystyle n}Y's, so we can count the former by instead counting the latter. In particular, whenm=n+1{\displaystyle m=n+1}, there is exactly one dominating circular shift. There are(2n+1n){\displaystyle \textstyle {2n+1 \choose n}}sequences with exactlyn+1{\displaystyle n+1}X's andn{\displaystyle n}Y's. For each of these, only one of the2n+1{\displaystyle 2n+1}circular shifts is dominating. Therefore there are12n+1(2n+1n)=Cn{\displaystyle \textstyle {\frac {1}{2n+1}}{2n+1 \choose n}=C_{n}}distinct sequences ofn+1{\displaystyle n+1}X's andn{\displaystyle n}Y's that are dominating, each of which corresponds to exactly one Dyck word. Then×nHankel matrixwhose(i,j)entry is the Catalan numberCi+j−2hasdeterminant1, regardless of the value ofn. 
For example, for n = 4 we have

Moreover, if the indexing is "shifted" so that the (i, j) entry is filled with the Catalan number C_{i+j−1}, then the determinant is still 1, regardless of the value of n. For example, for n = 4 we have

Taken together, these two conditions uniquely define the Catalan numbers. Another feature unique to the Catalan–Hankel matrix is that the n×n submatrix starting at 2 has determinant n + 1.

et cetera.

The Catalan sequence was described in 1751 by Leonhard Euler, who was interested in the number of different ways of dividing a polygon into triangles. The sequence is named after Eugène Charles Catalan, who discovered the connection to parenthesized expressions during his exploration of the Towers of Hanoi puzzle. The reflection counting trick (second proof) for Dyck words was found by Désiré André in 1887. The name "Catalan numbers" originated from John Riordan.[14]

In 1988, it came to light that the Catalan number sequence had been used in China by the Mongolian mathematician Mingantu by 1730.[15][16] That is when he started to write his book Ge Yuan Mi Lu Jie Fa [The Quick Method for Obtaining the Precise Ratio of Division of a Circle], which was completed by his student Chen Jixin in 1774 but published sixty years later. Peter J. Larcombe (1999) sketched some of the features of the work of Mingantu, including the stimulus of Pierre Jartoux, who brought three infinite series to China early in the 1700s. For instance, Ming used the Catalan sequence to express series expansions of sin⁡(2α){\displaystyle \sin(2\alpha )} and sin⁡(4α){\displaystyle \sin(4\alpha )} in terms of sin⁡(α){\displaystyle \sin(\alpha )}.

The Catalan numbers can be interpreted as a special case of Bertrand's ballot theorem. Specifically, Cn{\displaystyle C_{n}} is the number of ways for a candidate A with n + 1 votes to lead candidate B with n votes.

The two-parameter sequence of non-negative integers (2m)!(2n)!(m+n)!m!n!{\displaystyle {\frac {(2m)!(2n)!}{(m+n)!m!n!}}} is a generalization of the Catalan numbers. These are named super-Catalan numbers, per Ira Gessel. They should not be confused with the Schröder–Hipparchus numbers, which are sometimes also called super-Catalan numbers. For m=1{\displaystyle m=1}, this is just two times the ordinary Catalan numbers, and for m=n{\displaystyle m=n}, the numbers have an easy combinatorial description. However, other combinatorial descriptions are only known[17] for m=2,3{\displaystyle m=2,3} and 4{\displaystyle 4},[18] and it is an open problem to find a general combinatorial interpretation.

Sergey Fomin and Nathan Reading have given a generalized Catalan number associated to any finite crystallographic Coxeter group, namely the number of fully commutative elements of the group; in terms of the associated root system, it is the number of anti-chains (or order ideals) in the poset of positive roots. The classical Catalan number Cn{\displaystyle C_{n}} corresponds to the root system of type An{\displaystyle A_{n}}. The classical recurrence relation generalizes: the Catalan number of a Coxeter diagram is equal to the sum of the Catalan numbers of all its maximal proper sub-diagrams.[19]

The Catalan numbers are a solution of a version of the Hausdorff moment problem.[20]

The Catalan k-fold convolution, where k = m, is:[21]
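Returning to the Hankel determinant property established at the start of this section, it can be confirmed with exact rational arithmetic. A minimal sketch in Python, using a hand-rolled fraction-exact determinant:

from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def det(M):
    # Determinant by fraction-exact Gaussian elimination.
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

for n in range(1, 7):
    H0 = [[catalan(i + j) for j in range(n)] for i in range(n)]      # entries C_{i+j-2}, 1-indexed
    H1 = [[catalan(i + j + 1) for j in range(n)] for i in range(n)]  # shifted indexing
    assert det(H0) == 1 and det(H1) == 1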
https://en.wikipedia.org/wiki/Catalan_number
In mathematics, the cycles of a permutation π of a finite set S correspond bijectively to the orbits of the subgroup generated by π acting on S. These orbits are subsets of S that can be written as {c1, ..., cn}, such that

The corresponding cycle of π is written as (c1 c2 ... cn); this expression is not unique since c1 can be chosen to be any element of the orbit. The size n of the orbit is called the length of the corresponding cycle; when n = 1, the single element in the orbit is called a fixed point of the permutation.

A permutation is determined by giving an expression for each of its cycles, and one notation for permutations consists of writing such expressions one after another in some order. For example, let

be a permutation that maps 1 to 2, 6 to 8, etc. Then one may write

Here 5 and 7 are fixed points of π, since π(5) = 5 and π(7) = 7. It is typical, but not necessary, to not write the cycles of length one in such an expression.[1] Thus, π = (1 2 4 3)(6 8) would be an appropriate way to express this permutation.

There are different ways to write a permutation as a list of its cycles, but the number of cycles and their contents are given by the partition of S into orbits, and these are therefore the same for all such expressions.

The unsigned Stirling number of the first kind, s(k, j), counts the number of permutations of k elements with exactly j disjoint cycles.[2][3]

The value f(k, j) counts the number of permutations of k elements with exactly j fixed points. For the main article on this topic, see rencontres numbers. There are three different methods to construct a permutation of k elements with j fixed points.
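The cycle decomposition itself is straightforward to compute: repeatedly pick an unvisited element and follow the permutation until its orbit closes. A minimal sketch in Python, applied to the example permutation above:

def cycles(perm):
    # Cycle decomposition of a permutation given as a dict i -> perm(i).
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

# The example permutation pi = (1 2 4 3)(6 8), with 5 and 7 as fixed points:
pi = {1: 2, 2: 4, 3: 1, 4: 3, 5: 5, 6: 8, 7: 7, 8: 6}
assert cycles(pi) == [(1, 2, 4, 3), (5,), (6, 8), (7,)]
# Dropping the length-one cycles recovers the notation (1 2 4 3)(6 8).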
https://en.wikipedia.org/wiki/Cycles_and_fixed_points
Inmathematics, thefalling factorial(sometimes called thedescending factorial,[1]falling sequential product, orlower factorial) is defined as the polynomial(x)n=xn_=x(x−1)(x−2)⋯(x−n+1)⏞nfactors=∏k=1n(x−k+1)=∏k=0n−1(x−k).{\displaystyle {\begin{aligned}(x)_{n}=x^{\underline {n}}&=\overbrace {x(x-1)(x-2)\cdots (x-n+1)} ^{n{\text{ factors}}}\\&=\prod _{k=1}^{n}(x-k+1)=\prod _{k=0}^{n-1}(x-k).\end{aligned}}} Therising factorial(sometimes called thePochhammer function,Pochhammer polynomial,ascending factorial,[1]rising sequential product, orupper factorial) is defined asx(n)=xn¯=x(x+1)(x+2)⋯(x+n−1)⏞nfactors=∏k=1n(x+k−1)=∏k=0n−1(x+k).{\displaystyle {\begin{aligned}x^{(n)}=x^{\overline {n}}&=\overbrace {x(x+1)(x+2)\cdots (x+n-1)} ^{n{\text{ factors}}}\\&=\prod _{k=1}^{n}(x+k-1)=\prod _{k=0}^{n-1}(x+k).\end{aligned}}} The value of each is taken to be 1 (anempty product) whenn=0{\displaystyle n=0}. These symbols are collectively calledfactorial powers.[2] ThePochhammer symbol, introduced byLeo August Pochhammer, is the notation(x)n{\displaystyle (x)_{n}}, wherenis anon-negative integer. It may representeitherthe rising or the falling factorial, with different articles and authors using different conventions. Pochhammer himself actually used(x)n{\displaystyle (x)_{n}}with yet another meaning, namely to denote thebinomial coefficient(xn){\displaystyle {\tbinom {x}{n}}}.[3] In this article, the symbol(x)n{\displaystyle (x)_{n}}is used to represent the falling factorial, and the symbolx(n){\displaystyle x^{(n)}}is used for the rising factorial. These conventions are used incombinatorics,[4]althoughKnuth's underline and overline notationsxn_{\displaystyle x^{\underline {n}}}andxn¯{\displaystyle x^{\overline {n}}}are increasingly popular.[2][5]In the theory ofspecial functions(in particular thehypergeometric function) and in the standard reference workAbramowitz and Stegun, the Pochhammer symbol(x)n{\displaystyle (x)_{n}}is used to represent the rising factorial.[6][7] Whenx{\displaystyle x}is a positive integer,(x)n{\displaystyle (x)_{n}}gives the number ofn-permutations(sequences of distinct elements) from anx-element set, or equivalently the number ofinjective functionsfrom a set of sizen{\displaystyle n}to a set of sizex{\displaystyle x}. The rising factorialx(n){\displaystyle x^{(n)}}gives the number ofpartitionsof ann{\displaystyle n}-element set intox{\displaystyle x}ordered sequences (possibly empty).[a] The first few falling factorials are as follows: (x)0=1(x)1=x(x)2=x(x−1)=x2−x(x)3=x(x−1)(x−2)=x3−3x2+2x(x)4=x(x−1)(x−2)(x−3)=x4−6x3+11x2−6x{\displaystyle {\begin{alignedat}{2}(x)_{0}&&&=1\\(x)_{1}&&&=x\\(x)_{2}&=x(x-1)&&=x^{2}-x\\(x)_{3}&=x(x-1)(x-2)&&=x^{3}-3x^{2}+2x\\(x)_{4}&=x(x-1)(x-2)(x-3)&&=x^{4}-6x^{3}+11x^{2}-6x\end{alignedat}}} The first few rising factorials are as follows: x(0)=1x(1)=xx(2)=x(x+1)=x2+xx(3)=x(x+1)(x+2)=x3+3x2+2xx(4)=x(x+1)(x+2)(x+3)=x4+6x3+11x2+6x{\displaystyle {\begin{alignedat}{2}x^{(0)}&&&=1\\x^{(1)}&&&=x\\x^{(2)}&=x(x+1)&&=x^{2}+x\\x^{(3)}&=x(x+1)(x+2)&&=x^{3}+3x^{2}+2x\\x^{(4)}&=x(x+1)(x+2)(x+3)&&=x^{4}+6x^{3}+11x^{2}+6x\end{alignedat}}} The coefficients that appear in the expansions areStirling numbers of the first kind(see below). When the variablex{\displaystyle x}is a positive integer, the number(x)n{\displaystyle (x)_{n}}is equal to the number ofn-permutations from a set ofxitems, that is, the number of ways of choosing an ordered list of lengthnconsisting of distinct elements drawn from a collection of sizex{\displaystyle x}. 
For example,(8)3=8×7×6=336{\displaystyle (8)_{3}=8\times 7\times 6=336}is the number of different podiums—assignments of gold, silver, and bronze medals—possible in an eight-person race. On the other hand,x(n){\displaystyle x^{(n)}}is "the number of ways to arrangen{\displaystyle n}flags onx{\displaystyle x}flagpoles",[8]where all flags must be used and each flagpole can have any number of flags. Equivalently, this is the number of ways to partition a set of sizen{\displaystyle n}(the flags) intox{\displaystyle x}distinguishable parts (the poles), with a linear order on the elements assigned to each part (the order of the flags on a given pole). The rising and falling factorials are simply related to one another:(x)n=(x−n+1)(n)=(−1)n(−x)(n),x(n)=(x+n−1)n=(−1)n(−x)n.{\displaystyle {\begin{alignedat}{2}{(x)}_{n}&={(x-n+1)}^{(n)}&&=(-1)^{n}(-x)^{(n)},\\x^{(n)}&={(x+n-1)}_{n}&&=(-1)^{n}(-x)_{n}.\end{alignedat}}} Falling and rising factorials of integers are directly related to the ordinaryfactorial:n!=1(n)=(n)n,(m)n=m!(m−n)!,m(n)=(m+n−1)!(m−1)!.{\displaystyle {\begin{aligned}n!&=1^{(n)}=(n)_{n},\\[6pt](m)_{n}&={\frac {m!}{(m-n)!}},\\[6pt]m^{(n)}&={\frac {(m+n-1)!}{(m-1)!}}.\end{aligned}}} Rising factorials of half integers are directly related to thedouble factorial:[12](n)=(2n−1)!!2n,[2m+12](n)=(2(n+m)−1)!!2n(2m−1)!!.{\displaystyle {\begin{aligned}\left[{\frac {1}{2}}\right]^{(n)}={\frac {(2n-1)!!}{2^{n}}},\quad \left[{\frac {2m+1}{2}}\right]^{(n)}={\frac {(2(n+m)-1)!!}{2^{n}(2m-1)!!}}.\end{aligned}}} The falling and rising factorials can be used to express abinomial coefficient:(x)nn!=(xn),x(n)n!=(x+n−1n).{\displaystyle {\begin{aligned}{\frac {(x)_{n}}{n!}}&={\binom {x}{n}},\\[6pt]{\frac {x^{(n)}}{n!}}&={\binom {x+n-1}{n}}.\end{aligned}}} Thus many identities on binomial coefficients carry over to the falling and rising factorials. The rising and falling factorials are well defined in anyunitalring, and thereforex{\displaystyle x}can be taken to be, for example, acomplex number, including negative integers, or apolynomialwith complex coefficients, or anycomplex-valued function. The falling factorial can be extended torealvalues ofx{\displaystyle x}using thegamma functionprovidedx{\displaystyle x}andx+n{\displaystyle x+n}are real numbers that are not negative integers:(x)n=Γ(x+1)Γ(x−n+1),{\displaystyle (x)_{n}={\frac {\Gamma (x+1)}{\Gamma (x-n+1)}}\ ,}and so can the rising factorial:x(n)=Γ(x+n)Γ(x).{\displaystyle x^{(n)}={\frac {\Gamma (x+n)}{\Gamma (x)}}\ .} Falling factorials appear in multipledifferentiationof simple power functions:(ddx)nxa=(a)n⋅xa−n.{\displaystyle \left({\frac {\mathrm {d} }{\mathrm {d} x}}\right)^{n}x^{a}=(a)_{n}\cdot x^{a-n}.} The rising factorial is also integral to the definition of thehypergeometric function: The hypergeometric function is defined for|z|<1{\displaystyle |z|<1}by thepower series2F1(a,b;c;z)=∑n=0∞a(n)b(n)c(n)znn!{\displaystyle {}_{2}F_{1}(a,b;c;z)=\sum _{n=0}^{\infty }{\frac {a^{(n)}b^{(n)}}{c^{(n)}}}{\frac {z^{n}}{n!}}}provided thatc≠0,−1,−2,…{\displaystyle c\neq 0,-1,-2,\ldots }. Note, however, that the hypergeometric function literature typically uses the notation(a)n{\displaystyle (a)_{n}}for rising factorials. Falling and rising factorials are closely related toStirling numbers. 
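The gamma-function extension is easy to check numerically; a small sketch, assuming Python's math module (the function name falling_factorial_real is ours):

```python
from math import factorial, gamma, isclose, prod

def falling_factorial_real(x, n):
    """(x)_n = Gamma(x + 1) / Gamma(x - n + 1), valid as long as neither
    argument of Gamma lands on a non-positive integer."""
    return gamma(x + 1) / gamma(x - n + 1)

x, n = 2.5, 3
direct = prod(x - k for k in range(n))               # 2.5 * 1.5 * 0.5 = 1.875
assert isclose(falling_factorial_real(x, n), direct)

# Binomial-coefficient identity (x)_n / n! = C(x, n), also for real x:
assert isclose(falling_factorial_real(x, n) / factorial(n), 0.3125)
```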
Indeed, expanding the product revealsStirling numbers of the first kind(x)n=∑k=0ns(n,k)xk=∑k=0n[nk](−1)n−kxkx(n)=∑k=0n[nk]xk{\displaystyle {\begin{aligned}(x)_{n}&=\sum _{k=0}^{n}s(n,k)x^{k}\\&=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}(-1)^{n-k}x^{k}\\x^{(n)}&=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}x^{k}\\\end{aligned}}} The inverse relations useStirling numbers of the second kindxn=∑k=0n{nk}(x)k=∑k=0n{nk}(−1)n−kx(k).{\displaystyle {\begin{aligned}x^{n}&=\sum _{k=0}^{n}{\begin{Bmatrix}n\\k\end{Bmatrix}}(x)_{k}\\&=\sum _{k=0}^{n}{\begin{Bmatrix}n\\k\end{Bmatrix}}(-1)^{n-k}x^{(k)}.\end{aligned}}} The falling and rising factorials are related to one another through theLah numbersL(n,k)=(n−1k−1)n!k!{\textstyle L(n,k)={\binom {n-1}{k-1}}{\frac {n!}{k!}}}:[9]x(n)=∑k=0nL(n,k)(x)k(x)n=∑k=0nL(n,k)(−1)n−kx(k){\displaystyle {\begin{aligned}x^{(n)}&=\sum _{k=0}^{n}L(n,k)(x)_{k}\\(x)_{n}&=\sum _{k=0}^{n}L(n,k)(-1)^{n-k}x^{(k)}\end{aligned}}} Since the falling factorials are a basis for thepolynomial ring, one can express the product of two of them as alinear combinationof falling factorials:[10](x)m(x)n=∑k=0m(mk)(nk)k!⋅(x)m+n−k.{\displaystyle (x)_{m}(x)_{n}=\sum _{k=0}^{m}{\binom {m}{k}}{\binom {n}{k}}k!\cdot (x)_{m+n-k}\ .} The coefficients(mk)(nk)k!{\displaystyle {\tbinom {m}{k}}{\tbinom {n}{k}}k!}are calledconnection coefficients, and have a combinatorial interpretation as the number of ways to identify (or "glue together")kelements each from a set of sizemand a set of sizen. There is also a connection formula for the ratio of two rising factorials given byx(n)x(i)=(x+i)(n−i),forn≥i.{\displaystyle {\frac {x^{(n)}}{x^{(i)}}}=(x+i)^{(n-i)},\quad {\text{for }}n\geq i.} Additionally, we can expand generalized exponent laws and negative rising and falling powers through the following identities:[11](p 52) (x)m+n=(x)m(x−m)n=(x)n(x−n)mx(m+n)=x(m)(x+m)(n)=x(n)(x+n)(m)x(−n)=Γ(x−n)Γ(x)=(x−n−1)!(x−1)!=1(x−n)(n)=1(x−1)n=1(x−1)(x−2)⋯(x−n)(x)−n=Γ(x+1)Γ(x+n+1)=x!(x+n)!=1(x+n)n=1(x+1)(n)=1(x+1)(x+2)⋯(x+n){\displaystyle {\begin{aligned}(x)_{m+n}&=(x)_{m}(x-m)_{n}=(x)_{n}(x-n)_{m}\\[6pt]x^{(m+n)}&=x^{(m)}(x+m)^{(n)}=x^{(n)}(x+n)^{(m)}\\[6pt]x^{(-n)}&={\frac {\Gamma (x-n)}{\Gamma (x)}}={\frac {(x-n-1)!}{(x-1)!}}={\frac {1}{(x-n)^{(n)}}}={\frac {1}{(x-1)_{n}}}={\frac {1}{(x-1)(x-2)\cdots (x-n)}}\\[6pt](x)_{-n}&={\frac {\Gamma (x+1)}{\Gamma (x+n+1)}}={\frac {x!}{(x+n)!}}={\frac {1}{(x+n)_{n}}}={\frac {1}{(x+1)^{(n)}}}={\frac {1}{(x+1)(x+2)\cdots (x+n)}}\end{aligned}}} Finally,duplicationandmultiplication formulasfor the falling and rising factorials provide the following relations:(x)k+mn=(x)kmmn∏j=0m−1(x−k−jm)n,form∈Nx(k+mn)=x(k)mmn∏j=0m−1(x+k+jm)(n),form∈N(ax+b)(n)=xn∏j=0n−1(a+b+jx),forx∈Z+(2x)(2n)=22nx(n)(x+12)(n).{\displaystyle {\begin{aligned}(x)_{k+mn}&=(x)_{k}m^{mn}\prod _{j=0}^{m-1}\left({\frac {x-k-j}{m}}\right)_{n}\,,&{\text{for }}m&\in \mathbb {N} \\[6pt]x^{(k+mn)}&=x^{(k)}m^{mn}\prod _{j=0}^{m-1}\left({\frac {x+k+j}{m}}\right)^{(n)},&{\text{for }}m&\in \mathbb {N} \\[6pt](ax+b)^{(n)}&=x^{n}\prod _{j=0}^{n-1}\left(a+{\frac {b+j}{x}}\right),&{\text{for }}x&\in \mathbb {Z} ^{+}\\[6pt](2x)^{(2n)}&=2^{2n}x^{(n)}\left(x+{\frac {1}{2}}\right)^{(n)}.\end{aligned}}} The falling factorial occurs in a formula which representspolynomialsusing the forwarddifference operatorΔ⁡f(x)=deff(x+1)−f(x),{\displaystyle \ \operatorname {\Delta } f(x)~{\stackrel {\mathrm {def} }{=}}~f(x{+}1)-f(x)\ ,}which in form is an exact analogue toTaylor's theorem: Compare the series expansion fromumbral calculus,f(x)=∑n=0∞Δnf(0)n!(x)n,{\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {\Delta ^{n}f(0)}{n!}}\,(x)_{n}\,,}with the
corresponding series fromdifferential calculus,f(x)=∑n=0∞f(n)(0)n!xn.{\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\,x^{n}~.} In this formula and in many other places, the falling factorial(x)n{\displaystyle \ (x)_{n}\ }in the calculus offinite differencesplays the role ofxn{\displaystyle \ x^{n}\ }in differential calculus. For another example, note the similarity ofΔ⁡(x)n=n(x)n−1{\displaystyle ~\operatorname {\Delta } (x)_{n}=n\ (x)_{n-1}~}todd⁡xxn=nxn−1.{\displaystyle ~{\frac {\ \operatorname {d} }{\operatorname {d} x}}\ x^{n}=n\ x^{n-1}~.} A corresponding relation holds for the rising factorial and the backward difference operator. The study of analogies of this type is known asumbral calculus. A general theory covering such relations, including the falling and rising factorial functions, is given by the theory ofpolynomial sequences of binomial typeandSheffer sequences. Falling and rising factorials are Sheffer sequences of binomial type, as shown by the relations: (a+b)n=∑j=0n(nj)(a)n−j(b)j(a+b)(n)=∑j=0n(nj)a(n−j)b(j){\displaystyle \ {\begin{aligned}(a+b)_{n}&=\sum _{j=0}^{n}\ {\binom {n}{j}}\ (a)_{n-j}\ (b)_{j}\ \\[6pt](a+b)^{(n)}&=\sum _{j=0}^{n}\ {\binom {n}{j}}\ a^{(n-j)}\ b^{(j)}\ \end{aligned}}\ } where the coefficients are the same as those in thebinomial theorem. Similarly, the generating function of Pochhammer polynomials then amounts to the umbral exponential, ∑n=0∞(x)ntnn!=(1+t)x,{\displaystyle \ \sum _{n=0}^{\infty }\ (x)_{n}\ {\frac {~t^{n}\ }{\ n!}}\ =\ \left(\ 1+t\ \right)^{x}\ ,} since Δx⁡(1+t)x=t⋅(1+t)x.{\displaystyle \ \operatorname {\Delta } _{x}\left(\ 1+t\ \right)^{x}\ =\ t\cdot \left(\ 1+t\ \right)^{x}~.} An alternative notation for the rising factorialxm¯≡(x)+m≡(x)m=x(x+1)…(x+m−1)⏞mfactorsfor integerm≥0{\displaystyle x^{\overline {m}}\equiv (x)_{+m}\equiv (x)_{m}=\overbrace {x(x+1)\ldots (x+m-1)} ^{m{\text{ factors}}}\quad {\text{for integer }}m\geq 0} and for the falling factorialxm_≡(x)−m=x(x−1)…(x−m+1)⏞mfactorsfor integerm≥0{\displaystyle x^{\underline {m}}\equiv (x)_{-m}=\overbrace {x(x-1)\ldots (x-m+1)} ^{m{\text{ factors}}}\quad {\text{for integer }}m\geq 0} goes back to A. Capelli (1893) and L. Toscano (1939), respectively.[2]Graham, Knuth, and Patashnik[11](pp 47, 48)propose to pronounce these expressions as "x{\displaystyle x}to them{\displaystyle m}rising" and "x{\displaystyle x}to them{\displaystyle m}falling", respectively. An alternative notation for the rising factorialx(n){\displaystyle x^{(n)}}is the less common(x)n+{\displaystyle (x)_{n}^{+}}. When(x)n+{\displaystyle (x)_{n}^{+}}is used to denote the rising factorial, the notation(x)n−{\displaystyle (x)_{n}^{-}}is typically used for the ordinary falling factorial, to avoid confusion.[3] The Pochhammer symbol has a generalized version called thegeneralized Pochhammer symbol, used in multivariateanalysis. There is also aq-analogue, theq-Pochhammer symbol.
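These finite-difference and binomial-type relations can be spot-checked in a few lines. The sketch below (the helper names ff and delta are ours) verifies Δ(x)_n = n(x)_{n−1}, the binomial-type convolution, and the umbral generating function for a nonnegative integer x, where the series terminates because (x)_n = 0 once n > x.

```python
from math import comb, factorial, isclose, prod

def ff(x, n):
    """Falling factorial (x)_n."""
    return prod(x - k for k in range(n))

def delta(f, x):
    """Forward difference operator: (delta f)(x) = f(x + 1) - f(x)."""
    return f(x + 1) - f(x)

# Delta (x)_n = n (x)_{n-1}, the analogue of d/dx x^n = n x^{n-1}
for n in range(1, 6):
    for x in range(-3, 4):
        assert delta(lambda t: ff(t, n), x) == n * ff(x, n - 1)

# Binomial type: (a + b)_n = sum_j C(n, j) (a)_{n-j} (b)_j
a, b, n = 4, 6, 5
assert ff(a + b, n) == sum(comb(n, j) * ff(a, n - j) * ff(b, j) for j in range(n + 1))

# Umbral exponential: sum_n (x)_n t^n / n! = (1 + t)^x for integer x >= 0
x, t = 5, 0.3
assert isclose(sum(ff(x, n) * t ** n / factorial(n) for n in range(x + 1)), (1 + t) ** x)
```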
For any fixed arithmetic functionf:N→C{\displaystyle f:\mathbb {N} \rightarrow \mathbb {C} }and symbolic parametersx,t, related generalized factorial products of the form (x)n,f,t:=∏k=0n−1(x+f(k)tk){\displaystyle (x)_{n,f,t}:=\prod _{k=0}^{n-1}\left(x+{\frac {f(k)}{t^{k}}}\right)} may be studied from the point of view of the classes of generalizedStirling numbers of the first kind, defined by the following coefficients of the powers ofxin the expansions of(x)n,f,tand then by the corresponding triangular recurrence relation: [nk]f,t=[xk−1](x)n,f,t=f(n−1)t1−n[n−1k]f,t+[n−1k−1]f,t+δn,0δk,0.{\displaystyle {\begin{aligned}\left[{\begin{matrix}n\\k\end{matrix}}\right]_{f,t}&=\left[x^{k-1}\right](x)_{n,f,t}\\&=f(n-1)t^{1-n}\left[{\begin{matrix}n-1\\k\end{matrix}}\right]_{f,t}+\left[{\begin{matrix}n-1\\k-1\end{matrix}}\right]_{f,t}+\delta _{n,0}\delta _{k,0}.\end{aligned}}} These coefficients satisfy a number of properties analogous to those of theStirling numbers of the first kind, as well as recurrence relations and functional equations related to thef-harmonic numbers,[12]Fn(r)(t):=∑k≤ntkf(k)r.{\displaystyle F_{n}^{(r)}(t):=\sum _{k\leq n}{\frac {t^{k}}{f(k)^{r}}}\,.}
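Under the stated definition these generalized products can be expanded directly; the sketch below (the function name generalized_factorial_coeffs is ours) multiplies out (x)_{n,f,t} in exact rational arithmetic, and for f(k) = k, t = 1 it recovers the rising factorial x^(4) = x^4 + 6x^3 + 11x^2 + 6x, whose coefficients are unsigned Stirling numbers of the first kind.

```python
from fractions import Fraction

def generalized_factorial_coeffs(n, f, t):
    """Coefficient list (constant term first) of
    (x)_{n,f,t} = prod_{k=0}^{n-1} (x + f(k) / t**k), expanded exactly."""
    poly = [Fraction(1)]                      # empty product for n = 0
    for k in range(n):
        c = Fraction(f(k)) / Fraction(t) ** k
        shifted = [Fraction(0)] + poly        # x * poly
        scaled = [c * a for a in poly] + [Fraction(0)]
        poly = [u + v for u, v in zip(shifted, scaled)]
    return poly

print(generalized_factorial_coeffs(4, lambda k: k, 1))
# [Fraction(0, 1), Fraction(6, 1), Fraction(11, 1), Fraction(6, 1), Fraction(1, 1)]
```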
https://en.wikipedia.org/wiki/Pochhammer_symbol
Inmathematics, apolynomial sequenceis asequenceofpolynomialsindexed by the nonnegativeintegers0, 1, 2, 3, ..., in which eachindexis equal to thedegreeof the corresponding polynomial. Polynomial sequences are a topic of interest inenumerative combinatoricsandalgebraic combinatorics, as well asapplied mathematics. Some polynomial sequences arise inphysicsandapproximation theoryas the solutions of certainordinary differential equations; others come fromstatistics; many are studied inalgebraand combinatorics.
https://en.wikipedia.org/wiki/Polynomial_sequence
TheTouchard polynomials, studied byJacques Touchard(1939),[1]also called theexponential polynomialsorBell polynomials, comprise apolynomial sequenceofbinomial typedefined byTn(x)=∑k=0nS(n,k)xk,{\displaystyle T_{n}(x)=\sum _{k=0}^{n}S(n,k)x^{k},}whereS(n,k)={nk}{\displaystyle S(n,k)=\left\{{n \atop k}\right\}}is aStirling number of the second kind, i.e., the number ofpartitions of a setof sizenintokdisjoint non-empty subsets.[2][3][4][5] The first few Touchard polynomials areT0(x)=1,T1(x)=x,T2(x)=x2+x,T3(x)=x3+3x2+x,T4(x)=x4+6x3+7x2+x.{\displaystyle T_{0}(x)=1,\quad T_{1}(x)=x,\quad T_{2}(x)=x^{2}+x,\quad T_{3}(x)=x^{3}+3x^{2}+x,\quad T_{4}(x)=x^{4}+6x^{3}+7x^{2}+x.} The value at 1 of thenth Touchard polynomial is thenthBell number, i.e., the number ofpartitions of a setof sizen:Tn(1)=Bn.{\displaystyle T_{n}(1)=B_{n}.} IfXis arandom variablewith aPoisson distributionwith expected value λ, then itsnth moment is E(Xn) =Tn(λ), leading to the definition:Tn(x)=e−x∑k=0∞xkknk!.{\displaystyle T_{n}(x)=e^{-x}\sum _{k=0}^{\infty }{\frac {x^{k}k^{n}}{k!}}.} Using this fact one can quickly prove that thispolynomial sequenceis ofbinomial type, i.e., it satisfies the sequence of identities:Tn(λ+μ)=∑k=0n(nk)Tk(λ)Tn−k(μ).{\displaystyle T_{n}(\lambda +\mu )=\sum _{k=0}^{n}{\binom {n}{k}}T_{k}(\lambda )T_{n-k}(\mu ).} The Touchard polynomials constitute the only polynomial sequence of binomial type with the coefficient ofxequal to 1 in every polynomial. The Touchard polynomials satisfy the Rodrigues-like formula:Tn(ex)=e−exdndxneex.{\displaystyle T_{n}\left(e^{x}\right)=e^{-e^{x}}{\frac {d^{n}}{dx^{n}}}e^{e^{x}}.} The Touchard polynomials satisfy therecurrence relationTn+1(x)=x(1+ddx)Tn(x){\displaystyle T_{n+1}(x)=x\left(1+{\frac {d}{dx}}\right)T_{n}(x)} andTn+1(x)=x∑k=0n(nk)Tk(x).{\displaystyle T_{n+1}(x)=x\sum _{k=0}^{n}{\binom {n}{k}}T_{k}(x).} In the casex= 1, this reduces to the recurrence formula for theBell numbers. A generalization of both this formula and the definition is a generalization of Spivey's formula[6] Tn+m(x)=∑k=0n{nk}xk∑j=0m(mj)km−jTj(x){\displaystyle T_{n+m}(x)=\sum _{k=0}^{n}\left\{{n \atop k}\right\}x^{k}\sum _{j=0}^{m}{\binom {m}{j}}k^{m-j}T_{j}(x)} Using theumbral notationTn(x)=Tn(x){\displaystyle T^{n}(x)=T_{n}(x)}, these formulas become:Tn(λ+μ)=(T(λ)+T(μ))n{\displaystyle T_{n}(\lambda +\mu )=\left(T(\lambda )+T(\mu )\right)^{n}}andTn+1(x)=x(1+T(x))n.{\displaystyle T_{n+1}(x)=x\left(1+T(x)\right)^{n}.} Thegenerating functionof the Touchard polynomials is∑n=0∞Tn(x)tnn!=ex(et−1),{\displaystyle \sum _{n=0}^{\infty }T_{n}(x){\frac {t^{n}}{n!}}=e^{x\left(e^{t}-1\right)},}which corresponds to thegenerating function of Stirling numbers of the second kind. Touchard polynomials havecontour integralrepresentation:Tn(x)=n!2πi∮ex(et−1)tn+1dt.{\displaystyle T_{n}(x)={\frac {n!}{2\pi i}}\oint {\frac {e^{x\left(e^{t}-1\right)}}{t^{n+1}}}\,dt.} All zeroes of the Touchard polynomials are real and negative. This fact was observed by L. H. Harper in 1967.[7] The absolute value of the leftmost zero is bounded from above by[8] although it is conjectured that the leftmost zero grows linearly with the indexn. TheMahler measureM(Tn){\displaystyle M(T_{n})}of the Touchard polynomials can be estimated as follows:[9] whereΩn{\displaystyle \Omega _{n}}andKn{\displaystyle K_{n}}are the smallest of the maximum twokindices such that{nk}/(nk){\displaystyle \lbrace \textstyle {n \atop k}\rbrace /{\binom {n}{k}}}and{nk}{\displaystyle \lbrace \textstyle {n \atop k}\rbrace }are maximal, respectively.
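A small numerical check of several of these identities, computing S(n, k) from the standard recurrence S(n, k) = k S(n−1, k) + S(n−1, k−1) (function names are ours):

```python
from functools import lru_cache
from math import comb, exp, factorial, isclose

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(n, x):
    """T_n(x) = sum_k S(n, k) x^k."""
    return sum(stirling2(n, k) * x ** k for k in range(n + 1))

# T_n(1) gives the Bell numbers 1, 1, 2, 5, 15, 52, ...
assert [touchard(n, 1) for n in range(6)] == [1, 1, 2, 5, 15, 52]

# Poisson moments: E[X^n] = T_n(lam); truncate the exponential series.
lam, n = 1.5, 4
moment = exp(-lam) * sum(lam ** k * k ** n / factorial(k) for k in range(60))
assert isclose(moment, touchard(n, lam))

# Binomial type: T_n(a + b) = sum_k C(n, k) T_k(a) T_{n-k}(b)
a, b = 0.5, 2.0
lhs = touchard(5, a + b)
rhs = sum(comb(5, k) * touchard(k, a) * touchard(5 - k, b) for k in range(6))
assert isclose(lhs, rhs)
```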
https://en.wikipedia.org/wiki/Touchard_polynomials
Incombinatorial mathematics, aStirling permutationof orderkis apermutationof themultiset1, 1, 2, 2, ...,k,k(with two copies of each value from 1 tok) with the additional property that, for each valueiappearing in the permutation, any values between the two copies ofiare larger thani. For instance, the 15 Stirling permutations of order three are 112233, 112332, 113322, 122133, 122331, 123321, 133122, 133221, 221133, 221331, 223311, 233211, 331122, 331221, and 332211. The number of Stirling permutations of orderkis given by thedouble factorial(2k− 1)!!. Stirling permutations were introduced byGessel & Stanley (1978)in order to show that certain numbers (the numbers of Stirling permutations with a fixed number of descents) are non-negative. They chose the name because of a connection to certainpolynomialsdefined from theStirling numbers, which are in turn named after 18th-century Scottish mathematicianJames Stirling.[1] Stirling permutations may be used to describe the sequences by which it is possible to construct a rootedplane treewithkedges by adding leaves one by one to the tree. For, if the edges are numbered by the order in which they were inserted, then the sequence of numbers in anEuler tourof the tree (formed by doubling the edges of the tree and traversing the children of each node in left to right order) is a Stirling permutation. Conversely every Stirling permutation describes a tree construction sequence, in which the next edge closer to the root from an edge labelediis the one whose pair of values most closely surrounds the pair ofivalues in the permutation.[2] Stirling permutations have been generalized to the permutations of a multiset with more than two copies of each value.[3]Researchers have also studied the number of Stirling permutations that avoid certain patterns.[4]
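The tree-construction view translates directly into a generator: inserting the adjacent pair i i into each of the 2i − 1 gaps of an order-(i − 1) permutation never places a smaller value between two copies of a larger one, and every Stirling permutation arises exactly once this way. A sketch (the function name is ours):

```python
from math import prod

def stirling_permutations(k):
    """All Stirling permutations of order k, as tuples."""
    perms = [(1, 1)] if k >= 1 else [()]
    for i in range(2, k + 1):
        # Insert the adjacent pair (i, i) into every gap of every permutation.
        perms = [p[:j] + (i, i) + p[j:] for p in perms for j in range(len(p) + 1)]
    return perms

perms3 = stirling_permutations(3)
assert len(perms3) == prod(range(1, 2 * 3, 2)) == 15   # (2k - 1)!! = 1 * 3 * 5
print(sorted(perms3)[:3])   # [(1,1,2,2,3,3), (1,1,2,3,3,2), (1,1,3,3,2,2)]
```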
https://en.wikipedia.org/wiki/Stirling_permutation
Inmathematics, theLanczos approximationis a method for computing thegamma functionnumerically, published byCornelius Lanczosin 1964. It is a practical alternative to the more popularStirling's approximationfor calculating the gamma function with fixed precision. The Lanczos approximation consists of the formulaΓ(z+1)=2π(z+g+12)z+12e−(z+g+12)Ag(z){\displaystyle \Gamma (z+1)={\sqrt {2\pi }}\left(z+g+{\tfrac {1}{2}}\right)^{z+{\frac {1}{2}}}e^{-\left(z+g+{\frac {1}{2}}\right)}A_{g}(z)}for the gamma function, withAg(z)=12p0(g)+p1(g)zz+1+p2(g)z(z−1)(z+1)(z+2)+⋯.{\displaystyle A_{g}(z)={\tfrac {1}{2}}p_{0}(g)+p_{1}(g){\frac {z}{z+1}}+p_{2}(g){\frac {z(z-1)}{(z+1)(z+2)}}+\cdots .} Heregis a realconstantthat may be chosen arbitrarily subject to the restriction that Re(z+g+1/2) > 0.[1]The coefficientsp, which depend ong, are slightly more difficult to calculate (see below). Although the formula as stated here is only valid for arguments in the right complexhalf-plane, it can be extended to the entirecomplex planeby thereflection formula,Γ(1−z)Γ(z)=πsin⁡πz.{\displaystyle \Gamma (1-z)\,\Gamma (z)={\frac {\pi }{\sin \pi z}}.} The seriesAisconvergent, and may be truncated to obtain an approximation with the desired precision. By choosing an appropriateg(typically a small integer), only some 5–10 terms of the series are needed to compute the gamma function with typicalsingleordoublefloating-pointprecision. If a fixedgis chosen, the coefficients can be calculated in advance and, thanks topartial fraction decomposition, the sum is recast into the following form:Ag(z)=c0+∑k=1Nckz+k.{\displaystyle A_{g}(z)=c_{0}+\sum _{k=1}^{N}{\frac {c_{k}}{z+k}}.} Thus computing the gamma function becomes a matter of evaluating only a small number ofelementary functionsand multiplying by stored constants. The Lanczos approximation was popularized byNumerical Recipes, according to which computing the gamma function becomes "not much more difficult than other built-in functions that we take for granted, such as sin x or e^x." The method is also implemented in theGNU Scientific Library,Boost,CPythonandmusl. The coefficients are given by whereCn,m{\displaystyle C_{n,m}}represents the (n,m)th element of thematrixof coefficients for theChebyshev polynomials, which can be calculatedrecursivelyfrom these identities: Godfrey (2001) describes how to obtain the coefficients and also the value of the truncated seriesAas amatrix product.[2] Lanczos derived the formula fromLeonhard Euler'sintegralΓ(z+1)=∫0∞tze−tdt,{\displaystyle \Gamma (z+1)=\int _{0}^{\infty }t^{z}e^{-t}\,dt,}performing a sequence of basic manipulations to obtain and deriving a series for the integral. The following implementation in thePython programming languageworks for complex arguments and typically gives 13 correct decimal places. Note that omitting the smallest coefficients (in pursuit of speed, for example) gives totally inaccurate results; the coefficients must be recomputed from scratch for an expansion with fewer terms.
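The Python listing referenced above was not preserved here; the following sketch is in its spirit, assuming the widely circulated nine-term coefficient table for g = 7 (any fixed-g table works the same way). It recurses through the reflection formula for arguments in the left half-plane.

```python
from cmath import exp, pi, sin, sqrt

g = 7
lanczos_coef = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def gamma(z):
    """Lanczos approximation of Gamma(z) for complex z."""
    z = complex(z)
    if z.real < 0.5:
        return pi / (sin(pi * z) * gamma(1 - z))   # reflection formula
    z -= 1
    a = lanczos_coef[0]
    for i in range(1, g + 2):                      # the partial-fraction sum
        a += lanczos_coef[i] / (z + i)
    t = z + g + 0.5
    return sqrt(2 * pi) * t ** (z + 0.5) * exp(-t) * a

assert abs(gamma(6) - 120) < 1e-7                  # Gamma(6) = 5! = 120
```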
https://en.wikipedia.org/wiki/Lanczos_approximation
In mathematics,Spouge's approximationis a formula for computing an approximation of thegamma function. It was named after John L. Spouge, who defined the formula in a 1994 paper.[1]The formula is a modification ofStirling's approximation, and has the formΓ(z+1)=(z+a)z+12e−z−a(c0+∑k=1a−1ckz+k+εa(z)),{\displaystyle \Gamma (z+1)=(z+a)^{z+{\frac {1}{2}}}e^{-z-a}\left(c_{0}+\sum _{k=1}^{a-1}{\frac {c_{k}}{z+k}}+\varepsilon _{a}(z)\right),}whereais an arbitrary positive integer and the coefficients are given byc0=2π,ck=(−1)k−1(k−1)!(a−k)k−12ea−k.{\displaystyle c_{0}={\sqrt {2\pi }},\qquad c_{k}={\frac {(-1)^{k-1}}{(k-1)!}}(a-k)^{k-{\frac {1}{2}}}e^{a-k}.} Spouge has proved that, if Re(z) > 0 anda> 2, the relative error in discardingεa(z) is bounded bya−12(2π)−a−12.{\displaystyle a^{-{\frac {1}{2}}}(2\pi )^{-a-{\frac {1}{2}}}.} The formula is similar to theLanczos approximation, but has some distinct features.[2]Whereas the Lanczos formula exhibits faster convergence, Spouge's coefficients are much easier to calculate and the error can be set arbitrarily low. The formula is therefore feasible forarbitrary-precisionevaluation of the gamma function. However, special care must be taken to use sufficient precision when computing the sum due to the large size of the coefficientsck, as well as their alternating sign. For example, fora= 49, one must compute the sum using about 65 decimal digits of precision in order to obtain the promised 40 decimal digits of accuracy.
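A direct transcription of the formula into Python (function names are ours; for large a, replace the float arithmetic with an arbitrary-precision type, for the reasons just described):

```python
from math import exp, factorial, pi, sqrt

def spouge_coefficients(a):
    """c_0 = sqrt(2 pi); c_k = (-1)^(k-1)/(k-1)! * (a-k)^(k-1/2) * e^(a-k)."""
    c = [sqrt(2 * pi)]
    for k in range(1, a):
        c.append((-1) ** (k - 1) / factorial(k - 1)
                 * (a - k) ** (k - 0.5) * exp(a - k))
    return c

def gamma_spouge(z, a=12):
    """Gamma(z + 1) ~ (z + a)^(z + 1/2) e^(-z - a) (c_0 + sum_k c_k / (z + k))."""
    c = spouge_coefficients(a)
    s = c[0] + sum(c[k] / (z + k) for k in range(1, a))
    return (z + a) ** (z + 0.5) * exp(-z - a) * s

assert abs(gamma_spouge(5.0) - 120.0) < 1e-6   # Gamma(6) = 5! = 120
```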
https://en.wikipedia.org/wiki/Spouge%27s_approximation
Acellular networkormobile networkis atelecommunications networkwhere the link to and from end nodes iswirelessand the network is distributed over land areas calledcells, each served by at least one fixed-locationtransceiver(such as abase station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content viaradio waves. Each cell's coverage area is determined by factors such as the power of the transceiver, the terrain, and the frequency band being used. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell.[1][2] When joined together, these cells provide radio coverage over a wide geographic area. This enables numerousdevices, includingmobile phones,tablets,laptopsequipped withmobile broadband modems, andwearable devicessuch assmartwatches, to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the devices are moving through more than one cell during transmission. The design of cellular networks allows for seamlesshandover, enabling uninterrupted communication when a device moves from one cell to another. Modern cellular networks utilize advanced technologies such asMultiple Input Multiple Output(MIMO),beamforming, and small cells to enhance network capacity and efficiency. Cellular networks offer a number of desirable features:[2] Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area ofEarth. This allows mobile phones and other devices to be connected to thepublic switched telephone networkand publicInternet access. In addition to traditional voice and data services, cellular networks now supportInternet of Things(IoT) applications, connecting devices such assmart meters, vehicles, and industrial sensors. The evolution of cellular networks from1Gto5Ghas progressively introduced faster speeds, lower latency, and support for a larger number of devices, enabling advanced applications in fields such as healthcare, transportation, andsmart cities. Private cellular networks can be used for research[3]or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports.[4] In acellular radiosystem, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics. These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles although hexagonal cells are conventional. Each of these cells is assigned with multiple frequencies (f1–f6) which have correspondingradio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would causeco-channel interference. The increasedcapacityin a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed byAmos Joelof Bell Labs[5]that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. 
This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level ofinterferencefrom the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standardfrequency-division multiple access(FDMA) system. Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of whichfrequencyapproximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. The taxi drivers only speak one at a time when invited by the base station operator. This is a form oftime-division multiple access(TDMA). The idea to establish a standard cellular phone network was first proposed on December 11, 1947. This proposal was put forward byDouglas H. Ring, aBell Labsengineer, in an internal memo suggesting the development of a cellular telephone system byAT&T.[6][7] The first commercial cellular network, the1Ggeneration, was launched in Japan byNippon Telegraph and Telephone(NTT) in 1979, initially in the metropolitan area ofTokyo. However, NTT did not initially commercialize the system; the early launch was motivated by an effort to understand a practical cellular system rather than by an interest to profit from it.[8][9]In 1981, theNordic Mobile Telephonesystem was created as the first network to cover an entire country. The network was released in 1981 in Sweden and Norway, then in early 1982 in Finland and Denmark.Televerket, a state-owned corporation responsible for telecommunications in Sweden, launched the system.[8][10][11] In September 1981,Jan Stenbeck, a financier and businessman, launchedComvik, a new Swedish telecommunications company. Comvik was the first European telecommunications firm to challenge the state's telephone monopoly on the industry.[12][13][14]According to some sources, Comvik was the first to launch a commercial automatic cellular system before Televerket launched its own in October 1981. However, at the time of the new network’s release, theSwedish Post and Telecom Authoritythreatened to shut down the system after claiming that the company had used an unlicensed automatic gear that could interfere with its own networks.[14][15]In December 1981, Sweden awarded Comvik with a license to operate its own automatic cellular network in the spirit of market competition.[14][15][16] TheBell Systemhad developed cellular technology since 1947, and had cellular networks in operation inChicago, Illinois,[17]andDallas, Texas, prior to 1979; however, regulatory battles delayed AT&T's deployment of cellular service to 1983,[18]when itsRegional Holding CompanyIllinois Bellfirst provided cellular service.[19] First-generation cellular network technology continued to expand its reach to the rest of the world. 
In 1990,Millicom Inc., a telecommunications service provider, strategically partnered with Comvik’s international cellular operations to become Millicom International Cellular SA.[20]The company went on to establish a 1G systems foothold in Ghana, Africa under the brand name Mobitel.[21]In 2006, the company’s Ghana operations were renamed to Tigo.[22] Thewireless revolutionbegan in the early 1990s,[23][24][25]leading to the transition from analog todigital networks.[26]The MOSFET, invented atBell Labsbetween 1955 and 1960,[27][28][29][30][31]was adapted for cellular networks by the early 1990s, with the wide adoption ofpower MOSFET,LDMOS(RF amplifier), andRF CMOS(RF circuit) devices leading to the development and proliferation of digital wireless mobile networks.[26][32][33] The first commercial digital cellular network, the2Ggeneration, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators. To distinguish signals from several different transmitters, a number ofchannel access methodshave been developed, includingfrequency-division multiple access(FDMA, used by analog andD-AMPS[citation needed]systems),time-division multiple access(TDMA, used byGSM) andcode-division multiple access(CDMA, first used forPCS, and the basis of3G).[2] With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to providefull-duplexoperation. The originalAMPSsystems had 666 channel pairs, 333 each for theCLEC"A" system andILEC"B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limits the number of calls that a cell site could handle. FDMA is a familiar technology to telephone companies, which usedfrequency-division multiplexingto add channels to their point-to-point wireline plants beforetime-division multiplexingrendered FDM obsolete. With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically usesdigitalsignaling tostore and forwardbursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice at the receiver. TDMA must introducelatency(time delay) into the audio signal. As long as the latency time is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which usedtime-division multiplexingto add channels to their point-to-point wireline plants beforepacket switchingrendered TDM obsolete. The principle of CDMA is based onspread spectrumtechnology developed for military use duringWorld War IIand improved during theCold Warintodirect-sequence spread spectrumthat was used for early CDMA cellular systems andWi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed byBell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems.
Other available methods of multiplexing such asMIMO, a more sophisticated version ofantenna diversity, combined with activebeamforming, provide much greaterspatial multiplexingability compared to original AMPS cells, which typically only addressed one to three unique spaces. Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof.Quadrature Amplitude Modulation(QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof. The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies; however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power.[2] The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance,D, is calculated asD=R3N,{\displaystyle D=R{\sqrt {3N}}\,,}whereRis the cell radius andNis the number of cells per cluster. Cells may vary in radius from 1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between adjacent cells and large cells can be divided into smaller cells.[34] The frequency reuse factor is the rate at which the same frequency can be used in the network. It is1/K(orKaccording to some books) whereKis the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation).[35] In case ofNsector antennas on the same base station site, each with different direction, the base station site can serve N different sectors.Nis typically 3. Areuse patternofN/Kdenotes a further division in frequency amongNsector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM). If the total availablebandwidthisB, each cell can only use a number of frequency channels corresponding to a bandwidth ofB/K, and each sector can use a bandwidth ofB/NK. Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. WhileNis shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually. More recently,orthogonal frequency-division multiple access-based systems, such asLTE, are being deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit the inter-cell interference. There are various means ofinter-cell interference coordination(ICIC) already defined in the standard.[36]Coordinated scheduling, multi-site MIMO or multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future.
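A sketch of the reuse computations above (function names are ours):

```python
from math import sqrt

def reuse_distance(radius_km, cells_per_cluster):
    """D = R * sqrt(3 N) for the standard hexagonal-cell geometry."""
    return radius_km * sqrt(3 * cells_per_cluster)

def per_sector_bandwidth(total_mhz, sectors_per_site, reuse_k):
    """With reuse factor 1/K and N sectors per site, each sector gets B / (N K)."""
    return total_mhz / (sectors_per_site * reuse_k)

print(round(reuse_distance(2.0, 7), 2))   # 9.17 km between co-channel 2 km cells
print(per_sector_bandwidth(12.0, 3, 4))   # 1.0 MHz per sector
```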
Cell towers frequently use adirectional signalto improve reception in higher-traffic areas. In theUnited States, theFederal Communications Commission(FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts ofeffective radiated power(ERP).[37] Although the original cell towers created an even, omnidirectional signal and were at the centers of the cells, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge.[38]Each tower has three sets of directional antennas aimed in three different directions with 120 degrees for each cell (totaling 360 degrees) and receiving/transmitting into three different cells at different frequencies. This provides a minimum of three channels and three towers for each cell, and greatly increases the chances of receiving a usable signal from at least one direction. The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas.[39] Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas.[37] Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example inmobile telephonysystems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is calledpaging. The three different paging procedures generally adopted are sequential, parallel and selective paging. The details of the process of paging vary somewhat from network to network, but normally we know a limited number of cells where the phone is located (this group of cells is called a Location Area in theGSMorUMTSsystem, or Routing Area if a data packet session is involved; inLTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can be used for information transfer. This happens inpagers, inCDMAsystems for sendingSMSmessages, and in theUMTSsystem where it allows for low downlink latency in packet-based connections. In LTE/4G, the Paging procedure is initiated by the MME when data packets need to be delivered to the UE. Paging types supported by the MME are: In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency. In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called thehandoveror handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues.
The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover). The most common example of a cellular network is a mobile phone (cell phone) network. Amobile phoneis a portable telephone which receives or makes calls through acell site(base station) or transmitting tower.Radio wavesare used to transfer signals to and from the cell phone. Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference. A cellular network is used by themobile phone operatorto achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected totelephone exchanges(or switches), which in turn connect to thepublic telephone network. In cities, each cell site may have a range of up to approximately1⁄2mile (0.80 km), while in rural areas, the range could be as much as 5 miles (8.0 km). It is possible that in clear open areas, a user may receive signals from a cell site 25 miles (40 km) away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach 50 miles (80 km), with limitations on bandwidth and number of simultaneous calls.[citation needed] Since almost all mobile phones usecellular technology, includingGSM,CDMA, andAMPS(analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However,satellite phonesare mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite. There are a number of different digital cellular technologies, including:Global System for Mobile Communications(GSM),General Packet Radio Service(GPRS),cdmaOne,CDMA2000,Evolution-Data Optimized(EV-DO),Enhanced Data Rates for GSM Evolution(EDGE),Universal Mobile Telecommunications System(UMTS),Digital Enhanced Cordless Telecommunications(DECT),Digital AMPS(IS-136/TDMA), andIntegrated Digital Enhanced Network(iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and theUS.[40]As a consequence, multiple digital standards surfaced in the US, whileEuropeand many countries converged towards theGSMstandard. A simple view of the cellular mobile-radio network consists of the following: This network is the foundation of theGSMsystem network. There are many functions that are performed by this network in order to make sure customers get the desired service including mobility management, registration, call set-up, andhandover. Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell which in turn connects to theMobile switching center(MSC). The MSC provides a connection to thepublic switched telephone network(PSTN). The link from a phone to the RBS is called anuplinkwhile the other way is termeddownlink. 
Radio channels make efficient use of the transmission medium through the following multiplexing and access schemes:frequency-division multiple access(FDMA),time-division multiple access(TDMA),code-division multiple access(CDMA), andspace-division multiple access(SDMA). Small cells, which have a smaller coverage area than base stations, are categorised as follows: As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call. Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel. WithCDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using apseudonoisecode (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditionalcellular technology, there is no one defined point where the phone switches to the new cell. InIS-95inter-frequency handovers and older analog systems such asNMTit will typically be impossible to test the target channel directly while communicating. In this case, other techniques have to be used such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel followed by the risk of an unexpected return to the old channel. If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal. The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside coverage.GSM900 (900 MHz) is suitable for light urban coverage.GSM1800 (1.8 GHz) starts to be limited by structural walls.UMTS, at 2.1 GHz, is quite similar in coverage toGSM1800. Higher frequencies are a disadvantage when it comes to coverage, but they are a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors. Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is true especially in CDMA-based systems. The receiver requires a certainsignal-to-noise ratio, and the transmitter should not send with too high a transmission power, so as not to cause interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so thepower controlalgorithm of the transmitter increases the power it transmits to restore the level of received power. As the interference (noise) rises above the received power from the transmitter, and the power of the transmitter cannot be increased anymore, the signal becomes corrupted and eventually unusable. InCDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name,cell breathing. One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such asOpensignalorCellMapper.
In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage. Acellular repeateris used to extend cell coverage into larger areas. They range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs. The following table shows the dependency of the coverage area of one cell on the frequency of aCDMA2000network:[41]
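Returning to the power control loop described above, a single adjustment step can be sketched as follows (purely illustrative; real standards quantize the up/down commands and impose their own power limits):

```python
def power_control_step(tx_dbm, received_dbm, target_dbm, step_db=1.0, max_dbm=23.0):
    """Nudge transmit power toward the received-power target by one step.
    Once tx power saturates at max_dbm while interference keeps rising,
    the link degrades, as described in the text."""
    if received_dbm < target_dbm:
        return min(tx_dbm + step_db, max_dbm)   # cannot raise power forever
    return max(tx_dbm - step_db, 0.0)
```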
https://en.wikipedia.org/wiki/Cellular_network
Enhanced Data rates for GSM Evolution(EDGE), also known as2.75Gand under various other names, is a2Gdigitalmobile phonetechnology forpacket switcheddata transmission. It is a subset ofGeneral Packet Radio Service(GPRS) on theGSMnetwork and improves upon it, offering speeds close to3Gtechnology; hence the name 2.75G. EDGE is standardized by the3GPPas part of the GSM family and as an upgrade to GPRS. EDGE was deployed on GSM networks beginning in 2003 – initially byCingular(nowAT&T) in the United States.[1]It could be readily deployed on existing GSM and GPRS cellular equipment, making it an easier upgrade forcellular companiescompared to theUMTS3G technology that required significant changes.[2]Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection (originally a maximum speed of 384 kbit/s).[3]Later,Evolved EDGEwas developed as an enhanced standard providing further reduced latency and more than double the performance, with a peak bit-rate of up to 1 Mbit/s. Enhanced Data rates for GSM Evolutionis the common full name of the EDGE standard. Other names include:Enhanced GPRS(EGPRS),IMT Single Carrier(IMT-SC), andEnhanced Data rates for Global Evolution. Although described as "2.75G" by the3GPPbody, EDGE is part ofInternational Telecommunication Union(ITU)'s 3G definition.[4]It is also recognized as part of theInternational Mobile Telecommunications - 2000(IMT-2000) standard for 3G. EDGE/EGPRS is implemented as a bolt-on enhancement for2.5GGSM/GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE is a superset of GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade. EDGE requires no hardware or software changes to be made in GSM core networks. EDGE-compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for both GSM andWCDMA/HSPA. In addition toGaussian minimum-shift keying(GMSK), EDGE useshigher-order PSK/8 phase-shift keying(8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, likeGPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS,incremental redundancy, which, instead of retransmitting disturbed packets, sends more redundancy information to be combined in the receiver. This increases the probability of correct decoding. EDGE can carry a bandwidth up to 236 kbit/s (with end-to-end latency of less than 150 ms) for 4timeslots(theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE meets theInternational Telecommunication Union's requirement for a3Gnetwork, and has been accepted by the ITU as part of theIMT-2000family of 3G standards.[4]It also enhances the circuit data mode calledHSCSD, increasing the data rate of this service.
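These figures are consistent with a peak rate of 59.2 kbit/s per timeslot for the top modulation and coding scheme, an assumption inferred here from the quoted 4- and 8-timeslot numbers:

```python
MCS9_KBPS_PER_TIMESLOT = 59.2   # assumed per-timeslot peak of the top scheme

def edge_peak_kbps(timeslots):
    """Peak EDGE throughput scales linearly with the allocated timeslots."""
    return MCS9_KBPS_PER_TIMESLOT * timeslots

assert edge_peak_kbps(8) == 473.6   # the theoretical maximum quoted above
assert edge_peak_kbps(4) == 236.8   # the 4-timeslot figure
```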
The channel encoding process in GPRS as well as EGPRS/EDGE consists of two steps: first, a cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence, followed by coding with a possibly puncturedconvolutional code.[5]In GPRS, the Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code.[5]In GPRS Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits.[5]In Coding Schemes CS-2 and CS-3, the output of the convolutional code ispuncturedto achieve the desired code rate.[5]In GPRS Coding Scheme CS-4, no convolutional coding is applied.[5] In EGPRS/EDGE, themodulationand coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS, and additionally specify which modulation scheme is used, GMSK or 8PSK.[5]MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK.[5]In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the desired code rate.[5]In contrast to GPRS, theRadio Link Control(RLC) andmedium access control(MAC) headers and the payload data are coded separately in EGPRS.[5]The headers are coded more robustly than the data.[5] The first EDGE network was deployed byCingular(nowAT&T) in the United States[1]on June 30, 2003, initially coveringIndianapolis.[8][9]T-Mobile USdeployed their EDGE network in September 2005.[10][11]In Canada,Rogers Wirelessdeployed their EDGE network in 2004.[12]In Malaysia,DiGilaunched EDGE beginning in May 2004, initially only in theKlang Valley.[13] In Europe,TeliaSonerain Finland rolled out EDGE in April 2004.[14]Orangebegan trialling EDGE in France in April 2005 before a consumer rollout later that year.[15]Bouygues Telecomcompleted its national deployment of EDGE in the country in 2005, strategically focusing on EDGE, which is cheaper to deploy than 3G networks.[16]Telfortwas the first network in the Netherlands to roll out EDGE, having done so by May 2005.[17]Orange launched the UK's first EDGE network in February 2006.[18] TheGlobal Mobile Suppliers Associationreported in 2008 that EDGE networks have been launched in 147 countries around the world.[19] Evolved EDGE, also calledEDGE Evolutionand2.875G, is a bolt-on extension to theGSMmobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by lowering theTransmission Time Intervalby half (from 20 ms to 10 ms). Bit rates are increased up to 1 Mbit/s peak bandwidth and latencies down to 80 ms using dual carrier, higher symbol rate andhigher-order modulation(32QAM and 16QAM instead of 8PSK), andturbo codesto improve error correction. This results in real-world downlink speeds of up to 600 kbit/s.[20]Further, signal quality is improved by using dual antennas, improving average bit rates and spectrum efficiency. The main intention of increasing the existing EDGE throughput is that many operators would like to upgrade their existing infrastructure rather than invest in new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (like an Evolved EDGEsmartphone) for the user, these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM).
Many service providers may not invest in a completely new technology like3Gnetworks.[21] Considerable research and development took place throughout the world for this new technology. Nokia Siemens and "one of China's leading operators" achieved a successful trial in a live environment.[21]However, Evolved EDGE was introduced much later than its predecessor, EDGE, coinciding with the widespread adoption of 3G technologies such asHSPAand just before the emergence of4Gnetworks. This timing significantly limited its relevance and practical application, as operators prioritized investment in more advanced wireless technologies likeUMTSandLTE. Moreover, these newer technologies also targeted network coverage layers on low frequencies, further diminishing the potential advantages of Evolved EDGE. Coupled with the upcoming phase-out and shutdown of 2G mobile networks, it became very unlikely that Evolved EDGE would ever see deployment on live networks. As of 2016, nocommercial networkssupported the Evolved EDGE standard (3GPP Rel-7). With Evolved EDGE come three major features designed to reduce latency over the air interface. In EDGE, a single RLC data block (ranging from 23 to 148 bytes of data) is transmitted over four frames, using a single time slot. On average, this requires 20 ms for one-way transmission. Under the Reduced Transmission Time Interval (RTTI) scheme, one data block is transmitted over two frames in two timeslots, reducing the latency of the air interface to 10 ms. Reduced latency also implies support of Piggy-backedACK/NACK(PAN), in which a bitmap of blocks not received is included in normal data blocks. Using the PAN field, the receiver may report missing data blocks immediately, rather than waiting to send a dedicated PAN message. A final enhancement is RLC non-persistent mode. With EDGE, the RLC interface could operate in either acknowledged mode or unacknowledged mode. In unacknowledged mode, there is no retransmission of missing data blocks, so a single corrupt block would cause an entire upper-layer IP packet to be lost. With non-persistent mode, an RLC data block may be retransmitted if it is less than a certain age. Once this time expires, it is considered lost, and subsequent data blocks may then be forwarded to upper layers. Both uplink and downlink throughput is improved by using 16 or 32 QAM (quadrature amplitude modulation), along with turbo codes and higher symbol rates. A lesser-known version of the EDGE standard is Enhanced Circuit Switched Data (ECSD), which iscircuit switched.[22] A variant, so-called Compact-EDGE, was developed for use in a portion ofDigital AMPSnetwork spectrum.[23] The Global mobile Suppliers Association (GSA) states that, as of May 2013, there were 604 GSM/EDGE networks in 213 countries, from a total of 606 mobile network operator commitments in 213 countries.[24]
https://en.wikipedia.org/wiki/Enhanced_Data_Rates_for_GSM_Evolution
Call forwarding, orcall diversion, is atelephonyfeature of all telephone switching systems which redirects a telephone call to another destination, which may be, for example, amobileor anothertelephone numberwhere the desiredcalled partyis available. Call forwarding was invented by Ernest J. Bonanno.[1] In North America, the forwarded line usually rings once to remind the customer using call forwarding that the call is being redirected. More consistently, the forwarded line indicates its condition bystutter dial tone. Call forwarding can typically redirect incoming calls to any other domestic telephone number, but the owner of the forwarded line must pay any toll charges for forwarded calls. Call forwarding is often enabled by dialing *72 followed by the telephone number to which calls should be forwarded. Once someone answers, call forwarding is in effect. If no one answers or the line is busy, the dialing sequence must be repeated to effect call forwarding. Call forwarding is disabled by dialing *73. This feature requires a subscription from the telephone company. Also available in some areas is Remote Access to call forwarding, which permits control of call forwarding from telephones other than the subscriber's telephone.VOIPandcable telephonesystems also allow call forwarding to be set up and directed via their web portals. Call forwarding can be conditional or unconditional. Conditional call forwarding works only when the conditions set by the customer are met, while unconditional call forwarding works in all cases, regardless ofnetwork coverage. In Europe, most networks indicate that unconditional call forwarding is active with a specialdial tone. When the phone is picked up, it is immediately apparent that calls are being forwarded; other countries now follow the same system. The ISDN Diversionsupplementary services[2]standards document uses "diversion" as a general term to encompass specific features including "Call Forwarding Busy", "Call Forwarding No Reply" and "Call Deflection". The termscall forwardingandcall diversionare both used to refer to any feature that allows a call to be routed to a third party, and the terms are generally interchangeable. Special types of call forwarding can be activated only if the line is busy, or if there is no answer, or even only for calls from selected numbers. InNorth America, theNorth American Numbering Plan(NANP) generally uses the followingvertical service codesto control call forwarding: TheSprint Nextelcellphone company uses these:[3] Most EU fixed-line carriers use the following codes based onCEPTandETSIstandards developed in the 1970s on bothPOTSandISDNlines. (There may be some variation to these, but the unconditional code *21* is very much universally standard on EU telephone lines.) The general syntax for all European service codes always follows the pattern below: For GSM/3GSM (UMTS) phones, theGSMstandard defines the following forwardingUnstructured Supplementary Service Data. These were developed by ETSI and are based on standard European diversion codes and are similar to those used on most landlines in the EU:[6] If the prefix to the forwarding command is "**" (instead of the usual "*"), then the phone number in that command is registered in the network. If after that the forwarding is deactivated using a command with a single "#", then later it will be possible to re-activate this forwarding again with a simple "*" command without a phone number in it.
The forwarding will be re-activated to the number registered in the network. For example, if one uses the out-of-reach code in a forwarding command: **62*7035551212# and after that one deactivates the forwarding: #62# then later it will be possible to re-activate the out-of-reach forwarding without specifying a number: *62# After the above command, all calls made to the phone, while it is out of reach, will be forwarded to 7035551212. It is possible to activate the feature to a number other than the registered number, while still retaining the registered number for later use. For example, issuing the command: *62*7185551212# will result in calls being forwarded to 7185551212 (and not to the registered number 7035551212). However, if later a command is issued: *62# then the calls will again be forwarded to the registered number 7035551212 (and not to the number from the previous forwarding command, 7185551212). In GSM networks of some US carriers, and in all mobile networks in Europe, it is possible to set a number of seconds for the phone to ring before forwarding the call. This is specified by inserting "*SC*XX" prior to the final "#" of the forwarding command, where "SC" is a service type code (11 for voice, 25 for data, 13 for fax), and "XX" is the number of seconds in increments of 5 seconds. If "SC" is omitted (just "**XX") then by default all service types will be forwarded. For example, forwarding on no-answer can be set with: *61*[phone number]**[seconds]# Forwarding voice calls only can be set with: *61*[phone number]*11*[seconds]# In some networks there may be a limit of not more than thirty seconds before forwarding (i.e. "XX" can only be 05, 10, 15, 20, 25, or 30; all greater values, like 45 and 60, will result in the forwarding command being rejected and an error message returned). Diverting calls can increase one's availability to callers. The main alternative is an answering machine or voicemail, but some callers do not wish to leave a recorded message and prefer a two-way conversation. Some businesses have their calls forwarded to a call center, so that the client can reach an operator instead of an answering machine or voice mail. Before the availability of call forwarding, commercial answering services needed to physically connect to every line for which they provided after-hours response; this required their offices to be located near the local central exchange and be fed by a huge multi-pair trunk in which a separate pair of wires existed for each client subscriber. With call forwarding, there is no physical connection to the client's main telephone service, which is merely call-forwarded to the answering service (usually on a direct inward dial number) at the end of the business day. Often, a suburb of a large city is a toll call from many suburban exchanges on the opposite side of the same city, even though all of these suburbs are a local call to the city centre. A business located in such a suburb may therefore benefit from obtaining a downtown number as an "extender", to be permanently forwarded to their geographic suburban number. Where unlimited local calls are flat-rated and long-distance incurs high per-minute charges, the downtown number's wider local calling area represents a commercial advantage. Markham (directly north of Toronto) is long-distance to Mississauga (directly west of Toronto).
A Markham business with a forwarded 416 number could receive calls from Toronto's entire local calling area without incurring long-distance tolls (as both legs, Mississauga → Toronto and Toronto → Markham, are each a local call). Some services offer international call forwarding by allocating for the customer a local virtual phone number which is forwarded to any other international destination: the number is permanently forwarded and has no associated telephone line. As a means to obtain an inbound number from another town or region for business use, remote call forwarding schemes tend to be far less expensive than foreign exchange lines but more costly than using voice over IP to obtain a local number in the chosen city. Call forwarding can also assist travelers who do not have international cell phone plans and who wish to continue to easily receive their voicemails through VoIP while abroad.
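The European/GSM dialing patterns above lend themselves to simple programmatic construction. Below is a minimal Python sketch that assembles forwarding command strings from the syntax described earlier. The reason codes 21 (unconditional), 61 (no answer) and 62 (not reachable) and the voice service code 11 come from the text; the busy code 67 is the commonly used GSM value, and the function name and validation rules are illustrative assumptions, not part of any standard API.

# Sketch only: builds GSM call-forwarding MMI strings per the pattern above.
REASONS = {
    "unconditional": "21",
    "no_answer": "61",
    "not_reachable": "62",
    "busy": "67",          # common GSM value; not quoted in the text above
}

def forward_code(reason: str, number: str = "", seconds: int | None = None,
                 register: bool = False, voice_only: bool = False) -> str:
    """Build an activation string such as *61*7035551212*11*30#."""
    prefix = "**" if register else "*"          # "**" also registers the number
    parts = [REASONS[reason]]
    if number:
        parts.append(number)
    if seconds is not None:
        if seconds % 5 or not (5 <= seconds <= 30):
            raise ValueError("timeout must be 5..30 in steps of 5 seconds")
        # service type code 11 = voice; omitting it forwards all service types
        parts.append(f"{'11' if voice_only else ''}*{seconds:02d}")
    return prefix + "*".join(parts) + "#"

print(forward_code("no_answer", "7035551212", seconds=30, voice_only=True))
# -> *61*7035551212*11*30#
print(forward_code("not_reachable", "7035551212", register=True))
# -> **62*7035551212#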
https://en.wikipedia.org/wiki/Call_forwarding#Mobile_(cell)_phones
GSM frequency bands or frequency ranges are the cellular frequencies designated by the ITU for the operation of GSM mobile phones and other mobile devices. A dual-band 900/1800 device is required to be compatible with most networks apart from deployments in ITU Region 2. GSM-900 and GSM-1800 are used in most parts of the world (ITU Regions 1 and 3): Africa, Europe, the Middle East, Asia (apart from Japan and South Korea, where GSM has never been introduced) and Oceania. Of the two, GSM-900 is the most widely used; fewer operators use GSM-1800. Mobile Communication Services on Aircraft (MCA) uses GSM-1800.[1] In some countries GSM-1800 is also referred to as "Digital Cellular System" (DCS).[2] GSM-1900 and GSM-850 are used in most of North, South and Central America (ITU Region 2). In North America, GSM operates on the primary mobile communication bands 850 MHz and 1900 MHz. In Canada, GSM-1900 is the primary band used in urban areas with 850 as a backup, and GSM-850 is the primary rural band. In the United States, regulatory requirements determine which area can use which band. The term Cellular is sometimes used to describe GSM services in the 850 MHz band, because the original analog cellular mobile communication system was allocated in this spectrum. Further, GSM-850 is also sometimes called GSM-800 because this frequency range was known as the "800 MHz band" (for simplification) when it was first allocated for AMPS in the United States in 1983. In North America, GSM-1900 is also referred to as Personal Communications Service (PCS), like any other cellular system operating on the 1900 MHz band. Some countries in Central and South America have allocated spectrum in the 900 MHz and 1800 MHz bands for GSM in addition to the common GSM deployments at 850 MHz and 1900 MHz for ITU Region 2 (the Americas). The result is therefore a mixture of usage in the Americas that requires travelers to confirm that the devices they have are compatible with the bands of the network at their destination. Frequency compatibility problems can be avoided through the use of multi-band (tri-band or, especially, quad-band) devices. The following countries mix GSM 900/1800 and GSM 850/1900 bands:[3] Another, less common GSM version is GSM-450.[4] It uses the same band as, and can co-exist with, old analog NMT systems. NMT is a first-generation (1G) mobile system which was primarily used in Nordic countries, Benelux, Alpine countries, Eastern Europe and Russia prior to the introduction of GSM. The GSM Association claims one of its around 680 operator-members has a license to operate a GSM-450 network in Tanzania. However, currently all active public operators in Tanzania use GSM 900/1800 MHz. There are no publicly advertised handsets for GSM-450 available. Very few NMT-450 networks remain in operation. Overall, where the 450 MHz NMT band has been licensed, the original analogue network has been closed, and sometimes replaced by CDMA. Some of the CDMA networks have since been upgraded from CDMA to LTE (LTE band 31). Today, most telephones support multiple bands as used in different countries to facilitate roaming. These are typically referred to as multi-band phones. Dual-band phones can cover GSM networks in pairs such as the 900 and 1800 MHz frequencies (Europe, Asia, Australia and Brazil) or 850 and 1900 MHz (North America and Brazil).
European tri-band phones typically cover the 900, 1800 and 1900 bands, giving good coverage in Europe and allowing limited use in North America, while North American tri-band phones utilize 850, 1800 and 1900 for widespread North American service but limited worldwide use. A newer addition is the quad-band phone, also known as a world phone,[5] supporting at least the four major GSM bands and allowing for global use (excluding non-GSM countries such as Japan and South Korea, as well as countries where the 2G system has been shut down to release frequencies and spectrum for LTE networks, such as Australia (since 2017), Singapore and Taiwan (since 2018)). There are also multi-mode phones which can operate on GSM as well as on other mobile phone systems using other technical standards or proprietary technologies. Often these phones use multiple frequency bands as well. For example, one version of the Nokia 6340i GAIT phone sold in North America can operate on GSM-1900, GSM-850 and legacy TDMA-1900, TDMA-800, and AMPS-800, making it both multi-mode and multi-band. As a more recent example, the Apple iPhone 5 and iPhone 4S support quad-band GSM at 850/900/1800/1900 MHz, quad-band UMTS/HSDPA/HSUPA at 850/900/1900/2100 MHz, and dual-band CDMA EV-DO Rev. A at 800/1900 MHz, for a total of six different frequency bands (though at most four in a single mode). This allows the same handset to be sold for AT&T Mobility, Verizon, and Sprint in the U.S. as well as a broad range of GSM carriers worldwide such as Vodafone, Orange and T-Mobile (excluding the US), many of whom offer official unlocking.
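To make the band plan concrete, here is a small Python sketch encoding the commonly cited uplink/downlink ranges (in MHz) for the four major bands, together with the roaming-compatibility check implied above. The figures are the usual textbook values; exact national allocations vary, and the function is illustrative rather than drawn from any particular library.

# The four major GSM bands and a trivial compatibility check (sketch only).
GSM_BANDS = {
    "GSM-850":  {"uplink": (824, 849),   "downlink": (869, 894)},
    "GSM-900":  {"uplink": (890, 915),   "downlink": (935, 960)},   # P-GSM
    "GSM-1800": {"uplink": (1710, 1785), "downlink": (1805, 1880)}, # a.k.a. DCS
    "GSM-1900": {"uplink": (1850, 1910), "downlink": (1930, 1990)}, # a.k.a. PCS
}

def can_roam(phone_bands: set[str], network_bands: set[str]) -> bool:
    """A handset can register if it shares at least one band with the network."""
    return bool(phone_bands & network_bands)

quad_band = set(GSM_BANDS)                    # a "world phone"
europe = {"GSM-900", "GSM-1800"}              # ITU Regions 1 and 3
north_america = {"GSM-850", "GSM-1900"}       # ITU Region 2

print(can_roam(europe, north_america))        # False: dual-band EU phone
print(can_roam(quad_band, north_america))     # True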
https://en.wikipedia.org/wiki/GSM_frequency_bands
AGSM moduleis a device that allows electronic devices to communicate with each other over theGSMnetwork. GSM is a standard for digital cellular communications, which means that it provides a platform for mobile devices to communicate with each other wirelessly. The GSM module is a specialized device that enables a device to send and receive data over the GSM network. TheGSMnetwork is an essential component of modern communication systems. It is a standard used by mobile devices to communicate with each other wirelessly. The GSM network provides a reliable and secure platform for communication, which makes it a preferred choice for many applications. The history of the GSM module dates back to the 1980s when the GSM network was first introduced. The first GSM module was designed to work with analogue phones, and it was not until the late 1990s that digital GSM modules were introduced. Today, the GSM module is an essential component used in variouscommunicationsystems. A GSM module works by connecting to the GSM network through a SIM card. The SIM card provides the module with a unique identification number, which is used to identify the device on the network. The GSM module then communicates with the network using a set of protocols, which allows it to send and receive data. The GSM network is a digital cellular network that uses a set of protocols to enable communication between devices. The network is divided into cells, which are each serviced by a base station. The base station communicates with the devices in its cell, and the cells are interconnected to form a network. The GSM module plays a crucial role in the communication between devices and the GSM network. It is responsible for establishing and maintaining the communication link between the device and the network. The module also handles the encryption and decryption of data, which ensures the security of the communication. There are different types of GSM modules, each with its own functionalities. Some modules are designed to handle voice communication, while others are designed fordata communication. Some modules also have built-in GPS, which allows them to provide location information. There are several advantages of using a GSM module in communication systems. Some of the most significant advantages are:[1]
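As a concrete illustration of how a host device drives a GSM module, the sketch below sends an SMS using standard Hayes/3GPP AT commands (AT+CMGF and AT+CMGS, from 3GPP TS 27.005) over a serial link. It assumes the third-party pyserial package and a module at /dev/ttyUSB0; the port name, baud rate and fixed delays are placeholders that vary by module, and production code would parse the module's responses rather than sleep.

# Minimal sketch: send an SMS through a GSM module via AT commands.
import serial   # pyserial: pip install pyserial
import time

def send_sms(port: str, number: str, text: str) -> None:
    with serial.Serial(port, 115200, timeout=2) as ser:
        def cmd(c: bytes) -> bytes:
            ser.write(c + b"\r")
            time.sleep(0.5)
            return ser.read(ser.in_waiting or 64)   # read whatever replied

        cmd(b"AT")                                  # attention check -> "OK"
        cmd(b"AT+CMGF=1")                           # text mode instead of PDU mode
        cmd(b'AT+CMGS="%s"' % number.encode())      # start a message to number
        ser.write(text.encode() + b"\x1a")          # body; Ctrl-Z terminates
        time.sleep(3)
        print(ser.read(ser.in_waiting or 64))       # e.g. "+CMGS: 5 ... OK"

send_sms("/dev/ttyUSB0", "+15551234567", "Hello from a GSM module")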
https://en.wikipedia.org/wiki/GSM_modem
GSM services are a standard collection of applications and features available over the Global System for Mobile Communications (GSM) to mobile phone subscribers all over the world. The GSM standards are defined by the 3GPP collaboration and implemented in hardware and software by equipment manufacturers and mobile phone operators. The common standard makes it possible to use the same phones with different companies' services, or even roam into different countries. GSM is the world's predominant mobile phone standard. The design of the service is moderately complex because it must be able to locate a moving phone anywhere in the world, and accommodate the relatively small battery capacity, limited input/output capabilities, and weak radio transmitters on mobile devices. In order to gain access to GSM services, a user needs three things: After subscribers sign up, information about their identity (telephone number) and what services they are allowed to access are stored in a "SIM record" in the Home Location Register (HLR). Once the SIM card is loaded into the phone and the phone is powered on, it will search for the nearest mobile phone mast (also called a Base Transceiver Station, or BTS) with the strongest signal in the operator's frequency band. If a mast can be successfully contacted, then there is said to be coverage in the area. The phone then identifies itself to the network through the control channel. Once this is successfully completed, the phone is said to be attached to the network. The key feature of a mobile phone is the ability to receive and make calls in any area where coverage is available. This is generally called roaming from a customer perspective, but also called visiting when describing the underlying technical process. Each geographic area has a database called the Visitor Location Register (VLR), which contains details of all the mobiles currently in that area. Whenever a phone attaches, or visits, a new area, the Visitor Location Register must contact the Home Location Register to obtain the details for that phone. The current cellular location of the phone (i.e., which BTS it is at) is entered into the VLR record and will be used during a process called paging when the GSM network wishes to locate the mobile phone. Every SIM card contains a secret key, called the Ki, which is used to provide authentication and encryption services. This is useful to prevent theft of service, and also to prevent "over the air" snooping of a user's activity. The network does this by utilising the Authentication Center, and it is accomplished without transmitting the key directly. Every GSM phone contains a unique identifier (different from the phone number), called the International Mobile Equipment Identity (IMEI). This can be found by dialing *#06#. When a phone contacts the network, its IMEI may be checked against the Equipment Identity Register to locate stolen phones and facilitate monitoring. Once a mobile phone has successfully attached to a GSM network as described above, calls may be made from the phone to any other phone on the global Public Switched Telephone Network. The user dials the telephone number, presses the send or talk key, and the mobile phone sends a call setup request message to the mobile phone network via the nearest mobile phone base transceiver station (BTS). The call setup request message is handled next by the Mobile Switching Center, which checks the subscriber's record held in the Visitor Location Register to see if the outgoing call is allowed.
If so, the MSC then routes the call in the same way that a telephone exchange does in a fixed network. If the subscriber is on a prepaid tariff (sometimes known as Pay As You Go (PAYG) or Pay & Go), then an additional check is made to see if the subscriber has enough credit to proceed. If not, the call is rejected. If the call is allowed to continue, then it is continually monitored and the appropriate amount is decremented from the subscriber's account. When the credit reaches zero, the call is cut off by the network. The systems that monitor and provide the prepaid services are not part of the GSM standard services, but instead an example of intelligent network services that a mobile phone operator may decide to implement in addition to the standard GSM ones. When someone places a call to a mobile phone, they dial the telephone number (also called an MSISDN) associated with the phone user, and the call is routed to the mobile phone operator's Gateway Mobile Switching Centre. The Gateway MSC, as the name suggests, acts as the "entrance" from exterior portions of the Public Switched Telephone Network onto the provider's network. As noted above, the phone is free to roam anywhere in the operator's network or on the networks of roaming partners, including in other countries. So the first job of the Gateway MSC is to determine the current location of the mobile phone in order to connect the call. It does this by consulting the Home Location Register (HLR), which, as described above, knows which Visitor Location Register (VLR) the phone is associated with, if any. When the HLR receives this query message, it determines whether the call should be routed to another number (called a divert), or if it is to be routed directly to the mobile. In the latter case, the HLR obtains from the serving VLR a temporary routing number, the Mobile Station Roaming Number (MSRN), which the Gateway MSC uses to route the call to the Visited MSC. When the call arrives at the Visited MSC, the MSRN is used to determine which of the phones in this area is being called; that is, the MSRN maps back to the IMSI of the original phone number dialled. The MSC pages all the mobile phone masts in the area that the IMSI is registered in, in order to inform the phone that there is an incoming call for it. If the subscriber answers, a speech path is created through the Visiting MSC and Gateway MSC back to the network of the person making the call, and a normal telephone call follows. It is also possible that the phone call is not answered. If the subscriber is busy on another call (and call waiting is not being used), the Visited MSC routes the call to a predetermined Call Forward Busy (CFB) number. Similarly, if the subscriber does not answer the call after a period of time (typically 30 seconds), the Visited MSC routes the call to a predetermined Call Forward No Reply (CFNRy) number. Once again, the operator may decide to set this value by default to the voice mail of the mobile so that callers can leave a message. If the subscriber does not respond to the paging request, whether due to being out of coverage or their battery having gone flat or been removed, the Visited MSC routes the call to a predetermined Call Forward Not Reachable (CFNRc) number. Once again, the operator may decide to set this value by default to the voice mail of the mobile so that callers can leave a message. A roaming user may want to avoid these forwarding services in the visited network, as roaming charges will apply. In the United States and Canada, callers pay the cost of connecting to the Gateway MSC of the subscriber's phone company, regardless of the actual location of the phone.
As mobile numbers are given standard geographic numbers according to the North American Numbering Plan, callers pay the same to reach fixed phones and mobile phones in a given geographic area. Mobile subscribers pay for the connection time (typically using in-plan or prepaid minutes) for both incoming and outgoing calls. For outgoing calls, any long-distance charges are billed as if they originate at the GMSC, even though it is the visiting MSC that completes the connection to the PSTN. Plans that include nationwide long distance and/or nationwide roaming at no additional charge over "local" outgoing calls are popular. Mobile networks in Europe, Asia (except Hong Kong, Macau and Singapore), Australia, and Argentina only charge their subscribers for outgoing calls. Incoming calls are free to the mobile subscriber, with the exception of receiving a call while the subscriber is roaming, as described below. However, callers typically pay a higher rate when calling mobile phones. Special prefixes are used to designate mobile numbers so that callers are aware they are calling a mobile phone and therefore will be charged a higher rate. From the caller's point of view, it does not matter where the mobile subscriber is, as the technical process of connecting the call is the same. If a subscriber is roaming on a different company's network, the subscriber, instead of the caller, may pay a surcharge for the connection time. International roaming calls are often quite expensive, and as a result some companies require subscribers to grant explicit permission to receive calls while roaming to certain countries. During a GSM call, speech is converted from analogue sound waves to digital data by the phone itself, and transmitted through the mobile phone network by digital means. (Though older parts of the fixed Public Switched Telephone Network may use analog transmission.) The digital algorithm used to encode speech signals is called a codec. The speech codecs used in GSM are called Half Rate (HR), Full Rate (FR), Enhanced Full Rate (EFR), Adaptive Multi-Rate (AMR) and Wideband AMR, also known as HD voice. All codecs except AMR operate with a fixed data rate and error correction level. The GSM standard also provides separate facilities for transmitting digital data. This allows a mobile phone to act like any other computer on the Internet, sending and receiving data via the Internet Protocol. The mobile may also be connected to a desktop computer, laptop, or PDA, for use as a network interface (just like a modem or Ethernet card, but using one of the GSM data protocols described below instead of a PSTN-compatible audio channel or an Ethernet link to transmit data). Some GSM phones can also be controlled by a standardised Hayes AT command set through a serial cable or a wireless link (using IrDA or Bluetooth). The AT commands can control anything from ring tones to data compression algorithms. In addition to general Internet access, other special services may be provided by the mobile phone operator, such as SMS. A circuit-switched data connection reserves a certain amount of bandwidth between two points for the life of a connection, just as a traditional phone call allocates an audio channel of a certain quality between two phones for the duration of the call. Two circuit-switched data protocols are defined in the GSM standard: Circuit Switched Data (CSD) and High-Speed Circuit-Switched Data (HSCSD). These types of connections are typically charged on a per-second basis, regardless of the amount of data sent over the link.
This is because a certain amount of bandwidth is dedicated to the connection regardless of whether or not it is needed. Circuit-switched connections do have the advantage of providing a constant, guaranteed quality of service, which is useful for real-time applications like video conferencing. The General Packet Radio Service (GPRS) is a packet-switched data transmission protocol, which was incorporated into the GSM standard in 1997. It is backwards-compatible with systems that use pre-1997 versions of the standard. GPRS does this by sending packets to the local mobile phone mast (BTS) on channels not being used by circuit-switched voice calls or data connections. Multiple GPRS users can share a single unused channel because each of them uses it only for occasional short bursts. The advantage of packet-switched connections is that bandwidth is only used when there is actually data to transmit. This type of connection is thus generally billed by the kilobyte instead of by the second, and is usually a cheaper alternative for applications that only need to send and receive data sporadically, like instant messaging. GPRS is usually described as a 2.5G technology; see the main article for more information. Short Message Service (more commonly known as text messaging) has become the most used data application on mobile phones, with 74% of all mobile phone users worldwide, or 2.4 billion people, active users of SMS by the end of 2007. SMS text messages may be sent by mobile phone users to other mobile users or external services that accept SMS. The messages are usually sent from mobile devices via the Short Message Service Centre using the MAP protocol. The SMSC is a central routing hub for short messages. Many mobile service operators use their SMSCs as gateways to external systems, including the Internet, incoming SMS news feeds, and other mobile operators (often using the de facto SMPP standard for SMS exchange). The SMS standard is also used outside of the GSM system; see the main article for details. See also GSM codes for supplementary services.
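The difference between the two charging models described above is easy to see with a little arithmetic. The following Python sketch uses invented tariffs (placeholders, not real operator prices) to compare a per-second circuit-switched session with a per-kilobyte GPRS session for a sporadic application such as instant messaging.

# Illustrative arithmetic only; tariffs are invented placeholders.
def csd_cost(seconds: int, per_second: float = 0.002) -> float:
    """Circuit-switched: billed for connection time; data volume is irrelevant."""
    return seconds * per_second

def gprs_cost(kilobytes: float, per_kb: float = 0.01) -> float:
    """Packet-switched: billed per kilobyte; idle time is free."""
    return kilobytes * per_kb

# An hour of sporadic instant messaging: 3600 s online, but only ~50 kB sent.
print(f"CSD : {csd_cost(3600):.2f}")   # 7.20 - pays for the whole hour
print(f"GPRS: {gprs_cost(50):.2f}")    # 0.50 - pays only for the data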
https://en.wikipedia.org/wiki/GSM_services
Cell Broadcast (CB) is a method of simultaneously sending short messages to multiple mobile telephone users in a defined area. It is defined by ETSI's GSM committee and 3GPP and is part of the 2G, 3G, 4G and 5G standards.[1] It is also known as Short Message Service-Cell Broadcast (SMS-CB or CB SMS).[2][3] Unlike Short Message Service-Point to Point (SMS-PP), Cell Broadcast is a one-to-many geo-targeted and geo-fenced messaging service. Cell Broadcast technology is widely used for public warning systems.[4] Cell Broadcast messaging was first demonstrated in Paris in 1997. Some mobile operators used Cell Broadcast for communicating the area code of the antenna cell to the mobile user (via channel 050),[5] for nationwide or citywide alerting, weather reports, mass messaging, location-based news, etc. Cell Broadcast has been widely deployed since 2008 by major Asian, US, Canadian, South American and European network operators. Not all operators have the Cell Broadcast messaging function activated in their network yet, but most currently used handsets support Cell Broadcast; however, on many devices it is disabled by default, and there is no standardised interface for enabling the feature.[1] One Cell Broadcast message can reach a large number of telephones at once. Cell Broadcast messages are directed to specific radio cells of a mobile phone network, rather than to a specific telephone.[6] The latest generation of Cell Broadcast Systems (CBS) can send to the whole mobile network (e.g. 1,000,000 cells) in less than 10 seconds, reaching millions of mobile subscribers at the same time. A Cell Broadcast message is an unconfirmed push service, meaning that the originators of the messages do not know who has received the message, allowing for services based on anonymity.[1] Cell Broadcast is compliant with the EU General Data Protection Regulation (GDPR), as mobile phone numbers are not required by CB. The originator (alerting authority) of the Cell Broadcast message can request the success rate of a message. In such a case the Cell Broadcast System will respond with the number of addressed cells and the number of cells that have broadcast the Cell Broadcast (alert) message. Each radio cell covers a certain geographic area, typically a few kilometers in diameter, so by only sending the Cell Broadcast message to specific radio cells, the broadcast can be limited to a specific area (geotargeting). This is useful for messages that are only relevant in a specific area, such as flood warnings. The CB message parameters contain the broadcasting schedule. If the start time is left open, the CBC system will assume an immediate start, which will be the case for public warning messages. If the end time is left open, the message will be repeated indefinitely; a subsequent cancel message is then used to stop it. The repetition rate can be set from 2 seconds to values beyond 30 minutes. Each repeated CB message will have the same message identifier (indicating the source of the message) and the same serial number. Using this information, the mobile telephone is able to identify and ignore broadcasts of already received messages. A Cell Broadcast message page is composed of 82 octets, which, using the default character set, can encode 93 characters.
Up to 15 of these pages may be concatenated to form a Cell Broadcast message[1] (hence the maximum length of one Cell Broadcast message is 1395 characters).[3] A Cell Broadcast Centre (CBC), the system which is the source of SMS-CB messages, is connected to a Base Station Controller (BSC) in GSM networks, to a Radio Network Controller (RNC) in UMTS networks, to a Mobility Management Entity (MME) in LTE networks, or to a core Access and Mobility management Function (AMF) in 5G networks. The technical implementation of the Cell Broadcast service is described in the 3GPP specification TS 23.041.[7] A CBC sends CB messages, a list of cells where messages are to be broadcast, and the requested repetition rate and number of times they shall be broadcast to the BSC/RNC/MME/AMF. The BSC's/RNC's/MME's/AMF's responsibility is to deliver the CB messages to the base stations (BTSs), Node Bs, eNodeBs and gNodeBs which handle the requested cells. Cell Broadcast is not affected by traffic load; therefore, it is very suitable during a disaster, when load spikes of data (social media and mobile apps), regular SMS and voice call usage (mass call events) tend to significantly congest mobile networks, as multiple events have shown. Public warning systems, otherwise known as emergency alert systems, implemented through Cell Broadcast technology vary by country, but are broadly the same. Technical standards are outlined in the 3GPP TS 23.041 standard. Large implementations mentioned in 3GPP standards are Wireless Emergency Alerts (CMAS) in the United States and EU-Alert in Europe (set out in ETSI standards, but national implementation varies). Alerts can be geo-targeted, so that only phones in a defined geographical area receive an alert.[8] When an alert is received, a notification is shown in a unique format and a dedicated sound is played even if the phone is set to silent: a two-tone attention sound of 853 Hz and 960 Hz sine waves, as prescribed by both the WEA (CMAS) and ETSI standards.[9][8] Cell Broadcast emergency alerts can be broadcast in a local language and an additional language, which will be displayed depending on the user's device language setting.[10] Most phone manufacturers adhere to these standards but have slightly different user interfaces.[11] Similar to emergency calls, devices do not usually need a SIM card to receive alerts.[12] Emergency alerts in most implementations of Cell Broadcast have distinct alert categories or levels, using a message identifier outlined in 3GPP standards. The alert category or level is defined by the severity of the warning, e.g. threat to life, imminent danger or advisory message. Depending on national implementation, users may be able to opt out of receiving lower-level alerts. However, the highest level of alert will usually always be displayed on a user's device.[13][10] Below is a comparison table on alert categories/levels across systems (based on the common 3GPP message identifiers):[8] When roaming, if the user's home carrier supports Cell Broadcast emergency alerts, alerts will be displayed if the category/level of alert is enabled and equivalent to their home carrier's system.[10][8] Cell Broadcast messages can use a CAP (Common Alerting Protocol) message as an input, as specified by OASIS, or the Wireless Emergency Alerts (WEA) C-interface protocol, which has been specified jointly by the Alliance for Telecommunications Industry Solutions (ATIS) and the Telecommunications Industry Association (TIA).
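The page arithmetic quoted above (82 octets per page, 93 characters in the GSM 7-bit default alphabet, 15 pages maximum) can be checked directly. The helper below is an illustrative Python sketch, not part of any CBC product, that computes how many pages a given warning text would occupy.

# Checking the Cell Broadcast page figures quoted above (sketch only).
import math

OCTETS_PER_PAGE = 82
CHARS_PER_PAGE = OCTETS_PER_PAGE * 8 // 7   # 7-bit packing: 93 characters
MAX_PAGES = 15

def pages_needed(message: str) -> int:
    pages = math.ceil(len(message) / CHARS_PER_PAGE)
    if pages > MAX_PAGES:
        raise ValueError("message exceeds 15 pages (1395 characters)")
    return pages

print(CHARS_PER_PAGE, MAX_PAGES * CHARS_PER_PAGE)  # 93 1395
print(pages_needed("Flood warning for the river district. Move to higher ground."))  # 1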
Advantages of using Cell Broadcast for public warning are: A point of criticism in the past on Cell Broadcast was that there was no uniform user experience on all mobile devices in a country.[1] Wireless Emergency Alerts and government alerts using Cell Broadcast are supported in most models of mobile telephones. Some smartphones have a configuration menu that offers opt-out capabilities for certain public warning severity levels.[5][14] In case a national civil defence organisation adopts one of the 3GPP's Public Warning System standards (PWS, also known as CMAS in North America, EU-Alert in Europe, LAT-Alert in South America, and the Earthquake and Tsunami Warning System in Japan), each subscriber in that country, whether using the home network or roaming on it, automatically makes use of the embedded public warning Cell Broadcast feature present in every Android[5] and iOS mobile device.[14] In countries[who?] that have selected Cell Broadcast to transmit public warning messages, up to 99% of the handsets receive the cell broadcast message (reaching between 85 and 95% of the entire population, as not all people have a mobile phone) within seconds after the government authorities have submitted the message; see as examples Emergency Mobile Alert (New Zealand), Wireless Emergency Alerts (USA) and NL-Alert (Netherlands). Many countries and regions have implemented location-based alert systems based on Cell Broadcast. The alert messages to the population, already broadcast by various media, are relayed over the mobile network using Cell Broadcast. The following countries and regions have selected Cell Broadcast for their national public warning system but are currently in the process of implementing it.
https://en.wikipedia.org/wiki/Cell_Broadcast
Mobile phone tracking is a process for identifying the location of a mobile phone, whether stationary or moving. Localization may be effected by a number of technologies, such as the multilateration of radio signals between (several) cell towers of the network and the phone, or simply by using GNSS. To locate a mobile phone using multilateration of mobile radio signals, the phone must emit at least the idle signal to contact nearby antenna towers; the technique does not require an active call. Localization in the Global System for Mobile Communications (GSM) is based on the phone's signal strength to nearby antenna masts.[1] Mobile positioning may be used for location-based services that disclose the actual coordinates of a mobile phone. Telecommunication companies use this to approximate the location of a mobile phone, and thereby also its user.[2] The location of a mobile phone can be determined in a number of ways. The location of a mobile phone can be determined using the service provider's network infrastructure. The advantage of network-based techniques, from a service provider's point of view, is that they can be implemented non-intrusively without affecting handsets. Network-based techniques were developed many years prior to the widespread availability of GPS on handsets. (See US 5519760, issued 21 May 1996, for one of the first works relating to this.[3]) The technology of locating is based on measuring power levels and antenna patterns and uses the concept that a powered mobile phone always communicates wirelessly with one of the closest base stations, so knowledge of the location of the base station implies the cell phone is nearby. Advanced systems determine the sector in which the mobile phone is located and also roughly estimate the distance to the base station. Further approximation can be done by interpolating signals between adjacent antenna towers. Qualified services may achieve a precision of down to 50 meters in urban areas where mobile traffic and the density of antenna towers (base stations) are sufficiently high.[4] Rural and desolate areas may see miles between base stations and therefore determine locations less precisely. GSM localization uses multilateration to determine the location of GSM mobile phones, or dedicated trackers, usually with the intent to locate the user.[2] The accuracy of network-based techniques varies, with cell identification being the least accurate (due to differential signals transposing between towers, otherwise known as "bouncing signals"), triangulation being moderately accurate, and newer "advanced forward link trilateration"[5] timing methods being the most accurate. The accuracy of network-based techniques depends both on the concentration of cell base stations, with urban environments achieving the highest possible accuracy because of the higher number of cell towers, and on the implementation of the most current timing methods. One of the key challenges of network-based techniques is the requirement to work closely with the service provider, as it entails the installation of hardware and software within the operator's infrastructure. Frequently, the compulsion associated with a legislative framework, such as Enhanced 9-1-1, is required before a service provider will deploy a solution.
In December 2020, it emerged that the Israeli surveillance company Rayzone Group may have gained access, in 2018, to the SS7 signaling system via cellular network provider Sure Guernsey, thereby being able to track the location of any cellphone globally.[6] The location of a mobile phone can be determined using client software installed on the handset.[7] This technique determines the location of the handset from its cell identification and the signal strengths of the home and neighboring cells, which are continuously sent to the carrier.[8] In addition, if the handset is also equipped with GPS, then significantly more precise location information can then be sent from the handset to the carrier. Another approach is to use a fingerprinting-based technique,[9][10][11] where the "signature" of the home and neighboring cells' signal strengths at different points in the area of interest is recorded by war-driving and matched in real time to determine the handset location. This is usually performed independently from the carrier. The key disadvantage of handset-based techniques, from the service provider's point of view, is the necessity of installing software on the handset. It requires the active cooperation of the mobile subscriber as well as software that must be able to handle the different operating systems of the handsets. Typically, smartphones, such as those based on Symbian, Windows Mobile, Windows Phone, BlackBerry OS, iOS, or Android, would be able to run such software, e.g. Google Maps. One proposed work-around is the installation of embedded hardware or software on the handset by the manufacturers, e.g., Enhanced Observed Time Difference (E-OTD). This avenue has not made significant headway, due to the difficulty of convincing different manufacturers to cooperate on a common mechanism and to address the cost issue. Another difficulty would be to address the issue of foreign handsets that are roaming in the network. Crowdsourced Wi-Fi data can also be used to identify a handset's location.[14] The poor performance of GPS-based methods in indoor environments and the increasing popularity of Wi-Fi have encouraged companies to design new and feasible methods to carry out Wi-Fi-based indoor positioning.[15] Most smartphones combine Global Navigation Satellite Systems (GNSS), such as GPS and GLONASS, with Wi-Fi positioning systems. Hybrid positioning systems use a combination of network-based and handset-based technologies for location determination. One example would be some modes of Assisted GPS, which can use both GPS and network information to compute the location. Both types of data are thus used by the telephone to make the location more accurate (i.e., A-GPS). Alternatively, tracking with both systems can also occur by having the phone attain its GPS location directly from the satellites, and then having the information sent via the network to the person that is trying to locate the telephone. Such systems include Google Maps, as well as LTE's OTDOA and E-CellID. There are also hybrid positioning systems which combine several different location approaches to position mobile devices by Wi-Fi, WiMAX, GSM, LTE, IP addresses, and network environment data. In order to route calls to a phone, cell towers listen for a signal sent from the phone and negotiate which tower is best able to communicate with the phone. As the phone changes location, the antenna towers monitor the signal, and the phone is "roamed" to an adjacent tower as appropriate.
By comparing the relative signal strength from multiple antenna towers, a general location of a phone can be roughly determined. Other means make use of the antenna pattern, which supports angular determination and phase discrimination. Newer phones may also allow tracking of the phone even when turned on but not active in a telephone call. This results from the roaming procedures that perform hand-over of the phone from one base station to another.[16] A phone's location can be shared with friends and family, posted to a public website, recorded locally, or shared with other users of a smartphone app. The inclusion of GPS receivers on smartphones has made geographical apps nearly ubiquitous on these devices. Specific applications include: In January 2019, the location of kidnapping victim Olivia Ambrose's iPhone, as determined by her sister, helped Boston police find her.[17] Locating or positioning touches upon delicate privacy issues, since it enables someone to check where a person is without the person's consent.[18] Strict ethics and security measures are strongly recommended for services that employ positioning. In 2012, Malte Spitz held a TED talk[19] on the issue of mobile phone privacy in which he showcased his own stored data that he received from Deutsche Telekom after suing the company. He described the data, which consisted of 35,830 lines of data collected during the span of Germany's data retention at the time, saying, "This is six months of my life [...] You can see where I am, when I sleep at night, what I'm doing." He partnered with ZEIT Online and made his information publicly available in an interactive map which allows users to watch his entire movements during that time in fast-forward. Spitz concluded that technology consumers are the key to challenging privacy norms in today's society and "have to fight for self-determination in the digital age."[20][21] The Chinese government has proposed using this technology to track the commuting patterns of Beijing city residents.[22] Aggregate presence of mobile phone users could be tracked in a privacy-preserving fashion.[23] This location data was used to locate protesters during protests in Beijing in 2022.[24] In Europe most countries have a constitutional guarantee on the secrecy of correspondence, and location data obtained from mobile phone networks is usually given the same protection as the communication itself.[25][26][27][28] In the United States, there is a limited constitutional guarantee on the privacy of telecommunications through the Fourth Amendment.[29][30][31][32][33] The use of location data is further limited by statutory,[34] administrative,[35] and case law.[29][36] Police access to seven days of a citizen's location data is unquestionably enough to constitute a Fourth Amendment search requiring both probable cause and a warrant.[29][37] In June 2018, the United States Supreme Court ruled in Carpenter v. United States that the government violates the Fourth Amendment by accessing historical records containing the physical locations of cellphones without a search warrant.[38]
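As a rough illustration of the network-based approach described above, the sketch below estimates a handset's position as the signal-strength-weighted centroid of the towers that hear it. This is a deliberate simplification for illustration: real systems combine timing advance, angle-of-arrival and fingerprinting, and the coordinates and power values here are invented.

# Toy network-based positioning: signal-strength-weighted centroid (sketch).
def weighted_centroid(towers: list[tuple[float, float, float]]) -> tuple[float, float]:
    """towers: (latitude, longitude, received power in milliwatts)."""
    total = sum(p for _, _, p in towers)
    lat = sum(la * p for la, _, p in towers) / total
    lon = sum(lo * p for _, lo, p in towers) / total
    return lat, lon

towers = [
    (42.3601, -71.0589, 5.0),   # strongest: the phone is probably nearest here
    (42.3650, -71.0500, 2.0),
    (42.3550, -71.0650, 1.0),
]
print(weighted_centroid(towers))  # estimate biased toward the strongest tower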
https://en.wikipedia.org/wiki/GSM_localization
Multimedia Messaging Service (MMS) is a standard way to send messages that include multimedia content to and from a mobile phone over a cellular network. Users and providers may refer to such a message as a PXT, a picture message, or a multimedia message.[1] The MMS standard extends the core SMS (Short Message Service) capability, allowing the exchange of text messages greater than 160 characters in length. Unlike text-only SMS, MMS can deliver a variety of media, including up to forty seconds of video, one image, a slideshow[2] of multiple images, or audio. Media companies have utilized MMS on a commercial basis as a method of delivering news and entertainment content, and retailers have deployed it as a tool for delivering scannable coupon codes, product images, videos, and other information. On (mainly) older devices, messages that start off as text SMS are converted to and sent as MMS when an emoji is added.[3][4] The commercial introduction of MMS started in March 2002,[1] although picture messaging had already been established in Japan.[5] It was built using the technology of SMS[2] as a captive technology which enabled service providers to "collect a fee every time anyone snaps a photo."[6] MMS was designed to be able to work on the then-new GPRS and 3G networks[7] and could be implemented through either a WAP-based or IP-based gateway.[8] The 3GPP and WAP Forum groups fostered the development of the MMS standard, which was then continued by the Open Mobile Alliance (OMA). MMS messages are delivered in a different way from SMS. The first step is for the sending device to encode the multimedia content in a fashion similar to sending a MIME message (MIME content formats are defined in the MMS Message Encapsulation specification). The message is then forwarded to the carrier's MMS store-and-forward server, known as the MMSC (Multimedia Messaging Service Centre). If the receiver is on a carrier different from the sender's, the MMSC acts as a relay and forwards the message to the MMSC of the recipient's carrier using the Internet.[9] Once the recipient's MMSC has received a message, it first determines whether the receiver's handset is "MMS capable" or not. If it supports the standards for receiving MMS, the content is extracted and sent to a temporary storage server with an HTTP front-end. An SMS "control message" containing the URL of the content is then sent to the recipient's handset to trigger the receiver's WAP browser to open and receive the content from the embedded URL. Several other messages are exchanged to indicate the status of the delivery attempt.[10] Before delivering content, some MMSCs also include a conversion service that will attempt to modify the multimedia content into a format suitable for the receiver. This is known as "content adaptation". If the receiver's handset is not MMS capable, the message is usually delivered to a web-based service from where the content can be viewed from a normal web browser. The URL for the content is usually sent to the receiver's phone in a normal text message. This behavior is usually known as a "legacy experience", since content can still be received by the user. The method for determining whether a handset is MMS capable is not specified by the standards. A database is usually maintained by the operator, in which each mobile phone number is marked as being associated with a legacy handset or not. This method is unreliable, however, because customers can independently change their handsets, and many of these databases are not updated dynamically.
MMS does not utilize operator-maintained "data" plans to distribute multimedia content; these are used only if the user clicks links inside the message. E-mail and web-based gateways to the MMS system are common. On the reception side, the content servers can typically receive service requests both from WAP and normal HTTP browsers, so delivery via the web is simple. For sending from external sources to handsets, most carriers allow a MIME-encoded message to be sent to the receiver's phone number using a special e-mail address combining the recipient's public phone number and a special domain name, which is typically carrier-specific. There are some challenges with MMS that do not exist with SMS: Although the standard does not specify a maximum size for a message, 300 kB and 600 kB are the recommended sizes used by networks[11] for compatibility with MMS 1.2 and MMS 1.3 devices respectively. The limit for the first generation of MMS was 50 kB.[8] Verizon launched its MMS service in July 2003.[12] Between 2010 and 2013, MMS traffic in the U.S. increased by 70%, from 57 billion to 96 billion messages sent.[13] This is due in part to the wide adoption of smartphones. However, take-up of MMS never matched the widespread popularity of SMS text messaging.[14] Due to the lower cost and improved functionality provided by modern internet-based instant messengers such as WhatsApp, Telegram, and Signal, MMS usage has declined,[15] and it has been discontinued by several telcos since the early 2020s. Countries with operators that have discontinued MMS include: India (BSNL; from 1 November 2015),[16] the Philippines (Sun Cellular, Smart Communications, TNT; from 28 September 2018),[17] Singapore (Singtel, M1, Starhub; from 16 November 2021),[18] Kazakhstan (Kcell; from 6 May 2022),[19] Switzerland (Swisscom, Salt Mobile; from 10 January 2023),[20][21] and Germany (Vodafone; from 17 January 2023).[22] RCS is intended to be the successor technology to MMS and SMS.
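The e-mail gateway route described above can be sketched in a few lines of Python. The example builds a MIME message, mirroring the MIME encoding MMS content uses, and hands it to an SMTP relay. The gateway domain mms.example-carrier.com, the relay host and the addresses are placeholders, since each carrier publishes its own gateway address format.

# Hedged sketch of the e-mail-to-MMS gateway path; all hosts are placeholders.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def send_via_gateway(number: str, jpeg_bytes: bytes, caption: str) -> None:
    msg = MIMEMultipart()
    msg["From"] = "sender@example.com"
    msg["To"] = f"{number}@mms.example-carrier.com"   # hypothetical gateway
    msg["Subject"] = "MMS"
    msg.attach(MIMEText(caption))                     # text part
    msg.attach(MIMEImage(jpeg_bytes, _subtype="jpeg"))  # picture part
    with smtplib.SMTP("smtp.example.com") as relay:   # placeholder relay
        relay.send_message(msg)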
https://en.wikipedia.org/wiki/Multimedia_Messaging_Service
Network Identity and Time Zone (NITZ)[1] is a mechanism for provisioning local time and date, time zone and daylight saving time (DST) offset, as well as network provider identity information, to mobile devices via a wireless network.[2] NITZ has been an optional part of the official GSM standard since phase 2+ release 96.[3] NITZ is often used to automatically update the system clock of mobile phones. In terms of standards and other timing or network access protocols such as NTP or CDMA2000, the quality and enforcement of NITZ is weak. The standard allows the network to "transfer its current identity, universal time, DST and LTZ",[1] but each element is optional, and support varies across RAN vendors and operators. This presents a problem for device manufacturers, who are required to maintain a complex time zone database rather than rely on the network operator. Additionally, unlike 3GPP2, which transmits GPS-sourced, millisecond-resolution time via the sync channel, for NITZ the "accuracy of the time information is in the order of minutes".[1] The optional nature of the delivery mechanism results in issues for users in regions that do not practice daylight saving but which share a time zone with a region that does. Most modern handsets have their own internal time zone software and will automatically perform a daylight saving advance. Because NITZ delivery is not usually periodic but dependent on the handset crossing radio network boundaries, these handsets can display incorrect time for many hours or even days before a NITZ update arrives and corrects them. Initial list derived from ref:[4][unreliable source?]
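Application code rarely sees NITZ frames directly; on many modules the NITZ-derived clock can be read back with the standard AT+CCLK? query from 3GPP TS 27.007, whose response carries the UTC offset in quarter-hours. The Python parser below is a small sketch under that assumption; the sample response string is invented.

# Parse a modem clock string such as +CCLK: "24/03/31,14:05:09+08"
# into a timezone-aware datetime (the signed field is quarter-hours).
import re
from datetime import datetime, timedelta, timezone

def parse_cclk(line: str) -> datetime:
    m = re.search(r'\+CCLK: "(\d\d)/(\d\d)/(\d\d),(\d\d):(\d\d):(\d\d)([+-]\d\d)"', line)
    if not m:
        raise ValueError("not a +CCLK response")
    yy, mo, dd, hh, mi, ss, q = m.groups()
    tz = timezone(timedelta(minutes=15 * int(q)))   # quarter-hours -> minutes
    return datetime(2000 + int(yy), int(mo), int(dd),
                    int(hh), int(mi), int(ss), tzinfo=tz)

print(parse_cclk('+CCLK: "24/03/31,14:05:09+08"'))  # 2024-03-31 14:05:09+02:00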
https://en.wikipedia.org/wiki/NITZ
Wireless Application Protocol (WAP) is an obsolete technical standard for accessing information over a mobile cellular network. Introduced in 1999,[1] WAP allowed users with compatible mobile devices to browse content such as news, weather and sports scores provided by mobile network operators, specially designed for the limited capabilities of a mobile device.[2] The Japanese i-mode system offered a competing wireless data standard. Before the introduction of WAP, mobile service providers had limited opportunities to offer interactive data services, but needed interactivity to support Internet and Web applications. Although hyped at launch, WAP suffered from criticism. However, the introduction of GPRS networks, offering a faster speed, led to an improvement in the WAP experience.[3][4] WAP content was accessed using a WAP browser, which is like a standard web browser but designed for reading pages specific to WAP instead of HTML. By the 2010s it had been largely superseded by more modern standards such as XHTML.[5] Modern phones have proper Web browsers, so they do not need WAP markup for compatibility, and therefore most are no longer able to render and display pages written in WML, WAP's markup language.[6] The WAP standard described a protocol suite or stack[8] allowing the interoperability of WAP equipment and software with different network technologies, such as GSM and IS-95 (also known as CDMA). The bottom-most protocol in the suite, the Wireless Datagram Protocol (WDP), functions as an adaptation layer that makes every data network look a bit like UDP to the upper layers by providing unreliable transport of data with two 16-bit port numbers (origin and destination). All the upper layers view WDP as one and the same protocol, which has several "technical realizations" on top of other "data bearers" such as SMS, USSD, etc. On native IP bearers such as GPRS, UMTS packet-radio service, or PPP on top of a circuit-switched data connection, WDP is in fact exactly UDP. WTLS, an optional layer, provides a public-key cryptography-based security mechanism similar to TLS. WTP provides transaction support adapted to the wireless world. It provides for transmitting messages reliably, similarly to TCP. However, WTP is more effective than TCP when packets are lost, a common occurrence with 2G wireless technologies in most radio conditions, since WTP does not misinterpret the packet loss as network congestion. WAP sites are written in WML, a markup language.[9] WAP provides content in the form of decks, which have several cards: decks are similar to HTML web pages in that they are the unit of data transmission used by WAP and each has its own unique URL, while cards are elements such as text or buttons which can be seen by a user.[10] WAP has URLs which can be typed into an address bar, similar to URLs in HTTP. Relative URLs in WAP are used for navigating within a deck, and absolute URLs in WAP are used for navigating between decks.[9] WAP was designed to operate in bandwidth-constrained networks by using data compression before transmitting data to users.[11] This protocol suite allows a terminal to transmit requests that have an HTTP or HTTPS equivalent to a WAP gateway; the gateway translates requests into plain HTTP.
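To make the deck/card model concrete, here is a minimal two-card WML deck, embedded in a Python string for illustration. The content and card ids are invented; a gateway would serve such a deck with the MIME type text/vnd.wap.wml, and the relative URL "#weather" navigates between cards within the same deck, as described above.

# A minimal WML deck (one deck, two cards), shown as a Python string.
WML_DECK = """<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="News">
    <p>Headlines<br/>
      <a href="#weather">Weather</a>  <!-- relative URL: a card in the same deck -->
    </p>
  </card>
  <card id="weather" title="Weather">
    <p>Sunny, 21 C</p>
  </card>
</wml>"""
print(WML_DECK)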
WAP decks are delivered through a proxy which checks decks for WML syntax correctness and consistency, which improves the user experience on resource-constrained mobile phones.[5] WAP cannot guarantee how content will appear on a screen, because WAP elements are treated as hints to accommodate the capabilities of each mobile device. For example, some mobile phones do not support graphics/images or italics.[10] The Wireless Application Environment (WAE) space defines application-specific markup languages. For WAP version 1.X, the primary language of the WAE is Wireless Markup Language (WML). In WAP 2.0, the primary markup language is XHTML Mobile Profile. WAP Push was incorporated into the specification to allow WAP content to be pushed to the mobile handset with minimal user intervention. A WAP Push is basically a specially encoded message which includes a link to a WAP address.[12] WAP Push was specified on top of the Wireless Datagram Protocol (WDP); as such, it can be delivered over any WDP-supported bearer, such as GPRS or SMS.[13] In most GSM networks, however, network-initiated GPRS activation is not generally supported, so WAP Push messages have to be delivered on top of the SMS bearer. On receiving a WAP Push, a WAP 1.2 (or later) enabled handset will automatically give the user the option to access the WAP content. This is also known as WAP Push SI (Service Indication).[13] A variant, known as WAP Push SL (Service Loading), directly opens the browser to display the WAP content, without user interaction. Since this behaviour raises security concerns, some handsets handle WAP Push SL messages in the same way as SI, by providing user interaction. The network entity that processes WAP Pushes and delivers them over an IP or SMS bearer is known as a Push Proxy Gateway (PPG).[13] A re-engineered 2.0 version was released in 2002. It uses a cut-down version of XHTML with end-to-end HTTP, dropping the gateway and custom protocol suite used to communicate with it. A WAP gateway can be used in conjunction with WAP 2.0; however, in this scenario, it is used as a standard proxy server. The WAP gateway's role would then shift from one of translation to adding additional information to each request. This would be configured by the operator and could include telephone numbers, location, billing information, and handset information. Mobile devices process XHTML Mobile Profile (XHTML MP), the markup language defined in WAP 2.0. It is a subset of XHTML and a superset of XHTML Basic. A version of Cascading Style Sheets (CSS) called WAP CSS is supported by XHTML MP. Multimedia Messaging Service (MMS) is a combination of WAP and SMS allowing for the sending of picture messages. The WAP Forum was founded in 1998 by Ericsson, Motorola, Nokia and Unwired Planet.[14] It aimed primarily to bring together the various wireless technologies in a standardised protocol.[15] In 2002, the WAP Forum was consolidated (along with multiple other forums of the industry) into the Open Mobile Alliance (OMA).[16] The first company to launch a WAP site was Dutch mobile phone operator Telfort BV in October 1999. The site was developed as a side project by Christopher Bee and Euan McLeod and launched with the debut of the Nokia 7110.
Marketers hyped WAP at the time of its introduction,[17] leading users to expect WAP to have the performance of fixed (non-mobile) Internet access. BT Cellnet, one of the UK telecoms, ran an advertising campaign depicting a cartoon WAP user surfing through a Neuromancer-like "information space".[18] In terms of speed, ease of use, appearance and interoperability, the reality fell far short of expectations when the first handsets became available in 1999.[19][20] This led to the wide usage of sardonic phrases such as "Worthless Application Protocol",[21] "Wait And Pay",[22] and "WAPlash".[23] Between 2003 and 2004, WAP made a stronger resurgence with the introduction of wireless services (such as Vodafone Live!, T-Mobile T-Zones and other easily accessible services). Operator revenues were generated by the transfer of GPRS and UMTS data, which is a different business model than that used by traditional Web sites and ISPs. According to the Mobile Data Association, WAP traffic in the UK doubled from 2003 to 2004.[24] By the year 2013, WAP use had largely disappeared. Most major companies and websites have since retired from the use of WAP, and it has not been a mainstream technology for the web on mobile for a number of years. Most modern handset web browsers now support full HTML, CSS, and most of JavaScript, and do not need to use any kind of WAP markup for webpage compatibility. The list of handsets supporting HTML is extensive, and includes all Android handsets, all versions of the iPhone handset, all BlackBerry devices, all devices running Windows Phone, and multiple Nokia handsets. WAP saw major success in Japan. While the largest operator NTT DoCoMo did not use WAP in favor of its in-house system i-mode, rival operators KDDI (au) and SoftBank Mobile (previously Vodafone Japan) both successfully deployed WAP technology. In particular, au's chaku-uta and chaku-movie (ringtone song and ringtone movie) services were based on WAP. As in Europe, WAP and i-mode usage declined in the 2010s as HTML-capable smartphones became popular in Japan. Adoption of WAP in the US suffered because a number of cell phone providers required separate activation and additional fees for data support, and also because telecommunications companies sought to limit data access to only approved data providers operating under license of the signal carrier.[citation needed] In recognition of the problem, the US Federal Communications Commission (FCC) issued an order on 31 July 2007 which mandated that licensees of the 22-megahertz-wide "Upper 700 MHz C Block" spectrum would have to implement a wireless platform allowing customers, device manufacturers, third-party application developers, and others to use any device or application of their choice when operating on this particular licensed network band.[25][26] Commentators criticized several shortcomings of Wireless Markup Language (WML) and WAP. However, others argued[who?] that, given the technological limitations of its time, it succeeded in its goal of providing simple and custom-designed content at a time when most people globally did not have regular internet access. Technical criticisms included: The idiosyncratic WML language cut users off from the conventional HTML Web, leaving only native WAP content and Web-to-WAP proxied content available to WAP users. Many wireless carriers sold their WAP services as "open", in that they allowed users to reach any service expressed in WML and published on the Internet.
However, they also made sure that the first page clients accessed was their own "wireless portal", which they controlled closely.[27] Some carriers also turned off editing or accessing of the address bar in the device's browser. To accommodate users wanting to go off deck, an address bar was provided on a form on a page linked off the hard-coded home page. This made it easier for carriers to filter off-deck WML sites by URL, or to disable the address bar later should the carrier decide to switch all users to a walled-garden model. Given the difficulty of typing fully qualified URLs on a phone keyboard, most users would give up going "off portal" or out of the walled garden; by not letting third parties put their own entries on the operators' wireless portals, some[who?] contend that operators cut themselves off from a valuable opportunity. On the other hand, some operators[which?] argue that their customers would have wanted them to manage the experience and, on such a constrained device, avoid giving access to too many services.[citation needed]

Under-specification of terminal requirements: The early WAP standards included many optional features and under-specified requirements, which meant that compliant devices would not necessarily interoperate properly. This resulted in great variability in the actual behavior of phones, principally because WAP-service implementers and mobile-phone manufacturers did not[citation needed] obtain a copy of the standards or the correct hardware and standard software modules. As an example, some phone models would not accept a page of more than 1 KB in size, and some would even crash. The user interface of devices was also underspecified: for example, accesskeys (e.g., the ability to press '4' to directly access the fourth link in a list) were variously implemented depending on phone models (sometimes with the accesskey number automatically displayed by the browser next to the link, sometimes without it, and sometimes not implemented at all).

Constrained user interface capabilities: Terminals with small black-and-white screens and few buttons, like the early WAP terminals, face difficulties in presenting much information to their user, which compounded the other problems: one had to be extra careful in designing the user interface on such resource-constrained devices. In contrast with web development, WAP development was unforgiving: the strict requirements of the WML specification and the demands of optimizing for and testing on a wide variety of wireless devices considerably lengthened the time required to complete most projects. As of 2009[update], however, with many mobile devices supporting XHTML, and programs such as Adobe GoLive and Dreamweaver offering improved web-authoring tools, it became easier to create content accessible to many more devices.

Lack of user agent profiling tools: Websites adapt content to fit multiple device models by tailoring pages to their capabilities based on the provided User-Agent type. However, the development kits which existed for WML did not provide this capability. It quickly became nearly impossible for site hosts to determine whether a request came from a mobile device or from a larger, more capable device.
No useful profiling tools or database of device capabilities was built into the specifications.[citation needed]

Neglect of content providers by wireless carriers: Some wireless carriers had assumed a "build it and they will come" strategy, meaning that they would just provide the transport of data as well as the terminals, and then wait for content providers to publish their services on the Internet and make their investment in WAP useful. However, content providers received little help or incentive to go through the complicated route of development. Others, notably in Japan (cf. below), had a more thorough dialogue with their content-provider community, which was then replicated in modern, more successful WAP services such as i-mode in Japan or the Gallery service in France.[28]

The original WAP model provided a simple platform for access to web-like WML services and e-mail using mobile phones in Europe and the SE Asian regions. In 2009 it continued to have a considerable user base. The later versions of WAP, primarily targeting the United States market, were designed by Daniel Tilden of Bell Labs for a different requirement: to enable full web XHTML access using mobile devices with a higher specification and cost, and with a higher degree of software complexity.

Considerable discussion has addressed the question of whether the WAP protocol design was appropriate. The initial design of WAP specifically aimed at protocol independence across a range of different protocols (SMS, IP over PPP over a circuit-switched bearer, IP over GPRS, etc.). This led to a protocol considerably more complex than an approach directly over IP would have produced. Most controversial, especially for those from the IP side, was the design of WAP over IP. WAP's transmission-layer protocol, WTP, uses its own retransmission mechanisms over UDP to attempt to solve the problem of the inadequacy of TCP over high-packet-loss networks.[citation needed]
https://en.wikipedia.org/wiki/Wireless_Application_Protocol
GSM-R, Global System for Mobile Communications – Railway or GSM-Railway, is an international wireless communications standard for railway communication and applications. A sub-system of the European Rail Traffic Management System (ERTMS), it is used for communication between train and railway regulation control centers. The system is based on GSM and the EIRENE – MORANE specifications, which guarantee performance at speeds up to 500 km/h (310 mph) without any communication loss.

GSM-R could be supplanted by LTE-R,[1] with the first production implementation being in South Korea.[2] However, LTE is generally considered to be a "4G" protocol, and the UIC's Future Railway Mobile Communication System (FRMCS) program[3] is considering moving to something "5G"-based (specifically 3GPP R15/16, i.e. 5G NR),[4] thus skipping two technological generations.[5][6]

GSM-R is built on GSM technology, and benefits from the economies of scale of its GSM technology heritage, aiming to be a cost-efficient digital replacement for existing incompatible in-track cable and analogue railway radio networks. Over 35 different such systems are reported to exist in Europe alone.[7]

The standard is the result of over ten years of collaboration between the various European railway companies, with the goal of achieving interoperability using a single communication platform. GSM-R is part of the European Rail Traffic Management System (ERTMS) standard and carries the signalling information directly to the train driver, enabling higher train speeds and traffic density with a high level of safety.

The specifications were finalized in 2000, based on the European Union-funded MORANE (Mobile Radio for Railways Networks in Europe) project. The specification is maintained by the International Union of Railways project ERTMS. GSM-R has been selected by 38 countries across the world, including all member states of the European Union and countries in Asia, Eurasia and northern Africa.

GSM-R is a secure platform for voice and data communication between railway operational staff, including drivers, dispatchers, shunting team members, train engineers, and station controllers. It delivers features such as group calls (VGCS), voice broadcast (VBS), location-based connections, and call pre-emption in case of an emergency. This supports applications such as cargo tracking, video surveillance in trains and at stations, and passenger information services.

GSM-R is typically implemented using dedicated base station masts close to the railway, with tunnel coverage effected using directional antennae or "leaky" feeder transmission. The distance between the base stations is 7–15 km (4.3–9.3 mi). This creates a high degree of redundancy and higher availability and reliability. In Germany, Italy and France the GSM-R network has between 3,000 and 4,000 base stations.

In areas where the European Train Control System (ETCS) Level 2 or 3 is used, the train maintains a circuit-switched digital modem connection to the train control center at all times. This modem operates with higher priority than normal users (eMLPP). If the modem connection is lost, the train will automatically stop.

GSM-R is one part of ERTMS (European Rail Traffic Management System), alongside the European Train Control System (ETCS). GSM-R is standardized to be implemented in either the E-GSM (900 MHz GSM) or DCS 1800 (1,800 MHz GSM) frequency band, both of which are used around the world.
Europe includes the CEPT member states, which include all EU members and Albania, Andorra, Azerbaijan, Bosnia and Herzegovina, Georgia, Iceland, Liechtenstein, North Macedonia, Moldova, Monaco, Montenegro, Norway, San Marino, Serbia, Switzerland, Turkey, Ukraine, the United Kingdom, and Vatican City. Although previously members of the CEPT, Belarus and Russia had their memberships suspended indefinitely with effect from 00:00 (CET), 18 March 2022. The CEPT Assembly made this decision following a poll of members by the CEPT Presidency, and published it on 17 March 2022.[8]

GSM-R uses a specific frequency band, which can be referred to as the "standard" GSM-R band.[9] In Germany, this band was extended with additional channels in the 873–876 MHz and 918–921 MHz range.[10] These frequencies were formerly used for regional trunked radio systems; full usage of the new channels was targeted for 2015.[11]

GSM-R occupies a 4 MHz wide range of the E-GSM band (900 MHz GSM).[12]

In India, GSM-R occupies a 1.6 MHz wide range of the P-GSM band (900 MHz GSM) held by Indian Railways.[13][14]

GSM-R is also being implemented within the DCS 1800 band.[15] In Australia, the DCS 1800 band was initially divided and auctioned in paired parcels, each of 2 × 2.5 MHz with a duplex spacing of 95 MHz. State rail operators acquired six mostly non-contiguous parcels covering 2 × 15 MHz of spectrum to deploy GSM-R.[16] State rail operators re-licensed 2 × 10 MHz of 1800 MHz spectrum in Adelaide, Brisbane, Melbourne, Perth, and Sydney for rail safety and control communications. All except the South Australian Department of Planning, Transport and Infrastructure (Adelaide) re-licensed 2 × 5 MHz of 1800 MHz spectrum at commercial rates set by the Australian Government.[17][18]

The modulation used is GMSK (Gaussian minimum-shift keying). GSM-R is a TDMA (time-division multiple access) system. Data transmission is organised in periodic TDMA frames (with a period of 4.615 ms) on each carrier frequency (physical channel). Each TDMA frame is divided into 8 time slots, the logical channels (each time slot 577 μs long), each carrying 148 bits of information.

There are worries that LTE mobile communication will disturb GSM-R, since it has been assigned a frequency band rather close to GSM-R's. This could cause ETCS disturbances, random emergency braking because of lost communications, etc.[19] As a result, there is an increasing trend towards monitoring and managing GSM-R interference using active and automated testing on board trains and trackside.[20]

The GSM-R standard specification is divided into two EIRENE specifications.[21] EIRENE defines the "Technical Specification for Interoperability" (TSI) as the set of mandatory specifications to be fulfilled to keep compatibility with other European networks; the current TSIs are FRS 7 and SRS 15. EIRENE also defines non-mandatory specifications, called "interim versions", which define extra features that are likely to become mandatory in the next TSIs. The current interim versions, dated 21 December 2015, are FRS 8.0.0 and SRS 16.0.0.[22] The GSM-R specifications are fairly stable; the latest mandatory upgrade was in 2006. A complete timeline of GSM-R versions has been published.[23] The current version of GSM-R can run on both R99 and R4 3GPP networks.

GSM-R permits new services and applications for mobile communications in several domains. It is used to transmit data between trains and railway regulation centers with levels 2 and 3 of ETCS.
When the train passes over a Eurobalise, it transmits its new position and speed, and receives in return clearance (or refusal) to enter the next track section, together with its new maximum speed. Like other GSM devices, GSM-R equipment can transmit data and voice. New GSM-R features for mobile communication are based on GSM and are specified by the EIRENE project; they include dedicated call features as well as a number of additional railway-specific features. The following definitions are part of the System Requirements Specification (SRS) as defined by the EIRENE standard.[24]

The Shunting Emergency Call (SEC) is a dedicated group call with the number 599. The call is established with an emergency-level priority, the highest possible priority 0. The SEC is enabled and used by devices registered for shunting operations. The establishment of such a call leads to automatic acceptance of the call on all enabled devices within the currently configured area or cell group.[24]

The EIRENE SRS document defines a fixed numbering plan for GSM-R, built from number prefixes. These numbers are used for functional registration and for fixed entries such as MSISDNs or short dial codes as defined within the HLR. 807660, for example, defines the MSISDN of a mobile subscriber. The number 23030301 would be a functional number associated with the train number 30303 and the role of the user, 01 (a toy parser for this scheme is sketched later in this section).

Different groups make up the GSM-R market.[25][needs update]

Transport NSW is installing a Digital Train Radio System (DTRS) throughout the 1,455-kilometre (904 mi) electrified rail network, including 66 tunnels covering 70 kilometres (43 mi), bounded by Kiama, Macarthur, Lithgow, Bondi Junction and Newcastle, with GSM-R to replace the existing analogue MetroNet train radio. The replacement will fulfil recommendations from the Special Commission of Inquiry into the Waterfall rail accident to provide a common platform of communication for staff working on the railway. The equipment will be installed at about 250 locations and more than 60 sites in tunnels. The old analogue network was dismantled in 2020.[29]

Public Transport Victoria has installed a Digital Train Radio System (DTRS) on the Melbourne train network with GSM-R, replacing the old Urban Train Radio System (UTRS). The equipment was installed at about 100 locations, at a cost of $152 million.[30]

In France, the first commercial railway route opened with full GSM-R coverage was the LGV Est européenne linking Paris Gare de l'Est to Strasbourg. It opened on 10 June 2007.

As of 2008, in Italy more than 9,000 kilometres (5,600 mi) of railway lines are served by the GSM-R infrastructure: this number includes both ordinary and high-speed lines, as well as more than 1,000 km (620 mi) of tunnels. Roaming agreements with other Italian mobile operators allow coverage of lines not directly served by GSM-R. Roaming agreements have also been set up with French and Swiss railway companies, and it is planned to extend them to other countries.[32]

In the Netherlands, there is coverage on all lines; the old system, called Telerail, was abandoned in favour of GSM-R in 2006. In Norway, the GSM-R network was opened on all lines on 1 January 2007, replacing the older Scanet network.

In the UK, the implementation of over 14,000 km (8,700 mi) of GSM-R enabled railway, intended to replace both the legacy VHF 205 MHz National Radio Network (NRN) and the UHF 450 MHz suburban Cab Secure Radio (CSR) systems, has been complete since January 2016.
As of spring 2016[update], the only areas of UK Network Rail still employing VHF train radio communications are sections of the Highland and Far North lines in Scotland, where the Radio Electronic Token Block system is used on modified Ofcom frequencies around 180 MHz; these sections were de-scoped from the national GSM-R plan because of practical difficulties in deploying GSM-R in the region. Apart from these sections, the UK network now has 100% GSM-R coverage.[33]
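To make the EIRENE functional numbering described earlier concrete, here is a toy Python parser for the train-function case. The assumption that a leading "2" marks a train functional number, followed by the train number and a two-digit function code, is modelled directly on the 23030301 example given above; real EIRENE numbering plans define several further prefix types not shown here.

```python
# Toy parser for an EIRENE train functional number of the form
#   <prefix "2"><train number><2-digit function code>
# modelled on the 23030301 example above (train 30303, function 01).
# Real EIRENE numbering plans define several other prefix types.

def parse_train_functional_number(number: str) -> dict:
    if not number.isdigit() or len(number) < 4 or number[0] != "2":
        raise ValueError("not a train functional number in this toy scheme")
    return {
        "train_number": number[1:-2],   # e.g. "30303"
        "function_code": number[-2:],   # e.g. "01", the role of the user
    }

print(parse_train_functional_number("23030301"))
# {'train_number': '30303', 'function_code': '01'}
```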
https://en.wikipedia.org/wiki/GSM-R
Unstructured Supplementary Service Data (USSD), sometimes referred to as "quick codes" or "feature codes", is a communications protocol used by GSM cellular telephones to communicate with the mobile network operator's computers. USSD can be used for WAP browsing, prepaid callback service, mobile-money services, location-based content services, menu-based information services, and as part of configuring the phone on the network.[1] The service does not require a messaging app, and does not incur charges.[2]

USSD messages are up to 182 alphanumeric characters long. Unlike short message service (SMS) messages, USSD messages create a real-time connection during a USSD session. The connection remains open, allowing a two-way exchange of a sequence of data. This makes USSD faster than services that use SMS.[1]

While GSM, along with other 2G and 3G technologies, is being phased out in the 2020s, USSD services can be supported over LTE and 5G.

When a user sends a message to the phone company network, it is received by a computer dedicated to USSD. The computer's response is sent back to the phone, generally in a basic format that can easily be seen on the phone display. Messages sent over USSD are not defined by any standardization body, so each network operator can implement whatever is most suitable for its customers.

USSD can be used to provide independent calling services such as a callback service (to reduce phone charges while roaming), enhanced mobile marketing capabilities, or interactive data services. USSD is commonly used by prepaid GSM cellular phones to query the available balance. The vendor's "check balance" application hides the details of the USSD protocol from the user. On some pay-as-you-go networks, such as Tesco Mobile, once a user performs an action that costs money, the user sees a USSD message with their new balance. USSD can also be used to refill the balance on the user's SIM card and to deliver one-time passwords or PIN codes.

Some operators use USSD to provide access to real-time updates from social-networking websites including Facebook and Twitter.[3] Between 2012 and 2018, the Wikipedia Zero project provided access to Wikipedia articles via USSD.[4]

USSD is sometimes used in conjunction with SMS. The user sends a request to the network via USSD, and the network replies with an acknowledgement of receipt. Subsequently, one or more mobile-terminated SMS messages communicate the status and/or results of the initial request.[5] In such cases, SMS is used to "push" a reply or updates to the handset when the network is ready to send them.[6] In contrast, USSD itself is used for command-and-control only.

Most GSM phones have USSD capability.[7] USSD is generally associated with real-time or instant-messaging-like services. An SMSC is not present in the processing path, so the store-and-forward capability supported by other short-message protocols such as SMS is not available. USSD Phase 1, as specified in GSM 02.90, only supports mobile-initiated ("pull") operations.[8] In the core network, the message is delivered over MAP. USSD Phase 2 is specified in GSM 03.90.[9] After entering a USSD code on a GSM handset, the reply from the GSM operator is displayed within a few seconds.

While GSM is being phased out in the 2020s, a solution is available for supporting USSD services directly from the LTE/5G/IMS network, providing a user experience similar to that of GSM.[10]

A USSD message typically starts with an asterisk symbol (*) or a hash symbol (#) and is terminated with a hash symbol (#).
A typical message comprises digits for commands or data; groups of digits may be separated by additional asterisks.[1]

USSD operates in two modes: mobile-initiated ("pull") and network-initiated ("push").

The codes below are not USSD codes; they are the related Man-Machine Interface (MMI) codes. These are standardized, so they are the same on every GSM phone. They are interpreted by the handset first, before a corresponding command (not the code itself) is sent to the network. These codes might not always work when using an AT interface; standard AT commands are defined for each of these actions instead.[11][12]

BS is the type of bearer service; some valid values are:

11 for voice
13 for fax
16 for SMS (only valid for barring)
25 for data
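As a rough sketch of the syntax just described, the following Python fragment checks whether a string looks like a USSD/MMI code (begins with '*' or '#', ends with '#') and splits out the digit groups separated by asterisks. The sample codes are illustrative only; their meaning is operator-specific.

```python
import re

# Sketch of a checker for USSD/MMI-style strings: they begin with '*' or
# '#', end with '#', and carry digit groups separated by further '*'s.
USSD_RE = re.compile(r"^[*#]+(\d+(?:\*\d*)*)#$")

def parse_ussd(code: str):
    m = USSD_RE.match(code)
    return m.group(1).split("*") if m else None

print(parse_ussd("*135#"))           # ['135'] - e.g. a balance query
print(parse_ussd("**21*12345678#"))  # ['21', '12345678'] - an MMI-style form
print(parse_ussd("1234"))            # None - not a USSD string
```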
https://en.wikipedia.org/wiki/Unstructured_Supplementary_Service_Data
In cellular telecommunications, handover, or handoff, is the process of transferring an ongoing call or data session from one channel connected to the core network to another channel. In satellite communications it is the process of transferring satellite control responsibility from one earth station to another without loss or interruption of service.

American English uses the term handoff, which is most commonly used within some American organizations such as 3GPP2 and in American-originated technologies such as CDMA2000. In British English the term handover is more common, and is used within international and European organisations such as ITU-T, IETF, ETSI and 3GPP, and standardised within European-originated standards such as GSM and UMTS. The term handover is more common in academic research publications and literature, while handoff is slightly more common within the IEEE and ANSI organisations.[original research?]

In telecommunications there may be different reasons why a handover might be conducted, such as the subscriber moving out of a cell's coverage area or degradation of the channel in use.[1]

The most basic form of handover is when a phone call in progress is redirected from its current cell (called the source) to a new cell (called the target).[1] In terrestrial networks the source and the target cells may be served from two different cell sites or from one and the same cell site (in the latter case the two cells are usually referred to as two sectors on that cell site). Such a handover, in which the source and the target are different cells (even if they are on the same cell site), is called inter-cell handover. The purpose of inter-cell handover is to maintain the call as the subscriber moves out of the area covered by the source cell and enters the area of the target cell.

A special case is possible, in which the source and the target are one and the same cell and only the channel in use is changed during the handover. Such a handover, in which the cell is not changed, is called intra-cell handover. The purpose of intra-cell handover is to change a channel that may be suffering interference or fading for a new, clearer or less-fading channel.

In addition to the above inter-cell and intra-cell classification, handovers can also be divided into hard handovers, in which the connection to the source cell is broken before the connection to the target cell is made ("break-before-make"), and soft handovers, in which connections to the source and target cells are maintained in parallel for a time ("make-before-break").[1] Handover can also be classified on the basis of the handover techniques used; broadly, these are hard, soft, and softer handovers.

An advantage of the hard handover is that at any moment in time one call uses only one channel. The hard handover event is very short and usually is not perceptible by the user. In the old analog systems it could be heard as a click or a very short beep; in digital systems it is unnoticeable. Another advantage of the hard handover is that the phone's hardware does not need to be capable of receiving two or more channels in parallel, which makes it cheaper and simpler. A disadvantage is that if a handover fails, the call may be temporarily disrupted or even terminated abnormally. Technologies which use hard handovers usually have procedures which can re-establish the connection to the source cell if the connection to the target cell cannot be made. However, re-establishing this connection may not always be possible (in which case the call will be terminated) and, even when possible, the procedure may cause a temporary interruption to the call.

One advantage of soft handovers is that the connection to the source cell is broken only when a reliable connection to the target cell has been established, and therefore the chance that the call will be terminated abnormally due to a failed handover is lower.
However, a far bigger advantage comes from the fact that channels in multiple cells are maintained simultaneously, so the call can only fail if all of the channels suffer interference or fade at the same time. Fading and interference in different channels are unrelated, and therefore the probability of them occurring at the same moment in all channels is very low. Thus the reliability of the connection is higher when the call is in a soft handover. Because in a cellular network the majority of handovers occur in places of poor coverage, where calls would frequently become unreliable when their channel suffers interference or fading, soft handovers bring a significant improvement to the reliability of calls in these places by making interference or fading in a single channel non-critical.

This advantage comes at the cost of more complex hardware in the phone, which must be capable of processing several channels in parallel. Another price to pay for soft handovers is the use of several channels in the network to support a single call. This reduces the number of remaining free channels and thus reduces the capacity of the network. By adjusting the duration of soft handovers and the size of the areas in which they occur, network engineers can balance the benefit of extra call reliability against the price of reduced capacity.

While, theoretically speaking, soft handovers are possible in any technology, analog or digital, the cost of implementing them for analog technologies is prohibitively high, and none of the technologies that were commercially successful in the past (e.g. AMPS, TACS, NMT, etc.) had this feature. Of the digital technologies, those based on FDMA also face a higher cost for the phones (due to the need for multiple parallel radio-frequency modules), while those based on TDMA or a combination of TDMA/FDMA in principle allow less expensive implementation of soft handovers. However, none of the 2G (second-generation) technologies have this feature (e.g. GSM, D-AMPS/IS-136, etc.). On the other hand, all CDMA-based technologies, 2G and 3G (third-generation), have soft handovers. This is facilitated, on the one hand, by the possibility of designing inexpensive phone hardware supporting soft handover for CDMA and, on the other hand, necessitated by the fact that without soft handovers CDMA networks may suffer from substantial interference arising from the so-called near–far effect.

In all current commercial technologies based on FDMA or on a combination of TDMA/FDMA (e.g. GSM, AMPS, IS-136/DAMPS, etc.), changing the channel during a hard handover is realised by changing the pair of transmit/receive frequencies in use.

For the practical realisation of handovers in a cellular network, each cell is assigned a list of potential target cells which can be used for handing over calls from this source cell. These potential target cells are called neighbors, and the list is called the neighbor list. Creating such a list for a given cell is not trivial, and specialized computer tools are used. They implement different algorithms and may take as input data from field measurements or computer predictions of radio-wave propagation in the areas covered by the cells.

During a call, one or more parameters of the signal in the channel in the source cell are monitored and assessed in order to decide when a handover may be necessary. The downlink (forward link) and/or uplink (reverse link) directions may be monitored.
The handover may be requested by the phone or by the base station (BTS) of its source cell and, in some systems, by a BTS of a neighboring cell. The phone and the BTSes of the neighboring cells monitor each other's signals, and the best target candidates are selected among the neighboring cells. In some systems, mainly CDMA-based, a target candidate may be selected among cells which are not in the neighbor list. This is done in an effort to reduce the probability of interference due to the aforementioned near–far effect.

In analog systems, the parameters used as criteria for requesting a hard handover are usually the received signal power and the received signal-to-noise ratio (the latter may be estimated in an analog system by inserting additional tones, with frequencies just outside the captured voice-frequency band, at the transmitter and assessing the form of these tones at the receiver). In non-CDMA 2G digital systems, the criteria for requesting a hard handover may be based on estimates of the received signal power, bit error rate (BER) and block error/erasure rate (BLER), received quality of speech (RxQual), the distance between the phone and the BTS (estimated from the radio-signal propagation delay), and others. In CDMA systems, 2G and 3G, the most common criterion for requesting a handover is the Ec/Io ratio measured in the pilot channel (CPICH) and/or the RSCP.

In CDMA systems, when the phone in soft or softer handover is connected to several cells simultaneously, it processes the signals received in parallel using a rake receiver. Each signal is processed by a module called a rake finger. A usual design of a rake receiver in mobile phones includes three or more rake fingers used in the soft-handover state for processing signals from as many cells, and one additional finger used to search for signals from other cells. The set of cells whose signals are used during a soft handover is referred to as the active set. If the search finger finds a sufficiently strong signal (in terms of high Ec/Io or RSCP) from a new cell, this cell is added to the active set. The cells in the neighbour list (called in CDMA the neighbouring set) are checked more frequently than the rest, and thus a handover with a neighbouring cell is more likely; however, a handover with other cells outside the neighbor list is also allowed (unlike in GSM, IS-136/DAMPS, AMPS, NMT, etc.).

There are occurrences where a handoff is unsuccessful, and much research has been dedicated to this problem.[example needed] The source of the problem was identified in the late 1980s: because frequencies cannot be reused in adjacent cells, when a user moves from one cell to another, a new frequency must be allocated for the call. If a user moves into a cell when all available channels are in use, the user's call must be terminated. There is also the problem of signal interference, where adjacent cells overpower each other, resulting in receiver desensitization.

There are also inter-technology handovers, where a call's connection is transferred from one access technology to another, e.g. a call being transferred from GSM to UMTS or from CDMA IS-95 to CDMA2000. The 3GPP UMA/GAN standard enables GSM/UMTS handoff to Wi-Fi and vice versa.

Different systems have different methods for handling and managing handoff requests. Some systems handle a handoff in the same way as a new originating call; in such systems, the probability that the handoff will not be served is equal to the blocking probability of a new originating call.
However, a call terminated abruptly in mid-conversation is more annoying to the user than a new originating call being blocked. To avoid such abrupt terminations, handoff requests can be given priority over new calls; this is called handoff prioritization. There are two common techniques for this: reserving guard channels that only handoff requests may use, and queuing handoff requests until a channel becomes free. A sketch of the guard-channel technique follows.
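The following is a toy Python illustration of the guard-channel technique, under the assumption of a fixed number of channels per cell of which a few are reserved for handoffs; real admission-control policies are considerably more elaborate.

```python
# Toy illustration of the guard-channel technique: of the C channels in a
# cell, G are reserved so that only handoff requests may take the last G
# free channels; new calls are blocked once free capacity falls to G.

class Cell:
    def __init__(self, channels: int, guard: int):
        self.channels = channels  # total channels C
        self.guard = guard        # guard channels G reserved for handoffs
        self.in_use = 0

    def admit(self, is_handoff: bool) -> bool:
        free = self.channels - self.in_use
        threshold = 0 if is_handoff else self.guard  # new calls spare the guard
        if free > threshold:
            self.in_use += 1
            return True
        return False

cell = Cell(channels=10, guard=2)
for _ in range(8):
    cell.admit(is_handoff=False)
print(cell.admit(is_handoff=False))  # False - only guard channels remain
print(cell.admit(is_handoff=True))   # True  - a handoff may use them
```

The size of G trades off handoff dropping probability against the blocking probability of new calls.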
https://en.wikipedia.org/wiki/Handoff
High Speed Packet Access (HSPA)[1] is an amalgamation of two mobile protocols—High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA)—that extends and improves the performance of existing 3G mobile telecommunication networks using the WCDMA protocols. A further-improved 3GPP standard called Evolved High Speed Packet Access (also known as HSPA+) was released late in 2008, with subsequent worldwide adoption beginning in 2010. The newer standard allows bit rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink; however, these speeds are rarely achieved in practice.[2]

The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. They also reduced latency and provided up to five times more system capacity in the downlink and up to twice as much system capacity in the uplink compared with the original WCDMA protocol.

High Speed Downlink Packet Access (HSDPA) is an enhanced 3G (third-generation) mobile communications protocol in the High-Speed Packet Access (HSPA) family. HSDPA is also known as 3.5G and 3G+. It allows networks based on the Universal Mobile Telecommunications System (UMTS) to have higher data speeds and capacity. HSDPA also decreases latency, and therefore the round-trip time for applications.

HSDPA was introduced in 3GPP Release 5. It was accompanied by an improvement to the uplink that provided a new bearer of 384 kbit/s (the previous maximum bearer was 128 kbit/s). Evolved High Speed Packet Access (HSPA+), introduced in 3GPP Release 7, further increased data rates by adding 64-QAM modulation, MIMO, and Dual-Carrier HSDPA operation. Under 3GPP Release 11, even higher speeds of up to 337.5 Mbit/s were possible.[3]

The first phase of HSDPA was specified in 3GPP Release 5. This phase introduced new basic functions and was aimed to achieve peak data rates of 14.0 Mbit/s with significantly reduced latency. The improvement in speed and latency reduced the cost per bit and enhanced support for high-performance packet-data applications. HSDPA is based on shared-channel transmission, and its key features are shared-channel and multi-code transmission, higher-order modulation, a short Transmission Time Interval (TTI), fast link adaptation and scheduling, and fast hybrid automatic repeat request (HARQ). Additional new features include the High Speed Downlink Shared Channels (HS-DSCH), quadrature phase-shift keying, 16-quadrature amplitude modulation, and the High Speed Medium Access protocol (MAC-hs) in base stations. The upgrade to HSDPA is often just a software update for WCDMA networks. In HSDPA, voice calls are usually prioritized over data transfer.

The HSDPA device classes defined in table 5.1a of release 11 of 3GPP TS 25.306[4] specify maximum data rates and the combinations of features by which they are achieved. The per-cell, per-stream data rate is limited by the "maximum number of bits of an HS-DSCH transport block received within an HS-DSCH TTI" and the "minimum inter-TTI interval". The TTI is 2 milliseconds. So, for example, Cat 10 can decode 27,952 bits / 2 ms = 13.976 Mbit/s (and not 14.4 Mbit/s as often claimed incorrectly). Categories 1–4 and 11 have inter-TTI intervals of 2 or 3, which reduces the maximum data rate by that factor. Dual-Cell and 2×2 MIMO each multiply the maximum data rate by 2, because multiple independent transport blocks are transmitted over different carriers or spatial streams, respectively.
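The peak-rate arithmetic above can be verified with a short script. The 27,952-bit transport block and 2 ms TTI are the Category 10 figures quoted in the text; the other calls only illustrate the inter-TTI and dual-carrier/MIMO multipliers with hypothetical configurations.

```python
# Worked check of the HSDPA peak-rate arithmetic described above. The
# per-stream rate is (transport block bits) / (TTI * inter-TTI interval),
# and Dual-Cell or 2x2 MIMO doubles it via independent transport blocks.

TTI_S = 0.002  # one HS-DSCH TTI = 2 ms

def peak_rate_mbps(tb_bits, inter_tti=1, carriers=1, streams=1):
    return tb_bits / (TTI_S * inter_tti) * carriers * streams / 1e6

print(peak_rate_mbps(27_952))               # 13.976 - not the 14.4 often claimed
print(peak_rate_mbps(27_952, inter_tti=2))  # 6.988  - inter-TTI interval of 2
print(peak_rate_mbps(27_952, carriers=2))   # 27.952 - hypothetical dual-carrier
```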
The data rates given in that table are rounded to one decimal place. Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSDPA UE Categories.

As of 28 August 2009[update], 250 HSDPA networks had commercially launched mobile broadband services in 109 countries. 169 HSDPA networks supported 3.6 Mbit/s peak downlink data throughput, and a growing number delivered 21 Mbit/s peak downlink data rates.[citation needed]

CDMA2000-EVDO networks had the early lead on performance; in particular, Japanese providers were highly successful benchmarks for that network standard. However, this later changed in favor of HSDPA as an increasing number of providers worldwide adopted it.

In 2007, an increasing number of telcos worldwide began selling HSDPA USB modems to provide mobile broadband connections. In addition, the popularity of HSDPA landline-replacement boxes grew—these provided HSDPA for data via Ethernet and Wi-Fi, as well as ports for connecting traditional landline telephones. Some were marketed with connection speeds of "up to 7.2 Mbit/s"[5] under ideal conditions; however, these services could be slower, for example in fringe coverage indoors.

High-Speed Uplink Packet Access (HSUPA) is a 3G mobile telephony protocol in the HSPA family. It is specified and standardized in 3GPP Release 6 to improve the uplink data rate to 5.76 Mbit/s, extend capacity, and reduce latency. Together with additional improvements, this allows for new features such as Voice over Internet Protocol (VoIP), uploading pictures, and sending large e-mail messages.

HSUPA was the second major step in the UMTS evolution process. It has since been superseded by newer technologies with higher transfer rates, such as LTE (150 Mbit/s for downlink and 50 Mbit/s for uplink) and LTE Advanced (maximum downlink rates of over 1 Gbit/s).

HSUPA adds a new transport channel to WCDMA, called the Enhanced Dedicated Channel (E-DCH). It also features several improvements similar to those of HSDPA, including multi-code transmission, a shorter transmission time interval enabling faster link adaptation, fast scheduling, and fast hybrid automatic repeat request (HARQ) with incremental redundancy, making retransmissions more effective.

Similar to HSDPA, HSUPA uses a "packet scheduler", but it operates on a "request-grant" principle: the user equipment (UE) requests permission to send data, and the scheduler decides when and how many UEs will be allowed to do so. A request for transmission contains data about the state of the transmission buffer and the queue at the UE, and its available power margin. However, unlike in HSDPA, uplink transmissions are not orthogonal to each other.

In addition to this "scheduled" mode of transmission, the standards allow a self-initiated transmission mode from the UEs, denoted "non-scheduled". The non-scheduled mode can, for example, be used for VoIP services, for which even the reduced TTI and the Node B-based scheduler are unable to provide the necessary short delay time and constant bandwidth.

Each MAC-d flow (i.e., QoS flow) is configured to use either the scheduled or the non-scheduled mode. The UE adjusts the data rate for scheduled and non-scheduled flows independently. The maximum data rate of each non-scheduled flow is configured at call setup and is typically not changed frequently. The power used by the scheduled flows is controlled dynamically by the Node B through absolute grant (consisting of an actual value) and relative grant (consisting of a single up/down bit) messages.
At the physical layer, HSUPA introduces a set of new channels to carry the E-DCH and its associated control information. Uplink speeds for the different categories of HSUPA devices are defined in the 3GPP specifications. Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSUPA UE Categories.

Evolved HSPA (also known as HSPA Evolution, or HSPA+) is a wireless broadband standard defined in 3GPP Release 7 of the WCDMA specification. It provides extensions to the existing HSPA definitions and is therefore backward compatible all the way to the original Release 99 WCDMA network releases. Evolved HSPA provides data rates between 42.2 and 56 Mbit/s in the downlink and 22 Mbit/s in the uplink (per 5 MHz carrier) with multiple-input, multiple-output (2×2 MIMO) technologies and higher-order modulation (64-QAM). With Dual-Cell technology, these can be doubled. Since 2011, HSPA+ has been widely deployed among WCDMA operators, with nearly 200 commitments.[6]
https://en.wikipedia.org/wiki/High-Speed_Downlink_Packet_Access
The International Mobile Equipment Identity (IMEI)[1] is a numeric identifier, usually unique,[2][3] for 3GPP and iDEN mobile phones, as well as some satellite phones. It is usually found printed inside the battery compartment of the phone but can also be displayed on-screen on most phones by entering the MMI Supplementary Service code *#06# on the dialpad, or alongside other system information in the settings menu on smartphone operating systems.

GSM networks use the IMEI number to identify valid devices, and can stop a stolen phone from accessing the network. For example, if a mobile phone is stolen, the owner can have their network provider use the IMEI number to blocklist the phone. This renders the phone useless on that network, and sometimes on other networks, even if the thief changes the phone's SIM card.

Devices without a SIM card slot or eSIM capability usually do not have an IMEI, except for certain early Sprint LTE devices such as the Samsung Galaxy Nexus and S III, which emulated a SIM-free CDMA activation experience and lacked roaming capabilities in 3GPP-only countries.[4] However, the IMEI only identifies the device and has no particular relationship to the subscriber. The phone identifies the subscriber by transmitting the International Mobile Subscriber Identity (IMSI) number, which is stored on a SIM card that can, in theory, be transferred to any handset. However, the network's ability to know a subscriber's current, individual device enables many network and security features.[citation needed]

Dual-SIM phones normally have two IMEI numbers, except for devices such as the Pixel 3 (which has an eSIM and one physical SIM) that only allow one SIM card to be active at a time.

Many countries have acknowledged the use of the IMEI in reducing the effect of mobile phone theft. For example, in the United Kingdom, under the Mobile Telephones (Re-programming) Act, changing the IMEI of a phone, or possessing equipment that can change it, is considered an offence under some circumstances.[5][6] A bill was introduced in the United States by Senator Chuck Schumer in 2012 that would have made the changing of an IMEI illegal, but the bill was not enacted.[7]

IMEI blocking is not the only way to fight phone theft. Instead, mobile operators are encouraged to take measures such as immediate suspension of service and replacement of SIM cards in case of loss or theft.[8]

The existence of a formally allocated IMEI number range for a GSM terminal does not mean that the terminal is approved or complies with regulatory requirements. The linkage between regulatory approval and IMEI allocation was removed in April 2000, with the introduction of the European R&TTE Directive.[9] Since that date, IMEIs have been allocated by BABT (or one of several other regional administrators acting on behalf of the GSM Association) to legitimate GSM terminal manufacturers without the need to provide evidence of approval.

When someone has their mobile equipment stolen or lost, they can ask their service provider to block the phone from their network, and the operator may do so, especially if required by law. If the local operator maintains an Equipment Identity Register (EIR), it adds the device's IMEI to it. Optionally, it also adds the IMEI to shared registries, such as the Central Equipment Identity Register (CEIR), which blocklists the device with other operators that use the CEIR. This blocklisting makes the device unusable on any operator that uses the CEIR, which makes mobile equipment theft pointless, except for parts.
To make blocklisting effective, the IMEI number is supposed to be difficult to change. However, a phone's IMEI may be easy to change with special tools.[10][better source needed] In addition, the IMEI is an unauthenticated mobile identifier (as opposed to the IMSI, which is routinely authenticated by home and serving mobile networks). Using a spoofed IMEI can thwart some efforts to track handsets, or target handsets for lawful intercept.[citation needed]

Australia was the first nation to implement IMEI blocking across all GSM networks, in 2003.[11] In Australia the Electronic Information Exchange (EIE) Administration Node provides a blocked-IMEI lookup service for Australian customers.[12]

In the UK, a voluntary charter operated by the mobile networks ensures that any operator's blocklisting of a handset is communicated to the CEIR and subsequently to all other networks. This ensures that the handset quickly becomes unusable for calls, at most within 48 hours. Some UK police forces, including the Metropolitan Police Service, actively check the IMEI numbers of phones found involved in crime.

In New Zealand, the NZ Telecommunications Forum Inc[13] provides a blocked-IMEI lookup service for New Zealand consumers. The service allows up to three lookups per day[14] and checks against a database that is updated daily by the three major mobile network operators. A blocked IMEI cannot be connected to any of these three operators.

In Latvia, SIA "Datorikas institūts DIVI"[15] provides a blocked-IMEI lookup service that checks against a database updated by all major mobile network operators in Latvia.

In some countries, such blocklisting is not customary. In 2012, major network companies in the United States, under government pressure, committed to introducing a blocklisting service, but it is not clear whether it will interoperate with the CEIR.[16][17] GSM carriers AT&T and T-Mobile began blocking newly reported IMEIs in November 2012.[18] Thefts reported prior to November 2012 were not added to the database. The CTIA refers users to the websites www.stolenphonechecker.org[19] and the GSMA,[19] where consumers can check whether a smartphone has been reported as lost or stolen to its member carriers. The relationship between the former and any national or international IMEI blocklists is unclear.[19]

It is unclear whether local barring of IMEIs has any positive effect, as it may result in international smuggling of stolen phones.[20]

IMEIs can sometimes be removed from a blocklist, depending on local arrangements. This would typically include quoting a password chosen at the time of blocklisting.[citation needed]

Law enforcement and intelligence services can use an IMEI number as input for tracking devices that are able to locate a cell phone with an accuracy of a few meters.
Saudi Arabian government agencies have reportedly used IMEI numbers retrieved from cell phone packaging to locate and detain women who fled Saudi Arabia's patriarchal society in other countries.[21]

An IMEI number retrieved from the remnants of a Nokia 5110 was used to trace and identify the perpetrators behind the 2002 Bali bombings.[22]

Some countries use allowlists instead of blocklists for IMEI numbers, so that any mobile phone needs to be legally registered in the country in order to be able to access its mobile networks, with possible exceptions for international roaming and a grace period for registering.[23] These include Chile,[24] Turkey,[25] Azerbaijan,[26] Colombia,[27] and Nepal.[28] Other countries that have adopted some form of mandatory IMEI registration include India, Pakistan, Indonesia, Cambodia, Thailand, Iran, Nigeria, Ecuador, Ukraine, Lebanon,[29] and Kenya.[30]

Prior to its merger with T-Mobile, Sprint in the United States used an allowlist of devices: a user had to register their IMEI and SIM card before an LTE-capable device could be used, despite no US law mandating it.[31] If users changed their device, they had to register the new IMEI and SIM card. This was not the case with other CDMA carriers such as Verizon, which only used allowlists for 3G (where they were a requirement of CDMA); T-Mobile does not use an allowlist but instead a blocklist, including for former Sprint customers.

AT&T[32] and Telus[33] also use an allowlist for VoLTE access, but do not require IMEI registration by customers. Instead, phone manufacturers are required to register their devices in AT&T's or Telus's databases, and customers are able to freely swap SIM cards or eSIMs into any allowlisted device. This has the drawback that imported phones, and some non-imported phones such as older OnePlus models or certain CDMA-capable LTE devices (including models sold by Verizon or Sprint), will not work for voice calls, even if they have the LTE/5G bands used by AT&T and Telus and support VoLTE on competing networks or via VoLTE roaming.

The IMEI (15 decimal digits: 14 digits plus a check digit) or IMEISV (16 decimal digits: 14 digits plus two software-version digits) includes information on the origin, model, and serial number of the device. The structure of the IMEI/SV is specified in 3GPP TS 23.003. The model and origin comprise the initial 8-digit portion of the IMEI/SV, known as the Type Allocation Code (TAC). The remainder of the IMEI is manufacturer-defined, with a Luhn check digit at the end. For the IMEI format prior to 2003, the GSMA guideline was to have this check digit always transmitted to the network as zero; this guideline seems to have disappeared for the format valid from 2003 onwards.[34]

As of 2004[update], the format of the IMEI is AA-BBBBBB-CCCCCC-D, although it may not always be displayed this way. The IMEISV does not have the Luhn check digit but instead has two digits for the Software Version Number (SVN), making the format AA-BBBBBB-CCCCCC-EE.

Prior to 2002, the TAC was six digits long and was followed by a two-digit Final Assembly Code (FAC), a manufacturer-specific code indicating the location of the device's construction. From January 1, 2003 until April 1, 2004, the FAC for all phones was 00. After April 1, 2004, the Final Assembly Code ceased to exist and the Type Allocation Code increased to eight digits in length.

In any of the above cases, the first two digits of the TAC are the Reporting Body Identifier (RBI), which identifies the GSMA-approved group that allocated the TAC.
The RBI numbers are allocated by the Global Decimal Administrator. IMEI numbers being decimal helps distinguish them from an MEID, which is hexadecimal and always has 0xA0 or larger as the first two hexadecimal digits.

For example, the old-style IMEI code 35-209900-176148-1, or IMEISV code 35-209900-176148-23, tells us the following:

TAC: 35-2099 - issued by the BABT (code 35) with the allocation number 2099
FAC: 00 - indicating the phone was made during the transition period when FACs were being removed
SNR: 176148 - uniquely identifying a unit of this model
CD: 1 - the check digit, so it is a GSM Phase 2 or higher device
SVN: 23 - the "software version number" identifying the revision of the software installed on the phone (99 is reserved)

By contrast, the new-style IMEI code 49-015420-323751-8 has an 8-digit TAC of 49-015420. The CDMA Mobile Equipment Identifier uses the same basic format as the IMEI but gives more flexibility in allocation sizes and usage.

The last number of the IMEI is a check digit, calculated using the Luhn algorithm, as defined in the IMEI Allocation and Approval Guidelines: the Check Digit shall be calculated according to the Luhn formula (ISO/IEC 7812); see GSM 02.16 / 3GPP 22.016. The Check Digit is a function of all other digits in the IMEI; the Software Version Number (SVN) of a mobile is not included in the calculation. The purpose of the Check Digit is to help guard against the possibility of incorrect entries to the CEIR and EIR equipment. The presentation of the Check Digit, both electronically and in printed form on the label and packaging, is very important: logistics (using bar-code readers) and EIR/CEIR administration cannot use the Check Digit unless it is printed outside of the packaging and on the ME IMEI/Type Accreditation label. The check digit is not transmitted over the radio interface, nor is it stored in the EIR database at any point. Therefore, all references to the last three or six digits of an IMEI refer to the actual IMEI number, to which the check digit does not belong.

The check digit is validated in three steps: starting from the rightmost digit of the 14-digit body, double every other digit; sum the digits of the doubled values together with the undoubled digits; the IMEI is valid if this total plus the check digit is divisible by 10. Conversely, one can calculate the check digit by choosing the value that makes the sum divisible by 10. For the example IMEI 49015420323751x, the Luhn sum of the first 14 digits is 52; to make the total divisible by 10, we set x = 8, so the complete IMEI becomes 490154203237518.

IMEI validation[35] is the process of verifying that a device's 15-digit IMEI conforms to this structure and has not been tampered with: the first 14 digits are processed through the Luhn checksum procedure and compared to the check digit, and any mismatch indicates an invalid or forged IMEI. IMEI validation is employed by GSM network operators, regulatory agencies, and anti-theft platforms to combat device cloning, unauthorized resale, and mobile phone theft.

The Broadband Global Area Network (BGAN), Iridium and Thuraya satellite phone networks all use IMEI numbers on their transceiver units, as well as SIM cards, in much the same way as GSM phones do. The Iridium 9601 modem relies solely on its IMEI number for identification and uses no SIM card; however, Iridium is a proprietary network and the device is incompatible with terrestrial GSM networks.
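The check-digit procedure described above is the standard Luhn algorithm, which can be sketched in a few lines of Python; the example reproduces the 49015420323751 → 8 calculation from the text.

```python
# The standard Luhn check-digit calculation for an IMEI, as described
# above: double every second digit from the right of the 14-digit body,
# sum the digits of the results with the undoubled digits, and pick the
# check digit that makes the total divisible by 10.

def imei_check_digit(body14: str) -> int:
    assert len(body14) == 14 and body14.isdigit()
    total = 0
    for i, ch in enumerate(reversed(body14)):
        d = int(ch)
        if i % 2 == 0:                 # positions 2, 4, ... from the right of the full IMEI
            d = d * 2
            d = d - 9 if d > 9 else d  # same as summing the digits of d*2
        total += d
    return (10 - total % 10) % 10

def imei_is_valid(imei15: str) -> bool:
    return imei_check_digit(imei15[:14]) == int(imei15[14])

print(imei_check_digit("49015420323751"))  # 8
print(imei_is_valid("490154203237518"))    # True
```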
https://en.wikipedia.org/wiki/International_Mobile_Equipment_Identity
The international mobile subscriber identity (IMSI; /ˈɪmziː/) is a number that uniquely identifies every user of a cellular network.[1] It is stored as a 64-bit field and is sent by the mobile device to the network. It is also used for acquiring other details of the mobile in the home location register (HLR) or as locally copied in the visitor location register. To prevent eavesdroppers from identifying and tracking the subscriber on the radio interface, the IMSI is sent as rarely as possible and a randomly generated TMSI is sent instead.[citation needed]

The IMSI is used in any mobile network that interconnects with other networks. For GSM, UMTS and LTE networks, this number was provisioned in the SIM card, and for cdmaOne and CDMA2000 networks, in the phone directly or in the R-UIM card (the CDMA equivalent of the SIM card). Both cards have been superseded by the UICC.

An IMSI is usually presented as a 15-digit number but can be shorter. For example, MTN South Africa's old IMSIs that are still in use in the market are 14 digits long. The first 3 digits represent the mobile country code (MCC), which is followed by the mobile network code (MNC), either 2-digit (European standard) or 3-digit (North American standard). The length of the MNC depends on the value of the MCC, and it is recommended that the length is uniform within an MCC area.[2] The remaining digits are the mobile subscription identification number (MSIN) within the network's customer base, usually 9 to 10 digits long, depending on the length of the MNC.

The IMSI conforms to the ITU E.212 numbering standard. IMSIs can sometimes be mistaken for the ICCID (E.118), which is the identifier for the physical SIM card itself (or now the virtual SIM card if it is an eSIM). The IMSI lives as part of the profile (or one of several profiles, if the SIM and operator support multi-IMSI SIMs) on the SIM/ICCID.

IMSI analysis is the process of examining a subscriber's IMSI to identify the network the IMSI belongs to, and whether subscribers from that network may use a given network (if they are not local subscribers, this requires a roaming agreement). If the subscriber is not from the provider's network, the IMSI must be converted to a Global Title, which can then be used for accessing the subscriber's data in the remote HLR. This is mainly important for international mobile roaming.

Outside North America, the IMSI is converted to the Mobile Global Title (MGT) format, standard E.214, which is similar to an E.164 number. E.214 provides a method to convert the IMSI into a number that can be used for routing to international SS7 switches. E.214 can be interpreted as implying that there are two separate stages of conversion: first determine the MCC and convert it to an E.164 country calling code, then determine the MNC and convert it to a national network code for the carrier's network. But this process is not used in practice, and the GSM numbering authority has clearly stated that a one-stage process is used.[1]

In North America, the IMSI is directly converted to an E.212 number with no modification of its value. This can be routed directly on American SS7 networks. After this conversion, SCCP is used to send the message to its final destination. For details, see Global Title Translation.

The following examples show the actual practice, which is not clearly described in the standards.

Translation rule (European example): replace the E.212 MCC 284 with the E.164 country code 359, and the MNC 01 with the national network code 88. Therefore, 284011234567890 becomes 359881234567890 under the E.214 numbering plan.

Translation rule (North American example): replace the MCC 310 with the country code 1, and the MNC 150 with the network code 4054. Therefore, 310150123456789 becomes 14054123456789 under the E.214 numbering plan.
The result is an E.214-compliant Global Title (the Numbering Plan Indicator is set to 7 in the SCCP message). This number can now be sent to Global Title Analysis.

For traffic routed into North America, the translation rule is the identity: 284011234567890 remains 284011234567890 under the E.212 numbering plan. This number has to be converted on the ANSI-to-ITU boundary. For more details please see Global Title Translation.

The Home Network Identity (HNI) is the combination of the MCC and the MNC. This is the number which fully identifies a subscriber's home network. This combination is also known as the PLMN.
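A sketch in Python of the one-stage E.212-to-E.214 translation illustrated above: the two table entries simply reproduce the worked examples (MCC 284 / MNC 01 → 359 88, and MCC 310 / MNC 150 → 1 4054), and a real gateway would hold a full operator table.

```python
# One-stage E.212 -> E.214 translation as illustrated above: the leading
# MCC+MNC of the IMSI is replaced by the E.164 country code plus the
# national network code. The two entries reproduce the worked examples.

E212_TO_E214 = {
    "28401":  "35988",  # MCC 284, MNC 01  -> CC 359, NC 88
    "310150": "14054",  # MCC 310, MNC 150 -> CC 1,   NC 4054
}

def to_e214(imsi: str) -> str:
    for prefix, replacement in E212_TO_E214.items():
        if imsi.startswith(prefix):
            return replacement + imsi[len(prefix):]
    raise KeyError("unknown MCC+MNC prefix")

print(to_e214("284011234567890"))  # 359881234567890
print(to_e214("310150123456789"))  # 14054123456789
```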
https://en.wikipedia.org/wiki/International_Mobile_Subscriber_Identity
In telecommunications, Long-Term Evolution (LTE) is a standard for wireless broadband communication for cellular mobile devices and data terminals. It is considered a "transitional" 4G technology,[1] and is therefore also referred to as 3.95G, a step above 3G.[2]

LTE is based on the 2G GSM/EDGE and 3G UMTS/HSPA standards. It improves on those standards' capacity and speed by using a different radio interface together with core network improvements.[3][4] LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE has been succeeded by LTE Advanced, which is officially defined as a "true" 4G technology[5] and is also named "LTE+".

The standard is developed by the 3GPP (3rd Generation Partnership Project) and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE has been marketed as 4G LTE and Advanced 4G, but the original version did not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series for LTE Advanced. The requirements were set forth by the ITU-R organisation in the IMT Advanced specification; but, because of market pressure and the significant advances that WiMAX, Evolved High Speed Packet Access, and LTE brought to the original 3G technologies, ITU-R later decided that LTE and the aforementioned technologies can be called 4G technologies.[6] The LTE Advanced standard formally satisfies the ITU-R requirements for being considered IMT-Advanced.[7] To differentiate them from the other 4G technologies, ITU has defined LTE Advanced and WiMAX-Advanced as "True 4G".[8][5]

LTE stands for Long-Term Evolution[9] and is a registered trademark owned by ETSI (European Telecommunications Standards Institute) for the wireless data communications technology and the development of the GSM/UMTS standards. However, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP (digital signal processing) techniques and modulations that were developed around the turn of the millennium. A further goal was the redesign and simplification of the network architecture to an IP-based system with significantly reduced transfer latency compared with the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks, so it must be operated on a separate radio spectrum.

The idea of LTE was first proposed in 1998, when the COFDM radio access technique was put forward to replace CDMA, with studies of its terrestrial use in the L band at 1428 MHz. It was taken up in 2004 by Japan's NTT Docomo, and studies on the standard officially commenced in 2005.[10] In May 2007, the LTE/SAE Trial Initiative (LSTI) alliance was founded as a global collaboration between vendors and operators with the goal of verifying and promoting the new standard in order to ensure the global introduction of the technology as quickly as possible.[11][12]

The LTE standard was finalized in December 2008, and the first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on December 14, 2009, as a data connection with a USB modem.
LTE services were launched by major North American carriers as well: the Samsung SCH-R900, offered by MetroPCS from September 21, 2010, was the world's first LTE mobile phone,[13][14] and the Samsung Galaxy Indulge, offered by MetroPCS from February 10, 2011, was the world's first LTE smartphone,[15][16] with the HTC ThunderBolt, offered by Verizon from March 17, 2011, the second LTE smartphone to be sold commercially.[17][18] In Canada, Rogers Wireless was the first to launch an LTE network, on July 7, 2011, offering the Sierra Wireless AirCard 313U USB mobile broadband modem, known as the "LTE Rocket stick", followed closely by mobile devices from both HTC and Samsung.[19] Initially, CDMA operators planned to upgrade to the rival standards UMB and WiMAX, but the major CDMA operators (such as Verizon, Sprint and MetroPCS in the United States, Bell and Telus in Canada, au by KDDI in Japan, SK Telecom in South Korea and China Telecom/China Unicom in China) announced instead that they intended to migrate to LTE.

The next version of LTE is LTE Advanced, which was standardized in March 2011.[20] Services commenced in 2013.[21] A further evolution, known as LTE Advanced Pro, was approved in 2015.[22]

The LTE specification provides downlink peak rates of 300 Mbit/s, uplink peak rates of 75 Mbit/s and QoS provisions permitting a transfer latency of less than 5 ms in the radio access network. LTE can manage fast-moving mobiles and supports multicast and broadcast streams. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz, and supports both frequency-division duplexing (FDD) and time-division duplexing (TDD). The IP-based network architecture, called the Evolved Packet Core (EPC) and designed to replace the GPRS Core Network, supports seamless handovers for both voice and data to cell towers with older network technology such as GSM, UMTS and CDMA2000.[23] The simpler architecture results in lower operating costs (for example, each E-UTRA cell will support up to four times the data and voice capacity supported by HSPA[24]).

Because LTE frequencies and bands differ from country to country, only multi-band phones can use LTE in all countries where it is supported. Most carriers supporting GSM or HSUPA networks can be expected to upgrade their networks to LTE at some stage. A complete list of commercial contracts can be found in the references.[61] For the top 10 countries/territories by 4G LTE coverage, as measured by OpenSignal.com in February/March 2019, and the complete list of all countries/territories, see the list of countries by 4G LTE penetration.[72][73]

Long-Term Evolution Time-Division Duplex (LTE-TDD), also referred to as TDD LTE, is a 4G telecommunications technology and standard co-developed by an international coalition of companies, including China Mobile, Datang Telecom, Huawei, ZTE, Nokia Solutions and Networks, Qualcomm, Samsung, and ST-Ericsson. It is one of the two mobile data transmission technologies of the Long-Term Evolution (LTE) technology standard, the other being Long-Term Evolution Frequency-Division Duplex (LTE-FDD). While some companies refer to LTE-TDD as "TD-LTE" for familiarity with TD-SCDMA, there is no reference to that abbreviation anywhere in the 3GPP specifications.[74][75][76]

There are two major differences between LTE-TDD and LTE-FDD: how data is uploaded and downloaded, and what frequency spectra the networks are deployed in.
While LTE-FDD uses paired frequencies to upload and download data,[77] LTE-TDD uses a single frequency, alternating between uploading and downloading data over time.[78][79] The ratio between uploads and downloads on an LTE-TDD network can be changed dynamically, depending on whether more data needs to be sent or received[80] (the subframe configurations behind this ratio are sketched below). LTE-TDD and LTE-FDD also operate on different frequency bands,[81] with LTE-TDD working better at higher frequencies, and LTE-FDD working better at lower frequencies.[82] Frequencies used for LTE-TDD range from 1850 MHz to 3800 MHz, with several different bands being used.[83] The LTE-TDD spectrum is generally cheaper to access, and has less traffic.[81] Further, the bands for LTE-TDD overlap with those used for WiMAX, which can easily be upgraded to support LTE-TDD.[81]

Despite the differences in how the two types of LTE handle data transmission, LTE-TDD and LTE-FDD share 90 percent of their core technology, making it possible for the same chipsets and networks to use both versions of LTE.[81][84] A number of companies produce dual-mode chips or mobile devices, including Samsung and Qualcomm,[85][86] while the operators CMHK and Hi3G Access have developed dual-mode networks in Hong Kong and Sweden, respectively.[87]

The creation of LTE-TDD involved a coalition of international companies that worked to develop and test the technology.[88] China Mobile was an early proponent of LTE-TDD,[81][89] along with other companies like Datang Telecom[88] and Huawei, which worked to deploy LTE-TDD networks and later developed technology allowing LTE-TDD equipment to operate in white spaces (frequency spectra between broadcast TV stations).[75][90] Intel also participated in the development, setting up an LTE-TDD interoperability lab with Huawei in China,[91] as did ST-Ericsson,[81] Nokia,[81] and Nokia Siemens (now Nokia Solutions and Networks),[75] which developed LTE-TDD base stations that increased capacity by 80 percent and coverage by 40 percent.[92] Qualcomm also participated, developing the world's first multi-mode chip, combining both LTE-TDD and LTE-FDD, along with HSPA and EV-DO.[86] Accelleran, a Belgian company, has also worked to build small cells for LTE-TDD networks.[93]

Trials of LTE-TDD technology began as early as 2010, with Reliance Industries and Ericsson India conducting field tests of LTE-TDD in India, achieving 80 megabit-per-second download speeds and 20 megabit-per-second upload speeds.[94] By 2011, China Mobile had begun trials of the technology in six cities.[75]

Although initially seen as a technology utilized by only a few countries, including China and India,[95] by 2011 international interest in LTE-TDD had expanded, especially in Asia, in part due to LTE-TDD's lower cost of deployment compared to LTE-FDD.[75] By the middle of that year, 26 networks around the world were conducting trials of the technology.[76] The Global TD-LTE Initiative (GTI) was also started in 2011, with founding partners China Mobile, Bharti Airtel, SoftBank Mobile, Vodafone, Clearwire, Aero2 and E-Plus.[96] In September 2011, Huawei announced it would partner with the Polish mobile provider Aero2 to develop a combined LTE-TDD and LTE-FDD network in Poland,[97] and by April 2012, ZTE Corporation had worked to deploy trial or commercial LTE-TDD networks for 33 operators in 19 countries.[87] In late 2012, Qualcomm worked extensively to deploy a commercial LTE-TDD network in India, and partnered with Bharti Airtel and Huawei to develop the first multi-mode LTE-TDD smartphone for India.[86]
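The dynamically adjustable upload/download ratio described above is realized through standard subframe layouts: 3GPP defines seven TDD uplink-downlink configurations that assign each 1 ms subframe of the 10 ms radio frame to downlink (D), uplink (U) or a special switching subframe (S). The sketch below tallies the split for each configuration; treating the S subframes as neither D nor U is a simplification for illustration.

```python
# Sketch of the seven LTE-TDD uplink-downlink configurations
# (3GPP TS 36.211): each 10 ms frame has ten 1 ms subframes that are
# downlink (D), uplink (U) or special (S). Choosing a configuration
# sets the download/upload split of the cell.

TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def dl_ul_split(config: int) -> tuple[int, int]:
    """Count downlink and uplink subframes per 10 ms frame (the special
    subframes, which carry guard time and pilots, are ignored here)."""
    frame = TDD_CONFIGS[config]
    return frame.count("D"), frame.count("U")

for cfg in TDD_CONFIGS:
    dl, ul = dl_ul_split(cfg)
    print(f"config {cfg}: {dl} DL / {ul} UL subframes per frame")
# config 1 gives a balanced 4 DL / 4 UL split, while config 5 is the
# most downlink-heavy at 8 DL / 1 UL.
```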
In Japan, SoftBank Mobile launched LTE-TDD services in February 2012 under the name Advanced eXtended Global Platform (AXGP), marketed as SoftBank 4G. The AXGP band was previously used for Willcom's PHS service, and after PHS was discontinued in 2010 the PHS band was re-purposed for AXGP service.[98][99]

In the U.S., Clearwire planned to implement LTE-TDD, with chip-maker Qualcomm agreeing to support Clearwire's frequencies on its multi-mode LTE chipsets.[100] With Sprint's acquisition of Clearwire in 2013,[77][101] the carrier began using these frequencies for LTE service on networks built by Samsung, Alcatel-Lucent, and Nokia.[102][103]

As of March 2013, 156 commercial 4G LTE networks existed, including 142 LTE-FDD networks and 14 LTE-TDD networks.[88] As of November 2013, the South Korean government planned to allow a fourth wireless carrier in 2014, which would provide LTE-TDD services,[79] and in December 2013, LTE-TDD licenses were granted to China's three mobile operators, allowing commercial deployment of 4G LTE services.[104]

In January 2014, Nokia Solutions and Networks indicated that it had completed a series of tests of voice over LTE (VoLTE) calls on China Mobile's TD-LTE network.[105] The next month, Nokia Solutions and Networks and Sprint announced that they had demonstrated throughput speeds of 2.6 gigabits per second using an LTE-TDD network, surpassing the previous record of 1.6 gigabits per second.[106]

Much of the LTE standard addresses the upgrading of 3G UMTS to what will eventually be 4G mobile communications technology. A large amount of the work is aimed at simplifying the architecture of the system, as it transitions from the existing UMTS combined circuit- and packet-switched network to an all-IP flat-architecture system. E-UTRA is the air interface of LTE.

The LTE standard supports only packet switching with its all-IP network. Voice calls in GSM, UMTS, and CDMA2000 are circuit-switched, so with the adoption of LTE, carriers had to re-engineer their voice call networks.[108] Four different approaches sprang up. One additional approach, not initiated by operators, is the use of over-the-top content (OTT) services, using applications like Skype and Google Talk to provide LTE voice service.[109]

Most major backers of LTE preferred and promoted VoLTE from the beginning. The lack of software support in initial LTE devices, as well as in core network devices, however, led to a number of carriers promoting VoLGA (Voice over LTE Generic Access) as an interim solution.[110] The idea was to use the same principles as GAN (Generic Access Network, also known as UMA or Unlicensed Mobile Access), which defines the protocols through which a mobile handset can perform voice calls over a customer's private Internet connection, usually over wireless LAN. VoLGA, however, never gained much support, because VoLTE (IMS) promises much more flexible services, albeit at the cost of having to upgrade the entire voice call infrastructure. VoLTE may require Single Radio Voice Call Continuity (SRVCC) in order to smoothly perform a handover to a 2G or 3G network in case of poor LTE signal quality.[111]

While the industry has standardized on VoLTE, early LTE deployments required carriers to introduce circuit-switched fallback as a stopgap measure. When placing or receiving a voice call on a non-VoLTE-enabled network or device, LTE handsets fall back to old 2G or 3G networks for the duration of the call.
To ensure compatibility, 3GPP demands at least the AMR-NB codec (narrowband), but the recommended speech codec for VoLTE is Adaptive Multi-Rate Wideband, also known as HD Voice. This codec is mandated in 3GPP networks that support 16 kHz sampling.[112]

Fraunhofer IIS has proposed and demonstrated "Full-HD Voice", an implementation of the AAC-ELD (Advanced Audio Coding – Enhanced Low Delay) codec for LTE handsets.[113] Where previous cell phone voice codecs supported only frequencies up to 3.5 kHz, and upcoming wideband audio services branded as HD Voice support up to 7 kHz, Full-HD Voice supports the entire bandwidth range from 20 Hz to 20 kHz. For end-to-end Full-HD Voice calls to succeed, however, both the caller's and the recipient's handsets, as well as the networks, have to support the feature.[114]

The LTE standard covers a range of many different bands, each of which is designated by both a frequency and a band number. As a result, phones from one country may not work in other countries; users need a multi-band capable phone to roam internationally.

According to the European Telecommunications Standards Institute's (ETSI) intellectual property rights (IPR) database, about 50 companies had declared, as of March 2012, that they hold essential patents covering the LTE standard.[121] ETSI has made no investigation of the correctness of the declarations, however,[121] so that "any analysis of essential LTE patents should take into account more than ETSI declarations."[122] Independent studies have found that about 3.3 to 5 percent of all revenues from handset manufacturers are spent on standard-essential patents. This is less than the combined published rates, due to reduced-rate licensing agreements, such as cross-licensing.[123][124][125]
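The frequency limits quoted above follow from the codecs' sampling rates: by the Nyquist criterion, a codec sampling at fs can represent audio only up to fs/2, and the usable band sits somewhat below that because of filter roll-off. A small sketch of this relation (the 48 kHz rate for AAC-ELD is an assumption; the text only states the 20 Hz–20 kHz band):

```python
# Sketch relating the sampling rates mentioned above to audio bandwidth.
# A codec sampled at fs can carry audio up to fs/2 (Nyquist); the quoted
# usable band for each service is somewhat lower.

CODECS = {
    # name: (sampling rate in Hz, quoted usable audio band in Hz)
    "AMR-NB (narrowband)":     (8_000,  3_500),
    "AMR-WB (HD Voice)":       (16_000, 7_000),
    "AAC-ELD (Full-HD Voice)": (48_000, 20_000),  # 48 kHz is an assumption
}

for name, (fs, usable) in CODECS.items():
    nyquist = fs / 2
    print(f"{name}: fs = {fs} Hz, Nyquist limit {nyquist:.0f} Hz, "
          f"quoted band up to {usable} Hz")
```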
https://en.wikipedia.org/wiki/LTE_(telecommunication)
MSISDN (/ˈɛmɛsaɪɛsdiːɛn/ MISS-den) is a number uniquely identifying a subscription in a Global System for Mobile communications or a Universal Mobile Telecommunications System mobile network. It is the mapping of the telephone number to the subscriber identity module in a mobile or cellular phone. The abbreviation has several interpretations, the most common one being "Mobile Station International Subscriber Directory Number".[1]

The MSISDN and the international mobile subscriber identity (IMSI) are two important numbers used for identifying a mobile subscriber. The IMSI is stored in the SIM (the card inserted into the mobile phone) and uniquely identifies the mobile station, its home wireless network, and the home country of the home wireless network. The MSISDN is used for routing calls to the subscriber. The IMSI is often used as a key in the home location register ("subscriber database"), and the MSISDN is the number normally dialed to connect a call to the mobile phone. A SIM has a unique IMSI that does not change, while the MSISDN can change over time, i.e. different MSISDNs can be associated with the SIM.

The MSISDN follows the numbering plan defined in the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) recommendation E.164. Depending on the source or standardization body, the abbreviation MSISDN can be written out in several different ways.

The ITU-T recommendation E.164 limits the maximum length of an MSISDN to 15 digits, of which one to three digits are reserved for the country code. Prefixes are not included (e.g., 00 prefixes an international MSISDN when dialing from Sweden). The minimum length of the MSISDN is not specified by the ITU-T; it is instead specified in the national numbering plans by the telecommunications regulator of each country.

In GSM and its variant DCS 1800, the MSISDN is built up as MSISDN = CC + NDC + SN (country code + national destination code + subscriber number). In the GSM variant PCS 1900, the MSISDN is built up as MSISDN = CC + NPA + SN (country code + number planning area + subscriber number). The country code identifies a country or geographical area and may be between one and three digits; the ITU defines and maintains the list of assigned country codes.

Example: the number +880 15 00121121 (a Teletalk hotline number) has the subscription number MSISDN = 8801500121121, following the pattern CCC XX N1N2N3N4N5N6N7N8, i.e. CC = 880, NDC = 15, SN = 00121121.

For further information on the MSISDN format, see the ITU-T specification E.164.
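A minimal sketch of the decomposition in the example above, assuming the field lengths of the Bangladeshi example (3-digit country code, 2-digit NDC); the function name is illustrative, and real numbering plans require per-country length tables:

```python
# Illustrative sketch: splitting an E.164 MSISDN into CC + NDC + SN.
# Field lengths vary by country; the values below come from the
# example in the text only.

def split_msisdn(msisdn: str, cc_len: int, ndc_len: int) -> dict:
    """Split an MSISDN given without prefixes such as '00' or '+'."""
    if not msisdn.isdigit() or len(msisdn) > 15:
        raise ValueError("E.164 limits an MSISDN to at most 15 digits")
    return {
        "cc": msisdn[:cc_len],                   # country code
        "ndc": msisdn[cc_len:cc_len + ndc_len],  # national destination code
        "sn": msisdn[cc_len + ndc_len:],         # subscriber number
    }

parts = split_msisdn("8801500121121", cc_len=3, ndc_len=2)
assert parts == {"cc": "880", "ndc": "15", "sn": "00121121"}
```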
https://en.wikipedia.org/wiki/MSISDN
NMT (Nordic Mobile Telephony) is an automatic cellular phone system specified by the Nordic telecommunications administrations (PTTs) and opened for service on 1 October 1981. NMT is based on analogue technology (first generation, or 1G) and two variants exist: NMT-450 and NMT-900. The numbers indicate the frequency bands used. NMT-900 was introduced in 1986 and carries more channels than the older NMT-450 network.

The NMT specifications were free and open, allowing many companies to produce NMT hardware and pushing prices down. The success of NMT was important to Nokia (then Mobira) and Ericsson. The first Danish implementers were Storno (then owned by General Electric, later taken over by Motorola) and AP (later taken over by Philips). Initial NMT phones were designed to mount in the trunk of a car, with a keyboard/display unit at the driver's seat. "Portable" versions existed, though they were still bulky, and battery life was a big problem. Later models, such as Benefon's, were as small as 100 mm (3.9 inches) and weighed only about 100 grams.

NMT stands for Nordisk MobilTelefoni or Nordiska MobilTelefoni-gruppen (the Nordic Mobile Telephony group).

The NMT network was opened in Sweden and Norway in 1981, and in Denmark and Finland in 1982. It was a response to the increasing congestion and heavy requirements of the manual mobile phone networks: ARP (150 MHz) in Finland, MTD (450 MHz) in Sweden and Denmark, and OLT in Norway. Iceland joined in 1986. However, Ericsson introduced the first commercial service in Saudi Arabia on 1 September 1981, to 1,200 users, as a pilot test project, one month before it did the same in Sweden. By 1985 the network had grown to 110,000 subscribers in Scandinavia and Finland, 63,300 in Norway alone, which made it the world's largest mobile network at the time.[2]

The NMT network has mainly been used in the Nordic countries, the Baltic countries, Switzerland, France, the Netherlands, Hungary, Poland, Bulgaria, Romania, the Czech Republic, Slovakia, Slovenia, Serbia, Turkey, Croatia, Bosnia, Russia, Ukraine and in Asia. The introduction of digital mobile networks such as GSM reduced the popularity of NMT, and the Nordic countries have shut down their NMT networks. In Estonia the NMT network was shut down in December 2000. In Finland, TeliaSonera's NMT network was shut down on 31 December 2002, Norway's last NMT network on 31 December 2004, and Sweden's TeliaSonera NMT network on 31 December 2007.

The NMT-450 network, however, has one big advantage over GSM: range. This advantage is valuable in big but sparsely populated countries such as Iceland. In Iceland, the GSM network reaches 98% of the country's population but only a small proportion of its land area. The NMT system, by contrast, reached most of the country and much of the surrounding waters, so the network was popular with fishermen and those traveling in the vast, empty mainland. In Iceland the NMT service was stopped on 1 September 2010, when Síminn closed down its NMT network.

In Denmark, Norway and Sweden the NMT-450 frequencies were auctioned off to the Swedish Nordisk Mobiltelefon, which later became Ice.net, was renamed Net 1, and built a digital network using CDMA 450. During 2015, that network was migrated to 4G.

France also developed an NMT network in 1988 (in parallel with Radiocom 2000), but with slight variations. As a result, it could not roam with other NMT networks around the world.[3]

In Russia, Uralwestcom shut down its NMT network on 1 September 2006 and Sibirtelecom on 10 January 2008.
Skylink, a subsidiary of Tele2 Russia, still operated an NMT-450 network as of 2016 in Arkhangelsk Oblast and Perm Krai.[4][5] These networks are used in sparsely populated areas with long distances. The license for the provision of services was valid until 2021.[6]

The cell sizes in an NMT network range from 2 km to 30 km. With smaller ranges the network can serve more simultaneous callers; for example, in a city the range can be kept short for better service. NMT used full-duplex transmission, allowing simultaneous receiving and transmission of voice. Car phone versions of NMT used transmission power of up to 15 watts (NMT-450) or 6 watts (NMT-900); handsets used up to 1 watt. NMT had automatic switching (dialing) and handover of the call built into the standard from the beginning, which was not the case with most preceding car phone services, such as the Finnish ARP. Additionally, the NMT standard specified billing as well as national and international roaming.

The NMT voice channel is transmitted with FM (frequency modulation),[7] and NMT signaling transfer speeds vary between 600 and 1,200 bits per second, using FFSK (fast frequency-shift keying) modulation. Signaling between the base station and the mobile station was implemented on the same RF channel that was used for audio, using the 1,200 bit/s FFSK modem. This caused the periodic short noise bursts, e.g. during handover, that were uniquely characteristic of the NMT sound.

In the original NMT specification the voice traffic was not encrypted; it was possible to listen to calls using, for example, a scanner or a cable-ready TV. As a result, some scanners have had the NMT bands blocked so they could not be accessed. Later versions of the NMT specifications defined optional analog scrambling based on two-band audio frequency inversion. If both the base station and the mobile station supported scrambling, they could agree to use it when initiating a phone call. Also, if two users had mobile stations supporting scrambling, they could turn it on during a conversation even if the base stations didn't support it; in this case, audio would be scrambled all the way between the two mobile stations. While the scrambling method was not nearly as strong as the encryption of current digital phones such as GSM or CDMA, it did prevent casual listening with scanners. Scrambling is defined in NMT Doc 450-1: System Description (1999-03-23) and in NMT Doc 450-3 and 900-3: Technical Specification for the Mobile Station (1995-10-04), Annex 26 v.1.1: Mobile Station with Speech Scrambling – Split Inversion Method (Optional) (1998-01-27).

NMT also supported a simple but robust integrated data transfer mode called DMS (Data and Messaging Service) or NMT-Text, which used the network's signaling channel for data transfer. Using DMS, text messaging was possible between two NMT handsets before the SMS service started in GSM, but this feature was never commercially available except in the Russian, Polish and Bulgarian NMT networks. Another data transfer method, called NMT Mobidigi, offered transfer speeds of 380 bits per second and required external equipment.
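The scrambling principle is easy to demonstrate digitally. NMT's optional scrambler used a two-band split-inversion method; the sketch below shows the simpler single-band case, where flipping the sign of every other sample mirrors the audio spectrum about one quarter of the sampling rate. Applying the operation twice restores the original signal, which is why the same circuit could both scramble and descramble.

```python
# Illustrative sketch of audio frequency inversion, the principle behind
# the optional NMT speech scrambler (NMT used a two-band "split inversion"
# variant; this is the simpler single-band case, done digitally).
import math

def invert_spectrum(samples: list[float]) -> list[float]:
    """Mirror the audio spectrum about fs/4 by flipping the sign of
    every other sample; the operation is its own inverse."""
    return [s * (-1) ** n for n, s in enumerate(samples)]

# A 1 kHz tone sampled at fs = 8 kHz comes back as a 3 kHz tone
# (fs/2 - 1000 Hz), making speech unintelligible to a casual listener.
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
scrambled = invert_spectrum(tone)
assert invert_spectrum(scrambled) == tone  # descrambling restores the signal
```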
https://en.wikipedia.org/wiki/Nordic_Mobile_Telephone
ORFS stands for Output RF Spectrum, where "RF" stands for radio frequency. The acronym is used in the context of mobile communication systems, e.g., GSM. It denotes the relationship between (a) the frequency offset from the carrier and (b) the power, measured in a specified bandwidth and time, produced by the mobile station due to the effects of modulation and of power ramping and switching.[1] ORFS measurements are defined and required in order to prove conformance by various institutions, e.g., the U.S. Federal Communications Commission (FCC) or ETSI.
https://en.wikipedia.org/wiki/ORFS
Personal communications network (PCN) is the European digital cellular mobile telephone network. The underlying standard is known as the Digital Cellular System (DCS), which defines a variant of GSM operating at 1.7–1.88 GHz. GSM-1800 has since been adopted in other locations, not necessarily under the PCN/DCS name. The network structure, the signal structure and the transmission characteristics are similar between PCN and GSM-900.

The PCN system was first initiated by Lord Young, UK Secretary of State for Trade and Industry, in 1988. The UK government's Department for Enterprise produced "Phones on the Move: Personal Communications in the 1990s – a discussion document" in January 1989. The document presented a vision for how mobile communications might develop, outlining ideas for both the PCN and the CT2 standards.

PCN is comparable to the North American Personal Communications Service band allocation. The 1800 MHz DCS band is reused in UMTS, LTE and 5G NR; it sees real-world deployment in LTE as "band 3".
https://en.wikipedia.org/wiki/Personal_communications_network
The Real-time Transport Protocol (RTP) specifies a general-purpose data format and network protocol for transmitting digital media streams on Internet Protocol (IP) networks. The details of media encoding, such as signal sampling rate, frame size and timing, are specified in an RTP payload format. The format parameters of the RTP payload are typically communicated between transmission endpoints with the Session Description Protocol (SDP), but other protocols, such as the Extensible Messaging and Presence Protocol (XMPP), may be used.

The technical parameters of payload formats for audio and video streams are standardised, and the standard also describes the process of registering new payload types with IANA. Dedicated specifications define the payload formats and types for text messaging, for MIDI, and for audio and video.

Payload identifiers 96–127 are used for payloads defined dynamically during a session. It is recommended that port numbers be assigned dynamically, although port numbers 5004 and 5005 have been registered for use of the profile when a dynamically assigned port is not required. Applications should always support PCMU (payload type 0). Previously, DVI4 (payload type 5) was also recommended, but this recommendation was removed in 2013.[53]
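To make the payload-type mechanics concrete, here is a minimal sketch that packs the fixed 12-byte RTP header of RFC 3550 with payload type 0 (PCMU), which the profile expects every application to support. The sequence number, timestamp and SSRC values are arbitrary examples.

```python
# Minimal sketch of a 12-byte RTP header (RFC 3550) carrying PCMU,
# payload type 0. Field packing follows the RFC header layout.
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0, marker: bool = False) -> bytes:
    version, padding, extension, csrc_count = 2, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (int(marker) << 7) | payload_type
    # "!BBHII" = network byte order: two flag bytes, 16-bit sequence
    # number, 32-bit timestamp, 32-bit SSRC
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD)
assert len(hdr) == 12 and hdr[0] == 0x80 and hdr[1] == 0  # V=2, PT=0 (PCMU)
# For 8 kHz PCMU, the timestamp advances by 160 per 20 ms packet.
```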
https://en.wikipedia.org/wiki/RTP_audio_video_profile
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities, such as routers, switches, nodes, access points and links.[1] Most simulators use discrete-event simulation, in which the state variables of the modeled system change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network and its protocols would behave under different conditions.

A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Network simulators are used because communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today, such as 5G, the Internet of Things (IoT), wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks and LTE.

Most of the commercial simulators are GUI-driven, while some network simulators are CLI-driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results include network-level metrics, link metrics, device metrics, etc. Further drill-down is available through simulation trace files, which log every packet and every event that occurred in the simulation and are used for analysis.

Most network simulators use discrete-event simulation, in which a list of pending "events" is stored and those events are processed in order, with some events triggering future events, such as the arrival of a packet at one node triggering the arrival of that packet at a downstream node.

Network emulation allows users to introduce real devices and applications into a test network (simulated) that alters packet flow in such a way as to mimic the behavior of a live network. Live traffic can pass through the simulator and be affected by objects within the simulation. The typical methodology is that real packets from a live application are sent to the emulation server (where the virtual network is simulated). The real packet gets "modulated" into a simulation packet, which is demodulated back into a real packet after experiencing the effects of loss, errors, delay, jitter, etc., thereby transferring these network effects into the real packet. Thus it is as if the real packet had flowed through a real network, when in reality it flowed through the simulated network. Emulation is widely used in the design stage for validating communication networks prior to deployment.

There are both free/open-source and proprietary network simulators and emulators available, ranging from the very simple to the very complex. Network simulators provide a cost-effective method for designing and evaluating networks and protocols before deployment; minimally, a network simulator must enable a user to describe the network topology and its traffic, run the simulation, and examine the resulting performance.
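A toy version of the event loop just described, with invented topology and delays: a heap-ordered pending-event list in which processing a packet's arrival at one node schedules its future arrival at the downstream node.

```python
# Toy discrete-event network simulator in the style described above.
# The topology, delays and traffic pattern are made up for illustration.
import heapq

LINK_DELAY = {("A", "B"): 5.0, ("B", "C"): 3.0}  # per-hop delay in ms
NEXT_HOP = {"A": "B", "B": "C"}                   # static route A -> B -> C

def simulate(packets_at_a: int) -> None:
    events = []  # min-heap of (time_ms, packet_seq, node)
    for i in range(packets_at_a):
        heapq.heappush(events, (i * 1.0, i, "A"))  # one packet per ms at A
    while events:
        time_ms, seq, node = heapq.heappop(events)  # earliest event first
        print(f"t={time_ms:5.1f} ms  packet {seq} arrives at {node}")
        nxt = NEXT_HOP.get(node)
        if nxt:  # this arrival triggers a future arrival downstream
            heapq.heappush(events, (time_ms + LINK_DELAY[(node, nxt)], seq, nxt))

simulate(packets_at_a=2)
# packet 0 arrives at A at 0.0 ms, B at 5.0 ms, C at 8.0 ms;
# packet 1 follows 1 ms behind on each hop.
```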
https://en.wikipedia.org/wiki/Network_simulation
This is a comparison of standards of wireless networking technologies for devices such as mobile phones. A new generation of cellular standards has appeared approximately every tenth year since 1G systems were introduced in 1979 and the early to mid-1980s.

Global System for Mobile Communications (GSM, around 80–85% market share) and IS-95 (around 10–15% market share) were the two most prevalent 2G mobile communication technologies in 2007.[1] In 3G, the most prevalent technology was UMTS, with CDMA2000 in close contention.

All radio access technologies have to solve the same problem: dividing the finite RF spectrum among multiple users as efficiently as possible. GSM uses TDMA and FDMA for user and cell separation. UMTS, IS-95 and CDMA2000 use CDMA. WiMAX and LTE use OFDM. In theory, CDMA, TDMA and FDMA have exactly the same spectral efficiency, but in practice each has its own challenges: power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. For a classic illustration of the fundamental difference between TDMA and CDMA, imagine a cocktail party where couples are talking to each other in a single room, the room representing the available bandwidth: under TDMA the couples take turns speaking, while under CDMA all couples speak at once, each pair in its own language, treating every conversation they cannot understand as background noise.

[Figure: market shares of the different mobile standards. In a fast-growing market, GSM/3GSM (red) grows faster than the market and gains market share, the CDMA family (blue) grows at about the same rate as the market, while other technologies (grey) are being phased out.]

As a reference, a comparison of mobile and non-mobile wireless Internet standards follows. Antenna and RF front-end enhancements and minor protocol timer tweaks have helped deploy long-range P2P networks that compromise on radial coverage, throughput and/or spectral efficiency (links of 310 km and 382 km).

Notes: all speeds are theoretical maximums and will vary with a number of factors, including the use of external antennas, distance from the tower and ground speed (e.g. communications on a train may be poorer than when standing still). Usually the bandwidth is shared between several terminals. The performance of each technology is determined by a number of constraints, including the spectral efficiency of the technology, the cell sizes used, and the amount of spectrum available. For more comparison tables, see bit rate progress trends, comparison of mobile phone standards, spectral efficiency comparison table and OFDM system comparison table.
https://en.wikipedia.org/wiki/Comparison_of_mobile_phone_standards
GEO-Mobile Radio Interface (GEO stands for geostationary Earth orbit), better known as GMR, is an ETSI standard for satellite phones. The GMR standard is derived from the 3GPP family of terrestrial digital cellular standards and supports access to GSM/UMTS core networks. It is used by ACeS, ICO, Inmarsat, SkyTerra, TerreStar and Thuraya.[1]

There are two widely deployed variants of GMR, both heavily modeled after GSM. GMR-1 is the technology used by Thuraya, and its GMR-1 3G evolution is the technology used by TerreStar and SkyTerra; GMR-2 is used by the Inmarsat IsatPhone Pro. GMR was developed by TIA and ETSI.[1][2]
https://en.wikipedia.org/wiki/GEO-Mobile_Radio_Interface
The SMG GSM 02.07 Technical Specification (version 7.1.0, Release 1998), titled Digital cellular telecommunications system (Phase 2+); Mobile Stations (MS) features, defines Mobile Station (MS) features and classifies them according to their type and whether they are mandatory or optional. The MS features detailed in this specification do not represent an exhaustive list.[1]
https://en.wikipedia.org/wiki/GSM_02.07