Photon transport theories inPhysics,Medicine, andStatistics(such as theMonte Carlo method), are commonly used to modellight propagation in tissue. The responses to apencil beamincident on a scattering medium are referred to asGreen's functionsorimpulse responses. Photon transport methods can be directly used to compute broad-beam responses by distributing photons over the cross section of the beam. However,convolutioncan be used in certain cases to improve computational efficiency. In order for convolution to be used to calculate a broad-beam response, a system must betime invariant,linear, andtranslation invariant. Time invariance implies that a photon beam delayed by a given time produces a response shifted by the same delay. Linearity indicates that a given response will increase by the same amount if the input is scaled and obeys the property ofsuperposition. Translational invariance means that if a beam is shifted to a new location on the tissue surface, its response is also shifted in the same direction by the same distance. Here, only spatial convolution is considered. Responses from photon transport methods can be physical quantities such asabsorption,fluence,reflectance, ortransmittance. Given a specific physical quantity,G(x,y,z), from a pencil beam in Cartesian space and a collimated light source with beam profileS(x,y), a broad-beam response can be calculated using the following 2-D convolution formula: Similar to 1-D convolution, 2-D convolution is commutative betweenGandSwith a change of variablesx″=x−x′{\displaystyle x''=x-x'\,}andy″=y−y′{\displaystyle y''=y-y'\,}: Because the broad-beam responseC(x,y,z){\displaystyle C(x,y,z)\,}has cylindrical symmetry, its convolution integrals can be rewritten as: wherer′=x′2+y′2{\displaystyle r'={\sqrt {x'^{2}+y'^{2}}}}. Because the inner integration of Equation 4 is independent ofz, it only needs to be calculated once for all depths. Thus this form of the broad-beam response is more computationally advantageous. For aGaussian beam, the intensity profile is given by Here,Rdenotes the1e2{\displaystyle {\tfrac {1}{e^{2}}}\,}radius of the beam, andS0denotes the intensity at the center of the beam.S0is related to the total powerP0by Substituting Eq. 5 into Eq. 4, we obtain whereI0is the zeroth-ordermodified Bessel function. For atop-hat beamof radiusR, the source function becomes whereS0denotes the intensity inside the beam.S0is related to the total beam powerP0by Substituting Eq. 8 into Eq. 4, we obtain where First photon-tissue interactions always occur on the z axis and hence contribute to the specific absorption or related physical quantities as aDirac delta function. Errors will result if absorption due to the first interactions is not recorded separately from absorption due to subsequent interactions. The total impulse response can be expressed in two parts: where the first term results from the first interactions and the second, from subsequent interactions. For a Gaussian beam, we have For a top-hat beam, we have For a top-hat beam, the upper integration limits may be bounded byrmax, such thatr≤rmax−R. Thus, the limited grid coverage in therdirection does not affect the convolution. To convolve reliably for physical quantities atrin response to a top-hat beam, we must ensure thatrmaxin photon transport methods is large enough thatr≤rmax−Rholds. For a Gaussian beam, no simple upper integration limits exist because it theoretically extends to infinity. 
Atr>>R, a Gaussian beam and a top-hat beam of the sameRandS0have comparable convolution results. Therefore,r≤rmax−Rcan be used approximately for Gaussian beams as well. There are two common methods used to implement discrete convolution: the definition of convolution andfast Fourier transformation(FFT and IFFT) according to theconvolution theorem. To calculate the optical broad-beam response, the impulse response of a pencil beam is convolved with the beam function. As shown by Equation 4, this is a 2-D convolution. To calculate the response of a light beam on a plane perpendicular to the z axis, the beam function (represented by ab × bmatrix) is convolved with the impulse response on that plane (represented by ana×amatrix). Normallyais greater thanb. The calculation efficiency of these two methods depends largely onb, the size of the light beam. In direct convolution, the solution matrix is of the size (a+b− 1) × (a+b− 1). The calculation of each of these elements (except those near boundaries) includesb×bmultiplications andb×b− 1 additions, so thetime complexityisO[(a+b)2b2]. Using the FFT method, the major steps are the FFT and IFFT of (a+b− 1) × (a+b− 1) matrices, so the time complexity is O[(a+b)2log(a+b)]. Comparing O[(a+b)2b2] and O[(a+b)2log(a+b)], it is apparent that direct convolution will be faster ifbis much smaller thana, but the FFT method will be faster ifbis relatively large.
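As a concrete illustration of the trade-off just described, the sketch below (Python with NumPy/SciPy; the grid sizes a and b and the random data are arbitrary choices, not values from the article) convolves one depth plane of a pencil-beam response with a beam profile both directly and via the FFT, confirming that the two agree and that the output has size (a + b − 1) × (a + b − 1).

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

# Illustrative sketch: convolve a pencil-beam impulse-response plane (a x a)
# with a beam profile (b x b), once by direct summation and once via the FFT,
# and confirm the two results agree.
rng = np.random.default_rng(0)
a, b = 256, 7                        # impulse-response grid and beam grid sizes
G_plane = rng.random((a, a))         # stand-in for G(x, y, z) at one depth z
S = np.ones((b, b)) / b**2           # stand-in for a normalized top-hat profile

C_direct = convolve2d(G_plane, S)    # O[(a+b)^2 * b^2] work
C_fft = fftconvolve(G_plane, S)      # O[(a+b)^2 * log(a+b)] work

print(C_direct.shape)                # (a+b-1, a+b-1), as stated in the text
print(np.allclose(C_direct, C_fft))  # True, up to floating-point error
```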
https://en.wikipedia.org/wiki/Convolution_for_optical_broad-beam_responses_in_scattering_media
Inmathematics, theconvolution poweris then-fold iteration of theconvolutionwith itself. Thus ifx{\displaystyle x}is afunctiononEuclidean spaceRdandn{\displaystyle n}is anatural number, then the convolution power is defined by where∗denotes the convolution operation of functions onRdand δ0is theDirac delta distribution. This definition makes sense ifxis anintegrablefunction (inL1), a rapidly decreasingdistribution(in particular, a compactly supported distribution) or is a finiteBorel measure. Ifxis the distribution function of arandom variableon the real line, then thenthconvolution power ofxgives the distribution function of the sum ofnindependent random variables with identical distributionx. Thecentral limit theoremstates that ifxis in L1and L2with mean zero and variance σ2, then where Φ is the cumulativestandard normal distributionon the real line. Equivalently,x∗n/σn{\displaystyle x^{*n}/\sigma {\sqrt {n}}}tends weakly to the standard normal distribution. In some cases, it is possible to define powersx*tfor arbitrary realt> 0. If μ is aprobability measure, then μ isinfinitely divisibleprovided there exists, for each positive integern, a probability measure μ1/nsuch that That is, a measure is infinitely divisible if it is possible to define allnth roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory ofstochastic processes. Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidate for measures having such a logarithm are those of (generalized)Poissontype, given in the form In fact, theLévy–Khinchin theoremstates that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to thevague topology, of the class of Poisson measures (Stroock 1993, §3.2). Many applications of the convolution power rely on being able to define the analog ofanalytic functionsasformal power serieswith powers replaced instead by the convolution power. Thus ifF(z)=∑n=0∞anzn{\displaystyle \textstyle {F(z)=\sum _{n=0}^{\infty }a_{n}z^{n}}}is an analytic function, then one would like to be able to define Ifx∈L1(Rd) or more generally is a finite Borel measure onRd, then the latter series converges absolutely in norm provided that the norm ofxis less than the radius of convergence of the original series definingF(z). In particular, it is possible for such measures to define theconvolutional exponential It is not generally possible to extend this definition to arbitrary distributions, although a class of distributions on which this series still converges in an appropriate weak sense is identified byBen Chrouda, El Oued & Ouerdiane (2002). Ifxis itself suitably differentiable, then from thepropertiesof convolution, one has whereD{\displaystyle {\mathcal {D}}}denotes thederivativeoperator. Specifically, this holds ifxis a compactly supported distribution or lies in theSobolev spaceW1,1to ensure that the derivative is sufficiently regular for the convolution to be well-defined. 
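A minimal numerical sketch of the convolution power (the probability mass function below is an arbitrary example, not taken from the article): repeated self-convolution gives the distribution of a sum of i.i.d. variables, whose shape tends toward a Gaussian as the central limit theorem describes.

```python
import numpy as np

# Assumed example: n-fold convolution power of a pmf on {0, 1, 2}.
def convolution_power(x, n):
    """Return x^{*n}; x^{*0} is the discrete delta, as in the definition."""
    result = np.array([1.0])           # delta_0 in the discrete case
    for _ in range(n):
        result = np.convolve(result, x)
    return result

x = np.array([0.2, 0.5, 0.3])          # an arbitrary pmf
p = convolution_power(x, 40)           # distribution of the sum of 40 i.i.d. copies
print(p.sum())                         # still sums to 1 (a probability measure)
print(p.argmax())                      # peak near 40 * mean = 40 * 1.1 = 44
```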
In the configuration random graph, the size distribution ofconnected componentscan be expressed via the convolution power of the excessdegree distribution(Kryven (2017)): Here,w(n){\displaystyle w(n)}is the size distribution for connected components,u1(k)=k+1μ1u(k+1),{\displaystyle u_{1}(k)={\frac {k+1}{\mu _{1}}}u(k+1),}is the excess degree distribution, andu(k){\displaystyle u(k)}denotes thedegree distribution. Asconvolution algebrasare special cases ofHopf algebras, the convolution power is a special case of the (ordinary) power in a Hopf algebra. In applications toquantum field theory, the convolution exponential, convolution logarithm, and other analytic functions based on the convolution are constructed as formal power series in the elements of the algebra (Brouder, Frabetti & Patras 2008). If, in addition, the algebra is aBanach algebra, then convergence of the series can be determined as above. In the formal setting, familiar identities such as continue to hold. Moreover, by the permanence of functional relations, they hold at the level of functions, provided all expressions are well-defined in an open set by convergent series.
https://en.wikipedia.org/wiki/Convolution_power
In mathematics, a space of convolution quotients is a field of fractions of a convolution ring of functions: a convolution quotient is to the operation of convolution as a quotient of integers is to multiplication. The construction of convolution quotients allows easy algebraic representation of the Dirac delta function, integral operator, and differential operator without having to deal directly with integral transforms, which are often subject to technical difficulties with respect to whether they converge. Convolution quotients were introduced by Mikusiński (1949), and their theory is sometimes called Mikusiński's operational calculus. The kind of convolution (f,g)↦f∗g{\textstyle (f,g)\mapsto f*g} with which this theory is concerned is the convolution on the half-line, (f∗g)(t)=∫0tf(u)g(t−u)du{\textstyle (f*g)(t)=\int _{0}^{t}f(u)g(t-u)\,du}. It follows from the Titchmarsh convolution theorem that if the convolution f∗g{\textstyle f*g} of two functions f,g{\textstyle f,g} that are continuous on [0,+∞){\textstyle [0,+\infty )} is equal to 0 everywhere on that interval, then at least one of f,g{\textstyle f,g} is 0 everywhere on that interval. A consequence is that if f,g,h{\textstyle f,g,h} are continuous on [0,+∞){\textstyle [0,+\infty )} then h∗f=h∗g{\textstyle h*f=h*g} only if f=g.{\textstyle f=g.} This fact makes it possible to define convolution quotients by saying that for two functions ƒ, g, the pair (ƒ, g) has the same convolution quotient as the pair (h*ƒ, h*g). As with the construction of the rational numbers from the integers, the field of convolution quotients is a direct extension of the convolution ring from which it was built. Every "ordinary" function f{\displaystyle f} in the original space embeds canonically into the space of convolution quotients as the (equivalence class of the) pair (f∗g,g){\displaystyle (f*g,g)}, in the same way that ordinary integers embed canonically into the rational numbers. Non-function elements of our new space can be thought of as "operators", or generalized functions, whose algebraic action on functions is always well-defined even if they have no representation in "ordinary" function space. If we start with the convolution ring of positive half-line functions, the above construction is identical in behavior to the Laplace transform, and ordinary Laplace-space conversion charts can be used to map expressions involving non-function operators to ordinary functions (if they exist). Yet, as mentioned above, the algebraic approach to the construction of the space bypasses the need to explicitly define the transform or its inverse, sidestepping a number of technically challenging convergence problems with the "traditional" integral transform construction.
https://en.wikipedia.org/wiki/Convolution_quotient
Inmathematics,deconvolutionis theinverseofconvolution. Both operations are used insignal processingandimage processing. For example, it may be possible to recover the original signal after a filter (convolution) by using a deconvolution method with a certain degree of accuracy.[1]Due to the measurement error of the recorded signal or image, it can be demonstrated that the worse thesignal-to-noise ratio(SNR), the worse the reversing of a filter will be; hence, inverting a filter is not always a good solution as the error amplifies. Deconvolution offers a solution to this problem. The foundations for deconvolution andtime-series analysiswere largely laid byNorbert Wienerof theMassachusetts Institute of Technologyin his bookExtrapolation, Interpolation, and Smoothing of Stationary Time Series(1949).[2]The book was based on work Wiener had done duringWorld War IIbut that had been classified at the time. Some of the early attempts to apply these theories were in the fields ofweather forecastingandeconomics. In general, the objective of deconvolution is to find the solutionfof a convolution equation of the form: Usually,his some recorded signal, andfis some signal that we wish to recover, but has been convolved with a filter or distortion functiong, before we recorded it. Usually,his a distorted version offand the shape offcan't be easily recognized by the eye or simpler time-domain operations. The functiongrepresents theimpulse responseof an instrument or a driving force that was applied to a physical system. If we knowg, or at least know the form ofg, then we can perform deterministic deconvolution. However, if we do not knowgin advance, then we need to estimate it. This can be done using methods ofstatisticalestimationor building the physical principles of the underlying system, such as the electrical circuit equations or diffusion equations. There are several deconvolution techniques, depending on the choice of the measurement error and deconvolution parameters: When the measurement error is very low (ideal case), deconvolution collapses into a filter reversing. This kind of deconvolution can be performed in the Laplace domain. By computing theFourier transformof the recorded signalhand the system response functiong, you getHandG, withGas thetransfer function. Using theConvolution theorem, whereFis the estimated Fourier transform off. Finally, theinverse Fourier transformof the functionFis taken to find the estimated deconvolved signalf. Note thatGis at the denominator and could amplify elements of the error model if present. In physical measurements, the situation is usually closer to In this caseεisnoisethat has entered our recorded signal. If a noisy signal or image is assumed to be noiseless, the statistical estimate ofgwill be incorrect. In turn, the estimate ofƒwill also be incorrect. The lower thesignal-to-noise ratio, the worse the estimate of the deconvolved signal will be. That is the reason whyinverse filteringthe signal (as in the "raw deconvolution" above) is usually not a good solution. However, if at least some knowledge exists of the type of noise in the data (for example,white noise), the estimate ofƒcan be improved through techniques such asWiener deconvolution. The concept of deconvolution had an early application inreflection seismology. In 1950,Enders Robinsonwas a graduate student atMIT. He worked with others at MIT, such asNorbert Wiener,Norman Levinson, and economistPaul Samuelson, to develop the "convolutional model" of a reflectionseismogram. 
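The Fourier-domain division described above can be sketched in a few lines; the signal, the kernel g, and the use of circular (FFT) convolution are assumptions made for the demonstration, and the exact recovery shown holds only because no noise term has been added.

```python
import numpy as np

# Minimal sketch of "raw" Fourier-domain deconvolution: compute H and G,
# divide, and invert. The spike signal and the short kernel g are assumed
# test data; circular convolution via the FFT stands in for h = f * g.
n = 256
f_true = np.zeros(n)
f_true[40], f_true[100] = 1.0, 0.5     # two "spikes" we wish to recover
g = np.zeros(n)
g[:3] = [0.5, 0.3, 0.2]                # a mild low-pass distortion g

h = np.real(np.fft.ifft(np.fft.fft(f_true) * np.fft.fft(g)))   # recorded signal h = f * g
G = np.fft.fft(g)
f_est = np.real(np.fft.ifft(np.fft.fft(h) / G))                # F = H / G, then invert
print(np.allclose(f_est, f_true))      # True here only because no noise was added;
                                       # with noise, small |G| values amplify the error
```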
This model assumes that the recorded seismograms(t) is the convolution of an Earth-reflectivity functione(t) and aseismicwaveletw(t) from apoint source, wheretrepresents recording time. Thus, our convolution equation is The seismologist is interested ine, which contains information about the Earth's structure. By theconvolution theorem, this equation may beFourier transformedto in thefrequency domain, whereω{\displaystyle \omega }is the frequency variable. By assuming that the reflectivity is white, we can assume that thepower spectrumof the reflectivity is constant, and that the power spectrum of the seismogram is the spectrum of the wavelet multiplied by that constant. Thus, If we assume that the wavelet isminimum phase, we can recover it by calculating the minimum phase equivalent of the power spectrum we just found. The reflectivity may be recovered by designing and applying aWiener filterthat shapes the estimated wavelet to aDirac delta function(i.e., a spike). The result may be seen as a series of scaled, shifted delta functions (although this is not mathematically rigorous): whereNis the number of reflection events,ri{\displaystyle r_{i}}are thereflection coefficients,t−τi{\displaystyle t-\tau _{i}}are the reflection times of each event, andδ{\displaystyle \delta }is theDirac delta function. In practice, since we are dealing with noisy, finitebandwidth, finite length,discretely sampleddatasets, the above procedure only yields an approximation of the filter required to deconvolve the data. However, by formulating the problem as the solution of aToeplitz matrixand usingLevinson recursion, we can relatively quickly estimate a filter with the smallestmean squared errorpossible. We can also do deconvolution directly in the frequency domain and get similar results. The technique is closely related tolinear prediction. In optics and imaging, the term "deconvolution" is specifically used to refer to the process of reversing theoptical distortionthat takes place in an opticalmicroscope,electron microscope,telescope, or other imaging instrument, thus creating clearer images. It is usually done in the digital domain by asoftwarealgorithm, as part of a suite ofmicroscope image processingtechniques. Deconvolution is also practical to sharpen images that suffer from fast motion or jiggles during capturing. EarlyHubble Space Telescopeimages were distorted by aflawed mirrorand were sharpened by deconvolution. The usual method is to assume that the optical path through the instrument is optically perfect, convolved with apoint spread function(PSF), that is, amathematical functionthat describes the distortion in terms of the pathway a theoreticalpoint sourceof light (or other waves) takes through the instrument.[3]Usually, such a point source contributes a small area of fuzziness to the final image. If this function can be determined, it is then a matter of computing itsinverseor complementary function, and convolving the acquired image with that. The result is the original, undistorted image. In practice, finding the true PSF is impossible, and usually an approximation of it is used, theoretically calculated[4]or based on some experimental estimation by using known probes. Real optics may also have different PSFs at different focal and spatial locations, and the PSF may be non-linear. The accuracy of the approximation of the PSF will dictate the final result. Different algorithms can be employed to give better results, at the price of being more computationally intensive. 
Since the original convolution discards data, some algorithms use additional data acquired at nearby focal points to make up some of the lost information.Regularizationin iterative algorithms (as inexpectation-maximization algorithms) can be applied to avoid unrealistic solutions. When the PSF is unknown, it may be possible to deduce it by systematically trying different possible PSFs and assessing whether the image has improved. This procedure is calledblind deconvolution.[3]Blind deconvolution is a well-establishedimage restorationtechnique inastronomy, where the point nature of the objects photographed exposes the PSF thus making it more feasible. It is also used influorescence microscopyfor image restoration, and in fluorescencespectral imagingfor spectral separation of multiple unknownfluorophores. The most commoniterativealgorithm for the purpose is theRichardson–Lucy deconvolutionalgorithm; theWiener deconvolution(and approximations) are the most common non-iterative algorithms. For some specific imaging systems such as laser pulsed terahertz systems, PSF can be modeled mathematically.[6]As a result, as shown in the figure, deconvolution of the modeled PSF and the terahertz image can give a higher resolution representation of the terahertz image. When performing image synthesis in radiointerferometry, a specific kind ofradio astronomy, one step consists of deconvolving the produced image with the "dirty beam", which is a different name for thepoint spread function. A commonly used method is theCLEAN algorithm. Typical use of deconvolution is in tracer kinetics. For example, when measuring a hormone concentration in the blood, its secretion rate can be estimated by deconvolution. Another example is the estimation of the blood glucose concentration from the measured interstitial glucose, which is a distorted version in time and amplitude of the real blood glucose.[7] Deconvolution has been applied extensively toabsorption spectra.[8]TheVan Cittert algorithm(article in German) may be used.[9] Deconvolution maps to division in theFourier co-domain. This allows deconvolution to be easily applied with experimental data that are subject to aFourier transform. An example isNMR spectroscopywhere the data are recorded in the time domain, but analyzed in the frequency domain. Division of the time-domain data by an exponential function has the effect of reducing the width of Lorentzian lines in the frequency domain.
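A compact sketch of the Richardson–Lucy iteration mentioned above, in one dimension with a known PSF; the blurred signal and the PSF are assumed test data, and the code is an illustration of the update rule rather than the interface of any particular imaging package.

```python
import numpy as np

# Richardson-Lucy sketch: iteratively multiply the estimate by the correlation
# of (observed / re-blurred estimate) with the PSF.
def richardson_lucy(observed, psf, iterations=50):
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())    # flat starting guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)     # guard against division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])             # assumed symmetric blur kernel
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.6                           # two point sources
observed = np.convolve(truth, psf, mode="same")           # blurred measurement
restored = richardson_lucy(observed, psf)
print(restored.argmax())                                  # strongest source recovered near index 20
```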
https://en.wikipedia.org/wiki/Deconvolution
Inmathematics,Dirichlet convolution(ordivisor convolution) is abinary operationdefined forarithmetic functions; it is important innumber theory. It was developed byPeter Gustav Lejeune Dirichlet. Iff,g:N→C{\displaystyle f,g:\mathbb {N} \to \mathbb {C} }are twoarithmetic functions, their Dirichlet convolutionf∗g{\displaystyle f*g}is a new arithmetic function defined by: where the sum extends over all positivedivisorsd{\displaystyle d}ofn{\displaystyle n}, or equivalently over all distinct pairs(a,b){\displaystyle (a,b)}of positive integers whose product isn{\displaystyle n}. This product occurs naturally in the study ofDirichlet seriessuch as theRiemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients: The set of arithmetic functions forms acommutative ring, theDirichlet ring, with addition given bypointwise additionand multiplication by Dirichlet convolution. The multiplicative identity is theunit functionε{\displaystyle \varepsilon }defined byε(n)=1{\displaystyle \varepsilon (n)=1}ifn=1{\displaystyle n=1}and0{\displaystyle 0}otherwise. Theunits(invertible elements) of this ring are the arithmetic functionsf{\displaystyle f}withf(1)≠0{\displaystyle f(1)\neq 0}. Specifically, Dirichlet convolution isassociative,[1] distributiveover addition commutative, and has an identity element, Furthermore, for each functionf{\displaystyle f}havingf(1)≠0{\displaystyle f(1)\neq 0}, there exists another arithmetic functionf−1{\displaystyle f^{-1}}satisfyingf∗f−1=ε{\displaystyle f*f^{-1}=\varepsilon }, called theDirichlet inverseoff{\displaystyle f}. The Dirichlet convolution of twomultiplicative functionsis again multiplicative, and every not constantly zero multiplicative function has a Dirichlet inverse which is also multiplicative. In other words, multiplicative functions form a subgroup of the group of invertible elements of the Dirichlet ring. Beware however that the sum of two multiplicative functions is not multiplicative (since(f+g)(1)=f(1)+g(1)=2≠1{\displaystyle (f+g)(1)=f(1)+g(1)=2\neq 1}), so the subset of multiplicative functions is not a subring of the Dirichlet ring. The article on multiplicative functions lists several convolution relations among important multiplicative functions. Another operation on arithmetic functions is pointwise multiplication:fg{\displaystyle fg}is defined by(fg)(n)=f(n)g(n){\displaystyle (fg)(n)=f(n)g(n)}. Given acompletely multiplicative functionh{\displaystyle h}, pointwise multiplication byh{\displaystyle h}distributes over Dirichlet convolution:(f∗g)h=(fh)∗(gh){\displaystyle (f*g)h=(fh)*(gh)}.[2]The convolution of two completely multiplicative functions is multiplicative, but not necessarily completely multiplicative. In these formulas, we use the followingarithmetical functions: The following relations hold: This last identity shows that theprime-counting functionis given by the summatory function whereM(x){\displaystyle M(x)}is theMertens functionandω{\displaystyle \omega }is the distinct prime factor counting function from above. This expansion follows from the identity for the sums over Dirichlet convolutions given on thedivisor sum identitiespage (a standard trick for these sums).[3] Given an arithmetic functionf{\displaystyle f}its Dirichlet inverseg=f−1{\displaystyle g=f^{-1}}may be calculated recursively: the value ofg(n){\displaystyle g(n)}is in terms ofg(m){\displaystyle g(m)}form<n{\displaystyle m<n}. 
Forn=1{\displaystyle n=1}: Forn=2{\displaystyle n=2}: Forn=3{\displaystyle n=3}: Forn=4{\displaystyle n=4}: and in general forn>1{\displaystyle n>1}, The following properties of the Dirichlet inverse hold:[4] An exact, non-recursive formula for the Dirichlet inverse of anyarithmetic functionfis given inDivisor sum identities. A morepartition theoreticexpression for the Dirichlet inverse offis given by The following formula provides a compact way of expressing the Dirichlet inverse of an invertible arithmetic functionf: f−1=∑k=0+∞(f(1)ε−f)∗kf(1)k+1{\displaystyle f^{-1}=\sum _{k=0}^{+\infty }{\frac {(f(1)\varepsilon -f)^{*k}}{f(1)^{k+1}}}} where the expression(f(1)ε−f)∗k{\displaystyle (f(1)\varepsilon -f)^{*k}}stands for the arithmetic functionf(1)ε−f{\displaystyle f(1)\varepsilon -f}convoluted with itselfktimes. Notice that, for a fixed positive integern{\displaystyle n}, ifk>Ω(n){\displaystyle k>\Omega (n)}then(f(1)ε−f)∗k(n)=0{\displaystyle (f(1)\varepsilon -f)^{*k}(n)=0}, this is becausef(1)ε(1)−f(1)=0{\displaystyle f(1)\varepsilon (1)-f(1)=0}and every way of expressingnas a product ofkpositive integers must include a 1, so the series on the right hand side converges for every fixed positive integern. Iffis an arithmetic function, theDirichlet seriesgenerating functionis defined by for thosecomplexargumentssfor which the series converges (if there are any). The multiplication of Dirichlet series is compatible with Dirichlet convolution in the following sense: for allsfor which both series of the left hand side converge, one of them at least converging absolutely (note that simple convergence of both series of the left hand sidedoes notimply convergence of the right hand side!). This is akin to theconvolution theoremif one thinks of Dirichlet series as aFourier transform. The restriction of the divisors in the convolution tounitary,bi-unitaryor infinitary divisors defines similar commutative operations which share many features with the Dirichlet convolution (existence of a Möbius inversion, persistence of multiplicativity, definitions of totients, Euler-type product formulas over associated primes, etc.). Dirichlet convolution is a special case of the convolution multiplication for theincidence algebraof aposet, in this case the poset of positive integers ordered by divisibility. TheDirichlet hyperbola methodcomputes the summation of a convolution in terms of its functions and their summation functions.
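A small illustrative implementation of Dirichlet convolution and the recursive Dirichlet inverse described above; representing arithmetic functions as plain Python callables is a choice made for the sketch, not a library convention. Recovering the Möbius function as the inverse of the constant-one function serves as a check.

```python
from math import isqrt

def divisors(n):
    """All positive divisors of n, in increasing order."""
    divs = set()
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            divs.update((d, n // d))
    return sorted(divs)

def dirichlet_convolve(f, g):
    """(f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

def dirichlet_inverse(f):
    """Recursive inverse: g(1) = 1/f(1); g(n) = -(1/f(1)) * sum_{d|n, d<n} f(n/d) g(d)."""
    cache = {1: 1 / f(1)}
    def g(n):
        if n not in cache:
            cache[n] = -sum(f(n // d) * g(d) for d in divisors(n)[:-1]) / f(1)
        return cache[n]
    return g

one = lambda n: 1                        # the constant-one function
mu = dirichlet_inverse(one)              # its Dirichlet inverse is the Moebius function
print([mu(n) for n in range(1, 11)])     # 1, -1, -1, 0, -1, 1, -1, 0, 0, 1 (as floats)
print(dirichlet_convolve(one, mu)(12))   # one * mu = epsilon, so this is 0 for n > 1
```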
https://en.wikipedia.org/wiki/Dirichlet_convolution
Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighborhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data. This assumption is clearly violated at locations of image edges, and edge blurring is a direct consequence of violating the assumption. Averaging is a special case of discrete convolution. For a 3 by 3 neighborhood, the convolution mask M is: M=19[111111111]{\displaystyle M={\frac {1}{9}}{\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\\\end{bmatrix}}} The significance of the central pixel may be increased, as it approximates the properties of noise with a Gaussian probability distribution: M=110[111121111]{\displaystyle M={\frac {1}{10}}{\begin{bmatrix}1&1&1\\1&2&1\\1&1&1\\\end{bmatrix}}} M=116[121242121]{\displaystyle M={\frac {1}{16}}{\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\\\end{bmatrix}}} A suitable page for beginners about such matrices is at: https://web.archive.org/web/20060819141930/http://www.gamedev.net/reference/programming/features/imageproc/page2.asp The whole article starts at: https://web.archive.org/web/20061019072001/http://www.gamedev.net/reference/programming/features/imageproc/
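A brief sketch applying the 3 × 3 averaging mask M above to a synthetic noisy image; the test image, noise level, and boundary handling are assumptions made for the demonstration. The background noise level drops while the edges of the bright square are blurred, as the text notes.

```python
import numpy as np
from scipy.ndimage import convolve

# Smooth a noisy synthetic image with the uniform 3x3 averaging mask.
rng = np.random.default_rng(1)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                      # bright square on a dark background
noisy = image + 0.2 * rng.standard_normal(image.shape)

M = np.ones((3, 3)) / 9.0                      # the averaging mask M from above
smoothed = convolve(noisy, M, mode="reflect")  # local-neighborhood averaging

# In the flat background patch the noise std drops from about 0.2 to roughly 0.2/3,
# at the cost of blurred edges around the square.
print(noisy[:10, :10].std(), smoothed[:10, :10].std())
```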
https://en.wikipedia.org/wiki/Generalized_signal_averaging
In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions, respectively. Many well-known distributions have simple convolutions. The following is a list of these convolutions. Each statement is of the form where X1,X2,…,Xn{\displaystyle X_{1},X_{2},\dots ,X_{n}} are independent random variables, and Y{\displaystyle Y} is the distribution that results from the convolution of X1,X2,…,Xn{\displaystyle X_{1},X_{2},\dots ,X_{n}}. In place of Xi{\displaystyle X_{i}} and Y{\displaystyle Y} the names of the corresponding distributions and their parameters have been indicated. 0 < αi ≤ 2, −1 ≤ βi ≤ 1, ci > 0, −∞ < μi < ∞{\displaystyle \qquad 0<\alpha _{i}\leq 2\quad -1\leq \beta _{i}\leq 1\quad c_{i}>0\quad -\infty <\mu _{i}<\infty } The following three statements are special cases of the above statement: Mixed distributions:
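One well-known case of such a statement can be checked numerically. The sketch below assumes the binomial case (the sum of independent Binomial(n1, p) and Binomial(n2, p) variables with the same p is Binomial(n1 + n2, p)) and verifies that convolving the two probability mass functions reproduces the closed form.

```python
import numpy as np
from scipy.stats import binom

# Convolve two binomial pmfs with a common p and compare with Binomial(n1+n2, p).
n1, n2, p = 5, 8, 0.3
pmf1 = binom.pmf(np.arange(n1 + 1), n1, p)
pmf2 = binom.pmf(np.arange(n2 + 1), n2, p)

pmf_sum = np.convolve(pmf1, pmf2)                        # distribution of X1 + X2
pmf_expected = binom.pmf(np.arange(n1 + n2 + 1), n1 + n2, p)
print(np.allclose(pmf_sum, pmf_expected))                # True
```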
https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions
Insystem analysis, among other fields of study, alinear time-invariant(LTI)systemis asystemthat produces an output signal from any input signal subject to the constraints oflinearityandtime-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the responsey(t)of the system to an arbitrary inputx(t)can be found directly usingconvolution:y(t) = (x∗h)(t)whereh(t)is called the system'simpulse responseand ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determiningh(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is anyelectrical circuitconsisting ofresistors,capacitors,inductorsandlinear amplifiers.[2] Linear time-invariant system theory is also used inimage processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to aslinear translation-invariantto give the terminology the most general reach. In the case of genericdiscrete-time(i.e.,sampled) systems,linear shift-invariantis the corresponding term. LTI system theory is an area ofapplied mathematicswhich has direct applications inelectrical circuit analysis and design,signal processingandfilter design,control theory,mechanical engineering,image processing, the design ofmeasuring instrumentsof many sorts,NMR spectroscopy[citation needed], and many other technical areas where systems ofordinary differential equationspresent themselves. The defining properties of any LTI system arelinearityandtime invariance. The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system'simpulse response. The output of the systemy(t){\displaystyle y(t)}is simply theconvolutionof the input to the systemx(t){\displaystyle x(t)}with the system's impulse responseh(t){\displaystyle h(t)}. This is called acontinuous timesystem. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating indiscrete time:yi=xi∗hi{\displaystyle y_{i}=x_{i}*h_{i}}wherey,x, andharesequencesand the convolution, in discrete time, uses a discrete summation rather than an integral. LTI systems can also be characterized in thefrequency domainby the system'stransfer function, which is theLaplace transformof the system's impulse response (orZ transformin the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, theeigenfunctions, and the basis functions of the transforms, arecomplexexponentials. This is, if the input to a system is the complex waveformAsest{\displaystyle A_{s}e^{st}}for some complex amplitudeAs{\displaystyle A_{s}}and complex frequencys{\displaystyle s}, the output will be some complex constant times the input, sayBsest{\displaystyle B_{s}e^{st}}for some new complex amplitudeBs{\displaystyle B_{s}}. The ratioBs/As{\displaystyle B_{s}/A_{s}}is the transfer function at frequencys{\displaystyle s}. 
Sincesinusoidsare a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a differentamplitudeand a differentphase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input. LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/ornonlinearcase. Any system that can be modeled as a lineardifferential equationwith constant coefficients is an LTI system. Examples of such systems areelectrical circuitsmade up ofresistors,inductors, andcapacitors(RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits. Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzingfilter banksandMIMOsystems, it is often useful to considervectorsof signals. A linear system that is not time-invariant can be solved using other approaches such as theGreen functionmethod. The behavior of a linear, continuous-time, time-invariant system with input signalx(t) and output signaly(t) is described by the convolution integral:[3] whereh(t){\textstyle h(t)}is the system's response to animpulse:x(τ)=δ(τ){\textstyle x(\tau )=\delta (\tau )}.y(t){\textstyle y(t)}is therefore proportional to a weighted average of the input functionx(τ){\textstyle x(\tau )}. The weighting function ish(−τ){\textstyle h(-\tau )}, simply shifted by amountt{\textstyle t}. Ast{\textstyle t}changes, the weighting function emphasizes different parts of the input function. Whenh(τ){\textstyle h(\tau )}is zero for all negativeτ{\textstyle \tau },y(t){\textstyle y(t)}depends only on values ofx{\textstyle x}prior to timet{\textstyle t}, and the system is said to becausal. To understand why the convolution produces the output of an LTI system, let the notation{x(u−τ);u}{\textstyle \{x(u-\tau );\ u\}}represent the functionx(u−τ){\textstyle x(u-\tau )}with variableu{\textstyle u}and constantτ{\textstyle \tau }. And let the shorter notation{x}{\textstyle \{x\}}represent{x(u);u}{\textstyle \{x(u);\ u\}}. Then a continuous-time system transforms an input function,{x},{\textstyle \{x\},}into an output function,{y}{\textstyle \{y\}}. And in general, every value of the output can depend on every value of the input. This concept is represented by:y(t)=defOt{x},{\displaystyle y(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{x\},}whereOt{\textstyle O_{t}}is the transformation operator for timet{\textstyle t}. In a typical system,y(t){\textstyle y(t)}depends most heavily on the values ofx{\textstyle x}that occurred near timet{\textstyle t}. Unless the transform itself changes witht{\textstyle t}, the output function is just constant, and the system is uninteresting. 
For a linear system,O{\textstyle O}must satisfyEq.1: And the time-invariance requirement is: In this notation, we can write theimpulse responseash(t)=defOt{δ(u);u}.{\textstyle h(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{\delta (u);\ u\}.} Similarly: Substituting this result into the convolution integral:(x∗h)(t)=∫−∞∞x(τ)⋅h(t−τ)dτ=∫−∞∞x(τ)⋅Ot{δ(u−τ);u}dτ,{\displaystyle {\begin{aligned}(x*h)(t)&=\int _{-\infty }^{\infty }x(\tau )\cdot h(t-\tau )\,\mathrm {d} \tau \\[4pt]&=\int _{-\infty }^{\infty }x(\tau )\cdot O_{t}\{\delta (u-\tau );\ u\}\,\mathrm {d} \tau ,\,\end{aligned}}} which has the form of the right side ofEq.2for the casecτ=x(τ){\textstyle c_{\tau }=x(\tau )}andxτ(u)=δ(u−τ).{\textstyle x_{\tau }(u)=\delta (u-\tau ).} Eq.2then allows this continuation:(x∗h)(t)=Ot{∫−∞∞x(τ)⋅δ(u−τ)dτ;u}=Ot{x(u);u}=defy(t).{\displaystyle {\begin{aligned}(x*h)(t)&=O_{t}\left\{\int _{-\infty }^{\infty }x(\tau )\cdot \delta (u-\tau )\,\mathrm {d} \tau ;\ u\right\}\\[4pt]&=O_{t}\left\{x(u);\ u\right\}\\&\mathrel {\stackrel {\text{def}}{=}} y(t).\,\end{aligned}}} In summary, the input function,{x}{\textstyle \{x\}}, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown atEq.1. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulseresponses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral. The mathematical operations above have a simple graphical simulation.[4] Aneigenfunctionis a function for which the output of the operator is a scaled version of the same function. That is,Hf=λf,{\displaystyle {\mathcal {H}}f=\lambda f,}wherefis the eigenfunction andλ{\displaystyle \lambda }is theeigenvalue, a constant. Theexponential functionsAest{\displaystyle Ae^{st}}, whereA,s∈C{\displaystyle A,s\in \mathbb {C} }, areeigenfunctionsof alinear,time-invariantoperator. A simple proof illustrates this concept. Suppose the input isx(t)=Aest{\displaystyle x(t)=Ae^{st}}. The output of the system with impulse responseh(t){\displaystyle h(t)}is then∫−∞∞h(t−τ)Aesτdτ{\displaystyle \int _{-\infty }^{\infty }h(t-\tau )Ae^{s\tau }\,\mathrm {d} \tau }which, by the commutative property ofconvolution, is equivalent to∫−∞∞h(τ)Aes(t−τ)dτ⏞Hf=∫−∞∞h(τ)Aeste−sτdτ=Aest∫−∞∞h(τ)e−sτdτ=Aest⏟Input⏞fH(s)⏟Scalar⏞λ,{\displaystyle {\begin{aligned}\overbrace {\int _{-\infty }^{\infty }h(\tau )\,Ae^{s(t-\tau )}\,\mathrm {d} \tau } ^{{\mathcal {H}}f}&=\int _{-\infty }^{\infty }h(\tau )\,Ae^{st}e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=Ae^{st}\int _{-\infty }^{\infty }h(\tau )\,e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=\overbrace {\underbrace {Ae^{st}} _{\text{Input}}} ^{f}\overbrace {\underbrace {H(s)} _{\text{Scalar}}} ^{\lambda },\\\end{aligned}}} where the scalarH(s)=def∫−∞∞h(t)e−stdt{\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t)e^{-st}\,\mathrm {d} t}is dependent only on the parameters. So the system's response is a scaled version of the input. In particular, for anyA,s∈C{\displaystyle A,s\in \mathbb {C} }, the system output is the product of the inputAest{\displaystyle Ae^{st}}and the constantH(s){\displaystyle H(s)}. Hence,Aest{\displaystyle Ae^{st}}is aneigenfunctionof an LTI system, and the correspondingeigenvalueisH(s){\displaystyle H(s)}. It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems. 
Let's setv(t)=eiωt{\displaystyle v(t)=e^{i\omega t}}some complex exponential andva(t)=eiω(t+a){\displaystyle v_{a}(t)=e^{i\omega (t+a)}}a time-shifted version of it. H[va](t)=eiωaH[v](t){\displaystyle H[v_{a}](t)=e^{i\omega a}H[v](t)}by linearity with respect to the constanteiωa{\displaystyle e^{i\omega a}}. H[va](t)=H[v](t+a){\displaystyle H[v_{a}](t)=H[v](t+a)}by time invariance ofH{\displaystyle H}. SoH[v](t+a)=eiωaH[v](t){\displaystyle H[v](t+a)=e^{i\omega a}H[v](t)}. Settingt=0{\displaystyle t=0}and renaming we get:H[v](τ)=eiωτH[v](0){\displaystyle H[v](\tau )=e^{i\omega \tau }H[v](0)}i.e. that a complex exponentialeiωτ{\displaystyle e^{i\omega \tau }}as input will give a complex exponential of same frequency as output. The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sidedLaplace transformH(s)=defL{h(t)}=def∫0∞h(t)e−stdt{\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}\{h(t)\}\mathrel {\stackrel {\text{def}}{=}} \int _{0}^{\infty }h(t)e^{-st}\,\mathrm {d} t}is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the formejωt{\displaystyle e^{j\omega t}}whereω∈R{\displaystyle \omega \in \mathbb {R} }andj=def−1{\displaystyle j\mathrel {\stackrel {\text{def}}{=}} {\sqrt {-1}}}). TheFourier transformH(jω)=F{h(t)}{\displaystyle H(j\omega )={\mathcal {F}}\{h(t)\}}gives the eigenvalues for pure complex sinusoids. Both ofH(s){\displaystyle H(s)}andH(jω){\displaystyle H(j\omega )}are called thesystem function,system response, ortransfer function. The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values oftless than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as thebilateral Laplace transform). The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are notsquare integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via theWiener–Khinchin theoremeven when Fourier transforms of the signals do not exist. Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms existy(t)=(h∗x)(t)=def∫−∞∞h(t−τ)x(τ)dτ=defL−1{H(s)X(s)}.{\displaystyle y(t)=(h*x)(t)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t-\tau )x(\tau )\,\mathrm {d} \tau \mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}^{-1}\{H(s)X(s)\}.} One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequencys=jω, whereω= 2πf, we obtain |H(s)| which is the system gain for frequencyf. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)). 
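The gain read off from H(s) at s = jω can be checked against a discretized convolution integral. The first-order low-pass below is an assumed example system (not one from the article), and the agreement holds only up to discretization error.

```python
import numpy as np

# Assumed example system: first-order low-pass with h(t) = (1/tau) exp(-t/tau), t >= 0,
# whose transfer function is H(s) = 1 / (1 + s*tau). Drive it with a 10 Hz cosine and
# compare the steady-state output amplitude with |H(jw)|.
dt, tau, w = 1e-3, 0.05, 2 * np.pi * 10      # time step, time constant, angular frequency
t = np.arange(0, 1, dt)
h = (1 / tau) * np.exp(-t / tau)             # causal impulse response samples
x = np.cos(w * t)                            # input sinusoid

y = np.convolve(x, h)[: t.size] * dt         # rectangular-rule approximation of (x*h)(t)
H = 1 / (1 + 1j * w * tau)                   # transfer function evaluated at s = jw
print(y[-200:].max(), abs(H))                # both near 0.30; they differ only by
                                             # the discretization error of the sum
```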
When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variables.L{ddtx(t)}=sX(s){\displaystyle {\mathcal {L}}\left\{{\frac {\mathrm {d} }{\mathrm {d} t}}x(t)\right\}=sX(s)} Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time, however this restriction is not present in other cases such as image processing. A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality ish(t)=0∀t<0,{\displaystyle h(t)=0\quad \forall t<0,} whereh(t){\displaystyle h(t)}is the impulse response. It is not possible in general to determine causality from thetwo-sided Laplace transform. However, when working in the time domain, one normally uses theone-sided Laplace transformwhich requires causality. A system isbounded-input, bounded-output stable(BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying‖x(t)‖∞<∞{\displaystyle \ \|x(t)\|_{\infty }<\infty } leads to an output satisfying‖y(t)‖∞<∞{\displaystyle \ \|y(t)\|_{\infty }<\infty } (that is, a finitemaximum absolute valueofx(t){\displaystyle x(t)}implies a finite maximum absolute value ofy(t){\displaystyle y(t)}), then the system is stable. A necessary and sufficient condition is thath(t){\displaystyle h(t)}, the impulse response, is inL1(has a finite L1norm):‖h(t)‖1=∫−∞∞|h(t)|dt<∞.{\displaystyle \|h(t)\|_{1}=\int _{-\infty }^{\infty }|h(t)|\,\mathrm {d} t<\infty .} In the frequency domain, theregion of convergencemust contain the imaginary axiss=jω{\displaystyle s=j\omega }. As an example, the ideallow-pass filterwith impulse response equal to asinc functionis not BIBO stable, because the sinc function does not have a finite L1norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero fort<0{\displaystyle t<0}and equal to a sinusoid at thecut-off frequencyfort>0{\displaystyle t>0}, then the output will be unbounded for all times other than the zero crossings.[dubious–discuss] Almost everything in continuous-time systems has a counterpart in discrete-time systems. In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to. In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. Ifx(t){\displaystyle x(t)}is a CT signal, then thesampling circuitused before ananalog-to-digital converterwill transform it to a DT signal:xn=defx(nT)∀n∈Z,{\displaystyle x_{n}\mathrel {\stackrel {\text{def}}{=}} x(nT)\qquad \forall \,n\in \mathbb {Z} ,}whereTis thesampling period. Before sampling, the input signal is normally run through a so-calledNyquist filterwhich removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency componentabovethe folding frequency (orNyquist frequency) isaliasedto a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency. 
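A tiny sketch of the aliasing effect just described; the sampling rate and frequencies are arbitrary choices. A 70 Hz sinusoid sampled at 100 Hz produces exactly the same samples as a 30 Hz sinusoid, because 70 Hz lies above the 50 Hz folding frequency.

```python
import numpy as np

# Sampling above the folding (Nyquist) frequency makes two different sinusoids
# indistinguishable from their samples alone.
T = 1.0 / 100.0                        # sampling period -> folding frequency 50 Hz
n = np.arange(50)
f_high, f_alias = 70.0, 30.0           # 70 Hz aliases to 100 - 70 = 30 Hz
x_high = np.cos(2 * np.pi * f_high * n * T)
x_alias = np.cos(2 * np.pi * f_alias * n * T)
print(np.allclose(x_high, x_alias))    # True: the sample sequences are identical
```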
Let{x[m−k];m}{\displaystyle \{x[m-k];\ m\}}represent the sequence{x[m−k];for all integer values ofm}.{\displaystyle \{x[m-k];{\text{ for all integer values of }}m\}.} And let the shorter notation{x}{\displaystyle \{x\}}represent{x[m];m}.{\displaystyle \{x[m];\ m\}.} A discrete system transforms an input sequence,{x}{\displaystyle \{x\}}into an output sequence,{y}.{\displaystyle \{y\}.}In general, every element of the output can depend on every element of the input. Representing the transformation operator byO{\displaystyle O}, we can write:y[n]=defOn{x}.{\displaystyle y[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{x\}.} Note that unless the transform itself changes withn, the output sequence is just constant, and the system is uninteresting. (Thus the subscript,n.) In a typical system,y[n] depends most heavily on the elements ofxwhose indices are nearn. For the special case of theKronecker delta function,x[m]=δ[m],{\displaystyle x[m]=\delta [m],}the output sequence is theimpulse response:h[n]=defOn{δ[m];m}.{\displaystyle h[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{\delta [m];\ m\}.} For a linear system,O{\displaystyle O}must satisfy: And the time-invariance requirement is: In such a system, the impulse response,{h}{\displaystyle \{h\}}, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:x[m]≡∑k=−∞∞x[k]⋅δ[m−k],{\displaystyle x[m]\equiv \sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k],} which expresses{x}{\displaystyle \{x\}}in terms of a sum of weighted delta functions. Therefore:y[n]=On{x}=On{∑k=−∞∞x[k]⋅δ[m−k];m}=∑k=−∞∞x[k]⋅On{δ[m−k];m},{\displaystyle {\begin{aligned}y[n]=O_{n}\{x\}&=O_{n}\left\{\sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k];\ m\right\}\\&=\sum _{k=-\infty }^{\infty }x[k]\cdot O_{n}\{\delta [m-k];\ m\},\,\end{aligned}}} where we have invokedEq.4for the caseck=x[k]{\displaystyle c_{k}=x[k]}andxk[m]=δ[m−k]{\displaystyle x_{k}[m]=\delta [m-k]}. And because ofEq.5, we may write:On{δ[m−k];m}=On−k{δ[m];m}=defh[n−k].{\displaystyle {\begin{aligned}O_{n}\{\delta [m-k];\ m\}&\mathrel {\stackrel {\quad }{=}} O_{n-k}\{\delta [m];\ m\}\\&\mathrel {\stackrel {\text{def}}{=}} h[n-k].\end{aligned}}} Therefore: which is the familiar discrete convolution formula. The operatorOn{\displaystyle O_{n}}can therefore be interpreted as proportional to a weighted average of the functionx[k]. The weighting function ish[−k], simply shifted by amountn. Asnchanges, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse atn=0 is a "time" reversed copy of the unshifted weighting function. Whenh[k] is zero for all negativek, the system is said to becausal. Aneigenfunctionis a function for which the output of the operator is the same function, scaled by some constant. In symbols,Hf=λf,{\displaystyle {\mathcal {H}}f=\lambda f,} wherefis the eigenfunction andλ{\displaystyle \lambda }is theeigenvalue, a constant. Theexponential functionszn=esTn{\displaystyle z^{n}=e^{sTn}}, wheren∈Z{\displaystyle n\in \mathbb {Z} }, areeigenfunctionsof alinear,time-invariantoperator.T∈R{\displaystyle T\in \mathbb {R} }is the sampling interval, andz=esT,z,s∈C{\displaystyle z=e^{sT},\ z,s\in \mathbb {C} }. A simple proof illustrates this concept. Suppose the input isx[n]=zn{\displaystyle x[n]=z^{n}}. 
The output of the system with impulse responseh[n]{\displaystyle h[n]}is then∑m=−∞∞h[n−m]zm{\displaystyle \sum _{m=-\infty }^{\infty }h[n-m]\,z^{m}} which is equivalent to the following by the commutative property ofconvolution∑m=−∞∞h[m]z(n−m)=zn∑m=−∞∞h[m]z−m=znH(z){\displaystyle \sum _{m=-\infty }^{\infty }h[m]\,z^{(n-m)}=z^{n}\sum _{m=-\infty }^{\infty }h[m]\,z^{-m}=z^{n}H(z)}whereH(z)=def∑m=−∞∞h[m]z−m{\displaystyle H(z)\mathrel {\stackrel {\text{def}}{=}} \sum _{m=-\infty }^{\infty }h[m]z^{-m}}is dependent only on the parameterz. Sozn{\displaystyle z^{n}}is aneigenfunctionof an LTI system because the system response is the same as the input times the constantH(z){\displaystyle H(z)}. The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. TheZ transformH(z)=Z{h[n]}=∑n=−∞∞h[n]z−n{\displaystyle H(z)={\mathcal {Z}}\{h[n]\}=\sum _{n=-\infty }^{\infty }h[n]z^{-n}} is exactly the way to get the eigenvalues from the impulse response.[clarification needed]Of particular interest are pure sinusoids; i.e. exponentials of the formejωn{\displaystyle e^{j\omega n}}, whereω∈R{\displaystyle \omega \in \mathbb {R} }. These can also be written aszn{\displaystyle z^{n}}withz=ejω{\displaystyle z=e^{j\omega }}[clarification needed]. Thediscrete-time Fourier transform(DTFT)H(ejω)=F{h[n]}{\displaystyle H(e^{j\omega })={\mathcal {F}}\{h[n]\}}gives the eigenvalues of pure sinusoids[clarification needed]. Both ofH(z){\displaystyle H(z)}andH(ejω){\displaystyle H(e^{j\omega })}are called thesystem function,system response, ortransfer function. Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for t<0. The discrete-time Fourier transformFourier seriesmay be used for analyzing periodic signals. Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,y[n]=(h∗x)[n]=∑m=−∞∞h[n−m]x[m]=Z−1{H(z)X(z)}.{\displaystyle y[n]=(h*x)[n]=\sum _{m=-\infty }^{\infty }h[n-m]x[m]={\mathcal {Z}}^{-1}\{H(z)X(z)\}.} Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior. The Z transform of the delay operator is a simple multiplication byz−1. That is, The input-output characteristics of discrete-time LTI system are completely described by its impulse responseh[n]{\displaystyle h[n]}. Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer functionisstable. A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input.[5]A necessary and sufficient condition for causality ish[n]=0∀n<0,{\displaystyle h[n]=0\ \forall n<0,}whereh[n]{\displaystyle h[n]}is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique[dubious–discuss]. When aregion of convergenceis specified, then causality can be determined. A system isbounded input, bounded output stable(BIBO stable) if, for every bounded input, the output is finite. 
Mathematically, if‖x[n]‖∞<∞{\displaystyle \|x[n]\|_{\infty }<\infty } implies that‖y[n]‖∞<∞{\displaystyle \|y[n]\|_{\infty }<\infty } (that is, if bounded input implies bounded output, in the sense that themaximum absolute valuesofx[n]{\displaystyle x[n]}andy[n]{\displaystyle y[n]}are finite), then the system is stable. A necessary and sufficient condition is thath[n]{\displaystyle h[n]}, the impulse response, satisfies‖h[n]‖1=def∑n=−∞∞|h[n]|<∞.{\displaystyle \|h[n]\|_{1}\mathrel {\stackrel {\text{def}}{=}} \sum _{n=-\infty }^{\infty }|h[n]|<\infty .} In the frequency domain, theregion of convergencemust contain theunit circle(i.e., thelocussatisfying|z|=1{\displaystyle |z|=1}for complexz).
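The discrete-time results above (the convolution sum, the eigenfunction property, and the BIBO criterion) can be exercised together in a short sketch; the impulse response and input sequences below are arbitrary test choices, not taken from the article.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                     # an assumed causal FIR impulse response

# 1) The convolution sum y[n] = sum_k x[k] h[n-k], checked against numpy's convolution.
x = np.array([1.0, 2.0, 0.0, -1.0])
y = np.array([sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
              for n in range(len(x) + len(h) - 1)])
print(np.allclose(y, np.convolve(x, h)))          # True

# 2) Eigenfunction property: the input z^n comes out scaled by H(z) = sum_m h[m] z^{-m}.
z = np.exp(1j * 0.4)                              # a point on the unit circle
n = np.arange(100)
H = np.sum(h * z ** (-np.arange(len(h))))
y_exp = np.convolve(z ** n, h)[: n.size]
print(np.allclose(y_exp[2:], H * z ** n[2:]))     # True once the filter fully overlaps the input

# 3) BIBO stability: the FIR impulse response is absolutely summable, so the system is stable.
print(np.sum(np.abs(h)))                          # 1.0, which is finite
```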
https://en.wikipedia.org/wiki/LTI_system_theory#Impulse_response_and_convolution
In signal processing,multidimensional discrete convolutionrefers to the mathematical operation between two functionsfandgon ann-dimensional lattice that produces a third function, also ofn-dimensions. Multidimensional discrete convolution is the discrete analog of themultidimensional convolutionof functions onEuclidean space. It is also a special case ofconvolution on groupswhen thegroupis the group ofn-tuples of integers. Similar to the one-dimensional case, an asterisk is used to represent the convolution operation. The number of dimensions in the given operation is reflected in the number of asterisks. For example, anM-dimensional convolution would be written withMasterisks. The following represents aM-dimensional convolution of discrete signals: y(n1,n2,...,nM)=x(n1,n2,...,nM)∗⋯M∗h(n1,n2,...,nM){\displaystyle y(n_{1},n_{2},...,n_{M})=x(n_{1},n_{2},...,n_{M})*{\overset {M}{\cdots }}*h(n_{1},n_{2},...,n_{M})} For discrete-valued signals, this convolution can be directly computed via the following: ∑k1=−∞∞∑k2=−∞∞...∑kM=−∞∞h(k1,k2,...,kM)x(n1−k1,n2−k2,...,nM−kM){\displaystyle \sum _{k_{1}=-\infty }^{\infty }\sum _{k_{2}=-\infty }^{\infty }...\sum _{k_{M}=-\infty }^{\infty }h(k_{1},k_{2},...,k_{M})x(n_{1}-k_{1},n_{2}-k_{2},...,n_{M}-k_{M})} The resulting output region of support of a discrete multidimensional convolution will be determined based on the size and regions of support of the two input signals. Listed are several properties of the two-dimensional convolution operator. Note that these can also be extended for signals ofN{\displaystyle N}-dimensions. Commutative Property: x∗∗h=h∗∗x{\displaystyle x**h=h**x} Associate Property: (x∗∗h)∗∗g=x∗∗(h∗∗g){\displaystyle (x**h)**g=x**(h**g)} Distributive Property: x∗∗(h+g)=(x∗∗h)+(x∗∗g){\displaystyle x**(h+g)=(x**h)+(x**g)} These properties are seen in use in the figure below. Given some inputx(n1,n2){\displaystyle x(n_{1},n_{2})}that goes into a filter with impulse responseh(n1,n2){\displaystyle h(n_{1},n_{2})}and then another filter with impulse responseg(n1,n2){\displaystyle g(n_{1},n_{2})}, the output is given byy(n1,n2){\displaystyle y(n_{1},n_{2})}. Assume that the output of the first filter is given byw(n1,n2){\displaystyle w(n_{1},n_{2})}, this means that: w=x∗∗h{\displaystyle w=x**h} Further, that intermediate function is then convolved with the impulse response of the second filter, and thus the output can be represented by: y=w∗∗g=(x∗∗h)∗∗g{\displaystyle y=w**g=(x**h)**g} Using the associative property, this can be rewritten as follows: y=x∗∗(h∗∗g){\displaystyle y=x**(h**g)} meaning that the equivalent impulse response for a cascaded system is given by: heq=h∗∗g{\displaystyle h_{eq}=h**g} A similar analysis can be done on a set of parallel systems illustrated below. In this case, it is clear that: y=(x∗∗h)+(x∗∗g){\displaystyle y=(x**h)+(x**g)} Using the distributive law, it is demonstrated that: y=x∗∗(h+g){\displaystyle y=x**(h+g)} This means that in the case of a parallel system, the equivalent impulse response is provided by: heq=h+g{\displaystyle h_{eq}=h+g} The equivalent impulse responses in both cascaded systems and parallel systems can be generalized to systems withN{\displaystyle N}-number of filters.[1] Convolution in one dimension was a powerful discovery that allowed the input and output of a linear shift-invariant (LSI) system (seeLTI system theory) to be easily compared so long as the impulse response of the filter system was known. 
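The cascade equivalence derived above is easy to verify numerically; in the sketch below the array sizes and filter values are arbitrary test data, and the check simply confirms that filtering with h and then with g matches a single filtering with the equivalent impulse response h ∗∗ g.

```python
import numpy as np
from scipy.signal import convolve2d

# Numerical check of associativity / cascade equivalence: (x ** h) ** g == x ** (h ** g).
rng = np.random.default_rng(2)
x = rng.random((32, 32))               # arbitrary 2-D input
h = rng.random((3, 3))                 # first filter's impulse response
g = rng.random((5, 5))                 # second filter's impulse response

cascade = convolve2d(convolve2d(x, h), g)      # filter with h, then with g
combined = convolve2d(x, convolve2d(h, g))     # filter once with h ** g
print(np.allclose(cascade, combined))          # True, up to floating-point error
```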
This notion carries over to multidimensional convolution as well, as simply knowing the impulse response of a multidimensional filter also allows for a direct comparison to be made between the input and output of a system. This is profound since several of the signals that are transferred in the digital world today are of multiple dimensions, including images and videos. Similar to the one-dimensional convolution, the multidimensional convolution allows the computation of the output of an LSI system for a given input signal. For example, consider an image that is sent over some wireless network subject to electro-optical noise. Possible noise sources include errors in channel transmission, the analog-to-digital converter, and the image sensor. Usually noise caused by the channel or sensor creates spatially independent, high-frequency signal components that translate to arbitrary light and dark spots on the actual image. In order to rid the image data of the high-frequency spectral content, it can be multiplied by the frequency response of a low-pass filter, which, by the convolution theorem, is equivalent to convolving the signal in the time/spatial domain by the impulse response of the low-pass filter. Several impulse responses that do so are shown below.[2] In addition to filtering out spectral content, the multidimensional convolution can implement edge detection and smoothing. This once again is wholly dependent on the values of the impulse response that is used to convolve with the input image. Typical impulse responses for edge detection are illustrated below. In addition to image processing, multidimensional convolution can be implemented to enable a variety of other applications. Since filters are widespread in digital communication systems, any system that must transmit multidimensional data is assisted by filtering techniques. Multidimensional convolution is used in real-time video processing, neural network analysis, digital geophysical data analysis, and much more.[3] One typical distortion that occurs during image and video capture or transmission applications is blur that is caused by a low-pass filtering process. The introduced blur can be modeled using Gaussian low-pass filtering. A signal is said to be separable if it can be written as the product of multiple one-dimensional signals.[1] Mathematically, this is expressed as the following: x(n1,n2,...,nM)=x(n1)x(n2)...x(nM){\displaystyle x(n_{1},n_{2},...,n_{M})=x(n_{1})x(n_{2})...x(n_{M})} Some readily recognizable separable signals include the unit step function and the Dirac delta impulse function. u(n1,n2,...,nM)=u(n1)u(n2)...u(nM){\displaystyle u(n_{1},n_{2},...,n_{M})=u(n_{1})u(n_{2})...u(n_{M})}(unit step function) δ(n1,n2,...,nM)=δ(n1)δ(n2)...δ(nM){\displaystyle \delta (n_{1},n_{2},...,n_{M})=\delta (n_{1})\delta (n_{2})...\delta (n_{M})}(Dirac delta impulse function) Convolution is a linear operation. It then follows that the multidimensional convolution of separable signals can be expressed as the product of many one-dimensional convolutions. For example, consider the case where x and h are both separable functions.
x(n1,n2)∗∗h(n1,n2)=∑k1=−∞∞∑k2=−∞∞h(k1,k2)x(n1−k1,n2−k2){\displaystyle x(n_{1},n_{2})**h(n_{1},n_{2})=\sum _{k_{1}=-\infty }^{\infty }\sum _{k_{2}=-\infty }^{\infty }h(k_{1},k_{2})x(n_{1}-k_{1},n_{2}-k_{2})} By applying the properties of separability, this can then be rewritten as the following: x(n1,n2)∗∗h(n1,n2)=(∑k1=−∞∞h(k1)x(n1−k1))(∑k2=−∞∞h(k2)x(n2−k2)){\displaystyle x(n_{1},n_{2})**h(n_{1},n_{2})={\bigg (}\sum _{k_{1}=-\infty }^{\infty }h(k_{1})x(n_{1}-k_{1}){\bigg )}{\bigg (}\sum _{k_{2}=-\infty }^{\infty }h(k_{2})x(n_{2}-k_{2}){\bigg )}} It is readily seen then that this reduces to the product of one-dimensional convolutions: x(n1,n2)∗∗h(n1,n2)=[x(n1)∗h(n1)][x(n2)∗h(n2)]{\displaystyle x(n_{1},n_{2})**h(n_{1},n_{2})={\bigg [}x(n_{1})*h(n_{1}){\bigg ]}{\bigg [}x(n_{2})*h(n_{2}){\bigg ]}} This conclusion can then be extended to the convolution of two separableM-dimensional signals as follows: x(n1,n2,...,nM)∗⋯M∗h(n1,n2,...,nM)=[x(n1)∗h(n1)][x(n2)∗h(n2)]...[x(nM)∗h(nM)]{\displaystyle x(n_{1},n_{2},...,n_{M})*{\overset {M}{\cdots }}*h(n_{1},n_{2},...,n_{M})={\bigg [}x(n_{1})*h(n_{1}){\bigg ]}{\bigg [}x(n_{2})*h(n_{2}){\bigg ]}...{\bigg [}x(n_{M})*h(n_{M}){\bigg ]}} So, when the two signals are separable, the multidimensional convolution can be computed by computingnM{\displaystyle n_{M}}one-dimensional convolutions. The row-column method can be applied when one of the signals in the convolution is separable. The method exploits the properties of separability in order to achieve a method of calculating the convolution of two multidimensional signals that is more computationally efficient than direct computation of each sample (given that one of the signals are separable).[4]The following shows the mathematical reasoning behind the row-column decomposition approach (typicallyh(n1,n2){\displaystyle h(n_{1},n_{2})}is the separable signal): y(n1,n2)=∑k1=−∞∞∑k2=−∞∞h(k1,k2)x(n1−k1,n2−k2)=∑k1=−∞∞∑k2=−∞∞h1(k1)h2(k2)x(n1−k1,n2−k2)=∑k1=−∞∞h1(k1)[∑k2=−∞∞h2(k2)x(n1−k1,n2−k2)]{\displaystyle {\begin{aligned}y(n_{1},n_{2})&=\sum _{k_{1}=-\infty }^{\infty }\sum _{k_{2}=-\infty }^{\infty }h(k_{1},k_{2})x(n_{1}-k_{1},n_{2}-k_{2})\\&=\sum _{k_{1}=-\infty }^{\infty }\sum _{k_{2}=-\infty }^{\infty }h_{1}(k_{1})h_{2}(k_{2})x(n_{1}-k_{1},n_{2}-k_{2})\\&=\sum _{k_{1}=-\infty }^{\infty }h_{1}(k_{1}){\Bigg [}\sum _{k_{2}=-\infty }^{\infty }h_{2}(k_{2})x(n_{1}-k_{1},n_{2}-k_{2}){\Bigg ]}\end{aligned}}} The value of∑k2=−∞∞h2(k2)x(n1−k1,n2−k2){\displaystyle \sum _{k_{2}=-\infty }^{\infty }h_{2}(k_{2})x(n_{1}-k_{1},n_{2}-k_{2})}can now be re-used when evaluating othery{\displaystyle y}values with a shared value ofn2{\displaystyle n_{2}}: y(n1+δ,n2)=∑k1=−∞∞h1(k1)[∑k2=−∞∞h2(k2)x(n1−[k1−δ],n2−k2)]=∑k1=−∞∞h1(k1+δ)[∑k2=−∞∞h2(k2)x(n1−k1,n2−k2)]{\displaystyle {\begin{aligned}y(n_{1}+\delta ,n_{2})&=\sum _{k_{1}=-\infty }^{\infty }h_{1}(k_{1}){\Bigg [}\sum _{k_{2}=-\infty }^{\infty }h_{2}(k_{2})x(n_{1}-[k_{1}-\delta ],n_{2}-k_{2}){\Bigg ]}\\&=\sum _{k_{1}=-\infty }^{\infty }h_{1}(k_{1}+\delta ){\Bigg [}\sum _{k_{2}=-\infty }^{\infty }h_{2}(k_{2})x(n_{1}-k_{1},n_{2}-k_{2}){\Bigg ]}\end{aligned}}} Thus, the resulting convolution can be effectively calculated by first performing the convolution operation on all of the rows ofx(n1,n2){\displaystyle x(n_{1},n_{2})}, and then on all of its columns. This approach can be further optimized by taking into account how memory is accessed within a computer processor. A processor will load in the signal data needed for the given operation. 
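Setting the memory-access refinement aside for a moment, a minimal sketch of the row–column decomposition itself might read as follows (Python with NumPy; np.convolve computes the full one-dimensional linear convolution, and the filter factors h1 and h2 are arbitrary):

```python
import numpy as np

def conv2d_row_column(x, h1, h2):
    """2-D convolution with a separable filter h(n1, n2) = h1(n1) h2(n2),
    computed as 1-D passes over the rows and then the columns."""
    rows = np.array([np.convolve(r, h2) for r in x])         # pass along n2 (rows)
    return np.array([np.convolve(c, h1) for c in rows.T]).T  # pass along n1 (columns)

x = np.random.rand(4, 5)
h1 = np.array([1.0, 2.0, 1.0])    # vertical factor
h2 = np.array([1.0, 0.0, -1.0])   # horizontal factor

y = conv2d_row_column(x, h1, h2)
print(y.shape)   # (4 + 3 - 1, 5 + 3 - 1) = (6, 7), the full linear-convolution support
```

Transposing the intermediate result, as described next, lets the second pass also run over contiguous rows.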
For modern processors, data will be loaded from memory into the processor's cache, which has faster access times than memory. The cache itself is partitioned into lines. When a cache line is loaded from memory, multiple data operands are loaded at once. Consider the optimized case where a row of signal data can fit entirely within the processor's cache. This particular processor would be able to access the data row-wise efficiently, but not column-wise, since different data operands in the same column would lie on different cache lines.[5] In order to take advantage of the way in which memory is accessed, it is more efficient to transpose the data set and then access it row-wise rather than attempt to access it column-wise. The algorithm then becomes: convolve the rows of the input with the first one-dimensional filter, transpose the intermediate result, convolve its rows with the second one-dimensional filter, and transpose back. Examine the case where an image of size X×Y{\displaystyle X\times Y} is being passed through a separable filter of size J×K{\displaystyle J\times K}. The image itself is not separable. If the result is calculated using the direct convolution approach without exploiting the separability of the filter, this will require approximately XYJK{\displaystyle XYJK} multiplications and additions. If the separability of the filter is taken into account, the filtering can be performed in two steps. The first step will have XYJ{\displaystyle XYJ} multiplications and additions and the second step will have XYK{\displaystyle XYK}, resulting in a total of XYJ+XYK{\displaystyle XYJ+XYK} or XY(J+K){\displaystyle XY(J+K)} multiplications and additions.[6] For example, filtering a 1000 × 1000 image with a 9 × 9 separable filter takes roughly 81 million multiply–adds directly, but only about 18 million when performed as two one-dimensional passes. The premise behind the circular convolution approach on multidimensional signals is to develop a relation between the Convolution theorem and the Discrete Fourier transform (DFT) that can be used to calculate the convolution between two finite-extent, discrete-valued signals.[7] For one-dimensional signals, the Convolution Theorem states that the Fourier transform of the convolution between two signals is equal to the product of the Fourier Transforms of those two signals. Thus, convolution in the time domain is equal to multiplication in the frequency domain.
Mathematically, this principle is expressed via the following:y(n)=h(n)∗x(n)⟷Y(ω)=H(ω)X(ω){\displaystyle y(n)=h(n)*x(n)\longleftrightarrow Y(\omega )=H(\omega )X(\omega )}This principle is directly extendable to dealing with signals of multiple dimensions.y(n1,n2,...,nM)=h(n1,n2,...,nM)∗⋯M∗x(n1,n2,...,nM)⟷Y(ω1,ω2,...,ωM)=H(ω1,ω2,...,ωM)X(ω1,ω2,...,ωM){\displaystyle y(n_{1},n_{2},...,n_{M})=h(n_{1},n_{2},...,n_{M})*{\overset {M}{\cdots }}*x(n_{1},n_{2},...,n_{M})\longleftrightarrow Y(\omega _{1},\omega _{2},...,\omega _{M})=H(\omega _{1},\omega _{2},...,\omega _{M})X(\omega _{1},\omega _{2},...,\omega _{M})}This property is readily extended to the usage with theDiscrete Fourier transform(DFT) as follows (note that linear convolution is replaced with circular convolution where⊗{\displaystyle \otimes }is used to denote the circular convolution operation of sizeN{\displaystyle N}): y(n)=h(n)⊗x(n)⟷Y(k)=H(k)X(k){\displaystyle y(n)=h(n)\otimes x(n)\longleftrightarrow Y(k)=H(k)X(k)} When dealing with signals of multiple dimensions:y(n1,n2,...,nM)=h(n1,n2,...,nM)⊗⋯M⊗x(n1,n2,...,nM)⟷Y(k1,k2,...,kM)=H(k1,k2,...,kM)X(k1,k2,...,kM){\displaystyle y(n_{1},n_{2},...,n_{M})=h(n_{1},n_{2},...,n_{M})\otimes {\overset {M}{\cdots }}\otimes x(n_{1},n_{2},...,n_{M})\longleftrightarrow Y(k_{1},k_{2},...,k_{M})=H(k_{1},k_{2},...,k_{M})X(k_{1},k_{2},...,k_{M})}The circular convolutions here will be of sizeN1,N2,...,NM{\displaystyle N_{1},N_{2},...,N_{M}}. The motivation behind using the circular convolution approach is that it is based on the DFT. The premise behind circular convolution is to take the DFTs of the input signals, multiply them together, and then take the inverse DFT. Care must be taken such that a large enough DFT is used such that aliasing does not occur. The DFT is numerically computable when dealing with signals of finite-extent. One advantage this approach has is that since it requires taking the DFT and inverse DFT, it is possible to utilize efficient algorithms such as theFast Fourier transform(FFT). Circular convolution can also be computed in the time/spatial domain and not only in the frequency domain. Consider the following case where two finite-extent signalsxandhare taken. For both signals, there is a corresponding DFT as follows: x(n1,n2)⟷X(k1,k2){\displaystyle x(n_{1},n_{2})\longleftrightarrow X(k_{1},k_{2})}andh(n1,n2)⟷H(k1,k2){\displaystyle h(n_{1},n_{2})\longleftrightarrow H(k_{1},k_{2})} The region of support ofx(n1,n2){\displaystyle x(n_{1},n_{2})}is0≤n1≤P1−1{\displaystyle 0\leq n_{1}\leq P_{1}-1}and0≤n2≤P2−1{\displaystyle 0\leq n_{2}\leq P_{2}-1}and the region of support ofh(n1,n2){\displaystyle h(n_{1},n_{2})}is0≤n1≤Q1−1{\displaystyle 0\leq n_{1}\leq Q_{1}-1}and0≤n2≤Q2−1{\displaystyle 0\leq n_{2}\leq Q_{2}-1}. 
The linear convolution of these two signals would be given as:ylinear(n1,n2)=∑m1∑m2h(m1,m2)x(n1−m1,n2−m2){\displaystyle y_{linear}(n_{1},n_{2})=\sum _{m_{1}}\sum _{m_{2}}h(m_{1},m_{2})x(n_{1}-m_{1},n_{2}-m_{2})}Given the regions of support ofx(n1,n2){\displaystyle x(n_{1},n_{2})}andh(n1,n2){\displaystyle h(n_{1},n_{2})}, the region of support ofylinear(n1,n2){\displaystyle y_{linear}(n_{1},n_{2})}will then be given as the following: 0≤n1≤P1+Q1−1{\displaystyle 0\leq n_{1}\leq P_{1}+Q_{1}-1}0≤n2≤P2+Q2−1{\displaystyle 0\leq n_{2}\leq P_{2}+Q_{2}-1}Based on the regions of support of the two signals, a DFT of sizeN1×N2{\displaystyle N_{1}\times N_{2}}must be used whereN1≥max(P1,Q1){\displaystyle N_{1}\geq \max(P_{1},Q_{1})}andN2≥max(P2,Q2){\displaystyle N_{2}\geq \max(P_{2},Q_{2})}since the same size DFT must be used on both signals. In the event where a DFT size larger than the extent of a signal is needed, the signal is zero-padded until it reaches the required length. After multiplying the DFTs and taking the inverse DFT on the result, the resulting circular convolution is then given by: ycircular(n1,n2)=∑r1∑r2[∑m1=0Q1−1∑m2=0Q2−1h(m1,m2)x(n1−m1−r1N1,n2−m2−r2N2)]{\displaystyle y_{circular}(n_{1},n_{2})=\sum _{r_{1}}\sum _{r_{2}}{\Bigg [}\sum _{m_{1}=0}^{Q_{1}-1}\sum _{m_{2}=0}^{Q_{2}-1}h(m_{1},m_{2})x(n_{1}-m_{1}-r_{1}N_{1},n_{2}-m_{2}-r_{2}N_{2}){\Bigg ]}}for(n1,n2)∈RN1N2{\displaystyle (n_{1},n_{2})\in R_{N_{1}N_{2}}} RN1N2≜{(n1,n2):0≤n1≤N1−1,0≤n2≤N2−1}{\displaystyle R_{N_{1}N_{2}}\triangleq \{(n_{1},n_{2}):0\leq n_{1}\leq N_{1}-1,0\leq n_{2}\leq N_{2}-1\}} The result will be thatycircular(n1,n2){\displaystyle y_{circular}(n_{1},n_{2})}will be a spatially aliased version of the linear convolution resultylinear(n1,n2){\displaystyle y_{linear}(n_{1},n_{2})}. This can be expressed as the following: ycircular(n1,n2)=∑r1∑r2ylinear(n1−r1N1,n2−r2N2)for(n1,n2)∈RN1N2{\displaystyle y_{circular}(n_{1},n_{2})=\sum _{r_{1}}\sum _{r_{2}}y_{linear}(n_{1}-r_{1}N_{1},n_{2}-r_{2}N_{2}){\mathrm {\,\,\,for\,\,\,} }(n_{1},n_{2})\in R_{N_{1}N_{2}}} Then, in order to avoid aliasing between the spatially aliased replicas,N1{\displaystyle N_{1}}andN2{\displaystyle N_{2}}must be chosen to satisfy the following conditions: N1≥P1+Q1−1{\displaystyle N_{1}\geq P_{1}+Q_{1}-1} N2≥P2+Q2−1{\displaystyle N_{2}\geq P_{2}+Q_{2}-1} If these conditions are satisfied, then the results of the circular convolution will equal that of the linear convolution (taking the main period of the circular convolution as the region of support). That is: ycircular(n1,n2)=ylinear(n1,n2){\displaystyle y_{circular}(n_{1},n_{2})=y_{linear}(n_{1},n_{2})}for(n1,n2)∈RN1N2{\displaystyle (n_{1},n_{2})\in R_{N_{1}N_{2}}} The Convolution theorem and circular convolution can thus be used in the following manner to achieve a result that is equal to performing the linear convolution:[8] Another method to perform multidimensional convolution is theoverlap and addapproach. This method helps reduce the computational complexity often associated with multidimensional convolutions due to the vast amounts of data inherent in modern-day digital systems.[9]For sake of brevity, the two-dimensional case is used as an example, but the same concepts can be extended to multiple dimensions. 
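Before turning to the block-based methods, the equivalence under the padding conditions above can be demonstrated numerically. The sketch below (Python with NumPy; the array sizes are arbitrary) zero-pads both signals to the minimum aliasing-free DFT size, multiplies their 2-D DFTs, and recovers the linear convolution from the inverse DFT:

```python
import numpy as np

def conv2d_via_dft(x, h):
    """Linear 2-D convolution computed as a circular convolution whose DFT
    size satisfies N_i >= P_i + Q_i - 1, so no spatial aliasing occurs."""
    P1, P2 = x.shape
    Q1, Q2 = h.shape
    N1, N2 = P1 + Q1 - 1, P2 + Q2 - 1          # minimum aliasing-free DFT size
    X = np.fft.fft2(x, s=(N1, N2))             # zero-padded 2-D DFTs
    H = np.fft.fft2(h, s=(N1, N2))
    return np.real(np.fft.ifft2(X * H))        # main period equals the linear convolution

x = np.random.rand(5, 6)
h = np.random.rand(3, 2)
y = conv2d_via_dft(x, h)
print(y.shape)   # (7, 7)
```

With any smaller DFT size the wrapped replicas overlap, and the main period no longer reproduces the linear convolution.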
Consider a two-dimensional convolution using a direct computation: y(n1,n2)=∑k1=−∞∞∑k2=−∞∞x(n1−k1,n2−k2)h(k1,k2){\displaystyle y(n_{1},n_{2})=\sum _{k_{1}=-\infty }^{\infty }\sum _{k_{2}=-\infty }^{\infty }x(n_{1}-k_{1},n_{2}-k_{2})h(k_{1},k_{2})} Assuming that the output signaly(n1,n2){\displaystyle y(n_{1},n_{2})}has N nonzero coefficients, and the impulse response has M nonzero samples, this direct computation would need MN multiplies and MN - 1 adds in order to compute. Using an FFT instead, the frequency response of the filter and the Fourier transform of the input would have to be stored in memory.[10]Massive amounts of computations and excessive use of memory storage space pose a problematic issue as more dimensions are added. This is where the overlap and add convolution method comes in. Instead of performing convolution on the blocks of information in their entirety, the information can be broken up into smaller blocks of dimensionsL1{\displaystyle L_{1}}xL2{\displaystyle L_{2}}resulting in smaller FFTs, less computational complexity, and less storage needed. This can be expressed mathematically as follows: x(n1,n2)=∑i=1P1∑j=1P2xij(n1,n2){\displaystyle x(n_{1},n_{2})=\sum _{i=1}^{P_{1}}\sum _{j=1}^{P_{2}}x_{ij}(n_{1},n_{2})} wherex(n1,n2){\displaystyle x(n_{1},n_{2})}represents theN1{\displaystyle N_{1}}xN2{\displaystyle N_{2}}input signal, which is a summation ofP1P2{\displaystyle P_{1}P_{2}}block segments, withP1=N1/L1{\displaystyle P_{1}=N_{1}/L_{1}}andP2=N2/L2{\displaystyle P_{2}=N_{2}/L_{2}}. To produce the output signal, a two-dimensional convolution is performed: y(n1,n2)=x(n1,n2)∗∗h(n1,n2){\displaystyle y(n_{1},n_{2})=x(n_{1},n_{2})**h(n_{1},n_{2})} Substituting in forx(n1,n2){\displaystyle x(n_{1},n_{2})}results in the following: y(n1,n2)=∑i=1P1∑j=1P2xij(n1,n2)∗∗h(n1,n2){\displaystyle y(n_{1},n_{2})=\sum _{i=1}^{P_{1}}\sum _{j=1}^{P_{2}}x_{ij}(n_{1},n_{2})**h(n_{1},n_{2})} This convolution adds more complexity than doing a direct convolution; however, since it is integrated with an FFT fast convolution, overlap-add performs faster and is a more memory-efficient method, making it practical for large sets of multidimensional data. Leth(n1,n2){\displaystyle h(n_{1},n_{2})}be of sizeM1×M2{\displaystyle M_{1}\times M_{2}}: In order to visualize the overlap-add method more clearly, the following illustrations examine the method graphically. Assume that the inputx(n1,n2){\displaystyle x(n_{1},n_{2})}has a square region support of length N in both vertical and horizontal directions as shown in the figure below. It is then broken up into four smaller segments in such a way that it is now composed of four smaller squares. Each block of the aggregate signal has dimensions(N/2){\displaystyle (N/2)}×{\displaystyle \times }(N/2){\displaystyle (N/2)}. Then, each component is convolved with the impulse response of the filter. Note that an advantage for an implementation such as this can be visualized here since each of these convolutions can be parallelized on a computer, as long as the computer has sufficient memory and resources to store and compute simultaneously. In the figure below, the first graph on the left represents the convolution corresponding to the component of the inputx0,0{\displaystyle x_{0,0}}with the corresponding impulse responseh(n1,n2){\displaystyle h(n_{1},n_{2})}. To the right of that, the inputx1,0{\displaystyle x_{1,0}}is then convolved with the impulse responseh(n1,n2){\displaystyle h(n_{1},n_{2})}. 
The same process is done for the other two inputs respectively, and they are accumulated together in order to form the convolution. This is depicted to the left. Assume that the filter impulse responseh(n1,n2){\displaystyle h(n_{1},n_{2})}has a region of support of(N/8){\displaystyle (N/8)}in both dimensions. This entails that each convolution convolves signals with dimensions(N/2){\displaystyle (N/2)}×{\displaystyle \times }(N/8){\displaystyle (N/8)}in bothn1{\displaystyle n_{1}}andn2{\displaystyle n_{2}}directions, which leads to overlap (highlighted in blue) since the length of each individual convolution is equivalent to: (N/2){\displaystyle (N/2)}+{\displaystyle +}(N/8){\displaystyle (N/8)}−{\displaystyle -}1{\displaystyle 1}=(5/8)N−1{\displaystyle (5/8)N-1} in both directions. The lighter blue portion correlates to the overlap between two adjacent convolutions, whereas the darker blue portion correlates to overlap between all four convolutions. All of these overlap portions are added together in addition to the convolutions in order to form the combined convolutiony(n1,n2){\displaystyle y(n_{1},n_{2})}.[12] The overlap and save method, just like the overlap and add method, is also used to reduce the computational complexity associated with discrete-time convolutions. This method, coupled with the FFT, allows for massive amounts of data to be filtered through a digital system while minimizing the necessary memory space used for computations on massive arrays of data. The overlap and save method is very similar to the overlap and add methods with a few notable exceptions. The overlap-add method involves a linear convolution of discrete-time signals, whereas the overlap-save method involves the principle of circular convolution. In addition, the overlap and save method only uses a one-time zero padding of the impulse response, while the overlap-add method involves a zero-padding for every convolution on each input component. Instead of using zero padding to prevent time-domain aliasing like its overlap-add counterpart, overlap-save simply discards all points of aliasing, and saves the previous data in one block to be copied into the convolution for the next block. In one dimension, the performance and storage metric differences between the two methods is minimal. However, in the multidimensional convolution case, the overlap-save method is preferred over the overlap-add method in terms of speed and storage abilities.[13]Just as in the overlap and add case, the procedure invokes the two-dimensional case but can easily be extended to all multidimensional procedures. Leth(n1,n2){\displaystyle h(n_{1},n_{2})}be of sizeM1×M2{\displaystyle M_{1}\times M_{2}}: Similar to row-column decomposition, the helix transform computes the multidimensional convolution by incorporating one-dimensional convolutional properties and operators. Instead of using the separability of signals, however, it maps the Cartesian coordinate space to a helical coordinate space allowing for a mapping from a multidimensional space to a one-dimensional space. To understand the helix transform, it is useful to first understand how a multidimensional convolution can be broken down into a one-dimensional convolution. Assume that the two signals to be convolved areXM×N{\displaystyle X_{M\times N}}andYK×L{\displaystyle Y_{K\times L}}, which results in an outputZ(M−K+1)×(N−L+1){\displaystyle Z_{(M-K+1)\times (N-L+1)}}. 
This is expressed as follows: Z(i,j)=∑m=0M−1∑n=0N−1X(m,n)Y(i−m,j−n){\displaystyle Z(i,j)=\sum _{m=0}^{M-1}\sum _{n=0}^{N-1}X(m,n)Y(i-m,j-n)} Next, two matrices are created that zero pad each input in both dimensions such that each input has equivalent dimensions, i.e. X′=[X000]{\displaystyle \mathbf {X'} ={\begin{bmatrix}X&0\\0&0\\\end{bmatrix}}}andY′=[Y000]{\displaystyle \mathbf {Y'} ={\begin{bmatrix}Y&0\\0&0\\\end{bmatrix}}} where each of the input matrices are now of dimensions(M+K−1){\displaystyle (M+K-1)}×{\displaystyle \times }(N+L−1){\displaystyle (N+L-1)}. It is then possible to implement column-wise lexicographic ordering in order to convert the modified matrices into vectors,X″{\displaystyle X''}andY″{\displaystyle Y''}. In order to minimize the number of unimportant samples in each vector, each vector is truncated after the last sample in the original matricesX{\displaystyle X}andY{\displaystyle Y}respectively. Given this, the length of vectorX″{\displaystyle X''}andY″{\displaystyle Y''}are given by: lX″={\displaystyle l_{X''}=}(M+K−1){\displaystyle (M+K-1)}×{\displaystyle \times }(N−1){\displaystyle (N-1)}+M{\displaystyle M} lY″={\displaystyle l_{Y''}=}(M+K−1){\displaystyle (M+K-1)}×{\displaystyle \times }(L−1){\displaystyle (L-1)}+K{\displaystyle K} The length of the convolution of these two vectors,Z″{\displaystyle Z''}, can be derived and shown to be: lZ″={\displaystyle l_{Z''}=}lY″+{\displaystyle l_{Y''}+}lX″{\displaystyle l_{X''}}=(M+K−1){\displaystyle =(M+K-1)}×{\displaystyle \times }(N+L−1){\displaystyle (N+L-1)} This vector length is equivalent to the dimensions of the original matrix outputZ{\displaystyle Z}, making converting back to a matrix a direct transformation. Thus, the vector,Z″{\displaystyle Z''}, is converted back to matrix form, which produces the output of the two-dimensional discrete convolution.[14] When working on a two-dimensional Cartesian mesh, a Fourier transform along either axes will result in the two-dimensional plane becoming a cylinder as the end of each column or row attaches to its respective top forming a cylinder. Filtering on a helix behaves in a similar fashion, except in this case, the bottom of each column attaches to the top of the next column, resulting in a helical mesh. This is illustrated below. The darkened tiles represent the filter coefficients. If this helical structure is then sliced and unwound into a one-dimensional strip, the same filter coefficients on the 2-d Cartesian plane will match up with the same input data, resulting in an equivalent filtering scheme. This ensures that a two-dimensional convolution will be able to be performed by a one-dimensional convolution operator as the 2D filter has been unwound to a 1D filter with gaps of zeroes separating the filter coefficients. Assuming that some-low pass two-dimensional filter was used, such as: Then, once the two-dimensional space was converted into a helix, the one-dimensional filter would look as follows: h(n)=−1,0,...,0,−1,4,−1,0,...,0,−1,0,...{\displaystyle h(n)=-1,0,...,0,-1,4,-1,0,...,0,-1,0,...} Notice in the one-dimensional filter that there are no leading zeroes as illustrated in the one-dimensional filtering strip after being unwound. The entire one-dimensional strip could have been convolved with; however, it is less computationally expensive to simply ignore the leading zeroes. 
In addition, none of these backside zero values will need to be stored in memory, preserving precious memory resources.[15] Helix transformations to implement recursive filters via convolution are used in various areas of signal processing. Although frequency domain Fourier analysis is effective when systems are stationary, with constant coefficients and periodically-sampled data, it becomes more difficult in unstable systems. The helix transform enables three-dimensional post-stack migration processes that can process data for three-dimensional variations in velocity.[15]In addition, it can be applied to assist with the problem of implicit three-dimensional wavefield extrapolation.[16]Other applications include helpful algorithms in seismic data regularization, prediction error filters, and noise attenuation in geophysical digital systems.[14] One application of multidimensional convolution that is used within signal and image processing is Gaussian convolution. This refers to convolving an input signal with the Gaussian distribution function. The Gaussian distribution sampled at discrete values in one dimension is given by the following (assumingμ=0{\displaystyle \mu =0}):G(n)=12πσ2e−n22σ2{\displaystyle G(n)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {n^{2}}{2\sigma ^{2}}}}}This is readily extended to a signal ofMdimensions (assumingσ{\displaystyle \sigma }stays constant for all dimensions andμ1=μ2=...=μM=0{\displaystyle \mu _{1}=\mu _{2}=...=\mu _{M}=0}):G(n1,n2,...,nM)=1(2π)M/2σMe−(n12+n22+...+nM2)2σ2{\displaystyle G(n_{1},n_{2},...,n_{M})={\frac {1}{(2\pi )^{M/2}\sigma ^{M}}}e^{-{\frac {({n_{1}}^{2}+{n_{2}}^{2}+...+{n_{M}}^{2})}{2\sigma ^{2}}}}}One important property to recognize is that theMdimensional signal is separable such that:G(n1,n2,...,nM)=G(n1)G(n2)...G(nM){\displaystyle G(n_{1},n_{2},...,n_{M})=G(n_{1})G(n_{2})...G(n_{M})}Then, Gaussian convolution with discrete-valued signals can be expressed as the following: y(n)=x(n)∗G(n){\displaystyle y(n)=x(n)*G(n)} y(n1,n2,...,nM)=x(n1,n2,...,nM)∗...∗G(n1,n2,...,nM){\displaystyle y(n_{1},n_{2},...,n_{M})=x(n_{1},n_{2},...,n_{M})*...*G(n_{1},n_{2},...,n_{M})} Gaussian convolution can be effectively approximated via implementation of aFinite impulse response(FIR) filter. The filter will be designed with truncated versions of the Gaussian. For a two-dimensional filter, the transfer function of such a filter would be defined as the following:[17] H(z1,z2)=1s(r1,r2)∑n1=−r1r1∑n2=−r2r2G(n1,n2)z1−n1z2−n2{\displaystyle H(z_{1},z_{2})={\frac {1}{s(r_{1},r_{2})}}\sum _{n_{1}=-r_{1}}^{r_{1}}\sum _{n_{2}=-r_{2}}^{r_{2}}G(n_{1},n_{2}){z_{1}}^{-n_{1}}{z_{2}}^{-n_{2}}} where s(r1,r2)=∑n1=−r1r1∑n2=−r2r2G(n1,n2){\displaystyle s(r_{1},r_{2})=\sum _{n_{1}=-r_{1}}^{r_{1}}\sum _{n_{2}=-r_{2}}^{r_{2}}G(n_{1},n_{2})} Choosing lower values forr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}will result in performing less computations, but will yield a less accurate approximation while choosing higher values will yield a more accurate approximation, but will require a greater number of computations. Another method for approximating Gaussian convolution is via recursive passes through a box filter. 
For approximating one-dimensional convolution, this filter is defined as the following:[17] H(z)=12r+1zr−z−r−11−z−1{\displaystyle H(z)={\frac {1}{2r+1}}{\frac {z^{r}-z^{-r-1}}{1-z^{-1}}}} Typically, 3, 4, or 5 recursive passes are performed in order to obtain an accurate approximation.[17] A suggested method for computing r is then given as the following:[18] σ2=112K((2r+1)2−1){\displaystyle \sigma ^{2}={\frac {1}{12}}K((2r+1)^{2}-1)} where K is the number of recursive passes through the filter. Then, since the Gaussian distribution is separable across different dimensions, it follows that recursive passes through one-dimensional filters (isolating each dimension separately) will thus yield an approximation of the multidimensional Gaussian convolution. That is, M-dimensional Gaussian convolution could be approximated via recursive passes through the following one-dimensional filters: H(z1)=12r1+1z1r1−z1−r1−11−z1−1{\displaystyle H(z_{1})={\frac {1}{2r_{1}+1}}{\frac {{z_{1}}^{r_{1}}-{z_{1}}^{-r_{1}-1}}{1-{z_{1}}^{-1}}}} H(z2)=12r2+1z2r2−z2−r2−11−z2−1{\displaystyle H(z_{2})={\frac {1}{2r_{2}+1}}{\frac {{z_{2}}^{r_{2}}-{z_{2}}^{-r_{2}-1}}{1-{z_{2}}^{-1}}}} ⋮{\displaystyle \vdots } H(zM)=12rM+1zMrM−zM−rM−11−zM−1{\displaystyle H(z_{M})={\frac {1}{2r_{M}+1}}{\frac {{z_{M}}^{r_{M}}-{z_{M}}^{-r_{M}-1}}{1-{z_{M}}^{-1}}}} Gaussian convolutions are used extensively in signal and image processing. For example, image-blurring can be accomplished with Gaussian convolution where the σ{\displaystyle \sigma } parameter will control the strength of the blurring. Higher values would thus correspond to a more blurry end result.[19] It is also commonly used in computer vision applications such as Scale-invariant feature transform (SIFT) feature detection.[20]
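A one-dimensional sketch of the repeated box-filter approximation is given below (Python with NumPy; the number of passes K and the test impulse are illustrative, and the radius is taken from the variance relation above):

```python
import numpy as np

def box_radius(sigma, K=3):
    """Radius r from the variance relation sigma^2 = (K/12)((2r+1)^2 - 1)."""
    return int(round((np.sqrt(12.0 * sigma**2 / K + 1.0) - 1.0) / 2.0))

def gaussian_approx_1d(x, sigma, K=3):
    """Approximate Gaussian smoothing of a 1-D signal by K passes of a box filter."""
    r = box_radius(sigma, K)
    box = np.full(2 * r + 1, 1.0 / (2 * r + 1))   # moving-average (box) kernel
    y = x
    for _ in range(K):
        y = np.convolve(y, box, mode="same")      # repeated box passes approach a Gaussian
    return y

x = np.zeros(101)
x[50] = 1.0                                       # unit impulse
y = gaussian_approx_1d(x, sigma=3.0)              # approximate Gaussian kernel, sigma ~ 3
print(y[45:56].round(4))
```

Because the Gaussian is separable, applying the same routine along each axis of a multidimensional array in turn yields an approximation of the multidimensional Gaussian convolution.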
https://en.wikipedia.org/wiki/Multidimensional_discrete_convolution
The Titchmarsh convolution theorem describes the properties of the support of the convolution of two functions. It was proven by Edward Charles Titchmarsh in 1926.[1] If φ(t){\textstyle \varphi (t)\,} and ψ(t){\textstyle \psi (t)} are integrable functions, such that φ∗ψ(x)=∫0xφ(t)ψ(x−t)dt=0{\displaystyle \varphi *\psi (x)=\int _{0}^{x}\varphi (t)\psi (x-t)\,dt=0} almost everywhere in the interval 0<x<κ{\displaystyle 0<x<\kappa \,}, then there exist λ≥0{\displaystyle \lambda \geq 0} and μ≥0{\displaystyle \mu \geq 0} satisfying λ+μ≥κ{\displaystyle \lambda +\mu \geq \kappa } such that φ(t)=0{\displaystyle \varphi (t)=0\,} almost everywhere in 0<t<λ{\displaystyle 0<t<\lambda } and ψ(t)=0{\displaystyle \psi (t)=0\,} almost everywhere in 0<t<μ.{\displaystyle 0<t<\mu .} As a corollary, if the integral above is 0 for all x>0,{\textstyle x>0,} then either φ{\textstyle \varphi \,} or ψ{\textstyle \psi } is almost everywhere 0 in the interval [0,+∞).{\textstyle [0,+\infty ).} Thus the convolution of two functions on [0,+∞){\textstyle [0,+\infty )} cannot be identically zero unless at least one of the two functions is identically zero. As another corollary, if φ∗ψ(x)=0{\displaystyle \varphi *\psi (x)=0} for all x∈[0,κ]{\displaystyle x\in [0,\kappa ]} and one of the functions φ{\displaystyle \varphi } or ψ{\displaystyle \psi } is almost everywhere not null in this interval, then the other function must be null almost everywhere in [0,κ]{\displaystyle [0,\kappa ]}. The theorem can be restated in the following form: if φ,ψ∈L1(R){\displaystyle \varphi ,\psi \in L^{1}(\mathbb {R} )}, then inf(supp⁡φ∗ψ)=inf(supp⁡φ)+inf(supp⁡ψ){\displaystyle \inf(\operatorname {supp} \varphi \ast \psi )=\inf(\operatorname {supp} \varphi )+\inf(\operatorname {supp} \psi )} and sup(supp⁡φ∗ψ)=sup(supp⁡φ)+sup(supp⁡ψ){\displaystyle \sup(\operatorname {supp} \varphi \ast \psi )=\sup(\operatorname {supp} \varphi )+\sup(\operatorname {supp} \psi )}, whenever the right-hand sides are finite. Above, supp{\displaystyle \operatorname {supp} } denotes the support of a function f (i.e., the closure of the complement of f−1(0)) and inf{\displaystyle \inf } and sup{\displaystyle \sup } denote the infimum and supremum. This theorem essentially states that the well-known inclusion supp⁡φ∗ψ⊂supp⁡φ+supp⁡ψ{\displaystyle \operatorname {supp} \varphi \ast \psi \subset \operatorname {supp} \varphi +\operatorname {supp} \psi } is sharp at the boundary. The higher-dimensional generalization in terms of the convex hull of the supports was proven by Jacques-Louis Lions in 1951:[2] if φ,ψ∈E′(Rn){\displaystyle \varphi ,\psi \in {\mathcal {E}}'(\mathbb {R} ^{n})}, then c.h.⁡(supp⁡φ∗ψ)=c.h.⁡(supp⁡φ)+c.h.⁡(supp⁡ψ){\displaystyle \operatorname {c.h.} (\operatorname {supp} \varphi \ast \psi )=\operatorname {c.h.} (\operatorname {supp} \varphi )+\operatorname {c.h.} (\operatorname {supp} \psi )}. Above, c.h.{\displaystyle \operatorname {c.h.} } denotes the convex hull of the set and E′(Rn){\displaystyle {\mathcal {E}}'(\mathbb {R} ^{n})} denotes the space of distributions with compact support. The original proof by Titchmarsh uses complex-variable techniques, and is based on the Phragmén–Lindelöf principle, Jensen's inequality, Carleman's theorem, and Valiron's theorem. The theorem has since been proven several more times, typically using either real-variable[3][4][5] or complex-variable[6][7][8] methods. Gian-Carlo Rota has stated that no proof yet addresses the theorem's underlying combinatorial structure, which he believes is necessary for complete understanding.[9]
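A discrete numerical illustration of the support statement is sketched below (Python with NumPy; the particular functions, supports and step size are arbitrary choices): two sampled functions vanishing below t = λ and t = μ have a discrete convolution that first becomes nonzero near t = λ + μ.

```python
import numpy as np

# Discrete illustration of inf supp(phi * psi) = inf supp(phi) + inf supp(psi).
dt = 0.01
t = np.arange(0, 10, dt)

lam, mu = 1.5, 2.0
phi = np.where(t >= lam, np.exp(-(t - lam)), 0.0)   # support of phi starts at lam
psi = np.where(t >= mu, np.cos(t - mu), 0.0)        # support of psi starts at mu

conv = np.convolve(phi, psi) * dt                   # Riemann-sum approximation of (phi * psi)(t)
t_conv = np.arange(len(conv)) * dt

first_nonzero = t_conv[np.abs(conv) > 1e-8][0]      # first point where the convolution is nonzero
print(first_nonzero, lam + mu)                      # ~3.5 and 3.5
```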
https://en.wikipedia.org/wiki/Titchmarsh_convolution_theorem
Inlinear algebra, aToeplitz matrixordiagonal-constant matrix, named afterOtto Toeplitz, is amatrixin which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix: Anyn×n{\displaystyle n\times n}matrixA{\displaystyle A}of the form is aToeplitz matrix. If thei,j{\displaystyle i,j}element ofA{\displaystyle A}is denotedAi,j{\displaystyle A_{i,j}}then we have A Toeplitz matrix is not necessarilysquare. A matrix equation of the form is called aToeplitz systemifA{\displaystyle A}is a Toeplitz matrix. IfA{\displaystyle A}is ann×n{\displaystyle n\times n}Toeplitz matrix, then the system has at most only2n−1{\displaystyle 2n-1}unique values, rather thann2{\displaystyle n^{2}}. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case. Toeplitz systems can be solved by algorithms such as theSchur algorithmor theLevinson algorithminO(n2){\displaystyle O(n^{2})}time.[1][2]Variants of the latter have been shown to be weakly stable (i.e. they exhibitnumerical stabilityforwell-conditionedlinear systems).[3]The algorithms can also be used to find thedeterminantof a Toeplitz matrix inO(n2){\displaystyle O(n^{2})}time.[4] A Toeplitz matrix can also be decomposed (i.e. factored) inO(n2){\displaystyle O(n^{2})}time.[5]The Bareiss algorithm for anLU decompositionis stable.[6]An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant. Theconvolutionoperation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution ofh{\displaystyle h}andx{\displaystyle x}can be formulated as: This approach can be extended to computeautocorrelation,cross-correlation,moving averageetc. A bi-infinite Toeplitz matrix (i.e. entries indexed byZ×Z{\displaystyle \mathbb {Z} \times \mathbb {Z} })A{\displaystyle A}induces alinear operatoronℓ2{\displaystyle \ell ^{2}}. The induced operator isboundedif and only if the coefficients of the Toeplitz matrixA{\displaystyle A}are the Fourier coefficients of someessentially boundedfunctionf{\displaystyle f}. In such cases,f{\displaystyle f}is called thesymbolof the Toeplitz matrixA{\displaystyle A}, and the spectral norm of the Toeplitz matrixA{\displaystyle A}coincides with theL∞{\displaystyle L^{\infty }}norm of its symbol. Theproofcan be found as Theorem 1.1 of Böttcher and Grudsky.[8]
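As an illustration of the convolution-as-matrix-multiplication construction, the sketch below (Python with NumPy; the sequences are arbitrary) builds the matrix with T[i, j] = h[i − j] and checks the product against a direct convolution:

```python
import numpy as np

def convolution_matrix(h, n):
    """(len(h)+n-1) x n matrix T with T[i, j] = h[i-j] (zero otherwise),
    so that T @ x equals the full linear convolution of h and x."""
    m = len(h)
    T = np.zeros((m + n - 1, n))
    for i in range(m + n - 1):
        for j in range(n):
            if 0 <= i - j < m:
                T[i, j] = h[i - j]
    return T

h = np.array([1.0, -2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])

T = convolution_matrix(h, len(x))
print(T @ x)              # matrix-vector product ...
print(np.convolve(h, x))  # ... matches the direct convolution
```

Because T[i, j] depends only on i − j, each descending diagonal of T is constant, so T is a Toeplitz matrix; the same construction underlies the autocorrelation, cross-correlation and moving-average formulations mentioned above.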
https://en.wikipedia.org/wiki/Toeplitz_matrix
Thereceptive field, orsensory space, is a delimited medium where somephysiological stimulican evoke asensoryneuronal response in specificorganisms.[1] Complexity of the receptive field ranges from the unidimensionalchemical structureofodorantsto the multidimensionalspacetimeof humanvisual field, through the bidimensionalskinsurface, being a receptive field fortouchperception. Receptive fields can positively or negatively alter themembrane potentialwith or without affecting the rate ofaction potentials.[1] A sensory space can be dependent of an animal's location. For a particular sound wave traveling in an appropriatetransmission medium, by means ofsound localization, an auditory space would amount to areference systemthat continuously shifts as the animal moves (taking into consideration the space inside the ears as well). Conversely, receptive fields can be largely independent of the animal's location, as in the case ofplace cells. A sensory space can also map into a particular region on an animal's body. For example, it could be a hair in thecochleaor a piece of skin, retina, or tongue or other part of an animal's body. Receptive fields have been identified for neurons of theauditory system, thesomatosensory system, and thevisual system. The termreceptive fieldwas first used bySherringtonin 1906 to describe the area of skin from which ascratch reflexcould be elicited in a dog.[2]In 1938,Hartlinestarted to apply the term to single neurons, this time from thefrogretina.[1] This concept of receptive fields can be extended further up the nervous system. If many sensory receptors all formsynapseswith a singlecellfurther up, they collectively form the receptive field of that cell. For example, the receptive field of aganglion cellin the retina of the eye is composed of input from all of thephotoreceptorswhich synapse with it, and a group of ganglion cells in turn forms the receptive field for a cell in the brain. This process is called convergence. Receptive fields have been used in modern artificialdeep neural networksthat work with local operations. The auditory system processes the temporal and spectral (i.e. frequency) characteristics of sound waves, so the receptive fields of neurons in the auditory system are modeled as spectro-temporal patterns that cause the firing rate of the neuron to modulate with the auditory stimulus. Auditory receptive fields are often modeled asspectro-temporal receptive fields(STRFs), which are the specific pattern in the auditory domain that causes modulation of the firing rate of a neuron. Linear STRFs are created by first calculating aspectrogramof the acoustic stimulus, which determines how thespectral densityof the acoustic stimulus changes over time, often using theShort-time Fourier transform(STFT). Firing rate is modeled over time for the neuron, possibly using aperistimulus time histogramif combining over multiple repetitions of the acoustic stimulus. Then,linear regressionis used to predict the firing rate of that neuron as a weighted sum of the spectrogram. The weights learned by the linear model are the STRF, and represent the specific acoustic pattern that causes modulation in the firing rate of the neuron. STRFs can also be understood as thetransfer functionthat maps an acoustic stimulus input to a firing rate response output.[3]A theoretical explanation of the computational function of early auditory receptive fields is given in.[4] In the somatosensory system, receptive fields are regions of theskinor ofinternal organs. 
Some types ofmechanoreceptorshave large receptive fields, while others have smaller ones. Large receptive fields allow the cell to detect changes over a wider area, but lead to a less precise perception. Thus, the fingers, which require the ability to detect fine detail, have many, densely packed (up to 500 per cubic cm) mechanoreceptors with small receptive fields (around 10 square mm), while the back and legs, for example, have fewer receptors with large receptive fields. Receptors with large receptive fields usually have a "hot spot", an area within the receptive field (usually in the center, directly over the receptor) where stimulation produces the most intense response.[citation needed] Tactile-sense-related cortical neurons have receptive fields on the skin that can be modified by experience or by injury to sensory nerves resulting in changes in the field's size and position. In general these neurons have relatively large receptive fields (much larger than those of dorsal root ganglion cells). However, the neurons are able to discriminate fine detail due to patterns of excitation and inhibition relative to the field which leads to spatial resolution. In the visual system, receptive fields are volumes invisual space. They are smallest in thefoveawhere they can be a fewminutes of arclike a dot on this page, to the whole page. For example, the receptive field of a singlephotoreceptoris a cone-shaped volume comprising all the visual directions in which light will alter the firing of that cell. Itsapexis located in the center of thelensand its base essentially atinfinityin visual space. Traditionally, visual receptive fields were portrayed in two dimensions (e.g., as circles, squares, or rectangles), but these are simply slices, cut along the screen on which the researcher presented the stimulus, of the volume of space to which a particular cell will respond. In the case ofbinocular neuronsin thevisual cortex, receptive fields do not extend tooptical infinity. Instead, they are restricted to a certain interval of distance from the animal, or from where the eyes are fixating (seePanum's area). The receptive field is often identified as the region of theretinawhere the action oflightalters the firing of the neuron. In retinal ganglion cells (see below), this area of the retina would encompass all the photoreceptors, all therodsandconesfrom oneeyethat are connected to this particular ganglion cell viabipolar cells,horizontal cells, andamacrine cells. Inbinocular neuronsin the visual cortex, it is necessary to specify the corresponding area in both retinas (one in each eye). Although these can be mapped separately in each retina by shutting one or the other eye, the full influence on the neuron's firing is revealed only when both eyes are open. Hubel and Wiesel[5]advanced the theory thatreceptive fields of cells at one level of the visual system are formed from input by cells at a lower level of the visual system.In this way, small,simple receptive fields could be combined to form large, complex receptive fields.Later theorists elaborated this simple, hierarchical arrangement by allowing cells at one level of the visual system to be influenced by feedback from higher levels. Receptive fields have been mapped for all levels of the visual system from photoreceptors, to retinal ganglion cells, to lateral geniculate nucleus cells, to visual cortex cells, to extrastriate cortical cells. 
However, because the activities of neurons at any one location are contingent on the activities of neurons across the whole system, i.e. are contingent on changes in the whole field, it is unclear whether a local description of a particular "receptive field" can be considered a general description, robust to changes in the field as a whole. Studies based on perception do not give the full picture of the understanding of visual phenomena, so the electrophysiological tools must be used, as the retina, after all, is an outgrowth of the brain. In retinal ganglion and V1 cells, the receptive field consists of the centerandsurround region. Each ganglion cell or optic nerve fiber bears a receptive field, increasing with intensifying light. In the largest field, the light has to be more intense at the periphery of the field than at the center, showing that some synaptic pathways are more preferred than others. The organization of ganglion cells' receptive fields, composed of inputs from many rods and cones, provides a way of detecting contrast, and is used fordetecting objects' edges.[6]: 188Each receptive field is arranged into a central disk, the "center", and a concentric ring, the "surround", each region responding oppositely to light. For example, light in the centre might increase the firing of a particular ganglion cell, whereas light in the surround would decrease the firing of that cell. Stimulation of the center of an on-center cell's receptive field producesdepolarizationand an increase in the firing of the ganglion cell, stimulation of thesurroundproduces ahyperpolarizationand a decrease in the firing of the cell, and stimulation of both the center and surround produces only a mild response (due to mutual inhibition of center and surround). An off-center cell is stimulated by activation of the surround and inhibited by stimulation of the center (see figure). Photoreceptors that are part of the receptive fields of more than one ganglion cell are able to excite or inhibitpostsynaptic neuronsbecause they release theneurotransmitterglutamateat theirsynapses, which can act to depolarize or to hyperpolarize a cell, depending on whether there is a metabotropic or ionotropic receptor on that cell. Thecenter-surround receptive field organizationallows ganglion cells to transmit information not merely about whether photoreceptor cells are exposed to light, but also about the differences in firing rates of cells in the center and surround. This allows them to transmit information about contrast. The size of the receptive field governs thespatial frequencyof the information: small receptive fields are stimulated by high spatial frequencies, fine detail; large receptive fields are stimulated by low spatial frequencies, coarse detail. Retinal ganglion cell receptive fields convey information about discontinuities in the distribution of light falling on the retina; these often specify the edges of objects. In dark adaptation, the peripheral opposite activity zone becomes inactive, but, since it is a diminishing of inhibition between center and periphery, the active field can actually increase, allowing more area for summation. Further along in the visual system, groups of ganglion cells form the receptive fields of cells in thelateral geniculate nucleus. Receptive fields are similar to those of ganglion cells, with an antagonistic center-surround system and cells that are either on- or off center. 
Receptive fields of cells in the visual cortex are larger and have more-complex stimulus requirements than retinal ganglion cells or lateral geniculate nucleus cells.HubelandWiesel(e.g., Hubel, 1963;Hubel-Wiesel 1959) classified receptive fields of cells in the visual cortex intosimple cells,complex cells, andhypercomplex cells. Simple cell receptive fields are elongated, for example with an excitatory central oval, and an inhibitory surrounding region, or approximately rectangular, with one long side being excitatory and the other being inhibitory. Images for these receptive fields need to have a particular orientation in order to excite the cell. For complex-cell receptive fields, a correctly oriented bar of light might need to move in a particular direction in order to excite the cell. For hypercomplex receptive fields, the bar might also need to be of a particular length. In extrastriate visual areas, cells can have very large receptive fields requiring very complex images to excite the cell. For example, in theinferotemporal cortex, receptive fields cross the midline of visual space and require images such as radial gratings or hands. It is also believed that in thefusiform face area, images of faces excite the cortex more than other images. This property was one of the earliest major results obtained throughfMRI(Kanwisher, McDermott and Chun, 1997); the finding was confirmed later at the neuronal level (Tsao, Freiwald, Tootell andLivingstone, 2006). In a similar vein, people have looked for other category-specific areas and found evidence for regions representing views of places (parahippocampal place area) and the body (Extrastriate body area). However, more recent research has suggested that the fusiform face area is specialised not just for faces, but also for any discrete, within-category discrimination.[7] A theoretical explanation of the computational function of visual receptive fields is given in.[8][9][10]It is described how idealised models of receptive fields similar to the biological receptive fields[11][12]found in the retina, the LGN and the primary visual cortex can be derived from structural properties of the environment in combination with internal consistency to guarantee consistent representation of image structures over multiple spatial and temporal scales. It is also described how the receptive fields in the primary visual cortex, which are tuned to different sizes, orientations and directions in the image domain, enable the visual system to handle the influence of natural image transformations and to compute invariant image representations at higher levels in the visual hierarchy. An in-depth theoretical analysis of how the orientation selectivity of simple cells and complex cells in the primary visual cortex relate to inherent properties of visual receptive fields is given in.[13] The term receptive field is also used in the context ofartificial neural networks, most often in relation toconvolutional neural networks(CNNs). So, in a neural network context, the receptive field is defined as the size of the region in the input that produces the feature. Basically, it is a measure of association of an output feature (of any layer) to the input region (patch). It is important to note that the idea of receptive fields applies to local operations (i.e. convolution, pooling). 
As an example, in motion-based tasks, like video prediction and optical flow estimation, large motions need to be captured (displacements of pixels in a 2D grid), so an adequate receptive field is required. Specifically, the receptive field is generally sufficient if it is larger than the largest flow magnitude in the dataset. There are several ways to increase the receptive field of a CNN, such as stacking more convolutional layers, using larger kernels, downsampling with strides or pooling, or using dilated convolutions. When used in this sense, the term adopts a meaning reminiscent of receptive fields in actual biological nervous systems. CNNs have a distinct architecture, designed to mimic the way in which real animal brains are understood to function; instead of having every neuron in each layer connect to all neurons in the next layer (Multilayer perceptron), the neurons are arranged in a 3-dimensional structure in such a way as to take into account the spatial relationships between different neurons with respect to the original data. Since CNNs are used primarily in the field of computer vision, the data that the neurons represent is typically an image; each input neuron represents one pixel from the original image. The first layer of neurons is composed of all the input neurons; neurons in the next layer will receive connections from some of the input neurons (pixels), but not all, as would be the case in an MLP and in other traditional neural networks. Hence, instead of having each neuron receive connections from all neurons in the previous layer, CNNs use a receptive field-like layout in which each neuron receives connections only from a subset of neurons in the previous (lower) layer. The receptive field of a neuron in one of the lower layers encompasses only a small area of the image, while the receptive field of a neuron in subsequent (higher) layers involves a combination of receptive fields from several (but not all) neurons in the layer before (i.e. a neuron in a higher layer "looks" at a larger portion of the image than does a neuron in a lower layer). In this way, each successive layer is capable of learning increasingly abstract features of the original image. The use of receptive fields in this fashion is thought to give CNNs an advantage in recognizing visual patterns when compared to other types of neural networks.
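For a stack of such local operations, the receptive field can be tracked with the standard recurrence r ← r + (k − 1)·j and j ← j·s, where k is the kernel size, s the stride, and j the cumulative stride ("jump") of the layer's input grid. The sketch below (plain Python; the layer list is a hypothetical network, not any particular architecture) applies it layer by layer:

```python
# Track the receptive field r and cumulative stride ("jump") j of a stack of
# local operations (convolutions / pooling), one (kernel_size, stride) pair per layer.

def receptive_field(layers):
    r, j = 1, 1                     # a single input pixel, unit jump
    for kernel_size, stride in layers:
        r = r + (kernel_size - 1) * j
        j = j * stride
    return r

# Hypothetical network: three 3x3 convolutions, a 2x2 stride-2 pooling, two more 3x3 convolutions.
layers = [(3, 1), (3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]
print(receptive_field(layers))      # input pixels "seen" by one output value, per axis
```

With this particular layer list the result is 16, i.e. each output value depends on a 16 × 16 patch of the input.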
https://en.wikipedia.org/wiki/Receptive_field
Image stitchingorphoto stitchingis the process of combining multiplephotographicimageswith overlapping fields of view to produce a segmentedpanoramaor high-resolution image. Commonly performed through the use ofcomputer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results,[1][2]although some stitching algorithms actually benefit from differently exposed images by doinghigh-dynamic-range imagingin regions of overlap.[3][4]Somedigital camerascan stitch their photos internally. Image stitching is widely used in modern applications, such as the following: The image stitching process can be divided into three main components:image registration,calibration, andblending. In order to estimate image alignment, algorithms are needed to determine the appropriate mathematical model relating pixel coordinates in one image to pixel coordinates in another. Algorithms that combine direct pixel-to-pixel comparisons with gradient descent (and other optimization techniques) can be used to estimate these parameters. Distinctive features can be found in each image and then efficiently matched to rapidly establishcorrespondencesbetween pairs of images. When multiple images exist in a panorama, techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another. A final compositing surface onto which to warp or projectively transform and place all of the aligned images is needed, as are algorithms to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences. Since the illumination in two views cannot be guaranteed to be identical, stitching two images could create a visible seam. Other reasons for seams could be the background changing between two images for the same continuous foreground. Other major issues to deal with are the presence ofparallax,lens distortion, scenemotion, andexposuredifferences. In a non-ideal real-life case, the intensity varies across the whole scene, and so does the contrast and intensity across frames. Additionally, theaspect ratioof a panorama image needs to be taken into account to create a visually pleasingcomposite. Forpanoramicstitching, the ideal set of images will have a reasonable amount of overlap (at least 15–30%) to overcome lens distortion and have enough detectable features. The set of images will have consistent exposure between frames to minimize the probability of seams occurring. Feature detectionis necessary to automatically find correspondences between images. Robust correspondences are required in order to estimate the necessary transformation to align an image with the image it is being composited on. Corners, blobs,Harris corners, anddifferences of Gaussiansof Harris corners are good features since they are repeatable and distinct. One of the first operators for interest point detection was developed byHans Moravecin 1977 for his research involving the automatic navigation of a robot through a clustered environment. Moravec also defined the concept of "points of interest" in an image and concluded these interest points could be used to find matching regions in different images. The Moravec operator is considered to be a corner detector because it defines interest points as points where there are large intensity variations in all directions. This often is the case at corners. 
However, Moravec was not specifically interested in finding corners, just distinct regions in an image that could be used to register consecutive image frames. Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly. They needed it as a processing step to build interpretations of a robot's environment based on image sequences. Like Moravec, they needed a method to match corresponding points in consecutive image frames, but were interested in tracking both corners and edges between frames. SIFT and SURF are more recent key-point or interest-point detector algorithms, though SURF is patented and its commercial usage is restricted. Once a feature has been detected, a descriptor method such as the SIFT descriptor can be applied to match features across images. Image registration involves matching features[7] in a set of images or using direct alignment methods to search for image alignments that minimize the sum of absolute differences between overlapping pixels.[8] When using direct alignment methods, one might first calibrate one's images to get better results. Additionally, users may input a rough model of the panorama to help the feature matching stage, so that e.g. only neighboring images are searched for matching features. Since there is a smaller group of features for matching, the result of the search is more accurate and execution of the comparison is faster. To estimate a robust model from the data, a common method used is known as RANSAC. The name RANSAC is an abbreviation for "RANdom SAmple Consensus". It is an iterative method for robust parameter estimation to fit mathematical models from sets of observed data points which may contain outliers. The algorithm is non-deterministic in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are performed. Because it is a probabilistic method, different results may be obtained each time the algorithm is run. The RANSAC algorithm has found many applications in computer vision, including the simultaneous solving of the correspondence problem and the estimation of the fundamental matrix related to a pair of stereo cameras. The basic assumption of the method is that the data consists of "inliers", i.e., data whose distribution can be explained by some mathematical model, and "outliers", which are data that do not fit the model. Outliers are considered points which come from noise, erroneous measurements, or simply incorrect data. For the problem of homography estimation, RANSAC works by trying to fit several models using some of the point pairs and then checking if the models were able to relate most of the points. The best model – the homography, which produces the highest number of correct matches – is then chosen as the answer for the problem; thus, if the ratio of outliers to data points is very low, RANSAC outputs a decent model fitting the data. Image calibration aims to minimize differences between an ideal lens model and the camera-lens combination that was used, optical defects such as distortions, exposure differences between images, vignetting,[9] camera response and chromatic aberrations.
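A compact sketch of RANSAC applied to homography estimation is given below (Python with NumPy; the four-point DLT fit, the reprojection threshold and the iteration count are illustrative assumptions rather than the procedure of any particular stitching package):

```python
import numpy as np

def fit_homography_dlt(src, dst):
    """Direct Linear Transform: estimate a 3x3 homography H from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)      # right singular vector of the smallest singular value

def project(H, pts):
    """Apply H to 2-D points given as an (N, 2) array."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """Keep the homography that explains the most correspondences (inliers)."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)          # minimal random sample
        H = fit_homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)   # reprojection error
        inliers = int(np.sum(err < thresh))
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

Here src and dst are assumed to be (N, 2) arrays of matched feature coordinates in the two images; in practice the winning homography is usually refit on all of its inliers afterwards.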
If feature detection methods were used to register the images, and the absolute positions of the features were recorded and saved, the stitching software may use this data for geometric optimization of the images in addition to placing the images on the panosphere. Panotools and its various derivative programs use this method. Alignment may be necessary to transform an image to match the viewpoint of the image it is being composited with. Alignment is, in simple terms, a change of coordinate system, so that the image is expressed in a new coordinate system matching the required viewpoint. The types of transformation an image may undergo are pure translation, pure rotation, a similarity transform (which combines translation, rotation and scaling), and affine or projective transforms. A projective transformation is the most general of the two-dimensional planar transformations: the only visible features preserved in the transformed image are straight lines, whereas an affine transform additionally preserves parallelism. A projective transformation can be written as x′ ≃ Hx, where x denotes points in the old coordinate system, x′ the corresponding points in the transformed image, and H the homography matrix. Expressing the points x and x′ through the camera intrinsics (K and K′) and the rotation and translation [R t] relating them to the real-world coordinates X and X′, and using the homography relation between x′ and x, the homography can be derived in terms of these camera parameters. The homography matrix H has 8 parameters, or degrees of freedom. The homography can be computed using the direct linear transform and singular value decomposition, with A h = 0, where A is a matrix constructed from the coordinates of the correspondences and h is the one-dimensional vector of the 9 elements of the reshaped homography matrix. To obtain h we can simply apply the SVD, A = U·S·Vᵀ, and take h as the column of V corresponding to the smallest singular value; this holds because h lies in the (approximate) null space of A. Since there are 8 degrees of freedom, the algorithm requires at least four point correspondences. When RANSAC is used to estimate the homography and multiple correspondences are available, the correct homography matrix is the one with the maximum number of inliers (a code sketch of this estimation is given after the projection discussion below). Compositing is the process in which the rectified images are aligned in such a way that they appear as a single shot of a scene. Compositing can be done automatically, since the algorithm now knows which correspondences overlap. Image blending involves executing the adjustments determined in the calibration stage, combined with remapping of the images to an output projection. Colors are adjusted between images to compensate for exposure differences. If applicable, high-dynamic-range merging is done along with motion compensation and deghosting. Images are blended together and seam-line adjustment is done to minimize the visibility of seams between images. The seam can be reduced by a simple gain adjustment; this compensation essentially minimizes the intensity difference of overlapping pixels. Blending algorithms typically give more weight to pixels near the center of each image. Gain-compensated and multi-band blended images compare the best (IJCV 2007). Straightening is another method to rectify the image. Matthew Brown and David G.
Lowe in their paper ‘Automatic Panoramic Image Stitching using Invariant Features’ describe methods of straightening which apply a global rotation such that vector u is vertical (in the rendering frame) which effectively removes the wavy effect from output panoramas. This process is similar toimage rectification, and more generallysoftware correction of optical distortionsin single photographs. Even after gain compensation, some image edges are still visible due to a number of unmodelled effects, such as vignetting (intensity decreases towards the edge of the image), parallax effects due to unwanted motion of the optical centre, mis-registration errors due to mismodelling of the camera, radial distortion and so on. Due to these reasons they propose a blending strategy called multi band blending. For image segments that have been taken from the same point in space, stitched images can be arranged using one of variousmap projections. Rectilinear projection, where the stitched image is viewed on a two-dimensional plane intersecting the panosphere in a single point. Lines that are straight in reality are shown as straight regardless of their directions on the image. Wide views – around 120° or so – start to exhibit severe distortion near the image borders. One case of rectilinear projection is the use ofcube faceswithcubic mappingfor panorama viewing. Panorama is mapped to six squares, each cube face showing 90 by 90 degree area of the panorama. Cylindrical projection, where the stitched image shows a 360° horizontal field of view and a limited vertical field of view. Panoramas in this projection are meant to be viewed as though the image is wrapped into a cylinder and viewed from within. When viewed on a 2D plane, horizontal lines appear curved while vertical lines remain straight.[10]Vertical distortion increases rapidly when nearing the top of the panosphere. There are various other cylindrical formats, such asMercatorandMiller cylindricalwhich have less distortion near the poles of the panosphere. Spherical projectionorequirectangular projection– which is strictly speaking another cylindrical projection – where the stitched image shows a 360° horizontal by 180° vertical field of view i.e. the whole sphere. Panoramas in this projection are meant to be viewed as though the image is wrapped into a sphere and viewed from within. When viewed on a 2D plane, horizontal lines appear curved as in a cylindrical projection, while vertical lines remain vertical.[10] Since a panorama is basically a map of a sphere, various other mapping projections fromcartographerscan also be used if so desired. Additionally there are specialized projections which may have more aesthetically pleasing advantages over normal cartography projections such as Hugin's Panini projection[11]– named after ItalianvedutismopainterGiovanni Paolo Panini[12]– or PTGui's Vedutismo projection.[13]Different projections may be combined in same image for fine tuning the final look of the output image.[14] Stereographic projectionorfisheyeprojection can be used to form alittle planetpanorama by pointing the virtual camera straight down and setting thefield of viewlarge enough to show the whole ground and some of the areas above it; pointing the virtual camera upwards creates a tunnel effect.Conformalityof the stereographic projection may produce more visually pleasing result than equal area fisheye projection as discussed in the stereo-graphic projection's article. 
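The direct linear transform with SVD outlined in the alignment discussion above can be sketched as follows. This is a minimal, unnormalized version; production code would typically normalize the point coordinates first (e.g. Hartley normalization).

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H from >= 4 point correspondences.

    Builds the matrix A from the correspondences, solves A h = 0 via SVD by
    taking the right singular vector associated with the smallest singular
    value, and reshapes h into H (normalized so that H[2, 2] = 1).
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)   # singular vector for the smallest singular value
    return H / H[2, 2]
```

Combined with the RANSAC loop sketched earlier (using the reprojection error of each correspondence as the residual), this yields a robust homography estimate for image alignment.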
The use of images not taken from the same place (on a pivot about the entrance pupil of the camera)[15] can lead to parallax errors in the final product. When the captured scene features rapid movement or dynamic motion, artifacts may occur as a result of time differences between the image segments. "Blind stitching" through feature-based alignment methods (see autostitch), as opposed to manual selection and stitching, can cause imperfections in the assembly of the panorama. Dedicated programs include Autostitch, Hugin, Ptgui, Panorama Tools, Microsoft Research Image Composite Editor and CleVR Stitcher. Many other programs can also stitch multiple images; a popular example is Adobe Systems' Photoshop, which includes a tool known as Photomerge and, in the latest versions, the new Auto-Blend. Other programs such as VideoStitch make it possible to stitch videos, and Vahana VR enables real-time video stitching. The Image Stitching module for the QuickPHOTO microscope software makes it possible to interactively stitch together multiple fields of view from a microscope using the camera's live view; it can also be used for manual stitching of whole microscopy samples.
https://en.wikipedia.org/wiki/Image_stitching
Scale-spacetheory is a framework formulti-scalesignalrepresentationdeveloped by thecomputer vision,image processingandsignal processingcommunities with complementary motivations fromphysicsandbiological vision. It is a formal theory for handling image structures at differentscales, by representing an image as a one-parameter family of smoothed images, thescale-space representation, parametrized by the size of thesmoothingkernelused for suppressing fine-scale structures.[1][2][3][4][5][6][7][8]The parametert{\displaystyle t}in this family is referred to as thescale parameter, with the interpretation that image structures of spatial size smaller than aboutt{\displaystyle {\sqrt {t}}}have largely been smoothed away in the scale-space level at scalet{\displaystyle t}. The main type of scale space is thelinear (Gaussian) scale space, which has wide applicability as well as the attractive property of being possible to derive from a small set ofscale-space axioms. The corresponding scale-space framework encompasses a theory for Gaussian derivative operators, which can be used as a basis for expressing a large class of visual operations for computerized systems that process visual information. This framework also allows visual operations to be madescale invariant, which is necessary for dealing with the size variations that may occur in image data, because real-world objects may be of different sizes and in addition the distance between the object and the camera may be unknown and may vary depending on the circumstances.[9][10] The notion of scale space applies to signals of arbitrary numbers of variables. The most common case in the literature applies to two-dimensional images, which is what is presented here. Consider a given imagef{\displaystyle f}wheref(x,y){\displaystyle f(x,y)}is the greyscale value of the pixel at position(x,y){\displaystyle (x,y)}. The linear (Gaussian)scale-space representationoff{\displaystyle f}is a family of derived signalsL(x,y;t){\displaystyle L(x,y;t)}defined by theconvolutionoff(x,y){\displaystyle f(x,y)}with the two-dimensionalGaussian kernel such that where the semicolon in the argument ofL{\displaystyle L}implies that the convolution is performed only over the variablesx,y{\displaystyle x,y}, while the scale parametert{\displaystyle t}after the semicolon just indicates which scale level is being defined. This definition ofL{\displaystyle L}works for a continuum of scalest≥0{\displaystyle t\geq 0}, but typically only a finite discrete set of levels in the scale-space representation would be actually considered. The scale parametert=σ2{\displaystyle t=\sigma ^{2}}is thevarianceof theGaussian filterand as a limit fort=0{\displaystyle t=0}the filterg{\displaystyle g}becomes animpulse functionsuch thatL(x,y;0)=f(x,y),{\displaystyle L(x,y;0)=f(x,y),}that is, the scale-space representation at scale levelt=0{\displaystyle t=0}is the imagef{\displaystyle f}itself. Ast{\displaystyle t}increases,L{\displaystyle L}is the result of smoothingf{\displaystyle f}with a larger and larger filter, thereby removing more and more of the details that the image contains. Since the standard deviation of the filter isσ=t{\displaystyle \sigma ={\sqrt {t}}}, details that are significantly smaller than this value are to a large extent removed from the image at scale parametert{\displaystyle t}, see the following figures and[11]for graphical illustrations. 
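As a minimal illustration of the definition above, the scale-space family can be sampled at a discrete set of scale levels by Gaussian smoothing with standard deviation sqrt(t). scipy's gaussian_filter is used here purely as a convenient stand-in for the convolution with the Gaussian kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, scales):
    """Sample the linear scale-space L(x, y; t) at a discrete set of scales.

    Each level is the image convolved with a Gaussian of variance t
    (standard deviation sqrt(t)); t = 0 returns the image itself.
    """
    image = np.asarray(image, dtype=float)
    levels = {}
    for t in scales:
        levels[t] = image if t == 0 else gaussian_filter(image, sigma=np.sqrt(t))
    return levels

# Example: scale levels t = 0, 1, 4, 16, 64 (sigma = 0, 1, 2, 4, 8)
# levels = gaussian_scale_space(img, scales=[0, 1, 4, 16, 64])
```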
When faced with the task of generating a multi-scale representation one may ask: could any filtergof low-pass type and with a parametertwhich determines its width be used to generate a scale space? The answer is no, as it is of crucial importance that the smoothing filter does not introduce new spurious structures at coarse scales that do not correspond to simplifications of corresponding structures at finer scales. In the scale-space literature, a number of different ways have been expressed to formulate this criterion in precise mathematical terms. The conclusion from several different axiomatic derivations that have been presented is that the Gaussian scale space constitutes thecanonicalway to generate a linear scale space, based on the essential requirement that new structures must not be created when going from a fine scale to any coarser scale.[1][3][4][6][9][12][13][14][15][16][17][18][19]Conditions, referred to asscale-space axioms, that have been used for deriving the uniqueness of the Gaussian kernel includelinearity,shift invariance,semi-groupstructure, non-enhancement oflocal extrema,scale invarianceandrotational invariance. In the works,[15][20][21]the uniqueness claimed in the arguments based on scale invariance has been criticized, and alternative self-similar scale-space kernels have been proposed. The Gaussian kernel is, however, a unique choice according to the scale-space axiomatics based on causality[3]or non-enhancement of local extrema.[16][18] Equivalently, the scale-space family can be defined as the solution of thediffusion equation(for example in terms of theheat equation), with initial conditionL(x,y;0)=f(x,y){\displaystyle L(x,y;0)=f(x,y)}. This formulation of the scale-space representationLmeans that it is possible to interpret the intensity values of the imagefas a "temperature distribution" in the image plane and that the process that generates the scale-space representation as a function oftcorresponds to heatdiffusionin the image plane over timet(assuming the thermal conductivity of the material equal to the arbitrarily chosen constant⁠1/2⁠). Although this connection may appear superficial for a reader not familiar withdifferential equations, it is indeed the case that the main scale-space formulation in terms of non-enhancement of local extrema is expressed in terms of a sign condition onpartial derivativesin the 2+1-D volume generated by the scale space, thus within the framework ofpartial differential equations. Furthermore, a detailed analysis of the discrete case shows that the diffusion equation provides a unifying link between continuous and discrete scale spaces, which also generalizes to nonlinear scale spaces, for example, usinganisotropic diffusion. Hence, one may say that the primary way to generate a scale space is by the diffusion equation, and that the Gaussian kernel arises as theGreen's functionof this specific partial differential equation. The motivation for generating a scale-space representation of a given data set originates from the basic observation that real-world objects are composed of different structures at differentscales. This implies that real-world objects, in contrast to idealized mathematical entities such aspointsorlines, may appear in different ways depending on the scale of observation. For example, the concept of a "tree" is appropriate at the scale of meters, while concepts such as leaves and molecules are more appropriate at finer scales. 
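As an illustration of the diffusion-equation formulation given above, the following sketch integrates dL/ds = ½∇²L with a simple explicit Euler scheme; after "time" t the result approximates Gaussian smoothing with variance t. The step size and border handling are illustrative choices, chosen well inside the stability limit of the explicit scheme.

```python
import numpy as np

def diffuse_to_scale(image, t, dt=0.1):
    """Approximate L(., .; t) by explicit Euler integration of the diffusion
    equation dL/ds = (1/2) * Laplacian(L), with conductivity 1/2 as above.
    """
    L = np.asarray(image, dtype=float).copy()
    steps = int(round(t / dt))
    for _ in range(steps):
        # 5-point discrete Laplacian with replicated borders
        padded = np.pad(L, 1, mode='edge')
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * L)
        L += dt * 0.5 * lap
    return L
```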
For acomputer visionsystem analysing an unknown scene, there is no way to know a priori whatscalesare appropriate for describing the interesting structures in the image data. Hence, the only reasonable approach is to consider descriptions at multiple scales in order to be able to capture the unknown scale variations that may occur. Taken to the limit, a scale-space representation considers representations at all scales.[9] Another motivation to the scale-space concept originates from the process of performing a physical measurement on real-world data. In order to extract any information from a measurement process, one has to applyoperators of non-infinitesimal sizeto the data. In many branches of computer science and applied mathematics, the size of the measurement operator is disregarded in the theoretical modelling of a problem. The scale-space theory on the other hand explicitly incorporates the need for a non-infinitesimal size of the image operators as an integral part of any measurement as well as any other operation that depends on a real-world measurement.[5] There is a close link between scale-space theory and biological vision. Many scale-space operations show a high degree of similarity with receptive field profiles recorded from the mammalian retina and the first stages in the visual cortex. In these respects, the scale-space framework can be seen as a theoretically well-founded paradigm for early vision, which in addition has been thoroughly tested by algorithms and experiments.[4][9] At any scale in scale space, we can apply local derivative operators to the scale-space representation: Due to the commutative property between the derivative operator and the Gaussian smoothing operator, suchscale-space derivativescan equivalently be computed by convolving the original image with Gaussian derivative operators. For this reason they are often also referred to asGaussian derivatives: The uniqueness of the Gaussian derivative operators as local operations derived from a scale-space representation can be obtained by similar axiomatic derivations as are used for deriving the uniqueness of the Gaussian kernel for scale-space smoothing.[4][22] These Gaussian derivative operators can in turn be combined by linear or non-linear operators into a larger variety of different types of feature detectors, which in many cases can be well modelled bydifferential geometry. Specifically, invariance (or more appropriatelycovariance) to local geometric transformations, such as rotations or local affine transformations, can be obtained by considering differential invariants under the appropriate class of transformations or alternatively by normalizing the Gaussian derivative operators to a locally determined coordinate frame determined from e.g. a preferred orientation in the image domain, or by applying a preferred local affine transformation to a local image patch (see the article onaffine shape adaptationfor further details). When Gaussian derivative operators and differential invariants are used in this way as basic feature detectors at multiple scales, the uncommitted first stages of visual processing are often referred to as avisual front-end. This overall framework has been applied to a large variety of problems in computer vision, includingfeature detection,feature classification,image segmentation,image matching,motion estimation, computation ofshapecues andobject recognition. 
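As a small illustration of the Gaussian derivative operators described above, first-order scale-space derivatives and the gradient magnitude at a given scale can be computed directly by differentiated Gaussian filtering. scipy's gaussian_filter with the order argument is used here as a convenient stand-in for convolution with Gaussian derivative kernels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_gradient(image, t):
    """First-order Gaussian derivatives Lx, Ly and gradient magnitude at scale t.

    Equivalent (up to discretization) to smoothing the image with a Gaussian
    of variance t and then differentiating.
    """
    sigma = np.sqrt(t)
    f = np.asarray(image, dtype=float)
    Lx = gaussian_filter(f, sigma, order=(0, 1))   # derivative along x (columns)
    Ly = gaussian_filter(f, sigma, order=(1, 0))   # derivative along y (rows)
    return Lx, Ly, np.hypot(Lx, Ly)
```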
The set of Gaussian derivative operators up to a certain order is often referred to as theN-jetand constitutes a basic type of feature within the scale-space framework. Following the idea of expressing visual operations in terms of differential invariants computed at multiple scales using Gaussian derivative operators, we can express anedge detectorfrom the set of points that satisfy the requirement that the gradient magnitude should assume a local maximum in the gradient direction By working out the differential geometry, it can be shown[4]that thisdifferential edge detectorcan equivalently be expressed from the zero-crossings of the second-order differential invariant that satisfy the following sign condition on a third-order differential invariant: Similarly, multi-scaleblob detectorsat any given fixed scale[23][9]can be obtained from local maxima and local minima of either theLaplacianoperator (also referred to as theLaplacian of Gaussian) orthe determinant of the Hessian matrix In an analogous fashion, corner detectors and ridge and valley detectors can be expressed as local maxima, minima or zero-crossings of multi-scale differential invariants defined from Gaussian derivatives. The algebraic expressions for the corner and ridge detection operators are, however, somewhat more complex and the reader is referred to the articles oncorner detectionandridge detectionfor further details. Scale-space operations have also been frequently used for expressing coarse-to-fine methods, in particular for tasks such asimage matchingand formulti-scale image segmentation. The theory presented so far describes a well-founded framework forrepresentingimage structures at multiple scales. In many cases it is, however, also necessary to select locally appropriate scales for further analysis. This need forscale selectionoriginates from two major reasons; (i) real-world objects may have different size, and this size may be unknown to the vision system, and (ii) the distance between the object and the camera can vary, and this distance information may also be unknowna priori. A highly useful property of scale-space representation is that image representations can be made invariant to scales, by performing automatic local scale selection[9][10][23][24][25][26][27][28]based on localmaxima(orminima) over scales of scale-normalizedderivatives whereγ∈[0,1]{\displaystyle \gamma \in [0,1]}is a parameter that is related to the dimensionality of the image feature. 
This algebraic expression forscale normalized Gaussian derivative operatorsoriginates from the introduction ofγ{\displaystyle \gamma }-normalized derivativesaccording to It can be theoretically shown that a scale selection module working according to this principle will satisfy the followingscale covariance property: if for a certain type of image feature a local maximum is assumed in a certain image at a certain scalet0{\displaystyle t_{0}}, then under a rescaling of the image by a scale factors{\displaystyle s}the local maximum over scales in the rescaled image will be transformed to the scale levels2t0{\displaystyle s^{2}t_{0}}.[23] Following this approach of gamma-normalized derivatives, it can be shown that different types ofscale adaptive and scale invariantfeature detectors[9][10][23][24][25][29][30][27]can be expressed for tasks such asblob detection,corner detection,ridge detection,edge detectionandspatio-temporal interest point detection(see the specific articles on these topics for in-depth descriptions of how these scale-invariant feature detectors are formulated). Furthermore, the scale levels obtained from automatic scale selection can be used for determining regions of interest for subsequentaffine shape adaptation[31]to obtain affine invariant interest points[32][33]or for determining scale levels for computing associatedimage descriptors, such as locally scale adaptedN-jets. Recent work has shown that also more complex operations, such as scale-invariantobject recognitioncan be performed in this way, by computing local image descriptors (N-jets or local histograms of gradient directions) at scale-adapted interest points obtained from scale-space extrema of the normalizedLaplacianoperator (see alsoscale-invariant feature transform[34]) or the determinant of the Hessian (see alsoSURF);[35]see also the Scholarpedia article on thescale-invariant feature transform[36]for a more general outlook of object recognition approaches based on receptive field responses[19][37][38][39]in terms Gaussian derivative operators or approximations thereof. An imagepyramidis a discrete representation in which a scale space is sampled in both space and scale. For scale invariance, the scale factors should be sampled exponentially, for example as integer powers of 2 or√2. When properly constructed, the ratio of the sample rates in space and scale are held constant so that the impulse response is identical in all levels of the pyramid.[40][41][42][43]Fast, O(N), algorithms exist for computing a scale invariant image pyramid, in which the image or signal is repeatedly smoothed then subsampled. Values for scale space between pyramid samples can easily be estimated using interpolation within and between scales and allowing for scale and position estimates with sub resolution accuracy.[43] In a scale-space representation, the existence of a continuous scale parameter makes it possible to track zero crossings over scales leading to so-calleddeep structure. 
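Putting the γ-normalized derivatives and the blob detectors mentioned above together, a minimal sketch of scale selection with the scale-normalized Laplacian looks as follows; local extrema over both space and scale of the returned volume give candidate blobs together with their selected scales (γ = 1 is the standard choice for the Laplacian blob detector).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_normalized_laplacian(image, scales, gamma=1.0):
    """Scale-normalized Laplacian t**gamma * (Lxx + Lyy) over a set of scales.

    Gaussian derivatives are computed by smoothing with a Gaussian of
    variance t (sigma = sqrt(t)) and differentiating, via the `order`
    argument of gaussian_filter.
    """
    f = np.asarray(image, dtype=float)
    responses = []
    for t in scales:
        sigma = np.sqrt(t)
        Lxx = gaussian_filter(f, sigma, order=(0, 2))   # d^2/dx^2 of L
        Lyy = gaussian_filter(f, sigma, order=(2, 0))   # d^2/dy^2 of L
        responses.append((t ** gamma) * (Lxx + Lyy))
    return np.stack(responses)    # shape (num_scales, height, width)

# Blob candidates: positions and scales where |response| is a local extremum
# in the 3-D (scale, y, x) volume.
```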
For features defined aszero-crossingsofdifferential invariants, theimplicit function theoremdirectly definestrajectoriesacross scales,[4][44]and at those scales wherebifurcationsoccur, the local behaviour can be modelled bysingularity theory.[4][44][45][46][47] Extensions of linear scale-space theory concern the formulation of non-linear scale-space concepts more committed to specific purposes.[48][49]Thesenon-linear scale-spacesoften start from the equivalent diffusion formulation of the scale-space concept, which is subsequently extended in a non-linear fashion. A large number of evolution equations have been formulated in this way, motivated by different specific requirements (see the abovementioned book references for further information). However, not all of these non-linear scale-spaces satisfy similar "nice" theoretical requirements as the linear Gaussian scale-space concept. Hence, unexpected artifacts may sometimes occur and one should be very careful of not using the term "scale-space" for just any type of one-parameter family of images. A first-order extension of the isotropic Gaussian scale space is provided by theaffine (Gaussian) scale space.[4]One motivation for this extension originates from the common need for computing image descriptors subject for real-world objects that are viewed under a perspective camera model. To handle such non-linear deformations locally, partial invariance (or more correctlycovariance) to localaffine deformationscan be achieved by considering affine Gaussian kernels with their shapes determined by the local image structure,[31]see the article onaffine shape adaptationfor theory and algorithms. Indeed, this affine scale space can also be expressed from a non-isotropic extension of the linear (isotropic) diffusion equation, while still being within the class of linearpartial differential equations. There exists a more general extension of the Gaussian scale-space model to affine and spatio-temporal scale-spaces.[4][31][18][19][50]In addition to variabilities over scale, which original scale-space theory was designed to handle, thisgeneralized scale-space theory[19]also comprises other types of variabilities caused by geometric transformations in the image formation process, including variations in viewing direction approximated by local affine transformations, and relative motions between objects in the world and the observer, approximated by localGalilean transformations. This generalized scale-space theory leads to predictions about receptive field profiles in good qualitative agreement with receptive field profiles measured by cell recordings in biological vision.[51][52][50][53] There are strong relations between scale-space theory andwavelet theory, although these two notions of multi-scale representation have been developed from somewhat different premises. There has also been work on othermulti-scale approaches, such aspyramidsand a variety of other kernels, that do not exploit or require the same requirements as true scale-space descriptions do. There are interesting relations between scale-space representation and biological vision and hearing. 
Neurophysiological studies of biological vision have shown that there arereceptive fieldprofiles in the mammalianretinaandvisual cortex, that can be well modelled by linear Gaussian derivative operators, in some cases also complemented by a non-isotropic affine scale-space model, a spatio-temporal scale-space model and/or non-linear combinations of such linear operators.[18][51][52][50][53][54][55][56][57] Regarding biological hearing there arereceptive fieldprofiles in theinferior colliculusand theprimary auditory cortexthat can be well modelled by spectra-temporal receptive fields that can be well modelled by Gaussian derivates over logarithmic frequencies and windowed Fourier transforms over time with the window functions being temporal scale-space kernels.[58][59] In the area of classical computer vision, scale-space theory has established itself as a theoretical framework for early vision, with Gaussian derivatives constituting a canonical model for the first layer of receptive fields. With the introduction ofdeep learning, there has also been work on also using Gaussian derivatives or Gaussian kernels as a general basis for receptive fields in deep networks.[60][61][62][63][64]Using the transformation properties of the Gaussian derivatives and Gaussian kernels under scaling transformations, it is in this way possible to obtain scale covariance/equivariance and scale invariance of the deep network to handle image structures at different scales in a theoretically well-founded manner.[62][63]There have also been approaches developed to obtain scale covariance/equivariance and scale invariance by learned filters combined with multiple scale channels.[65][66][67][68][69][70]Specifically, using the notions of scale covariance/equivariance and scale invariance, it is possible to make deep networks operate robustly at scales not spanned by the training data, thus enabling scale generalization.[62][63][67][69] For processing pre-recorded temporal signals or video, the Gaussian kernel can also be used for smoothing and suppressing fine-scale structures over the temporal domain, since the data are pre-recorded and available in all directions. When processing temporal signals or video in real-time situations, the Gaussian kernel cannot, however, be used for temporal smoothing, since it would access data from the future, which obviously cannot be available. For temporal smoothing in real-time situations, one can instead use the temporal kernel referred to as the time-causal limit kernel,[71]which possesses similar properties in a time-causal situation (non-creation of new structures towards increasing scale and temporal scale covariance) as the Gaussian kernel obeys in the non-causal case. The time-causal limit kernel corresponds to convolution with an infinite number of truncated exponential kernels coupled in cascade, with specifically chosen time constants to obtain temporal scale covariance. For discrete data, this kernel can often be numerically well approximated by a small set of first-order recursive filters coupled in cascade, see[71]for further details. 
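The time-causal limit kernel itself uses specifically chosen time constants; the sketch below only illustrates the general mechanism by which it is approximated in practice — a cascade of first-order recursive (truncated-exponential) smoothers — with illustrative, not theoretically prescribed, time constants.

```python
import numpy as np

def cascaded_first_order_smoothing(signal, time_constants):
    """Time-causal temporal smoothing by first-order recursive filters in cascade.

    Each stage is y[n] = y[n-1] + (x[n] - y[n-1]) / (1 + mu), a discrete
    first-order smoother with a geometrically decaying (truncated-exponential)
    impulse response; cascading a few such stages approximates the time-causal
    limit kernel discussed above.
    """
    out = np.asarray(signal, dtype=float).copy()
    for mu in time_constants:
        y = np.empty_like(out)
        y[0] = out[0]
        for n in range(1, len(out)):
            y[n] = y[n - 1] + (out[n] - y[n - 1]) / (1.0 + mu)
        out = y
    return out
```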
An earlier approach to handling temporal scales in a time-causal way performs Gaussian smoothing over a logarithmically transformed temporal axis; it does not, however, have any known memory-efficient time-recursive implementation of the kind the time-causal limit kernel admits.[72] When implementing scale-space smoothing in practice, a number of different approaches can be taken: continuous or discrete Gaussian smoothing, implementation in the Fourier domain, pyramids based on binomial filters that approximate the Gaussian, or recursive filters. More details are given in a separate article on scale space implementation.
https://en.wikipedia.org/wiki/Scale_space
In the areas ofcomputer vision,image analysisandsignal processing, the notion of scale-space representation is used for processing measurement data at multiple scales, and specifically enhance or suppress image features over different ranges of scale (see the article onscale space). A special type of scale-space representation is provided by the Gaussian scale space, where the image data inNdimensions is subjected to smoothing by Gaussianconvolution. Most of the theory for Gaussian scale space deals with continuous images, whereas one when implementing this theory will have to face the fact that most measurement data are discrete. Hence, the theoretical problem arises concerning how to discretize the continuous theory while either preserving or well approximating the desirable theoretical properties that lead to the choice of the Gaussian kernel (see the article onscale-space axioms). This article describes basic approaches for this that have been developed in the literature, see also[1]for an in-depth treatment regarding the topic of approximating the Gaussian smoothing operation and the Gaussian derivative computations in scale-space theory, and[2]for a complementary treatment regarding hybrid discretization methods. TheGaussianscale-space representationof anN-dimensional continuous signal, is obtained byconvolvingfCwith anN-dimensionalGaussian kernel: In other words: However, forimplementation, this definition is impractical, since it is continuous. When applying the scale space concept to a discrete signalfD, different approaches can be taken. This article is a brief summary of some of the most frequently used methods. Using theseparability propertyof the Gaussian kernel theN-dimensionalconvolutionoperation can be decomposed into a set of separable smoothing steps with a one-dimensional Gaussian kernelGalong each dimension where and the standard deviation of the Gaussian σ is related to the scale parametertaccording tot= σ2. Separability will be assumed in all that follows, even when the kernel is not exactly Gaussian, since separation of the dimensions is the most practical way to implement multidimensional smoothing, especially at larger scales. Therefore,the rest of the article focuses on the one-dimensional case. When implementing the one-dimensional smoothing step in practice, the presumably simplest approach is to convolve the discrete signalfDwith asampled Gaussian kernel: where (witht= σ2) which in turn is truncated at the ends to give a filter with finite impulse response forMchosen sufficiently large (seeerror function) such that A common choice is to setMto a constantCtimes the standard deviation of the Gaussian kernel whereCis often chosen somewhere between 3 and 6. Using the sampled Gaussian kernel can, however, lead to implementation problems, in particular when computing higher-order derivatives at finer scales by applying sampled derivatives of Gaussian kernels. When accuracy and robustness are primary design criteria, alternative implementation approaches should therefore be considered. For small values of ε (10−6to 10−8) the errors introduced by truncating the Gaussian are usually negligible. For larger values of ε, however, there are many better alternatives to a rectangularwindow function. For example, for a given number of points, aHamming window,Blackman window, orKaiser windowwill do less damage to the spectral and other properties of the Gaussian than a simple truncation will. 
Notwithstanding this, since the Gaussian kernel decreases rapidly at the tails, the main recommendation is still to use a sufficiently small value of ε such that the truncation effects are no longer important. A more refined approach is to convolve the original signal with thediscrete Gaussian kernelT(n,t)[3][4][5] where andIn(t){\displaystyle I_{n}(t)}denotes themodified Bessel functionsof integer order,n. This is the discrete counterpart of the continuous Gaussian in that it is the solution to the discretediffusion equation(discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.[3][4][6] This filter can be truncated in the spatial domain as for the sampled Gaussian or can be implemented in the Fourier domain using a closed-form expression for itsdiscrete-time Fourier transform: With this frequency-domain approach, the scale-space properties transferexactlyto the discrete domain, or with excellent approximation using periodic extension and a suitably longdiscrete Fourier transformto approximate thediscrete-time Fourier transformof the signal being smoothed. Moreover, higher-order derivative approximations can be computed in a straightforward manner (and preserving scale-space properties) by applying small support central difference operators to the discretescale space representation.[7] As with the sampled Gaussian, a plain truncation of the infinite impulse response will in most cases be a sufficient approximation for small values of ε, while for larger values of ε it is better to use either a decomposition of the discrete Gaussian into a cascade of generalized binomial filters or alternatively to construct a finite approximate kernel by multiplying by awindow function. If ε has been chosen too large such that effects of the truncation error begin to appear (for example as spurious extrema or spurious responses to higher-order derivative operators), then the options are to decrease the value of ε such that a larger finite kernel is used, with cutoff where the support is very small, or to use a tapered window. Since computational efficiency is often important, low-orderrecursive filtersare often used for scale-space smoothing. For example, Young and van Vliet[8]use a third-order recursive filter with one realpoleand a pair of complex poles, applied forward and backward to make a sixth-order symmetric approximation to the Gaussian with low computational complexity for any smoothing scale. By relaxing a few of the axioms, Lindeberg[3]concluded that good smoothing filters would be "normalizedPólyafrequency sequences", a family of discrete kernels that includes all filters with real poles at 0 <Z< 1 and/orZ> 1, as well as with realzerosatZ< 0. For symmetry, which leads to approximate directional homogeneity, these filters must be further restricted to pairs of poles and zeros that lead to zero-phase filters. To match the transfer function curvature at zero frequency of the discrete Gaussian, which ensures an approximatesemi-groupproperty of additivet, two poles at can be applied forward and backwards, for symmetry and stability. This filter is the simplest implementation of a normalized Pólya frequency sequence kernel that works for any smoothing scale, but it is not as excellent an approximation to the Gaussian as Young and van Vliet's filter, which isnotnormalized Pólya frequency sequence, due to its complex poles. 
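To make the two kernel constructions above concrete, the following sketch generates both a truncated sampled Gaussian and the discrete Gaussian T(n, t) = e^(−t) I_n(t). scipy's exponentially scaled Bessel function ive evaluates e^(−t) I_n(t) directly, which avoids overflow for large t; the truncation bounds (C between 3 and 6, or a tail mass below ε) follow the rules of thumb discussed above.

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel function

def sampled_gaussian_kernel(t, C=4.0):
    """Truncated, renormalized sampled Gaussian for scale t (sigma = sqrt(t))."""
    sigma = np.sqrt(t)
    M = int(np.ceil(C * sigma))               # truncation radius M = C * sigma
    n = np.arange(-M, M + 1)
    g = np.exp(-n ** 2 / (2.0 * t))
    return g / g.sum()

def discrete_gaussian_kernel(t, M):
    """Discrete Gaussian T(n, t) = exp(-t) * I_n(t), truncated at |n| <= M."""
    n = np.arange(-M, M + 1)
    return ive(np.abs(n), t)                  # ive(n, t) = exp(-t) * I_n(t)

# Separable 2-D smoothing: convolve each row, then each column, with the
# 1-D kernel, e.g. via np.convolve(row, kernel, mode='same').
```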
The transfer function,H1, of a symmetric pole-pair recursive filter is closely related to thediscrete-time Fourier transformof the discrete Gaussian kernel via first-order approximation of the exponential: where thetparameter here is related to the stable pole positionZ=pvia: Furthermore, such filters withNpairs of poles, such as the two pole pairs illustrated in this section, are an even better approximation to the exponential: where the stable pole positions are adjusted by solving: The impulse responses of these filters are not very close to gaussian unless more than two pole pairs are used. However, even with only one or two pole pairs per scale, a signal successively smoothed at increasing scales will be very close to a gaussian-smoothed signal. The semi-group property is poorly approximated when too few pole pairs are used. Scale-space axiomsthat are still satisfied by these filters are: The following are only approximately satisfied, the approximation being better for larger numbers of pole pairs: This recursive filter method and variations to compute both the Gaussian smoothing as well as Gaussian derivatives has been described by several authors.[8][9][10][11]Tanet al.have analyzed and compared some of these approaches, and have pointed out that the Young and van Vliet filters are a cascade (multiplication) of forward and backward filters, while the Deriche and the Jinet al.filters are sums of forward and backward filters.[12] At fine scales, the recursive filtering approach as well as other separable approaches are not guaranteed to give the best possible approximation to rotational symmetry, so non-separable implementations for 2D images may be considered as an alternative. When computing several derivatives in theN-jetsimultaneously, discrete scale-space smoothing with the discrete analogue of the Gaussian kernel, or with a recursive filter approximation, followed by small support difference operators, may be both faster and more accurate than computing recursive approximations of each derivative operator. For small scales, a low-orderFIR filtermay be a better smoothing filter than a recursive filter. The symmetric 3-kernel[t/2, 1-t,t/2], fort≤ 0.5 smooths to a scale oftusing a pair of real zeros atZ< 0, and approaches the discrete Gaussian in the limit of smallt. In fact, with infinitesimalt, either this two-zero filter or the two-pole filter with poles atZ=t/2 andZ= 2/tcan be used as the infinitesimal generator for the discrete Gaussian kernels described above. The FIR filter's zeros can be combined with the recursive filter's poles to make a general high-quality smoothing filter. For example, if the smoothing process is to always apply abiquad(two-pole, two-zero) filter forward then backwards on each row of data (and on each column in the 2D case), the poles and zeros can each do a part of the smoothing. The zeros limit out att= 0.5 per pair (zeros atZ= –1), so for large scales the poles do most of the work. At finer scales, the combination makes an excellent approximation to the discrete Gaussian if the poles and zeros each do about half the smoothing. Thetvalues for each portion of the smoothing (poles, zeros, forward and backward multiple applications, etc.) are additive, in accordance with the approximate semi-group property. The FIR filter transfer function is closely related to the discrete Gaussian's DTFT, just as was the recursive filter's. 
For a single pair of zeros, the transfer function is where thetparameter here is related to the zero positionsZ=zvia: and we requiret≤ 0.5 to keep the transfer function non-negative. Furthermore, such filters withNpairs of zeros, are an even better approximation to the exponential and extend to higher values oft: where the stable zero positions are adjusted by solving: These FIR and pole-zero filters are valid scale-space kernels, satisfying the same axioms as the all-pole recursive filters. Regarding the topic of automatic scale selection based on normalized derivatives,pyramid approximationsare frequently used to obtain real-time performance.[13][14][15]The appropriateness of approximating scale-space operations within a pyramid originates from the fact that repeated cascade smoothing with generalized binomial kernels leads to equivalent smoothing kernels that under reasonable conditions approach the Gaussian. Furthermore, the binomial kernels (or more generally the class of generalized binomial kernels) can be shown to constitute the unique class of finite-support kernels that guarantee non-creation of local extrema or zero-crossings with increasing scale (see the article onmulti-scale approachesfor details). Special care may, however, need to be taken to avoid discretization artifacts. For one-dimensional kernels, there is a well-developed theory ofmulti-scale approaches, concerning filters that do not create new local extrema or new zero-crossings with increasing scales. For continuous signals, filters with real poles in thes-plane are within this class, while for discrete signals the above-described recursive and FIR filters satisfy these criteria. Combined with the strict requirement of a continuous semi-group structure, the continuous Gaussian and the discrete Gaussian constitute the unique choice for continuous and discrete signals. There are many other multi-scale signal processing, image processing and data compression techniques, usingwaveletsand a variety of other kernels, that do not exploit or require thesame requirementsasscale spacedescriptions do; that is, they do not depend on a coarser scale not generating a new extremum that was not present at a finer scale (in 1D) or non-enhancement of local extrema between adjacent scale levels (in any number of dimensions).
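As an illustration of the symmetric three-tap kernel [t/2, 1 − t, t/2] described earlier, the following sketch reaches a larger total scale by cascading several passes, relying on the (approximate) additivity of the per-pass scale contributions; the default step_t = 0.5 is the limiting per-pass value mentioned in the text.

```python
import numpy as np

def fir_smooth(signal, total_t, step_t=0.5):
    """Smooth a 1-D signal by cascading the symmetric 3-tap kernel [t/2, 1-t, t/2].

    Each pass contributes scale step_t (<= 0.5) and the contributions add up.
    A sketch assuming total_t is (approximately) an integer multiple of step_t;
    np.convolve with mode='same' implies zero padding at the boundaries.
    """
    assert 0 < step_t <= 0.5
    kernel = np.array([step_t / 2.0, 1.0 - step_t, step_t / 2.0])
    out = np.asarray(signal, dtype=float)
    for _ in range(int(round(total_t / step_t))):
        out = np.convolve(out, kernel, mode='same')
    return out
```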
https://en.wikipedia.org/wiki/Scale_space_implementation
Structure from motion(SfM)[1]is aphotogrammetricrange imagingtechnique for estimating three-dimensional structures from two-dimensional image sequences that may be coupled with localmotion signals. It is a classic problem studied in the fields ofcomputer visionandvisual perception. In computer vision, the problem of SfM is to design an algorithm to perform this task. In visual perception, the problem of SfM is to find an algorithmby which biological creatures perform this task. Humans perceive a great deal of information about the three-dimensional structure in their environment by moving around it. When the observer moves, objects around them move different amounts depending on their distance from the observer. This is known asmotion parallax, and this depth information can be used to generate an accurate 3D representation of the world around them.[2] Finding structure from motion presents a similar problem to finding structure fromstereo vision. In both instances, the correspondence between images and thereconstructionof 3D object needs to be found. To findcorrespondencebetween images, features such as corner points (edges with gradients in multiple directions) are tracked from one image to the next. One of the most widely used feature detectors is thescale-invariant feature transform(SIFT). It uses the maxima from adifference-of-Gaussians(DOG) pyramid as features. The first step in SIFT is finding a dominant gradient direction. To make it rotation-invariant, the descriptor is rotated to fit this orientation.[3]Another common feature detector is theSURF(speeded-up robust features).[4]In SURF, the DOG is replaced with aHessian matrix-based blob detector. Also, instead of evaluating the gradient histograms, SURF computes for the sums of gradient components and the sums of their absolute values.[5]Its usage of integral images allows the features to be detected extremely quickly with high detection rate.[6]Therefore, comparing to SIFT, SURF is a faster feature detector with drawback of less accuracy in feature positions.[5]Another type of feature recently made practical for structure from motion are general curves (e.g., locally an edge with gradients in one direction), part of a technology known aspointless SfM,[7][8]useful when point features are insufficient, common in man-made environments.[9] The features detected from all the images will then be matched. One of the matching algorithms that track features from one image to another is theLucas–Kanade tracker.[10] Sometimes some of the matched features are incorrectly matched. This is why the matches should also be filtered.RANSAC(random sample consensus) is the algorithm that is usually used to remove the outlier correspondences. In the paper of Fischler and Bolles, RANSAC is used to solve thelocation determination problem(LDP), where the objective is to determine the points in space that project onto an image into a set of landmarks with known locations.[11] The feature trajectories over time are then used to reconstruct their 3D positions and the camera's motion.[12]An alternative is given by so-called direct approaches, where geometric information (3D structure and camera motion) is directly estimated from the images, without intermediate abstraction to features or corners.[13] There are several approaches to structure from motion. In incremental SfM,[14]camera poses are solved for and added one by one to the collection. In global SfM,[15][16]the poses of all cameras are solved for at the same time. 
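A minimal sketch of the correspondence stage described above — SIFT features, nearest-neighbour matching with a ratio test, and RANSAC-based outlier removal via the fundamental matrix — assuming an OpenCV build that provides cv2.SIFT_create; the ratio-test and RANSAC thresholds are illustrative defaults, not prescribed values.

```python
import cv2
import numpy as np

def match_and_filter(img1, img2, ratio=0.75):
    """Detect SIFT features, match them, and filter outliers with RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test on 2-nearest-neighbour matches
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the epipolar geometry removes outlier correspondences
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F
```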
A somewhat intermediate approach is out-of-core SfM, where several partial reconstructions are computed and then integrated into a global solution. Structure-from-motion photogrammetry with multi-view stereo provides hyperscale landform models using images acquired from a range of digital cameras and, optionally, a network of ground control points. The technique is not limited in temporal frequency and can provide point-cloud data comparable in density and accuracy to those generated by terrestrial and airborne laser scanning, at a fraction of the cost.[17][18][19] Structure from motion is also useful in remote or rugged environments, where terrestrial laser scanning is limited by equipment portability and airborne laser scanning is limited by terrain roughness causing loss of data and image foreshortening. The technique has been applied in many settings, such as rivers,[20] badlands,[21] sandy coastlines,[22][23] fault zones,[24] landslides,[25][26] and coral reef settings.[27] SfM has also been successfully applied to the assessment of change,[28] large-wood accumulation volume[29] and porosity[30] in fluvial systems, to the characterization of rock masses through the determination of properties such as the orientation and persistence of discontinuities,[31][32] and to the evaluation of the stability of rock cut slopes.[33] A full range of digital cameras can be utilized, including digital SLRs, compact digital cameras and even smartphones. Generally, though, higher-accuracy data will be achieved with more expensive cameras, which include lenses of higher optical quality. The technique therefore offers opportunities to characterize surface topography in unprecedented detail and, with multi-temporal data, to detect elevation, position and volumetric changes that are symptomatic of earth-surface processes. Structure from motion can be placed in the context of other digital surveying methods. Cultural heritage is present everywhere, and its structural monitoring, documentation and conservation is one of humanity's main duties (UNESCO). From this point of view, SfM is used to assess the state of a structure and to plan maintenance efforts and costs, monitoring and restoration. Because serious constraints often exist connected to the accessibility of a site, and because it may be impossible to install invasive surveying pillars, traditional surveying routines (such as total stations) cannot always be used; SfM instead provides a non-invasive approach to the structure, without direct interaction between the structure and any operator. The approach is sufficiently accurate when only qualitative considerations are needed, and it is fast enough to respond to a monument's immediate management needs.[34] The first operational phase is a careful preparation of the photogrammetric survey, in which the relation between the best distance from the object, the focal length, the ground sampling distance (GSD) and the sensor's resolution is established. With this information, the planned photographic acquisitions must be made using a vertical overlap of at least 60%.[35] Furthermore, structure-from-motion photogrammetry represents a non-invasive, highly flexible and low-cost methodology for digitizing historical documents.[36]
https://en.wikipedia.org/wiki/Structure_from_motion
Zero ASIC Corporation, formerlyAdapteva, Inc., is afablesssemiconductorcompanyfocusing on low powermany coremicroprocessordesign. The company was the second company to announce a design with 1,000 specialized processing cores on a singleintegrated circuit.[1][2] Adapteva was founded in 2008 with the goal of bringing a ten times advancement infloating-pointperformance per wattfor the mobile device market. Products are based on its Epiphany multi-coremultiple instruction, multiple data(MIMD) architecture and its ParallellaKickstarterproject promoting "a supercomputer for everyone" in September 2012. The company name is a combination of "adapt" and the Hebrew word "Teva" meaning nature. Adapteva was founded in March 2008, by Andreas Olofsson. The company was founded with the goal of bringing a 10× advancement infloating-pointprocessingenergy efficiencyfor themobile devicemarket. In May 2009, Olofsson had a prototype of a new type ofmassively parallelmulti-corecomputer architecture. The initial prototype was implemented in 65 nm and had 16 independent microprocessor cores. The initial prototypes enabled Adapteva to secure US$1.5 million in series-A funding from BittWare, a company fromConcord, New Hampshire, in October 2009.[3] Adapteva's first commercial chip product started sampling to customers in early May 2011 and they soon thereafter announced the capability to put up to 4,096 cores on a single chip. TheEpiphany III, was announced in October 2011 using 28 nm and 65 nm manufacturing processes. Adapteva's main product family is the Epiphany scalable multi-coreMIMDarchitecture. The Epiphany architecture could accommodate chips with up to 4,096RISCout-of-ordermicroprocessors, all sharing a single32-bitflat memory space. EachRISCprocessor in the Epiphany architecture issuperscalarwith 64× 32-bitunified register file(integer orsingle-precision) microprocessor operating up to 1GHzand capable of 2GFLOPS(single-precision). Epiphany's RISC processors use a custominstruction set architecture(ISA) optimised forsingle-precision floating-point,[4]but are programmable in high levelANSI Cusing a standardGNU-GCCtool chain. Each RISC processor (in current implementations; not fixed in the architecture) has 32KBof local memory. Code (possibly duplicated in each core) and stack space should be in thatlocal memory; in addition (most) temporary data should fit there for full speed. Data can also be used from other processor cores local memory at a speed penalty, or off-chip RAM with much larger speed penalty. The memory architecture does not employ explicit hierarchy ofhardware caches, similar to the Sony/Toshiba/IBMCell processor, but with the additional benefit of off-chip and inter-core loads and stores being supported (which simplifies porting software to the architecture). It is a hardware implementation ofpartitioned global address space.[citation needed] This eliminated the need for complexcache coherencyhardware, which places a practical limit on the number of cores in a traditionalmulticore system. The design allows the programmer to leverage greater foreknowledge of independent data access patterns to avoid the runtime cost of figuring this out. All processor nodes are connected through anetwork on chip, allowing efficient message passing.[5] The architecture is designed to scale almost indefinitely, with 4e-linksallowing multiple chips to be combined in a grid topology, allowing for systems with thousands of cores. 
On August 19, 2012, Adapteva posted some specifications and information about Epiphany multi-core coprocessors.[6] In September 2012, a 16-core version, the Epiphany-III (E16G301), was produced using 65 nm[9](11.5 mm2, 500 MHz chip[10]) and engineering samples of 64-core Epiphany-IV (E64G401) were produced using 28 nmGlobalFoundriesprocess (800 MHz).[11] The primary markets for the Epiphany multi-core architecture include: In September 2012, Adapteva started project Parallella onKickstarter, which was marketed as "A Supercomputer for everyone." Architecture reference manuals for the platform were published as part of the campaign to attract attention to the project.[12]The US$750,000 funding goal was reached in a month, with a minimum contribution of US$99 entitling backers to obtain one device; although the initial deadline was set for May 2013, the first single-board computers with 16-core Epiphany chip were finally shipped in December 2013.[13] Size of board is planned to be 86 mm × 53 mm (3.4 in × 2.1 in).[14][15][16] The Kickstarter campaign raised US$898,921.[17][18]Raising US$3 million goal was unsuccessful, so no 64-core version of Parallella will be mass-produced.[19]Kickstarter users having donated more than US$750 will get "parallella-64" variant with 64-core coprocessor (made from initialprototype manufacturingwith 50 chips yield per wafer).[20] By 2016, the firm hadtaped outa 1024-core64-bitvariant of their Epiphany architecture that featured: larger local stores (64 KB), 64-bit addressing,double-precision floating-pointarithmetic orSIMDsingle-precision, and 64-bit integer instructions, implemented in the 16 nm process node.[21]This design included instruction set enhancements aimed atdeep-learningandcryptographyapplications. In July 2017, Adapteva's founder became aDARPAMTOprogram manager[22]and announced that the Epiphany V was "unlikely" to become available as a commercial product.[23] The 16-core Parallella achieves roughly 5.0 GFLOPS/W, and the 64-core Epiphany-IV made with 28 nm estimated as 50 GFLOPS/W (single-precision),[24]and 32-board system based on them achieves 15 GFLOPS/W.[25]For comparison, top GPUs from AMD and Nvidia reached 10 GFLOPS/W for single-precision in 2009–2011 timeframe.[26]
https://en.wikipedia.org/wiki/Adapteva_Epiphany
TheCell Broadband Engine(Cell/B.E.) is a 64-bitmulti-core processorandmicroarchitecturedeveloped bySony,Toshiba, andIBM—an alliance known as "STI". It combines a general-purposePowerPCcore, called the Power Processing Element (PPE), with multiple specializedcoprocessors, known as Synergistic Processing Elements (SPEs), which accelerate tasks such asmultimediaandvector processing.[2] The architecture was developed over a four-year period beginning in March 2001, with Sony reporting a development budget of approximatelyUS$400 million.[3]Its first major commercial application was in Sony'sPlayStation 3home video game console, released in 2006. In 2008, a modified version of the Cell processor powered IBM'sRoadrunner, the first supercomputer to sustain onepetaFLOPS. Other applications include high-performance computing systems fromMercury Computer Systemsand specializedarcade system boards. Cell emphasizesmemory coherence, power efficiency, and peakcomputational throughput, but its design presented significant challenges for software development.[4]IBM offered aLinux-basedsoftware development kitto facilitate programming on the platform.[5] In mid-2000, Sony, Toshiba, and IBM formed the STI alliance to develop a new microprocessor.[6]The STI Design Center opened in March 2001 inAustin, Texas. Over the next four years, more than 400 engineers collaborated on the project, with IBM contributing from eleven of its design centers.[7] Initialpatentsdescribed a configuration with fourPower Processing Elements(PPEs), each paired with eight Synergistic Processing Elements (SPEs), for a theoretical peak performance of 1 teraFLOPS.[citation needed]However, only a scaled-down design—one PPE with eight SPEs—was ultimately manufactured.[8] Fabrication of the initial Cell chip began on a90 nmSOI (silicon on insulator) process.[8]In March 2007, IBM transitioned production to a65 nm process,[8][9]followed by a45 nm processannounced in February 2008.[10]Bandai Namco Entertainmentused the Cell processor in itsNamco System 357and 369 arcade boards.[citation needed] In May 2008, IBM introduced thePowerXCell 8i, a double-precision variant of the Cell processor, used in systems such as IBM's Roadrunner supercomputer, the first to achieve one petaFLOPS and the fastest until late 2009.[11][12] IBM ceased development of higher-core-count Cell variants (such as a 32-APU version) in late 2009,[13][14]but continued supporting existing Cell-based products.[15] On May 17, 2005, Sony confirmed the Cell configuration used in thePlayStation 3: one PPE and seven SPEs.[16][17][18]To improve manufacturingyield, the processor is initially fabricated with eight SPEs. After production,each chip is tested, and if a defect is found in one SPE, it is disabled usinglaser trimming. This approach minimizes waste by utilizing processors that would otherwise be discarded. Even in chips without defects, one SPE is intentionally disabled to ensure consistency across units.[19][20]Of the seven operational SPEs, six are available for developers to use in games and applications, while the seventh is reserved for the console's operating system.[20]The chip operates at a clock speed of 3.2 GHz.[21]Sony also used the Cell in itsZegohigh-performance media computing server. The PPE supportssimultaneous multithreading(SMT) and can execute two threads, while each active SPE supports one thread. 
In the PlayStation 3 configuration, the Cell processor supports up to nine threads.[citation needed] On June 28, 2005, IBM and Mercury Computer Systems announced a partnership to use Cell processors in embedded systems for medical imaging, aerospace, and seismic processing, among other fields.[22] Mercury used the full Cell processor with eight active SPEs.[citation needed] Mercury later released blade servers and PCI Express accelerator cards based on the architecture.[23] In 2006, IBM introduced the QS20 blade server, offering up to 410 gigaFLOPS per module in single-precision performance. The QS22 blade, based on the PowerXCell 8i, was used in IBM's Roadrunner supercomputer.[11][12] On April 8, 2008, Fixstars Corporation released a PCI Express accelerator board based on the PowerXCell 8i.[23] The Cell Broadband Engine, or Cell as it is more commonly known, is a microprocessor intended as a hybrid of conventional desktop processors (such as the Athlon 64 and Core 2 families) and more specialized high-performance processors, such as the NVIDIA and ATI graphics processors (GPUs). The longer name indicates its intended use, namely as a component in current and future online distribution systems; as such it may be utilized in high-definition displays and recording equipment, as well as HDTV systems. Additionally, the processor may be suited to digital imaging systems (medical, scientific, etc.) and physical simulation (e.g., scientific and structural engineering modeling). As used in the PlayStation 3 it has 250 million transistors.[24] In a simple analysis, the Cell processor can be split into four components: external input and output structures; the main processor, called the Power Processing Element (PPE) (a two-way simultaneous-multithreaded PowerPC 2.02 core);[25] eight fully functional co-processors called the Synergistic Processing Elements, or SPEs; and a specialized high-bandwidth circular data bus connecting the PPE, input/output elements, and the SPEs, called the Element Interconnect Bus or EIB. To achieve the high performance needed for mathematically intensive tasks, such as decoding/encoding MPEG streams, generating or transforming three-dimensional data, or undertaking Fourier analysis of data, the Cell processor marries the SPEs and the PPE via the EIB to give access, via fully cache-coherent DMA (direct memory access), to both main memory and other external data storage. To make the best of the EIB, and to overlap computation and data transfer, each of the nine processing elements (PPE and SPEs) is equipped with a DMA engine. Since an SPE's load/store instructions can only access its own local scratchpad memory, each SPE depends entirely on DMAs to transfer data to and from main memory and other SPEs' local memories. A DMA operation can transfer either a single block of up to 16 KB, or a list of 2 to 2048 such blocks. One of the major design decisions in the architecture of Cell is the use of DMAs as a central means of intra-chip data transfer, with a view to enabling maximal asynchrony and concurrency in data processing inside a chip.[26] The PPE, which is capable of running a conventional operating system, has control over the SPEs and can start, stop, interrupt, and schedule processes running on the SPEs. To this end, the PPE has additional instructions relating to the control of the SPEs. Unlike the SPEs, the PPE can read and write main memory and the local memories of the SPEs through standard load/store instructions.
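To make the DMA-driven data flow concrete, the sketch below shows how an SPE-side program might pull one block from main memory into its local store. It assumes the MFC intrinsics from the IBM Cell SDK's spu_mfcio.h (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all); the buffer and function names are illustrative only.

```c
#include <spu_mfcio.h>

/* Illustrative 16 KB local-store buffer, aligned for efficient DMA. */
static volatile char local_buf[16384] __attribute__((aligned(128)));

/* Hypothetical helper: fetch one block from the given effective address
 * in main memory into the SPE's local store and wait for completion. */
void fetch_block(unsigned long long main_mem_ea)
{
    const unsigned int tag = 3;   /* DMA tag group (0..31) */

    /* Queue a "get": 16 KB is the largest single DMA element; larger or
     * scattered transfers would use a DMA list of 2 to 2048 elements. */
    mfc_get(local_buf, main_mem_ea, sizeof(local_buf), tag, 0, 0);

    /* Block until all transfers in this tag group have completed. */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();

    /* local_buf now holds the data; real code would typically double-buffer
     * so that computation overlaps the next DMA transfer. */
}
```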
The SPEs are not fully autonomous and require the PPE to prime them before they can do any useful work. As most of the "horsepower" of the system comes from the synergistic processing elements, the use ofDMAas a method of data transfer and the limited localmemory footprintof each SPE pose a major challenge to software developers who wish to make the most of this horsepower, demanding careful hand-tuning of programs to extract maximal performance from this CPU. The PPE and bus architecture includes various modes of operation giving different levels ofmemory protection, allowing areas of memory to be protected from access by specific processes running on the SPEs or the PPE. Both the PPE and SPE areRISCarchitectures with a fixed-width 32-bit instruction format. The PPE contains a 64-bitgeneral-purpose registerset (GPR), a 64-bit floating-point register set (FPR), and a 128-bitAltivecregister set. The SPE contains 128-bit registers only. These can be used for scalar data types ranging from 8-bits to 64-bits in size, or forSIMDcomputations on various integer and floating-point formats. System memory addresses for both the PPE and SPE are expressed as 64-bit values. Local store addresses internal to the SPU (Synergistic Processor Unit) processor are expressed as a 32-bit word. In documentation relating to Cell, a word is always taken to mean 32 bits, a doubleword means 64 bits, and a quadword means 128 bits. In 2008, IBM announced a revised variant of the Cell called thePowerXCell 8i,[27]which is available in QS22Blade Serversfrom IBM. The PowerXCell is manufactured on a65 nmprocess, and adds support for up to 32 GB of slotted DDR2 memory, as well as dramatically improvingdouble-precision floating-pointperformance on the SPEs from a peak of about 12.8GFLOPSto 102.4 GFLOPS total for eight SPEs, which, coincidentally, is the same peak performance as theNEC SX-9vector processor released around the same time. TheIBM Roadrunnersupercomputer, the world's fastest during 2008–2009, consisted of 12,240 PowerXCell 8i processors, along with 6,562AMD Opteronprocessors.[28]The PowerXCell 8i powered super computers also dominated all of the top 6 "greenest" systems in the Green500 list, with highest MFLOPS/Watt ratio supercomputers in the world.[29]Beside the QS22 and supercomputers, the PowerXCell processor is also available as an accelerator on a PCI Express card and is used as the core processor in theQPACEproject. Since the PowerXCell 8i removed the RAMBUS memory interface, and added significantly larger DDR2 interfaces and enhanced SPEs, the chip layout had to be reworked, which resulted in both larger chip die and packaging.[30] While the Cell chip can have a number of different configurations, the basic configuration is amulti-corechip composed of one "Power Processor Element" ("PPE") (sometimes called "Processing Element", or "PE"), and multiple "Synergistic Processing Elements" ("SPE").[31]The PPE and SPEs are linked together by an internal high speed bus dubbed "Element Interconnect Bus" ("EIB"). ThePPE[32][33][34]is thePowerPCbased, dual-issue in-order two-waysimultaneous-multithreadedCPUcore with a 23-stage pipeline acting as the controller for the eight SPEs, which handle most of the computational workload. PPE has limited out of order execution capabilities; it can perform loads out of order and has delayed execution pipelines. 
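To see where the PowerXCell 8i's 102.4 GFLOPS double-precision figure quoted above comes from, the arithmetic (assuming each enhanced SPE retires one two-wide double-precision fused multiply-add per cycle, which is not stated explicitly here) is:

\[ 8\ \text{SPEs} \times \underbrace{(2 \times 2)}_{4\ \text{FLOPs/cycle}} \times 3.2\ \text{GHz} = 102.4\ \text{GFLOPS}, \quad \text{i.e. } 12.8\ \text{GFLOPS per SPE}. \]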
The PPE works with conventional operating systems due to its similarity to other 64-bit PowerPC processors, while the SPEs are designed for vectorized floating-point code execution. The PPE contains a 32 KiB level 1 instruction cache, a 32 KiB level 1 data cache, and a 512 KiB level 2 cache. The size of a cache line is 128 bytes in all caches.[27]: 136–137, 141 Additionally, IBM has included an AltiVec (VMX) unit[35] which is fully pipelined for single-precision floating point (AltiVec 1 does not support double-precision floating-point vectors), a 32-bit Fixed Point Unit (FXU) with a 64-bit register file per thread, a Load and Store Unit (LSU), a 64-bit Floating-Point Unit (FPU), a Branch Unit (BRU), and a Branch Execution Unit (BXU).[32] The PPE consists of three main units: the Instruction Unit (IU), the Execution Unit (XU), and the vector/scalar execution unit (VSU). The IU contains the L1 instruction cache, branch prediction hardware, instruction buffers, and dependency checking logic. The XU contains the integer execution units (FXU) and the load-store unit (LSU). The VSU contains all of the execution resources for the FPU and VMX. Each PPE can complete two double-precision operations per clock cycle using a scalar fused multiply-add instruction, which translates to 6.4 GFLOPS at 3.2 GHz; or eight single-precision operations per clock cycle with a vector fused multiply-add instruction, which translates to 25.6 GFLOPS at 3.2 GHz.[36] The PPE was designed specifically for the Cell processor, but during development Microsoft approached IBM wanting a high-performance processor core for its Xbox 360. IBM complied and made the tri-core Xenon processor, based on a slightly modified version of the PPE with added VMX128 extensions.[37][38] Each SPE is a dual-issue in-order processor composed of a "Synergistic Processing Unit",[39] SPU, and a "Memory Flow Controller", MFC (DMA, MMU, and bus interface). SPEs do not have any branch prediction hardware (hence there is a heavy burden on the compiler).[40] Each SPE has six execution units divided among odd and even pipelines. The SPU runs a specially developed instruction set (ISA) with 128-bit SIMD organization[35][2][41] for single- and double-precision instructions. With the current generation of the Cell, each SPE contains a 256 KiB embedded SRAM for instructions and data, called "Local Storage" (not to be mistaken for "Local Memory" in Sony's documents that refer to the VRAM), which is visible to the PPE and can be addressed directly by software. Each SPE can support up to 4 GiB of local store memory. The local store does not operate like a conventional CPU cache since it is neither transparent to software nor does it contain hardware structures that predict which data to load. Each SPE contains a 128-bit, 128-entry register file and measures 14.5 mm2 on a 90 nm process. An SPE can operate on sixteen 8-bit integers, eight 16-bit integers, four 32-bit integers, or four single-precision floating-point numbers in a single clock cycle, as well as perform a memory operation. Note that the SPU cannot directly access system memory; the 64-bit virtual memory addresses formed by the SPU must be passed from the SPU to the SPE memory flow controller (MFC) to set up a DMA operation within the system address space. In one typical usage scenario, the system will load the SPEs with small programs (similar to threads), chaining the SPEs together to handle each step in a complex operation.
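The single- and double-precision figures quoted above follow directly from the width of the fused multiply-add units (simple arithmetic, not an additional specification): a four-lane single-precision FMA counts as eight floating-point operations per cycle, while the scalar double-precision case counts as two:

\[ 4\ \text{lanes} \times 2\ \tfrac{\text{FLOPs}}{\text{FMA}} \times 3.2\ \text{GHz} = 25.6\ \text{GFLOPS}, \qquad 1 \times 2 \times 3.2\ \text{GHz} = 6.4\ \text{GFLOPS}. \]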
For instance, aset-top boxmight load programs for reading a DVD, video and audio decoding, and display and the data would be passed off from SPE to SPE until finally ending up on the TV. Another possibility is to partition the input data set and have several SPEs performing the same kind of operation in parallel. At 3.2 GHz, each SPE gives a theoretical 25.6GFLOPSof single-precision performance. Compared to itspersonal computercontemporaries, the relatively high overall floating-point performance of a Cell processor seemingly dwarfs the abilities of the SIMD unit in CPUs like thePentium 4and theAthlon 64. However, comparing only floating-point abilities of a system is a one-dimensional and application-specific metric. Unlike a Cell processor, such desktop CPUs are more suited to the general-purpose software usually run on personal computers. In addition to executing multiple instructions per clock, processors from Intel and AMD featurebranch predictors. The Cell is designed to compensate for this with compiler assistance, in which prepare-to-branch instructions are created. For double-precision floating-point operations, as sometimes used in personal computers and often used in scientific computing, Cell performance drops by an order of magnitude, but still reaches 20.8 GFLOPS (1.8 GFLOPS per SPE, 6.4 GFLOPS per PPE). The PowerXCell 8i variant, which was specifically designed for double-precision, reaches 102.4 GFLOPS in double-precision calculations.[42] Tests by IBM show that the SPEs can reach 98% of their theoretical peak performance running optimized parallel matrix multiplication.[36] Toshibahas developed aco-processorpowered by four SPEs, but no PPE, called theSpursEnginedesigned to accelerate 3D and movie effects in consumer electronics. Each SPE has a local memory of 256 KB.[43]In total, the SPEs have 2 MB of local memory. The EIB is a communication bus internal to the Cell processor which connects the various on-chip system elements: the PPE processor, the memory controller (MIC), the eight SPE coprocessors, and two off-chip I/O interfaces, for a total of 12 participants in the PS3 (the number of SPU can vary in industrial applications). The EIB also includes an arbitration unit which functions as a set of traffic lights. In some documents, IBM refers to EIB participants as 'units'. The EIB is presently implemented as a circular ring consisting of four 16-byte-wide unidirectional channels which counter-rotate in pairs. When traffic patterns permit, each channel can convey up to three transactions concurrently. As the EIB runs at half the system clock rate the effective channel rate is 16 bytes every two system clocks. At maximumconcurrency, with three active transactions on each of the four rings, the peak instantaneous EIB bandwidth is 96 bytes per clock (12 concurrent transactions × 16 bytes wide / 2 system clocks per transfer). While this figure is often quoted in IBM literature, it is unrealistic to simply scale this number by processor clock speed. The arbitration unitimposes additional constraints. IBM Senior EngineerDavid Krolak, EIB lead designer, explains the concurrency model: A ring can start a new op every three cycles. Each transfer always takes eight beats. That was one of the simplifications we made, it's optimized for streaming a lot of data. If you do small ops, it does not work quite as well. 
If you think of eight-car trains running around this track, as long as the trains aren't running into each other, they can coexist on the track.[44] Each participant on the EIB has one 16-byte read port and one 16-byte write port. The limit for a single participant is to read and write at a rate of 16 bytes per EIB clock (for simplicity often regarded 8 bytes per system clock). Each SPU processor contains a dedicatedDMAmanagement queue capable of scheduling long sequences of transactions to various endpoints without interfering with the SPU's ongoing computations; these DMA queues can be managed locally or remotely as well, providing additional flexibility in the control model. Data flows on an EIB channel stepwise around the ring. Since there are twelve participants, the total number of steps around the channel back to the point of origin is twelve. Six steps is the longest distance between any pair of participants. An EIB channel is not permitted to convey data requiring more than six steps; such data must take the shorter route around the circle in the other direction. The number of steps involved in sending the packet has very little impact on transfer latency: the clock speed driving the steps is very fast relative to other considerations. However, longer communication distances are detrimental to the overall performance of the EIB as they reduce available concurrency. Despite IBM's original desire to implement the EIB as a more powerful cross-bar, the circular configuration they adopted to spare resources rarely represents a limiting factor on the performance of the Cell chip as a whole. In the worst case, the programmer must take extra care to schedule communication patterns where the EIB is able to function at high concurrency levels. David Krolak explained: Well, in the beginning, early in the development process, several people were pushing for a crossbar switch, and the way the bus is designed, you could actually pull out the EIB and put in a crossbar switch if you were willing to devote more silicon space on the chip to wiring. We had to find a balance between connectivity and area, and there just was not enough room to put a full crossbar switch in. So we came up with this ring structure which we think is very interesting. It fits within the area constraints and still has very impressive bandwidth.[44] At 3.2 GHz, each channel flows at a rate of 25.6 GB/s. Viewing the EIB in isolation from the system elements it connects, achieving twelve concurrent transactions at this flow rate works out to an abstract EIB bandwidth of 307.2 GB/s. Based on this view many IBM publications depict available EIB bandwidth as "greater than 300 GB/s". This number reflects the peak instantaneous EIB bandwidth scaled by processor frequency.[45] However, other technical restrictions are involved in the arbitration mechanism for packets accepted onto the bus. The IBM Systems Performance group explained: Each unit on the EIB can simultaneously send and receive 16 bytes of data every bus cycle. The maximum data bandwidth of the entire EIB is limited by the maximum rate at which addresses are snooped across all units in the system, which is one per bus cycle. Since each snooped address request can potentially transfer up to 128 bytes, the theoretical peak data bandwidth on the EIB at 3.2 GHz is 128Bx1.6 GHz = 204.8 GB/s.[36] This quote apparently represents the full extent of IBM's public disclosure of this mechanism and its impact. 
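Collecting the EIB figures above into one place (arithmetic only, no new data): each of the four rings moves 16 bytes every two system clocks, three transactions can be in flight per ring, and snooping admits at most one 128-byte request per bus cycle (half the system clock), so

\[ \tfrac{16\ \text{B}}{2\ \text{clocks}} \times 3.2\ \text{GHz} = 25.6\ \tfrac{\text{GB}}{\text{s}} \text{ per channel}, \quad 12 \times \tfrac{16\ \text{B}}{2\ \text{clocks}} \times 3.2\ \text{GHz} = 307.2\ \tfrac{\text{GB}}{\text{s}}, \quad 128\ \text{B} \times 1.6\ \text{GHz} = 204.8\ \tfrac{\text{GB}}{\text{s}}. \]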
The EIB arbitration unit, the snooping mechanism, and interrupt generation on segment or page translation faults are not well described in the documentation set made public by IBM so far.[citation needed] In practice, effective EIB bandwidth can also be limited by the ring participants involved. While each of the nine processing cores can sustain 25.6 GB/s of reads and writes concurrently, the memory interface controller (MIC) is tied to a pair of XDR memory channels permitting a maximum flow of 25.6 GB/s for reads and writes combined, and the two I/O controllers are documented as supporting a peak combined input speed of 25.6 GB/s and a peak combined output speed of 35 GB/s. To add further to the confusion, some older publications cite EIB bandwidth assuming a 4 GHz system clock. This reference frame results in an instantaneous EIB bandwidth figure of 384 GB/s and an arbitration-limited bandwidth figure of 256 GB/s. All things considered, the theoretical 204.8 GB/s figure most often cited is the best one to bear in mind. The IBM Systems Performance group has demonstrated SPU-centric data flows achieving 197 GB/s on a Cell processor running at 3.2 GHz, so this number is a fair reflection of practice as well.[36] Cell contains a dual-channel Rambus XIO macro which interfaces to Rambus XDR memory. The memory interface controller (MIC) is separate from the XIO macro and is designed by IBM. The XIO-XDR link runs at 3.2 Gbit/s per pin. Two 32-bit channels can provide a theoretical maximum of 25.6 GB/s. The I/O interface, also a Rambus design, is known as FlexIO. The FlexIO interface is organized into 12 lanes, each lane being a unidirectional 8-bit wide point-to-point path. Five 8-bit wide point-to-point paths are inbound lanes to Cell, while the remaining seven are outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) at 2.6 GHz. The FlexIO interface can be clocked independently, typically at 3.2 GHz. Four inbound and four outbound lanes support memory coherency. Some companies, such as Leadtek, have released PCI-E cards based upon the Cell to allow for "faster than real time" transcoding of H.264, MPEG-2 and MPEG-4 video.[46] On August 29, 2007, IBM announced the BladeCenter QS21. Generating a measured 1.05 giga–floating point operations per second (gigaFLOPS) per watt, with peak performance of approximately 460 GFLOPS, it was one of the most power-efficient computing platforms of its time. A single BladeCenter chassis can achieve 6.4 tera–floating point operations per second (teraFLOPS) and over 25.8 teraFLOPS in a standard 42U rack.[47] On May 13, 2008, IBM announced the BladeCenter QS22. The QS22 introduced the PowerXCell 8i processor with five times the double-precision floating-point performance of the QS21, and the capacity for up to 32 GB of DDR2 memory on-blade.[48] IBM discontinued the blade server line based on Cell processors as of January 12, 2012.[49] Several companies provide PCI-e boards utilising the IBM PowerXCell 8i. The performance is reported as 179.2 GFLOPS (SP) and 89.6 GFLOPS (DP) at 2.8 GHz.[50][51] Sony's PlayStation 3 video game console was the first production application of the Cell processor, clocked at 3.2 GHz and containing seven out of eight operational SPEs, to allow Sony to increase the yield on processor manufacture. Only six of the seven SPEs are accessible to developers, as one is reserved by the OS.[52] Toshiba has produced HDTVs using Cell.
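The 25.6 GB/s XDR figure is likewise just arithmetic over the stated link parameters:

\[ 2\ \text{channels} \times 32\ \text{bits} \times 3.2\ \tfrac{\text{Gbit/s}}{\text{pin}} = 204.8\ \tfrac{\text{Gbit}}{\text{s}} = 25.6\ \tfrac{\text{GB}}{\text{s}}. \]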
They presented a system to decode 48standard definitionMPEG-2streams simultaneously on a1920×1080screen.[53][54]This can enable a viewer to choose a channel based on dozens of thumbnail videos displayed simultaneously on the screen. Toshiba produced a laptop,QosmioG55, released in 2008, that contains Cell technology embedded into it. Its CPU otherwise is anIntel Corex86-based chip as is common onToshiba computers.[55] IBM's supercomputer,IBM Roadrunner, was a hybrid of General Purpose x86-64Opteronas well as Cell processors. This system assumed the #1 spot on the June 2008 Top 500 list as the first supercomputer to run atpetaFLOPSspeeds, having gained a sustained 1.026 petaFLOPS speed using the standardLINPACK benchmark. IBM Roadrunner used the PowerXCell 8i version of the Cell processor, manufactured using 65 nm technology and enhanced SPUs that can handle double precision calculations in the 128-bit registers, reaching double precision 102 GFLOPs per chip.[56][57] Clusters ofPlayStation 3consoles are an attractive alternative to high-end systems based on Cell blades. Innovative Computing Laboratory, a group led byJack Dongarra, in the Computer Science Department at the University of Tennessee, investigated such an application in depth.[58]Terrasoft Solutions is selling 8-node and 32-node PS3 clusters withYellow Dog Linuxpre-installed, an implementation of Dongarra's research. As first reported byWiredon October 17, 2007,[59]an interesting application of using PlayStation 3 in a cluster configuration was implemented by AstrophysicistGaurav Khanna, from the Physics department ofUniversity of Massachusetts Dartmouth, who replaced time used on supercomputers with a cluster of eight PlayStation 3s. Subsequently, the next generation of this machine, now called thePlayStation 3Gravity Grid, uses a network of 16 machines, and exploits the Cell processor for the intended application which is binaryblack holecoalescence usingperturbation theory. In particular, the cluster performs astrophysical simulations of largesupermassive black holescapturing smaller compact objects and has generated numerical data that has been published multiple times in the relevant scientific research literature.[60]The Cell processor version used by the PlayStation 3 has a main CPU and 6 SPEs available to the user, giving the Gravity Grid machine a net of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations which would otherwise cost $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localizable, and so are well-suited to this architecture. Khanna claims that the cluster's performance exceeds that of a 100+ Intel Xeon core based traditional Linux cluster on his simulations. The PS3 Gravity Grid gathered significant media attention through 2007,[61]2008,[62][63]2009,[64][65][66]and 2010.[67][68] The computational Biochemistry and Biophysics lab at theUniversitat Pompeu Fabra, inBarcelona, deployed in 2007 aBOINCsystem calledPS3GRID[69]for collaborative computing based on the CellMD software, the first one designed specifically for the Cell processor. The United StatesAir Force Research Laboratoryhas deployed a PlayStation 3 cluster of over 1700 units, nicknamed the "Condor Cluster", for analyzinghigh-resolutionsatellite imagery. 
The Air Force claims the Condor Cluster would be the 33rd largest supercomputer in the world in terms of capacity.[70] The lab has opened up the supercomputer for use by universities for research.[71] With the help of the computing power of over half a million PlayStation 3 consoles, the distributed computing project Folding@home has been recognized by Guinness World Records as the most powerful distributed network in the world. The first record was achieved on September 16, 2007, as the project surpassed one petaFLOPS, which had never previously been attained by a distributed computing network. Additionally, the collective efforts enabled PS3s alone to reach the petaFLOPS mark on September 23, 2007. In comparison, the world's second-most powerful supercomputer at the time, IBM's Blue Gene/L, performed at around 478.2 teraFLOPS, which means Folding@home's computing power was approximately twice Blue Gene/L's (although the CPU interconnect in Blue Gene/L is more than one million times faster than the mean network speed in Folding@home). As of May 7, 2011, Folding@home runs at about 9.3 x86 petaFLOPS, with 1.6 petaFLOPS generated by 26,000 active PS3s alone. IBM announced on April 25, 2007, that it would begin integrating its Cell Broadband Engine Architecture microprocessors into the company's System z line of mainframes.[72] This has led to a gameframe. The architecture of the processor makes it better suited to hardware-assisted cryptographic brute-force attack applications than conventional processors.[73] Due to the flexible nature of the Cell, there are several possibilities for the utilization of its resources, not limited to just different computing paradigms:[74] The PPE maintains a job queue, schedules jobs on the SPEs, and monitors progress. Each SPE runs a "mini kernel" whose role is to fetch a job, execute it, and synchronize with the PPE (a rough sketch of such a loop is given below). The mini kernel and the scheduling are distributed across the SPEs. Tasks are synchronized using mutexes or semaphores as in a conventional operating system. Ready-to-run tasks wait in a queue for an SPE to execute them. The SPEs use shared memory for all tasks in this configuration. Alternatively, each SPE can run a distinct program. Data comes from an input stream and is sent to the SPEs. When an SPE has finished processing, the output data is sent to an output stream. This provides a flexible and powerful architecture for stream processing, and allows explicit scheduling for each SPE separately. Other processors are also able to perform streaming tasks but are limited by the kernel loaded. In 2005, patches enabling Cell support in the Linux kernel were submitted for inclusion by IBM developers.[75] Arnd Bergmann (one of the developers of the aforementioned patches) also described the Linux-based Cell architecture at LinuxTag 2005.[76] As of release 2.6.16 (March 20, 2006), the Linux kernel officially supports the Cell processor.[77] Both the PPE and SPEs are programmable in C/C++ using a common API provided by libraries. Fixstars Solutions provides Yellow Dog Linux for IBM and Mercury Cell-based systems, as well as for the PlayStation 3.[78] Terra Soft strategically partnered with Mercury to provide a Linux Board Support Package for Cell, and support and development of software applications on various other Cell platforms, including the IBM BladeCenter JS21 and Cell QS20, and Mercury Cell-based solutions.[79] Terra Soft also maintains the Y-HPC (High Performance Computing) Cluster Construction and Management Suite and Y-Bio gene sequencing tools.
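The sketch referenced above: a rough C outline of an SPE-side "mini kernel" loop for the job-queue model. Here atomic_take_job, dma_get, dma_put, process and notify_ppe_done are hypothetical helpers standing in for the atomic, DMA, and mailbox operations the Cell SDK actually provides.

```c
/* Hypothetical job descriptor held in a shared queue in main memory. */
struct job {
    unsigned long long input_ea;   /* effective address of input data  */
    unsigned long long output_ea;  /* effective address for the result */
    unsigned int       length;     /* number of bytes to process       */
};

/* Hypothetical helpers (would be built on MFC atomics, DMA, and mailboxes). */
extern int  atomic_take_job(unsigned long long queue_ea, struct job *j);
extern void dma_get(void *ls, unsigned long long ea, unsigned int len);
extern void dma_put(const void *ls, unsigned long long ea, unsigned int len);
extern void process(char *buf, unsigned int len);
extern void notify_ppe_done(const struct job *j);

/* SPE-side "mini kernel": repeatedly pull a job, run it, report back. */
void spe_mini_kernel(unsigned long long queue_ea)
{
    struct job j;
    static char buffer[16384] __attribute__((aligned(128)));

    while (atomic_take_job(queue_ea, &j)) {     /* returns 0 when queue is empty */
        dma_get(buffer, j.input_ea, j.length);  /* pull input into local store   */
        process(buffer, j.length);              /* the actual computation        */
        dma_put(buffer, j.output_ea, j.length); /* push result back to memory    */
        notify_ppe_done(&j);                    /* e.g. via an outbound mailbox  */
    }
}
```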
Y-Bio is built upon the RPM Linux standard for package management, and offers tools which help bioinformatics researchers conduct their work with greater efficiency.[80] IBM has developed a pseudo-filesystem for Linux coined "Spufs" that simplifies access to and use of the SPE resources. IBM currently maintains Linux kernel and GDB ports, while Sony maintains the GNU toolchain (GCC, binutils).[81][82] In November 2005, IBM released the "Cell Broadband Engine (CBE) Software Development Kit Version 1.0", consisting of a simulator and assorted tools, on its web site. Development versions of the latest kernel and tools for Fedora Core 4 are maintained at the Barcelona Supercomputing Center website.[83] In August 2007, Mercury Computer Systems released a Software Development Kit for PlayStation 3 for High-Performance Computing.[84] In November 2007, Fixstars Corporation released the new "CVCell" module aiming to accelerate several important OpenCV APIs for Cell. In a series of software calculation tests, they recorded execution times on a 3.2 GHz Cell processor that were between 6x and 27x faster compared with the same software on a 2.4 GHz Intel Core 2 Duo.[85] In October 2009, IBM released an OpenCL driver for POWER6 and CBE. This allows programs written in the cross-platform API to be easily run on the Cell/B.E.[86] (Figure caption: illustrations of the different generations of Cell/B.E. processors and the PowerXCell 8i. The images are not to scale; all Cell/B.E. packages measure 42.5×42.5 mm and the PowerXCell 8i measures 47.5×47.5 mm.)
https://en.wikipedia.org/wiki/CELL
Agraphics processing unit(GPU) is a specializedelectronic circuitdesigned fordigital image processingand to acceleratecomputer graphics, being present either as a discretevideo cardor embedded onmotherboards,mobile phones,personal computers,workstations, andgame consoles. GPUs were later found to be useful for non-graphic calculations involvingembarrassingly parallelproblems due to theirparallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields includingartificial intelligence(AI) where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training ofneural networksandcryptocurrency mining. Arcade system boardshave used specialized graphics circuits since the 1970s. In early video game hardware,RAMfor frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.[1] A specializedbarrel shiftercircuit helped the CPU animate theframebuffergraphics for various 1970sarcade video gamesfromMidwayandTaito, such asGun Fight(1975),Sea Wolf(1976), andSpace Invaders(1978).[2]TheNamco Galaxianarcade system in 1979 used specializedgraphics hardwarethat supportedRGB color, multi-colored sprites, andtilemapbackgrounds.[3]The Galaxian hardware was widely used during thegolden age of arcade video games, by game companies such asNamco,Centuri,Gremlin,Irem,Konami, Midway,Nichibutsu,Sega, and Taito.[4] TheAtari 2600in 1977 used a video shifter called theTelevision Interface Adaptor.[5]Atari 8-bit computers(1979) hadANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specificbitmappedor character modes and where the memory is stored (so there did not need to be a contiguous frame buffer).[clarification needed][6]6502machine codesubroutinescould be triggered onscan linesby setting a bit on a display list instruction.[clarification needed][7]ANTIC also supported smoothverticalandhorizontal scrollingindependent of the CPU.[8] TheNEC μPD7220was the first implementation of apersonal computergraphics display processor as a singlelarge-scale integration(LSI)integrated circuitchip. This enabled the design of low-cost, high-performance video graphics cards such as those fromNumber Nine Visual Technology. It became the best-known GPU until the mid-1980s.[9]It was the first fully integratedVLSI(very large-scale integration)metal–oxide–semiconductor(NMOS) graphics display processor for PCs, supported up to1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first ofIntel's graphics processing units.[10]The Williams Electronics arcade gamesRobotron 2084,Joust,Sinistar, andBubbles, all released in 1982, contain customblitterchips for operating on 16-color bitmaps.[11][12] In 1984,Hitachireleased the ARTC HD63484, the first majorCMOSgraphics processor for personal computers. The ARTC could display up to4K resolutionwhen inmonochromemode. It was used in a number of graphics cards and terminals during the late 1980s.[13]In 1985, theAmigawas released with a custom graphics chip including ablitterfor bitmap manipulation, line drawing, and area fill. It also included acoprocessorwith its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. 
for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986,Texas Instrumentsreleased theTMS34010, the first fully programmable graphics processor.[14]It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of theTexas Instruments Graphics Architecture("TIGA")Windows acceleratorcards. In 1987, theIBM 8514graphics system was released. It was one of the first video cards forIBM PC compatiblesthat implementedfixed-function2D primitives inelectronic hardware.Sharp'sX68000, released in 1987, used a custom graphics chipset[15]with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields.[16]It served as a development machine forCapcom'sCP Systemarcade board. Fujitsu'sFM Townscomputer, released in 1989, had support for a 16,777,216 color palette.[17]In 1988, the first dedicatedpolygonal 3Dgraphics boards were introduced in arcades with theNamco System 21[18]andTaitoAir System.[19] IBMintroduced itsproprietaryVideo Graphics Array(VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988,NEC Home Electronicsannounced its creation of theVideo Electronics Standards Association(VESA) to develop and promote aSuper VGA(SVGA)computer display standardas a successor to VGA. Super VGA enabledgraphics display resolutionsup to 800×600pixels, a 56% increase.[20] In 1991,S3 Graphicsintroduced theS3 86C911, which its designers named after thePorsche 911as an indication of the performance increase it promised.[21]The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added2Dacceleration support to their chips.[22]Fixed-functionWindows acceleratorssurpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market. Throughout the 1990s, 2DGUIacceleration evolved. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additionalapplication programming interfaces(APIs) arrived for a variety of tasks, such as Microsoft'sWinGgraphics libraryforWindows 3.x, and their laterDirectDrawinterface forhardware accelerationof 2D games inWindows 95and later. In the early- and mid-1990s,real-time3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as theSega Model 1,Namco System 22, andSega Model 2, and thefifth-generation video game consolessuch as theSaturn,PlayStation, andNintendo 64. Arcade systems such as the Sega Model 2 andSGIOnyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before appearing in consumer graphics cards.[23][24]Another early example is theSuper FXchip, aRISC-basedon-cartridge graphics chipused in someSNESgames, notablyDoomandStar Fox. 
Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system,[25] began working on integrating T&L into a single LSI solution for use in home computers in 1995;[26] this resulted in the Fujitsu Pinolite, the first 3D geometry processor for personal computers, released in 1997.[27] The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996.[28] In 1997, Mitsubishi released the 3Dpro/2MP, a GPU capable of transformation and lighting, for workstations and Windows NT desktops;[29] ATi used it for its FireGL 4000 graphics card, released in 1997.[30] The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994.[31] In the PC world, notable failed attempts at low-cost 3D graphics chips included the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D graphical user interface (GUI) acceleration entirely), such as the PowerVR and the 3dfx Voodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were among the first to do this well. In 1997, Rendition collaborated with Hercules and Fujitsu on a "Thriller Conspiracy" project which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia's GeForce 256; this card, designed to reduce the load placed upon the system's CPU, never made it to market.[citation needed] The NVIDIA RIVA 128 was one of the first consumer-facing GPUs to integrate a 3D processing unit and a 2D processing unit on a single chip. OpenGL was introduced in the early 1990s by Silicon Graphics as a professional graphics API, with proprietary hardware support for 3D rasterization. In 1994, Microsoft acquired Softimage, the dominant CGI movie production tool used for early CGI movie hits like Jurassic Park, Terminator 2 and Titanic. With that deal came a strategic relationship with SGI and a commercial license of their OpenGL libraries, enabling Microsoft to port the API to the Windows NT OS but not to the upcoming release of Windows 95. Although it was little known at the time, SGI had contracted with Microsoft to transition from Unix to the forthcoming Windows NT OS; the deal, signed in 1995, was not announced publicly until 1998. In the intervening period, Microsoft worked closely with SGI to port OpenGL to Windows NT. In that era, OpenGL had no standard driver model that would let competing hardware accelerators compete on the basis of support for higher-level 3D texturing and lighting functionality. In 1994, Microsoft announced DirectX 1.0 and support for gaming in the forthcoming Windows 95 consumer OS. In 1995, Microsoft announced the acquisition of UK-based RenderMorphics Ltd and the Direct3D driver model for the acceleration of consumer 3D graphics. The Direct3D driver model shipped with DirectX 2.0 in 1996. It included standards and specifications for 3D chip makers to compete to support 3D texture, lighting and Z-buffering. ATI, which was later to be acquired by AMD, began development on the first Direct3D GPUs.
Nvidia quickly pivoted from a failed deal with Sega in 1996 to aggressively embracing support for Direct3D. In this era Microsoft merged its internal Direct3D and OpenGL teams and worked closely with SGI to unify driver standards for both industrial and consumer 3D graphics hardware accelerators. Microsoft ran annual events for 3D chip makers called "Meltdowns" to test that their 3D hardware and drivers worked with both Direct3D and OpenGL. It was during this period of strong Microsoft influence over 3D standards that 3D accelerator cards moved beyond being simple rasterizers to become more powerful general-purpose processors, as support for hardware-accelerated texture mapping, lighting, Z-buffering and compute created the modern GPU. During this period the same Microsoft team responsible for Direct3D and OpenGL driver standardization introduced its own Microsoft 3D chip design called Talisman. Details of this era are documented extensively in the books "Game of X" v.1 and v.2 by Russel Demaria, "Renegades of the Empire" by Mike Drummond, "Opening the Xbox" by Dean Takahashi and "Masters of Doom" by David Kushner. The Nvidia GeForce 256 (also known as NV10) was the first consumer-level card with hardware-accelerated T&L. While the OpenGL API provided software support for texture mapping and lighting, the first 3D hardware acceleration for these features arrived with the first Direct3D-accelerated consumer GPUs. NVIDIA released the GeForce 256, marketed as the world's first GPU, integrating transform and lighting engines for advanced 3D graphics rendering. Nvidia was first to produce a chip capable of programmable shading: the GeForce 3. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in the Xbox console, this chip competed with the one in the PlayStation 2, which used a custom vector unit for hardware-accelerated vertex processing (commonly referred to as VU0/VU1). The earliest incarnations of shader execution engines used in the Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units, each with its own resources, with pixel shaders having tighter constraints (because they execute at higher frequencies than vertex shaders). Pixel shading engines were more akin to a highly customizable function block and did not "run" a program. Many of these disparities between vertex and pixel shading were not addressed until the Unified Shader Model. In October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating-point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations.
Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.[32] With the introduction of the Nvidia GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices. Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU for general-purpose computing on GPU, has found applications in fields as diverse as machine learning,[33] oil exploration, scientific image processing, linear algebra,[34] statistics,[35] 3D reconstruction, and stock options pricing. GPGPU was the precursor to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader.[clarification needed] This entails some overheads since units like the scan converter are involved where they are not needed (nor are triangle manipulations even a concern, except to invoke the pixel shader).[clarification needed] Nvidia's CUDA platform, first introduced in 2007,[36] was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability.[37] OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a 2011 report by Evans Data, OpenCL had become the second most popular HPC tool.[38] In 2010, Nvidia partnered with Audi to power their cars' dashboards, using the Tegra GPU to provide increased functionality to cars' navigation and entertainment systems.[39] Advances in GPU technology in cars helped advance self-driving technology.[40] AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M Series discrete GPUs for mobile devices.[41] Nvidia's Kepler line of graphics cards was released in 2012 and was used in Nvidia's 600 and 700 series cards. A feature of this GPU microarchitecture was GPU Boost, a technology that adjusts the clock speed of a video card to increase or decrease it according to its power draw.[42] The Kepler microarchitecture was manufactured on a 28 nm process. The PS4 and Xbox One were released in 2013; they both use GPUs based on AMD's Radeon HD 7850 and 7790.[43] Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan. Compared to the previous 40 nm technology, this manufacturing process allowed a 20 percent boost in performance while drawing less power.[44][45] Virtual reality headsets have high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release.[46][47] Cards based on the Pascal microarchitecture were released in 2016. The GeForce 10 series of cards belongs to this generation of graphics cards. They are made using a 16 nm manufacturing process which improves upon previous microarchitectures.[48] Nvidia released one non-consumer card under the new Volta architecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, and HBM2. Tensor cores are designed for deep learning, while high-bandwidth memory is on-die, stacked, lower-clocked memory that offers an extremely wide memory bus.
To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" suffix it adds to consumer gaming cards. In 2018, Nvidia launched the RTX 20 series GPUs that added ray-tracing cores to GPUs, improving their performance on lighting effects.[49]Polaris 11andPolaris 10GPUs from AMD are fabricated by a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards.[50]AMD also released the Vega GPU series for the high end market as a competitor to Nvidia's high end Pascal cards, also featuring HBM2 like the Titan V. In 2019, AMD released the successor to theirGraphics Core Next(GCN) microarchitecture/instruction set. Dubbed RDNA, the first product featuring it was theRadeon RX 5000 seriesof video cards.[51]The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled theRadeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing.[52]The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT.[53][54]The RX 6700 XT, which is based on Navi 22, was launched in early 2021.[55] ThePlayStation 5andXbox Series X and Series Swere released in 2020; they both use GPUs based on theRDNA 2microarchitecture with incremental improvements and different GPU configurations in each system's implementation.[56][57][58] Intelfirstentered the GPU marketin the late 1990s, but produced lackluster 3D accelerators compared to the competition at the time. Rather than attempting to compete with the high-end manufacturers Nvidia and ATI/AMD, they began integratingIntel Graphics TechnologyGPUs into motherboard chipsets, beginning with theIntel 810for the Pentium III, and later into CPUs. They began with theIntel Atom 'Pineview'laptop processor in 2009, continuing in 2010 with desktop processors in the first generation of theIntel Coreline and with contemporary Pentiums and Celerons. This resulted in a large nominal market share, as the majority of computers with an Intel CPU also featured this embedded graphics processor. These generally lagged behind discrete processors in performance. Intel re-entered the discrete GPU market in 2022 with itsArcseries, which competed with the then-current GeForce 30 series and Radeon 6000 series cards at competitive prices.[citation needed] In the 2020s, GPUs have been increasingly used for calculations involvingembarrassingly parallelproblems, such as training ofneural networkson enormous datasets that are needed forlarge language models. Specialized processing cores on some modern workstation's GPUs are dedicated fordeep learningsince they have significant FLOPS performance increases, using 4×4 matrix multiplication and division, resulting in hardware performance up to 128 TFLOPS in some applications.[59]These tensor cores are expected to appear in consumer cards, as well.[needs update][60] Many companies have produced GPUs under a number of brand names. In 2009,[needs update]Intel,Nvidia, andAMD/ATIwere the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition,Matrox[61]produces GPUs. Chinese companies such asJingjia Microhave also produced GPUs for the domestic market although in terms of worldwide sales, they still lag behind market leaders.[62] Modern smartphones use mostlyAdrenoGPUs fromQualcomm,PowerVRGPUs fromImagination Technologies, andMali GPUsfromARM. 
Modern GPUs have traditionally used most of theirtransistorsto do calculations related to3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration andframebuffercapabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000–HD7000 lack dedicated 2D acceleration; it is emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work oftexture mappingandrenderingpolygons. Later, dedicated hardware was added to accelerategeometriccalculations such as therotationandtranslationofverticesinto differentcoordinate systems. Recent developments in GPUs include support forprogrammable shaderswhich can manipulate vertices and textures with many of the same operations that are supported byCPUs,oversamplingandinterpolationtechniques to reducealiasing, and very high-precisioncolor spaces. Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in thesemiconductor device fabrication, theclock signalfrequency, and the number and size of various on-chip memorycaches. Performance is also affected by the number of streaming multiprocessors (SM) for NVidia GPUs, or compute units (CU) for AMD GPUs, or Xe cores for Intel discrete GPUs, which describe the number of on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with other SM/CUs on the GPU. GPU performance is typically measured in floating point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate.[63] Most GPUs made since 1995 support theYUVcolor spaceandhardware overlays, important fordigital videoplayback, and many GPUs made since 2000 also supportMPEGprimitives such asmotion compensationandiDCT. This hardware-accelerated video decoding, in which portions of thevideo decodingprocess andvideo post-processingare offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding". Recent graphics cards decodehigh-definition videoon the card, offloading the central processing unit. The most commonAPIsfor GPU accelerated video decoding areDxVAforMicrosoft Windowsoperating systems andVDPAU,VAAPI,XvMC, andXvBAfor Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded withMPEG-1,MPEG-2,MPEG-4 ASP (MPEG-4 Part 2),MPEG-4 AVC(H.264 / DivX 6),VC-1,WMV3/WMV9,Xvid/ OpenDivX (DivX 4), andDivX5codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2. There are severaldedicated hardware video decoding and encoding solutions. Video decoding processes that can be accelerated by modern GPU hardware are: These operations also have applications in video editing, encoding, and transcoding. An earlier GPU may support one or more 2D graphics API for 2D acceleration, such asGDIandDirectDraw.[64] A GPU can support one or more 3D graphics API, such asDirectX,Metal,OpenGL,OpenGL ES,Vulkan. 
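As a rough way of relating the TFLOPS figures discussed above to a chip's configuration, a common rule of thumb (an approximation, not a formula from this article, and assuming one fused multiply-add per shader ALU per cycle) is

\[ \text{peak FLOPS} \approx N_{\text{ALU}} \times 2 \times f_{\text{clock}}. \]

For a hypothetical GPU with 2048 shader ALUs running at 1.5 GHz this gives roughly 2048 × 2 × 1.5×10⁹ ≈ 6.1 TFLOPS of single precision; actual delivered throughput is lower, for the reasons given above.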
In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output.[65][66] In 1994, Sony used the term (now standing for graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU.[31] The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU".[67] It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines".[68] Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002.[69] The AMD Alveo MA35D, released in 2023, features dual VPUs, each built on a 5 nm process.[70] In personal computers, there are two main forms of GPUs. Each has many synonyms:[71] Most GPUs are designed for a specific use, real-time 3D graphics, or other mass calculations: Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes systems with dedicated discrete GPUs were called "DIS" systems, as opposed to "UMA" systems (see next section).[72] Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts. Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available. Technologies such as Scan-Line Interleave by 3dfx, SLI and NVLink by Nvidia, and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them.[73][74][75] Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video (processing multiple videos at once)[76][77][78] and 3D rendering,[79] for VFX,[80] GPGPU workloads and for simulations,[81] and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs. Integrated graphics processing units (IGPU), integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset,[82] or on the same die (integrated circuit) with the CPU (like AMD APU or Intel HD Graphics).
On certain motherboards,[83]AMD's IGPs can use dedicated sideport memory: a separate fixed block of high performance memory that is dedicated for use by the GPU. As of early 2007[update]computers with integrated graphics account for about 90% of all PC shipments.[84][needs update]They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004.[85]However, modern integrated graphics processors such asAMD Accelerated Processing UnitandIntel Graphics Technology(HD, UHD, Iris, Iris Pro, Iris Plus, andXe-LP) can handle 2D graphics or low-stress 3D graphics. Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between itsVRAMand GPU core. Thismemory busbandwidth can limit the performance of the GPU, thoughmulti-channel memorycan mitigate this deficiency.[86]Older integrated graphics chipsets lacked hardwaretransform and lighting, but newer ones include it.[87][88] On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics,[89]modern Intel processors with integrated graphics,[90]Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs (without needing a large static split of the RAM) and thanks to zero copy transfers, removes the need for either copying data over abus (computing)between physically separate RAM pools or copying between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data. Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI'sHyperMemoryand Nvidia'sTurboCache. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the highlatencyof the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory. It is common to use ageneral purpose graphics processing unit (GPGPU)as a modified form ofstream processor(or avector processor), runningcompute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics processing unit" above) GPU designers,AMDandNvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed withStanford Universityto create a GPU-based client for theFolding@homedistributed computing project for protein folding calculations. 
In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.[91][92] GPGPUs can be used for many types ofembarrassingly paralleltasks includingray tracing. They are generally suited to high-throughput computations that exhibitdata-parallelismto exploit the wide vector widthSIMDarchitecture of the GPU. GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.[93] GPUs support API extensions to theCprogramming language such asOpenCLandOpenMP. Furthermore, each GPU vendor introduced its own API which only works with their cards:AMD APP SDKfrom AMD, andCUDAfrom Nvidia. These allow functions calledcompute kernelsto run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.[citation needed] Since 2005 there has been interest in using the performance offered by GPUs forevolutionary computationin general, and for accelerating thefitnessevaluation ingenetic programmingin particular. Most approaches compilelinearortree programson the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU'sSIMDarchitecture.[94]However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there.[95]Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs. An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering. Therefore, it is desirable to attach a GPU to some external bus of a notebook.PCI Expressis the only bus used for this purpose. The port may be, for example, anExpressCardormPCIeport (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), aThunderbolt1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), aUSB4 port with Thunderbolt compatibility, or anOCuLinkport. Those ports are only available on certain notebook systems.[96]eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts.[97] Graphics processing units (GPU) have continued to increase in energy usage, while CPUs designers have recently[when?]focused on improving performance per watt. High performance GPUs may draw large amount of power, therefore intelligent techniques are required to manage GPU power consumption. 
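The compute-kernel model described above is easiest to see in code. The sketch below is a minimal CPU-side illustration using NumPy: the same arithmetic is applied independently to every element of a large buffer, which is the data-parallel shape that maps well onto a GPU's wide SIMD hardware. It is not GPU code; a real deployment would express the same kernel in OpenCL or CUDA, and the array size and function name here are arbitrary.

```python
import numpy as np

# Sketch of the "compute kernel" idea behind GPGPU: the same operation is
# applied independently to every element of a large buffer, which is what
# makes the work easy to spread across thousands of GPU lanes. This runs
# on the CPU with NumPy; a real deployment would express the kernel in
# OpenCL or CUDA. The function name and buffer size are illustrative only.

def saxpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Single-precision a*x + y, a classic data-parallel kernel."""
    return a * x + y  # every output element is independent of the others

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
print(saxpy(2.0, x, y)[:5])
```

On a GPU the same kernel would be launched over the index space of the buffer, one work-item or thread per element.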
Measures like the 3DMark2006 score per watt can help identify more efficient GPUs.[98] However, that may not adequately reflect efficiency in typical use, where much time is spent doing less demanding tasks.[99] With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into the peak performance of a system that uses that design. In 2013, 438.3 million GPUs were shipped globally, and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year.[100][101]
https://en.wikipedia.org/wiki/Graphics_processing_unit
Asystem on a chip(SoC) is anintegrated circuitthat combines most or all key components of acomputerorelectronic systemonto a single microchip.[1]Typically, an SoC includes acentral processing unit(CPU) withmemory,input/output, anddata storagecontrol functions, along with optional features like agraphics processing unit(GPU),Wi-Ficonnectivity, and radio frequency processing. This high level of integration minimizes the need for separate, discrete components, thereby enhancingpower efficiencyand simplifying device design. High-performance SoCs are often paired with dedicated memory, such asLPDDR, and flash storage chips, such aseUFSoreMMC, which may be stacked directly on top of the SoC in apackage-on-package(PoP) configuration or placed nearby on the motherboard. Some SoCs also operate alongside specialized chips, such ascellular modems.[2] Fundamentally, SoCs integrate one or moreprocessor coreswith critical peripherals. This comprehensive integration is conceptually similar to how amicrocontrolleris designed, but providing far greater computational power. While this unified design delivers lower power consumption and a reducedsemiconductor diearea compared to traditional multi-chip architectures, though at the cost of reduced modularity and component replaceability. SoCs are ubiquitous in mobile computing, where compact, energy-efficient designs are critical. They powersmartphones,tablets, andsmartwatches, and are increasingly important inedge computing, where real-time data processing occurs close to the data source. By driving the trend toward tighter integration, SoCs have reshaped modern hardware design, reshaping the design landscape for modern computing devices.[3][4] In general, there are three distinguishable types of SoCs: SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks as well asembedded systemsand in applications where previouslymicrocontrollerswould be used. Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability andmean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers.[5]Applications includeAI acceleration, embeddedmachine vision,[6]data collection,telemetry,vector processingandambient intelligence. Often embedded SoCs target theinternet of things, multimedia, networking, telecommunications andedge computingmarkets. Some examples of SoCs for embedded applications include theSTMicroelectronicsSTM32, theRaspberry Pi LtdRP2040, and theAMDZynq 7000. Mobile computingbased SoCs always bundle processors, memories, on-chipcaches,wireless networkingcapabilities and oftendigital camerahardware and firmware. With increasing memory sizes, high end SoCs will often have no memory and flash storage and instead, the memory andflash memorywill be placed right next to, or above (package on package), the SoC.[7]Some examples of mobile computing SoCs include: In 1992,Acorn Computersproduced theA3010, A3020 and A4000 range of personal computerswith the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous AcornARM-powered computers, these were four discrete chips. 
The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighterintegrationof hardware andfirmwaremodules, andLTEand otherwireless networkcommunications integrated on chip (integratednetwork interface controllers).[10] On modern laptops and mini PCs, the low-power variants ofAMD RyzenandIntel Coreprocessors use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips. An SoC consists of hardwarefunctional units, includingmicroprocessorsthat runsoftware code, as well as acommunications subsystemto connect, control, direct and interface between these functional modules. An SoC must have at least oneprocessor core, but typically an SoC has more than one core. Processor cores can be amicrocontroller,microprocessor(μP),[11]digital signal processor(DSP) orapplication-specific instruction set processor(ASIP) core.[12]ASIPs haveinstruction setsthat are customized for anapplication domainand designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. TheARM architectureis a common choice for SoC processor cores because some ARM-architecture cores aresoft processorsspecified asIP cores.[11] SoCs must havesemiconductor memoryblocks to perform their computation, as domicrocontrollersand otherembedded systems. Depending on the application, SoC memory may form amemory hierarchyandcache hierarchy. In the mobile computing market, this is common, but in manylow-powerembedded microcontrollers, this is not necessary. Memory technologies for SoCs includeread-only memory(ROM),random-access memory(RAM), Electrically Erasable Programmable ROM (EEPROM) andflash memory.[11]As in other computer systems, RAM can be subdivided into relatively faster but more expensivestatic RAM(SRAM) and the slower but cheaperdynamic RAM(DRAM). When an SoC has acachehierarchy, SRAM will usually be used to implementprocessor registersand cores'built-in cacheswhereas DRAM will be used formain memory. "Main memory" may be specific to a single processor (which can bemulti-core) when the SoChas multiple processors, in this case it isdistributed memoryand must be sent via§ Intermodule communicationon-chip to be accessed by a different processor.[12]For further discussion of multi-processing memory issues, seecache coherenceandmemory latency. SoCs include externalinterfaces, typically forcommunication protocols. These are often based upon industry standards such asUSB,Ethernet,USART,SPI,HDMI,I²C,CSI, etc. These interfaces will differ according to the intended application.Wireless networkingprotocols such asWi-Fi,Bluetooth,6LoWPANandnear-field communicationmay also be supported. When needed, SoCs includeanaloginterfaces includinganalog-to-digitalanddigital-to-analog converters, often forsignal processing. These may be able to interface with different types ofsensorsoractuators, includingsmart transducers. 
They may interface with application-specific modules or shields.[nb 1] Or they may be internal to the SoC, such as when an analog sensor is built into the SoC and its readings must be converted to digital signals for mathematical processing. Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution.[12]: 4 DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions. Typical DSP instructions include multiply-accumulate, Fast Fourier transform, fused multiply-add, and convolutions. As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops. SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits. SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.[13] Historically, a shared global computer bus typically connected the different components, also called "blocks", of the SoC.[13] A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures. Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip.[13]: xiii In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost.[13]: xiii This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks.[13]: xiii Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention.
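As a toy illustration of the network-on-chip routing mentioned above, the sketch below computes the path a packet would take across a 2D mesh under dimension-order ("XY") routing, one of the simplest NoC routing schemes. Real NoC routers also handle buffering, arbitration and flow control, which are omitted here; the grid coordinates are arbitrary.

```python
# Toy model of packet routing on a 2D-mesh network-on-chip using
# dimension-order ("XY") routing: a packet first travels along the X
# dimension to the destination column, then along Y to the destination row.
# Buffering, arbitration and flow control are deliberately omitted.

def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Return the sequence of routers visited from src to dst."""
    x, y = src
    path = [src]
    while x != dst[0]:               # move along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:               # then along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

path = xy_route((0, 0), (3, 2))
print(path, "hops:", len(path) - 1)  # hop count equals the Manhattan distance, 5
```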
Network-on-chip architectures take inspiration fromcommunication protocolslikeTCPand theInternet protocol suitefor on-chip communication,[13]although they typically have fewernetwork layers. Optimal network-on-chipnetwork architecturesare an ongoing area of much research interest. NoC architectures range from traditional distributed computingnetwork topologiessuch astorus,hypercube,meshesandtree networkstogenetic algorithm schedulingtorandomized algorithmssuch asrandom walks with branchingand randomizedtime to live(TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limitedfloorplanningchoices as the number of cores in SoCs increase, so asthree-dimensional integrated circuits(3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.[13] A system on a chip consists of both thehardware, described in§ Structure, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. Thedesign flowfor an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations (§ Optimization goals) and constraints. Most SoCs are developed from pre-qualified hardware componentIP core specificationsfor the hardware elements andexecution units, collectively "blocks", described above, together with softwaredevice driversthat may control their operation. Of particular importance are theprotocol stacksthat drive industry-standard interfaces likeUSB. The hardware blocks are put together usingcomputer-aided designtools, specificallyelectronic design automationtools; thesoftware modulesare integrated using a softwareintegrated development environment. SoCs components are also often designed inhigh-level programming languagessuch asC++,MATLABorSystemCand converted toRTLdesigns throughhigh-level synthesis(HLS) tools such asC to HDLorflow to HDL.[14]HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known tocomputer engineersin a manner independent of time scales, which are typically specified in HDL.[15]Other components can remain software and be compiled and embedded ontosoft-core processorsincluded in the SoC as modules in HDL asIP cores. Once thearchitectureof the SoC has been defined, any new hardware elements are written in an abstracthardware description languagetermedregister transfer level(RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is calledglue logic. Chips are verified for validation correctness before being sent to asemiconductor foundry. 
This process is calledfunctional verificationand it accounts for a significant portion of the time and energy expended in thechip design life cycle, often quoted as 70%.[16][17]With the growing complexity of chips,hardware verification languageslikeSystemVerilog,SystemC,e, and OpenVera are being used.Bugsfound in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration,emulationor prototyping onreprogrammable hardwareto verify and debug hardware and software for SoC designs prior to the finalization of the design, known astape-out.Field-programmable gate arrays(FPGAs) are favored for prototyping SoCs becauseFPGA prototypesare reprogrammable, allowdebuggingand are more flexible thanapplication-specific integrated circuits(ASICs).[18][19] With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.[citation needed] FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus[20]are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process oflogic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as anetlistdescribing the design as a physical circuit and its interconnections. These netlists are combined with theglue logicconnecting the components to produce the schematic description of the SoC as a circuit which can beprintedonto a chip. This process is known asplace and routeand precedestape-outin the event that the SoCs are produced asapplication-specific integrated circuits(ASIC). SoCs must optimizepower use, area ondie, communication, positioning forlocalitybetween modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use amulti-chip modulearchitecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hardcombinatorial optimizationproblem, and can indeed beNP-hardfairly easily. Therefore, sophisticatedoptimization algorithmsare often required and it may be practical to useapproximation algorithmsorheuristicsin some cases. Additionally, most SoC designs containmultiple variables to optimize simultaneously, soPareto efficientsolutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducingtrade-offsin system design. For broader coverage of trade-offs andrequirements analysis, seerequirements engineering. SoCs are optimized to minimize theelectrical powerused to perform the SoC's functions. Most SoCs must use low power. 
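Since several of the optimization targets above conflict, SoC design-space exploration typically keeps only Pareto-efficient candidates. The sketch below filters a set of hypothetical (power, latency) design points, keeping those not dominated by any other point; the numbers are invented for illustration and both objectives are minimized.

```python
# Illustration of the Pareto-efficiency idea mentioned above: among candidate
# design points, keep only those not dominated by another point that is at
# least as good on every objective and strictly better on one. The candidate
# (power_mW, latency_ns) values below are made up for illustration.

designs = [(120, 40), (150, 30), (100, 55), (160, 28), (130, 35), (110, 60)]

def dominates(a, b):
    """True if design a is no worse than b on both objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto_front = [d for d in designs
                if not any(dominates(other, d) for other in designs)]
print(sorted(pareto_front))   # the non-dominated power/latency trade-offs
```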
SoC systems often require longbattery life(such assmartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number ofembeddedSoCs beingnetworked togetherin an area. Additionally, energy costs can be high and conserving energy will reduce thetotal cost of ownershipof the SoC. Finally,waste heatfrom high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is theintegralofpowerconsumed with respect to time, and theaverage rateof power consumption is the product ofcurrentbyvoltage. Equivalently, byOhm's law, power is current squared times resistance or voltage squared divided byresistance: P=IV=V2R=I2R{\displaystyle P=IV={\frac {V^{2}}{R}}={I^{2}}{R}}SoCs are frequently embedded inportable devicessuch assmartphones,GPS navigation devices, digitalwatches(includingsmartwatches) andnetbooks. Customers want long battery lives formobile computingdevices, another reason that power consumption must be minimized in SoCs.Multimedia applicationsare often executed on these devices, including video games,video streaming,image processing; all of which have grown incomputational complexityin recent years with user demands and expectations for higher-qualitymultimedia. Computation is more demanding as expectations move towards3D videoathigh resolutionwithmultiple standards, so SoCs performing multimedia tasks must be computationally capable platform while being low power to run off a standard mobile battery.[12]: 3 SoCs are optimized to maximizepower efficiencyin performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such asedge computing,distributed processingandambient intelligencerequire a certain level ofcomputational performance, but power is limited in most SoC environments. SoC designs are optimized to minimizewaste heatoutputon the chip. As with otherintegrated circuits, heat generated due to highpower densityare thebottleneckto furtherminiaturizationof components.[21]: 1The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erodereliabilityof the circuit over time. High temperatures and thermal stress negatively impact reliability,stress migration, decreasedmean time between failures,electromigration,wire bonding,metastabilityand other performance degradation of the SoC over time.[21]: 2–9 In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of hightransistor countson modern devices, oftentimes a layout of sufficient throughput and hightransistor densityis physically realizable fromfabrication processesbut would result in unacceptably high amounts of heat in the circuit's volume.[21]: 1 These thermal effects force SoC and other chip designers to apply conservativedesign margins, creating less performant devices to mitigate the risk ofcatastrophic failure. Due to increasedtransistor densitiesas length scales get smaller, eachprocess generationproduces more heat output than the last. 
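As a small numerical illustration of the relations quoted above (instantaneous power P = IV, and energy as the time integral of power), the sketch below integrates a sampled power trace with the trapezoidal rule. The voltage and current waveforms are invented; a real analysis would use measured or simulated traces.

```python
import numpy as np

# Energy as the time integral of power, with P = I*V at each sample.
# The current/voltage trace below is invented purely for the example.

t = np.linspace(0.0, 1.0, 1001)                     # seconds
voltage = np.full_like(t, 0.9)                      # volts (constant supply rail)
current = 0.5 + 0.2 * np.sin(2 * np.pi * 5 * t)     # amps (varying load, assumed)

power = current * voltage                           # watts, P = I*V
# Trapezoidal integration of power over time gives energy in joules.
energy = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t)))
print(f"average power {power.mean():.3f} W, energy {energy:.3f} J")
```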
Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneousheat fluxes, which cannot be effectively mitigated by uniformpassive cooling.[21]: 1 SoCs are optimized to maximize computational and communicationsthroughput. SoCs are optimized to minimizelatencyfor some or all of their functions. This can be accomplished bylaying outelements with proper proximity andlocalityto each-other to minimize the interconnection delays and maximize the speed at which data is communicated between modules,functional unitsand memories. In general, optimizing to minimize latency is anNP-completeproblem equivalent to theBoolean satisfiability problem. Fortasksrunning on processor cores, latency and throughput can be improved withtask scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints. Systems on chip are modeled with standard hardwareverification and validationtechniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect tomultiple-criteria decision analysison the above optimization targets. Task schedulingis an important activity in any computer system with multipleprocessesorthreadssharing a single processor core. It is important to reduce§ Latencyand increase§ Throughputforembedded softwarerunning on an SoC's§ Processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involvingshared resources. Software running on SoCs often schedules tasks according tonetwork schedulingandrandomized schedulingalgorithms. Hardware and software tasks are often pipelined inprocessor design. Pipelining is an important principle forspeedupincomputer architecture. They are frequently used inGPUs(graphics pipeline) and RISC processors (evolutions of theclassic RISC pipeline), but are also applied to application-specific tasks such asdigital signal processingand multimedia manipulations in the context of SoCs.[12] SoCs are often analyzed thoughprobabilistic models,queueing networks, andMarkov chains. For instance,Little's lawallows SoC states and NoC buffers to be modeled as arrival processes and analyzed throughPoisson random variablesandPoisson processes. SoCs are often modeled withMarkov chains, bothdiscrete timeandcontinuous timevariants. Markov chain modeling allowsasymptotic analysisof the SoC'ssteady state distributionof power, heat, latency and other factors to allow design decisions to be optimized for the common case. SoC chips are typicallyfabricatedusingmetal–oxide–semiconductor(MOS) technology.[22]The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. 
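The queueing-style modelling mentioned above can be illustrated with Little's law, L = λW: the average number of packets resident in a buffer equals the arrival rate times the average time each packet spends there. The rates in the sketch below are assumed values chosen only to show the arithmetic.

```python
# Back-of-the-envelope use of Little's law (L = lambda * W) for sizing a NoC
# buffer, as mentioned in the modelling discussion above. Both input figures
# are assumptions for illustration, not measurements.

arrival_rate = 2.0e9      # packets per second offered to a NoC link (assumed)
mean_latency = 12e-9      # average seconds a packet waits and is serviced (assumed)

mean_occupancy = arrival_rate * mean_latency   # L = lambda * W
print(f"average packets resident per link: {mean_occupancy:.1f}")   # 24.0
```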
When all known bugs have been rectified and these fixes have been re-verified, and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop, where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower-volume designs, but after enough units of production ASICs reduce the total cost of ownership.[23] SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like most very-large-scale integration (VLSI) designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher non-recurring engineering costs. When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. When produced in large volumes, an SoC is more cost-effective than an SiP because its packaging is simpler.[24] Another reason an SiP may be preferred is that waste heat may be too high in an SoC for a given purpose because functional components are too close together; in an SiP, heat dissipates better from the different functional modules since they are physically further apart. Some examples of systems on a chip are: SoC research and development often compares many options. Benchmarks, such as COSMIC,[25] are developed to help such evaluations.
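The FPGA-versus-ASIC cost trade-off described above is essentially a break-even calculation between a one-off non-recurring engineering (NRE) charge and a per-unit cost. The figures in the sketch below are placeholders, not real price quotes.

```python
# Rough break-even calculation behind the statement that FPGAs suit lower
# volumes while ASICs win after enough units: total cost is a one-off
# non-recurring engineering (NRE) charge plus a per-unit cost.
# All figures are invented placeholders.

asic_nre, asic_unit = 2_000_000.0, 8.0     # assumed ASIC NRE and per-chip cost
fpga_nre, fpga_unit = 50_000.0, 95.0       # assumed FPGA tooling and per-part cost

def total_cost(nre: float, unit: float, volume: int) -> float:
    return nre + unit * volume

# Volume at which the ASIC's lower unit cost has paid back its higher NRE:
break_even = (asic_nre - fpga_nre) / (fpga_unit - asic_unit)
print(f"break-even at about {break_even:,.0f} units")
for volume in (10_000, 100_000):
    print(volume,
          total_cost(asic_nre, asic_unit, volume),
          total_cost(fpga_nre, fpga_unit, volume))
```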
https://en.wikipedia.org/wiki/MPSoC
OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. It is designed by the Khronos Group to facilitate portable, optimized and power-efficient processing of vision algorithms, and is aimed at embedded and real-time computer vision programs and related scenarios. It uses a connected graph representation of operations. OpenVX specifies a higher level of abstraction for programming computer vision use cases than compute frameworks such as OpenCL. This higher level simplifies programming while letting the underlying execution be optimized for different computing architectures behind a consistent, portable vision acceleration API. OpenVX is based on a connected graph of vision nodes that can execute the preferred chain of operations. It uses an opaque memory model, which allows image data to be moved between host (CPU) memory and accelerator memory, such as GPU memory. As a result, an OpenVX implementation can optimize execution through various techniques, such as acceleration on various processing units or dedicated hardware. This architecture lets applications programmed in OpenVX run on systems with different power and performance characteristics, including battery-sensitive, vision-enabled wearable displays.[1] OpenVX is complementary to the open-source vision library OpenCV. In some applications, OpenVX offers better-optimized graph management than OpenCV.
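To make the graph-based execution model above concrete, the sketch below is a plain-Python stand-in for building a connected graph of processing nodes and evaluating it as a whole. It is emphatically not the OpenVX API (OpenVX is a C API with contexts, graphs and nodes); it only illustrates why describing a pipeline as a graph gives a runtime the freedom to schedule, fuse or offload stages. The node names and the fake one-dimensional "image" are illustrative.

```python
# Plain-Python stand-in for the "connected graph of vision nodes" idea.
# NOT the OpenVX API; purely a conceptual illustration of graph execution.

class Node:
    def __init__(self, name, fn, *inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

    def run(self, cache):
        if self.name not in cache:                        # memoized evaluation
            args = [node.run(cache) for node in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

# A tiny "vision" pipeline on a fake 1-D image: blur, then threshold.
source    = Node("source", lambda: [10, 200, 30, 220, 15])
blur      = Node("blur",
                 lambda img: [(a + b) // 2 for a, b in zip(img, img[1:] + img[-1:])],
                 source)
threshold = Node("threshold", lambda img: [1 if p > 100 else 0 for p in img], blur)

print(threshold.run({}))   # the whole graph is evaluated in one call
```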
https://en.wikipedia.org/wiki/OpenVX
Aphysics processing unit(PPU) is a dedicatedmicroprocessordesigned to handle the calculations ofphysics, especially in thephysics engineofvideo games. It is an example ofhardware acceleration. Examples of calculations involving a PPU might includerigid body dynamics,soft body dynamics,collision detection,fluid dynamics, hair andclothing simulation,finite element analysis, and fracturing of objects. The idea is having specialized processors offload time-consuming tasks from a computer's CPU, much like how aGPUperforms graphics operations in the main CPU's place. The term was coined byAgeiato describe itsPhysXchip. Several other technologies in the CPU-GPU spectrum have some features in common with it, although Ageia's product was the only complete one designed, marketed, supported, and placed within a system exclusively being a PPU. An early academic PPU research project[1][2]named SPARTA (Simulation of Physics on A Real-Time Architecture) was carried out at Penn State[3]and University of Georgia. This was a simpleFPGAbased PPU that was limited to two dimensions. This project was extended into a considerably more advancedASIC-based system named HELLAS. February 2006 saw the release of the first dedicated PPUPhysXfromAgeia(later merged intoNvidia). The unit is most effective in acceleratingparticle systems, with only a small performance improvement measured for rigid body physics.[4]The Ageia PPU is documented in depth in their US patent application #20050075849.[5]Nvidia/Ageia no longer produces PPUs and hardware acceleration for physics processing, although it is now supported through some of their graphics processing units. The first processor to be advertised being a PPU was named thePhysXchip, introduced by afabless semiconductor companycalledAGEIA. Games wishing to take advantage of the PhysX PPU must use AGEIA'sPhysXSDK, (formerly known as the NovodeX SDK). It consists of a general purpose RISC core controlling an array of customSIMDfloating pointVLIWprocessors working in local banked memories, with a switch-fabric to manage transfers between them. There is nocache-hierarchylike in a CPU or GPU. The PhysX was available from three companies akin to the wayvideo cardsare manufactured.ASUS,BFG Technologies,[6]andELSA Technologieswere the primary manufacturers. PCs with the cards already installed were available from system builders such asAlienware,Dell, andFalcon Northwest.[7] In February 2008, afterNvidiabought Ageia Technologies and eventually cut off the ability to process PhysX on the AGEIA PPU and NVIDIA GPUs in systems with active ATi/AMD GPUs, it seemed that PhysX went 100% to Nvidia. But in March 2008, Nvidia announced that it will make PhysX an open standard for everyone,[8]so the main graphic-processor manufacturers will have PhysX support in the next generation graphics cards. Nvidia announced that PhysX will also be available for some of their released graphics cards just by downloading some new drivers. Seephysics enginefor a discussion of academic research PPU projects. 
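The kind of work a PPU offloads can be illustrated with a very small physics step: integrating particle motion under gravity and resolving a collision with a ground plane. The sketch below uses a generic semi-implicit Euler integrator; it is not AGEIA's PhysX implementation, and the time step, restitution and initial heights are arbitrary.

```python
# A toy version of the work a PPU offloads: integrating particle motion under
# gravity and resolving a simple collision with the ground plane y = 0.
# Generic semi-implicit Euler step; constants are arbitrary assumptions.

GRAVITY = -9.81            # m/s^2
RESTITUTION = 0.6          # fraction of speed kept after a bounce (assumed)
DT = 1.0 / 60.0            # 60 Hz simulation step

def step(positions, velocities):
    for i in range(len(positions)):
        velocities[i] += GRAVITY * DT          # integrate velocity first
        positions[i] += velocities[i] * DT     # then position (semi-implicit Euler)
        if positions[i] < 0.0:                 # collision with the ground plane
            positions[i] = 0.0
            velocities[i] = -velocities[i] * RESTITUTION

pos, vel = [2.0, 5.0, 1.0], [0.0, 0.0, 0.0]    # initial heights of three particles
for _ in range(120):                           # two simulated seconds
    step(pos, vel)
print([round(p, 3) for p in pos])
```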
ASUSandBFG Technologiesbought licenses to manufacture alternate versions of AGEIA's PPU, the PhysX P1 with 128 MB GDDR3: TheHavokSDK is a major competitor to the PhysX SDK, used in more than 150 games, including major titles likeHalf-Life 2,Halo 3andDead Rising.[12] To compete with the PhysX PPU, an edition known asHavok FXwas to take advantage of multi-GPU technology fromATI(AMD CrossFire) andNVIDIA(SLI) using existing cards to accelerate certain physics calculations.[13] Havok divides the physics simulation intoeffectandgameplayphysics, with effect physics being offloaded (if possible) to the GPU asShader Model 3.0instructions and gameplay physics being processed on the CPU as normal. The important distinction between the two is thateffectphysics do not affect gameplay (dust or small debris from an explosion, for example); the vast majority of physics operations are still performed in software. This approach differs significantly from the PhysX SDK, which moves all calculations to the PhysX card if it is present. Since Havok's acquisition byIntel, Havok FX appears to have been shelved or cancelled.[14] The drive towardGPGPUhas made GPUs more suitable for the job of a PPU; DX10 added integer data types, unified shader architecture, and a geometry shader stage which allows a broader range of algorithms to be implemented; Modern GPUs supportcompute shaders, which run across an indexed space and don't require any graphical resources, just general purpose data buffers. NVidiaCUDAprovides a little more in the way of inter-thread communication andscratchpad-style workspaceassociated with the threads. Nonetheless GPUs are built around a larger number of longer latency, slower threads, and designed around texture and framebuffer data paths, and poor branching performance; this distinguishes them from PPUs andCellas being less well optimized for taking over game world simulation tasks. TheCodeplay Sieve compilersupports the PPU, indicating that the Ageia physX chip would be suitable for GPGPU type tasks. However Ageia seem unlikely to pursue this market. Although very different from the PhysX, one could argue thePlayStation 2'sVU0is an early, limited implementation of a PPU. Conversely, one could describe a PPU to a PS2 programmer as an evolved replacement for VU0. Its feature-set and placement within the system is geared toward accelerating game update tasks including physics and AI; it can offload such calculations working off its own instruction stream whilst the CPU is operating on something else. Being a DSP however, it is much more dependent on the CPU to do useful work in a game engine, and would not be capable of implementing a full physics API, so it cannot be classed as a PPU. Also VU0 is capable of providing additional vertex processing power, though this is more a property of the pathways in the system rather than the unit itself. This usage is similar to Havok FX or GPU physics in that an auxiliary unit's general purpose floating point power is used to complement the CPU in either graphics or physics roles.
https://en.wikipedia.org/wiki/Physics_processing_unit
Theinformation ratiomeasures and compares theactive returnof an investment (e.g., a security or portfolio) compared to a benchmark index relative to the volatility of the active return (also known asactive riskorbenchmark tracking risk). It is defined as theactive return(the difference between the returns of the investment and the returns of the benchmark) divided by thetracking error(thestandard deviationof the active return, i.e., the additional risk). It represents the additional amount of return that an investor receives per unit of increase in risk.[1]The information ratio is simply the ratio of the active return of the portfolio divided by the tracking error of its return, with both components measured relative to the performance of the agreed-on benchmark. It is often used to gauge the skill of managers ofmutual funds,hedge funds, etc. It measures the active return of the manager's portfolio divided by the amount of risk that the manager takes relative to the benchmark. The higher the information ratio, the higher the active return of the portfolio, given the amount of risk taken, and the better the manager. The information ratio is similar to theSharpe ratio, the main difference being that the Sharpe ratio uses arisk-free returnas benchmark (such as aU.S. Treasury security) whereas the information ratio uses a risky index as benchmark (such as theS&P500). The Sharpe ratio is useful for an attribution of the absolute returns of a portfolio, and the information ratio is useful for an attribution of the relative returns of a portfolio.[2] The information ratioIR{\displaystyle IR}is defined as: whereRp{\displaystyle R_{p}}is the portfolio return,Rb{\displaystyle R_{b}}is the benchmark return,α=E[Rp−Rb]{\displaystyle \alpha =E[R_{p}-R_{b}]}is theexpected valueof the active return, andω=σ{\displaystyle \omega =\sigma }is thestandard deviationof the active return, which is an alternate definition of the aforementioned tracking error. Note in this case,α{\displaystyle \alpha }is defined as excess return, not the risk-adjusted excess return orJensen's alphacalculated using regression analysis. Some analysts, however, do use Jensen's alpha for the numerator and a regression-adjusted tracking error for the denominator (this version of the information ratio is often described as the appraisal ratio to differentiate it from the more common definition).[3] Top-quartile investment managers typically achieve annualized information ratios of about one-half.[4]There are bothex ante(expected) andex post(observed) information ratios. Generally, the information ratio compares the returns of the manager's portfolio with those of a benchmark such as the yield on three-monthTreasury billsor an equity index such as theS&P 500.[5] Some hedge funds use Information ratio as a metric for calculating aperformance fee.[citation needed] The information ratio is often annualized. 
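The definition above reduces to dividing the mean active return by its standard deviation (the tracking error). The sketch below computes this directly; the monthly portfolio and benchmark returns are invented for illustration.

```python
import numpy as np

# Direct computation of the information ratio as defined above:
#   IR = mean(R_p - R_b) / std(R_p - R_b) = alpha / omega.
# The monthly return series are invented for illustration only.

portfolio = np.array([0.021, -0.004, 0.013, 0.008, 0.017, -0.002])
benchmark = np.array([0.018, -0.006, 0.010, 0.009, 0.012, -0.001])

active = portfolio - benchmark                            # active return
information_ratio = active.mean() / active.std(ddof=1)    # alpha / tracking error
print(round(information_ratio, 3))
```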
When annualizing, it is common for the numerator to be calculated as the arithmetic difference between the annualized portfolio return and the annualized benchmark return; this is an approximation because the annualization of an arithmetic difference between terms is not the arithmetic difference of the annualized terms.[6] Since the denominator is here taken to be the annualized standard deviation of the arithmetic difference of these series, which is a standard measure of annualized risk, and since the ratio of annualized terms is the annualization of their ratio, the annualized information ratio provides the annualized risk-adjusted active return of the portfolio relative to the benchmark. One of the main criticisms of the information ratio is that it considers arithmetic returns (rather than geometric returns) and ignores leverage. This can lead to the information ratio calculated for a manager being negative even when the manager produces alpha relative to the benchmark, and vice versa. A better measure of the alpha produced by the manager is the geometric information ratio.
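One common annualization shortcut, consistent with the approximation discussed above, treats the periodic active returns as independent: the mean scales with the number of periods per year and the standard deviation with its square root, so the information ratio itself scales by the square root of the number of periods. The sketch below applies this to an invented monthly series.

```python
import numpy as np

# Common annualization shortcut: scale the mean active return by the number
# of periods per year and the tracking error by its square root, so the IR
# scales by sqrt(periods). This is an approximation; the monthly active
# returns below are invented for illustration.

active_monthly = np.array([0.003, 0.002, -0.001, 0.004, 0.001, 0.002,
                           0.000, 0.003, -0.002, 0.001, 0.002, 0.003])
periods = 12                                     # months per year

ir_monthly = active_monthly.mean() / active_monthly.std(ddof=1)
ir_annualized = ir_monthly * np.sqrt(periods)    # approximate annualized IR
print(round(ir_monthly, 3), round(ir_annualized, 3))
```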
https://en.wikipedia.org/wiki/Information_ratio
Instatistics, thevariance functionis asmooth functionthat depicts thevarianceof arandom quantityas a function of itsmean. The variance function is a measure ofheteroscedasticityand plays a large role in many settings of statistical modelling. It is a main ingredient in thegeneralized linear modelframework and a tool used innon-parametric regression,[1]semiparametric regression[1]andfunctional data analysis.[2]In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be asmooth function. In a regression model setting, the goal is to establish whether or not a relationship exists between a response variable and a set of predictor variables. Further, if a relationship does exist, the goal is then to be able to describe this relationship as best as possible. A main assumption inlinear regressionis constant variance or (homoscedasticity), meaning that different response variables have the same variance in their errors, at every predictor level. This assumption works well when the response variable and the predictor variable are jointlynormal. As we will see later, the variance function in the Normal setting is constant; however, we must find a way to quantify heteroscedasticity (non-constant variance) in the absence of joint Normality. When it is likely that the response follows a distribution that is a member of the exponential family, ageneralized linear modelmay be more appropriate to use, and moreover, when we wish not to force a parametric model onto our data, anon-parametric regressionapproach can be useful. The importance of being able to model the variance as a function of the mean lies in improved inference (in a parametric setting), and estimation of the regression function in general, for any setting. Variance functions play a very important role in parameter estimation and inference. In general, maximum likelihood estimation requires that a likelihood function be defined. This requirement then implies that one must first specify the distribution of the response variables observed. However, to define a quasi-likelihood, one need only specify a relationship between the mean and the variance of the observations to then be able to use the quasi-likelihood function for estimation.[3]Quasi-likelihoodestimation is particularly useful when there isoverdispersion. Overdispersion occurs when there is more variability in the data than there should otherwise be expected according to the assumed distribution of the data. In summary, to ensure efficient inference of the regression parameters and the regression function, the heteroscedasticity must be accounted for. Variance functions quantify the relationship between the variance and the mean of the observed data and hence play a significant role in regression estimation and inference. The variance function and its applications come up in many areas of statistical analysis. A very important use of this function is in the framework ofgeneralized linear modelsandnon-parametric regression. When a member of theexponential familyhas been specified, the variance function can easily be derived.[4]: 29The general form of the variance function is presented under the exponential family context, as well as specific forms for Normal, Bernoulli, Poisson, and Gamma. 
In addition, we describe the applications and use of variance functions in maximum likelihood estimation and quasi-likelihood estimation. Thegeneralized linear model (GLM), is a generalization of ordinary regression analysis that extends to any member of theexponential family. It is particularly useful when the response variable is categorical, binary or subject to a constraint (e.g. only positive responses make sense). A quick summary of the components of a GLM are summarized on this page, but for more details and information see the page ongeneralized linear models. AGLMconsists of three main ingredients: First it is important to derive a couple key properties of the exponential family. Any random variabley{\displaystyle {\textit {y}}}in the exponential family has a probability density function of the form, with loglikelihood, Here,θ{\displaystyle \theta }is the canonical parameter and the parameter of interest, andϕ{\displaystyle \phi }is a nuisance parameter which plays a role in the variance. We use theBartlett's Identitiesto derive a general expression for thevariance function. The first and second Bartlett results ensures that under suitable conditions (seeLeibniz integral rule), for a density function dependent onθ,fθ(){\displaystyle \theta ,f_{\theta }()}, These identities lead to simple calculations of the expected value and variance of any random variabley{\displaystyle {\textit {y}}}in the exponential familyEθ[y],Varθ[y]{\displaystyle E_{\theta }[y],Var_{\theta }[y]}. Expected value ofY:Taking the first derivative with respect toθ{\displaystyle \theta }of the log of the density in the exponential family form described above, we have Then taking the expected value and setting it equal to zero leads to, Variance of Y:To compute the variance we use the second Bartlett identity, We have now a relationship betweenμ{\displaystyle \mu }andθ{\displaystyle \theta }, namely Note that becauseVarθ⁡[y]>0,b″(θ)>0{\displaystyle \operatorname {Var} _{\theta }\left[y\right]>0,b''(\theta )>0}, thenb′:θ→μ{\displaystyle b':\theta \rightarrow \mu }is invertible. We derive the variance function for a few common distributions. Thenormal distributionis a special case where the variance function is a constant. Lety∼N(μ,σ2){\displaystyle y\sim N(\mu ,\sigma ^{2})}then we put the density function ofyin the form of the exponential family described above: where To calculate the variance functionV(μ){\displaystyle V(\mu )}, we first expressθ{\displaystyle \theta }as a function ofμ{\displaystyle \mu }. Then we transformV(θ){\displaystyle V(\theta )}into a function ofμ{\displaystyle \mu } Therefore, the variance function is constant. Lety∼Bernoulli(p){\displaystyle y\sim {\text{Bernoulli}}(p)}, then we express the density of theBernoulli distributionin exponential family form, This give us Lety∼Poisson(λ){\displaystyle y\sim {\text{Poisson}}(\lambda )}, then we express the density of thePoisson distributionin exponential family form, This give us Here we see the central property of Poisson data, that the variance is equal to the mean. TheGamma distributionand density function can be expressed under different parametrizations. 
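The displayed equations referred to in this passage did not survive extraction. Before continuing to the Gamma case below, the standard forms in the b(θ), a(φ) notation used here are as follows; these are the usual exponential-family results rather than anything specific to this article.

```latex
% Exponential-family density and log-likelihood in the b(\theta), a(\phi)
% notation used in the surrounding text (standard forms, reconstructed):
f_{Y}(y;\theta,\phi)
  = \exp\!\left\{ \frac{y\theta - b(\theta)}{a(\phi)} + c(y,\phi) \right\},
\qquad
\ell(\theta,\phi;y) = \frac{y\theta - b(\theta)}{a(\phi)} + c(y,\phi).

% Bartlett's identities then give the mean and variance of y:
\operatorname{E}_{\theta}[y] = b'(\theta) = \mu,
\qquad
\operatorname{Var}_{\theta}[y] = a(\phi)\, b''(\theta) = a(\phi)\, V(\mu),
\quad\text{so}\quad V(\mu) = b''\!\big((b')^{-1}(\mu)\big).

% Standard variance functions for the cases derived above:
% Normal: V(\mu) = 1, \qquad Bernoulli: V(\mu) = \mu(1-\mu), \qquad Poisson: V(\mu) = \mu.
```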
We will use the form of the gamma with parameters(μ,ν){\displaystyle (\mu ,\nu )} Then in exponential family form we have And we haveV(μ)=μ2{\displaystyle V(\mu )=\mu ^{2}} A very important application of the variance function is its use in parameter estimation and inference when the response variable is of the required exponential family form as well as in some cases when it is not (which we will discuss inquasi-likelihood). Weightedleast squares(WLS) is a special case of generalized least squares. Each term in the WLS criterion includes a weight that determines that the influence each observation has on the final parameter estimates. As in regular least squares, the goal is to estimate the unknown parameters in the regression function by finding values for parameter estimates that minimize the sum of the squared deviations between the observed responses and the functional portion of the model. While WLS assumes independence of observations it does not assume equal variance and is therefore a solution for parameter estimation in the presence of heteroscedasticity. TheGauss–Markov theoremandAitkendemonstrate that thebest linear unbiased estimator(BLUE), the unbiased estimator with minimum variance, has each weight equal to the reciprocal of the variance of the measurement. In the GLM framework, our goal is to estimate parametersβ{\displaystyle \beta }, whereZ=g(E[y∣X])=Xβ{\displaystyle Z=g(E[y\mid X])=X\beta }. Therefore, we would like to minimize(Z−XB)TW(Z−XB){\displaystyle (Z-XB)^{T}W(Z-XB)}and if we define the weight matrixWas whereϕ,V(μ),g(μ){\displaystyle \phi ,V(\mu ),g(\mu )}are defined in the previous section, it allows foriteratively reweighted least squares(IRLS) estimation of the parameters. See the section oniteratively reweighted least squaresfor more derivation and information. Also, important to note is that when the weight matrix is of the form described here, minimizing the expression(Z−XB)TW(Z−XB){\displaystyle (Z-XB)^{T}W(Z-XB)}also minimizes the Pearson distance. SeeDistance correlationfor more. The matrixWfalls right out of the estimating equations for estimation ofβ{\displaystyle \beta }. Maximum likelihood estimation for each parameterβr,1≤r≤p{\displaystyle \beta _{r},1\leq r\leq p}, requires Looking at a single observation we have, This gives us The Hessian matrix is determined in a similar manner and can be shown to be, Noticing that the Fisher Information (FI), Because most features ofGLMsonly depend on the first two moments of the distribution, rather than the entire distribution, the quasi-likelihood can be developed by just specifying a link function and a variance function. That is, we need to specify With a specified variance function and link function we can develop, as alternatives to the log-likelihood function, thescore function, and theFisher information, aquasi-likelihood, aquasi-score, and thequasi-information. This allows for full inference ofβ{\displaystyle \beta }. Quasi-likelihood (QL) Though called aquasi-likelihood, this is in fact a quasi-log-likelihood. The QL for one observation is And therefore the QL for allnobservations is From theQLwe have thequasi-score Quasi-score (QS) Recall thescore function,U, for data with log-likelihoodl⁡(μ∣y){\displaystyle \operatorname {l} (\mu \mid y)}is We obtain the quasi-score in an identical manner, Noting that, for one observation the score is The first two Bartlett equations are satisfied for the quasi-score, namely and In addition, the quasi-score is linear iny. 
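The weighted/iteratively reweighted least squares scheme described above can be sketched for a concrete case. Below is a minimal IRLS loop for a Poisson GLM with log link, where the variance function is V(μ) = μ and the weight matrix reduces to diag(μ); the data are simulated so the recovered coefficients can be checked, and this is an illustration rather than a production GLM fitter.

```python
import numpy as np

# Minimal IRLS for a Poisson GLM with log link: V(mu) = mu, g(mu) = log(mu),
# so the weights 1 / (V(mu) * g'(mu)^2) reduce to mu and the working response
# is z = eta + (y - mu) / mu. Data are simulated; this is only a sketch.

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):                       # a handful of IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)
    W = mu                                # diagonal of the weight matrix
    z = eta + (y - mu) / mu               # working response
    XtW = X.T * W                         # equivalent to X^T diag(W)
    beta = np.linalg.solve(XtW @ X, XtW @ z)

print(np.round(beta, 3))                  # should be close to [0.5, 0.8]
```

In practice the loop would stop when the change in the coefficients falls below a tolerance rather than after a fixed number of iterations.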
Ultimately the goal is to find information about the parameters of interestβ{\displaystyle \beta }. Both the QS and the QL are actually functions ofβ{\displaystyle \beta }. Recall,μ=g−1(η){\displaystyle \mu =g^{-1}(\eta )}, andη=Xβ{\displaystyle \eta =X\beta }, therefore, Quasi-information (QI) Thequasi-information, is similar to theFisher information, QL, QS, QI as functions ofβ{\displaystyle \beta } The QL, QS and QI all provide the building blocks for inference about the parameters of interest and therefore it is important to express the QL, QS and QI all as functions ofβ{\displaystyle \beta }. Recalling again thatμ=g−1(Xβ){\displaystyle \mu =g^{-1}(X\beta )}, we derive the expressions for QL, QS and QI parametrized underβ{\displaystyle \beta }. Quasi-likelihood inβ{\displaystyle \beta }, The QS as a function ofβ{\displaystyle \beta }is therefore Where, The quasi-information matrix inβ{\displaystyle \beta }is, Obtaining the score function and the information ofβ{\displaystyle \beta }allows for parameter estimation and inference in a similar manner as described inApplication – weighted least squares. Non-parametric estimation of the variance function and its importance, has been discussed widely in the literature[5][6][7]Innon-parametric regressionanalysis, the goal is to express the expected value of your response variable(y) as a function of your predictors (X). That is we are looking to estimate ameanfunction,g(x)=E⁡[y∣X=x]{\displaystyle g(x)=\operatorname {E} [y\mid X=x]}without assuming a parametric form. There are many forms of non-parametricsmoothingmethods to help estimate the functiong(x){\displaystyle g(x)}. An interesting approach is to also look at a non-parametricvariance function,gv(x)=Var⁡(Y∣X=x){\displaystyle g_{v}(x)=\operatorname {Var} (Y\mid X=x)}. A non-parametric variance function allows one to look at the mean function as it relates to the variance function and notice patterns in the data. An example is detailed in the pictures to the right. The goal of the project was to determine (among other things) whether or not the predictor,number of years in the major leagues(baseball), had an effect on the response,salary, a player made. An initial scatter plot of the data indicates that there is heteroscedasticity in the data as the variance is not constant at each level of the predictor. Because we can visually detect the non-constant variance, it useful now to plotgv(x)=Var⁡(Y∣X=x)=E⁡[y2∣X=x]−[E⁡[y∣X=x]]2{\displaystyle g_{v}(x)=\operatorname {Var} (Y\mid X=x)=\operatorname {E} [y^{2}\mid X=x]-\left[\operatorname {E} [y\mid X=x]\right]^{2}}, and look to see if the shape is indicative of any known distribution. One can estimateE⁡[y2∣X=x]{\displaystyle \operatorname {E} [y^{2}\mid X=x]}and[E⁡[y∣X=x]]2{\displaystyle \left[\operatorname {E} [y\mid X=x]\right]^{2}}using a generalsmoothingmethod. The plot of the non-parametric smoothed variance function can give the researcher an idea of the relationship between the variance and the mean. The picture to the right indicates a quadratic relationship between the mean and the variance. As we saw above, the Gamma variance function is quadratic in the mean.
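The non-parametric variance-function estimate described at the end of this section can be sketched directly: smooth y and y² against x with any general smoother and take the difference g_v(x) = E[y²|X=x] − (E[y|X=x])². The sketch below uses a simple Gaussian-kernel (Nadaraya–Watson) smoother on simulated data whose spread grows with x; the bandwidth and sample size are arbitrary choices.

```python
import numpy as np

# Non-parametric variance function via smoothing, as described above:
# estimate E[y | x] and E[y^2 | x], then Var(y | x) is their difference.
# A Nadaraya-Watson smoother stands in for any general smoothing method;
# the data are simulated so the increasing variance is known to be real.

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(scale=0.5 * (1 + x), size=500)   # spread grows with x

def nw_smooth(x_obs, y_obs, grid, bandwidth=0.8):
    """Gaussian-kernel weighted average of y_obs at each grid point."""
    w = np.exp(-0.5 * ((grid[:, None] - x_obs[None, :]) / bandwidth) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

grid = np.linspace(0.5, 9.5, 19)
m_hat  = nw_smooth(x, y, grid)            # estimate of E[y | X = x]
m2_hat = nw_smooth(x, y ** 2, grid)       # estimate of E[y^2 | X = x]
var_hat = m2_hat - m_hat ** 2             # non-parametric variance function
print(np.round(var_hat[::6], 2))          # increases with x, as simulated
```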
https://en.wikipedia.org/wiki/Variance_function
Modern portfolio theory(MPT), ormean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that theexpected returnis maximized for a given level of risk. It is a formalization and extension ofdiversificationin investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. Thevarianceof return (or its transformation, thestandard deviation) is used as a measure of risk, because it is tractable when assets are combined into portfolios.[1]Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities,[2]but other, more sophisticated methods are available.[3] EconomistHarry Markowitzintroduced MPT in a 1952 paper,[1]for which he was later awarded aNobel Memorial Prize in Economic Sciences; seeMarkowitz model. In 1940,Bruno de Finettipublished[4]the mean-variance analysis method, in the context of proportional reinsurance, under a stronger assumption. The paper was obscure and only became known to economists of the English-speaking world in 2006.[5] MPT assumes that investors arerisk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensated by higher expected returns. Conversely, an investor who wants higher expected returns must accept more risk. The exact trade-off will not be the same for all investors. Different investors will evaluate the trade-off differently based on individual risk aversion characteristics. The implication is that arationalinvestor will not invest in a portfolio if a second portfolio exists with a more favorablerisk vs expected return profile— i.e., if for that level of risk an alternative portfolio exists that has better expected returns. Under the model: In general: For atwo-assetportfolio: For athree-assetportfolio: The algebra can be much simplified by expressing the quantities involved in matrix notation.[6]Arrange the returns of N risky assets in anN×1{\displaystyle N\times 1}vectorR{\displaystyle R}, where the first element is the return of the first asset, the second element of the second asset, and so on. Arrange their expected returns in a column vectorμ{\displaystyle \mu }, and their variances and covariances in acovariance matrixΣ{\displaystyle \Sigma }. Consider a portfolio of risky assets whose weights in each of the N risky assets is given by the corresponding element of the weight vectorw{\displaystyle w}. Then: and For the case where there is investment in a riskfree asset with returnRf{\displaystyle R_{f}}, the weights of the weight vector do not sum to 1, and the portfolio expected return becomesw′μ+(1−w′1)Rf{\displaystyle w'\mu +(1-w'1)R_{f}}. The expression for the portfolio variance is unchanged. An investor can reduce portfolio risk (especiallyσp{\displaystyle \sigma _{p}}) simply by holding combinations of instruments that are not perfectly positivelycorrelated(correlation coefficient−1≤ρij<1{\displaystyle -1\leq \rho _{ij}<1}). In other words, investors can reduce their exposure to individual asset risk by holding adiversifiedportfolio of assets. Diversification may allow for the same portfolio expected return with reduced risk. 
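The matrix expressions above are straightforward to evaluate numerically: the portfolio expected return is w′μ and the portfolio variance is w′Σw. The three-asset figures in the sketch below are invented for illustration.

```python
import numpy as np

# Portfolio expected return w' mu and variance w' Sigma w in matrix notation,
# as referenced above. The three-asset inputs are invented for illustration.

mu = np.array([0.08, 0.12, 0.05])                  # expected returns
Sigma = np.array([[0.040, 0.006, 0.002],           # covariance matrix
                  [0.006, 0.090, 0.003],
                  [0.002, 0.003, 0.010]])
w = np.array([0.5, 0.3, 0.2])                      # weights, summing to 1

expected_return = w @ mu                           # w' mu
variance = w @ Sigma @ w                           # w' Sigma w
print(round(expected_return, 4), round(float(np.sqrt(variance)), 4))
```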
The MPT is a mean-variance theory, and it compares the expected (mean) return of a portfolio with the standard deviation of the same portfolio. The image shows expected return on the vertical axis, and the standard deviation on the horizontal axis (volatility). Volatility is described by standard deviation and it serves as a measure of risk.[7] The return–standard deviation space is sometimes called the space of "expected return vs risk". Every possible combination of risky assets can be plotted in this risk-expected return space, and the collection of all such possible portfolios defines a region in this space. The left boundary of this region is hyperbolic,[8] and the upper part of the hyperbolic boundary is the efficient frontier in the absence of a risk-free asset (sometimes called "the Markowitz bullet"). Combinations along this upper edge represent portfolios (including no holdings of the risk-free asset) for which there is the lowest risk for a given level of expected return. Equivalently, a portfolio lying on the efficient frontier represents the combination offering the best possible expected return for a given risk level. The tangent to the upper part of the hyperbolic boundary is the capital allocation line (CAL). Matrices are preferred for calculations of the efficient frontier. In matrix form, for a given "risk tolerance" q∈[0,∞){\displaystyle q\in [0,\infty )}, the efficient frontier is found by minimizing the expression {\displaystyle w^{\mathsf {T}}\Sigma w-q\,R^{\mathsf {T}}w}, where w is the vector of portfolio weights, Σ is the covariance matrix of returns, R is the vector of expected returns, and q is the risk-tolerance parameter. The above optimization finds the point on the frontier at which the inverse of the slope of the frontier would be q if portfolio return variance instead of standard deviation were plotted horizontally. The frontier in its entirety is parametric on q. Harry Markowitz developed a specific procedure for solving the above problem, called the critical line algorithm,[9] that can handle additional linear constraints, upper and lower bounds on assets, and which is proved to work with a positive semi-definite covariance matrix. Examples of implementation of the critical line algorithm exist in Visual Basic for Applications,[10] in JavaScript[11] and in a few other languages. Also, many software packages, including MATLAB, Microsoft Excel, Mathematica and R, provide generic optimization routines so that using these for solving the above problem is possible, with potential caveats (poor numerical accuracy, requirement of positive definiteness of the covariance matrix...). An alternative approach to specifying the efficient frontier is to do so parametrically on the expected portfolio return RTw{\displaystyle R^{T}w}. This version of the problem requires that we minimize the portfolio variance {\displaystyle w^{\mathsf {T}}\Sigma w} subject to {\displaystyle R^{\mathsf {T}}w=\mu } and {\displaystyle \sum _{i}w_{i}=1}, for parameter μ{\displaystyle \mu }. This problem is easily solved using a Lagrange multiplier, which leads to a linear system of equations in the weights and the two multipliers (a numerical sketch of this system appears below, after the geometric discussion). One key result of the above analysis is the two mutual fund theorem.[12][13] This theorem states that any portfolio on the efficient frontier can be generated by holding a combination of any two given portfolios on the frontier; the latter two given portfolios are the "mutual funds" in the theorem's name. So in the absence of a risk-free asset, an investor can achieve any desired efficient portfolio even if all that is accessible is a pair of efficient mutual funds. If the location of the desired portfolio on the frontier is between the locations of the two mutual funds, both mutual funds will be held in positive quantities.
If the desired portfolio is outside the range spanned by the two mutual funds, then one of the mutual funds must be sold short (held in negative quantity) while the size of the investment in the other mutual fund must be greater than the amount available for investment (the excess being funded by borrowing from the other fund). The risk-free asset is the (hypothetical) asset that pays a risk-free rate. In practice, short-term government securities (such as US treasury bills) are used as a risk-free asset, because they pay a fixed rate of interest and have exceptionally low default risk. The risk-free asset has zero variance in returns if held to maturity (hence is risk-free); it is also uncorrelated with any other asset (by definition, since its variance is zero). As a result, when it is combined with any other asset or portfolio of assets, the change in return is linearly related to the change in risk as the proportions in the combination vary. When a risk-free asset is introduced, the half-line shown in the figure is the new efficient frontier. It is tangent to the hyperbola at the pure risky portfolio with the highest Sharpe ratio. Its vertical intercept represents a portfolio with 100% of holdings in the risk-free asset; the tangency with the hyperbola represents a portfolio with no risk-free holdings and 100% of assets held in the risky portfolio at the tangency point; points between those points are portfolios containing positive amounts of both the risky tangency portfolio and the risk-free asset; and points on the half-line beyond the tangency point are portfolios involving negative holdings of the risk-free asset and an amount invested in the tangency portfolio equal to more than 100% of the investor's initial capital. This efficient half-line is called the capital allocation line (CAL), and its formula can be shown to be {\displaystyle \operatorname {E} (R_{C})=R_{F}+\sigma _{C}\,{\frac {\operatorname {E} (R_{P})-R_{F}}{\sigma _{P}}}.} In this formula P is the sub-portfolio of risky assets at the tangency with the Markowitz bullet, F is the risk-free asset, and C is a combination of portfolios P and F. By the diagram, the introduction of the risk-free asset as a possible component of the portfolio has improved the range of risk-expected return combinations available, because everywhere except at the tangency portfolio the half-line gives a higher expected return than the hyperbola does at every possible risk level. The fact that all points on the linear efficient locus can be achieved by a combination of holdings of the risk-free asset and the tangency portfolio is known as the one mutual fund theorem,[12] where the mutual fund referred to is the tangency portfolio. The efficient frontier can be pictured as a problem in quadratic curves.[12] On the market, we have the assets R1,R2,…,Rn{\displaystyle R_{1},R_{2},\dots ,R_{n}}. We have some funds, and a portfolio is a way to divide our funds into the assets. Each portfolio can be represented as a vector w1,w2,…,wn{\displaystyle w_{1},w_{2},\dots ,w_{n}}, such that ∑iwi=1{\displaystyle \sum _{i}w_{i}=1}, and the return of such a portfolio is wTR=∑iwiRi{\displaystyle w^{T}R=\sum _{i}w_{i}R_{i}}. Since we wish to maximize expected return while minimizing the standard deviation of the return, we are to solve a quadratic optimization problem:{E[wTR]=μminσ2=Var[wTR]∑iwi=1{\displaystyle {\begin{cases}E[w^{T}R]=\mu \\\min \sigma ^{2}=Var[w^{T}R]\\\sum _{i}w_{i}=1\end{cases}}}Portfolios are points in the Euclidean space Rn{\displaystyle \mathbb {R} ^{n}}. The third equation states that the portfolio should fall on a plane defined by ∑iwi=1{\displaystyle \sum _{i}w_{i}=1}.
The first equation states that the portfolio should fall on a plane defined by wTE[R]=μ{\displaystyle w^{T}E[R]=\mu }. The second condition states that the portfolio should fall on the contour surface for ∑ijwiρijwj{\displaystyle \sum _{ij}w_{i}\rho _{ij}w_{j}} that is as close to the origin as possible. Since the equation is quadratic, each such contour surface is an ellipsoid (assuming that the covariance matrix ρij{\displaystyle \rho _{ij}} is invertible). Therefore, we can solve the quadratic optimization graphically by drawing ellipsoidal contours on the plane ∑iwi=1{\displaystyle \sum _{i}w_{i}=1}, then intersecting the contours with the plane {w:wTE[R]=μand∑iwi=1}{\displaystyle \{w:w^{T}E[R]=\mu {\text{ and }}\sum _{i}w_{i}=1\}}. As the ellipsoidal contours shrink, eventually one of them becomes exactly tangent to the plane, before the contours become completely disjoint from the plane. The tangent point is the optimal portfolio at this level of expected return. As we vary μ{\displaystyle \mu }, the tangent point varies as well, but it always falls on a single line (this is the two mutual funds theorem). Let the line be parameterized as {w+w′t:t∈R}{\displaystyle \{w+w't:t\in \mathbb {R} \}}. We find that along the line,{μ=(w′TE[R])t+wTE[R]σ2=(w′Tρw′)t2+2(wTρw′)t+(wTρw){\displaystyle {\begin{cases}\mu &=(w'^{T}E[R])t+w^{T}E[R]\\\sigma ^{2}&=(w'^{T}\rho w')t^{2}+2(w^{T}\rho w')t+(w^{T}\rho w)\end{cases}}}giving a hyperbola in the (σ,μ){\displaystyle (\sigma ,\mu )} plane. The hyperbola has two branches, symmetric with respect to the μ{\displaystyle \mu } axis; however, only the branch with σ>0{\displaystyle \sigma >0} is meaningful. By symmetry, the two asymptotes of the hyperbola intersect at a point μMVP{\displaystyle \mu _{MVP}} on the μ{\displaystyle \mu } axis. This point is also the height of the leftmost point of the hyperbola, and can be interpreted as the expected return of the global minimum-variance portfolio (global MVP). The tangency portfolio exists if and only if μRF<μMVP{\displaystyle \mu _{RF}<\mu _{MVP}}. In particular, if the risk-free return is greater than or equal to μMVP{\displaystyle \mu _{MVP}}, then the tangency portfolio does not exist. The capital market line (CML) becomes parallel to the upper asymptote line of the hyperbola. Points on the CML become impossible to achieve, though they can be approached from below. It is usually assumed that the risk-free return is less than the return of the global MVP, in order that the tangency portfolio exists. However, even in this case, as μRF{\displaystyle \mu _{RF}} approaches μMVP{\displaystyle \mu _{MVP}} from below, the tangency portfolio diverges to a portfolio with infinite return and variance. Since there are only finitely many assets in the market, such a portfolio must be shorting some assets heavily while longing some other assets heavily. In practice, such a tangency portfolio would be impossible to achieve, because one cannot short an asset too much due to short sale constraints, and also because of price impact: longing a large amount of an asset would push up its price, breaking the assumption that the asset prices do not depend on the portfolio. If the covariance matrix is not invertible, then there exists some nonzero vector v{\displaystyle v} such that vTR{\displaystyle v^{T}R} is a random variable with zero variance—that is, it is not random at all.
Suppose∑ivi=0{\displaystyle \sum _{i}v_{i}=0}andvTR=0{\displaystyle v^{T}R=0}, then that means one of the assets can be exactly replicated using the other assets at the same price and the same return. Therefore, there is never a reason to buy that asset, and we can remove it from the market. Suppose∑ivi=0{\displaystyle \sum _{i}v_{i}=0}andvTR≠0{\displaystyle v^{T}R\neq 0}, then that means there is free money, breaking theno arbitrageassumption. Suppose∑ivi≠0{\displaystyle \sum _{i}v_{i}\neq 0}, then we can scale the vector to∑ivi=1{\displaystyle \sum _{i}v_{i}=1}. This means that we have constructed a risk-free asset with returnvTR{\displaystyle v^{T}R}. We can remove each such asset from the market, constructing one risk-free asset for each such asset removed. By the no arbitrage assumption, all their return rates are equal. For the assets that still remain in the market, their covariance matrix is invertible. The above analysis describes optimal behavior of an individual investor.Asset pricing theorybuilds on this analysis, allowing MPT to derive the required expected return for a correctly priced asset in this context. Intuitively (in aperfect marketwithrational investors), if a security was expensive relative to others - i.e. too much risk for the price - demand would fall and its price would drop correspondingly; if cheap, demand and price would increase likewise. This would continue until all such adjustments had ceased - a state of "market equilibrium". In this equilibrium, relative supplies will equal relative demands: given the relationship of price with supply and demand, since the risk-to-reward ratio is "identical" across all securities, proportions of each security in any fully-diversified portfolio would correspondingly be the same as in the overall market. More formally, then, since everyone holds the risky assets in identical proportions to each other — namely in the proportions given by the tangency portfolio — inmarket equilibriumthe risky assets' prices, and therefore their expected returns, will adjust so that the ratios in the tangency portfolio are the same as the ratios in which the risky assets are supplied to the market.[14]The result for expected return then follows, as below. Specific risk is the risk associated with individual assets - within a portfolio these risks can be reduced through diversification (specific risks "cancel out"). Specific risk is also called diversifiable, unique, unsystematic, or idiosyncratic risk.Systematic risk(a.k.a. portfolio risk or market risk) refers to the risk common to all securities—except forselling shortas noted below, systematic risk cannot be diversified away (within one market). Within the market portfolio, asset specific risk will be diversified away to the extent possible. Systematic risk is therefore equated with the risk (standard deviation) of the market portfolio. Since a security will be purchased only if it improves the risk-expected return characteristics of the market portfolio, the relevant measure of the risk of a security is the risk it adds to the market portfolio, and not its risk in isolation. In this context, the volatility of the asset, and its correlation with the market portfolio, are historically observed and are therefore given. (There are several approaches to asset pricing that attempt to price assets by modelling the stochastic properties of the moments of assets' returns - these are broadly referred to as conditional asset pricing models.) 
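The quantities discussed in the preceding paragraphs — the frontier portfolio for a required return, the global minimum-variance portfolio, and the tangency portfolio with its capital allocation line — can all be computed with a few lines of linear algebra. The sketch below is only an illustration on invented inputs: the Lagrange system is one standard way to arrange the first-order conditions referred to above, and the closed-form tangency and MVP weights assume an invertible covariance matrix.

```python
import numpy as np

# Invented expected returns and covariance matrix for three risky assets
mu = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.040, 0.006, 0.012],
                  [0.006, 0.090, 0.018],
                  [0.012, 0.018, 0.160]])
ones = np.ones(len(mu))

def frontier_weights(mu_target):
    """Minimum-variance weights for a required expected return, via the
    Lagrange-multiplier first-order conditions assembled into a linear system."""
    n = len(mu)
    A = np.block([
        [2 * Sigma,      mu[:, None], ones[:, None]],
        [mu[None, :],    np.zeros((1, 2))],
        [ones[None, :],  np.zeros((1, 2))],
    ])
    b = np.concatenate([np.zeros(n), [mu_target, 1.0]])
    return np.linalg.solve(A, b)[:n]          # the last two entries are the multipliers

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
w_mvp = np.linalg.solve(Sigma, ones)
w_mvp /= ones @ w_mvp
mu_mvp = w_mvp @ mu                           # tangency portfolio exists only if rf < mu_mvp

# Tangency (maximum Sharpe ratio) portfolio: w proportional to Sigma^{-1}(mu - rf * 1)
rf = 0.03
w_tan = np.linalg.solve(Sigma, mu - rf)
w_tan /= w_tan.sum()
ret_tan = w_tan @ mu
vol_tan = np.sqrt(w_tan @ Sigma @ w_tan)

# A point on the capital allocation line: fraction a in the tangency portfolio,
# the rest in the risk-free asset; E(R_C) = R_F + sigma_C * (E(R_P) - R_F) / sigma_P
a = 0.6
ret_c = rf + a * (ret_tan - rf)
vol_c = a * vol_tan

print(frontier_weights(0.10), w_mvp, mu_mvp, w_tan, (ret_tan - rf) / vol_tan)
```

Sweeping mu_target over a range of values and recording the resulting volatilities traces out the hyperbolic frontier described above.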
Systematic risks within one market can be managed through a strategy of using both long and short positions within one portfolio, creating a "market neutral" portfolio. Market neutral portfolios, therefore, will be uncorrelated with broader market indices. The asset return depends on the amount paid for the asset today. The price paid must ensure that the market portfolio's risk / return characteristics improve when the asset is added to it. The CAPM is a model that derives the theoretical required expected return (i.e., discount rate) for an asset in a market, given the risk-free rate available to investors and the risk of the market as a whole. The CAPM is usually expressed as {\displaystyle \operatorname {E} (R_{i})=R_{f}+\beta _{i}(\operatorname {E} (R_{m})-R_{f})}, where βi is the sensitivity of the asset's return to the market's return. A derivation[14] is as follows: (1) The incremental impact on risk and expected return when an additional risky asset, a, is added to the market portfolio, m, follows from the formulae for a two-asset portfolio. These results are used to derive the asset-appropriate discount rate. (2) If an asset, a, is correctly priced, the improvement for an investor in her risk-to-expected return ratio achieved by adding it to the market portfolio, m, will at least (in equilibrium, exactly) match the gains of spending that money on an increased stake in the market portfolio. The assumption is that the investor will purchase the asset with funds borrowed at the risk-free rate, Rf{\displaystyle R_{f}}; this is rational if E⁡(Ra)>Rf{\displaystyle \operatorname {E} (R_{a})>R_{f}}. This equation can be estimated statistically using the following regression equation: {\displaystyle \mathrm {SCL} :\;R_{i,t}-R_{f}=\alpha _{i}+\beta _{i}\,(R_{M,t}-R_{f})+\epsilon _{i,t}}, where αi is called the asset's alpha, βi is the asset's beta coefficient and SCL is the security characteristic line (a numerical illustration of this regression is given below). Once an asset's expected return, E(Ri){\displaystyle E(R_{i})}, is calculated using CAPM, the future cash flows of the asset can be discounted to their present value using this rate to establish the correct price for the asset. A riskier stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. In theory, an asset is correctly priced when its observed price is the same as its value calculated using the CAPM derived discount rate. If the observed price is higher than the valuation, then the asset is overvalued; if the observed price is lower, the asset is undervalued. Despite its theoretical importance, critics of MPT question whether it is an ideal investment tool, because its model of financial markets does not match the real world in many ways.[2] The risk, return, and correlation measures used by MPT are based on expected values, which means that they are statistical statements about the future (the expected value of returns is explicit in the above equations, and implicit in the definitions of variance and covariance). Such measures often cannot capture the true statistical features of the risk and return, which often follow highly skewed distributions (e.g. the log-normal distribution) and can give rise not only to reduced volatility but also to inflated growth of return.[15] In practice, investors must substitute predictions based on historical measurements of asset return and volatility for these values in the equations. Very often such expected values fail to take account of new circumstances that did not exist when the historical data was generated.[16] An optimal approach to capturing trends, which differs from Markowitz optimization by utilizing invariance properties, is also derived from physics.
Instead of transforming the normalized expectations using the inverse of the correlation matrix, the invariant portfolio employs the inverse of the square root of the correlation matrix.[17]The optimization problem is solved under the assumption that expected values are uncertain and correlated.[18]The Markowitz solution corresponds only to the case where the correlation between expected returns is similar to the correlation between returns. More fundamentally, investors are stuck with estimating key parameters from past market data because MPT attempts to model risk in terms of the likelihood of losses, but says nothing about why those losses might occur. The risk measurements used areprobabilisticin nature, not structural. This is a major difference as compared to many engineering approaches torisk management. Optionstheory and MPT have at least one important conceptual difference from theprobabilistic risk assessmentdone by nuclear power [plants]. A PRA is what economists would call astructural model. The components of a system and their relationships are modeled inMonte Carlo simulations. If valve X fails, it causes a loss of back pressure on pump Y, causing a drop in flow to vessel Z, and so on. But in theBlack–Scholesequation and MPT, there is no attempt to explain an underlying structure to price changes. Various outcomes are simply given probabilities. And, unlike the PRA, if there is no history of a particular system-level event like aliquidity crisis, there is no way to compute the odds of it. If nuclear engineers ran risk management this way, they would never be able to compute the odds of a meltdown at a particular plant until several similar events occurred in the same reactor design. Mathematical risk measurements are also useful only to the degree that they reflect investors' true concerns—there is no point minimizing a variable that nobody cares about in practice. In particular,varianceis a symmetric measure that counts abnormally high returns as just as risky as abnormally low returns. The psychological phenomenon ofloss aversionis the idea that investors are more concerned about losses than gains, meaning that our intuitive concept of risk is fundamentally asymmetric in nature. Many other risk measures (like coherent risk measures) might better reflect investors' true preferences. Modern portfolio theory has also been criticized because it assumes that returns follow aGaussian distribution. Already in the 1960s,Benoit MandelbrotandEugene Famashowed the inadequacy of this assumption and proposed the use of more generalstable distributionsinstead.Stefan MittnikandSvetlozar Rachevpresented strategies for deriving optimal portfolios in such settings.[19][20][21]More recently,Nassim Nicholas Talebhas also criticized modern portfolio theory on this ground, writing: After the stock market crash (in 1987), they rewarded two theoreticians, Harry Markowitz and William Sharpe, who built beautifully Platonic models on a Gaussian base, contributing to what is called Modern Portfolio Theory. Simply, if you remove their Gaussian assumptions and treat prices as scalable, you are left with hot air. The Nobel Committee could have tested the Sharpe and Markowitz models—they work like quack remedies sold on the Internet—but nobody in Stockholm seems to have thought about it.
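Returning to the security characteristic line introduced in the CAPM discussion above, the regression can be estimated by ordinary least squares. The sketch below is a minimal illustration on simulated monthly data; every figure in it is invented, and the covariance/variance formula is simply the OLS slope estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly returns: a market index and a single asset
rf = 0.002                                               # monthly risk-free rate
r_m = rng.normal(0.006, 0.04, 120)                       # market returns
r_a = rf + 1.3 * (r_m - rf) + rng.normal(0, 0.02, 120)   # asset with a "true" beta of 1.3

x = r_m - rf                                             # market excess returns
y = r_a - rf                                             # asset excess returns

# Estimate the security characteristic line  y = alpha + beta * x + error
beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
alpha = y.mean() - beta * x.mean()

# CAPM required (expected) return for the asset, given an estimate of E(R_m)
required_return = rf + beta * (r_m.mean() - rf)
print(alpha, beta, required_return)
```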
Contrarian investorsandvalue investorstypically do not subscribe to Modern Portfolio Theory.[22]One objection is that the MPT relies on theefficient-market hypothesisand uses fluctuations in share price as a substitute for risk.Sir John Templetonbelieved in diversification as a concept, but also felt the theoretical foundations of MPT were questionable, and concluded (as described by a biographer): "the notion that building portfolios on the basis of unreliable and irrelevant statistical inputs, such as historical volatility, was doomed to failure."[23] A few studies have argued that "naive diversification", splitting capital equally among available investment options, might have advantages over MPT in some situations.[24] When applied to certain universes of assets, the Markowitz model has been identified by academics to be inadequate due to its susceptibility to model instability which may arise, for example, among a universe of highly correlated assets.[25] Since MPT's introduction in 1952, many attempts have been made to improve the model, especially by using more realistic assumptions. Post-modern portfolio theoryextends MPT by adopting non-normally distributed, asymmetric, and fat-tailed measures of risk.[26]This helps with some of these problems, but not others. Black–Litterman modeloptimization is an extension of unconstrained Markowitz optimization that incorporates relative and absolute 'views' on the inputs of risk and returns. The model is also extended by assuming that expected returns are uncertain, and the correlation matrix in this case can differ from the correlation matrix between returns.[17][18] Modern portfolio theory is inconsistent with the main axioms ofrational choice theory, most notably with the monotonicity axiom, which states that, if investing into portfolioXwill, with probability one, return more money than investing into portfolioY, then a rational investor should preferXtoY. In contrast, modern portfolio theory is based on a different axiom, called variance aversion,[27]and may recommend investing intoYon the basis that it has lower variance. Maccheroni et al.[28]described a choice theory which is the closest possible to modern portfolio theory, while satisfying the monotonicity axiom. Alternatively, mean-deviation analysis[29]is a rational choice theory resulting from replacing variance by an appropriatedeviation risk measure. In the 1970s, concepts from MPT found their way into the field ofregional science. In a series of seminal works, Michael Conroy[citation needed]modeled the labor force in the economy using portfolio-theoretic methods to examine growth and variability in the labor force. This was followed by a long literature on the relationship between economic growth and volatility.[30] More recently, modern portfolio theory has been used to model the self-concept in social psychology. When the self attributes comprising the self-concept constitute a well-diversified portfolio, then psychological outcomes at the level of the individual such as mood and self-esteem should be more stable than when the self-concept is undiversified. This prediction has been confirmed in studies involving human subjects.[31] Recently, modern portfolio theory has been applied to modelling the uncertainty and correlation between documents in information retrieval.
Given a query, the aim is to maximize the overall relevance of a ranked list of documents and at the same time minimize the overall uncertainty of the ranked list.[32] Some experts apply MPT to portfolios of projects and other assets besides financial instruments.[33][34]When MPT is applied outside of traditional financial portfolios, some distinctions between the different types of portfolios must be considered. Neither of these necessarily eliminates the possibility of using MPT with such portfolios. They simply indicate the need to run the optimization with an additional set of mathematically expressed constraints that would not normally apply to financial portfolios. Furthermore, some of the simplest elements of Modern Portfolio Theory are applicable to virtually any kind of portfolio. The concept of capturing the risk tolerance of an investor by documenting how much risk is acceptable for a given return may be applied to a variety of decision analysis problems. MPT uses historical variance as a measure of risk, but portfolios of assets like major projects do not have a well-defined "historical variance". In this case, the MPT investment boundary can be expressed in more general terms like "chance of an ROI less than cost of capital" or "chance of losing more than half of the investment". When risk is put in terms of uncertainty about forecasts and possible losses then the concept is transferable to various types of investment.[33]
https://en.wikipedia.org/wiki/Modern_portfolio_theory
Simply stated,post-modern portfolio theory(PMPT) is an extension of the traditionalmodern portfolio theory(MPT) of Markowitz and Sharpe. Both theories provide analytical methods for rational investors to use diversification to optimize their investment portfolios. The essential difference between PMPT and MPT is that PMPT emphasizes the return thatmustbe earned on an investment in order to meet future, specified obligations, whereas MPT is concerned only with the absolute return vis-a-vis the risk-free rate. The earliest published literature under the PMPT rubric was published by the principals of software developer Investment Technologies, LLC, Brian M. Rom and Kathleen W. Ferguson, in the Winter, 1993 and Fall, 1994 editions ofThe Journal of Investing. However, while the software tools resulting from the application of PMPT were innovations for practitioners, many of the ideas and concepts embodied in these applications had long and distinguished provenance in academic and research institutions worldwide. Empirical investigations began in 1981 at the Pension Research Institute (PRI) atSan Francisco State University. Dr. Hal Forsey and Dr. Frank Sortino were trying to apply Peter Fishburn's theory published in 1977 to Pension Fund Management. The result was an asset allocation model that PRI licensed Brian Rom to market in 1988. Mr. Rom coined the term PMPT and began using it to market portfolio optimization and performance measurement software developed by his company. These systems were built on the PRI downside-risk algorithms. Sortino and Steven Satchell at Cambridge University co-authored the first book on PMPT. This was intended as a graduate seminar text in portfolio management. A more recent book by Sortino was written for practitioners. The first publication in a major journal was co-authored by Sortino and Dr. Robert van der Meer, then at Shell Oil Netherlands. These concepts were popularized by articles and conference presentations by Sortino, Rom and others, including members of the now-defunct Salomon Bros.Skunk Works. Sortino claims the major contributors to the underlying theory are: Harry Markowitzlaid the foundations of MPT, the greatest contribution of which is[citation needed]the establishment of a formal risk/return framework for investment decision-making; seeMarkowitz model. By defining investment risk in quantitative terms, Markowitz gave investors a mathematical approach to asset-selection andportfolio management. But there are important limitations to the original MPT formulation. Two major limitations of MPT are its assumptions that: Stated another way, MPT is limited by measures of risk and return that do not always represent the realities of the investment markets. The assumption of a normal distribution is a major practical limitation, because it is symmetrical. Using the variance (or its square root, the standard deviation) implies that uncertainty about better-than-expected returns is just as undesirable as uncertainty about returns that are worse than expected. Furthermore, using the normal distribution to model the pattern of investment returns makes investment results with more upside than downside returns appear more risky than they really are. The converse distortion applies to distributions with a predominance of downside returns. The result is that using traditional MPT techniques for measuring investment portfolio construction and evaluation frequently does not accurately model investment reality.
It has long been recognized that investors typically do not view as risky those returnsabovethe minimum they must earn in order to achieve their investment objectives. They believe that risk has to do with the bad outcomes (i.e., returns below a required target), not the good outcomes (i.e., returns in excess of the target) and that losses weigh more heavily than gains. This view has been noted by researchers in finance, economics and psychology, including Sharpe (1964). "Under certain conditions the MVA can be shown to lead to unsatisfactory predictions of (investor) behavior. Markowitz suggests that a model based on thesemivariancewould be preferable; in light of the formidablecomputational problems, however, he bases his (MV) analysis on the mean and the standard deviation.[2]" Recent advances in portfolio and financial theory, coupled with increased computing power, have also contributed to overcoming these limitations. In 1987, the Pension Research Institute at San Francisco State University developed the practical mathematical algorithms of PMPT that are in use today. These methods provide a framework that recognizes investors' preferences for upside over downsidevolatility. At the same time, a more robust model for the pattern of investment returns, the three-parameterlognormal distribution,[3]was introduced. Downside risk (DR) is measured by target semi-deviation (the square root of target semivariance) and is termed downside deviation. It is expressed in percentages and therefore allows for rankings in the same way asstandard deviation. An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures quadratically. This is consistent with observations made on the behavior of individual decision-making under uncertainty. In its continuous form, the downside deviation is {\displaystyle d={\sqrt {\int _{-\infty }^{t}(t-r)^{2}f(r)\,dr}}}, where d= downside deviation (commonly known in the financial community as 'downside risk'). Note: By extension,d² = downside variance. t= the annual target return, originally termed the minimum acceptable return, or MAR. r= the random variable representing the return for the distribution of annual returnsf(r), f(r) = thedistributionfor the annual returns, e.g. the three-parameter lognormal distribution. For the reasons provided below, thiscontinuousformula is preferred over a simplerdiscreteversion that determines the standard deviation of below-target periodic returns taken from the return series. 1. The continuous form permits all subsequent calculations to be made using annual returns, which is the natural way for investors to specify their investment goals. The discrete form requires monthly returns for there to be sufficient data points to make a meaningful calculation, which in turn requires converting the annual target into a monthly target. This significantly affects the amount of risk that is identified. For example, a goal of earning 1% in every month of one year results in a greater risk than the seemingly equivalent goal of earning 12% in one year. 2. A second reason for strongly preferring the continuous form to the discrete form has been proposed by Sortino & Forsey (1996): "Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been. 
To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story." Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed), or the range within which two-thirds of all returns lie (even though the specific returns identifying this range have not necessarily occurred). Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties. In PMPT an analogous process is followed: TheSortino ratio, developed in 1993 by Rom's company, Investment Technologies, LLC, was the first new element in the PMPT rubric. It is defined as {\displaystyle S={\frac {r-t}{d}}}, where r= the annualized rate of return, t= the target return, and d= downside risk. The following table shows that this ratio is demonstrably superior to the traditionalSharpe ratioas a means for ranking investment results. The table shows risk-adjusted ratios for several major indexes using both Sortino and Sharpe ratios. The data cover the five years 1992-1996 and are based on monthly total returns. The Sortino ratio is calculated against a 9.0% target. As an example of the different conclusions that can be drawn using these two ratios, notice how the Lehman Aggregate and MSCI EAFE compare - the Lehman ranks higher using the Sharpe ratio whereas EAFE ranks higher using the Sortino ratio. In many cases, manager or index rankings will be different, depending on the risk-adjusted measure used. These patterns will change again for different values of t. For example, when t is close to the risk-free rate, the Sortino Ratio for T-bills will be higher than that for the S&P 500, while the Sharpe ratio remains unchanged. In March 2008, researchers at the Queensland Investment Corporation andQueensland University of Technologyshowed that for skewed return distributions, the Sortino ratio is superior to the Sharpe ratio as a measure of portfolio risk.[4] Volatility skewness is the second portfolio-analysis statistic introduced by Rom and Ferguson under the PMPT rubric. It measures the ratio of a distribution's percentage of total variance from returns above the mean, to the percentage of the distribution's total variance from returns below the mean. Thus, if a distribution is symmetrical (as in the normal case, as is assumed under MPT), it has a volatility skewness of 1.00. Values greater than 1.00 indicate positive skewness; values less than 1.00 indicate negative skewness. While closely correlated with the traditional statistical measure of skewness (viz., the third moment of a distribution), the authors of PMPT argue that their volatility skewness measure has the advantage of being intuitively more understandable to non-statisticians who are the primary practical users of these tools.
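A discrete, sample-based approximation of the volatility skewness statistic just described can be written in a few lines; the return series below are simulated stand-ins for actual index or fund data.

```python
import numpy as np

def volatility_skewness(returns):
    """Share of total variance coming from returns above the mean, divided by the
    share coming from returns below the mean (sample approximation)."""
    deviations = np.asarray(returns) - np.mean(returns)
    upside = np.sum(deviations[deviations > 0] ** 2)
    downside = np.sum(deviations[deviations < 0] ** 2)
    return upside / downside

rng = np.random.default_rng(2)
symmetric_returns = rng.normal(0.01, 0.05, 5000)       # symmetric: ratio close to 1.00
skewed_returns = np.expm1(rng.normal(0.0, 0.5, 5000))  # lognormal-type: ratio above 1.00

print(volatility_skewness(symmetric_returns))
print(volatility_skewness(skewed_returns))
```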
The importance of skewness lies in the fact that the more non-normal (i.e., skewed) a return series is, the more its true risk will be distorted by traditional MPT measures such as the Sharpe ratio. Thus, with the recent advent of hedging and derivative strategies, which are asymmetrical by design, MPT measures are essentially useless, while PMPT is able to capture significantly more of the true information contained in the returns under consideration. Many of the common market indices and the returns of stock and bond mutual funds cannot themselves always be assumed to be accurately represented by the normal distribution. Data: Monthly returns, January, 1991 through December, 1996. For a comprehensive survey of the early literature, see R. Libby and P.C. Fishburn [1977].
https://en.wikipedia.org/wiki/Post-modern_portfolio_theory
In finance, theSharpe ratio(also known as theSharpe index, theSharpe measure, and thereward-to-variability ratio) measures the performance of an investment such as asecurityorportfoliocompared to arisk-free asset, after adjusting for itsrisk. It is defined as the difference between the returns of the investment and therisk-free return, divided by thestandard deviationof the investment returns. It represents the additional amount of return that an investor receives per unit of increase in risk. It was named afterWilliam F. Sharpe,[1]who developed it in 1966. Since its revision by the original author, William Sharpe, in 1994,[2]theex-anteSharpe ratio is defined as {\displaystyle S_{a}={\frac {E[R_{a}-R_{b}]}{\sigma _{a}}}}, whereRa{\displaystyle R_{a}}is the asset return,Rb{\displaystyle R_{b}}is therisk-free return(such as aU.S. Treasury security).E[Ra−Rb]{\displaystyle E[R_{a}-R_{b}]}is theexpected valueof the excess of the asset return over the benchmark return, andσa{\displaystyle {\sigma _{a}}}is thestandard deviationof the asset excess return. Thet-statisticwill equal the Sharpe Ratio times the square root of T (the number of returns used for the calculation). Theex-postSharpe ratio uses the same equation as the one above but with realized returns of the asset and benchmark rather than expected returns; see the second example below. Theinformation ratiois a generalization of the Sharpe ratio that uses as benchmark some other, typically risky index rather than using risk-free returns. The Sharpe ratio seeks to characterize how well the return of an asset compensates the investor for the risk taken. When comparing two assets, the one with a higher Sharpe ratio appears to provide better return for the same risk, which is usually attractive to investors.[3] However, financial assets are oftennot normally distributed, so that standard deviation does not capture all aspects of risk.Ponzi schemes, for example, will have a high empirical Sharpe ratio until they fail. Similarly, a fund that sells low-strikeput optionswill have a high empirical Sharpe ratio until one of those puts is exercised, creating a large loss. In both cases, the empirical standard deviation before failure gives no real indication of the size of the risk being run.[4] Even in less extreme cases, a reliable empirical estimate of the Sharpe ratio still requires the collection of return data over a sufficient period for all aspects of the strategy returns to be observed. For example, data must be taken over decades if the algorithm sells an insurance that involves a high liability payout once every 5–10 years, and ahigh-frequency tradingalgorithm may only require a week of data if each trade occurs every 50 milliseconds, with care taken toward risk from unexpected but rare results that such testing did not capture (seeflash crash). Additionally, when examining the investment performance of assets with smoothing of returns (such aswith-profitsfunds), the Sharpe ratio should be derived from the performance of the underlying assets rather than the fund returns (such a model would invalidate the aforementioned Ponzi scheme, as desired). Sharpe ratios, along withTreynor ratiosandJensen's alphas, are often used to rank the performance of portfolio ormutual fundmanagers.Berkshire Hathawayhad a Sharpe ratio of 0.79 for the period 1976 to 2017, higher than any other stock or mutual fund with a history of more than 30 years. The stock market[specify]had a Sharpe ratio of 0.49 for the same period.[5] Several statistical tests of the Sharpe ratio have been proposed. 
These include those proposed by Jobson & Korkie[6]and Gibbons, Ross & Shanken.[7] In 1952, Andrew D. Roy suggested maximizing the ratio "(m-d)/σ", where m is expected gross return, d is some "disaster level" (a.k.a., minimum acceptable return, or MAR) and σ is standard deviation of returns.[8]This ratio is just the Sharpe ratio, only using minimum acceptable return instead of the risk-free rate in the numerator, and using standard deviation of returns instead of standard deviation of excess returns in the denominator. Roy's ratio is also related to theSortino ratio, which also uses MAR in the numerator, but uses a different standard deviation (semi/downside deviation) in the denominator. In 1966,William F. Sharpedeveloped what is now known as the Sharpe ratio.[1]Sharpe originally called it the "reward-to-variability" ratio before it began being called the Sharpe ratio by later academics and financial operators. The definition was: Sharpe's 1994 revision acknowledged that the basis of comparison should be an applicable benchmark, which changes with time. After this revision, the definition is: Note, ifRf{\displaystyle R_{f}}is a constant risk-free return throughout the period, The (original) Sharpe ratio has often been challenged with regard to its appropriateness as a fund performance measure during periods of declining markets.[9] Example 1 Suppose the asset has an expected return of 15% in excess of the risk free rate. We typically do not know if the asset will have this return. We estimate the risk of the asset, defined as standard deviation of the asset'sexcess return, as 10%. The risk-free return is constant. Then the Sharpe ratio using the old definition isRa−Rfσa=0.150.10=1.5{\displaystyle {\frac {R_{a}-R_{f}}{\sigma _{a}}}={\frac {0.15}{0.10}}=1.5} Example 2 An investor has a portfolio with an expected return of 12% and a standard deviation of 10%. The rate of interest is 5%, and is risk-free. The Sharpe ratio is:0.12−0.050.1=0.7{\displaystyle {\frac {0.12-0.05}{0.1}}=0.7} A negative Sharpe ratio means the portfolio has underperformed its benchmark. All other things being equal, an investor typically prefers a higher positive Sharpe ratio as it has either higher returns or lowervolatility. However, a negative Sharpe ratio can be made higher by either increasing returns (a good thing) or increasing volatility (a bad thing). Thus, for negative values the Sharpe ratio does not correspond well to typical investorutility functions. The Sharpe ratio is convenient because it can be calculated purely from any observed series of returns without need for additional information surrounding the source of profitability. However, this makes it vulnerable to manipulation if opportunities exist for smoothing or discretionary pricing of illiquid assets. Statistics such as thebias ratioandfirst order autocorrelationare sometimes used to indicate the potential presence of these problems. While theTreynor ratioconsiders only thesystematic riskof a portfolio, the Sharpe ratio considers both systematic andidiosyncratic risks. Which one is more relevant will depend on the portfolio context. The returns measured can be of any frequency (i.e. daily, weekly, monthly or annually), as long as they arenormally distributed, as the returns can always be annualized. Herein lies the underlying weakness of the ratio – asset returns are not normally distributed. 
Abnormalities likekurtosis,fatter tailsand higher peaks, orskewnesson thedistributioncan be problematic for the ratio, as standard deviation doesn't have the same effectiveness when these problems exist.[10] For a Brownian walk, the Sharpe ratioμ/σ{\displaystyle \mu /\sigma }is adimensional quantityand has units 1/√T{\displaystyle 1/{\sqrt {T}}}, because the excess returnμ{\displaystyle \mu }and the volatilityσ{\displaystyle \sigma }are proportional to 1/T{\displaystyle 1/T} and 1/√T{\displaystyle 1/{\sqrt {T}}} correspondingly. TheKelly criterionis adimensionless quantity, and, indeed, the Kelly fractionμ/σ2{\displaystyle \mu /\sigma ^{2}}is the numerical fraction of wealth suggested for the investment. In some settings, theKelly criterioncan be used to convert the Sharpe ratio into a rate of return. The Kelly criterion gives the ideal size of the investment, which when adjusted by the period and expected rate of return per unit, gives a rate of return.[11] The accuracy of Sharpe ratio estimators hinges on the statistical properties of returns, and these properties can vary considerably among strategies, portfolios, and over time.[12] Bailey and López de Prado (2012)[13]show that Sharpe ratios tend to be overstated in the case of hedge funds with short track records. These authors propose a probabilistic version of the Sharpe ratio that takes into account the asymmetry and fat-tails of the returns' distribution. With regards to the selection of portfolio managers on the basis of their Sharpe ratios, these authors have proposed aSharpe ratio indifference curve.[14] This curve illustrates the fact that it is efficient to hire portfolio managers with low and even negative Sharpe ratios, as long as their correlation to the other portfolio managers is sufficiently low. Goetzmann, Ingersoll, Spiegel, and Welch (2002) determined that the best strategy to maximize a portfolio's Sharpe ratio, when both securities and options contracts on these securities are available for investment, is a portfolio of selling oneout-of-the-moneycall and selling one out-of-the-money put. This portfolio generates an immediate positive payoff, has a large probability of generating modestly high returns, and has a small probability of generating huge losses. Shah (2014) observed that such a portfolio is not suitable for many investors, but fund sponsors who select fund managers primarily based on the Sharpe ratio will give incentives for fund managers to adopt such a strategy.[15] In recent years, many financial websites have promoted the idea that a Sharpe Ratio "greater than 1 is considered acceptable; a ratio higher than 2.0 is considered very good; and a ratio above 3.0 is excellent." While it is unclear where this rubric originated online, it makes little sense since the magnitude of the Sharpe ratio is sensitive to the time period over which the underlying returns are measured. This is because the numerator of the ratio (returns) scales in proportion to time, while the denominator of the ratio (standard deviation) scales in proportion to the square root of time. Most diversified indexes of equities, bonds, mortgages or commodities have annualized Sharpe ratios below 1, which suggests that a Sharpe ratio consistently above 2.0 or 3.0 is unrealistic.
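The ex-post calculation and the time-scaling point above can be made concrete with a short sketch. All return figures below are invented, and the √12 annualization is the usual convention that follows from the 1/√T units just discussed.

```python
import numpy as np

def sharpe_ratio(asset_returns, benchmark_returns):
    """Ex-post Sharpe ratio: mean excess return over the benchmark divided by
    the standard deviation of the excess returns, at the sampling frequency."""
    excess = np.asarray(asset_returns) - np.asarray(benchmark_returns)
    return excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(3)
monthly_returns = rng.normal(0.010, 0.040, 60)     # five years of invented monthly returns
monthly_rf = np.full(60, 0.002)                    # constant monthly risk-free rate

sr_monthly = sharpe_ratio(monthly_returns, monthly_rf)
sr_annualized = sr_monthly * np.sqrt(12)           # mean scales with 12, volatility with sqrt(12)

# The Kelly fraction mu / sigma^2, by contrast, is dimensionless:
mu = (monthly_returns - monthly_rf).mean()
sigma2 = (monthly_returns - monthly_rf).var(ddof=1)
kelly_monthly = mu / sigma2
kelly_annual = (mu * 12) / (sigma2 * 12)           # identical value

print(sr_monthly, sr_annualized, kelly_monthly, kelly_annual)
```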
https://en.wikipedia.org/wiki/Sharpe_ratio
TheSortino ratiomeasures therisk-adjusted returnof an investmentasset,portfolio, orstrategy.[1]It is a modification of theSharpe ratiobut penalizes only those returns falling below a user-specified target or requiredrate of return, while the Sharpe ratio penalizes both upside and downsidevolatilityequally. Though both ratios measure an investment's risk-adjusted return, they do so in significantly different ways that will frequently lead to differing conclusions as to the true nature of the investment's return-generating efficiency. The Sortino ratio is used as a way to compare the risk-adjusted performance of programs with differing risk and return profiles. In general, risk-adjusted returns seek to normalize the risk across programs and then see which has the higher return unit per risk.[2] The ratioS{\displaystyle S}is calculated as {\displaystyle S={\frac {R-T}{DR}}}, whereR{\displaystyle R}is the asset or portfolio average realized return,T{\displaystyle T}is the target or required rate of return for the investment strategy under consideration (originally called the minimum acceptable returnMAR), andDR{\displaystyle DR}is the target semi-deviation (the square root of target semi-variance), termed downside deviation.DR{\displaystyle DR}is expressed in percentages and therefore allows for rankings in the same way asstandard deviation. An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures at a quadratic rate. This is consistent with observations made on the behavior of individual decision making under uncertainty. The downside deviation is computed as {\displaystyle DR={\sqrt {\int _{-\infty }^{T}(T-r)^{2}f(r)\,dr}}}. Here DR{\displaystyle DR}= downside deviation or (commonly known in the financial community) "downside risk" (by extension,DR2{\displaystyle DR^{2}}= downside variance), T{\displaystyle T}= the annual target return, originally termed the minimum acceptable returnMAR, r{\displaystyle r}= the random variable representing the return for the distribution of annual returnsf(r){\displaystyle f(r)}, and f(r){\displaystyle f(r)}= thedistributionfor the annual returns, e.g., thelog-normal distribution. For the reasons provided below, thiscontinuousformula is preferred over a simplerdiscreteversion that determines the standard deviation of below-target periodic returns taken from the return series. "Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been. To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story." Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed) or the range within which two-thirds of all returns lie (even though the specific returns identifying this range have not necessarily occurred). 
Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties. Inpost-modern portfolio theoryan analogous process is followed. As a caveat, some practitioners have fallen into the habit of using discrete periodic returns to compute downside risk. This method is conceptually and operationally incorrect and negates the foundational statistic of post-modern portfolio theory as developed by Brian M. Rom and Frank A. Sortino. The Sortino ratio is used to score a portfolio's risk-adjusted returns relative to an investment target using downside risk. This is analogous to the Sharpe ratio, which scores risk-adjusted returns relative to the risk-free rate using standard deviation. When return distributions are near symmetrical and the target return is close to the distribution median, these two measures will produce similar results. As skewness increases and targets vary from the median, results can be expected to show dramatic differences. The Sortino ratio can also be used in trading. For example, the Sortino ratio can be computed for a trading strategy on a given asset and used to compare that strategy's performance against other strategies.[3] Practitioners who use a lower partial standard deviation (LPSD) instead of a standard deviation also tend to use the Sortino ratio instead of the Sharpe ratio.[4]
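As a small illustration of the definitions above, the sketch below evaluates the continuous downside deviation by numerical integration and then forms the Sortino ratio. A plain normal distribution of annual returns is used purely for convenience in place of the lognormal model mentioned in the text, and all figures are invented.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def downside_deviation(target, return_pdf):
    """Continuous target semi-deviation: square root of the integral of
    (T - r)^2 f(r) dr over returns below the annual target T."""
    below_target_var, _ = quad(lambda r: (target - r) ** 2 * return_pdf(r), -np.inf, target)
    return np.sqrt(below_target_var)

def sortino_ratio(realized_return, target, return_pdf):
    """S = (R - T) / DR, using the continuous downside deviation."""
    return (realized_return - target) / downside_deviation(target, return_pdf)

# Assumed annual return distribution and target (MAR); figures are illustrative only
pdf = stats.norm(loc=0.12, scale=0.18).pdf
print(sortino_ratio(realized_return=0.12, target=0.09, return_pdf=pdf))
```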
https://en.wikipedia.org/wiki/Sortino_ratio
Theupside-potential ratiois a measure of a return of an investment asset relative to the minimal acceptable return. The measurement allows a firm or individual to choose investments which have had relatively good upside performance, per unit ofdownside risk. The ratio may be written as {\displaystyle U={\frac {\sum _{r=\min }^{+\infty }(R_{r}-R_{\min })\,P_{r}}{\sqrt {\sum _{r=-\infty }^{\min }(R_{r}-R_{\min })^{2}\,P_{r}}}}={\frac {\mathbb {E} [(R_{r}-R_{\min })_{+}]}{\sqrt {\mathbb {E} [(R_{r}-R_{\min })_{-}^{2}]}}},} where the returnsRr{\displaystyle R_{r}}have been put into increasing order. HerePr{\displaystyle P_{r}}is the probability of the returnRr{\displaystyle R_{r}}, andRmin{\displaystyle R_{\min }}, which occurs atr=min{\displaystyle r=\min }, is the minimal acceptable return. In the secondary formula(X)+={XifX≥00else{\displaystyle (X)_{+}={\begin{cases}X&{\text{if }}X\geq 0\\0&{\text{else}}\end{cases}}}and(X)−=(−X)+{\displaystyle (X)_{-}=(-X)_{+}}.[1] The upside-potential ratio may also be expressed as a ratio ofpartial momentssinceE[(Rr−Rmin)+]{\displaystyle \mathbb {E} [(R_{r}-R_{\min })_{+}]}is the first upper partial moment andE[(Rr−Rmin)−2]{\displaystyle \mathbb {E} [(R_{r}-R_{\min })_{-}^{2}]}is the second lower partial moment. The measure was developed by Frank A. Sortino. The upside-potential ratio is a measure of risk-adjusted returns. All such measures are dependent on some measure of risk. In practice,standard deviationis often used, perhaps because it is mathematically easy to manipulate. However, standard deviation treats deviations above the mean (which are desirable, from the investor's perspective) exactly the same as it treats deviations below the mean (which are less desirable, at the very least). In practice, rational investors have a preference for good returns (e.g., deviations above the mean) and an aversion to bad returns (e.g., deviations below the mean). Sortino further found that investors are (or, at least, should be) averse not to deviations below the mean, but to deviations below some "minimal acceptable return" (MAR), which is meaningful to them specifically. Thus, this measure uses deviations above the MAR in the numerator, rewarding performance above the MAR. In the denominator, it has deviations below the MAR, thus penalizing performance below the MAR. Thus, by rewarding desirable results in the numerator and penalizing undesirable results in the denominator, this measure attempts to serve as a pragmatic measure of the goodness of an investment portfolio's returns in a sense that is not just mathematically simple (a primary reason to use standard deviation as a risk measure), but one that considers the realities of investor psychology and behavior.
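The partial-moment form of the ratio translates directly into code. The sketch below is a minimal example that treats a small set of hypothetical annual returns as equally weighted (P_r = 1/n).

```python
import numpy as np

def upside_potential_ratio(returns, r_min):
    """First upper partial moment about the minimal acceptable return, divided by
    the square root of the second lower partial moment about the same point."""
    returns = np.asarray(returns, dtype=float)
    upside = np.clip(returns - r_min, 0.0, None)       # (R - R_min)_+
    downside = np.clip(r_min - returns, 0.0, None)     # (R - R_min)_-
    return upside.mean() / np.sqrt((downside ** 2).mean())

returns = [0.12, -0.03, 0.07, 0.15, -0.08, 0.04]       # hypothetical annual returns
print(upside_potential_ratio(returns, r_min=0.05))
```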
https://en.wikipedia.org/wiki/Upside_potential_ratio
In statistics, a standard normal table, also called the unit normal table or Z table,[1] is a mathematical table for the values of Φ, the cumulative distribution function of the normal distribution. It is used to find the probability that a statistic is observed below, above, or between values on the standard normal distribution, and by extension, any normal distribution. Since probability tables cannot be printed for every normal distribution, as there are an infinite variety of normal distributions, it is common practice to convert a normal to a standard normal (known as a z-score) and then use the standard normal table to find probabilities.[2] Normal distributions are symmetrical, bell-shaped distributions that are useful in describing real-world data. The standard normal distribution, represented by Z, is the normal distribution having a mean of 0 and a standard deviation of 1. If X is a random variable from a normal distribution with mean μ and standard deviation σ, its Z-score may be calculated from X by subtracting μ and dividing by the standard deviation: Z=X−μσ{\displaystyle Z={\frac {X-\mu }{\sigma }}}. If X¯{\displaystyle {\overline {X}}} is the mean of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the standard error is σ/√n{\displaystyle {\tfrac {\sigma }{\sqrt {n}}}}, so Z=X¯−μσ/n{\displaystyle Z={\frac {{\overline {X}}-\mu }{\sigma /{\sqrt {n}}}}}. If ∑X{\textstyle \sum X} is the total of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the expected total is nμ and the standard error is σ√n{\displaystyle \sigma {\sqrt {n}}}, so Z=∑X−nμσn{\displaystyle Z={\frac {\sum X-n\mu }{\sigma {\sqrt {n}}}}}. Z tables are typically composed as follows: Example: To find 0.69, one would look down the rows to find 0.6 and then across the columns to 0.09, which would yield a probability of 0.25490 for a cumulative from mean table or 0.75490 from a cumulative table. To find a negative value such as –0.83, one could use a cumulative table for negative z-values,[3] which yields a probability of 0.20327. But since the normal distribution curve is symmetrical, probabilities for only positive values of Z are typically given. The user might have to use a complementary operation on the absolute value of Z, as in the example below. Z tables use at least three different conventions: This table gives a probability that a statistic is between minus infinity and Z. The values are calculated using the cumulative distribution function of a standard normal distribution with mean of zero and standard deviation of one. This function, usually denoted with the capital Greek letter Φ{\displaystyle \Phi } (phi), is the integral Φ(z)=12π∫−∞ze−t2/2dt{\displaystyle \Phi (z)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{z}e^{-t^{2}/2}\,dt}. Φ{\displaystyle \Phi }(z) is related to the error function, or erf(z). Note that for z = 1, 2, 3, one obtains (after multiplying by 2 to account for the [−z,z] interval) the results f(z) = 0.6827, 0.9545, 0.9974, characteristic of the 68–95–99.7 rule. This table gives a probability that a statistic is less than Z (i.e. between negative infinity and Z).[4] This table gives a probability that a statistic is greater than Z: f(z)=1−Φ(z){\displaystyle f(z)=1-\Phi (z)}.[5] This table gives a probability that a statistic is greater than Z, for large integer Z values. A professor's exam scores are approximately distributed normally with mean 80 and standard deviation 5. Only a cumulative from mean table is available.
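Because Φ can be evaluated from the error function, table lookups such as the 0.69 example above are easy to reproduce. The final lines apply the z-score conversion to the professor's exam scores for a hypothetical question (the probability of a score below 90), since the original worked questions are not reproduced here.

```python
from math import erf, sqrt

def phi(z):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z = 0.69
print(phi(z))          # about 0.75490: the "cumulative" table value
print(phi(z) - 0.5)    # about 0.25490: the "cumulative from mean" table value
print(1.0 - phi(z))    # the "complementary cumulative" value

# Exam scores ~ Normal(mean 80, sd 5): probability of scoring below 90 (hypothetical question)
x, mu, sigma = 90, 80, 5
print(phi((x - mu) / sigma))   # about 0.9772
```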
https://en.wikipedia.org/wiki/Standard_normal_table
Instatistics,Cook's distanceorCook'sDis a commonly used estimate of theinfluenceof a data point when performing a least-squaresregression analysis.[1]In a practicalordinary least squaresanalysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity; or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statisticianR. Dennis Cook, who introduced the concept in 1977.[2][3] Data points with largeresiduals(outliers) and/or highleveragemay distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis. For thealgebraic expression, first define whereε∼N(0,σ2I){\displaystyle {\boldsymbol {\varepsilon }}\sim {\mathcal {N}}\left(0,\sigma ^{2}\mathbf {I} \right)}is theerror term,β=[β0β1…βp−1]T{\displaystyle {\boldsymbol {\beta }}=\left[\beta _{0}\,\beta _{1}\dots \beta _{p-1}\right]^{\mathsf {T}}}is thecoefficient matrix,p{\displaystyle p}is the number of covariates or predictors for each observation, andX{\displaystyle \mathbf {X} }is thedesign matrixincluding a constant. Theleast squaresestimator then isb=(XTX)−1XTy{\displaystyle \mathbf {b} =\left(\mathbf {X} ^{\mathsf {T}}\mathbf {X} \right)^{-1}\mathbf {X} ^{\mathsf {T}}\mathbf {y} }, and consequently the fitted (predicted) values for the mean ofy{\displaystyle \mathbf {y} }are whereH≡X(XTX)−1XT{\displaystyle \mathbf {H} \equiv \mathbf {X} (\mathbf {X} ^{\mathsf {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\mathsf {T}}}is theprojection matrix(or hat matrix). Thei{\displaystyle i}-th diagonal element ofH{\displaystyle \mathbf {H} \,}, given byhii≡xiT(XTX)−1xi{\displaystyle h_{ii}\equiv \mathbf {x} _{i}^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} )^{-1}\mathbf {x} _{i}},[4]is known as theleverageof thei{\displaystyle i}-th observation. Similarly, thei{\displaystyle i}-th element of the residual vectore=y−y^=(I−H)y{\displaystyle \mathbf {e} =\mathbf {y} -\mathbf {\widehat {y\,}} =\left(\mathbf {I} -\mathbf {H} \right)\mathbf {y} }is denoted byei{\displaystyle e_{i}}. Cook's distanceDi{\displaystyle D_{i}}of observationi(fori=1,…,n){\displaystyle i\;({\text{for }}i=1,\dots ,n)}is defined as the sum of all the changes in the regression model when observationi{\displaystyle i}is removed from it[5] wherepis the rank of the model (i.e., number of independent variables in the design matrix) andy^j(i){\displaystyle {\widehat {y\,}}_{j(i)}}is the fitted response value obtained when excludingi{\displaystyle i}, ands2=e⊤en−p{\displaystyle s^{2}={\frac {\mathbf {e} ^{\top }\mathbf {e} }{n-p}}}is themean squared errorof the regression model.[6] Equivalently, it can be expressed using the leverage[5](hii{\displaystyle h_{ii}}): There are different opinions regarding what cut-off values to use for spotting highlyinfluential points. 
Since Cook's distance is in the metric of anFdistributionwithp{\displaystyle p}andn−p{\displaystyle n-p}(as defined for the design matrixX{\displaystyle \mathbf {X} }above) degrees of freedom, the median point (i.e.,F0.5(p,n−p){\displaystyle F_{0.5}(p,n-p)}) can be used as a cut-off.[7]Since this value is close to 1 for largen{\displaystyle n}, a simple operational guideline ofDi>1{\displaystyle D_{i}>1}has been suggested.[8] Thep{\displaystyle p}-dimensional random vectorb−b(i){\displaystyle \mathbf {b} -\mathbf {b\,} _{(i)}}, which is the change ofb{\displaystyle \mathbf {b} }due to a deletion of thei{\displaystyle i}-th observation, has a covariance matrix of rank one and therefore it is distributed entirely over one dimensional subspace (a line, sayL{\displaystyle L}) of thep{\displaystyle p}-dimensional space. The distributional property ofb−b(i){\displaystyle \mathbf {b} -\mathbf {b\,} _{(i)}}mentioned above implies that information about the influence of thei{\displaystyle i}-th observation provided byb−b(i){\displaystyle \mathbf {b} -\mathbf {b\,} _{(i)}}should be obtained not from outside of the lineL{\displaystyle L}but from the lineL{\displaystyle L}itself. However, in the introduction of Cook’s distance, a scaling matrix of full rankp{\displaystyle p}is chosen and as a resultb−b(i){\displaystyle \mathbf {b} -\mathbf {b\,} _{(i)}}is treated as if it is a random vector distributed over the whole space ofp{\displaystyle p}dimensions. This means that information about the influence of thei{\displaystyle i}-th observation provided byb−b(i){\displaystyle \mathbf {b} -\mathbf {b\,} _{(i)}}through the Cook’s distance comes from the whole space ofp{\displaystyle p}dimensions. Hence the Cook's distance measure is likely to distort the real influence of observations, misleading the right identification of influential observations.[9][10] Di{\displaystyle D_{i}}can be expressed using the leverage[5](0≤hii≤1{\displaystyle 0\leq h_{ii}\leq 1}) and the square of theinternallyStudentized residual(0≤ti2{\displaystyle 0\leq t_{i}^{2}}), as follows: The benefit in the last formulation is that it clearly shows the relationship betweenti2{\displaystyle t_{i}^{2}}andhii{\displaystyle h_{ii}}toDi{\displaystyle D_{i}}(while p and n are the same for all observations). Ifti2{\displaystyle t_{i}^{2}}is large then it (for non-extreme values ofhii{\displaystyle h_{ii}}) will increaseDi{\displaystyle D_{i}}. Ifhii{\displaystyle h_{ii}}is close to 0 thenDi{\displaystyle D_{i}}will be small, while ifhii{\displaystyle h_{ii}}is close to 1 thenDi{\displaystyle D_{i}}will become very large (as long asti2>0{\displaystyle t_{i}^{2}>0}, i.e.: that the observationi{\displaystyle i}is not exactly on the regression line that was fitted without observationi{\displaystyle i}). 
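As a sketch of both routes to D_i described above, the following Python snippet fits an ordinary least squares model on simulated data (the design, coefficients and seed are arbitrary), computes Cook's distance from the deletion definition, and checks it against the leverage and studentized-residual form; p counts the intercept column, matching the rank-of-the-model convention used in the text.

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                       # p counts the intercept column as well
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ b
e = y - yhat
s2 = e @ e / (n - p)               # mean squared error of the full fit

# Deletion definition: refit with each observation removed in turn
D = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    yhat_i = X @ b_i               # fitted values with observation i deleted
    D[i] = np.sum((yhat - yhat_i) ** 2) / (p * s2)

# Closed form via the hat matrix: D_i = t_i^2/p * h_ii/(1 - h_ii)
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)                             # leverages h_ii
t2 = e ** 2 / (s2 * (1 - h))               # squared internally studentized residuals
D_closed = (t2 / p) * h / (1 - h)
print(np.allclose(D, D_closed))            # the two routes should agree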
Di{\displaystyle D_{i}}is related toDFFITSthrough the following relationship (note thatσ^σ^(i)ti=ti(i){\displaystyle {{\widehat {\sigma }} \over {\widehat {\sigma }}_{(i)}}t_{i}=t_{i(i)}}is theexternallystudentized residual, andσ^,σ^(i){\displaystyle {\widehat {\sigma }},{\widehat {\sigma }}_{(i)}}are definedhere): Di{\displaystyle D_{i}}can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters.[clarification needed]This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases, where the particular observation is either included or excluded from the regression analysis. An alternative toDi{\displaystyle D_{i}}has been proposed. Instead of considering the influence a single observation has on the overall model, the statisticsSi{\displaystyle S_{i}}serves as a measure of how sensitive the prediction of thei{\displaystyle i}-th observation is to the deletion of each observation in the original data set. It can be formulated as a weighted linear combination of theDj{\displaystyle D_{j}}'s of all data points. Again, theprojection matrixis involved in the calculation to obtain the required weights: In this context,ρij{\displaystyle \rho _{ij}}(≤1{\displaystyle \leq 1}) resembles the correlation between the predictionsy^i{\displaystyle {\widehat {y\,}}_{i}}andy^j{\displaystyle {\widehat {y\,}}_{j}}[a].In contrast toDi{\displaystyle D_{i}}, the distribution ofSi{\displaystyle S_{i}}is asymptotically normal for large sample sizes and models with many predictors. In absence of outliers the expected value ofSi{\displaystyle S_{i}}is approximatelyp−1{\displaystyle p^{-1}}. An influential observation can be identified if withmed⁡(S){\displaystyle \operatorname {med} (S)}as themedianandMAD⁡(S){\displaystyle \operatorname {MAD} (S)}as themedian absolute deviationof allS{\displaystyle S}-values within the original data set, i.e., a robust measure of location and arobust measure of scalefor the distribution ofSi{\displaystyle S_{i}}. The factor 4.5 covers approx. 3standard deviationsofS{\displaystyle S}around its centre.When compared to Cook's distance,Si{\displaystyle S_{i}}was found to perform well for high- and intermediate-leverage outliers, even in presence of masking effects for whichDi{\displaystyle D_{i}}failed.[12]Interestingly,Di{\displaystyle D_{i}}andSi{\displaystyle S_{i}}are closely related because they can both be expressed in terms of the matrixT{\displaystyle \mathbf {T} }which holds the effects of the deletion of thej{\displaystyle j}-th data point on thei{\displaystyle i}-th prediction: WithT{\displaystyle \mathbf {T} }at hand,D{\displaystyle \mathbf {D} }is given by: whereHTH=H{\displaystyle \mathbf {H} ^{\mathsf {T}}\mathbf {H} =\mathbf {H} }ifH{\displaystyle \mathbf {H} }issymmetricandidempotent,which is not necessarily the case. In contrast,S{\displaystyle \mathbf {S} }can be calculated as: wherediag⁡(A){\displaystyle \operatorname {diag} (\mathbf {A} )}extracts themain diagonalof a square matrixA{\displaystyle \mathbf {A} }. 
In this context,M=p−1s−2GEHTHEG{\displaystyle \mathbf {M} =p^{-1}s^{-2}\mathbf {G} \mathbf {E} \mathbf {H} ^{\mathsf {T}}\mathbf {H} \mathbf {E} \mathbf {G} }is referred to as the influence matrix whereasP=p−1s−2HEGGEHT{\displaystyle \mathbf {P} =p^{-1}s^{-2}\mathbf {H} \mathbf {E} \mathbf {G} \mathbf {G} \mathbf {E} \mathbf {H} ^{\mathsf {T}}}resembles the so-called sensitivity matrix. Aneigenvector analysisofM{\displaystyle \mathbf {M} }andP{\displaystyle \mathbf {P} }, which both share the same eigenvalues, serves as a tool in outlier detection, although the eigenvectors of the sensitivity matrix are more powerful.[13] Many programs and statistics packages, such asR,Python,Julia, etc., include implementations of Cook's distance. The High-dimensional Influence Measure (HIM) is an alternative to Cook's distance for the case whenp>n{\displaystyle p>n}(i.e., when there are more predictors than observations).[14]While Cook's distance quantifies an individual observation's influence on the least squares regression coefficient estimate, the HIM measures the influence of an observation on the marginal correlations.
https://en.wikipedia.org/wiki/Cook%27s_distance
In statistics,Grubbs's testor theGrubbs test(named afterFrank E. Grubbs, who published the test in 1950[1]), also known as themaximum normalizedresidualtestorextreme studentized deviate test, is atestused to detectoutliersin aunivariatedata setassumed to come from anormally distributedpopulation. Grubbs's test is based on the assumption ofnormality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test.[2] Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer since it frequently tags most of the points as outliers.[3] Grubbs's test is defined for the followinghypotheses: The Grubbstest statisticis defined as withY¯{\displaystyle {\overline {Y}}}ands{\displaystyle s}denoting thesample meanandstandard deviation, respectively. The Grubbs test statistic is the largestabsolute deviationfrom the sample mean in units of the sample standard deviation. This is thetwo-sided test, for which the hypothesis of no outliers is rejected atsignificance levelα if withtα/(2N),N−2denoting the uppercritical valueof thet-distributionwithN− 2degrees of freedomand a significance level of α/(2N). Grubbs's test can also be defined as a one-sided test, replacing α/(2N) with α/N. To test whether the minimum value is an outlier, the test statistic is withYmindenoting the minimum value. To test whether the maximum value is an outlier, the test statistic is withYmaxdenoting the maximum value. Severalgraphical techniquescan be used to detect outliers. A simplerun sequence plot, abox plot, or ahistogramshould show any obviously outlying points. Anormal probability plotmay also be useful. This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
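A minimal Python sketch of the two-sided test, using scipy's t distribution for the critical value; the data, significance level and function name are illustrative, and the threshold used is the standard two-sided Grubbs critical value (N − 1)/sqrt(N) · sqrt(t²/(N − 2 + t²)), with t the upper α/(2N) point of the t distribution on N − 2 degrees of freedom as described above.

import numpy as np
from scipy.stats import t

def grubbs_two_sided(y, alpha=0.05):
    y = np.asarray(y, dtype=float)
    N = len(y)
    G = np.max(np.abs(y - y.mean())) / y.std(ddof=1)
    # Upper critical value of the t distribution at significance alpha/(2N)
    tcrit = t.ppf(1 - alpha / (2 * N), N - 2)
    threshold = (N - 1) / np.sqrt(N) * np.sqrt(tcrit**2 / (N - 2 + tcrit**2))
    return G, threshold, G > threshold

# Illustrative data with one suspicious value
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 14.5]
print(grubbs_two_sided(data))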
https://en.wikipedia.org/wiki/Grubbs%27s_test
Instatistics,Samuelson's inequality, named after the economistPaul Samuelson,[1]also called theLaguerre–Samuelson inequality,[2][3]after the mathematicianEdmond Laguerre, states that every one of any collectionx1, ...,xn, is within√n− 1uncorrected samplestandard deviationsof their sample mean. If we let be the samplemeanand be the standard deviation of the sample, then Equality holds on the left (or right) forxj{\displaystyle x_{j}}if and only ifall then− 1xi{\displaystyle x_{i}}s other thanxj{\displaystyle x_{j}}are equal to each other and greater (smaller) thanxj.{\displaystyle x_{j}.}[2] If you instead defines=1n−1∑i=1n(xi−x¯)2{\displaystyle s={\sqrt {{\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}}}}then the inequalityx¯−sn−1≤xj≤x¯+sn−1{\displaystyle {\overline {x}}-s{\sqrt {n-1}}\leq x_{j}\leq {\overline {x}}+s{\sqrt {n-1}}}still applies and can be slightly tightened tox¯−sn−1n≤xj≤x¯+sn−1n.{\displaystyle {\overline {x}}-s{\tfrac {n-1}{\sqrt {n}}}\leq x_{j}\leq {\overline {x}}+s{\tfrac {n-1}{\sqrt {n}}}.} Chebyshev's inequalitylocates a certain fraction of the data within certain bounds, while Samuelson's inequality locatesallthe data points within certain bounds. The bounds given by Chebyshev's inequality are unaffected by the number of data points, while for Samuelson's inequality the bounds loosen as the sample size increases. Thus for large enough data sets, Chebyshev's inequality is more useful. Samuelson’s inequality has several applications instatisticsandmathematics. It is useful in thestudentization of residualswhich shows a rationale for why this process should be done externally to better understand the spread of residuals inregression analysis. Inmatrix theory, Samuelson’s inequality is used to locate theeigenvaluesof certain matrices and tensors. Furthermore, generalizations of this inequality apply to complex data and random variables in aprobability space.[5][6] Samuelson was not the first to describe this relationship: the first was probablyLaguerrein 1880 while investigating theroots(zeros) ofpolynomials.[2][7] Consider a polynomial with all roots real: Without loss of generality leta0=1{\displaystyle a_{0}=1}and let Then and In terms of the coefficients Laguerre showed that the roots of this polynomial were bounded by where Inspection shows that−a1n{\displaystyle -{\tfrac {a_{1}}{n}}}is themeanof the roots and thatbis the standard deviation of the roots. Laguerre failed to notice this relationship with the means and standard deviations of the roots, being more interested in the bounds themselves. This relationship permits a rapid estimate of the bounds of the roots and may be of use in their location. When the coefficientsa1{\displaystyle a_{1}}anda2{\displaystyle a_{2}}are both zero no information can be obtained about the location of the roots, because not all roots are real (as can be seen fromDescartes' rule of signs) unless the constant term is also zero.
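A quick numerical check of the inequality with numpy, using the uncorrected (divide-by-n) standard deviation as in the statement above; the sample values are arbitrary.

import numpy as np

x = np.array([2.0, 3.5, 3.5, 3.5, 9.0])      # arbitrary illustrative sample
n = len(x)
mean = x.mean()
s_uncorrected = x.std(ddof=0)                # divide by n, as in the statement

lower = mean - s_uncorrected * np.sqrt(n - 1)
upper = mean + s_uncorrected * np.sqrt(n - 1)
print(lower, upper)
print(np.all((x >= lower) & (x <= upper)))   # every observation lies inside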
https://en.wikipedia.org/wiki/Samuelson%27s_inequality
William Sealy Gosset(13 June 1876 – 16 October 1937) was an English statistician, chemist and brewer who worked forGuinness. In statistics, he pioneered small sample experimental design. Gosset published under thepen nameStudentand developedStudent's t-distribution– originally called Student's "z" – and "Student's test ofstatistical significance".[1] Born inCanterbury, England the eldest son of Agnes Sealy Vidal and Colonel Frederic Gosset, R.E.Royal Engineers, Gosset attendedWinchester Collegebefore matriculating as Winchester Scholar innatural sciencesand mathematics atNew College, Oxford. Upon graduating in 1899, he joined the brewery ofArthur Guinness& Son inDublin, Ireland; he spent the rest of his 38-year career at Guinness.[1][2] Gosset had three children withMarjory Gosset(née Phillpotts).Harry Gosset(1907–1965) was a consultant paediatrician; Bertha Marian Gosset (1909–2004) was a geographer and nurse; the youngest, Ruth Gosset (1911–1953) married the Oxford mathematician Douglas Roaf and had five children. In his job as Head Experimental Brewer atGuinness, Gosset developed new statistical methods – both in the brewery and on the farm – now central to the design of experiments, to proper use of significance testing on repeated trials, and to analysis ofeconomic significance(an early instance ofdecision theoryinterpretation of statistics) and more, such as his small-sample, stratified, and repeated balanced experiments onbarleyfor proving the bestyieldingvarieties.[3]Gosset acquired that knowledge by study, by trial and error, by cooperating with others, and by spending two terms in 1906–1907 in the Biometrics laboratory ofKarl Pearson.[4]Gosset and Pearson had a good relationship.[4]Pearson helped Gosset with the mathematics of his papers, including the 1908 papers, but had little appreciation of their importance. The papers addressed the brewer's concern with small samples; biometricians like Pearson, on the other hand, typically had hundreds of observations and saw no urgency in developing small-sample methods.[2] Gosset's first publication came in 1907, "On the Error of Counting with aHaemocytometer," in which – unbeknownst to Gosset aka "Student" – he rediscovered thePoisson distribution.[3]Another researcher at Guinness had previously published a paper containing trade secrets of the Guinness brewery. The economic historian Stephen Ziliak discovered in the Guinness Archives that to prevent further disclosure of confidential information, the Guinness Board of Directors allowed its scientists to publish research on condition that they do not mention "1) beer, 2) Guinness, or 3) their own surname".[4]To Ziliak, Gosset seems to have acquired his pen name "Student" from his 1906–1907 notebook on counting yeast cells with a haemocytometer, "The Student's Science Notebook"[1][5]Thus his most noteworthy achievement is now called Student's, rather than Gosset's,t-distributionand test ofstatistical significance.[2] Gosset published most of his 21 academic papers, includingThe probable error of a mean,in Pearson's journalBiometrikaunder the pseudonymStudent.[6]It was, however, not Pearson butRonald A. Fisherwho appreciated the understudied importance of Gosset's small-sample work. Fisher wrote to Gosset in 1912 explaining that Student's z-distribution should be divided bydegrees of freedomnot totalsample size. From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. 
In 1924, Gosset wrote in a letter to Fisher, "I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!" Fisher believed that Gosset had effected a "logical revolution".[3]In a special issue ofMetronin 1925 Student published the corrected tables, now calledStudent's tz=tn−1{\textstyle z={\frac {t}{\sqrt {n-1}}}}. In the same volume Fisher contributed applications of Student'st-distribution toregression analysis.[3] Although introduced by others,Studentized residualsare named in Student's honour because, like the problem that led to Student's t-distribution, the idea of adjusting for estimated standard deviations is central to that concept.[7] Gosset's interest in the cultivation of barley led him to speculate that thedesign of experimentsshould aim not only at improving the average yield but also at breeding varieties whose yield was insensitive to variation in soil and climate (that is, "robust"). Gosset called his innovation "balanced layout", because treatments and controls are allocated in a balanced fashion to stratified growing conditions, such as differential soil fertility.[8]Gosset's balanced principle was challenged by Ronald Fisher, who preferred randomized designs. The Bayesian Harold Jeffreys, and Gosset's close associates Jerzy Neyman and Egon S. Pearson sided with Gosset's balanced designs of experiments; however, as Ziliak (2014) has shown, Gosset and Fisher would strongly disagree for the rest of their lives about the meaning and interpretation of balanced versus randomized experiments, as they had earlier clashed on the role of bright-line rules of statistical significance.[4] In 1935, at the age of 59, Gosset leftDublinto take up the position of Head Brewer at a new (and second)Guinnessbrewery atPark Royalin northwestern London. In September 1937 Gosset was promoted to Head Brewer of all Guinness. He died one month later, aged 61, inBeaconsfield, England, of a heart attack.[1] Gosset was a friend of bothPearsonandFisher, a noteworthy achievement, for each had a massive ego and a loathing for the other. He was a modest man who once cut short an admirer with this comment: "Fisher would have discovered it all anyway."[9] Gosset:
https://en.wikipedia.org/wiki/William_Sealy_Gosset
Thenormal probability plotis agraphical techniqueto identify substantive departures fromnormality. This includes identifyingoutliers,skewness,kurtosis, a need for transformations, andmixtures. Normal probability plots are made of raw data,residuals from model fits, and estimated parameters. In a normal probability plot (also called a "normal plot"), the sorted data are plotted vs. values selected to make the resulting image look close to a straight line if the data are approximately normally distributed. Deviations from a straight line suggest departures from normality. The plotting can be manually performed by using a specialgraph paper, callednormal probability paper. With modern computers normal plots are commonly made with software. The normal probability plot is a special case of theQ–Qprobability plot for a normal distribution. The theoreticalquantilesare generally chosen to approximate either the mean or the median of the correspondingorder statistics. The normal probability plot is formed by plotting the sorted data vs. an approximation to the means or medians of the correspondingorder statistics; seerankit. Some plot the data on the vertical axis;[1]others plot the data on the horizontal axis.[2][3] Different sources use slightly different approximations forrankits. The formula used by the "qqnorm" function in the basic "stats" package inR (programming language)is as follows: fori= 1, 2, ...,n, where andΦ−1is the standard normalquantile function. If the data are consistent with a sample from a normal distribution, the points should lie close to a straight line. As a reference, a straight line can be fit to the points. The further the points vary from this line, the greater the indication of departure from normality. If the sample has mean 0, standard deviation 1 then a line through 0 with slope 1 could be used. With more points, random deviations from a line will be less pronounced. Normal plots are often used with as few as 7 points, e.g., with plotting the effects in a saturated model from a2-level fractional factorial experiment. With fewer points, it becomes harder to distinguish between random variability and a substantive deviation from normality. Probability plots for distributions other than the normal are computed in exactly the same way. The normal quantile functionΦ−1is simply replaced by the quantile function of the desired distribution. In this way, a probability plot can easily be generated for any distribution for which one has the quantile function. With alocation-scale family of distributions, thelocationandscale parametersof the distribution can be estimated from theinterceptand theslopeof the line. For other distributions the parameters must first be estimated before a probability plot can be made. This is a sample of size 50 from a normal distribution, plotted as both a histogram, and a normal probability plot. This is a sample of size 50 from a right-skewed distribution, plotted as both a histogram, and a normal probability plot. This is a sample of size 50 from a uniform distribution, plotted as both a histogram, and a normal probability plot. This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
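The rankit-style plotting positions used by R's qqnorm can be reproduced in Python. The exact constants below (a = 3/8 for n ≤ 10 and a = 1/2 otherwise, giving p_i = (i − a)/(n + 1 − 2a)) are quoted from memory of R's ppoints convention rather than from this article, so treat them as an assumption; the simulated data and the function name rankits are illustrative.

import numpy as np
from scipy.stats import norm

def rankits(n):
    """Approximate rankits via the plotting positions attributed to R's
    qqnorm/ppoints: p_i = (i - a) / (n + 1 - 2a), with a = 3/8 for n <= 10
    and a = 1/2 otherwise (recalled from memory; treat as an assumption)."""
    a = 3.0 / 8.0 if n <= 10 else 0.5
    i = np.arange(1, n + 1)
    p = (i - a) / (n + 1 - 2 * a)
    return norm.ppf(p)

data = np.sort(np.random.default_rng(1).normal(loc=5, scale=2, size=50))
z = rankits(len(data))
# Plotting data against z should be close to a straight line with
# intercept ~5 (location) and slope ~2 (scale).
slope, intercept = np.polyfit(z, data, 1)
print(slope, intercept)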
https://en.wikipedia.org/wiki/Normal_probability_plot
In statistics, aQ–Q plot(quantile–quantile plot) is a probability plot, agraphical methodfor comparing twoprobability distributionsby plotting theirquantilesagainst each other.[1]A point(x,y)on the plot corresponds to one of the quantiles of the second distribution (y-coordinate) plotted against the same quantile of the first distribution (x-coordinate). This defines aparametric curvewhere the parameter is the index of the quantile interval. If the two distributions being compared are similar, the points in the Q–Q plot will approximately lie on theidentity liney=x. If the distributions are linearly related, the points in the Q–Q plot will approximately lie on a line, but not necessarily on the liney=x. Q–Q plots can also be used as a graphical means of estimating parameters in alocation-scale familyof distributions. A Q–Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such aslocation,scale, andskewnessare similar or different in the two distributions. Q–Q plots can be used to compare collections of data, ortheoretical distributions. The use of Q–Q plots to compare two samples of data can be viewed as anon-parametricapproach to comparing their underlying distributions. A Q–Q plot is generally more diagnostic than comparing the samples'histograms, but is less widely known. Q–Q plots are commonly used to compare a data set to a theoretical model.[2][3]This can provide an assessment ofgoodness of fitthat is graphical, rather than reducing to a numericalsummary statistic. Q–Q plots are also used to compare two theoretical distributions to each other.[4]Since Q–Q plots compare distributions, there is no need for the values to be observed as pairs, as in ascatter plot, or even for the numbers of values in the two groups being compared to be equal. The term "probability plot" sometimes refers specifically to a Q–Q plot, sometimes to a more general class of plots, and sometimes to the less commonly usedP–P plot. Theprobability plot correlation coefficient plot(PPCC plot) is a quantity derived from the idea of Q–Q plots, which measures the agreement of a fitted distribution with observed data and which is sometimes used as a means of fitting a distribution to data. A Q–Q plot is a plot of the quantiles of two distributions against each other, or a plot based on estimates of the quantiles. The pattern of points in the plot is used to compare the two distributions. The main step in constructing a Q–Q plot is calculating or estimating the quantiles to be plotted. If one or both of the axes in a Q–Q plot is based on a theoretical distribution with a continuouscumulative distribution function(CDF), all quantiles are uniquely defined and can be obtained by inverting the CDF. If a theoretical probability distribution with a discontinuous CDF is one of the two distributions being compared, some of the quantiles may not be defined, so an interpolated quantile may be plotted. If the Q–Q plot is based on data, there are multiple quantile estimators in use. Rules for forming Q–Q plots when quantiles must be estimated or interpolated are calledplotting positions. A simple case is where one has two data sets of the same size. In that case, to make the Q–Q plot, one orders each set in increasing order, then pairs off and plots the corresponding values. A more complicated construction is the case where two data sets of different sizes are being compared. 
To construct the Q–Q plot in this case, it is necessary to use aninterpolatedquantile estimate so that quantiles corresponding to the same underlying probability can be constructed. More abstractly,[4]given two cumulative probability distribution functionsFandG, with associatedquantile functionsF−1andG−1(the inverse function of the CDF is the quantile function), the Q–Q plot draws theq-th quantile ofFagainst theq-th quantile ofGfor a range of values ofq. Thus, the Q–Q plot is aparametric curveindexed over [0,1] with values in the real planeR2. Typically for an analysis of normality, the vertical axis shows the values of the variable of interest, sayxwith CDFF(x), and the horizontal axis representsN−1(F(x)), whereN−1(.)represents the inverse cumulative normal distribution function. The points plotted in a Q–Q plot are always non-decreasing when viewed from left to right. If the two distributions being compared are identical, the Q–Q plot follows the 45° liney=x. If the two distributions agree after linearly transforming the values in one of the distributions, then the Q–Q plot follows some line, but not necessarily the liney=x. If the general trend of the Q–Q plot is flatter than the liney=x, the distribution plotted on the horizontal axis is moredispersedthan the distribution plotted on the vertical axis. Conversely, if the general trend of the Q–Q plot is steeper than the liney=x, the distribution plotted on the vertical axis is moredispersedthan the distribution plotted on the horizontal axis. Q–Q plots are often arced, or S-shaped, indicating that one of the distributions is more skewed than the other, or that one of the distributions has heavier tails than the other. Although a Q–Q plot is based on quantiles, in a standard Q–Q plot it is not possible to determine which point in the Q–Q plot determines a given quantile. For example, it is not possible to determine the median of either of the two distributions being compared by inspecting the Q–Q plot. Some Q–Q plots indicate the deciles to make determinations such as this possible. The intercept and slope of a linear regression between the quantiles gives a measure of the relative location and relative scale of the samples. If the median of the distribution plotted on the horizontal axis is 0, the intercept of a regression line is a measure of location, and the slope is a measure of scale. The distance between medians is another measure of relative location reflected in a Q–Q plot. The "probability plot correlation coefficient" (PPCC plot) is thecorrelation coefficientbetween the paired sample quantiles. The closer the correlation coefficient is to one, the closer the distributions are to being shifted, scaled versions of each other. For distributions with a single shape parameter, the probability plot correlation coefficient plot provides a method for estimating the shape parameter – one simply computes the correlation coefficient for different values of the shape parameter, and uses the one with the best fit, just as if one were comparing distributions of different types. Another common use of Q–Q plots is to compare the distribution of a sample to a theoretical distribution, such as the standardnormal distributionN(0,1), as in anormal probability plot. As in the case when comparing two samples of data, one orders the data (formally, computes the order statistics), then plots them against certain quantiles of the theoretical distribution.[3] The choice of quantiles from a theoretical distribution can depend upon context and purpose. 
One choice, given a sample of sizen, isk/nfork= 1, …,n, as these are the quantiles that thesampling distributionrealizes. The last of these,n/n, corresponds to the 100th percentile – the maximum value of the theoretical distribution, which is sometimes infinite. Other choices are the use of(k− 0.5) /n, or instead to space thenpoints such that there is an equal distance between all of them and also between the two outermost points and the edges of the[0,1]{\displaystyle [0,1]}interval, usingk/ (n+ 1).[6] Many other choices have been suggested, both formal and heuristic, based on theory or simulations relevant in context. The following subsections discuss some of these. A narrower question is choosing a maximum (estimation of a population maximum), known as theGerman tank problem, for which similar "sample maximum, plus a gap" solutions exist, most simplym+m/n− 1. A more formal application of this uniformization of spacing occurs inmaximum spacing estimationof parameters. Thek/ (n+ 1)approach equals that of plotting the points according to the probability that the last of (n+ 1) randomly drawn values will not exceed thek-th smallest of the firstnrandomly drawn values.[7][8] In using anormal probability plot, the quantiles one uses are therankits, the quantile of the expected value of the order statistic of a standard normal distribution. More generally,Shapiro–Wilk testuses the expected values of the order statistics of the given distribution; the resulting plot and line yields thegeneralized least squaresestimate for location and scale (from theinterceptandslopeof the fitted line).[9]Although this is not too important for the normal distribution (the location and scale are estimated by the mean and standard deviation, respectively), it can be useful for many other distributions. However, this requires calculating the expected values of the order statistic, which may be difficult if the distribution is not normal. Alternatively, one may use estimates of themedianof the order statistics, which one can compute based on estimates of the median of the order statistics of a uniform distribution and the quantile function of the distribution; this was suggested byFilliben (1975).[9] This can be easily generated for any distribution for which the quantile function can be computed, but conversely the resulting estimates of location and scale are no longer precisely the least squares estimates, though these only differ significantly fornsmall. Several different formulas have been used or proposed asaffinesymmetricalplotting positions. Such formulas have the form(k−a) / (n+ 1 − 2a)for some value ofain the range from 0 to 1, which gives a range betweenk/ (n+ 1)and(k− 1) / (n− 1). Expressions include: For large sample size,n, there is little difference between these various expressions. The order statistic medians are the medians of theorder statisticsof the distribution. These can be expressed in terms of the quantile function and theorder statistic medians for the continuous uniform distributionby: whereU(i)are the uniform order statistic medians andGis the quantile function for the desired distribution. The quantile function is the inverse of thecumulative distribution function(probability thatXis less than or equal to some value). That is, given a probability, we want the corresponding quantile of the cumulative distribution function. James J. 
Filliben uses the following estimates for the uniform order statistic medians:[15] The reason for this estimate is that the order statistic medians do not have a simple form. TheR programming languagecomes with functions to make Q–Q plots, namely qqnorm and qqplot from thestatspackage. Thefastqqpackage implements faster plotting for a large number of data points.
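For reference, a rough Python analogue of the two-sample construction described earlier is sketched below: equal-sized samples are sorted and paired off, while unequal sizes use interpolated quantiles at the plotting positions (k − 0.5)/n, one of the affine choices listed above. The function name, sample sizes and use of matplotlib are illustrative.

import numpy as np
import matplotlib.pyplot as plt

def qq_plot(a, b, n_points=100):
    """Two-sample Q-Q plot: quantiles of b against quantiles of a."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    if len(a) == len(b):
        qa, qb = a, b                      # equal sizes: pair off order statistics
    else:
        k = np.arange(1, n_points + 1)
        p = (k - 0.5) / n_points           # plotting positions (k - a)/(n + 1 - 2a), a = 0.5
        qa, qb = np.quantile(a, p), np.quantile(b, p)   # interpolated quantiles
    plt.scatter(qa, qb, s=10)
    lims = [min(qa[0], qb[0]), max(qa[-1], qb[-1])]
    plt.plot(lims, lims)                   # reference identity line y = x
    plt.xlabel("quantiles of first sample")
    plt.ylabel("quantiles of second sample")
    plt.show()

rng = np.random.default_rng(11)
qq_plot(rng.normal(0, 1, 300), rng.normal(1, 2, 500))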
https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot
Inprobability theoryandstatistics, there are several relationships amongprobability distributions. These relations can be categorized in the following groups: Multiplying the variable by any positive real constant yields ascalingof the original distribution. Some are self-replicating, meaning that the scaling yields the same family of distributions, albeit with a different parameter:normal distribution,gamma distribution,Cauchy distribution,exponential distribution,Erlang distribution,Weibull distribution,logistic distribution,error distribution,power-law distribution,Rayleigh distribution. Example: The affine transformax+byields arelocation and scalingof the original distribution. The following are self-replicating:Normal distribution,Cauchy distribution,Logistic distribution,Error distribution,Power distribution,Rayleigh distribution. Example: The reciprocal 1/Xof a random variableX, is a member of the same family of distribution asX, in the following cases:Cauchy distribution,F distribution,log logistic distribution. Examples: Some distributions are invariant under a specific transformation. Example: Some distributions are variant under a specific transformation. The distribution of the sum ofindependent random variablesis theconvolutionof their distributions. SupposeZ{\displaystyle Z}is the sum ofn{\displaystyle n}independent random variablesX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}each withprobability mass functionsfXi(x){\displaystyle f_{X_{i}}(x)}. ThenZ=∑i=1nXi.{\displaystyle Z=\sum _{i=1}^{n}{X_{i}}.}If it has a distribution from the same family of distributions as the original variables, that family of distributions is said to beclosed under convolution. Often (always?) these distributions are alsostable distributions(see alsoDiscrete-stable distribution). Examples of suchunivariate distributionsare:normal distributions,Poisson distributions,binomial distributions(with common success probability),negative binomial distributions(with common success probability),gamma distributions(with commonrate parameter),chi-squared distributions,Cauchy distributions,hyperexponential distributions. Examples:[3][4] Other distributions are not closed under convolution, but their sum has a known distribution: The product of independent random variablesXandYmay belong to the same family of distribution asXandY:Bernoulli distributionandlog-normal distribution. Example: (See alsoProduct distribution.) For some distributions, theminimumvalue of several independent random variables is a member of the same family, with different parameters:Bernoulli distribution,Geometric distribution,Exponential distribution,Extreme value distribution,Pareto distribution,Rayleigh distribution,Weibull distribution. Examples: Similarly, distributions for which themaximumvalue of several independent random variables is a member of the same family of distribution include:Bernoulli distribution,Power lawdistribution. (See alsoratio distribution.) Approximate or limit relationship means Combination ofiidrandom variables: Special case of distribution parametrization: Consequences of the CLT: When one or more parameter(s) of a distribution are random variables, thecompounddistribution is the marginal distribution of the variable. Examples: Some distributions have been specially named as compounds:beta-binomial distribution,Beta negative binomial distribution,gamma-normal distribution. Examples:
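One of the named compounds, the beta-binomial distribution, lends itself to a quick numerical illustration: drawing the binomial success probability from a Beta distribution and then the count from that binomial reproduces the beta-binomial pmf. The sketch below uses scipy's betabinom for the comparison; the parameter values and seed are arbitrary.

import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(8)
n_trials, a, b, m = 10, 2.0, 3.0, 200_000

# Compound construction: draw p from Beta(a, b), then k from Binomial(n, p)
p = rng.beta(a, b, size=m)
k = rng.binomial(n_trials, p)

ks = np.arange(n_trials + 1)
empirical = np.array([(k == ki).mean() for ki in ks])
print(np.max(np.abs(empirical - betabinom.pmf(ks, n_trials, a, b))))  # should be small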
https://en.wikipedia.org/wiki/Relationships_among_probability_distributions
Aproduct distributionis aprobability distributionconstructed as the distribution of theproductofrandom variableshaving two other known distributions. Given twostatistically independentrandom variablesXandY, the distribution of the random variableZthat is formed as the productZ=XY{\displaystyle Z=XY}is aproduct distribution. The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs yet the concepts are often ambiguously termed as in "product of Gaussians". The product is one type of algebra for random variables: Related to the product distribution are theratio distribution, sum distribution (seeList of convolutions of probability distributions) and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's book from 1979The Algebra of Random Variables.[1] IfX{\displaystyle X}andY{\displaystyle Y}are two independent, continuous random variables, described by probability density functionsfX{\displaystyle f_{X}}andfY{\displaystyle f_{Y}}then the probability density function ofZ=XY{\displaystyle Z=XY}is[2] We first write thecumulative distribution functionofZ{\displaystyle Z}starting with its definition We find the desired probability density function by taking the derivative of both sides with respect toz{\displaystyle z}. Since on the right hand side,z{\displaystyle z}appears only in the integration limits, the derivative is easily performed using thefundamental theorem of calculusand thechain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of the integration.) where the absolute value is used to conveniently combine the two terms.[3] A faster more compact proof begins with the same step of writing the cumulative distribution ofZ{\displaystyle Z}starting with its definition: whereu(⋅){\displaystyle u(\cdot )}is theHeaviside step functionand serves to limit the region of integration to values ofx{\displaystyle x}andy{\displaystyle y}satisfyingxy≤z{\displaystyle xy\leq z}. We find the desired probability density function by taking the derivative of both sides with respect toz{\displaystyle z}. where we utilize the translation and scaling properties of theDirac delta functionδ{\displaystyle \delta }. A more intuitive description of the procedure is illustrated in the figure below. The joint pdffX(x)fY(y){\displaystyle f_{X}(x)f_{Y}(y)}exists in thex{\displaystyle x}-y{\displaystyle y}plane and an arc of constantz{\displaystyle z}value is shown as the shaded line. To find the marginal probabilityfZ(z){\displaystyle f_{Z}(z)}on this arc, integrate over increments of areadxdyf(x,y){\displaystyle dx\,dy\;f(x,y)}on this contour. Starting withy=zx{\displaystyle y={\frac {z}{x}}}, we havedy=−zx2dx=−yxdx{\displaystyle dy=-{\frac {z}{x^{2}}}\,dx=-{\frac {y}{x}}\,dx}. So the probability increment isδp=f(x,y)dx|dy|=fX(x)fY(z/x)y|x|dxdx{\displaystyle \delta p=f(x,y)\,dx\,|dy|=f_{X}(x)f_{Y}(z/x){\frac {y}{|x|}}\,dx\,dx}. Sincez=yx{\displaystyle z=yx}impliesdz=ydx{\displaystyle dz=y\,dx}, we can relate the probability increment to thez{\displaystyle z}-increment, namelyδp=fX(x)fY(z/x)1|x|dxdz{\displaystyle \delta p=f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx\,dz}. Then integration overx{\displaystyle x}, yieldsfZ(z)=∫fX(x)fY(z/x)1|x|dx{\displaystyle f_{Z}(z)=\int f_{X}(x)f_{Y}(z/x){\frac {1}{|x|}}\,dx}. 
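The convolution-type integral above can be evaluated numerically. For two Uniform(0,1) factors, f_Y(z/x) vanishes unless x > z, so the integral reduces to the closed form −log z that is also obtained for this product later in the article; the quadrature sketch below (function name illustrative) reproduces that value.

import numpy as np
from scipy.integrate import quad
from scipy.stats import uniform

# Product of two independent Uniform(0,1) variables:
# f_Z(z) = int f_X(x) f_Y(z/x) / |x| dx.  Since f_Y(z/x) = 1 only when x > z,
# the integral reduces to int_z^1 dx/x = -log(z).
def f_Z(z):
    integrand = lambda x: uniform.pdf(x) * uniform.pdf(z / x) / abs(x)
    val, _ = quad(integrand, z, 1)
    return val

for z in (0.1, 0.3, 0.7):
    print(f_Z(z), -np.log(z))      # the two columns should agree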
LetX∼f(x){\displaystyle X\sim f(x)}be a random sample drawn from probability distributionfx(x){\displaystyle f_{x}(x)}. ScalingX{\displaystyle X}byθ{\displaystyle \theta }generates a sample from scaled distributionθX∼1|θ|fX(xθ){\displaystyle \theta X\sim {\frac {1}{|\theta |}}f_{X}\left({\frac {x}{\theta }}\right)}which can be written as a conditional distributiongx(x|θ)=1|θ|fx(xθ){\displaystyle g_{x}(x|\theta )={\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)}. Lettingθ{\displaystyle \theta }be a random variable with pdffθ(θ){\displaystyle f_{\theta }(\theta )}, the distribution of the scaled sample becomesfX(θx)=gX(x∣θ)fθ(θ){\displaystyle f_{X}(\theta x)=g_{X}(x\mid \theta )f_{\theta }(\theta )}and integrating outθ{\displaystyle \theta }we gethx(x)=∫−∞∞gX(x|θ)fθ(θ)dθ{\displaystyle h_{x}(x)=\int _{-\infty }^{\infty }g_{X}(x|\theta )f_{\theta }(\theta )d\theta }soθX{\displaystyle \theta X}is drawn from this distributionθX∼hX(x){\displaystyle \theta X\sim h_{X}(x)}. However, substituting the definition ofg{\displaystyle g}we also havehX(x)=∫−∞∞1|θ|fx(xθ)fθ(θ)dθ{\displaystyle h_{X}(x)=\int _{-\infty }^{\infty }{\frac {1}{|\theta |}}f_{x}\left({\frac {x}{\theta }}\right)f_{\theta }(\theta )\,d\theta }which has the same form as the product distribution above. Thus the Bayesian posterior distributionhX(x){\displaystyle h_{X}(x)}is the distribution of the product of the two independent random samplesθ{\displaystyle \theta }andX{\displaystyle X}. For the case of one variable being discrete, letθ{\displaystyle \theta }have probabilityPi{\displaystyle P_{i}}at levelsθi{\displaystyle \theta _{i}}with∑iPi=1{\displaystyle \sum _{i}P_{i}=1}. The conditional density isfX(x∣θi)=1|θi|fx(xθi){\displaystyle f_{X}(x\mid \theta _{i})={\frac {1}{|\theta _{i}|}}f_{x}\left({\frac {x}{\theta _{i}}}\right)}. ThereforefX(θx)=∑iPi|θi|fX(xθi){\displaystyle f_{X}(\theta x)=\sum _{i}{\frac {P_{i}}{|\theta _{i}|}}f_{X}\left({\frac {x}{\theta _{i}}}\right)}. When two random variables are statistically independent,the expectation of their product is the product of their expectations. This can be proved from thelaw of total expectation: In the inner expression,Yis a constant. Hence: This is true even ifXandYare statistically dependent in which caseE⁡[X∣Y]{\displaystyle \operatorname {E} [X\mid Y]}is a function ofY. In the special case in whichXandYare statistically independent, it is a constant independent ofY. Hence: LetX,Y{\displaystyle X,Y}be uncorrelated random variables with meansμX,μY,{\displaystyle \mu _{X},\mu _{Y},}and variancesσX2,σY2{\displaystyle \sigma _{X}^{2},\sigma _{Y}^{2}}. If, additionally, the random variablesX2{\displaystyle X^{2}}andY2{\displaystyle Y^{2}}are uncorrelated, then the variance of the productXYis[4] In the case of the product of more than two variables, ifX1⋯Xn,n>2{\displaystyle X_{1}\cdots X_{n},\;\;n>2}are statistically independent then[5]the variance of their product is AssumeX,Yare independent random variables. The characteristic function ofXisφX(t){\displaystyle \varphi _{X}(t)}, and the distribution ofYis known. Then from thelaw of total expectation, we have[6] If the characteristic functions and distributions of bothXandYare known, then alternatively,φZ(t)=E⁡(φY(tX)){\displaystyle \varphi _{Z}(t)=\operatorname {E} (\varphi _{Y}(tX))}also holds. 
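A Monte Carlo check of these product identities, assuming the standard independent-variables form Var(XY) = σ_X²σ_Y² + σ_X²μ_Y² + σ_Y²μ_X² (stated here as a recalled identity rather than quoted from the text); the parameter values and seed are arbitrary.

import numpy as np

rng = np.random.default_rng(9)
m = 1_000_000
mu_x, sd_x, mu_y, sd_y = 2.0, 0.5, -1.0, 1.5
x = rng.normal(mu_x, sd_x, m)
y = rng.normal(mu_y, sd_y, m)

prod = x * y
print(prod.mean(), mu_x * mu_y)                    # E[XY] = E[X] E[Y]
# Variance of the product for independent X, Y (assumed standard identity)
var_formula = sd_x**2 * sd_y**2 + sd_x**2 * mu_y**2 + sd_y**2 * mu_x**2
print(prod.var(), var_formula)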
TheMellin transformof a distributionf(x){\displaystyle f(x)}with supportonlyonx≥0{\displaystyle x\geq 0}and having a random sampleX{\displaystyle X}is The inverse transform is ifXandY{\displaystyle X{\text{ and }}Y}are two independent random samples from different distributions, then the Mellin transform of their product is equal to the product of their Mellin transforms: Ifsis restricted to integer values, a simpler result is Thus the moments of the random productXY{\displaystyle XY}are the product of the corresponding moments ofXandY{\displaystyle X{\text{ and }}Y}and this extends to non-integer moments, for example The pdf of a function can be reconstructed from its moments using thesaddlepoint approximation method. A further result is that for independentX,Y Gamma distribution exampleTo illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, letX,Y{\displaystyle X,Y}be sampled from twoGamma distributions,fGamma(x;θ,1)=Γ(θ)−1xθ−1e−x{\displaystyle f_{Gamma}(x;\theta ,1)=\Gamma (\theta )^{-1}x^{\theta -1}e^{-x}}with parametersθ=α,β{\displaystyle \theta =\alpha ,\beta }whose moments are Multiplying the corresponding moments gives the Mellin transform result Independently, it is known that the product of two independent Gamma-distributed samples (~Gamma(α,1) and Gamma(β,1)) has aK-distribution: To find the moments of this, make the change of variabley=2z{\displaystyle y=2{\sqrt {z}}}, simplifying similar integrals to: thus The definite integral which, after some difficulty, has agreed with the moment product result above. IfX,Yare drawn independently from Gamma distributions with shape parametersα,β{\displaystyle \alpha ,\;\beta }then This type of result is universally true, since for bivariate independent variablesfX,Y(x,y)=fX(x)fY(y){\displaystyle f_{X,Y}(x,y)=f_{X}(x)f_{Y}(y)}thus or equivalently it is clear thatXpandYq{\displaystyle X^{p}{\text{ and }}Y^{q}}are independent variables. The distribution of the product of two random variables which havelognormal distributionsis again lognormal. This is itself a special case of a more general set of results where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in thelist of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However this approach is only useful where the logarithms of the components of the product are in some standard families of distributions. LetZ{\displaystyle Z}be the product of two independent variablesZ=X1X2{\displaystyle Z=X_{1}X_{2}}each uniformly distributed on the interval [0,1], possibly the outcome of acopulatransformation. As noted in "Lognormal Distributions" above, PDF convolution operations in the Log domain correspond to the product of sample values in the original domain. Thus, making the transformationu=ln⁡(x){\displaystyle u=\ln(x)}, such thatpU(u)|du|=pX(x)|dx|{\displaystyle p_{U}(u)\,|du|=p_{X}(x)\,|dx|}, each variate is distributed independently onuas and the convolution of the two distributions is the autoconvolution Next retransform the variable toz=ey{\displaystyle z=e^{y}}yielding the distribution For the product of multiple (> 2) independent samples thecharacteristic functionroute is favorable. 
If we definey~=−y{\displaystyle {\tilde {y}}=-y}thenc(y~){\displaystyle c({\tilde {y}})}above is aGamma distributionof shape 1 and scale factor 1,c(y~)=y~e−y~{\displaystyle c({\tilde {y}})={\tilde {y}}e^{-{\tilde {y}}}}, and its known CF is(1−it)−1{\displaystyle (1-it)^{-1}}. Note that|dy~|=|dy|{\displaystyle |d{\tilde {y}}|=|dy|}so the Jacobian of the transformation is unity. The convolution ofn{\displaystyle n}independent samples fromY~{\displaystyle {\tilde {Y}}}therefore has CF(1−it)−n{\displaystyle (1-it)^{-n}}which is known to be the CF of a Gamma distribution of shapen{\displaystyle n}: Make the inverse transformationz=ey{\displaystyle z=e^{y}}to extract the PDF of the product of thensamples: The following, more conventional, derivation from Stackexchange[7]is consistent with this result. First of all, lettingZ2=X1X2{\displaystyle Z_{2}=X_{1}X_{2}}its CDF is The density ofz2is thenf(z2)=−log⁡(z2){\displaystyle z_{2}{\text{ is then }}f(z_{2})=-\log(z_{2})} Multiplying by a third independent sample gives distribution function Taking the derivative yieldsfZ3(z)=12log2⁡(z),0<z≤1.{\displaystyle f_{Z_{3}}(z)={\frac {1}{2}}\log ^{2}(z),\;\;0<z\leq 1.} The author of the note conjectures that, in general,fZn(z)=(−log⁡z)n−1(n−1)!,0<z≤1{\displaystyle f_{Z_{n}}(z)={\frac {(-\log z)^{n-1}}{(n-1)!\;\;\;}},\;\;0<z\leq 1} The figure illustrates the nature of the integrals above. The area of the selection within the unit square and below the line z = xy, represents the CDF of z. This divides into two parts. The first is for 0 < x < z where the increment of area in the vertical slot is just equal todx. The second part lies below thexyline, hasy-heightz/x, and incremental areadx z/x. The product of two independent Normal samples follows amodified Bessel function. Letx,y{\displaystyle x,y}be independent samples from a Normal(0,1) distribution andz=xy{\displaystyle z=xy}. Then The variance of this distribution could be determined, in principle, by a definite integral from Gradsheyn and Ryzhik,[8] thusE⁡[Z2]=∫−∞∞z2K0(|z|)πdz=4πΓ2(32)=1{\displaystyle \operatorname {E} [Z^{2}]=\int _{-\infty }^{\infty }{\frac {z^{2}K_{0}(|z|)}{\pi }}\,dz={\frac {4}{\pi }}\;\Gamma ^{2}{\Big (}{\frac {3}{2}}{\Big )}=1} A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one. The product of two Gaussian samples is often confused with the product of two Gaussian PDFs. The latter simply results in a bivariate Gaussian distribution. The product of correlated Normal samples case was recently addressed by Nadarajaha and Pogány.[9]LetX,Y{\displaystyle X{\text{, }}Y}be zero mean, unit variance, normally distributed variates with correlation coefficientρand letZ=XY{\displaystyle \rho {\text{ and let }}Z=XY} Then Mean and variance: For the mean we haveE⁡[Z]=ρ{\displaystyle \operatorname {E} [Z]=\rho }from the definition of correlation coefficient. The variance can be found by transforming from two unit variance zero mean uncorrelated variablesU, V. Let ThenX, Yare unit variance variables with correlation coefficientρ{\displaystyle \rho }and Removing odd-power terms, whose expectations are obviously zero, we get Since(E⁡[Z])2=ρ2{\displaystyle (\operatorname {E} [Z])^{2}=\rho ^{2}}we have High correlation asymptoteIn the highly correlated case,ρ→1{\displaystyle \rho \rightarrow 1}the product converges on the square of one sample. 
In this case theK0{\displaystyle K_{0}}asymptote isK0(x)→π2xe−xin the limit asx=|z|1−ρ2→∞{\displaystyle K_{0}(x)\rightarrow {\sqrt {\tfrac {\pi }{2x}}}e^{-x}{\text{ in the limit as }}x={\frac {|z|}{1-\rho ^{2}}}\rightarrow \infty }and which is aChi-squared distributionwith one degree of freedom. Multiple correlated samples. Nadarajaha et al. further show that ifZ1,Z2,..Znaren{\displaystyle Z_{1},Z_{2},..Z_{n}{\text{ are }}n}iid random variables sampled fromfZ(z){\displaystyle f_{Z}(z)}andZ¯=1n∑Zi{\displaystyle {\bar {Z}}={\tfrac {1}{n}}\sum Z_{i}}is their mean then whereWis the Whittaker function whileβ=n1−ρ,γ=n1+ρ{\displaystyle \beta ={\frac {n}{1-\rho }},\;\;\gamma ={\frac {n}{1+\rho }}}. Using the identityW0,ν(x)=xπKν(x/2),x≥0{\displaystyle W_{0,\nu }(x)={\sqrt {\frac {x}{\pi }}}K_{\nu }(x/2),\;\;x\geq 0}, see for example the DLMF compilation. eqn(13.13.9),[10]this expression can be somewhat simplified to The pdf gives the marginal distribution of a sample bivariate normal covariance, a result also shown in the Wishart Distribution article. The approximate distribution of a correlation coefficient can be found via theFisher transformation. Multiple non-central correlated samples. The distribution of the product of correlated non-central normal samples was derived by Cui et al.[11]and takes the form of an infinite series of modified Bessel functions of the first kind. Moments of product of correlated central normal samples For a centralnormal distributionN(0,1) the moments are wheren!!{\displaystyle n!!}denotes thedouble factorial. IfX,Y∼Norm(0,1){\displaystyle X,Y\sim {\text{Norm}}(0,1)}are central correlated variables, the simplest bivariate case of the multivariate normal moment problem described by Kan,[12]then where [needs checking] The distribution of the product of non-central correlated normal samples was derived by Cui et al.[11]and takes the form of an infinite series. These product distributions are somewhat comparable to theWishart distribution. The latter is thejointdistribution of the four elements (actually only three independent elements) of a sample covariance matrix. Ifxt,yt{\displaystyle x_{t},y_{t}}are samples from a bivariate time series then theW=∑t=1K(xtyt)(xtyt)T{\displaystyle W=\sum _{t=1}^{K}{\dbinom {x_{t}}{y_{t}}}{\dbinom {x_{t}}{y_{t}}}^{T}}is a Wishart matrix withKdegrees of freedom. The product distributions above are the unconditional distribution of the aggregate ofK> 1 samples ofW2,1{\displaystyle W_{2,1}}. Letu1,v1,u2,v2{\displaystyle u_{1},v_{1},u_{2},v_{2}}be independent samples from a normal(0,1) distribution.Settingz1=u1+iv1andz2=u2+iv2thenz1,z2{\displaystyle z_{1}=u_{1}+iv_{1}{\text{ and }}z_{2}=u_{2}+iv_{2}{\text{ then }}z_{1},z_{2}}are independent zero-mean complex normal samples with circular symmetry. Their complex variances areVar⁡|zi|=2.{\displaystyle \operatorname {Var} |z_{i}|=2.} The density functions of The variableyi≡ri2{\displaystyle y_{i}\equiv r_{i}^{2}}is clearly Chi-squared with two degrees of freedom and has PDF Wells et al.[13]show that the density function ofs≡|z1z2|{\displaystyle s\equiv |z_{1}z_{2}|}is and the cumulative distribution function ofs{\displaystyle s}is Thus the polar representation of the product of two uncorrelated complex Gaussian samples is The first and second moments of this distribution can be found from the integral inNormal Distributionsabove Thus its variance isVar⁡(s)=m2−m12=4−π24{\displaystyle \operatorname {Var} (s)=m_{2}-m_{1}^{2}=4-{\frac {\pi ^{2}}{4}}}. 
Further, the density ofz≡s2=|r1r2|2=|r1|2|r2|2=y1y2{\displaystyle z\equiv s^{2}={|r_{1}r_{2}|}^{2}={|r_{1}|}^{2}{|r_{2}|}^{2}=y_{1}y_{2}}corresponds to the product of two independent Chi-square samplesyi{\displaystyle y_{i}}each with two DoF. Writing these as scaled Gamma distributionsfy(yi)=1θΓ(1)e−yi/θwithθ=2{\displaystyle f_{y}(y_{i})={\tfrac {1}{\theta \Gamma (1)}}e^{-y_{i}/\theta }{\text{ with }}\theta =2}then, from the Gamma products below, the density of the product is Letu1,v1,u2,v2,…,u2N,v2N,{\displaystyle u_{1},v_{1},u_{2},v_{2},\ldots ,u_{2N},v_{2N},}be4N{\displaystyle 4N}independent samples from a normal(0,1) distribution.Settingz1=u1+iv1,z2=u2+iv2,…,andz2N=u2N+iv2N,{\displaystyle z_{1}=u_{1}+iv_{1},z_{2}=u_{2}+iv_{2},\ldots ,{\text{ and }}z_{2N}=u_{2N}+iv_{2N},}thenz1,z2,…,z2N{\displaystyle z_{1},z_{2},\ldots ,z_{2N}}are independent zero-mean complex normal samples with circular symmetry. Let ofs≡∑i=1Nz2i−1z2i{\displaystyle s\equiv \sum _{i=1}^{N}z_{2i-1}z_{2i}}, Heliot et al.[14]show that the joint density function of the real and imaginary parts ofs{\displaystyle s}, denotedsR{\displaystyle s_{\textrm {R}}}andsI{\displaystyle s_{\textrm {I}}}, respectively, is given by psR,sI(sR,sI)=2(sR2+sI2)N−12πΓ(n)σsN+1Kn−1(2sR2+sI2σs),{\displaystyle p_{s_{\textrm {R}},s_{\textrm {I}}}(s_{\textrm {R}},s_{\textrm {I}})={\frac {2\left(s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}\right)^{\frac {N-1}{2}}}{\pi \Gamma (n)\sigma _{s}^{N+1}}}K_{n-1}\!\left(\!2{\frac {\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}{\sigma _{s}}}\right),}whereσs{\displaystyle \sigma _{s}}is the standard deviation ofs{\displaystyle s}. Note thatσs=1{\displaystyle \sigma _{s}=1}if all theui,vi{\displaystyle u_{i},v_{i}}variables are normal(0,1). Besides, they also prove that the density function of the magnitude ofs{\displaystyle s},|s|{\displaystyle |s|}, is p|s|(s)=4Γ(N)σsN+1sNKN−1(2sσs),{\displaystyle p_{|s|}(s)={\frac {4}{\Gamma (N)\sigma _{s}^{N+1}}}s^{N}K_{N-1}\left({\frac {2s}{\sigma _{s}}}\right),}wheres=sR2+sI2{\displaystyle s={\sqrt {s_{\textrm {R}}^{2}+s_{\textrm {I}}^{2}}}}. The first moment of this distribution, i.e. the mean of|s|{\displaystyle |s|}, can be expressed as E{|s|}=πσsΓ(N+12)2Γ(N),{\displaystyle E\{|s|\}={\sqrt {\pi }}\sigma _{s}{\frac {\Gamma (N+{\frac {1}{2}})}{2\Gamma (N)}},}which further simplifies asE{|s|}∼σsπN2,{\displaystyle E\{|s|\}\sim {\frac {\sigma _{s}{\sqrt {\pi N}}}{2}},}whenN{\displaystyle N}is asymptotically large (i.e.,N→∞{\displaystyle N\rightarrow \infty }) . The product of non-central independent complex Gaussians is described by O’Donoughue and Moura[15]and forms a double infinite series ofmodified Bessel functionsof the first and second types. The product of two independent Gamma samples,z=x1x2{\displaystyle z=x_{1}x_{2}}, definingΓ(x;ki,θi)=xki−1e−x/θiΓ(ki)θiki{\displaystyle \Gamma (x;k_{i},\theta _{i})={\frac {x^{k_{i}-1}e^{-x/\theta _{i}}}{\Gamma (k_{i})\theta _{i}^{k_{i}}}}}, follows[16] Nagar et al.[17]define a correlated bivariate beta distribution where Then the pdf ofZ=XYis given by where2F1{\displaystyle {_{2}F_{1}}}is the Gauss hypergeometric function defined by the Euler integral Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives. 
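The Gamma product result above can be checked by simulation. The closed form used below, f(z) = 2 z^{(k1+k2)/2−1} K_{k1−k2}(2√z)/(Γ(k1)Γ(k2)) for unit scale parameters, is the standard Bessel-type density recalled from the literature rather than quoted from this article, so treat it as an assumption; the shape parameters and seed are arbitrary.

import numpy as np
from scipy.special import kv, gamma as G

rng = np.random.default_rng(10)
k1, k2, m = 2.0, 3.5, 1_000_000          # unit scale parameters assumed
z = rng.gamma(k1, 1.0, m) * rng.gamma(k2, 1.0, m)

def k_density(z, k1, k2):
    # Assumed closed form for the product of Gamma(k1,1) and Gamma(k2,1) samples
    return 2 * z**((k1 + k2) / 2 - 1) * kv(k1 - k2, 2 * np.sqrt(z)) / (G(k1) * G(k2))

edges = np.linspace(0.5, 20.0, 21)
counts, _ = np.histogram(z, bins=edges)
centres = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (m * np.diff(edges))                   # empirical density estimate
print(np.c_[emp, k_density(centres, k1, k2)])         # the two columns should nearly agree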
The distribution of the product of a random variable having auniform distributionon (0,1) with a random variable having agamma distributionwith shape parameter equal to 2, is anexponential distribution.[18]A more general case of this concerns the distribution of the product of a random variable having abeta distributionwith a random variable having agamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.[18] TheK-distributionis an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution). The product ofnGamma andmPareto independent samples was derived by Nadarajah.[19]
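The first of these results can be illustrated with a short simulation (a minimal sketch assuming SciPy; the scale parameter θ is an arbitrary choice): the product of a Uniform(0, 1) sample and an independent Gamma sample with shape 2 and scale θ is tested against an exponential distribution with the same scale.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
n = 200_000
theta = 1.5  # arbitrary scale parameter for this illustration

u = rng.uniform(0.0, 1.0, size=n)              # Uniform(0, 1)
g = rng.gamma(shape=2.0, scale=theta, size=n)  # Gamma with shape parameter 2

z = u * g

# The product should follow an exponential distribution with the same scale.
stat, pvalue = kstest(z, "expon", args=(0, theta))
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}")
```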
https://en.wikipedia.org/wiki/Product_distribution
Theratio estimatoris astatistical estimatorfor theratioofmeansof two random variables. Ratio estimates arebiasedand corrections must be made when they are used in experimental or survey work. The ratio estimates are asymmetrical and symmetrical tests such as thet testshould not be used to generate confidence intervals. The bias is of the orderO(1/n) (seebig O notation) so as the sample size (n) increases, the bias will asymptotically approach 0. Therefore, the estimator is approximately unbiased for large sample sizes. Assume there are two characteristics –xandy– that can be observed for each sampled element in the data set. The ratioRis The ratio estimate of a value of theyvariate (θy) is whereθxis the corresponding value of thexvariate.θyis known to be asymptotically normally distributed.[1] The sample ratio (r) is estimated from the sample That the ratio is biased can be shown withJensen's inequalityas follows (assuming independence betweenx¯{\displaystyle {\bar {x}}}andy¯{\displaystyle {\bar {y}}}): wheremx{\displaystyle m_{x}}is the mean of the variatex{\displaystyle x}andmy{\displaystyle m_{y}}is the mean of the variatey{\displaystyle y}. Under simple random sampling the bias is of the orderO(n−1). An upper bound on the relative bias of the estimate is provided by thecoefficient of variation(the ratio of thestandard deviationto themean).[2]Under simple random sampling the relative bias isO(n−1/2). The correction methods, depending on the distributions of thexandyvariates, differ in their efficiency making it difficult to recommend an overall best method. Because the estimates ofrare biased a corrected version should be used in all subsequent calculations. A correction of the bias accurate to the first order is[citation needed] wheremxis the mean of the variatexandsxyis thecovariancebetweenxandy. To simplify the notationsxywill be used subsequently to denote the covariance between the variatesxandy. Another estimator based on theTaylor expansionis[3] wherenis the sample size,Nis the population size,mxis the mean of thexvariate andsx2andsy2are the samplevariancesof thexandyvariates respectively. A computationally simpler but slightly less accurate version of this estimator is whereNis the population size,nis the sample size,mxis the mean of thexvariate andsx2andsy2are the samplevariancesof thexandyvariates respectively. These versions differ only in the factor in the denominator (N- 1). For a largeNthe difference is negligible. Ifxandyare unitless counts withPoisson distributiona second-order correction is[4] Other methods of bias correction have also been proposed. To simplify the notation the following variables will be used Pascual's estimator:[5] Beale's estimator:[6] Tin's estimator:[7] Sahoo's estimator:[8] Sahoo has also proposed a number of additional estimators:[9] Ifxandyare unitless counts with Poisson distribution andmxandmyare both greater than 10, then the following approximation is correct to order O(n−3).[4] An asymptotically correct estimator is[3] Ajackknife estimateof the ratio is less biased than the naive form. A jackknife estimator of the ratio is wherenis the size of the sample and theriare estimated with the omission of one pair of variates at a time.[10] An alternative method is to divide the sample intoggroups each of sizepwithn=pg.[11]Letribe the estimate of theithgroup. Then the estimator wherer¯{\displaystyle {\bar {r}}}is the mean of the ratiosrgof theggroups, has a bias of at mostO(n−2). 
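The naive ratio estimate and a jackknife counterpart can be sketched as follows (illustrative only; the data are hypothetical, and the jackknife uses one common leave-one-out construction, n·r − (n − 1) times the mean of the leave-one-out ratios r(−i), rather than any particular published variant):

```python
import numpy as np

def sample_ratio(x, y):
    """Naive ratio estimate r = mean(y) / mean(x)."""
    return np.mean(y) / np.mean(x)

def jackknife_ratio(x, y):
    """Leave-one-out jackknife ratio estimate: n*r - (n-1)*mean(r_(-i)).

    The r_(-i) are the sample ratios computed with the i-th (x, y) pair
    omitted, as described in the text.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    r = sample_ratio(x, y)
    loo = (y.sum() - y) / (x.sum() - x)  # leave-one-out ratios via the totals
    return n * r - (n - 1) * loo.mean()

# Hypothetical data: y roughly proportional to x with additive noise.
rng = np.random.default_rng(0)
x = rng.gamma(shape=4.0, scale=2.0, size=30)
y = 1.7 * x + rng.normal(0.0, 2.0, size=30)

print("naive ratio    :", sample_ratio(x, y))
print("jackknife ratio:", jackknife_ratio(x, y))
```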
Other estimators based on the division of the sample intoggroups are:[12] wherer¯{\displaystyle {\bar {r}}}is the mean of the ratiosrgof theggroups and whereri'is the value of the sample ratio with theithgroup omitted. Other methods of estimating a ratio estimator includemaximum likelihoodandbootstrapping.[10] The estimated total of theyvariate (τy) is where (τx) is the total of thexvariate. The variance of the sample ratio is approximately: wheresx2andsy2are the variances of thexandyvariates respectively,mxandmyare the means of thexandyvariates respectively andsxyis the covariance ofxandy. Although the approximate variance estimator of the ratio given below is biased, if the sample size is large, the bias in this estimator is negligible. whereNis the population size,nis the sample size andmxis the mean of thexvariate. Another estimator of the variance based on theTaylor expansionis wherenis the sample size andNis the population size andsxyis the covariance ofxandy. An estimate accurate to O(n−2) is[3] If the probability distribution is Poissonian, an estimator accurate to O(n−3) is[4] A jackknife estimator of the variance is whereriis the ratio with theithpair of variates omitted andrJis the jackknife estimate of the ratio.[10] The variance of the estimated total is The variance of the estimated mean of theyvariate is wheremxis the mean of thexvariate,sx2andsy2are the sample variances of thexandyvariates respectively andsxyis the covariance ofxandy. Theskewnessand thekurtosisof the ratio depend on the distributions of thexandyvariates. Estimates have been made of these parameters fornormally distributedxandyvariates but for other distributions no expressions have yet been derived. It has been found that in general ratio variables are skewed to the right, areleptokurticand their nonnormality is increased when magnitude of the denominator'scoefficient of variationis increased. For normally distributedxandyvariates the skewness of the ratio is approximately[7] where Because the ratio estimate is generally skewed confidence intervals created with the variance and symmetrical tests such as the t test are incorrect.[10]These confidence intervals tend to overestimate the size of the left confidence interval and underestimate the size of the right. If the ratio estimator isunimodal(which is frequently the case) then a conservative estimate of the 95% confidence intervals can be made with theVysochanskiï–Petunin inequality. An alternative method of reducing or eliminating the bias in the ratio estimator is to alter the method of sampling. The variance of the ratio using these methods differs from the estimates given previously. Note that while many applications such as those discussion in Lohr[13]are intended to be restricted to positiveintegersonly, such as sizes of sample groups, the Midzuno-Sen method works for any sequence of positive numbers, integral or not. It's not clear what it means that Lahiri's methodworkssince it returns a biased result. The first of these sampling schemes is a double use of a sampling method introduced by Lahiri in 1951.[14]The algorithm here is based upon the description by Lohr.[13] The same procedure for the same desired sample size is carried out with theyvariate. Lahiri's scheme as described by Lohr isbiased highand, so, is interesting only for historical reasons. The Midzuno-Sen technique described below is recommended instead. 
In 1952 Midzuno and Sen independently described a sampling scheme that provides an unbiased estimator of the ratio.[15][16] The first sample is chosen with probability proportional to the size of thexvariate. The remainingn- 1 samples are chosen at random without replacement from the remainingN- 1 members in the population. The probability of selection under this scheme is whereXis the sum of theNxvariates and thexiare thenmembers of the sample. Then the ratio of the sum of theyvariates and the sum of thexvariates chosen in this fashion is an unbiased estimate of the ratio estimator. In symbols we have wherexiandyiare chosen according to the scheme described above. The ratio estimator given by this scheme is unbiased. Särndal, Swensson, and Wretman credit Lahiri, Midzuno and Sen for the insights leading to this method[17]but Lahiri's technique is biased high. Tin (1965)[18]described and compared ratio estimators proposed by Beale (1962)[19]and Quenouille (1956)[20]and proposed a modified approach (now referred to as Tin's method). These ratio estimators are commonly used to calculate pollutant loads from sampling of waterways, particularly where flow is measured more frequently than water quality. For example see Quilbe et al., (2006)[21] If a linear relationship between thexandyvariates exists and theregressionequation passes through the origin then the estimated variance of the regression equation is always less than that of the ratio estimator[citation needed]. The precise relationship between the variances depends on the linearity of the relationship between thexandyvariates: when the relationship is other than linear the ratio estimate may have a lower variance than that estimated by regression. Although the ratio estimator may be of use in a number of settings it is of particular use in two cases: The first known use of the ratio estimator was byJohn GrauntinEnglandwho in 1662 was the first to estimate the ratioy/xwhereyrepresented the total population andxthe known total number of registered births in the same areas during the preceding year. Later Messance (~1765) and Moheau (1778) published very carefully prepared estimates forFrancebased on enumeration of population in certain districts and on the count of births, deaths and marriages as reported for the whole country. The districts from which the ratio of inhabitants to birth was determined only constituted a sample. In 1802,Laplacewished to estimate the population of France. Nopopulation censushad been carried out and Laplace lacked the resources to count every individual. Instead he sampled 30parisheswhose total number of inhabitants was 2,037,615. The parish baptismal registrations were considered to be reliable estimates of the number of live births so he used the total number of births over a three-year period. The sample estimate was 71,866.333 baptisms per year over this period giving a ratio of one registered baptism for every 28.35 persons. The total number of baptismal registrations for France was also available to him and he assumed that the ratio of live births to population was constant. He then used the ratio from his sample to estimate the population of France. Karl Pearsonsaid in 1897 that the ratio estimates are biased and cautioned against their use.[22]
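Returning to the Midzuno-Sen scheme described above, the selection procedure and the resulting ratio estimate can be sketched as follows (a minimal simulation with a hypothetical population; the population size, sample size, and relationship between x and y are arbitrary choices for illustration):

```python
import numpy as np

def midzuno_sen_sample(x, n, rng):
    """Midzuno-Sen selection (illustrative sketch).

    The first unit is drawn with probability proportional to x; the remaining
    n-1 units are a simple random sample without replacement from the other
    N-1 units, as described above.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    first = rng.choice(N, p=x / x.sum())
    rest_pool = np.delete(np.arange(N), first)
    rest = rng.choice(rest_pool, size=n - 1, replace=False)
    return np.concatenate(([first], rest))

# Hypothetical population with y roughly proportional to x.
rng = np.random.default_rng(0)
N, n = 500, 25
x = rng.gamma(shape=3.0, scale=5.0, size=N)
y = 2.3 * x + rng.normal(0.0, 4.0, size=N)

# Under this scheme, sum(y_sample)/sum(x_sample) is an unbiased estimate of
# the population ratio sum(y)/sum(x); averaging over many samples shows this.
estimates = []
for _ in range(2000):
    idx = midzuno_sen_sample(x, n, rng)
    estimates.append(y[idx].sum() / x[idx].sum())

print("mean of estimates:", np.mean(estimates))
print("population ratio :", y.sum() / x.sum())
```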
https://en.wikipedia.org/wiki/Ratio_estimator
Inmachine learning,normalizationis a statistical technique with various applications. There are two main forms of normalization, namelydata normalizationandactivation normalization. Data normalization (orfeature scaling) includes methods that rescale input data so that thefeatureshave the same range, mean, variance, or other statistical properties. For instance, a popular choice of feature scaling method ismin-max normalization, where each feature is transformed to have the same range (typically[0,1]{\displaystyle [0,1]}or[−1,1]{\displaystyle [-1,1]}). This solves the problem of different features having vastly different scales, for example if one feature is measured in kilometers and another in nanometers. Activation normalization, on the other hand, is specific todeep learning, and includes methods that rescale the activation ofhidden neuronsinsideneural networks. Normalization is often used to: Normalization techniques are often theoretically justified as reducing covariance shift, smoothing optimization landscapes, and increasingregularization, though they are mainly justified by empirical success.[1] Batch normalization(BatchNorm)[2]operates on the activations of a layer for each mini-batch. Consider a simple feedforward network, defined by chaining together modules: x(0)↦x(1)↦x(2)↦⋯{\displaystyle x^{(0)}\mapsto x^{(1)}\mapsto x^{(2)}\mapsto \cdots } where each network module can be a linear transform, a nonlinear activation function, a convolution, etc.x(0){\displaystyle x^{(0)}}is the input vector,x(1){\displaystyle x^{(1)}}is the output vector from the first module, etc. BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just afterx(l){\displaystyle x^{(l)}}, then the network would operate accordingly: ⋯↦x(l)↦BN(x(l))↦x(l+1)↦⋯{\displaystyle \cdots \mapsto x^{(l)}\mapsto \mathrm {BN} (x^{(l)})\mapsto x^{(l+1)}\mapsto \cdots } The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time. Concretely, suppose we have a batch of inputsx(1)(0),x(2)(0),…,x(B)(0){\displaystyle x_{(1)}^{(0)},x_{(2)}^{(0)},\dots ,x_{(B)}^{(0)}}, fed all at once into the network. We would obtain in the middle of the network some vectors: x(1)(l),x(2)(l),…,x(B)(l){\displaystyle x_{(1)}^{(l)},x_{(2)}^{(l)},\dots ,x_{(B)}^{(l)}} The BatchNorm module computes the coordinate-wise mean and variance of these vectors: μi(l)=1B∑b=1Bx(b),i(l)(σi(l))2=1B∑b=1B(x(b),i(l)−μi(l))2{\displaystyle {\begin{aligned}\mu _{i}^{(l)}&={\frac {1}{B}}\sum _{b=1}^{B}x_{(b),i}^{(l)}\\(\sigma _{i}^{(l)})^{2}&={\frac {1}{B}}\sum _{b=1}^{B}(x_{(b),i}^{(l)}-\mu _{i}^{(l)})^{2}\end{aligned}}} wherei{\displaystyle i}indexes the coordinates of the vectors, andb{\displaystyle b}indexes the elements of the batch. In other words, we are considering thei{\displaystyle i}-th coordinate of each vector in the batch, and computing the mean and variance of these numbers. It then normalizes each coordinate to have zero mean and unit variance: x^(b),i(l)=x(b),i(l)−μi(l)(σi(l))2+ϵ{\displaystyle {\hat {x}}_{(b),i}^{(l)}={\frac {x_{(b),i}^{(l)}-\mu _{i}^{(l)}}{\sqrt {(\sigma _{i}^{(l)})^{2}+\epsilon }}}} Theϵ{\displaystyle \epsilon }is a small positive constant such as10−9{\displaystyle 10^{-9}}added to the variance for numerical stability, to avoiddivision by zero. 
Finally, it applies a linear transformation: y(b),i(l)=γix^(b),i(l)+βi{\displaystyle y_{(b),i}^{(l)}=\gamma _{i}{\hat {x}}_{(b),i}^{(l)}+\beta _{i}} Here,γ{\displaystyle \gamma }andβ{\displaystyle \beta }are parameters inside the BatchNorm module. They are learnable parameters, typically trained bygradient descent. The following is aPythonimplementation of BatchNorm: γ{\displaystyle \gamma }andβ{\displaystyle \beta }allow the network to learn to undo the normalization, if this is beneficial.[3]BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus solely on modelling the nonlinear aspects of data, which may be beneficial, as a neural network can always be augmented with a linear transformation layer on top.[4][3] It is claimed in the original publication that BatchNorm works by reducing internal covariance shift, though the claim has both supporters[5][6]and detractors.[7][8] The original paper[2]recommended to only use BatchNorms after a linear transform, not after a nonlinear activation. That is,ϕ(BN(Wx+b)){\displaystyle \phi (\mathrm {BN} (Wx+b))}, notBN(ϕ(Wx+b)){\displaystyle \mathrm {BN} (\phi (Wx+b))}. Also, the biasb{\displaystyle b}does not matter, since it would be canceled by the subsequent mean subtraction, so it is of the formBN(Wx){\displaystyle \mathrm {BN} (Wx)}. That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to zero.[2] Forconvolutional neural networks(CNNs), BatchNorm must preserve the translation-invariance of these models, meaning that it must treat all outputs of the samekernelas if they are different data points within a batch.[2]This is sometimes called Spatial BatchNorm, or BatchNorm2D, or per-channel BatchNorm.[9][10] Concretely, suppose we have a 2-dimensional convolutional layer defined by: xh,w,c(l)=∑h′,w′,c′Kh′−h,w′−w,c,c′(l)xh′,w′,c′(l−1)+bc(l){\displaystyle x_{h,w,c}^{(l)}=\sum _{h',w',c'}K_{h'-h,w'-w,c,c'}^{(l)}x_{h',w',c'}^{(l-1)}+b_{c}^{(l)}} where: In order to preserve the translational invariance, BatchNorm treats all outputs from the same kernel in the same batch as more data in a batch. That is, it is applied once perkernelc{\displaystyle c}(equivalently, once per channelc{\displaystyle c}), not peractivationxh,w,c(l+1){\displaystyle x_{h,w,c}^{(l+1)}}: μc(l)=1BHW∑b=1B∑h=1H∑w=1Wx(b),h,w,c(l)(σc(l))2=1BHW∑b=1B∑h=1H∑w=1W(x(b),h,w,c(l)−μc(l))2{\displaystyle {\begin{aligned}\mu _{c}^{(l)}&={\frac {1}{BHW}}\sum _{b=1}^{B}\sum _{h=1}^{H}\sum _{w=1}^{W}x_{(b),h,w,c}^{(l)}\\(\sigma _{c}^{(l)})^{2}&={\frac {1}{BHW}}\sum _{b=1}^{B}\sum _{h=1}^{H}\sum _{w=1}^{W}(x_{(b),h,w,c}^{(l)}-\mu _{c}^{(l)})^{2}\end{aligned}}} whereB{\displaystyle B}is the batch size,H{\displaystyle H}is the height of the feature map, andW{\displaystyle W}is the width of the feature map. That is, even though there are onlyB{\displaystyle B}data points in a batch, allBHW{\displaystyle BHW}outputs from the kernel in this batch are treated equally.[2] Subsequently, normalization and the linear transform is also done per kernel: x^(b),h,w,c(l)=x(b),h,w,c(l)−μc(l)(σc(l))2+ϵy(b),h,w,c(l)=γcx^(b),h,w,c(l)+βc{\displaystyle {\begin{aligned}{\hat {x}}_{(b),h,w,c}^{(l)}&={\frac {x_{(b),h,w,c}^{(l)}-\mu _{c}^{(l)}}{\sqrt {(\sigma _{c}^{(l)})^{2}+\epsilon }}}\\y_{(b),h,w,c}^{(l)}&=\gamma _{c}{\hat {x}}_{(b),h,w,c}^{(l)}+\beta _{c}\end{aligned}}} Similar considerations apply for BatchNorm forn-dimensional convolutions. 
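The Python implementation referred to in this passage can be sketched as follows (a minimal NumPy illustration of the training-time forward pass, not the reference implementation; it covers the vector case and the per-channel case for 2D convolutions, assuming a channels-last (B, H, W, C) layout):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-9):
    """Training-time BatchNorm for a batch of vectors x with shape (B, D)."""
    mean = x.mean(axis=0)                    # per-coordinate mean over the batch
    var = x.var(axis=0)                      # per-coordinate variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per coordinate
    return gamma * x_hat + beta              # learnable scale and shift

def batchnorm2d_forward(x, gamma, beta, eps=1e-9):
    """Per-channel BatchNorm for convolutional activations x with shape (B, H, W, C)."""
    mean = x.mean(axis=(0, 1, 2))            # statistics pooled over batch and spatial dims
    var = x.var(axis=(0, 1, 2))
    x_hat = (x - mean) / np.sqrt(var + eps)  # broadcasting applies the statistics per channel
    return gamma * x_hat + beta

# Example: after normalization, each channel has roughly zero mean and unit variance.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(16, 8, 8, 4))
y = batchnorm2d_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=(0, 1, 2)), y.var(axis=(0, 1, 2)))
```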
The following is a Python implementation of BatchNorm for 2D convolutions: For multilayeredrecurrent neural networks(RNN), BatchNorm is usually applied only for theinput-to-hiddenpart, not thehidden-to-hiddenpart.[11]Let the hidden state of thel{\displaystyle l}-th layer at timet{\displaystyle t}beht(l){\displaystyle h_{t}^{(l)}}. The standard RNN, without normalization, satisfiesht(l)=ϕ(W(l)htl−1+U(l)ht−1l+b(l)){\displaystyle h_{t}^{(l)}=\phi (W^{(l)}h_{t}^{l-1}+U^{(l)}h_{t-1}^{l}+b^{(l)})}whereW(l),U(l),b(l){\displaystyle W^{(l)},U^{(l)},b^{(l)}}are weights and biases, andϕ{\displaystyle \phi }is the activation function. Applying BatchNorm, this becomesht(l)=ϕ(BN(W(l)htl−1)+U(l)ht−1l){\displaystyle h_{t}^{(l)}=\phi (\mathrm {BN} (W^{(l)}h_{t}^{l-1})+U^{(l)}h_{t-1}^{l})}There are two possible ways to define what a "batch" is in BatchNorm for RNNs:frame-wiseandsequence-wise. Concretely, consider applying an RNN to process a batch of sentences. Lethb,t(l){\displaystyle h_{b,t}^{(l)}}be the hidden state of thel{\displaystyle l}-th layer for thet{\displaystyle t}-th token of theb{\displaystyle b}-th input sentence. Then frame-wise BatchNorm means normalizing overb{\displaystyle b}:μt(l)=1B∑b=1Bhi,t(l)(σt(l))2=1B∑b=1B(ht(l)−μt(l))2{\displaystyle {\begin{aligned}\mu _{t}^{(l)}&={\frac {1}{B}}\sum _{b=1}^{B}h_{i,t}^{(l)}\\(\sigma _{t}^{(l)})^{2}&={\frac {1}{B}}\sum _{b=1}^{B}(h_{t}^{(l)}-\mu _{t}^{(l)})^{2}\end{aligned}}}and sequence-wise means normalizing over(b,t){\displaystyle (b,t)}:μ(l)=1BT∑b=1B∑t=1Thi,t(l)(σ(l))2=1BT∑b=1B∑t=1T(ht(l)−μ(l))2{\displaystyle {\begin{aligned}\mu ^{(l)}&={\frac {1}{BT}}\sum _{b=1}^{B}\sum _{t=1}^{T}h_{i,t}^{(l)}\\(\sigma ^{(l)})^{2}&={\frac {1}{BT}}\sum _{b=1}^{B}\sum _{t=1}^{T}(h_{t}^{(l)}-\mu ^{(l)})^{2}\end{aligned}}}Frame-wise BatchNorm is suited for causal tasks such as next-character prediction, where future frames are unavailable, forcing normalization per frame. Sequence-wise BatchNorm is suited for tasks such as speech recognition, where the entire sequences are available, but with variable lengths. In a batch, the smaller sequences are padded with zeroes to match the size of the longest sequence of the batch. In such setups, frame-wise is not recommended, because the number of unpadded frames decreases along the time axis, leading to increasingly poorer statistics estimates.[11] It is also possible to apply BatchNorm toLSTMs.[12] BatchNorm has been very popular and there were many attempted improvements. Some examples include:[13] A particular problem with BatchNorm is that during training, the mean and variance are calculated on the fly for each batch (usually as anexponential moving average), but during inference, the mean and variance were frozen from those calculated during training. This train-test disparity degrades performance. The disparity can be decreased by simulating the moving average during inference:[13]: Eq. 3 μ=αE[x]+(1−α)μx,trainσ2=(αE[x]2+(1−α)μx2,train)−μ2{\displaystyle {\begin{aligned}\mu &=\alpha E[x]+(1-\alpha )\mu _{x,{\text{ train}}}\\\sigma ^{2}&=(\alpha E[x]^{2}+(1-\alpha )\mu _{x^{2},{\text{ train}}})-\mu ^{2}\end{aligned}}} whereα{\displaystyle \alpha }is a hyperparameter to be optimized on a validation set. Other works attempt to eliminate BatchNorm, such as the Normalizer-Free ResNet.[14] Layer normalization(LayerNorm)[15]is a popular alternative to BatchNorm. 
Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component oftransformermodels. For a given data input and layer, LayerNorm computes the meanμ{\displaystyle \mu }and varianceσ2{\displaystyle \sigma ^{2}}over all the neurons in the layer. Similar to BatchNorm, learnable parametersγ{\displaystyle \gamma }(scale) andβ{\displaystyle \beta }(shift) are applied. It is defined by: xi^=xi−μσ2+ϵ,yi=γixi^+βi{\displaystyle {\hat {x_{i}}}={\frac {x_{i}-\mu }{\sqrt {\sigma ^{2}+\epsilon }}},\quad y_{i}=\gamma _{i}{\hat {x_{i}}}+\beta _{i}} where: μ=1D∑i=1Dxi,σ2=1D∑i=1D(xi−μ)2{\displaystyle \mu ={\frac {1}{D}}\sum _{i=1}^{D}x_{i},\quad \sigma ^{2}={\frac {1}{D}}\sum _{i=1}^{D}(x_{i}-\mu )^{2}} and the indexi{\displaystyle i}ranges over the neurons in that layer. For example, in CNN, a LayerNorm applies to all activations in a layer. In the previous notation, we have: μ(l)=1HWC∑h=1H∑w=1W∑c=1Cxh,w,c(l)(σ(l))2=1HWC∑h=1H∑w=1W∑c=1C(xh,w,c(l)−μ(l))2x^h,w,c(l)=x^h,w,c(l)−μ(l)(σ(l))2+ϵyh,w,c(l)=γ(l)x^h,w,c(l)+β(l){\displaystyle {\begin{aligned}\mu ^{(l)}&={\frac {1}{HWC}}\sum _{h=1}^{H}\sum _{w=1}^{W}\sum _{c=1}^{C}x_{h,w,c}^{(l)}\\(\sigma ^{(l)})^{2}&={\frac {1}{HWC}}\sum _{h=1}^{H}\sum _{w=1}^{W}\sum _{c=1}^{C}(x_{h,w,c}^{(l)}-\mu ^{(l)})^{2}\\{\hat {x}}_{h,w,c}^{(l)}&={\frac {{\hat {x}}_{h,w,c}^{(l)}-\mu ^{(l)}}{\sqrt {(\sigma ^{(l)})^{2}+\epsilon }}}\\y_{h,w,c}^{(l)}&=\gamma ^{(l)}{\hat {x}}_{h,w,c}^{(l)}+\beta ^{(l)}\end{aligned}}} Notice that the batch indexb{\displaystyle b}is removed, while the channel indexc{\displaystyle c}is added. Inrecurrent neural networks[15]andtransformers,[16]LayerNorm is applied individually to each timestep. For example, if the hidden vector in an RNN at timestept{\displaystyle t}isx(t)∈RD{\displaystyle x^{(t)}\in \mathbb {R} ^{D}}, whereD{\displaystyle D}is the dimension of the hidden vector, then LayerNorm will be applied with: xi^(t)=xi(t)−μ(t)(σ(t))2+ϵ,yi(t)=γixi^(t)+βi{\displaystyle {\hat {x_{i}}}^{(t)}={\frac {x_{i}^{(t)}-\mu ^{(t)}}{\sqrt {(\sigma ^{(t)})^{2}+\epsilon }}},\quad y_{i}^{(t)}=\gamma _{i}{\hat {x_{i}}}^{(t)}+\beta _{i}} where: μ(t)=1D∑i=1Dxi(t),(σ(t))2=1D∑i=1D(xi(t)−μ(t))2{\displaystyle \mu ^{(t)}={\frac {1}{D}}\sum _{i=1}^{D}x_{i}^{(t)},\quad (\sigma ^{(t)})^{2}={\frac {1}{D}}\sum _{i=1}^{D}(x_{i}^{(t)}-\mu ^{(t)})^{2}} Root mean square layer normalization(RMSNorm)[17]changes LayerNorm by: xi^=xi1D∑i=1Dxi2,yi=γxi^+β{\displaystyle {\hat {x_{i}}}={\frac {x_{i}}{\sqrt {{\frac {1}{D}}\sum _{i=1}^{D}x_{i}^{2}}}},\quad y_{i}=\gamma {\hat {x_{i}}}+\beta } Essentially, it is LayerNorm where we enforceμ,ϵ=0{\displaystyle \mu ,\epsilon =0}. Adaptive layer norm(adaLN) computes theγ,β{\displaystyle \gamma ,\beta }in a LayerNorm not from the layer activation itself, but from other data. It was first proposed for CNNs,[18]and has been used effectively indiffusiontransformers (DiTs).[19]For example, in a DiT, the conditioning information (such as a text encoding vector) is processed by amultilayer perceptronintoγ,β{\displaystyle \gamma ,\beta }, which is then applied in the LayerNorm module of a transformer. Weight normalization(WeightNorm)[20]is a technique inspired by BatchNorm that normalizes weight matrices in a neural network, rather than its activations. One example isspectral normalization, which divides weight matrices by theirspectral norm. 
The spectral normalization is used ingenerative adversarial networks(GANs) such as theWasserstein GAN.[21]The spectral radius can be efficiently computed by the following algorithm: INPUTmatrixW{\displaystyle W}and initial guessx{\displaystyle x} Iteratex↦1‖Wx‖2Wx{\displaystyle x\mapsto {\frac {1}{\|Wx\|_{2}}}Wx}to convergencex∗{\displaystyle x^{*}}. This is the eigenvector ofW{\displaystyle W}with eigenvalue‖W‖s{\displaystyle \|W\|_{s}}. RETURNx∗,‖Wx∗‖2{\displaystyle x^{*},\|Wx^{*}\|_{2}} By reassigningWi←Wi‖Wi‖s{\displaystyle W_{i}\leftarrow {\frac {W_{i}}{\|W_{i}\|_{s}}}}after each update of the discriminator, we can upper-bound‖Wi‖s≤1{\displaystyle \|W_{i}\|_{s}\leq 1}, and thus upper-bound‖D‖L{\displaystyle \|D\|_{L}}. The algorithm can be further accelerated bymemoization: at stept{\displaystyle t}, storexi∗(t){\displaystyle x_{i}^{*}(t)}. Then, at stept+1{\displaystyle t+1}, usexi∗(t){\displaystyle x_{i}^{*}(t)}as the initial guess for the algorithm. SinceWi(t+1){\displaystyle W_{i}(t+1)}is very close toWi(t){\displaystyle W_{i}(t)}, so isxi∗(t){\displaystyle x_{i}^{*}(t)}toxi∗(t+1){\displaystyle x_{i}^{*}(t+1)}, thus allowing rapid convergence. There are some activation normalization techniques that are only used for CNNs. Local response normalization[22]was used inAlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined by: bx,yi=ax,yi(k+α∑j=max(0,i−n/2)min(N−1,i+n/2)(ax,yj)2)β{\displaystyle b_{x,y}^{i}={\frac {a_{x,y}^{i}}{\left(k+\alpha \sum _{j=\max(0,i-n/2)}^{\min(N-1,i+n/2)}\left(a_{x,y}^{j}\right)^{2}\right)^{\beta }}}} whereax,yi{\displaystyle a_{x,y}^{i}}is the activation of the neuron at location(x,y){\displaystyle (x,y)}and channeli{\displaystyle i}. I.e., each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels. k,n,α,β{\displaystyle k,n,\alpha ,\beta }are hyperparameters picked by using a validation set. It was a variant of the earlierlocal contrast normalization.[23] bx,yi=ax,yi(k+α∑j=max(0,i−n/2)min(N−1,i+n/2)(ax,yj−a¯x,yj)2)β{\displaystyle b_{x,y}^{i}={\frac {a_{x,y}^{i}}{\left(k+\alpha \sum _{j=\max(0,i-n/2)}^{\min(N-1,i+n/2)}\left(a_{x,y}^{j}-{\bar {a}}_{x,y}^{j}\right)^{2}\right)^{\beta }}}} wherea¯x,yj{\displaystyle {\bar {a}}_{x,y}^{j}}is the average activation in a small window centered on location(x,y){\displaystyle (x,y)}and channeli{\displaystyle i}. The hyperparametersk,n,α,β{\displaystyle k,n,\alpha ,\beta }, and the size of the small window, are picked by using a validation set. Similar methods were calleddivisive normalization, as they divide activations by a number depending on the activations. They were originally inspired by biology, where it was used to explain nonlinear responses of cortical neurons and nonlinear masking in visual perception.[24] Both kinds of local normalization were obviated by batch normalization, which is a more global form of normalization.[25] Response normalization reappeared in ConvNeXT-2 asglobal response normalization.[26] Group normalization(GroupNorm)[27]is a technique also solely used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel group. Suppose at a layerl{\displaystyle l}, there are channels1,2,…,C{\displaystyle 1,2,\dots ,C}, then it is partitioned into groupsg1,g2,…,gG{\displaystyle g_{1},g_{2},\dots ,g_{G}}. Then, LayerNorm is applied to each group. 
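A minimal NumPy sketch of GroupNorm as just described, for a single channels-last sample of shape (H, W, C) (the group count and ε are illustrative choices, and the number of groups is assumed to divide C):

```python
import numpy as np

def groupnorm(x, gamma, beta, num_groups, eps=1e-5):
    """GroupNorm for a single sample x of shape (H, W, C).

    The C channels are partitioned into num_groups groups, and LayerNorm-style
    statistics are computed within each group, as described above.
    """
    H, W, C = x.shape
    g = x.reshape(H, W, num_groups, C // num_groups)
    mean = g.mean(axis=(0, 1, 3), keepdims=True)
    var = g.var(axis=(0, 1, 3), keepdims=True)
    g_hat = (g - mean) / np.sqrt(var + eps)
    return gamma * g_hat.reshape(H, W, C) + beta

# num_groups=1 recovers LayerNorm over the whole layer; one group per channel
# gives per-channel statistics, as in instance normalization.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 6))
y = groupnorm(x, gamma=np.ones(6), beta=np.zeros(6), num_groups=3)
print(y.shape)
```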
Instance normalization(InstanceNorm), orcontrast normalization, is a technique first developed forneural style transfer, and is also only used for CNNs.[28]It can be understood as the LayerNorm for CNN applied once per channel, or equivalently, as group normalization where each group consists of a single channel: μc(l)=1HW∑h=1H∑w=1Wxh,w,c(l)(σc(l))2=1HW∑h=1H∑w=1W(xh,w,c(l)−μc(l))2x^h,w,c(l)=x^h,w,c(l)−μc(l)(σc(l))2+ϵyh,w,c(l)=γc(l)x^h,w,c(l)+βc(l){\displaystyle {\begin{aligned}\mu _{c}^{(l)}&={\frac {1}{HW}}\sum _{h=1}^{H}\sum _{w=1}^{W}x_{h,w,c}^{(l)}\\(\sigma _{c}^{(l)})^{2}&={\frac {1}{HW}}\sum _{h=1}^{H}\sum _{w=1}^{W}(x_{h,w,c}^{(l)}-\mu _{c}^{(l)})^{2}\\{\hat {x}}_{h,w,c}^{(l)}&={\frac {{\hat {x}}_{h,w,c}^{(l)}-\mu _{c}^{(l)}}{\sqrt {(\sigma _{c}^{(l)})^{2}+\epsilon }}}\\y_{h,w,c}^{(l)}&=\gamma _{c}^{(l)}{\hat {x}}_{h,w,c}^{(l)}+\beta _{c}^{(l)}\end{aligned}}} Adaptive instance normalization(AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNNs, rather than just CNNs in general.[29] In the AdaIN method of style transfer, we take a CNN and two input images, one forcontentand one forstyle. Each image is processed through the same CNN, and at a certain layerl{\displaystyle l}, AdaIn is applied. Letx(l),content{\displaystyle x^{(l),{\text{ content}}}}be the activation in the content image, andx(l),style{\displaystyle x^{(l),{\text{ style}}}}be the activation in the style image. Then, AdaIn first computes the mean and variance of the activations of the content imagex′(l){\displaystyle x'^{(l)}}, then uses those as theγ,β{\displaystyle \gamma ,\beta }for InstanceNorm onx(l),content{\displaystyle x^{(l),{\text{ content}}}}. Note thatx(l),style{\displaystyle x^{(l),{\text{ style}}}}itself remains unchanged. Explicitly, we have: yh,w,c(l),content=σc(l),style(xh,w,c(l),content−μc(l),content(σc(l),content)2+ϵ)+μc(l),style{\displaystyle {\begin{aligned}y_{h,w,c}^{(l),{\text{ content}}}&=\sigma _{c}^{(l),{\text{ style}}}\left({\frac {x_{h,w,c}^{(l),{\text{ content}}}-\mu _{c}^{(l),{\text{ content}}}}{\sqrt {(\sigma _{c}^{(l),{\text{ content}}})^{2}+\epsilon }}}\right)+\mu _{c}^{(l),{\text{ style}}}\end{aligned}}} Some normalization methods were designed for use intransformers. The original 2017 transformer used the "post-LN" configuration for its LayerNorms. It was difficult to train, and required carefulhyperparameter tuningand a "warm-up" inlearning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[30]was found to be easier to train, requiring no warm-up, leading to faster convergence.[31] FixNorm[32]andScaleNorm[33]both normalize activation vectors in a transformer. The FixNorm method divides theoutputvectors from a transformer by their L2 norms, then multiplies by a learned parameterg{\displaystyle g}. The ScaleNorm replaces all LayerNorms inside a transformer by division with L2 norm, then multiplying by a learned parameterg′{\displaystyle g'}(shared by all ScaleNorm modules of a transformer).Query-Key normalization(QKNorm)[34]normalizes query and key vectors to have unit L2 norm. InnGPT, many vectors are normalized to have unit L2 norm:[35]hidden state vectors, input and output embedding vectors, weight matrix columns, and query and key vectors. Gradient normalization(GradNorm)[36]normalizes gradient vectors during backpropagation.
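The L2-norm-based schemes mentioned here (ScaleNorm and QKNorm, together with the closely related RMSNorm defined earlier) all amount to dividing by a norm and multiplying by a learned scale. A minimal sketch follows (illustrative only; g and gamma stand for learned parameters, and the small eps terms are numerical-stability additions not present in the formulas above):

```python
import numpy as np

def scale_norm(x, g, eps=1e-8):
    """ScaleNorm / QKNorm-style normalization: unit L2 norm times a learned scalar g."""
    return g * x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def rms_norm(x, gamma, eps=1e-8):
    """RMSNorm: divide by the root mean square of the coordinates, then rescale."""
    rms = np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)
    return gamma * x / rms

x = np.array([[1.0, 2.0, 3.0, 4.0]])
print(scale_norm(x, g=1.0))
print(rms_norm(x, gamma=np.ones(4)))
```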
https://en.wikipedia.org/wiki/Normalization_(machine_learning)
Insignal processing,Feature space Maximum Likelihood Linear Regression(fMLLR) is a global feature transform that are typically applied in a speaker adaptive way, where fMLLR transforms acoustic features to speaker adapted features by a multiplication operation with a transformation matrix. In some literature, fMLLR is also known as theConstrained Maximum Likelihood Linear Regression(cMLLR). fMLLR transformations are trained in a maximum likelihood sense on adaptation data. These transformations may be estimated in many ways, but only maximum likelihood (ML) estimation is considered in fMLLR. The fMLLR transformation is trained on a particular set of adaptation data, such that it maximizes the likelihood of that adaptation data given a current model-set. This technique is a widely used approach for speaker adaptation inHMM-basedspeech recognition.[1][2]Later research[3]also shows that fMLLR is an excellent acoustic feature for DNN/HMM[4]hybrid speech recognition models. The advantage of fMLLR includes the following: Major problem and disadvantage of fMLLR: Feature transform of fMLLR can be easily computed with the open source speech toolKaldi, the Kaldi script uses the standard estimation scheme described in Appendix B of the original paper,[1]in particular the section Appendix B.1 "Direct method over rows". In the Kaldi formulation, fMLLR is an affine feature transform of the formx{\displaystyle x}→A{\displaystyle A}x{\displaystyle x}+b{\displaystyle +b}, which can be written in the formx{\displaystyle x}→Wx^{\displaystyle {\hat {x}}}, wherex^{\displaystyle {\hat {x}}}=[x1]{\displaystyle {\begin{bmatrix}x\\1\end{bmatrix}}}is the acoustic featurex{\displaystyle x}with a 1 appended. Note that this differs from some of the literature where the 1 comes first asx^{\displaystyle {\hat {x}}}=[1x]{\displaystyle {\begin{bmatrix}1\\x\end{bmatrix}}}. The sufficient statistics stored are: K=∑t,j,mγj,m(t)Σjm−1μjmx(t)+{\displaystyle K=\sum _{t,j,m}\gamma _{j,m}(t)\textstyle \Sigma _{jm}^{-1}\mu _{jm}x(t)^{+}\displaystyle } whereΣjm−1{\displaystyle \textstyle \Sigma _{jm}^{-1}\displaystyle }is the inverse co-variance matrix. And for0≤i≤D{\displaystyle 0\leq i\leq D}whereD{\displaystyle D}is the feature dimension: G(i)=∑t,j,mγj,m(t)(1σj,m2(i))x(t)+x(t)+T{\displaystyle G^{(i)}=\sum _{t,j,m}\gamma _{j,m}(t)\left({\frac {1}{\sigma _{j,m}^{2}(i)}}\right)x(t)^{+}x(t)^{+T}\displaystyle } For a thorough review that explains fMLLR and the commonly used estimation techniques, see the original paper "Maximum likelihood linear transformations for HMM-based speech recognition[1]". Note that the Kaldi script that performs the feature transforms of fMLLR differs with[1]by using a column of the inverse in place of the cofactor row. In other words, the factor of the determinant is ignored, as it does not affect the transform result and can causes potential danger of numerical underflow or overflow. Experiment result shows that by using the fMLLR feature in speech recognition, constant improvement is gained over other acoustic features on various commonly used benchmark datasets (TIMIT,LibriSpeech, etc). In particular, fMLLR features outperformMFCCsandFBANKscoefficients, which is mainly due to the speaker adaptation process that fMLLR performs.[3] In,[3]phoneme error rate (PER, %) is reported for the test set ofTIMITwith various neural architectures: As expected, fMLLR features outperformMFCCsandFBANKscoefficients despite the use of different model architecture. 
Here MLP (multi-layer perceptron) serves as a simple baseline, while RNN, LSTM, and GRU are all well-known recurrent models. The Li-GRU[5] architecture is based on a single gate and thus saves 33% of the computations of a standard GRU model; it also effectively addresses the gradient vanishing problem of recurrent models. As a result, the best performance is obtained with the Li-GRU model on fMLLR features. fMLLR can be extracted as reported in the s5 recipe of Kaldi. Kaldi scripts can extract fMLLR features on different datasets; below are the basic example steps to extract fMLLR features from the open-source speech corpus Librispeech. Note that the instructions below are for the subsets train-clean-100, train-clean-360, dev-clean, and test-clean, but they can easily be extended to support the other sets dev-other, test-other, and train-other-500.
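As an illustration of the transform itself, independent of the Kaldi recipe, the following minimal NumPy sketch applies an already-estimated fMLLR matrix W to a sequence of feature frames using the x̂ = [x; 1] convention described earlier (here W is a random placeholder rather than a maximum-likelihood estimate):

```python
import numpy as np

def apply_fmllr(features, W):
    """Apply an fMLLR transform W of shape (D, D+1) to features of shape (T, D).

    Each frame x is extended to x_hat = [x; 1] and mapped to W @ x_hat,
    i.e. the affine transform x -> A x + b with W = [A | b].
    """
    T, D = features.shape
    x_hat = np.hstack([features, np.ones((T, 1))])  # append a 1 to each frame
    return x_hat @ W.T

# Placeholder example: random 13-dimensional features and a random W.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 13))   # 100 frames of 13-dimensional features
W = rng.normal(size=(13, 14))        # would normally be estimated by ML on adaptation data
adapted = apply_fmllr(feats, W)
print(adapted.shape)                 # (100, 13)
```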
https://en.wikipedia.org/wiki/FMLLR
Ahyper-heuristicis aheuristicsearch method that seeks to automate, often by the incorporation ofmachine learningtechniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.[1][2][3] There might be multiple heuristics from which one can choose for solving a problem, and each heuristic has its own strength and weakness. The idea is to automatically devise algorithms by combining the strength and compensating for the weakness of known heuristics.[4]In a typical hyper-heuristic framework there is a high-level methodology and a set of low-level heuristics (either constructive or perturbative heuristics). Given a problem instance, the high-level method selects which low-level heuristic should be applied at any given time, depending upon the current problem state (or search stage) determined by features.[2][5][6] The fundamental difference betweenmetaheuristicsand hyper-heuristics is that most implementations of metaheuristics search within asearch spaceof problem solutions, whereas hyper-heuristics always search within a search space of heuristics. Thus, when using hyper-heuristics, we are attempting to find the right method or sequence of heuristics in a given situation rather than trying to solve a problem directly. Moreover, we are searching for a generally applicable methodology rather than solving a single problem instance. Hyper-heuristics could be regarded as "off-the-peg" methods as opposed to "made-to-measure" metaheuristics. They aim to be generic methods, which should produce solutions of acceptable quality, based on a set of easy-to-implement low-level heuristics. Despite the significant progress in building search methodologies for a wide variety of application areas so far, such approaches still require specialists to integrate their expertise in a given problem domain. Many researchers fromcomputer science,artificial intelligenceandoperational researchhave already acknowledged the need for developing automated systems to replace the role of a human expert in such situations. One of the main ideas for automating the design of heuristics requires the incorporation ofmachine learningmechanisms into algorithms to adaptively guide the search. Both learning and adaptation processes can be realised on-line or off-line, and be based on constructive or perturbative heuristics. A hyper-heuristic usually aims at reducing the amount ofdomain knowledgein the search methodology. The resulting approach should be cheap and fast to implement, requiring less expertise in either the problem domain or heuristic methods, and (ideally) it would be robust enough to effectively handle a range of problem instances from a variety of domains. 
The goal is to raise the level of generality of decision support methodology perhaps at the expense of reduced - but still acceptable - solution quality when compared to tailor-made metaheuristic approaches.[7]In order to reduce the gap between tailor-made schemes and hyperheuristic-based strategies, parallel hyperheuristics have been proposed.[8] The term "hyperheuristics" was first coined in a 2000 publication by Cowling and Soubeiga, who used it to describe the idea of "heuristics to choose heuristics".[9]They used a "choice function" machine learning approach which trades off exploitation and exploration in choosing the next heuristic to use.[10]Subsequently, Cowling, Soubeiga, Kendall, Han, Ross and other authors investigated and extended this idea in areas such as evolutionary algorithms, and pathological low level heuristics. The first journal article to use the term appeared in 2003.[11]The origin of the idea (although not the term) can be traced back to the early 1960s[12][13]and was independently re-discovered and extended several times during the 1990s.[14][15][16]In the domain of Job Shop Scheduling, the pioneering work by Fisher and Thompson,[12][13]hypothesized and experimentally proved, using probabilistic learning, that combining scheduling rules (also known as priority or dispatching rules) was superior than any of the rules taken separately. Although the term was not then in use, this was the first "hyper-heuristic" paper. Another root inspiring the concept of hyper-heuristics comes from the field ofartificial intelligence. More specifically, it comes from work onautomated planningsystems, and its eventual focus towards the problem of learning control knowledge. The so-called COMPOSER system, developed by Gratch et al.,[17][18]was used for controlling satellite communication schedules involving a number of earth-orbiting satellites and three ground stations. The system can be characterized as ahill-climbingsearch in the space of possible control strategies. Hyper-heuristic approaches so far can be classified into two main categories. In the first class, captured by the phraseheuristics to choose heuristics,[9][10]the hyper-heuristic framework is provided with a set of pre-existing, generally widely known heuristics for solving the target problem. The task is to discover a good sequence of applications of these heuristics (also known as low-level heuristics within the domain of hyper-heuristics) for efficiently solving the problem. At each decision stage, a heuristic is selected through a component called selection mechanism and applied to an incumbent solution. The new solution produced from the application of the selected heuristic is accepted/rejected based on another component called acceptance criterion. Rejection of a solution means it is simply discarded while acceptance leads to the replacement of the incumbent solution. In the second class,heuristics to generate heuristics, the key idea is to "evolve new heuristics by making use of the components of known heuristics."[19]The process requires, as in the first class of hyper-heuristics, the selection of a suitable set of heuristics known to be useful in solving the target problem. However, instead of supplying these directly to the framework, the heuristics are first decomposed into their basic components. These two main broad types can be further categorised according to whether they are based on constructive or perturbative search. 
An additional orthogonal classification of hyper-heuristics considers the source providing feedback during the learning process, which can be either one instance (on-line learning) or many instances of the underlying problem studied (off-line learning). Discover good combinations of fixed, human-designed, well-known low-level heuristics. Generate new heuristic methods using basic components of previously existing heuristic methods. The learning takes place while the algorithm is solving an instance of a problem, therefore, task-dependent local properties can be used by the high-level strategy to determine the appropriate low-level heuristic to apply. Examples of on-line learning approaches within hyper-heuristics are: the use ofreinforcement learningfor heuristic selection, and generally the use ofmetaheuristicsas high-level search strategies over a search space of heuristics. The idea is to gather knowledge in form of rules or programs, from a set of training instances, which would hopefully generalise to the process of solving unseen instances. Examples of off-line learning approaches within hyper-heuristics are:learning classifier systems, case-base reasoning andgenetic programming. An extended classification ofselectionhyper-heuristics was provided in 2020,[20]to provide a more comprehensive categorisation of contemporary selection hyper-heuristic methods. Hyper-heuristics have been applied across many different problems. Indeed, one of the motivations of hyper-heuristics is to be able to operate across different problem types. The following list is a non-exhaustive selection of some of the problems and fields in which hyper-heuristics have been explored: Hyper-heuristics are not the only approach being investigated in the quest for more general and applicable search methodologies. Many researchers from computer science,artificial intelligenceandoperational researchhave already acknowledged the need for developing automated systems to replace the role of a human expert in the process of tuning and adapting search methodologies. The following list outlines some related areas of research: Nowadays, there are several frameworks available, in different programming languages. These include, but are not limited to: HyFlex ParHyFlex EvoHyp MatHH
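As a minimal illustration of the "heuristics to choose heuristics" framework described above, the following sketch repeatedly selects a low-level perturbative heuristic and accepts the candidate solution only if it improves the objective; the toy problem, the two low-level heuristics, and the random-selection and improvement-only acceptance rules are all hypothetical placeholders rather than components of any of the frameworks listed:

```python
import random

def hyper_heuristic(initial, objective, low_level_heuristics, iterations=1000, seed=0):
    """A toy selection hyper-heuristic: random heuristic selection with an
    improvement-only acceptance criterion (both components are placeholders)."""
    rng = random.Random(seed)
    incumbent, best_value = initial, objective(initial)
    for _ in range(iterations):
        heuristic = rng.choice(low_level_heuristics)  # selection mechanism
        candidate = heuristic(incumbent, rng)          # apply a low-level heuristic
        value = objective(candidate)
        if value < best_value:                         # acceptance criterion
            incumbent, best_value = candidate, value
    return incumbent, best_value

# Toy problem: minimize the sum of squares of a list of numbers.
def objective(x):
    return sum(v * v for v in x)

# Two simple perturbative low-level heuristics.
def nudge(x, rng):
    i = rng.randrange(len(x))
    y = list(x)
    y[i] += rng.uniform(-1.0, 1.0)
    return y

def halve(x, rng):
    i = rng.randrange(len(x))
    y = list(x)
    y[i] *= 0.5
    return y

solution, value = hyper_heuristic([5.0, -3.0, 8.0], objective, [nudge, halve])
print(value)
```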
https://en.wikipedia.org/wiki/Hyper-heuristic
The replication crisis, also known as the reproducibility or replicability crisis, refers to the growing number of published scientific results that other researchers have been unable to reproduce or verify. Because the reproducibility of empirical results is an essential part of thescientific method,[2]such failures undermine the credibility of theories that build on them and can call into question substantial parts of scientific knowledge. The replication crisis is frequently discussed in relation topsychologyandmedicine, where considerable efforts have been undertaken to reinvestigate classic results, to determine whether they are reliable, and if they turn out not to be, the reasons for the failure.[3][4]Data strongly indicates that othernaturalandsocial sciencesare affected as well.[5] The phrasereplication crisiswas coined in the early 2010s[6]as part of a growing awareness of the problem. Considerations of causes and remedies have given rise to a new scientific discipline,metascience,[7]which uses methods of empirical research to examine empirical research practice.[8] Considerations about reproducibility can be placed into two categories.Reproducibilityin the narrow sense refers to re-examining and validating the analysis of a given set of data.Replicationrefers to repeating an existing experiment or study using new, independent data with the goal of verifying the original conclusions. Replicationhas been called "the cornerstone of science".[9][10]Environmental health scientist Stefan Schmidt began a 2009 review with this description of replication: Replication is one of the central issues in any empirical science. To confirm results or hypotheses by a repetition procedure is at the basis of any scientific conception. A replication experiment to demonstrate that the same findings can be obtained in any other place by any other researcher is conceived as an operationalization of objectivity. It is the proof that the experiment reflects knowledge that can be separated from the specific circumstances (such as time, place, or persons) under which it was gained.[11] But there is limited consensus on how to definereplicationand potentially related concepts.[12][13][11]A number of types of replication have been identified: Reproducibilitycan also be distinguished fromreplication, as referring to reproducing the same results using the same data set. Reproducibility of this type is why many researchers make their data available to others for testing.[15] The replication crisis does not necessarily mean these fields are unscientific.[16][17][18]Rather, this process is part of the scientific process in which old ideas or those that cannot withstand careful scrutiny are pruned,[19][20]although this pruning process is not always effective.[21][22] A hypothesis is generally considered to be supported when the results match the predicted pattern and that pattern of results is found to bestatistically significant. Results are considered significant whenever the relative frequency of the observed pattern falls below an arbitrarily chosen value (i.e. thesignificance level) when assuming thenull hypothesisis true. This generally answers the question of how unlikely results would be if no difference existed at the level of thestatistical population. 
If the probability associated with thetest statisticexceeds the chosencritical value, the results are considered statistically significant.[23]The corresponding probability of exceeding the critical value is depicted asp< 0.05, wherep(typically referred to as the "p-value") is the probability level. This should result in 5% of hypotheses that are supported being false positives (an incorrect hypothesis being erroneously found correct), assuming the studies meet all of the statistical assumptions. Some fields use smaller p-values, such asp< 0.01 (1% chance of a false positive) orp< 0.001 (0.1% chance of a false positive). But a smaller chance of a false positive often requires greater sample sizes or a greater chance of afalse negative (a correct hypothesis being erroneously found incorrect). Althoughp-value testing is the most commonly used method, it is not the only method. Certain terms commonly used in discussions of the replication crisis have technically precise meanings, which are presented here.[1] In the most common case,null hypothesis testing, there are two hypotheses, anull hypothesisH0{\displaystyle H_{0}}and analternative hypothesisH1{\displaystyle H_{1}}. The null hypothesis is typically of the form "X and Y arestatistically independent". For example, the null hypothesis might be "taking drug X doesnotchange 1-year recovery rate from disease Y", and the alternative hypothesis is that it does change. As testing for full statistical independence is difficult, the full null hypothesis is often reduced to asimplifiednull hypothesis "the effect size is 0", where "effect size" is a real number that is 0 if thefullnull hypothesis is true, and the larger the effect size is, the more the null hypothesis is false.[24]For example, if X is binary, then the effect size might be defined as the change in the expectation of Y upon a change of X:(effect size)=E[Y|X=1]−E[Y|X=0]{\displaystyle ({\text{effect size}})=\mathbb {E} [Y|X=1]-\mathbb {E} [Y|X=0]}Note that the effect size as defined above might be zero even if X and Y are not independent, such as whenY∼N(0,1+X){\displaystyle Y\sim {\mathcal {N}}(0,1+X)}. Since different definitions of "effect size" capture different ways for X and Y to be dependent, there are many different definitions of effect size. In practice, effect sizes cannot be directly observed, but must be measured bystatistical estimators. For example, the above definition of effect size is often measured byCohen's destimator. The same effect size might have multiple estimators, as they have tradeoffs betweenefficiency,bias,variance, etc. This further increases the number of possible statistical quantities that can be computed on a single dataset. When an estimator for an effect size is used for statistical testing, it is called atest statistic. A null hypothesistestis a decision procedure which takes in some data, and outputs eitherH0{\displaystyle H_{0}}orH1{\displaystyle H_{1}}. If it outputsH1{\displaystyle H_{1}}, it is usually stated as "there is a statistically significant effect" or "the null hypothesis is rejected". Often, the statistical test is a (one-sided)threshold test, which is structured as follows: A two-sided threshold test is similar, but with two thresholds, such that it outputsH1{\displaystyle H_{1}}if eithert[D]<tthreshold−{\displaystyle t[D]<t_{\text{threshold}}^{-}}ort[D]>tthreshold+{\displaystyle t[D]>t_{\text{threshold}}^{+}} There are 4 possible outcomes of a null hypothesis test: false negative, true negative, false positive, true positive. 
A false negative means thatH0{\displaystyle H_{0}}is true, but the test outcome isH1{\displaystyle H_{1}}; a true negative means thatH0{\displaystyle H_{0}}is true, and the test outcome isH0{\displaystyle H_{0}}, etc. Significance level, false positive rate, or the alpha level, is the probability of finding the alternative to be true when the null hypothesis is true:(significance):=α:=Pr(findH1|H0){\displaystyle ({\text{significance}}):=\alpha :=Pr({\text{find }}H_{1}|H_{0})}For example, when the test is a one-sided threshold test, thenα=PrD∼H0(t[D]>tthreshold){\displaystyle \alpha =Pr_{D\sim H_{0}}(t[D]>t_{\text{threshold}})}whereD∼H0{\displaystyle D\sim H_{0}}means "the data is sampled fromH0{\displaystyle H_{0}}". Statistical power, true positive rate, is the probability of finding the alternative to be true when the alternative hypothesis is true:(power):=1−β:=Pr(findH1|H1){\displaystyle ({\text{power}}):=1-\beta :=Pr({\text{find }}H_{1}|H_{1})}whereβ{\displaystyle \beta }is also called the false negative rate. For example, when the test is a one-sided threshold test, then1−β=PrD∼H1(t[D]>tthreshold){\displaystyle 1-\beta =Pr_{D\sim H_{1}}(t[D]>t_{\text{threshold}})}. Given a statistical test and a data setD{\displaystyle D}, the correspondingp-valueis the probability that the test statistic is at least as extreme, conditional onH0{\displaystyle H_{0}}. For example, for a one-sided threshold test,p[D]=PrD′∼H0(t[D′]>t[D]){\displaystyle p[D]=Pr_{D'\sim H_{0}}(t[D']>t[D])}If the null hypothesis is true, then the p-value is distributed uniformly on[0,1]{\displaystyle [0,1]}. Otherwise, it is typically peaked atp=0.0{\displaystyle p=0.0}and roughly exponential, though the precise shape of the p-value distribution depends on what the alternative hypothesis is.[25][26] Since the p-value is distributed uniformly on[0,1]{\displaystyle [0,1]}conditional on the null hypothesis, one may construct a statistical test with any significance levelα{\displaystyle \alpha }by simply computing the p-value, then outputH1{\displaystyle H_{1}}ifp[D]<α{\displaystyle p[D]<\alpha }. This is usually stated as "the null hypothesis is rejected at significance levelα{\displaystyle \alpha }", or "H1(p<α){\displaystyle H_{1}\;(p<\alpha )}", such as "smoking is correlated with cancer (p < 0.001)". The beginning of the replication crisis can be traced to a number of events in the early 2010s. Philosopher of science and social epistemologist Felipe Romero identified four events that can be considered precursors to the ongoing crisis:[27] This series of events generated a great deal of skepticism about the validity of existing research in light of widespread methodological flaws and failures to replicate findings. This led prominent scholars to declare a "crisis of confidence" in psychology and other fields,[42]and the ensuing situation came to be known as the "replication crisis". Although the beginning of the replication crisis can be traced to the early 2010s, some authors point out that concerns about replicability and research practices in the social sciences had been expressed much earlier. 
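The definitions of significance level and power above can be made concrete with a small simulation (an illustrative sketch assuming SciPy; the effect size, sample size, and share of true effects are arbitrary choices): many two-sample studies are simulated, and the observed false positive rate, power, and share of "significant" findings that are actually false positives are reported.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_group = 5000, 20
effect_size = 0.5          # standardized effect when the alternative is true (arbitrary)
share_true_effects = 0.2   # fraction of studied hypotheses that are actually true (arbitrary)
alpha = 0.05

true_effect = rng.uniform(size=n_studies) < share_true_effects
significant = np.zeros(n_studies, dtype=bool)
for k in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    shift = effect_size if true_effect[k] else 0.0
    b = rng.normal(shift, 1.0, n_per_group)
    significant[k] = ttest_ind(a, b).pvalue < alpha

# False positive rate (should be near alpha), power, and the share of
# "significant" findings that correspond to no real effect.
print("false positive rate:", significant[~true_effect].mean())
print("power              :", significant[true_effect].mean())
print("false discoveries  :", (~true_effect[significant]).mean())
```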
Romero notes that authors voiced concerns about the lack of direct replications in psychological research in the late 1960s and early 1970s.[43][44]He also writes that certain studies in the 1990s were already reporting that journal editors and reviewers are generally biased against publishing replication studies.[45][46] In the social sciences, the blogData Colada(whose three authors coined the term "p-hacking" in a 2014 paper) has been credited with contributing to the start of the replication crisis.[47][48][49] University of Virginia professor and cognitive psychologistBarbara A. Spellmanhas written that many criticisms of research practices and concerns about replicability of research are not new.[50]She reports that between the late 1950s and the 1990s, scholars were already expressing concerns about a possible crisis of replication,[51]a suspiciously high rate of positive findings,[52]questionable research practices (QRPs),[53]the effects of publication bias,[54]issues with statistical power,[55][56]and bad standards of reporting.[51] Spellman also identifies reasons that the reiteration of these criticisms and concerns in recent years led to a full-blown crisis and challenges to the status quo. First, technological improvements facilitated conducting and disseminating replication studies, and analyzing large swaths of literature for systemic problems. Second, the research community's increasing size and diversity made the work of established members more easily scrutinized by other community members unfamiliar with them. According to Spellman, these factors, coupled with increasingly limited resources and misaligned incentives for doing scientific work, led to a crisis in psychology and other fields.[50] According toAndrew Gelman,[57]the works ofPaul Meehl,Jacob Cohen, andTverskyandKahnemanin the 1960s-70s were early warnings of replication crisis. In discussing the origins of the problem, Kahneman himself noted historical precedents insubliminal perceptionanddissonance reductionreplication failures.[58] It had been repeatedly pointed out since 1962[55]that most psychological studies have low power (true positive rate), but low power persisted for 50 years, indicating a structural and persistent problem in psychological research.[59][60] Several factors have combined to put psychology at the center of the conversation.[61][62]Some areas of psychology once considered solid, such associal primingandego depletion,[63]have come under increased scrutiny due to failed replications.[64]Much of the focus has been onsocial psychology,[65]although other areas of psychology such asclinical psychology,[66][67][68]developmental psychology,[69][70][71]andeducational researchhave also been implicated.[72][73][74][75][76] In August 2015, the first openempirical studyof reproducibility in psychology was published, calledThe Reproducibility Project: Psychology. Coordinated by psychologistBrian Nosek, researchers redid 100 studies in psychological science from three high-ranking psychology journals (Journal of Personality and Social Psychology,Journal of Experimental Psychology: Learning, Memory, and Cognition, andPsychological Science). 97 of the original studies had significant effects, but of those 97, only 36% of the replications yielded significant findings (pvalue below 0.05).[12]The meaneffect sizein the replications was approximately half the magnitude of the effects reported in the original studies. The same paper examined the reproducibility rates and effect sizes by journal and discipline. 
Study replication rates were 23% for theJournal of Personality and Social Psychology, 48% forJournal of Experimental Psychology: Learning, Memory, and Cognition, and 38% forPsychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).[77] Of the 64% of non-replications, only 25% disproved the original result (at statistical significance). The other 49% were inconclusive, neither supporting nor contradicting the original result. This is because many replications were underpowered, with a sample 2.5 times smaller than the original.[78] A study published in 2018 inNature Human Behaviourreplicated 21 social and behavioral science papers fromNatureandScience,finding that only about 62% could successfully reproduce original results.[79][80] Similarly, in a study conducted under the auspices of theCenter for Open Science, a team of 186 researchers from 60 different laboratories (representing 36 different nationalities from six different continents) conducted replications of 28 classic and contemporary findings in psychology.[81][82]The study's focus was not only whether the original papers' findings replicated but also the extent to which findings varied as a function of variations in samples and contexts. Overall, 50% of the 28 findings failed to replicate despite massive sample sizes. But if a finding replicated, then it replicated in most samples. If a finding was not replicated, then it failed to replicate with little variation across samples and contexts. This evidence is inconsistent with a proposed explanation that failures to replicate in psychology are likely due to changes in the sample between the original and replication study.[82] Results of a 2022 study suggest that many earlierbrain–phenotypestudies ("brain-wide association studies" (BWAS)) produced invalid conclusions as the replication of such studies requires samples from thousands of individuals due to smalleffect sizes.[83][84] Of 49 medical studies from 1990 to 2003 with more than 1000 citations, 92% found that the studied therapies were effective. Of these studies, 16% were contradicted by subsequent studies, 16% had found stronger effects than did subsequent studies, 44% were replicated, and 24% remained largely unchallenged.[85]A 2011 analysis by researchers with pharmaceutical companyBayerfound that, at most, a quarter of Bayer's in-house findings replicated the original results.[86]But the analysis of Bayer's results found that the results that did replicate could often be successfully used for clinical applications.[87] In a 2012 paper,C. 
Glenn Begley, a biotech consultant working atAmgen, and Lee Ellis, a medical researcher at the University of Texas, found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies.[38]In late 2021, The Reproducibility Project: Cancer Biology examined 53 top papers about cancer published between 2010 and 2012 and showed that among studies that provided sufficient information to be redone, the effect sizes were 85% smaller on average than the original findings.[88][89]A survey of cancer researchers found that half of them had been unable to reproduce a published result.[90]Another report estimated that almost half of randomized controlled trials contained flawed data (based on the analysis of anonymized individual participant data (IPD) from more than 150 trials).[91] In nutrition science, for most food ingredients, there were studies that found that the ingredient has an effect on cancer risk. Specifically, out of a random sample of 50 ingredients from a cookbook, 80% had articles reporting on their cancer risk. Statistical significance decreased for meta-analyses.[92] Economicshas lagged behind other social sciences and psychology in its attempts to assess replication rates and increase the number of studies that attempt replication.[13]A 2016 study in the journalSciencereplicated 18experimental studiespublished in two leading economics journals,The American Economic Reviewand theQuarterly Journal of Economics, between 2011 and 2014. It found that about 39% failed to reproduce the original results.[93][94][95]About 20% of studies published inThe American Economic Revieware contradicted by other studies despite relying on the same or similar data sets.[96]A study of empirical findings in theStrategic Management Journalfound that about 30% of 27 retested articles showed statistically insignificant results for previously significant findings, whereas about 4% showed statistically significant results for previously insignificant findings.[97] A 2019 study inScientific Dataestimated with 95% confidence that of 1,989 articles on water resources and management published in 2017, study results might be reproduced for only 0.6% to 6.8%, largely because the articles did not provide sufficient information to allow for replication.[98] A 2016 survey byNatureon 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% ofchemists, 77% ofbiologists, 69% ofphysicistsandengineers, 67% ofmedical researchers, 64% ofearthandenvironmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments. But fewer than 20% had been contacted by another researcher unable to reproduce their work. The survey found that fewer than 31% of researchers believe that failure to reproduce results means that the original result is probably wrong, although 52% agree that a significant replication crisis exists. Most researchers said they still trust the published literature.[5][99]In 2010, Fanelli (2010)[100]found that 91.5% of psychiatry/psychology studies confirmed the effects they were looking for, and concluded that the odds of this happening (a positive result) was around five times higher than in fields such asastronomyorgeosciences. Fanelli argued that this is because researchers in "softer" sciences have fewer constraints to their conscious and unconscious biases. 
Early analysis ofresult-blind peer review, which is less affected by publication bias, has estimated that 61% of result-blind studies in biomedicine and psychology have led tonull results, in contrast to an estimated 5% to 20% in earlier research.[101] In 2021, a study conducted byUniversity of California, San Diegofound that papers that cannot be replicated are more likely to be cited.[102]Nonreplicable publications are often cited more even after a replication study is published.[103] There are many proposed causes for the replication crisis. The replication crisis may be triggered by the "generation of new data and scientific publications at an unprecedented rate" that leads to "desperation to publish or perish" and failure to adhere to good scientific practice.[104] Predictions of an impending crisis in the quality-control mechanism of science can be traced back several decades.Derek de Solla Price—considered the father ofscientometrics, thequantitative studyof science—predicted in 1963 that science could reach "senility" as a result of its own exponential growth.[105]Some present-day literature seems to vindicate this "overflow" prophecy, lamenting the decay in both attention and quality.[106][107] HistorianPhilip Mirowskiargues that the decline of scientific quality can be connected to its commodification, especially spurred by major corporations' profit-driven decision to outsource their research to universities andcontract research organizations.[108] Socialsystems theory, as expounded in the work of German sociologistNiklas Luhmann, inspires a similar diagnosis. This theory holds that each system, such as economy, science, religion, and media, communicates using its own code:trueandfalsefor science,profitandlossfor the economy,newsandno-newsfor the media, and so on.[109][110]According to some sociologists, science'smediatization,[111]commodification,[108]and politicization,[111][112]as a result of the structural coupling among systems, have led to a confusion of the original system codes. A major cause of low reproducibility is thepublication biasstemming from the fact that statistically non-significant results and seemingly unoriginal replications are rarely published. Only a very small proportion of academic journals in psychology and neurosciences explicitly welcomed submissions of replication studies in their aim and scope or instructions to authors.[113][114]This does not encourage reporting on, or even attempts to perform, replication studies. Among 1,576 researchersNaturesurveyed in 2016, only a minority had ever attempted to publish a replication, and several respondents who had published failed replications noted that editors and reviewers demanded that they play down comparisons with the original studies.[5][99]An analysis of 4,270 empirical studies in 18 business journals from 1970 to 1991 reported that less than 10% of accounting, economics, and finance articles and 5% of management and marketing articles were replication studies.[93][115]Publication bias is augmented by thepressure to publishand the author's ownconfirmation bias,[a]and is an inherent hazard in the field, requiring a certain degree of skepticism on the part of readers.[41] Publication bias leads to what psychologistRobert Rosenthalcalls the "file drawer effect". The file drawer effect is the idea that as a consequence of the publication bias, a significant number of negative results[b]are not published. 
According to philosopher of science Felipe Romero, this tends to produce "misleading literature and biased meta-analytic studies",[27]and when publication bias is considered along with the fact that a majority of tested hypotheses might be falsea priori, it is plausible that a considerable proportion of research findings might be false positives, as shown by metascientist John Ioannidis.[1]In turn, a high proportion of false positives in the published literature can explain why many findings are nonreproducible.[27] Another publication bias is that studies that do not reject the null hypothesis are scrutinized asymmetrically. For example, they are likely to be rejected as being difficult to interpret or having a Type II error. Studies that do reject the null hypothesis are not likely to be rejected for those reasons.[117] In popular media, there is another element of publication bias: the desire to make research accessible to the public led to oversimplification and exaggeration of findings, creating unrealistic expectations and amplifying the impact of non-replications. In contrast, null results and failures to replicate tend to go unreported. This explanation may apply topower posing's replication crisis.[118] Even high-impact journals have a significant fraction of mathematical errors in their use of statistics. For example, 11% of statistical results published inNatureandBMJin 2001 are "incongruent", meaning that the reported p-value is mathematically different from what it should be if it were correctly calculated from the reported test statistic. These errors were likely from typesetting, rounding, and transcription errors.[119] Among 157 neuroscience papers published in five top-ranking journals that attempt to show that two experimental effects are different, 78 erroneously tested instead for whether one effect is significant while the other is not, and 79 correctly tested for whether their difference is significantly different from 0.[120] The consequences for replicability of the publication bias are exacerbated by academia's "publish or perish" culture. As explained by metascientist Daniele Fanelli, "publish or perish" culture is a sociological aspect of academia whereby scientists work in an environment with very high pressure to have their work published in recognized journals. This is the consequence of the academic work environment being hypercompetitive and of bibliometric parameters (e.g., number of publications) being increasingly used to evaluate scientific careers.[121]According to Fanelli, this pushes scientists to employ a number of strategies aimed at making results "publishable". In the context of publication bias, this can mean adopting behaviors aimed at making results positive or statistically significant, often at the expense of their validity (see QRPs, section 4.3).[121] According to Center for Open Science founder Brian Nosek and his colleagues, "publish or perish" culture created a situation whereby the goals and values of single scientists (e.g., publishability) are not aligned with the general goals of science (e.g., pursuing scientific truth). This is detrimental to the validity of published findings.[122] Philosopher Brian D. Earp and psychologist Jim A. C. Everett argue that, although replication is in the best interests of academics and researchers as a group, features of academic psychological culture discourage replication by individual researchers. 
They argue that performing replications can be time-consuming, and take away resources from projects that reflect the researcher's original thinking. They are harder to publish, largely because they are unoriginal, and even when they can be published they are unlikely to be viewed as major contributions to the field. Replications "bring less recognition and reward, including grant money, to their authors".[123] In his 1971 bookScientific Knowledge and Its Social Problems, philosopher and historian of scienceJerome R. Ravetzpredicted that science—in its progression from "little" science composed of isolated communities of researchers to "big" science or "techno-science"—would suffer major problems in its internal system of quality control. He recognized that the incentive structure for modern scientists could become dysfunctional, creatingperverse incentivesto publish any findings, however dubious. According to Ravetz, quality in science is maintained only when there is a community of scholars, linked by a set of shared norms and standards, who are willing and able to hold each other accountable. Certain publishing practices also make it difficult to conduct replications and to monitor the severity of the reproducibility crisis, for articles often come with insufficient descriptions for other scholars to reproduce the study. The Reproducibility Project: Cancer Biology showed that of 193 experiments from 53 top papers about cancer published between 2010 and 2012, only 50 experiments from 23 papers have authors who provided enough information for researchers to redo the studies, sometimes with modifications. None of the 193 papers examined had its experimental protocols fully described and replicating 70% of experiments required asking for key reagents.[88][89]The aforementioned study of empirical findings in theStrategic Management Journalfound that 70% of 88 articles could not be replicated due to a lack of sufficient information for data or procedures.[93][97]Inwater resourcesandmanagement, most of 1,987 articles published in 2017 were not replicable because of a lack of available information shared online.[98]In studies ofevent-related potentials, only two-thirds the information needed to replicate a study were reported in a sample of 150 studies, highlighting that there are substantial gaps in reporting.[124] By theDuhem-Quine thesis, scientific results are interpreted by both a substantive theory and a theory of instruments. For example, astronomical observations depend both on the theory of astronomical objects and the theory of telescopes. A large amount of non-replicable research might accumulate if there is a bias of the following kind: faced with a null result, a scientist prefers to treat the data as saying the instrument is insufficient; faced with a non-null result, a scientist prefers to accept the instrument as good, and treat the data as saying something about the substantive theory.[125] Smaldino and McElreath[60]proposed a simple model for thecultural evolutionof scientific practice. Each lab randomly decides to produce novel research or replication research, at different fixed levels of false positive rate, true positive rate, replication rate, and productivity (its "traits"). A lab might use more "effort", making theROC curvemore convex but decreasing productivity. A lab accumulates a score over its lifetime that increases with publications and decreases when another lab fails to replicate its results. 
At regular intervals, a random lab "dies" and another "reproduces" a child lab with traits similar to its parent's. Labs with higher scores are more likely to reproduce. Under certain parameter settings, the population of labs converges to maximum productivity even at the price of very high false positive rates.

Questionable research practices (QRPs) are intentional behaviors that capitalize on the gray area of acceptable scientific behavior or exploit the researcher degrees of freedom (researcher DF), which can contribute to the irreproducibility of results by increasing the probability of false positive results.[126][127][41] Researcher DF are seen in hypothesis formulation, design of experiments, data collection and analysis, and reporting of research.[127] However, in many-analyst studies, in which several researchers or research teams analyze the same data, analysts obtain different and sometimes conflicting results, even without incentives to report statistically significant findings, across psychology, linguistics, and ecology.[128][129][130] This is because research design and data analysis entail numerous decisions that are not sufficiently constrained by a field's best practices and statistical methodologies. As a result, researcher DF can lead to situations where some failed replication attempts use a different, yet plausible, research design or statistical analysis; such studies do not necessarily undermine previous findings.[131] Multiverse analysis, a method that makes inferences based on all plausible data-processing pipelines, provides a solution to the problem of analytical flexibility.[132]

Instead, estimating many statistical models (known as data dredging[127][133][40][c]), selectively reporting only statistically significant findings,[126][127][133][40][d] and HARKing (hypothesizing after results are known) are examples of questionable research practices.[127][133][40][e] In medicine, irreproducible studies have six features in common: investigators not being blinded to the experimental versus the control arms; failure to repeat experiments; lack of positive and negative controls; failing to report all the data; inappropriate use of statistical tests; and use of reagents that were not appropriately validated.[135]

QRPs do not include more explicit violations of scientific integrity, such as data falsification.[126][127] Fraudulent research does occur, as in the cases of scientific fraud by social psychologist Diederik Stapel,[136][14] cognitive psychologist Marc Hauser, and social psychologist Lawrence Sanna,[14] but it appears to be uncommon.[14]

According to IU professor Ernest O'Boyle and psychologist Martin Götz, around 50% of researchers surveyed across various studies admitted engaging in HARKing.[137] In a survey of 2,000 psychologists by behavioral scientist Leslie K. John and colleagues, around 94% of psychologists admitted having employed at least one QRP. More specifically, 63% admitted to failing to report all of a study's dependent measures, 28% to failing to report all of a study's conditions, and 46% to selectively reporting studies that produced the desired pattern of results. In addition, 56% admitted to having collected more data after having inspected the data already collected, and 16% to having stopped data collection because the desired result was already visible.[40] According to biotechnology researcher J.
Leslie Glick's estimate in 1992, 10% to 20% of research and development studies involved either QRPs or outright fraud.[138] The methodology used to estimate QRPs has been contested, and more recent studies have suggested lower prevalence rates on average.[139]

A 2009 meta-analysis found that 2% of scientists across fields admitted falsifying studies at least once and 14% admitted knowing someone who did. Such misconduct was, according to one study, reported more frequently by medical researchers than by others.[140]

According to Deakin University professor Tom Stanley and colleagues, one plausible reason studies fail to replicate is low statistical power. This happens for three reasons. First, a replication study with low power is unlikely to succeed since, by definition, it has a low probability of detecting a true effect. Second, if the original study has low power, it will yield biased effect size estimates; when conducting an a priori power analysis for the replication study, this results in underestimation of the required sample size. Third, if the original study has low power, the post-study odds of a statistically significant finding reflecting a true effect are quite low. It is therefore likely that a replication attempt of the original study would fail.[15]

Mathematically, the probability of replicating a previous publication that rejected a null hypothesis $H_0$ in favor of an alternative $H_1$ is

$$(\text{significance})\,\Pr(H_0 \mid \text{publication}) + (\text{power})\,\Pr(H_1 \mid \text{publication}) \le \text{power},$$

assuming the significance level is less than the power. Thus, low power implies a low probability of replication, regardless of how the previous publication was designed and regardless of which hypothesis is really true.[78]

Stanley and colleagues estimated the average statistical power of the psychological literature by analyzing data from 200 meta-analyses. They found that, on average, psychology studies have between 33.1% and 36.4% statistical power. These values are quite low compared to the 80% considered adequate statistical power for an experiment. Across the 200 meta-analyses, the median percentage of studies with adequate statistical power was between 7.7% and 9.1%, implying that a positive result would replicate with a probability of less than 10%, regardless of whether the positive result was a true positive or a false positive.[15]

The statistical power of neuroscience studies is quite low. The estimated statistical power of fMRI research is between .08 and .31,[141] and that of studies of event-related potentials was estimated as .72–.98 for large effect sizes, .35–.73 for medium effects, and .10–.18 for small effects.[124]

In a study published in Nature, psychologist Katherine Button and colleagues conducted a similar analysis of 49 meta-analyses in neuroscience, estimating a median statistical power of 21%.[142] Metascientist John Ioannidis and colleagues computed an estimate of average power for empirical economic research, finding a median power of 18% based on literature drawing upon 6,700 studies.[143] In light of these results, it is plausible that a major reason for widespread failures to replicate in several scientific fields is very low statistical power on average.

The same statistical test at the same significance level will have lower statistical power if the effect size is small under the alternative hypothesis.
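As an illustration of how strongly power depends on effect size and sample size, the following is a minimal sketch using the normal approximation to a two-sided two-sample test; the effect sizes and group sizes are hypothetical, chosen only to show values in the range reported above.

```python
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    effect_size: standardized mean difference (Cohen's d) under the alternative.
    n_per_group: sample size in each of the two groups.
    The small contribution from the opposite tail is ignored.
    """
    z_crit = norm.ppf(1 - alpha / 2)                 # critical value for two-sided alpha
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

# Hypothetical scenarios: small effects with modest samples give power
# far below the conventional 80% target.
for d, n in [(0.2, 30), (0.3, 50), (0.5, 50), (0.5, 200)]:
    print(f"d = {d}, n = {n} per group: power ~ {approx_power(d, n):.2f}")
```

With a small effect (d around 0.2-0.3) and a few dozen participants per group, the approximation lands in the 10-35% range, which is roughly where the empirical estimates quoted above fall; only large effects or much larger samples approach 80%.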
Complex heritable traits are typically correlated with a large number of genes, each of small effect size, so high power requires a large sample size. In particular, many results from the candidate gene literature suffered from small effect sizes and small sample sizes and would not replicate. More data from genome-wide association studies (GWAS) come close to solving this problem.[144][145] As a numeric example, most genes associated with schizophrenia risk have low effect sizes (genotypic relative risk, GRR). A statistical study with 1,000 cases and 1,000 controls has 0.03% power for a gene with GRR = 1.15, which is already large for schizophrenia. In contrast, the largest GWAS to date has roughly 100% power for it.[146]

Even when a study replicates, the replication typically has a smaller effect size. Underpowered studies have a large effect size bias.[147]

In studies that statistically estimate a regression coefficient, such as the $k$ in $Y = kX + b$, noise tends to cause the coefficient to be underestimated when the dataset is large, but overestimated when the dataset is small.[148]

Meta-analyses have their own methodological problems and disputes, which can lead to rejection of the meta-analytic method by researchers whose theory is challenged by a meta-analysis.[117]

Rosenthal proposed the "fail-safe number" (FSN)[54] to gauge the impact of the publication bias against null results. It is defined as follows: supposing the null hypothesis is true, how many unpublished null results would be required to make the current result indistinguishable from the null hypothesis? Rosenthal's point is that certain effect sizes are large enough that, even under a total publication bias against null results (the "file drawer problem"), the number of unpublished null results needed to swamp out the effect would be implausibly large. Thus, the effect size would remain statistically significant even after accounting for unpublished null results.

One objection to the FSN is that it is calculated as if the unpublished results were unbiased samples from the null hypothesis. But if the file drawer problem is real, then unpublished results would have effect sizes concentrated around zero. Thus, fewer unpublished null results would be needed to swamp out the effect size, and so the FSN is an overestimate.[117]

Another problem with meta-analysis is that bad studies are "infectious" in the sense that one bad study might cause the entire meta-analysis to overestimate statistical significance.[78]

Various statistical methods can be applied to make a p-value appear smaller than it really is. This need not be malicious, as moderately flexible data analysis, routine in research, can increase the false-positive rate to above 60%.[41]

For example, if one collects some data, applies several different significance tests to it, and publishes only the one that happens to have a p-value less than 0.05, then the total p-value for "at least one significance test reaches p < 0.05" can be much larger than 0.05, because even if the null hypothesis were true, the probability that one out of many significance tests is extreme is not itself extreme.

Typically, a statistical study has multiple steps, with several choices at each step, such as during data collection, outlier rejection, choice of test statistic, choice of a one-tailed or two-tailed test, and so on. These choices in the "garden of forking paths" multiply, creating many "researcher degrees of freedom".
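A minimal simulation of this effect, under the simplifying (hypothetical) assumption that a researcher tries five independent outcome measures on data with no true effect and reports only the smallest p-value:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 2000   # simulated "studies", all with no true effect
n_outcomes = 5         # analysis choices tried per study (hypothetical)
n_per_group = 30

false_positives = 0
for _ in range(n_experiments):
    # Independent null outcomes: the two groups are truly identical.
    p_values = [
        ttest_ind(rng.normal(size=n_per_group), rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    # "Researcher degrees of freedom": report only the best-looking result.
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Nominal alpha: 0.05; observed false-positive rate: {false_positives / n_experiments:.2f}")
# With 5 independent tries, the rate approaches 1 - 0.95**5, roughly 0.23.
```

In practice, different analysis paths applied to the same data are correlated, so the inflation is usually smaller than this independent-outcomes bound, but it can still be substantial.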
The effect is similar to the file-drawer problem, as the paths not taken are not published.[149]

Consider a simple illustration. Suppose the null hypothesis is true and there are 20 possible significance tests to apply to the dataset. Also suppose the outcomes of the significance tests are independent. By definition of "significance", each test has probability 0.05 of passing at significance level 0.05. The probability that at least 1 out of 20 is significant is, by the assumption of independence, $1 - (1 - 0.05)^{20} = 0.64$.[150]

Another possibility is the multiple comparisons problem. In 2009, it was twice noted that fMRI studies had a suspicious number of positive results with large effect sizes, more than would be expected given that the studies have low power (one example[151] had only 13 subjects). These critiques pointed out that over half of the studies would test for correlation between a phenomenon and individual fMRI voxels, and report only on voxels exceeding chosen thresholds.[152]

Optional stopping is a practice where one collects data until some stopping criterion is reached. Though a valid procedure, it is easily misused. The problem is that the p-value of an optionally stopped statistical test is larger than it seems. Intuitively, this is because the p-value is supposed to be the sum of all events at least as rare as what is observed. With optional stopping, there are even rarer events that are difficult to account for, i.e. not triggering the optional stopping rule and collecting even more data before stopping. Neglecting these events leads to a p-value that is too low. In fact, if the null hypothesis is true, any significance level can be reached if one is allowed to keep collecting data and stop when the desired p-value (calculated as if one had always been planning to collect exactly this much data) is obtained.[153] For a concrete example of testing for a fair coin, see p-value#optional stopping.

More succinctly, the proper calculation of a p-value requires accounting for counterfactuals, that is, what the experimenter could have done in reaction to data that might have been. Accounting for what might have been is hard, even for honest researchers.[153] One benefit of preregistration is to account for all counterfactuals, allowing the p-value to be calculated correctly.[154]

The problem of early stopping is not limited to researcher misconduct. There is often pressure to stop early if the cost of collecting data is high. Some animal ethics boards even mandate early stopping if the study obtains a significant result midway.[150]

Such practices are widespread in psychology. In a 2012 survey, 56% of psychologists admitted to early stopping, 46% to reporting only the analyses that "worked", and 38% to post hoc exclusion, that is, removing some data after analyses had already been run and then reanalyzing the remaining data (often on the premise of "outlier removal").[40]
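A small simulation of how unplanned optional stopping inflates the false-positive rate, under hypothetical settings (peek at the data after every 10 observations per group, stop as soon as p < 0.05, give up at 100 per group):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_simulations = 2000
batch, max_n = 10, 100   # hypothetical peeking schedule

significant = 0
for _ in range(n_simulations):
    a, b = [], []
    while len(a) < max_n:
        # The null is true: both groups come from the same distribution.
        a.extend(rng.normal(size=batch))
        b.extend(rng.normal(size=batch))
        if ttest_ind(a, b).pvalue < 0.05:   # peek and stop on "significance"
            significant += 1
            break

print(f"False-positive rate with optional stopping: {significant / n_simulations:.2f}")
# Well above the nominal 0.05, even though each individual test is valid.
```

Each individual t-test here is correct in isolation; it is the data-dependent decision to stop that makes the nominal 5% error rate misleading.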
As also reported by Stanley and colleagues, a further reason studies might fail to replicate is high heterogeneity of the to-be-replicated effects. In meta-analysis, "heterogeneity" refers to the variance in research findings that results from there being no single true effect size. Instead, findings in such cases are better seen as a distribution of true effects.[15] Statistical heterogeneity is calculated using the I-squared statistic,[155] defined as "the proportion (or percentage) of observed variation among reported effect sizes that cannot be explained by the calculated standard errors associated with these reported effect sizes".[15] This variation can be due to differences in experimental methods, populations, cohorts, and statistical methods between replication studies. Heterogeneity poses a challenge to studies attempting to replicate previously found effect sizes. When heterogeneity is high, subsequent replications have a high probability of finding an effect size radically different from that of the original study.[f]

Importantly, significant levels of heterogeneity are also found in direct/exact replications of a study. Stanley and colleagues discuss this while reporting a study by quantitative behavioral scientist Richard Klein and colleagues, in which the authors attempted to replicate 15 psychological effects across 36 different sites in Europe and the U.S. In the study, Klein and colleagues found significant amounts of heterogeneity in 8 out of 16 effects (I-squared = 23% to 91%). Importantly, although the replication sites intentionally differed on a variety of characteristics, such differences could account for very little of the heterogeneity. According to Stanley and colleagues, this suggested that heterogeneity could have been a genuine characteristic of the phenomena being investigated. For instance, phenomena might be influenced by so-called "hidden moderators" – relevant factors that were previously not understood to be important in the production of a certain effect.

In their analysis of 200 meta-analyses of psychological effects, Stanley and colleagues found a median percentage of heterogeneity of I-squared = 74%. According to the authors, this level of heterogeneity can be considered "huge". It is three times larger than the random sampling variance of effect sizes measured in their study. Considered alongside sampling error, heterogeneity yields a standard deviation from one study to the next even larger than the median effect size of the 200 meta-analyses they investigated.[g] The authors conclude that if replication is defined as a subsequent study finding an effect size sufficiently similar to the original, replication success is not likely even if replications have very large sample sizes. Importantly, this occurs even if replications are direct or exact, since heterogeneity nonetheless remains relatively high in these cases.

Within economics, the replication crisis may also be exacerbated because econometric results are fragile:[156] using different but plausible estimation procedures or data preprocessing techniques can lead to conflicting results.[157][158][159]
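For reference, I-squared values like those quoted above are typically computed from reported effect sizes and standard errors via Cochran's Q statistic; a minimal sketch, with made-up numbers purely for illustration:

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences) and
# their standard errors from k replication studies.
effects = np.array([0.42, 0.10, 0.55, -0.05, 0.30, 0.61])
ses     = np.array([0.12, 0.15, 0.10, 0.18, 0.14, 0.11])

weights = 1 / ses**2                                   # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate
Q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q statistic
df = len(effects) - 1
I_squared = max(0.0, (Q - df) / Q)                     # share of variation beyond sampling error

print(f"Pooled effect: {pooled:.2f}, Q = {Q:.1f}, I^2 = {100 * I_squared:.0f}%")
```

An I-squared near zero means the spread of reported effects is about what sampling error alone would produce; values like the 74% median reported by Stanley and colleagues indicate that most of the observed spread reflects genuinely different underlying effects.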
New York University professor Jay Van Bavel and colleagues argue that a further reason findings are difficult to replicate is the sensitivity to context of certain psychological effects. On this view, failures to replicate might be explained by contextual differences between the original experiment and the replication, often called "hidden moderators".[160] Van Bavel and colleagues tested the influence of context sensitivity by reanalyzing the data of the widely cited Reproducibility Project carried out by the Open Science Collaboration.[12] They re-coded effects according to their sensitivity to contextual factors and then tested the relationship between context sensitivity and replication success in various regression models. Context sensitivity was found to correlate negatively with replication success: higher ratings of context sensitivity were associated with lower probabilities of replicating an effect.[h] Importantly, context sensitivity significantly correlated with replication success even when adjusting for other factors considered important for reproducing results (e.g., effect size and sample size of the original study, statistical power of the replication, and methodological similarity between original and replication).[i] In light of the results, the authors concluded that attempting a replication at a different time or place, or with a different sample, can significantly alter an experiment's results. Context sensitivity thus may be a reason certain effects fail to replicate in psychology.[160]

In the framework of Bayesian probability, by Bayes' theorem, rejecting the null hypothesis at a significance level of 5% does not mean that the posterior probability of the alternative hypothesis is 95%, and the posterior probability is also different from the probability of replication.[161][153] Consider a simplified case where there are only two hypotheses. Let the prior probability of the null hypothesis be $\Pr(H_0)$ and that of the alternative be $\Pr(H_1) = 1 - \Pr(H_0)$. For a given statistical study, let its false positive rate (significance level) be $\Pr(\text{find } H_1 \mid H_0)$ and its true positive rate (power) be $\Pr(\text{find } H_1 \mid H_1)$. For illustrative purposes, let the significance level be 0.05 and the power be 0.45 (underpowered).

Now, by Bayes' theorem, conditional on the statistical study finding $H_1$ to be true, the posterior probability of $H_1$ actually being true is not $1 - \Pr(\text{find } H_1 \mid H_0) = 0.95$, but

$$\Pr(H_1 \mid \text{find } H_1) = \frac{\Pr(\text{find } H_1 \mid H_1)\Pr(H_1)}{\Pr(\text{find } H_1 \mid H_0)\Pr(H_0) + \Pr(\text{find } H_1 \mid H_1)\Pr(H_1)},$$

and the probability of replicating the statistical study is

$$\Pr(\text{replication} \mid \text{find } H_1) = \Pr(\text{find } H_1 \mid H_1)\Pr(H_1 \mid \text{find } H_1) + \Pr(\text{find } H_1 \mid H_0)\Pr(H_0 \mid \text{find } H_1),$$

which is also different from $\Pr(H_1 \mid \text{find } H_1)$. In particular, for a fixed significance level, the probability of replication increases with the power and with the prior probability of $H_1$. If the prior probability of $H_1$ is small, then one would require high power for replication.
For example, if the prior probability of the null hypothesis is $\Pr(H_0) = 0.9$ and the study finds a positive result, then the posterior probability of $H_1$ is $\Pr(H_1 \mid \text{find } H_1) = 0.50$, and the replication probability is $\Pr(\text{replication} \mid \text{find } H_1) = 0.25$.

Some argue that null hypothesis testing is itself inappropriate, especially in "soft sciences" like social psychology.[162][163]

As repeatedly observed by statisticians,[164] in complex systems, such as social psychology, "the null hypothesis is always false", or "everything is correlated". If so, then failure to reject the null hypothesis does not show that the null hypothesis is true, but merely that the test was a false negative, typically due to low power.[165] Low power is especially prevalent in subject areas where effect sizes are small and data are expensive to acquire, such as social psychology.[162][166]

Furthermore, when the null hypothesis is rejected, this might not be evidence for the substantive alternative hypothesis. In soft sciences, many hypotheses can predict a correlation between two variables. Thus, evidence against the null hypothesis "there is no correlation" is no evidence for any one of the many alternative hypotheses that equally well predict "there is a correlation". Fisher developed null hypothesis significance testing (NHST) for agronomy, where rejecting the null hypothesis is usually good proof of the alternative hypothesis, since there are not many of them. Rejecting the hypothesis "fertilizer does not help" is evidence for "fertilizer helps". But in psychology, there are many alternative hypotheses for every null hypothesis.[166][167]

In particular, when statistical studies on extrasensory perception reject the null hypothesis at an extremely low p-value (as in the case of Daryl Bem), it does not imply the alternative hypothesis "ESP exists". Far more likely is that there was a small (non-ESP) signal in the experimental setup that was measured precisely.[168]

Paul Meehl noted that statistical hypothesis testing is used differently in "soft" psychology (personality, social, etc.) than in physics. In physics, a theory makes a quantitative prediction and is tested by checking whether the prediction falls within the statistically measured interval. In soft psychology, a theory makes a directional prediction and is tested by checking whether the null hypothesis is rejected in the right direction. Consequently, improved experimental technique makes theories more likely to be falsified in physics but less likely to be falsified in soft psychology, since the null hypothesis is always false because any two variables are correlated by a "crud factor" of about 0.30. The net effect is an accumulation of theories that remain unfalsified, but with no empirical evidence for preferring one over the others.[23][167]

According to philosopher Alexander Bird, a possible reason for the low rates of replicability in certain scientific fields is that a majority of tested hypotheses are false a priori.[169] On this view, low rates of replicability could be consistent with quality science. Relatedly, the expectation that most findings should replicate would be misguided and, according to Bird, a form of base rate fallacy. Bird's argument works as follows.
Assuming an ideal situation for a test of significance, whereby the probability of incorrectly rejecting the null hypothesis is 5% (i.e., a Type I error) and the probability of correctly rejecting the null hypothesis is 80% (i.e., the power), in a context where a high proportion of tested hypotheses are false, it is conceivable that the number of false positives would be high compared to that of true positives.[169] For example, in a situation where only 10% of tested hypotheses are actually true, one can calculate that as many as 36% of positive results will be false positives.[j]

The claim that the falsity of most tested hypotheses can explain low rates of replicability is even more relevant when considering that the average power of statistical tests in certain fields may be much lower than 80%. For example, the proportion of false positives increases to between 55.2% and 57.6% when calculated using the estimate of an average power between 33.1% and 36.4% for psychology studies, as provided by Stanley and colleagues in their analysis of 200 meta-analyses in the field.[15] A high proportion of false positives would then result in many research findings being non-replicable.

Bird notes that the claim that a majority of tested hypotheses are false a priori in certain scientific fields might be plausible given factors such as the complexity of the phenomena under investigation, the fact that theories are seldom undisputed, the "inferential distance" between theories and hypotheses, and the ease with which hypotheses can be generated. In this respect, the fields Bird takes as examples are clinical medicine, genetic and molecular epidemiology, and social psychology. This situation is radically different in fields where theories have an outstanding empirical basis and hypotheses can be easily derived from theories (e.g., experimental physics).[169]
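These proportions follow directly from Bayes' theorem. A minimal sketch reproducing the figures quoted above and in the earlier Bayesian example, using the prior probabilities, power, and significance level given in the text:

```python
def positive_result_rates(prior_true, power, alpha=0.05):
    """Given the prior probability that a tested hypothesis is true,
    return the share of significant results that are false positives,
    the posterior probability that the effect is real, and the
    probability that a straight replication would also be significant."""
    p_positive = alpha * (1 - prior_true) + power * prior_true
    posterior_true = power * prior_true / p_positive
    false_positive_share = 1 - posterior_true
    p_replication = power * posterior_true + alpha * false_positive_share
    return false_positive_share, posterior_true, p_replication

# Bird's illustration: 10% of hypotheses true, 80% power -> ~36% false positives.
# The earlier example: Pr(H0)=0.9, power 0.45 -> posterior 0.50, replication 0.25.
# Power near the ~33% estimated for psychology -> false-positive share near 58%.
for prior_true, power in [(0.10, 0.80), (0.10, 0.45), (0.10, 0.331)]:
    fp_share, posterior, p_rep = positive_result_rates(prior_true, power)
    print(f"prior={prior_true}, power={power}: false-positive share={fp_share:.2f}, "
          f"posterior={posterior:.2f}, replication prob={p_rep:.2f}")
```

The same function makes clear why low power and a low prior rate of true hypotheses reinforce each other: both shrink the share of significant results that reflect real effects, and with it the chance that a replication succeeds.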
When effects are wrongly stated as relevant in the literature, failure to detect this through replication will lead to the canonization of such false facts.[170]

A 2021 study found that papers in leading general-interest, psychology, and economics journals with findings that could not be replicated tend to be cited more over time than reproducible research papers, likely because these results are surprising or interesting. The trend is not affected by the publication of failed reproductions, after which only 12% of papers that cite the original research mention the failed replication.[171][172] Further, experts are able to predict which studies will be replicable, leading the authors of the 2021 study, Marta Serra-Garcia and Uri Gneezy, to conclude that experts apply lower standards to interesting results when deciding whether to publish them.[172]

Concerns have been expressed within the scientific community that the general public may consider science less credible due to failed replications.[173] Research supporting this concern is sparse, but a nationally representative survey in Germany showed that more than 75% of Germans have not heard of replication failures in science.[174] The study also found that most Germans have positive perceptions of replication efforts: only 18% think that non-replicability shows that science cannot be trusted, while 65% think that replication research shows that science applies quality control, and 80% agree that errors and corrections are part of science.[174]

With the replication crisis in psychology earning attention, Princeton University psychologist Susan Fiske drew controversy for speaking out against critics of psychology for what she called bullying and undermining the science.[175][176][177][178] She called these unidentified "adversaries" names such as "methodological terrorist" and "self-appointed data police", saying that criticism of psychology should be expressed only in private or by contacting the journals.[175] Columbia University statistician and political scientist Andrew Gelman responded to Fiske, saying that she had found herself willing to tolerate the "dead paradigm" of faulty statistics and had refused to retract publications even when errors were pointed out.[175] He added that her tenure as editor had been abysmal and that a number of published papers she edited were found to be based on extremely weak statistics; one of Fiske's own published papers had a major statistical error and "impossible" conclusions.[175]

Some researchers in psychology indicate that the replication crisis is a foundation for a "credibility revolution", in which changes in the standards by which psychological science is evaluated may include emphasizing transparency and openness, preregistering research projects, and replicating research with higher standards of evidence to improve the strength of scientific claims.[179] Such changes may diminish the productivity of individual researchers, but this effect could be avoided by data sharing and greater collaboration.[179] A credibility revolution could be good for the research environment.[180]

Focus on the replication crisis has led to renewed efforts in psychology to retest important findings.[41][181] A 2013 special edition of the journal Social Psychology focused on replication studies.[13]

Standardization, as well as (required) transparency, of the statistical and experimental methods used has been proposed.[182] Careful documentation of the experimental set-up is considered crucial for the replicability of experiments, yet relevant variables, such as animals' diets in animal studies, may go undocumented and unstandardized.[183]

A 2016 article by John Ioannidis elaborated on "Why Most Clinical Research Is Not Useful".[184] Ioannidis describes what he views as some of the problems and calls for reform, characterizing certain points for medical research to be useful again; one example he gives is the need for medicine to be patient-centered (e.g.
in the form of thePatient-Centered Outcomes Research Institute) instead of the current practice to mainly take care of "the needs of physicians, investigators, or sponsors". Metascience is the use ofscientific methodologyto study science itself. It seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it usesresearch methodsto study howresearchis done and where improvements can be made. Metascience is concerned with all fields of research and has been called "a bird's eye view of science."[185]In Ioannidis's words, "Science is the best thing that has happened to human beings ... but we can do it better."[186] Meta-research continues to be conducted to identify the roots of the crisis and to address them. Methods of addressing the crisis includepre-registrationof scientific studies andclinical trialsas well as the founding of organizations such asCONSORTand theEQUATOR Networkthat issue guidelines for methodology and reporting. Efforts continue to reform the system of academic incentives, improve thepeer reviewprocess, reduce themisuse of statistics, combat bias in scientific literature, and increase the overall quality and efficiency of the scientific process. Some authors have argued that the insufficient communication of experimental methods is a major contributor to the reproducibility crisis and that better reporting of experimental design and statistical analyses would improve the situation. These authors tend to plead for both a broad cultural change in the scientific community of how statistics are considered and a more coercive push from scientific journals and funding bodies.[187]But concerns have been raised about the potential for standards for transparency and replication to be misapplied to qualitative as well as quantitative studies.[188] Business and management journals that have introduced editorial policies on data accessibility, replication, and transparency include theStrategic Management Journal, theJournal of International Business Studies, and theManagement and Organization Review.[93] In response to concerns in psychology about publication bias anddata dredging, more than 140 psychology journals have adopted result-blind peer review. In this approach, studies are accepted not on the basis of their findings and after the studies are completed, but before they are conducted and on the basis of themethodological rigorof their experimental designs, and the theoretical justifications for their statistical analysis techniques before data collection or analysis is done.[189]Early analysis of this procedure has estimated that 61% of result-blind studies have led tonull results, in contrast to an estimated 5% to 20% in earlier research.[101]In addition, large-scale collaborations between researchers working in multiple labs in different countries that regularly make their data openly available for different researchers to assess have become much more common in psychology.[190] Scientific publishing has begun usingpre-registration reportsto address the replication crisis.[191][192]The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through peer-review, publication of the findings is provisionally guaranteed, based on whether the authors follow the proposed protocol. 
One goal of registered reports is to circumvent thepublication biastoward significant findings that can lead to implementation of questionable research practices. Another is to encourage publication of studies with rigorous methods. The journalPsychological Sciencehas encouraged thepreregistrationof studies and the reporting of effect sizes and confidence intervals.[193]The editor in chief also noted that the editorial staff will be asking for replication of studies with surprising findings from examinations using small sample sizes before allowing the manuscripts to be published. It has been suggested that "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed.[171]Categorizations and ratings of reproducibility at the study or results level, as well as addition of links to and rating of third-party confirmations, could be conducted by the peer-reviewers, the scientific journal, or by readers in combination with novel digital platforms or tools. Many publications require ap-valueofp< 0.05 to claimstatistical significance. The paper "Redefine statistical significance",[194]signed by a large number of scientists and mathematicians, proposes that in "fields where the threshold for defining statistical significance for new discoveries isp< 0.05, we propose a change top< 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields." Their rationale is that "a leading cause of non-reproducibility (is that the) statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating 'statistically significant' findings withp< 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems."[194] This call was subsequently criticised by another large group, who argued that "redefining" the threshold would not fix current problems, would lead to some new ones, and that in the end, all thresholds needed to be justified case-by-case instead of following general conventions.[195]A 2022 followup study examined these competing recommendations' practical impact. Despite high citation rates of both proposals, researchers found limited implementation of either the p < 0.005 threshold or the case-by-case justification approach in practice. This revealed what the authors called a "vicious cycle", in which scientists reject recommendations because they are not standard practice, while the recommendations fail to become standard practice because few scientists adopt them.[196] Although statisticians are unanimous that the use of "p< 0.05" as a standard for significance provides weaker evidence than is generally appreciated, there is a lack of unanimity about what should be done about it. Some have advocated thatBayesian methodsshould replacep-values. This has not happened on a wide scale, partly because it is complicated and partly because many users distrust the specification of prior distributions in the absence of hard data. 
A simplified version of the Bayesian argument, based on testing a point null hypothesis, was suggested by pharmacologist David Colquhoun.[197][198] The logical problems of inductive inference were discussed in "The Problem with p-values" (2016).[199]

The hazards of reliance on p-values arise partly because even an observation of p = 0.001 is not necessarily strong evidence against the null hypothesis.[198] Despite the fact that the likelihood ratio in favor of the alternative hypothesis over the null is close to 100, if the hypothesis were implausible, with a prior probability of a real effect of 0.1, even an observation of p = 0.001 would carry a false positive risk of 8 percent. It would still fail to reach the 5 percent level.

It has been recommended that the terms "significant" and "non-significant" not be used.[198] p-values and confidence intervals should still be specified, but they should be accompanied by an indication of the false positive risk. It has been suggested that the best way to do this is to calculate the prior probability that one would need to believe in order to achieve a false positive risk of a certain level, such as 5%. The calculations can be done with various computer software.[198][200] This reverse Bayesian approach, which physicist Robert Matthews suggested in 2001,[201] is one way to avoid the problem that the prior probability is rarely known.

To improve the quality of replications, larger sample sizes than those used in the original study are often needed.[202] Larger sample sizes are needed because estimates of effect sizes in published work are often exaggerated due to publication bias and the large sampling variability associated with small sample sizes in an original study.[203][204][205] Further, using significance thresholds usually leads to inflated effects, because particularly with small sample sizes, only the largest effects will become significant.[163]

One common statistical problem is overfitting, that is, fitting a regression model with a large number of variables to a small number of data points. For example, a typical fMRI study of emotion, personality, and social cognition has fewer than 100 subjects, but each subject has 10,000 voxels. Such a study might fit a sparse linear regression model that uses the voxels to predict a variable of interest, such as self-reported stress, and then report the p-value of the model on the same data it was fitted to. The standard approach in statistics, in which data are split into a training and a validation set, is resisted because test subjects are expensive to acquire.[152][206]

One possible solution is cross-validation, which allows model validation while also allowing the whole dataset to be used for model fitting.[207]

In July 2016, the Netherlands Organisation for Scientific Research made €3 million available for replication studies. The funding is for replication based on the reanalysis of existing data and replication by collecting and analysing new data. Funding is available in the areas of social sciences, health research, and healthcare innovation.[208]

In 2013, the Laura and John Arnold Foundation funded the launch of The Center for Open Science with a $5.25 million grant.
By 2017, it provided an additional $10 million in funding.[209] It also funded the launch of the Meta-Research Innovation Center at Stanford, run by Ioannidis and medical scientist Steven Goodman, to study ways to improve scientific research.[209] It also provided funding for the AllTrials initiative led in part by medical scientist Ben Goldacre.[209]

Based on coursework in experimental methods at MIT, Stanford, and the University of Washington, it has been suggested that methods courses in psychology and other fields should emphasize replication attempts rather than original studies.[210][211][212] Such an approach would help students learn scientific methodology and would provide numerous independent replications that test the replicability of meaningful scientific findings. Some have recommended that graduate students should be required to publish a high-quality replication attempt on a topic related to their doctoral research prior to graduation.[213]

There has been concern that the number of replication attempts has been growing,[214][215][216] which may lead to research waste.[217] In turn, this has led to a need to systematically track replication attempts, and several databases have been created as a result (e.g.[218][219]). These include a Replication Database covering psychology and speech-language therapy, among other disciplines, intended to promote theory-driven research and to optimize the use of academic and institutional resources, while promoting trust in science.[220]

Some institutions require undergraduate students to submit a final-year thesis that consists of an original piece of research. Daniel Quintana, a psychologist at the University of Oslo in Norway, has recommended that students should be encouraged to perform replication studies in thesis projects, as well as being taught about open science.[221]

Researchers have demonstrated a way of semi-automated testing for reproducibility: statements about experimental results were extracted from (as of 2022) non-semantic gene expression cancer research papers and subsequently reproduced via the robot scientist "Eve".[222][223] Problems of this approach include that it may not be feasible for many areas of research and that sufficient experimental data may not be extractable from some or many papers, even when available.

Psychologist Daniel Kahneman argued that, in psychology, the original authors should be involved in the replication effort because the published methods are often too vague.[224][225] Others, such as psychologist Andrew Wilson, disagree, arguing that the original authors should instead write down their methods in detail.[224] An investigation of replication rates in psychology in 2012 indicated higher replication success rates when there was author overlap with the original authors of a study[226] (91.7% successful replication rates in studies with author overlap compared to 64.6% without author overlap).
The replication crisis has led to the formation and development of various large-scale and collaborative communities to pool their resources to address a single question across cultures, countries and disciplines.[227]The focus is on replication, to ensure that the effect generalizes beyond a specific culture and investigate whether the effect is replicable and genuine.[228]This allows interdisciplinary internal reviews, multiple perspectives, uniform protocols across labs, and recruiting larger and more diverse samples.[228]Researchers can collaborate by coordinating data collection or fund data collection by researchers who may not have access to the funds, allowing larger sample sizes and increasing the robustness of the conclusions. PsychologistMarcus R. Munafòand EpidemiologistGeorge Davey Smithargue, in a piece published byNature, that research should emphasizetriangulation, not just replication, to protect against flawed ideas. They claim that, replication alone will get us only so far (and) might actually make matters worse ... [Triangulation] is the strategic use of multiple approaches to address one question. Each approach has its own unrelated assumptions, strengths and weaknesses. Results that agree across different methodologies are less likely to beartefacts. ... Maybe one reason replication has captured so much interest is the often-repeated idea thatfalsificationis at the heart of the scientific enterprise. This idea was popularized byKarl Popper's 1950s maxim that theories can never be proved, only falsified. Yet an overemphasis on repeating experiments could provide an unfounded sense of certainty about findings that rely on a single approach. ... philosophers of science have moved on since Popper. Better descriptions of how scientists actually work include what epistemologistPeter Liptoncalled in 1991 "inference to the best explanation".[229] The dominant scientific and statistical model of causation is the linear model.[230]The linear model assumes that mental variables are stable properties which are independent of each other. In other words, these variables are not expected to influence each other. Instead, the model assumes that the variables will have an independent, linear effect on observable outcomes.[230] Social scientists Sebastian Wallot and Damian Kelty-Stephen argue that the linear model is not always appropriate.[230]An alternative is the complex system model which assumes that mental variables are interdependent. These variables are not assumed to be stable, rather they will interact and adapt to each specific context.[230]They argue that the complex system model is often more appropriate in psychology, and that the use of the linear model when the complex system model is more appropriate will result in failed replications.[230] ...psychology may be hoping for replications in the very measurements and under the very conditions where a growing body of psychological evidence explicitly discourages predicting replication. Failures to replicate may be plainly baked into the potentially incomplete, but broadly sweeping failure of human behavior to conform to the standard of independen[ce] ...[230] Replication is fundamental for scientific progress to confirm original findings. However, replication alone is not sufficient to resolve the replication crisis. Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. 
This approach therefore involves pruning existing theories, comparing all the alternative theories, and making replication efforts more generative and more engaged in theory-building.[231][232] Replication alone, however, is not enough: it is also important to assess the extent to which results generalise across geographical, historical and social contexts. Such assessments matter for several scientific fields, and especially for practitioners and policy makers who rely on these analyses to guide important strategic decisions. Reproducible and replicable findings were the best predictor of generalisability beyond historical and geographical contexts, indicating that, for the social sciences, results from a certain time period and place can meaningfully indicate what is universally present in individuals.[233] Open data, open source software and open source hardware are all critical to enabling reproducibility in the sense of validation of the original data analysis. The use of proprietary software, the lack of publication of analysis software and the lack of open data prevent the replication of studies. Unless the software used in research is open source, reproducing results with different software and hardware configurations is impossible.[234] CERN has both Open Data and CERN Analysis Preservation projects for storing data, all relevant information, and all software and tools needed to preserve an analysis at the large experiments of the LHC. Aside from all software and data, preserved analysis assets include metadata that enable understanding of the analysis workflow, related software, systematic uncertainties, statistics procedures and meaningful ways to search for the analysis, as well as references to publications and to backup material.[235] CERN software is open source and available for use outside of particle physics, and there is some guidance provided to other fields on the broad approaches and strategies used for open science in contemporary particle physics.[236] Online repositories where data, protocols, and findings can be stored and evaluated by the public seek to improve the integrity and reproducibility of research. Examples of such repositories include the Open Science Framework, Registry of Research Data Repositories, and Psychfiledrawer.org. Sites like the Open Science Framework offer badges for using open science practices in an effort to incentivize scientists. However, there have been concerns that those who are most likely to provide their data and code for analyses are the researchers that are likely the most sophisticated.[237] Ioannidis suggested that "the paradox may arise that the most meticulous and sophisticated and method-savvy and careful researchers may become more susceptible to criticism and reputation attacks by reanalyzers who hunt for errors, no matter how negligible these errors are".[237]
https://en.wikipedia.org/wiki/Replication_crisis
Limited-memory BFGS(L-BFGSorLM-BFGS) is anoptimizationalgorithmin the family ofquasi-Newton methodsthat approximates theBroyden–Fletcher–Goldfarb–Shanno algorithm(BFGS) using a limited amount ofcomputer memory.[1]It is a popular algorithm for parameter estimation inmachine learning.[2][3]The algorithm's target problem is to minimizef(x){\displaystyle f(\mathbf {x} )}over unconstrained values of the real-vectorx{\displaystyle \mathbf {x} }wheref{\displaystyle f}is a differentiable scalar function. Like the original BFGS, L-BFGS uses an estimate of the inverseHessian matrixto steer its search through variable space, but where BFGS stores a densen×n{\displaystyle n\times n}approximation to the inverse Hessian (nbeing the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse HessianHk, L-BFGS maintains a history of the pastmupdates of the positionxand gradient ∇f(x), where generally the history sizemcan be small (oftenm<10{\displaystyle m<10}). These updates are used to implicitly do operations requiring theHk-vector product. The algorithm starts with an initial estimate of the optimal value,x0{\displaystyle \mathbf {x} _{0}}, and proceeds iteratively to refine that estimate with a sequence of better estimatesx1,x2,…{\displaystyle \mathbf {x} _{1},\mathbf {x} _{2},\ldots }. The derivatives of the functiongk:=∇f(xk){\displaystyle g_{k}:=\nabla f(\mathbf {x} _{k})}are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) off(x){\displaystyle f(\mathbf {x} )}. L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplicationdk=−Hkgk{\displaystyle d_{k}=-H_{k}g_{k}}is carried out, wheredk{\displaystyle d_{k}}is the approximate Newton's direction,gk{\displaystyle g_{k}}is the current gradient, andHk{\displaystyle H_{k}}is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion."[4][5] We take as givenxk{\displaystyle x_{k}}, the position at thek-th iteration, andgk≡∇f(xk){\displaystyle g_{k}\equiv \nabla f(x_{k})}wheref{\displaystyle f}is the function being minimized, and all vectors are column vectors. We also assume that we have stored the lastmupdates of the form We defineρk=1yk⊤sk{\displaystyle \rho _{k}={\frac {1}{y_{k}^{\top }s_{k}}}}, andHk0{\displaystyle H_{k}^{0}}will be the 'initial' approximate of the inverse Hessian that our estimate at iterationkbegins with. The algorithm is based on the BFGS recursion for the inverse Hessian as For a fixedkwe define a sequence of vectorsqk−m,…,qk{\displaystyle q_{k-m},\ldots ,q_{k}}asqk:=gk{\displaystyle q_{k}:=g_{k}}andqi:=(I−ρiyisi⊤)qi+1{\displaystyle q_{i}:=(I-\rho _{i}y_{i}s_{i}^{\top })q_{i+1}}. Then a recursive algorithm for calculatingqi{\displaystyle q_{i}}fromqi+1{\displaystyle q_{i+1}}is to defineαi:=ρisi⊤qi+1{\displaystyle \alpha _{i}:=\rho _{i}s_{i}^{\top }q_{i+1}}andqi=qi+1−αiyi{\displaystyle q_{i}=q_{i+1}-\alpha _{i}y_{i}}. We also define another sequence of vectorszk−m,…,zk{\displaystyle z_{k-m},\ldots ,z_{k}}aszi:=Hiqi{\displaystyle z_{i}:=H_{i}q_{i}}. 
There is another recursive algorithm for calculating these vectors which is to definezk−m=Hk0qk−m{\displaystyle z_{k-m}=H_{k}^{0}q_{k-m}}and then recursively defineβi:=ρiyi⊤zi{\displaystyle \beta _{i}:=\rho _{i}y_{i}^{\top }z_{i}}andzi+1=zi+(αi−βi)si{\displaystyle z_{i+1}=z_{i}+(\alpha _{i}-\beta _{i})s_{i}}. The value ofzk{\displaystyle z_{k}}is then our ascent direction. Thus we can compute the descent direction as follows: This formulation gives the search direction for the minimization problem, i.e.,z=−Hkgk{\displaystyle z=-H_{k}g_{k}}. For maximization problems, one should thus take-zinstead. Note that the initial approximate inverse HessianHk0{\displaystyle H_{k}^{0}}is chosen as a diagonal matrix or even a multiple of the identity matrix since this is numerically efficient. The scaling of the initial matrixγk{\displaystyle \gamma _{k}}ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. AWolfe line searchis used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijobacktracking line search, but cannot guarantee that the curvature conditionyk⊤sk>0{\displaystyle y_{k}^{\top }s_{k}>0}will be satisfied by the chosen step since a step length greater than1{\displaystyle 1}may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update whenyk⊤sk{\displaystyle y_{k}^{\top }s_{k}}is negative or too close to zero, but this approach is not generally recommended since the updates may be skipped too often to allow the Hessian approximationHk{\displaystyle H_{k}}to capture important curvature information. Some solvers employ so called damped (L)BFGS update which modifies quantitiessk{\displaystyle s_{k}}andyk{\displaystyle y_{k}}in order to satisfy the curvature condition. The two-loop recursion formula is widely used by unconstrained optimizers due to its efficiency in multiplying by the inverse Hessian. However, it does not allow for the explicit formation of either the direct or inverse Hessian and is incompatible with non-box constraints. An alternative approach is thecompact representation, which involves a low-rank representation for the direct and/or inverse Hessian.[6]This represents the Hessian as a sum of a diagonal matrix and a low-rank update. Such a representation enables the use of L-BFGS in constrained settings, for example, as part of the SQP method. L-BFGS has been called "the algorithm of choice" for fittinglog-linear (MaxEnt) modelsandconditional random fieldswithℓ2{\displaystyle \ell _{2}}-regularization.[2][3] Since BFGS (and hence L-BFGS) is designed to minimizesmoothfunctions withoutconstraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiablecomponents or constraints. A popular class of modifications are called active-set methods, based on the concept of theactive set. The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified. 
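To make the two-loop recursion described above concrete, the following is a minimal NumPy sketch under the usual conventions: the stored pairs (s_i, y_i) are kept in Python lists with the most recent pair last, and the initial matrix H_k^0 is taken to be γ_k I with γ_k = s_{k−1}ᵀy_{k−1} / y_{k−1}ᵀy_{k−1}, a common scaling choice of the kind mentioned above. The function and variable names are illustrative and not taken from any particular implementation.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list):
        """Two-loop recursion: returns d = -H_k @ grad, where H_k is the implicit
        L-BFGS inverse-Hessian approximation built from the stored (s_i, y_i)
        pairs (oldest first, most recent pair last)."""
        q = grad.copy()
        rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
        alphas = []

        # first loop: newest to oldest
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * (s @ q)
            q -= alpha * y
            alphas.append(alpha)

        # initial approximation H_k^0 = gamma_k * I (a common scaling choice)
        if s_list:
            gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
        else:
            gamma = 1.0
        z = gamma * q

        # second loop: oldest to newest
        for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
            beta = rho * (y @ z)
            z += (alpha - beta) * s

        return -z   # descent direction for the minimization problem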
The L-BFGS-B algorithm extends L-BFGS to handle simple box constraints (also known as bound constraints) on variables; that is, constraints of the form li ≤ xi ≤ ui, where li and ui are per-variable constant lower and upper bounds, respectively (for each xi, either or both bounds may be omitted).[7][8] The method works by identifying fixed and free variables at every step (using a simple gradient method), then using the L-BFGS method on the free variables only to get higher accuracy, and repeating the process. Orthant-wise limited-memory quasi-Newton (OWL-QN) is an L-BFGS variant for fitting ℓ1{\displaystyle \ell _{1}}-regularized models, exploiting the inherent sparsity of such models.[3] It minimizes functions of the form g(x) + C‖x‖1, where g{\displaystyle g} is a differentiable convex loss function. The method is an active-set type method: at each iterate, it estimates the sign of each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable ‖x→‖1{\displaystyle \|{\vec {x}}\|_{1}} term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process. Schraudolph et al. present an online approximation to both BFGS and L-BFGS.[9] Similar to stochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS has global almost sure convergence,[10] while the online approximation of BFGS (O-BFGS) is not necessarily convergent.[11]
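L-BFGS-B is available in widely used numerical libraries; for instance, SciPy exposes it through scipy.optimize.minimize with method="L-BFGS-B". The snippet below is only a usage sketch with a made-up objective and bounds, not an example taken from the text.

    # Usage sketch: a smooth objective minimized under box constraints with
    # SciPy's L-BFGS-B implementation (objective and bounds are illustrative).
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # convex quadratic whose unconstrained minimum (x = 2) lies outside the box
        return 0.5 * np.sum((x - 2.0) ** 2)

    def grad_f(x):
        return x - 2.0

    x0 = np.zeros(5)
    bounds = [(0.0, 1.0)] * 5            # l_i <= x_i <= u_i for every variable

    res = minimize(f, x0, jac=grad_f, method="L-BFGS-B", bounds=bounds)
    print(res.x)                         # every component ends up clipped at 1.0
    print(res.fun)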
https://en.wikipedia.org/wiki/Orthant-wise_limited-memory_quasi-Newton
In numerical analysis,Broyden's methodis aquasi-Newton methodforfinding rootsinkvariables. It was originally described byC. G. Broydenin 1965.[1] Newton's methodfor solvingf(x) =0uses theJacobian matrix,J, at every iteration. However, computing this Jacobian can be a difficult and expensive operation; for large problems such as those involving solving theKohn–Sham equationsinquantum mechanicsthe number of variables can be in the hundreds of thousands. The idea behind Broyden's method is to compute the whole Jacobian at most only at the first iteration, and to do rank-one updates at other iterations. In 1979 Gay proved that when Broyden's method is applied to a linear system of sizen×n, it terminates in2nsteps,[2]although like all quasi-Newton methods, it may not converge for nonlinear systems. In thesecant method, we replace the first derivativef′atxnwith thefinite-differenceapproximation: and proceed similar toNewton's method: wherenis the iteration index. Consider a system ofknonlinear equations ink{\displaystyle k}unknowns wherefis a vector-valued function of vectorx For such problems, Broyden gives a variation of the one-dimensional Newton's method, replacing the derivative with an approximateJacobianJ. The approximate Jacobian matrix is determined iteratively based on thesecant equation, a finite-difference approximation: wherenis the iteration index. For clarity, define so the above may be rewritten as The above equation isunderdeterminedwhenkis greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix,Jn−1, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification toJn−1: This minimizes theFrobenius norm One then updates the variables using the approximate Jacobian, what is called a quasi-Newton approach. Ifα=1{\displaystyle \alpha =1}this is the full Newton step; commonly aline searchortrust regionmethod is used to controlα{\displaystyle \alpha }. The initial Jacobian can be taken as a diagonal, unit matrix, although more common is to scale it based upon the first step.[3]Broyden also suggested using theSherman–Morrison formula[4]to directly update the inverse of the approximate Jacobian matrix: This first method is commonly known as the "good Broyden's method." A similar technique can be derived by using a slightly different modification toJn−1. This yields a second method, the so-called "bad Broyden's method": This minimizes a different Frobenius norm In his original paper Broyden could not get the bad method to work, but there are cases where it does[5]for which several explanations have been proposed.[6][7]Many other quasi-Newton schemes have been suggested inoptimizationsuch as theBFGS, where one seeks a maximum or minimum by finding zeros of the first derivatives (zeros of thegradientin multiple dimensions). The Jacobian of the gradient is called theHessianand is symmetric, adding further constraints to its approximation. 
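As a rough illustration of the "good" Broyden method described above, the sketch below maintains an approximation H ≈ J⁻¹ and refreshes it with the Sherman–Morrison update after each step. It always takes the full step (α = 1) and omits the line-search or trust-region safeguards mentioned above; the function names and the example system are invented for illustration.

    import numpy as np

    def broyden_good(f, x0, J_inv0=None, tol=1e-10, max_iter=100):
        """Sketch of 'good' Broyden: keep an approximate inverse Jacobian H and
        update it with the Sherman-Morrison formula after each full step."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        H = np.eye(len(x)) if J_inv0 is None else J_inv0   # approximate J^{-1}

        for _ in range(max_iter):
            if np.linalg.norm(fx) < tol:
                break
            dx = -H @ fx                     # quasi-Newton step (alpha = 1)
            x_new = x + dx
            fx_new = f(x_new)
            df = fx_new - fx
            # Sherman-Morrison update of the inverse Jacobian ("good" Broyden);
            # no safeguard here for a near-zero denominator.
            Hdf = H @ df
            H = H + np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
            x, fx = x_new, fx_new
        return x

    # Example: solve x0^2 + x1^2 = 4 and x0 * x1 = 1
    root = broyden_good(lambda x: np.array([x[0]**2 + x[1]**2 - 4.0,
                                            x[0]*x[1] - 1.0]),
                        x0=[2.0, 0.5])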
In addition to the two methods described above, Broyden defined a wider class of related methods.[1]: 578In general, methods in theBroyden classare given in the form[8]: 150Jk+1=Jk−JkskskTJkskTJksk+ykykTykTsk+ϕk(skTJksk)vkvkT,{\displaystyle \mathbf {J} _{k+1}=\mathbf {J} _{k}-{\frac {\mathbf {J} _{k}s_{k}s_{k}^{T}\mathbf {J} _{k}}{s_{k}^{T}\mathbf {J} _{k}s_{k}}}+{\frac {y_{k}y_{k}^{T}}{y_{k}^{T}s_{k}}}+\phi _{k}\left(s_{k}^{T}\mathbf {J} _{k}s_{k}\right)v_{k}v_{k}^{T},}whereyk:=f(xk+1)−f(xk),{\displaystyle y_{k}:=\mathbf {f} (\mathbf {x} _{k+1})-\mathbf {f} (\mathbf {x} _{k}),}sk:=xk+1−xk,{\displaystyle s_{k}:=\mathbf {x} _{k+1}-\mathbf {x} _{k},}andvk=[ykykTsk−JkskskTJksk],{\displaystyle v_{k}=\left[{\frac {y_{k}}{y_{k}^{T}s_{k}}}-{\frac {\mathbf {J} _{k}s_{k}}{s_{k}^{T}\mathbf {J} _{k}s_{k}}}\right],}andϕk∈R{\displaystyle \phi _{k}\in \mathbb {R} }for eachk=1,2,...{\displaystyle k=1,2,...}. The choice ofϕk{\displaystyle \phi _{k}}determines the method. Other methods in the Broyden class have been introduced by other authors.
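Written out as code, a single Broyden-class update is a direct transcription of the formula above; in the usual convention, φ_k = 0 gives the BFGS-type member of the family and φ_k = 1 the DFP-type member. This is an illustrative sketch only, with no safeguards for small denominators.

    import numpy as np

    def broyden_class_update(J, s, y, phi=0.0):
        """One Broyden-class update of the symmetric approximation J_k, as in the
        formula above; phi selects the member of the family (0: BFGS-type,
        1: DFP-type in the usual convention)."""
        Js = J @ s
        sJs = s @ Js                      # s^T J s  (assumed nonzero)
        ys = y @ s                        # y^T s    (assumed nonzero)
        v = y / ys - Js / sJs
        return (J
                - np.outer(Js, Js) / sJs
                + np.outer(y, y) / ys
                + phi * sJs * np.outer(v, v))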
https://en.wikipedia.org/wiki/Broyden%27s_method
TheDavidon–Fletcher–Powell formula(orDFP; named afterWilliam C. Davidon,Roger Fletcher, andMichael J. D. Powell) finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition. It was the firstquasi-Newton methodto generalize thesecant methodto a multidimensional problem. This update maintains the symmetry and positive definiteness of theHessian matrix. Given a functionf(x){\displaystyle f(x)}, itsgradient(∇f{\displaystyle \nabla f}), andpositive-definiteHessian matrixB{\displaystyle B}, theTaylor seriesis and theTaylor seriesof the gradient itself (secant equation) is used to updateB{\displaystyle B}. The DFP formula finds a solution that is symmetric, positive-definite and closest to the current approximate value ofBk{\displaystyle B_{k}}: where andBk{\displaystyle B_{k}}is a symmetric andpositive-definite matrix. The corresponding update to the inverse Hessian approximationHk=Bk−1{\displaystyle H_{k}=B_{k}^{-1}}is given by B{\displaystyle B}is assumed to be positive-definite, and the vectorsskT{\displaystyle s_{k}^{T}}andy{\displaystyle y}must satisfy the curvature condition The DFP formula is quite effective, but it was soon superseded by theBroyden–Fletcher–Goldfarb–Shanno formula, which is itsdual(interchanging the roles ofyands).[1] By unwinding the matrix recurrence forBk{\displaystyle B_{k}}, the DFP formula can be expressed as acompact matrix representation. Specifically, defining Sk=[s0s1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots &s_{k-1}\end{bmatrix}},}Yk=[y0y1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots &y_{k-1}\end{bmatrix}},} and upper triangular and diagonal matrices (Rk)ij:=(RkSY)ij=si−1Tyj−1,(RkYS)ij=yi−1Tsj−1,(Dk)ii:=(DkSY)ii=si−1Tyi−1for1≤i≤j≤k{\displaystyle {\big (}R_{k}{\big )}_{ij}:={\big (}R_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{YS}}{\big )}_{ij}=y_{i-1}^{T}s_{j-1},\quad (D_{k})_{ii}:={\big (}D_{k}^{\text{SY}}{\big )}_{ii}=s_{i-1}^{T}y_{i-1}\quad \quad {\text{ for }}1\leq i\leq j\leq k} the DFP matrix has the equivalent formula Bk=B0+JkNk−1JkT,{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},} Jk=[YkYk−B0Sk]{\displaystyle J_{k}={\begin{bmatrix}Y_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}}} Nk=[0k×kRkYS(RkYS)TRk+RkT−(Dk+SkTB0Sk)]{\displaystyle N_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{YS}}\\{\big (}R_{k}^{\text{YS}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+S_{k}^{T}B_{0}S_{k})\end{bmatrix}}} The inverse compact representation can be found by applying theSherman-Morrison-Woodbury inversetoBk{\displaystyle B_{k}}. The compact representation is particularly useful for limited-memory and constrained problems.[2]
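A sketch of the standard DFP formulas, for both the Hessian approximation B_k and its inverse H_k = B_k⁻¹, is given below; the curvature condition yᵀs > 0 is assumed to hold and no safeguards are included.

    import numpy as np

    def dfp_update(B, s, y):
        """DFP update of the Hessian approximation:
        B_{k+1} = (I - g y s^T) B (I - g s y^T) + g y y^T,  g = 1/(y^T s)."""
        g = 1.0 / (y @ s)                 # requires the curvature condition y^T s > 0
        I = np.eye(len(s))
        return (I - g * np.outer(y, s)) @ B @ (I - g * np.outer(s, y)) + g * np.outer(y, y)

    def dfp_inverse_update(H, s, y):
        """Matching update of the inverse approximation H_k = B_k^{-1}:
        H_{k+1} = H + s s^T/(y^T s) - (H y)(H y)^T/(y^T H y)."""
        Hy = H @ y
        return H + np.outer(s, s) / (y @ s) - np.outer(Hy, Hy) / (y @ Hy)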
https://en.wikipedia.org/wiki/DFP_updating_formula
TheSymmetric Rank 1(SR1) method is aquasi-Newton methodto update the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization to thesecant methodfor a multidimensional problem. This update maintains thesymmetryof the matrix but doesnotguarantee that the update bepositive definite. The sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions, in theory; in practice, the approximate Hessians generated by the SR1 method show faster progress towards the true Hessian than do popular alternatives (BFGSorDFP), in preliminary numerical experiments.[1][2]The SR1 method has computational advantages forsparseorpartially separableproblems.[3] A twice continuously differentiable functionx↦f(x){\displaystyle x\mapsto f(x)}has agradient(∇f{\displaystyle \nabla f}) andHessian matrixB{\displaystyle B}: The functionf{\displaystyle f}has an expansion as aTaylor seriesatx0{\displaystyle x_{0}}, which can be truncated its gradient has a Taylor-series approximation also which is used to updateB{\displaystyle B}. The above secant-equation need not have a unique solutionB{\displaystyle B}. The SR1 formula computes (via an update ofrank1) the symmetric solution that is closest[further explanation needed]to the current approximate-valueBk{\displaystyle B_{k}}: where The corresponding update to the approximate inverse-HessianHk=Bk−1{\displaystyle H_{k}=B_{k}^{-1}}is One might wonder why positive-definiteness is not preserved — after all, a rank-1 update of the formBk+1=Bk+vvT{\displaystyle B_{k+1}=B_{k}+vv^{T}}is positive-definite ifBk{\displaystyle B_{k}}is. The explanation is that the update might be of the formBk+1=Bk−vvT{\displaystyle B_{k+1}=B_{k}-vv^{T}}instead because the denominator can be negative, and in that case there are no guarantees about positive-definiteness. The SR1 formula has been rediscovered a number of times. Since the denominator can vanish, some authors have suggested that the update be applied only if wherer∈(0,1){\displaystyle r\in (0,1)}is a small number, e.g.10−8{\displaystyle 10^{-8}}.[4] The SR1 update maintains a dense matrix, which can be prohibitive for large problems. Similar to theL-BFGSmethod also a limited-memory SR1 (L-SR1) algorithm exists.[5]Instead of storing the full Hessian approximation, a L-SR1 method only stores them{\displaystyle m}most recent pairs{(si,yi)}i=k−mk−1{\displaystyle \{(s_{i},y_{i})\}_{i=k-m}^{k-1}}, whereΔxi:=si{\displaystyle \Delta x_{i}:=s_{i}}andm{\displaystyle m}is an integer much smaller than the problem size (m≪n{\displaystyle m\ll n}). The limited-memory matrix is based on acompact matrix representation Bk=B0+JkNk−1JkT,Jk=Yk−B0Sk,Nk=Dk+Lk+LkT−SkTB0Sk{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}=Y_{k}-B_{0}S_{k},\quad N_{k}=D_{k}+L_{k}+L_{k}^{T}-S_{k}^{T}B_{0}S_{k}} Sk=[sk−msk−m+1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{k-m}&s_{k-m+1}&\ldots &s_{k-1}\end{bmatrix}},}Yk=[yk−myk−m+1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{k-m}&y_{k-m+1}&\ldots &y_{k-1}\end{bmatrix}},} (Lk)ij=si−1Tyj−1,(Dk)ii=si−1Tyi−1,k−m≤i≤k−1{\displaystyle {\big (}L_{k}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad (D_{k})_{ii}=s_{i-1}^{T}y_{i-1},\quad k-m\leq i\leq k-1} Since the update can be indefinite, the L-SR1 algorithm is suitable for atrust-regionstrategy. Because of the limited-memory matrix, the trust-region L-SR1 algorithm scales linearly with the problem size, just like L-BFGS.
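The SR1 update, together with the denominator safeguard just described, can be sketched in a few lines; the threshold r and the variable names are illustrative.

    import numpy as np

    def sr1_update(B, s, y, r=1e-8):
        """Symmetric rank-1 update of the Hessian approximation B, skipping the
        update when the denominator (y - Bs)^T s is too close to zero."""
        v = y - B @ s                      # residual of the secant equation
        denom = v @ s
        if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
            return B                       # skip: denominator too small
        return B + np.outer(v, v) / denom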
https://en.wikipedia.org/wiki/SR1_formula
Inmathematics, thespectral radiusof asquare matrixis the maximum of the absolute values of itseigenvalues.[1]More generally, the spectral radius of abounded linear operatoris thesupremumof the absolute values of the elements of itsspectrum. The spectral radius is often denoted byρ(·). Letλ1, ...,λnbe the eigenvalues of a matrixA∈Cn×n. The spectral radius ofAis defined as The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand,ρ(A)⩽‖A‖{\displaystyle \rho (A)\leqslant \|A\|}for everynatural matrix norm‖⋅‖{\displaystyle \|\cdot \|}; and on the other hand, Gelfand's formula states thatρ(A)=limk→∞‖Ak‖1/k{\displaystyle \rho (A)=\lim _{k\to \infty }\|A^{k}\|^{1/k}}. Both of these results are shown below. However, the spectral radius does not necessarily satisfy‖Av‖⩽ρ(A)‖v‖{\displaystyle \|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|}for arbitrary vectorsv∈Cn{\displaystyle \mathbf {v} \in \mathbb {C} ^{n}}. To see why, letr>1{\displaystyle r>1}be arbitrary and consider the matrix Thecharacteristic polynomialofCr{\displaystyle C_{r}}isλ2−1{\displaystyle \lambda ^{2}-1}, so its eigenvalues are{−1,1}{\displaystyle \{-1,1\}}and thusρ(Cr)=1{\displaystyle \rho (C_{r})=1}. However,Cre1=re2{\displaystyle C_{r}\mathbf {e} _{1}=r\mathbf {e} _{2}}. As a result, As an illustration of Gelfand's formula, note that‖Crk‖1/k→1{\displaystyle \|C_{r}^{k}\|^{1/k}\to 1}ask→∞{\displaystyle k\to \infty }, sinceCrk=I{\displaystyle C_{r}^{k}=I}ifk{\displaystyle k}is even andCrk=Cr{\displaystyle C_{r}^{k}=C_{r}}ifk{\displaystyle k}is odd. A special case in which‖Av‖⩽ρ(A)‖v‖{\displaystyle \|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|}for allv∈Cn{\displaystyle \mathbf {v} \in \mathbb {C} ^{n}}is whenA{\displaystyle A}is aHermitian matrixand‖⋅‖{\displaystyle \|\cdot \|}is theEuclidean norm. This is because any Hermitian Matrix isdiagonalizableby aunitary matrix, and unitary matrices preserve vector length. As a result, In the context of abounded linear operatorAon aBanach space, the eigenvalues need to be replaced with the elements of thespectrum of the operator, i.e. the valuesλ{\displaystyle \lambda }for whichA−λI{\displaystyle A-\lambda I}is not bijective. We denote the spectrum by The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum: Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting‖⋅‖{\displaystyle \|\cdot \|}denote theoperator norm, we have A bounded operator (on a complex Hilbert space) is called aspectraloid operatorif its spectral radius coincides with itsnumerical radius. An example of such an operator is anormal operator. The spectral radius of a finitegraphis defined to be the spectral radius of itsadjacency matrix. This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real numberCsuch that the degree of every vertex of the graph is smaller thanC). In this case, for the graphGdefine: Letγbe the adjacency operator ofG: The spectral radius ofGis defined to be the spectral radius of the bounded linear operatorγ. The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix. Proposition.LetA∈Cn×nwith spectral radiusρ(A)and asub-multiplicative matrix norm||⋅||. Then for each integerk⩾1{\displaystyle k\geqslant 1}: Proof Let(v,λ)be aneigenvector-eigenvaluepair for a matrixA. 
By the sub-multiplicativity of the matrix norm, we get: Sincev≠ 0, we have and therefore concluding the proof. There are many upper bounds for the spectral radius of a graph in terms of its numbernof vertices and its numbermof edges. For instance, if where3≤k≤n{\displaystyle 3\leq k\leq n}is an integer, then[2] For real-valued matricesA{\displaystyle A}the inequalityρ(A)≤‖A‖2{\displaystyle \rho (A)\leq {\|A\|}_{2}}holds in particular, where‖⋅‖2{\displaystyle {\|\cdot \|}_{2}}denotes thespectral norm. In the case whereA{\displaystyle A}issymmetric, this inequality is tight: Theorem.LetA∈Rn×n{\displaystyle A\in \mathbb {R} ^{n\times n}}be symmetric, i.e.,A=AT.{\displaystyle A=A^{T}.}Then it holds thatρ(A)=‖A‖2.{\displaystyle \rho (A)={\|A\|}_{2}.} Proof Let(vi,λi)i=1n{\displaystyle (v_{i},\lambda _{i})_{i=1}^{n}}be the eigenpairs ofA. Due to the symmetry ofA, allvi{\displaystyle v_{i}}andλi{\displaystyle \lambda _{i}}are real-valued and the eigenvectorsvi{\displaystyle v_{i}}areorthonormal. By the definition of the spectral norm, there exists anx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}with‖x‖2=1{\displaystyle {\|x\|}_{2}=1}such that‖A‖2=‖Ax‖2.{\displaystyle {\|A\|}_{2}={\|Ax\|}_{2}.}Since the eigenvectorsvi{\displaystyle v_{i}}form a basis ofRn,{\displaystyle \mathbb {R} ^{n},}there exists factorsδ1,…,δn∈Rn{\displaystyle \delta _{1},\ldots ,\delta _{n}\in \mathbb {R} ^{n}}such thatx=∑i=1nδivi{\displaystyle \textstyle x=\sum _{i=1}^{n}\delta _{i}v_{i}}which implies that From the orthonormality of the eigenvectorsvi{\displaystyle v_{i}}it follows that and Sincex{\displaystyle x}is chosen such that it maximizes‖Ax‖2{\displaystyle {\|Ax\|}_{2}}while satisfying‖x‖2=1,{\displaystyle {\|x\|}_{2}=1,}the values ofδi{\displaystyle \delta _{i}}must be such that they maximize∑i=1n|δi|⋅|λi|{\displaystyle \textstyle \sum _{i=1}^{n}{|\delta _{i}|}\cdot {|\lambda _{i}|}}while satisfying∑i=1n|δi|=1.{\displaystyle \textstyle \sum _{i=1}^{n}{|\delta _{i}|}=1.}This is achieved by settingδk=1{\displaystyle \delta _{k}=1}fork=argmaxi=1n|λi|{\displaystyle k=\mathrm {arg\,max} _{i=1}^{n}{|\lambda _{i}|}}andδi=0{\displaystyle \delta _{i}=0}otherwise, yielding a value of‖Ax‖2=|λk|=ρ(A).{\displaystyle {\|Ax\|}_{2}={|\lambda _{k}|}=\rho (A).} The spectral radius is closely related to the behavior of the convergence of the power sequence of a matrix; namely as shown by the following theorem. Theorem.LetA∈Cn×nwith spectral radiusρ(A). Thenρ(A) < 1if and only if On the other hand, ifρ(A) > 1,limk→∞‖Ak‖=∞{\displaystyle \lim _{k\to \infty }\|A^{k}\|=\infty }. The statement holds for any choice of matrix norm onCn×n. Proof Assume thatAk{\displaystyle A^{k}}goes to zero ask{\displaystyle k}goes to infinity. We will show thatρ(A) < 1. Let(v,λ)be aneigenvector-eigenvaluepair forA. SinceAkv=λkv, we have Sincev≠ 0by hypothesis, we must have which implies|λ|<1{\displaystyle |\lambda |<1}. Since this must be true for any eigenvalueλ{\displaystyle \lambda }, we can conclude thatρ(A) < 1. Now, assume the radius ofAis less than1. From theJordan normal formtheorem, we know that for allA∈Cn×n, there existV,J∈Cn×nwithVnon-singular andJblock diagonal such that: with where It is easy to see that and, sinceJis block-diagonal, Now, a standard result on thek-power of anmi×mi{\displaystyle m_{i}\times m_{i}}Jordan block states that, fork≥mi−1{\displaystyle k\geq m_{i}-1}: Thus, ifρ(A)<1{\displaystyle \rho (A)<1}then for alli|λi|<1{\displaystyle |\lambda _{i}|<1}. 
Hence for alliwe have: which implies Therefore, On the other side, ifρ(A)>1{\displaystyle \rho (A)>1}, there is at least one element inJthat does not remain bounded askincreases, thereby proving the second part of the statement. Gelfand's formula, named afterIsrael Gelfand, gives the spectral radius as a limit of matrix norms. For anymatrix norm||⋅||,we have[3] Moreover, in the case of aconsistentmatrix normlimk→∞‖Ak‖1k{\displaystyle \lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}}approachesρ(A){\displaystyle \rho (A)}from above (indeed, in that caseρ(A)≤‖Ak‖1k{\displaystyle \rho (A)\leq \left\|A^{k}\right\|^{\frac {1}{k}}}for allk{\displaystyle k}). For anyε> 0, let us define the two following matrices: Thus, We start by applying the previous theorem on limits of power sequences toA+: This shows the existence ofN+∈Nsuch that, for allk≥N+, Therefore, Similarly, the theorem on power sequences implies that‖A−k‖{\displaystyle \|A_{-}^{k}\|}is not bounded and that there existsN−∈Nsuch that, for allk≥N−, Therefore, LetN= max{N+,N−}. Then, that is, This concludes the proof. Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: ifA1,…,An{\displaystyle A_{1},\ldots ,A_{n}}are matrices that all commute, then Consider the matrix whose eigenvalues are5, 10, 10; by definition,ρ(A) = 10. In the following table, the values of‖Ak‖1k{\displaystyle \|A^{k}\|^{\frac {1}{k}}}for the four most used norms are listed versus several increasing values of k (note that, due to the particular form of this matrix,‖.‖1=‖.‖∞{\displaystyle \|.\|_{1}=\|.\|_{\infty }}):
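The displayed example matrix and the accompanying table did not survive extraction here, but the computation they describe is easy to reproduce. The sketch below uses a stand-in upper-triangular matrix with the same eigenvalues (5, 10, 10), so that ρ(A) = 10, and prints ‖A^k‖^{1/k} for several norms and increasing k; each value approaches 10, as Gelfand's formula predicts. Because the matrix is a stand-in, the exact numbers differ from those of the original example.

    import numpy as np

    # Stand-in matrix with eigenvalues 5, 10, 10 (the original matrix was not
    # preserved in this extract), so rho(A) = 10.
    A = np.array([[10.0, 1.0, 0.0],
                  [0.0, 10.0, 1.0],
                  [0.0,  0.0, 5.0]])

    norms = {
        "1-norm":    lambda M: np.linalg.norm(M, 1),
        "2-norm":    lambda M: np.linalg.norm(M, 2),
        "inf-norm":  lambda M: np.linalg.norm(M, np.inf),
        "Frobenius": lambda M: np.linalg.norm(M, "fro"),
    }

    for k in (1, 10, 50, 100, 200):
        Ak = np.linalg.matrix_power(A, k)
        vals = {name: n(Ak) ** (1.0 / k) for name, n in norms.items()}
        print(k, {name: round(v, 4) for name, v in vals.items()})
    # Each column of values tends to rho(A) = 10 as k grows.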
https://en.wikipedia.org/wiki/Spectral_radius
Legal informaticsis an area withininformation science. TheAmerican Library Associationdefinesinformaticsas "the study of thestructureandpropertiesofinformation, as well as the application oftechnologyto theorganization,storage,retrieval, and dissemination of information." Legal informatics therefore, pertains to the application of informatics within the context of the legal environment and as such involveslaw-relatedorganizations(e.g., law offices,courts, andlaw schools) andusersof information andinformation technologieswithin these organizations.[1] Policy issues in legal informatics arise from the use of informational technologies in the implementation of law, such as the use ofsubpoenasfor information found in emails,search queries, andsocial networks. Policy approaches to legal informatics issues vary throughout the world. For example, European countries tend to require the destruction or anonymization of data so that it cannot be used for discovery.[2] The widespread introduction ofcloud computingprovides several benefits in delivering legal services. Legal service providers can use theSoftware as a Servicemodel to earn a profit by charging customers a per-use or subscription fee. This model has several benefits over traditional bespoke services. Software as a service also complicates the attorney-client relationship in a way that may have implications forattorney–client privilege. The traditional delivery model makes it easy to create delineations of when attorney-client privilege attaches and when it does not. But in more complex models of legal service delivery other actors or automated processes may moderate the relationship between a client and their attorney making it difficult to tell which communications should belegally privileged.[3] Artificial intelligence is employed inonline dispute resolutionplatforms that use optimization algorithms and blind-bidding.[4]Artificial intelligence is also frequently employed in modeling the legalontology, "an explicit, formal, and general specification of a conceptualization of properties of and relations between objects in a given domain".[5] Artificial intelligence and law (AI and law) is a subfield ofartificial intelligence(AI) mainly concerned with applications of AI to legal informatics problems and original research on those problems. It is also concerned to contribute in the other direction: to export tools and techniques developed in the context of legal problems to AI in general. For example, theories of legal decision making, especially models ofargumentation, have contributed toknowledge representation and reasoning; models of social organization based onnormshave contributed tomulti-agent systems; reasoning with legal cases has contributed tocase-based reasoning; and the need to store and retrieve large amounts of textual data has resulted in contributions to conceptualinformation retrievaland intelligent databases.[6][7][8] Although Loevinger,[9]Allen[10]and Mehl[11]anticipated several of the ideas that would become important in AI and Law, the first serious proposal for applying AI techniques to law is usually taken to be Buchanan and Headrick.[12]Early work from this period includes Thorne McCarty's influential TAXMAN project[13]in the US and Ronald Stamper'sLEGOLproject[14]in the UK. 
Landmarks in the early 1980s include Carole Hafner's work on conceptual retrieval,[15]Anne Gardner's work on contract law,[16]Edwina Rissland's work on legal hypotheticals[17]and the work at Imperial College London on the representation of legislation by means of executable logic programs.[18] Early meetings of scholars included a one-off meeting at Swansea,[19]the series of conferences organized by IDG inFlorence[20]and the workshops organised by Charles Walter at the University of Houston in 1984 and 1985.[21]In 1987 a biennial conference, the International Conference on AI and Law (ICAIL), was instituted.[22]This conference began to be seen as the main venue for publishing and the developing ideas within AI and Law,[23]and it led to the foundation of the International Association for Artificial Intelligence and Law (IAAIL), to organize and convene subsequent ICAILs. This, in turn, led to the foundation of the Artificial Intelligence and Law Journal, first published in 1992.[24]In Europe, the annual JURIX conferences (organised by the Jurix Foundation for Legal Knowledge Based Systems), began in 1988. Initially intended to bring together the Dutch-speaking (i.e. Dutch and Flemish) researchers, JURIX quickly developed into an international, primarily European, conference and since 2002 has regularly been held outside the Dutch speaking countries.[25]Since 2007 the JURISIN workshops have been held in Japan under the auspices of the Japanese Society for Artificial Intelligence.[26] The interoperable legal documents standardAkoma Ntosoallows machine-driven processes to operate on the syntactic and semantic components of digital parliamentary, judicial and legislative documents, thus facilitating the development of high-quality information resources and forming a basis for AI tools. Its goal is to substantially enhance the performance, accountability, quality and openness of parliamentary and legislative operations based on best practices and guidance through machine-assisted drafting and machine-assisted (legal) analysis. Embedded in the environment of the semantic web, it forms the basis for a heterogenous yet interoperable ecosystem, with which these tools can operate and communicate, as well as for future applications and use cases based on digital law or rule representation.[27] In 2019, the city ofHangzhou, China established a pilot program artificial intelligence-based Internet Court to adjudicate disputes related to ecommerce and internet-relatedintellectual propertyclaims.[28]: 124Parties appear before the court via videoconference and AI evaluates the evidence presented and applies relevant legal standards.[28]: 124 Today, AI and law embrace a wide range of topics, including: Formal models of legal texts and legal reasoning have been used in AI and Law to clarify issues, to give a more precise understanding and to provide a basis for implementations. A variety of formalisms have been used, including propositional and predicate calculi; deontic, temporal and non-monotonic logics; and state transition diagrams. Prakken and Sartor[31]give a detailed and authoritative review of the use of logic and argumentation in AI and Law, together with a comprehensive set of references. An important role of formal models is to remove ambiguity. In fact, legislation abounds with ambiguity: Because it is written in natural language there are no brackets and so the scope of connectives such as "and" and "or" can be unclear. 
"Unless" is also capable of several interpretations, and legal draftsman never write "if and only if", although this is often what they intend by "if". In perhaps the earliest use of logic to model law in AI and Law, Layman Allen advocated the use of propositional logic to resolve such syntactic ambiguities in a series of papers.[10] In the late 1970s and throughout the 1980s a significant strand of work on AI and Law involved the production of executable models of legislation, originating with Thorne McCarty's TAXMAN[13]and Ronald Stamper's LEGOL.[14]TAXMAN was used to model the majority and minority arguments in a US Tax law case (Eisner v Macomber), and was implemented in themicro-PLANNERprogramming language. LEGOL was used to provide a formal model of the rules and regulations that govern an organization, and was implemented in a condition-action rule language of the kind used for expert systems. The TAXMAN and LEGOL languages were executable, rule-based languages, which did not have an explicit logical interpretation. However, the formalisation of a large portion of the British Nationality Act by Sergot et al.[18]showed that the natural language of legal documents bears a close resemblance to theHorn clausesubset of first order predicate calculus. Moreover, it identified the need to extend the use of Horn clauses by including negative conditions, to represent rules and exceptions. The resulting extended Horn clauses are executable aslogic programs. Later work on larger applications, such as that on Supplementary Benefits,[32]showed that logic programs need further extensions, to deal with such complications as multiple cross references, counterfactuals, deeming provisions, amendments, and highly technical concepts (such as contribution conditions). The use of hierarchical representations[33]was suggested to address the problem of cross reference; and so-called isomorphic[34]representations were suggested to address the problems of verification and frequent amendment. As the 1990s developed this strand of work became partially absorbed into the development of formalisations of domain conceptualisations, (so-calledontologies), which became popular in AI following the work of Gruber.[35]Early examples in AI and Law include Valente's functional ontology[36]and the frame based ontologies of Visser and van Kralingen.[37]Legal ontologies have since become the subject of regular workshops at AI and Law conferences and there are many examples ranging from generic top-level and core ontologies[38]to very specific models of particular pieces of legislation. Since law comprises sets of norms, it is unsurprising that deontic logics have been tried as the formal basis for models of legislation. These, however, have not been widely adopted as the basis for expert systems, perhaps because expert systems are supposed to enforce the norms, whereas deontic logic becomes of real interest only when we need to consider violations of the norms.[39]In law directed obligations,[40]whereby an obligation is owed to another named individual are of particular interest, since violations of such obligations are often the basis of legal proceedings. There is also some interesting work combining deontic and action logics to explore normative positions.[41] In the context ofmulti-agent systems, norms have been modelled using state transition diagrams. 
Often, especially in the context of electronic institutions,[42]the norms so described are regimented (i.e., cannot be violated), but in other systems violations are also handled, giving a more faithful reflection of real norms. For a good example of this approach see Modgil et al.[43] Law often concerns issues about time, both relating to the content, such as time periods and deadlines, and those relating to the law itself, such as commencement. Some attempts have been made to model these temporal logics using both computational formalisms such as the Event Calculus[44]and temporal logics such as defeasible temporal logic.[45] In any consideration of the use of logic to model law it needs to be borne in mind that law is inherently non-monotonic, as is shown by the rights of appeal enshrined in all legal systems, and the way in which interpretations of the law change over time.[46][47][48]Moreover, in the drafting of law exceptions abound, and, in the application of law, precedents are overturned as well as followed. In logic programming approaches,negation as failureis often used to handle non-monotonicity,[49]but specific non-monotonic logics such as defeasible logic[50]have also been used. Following the development of abstract argumentation,[51]however, these concerns are increasingly being addressed through argumentation in monotonic logic rather than through the use of non-monotonic logics. Two recent prominent accounts of legal reasoning involve reasons, and they are John Horty's, which focuses on common law reasoning and the notion of precedent,[52]and Federico Faroldi's, which focuses on civil law and uses justification logic.[53] Both academic and proprietary quantitative legal prediction models exist. One of the earliest examples of a working quantitative legal prediction model occurred in the form of theSupreme Courtforecasting project. The Supreme Court forecasting model attempted to predict the results of all the cases on the 2002 term of the Supreme Court. The model predicted 75% of cases correctly compared to experts who only predicted 59.1% of cases.[54]Another example of an academic quantitative legal prediction models is a 2012 model that predicted the result of Federal Securities class action lawsuits.[55]Some academics andlegal technologystartups are attempting to create algorithmic models to predict case outcomes.[56][57]Part of this overall effort involves improved case assessment for litigation funding.[58] In order to better evaluate the quality of case outcome prediction systems, a proposal has been made to create a standardised dataset that would allow comparisons between systems.[59] Within the practice issues conceptual area, progress continues to be made on both litigation and transaction focused technologies. In particular, technology including predictive coding has the potential to effect substantial efficiency gains in law practice. 
Though predictive coding has largely been applied in the litigation space, it is beginning to make inroads in transaction practice, where it is being used to improve document review in mergers and acquisitions.[60] Other advances, including XML coding in transaction contracts and increasingly advanced document preparation systems, demonstrate the importance of legal informatics in the transactional law space.[61][62] Current applications of AI in the legal field use machines to review documents, particularly when a high level of completeness and confidence in the quality of document analysis is depended upon, such as in instances of litigation and where due diligence plays a role.[63] Predictive coding leverages small samples to cross-reference similar items and weed out less relevant documents, so that attorneys can focus on the truly important key documents; it produces statistically validated results that equal or surpass the accuracy of human review and, prominently, its speed.[63] Advances in technology and legal informatics have led to new models for the delivery of legal services. Legal services have traditionally been a "bespoke" product created by a professional attorney on an individual basis for each client.[64] However, to work more efficiently, parts of these services will move sequentially from (1) bespoke to (2) standardized, (3) systematized, (4) packaged, and (5) commoditized.[64] Moving from one stage to the next will require embracing different technologies and knowledge systems.[64] The spread of the Internet and the development of legal technology and informatics are extending legal services to individuals and small to medium-sized companies. Corporate legal departments may use legal informatics for such purposes as managing patent portfolios[65] and for the preparation, customization and management of documents.[66]
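As a toy illustration of the predictive-coding idea of learning from a small reviewed sample and ranking a larger collection by predicted relevance, the sketch below trains a simple text classifier on a handful of labelled documents and scores the remaining ones. The documents, labels and model choice are invented for illustration and bear no relation to any real review platform or product.

    # Toy sketch of predictive coding: learn from a small reviewed sample, then
    # rank unreviewed documents by predicted relevance (all data invented).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    reviewed_docs = [
        "merger agreement indemnification clause",
        "share purchase agreement closing conditions",
        "office party catering invoice",
        "quarterly parking garage receipt",
    ]
    labels = [1, 1, 0, 0]                    # 1 = relevant, 0 = not relevant

    unreviewed_docs = [
        "asset purchase agreement representations and warranties",
        "staff birthday cake order confirmation",
    ]

    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(reviewed_docs)
    model = LogisticRegression().fit(X_train, labels)

    X_new = vectorizer.transform(unreviewed_docs)
    for doc, p in zip(unreviewed_docs, model.predict_proba(X_new)[:, 1]):
        print(f"{p:.2f}  {doc}")             # higher score = review first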
https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence_to_legal_informatics
Deep learningis a subset ofmachine learningthat focuses on utilizing multilayeredneural networksto perform tasks such asclassification,regression, andrepresentation learning. The field takes inspiration frombiological neuroscienceand is centered around stackingartificial neuronsinto layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be eithersupervised,semi-supervisedorunsupervised.[2] Some common deep learning network architectures includefully connected networks,deep belief networks,recurrent neural networks,convolutional neural networks,generative adversarial networks,transformers, andneural radiance fields. These architectures have been applied to fields includingcomputer vision,speech recognition,natural language processing,machine translation,bioinformatics,drug design,medical image analysis,climate science, material inspection andboard gameprograms, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes inbiological systems, particularly thehuman brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layeredneural networkssuch asconvolutional neural networksandtransformers, although they can also includepropositional formulasor latent variables organized layer-wise in deepgenerative modelssuch as the nodes indeep belief networksand deepBoltzmann machines.[7] Fundamentally, deep learning refers to a class ofmachine learningalgorithmsin which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in animage recognitionmodel, the raw input may be animage(represented as atensorofpixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which levelon its own. Prior to deep learning, machine learning techniques often involved hand-craftedfeature engineeringto transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the modeldiscoversuseful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantialcredit assignment path(CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For afeedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9] No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. A network with a CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10] Beyond that, more layers do not add to the function-approximation ability of the network. Deep models (CAP > two), however, are able to extract better features than shallow models, and hence extra layers help in learning features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method.[11] Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.[8][12] The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986,[13] and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons,[14][15] although the history of the term's appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of the universal approximation theorem[17][18][19][20][21] or probabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions.[17][18][19][20] In 1989, the first proof was published by George Cybenko for sigmoid activation functions[17] and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18] Recent work showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.[25][26] The universal approximation theorem for deep neural networks concerns the capacity of networks whose width is bounded but whose depth is allowed to grow. Lu et al.[21] proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator. The probabilistic interpretation[24] derives from the field of machine learning. It features inference,[23][7][8][9][12][24] as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function.[24] The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.[27] There are two main types of artificial neural network (ANN): the feedforward neural network (FNN), or multilayer perceptron (MLP), and the recurrent neural network (RNN). RNNs have cycles in their connectivity structure; FNNs do not. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model,[28][29] which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements.
In 1972,Shun'ichi Amarimade this architecture adaptive.[30][31]His learning RNN was republished byJohn Hopfieldin 1982.[32]Other earlyrecurrent neural networkswere published by Kaoru Nakano in 1971.[33][34]Already in 1948,Alan Turingproduced work on "Intelligent Machinery" that was not published in his lifetime,[35]containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt(1958)[36]proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16The book cites an earlier network by R. D. Joseph (1960)[38]"functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Should Joseph therefore be considered the originator of proper adaptivemultilayer perceptronswith learning hidden units? Unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in 1965. They regarded it as a form of polynomial regression,[39]or a generalization of Rosenblatt's perceptron.[40]A 1971 paper described a deep network with eight layers trained by this method,[41]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[42]was published in 1967 byShun'ichi Amari.[43]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[31]Subsequent developments in hardware and hyperparameter tunings have made end-to-endstochastic gradient descentthe currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit)activation function.[25][31]The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers began with theNeocognitronintroduced byKunihiko Fukushimain 1979, though not trained by backpropagation.[45][46] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[47]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[48]The modern form of backpropagation was first published inSeppo Linnainmaa's master thesis (1970).[49][50][31]G.M. Ostrovski et al. republished it in 1971.[51][52]Paul Werbosapplied backpropagation to neural networks in 1982[53](his 1974 PhD thesis, reprinted in a 1994 book,[54]did not yet describe the algorithm[52]). In 1986,David E. Rumelhartet al. 
popularised backpropagation but did not cite the original work.[55][56] Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[60]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[61]In 1991, a CNN was applied to medical image object segmentation[62]and breast cancer detection in mammograms.[63]LeNet-5 (1998), a 7-level CNN byYann LeCunet al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks(RNN)[28][30]were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward network. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN research, two early influential works were theJordan network(1986)[65]and theElman network(1990),[66]which applied RNN to study problems incognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991,Jürgen Schmidhuberproposed a hierarchy of RNNs pre-trained one level at a time byself-supervised learningwhere each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68]This "neural history compressor" usespredictive codingto learninternal representationsat multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can becollapsedinto a single RNN, bydistillinga higher levelchunkernetwork into a lower levelautomatizernetwork.[67][68][31]In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[69]The "P" inChatGPTrefers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70]implemented the neural history compressor,[67]and identified and analyzed thevanishing gradient problem.[70][71]Hochreiter proposed recurrentresidualconnections to solve the vanishing gradient problem. This led to thelong short-term memory(LSTM), published in 1995.[72]LSTM can learn "very deep learning" tasks[9]with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which requires a "forget gate", introduced in 1999;[73]this became the standard RNN architecture. In 1991,Jürgen Schmidhuberalso published adversarial neural networks that contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[74][75]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. This was called "artificial curiosity". 
In 2014, this principle was used ingenerative adversarial networks(GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, and others, including theBoltzmann machine,[77]restricted Boltzmann machine,[78]Helmholtz machine,[79]and thewake-sleep algorithm.[80]These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986 (p. 112).[81]A 1988 network became state of the art inprotein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep learning (e.g., recurrent nets) of ANNs forspeech recognitionhave been explored for many years.[83][84][85]These methods never outperformed the non-uniform internal-handcrafting Gaussianmixture model/Hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[86]Key difficulties have been analyzed, including diminishing gradients[70]and weak temporal correlation structure in neural predictive models.[87][88]Additional difficulties were the lack of training data and limited computing power. Mostspeech recognitionresearchers moved away from neural nets to pursue generative modeling. An exception was atSRI Internationalin the late 1990s. Funded by the US government'sNSAandDARPA, SRI conducted research in speech andspeaker recognition. The speaker recognition team led byLarry Heckreported significant success with deep neural networks in speech processing in the 1998NISTSpeaker Recognition benchmark.[89][90]It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully with a deep autoencoder architecture on "raw" spectrogram or linearfilter-bankfeatures in the late 1990s,[90]showing its superiority over theMel-Cepstralfeatures that contain stages of fixed transformation from spectrograms. The raw features of speech,waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such asGabor filtersandsupport vector machines(SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93]In 2006,Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it withconnectionist temporal classification(CTC)[94]in stacks of LSTMs.[95]In 2009, it became the first RNN to win apattern recognitioncontest, in connectedhandwriting recognition.[96][9] In 2006, publications byGeoff Hinton,Ruslan Salakhutdinov, Osindero andTeh[97][98]introduceddeep belief networks, developed for generative modeling. 
They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, with the whole stack then optionallyfine-tunedusing supervised backpropagation.[99]They could model high-dimensional probability distributions, such as the distribution ofMNIST images, but convergence was slow.[100][101][102] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103]Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than those of the then state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) systems and also than those of more advanced generative model-based systems.[104]The nature of the recognition errors produced by the two types of systems was characteristically different,[105]offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.[23][106][107]Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition.[105]That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.[104][105][108]In 2010, researchers extended deep learning fromTIMITto large-vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed bydecision trees.[109][110][111][106] The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs for years,[112]including CNNs,[113]faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114] A key driver of the deep learning revolution was hardware advances, especially GPUs. Some early work dated back to 2004.[112][113]In 2009, Raina, Madhavan, andAndrew Ngreported a 100M-parameter deep belief network trained on 30 NvidiaGeForce GTX 280GPUs, an early demonstration of GPU-based deep learning. 
They reported up to 70 times faster training.[115] In 2011, a CNN namedDanNet[116][117]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, andJürgen Schmidhuberachieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9]It then won more contests.[118][119]They also showed howmax-poolingCNNs on GPUs improved performance significantly.[3] In 2012,Andrew NgandJeff Deancreated an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken fromYouTubevideos.[120] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, andGeoffrey Hinton[4]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included theVGG-16network byKaren SimonyanandAndrew Zisserman[121]and Google'sInceptionv3.[122] The success in image classification was then extended to the more challenging task ofgenerating descriptions(captions) for images, often as a combination of CNNs and LSTMs.[123][124][125] In 2014, the state of the art was training a “very deep neural network” with 20 to 30 layers.[126]Stacking too many layers led to a steep reduction intrainingaccuracy,[127]known as the "degradation" problem.[128]In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and theresidual neural network(ResNet)[129]in December 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples includedGoogle DeepDream(2015) andneural style transfer(2015),[130]both of which were based on pretrained image classification neural networks, such asVGG-19. Thegenerative adversarial network(GAN) byIan Goodfellowet al. (2014)[131](based onJürgen Schmidhuber's principle of artificial curiosity[74][76]) became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved byNvidia'sStyleGAN(2018)[132]based on the Progressive GAN by Tero Karras et al.[133]Here the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success and provoked discussions concerningdeepfakes.[134]Diffusion models(2015)[135]have since eclipsed GANs in generative modeling, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2015, Google's speech recognition improved by 49% with an LSTM-based model, which was made available throughGoogle Voice Searchonsmartphones.[136][137] Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision andautomatic speech recognition(ASR). Results on commonly used evaluation sets such asTIMIT(ASR) andMNIST(image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved.[104][138]Convolutional neural networks were superseded for ASR byLSTM,[137][139][140][141]but remain more successful in computer vision. Yoshua Bengio,Geoffrey HintonandYann LeCunwere awarded the 2018Turing Awardfor "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142] Artificial neural networks(ANNs) orconnectionistsystemsare computing systems inspired by thebiological neural networksthat constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. 
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manuallylabeledas "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm usingrule-based programming. An ANN is based on a collection of connected units calledartificial neurons(analogous to biologicalneuronsin abiological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented byreal numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal sent downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such asbackpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision,speech recognition,machine translation,social networkfiltering,playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing "Go"[144]). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9]There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145]These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return as the proposed label. Each mathematical manipulation as such is considered a layer,[146]and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition ofprimitives.[147]The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7]For instance, it was proved that sparsemultivariate polynomialsare exponentially easier to approximate with DNNs than with shallow networks.[148] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. 
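To make the layered structure described above concrete, the following sketch (a minimal illustration in Python with NumPy, using hypothetical layer sizes and randomly initialized weights rather than any particular published architecture) shows how a signal passes through a small feedforward network: each layer multiplies its inputs by a weight matrix, adds a bias, and applies an activation function that keeps each neuron's output between 0 and 1.

```python
import numpy as np

def sigmoid(z):
    # Squashes each value into the range (0, 1), as described above.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 input features, one hidden layer of 5 units, 3 outputs.
sizes = [4, 5, 3]

# Randomly initialized weights and biases (the values a learning algorithm would later adjust).
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Signals travel from the input layer to the output layer, one layer at a time.
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(activation @ W + b)
    return activation

x = rng.normal(size=4)   # one example with 4 input features
print(forward(x))        # three output activations, each between 0 and 1
```

In a real system, a learning algorithm would then adjust these weights and biases so that the outputs match the desired labels, as described below.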
It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146] DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[149]That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such aslanguage modeling.[150][151][152][153][154]Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks(CNNs) are used in computer vision.[157]CNNs also have been applied toacoustic modelingfor automatic speech recognition (ASR).[158] As with ANNs, many issues can arise with naively trained DNNs. Two common issues areoverfittingand computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data.Regularizationmethods such as Ivakhnenko's unit pruning[41]orweight decay(ℓ2{\displaystyle \ell _{2}}-regularization) orsparsity(ℓ1{\displaystyle \ell _{1}}-regularization) can be applied during training to combat overfitting.[159]Alternativelydropoutregularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies.[160]Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction.[161]Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.[162] DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), thelearning rate, and initial weights.Sweeping through the parameter spacefor optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such asbatching(computing the gradient on several training examples at once rather than individual examples)[163]speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.[164][165] Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. 
The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167] Since the 2010s, advances in both machine learning algorithms andcomputer hardwarehave led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI.[169]OpenAIestimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171] Specialelectronic circuitscalleddeep learning processorswere designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) inHuaweicellphones[172]andcloud computingservers such astensor processing units(TPU) in theGoogle Cloud Platform.[173]Cerebras Systemshas also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175] Atomically thinsemiconductorsare considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based onfloating-gatefield-effect transistors(FGFETs).[176] In 2021, J. Feldmann et al. proposed an integratedphotonichardware acceleratorfor parallel convolutional processing.[177]The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer throughwavelengthdivisionmultiplexingin conjunction withfrequency combs, and (2) extremely high data modulation speeds.[177]Their system can execute trillions of multiply-accumulate operations per second, indicating the potential ofintegratedphotonicsin data-heavy AI applications.[177] Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9]that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156]is competitive with traditional speech recognizers on certain tasks.[93] The initial success in speech recognition was based on small-scale recognition tasks using TIMIT. The data set contains 630 speakers from eight majordialectsofAmerican English, where each speaker reads 10 sentences.[178]Its small size lets many configurations be tried. More importantly, the TIMIT task concernsphone-sequence recognition, which, unlike word-sequence recognition, allows weak phonebigramlanguage models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates on this task, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. 
The debut of DNNs for speaker recognition in the late 1990s, for speech recognition around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major areas.[23][108][106] All major commercial speech recognition systems (e.g., MicrosoftCortana,Xbox,Skype Translator,Amazon Alexa,Google Now,Apple Siri,BaiduandiFlyTekvoice search, and a range ofNuancespeech products, etc.) are based on deep learning.[23][183][184] A common evaluation set for image classification is theMNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185] Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 with the recognition of traffic signs, and in 2014 with the recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188]Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable of a variety of such tasks. Neural networks have been used for implementing language models since the early 2000s.[150]LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191]andword embedding. Word embedding, such asword2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in avector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as aprobabilistic context free grammar(PCFG) implemented by an RNN.[192]Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192]Deep neural architectures provide the best results for constituency parsing,[193]sentiment analysis,[194]information retrieval,[195][196]spoken language understanding,[197]machine translation,[151][198]contextual entity linking,[198]writing style recognition,[199]named-entity recognition(token classification),[200]text classification, and others.[201] Recent developments generalizeword embeddingtosentence embedding. Google Translate(GT) uses a large end-to-endlong short-term memory(LSTM) network.[202][203][204][205]Google Neural Machine Translation (GNMT)uses anexample-based machine translationmethod in which the system "learns from millions of examples".[203]It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203]The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206]GT uses English as an intermediate language between most language pairs.[206] A large percentage of candidate drugs fail to win regulatory approval. 
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipatedtoxic effects.[207][208]Research has explored the use of deep learning to predict thebiomolecular targets,[209][210]off-targets, andtoxic effectsof environmental chemicals in nutrients, household products, and drugs.[211][212][213] AtomNet is a deep learning system for structure-basedrational drug design.[214]AtomNet was used to predict novel candidate biomolecules for disease targets such as theEbola virus[215]andmultiple sclerosis.[216][215] In 2017,graph neural networkswere used for the first time to predict various properties of molecules in a large toxicology data set.[217]In 2019, generative neural networks were used to produce molecules that were validated experimentally in mice.[218][219] Deep reinforcement learninghas been used to approximate the value of possibledirect marketingactions, defined in terms ofRFMvariables. The estimated value function was shown to have a natural interpretation ascustomer lifetime value.[220] Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222]Multi-view deep learning has been applied for learning user preferences from multiple domains.[223]The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. AnautoencoderANN was used inbioinformaticsto predictgene ontologyannotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225]and to predict health complications fromelectronic health recorddata.[226] Deep neural networks have shown unparalleled performance inpredicting protein structurefrom the sequence of the amino acids that make it up. In 2020,AlphaFold, a deep-learning-based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228] Deep neural networks can be used to estimate the entropy of astochastic processin an approach called the Neural Joint Entropy Estimator (NJEE).[229]Such an estimation provides insight into the effects of inputrandom variableson an independentrandom variable. Practically, the DNN is trained as aclassifierthat maps an inputvectorormatrixX to an outputprobability distributionover the possible classes of random variable Y, given input X. For example, inimage classificationtasks, the NJEE maps a vector ofpixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by aSoftmaxlayer with a number of nodes equal to thealphabetsize of Y. NJEE uses continuously differentiableactivation functions, such that the conditions for theuniversal approximation theoremhold. 
This method has been shown to provide a stronglyconsistent estimatorand to outperform other methods in the case of large alphabet sizes.[229] Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation, and image enhancement.[230][231]Modern deep learning tools demonstrate high accuracy in detecting various diseases and can help specialists improve diagnostic efficiency.[232][233] Finding the appropriate mobile audience formobile advertisingis always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234]Deep learning has been used to interpret large, high-dimensional advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied toinverse problemssuch asdenoising,super-resolution,inpainting, andfilm colorization.[235]These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration"[236]which trains on an image dataset, andDeep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financialfraud detection, tax evasion detection,[237]and anti-money laundering.[238] In November 2023, researchers atGoogle DeepMindandLawrence Berkeley National Laboratoryannounced that they had developed an AI system known as GNoME. This system has contributed tomaterials scienceby discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganiccrystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data on the newly discovered materials is publicly available through theMaterials Projectdatabase, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241] The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242] Physics-informed neural networks have been used to solvepartial differential equationsin both forward and inverse problems in a data-driven manner.[243]One example is reconstructing fluid flow governed by theNavier-Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventionalCFDmethods rely on.[244][245] Thedeep backward stochastic differential equation methodis a numerical method that combines deep learning withbackward stochastic differential equations(BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics. 
By leveraging the powerful function approximation capabilities ofdeep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration ofPhysics-informed neural networks(PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247]and ultrasound imaging.[248] Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep-learning-based model, trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state-of-the-art systems.[249][250] An epigenetic clock is abiochemical testthat can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples.[251]The clock uses information from 1000CpG sitesand predicts people with certain conditions to be older than healthy controls:IBD,frontotemporal dementia,ovarian cancer, andobesity. The aging clock was planned to be released for public use in 2021 by anInsilico Medicinespinoff company, Deep Longevity. Deep learning is closely related to a class of theories ofbrain development(specifically, neocortical development) proposed bycognitive neuroscientistsin the early 1990s.[252][253][254][255]These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave ofnerve growth factor) supportself-organizationsomewhat analogous to that of the neural networks utilized in deep learning models. Like theneocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack oftransducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... 
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256] A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of thebackpropagationalgorithm have been proposed in order to increase its processing realism.[257][258]Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchicalgenerative modelsanddeep belief networks, may be closer to biological reality.[259][260]In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262]and neural populations.[263]Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264]both at the single-unit[265]and at the population[266]levels. Facebook's AI lab performs tasks such asautomatically tagging uploaded pictureswith the names of the people in them.[267] Google'sDeepMind Technologiesdeveloped a system capable of learning how to playAtarivideo games using only pixels as data input. In 2015 they demonstrated theirAlphaGosystem, which learned the game ofGowell enough to beat a professional Go player.[268][269][270]Google Translateuses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271] As of 2008,[272]researchers atThe University of Texas at Austin(UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242]First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration betweenU.S. Army Research Laboratory(ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242]Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273] Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274]Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear.[citation needed](e.g., Does it converge? If so, how fast? What is it approximating?) 
Deep learning methods are often looked at as ablack box, with most confirmations done empirically, rather than theoretically.[275] In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layer) neural networks attempting to discern, within essentially random data, the images on which they were trained[276]demonstrates a visual appeal: the original research notice received well over 1,000 comments and was for a time the subject of the most frequently accessed article onThe Guardian's[277]website. Some deep learning architectures display problematic behaviors,[278]such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279]and misclassifying minuscule perturbations of correctly classified images (2013).[280]Goertzelhypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-componentartificial general intelligence(AGI) architectures.[278]These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281]decompositions of observed entities and events.[278]Learning a grammar(visual or linguistic) from training data would be equivalent to restricting the system tocommonsense reasoningthat operates on concepts in terms of grammaticalproduction rules, and is a basic goal of both human language acquisition[282]andartificial intelligence(AI).[283] As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284]By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though, to a human, the image looks nothing like the search target. Such manipulation is termed an "adversarial attack".[285] In 2016, researchers used one ANN to doctor images in a trial-and-error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images, when photographed, successfully tricked an image classification system.[286]One defense is reverse image search, in which a possible fake image is submitted to a site such asTinEyethat can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287] Another group showed that certainpsychedelicspectacles could fool afacial recognition systeminto thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017, researchers added stickers tostop signsand caused an ANN to misclassify them.[286] ANNs can, however, be further trained to detect attempts atdeception, potentially leading attackers and defenders into an arms race similar to the kind that already defines themalwaredefense industry. 
ANNs have been trained to defeat ANN-based anti-malwaresoftware by repeatedly attacking a defense with malware that was continually altered by agenetic algorithmuntil it tricked the anti-malware while retaining its ability to damage the target.[286] In 2016, another group demonstrated that certain sounds could make theGoogle Nowvoice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286] In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286] The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288]It has been argued that not only low-paidclickwork(such as onAmazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of humanmicroworkthat are often not recognized as such.[289]The philosopherRainer Mühlhoffdistinguishes five types of "machinic capture" of human microwork to generate training data: (1)gamification(the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g.CAPTCHAsfor image recognition or click-tracking on Googlesearch results pages), (3) exploitation of social motivations (e.g.tagging facesonFacebookto obtain labeled facial images), (4)information mining(e.g. by leveragingquantified-selfdevices such asactivity trackers) and (5)clickwork.[289]
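As a deliberately simplified illustration of the adversarial attacks described above, the following sketch (a toy logistic-regression classifier in Python with NumPy, using made-up weights and inputs, not the models or attacks used in the cited studies) shows the idea behind gradient-sign perturbations: nudging each input feature slightly in the direction that increases the loss sharply reduces the model's confidence in the correct class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" linear classifier: hypothetical weights and bias.
w = np.array([2.0, -3.0, 1.5, 0.5])
b = -0.2

def predict(x):
    # Probability assigned to the positive class.
    return sigmoid(x @ w + b)

x = np.array([0.9, 0.1, 0.4, 0.7])   # an input the model classifies confidently
y = 1.0                               # its true label

# Gradient of the cross-entropy loss with respect to the input:
# for logistic regression this is (p - y) * w.
grad_x = (predict(x) - y) * w

# Gradient-sign-style perturbation: a small step on every feature that increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("original prediction:   ", predict(x))      # high confidence in the correct class
print("adversarial prediction:", predict(x_adv))  # confidence drops sharply
```

Against a deep image classifier, the same idea operates on thousands of pixel values at once, which is why the perturbed image can remain visually indistinguishable from the original while still changing the model's output.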
Machine learning(ML) is afield of studyinartificial intelligenceconcerned with the development and study ofstatistical algorithmsthat can learn fromdataandgeneraliseto unseen data, and thus performtaskswithout explicitinstructions.[1]Within the subdiscipline ofdeep learning, advances have allowedneural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.[2] ML finds application in many fields, includingnatural language processing,computer vision,speech recognition,email filtering,agriculture, andmedicine.[3][4]The application of ML to business problems is known aspredictive analytics. Statisticsandmathematical optimisation(mathematical programming) methods comprise the foundations of machine learning.Data miningis a related field of study, focusing onexploratory data analysis(EDA) viaunsupervised learning.[6][7] From a theoretical viewpoint,probably approximately correct learningprovides a framework for describing machine learning. The termmachine learningwas coined in 1959 byArthur Samuel, anIBMemployee and pioneer in the field ofcomputer gamingandartificial intelligence.[8][9]The synonymself-teaching computerswas also used in this time period.[10][11] Although the earliest machine learning model was introduced in the 1950s whenArthur Samuelinvented aprogramthat calculated the winning chance in checkers for each side, the history of machine learning is rooted in decades of human desire and effort to study human cognitive processes.[12]In 1949,CanadianpsychologistDonald Hebbpublished the bookThe Organization of Behavior, in which he introduced atheoretical neural structureformed by certain interactions amongnerve cells.[13]Hebb's model ofneuronsinteracting with one another laid the groundwork for how AIs and machine learning algorithms operate using nodes, orartificial neurons, used by computers to communicate data.[12]Other researchers who have studied humancognitive systemscontributed to modern machine learning technologies as well, including logicianWalter PittsandWarren McCulloch, who proposed early mathematical models of neural networks to devisealgorithmsthat mirror human thought processes.[12] By the early 1960s, an experimental "learning machine" withpunched tapememory, called Cybertron, had been developed byRaytheon Companyto analysesonarsignals,electrocardiograms, and speech patterns using rudimentaryreinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions.[14]A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.[15]Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.[16]In 1981, a report was given on using teaching strategies so that anartificial neural networkcould learn to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.[17] Tom M. 
Mitchellprovided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experienceEwith respect to some class of tasksTand performance measurePif its performance at tasks inT, as measured byP, improves with experienceE."[18]This definition of the tasks in which machine learning is concerned offers a fundamentallyoperational definitionrather than defining the field in cognitive terms. This followsAlan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[19] Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.[20] As a scientific endeavour, machine learning grew out of the quest forartificial intelligence(AI). In the early days of AI as anacademic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostlyperceptronsandother modelsthat were later found to be reinventions of thegeneralised linear modelsof statistics.[22]Probabilistic reasoningwas also employed, especially inautomated medical diagnosis.[23]: 488 However, an increasing emphasis on thelogical, knowledge-based approachcaused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[23]: 488By 1980,expert systemshad come to dominate AI, and statistics was out of favour.[24]Work on symbolic/knowledge-based learning did continue within AI, leading toinductive logic programming(ILP), but the more statistical line of research was now outside the field of AI proper, inpattern recognitionandinformation retrieval.[23]: 708–710, 755Neural networks research had been abandoned by AI andcomputer sciencearound the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines includingJohn Hopfield,David Rumelhart, andGeoffrey Hinton. Their main success came in the mid-1980s with the reinvention ofbackpropagation.[23]: 25 Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from thesymbolic approachesit had inherited from AI, and toward methods and models borrowed from statistics,fuzzy logic, andprobability theory.[24] There is a close connection between machine learning and compression. A system that predicts theposterior probabilitiesof a sequence given its entire history can be used for optimal data compression (by usingarithmetic codingon the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). 
This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".[25][26][27] In an alternative view, compression algorithms implicitly map strings intofeature space vectors, and compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.), an associated vector space ℵ is defined, such that C(.) maps an input string x to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; instead, three representative lossless compression methods, LZW, LZ77, and PPM, are typically examined.[28] According toAIXItheory, a connection more directly explained inHutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since it cannot be unzipped without both, but there may be an even smaller combined form. Examples of AI-powered audio/video compression software includeNVIDIA Maxineand AIVC.[29]Examples of software that can perform AI-powered image compression includeOpenCV,TensorFlow,MATLAB's Image Processing Toolbox (IPT), and High-Fidelity Generative Image Compression.[30] Inunsupervised machine learning,k-means clusteringcan be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such asimage compression.[31] Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by thecentroidof its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial inimageandsignal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.[32] Machine learning anddata miningoften employ the same methods and overlap significantly, but while machine learning focuses on prediction, based onknownproperties learned from the training data, data mining focuses on thediscoveryof (previously)unknownproperties in the data (this is the analysis step ofknowledge discoveryin databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals,ECML PKDDbeing a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability toreproduce knownknowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previouslyunknownknowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data. 
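As a concrete illustration of the k-means-based compression described above, the sketch below (a minimal NumPy implementation of Lloyd's algorithm on made-up pixel data, not a production codec) clusters the RGB values of an image-like array and replaces each pixel with its nearest centroid, so that only the k centroid colours and one small index per pixel need to be stored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "image": 100x100 pixels with RGB values in [0, 1].
pixels = rng.random((100 * 100, 3))

def kmeans(points, k, iterations=20):
    # Lloyd's algorithm: alternate between assigning points to the nearest
    # centroid and moving each centroid to the mean of its assigned points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids, labels

k = 16
centroids, labels = kmeans(pixels, k)

# Compressed representation: k centroid colours plus one small index per pixel.
compressed_pixels = centroids[labels]
print("distinct colours kept:", k)
print("mean reconstruction error:", np.abs(pixels - compressed_pixels).mean())
```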
Machine learning also has intimate ties tooptimisation: Many learning problems are formulated as minimisation of someloss functionon a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign alabelto instances, and models are trained to correctly predict the preassigned labels of a set of examples).[35] Characterizing the generalisation of various learning algorithms is an active topic of current research, especially fordeep learningalgorithms. Machine learning andstatisticsare closely related fields in terms of methods, but distinct in their principal goal: statistics draws populationinferencesfrom asample, while machine learning finds generalisable predictive patterns.[36]According toMichael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[37]He also suggested the termdata scienceas a placeholder to call the overall field.[37] Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.[38] Leo Breimandistinguished two statistical modelling paradigms: data model and algorithmic model,[39]wherein "algorithmic model" means more or less the machine learning algorithms likeRandom Forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they callstatistical learning.[40] Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space ofdeep neural networks.[41]Statistical physics is thus finding applications in the area ofmedical diagnostics.[42] A core objective of a learner is to generalise from its experience.[5][43]Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch oftheoretical computer scienceknown ascomputational learning theoryvia theprobably approximately correct learningmodel. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. Thebias–variance decompositionis one way to quantify generalisationerror. For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. 
But if the hypothesis is too complex, then the model is subject tooverfittingand generalisation will be poorer.[44] In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done inpolynomial time. There are two kinds oftime complexityresults: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Although each algorithm has advantages and limitations, no single algorithm works for all problems.[45][46][47] Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[48]The data, known astraining data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by anarrayor vector, sometimes called afeature vector, and the training data is represented by amatrix. Throughiterative optimisationof anobjective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[49]An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[18] Types of supervised-learning algorithms includeactive learning,classificationandregression.[50]Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data.[51] Similarity learningis an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications inranking,recommendation systems, visual identity tracking, face verification, and speaker verification. Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering,dimensionality reduction,[7]anddensity estimation.[52] Cluster analysis is the assignment of a set of observations into subsets (calledclusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. 
Different clustering techniques make different assumptions on the structure of the data, often defined by somesimilarity metricand evaluated, for example, byinternal compactness, or the similarity between members of the same cluster, andseparation, the difference between clusters. Other methods are based onestimated densityandgraph connectivity. A special type of unsupervised learning called,self-supervised learninginvolves training a model by generating the supervisory signal from the data itself.[53][54] Semi-supervised learning falls betweenunsupervised learning(without any labelled training data) andsupervised learning(with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. Inweakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.[55] Reinforcement learning is an area of machine learning concerned with howsoftware agentsought to takeactionsin an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such asgame theory,control theory,operations research,information theory,simulation-based optimisation,multi-agent systems,swarm intelligence,statisticsandgenetic algorithms. In reinforcement learning, the environment is typically represented as aMarkov decision process(MDP). Many reinforcement learning algorithms usedynamic programmingtechniques.[56]Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent. Dimensionality reductionis a process of reducing the number of random variables under consideration by obtaining a set of principal variables.[57]In other words, it is a process of reducing the dimension of thefeatureset, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination orextraction. One of the popular methods of dimensionality reduction isprincipal component analysis(PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). Themanifold hypothesisproposes that high-dimensional data sets lie along low-dimensionalmanifolds, and many dimensionality reduction techniques make this assumption, leading to the area ofmanifold learningandmanifold regularisation. Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. For example,topic modelling,meta-learning.[58] Self-learning, as a machine learning paradigm was introduced in 1982 along with a neural network capable of self-learning, namedcrossbar adaptive array(CAA).[59][60]It gives a solution to the problem learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. 
The system is driven by the interaction between cognition and emotion.[61]The self-learning algorithm updates a memory matrix W =||w(a,s)|| such that in each iteration executes the following machine learning routine: It is a system with only one input, situation, and only one output, action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour, in an environment that contains both desirable and undesirable situations.[62] Several learning algorithms aim at discovering better representations of the inputs provided during training.[63]Classic examples includeprincipal component analysisand cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manualfeature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples includeartificial neural networks,multilayer perceptrons, and superviseddictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning,independent component analysis,autoencoders,matrix factorisation[64]and various forms ofclustering.[65][66][67] Manifold learningalgorithms attempt to do so under the constraint that the learned representation is low-dimensional.Sparse codingalgorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros.Multilinear subspace learningalgorithms aim to learn low-dimensional representations directly fromtensorrepresentations for multidimensional data, without reshaping them into higher-dimensional vectors.[68]Deep learningalgorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[69] Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. 
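Principal component analysis, cited above as a classic example of learning a new representation of the inputs, can be sketched in a few lines. This is a minimal illustration assuming NumPy; the function and argument names are illustrative.

    import numpy as np

    def pca(X, n_components=2):
        """Minimal PCA: project data onto the top principal components.
        X: (n_samples, n_features) array; returns the reduced representation."""
        X_centered = X - X.mean(axis=0)               # centre each feature
        # Rows of Vt are the principal directions, ordered by explained variance.
        U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
        return X_centered @ Vt[:n_components].T       # low-dimensional features

The learned directions act as automatically discovered features: downstream classifiers or predictors can be trained on the projected data instead of on hand-engineered inputs.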
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination ofbasis functionsand assumed to be asparse matrix. The method isstrongly NP-hardand difficult to solve approximately.[70]A popularheuristicmethod for sparse dictionary learning is thek-SVDalgorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied inimage de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[71] Indata mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[72]Typically, the anomalous items represent an issue such asbank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to asoutliers, novelties, noise, deviations and exceptions.[73] In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.[74] Three broad categories of anomaly detection techniques exist.[75]Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model. Robot learningis inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning,[76][77]and finallymeta-learning(e.g. MAML). Association rule learning is arule-based machine learningmethod for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[78] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. 
This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[79]Rule-based machine learning approaches includelearning classifier systems, association rule learning, andartificial immune systems. Based on the concept of strong rules,Rakesh Agrawal,Tomasz Imielińskiand Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded bypoint-of-sale(POS) systems in supermarkets.[80]For example, the rule{onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}}found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotionalpricingorproduct placements. In addition tomarket basket analysis, association rules are employed today in application areas includingWeb usage mining,intrusion detection,continuous production, andbioinformatics. In contrast withsequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems(LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically agenetic algorithm, with a learning component, performing eithersupervised learning,reinforcement learning, orunsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in apiecewisemanner in order to make predictions.[81] Inductive logic programming(ILP) is an approach to rule learning usinglogic programmingas a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program thatentailsall positive and no negative examples.Inductive programmingis a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such asfunctional programs. Inductive logic programming is particularly useful inbioinformaticsandnatural language processing.Gordon PlotkinandEhud Shapirolaid the initial theoretical foundation for inductive machine learning in a logical setting.[82][83][84]Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[85]The terminductivehere refers tophilosophicalinduction, suggesting a theory to explain observed facts, rather thanmathematical induction, proving a property for all members of a well-ordered set. Amachine learning modelis a type ofmathematical modelthat, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions.[86]By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.[87] Various types of models have been used and researched for machine learning systems, picking the best model for a task is calledmodel selection. 
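The interestingness of a rule such as {onions, potatoes} ⇒ {burger} above is commonly measured by its support and confidence. The sketch below computes both over a small, hypothetical set of transactions; the data and function names are illustrative.

    # Hypothetical point-of-sale transactions, each a set of purchased items.
    transactions = [
        {"onions", "potatoes", "burger"},
        {"onions", "potatoes"},
        {"milk", "bread"},
        {"onions", "potatoes", "burger", "beer"},
    ]

    def support(itemset):
        """Fraction of transactions containing every item in the itemset."""
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(antecedent, consequent):
        """How often the consequent appears given the antecedent appears."""
        return support(antecedent | consequent) / support(antecedent)

    print(support({"onions", "potatoes"}))                 # 0.75
    print(confidence({"onions", "potatoes"}, {"burger"}))  # ≈ 0.67

Algorithms such as Apriori enumerate itemsets whose support exceeds a threshold and then keep only the rules whose confidence is high enough to be considered "interesting".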
Artificial neural networks (ANNs), orconnectionistsystems, are computing systems vaguely inspired by thebiological neural networksthat constitute animalbrains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model theneuronsin a biological brain. Each connection, like thesynapsesin a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is areal number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have aweightthat adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that ahuman brainwould. However, over time, attention moved to performing specific tasks, leading to deviations frombiology. Artificial neural networks have been used on a variety of tasks, includingcomputer vision,speech recognition,machine translation,social networkfiltering,playing board and video gamesandmedical diagnosis. Deep learningconsists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[88] Decision tree learning uses adecision treeas apredictive modelto go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures,leavesrepresent class labels, and branches representconjunctionsof features that lead to those class labels. Decision trees where the target variable can take continuous values (typicallyreal numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions anddecision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making. Random forest regression (RFR) falls under the umbrella of decisiontree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and to avoid overfitting. To build decision trees, RFR uses bootstrapped sampling; that is, each decision tree is trained on a random sample drawn from the training set.
This random selection of training data enables the model to reduce bias in its predictions and improve accuracy. RFR generates independent decision trees, and it can handle both single-output and multi-output regression tasks. This makes RFR suitable for use in various applications.[89][90] Support-vector machines (SVMs), also known as support-vector networks, are a set of relatedsupervised learningmethods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[91]An SVM training algorithm is a non-probabilistic,binary,linear classifier, although methods such asPlatt scalingexist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called thekernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form islinear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such asordinary least squares. The latter is often extended byregularisationmethods to mitigate overfitting and bias, as inridge regression. When dealing with non-linear problems, go-to models includepolynomial regression(for example, used for trendline fitting in Microsoft Excel[92]),logistic regression(often used instatistical classification) or evenkernel regression, which introduces non-linearity by taking advantage of thekernel trickto implicitly map input variables to higher-dimensional space. Multivariate linear regressionextends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting amultidimensionallinear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images,[93]which are inherently multi-dimensional. A Bayesian network, belief network, or directed acyclic graphical model is a probabilisticgraphical modelthat represents a set ofrandom variablesand theirconditional independencewith adirected acyclic graph(DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that performinferenceand learning. Bayesian networks that model sequences of variables, likespeech signalsorprotein sequences, are calleddynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are calledinfluence diagrams. A Gaussian process is astochastic processin which every finite collection of the random variables in the process has amultivariate normal distribution, and it relies on a pre-definedcovariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. Gaussian processes are popular surrogate models inBayesian optimisationused to dohyperparameter optimisation. A genetic algorithm (GA) is asearch algorithmandheuristictechnique that mimics the process ofnatural selection, using methods such asmutationandcrossoverto generate newgenotypesin the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[95][96]Conversely, machine learning techniques have been used to improve the performance of genetic andevolutionary algorithms.[97] The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such asprobability,possibilityandimprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as apmf-based Bayesian approach would combine probabilities.[98]However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance anduncertainty quantification. These belief function approaches that are implemented within the machine learning domain typically leverage a fusion approach of variousensemble methodsto better handle the learner'sdecision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving.[4][9]However, the computational complexity of these algorithms is dependent on the number of propositions (classes), which can lead to much higher computation times when compared to other machine learning approaches. Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques includelearning classifier systems,[99]association rule learning,[100]artificial immune systems,[101]and other similar models. These methods extract patterns from data and evolve rules over time. Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representativesampleof data. Data from the training set can be as varied as acorpus of text, a collection of images,sensordata, and data collected from individual users of a service.Overfittingis something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives.Algorithmic biasis a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and, notably, is becoming integrated within machine learning engineering teams.
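The Gaussian-process computation outlined above, in which the output at a new point is predicted from the covariances between the observed points and the new point, can be sketched numerically. This is a minimal illustration assuming NumPy; the squared-exponential kernel, length scale, and noise level are illustrative choices rather than prescribed ones.

    import numpy as np

    def rbf_kernel(A, B, length_scale=1.0):
        """Squared-exponential covariance between two sets of 1-D inputs."""
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale**2)

    def gp_posterior_mean(X_train, y_train, X_new, noise=1e-3):
        """Posterior mean of a zero-mean GP at new inputs, as described above."""
        K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
        K_s = rbf_kernel(X_train, X_new)
        alpha = np.linalg.solve(K, y_train)   # weights of the observed outputs
        return K_s.T @ alpha

    X = np.array([0.0, 1.0, 2.0])
    y = np.sin(X)
    print(gp_posterior_mean(X, y, np.array([1.5])))  # interpolates near sin(1.5)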
Federated learning is an adapted form ofdistributed artificial intelligenceto training machine learning models that decentralises the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralised server. This also increases efficiency by decentralising the training process to many devices. For example,Gboarduses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back toGoogle.[102] There are many applications for machine learning, including: In 2006, the media-services providerNetflixheld the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers fromAT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built anensemble modelto win the Grand Prize in 2009 for $1 million.[105]Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[106]In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[107]In 2012, co-founder ofSun Microsystems,Vinod Khosla, predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[108]In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists.[109]In 2019Springer Naturepublished the first research book created using machine learning.[110]In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19.[111]Machine learning was recently applied to predict the pro-environmental behaviour of travellers.[112]Recently, machine learning technology was also applied to optimise smartphone's performance and thermal behaviour based on the user's interaction with the phone.[113][114][115]When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns withoutoverfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques likeOLS.[116] Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.[117] Machine Learning is becoming a useful tool to investigate and predict evacuation decision making in large scale and small scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes.[118][119][120]Other applications have been focusing on pre evacuation decisions in building fires.[121][122] Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. 
Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns.[123] Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[124][125][126]Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[127] The "black box theory" poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data.[128]The House of Lords Select Committee claimed that such an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.[128] In 2018, a self-driving car fromUberfailed to detect a pedestrian, who was killed after a collision.[129]Attempts to use machine learning in healthcare with theIBM Watsonsystem failed to deliver even after years of work and billions of dollars invested.[130][131]Microsoft'sBing Chatchatbot has been reported to produce hostile and offensive responses to its users.[132] Machine learning has been used as a strategy to update the evidence related to a systematic review and to manage the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.[133] Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.[134]It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision.[135]By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.[136] Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[137]A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.[138][139] Adversarial vulnerabilities can also result from nonlinear systems, or from non-pattern perturbations.
For some systems, it is possible to change the output by only changing a single adversarially chosen pixel.[140]Machine learning models are often vulnerable to manipulation or evasion viaadversarial machine learning.[141] Researchers have demonstrated howbackdoorscan be placed undetectably into classifying (e.g., for categories "spam" and well-visible "not spam" of posts) machine learning models that are often developed or trained by third parties. Parties can change the classification of any input, including in cases for which a type ofdata/software transparencyis provided, possibly includingwhite-box access.[142][143][144] Classification of machine learning models can be validated by accuracy estimation techniques like theholdoutmethod, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validationmethod randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods,bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[145] In addition to overall accuracy, investigators frequently reportsensitivity and specificitymeaning true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report thefalse positive rate(FPR) as well as thefalse negative rate(FNR). However, these rates are ratios that fail to reveal their numerators and denominators.Receiver operating characteristic(ROC) along with the accompanying Area Under the ROC Curve (AUC) offer additional tools for classification model assessment. Higher AUC is associated with a better performing model.[146] Theethicsofartificial intelligencecovers a broad range of topics within AI that are considered to have particular ethical stakes.[147]This includesalgorithmic biases,fairness,[148]automated decision-making,[149]accountability,privacy, andregulation. It also covers various emerging or potential future challenges such asmachine ethics(how to make machines that behave ethically),lethal autonomous weapon systems,arms racedynamics,AI safetyandalignment,technological unemployment, AI-enabledmisinformation, how to treat certain AI systems if they have amoral status(AI welfare and rights),artificial superintelligenceandexistential risks.[147] Different machine learning approaches can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.[150] Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices.[151]For example, in 1988, the UK'sCommission for Racial Equalityfound thatSt. 
George's Medical Schoolhad been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names.[150]Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[152][153]Another example includes predictive policing companyGeolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.[154] While responsiblecollection of dataand documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases.[155]In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world.[156]Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.[156] Language models learned from data have been shown to contain human-like biases.[157][158]Because human languages contain biases, machines trained on languagecorporawill necessarily also learn these biases.[159][160]In 2016, Microsoft testedTay, achatbotthat learned from Twitter, and it quickly picked up racist and sexist language.[161] In an experiment carried out byProPublica, aninvestigative journalismorganisation, a machine learning algorithm's insight into the recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants".[154]In 2015, Google Photos once tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and in 2023, it still cannot recognise gorillas.[162]Similar issues with recognising non-white people have been found in many other systems.[163] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[164]Concern forfairnessin machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, includingFei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[165] There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. 
There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.[166] Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for trainingdeep neural networks(a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units.[167]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[168]OpenAIestimated the hardware compute used in the largest deep learning projects fromAlexNet(2012) toAlphaZero(2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[169][170] Tensor Processing Units (TPUs)are specialised hardware accelerators developed byGooglespecifically for machine learning workloads. Unlike general-purposeGPUsandFPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency.[171]Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments. Neuromorphic computingrefers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures.[172] Aphysical neural networkis a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function ofneural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.[173][174] Embedded machine learning is a sub-field of machine learning where models are deployed onembedded systemswith limited computing resources, such aswearable computers,edge devicesandmicrocontrollers.[175][176][177][178]Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such ashardware acceleration,[179][180]approximate computing,[181]and model optimisation.[182][183]Common optimisation techniques includepruning,quantisation,knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing. Software suitescontaining a variety of machine learning algorithms include the following:
https://en.wikipedia.org/wiki/Applications_of_machine_learning
Collective intelligence(CI) is shared orgroupintelligence(GI) thatemergesfrom thecollaboration, collective efforts, and competition of many individuals and appears inconsensus decision making. The term appears insociobiology,political scienceand in context of masspeer reviewandcrowdsourcingapplications. It may involveconsensus,social capitalandformalismssuch asvoting systems,social mediaand other means of quantifying mass activity.[1]CollectiveIQis a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed tobacteriaand animals.[2] It can be understood as anemergent propertyfrom thesynergiesamong: Or it can be more narrowly understood as an emergent property between people and ways of processing information.[4]This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson.[5]The concept is used insociology,business,computer scienceand mass communications: it also appears inscience fiction.Pierre Lévydefines collective intelligence as, "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized orhypostatizedcommunities."[6]According to researchers Pierre Lévy andDerrick de Kerckhove, it refers to capacity of networkedICTs(Information communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions.[7][8]A broader definition was provided byGeoff Mulganin a series of lectures and reports from 2006 onwards[9]and in the book Big Mind[10]which proposed a framework for analysing any thinking system, including both human and machine intelligence, in terms of functional elements (observation, prediction, creativity, judgement etc.), learning loops and forms of organisation. The aim was to provide a way to diagnose, and improve, the collective intelligence of a city, business, NGO or parliament. Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According toEric S. Raymondin 1998 and JC Herz in 2005,[11][12]open-source intelligencewill eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations.[13]Media theoristHenry Jenkinssees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. 
Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence.[14]Both Pierre Lévy and Henry Jenkins support the claim that collective intelligence is important fordemocratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.[15][16] Similar to thegfactor (g)for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factorc factorfor groups indicating a group's ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are derived fromg. Similarly asgis highly interrelated with the concept ofIQ,[18][19]this measurement of collective intelligence can be interpreted as intelligence quotient for groups (Group-IQ) even though the score is not a quotient per se. Causes forcand predictive validity are investigated as well. Writers who have influenced the idea of collective intelligence includeFrancis Galton,Douglas Hofstadter(1979), Peter Russell (1983),Tom Atlee(1993),Pierre Lévy(1994),Howard Bloom(1995),Francis Heylighen(1995),Douglas Engelbart, Louis Rosenberg,Cliff Joslyn,Ron Dembo,Gottfried Mayer-Kress(2003), andGeoff Mulgan. The concept (although not so named) originated in 1785 with theMarquis de Condorcet, whose"jury theorem"states that if each member of a voting group is more likely than not to make a correct decision, the probability that the highest vote of the group is the correct decision increases with the number of members of the group.[20]Many theorists have interpretedAristotle's statement in thePoliticsthat "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision.[21][22]Recent scholarship,[23]however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.[24] A precursor of the concept is found in entomologistWilliam Morton Wheeler's observation in 1910 that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism.[25]Wheeler saw this collaborative process at work inantsthat acted like the cells of a single beast he called asuperorganism. In 1912Émile Durkheimidentified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time.[26]Other antecedents areVladimir VernadskyandPierre Teilhard de Chardin's concept of "noosphere" andH. G. Wells's concept of "world brain".[27]Peter Russell,Elisabet Sahtouris, andBarbara Marx Hubbard(originator of the term "conscious evolution")[28]are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. 
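Condorcet's jury theorem, described above, can be made concrete with a short calculation: if each voter is independently correct with probability p greater than one half, the probability that the majority reaches the correct decision grows with the size of the group. The following is a minimal sketch; the probability value and group sizes are illustrative.

    from math import comb

    def majority_correct(n, p):
        """Probability that a majority of n independent voters, each correct
        with probability p, reaches the correct decision (n odd)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # With p = 0.6, the majority is right more often as the group grows:
    print(majority_correct(3, 0.6))   # 0.648
    print(majority_correct(11, 0.6))  # ≈ 0.75
    print(majority_correct(51, 0.6))  # ≈ 0.93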
In a 1962 research report,Douglas Engelbartlinked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[29]In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.[30] The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to asepistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth and relies on mechanisms to synthesize and apply collective intelligence.[31] Collective intelligence was introduced into the machine learning community in the late 20th century,[32]and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal.[33][34]This was related to single-agent work on "reward shaping"[35]and has been taken forward by numerous researchers in the game theory and engineering communities.[36] Howard Bloomhas discussed mass behavior –collective behaviorfrom the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts ofapoptosis,parallel distributed processing,group selection, and the superorganism to produce a theory of how collective intelligence works.[37]Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered byJohn Holland.[38] Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life.[38]Ant societiesexhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for exampleaphidsfor "milking".[38]Leaf cutters care for fungi and carry leaves to feed the fungi.[38] David Skrbina[39]cites the concept of a 'group mind' as being derived from Plato's concept ofpanpsychism(that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated byThomas HobbesinLeviathanandFechner's arguments for acollective consciousnessof mankind. He citesDurkheimas the most notable advocate of a "collective consciousness"[40]andTeilhard de Chardinas a thinker who has developed the philosophical implications of the group mind.[41] Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individualcognitive biasin order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." 
George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration."[42]Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action".[43]Their approach is rooted inscientific community metaphor.[43] The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity.[17]The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction.[44]The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group.[44] Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory andartificial intelligencehave something to offer.[43]Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts.[45]Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member.[46]Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.[43] Robert David Steele VivasinThe New Craft of Intelligenceportrayed all citizens as "intelligence minutemen", drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.[47] According toDon TapscottandAnthony D. Williams, collective intelligence ismass collaboration. In order for this concept to happen, four principles need to exist:[48] A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are similar to thepsychometric approach of general individual intelligence. Hereby, an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability indicated by the general intelligencefactorgproposed by English psychologistCharles Spearmanand extracted viafactor analysis.[49]In the same vein asgserves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups'cfactor'[17](also called 'collective intelligence factor' (CI)[50]) displaying between-group differences on task performance. The collective intelligence score then is used to predict how this same group will perform on any other similar task in the future. 
Yet tasks, hereby, refer to mental or intellectual tasks performed by small groups[17]even though the concept is hoped to be transferable to other performances and any groups or crowds reaching from families to companies and even whole cities.[51]Since individuals'gfactor scores are highly correlated with full-scaleIQscores, which are in turn regarded as good estimates ofg,[18][19]this measurement of collective intelligence can also be seen as an intelligence indicator or quotient respectively for a group (Group-IQ) parallel to an individual's intelligence quotient (IQ) even though the score is not a quotient per se. Mathematically,candgare both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks.[52]cthus is a source of variance among groups and can only be considered as a group's standing on thecfactor compared to other groups in a given relevant population.[19][53]The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence,[17]such as a composition out of several equally important but independent factors as found inindividual personality research.[54] Besides, this scientific idea also aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills.[55]TheMIT Center for Collective Intelligence, for instance, announced the detection ofThe Genome of Collective Intelligence[55]as one of its main goals aiming to develop a "taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds".[55] Individual intelligence is shown to be genetically and environmentally influenced.[56][57]Analogously, collective intelligence research aims to explore reasons why certain groups perform more intelligently than other groups given thatcis just moderately correlated with the intelligence of individual group members.[17]According to Woolley et al.'s results, neither team cohesion nor motivation or satisfaction is correlated withc. However, they claim that three factors were found as significant correlates: the variance in the number of speaking turns, group members' average social sensitivity and the proportion of females. All three had similar predictive power forc, but only social sensitivity was statistically significant (b=0.33, P=0.05).[17] The number speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking".[50]Hence, providing multiple team members the chance to speak up made a group more intelligent.[17] Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test[58](RME) and correlated .26 withc.[17]Hereby, participants are asked to detect thinking or feeling expressed in other peoples' eyes presented on pictures and assessed in a multiple choice format. 
The test aims to measure people's theory of mind (ToM), also called 'mentalizing'[59][60][61][62] or 'mind reading',[63] which refers to the ability to attribute mental states, such as beliefs, desires or intents, to other people, and the extent to which people understand that others have beliefs, desires, intentions or perspectives different from their own.[58] RME is a ToM test for adults[58] that shows sufficient test-retest reliability[64] and consistently differentiates control groups from individuals with functional autism or Asperger Syndrome.[58] It is one of the most widely accepted and well-validated tests for ToM in adults.[65] ToM can be regarded as an associated subset of skills and abilities within the broader concept of emotional intelligence.[50][66] The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03),[17] which is in line with previous research showing that women score higher on social sensitivity tests.[58] While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable,[67] Woolley agreed in an interview with the Harvard Business Review that these findings suggest that groups of women are smarter than groups of men.[51] However, she qualifies this by stating that what actually matters is the high social sensitivity of group members.[51] It is theorized that the collective intelligence factor c is an emergent property resulting from bottom-up as well as top-down processes.[44] Here, bottom-up processes cover aggregated group-member characteristics, while top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.[44] Top-down processes cover group interaction, such as structures, processes, and norms.[44] An example of such top-down processes is conversational turn-taking.[17] Research further suggests that collectively intelligent groups communicate more overall and more equally; the same applies to participation and has been shown for face-to-face groups as well as for online groups communicating only via writing.[50][68] Bottom-up processes include group composition,[44] namely the characteristics of group members which are aggregated to the team level.[44] An example of such bottom-up processes is the average social sensitivity or the average and maximum intelligence scores of group members.[17] Furthermore, collective intelligence was found to be related to a group's cognitive diversity,[69] including thinking styles and perspectives.[70] Groups that are moderately diverse in cognitive style have higher collective intelligence than those that are very similar or very different in cognitive style. Consequently, groups whose members are too similar to each other lack the variety of perspectives and skills needed to perform well, while groups whose members are too different seem to have difficulty communicating and coordinating effectively.[69] For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members.[71] In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups, distributed across continents and time zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time.
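The Sobel test behind the mediation result reported above has a simple closed form; a minimal sketch with made-up path coefficients (not the published estimates) shows how the z statistic and a one-tailed p-value are computed.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect effect a*b in a simple mediation model.

    a, se_a: effect (and standard error) of the predictor on the mediator
    b, se_b: effect (and standard error) of the mediator on the outcome,
             controlling for the predictor
    """
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical path estimates, used only to show the mechanics of the test.
z = sobel_z(a=0.45, se_a=0.15, b=0.50, se_b=0.18)
p_one_tailed = 0.5 * math.erfc(z / math.sqrt(2))   # upper-tail normal probability
print(f"Sobel z = {z:.2f}, one-tailed p = {p_one_tailed:.3f}")
```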
While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote cast in a serialized voting system can distort the final result by 34%.[72] To address the problems of serialized aggregation of input among large-scale groups, recent advances in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature.[73][74] Based on the natural process of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence.[75][76] In one high-profile example, a human swarm was challenged by CBS Interactive to predict the Kentucky Derby; the swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.[77] The value of parallel collective intelligence was demonstrated in medical applications by researchers at Stanford University School of Medicine and Unanimous AI in a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia.[78][79] When working together as "human swarms", the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.[80][81] Woolley, Chabris, Pentland, Hashmi, & Malone (2010),[17] the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups with people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex,[82] a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were then used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research regarding a general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.[52] Afterwards, a more complex task was solved by each group to determine whether c factor scores predict performance on tasks beyond the original test battery. The criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks.
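A hedged sketch of the kind of regression comparison just described, on simulated data with invented effect sizes (statsmodels is assumed to be available): a criterion-task score is regressed on a hypothetical c score and on average member intelligence.

```python
import numpy as np
import statsmodels.api as sm   # assumed to be installed

rng = np.random.default_rng(1)
n = 40                                            # hypothetical number of groups

c_score = rng.normal(size=n)                      # collective intelligence factor score
avg_iq = 0.15 * c_score + rng.normal(size=n)      # weakly related, echoing the reported r = 0.15
criterion = 0.6 * c_score + 0.1 * avg_iq + rng.normal(size=n)  # e.g. checkers vs. computer

X = sm.add_constant(np.column_stack([c_score, avg_iq]))
fit = sm.OLS(criterion, X).fit()
print(fit.params)    # order: constant, c, average IQ
print(fit.pvalues)   # under this toy setup, c tends to be significant and average IQ not
```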
According to Woolley et al., this supports the existence of a collective intelligence factorc,because it demonstrates an effect over and beyond group members' individual intelligence and thus thatcis more than just the aggregation of the individual IQs or the influence of the group member with the highest IQ.[17] Engel et al.[50](2014) replicated Woolley et al.'s findings applying an accelerated battery of tasks with a first factor in the factor analysis explaining 49% of the between-group variance in performance with the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online communicating only via text and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Similarly to Wolley et al.,[17]they also measured social sensitivity with the RME which is actually meant to measure people's ability to detect mental states in other peoples' eyes. The online collaborating participants, however, did neither know nor see each other at all. The authors conclude that scores on the RME must be related to a broader set of abilities of social reasoning than only drawing inferences from other people's eye expressions.[83] A collective intelligence factorcin the sense of Woolley et al.[17]was further found in groups of MBA students working together over the course of a semester,[84]in online gaming groups[68]as well as in groups from different cultures[85]and groups in different contexts in terms of short-term versus long-term groups.[85]None of these investigations considered team members' individual intelligence scores as control variables.[68][84][85] Note as well that the field of collective intelligence research is quite young and published empirical evidence is relatively rare yet. However, various proposals and working papers are in progress or already completed but (supposedly) still in ascholarly peer reviewingpublication process.[86][87][88][89] Next to predicting a group's performance on more complex criterion tasks as shown in the original experiments,[17]the collective intelligence factorcwas also found to predict group performance in diverse tasks in MBA classes lasting over several months.[84]Thereby, highly collectively intelligent groups earned significantly higher scores on their group assignments although their members did not do any better on other individually performed assignments. Moreover, highly collective intelligent teams improved performance over time suggesting that more collectively intelligent teams learn better.[84]This is another potential parallel to individual intelligence where more intelligent people are found to acquire new material quicker.[19][90] Individual intelligence can be used to predict plenty of life outcomes from school attainment[91]and career success[92]to health outcomes[93]and even mortality.[93]Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks has still to be investigated. Gladwell[94](2008) showed that the relationship between individual IQ and success works only to a certain point and that additional IQ points over an estimate of IQ 120 do not translate into real life advantages. If a similar border exists for Group-IQ or if advantages are linear and infinite, has still to be explored. 
Similarly, demand for further research on possible connections of individual and collective intelligence exists within plenty of other potentially transferable logics of individual intelligence, such as, for instance, the development over time[95]or the question of improving intelligence.[96][97]Whereas it is controversial whether human intelligence can be enhanced via training,[96][97]a group's collective intelligence potentially offers simpler opportunities for improvement by exchanging team members or implementing structures and technologies.[51]Moreover, social sensitivity was found to be, at least temporarily, improvable by readingliterary fiction[98]as well as watching drama movies.[99]In how far such training ultimately improves collective intelligence through social sensitivity remains an open question.[100] There are further more advanced concepts and factor models attempting to explain individual cognitive ability including the categorization of intelligence influid and crystallized intelligence[101][102]or thehierarchical model of intelligence differences.[103][104]Further supplementing explanations and conceptualizations for the factor structure of theGenomesof collective intelligence besides a general'cfactor', though, are missing yet.[105] Other scholars explain team performance by aggregating team members' general intelligence to the team level[106][107]instead of building an own overall collective intelligence measure. Devine and Philips[108](2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (0.37) as well as field settings (0.14) – note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are found to be most influenced by the team member with the lowest cognitive ability.[109]Tasks in which selecting the best team member is the most successful strategy, are shown to be most influenced by the member with the highest cognitive ability.[66] Since Woolley et al.'s[17]results do not show any influence of group satisfaction,group cohesiveness, or motivation, they, at least implicitly, challenge these concepts regarding the importance for group performance in general and thus contrast meta-analytically proven evidence concerning the positive effects ofgroup cohesion,[110][111][112]motivation[113][114]and satisfaction[115]on group performance. Some scholars have noted that the evidence for collective intelligence in the body of work by Wolley et al.[17]is weak and may contain errors or misunderstandings of the data.[116]For example, Woolley et al.[17]stated in their findings that the maximum individual score on the Wonderlic Personnel Test (WPT;[117]an individual intelligence test used in their research) was 39, but also that the maximum averaged team score on the same test was also a 39. This indicates that their sample seemingly had a team composed entirely of people who, individually, got exactly the same score on the WPT, and also all happened to all have achieved the highest scores on the WPT found in Woolley et al.[17]This was noted by scholars as particularly unlikely to occur.[116]Other anomalies found in the data indicate that results may be driven in part by low-effort responding.[17][116]For instance, Woolley et al.'s[17]data indicates that at least one team scored a 0 on a task in which they were given 10 minutes to come up with as many uses for a brick as possible. 
Similarly, Woolley et al.'s[17] data show that at least one team had an average score of 8 out of 50 on the WPT. Scholars have noted that the probability of this occurring with study participants who are putting forth effort is nearly zero.[116] This may explain why Woolley et al.[17] found that the group's individual intelligence scores were not predictive of performance. In addition, low effort on tasks in human subjects research may inflate evidence for a supposed collective intelligence factor based on similarity of performance across tasks, because a team's low effort on one research task may generalize to low effort across many tasks.[116][118][119] It is notable that such a phenomenon is present merely because of the low-stakes setting of laboratory research for research participants and not because it reflects how teams operate in organizations.[116][120] It is also noteworthy that the researchers behind the confirming findings overlap widely with each other and with the authors of the original study led by Anita Woolley.[17][44][50][69][83] On 3 May 2022, the authors of "Quantifying collective intelligence in human groups",[121] who include Riedl and Woolley from the original 2010 paper on collective intelligence,[17] issued a correction to the article after mathematically impossible findings reported in the article were noted publicly by researcher Marcus Credé. Among the corrections is an admission that the average variance extracted (AVE) – the average of the squared standardized factor loadings, and hence the evidence for collective intelligence – was only 19.6% in their confirmatory factor analysis. Notably, an AVE of at least 50% is generally required to demonstrate evidence for convergent validity of a single factor, with greater than 70% generally indicating good evidence for the factor.[122] Therefore, the evidence for collective intelligence referred to as "robust" in Riedl et al.[121] is in fact quite weak or nonexistent, as their primary evidence does not meet, or even approach, the lowest thresholds of acceptable evidence for a latent factor.[122] Curiously, despite this and several other factual inaccuracies found throughout the article, the paper has not been retracted, and these inaccuracies were apparently not originally detected by the author team, peer reviewers, or editors of the journal.[121] In 2001, Tadeusz (Tad) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.[123] In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic.[123] These molecules are displaced quasi-randomly through interactions with their environments, combined with their intended displacements.[123] Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence.[123] Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of social structure and seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence, considered as a specific computational process, provides a straightforward explanation of several social phenomena.
For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed and was defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure".[123]While IQS seems to be computationally hard, modeling of social structure in terms of a computational process as described above gives a chance for approximation.[123]Prospective applications are optimization of companies through the maximization of their IQS, and the analysis of drug resistance against collective intelligence of bacterial colonies.[123] One measure sometimes applied, especially by more artificial intelligence focused theorists, is a "collective intelligence quotient"[124](or "cooperation quotient") – which can be normalized from the "individual"intelligence quotient(IQ)[124]– thus making it possible to determine the marginal intelligence added by each new individual participating in thecollective action, thus usingmetricsto avoid the hazards ofgroup thinkandstupidity.[125] There have been many recent applications of collective intelligence, including in fields such as crowd-sourcing, citizen science and prediction markets. The Nesta Centre for Collective Intelligence Design[126]was launched in 2018 and has produced many surveys of applications as well as funding experiments. In 2020 the UNDP Accelerator Labs[127]began using collective intelligence methods in their work to accelerate innovation for theSustainable Development Goals. Here, the goal is to get an estimate (in a single value) of something. For example, estimating the weight of an object, or the release date of a product or probability of success of a project etc. as seen in prediction markets like Intrade, HSX or InklingMarkets and also in several implementations of crowdsourced estimation of a numeric outcome such as theDelphi method. Essentially, we try to get the average value of the estimates provided by the members in the crowd. In this situation, opinions are gathered from the crowd regarding an idea, issue or product. For example, trying to get a rating (on some scale) of a product sold online (such as Amazon's star rating system). Here, the emphasis is to collect and simply aggregate the ratings provided by customers/users. In these problems, someone solicits ideas for projects, designs or solutions from the crowd. For example, ideas on solving adata scienceproblem (as inKaggle) or getting a good design for a T-shirt (as inThreadless) or in getting answers to simple problems that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise some selection criteria to choose the best ideas. 
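A minimal sketch of the numeric-estimate aggregation described above; the crowd estimates are invented, and the median is shown alongside the mean as a more outlier-robust aggregate.

```python
from statistics import mean, median

# Hypothetical weight estimates (in kg) submitted by crowd members.
estimates = [410, 385, 450, 395, 420, 980, 405, 390]   # one obvious outlier

print("mean estimate:  ", round(mean(estimates), 1))
print("median estimate:", median(estimates))            # less sensitive to the outlier
```

The same pattern applies to aggregating product ratings: collect the individual scores and report a simple summary statistic over them.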
James Surowieckidivides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.[128] Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable.[129]Websites aggregate stock market information that is as current as possible so professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion.[129]The opinion of all investor can be weighed equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.[130][131] Collective intelligence underpins theefficient-market hypothesisofEugene Fama[132]– although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted byMichael Jensen[133]in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. But after removing the loading charge (up-front fee) only 72 underperformed while after removing brokerage costs only 58 underperformed. On the basis of such evidenceindex fundsbecame popular investment vehicles using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.[133] Political parties mobilize large numbers of people to form policy, select candidates and finance and run election campaigns.[134]Knowledge focusing through variousvotingmethods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus.[134]Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.[135] Companies such as Affinnova (acquired by Nielsen),Google,InnoCentive,Marketocracy, andThreadless[136]have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management.[136][137]An example of such application is Google's Project Aristotle in 2012, where the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.[138] In 2012, theGlobal Futures Collective Intelligence System(GFIS) was created byThe Millennium Project,[139]which epitomizes collective intelligence as the synergistic intersection among data/information/knowledge, software/hardware, and expertise/insights that has a recursive learning process for better decision-making than the individual players alone.[139] New mediaare often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows for it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources[13]resulting in a form of collective intelligence. 
The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users. Francis Heylighen,Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science andcybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of aglobal brain. The developer of the World Wide Web,Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early '90s, the Internet's potential was still untapped, until the mid-1990s when 'critical mass', as termed by the head of the Advanced Research Project Agency (ARPA), Dr.J.C.R. Licklider, demanded more accessibility and utility.[140]The driving force of this Internet-based collective intelligence is the digitization of information and communication.Henry Jenkins, a key theorist of new media and media convergence draws on the theory that collective intelligence can be attributed to media convergence and participatory culture.[13]He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contribute to the development of such skills.[141]Collective intelligence is not merely a quantitative contribution of information from all cultures, it is also qualitative.[141] Lévyandde Kerckhoveconsider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed.[13]With the development of theInternetand its widespread use, the opportunity to contribute to knowledge-building communities, such asWikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through the collective access to these databases and allow them to "harness the hive"[13]Researchers at theMIT Center for Collective Intelligenceresearch and explore collective intelligence of groups of people and computers.[142] In this context collective intelligence is often confused withshared knowledge. The former is the sum total of information held individually by members of a community while the latter is information that is believed to be true and known by all members of the community.[143]Collective intelligence as represented byWeb 2.0has less user engagement thancollaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one.[144]Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group. 
The selection process is based on an algorithm computing the collective preferences.[145] In creating what he calls 'CI-Art', Nova Scotia-based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence.[146] Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally.[147][148] Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings.[149] The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.[150] In social bookmarking (also called collaborative tagging),[151] users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "folksonomy", and the process can be captured by models of collaborative tagging.[151] Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics.[152][153][154] Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions.[152] Once such stable distributions form, examining the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies.[155] Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.[156] Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:[48] Cultural theorist and online community developer John Banks considered the contribution of online fan communities in the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content – extensions and additions to the game software".[157] The increase in user-created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig[158] and Bray and Konsynski,[159] such as intellectual property and property ownership rights. Gosney extends this issue of collective intelligence in videogames one step further in his discussion of alternate reality gaming. He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences"[160] as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARG.
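Relating back to the collaborative-tagging paragraph above, the following sketch (with invented bookmarks and tags, not Delicious data) collects tag frequencies and tag co-occurrence counts, the raw material from which a simple folksonomy graph can be built; a real analysis would also check whether the tag-frequency distribution approaches a power law.

```python
from collections import Counter
from itertools import combinations

# Invented bookmark -> tag assignments contributed by several users.
bookmarks = [
    {"python", "programming", "tutorial"},
    {"python", "data", "statistics"},
    {"statistics", "data", "visualization"},
    {"python", "programming", "data"},
]

tag_freq = Counter(tag for tags in bookmarks for tag in tags)
cooccur = Counter()
for tags in bookmarks:
    for a, b in combinations(sorted(tags), 2):   # canonical pair ordering
        cooccur[(a, b)] += 1

print("most frequent tags: ", tag_freq.most_common(3))
print("strongest tag pairs:", cooccur.most_common(3))
```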
Gosney argues that the alternate reality genre of gaming dictates an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.[160] Co-operation helps to solve the most important and most interesting multi-disciplinary problems. In his book, James Surowiecki mentioned that most scientists think the benefits of co-operation have much more value than its potential costs. Co-operation also works because, at its best, it guarantees a number of different viewpoints. Thanks to the possibilities of technology, global co-operation is nowadays much easier and more productive than before, and when co-operation scales from the university level to the global level, it brings significant benefits. For example, why do scientists co-operate? Scientific fields have become increasingly isolated from one another and have expanded so much that it is impossible for one person to be aware of all developments. This is true especially in experimental research, where highly advanced equipment requires special skills. With co-operation, scientists can use information from different fields and apply it effectively instead of gathering all the information just by reading on their own.[128] Militaries, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.[161] The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.[162][163] When connected to UNU, groups of distributed users collectively answer questions and make predictions in real time.[164] Early testing shows that human swarms can out-predict individuals.[162] In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540 to 1 odds.[165][166] Specialized information sites such as Digital Photography Review[167] or Camera Labs[168] are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge over the world through such specialized information sites. In a learner-generated context, a group of users marshals resources to create an ecology that meets their needs, often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context.[169][170][171] Learner-generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of a learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed, so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute to and access ideas.[13] Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations.[140] For them, collective intelligence has become a norm.
In Terry Flew's discussion of 'interactivity' in the online games environment, the ongoing interactive dialogue between users and game developers,[172]he refers to Pierre Lévy's concept of Collective Intelligence[citation needed]and argues this is active in videogames as clans or guilds inMMORPGconstantly work to achieve goals.Henry Jenkinsproposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends.[173]Firstly, the development of new media tools/technologies enabling the creation of content. Secondly, the rise of subcultures promoting such creations, and lastly, the growth of value adding media conglomerates, which foster image, idea and narrative flow. Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement,[174]leading to the unity of "group mind".[174][175] Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand.[32]The full impact has yet to be felt but theanti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing.[176]TheIndymediaorganization does this in a more journalistic way.[177]Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance shared goal.[177] A further application of collective intelligence is found in the "Community Engineering for Innovations".[178]In such an integrated framework proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D.[179]In management theory the use of collective intelligence and crowd sourcing leads to innovations and very robust answers to quantitative issues.[180]Therefore, collective intelligence and crowd sourcing is not necessarily leading to the best solution to economic problems, but to a stable, good solution. Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from being highly independent simple tasks that require very little coordination to complex interdependent tasks that are built by many individuals and require a lot of coordination. In the article written by Kittur, Lee and Kraut the writers introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". Having a team too large the overall effectiveness may suffer even when the extra contributors increase the resources. In the end the overall costs from coordination might overwhelm other costs.[181] Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process the different characteristics of each member are involved in contributing and enhancing coordination. 
Top-down processes are more strict and fixed with norms, group structures and routines that in their own way enhance the group's collective work.[44] Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions.[182][self-published source?]A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.[46] Phillip Brown and Hugh Lauder quotes Bowles andGintis(1976) that in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism.[183]They go on to argue that intelligence is an achievement and can only be developed if allowed to.[183]For example, earlier on, groups from the lower levels of society are severely restricted from aggregating and pooling their intelligence. This is because the elites fear that the collective intelligence would convince the people to rebel. If there is no such capacity and relations, there would be no infrastructure on which collective intelligence is built.[184]This reflects how powerful collective intelligence can be if left to develop.[183] Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk ofbodily harmand bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluidmass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells.[185][186]This train of thought is most obvious in theanti-globalization movementand characterized by the works ofJohn Zerzan,Carol Moore, andStarhawk, who typically shun academics.[185][186]These theorists are more likely to refer to ecological andcollective wisdomand to the role ofconsensus processin making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".[185][186] Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as thenew tribalistsand theGaians.[187][self-published source]Whether these can be said to be collective intelligence systems is an open question. Some, e.g.Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.[188] In contrast to these views, companies such asAmazon Mechanical TurkandCrowdFlowerare using collective intelligence andcrowdsourcingorconsensus-based assessmentto collect the enormous amounts of data formachine learningalgorithms.
https://en.wikipedia.org/wiki/Collective_intelligence#Applications
Open dataaredatathat are openly accessible, exploitable, editable and shareable by anyone for any purpose. Open data are generally licensed under anopen license.[1][2][3] The goals of the open data movement are similar to those of other "open(-source)" movements such as open-source software,open-source hardware,open content,open specifications,open education,open educational resources,open government,open knowledge,open access,open science, and the open web. The growth of the open data movement is paralleled by a rise in intellectual property rights.[4]The philosophy behind open data has been long established (for example in theMertonian tradition of science), but the term "open data" itself is recent, gaining popularity with the rise of the Internet andWorld Wide Weband, especially, with the launch of open-data government initiativesData.gov,Data.gov.ukandData.gov.in. Open data can belinked data—referred to aslinked open data. One of the most important forms of open data is open government data (OGD), which is a form of open data created by ruling government institutions. The importance of open government data is born from it being a part of citizens' everyday lives, down to the most routine and mundane tasks that are seemingly far removed from government.[citation needed] The abbreviationFAIR/O datais sometimes used to indicate that the dataset or database in question complies with the principles ofFAIR dataand carries an explicit data‑capableopen license. The concept of open data is not new, but a formalized definition is relatively new. Open data as a phenomenon denotes that governmental data should be available to anyone with a possibility of redistribution in any form without any copyright restriction.[5]One more definition is the Open Definition which can be summarized as "a piece of data is open if anyone is free to use, reuse, and redistribute it—subject only, at most, to the requirement to attribute and/or share-alike."[6]Other definitions, including theOpen Data Institute's "open data is data that anyone can access, use or share," have an accessible short version of the definition but refer to the formal definition.[7]Open data may include non-textual material such asmaps,genomes,connectomes,chemical compounds, mathematical and scientific formulae, medical data, and practice, bioscience and biodiversity data. A major barrier to the open data movement is the commercial value of data. Access to, or re-use of, data is often controlled by public or private organizations. Control may be through access restrictions,licenses,copyright,patentsand charges for access or re-use. Advocates of open data argue that these restrictions detract from the common good and that data should be available without restrictions or fees.[citation needed]There are many other, smaller barriers as well.[8] Creators of data do not consider the need to state the conditions of ownership, licensing and re-use; instead presuming that not asserting copyright enters the data into thepublic domain. For example, many scientists do not consider the data published with their work to be theirs to control and consider the act of publication in a journal to be an implicit release of data into thecommons. The lack of a license makes it difficult to determine the status of adata setand may restrict the use of data offered in an "Open" spirit. 
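As a hedged illustration of the licensing ambiguity described above, the sketch below flags dataset records whose license status is unclear; the SPDX-style identifiers and the dataset entries are invented for illustration and are not an authoritative list of open licenses.

```python
# SPDX-style identifiers commonly treated as open; illustrative, not exhaustive.
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "ODbL-1.0", "PDDL-1.0"}

# Hypothetical catalog entries, including one with no stated license.
datasets = [
    {"name": "city-air-quality", "license": "CC-BY-4.0"},
    {"name": "survey-responses", "license": None},
    {"name": "genome-annotations", "license": "proprietary"},
]

for record in datasets:
    lic = record["license"]
    status = "open" if lic in OPEN_LICENSES else "unclear/closed"
    print(f"{record['name']}: license={lic!r} -> {status}")
```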
Because of this uncertainty it is possible for public or private organizations toaggregatesaid data, claim that it is protected by copyright, and then resell it.[citation needed] Open data can come from any source. This section lists some of the fields that publish (or at least discuss publishing) a large amount of open data. The concept ofopen access to scientific datawas established with the formation of theWorld Data Centersystem, in preparation for theInternational Geophysical Yearof 1957–1958.[9]The International Council of Scientific Unions (now theInternational Council for Science) oversees several World Data Centres with the mission to minimize the risk of data loss and to maximize data accessibility.[10] While the open-science-data movement long predates the Internet, the availability of fast, readily available networking has significantly changed the context ofopen science data, as publishing or obtaining data has become much less expensive and time-consuming.[11] TheHuman Genome Projectwas a major initiative that exemplified the power of open data. It was built upon the so-calledBermuda Principles, stipulating that: "All human genomic sequence information … should be freely available and in the public domain in order to encourage research and development and to maximize its benefit to society".[12]More recent initiatives such as the Structural Genomics Consortium have illustrated that the open data approach can be used productively within the context of industrial R&D.[13] In 2004, the Science Ministers of all nations of theOrganisation for Economic Co-operation and Development(OECD), which includes most developed countries of the world, signed a declaration which states that all publicly funded archive data should be made publicly available.[14]Following a request and an intense discussion with data-producing institutions in member states, the OECD published in 2007 theOECD Principles and Guidelines for Access to Research Data from Public Fundingas asoft-lawrecommendation.[15] Examples of open data in science: There are a range of different arguments for government open data.[20][21]Some advocates say that making government information available to the public as machine readable open data can facilitate government transparency, accountability and public participation. "Open data can be a powerful force for public accountability—it can make existing information easier to analyze, process, and combine than ever before, allowing a new level of public scrutiny."[22]Governments that enable public viewing of data can help citizens engage within the governmental sectors and "add value to that data."[23]Open data experts have nuanced the impact that opening government data may have on government transparency and accountability. In a widely cited paper, scholars David Robinson and Harlan Yu contend that governments may project a veneer of transparency by publishing machine-readable data that does not actually make government more transparent or accountable.[24]Drawing from earlier studies on transparency and anticorruption,[25]World Bank political scientistTiago C. 
Peixotoextended Yu and Robinson's argument by highlighting a minimal chain of events necessary for open data to lead to accountability: Some make the case that opening up official information can support technological innovation and economic growth by enabling third parties to develop new kinds of digital applications and services.[27] Several national governments have created websites to distribute a portion of the data they collect. It is a concept for a collaborative project in the municipal Government to create and organize culture for Open Data or Open government data. Additionally, other levels of government have established open data websites. There are many government entities pursuingOpen Data in Canada.Data.govlists the sites of a total of 40 US states and 46 US cities and counties with websites to provide open data, e.g., the state ofMaryland, the state ofCalifornia, US[28]andNew York City.[29] At the international level, the United Nations has an open data website that publishes statistical data from member states and UN agencies,[30]and theWorld Bankpublished a range of statistical data relating to developing countries.[31]TheEuropean Commissionhas created two portals for theEuropean Union: theEU Open Data Portalwhich gives access to open data from the EU institutions, agencies and other bodies[32]and the European Data Portal that provides datasets from local, regional and national public bodies across Europe.[33]The two portals were consolidated to data.europa.eu on April 21, 2021. Italyis the first country to release standard processes and guidelines under aCreative Commonslicense for spread usage in the Public Administration. The open model is called the Open Data Management Cycle and was adopted in several regions such asVenetoandUmbria.[34][35][36]Main cities likeReggio CalabriaandGenovahave also adopted this model.[citation needed][37] In October 2015, theOpen Government Partnershiplaunched theInternational Open Data Charter, a set of principles and best practices for the release of governmental open data formally adopted by seventeen governments of countries, states and cities during the OGP Global Summit inMexico.[38] In July 2024, theOECDadopted Creative Commons CC-BY-4.0 licensing for its published data and reports.[39] Manynon-profit organizationsoffer open access to their data, as long it does not undermine their users', members' or third party'sprivacy rights. In comparison tofor-profit corporations, they do not seek to monetize their data. OpenNWT launched a website offering open data of elections.[40]CIAToffers open data to anybody who is willing to conduct big data analytics in order to enhance the benefit of international agricultural research.[41]DBLP, which is owned by a non-profit organizationDagstuhl, offers its database of scientific publications from computer science as open data.[42] Hospitality exchange services, including Bewelcome,Warm Showers, and CouchSurfing (before it became for-profit) have offered scientists access to theiranonymized datafor analysis, public research, and publication.[43][44][45][46][47] At a small level, a business or research organization's policies and strategies towards open data will vary, sometimes greatly. One common strategy employed is the use of a data commons. 
A data commons is an interoperable software and hardware platform that aggregates (or collocates) data, data infrastructure, and data-producing and data-managing applications in order to better allow a community of users to manage, analyze, and share their data with others over both short- and long-term timelines.[48][49][50]Ideally, this interoperable cyberinfrastructure should be robust enough "to facilitate transitions between stages in the life cycle of a collection" of data and information resources[48]while still being driven by common data models and workspace tools enabling and supporting robust data analysis.[50]The policies and strategies underlying a data commons will ideally involve numerous stakeholders, including the data commons service provider, data contributors, and data users.[49] Grossmanet al[49]suggests six major considerations for a data commons strategy that better enables open data in businesses and research organizations. Such a strategy should address the need for: Beyond individual businesses and research centers, and at a more macro level, countries like Germany[51]have launched their own official nationwide open data strategies, detailing how data management systems and data commons should be developed, used, and maintained for the greater public good. Opening government data is only a waypoint on the road to improving education, improving government, and building tools to solve other real-world problems. While many arguments have been made categorically[citation needed], the following discussion of arguments for and against open data highlights that these arguments often depend highly on the type of data and its potential uses. Arguments made on behalf of open data include the following: It is generally held that factual data cannot be copyrighted.[61]Publishers frequently add copyright statements (often forbidding re-use) to scientific data accompanying publications. It may be unclear whether the factual data embedded in full text are part of the copyright. While the human abstraction of facts from paper publications is normally accepted as legal there is often an implied restriction on the machine extraction by robots. Unlikeopen access, where groups of publishers have stated their concerns, open data is normally challenged by individual institutions.[citation needed]Their arguments have been discussed less in public discourse and there are fewer quotes to rely on at this time. Arguments against making all data available as open data include the following: The paper entitled "Optimization of Soft Mobility Localization with Sustainable Policies and Open Data"[65]argues that open data is a valuable tool for improving the sustainability and equity of soft mobility in cities. The author argues that open data can be used to identify the needs of different areas of a city, develop algorithms that are fair and equitable, and justify the installation of soft mobility resources. The goals of the Open Data movement are similar to those of other "Open" movements. Formally both the definition of Open Data and commons revolve around the concept of shared resources with a low barrier to access. Substantially, digital commons include Open Data in that it includes resources maintained online, such as data.[70]Overall, looking at operational principles of Open Data one could see the overlap between Open Data and (digital) commons in practice. 
Principles of Open Data are sometimes distinct depending on the type of data under scrutiny.[71]Nonetheless, they are somewhat overlapping and their key rationale is the lack of barriers to the re-use of data(sets).[71]Regardless of their origin, principles across types of Open Data hint at the key elements of the definition of commons. These are, for instance, accessibility, re-use, findability, non-proprietarily.[71]Additionally, although to a lower extent, threats and opportunities associated with both Open Data and commons are similar. Synthesizing, they revolve around (risks and) benefits associated with (uncontrolled) use of common resources by a large variety of actors. Both commons and Open Data can be defined by the features of the resources that fit under these concepts, but they can be defined by the characteristics of the systems their advocates push for. Governance is a focus for both Open Data and commons scholars.[71][70]The key elements that outline commons and Open Data peculiarities are the differences (and maybe opposition) to the dominant market logics as shaped by capitalism.[70]Perhaps it is this feature that emerges in the recent surge of the concept of commons as related to a more social look at digital technologies in the specific forms of digital and, especially, data commons. Application of open data for societal good has been demonstrated in academic research works.[72]The paper "Optimization of Soft Mobility Localization with Sustainable Policies and Open Data" uses open data in two ways. First, it uses open data to identify the needs of different areas of a city. For example, it might use data on population density, traffic congestion, and air quality to determine where soft mobility resources, such as bike racks and charging stations for electric vehicles, are most needed. Second, it uses open data to develop algorithms that are fair and equitable. For example, it might use data on the demographics of a city to ensure that soft mobility resources are distributed in a way that is accessible to everyone, regardless of age, disability, or gender. The paper also discusses the challenges of using open data for soft mobility optimization. One challenge is that open data is often incomplete or inaccurate. Another challenge is that it can be difficult to integrate open data from different sources. Despite these challenges, the paper argues that open data is a valuable tool for improving the sustainability and equity of soft mobility in cities. An exemplification of how the relationship between Open Data and commons and how their governance can potentially disrupt the market logic otherwise dominating big data is a project conducted by Human Ecosystem Relazioni in Bologna (Italy). See:https://www.he-r.it/wp-content/uploads/2017/01/HUB-report-impaginato_v1_small.pdf. This project aimed at extrapolating and identifying online social relations surrounding “collaboration” in Bologna. Data was collected from social networks and online platforms for citizens collaboration. Eventually data was analyzed for the content, meaning, location, timeframe, and other variables. Overall, online social relations for collaboration were analyzed based on network theory. The resulting dataset have been made available online as Open Data (aggregated and anonymized); nonetheless, individuals can reclaim all their data. This has been done with the idea of making data into a commons. 
This project exemplifies the relationship between Open Data and commons, and how they can disrupt the market logic driving big data use, in two ways. First, it shows how such projects, by broadly following the rationale of Open Data, can trigger the creation of effective data commons; the project itself offered different types of support to social network platform users who wished to have content removed. Second, opening data on online social network interactions has the potential to significantly reduce the monopolistic power that social network platforms hold over those data. Several funding bodies that mandate Open Access also mandate Open Data. A good expression of requirements (truncated in places) is given by the Canadian Institutes of Health Research (CIHR):[73] Other bodies promoting the deposition of data and full text include the Wellcome Trust. An academic paper published in 2013 advocated that Horizon 2020 (the science funding mechanism of the EU) should mandate that funded projects hand in their databases as "deliverables" at the end of the project so that they can be checked for third-party usability and then shared.[74]
https://en.wikipedia.org/wiki/Open_data
Progress in artificial intelligence(AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field ofartificial intelligenceover time. AI is a multidisciplinary branch ofcomputer sciencethat aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields includingmedical diagnosis,finance,robotics,law,video games,agriculture, and scientific discovery. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it'snot labeled AI anymore."[1][2]"Many thousands of AI applications are deeply embedded in the infrastructure of every industry."[3]In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems,[3][4]but the field was rarely credited for these successes at the time. Kaplanand Haenlein structure artificial intelligence along three evolutionary stages: To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termedsubject-matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results. Humans still substantially outperform both GPT-4 and models trained on the ConceptARC benchmark that scored 60% on most, and 77% on one category, while humans 91% on all and 97% on one category.[5] There are many useful abilities that can be described as showing some form of intelligence. This gives better insight into the comparative success of artificial intelligence in different areas. AI, like electricity or the steam engine, is ageneral-purpose technology. 
There is no consensus on how to characterize which tasks AI tends to excel at.[15]Some versions ofMoravec's paradoxobserve that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection.[16]While projects such asAlphaZerohave succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[17][18]ResearcherAndrew Nghas suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[19] Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system.AlphaGobrought the era of classical board-game benchmarks to a close when Artificial Intelligence proved their competitive edge over humans in 2016.Deep Mind'sAlphaGo AI software program defeated the world's best professional Go PlayerLee Sedol.[20]Games ofimperfect knowledgeprovide new challenges to AI in the area ofgame theory; the most prominent milestone in this area was brought to a close byLibratus' poker victory in 2017.[21][22]E-sportscontinue to provide additional benchmarks;FacebookAI,Deepmind, and others have engaged with the popularStarCraftfranchise of videogames.[23][24] Broad classes of outcome for an AI test may be given as: In his famousTuring test, Alan Turing picked language,the defining feature of human beings, for its basis.[66]TheTuring testis now considered too exploitable to be a meaningful benchmark.[67] TheFeigenbaum test, proposed by the inventor ofexpert systems, tests a machine's knowledge and expertise about a specific subject.[68]A paper byJim GrayofMicrosoftin 2003 suggested extending the Turing test tospeech understanding,speakingandrecognizing objectsand behavior.[69] Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are generic as possible. At an extreme, the test suite can contain every possible problem, weighted byKolmogorov complexity; however, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[70][71][72][73][74] According toOpenAI, in 2023ChatGPTGPT-4scored the 90th percentile on theUniform Bar Exam. On theSATs, GPT-4 scored the 89th percentile on math, and the 93rd percentile in Reading & Writing. On theGREs, it scored on the 54th percentile on the writing test, 88th percentile on the quantitative section, and 99th percentile on the verbal section. It scored in the 99th to 100th percentile on the 2020USA Biology Olympiadsemifinal exam. It scored a perfect "5" on severalAP exams.[75] Independent researchers found in 2023 that ChatGPTGPT-3.5"performed at or near the passing threshold" for the three parts of theUnited States Medical Licensing Examination. GPT-3.5 was also assessed to attain a low, but passing, grade from exams for four law school courses at theUniversity of Minnesota.[75]GPT-4 passed a text-based radiology board–style examination.[76][77] Many competitions and prizes, such as theImagenet Challenge, promote research in artificial intelligence. 
The most common areas of competition include general machine intelligence, conversational behavior, data-mining,robotic cars, and robot soccer as well as conventional games.[78] An expert poll around 2016, conducted byKatja Graceof theFuture of Humanity Instituteand associates, gave median estimates of 3 years for championshipAngry Birds, 4 years for the World Series of Poker, and 6 years forStarCraft. On more subjective tasks, the poll gave 6 years for folding laundry as well as an average human worker, 7–10 years for expertly answering 'easily Googleable' questions, 8 years for average speech transcription, 9 years for average telephone banking, and 11 years for expert songwriting, but over 30 years for writing aNew York Timesbestseller or winning thePutnam math competition.[79][80][81] An AI defeated agrandmasterin a regulation tournament game for the first time in 1988; rebranded asDeep Blue, it beat the reigning human world chess champion in 1997 (seeDeep Blue versus Garry Kasparov).[82] AlphaGodefeated a European Go champion in October 2015, andLee Sedolin March 2016, one of the world's top players (seeAlphaGo versus Lee Sedol). According toScientific Americanand other sources, most observers had expected superhuman Computer Go performance to be at least a decade away.[85][86][87] AI pioneer and economistHerbert A. Simoninaccurately predicted in 1965: "Machines will be capable, within twenty years, of doing any work a man can do". Similarly, in 1970Marvin Minskywrote that "Within a generation... the problem of creating artificial intelligence will substantially be solved."[93] Four polls conducted in 2012 and 2013 suggested that themedianestimate among experts for when AGI would arrive was 2040 to 2050, depending on the poll.[94][95] The Grace poll around 2016 found results varied depending on how the question was framed. Respondents asked to estimate "when unaided machines can accomplish every task better and more cheaply than human workers" gave an aggregated median answer of 45 years and a 10% chance of it occurring within 9 years. Other respondents asked to estimate "when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers" estimated a median of 122 years and a 10% probability of 20 years. The median response for when "AI researcher" could be fully automated was around 90 years. No link was found between seniority and optimism, but Asian researchers were much more optimistic than North American researchers on average; Asians predicted 30 years on average for "accomplish every task", compared with the 74 years predicted by North Americans.[79][80][81]
https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence
This article presents a detailed timeline of events in the history of computing from 2020 to the present. For narratives explaining the overall developments, see the history of computing. Significant events in computing include events relating directly or indirectly to software, hardware and wetware. Excluded (except in instances of significant functional overlap) are: Currently excluded are: Very broad outlines of topic domains and topics with substantial progress during the decade not yet included above with a Further information: link:
https://en.wikipedia.org/wiki/Timeline_of_computing_2020%E2%80%93present
The following table compares cognitive architectures.
https://en.wikipedia.org/wiki/Comparison_of_cognitive_architectures
Incommunications technology, the technique ofcompressed sensing(CS) may be applied tothe processing of speech signalsunder certain conditions. In particular, CS can be used to reconstruct asparse vectorfrom a smaller number of measurements, provided the signal can be represented in sparsedomain. "Sparse domain" refers to a domain in which only a few measurements have non-zero values.[1] Suppose a signalx∈RN{\displaystyle {x\in R^{N}}}can be represented in a domain where onlyM{\displaystyle {\it {M}}}coefficientsout ofN{\displaystyle {\it {N}}}(whereM≪N{\displaystyle {M\ll N}}) are non-zero, then the signal is said to be sparse in that domain. This reconstructed sparsevectorcan be used to construct back the original signal if the sparse domain of signal is known. CS can be applied to speech signal only if sparse domain of speech signal is known. Consider a speech signalx{\displaystyle {x}}, which can be represented in a domainΨ{\displaystyle {\Psi }}such thatx=Ψα{\displaystyle {x}={\Psi {\boldsymbol {\alpha }}}}, where speech signalx∈RN{\displaystyle {x\in R^{\it {N}}}}, dictionary matrixΨ∈RN×N{\displaystyle {\Psi \in R^{\it {N\times N}}}}and the sparse coefficient vectorα∈RN{\displaystyle {{\boldsymbol {\alpha }}\in R^{\it {N}}}}. This speech signal is said to be sparse in domainΨ{\displaystyle {\Psi }}, if the number of significant (non zero) coefficients in sparse vectorα{\displaystyle {\boldsymbol {\alpha }}}isK{\displaystyle {\it {K}}}, whereK≪N{\displaystyle {\it {K\ll N}}}. The observed signalx{\displaystyle {x}}is ofdimensionN×1{\displaystyle {\it {N\times 1}}}. To reduce the complexity for solvingα{\displaystyle {\boldsymbol {\alpha }}}using CS speech signal is observed using a measurement matrixΦ{\displaystyle {\Phi }}such that wherey∈RM{\displaystyle {y\in R^{\it {M}}}}, and measurement matrixΦ∈RM×N{\displaystyle {\Phi \in R^{\it {M\times N}}}}such thatM≪N{\displaystyle {\it {M\ll N}}}. Sparse decomposition problem for eq. 1 can be solved as standardl1{\displaystyle {l_{1}}}minimization[2]as If measurement matrixΦ{\displaystyle {\Phi }}satisfies therestricted isometric property(RIP) and is incoherent withdictionary matrixΨ{\displaystyle {\Psi }}.[3]then the reconstructed signal is much closer to the original speech signal. Different types of measurement matrices likerandom matricescan be used for speech signals.[4][5]Estimating the sparsity of a speech signal is a problem since the speech signal varies greatly over time and thus sparsity of speech signal also varies highly over time. If sparsity of speech signal can be calculated over time without much complexity that will be best. If this is not possible then worst-case scenario for sparsity can be considered for a given speech signal. Sparse vector (α^{\displaystyle {\hat {\boldsymbol {\alpha }}}}) for a given speech signal is reconstructed from as small as possible a number of measurements (y{\displaystyle {y}}) usingl1{\displaystyle {l_{1}}}minimization.[2]Then original speech signal is reconstructed form the calculated sparse vectorα^{\displaystyle {\hat {\boldsymbol {\alpha }}}}using the fixed dictionary matrix asΨ{\displaystyle {\Psi }}asx^{\displaystyle {\hat {x}}}=Ψ{\displaystyle {\Psi }}α^{\displaystyle {\hat {\boldsymbol {\alpha }}}}.[6] Estimation of both the dictionary matrix and sparse vector fromrandommeasurements only has been doneiteratively.[7]The speech signal reconstructed from estimated sparse vector and dictionary matrix is much closer to the original signal. 
Further iterative approaches that estimate both the dictionary matrix and the speech signal from only random measurements of the speech signal have also been developed.[8] The application of structured sparsity for joint speech localization-separation in reverberant acoustics has been investigated for multiparty speech recognition.[9] Further applications of the concept of sparsity remain to be studied in the field of speech processing. The idea behind applying CS to speech signals is to formulate algorithms or methods that use only the random measurements y{\displaystyle y} to carry out various forms of application-based processing, such as speaker recognition and speech enhancement.[10]
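As an illustration of the recovery pipeline described above, the following minimal sketch (not part of the cited works) generates a synthetic signal that is sparse in an orthonormal DCT dictionary — the sparse domain of speech is left unspecified in the text, so the DCT is purely an assumption for demonstration — measures it with a random Gaussian matrix Φ, and approximates the ℓ1 solution with plain iterative soft-thresholding rather than a dedicated basis-pursuit solver:

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal DCT-II basis, used here as an illustrative sparsifying dictionary Psi."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (j + 0.5) * k / n)
    Psi[:, 0] /= np.sqrt(2.0)
    return Psi

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for  min_a 0.5*||y - A a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = a + step * A.T @ (y - A @ a)          # gradient step on the data-fitting term
        a = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
N, M, K = 256, 80, 5                              # signal length, measurements, sparsity
Psi = dct_dictionary(N)

alpha = np.zeros(N)                               # K-sparse coefficient vector
alpha[rng.choice(N, K, replace=False)] = rng.normal(0, 1, K)
x = Psi @ alpha                                   # signal that is sparse in Psi

Phi = rng.normal(0, 1.0 / np.sqrt(M), (M, N))     # random Gaussian measurement matrix
y = Phi @ x                                       # compressed measurements, M << N

alpha_hat = ista_l1(Phi @ Psi, y)                 # recover sparse coefficients from y
x_hat = Psi @ alpha_hat                           # reconstruct the signal
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

In practice the regularization weight, the number of measurements and the choice of dictionary would have to be tuned to the speech material at hand; the sketch only shows the structure of the measurement and recovery steps.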
https://en.wikipedia.org/wiki/Compressed_sensing_in_speech_signals
Noiseletsare functions which gives the worst case behavior for theHaar waveletpacket analysis. In other words, noiselets are totally incompressible by the Haar wavelet packet analysis.[1]Like the canonical and Fourier bases, which have an incoherent property, noiselets are perfectly incoherent with the Haar basis. In addition, they have a fast algorithm for implementation, making them useful as a sampling basis for signals that are sparse in the Haar domain. The mother bases functionχ(x){\displaystyle \chi (x)}is defined as: χ(x)={1x∈[0,1)0otherwise{\displaystyle \chi (x)={\begin{cases}1&x\in [0,1)\\0&{\text{otherwise}}\end{cases}}} The family of noislets is constructed recursively as follows: f1(x)=χ(x)f2n(x)=(1−i)fn(2x)+(1+i)fn(2x−1)f2n+1(x)=(1+i)fn(2x)+(1−i)fn(2x−1){\displaystyle {\begin{alignedat}{2}f_{1}(x)&=\chi (x)\\f_{2n}(x)&=(1-i)f_{n}(2x)+(1+i)f_{n}(2x-1)\\f_{2n+1}(x)&=(1+i)f_{n}(2x)+(1-i)f_{n}(2x-1)\end{alignedat}}} Source:[2] Noiselet can be extended and discretized. The extended functionfm(k,l){\displaystyle f_{m}(k,l)}is defined as follows: fm(1,l)={1l=0,…,2m−10otherwisefm(2k,l)=(1−i)fm(k,2l)+(1+i)fm(k,2l−2m)fm(2k+1,l)=(1+i)fm(k,2l)+(1−i)fm(k,2l−2m){\displaystyle {\begin{alignedat}{2}f_{m}(1,l)&={\begin{cases}1&l=0,\dots ,2^{m}-1\\0&{\text{otherwise}}\end{cases}}\\f_{m}(2k,l)&=(1-i)f_{m}(k,2l)+(1+i)f_{m}(k,2l-2^{m})\\f_{m}(2k+1,l)&=(1+i)f_{m}(k,2l)+(1-i)f_{m}(k,2l-2^{m})\\\end{alignedat}}} Use extended noiseletfm(k,l){\displaystyle f_{m}(k,l)}, we can generate then×n{\displaystyle n\times n}noiselet matrixNn{\displaystyle N_{n}}, where n is a power of twon=2q{\displaystyle n=2^{q}}: N1=[1]N2n=12[1−i1+i1+i1−i]⊗Nn{\displaystyle {\begin{alignedat}{2}N_{1}&=[1]\\N_{2n}&={\frac {1}{2}}{\begin{bmatrix}1-i&1+i\\1+i&1-i\end{bmatrix}}\otimes N_{n}\\\end{alignedat}}} Here⊗{\displaystyle \otimes }denotes the Kronecker product. Suppose2m>n{\displaystyle 2^{m}>n}, we can find thatNn(k,l){\displaystyle N_{n}(k,l)}is equalfm(n+k,2mnl){\displaystyle f_{m}(n+k,{\frac {2^{m}}{n}}l)}. The elements of the noiselet matrices take discrete values from one of two four-element sets: nNn(j,k)∈{1,−1,i,−i}for evenq2nNn(j,k)∈{1+i,1−i,−1+i,−1−i}for oddq{\displaystyle {\begin{alignedat}{3}{\sqrt {n}}N_{n}(j,k)&\in \{1,-1,i,-i\}&{\text{for even }}q\\{\sqrt {2n}}N_{n}(j,k)&\in \{1+i,1-i,-1+i,-1-i\}&{\text{for odd }}q\\\end{alignedat}}} 2D noiselet transforms are obtained through the Kronecker product of 1D noiselet transform: Nn×k2D=Nk⊗Nn{\displaystyle N_{n\times k}^{2D}=N_{k}\otimes N_{n}} Noiselet has some properties that make them ideal for applications: The complementarity of wavelets and noiselets means that noiselets can be used incompressed sensingto reconstruct a signal (such as an image) which has a compact representation in wavelets.[3]MRIdata can be acquired in noiselet domain, and, subsequently, images can be reconstructed from undersampled data using compressive-sensing reconstruction.[4] Here are few applications that noiselet has been implemented: The noiselet encoding is a technique used in MRI to acquire images with reduced acquisition time. In MRI, the imaging process typically involves encoding spatial information using gradients. Traditional MRI acquisition relies on Cartesian encoding,[5]where the spatial information is sampled on a Cartesian grid. However, this methodology could be time consuming, especially in images with high resolution or dynamic imaging. While noiselet encoding is part of thecompressive sensing. 
It exploits the sparsity of images to acquire them more efficiently. In compressive sensing, the idea is to acquire fewer samples than dictated by the Nyquist-Shannon sampling theorem, under the assumption that the underlying signal or image is sparse in some domain. A brief overview of how noiselet encoding works in MRI is as follows: noiselet encoding uses a noiselet transform matrix whose coefficients effectively disperse the signal across both scale and time. Consequently, each subset of these transform coefficients captures specific information from the original signal. When these subsets are used independently with zero padding, each of them can be employed to reconstruct the original signal at a reduced resolution. Because not all of the spatial-frequency components are sampled by noiselet encoding, the undersampling allows the image to be reconstructed from fewer measurements; in other words, imaging becomes more efficient without sacrificing image quality significantly. Single-pixel imaging is a form of imaging in which a single detector measures light levels after the sample has been illuminated with structured patterns, yielding efficient, compressive measurements. Noiselets are used to increase computational efficiency by following the principles of compressive sensing. An overview of how noiselets are applied to single-pixel imaging is as follows: the noiselet transform matrix is applied to the structured illumination patterns and spreads the signal information across the measurement space. The structured patterns lead to a sparse representation of the signal information. This allows the image to be reconstructed from a reduced set of measurements while still capturing the essential information required to produce an image of good quality compared to the original. The benefits brought by noiselets can be summarized as:
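As a concrete illustration of the recursive definition given earlier, the sketch below builds the n × n noiselet matrix with the Kronecker recursion, checks that with the 1/2 normalization the matrix is unitary and that every entry has magnitude 1/√n (the flat-magnitude property behind its incoherence with the Haar basis), and forms a 2-D transform as a Kronecker product. The block sizes are arbitrary choices made only for the demonstration:

```python
import numpy as np

def noiselet_matrix(n):
    """Build the n x n noiselet matrix from the Kronecker recursion
    N_{2n} = 0.5 * [[1-1j, 1+1j], [1+1j, 1-1j]] (x) N_n,  n a power of two."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    B = 0.5 * np.array([[1 - 1j, 1 + 1j],
                        [1 + 1j, 1 - 1j]])
    N = np.array([[1.0 + 0j]])
    while N.shape[0] < n:
        N = np.kron(B, N)
    return N

n = 16                                            # n = 2^q with q = 4 (even)
N = noiselet_matrix(n)

# With the 1/2 normalization the matrix is unitary, so it can serve as a sampling basis.
print(np.allclose(N @ N.conj().T, np.eye(n)))     # True

# Every entry has magnitude 1/sqrt(n): sqrt(n)*N(j,k) lies in {1, -1, i, -i} for even q.
print(np.allclose(np.abs(N), 1 / np.sqrt(n)))     # True

# 2-D noiselet transform matrix N_{n x k}^{2D} = N_k (x) N_n, as in the formula above.
k = 8
N2d = np.kron(noiselet_matrix(k), N)
```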
https://en.wikipedia.org/wiki/Noiselet
Sparse approximation(also known assparse representation) theory deals withsparsesolutions forsystems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use inimage processing,signal processing,machine learning,medical imaging, and more. Consider alinear system of equationsx=Dα{\displaystyle x=D\alpha }, whereD{\displaystyle D}is anunderdeterminedm×p{\displaystyle m\times p}matrix(m<p){\displaystyle (m<p)}andx∈Rm,α∈Rp{\displaystyle x\in \mathbb {R} ^{m},\alpha \in \mathbb {R} ^{p}}. The matrixD{\displaystyle D}(typically assumed to be full-rank) is referred to as the dictionary, andx{\displaystyle x}is a signal of interest. The core sparse representation problem is defined as the quest for the sparsest possible representationα{\displaystyle \alpha }satisfyingx=Dα{\displaystyle x=D\alpha }. Due to the underdetermined nature ofD{\displaystyle D}, this linear system admits in general infinitely many possible solutions, and among these we seek the one with the fewest non-zeros. Put formally, we solve where‖α‖0=#{i:αi≠0,i=1,…,p}{\displaystyle \|\alpha \|_{0}=\#\{i:\alpha _{i}\neq 0,\,i=1,\ldots ,p\}}is theℓ0{\displaystyle \ell _{0}}pseudo-norm, which counts thenumberof non-zero components ofα{\displaystyle \alpha }. This problem is known to be NP-hard with a reduction to NP-complete subset selection problems incombinatorial optimization. Sparsity ofα{\displaystyle \alpha }implies that only a few (k≪m<p{\displaystyle k\ll m<p}) components in it are non-zero. The underlying motivation for such a sparse decomposition is the desire to provide the simplest possible explanation ofx{\displaystyle x}as a linear combination of as few as possible columns fromD{\displaystyle D}, also referred to as atoms. As such, the signalx{\displaystyle x}can be viewed as a molecule composed of a few fundamental elements taken fromD{\displaystyle D}. While the above posed problem is indeed NP-Hard, its solution can often be found using approximation algorithms. One such option is a convexrelaxationof the problem, obtained by using theℓ1{\displaystyle \ell _{1}}-norm instead ofℓ0{\displaystyle \ell _{0}}, where‖α‖1{\displaystyle \|\alpha \|_{1}}simply sums the absolute values of the entries inα{\displaystyle \alpha }. This is known as thebasis pursuit(BP) algorithm, which can be handled using anylinear programmingsolver. An alternative approximation method is a greedy technique, such as thematching pursuit(MP), which finds the location of the non-zeros one at a time. Surprisingly, under mild conditions onD{\displaystyle D}(using thespark (mathematics), themutual coherenceor therestricted isometry property) and the level of sparsity in the solution,k{\displaystyle k}, the sparse representation problem can be shown to have a unique solution, and BP and MP are guaranteed to find it perfectly.[1][2][3] Often the observed signalx{\displaystyle x}is noisy. By relaxing the equality constraint and imposing anℓ2{\displaystyle \ell _{2}}-norm on the data-fitting term, the sparse decomposition problem becomes or put in a Lagrangian form, whereλ{\displaystyle \lambda }is replacing theϵ{\displaystyle \epsilon }. Just as in the noiseless case, these two problems are NP-Hard in general, but can be approximated using pursuit algorithms. More specifically, changing theℓ0{\displaystyle \ell _{0}}to anℓ1{\displaystyle \ell _{1}}-norm, we obtain which is known as thebasis pursuit denoising. 
Similarly,matching pursuitcan be used for approximating the solution of the above problems, finding the locations of the non-zeros one at a time until the error threshold is met. Here as well, theoretical guarantees suggest that BP and MP lead to nearly optimal solutions depending on the properties ofD{\displaystyle D}and the cardinality of the solutionk{\displaystyle k}.[4][5][6]Another interesting theoretical result refers to the case in whichD{\displaystyle D}is aunitary matrix. Under this assumption, the problems posed above (with eitherℓ0{\displaystyle \ell _{0}}orℓ1{\displaystyle \ell _{1}}) admit closed-form solutions in the form of non-linear shrinkage.[4] There are several variations to the basic sparse approximation problem. Structured sparsity: In the original version of the problem, any of the atoms in the dictionary can be picked. In the structured (block) sparsity model, instead of picking atoms individually, groups of them are to be picked. These groups can be overlapping and of varying size. The objective is to representx{\displaystyle x}such that it is sparse while forcing this block-structure.[7] Collaborative (joint) sparse coding: The original version of the problem is defined for a single signalx{\displaystyle x}. In the collaborative (joint) sparse coding model, a set of signals is available, each believed to emerge from (nearly) the same set of atoms fromD{\displaystyle D}. In this case, the pursuit task aims to recover a set of sparse representations that best describe the data while forcing them to share the same (or close-by) support.[8] Other structures: More broadly, the sparse approximation problem can be cast while forcing a specific desired structure on the pattern of non-zero locations inα{\displaystyle \alpha }. Two cases of interest that have been extensively studied are tree-based structure, and more generally, a Boltzmann distributed support.[9] As already mentioned above, there are various approximation (also referred to aspursuit) algorithms that have been developed for addressing the sparse representation problem: We mention below a few of these main methods. Sparse approximation ideas and algorithms have been extensively used insignal processing,image processing,machine learning,medical imaging,array processing,data mining, and more. In most of these applications, the unknown signal of interest is modeled as a sparse combination of a few atoms from a given dictionary, and this is used as theregularizationof the problem. These problems are typically accompanied by adictionary learningmechanism that aims to fitD{\displaystyle D}to best match the model to the given data. The use of sparsity-inspired models has led to state-of-the-art results in a wide set of applications.[12][13][14]Recent work suggests that there is a tight connection between sparse representation modeling and deep-learning.[15]
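The greedy pursuit family mentioned above can be illustrated with a short orthogonal matching pursuit (OMP) sketch, a standard refinement of matching pursuit that re-fits the selected atoms by least squares at every step. The random Gaussian dictionary with normalized columns and the noiseless setting are assumptions made only for the demonstration:

```python
import numpy as np

def orthogonal_matching_pursuit(D, x, k):
    """Greedy pursuit: pick the atom most correlated with the residual, then
    re-fit the selected atoms by least squares. Returns a k-sparse alpha with x ~ D @ alpha."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ residual))          # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef            # orthogonalized residual
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(1)
m, p, k = 64, 256, 4
D = rng.normal(size=(m, p))
D /= np.linalg.norm(D, axis=0)                         # normalize the atoms (columns)

alpha_true = np.zeros(p)
alpha_true[rng.choice(p, k, replace=False)] = rng.normal(0, 1, k)
x = D @ alpha_true                                     # noiseless sparse signal

alpha_hat = orthogonal_matching_pursuit(D, x, k)
print("support recovered:", set(np.nonzero(alpha_hat)[0]) == set(np.nonzero(alpha_true)[0]))
print("max coefficient error:", np.max(np.abs(alpha_hat - alpha_true)))
```

In the noisy variants discussed above, the loop would instead stop once the residual norm falls below the error threshold ε.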
https://en.wikipedia.org/wiki/Sparse_approximation
Verification-based message-passingalgorithms(VB-MPAs)incompressed sensing(CS), a branch ofdigital signal processingthat deals with measuringsparse signals, are some methods to efficiently solve the recovery problem in compressed sensing. One of the main goal in compressed sensing is the recovery process. Generally speaking, recovery process in compressed sensing is a method by which the original signal is estimated using the knowledge of the compressed signal and the measurement matrix.[1]Mathematically, the recovery process in Compressed Sensing is finding the sparsest possible solution of an under-determinedsystem of linear equations. Based on the nature of the measurement matrix one can employ different reconstruction methods. If the measurement matrix is also sparse, one efficient way is to use Message Passing Algorithms for signal recovery. Although there are message passing approaches that deals with dense matrices, the nature of those algorithms are to some extent different from the algorithms working on sparse matrices.[1][2] The main problem in recovery process in CS is to find the sparsest possible solution to the following under-determined system of linear equationsAx=y{\displaystyle Ax=y}whereA{\displaystyle A}is the measurement matrix,x{\displaystyle x}is the original signal to be recovered andy{\displaystyle y}is the compresses known signal. When the matrixA{\displaystyle A}is sparse, one can represent this matrix by abipartite graphG=(Vl∪Vr,E){\displaystyle G=(V_{l}\cup V_{r},E)}for better understanding.[2][3][4][5]Vl{\displaystyle V_{l}}is the set of variable nodes inG{\displaystyle G}which represents the set of elements ofx{\displaystyle x}and alsoVr{\displaystyle V_{r}}is the set of check nodes corresponding to the set of elements ofy{\displaystyle y}. Besides, there is an edgee=(u,v){\displaystyle e=(u,v)}betweenu∈Vl{\displaystyle u\in V_{l}}andv∈Vr{\displaystyle v\in V_{r}}if the corresponding elements inA{\displaystyle A}is non-zero, i.e.Av,u≠0{\displaystyle A_{v,u}\neq 0}. Moreover, the weight of the edgew(e)=Av,u{\displaystyle w(e)=A_{v,u}}.[6]Here is an example of a binary sparse measurement matrix where the weights of the edges are either zero or one. A=[001000001010000101010000100001000010111000000000000100100001000010100001000000011100010010000100]{\displaystyle A=\left[{\begin{array}{c c c c c c c c c c c c}0&0&1&0&0&0&0&0&1&0&1&0\\0&0&0&1&0&1&0&1&0&0&0&0\\1&0&0&0&0&1&0&0&0&0&1&0\\1&1&1&0&0&0&0&0&0&0&0&0\\0&0&0&1&0&0&1&0&0&0&0&1\\0&0&0&0&1&0&1&0&0&0&0&1\\0&0&0&0&0&0&0&1&1&1&0&0\\0&1&0&0&1&0&0&0&0&1&0&0\end{array}}\right]} The basic idea behind message passing algorithms in CS is to transmit appropriate messages between variable nodes and check nodes in aniterative mannerin order to efficiently find signalx{\displaystyle x}. These messages are different for variable nodes and check nodes. However, the basic nature of the messages for all variable node and check nodes are the same in all of the verification based message passing algorithms.[6]The messagesμv(vi):Vl↦R×{0,1}{\displaystyle \mu ^{v}(v_{i}):~V_{l}\mapsto \mathbb {R} \times \{0,1\}}emanating from variable nodevi{\displaystyle v_{i}}contains the value of the check node and an indicator which shows if the variable node is verified or not. 
Moreover, the messagesμc(ci):Vr↦R×Z+{\displaystyle \mu ^{c}(c_{i}):~V_{r}\mapsto \mathbb {R} \times \mathbb {Z} ^{+}}emanating from check nodeci{\displaystyle c_{i}}contains the value of the check node and the remaining degree of the check node in the graph.[6][7] In each iteration, every variable node and check node produce a new message to be transmitted to all of its neighbors based on the messages that they have received from their own neighbors. This local property of the message passing algorithms enables them to be implemented as parallel processing algorithms and makes the time complexity of these algorithm so efficient.[8] The common rule between all verification based message passing algorithms is the fact that once a variable node become verified then this variable node can be removed from the graph and the algorithm can be executed to solve the rest of the graph. Different verification bases message passing algorithms use different combinations of verification rules.[6] The verification rules are as follows: The message passing rules given above are the basic and only rules that should be used in any verification based message passing algorithm. It is shown that these simple rules can efficiently recover the original signal provided that certain conditions are satisfied.[8][6] There are four algorithms known as VB-MPA's, namely Genie, LM, XH, and SBB.[6]All of these algorithms use the same strategy for recovery of the original signal; however, they use different combination of the message passing rules to verify variable nodes. Genie algorithm is thebenchmarkin this topic. Firstly, Genie algorithm is assumed to have the knowledge of thesupportset of the signal, i.e. the set of non-zero elements of the original signal. Using this knowledge, Genie should not care about the zero variable nodes in the graph, and the only task of the Genie algorithm is to recover the values of the non-zero elements of the original signal. Although, Genie does not have any practical aspect, it can be regarded as the benchmark of the problem especially in the sense that this algorithm outperforms other algorithms in this category and one can measure how successful one algorithms is by comparing that to the Genie algorithm. Since Genie only wants to find the value of the non-zero elements of the signal it is not necessary to employ rules that are responsible for zero valued variable node in this algorithm. Therefore, Genie only uses D1CN as the verification rule. This algorithm unlike the Genie algorithm does not have any knowledge about the support set of signal, and it uses D1CN and ZCN together to solve the recovery process in CS. In fact, ZCN is the rule that attempts to verify the zero valued variable nodes and D1CN is responsible for non-zero valued variable nodes. This usage of this algorithm is when one does not have non-binary matrix. In such cases, employing the third rule violated the locality nature of the algorithms. This issue will be considered in SBB algorithm.[6] This algorithm is the same as LM, but it only uses ECN instead of D1CN for the verification of the non-zero variable nodes. If the non-zero elements of the measurement matrix arebinary, then this algorithm cannot be implemented efficiently and the locality of the algorithm will be violated. The most powerful practical algorithm among all of the verification message passing algorithms is the SBB algorithm that employs all of the verification rules for the recovery of the original signal. 
In this algorithm, D1CN and ECN are responsible for the verification of the non-zero elements of the signal and ZCN and ECN will verify zero variable nodes. Thepseudo codeof the VB-MPAs is as follows. In the following algorithmμi{\displaystyle \mu _{i}}represents theith{\displaystyle i^{th}}component of the messages emanating from variable and check nodes.VN{\displaystyle VN}is in fact a variable that keeps the labels of the verified variable nodes.VN′{\displaystyle VN'}is also used to keep the set of verified variable nodes in the previous iteration. By using these two variables one can see if there is any progress in the number of verified variable nodes in the algorithm, and if there is no progress then the algorithm will terminate.[6][9] In all of the algorithms the messages emanating from check nodes are the same; however, since the verification rules are different for different algorithms the messages produced by variable nodes will be different in each algorithm.[6]The algorithm given above works for all of the VB-MPA's, and different algorithms use different rules in half round 2 of round 1 and 2. For instance, Genie algorithm uses D1CN rule in Half round 2 of round 1, and in fact the half round 2 of round 2 which uses ZCN rule is useless in Genie algorithm. LM algorithm uses D1CN in Half round 2 of round 1 and XH algorithm uses ECN rule in this stage instead of D1CN. SBB algorithm also uses both D1CN and ECN rule in the second half round of round 1. All of these rules can be efficiently implemented inupdate_rulefunction in the second half round of round 1. Although there is no guarantee that these algorithms succeed in all of the cases but we can guarantee that if some of the variable nodes become verified during these algorithms then the values of those variable nodes are correctalmost surely. In order to show that it is enough to show that all of the verification rules work perfectly and withoutfalse verification.[6][8] The algebraic point of view of ZCN rule is that if in asystem of linear equationsthe right hand side of the equation is zero thenalmost surelyall of the unknowns in that equations are zero. This is due to the fact that the original signal is assumed to be sparse, besides, we also should have the assumption that the non-zero elements of the signals are chosen form acontinuous distribution. Suppose that there ared{\displaystyle d}variables in that equation, if some of them ind−1{\displaystyle d-1}elements are non-zero then the otherdth{\displaystyle d^{th}}variable node value should have exactly the negative value of the summation of thosed−1{\displaystyle d-1}variable nodes. If the non-zero elements of the original signal are chosen from acontinuous distributionthen the probability of this to occur is zero. Therefore, ZCN rule works perfectly.[6][8] D1CN says that if a variable node is the only unknown variable in an equation then the value of that variable equals theright hand side of that equation. In fact, an equation with just one unknown variable is a check node with degree one, i.e. a check node with just one unverified variable node in its neighborhood.[6][8] This rule has two parts, the first part deals with non-zero elements of the signal while the second one is responsible for the zero elements of the original signal. 
For the first part, it says that if we have two or more equations with the sameright hand side, and if we only have one singleunknown variablev{\displaystyle v}common in all of those equations then the value of this common variable should be the value of theright hand sideof those equations. Besides, it says that all other variables in those equations should be zero. Suppose that one of those variablesv′{\displaystyle v'}is not zero, then theright hand sideof the equation which contains bothv,v′{\displaystyle v,v'}should bex(v′)+x(v){\displaystyle x(v')+x(v)}(For simplicity assume that theedge weightsare all 1 or zero). Besides, since we know thatv{\displaystyle v}is the only unique variable in all of these equations then there should be one equationc{\displaystyle c}in whichv{\displaystyle v}exists andv′{\displaystyle v'}does not exist. On the other hand, we know that theright hand sideof these equations are the same; therefore, theright hand sideof equationc{\displaystyle c}should also bex(v)+x(v′){\displaystyle x(v)+x(v')}. If we removev′{\displaystyle v'}from this equation we should have the summation of some unknown variables to be a non-zero valuex(v′){\displaystyle x(v')}. Since the non-zero elements ofx{\displaystyle x}are chosen randomly from acontinuous distributionthe probability that this summation equals exactlyx(v′){\displaystyle x(v')}is zero. Therefore,almost surelythe value ofv{\displaystyle v}is zero and all other variables in these equations have value zero.[6][8][7] There is just one scenario remained for the second part of the ECN rule as most of it has been covered in the first part. This scenario is the one that we have some equations with the sameright hand sidebut there is two or more variable node common in all of those equations. In this case, we can say nothing about those common variable nodes; however, we can say that all the other variable nodes in those equations are zero. The proof of this claim can be achieved by achange of variablein those equations. Suppose thatv1,v2,...,vq{\displaystyle v_{1},v_{2},...,v_{q}}are the common variable nodes in those equations. If we setv′=v1+v2+...+vq{\displaystyle v'=v_{1}+v_{2}+...+v_{q}}then the problem will be changed to the first part where we only have one common variable node in all of those equations. Therefore, with the same reasoning as in the first part we can see that all other variable nodes that are not common in all of those equations can be verified with value zeroalmost surely.[6][8][7] When the non-zero elements of the measurement matrix are chosen randomly from acontinuous distribution, then it can be shown that if one variable node receives equal messages divided by theedge weightsfrom its neighbors then this variable node is the only unique variable connected to all of those check nodes, therefore, the rule can be applied using a local decision approach, and the variable node can verify itself without further knowledge about the other connections of those check nodes. Moreover, the second part of the ECN rule is not necessary to be implemented as the non-zero verified variable node in the ECN rule will be removed from thebipartite graphin the nextiterationand ZCN rule will be enough to verify all the zero valued variable nodes remained from those equations with the sameright hand side. 
All in all, when the non-zero elements of the measurement matrix are chosen from a continuous distribution, the SBB and XH algorithms, which use the ECN rule, can be implemented efficiently.[6] Every minor loop in the main loop of the algorithm can be executed on parallel processors if we consider each variable node and check node as a separate processor. Therefore, every minor loop in the algorithm can be executed in constant time O(1){\displaystyle O(1)}. Moreover, since the algorithm terminates when there is no progress in the verification of the variable nodes, in the worst case only one variable node is verified in each iteration of the main loop, so the maximum number of times that the main loop is executed is |Vl|{\displaystyle |V_{l}|}. Therefore, the whole algorithm runs in O(|Vl|){\displaystyle O(|V_{l}|)} time.[7]
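A simplified sketch of the verification loop is given below. It applies only the ZCN and D1CN rules (roughly the LM combination; the ECN rule and the full SBB algorithm are omitted) to a small random binary measurement matrix, peeling off verified variable nodes until no further progress is made. All sizes and degrees are illustrative:

```python
import numpy as np

def lm_recover(A, y, max_iter=100, tol=1e-9):
    """Sketch of an LM-style verification loop: ZCN (zero check node) and
    D1CN (degree-one check node) rules applied iteratively for a sparse binary A."""
    m, n = A.shape
    x_hat = np.zeros(n)
    verified = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        residual = y - A @ x_hat                  # check-node values after removing verified contributions
        active = A * ~verified                    # edges to still-unverified variable nodes
        newly = 0
        for c in range(m):
            nbrs = np.nonzero(active[c])[0]
            if len(nbrs) == 0:
                continue
            if abs(residual[c]) < tol:            # ZCN: zero check node -> all its neighbours are 0
                verified[nbrs] = True
                newly += len(nbrs)
            elif len(nbrs) == 1:                  # D1CN: single unverified neighbour takes the residual
                v = nbrs[0]
                x_hat[v] = residual[c] / A[c, v]
                verified[v] = True
                newly += 1
        if newly == 0:                            # no progress -> terminate
            break
    return x_hat, verified

rng = np.random.default_rng(2)
n, m, k, d_v = 60, 30, 4, 3                       # variables, checks, sparsity, variable-node degree
A = np.zeros((m, n))
for v in range(n):                                # random sparse binary measurement matrix
    A[rng.choice(m, d_v, replace=False), v] = 1.0

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
y = A @ x

x_hat, verified = lm_recover(A, y)
print("all verified:", verified.all(), " exact:", np.allclose(x_hat, x))
```

Because the non-zero signal values are drawn from a continuous distribution, a zero-valued check node almost surely has only zero-valued unverified neighbours, which is the assumption that makes the ZCN rule in the sketch safe.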
https://en.wikipedia.org/wiki/Verification-based_message-passing_algorithms_in_compressed_sensing
The following tables compare notable software frameworks, libraries, and computer programs for deep learning applications.
https://en.wikipedia.org/wiki/Comparison_of_deep-learning_software
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of the hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are a random projection but with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are learned in a single step, which essentially amounts to learning a linear model. The name "extreme learning machine" (ELM) was given to such models by Guang-Bin Huang, who originally proposed them for networks with any type of nonlinear piecewise-continuous hidden nodes, including biological neurons and different types of mathematical basis functions.[1][2] The idea for artificial neural networks goes back to Frank Rosenblatt, who not only published a single-layer Perceptron in 1958,[3] but also introduced a multilayer perceptron with three layers: an input layer, a hidden layer with randomized weights that did not learn, and a learning output layer.[4] According to some researchers, these models are able to produce good generalization performance and learn thousands of times faster than networks trained using backpropagation.[5] The literature also shows that these models can outperform support vector machines in both classification and regression applications.[6][1][7] From 2001 to 2010, ELM research mainly focused on the unified learning framework for "generalized" single-hidden-layer feedforward neural networks (SLFNs), including but not limited to sigmoid networks, RBF networks, threshold networks,[8] trigonometric networks, fuzzy inference systems, Fourier series,[9][10] Laplacian transform, wavelet networks,[11] etc. One significant achievement made in those years was the proof of the universal approximation and classification capabilities of ELM in theory.[9][12][13] From 2010 to 2015, ELM research extended to the unified learning framework for kernel learning, SVM and a few typical feature learning methods such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF). It has been argued that SVM actually provides suboptimal solutions compared to ELM, and that ELM can provide the whitebox kernel mapping, implemented by the ELM random feature mapping, instead of the blackbox kernel used in SVM. PCA and NMF can be considered special cases where linear hidden nodes are used in ELM.[14][15] From 2015 to 2017, an increased focus was placed on hierarchical implementations[16][17] of ELM.
Additionally since 2011, significant biological studies have been made that support certain ELM theories.[18][19][20] From 2017 onwards, to overcome low-convergence problem during trainingLU decomposition,Hessenberg decompositionandQR decompositionbased approaches withregularizationhave begun to attract attention[21][22][23] In 2017, Google Scholar Blog published a list of "Classic Papers: Articles That Have Stood The Test of Time".[24]Among these are two papers written about ELM which are shown in studies 2 and 7 from the "List of 10 classic AI papers from 2006".[25][26][27] Given a single hidden layer of ELM, suppose that the output function of thei{\displaystyle i}-th hidden node ishi(x)=G(ai,bi,x){\displaystyle h_{i}(\mathbf {x} )=G(\mathbf {a} _{i},b_{i},\mathbf {x} )}, whereai{\displaystyle \mathbf {a} _{i}}andbi{\displaystyle b_{i}}are the parameters of thei{\displaystyle i}-th hidden node. The output function of the ELM for single hidden layer feedforward networks (SLFN) withL{\displaystyle L}hidden nodes is: fL(x)=∑i=1Lβihi(x){\displaystyle f_{L}({\bf {x}})=\sum _{i=1}^{L}{\boldsymbol {\beta }}_{i}h_{i}({\bf {x}})}, whereβi{\displaystyle {\boldsymbol {\beta }}_{i}}is the output weight of thei{\displaystyle i}-th hidden node. h(x)=[hi(x),...,hL(x)]{\displaystyle \mathbf {h} (\mathbf {x} )=[h_{i}(\mathbf {x} ),...,h_{L}(\mathbf {x} )]}is the hidden layer output mapping of ELM. GivenN{\displaystyle N}training samples, the hidden layer output matrixH{\displaystyle \mathbf {H} }of ELM is given as:H=[h(x1)⋮h(xN)]=[G(a1,b1,x1)⋯G(aL,bL,x1)⋮⋮⋮G(a1,b1,xN)⋯G(aL,bL,xN)]{\displaystyle {\bf {H}}=\left[{\begin{matrix}{\bf {h}}({\bf {x}}_{1})\\\vdots \\{\bf {h}}({\bf {x}}_{N})\end{matrix}}\right]=\left[{\begin{matrix}G({\bf {a}}_{1},b_{1},{\bf {x}}_{1})&\cdots &G({\bf {a}}_{L},b_{L},{\bf {x}}_{1})\\\vdots &\vdots &\vdots \\G({\bf {a}}_{1},b_{1},{\bf {x}}_{N})&\cdots &G({\bf {a}}_{L},b_{L},{\bf {x}}_{N})\end{matrix}}\right]} andT{\displaystyle \mathbf {T} }is the training data target matrix:T=[t1⋮tN]{\displaystyle {\bf {T}}=\left[{\begin{matrix}{\bf {t}}_{1}\\\vdots \\{\bf {t}}_{N}\end{matrix}}\right]} Generally speaking, ELM is a kind of regularization neural networks but with non-tuned hidden layer mappings (formed by either random hidden nodes, kernels or other implementations), its objective function is: Minimize:‖β‖pσ1+C‖Hβ−T‖qσ2{\displaystyle {\text{Minimize: }}\|{\boldsymbol {\beta }}\|_{p}^{\sigma _{1}}+C\|{\bf {H}}{\boldsymbol {\beta }}-{\bf {T}}\|_{q}^{\sigma _{2}}} whereσ1>0,σ2>0,p,q=0,12,1,2,⋯,+∞{\displaystyle \sigma _{1}>0,\sigma _{2}>0,p,q=0,{\frac {1}{2}},1,2,\cdots ,+\infty }. Different combinations ofσ1{\displaystyle \sigma _{1}},σ2{\displaystyle \sigma _{2}},p{\displaystyle p}andq{\displaystyle q}can be used and result in different learning algorithms for regression, classification, sparse coding, compression, feature learning and clustering. As a special case, a simplest ELM training algorithm learns a model of the form (for single hidden layer sigmoid neural networks): whereW1is the matrix of input-to-hidden-layer weights,σ{\displaystyle \sigma }is an activation function, andW2is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows: In most cases, ELM is used as a single hidden layer feedforward network (SLFN) including but not limited to sigmoid networks, RBF networks, threshold networks, fuzzy inference networks, complex neural networks, wavelet networks, Fourier transform, Laplacian transform, etc. 
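A minimal sketch of the basic training algorithm described above — random, untuned input weights W1 and biases, a sigmoid hidden layer, and output weights obtained from the Moore–Penrose pseudoinverse of the hidden-layer output matrix H — is shown below on a toy regression problem. The activation function, the data, and the choice of L = 50 hidden nodes are illustrative assumptions; the regularized forms of the objective would replace the plain pseudoinverse with, for example, a ridge solution:

```python
import numpy as np

def elm_train(X, T, L, rng=None):
    """Basic single-hidden-layer ELM: random input weights and biases are drawn once
    and never tuned; only the output weights beta (W2) are solved by least squares."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = X.shape[1]
    W1 = rng.normal(size=(d, L))                  # random input-to-hidden weights (fixed)
    b = rng.normal(size=L)                        # random hidden biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b)))       # sigmoid hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                  # output weights via Moore-Penrose pseudoinverse
    return W1, b, beta

def elm_predict(X, W1, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b)))
    return H @ beta

# Toy regression: learn y = sin(x) on [-3, 3] with L = 50 random sigmoid nodes.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
T = np.sin(X)
W1, b, beta = elm_train(X, T, L=50, rng=rng)
pred = elm_predict(X, W1, b, beta)
print("training RMSE:", np.sqrt(np.mean((pred - T) ** 2)))
```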
Due to its different learning algorithm implementations for regression, classification, sparse coding, compression, feature learning and clustering, multi ELMs have been used to form multi hidden layer networks,deep learningor hierarchical networks.[16][17][28] A hidden node in ELM is a computational element, which need not be considered as classical neuron. A hidden node in ELM can be classical artificial neurons, basis functions, or a subnetwork formed by some hidden nodes.[12] Both universal approximation and classification capabilities[6][1]have been proved for ELM in literature. Especially,Guang-Bin Huangand his team spent almost seven years (2001-2008) on the rigorous proofs of ELM's universal approximation capability.[9][12][13] In theory, any nonconstant piecewise continuous function can be used as activation function in ELM hidden nodes, such an activation function need not be differential. If tuning the parameters of hidden nodes could make SLFNs approximate any target functionf(x){\displaystyle f(\mathbf {x} )}, then hidden node parameters can be randomly generated according to any continuous distribution probability, andlimL→∞‖∑i=1Lβihi(x)−f(x)‖=0{\displaystyle \lim _{L\rightarrow \infty }\left\|\sum _{i=1}^{L}{\boldsymbol {\beta }}_{i}h_{i}({\bf {x}})-f({\bf {x}})\right\|=0}holds with probability one with appropriate output weightsβ{\displaystyle {\boldsymbol {\beta }}}. Given any nonconstant piecewise continuous function as the activation function in SLFNs, if tuning the parameters of hidden nodes can make SLFNs approximate any target functionf(x){\displaystyle f(\mathbf {x} )}, then SLFNs with random hidden layer mappingh(x){\displaystyle \mathbf {h} (\mathbf {x} )}can separate arbitrary disjoint regions of any shapes. A wide range of nonlinear piecewise continuous functionsG(a,b,x){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )}can be used in hidden neurons of ELM, for example: Sigmoid function:G(a,b,x)=11+exp⁡(−(a⋅x+b)){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )={\frac {1}{1+\exp(-(\mathbf {a} \cdot \mathbf {x} +b))}}} Fourier function:G(a,b,x)=sin⁡(a⋅x+b){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\sin(\mathbf {a} \cdot \mathbf {x} +b)} Hardlimit function:G(a,b,x)={1,ifa⋅x−b≥00,otherwise{\displaystyle G(\mathbf {a} ,b,\mathbf {x} )={\begin{cases}1,&{\text{if }}{\bf {a}}\cdot {\bf {x}}-b\geq 0\\0,&{\text{otherwise}}\end{cases}}} Gaussian function:G(a,b,x)=exp⁡(−b‖x−a‖2){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\exp(-b\|\mathbf {x} -\mathbf {a} \|^{2})} Multiquadrics function:G(a,b,x)=(‖x−a‖2+b2)1/2{\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=(\|\mathbf {x} -\mathbf {a} \|^{2}+b^{2})^{1/2}} Wavelet:G(a,b,x)=‖a‖−1/2Ψ(x−ab){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\|a\|^{-1/2}\Psi \left({\frac {\mathbf {x} -\mathbf {a} }{b}}\right)}whereΨ{\displaystyle \Psi }is a single mother wavelet function. 
Circular functions: tan⁡(z)=eiz−e−izi(eiz+e−iz){\displaystyle \tan(z)={\frac {e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}}} sin⁡(z)=eiz−e−iz2i{\displaystyle \sin(z)={\frac {e^{iz}-e^{-iz}}{2i}}} Inverse circular functions: arctan⁡(z)=∫0zdt1+t2{\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dt}{1+t^{2}}}} arccos⁡(z)=∫0zdt(1−t2)1/2{\displaystyle \arccos(z)=\int _{0}^{z}{\frac {dt}{(1-t^{2})^{1/2}}}} Hyperbolic functions: tanh⁡(z)=ez−e−zez+e−z{\displaystyle \tanh(z)={\frac {e^{z}-e^{-z}}{e^{z}+e^{-z}}}} sinh⁡(z)=ez−e−z2{\displaystyle \sinh(z)={\frac {e^{z}-e^{-z}}{2}}} Inverse hyperbolic functions: arctanh(z)=∫0zdt1−t2{\displaystyle {\text{arctanh}}(z)=\int _{0}^{z}{\frac {dt}{1-t^{2}}}} arcsinh(z)=∫0zdt(1+t2)1/2{\displaystyle {\text{arcsinh}}(z)=\int _{0}^{z}{\frac {dt}{(1+t^{2})^{1/2}}}} Theblack-boxcharacter of neural networks in general and extreme learning machines (ELM) in particular is one of the major concerns that repels engineers from application in unsafe automation tasks. This particular issue was approached by means of several different techniques. One approach is to reduce the dependence on the random input.[29][30]Another approach focuses on the incorporation of continuous constraints into the learning process of ELMs[31][32]which are derived from prior knowledge about the specific task. This is reasonable, because machine learning solutions have to guarantee a safe operation in many application domains. The mentioned studies revealed that the special form of ELMs, with its functional separation and the linear read-out weights, is particularly well suited for the efficient incorporation of continuous constraints in predefined regions of the input space. There are two main complaints from academic community concerning this work, the first one is about "reinventing and ignoring previous ideas", the second one is about "improper naming and popularizing", as shown in some debates in 2008 and 2015.[33]In particular, it was pointed out in a letter[34]to the editor ofIEEE Transactions on Neural Networksthat the idea of using a hidden layer connected to the inputs by random untrained weights was already suggested in the original papers onRBF networksin the late 1980s; Guang-Bin Huang replied by pointing out subtle differences.[35]In a 2015 paper,[1]Huang responded to complaints about his invention of the name ELM for already-existing methods, complaining of "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", arguing that his work "provides a unifying learning platform" for various types of neural nets,[1]including hierarchical structured ELM.[28]In 2015, Huang also gave a formal rebuttal to what he considered as "malign and attack."[36]Recent research replaces the random weights with constrained random weights.[6][37]
https://en.wikipedia.org/wiki/Extreme_learning_machine
Inimaging science,difference of Gaussians(DoG) is afeatureenhancement algorithm that involves the subtraction of oneGaussian blurredversion of an original image from another, less blurred version of the original. In the simple case ofgrayscale images, the blurred images are obtained byconvolvingthe originalgrayscale imageswithGaussian kernelshaving differing width (standard deviations). Blurring an image using a Gaussiankernelsuppresses onlyhigh-frequency spatialinformation. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the DoG is a spatialband-pass filterthat attenuates frequencies in the original grayscale image that are far from the band center.[1] LetΦt:Rn→R{\displaystyle \Phi _{t}:\mathbb {R} ^{n}\rightarrow \mathbb {R} }denote the radialGaussian functionΦt(x)=N(x|0,t){\displaystyle \Phi _{t}(x)={\mathcal {N}}(x|0,t)}with mean0{\displaystyle 0}and variancet{\displaystyle t}, i.e., themultivariate Gaussian functionΦt(x)=N(x|0,tI){\displaystyle \Phi _{t}(x)={\mathcal {N}}(x|0,tI)}with mean0{\displaystyle 0}and covariancetI{\displaystyle tI}. More explicitly, we have The difference of Gaussians with variancest1<t2{\displaystyle t_{1}<t_{2}}is thekernel function obtained by subtracting the higher-variance Gaussian from the lower-variance Gaussian. The difference of Gaussian operator is theconvolutional operatorassociated with this kernel function. So given ann-dimensionalgrayscaleimageI:Rn→R{\displaystyle I:\mathbb {R} ^{n}\rightarrow \mathbb {R} }, the difference of Gaussians of the imageI{\displaystyle I}is then-dimensional image Because convolution is bilinear, convolving against the difference of Gaussians is equivalent to applying two different Gaussian blurs and then taking the difference. In practice, this is faster because Gaussian blur is aseparable filter. The difference of Gaussians can be thought of as an approximation of theMexican hat kernel functionused for theLaplacian of the Gaussianoperator. The key observation is that the family of GaussiansΦt{\displaystyle \Phi _{t}}is the fundamental solution of the heat equation The left-hand side can be approximated by the difference quotient Meanwhile, the right-hand side is precisely theLaplacianof the Gaussian function. Note that the Laplacian of the Gaussian can be used as a filter to produce a Gaussian blur of the Laplacian of the image becauseI∗ΔΦt=ΔI∗Φt{\displaystyle I*\Delta \Phi _{t}=\Delta {I}*\Phi _{t}}by standard properties of convolution. The relationship between the difference of Gaussians operator and theLaplacian of the Gaussianoperator is explained further in Appendix A in Lindeberg (2015).[2] As afeatureenhancement algorithm, the difference of Gaussians can be utilized to increase the visibility of edges and other detail present in a digital image. A wide variety of alternativeedge sharpening filtersoperate by enhancing high frequency detail, but becauserandom noisealso has a high spatial frequency, many of these sharpening filters tend to enhance noise, which can be an undesirable artifact. The difference of Gaussians algorithm removes high frequency detail that often includes random noise, rendering this approach one of the most suitable for processing images with a high degree of noise. 
A major drawback to application of the algorithm is an inherent reduction in overall image contrast produced by the operation.[1] When utilized for image enhancement, the difference of Gaussians algorithm is typically applied when the size ratio of kernel (2) to kernel (1) is 4:1 or 5:1. In the example images, the sizes of the Gaussian kernels employed to smooth the sample image were 10 pixels and 5 pixels. The algorithm can also be used to obtain an approximation of the Laplacian of Gaussian when the ratio of size 2 to size 1 is roughly equal to 1.6.[3] The Laplacian of Gaussian is useful for detecting edges that appear at various image scales or degrees of image focus. The exact sizes of the two kernels used to approximate the Laplacian of Gaussian determine the scale of the difference image, which may appear blurry as a result. Differences of Gaussians have also been used for blob detection in the scale-invariant feature transform (SIFT). In fact, the DoG, as the difference of two multivariate normal distributions, always has zero total sum, so convolving it with a uniform signal generates no response. It closely approximates a second derivative of a Gaussian (the Laplacian of Gaussian) with K ≈ 1.6, and the receptive fields of ganglion cells in the retina with K ≈ 5. It may easily be used in recursive schemes and is used as an operator in real-time algorithms for blob detection and automatic scale selection. In its operation, the difference of Gaussians algorithm is believed to mimic how neural processing in the retina of the eye extracts details from images destined for transmission to the brain.[4][5][6]
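The band-pass behaviour described above can be reproduced with two Gaussian blurs and a subtraction. The sketch below uses SciPy's gaussian_filter, a synthetic test image, and the K ≈ 1.6 ratio mentioned in the text; all of these are illustrative choices rather than any cited implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma1, k=1.6):
    """Band-pass filter an image by subtracting a wider Gaussian blur (sigma2 = k*sigma1)
    from a narrower one (sigma1); k ~ 1.6 approximates the Laplacian of Gaussian."""
    blur_narrow = gaussian_filter(image, sigma=sigma1)
    blur_wide = gaussian_filter(image, sigma=k * sigma1)
    return blur_narrow - blur_wide

# Synthetic test image: a bright square on a dark background plus noise.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0
img += 0.05 * rng.normal(size=img.shape)

dog = difference_of_gaussians(img, sigma1=2.0, k=1.6)
print(dog.shape, dog.min(), dog.max())   # the strongest responses occur along the square's edges
```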
https://en.wikipedia.org/wiki/Difference_of_Gaussians
Inmathematics, aGaussian function, often simply referred to as aGaussian, is afunctionof the base formf(x)=exp⁡(−x2){\displaystyle f(x)=\exp(-x^{2})}and with parametric extensionf(x)=aexp⁡(−(x−b)22c2){\displaystyle f(x)=a\exp \left(-{\frac {(x-b)^{2}}{2c^{2}}}\right)}for arbitraryrealconstantsa,band non-zeroc. It is named after the mathematicianCarl Friedrich Gauss. Thegraphof a Gaussian is a characteristic symmetric "bell curve" shape. The parameterais the height of the curve's peak,bis the position of the center of the peak, andc(thestandard deviation, sometimes called the GaussianRMSwidth) controls the width of the "bell". Gaussian functions are often used to represent theprobability density functionof anormally distributedrandom variablewithexpected valueμ=bandvarianceσ2=c2. In this case, the Gaussian is of the form[1] g(x)=1σ2πexp⁡(−12(x−μ)2σ2).{\displaystyle g(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}}{\frac {(x-\mu )^{2}}{\sigma ^{2}}}\right).} Gaussian functions are widely used instatisticsto describe thenormal distributions, insignal processingto defineGaussian filters, inimage processingwhere two-dimensional Gaussians are used forGaussian blurs, and in mathematics to solveheat equationsanddiffusion equationsand to define theWeierstrass transform. They are also abundantly used inquantum chemistryto formbasis sets. Gaussian functions arise by composing theexponential functionwith aconcavequadratic function:f(x)=exp⁡(αx2+βx+γ),{\displaystyle f(x)=\exp(\alpha x^{2}+\beta x+\gamma ),}where (Note:a=1/(σ2π){\displaystyle a=1/(\sigma {\sqrt {2\pi }})}inln⁡a{\displaystyle \ln a}, not to be confused withα=−1/2c2{\displaystyle \alpha =-1/2c^{2}}) The Gaussian functions are thus those functions whoselogarithmis a concave quadratic function. The parametercis related to thefull width at half maximum(FWHM) of the peak according to FWHM=22ln⁡2c≈2.35482c.{\displaystyle {\text{FWHM}}=2{\sqrt {2\ln 2}}\,c\approx 2.35482\,c.} The function may then be expressed in terms of the FWHM, represented byw:f(x)=ae−4(ln⁡2)(x−b)2/w2.{\displaystyle f(x)=ae^{-4(\ln 2)(x-b)^{2}/w^{2}}.} Alternatively, the parameterccan be interpreted by saying that the twoinflection pointsof the function occur atx=b±c. Thefull width at tenth of maximum(FWTM) for a Gaussian could be of interest and isFWTM=22ln⁡10c≈4.29193c.{\displaystyle {\text{FWTM}}=2{\sqrt {2\ln 10}}\,c\approx 4.29193\,c.} Gaussian functions areanalytic, and theirlimitasx→ ∞is 0 (for the above case ofb= 0). Gaussian functions are among those functions that areelementarybut lack elementaryantiderivatives; theintegralof the Gaussian function is theerror function: ∫e−x2dx=π2erf⁡x+C.{\displaystyle \int e^{-x^{2}}\,dx={\frac {\sqrt {\pi }}{2}}\operatorname {erf} x+C.} Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using theGaussian integral∫−∞∞e−x2dx=π,{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }},}and one obtains∫−∞∞ae−(x−b)2/(2c2)dx=ac⋅2π.{\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/(2c^{2})}\,dx=ac\cdot {\sqrt {2\pi }}.} This integral is 1 if and only ifa=1c2π{\textstyle a={\tfrac {1}{c{\sqrt {2\pi }}}}}(thenormalizing constant), and in this case the Gaussian is theprobability density functionof anormally distributedrandom variablewithexpected valueμ=bandvarianceσ2=c2:g(x)=1σ2πexp⁡(−(x−μ)22σ2).{\displaystyle g(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left({\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\right).} These Gaussians are plotted in the accompanying figure. 
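A quick numerical check of the FWHM relation quoted above; the parameter values are arbitrary illustrations.

```python
# Check that the full width at half maximum equals 2*sqrt(2*ln 2)*c ~ 2.35482*c.
import numpy as np

a, b, c = 2.0, 1.0, 3.0
x = np.linspace(b - 10 * c, b + 10 * c, 20001)
f = a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

above_half = x[f >= a / 2]                       # region where f exceeds half its peak
print(above_half[-1] - above_half[0])            # ~7.064
print(2 * np.sqrt(2 * np.log(2)) * c)            # 7.06446...
```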
The product of two Gaussian functions is a Gaussian, and theconvolutionof two Gaussian functions is also a Gaussian, with variance being the sum of the original variances:c2=c12+c22{\displaystyle c^{2}=c_{1}^{2}+c_{2}^{2}}. The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF. The Fourieruncertainty principlebecomes an equality if and only if (modulated) Gaussian functions are considered.[2] Taking theFourier transform (unitary, angular-frequency convention)of a Gaussian function with parametersa= 1,b= 0andcyields another Gaussian function, with parametersc{\displaystyle c},b= 0and1/c{\displaystyle 1/c}.[3]So in particular the Gaussian functions withb= 0andc=1{\displaystyle c=1}are kept fixed by the Fourier transform (they areeigenfunctionsof the Fourier transform with eigenvalue 1). A physical realization is that of thediffraction pattern: for example, aphotographic slidewhosetransmittancehas a Gaussian variation is also a Gaussian function. The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting[clarification needed]identity from thePoisson summation formula:∑k∈Zexp⁡(−π⋅(kc)2)=c⋅∑k∈Zexp⁡(−π⋅(kc)2).{\displaystyle \sum _{k\in \mathbb {Z} }\exp \left(-\pi \cdot \left({\frac {k}{c}}\right)^{2}\right)=c\cdot \sum _{k\in \mathbb {Z} }\exp \left(-\pi \cdot (kc)^{2}\right).} The integral of an arbitrary Gaussian function is∫−∞∞ae−(x−b)2/2c2dx=a|c|2π.{\displaystyle \int _{-\infty }^{\infty }a\,e^{-(x-b)^{2}/2c^{2}}\,dx=\ a\,|c|\,{\sqrt {2\pi }}.} An alternative form is∫−∞∞ke−fx2+gx+hdx=∫−∞∞ke−f(x−g/(2f))2+g2/(4f)+hdx=kπfexp⁡(g24f+h),{\displaystyle \int _{-\infty }^{\infty }k\,e^{-fx^{2}+gx+h}\,dx=\int _{-\infty }^{\infty }k\,e^{-f{\big (}x-g/(2f){\big )}^{2}+g^{2}/(4f)+h}\,dx=k\,{\sqrt {\frac {\pi }{f}}}\,\exp \left({\frac {g^{2}}{4f}}+h\right),}wherefmust be strictly positive for the integral to converge. The integral∫−∞∞ae−(x−b)2/2c2dx{\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/2c^{2}}\,dx}for somerealconstantsa,bandc> 0 can be calculated by putting it into the form of aGaussian integral. First, the constantacan simply be factored out of the integral. Next, the variable of integration is changed fromxtoy=x−b:a∫−∞∞e−y2/2c2dy,{\displaystyle a\int _{-\infty }^{\infty }e^{-y^{2}/2c^{2}}\,dy,}and then toz=y/2c2{\displaystyle z=y/{\sqrt {2c^{2}}}}:a2c2∫−∞∞e−z2dz.{\displaystyle a{\sqrt {2c^{2}}}\int _{-\infty }^{\infty }e^{-z^{2}}\,dz.} Then, using theGaussian integral identity∫−∞∞e−z2dz=π,{\displaystyle \int _{-\infty }^{\infty }e^{-z^{2}}\,dz={\sqrt {\pi }},} we have∫−∞∞ae−(x−b)2/2c2dx=a2πc2.{\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/2c^{2}}\,dx=a{\sqrt {2\pi c^{2}}}.} Base form:f(x,y)=exp⁡(−x2−y2){\displaystyle f(x,y)=\exp(-x^{2}-y^{2})} In two dimensions, the power to whicheis raised in the Gaussian function is any negative-definite quadratic form. Consequently, thelevel setsof the Gaussian will always be ellipses. A particular example of a two-dimensional Gaussian function isf(x,y)=Aexp⁡(−((x−x0)22σX2+(y−y0)22σY2)).{\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}+{\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)\right).} Here the coefficientAis the amplitude,x0,y0is the center, andσx,σyare thexandyspreads of the blob. The figure on the right was created usingA= 1,x0= 0,y0= 0,σx=σy= 1. 
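Both the one-dimensional integral formula and the variance-addition rule under convolution stated above can be verified numerically in a few lines; the grid and parameter values below are illustrative.

```python
# Numerical checks of the Gaussian integral and of variance addition under convolution.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Integral of a*exp(-(x-b)^2/(2 c^2)) equals a*|c|*sqrt(2*pi).
a, b, c = 1.5, 0.0, 2.0
g = a * np.exp(-(x - b) ** 2 / (2 * c ** 2))
print(g.sum() * dx, a * abs(c) * np.sqrt(2 * np.pi))      # ~7.52 for both

# Convolving normalized Gaussians of widths c1 and c2 gives width sqrt(c1^2 + c2^2).
c1, c2 = 1.0, 2.0
g1 = np.exp(-x ** 2 / (2 * c1 ** 2)) / (c1 * np.sqrt(2 * np.pi))
g2 = np.exp(-x ** 2 / (2 * c2 ** 2)) / (c2 * np.sqrt(2 * np.pi))
conv = np.convolve(g1, g2, mode="same") * dx              # discrete approximation of g1*g2
print((x ** 2 * conv).sum() * dx, c1 ** 2 + c2 ** 2)      # ~5.0 for both
```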
The volume under the Gaussian function is given byV=∫−∞∞∫−∞∞f(x,y)dxdy=2πAσXσY.{\displaystyle V=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x,y)\,dx\,dy=2\pi A\sigma _{X}\sigma _{Y}.} In general, a two-dimensional elliptical Gaussian function is expressed asf(x,y)=Aexp⁡(−(a(x−x0)2+2b(x−x0)(y−y0)+c(y−y0)2)),{\displaystyle f(x,y)=A\exp {\Big (}-{\big (}a(x-x_{0})^{2}+2b(x-x_{0})(y-y_{0})+c(y-y_{0})^{2}{\big )}{\Big )},}where the matrix[abbc]{\displaystyle {\begin{bmatrix}a&b\\b&c\end{bmatrix}}}ispositive-definite. Using this formulation, the figure on the right can be created usingA= 1,(x0,y0) = (0, 0),a=c= 1/2,b= 0. For the general form of the equation the coefficientAis the height of the peak and(x0,y0)is the center of the blob. If we seta=cos2⁡θ2σX2+sin2⁡θ2σY2,b=−sin⁡θcos⁡θ2σX2+sin⁡θcos⁡θ2σY2,c=sin2⁡θ2σX2+cos2⁡θ2σY2,{\displaystyle {\begin{aligned}a&={\frac {\cos ^{2}\theta }{2\sigma _{X}^{2}}}+{\frac {\sin ^{2}\theta }{2\sigma _{Y}^{2}}},\\b&=-{\frac {\sin \theta \cos \theta }{2\sigma _{X}^{2}}}+{\frac {\sin \theta \cos \theta }{2\sigma _{Y}^{2}}},\\c&={\frac {\sin ^{2}\theta }{2\sigma _{X}^{2}}}+{\frac {\cos ^{2}\theta }{2\sigma _{Y}^{2}}},\end{aligned}}}then we rotate the blob by a positive, counter-clockwise angleθ{\displaystyle \theta }(for negative, clockwise rotation, invert the signs in thebcoefficient).[4] To get back the coefficientsθ{\displaystyle \theta },σX{\displaystyle \sigma _{X}}andσY{\displaystyle \sigma _{Y}}froma{\displaystyle a},b{\displaystyle b}andc{\displaystyle c}use θ=12arctan⁡(2ba−c),θ∈[−45,45],σX2=12(a⋅cos2⁡θ+2b⋅cos⁡θsin⁡θ+c⋅sin2⁡θ),σY2=12(a⋅sin2⁡θ−2b⋅cos⁡θsin⁡θ+c⋅cos2⁡θ).{\displaystyle {\begin{aligned}\theta &={\frac {1}{2}}\arctan \left({\frac {2b}{a-c}}\right),\quad \theta \in [-45,45],\\\sigma _{X}^{2}&={\frac {1}{2(a\cdot \cos ^{2}\theta +2b\cdot \cos \theta \sin \theta +c\cdot \sin ^{2}\theta )}},\\\sigma _{Y}^{2}&={\frac {1}{2(a\cdot \sin ^{2}\theta -2b\cdot \cos \theta \sin \theta +c\cdot \cos ^{2}\theta )}}.\end{aligned}}} Example rotations of Gaussian blobs can be seen in the following examples: Using the followingOctavecode, one can easily see the effect of changing the parameters: Such functions are often used inimage processingand in computational models ofvisual systemfunction—see the articles onscale spaceandaffine shape adaptation. Also seemultivariate normal distribution. 
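In the same spirit as the Octave script mentioned above, a short Python sketch can build the rotated elliptical Gaussian from θ, σX and σY using the a, b, c coefficients defined in the text, and check its volume against 2πAσXσY (the grid and parameter values are illustrative):

```python
# Sketch: rotated 2-D elliptical Gaussian from (A, x0, y0, sigma_x, sigma_y, theta).
import numpy as np

def rotated_gaussian(X, Y, A, x0, y0, sigma_x, sigma_y, theta):
    a = np.cos(theta) ** 2 / (2 * sigma_x ** 2) + np.sin(theta) ** 2 / (2 * sigma_y ** 2)
    b = (-np.sin(theta) * np.cos(theta) / (2 * sigma_x ** 2)
         + np.sin(theta) * np.cos(theta) / (2 * sigma_y ** 2))
    c = np.sin(theta) ** 2 / (2 * sigma_x ** 2) + np.cos(theta) ** 2 / (2 * sigma_y ** 2)
    return A * np.exp(-(a * (X - x0) ** 2 + 2 * b * (X - x0) * (Y - y0) + c * (Y - y0) ** 2))

x = np.linspace(-10, 10, 401)
X, Y = np.meshgrid(x, x)
Z = rotated_gaussian(X, Y, A=1.0, x0=0.0, y0=0.0, sigma_x=2.0, sigma_y=0.5, theta=np.pi / 6)

# The volume is unchanged by rotation and should match 2*pi*A*sigma_X*sigma_Y.
dx = x[1] - x[0]
print(Z.sum() * dx * dx, 2 * np.pi * 1.0 * 2.0 * 0.5)     # ~6.283 for both
```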
A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be taken by raising the content of the exponent to a powerP{\displaystyle P}:f(x)=Aexp⁡(−((x−x0)22σX2)P).{\displaystyle f(x)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}\right)^{P}\right).} This function is known as a super-Gaussian function and is often used for Gaussian beam formulation.[5]This function may also be expressed in terms of thefull width at half maximum(FWHM), represented byw:f(x)=Aexp⁡(−ln⁡2(4(x−x0)2w2)P).{\displaystyle f(x)=A\exp \left(-\ln 2\left(4{\frac {(x-x_{0})^{2}}{w^{2}}}\right)^{P}\right).} In a two-dimensional formulation, a Gaussian function alongx{\displaystyle x}andy{\displaystyle y}can be combined[6]with potentially differentPX{\displaystyle P_{X}}andPY{\displaystyle P_{Y}}to form a rectangular Gaussian distribution:f(x,y)=Aexp⁡(−((x−x0)22σX2)PX−((y−y0)22σY2)PY).{\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}\right)^{P_{X}}-\left({\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)^{P_{Y}}\right).}or an elliptical Gaussian distribution:f(x,y)=Aexp⁡(−((x−x0)22σX2+(y−y0)22σY2)P){\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}+{\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)^{P}\right)} In ann{\displaystyle n}-dimensional space a Gaussian function can be defined asf(x)=exp⁡(−xTCx),{\displaystyle f(x)=\exp(-x^{\mathsf {T}}Cx),}wherex=[x1⋯xn]{\displaystyle x={\begin{bmatrix}x_{1}&\cdots &x_{n}\end{bmatrix}}}is a column ofn{\displaystyle n}coordinates,C{\displaystyle C}is apositive-definiten×n{\displaystyle n\times n}matrix, andT{\displaystyle {}^{\mathsf {T}}}denotesmatrix transposition. The integral of this Gaussian function over the wholen{\displaystyle n}-dimensional space is given as∫Rnexp⁡(−xTCx)dx=πndetC.{\displaystyle \int _{\mathbb {R} ^{n}}\exp(-x^{\mathsf {T}}Cx)\,dx={\sqrt {\frac {\pi ^{n}}{\det C}}}.} It can be easily calculated by diagonalizing the matrixC{\displaystyle C}and changing the integration variables to the eigenvectors ofC{\displaystyle C}. More generally a shifted Gaussian function is defined asf(x)=exp⁡(−xTCx+sTx),{\displaystyle f(x)=\exp(-x^{\mathsf {T}}Cx+s^{\mathsf {T}}x),}wheres=[s1⋯sn]{\displaystyle s={\begin{bmatrix}s_{1}&\cdots &s_{n}\end{bmatrix}}}is the shift vector and the matrixC{\displaystyle C}can be assumed to be symmetric,CT=C{\displaystyle C^{\mathsf {T}}=C}, and positive-definite. 
The following integrals with this function can be calculated with the same technique:∫Rne−xTCx+vTxdx=πndetCexp⁡(14vTC−1v)≡M.{\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}\,dx={\sqrt {\frac {\pi ^{n}}{\det {C}}}}\exp \left({\frac {1}{4}}v^{\mathsf {T}}C^{-1}v\right)\equiv {\mathcal {M}}.}∫Rne−xTCx+vTx(aTx)dx=(aTu)⋅M,whereu=12C−1v.{\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}(a^{\mathsf {T}}x)\,dx=(a^{T}u)\cdot {\mathcal {M}},{\text{ where }}u={\frac {1}{2}}C^{-1}v.}∫Rne−xTCx+vTx(xTDx)dx=(uTDu+12tr⁡(DC−1))⋅M.{\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}(x^{\mathsf {T}}Dx)\,dx=\left(u^{\mathsf {T}}Du+{\frac {1}{2}}\operatorname {tr} (DC^{-1})\right)\cdot {\mathcal {M}}.}∫Rne−xTC′x+s′Tx(−∂∂xΛ∂∂x)e−xTCx+sTxdx=(2tr⁡(C′ΛCB−1)+4uTC′ΛCu−2uT(C′Λs+CΛs′)+s′TΛs)⋅M,{\displaystyle {\begin{aligned}&\int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}C'x+s'^{\mathsf {T}}x}\left(-{\frac {\partial }{\partial x}}\Lambda {\frac {\partial }{\partial x}}\right)e^{-x^{\mathsf {T}}Cx+s^{\mathsf {T}}x}\,dx\\&\qquad =\left(2\operatorname {tr} (C'\Lambda CB^{-1})+4u^{\mathsf {T}}C'\Lambda Cu-2u^{\mathsf {T}}(C'\Lambda s+C\Lambda s')+s'^{\mathsf {T}}\Lambda s\right)\cdot {\mathcal {M}},\end{aligned}}}whereu=12B−1v,v=s+s′,B=C+C′.{\textstyle u={\frac {1}{2}}B^{-1}v,\ v=s+s',\ B=C+C'.} A number of fields such asstellar photometry,Gaussian beamcharacterization, andemission/absorption line spectroscopywork with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a,b,c) and five for a 2D Gaussian function(A;x0,y0;σX,σY){\displaystyle (A;x_{0},y_{0};\sigma _{X},\sigma _{Y})}. The most common method for estimating the Gaussian parameters is to take the logarithm of the data andfit a parabolato the resulting data set.[7][8]While this provides a simplecurve fittingprocedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem throughweighted least squaresestimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use aniteratively reweighted least squaresprocedure, in which the weights are updated at each iteration.[8]It is also possible to performnon-linear regressiondirectly on the data, without involving thelogarithmic data transformation; for more options, seeprobability distribution fitting. Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know howprecisethose estimates are. Anyleast squaresestimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also useCramér–Rao boundtheory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data.[9][10] When these assumptions are satisfied, the followingcovariance matrixKapplies for the 1D profile parametersa{\displaystyle a},b{\displaystyle b}, andc{\displaystyle c}under i.i.d. 
Gaussian noise and under Poisson noise:[9]KGauss=σ2πδXQ2(32c0−1a02ca20−1a02ca2),KPoiss=12π(3a2c0−120ca0−120c2a),{\displaystyle \mathbf {K} _{\text{Gauss}}={\frac {\sigma ^{2}}{{\sqrt {\pi }}\delta _{X}Q^{2}}}{\begin{pmatrix}{\frac {3}{2c}}&0&{\frac {-1}{a}}\\0&{\frac {2c}{a^{2}}}&0\\{\frac {-1}{a}}&0&{\frac {2c}{a^{2}}}\end{pmatrix}}\ ,\qquad \mathbf {K} _{\text{Poiss}}={\frac {1}{\sqrt {2\pi }}}{\begin{pmatrix}{\frac {3a}{2c}}&0&-{\frac {1}{2}}\\0&{\frac {c}{a}}&0\\-{\frac {1}{2}}&0&{\frac {c}{2a}}\end{pmatrix}}\ ,}whereδX{\displaystyle \delta _{X}}is the width of the pixels used to sample the function,Q{\displaystyle Q}is the quantum efficiency of the detector, andσ{\displaystyle \sigma }indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case,var⁡(a)=3σ22πδXQ2cvar⁡(b)=2σ2cδXπQ2a2var⁡(c)=2σ2cδXπQ2a2{\displaystyle {\begin{aligned}\operatorname {var} (a)&={\frac {3\sigma ^{2}}{2{\sqrt {\pi }}\,\delta _{X}Q^{2}c}}\\\operatorname {var} (b)&={\frac {2\sigma ^{2}c}{\delta _{X}{\sqrt {\pi }}\,Q^{2}a^{2}}}\\\operatorname {var} (c)&={\frac {2\sigma ^{2}c}{\delta _{X}{\sqrt {\pi }}\,Q^{2}a^{2}}}\end{aligned}}} and in the Poisson noise case,var⁡(a)=3a22πcvar⁡(b)=c2πavar⁡(c)=c22πa.{\displaystyle {\begin{aligned}\operatorname {var} (a)&={\frac {3a}{2{\sqrt {2\pi }}\,c}}\\\operatorname {var} (b)&={\frac {c}{{\sqrt {2\pi }}\,a}}\\\operatorname {var} (c)&={\frac {c}{2{\sqrt {2\pi }}\,a}}.\end{aligned}}} For the 2D profile parameters giving the amplitudeA{\displaystyle A}, position(x0,y0){\displaystyle (x_{0},y_{0})}, and width(σX,σY){\displaystyle (\sigma _{X},\sigma _{Y})}of the profile, the following covariance matrices apply:[10] KGauss=σ2πδXδYQ2(2σXσY00−1AσY−1AσX02σXA2σY000002σYA2σX00−1Aσy002σXA2σy0−1AσX0002σYA2σX)KPoisson=12π(3AσXσY00−1σY−1σX0σXAσY00000σYAσX00−1σY002σX3AσY13A−1σX0013A2σY3AσX).{\displaystyle {\begin{aligned}\mathbf {K} _{\text{Gauss}}={\frac {\sigma ^{2}}{\pi \delta _{X}\delta _{Y}Q^{2}}}&{\begin{pmatrix}{\frac {2}{\sigma _{X}\sigma _{Y}}}&0&0&{\frac {-1}{A\sigma _{Y}}}&{\frac {-1}{A\sigma _{X}}}\\0&{\frac {2\sigma _{X}}{A^{2}\sigma _{Y}}}&0&0&0\\0&0&{\frac {2\sigma _{Y}}{A^{2}\sigma _{X}}}&0&0\\{\frac {-1}{A\sigma _{y}}}&0&0&{\frac {2\sigma _{X}}{A^{2}\sigma _{y}}}&0\\{\frac {-1}{A\sigma _{X}}}&0&0&0&{\frac {2\sigma _{Y}}{A^{2}\sigma _{X}}}\end{pmatrix}}\\[6pt]\mathbf {K} _{\operatorname {Poisson} }={\frac {1}{2\pi }}&{\begin{pmatrix}{\frac {3A}{\sigma _{X}\sigma _{Y}}}&0&0&{\frac {-1}{\sigma _{Y}}}&{\frac {-1}{\sigma _{X}}}\\0&{\frac {\sigma _{X}}{A\sigma _{Y}}}&0&0&0\\0&0&{\frac {\sigma _{Y}}{A\sigma _{X}}}&0&0\\{\frac {-1}{\sigma _{Y}}}&0&0&{\frac {2\sigma _{X}}{3A\sigma _{Y}}}&{\frac {1}{3A}}\\{\frac {-1}{\sigma _{X}}}&0&0&{\frac {1}{3A}}&{\frac {2\sigma _{Y}}{3A\sigma _{X}}}\end{pmatrix}}.\end{aligned}}}where the individual parameter variances are given by the diagonal elements of the covariance matrix. One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularlydigital signal processing. A simple answer is to sample the continuous Gaussian, yielding thesampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the articlescale space implementation. 
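For concreteness, the sampled Gaussian kernel just described is simply the continuous Gaussian evaluated on an integer grid and renormalized; σ and the truncation radius below are illustrative.

```python
# Sketch: sampled (and renormalized) Gaussian kernel on an integer grid.
import numpy as np

def sampled_gaussian_kernel(sigma, radius):
    n = np.arange(-radius, radius + 1)
    k = np.exp(-n ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()                  # renormalize so the discrete weights sum to 1

print(sampled_gaussian_kernel(sigma=1.0, radius=3))
```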
An alternative approach is to use thediscrete Gaussian kernel:[11]T(n,t)=e−tIn(t){\displaystyle T(n,t)=e^{-t}I_{n}(t)}whereIn(t){\displaystyle I_{n}(t)}denotes themodified Bessel functionsof integer order. This is the discrete analog of the continuous Gaussian in that it is the solution to the discretediffusion equation(discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.[11][12] Gaussian functions appear in many contexts in thenatural sciences, thesocial sciences,mathematics, andengineering.
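Returning to the discrete Gaussian kernel T(n, t) = e^{−t} I_n(t) defined above, a short sketch (using SciPy's exponentially scaled Bessel function `ive`; the truncation radius is illustrative) confirms that its weights sum to one and that its variance equals t, just as for the continuous Gaussian:

```python
# Sketch: discrete Gaussian kernel T(n, t) = exp(-t) * I_n(t) via scipy.special.ive,
# where ive(n, t) = I_n(t) * exp(-t) for real t > 0.
import numpy as np
from scipy.special import ive

def discrete_gaussian_kernel(t, radius):
    n = np.arange(-radius, radius + 1)
    return ive(n, t)

n = np.arange(-10, 11)
k = discrete_gaussian_kernel(t=2.0, radius=10)
print(k.sum())                # ~1: the (untruncated) kernel is normalized
print((n ** 2 * k).sum())     # ~2.0: the variance equals t, as for the continuous Gaussian
```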
https://en.wikipedia.org/wiki/Gaussian_function
Incomputer graphics,mipmaps(alsoMIP maps) orpyramids[1][2][3]are pre-calculated,optimizedsequences ofimages, each of which is a progressively lowerresolutionrepresentation of the previous. The height and width of each image, or level, in the mipmap is a factor of two smaller than the previous level. Mipmaps do not have to be square. They are intended to increaserenderingspeed and reducealiasingartifacts. A high-resolution mipmap image is used for high-density samples, such as for objects close to the camera; lower-resolution images are used as the object appears farther away. This is a more efficient way ofdownscalingatexturethan sampling alltexelsin the original texture that would contribute to a screenpixel; it is faster to take a constant number of samples from the appropriately downfiltered textures. Mipmaps are widely used in 3Dcomputer games,flight simulators, other 3D imaging systems fortexture filtering, and 2D and 3DGIS software. Their use is known asmipmapping. The lettersMIPin the name are an acronym of theLatinphrasemultum in parvo, meaning "much in little".[4] Since mipmaps, by definition, arepre-allocated, additionalstorage spaceis required to take advantage of them. They are also related towavelet compression. Mipmap textures are used in 3D scenes to decrease the time required to render a scene. They also improveimage qualityby reducing aliasing andMoiré patternsthat occur at large viewing distances,[5]at the cost of33%more memory per texture. Mipmapping was invented byLance Williamsin 1983 and is described in his paperPyramidal parametrics.[4]From the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images." The referenced pyramid can be imagined as the set of mipmaps stacked in front of each other. The first patent issued on Mipmap and texture generation was in 1983 by Johnson Yan, Nicholas Szabo, and Lish-Yann Chen of Link Flight Simulation (Singer). Using their approach, texture could be generated and superimposed on surfaces (curvilinear and planar) of any orientation, and could be done in real time. Texture patterns could be modeled to be suggestive of the real-world material they were intended to represent, in a continuous way and free of aliasing, ultimately providing level of detail and gradual (imperceptible) detail-level transitions. Texture generation became repeatable and coherent from frame to frame and remained in correct perspective and appropriate occultation. Because real-time texturing was applied to early three-dimensional flight simulator CGI systems, and texture is a prerequisite for realistic graphics, this patent became widely cited, and many of these techniques were later applied in graphics computing and gaming as applications expanded over the years.[9] The origin of the term mipmap is an initialism of the Latin phrasemultum in parvo("much in little"), and map, modeled on bitmap.[4]The termpyramidsis still commonly used in aGIScontext. In GIS software, pyramids are primarily used for speeding up rendering times. Each bitmap image of the mipmap set is a downsized duplicate of the maintexture, but at a certain reduced level of detail.
Although the main texture would still be used when the view is sufficient to render it in full detail, the renderer will switch to a suitable mipmap image (or in fact,interpolatebetween the two nearest, iftrilinear filteringis activated) when the texture is viewed from a distance or at a small size. Rendering speed increases since the number of texture pixels (texels) being processed per display pixel can be much lower for similar results with the simpler mipmap textures. If using a limited number of texture samples per display pixel (as is the case withbilinear filtering) then artifacts are reduced since the mipmap images are effectively alreadyanti-aliased. Scaling down and up is made more efficient with mipmaps as well. If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images, each one-fourth the total area of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then either a scaled-up version of the 32×32 (withouttrilinear interpolation) or an interpolation of the 64×64 and the 32×32 mipmaps (with trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging; however, more sophisticated algorithms (perhaps based onsignal processingandFourier transforms) can also be used. The increase in storage space required for all of these mipmaps is a third of the original texture, because the sum of the areas1/4 + 1/16 + 1/64 + 1/256 + ⋯converges to 1/3. In the case of an RGB image with three channels stored as separate planes, the total mipmap can be visualized as fitting neatly into a square area twice as large as the dimensions of the original image on each side (twice as large on each side is four times the original area - one plane of the original size for each of red, green and blue makes three times the original area, and then since the smaller textures take 1/3 of the original, 1/3 of three is one, so they will take the same total space as just one of the original red, green, or blue planes). This is the inspiration for the tagmultum in parvo. When a texture is viewed at a steep angle, the filtering should not be uniform in each direction (it should beanisotropicrather thanisotropic), and a compromise resolution is required. If a higher resolution is used, thecache coherencegoes down, and the aliasing is increased in one direction, but the image tends to be clearer. If a lower resolution is used, the cache coherence is improved, but the image is overly blurry. This would be a tradeoff of MIP level of detail (LOD) for aliasing vs blurriness. However anisotropic filtering attempts to resolve this trade-off by sampling a non isotropic texture footprint for each pixel rather than merely adjusting the MIP LOD. This non isotropic texture sampling requires either a more sophisticated storage scheme or a summation of more texture fetches at higher frequencies.[10] Summed-area tablescan conserve memory and provide more resolutions. However, they again hurt cache coherence, and need wider types to store the partial sums, which are larger than the base texture's word size. Thus, modern graphics hardware does not support them.
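A sketch of the successive-averaging construction described above, assuming a square, power-of-two grayscale texture (the array and its size are illustrative); it also reproduces the roughly one-third storage overhead:

```python
# Sketch: build a mipmap chain by repeatedly averaging 2x2 blocks of texels.
import numpy as np

def build_mipmaps(texture):
    """Return [level 0, level 1, ...] down to a 1x1 level for a square power-of-two texture."""
    levels = [texture]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        # Each texel of the next level is the mean of a 2x2 block of the current level.
        levels.append(t.reshape(t.shape[0] // 2, 2, t.shape[1] // 2, 2).mean(axis=(1, 3)))
    return levels

base = np.random.rand(256, 256)                 # placeholder 256x256 grayscale texture
chain = build_mipmaps(base)
print([lvl.shape for lvl in chain])             # (256, 256), (128, 128), ..., (1, 1)
print(sum(lvl.size for lvl in chain[1:]) / base.size)   # ~0.333: about 1/3 extra storage
```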
https://en.wikipedia.org/wiki/Mipmap
Biological neuron models, also known asspikingneuron models,[1]are mathematical descriptions of the conduction of electrical signals inneurons. Neurons (or nerve cells) areelectrically excitablecells within thenervous system, able to fire electric signals, calledaction potentials, across aneural network.These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity. Central to these models is the description of how themembrane potential(that is, the difference inelectric potentialbetween the interior and the exterior of a biologicalcell) across thecell membranechanges over time. In an experimental setting, stimulating neurons with an electrical current generates anaction potential(or spike), that propagates down the neuron'saxon. This axon can branch out and connect to a large number of downstream neurons at sites calledsynapses. At these synapses, the spike can cause the release ofneurotransmitters, which in turn can change the voltage potential of downstream neurons. This change can potentially lead to even more spikes in those downstream neurons, thus passing down the signal. As many as 95% of neurons in theneocortex, the outermost layer of themammalianbrain, consist of excitatorypyramidal neurons,[2][3]and each pyramidal neuron receives tens of thousands of inputs from other neurons.[4]Thus, spiking neurons are a major information processing unit of thenervous system. One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatialmorphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of trans-membrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description ofion channeland membrane potential dynamics (initially studied by Lapique in 1907).[5][6] Non-spiking cells, spiking cells, and their measurement Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example,cochlearhair cells,retinal receptor cells, andretinal bipolar cellsdo not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified asglia. Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials. With extracellular measurement techniques, one or more electrodes are placed in theextracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages: Overview of neuron models Neuron models can be divided into two categories according to the physical units of the interface of the model. Each category could be further divided according to the abstraction/detail level: Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradicting, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty to separate the intrinsic properties of a single neuron from measurement effects and interactions of many cells (networkeffects). 
Aims of neuron models Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models).[7][better source needed]Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality throughneuroprostheticdevices. The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin–Huxley in the early 1950s using an experimental setup that punctured the cell membrane and allowed to force a specific membrane voltage/current.[8][9][10][11] Most modernelectrical neural interfacesapply extra-cellular electrical stimulation to avoid membrane puncturing, which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation (see e.g.[12]). The Hodgkin–Huxley model (H&H model)[8][9][10][11]is a model of the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell.[8][9][10][11]It consists of a set ofnonlinear differential equationsdescribing the behavior of ion channels that permeate the cell membrane of thesquid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work. It is important to note the voltage-current relationship, with multiple voltage-dependent currents charging the cell membrane of capacityCm The above equation is the timederivativeof the law ofcapacitance,Q=CVwhere the change of the total charge must be explained as the sum over the currents. Each current is given by whereg(t,V)is theconductance, or inverse resistance, which can be expanded in terms of its maximal conductanceḡand the activation and inactivation fractionsmandh, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by and our fractions follow the first-order kinetics with similar dynamics forh, where we can use eitherτandm∞orαandβto define our gate fractions. The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+and Na+input currents, as well as several varieties of K+outward currents, including a "leak" current. The result can be at the small end of 20 parameters which one must estimate or measure for an accurate model. In a model of a complex system of neurons,numerical integrationof the equations arecomputationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed. The model can be reduced to two dimensions thanks to the dynamic relations which can be established between the gating variables.[13]it is also possible to extend it to take into account the evolution of the concentrations (considered fixed in the original model).[14][15] One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 byLouis Lapicque.[16]A neuron is represented by its membrane voltageVwhich evolves in time during stimulation with an input currentI(t)according which is just the timederivativeof the law ofcapacitance,Q=CV. 
When an input current is applied, the membrane voltage increases with time until it reaches a constant thresholdVth, at which point adelta functionspike occurs and the voltage is reset to its resting potential, after which the model continues to run. Thefiring frequencyof the model thus increases linearly without bound as input current increases. The model can be made more accurate by introducing arefractory periodtrefthat limits the firing frequency of a neuron by preventing it from firing during that period. For constant inputI(t)=Ithe threshold voltage is reached after an integration timetint=CVthr/Iafter starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing istref+tint. The firing frequency is the inverse of the total inter-spike interval (including dead time). The firing frequency as a function of a constant input current, is therefore A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever - until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view. The leaky integrate-and-fire model, which can be traced back toLouis Lapicque,[16]contains a "leak" term in the membrane potential equation that reflects the diffusion of ions through the membrane, unlike the non-leaky integrate-and-fire model. The model equation looks like[1] whereVmis the voltage across the cell membrane andRmis the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limitRmto infinity, i.e. if the membrane is a perfect insulator). The model equation is valid for arbitrary time-dependent input until a thresholdVthis reached; thereafter the membrane potential is reset. For constant input, the minimum input to reach the threshold isIth=Vth/Rm. Assuming a reset to zero, the firing frequency thus looks like which converges for large input currents to the previous leak-free model with the refractory period.[17]The model can also be used for inhibitory neurons.[18][19] The most significant disadvantage of this model is that it does not contain neuronal adaptation, so that it cannot describe an experimentally measured spike train in response to constant input current.[20]This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation-variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy.[21][22][23] Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltageVwith one or several adaptation variableswk(see Chapter 6.1. in the textbook Neuronal Dynamics[27]) whereτm{\displaystyle \tau _{m}}is the membrane time constant,wkis the adaptation current number, with indexk,τk{\displaystyle \tau _{k}}is the time constant of adaptation currentwk,Emis the resting potential andtfis the firing time of the neuron and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold the voltage is reset to a valueVrbelow the firing threshold. The reset value is one of the important parameters of the model. 
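A minimal Euler-integration sketch of a leaky integrate-and-fire neuron with a single, purely spike-triggered adaptation current, anticipating the single-variable simplification discussed next; all parameter values are illustrative rather than fitted to data.

```python
# Sketch: adaptive leaky integrate-and-fire neuron (single spike-triggered adaptation variable w).
import numpy as np

dt, T = 0.1, 500.0                    # time step and duration (ms)
tau_m, R, E_m = 10.0, 10.0, -70.0     # membrane time constant (ms), resistance (MOhm), rest (mV)
V_th, V_r = -50.0, -65.0              # firing threshold and reset value (mV)
tau_w, b_w = 100.0, 0.2               # adaptation time constant (ms) and spike-triggered jump (nA)

V, w = E_m, 0.0
spike_times = []
for step in range(int(T / dt)):
    I = 3.0 if step * dt > 50.0 else 0.0                  # step current of 3 nA, illustrative
    V += dt * (-(V - E_m) + R * I - R * w) / tau_m        # leaky integration with adaptation
    w += dt * (-w / tau_w)                                # adaptation decays between spikes
    if V >= V_th:                                         # threshold crossing: spike and reset
        spike_times.append(step * dt)
        V = V_r
        w += b_w                                          # spike-triggered adaptation jump

print(np.diff(spike_times))   # interspike intervals lengthen: spike-frequency adaptation
```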
The simplest model of adaptation has only a single adaptation variablewand the sum over k is removed.[28] Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting.[24][25][26]Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma.[22][23] Recent advances in computational and theoretical fractional calculus lead to a new form of model called Fractional-order leaky integrate-and-fire.[29][30]An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form[30] Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data.[29] In theexponential integrate-and-firemodel,[33]spike generation is exponential, following the equation: whereV{\displaystyle V}is the membrane potential,VT{\displaystyle V_{T}}is the intrinsic membrane potential threshold,τm{\displaystyle \tau _{m}}is the membrane time constant,Em{\displaystyle E_{m}}is the resting potential, andΔT{\displaystyle \Delta _{T}}is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons.[31]Once the membrane potential crossesVT{\displaystyle V_{T}}, it diverges to infinity in finite time.[34]In numerical simulation the integration is stopped if the membrane potential hits an arbitrary threshold (much larger thanVT{\displaystyle V_{T}}) at which the membrane potential is reset to a valueVr. The voltage reset valueVris one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data.[31]In this sense the exponential nonlinearity is strongly supported by experimental evidence. In theadaptive exponential integrate-and-fire neuron[32]the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w wherewdenotes the adaptation current with time scaleτ{\displaystyle \tau }. Important model parameters are the voltage reset valueVr, the intrinsic thresholdVT{\displaystyle V_{T}}, the time constantsτ{\displaystyle \tau }andτm{\displaystyle \tau _{m}}as well as the coupling parametersaandb. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity[31]of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting.[26]However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance.[35] In this model, a time-dependent functionθ(t){\displaystyle \theta (t)}is added to the fixed threshold,vth0{\displaystyle v_{th0}}, after every spike, causing an adaptation of the threshold. 
The threshold potential,vth{\displaystyle v_{th}}, gradually returns to its steady state value depending on the threshold adaptation time constantτθ{\displaystyle \tau _{\theta }}.[36]This is one of the simpler techniques to achieve spike frequency adaptation.[37]The expression for the adaptive threshold is given by: vth(t)=vth0+∑θ(t−tf)f=vth0+∑θ0exp⁡[−(t−tf)τθ]f{\displaystyle v_{th}(t)=v_{th0}+{\frac {\sum \theta (t-t_{f})}{f}}=v_{th0}+{\frac {\sum \theta _{0}\exp \left[-{\frac {(t-t_{f})}{\tau _{\theta }}}\right]}{f}}} whereθ(t){\displaystyle \theta (t)}is defined by:θ(t)=θ0exp⁡[−tτθ]{\displaystyle \theta (t)=\theta _{0}\exp \left[-{\frac {t}{\tau _{\theta }}}\right]} When the membrane potential,u(t){\displaystyle u(t)}, reaches a threshold, it is reset tovrest{\displaystyle v_{rest}}: u(t)≥vth(t)⇒v(t)=vrest{\displaystyle u(t)\geq v_{th}(t)\Rightarrow v(t)=v_{\text{rest}}} A simpler version of this with a single time constant in threshold decay with an LIF neuron is realized in[38]to achieve LSTM like recurrent spiking neural networks to achieve accuracy nearer to ANNs on few spatio temporal tasks. The DEXAT neuron model is a flavor of adaptive neuron model in which the threshold voltage decays with a double exponential having two time constants. Double exponential decay is governed by a fast initial decay and then a slower decay over a longer period of time.[39][40]This neuron used in SNNs through surrogate gradient creates an adaptive learning rate yielding higher accuracy and faster convergence, and flexible long short-term memory compared to existing counterparts in the literature. The membrane potential dynamics are described through equations and the threshold adaptation rule is: vth(t)=b0+β1b1(t)+β2b2(t){\displaystyle v_{th}(t)=b_{0}+\beta _{1}b_{1}(t)+\beta _{2}b_{2}(t)} The dynamics ofb1(t){\displaystyle b_{1}(t)}andb2(t){\displaystyle b_{2}(t)}are given by b1(t+δt)=pj1b1(t)+(1−pj1)z(t)δ(t){\displaystyle b_{1}(t+\delta t)=p_{j1}b_{1}(t)+(1-p_{j1})z(t)\delta (t)}, b2(t+δt)=pj2b2(t)+(1−pj2)z(t)δ(t){\displaystyle b_{2}(t+\delta t)=p_{j2}b_{2}(t)+(1-p_{j2})z(t)\delta (t)}, wherepj1=exp⁡[−δtτb1]{\displaystyle p_{j1}=\exp \left[-{\frac {\delta t}{\tau _{b1}}}\right]}andpj2=exp⁡[−δtτb2]{\displaystyle p_{j2}=\exp \left[-{\frac {\delta t}{\tau _{b2}}}\right]}. Further, multi-time scale adaptive threshold neuron model showing more complex dynamics is shown in.[41] The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variations between one trial and the next if the same stimulus is repeated.[42][43]Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically[44]and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. 
Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.[27] Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) anoisy inputcurrentis added to the differential equation of the neuron model;[45]or (ii) the process ofspike generation is noisy.[46]In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model. The relation of noise in neuron models to the variability of spike trains and neural codes is discussed inNeural Codingand in Chapter 7 of the textbook Neuronal Dynamics.[27] A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist they can be considered as stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an inputI(t){\displaystyle I(t)}controlled by the experimentalists and a noisy input currentInoise(t){\displaystyle I^{\rm {noise}}(t)}that describes the uncontrolled background input. Stein's model[45]is the special case of a leaky integrate-and-fire neuron and a stationary white noise currentInoise(t)=ξ(t){\displaystyle I^{\rm {noise}}(t)=\xi (t)}with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of theOrnstein–Uhlenbeckprocess However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset wheneverVhits the firing thresholdVth.[45]Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to afirst-passage timeproblem.[45][47]Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current.[47] In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form whereσ{\displaystyle \sigma }is the amplitude of the noise input anddWare increments of a Wiener process. For discrete-time implementations with time step dt the voltage updates are[27] where y is drawn from a Gaussian distribution with zero mean unit variance. The voltage is reset when it hits the firing thresholdVth. The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads For constant deterministic inputI(t)=I0{\displaystyle I(t)=I_{0}}it is possible to calculate the mean firing rate as a function ofI0{\displaystyle I_{0}}.[48]This is important because the frequency-current relation (f-I-curve) is often used by experimentalists to characterize a neuron. The leaky integrate-and-fire with noisy input has been widely used in the analysis of networks of spiking neurons.[49]Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma,[50]The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbookNeuronal Dynamics.[27] In deterministic integrate-and-fire models, a spike is generated if the membrane potentialV(t)hits the thresholdVth{\displaystyle V_{th}}. In noisy output models, the strict threshold is replaced by a noisy one as follows. 
At each moment in time t, a spike is generated stochastically with instantaneous stochastic intensity or'escape rate'[27] that depends on the momentary difference between the membrane voltageV(t)and the thresholdVth{\displaystyle V_{th}}.[46]A common choice for the'escape rate'f{\displaystyle f}(that is consistent with biological data[22]) is whereτ0{\displaystyle \tau _{0}}is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold andβ{\displaystyle \beta }is a sharpness parameter. Forβ→∞{\displaystyle \beta \to \infty }the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments[22]is1/β≈4mV{\displaystyle 1/\beta \approx 4mV}which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold. The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbookNeuronal Dynamics.[27] For models in discrete time, a spike is generated with probability that depends on the momentary difference between the membrane voltageVat timetn{\displaystyle t_{n}}and the thresholdVth{\displaystyle V_{th}}.[55]The function F is often taken as a standard sigmoidalF(x)=0.5[1+tanh⁡(γx)]{\displaystyle F(x)=0.5[1+\tanh(\gamma x)]}with steepness parameterγ{\displaystyle \gamma },[46]similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensityf{\displaystyle f}in continuous time introduced above asF(yn)≈1−exp⁡[ynΔt]{\displaystyle F(y_{n})\approx 1-\exp[y_{n}\Delta t]}whereyn=V(tn)−Vth{\displaystyle y_{n}=V(t_{n})-V_{th}}is the threshold distance.[46] Integrate-and-fire models with output noise can be used to predict theperistimulus time histogram(PSTH) of real neurons under arbitrary time-dependent input.[22]For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationaryrenewal theory.[27] main article:Spike response model The spike response model (SRM) is a generalized linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation.[46][58][56]The membrane voltageV(t)at timetis V(t)=∑fη(t−tf)+∫0∞κ(s)I(t−s)ds+Vrest{\displaystyle V(t)=\sum _{f}\eta (t-t^{f})+\int \limits _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }} wheretfis the firing time of spike number f of the neuron,Vrestis the resting voltage in the absence of input,I(t-s)is the input current at time t-s andκ(s){\displaystyle \kappa (s)}is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t-s to the voltage at time t. The contributions to the voltage caused by a spike at timetf{\displaystyle t^{f}}are described by the refractory kernelη(t−tf){\displaystyle \eta (t-t^{f})}. In particular,η(t−tf){\displaystyle \eta (t-t^{f})}describes the reset after the spike and the time course of the spike-afterpotential following a spike. 
It therefore expresses the consequences of refractoriness and adaptation.[46][23]The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables.[24] Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate) with parametersτ0{\displaystyle \tau _{0}}andβ{\displaystyle \beta }and adynamic thresholdϑ(t){\displaystyle \vartheta (t)}given by Hereϑ0{\displaystyle \vartheta _{0}}is the firing threshold of an inactive neuron andθ1(t−tf){\displaystyle \theta _{1}(t-t^{f})}describes the increase of the threshold after a spike at timetf{\displaystyle t^{f}}.[22][23]In case of a fixed threshold, one setsθ1(t−tf)=0{\displaystyle \theta _{1}(t-t^{f})=0}. Forβ→∞{\displaystyle \beta \to \infty }the threshold process is deterministic.[27] The time course of the filtersη,κ,θ1{\displaystyle \eta ,\kappa ,\theta _{1}}that characterize the spike response model can be directly extracted from experimental data.[23]With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2mV and can predict the timing of most output spikes with a precision of 4ms.[22][23]The SRM is closely related tolinear-nonlinear-Poisson cascade models(also called Generalized Linear Model).[54]The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models[59]is discussed in Chapter 10 of the textbookNeuronal Dynamics.[27] The namespike response modelarises because, in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes wheretjf′{\displaystyle t_{j}^{f'}}is the firing times of neuron j (i.e., its spike train);ηi(t−tif){\displaystyle \eta _{i}(t-t_{i}^{f})}describes the time course of the spike and the spike after-potential for neuron i; andwij{\displaystyle w_{ij}}andεij(t−tjf′){\displaystyle \varepsilon _{ij}(t-t_{j}^{f'})}describe the amplitude and time course of an excitatory or inhibitorypostsynaptic potential(PSP) caused by the spiketjf′{\displaystyle t_{j}^{f'}}of the presynaptic neuron j. The time courseεij(s){\displaystyle \varepsilon _{ij}(s)}of the PSP results from the convolution of the postsynaptic currentI(t){\displaystyle I(t)}caused by the arrival of a presynaptic spike from neuron j with the membrane filterκ(s){\displaystyle \kappa (s)}.[27] TheSRM0[56][60][61]is a stochastic neuron model related to time-dependent nonlinearrenewal theoryand a simplification of the Spike Response Model (SRM). The main difference to the voltage equation of the SRM introduced above is that in the term containing the refractory kernelη(s){\displaystyle \eta (s)}there is no summation sign over past spikes: only themost recent spike(denoted as the timet^{\displaystyle {\hat {t}}}) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is and the network equations of the SRM0are[56] wheret^i{\displaystyle {\hat {t}}_{i}}is thelast firing time neuroni. 
Note that the time course of the postsynaptic potentialεij{\displaystyle \varepsilon _{ij}}is also allowed to depend on the time since the last spike of neuron i to describe a change in membrane conductance during refractoriness.[60]The instantaneous firing rate (stochastic intensity) is whereVth{\displaystyle V_{th}}is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i has fired its last spike. With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernelη{\displaystyle \eta }.[46][56]Moreover the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernelη{\displaystyle \eta }.[46][56]With an appropriate choice of the kernels, the SRM0approximates the dynamics of the Hodgkin-Huxley model to a high degree of accuracy.[60]Moreover, the PSTH response to arbitrary time-dependent input can be predicted.[56] TheGalves–Löcherbach model[62]is astochasticneuron model closely related to the spike response model SRM0[61][56]and the leaky integrate-and-fire model. It is inherentlystochasticand, just like the SRM0, it is linked to time-dependent nonlinearrenewal theory. Given the model specifications, the probability that a given neuroni{\displaystyle i}spikes in a periodt{\displaystyle t}may be described by whereWj→i{\displaystyle W_{j\rightarrow i}}is asynaptic weight, describing the influence of neuronj{\displaystyle j}on neuroni{\displaystyle i},gj{\displaystyle g_{j}}expresses the leak, andLti{\displaystyle L_{t}^{i}}provides the spiking history of neuroni{\displaystyle i}beforet{\displaystyle t}, according to Importantly, the spike probability of neuroni{\displaystyle i}depends only on its spike input (filtered with a kernelgj{\displaystyle g_{j}}and weighted with a factorWj→i{\displaystyle W_{j\to i}}) and the timing of its most recent output spike (summarized byt−Lti{\displaystyle t-L_{t}^{i}}). The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting. Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by[63] where we again have a membrane-like voltage and input current with a slower general gate voltagewand experimentally-determined parametersa= -0.7,b= 0.8,τ= 1/0.08. Although not derivable from biology, the model allows for a simplified, immediately available dynamic, without being a trivial simplification.[64]The experimental support is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation throughphase planeanalysis. 
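A minimal integration sketch of FitzHugh–Nagumo dynamics, written with one common sign convention for the recovery equation (v̇ = v − v³/3 − w + I, τẇ = v + 0.7 − 0.8w) and the parameter magnitudes quoted above; the sign convention for a, the input current, and the initial conditions are assumptions for illustration.

```python
# Sketch: FitzHugh-Nagumo model under a common convention, integrated with Euler steps.
import numpy as np

dt, steps = 0.01, 20000
tau, a, b, I = 12.5, 0.7, 0.8, 0.5    # tau = 1/0.08; I = 0.5 is an illustrative drive

v, w = -1.0, -0.5
trace = np.empty(steps)
for k in range(steps):
    dv = v - v ** 3 / 3.0 - w + I     # fast, regenerative membrane-like variable
    dw = (v + a - b * w) / tau        # slow recovery (gate-like) variable
    v += dt * dv
    w += dt * dw
    trace[k] = v

# With this drive the trajectory settles onto a limit cycle (repetitive spiking);
# with I = 0 it relaxes to a stable resting point instead.
print(trace.min(), trace.max())
```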
See Chapter 7 in the textbookMethods of Neuronal Modeling.[65] In 1981, Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel represented by whereIion(V,w)=g¯Cam∞⋅(V−VCa)+g¯Kw⋅(V−VK)+g¯L⋅(V−VL){\displaystyle I_{\mathrm {ion} }(V,w)={\bar {g}}_{\mathrm {Ca} }m_{\infty }\cdot (V-V_{\mathrm {Ca} })+{\bar {g}}_{\mathrm {K} }w\cdot (V-V_{\mathrm {K} })+{\bar {g}}_{\mathrm {L} }\cdot (V-V_{\mathrm {L} })}.[17]The experimental support of the model is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation throughphase planeanalysis. See Chapter 7[66]in the textbookMethods of Neuronal Modeling.[65] A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics.[27] Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984[67]a model of neuronal activity described by three coupled first-order differential equations: withr2=x2+y2+z2, andr≈ 10−2so that thezvariable only changes very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by thexvariable of the model, which includes chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful, because it is still simple, allows a good qualitative description of the many different firing patterns of the action potential, in particular bursting, observed in experiments. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics.[67] Thetheta model, or Ermentrout–KopellcanonicalType I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing.[68][69] The standard formulation of the theta model is[27][68][69] The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics[27]) The equivalence of theta model and quadratic integrate-and-fire is for example reviewed in Chapter 4.1.2.2 of spiking neuron models.[1] For inputI(t){\displaystyle I(t)}that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model.[31] The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. Consequently, the models in this category generate a probabilistic relationship between the input stimulus to spike occurrences. 
Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so that these models summarize the effects of the sequence of processing steps in a compact form. Siebert[70][71] modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system.[70][71] According to Siebert, the probability of a spiking event in the time interval [t,t+Δt]{\displaystyle [t,t+\Delta _{t}]} is proportional to a non-negative function g[s(t)]{\displaystyle g[s(t)]}, where s(t){\displaystyle s(t)} is the raw stimulus. Siebert considered several functions as g[s(t)]{\displaystyle g[s(t)]}, including g[s(t)]∝s2(t){\displaystyle g[s(t)]\propto s^{2}(t)} for low stimulus intensities. The main advantage of Siebert's model is its simplicity. The shortcomings of the model lie in its inability to properly reflect the following phenomena: These shortcomings are addressed by the age-dependent point process model and the two-state Markov model.[72][73][74] Berry and Meister[75] studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms: a function f(s(t)) that depends on the time-dependent stimulus s(t), and a recovery function w(t−t^){\displaystyle w(t-{\hat {t}})} that depends on the time since the last spike. The model is also called an inhomogeneous Markov interval (IMI) process.[76] Similar models have been used for many years in auditory neuroscience.[77][78][79] Since the model keeps memory of the last spike time, it is non-Poisson and falls in the class of time-dependent renewal models.[27] It is closely related to the model SRM0 with exponential escape rate.[27] Importantly, it is possible to fit the parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics.[76][77][79] The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step.[80] In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM).[54][59] The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step.[27][54] The spiking neuron model by Nossenson & Messer[72][73][74] produces the probability of the neuron firing a spike as a function of either an external or a pharmacological stimulus.[72][73][74] The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig. 4. The connection between the external stimulus and the spiking probability is made in two steps: first, a receptor cell model translates the raw external stimulus into neurotransmitter concentration, and then a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.[72][73][74] An important feature of this model is the prediction of the neuron's firing-rate pattern, which captures, using a low number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5.
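As a minimal sketch of the linear-nonlinear-Poisson cascade described above (and, more generally, of spike generation from an inhomogeneous Poisson process in the spirit of Siebert's model), the code below filters a stimulus with a linear kernel, passes the result through a static nonlinearity, and draws spikes bin by bin. The exponential kernel, the softplus nonlinearity, the gain value, and the function name lnp_spikes are all illustrative assumptions, not quantities defined in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3                                          # 1 ms time bins

def lnp_spikes(stimulus, kernel, gain=20.0):
    """Linear-nonlinear-Poisson (LNP) cascade, as a minimal sketch.

    1. Linear stage:    drive(t) = (kernel * stimulus)(t), causal convolution
    2. Nonlinear stage: rate(t)  = gain * softplus(drive), any non-negative
                                   static nonlinearity would do
    3. Poisson stage:   spikes drawn independently in each small time bin,
                        P(spike in bin) = 1 - exp(-rate * dt)
    """
    drive = np.convolve(stimulus, kernel, mode="full")[: len(stimulus)]
    rate = gain * np.log1p(np.exp(drive))          # softplus keeps the rate non-negative
    p_spike = 1.0 - np.exp(-rate * dt)             # Bernoulli probability per Poisson bin
    return rng.random(len(stimulus)) < p_spike

# Illustrative exponential filter and white-noise stimulus (both assumptions).
kernel = np.exp(-np.arange(0, 0.05, dt) / 0.01) * dt
stimulus = rng.standard_normal(10_000)
spike_train = lnp_spikes(stimulus, kernel)
print("mean firing rate ~", spike_train.mean() / dt, "Hz")
```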
The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form: where P0 can generally be calculated recursively using the Euler method, but in the case of a stimulus pulse it yields a simple closed-form expression,[72][81] with ⟨s2(t)⟩{\displaystyle \langle s^{2}(t)\rangle } being a short temporal average of stimulus power (given in watts or another unit of energy per unit time). Other predictions by this model include: 1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate.[74] 2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power, or MUA).[73][74] 3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function.[72][81] The models in this category produce predictions for experiments involving pharmacological stimulation. According to the model by Koch and Segev,[17] the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four neurotransmitters primarily influence the CNS. AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors mediate via secondary G-protein-activated potassium channels. This range of mediation produces the following current dynamics: where ḡ is the maximal[8][17] conductance (around 1 S) and E is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while [O] describes the fraction of open receptors. For NMDA, there is a significant effect of magnesium block that depends sigmoidally on the concentration of extracellular magnesium via B(V). For GABAB, [G] is the concentration of the G-protein, and Kd describes the dissociation of G in binding to the potassium gates. The dynamics of this more complicated model have been well studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast, short-term learning. The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage into the probability of releasing neurotransmitter at the output stage.[72][73][74] For a more detailed description of this model, see the Two state Markov model section above. The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book On Intelligence. It is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain.
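A small sketch of the transmitter-gated currents discussed above, I = ḡ·[O]·(V − E), with an additional voltage-dependent block factor B(V) for NMDA. The generic helper synaptic_current and the specific coefficients in nmda_mg_block (a commonly used Jahr–Stevens-style fit) are assumptions of this sketch; the conductances and open fractions are placeholder numbers, and the GABAB G-protein/Kd kinetics are not modeled here.

```python
import numpy as np

def synaptic_current(g_max, frac_open, V, E_rev, B=1.0):
    """Generic transmitter-gated current  I = g_max * [O] * B(V) * (V - E_rev).

    g_max     : maximal conductance of the receptor population (nS here)
    frac_open : fraction of receptors in the open state, [O] in the text
    B         : voltage-dependent block factor (1.0 for AMPA / GABA_A)
    """
    return g_max * frac_open * B * (V - E_rev)

def nmda_mg_block(V, mg_mM=1.0):
    """Sigmoidal magnesium-block factor B(V) for NMDA receptors.

    The coefficients follow a widely used Jahr-Stevens-style fit and are an
    assumption of this sketch, not values given in the article. V is in mV.
    """
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * V))

V = -40.0                                                   # membrane potential (mV)
I_ampa = synaptic_current(g_max=1.0, frac_open=0.3, V=V, E_rev=0.0)
I_nmda = synaptic_current(g_max=1.0, frac_open=0.3, V=V, E_rev=0.0, B=nmda_mg_block(V))
I_gaba = synaptic_current(g_max=1.0, frac_open=0.3, V=V, E_rev=-70.0)
print(f"AMPA {I_ampa:.2f}  NMDA {I_nmda:.2f}  GABA_A {I_gaba:.2f}  (nS*mV = pA)")
```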
Compared with the artificial neural network neuron and the biological pyramidal neuron, the HTM model neuron can be summarized as follows:

Artificial neural network neuron:
- No dendrites
- Sum input × weights
- Learns by modifying the weights of synapses

Biological pyramidal neuron:
- Active dendrites: cell recognizes hundreds of unique patterns
- Co-activation of a set of synapses on a dendritic segment causes an NMDA spike and depolarization at the soma
- Sources of input to the cell:
- Learns by growing new synapses

HTM model neuron:
- Thousands of synapses
- Active dendrites: cell recognizes hundreds of unique patterns
- Models dendrites and NMDA spikes with each array of coincident detectors having a set of synapses
- Learns by modeling the growth of new synapses

Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprosthesis and brain-computer interfaces, such as retinal prostheses[12][99][100][101] or artificial limb control and sensation.[102][103][104] Applications are not part of this article; for more information on this topic please refer to the main article. The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like where yi is the output of the ith neuron, xj is the jth input neuron signal, wij is the synaptic weight (or strength of connection) between the neurons i and j, and φ is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks time dependence in input and output. When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly, this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns.[24][25][26] Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables prediction of the spike train in the output for arbitrary time-dependent input,[22][23] whereas an artificial neuron or a simple leaky integrate-and-fire model does not. If we take the Hodgkin–Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire[33] model and the spike response model.[60] In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance Cm. The firing of a neuron involves the movement of ions into the cell, which occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current I(t).
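For comparison with the spiking models, here is the static artificial neuron defined above, y_i = φ(Σ_j w_ij x_j), in a few lines. The tanh activation and the example numbers are arbitrary choices for illustration.

```python
import numpy as np

def artificial_neuron(x, w, phi=np.tanh):
    """Static rate neuron: output y_i = phi( sum_j w_ij * x_j ).

    x   : input vector (the x_j in the text)
    w   : synaptic weight vector (the w_ij for a single output neuron i)
    phi : activation / transfer function (tanh chosen here purely for illustration)
    """
    return phi(np.dot(w, x))

x = np.array([0.2, -1.0, 0.5])
w = np.array([1.5,  0.3, -0.8])
print(artificial_neuron(x, w))   # a single static number: no spike times, no adaptation
```

However long the input is held, the output stays a single static number; there are no spike times, no adaptation, and no bursting, which is exactly the missing time dependence discussed in the surrounding text.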
With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential, which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest and is given by Vm(t).[19] If the input current is constant, most neurons emit a regular spike train after some time of adaptation or initial bursting. The frequency of regular firing in response to a constant current I is described by the frequency-current relation, which corresponds to the transfer function φ{\displaystyle \varphi } of artificial neural networks. Similarly, for all spiking neuron models, the transfer function φ{\displaystyle \varphi } can be calculated numerically (or analytically). All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output.[105][65] Point-neuron models are a valid description in three cases: (i) if input current is directly injected into the soma; (ii) if synaptic input arrives predominantly at or close to the soma (closeness is defined by a length scale λ{\displaystyle \lambda } introduced below); and (iii) if synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model. The filter properties can be calculated from a cable equation. Let us consider a cell membrane in the form of a cylindrical cable. The position on the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance rl{\displaystyle r_{l}} per unit length and a membrane resistance rm{\displaystyle r_{m}}. If everything is linear, the voltage changes as a function of time. We introduce a length scale λ2=rm/rl{\displaystyle \lambda ^{2}={r_{m}}/{r_{l}}} on the left side and a time constant τ=cmrm{\displaystyle \tau =c_{m}r_{m}} on the right side. The cable equation can now be written in its perhaps best-known form: The above cable equation is valid for a single cylindrical cable. Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as where L is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by where AD = πld is the total surface area of the tree of total length l, and LD is its total electrotonic length. For an entire neuron in which the cell body conductance is GS and the membrane conductance per unit area is Gmd = Gm/A, we find the total neuron conductance GN for n dendrite trees by adding up all tree and soma conductances, given by where we can find the general correction factor Fdga experimentally by noting GD = GmdADFdga. The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear.
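To make the cable description concrete, the following sketch solves a passive cable numerically with an explicit finite-difference scheme, assuming the standard form τ ∂V/∂t = λ² ∂²V/∂x² − V, which is consistent with the length scale λ and time constant τ introduced above. The boundary conditions (a voltage clamp at one end, a sealed far end), the grid parameters, and the function name passive_cable are assumptions of the sketch; in the steady state the voltage should fall off roughly as exp(−x/λ).

```python
import numpy as np

def passive_cable(lambda_=1.0, tau=1.0, L=10.0, dx=0.05, dt=2e-4, t_end=10.0):
    """Explicit finite-difference solution of the passive cable equation.

    Assumed standard form:  tau * dV/dt = lambda**2 * d2V/dx2 - V.
    V = 1 is clamped at x = 0 and the far end is sealed (zero axial current),
    so the steady state approaches exp(-x/lambda) for a long cable.
    """
    n = int(L / dx) + 1
    V = np.zeros(n)
    V[0] = 1.0
    for _ in range(int(t_end / dt)):
        d2V = np.zeros(n)
        d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
        d2V[-1] = 2 * (V[-2] - V[-1]) / dx**2     # sealed (reflecting) far end
        V += (dt / tau) * (lambda_**2 * d2V - V)
        V[0] = 1.0                                # re-impose the voltage clamp each step
    return V

V = passive_cable()
x = np.arange(len(V)) * 0.05
print("V at x = 2*lambda:", V[np.argmin(abs(x - 2.0))], " ~ exp(-2) =", np.exp(-2))
```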
A compartmental model[65] allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites. Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length l and diameter d which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the ith cylinder as Bi = Gi/G∞, where G∞=πd3/22RiRm{\displaystyle G_{\infty }={\tfrac {\pi d^{3/2}}{2{\sqrt {R_{i}R_{m}}}}}} and Ri is the resistance between the current compartment and the next. We obtain a series of equations for conductance ratios into and out of a compartment by making corrections to the normal dynamic Bout,i = Bin,i+1, as where the last equation deals with parents and daughters at branches, and Xi=li4RidiRm{\displaystyle X_{i}={\tfrac {l_{i}{\sqrt {4R_{i}}}}{\sqrt {d_{i}R_{m}}}}}. We can iterate these equations through the tree until we reach the point where the dendrites connect to the cell body (soma), where the conductance ratio is Bin,stem. Then our total neuron conductance for static input is given by Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear. Compartmental models make it possible to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites.[105][106] For static inputs, it is sometimes possible to reduce the number of compartments (and so increase the computational speed) and yet retain the salient electrical characteristics.[107] The neurotransmitter-based energy detection scheme[74][81] suggests that the neural tissue chemically executes a radar-like detection procedure. As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while referring to the measured electrical potentials as a side effect that only under certain conditions coincides with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length (variable memory). According to this conjecture, the physiological equivalent of the energy test statistic is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation which allows for a bridge between electrophysiological measurements, biochemical measurements, and psychophysical results. The evidence reviewed in[74][81] suggests the following association between functionality and histological classification: Note that although the electrophysiological signals in Fig. 6 are often similar to the functional signal (signal power / neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal and that the latter might only be a side effect of glia break.
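The compartmental recursion described above can be illustrated for the simplest case: an unbranched chain of identical passive cylinders, where G∞ is the same for every compartment and Bout,i = Bin,i+1 holds without correction factors. The tanh recursion used below is the standard input-conductance formula for a cylinder terminated by a load; branch points (parents and daughters) and differing diameters are deliberately left out, so treat this as a sketch under those assumptions rather than the full algorithm.

```python
import math

def chain_input_conductance(lengths_elec, g_inf, b_load=0.0):
    """Input conductance at the proximal end of an unbranched chain of
    passive cylindrical compartments (a minimal sketch of the recursion above).

    lengths_elec : electrotonic lengths X_i, ordered from the distal tip towards the soma
    g_inf        : G_infinity, assumed identical for every compartment
                   (uniform diameter and resistivities), so B_out,i = B_in,i+1 exactly
    b_load       : conductance ratio terminating the distal tip (0 = sealed end)
    """
    b = b_load
    for X in lengths_elec:
        t = math.tanh(X)
        b = (b + t) / (1.0 + b * t)   # standard loaded-cylinder input-conductance recursion
    return b * g_inf                  # B_in,stem * G_infinity

# Sanity check: five compartments of electrotonic length 0.2 behave like one
# sealed cylinder of electrotonic length 1.0, whose conductance ratio is tanh(1.0).
print(chain_input_conductance([0.2] * 5, g_inf=1.0))   # ~ 0.7616
print(math.tanh(1.0))
```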
https://en.wikipedia.org/wiki/Biological_neuron_model
Theunity of consciousness and(cognitive)binding problemis the problem of how objects, background, and abstract or emotional features are combined into a single experience.[1]The binding problem refers to the overall encoding of our brain circuits for the combination of decisions, actions, and perception. It is considered a "problem" because no complete model exists. The binding problem can be subdivided into the four areas ofperception,neuroscience,cognitive science, and thephilosophy of mind. It includes general considerations on coordination, the subjective unity of perception, and variable binding.[2] Attention is crucial in determining which phenomena appear to be bound together, noticed, and remembered.[3]This specific binding problem is generally referred to as temporal synchrony. At the most basic level, all neural firing and its adaptation depends on specific consideration to timing (Feldman, 2010). At a much larger level, frequent patterns in large scale neural activity are a major diagnostic and scientific tool.[4] A popular hypothesis mentioned by neuroscientist ProfPeter Milner, in his 1974 articleA Model for Visual Shape Recognition, has been that features of individual objects are bound/segregated viasynchronizationof the activity of different neurons in the cortex.[5][6]The theory, called binding-by-synchrony (BBS), is hypothesized to occur through the transient mutual synchronization of neurons located in different regions of the brain when the stimulus is presented.[7]Empirical testing of the idea was brought to light when von der Malsburg proposed that feature binding posed a special problem that could not be covered simply by cellular firing rates.[8]However, it has been shown this theory may not be a problem since it was revealed that the modules code jointly for multiple features, countering the feature-binding issue.[9]Temporal synchrony has been shown to be the most prevalent when regarding the first problem, "General Considerations on Coordination," because it is an effective method to take in surroundings and is good for grouping and segmentation. A number of studies suggested that there is indeed a relationship between rhythmic synchronous firing and feature binding. This rhythmic firing appears to be linked to intrinsic oscillations in neuronal somatic potentials, typically in thegamma rangearound 40 – 60 hertz.[10]The positive arguments for a role for rhythmic synchrony in resolving the segregational object-feature binding problem have been summarized by Neurophysiologist ProfSinger.[11]There is certainly extensive evidence for synchronization of neural firing as part of responses to visual stimuli. However, there is inconsistency between findings from different laboratories. Moreover, a number of recent reviewers, including neuroscientists ProfShadlenand ProfMovshon[6]and ProfMerker[12]have raised concerns about the theory being potentially untenable. Neuroscientists Prof Thiele and Prof Stoner found that perceptual binding of two moving patterns had no effect on synchronization of the neurons responding to two patterns: coherent and noncoherent plaids.[13][14]In the primary visual cortex, Dong et al. found that whether two neurons were responding to contours of the same shape or different shapes had no effect on neural synchrony since synchrony is independent of binding condition. Shadlen and Movshon[6]raise a series of doubts about both the theoretical and the empirical basis for the idea of segregational binding by temporal synchrony. 
There is no biophysical evidence that cortical neurons are selective to synchronous input at this point of precision, and cortical activity with synchrony this precise is rare. Synchronization is also connected to endorphin activity. It has been shown that precise spike timing may not be necessary to illustrate a mechanism for visual binding and is only prevalent in modeling certain neuronal interactions. In contrast,Anil Seth[15]describes an artificial brain-based robot that demonstrates multiple, separate, widely distributed neural circuits, firing at different phases, showing that regular brain oscillations at specific frequencies are essential to the neural mechanisms of binding. Cognitive psychologists Prof Goldfarb and ProfTreisman[16]point out that a logical problem appears to arise for binding solely via synchrony if there are several objects that share some of their features and not others. At best synchrony can facilitate segregation supported by other means (as physicist and neuroscientist Profvon der Malsburgacknowledges).[17] A number of neuropsychological studies suggest that the association of color, shape and movement as "features of an object" is not simply a matter of linking or "binding", but shown to be inefficient to not bind elements into groups when considering association,[18]and give extensive evidence for top-down feedback signals that ensure that sensory data are handled as features of (sometimes wrongly) postulated objects early in processing. Pylyshyn[19]has also emphasized the way the brain seems to pre-conceive objects from which features are to be allocated to which are attributed continuing existence even if features such as color change. This is because visual integration increases over time, and indexing visual objects helps to ground visual concepts. The visual feature binding problem refers to the question of why we do not confuse a red circle and a blue square with a blue circle and a red square. The understanding of the circuits in the brain stimulated for visual feature binding is increasing. A binding process is required for us to accurately encode various visual features in separate cortical areas. In her feature integration theory,Treismansuggested that one of the first stages of binding between features is mediated by the features' links to a common location. The second stage is combining individual features of an object that requires attention, and selecting that object occurs within a "master map" of locations. Psychophysical demonstrations of binding failures under conditions of full attention provide support for the idea that binding is accomplished through common location tags.[20] An implication of these approaches is that sensory data such as color or motion may not normally exist in "unallocated" form. ForMerker:[21]"The 'red' of a red ball does not float disembodied in an abstract color space inV4." If color information allocated to a point in the visual field is converted directly, via the instantiation of some form of propositional logic (analogous to that used in computer design) into color information allocated to an "object identity" postulated by a top-down signal as suggested by Purves and Lotto (e.g. There is blue here + Object 1 is here = Object 1 is blue) no special computational task of "binding together" by means such as synchrony may exist. (Although Von der Malsburg[22]poses the problem in terms of binding "propositions" such as "triangle" and "top", these, in isolation, are not propositional.) 
How signals in the brain come to have propositional content, or meaning, is a much larger issue. However, both Marr[23]and Barlow[24]suggested, on the basis of what was known about neural connectivity in the 1970s that the final integration of features into a percept would be expected to resemble the way words operate in sentences. The role of synchrony in segregational binding remains controversial. Merker[21]has recently suggested that synchrony may be a feature of areas of activation in the brain that relates to an "infrastructural" feature of the computational system analogous to increased oxygen demand indicated via BOLD signal contrast imaging. Apparent specific correlations with segregational tasks may be explainable on the basis of interconnectivity of the areas involved. As a possible manifestation of a need to balance excitation and inhibition over time it might be expected to be associated with reciprocal re-entrant circuits as in the model ofAnil Seth.[15](Merker gives the analogy of the whistle from an audio amplifier receiving its own output.) Visual feature binding is suggested to have a selective attention to the locations of the objects. If indeed spatial attention does play a role in binding integration it will do so primarily when object location acts as a binding cue. A study's findings have shown that functional MRI images indicate regions of the parietal cortex involved in spatial attention, engaged in feature conjunction tasks in single feature tasks. The task involved multiple objects being shown simultaneously at different locations which activated the parietal cortex, whereas when multiple objects are shown sequentially at the same location the parietal cortex was less engaged.[25] Dezfouli et al. investigated feature binding through two feature dimensions to disambiguate whether a specific combination of color and motion direction is perceived as bound or unbound. Two behaviorally relevant features, including color and motion belonging to the same object, are defined as the "bound" condition, whereas the "unbound" condition has features that belong to different objects. Local field potentials were recorded from the lateral prefrontal cortex (lPFC) in monkeys and were monitored during different stimulus configurations. The findings suggest a neural representation of visual feature binding in 4 to 12 Hertzfrequency bands. It is also suggested that transmission of binding information is relayed through different lPFC neural subpopulations. The data shows behavioral relevance of binding information that is linked to the animal's reaction time. This includes the involvement of the prefrontal cortex targeted by the dorsal and ventral visual streams in binding visual features from different dimensions (color and motion).[26] It is suggested that the visual feature binding consists of two different mechanisms in visual perception. One mechanism consists of agonistic familiarity of possible combinations of features integrating several temporal integration windows. It is speculated that this process is mediated by neural synchronization processes and temporal synchronization in the visual cortex. The second mechanism is mediated by familiarity with the stimulus and is provided by attentional top-down support from familiar objects.[27] Smythies[28]defines the combination problem, also known as the subjective unity of perception, as "How do the brain mechanisms actually construct the phenomenal object?". 
Revonsuo[1]equates this to "consciousness-related binding", emphasizing the entailment of a phenomenal aspect. As Revonsuo explores in 2006,[29]there are nuances of difference beyond the basic BP1:BP2 division. Smythies speaks of constructing a phenomenal object ("local unity" for Revonsuo) but philosophers such asRené Descartes,Gottfried Wilhelm Leibniz,Immanuel Kant, and James (see Brook and Raymont)[30]have typically been concerned with the broader unity of a phenomenal experience ("global unity" for Revonsuo) – which, as Bayne[31]illustrates may involve features as diverse as seeing a book, hearing a tune and feeling an emotion. Further discussion will focus on this more general problem of how sensory data that may have been segregated into, for instance, "blue square" and "yellow circle" are to be re-combined into a single phenomenal experience of a blue square next to a yellow circle, plus all other features of their context. There is a wide range of views on just how real this "unity" is, but the existence of medical conditions in which it appears to be subjectively impaired, or at least restricted, suggests that it is not entirely illusory.[32] There are many neurobiological theories about the subjective unity of perception. Different visual features such as color, size, shape, and motion are computed by largely distinct neural circuits but we experience this as an integrated whole. The different visual features interact with each other in various ways. For example, shape discrimination of objects is strongly affected by orientation but only slightly affected by object size.[33]Some theories suggest that global perception of the integrated whole involves higher order visual areas.[34]There is also evidence that the posterior parietal cortex is responsible for perceptual scene segmentation and organization.[35]Bodies facing each other are processed as a single unit and there is increased coupling of the extrastriate body area (EBA) and the posterior superior temporal sulcus (pSTS) when bodies are facing each other.[36]This suggests that the brain is biased towards grouping humans in twos or dyads.[37] The boundary problem is another unsolved problem in neuroscience and phenomenology that is related to the binding problem. The boundary problem is essentially the inverse of the binding problem, and asks how binding stops occurring and what prevents other neurological phenomena from being included in first-person perspectives, giving first-person perspectives hard boundaries. Topological segmentation and electromagnetic field topology have been proposed as possible avenues for solving the boundary problem as well as the binding problem.[38] Early philosophers René Descartes and Gottfried Wilhelm Leibniz[39]noted that the apparent unity of our experience is an all-or-none qualitative characteristic that does not appear to have an equivalent in the known quantitative features, like proximity or cohesion, of composite matter.William James,[40]in the nineteenth century, considered the ways the unity of consciousness might be explained by known physics and found no satisfactory answer. He coined the term "combination problem", in the specific context of a "mind-dust theory" in which it is proposed that a full human conscious experience is built up from proto- or micro-experiences in the way that matter is built up from atoms. James claimed that such a theory was incoherent, since no causal physical account could be given of how distributed proto-experiences would "combine". 
He favoured instead a concept of "co-consciousness" in which there is one "experience of A, B and C" rather than combined experiences. A detailed discussion of subsequent philosophical positions is given by Brook and Raymont (see 26). However, these do not generally include physical interpretations. Whitehead[41]proposed a fundamental ontological basis for a relation consistent with James's idea of co-consciousness, in which many causal elements are co-available or "compresent" in a single event or "occasion" that constitutes a unified experience. Whitehead did not give physical specifics, but the idea of compresence is framed in terms of causal convergence in a local interaction consistent with physics. Where Whitehead goes beyond anything formally recognized in physics is in the "chunking" of causal relations into complex but discrete "occasions". Even if such occasions can be defined, Whitehead's approach still leaves James's difficulty with finding a site, or sites, of causal convergence that would make neurobiological sense for "co-consciousness". Sites of signal convergence do clearly exist throughout the brain but there is a concern to avoid re-inventing whatDaniel Dennett[42]calls a Cartesian Theater or a single central site of convergence of the form that Descartes proposed. Descartes's central "soul" is now rejected because neural activity closely correlated with conscious perception is widely distributed throughout the cortex. The remaining choices appear to be either separate involvement of multiple distributed causally convergent events or a model that does not tie a phenomenal experience to any specific local physical event but rather to some overall "functional" capacity. Whichever interpretation is taken, as Revonsuo[1]indicates, there is no consensus on what structural level we are dealing with – whether the cellular level, that of cellular groups as "nodes", "complexes" or "assemblies" or that of widely distributed networks. There is probably only general agreement that it is not the level of the whole brain, since there is evidence that signals in certain primary sensory areas, such as the V1 region of the visual cortex (in addition to motor areas and cerebellum), do not contribute directly to phenomenal experience. Stoll and colleagues conducted an fMRI experiment to see whether participants would view a dynamic bistable stimulus globally or locally.[34]Responses in lower visual cortical regions were suppressed when participants viewed the stimulus globally. However, if global perception was without shape grouping, higher cortical regions were suppressed. This experiment shows that higher order cortex is important in perceptual grouping. Grassi and colleagues used three different motion stimuli to investigate scene segmentation or how meaningful entities are grouped together and separated from other entities in a scene.[35]Across all stimuli, scene segmentation was associated with increased activity in the posterior parietal cortex and decreased activity in lower visual areas. This suggests that the posterior parietal cortex is important for viewing an integrated whole. Mersad and colleagues used an EEG frequency tagging technique to differentiate between brain activity for the integrated whole object and brain activity for parts of the object.[43]The results showed that the visual system binds two humans in close proximity as part of an integrated whole. 
These results are consistent with evolutionary theories that face-to-face bodies are one of the earliest representations of social interaction.[37]It also supports other experimental work showing that body-selective visual areas respond more strongly to facing bodies.[44] Experiments have shown that ferritin and neuromelanin in fixed humansubstantia nigra pars compacta(SNc) tissue are able to support widespread electron tunneling.[45]Further experiments have shown that ferritin structures similar to ones found in SNc tissue are able to conduct electrons over distances as great as 80 microns, and that they behave in accordance with Coulomb blockade theory to perform a switching or routing function.[46][47]Both of these observations are consistent with earlier predictions that are part of a hypothesis that ferritin and neuromelanin can provide a binding mechanism associated with an action selection mechanism,[48]although the hypothesis itself has not yet been directly investigated. The hypothesis and these observations have been applied toIntegrated Information Theory.[49] Daniel Dennett[42]has proposed that we, as humans, sensing our experiences as individual single events is illusory and that, instead, at any one time there are "multiple drafts" of sensory patterns at multiple sites. Each would only cover a fragment of what we think we experience. Arguably, Dennett is claiming that consciousness is not unified and there is no phenomenal binding problem. Most philosophers have difficulty with this position (see Bayne),[31]but some physiologists agree with it. In particular, the demonstration ofperceptual asynchronyin psychophysical experiments by Moutoussis and Zeki,[50][51]where color is perceived before orientation of lines and before motion by 40 and 80 ms respectively, constitutes an argument that, over these very short time periods, different attributes are consciously perceived at different times, leading to the view that at least over these brief periods of time after visual stimulation, different events are not bound to each other, leading to the view of a disunity of consciousness,[52]at least over these brief time intervals. Dennett's view might be in keeping with evidence from recall experiments and change blindness purporting to show that our experiences are much less rich than we sense them to be – what has been called the Grand Illusion.[53]However, few, if any, other authors suggest the existence of multiple partial "drafts". Moreover, also on the basis of recall experiments, Lamme[54]has challenged the idea that richness is illusory, emphasizing that phenomenal content cannot be equated with content to which there is cognitive access. Dennett does not tie drafts to biophysical events. Multiple sites of causal convergence are invoked in specific biophysical terms by Edwards[55]and Sevush.[56]In this view the sensory signals to be combined in phenomenal experience are available, in full, at each of multiple sites. To avoid non-causal combination, each site/event is placed within an individual neuronal dendritic tree. The advantage is that "compresence" is invoked just where convergence occurs neuro-anatomically. The disadvantage, as for Dennett, is the counter-intuitive concept of multiple "copies" of experience. The precise nature of an experiential event or "occasion", even if local, also remains uncertain. 
The majority of theoretical frameworks for the unified richness of phenomenal experience adhere to the intuitive idea that experience exists as a single copy, and draw on "functional" descriptions of distributed networks of cells. Baars[57]has suggested that certain signals, encoding what we experience, enter a "Global Workspace" within which they are "broadcast" to many sites in the cortex for parallel processing. Dehaene, Changeux and colleagues[58]have developed a detailed neuro-anatomical version of such a workspace. Tononi and colleagues[59]have suggested that the level of richness of an experience is determined by the narrowest information interface "bottleneck" in the largest sub-network or "complex" that acts as an integrated functional unit. Lamme[54]has suggested that networks supporting reciprocal signaling rather than those merely involved in feed-forward signaling support experience. Edelman and colleagues have also emphasized the importance of re-entrant signaling.[60]Cleeremans[61]emphasizes meta-representation as the functional signature of signals contributing to consciousness. In general, such network-based theories are not explicitly theories of how consciousness is unified, or "bound", but rather theories of functional domains within which signals contribute to unified conscious experience. A concern about functional domains is what Rosenberg[62]has called the boundary problem; it is hard to find a unique account of what is to be included and what excluded. Nevertheless, this is, if anything is, the consensus approach. Within the network context, a role for synchrony has been invoked as a solution to the phenomenal binding problem as well as the computational one. In his book,The Astonishing Hypothesis,[63]Crick appears to be offering a solution to BP2 as much as BP1. Even von der Malsburg,[64]introduces detailed computational arguments about object feature binding with remarks about a "psychological moment". The Singer group[65]also appear to be interested as much in the role of synchrony in phenomenal awareness as in computational segregation. The apparent incompatibility of using synchrony to both segregate and unify might be explained by sequential roles. However, Merker[21]points out what appears to be a contradiction in attempts to solve the subjective unity of perception in terms of a functional (effectively meaning computational) rather than a local biophysical domain in the context of synchrony. Functional arguments for a role for synchrony are in fact underpinned by analysis of local biophysical events. However, Merker[21]points out that the explanatory work is done by the downstream integration of synchronized signals in post-synaptic neurons: "It is, however, by no means clear what is to be understood by 'binding by synchrony' other than the threshold advantage conferred by synchrony at, and only at, sites of axonal convergence onto single dendritic trees..." In other words, although synchrony is proposed as a way of explaining binding on a distributed rather than a convergent basis, the justification rests on what happens at convergence. Signals for two features are proposed as bound by synchrony because synchrony effects downstream convergent interaction. Any theory of phenomenal binding based on this sort of computational function would seem to follow the same principle. The phenomenality would entail convergence, if the computational function does. 
The assumption in many of the quoted models suggest that computational and phenomenal events, at least at some point in the sequence of events, parallel each other in some way. The difficulty remains in identifying what that way might be. Merker's[21]analysis suggests that either (1) both computational and phenomenal aspects of binding are determined by convergence of signals on neuronal dendritic trees, or (2) that our intuitive ideas about the need for "binding" in a "holding together" sense in both computational and phenomenal contexts are misconceived. We may be looking for something extra that is not needed. Merker, for instance, argues that the homotopic connectivity of sensory pathways does the necessary work. In modernconnectionism, cognitive neuroarchitectures are developed (e.g. "Oscillatory Networks",[66]"Integrated Connectionist/Symbolic (ICS) Cognitive Architecture",[67]"Holographic Reduced Representations (HRRs)",[68]"Neural Engineering Framework (NEF)"[69]) that solve the binding problem by means of integrativesynchronizationmechanisms(e.g. the (phase-)synchronized "Binding-by-synchrony (BBS)" mechanism) According to bioengineering Prof Igor Val Danilov,[74]the mother–fetus neurocognitive model[75]—knowledge about neurophysiological processes duringshared intentionality—can reveal insights into the binding problem and even theperceptionof object development sinceintentionalitysucceeds before organisms confront the binding problem. Indeed, at the beginning of life, the environment is the cacophony of stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations. Because the environment is uncategorised for the organisms at this beginning stage of development, the sensation is too limited by the noise to solve the cue problem—the relevant stimulus cannot overcome the noise magnitude if it passes through the senses. While very young organisms need to combine objects, background and abstract or emotional features into a single experience for building the representation of the surrounded reality, they cannot distinguish relevant sensory stimuli independently to integrate them into object representations. Even theembodied dynamical systemapproach cannot get around the cue to noise problem. The application of embodied information requires an already categorised environment onto objects—holistic representation of reality—which occurs through (and only after the emergence of) perception and intentionality.[76][77]In short, properties of the mother's heart—the electromagnetic and acoustic oscillations—converge the neuronal activity of both nervous systems in an ensemble, shaping synchrony. During the mother's intentional acts with her environment, these interchanges provide clues to the fetus's nervous system, binding synaptic activity with relevant stimuli, occurring due tobrain waveinteraction between the mother's and fetal nervous systems.[78]
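Of the connectionist architectures listed above, Holographic Reduced Representations (HRRs) give perhaps the most compact illustration of what a variable-binding operation can look like: a role vector and a filler vector are bound by circular convolution, and the filler is recovered by correlating the trace with the role. The sketch below shows only this binding operator (following Plate's HRR scheme); it is not a claim about how the brain, or the other architectures named above, solve the phenomenal binding problem, and the dimensionality and random vectors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048                                     # dimensionality of the representation

def bind(a, b):
    """Bind two vectors by circular convolution (Plate's HRR binding operator)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Approximate inverse: circular correlation of the trace with the cue."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

# Random high-dimensional vectors standing in for "red", "circle", "blue", "square".
red, circle, blue, square = (rng.normal(0, 1 / np.sqrt(n), n) for _ in range(4))

# A scene containing a red circle AND a blue square, stored as a single vector.
scene = bind(red, circle) + bind(blue, square)

# Querying the scene with "red" recovers something close to "circle", not "square",
# so the conjunctions are not confused with a blue circle and a red square.
probe = unbind(scene, red)
print("similarity to circle:", np.dot(probe, circle))   # close to 1
print("similarity to square:", np.dot(probe, square))   # close to 0
```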
https://en.wikipedia.org/wiki/Binding_problem
Acognitive mapis a type ofmental representationused by an individual to order their personal store of information about their everyday or metaphorical spatial environment, and the relationship of its component parts. The concept was introduced byEdward Tolmanin 1948.[1]He tried to explain the behavior of rats that appeared to learn the spatial layout of a maze, and subsequently the concept was applied to other animals, including humans.[2]The term was later generalized by some researchers, especially in the field ofoperations research, to refer to a kind ofsemantic networkrepresenting an individual's personal knowledge orschemas.[3][4][5] Cognitive maps have been studied in various fields, such as psychology, education, archaeology, planning, geography, cartography, architecture, landscape architecture, urban planning, management and history.[6]Because of the broad use and study of cognitive maps, it has become a colloquialism for almost any mental representation or model.[6]As a consequence, these mental models are often referred to, variously, as cognitive maps,mental maps,scripts,schemata, andframe of reference. Cognitive maps are a function of the working brain that humans and animals use for movement in a new environment. They help us in recognizing places, computing directions and distances, and in critical-thinking on shortcuts. They support us in wayfinding in an environment, and act as blueprints for new technology.[citation needed] Cognitive maps serve the construction and accumulation of spatial knowledge, allowing the "mind's eye" to visualize images in order to reducecognitive load, enhancerecallandlearningof information. This type of spatial thinking can also be used as a metaphor for non-spatial tasks, where people performing non-spatial tasks involvingmemoryand imaging use spatial knowledge to aid in processing the task.[7]They include information about the spatial relations that objects have among each other in an environment and they help us in orienting and moving in a setting and in space. They are internal representation, they are not a fixed image, instead they are a schema, dynamic and flexible, with a degree of personal level. A spatial map needs to be acquired according to a frame of reference. Because it is independent from the observer's point of view, it is based on an allocentric reference system— with an object-to-object relation. It codes configurational information, using a world-centred coding system.[citation needed] Theneural correlatesof a cognitive map have been speculated to be theplace cellsystem in thehippocampus[8][9]and the recently discoveredgrid cellsin theentorhinal cortex.[10] The idea of a cognitive map was first developed byEdward C. Tolman. Tolman, one of the early cognitive psychologists, introduced this idea when doing an experiment involving rats and mazes. In Tolman's experiment, a rat was placed in a cross shaped maze and allowed to explore it. After this initial exploration, the rat was placed at one arm of the cross and food was placed at the next arm to the immediate right. The rat was conditioned to this layout and learned to turn right at the intersection in order to get to the food. When placed at different arms of the cross maze however, the rat still went in the correct direction to obtain the food because of the initial cognitive map it had created of the maze. 
Rather than just deciding to turn right at the intersection no matter what, the rat was able to determine the correct way to the food no matter where in the maze it was placed.[11] Unfortunately, further research was slowed due to the behaviorist point of view prevalent in the field of psychology at the time.[12]In later years, O'Keefe and Nadel attributed Tolman's research to the hippocampus, stating that it was the key to the rat's mental representation of its surroundings. This observation furthered research in this area and consequently much of hippocampus activity is explained through cognitive map making.[13] As time went on, the cognitive map was researched in other prospective fields that found it useful, therefore leading to broader and differentiating definitions and applications.[citation needed] A cognitive map is a spatial representation of the outside world that is kept within the mind, until an actual manifestation (usually, a drawing) of this perceived knowledge is generated, a mental map. Cognitive mapping is the implicit, mental mapping the explicit part of the same process. In most cases, a cognitive map exists independently of a mental map, an article covering just cognitive maps would remain limited to theoretical considerations.[citation needed] Mental mapping is typically associated with landmarks, locations, and geography when demonstrated. Creating mental maps depends on the individual and their perceptions whether they are influenced by media, real-life, or other sources. Because of their factual storage mental maps can be useful when giving directions and navigating.[14][15]As stated previously this distinction is hard to identify when posed with almost identical definitions, nevertheless there is a distinction.[16] In some uses, mental map refers to a practice done by urban theorists by having city dwellers draw a map, from memory, of their city or the place they live. This allows the theorist to get a sense of which parts of the city or dwelling are more substantial or imaginable. This, in turn, lends itself to a decisive idea of how well urban planning has been conducted.[17] The cognitive map is generated from a number of sources, both from thevisual systemand elsewhere. Much of the cognitive map is created through self-generated movementcues. Inputs from senses like vision,proprioception, olfaction, and hearing are all used to deduce a person's location within their environment as they move through it. This allows for path integration, the creation of a vector that represents one's position and direction within one's environment, specifically in comparison to an earlier reference point. This resulting vector can be passed along to the hippocampal place cells where it is interpreted to provide more information about the environment and one's location within the context of the cognitive map.[18] Directional cues and positional landmarks are also used to create the cognitive map. Within directional cues, both explicit cues, like markings on a compass, as well as gradients, like shading or magnetic fields, are used as inputs to create the cognitive map. Directional cues can be used both statically, when a person does not move within his environment while interpreting it, and dynamically, when movement through a gradient is used to provide information about the nature of the surrounding environment. 
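The path-integration computation described above can be stated very simply in dead-reckoning terms: self-motion cues (a heading and a distance per step) are accumulated into a single vector from the starting reference point to the current position. The sketch below is only an abstract illustration of that vector arithmetic, not a model of hippocampal place cells or entorhinal circuitry; the function name path_integrate and the example walk are invented for illustration.

```python
import math

def path_integrate(steps):
    """Dead-reckoning sketch of path integration.

    `steps` is a sequence of (heading_radians, distance) self-motion cues,
    e.g. derived from proprioception or optic flow. They are summed into a
    single vector from the starting reference point to the current position.
    """
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_bearing = math.atan2(-y, -x)        # direction pointing back to the start
    return (x, y), home_distance, home_bearing

# Walk 3 units east, then 4 units north: the integrated vector is (3, 4),
# so the homeward vector is 5 units long and points back toward the origin.
position, dist, bearing = path_integrate([(0.0, 3.0), (math.pi / 2, 4.0)])
print(position, dist, math.degrees(bearing))
```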
Positional landmarks provide information about the environment by comparing the relative position of specific objects, whereas directional cues give information about the shape of the environment itself. These landmarks are processed by the hippocampus together to provide a graph of the environment through relative locations.[18][9] Alex Siegel and Sheldon White (1975) proposed a model of acquisition of spatial knowledge based on different levels. The first stage of the process is said to be limited to the landmarks available in a new environment. Then, as a second stage, information about the routes that connect landmarks will be encoded, at the beginning in a non-metric representation form and consequently they will be expanded with metric properties, such as distances, durations and angular deviations. In the third and final step, the observer will be able to use a survey representation of the surroundings, using an allocentric point of view.[19] All in all, the acquisition of cognitive maps is a gradual construction. This kind of knowledge is multimodal in nature and it is built up by different pieces of information coming from different sources that are integrated step by step.[citation needed] Cognitive mapping is believed to largely be a function of the hippocampus. The hippocampus is connected to the rest of the brain in such a way that it is ideal for integrating both spatial and nonspatial information. Connections from thepostrhinal cortexand the medial entorhinal cortex provide spatial information to the hippocampus. Connections from theperirhinal cortexand lateral entorhinal cortex provide nonspatial information. The integration of this information in the hippocampus makes the hippocampus a practical location for cognitive mapping, which necessarily involves combining information about an object's location and its other features.[20] O'Keefe and Nadel were the first to outline a relationship between the hippocampus and cognitive mapping.[8]Many additional studies have shown additional evidence that supports this conclusion.[21]Specifically,pyramidal cells(place cells,boundary cells, andgrid cells) have been implicated as the neuronal basis for cognitive maps within the hippocampal system. Numerous studies by O'Keefe have implicated the involvement of place cells. Individual place cells within the hippocampus correspond to separate locations in the environment with the sum of all cells contributing to a single map of an entire environment. The strength of the connections between the cells represents the distances between them in the actual environment. 
The same cells can be used for constructing several environments, though individual cells' relationships to each other may differ on a map by map basis.[8]The possible involvement of place cells in cognitive mapping has been seen in a number of mammalian species, including rats and macaque monkeys.[21]Additionally, in a study of rats by Manns and Eichenbaum, pyramidal cells from within the hippocampus were also involved in representing object location and object identity, indicating their involvement in the creation of cognitive maps.[20]However, there has been some dispute as to whether such studies of mammalian species indicate the presence of a cognitive map and not another, simpler method of determining one's environment.[22] While not located in the hippocampus, grid cells from within the medial entorhinal cortex have also been implicated in the process ofpath integration, actually playing the role of the path integrator while place cells display the output of the information gained through path integration.[23]The results of path integration are then later used by the hippocampus to generate the cognitive map.[18]The cognitive map likely exists on a circuit involving much more than just the hippocampus, even if it is primarily based there. Other than the medial entorhinal cortex, the presubiculum and parietal cortex have also been implicated in the generation of cognitive maps.[21] There has been some evidence for the idea that the cognitive map is represented in thehippocampusby two separate maps. The first is the bearing map, which represents the environment through self-movement cues andgradientcues. The use of thesevector-based cues creates a rough, 2D map of the environment. The second map would be the sketch map that works off of positional cues. The second map integrates specific objects, orlandmarks, and their relative locations to create a 2D map of the environment. The cognitive map is thus obtained by the integration of these two separate maps.[18]This leads to an understanding that it is not just one map but three that help us create this mental process. It should be clear that parallel map theory is still growing. The sketch map has foundation in previous neurobiological processes and explanations while the bearing map has very little research to support its evidence.[24] According to O'Keefe and Nadel (1978), not only humans require spatial abilities.Non-humans animalsneed them as well to find food, shelters, and other animals whether it is mates or predators.[25]To do so, some animals establish relationships between landmarks, allowing them to make spatial inferences and detect positions.[26] The first experiments onratsin a maze, conducted by Tolman, Ritchie, and Kalish (1946), showed that rats can form mental maps of spatial locations with a good comprehension of them. But these experiments, led again later by other researchers (for example by Eichenbaum, Stewart, & Morris, 1990 and by Singer et al. 2006) have not concluded with such clear results. Some authors tried to bring to light the way rats can take shortcuts. The results have demonstrated that in most cases, rats fail to use a shortcut when reaching for food unless they receive a preexposure to this shortcut route. In that case, rats use that route significantly faster and more often than those who were not preexposed. 
Moreover, they have difficulties making a spatial inference such as taking a novel shortcut route.[27] In 1987, Chapuis and Varlet led an experiment ondogsto determine if they were able to infer shortcuts. The conclusion confirmed their hypothesis. Indeed, the results demonstrated that the dogs were able to go from starting point to point A with food and then go directly to point B without returning to the starting point. But for Andrew T.D. Bennett (1996) it can simply mean that the dogs have seen some landmarks near point B such as trees or buildings and headed towards them because they associated them with the food. Later, in 1998, Cheng and Spetch did an experiment on gerbils. When looking for the hidden food (goal), gerbils were using the relationship between the goal and one landmark at a time. Instead of deducing that the food was equidistant from two landmarks, gerbils were searching it by its position from two independent landmarks. This means that even though animals use landmarks to locate positions, they do it in a certain way.[26] Another experiment, includingpigeonsthis time, showed that they also use landmarks to locate positions. The task was for the pigeons to find hidden food in an arena. A part of the testing was to make sure that they were not using their smell to locate food. These results show and confirm other evidence of links present in those animals between one or multiple landmark(s) and hidden food (Cheng and Spetch, 1998, 2001; Spetch and Mondloch, 1993; Spetch et al., 1996, 1997).[25] There is increasing evidence thatfishform navigational cognitive maps.[28]In one such neurological study, wireless neural recording systems measured the neural activity ofgoldfishand found evidence they form complex cognitive maps of their surroundings.[29] In a review, Andrew T.D. Bennett noted two principal definitions for the "cognitive map" term. The first one, according to Tolman, O'Keefe, and Nadel, implies the capacity to create novel short-cutting thanks to vigorous memorization of the landmarks. The second one, according to Gallistel, considers a cognitive map as "any representation of space held by an animal".[22]This lack of a proper definition is also shared by Thinus-Blanc (1996) who stated that the definition is not clear enough. Therefore, this makes further experiments difficult to conclude.[25] However, Bennett argued that there is no clear evidence for cognitive maps in non-human animals (i.e. cognitive map according to Tolman's definition). This argument is based on analyses of studies where it has been found that simpler explanations can account for experimental results. Bennett highlights three simpler alternatives that cannot be ruled out in tests of cognitive maps in non-human animals "These alternatives are (1) that the apparently novel short-cut is not truly novel; (2) that path integration is being used; and (3) that familiar landmarks are being recognised from a new angle, followed by movement towards them."[22]This point of view is also shared by Grieves and Dudchenko (2013) that showed with their experiment on rats (briefly presented above) that these animals are not capable of making spatial inferences using cognitive maps.[27] Heuristicswere found to be used in the manipulation and creation of cognitive maps.[30]These internal representations are used by our memory as a guide in our external environment. It was found that when questioned about maps imaging, distancing, etc., people commonly made distortions to images. 
These distortions took shape in theregularisationof images (i.e., images are represented as more like pure abstractgeometricimages, though they are irregular in shape). There are several ways that humans form and use cognitive maps, with visual intake being an especially key part of mapping: the first is by usinglandmarks, whereby a person uses a mental image to estimate a relationship, usually distance, between two objects. The second isroute-roadknowledge, and is generally developed after a person has performed a task and is relaying the information of that task to another person. The third is asurvey, whereby a person estimates a distance based on a mental image that, to them, might appear like an actual map. This image is generally created when a person's brain begins making image corrections. These are presented in five ways:[citation needed] Another method of creating cognitive maps is by means of auditory intake based on verbal descriptions. Using the mapping based from a person's visual intake, another person can create a mental image, such as directions to a certain location.[31]
https://en.wikipedia.org/wiki/Cognitive_map
Feature integration theoryis a theory ofattentiondeveloped in 1980 byAnne Treismanand Garry Gelade that suggests that when perceiving a stimulus, features are "registered early, automatically, and in parallel, while objects are identified separately" and at a later stage in processing. The theory has been one of the most influentialpsychological modelsof human visualattention. According to Treisman, the first stage of the feature integration theory is the preattentive stage. During this stage, different parts of the brain automatically gather information about basic features (colors, shape, movement) that are found in the visual field. The idea that features are automatically separated appears counterintuitive. However, we are not aware of this process because it occurs early in perceptual processing, before we become conscious of the object. The second stage of feature integration theory is the focused attention stage, where a subject combines individual features of an object to perceive the whole object. Combining individual features of an object requires attention, and selecting that object occurs within a "master map" of locations. The master map of locations contains all the locations in which features have been detected, with each location in the master map having access to the multiple feature maps. These multiple feature maps, or sub-maps, contain a large storage base of features. Features such as color, shape, orientation, sound, and movement are stored in these sub-maps[1][2].When attention is focused at a particular location on the map, the features currently in that position are attended to and are stored in "object files". If the object is familiar, associations are made between the object and prior knowledge, which results in identification of that object. This top-down process, using prior knowledge to inform a current situation or decision, is paramount in either identifying or recognizing objects.[3][4]In support of this stage, researchers often refer to patients withBalint's syndrome. Due to damage in the parietal lobe, these people are unable to focus attention on individual objects. Given a stimulus that requires combining features, people with Balint's syndrome are unable to focus attention long enough to combine the features, providing support for this stage of the theory.[5] Treisman distinguishes between two kinds of visual search tasks, "feature search" and "conjunction search". Feature searches can be performed fast and pre-attentively for targets defined by only one feature, such as color, shape, perceived direction of lighting, movement, or orientation. Features should "pop out" during search and should be able to formillusory conjunctions. Conversely, conjunction searches occur with the combination of two or more features and are identified serially. Conjunction search is much slower than feature search and requires conscious attention and effort. In multiple experiments, some referenced in this article, Treisman concluded thatcolor,orientation, andintensityare features for which feature searches may be performed. As a reaction to the feature integration theory, Wolfe (1994) proposed the Guided Search Model 2.0. According to this model, attention is directed to an object or location through a preattentive process. The preattentive process, as Wolfe explains, directs attention in both a bottom-up and top-down way. Information acquired through both bottom-up and top-down processing is ranked according to priority. 
The priority rankingguidesvisual search and makes the search more efficient. Whether the Guided Search Model 2.0 or the feature integration theory are "correct" theories of visual search is still a hotly debated topic. To test the notion that attention plays a vital role in visual perception, Treisman and Schmidt (1982) designed an experiment to show that features may exist independently of one another early in processing. Participants were shown a picture involving four objects hidden by two black numbers. The display was flashed for one-fifth of a second followed by a random-dot masking field that appeared on screen to eliminate "any residual perception that might remain after the stimuli were turned off".[6]Participants were to report the black numbers they saw at each location where the shapes had previously been. The results of this experiment verified Treisman and Schmidt's hypothesis. In 18% of trials, participants reported seeing shapes "made up of a combination of features from two different stimuli",[7]even when the stimuli had great differences; this is often referred to as anillusory conjunction. Specifically, illusory conjunctions occur in various situations. For example, you may identify a passing person wearing a red shirt and yellow hat and very quickly transform him or her into one wearing a yellow shirt and red hat. The feature integration theory provides explanation for illusory conjunctions; because features exist independently of one another during early processing and are not associated with a specific object, they can easily be incorrectly combined both in laboratory settings, as well as in real life situations.[8] As previously mentioned, Balint's syndrome patients have provided support for the feature integration theory. Particularly, Research participant R.M., who hadBálint's syndromeand was unable to focus attention on individual objects, experiences illusory conjunctions when presented with simple stimuli such as a "blue O" or a "red T." In 23% of trials, even when able to view the stimulus for as long as 10 seconds, R.M. reported seeing a "red O" or a "blue T".[9]This finding is in accordance with feature integration theory's prediction of how one with a lack of focused attention would erroneously combine features. If people use their prior knowledge or experience to perceive an object, they are less likely to make mistakes, or illusory conjunctions. To explain this phenomenon, Treisman and Souther (1986) conducted an experiment in which they presented three shapes to participants where illusory conjunctions could exist. Surprisingly, when she told participants that they were being shown a carrot, lake, and tire (in place of the orange triangle, blue oval, and black circle, respectively), illusory conjunctions did not exist.[10]Treisman maintained that prior-knowledge played an important role in proper perception. Normally, bottom-up processing is used for identifying novel objects; but, once we recall prior knowledge, top-down processing is used. This explains why people are good at identifying familiar objects rather than unfamiliar. When identifying letters while reading, not only are their shapes picked up but also other features like their colors and surrounding elements. Individual letters are processed serially when spatially conjoined with another letter. The locations of each feature of a letter are not known in advance, even while the letter is in front of the reader. 
Since the location of the letter's features and/or the location of the letter is unknown, feature interchanges can occur if one is not attentively focused. This is known aslateral masking, which in this case, refers to a difficulty in separating a letter from the background.[11]
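The efficiency contrast between fast, parallel feature ("pop-out") search and slow, serial conjunction search described above can be illustrated with a toy simulation. This is only an illustration of the search-time idea, not an implementation from Treisman's theory; the display items, set sizes, and the assumption of a self-terminating serial scan are choices made for the example.

```python
import random

random.seed(0)

TARGET = ("red", "X")

def feature_search(display):
    """Pop-out: a unique feature is registered pre-attentively and in
    parallel, so the cost is modeled as one step for any display size."""
    return 1

def conjunction_search(display):
    """Serial, self-terminating scan: items are attended one at a time
    until the conjunction of colour and shape matches the target."""
    checks = 0
    for item in random.sample(display, len(display)):
        checks += 1
        if item == TARGET:
            break
    return checks

for n in (4, 16, 64):
    # The target is a red X among green Xs and red Os, so only the
    # conjunction of both features distinguishes it from the distractors.
    distractors = [("green", "X"), ("red", "O")] * (n // 2)
    display = distractors[: n - 1] + [TARGET]
    trials = [conjunction_search(display) for _ in range(1000)]
    print(f"set size {n:2d}: feature search -> {feature_search(display)} check, "
          f"conjunction search -> {sum(trials) / len(trials):.1f} checks on average")
```

In this toy, the expected number of checks for conjunction search grows roughly linearly with display size, while the pop-out cost stays constant, mirroring the qualitative pattern described in the text.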
https://en.wikipedia.org/wiki/Feature_integration_theory
Thegrandmother cell, sometimes called the "Jennifer Anistonneuron", is a hypotheticalneuronthat represents a complex but specific concept or object.[1]It activates when a person "sees, hears, or otherwise sensibly discriminates"[2]a specific entity, such as their grandmother. It contrasts with the concept ofensemble coding(or "coarse" coding), where the unique set of features characterizing the grandmother is detected as a particular activation pattern across an ensemble of neurons, rather than being detected by a specific "grandmother cell".[1] The term was coined around 1969 by cognitive scientistJerry Lettvin.[1]Rather than serving as a serious hypothesis, the "grandmother cell" concept was initially largely used in jokes and came to be used as a "straw man or foil" for a discussion of ensemble theories in introductory textbooks.[1]However, a similar concept, that of thegnostic neuron, was introduced several years earlier byJerzy Konorskias a serious proposal.[3][1] In 1953,Horace Barlowdescribed cells in a frog retina as "bug detectors", but the term did not gain wide usage.[4][1]Several years later, Jerome (Jerry) Lettvin and others also studied these and other cells, eventually resulting in their widely known 1959 paper "What the frog’s eye tells the frog’s brain."[1] Around 1969, Lettvin introduced the term "grandmother cell" in a course he was teaching at MIT, telling a fictitious anecdote about a neurosurgeon who had discovered a group of "mother cells" in the brain that "responded uniquely only to a mother... whether animate or stuffed, seen from before or behind, upside down or on a diagonal or offered by caricature, photograph or abstraction".[1]In Lettvin's story, the neurosurgeon went on to remove (ablate) all these "several thousand separate neurons" from the brain of Portnoy, the title character of Philip Roth's 1969 novelPortnoy's Complaint, thus curing him from his obsession with his mother, and went on to study "grandmother cells" instead.[1] By 2005,Ed Connorobserved that the term had "become a shorthand for invoking all of the overwhelming practical arguments against a one-to-one object coding scheme. No one wants to be accused of believing in grandmother cells."[5]However, in that year UCLA neurosurgeons Itzhak Fried, mentee Rodrigo Quian Quiroga and others published findings on what they would come to call the "Jennifer Aniston neuron".[5][6]After operating on patients who experience epileptic seizures, the researchers showed photos of celebrities like Jennifer Aniston. The patients, who were fully conscious, often had a particular neuron fire, suggesting that the brain has Aniston-specific neurons.[6][7] Visual neurons in theinferior temporal cortexof the monkey fire selectively to hands and faces.[8][9][10][11]These cells are selective in that they do not fire for other visual objects important for monkeys such as fruit and genitalia. 
Research finds that some of these cells can be trained to show high specificity for arbitrary visual objects, and these would seem to fit the requirements of gnostic/grandmother cells.[12][13] In addition, evidence exists for cells in the human hippocampus that have highly selective responses to different categories of stimuli,[14][15] including highly selective responses to individual human faces.[16]

However, most of the reported face-selective cells are not grandmother/gnostic cells, since they do not represent a specific percept; that is, they are not cells narrowly selective in their activations for one face and only one face, irrespective of transformations of size, orientation, and color. Even the most selective face cells usually also discharge, if more weakly, to a variety of individual faces. Furthermore, face-selective cells often vary in their responsiveness to different aspects of faces. This suggests that their responsiveness arises from a monkey's need to differentiate among different individual faces rather than among other categories of stimuli such as bananas: individual faces are much more similar to each other in their overall organization and fine detail than other kinds of stimuli are, so finer discrimination is required.[1] Moreover, it has been suggested that these cells might in fact be responding as specialized feature-detector neurons that only function in the holistic context of a face construct.[17][18]

One idea has been that such cells form ensembles for the coarse or distributed coding of faces rather than detectors for specific faces. Thus, a specific grandmother may be represented by a specialized ensemble of grandmother or near-grandmother cells.[1]

In 2005, a UCLA and Caltech study found evidence of different cells that fire in response to particular people, such as Bill Clinton or Jennifer Aniston. A neuron for Halle Berry, for example, might respond "to the concept, the abstract entity, of Halle Berry", and would fire not only for images of Halle Berry, but also to the actual name "Halle Berry".[19] However, there is no suggestion in that study that only the cell being monitored responded to that concept, nor was it suggested that no other actress would cause that cell to respond (although several other presented images of actresses did not cause it to respond).[19] The researchers believe that they have found evidence for sparseness, rather than for grandmother cells.[20]

Further evidence for the theory that a small neural network provides facial recognition was found from analysis of cell recording studies of macaque monkeys. By representing faces as points in a high-dimensional linear space, the scientists discovered that each face cell's firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble of about 200 cells to encode the location of any face in the space.[21]

The grandmother cell hypothesis is an extreme version of the idea of sparseness,[22][5] and is not without critics. The opposite of the grandmother cell theory is the distributed representation theory, which states that a specific stimulus is coded by its unique pattern of activity over a large group of neurons widely distributed in the brain. Several arguments have been raised against such extreme sparseness. William James in 1890 proposed a related idea, that of a pontifical cell.[23] The pontifical cell is defined as a putative, and implausible, cell that would have all our experiences.
This is different from a concept specific cell in that it is the site of experience of sense data. James's 1890 pontifical cell was instead a cell "to which the rest of the brain provided a representation" of a grandmother. The experience of grandmother occurred in this cell.
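The axis-coding result described above, in which each face cell fires in proportion to the projection of a face (a point in a high-dimensional "face space") onto the cell's preferred axis, can be sketched with a few lines of linear algebra. The dimensionality, cell count, and noise level below are arbitrary assumptions made for the illustration; the point is that a modest ensemble of linearly tuned cells suffices to recover the encoded face by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50          # dimensionality of the assumed "face space"
n_cells = 200   # number of face cells in the ensemble

# Each cell is assigned a preferred axis in face space; its response to a
# face f is proportional to the projection of f onto that axis, plus noise.
axes = rng.normal(size=(n_cells, d))

def population_response(face, noise=0.1):
    return axes @ face + noise * rng.normal(size=n_cells)

def decode_face(responses):
    # Least-squares inversion of the linear code: find the point in face
    # space whose projections best explain the observed firing rates.
    face_hat, *_ = np.linalg.lstsq(axes, responses, rcond=None)
    return face_hat

face = rng.normal(size=d)          # a particular "face" as a point in space
responses = population_response(face)
face_hat = decode_face(responses)

relative_error = np.linalg.norm(face - face_hat) / np.linalg.norm(face)
print(f"relative reconstruction error: {relative_error:.3f}")
```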
https://en.wikipedia.org/wiki/Grandmother_cell
Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. This article aims to provide an overview of the most definitive models of neuro-biological computation as well as the tools commonly used to construct and analyze them. Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In the cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity.

In all but the simplest cases, the mathematical equations that form the basis of a model cannot be solved exactly. Nevertheless, computer technology, sometimes in the form of specialized software or hardware architectures, allows scientists to perform iterative calculations and search for plausible solutions. A computer chip or a robot that can interact with the natural environment in ways akin to the original organism is one embodiment of a useful model. The ultimate measure of success is, however, the ability to make testable predictions.

The rate of information processing in biological neural systems is constrained by the speed at which an action potential can propagate down a nerve fibre. This conduction velocity ranges from 1 m/s to over 100 m/s, and generally increases with the diameter of the neuronal process. Because this is slow on the timescales of biologically relevant events dictated by the speed of sound or the force of gravity, the nervous system overwhelmingly prefers parallel computations over serial ones in time-critical applications.

A model is robust if it continues to produce the same computational results under variations in inputs or operating parameters introduced by noise. For example, the direction of motion as computed by a robust motion detector would not change under small changes of luminance, contrast or velocity jitter. For simple mathematical models of neurons, for example, the dependence of spike patterns on signal delay is much weaker than the dependence on changes in the "weights" of interneuronal connections.[1]

Gain control refers to the principle that the response of a nervous system should stay within certain bounds even as the inputs from the environment change drastically. For example, when adjusting between a sunny day and a moonless night, the retina changes the relationship between light level and neuronal output by a factor of more than 106{\displaystyle 10^{6}} so that the signals sent to later stages of the visual system always remain within a much narrower range of amplitudes.[2][3][4]

A linear system is one whose response in a specified unit of measure, to a set of inputs considered at once, is the sum of its responses due to the inputs considered individually. Linear systems are easier to analyze mathematically and are a persuasive assumption in many models, including the McCulloch and Pitts neuron, population coding models, and the simple neurons often used in artificial neural networks.
Linearity may occur in the basic elements of a neural circuit, such as the response of a postsynaptic neuron, or as an emergent property of a combination of nonlinear subcircuits.[5] Though linearity is often seen as incorrect, there has been recent work suggesting it may, in fact, be biophysically plausible in some cases.[6][7]

A computational neural model may be constrained to the level of biochemical signalling in individual neurons or it may describe an entire organism in its environment. The examples here are grouped according to their scope.

The most widely used models of information transfer in biological neurons are based on analogies with electrical circuits. The equations to be solved are time-dependent differential equations with electro-dynamical variables such as current, conductance or resistance, capacitance and voltage.

The Hodgkin–Huxley model, widely regarded as one of the great achievements of 20th-century biophysics, describes how action potentials in neurons are initiated and propagated in axons via voltage-gated ion channels. It is a set of nonlinear ordinary differential equations that were introduced by Alan Lloyd Hodgkin and Andrew Huxley in 1952 to explain the results of voltage clamp experiments on the squid giant axon. Analytic solutions do not exist, but the Levenberg–Marquardt algorithm, a modified Gauss–Newton algorithm, is often used to fit these equations to voltage-clamp data.

The FitzHugh–Nagumo model is a simplification of the Hodgkin–Huxley model. The Hindmarsh–Rose model is an extension which describes neuronal spike bursts. The Morris–Lecar model is a modification which does not generate spikes, but describes slow-wave propagation, which is implicated in the inhibitory synaptic mechanisms of central pattern generators.

The soliton model is an alternative to the Hodgkin–Huxley model that claims to explain how action potentials are initiated and conducted in the form of certain kinds of solitary sound (or density) pulses that can be modeled as solitons along axons, based on a thermodynamic theory of nerve pulse propagation.

Another approach, influenced by control theory and signal processing, treats neurons and synapses as time-invariant entities that produce outputs that are linear combinations of input signals, often depicted as sine waves with well-defined temporal or spatial frequencies. The entire behavior of a neuron or synapse is encoded in a transfer function, lack of knowledge concerning the exact underlying mechanism notwithstanding. This brings a highly developed mathematics to bear on the problem of information transfer.

The accompanying taxonomy of linear filters turns out to be useful in characterizing neural circuitry. Both low- and high-pass filters are postulated to exist in some form in sensory systems, as they act to prevent information loss in high and low contrast environments, respectively. Indeed, measurements of the transfer functions of neurons in the horseshoe crab retina according to linear systems analysis show that they remove short-term fluctuations in input signals, leaving only the long-term trends, in the manner of low-pass filters. These animals are unable to see low-contrast objects without the help of optical distortions caused by underwater currents.[8][9]
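The low-pass behaviour described above can be reproduced in a minimal numerical sketch (an illustration only, not a model drawn from the cited Limulus work): a first-order linear filter applied to a signal containing a slow trend plus fast fluctuations passes the trend and attenuates the fluctuations, and, because the filter is linear, the two components can be filtered separately and compared.

```python
import numpy as np

def low_pass(x, dt, tau):
    """First-order low-pass filter dy/dt = (x - y)/tau, Euler-discretized."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + dt * (x[i - 1] - y[i - 1]) / tau
    return y

dt = 0.001
t = np.arange(0.0, 2.0, dt)
slow_trend = np.sin(2 * np.pi * 0.5 * t)          # 0.5 Hz component
fast_jitter = 0.5 * np.sin(2 * np.pi * 40.0 * t)  # 40 Hz component

tau = 0.05  # 50 ms time constant (an arbitrary choice)

# Because the system is linear, the response to a sum equals the sum of the
# responses: filter each component separately to see what survives.
out_slow = low_pass(slow_trend, dt, tau)
out_fast = low_pass(fast_jitter, dt, tau)

print(f"slow component gain: {out_slow.std() / slow_trend.std():.2f}")
print(f"fast component gain: {out_fast.std() / fast_jitter.std():.2f}")
```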
In the retina, an excited neural receptor can suppress the activity of surrounding neurons within an area called the inhibitory field. This effect, known as lateral inhibition, increases the contrast and sharpness in visual response, but leads to the epiphenomenon of Mach bands. This is often illustrated by the optical illusion of light or dark stripes next to a sharp boundary between two regions in an image of different luminance.

The Hartline–Ratliff model describes interactions within a group of interconnected photoreceptor cells.[10] Assuming these interactions to be linear, Hartline and Ratliff proposed the following relationship for the steady-state response rate rp{\displaystyle r_{p}} of the p-th photoreceptor in terms of the steady-state response rates rj{\displaystyle r_{j}} of the surrounding receptors: rp=ep−∑j=1,j≠pnkpj⌊rj−rpjo⌋{\displaystyle r_{p}=e_{p}-\sum _{j=1,j\neq p}^{n}k_{pj}\left\lfloor r_{j}-r_{pj}^{o}\right\rfloor }. Here, ep{\displaystyle e_{p}} is the excitation of the target p-th receptor from sensory transduction, rpjo{\displaystyle r_{pj}^{o}} is the associated threshold of the firing cell, and kpj{\displaystyle k_{pj}} is the coefficient of inhibitory interaction between the p-th and the j-th receptor; the half-bracket ⌊x⌋ denotes rectification (equal to x when x is positive and 0 otherwise), so a neighboring receptor exerts inhibition only when its rate exceeds the threshold. The inhibitory interaction decreases with distance from the target p-th receptor.

According to Jeffress,[11] in order to compute the location of a sound source in space from interaural time differences, an auditory system relies on delay lines: the induced signal from an ipsilateral auditory receptor to a particular neuron is delayed for the same time as it takes for the original sound to travel in space from that ear to the other. Each postsynaptic cell is differently delayed and thus specific for a particular inter-aural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. Following Fischer and Anderson,[12] the response of the postsynaptic neuron to the signals from the left and right ears is given by yR(t)−yL(t){\displaystyle y_{R}\left(t\right)-y_{L}\left(t\right)} where yL(t)=∫0τuL(σ)w(t−σ)dσ{\displaystyle y_{L}\left(t\right)=\int _{0}^{\tau }u_{L}\left(\sigma \right)w\left(t-\sigma \right)d\sigma } and yR(t)=∫0τuR(σ)w(t−σ)dσ{\displaystyle y_{R}\left(t\right)=\int _{0}^{\tau }u_{R}\left(\sigma \right)w\left(t-\sigma \right)d\sigma }, with w(t−σ){\displaystyle w\left(t-\sigma \right)} representing the delay kernel. In this schematic form the same kernel is written for both ears, whereas in a full Jeffress-type model the two pathways carry different delays, as described above. Structures have been located in the barn owl which are consistent with Jeffress-type mechanisms.[13]

A motion detector needs to satisfy three general requirements: paired inputs, asymmetry and nonlinearity.[14] The cross-correlation operation implemented asymmetrically on the responses from a pair of photoreceptors satisfies these minimal criteria and, furthermore, predicts features which have been observed in the response of neurons of the lobula plate in bi-wing insects; this correlation scheme is known as the Hassenstein–Reichardt (HR) detector.[15] The master equation for the response is R=A1(t−τ)B2(t)−A2(t−τ)B1(t){\displaystyle R=A_{1}(t-\tau )B_{2}(t)-A_{2}(t-\tau )B_{1}(t)}. The HR model predicts a peaking of the response at a particular input temporal frequency. The conceptually similar Barlow–Levick model is deficient in the sense that a stimulus presented to only one receptor of the pair is sufficient to generate a response. This is unlike the HR model, which requires two correlated signals delivered in a time-ordered fashion. However, the HR model does not show a saturation of response at high contrasts, which is observed in experiment. Extensions of the Barlow–Levick model can account for this discrepancy.[16]

A related approach to motion estimation uses cross-correlation in both the spatial and temporal directions, and is related to the concept of optical flow.[17]
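A correlation-type (HR) motion detector of the kind described above can be sketched numerically. The toy below is an illustration, not an implementation from the cited work: two adjacent model photoreceptors are driven by a drifting luminance bump, one branch is delayed by τ, and the detector output follows the simplified correlator R(t) = A(t−τ)B(t) − B(t−τ)A(t). The sign of the time-averaged response indicates the direction of motion; the signal shape, delay, and travel times are arbitrary choices.

```python
import numpy as np

def bump(t, center, width=0.05):
    """Luminance signal seen by one receptor as a bright spot drifts past."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

def delayed(x, tau, dt):
    """Shift a signal by the internal delay tau (zero-padded at the start)."""
    k = int(round(tau / dt))
    return np.concatenate([np.zeros(k), x[:-k]]) if k > 0 else x

def correlator_output(travel_time, tau=0.05, dt=0.001, t_max=1.0):
    """Time-averaged output of a simplified correlation-type detector.

    travel_time: time for the spot to move from receptor A to receptor B
                 (positive = one direction, negative = the other).
    """
    t = np.arange(0.0, t_max, dt)
    a = bump(t, 0.4)                     # receptor A signal
    b = bump(t, 0.4 + travel_time)       # receptor B signal
    # R(t) = A(t - tau) * B(t)  -  B(t - tau) * A(t)
    r = delayed(a, tau, dt) * b - delayed(b, tau, dt) * a
    return r.mean()

print(f"motion A->B: {correlator_output(+0.05):+.4f}")
print(f"motion B->A: {correlator_output(-0.05):+.4f}")
```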
Mutually inhibitory processes are a unifying motif of all central pattern generators. This has been demonstrated in the stomatogastric (STG) nervous system of crayfish and lobsters.[18] Two- and three-cell oscillating networks based on the STG have been constructed which are amenable to mathematical analysis, and which depend in a simple way on synaptic strengths and overall activity, the principal parameters that can be adjusted in such models.[19] The mathematics involved is the theory of dynamical systems.

Flight control in the fly is believed to be mediated by inputs from the visual system and also the halteres, a pair of knob-like organs which measure angular velocity. Integrated computer models of Drosophila, short on neuronal circuitry but based on the general guidelines given by control theory and data from the tethered flights of flies, have been constructed to investigate the details of flight control.[20][21]

Tensor network theory is a theory of cerebellar function that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed by Andras Pellionisz and Rodolfo Llinas in the 1980s as a geometrization of brain function (especially of the central nervous system) using tensors.[22][23]

In neural network models, the strength and type, excitatory or inhibitory, of synaptic connections are represented by the magnitude and sign of weights, that is, numerical coefficients w′{\displaystyle w'} in front of the inputs x{\displaystyle x} to a particular neuron. The response of the j{\displaystyle j}-th neuron is given by a sum of nonlinear, usually "sigmoidal" functions g{\displaystyle g} of the inputs as: fj=∑ig(wji′xi+bj){\displaystyle f_{j}=\sum _{i}g\left(w_{ji}'x_{i}+b_{j}\right)}. This response is then fed as input into other neurons and so on. The goal is to optimize the weights of the neurons so that the output layer produces a desired response for a given set of inputs at the input layer. This optimization of the neuron weights is often performed using the backpropagation algorithm together with an optimization method such as gradient descent or Newton's method. Backpropagation compares the output of the network with the expected output from the training data, then updates the weights of each neuron to minimize the contribution of that individual neuron to the total error of the network.
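A minimal sketch of weight optimization by gradient descent is given below. It uses the common formulation in which the sigmoid is applied to the weighted sum of the inputs, a simplification chosen for the example rather than the exact expression above, and a single output neuron so that backpropagation reduces to one application of the chain rule. The toy task, learning rate, and number of iterations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: the neuron should output 1 when x0 + x1 > 1, and 0 otherwise.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = np.zeros(2)   # synaptic weights
b = 0.0           # bias
lr = 0.5          # learning rate

for step in range(2000):
    out = sigmoid(X @ w + b)          # forward pass: g(sum_i w_i x_i + b)
    err = out - y                     # deviation from the training targets
    # Gradient of the mean squared error through the sigmoid (chain rule);
    # for a single neuron this is the whole of backpropagation.
    delta = err * out * (1.0 - out)
    w -= lr * (X.T @ delta) / len(X)
    b -= lr * delta.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
print(f"learned weights: {np.round(w, 2)}, bias: {b:.2f}")
```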
Genetic algorithms are used to evolve neural (and sometimes body) properties in a model brain-body-environment system so as to exhibit some desired behavioral performance. The evolved agents can then be subjected to a detailed analysis to uncover their principles of operation. Evolutionary approaches are particularly useful for exploring spaces of possible solutions to a given behavioral task because these approaches minimize a priori assumptions about how a given behavior ought to be instantiated. They can also be useful for exploring different ways to complete a computational neuroethology model when only partial neural circuitry is available for a biological system of interest.[24]

The NEURON software, developed at Duke University, is a simulation environment for modeling individual neurons and networks of neurons.[25] The NEURON environment is self-contained, allowing interaction through its GUI or via scripting with hoc or Python. The NEURON simulation engine is based on a Hodgkin–Huxley type model using a Borg–Graham formulation. Several examples of models written in NEURON are available from the online database ModelDB.[26]

Nervous systems differ from the majority of silicon-based computing devices in that they resemble analog computers (not digital data processors) and massively parallel processors, not sequential processors. To model nervous systems accurately, in real time, alternative hardware is required. The most realistic circuits to date make use of analog properties of existing digital electronics (operated under non-standard conditions) to realize Hodgkin–Huxley-type models in silico.[27][28][29]
https://en.wikipedia.org/wiki/Models_of_neural_computation
Theneural correlates ofconsciousness(NCC) are the minimal set of neuronal events and mechanisms sufficient for the occurrence of themental statesto which they are related.[2]Neuroscientistsuseempirical approachesto discoverneural correlatesof subjective phenomena; that is, neural changes which necessarily and regularlycorrelatewith a specific experience.[3][4] Ascienceof consciousness must explain the exact relationship between subjective mental states and brain states, the nature of the relationship between theconsciousmindand theelectrochemicalinteractions in the body (mind–body problem). Progress inneuropsychologyandneurophilosophyhas come from focusing on the body rather than the mind. In this context the neuronal correlates of consciousness may be viewed as its causes, and consciousness may be thought of as a state-dependent property of an undefinedcomplex, adaptive, and highly interconnected biological system.[5] Discovering and characterizing neural correlates does not offer a causal theory of consciousness that can explain how particular systems experience anything, the so-calledhard problem of consciousness,[6]but understanding the NCC may be a step toward a causal theory. Most neurobiologists propose that the variables giving rise to consciousness are to be found at the neuronal level, governed by classical physics. There are theories proposed ofquantum consciousnessbased onquantum mechanics.[7] There is an apparent redundancy and parallelism in neural networks so, while activity in one group of neurons may correlate with a percept in one case, a different population may mediate a related percept if the former population is lost or inactivated. It may be that every phenomenal, subjective state has a neural correlate. Where the NCC can be induced artificially, the subject will experience the associated percept, while perturbing or inactivating the region of correlation for a specific percept will affect the percept or cause it to disappear, giving a cause-effect relationship from the neural region to the nature of the percept.[citation needed] Proposals that have been advanced over the years include: what characterizes the NCC? What are the commonalities between the NCC for seeing and for hearing? Will the NCC involve all thepyramidal neuronsin the cortex at any given point in time? Or only a subset of long-range projection cells in the frontal lobes that project to the sensory cortices in the back? Neurons that fire in a rhythmic manner? Neurons that fire in asynchronous manner?[8] The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools (e.g.,Adamantidis et al. 2007) depends on the simultaneous development of appropriate behavioral assays and model organisms amenable to large-scale genomic analysis and manipulation. The combination of fine-grained neuronal analysis in animals with increasingly more sensitive psychophysical and brain imaging techniques in humans, complemented by the development of a robust theoretical predictive framework, will hopefully lead to a rational understanding of consciousness, one of the central mysteries of life. Research has shown a correlation between significant measurable changes in brain structure at the end of the second trimester of the human fetus development, which facilitate the emergence of early consciousness in the fetus. 
These structural developments include the maturation of neural connections and the formation of key brain regions associated with sensory processing and emotional regulation. As these areas become more integrated, the fetus begins to exhibit responses to external stimuli, suggesting a nascent awareness of its environment. This early stage of consciousness is crucial, as it lays the foundation for later cognitive and social development, influencing how individuals will interact with the world around them after birth.[9] There are two common but distinct dimensions of the termconsciousness,[10]one involvingarousalandstates of consciousnessand the other involvingcontent of consciousnessandconscious states. To be consciousofanything the brain must be in a relatively high state of arousal (sometimes calledvigilance), whether in wakefulness orREM sleep, vividly experienced in dreams although usually not remembered. Brain arousal level fluctuates in acircadianrhythm but may be influenced by lack of sleep, drugs and alcohol, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude that triggers some criterion reaction (for instance, the sound level necessary to evoke an eye movement or a head turn toward the sound source). Clinicians use scoring systems such as theGlasgow Coma Scaleto assess the level of arousal in patients.[citation needed] High arousal states are associated with conscious states that have specific content, seeing, hearing, remembering, planning or fantasizing about something. Different levels or states of consciousness are associated with different kinds of conscious experiences. The "awake" state is quite different from the "dreaming" state (for instance, the latter has little or no self-reflection) and from the state of deep sleep. In all three cases the basic physiology of the brain is affected, as it also is inaltered states of consciousness, for instance after taking drugs or during meditation when conscious perception and insight may be enhanced compared to the normal waking state.[citation needed] Clinicians talk aboutimpaired states of consciousnessas in "thecomatose state", "thepersistent vegetative state" (PVS), and "theminimally conscious state" (MCS). Here, "state" refers to different "amounts" of external/physical consciousness, from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating and limited form of conscious sensation in a minimally conscious state such as sleep walking or during a complex partialepilepticseizure.[11]The repertoire of conscious states or experiences accessible to a patient in a minimally conscious state is comparatively limited. In brain death there is no arousal, but it is unknown whether the subjectivity of experience has been interrupted, rather than its observable link with the organism. 
Functional neuroimaging have shown that parts of the cortex are still active in vegetative patients that are presumed to be unconscious;[12]however, these areas appear to be functionally disconnected from associative cortical areas whose activity is needed for awareness.[citation needed] The potentialrichness of conscious experienceappears to increase from deep sleep to drowsiness to full wakefulness, as might be quantified using notions from complexity theory that incorporate both the dimensionality as well as the granularity of conscious experience to give anintegrated-information-theoretical accountof consciousness.[13]As behavioral arousal increases so does the range and complexity of possible behavior. Yet in REM sleep there is a characteristicatonia, low motor arousal and the person is difficult to wake up, but there is still high metabolic and electric brain activity and vivid perception. Many nuclei with distinct chemical signatures in thethalamus,midbrainandponsmust function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely, it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including theamygdala,thalamus,claustrumand thebasal ganglia.[citation needed][original research?] The possibility of precisely manipulating visual percepts in time and space has madevisiona preferred modality in the quest for the NCC. Psychologists have perfected a number of techniques –masking,binocular rivalry,continuous flash suppression,motion induced blindness,change blindness,inattentional blindness– in which the seemingly simple and unambiguous relationship between a physical stimulus in the world and its associated percept in the privacy of the subject's mind is disrupted.[14]In particular a stimulus can be perceptually suppressed for seconds or even minutes at a time: the image is projected into one of the observer's eyes but is invisible, not seen. In this manner the neural mechanisms that respond to the subjective percept rather than the physical stimulus can be isolated, permitting visual consciousness to be tracked in the brain. In aperceptualillusion, the physical stimulus remains fixed while the percept fluctuates. The best known example is theNecker cubewhose 12 lines can be perceived in one of two different ways in depth. A perceptual illusion that can be precisely controlled isbinocular rivalry. Here, a small image, e.g., a horizontal grating, is presented to the left eye, and another image, e.g., a vertical grating, is shown to the corresponding location in the right eye. In spite of the constant visual stimulus, observers consciously see the horizontal grating alternate every few seconds with the vertical one. The brain does not allow for the simultaneous perception of both images. Logothetis and colleagues[16]recorded a variety of visual cortical areas in awake macaque monkeys performing a binocular rivalry task. Macaque monkeys can be trained to report whether they see the left or the right image. The distribution of the switching times and the way in which changing the contrast in one eye affects these leaves little doubt that monkeys and humans experience the same basic phenomenon. 
In the primary visual cortex (V1) only a small fraction of cells weakly modulated their response as a function of the percept of the monkey while most cells responded to one or the other retinal stimulus with little regard to what the animal perceived at the time. But in a high-level cortical area such as the inferior temporal cortex along theventral streamalmost all neurons responded only to the perceptually dominant stimulus, so that a "face" cell only fired when the animal indicated that it saw the face and not the pattern presented to the other eye. This implies that NCC involve neurons active in the inferior temporal cortex: it is likely that specific reciprocal actions of neurons in the inferior temporal and parts of the prefrontal cortex are necessary. A number offMRIexperiments that have exploited binocular rivalry and related illusions to identify the hemodynamic activity underlying visual consciousness in humans demonstrate quite conclusively that activity in the upper stages of the ventral pathway (e.g., thefusiform face areaand theparahippocampal place area) as well as in early regions, including V1 and the lateral geniculate nucleus (LGN), follow the percept and not the retinal stimulus.[17]Further, a number of fMRI[18][19]and DTI experiments[20]suggest V1 is necessary but not sufficient for visual consciousness.[21] In a related perceptual phenomenon,flash suppression, the percept associated with an image projected into one eye is suppressed by flashing another image into the other eye while the original image remains. Its methodological advantage over binocular rivalry is that the timing of the perceptual transition is determined by an external trigger rather than by an internal event. The majority of cells in the inferior temporal cortex and the superior temporal sulcus of monkeys trained to report their percept during flash suppression follow the animal's percept: when the cell's preferred stimulus is perceived, the cell responds. If the picture is still present on the retina but is perceptually suppressed, the cell falls silent, even though primary visual cortex neurons fire.[22][23]Single-neuron recordings in the medial temporal lobe of epilepsy patients during flash suppression likewise demonstrate abolishment of response when the preferred stimulus is present but perceptually masked.[24] Given the absence of any accepted criterion of the minimal neuronal correlates necessary for consciousness, the distinction between a persistently vegetative patient who shows regular sleep-wave transitions and may be able to move or smile, and a minimally conscious patient who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness, is often difficult. In global anesthesia the patient should not experience psychological trauma but the level of arousal should be compatible with clinical exigencies. 
Blood-oxygen-level-dependentfMRIhave demonstrated normal patterns of brain activity in a patient in a vegetative state following a severe traumatic brain injury when asked to imagine playing tennis or visiting rooms in his/her house.[26]Differential brain imaging of patients with such global disturbances of consciousness (includingakinetic mutism) reveal that dysfunction in a widespread cortical network including medial and lateral prefrontal and parietal associative areas is associated with a global loss of awareness.[27]Impaired consciousness inepilepticseizures of thetemporal lobewas likewise accompanied by a decrease in cerebral blood flow in frontal and parietal association cortex and an increase in midline structures such as themediodorsal thalamus.[28] Relatively local bilateral injuries to midline (paramedian) subcortical structures can also cause a complete loss of awareness.[29]These structures thereforeenableand control brain arousal (as determined by metabolic or electrical activity) and are necessary neural correlates. One such example is the heterogeneous collection of more than two dozen nuclei on each side of the upper brainstem (pons, midbrain and in the posterior hypothalamus), collectively referred to as thereticular activating system(RAS). Their axons project widely throughout the brain. These nuclei – three-dimensional collections of neurons with their own cyto-architecture and neurochemical identity – release distinct neuromodulators such as acetylcholine, noradrenaline/norepinephrine, serotonin, histamine and orexin/hypocretin to control the excitability of the thalamus and forebrain, mediating alternation between wakefulness and sleep as well as general level of behavioral and brain arousal. After such trauma, however, eventually the excitability of the thalamus and forebrain can recover and consciousness can return.[30]Another enabling factor for consciousness are the five or moreintralaminar nuclei(ILN) of the thalamus. These receive input from many brainstem nuclei and project strongly, directly to the basal ganglia and, in a more distributed manner, into layer I of much of the neocortex. Comparatively small (1 cm3or less) bilateral lesions in the thalamic ILN completely knock out all awareness.[31] Many actions in response to sensory inputs are rapid, transient, stereotyped, and unconscious.[32]They could be thought of as cortical reflexes and are characterized by rapid and somewhat stereotyped responses that can take the form of rather complex automated behavior as seen, e.g., in complex partialepilepticseizures. These automated responses, sometimes calledzombie behaviors,[33]could be contrasted by a slower, all-purpose conscious mode that deals more slowly with broader, less stereotyped aspects of the sensory inputs (or a reflection of these, as in imagery) and takes time to decide on appropriate thoughts and responses. Without such a consciousness mode, a vast number of different zombie modes would be required to react to unusual events. A feature that distinguishes humans from most animals is that we are not born with an extensive repertoire of behavioral programs that would enable us to survive on our own ("physiological prematurity"). To compensate for this, we have an unmatched ability to learn, i.e., to consciously acquire such programs by imitation or exploration. Once consciously acquired and sufficiently exercised, these programs can become automated to the extent that their execution happens beyond the realms of our awareness. 
Take, as an example, the incredible fine motor skills exerted in playing a Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle along a curvy mountain road. Such complex behaviors are possible only because a sufficient number of the subprograms involved can be executed with minimal or even suspended conscious control. In fact, the conscious system may actually interfere somewhat with these automated programs.[34] From an evolutionary standpoint it clearly makes sense to have both automated behavioral programs that can be executed rapidly in a stereotyped and automated manner, and a slightly slower system that allows time for thinking and planning more complex behavior. This latter aspect may be one of the principal functions of consciousness. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes.[35][36]No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., aphilosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between functionFbeing performed by conscious organismOand non-conscious organismO*, it is unclear what adaptive advantage consciousness could provide.[37]As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was anexaptationarising as a consequence of other developments such as increases in brain size or cortical rearrangement.[38]Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired.[39]Several scholars includingPinker,Chomsky,Edelman, andLuriahave indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness. It seems possible that visual zombie modes in the cortex mainly use thedorsal streamin the parietal region.[32]However, parietal activity can affect consciousness by producing attentional effects on the ventral stream, at least under some circumstances. The conscious mode for vision depends largely on the early visual areas (beyond V1) and especially on the ventral stream. Seemingly complex visual processing (such as detecting animals in natural, cluttered scenes) can be accomplished by the human cortex within 130–150 ms,[40][41]far too brief for eye movements and conscious perception to occur. Furthermore, reflexes such as theoculovestibular reflextake place at even more rapid time-scales. It is quite plausible that such behaviors are mediated by a purely feed-forward moving wave of spiking activity that passes from the retina through V1, into V4, IT and prefrontal cortex, until it affects motorneurons in the spinal cord that control the finger press (as in a typical laboratory experiment). The hypothesis that the basic processing of information is feedforward is supported most directly by the short times (approx. 100 ms) required for a selective response to appear in IT cells. 
Conversely, conscious perception is believed to require more sustained, reverberatory neural activity, most likely via global feedback from frontal regions of neocortex back to sensory cortical areas[21] that builds up over time until it exceeds a critical threshold. At this point, the sustained neural activity rapidly propagates to parietal, prefrontal and anterior cingulate cortical regions, thalamus, claustrum and related structures that support short-term memory, multi-modality integration, planning, speech, and other processes intimately related to consciousness. Competition prevents more than one or a very small number of percepts from being simultaneously and actively represented. This is the core hypothesis of the global workspace theory of consciousness.[42][43]

In brief, while rapid but transient neural activity in the thalamo-cortical system can mediate complex behavior without conscious sensation, it is surmised that consciousness requires sustained but well-organized neural activity dependent on long-range cortico-cortical feedback.

The neurobiologist Christfried Jakob (1866–1956) argued that the only conditions which must have neural correlates are direct sensations and reactions; these are called "intonations".[citation needed]

Neurophysiological studies in animals have provided some insights into the neural correlates of conscious behavior. Vernon Mountcastle, in the early 1960s, set out to study this set of problems, which he termed "the Mind/Brain problem", by studying the neural basis of perception in the somatic sensory system. His labs at Johns Hopkins were among the first, along with Edward V. Evarts at NIH, to record neural activity from behaving monkeys. Struck by the elegance of S. S. Stevens' approach of magnitude estimation, Mountcastle's group discovered that three different modalities of somatic sensation shared one cognitive attribute: in all cases the firing rate of peripheral neurons was linearly related to the strength of the percept elicited.

More recently, Ken H. Britten, William T. Newsome, and C. Daniel Salzman have shown that in area MT of monkeys, neurons respond with variability that suggests they are the basis of decision making about direction of motion. They first showed that neuronal rates are predictive of decisions using signal detection theory, and then that stimulation of these neurons could predictably bias the decision. Such studies were followed by Ranulfo Romo in the somatic sensory system, confirming, using a different percept and brain area, that a small number of neurons in one brain area underlie perceptual decisions.

Other lab groups have followed Mountcastle's seminal work relating cognitive variables to neuronal activity with more complex cognitive tasks. Although monkeys cannot talk about their perceptions, behavioral tasks have been created in which animals make nonverbal reports, for example by producing hand movements. Many of these studies employ perceptual illusions as a way to dissociate sensations (i.e., the sensory information that the brain receives) from perceptions (i.e., how the consciousness interprets them). Neuronal patterns that represent perceptions rather than merely sensory input are interpreted as reflecting the neuronal correlate of consciousness. Using such designs, Nikos Logothetis and colleagues discovered perception-reflecting neurons in the temporal lobe. They created an experimental situation in which conflicting images were presented to different eyes (i.e., binocular rivalry).
Under such conditions, human subjects report bistable percepts: they perceive alternatively one or the other image. Logothetis and colleagues trained the monkeys to report with their arm movements which image they perceived. Temporal lobe neurons in Logothetis experiments often reflected what the monkeys' perceived. Neurons with such properties were less frequently observed in the primary visual cortex that corresponds to relatively early stages of visual processing. Another set of experiments using binocular rivalry in humans showed that certain layers of the cortex can be excluded as candidates of the neural correlate of consciousness. Logothetis and colleagues switched the images between eyes during the percept of one of the images. Surprisingly the percept stayed stable. This means that the conscious percept stayed stable and at the same time the primary input to layer 4, which is the input layer, in the visual cortex changed. Therefore, layer 4 can not be a part of the neural correlate of consciousness.Mikhail Lebedevand their colleagues observed a similar phenomenon in monkey prefrontal cortex. In their experiments monkeys reported the perceived direction of visual stimulus movement (which could be an illusion) by making eye movements. Some prefrontal cortex neurons represented actual and some represented perceived displacements of the stimulus. Observation of perception related neurons in prefrontal cortex is consistent with the theory ofChristof KochandFrancis Crickwho postulated that neural correlate of consciousness resides in prefrontal cortex. Proponents of distributed neuronal processing may likely dispute the view that consciousness has a precise localization in the brain. Francis Crickwrote a popular book, "The Astonishing Hypothesis", whose thesis is that the neural correlate for consciousness lies in our nerve cells and their associated molecules. Crick and his collaboratorChristof Koch[44]have sought to avoid philosophical debates that are associated with the study of consciousness, by emphasizing the search for "correlation" and not "causation".[needs update] There is much room for disagreement about the nature of this correlate (e.g., does it require synchronous spikes of neurons in different regions of the brain? Is the co-activation of frontal or parietal areas necessary?). The philosopherDavid Chalmersmaintains that a neural correlate of consciousness, unlike other correlates such as for memory, will fail to offer a satisfactory explanation of the phenomenon; he calls this thehard problem of consciousness.[45][46]
https://en.wikipedia.org/wiki/Neural_correlate
Neural decodingis aneurosciencefield concerned with the hypothetical reconstruction of sensory and other stimuli from information that has already been encoded and represented in thebrainbynetworksofneurons.[1]Reconstruction refers to the ability of the researcher to predict what sensory stimuli the subject is receiving based purely on neuronaction potentials. Therefore, the main goal of neural decoding is to characterize how theelectrical activityof neurons elicit activity and responses in the brain.[2] This article specifically refers to neural decoding as it pertains to themammalianneocortex. When looking at a picture, people's brains are constantly making decisions about what object they are looking at, where they need to move their eyes next, and what they find to be the most salient aspects of the input stimulus. As these images hit the back of the retina, these stimuli are converted from varying wavelengths to a series of neural spikes calledaction potentials. These patterns of action potentials are different for different objects and different colors; we therefore say that the neurons are encoding objects and colors by varying their spike rates or temporal patterns. Now, if someone were to probe the brain by placingelectrodesin theprimary visual cortex, they may find what appears to be random electrical activity. These neurons are actually firing in response to the lower level features of visual input, possibly the edges of a picture frame. This highlights the crux of the neural decoding hypothesis: that it is possible to reconstruct a stimulus from the response of the ensemble of neurons that represent it. In other words, it is possible to look at spike train data and say that the person or animal being recorded is looking at a red ball. With the recent breakthrough in large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and already provided the first glimpse into the real-time neural code of memory traces as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation.[3][4]Neuroscientists have initiated a large-scale brain activity mapping or brain decoding project[5]to construct brain-wide neural codes. Implicit about the decoding hypothesis is the assumption that neural spiking in the brain somehow represents stimuli in the external world. The decoding of neural data would be impossible if the neurons were firing randomly: nothing would be represented. This process of decoding neural data forms a loop withneural encoding. First, the organism must be able to perceive a set of stimuli in the world – say a picture of a hat. Seeing the stimuli must result in some internal learning: the encoding stage. After varying the range of stimuli that is presented to the observer, we expect the neurons to adapt to the statistical properties of thesignals, encoding those that occur most frequently:[6]theefficient-coding hypothesis. Now neural decoding is the process of taking these statistical consistencies, astatistical modelof the world, and reproducing the stimuli. This may map to the process of thinking and acting, which in turn guide what stimuli we receive, and thus, completing the loop. In order to build a model of neural spike data, one must both understand how information is originally stored in the brain and how this information is used at a later point in time. Thisneural codingand decoding loop is a symbiotic relationship and the crux of the brain's learning algorithm. 
Furthermore, the processes that underlie neural decoding and encoding are very tightly coupled and may lead to varying levels of representative ability.[7][8] Much of the neural decoding problem depends on the spatial resolution of the data being collected. The number of neurons needed to reconstruct the stimulus with reasonable accuracy depends on the means by which data is collected and the area which is being recorded. For example, rods and cones (which respond to colors of small visual areas) in the retina may require more recordings than simple cells (which respond to orientation of lines) in the primary visual cortex. Previous recording methods relied on stimulating single neurons over a repeated series of tests in order to generalize this neuron's behavior.[9] New techniques such as high-density multi-electrode array recordings and multi-photon calcium imaging techniques now make it possible to record from upwards of a few hundred neurons. Even with better recording techniques, the focus of these recordings must be on an area of the brain that is both manageable and qualitatively understood. Many studies look at spike train data gathered from the ganglion cells in the retina, since this area has the benefits of being strictly feedforward, retinotopic, and amenable to current recording granularities. The duration, intensity, and location of the stimulus can be controlled to sample, for example, a particular subset of ganglion cells within a structure of the visual system.[10] Other studies use spike trains to evaluate the discriminatory ability of non-visual senses such as rat facial whiskers[11] and the olfactory coding of moth pheromone receptor neurons.[12] Even with ever-improving recording techniques, one will always run into the limited sampling problem: given a limited number of recording trials, it is impossible to completely account for the error associated with noisy data obtained from stochastically functioning neurons. (For example, a neuron's electric potential fluctuates around its resting potential due to a constant influx and efflux of sodium and potassium ions.) Therefore, it is not possible to perfectly reconstruct a stimulus from spike data. Luckily, even with noisy data, the stimulus can still be reconstructed within acceptable error bounds.[13] Timescales and frequencies of stimuli being presented to the observer are also of importance to decoding the neural code. Quicker timescales and higher frequencies demand faster and more precise responses in neural spike data. In humans, millisecond precision has been observed throughout the visual cortex, the retina,[14] and the lateral geniculate nucleus, so one would suspect this to be the appropriate measuring frequency. This has been confirmed in studies that quantify the responses of neurons in the lateral geniculate nucleus to white-noise and naturalistic movie stimuli.[15] At the cellular level, spike-timing-dependent plasticity operates at millisecond timescales.[16] Therefore models seeking biological relevance should be able to perform at these temporal scales. When decoding neural data, the arrival times of each spike, t_1, t_2, ..., t_n = {t_i}, and the probability of seeing a certain stimulus, P[s(t)], may be the extent of the available data. The prior distribution P[s(t)] defines an ensemble of signals, and represents the likelihood of seeing a stimulus in the world based on previous experience.
The spike times may also be drawn from a distribution P[{t_i}]; however, what we want to know is the probability distribution over a set of stimuli given a series of spike trains, P[s(t)|{t_i}], which is called the response-conditional ensemble. What remains is the characterization of the neural code by translating stimuli into spikes, P[{t_i}|s(t)]; the traditional approach to calculating this probability distribution has been to fix the stimulus and examine the responses of the neuron. Combining everything using Bayes' rule results in the simplified probabilistic characterization of neural decoding:

P[s(t)\,|\,\{t_i\}] = \frac{P[\{t_i\}\,|\,s(t)]\;P[s(t)]}{P[\{t_i\}]}

An area of active research consists of finding better ways of representing and determining P[{t_i}|s(t)].[17] The following are some such examples.

The simplest coding strategy is spike train number coding. This method assumes that the spike number is the most important quantification of spike train data. In spike train number coding, each stimulus is represented by a unique firing rate across the sampled neurons. The color red may be signified by 5 total spikes across the entire set of neurons, while the color green may be 10 spikes; each spike is pooled together into an overall count. This is represented by:

P(r|s) = \prod_{i,j} P(n_{ij}|s)

where r = n, the total number of spikes, n_{ij} is the number of spikes of neuron i at stimulus presentation time j, and s is the stimulus.

Adding a small temporal component results in the spike timing coding strategy. Here, the main quantity measured is the number of spikes that occur within a predefined window of time T. This method adds another dimension to the previous one. This timing code is given by:

P(r|s) = \prod_{l}\left[\prod_{i,j} v_{i}(t_{ijl}|s)\,dt\right]\exp\left[-\sum_{i}\int_{0}^{T} v_{i}(t|s)\,dt\right]

where t_{ijl} is the jth spike on the lth presentation of neuron i, v_{i}(t|s) is the firing rate of neuron i at time t, and 0 to T is the start to stop time of each trial.

Temporal correlation code, as the name states, adds correlations between individual spikes. This means that the time between a spike t_i and its preceding spike t_{i-1} is included. This is given by:

P(r|s) = \prod_{l}\left[\prod_{i,j} v_{i}(t_{ijl},\tau(t_{ijl})|s)\,dt\right]\exp\left[-\sum_{i}\int_{0}^{T} v_{i}(t,\tau(t)|s)\,dt\right]

where \tau(t) is the time interval between a neuron's spike and the one preceding it.

Another description of neural spike train data uses the Ising model borrowed from the physics of magnetic spins.
Because neural spike trains are effectively binarized (either on or off) at small time scales (10 to 20 ms), the Ising model is able to effectively capture the present pairwise correlations,[18] and is given by:

P(r|s) = \frac{1}{Z(s)}\exp\left(\sum_{i} h_{i}(s)\,r_{i} + \frac{1}{2}\sum_{i\neq j} J_{ij}(s)\,r_{i} r_{j}\right)

where r = (r_1, r_2, ..., r_n)^T is the vector of binary responses (r_i being the response of neuron i), h_i(s) is the external field function, J_{ij}(s) is the pairwise coupling function, and Z(s) is the partition function.

In addition to the probabilistic approach, agent-based models exist that capture the spatial dynamics of the neural system under scrutiny. One such model is hierarchical temporal memory, which is a machine learning framework that organizes the visual perception problem into a hierarchy of interacting nodes (neurons). The connections between nodes on the same level and lower levels are termed synapses, and their strengths are subsequently learned. Synapse strengths modulate learning and are altered based on the temporal and spatial firing of nodes in response to input patterns.[19][20] While it is possible to transform the firing rates of these modeled neurons into the probabilistic and mathematical frameworks described above, agent-based models provide the ability to observe the behavior of the entire population of modeled neurons, allowing researchers to circumvent the limitations implicit in lab-based recording techniques. Because this approach relies on modeling biological systems, error arises in the assumptions made by the researcher and in the data used in parameter estimation. The advancement in our understanding of neural decoding benefits the development of brain–machine interfaces, prosthetics[21] and the understanding of neurological disorders such as epilepsy.[22]
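The probabilistic characterization above can be made concrete with a small numerical sketch. The following Python fragment is illustrative only: the three-neuron "tuning" table, the uniform prior, and the independent-Poisson likelihood are hypothetical assumptions standing in for conditional distributions that would, in practice, be estimated from repeated stimulus presentations. It applies Bayes' rule to spike counts to obtain a posterior over two candidate stimuli.

import numpy as np

def poisson_log_likelihood(counts, rates):
    """log P(r | s) for independent Poisson neurons, up to an additive constant."""
    counts = np.asarray(counts, dtype=float)
    rates = np.asarray(rates, dtype=float)
    return np.sum(counts * np.log(rates) - rates)

def decode(counts, tuning, prior):
    """Posterior P(s | r) over stimuli via Bayes' rule."""
    log_post = np.array([poisson_log_likelihood(counts, tuning[s]) + np.log(prior[s])
                         for s in range(len(tuning))])
    log_post -= log_post.max()          # numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical example: 3 neurons, 2 stimuli ("red" vs. "green").
tuning = np.array([[2.0, 5.0, 1.0],     # mean spike counts given stimulus 0
                   [6.0, 1.0, 3.0]])    # mean spike counts given stimulus 1
prior = np.array([0.5, 0.5])            # P[s(t)], assumed uniform here
observed = np.array([5, 2, 3])          # spike counts on one trial
print(decode(observed, tuning, prior))  # posterior probabilities for the two stimuli

Richer likelihoods, such as the timing, temporal-correlation, or Ising models above, would replace the independent-Poisson term while the Bayesian structure stays the same.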
https://en.wikipedia.org/wiki/Neural_decoding
Neural oscillations, orbrainwaves, are rhythmic or repetitive patterns of neural activity in thecentral nervous system.Neural tissuecan generateoscillatory activityin many ways, driven either by mechanisms within individualneuronsor by interactions between neurons. In individual neurons, oscillations can appear either as oscillations inmembrane potentialor as rhythmic patterns ofaction potentials, which then produce oscillatory activation ofpost-synapticneurons. At the level ofneural ensembles, synchronized activity of large numbers of neurons can give rise tomacroscopicoscillations, which can be observed in anelectroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations isalpha activity. Neural oscillations in humans were observed by researchers as early as 1924 (byHans Berger). More than 50 years later, intrinsic oscillatory behavior was encountered in vertebrate neurons, but its functional role is still not fully understood.[3]The possible roles of neural oscillations includefeature binding,information transfer mechanismsand thegeneration of rhythmic motor output. Over the last decades more insight has been gained, especially with advances inbrain imaging. A major area of research inneuroscienceinvolves determining how oscillations are generated and what their roles are. Oscillatory activity in the brain is widely observed at differentlevels of organizationand is thought to play a key role in processing neural information. Numerous experimental studies support a functional role of neural oscillations; a unified interpretation, however, is still lacking. Richard Catondiscovered electrical activity in the cerebral hemispheres of rabbits and monkeys and presented his findings in 1875.[4]Adolf Beckpublished in 1890 his observations of spontaneous electrical activity of the brain of rabbits and dogs that included rhythmic oscillations altered by light, detected with electrodes directly placed on the surface of the brain.[5]BeforeHans Berger,Vladimir Vladimirovich Pravdich-Neminskypublished the first animal EEG and theevoked potentialof a dog.[6] Neural oscillations are observed throughout the central nervous system at all levels, and includespike trains,local field potentialsand large-scaleoscillationswhich can be measured byelectroencephalography(EEG). In general, oscillations can be characterized by theirfrequency,amplitudeandphase. These signal properties can be extracted from neural recordings usingtime-frequency analysis. In large-scale oscillations, amplitude changes are considered to result from changes in synchronization within aneural ensemble, also referred to as local synchronization. In addition to local synchronization, oscillatory activity of distant neural structures (single neurons or neural ensembles) can synchronize. Neural oscillations and synchronization have been linked to many cognitive functions such as information transfer, perception, motor control and memory.[7][8][9][10] The opposite of neuron synchronization is neural isolation, which is when electrical activity of neurons is not temporally synchronized. This is when the likelihood of the neuron to reach its threshold potential for the signal to propagate to the next neuron decreases. 
This phenomenon is typically observed as the spectral intensity decreases from the summation of these neurons firing, which can be utilized to differentiate cognitive function or neural isolation. However, new non-linear methods have been used that couple temporal and spectral entropic relationships simultaneously to characterize how neurons are isolated, (the signal's inability to propagate to adjacent neurons), an indicator of impairment (e.g., hypoxia).[1] Neural oscillations have been most widely studied in neural activity generated by large groups of neurons. Large-scale activity can be measured by techniques such as EEG. In general, EEG signals have a broad spectral content similar topink noise, but also reveal oscillatory activity in specific frequency bands. The first discovered and best-known frequency band isalpha activity(8–12Hz)[11][12][13]that can be detected from theoccipital lobeduring relaxed wakefulness and which increases when the eyes are closed.[14]Other frequency bands are:delta(1–4 Hz),theta(4–8 Hz),beta(13–30 Hz), lowgamma(30–70 Hz),[15]and high gamma (70–150 Hz) frequency bands. Faster rhythms such as gamma activity have been linked to cognitive processing. Indeed, EEG signals change dramatically during sleep. In fact, different sleep stages are commonly characterized by their spectral content.[16]Consequently, neural oscillations have been linked to cognitive states, such asawarenessandconsciousness.[17][18][15][13] Although neural oscillations in human brain activity are mostly investigated using EEG recordings, they are also observed using more invasive recording techniques such assingle-unit recordings. Neurons can generate rhythmic patterns ofaction potentialsor spikes. Some types of neurons have the tendency to fire at particular frequencies, either asresonators[19]or asintrinsic oscillators.[2]Burstingis another form of rhythmic spiking. Spiking patterns are considered fundamental forinformation codingin the brain. Oscillatory activity can also be observed in the form ofsubthreshold membrane potential oscillations(i.e. in the absence of action potentials).[20]If numerous neurons spike insynchrony, they can give rise to oscillations inlocal field potentials. Quantitative models can estimate the strength of neural oscillations in recorded data.[21] Neural oscillations are commonly studied within a mathematical framework and belong to the field ofneurodynamics, an area of research in thecognitive sciencesthat places a strong focus on the dynamic character of neural activity in describingbrainfunction.[22]It considers the brain adynamical systemand usesdifferential equationsto describe how neural activity evolves over time. In particular, it aims to relate dynamic patterns of brain activity to cognitive functions such as perception and memory. In very abstract form, neural oscillations can be analyzedanalytically.[23][24]When studied in a more physiologically realistic setting, oscillatory activity is generally studied usingcomputer simulationsof acomputational model. The functions of neural oscillations are wide-ranging and vary for different types of oscillatory activity. Examples are the generation of rhythmic activity such as aheartbeatand theneural bindingof sensory features in perception, such as the shape and color of an object. Neural oscillations also play an important role in manyneurological disorders, such as excessive synchronization duringseizureactivity inepilepsy, ortremorin patients withParkinson's disease. 
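The conventional frequency bands quoted above can be quantified directly from a recording. The sketch below is a minimal illustration: it estimates band-limited power from a single synthetic EEG channel (a 10 Hz sinusoid plus noise standing in for alpha activity); the sampling rate and the synthetic signal are assumptions made for the example, while the band edges follow the ranges given earlier.

import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic "alpha" + noise

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # power spectral density estimate

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over one frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 70)}
for name, (lo, hi) in bands.items():
    print(name, band_power(freqs, psd, lo, hi))

In practice it is usually the relative change of such band-limited power across conditions or over time, rather than its absolute value, that is interpreted.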
Oscillatory activity can also be used to control external devices such as abrain–computer interface.[25] Oscillatory activity is observed throughout thecentral nervous systemat all levels of organization. Three different levels have been widely recognized: the micro-scale (activity of a single neuron), the meso-scale (activity of a local group of neurons) and the macro-scale (activity of different brain regions).[26] Neurons generateaction potentialsresulting from changes in the electric membrane potential. Neurons can generate multiple action potentials in sequence forming so-called spike trains. These spike trains are the basis forneural codingand information transfer in the brain. Spike trains can form all kinds of patterns, such as rhythmic spiking andbursting, and often display oscillatory activity.[27]Oscillatory activity in single neurons can also be observed insub-threshold fluctuationsin membrane potential. These rhythmic changes in membrane potential do not reach the critical threshold and therefore do not result in an action potential. They can result from postsynaptic potentials from synchronous inputs or from intrinsic properties of neurons. Neuronal spiking can be classified by its activity pattern. The excitability of neurons can be subdivided in Class I and II. Class I neurons can generate action potentials with arbitrarily low frequency depending on the input strength, whereas Class II neurons generate action potentials in a certain frequency band, which is relatively insensitive to changes in input strength.[19]Class II neurons are also more prone to display sub-threshold oscillations in membrane potential. A group of neurons can also generate oscillatory activity. Through synaptic interactions, thefiring patternsof different neurons may become synchronized and the rhythmic changes in electric potential caused by their action potentials may accumulate (constructive interference). That is, synchronized firing patterns result in synchronized input into other cortical areas, which gives rise to large-amplitude oscillations of thelocal field potential. These large-scale oscillations can also be measured outside the scalp usingelectroencephalography(EEG) andmagnetoencephalography(MEG). The electric potentials generated by single neurons are far too small to be picked up outside the scalp, and EEG or MEG activity always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation.[28] Neurons in aneural ensemblerarely all fire at exactly the same moment, i.e. fully synchronized. Instead, the probability of firing is rhythmically modulated such that neurons are more likely to fire at the same time, which gives rise to oscillations in their mean activity. (See figure at top of page.) As such, the frequency oflarge-scaleoscillations does not need to match the firing pattern of individual neurons. Isolated cortical neurons fire regularly under certain conditions, but in the intact brain, cortical cells are bombarded by highly fluctuating synaptic inputs and typically fire seemingly at random. However, if the probability of a large group of neurons firing is rhythmically modulated at a common frequency, they will generate oscillations in the mean field. (See also figure at top of page.)[27] Neural ensembles can generate oscillatory activityendogenouslythrough local interactions between excitatory and inhibitory neurons. 
In particular, inhibitoryinterneuronsplay an important role in producing neural ensemble synchrony by generating a narrow window for effective excitation and rhythmically modulating the firing rate of excitatory neurons.[29] Neural oscillation can also arise from interactions between different brain areas coupled through the structuralconnectome.Time delaysplay an important role here. Because all brain areas are bidirectionally coupled, these connections between brain areas formfeedbackloops.Positive feedbackloops tend to cause oscillatory activity where frequency is inversely related to the delay time. An example of such a feedback loop is the connections between thethalamusandcortex– thethalamocortical radiations. This thalamocortical network is able to generate oscillatory activity known asrecurrent thalamo-cortical resonance.[30]The thalamocortical network plays an important role in the generation ofalpha activity.[31][32]In a whole-brain network model with realistic anatomical connectivity and propagation delays between brain areas, oscillations in thebeta frequency rangeemerge from the partial synchronisation of subsets of brain areas oscillating in the gamma-band (generated at the mesoscopic level).[33] Scientists have identified some intrinsicneuronal propertiesthat play an important role in generating membrane potential oscillations. In particular,voltage-gated ion channelsare critical in the generation of action potentials. The dynamics of these ion channels have been captured in the well-establishedHodgkin–Huxley modelthat describes how action potentials are initiated and propagated by means of a set of differential equations. Usingbifurcation analysis, different oscillatory varieties of these neuronal models can be determined, allowing for the classification of types of neuronal responses. The oscillatory dynamics of neuronal spiking as identified in the Hodgkin–Huxley model closely agree with empirical findings. In addition to periodic spiking,subthreshold membrane potential oscillations, i.e.resonancebehavior that does not result in action potentials, may also contribute to oscillatory activity by facilitating synchronous activity of neighboring neurons.[34][35] Like pacemaker neurons incentral pattern generators, subtypes of cortical cells fire bursts of spikes (brief clusters of spikes) rhythmically at preferred frequencies.[2]Bursting neurons have the potential to serve as pacemakers for synchronous network oscillations, and bursts of spikes may underlie or enhance neuronal resonance.[27]Many of these neurons can be considered intrinsic oscillators, namely, neurons that generate their oscillations intrinsically, as their oscillation frequencies can be modified by local applications of glutamate in-vivo.[36] Apart from intrinsic properties of neurons,biological neural networkproperties are also an important source of oscillatory activity. Neuronscommunicatewith one another via synapses and affect the timing of spike trains in the post-synaptic neurons. Depending on the properties of the connection, such as the coupling strength, time delay and whether coupling isexcitatoryorinhibitory, the spike trains of the interacting neurons may becomesynchronized.[37]Neurons are locally connected, forming small clusters that are calledneural ensembles. Certain network structures promote oscillatory activity at specific frequencies. 
For example, neuronal activity generated by two populations of interconnectedinhibitoryandexcitatorycells can show spontaneous oscillations that are described by theWilson-Cowan model. If a group of neurons engages in synchronized oscillatory activity, the neural ensemble can be mathematically represented as a single oscillator.[26]Different neural ensembles are coupled through long-range connections and form a network of weakly coupled oscillators at the next spatial scale. Weakly coupled oscillators can generate a range of dynamics including oscillatory activity.[38]Long-range connections between different brain structures, such as thethalamusand thecortex(seethalamocortical oscillation), involve time-delays due to the finiteconduction velocityof axons. Because most connections are reciprocal, they formfeed-back loopsthat support oscillatory activity. Oscillations recorded from multiple cortical areas can become synchronized to formlarge-scale brain networks, whose dynamics and functional connectivity can be studied by means ofspectral analysisandGranger causalitymeasures.[39]Coherent activity of large-scale brain activity may form dynamic links between brain areas required for the integration of distributed information.[18] Microglia– the major immune cells of the brain – have been shown to play an important role in shaping network connectivity, and thus, influencing neuronal network oscillations bothex vivoandin vivo.[40] In addition to fast directsynaptic interactionsbetween neurons forming a network, oscillatory activity is regulated byneuromodulatorson a much slower time scale. That is, the concentration levels of certain neurotransmitters are known to regulate the amount of oscillatory activity. For instance,GABAconcentration has been shown to be positively correlated with frequency of oscillations in induced stimuli.[41]A number ofnucleiin thebrainstemhave diffuse projections throughout the brain influencing concentration levels of neurotransmitters such asnorepinephrine,acetylcholineandserotonin. These neurotransmitter systems affect the physiological state, e.g.,wakefulnessorarousal, and have a pronounced effect on amplitude of different brain waves, such as alpha activity.[42] Oscillations can often be described and analyzed using mathematics. Mathematicians have identified severaldynamicalmechanisms that generate rhythmicity. Among the most important areharmonic(linear) oscillators,limit cycleoscillators, and delayed-feedbackoscillators.[43]Harmonic oscillations appear very frequently in nature—examples are sound waves, the motion of apendulum, and vibrations of every sort. They generally arise when a physical system is perturbed by a small degree from aminimum-energy state, and are well understood mathematically. Noise-driven harmonic oscillators realistically simulate alpha rhythm in the waking EEG as well as slow waves and spindles in the sleep EEG. SuccessfulEEG analysisalgorithms were based on such models. Several other EEG components are better described by limit-cycle or delayed-feedback oscillations. Limit-cycle oscillations arise from physical systems that show large deviations fromequilibrium, whereas delayed-feedback oscillations arise when components of a system affect each other after significant time delays. Limit-cycle oscillations can be complex but there are powerful mathematical tools for analyzing them; the mathematics of delayed-feedback oscillations is primitive in comparison. 
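As a concrete illustration of the Wilson–Cowan excitatory–inhibitory model introduced above, and of the limit-cycle behaviour just described, the following sketch integrates the two population rate equations with a simple Euler scheme. The parameter values are common textbook choices for an oscillatory regime and are assumptions for illustration, not fits to any particular cortical circuit.

import numpy as np

def S(x, a, theta):
    # Sigmoidal population response function.
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(T=100.0, dt=0.01, c1=16.0, c2=12.0, c3=15.0, c4=3.0,
             a_e=1.3, th_e=4.0, a_i=2.0, th_i=3.7, P=1.25, Q=0.0, tau=1.0):
    """Euler integration of the two Wilson-Cowan rate equations."""
    n = int(T / dt)
    E, I = 0.1, 0.05
    trace = np.empty((n, 2))
    for k in range(n):
        dE = (-E + (1.0 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
        dI = (-I + (1.0 - I) * S(c3 * E - c4 * I + Q, a_i, th_i)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace[k] = (E, I)
    return trace

trace = simulate()
# Print a few late samples; in an oscillatory parameter regime the E/I rates
# keep cycling instead of settling to a fixed point.
print(trace[-3:])

Plotting the two columns of the returned trace against time should reveal ongoing, roughly periodic excitatory and inhibitory population activity for parameters in the oscillatory regime.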
Linear oscillators and limit-cycle oscillators qualitatively differ in terms of how they respond to fluctuations in input. In a linear oscillator, the frequency is more or less constant but the amplitude can vary greatly. In a limit-cycle oscillator, the amplitude tends to be more or less constant but the frequency can vary greatly. Aheartbeatis an example of a limit-cycle oscillation in that the frequency of beats varies widely, while each individual beat continues to pump about the same amount of blood. Computational modelsadopt a variety of abstractions in order to describe complex oscillatory dynamics observed in brain activity. Many models are used in the field, each defined at a different level of abstraction and trying to model different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of how the dynamics ofneural circuitryarise from interactions between individual neurons, to models of how behaviour can arise from abstract neural modules that represent complete subsystems. A model of a biological neuron is a mathematical description of the properties of nerve cells, or neurons, that is designed to accurately describe and predict its biological processes. One of the most successful neuron models is the Hodgkin–Huxley model, for whichHodgkinandHuxleywon the 1963 Nobel Prize in physiology or medicine. The model is based on data from thesquid giant axonand consists of nonlinear differential equations that approximate the electrical characteristics of a neuron, including the generation and propagation ofaction potentials. The model is so successful at describing these characteristics that variations of its "conductance-based" formulation continue to be utilized in neuron models over a half a century later.[44] The Hodgkin–Huxley model is too complicated to understand using classical mathematical techniques, so researchers often turn to simplifications such as theFitzHugh–Nagumo modeland theHindmarsh–Rose model, or highly idealized neuron models such as the leaky integrate-and-fire neuron, originally developed by Lapique in 1907.[45][46]Such models only capture salient membrane dynamics such as spiking orburstingat the cost of biophysical detail, but are more computationally efficient, enabling simulations of largerbiological neural networks. A neural network model describes a population of physically interconnected neurons or a group of disparate neurons whose inputs or signalling targets define a recognizable circuit. These models aim to describe how the dynamics of neural circuitry arise from interactions between individual neurons. Local interactions between neurons can result in the synchronization of spiking activity and form the basis of oscillatory activity. In particular, models of interactingpyramidal cellsand inhibitoryinterneuronshave been shown to generate brain rhythms such asgamma activity.[47]Similarly, it was shown that simulations of neural networks with a phenomenological model for neuronal response failures can predict spontaneous broadband neural oscillations.[48] Neural field models are another important tool in studying neural oscillations and are a mathematical framework describing evolution of variables such as mean firing rate in space and time. In modeling the activity of large numbers of neurons, the central idea is to take the density of neurons to thecontinuum limit, resulting in spatially continuousneural networks. 
Instead of modelling individual neurons, this approach approximates a group of neurons by its average properties and interactions. It is based on themean field approach, an area ofstatistical physicsthat deals with large-scale systems. Models based on these principles have been used to provide mathematical descriptions of neural oscillations and EEG rhythms. They have for instance been used to investigate visual hallucinations.[50] TheKuramoto modelof coupled phase oscillators[51]is one of the most abstract and fundamental models used to investigate neural oscillations and synchronization. It captures the activity of a local system (e.g., a single neuron or neural ensemble) by its circularphasealone and hence ignores the amplitude of oscillations (amplitude is constant).[52]Interactions amongst these oscillators are introduced by a simple algebraic form (such as asinefunction) and collectively generate a dynamical pattern at the global scale. The Kuramoto model is widely used to study oscillatory brain activity, and several extensions have been proposed that increase its neurobiological plausibility, for instance by incorporating topological properties of local cortical connectivity.[53]In particular, it describes how the activity of a group of interacting neurons can become synchronized and generate large-scale oscillations. Simulations using the Kuramoto model with realistic long-range cortical connectivity and time-delayed interactions reveal the emergence of slow patterned fluctuations that reproduce resting-stateBOLDfunctional maps, which can be measured usingfMRI.[54] Both single neurons and groups of neurons can generate oscillatory activity spontaneously. In addition, they may show oscillatory responses to perceptual input or motor output. Some types of neurons will fire rhythmically in the absence of any synaptic input. Likewise, brain-wide activity reveals oscillatory activity while subjects do not engage in any activity, so-calledresting-state activity. These ongoing rhythms can change in different ways in response to perceptual input or motor output. Oscillatory activity may respond by increases or decreases in frequency and amplitude or show a temporary interruption, which is referred to as phase resetting. In addition, external activity may not interact with ongoing activity at all, resulting in an additive response. Spontaneous activity isbrainactivity in the absence of an explicit task, such as sensory input or motor output, and hence also referred to as resting-state activity. It is opposed to induced activity, i.e. brain activity that is induced by sensory stimuli or motor responses. The termongoing brain activityis used inelectroencephalographyandmagnetoencephalographyfor those signal components that are not associated with the processing of astimulusor the occurrence of specific other events, such as moving a body part, i.e. events that do not formevoked potentials/evoked fields, or induced activity. Spontaneous activity is usually considered to benoiseif one is interested in stimulus processing; however, spontaneous activity is considered to play a crucial role during brain development, such as in network formation and synaptogenesis. Spontaneous activity may be informative regarding the current mental state of the person (e.g. wakefulness, alertness) and is often used in sleep research. Certain types of oscillatory activity, such asalpha waves, are part of spontaneous activity. 
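Returning briefly to the Kuramoto model described above, a minimal simulation with global (all-to-all) coupling and no transmission delays shows how oscillators with different natural frequencies can nevertheless synchronize. The oscillator count, coupling strength, and frequency spread below are arbitrary illustrative choices, not values from the cited studies.

import numpy as np

# Kuramoto model: N phase oscillators with natural frequencies omega,
# globally coupled with strength K. The order parameter R in [0, 1]
# measures how synchronized the phases are.
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 3.0, 0.01, 5000
omega = rng.normal(loc=10.0, scale=1.0, size=N)   # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, size=N)         # random initial phases

for _ in range(steps):
    # Mean-field form of the coupling: K * R * sin(psi - theta_i).
    z = np.mean(np.exp(1j * theta))
    R, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * R * np.sin(psi - theta))

# For coupling well above the critical value implied by this frequency spread,
# R should end up much larger than the incoherent baseline (~1/sqrt(N)).
print("final order parameter R =", np.abs(np.mean(np.exp(1j * theta))))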
Statistical analysis of power fluctuations of alpha activity reveals a bimodal distribution, i.e. a high- and low-amplitude mode, and hence shows that resting-state activity does not just reflect anoiseprocess.[55] In case of fMRI, spontaneous fluctuations in theblood-oxygen-level dependent(BOLD) signal reveal correlation patterns that are linked to resting state networks, such as thedefault network.[56]The temporal evolution of resting state networks is correlated with fluctuations of oscillatory EEG activity in different frequency bands.[57] Ongoing brain activity may also have an important role in perception, as it may interact with activity related to incoming stimuli. Indeed,EEGstudies suggest that visual perception is dependent on both the phase and amplitude of cortical oscillations. For instance, the amplitude and phase of alpha activity at the moment of visual stimulation predicts whether a weak stimulus will be perceived by the subject.[58][59][60] In response to input, a neuron orneuronal ensemblemay change the frequency at which it oscillates, thus changing therateat which it spikes. Often, a neuron's firing rate depends on the summed activity it receives. Frequency changes are also commonly observed incentral pattern generatorsand directly relate to the speed ofmotor activities, such as step frequency in walking. However, changes inrelativeoscillation frequency between differentbrain areasis not so common because the frequency of oscillatory activity is often related to the time delays between brain areas. Next to evoked activity, neural activity related to stimulus processing may result in induced activity. Induced activity refers to modulation in ongoing brain activity induced by processing of stimuli or movement preparation. Hence, they reflect an indirect response in contrast to evoked responses. A well-studied type of induced activity is amplitude change in oscillatory activity. For instance,gamma activityoften increases during increased mental activity such as during object representation.[61]Because induced responses may have different phases across measurements and therefore would cancel out during averaging, they can only be obtained usingtime-frequency analysis. Induced activity generally reflects the activity of numerous neurons: amplitude changes in oscillatory activity are thought to arise from the synchronization of neural activity, for instance by synchronization of spike timing or membrane potential fluctuations of individual neurons. Increases in oscillatory activity are therefore often referred to as event-related synchronization, while decreases are referred to as event-related desynchronization.[62] Phase resetting occurs when input to a neuron or neuronal ensemble resets the phase of ongoing oscillations.[63]It is very common in single neurons where spike timing is adjusted to neuronal input (a neuron may spike at a fixed delay in response to periodic input, which is referred to as phase locking[19]) and may also occur in neuronal ensembles when the phases of their neurons are adjusted simultaneously. Phase resetting is fundamental for the synchronization of different neurons or different brain regions[18][38]because the timing of spikes can become phase locked to the activity of other neurons. 
Phase resetting also permits the study of evoked activity, a term used inelectroencephalographyandmagnetoencephalographyfor responses in brain activity that are directly related tostimulus-related activity.Evoked potentialsandevent-related potentialsare obtained from an electroencephalogram by stimulus-locked averaging, i.e. averaging different trials at fixed latencies around the presentation of a stimulus. As a consequence, those signal components that are the same in each single measurement are conserved and all others, i.e. ongoing or spontaneous activity, are averaged out. That is, event-related potentials only reflect oscillations in brain activity that arephase-locked to the stimulus or event. Evoked activity is often considered to be independent from ongoing brain activity, although this is an ongoing debate.[64][65] It has recently been proposed that even if phases are not aligned across trials, induced activity may still causeevent-related potentialsbecause ongoing brain oscillations may not be symmetric and thus amplitude modulations may result in a baseline shift that does not average out.[66][67]This model implies that slow event-related responses, such as asymmetric alpha activity, could result from asymmetric brain oscillation amplitude modulations, such as an asymmetry of the intracellular currents that propagate forward and backward down the dendrites.[68]Under this assumption, asymmetries in the dendritic current would cause asymmetries in oscillatory activity measured by EEG and MEG, since dendritic currents in pyramidal cells are generally thought to generate EEG and MEG signals that can be measured at the scalp.[69] Cross-frequency coupling (CFC) describes the coupling (statistical correlation) between a slow wave and a fast wave. There are many kinds, generally written as A-B coupling, meaning the A of a slow wave is coupled with the B of a fast wave. For example, phase–amplitude coupling is where the phase of a slow wave is coupled with the amplitude of a fast wave.[70] Thetheta-gamma codeis a coupling between theta wave and gamma wave in the hippocampal network. During a theta wave, 4 to 8 non-overlapping neuron ensembles are activated in sequence. This has been hypothesized to form a neural code representing multiple items in a temporal frame[71][72] Neural synchronization can be modulated by task constraints, such asattention, and is thought to play a role infeature binding,[73]neuronal communication,[7]andmotor coordination.[9]Neuronal oscillations became a hot topic inneurosciencein the 1990s when the studies of the visual system of the brain by Gray, Singer and others appeared to support theneural bindinghypothesis.[74]According to this idea, synchronous oscillations in neuronal ensembles bind neurons representing different features of an object. For example, when a person looks at a tree, visual cortex neurons representing the tree trunk and those representing the branches of the same tree would oscillate in synchrony to form a single representation of the tree. This phenomenon is best seen inlocal field potentialswhich reflect the synchronous activity of local groups of neurons, but has also been shown inEEGandMEGrecordings providing increasing evidence for a close relation between synchronous oscillatory activity and a variety of cognitive functions such as perceptual grouping[73]and attentional top-down control.[15][13][12] Cells in thesinoatrial node, located in theright atriumof the heart, spontaneouslydepolarizeapproximately 100 times per minute. 
Although all of the heart's cells have the ability to generate action potentials that trigger cardiac contraction, the sinoatrial node normally initiates it, simply because it generates impulses slightly faster than the other areas. Hence, these cells generate the normalsinus rhythmand are called pacemaker cells as they directly control theheart rate. In the absence of extrinsic neural and hormonal control, cells in the SA node will rhythmically discharge. The sinoatrial node is richly innervated by theautonomic nervous system, which up or down regulates the spontaneous firing frequency of the pacemaker cells. Synchronized firing of neurons also forms the basis of periodic motor commands for rhythmic movements. These rhythmic outputs are produced by a group of interacting neurons that form a network, called acentral pattern generator. Central pattern generators are neuronal circuits that—when activated—can produce rhythmic motor patterns in the absence of sensory or descending inputs that carry specific timing information. Examples arewalking,breathing, andswimming,[75]Most evidence for central pattern generators comes from lower animals, such as thelamprey, but there is also evidence for spinal central pattern generators in humans.[76][77] Neuronal spiking is generally considered the basis for information transfer in the brain. For such a transfer, information needs to be coded in a spiking pattern. Different types of coding schemes have been proposed, such asrate codingandtemporal coding. Neural oscillations could create periodic time windows in which input spikes have larger effect on neurons, thereby providing a mechanism for decoding temporal codes.[78] Single-cell intrinsic oscillators serve as valuable tools for decoding temporally-encoded sensory information. This information is encoded through inter-spike intervals, and intrinsic oscillators can act as 'temporal rulers' for precisely measuring these intervals. One notable mechanism for achieving this is the neuronalphase-locked loop(NPLL). In this mechanism, cortical oscillators undergo modulation influenced by the firing rates of thalamocortical 'phase detectors,' which, in turn, gauge the disparity between the cortical and sensory periodicity.[79] Synchronization of neuronal firing may serve as a means to group spatially segregated neurons that respond to the same stimulus in order to bind these responses for further joint processing, i.e. to exploit temporal synchrony to encode relations. Purely theoretical formulations of the binding-by-synchrony hypothesis were proposed first,[80]but subsequently extensive experimental evidence has been reported supporting the potential role of synchrony as a relational code.[81] The functional role of synchronized oscillatory activity in the brain was mainly established in experiments performed on awake kittens with multiple electrodes implanted in the visual cortex. These experiments showed that groups of spatially segregated neurons engage in synchronous oscillatory activity when activated by visual stimuli. 
The frequency of these oscillations was in the range of 40 Hz and differed from the periodic activation induced by the grating, suggesting that the oscillations and their synchronization were due to internal neuronal interactions.[81]Similar findings were shown in parallel by the group of Eckhorn, providing further evidence for the functional role of neural synchronization in feature binding.[82]Since then, numerous studies have replicated these findings and extended them to different modalities such as EEG, providing extensive evidence of the functional role ofgammaoscillations in visual perception. Gilles Laurent and colleagues showed that oscillatory synchronization has an important functional role in odor perception. Perceiving different odors leads to different subsets of neurons firing on different sets of oscillatory cycles.[83]These oscillations can be disrupted byGABAblockerpicrotoxin,[84]and the disruption of the oscillatory synchronization leads to impairment of behavioral discrimination of chemically similar odorants in bees,[85]and to more similar responses across odors in downstream β-lobe neurons.[86]Recent follow-up of this work has shown that oscillations create periodic integration windows forKenyon cellsin the insectmushroom body, such that incoming spikes from theantennal lobeare more effective in activating Kenyon cells only at specific phases of the oscillatory cycle.[78] Neural oscillations are also thought be involved in thesense of time[87]and in somatosensory perception.[88]However, recent findings argue against a clock-like function of cortical gamma oscillations.[89] Oscillations have been commonly reported in the motor system. Pfurtscheller and colleagues found a reduction inalpha(8–12 Hz) andbeta(13–30 Hz) oscillations inEEGactivity when subjects made a movement.[62][90]Using intra-cortical recordings, similar changes in oscillatory activity were found in the motor cortex when the monkeys performed motor acts that required significant attention.[91][92]In addition, oscillations at spinal level become synchronised to beta oscillations in the motor cortex during constant muscle activation, as determined bycortico-muscular coherence.[93][94][95]Likewise, muscle activity of different muscles revealsinter-muscular coherenceat multiple distinct frequencies reflecting the underlyingneural circuitryinvolved inmotor coordination.[96][97] Recently it was found that cortical oscillations propagate astravelling wavesacross the surface of the motor cortex along dominant spatial axes characteristic of the local circuitry of the motor cortex.[98]It has been proposed that motor commands in the form of travelling waves can be spatially filtered by the descending fibres to selectively control muscle force.[99]Simulations have shown that ongoing wave activity in cortex can elicit steady muscle force with physiological levels of EEG-EMG coherence.[100] Oscillatory rhythms at 10 Hz have been recorded in a brain area called theinferior olive, which is associated with the cerebellum.[20]These oscillations are also observed in motor output of physiologicaltremor[101]and when performing slow finger movements.[102]These findings may indicate that the human brain controls continuous movements intermittently. 
In support, it was shown that these movement discontinuities are directly correlated to oscillatory activity in a cerebello-thalamo-cortical loop, which may represent a neural mechanism for the intermittent motor control.[103] Neural oscillations, in particularthetaactivity, are extensively linked to memory function. Theta rhythms are very strong in rodent hippocampi and entorhinal cortex during learning and memory retrieval, and they are believed to be vital to the induction oflong-term potentiation, a potential cellular mechanism for learning and memory.Couplingbetween theta andgammaactivity is thought to be vital for memory functions, includingepisodic memory.[104][105]Tight coordination of single-neuron spikes with local theta oscillations is linked to successful memory formation in humans, as more stereotyped spiking predicts better memory.[106] Sleep is a naturally recurring state characterized by reduced or absentconsciousnessand proceeds in cycles ofrapid eye movement(REM) andnon-rapid eye movement(NREM) sleep. Sleep stages are characterized by spectral content ofEEG: for instance, stage N1 refers to the transition of the brain from alpha waves (common in the awake state) to theta waves, whereas stage N3 (deep or slow-wave sleep) is characterized by the presence of delta waves.[107]The normal order of sleep stages is N1 → N2 → N3 → N2 → REM.[citation needed] Neural oscillations may play a role in neural development. For example,retinal wavesare thought to have properties that define early connectivity of circuits and synapses between cells in the retina.[108] Specific types of neural oscillations may also appear in pathological situations, such asParkinson's diseaseorepilepsy. These pathological oscillations often consist of an aberrant version of a normal oscillation. For example, one of the best known types is thespike and waveoscillation, which is typical of generalized or absence epileptic seizures, and which resembles normal sleep spindle oscillations. A tremor is an involuntary, somewhat rhythmic, muscle contraction and relaxation involving to-and-fro movements of one or more body parts. It is the most common of all involuntary movements and can affect the hands, arms, eyes, face, head, vocal cords, trunk, and legs. Most tremors occur in the hands. In some people, tremor is a symptom of another neurological disorder. Many different forms of tremor have been identified, such asessential tremororParkinsoniantremor. It is argued that tremors are likely to be multifactorial in origin, with contributions from neural oscillations in the central nervous systems, but also from peripheral mechanisms such as reflex loop resonances.[109] Epilepsy is a common chronic neurological disorder characterized byseizures. These seizures are transient signs and/or symptoms of abnormal, excessive orhypersynchronous neuronal activityin the brain.[110] In thalamocortical dysrhythmia (TCD), normalthalamocortical resonanceis disrupted. The thalamic loss of input allows the frequency of the thalamo-cortical column to slow into the theta or delta band as identified by MEG and EEG by machine learning.[111]TCD can be treated withneurosurgicalmethods likethalamotomy. Neural oscillations are sensitive to several drugs influencing brain activity; accordingly,biomarkersbased on neural oscillations are emerging assecondary endpointsin clinical trials and in quantifying effects in pre-clinical studies. 
These biomarkers are often named "EEG biomarkers" or "Neurophysiological Biomarkers" and are quantified using quantitative electroencephalography (qEEG). EEG biomarkers can be extracted from the EEG using the open-source Neurophysiological Biomarker Toolbox. Neural oscillation has been applied as a control signal in various brain–computer interfaces (BCIs).[112] For example, a non-invasive BCI can be created by placing electrodes on the scalp and then measuring the weak electric signals. Although individual neuron activities cannot be recorded through a non-invasive BCI, because the skull damps and blurs the electromagnetic signals, oscillatory activity can still be reliably detected. The BCI concept was introduced by Vidal in 1973[113] as the challenge of using EEG signals to control objects outside the human body. Following this challenge, in 1988 the alpha rhythm was used in a brain-rhythm-based BCI to control a physical object, a robot;[114][115] this was the first BCI for control of a robot.[116][117] In particular, some forms of BCI allow users to control a device by measuring the amplitude of oscillatory activity in specific frequency bands, including mu and beta rhythms. A non-exhaustive list of the types of oscillatory activity found in the central nervous system includes the delta, theta, alpha, mu, beta, and gamma rhythms discussed above.
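As a sketch of how the amplitude of a specific rhythm can serve as such a control signal, the fragment below band-pass filters a synthetic scalp signal around the mu band and takes its Hilbert envelope as a continuously varying control value. The sampling rate, filter order, band edges, and threshold are illustrative assumptions, and the signal itself is synthetic.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                    # sampling rate (assumed)
t = np.arange(0, 8, 1 / fs)
# Synthetic signal: a mu-band (10 Hz) rhythm in the second half plus broadband noise.
signal = 0.3 * np.random.randn(t.size)
signal[t.size // 2:] += np.sin(2 * np.pi * 10 * t[t.size // 2:])

# Band-pass 8-12 Hz and extract the instantaneous oscillation amplitude.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
mu = filtfilt(b, a, signal)
envelope = np.abs(hilbert(mu))

# A toy control rule: the output is "on" whenever mu amplitude exceeds a threshold.
command = envelope > 0.5
print("fraction of samples with the control 'on':", command.mean())

Real BCIs calibrate such thresholds per user and typically combine several channels and frequency bands, but the basic signal chain, filter, envelope, decision rule, is the same.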
https://en.wikipedia.org/wiki/Neural_oscillation
In neuroscience, representational drift is a phenomenon describing the gradual change in how the brain represents information over time, even when the information (and associated perception or behavior) itself remains constant. This contrasts with the idea of stable neural representations, where the same information would ideally be encoded by consistent patterns of neural activity.[1] Neural representations are the patterns of activity within networks of neurons that encode information. While stability is important for consistent recognition and recall, the brain's inherent plasticity and ongoing learning processes can lead to modifications in these representations.[2] Representational drift manifests as these gradual shifts in the neural activity patterns associated with specific information. Over time, the same stimulus or concept might elicit a different, albeit potentially related, pattern of neural activation. The underlying causes of representational drift are not fully understood,[3] but several contributing factors are hypothesized. One prominent theory suggests that ongoing learning, even about familiar stimuli, continuously refines and updates neural representations.[2] Synaptic plasticity, the dynamic strengthening and weakening of connections between neurons, is another likely contributor, as these changes can reshape the neural circuits involved in representing information.[4] Furthermore, inherent noise within neural systems, including random fluctuations in neuronal firing, could also play a role in driving drift.
https://en.wikipedia.org/wiki/Representational_drift
Inmachine learning, aneural network(alsoartificial neural networkorneural net, abbreviatedANNorNN) is a computational model inspired by the structure and functions of biological neural networks.[1][2] A neural network consists of connected units or nodes calledartificial neurons, which loosely model theneuronsin the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected byedges, which model thesynapsesin the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is areal number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called theactivation function. The strength of the signal at each connection is determined by aweight, which adjusts during the learning process. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (theinput layer) to the last layer (theoutput layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.[3] Artificial neural networks are used for various tasks, includingpredictive modeling,adaptive control, and solving problems inartificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. Neural networks are typically trained throughempirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset.[4]Gradient-based methods such asbackpropagationare usually used to estimate the parameters of the network.[4]During the training phase, ANNs learn fromlabeledtraining data by iteratively updating their parameters to minimize a definedloss function.[5]This method allows the network to generalize to unseen data. Today's deep neural networks are based on early work instatisticsover 200 years ago. The simplest kind offeedforward neural network(FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. Themean squared errorsbetween these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as themethod of least squaresorlinear regression. It was used as a means of finding a good rough linear fit to a set of points byLegendre(1805) andGauss(1795) for the prediction of planetary movement.[7][8][9][10][11] Historically, digital computers such as thevon Neumann modeloperate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework ofconnectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing. 
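The single-layer linear network and least-squares fitting described above can be made concrete with a short sketch. The synthetic data, learning rate, and iteration count below are illustrative assumptions; the point is that the closed-form solution and iterative weight adjustment on the mean squared error arrive at essentially the same weights.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # inputs: 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)   # targets with a little noise

# Closed-form least squares (linear regression via the normal equations).
w_closed, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same fit by gradient descent on the mean squared error, i.e. iteratively
# adjusting the weights of a single-layer linear network.
w = np.zeros(3)
lr = 0.05
for _ in range(500):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)   # gradient of the MSE
    w -= lr * grad

print("closed form:      ", w_closed)
print("gradient descent: ", w)                # both should be close to true_w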
Warren McCullochandWalter Pitts[12](1943) considered a non-learning computational model for neural networks.[13]This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. In the late 1940s,D. O. Hebb[14]proposed a learninghypothesisbased on the mechanism ofneural plasticitythat became known asHebbian learning. It was used in many early neural networks, such as Rosenblatt'sperceptronand theHopfield network. Farley andClark[15](1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created byRochester, Holland, Habit and Duda (1956).[16] In 1958, psychologistFrank Rosenblattdescribed the perceptron, one of the first implemented artificial neural networks,[17][18][19][20]funded by the United StatesOffice of Naval Research.[21]R. D. Joseph (1960)[22]mentions an even earlier perceptron-like device by Farley and Clark:[10]"Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.[23] The first perceptrons did not have adaptive hidden units. However, Joseph (1960)[22]also discussedmultilayer perceptronswith an adaptive hidden layer. Rosenblatt (1962)[24]: section 16cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e.,deep learning. Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in theSoviet Union(1965). They regarded it as a form of polynomial regression,[25]or a generalization of Rosenblatt's perceptron.[26]A 1971 paper described a deep network with eight layers trained by this method,[27]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."[10] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[28]was published in 1967 byShun'ichi Amari.[29]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[10]Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit) activation function.[10][30][31]The rectifier has become the most popular activation function for deep learning.[32] Nevertheless, research stagnated in the United States following the work ofMinskyandPapert(1969),[33]who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. 
This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967). In 1976 transfer learning was introduced in neural networks learning.[34][35] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers and weight replication began with theNeocognitronintroduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.[36][37][38] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[39]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[24]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[40]In 1970,Seppo Linnainmaapublished the modern form of backpropagation in his Master'sthesis(1970).[41][42][10]G.M. Ostrovski et al. republished it in 1971.[43][44]Paul Werbosapplied backpropagation to neural networks in 1982[45][46](his 1974 PhD thesis, reprinted in a 1994 book,[47]did not yet describe the algorithm[44]). In 1986,David E. Rumelhartet al. popularised backpropagation but did not cite the original work.[48] Kunihiko Fukushima'sconvolutional neural network(CNN) architecture of 1979[36]also introducedmax pooling,[49]a popular downsampling procedure for CNNs. CNNs have become an essential tool forcomputer vision. Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[50][51]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[52]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[53]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[54]In 1991, a CNN was applied to medical image object segmentation[55]and breast cancer detection in mammograms.[56]LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.[57] From 1988 onward,[58][59]the use of neural networks transformed the field ofprotein structure prediction, in particular when the first cascading networks were trained onprofiles(matrices) produced by multiplesequence alignments.[60] One origin of RNN wasstatistical mechanics. In 1972,Shun'ichi Amariproposed to modify the weights of anIsing modelbyHebbian learningrule as a model ofassociative memory, adding in the component of learning.[61]This was popularized as the Hopfield network byJohn Hopfield(1982).[62]Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901,Cajalobserved "recurrent semicircles" in thecerebellar cortex.[63]Hebbconsidered "reverberating circuit" as an explanation for short-term memory.[64]The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.[12] In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array,[65][66]used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition of computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. 
Eliminating the external supervisor, it introduced the self-learning method in neural networks. In cognitive psychology, the journal American Psychologist in early 1980's carried out a debate on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion.[67][68]In 1982 the Crossbar Adaptive Array gave a neural network model of cognition-emotion relation.[65][69]It was an example of a debate where an AI system, a recurrent neural network, contributed to an issue in the same time addressed by cognitive psychology. Two early influential works were theJordan network(1986) and theElman network(1990), which applied RNN to studycognitive psychology. In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991,Jürgen Schmidhuberproposed the "neural sequence chunker" or "neural history compressor"[70][71]which introduced the important concepts of self-supervised pre-training (the "P" inChatGPT) and neuralknowledge distillation.[10]In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[72] In 1991,Sepp Hochreiter's diploma thesis[73]identified and analyzed thevanishing gradient problem[73][74]and proposed recurrentresidualconnections to solve it. He and Schmidhuber introducedlong short-term memory(LSTM), which set accuracy records in multiple applications domains.[75][76]This was not yet the modern version of LSTM, which required the forget gate, which was introduced in 1999.[77]It became the default choice for RNN architecture. During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, etc., including theBoltzmann machine,[78]restricted Boltzmann machine,[79]Helmholtz machine,[80]and thewake-sleep algorithm.[81]These were designed for unsupervised learning of deep generative models. Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially inpattern recognitionandhandwriting recognition.[82][83]In 2011, a CNN namedDanNet[84][85]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[38]It then won more contests.[86][87]They also showed howmax-poolingCNNs on GPU improved performance significantly.[88] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, and Geoffrey Hinton[89]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network byKaren SimonyanandAndrew Zisserman[90]and Google'sInceptionv3.[91] In 2012,NgandDeancreated a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images.[92]Unsupervised pre-training and increased computing power fromGPUsanddistributed computingallowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".[5] Radial basis functionand wavelet networks were introduced in 2013. 
These can be shown to offer best approximation properties and have been applied innonlinear system identificationand classification applications.[93] Generative adversarial network(GAN) (Ian Goodfellowet al., 2014)[94]became state of the art in generative modeling during 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[95][96]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. Excellent image quality is achieved byNvidia'sStyleGAN(2018)[97]based on the Progressive GAN by Tero Karras et al.[98]Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerningdeepfakes.[99]Diffusion models(2015)[100]eclipsed GANs in generative modeling since then, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers.[101]Stacking too many layers led to a steep reduction intrainingaccuracy,[102]known as the "degradation" problem.[103]In 2015, two techniques were developed to train very deep networks: thehighway networkwas published in May 2015,[104]and the residual neural network (ResNet) in December 2015.[105][106]ResNet behaves like an open-gated Highway Net. During the 2010s, theseq2seqmodel was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 inAttention Is All You Need.[107]It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992)[108]scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.[109][110][10]Transformers have increasingly become the model of choice fornatural language processing.[111]Many modernlarge language modelssuch asChatGPT,GPT-4, andBERTuse this architecture. ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms adirected,weighted graph.[112] An artificial neural network consists of simulated neurons. Each neuron is connected to othernodesvialinkslike a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another,[113]allowing weights to choose the signal between neurons. ANNs are composed ofartificial neuronswhich are conceptually derived from biologicalneurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons.[114]The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. 
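Returning to the Transformer attention mechanism mentioned above, the sketch below illustrates why its computation time grows quadratically with the size of the context window: every position is scored against every other position, producing an n × n matrix. This is a generic NumPy illustration with assumed toy dimensions, not code from any of the cited systems.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # scores has shape (n, n): each of the n positions attends to all n
    # positions, which is the source of the quadratic cost in context length
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

n, d = 6, 4  # toy context length and feature dimension (assumptions)
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, n, d))  # stand-ins for learned projections
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (6, 4)
```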
The outputs of the finaloutput neuronsof the neural net accomplish the task, such as recognizing an object in an image.[citation needed] To find the output of the neuron we take the weighted sum of all the inputs, weighted by theweightsof theconnectionsfrom the inputs to the neuron. We add abiasterm to this sum.[115]This weighted sum is sometimes called theactivation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.[116] The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is theinput layer. The layer that produces the ultimate result is theoutput layer. In between them are zero or morehidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can bepooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer.[117]Neurons with only such connections form adirected acyclic graphand are known asfeedforward networks.[118]Alternatively, networks that allow connections between neurons in the same or previous layers are known asrecurrent networks.[119] Ahyperparameteris a constantparameterwhose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters includelearning rate, the number of hidden layers and batch size.[citation needed]The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.[citation needed] Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining acost functionthat is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as astatisticwhose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application ofoptimizationtheory andstatistical estimation.[112][120] The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation.[121]A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. 
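The computations just described, weighted sums plus a bias passed through a nonlinearity, layer by layer, with the weights adjusted to reduce a cost at a rate set by the learning rate, can be collected into one small sketch. Everything concrete below (the 2-3-1 layer sizes, the tanh activation, the mean squared error cost, the toy target function, and the use of slow numerical gradients in place of backpropagation) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# a tiny fully connected network: 2 inputs -> 3 hidden neurons -> 1 output
params = {
    "W1": 0.5 * rng.standard_normal((2, 3)), "b1": np.zeros(3),
    "W2": 0.5 * rng.standard_normal((3, 1)), "b2": np.zeros(1),
}

def forward(p, x):
    # each layer: weighted sum of its inputs plus a bias, then an activation
    hidden = np.tanh(x @ p["W1"] + p["b1"])
    return hidden @ p["W2"] + p["b2"]

def cost(p, X, y):
    # mean squared error between network outputs and target values
    return float(np.mean((forward(p, X) - y) ** 2))

# toy supervised data: learn y = x0 + x1 (purely illustrative)
X = rng.standard_normal((64, 2))
y = X.sum(axis=1, keepdims=True)

learning_rate = 0.1  # size of the corrective step taken at each update
eps = 1e-5           # perturbation for numerical differentiation

for step in range(200):
    grads = {}
    for name, value in params.items():
        g = np.zeros_like(value)
        it = np.nditer(value, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            original = value[i]
            value[i] = original + eps
            c_plus = cost(params, X, y)
            value[i] = original - eps
            c_minus = cost(params, X, y)
            value[i] = original
            # numerical estimate of d(cost)/d(weight); backpropagation would
            # compute the same derivatives far more efficiently
            g[i] = (c_plus - c_minus) / (2 * eps)
        grads[name] = g
    for name in params:
        params[name] -= learning_rate * grads[name]  # gradient descent step

print(cost(params, X, y))  # typically far below the cost at initialization
```

In practice the gradients would be obtained by backpropagation rather than finite differences, and the updates would often be refined with momentum or an adaptive learning rate, as discussed next.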
Optimizations such asQuickpropare primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoidoscillationinside the network such as alternating connection weights, and to improve the rate of convergence, refinements use anadaptive learning ratethat increases or decreases as appropriate.[122]The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.[citation needed] While it is possible to define a cost functionad hoc, frequently the choice is determined by the function's desirable properties (such asconvexity) because it arises from the model (e.g. in a probabilistic model, the model'sposterior probabilitycan be used as an inverse cost).[citation needed] Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates thegradient(the derivative) of thecost functionassociated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such asextreme learning machines,[123]"no-prop" networks,[124]training without backtracking,[125]"weightless" networks,[126][127]andnon-connectionist neural networks.[citation needed] Machine learning is commonly separated into three main learning paradigms,supervised learning,[128]unsupervised learning[129]andreinforcement learning.[130]Each corresponds to a particular learning task. Supervised learninguses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions.[131]A commonly used cost is themean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning arepattern recognition(also known as classification) andregression(also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech andgesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far. Inunsupervised learning, input data is given along with the cost function, some function of the datax{\displaystyle \textstyle x}and the network's output. The cost function is dependent on the task (the model domain) and anya prioriassumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the modelf(x)=a{\displaystyle \textstyle f(x)=a}wherea{\displaystyle \textstyle a}is a constant and the costC=E[(x−f(x))2]{\displaystyle \textstyle C=E[(x-f(x))^{2}]}. Minimizing this cost produces a value ofa{\displaystyle \textstyle a}that is equal to the mean of the data. The cost function can be much more complicated. 
Its form depends on the application: for example, incompressionit could be related to themutual informationbetweenx{\displaystyle \textstyle x}andf(x){\displaystyle \textstyle f(x)}, whereas in statistical modeling, it could be related to theposterior probabilityof the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in generalestimationproblems; the applications includeclustering, the estimation ofstatistical distributions,compressionandfiltering. In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. Inreinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and aninstantaneouscost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly. Formally the environment is modeled as aMarkov decision process(MDP) with statess1,...,sn∈S{\displaystyle \textstyle {s_{1},...,s_{n}}\in S}and actionsa1,...,am∈A{\displaystyle \textstyle {a_{1},...,a_{m}}\in A}. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distributionP(ct|st){\displaystyle \textstyle P(c_{t}|s_{t})}, the observation distributionP(xt|st){\displaystyle \textstyle P(x_{t}|s_{t})}and the transition distributionP(st+1|st,at){\displaystyle \textstyle P(s_{t+1}|s_{t},a_{t})}, while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define aMarkov chain(MC). The aim is to discover the lowest-cost MC. ANNs serve as the learning component in such applications.[132][133]Dynamic programmingcoupled with ANNs (givingneurodynamicprogramming)[134]has been applied to problems such as those involved invehicle routing,[135]video games,natural resource management[136][137]andmedicine[138]because of ANNs ability to mitigate losses of accuracy even when reducing thediscretizationgrid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems,gamesand other sequential decision making tasks. Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning namedcrossbar adaptive array(CAA).[139]It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion.[140]Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation: The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. 
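As a minimal, concrete illustration of the reinforcement-learning setting described above (states, actions, instantaneous costs, and the choice between exploring new actions and exploiting prior learning), here is a tabular sketch on a toy three-state chain. The environment, the epsilon-greedy rule, and the Q-learning-style update are standard textbook choices assumed for illustration; they are not the crossbar (CAA) algorithm discussed in the surrounding text, and a neural network would replace the table when the state space is large.

```python
import random

random.seed(0)

# toy environment: states 0, 1, 2 in a chain; action 0 = "left", 1 = "right";
# every step incurs a cost of 1 unless it lands in the cheap goal state 2
n_states, n_actions = 3, 2

def step(state, action):
    # the environment's (normally unknown) transition and cost rules
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    cost = 0.0 if nxt == n_states - 1 else 1.0  # instantaneous cost
    return nxt, cost

# table of estimated long-term (discounted cumulative) costs
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

state = 0
for t in range(5000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)                       # explore
    else:
        action = min(range(n_actions), key=lambda a: Q[state][a])  # exploit
    nxt, cost = step(state, action)
    # move the estimate toward the observed cost plus the discounted
    # cheapest estimated cost from the next state
    target = cost + gamma * min(Q[nxt])
    Q[state][action] += alpha * (target - Q[state][action])
    state = nxt

print(Q)  # "right" should end up looking cheaper than "left" in states 0 and 1
```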
The CAA exists in two environments, one is behavioral environment where it behaves, and the other is genetic environment, where from it initially and only once receives initial emotions about to be encountered situations in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA will learn a goal-seeking behavior, in the behavioral environment that contains both desirable and undesirable situations.[141] Neuroevolutioncan create neural network topologies and weights usingevolutionary computation. It is competitive with sophisticated gradient descent approaches.[142][143]One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".[144] Stochastic neural networksoriginating fromSherrington–Kirkpatrick modelsare a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neuronsstochastictransfer functions[citation needed], or by giving them stochastic weights. This makes them useful tools foroptimizationproblems, since the random fluctuations help the network escape fromlocal minima.[145]Stochastic neural networks trained using aBayesianapproach are known asBayesian neural networks.[146] Topological deep learning, first introduced in 2017,[147]is an emerging approach inmachine learningthat integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted inalgebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such asdifferential topologyandgeometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematicalartificial intelligence, fostering a mutually beneficial relationship between AI andmathematics. In aBayesianframework, a distribution over the set of allowed models is chosen to minimize the cost.Evolutionary methods,[148]gene expression programming,[149]simulated annealing,[150]expectation–maximization,non-parametric methodsandparticle swarm optimization[151]are other learning algorithms. Convergent recursion is a learning algorithm forcerebellar model articulation controller(CMAC) neural networks.[152][153] Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set. ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights andtopology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. 
Some types operate purely in hardware, while others are purely software and run on general purpose computers. Some of the main breakthroughs include: Using artificial neural networks requires an understanding of their characteristics. Neural architecture search(NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network.[165]Available systems includeAutoMLand AutoKeras.[166]scikit-learn libraryprovides functions to help with building a deep network from scratch. We can then implement a deep network withTensorFloworKeras. Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.[167] [citation needed] Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include: ANNs have been used to diagnose several types of cancers[185][186]and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.[187][188] ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters[189][190]and to predict foundation settlements.[191]It can also be useful to mitigate flood by the use of ANNs for modelling rainfall-runoff.[192]ANNs have also been used for building black-box models ingeoscience:hydrology,[193][194]ocean modelling andcoastal engineering,[195][196]andgeomorphology.[197]ANNs have been employed incybersecurity, with the objective to discriminate between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware,[198]for identifying domains belonging to threat actors and for detecting URLs posing a security risk.[199]Research is underway on ANN systems designed for penetration testing, for detecting botnets,[200]credit cards frauds[201]and network intrusions. ANNs have been proposed as a tool to solvepartial differential equationsin physics[202][203][204]and simulate the properties of many-bodyopen quantum systems.[205][206][207][208]In brain research ANNs have studied short-term behavior ofindividual neurons,[209]the dynamics of neural circuitry arise from interactions between individual neurons and how behavior can arise from abstract neural modules that represent complete subsystems. Studies considered long-and short-term plasticity of neural systems and their relation to learning and memory from the individual neuron to the system level. It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.[210] Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. 
This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.[211] Themultilayer perceptronis auniversal functionapproximator, as proven by theuniversal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters. A specific recurrent architecture withrational-valued weights (as opposed to full precision real number-valued weights) has the power of auniversal Turing machine,[212]using a finite number of neurons and standard linear connections. Further, the use ofirrationalvalues for weights results in a machine withsuper-Turingpower.[213][214][failed verification] A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Two notions of capacity are known by the community. The information capacity and the VC Dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book[215]which summarizes work by Thomas Cover.[216]The capacity of a network of standard neurons (not convolutional) can be derived by four rules[217]that derive from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion, is theVC dimension. VC Dimension uses the principles ofmeasure theoryand finds the maximum capacity under the best possible circumstances. This is, given input data in a specific form. As noted in,[215]the VC Dimension for arbitrary inputs is half the information capacity of a Perceptron. The VC Dimension for arbitrary points is sometimes referred to as Memory Capacity.[218] Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical. Another issue worthy to mention is that training may cross someSaddle pointwhich may lead the convergence to the wrong direction. The convergence behavior of certain types of ANN architectures are more understood than others. When the width of network approaches to infinity, the ANN is well described by its first order Taylor expansion throughout training, and so inherits the convergence behavior ofaffine models.[219][220]Another example is when parameters are small, it is observed that ANNs often fits target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks.[221][222][223][224]This phenomenon is the opposite to the behavior of some well studied iterative numerical schemes such asJacobi method. Deeper neural networks have been observed to be more biased towards low frequency functions.[225] Applications whose goal is to create a system that generalizes well to unseen examples, face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. 
The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting. Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified. By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications. The softmax activation function is: yi=exi∑j=1cexj{\displaystyle y_{i}={\frac {e^{x_{i}}}{\sum _{j=1}^{c}e^{x_{j}}}}}where xi is the input to output unit i and c is the number of categories. A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.[226] Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take overly large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.[152] Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), with a large amount of his research devoted to extrapolating multiple training scenarios from a single training experience and to preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).[227] A central claim[citation needed] of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed[by whom?] that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are.
No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything".[228]One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft[229]to detecting credit card fraud to mastering the game ofGo. Technology writer Roger Bridgman commented: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[230] Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on theexplainabilityof AI has contributed towards the development of methods, notably those based onattentionmechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.[231] Biological brains use both shallow and deep circuits as reported by brain anatomy,[232]displaying a wide variety of invariance. Weng[233]argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies. Large and effective neural networks require considerable computing resources.[234]While the brain has hardware tailored to the task of processing signals through agraphof neurons, simulating even a simplified neuron onvon Neumann architecturemay consume vast amounts ofmemoryand storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormousCPUpower and time.[citation needed] Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered byGPGPUs(onGPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before.[38]The use of accelerators such asFPGAsand GPUs can reduce training times from months to days.[234][235] Neuromorphic engineeringor aphysical neural networkaddresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called aTensor Processing Unit, or TPU.[236] Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. 
Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.[237] Advocates ofhybridmodels (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.[238][239] Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases.[240][241]These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute.[240]This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications likefacial recognition, hiring processes, andlaw enforcement.[241][242]For example, in 2018,Amazonhad to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field.[242]The program would penalize any resume with the word "woman" or the name of any women's college. However, the use ofsynthetic datacan help reduce dataset bias and increase representation in datasets.[243] Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.[citation needed] In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance.[244]This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.[244] By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques.[244][245]These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.[citation needed] In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content.[244][245]This has implications for automated customer service, content moderation, and language understanding technologies.[citation needed] In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. 
For instance, deep feedforward neural networks are important in system identification and control applications.[citation needed] ANNs are used forstock market predictionandcredit scoring: ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancingrisk management strategies.[citation needed] ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complexmedical imagingfor early disease detection, and by predicting patient outcomes for personalized treatment planning.[245]In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs.[244]Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management.[245]Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.[citation needed] ANNs such as generative adversarial networks (GAN) andtransformersare used for content creation across numerous industries.[246]This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance,DALL-Eis a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user.[247]In the field of music, transformers are used to create original music for commercials and documentaries through companies such asAIVAandJukedeck.[248]In the marketing industry generative models are used to create personalized advertisements for consumers.[246]Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020.[249]Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.[250]
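The adversarial, zero-sum arrangement behind the GANs mentioned in this article can be sketched compactly: a generator proposes samples while a discriminator is trained to tell them from real data, and the generator is trained to fool it. The sketch below uses PyTorch on a toy one-dimensional data distribution; the network sizes, learning rates, and the Gaussian "real" data are all assumptions made for illustration, not details of any system cited above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # toy "real" data: samples from a Gaussian centred at 3 (an assumption)
    return torch.randn(n, 1) + 3.0

# generator: noise vector -> candidate sample; discriminator: sample -> P(real)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # discriminator step: label real samples 1 and generated samples 0
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator step: one network's gain is the other's loss -- the generator
    # is rewarded when the discriminator mistakes its samples for real ones
    loss_g = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should drift toward 3
```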
https://en.wikipedia.org/wiki/Criticism_of_artificial_neural_networks
Deep learningis a subset ofmachine learningthat focuses on utilizing multilayeredneural networksto perform tasks such asclassification,regression, andrepresentation learning. The field takes inspiration frombiological neuroscienceand is centered around stackingartificial neuronsinto layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be eithersupervised,semi-supervisedorunsupervised.[2] Some common deep learning network architectures includefully connected networks,deep belief networks,recurrent neural networks,convolutional neural networks,generative adversarial networks,transformers, andneural radiance fields. These architectures have been applied to fields includingcomputer vision,speech recognition,natural language processing,machine translation,bioinformatics,drug design,medical image analysis,climate science, material inspection andboard gameprograms, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes inbiological systems, particularly thehuman brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layeredneural networkssuch asconvolutional neural networksandtransformers, although they can also includepropositional formulasor latent variables organized layer-wise in deepgenerative modelssuch as the nodes indeep belief networksand deepBoltzmann machines.[7] Fundamentally, deep learning refers to a class ofmachine learningalgorithmsin which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in animage recognitionmodel, the raw input may be animage(represented as atensorofpixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which levelon its own. Prior to deep learning, machine learning techniques often involved hand-craftedfeature engineeringto transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the modeldiscoversuseful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantialcredit assignment path(CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For afeedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
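For the feedforward case just described, the relationship between hidden layers and depth can be made concrete with a small sketch; the Keras framework, the layer sizes, and the activation functions below are illustrative assumptions rather than anything prescribed by the text. With two hidden layers plus a parameterized output layer, the credit assignment path depth of this network would be three.

```python
import tensorflow as tf

# a small "deep" feedforward network: input -> two hidden layers -> output,
# giving a credit assignment path of depth 3 (two hidden layers plus the
# parameterized output layer)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),                   # 16 input features (assumed)
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 2
    tf.keras.layers.Dense(1),                      # output layer
])
model.compile(optimizer="adam", loss="mse")
model.summary()  # lists the layers and their trainable parameters
```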
Forrecurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9]No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10]Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively. Deep learning architectures can be constructed with agreedylayer-by-layer method.[11]Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner aredeep belief networks.[8][12] The termDeep Learningwas introduced to the machine learning community byRina Dechterin 1986,[13]and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context ofBooleanthreshold neurons.[14][15]Although the history of its appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of theuniversal approximation theorem[17][18][19][20][21]orprobabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity offeedforward neural networkswith a single hidden layer of finite size to approximatecontinuous functions.[17][18][19][20]In 1989, the first proof was published byGeorge Cybenkoforsigmoidactivation functions[17]and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18]Recent work also showed that universal approximation also holds for non-bounded activation functions such asKunihiko Fukushima'srectified linear unit.[25][26] The universal approximation theorem fordeep neural networksconcerns the capacity of networks with bounded width but the depth is allowed to grow. Lu et al.[21]proved that if the width of a deep neural network withReLUactivation is strictly larger than the input dimension, then the network can approximate anyLebesgue integrable function; if the width is smaller or equal to the input dimension, then a deep neural network is not a universal approximator. Theprobabilisticinterpretation[24]derives from the field ofmachine learning. It features inference,[23][7][8][9][12][24]as well as theoptimizationconcepts oftrainingandtesting, related to fitting andgeneralization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as acumulative distribution function.[24]The probabilistic interpretation led to the introduction ofdropoutasregularizerin neural networks. The probabilistic interpretation was introduced by researchers includingHopfield,WidrowandNarendraand popularized in surveys such as the one byBishop.[27] There are twotypesof artificial neural network (ANN):feedforward neural network(FNN) ormultilayer perceptron(MLP) andrecurrent neural networks(RNN). RNNs have cycles in their connectivity structure, FNNs don't. In the 1920s,Wilhelm LenzandErnst Isingcreated theIsing model[28][29]which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. 
In 1972,Shun'ichi Amarimade this architecture adaptive.[30][31]His learning RNN was republished byJohn Hopfieldin 1982.[32]Other earlyrecurrent neural networkswere published by Kaoru Nakano in 1971.[33][34]Already in 1948,Alan Turingproduced work on "Intelligent Machinery" that was not published in his lifetime,[35]containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt(1958)[36]proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16The book cites an earlier network by R. D. Joseph (1960)[38]"functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Should Joseph therefore be considered the originator of proper adaptivemultilayer perceptronswith learning hidden units? Unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in 1965. They regarded it as a form of polynomial regression,[39]or a generalization of Rosenblatt's perceptron.[40]A 1971 paper described a deep network with eight layers trained by this method,[41]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[42]was published in 1967 byShun'ichi Amari.[43]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[31]Subsequent developments in hardware and hyperparameter tunings have made end-to-endstochastic gradient descentthe currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit)activation function.[25][31]The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers began with theNeocognitronintroduced byKunihiko Fukushimain 1979, though not trained by backpropagation.[45][46] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[47]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[48]The modern form of backpropagation was first published inSeppo Linnainmaa's master thesis (1970).[49][50][31]G.M. Ostrovski et al. republished it in 1971.[51][52]Paul Werbosapplied backpropagation to neural networks in 1982[53](his 1974 PhD thesis, reprinted in a 1994 book,[54]did not yet describe the algorithm[52]). In 1986,David E. Rumelhartet al. 
popularised backpropagation but did not cite the original work.[55][56] Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[60]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[61]In 1991, a CNN was applied to medical image object segmentation[62]and breast cancer detection in mammograms.[63]LeNet-5 (1998), a 7-level CNN byYann LeCunet al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks(RNN)[28][30]were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward network. Consequently, they have similar properties and issues, and their developments had mutual influences. Among RNNs, two early influential works were theJordan network(1986)[65]and theElman network(1990),[66]which applied RNNs to study problems incognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991,Jürgen Schmidhuberproposed a hierarchy of RNNs pre-trained one level at a time byself-supervised learning, where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68]This "neural history compressor" usespredictive codingto learninternal representationsat multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can becollapsedinto a single RNN bydistillinga higher-levelchunkernetwork into a lower-levelautomatizernetwork.[67][68][31]In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[69]The "P" inChatGPTrefers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70]implemented the neural history compressor,[67]and identified and analyzed thevanishing gradient problem.[70][71]Hochreiter proposed recurrentresidualconnections to solve the vanishing gradient problem. This led to thelong short-term memory(LSTM), published in 1995.[72]LSTM can learn "very deep learning" tasks[9]with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which requires the "forget gate"; this gate was introduced in 1999,[73]and the resulting design became the standard RNN architecture. In 1991,Jürgen Schmidhuberalso published adversarial neural networks that contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[74][75]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. This was called "artificial curiosity".
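The adversarial principle can be sketched with a toy example in Python (assuming PyTorch; the one-dimensional data, network sizes, and the commonly used non-saturating generator loss are illustrative choices in the style of later GAN practice, not a reconstruction of the 1991 formulation). One network generates samples, the other is trained to distinguish generated samples from samples drawn from the environment, and the two are updated with opposing objectives:

import torch
from torch import nn

torch.manual_seed(0)

def real_samples(n):
    # Stand-in "environment": samples from a fixed Gaussian distribution.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
predictor = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # outputs a logit

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # The predictor is rewarded for telling real samples from generated ones.
    p_loss = loss_fn(predictor(real_samples(64)), torch.ones(64, 1)) + \
             loss_fn(predictor(fake.detach()), torch.zeros(64, 1))
    p_opt.zero_grad()
    p_loss.backward()
    p_opt.step()

    # The generator is rewarded for making the predictor mislabel its outputs,
    # so one network's gain is the other's loss.
    g_loss = loss_fn(predictor(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()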
In 2014, this adversarial principle was used ingenerative adversarial networks(GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, and others, including theBoltzmann machine,[77]restricted Boltzmann machine,[78]Helmholtz machine,[79]and thewake-sleep algorithm.[80]These were designed for unsupervised learning of deep generative models. However, these methods were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986 (p. 112[81]). A 1988 network became state of the art inprotein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep ANNs (e.g., recurrent nets) have been explored forspeech recognitionfor many years.[83][84][85]These methods never outperformed the non-uniform, internally handcrafted Gaussianmixture model/Hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[86]Key difficulties have been analyzed, including vanishing gradients[70]and weak temporal correlation structure in neural predictive models.[87][88]Additional difficulties were the lack of training data and limited computing power. Mostspeech recognitionresearchers moved away from neural nets to pursue generative modeling. An exception was atSRI Internationalin the late 1990s. Funded by the US government'sNSAandDARPA, SRI conducted research in speech andspeaker recognition. The speaker recognition team led byLarry Heckreported significant success with deep neural networks in speech processing in the 1998NISTSpeaker Recognition benchmark.[89][90]It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of a deep autoencoder on "raw" spectrogram or linearfilter-bankfeatures in the late 1990s,[90]showing its superiority over theMel-Cepstralfeatures, which contain stages of fixed transformation from spectrograms. The raw features of speech,waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such asGabor filtersandsupport vector machines(SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93]In 2006,Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it withconnectionist temporal classification(CTC)[94]in stacks of LSTMs.[95]In 2009, it became the first RNN to win apattern recognitioncontest, in connectedhandwriting recognition.[96][9] In 2006, publications byGeoff Hinton,Ruslan Salakhutdinov, Osindero andTeh[97][98]introduceddeep belief networks, developed for generative modeling.
They are trained by training one restricted Boltzmann machine, freezing it, training another one on top of it, and so on, after which the stack is optionallyfine-tunedusing supervised backpropagation.[99]They could model high-dimensional probability distributions, such as the distribution ofMNIST images, but convergence was slow.[100][101][102] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103]Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than those of the then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) systems and also than those of more advanced generative model-based systems.[104]The nature of the recognition errors produced by the two types of systems was characteristically different,[105]offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.[23][106][107]Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition.[105]That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.[104][105][108]In 2010, researchers extended deep learning fromTIMITto large vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed bydecision trees.[109][110][111][106] The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs for years,[112]including CNNs,[113]faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114] A key driver of the deep learning revolution was hardware advances, especially GPUs. Some early work dated back to 2004.[112][113]In 2009, Raina, Madhavan, andAndrew Ngreported a 100-million-parameter deep belief network trained on 30 NvidiaGeForce GTX 280GPUs, an early demonstration of GPU-based deep learning.
They reported up to 70 times faster training.[115] In 2011, a CNN namedDanNet[116][117]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, andJürgen Schmidhuberachieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9]It then won more contests.[118][119]They also showed howmax-poolingCNNs on GPUs improved performance significantly.[3] In 2012,Andrew NgandJeff Deancreated an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken fromYouTubevideos.[120] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, andGeoffrey Hinton[4]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included theVGG-16network byKaren SimonyanandAndrew Zisserman[121]and Google'sInceptionv3.[122] The success in image classification was then extended to the more challenging task ofgenerating descriptions(captions) for images, often as a combination of CNNs and LSTMs.[123][124][125] In 2014, the state of the art was training a “very deep neural network” with 20 to 30 layers.[126]Stacking too many layers led to a steep reduction intrainingaccuracy,[127]known as the "degradation" problem.[128]In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and theresidual neural network(ResNet)[129]in December 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples includedGoogle DeepDream(2015) andneural style transfer(2015),[130]both of which were based on pretrained image classification neural networks, such asVGG-19. Thegenerative adversarial network(GAN) (Ian Goodfellowet al., 2014),[131]based onJürgen Schmidhuber's principle of artificial curiosity,[74][76]became the state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved byNvidia'sStyleGAN(2018),[132]based on the Progressive GAN by Tero Karras et al.[133]Here the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerningdeepfakes.[134]Diffusion models(2015)[135]have since eclipsed GANs in generative modeling, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2015, Google's speech recognition improved by 49% with an LSTM-based model, which was made available throughGoogle Voice Searchonsmartphones.[136][137] Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision andautomatic speech recognition(ASR). Results on commonly used evaluation sets such asTIMIT(ASR) andMNIST(image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved.[104][138]Convolutional neural networks were superseded for ASR byLSTM,[137][139][140][141]but they are more successful in computer vision. Yoshua Bengio,Geoffrey HintonandYann LeCunwere awarded the 2018Turing Awardfor "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142] Artificial neural networks(ANNs) orconnectionistsystemsare computing systems inspired by thebiological neural networksthat constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manuallylabeledas "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm usingrule-based programming. An ANN is based on a collection of connected units calledartificial neurons(analogous to biologicalneuronsin abiological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented byreal numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal sent downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such asbackpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision,speech recognition,machine translation,social networkfiltering,playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing "Go"[144]). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9]There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145]These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer,[146]and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition ofprimitives.[147]The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7]For instance, it was proved that sparsemultivariate polynomialsare exponentially easier to approximate with DNNs than with shallow networks.[148] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains.
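Such a layered computation can be sketched minimally in Python (assuming NumPy; the layer sizes, random weights, and sigmoid squashing are illustrative assumptions rather than a description of any specific system). Each layer multiplies the signals it receives by its weights, adds a bias, and applies a nonlinearity that keeps activations between 0 and 1 before passing them downstream; training would then adjust the weights and biases, typically by backpropagation:

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    # Squashes each activation into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# A small "deep" network: 8 inputs, two hidden layers, 3 output units.
layer_sizes = [8, 16, 16, 3]
weights = [rng.normal(scale=0.5, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    # Each layer transforms the output of the previous layer.
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

x = rng.normal(size=8)       # an input pattern, e.g. a small feature vector
output = forward(x)          # three values in (0, 1), one per output unit
# For classification, a softmax output layer would normally be used so the
# outputs form a probability distribution over classes.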
It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146] DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[149]That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such aslanguage modeling.[150][151][152][153][154]Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks(CNNs) are used in computer vision.[157]CNNs also have been applied toacoustic modelingfor automatic speech recognition (ASR).[158] As with ANNs, many issues can arise with naively trained DNNs. Two common issues areoverfittingand computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data.Regularizationmethods such as Ivakhnenko's unit pruning[41]orweight decay(ℓ2{\displaystyle \ell _{2}}-regularization) orsparsity(ℓ1{\displaystyle \ell _{1}}-regularization) can be applied during training to combat overfitting.[159]Alternativelydropoutregularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies.[160]Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction.[161]Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.[162] DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), thelearning rate, and initial weights.Sweeping through the parameter spacefor optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such asbatching(computing the gradient on several training examples at once rather than individual examples)[163]speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.[164][165] Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. 
The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167] Since the 2010s, advances in both machine learning algorithms andcomputer hardwarehave led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI.[169]OpenAIestimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171] Specialelectronic circuitscalleddeep learning processorswere designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) inHuaweicellphones[172]andcloud computingservers such astensor processing units(TPU) in theGoogle Cloud Platform.[173]Cerebras Systemshas also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175] Atomically thinsemiconductorsare considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based onfloating-gatefield-effect transistors(FGFETs).[176] In 2021, J. Feldmann et al. proposed an integratedphotonichardware acceleratorfor parallel convolutional processing.[177]The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer throughwavelengthdivisionmultiplexingin conjunction withfrequency combs, and (2) extremely high data modulation speeds.[177]Their system can execute trillions of multiply-accumulate operations per second, indicating the potential ofintegratedphotonicsin data-heavy AI applications.[177] Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9]that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156]is competitive with traditional speech recognizers on certain tasks.[93] The initial success in speech recognition was based on small-scale recognition tasks using TIMIT. The data set contains 630 speakers from eight majordialectsofAmerican English, where each speaker reads 10 sentences.[178]Its small size lets many configurations be tried. More importantly, the TIMIT task concernsphone-sequence recognition, which, unlike word-sequence recognition, allows weak phonebigramlanguage models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates on this task, including these early results and measured as percent phone error rates (PER), have been summarized since 1991.
The debut of DNNs for speaker recognition in the late 1990s, their use in speech recognition around 2009–2011, and the adoption of LSTM around 2003–2007 accelerated progress in eight major areas.[23][108][106] All major commercial speech recognition systems (e.g., MicrosoftCortana,Xbox,Skype Translator,Amazon Alexa,Google Now,Apple Siri,BaiduandiFlyTekvoice search, and a range ofNuancespeech products) are based on deep learning.[23][183][184] A common evaluation set for image classification is theMNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185] Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188]Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of identifying the style period of a painting and of rendering one image in the style of another, as inneural style transfer. Neural networks have been used for implementing language models since the early 2000s.[150]LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191]andword embedding. Word embedding, such asword2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in avector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as aprobabilistic context free grammar(PCFG) implemented by an RNN.[192]Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192]Deep neural architectures provide the best results for constituency parsing,[193]sentiment analysis,[194]information retrieval,[195][196]spoken language understanding,[197]machine translation,[151][198]contextual entity linking,[198]writing style recognition,[199]named-entity recognition(token classification),[200]text classification, and others.[201] Recent developments generalizeword embeddingtosentence embedding. Google Translate(GT) uses a large end-to-endlong short-term memory(LSTM) network.[202][203][204][205]Google Neural Machine Translation (GNMT)uses anexample-based machine translationmethod in which the system "learns from millions of examples".[203]It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203]The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206]GT uses English as an intermediate between most language pairs.[206] A large percentage of candidate drugs fail to win regulatory approval.
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipatedtoxic effects.[207][208]Research has explored the use of deep learning to predict thebiomolecular targets,[209][210]off-target effects, andtoxic effectsof environmental chemicals in nutrients, household products and drugs.[211][212][213] AtomNet is a deep learning system for structure-basedrational drug design.[214]AtomNet was used to predict novel candidate biomolecules for disease targets such as theEbola virus[215]andmultiple sclerosis.[216][215] In 2017,graph neural networkswere used for the first time to predict various properties of molecules in a large toxicology data set.[217]In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.[218][219] Deep reinforcement learninghas been used to approximate the value of possibledirect marketingactions, defined in terms ofRFMvariables. The estimated value function was shown to have a natural interpretation ascustomer lifetime value.[220] Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222]Multi-view deep learning has been applied for learning user preferences from multiple domains.[223]The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. AnautoencoderANN was used inbioinformaticsto predictgene ontologyannotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225]and to predict health complications fromelectronic health recorddata.[226] Deep neural networks have shown unparalleled performance inpredicting protein structurefrom the sequence of the amino acids that make it up. In 2020,AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228] Deep neural networks can be used to estimate the entropy of astochastic processin an approach called the Neural Joint Entropy Estimator (NJEE).[229]Such an estimation provides insight into the effects of inputrandom variableson a targetrandom variable. Practically, the DNN is trained as aclassifierthat maps an inputvectorormatrixX to an outputprobability distributionover the possible classes of random variable Y, given input X. For example, inimage classificationtasks, the NJEE maps a vector ofpixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by aSoftmaxlayer with a number of nodes equal to thealphabetsize of Y. NJEE uses continuously differentiableactivation functions, such that the conditions for theuniversal approximation theoremhold.
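The general recipe can be illustrated with a simplified sketch in Python (assuming NumPy; a linear-softmax classifier stands in for a deep network, the two-symbol input and three-symbol alphabet are synthetic, and the plug-in cross-entropy estimate shown here is only a simplified stand-in for the published NJEE construction). A classifier is fit to predict Y from X, and the average negative log-probability it assigns to the observed labels estimates the conditional entropy of Y given X in nats:

import numpy as np

rng = np.random.default_rng(2)

# Synthetic data with a known conditional distribution p(y | x).
p_y_given_x = np.array([[0.7, 0.2, 0.1],    # x = 0
                        [0.1, 0.3, 0.6]])   # x = 1
n = 20000
x = rng.integers(0, 2, size=n)
y = np.array([rng.choice(3, p=p_y_given_x[xi]) for xi in x])

# Ground truth: average conditional entropy under the empirical x distribution.
true_cond_entropy = -np.mean([np.sum(p_y_given_x[xi] * np.log(p_y_given_x[xi])) for xi in x])

X = np.eye(2)[x]                 # one-hot inputs, shape (n, 2)
W = np.zeros((2, 3))             # linear-softmax weights standing in for a deep classifier

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):             # plain gradient descent on the cross-entropy loss
    probs = softmax(X @ W)
    grad = X.T @ (probs - np.eye(3)[y]) / n
    W -= 1.0 * grad

# Entropy estimate: average negative log-probability assigned to the observed labels.
probs = softmax(X @ W)
est_cond_entropy = -np.mean(np.log(probs[np.arange(n), y]))
print(round(true_cond_entropy, 3), round(est_cond_entropy, 3))  # the two values should be close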
This method has been shown to provide a stronglyconsistent estimatorand to outperform other methods in the case of large alphabet sizes.[229] Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement.[230][231]Modern deep learning tools can detect various diseases with high accuracy and can help specialists improve diagnostic efficiency.[232][233] Finding the appropriate mobile audience formobile advertisingis always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234]Deep learning has been used to interpret large, high-dimensional advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied toinverse problemssuch asdenoising,super-resolution,inpainting, andfilm colorization.[235]These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration",[236]which trains on an image dataset, andDeep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financialfraud detection, tax evasion detection,[237]and anti-money laundering.[238] In November 2023, researchers atGoogle DeepMindandLawrence Berkeley National Laboratoryannounced that they had developed an AI system known as GNoME. This system has contributed tomaterials scienceby discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganiccrystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a success rate of 71%. The data on newly discovered materials is publicly available through theMaterials Projectdatabase, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241] The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242] Physics-informed neural networks have been used to solvepartial differential equationsin both forward and inverse problems in a data-driven manner.[243]One example is the reconstruction of fluid flow governed by theNavier-Stokes equations. Physics-informed neural networks do not require the often expensive mesh generation that conventionalCFDmethods rely on.[244][245] Thedeep backward stochastic differential equation methodis a numerical method that combines deep learning withbackward stochastic differential equations(BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics.
By leveraging the powerful function approximation capabilities ofdeep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration ofphysics-informed neural networks(PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the recovery of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247]and ultrasound imaging.[248] Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning-based model trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level and in under a minute, with precision similar to state-of-the-art systems.[249][250] An epigenetic clock is abiochemical testthat can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using more than 6,000 blood samples.[251]The clock uses information from 1,000CpG sitesand predicts that people with certain conditions (IBD,frontotemporal dementia,ovarian cancer,obesity) are older than healthy controls. The aging clock was planned to be released for public use in 2021 by Deep Longevity, anInsilico Medicinespinoff company. Deep learning is closely related to a class of theories ofbrain development(specifically, neocortical development) proposed bycognitive neuroscientistsin the early 1990s.[252][253][254][255]These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave ofnerve growth factor) support a form ofself-organizationsomewhat analogous to that of the neural networks utilized in deep learning models. Like theneocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack oftransducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ...
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256] A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of thebackpropagationalgorithm have been proposed in order to increase its processing realism.[257][258]Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchicalgenerative modelsanddeep belief networks, may be closer to biological reality.[259][260]In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262]and neural populations.[263]Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264]both at the single-unit[265]and at the population[266]levels. Facebook's AI lab performs tasks such asautomatically tagging uploaded pictureswith the names of the people in them.[267] Google'sDeepMind Technologiesdeveloped a system capable of learning how to playAtarivideo games using only pixels as data input. In 2015 they demonstrated theirAlphaGosystem, which learned the game ofGowell enough to beat a professional Go player.[268][269][270]Google Translateuses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271] As of 2008,[272]researchers atThe University of Texas at Austin(UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242]First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration betweenU.S. Army Research Laboratory(ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242]Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273] Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274]Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear.[citation needed](e.g., Does it converge? If so, how fast? What is it approximating?) 
Deep learning methods are often looked at as ablack box, with most confirmations done empirically, rather than theoretically.[275] In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained[276]demonstrate a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article onThe Guardian's[277]website. Some deep learning architectures display problematic behaviors,[278]such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279]and misclassifying minuscule perturbations of correctly classified images (2013).[280]Goertzelhypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-componentartificial general intelligence(AGI) architectures.[278]These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281]decompositions of observed entities and events.[278]Learning a grammar(visual or linguistic) from training data would be equivalent to restricting the system tocommonsense reasoningthat operates on concepts in terms of grammaticalproduction rulesand is a basic goal of both human language acquisition[282]andartificial intelligence(AI).[283] As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284]By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".[285] In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system.[286]One defense is reverse image search, in which a possible fake image is submitted to a site such asTinEyethat can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287] Another group showed that certainpsychedelicspectacles could fool afacial recognition systeminto thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers tostop signsand caused an ANN to misclassify them.[286] ANNs can however be further trained to detect attempts atdeception, potentially leading attackers and defenders into an arms race similar to the kind that already defines themalwaredefense industry. 
ANNs have been trained to defeat ANN-based anti-malwaresoftware by repeatedly attacking a defense with malware that was continually altered by agenetic algorithmuntil it tricked the anti-malware while retaining its ability to damage the target.[286] In 2016, another group demonstrated that certain sounds could make theGoogle Nowvoice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286] In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286] The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288]It has been argued that not only low-paidclickwork(such as onAmazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of humanmicroworkthat are often not recognized as such.[289]The philosopherRainer Mühlhoffdistinguishes five types of "machinic capture" of human microwork to generate training data: (1)gamification(the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g.CAPTCHAsfor image recognition or click-tracking on Googlesearch results pages), (3) exploitation of social motivations (e.g.tagging facesonFacebookto obtain labeled facial images), (4)information mining(e.g. by leveragingquantified-selfdevices such asactivity trackers) and (5)clickwork.[289]
https://en.wikipedia.org/wiki/Criticism_of_deep_learning
Criticism of Googleincludes concern fortax avoidance, misuse and manipulation ofsearch results, its use of others'intellectual property, concerns that its compilation of data may violate people'sprivacyand collaboration with the US military onGoogle Earthto spy on users,[1]censorship of search results and content, its cooperation with theIsraeli militaryonProject NimbustargetingPalestinians[2]and the energy consumption of its servers as well as concerns over traditional business issues such as monopoly,restraint of trade,antitrust,patent infringement, indexing and presenting false information and propaganda in search results, and being an"Ideological Echo Chamber". Google's parent company,Alphabet Inc., is an American multinational public corporation invested inInternet search,cloud computing, and advertising technologies. Google hosts and develops a number of Internet-based services and products,[3]and generates profit primarily from advertising through itsGoogle Ads(formerly AdWords) program.[4][5] Google'sstated missionis "to organize the world's information and make it universally accessible and useful";[6]this mission, and the means used to accomplish it, have raised concerns among the company's critics. Much of the criticism pertains to issues that have not yet been addressed bycyber law. Shona Ghosh, a journalist forBusiness Insider, noted that an increasing digitalresistance movementagainst Google has grown.[7] The algorithms that generate search results andrecommendvideos onYouTubehave both been criticized as motivated to drive user engagement by reinforcing users pre-existing beliefs while also suggesting more extreme and less reliable content. In addition tosocial media, these algorithms have received substantial criticism as a driver ofpolitical polarization,internet addiction disorder, and the promotion ofmisinformation,disinformation,violenceand other externalities.[8][9][10][11][12]Aviv Ovadya argues that these algorithms incentivize the creation of divisive content in addition to promoting existing divisive content.[13] Sally Hubbard argues that as amonopoly, sites like YouTube and Google search result in more fake news than if there were more competition in the market that could make it harder to promote harmful content by just gaming one algorithm.[14] From the 2000s onward,Googleand parent companyAlphabet Inc.have facedantitrustscrutiny over allegedanti-competitive conductin violation ofcompetition lawin a particular jurisdiction.[15]Antitrust scrutiny of Googlehas primarily centered on the company's dominance in thesearch engineanddigital advertisingmarkets.[16][17]The company has also been accused of leveraging control of theAndroid operating systemto illegally curb competition.[18] Google has also received antitrust scrutiny over its control of theGoogle Playstore and alleged "self-preferencing" at the expense of third-party developers.[19][20]Additionally, Google's alleged discrimination against rivals' advertisements on YouTube has been subject to antitrust litigation.[21][22]More recently,Google Mapsand theGoogle Automotive Services(GAS) package have become the target of antitrust scrutiny.[23] TheEuropean Commissionhas pursued several competition law cases against Google, namely:[24] In testimony before aU.S. 
Senateantitrustpanel in September 2011,Eric Schmidt, Google's chairman, said that "the Internet is the ultimate level playing field" where users were "one click away" from competitors.[29]Nonetheless, Senator Kohl asked Schmidt whether Google's market share gave his company the special, dominant power of a monopoly. Schmidt acknowledged that Google's market share was akin to a monopoly, but noted the complexity of the law.[30][31] During the hearing,Mike Lee, Republican of Utah, accused Google of cooking its search results to favor its own services. Schmidt replied, "Senator, I can assure we haven't cooked anything."[29]In testimony before the same Senate panel,Jeffrey KatzandJeremy Stoppelman, the chief executives of Google's competitorsNextagandYelp, said that Google tilts search results in its own favor, limiting choice and stifling competition.[29] In October 2012, it was reported that theU.S. Federal Trade Commissionstaff were preparing a recommendation that the government sue Google on antitrust grounds. The areas of concern included accusations of manipulating search results to favor Google services such asGoogle Shoppingfor buying goods andGoogle Placesfor advertising local restaurants and businesses; whether Google's automated advertising marketplace,AdWords, discriminates against advertisers from competing online commerce services like comparison shopping sites and consumer review Web sites; whether Google's contracts with smartphone makers and carriers prevent them from removing or modifying Google products, such as itsAndroid operating systemorGoogle Search; and Google's use of its smartphone patents. A likely outcome of the antitrust investigations was a negotiated settlement in which Google would agree not to discriminate in favor of its products over smaller competitors.[32]The Federal Trade Commission ended its investigation during a period in which Google co-founderLarry Pagehad met with individuals at theWhite Houseand the Federal Trade Commission, leading to voluntary changes by Google; from January 2009 to March 2015, Google employees met with officials at the White House about 230 times, according toThe Wall Street Journal.[33] In June 2015, Google reached an advertising agreement with Yahoo!, which would have allowed Yahoo! to feature Google advertisements on its web pages. The alliance between the two companies was never completely realized because ofantitrustconcerns raised by theU.S. Department of Justice. As a result, Google pulled out of the deal in November 2018.[34][35][36] In September 2023, Google's antitrust trialUnited States v. Google LLC (2020)began in federal court in Washington, D.C.,[37]in which the DOJ accused Google of illegally creating a monopoly by paying billions of dollars to smartphone vendors and mobile carriers to make Google's search engine the default service. The federal court ruled in August 2024 that Google had abused its position in search engines and violated theSherman Act.[38] In January 2023, the DOJfiled a similar lawsuitaccusing Google of monopolizing the digital advertising industry.
The complaint alleged that the company had engaged in "anticompetitive and exclusionary conduct" over the previous 15 years.[39]The trial began on September 9, 2024.[40] On April 20, 2016, the European Union filed a formal antitrust complaint against Google's leverage over Android vendors, alleging that the mandatory bundling of the entire suite of proprietary Google software hindered the ability of competing search providers to be integrated into Android, and that this bundling, together with barring vendors from producing devices running forks of Android, constituted anti-competitive practices.[41]In June 2018, the European Commission imposed a $5 billion fine on Google in connection with the April 2016 complaint.[42] In August 2016, Google was fined US$6.75 million by the RussianFederal Antimonopoly Service(FAS) following similar allegations byYandex.[43] On April 16, 2018,Umar Javeed, Sukarma Thapar, Aaqib Javeed vs. Google LLC & Ors.resulted in theCompetition Commission of India(CCI) ordering a wider probe into allegedly illegal business practices involving Google's Android platform. The order said the CCI's investigations arm should complete the wider probe within 150 days, though such cases at the watchdog typically drag on for years. The CCI also said that the role of any Google executive in the alleged abuse of the Android platform should be examined.[44]Google was fined $275 million in 2023 by the Indian government for issues related to Android and for pushing developers to use its in-app payment system.[45] According to the group of 15 state attorneys general suing Google for antitrust issues,[46]Google and Facebook entered into a price-fixing agreement termedJedi Blueto monopolize the online advertising market and prevent the entry of the fairerheader biddingmethod of advertisement sales on any major advertising platform. The agreement consisted of Facebook using the Google-managed system for bidding on and managing online ads in exchange for preferential rates and priority on prime ad placement. This allowed Google to retain its profitable monopoly over online ad exchanges, while saving Facebook billions of dollars on attempts to build competing systems.[47][48]Over 200 newspapers have sued Google and Facebook to recover losses attributed to the alleged collusion.[49] Google admitted that the deal contained "a provision governing cooperation between Google and Facebook in the event of certain government investigations."[50]Google has an internal team called gTrade dedicated to maximizing Google's advertising profits, reportedly using insider information, price fixing, and leveraging Google's relative monopoly positions.[51] In 2006/2007, a group of Austrian researchers observed a tendency to misuse the Google engine as a "reality interface". Ordinary users as well as journalists tend to rely on the first pages of Google Search, assuming that everything not listed there is either not important or simply does not exist. The researchers say that "Google has become the main interface for our whole reality. To be precise: with the Google interface, the user gets the impression that the search results imply a kind of totality. In fact, one only sees a small part of what one could see if one also integrates other research tools".[52] Eric Schmidt, Google's chief executive, said in a 2007 interview with theFinancial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?'
and 'What job shall I take?'".[53]Schmidt reaffirmed this during a 2010 interview withThe Wall Street Journal: "I actually think most people don't want Google to answer their questions; they want Google to tell them what they should be doing next."[54] Numerous companies and individuals, for example, MyTriggers.com[55]and transport tycoonSir Brian Souter,[56]have voiced concerns regarding the fairness of Google's PageRank and search results after their web sites disappeared from Google's first-page results. In the case of MyTriggers.com, the Ohio-based shopping comparison search site accused Google of favoring its own services in search results (although the judge eventually ruled that the site failed to show harm to other similar businesses). PageRank, Google's page ranking algorithm, can and has been manipulated for political and humorous reasons. To illustrate the view that Google's search engine could be subjected to manipulation, Google Watch implemented aGoogle bombby linking the phrase "out-of-touch executives" to Google's own page on its corporate management. The attempt was mistakenly attributed to disgruntled Google employees byThe New York Times, which later printed a correction.[57][58] Daniel Brandt started the Google Watch website and has criticized Google'sPageRankalgorithms, saying that they discriminate against new websites and favor established sites.[59]Chris Beasley, who started Google Watch-Watch, disagrees, saying that Mr. Brandt overstates the amount of discrimination that new websites face and that new websites will naturally rank lower when the ranking is based on a site's "reputation". In Google's world, a site's reputation is in part determined by how many and which other sites link to it (links from sites with a "better" reputation of their own carry more weight). Since new sites will seldom be as heavily linked as older more established sites, they aren't as well known, won't have as much of a reputation, and will receive a lower page ranking.[60] In testimony before aU.S. Senateantitrustpanel in September 2011, Jeffrey Katz, the chief executive ofNexTag, said that Google's business interests conflict with its engineering commitment to an open-for-all Internet and that: "Google doesn't play fair. Google rigs its results, biasing in favor ofGoogle Shoppingand against competitors like us." Jeremy Stoppelman, the chief ofYelp, said sites like his have to cooperate with Google because it is the gateway to so many users and "Google then gives its own product preferential treatment." In earlier testimony at the same hearing,Eric Schmidt, Google's chairman, said that Google does not "cook the books" to favor its own products and services.[29] Google apologized in 2009 when a picture ofMichelle Obamadigitally altered to appear as a gorilla was among the first images when searching on Google Image.[61] In 2013, Emily McManus, managing editor forTED.com, searched for "english major who taught herself calculus" which prompted Google to ask, "Did you mean: english major who taughthimselfcalculus?"[62]Her tweet of the incident gained traction online. One response included a screengrab of a search for "how much is a wnba ticket?" to which the auto-correct feature suggested, "how much is an nba ticket?" Google responded directly to McManus and explained that the phrase "taught himself calculus" appeared about 282,000 times, whereas the phrase "taught herself calculus" appeared about 4,000 times. 
The company also made note of its efforts to bring more women into STEM fields.[63] In 2015, a man tweeted a screengrab showing that Google Photos had tagged two African American people as gorillas.[64] Google apologized, saying it was "appalled and genuinely sorry" and was "working on longer-term fixes."[65] An investigation by WIRED two years later showed that the company's solution had been to censor searches for "gorilla," "chimp," "chimpanzee," and "monkey."[66] As of 2023, Google Photos software still will not search for gorillas on local photos.[67] In late May 2012, Google announced that it would no longer maintain a strict separation between search results and advertising. Google Shopping (formerly known as Froogle) would be replaced with a nearly identical interface, according to the announcement, but only paid advertisers would be listed instead of the neutral aggregate listings shown previously. Furthermore, rankings would be determined primarily by which advertisers placed the highest "bid", though the announcement did not elaborate on this process. The transition was completed in the fall of 2012.[68] As a result of this change to Google Shopping, Microsoft, which operates the competing search engine Bing, launched a public information campaign titled Scroogled,[69] hiring political campaign strategist Mark Penn to run it.[70] It is unclear how consumers have reacted to this move. Critics charge that Google has effectively abandoned its "Don't be evil" motto and that small businesses will be unable to compete against their larger counterparts. There is also concern that consumers who did not see this announcement will be unaware that they are now looking at paid advertisements and that the top results are no longer determined solely based on relevance but instead will be manipulated according to which company paid the most.[71][72] European Union regulators found in 2017 that Google Shopping links also appear much higher in Google search results.[73] In 2024, some owners of small sites also criticized Google for burying their websites far behind Google Shopping and other results that lack the expertise found in the content of some of the smaller sites.[74] Google's ambitious plans to scan millions of books and make them readable through its search engine have been criticized for copyright infringement.[75] The Association for Learned and Professional Society Publishers and the Association of American University Presses both issued statements strongly opposing Google Print, stating that "Google, an enormously successful company, claims a sweeping right to appropriate the property of others for its own commercial use unless it is told, case by case and instance by instance, not to."[76] In a separate dispute in November 2009, the China Written Works Copyright Society (CWWCS), which protects Chinese writers' copyrights, accused Google of scanning 18,000 books by 570 Chinese writers without authorization for its Google Books library.[77] Toward the end of 2009, representatives of the CWWCS said talks with Google about copyright issues were progressing well, that first they "want Google to admit their mistake and apologize" and then talk about compensation, while at the same time they "don't want Google to give up China in its digital library project". On November 20, 2009, Google agreed to provide a list of Chinese books it had scanned, but did not admit having "infringed" copyright laws.
In a January 9, 2010 statement, the head of Google Books in the Asia-Pacific said "communications with Chinese writers have not been good enough" and apologized to the writers.[78] Kazaa and the Church of Scientology have used the Digital Millennium Copyright Act (DMCA) to demand that Google remove references to allegedly copyrighted material on their sites.[79][80] Search engines such as Google's that link to sites in "good faith" fall under the safe harbor provisions of the Online Copyright Infringement Liability Limitation Act, which is part of the DMCA. If they remove links to infringing content after receiving a takedown notice, they are not liable. Google removes links to infringing content when requested, provided that supporting evidence is supplied. However, it is sometimes difficult to judge whether or not certain sites are infringing, and Google (and other search engines) will sometimes refuse to remove web pages from its index. To complicate matters, there have been conflicting rulings from U.S. courts on whether simply linking to infringing content constitutes "contributory infringement" or not.[81][82] The New York Times has complained that the caching of its content during a web crawl, a feature utilized by search engines including Google Web Search, violates copyright.[83] Google observes Internet standard mechanisms for requesting that caching be disabled, via the robots.txt file, which also allows operators of a website to request that part or all of their site not be included in search engine results, or via META tags, which allow a content editor to specify whether a document can be crawled or archived, or whether the links on the document can be followed. The U.S. District Court of Nevada ruled that Google's caches do not constitute copyright infringement under American law in Field v. Google and Parker v. Google.[84][85] On February 20, 2017, Google agreed to a voluntary United Kingdom code of practice obligating it to demote links to copyright-infringing content in its search results.[86][87] Google Map Maker allows user-contributed data to be put into the Google Maps service.[88] Similar to OpenStreetMap, it includes concepts such as organising mapping parties and mapping for humanitarian efforts.[89] It has been criticized for taking work done for free by the general public and claiming commercial ownership of it without returning any contributions back to the commons,[90] as its restrictive license makes it incompatible with most open projects by preventing commercial use or use by competitive services.[91] Google allegedly used code from Chinese company Sohu's Sogou Pinyin for its own input method editor, Google Pinyin.[92] On February 16, 2016, internet reviewer Doug Walker (The Nostalgia Critic) posted a video about his concerns related to YouTube's copyright-claiming system at the time, which was apparently being tipped in favor of claimants rather than creators despite many of those videos being reported as covered under Fair Use laws. The video featured stories of other YouTubers' experiences with the copyright system, including fellow Channel Awesome producer Brad Jones, who received a strike on his channel for uploading a film review that took place in a parked car and contained no footage from the film itself. In the video, Walker encouraged others to spread the message using the hashtag #WTFU (Where's the Fair Use?)
on social media.[93] The hashtag spread among multiple YouTubers, who gave their support to Walker and Channel Awesome and relayed their own stories of issues with YouTube's copyright system, including Dan Murrell of Screen Junkies,[94] GradeAUnderA, and Let's Play producers Mark Fishbach (Markiplier) and Seán William McLoughlin (Jacksepticeye).[93] Ten days later, on February 26, 2016, YouTube CEO Susan Wojcicki tweeted a link to a post from the YouTube Help Forum and thanked the community for bringing the issue to its attention. The post, written by a member of the YouTube Policy Team named Spencer (no last name was given), stated that the team would be working to strengthen communication between creators and YouTube Support and to make "improvements to increase transparency into the status of monetization claims."[95] Google's March 1, 2012 privacy change enabled the company to share data across a wide variety of services.[97] This includes embedded services in millions of third-party websites using AdSense and Analytics. The policy was widely criticized as creating an environment that discourages Internet innovation by making Internet users more fearful online.[98] In December 2009, after privacy concerns were raised, Google's CEO, Eric Schmidt, declared: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities."[99] Privacy International has raised concerns regarding the dangers and privacy implications of having a centrally located, widely popular data warehouse of millions of Internet users' searches, and how under controversial existing U.S. law, Google can be forced to hand over all such information to the U.S. government.[100] In its 2007 Consultation Report, Privacy International ranked Google as "Hostile to Privacy", the lowest rating in the report, making Google the only company in the list to receive that ranking.[100][101][102] At the Techonomy conference in 2010, Eric Schmidt predicted that "true transparency and no anonymity" is the way forward for the internet: "In a world of asynchronous threats it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it." He also said that "If I look at enough of your messaging and your location, and use artificial intelligence, we can predict where you are going to go. Show us 14 photos of yourself and we can identify who you are. You think you don't have 14 photos of yourself on the internet? You've got Facebook photos!"[103] In 2013, a class-action lawsuit was filed in the Northern District of California, accusing Google of "storing and intentionally, systematically and repeatedly divulging" users' search queries and histories to third-party websites.[104] In 2023, Google agreed to pay a $23 million settlement, amounting to $8 per person.[105] In the summer of 2016, Google quietly dropped its ban on personally identifiable information in its DoubleClick ad service. Google's privacy policy was changed to state it "may" combine web-browsing records obtained through DoubleClick with what the company learns from the use of other Google services.
While new users were automatically opted in, existing users were asked if they wanted to opt in, and it remains possible to opt out by going to the Activity controls in the My Account page of a Google account. ProPublica stated that "The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on your name and other information Google knows about you. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct." Google contacted ProPublica to clarify that it doesn't "currently" use Gmail keywords to target web ads.[106] Google has a US$1.2 billion artificial intelligence and surveillance contract with the Israeli military known as Project Nimbus. According to Google employees, the Israeli military could use this technology to expand its surveillance of Palestinians living in occupied territories.[107] In what has been described as "retaliation for publicly criticizing the contract,"[108] Google relocated an outspoken employee overseas. Other Palestinian employees have described an "institutionalised bias" within the company.[109] On 12 September 2024, Ireland's Data Protection Commission opened an investigation into Google's AI system for potential GDPR violations related to data collection. The probe is part of Europe's broader efforts to regulate AI amid privacy concerns, with Google's PaLM 2 model under review.[110] Google shared environmental activist Disha Ravi's document on Google Docs with the Delhi police, which led to her arrest.[111] Google has been criticized for various instances of censoring its search results, many times in compliance with the laws of various countries, most notably while it operated in China from January 2006 to March 2010. On December 12, 2012, Google changed how its SafeSearch feature applies to image searches in the United States. Prior to the change, three SafeSearch settings—"on", "moderate", and "off"—were available to users. Following the change, two "Filter explicit results" settings—"on" and "off"—were newly established. The former and new "on" settings are similar and exclude explicit images from search results. The new "off" setting still permits explicit images to appear in search results, but users need to enter more specific search requests, and no direct equivalent of the old "off" setting exists following the change. The change brings image search results into line with Google's existing settings for web and video search. Some users have stated that the lack of a completely unfiltered option amounts to "censorship" by Google. A Google spokesperson disagreed, saying that Google is "not censoring any adult content", and "[wants] to show users exactly what they are looking for—but we aim not to show sexually explicit results unless a user is specifically searching for them."[112] The search term "bisexual" was blacklisted for Instant Search until 2012, when it was removed at the request of the BiNet USA advocacy organization.[113] Google has been involved in the censorship of certain sites in specific countries and regions. Until March 2010, Google adhered to the Internet censorship policies of China,[114] enforced by filters colloquially known as "The Great Firewall of China". Google.cn search results were filtered to remove some information perceived to be harmful to the People's Republic of China (PRC).
Google claimed that some censorship was necessary in order to keep the Chinese government from blocking Google entirely, as occurred in 2002.[115] The company claimed it did not plan to give the government information about users who searched for blocked content, and that it would inform users that content had been restricted if they attempted to search for it.[116] As of 2009, Google was the only major China-based search engine to explicitly inform the user when search results were blocked or hidden. As of December 2012, Google no longer informs the user of possible censorship for certain queries during search.[117] Some Chinese Internet users were critical of Google for assisting the Chinese government in repressing its own citizens, particularly those dissenting against the government and advocating for human rights.[118] Furthermore, Google had been denounced and called hypocritical by Free Media Movement for agreeing to China's demands while simultaneously fighting the United States government's requests for similar information.[119] Google China had also been condemned by Reporters Without Borders,[119] Human Rights Watch[120] and Amnesty International.[121] In 2009, China Central Television, Xinhua News Agency, and People's Daily all reported on Google's "dissemination of obscene information", and People's Daily claimed that "Google's 'don't be evil' motto becomes a fig leaf".[122][123] The Chinese government imposed administrative penalties on Google China and demanded a reinforcement of censorship.[124] In 2010, according to a leaked diplomatic cable from the U.S. Embassy in Beijing, there were reports that the Chinese Politburo directed the intrusion into Google's computer systems in a worldwide coordinated campaign of computer sabotage and the attempt to access information about Chinese dissidents, carried out by "government operatives, public security experts and Internet outlaws recruited by the Chinese government."[125] The report suggested that it was part of an ongoing campaign in which attackers have "broken into American government computers and those of Western allies, the Dalai Lama and American businesses since 2002." In response to the attack, Google announced that it was "no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all."[126][127] On March 22, 2010, after talks with Chinese authorities failed to reach an agreement, the company redirected its censor-complying Google China service to its Google Hong Kong service, which is outside the jurisdiction of Chinese censorship laws. From the business perspective, many recognized that the move was likely to affect Google's profits: "Google is going to pay a heavy price for its move, which is why it deserves praise for refusing to censor its service in China."[128] However, at least as of March 23, 2010, "The Great Firewall" continued to censor search results from the Hong Kong portal, www.google.com.hk (as it does with the US portal, www.google.com) for controversial terms such as "Falun gong" and "the June 4 incident" (1989 Tiananmen Square protests and massacre).[129][130][131] In 2018, Lhadon Tethong, director of the Tibet Action Institute, said there was a "crisis of repression unfolding across China and territories it controls"
and that, "it is shocking to know that Google is planning to return to China and has been building a tool that will help the Chinese authorities engage in censorship and surveillance." She further noted that "Google should be using its incredible wealth, talent, and resources to work with us to find solutions to lift people up and help ease their suffering — not assisting the Chinese government to keep people in chains."[132] In 2024, a Google accelerator program was reported to have provided support to a Chinese company that provides surveillance equipment to police in China.[133] Google has been involved in censorship of Google Maps satellite imagery countrywide affecting Android and iOS apps using .com, .tr, and .tld automatically. Desktop users can easily evade this censorship by just removing .tr, and .tld from the URL but the same technique is impossible with smartphone apps.[134] Google removed theSmart Votingapp from the Play Store before the2021 Russian legislative election. The application, which had been created by the associates of the imprisoned opposition leaderAlexei Navalny, offered voting advice for all voting districts in Russia. It was removed after a meeting with Russian Federation Council officials on 16 September 2021. The Wired reported that several Google employees were threatened with criminal prosecution. Google's actions were condemned as political censorship by Russian opposition figures.[135] In March 2022, Google removed an app, designed to help Russians register protest votes against Putin, from its Play Store.[136] In February 2003, Google stopped showing the advertisements ofOceana, a non-profit organization protesting amajor cruise ship operation's sewage treatment practices. Google cited its editorial policy at the time, stating "Google does not accept advertising if the ad or site advocates against other individuals, groups, or organizations."[137]The policy was later changed.[138] In April 2008, Google refused to run ads for a UK Christian group opposed to abortion, explaining that "At this time, Google policy does not permit the advertisement of websites that contain 'abortion and religion-related content.'" The UK Christian group sued Google for discrimination, and as a result, in September 2008 Google changed its policy and anti-abortion ads were allowed.[139] In August 2008, Google closed the AdSense account of a site that carried a negative view ofScientology, the second closing of such a site within 3 months.[140]It is not certain if the account revocations actually were on the grounds of anti-religious content, however, the cases have raised questions about Google's terms in regards to AdSense/AdWords. The AdSense policy states that "Sites displaying Google ads may not include [...] advocacy against any individual, group, or organization",[141]which allows Google to revoke the above-mentioned AdSense accounts. In May 2011, Google cancelled the AdWord advertisement purchased by a Dublinsex workers' rightsgroup named "Turn Off the Blue Light" (TOBL),[142]claiming that it represented an "egregious violation" of company ad policy by "selling adult sexual services". 
However, TOBL is a nonprofit campaign for sex worker rights and is not advertising or selling adult sexual services.[143] In July, after TOBL members held a protest outside Google's European headquarters in Dublin and wrote to complain, Google relented, reviewed the group's website, found its content to be advocating a political position, and restored the AdWords advertisement.[144] In June 2012, Google rejected the Australian Sex Party's ads for AdWords and sponsored search results for the July 12 by-election for the state seat of Melbourne, saying the Party had breached its rules, which prevent solicitation of donations by a website that does not display tax-exempt status. Although the Sex Party amended its website to display tax deductibility information, Google continued to ban the ads. The ads were reinstated on election eve after it was reported in the media that the Sex Party was considering suing Google. On September 13, 2012, the Party lodged formal complaints against Google with the US Department of Justice and the Australian competition watchdog, accusing Google of "unlawful interference in the conduct of a state election in Victoria with corrupt intent" in violation of the Foreign Corrupt Practices Act.[145] YouTube is a video-sharing website acquired by Google in 2006. YouTube's Terms of Service prohibit the posting of videos which violate copyrights or depict pornography, illegal acts, gratuitous violence, or hate speech.[146] User-posted videos that violate such terms may be removed and replaced with a message stating: "This video is no longer available because its content violated YouTube's Terms of Service". YouTube has been criticized by national governments for failing to police content. For example, it has been criticized for leaving up videos[147] featuring unwarranted violence or strong ill-intent toward people who likely did not want them published. In 2006, Thailand blocked access to YouTube for users with Thai IP addresses. Thai authorities identified 20 offensive videos and demanded that YouTube remove them before it would unblock any YouTube content.[148] In 2007, a Turkish judge ordered access to YouTube blocked because of content that insulted Mustafa Kemal Atatürk, which is a crime under Turkish law.[148] On February 22, 2008, the Pakistan Telecommunication Authority (PTA) attempted to block regional access to YouTube following a government order. The attempt inadvertently caused a worldwide YouTube blackout that took two hours to correct.[149] Four days later, the PTA lifted the ban after YouTube removed controversial religious comments made by a Dutch Member of Parliament[150] concerning Islam.[151] YouTube has also been criticized by its users for attempting to censor content.
In November 2007, the account of Wael Abbas, a well-known Egyptian activist who posted videos of police brutality, voting irregularities and anti-government demonstrations, was blocked for three days.[152][153][154] In February 2008, a video produced by the American Life League that accused a Planned Parenthood television commercial of promoting recreational sex was removed, then reinstated two days later.[155] In October, a video by political speaker Pat Condell criticizing the British government for officially sanctioning sharia law courts in Britain was removed, then reinstated two days later.[156] YouTube also pulled a video of columnist Michelle Malkin showing violence by Muslim extremists.[157] Siva Vaidhyanathan, a professor of Media Studies at the University of Virginia, commented that while, in his opinion, Michelle Malkin disseminates bigotry in her blog, "that does not mean that this particular video is bigoted; it's not. But because it's by Malkin, it's a target."[158] In 2019, YouTube reached a $170 million settlement with the FTC and the New York Attorney General over alleged violations of the US Children's Online Privacy Protection Act (COPPA), which prohibits internet companies from collecting data from children under 13. YouTube's implementation of the settlement started in January 2020; this required creators to indicate whether their videos were intended for children, with fines of up to $42,530 per violation of COPPA.[159] Some features that depend on user data are disabled on videos designated for children, including comments and channel branding watermarks; the 'donate' button; cards and end screens; live chat and live chat donations; notifications; and 'save to playlist' or 'watch later' features. Such channels will also become "ungooglable".[159] In October 2021, YouTube, together with Snapchat and TikTok, participated in a Senate hearing on protecting children online.[160] The session was prompted by the prior hearing of Facebook whistleblower Frances Haugen. In the hearing, the social media companies tried to distance themselves from Facebook, to which Senate Commerce consumer protection Chair Richard Blumenthal responded by saying "Being different from Facebook is not a defense" and "That bar is in the gutter."[161] In 2013, Google successfully prevented the Swedish Language Council from including the Swedish version of the word "ungoogleable" ("ogooglebar[sv]") in its list of new words.[162] Google objected to its definition (which referred to web searches in general without mentioning Google specifically) and the council was forced to remove it to avoid legal confrontation with Google.[163] The council also accused Google of "trying to control the Swedish language".[164] In August 2022, Google closed a person's account after he shared pictures of his son's genitals with a doctor; the images were flagged as child abuse by Google's automated systems.[165] Several former Google employees have spoken out about working conditions, practices, and ethics at the company. As the company became more concerned about leaks to the press in 2019, it scaled back employee all-hands meetings from weekly to monthly, limiting question topics to business and product strategy.[166] Google CEO Sundar Pichai told employees in late 2019 that the company was "genuinely struggling with some issues", including transparency and employee trust.[167] On 2 December 2020, the National Labor Relations Board (NLRB) filed a complaint against Google for 'terminations and intimidation in order to quell workplace activism'.
The complaint was filed after a year-long investigation triggered by a terminated employee, who had filed a petition in 2019 after many Google employees carried out internal protests against Google's work with US Customs and Border Protection.[168] A widely circulated internal memo written by senior engineer James Damore, Google's Ideological Echo Chamber, sharply criticized Google's political biases and employee policies.[169] Google said the memo was "advancing harmful gender stereotypes" and fired Damore.[170] David Brooks demanded the resignation of Google CEO Sundar Pichai for mishandling the case.[171][172] Ads criticizing Pichai and Google for the firing were put up shortly after at various Google locations.[173] Some called for a boycott of Google and its services, with the hashtag #boycottGoogle circulating on Twitter.[174] A rally against Google's alleged partisanship was planned as the "March on Google" but was later cancelled due to threats and the recent violence in Charlottesville.[175][176] Arne Wilberg, an ex-YouTube recruiter, claimed that he was fired in November 2017 after he complained about Google's new practice of passing over white and Asian male applicants for YouTube positions in favor of women and minority applicants. According to the lawsuit, an internal policy document stated that for three months in 2017, YouTube recruiters should only hire diverse candidates.[177] In June 2021, Google removed its global lead for diversity strategy and research after being made aware of an antisemitic comment he made in 2007.[178] In February 2016, Amit Singhal, vice president of Google Search for 15 years, left the company following sexual harassment allegations. Google awarded Singhal $15 million in severance.[180][181] On November 1, 2018, approximately 20,000 employees of Google engaged in a worldwide[182] walkout to protest the way in which the company had handled sexual harassment and other grievances.[183][184][185][186][187] In July 2019, Google settled a long-running age discrimination lawsuit brought by 227 over-40 employees and job seekers. Although Google denied engaging in age discrimination, it agreed to a settlement of $11 million for the plaintiffs, to train its employees not to have age-based bias, and to have its recruiting department focus on age diversity among its engineering employees.[188][189] In January 2020, the San Francisco Pride organization voted to ban Google and YouTube from its annual Pride parade due to hate speech on their platforms and retaliation against LGBTQ activists.[190] In 2020, HR executive Eileen Naughton joined long-time Chief Legal Counsel David Drummond in stepping down from their positions over a lawsuit naming them and the company founders in accusations of mishandling years of sexual harassment complaints.[191] In February 2020, the U.S.
Equal Employment Opportunity Commission (EEOC) opened an investigation into former Google employee Chelsey Glasson's allegations of pregnancy discrimination.[192] Glasson filed a state civil lawsuit while the EEOC investigated, with a trial date set for January 2022.[193][194][195] She settled with the company in February 2022.[196] She revealed that Google's legal team obtained therapy notes from her sessions through the company's Employee assistance program counseling provider, and that the provider dropped her as a client when she filed the lawsuit, which prompted Senator Karen Keiser to introduce a bill in Washington in January 2022 to prohibit private sector providers from disclosing private information typically covered under Health Insurance Portability and Accountability Act laws.[197][198][199] Also in January 2022, she criticized the company's use of non-disclosure agreements (NDAs) in testimony to the Washington House of Representatives in support of whistleblower protection legislation, which she said intimidated her from speaking out about the discrimination she allegedly witnessed and experienced. In response, Google told Protocol that its confidentiality agreements do not prevent current and former workers from disclosing facts pertaining to harassment or discrimination.[200] Both bills were passed into law in March 2022.[201][202] The official settlement agreement that Google signed with the NLRB in 2019 includes this notice to be sent to employees:[203] "YOU HAVE THE RIGHT to discuss wages, hours, and working conditions with other employees, the press/media, and other third parties, and WE WILL NOT do anything to interfere with your exercise of those rights." Google has been criticized for hiring IRI Consultants, a firm that advertises its accomplishments in helping organizations prevent successful union organizing.[204] Google Zurich attempted to cancel employee-organized meetings about labor rights in June and October 2019.[205] Some Google employees and contractors are already unionized, including security guards, some service workers, and analysts and trainers for Google Shopping in Pittsburgh employed by contractor HCL.[206] In 2021, court documents revealed that between 2018 and 2020 Google ran an anti-union campaign called Project Vivian to "convince [employees] that unions suck".[207] As of December 2019, the National Labor Relations Board was investigating whether several firings were in retaliation for labor organizing-related activities.[208][209] One of the fired employees was tasked with informing her colleagues about Google policy changes, and created a message informing them that they "have the right to participate in protected concerted activities" when they visited the IRI Consultants site.[210][211] In 2020, the Australian Strategic Policy Institute accused at least 82 major brands, including Google, of being connected to forced Uyghur labor in Xinjiang.[212] Google cut its taxes by $3.1 billion in the period of 2007 to 2009 using a technique that moved most of its foreign profits through Ireland and the Netherlands to Bermuda. Afterwards, the company started to send £8 billion in profits a year to Bermuda.[213] Google's income shifting—involving strategies known to lawyers as the "Double Irish" and the "Dutch Sandwich"—helped reduce its overseas tax rate to 2.4 percent, the lowest of the top five U.S.
technology companies by market capitalization, according to regulatory filings in six countries.[214][215] According to economist Paul Tang, a member of the PvdA delegation within the Progressive Alliance of Socialists & Democrats (S&D) in the European Parliament, the EU lost an estimated 3.955 billion euros in tax revenue from Google between 2013 and 2015.[216] The EU taxes Google at a rate of only 0.36–0.82% of its revenue (approximately 25–35% of its EBT), whereas this rate is near 8% in countries outside the EU. Even if a rate of 2 to 5% – as suggested by the ECOFIN council – had been applied during this period (2013–2015), evasion at this rate by Facebook would have meant a loss of 1.262 to 3.155 billion euros in the EU.[216] Google has been accused by a number of countries of avoiding paying tens of billions of dollars of tax through a convoluted scheme of inter-company licensing agreements and transfers to tax havens.[217][218] For example, Google has used highly contrived and artificial distinctions to avoid paying billions of pounds in corporate tax owed by its UK operations.[219] On May 15, 2013, Margaret Hodge, the chair of the United Kingdom Public Accounts Committee, accused Google of being "calculated and [...] unethical" over its use of the scheme.[219] Google Chairman Eric Schmidt claimed that this scheme is "capitalism"[220] and that he was "very proud" of it.[221] In November 2012, the UK government announced plans to investigate Google, along with Starbucks and Amazon.com, for possible tax avoidance.[222] In 2015, the UK Government introduced a new law intended to penalize Google's and other large multinational corporations' artificial tax avoidance.[223] On 20 January 2016, Google announced that it would pay £130 million in back taxes to settle the investigation.[224] However, only eight days later, it was announced that Google could end up paying more, and UK tax officials were under investigation for what has been termed a "sweetheart deal" for Google.[225] In 2018, former Deputy Defense Secretary Robert O. Work criticized Google and its employees, saying they had stepped into a moral hazard by not continuing the Pentagon's artificial intelligence project, Project Maven,[226] while helping advance China's AI technology, which "could be used against the United States in a conflict." He described Google as hypocritical, given that it has opened an AI center in China, saying "Anything that's going on in the AI center in China is going to the Chinese government and then will ultimately end up in the hands of the Chinese military." Work said, "I didn't see any Google employee saying, 'Hmm, maybe we shouldn't do that.'" Google's dealings with China were decried as unpatriotic.[227][228][229] Chairman of the Joint Chiefs of Staff General Joseph Dunford also criticized Google, saying "it's inexplicable" that it continues investing in China, "who uses censorship technology to restrain freedoms and crackdown on people there and has long history of intellectual property and patent theft which hurts U.S. companies," while simultaneously not renewing further research and development collaborations with the Pentagon. He said, "I'm not sure that people at Google will enjoy a world order that is informed by the norms and standards of Russia or China." He urged Google to work directly with the U.S. government instead of making controversial inroads into China.[230] Senator Mark Warner (D-VA) criticized Dragonfly as evidence of China's success at "recruit[ing] U.S.
companies to their information control efforts" while China exports cyber and censorship infrastructure to countries like Venezuela, Ethiopia, and Pakistan.[231] Google has been criticized for the high amount of energy used to maintain its servers,[232] but was praised by Greenpeace for the use of renewable sources of energy to run them.[233] Google has pledged to spend millions of dollars to investigate cheap, clean, renewable energy, and has installed solar panels on the roofs at its Mountain View facilities.[234][235] In 2010, Google also invested $39 million in wind power.[236] In 2023, Google and Microsoft each consumed 24 TWh of electricity, more than countries such as Iceland, Ghana, the Dominican Republic, or Tunisia.[237] Google's water usage has also drawn attention: in its annual sustainability report, the company stated that it used roughly 22 million m³ of water in 2023, approximately 20% more than the year prior. Most of this was used to cool its data centers. It has pledged to replenish 120% of the freshwater consumed for cooling its data centers by 2030, but in 2022 only 6% was replenished. The data center water consumption issue is not exclusive to Google.[238] In late 2013, activists in the San Francisco Bay Area began protesting the use of shuttle buses by Google and other tech companies, viewing them as symbols of gentrification and displacement in a city where the rapid growth of the tech sector has driven up housing prices.[239][240] On August 15, 2007, Google discontinued its Download-to-own/Download-to-rent (DTO/DTR) program.[241] Some videos previously purchased for ownership under that program were no longer viewable when the embedded digital rights management (DRM) licenses were revoked. Google gave refunds for the full amount spent on videos using "gift certificates" (or "bonuses") to their customers' "Google Checkout Account".[242][243] After a public uproar, Google issued full refunds to the credit cards of the Google Video users without revoking the gift certificates. For some search results, Google provides a secondary search box that can be used to search within a website identified from the first search. This sparked controversy among some online publishers and retailers. When performing a second search within a specific website, advertisements from competing and rival companies often showed up together with the results from the website being searched. This has the potential to draw users away from the website they were originally searching.[244] "While the service could help increase traffic, some users could be siphoned away as Google uses the prominence of the brands to sell ads, typically to competing companies."[245] To address this controversy, Google has offered to turn off the feature for companies that request its removal.[245] According to software engineer Ben Lee and Product Manager Jack Menzel, the idea for search within search originated from the way users were searching. It appeared that users were often not finding exactly what they needed while trying to explore within a company site. "Teleporting" on the web, where users need only type part of the name of a website into Google (no need to remember the entire URL) in order to find the correct site, is what helps Google users complete their search.
Google took this concept a step further: instead of just "teleporting", users could type in keywords to search within the website of their choice.[246] Google was criticized for naming its programming language "Go" when there was already an existing programming language called "Go!".[247][248][249] Google's Street View has been criticized for providing information that could potentially be useful to terrorists. In the United Kingdom during March 2010, Liberal Democrat MP Paul Keetch and unnamed military officers criticized Google for including pictures of the entrance to the British Army Special Air Service (SAS) base, stating that terrorists might use the information to plan attacks. Google responded that it "only takes images from public roads and this is no different to what anyone could see traveling down the road themselves, therefore there is no appreciable security risk." Military sources stated that "It is highly irresponsible for military bases, especially special forces, to be pictured on the internet. [...] The question is, why risk a very serious security breach for the sake of having a picture on a website?"[250][251] Google was subsequently forced to remove images of the SAS base and other military, security and intelligence installations, admitting that its trained drivers had failed to avoid taking photographs in areas banned under the Official Secrets Act.[252] In 2008, Google complied with requests from the Pentagon to remove Street View images of the entrances to military bases.[253][254] Despite being one of the world's largest and most influential companies, Google, unlike many other technology companies, does not disclose its political spending. In August 2010, New York City Public Advocate Bill de Blasio launched a national campaign urging the corporation to disclose all of its political spending.[255] In the 2010s, Google spent about $150 million on lobbying, largely related to privacy protections and regulation of monopolies.[256][257] Google sponsors several non-profit lobbying groups, such as the Coalition for a Digital Economy (Coadec) in the UK.[258] Google has sponsored meetings of the conservative Competitive Enterprise Institute, whose speakers have included Rand Paul, the libertarian Republican, Tea Party member, and Senator for Kentucky.[259] Peter Thiel stated that Google had too much influence on the Obama administration, claiming that the company "had more power under Obama than Exxon had under Bush 43".[260] There are many revolving door examples between Google and the U.S. government. These include: 53 revolving door moves between Google and the White House; 22 former White House officials who left the administration to work for Google and 31 Google executives who joined the White House;[261] 45 Obama for America campaign staffers leaving for Google or Google-controlled companies; 38 revolving door moves between Google and government positions involving national security, intelligence or the Department of Defense;[262] 23 revolving door moves between Google and the State Department; and 18 Pentagon officials moving to Google.
As of 2018, studies found that employees of Alphabet donated largely to support the election of candidates from the Democratic Party.[263] In 2023, Alphabet lobbied on antitrust issues and three particular antitrust bills, spending $7.43 million lobbying the federal government in the first quarter of 2023 and more money in the second quarter of 2023 than in any quarter since 2018.[37] In 2013, Google joined the American Legislative Exchange Council (ALEC).[264][265] In September 2014, Google chairman Eric Schmidt announced the company would leave ALEC, saying the group was lying about climate change and "hurting our children".[266] In 2018, Google started an oil, gas, and energy division, hiring Darryl Willis, a 25-year BP executive whose hiring The Wall Street Journal said was intended "to court the oil and gas industry."[267] Google Cloud signed an agreement with the French oil company Total S.A. "to jointly develop artificial intelligence solutions for subsurface data analysis in oil and gas exploration and production."[268] A partnership with Houston oil investment bank Tudor, Pickering, Holt & Co. was described by the Houston Chronicle as giving Google "a more visible presence in Houston as one of its oldest industries works to cut costs in the wake of the oil bust and remain competitive as electric vehicles and renewable power sources gain market share."[269] Other agreements were made with oilfield services companies Baker Hughes and Schlumberger,[269] and Anadarko Petroleum, to use "artificial intelligence to analyse large volumes of seismic and operational data to find oil, maximise output and increase efficiency,"[270] and negotiations were started with petroleum giant Saudi Aramco.[271] In 2019, Google was criticised for sponsoring a conference that included a session promoting climate change denial. LibertyCon speaker Caleb Rossiter belongs to the CO2 Coalition, a nonprofit that advocates for more carbon dioxide in the atmosphere.[272] In November 2019, over 1,000 Google employees demanded that the company commit to zero emissions by 2030 and cancel contracts with fossil fuel companies.[273] In February 2022, the NewClimate Institute, a German environmental policy think tank, published a survey evaluating the transparency and progress of the climate strategies and carbon neutrality pledges announced by 25 major companies in the United States; the survey found that Alphabet's carbon neutrality pledge and climate strategy were unsubstantiated and misleading.[274][275] In April 2022, Alphabet, Meta Platforms, Shopify, McKinsey & Company, and Stripe, Inc. announced a $925 million advance market commitment to purchase carbon dioxide removal (CDR) from companies that are developing CDR technology over the next 9 years.[276][277] In January 2023, the American Clean Power Association released an annual industry report that found that 326 corporations had contracted 77.4 gigawatts of wind or solar energy by the end of 2022 and that the three corporate purchasers of the largest volumes of wind and solar energy were Alphabet, Amazon, and Meta Platforms.[278] In April 2020, Extinction Rebellion launched "agreenergoogle.com", a spoof website containing a fake announcement by Google CEO Sundar Pichai claiming that "they would stop funding of organizations that deny or work to block action on climate change, effective immediately".[279][280] Most YouTube videos allow users to leave comments, and these have attracted attention for the negative aspects of both their form and content.
In 2006, Time praised Web 2.0 for enabling "community and collaboration on a scale never seen before", and added that YouTube "harnesses the stupidity of crowds as well as its wisdom. Some of the comments on YouTube make you weep for the future of humanity just for the spelling alone, never mind the obscenity and the naked hatred".[281] The Guardian in 2009 described users' comments on YouTube as: "Juvenile, aggressive, misspelled, sexist, homophobic, swinging from raging at the contents of a video to providing a pointlessly detailed description followed by a LOL, YouTube comments are a hotbed of infantile debate and unashamed ignorance – with the occasional burst of wit shining through."[282] In September 2008, The Daily Telegraph commented that YouTube was "notorious" for "some of the most confrontational and ill-formed comment exchanges on the internet", and reported on YouTube Comment Snob, "a new piece of software that blocks rude and illiterate posts".[283] The Huffington Post noted in April 2012 that finding comments on YouTube that appear "offensive, stupid and crass" to the "vast majority" of the people is hardly difficult.[284] On November 6, 2013, Google implemented a new comment system that required all YouTube users to use a Google+ account to comment on videos, thereby making the comment system Google+-oriented.[285] The corporation stated that the change was necessary to personalize comment sections for viewers, but it elicited an overwhelmingly negative public response—YouTube co-founder Jawed Karim also expressed disdain by writing on his channel: "why the fuck do I need a Google+ account to comment on a video?"[286] The official YouTube announcement received over 62,000 "thumbs down" votes and only just over 4,000 "thumbs up" votes, while an online petition demanding Google+'s removal gained more than 230,000 signatures in just over two months.[287][288] Writing in the Newsday blog Silicon Island, Chase Melvin noted: "Google+ is nowhere near as popular a social media network as Facebook, but it's essentially being forced upon millions of YouTube users who don't want to lose their ability to comment on videos."[289] In the same article, Melvin added: "Perhaps user complaints are justified, but the idea of revamping the old system isn't so bad. Think of the crude, misogynistic and racially-charged mudslinging that has transpired over the last eight years on YouTube without any discernible moderation. Isn't any attempt to curb unidentified libelers worth a shot? The system is far from perfect, but Google should be lauded for trying to alleviate some of the damage caused by irate YouTubers hiding behind animosity and anonymity."[289] On July 27, 2015, Google announced that Google+ would no longer be required for using various services, including YouTube.[290][291] Google has supported net neutrality in the US, while opposing it in India by supporting zero-rating.[292] On April 1, 2016, the Mic Drop April Fools' joke in Gmail caused problems for users who accidentally clicked the button Google had installed on that occasion.[293] The New York Times reported that Google had pressured the New America think tank, which it supports financially, to remove a statement supporting the EU antitrust fine against Google.
After Eric Schmidt voiced his displeasure with the statement, the entire research group involved was sidelined within the New America think tank, which gets funding from Google.[294][295] Consequently, the Open Markets research group went on to open its own think tank, which does not receive any funding from Google.[295] Google's attempt to patent a video-compression application of ANS coding, which is now widely used in products of companies such as Apple, Facebook, and Google, attracted wide attention in Polish media. The method's author had helped Google with this adaptation for three years through a public forum but was not included in the patent application. He was supported in fighting the patent by his employer, Jagiellonian University.[296][297][298][299][300] Google's huge share of spatial information services, including Google Maps and the Google Places API, has been criticised by activists and academics in terms of the cartographic power it affords Google to map and represent the world's cities.[301] In addition, given Google and Alphabet Inc.'s increasing involvement with urban planning, particularly through subsidiaries like Sidewalk Labs,[302] this has resulted in criticism that Google is exerting an increasing power over urban areas that may not be beneficial to democracy in the long term.[303][304] This criticism is also related to wider concerns around democracy and Smart Cities that have been directed at a number of other large corporations.[305][306] On 10 December 2018, a New Zealand court ordered that the name of a man accused of murdering British traveller Grace Millane be withheld from the public (a gag order). The next morning, Google named the man in an email it sent people who had subscribed to "what's trending in New Zealand".[307] Lawyers warned that this could compromise the trial, and Justice Minister Andrew Little said that Google was in contempt of court.[308][309] Google said that it had been unaware of the court order, and that the email had been created by algorithms. In 2016, Google filed a patent application for interactive pop-up books with electronics.[310] Jie Qi noticed that the patent resembled work she had shared when she visited Google ATAP in 2014 as a PhD student at the MIT Media Lab; two of the Google employees listed on the application as inventors had also interviewed her during the same visit. After Qi submitted prior art to the USPTO, the application was abandoned.[311][312] Project Nightingale is a health care data-sharing project financed by Google and Ascension, a Catholic health care system that is the second largest in the United States. Ascension owns comprehensive health care information on millions of former and current patients who are part of its system. Google and Ascension have been processing this data, in secret, since sometime in 2018, without the knowledge and consent of patients and doctors. The work they are doing appears to comply with federal health care law, which includes "robust protections for patient data."[313][314][315] However, concerns have been voiced about whether the transfer really is HIPAA compliant.[316] The project is Google's attempt to gain a large-scale foothold in the healthcare industry.[313] In 2020, Google-owned YouTube changed its policy so that it could include ads on all videos, regardless of whether the content creator wanted them or not. Those who were not part of Google's Partner Program would receive no revenue for this.
To join the program, creators must have more than 1,000 subscribers and 4,000 hours of viewed content in the last 12 months.[317][318] In November 2023, YouTube users running various ad blockers in conjunction with the Firefox web browser began reporting a delay of approximately five seconds before a video would actually start playing, which was further confirmed by analysis of YouTube's obfuscated source script and then by Google itself.[319] Reportedly, changing the user agent string to Chromium/Google Chrome resolved the issue.[320] This was at a time when Google had also announced that, starting in June 2024, Chrome would no longer run browser extensions that use the Manifest Version 2 standard, in favor of a new version that would severely limit the capabilities of ad blockers (other than, e.g., DNS blocklists) relying on their hitherto standard solutions.[321] In November 2021, YouTube rolled out an update to the website which prevented users from seeing how many dislikes a video had, with only the creator of the video being able to see the count. The decision was made to counteract "dislike-bombing", in which users make a coordinated effort to dislike a video en masse. This led to significant controversy, as the move was seen by many as undemocratic.[322][323] In March 2022, the Department of Justice and 14 state attorneys general accused Google of misusing attorney–client privilege to hide emails from subpoenas using an employee policy called 'Communicate with Care,' which instructs employees to carbon copy (CC) Google's attorneys on emails and flag them as exempt from disclosure. Employees are directed to add a general request for the attorney's advice even when no legal advice is needed or sought. Often Google's lawyers will not respond to such requests, which the Justice Department claimed shows they understand and are participating in the evasion.[324] In 2024, the Kremlin fined Google 2.5 decillion rubles for removal of news sources. Kremlin spokesman Dmitry Peskov admitted he "cannot even pronounce this number" but urged "Google management to pay attention".[325] In May 2023, Google announced that deletion of inactive user accounts would occur starting in December 2023, citing security reasons and noting that old and unused accounts are more likely to be compromised. Google claimed that "Forgotten or unattended accounts often rely on old or re-used passwords that may have been compromised, haven't had two factor authentication set up, and receive fewer security checks by the user," while saying that Google "has no plans to delete YouTube videos".[326][327][328] The decision to delete inactive accounts has sparked some criticism and backlash. The cited security rationale behind the decision was ridiculed and compared to a hypothetical scenario in which a bank is burned down because it is not secure against robbers.[329] Moreover, the Anonymous hacktivist collective has protested against the decision to delete inactive accounts multiple times, describing it as "harsh" and saying that the decision would "destroy history".[330][331][332]
https://en.wikipedia.org/wiki/Criticism_of_Google
The cut-up technique (or découpé in French) is an aleatory narrative technique in which a written text is cut up and rearranged to create a new text. The concept can be traced to the Dadaists of the 1920s, but it was developed and popularized in the 1950s and early 1960s, especially by writer William Burroughs. It has since been used in a wide variety of contexts. The cut-up and the closely associated fold-in are the two main techniques: in a cut-up, a finished, linear text is cut into pieces of one or a few words each and the pieces are rearranged into a new text; in a fold-in, two sheets of text are each folded vertically down the middle and combined so that half of one page reads across into half of the other. William Burroughs cited T. S. Eliot's 1922 poem, The Waste Land, and John Dos Passos' U.S.A. trilogy, which incorporated newspaper clippings, as early examples of the cut-ups he popularized. Gysin introduced Burroughs to the technique at the Beat Hotel. The pair later applied the technique to printed media and audio recordings in an effort to decode the material's implicit content, hypothesizing that such a technique could be used to discover the true meaning of a given text. Burroughs also suggested cut-ups may be effective as a form of divination, saying, "When you cut into the present the future leaks out."[3] Burroughs further developed the "fold-in" technique. In 1977, Burroughs and Gysin published The Third Mind, a collection of cut-up writings and essays on the form. Jeff Nuttall's publication My Own Mag was another important outlet for the then-radical technique. In an interview, Alan Burns noted that for Europe After The Rain (1965) and subsequent novels he used a version of cut-ups: "I did not actually use scissors, but I folded pages, read across columns, and so on, discovering for myself many of the techniques Burroughs and Gysin describe".[4] A precedent of the technique occurred during a Dadaist rally in the 1920s in which Tristan Tzara offered to create a poem on the spot by pulling words at random from a hat. Collage, which was popularized roughly contemporaneously with the Surrealist movement, sometimes incorporated texts such as newspapers or brochures. Prior to this event, the technique had been published in an issue of 391, in Tzara's poem "dada manifesto on feeble love and bitter love", under the sub-title "TO MAKE A DADAIST POEM".[5][1] In the 1950s, painter and writer Brion Gysin more fully developed the cut-up method after accidentally rediscovering it. He had placed layers of newspapers as a mat to protect a tabletop from being scratched while he cut papers with a razor blade. Upon cutting through the newspapers, Gysin noticed that the sliced layers offered interesting juxtapositions of text and image. He began deliberately cutting newspaper articles into sections, which he randomly rearranged. The book Minutes to Go resulted from his initial cut-up experiment: unedited and unchanged cut-ups which emerged as coherent and meaningful prose. South African poet Sinclair Beiles also used this technique and co-authored Minutes To Go. Argentine writer Julio Cortázar used cut-ups in his 1963 novel Hopscotch. In 1969, poets Howard W. Bergerson and J. A. Lindon developed a cut-up technique known as vocabularyclept poetry, in which a poem is formed by taking all the words of an existing poem and rearranging them, often preserving the metre and stanza lengths.[6][7][8] A drama scripted for five voices by performance poet Hedwig Gorski in 1977 originated the idea of creating poetry only for performance instead of for print publication.
The "neo-verse drama" titledBooby, Mama!written for "guerilla theater" performances in public places used a combination of newspaper cut-ups that were edited and choreographed for a troupe of non-professional street actors.[9][10] Kathy Acker, a literary and intermedia artist, sampled external sources and reconfigured them into the creation of shifting versions of her own constructed identity. In her late 1970s novelBlood and Guts in High School, Acker explored literary cut-up and appropriation as an integral part of her method.[11] Antony Balchand Burroughs created a collaborative film,The Cut-Ups,[12]which opened in London in 1967. This was part of an abandoned project calledGuerrilla Conditionsmeant as a documentary on Burroughs and filmed throughout 1961–1965. Inspired by Burroughs' and Gysin's technique of cutting up text and rearranging it in random order, Balch had an editor cut his footage for the documentary into little pieces and impose no control over its reassembly.[13]The film opened atOxford Street's Cinephone cinema and provoked a disturbed reaction from its audience. Many audience members claimed the film made them ill, others demanded their money back, while some just stumbled out of the cinema ranting "it's disgusting".[12]Other cut-up films includeGhost at n°9 (Paris)(1963–1972), a posthumously released short film compiled from reels found at Balch's office after his death, andWilliam Buys a Parrott(1982),Bill and Tony(1972),Towers Open Fire(1963) andThe Junky's Christmas(1966).[14] In 1962, the satirical comedy groupBonzo Dog Doo-Dah Band got their name after using the cut-up technique, resulting in "Bonzo Dog Dada":[15]"Bonzo Dog", after the cartoonBonzo the Dog, and "Dada" after theDadaavant-gardeart movement. The group's eventual frontman,Vivian Stanshall, was quoted as saying he had wanted to form a band with that name.[15]The "Dada" in the phrase was eventually changed to "Doo-Dah". From the early 1970s,David Bowieused cut-ups to create some of his lyrics. In 1995, he worked with Ty Roberts to develop a program calledVerbasizerfor his Apple PowerBook that could automatically rearrange multiple sentences written into it.[16]Thom Yorkeapplied a similar method onRadiohead's albumKid A(2000), writing single lines, putting them into a hat, and drawing them out at random while the band rehearsed the songs. Perhaps indicative of Thom Yorke's influences,[17]instructions for "How to make a Dada poem" appeared on Radiohead's website at this time. Stephen MallinderofCabaret Voltairereported toInpressmagazine'sAndrez Bergenthat "I do think the manipulation of sound in our early days – the physical act of cutting up tapes, creatingtape loopsand all that – has a strong reference to Burroughs and Gysin."[18]Anotherindustrial musicpioneer,Al JourgensenofMinistry, named Burroughs and his cut-up technique as the most important influence on how he approached the use of samples.[19] ManyElephant 6bands used découpé as well; one prominent example is "Pree-Sisters Swallowing A Donkey's Eye" byNeutral Milk Hotel.
https://en.wikipedia.org/wiki/Cut-up_technique
Theinfinite monkey theoremstates that amonkeyhitting keys independently and atrandomon atypewriterkeyboard for aninfiniteamount of time willalmost surelytype any given text, including the complete works ofWilliam Shakespeare.[a]More precisely, under the assumption of independence and randomness of each keystroke, the monkey would almost surely type every possible finite text an infinite number of times. The theorem can be generalized to state that any infinite sequence of independent events whose probabilities are uniformly bounded below by a positive number will almost surely have infinitely many occurrences. In this context, "almost surely" is a mathematical term meaning the event happens with probability 1, and the "monkey" is not an actual monkey, but ametaphorfor anabstractdevice that produces an endlessrandom sequenceof letters and symbols. Variants of the theorem include multiple and even infinitely many independent typists, and the target text varies between an entire library and a single sentence. One of the earliest instances of the use of the "monkey metaphor" is that of French mathematicianÉmile Borelin 1913,[1]but the first instance may have been even earlier.Jorge Luis Borgestraced the history of this idea fromAristotle'sOn Generation and CorruptionandCicero'sDe Natura Deorum(On the Nature of the Gods), throughBlaise PascalandJonathan Swift, up to modern statements with their iconic simians and typewriters.[2]In the early 20th century, Borel andArthur Eddingtonused the theorem to illustrate the timescales implicit in the foundations ofstatistical mechanics.[citation needed] There is a straightforward proof of this theorem. As an introduction, recall that if two events arestatistically independent, then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain inMoscowon a particular day in the future is 0.4 and the chance of anearthquakeinSan Franciscoon any particular day is 0.00003, then the chance of both happening on the same day is0.4 × 0.00003 = 0.000012,assumingthat they are indeed independent. Consider the probability of typing the wordbananaon a typewriter with 50 keys. Suppose that the keys are pressed independently and uniformly at random, meaning that each key has an equal chance of being pressed regardless of what keys had been pressed previously. The chance that the first letter typed is 'b' is 1/50, and the chance that the second letter typed is 'a' is also 1/50, and so on. Therefore, the probability of the first six letters spellingbananais: The result is less than one in 15 billion, butnotzero. From the above, the chance ofnottypingbananain a given block of 6 letters is 1 − (1/50)6. Because each block is typed independently, the chanceXnof not typingbananain any of the firstnblocks of 6 letters is: Asngrows,Xngets smaller. Forn= 1 million,Xnis roughly 0.9999, but forn= 10 billionXnis roughly 0.53 and forn= 100 billion it is roughly 0.0017. Asnapproaches infinity, the probabilityXnapproacheszero; that is, by makingnlarge enough,Xncan be made as small as is desired,[3]and the chance of typingbananaapproaches 100%.[b]Thus, the probability of the wordbananaappearing at some point in an infinite sequence of keystrokes is equal to one. The same argument applies if we replace one monkey typingnconsecutive blocks of text withnmonkeys each typing one block (simultaneously and independently). 
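The figures above are easy to verify directly. The following short Python sketch (an illustration added here, not part of the original argument) computes the chance that one block of six keystrokes on a 50-key typewriter spells banana, and the chance Xn that none of the first n independent blocks does:

# Illustrative check of the "banana" example on a 50-key typewriter.
p_block = (1 / 50) ** 6          # chance that a given block of 6 keystrokes spells "banana"
print(f"P(one block spells 'banana') = {p_block:.3e}")   # about 6.4e-11, roughly 1 in 15.6 billion

for n in (10**6, 10**10, 10**11):
    x_n = (1 - p_block) ** n     # chance that none of the first n blocks spells "banana"
    print(f"n = {n}: X_n = {x_n:.4f}")
# X_n shrinks toward 0 as n grows, so the chance of eventually typing "banana" approaches 1.

Running it reproduces the values quoted above: Xn is roughly 0.9999 for one million blocks, about 0.53 for ten billion, and about 0.0017 for one hundred billion.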
In this case,Xn= (1 − (1/50)6)nis the probability that none of the firstnmonkeys typesbananacorrectly on their first try. Therefore, at least one of infinitely many monkeys will (with probability equal to one) produce a text using the same number of keystrokes as a perfectly accurate human typist copying it from the original. This can be stated more generally and compactly in terms ofstrings, which are sequences of characters chosen from some finitealphabet: Both follow easily from the secondBorel–Cantelli lemma. For the second theorem, letEkbe theeventthat thekth string begins with the given text. Because this has some fixed nonzero probabilitypof occurring, theEkare independent, and the below sum diverges, the probability that infinitely many of theEkoccur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text and makeEkthe event where thekth block equals the desired string.[c] However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even asingle pageof Shakespeare is unfathomably small. Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter ofHamlet.It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinksexponentially, at 20 letters it already has only a chance of one in 2620= 19,928,148,895,209,409,152,340,197,376[d](almost 2 × 1028). In the case of the entire text ofHamlet, the probabilities are so vanishingly small as to be inconceivable. The text ofHamletcontains approximately 130,000 letters.[e]Thus, there is a probability of one in 3.4 × 10183,946to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10183,946,[f]or including punctuation, 4.4 × 10360,783.[g] Even if every proton in the observable universe (which isestimatedat roughly 1080) were a monkey with a typewriter, typing from theBig Banguntil theend of the universe(when protonsmight no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousandorders of magnitudelonger – to have even a 1 in 10500chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10360,641observable universes made of protonic monkeys.[h]AsKittelandKroemerput it in their textbook onthermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys,[5]"The probability ofHamletis therefore zero in any operational sense of an event ...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers." 
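Numbers such as 26 raised to the 130,000th power overflow ordinary floating-point arithmetic, so the Hamlet-scale estimate above is most easily reproduced with logarithms. A minimal Python sketch (added for illustration, using the rounded figure of 130,000 letters quoted above):

import math

# 20 correct letters in a row on a 26-letter keyboard (exact integer arithmetic).
print(f"26**20 = {26**20:,}")          # 19,928,148,895,209,409,152,340,197,376 (almost 2 x 10^28)

# All ~130,000 letters of Hamlet: work with logarithms to avoid overflow.
letters = 130_000                      # rounded letter count quoted in the text
log10_odds = letters * math.log10(26)  # log10 of the reciprocal probability
exp = math.floor(log10_odds)
mant = 10 ** (log10_odds - exp)
print(f"P(Hamlet) ~ 1 in {mant:.1f} x 10**{exp:,}")   # about 1 in 3.4 x 10^183,946

This recovers both the exact value of 26 to the 20th power and the quoted odds for typing the letters of Hamlet in a single trial.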
In fact, there is less than a one in a trillion chance of success that such a universe made of monkeys could type any particular document a mere 79 characters long.[i] An online demonstration showed that short random programs can produce highly structured outputs more often than classical probability suggests, aligning withGregory Chaitin's modern theorem and building onAlgorithmic Information TheoryandAlgorithmic probabilitybyRay SolomonoffandLeonid Levin.[6]The demonstration illustrates that the chance of producing a specific binary sequence depends on the length of the shortest program that outputs it rather than on the length of the sequence itself, showing the difference betweenAlgorithmic probabilityandclassical probability, as well as between random programs and random letters or digits. The probability that an infinite randomly generated string of text will contain a particular finite substring is 1. However, this does not mean the substring's absence is "impossible", despite the absence having a prior probability of 0. For example, the immortal monkeycouldrandomly type G as its first letter, G as its second, and G as every single letter thereafter, producing an infinite string of Gs; at no point must the monkey be "compelled" to type anything else. (To assume otherwise implies thegambler's fallacy.) However long a randomly generated finite string is, there is a small but nonzero chance that it will turn out to consist of the same character repeated throughout; this chance approaches zero as the string's length approaches infinity. There is nothing special about such a monotonous sequence except that it is easy to describe; the same fact applies to any nameable specific sequence, such as "RGRGRG" repeated forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…". If the hypothetical monkey has a typewriter with 90 equally likely keys that include numerals and punctuation, then the first typed keys might be "3.14" (the first threedigits of pi) with a probability of (1/90)4, which is 1/65,610,000. Equally probable is any other string of four characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e". The probability that 100 randomly typed keys will consist of the first 99 digits of pi (including the separator key), or any otherparticularsequence of that length, is much lower: (1/90)100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just aspossible(mathematically probable) as typing nothing but Gs (also probability 0). The same applies to the event of typing a particular version ofHamletfollowed by endless copies of itself; orHamletimmediately followed by all the digits of pi; these specific strings areequally infinitein length; they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact,anyparticular infinite sequence the immortal monkey types will havehada prior probability of 0, even though the monkey must type something. This is an extension of the principle that a finite string of random text has a lower and lower probability ofbeinga particular string the longer it is (though all specific strings are equally unlikely). This probability approaches 0 as the string approaches infinity. Thus, the probability of the monkey typing an endlessly long string, such as all of the digits of pi in order, on a 90-key keyboard is (1/90)∞, the limit of (1/90)nasngrows without bound, which is 0.
At the same time, the probability that the sequencecontainsa particular subsequence (such as the word MONKEY, or the 12th through 999th digits of pi, or a version of the King James Bible) increases as the total string increases. This probability approaches 1 as the total string approaches infinity, and thus the original theorem is correct. In a simplification of the thought experiment, the monkey could have a typewriter with just two keys: 1 and 0. The infinitely long string thusly produced would correspond to thebinarydigits of a particularreal numberbetween 0 and 1. A countably infinite set of possible strings end in infinite repetitions, which means the corresponding real number isrational. Examples include the strings corresponding to one-third (010101...), five-sixths (11010101...) and five-eighths (1010000...). Only a subset of such real number strings (albeit a countably infinite subset) contains the entirety ofHamlet(assuming that the text is subjected to a numerical encoding, such asASCII). Meanwhile, there is anuncountablyinfinite set of strings which do not end in such repetition; these correspond to theirrational numbers. These can be sorted into two uncountably infinite subsets: those which containHamletand those which do not. However, the "largest" subset of all the real numbers is those which not only containHamlet, but which contain every other possible string of any length, and with equal distribution of such strings. These irrational numbers are callednormal. Because almost all numbers are normal, almost all possible strings contain all possible finite substrings. Hence, the probability of the monkey typing a normal number is 1. The same principles apply regardless of the number of keys from which the monkey can choose; a 90-key keyboard can be seen as a generator of numbers written in base 90. In one of the forms in which probabilists now know this theorem, with its "dactylographic" [i.e., typewriting] monkeys (French:singes dactylographes; the French wordsingecovers both the monkeys and the apes), appeared inÉmile Borel's 1913 article "Mécanique Statique et Irréversibilité" (Static mechanics and irreversibility),[1]and in his book "Le Hasard" in 1914.[7]His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly. The physicistArthur Eddingtondrew on Borel's image further inThe Nature of the Physical World(1928), writing: If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[8][9] These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work and compare this with the even greater improbability of certain physical events. 
Any physical process that is even less likely than such monkeys' success is effectively impossible, and it may safely be said that such a process will never happen.[5]It is clear from the context that Eddington is not suggesting that the probability of this happening is worthy of serious consideration. On the contrary, it was a rhetorical illustration of the fact that below certain levels of probability, the termimprobableis functionally equivalent toimpossible. In a 1939 essay entitled "The Total Library", Argentine writerJorge Luis Borgestraced the infinite-monkey concept back toAristotle'sMetaphysics.Explaining the views ofLeucippus, who held that the world arose through the random combination of atoms, Aristotle notes that the atoms themselves are homogeneous and their possible arrangements only differ in shape, position and ordering. InOn Generation and Corruption, the Greek philosopher compares this to the way that a tragedy and a comedy consist of the same "atoms",i.e., alphabetic characters.[10]Three centuries later,Cicero'sDe natura deorum(On the Nature of the Gods) argued against theEpicurean atomistworldview: Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form theAnnalsof Ennius. I doubt whether fortune could make a single verse of them.[11] Borges follows the history of this argument throughBlaise PascalandJonathan Swift,[12]then observes that in his own time, the vocabulary had changed. By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.") Borges then imagines the contents of the Total Library which this enterprise would produce if carried to its fullest extreme: Everything would be in its blind volumes. Everything: the detailed history of the future,Aeschylus'The Egyptians, the exact number of times that the waters ofthe Gangeshave reflected the flight of a falcon,the secret and true name of Rome, the encyclopediaNovaliswould have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof ofPierre Fermat'stheorem, the unwritten chapters ofEdwin Drood, those same chapters translated into the language spoken by theGaramantes, the paradoxesBerkeleyinvented concerning Time but didn't publish,Urizen's books of iron, the premature epiphanies ofStephen Dedalus, which would be meaningless before a cycle of a thousand years, the GnosticGospel of Basilides, the songthe sirenssang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. 
Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.[13] Borges' total library concept was the main theme of his widely read 1941 short story "The Library of Babel", which describes an unimaginably vast library consisting of interlocking hexagonal chambers, together containing every possible volume that could be composed from the letters of the alphabet and some punctuation characters. In 2002,[14]lecturers and students from theUniversity of PlymouthMediaLab Arts course used a £2,000 grant from theArts Councilto study the literary output of real monkeys. They left a computer keyboard in the enclosure of sixCelebes crested macaquesinPaignton Zooin Devon, England from May 1 to June 22, with a radio link to broadcast the results on a website.[15] Not only did the monkeys produce nothing but five total pages[16]largely consisting of the letter "S",[14]the lead male began striking the keyboard with a stone, and other monkeys followed by urinating and defecating on the machine.[17]Mike Phillips, director of the university's Institute of Digital Arts and Technology (i-DAT), said that the artist-funded project was primarilyperformance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. ... They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."[15][18] In his 1931 bookThe Mysterious Universe, Eddington's rivalJames Jeansattributed the monkey parable to a "Huxley", presumably meaningThomas Henry Huxley. This attribution is incorrect.[19]Today, it is sometimes further reported that Huxley applied the example in anow-legendary debateoverCharles Darwin'sOn the Origin of Specieswith the Anglican Bishop of Oxford, Samuel Wilberforce, held at a meeting of theBritish Association for the Advancement of Scienceat Oxford on 30 June 1860. This story suffers not only from a lack of evidence, but the fact that in 1860 the typewriter wasnot yet commercially available.[20] Despite the original mix-up, monkey-and-typewriter arguments are now common in arguments over evolution. As an example ofChristian apologeticsDoug Powell argued that even if a monkey accidentally types the letters ofHamlet, it has failed to produceHamletbecause it lacked the intention to communicate. His parallel implication is that natural laws could not produce the information content inDNA.[21]A more common argument is represented by ReverendJohn F. MacArthur, who claimed that the genetic mutations necessary to produce a tapeworm from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy, and hence the odds against the evolution of all life are impossible to overcome.[22] Evolutionary biologistRichard Dawkinsemploys the typing monkey concept in his bookThe Blind Watchmakerto demonstrate the ability ofnatural selectionto produce biologicalcomplexityout of randommutations. In a simulation experiment Dawkins has hisweasel programproduce the Hamlet phraseMETHINKS IT IS LIKE A WEASEL, starting from a randomly typed parent, by "breeding" subsequent generations and always choosing the closest match from progeny that are copies of the parent with random mutations. 
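Dawkins' weasel procedure is simple enough to sketch in code. The version below is an illustrative reconstruction of the general idea rather than Dawkins' original program; the population size and mutation rate are arbitrary choices made for this example. Each generation, the current phrase is copied with occasional random character changes, and the copy closest to the target is kept:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    # Number of characters that already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Copy the phrase, changing each character with probability `rate` (illustrative setting).
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)   # random starting phrase
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(100)]        # 100 mutated copies per generation
    parent = max(offspring + [parent], key=score)           # cumulative selection keeps the best
print(generation, parent)

Because the best candidate is never allowed to get worse, matching letters are locked in and the phrase converges within a few hundred generations at most with these settings, in stark contrast to single-step random typing.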
The chance of the target phrase appearing in a single step is extremely small, yet Dawkins showed that it could be produced rapidly (in about 40 generations) using cumulative selection of phrases. The random choices furnish raw material, while cumulative selection imparts information. As Dawkins acknowledges, however, the weasel program is an imperfect analogy for evolution, as "offspring" phrases were selected "according to the criterion of resemblance to adistant idealtarget." In contrast, Dawkins affirms, evolution has no long-term plans and does not progress toward some distant goal (such as humans). The weasel program is instead meant to illustrate the difference betweennon-randomcumulative selection, andrandomsingle-step selection.[23]In terms of the typing monkey analogy, this means thatRomeo and Julietcould be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because thefitness functionwill tend to preserve in place any letters that happen to match the target text, improving each successive generation of typing monkeys. A different avenue for exploring the analogy between evolution and an unconstrained monkey lies in the problem that the monkey types only one letter at a time, independently of the other letters. Hugh Petrie argues that a more sophisticated setup is required, in his case not for biological evolution but the evolution of ideas: In order to get the proper analogy, we would have to equip the monkey with a more complex typewriter. It would have to include whole Elizabethan sentences and thoughts. It would have to include Elizabethan beliefs about human action patterns and the causes, Elizabethan morality and science, and linguistic patterns for expressing these. It would probably even have to include an account of the sorts of experiences which shaped Shakespeare's belief structure as a particular example of an Elizabethan. Then, perhaps, we might allow the monkey to play with such a typewriter and produce variants, but the impossibility of obtaining a Shakespearean play is no longer obvious. What is varied really does encapsulate a great deal of already-achieved knowledge.[24] James W. Valentine, while admitting that the classic monkey's task is impossible, finds that there is a worthwhile analogy between written English and themetazoangenome in this other sense: both have "combinatorial, hierarchical structures" that greatly constrain the immense number of combinations at the alphabet level.[25] Zipf's lawstates that the frequency of words is a power law function of its frequency rank:word frequency∝1(word rank+b)a{\displaystyle {\text{word frequency}}\propto {\frac {1}{({\text{word rank}}+b)^{a}}}}wherea,b{\displaystyle a,b}are real numbers. Assuming that a monkey is typing randomly, with fixed and nonzero probability of hitting each letter key or white space, then the text produced by the monkey follows Zipf's law.[26] R. G. Collingwoodargued in 1938 that art cannot be produced by accident, and wrote as a sarcastic aside to his critics, ... some ... have denied this proposition, pointing out that if a monkey played with a typewriter ... he would produce ... the complete text of Shakespeare. Any reader who has nothing to do can amuse himself by calculating how long it would take for the probability to be worth betting on. 
But the interest of the suggestion lies in the revelation of the mental state of a person who can identify the 'works' of Shakespeare with the series of letters printed on the pages of a book ...[27] Nelson Goodmantook the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote": What Menard wrote is simply another inscription of the text. Any of us can do the same, as can printing presses and photocopiers. Indeed, we are told, if infinitely many monkeys ... one would eventually produce a replica of the text. That replica, we maintain, would be as much an instance of the work,Don Quixote, as Cervantes' manuscript, Menard's manuscript, and each copy of the book that ever has been or will be printed.[28] In another writing, Goodman elaborates, "That the monkey may be supposed to have produced his copy randomly makes no difference. It is the same text, and it is open to all the same interpretations. ..."Gérard Genettedismisses Goodman's argument asbegging the question.[29] ForJorge J. E. Gracia, the question of the identity of texts leads to a different question, that of author. If a monkey is capable of typingHamlet, despite having no intention of meaning and therefore disqualifying itself as an author, then it appears that texts do not require authors. Possible solutions include saying that whoever finds the text and identifies it asHamletis the author; or that Shakespeare is the author, the monkey his agent, and the finder merely a user of the text. These solutions have their own difficulties, in that the text appears to have a meaning separate from the other agents: What if the monkey operates before Shakespeare is born, or if Shakespeare is never born, or if no one ever finds the monkey's typescript?[30] In 1979,William R. Bennett Jr., a professor ofphysicsatYale University, brought fresh attention to the theorem by running a series of computer programs. Dr. Bennett simulated varying conditions under which an imaginary monkey, given a keyboard consisting of twenty-eight characters and typing ten keys per second, might attempt to reproduce the sentence, "To be or not to be, that is the question." Although his experiments agreed with the overall conclusion that even such a short string of words would require many times the current age of the universe to reproduce, he noted that by modifying the statistical probability of certain letters to match the ordinary patterns of various languages and of Shakespeare in particular, seemingly random strings of words could be made to appear. But even with several refinements, the English sentence closest to the target phrase remained gibberish: "TO DEA NOW NAT TO BE WILL AND THEM BE DOES DOESORNS CAI AWROUTROULD."[31] The theorem concerns athought experimentwhich cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article inThe New Yorker, came up with a result on 4 August 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona".
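Finite experiments of this kind are straightforward to imitate. The sketch below (an illustration of the general approach, not the code used in any of the cited programs) generates a million uniformly random keystrokes on a toy 29-key keyboard and reports the longest run matching the opening of a target phrase:

import random
import string

TARGET = "TO BE OR NOT TO BE, THAT IS THE QUESTION."
KEYS = string.ascii_uppercase + " .,"              # toy 29-key keyboard chosen for this example

random.seed(1)
text = "".join(random.choice(KEYS) for _ in range(1_000_000))  # a million random keystrokes

best = 0
for i in range(len(text)):                         # longest run matching the start of TARGET
    k = 0
    while k < len(TARGET) and i + k < len(text) and text[i + k] == TARGET[k]:
        k += 1
    best = max(best, k)
print("longest matching prefix:", best)            # typically only three or four characters

Each additional matching character multiplies the number of keystrokes required by the size of the keyboard, which is why the runs reported above needed astronomical numbers of simulated monkey-years to reach 19 characters.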
Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[32] A website entitledThe Monkey Shakespeare Simulator, launched on 1 July 2003, contained aJava appletthat simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line fromHenry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: Due to processing power limitations, the program used a probabilistic model (by using arandom number generatoror RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detected a match" (that is, the RNG generated a certain value or a value within a certain range), the simulator simulated the match by generating matched text.[33] Questions about the statistics describing how often an ideal monkey isexpectedto type certain strings translate intopractical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professorsGeorge MarsagliaandArif Zamanreport that they used to call one such category of tests "overlapping m-tupletests" in lectures, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.[34] The infinite monkey theorem and its associated imagery are considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of its transmission through popular culture rather than through formal education.[j]This is helped by the innate humor stemming from the image of literal monkeys rattling away on a set of typewriters, and is a popular visual gag. A quotation attributed[35][36]to a 1996 speech by Robert Wilensky stated, "We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true." The enduring, widespread popularity of the theorem was noted in the introduction to a 2001 paper, "Monkeys, Typewriters and Networks: The Internet in the Light of the Theory of Accidental Excellence".[37]In 2002, an article inThe Washington Postsaid, "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare".[38]In 2003, the previously mentionedArts Council−funded experiment involving real monkeys and a computer keyboard received widespread press coverage.[14]In 2007, the theorem was listed byWiredmagazine in a list of eight classicthought experiments.[39] American playwrightDavid Ives' shortone-act playWords, Words, Words, from the collectionAll in the Timing, pokes fun at the concept of the infinite monkey theorem. In 2015, Balanced Software released Monkey Typewriter on the Microsoft Store.[40]The software generates random text using the Infinite Monkey theorem string formula. The software queries the generated text for user-inputted phrases. However, the software should not be considered a true-to-life representation of the theorem.
It is more a practical presentation of the theorem than a scientific model of how to randomly generate text.
https://en.wikipedia.org/wiki/Infinite_monkey_theorem
Generative artificial intelligence(Generative AI,GenAI,[1]orGAI) is a subfield ofartificial intelligencethat uses generative models to produce text, images, videos, or other forms of data.[2][3][4]These modelslearnthe underlying patterns and structures of theirtraining dataand use them to produce new data[5][6]based on the input, which often comes in the form of natural languageprompts.[7][8] Generative AI tools have become more common since an "AI boom" in the 2020s. This boom was made possible by improvements intransformer-baseddeepneural networks, particularlylarge language models(LLMs). Major tools includechatbotssuch asChatGPT,DeepSeek,Copilot,Gemini,Llama, andGrok;text-to-imageartificial intelligence image generationsystems such asStable Diffusion,Midjourney, andDALL-E; andtext-to-videoAI generators such asSora.[9][10][11][12]Technology companies developing generative AI includeOpenAI,Anthropic,Microsoft,Google,DeepSeek, andBaidu.[13][14][15] Generative AI has raised many ethical questions. It can be used forcybercrime, or to deceive or manipulate people throughfake newsordeepfakes.[16]Even if used ethically, it may lead tomass replacement of human jobs.[17]The tools themselves have been criticized as violating intellectual property laws, since they are trained on and emulate copyrighted works of art.[18] Generative AI is used across many industries. Examples include software development,[19]healthcare,[20]finance,[21]entertainment,[22]customer service,[23]sales and marketing,[24]art, writing,[25]fashion,[26]and product design.[27] The first example of an algorithmically generated media is likely theMarkov chain. Markov chains have long been used to model natural languages since their development by Russian mathematicianAndrey Markovin the early 20th century. Markov published his first paper on the topic in 1906,[28][29]and analyzed the pattern of vowels and consonants in the novelEugeny Oneginusing Markov chains. Once a Markov chain is learned on atext corpus, it can then be used as a probabilistic text generator.[30][31] Computers were needed to go beyond Markov chains. By the early 1970s,Harold Cohenwas creating and exhibiting generative AI works created byAARON, the computer program Cohen created to generate paintings.[32] The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer toAI planningsystems, especiallycomputer-aided process planning, used to generate sequences of actions to reach a specified goal.[33][34]Generative AI planning systems usedsymbolic AImethods such asstate space searchandconstraint satisfactionand were a "relatively mature" technology by the early 1990s. They were used to generate crisis action plans for military use,[35]process plans for manufacturing[33]and decision plans such as in prototype autonomous spacecraft.[36] Since its inception, the field ofmachine learninghas used bothdiscriminative modelsandgenerative modelsto model and predict data. Beginning in the late 2000s, the emergence ofdeep learningdrove progress, and research inimage classification,speech recognition,natural language processingand other tasks.Neural networksin this era were typically trained asdiscriminativemodels due to the difficulty of generative modeling.[37] In 2014, advancements such as thevariational autoencoderandgenerative adversarial networkproduced the first practical deep neural networks capable of learning generative models, as opposed to discriminative ones, for complex data such as images. 
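To make the Markov-chain text generation mentioned earlier in this section concrete, here is a minimal word-level generator in Python (an illustrative sketch; the tiny corpus stands in for the much larger text corpus a real system would learn from):

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat saw the dog on the mat".split()  # toy corpus

# Learn first-order transition counts: which words follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate new text by walking the chain from a starting word.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(transitions[word])   # samples each successor in proportion to its count
    output.append(word)
print(" ".join(output))

The same idea scales to higher-order chains by conditioning on the previous two or three words instead of a single word.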
These deep generative models were the first to output not only class labels for images but also entire images. In 2017, theTransformernetwork enabled advancements in generative models compared to olderLong-Short Term Memorymodels,[38]leading to the firstgenerative pre-trained transformer(GPT), known asGPT-1, in 2018.[39]This was followed in 2019 byGPT-2, which demonstrated the ability to generalize unsupervised to many different tasks as aFoundation model.[40] The new generative models introduced during this period allowed for large neural networks to be trained usingunsupervised learningorsemi-supervised learning, rather than thesupervised learningtypical of discriminative models. Unsupervised learning removed the need for humans tomanually label data, allowing for larger networks to be trained.[41] In March 2020, the release of15.ai, a freeweb applicationcreated by an anonymousMITresearcher that could generate convincing character voices using minimal training data, marked one of the earliest popular use cases of generative AI.[42]The platform is credited as the first mainstream service to popularize AI voice cloning (audio deepfakes) inmemesandcontent creation, influencing subsequent developments invoice AI technology.[43][44] In 2021, the emergence ofDALL-E, atransformer-based pixel generative model, marked an advance in AI-generated imagery.[45]This was followed by the releases ofMidjourneyandStable Diffusionin 2022, which further democratized access to high-qualityartificial intelligence artcreation fromnatural language prompts.[46]These systems demonstrated unprecedented capabilities in generating photorealistic images, artwork, and designs based on text descriptions, leading to widespread adoption among artists, designers, and the general public. In late 2022, the public release ofChatGPTrevolutionized the accessibility andapplication of generative AIfor general-purpose text-based tasks.[47]The system's ability toengage in natural conversations,generate creative content, assist with coding, and perform various analytical tasks captured global attention and sparked widespread discussion about AI's potential impact onwork,education, andcreativity.[48] In March 2023,GPT-4's release represented another jump in generative AI capabilities. 
A team fromMicrosoft Researchcontroversially argued that it "could reasonably be viewed as an early (yet still incomplete) version of anartificial general intelligence(AGI) system."[49]However, this assessment was contested by other scholars who maintained that generative AI remained "still far from reaching the benchmark of 'general human intelligence'" as of 2023.[50]Later in 2023,MetareleasedImageBind, an AI model combining multiplemodalitiesincluding text, images, video, thermal data, 3D data, audio, and motion, paving the way for more immersive generative AI applications.[51] In December 2023,GoogleunveiledGemini, a multimodal AI model available in four versions: Ultra, Pro, Flash, and Nano.[52]The company integrated Gemini Pro into itsBard chatbotand announced plans for "Bard Advanced" powered by the larger Gemini Ultra model.[53]In February 2024, Google unified Bard and Duet AI under the Gemini brand, launching a mobile app onAndroidand integrating the service into the Google app oniOS.[54] In March 2024,Anthropicreleased theClaude3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus.[55]The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google.[56]In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, and image analysis.[57] According to a survey bySASand Coleman Parkes Research,Chinahas emerged as a global leader in generative AI adoption, with 83% of Chinese respondents using the technology, exceeding both the global average of 54% and the U.S. rate of 65%. This leadership is further evidenced by China'sintellectual propertydevelopments in the field, with aUNreport revealing that Chinese entities filed over 38,000 generative AIpatentsfrom 2014 to 2023, substantially surpassing the United States in patent applications.[58] A generative AI system is constructed by applyingunsupervised machine learning(invoking for instanceneural networkarchitectures such asgenerative adversarial networks(GANs),variational autoencoders(VAEs), ortransformers) orself-supervisedmachine learning, trained on adataset. The capabilities of a generative AI system depend on the output (modality) of the data set used. Generative AI can be eitherunimodalormultimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input.[59]For example, one version ofOpenAI's GPT-4 accepts both text and image inputs.[60] Generative AI has made its appearance in a wide variety of industries, radically changing the dynamics of content creation, analysis, and delivery. In healthcare,[61]generative AI is instrumental in acceleratingdrug discoveryby creating molecular structures with target characteristics[62]and generatingradiologyimages for training diagnostic models. This capability not only enables faster and cheaper development but also enhances medical decision-making. In finance, generative AI is invaluable as it generates datasets to train models and automates report generation with natural language summarization capabilities. It automates content creation, produces synthetic financial data, and tailors customer communications. It also powers chatbots and virtual agents.
Collectively, these technologies enhance efficiency, reduce operational costs, and support data-driven decision-making in financial institutions.[63]The media industry makes use of generative AI for numerous creative activities such as music composition, scriptwriting, video editing, and digital art. The educational sector is impacted as well, since the tools make learning personalized through creating quizzes, study aids, and essay composition. Both the teachers and the learners benefit from AI-based platforms that suit various learning patterns.[64] Generative AI systems trained on words orword tokensincludeGPT-3,GPT-4,GPT-4o,LaMDA,LLaMA,BLOOM,Geminiand others (seeList of large language models). They are capable ofnatural language processing,machine translation, andnatural language generationand can be used asfoundation modelsfor other tasks.[66]Data sets includeBookCorpus,Wikipedia, and others (seeList of text corpora). In addition tonatural languagetext, large language models can be trained onprogramming languagetext, allowing them to generatesource codefor newcomputer programs.[67]Examples includeOpenAI Codex,Tabnine,GitHub Copilot,Microsoft Copilot, andVS CodeforkCursor.[68] Some AI assistants help candidates cheat during onlinecoding interviewsby providing code, improvements, and explanations. Their clandestine interfaces minimize the need for eye movements that would expose cheating to the interviewer.[69] Producing high-quality visual art is a prominent application of generative AI.[70]Generative AI systems trained on sets of images withtext captionsincludeImagen,DALL-E,Midjourney,Adobe Firefly,FLUX.1, Stable Diffusion and others (seeArtificial intelligence art,Generative art, andSynthetic media). They are commonly used fortext-to-imagegeneration andneural style transfer.[71]Datasets includeLAION-5Band others (seeList of datasets in computer vision and image processing). Generative AI can also be trained extensively on audio clips to produce natural-soundingspeech synthesisandtext-to-speechcapabilities. An early pioneer in this field was15.ai, launched in March 2020, which demonstrated the ability to clone character voices using as little as 15 seconds of training data.[72]The website gained widespread attention for its ability to generate emotionally expressive speech for various fictional characters, though it was later taken offline in 2022 due to copyright concerns.[73][74][75]Commercial alternatives subsequently emerged, includingElevenLabs' context-aware synthesis tools andMeta Platform's Voicebox.[76] Generative AI systems such asMusicLM[77]and MusicGen[78]can also be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such asa calming violin melody backed by a distorted guitar riff. Audio deepfakesof musiclyricshave been generated, like the song Savages, which used AI to mimic rapperJay-Z's vocals. Music artist's instrumentals and lyrics are copyrighted but their voices are not protected from regenerative AI yet, raising a debate about whether artists should get royalties from audio deepfakes.[79] Many AI music generators have been created that can be generated using a text phrase,genreoptions, andloopedlibrariesofbarsandriffs.[80] Generative AI trained on annotated video cangeneratetemporally-coherent, detailed andphotorealisticvideo clips. 
Examples includeSorabyOpenAI,[12]Runway,[81]and Make-A-Video byMeta Platforms.[82] Generative AI can also be trained on the motions of aroboticsystem to generate new trajectories formotion planningornavigation. For example, UniPi from Google Research uses prompts like"pick up blue bowl"or"wipe plate with yellow sponge"to control movements of a robot arm.[83]Multimodal "vision-language-action" models such as Google's RT-2 can perform rudimentary reasoning in response to user prompts and visual input, such as picking up a toydinosaurwhen given the promptpick up the extinct animalat a table filled with toy animals and other objects.[84] Artificially intelligentcomputer-aided design(CAD) can use text-to-3D, image-to-3D, and video-to-3D toautomate3D modeling.[85]AI-basedCAD librariescould also be developed usinglinkedopen dataofschematicsanddiagrams.[86]AI CADassistantsare used as tools to help streamline workflow.[87] Generative AI models are used to powerchatbotproducts such asChatGPT,programming toolssuch asGitHub Copilot,[88]text-to-imageproducts such as Midjourney, and text-to-video products such asRunwayGen-2.[89]Generative AI features have been integrated into a variety of existing commercially available products such asMicrosoft Office(Microsoft Copilot),[90]Google Photos,[91]and theAdobe Suite(Adobe Firefly).[92]Many generative AI models are also available asopen-source software, including Stable Diffusion and the LLaMA[93]language model. Smaller generative AI models with up to a few billion parameters can run onsmartphones, embedded devices, andpersonal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on aRaspberry Pi 4[94]and one version of Stable Diffusion can run on aniPhone 11.[95] Larger models with tens of billions of parameters can run onlaptopordesktop computers. To achieve an acceptable speed, models of this size may requireacceleratorssuch as theGPUchips produced byNVIDIAandAMDor the Neural Engine included inApple siliconproducts. For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC.[96] The advantages of running generative AI locally include protection ofprivacyandintellectual property, and avoidance ofrate limitingandcensorship. Thesubredditr/LocalLLaMA in particular focuses on usingconsumer-grade gaminggraphics cards[97]through such techniques ascompression. That forum is one of only two sourcesAndrej Karpathytrusts forlanguage model benchmarks.[98]Yann LeCunhas advocated open-source models for their value tovertical applications[99]and for improvingAI safety.[100] Language models with hundreds of billions of parameters, such as GPT-4 orPaLM, typically run ondatacentercomputers equipped with arrays ofGPUs(such as NVIDIA'sH100) orAI acceleratorchips (such as Google'sTPU). These very large models are typically accessed ascloudservices over the Internet. In 2022, theUnited States New Export Controls on Advanced Computing and Semiconductors to Chinaimposed restrictions on exports to China ofGPUand AI accelerator chips used for generative AI.[101]Chips such as the NVIDIA A800[102]and theBiren TechnologyBR104[103]were developed to meet the requirements of the sanctions. 
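As an illustration of the kind of local use described above, the sketch below loads a deliberately small model with the Hugging Face transformers library and generates a short continuation on an ordinary CPU. The choice of GPT-2 and the sampling settings are assumptions made for this example, not something specified by the sources cited here:

# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                         # small model chosen for this illustration; fits on a laptop CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Generative AI systems can run locally when"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Larger models follow the same interface but need correspondingly more memory and, as noted above, typically a GPU or other accelerator.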
There is free software on the market capable of recognizing text generated by generative artificial intelligence (such asGPTZero), as well as images, audio or video coming from it.[104]Potential mitigation strategies fordetecting generative AI contentincludedigital watermarking,content authentication,information retrieval, andmachine learning classifier models.[105]Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.[106][107] Generative adversarial networks(GANs) are an influential generative modeling technique. GANs consist of two neural networks—the generator and the discriminator—trained simultaneously in a competitive setting. The generator createssynthetic databy transforming random noise into samples that resemble the training dataset. The discriminator is trained to distinguish the authentic data from synthetic data produced by the generator.[108]The two models engage in aminimaxgame: the generator aims to create increasingly realistic data to "fool" the discriminator, while the discriminator improves its ability to distinguish real from fake data. This continuous training setup enables the generator to produce high-quality and realistic outputs.[109] Variational autoencoders(VAEs) are deep learning models that probabilistically encode data. They are typically used for tasks such asnoise reductionfrom images,data compression, identifying unusual patterns, andfacial recognition. Unlikestandard autoencoders, which compress input data into a fixed latent representation, VAEs model thelatent spaceas a probability distribution,[110]allowing for smooth sampling and interpolation between data points. The encoder ("recognition model") maps input data to a latent space, producing means and variances that define a probability distribution. The decoder ("generative model") samples from this latent distribution and attempts to reconstruct the original input. VAEs optimize a loss function that includes both the reconstruction error and aKullback–Leibler divergenceterm, which ensures the latent space follows a known prior distribution. VAEs are particularly suitable for tasks that require structured but smooth latent spaces, although they may create blurrier images than GANs. They are used for applications like image generation, data interpolation andanomaly detection. Transformers became the foundation for many powerful generative models, most notably thegenerative pre-trained transformer(GPT) series developed by OpenAI. They marked a major shift in natural language processing by replacing traditionalrecurrentandconvolutionalmodels.[111]This architecture allows models to process entire sequences simultaneously and capture long-range dependencies more efficiently. Theself-attention mechanismenables the model to capture the significance of every word in a sequence when predicting the subsequent word, thus improving its contextual understanding. Unlike recurrent neural networks, transformers process all the tokens in parallel, which improves the training efficiency and scalability. Transformers are typically pre-trained on enormous corpora in aself-supervisedmanner, prior to beingfine-tuned. 
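The generator/discriminator interplay described above can be made concrete with a toy example. The following PyTorch sketch illustrates the general GAN recipe by fitting a one-dimensional Gaussian rather than images; the network sizes, learning rates, and other settings are arbitrary choices made for this example:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Real data: samples from a 1-D Gaussian with mean 3 and standard deviation 0.5.
def real_batch(n=128):
    return 3.0 + 0.5 * torch.randn(n, 1)

# Toy networks chosen for illustration.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()   # detach so this step does not update the generator
    d_loss = loss_fn(discriminator(real), torch.ones(128, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(128, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())   # should drift toward roughly 3.0 and 0.5

A VAE would instead pair an encoder with a decoder and optimize reconstruction error plus a Kullback-Leibler divergence term, as described above, rather than playing this adversarial game.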
In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with theBiden administrationin July 2023 to watermark AI-generated content.[112]In October 2023,Executive Order 14110applied theDefense Production Actto require all US companies to report information to the federal government when training certain high-impact AI models.[113][114] In the European Union, the proposedArtificial Intelligence Actincludes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.[115][116] In China, theInterim Measures for the Management of Generative AI Servicesintroduced by theCyberspace Administration of Chinaregulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values".[117][118] Generative AI systems such asChatGPTandMidjourneyare trained on large, publicly available datasets that include copyrighted works. AI developers have argued that such training is protected underfair use, while copyright holders have argued that it infringes their rights.[119] Proponents of fair use training have argued that it is atransformative useand does not involve making copies of copyrighted works available to the public.[119]Critics have argued that image generators such asMidjourneycan create nearly-identical copies of some copyrighted images,[120]and that generative AI programs compete with the content they are trained on.[121] As of 2024, several lawsuits related to the use of copyrighted material in training are ongoing.Getty Imageshas suedStability AIover the use of its images to trainStable Diffusion.[122]Both theAuthors GuildandThe New York Timeshave suedMicrosoftandOpenAIover the use of their works to trainChatGPT.[123][124] A separate question is whether AI-generated works can qualify for copyright protection. TheUnited States Copyright Officehas ruled that works created by artificial intelligence without any human input cannot be copyrighted, because they lack human authorship.[125]Some legal professionals have suggested thatNaruto v. Slater(2018), in which theU.S. 9th Circuit Court of Appealsheld thatnon-humanscannot be copyright holders ofartistic works, could be a potential precedent in copyright litigation over works created by generative AI.[126]However, the office has also begun taking public input to determine if these rules need to be refined for generative AI.[127] In January 2025, theUnited States Copyright Office(USCO) released extensive guidance regarding the use of AI tools in the creative process, and established that "...generative AI systems also offer tools that similarly allow users to exert control. [These] can enable the user to control the selection and placement of individual creative elements. Whether such modifications rise to the minimum standard of originality required underFeistwill depend on a case-by-case determination. In those cases where they do, the output should be copyrightable"[128]Subsequently, the USCO registered the first visual artwork to be composed of entirely AI-generated materials, titled "A Single Piece of American Cheese".[129] The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls topause AI experiments, and actions by multiple governments. 
In a July 2023 briefing of theUnited Nations Security Council,Secretary-GeneralAntónio Guterresstated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale".[130]In addition, generative AI has a significantcarbon footprint.[131][132] From the early days of the development of AI, there have been arguments put forward byELIZAcreatorJoseph Weizenbaumand others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements.[134]In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost.[135][136]In July 2023, developments in generative AI contributed to the2023 Hollywood labor disputes.Fran Drescher, president of theScreen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the2023 SAG-AFTRA strike.[137]Voice generation AI has been seen as a potential challenge to thevoice actingsector.[138][139] The intersection of AI and employment concerns among underrepresented groups globally remains a critical facet. While AI promises efficiency enhancements and skill acquisition, concerns about job displacement and biased recruiting processes persist among these groups, as outlined in surveys byFast Company. To leverage AI for a more equitable society, proactive steps encompass mitigating biases, advocating transparency, respecting privacy and consent, and embracing diverse teams and ethical considerations. Strategies involve redirecting policy emphasis on regulation, inclusive design, and education's potential for personalized teaching to maximize benefits while minimizing harms.[140] Generative AI models can reflect and amplify anycultural biaspresent in the underlying data. For example, a language model might assume that doctors and judges are male, and that secretaries or nurses are female, if those biases are common in the training data.[141]Similarly, an image model prompted with the text "a photo of a CEO" might disproportionately generate images of white male CEOs,[142]if trained on a racially biased data set. 
A number of methods for mitigating bias have been attempted, such as altering input prompts[143]and reweighting training data.[144] Deepfakes (aportmanteauof "deep learning" and "fake"[145]) are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness usingartificial neural networks.[146]Deepfakes have garnered widespread attention and concerns for their uses indeepfake celebrity pornographic videos,revenge porn,fake news,hoaxes, healthdisinformation,financial fraud, and covertforeign election interference.[147][148][149][150][151][152][153]This has elicited responses from both industry and government to detect and limit their use.[154][155] In July 2023, the fact-checking companyLogicallyfound that the popular generative AI modelsMidjourney,DALL-E 2andStable Diffusionwould produce plausible disinformation images when prompted to do so, such as images ofelectoral fraudin the United States and Muslim women supporting India'sHindu nationalistBharatiya Janata Party.[156][157] In April 2024, a paper proposed to useblockchain(distributed ledgertechnology) to promote "transparency, verifiability, and decentralization in AI development and usage".[158] Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI.[159][160][161][162][163][164]In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards andidentity verification.[165] Concerns and fandoms have spawned fromAI-generated music. The same software used to clone voices has been used on famous musicians' voices to create songs that mimic their voices, gaining both tremendous popularity and criticism.[166][167][168]Similar techniques have also been used to create improved quality or full-length versions of songs that have been leaked or have yet to be released.[169] Generative AI has also been used to create new digital artist personalities, with some of these receiving enough attention to receive record deals at major labels.[170]The developers of these virtual artists have also faced their fair share of criticism for their personified programs, including backlash for "dehumanizing" an artform, and also creating artists which create unrealistic or immoral appeals to their audiences.[171] Many websites that allowexplicit AI generated images or videoshave been created,[172]and this has been used to create illegal content, such asrape,child sexual abuse material,[173][174]necrophilia, andzoophilia. Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, includingphishingscams.[175]Deepfakevideo and audio have been used to create disinformation and fraud. 
In 2020, former Googleclick fraudczarShuman Ghosemajumderargued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information.[176]Additionally,large language modelsand other forms of text-generation AI have been used to create fake reviews ofe-commercewebsites to boost ratings.[177]Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT.[178] A 2023 study showed that generative AI can be vulnerable to jailbreaks,reverse psychologyandprompt injectionattacks, enabling attackers to obtain help with harmful requests, such as for craftingsocial engineeringandphishing attacks.[179]Additionally, other researchers have demonstrated that open-source models can befine-tunedto remove their safety restrictions at low cost.[180] Trainingfrontier AI modelsrequires an enormous amount of computing power. Usually onlyBig Techcompanies have the financial resources to make such investments. Smaller start-ups such asCohereandOpenAIend up buying access todata centersfromGoogleandMicrosoftrespectively.[181] AI has a significant carbon footprint due to growing energy consumption from both training and usage.[131][132]Scientists and journalists have expressed concerns about the environmental impact that the development and deployment of generative models are having: high CO2emissions,[182][183][184]large amounts of freshwater used for data centers,[185][186]and high amounts of electricity usage.[183][187][188]There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing,[187]aschatbotsand other applications become more popular,[186][187]and as models need to be retrained.[187] The carbon footprint of generative AI globally is estimated to be growing steadily, with potential annual emissions ranging from 18.21 to 245.94 million tons of CO2 by 2035,[189]with the highest estimates for 2035 nearing the impact of the United Statesbeef industryon emissions (currently estimated to emit 257.5 million tons annually as of 2024).[190] Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection,[182]increasing efficiency of data centers to reduce electricity/energy usage,[184][187][188]building more efficientmachine learning models,[183][185][186]minimizing the number of times that models need to be retrained,[184]developing a government-directed framework for auditing the environmental impact of these models,[184][185]regulating for transparency of these models,[184]regulating their energy and water usage,[185]encouraging researchers to publish data on their models' carbon footprint,[184][187]and increasing the number of subject matter experts who understand both machine learning and climate science.[184] The New York Timesdefinesslopas analogous tospam: "shoddy or unwanted A.I. content in social media, art, books and ... 
in search results."[191]Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation,[192]the monetary incentives from social media companies to spread such content,[192][193]false political messaging,[193]spamming of scientific research paper submissions,[194]increased time and effort to find higher quality or desired content on the Internet,[195]the indexing of generated content by search engines,[196]and the effect on journalism itself.[197] A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences from a sample of over 6 billion sentences fromCommon Crawl, a snapshot of web pages, weremachine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated across at least three languages. Many lower-resource languages (e.g.Wolof,Xhosa) were translated across more languages than higher-resource languages (e.g. English, French).[198][199] In September 2024,Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data fromRedditandTwitter, excessive focus on generative AI compared to other methods in thenatural language processingcommunity, and that "generative AI has polluted the data".[200] The adoption of generative AI tools led to an explosion of AI-generated content across multiple domains. A study fromUniversity College Londonestimated that in 2023, more than 60,000 scholarly articles (over 1% of all publications) were likely written with LLM assistance.[201]According toStanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs.[202]Many academic disciplines have concerns about the factual reliability of academic content generated by AI.[203] Visual content follows a similar trend. Since the launch ofDALL-E 2 in 2022, it is estimated that an average of 34 million images have been created daily. As of August 2023, more than 15 billion images had been generated using text-to-image algorithms, with 80% of these created by models based onStable Diffusion.[204] If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur.[205]Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in a "model collapse" after multiple iterations.[206]Tests have been conducted with pattern recognition of handwritten letters and with pictures of human faces.[207]As a consequence, data collected from genuine human interactions with systems may become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet. On the other hand,synthetic datais often used as an alternative to data produced by real-world events.
Such data can be deployed to validate mathematical models and to train machine learning models while preserving user privacy,[208]including for structured data.[209]The approach is not limited to text generation; image generation has been employed to train computer vision models.[210] In January 2023,Futurism.combroke the story thatCNEThad been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories.[211] In April 2023, the German tabloidDie Aktuellepublished a fake AI-generated interview with former racing driverMichael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident. The story included two possible disclosures: the cover included the line "deceptively real", and the interview included an acknowledgment at the end that it was AI-generated. The editor-in-chief was fired shortly thereafter amid the controversy.[212] Other outlets that have published articles whose content or byline have been confirmed or suspected to be created by generative AI models – often with false content, errors, or non-disclosure of generative AI use – include: In May 2024, Futurism noted that a content management system video by AdVon Commerce, who had used generative AI to produce articles for many of the aforementioned outlets, appeared to show that they "had produced tens of thousands of articles for more than 150 publishers."[221] News broadcasters in Kuwait, Greece, South Korea, India, China and Taiwan have presented news with anchors based on Generative AI models, prompting concerns about job losses for human anchors and audience trust in news that has historically been influenced byparasocial relationshipswith broadcasters, content creators or social media influencers.[242][243][244]Algorithmically generated anchors have also been used by allies ofISISfor their broadcasts.[245] In 2023, Google reportedly pitched a tool to news outlets that claimed to "produce news stories" based on input data provided, such as "details of current events". Some news company executives who viewed the pitch described it as "[taking] for granted the effort that went into producing accurate and artful news stories."[246] In February 2024, Google launched a program to pay small publishers to write three articles per day using a beta generative AI model. 
The program does not require the knowledge or consent of the websites that the publishers are using as sources, nor does it require the published articles to be labeled as being created or assisted by these models.[247] Many defunct news sites (The Hairpin,The Frisky,Apple Daily,Ashland Daily Tidings,Clayton County Register,Southwest Journal) and blogs (The Unofficial Apple Weblog,iLounge) have undergonecybersquatting, with articles created by generative AI.[248][249][250][251][252][253][254][255] United States SenatorsRichard BlumenthalandAmy Klobucharhave expressed concern that generative AI could have a harmful impact on local news.[256]In July 2023, OpenAI partnered with the American Journalism Project to fund local news outlets for experimenting with generative AI, with Axios noting the possibility of generative AI companies creating a dependency for these news outlets.[257] Meta AI, a chatbot based onLlama 3which summarizes news stories, was noted byThe Washington Postto copy sentences from those stories without direct attribution and to potentially further decrease the traffic of online news outlets.[258] In response to potential pitfalls around the use and misuse of generative AI in journalism and worries about declining audience trust, outlets around the world, including publications such asWired,Associated Press,The Quint,RapplerorThe Guardianhave published guidelines around how they plan to use and not use AI and generative AI in their work.[259][260][261][262] In June 2024,Reuters Institutepublished theirDigital News Report for 2024. In a survey of people in America and Europe, Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". The results of global surveys reported that people were more uncomfortable with news topics including politics (46%), crime (43%), and local news (37%) produced by AI than other news topics.[263]
https://en.wikipedia.org/wiki/Generative_AI
Mark V. Shaneyis a syntheticUsenetuser whose postings in thenet.singlesnewsgroupswere generated byMarkov chaintechniques, based on text from other postings. The username is a play on the words "Markov chain". Many readers were fooled into thinking that the quirky, sometimes uncannily topical posts were written by a real person. The system was designed byRob Pikewith coding by Bruce Ellis. Don P. Mitchell wrote the Markov chain code, initially demonstrating it to Pike and Ellis using theTao Te Chingas a basis. They chose to apply it to thenet.singlesnetnewsgroup. The program is fairly simple. It ingests the sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every sequence of three successive words (triplet) which occurs in the text. It then chooses two words at random, and looks for a word which follows those two in one of the triplets in its massive list. If there is more than one, it picks at random (identical triplets count separately, so a sequence which occurs twice is twice as likely to be picked as one which only occurs once). It then adds that word to the generated text.[1] Then, in the same way, it picks a triplet that starts with the second and third words in the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on. This algorithm is called a third-order Markov chain (because it uses sequences of three words).[1] A classic example, from 1984, originally sent as a mail message, later posted to net.singles[2]is reproduced here: >From mvs Fri Nov 16 17:11 EST 1984 remote from alice It looks likeReaganis going to say? Ummm... Oh yes, I was looking for. I'm so glad I remembered it. Yeah, what I have wondered if I had committed a crime. Don't eat with your assessment of Reagon andMondale. Up your nose with a guy from a firm that specifically researches the teen-age market. As a friend of mine would say, "It really doesn't matter"... It looks like Reagan is holding back the arms of the American eating public have changed dramatically, and it got pretty boring after about 300 games. People, having a much larger number of varieties, and are very different from what one can find inChinatownsacross the country (things likepork buns, steamed dumplings, etc.) They can be cheap, being sold for around 30 to 75 cents apiece (depending on size), are generally not greasy, can be adequately explained by stupidity. Singles have felt insecure since we came down from the Conservative world at large. ButChuquiis the way it happened and the prices are VERY reasonable. Can anyone think of myself as athird sex. Yes, I am expected to have. People often get used to me knowing these things and then a cover is placed over all of them. Along the side of the $$ are spent by (or at least for ) the girls. You can't settle the issue. It seems I've forgotten what it is, but I don't. I know about violence against women, and I really doubt they will ever join together into a large number of jokes. It showedAdam, just after being created. He has amodemand anautodialroutine. He calls my number 1440 times a day. So I will conclude by saying that I can well understand that she might soon have the time, it makes sense, again, to get the gist of my argument, I was in that (though it's aRepublicanadministration). 
_-_-_-_-Mark Other quotations from Mark's Usenet posts are:[3] InThe Usenet HandbookMark Harrison writes that after September 1981, students joined Useneten masse, "creating the USENET we know today: endless dumb questions, endless idiots posing as savants, and (of course) endless victims for practical jokes." In December, Rob Pike created thenetnewsgroupnet.suicideas a prank, "a forum for bad jokes". Some users thought it was a legitimate forum; some discussed "riding motorcycles without helmets". At first, most posters were "real people", but soon "characters" began posting. Pike created a "vicious" character named Bimmler. At its peak,net.suicidehad ten frequent posters; nine were "known to be characters." But ultimately, Pike deleted the newsgroup because it was too much work to maintain; Bimmler messages were created "by hand". The "obvious alternative" was software,[8]running on a Bell Labs computer[3]created by Bruce Ellis, based on the Markov code by Don Mitchell, which became the online character Mark V. Shaney.[9][10][11] Kernighanand Pike listed Mark V. Shaney in the acknowledgements inThe Practice of Programming,[12]noting its roots in Mitchell'smarkov, which, adapted asshaney,[13]was used for "humorousdeconstructionistactivities" in the 1980s.[14] Dewdney pointed out "perhaps Mark V. Shaney's magnum opus: a 20-page commentary on the deconstructionist philosophy ofJean Baudrillard" directed by Pike, with assistance from Henry S. Baird and Catherine Richards, to be distributed by email.[10]The piece was based on Jean Baudrillard's "The Precession of Simulacra",[15]published inSimulacra and Simulation(1981). The program was discussed byA. K. Dewdneyin theScientific American"Computer Recreations" column in 1989,[10]byPenn Jillettein hisPC Computingcolumn in 1991,[3]in several books, including theUsenet Handbook,[8]Bots: the Origin of New Species,[16]andHippo Eats Dwarf: A Field Guide to Hoaxes and Other B.S.,[17]and in non-computer-related journals such asTexas Studies in Literature and Language.[18] Dewdney wrote about the program's output, "The overall impression is not unlike what remains in the brain of an inattentive student after a late-night study session. Indeed, after reading the output of Mark V. Shaney, I find ordinary writing almost equally strange and incomprehensible!" He noted the reactions of newsgroup users, who have "shuddered at Mark V. Shaney's reflections, some with rage and others with laughter:"[10] The opinions of the newnet.singlescorrespondent drew mixed reviews. Serious users of the bulletin board's services sensed satire. Outraged, they urged that someone "pull the plug" on Mark V. Shaney's monstrous rantings. Others inquired almost admiringly whether the program was a secret artificial intelligence project that was being tested in a human conversational environment. A few may even have thought that Mark V. Shaney was a real person, a tortured schizophrenic desperately seeking a like-minded companion.[10] Concluding, Dewdney wrote, "If the purpose of computer prose is to fool people into thinking that it was written by a sane person, Mark V. Shaney probably falls short."[10] A 2012 article inObservercompared Mark V. Shaney's "strangely beautiful" postings to theHorse_ebooksaccount onTwitterand music reviews atPitchfork, saying that "this mash-up of gibberish and human sentiment" is what "made Mark V. Shaney so endlessly fascinating".[19]
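The triplet scheme described above is simple enough to sketch in a few lines. The following is a minimal, hypothetical reimplementation in Python rather than the original Bell Labs program: it builds the table of three-word sequences from a sample text and then repeatedly chooses a random continuation of the last two words, so that a triplet occurring twice is twice as likely to be picked, exactly as described. The file name "corpus.txt" is a placeholder for any plain-text sample.

import random
from collections import defaultdict

def build_triplet_table(text):
    # Map every pair of successive words to the list of words that follow it.
    # Duplicates are kept on purpose: a triplet that occurs twice is twice as
    # likely to be chosen during generation.
    words = text.split()
    table = defaultdict(list)
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        table[(w1, w2)].append(w3)
    return table

def generate(table, length=100, seed=None):
    rng = random.Random(seed)
    w1, w2 = rng.choice(list(table.keys()))   # start from a random observed pair
    output = [w1, w2]
    for _ in range(length):
        followers = table.get((w1, w2))
        if not followers:                     # the pair only occurs at the very end of the text
            break
        w3 = rng.choice(followers)
        output.append(w3)
        w1, w2 = w2, w3
    return " ".join(output)

if __name__ == "__main__":
    # "corpus.txt" is a placeholder for any plain-text sample (a book, a
    # newsgroup archive, and so on).
    sample = open("corpus.txt", encoding="utf-8").read()
    print(generate(build_triplet_table(sample), length=100))

Fed a large enough corpus, a sketch like this reproduces the characteristic mix of locally plausible phrasing and globally wandering sense seen in the example post quoted above.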
https://en.wikipedia.org/wiki/Mark_V._Shaney
In probability theory and statistics, aMarkov chainorMarkov processis astochastic processdescribing asequenceof possible events in which theprobabilityof each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairsnow." Acountably infinitesequence, in which the chain moves state at discrete time steps, gives adiscrete-time Markov chain(DTMC). Acontinuous-timeprocess is called acontinuous-time Markov chain(CTMC). Markov processes are named in honor of theRussianmathematicianAndrey Markov. Markov chains have many applications asstatistical modelsof real-world processes.[1]They provide the basis for general stochastic simulation methods known asMarkov chain Monte Carlo, which are used for simulating sampling from complexprobability distributions, and have found application in areas includingBayesian statistics,biology,chemistry,economics,finance,information theory,physics,signal processing, andspeech processing.[1][2][3] The adjectivesMarkovianandMarkovare used to describe something that is related to a Markov process.[4] A Markov process is astochastic processthat satisfies theMarkov property(sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.[5]In other words,conditionalon the present state of the system, its future and past states areindependent. A Markov chain is a type of Markov process that has either a discretestate spaceor a discrete index set (often representing time), but the precise definition of a Markov chain varies.[6]For example, it is common to define a Markov chain as a Markov process in eitherdiscrete or continuous timewith a countable state space (thus regardless of the nature of time),[7][8][9][10]but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[6] The system'sstate spaceand time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, adiscrete-time Markov chain (DTMC),[11]but a few authors use the term "Markov process" to refer to acontinuous-time Markov chain (CTMC)without explicit mention.[12][13][14]In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (seeMarkov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. 
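As a concrete sketch of the Markov property in the discrete-time, countable-state case, the short Python fragment below simulates an invented two-state "weather" chain (the matrix is not taken from the sources above): the next state is sampled from a distribution that depends only on the current state, never on the earlier history.

import random

# Invented two-state chain: each row gives the distribution of the next state
# conditional on the current state only.
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state, rng):
    # The next state is drawn using the current state alone; no earlier
    # history is consulted, which is the Markov property.
    dist = TRANSITIONS[state]
    return rng.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]

def simulate(start, n_steps, seed=0):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

print(simulate("sunny", 10))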
While the time parameter is usually discrete, thestate spaceof a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15]However, many applications of Markov chains employ finite orcountably infinitestate spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (seeVariations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, atransition matrixdescribing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are theintegersornatural numbers, and the random process is a mapping of these to states. The Markov property states that theconditional probability distributionfor the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. Andrey Markovstudied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.[16][17][18]Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of thePoisson process.[19][20][21]Markov was interested in studying an extension of independent random sequences, motivated by a disagreement withPavel Nekrasovwho claimed independence was necessary for theweak law of large numbersto hold.[22]In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[16][17][18]which had been commonly regarded as a requirement for such mathematical laws to hold.[18]Markov later used Markov chains to study the distribution of vowels inEugene Onegin, written byAlexander Pushkin, and proved acentral limit theoremfor such chains.[16] In 1912Henri Poincaréstudied Markov chains onfinite groupswith an aim to study card shuffling. 
Other early uses of Markov chains include a diffusion model, introduced byPaulandTatyana Ehrenfestin 1907, and a branching process, introduced byFrancis GaltonandHenry William Watsonin 1873, preceding the work of Markov.[16][17]After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier byIrénée-Jules Bienaymé.[23]Starting in 1928,Maurice Fréchetbecame interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24] Andrey Kolmogorovdeveloped in a 1931 paper a large part of the early theory of continuous-time Markov processes.[25][26]Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well asNorbert Wiener's work on Einstein's model of Brownian movement.[25][27]He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[25][28]Independent of Kolmogorov's work,Sydney Chapmanderived in a 1928 paper an equation, now called theChapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[29]The differential equations are now called the Kolmogorov equations[30]or the Kolmogorov–Chapman equations.[31]Other mathematicians who contributed significantly to the foundations of Markov processes includeWilliam Feller, starting in 1930s, and then laterEugene Dynkin, starting in the 1950s.[26] Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. IfXn{\displaystyle X_{n}}represents the total value of the coins set on the table afterndraws, withX0=0{\displaystyle X_{0}=0}, then the sequence{Xn:n∈N}{\displaystyle \{X_{n}:n\in \mathbb {N} \}}isnota Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. ThusX6=$0.50{\displaystyle X_{6}=\$0.50}. If we know not justX6{\displaystyle X_{6}}, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine thatX7≥$0.60{\displaystyle X_{7}\geq \$0.60}with probability 1. But if we do not know the earlier values, then based only on the valueX6{\displaystyle X_{6}}we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses aboutX7{\displaystyle X_{7}}are impacted by our knowledge of values prior toX6{\displaystyle X_{6}}. However, it is possible to model this scenario as a Markov process. Instead of definingXn{\displaystyle X_{n}}to represent thetotal valueof the coins on the table, we could defineXn{\displaystyle X_{n}}to represent thecountof the various coin types on the table. For instance,X6=1,0,5{\displaystyle X_{6}=1,0,5}could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by6×6×6=216{\displaystyle 6\times 6\times 6=216}possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in stateX1=0,1,0{\displaystyle X_{1}=0,1,0}. 
The probability of achievingX2{\displaystyle X_{2}}now depends onX1{\displaystyle X_{1}}; for example, the stateX2=1,0,1{\displaystyle X_{2}=1,0,1}is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of theXn=i,j,k{\displaystyle X_{n}=i,j,k}state depends exclusively on the outcome of theXn−1=ℓ,m,p{\displaystyle X_{n-1}=\ell ,m,p}state. A discrete-time Markov chain is a sequence ofrandom variablesX1,X2,X3, ... with theMarkov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:{\displaystyle \Pr(X_{n+1}=x\mid X_{1}=x_{1},X_{2}=x_{2},\ldots ,X_{n}=x_{n})=\Pr(X_{n+1}=x\mid X_{n}=x_{n}),}provided that both conditional probabilities are well defined. The possible values ofXiform acountable setScalled the state space of the chain. A continuous-time Markov chain (Xt)t≥ 0is defined by a finite or countable state spaceS, atransition rate matrixQwith dimensions equal to that of the state space and initial probability distribution defined on the state space. Fori≠j, the elementsqijare non-negative and describe the rate of the process transitions from stateito statej. The elementsqiiare chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process.[40] LetXt{\displaystyle X_{t}}be the random variable describing the state of the process at timet, and assume the process is in a stateiat timet. Then, knowingXt=i{\displaystyle X_{t}=i},Xt+h=j{\displaystyle X_{t+h}=j}is independent of previous values(Xs:s<t){\displaystyle \left(X_{s}:s<t\right)}, and ash→ 0 for alljand for allt,Pr(X(t+h)=j∣X(t)=i)=δij+qijh+o(h),{\displaystyle \Pr(X(t+h)=j\mid X(t)=i)=\delta _{ij}+q_{ij}h+o(h),}whereδij{\displaystyle \delta _{ij}}is theKronecker delta, using thelittle-o notation. Theqij{\displaystyle q_{ij}}can be seen as measuring how quickly the transition fromitojhappens. Define a discrete-time Markov chainYnto describe thenth jump of the process and variablesS1,S2,S3, ... to describe holding times in each of the states whereSifollows theexponential distributionwith rate parameter −qYiYi. For any valuen= 0, 1, 2, 3, ... and times indexed up to this value ofn:t0,t1,t2, ... and all states recorded at these timesi0,i1,i2,i3, ... it holds that{\displaystyle \Pr(X_{t_{n+1}}=i_{n+1}\mid X_{t_{0}}=i_{0},X_{t_{1}}=i_{1},\ldots ,X_{t_{n}}=i_{n})=p_{i_{n}i_{n+1}}(t_{n+1}-t_{n}),}wherepijis the solution of theforward equation(afirst-order differential equation){\displaystyle P'(t)=P(t)Q,}with initial condition P(0) equal to theidentity matrix. If the state space isfinite, the transition probability distribution can be represented by amatrix, called the transition matrix, with the (i,j)th element ofPequal to{\displaystyle P_{ij}=\Pr(X_{n+1}=j\mid X_{n}=i).}Since each row ofPsums to one and all elements are non-negative,Pis aright stochastic matrix. A stationary distributionπis a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrixPon it; it is therefore defined by{\displaystyle \pi =\pi \mathbf {P} .}By comparing this definition with that of aneigenvector, we see that the two concepts are related and thatπis a normalized (∑iπi=1{\textstyle \sum _{i}\pi _{i}=1}) multiple of a left eigenvectoreof the transition matrixPwith aneigenvalueof 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
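A numerical sketch of these definitions, assuming NumPy is available and using an arbitrary 3×3 right stochastic matrix (not one drawn from the text): the stationary distribution can be computed as the left eigenvector of the transition matrix for eigenvalue 1, normalized so that its entries sum to one.

import numpy as np

# Arbitrary right stochastic matrix: every row is non-negative and sums to 1.
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])
assert np.allclose(P.sum(axis=1), 1.0)

# Left eigenvectors of P are right eigenvectors of P transposed.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigenvalues - 1.0))   # eigenvalue closest to 1
pi = np.real(eigenvectors[:, i])
pi = pi / pi.sum()                         # normalize so the entries sum to 1

print("stationary distribution:", pi)
print("pi P = pi:", np.allclose(pi @ P, pi))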
The values of a stationary distributionπi{\displaystyle \textstyle \pi _{i}}are associated with the state space ofPand its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as∑i1⋅πi=1{\textstyle \sum _{i}1\cdot \pi _{i}=1}, we see that thedot productof π with a vector whose components are all 1 is unity and that π lies on asimplex. If the Markov chain is time-homogeneous, then the transition matrixPis the same after each step, so thek-step transition probability can be computed as thek-th power of the transition matrix,Pk. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distributionπ.[41]Additionally, in this casePkconverges to a rank-one matrix in which each row is the stationary distributionπ:{\displaystyle \lim _{k\to \infty }\mathbf {P} ^{k}=\mathbf {1} \pi ,}where1is the column vector with all entries equal to 1. This is stated by thePerron–Frobenius theorem. If, by whatever means,limk→∞Pk{\textstyle \lim _{k\to \infty }\mathbf {P} ^{k}}is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matricesP, the limitlimk→∞Pk{\textstyle \lim _{k\to \infty }\mathbf {P} ^{k}}does not exist while the stationary distribution does, as shown by this example:{\displaystyle \mathbf {P} ={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\qquad \mathbf {P} ^{2k}=\mathbf {I} ,\qquad \mathbf {P} ^{2k+1}=\mathbf {P} .}(This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. LetPbe ann×nmatrix, and defineQ=limk→∞Pk.{\textstyle \mathbf {Q} =\lim _{k\to \infty }\mathbf {P} ^{k}.} It is always true that{\displaystyle \mathbf {Q} \mathbf {P} =\mathbf {Q} .}SubtractingQfrom both sides and factoring then yields{\displaystyle \mathbf {Q} (\mathbf {P} -\mathbf {I} _{n})=\mathbf {0} _{n,n},}whereInis theidentity matrixof sizen, and0n,nis thezero matrixof sizen×n. Multiplying together stochastic matrices always yields another stochastic matrix, soQmust be astochastic matrix(see the definition above). It is sometimes sufficient to use the matrix equation above and the fact thatQis a stochastic matrix to solve forQ. Including the fact that the sum of each of the rows inPis 1, there aren+1equations for determiningnunknowns, so it is computationally easier if on the one hand one selects one row inQand substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to findQ. Here is one method for doing so: first, define the functionf(A) to return the matrixAwith its right-most column replaced with all 1's. If [f(P−In)]−1exists then[42][41]{\displaystyle \mathbf {Q} =f(\mathbf {0} _{n,n})[f(\mathbf {P} -\mathbf {I} _{n})]^{-1}.}One thing to notice is that ifPhas an elementPi,ion its main diagonal that is equal to 1 and theith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powersPk. Hence, theith row or column ofQwill have the 1 and the 0's in the same positions as inP. As stated earlier, from the equationπ=πP,{\displaystyle {\boldsymbol {\pi }}={\boldsymbol {\pi }}\mathbf {P} ,}(if it exists) the stationary (or steady state) distributionπis a left eigenvector of rowstochastic matrixP. Then assuming thatPis diagonalizable or equivalently thatPhasnlinearly independent eigenvectors, speed of convergence is elaborated as follows.
(For non-diagonalizable, that is,defective matrices, one may start with theJordan normal formofPand proceed with a bit more involved set of arguments in a similar way.[43]) LetUbe the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector ofPand letΣbe the diagonal matrix of left eigenvalues ofP, that is,Σ= diag(λ1,λ2,λ3,...,λn). Then byeigendecomposition Let the eigenvalues be enumerated such that: SincePis a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no otherπwhich solves the stationary distribution equation above). Letuibe thei-th column ofUmatrix, that is,uiis the left eigenvector ofPcorresponding to λi. Also letxbe a lengthnrow vector that represents a valid probability distribution; since the eigenvectorsuispanRn,{\displaystyle \mathbb {R} ^{n},}we can write If we multiplyxwithPfrom right and continue this operation with the results, in the end we get the stationary distributionπ. In other words,π=a1u1←xPP...P=xPkask→ ∞. That means Sinceπis parallel tou1(normalized by L2 norm) andπ(k)is a probability vector,π(k)approaches toa1u1=πask→ ∞ with a speed in the order ofλ2/λ1exponentially. This follows because|λ2|≥⋯≥|λn|,{\displaystyle |\lambda _{2}|\geq \cdots \geq |\lambda _{n}|,}henceλ2/λ1is the dominant term. The smaller the ratio is, the faster the convergence is.[44]Random noise in the state distributionπcan also speed up this convergence to the stationary distribution.[45] Many results for Markov chains with finite state space can be generalized to chains with uncountable state space throughHarris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. Seeinteracting particle systemandstochastic cellular automata(probabilistic cellular automata). See for instanceInteraction of Markov Processes[46]or.[47] Two states are said tocommunicatewith each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class isclosedif the probability of leaving the class is zero. A Markov chain isirreducibleif there is one communicating class, the state space. A stateihas periodkifkis thegreatest common divisorof the number of transitions by whichican be reached, starting fromi. That is: The state isperiodicifk>1{\displaystyle k>1}; otherwisek=1{\displaystyle k=1}and the state isaperiodic. A stateiis said to betransientif, starting fromi, there is a non-zero probability that the chain will never return toi. It is calledrecurrent(orpersistent) otherwise.[48]For a recurrent statei, the meanhitting timeis defined as: Stateiispositive recurrentifMi{\displaystyle M_{i}}is finite andnull recurrentotherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property.[49] A stateiis calledabsorbingif there are no outgoing transitions from the state. 
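Returning to the finite-state discussion above, both the limiting matrix Q = lim P^k obtained from the f construction and the role of the second-largest eigenvalue in the speed of convergence can be checked numerically. The sketch below assumes NumPy and reuses an arbitrary 3×3 stochastic matrix; it is an illustration of the stated results under those assumptions, not a general-purpose implementation.

import numpy as np

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])
n = P.shape[0]

def f(A):
    # Return A with its right-most column replaced by all 1's.
    B = A.copy()
    B[:, -1] = 1.0
    return B

# Q = f(0_{n,n}) [f(P - I_n)]^{-1}, valid whenever the inverse exists.
Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))
print("limiting matrix Q (each row is the stationary distribution):")
print(Q)

# Speed of convergence: the distance from the stationary distribution after
# k steps shrinks roughly like |lambda_2|^k.
lam2 = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)[1]
pi = Q[0]
x = np.array([1.0, 0.0, 0.0])              # an arbitrary starting distribution
for k in (1, 5, 10, 20):
    err = np.linalg.norm(x @ np.linalg.matrix_power(P, k) - pi, 1)
    print(f"k={k:2d}  error={err:.2e}  |lambda_2|^k={lam2 ** k:.2e}")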
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50] If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given byπi=1/E[Ti]{\displaystyle \pi _{i}=1/E[T_{i}]}. A stateiis said to beergodicif it is aperiodic and positive recurrent. In other words, a stateiis ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integerk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a numberNsuch that any state can be reached from any other state in any number of steps less or equal to a numberN. In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled withN= 1. A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones.[51]In fact, merely irreducible Markov chains correspond toergodic processes, defined according toergodic theory.[52] Some authors call a matrixprimitiveif there exists some integerk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive.[53]Some authors call itregular.[54] Theindex of primitivity, orexponent, of a regular matrix, is the smallestk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry ofM{\displaystyle M}is zero or positive, and therefore can be found on a directed graph withsign(M){\displaystyle \mathrm {sign} (M)}as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states. Letn{\displaystyle n}be the number of states, then[55] If a Markov chain has a stationary distribution, then it can be converted to ameasure-preserving dynamical system: Let the probability space beΩ=ΣN{\displaystyle \Omega =\Sigma ^{\mathbb {N} }}, whereΣ{\displaystyle \Sigma }is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. LetT:Ω→Ω{\displaystyle T:\Omega \to \Omega }be the shift operator:T(X0,X1,…)=(X1,…){\displaystyle T(X_{0},X_{1},\dots )=(X_{1},\dots )}. Similarly we can construct such a dynamical system withΩ=ΣZ{\displaystyle \Omega =\Sigma ^{\mathbb {Z} }}instead.[57] SinceirreducibleMarkov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. Inergodic theory, a measure-preserving dynamical system is calledergodicif any measurable subsetS{\displaystyle S}such thatT−1(S)=S{\displaystyle T^{-1}(S)=S}impliesS=∅{\displaystyle S=\emptyset }orΩ{\displaystyle \Omega }(up to a null set). The terminology is inconsistent. 
Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain isirreducibleif its corresponding measure-preserving dynamical system isergodic.[52] In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, letXbe a non-Markovian process. Then define a processY, such that each state ofYrepresents a time-interval of states ofX. IfYhas the Markov property, then it is a Markovian representation ofX. An example of a non-Markovian process with a Markovian representation is anautoregressivetime seriesof order greater than one.[58] Thehitting timeis the time, starting in a given set of states, until the chain arrives in a given state or set of states. Such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition. For a subset of statesA⊆S, the vectorkAof hitting times (where elementkiA{\displaystyle k_{i}^{A}}represents theexpected time, starting in statei, until the chain enters one of the states in the setA) is the minimal non-negative solution to[59]{\displaystyle {\begin{aligned}k_{i}^{A}&=0&&{\text{for }}i\in A,\\-\sum _{j\in S}q_{ij}k_{j}^{A}&=1&&{\text{for }}i\notin A.\end{aligned}}}For a CTMCXt, the time-reversed process is defined to beX^t=XT−t{\displaystyle {\hat {X}}_{t}=X_{T-t}}. ByKelly's lemmathis process has the same stationary distribution as the forward process. A chain is said to bereversibleif the reversed process is the same as the forward process.Kolmogorov's criterionstates that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. One method of finding thestationary probability distribution,π, of anergodiccontinuous-time Markov chain,Q, is by first finding itsembedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as ajump process. Each element of the one-step transition probability matrix of the EMC,S, is denoted bysij, and represents theconditional probabilityof transitioning from stateiinto statej. These conditional probabilities may be found by{\displaystyle s_{ij}={\begin{cases}{\dfrac {q_{ij}}{\sum _{k\neq i}q_{ik}}}&{\text{if }}i\neq j,\\0&{\text{otherwise.}}\end{cases}}}From this,Smay be written as{\displaystyle S=I-\left(\operatorname {diag} (Q)\right)^{-1}Q,}whereIis theidentity matrixand diag(Q) is thediagonal matrixformed by selecting themain diagonalfrom the matrixQand setting all other elements to zero. To find the stationary probability distribution vector, we must next findφ{\displaystyle \varphi }such that{\displaystyle \varphi S=\varphi ,}withφ{\displaystyle \varphi }being a row vector, such that all elements inφ{\displaystyle \varphi }are greater than 0 and‖φ‖1{\displaystyle \|\varphi \|_{1}}= 1. From this,πmay be found as{\displaystyle \pi ={\frac {-\varphi (\operatorname {diag} (Q))^{-1}}{\left\|\varphi (\operatorname {diag} (Q))^{-1}\right\|_{1}}}.}(Smay be periodic, even ifQis not. Onceπis found, it must be normalized to aunit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton: the (discrete-time) Markov chain formed by observingX(t) at intervals of δ units of time. The random variablesX(0),X(δ),X(2δ), ... give the sequence of states visited by the δ-skeleton. Markov models are used to model changing systems.
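Before turning to the broader families of Markov models, the embedded-chain construction just described can be illustrated numerically. The sketch below assumes NumPy and uses an arbitrary 3-state rate matrix Q (an invented example): it forms the jump chain S, finds its stationary vector φ, and reweights by the expected holding times to recover the stationary distribution π of the CTMC.

import numpy as np

# Arbitrary transition rate matrix: off-diagonal entries are non-negative
# rates and every row sums to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])
assert np.allclose(Q.sum(axis=1), 0.0)

# Embedded (jump) chain: S = I - diag(Q)^{-1} Q has zero diagonal and each
# row of off-diagonal rates normalized to sum to one.
D = np.diag(np.diag(Q))
S = np.eye(3) - np.linalg.inv(D) @ Q

# Stationary vector phi of the embedded chain: phi S = phi, entries sum to 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
phi = phi / phi.sum()

# Stationary distribution of the CTMC: reweight phi by the expected holding
# times 1 / (-q_ii) and renormalize.
unnormalized = -phi @ np.linalg.inv(D)
pi = unnormalized / unnormalized.sum()

print("pi:", pi)
print("pi Q = 0:", np.allclose(pi @ Q, 0.0))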
There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: ABernoulli schemeis a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as aBernoulli process. Note, however, by theOrnstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme;[60]thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states thatanystationary stochastic processis isomorphic to a Bernoulli scheme; the Markov chain is just one such example. When the Markov matrix is replaced by theadjacency matrixof afinite graph, the resulting shift is termed atopological Markov chainor asubshift of finite type.[60]A Markov matrix that is compatible with the adjacency matrix can then provide ameasureon the subshift. Many chaoticdynamical systemsare isomorphic to topological Markov chains; examples includediffeomorphismsofclosed manifolds, theProuhet–Thue–Morse system, theChacon system,sofic systems,context-free systemsandblock-coding systems.[60] Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends,[61]wind power,[62]stochastic terrorism,[63][64]andsolar irradiance.[65]The Markov chain forecasting models utilize a variety of settings, from discretizing the time series,[62]to hidden Markov models combined with wavelets,[61]and the Markov chain mixture distribution model (MCM).[65] Markovian systems appear extensively inthermodynamicsandstatistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.[66][67]For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects.[67] Markov chains are used inlattice QCDsimulations.[68] A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[69]Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large numbernof molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. 
The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time isntimes the probability a given molecule is in that state. The classical model of enzyme activity,Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.[70] An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicalsin silicotowards a desired class of compounds such as drugs or natural products.[71]As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.[72] Also, the growth (and composition) ofcopolymersmay be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due tosteric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxialsuperlatticeoxide materials can be accurately described by Markov chains.[73] Markov chains are used in various areas of biology. Notable examples include: Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.[citation needed] Solar irradiancevariability assessments are useful forsolar powerapplications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,[76][77][78][79]also including modeling the two states of clear and cloudiness as a two-state Markov chain.[80][81] Hidden Markov modelshave been used inautomatic speech recognitionsystems.[82] Markov chains are used throughout information processing.Claude Shannon's famous 1948 paperA Mathematical Theory of Communication, which in a single step created the field ofinformation theory, opens by introducing the concept ofentropyby modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.[83]Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effectivedata compressionthroughentropy encodingtechniques such asarithmetic coding. They also allow effectivestate estimationandpattern recognition. Markov chains also play an important role inreinforcement learning. 
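The information-theoretic use of Markov chains sketched above can be made concrete with the standard entropy-rate formula for a stationary Markov source, H = −Σi πi Σj Pij log2 Pij, which averages the per-state transition entropies under the stationary distribution. A minimal sketch, assuming NumPy and an invented two-symbol source rather than probabilities estimated from real text:

import numpy as np

# Invented two-symbol source; the probabilities are not estimated from any
# real text.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Entropy rate in bits per symbol: H = -sum_i pi_i sum_j P_ij log2 P_ij,
# with the convention 0 log 0 = 0.
plogp = np.where(P > 0, P * np.log2(np.where(P > 0, P, 1.0)), 0.0)
H = -np.sum(pi * plogp.sum(axis=1))
print(f"entropy rate: {H:.4f} bits per symbol")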
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use theViterbi algorithmfor error correction), speech recognition andbioinformatics(such as in rearrangements detection[84]). TheLZMAlossless data compression algorithm combines Markov chains withLempel-Ziv compressionto achieve very high compression ratios. Markov chains are the basis for the analytical treatment of queues (queueing theory).Agner Krarup Erlanginitiated the subject in 1917.[85]This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[86] Numerous queueing models use continuous-time Markov chains. For example, anM/M/1 queueis a CTMC on the non-negative integers where upward transitions fromitoi+ 1 occur at rateλaccording to aPoisson processand describe job arrivals, while transitions fromitoi– 1 (fori> 1) occur at rateμ(job service times are exponentially distributed) and describe completed services (departures) from the queue. ThePageRankof a webpage as used byGoogleis defined by a Markov chain.[87][88][89]It is the probability to be at pagei{\displaystyle i}in the stationary distribution on the following Markov chain on all (known) webpages. IfN{\displaystyle N}is the number of known webpages, and a pagei{\displaystyle i}haski{\displaystyle k_{i}}links to it then it has transition probabilityαki+1−αN{\displaystyle {\frac {\alpha }{k_{i}}}+{\frac {1-\alpha }{N}}}for all pages that are linked to and1−αN{\displaystyle {\frac {1-\alpha }{N}}}for all pages that are not linked to. The parameterα{\displaystyle \alpha }is taken to be about 0.15.[90] Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.[citation needed] Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process calledMarkov chain Monte Carlo(MCMC). In recent years this has revolutionized the practicability ofBayesian inferencemethods, allowing a wide range ofposterior distributionsto be simulated and their parameters found numerically.[citation needed] In 1971 aNaval Postgraduate SchoolMaster's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain andLanchester's laws.[91] In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."[92] Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes.D. G. Champernownebuilt a Markov chain model of the distribution of income in 1953.[93]Herbert A. 
Simonand co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.[94]Louis Bachelierwas the first to observe that stock prices followed a random walk.[95]The random walk was later seen as evidence in favor of theefficient-market hypothesisand random walk models were popular in the literature of the 1960s.[96]Regime-switching models of business cycles were popularized byJames D. Hamilton(1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).[97]A more recent example is theMarkov switching multifractalmodel ofLaurent E. Calvetand Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[98][99]It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in ageneral equilibriumsetting.[100] Credit rating agenciesproduce annual tables of the transition probabilities for bonds of different credit ratings.[101] Markov chains are generally used in describingpath-dependentarguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due toKarl Marx'sDas Kapital, tyingeconomic developmentto the rise ofcapitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of themiddle class, the ratio of urban to rural residence, the rate ofpoliticalmobilization, etc., will generate a higher probability of transitioning fromauthoritariantodemocratic regime.[102] Markov chains are employed inalgorithmic music composition, particularly insoftwaresuch asCsound,Max, andSuperCollider. In a first-order chain, the states of the system become note or pitch values, and aprobability vectorfor each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could beMIDInote values, frequency (Hz), or any other desirable metric.[103] A second-order Markov chain can be introduced by considering the current stateandalso the previous state, as indicated in the second table. Higher,nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense ofphrasalstructure, rather than the 'aimless wandering' produced by a first-order system.[104] Markov chains can be used structurally, as in Xenakis's Analogique A and B.[105]Markov chains are also used in systems which use a Markov model to react interactively to music input.[106] Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[107] Markov chains can be used to model many games of chance. The children's gamesSnakes and Laddersand "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. 
At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).[citation needed] Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits a Markov chain model when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created both for individual players and for a team.[108]He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such asbuntingandbase stealingand differences when playing on grass vs.AstroTurf.[109] Markov processes can also be used togenerate superficially real-looking textgiven a sample document. Markov processes are used in a variety of recreational "parody generator" software (seedissociated press, Jeff Harrison,[110]Mark V. Shaney,[111][112]and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
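A minimal sketch of the word-level, first-order Markov text generation described above, in the spirit of the parody generators mentioned but not the code of any particular one: successor lists are built from a sample text and then sampled to produce new, superficially plausible text. The sample string and function names are illustrative.

```python
import random
from collections import defaultdict

def build_chain(text):
    """First-order, word-level Markov chain: word -> list of observed successors."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20, seed=0):
    """Random walk on the chain; restart at a random word at dead ends."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        word = random.choice(successors) if successors else random.choice(list(chain))
        output.append(word)
    return " ".join(output)

sample = ("the cat sat on the mat and the dog sat on the log "
          "and the cat saw the dog on the mat")
print(generate(build_chain(sample), "the"))
```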
https://en.wikipedia.org/wiki/Markov_text
Instatistical decision theory, anadmissible decision ruleis arule for making a decisionsuch that there is no other rule that is always "better" than it[1](or at least sometimes better and never worse), in the precise sense of "better" defined below. This concept is analogous toPareto efficiency. DefinesetsΘ{\displaystyle \Theta \,},X{\displaystyle {\mathcal {X}}}andA{\displaystyle {\mathcal {A}}}, whereΘ{\displaystyle \Theta \,}are the states of nature,X{\displaystyle {\mathcal {X}}}the possible observations, andA{\displaystyle {\mathcal {A}}}the actions that may be taken. An observation ofx∈X{\displaystyle x\in {\mathcal {X}}\,\!}is distributed asF(x∣θ){\displaystyle F(x\mid \theta )\,\!}and therefore provides evidence about the state of natureθ∈Θ{\displaystyle \theta \in \Theta \,\!}. Adecision ruleis afunctionδ:X→A{\displaystyle \delta :{\mathcal {X}}\rightarrow {\mathcal {A}}}, where upon observingx∈X{\displaystyle x\in {\mathcal {X}}}, we choose to take actionδ(x)∈A{\displaystyle \delta (x)\in {\mathcal {A}}\,\!}. Also define aloss functionL:Θ×A→R{\displaystyle L:\Theta \times {\mathcal {A}}\rightarrow \mathbb {R} }, which specifies the loss we would incur by taking actiona∈A{\displaystyle a\in {\mathcal {A}}}when the true state of nature isθ∈Θ{\displaystyle \theta \in \Theta }. Usually we will take this action after observing datax∈X{\displaystyle x\in {\mathcal {X}}}, so that the loss will beL(θ,δ(x)){\displaystyle L(\theta ,\delta (x))\,\!}. (It is possible though unconventional to recast the following definitions in terms of autility function, which is the negative of the loss.) Define therisk functionas theexpectation Whether a decision ruleδ{\displaystyle \delta \,\!}has low risk depends on the true state of natureθ{\displaystyle \theta \,\!}. A decision ruleδ∗{\displaystyle \delta ^{*}\,\!}dominatesa decision ruleδ{\displaystyle \delta \,\!}if and only ifR(θ,δ∗)≤R(θ,δ){\displaystyle R(\theta ,\delta ^{*})\leq R(\theta ,\delta )}for allθ{\displaystyle \theta \,\!},andthe inequality isstrictfor someθ{\displaystyle \theta \,\!}. A decision rule isadmissible(with respect to the loss function) if and only if no other rule dominates it; otherwise it isinadmissible. Thus an admissible decision rule is amaximal elementwith respect to the above partial order. An inadmissible rule is not preferred (except for reasons of simplicity or computational efficiency), since by definition there is some other rule that will achieve equal or lower risk forallθ{\displaystyle \theta \,\!}. But just because a ruleδ{\displaystyle \delta \,\!}is admissible does not mean it is a good rule to use. Being admissible means there is no other single rule that isalwaysas good or better – but other admissible rules might achieve lower risk for mostθ{\displaystyle \theta \,\!}that occur in practice. (The Bayes risk discussed below is a way of explicitly considering whichθ{\displaystyle \theta \,\!}occur in practice.) Letπ(θ){\displaystyle \pi (\theta )\,\!}be a probability distribution on the states of nature. From aBayesianpoint of view, we would regard it as aprior distribution. That is, it is our believed probability distribution on the states of nature, prior to observing data. For afrequentist, it is merely a function onΘ{\displaystyle \Theta \,\!}with no such special interpretation. 
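As a small numerical illustration of risk functions and the dominance partial order (a toy example with assumed rules, not part of the formal development), the sketch below compares two decision rules for estimating a normal mean θ from a single observation X ~ N(θ, 1) under squared-error loss: δ1(x) = x and δ2(x) = x/2. Neither rule dominates the other, since each achieves the smaller risk for some values of θ.

```python
import numpy as np

# Squared-error risk of two decision rules for estimating a normal mean theta
# from a single observation X ~ N(theta, 1):
#   delta1(x) = x      (the raw observation)
#   delta2(x) = x / 2  (a rule that shrinks toward zero)
# The closed-form risks follow from R(theta, delta) = bias^2 + variance.
thetas = np.linspace(-3, 3, 13)
risk_delta1 = np.ones_like(thetas)        # Var(X) = 1, no bias
risk_delta2 = 0.25 + 0.25 * thetas**2     # Var(X/2) + (theta/2)^2

for t, r1, r2 in zip(thetas, risk_delta1, risk_delta2):
    print(f"theta={t:+.1f}  R(delta1)={r1:.2f}  R(delta2)={r2:.2f}")

# Neither rule dominates the other: delta2 has lower risk for |theta| < sqrt(3),
# delta1 has lower risk for |theta| > sqrt(3), so dominance is only a partial order.
```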
TheBayes riskof the decision ruleδ{\displaystyle \delta \,\!}with respect toπ(θ){\displaystyle \pi (\theta )\,\!}is the expectation A decision ruleδ{\displaystyle \delta \,\!}that minimizesr(π,δ){\displaystyle r(\pi ,\delta )\,\!}is called aBayes rulewith respect toπ(θ){\displaystyle \pi (\theta )\,\!}. There may be more than one such Bayes rule. If the Bayes risk is infinite for allδ{\displaystyle \delta \,\!}, then no Bayes rule is defined. In the Bayesian approach to decision theory, the observedx{\displaystyle x\,\!}is consideredfixed. Whereas the frequentist approach (i.e., risk) averages over possible samplesx∈X{\displaystyle x\in {\mathcal {X}}\,\!}, the Bayesian would fix the observed samplex{\displaystyle x\,\!}and average over hypothesesθ∈Θ{\displaystyle \theta \in \Theta \,\!}. Thus, the Bayesian approach is to consider for our observedx{\displaystyle x\,\!}theexpected loss where the expectation is over theposteriorofθ{\displaystyle \theta \,\!}givenx{\displaystyle x\,\!}(obtained fromπ(θ){\displaystyle \pi (\theta )\,\!}andF(x∣θ){\displaystyle F(x\mid \theta )\,\!}usingBayes' theorem). Having made explicit the expected loss for each givenx{\displaystyle x\,\!}separately, we can define a decision ruleδ{\displaystyle \delta \,\!}by specifying for eachx{\displaystyle x\,\!}an actionδ(x){\displaystyle \delta (x)\,\!}that minimizes the expected loss. This is known as ageneralized Bayes rulewith respect toπ(θ){\displaystyle \pi (\theta )\,\!}. There may be more than one generalized Bayes rule, since there may be multiple choices ofδ(x){\displaystyle \delta (x)\,\!}that achieve the same expected loss. At first, this may appear rather different from the Bayes rule approach of the previous section, not a generalization. However, notice that the Bayes risk already averages overΘ{\displaystyle \Theta \,\!}in Bayesian fashion, and the Bayes risk may be recovered as the expectation overX{\displaystyle {\mathcal {X}}}of the expected loss (wherex∼θ{\displaystyle x\sim \theta \,\!}andθ∼π{\displaystyle \theta \sim \pi \,\!}). Roughly speaking,δ{\displaystyle \delta \,\!}minimizes this expectation of expected loss (i.e., is a Bayes rule) if and only if it minimizes the expected loss for eachx∈X{\displaystyle x\in {\mathcal {X}}}separately (i.e., is a generalized Bayes rule). Then why is the notion of generalized Bayes rule an improvement? It is indeed equivalent to the notion of Bayes rule when a Bayes rule exists and allx{\displaystyle x\,\!}have positive probability. However, no Bayes rule exists if the Bayes risk is infinite (for allδ{\displaystyle \delta \,\!}). In this case it is still useful to define a generalized Bayes ruleδ{\displaystyle \delta \,\!}, which at least chooses a minimum-expected-loss actionδ(x){\displaystyle \delta (x)\!\,}for thosex{\displaystyle x\,\!}for which a finite-expected-loss action does exist. In addition, a generalized Bayes rule may be desirable because it must choose a minimum-expected-loss actionδ(x){\displaystyle \delta (x)\,\!}foreveryx{\displaystyle x\,\!}, whereas a Bayes rule would be allowed to deviate from this policy on a setX⊆X{\displaystyle X\subseteq {\mathcal {X}}}of measure 0 without affecting the Bayes risk. More important, it is sometimes convenient to use an improper priorπ(θ){\displaystyle \pi (\theta )\,\!}. In this case, the Bayes risk is not even well-defined, nor is there any well-defined distribution overx{\displaystyle x\,\!}. 
However, the posteriorπ(θ∣x){\displaystyle \pi (\theta \mid x)\,\!}—and hence the expected loss—may be well-defined for eachx{\displaystyle x\,\!}, so that it is still possible to define a generalized Bayes rule. According to the complete class theorems, under mild conditions every admissible rule is a (generalized) Bayes rule (with respect to some priorπ(θ){\displaystyle \pi (\theta )\,\!}—possibly an improper one—that favors distributionsθ{\displaystyle \theta \,\!}where that rule achieves low risk). Thus, infrequentistdecision theoryit is sufficient to consider only (generalized) Bayes rules. Conversely, while Bayes rules with respect to proper priors are virtually always admissible, generalized Bayes rules corresponding toimproper priorsneed not yield admissible procedures.Stein's exampleis one such famous situation. TheJames–Stein estimatoris a nonlinear estimator of the mean of Gaussian random vectors and can be shown to dominate theordinary least squarestechnique with respect to a mean-squared-error loss function.[2]Thus least squares estimation is not an admissible estimation procedure in this context. Some others of the standard estimates associated with thenormal distributionare also inadmissible: for example, thesample estimate of the variancewhen the population mean and variance are unknown.[3]
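A minimal Monte Carlo sketch of the dominance result just mentioned: for X ~ N_k(θ, I) with k ≥ 3, the James–Stein estimator has smaller total squared-error risk than the observation itself. The dimension, true mean, and replication count below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo comparison of total squared-error loss for the ordinary
# estimator (the observation itself) and the James-Stein estimator,
# for X ~ N_k(theta, I) with k = 10.  Illustrative sketch only.
rng = np.random.default_rng(1)
k, n_rep = 10, 20_000
theta = np.full(k, 0.5)                      # an arbitrary true mean vector

x = rng.normal(loc=theta, scale=1.0, size=(n_rep, k))
norm_sq = np.sum(x**2, axis=1, keepdims=True)
js = (1.0 - (k - 2) / norm_sq) * x           # James-Stein shrinkage toward 0

mse_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"risk of X itself   : {mse_mle:.3f}   (theory: k = {k})")
print(f"risk of James-Stein: {mse_js:.3f}   (strictly smaller for k >= 3)")
```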
https://en.wikipedia.org/wiki/Admissible_decision_rule
Instatistics,shrinkageis the reduction in the effects of sampling variation. Inregression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting.[1]In particular the value of thecoefficient of determination'shrinks'. This idea is complementary tooverfittingand, separately, to the standard adjustment made in the coefficient of determination to compensate for the subjective effects of further sampling, like controlling for the potential of new explanatory terms improving the model by chance: that is, the adjustment formula itself provides "shrinkage." But the adjustment formula yields an artificial shrinkage. Ashrinkage estimatoris anestimatorthat, either explicitly or implicitly, incorporates the effects of shrinkage. In loose terms this means that a naive or raw estimate is improved by combining it with other information. The term relates to the notion that the improved estimate is made closer to the value supplied by the 'other information' than the raw estimate. In this sense, shrinkage is used toregularizeill-posedinferenceproblems. Shrinkage is implicit inBayesian inferenceand penalized likelihood inference, and explicit inJames–Stein-type inference. In contrast, simple types ofmaximum-likelihoodandleast-squares estimationprocedures do not include shrinkage effects, although they can be used within shrinkage estimation schemes. Many standard estimators can beimproved, in terms ofmean squared error(MSE), by shrinking them towards zero (or any other finite constant value). In other words, the improvement in the estimate from the corresponding reduction in the width of the confidence interval can outweigh the worsening of the estimate introduced by biasing the estimate towards zero (seebias-variance tradeoff). Assume that the expected value of the raw estimate is not zero and consider other estimators obtained by multiplying the raw estimate by a certain parameter. A value for this parameter can be specified so as to minimize the MSE of the new estimate. For this value of the parameter, the new estimate will have a smaller MSE than the raw one, and thus it has been improved. An effect here may be to convert anunbiasedraw estimate to an improved biased one. An example arises in the estimation of the populationvariancebysample variance. For a sample size ofn, the use of a divisorn−1 in the usual formula (Bessel's correction) gives an unbiased estimator, while other divisors have lower MSE, at the expense of bias. The optimal choice of divisor (weighting of shrinkage) depends on theexcess kurtosisof the population, as discussed atmean squared error: variance, but one can always do better (in terms of MSE) than the unbiased estimator; for the normal distribution a divisor ofn+1 gives one which has the minimum mean squared error. Types ofregressionthat involve shrinkage estimates includeridge regression, where coefficients derived from a regular least squares regression are brought closer to zero by multiplying by a constant (theshrinkage factor), andlasso regression, where coefficients are brought closer to zero by adding or subtracting a constant. The use of shrinkage estimators in the context of regression analysis, where there may be a large number of explanatory variables, has been described by Copas.[2]Here the values of the estimated regression coefficients are shrunk towards zero with the effect of reducing the mean square error of predicted values from the model when applied to new data. 
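A minimal numerical sketch of coefficient shrinkage of the kind described above, comparing ordinary least squares with ridge regression computed from the penalized normal equations; the simulated data and the penalty value are arbitrary, and this is only an illustration of shrinkage toward zero, not the procedure analysed by Copas.

```python
import numpy as np

# Ordinary least squares versus ridge regression on simulated data: the ridge
# coefficients are pulled toward zero.  Data, true coefficients, and the
# penalty lam are arbitrary illustrative choices.
rng = np.random.default_rng(2)
n, p = 50, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=2.0, size=n)

lam = 10.0                                              # ridge penalty
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS   coefficients:", np.round(beta_ols, 2))
print("ridge coefficients:", np.round(beta_ridge, 2))
print("ridge norm / OLS norm:",
      round(np.linalg.norm(beta_ridge) / np.linalg.norm(beta_ols), 3))
```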
A later paper by Copas[3]applies shrinkage in a context where the problem is to predict a binary response on the basis of binary explanatory variables. Hausser and Strimmer "develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. Despite its simplicity, it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. ... [The] method is fully analytic and hence computationally inexpensive. Moreover, [the] procedure simultaneously provides estimates of the entropy and of the cell frequencies. The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). A corresponding R package 'entropy' was deposited in the R archive CRAN under the GNU General Public License."[4][5]
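The divisor discussion above can be checked directly by simulation. The following sketch (sample size, true variance, and replication count are arbitrary choices) estimates the bias and mean squared error of the variance estimator for divisors n − 1, n, and n + 1 on normal data.

```python
import numpy as np

# Bias and MSE of the variance estimator under different divisors, for
# normally distributed data.  The divisor n-1 is unbiased, but n and n+1
# give lower mean squared error (n+1 is optimal for the normal case).
rng = np.random.default_rng(3)
n, n_rep, true_var = 10, 200_000, 4.0
x = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(n_rep, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)

for divisor in (n - 1, n, n + 1):
    est = ss / divisor
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    print(f"divisor {divisor:>2}: bias = {bias:+.3f}, MSE = {mse:.3f}")
```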
https://en.wikipedia.org/wiki/Shrinkage_estimator
Regular estimatorsare a class ofstatistical estimatorsthat satisfy certain regularity conditions which make them amenable toasymptoticanalysis. The convergence of aregular estimator'sdistribution is, in a sense, locally uniform. This is often considered desirable and leads to the convenient property that a small change in the parameter does not dramatically change the distribution of the estimator.[1] An estimatorθ^n{\displaystyle {\hat {\theta }}_{n}}ofψ(θ){\displaystyle \psi (\theta )}based on a sample of sizen{\displaystyle n}is said to be regular if for everyh{\displaystyle h}:[1] n(θ^n−ψ(θ+h/n))→θ+h/nLθ{\displaystyle {\sqrt {n}}\left({\hat {\theta }}_{n}-\psi (\theta +h/{\sqrt {n}})\right){\stackrel {\theta +h/{\sqrt {n}}}{\rightarrow }}L_{\theta }} where the convergence is in distribution under the law ofθ+h/n{\displaystyle \theta +h/{\sqrt {n}}}.Lθ{\displaystyle L_{\theta }}is someasymptotic distribution(usually this is anormal distributionwith mean zero and variance which may depend onθ{\displaystyle \theta }). Both theHodges' estimator[1]and theJames-Stein estimator[2]are non-regular estimators when the population parameterθ{\displaystyle \theta }is exactly 0.
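A rough simulation sketch of the non-regularity of Hodges' estimator near θ = 0, using the usual construction: report the sample mean unless its absolute value is below n^(−1/4), in which case report 0. The sample size and the values of h are arbitrary. For a regular estimator the rescaled error distribution would not depend on h, whereas here its centre drifts with h.

```python
import numpy as np

# Hodges' estimator of a normal mean and its rescaled error under the local
# parameter theta_n = h / sqrt(n).  The sample mean is shown for comparison.
rng = np.random.default_rng(4)
n, n_rep = 10_000, 50_000

def hodges(xbar, n):
    # Threshold the sample mean at n**(-1/4): below it, report exactly 0.
    return np.where(np.abs(xbar) < n ** -0.25, 0.0, xbar)

for h in (0.0, 2.0, 5.0):
    theta_n = h / np.sqrt(n)                        # local parameter theta + h/sqrt(n)
    xbar = rng.normal(theta_n, 1.0 / np.sqrt(n), size=n_rep)
    z_hodges = np.sqrt(n) * (hodges(xbar, n) - theta_n)
    z_mean = np.sqrt(n) * (xbar - theta_n)          # the sample mean is regular
    print(f"h = {h:>3}: Hodges mean/sd = {z_hodges.mean():+.2f}/{z_hodges.std():.2f}, "
          f"sample-mean mean/sd = {z_mean.mean():+.2f}/{z_mean.std():.2f}")
```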
https://en.wikipedia.org/wiki/Regular_estimator
Inmathematical statistics, theKullback–Leibler(KL)divergence(also calledrelative entropyandI-divergence[1]), denotedDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}, is a type ofstatistical distance: a measure of how much a modelprobability distributionQis different from a true probability distributionP.[2][3]Mathematically, it is defined as DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.} A simpleinterpretationof the KL divergence ofPfromQis theexpectedexcesssurprisefrom usingQas a model instead ofPwhen the actual distribution isP. While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually ametric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast tovariation of information), and does not satisfy thetriangle inequality. Instead, in terms ofinformation geometry, it is a type ofdivergence,[4]a generalization ofsquared distance, and for certain classes of distributions (notably anexponential family), it satisfies a generalizedPythagorean theorem(which applies to squared distances).[5] Relative entropy is always a non-negativereal number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative(Shannon) entropyin information systems, randomness in continuoustime-series, and information gain when comparing statistical models ofinference; and practical, such as applied statistics,fluid mechanics,neuroscience,bioinformatics, andmachine learning. Consider two probability distributionsPandQ. Usually,Prepresents the data, the observations, or a measured probability distribution. DistributionQrepresents instead a theory, a model, a description or an approximation ofP. The Kullback–Leibler divergenceDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is then interpreted as the average difference of the number of bits required for encoding samples ofPusing a code optimized forQrather than one optimized forP. Note that the roles ofPandQcan be reversed in some situations where that is easier to compute, such as with theexpectation–maximization algorithm (EM)andevidence lower bound (ELBO)computations. The relative entropy was introduced bySolomon KullbackandRichard LeiblerinKullback & Leibler (1951)as "the mean information for discrimination betweenH1{\displaystyle H_{1}}andH2{\displaystyle H_{2}}per observation fromμ1{\displaystyle \mu _{1}}",[6]where one is comparing two probability measuresμ1,μ2{\displaystyle \mu _{1},\mu _{2}}, andH1,H2{\displaystyle H_{1},H_{2}}are the hypotheses that one is selecting from measureμ1,μ2{\displaystyle \mu _{1},\mu _{2}}(respectively). 
They denoted this byI(1:2){\displaystyle I(1:2)}, and defined the "'divergence' betweenμ1{\displaystyle \mu _{1}}andμ2{\displaystyle \mu _{2}}" as the symmetrized quantityJ(1,2)=I(1:2)+I(2:1){\displaystyle J(1,2)=I(1:2)+I(2:1)}, which had already been defined and used byHarold Jeffreysin 1948.[7]InKullback (1959), the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as a "directed divergences" between two distributions;[8]Kullback preferred the termdiscrimination information.[9]The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality.[10]Numerous references to earlier uses of the symmetrized divergence and to otherstatistical distancesare given inKullback (1959, pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as theJeffreys divergence. Fordiscrete probability distributionsPandQdefined on the samesample space,X{\displaystyle {\mathcal {X}}},the relative entropy fromQtoPis defined[11]to be DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x),{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,} which is equivalent to DKL(P∥Q)=−∑x∈XP(x)log⁡Q(x)P(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.} In other words, it is theexpectationof the logarithmic difference between the probabilitiesPandQ, where the expectation is taken using the probabilitiesP. Relative entropy is only defined in this way if, for allx,Q(x)=0{\displaystyle Q(x)=0}impliesP(x)=0{\displaystyle P(x)=0}(absolute continuity). Otherwise, it is often defined as+∞{\displaystyle +\infty },[1]but the value+∞{\displaystyle \ +\infty \ }is possible even ifQ(x)≠0{\displaystyle Q(x)\neq 0}everywhere,[12][13]provided thatX{\displaystyle {\mathcal {X}}}is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. WheneverP(x){\displaystyle P(x)}is zero the contribution of the corresponding term is interpreted as zero because limx→0+xlog⁡(x)=0.{\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.} For distributionsPandQof acontinuous random variable, relative entropy is defined to be the integral[14] DKL(P∥Q)=∫−∞∞p(x)log⁡p(x)q(x)dx,{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,} wherepandqdenote theprobability densitiesofPandQ. More generally, ifPandQare probabilitymeasureson ameasurable spaceX,{\displaystyle {\mathcal {X}}\,,}andPisabsolutely continuouswith respect toQ, then the relative entropy fromQtoPis defined as DKL(P∥Q)=∫x∈Xlog⁡P(dx)Q(dx)P(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,} whereP(dx)Q(dx){\displaystyle {\frac {P(dx)}{Q(dx)}}}is theRadon–Nikodym derivativeofPwith respect toQ, i.e. the uniqueQalmost everywhere defined functionronX{\displaystyle {\mathcal {X}}}such thatP(dx)=r(x)Q(dx){\displaystyle P(dx)=r(x)Q(dx)}which exists becausePis absolutely continuous with respect toQ. Also we assume the expression on the right-hand side exists. Equivalently (by thechain rule), this can be written as DKL(P∥Q)=∫x∈XP(dx)Q(dx)log⁡P(dx)Q(dx)Q(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,} which is theentropyofPrelative toQ. 
Continuing in this case, ifμ{\displaystyle \mu }is any measure onX{\displaystyle {\mathcal {X}}}for which densitiespandqwithP(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)}andQ(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}exist (meaning thatPandQare both absolutely continuous with respect toμ{\displaystyle \mu }),then the relative entropy fromQtoPis given as DKL(P∥Q)=∫x∈Xp(x)log⁡p(x)q(x)μ(dx).{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.} Note that such a measureμ{\displaystyle \mu }for which densities can be defined always exists, since one can takeμ=12(P+Q){\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)}although in practice it will usually be one that applies in the context likecounting measurefor discrete distributions, orLebesgue measureor a convenient variant thereof likeGaussian measureor the uniform measure on thesphere,Haar measureon aLie groupetc. for continuous distributions. The logarithms in these formulae are usually taken tobase2 if information is measured in units ofbits, or to baseeif information is measured innats. Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring toDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}in words. Often it is referred to as the divergencebetweenPandQ, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence ofPfromQor as the divergencefromQtoP. This reflects theasymmetryinBayesian inference, which startsfromapriorQand updatestotheposteriorP. Another common way to refer toDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is as the relative entropy ofPwith respect toQor theinformation gainfromPoverQ. Kullback[3]gives the following example (Table 2.1, Example 2.1). LetPandQbe the distributions shown in the table and figure.Pis the distribution on the left side of the figure, abinomial distributionwithN=2{\displaystyle N=2}andp=0.4{\displaystyle p=0.4}.Qis the distribution on the right side of the figure, adiscrete uniform distributionwith the three possible outcomesx=0,1,2(i.e.X={0,1,2}{\displaystyle {\mathcal {X}}=\{0,1,2\}}), each with probabilityp=1/3{\displaystyle p=1/3}. Relative entropiesDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}andDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}are calculated as follows. 
This example uses thenatural logwith basee, designatedlnto get results innats(seeunits of information): DKL(P∥Q)=∑x∈XP(x)ln⁡P(x)Q(x)=925ln⁡9/251/3+1225ln⁡12/251/3+425ln⁡4/251/3=125(32ln⁡2+55ln⁡3−50ln⁡5)≈0.0852996,{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}} DKL(Q∥P)=∑x∈XQ(x)ln⁡Q(x)P(x)=13ln⁡1/39/25+13ln⁡1/312/25+13ln⁡1/34/25=13(−4ln⁡2−6ln⁡3+6ln⁡5)≈0.097455.{\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}} In the field of statistics, theNeyman–Pearson lemmastates that the most powerful way to distinguish between the two distributionsPandQbased on an observationY(drawn from one of them) is through the log of the ratio of their likelihoods:log⁡P(Y)−log⁡Q(Y){\displaystyle \log P(Y)-\log Q(Y)}. The KL divergence is the expected value of this statistic ifYis actually drawn fromP. Kullback motivated the statistic as an expected log likelihood ratio.[15] In the context ofcoding theory,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}can be constructed by measuring the expected number of extrabitsrequired tocodesamples fromPusing a code optimized forQrather than the code optimized forP. In the context ofmachine learning,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is often called theinformation gainachieved ifPwould be used instead ofQwhich is currently used. By analogy with information theory, it is called therelative entropyofPwith respect toQ. Expressed in the language ofBayesian inference,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is a measure of the information gained by revising one's beliefs from theprior probability distributionQto theposterior probability distributionP. In other words, it is the amount of information lost whenQis used to approximateP.[16] In applications,Ptypically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, whileQtypically represents a theory, model, description, orapproximationofP. In order to find a distributionQthat is closest toP, we can minimize the KL divergence and compute aninformation projection. While it is astatistical distance, it is not ametric, the most familiar type of distance, but instead it is adivergence.[4]While metrics are symmetric and generalizelineardistance, satisfying thetriangle inequality, divergences are asymmetric and generalizesquareddistance, in some cases satisfying a generalizedPythagorean theorem. In generalDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}does not equalDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and the asymmetry is an important part of the geometry.[4]Theinfinitesimalform of relative entropy, specifically itsHessian, gives ametric tensorthat equals theFisher information metric; see§ Fisher information metric. 
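The worked example above (the binomial P and the uniform Q) can be verified numerically; the short sketch below recomputes both divergences in nats with NumPy.

```python
import numpy as np

# Numerical check of the worked example: P is Binomial(2, 0.4) on {0, 1, 2}
# and Q is uniform on the same three outcomes.  Natural logs give nats.
P = np.array([9 / 25, 12 / 25, 4 / 25])
Q = np.array([1 / 3, 1 / 3, 1 / 3])

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

print(f"D_KL(P || Q) = {kl(P, Q):.7f} nats")   # approx 0.0852996
print(f"D_KL(Q || P) = {kl(Q, P):.7f} nats")   # approx 0.0974550
print("asymmetric:", abs(kl(P, Q) - kl(Q, P)) > 1e-6)
```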
Fisher information metric on the certain probability distribution let determine the natural gradient for information-geometric optimization algorithms.[17]Its quantum version is Fubini-study metric.[18]Relative entropy satisfies a generalized Pythagorean theorem forexponential families(geometrically interpreted asdually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example byinformation projectionand inmaximum likelihood estimation.[5] The relative entropy is theBregman divergencegenerated by the negative entropy, but it is also of the form of anf-divergence. For probabilities over a finitealphabet, it is unique in being a member of both of these classes ofstatistical divergences. The application of Bregman divergence can be found in mirror descent.[19] Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a “horse race” in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds.[20]This is a special case of a much more general connection between financial returns and divergence measures.[21] Financial risks are connected toDKL{\displaystyle D_{\text{KL}}}via information geometry.[22]Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example.[23] In information theory, theKraft–McMillan theoremestablishes that any directly decodable coding scheme for coding a message to identify one valuexi{\displaystyle x_{i}}out of a set of possibilitiesXcan be seen as representing an implicit probability distributionq(xi)=2−ℓi{\displaystyle q(x_{i})=2^{-\ell _{i}}}overX, whereℓi{\displaystyle \ell _{i}}is the length of the code forxi{\displaystyle x_{i}}in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distributionQis used, compared to using a code based on the true distributionP: it is theexcessentropy. DKL(P∥Q)=∑x∈Xp(x)log⁡1q(x)−∑x∈Xp(x)log⁡1p(x)=H(P,Q)−H(P){\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}} whereH(P,Q){\displaystyle \mathrm {H} (P,Q)}is thecross entropyofQrelative toPandH(P){\displaystyle \mathrm {H} (P)}is theentropyofP(which is the same as the cross-entropy of P with itself). The relative entropyDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}can be thought of geometrically as astatistical distance, a measure of how far the distributionQis from the distributionP. Geometrically it is adivergence: an asymmetric, generalized form of squared distance. 
The cross-entropyH(P,Q){\displaystyle H(P,Q)}is itself such a measurement (formally aloss function), but it cannot be thought of as a distance, sinceH(P,P)=:H(P){\displaystyle H(P,P)=:H(P)}is not zero. This can be fixed by subtractingH(P){\displaystyle H(P)}to makeDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}agree more closely with our notion of distance, as theexcessloss. The resulting function is asymmetric, and while this can be symmetrized (see§ Symmetrised divergence), the asymmetric form is more useful. See§ Interpretationsfor more on the geometric interpretation. Relative entropy relates to "rate function" in the theory oflarge deviations.[24][25] Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly usedcharacterization of entropy.[26]Consequently,mutual informationis the only measure of mutual dependence that obeys certain related conditions, since it can be definedin terms of Kullback–Leibler divergence. In particular, ifP(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)}andQ(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}, thenp(x)=q(x){\displaystyle p(x)=q(x)}μ{\displaystyle \mu }-almost everywhere. The entropyH(P){\displaystyle \mathrm {H} (P)}thus sets a minimum value for the cross-entropyH(P,Q){\displaystyle \mathrm {H} (P,Q)}, theexpectednumber ofbitsrequired when using a code based onQrather thanP; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a valuexdrawn fromX, if a code is used corresponding to the probability distributionQ, rather than the "true" distributionP. Denotef(α):=DKL((1−α)Q+αP∥Q){\displaystyle f(\alpha ):=D_{\text{KL}}((1-\alpha )Q+\alpha P\parallel Q)}and note thatDKL(P∥Q)=f(1){\displaystyle D_{\text{KL}}(P\parallel Q)=f(1)}. 
The first derivative off{\displaystyle f}may be derived and evaluated as followsf′(α)=∑x∈X(P(x)−Q(x))(log⁡((1−α)Q(x)+αP(x)Q(x))+1)=∑x∈X(P(x)−Q(x))log⁡((1−α)Q(x)+αP(x)Q(x))f′(0)=0{\displaystyle {\begin{aligned}f'(\alpha )&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\left(\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)+1\right)\\&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)\\f'(0)&=0\end{aligned}}}Further derivatives may be derived and evaluated as followsf″(α)=∑x∈X(P(x)−Q(x))2(1−α)Q(x)+αP(x)f″(0)=∑x∈X(P(x)−Q(x))2Q(x)f(n)(α)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))n((1−α)Q(x)+αP(x))n−1f(n)(0)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))nQ(x)n−1{\displaystyle {\begin{aligned}f''(\alpha )&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{(1-\alpha )Q(x)+\alpha P(x)}}\\f''(0)&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{Q(x)}}\\f^{(n)}(\alpha )&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{\left((1-\alpha )Q(x)+\alpha P(x)\right)^{n-1}}}\\f^{(n)}(0)&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}Hence solving forDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}via the Taylor expansion off{\displaystyle f}about0{\displaystyle 0}evaluated atα=1{\displaystyle \alpha =1}yieldsDKL(P∥Q)=∑n=0∞f(n)(0)n!=∑n=2∞1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\\&=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument∑n=2∞|1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1|=∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)||1−P(x)Q(x)|n−1≤∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)|≤∑n=2∞1n(n−1)=1{\displaystyle {\begin{aligned}\sum _{n=2}^{\infty }\left\vert {\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\right\vert &=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \left\vert 1-{\frac {P(x)}{Q(x)}}\right\vert ^{n-1}\\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\\&=1\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume thatP>2Q{\displaystyle P>2Q}with measure strictly greater than0{\displaystyle 0}. It then follows that there must exist some valuesε>0{\displaystyle \varepsilon >0},ρ>0{\displaystyle \rho >0}, andU<∞{\displaystyle U<\infty }such thatP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }andQ≤U{\displaystyle Q\leq U}with measureρ{\displaystyle \rho }. The previous proof of sufficiency demonstrated that the measure1−ρ{\displaystyle 1-\rho }component of the series whereP≤2Q{\displaystyle P\leq 2Q}is bounded, so we need only concern ourselves with the behavior of the measureρ{\displaystyle \rho }component of the series whereP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }. The absolute value of then{\displaystyle n}th term of this component of the series is then lower bounded by1n(n−1)ρ(1+εU)n{\displaystyle {\frac {1}{n(n-1)}}\rho \left(1+{\frac {\varepsilon }{U}}\right)^{n}}, which is unbounded asn→∞{\displaystyle n\to \infty }, so the series diverges. The following result, due to Donsker and Varadhan,[29]is known asDonsker and Varadhan's variational formula. 
Theorem [Duality Formula for Variational Inference]—LetΘ{\displaystyle \Theta }be a set endowed with an appropriateσ{\displaystyle \sigma }-fieldF{\displaystyle {\mathcal {F}}}, and two probability measuresPandQ, which formulate twoprobability spaces(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}and(Θ,F,Q){\displaystyle (\Theta ,{\mathcal {F}},Q)}, withQ≪P{\displaystyle Q\ll P}. (Q≪P{\displaystyle Q\ll P}indicates thatQis absolutely continuous with respect toP.) Lethbe a real-valued integrablerandom variableon(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}. Then the following equality holds log⁡EP[exp⁡h]=supQ≪P⁡{EQ[h]−DKL(Q∥P)}.{\displaystyle \log E_{P}[\exp h]=\operatorname {sup} _{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds Q(dθ)P(dθ)=exp⁡h(θ)EP[exp⁡h],{\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measureP, whereQ(dθ)P(dθ){\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}}denotes the Radon-Nikodym derivative ofQwith respect toP. For a short proof assuming integrability ofexp⁡(h){\displaystyle \exp(h)}with respect toP, letQ∗{\displaystyle Q^{*}}haveP-densityexp⁡h(θ)EP[exp⁡h]{\displaystyle {\frac {\exp h(\theta )}{E_{P}[\exp h]}}}, i.e.Q∗(dθ)=exp⁡h(θ)EP[exp⁡h]P(dθ){\displaystyle Q^{*}(d\theta )={\frac {\exp h(\theta )}{E_{P}[\exp h]}}P(d\theta )}Then DKL(Q∥Q∗)−DKL(Q∥P)=−EQ[h]+log⁡EP[exp⁡h].{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})-D_{\text{KL}}(Q\parallel P)=-E_{Q}[h]+\log E_{P}[\exp h].} Therefore, EQ[h]−DKL(Q∥P)=log⁡EP[exp⁡h]−DKL(Q∥Q∗)≤log⁡EP[exp⁡h],{\displaystyle E_{Q}[h]-D_{\text{KL}}(Q\parallel P)=\log E_{P}[\exp h]-D_{\text{KL}}(Q\parallel Q^{*})\leq \log E_{P}[\exp h],} where the last inequality follows fromDKL(Q∥Q∗)≥0{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})\geq 0}, for which equality occurs if and only ifQ=Q∗{\displaystyle Q=Q^{*}}. The conclusion follows. Suppose that we have twomultivariate normal distributions, with meansμ0,μ1{\displaystyle \mu _{0},\mu _{1}}and with (non-singular)covariance matricesΣ0,Σ1.{\displaystyle \Sigma _{0},\Sigma _{1}.}If the two distributions have the same dimension,k, then the relative entropy between the distributions is as follows:[30] DKL(N0∥N1)=12[tr⁡(Σ1−1Σ0)−k+(μ1−μ0)TΣ1−1(μ1−μ0)+ln⁡detΣ1detΣ0].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].} Thelogarithmin the last term must be taken to baseesince all terms apart from the last are base-elogarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured innats. Dividing the entire expression above byln⁡(2){\displaystyle \ln(2)}yields the divergence inbits. In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositionsL0,L1{\displaystyle L_{0},L_{1}}such thatΣ0=L0L0T{\displaystyle \Sigma _{0}=L_{0}L_{0}^{T}}andΣ1=L1L1T{\displaystyle \Sigma _{1}=L_{1}L_{1}^{T}}. 
Then withMandysolutions to the triangular linear systemsL1M=L0{\displaystyle L_{1}M=L_{0}}, andL1y=μ1−μ0{\displaystyle L_{1}y=\mu _{1}-\mu _{0}}, DKL(N0∥N1)=12(∑i,j=1k(Mij)2−k+|y|2+2∑i=1kln⁡(L1)ii(L0)ii).{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).} A special case, and a common quantity invariational inference, is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): DKL(N((μ1,…,μk)T,diag⁡(σ12,…,σk2))∥N(0,I))=12∑i=1k[σi2+μi2−1−ln⁡(σi2)].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].} For two univariate normal distributionspandqthe above simplifies to[31]DKL(p∥q)=log⁡σ1σ0+σ02+(μ0−μ1)22σ12−12{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}} In the case of co-centered normal distributions withk=σ1/σ0{\displaystyle k=\sigma _{1}/\sigma _{0}}, this simplifies[32]to: DKL(p∥q)=log2⁡k+(k−2−1)/2/ln⁡(2)bits{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log _{2}k+(k^{-2}-1)/2/\ln(2)\mathrm {bits} } Consider two uniform distributions, with the support ofp=[A,B]{\displaystyle p=[A,B]}enclosed withinq=[C,D]{\displaystyle q=[C,D]}(C≤A<B≤D{\displaystyle C\leq A<B\leq D}). Then the information gain is: DKL(p∥q)=log⁡D−CB−A{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {D-C}{B-A}}} Intuitively,[32]the information gain to aktimes narrower uniform distribution containslog2⁡k{\displaystyle \log _{2}k}bits. This connects with the use of bits in computing, wherelog2⁡k{\displaystyle \log _{2}k}bits would be needed to identify one element of aklong stream. Theexponential familyof distribution is given by pX(x|θ)=h(x)exp⁡(θTT(x)−A(θ)){\displaystyle p_{X}(x|\theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right)} whereh(x){\displaystyle h(x)}is reference measure,T(x){\displaystyle T(x)}is sufficient statistics,θ{\displaystyle \theta }is canonical natural parameters, andA(θ){\displaystyle A(\theta )}is the log-partition function. The KL divergence between two distributionsp(x|θ1){\displaystyle p(x|\theta _{1})}andp(x|θ2){\displaystyle p(x|\theta _{2})}is given by[33] DKL(θ1∥θ2)=(θ1−θ2)Tμ1−A(θ1)+A(θ2){\displaystyle D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2})} whereμ1=Eθ1[T(X)]=∇A(θ1){\displaystyle \mu _{1}=E_{\theta _{1}}[T(X)]=\nabla A(\theta _{1})}is the mean parameter ofp(x|θ1){\displaystyle p(x|\theta _{1})}. For example, for the Poisson distribution with meanλ{\displaystyle \lambda }, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=log⁡λ{\displaystyle \theta =\log \lambda }, and log partition functionA(θ)=eθ{\displaystyle A(\theta )=e^{\theta }}. 
As such, the divergence between two Poisson distributions with meansλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}is DKL(λ1∥λ2)=λ1log⁡λ1λ2−λ1+λ2.{\displaystyle D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.} As another example, for a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=μ{\displaystyle \theta =\mu }, and log partition functionA(θ)=μ2/2{\displaystyle A(\theta )=\mu ^{2}/2}. Thus, the divergence between two normal distributionsN(μ1,1){\displaystyle N(\mu _{1},1)}andN(μ2,1){\displaystyle N(\mu _{2},1)}is DKL(μ1∥μ2)=(μ1−μ2)μ1−μ122+μ222=(μ2−μ1)22.{\displaystyle D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.} As final example, the divergence between a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}and a Poisson distribution with meanλ{\displaystyle \lambda }is DKL(μ∥λ)=(μ−log⁡λ)μ−μ22+λ.{\displaystyle D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .} While relative entropy is astatistical distance, it is not ametricon the space of probability distributions, but instead it is adivergence.[4]While metrics are symmetric and generalizelineardistance, satisfying thetriangle inequality, divergences are asymmetric in general and generalizesquareddistance, in some cases satisfying a generalizedPythagorean theorem. In generalDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}does not equalDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and while this can be symmetrized (see§ Symmetrised divergence), the asymmetry is an important part of the geometry.[4] It generates atopologyon the space ofprobability distributions. More concretely, if{P1,P2,…}{\displaystyle \{P_{1},P_{2},\ldots \}}is a sequence of distributions such that limn→∞DKL(Pn∥Q)=0,{\displaystyle \lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,} then it is said that Pn→DQ.{\displaystyle P_{n}\xrightarrow {D} \,Q.} Pinsker's inequalityentails that Pn→DP⇒Pn→TVP,{\displaystyle P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,} where the latter stands for the usual convergence intotal variation. Relative entropy is directly related to theFisher information metric. This can be made explicit as follows. Assume that the probability distributionsPandQare both parameterized by some (possibly multi-dimensional) parameterθ{\displaystyle \theta }. Consider then two close by values ofP=P(θ){\displaystyle P=P(\theta )}andQ=P(θ0){\displaystyle Q=P(\theta _{0})}so that the parameterθ{\displaystyle \theta }differs by only a small amount from the parameter valueθ0{\displaystyle \theta _{0}}. Specifically, up to first order one has (using theEinstein summation convention)P(θ)=P(θ0)+ΔθjPj(θ0)+⋯{\displaystyle P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots } withΔθj=(θ−θ0)j{\displaystyle \Delta \theta _{j}=(\theta -\theta _{0})_{j}}a small change ofθ{\displaystyle \theta }in thejdirection, andPj(θ0)=∂P∂θj(θ0){\displaystyle P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})}the corresponding rate of change in the probability distribution. 
Since relative entropy has an absolute minimum 0 forP=Q{\displaystyle P=Q}, i.e.θ=θ0{\displaystyle \theta =\theta _{0}}, it changes only tosecondorder in the small parametersΔθj{\displaystyle \Delta \theta _{j}}. More formally, as for any minimum, the first derivatives of the divergence vanish ∂∂θj|θ=θ0DKL(P(θ)∥P(θ0))=0,{\displaystyle \left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,} and by theTaylor expansionone has up to second order DKL(P(θ)∥P(θ0))=12ΔθjΔθkgjk(θ0)+⋯{\displaystyle D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots } where theHessian matrixof the divergence gjk(θ0)=∂2∂θj∂θk|θ=θ0DKL(P(θ)∥P(θ0)){\displaystyle g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))} must bepositive semidefinite. Lettingθ0{\displaystyle \theta _{0}}vary (and dropping the subindex 0) the Hessiangjk(θ){\displaystyle g_{jk}(\theta )}defines a (possibly degenerate)Riemannian metricon theθparameter space, called the Fisher information metric. Whenp(x,ρ){\displaystyle p_{(x,\rho )}}satisfies the following regularity conditions: ∂log⁡(p)∂ρ,∂2log⁡(p)∂ρ2,∂3log⁡(p)∂ρ3{\displaystyle {\frac {\partial \log(p)}{\partial \rho }},{\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}}exist,|∂p∂ρ|<F(x):∫x=0∞F(x)dx<∞,|∂2p∂ρ2|<G(x):∫x=0∞G(x)dx<∞|∂3log⁡(p)∂ρ3|<H(x):∫x=0∞p(x,0)H(x)dx<ξ<∞{\displaystyle {\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty \\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty \end{aligned}}} whereξis independent ofρ∫x=0∞∂p(x,ρ)∂ρ|ρ=0dx=∫x=0∞∂2p(x,ρ)∂ρ2|ρ=0dx=0{\displaystyle \left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0} then:D(p(x,0)∥p(x,ρ))=cρ22+O(ρ3)asρ→0.{\displaystyle {\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right){\text{ as }}\rho \to 0.} Another information-theoretic metric isvariation of information, which is roughly a symmetrization ofconditional entropy. It is a metric on the set ofpartitionsof a discreteprobability space. MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. Theself-information, also known as theinformation contentof a signal, random variable, oreventis defined as the negative logarithm of theprobabilityof the given outcome occurring. 
When applied to adiscrete random variable, the self-information can be represented as[citation needed] I⁡(m)=DKL(δim∥{pi}),{\displaystyle \operatorname {\operatorname {I} } (m)=D_{\text{KL}}\left(\delta _{\text{im}}\parallel \{p_{i}\}\right),} is the relative entropy of the probability distributionP(i){\displaystyle P(i)}from aKronecker deltarepresenting certainty thati=m{\displaystyle i=m}— i.e. the number of extra bits that must be transmitted to identifyiif only the probability distributionP(i){\displaystyle P(i)}is available to the receiver, not the fact thati=m{\displaystyle i=m}. Themutual information, I⁡(X;Y)=DKL(P(X,Y)∥P(X)P(Y))=EX⁡{DKL(P(Y∣X)∥P(Y))}=EY⁡{DKL(P(X∣Y)∥P(X))}{\displaystyle {\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))\\[5pt]&=\operatorname {E} _{X}\{D_{\text{KL}}(P(Y\mid X)\parallel P(Y))\}\\[5pt]&=\operatorname {E} _{Y}\{D_{\text{KL}}(P(X\mid Y)\parallel P(X))\}\end{aligned}}} is the relative entropy of thejoint probability distributionP(X,Y){\displaystyle P(X,Y)}from the productP(X)P(Y){\displaystyle P(X)P(Y)}of the twomarginal probability distributions— i.e. the expected number of extra bits that must be transmitted to identifyXandYif they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probabilityP(X,Y){\displaystyle P(X,Y)}isknown, it is the expected number of extra bits that must on average be sent to identifyYif the value ofXis not already known to the receiver. TheShannon entropy, H(X)=E⁡[IX⁡(x)]=log⁡N−DKL(pX(x)∥PU(X)){\displaystyle {\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}{\left(p_{X}(x)\parallel P_{U}(X)\right)}\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the uniform distribution on therandom variatesofX,PU(X){\displaystyle P_{U}(X)}, from the true distributionP(X){\displaystyle P(X)}— i.e.lessthe expected number of bits saved, which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the true distributionP(X){\displaystyle P(X)}. This definition of Shannon entropy forms the basis ofE.T. 
Jaynes's alternative generalization to continuous distributions, thelimiting density of discrete points(as opposed to the usualdifferential entropy), which defines the continuous entropy aslimN→∞HN(X)=log⁡N−∫p(x)log⁡p(x)m(x)dx,{\displaystyle \lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,}which is equivalent to:log⁡(N)−DKL(p(x)||m(x)){\displaystyle \log(N)-D_{\text{KL}}(p(x)||m(x))} Theconditional entropy[34], H(X∣Y)=log⁡N−DKL(P(X,Y)∥PU(X)P(Y))=log⁡N−DKL(P(X,Y)∥P(X)P(Y))−DKL(P(X)∥PU(X))=H(X)−I⁡(X;Y)=log⁡N−EY⁡[DKL(P(X∣Y)∥PU(X))]{\displaystyle {\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\[5pt]&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\[5pt]&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\[5pt]&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the product distributionPU(X)P(Y){\displaystyle P_{U}(X)P(Y)}from the true joint distributionP(X,Y){\displaystyle P(X,Y)}— i.e.lessthe expected number of bits saved which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the conditional distributionP(X|Y){\displaystyle P(X|Y)}ofXgivenY. When we have a set of possible events, coming from the distributionp, we can encode them (with alossless data compression) usingentropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length,prefix-free code(e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distributionpin advance, we can devise an encoding that would be optimal (e.g.: usingHuffman coding). Meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled fromp), which will be equal toShannon's Entropyofp(denoted asH(p){\displaystyle \mathrm {H} (p)}). However, if we use a different probability distribution (q) when creating the entropy encoding scheme, then a larger number ofbitswill be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by thecross entropybetweenpandq. Thecross entropybetween twoprobability distributions(pandq) measures the average number ofbitsneeded to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distributionq, rather than the "true" distributionp. The cross entropy for two distributionspandqover the sameprobability spaceis thus defined as follows. H(p,q)=Ep⁡[−log⁡q]=H(p)+DKL(p∥q).{\displaystyle \mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).} For explicit derivation of this, see theMotivationsection above. Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyondH(p){\displaystyle \mathrm {H} (p)}) for encoding the events because of usingqfor constructing the encoding scheme instead ofp. InBayesian statistics, relative entropy can be used as a measure of the information gain in moving from aprior distributionto aposterior distribution:p(x)→p(x∣I){\displaystyle p(x)\to p(x\mid I)}. 
If some new factY=y{\displaystyle Y=y}is discovered, it can be used to update the posterior distribution forXfromp(x∣I){\displaystyle p(x\mid I)}to a new posterior distributionp(x∣y,I){\displaystyle p(x\mid y,I)}usingBayes' theorem: p(x∣y,I)=p(y∣x,I)p(x∣I)p(y∣I){\displaystyle p(x\mid y,I)={\frac {p(y\mid x,I)p(x\mid I)}{p(y\mid I)}}} This distribution has a newentropy: H(p(x∣y,I))=−∑xp(x∣y,I)log⁡p(x∣y,I),{\displaystyle \mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),} which may be less than or greater than the original entropyH(p(x∣I)){\displaystyle \mathrm {H} (p(x\mid I))}. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based onp(x∣I){\displaystyle p(x\mid I)}instead of a new code based onp(x∣y,I){\displaystyle p(x\mid y,I)}would have added an expected number of bits: DKL(p(x∣y,I)∥p(x∣I))=∑xp(x∣y,I)log⁡p(x∣y,I)p(x∣I){\displaystyle D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}} to the message length. This therefore represents the amount of useful information, or information gain, aboutX, that has been learned by discoveringY=y{\displaystyle Y=y}. If a further piece of data,Y2=y2{\displaystyle Y_{2}=y_{2}}, subsequently comes in, the probability distribution forxcan be updated further, to give a new best guessp(x∣y1,y2,I){\displaystyle p(x\mid y_{1},y_{2},I)}. If one reinvestigates the information gain for usingp(x∣y1,I){\displaystyle p(x\mid y_{1},I)}rather thanp(x∣I){\displaystyle p(x\mid I)}, it turns out that it may be either greater or less than previously estimated: ∑xp(x∣y1,y2,I)log⁡p(x∣y1,y2,I)p(x∣I){\displaystyle \sum _{x}p(x\mid y_{1},y_{2},I)\log {\frac {p(x\mid y_{1},y_{2},I)}{p(x\mid I)}}}may be ≤ or > than∑xp(x∣y1,I)log⁡p(x∣y1,I)p(x∣I){\textstyle \sum _{x}p(x\mid y_{1},I)\log {\frac {p(x\mid y_{1},I)}{p(x\mid I)}}} and so the combined information gain doesnotobey the triangle inequality: DKL(p(x∣y1,y2,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid I){\big )}}may be <, = or > thanDKL(p(x∣y1,y2,I)∥p(x∣y1,I))+DKL(p(x∣y1,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid y_{1},I){\big )}+D_{\text{KL}}{\big (}p(x\mid y_{1},I)\parallel p(x\mid I){\big )}} All one can say is that onaverage, averaging usingp(y2∣y1,x,I){\displaystyle p(y_{2}\mid y_{1},x,I)}, the two sides will average out. A common goal inBayesian experimental designis to maximise the expected relative entropy between the prior and the posterior.[35]When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is calledBayes d-optimal. Relative entropyDKL(p(x∣H1)∥p(x∣H0)){\textstyle D_{\text{KL}}{\bigl (}p(x\mid H_{1})\parallel p(x\mid H_{0}){\bigr )}}can also be interpreted as the expecteddiscrimination informationforH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}: the mean information per sample for discriminating in favor of a hypothesisH1{\displaystyle H_{1}}against a hypothesisH0{\displaystyle H_{0}}, when hypothesisH1{\displaystyle H_{1}}is true.[36]Another name for this quantity, given to it byI. J. Good, is the expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}to be expected from each sample. 
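A minimal numerical sketch of this updating picture, using a hypothetical discrete grid over a coin's bias and two made-up observations, computes the information gain D_KL(posterior ∥ prior) after each update and shows that the gains along the chain need not sum to the gain computed in a single step:

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) in bits for discrete distributions on a common grid."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Hypothetical discrete prior over a coin's heads-probability theta.
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / theta.size          # p(x | I): uniform prior

def update(belief, heads):
    """Bayes' theorem on the grid: posterior proportional to likelihood times prior."""
    like = theta if heads else (1.0 - theta)
    post = like * belief
    return post / post.sum()

post1 = update(prior, heads=True)                  # after observing y1 = heads
post2 = update(post1, heads=True)                  # after observing y2 = heads

gain1 = kl(post1, prior)                           # information gained from y1
gain2 = kl(post2, post1)                           # further gain from y2
total = kl(post2, prior)                           # gain from both updates at once

print(f"D_KL(p1 || prior)                 = {gain1:.4f} bits")
print(f"D_KL(p2 || p1) + D_KL(p1 || prior)= {gain1 + gain2:.4f} bits")
print(f"D_KL(p2 || prior)                 = {total:.4f} bits  (no triangle inequality)")
```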
The expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}isnotthe same as the information gain expected per sample about the probability distributionp(H){\displaystyle p(H)}of the hypotheses, DKL(p(x∣H1)∥p(x∣H0))≠IG=DKL(p(H∣x)∥p(H∣I)).{\displaystyle D_{\text{KL}}(p(x\mid H_{1})\parallel p(x\mid H_{0}))\neq IG=D_{\text{KL}}(p(H\mid x)\parallel p(H\mid I)).} Either of the two quantities can be used as autility functionin Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies. On the entropy scale ofinformation gainthere is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on thelogitscale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, theRiemann hypothesisis correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales ofloss functionfor uncertainty arebothuseful, according to how well each reflects the particular circumstances of the problem in question. The idea of relative entropy as discrimination information led Kullback to propose the Principle ofMinimum Discrimination Information(MDI): given new facts, a new distributionfshould be chosen which is as hard to discriminate from the original distributionf0{\displaystyle f_{0}}as possible; so that the new data produces as small an information gainDKL(f∥f0){\displaystyle D_{\text{KL}}(f\parallel f_{0})}as possible. For example, if one had a prior distributionp(x,a){\displaystyle p(x,a)}overxanda, and subsequently learnt the true distribution ofawasu(a){\displaystyle u(a)}, then the relative entropy between the new joint distribution forxanda,q(x∣a)u(a){\displaystyle q(x\mid a)u(a)}, and the earlier prior distribution would be: DKL(q(x∣a)u(a)∥p(x,a))=Eu(a)⁡{DKL(q(x∣a)∥p(x∣a))}+DKL(u(a)∥p(a)),{\displaystyle D_{\text{KL}}(q(x\mid a)u(a)\parallel p(x,a))=\operatorname {E} _{u(a)}\left\{D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))\right\}+D_{\text{KL}}(u(a)\parallel p(a)),} i.e. the sum of the relative entropy ofp(a){\displaystyle p(a)}the prior distribution forafrom the updated distributionu(a){\displaystyle u(a)}, plus the expected value (using the probability distributionu(a){\displaystyle u(a)}) of the relative entropy of the prior conditional distributionp(x∣a){\displaystyle p(x\mid a)}from the new conditional distributionq(x∣a){\displaystyle q(x\mid a)}. (Note that often the later expected value is called theconditional relative entropy(orconditional Kullback–Leibler divergence) and denoted byDKL(q(x∣a)∥p(x∣a)){\displaystyle D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))}[3][34]) This is minimized ifq(x∣a)=p(x∣a){\displaystyle q(x\mid a)=p(x\mid a)}over the whole support ofu(a){\displaystyle u(a)}; and we note that this result incorporates Bayes' theorem, if the new distributionu(a){\displaystyle u(a)}is in fact a δ function representing certainty thatahas one particular value. MDI can be seen as an extension ofLaplace'sPrinciple of Insufficient Reason, and thePrinciple of Maximum EntropyofE.T. Jaynes. 
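The decomposition above lends itself to a quick numerical check. The following sketch builds a small hypothetical grid with made-up distributions and confirms that the relative entropy of the updated joint q(x∣a)u(a) from the prior joint p(x, a) splits into the two terms shown:

```python
import numpy as np

def kl(p, q):
    """Directed relative entropy in nats for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)

# Hypothetical prior joint p(x, a) on a 4x3 grid (rows index x, columns index a).
p_xa = rng.random((4, 3)); p_xa /= p_xa.sum()
p_a = p_xa.sum(axis=0)                      # prior marginal p(a)
p_x_given_a = p_xa / p_a                    # prior conditional p(x | a)

# New information: the true marginal of a is u(a); pick some new conditional q(x | a).
u_a = np.array([0.2, 0.5, 0.3])
q_x_given_a = rng.random((4, 3)); q_x_given_a /= q_x_given_a.sum(axis=0)

new_joint = q_x_given_a * u_a               # q(x | a) u(a)

lhs = kl(new_joint.ravel(), p_xa.ravel())
rhs = sum(u_a[j] * kl(q_x_given_a[:, j], p_x_given_a[:, j]) for j in range(3)) \
      + kl(u_a, p_a)
print(lhs, rhs)   # the two values agree up to floating-point error
```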
In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (seedifferential entropy), but the relative entropy continues to be just as relevant. In the engineering literature, MDI is sometimes called thePrinciple of Minimum Cross-Entropy(MCE) orMinxentfor short. Minimising relative entropy frommtopwith respect tomis equivalent to minimizing the cross-entropy ofpandm, since H(p,m)=H(p)+DKL(p∥m),{\displaystyle \mathrm {H} (p,m)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel m),} which is appropriate if one is trying to choose an adequate approximation top. However, this is just as oftennotthe task one is trying to achieve. Instead, just as often it ismthat is some fixed prior reference measure, andpthat one is attempting to optimise by minimisingDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to beDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}, rather thanH(p,m){\displaystyle \mathrm {H} (p,m)}[citation needed]. Surprisals[37]add where probabilities multiply. The surprisal for an event of probabilitypis defined ass=−kln⁡p{\displaystyle s=-k\ln p}. Ifkis{1,1/ln⁡2,1.38×10−23}{\displaystyle \left\{1,1/\ln 2,1.38\times 10^{-23}\right\}}then surprisal is in{{\displaystyle \{}nats, bits, orJ/K}{\displaystyle J/K\}}so that, for instance, there areNbits of surprisal for landing all "heads" on a toss ofNcoins. Best-guess states (e.g. for atoms in a gas) are inferred by maximizing theaverage surprisalS(entropy) for a given set of control parameters (like pressurePor volumeV). This constrainedentropy maximization, both classically[38]and quantum mechanically,[39]minimizesGibbsavailability in entropy units[40]A≡−kln⁡Z{\displaystyle A\equiv -k\ln Z}whereZis a constrained multiplicity orpartition function. When temperatureTis fixed, free energy (T×A{\displaystyle T\times A}) is also minimized. Thus ifT,V{\displaystyle T,V}and number of moleculesNare constant, theHelmholtz free energyF≡U−TS{\displaystyle F\equiv U-TS}(whereUis energy andSis entropy) is minimized as a system "equilibrates." IfTandPare held constant (say during processes in your body), theGibbs free energyG=U+PV−TS{\displaystyle G=U+PV-TS}is minimized instead. The change in free energy under these conditions is a measure of availableworkthat might be done in the process. Thus available work for an ideal gas at constant temperatureTo{\displaystyle T_{o}}and pressurePo{\displaystyle P_{o}}isW=ΔG=NkToΘ(V/Vo){\displaystyle W=\Delta G=NkT_{o}\Theta (V/V_{o})}whereVo=NkTo/Po{\displaystyle V_{o}=NkT_{o}/P_{o}}andΘ(x)=x−1−ln⁡x≥0{\displaystyle \Theta (x)=x-1-\ln x\geq 0}(see alsoGibbs inequality). More generally[41]thework availablerelative to some ambient is obtained by multiplying ambient temperatureTo{\displaystyle T_{o}}by relative entropy ornet surprisalΔI≥0,{\displaystyle \Delta I\geq 0,}defined as the average value ofkln⁡(p/po){\displaystyle k\ln(p/p_{o})}wherepo{\displaystyle p_{o}}is the probability of a given state under ambient conditions. 
For instance, the work available in equilibrating a monatomic ideal gas to ambient values ofVo{\displaystyle V_{o}}andTo{\displaystyle T_{o}}is thusW=ToΔI{\displaystyle W=T_{o}\Delta I}, where relative entropy ΔI=Nk[Θ(VVo)+32Θ(TTo)].{\displaystyle \Delta I=Nk\left[\Theta {\left({\frac {V}{V_{o}}}\right)}+{\frac {3}{2}}\Theta {\left({\frac {T}{T_{o}}}\right)}\right].} The resulting contours of constant relative entropy, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here.[42]Thus relative entropy measures thermodynamic availability in bits. Fordensity matricesPandQon aHilbert space, thequantum relative entropyfromQtoPis defined to be DKL(P∥Q)=Tr⁡(P(log⁡P−log⁡Q)).{\displaystyle D_{\text{KL}}(P\parallel Q)=\operatorname {Tr} (P(\log P-\log Q)).} Inquantum information sciencethe minimum ofDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}over all separable statesQcan also be used as a measure ofentanglementin the stateP. Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describesdistance to equilibriumor (when multiplied by ambient temperature) the amount ofavailable work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words,how much the model has yet to learn. Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting astatistical modelviaAkaike information criterionare particularly well described in papers[43]and a book[44]by Burnham and Anderson. In a nutshell the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like themean squared deviation) . Estimates of such divergence for models that share the same additive term can in turn be used to select among models. When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such asmaximum likelihoodandmaximum spacingestimators.[citation needed] Kullback & Leibler (1951)also considered the symmetrized function:[6] DKL(P∥Q)+DKL(Q∥P){\displaystyle D_{\text{KL}}(P\parallel Q)+D_{\text{KL}}(Q\parallel P)} which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see§ Etymologyfor the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used byHarold Jeffreysin 1948;[7]it is accordingly called theJeffreys divergence. This quantity has sometimes been used forfeature selectioninclassificationproblems, wherePandQare the conditionalpdfsof a feature under two different classes. In the Banking and Finance industries, this quantity is referred to asPopulation Stability Index(PSI), and is used to assess distributional shifts in model features through time. 
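Because the symmetrized divergence is simply the sum of the two directed relative entropies, it, and hence the PSI computed on binned model features, is straightforward to evaluate. A minimal sketch with hypothetical bin frequencies:

```python
import numpy as np

def kl(p, q):
    """Directed relative entropy D_KL(p || q) in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jeffreys(p, q):
    """Symmetrized divergence D_KL(p||q) + D_KL(q||p) (Jeffreys divergence)."""
    return kl(p, q) + kl(q, p)

# Hypothetical binned distributions of a model feature at two points in time.
expected = np.array([0.10, 0.20, 0.40, 0.20, 0.10])   # development sample
observed = np.array([0.08, 0.15, 0.35, 0.27, 0.15])   # recent sample

# The Population Stability Index is commonly written bin by bin as follows; algebraically
# it is the same quantity as the Jeffreys divergence.
psi = np.sum((observed - expected) * np.log(observed / expected))

print(f"Jeffreys divergence = {jeffreys(expected, observed):.4f} nats")
print(f"PSI                 = {psi:.4f}  (equal up to floating point)")
```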
An alternative is given via theλ{\displaystyle \lambda }-divergence, Dλ(P∥Q)=λDKL(P∥λP+(1−λ)Q)+(1−λ)DKL(Q∥λP+(1−λ)Q),{\displaystyle D_{\lambda }(P\parallel Q)=\lambda D_{\text{KL}}(P\parallel \lambda P+(1-\lambda )Q)+(1-\lambda )D_{\text{KL}}(Q\parallel \lambda P+(1-\lambda )Q),} which can be interpreted as the expected information gain aboutXfrom discovering which probability distributionXis drawn from,PorQ, if they currently have probabilitiesλ{\displaystyle \lambda }and1−λ{\displaystyle 1-\lambda }respectively.[clarification needed][citation needed] The valueλ=0.5{\displaystyle \lambda =0.5}gives theJensen–Shannon divergence, defined by DJS=12DKL(P∥M)+12DKL(Q∥M){\displaystyle D_{\text{JS}}={\tfrac {1}{2}}D_{\text{KL}}(P\parallel M)+{\tfrac {1}{2}}D_{\text{KL}}(Q\parallel M)} whereMis the average of the two distributions, M=12(P+Q).{\displaystyle M={\tfrac {1}{2}}\left(P+Q\right).} We can also interpretDJS{\displaystyle D_{\text{JS}}}as the capacity of a noisy information channel with two inputs giving the output distributionsPandQ. The Jensen–Shannon divergence, like allf-divergences, islocallyproportional to theFisher information metric. It is similar to theHellinger metric(in the sense that it induces the same affine connection on astatistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.[45][46] There are many other important measures ofprobability distance. Some of these are particularly connected with relative entropy. For example: Other notable measures of distance include theHellinger distance,histogram intersection,Chi-squared statistic,quadratic form distance,match distance,Kolmogorov–Smirnov distance, andearth mover's distance.[49] Just asabsoluteentropy serves as theoretical background fordatacompression,relativeentropy serves as theoretical background fordatadifferencing– the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the targetgiventhe source (minimum size of apatch).
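A minimal sketch of the Jensen–Shannon divergence for two toy discrete distributions, following the mixture definition above (base-2 logarithms, so the value is bounded by one bit):

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jensen_shannon(p, q):
    """D_JS = 0.5*D_KL(P||M) + 0.5*D_KL(Q||M), with M the equal mixture of P and Q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.9, 0.1, 0.0])
q = np.array([0.1, 0.1, 0.8])
print(f"D_JS(P, Q) = {jensen_shannon(p, q):.4f} bits")   # symmetric, at most 1 bit
# Unlike D_KL, D_JS stays finite even where one distribution has zero mass.
```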
https://en.wikipedia.org/wiki/KL_divergence
Theapproximation errorin a given data value represents the significant discrepancy that arises when an exact, true value is compared against someapproximationderived for it. This inherent error in approximation can be quantified and expressed in two principal ways: as anabsolute error, which denotes the direct numerical magnitude of this discrepancy irrespective of the true value's scale, or as arelative error, which provides a scaled measure of the error by considering the absolute error in proportion to the exact data value, thus offering a context-dependent assessment of the error's significance. An approximation error can manifest due to a multitude of diverse reasons. Prominent among these are limitations related to computingmachine precision, where digital systems cannot represent all real numbers with perfect accuracy, leading to unavoidable truncation or rounding. Another common source is inherentmeasurement error, stemming from the practical limitations of instruments, environmental factors, or observational processes (for instance, if the actual length of a piece of paper is precisely 4.53 cm, but the measuring ruler only permits an estimation to the nearest 0.1 cm, this constraint could lead to a recorded measurement of 4.5 cm, thereby introducing an error). In themathematicalfield ofnumerical analysis, the crucial concept ofnumerical stabilityassociated with analgorithmserves to indicate the extent to which initial errors or perturbations present in the input data of the algorithm are likely to propagate and potentially amplify into substantial errors in the final output. Algorithms that are characterized as numerically stable are robust in the sense that they do not yield a significantly magnified error in their output even when the input is slightly malformed or contains minor inaccuracies; conversely, numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable.[1] Given some true or exact valuev, we formally state that an approximationvapproxestimates or representsvwhere the magnitude of theabsolute erroris bounded by a positive valueε(i.e.,ε>0), if the following inequality holds:[2][3] where the vertical bars, | |, unambiguously denote theabsolute valueof the difference between the true valuevand its approximationvapprox. This mathematical operation signifies the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate. Similarly, we state thatvapproxapproximates the valuevwhere the magnitude of therelative erroris bounded by a positive valueη(i.e.,η>0), providedvis not zero (v≠ 0), if the subsequent inequality is satisfied: |v−vapprox|≤η⋅|v|{\displaystyle |v-v_{\text{approx}}|\leq \eta \cdot |v|}. This definition ensures thatηacts as an upper bound on the ratio of the absolute error to the magnitude of the true value. Ifv≠ 0, then the actualrelative error, often also denoted byηin context (representing the calculated value rather than a bound), is precisely calculated as: Note that the first term in the equation above implicitly defines `ε` as `|v-v_approx|` if `η` is `ε/|v|`. Thepercent error, often denoted asδ, is a common and intuitive way of expressing the relative error, effectively scaling the relative error value to a percentage for easier interpretation and comparison across different contexts:[3] Anerror boundrigorously defines an established upper limit on either the relative or the absolute magnitude of an approximation error. 
Such a bound thereby provides a formal guarantee on the maximum possible deviation of the approximation from the true value, which is critical in applications requiring known levels of precision.[4] To illustrate these concepts with a numerical example, consider an instance where the exact, accepted value is 50, and its corresponding approximation is determined to be 49.9. In this particular scenario, the absolute error is precisely 0.1 (calculated as |50 − 49.9|), and the relative error is calculated as the absolute error 0.1 divided by the true value 50, which equals 0.002. This relative error can also be expressed as 0.2%. In a more practical setting, such as when measuring the volume of liquid in a 6 mL beaker, if the instrument reading indicates 5 mL while the true volume is actually 6 mL, the percent error for this particular measurement situation is, when rounded to one decimal place, approximately 16.7% (calculated as |(6 mL − 5 mL) / 6 mL| × 100%). The utility of relative error becomes particularly evident when it is employed to compare the quality of approximations for numbers that possess widely differing magnitudes; for example, approximating the number 1,000 with an absolute error of 3 results in a relative error of 0.003 (or 0.3%). This is, within the context of most scientific or engineering applications, considered a significantly less accurate approximation than approximating the much larger number 1,000,000 with an identical absolute error of 3. In the latter case, the relative error is a mere 0.000003 (or 0.0003%). In the first case, the relative error is 0.003, whereas in the second, more favorable scenario, it is a substantially smaller value of only 0.000003. This comparison clearly highlights how relative error provides a more meaningful and contextually appropriate assessment of precision, especially when dealing with values across different orders of magnitude. There are two crucial features or caveats associated with the interpretation and application of relative error that should always be kept in mind. Firstly, relative error becomes mathematically undefined whenever the true value (v) is zero, because this true value appears in the denominator of its calculation (as detailed in the formal definition provided above), and division by zero is an undefined operation. Secondly, the concept of relative error is most truly meaningful and consistently interpretable only when the measurements under consideration are performed on aratio scale. This type of scale is characterized by possessing a true, non-arbitrary zero point, which signifies the complete absence of the quantity being measured. If this condition of a ratio scale is not met (e.g., when using interval scales like Celsius temperature), the calculated relative error can become highly sensitive to the choice of measurement units, potentially leading to misleading interpretations. For example, when an absolute error in atemperaturemeasurement given in theCelsius scaleis 1 °C, and the true value is 2 °C, the relative error is 0.5 (or 50%, calculated as |1°C / 2°C|). However, if this exact same approximation, representing the same physical temperature difference, is made using theKelvin scale(which is a ratio scale where 0 K represents absolute zero), a 1 K absolute error (equivalent in magnitude to a 1 °C error) with the same true value of 275.15 K (which is equivalent to 2 °C) gives a markedly different relative error of approximately 0.00363, or about 3.63×10−3(calculated as |1 K / 275.15 K|). 
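These worked numbers are easy to reproduce. The helper functions in the sketch below are illustrative rather than standard library routines; they recompute the absolute, relative, and percent errors quoted above, including the Celsius versus Kelvin comparison:

```python
def absolute_error(v, v_approx):
    return abs(v - v_approx)

def relative_error(v, v_approx):
    if v == 0:
        raise ValueError("relative error is undefined for a true value of zero")
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    return 100.0 * relative_error(v, v_approx)

print(relative_error(50, 49.9))            # 0.002, i.e. 0.2 %
print(percent_error(6, 5))                 # ~16.7 % for the 6 mL beaker reading
print(relative_error(1_000, 997))          # 0.003 for an absolute error of 3
print(relative_error(1_000_000, 999_997))  # 0.000003: far better in relative terms

# The same one-degree error on two different scales: ratio scales matter.
print(relative_error(2.0, 1.0))            # Celsius: 1 degree off a true 2 C  -> 0.5
print(relative_error(275.15, 274.15))      # Kelvin:  1 K off a true 275.15 K  -> ~0.00363
```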
This disparity underscores the importance of the underlying measurement scale. When comparing the behavior and intrinsic characteristics of these two fundamental error types, it is important to recognize their differing sensitivities to common arithmetic operations. Specifically, statements and conclusions made aboutrelative errorsare notably sensitive to the addition of a non-zero constant to the underlying true and approximated values, as such an addition alters the base value against which the error is relativized, thereby changing the ratio. However, relative errors remain unaffected by the multiplication of both the true and approximated values by the same non-zero constant, because this constant would appear in both the numerator (of the absolute error) and the denominator (the true value) of the relative error calculation, and would consequently cancel out, leaving the relative error unchanged. Conversely, forabsolute errors, the opposite relationship holds true: absolute errors are directly sensitive to the multiplication of the underlying values by a constant (as this scales the magnitude of the difference itself), but they are largely insensitive to the addition of a constant to these values (since adding the same constant to both the true value and its approximation does not change the difference between them: (v+c) − (vapprox+c) =v−vapprox).[5]: 34 In the realm of computational complexity theory, we define that a real valuevispolynomially computable with absolute errorfrom a given input if, for any specified rational numberε> 0 representing the desired maximum permissible absolute error, it is algorithmically possible to compute a rational numbervapproxsuch thatvapproxapproximatesvwith an absolute error no greater thanε(formally, |v−vapprox| ≤ε). Crucially, this computation must be achievable within a time duration that is polynomial in terms of the size of the input data and the encoding size ofε(the latter typically being of the order O(log(1/ε)) bits, reflecting the number of bits needed to represent the precision). Analogously, the valuevis consideredpolynomially computable with relative errorif, for any specified rational numberη> 0 representing the desired maximum permissible relative error, it is possible to compute a rational numbervapproxthat approximatesvwith a relative error no greater thanη(formally, |(v−vapprox)/v| ≤η, assumingv≠ 0). This computation, similar to the absolute error case, must likewise be achievable in an amount of time that is polynomial in the size of the input data and the encoding size ofη(which is typically O(log(1/η)) bits). It can be demonstrated that if a valuevis polynomially computable with relative error (utilizing an algorithm that we can designate as REL), then it is consequently also polynomially computable with absolute error.Proof sketch: Letε> 0 be the target maximum absolute error that we wish to achieve. The procedure commences by invoking the REL algorithm with a chosen relative error bound of, for example,η= 1/2. This initial step aims to find a rational number approximationr1such that the inequality |v−r1| ≤ |v|/2 holds true. From this relationship, by applying the reverse triangle inequality (|v| − |r1| ≤ |v−r1|), we can deduce that |v| ≤ 2|r1| (this holds assumingr1≠ 0; ifr1= 0, then the relative error condition impliesvmust also be 0, in which case the problem of achieving any absolute errorε> 0 is trivial, asvapprox= 0 works, and we are done). 
Given that the REL algorithm operates in polynomial time, the encoding length of the computedr1will necessarily be polynomial with respect to the input size. Subsequently, the REL algorithm is invoked a second time, now with a new, typically much smaller, relative error target set toη'=ε/ (2|r1|) (this step also assumesr1is non-zero, which we can ensure or handle as a special case). This second application of REL yields another rational number approximation,r2, that satisfies the condition |v−r2| ≤η'|v|. Substituting the expression forη'gives |v−r2| ≤ (ε/ (2|r1|)) |v|. Now, using the previously derived inequality |v| ≤ 2|r1|, we can bound the term: |v−r2| ≤ (ε/ (2|r1|)) × (2|r1|) =ε. Thus, the approximationr2successfully approximatesvwith the desired absolute errorε, demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.[5]: 34 The reverse implication, namely that polynomial computability with absolute error implies polynomial computability with relative error, is generally not true without imposing additional conditions or assumptions. However, a significant special case exists: if one can assume that some positive lower boundbon the magnitude ofv(i.e., |v| >b> 0) can itself be computed in polynomial time, and ifvis also known to be polynomially computable with absolute error (perhaps via an algorithm designated as ABS), thenvalso becomes polynomially computable with relative error. This is because one can simply invoke the ABS algorithm with a carefully chosen target absolute error, specificallyεtarget=ηb, whereηis the desired relative error. The resulting approximationvapproxwould satisfy |v−vapprox| ≤ηb. To see the implication for relative error, we divide by |v| (which is non-zero): |(v−vapprox)/v| ≤ (ηb)/|v|. Since we have the condition |v| >b, it follows thatb/|v| < 1. Therefore, the relative error is bounded byη× (b/|v|) <η× 1 =η, which is the desired outcome for polynomial computability with relative error. An algorithm that, for every given rational numberη> 0, successfully computes a rational numbervapproxthat approximatesvwith a relative error no greater thanη, and critically, does so in a time complexity that is polynomial in both the size of the input and in the reciprocal of the relative error, 1/η(rather than being polynomial merely in log(1/η), which typically allows for faster computation whenηis extremely small), is known as aFully Polynomial-Time Approximation Scheme (FPTAS). The dependence on 1/ηrather than log(1/η) is a defining characteristic of FPTAS and distinguishes it from weaker approximation schemes. In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed by their manufacturers as a certain percentage of the instrument's full-scale reading capability, rather than as a percentage of the actual reading. The defined boundaries or limits of these permissible deviations from the true or specified values under operational conditions are commonly referred to as limiting errors or, alternatively, guarantee errors. This method of specifying accuracy implies that the maximum possible absolute error can be larger when measuring values towards the higher end of the instrument's scale, while the relative error with respect to the full-scale value itself remains constant across the range. 
Consequently, the relative error with respect to the actual measured value can become quite large for readings at the lower end of the instrument's scale.[6] The fundamental definitions of absolute and relative error, as presented primarily for scalar (one-dimensional) values, can be naturally and rigorously extended to more complex scenarios where the quantity of interestv{\displaystyle v}and its corresponding approximationvapprox{\displaystyle v_{\text{approx}}}aren-dimensional vectors, matrices, or, more generally, elements of anormed vector space. This important generalization is typically achieved by systematically replacing theabsolute valuefunction (which effectively measures magnitude or "size" for scalar numbers) with an appropriatevectorn-normor matrix norm. Common examples of such norms include the L1norm (sum of absolute component values), the L2norm (Euclidean norm, or square root of the sum of squared components), and the L∞norm (maximum absolute component value). These norms provide a way to quantify the "distance" or "difference" between the true vector (or matrix) and its approximation in a multi-dimensional space, thereby allowing for analogous definitions of absolute and relative error in these higher-dimensional contexts.[7]
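As a small illustration of this generalization, the following sketch uses numpy's built-in norms to measure the absolute and relative error of a vector approximation under the L1, L2, and L∞ norms; which norm is appropriate depends on the application:

```python
import numpy as np

v        = np.array([1.0, -2.0, 4.0])        # true vector
v_approx = np.array([1.1, -1.9, 3.7])        # its approximation

for name, order in [("L1", 1), ("L2", 2), ("Linf", np.inf)]:
    abs_err = np.linalg.norm(v - v_approx, ord=order)
    rel_err = abs_err / np.linalg.norm(v, ord=order)
    print(f"{name:4s}  absolute = {abs_err:.4f}   relative = {rel_err:.4f}")
```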
https://en.wikipedia.org/wiki/Percentage_error
Instatisticsandsignal processing, aminimum mean square error(MMSE) estimator is an estimation method which minimizes themean square error(MSE), which is a common measure of estimator quality, of the fitted values of adependent variable. In theBayesiansetting, the term MMSE more specifically refers to estimation with quadraticloss function. In such case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is cumbersome to calculate, the form of the MMSE estimator is usually constrained to be within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. It has given rise to many popular estimators such as theWiener–Kolmogorov filterandKalman filter. The term MMSE more specifically refers to estimation in aBayesiansetting with quadratic cost function. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when a new observation is made available; or the statistics of an actual random signal such as speech. This is in contrast to the non-Bayesian approach likeminimum-variance unbiased estimator(MVUE) where absolutely nothing is assumed to be known about the parameter in advance and which does not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly onBayes' theorem, it allows us to make better posterior estimates as more observations become available. Thus unlike non-Bayesian approach where parameters of interest are assumed to be deterministic, but unknown constants, the Bayesian estimator seeks to estimate a parameter that is itself arandom variable. Furthermore, Bayesian estimation can also deal with situations where the sequence of observations are not necessarily independent. Thus Bayesian estimation provides yet another alternative to the MVUE. This is useful when the MVUE does not exist or cannot be found. Letx{\displaystyle x}be an×1{\displaystyle n\times 1}hidden random vector variable, and lety{\displaystyle y}be am×1{\displaystyle m\times 1}known random vector variable (the measurement or observation), both of them not necessarily of the same dimension. Anestimatorx^(y){\displaystyle {\hat {x}}(y)}ofx{\displaystyle x}is any function of the measurementy{\displaystyle y}. The estimation error vector is given bye=x^−x{\displaystyle e={\hat {x}}-x}and itsmean squared error(MSE) is given by thetraceof errorcovariance matrix where theexpectationE{\displaystyle \operatorname {E} }is taken overx{\displaystyle x}conditioned ony{\displaystyle y}. Whenx{\displaystyle x}is a scalar variable, the MSE expression simplifies toE⁡{(x^−x)2}{\displaystyle \operatorname {E} \left\{({\hat {x}}-x)^{2}\right\}}. Note that MSE can equivalently be defined in other ways, since The MMSE estimator is then defined as the estimator achieving minimal MSE: In many cases, it is not possible to determine the analytical expression of the MMSE estimator. Two basic numerical approaches to obtain the MMSE estimate depends on either finding the conditional expectationE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}or finding the minima of MSE. 
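As a concrete illustration of the first approach, the conditional mean, the brute-force sketch below assumes a simple scalar Gaussian model (chosen only so that a closed form exists for comparison) and approximates E{x∣y} by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical jointly Gaussian pair: x is hidden, y = x + noise is observed.
n = 2_000_000
x = rng.normal(0.0, 1.0, n)                 # prior: x ~ N(0, 1)
y = x + rng.normal(0.0, 0.5, n)             # observation noise of variance 0.25

y_obs = 0.8                                 # the measurement actually received

# Brute-force Monte Carlo approximation of E[x | y = y_obs]:
# average x over the samples whose y falls in a narrow window around y_obs.
window = 0.01
sel = np.abs(y - y_obs) < window
mc_estimate = x[sel].mean()

# For this Gaussian model the conditional mean has the closed form
# E[x | y] = (C_XY / C_Y) * y = (1 / 1.25) * y, which the Monte Carlo value approaches.
closed_form = (1.0 / 1.25) * y_obs
print(f"Monte Carlo E[x|y]: {mc_estimate:.3f}   closed form: {closed_form:.3f}")
```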
Direct numerical evaluation of the conditional expectation is computationally expensive since it often requires multidimensional integration usually done viaMonte Carlo methods. Another computational approach is to directly seek the minima of the MSE using techniques such as thestochastic gradient descent methods; but this method still requires the evaluation of expectation. While these numerical methods have been fruitful, a closed form expression for the MMSE estimator is nevertheless possible if we are willing to make some compromises. One possibility is to abandon the full optimality requirements and seek a technique minimizing the MSE within a particular class of estimators, such as the class of linear estimators. Thus, we postulate that the conditional expectation ofx{\displaystyle x}giveny{\displaystyle y}is a simple linear function ofy{\displaystyle y},E⁡{x∣y}=Wy+b{\displaystyle \operatorname {E} \{x\mid y\}=Wy+b}, where the measurementy{\displaystyle y}is a random vector,W{\displaystyle W}is a matrix andb{\displaystyle b}is a vector. This can be seen as the first order Taylor approximation ofE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}. The linear MMSE estimator is the estimator achieving minimum MSE among all estimators of such form. That is, it solves the following optimization problem: One advantage of such linear MMSE estimator is that it is not necessary to explicitly calculate theposterior probabilitydensity function ofx{\displaystyle x}. Such linear estimator only depends on the first two moments ofx{\displaystyle x}andy{\displaystyle y}. So although it may be convenient to assume thatx{\displaystyle x}andy{\displaystyle y}are jointly Gaussian, it is not necessary to make this assumption, so long as the assumed distribution has well defined first and second moments. The form of the linear estimator does not depend on the type of the assumed underlying distribution. The expression for optimalb{\displaystyle b}andW{\displaystyle W}is given by: wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}},y¯=E⁡{y},{\displaystyle {\bar {y}}=\operatorname {E} \{y\},}theCXY{\displaystyle C_{XY}}is cross-covariance matrix betweenx{\displaystyle x}andy{\displaystyle y}, theCY{\displaystyle C_{Y}}is auto-covariance matrix ofy{\displaystyle y}. Thus, the expression for linear MMSE estimator, its mean, and its auto-covariance is given by where theCYX{\displaystyle C_{YX}}is cross-covariance matrix betweeny{\displaystyle y}andx{\displaystyle x}. Lastly, the error covariance and minimum mean square error achievable by such estimator is Let us have the optimal linear MMSE estimator given asx^=Wy+b{\displaystyle {\hat {x}}=Wy+b}, where we are required to find the expression forW{\displaystyle W}andb{\displaystyle b}. It is required that the MMSE estimator be unbiased. This means, Plugging the expression forx^{\displaystyle {\hat {x}}}in above, we get wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}}andy¯=E⁡{y}{\displaystyle {\bar {y}}=\operatorname {E} \{y\}}. Thus we can re-write the estimator as and the expression for estimation error becomes From the orthogonality principle, we can haveE⁡{(x^−x)(y−y¯)T}=0{\displaystyle \operatorname {E} \{({\hat {x}}-x)(y-{\bar {y}})^{T}\}=0}, where we takeg(y)=y−y¯{\displaystyle g(y)=y-{\bar {y}}}. 
Here the left-hand-side term is When equated to zero, we obtain the desired expression forW{\displaystyle W}as TheCXY{\displaystyle C_{XY}}is cross-covariance matrix between X and Y, andCY{\displaystyle C_{Y}}is auto-covariance matrix of Y. SinceCXY=CYXT{\displaystyle C_{XY}=C_{YX}^{T}}, the expression can also be re-written in terms ofCYX{\displaystyle C_{YX}}as Thus the full expression for the linear MMSE estimator is Since the estimatex^{\displaystyle {\hat {x}}}is itself a random variable withE⁡{x^}=x¯{\displaystyle \operatorname {E} \{{\hat {x}}\}={\bar {x}}}, we can also obtain its auto-covariance as Putting the expression forW{\displaystyle W}andWT{\displaystyle W^{T}}, we get Lastly, the covariance of linear MMSE estimation error will then be given by The first term in the third line is zero due to the orthogonality principle. SinceW=CXYCY−1{\displaystyle W=C_{XY}C_{Y}^{-1}}, we can re-writeCe{\displaystyle C_{e}}in terms of covariance matrices as This we can recognize to be the same asCe=CX−CX^.{\displaystyle C_{e}=C_{X}-C_{\hat {X}}.}Thus the minimum mean square error achievable by such a linear estimator is For the special case when bothx{\displaystyle x}andy{\displaystyle y}are scalars, the above relations simplify to whereρ=σXYσXσY{\displaystyle \rho ={\frac {\sigma _{XY}}{\sigma _{X}\sigma _{Y}}}}is thePearson's correlation coefficientbetweenx{\displaystyle x}andy{\displaystyle y}. The above two equations allows us to interpret the correlation coefficient either as normalized slope of linear regression or as square root of the ratio of two variances Whenρ=0{\displaystyle \rho =0}, we havex^=x¯{\displaystyle {\hat {x}}={\bar {x}}}andσe2=σX2{\displaystyle \sigma _{e}^{2}=\sigma _{X}^{2}}. In this case, no new information is gleaned from the measurement which can decrease the uncertainty inx{\displaystyle x}. On the other hand, whenρ=±1{\displaystyle \rho =\pm 1}, we havex^=σXYσY(y−y¯)+x¯{\displaystyle {\hat {x}}={\frac {\sigma _{XY}}{\sigma _{Y}}}(y-{\bar {y}})+{\bar {x}}}andσe2=0{\displaystyle \sigma _{e}^{2}=0}. Herex{\displaystyle x}is completely determined byy{\displaystyle y}, as given by the equation of straight line. Standard method likeGauss eliminationcan be used to solve the matrix equation forW{\displaystyle W}. A more numerically stable method is provided byQR decompositionmethod. Since the matrixCY{\displaystyle C_{Y}}is a symmetric positive definite matrix,W{\displaystyle W}can be solved twice as fast with theCholesky decomposition, while for large sparse systemsconjugate gradient methodis more effective.Levinson recursionis a fast method whenCY{\displaystyle C_{Y}}is also aToeplitz matrix. This can happen wheny{\displaystyle y}is awide sense stationaryprocess. In such stationary cases, these estimators are also referred to asWiener–Kolmogorov filters. Let us further model the underlying process of observation as a linear process:y=Ax+z{\displaystyle y=Ax+z}, whereA{\displaystyle A}is a known matrix andz{\displaystyle z}is random noise vector with the meanE⁡{z}=0{\displaystyle \operatorname {E} \{z\}=0}and cross-covarianceCXZ=0{\displaystyle C_{XZ}=0}. 
Here the required mean and the covariance matrices will be Thus the expression for the linear MMSE estimator matrixW{\displaystyle W}further modifies to Putting everything into the expression forx^{\displaystyle {\hat {x}}}, we get Lastly, the error covariance is The significant difference between the estimation problem treated above and those ofleast squaresandGauss–Markovestimate is that the number of observationsm, (i.e. the dimension ofy{\displaystyle y}) need not be at least as large as the number of unknowns,n, (i.e. the dimension ofx{\displaystyle x}). The estimate for the linear observation process exists so long as them-by-mmatrix(ACXAT+CZ)−1{\displaystyle (AC_{X}A^{T}+C_{Z})^{-1}}exists; this is the case for anymif, for instance,CZ{\displaystyle C_{Z}}is positive definite. Physically the reason for this property is that sincex{\displaystyle x}is now a random variable, it is possible to form a meaningful estimate (namely its mean) even with no measurements. Every new measurement simply provides additional information which may modify our original estimate. Another feature of this estimate is that form<n, there need be no measurement error. Thus, we may haveCZ=0{\displaystyle C_{Z}=0}, because as long asACXAT{\displaystyle AC_{X}A^{T}}is positive definite, the estimate still exists. Lastly, this technique can handle cases where the noise is correlated. An alternative form of expression can be obtained by using the matrix identity which can be established by post-multiplying by(ACXAT+CZ){\displaystyle (AC_{X}A^{T}+C_{Z})}and pre-multiplying by(ATCZ−1A+CX−1),{\displaystyle (A^{T}C_{Z}^{-1}A+C_{X}^{-1}),}to obtain and SinceW{\displaystyle W}can now be written in terms ofCe{\displaystyle C_{e}}asW=CeATCZ−1{\displaystyle W=C_{e}A^{T}C_{Z}^{-1}}, we get a simplified expression forx^{\displaystyle {\hat {x}}}as In this form the above expression can be easily compared withridge regression,weighed least squareandGauss–Markov estimate. In particular, whenCX−1=0{\displaystyle C_{X}^{-1}=0}, corresponding to infinite variance of the apriori information concerningx{\displaystyle x}, the resultW=(ATCZ−1A)−1ATCZ−1{\displaystyle W=(A^{T}C_{Z}^{-1}A)^{-1}A^{T}C_{Z}^{-1}}is identical to the weighed linear least square estimate withCZ−1{\displaystyle C_{Z}^{-1}}as the weight matrix. Moreover, if the components ofz{\displaystyle z}are uncorrelated and have equal variance such thatCZ=σ2I,{\displaystyle C_{Z}=\sigma ^{2}I,}whereI{\displaystyle I}is an identity matrix, thenW=(ATA)−1AT{\displaystyle W=(A^{T}A)^{-1}A^{T}}is identical to the ordinary least square estimate. When apriori information is available asCX−1=λI{\displaystyle C_{X}^{-1}=\lambda I}and thez{\displaystyle z}are uncorrelated and have equal variance, we haveW=(ATA+λI)−1AT{\displaystyle W=(A^{T}A+\lambda I)^{-1}A^{T}}, which is identical to ridge regression solution. In many real-time applications, observational data is not available in a single batch. Instead the observations are made in a sequence. One possible approach is to use the sequential observations to update an old estimate as additional data becomes available, leading to finer estimates. One crucial difference between batch estimation and sequential estimation is that sequential estimation requires an additional Markov assumption. In the Bayesian framework, such recursive estimation is easily facilitated using Bayes' rule. 
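Before turning to the sequential update equations, the batch solution is worth seeing in code. A minimal numpy sketch with assumed, made-up dimensions and covariances verifies that the two equivalent forms of W above agree and computes one estimate; the recursive scheme discussed next reproduces this result incrementally as measurements arrive:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 3, 5                                   # hidden dimension n, measurements m
A = rng.normal(size=(m, n))                   # assumed known observation matrix

C_X = np.diag([2.0, 1.0, 0.5])                # prior covariance of x (assumed)
C_Z = 0.1 * np.eye(m)                         # noise covariance (assumed)
x_bar = np.zeros(n)                           # zero prior mean for simplicity

# Form 1: W = C_X A^T (A C_X A^T + C_Z)^{-1}
W1 = C_X @ A.T @ np.linalg.inv(A @ C_X @ A.T + C_Z)

# Form 2 (the ridge-like form): W = (A^T C_Z^{-1} A + C_X^{-1})^{-1} A^T C_Z^{-1}
W2 = np.linalg.inv(A.T @ np.linalg.inv(C_Z) @ A + np.linalg.inv(C_X)) \
     @ A.T @ np.linalg.inv(C_Z)

print("forms agree:", np.allclose(W1, W2))

# Error covariance C_e = (A^T C_Z^{-1} A + C_X^{-1})^{-1}, and the estimate for one y.
C_e = np.linalg.inv(A.T @ np.linalg.inv(C_Z) @ A + np.linalg.inv(C_X))
x_true = rng.multivariate_normal(x_bar, C_X)
y = A @ x_true + rng.multivariate_normal(np.zeros(m), C_Z)
x_hat = x_bar + W1 @ (y - A @ x_bar)
print("x_true:", np.round(x_true, 3), " x_hat:", np.round(x_hat, 3))
```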
Givenk{\displaystyle k}observations,y1,…,yk{\displaystyle y_{1},\ldots ,y_{k}}, Bayes' rule gives us the posterior density ofxk{\displaystyle x_{k}}as Thep(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}is called the posterior density,p(yk|xk){\displaystyle p(y_{k}|x_{k})}is called the likelihood function, andp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}is the prior density ofk-th time step. Here we have assumed the conditional independence ofyk{\displaystyle y_{k}}from previous observationsy1,…,yk−1{\displaystyle y_{1},\ldots ,y_{k-1}}givenx{\displaystyle x}as This is the Markov assumption. The MMSE estimatex^k{\displaystyle {\hat {x}}_{k}}given thek-th observation is then the mean of the posterior densityp(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}. With the lack of dynamical information on how the statex{\displaystyle x}changes with time, we will make a further stationarity assumption about the prior: Thus, the prior density fork-th time step is the posterior density of (k-1)-th time step. This structure allows us to formulate a recursive approach to estimation. In the context of linear MMSE estimator, the formula for the estimate will have the same form as before:x^=CXYCY−1(y−y¯)+x¯.{\displaystyle {\hat {x}}=C_{XY}C_{Y}^{-1}(y-{\bar {y}})+{\bar {x}}.}However, the mean and covariance matrices ofX{\displaystyle X}andY{\displaystyle Y}will need to be replaced by those of the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}and likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}, respectively. For the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}, its mean is given by the previous MMSE estimate, and its covariance matrix is given by the previous error covariance matrix, as per by the properties of MMSE estimators and the stationarity assumption. Similarly, for the linear observation process, the mean of the likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}is given byy¯k=Ax¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\bar {x}}_{k}=A{\hat {x}}_{k-1}}and the covariance matrix is as before The difference between the predicted value ofYk{\displaystyle Y_{k}}, as given byy¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\hat {x}}_{k-1}}, and its observed valueyk{\displaystyle y_{k}}gives the prediction errory~k=yk−y¯k{\displaystyle {\tilde {y}}_{k}=y_{k}-{\bar {y}}_{k}}, which is also referred to as innovation or residual. It is more convenient to represent the linear MMSE in terms of the prediction error, whose mean and covariance areE[y~k]=0{\displaystyle \mathrm {E} [{\tilde {y}}_{k}]=0}andCY~k=CYk|Xk{\displaystyle C_{{\tilde {Y}}_{k}}=C_{Y_{k}|X_{k}}}. Hence, in the estimate update formula, we should replacex¯{\displaystyle {\bar {x}}}andCX{\displaystyle C_{X}}byx^k−1{\displaystyle {\hat {x}}_{k-1}}andCek−1{\displaystyle C_{e_{k-1}}}, respectively. Also, we should replacey¯{\displaystyle {\bar {y}}}andCY{\displaystyle C_{Y}}byy¯k−1{\displaystyle {\bar {y}}_{k-1}}andCY~k{\displaystyle C_{{\tilde {Y}}_{k}}}. 
Lastly, we replaceCXY{\displaystyle C_{XY}}by Thus, we have the new estimate as new observationyk{\displaystyle y_{k}}arrives as and the new error covariance as From the point of view of linear algebra, for sequential estimation, if we have an estimatex^1{\displaystyle {\hat {x}}_{1}}based on measurements generating spaceY1{\displaystyle Y_{1}}, then after receiving another set of measurements, we should subtract out from these measurements that part that could be anticipated from the result of the first measurements. In other words, the updating must be based on that part of the new data which is orthogonal to the old data. The repeated use of the above two equations as more observations become available lead to recursive estimation techniques. The expressions can be more compactly written as The matrixWk{\displaystyle W_{k}}is often referred to as the Kalman gain factor. The alternative formulation of the above algorithm will give The repetition of these three steps as more data becomes available leads to an iterative estimation algorithm. The generalization of this idea to non-stationary cases gives rise to theKalman filter. The three update steps outlined above indeed form the update step of the Kalman filter. As an important special case, an easy to use recursive expression can be derived when at eachk-th time instant the underlying linear observation process yields a scalar such thatyk=akTxk+zk{\displaystyle y_{k}=a_{k}^{T}x_{k}+z_{k}}, whereak{\displaystyle a_{k}}isn-by-1 known column vector whose values can change with time,xk{\displaystyle x_{k}}isn-by-1 random column vector to be estimated, andzk{\displaystyle z_{k}}is scalar noise term with varianceσk2{\displaystyle \sigma _{k}^{2}}. After (k+1)-th observation, the direct use of above recursive equations give the expression for the estimatex^k+1{\displaystyle {\hat {x}}_{k+1}}as: whereyk+1{\displaystyle y_{k+1}}is the new scalar observation and the gain factorwk+1{\displaystyle w_{k+1}}isn-by-1 column vector given by TheCek+1{\displaystyle C_{e_{k+1}}}isn-by-nerror covariance matrix given by Here, no matrix inversion is required. Also, the gain factor,wk+1{\displaystyle w_{k+1}}, depends on our confidence in the new data sample, as measured by the noise variance, versus that in the previous data. The initial values ofx^{\displaystyle {\hat {x}}}andCe{\displaystyle C_{e}}are taken to be the mean and covariance of the aprior probability density function ofx{\displaystyle x}. Alternative approaches:This important special case has also given rise to many other iterative methods (oradaptive filters), such as theleast mean squares filterandrecursive least squares filter, that directly solves the original MSE optimization problem usingstochastic gradient descents. However, since the estimation errore{\displaystyle e}cannot be directly observed, these methods try to minimize the mean squared prediction errorE{y~Ty~}{\displaystyle \mathrm {E} \{{\tilde {y}}^{T}{\tilde {y}}\}}. For instance, in the case of scalar observations, we have the gradient∇x^E{y~2}=−2E{y~a}.{\displaystyle \nabla _{\hat {x}}\mathrm {E} \{{\tilde {y}}^{2}\}=-2\mathrm {E} \{{\tilde {y}}a\}.}Thus, the update equation for the least mean square filter is given by whereηk{\displaystyle \eta _{k}}is the scalar step size and the expectation is approximated by the instantaneous valueE{aky~k}≈aky~k{\displaystyle \mathrm {E} \{a_{k}{\tilde {y}}_{k}\}\approx a_{k}{\tilde {y}}_{k}}. As we can see, these methods bypass the need for covariance matrices. 
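A minimal sketch of this scalar-observation recursion is given below. The update equations are written out from the description above in their standard Kalman-style form (gain, innovation correction, covariance update without matrix inversion) rather than copied from any particular reference, and the observation vectors and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 2
x_true = np.array([1.0, -0.5])               # hidden vector to be estimated
sigma2 = 0.2                                 # scalar observation noise variance

# Prior mean and covariance initialise the recursion.
x_hat = np.zeros(n)
C_e = np.eye(n)

for k in range(200):
    a = rng.normal(size=n)                   # known, time-varying observation vector
    y = a @ x_true + rng.normal(0.0, np.sqrt(sigma2))

    # Gain factor: confidence in the new sample versus the running estimate.
    w = C_e @ a / (a @ C_e @ a + sigma2)
    x_hat = x_hat + w * (y - a @ x_hat)      # correct using the prediction error
    C_e = C_e - np.outer(w, a) @ C_e         # error covariance update, no inversion

print("x_true:", x_true)
print("x_hat :", np.round(x_hat, 3))
print("trace of C_e:", round(float(np.trace(C_e)), 5))
```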
In many practical applications, the observation noise is uncorrelated. That is,CZ{\displaystyle C_{Z}}is a diagonal matrix. In such cases, it is advantageous to consider the components ofy{\displaystyle y}as independent scalar measurements, rather than vector measurement. This allows us to reduce computation time by processing them×1{\displaystyle m\times 1}measurement vector asm{\displaystyle m}scalar measurements. The use of scalar update formula avoids matrix inversion in the implementation of the covariance update equations, thus improving the numerical robustness against roundoff errors. The update can be implemented iteratively as: whereℓ=1,2,…,m{\displaystyle \ell =1,2,\ldots ,m}, using the initial valuesCek+1(0)=Cek{\displaystyle C_{e_{k+1}}^{(0)}=C_{e_{k}}}andx^k+1(0)=x^k{\displaystyle {\hat {x}}_{k+1}^{(0)}={\hat {x}}_{k}}. The intermediate variablesCZk+1(ℓ){\displaystyle C_{Z_{k+1}}^{(\ell )}}is theℓ{\displaystyle \ell }-th diagonal element of them×m{\displaystyle m\times m}diagonal matrixCZk+1{\displaystyle C_{Z_{k+1}}}; whileAk+1(ℓ){\displaystyle A_{k+1}^{(\ell )}}is theℓ{\displaystyle \ell }-th row ofm×n{\displaystyle m\times n}matrixAk+1{\displaystyle A_{k+1}}. The final values areCek+1(m)=Cek+1{\displaystyle C_{e_{k+1}}^{(m)}=C_{e_{k+1}}}andx^k+1(m)=x^k+1{\displaystyle {\hat {x}}_{k+1}^{(m)}={\hat {x}}_{k+1}}. We shall take alinear predictionproblem as an example. Let a linear combination of observed scalar random variablesz1,z2{\displaystyle z_{1},z_{2}}andz3{\displaystyle z_{3}}be used to estimate another future scalar random variablez4{\displaystyle z_{4}}such thatz^4=∑i=13wizi{\displaystyle {\hat {z}}_{4}=\sum _{i=1}^{3}w_{i}z_{i}}. If the random variablesz=[z1,z2,z3,z4]T{\displaystyle z=[z_{1},z_{2},z_{3},z_{4}]^{T}}are real Gaussian random variables with zero mean and its covariance matrix given by then our task is to find the coefficientswi{\displaystyle w_{i}}such that it will yield an optimal linear estimatez^4{\displaystyle {\hat {z}}_{4}}. In terms of the terminology developed in the previous sections, for this problem we have the observation vectory=[z1,z2,z3]T{\displaystyle y=[z_{1},z_{2},z_{3}]^{T}}, the estimator matrixW=[w1,w2,w3]{\displaystyle W=[w_{1},w_{2},w_{3}]}as a row vector, and the estimated variablex=z4{\displaystyle x=z_{4}}as a scalar quantity. The autocorrelation matrixCY{\displaystyle C_{Y}}is defined as The cross correlation matrixCYX{\displaystyle C_{YX}}is defined as We now solve the equationCYWT=CYX{\displaystyle C_{Y}W^{T}=C_{YX}}by invertingCY{\displaystyle C_{Y}}and pre-multiplying to get So we havew1=2.57,{\displaystyle w_{1}=2.57,}w2=−0.142,{\displaystyle w_{2}=-0.142,}andw3=.5714{\displaystyle w_{3}=.5714}as the optimal coefficients forz^4{\displaystyle {\hat {z}}_{4}}. Computing the minimum mean square error then gives‖e‖min2=E⁡[z4z4]−WCYX=15−WCYX=.2857{\displaystyle \left\Vert e\right\Vert _{\min }^{2}=\operatorname {E} [z_{4}z_{4}]-WC_{YX}=15-WC_{YX}=.2857}.[2]Note that it is not necessary to obtain an explicit matrix inverse ofCY{\displaystyle C_{Y}}to compute the value ofW{\displaystyle W}. The matrix equation can be solved by well known methods such as Gauss elimination method. A shorter, non-numerical example can be found inorthogonality principle. Consider a vectory{\displaystyle y}formed by takingN{\displaystyle N}observations of a fixed but unknown scalar parameterx{\displaystyle x}disturbed by white Gaussian noise. 
We can describe the process by a linear equationy=1x+z{\displaystyle y=1x+z}, where1=[1,1,…,1]T{\displaystyle 1=[1,1,\ldots ,1]^{T}}. Depending on context it will be clear if1{\displaystyle 1}represents ascalaror a vector. Suppose that we know[−x0,x0]{\displaystyle [-x_{0},x_{0}]}to be the range within which the value ofx{\displaystyle x}is going to fall in. We can model our uncertainty ofx{\displaystyle x}by an aprioruniform distributionover an interval[−x0,x0]{\displaystyle [-x_{0},x_{0}]}, and thusx{\displaystyle x}will have variance ofσX2=x02/3.{\displaystyle \sigma _{X}^{2}=x_{0}^{2}/3.}. Let the noise vectorz{\displaystyle z}be normally distributed asN(0,σZ2I){\displaystyle N(0,\sigma _{Z}^{2}I)}whereI{\displaystyle I}is an identity matrix. Alsox{\displaystyle x}andz{\displaystyle z}are independent andCXZ=0{\displaystyle C_{XZ}=0}. It is easy to see that Thus, the linear MMSE estimator is given by We can simplify the expression by using the alternative form forW{\displaystyle W}as where fory=[y1,y2,…,yN]T{\displaystyle y=[y_{1},y_{2},\ldots ,y_{N}]^{T}}we havey¯=1TyN=∑i=1NyiN.{\displaystyle {\bar {y}}={\frac {1^{T}y}{N}}={\frac {\sum _{i=1}^{N}y_{i}}{N}}.} Similarly, the variance of the estimator is Thus the MMSE of this linear estimator is For very largeN{\displaystyle N}, we see that the MMSE estimator of a scalar with uniform aprior distribution can be approximated by the arithmetic average of all the observed data while the variance will be unaffected by dataσX^2=σX2,{\displaystyle \sigma _{\hat {X}}^{2}=\sigma _{X}^{2},}and the LMMSE of the estimate will tend to zero. However, the estimator is suboptimal since it is constrained to be linear. Had the random variablex{\displaystyle x}also been Gaussian, then the estimator would have been optimal. Notice, that the form of the estimator will remain unchanged, regardless of the apriori distribution ofx{\displaystyle x}, so long as the mean and variance of these distributions are the same. Consider a variation of the above example: Two candidates are standing for an election. Let the fraction of votes that a candidate will receive on an election day bex∈[0,1].{\displaystyle x\in [0,1].}Thus the fraction of votes the other candidate will receive will be1−x.{\displaystyle 1-x.}We shall takex{\displaystyle x}as a random variable with a uniform prior distribution over[0,1]{\displaystyle [0,1]}so that its mean isx¯=1/2{\displaystyle {\bar {x}}=1/2}and variance isσX2=1/12.{\displaystyle \sigma _{X}^{2}=1/12.}A few weeks before the election, two independent public opinion polls were conducted by two different pollsters. The first poll revealed that the candidate is likely to gety1{\displaystyle y_{1}}fraction of votes. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an errorz1{\displaystyle z_{1}}with zero mean and varianceσZ12.{\displaystyle \sigma _{Z_{1}}^{2}.}Similarly, the second pollster declares their estimate to bey2{\displaystyle y_{2}}with an errorz2{\displaystyle z_{2}}with zero mean and varianceσZ22.{\displaystyle \sigma _{Z_{2}}^{2}.}Note that except for the mean and variance of the error, the error distribution is unspecified. How should the two polls be combined to obtain the voting prediction for the given candidate? As with previous example, we have Here, both theE⁡{y1}=E⁡{y2}=x¯=1/2{\displaystyle \operatorname {E} \{y_{1}\}=\operatorname {E} \{y_{2}\}={\bar {x}}=1/2}. 
Thus, we can obtain the LMMSE estimate as the linear combination ofy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}as where the weights are given by Here, since the denominator term is constant, the poll with lower error is given higher weight in order to predict the election outcome. Lastly, the variance ofx^{\displaystyle {\hat {x}}}is given by which makesσX^2{\displaystyle \sigma _{\hat {X}}^{2}}smaller thanσX2.{\displaystyle \sigma _{X}^{2}.}Thus, the LMMSE is given by In general, if we haveN{\displaystyle N}pollsters, thenx^=∑i=1Nwi(yi−x¯)+x¯,{\displaystyle {\hat {x}}=\sum _{i=1}^{N}w_{i}(y_{i}-{\bar {x}})+{\bar {x}},}where the weight fori-th pollster is given bywi=1/σZi2∑j=1N1/σZj2+1/σX2{\displaystyle w_{i}={\frac {1/\sigma _{Z_{i}}^{2}}{\sum _{j=1}^{N}1/\sigma _{Z_{j}}^{2}+1/\sigma _{X}^{2}}}}and the LMMSE is given byLMMSE=1∑j=1N1/σZj2+1/σX2.{\displaystyle \mathrm {LMMSE} ={\frac {1}{\sum _{j=1}^{N}1/\sigma _{Z_{j}}^{2}+1/\sigma _{X}^{2}}}.} Suppose that a musician is playing an instrument and that the sound is received by two microphones, each of them located at two different places. Let the attenuation of sound due to distance at each microphone bea1{\displaystyle a_{1}}anda2{\displaystyle a_{2}}, which are assumed to be known constants. Similarly, let the noise at each microphone bez1{\displaystyle z_{1}}andz2{\displaystyle z_{2}}, each with zero mean and variancesσZ12{\displaystyle \sigma _{Z_{1}}^{2}}andσZ22{\displaystyle \sigma _{Z_{2}}^{2}}respectively. Letx{\displaystyle x}denote the sound produced by the musician, which is a random variable with zero mean and varianceσX2.{\displaystyle \sigma _{X}^{2}.}How should the recorded music from these two microphones be combined, after being synced with each other? We can model the sound received by each microphone as Here both theE⁡{y1}=E⁡{y2}=0{\displaystyle \operatorname {E} \{y_{1}\}=\operatorname {E} \{y_{2}\}=0}. Thus, we can combine the two sounds as where thei-th weight is given as
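As an illustration of the N-pollster combination rule stated above, the following minimal Python sketch evaluates the weights, the combined estimate, and the LMMSE; the poll readings and error variances are hypothetical, not taken from the text.

```python
import numpy as np

# Hypothetical poll data: observed vote fractions and the pollsters' stated error variances.
y = np.array([0.57, 0.62, 0.49])          # poll readings y_i (illustrative values)
var_z = np.array([0.01, 0.04, 0.02])      # error variances sigma_{Z_i}^2 (illustrative values)

# Uniform prior on [0, 1]: mean 1/2, variance 1/12, as in the example above.
x_bar, var_x = 0.5, 1.0 / 12.0

# Weights and LMMSE from the formulas stated above.
denom = np.sum(1.0 / var_z) + 1.0 / var_x
w = (1.0 / var_z) / denom
x_hat = np.sum(w * (y - x_bar)) + x_bar
lmmse = 1.0 / denom

print("weights :", w)          # pollsters with smaller error variance get larger weight
print("estimate:", x_hat)
print("LMMSE   :", lmmse)      # always smaller than the prior variance 1/12
```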
https://en.wikipedia.org/wiki/Minimum_mean-square_error
Squared deviations from the mean(SDM) result fromsquaringdeviations. Inprobability theoryandstatistics, the definition ofvarianceis either theexpected valueof the SDM (when considering a theoreticaldistribution) or its average value (for actual experimental data). Computations foranalysis of varianceinvolve the partitioning of a sum of SDM. An understanding of the computations involved is greatly enhanced by a study of the statistical value For arandom variableX{\displaystyle X}with meanμ{\displaystyle \mu }and varianceσ2{\displaystyle \sigma ^{2}}, (Its derivation is shownhere.) Therefore, From the above, the following can be derived: The sum of squared deviations needed to calculatesample variance(before deciding whether to divide bynorn− 1) is most easily calculated as From the two derived expectations above the expected value of this sum is which implies This effectively proves the use of the divisorn− 1 in the calculation of anunbiasedsample estimate ofσ2. In the situation where data is available forkdifferent treatment groups having sizeniwhereivaries from 1 tok, then it is assumed that the expected mean of each group is and the variance of each treatment group is unchanged from the population varianceσ2{\displaystyle \sigma ^{2}}. Under the Null Hypothesis that the treatments have no effect, then each of theTi{\displaystyle T_{i}}will be zero. It is now possible to calculate three sums of squares: Under the null hypothesis that the treatments cause no differences and all theTi{\displaystyle T_{i}}are zero, the expectation simplifies to Under the null hypothesis, the difference of any pair ofI,T, andCdoes not contain any dependency onμ{\displaystyle \mu }, onlyσ2{\displaystyle \sigma ^{2}}. The constants (n− 1), (k− 1), and (n−k) are normally referred to as the number ofdegrees of freedom. In a very simple example, 5 observations arise from two treatments. The first treatment gives three values 1, 2, and 3, and the second treatment gives two values 4, and 6. Giving
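The two-treatment example can be checked numerically. The sketch below assumes the usual definitions of the three sums of squares, since the defining formulas are not reproduced in the text above: I is the sum of squared observations, T is the sum over groups of the squared group total divided by the group size, and C is the squared grand total divided by n.

```python
# Two-treatment example above, under the assumed standard definitions of I, T, and C.
groups = [[1, 2, 3], [4, 6]]                  # treatment 1 and treatment 2
all_obs = [x for g in groups for x in g]
n = len(all_obs)                              # 5 observations in total

I = sum(x * x for x in all_obs)               # 66
T = sum(sum(g) ** 2 / len(g) for g in groups) # 6^2/3 + 10^2/2 = 62
C = sum(all_obs) ** 2 / n                     # 16^2/5 = 51.2

print("treatment sum of squares:", T - C)     # 10.8, with k - 1 = 1 degree of freedom
print("residual  sum of squares:", I - T)     # 4.0,  with n - k = 3 degrees of freedom
print("total     sum of squares:", I - C)     # 14.8, with n - 1 = 4 degrees of freedom
```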
https://en.wikipedia.org/wiki/Squared_deviations
Inbioinformatics, theroot mean square deviation of atomic positions, or simplyroot mean square deviation (RMSD), is the measure of the average distance between the atoms (usually the backbone atoms) ofsuperimposedmolecules.[1]In the study of globular protein conformations, one customarily measures the similarity in three-dimensional structure by the RMSD of theCαatomic coordinates after optimal rigid body superposition. When adynamical systemfluctuates about some well-defined average position, the RMSD from the average over time can be referred to as theRMSForroot mean square fluctuation. The size of this fluctuation can be measured, for example usingMössbauer spectroscopyornuclear magnetic resonance, and can provide important physical information. TheLindemann indexis a method of placing the RMSF in the context of the parameters of the system. A widely used way to compare the structures of biomolecules or solid bodies is to translate and rotate one structure with respect to the other to minimize the RMSD. Coutsias,et al.presented a simple derivation, based onquaternions, for the optimal solid body transformation (rotation-translation) that minimizes the RMSD between two sets of vectors.[2]They proved that the quaternion method is equivalent to the well-knownKabsch algorithm.[3]The solution given by Kabsch is an instance of the solution of thed-dimensional problem, introduced by Hurley and Cattell.[4]Thequaternionsolution to compute the optimal rotation was published in the appendix of a paper of Petitjean.[5]Thisquaternionsolution and the calculation of the optimal isometry in thed-dimensional case were both extended to infinite sets and to the continuous case in the appendix A of another paper of Petitjean.[6] whereδiis the distance between atomiand either a reference structure or the mean position of theNequivalent atoms. This is often calculated for the backbone heavy atomsC,N,O, andCαor sometimes just theCαatoms. Normally a rigid superposition which minimizes the RMSD is performed, and this minimum is returned. Given two sets ofn{\displaystyle n}pointsv{\displaystyle \mathbf {v} }andw{\displaystyle \mathbf {w} }, the RMSD is defined as follows: An RMSD value is expressed in length units. The most commonly used unit instructural biologyis theÅngström(Å) which is equal to 10−10m. Typically RMSD is used as a quantitative measure of similarity between two or more protein structures. For example, theCASPprotein structure predictioncompetition uses RMSD as one of its assessments of how well a submitted structure matches the known, target structure. Thus the lower RMSD, the better the model is in comparison to the target structure. Also some scientists who studyprotein foldingby computer simulations use RMSD as areaction coordinateto quantify where the protein is between the folded state and the unfolded state. The study of RMSD for small organic molecules (commonly calledligandswhen they're binding to macromolecules, such as proteins, is studied) is common in the context ofdocking,[1]as well as in other methods to study theconfigurationof ligands when bound to macromolecules. Note that, for the case of ligands (contrary to proteins, as described above), their structures are most commonly not superimposed prior to the calculation of the RMSD. RMSD is also one of several metrics that have been proposed for quantifying evolutionary similarity between proteins, as well as the quality of sequence alignments.[7][8]
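A minimal sketch of the RMSD computation between two coordinate sets follows. It assumes the structures have already been superimposed; in practice an optimal rigid-body superposition (for example, the Kabsch algorithm) is performed first, which this sketch omits. The coordinates are illustrative only.

```python
import numpy as np

def rmsd(v, w):
    """RMSD between two (n, 3) coordinate arrays that are already superimposed."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    assert v.shape == w.shape
    deltas = np.linalg.norm(v - w, axis=1)       # per-atom distances delta_i
    return np.sqrt(np.mean(deltas ** 2))

# Illustrative coordinates (in Angstroms); real use would take e.g. the C-alpha atoms
# after an optimal rigid-body superposition.
ref   = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.5, 0.0]]
model = [[0.1, 0.0, 0.0], [1.4, 0.2, 0.0], [3.1, 0.4, 0.1]]
print(round(rmsd(ref, model), 3))
```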
https://en.wikipedia.org/wiki/Root-mean-square_deviation_of_atomic_positions
Instatistics,Mallows'sCp{\textstyle {\boldsymbol {C_{p}}}},[1][2]named forColin Lingwood Mallows, is used to assess thefitof aregression modelthat has been estimated usingordinary least squares. It is applied in the context ofmodel selection, where a number ofpredictor variablesare available for predicting some outcome, and the goal is to find the best model involving a subset of these predictors. A small value ofCp{\textstyle C_{p}}means that the model is relatively precise. Mallows'sCphas been shown to be equivalent toAkaike information criterionin the special case of Gaussianlinear regression.[3] Mallows'sCpaddresses the issue ofoverfitting, in which model selection statistics such as the residual sum of squares always get smaller as more variables are added to a model. Thus, if we aim to select the model giving the smallest residual sum of squares, the model including all variables would always be selected. Instead, theCpstatistic calculated on asampleof data estimates thesum squared prediction error(SSPE) as itspopulationtarget whereY^i{\displaystyle {\hat {Y}}_{i}}is the fitted value from the regression model for theith case,E(Yi|Xi) is the expected value for theith case, and σ2is the error variance (assumed constant across the cases). Themean squared prediction error(MSPE) will not automatically get smaller as more variables are added. The optimum model under this criterion is a compromise influenced by the sample size, theeffect sizesof the different predictors, and the degree ofcollinearitybetween them. IfPregressorsare selected from a set ofK>P, theCpstatistic for that particular set of regressors is defined as: where Given a linear model such as: where: An alternate version ofCpcan also be defined as:[4] where Note that this version of theCpdoes not give equivalent values to the earlier version, but the model with the smallestCpfrom this definition will also be the same model with the smallestCpfrom the earlier definition. TheCpcriterion suffers from two main limitations[5] TheCpstatistic is often used as a stopping rule for various forms ofstepwise regression. Mallows proposed the statistic as a criterion for selecting among many alternative subset regressions. Under a model not suffering from appreciable lack of fit (bias),Cphas expectation nearly equal toP; otherwise the expectation is roughlyPplus a positive bias term. Nevertheless, even though it has expectation greater than or equal toP, there is nothing to preventCp<Por evenCp< 0 in extreme cases. It is suggested that one should choose a subset that hasCpapproachingP,[6]from above, for a list of subsets ordered by increasingP. In practice, the positive bias can be adjusted for by selecting a model from the ordered list of subsets, such thatCp< 2P. Since the sample-basedCpstatistic is an estimate of the MSPE, usingCpfor model selection does not completely guard against overfitting. For instance, it is possible that the selected model will be one in which the sampleCpwas a particularly severe underestimate of the MSPE. Model selection statistics such asCpare generally not used blindly, but rather information about the field of application, the intended use of the model, and any known biases in the data are taken into account in the process of model selection.
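The following sketch computes one common form of the statistic, Cp = SSEp/σ̂² − n + 2p, with σ̂² taken from the residual mean square of the full model containing all K regressors. This particular form is an assumption here, since the defining formula is not reproduced above, and the data are simulated for illustration.

```python
import numpy as np

def mallows_cp(y, X_subset, X_full):
    """One common form of Mallows's Cp: SSE_p / sigma2_hat - n + 2*p, with
    sigma2_hat estimated from the full model. Both design matrices are
    assumed to already include an intercept column."""
    n = len(y)
    p = X_subset.shape[1]                       # parameters in the candidate model
    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    sigma2_full = sse(X_full) / (n - X_full.shape[1])
    return sse(X_subset) / sigma2_full - n + 2 * p

# Illustrative data: y depends on x1 only, while x2 is pure noise.
rng = np.random.default_rng(0)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)
ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2])
print("Cp, intercept + x1 :", round(mallows_cp(y, np.column_stack([ones, x1]), X_full), 2))
print("Cp, intercept only :", round(mallows_cp(y, ones[:, None], X_full), 2))
# A subset not suffering from lack of fit should give Cp close to its parameter count p.
```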
https://en.wikipedia.org/wiki/Mallows%27s_Cp
Inestimation theoryanddecision theory, aBayes estimatoror aBayes actionis anestimatorordecision rulethat minimizes theposteriorexpected valueof aloss function(i.e., theposterior expected loss). Equivalently, it maximizes the posterior expectation of autilityfunction. An alternative way of formulating an estimator withinBayesian statisticsismaximum a posteriori estimation. Suppose an unknown parameterθ{\displaystyle \theta }is known to have aprior distributionπ{\displaystyle \pi }. Letθ^=θ^(x){\displaystyle {\widehat {\theta }}={\widehat {\theta }}(x)}be an estimator ofθ{\displaystyle \theta }(based on some measurementsx), and letL(θ,θ^){\displaystyle L(\theta ,{\widehat {\theta }})}be aloss function, such as squared error. TheBayes riskofθ^{\displaystyle {\widehat {\theta }}}is defined asEπ(L(θ,θ^)){\displaystyle E_{\pi }(L(\theta ,{\widehat {\theta }}))}, where theexpectationis taken over the probability distribution ofθ{\displaystyle \theta }: this defines the risk function as a function ofθ^{\displaystyle {\widehat {\theta }}}. An estimatorθ^{\displaystyle {\widehat {\theta }}}is said to be aBayes estimatorif it minimizes the Bayes risk among all estimators. Equivalently, the estimator which minimizes the posterior expected lossE(L(θ,θ^)|x){\displaystyle E(L(\theta ,{\widehat {\theta }})|x)}for eachx{\displaystyle x}also minimizes the Bayes risk and therefore is a Bayes estimator.[1] If the prior isimproperthen an estimator which minimizes the posterior expected lossfor eachx{\displaystyle x}is called ageneralized Bayes estimator.[2] The most common risk function used for Bayesian estimation is themean square error(MSE), also calledsquared error risk. The MSE is defined by where the expectation is taken over the joint distribution ofθ{\displaystyle \theta }andx{\displaystyle x}. Using the MSE as risk, the Bayes estimate of the unknown parameter is simply the mean of theposterior distribution,[3] This is known as theminimum mean square error(MMSE) estimator. If there is no inherent reason to prefer one prior probability distribution over another, aconjugate prioris sometimes chosen for simplicity. A conjugate prior is defined as a prior distribution belonging to someparametric family, for which the resulting posterior distribution also belongs to the same family. This is an important property, since the Bayes estimator, as well as its statistical properties (variance, confidence interval, etc.), can all be derived from the posterior distribution. Conjugate priors are especially useful for sequential estimation, where the posterior of the current measurement is used as the prior in the next measurement. In sequential estimation, unless a conjugate prior is used, the posterior distribution typically becomes more complex with each added measurement, and the Bayes estimator cannot usually be calculated without resorting to numerical methods. Following are some examples of conjugate priors. Risk functions are chosen depending on how one measures the distance between the estimate and the unknown parameter. The MSE is the most common risk function in use, primarily due to its simplicity. However, alternative risk functions are also occasionally used. The following are several examples of such alternatives. We denote the posterior generalized distribution function byF{\displaystyle F}. Other loss functions can be conceived, although themean squared erroris the most widely used and validated. Other loss functions are used in statistics, particularly inrobust statistics. 
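Since the Bayes estimator under MSE is the posterior mean, it can be computed by direct numerical integration when no closed form is convenient. The sketch below uses an assumed Beta(2, 2) prior with a binomial observation (illustrative choices only) and checks that the posterior mean yields a smaller posterior expected squared-error loss than another candidate estimate.

```python
import numpy as np

# Assumed setup (illustrative): theta has a Beta(2, 2) prior and we observe
# x = 7 successes in n = 10 Bernoulli trials.
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)           # grid over the parameter
dtheta = theta[1] - theta[0]
prior = theta ** (2 - 1) * (1 - theta) ** (2 - 1)     # Beta(2, 2), up to a constant
x, n_trials = 7, 10
likelihood = theta ** x * (1 - theta) ** (n_trials - x)
posterior = prior * likelihood
posterior /= posterior.sum() * dtheta                 # normalize numerically

post_mean = (theta * posterior).sum() * dtheta        # Bayes estimate under MSE

def posterior_expected_loss(estimate):
    return ((theta - estimate) ** 2 * posterior).sum() * dtheta

print("posterior mean:", round(post_mean, 4))         # conjugacy gives (2+7)/(2+2+10) = 0.6429
print("loss at posterior mean:", posterior_expected_loss(post_mean))
print("loss at another guess :", posterior_expected_loss(0.7))   # strictly larger
```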
The prior distributionp{\displaystyle p}has thus far been assumed to be a true probability distribution, in that However, occasionally this can be a restrictive requirement. For example, there is no distribution (covering the set,R, of all real numbers) for which every real number is equally likely. Yet, in some sense, such a "distribution" seems like a natural choice for anon-informative prior, i.e., a prior distribution which does not imply a preference for any particular value of the unknown parameter. One can still define a functionp(θ)=1{\displaystyle p(\theta )=1}, but this would not be a proper probability distribution since it has infinite mass, Suchmeasuresp(θ){\displaystyle p(\theta )}, which are not probability distributions, are referred to asimproper priors. The use of an improper prior means that the Bayes risk is undefined (since the prior is not a probability distribution and we cannot take an expectation under it). As a consequence, it is no longer meaningful to speak of a Bayes estimator that minimizes the Bayes risk. Nevertheless, in many cases, one can define the posterior distribution This is a definition, and not an application ofBayes' theorem, since Bayes' theorem can only be applied when all distributions are proper. However, it is not uncommon for the resulting "posterior" to be a valid probability distribution. In this case, the posterior expected loss is typically well-defined and finite. Recall that, for a proper prior, the Bayes estimator minimizes the posterior expected loss. When the prior is improper, an estimator which minimizes the posterior expected loss is referred to as ageneralized Bayes estimator.[2] A typical example is estimation of alocation parameterwith a loss function of the typeL(a−θ){\displaystyle L(a-\theta )}. Hereθ{\displaystyle \theta }is a location parameter, i.e.,p(x|θ)=f(x−θ){\displaystyle p(x|\theta )=f(x-\theta )}. It is common to use the improper priorp(θ)=1{\displaystyle p(\theta )=1}in this case, especially when no other more subjective information is available. This yields so the posterior expected loss The generalized Bayes estimator is the valuea(x){\displaystyle a(x)}that minimizes this expression for a givenx{\displaystyle x}. This is equivalent to minimizing In this case it can be shown that the generalized Bayes estimator has the formx+a0{\displaystyle x+a_{0}}, for some constanta0{\displaystyle a_{0}}. To see this, leta0{\displaystyle a_{0}}be the value minimizing (1) whenx=0{\displaystyle x=0}. Then, given a different valuex1{\displaystyle x_{1}}, we must minimize This is identical to (1), except thata{\displaystyle a}has been replaced bya−x1{\displaystyle a-x_{1}}. Thus, the expression minimizing is given bya−x1=a0{\displaystyle a-x_{1}=a_{0}}, so that the optimal estimator has the form A Bayes estimator derived through theempirical Bayes methodis called anempirical Bayes estimator. Empirical Bayes methods enable the use of auxiliary empirical data, from observations of related parameters, in the development of a Bayes estimator. This is done under the assumption that the estimated parameters are obtained from a common prior. For example, if independent observations of different parameters are performed, then the estimation performance of a particular parameter can sometimes be improved by using data from other observations. There are bothparametricandnon-parametricapproaches to empirical Bayes estimation.[4] The following is a simple example of parametric empirical Bayes estimation. 
Given past observationsx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}having conditional distributionf(xi|θi){\displaystyle f(x_{i}|\theta _{i})}, one is interested in estimatingθn+1{\displaystyle \theta _{n+1}}based onxn+1{\displaystyle x_{n+1}}. Assume that theθi{\displaystyle \theta _{i}}'s have a common priorπ{\displaystyle \pi }which depends on unknown parameters. For example, suppose thatπ{\displaystyle \pi }is normal with unknown meanμπ{\displaystyle \mu _{\pi }\,\!}and varianceσπ.{\displaystyle \sigma _{\pi }\,\!.}We can then use the past observations to determine the mean and variance ofπ{\displaystyle \pi }in the following way. First, we estimate the meanμm{\displaystyle \mu _{m}\,\!}and varianceσm{\displaystyle \sigma _{m}\,\!}of the marginal distribution ofx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}using themaximum likelihoodapproach: Next, we use thelaw of total expectationto computeμm{\displaystyle \mu _{m}}and thelaw of total varianceto computeσm2{\displaystyle \sigma _{m}^{2}}such that whereμf(θ){\displaystyle \mu _{f}(\theta )}andσf(θ){\displaystyle \sigma _{f}(\theta )}are the moments of the conditional distributionf(xi|θi){\displaystyle f(x_{i}|\theta _{i})}, which are assumed to be known. In particular, suppose thatμf(θ)=θ{\displaystyle \mu _{f}(\theta )=\theta }and thatσf2(θ)=K{\displaystyle \sigma _{f}^{2}(\theta )=K}; we then have Finally, we obtain the estimated moments of the prior, For example, ifxi|θi∼N(θi,1){\displaystyle x_{i}|\theta _{i}\sim N(\theta _{i},1)}, and if we assume a normal prior (which is a conjugate prior in this case), we conclude thatθn+1∼N(μ^π,σ^π2){\displaystyle \theta _{n+1}\sim N({\widehat {\mu }}_{\pi },{\widehat {\sigma }}_{\pi }^{2})}, from which the Bayes estimator ofθn+1{\displaystyle \theta _{n+1}}based onxn+1{\displaystyle x_{n+1}}can be calculated. Bayes rules having finite Bayes risk are typicallyadmissible. The following are some specific examples of admissibility theorems. By contrast, generalized Bayes rules often have undefined Bayes risk in the case of improper priors. These rules are often inadmissible and the verification of their admissibility can be difficult. For example, the generalized Bayes estimator of a location parameter θ based on Gaussian samples (described in the "Generalized Bayes estimator" section above) is inadmissible forp>2{\displaystyle p>2}; this is known asStein's phenomenon. Let θ be an unknown random variable, and suppose thatx1,x2,…{\displaystyle x_{1},x_{2},\ldots }areiidsamples with densityf(xi|θ){\displaystyle f(x_{i}|\theta )}. Letδn=δn(x1,…,xn){\displaystyle \delta _{n}=\delta _{n}(x_{1},\ldots ,x_{n})}be a sequence of Bayes estimators of θ based on an increasing number of measurements. We are interested in analyzing the asymptotic performance of this sequence of estimators, i.e., the performance ofδn{\displaystyle \delta _{n}}for largen. To this end, it is customary to regard θ as a deterministic parameter whose true value isθ0{\displaystyle \theta _{0}}. Under specific conditions,[6]for large samples (large values ofn), the posterior density of θ is approximately normal. In other words, for largen, the effect of the prior probability on the posterior is negligible. Moreover, if δ is the Bayes estimator under MSE risk, then it isasymptotically unbiasedand itconverges in distributionto thenormal distribution: whereI(θ0) is theFisher informationof θ0. It follows that the Bayes estimator δnunder MSE isasymptotically efficient. 
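The parametric empirical Bayes recipe above can be sketched as follows. The simulation parameters are illustrative, and the final shrinkage step uses the standard normal–normal posterior mean, which the text does not spell out explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the assumed setup: theta_i ~ N(mu_pi, sigma_pi^2), x_i | theta_i ~ N(theta_i, 1).
mu_pi_true, var_pi_true, n = 3.0, 4.0, 2000
theta = rng.normal(mu_pi_true, np.sqrt(var_pi_true), size=n + 1)
x = rng.normal(theta, 1.0)                      # so K = sigma_f^2(theta) = 1

# Estimate the marginal moments from the past observations x_1, ..., x_n (ML estimates).
mu_m = x[:n].mean()
var_m = x[:n].var()                             # ML estimate (divides by n)

# Estimated prior moments, as derived above: mu_pi = mu_m and sigma_pi^2 = sigma_m^2 - K.
mu_pi_hat = mu_m
var_pi_hat = max(var_m - 1.0, 0.0)

# Bayes estimate of theta_{n+1} from x_{n+1}: the usual normal-normal posterior mean,
# i.e. a shrinkage of x_{n+1} toward the estimated prior mean (standard conjugate result).
shrink = var_pi_hat / (var_pi_hat + 1.0)
theta_next_hat = shrink * x[n] + (1.0 - shrink) * mu_pi_hat

print("estimated prior mean/variance:", round(mu_pi_hat, 3), round(var_pi_hat, 3))
print("x_{n+1} =", round(x[n], 3), " empirical-Bayes estimate =", round(theta_next_hat, 3))
```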
Another estimator which is asymptotically normal and efficient is themaximum likelihood estimator(MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example. Consider the estimator of θ based on binomial samplex~b(θ,n) where θ denotes the probability for success. Assuming θ is distributed according to the conjugate prior, which in this case is theBeta distributionB(a,b), the posterior distribution is known to be B(a+x,b+n-x). Thus, the Bayes estimator under MSE is The MLE in this case is x/n and so we get, The last equation implies that, forn→ ∞, the Bayes estimator (in the described problem) is close to the MLE. On the other hand, whennis small, the prior information is still relevant to the decision problem and affects the estimate. To see the relative weight of the prior information, assume thata=b; in this case each measurement brings in 1 new bit of information; the formula above shows that the prior information has the same weight asa+bbits of the new information. In applications, one often knows very little about fine details of the prior distribution; in particular, there is no reason to assume that it coincides with B(a,b) exactly. In such a case, one possible interpretation of this calculation is: "there is a non-pathological prior distribution with the mean value 0.5 and the standard deviationdwhich gives the weight of prior information equal to 1/(4d2)-1 bits of new information." Another example of the same phenomena is the case when the prior estimate and a measurement are normally distributed. If the prior is centered atBwith deviation Σ, and the measurement is centered atbwith deviation σ,thenthe posterior is centered atαα+βB+βα+βb{\displaystyle {\frac {\alpha }{\alpha +\beta }}B+{\frac {\beta }{\alpha +\beta }}b}, with weights in this weighted average being α=σ², β=Σ². Moreover, the squared posterior deviation is Σ²+σ². In other words, the prior is combined with the measurement inexactlythe same way as if it were an extra measurement to take into account. For example, if Σ=σ/2, then the deviation of 4 measurements combined matches the deviation of the prior (assuming that errors of measurements are independent). And the weights α,β in the formula for posterior match this: the weight of the prior is 4 times the weight of the measurement. Combining this prior withnmeasurements with averagevresults in the posterior centered at44+nV+n4+nv{\displaystyle {\frac {4}{4+n}}V+{\frac {n}{4+n}}v}; in particular, the prior plays the same role as 4 measurements made in advance. In general, the prior has the weight of (σ/Σ)² measurements. Compare to the example of binomial distribution: there the prior has the weight of (σ/Σ)²−1 measurements. One can see that the exact weight does depend on the details of the distribution, but when σ≫Σ, the difference becomes small. TheInternet Movie Databaseuses a formula for calculating and comparing the ratings of films by its users, including theirTop Rated 250 Titleswhich is claimed to give "a true Bayesian estimate".[7]The following Bayesian formula was initially used to calculate a weighted average score for the Top 250, though the formula has since changed: where: Note thatWis just theweighted arithmetic meanofRandCwith weight vector(v, m). As the number of ratings surpassesm, the confidence of the average rating surpasses the confidence of the mean vote for all films (C), and the weighted bayesian rating (W) approaches a straight average (R). 
The closer v (the number of ratings for the film) is to zero, the closer W is to C, where W is the weighted rating and C is the average rating of all films. In simpler terms, the fewer ratings cast for a film, the more its weighted rating skews towards the average across all films, while a film with many ratings has a weighted rating approaching its pure arithmetic average rating. IMDb's approach ensures that a film with only a few ratings, all at 10, would not rank above, for example, "The Godfather", which has a 9.2 average from over 500,000 ratings.
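A minimal sketch of the weighted arithmetic mean described above follows; the site-wide mean C, the threshold m, and the example vote counts are hypothetical.

```python
def weighted_rating(R, v, C, m):
    """Weighted arithmetic mean of R (the film's mean rating, from v votes) and
    C (the mean rating over all films), with weight vector (v, m)."""
    return (v * R + m * C) / (v + m)

C, m = 6.9, 25_000                            # illustrative site-wide mean and vote threshold
print(weighted_rating(10.0, 3, C, m))         # a film with 3 perfect votes stays near C
print(weighted_rating(9.2, 500_000, C, m))    # a heavily rated film stays near its own mean
```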
https://en.wikipedia.org/wiki/Bayesian_estimator
Instatisticsandsignal processing, theorthogonality principleis anecessary and sufficientcondition for the optimality of aBayesian estimator. Loosely stated, the orthogonality principle says that the error vector of the optimal estimator (in amean square errorsense) is orthogonal to any possible estimator. The orthogonality principle is most commonly stated for linear estimators, but more general formulations are possible. Since the principle is a necessary and sufficient condition for optimality, it can be used to find theminimum mean square errorestimator. The orthogonality principle is most commonly used in the setting of linear estimation.[1]In this context, letxbe an unknownrandom vectorwhich is to be estimated based on the observation vectory. One wishes to construct a linear estimatorx^=Hy+c{\displaystyle {\hat {x}}=Hy+c}for some matrixHand vectorc. Then, the orthogonality principle states that an estimatorx^{\displaystyle {\hat {x}}}achievesminimum mean square errorif and only if Ifxandyhave zero mean, then it suffices to require the first condition. Supposexis aGaussian random variablewith meanmand varianceσx2.{\displaystyle \sigma _{x}^{2}.}Also suppose we observe a valuey=x+w,{\displaystyle y=x+w,}wherewis Gaussian noise which is independent ofxand has mean 0 and varianceσw2.{\displaystyle \sigma _{w}^{2}.}We wish to find a linear estimatorx^=hy+c{\displaystyle {\hat {x}}=hy+c}minimizing the MSE. Substituting the expressionx^=hy+c{\displaystyle {\hat {x}}=hy+c}into the two requirements of the orthogonality principle, we obtain and Solving these two linear equations forhandcresults in so that the linear minimum mean square error estimator is given by This estimator can be interpreted as a weighted average between the noisy measurementsyand the prior expected valuem. If the noise varianceσw2{\displaystyle \sigma _{w}^{2}}is low compared with the variance of the priorσx2{\displaystyle \sigma _{x}^{2}}(corresponding to a highSNR), then most of the weight is given to the measurementsy, which are deemed more reliable than the prior information. Conversely, if the noise variance is relatively higher, then the estimate will be close tom, as the measurements are not reliable enough to outweigh the prior information. Finally, note that because the variablesxandyare jointly Gaussian, the minimum MSE estimator is linear.[2]Therefore, in this case, the estimator above minimizes the MSE among all estimators, not only linear estimators. LetV{\displaystyle V}be aHilbert spaceof random variables with aninner productdefined by⟨x,y⟩=E⁡{xHy}{\displaystyle \langle x,y\rangle =\operatorname {E} \{x^{H}y\}}. SupposeW{\displaystyle W}is aclosedsubspace ofV{\displaystyle V}, representing the space of all possible estimators. One wishes to find a vectorx^∈W{\displaystyle {\hat {x}}\in W}which will approximate a vectorx∈V{\displaystyle x\in V}. More accurately, one would like to minimize the mean squared error (MSE)E⁡‖x−x^‖2{\displaystyle \operatorname {E} \|x-{\hat {x}}\|^{2}}betweenx^{\displaystyle {\hat {x}}}andx{\displaystyle x}. In the special case of linear estimators described above, the spaceV{\displaystyle V}is the set of all functions ofx{\displaystyle x}andy{\displaystyle y}, whileW{\displaystyle W}is the set of linear estimators, i.e., linear functions ofy{\displaystyle y}only. Other settings which can be formulated in this way include the subspace ofcausallinear filters and the subspace of all (possibly nonlinear) estimators. 
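The scalar example above can be reproduced numerically by solving the two orthogonality conditions for h and c using sample moments; the prior mean and the variances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, var_x, var_w = 1.0, 4.0, 1.0                    # illustrative prior mean and variances

# Monte Carlo samples of the jointly distributed (x, y) from the example above.
x = rng.normal(m, np.sqrt(var_x), size=1_000_000)
y = x + rng.normal(0.0, np.sqrt(var_w), size=x.size)

# Orthogonality conditions for x_hat = h*y + c:
#   E[(x - h*y - c) * y] = 0   and   E[x - h*y - c] = 0,
# i.e. two linear equations in (h, c) built from sample moments.
A = np.array([[np.mean(y * y), np.mean(y)],
              [np.mean(y),     1.0      ]])
b = np.array([np.mean(x * y), np.mean(x)])
h, c = np.linalg.solve(A, b)

print("h =", round(h, 4), " c =", round(c, 4))
# For this jointly Gaussian case the solution approaches the weighted-average form,
# h = var_x / (var_x + var_w) = 0.8 and c = m * var_w / (var_x + var_w) = 0.2.
```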
Geometrically, we can see this problem by the following simple case whereW{\displaystyle W}is aone-dimensionalsubspace: We want to find the closest approximation to the vectorx{\displaystyle x}by a vectorx^{\displaystyle {\hat {x}}}in the spaceW{\displaystyle W}. From the geometric interpretation, it is intuitive that the best approximation, or smallest error, occurs when the error vector,e{\displaystyle e}, is orthogonal to vectors in the spaceW{\displaystyle W}. More accurately, the general orthogonality principle states the following: Given a closed subspaceW{\displaystyle W}of estimators within a Hilbert spaceV{\displaystyle V}and an elementx{\displaystyle x}inV{\displaystyle V}, an elementx^∈W{\displaystyle {\hat {x}}\in W}achieves minimum MSE among all elements inW{\displaystyle W}if and only ifE⁡{(x−x^)yT}=0{\displaystyle \operatorname {E} \{(x-{\hat {x}})y^{T}\}=0}for ally∈W.{\displaystyle y\in W.} Stated in such a manner, this principle is simply a statement of theHilbert projection theorem. Nevertheless, the extensive use of this result in signal processing has resulted in the name "orthogonality principle." The following is one way to find theminimum mean square errorestimator by using the orthogonality principle. We want to be able to approximate a vectorx{\displaystyle x}by where is the approximation ofx{\displaystyle x}as a linear combination of vectors in the subspaceW{\displaystyle W}spanned byp1,p2,….{\displaystyle p_{1},p_{2},\ldots .}Therefore, we want to be able to solve for the coefficients,ci{\displaystyle c_{i}}, so that we may write our approximation in known terms. By the orthogonality theorem, the square norm of the error vector,‖e‖2{\displaystyle \left\Vert e\right\Vert ^{2}}, is minimized when, for allj, Developing this equation, we obtain If there is a finite numbern{\displaystyle n}of vectorspi{\displaystyle p_{i}}, one can write this equation in matrix form as Assuming thepi{\displaystyle p_{i}}arelinearly independent, theGramian matrixcan be inverted to obtain thus providing an expression for the coefficientsci{\displaystyle c_{i}}of the minimum mean square error estimator.
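A minimal sketch of this coefficient formula, using ordinary Euclidean vectors as the Hilbert space, is given below; the vectors are randomly generated for illustration.

```python
import numpy as np

# The columns of P span the subspace W; the inner product is <a, b> = a.T b.
rng = np.random.default_rng(3)
P = rng.normal(size=(6, 3))        # p_1, p_2, p_3 as columns (linearly independent)
x = rng.normal(size=6)             # the vector to be approximated

G = P.T @ P                        # Gramian matrix, entries <p_i, p_j>
d = P.T @ x                        # entries <p_j, x>
c = np.linalg.solve(G, d)          # coefficients c_i of the best approximation
x_hat = P @ c

# The error is orthogonal to every p_j, as the orthogonality principle requires.
print(np.round(P.T @ (x - x_hat), 12))        # ~ [0, 0, 0]
```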
https://en.wikipedia.org/wiki/Orthogonality_principle
Insignal processing, theWiener filteris afilterused to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming knownstationarysignal and noise spectra, and additive noise. The Wiener filter minimizes themean square errorbetween the estimated random process and the desired process.[1][2] The goal of the wiener filter is to compute astatistical estimateof an unknown signal using a related signal as an input and filtering it to produce the estimate. For example, the known signal might consist of an unknown signal of interest that has been corrupted by additivenoise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on astatisticalapproach, and a more statistical account of the theory is given in theminimum mean square error (MMSE) estimatorarticle. Typical deterministic filters are designed for a desiredfrequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks thelinear time-invariantfilter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:[3] This filter is frequently used in the process ofdeconvolution; for this application, seeWiener deconvolution. Lets(t+α){\displaystyle s(t+\alpha )}be an unknown signal which must be estimated from a measurement signalx(t){\displaystyle x(t)}, whereα{\displaystyle \alpha }is a tunable parameter.α>0{\displaystyle \alpha >0}is known as prediction,α=0{\displaystyle \alpha =0}is known as filtering, andα<0{\displaystyle \alpha <0}is known as smoothing (see Wiener filtering chapter of[3]for more details). The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where acausalfilter is desired (using an infinite amount of past data), and thefinite impulse response(FIR) case where only input data is used (i.e. the result or output is not fed back into the filter as in the IIR case). The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect;Norman Levinsongave the FIR solution in an appendix of Wiener's book. whereS{\displaystyle S}arespectral densities. Provided thatg(t){\displaystyle g(t)}is optimal, then theminimum mean-square errorequation reduces to and the solutiong(t){\displaystyle g(t)}is the inverse two-sidedLaplace transformofG(s){\displaystyle G(s)}. where This general formula is complicated and deserves a more detailed explanation. To write down the solutionG(s){\displaystyle G(s)}in a specific case, one should follow these steps:[4] The causalfinite impulse response(FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates the input matrix X with estimates of the auto-correlation of the input signal (T) and populates the output vector Y with estimates of the cross-correlation between the output and input signals (V). 
In order to derive the coefficients of the Wiener filter, consider the signalw[n] being fed to a Wiener filter of order (number of past taps)Nand with coefficients{a0,⋯,aN}{\displaystyle \{a_{0},\cdots ,a_{N}\}}. The output of the filter is denotedx[n] which is given by the expression The residual error is denotede[n] and is defined ase[n] =x[n] −s[n] (see the correspondingblock diagram). The Wiener filter is designed so as to minimize the mean square error (MMSEcriteria) which can be stated concisely as follows: whereE[⋅]{\displaystyle E[\cdot ]}denotes the expectation operator. In the general case, the coefficientsai{\displaystyle a_{i}}may be complex and may be derived for the case wherew[n] ands[n] are complex as well. With a complex signal, the matrix to be solved is aHermitianToeplitz matrix, rather thansymmetricToeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as: To find the vector[a0,…,aN]{\displaystyle [a_{0},\,\ldots ,\,a_{N}]}which minimizes the expression above, calculate its derivative with respect to eachai{\displaystyle a_{i}} Assuming thatw[n] ands[n] are each stationary and jointly stationary, the sequencesRw[m]{\displaystyle R_{w}[m]}andRws[m]{\displaystyle R_{ws}[m]}known respectively as the autocorrelation ofw[n] and the cross-correlation betweenw[n] ands[n] can be defined as follows: The derivative of the MSE may therefore be rewritten as: Note that for realw[n]{\displaystyle w[n]}, the autocorrelation is symmetric:Rw[j−i]=Rw[i−j]{\displaystyle R_{w}[j-i]=R_{w}[i-j]}Letting the derivative be equal to zero results in: which can be rewritten (using the above symmetric property) in matrix form These equations are known as theWiener–Hopf equations. The matrixTappearing in the equation is a symmetricToeplitz matrix. Under suitable conditions onR{\displaystyle R}, these matrices are known to be positive definite and therefore non-singular yielding a unique solution to the determination of the Wiener filter coefficient vector,a=T−1v{\displaystyle \mathbf {a} =\mathbf {T} ^{-1}\mathbf {v} }. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations known as theLevinson-Durbinalgorithm so an explicit inversion ofTis not required. In some articles, the cross correlation function is defined in the opposite way:Rsw[m]=E{w[n]s[n+m]}{\displaystyle R_{sw}[m]=E\{w[n]s[n+m]\}}Then, thev{\displaystyle \mathbf {v} }matrix will containRsw[0]…Rsw[N]{\displaystyle R_{sw}[0]\ldots R_{sw}[N]}; this is just a difference in notation. Whichever notation is used, note that for realw[n],s[n]{\displaystyle w[n],s[n]}:Rsw[k]=Rws[−k]{\displaystyle R_{sw}[k]=R_{ws}[-k]} The realization of the causal Wiener filter looks a lot like the solution to theleast squaresestimate, except in the signal processing domain. The least squares solution, for input matrixX{\displaystyle \mathbf {X} }and output vectory{\displaystyle \mathbf {y} }is The FIR Wiener filter is related to theleast mean squares filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution. For complex signals, the derivation of the complex Wiener filter is performed by minimizingE[|e[n]|2]{\displaystyle E\left[|e[n]|^{2}\right]}=E[e[n]e∗[n]]{\displaystyle E\left[e[n]e^{*}[n]\right]}. 
This involves computing partial derivatives with respect to both the real and imaginary parts ofai{\displaystyle a_{i}}, and requiring them both to be zero. The resulting Wiener–Hopf equations are: which can be rewritten in matrix form: Note here that:Rw[−k]=Rw∗[k]Rsw[k]=Rws∗[−k]{\displaystyle {\begin{aligned}R_{w}[-k]&=R_{w}^{*}[k]\\R_{sw}[k]&=R_{ws}^{*}[-k]\end{aligned}}} The Wiener coefficient vector is then computed as:a=(T−1v)∗{\displaystyle \mathbf {a} ={(\mathbf {T} ^{-1}\mathbf {v} )}^{*}} The Wiener filter has a variety of applications in signal processing,image processing,[5]control systems, and digital communications. These applications generally fall into one of four main categories: For example, the Wiener filter can be used in image processing to remove noise from a picture; applying the Mathematica function WienerFilter[image, 2] to a noisy image produces a denoised estimate of it. It is commonly used to denoise audio signals, especially speech, as a preprocessor beforespeech recognition. The filter was proposed byNorbert Wienerduring the 1940s and published in 1949.[6][7]The discrete-time equivalent of Wiener's work was derived independently byAndrey Kolmogorovand published in 1941.[8]Hence the theory is often called theWiener–Kolmogorovfiltering theory (cf.Kriging). The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others including theKalman filter.
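A minimal sketch of the FIR Wiener filter described above follows. The correlation estimates use the common conventions Rw[m] = E{w[n]w[n + m]} and Rws[m] = E{w[n]s[n + m]} (the defining formulas are not reproduced in the text), and the signals are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 20_000, 8                                  # samples and filter order (N past taps)

# Illustrative signals: s is a slowly varying (correlated) process, w is s plus white noise.
s = np.convolve(rng.normal(size=n), np.ones(10) / 10, mode="same")
w = s + 0.5 * rng.normal(size=n)

def xcorr(a, b, max_lag):
    """Biased sample estimate of E{a[n] b[n + m]} for m = 0..max_lag."""
    return np.array([np.mean(a[: n - m] * b[m:]) for m in range(max_lag + 1)])

R_w  = xcorr(w, w, N)                             # autocorrelation of the input
R_ws = xcorr(w, s, N)                             # cross-correlation with the target

# Wiener-Hopf equations T a = v, with T the symmetric Toeplitz autocorrelation matrix.
T = np.array([[R_w[abs(i - j)] for j in range(N + 1)] for i in range(N + 1)])
a = np.linalg.solve(T, R_ws)                      # Levinson-Durbin could be used instead

x_hat = np.convolve(w, a)[:n]                     # filter output sum_i a_i w[n - i]
print("MSE, raw noisy input :", round(np.mean((w - s) ** 2), 4))
print("MSE, Wiener filtered :", round(np.mean((x_hat - s) ** 2), 4))
```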
https://en.wikipedia.org/wiki/Wiener_filter
Instatisticsandcontrol theory,Kalman filtering(also known aslinear quadratic estimation) is analgorithmthat uses a series of measurements observed over time, includingstatistical noiseand other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating ajoint probability distributionover the variables for each time-step. The filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter is also provided showing how the filter relates to maximum likelihood statistics.[1]The filter is named afterRudolf E. Kálmán. Kalman filtering[2]has numerous technological applications. A common application is forguidance, navigation, and controlof vehicles, particularly aircraft, spacecraft and shipspositioned dynamically.[3]Furthermore, Kalman filtering is much applied intime seriesanalysis tasks such assignal processingandeconometrics. Kalman filtering is also important for roboticmotion planningand control,[4][5]and can be used fortrajectory optimization.[6]Kalman filtering also works for modeling thecentral nervous system's control of movement. Due to the time delay between issuing motor commands and receivingsensory feedback, the use of Kalman filters[7]provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.[8] The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the currentstate variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using aweighted average, with more weight given to estimates with greater certainty. The algorithm isrecursive. It can operate inreal time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required. Optimality of Kalman filtering assumes that errors have anormal (Gaussian)distribution. In the words ofRudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear."[9]Regardless of Gaussianity, however, if the process and measurement covariances are known, then the Kalman filter is the best possiblelinearestimator in theminimum mean-square-error sense,[10]although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.[11] Extensions andgeneralizationsof the method have also been developed, such as theextended Kalman filterand theunscented Kalman filterwhich work onnonlinear systems. The basis is ahidden Markov modelsuch that thestate spaceof thelatent variablesiscontinuousand all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully inmulti-sensor fusion,[12]and distributedsensor networksto develop distributed orconsensusKalman filtering.[13] The filtering method is named for HungarianémigréRudolf E. Kálmán, althoughThorvald Nicolai Thiele[14][15]andPeter Swerlingdeveloped a similar algorithm earlier. Richard S. 
Bucy of theJohns Hopkins Applied Physics Laboratorycontributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Kalman was inspired to derive the Kalman filter by applying state variables to theWiener filtering problem.[16]Stanley F. Schmidtis generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[17]It was during a visit by Kálmán to theNASA Ames Research Centerthat Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for theApollo programresulting in its incorporation in theApollo navigation computer.[18]: 16 This digital filter is sometimes termed theStratonovich–Kalman–Bucy filterbecause it is a special case of a more general, nonlinear filter developed by theSovietmathematicianRuslan Stratonovich.[19][20][21][22]In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow.[23] This Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961). The Apollo computer used 2k of magnetic core RAM and 36k wire rope [...]. The CPU was built from ICs [...]. Clock speed was under 100 kHz [...]. The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable. Kalman filters have been vital in the implementation of the navigation systems ofU.S. Navynuclearballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy'sTomahawk missileand theU.S. Air Force'sAir Launched Cruise Missile. They are also used in the guidance and navigation systems ofreusable launch vehiclesand theattitude controland navigation systems of spacecraft which dock at theInternational Space Station.[24] Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (itsstate) that is better than the estimate obtained by using only one measurement alone. As such, it is a commonsensor fusionanddata fusionalgorithm. Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using aweighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from thecovariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. 
This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that Kalman filter worksrecursivelyand requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state. The measurements' certainty-grading and current-state estimate are important considerations. It is common to discuss the filter's response in terms of the Kalman filter'sgain. The Kalman gain is the weight given to the measurements and current-state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease the responsiveness. When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded intomatricesbecause of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances. As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with aGPSunit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known asdead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it willdriftover time as small errors accumulate. For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physicallaws of motion(the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping. The Kalman filter is an efficientrecursive filterestimatingthe internal state of alinear dynamic systemfrom a series ofnoisymeasurements. 
It is used in a wide range ofengineeringandeconometricapplications fromradarandcomputer visionto estimation of structural macroeconomic models,[25][26]and is an important topic incontrol theoryandcontrol systemsengineering. Together with thelinear-quadratic regulator(LQR), the Kalman filter solves thelinear–quadratic–Gaussian controlproblem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory. In most applications, the internal state is much larger (has moredegrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state. For theDempster–Shafer theory, each state equation or observation is considered a special case of alinear belief functionand the Kalman filtering is a special case of combining linear belief functions on a join-tree orMarkov tree. Additional methods includebelief filteringwhich use Bayes or evidential updates to the state equations. A wide variety of Kalman filters exists by now: Kalman's original formulation - now termed the "simple" Kalman filter, theKalman–Bucy filter, Schmidt's "extended" filter, theinformation filter, and a variety of "square-root" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is thephase-locked loop, which is now ubiquitous in radios, especiallyfrequency modulation(FM) radios, television sets,satellite communicationsreceivers, outer space communications systems, and nearly any otherelectroniccommunications equipment. Kalman filtering is based onlinear dynamic systemsdiscretized in the time domain. They are modeled on aMarkov chainbuilt onlinear operatorsperturbed by errors that may includeGaussiannoise. Thestateof the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as avectorofreal numbers. At eachdiscrete timeincrement, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman Filter and those of the hidden Markov model. A review of this and other models is given in Roweis andGhahramani(1999)[27]and Hamilton (1994), Chapter 13.[28] In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the matrices, for each time-stepk{\displaystyle k}, following: As seen below, it is common in many applications that the matricesF{\displaystyle \mathbf {F} },H{\displaystyle \mathbf {H} },Q{\displaystyle \mathbf {Q} },R{\displaystyle \mathbf {R} }, andB{\displaystyle \mathbf {B} }are constant across time, in which case theirk{\displaystyle k}index may be dropped. 
The Kalman filter model assumes the true state at timek{\displaystyle k}is evolved from the state atk−1{\displaystyle k-1}according to where IfQ{\displaystyle \mathbf {Q} }is independent of time, one may, following Roweis and Ghahramani,[27]: 307writew∙{\displaystyle \mathbf {w} _{\bullet }}instead ofwk{\displaystyle \mathbf {w} _{k}}to emphasize that the noise has no explicit knowledge of time. At timek{\displaystyle k}an observation (or measurement)zk{\displaystyle \mathbf {z} _{k}}of the true statexk{\displaystyle \mathbf {x} _{k}}is made according to where Analogously to the situation forwk{\displaystyle \mathbf {w} _{k}}, one may writev∙{\displaystyle \mathbf {v} _{\bullet }}instead ofvk{\displaystyle \mathbf {v} _{k}}ifR{\displaystyle \mathbf {R} }is independent of time. The initial state, and the noise vectors at each step{x0,w1,…,wk,v1,…,vk}{\displaystyle \{\mathbf {x} _{0},\mathbf {w} _{1},\dots ,\mathbf {w} _{k},\mathbf {v} _{1},\dots ,\mathbf {v} _{k}\}}are all assumed to be mutuallyindependent. Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory usingrobust control.[29][30] The Kalman filter is arecursiveestimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notationx^n∣m{\displaystyle {\hat {\mathbf {x} }}_{n\mid m}}represents the estimate ofx{\displaystyle \mathbf {x} }at timengiven observations up to and including at timem≤n. The state of the filter is represented by two variables: The algorithm structure of the Kalman filter resembles that ofAlpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as thea prioristate estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, theinnovation(the pre-fit residual), i.e. the difference between the currenta prioriprediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed thea posterioristate estimate. Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. 
Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matricesHk).[31][32] The formula for the updated (a posteriori) estimate covariance above is valid for the optimalKkgain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in thederivationssection, where the formula valid for anyKkis also shown. A more intuitive way to express the updated state estimate (x^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}) is: This expression reminds us of a linear interpolation,x=(1−t)(a)+t(b){\displaystyle x=(1-t)(a)+t(b)}fort{\displaystyle t}between [0,1]. In our case: This expression also resembles thealpha beta filterupdate step. If the model is accurate, and the values forx^0∣0{\displaystyle {\hat {\mathbf {x} }}_{0\mid 0}}andP0∣0{\displaystyle \mathbf {P} _{0\mid 0}}accurately reflect the distribution of the initial state values, then the following invariants are preserved: whereE⁡[ξ]{\displaystyle \operatorname {E} [\xi ]}is theexpected valueofξ{\displaystyle \xi }. That is, all estimates have a mean error of zero. Also: so covariance matrices accurately reflect the covariance of estimates. Practical implementation of a Kalman Filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matricesQkandRk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is theautocovariance least-squares (ALS)technique that uses the time-laggedautocovariancesof routine operating data to estimate the covariances.[33][34]TheGNU OctaveandMatlabcode used to calculate the noise covariance matrices using the ALS technique is available online using theGNU General Public License.[35]Field Kalman Filter (FKF), a Bayesian algorithm, which allows simultaneous estimation of the state, parameters and noise covariance has been proposed.[36]The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, thus suggesting that the FKF algorithm may possibly be a worthwhile alternative to the Autocovariance Least-Squares methods. Another approach is theOptimized Kalman Filter(OKF), which considers the covariance matrices not as representatives of the noise, but rather, as parameters aimed to achieve the most accurate state estimation.[37]These two views coincide under the KF assumptions, but often contradict each other in real systems. Thus, OKF's state estimation is more robust to modeling inaccuracies. The Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters.[38]Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by settingQkandRkto the covariances of the noise. Instead, in that case, the parametersQkandRkmay be set to explicitly optimize the state estimation,[37]e.g., using standardsupervised learning. After the covariances are set, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. 
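The predict and update phases described above reduce to a handful of matrix operations. The following is a sketch of the standard textbook recursion, not an excerpt from any particular library; the simplified covariance update in the last line is valid only for the optimal gain, as noted above.

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """A priori estimate: project the state and covariance forward one step."""
    if B is not None and u is not None:
        x_pred = F @ x + B @ u
    else:
        x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    """A posteriori estimate: fold one measurement into the prediction."""
    y = z - H @ x_pred                      # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred   # valid only for the optimal gain
    return x_new, P_new
```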
If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of theinnovationsmeasures filter performance. Several different methods can be used for this purpose.[39]If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.[40][41] Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δtseconds, but these measurements are imprecise; we want to maintain a model of the truck's position andvelocity. We show here how we derive the model from which we create our Kalman filter. SinceF,H,R,Q{\displaystyle \mathbf {F} ,\mathbf {H} ,\mathbf {R} ,\mathbf {Q} }are constant, their time indices are dropped. The position and velocity of the truck are described by the linear state space wherex˙{\displaystyle {\dot {x}}}is the velocity, that is, the derivative of position with respect to time. We assume that between the (k− 1) andktimestep, uncontrolled forces cause a constant acceleration ofakthat isnormally distributedwith mean 0 and standard deviationσa. FromNewton's laws of motionwe conclude that (there is noBu{\displaystyle \mathbf {B} u}term since there are no known control inputs. Instead,akis the effect of an unknown input andG{\displaystyle \mathbf {G} }applies that effect to the state vector) where so that where The matrixQ{\displaystyle \mathbf {Q} }is not full rank (it is of rank one ifΔt≠0{\displaystyle \Delta t\neq 0}). Hence, the distributionN(0,Q){\displaystyle N(0,\mathbf {Q} )}is not absolutely continuous and hasno probability density function. Another way to express this, avoiding explicit degenerate distributions is given by At each time phase, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noisevkis also distributed normally, with mean 0 and standard deviationσz. where and We know the initial starting state of the truck with perfect precision, so we initialize and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix: If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal: The filter will then prefer the information from the first measurements over the information already in the model. For simplicity, assume that the control inputuk=0{\displaystyle \mathbf {u} _{k}=\mathbf {0} }. Then the Kalman filter may be written: A similar equation holds if we include a non-zero control input. Gain matricesKk{\displaystyle \mathbf {K} _{k}}evolve independently of the measurementszk{\displaystyle \mathbf {z} _{k}}. From above, the four equations needed for updating the Kalman gain are as follows: Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matricesKk{\displaystyle \mathbf {K} _{k}}to an asymptotic matrixK∞{\displaystyle \mathbf {K} _{\infty }}applies for conditions established in Walrand and Dimakis.[42]Simulations establish the number of steps to convergence. For the moving truck example described above, withΔt=1{\displaystyle \Delta t=1}. 
andσa2=σz2=σx2=σx˙2=1{\displaystyle \sigma _{a}^{2}=\sigma _{z}^{2}=\sigma _{x}^{2}=\sigma _{\dot {x}}^{2}=1}, simulation shows convergence in10{\displaystyle 10}iterations. Using the asymptotic gain, and assumingHk{\displaystyle \mathbf {H} _{k}}andFk{\displaystyle \mathbf {F} _{k}}are independent ofk{\displaystyle k}, the Kalman filter becomes alinear time-invariantfilter: The asymptotic gainK∞{\displaystyle \mathbf {K} _{\infty }}, if it exists, can be computed by first solving the following discreteRiccati equationfor the asymptotic state covarianceP∞{\displaystyle \mathbf {P} _{\infty }}:[42] The asymptotic gain is then computed as before. Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by where This leads to an estimator of the form The Kalman filter can be derived as ageneralized least squaresmethod operating on previous data.[43] Starting with our invariant on the error covariancePk|kas above substitute in the definition ofx^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}} and substitutey~k{\displaystyle {\tilde {\mathbf {y} }}_{k}} andzk{\displaystyle \mathbf {z} _{k}} and by collecting the error vectors we get Since the measurement errorvkis uncorrelated with the other terms, this becomes by the properties ofvector covariancethis becomes which, using our invariant onPk|k−1and the definition ofRkbecomes This formula (sometimes known as theJoseph formof the covariance update equation) is valid for any value ofKk. It turns out that ifKkis the optimal Kalman gain, this can be simplified further as shown below. The Kalman filter is aminimum mean-square error (MMSE)estimator. The error in thea posterioristate estimation is We seek to minimize the expected value of the square of the magnitude of this vector,E⁡[‖xk−x^k|k‖2]{\displaystyle \operatorname {E} \left[\left\|\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k|k}\right\|^{2}\right]}. This is equivalent to minimizing thetraceof thea posterioriestimatecovariance matrixPk|k{\displaystyle \mathbf {P} _{k|k}}. By expanding out the terms in the equation above and collecting, we get: The trace is minimized when itsmatrix derivativewith respect to the gain matrix is zero. Using thegradient matrix rulesand the symmetry of the matrices involved we find that Solving this forKkyields the Kalman gain: This gain, which is known as theoptimal Kalman gain, is the one that yields MMSE estimates when used. The formula used to calculate thea posteriorierror covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right bySkKkT, it follows that Referring back to our expanded formula for thea posteriorierror covariance, we find the last two terms cancel out, giving This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems withnumerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; thea posteriorierror covariance formula as derived above (Joseph form) must be used. The Kalman filtering equations provide an estimate of the statex^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}and its error covariancePk∣k{\displaystyle \mathbf {P} _{k\mid k}}recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. 
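For the moving-truck example above (Δt = 1 and unit variances), the asymptotic gain can be found by simply iterating the covariance recursion until the gain stops changing. A sketch, with the model matrices rebuilt from the example, is below; a dedicated discrete algebraic Riccati equation solver such as scipy.linalg.solve_discrete_are could be used instead.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T                       # sigma_a^2 = 1
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])             # sigma_z^2 = 1
P = np.eye(2)                     # sigma_x^2 = sigma_xdot^2 = 1 (initial a priori covariance)

K_prev = np.zeros((2, 1))
for step in range(1, 200):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # gain at this step
    P_post = (np.eye(2) - K @ H) @ P        # a posteriori covariance
    P = F @ P_post @ F.T + Q                # next a priori covariance
    if np.max(np.abs(K - K_prev)) < 1e-9:
        break                               # gain has effectively reached K_infinity
    K_prev = K
```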
This section analyzes the effect of uncertainties in the statistical inputs to the filter.[44]In the absence of reliable statistics or the true values of noise covariance matricesQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}, the expression no longer provides the actual error covariance. In other words,Pk∣k≠E[(xk−x^k∣k)(xk−x^k∣k)T]{\displaystyle \mathbf {P} _{k\mid k}\neq E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]}. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices.[citation needed]This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matricesFk{\displaystyle \mathbf {F} _{k}}andHk{\displaystyle \mathbf {H} _{k}}that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator. This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted byQka{\displaystyle \mathbf {Q} _{k}^{a}}andRka{\displaystyle \mathbf {R} _{k}^{a}}respectively, whereas the design values used in the estimator areQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}respectively. The actual error covariance is denoted byPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}andPk∣k{\displaystyle \mathbf {P} _{k\mid k}}as computed by the Kalman filter is referred to as the Riccati variable. WhenQk≡Qka{\displaystyle \mathbf {Q} _{k}\equiv \mathbf {Q} _{k}^{a}}andRk≡Rka{\displaystyle \mathbf {R} _{k}\equiv \mathbf {R} _{k}^{a}}, this means thatPk∣k=Pk∣ka{\displaystyle \mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k}^{a}}. While computing the actual error covariance usingPk∣ka=E[(xk−x^k∣k)(xk−x^k∣k)T]{\displaystyle \mathbf {P} _{k\mid k}^{a}=E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]}, substituting forx^k∣k{\displaystyle {\widehat {\mathbf {x} }}_{k\mid k}}and using the fact thatE[wkwkT]=Qka{\displaystyle E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}^{a}}andE[vkvkT]=Rka{\displaystyle E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}^{a}}, results in the following recursive equations forPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}: and While computingPk∣k{\displaystyle \mathbf {P} _{k\mid k}}, by design the filter implicitly assumes thatE[wkwkT]=Qk{\displaystyle E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}}andE[vkvkT]=Rk{\displaystyle E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}}. The recursive expressions forPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}andPk∣k{\displaystyle \mathbf {P} _{k\mid k}}are identical except for the presence ofQka{\displaystyle \mathbf {Q} _{k}^{a}}andRka{\displaystyle \mathbf {R} _{k}^{a}}in place of the design valuesQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}respectively. Researches have been done to analyze Kalman filter system's robustness.[45] One problem with the Kalman filter is itsnumerical stability. 
If the process noise covarianceQkis small, round-off error often causes a small positive eigenvalue of the state covariance matrixPto be computed as a negative number. This renders the numerical representation ofPindefinite, while its true form ispositive-definite. Positive definite matrices have the property that they have a factorization into the product of anon-singular,lower-triangular matrixSand itstranspose:P=S·ST. The factorScan be computed efficiently using theCholesky factorizationalgorithm. This product form of the covariance matrixPis guaranteed to be symmetric, and for all 1 <= k <= n, the k-th diagonal elementPkkis equal to theeuclidean normof the k-th row ofS, which is necessarily positive. An equivalent form, which avoids many of thesquare rootoperations involved in theCholesky factorizationalgorithm, yet preserves the desirable numerical properties, is the U-D decomposition form,P=U·D·UT, whereUis aunit triangular matrix(with unit diagonal), andDis a diagonal matrix. Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[46]: 69while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton.[46][47] TheL·D·LTdecompositionof the innovation covariance matrixSkis the basis for another type of numerically efficient and robust square root filter.[48]The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into theL·D·LTstructure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[49]Any singular covariance matrix ispivotedso that the first diagonal partition isnonsingularandwell-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variablesHk·xk|k-1that are associated with auxiliary observations inyk. Thel·d·ltsquare-root filter requiresorthogonalizationof the observation vector.[47][48]This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[50] The Kalman filter is efficient for sequential data processing oncentral processing units(CPUs), but in its original form it is inefficient on parallel architectures such asgraphics processing units(GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä and García-Fernández (2021).[51]The filter solution can then be retrieved by the use of aprefix sumalgorithm which can be efficiently implemented on GPU.[52]This reduces thecomputational complexityfromO(N){\displaystyle O(N)}in the number of time steps toO(log⁡(N)){\displaystyle O(\log(N))}. The Kalman filter can be presented as one of the simplestdynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. 
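A simple, if less sophisticated, safeguard against the loss of positive definiteness described here is to use the Joseph-form update and re-symmetrize the result; an attempted Cholesky factorization then doubles as a cheap positive-definiteness check. The sketch below illustrates only that idea and is not the Bierman–Thornton U-D or L·D·Lᵀ filtering discussed above.

```python
import numpy as np

def joseph_update(x_pred, P_pred, z, H, R, K=None):
    """Measurement update in Joseph form, which stays symmetric and
    positive semi-definite for any gain K (optimal or not)."""
    S = H @ P_pred @ H.T + R
    if K is None:
        K = P_pred @ H.T @ np.linalg.inv(S)
    I = np.eye(len(x_pred))
    A = I - K @ H
    P_new = A @ P_pred @ A.T + K @ R @ K.T
    P_new = 0.5 * (P_new + P_new.T)          # force exact symmetry against round-off
    x_new = x_pred + K @ (z - H @ x_pred)
    return x_new, P_new

def is_positive_definite(P):
    """Cholesky succeeds if and only if P is numerically positive definite."""
    try:
        np.linalg.cholesky(P)
        return True
    except np.linalg.LinAlgError:
        return False
```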
Similarly,recursive Bayesian estimationcalculatesestimatesof an unknownprobability density function(PDF) recursively over time using incoming measurements and a mathematical process model.[53] In recursive Bayesian estimation, the true state is assumed to be an unobservedMarkov process, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state. Similarly, the measurement at thek-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state. Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as: However, when a Kalman filter is used to estimate the statex, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. This results in thepredictandupdatephases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k− 1)-th timestep to thek-th and the probability distribution associated with the previous state, over all possiblexk−1{\displaystyle x_{k-1}}. The measurement set up to timetis The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state. The denominator is a normalization term. The remaining probability density functions are The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF forxk{\displaystyle \mathbf {x} _{k}}given the measurementsZk{\displaystyle \mathbf {Z} _{k}}is the Kalman filter estimate. Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as agenerative model, i.e., a process forgeneratinga stream of random observationsz= (z0,z1,z2, ...). Specifically, the process is This process has identical structure to thehidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. In some applications, it is useful to compute theprobabilitythat a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as themarginal likelihoodbecause it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models usingBayesian model comparison. It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. 
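A sketch of that side-effect computation, which accumulates the Gaussian log-density of each innovation produced by the filter (the factorization behind it is spelled out below):

```python
import numpy as np

def log_likelihood_increment(y, S):
    """Log-density of one innovation y ~ N(0, S):
    -0.5 * (y^T S^{-1} y + log det S + d_y * log(2*pi))."""
    d_y = y.shape[0]
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (y @ np.linalg.solve(S, y) + logdet + d_y * np.log(2.0 * np.pi))

# Inside the filtering loop, accumulate the log marginal likelihood as
#   ell += log_likelihood_increment(z_k - H @ x_pred, H @ P_pred @ H.T + R)
# starting from ell = 0 before the first measurement.
```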
By thechain rule, the likelihood can be factored as the product of the probability of each observation given previous observations, and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimatex^k∣k−1,Pk∣k−1.{\displaystyle {\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}.}Thus the marginal likelihood is given by i.e., a product of Gaussian densities, each corresponding to the density of one observationzkunder the current filtering distributionHkx^k∣k−1,Sk{\displaystyle \mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}}. This can easily be computed as a simple recursive update; however, to avoidnumeric underflow, in a practical implementation it is usually desirable to compute thelogmarginal likelihoodℓ=log⁡p(z){\displaystyle \ell =\log p(\mathbf {z} )}instead. Adopting the conventionℓ(−1)=0{\displaystyle \ell ^{(-1)}=0}, this can be done via the recursive update rule wheredy{\displaystyle d_{y}}is the dimension of the measurement vector.[54] An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown apriori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found. In cases where the dimension of the observation vectoryis bigger than the dimension of the state space vectorx, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. Additionally, the information filter allows for system information initialization according toI1|0=P1|0−1=0{\displaystyle {I_{1|0}=P_{1|0}^{-1}=0}}, which would not be possible for the regular Kalman filter.[55]In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by theinformation matrixandinformationvector respectively. These are defined as: Similarly the predicted covariance and state have equivalent information forms, defined as: and the measurement covariance and measurement vector, which are defined as: The information update now becomes a trivial sum.[56] The main advantage of the information filter is thatNmeasurements can be filtered at each time step simply by summing their information matrices and vectors. 
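With the conventional definitions Y = P⁻¹ (information matrix) and ŷ = P⁻¹x̂ (information vector), the "trivial sum" measurement update mentioned above can be sketched as follows; with several independent measurements available at one step, their contributions are simply added.

```python
import numpy as np

def information_update(Y_prior, y_prior, z, H, R):
    """Information-form measurement update: add the measurement's
    information matrix and information vector to the prior quantities."""
    R_inv = np.linalg.inv(R)
    I_k = H.T @ R_inv @ H          # information contributed by the measurement
    i_k = H.T @ R_inv @ z          # information vector contributed by the measurement
    return Y_prior + I_k, y_prior + i_k

# For N measurements at the same time step:
#   Y_post = Y_prior + sum of all I_k ;  y_post = y_prior + sum of all i_k
```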
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.[56] The optimal fixed-lag smoother provides the optimal estimate ofx^k−N∣k{\displaystyle {\hat {\mathbf {x} }}_{k-N\mid k}}for a given fixed-lagN{\displaystyle N}using the measurements fromz1{\displaystyle \mathbf {z} _{1}}tozk{\displaystyle \mathbf {z} _{k}}.[57]It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following: where: If the estimation error covariance is defined so that then we have that the improvement on the estimation ofxt−i{\displaystyle \mathbf {x} _{t-i}}is given by: The optimal fixed-interval smoother provides the optimal estimate ofx^k∣n{\displaystyle {\hat {\mathbf {x} }}_{k\mid n}}(k<n{\displaystyle k<n}) using the measurements from a fixed intervalz1{\displaystyle \mathbf {z} _{1}}tozn{\displaystyle \mathbf {z} _{n}}. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use. The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.[58] The forward pass is the same as the regular Kalman filter algorithm. Thesefiltereda-priori and a-posteriori state estimatesx^k∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k\mid k-1}},x^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}and covariancesPk∣k−1{\displaystyle \mathbf {P} _{k\mid k-1}},Pk∣k{\displaystyle \mathbf {P} _{k\mid k}}are saved for use in the backward pass (forretrodiction). In the backward pass, we compute thesmoothedstate estimatesx^k∣n{\displaystyle {\hat {\mathbf {x} }}_{k\mid n}}and covariancesPk∣n{\displaystyle \mathbf {P} _{k\mid n}}. We start at the last time step and proceed backward in time using the following recursive equations: where xk∣k{\displaystyle \mathbf {x} _{k\mid k}}is the a-posteriori state estimate of timestepk{\displaystyle k}andxk+1∣k{\displaystyle \mathbf {x} _{k+1\mid k}}is the a-priori state estimate of timestepk+1{\displaystyle k+1}. The same notation applies to the covariance. An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman.[47]This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are whereSk{\displaystyle \mathbf {S} _{k}}is the residual covariance andC^k=I−KkHk{\displaystyle {\hat {\mathbf {C} }}_{k}=\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}}. The smoothed state and covariance can then be found by substitution in the equations or An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. 
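The RTS backward pass described above can be written compactly, assuming the a priori and a posteriori estimates from the forward pass have been stored; a sketch:

```python
import numpy as np

def rts_smoother(xs_filt, Ps_filt, xs_pred, Ps_pred, F):
    """Backward pass of the Rauch-Tung-Striebel smoother.
    xs_filt[k], Ps_filt[k] : a posteriori estimates x_{k|k}, P_{k|k}
    xs_pred[k], Ps_pred[k] : a priori estimates x_{k|k-1}, P_{k|k-1}
    """
    n = len(xs_filt)
    xs_smooth = [None] * n
    Ps_smooth = [None] * n
    xs_smooth[-1], Ps_smooth[-1] = xs_filt[-1], Ps_filt[-1]
    for k in range(n - 2, -1, -1):
        C = Ps_filt[k] @ F.T @ np.linalg.inv(Ps_pred[k + 1])     # smoother gain
        xs_smooth[k] = xs_filt[k] + C @ (xs_smooth[k + 1] - xs_pred[k + 1])
        Ps_smooth[k] = Ps_filt[k] + C @ (Ps_smooth[k + 1] - Ps_pred[k + 1]) @ C.T
        Ps_smooth[k] = 0.5 * (Ps_smooth[k] + Ps_smooth[k].T)     # keep symmetry
    return xs_smooth, Ps_smooth
```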
However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs.[59] The MBF can also be used to perform consistency checks on the filter residuals and the difference between the value of a filter state after an update and the smoothed value of the state, that isxk∣k−xk∣n{\displaystyle \mathbf {x} _{k\mid k}-\mathbf {x} _{k\mid n}}.[60] The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.[61]This smoother is a time-varying state-space generalization of the optimal non-causalWiener filter. The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward passβk{\displaystyle \beta _{k}}may be calculated by operating the forward equations on the time-reversedαk{\displaystyle \alpha _{k}}and time reversing the result. In the case of output estimation, the smoothed estimate is given by Taking the causal part of this minimum-variance smoother yields which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. A continuous-time version of the above smoother is described in.[62][63] Expectation–maximization algorithmsmay be employed to calculate approximatemaximum likelihoodestimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[64] In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (extended Kalman filtering). Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Lety−y^{\displaystyle \mathbf {y} -{\hat {\mathbf {y} }}}denote the output estimation error exhibited by a conventional Kalman filter. Also, letW{\displaystyle \mathbf {W} }denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance ofW(y−y^){\displaystyle \mathbf {W} \left(\mathbf {y} -{\hat {\mathbf {y} }}\right)}arises by simply constructingW−1y^{\displaystyle \mathbf {W} ^{-1}{\hat {\mathbf {y} }}}. The design ofW{\displaystyle \mathbf {W} }remains an open question. 
One way of proceeding is to identify a system which generates the estimation error and settingW{\displaystyle \mathbf {W} }equal to the inverse of that system.[65]This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers. The basic Kalman filter is limited to a linear assumption. More complex systems, however, can benonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for non-linear systems are the Extended Kalman Filter and Unscented Kalman filter. The suitability of which filter to use depends on the non-linearity indices of the process and observation model.[66] In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions are ofdifferentiabletype. The functionfcan be used to compute the predicted state from the previous estimate and similarly the functionhcan be used to compute the predicted measurement from the predicted state. However,fandhcannot be applied to the covariance directly. Instead a matrix of partial derivatives (theJacobian) is computed. At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate. When the state transition and observation models—that is, the predict and update functionsf{\displaystyle f}andh{\displaystyle h}—are highly nonlinear, the extended Kalman filter can give particularly poor performance.[67][68]This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)[67]uses a deterministic sampling technique known as theunscented transformation (UT)to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way.[69]For certain systems, the resulting UKF more accurately estimates the true mean and covariance.[70]This can be verified withMonte Carlo samplingorTaylor seriesexpansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable). For arandomvectorx=(x1,…,xL){\displaystyle \mathbf {x} =(x_{1},\dots ,x_{L})}, sigma points are any set of vectors attributed with A simple choice of sigma points and weights forxk−1∣k−1{\displaystyle \mathbf {x} _{k-1\mid k-1}}in the UKF algorithm is wherex^k−1∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k-1\mid k-1}}is the mean estimate ofxk−1∣k−1{\displaystyle \mathbf {x} _{k-1\mid k-1}}. The vectorAj{\displaystyle \mathbf {A} _{j}}is thejth column ofA{\displaystyle \mathbf {A} }wherePk−1∣k−1=AAT{\displaystyle \mathbf {P} _{k-1\mid k-1}=\mathbf {AA} ^{\textsf {T}}}. 
Typically,A{\displaystyle \mathbf {A} }is obtained viaCholesky decompositionofPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}. With some care the filter equations can be expressed in such a way thatA{\displaystyle \mathbf {A} }is evaluated directly without intermediate calculations ofPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}. This is referred to as thesquare-root unscented Kalman filter.[71] The weight of the mean value,W0{\displaystyle W_{0}}, can be chosen arbitrarily. Another popular parameterization (which generalizes the above) is α{\displaystyle \alpha }andκ{\displaystyle \kappa }control the spread of the sigma points.β{\displaystyle \beta }is related to the distribution ofx{\displaystyle x}. Note that this is an overparameterization in the sense that any one ofα{\displaystyle \alpha },β{\displaystyle \beta }andκ{\displaystyle \kappa }can be chosen arbitrarily. Appropriate values depend on the problem at hand, but a typical recommendation isα=1{\displaystyle \alpha =1},β=0{\displaystyle \beta =0}, andκ≈3L/2{\displaystyle \kappa \approx 3L/2}.[citation needed]If the true distribution ofx{\displaystyle x}is Gaussian,β=2{\displaystyle \beta =2}is optimal.[72] As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance,x^k−1∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k-1\mid k-1}}andPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}, one obtainsN=2L+1{\displaystyle N=2L+1}sigma points as described in the section above. The sigma points are propagated through the transition functionf. The propagated sigma points are weighed to produce the predicted mean and covariance. whereWja{\displaystyle W_{j}^{a}}are the first-order weights of the original sigma points, andWjc{\displaystyle W_{j}^{c}}are the second-order weights. The matrixQk{\displaystyle \mathbf {Q} _{k}}is the covariance of the transition noise,wk{\displaystyle \mathbf {w} _{k}}. Given prediction estimatesx^k∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k\mid k-1}}andPk∣k−1{\displaystyle \mathbf {P} _{k\mid k-1}}, a new set ofN=2L+1{\displaystyle N=2L+1}sigma pointss0,…,s2L{\displaystyle \mathbf {s} _{0},\dots ,\mathbf {s} _{2L}}with corresponding first-order weightsW0a,…W2La{\displaystyle W_{0}^{a},\dots W_{2L}^{a}}and second-order weightsW0c,…,W2Lc{\displaystyle W_{0}^{c},\dots ,W_{2L}^{c}}is calculated.[73]These sigma points are transformed through the measurement functionh{\displaystyle h}. Then the empirical mean and covariance of the transformed points are calculated. whereRk{\displaystyle \mathbf {R} _{k}}is the covariance matrix of the observation noise,vk{\displaystyle \mathbf {v} _{k}}. Additionally, the cross covariance matrix is also needed The Kalman gain is The updated mean and covariance estimates are When the observation modelp(zk∣xk){\displaystyle p(\mathbf {z} _{k}\mid \mathbf {x} _{k})}is highly non-linear and/or non-Gaussian, it may prove advantageous to applyBayes' ruleand estimate wherep(xk∣zk)≈N(g(zk),Q(zk)){\displaystyle p(\mathbf {x} _{k}\mid \mathbf {z} _{k})\approx {\mathcal {N}}(g(\mathbf {z} _{k}),Q(\mathbf {z} _{k}))}for nonlinear functionsg,Q{\displaystyle g,Q}. This replaces the generative specification of the standard Kalman filter with adiscriminative modelfor the latent states given observations. 
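The unscented prediction and update steps described above can be sketched as follows. This sketch uses the basic single-parameter (κ) sigma-point scheme rather than the (α, β, κ) parameterization, and it redraws sigma points around the prediction before the update, as in the description above; f and h stand for user-supplied transition and observation functions.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Julier-style sigma points: the mean plus/minus scaled columns of a square root of P."""
    L = len(x)
    A = np.linalg.cholesky((L + kappa) * P)
    pts = [x]
    for j in range(L):
        pts.append(x + A[:, j])
        pts.append(x - A[:, j])
    W = np.full(2 * L + 1, 1.0 / (2.0 * (L + kappa)))
    W[0] = kappa / (L + kappa)
    return np.array(pts), W

def unscented_transform(pts, W, noise_cov):
    """Weighted mean and covariance of transformed sigma points, plus additive noise."""
    mean = W @ pts
    diff = pts - mean
    cov = (W[:, None] * diff).T @ diff + noise_cov
    return mean, cov, diff

def ukf_step(x, P, z, f, h, Q, R, kappa=1.0):
    """One unscented-Kalman-filter cycle: propagate sigma points through f and h."""
    # Predict
    pts, W = sigma_points(x, P, kappa)
    pts_f = np.array([f(p) for p in pts])
    x_pred, P_pred, _ = unscented_transform(pts_f, W, Q)
    # Update: new sigma points around the prediction, pushed through h
    pts2, W2 = sigma_points(x_pred, P_pred, kappa)
    pts_h = np.array([h(p) for p in pts2])
    z_pred, S, dZ = unscented_transform(pts_h, W2, R)
    dX = pts2 - x_pred
    P_xz = (W2[:, None] * dX).T @ dZ                 # cross covariance
    K = P_xz @ np.linalg.inv(S)                      # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```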
Under astationarystate model whereT=FTF⊺+C{\displaystyle \mathbf {T} =\mathbf {F} \mathbf {T} \mathbf {F} ^{\intercal }+\mathbf {C} }, if then given a new observationzk{\displaystyle \mathbf {z} _{k}}, it follows that[74] where Note that this approximation requiresQ(zk)−1−T−1{\displaystyle Q(\mathbf {z} _{k})^{-1}-\mathbf {T} ^{-1}}to be positive-definite; in the case that it is not, is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states[75]and can be used build filters that are particularly robust to nonstationarities in the observation model.[76] Adaptive Kalman filters allow to adapt for process dynamics which are not modeled in the process modelF(t){\displaystyle \mathbf {F} (t)}, which happens for example in the context of a maneuvering target when a constant velocity (reduced order) Kalman filter is employed for tracking.[77] Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous time version of Kalman filtering.[78][79] It is based on the state space model whereQ(t){\displaystyle \mathbf {Q} (t)}andR(t){\displaystyle \mathbf {R} (t)}represent the intensities of the two white noise termsw(t){\displaystyle \mathbf {w} (t)}andv(t){\displaystyle \mathbf {v} (t)}, respectively. The filter consists of two differential equations, one for the state estimate and one for the covariance: where the Kalman gain is given by Note that in this expression forK(t){\displaystyle \mathbf {K} (t)}the covariance of the observation noiseR(t){\displaystyle \mathbf {R} (t)}represents at the same time the covariance of the prediction error (orinnovation)y~(t)=z(t)−H(t)x^(t){\displaystyle {\tilde {\mathbf {y} }}(t)=\mathbf {z} (t)-\mathbf {H} (t){\hat {\mathbf {x} }}(t)}; these covariances are equal only in the case of continuous time.[80] The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time. The second differential equation, for the covariance, is an example of aRiccati equation. Nonlinear generalizations to Kalman–Bucy filters include continuous time extended Kalman filter. Most physical systems are represented as continuous-time models while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by where The prediction equations are derived from those of continuous-time Kalman filter without update from measurements, i.e.,K(t)=0{\displaystyle \mathbf {K} (t)=0}. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. For the case oflinear time invariantsystems, the continuous time dynamics can be exactlydiscretizedinto a discrete time system usingmatrix exponentials. The update equations are identical to those of the discrete-time Kalman filter. The traditional Kalman filter has also been employed for the recovery ofsparse, possibly dynamic, signals from noisy observations. Recent works[81][82][83]utilize notions from the theory ofcompressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems. Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers forGaussian process regression.[84]
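For the hybrid continuous-discrete case just described, a linear time-invariant model can be discretized once per sampling interval and then used with the ordinary discrete-time prediction and update. A sketch (the matrix values are assumptions, and the process-noise integral is approximated to first order):

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time model dx/dt = A x + w(t), with white-noise intensity Qc.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # constant-velocity kinematics
Qc = np.array([[0.0, 0.0],
               [0.0, 0.1]])         # noise enters through the velocity component
dt = 0.1                            # measurement interval

Fd = expm(A * dt)                   # exact discrete transition matrix for an LTI system
Qd = Qc * dt                        # first-order approximation of the exact integral
                                    # of expm(A*t) @ Qc @ expm(A*t).T over [0, dt]

# Fd and Qd are then used in the ordinary discrete predict step between measurements,
# with the usual discrete update applied whenever a measurement arrives.
```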
https://en.wikipedia.org/wiki/Kalman_filter
Linear predictionis a mathematical operation where future values of adiscrete-timesignalare estimated as alinear functionof previous samples. Indigital signal processing, linear prediction is often calledlinear predictive coding(LPC) and can thus be viewed as a subset offilter theory. Insystem analysis, a subfield ofmathematics, linear prediction can be viewed as a part ofmathematical modellingoroptimization. The most common representation is wherex^(n){\displaystyle {\widehat {x}}(n)}is the predicted signal value,x(n−i){\displaystyle x(n-i)}the previous observed values, withp≤n{\displaystyle p\leq n}, andai{\displaystyle a_{i}}the predictor coefficients. The error generated by this estimate is wherex(n){\displaystyle x(n)}is the true signal value. These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the predictor coefficientsai{\displaystyle a_{i}}are chosen. For multi-dimensional signals the error metric is often defined as where‖⋅‖{\displaystyle \|\cdot \|}is a suitable chosen vectornorm. Predictions such asx^(n){\displaystyle {\widehat {x}}(n)}are routinely used withinKalman filtersand smoothers to estimate current and past signal values, respectively, from noisy measurements.[1] The most common choice in optimization of parametersai{\displaystyle a_{i}}is theroot mean squarecriterion which is also called theautocorrelationcriterion. In this method we minimize the expected value of the squared errorE[e2(n)]{\displaystyle E[e^{2}(n)]}, which yields the equation for 1 ≤j≤p, whereRis theautocorrelationof signalxn, defined as andEis theexpected value. In the multi-dimensional case this corresponds to minimizing theL2norm. The above equations are called thenormal equationsorYule-Walker equations. In matrix form the equations can be equivalently written as where the autocorrelation matrixR{\displaystyle \mathbf {R} }is a symmetric,p×p{\displaystyle p\times p}Toeplitz matrixwith elementsrij=R(i−j),0≤i,j<p{\displaystyle r_{ij}=R(i-j),0\leq i,j<p}, the vectorr{\displaystyle \mathbf {r} }is the autocorrelation vectorrj=R(j),0<j≤p{\displaystyle r_{j}=R(j),0<j\leq p}, andA=[a1,a2,⋯,ap−1,ap]{\displaystyle \mathbf {A} =[a_{1},a_{2},\,\cdots \,,a_{p-1},a_{p}]}, the parameter vector. Another, more general, approach is to minimize the sum of squares of the errors defined in the form where the optimisation problem searching over allai{\displaystyle a_{i}}must now be constrained witha0=−1{\displaystyle a_{0}=-1}. On the other hand, if the mean square prediction error is constrained to be unity and the prediction error equation is included on top of the normal equations, the augmented set of equations is obtained as where the indexi{\displaystyle i}ranges from 0 top{\displaystyle p}, andR{\displaystyle \mathbf {R} }is a(p+1)×(p+1){\displaystyle (p+1)\times (p+1)}matrix. Specification of the parameters of the linear predictor is a wide topic and a large number of other approaches have been proposed. In fact, the autocorrelation method is the most common[2]and it is used, for example, forspeech codingin theGSMstandard. Solution of the matrix equationRA=r{\displaystyle \mathbf {RA} =\mathbf {r} }is computationally a relatively expensive process. TheGaussian eliminationfor matrix inversion is probably the oldest solution but this approach does not efficiently use the symmetry ofR{\displaystyle \mathbf {R} }. 
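A direct way to obtain the coefficients is to estimate the autocorrelation sequence, build the Toeplitz system R A = r, and solve it. A sketch, with a hypothetical test signal:

```python
import numpy as np

def lpc_autocorrelation(x, p):
    """Predictor coefficients a_1..a_p from the Yule-Walker (normal) equations R a = r,
    where R is the p x p Toeplitz autocorrelation matrix and r = [R(1), ..., R(p)]."""
    n = len(x)
    # Biased autocorrelation estimates R(0) ... R(p)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    return np.linalg.solve(R, r[1:])

def predict_next(x, a):
    """One-step prediction x_hat(n) = sum_i a_i * x(n - i)."""
    p = len(a)
    return float(np.dot(a, x[-1:-p - 1:-1]))

# Example with a hypothetical noisy sinusoid.
signal = np.sin(0.3 * np.arange(200)) + 0.01 * np.random.default_rng(1).standard_normal(200)
coeffs = lpc_autocorrelation(signal, p=4)
next_value = predict_next(signal, coeffs)
```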
A faster algorithm is the Levinson recursion proposed by Norman Levinson in 1947, which recursively calculates the solution.[citation needed] In particular, the autocorrelation equations above may be solved more efficiently by the Durbin algorithm.[3] In 1986, Philippe Delsarte and Y.V. Genin proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions.[4] It uses a special symmetrical property of parameter vectors on subsequent recursion levels; that is, calculations for the optimal predictor containing p{\displaystyle p} terms make use of similar calculations for the optimal predictor containing p−1{\displaystyle p-1} terms. Another way of identifying model parameters is to iteratively calculate state estimates using Kalman filters and obtain maximum likelihood estimates within expectation–maximization algorithms. For equally spaced values, a polynomial interpolation is a linear combination of the known values. If the discrete-time signal is estimated to obey a polynomial of degree p−1,{\displaystyle p-1,} then the predictor coefficients ai{\displaystyle a_{i}} are given by the corresponding row of the triangle of binomial transform coefficients. This estimate might be suitable for a slowly varying signal with low noise. The predictions for the first few values of p{\displaystyle p} are x^(n)=x(n−1){\displaystyle {\widehat {x}}(n)=x(n-1)} for p=1{\displaystyle p=1}, x^(n)=2x(n−1)−x(n−2){\displaystyle {\widehat {x}}(n)=2x(n-1)-x(n-2)} for p=2{\displaystyle p=2}, and x^(n)=3x(n−1)−3x(n−2)+x(n−3){\displaystyle {\widehat {x}}(n)=3x(n-1)-3x(n-2)+x(n-3)} for p=3{\displaystyle p=3}.
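Returning to the autocorrelation method, a sketch of the Levinson–Durbin recursion, which exploits the Toeplitz structure to solve the same system in O(p²) operations (the split-Levinson refinement is not shown):

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the Yule-Walker equations for p predictor coefficients in O(p^2),
    given the autocorrelation sequence r[0..p]."""
    a = np.zeros(p + 1)
    a[0] = 1.0                     # prediction-error filter convention [1, -a_1, ..., -a_p]
    err = r[0]                     # prediction-error power
    for i in range(1, p + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err             # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]   # update error-filter coefficients
        err *= (1.0 - k * k)       # updated prediction-error power
    return -a[1:], err             # predictor coefficients a_1..a_p and residual power

# The sequence r can be the biased autocorrelation estimate from the previous sketch.
```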
https://en.wikipedia.org/wiki/Linear_prediction
The zero-forcing equalizer is a form of linear equalization algorithm used in communication systems which applies the inverse of the frequency response of the channel. This form of equalizer was first proposed by Robert Lucky. The zero-forcing equalizer applies the inverse of the channel frequency response to the received signal, to restore the signal after the channel.[1] It has many useful applications. For example, it is studied heavily for IEEE 802.11n (MIMO), where knowing the channel allows recovery of the two or more streams which are received on top of each other on each antenna. The name zero-forcing corresponds to bringing down the intersymbol interference (ISI) to zero in a noise-free case. This is useful when ISI is significant compared to noise. For a channel with frequency response F(f){\displaystyle F(f)} the zero-forcing equalizer C(f){\displaystyle C(f)} is constructed by C(f)=1/F(f){\displaystyle C(f)=1/F(f)}. Thus the combination of channel and equalizer gives a flat frequency response and linear phase, F(f)C(f)=1{\displaystyle F(f)C(f)=1}. In reality, zero-forcing equalization does not work well in most applications, for two main reasons: the inverse of the channel response may not be realizable (it can be non-causal or infinitely long), and wherever the channel response is small the equalizer gain becomes very large, strongly amplifying the noise. The second issue is often the more limiting condition. These problems are addressed in the linear MMSE equalizer[2] by making a small modification to the denominator of C(f){\displaystyle C(f)}: C(f)=1/(F(f)+k){\displaystyle C(f)=1/(F(f)+k)}, where k is related to the channel response and the signal SNR. If the channel response (or channel transfer function) for a particular channel is H(s), then the input signal is multiplied by the reciprocal of it. This is intended to remove the effect of the channel from the received signal, in particular the intersymbol interference (ISI). The zero-forcing equalizer removes all ISI, and is ideal when the channel is noiseless. However, when the channel is noisy, the zero-forcing equalizer will amplify the noise greatly at frequencies f where the channel response H(j2πf) has a small magnitude (i.e. near zeroes of the channel) in the attempt to invert the channel completely. A more balanced linear equalizer in this case is the minimum mean-square error equalizer, which does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output.
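A frequency-domain sketch comparing the two equalizers on a hypothetical channel follows. The channel taps, noise level, and circular-convolution signal model are all assumptions; the MMSE expression used is the common conj(F)/(|F|² + 1/SNR) regularization, a more explicit variant of the 1/(F(f)+k) modification mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical channel and transmitted BPSK block (circular convolution model).
h = np.array([1.0, 0.6, 0.3])                  # channel impulse response (assumed)
N = 256
x = rng.choice([-1.0, 1.0], size=N)            # transmitted symbols
noise_std = 0.1

F = np.fft.fft(h, N)                           # channel frequency response F(f)
X = np.fft.fft(x)
y = np.real(np.fft.ifft(X * F)) + noise_std * rng.standard_normal(N)   # received signal
Y = np.fft.fft(y)

# Zero-forcing: invert the channel exactly (amplifies noise near spectral nulls).
x_zf = np.real(np.fft.ifft(Y / F))

# MMSE-style regularized inverse: the extra term in the denominator limits the noise gain.
snr = 1.0 / noise_std**2
x_mmse = np.real(np.fft.ifft(Y * np.conj(F) / (np.abs(F)**2 + 1.0 / snr)))
```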
https://en.wikipedia.org/wiki/Zero-forcing_equalizer
The Czenakowski distance (sometimes shortened as CZD) is a per-pixel quality metric that estimates quality or similarity by measuring differences between pixels. Because it compares vectors with strictly non-negative elements, it is often used to compare colored images, as color values cannot be negative. This approach has a better correlation with subjective quality assessment than PSNR.[citation needed] Androutsos et al. give the Czenakowski coefficient as follows:[1] dz(i,j)=1−2∑k=1pmin(xik, xjk)∑k=1p(xik+xjk){\displaystyle d_{z}(i,j)=1-{\frac {2\sum _{k=1}^{p}{\text{min}}(x_{ik},\ x_{jk})}{\sum _{k=1}^{p}(x_{ik}+x_{jk})}}} where a pixel xi{\displaystyle x_{i}} is compared to a pixel xj{\displaystyle x_{j}} on the k-th band of color – usually one each for red, green and blue. For a pixel matrix of size M×N{\displaystyle M\times N}, the Czenakowski coefficient can be averaged over all pixels (an arithmetic mean) to give the Czenakowski distance:[2][3] 1MN∑i=0M−1∑j=0N−1(1−2∑k=13min(Ak(i,j),Bk(i,j))∑k=13(Ak(i,j)+Bk(i,j))){\displaystyle {\frac {1}{MN}}\sum _{i=0}^{M-1}\sum _{j=0}^{N-1}\left(1-{\frac {2\sum _{k=1}^{3}{\text{min}}(A_{k}(i,j),\ B_{k}(i,j))}{\sum _{k=1}^{3}(A_{k}(i,j)+B_{k}(i,j))}}\right)} where Ak(i,j){\displaystyle A_{k}(i,j)} is the (i, j)-th pixel of the k-th band of a color image and, similarly, Bk(i,j){\displaystyle B_{k}(i,j)} is the pixel it is being compared to. In the context of image forensics – for example, detecting whether an image has been manipulated – Rocha et al. report that the Czenakowski distance is a popular choice for Color Filter Array (CFA) identification.[2]
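A direct NumPy implementation of the averaged coefficient above, applied to two hypothetical RGB images:

```python
import numpy as np

def czenakowski_distance(A, B):
    """Average per-pixel Czenakowski coefficient between two color images
    of shape (M, N, bands) with non-negative values."""
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    num = 2.0 * np.minimum(A, B).sum(axis=2)       # 2 * sum_k min(A_k, B_k) per pixel
    den = (A + B).sum(axis=2)                      # sum_k (A_k + B_k) per pixel
    per_pixel = 1.0 - np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return per_pixel.mean()

# Example with two random 8-bit RGB images (hypothetical data).
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64, 3))
img_b = rng.integers(0, 256, size=(64, 64, 3))
d = czenakowski_distance(img_a, img_b)             # 0 for identical images, larger when they differ
```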
https://en.wikipedia.org/wiki/Czenakowski_distance
Data compression ratio, also known as compression power, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed size. Data compression ratio is defined as the ratio between the uncompressed size and compressed size:[1][2][3][4][5] compression ratio = uncompressed size / compressed size. Thus, a representation that compresses a file's storage size from 10 MB to 2 MB has a compression ratio of 10/2 = 5, often notated as an explicit ratio, 5:1 (read "five" to "one"), or as an implicit ratio, 5/1. This formulation applies equally for compression, where the uncompressed size is that of the original, and for decompression, where the uncompressed size is that of the reproduction. Sometimes the space saving is given instead, which is defined as the reduction in size relative to the uncompressed size: space saving = 1 − compressed size / uncompressed size. Thus, a representation that compresses the storage size of a file from 10 MB to 2 MB yields a space saving of 1 − 2/10 = 0.8, often notated as a percentage, 80%. For signals of indefinite size, such as streaming audio and video, the compression ratio is defined in terms of uncompressed and compressed data rates instead of data sizes: compression ratio = uncompressed data rate / compressed data rate, and instead of space saving, one speaks of data-rate saving, which is defined as the data-rate reduction relative to the uncompressed data rate: data-rate saving = 1 − compressed data rate / uncompressed data rate. For example, uncompressed songs in CD format have a data rate of 16 bits/channel x 2 channels x 44.1 kHz ≅ 1.4 Mbit/s, whereas AAC files on an iPod are typically compressed to 128 kbit/s, yielding a compression ratio of 10.9, for a data-rate saving of 0.91, or 91%. When the uncompressed data rate is known, the compression ratio can be inferred from the compressed data rate. Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but it does not generally achieve a compression ratio much better than 2:1 because of the intrinsic entropy of the data. Compression algorithms which provide higher ratios either incur very large overheads or work only for specific data sequences (e.g. compressing a file with mostly zeros). In contrast, lossy compression (e.g. JPEG for images, or MP3 and Opus for audio) can achieve much higher compression ratios at the cost of a decrease in quality (as in Bluetooth audio streaming), since visual or audio compression artifacts from the loss of information are introduced. A compression ratio of at least 50:1 is needed to get 1080i video into a 20 Mbit/s MPEG transport stream.[1] The data compression ratio can serve as a measure of the complexity of a data set or signal. In particular it is used to approximate the algorithmic complexity. It is also used to gauge how much a file can be compressed without the result becoming larger than the original.
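A small sketch that measures both quantities on concrete byte strings using zlib (the inputs are arbitrary examples):

```python
import os
import zlib

def compression_stats(data: bytes, level: int = 9):
    """Return (compression ratio, space saving) for a byte string compressed with zlib."""
    compressed = zlib.compress(data, level)
    ratio = len(data) / len(compressed)            # uncompressed size / compressed size
    saving = 1.0 - len(compressed) / len(data)     # fractional reduction in size
    return ratio, saving

# Highly repetitive data compresses well; random data barely compresses at all.
print(compression_stats(b"abc" * 10_000))
print(compression_stats(os.urandom(10_000)))
```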
https://en.wikipedia.org/wiki/Data_compression_ratio