In machine learning, the kernel perceptron is a variant of the popular perceptron learning algorithm that can learn kernel machines, i.e. non-linear classifiers that employ a kernel function to compute the similarity of unseen samples to training samples. The algorithm was invented in 1964,[1] making it the first kernel classification learner.[2]

The perceptron algorithm is an online learning algorithm that operates by a principle called "error-driven learning". It iteratively improves a model by running it on training samples, then updating the model whenever it finds it has made an incorrect classification with respect to a supervised signal. The model learned by the standard perceptron algorithm is a linear binary classifier: a vector of weights w (and optionally an intercept term b, omitted here for simplicity) that is used to classify a sample vector x as class "one" or class "minus one" according to

\hat{y} = \operatorname{sgn}(\mathbf{w} \cdot \mathbf{x}),

where a zero is arbitrarily mapped to one or minus one. (The "hat" on ŷ denotes an estimated value.) In pseudocode, the perceptron algorithm is given by: initialise w to the zero vector; then, for a fixed number of passes or until convergence, compute ŷ = sgn(w · xᵢ) for each training sample (xᵢ, yᵢ) and, whenever ŷ ≠ yᵢ, update w ← w + yᵢxᵢ.

By contrast with the linear models learned by the perceptron, a kernel method[3] is a classifier that stores a subset of its training examples xᵢ, associates with each a weight αᵢ, and makes decisions for new samples x′ by evaluating

\hat{y} = \operatorname{sgn}\left(\sum_{i} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}')\right).

Here, K is some kernel function. Formally, a kernel function is a non-negative semidefinite kernel (see Mercer's condition), representing an inner product between samples in a high-dimensional space, as if the samples had been expanded to include additional features by a function Φ: K(x, x′) = Φ(x) · Φ(x′). Intuitively, it can be thought of as a similarity function between samples, so the kernel machine establishes the class of a new sample by weighted comparison to the training set. Each function x′ ↦ K(xᵢ, x′) serves as a basis function in the classification.

To derive a kernelized version of the perceptron algorithm, we must first formulate it in dual form, starting from the observation that the weight vector w can be expressed as a linear combination of the n training samples. The equation for the weight vector is

\mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i,

where αᵢ is the number of times xᵢ was misclassified, forcing an update w ← w + yᵢxᵢ. Using this result, we can formulate the dual perceptron algorithm, which loops through the samples as before, making predictions, but instead of storing and updating a weight vector w, it updates a "mistake counter" vector α. We must also rewrite the prediction formula to get rid of w:

\hat{y} = \operatorname{sgn}(\mathbf{w} \cdot \mathbf{x}) = \operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_i y_i (\mathbf{x}_i \cdot \mathbf{x})\right).

Plugging these two equations into the training loop turns it into the dual perceptron algorithm. Finally, we can replace the dot product in the dual perceptron by an arbitrary kernel function, to get the effect of a feature map Φ without computing Φ(x) explicitly for any samples. Doing this yields the kernel perceptron algorithm:[4] initialise α to an all-zeros vector; for each training sample xⱼ (in one or more passes, or in an online stream), compute ŷ = sgn(Σᵢ αᵢyᵢK(xᵢ, xⱼ)) and, whenever ŷ ≠ yⱼ, increment αⱼ.

One problem with the kernel perceptron, as presented above, is that it does not learn sparse kernel machines. Initially, all the αᵢ are zero, so that evaluating the decision function to get ŷ requires no kernel evaluations at all, but each update increments a single αᵢ, making the evaluation increasingly more costly. Moreover, when the kernel perceptron is used in an online setting, the number of non-zero αᵢ, and thus the evaluation cost, grows linearly in the number of examples presented to the algorithm.

The forgetron variant of the kernel perceptron was suggested to deal with this problem. It maintains an active set of examples with non-zero αᵢ, removing ("forgetting") examples from the active set when it exceeds a pre-determined budget and "shrinking" (lowering the weight of) old examples as new ones are promoted to non-zero αᵢ.[5]

Another problem with the kernel perceptron is that it does not regularize, making it vulnerable to overfitting. The NORMA online kernel learning algorithm can be regarded as a generalization of the kernel perceptron algorithm with regularization.[6] The sequential minimal optimization (SMO) algorithm used to learn support vector machines can also be regarded as a generalization of the kernel perceptron.[6]

The voted perceptron algorithm of Freund and Schapire also extends to the kernelized case,[7] giving generalization bounds comparable to the kernel SVM.[2]
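The training loop just described fits in a few lines of NumPy. The sketch below is illustrative rather than canonical: the Gaussian kernel, the fixed number of epochs, and all function and variable names are choices made here, not part of the original description, and labels are assumed to be in {−1, +1}.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # One possible kernel choice: K(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def train_kernel_perceptron(X, y, kernel=rbf_kernel, epochs=10):
    """X: (n, d) array of training samples; y: labels in {-1, +1}.
    Returns the mistake-counter vector alpha of the dual formulation."""
    n = X.shape[0]
    alpha = np.zeros(n)
    # Precompute the Gram matrix K[i, j] = K(x_i, x_j)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for j in range(n):
            s = np.sum(alpha * y * K[:, j])      # sum_i alpha_i y_i K(x_i, x_j)
            y_hat = 1.0 if s >= 0 else -1.0      # a zero is arbitrarily mapped to +1
            if y_hat != y[j]:
                alpha[j] += 1                    # count the mistake on sample j
    return alpha

def kernel_perceptron_predict(X, y, alpha, x_new, kernel=rbf_kernel):
    s = sum(a * yi * kernel(xi, x_new) for a, yi, xi in zip(alpha, y, X))
    return 1.0 if s >= 0 else -1.0

# Usage on a toy non-linearly separable problem (XOR-style labels).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
alpha = train_kernel_perceptron(X, y, epochs=20)
print([kernel_perceptron_predict(X, y, alpha, x) for x in X])   # expect [-1, 1, 1, -1]
```

Because only the mistake counters α are stored, swapping in a different kernel changes nothing else in the loop, which is the point of the dual formulation.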
https://en.wikipedia.org/wiki/Kernel_perceptron
In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. In particular, it is commonly used in support vector machine classification.[1]

The RBF kernel on two samples x ∈ ℝᵏ and x′, represented as feature vectors in some input space, is defined as[2]

K(\mathbf{x}, \mathbf{x'}) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{x'}\|^{2}}{2\sigma^{2}}\right).

Here ‖x − x′‖² may be recognized as the squared Euclidean distance between the two feature vectors, and σ is a free parameter. An equivalent definition involves a parameter γ = 1/(2σ²):

K(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma \|\mathbf{x} - \mathbf{x'}\|^{2}).

Since the value of the RBF kernel decreases with distance and ranges between zero (in the infinite-distance limit) and one (when x = x′), it has a ready interpretation as a similarity measure.[2] The feature space of the kernel has an infinite number of dimensions; for σ = 1, its expansion using the multinomial theorem contains, at each polynomial degree j, a block of ℓ_j = \binom{k+j-1}{j} features.[3]

Because support vector machines and other models employing the kernel trick do not scale well to large numbers of training samples or large numbers of features in the input space, several approximations to the RBF kernel (and similar kernels) have been introduced.[4] Typically, these take the form of a function z that maps a single vector to a vector of higher dimensionality, approximating the kernel:

\langle z(\mathbf{x}), z(\mathbf{x'})\rangle \approx \langle \varphi(\mathbf{x}), \varphi(\mathbf{x'})\rangle = K(\mathbf{x}, \mathbf{x'}),

where φ is the implicit mapping embedded in the RBF kernel.

One way to construct such a z is to randomly sample from the Fourier transform of the kernel:[5]

\varphi(x) = \frac{1}{\sqrt{D}}[\cos\langle w_{1}, x\rangle, \sin\langle w_{1}, x\rangle, \ldots, \cos\langle w_{D}, x\rangle, \sin\langle w_{D}, x\rangle]^{T},

where w₁, ..., w_D are independent samples from the normal distribution N(0, σ⁻²I).

Theorem: \operatorname{E}[\langle \varphi(x), \varphi(y)\rangle] = e^{-\|x-y\|^{2}/(2\sigma^{2})}.

Proof: It suffices to prove the case of D = 1. Use the trigonometric identity cos(a − b) = cos(a)cos(b) + sin(a)sin(b) and the spherical symmetry of the Gaussian distribution, then evaluate the resulting Gaussian integral.

Theorem: \operatorname{Var}[\langle \varphi(x), \varphi(y)\rangle] = O(D^{-1}) (Appendix A.2[6]).

Another approach uses the Nyström method to approximate the eigendecomposition of the Gram matrix K, using only a random sample of the training set.[7]
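The random Fourier feature map φ above is straightforward to implement. The NumPy sketch below (all names, the number of frequencies D, and the final comparison are illustrative choices, not part of the article) draws D frequencies from N(0, σ⁻²I) and checks that the inner product of two mapped vectors approximates the exact RBF kernel value.

```python
import numpy as np

def random_fourier_features(X, D=2000, sigma=1.0, rng=None):
    """Map samples X of shape (n, k) into a 2D-dimensional space whose inner
    products approximate exp(-||x - x'||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(rng)
    W = rng.normal(scale=1.0 / sigma, size=(D, X.shape[1]))   # w_1, ..., w_D ~ N(0, sigma^{-2} I)
    proj = X @ W.T                                            # <w_j, x> for every sample and frequency
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

# Usage: compare the approximation with the exact kernel value for two vectors.
X = np.array([[0.10, 0.20, 0.30],
              [0.20, 0.10, 0.25]])
Z = random_fourier_features(X, D=5000, sigma=1.0, rng=0)
print(Z[0] @ Z[1])                                   # approximate kernel value
print(np.exp(-np.sum((X[0] - X[1]) ** 2) / 2.0))     # exact RBF kernel value, ~0.9888
```

The variance bound quoted above is what justifies averaging over many random frequencies: the approximation error shrinks at the usual Monte Carlo rate as D grows.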
https://en.wikipedia.org/wiki/Radial_basis_function_kernel
In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships. ISAT is based on multiple linear regressions that are dynamically added as additional information is discovered. The technique is adaptive in that it adds new linear regressions dynamically to a store of possible retrieval points. ISAT maintains error control by defining finer granularity in regions of increased nonlinearity. A binary tree search traverses cutting hyperplanes to locate a local linear approximation. ISAT is an alternative to artificial neural networks that is receiving increased attention for several desirable characteristics.

ISAT was first proposed by Stephen B. Pope for computational reduction of turbulent combustion simulation[1] and later extended to model predictive control.[2] It has been generalized to an ISAT framework that operates on any input and output data, regardless of the application. An improved version of the algorithm[3] was proposed just over a decade after the original publication, adding features that improve both the efficiency of the search for tabulated data and the error control.
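As a rough illustration of the retrieve-or-add idea (not Pope's algorithm itself), the sketch below stores local linear models and reuses one whenever a query falls inside its region of accuracy. The spherical accuracy region, the flat list of records, and all names here are simplifying assumptions; a real ISAT implementation grows ellipsoids of accuracy under an error tolerance and organises the records in a binary tree of cutting hyperplanes.

```python
import numpy as np

class ISATTable:
    """Schematic in situ adaptive tabulation: keep records (x0, f(x0), J(x0))
    and answer queries near x0 with the linear model f(x0) + J(x0) (x - x0)."""

    def __init__(self, f, jacobian, radius=0.05):
        self.f, self.jacobian, self.radius = f, jacobian, radius
        self.records = []                              # list of (x0, f0, J0)

    def query(self, x):
        x = np.asarray(x, dtype=float)
        for x0, f0, J0 in self.records:
            if np.linalg.norm(x - x0) <= self.radius:  # inside the region of accuracy
                return f0 + J0 @ (x - x0)              # retrieve: linear approximation
        f0, J0 = self.f(x), self.jacobian(x)           # add: exact evaluation + new record
        self.records.append((x, f0, J0))
        return f0

# Usage with a one-dimensional toy function f(x) = sin(x).
table = ISATTable(f=lambda x: np.array([np.sin(x[0])]),
                  jacobian=lambda x: np.array([[np.cos(x[0])]]))
print(table.query([1.00]))   # exact evaluation, record stored
print(table.query([1.01]))   # retrieved from the stored linear model
```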
https://en.wikipedia.org/wiki/In_Situ_Adaptive_Tabulation
Chaos theoryis aninterdisciplinaryarea ofscientific studyand branch ofmathematics. It focuses on underlying patterns anddeterministiclawsofdynamical systemsthat are highly sensitive toinitial conditions. These were once thought to have completely random states of disorder and irregularities.[1]Chaos theory states that within the apparent randomness ofchaotic complex systems, there are underlying patterns, interconnection, constantfeedback loops, repetition,self-similarity,fractalsandself-organization.[2]Thebutterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministicnonlinear systemcan result in large differences in a later state (meaning there is sensitive dependence on initial conditions).[3]A metaphor for this behavior is that a butterfly flapping its wings inBrazilcan cause or prevent atornadoinTexas.[4][5]: 181–184[6] Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors innumerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general.[7]This can happen even though these systems aredeterministic, meaning that their future behavior follows a unique evolution[8]and is fully determined by their initial conditions, with norandomelements involved.[9]In other words, the deterministic nature of these systems does not make them predictable.[10][11]This behavior is known asdeterministic chaos, or simplychaos. The theory was summarized byEdward Lorenzas:[12] Chaos: When the present determines the future but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate.[13][14][8]It also occurs spontaneously in some systems with artificial components, such asroad traffic.[2]This behavior can be studied through the analysis of a chaoticmathematical modelor through analytical techniques such asrecurrence plotsandPoincaré maps. Chaos theory has applications in a variety of disciplines, includingmeteorology,[8]anthropology,[15]sociology,environmental science,computer science,engineering,economics,ecology, andpandemiccrisis management.[16][17]The theory formed the basis for such fields of study ascomplex dynamical systems,edge of chaostheory andself-assemblyprocesses. Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called theLyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years.[18]In chaotic systems, the uncertainty in a forecast increasesexponentiallywith elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. 
When meaningful predictions cannot be made, the system appears random.[19] In common usage, "chaos" means "a state of disorder".[20][21]However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated byRobert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:[22] In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions.[23][24]In the discrete-time case, this is true for allcontinuousmapsonmetric spaces.[25]In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition. If attention is restricted tointervals, the second property implies the other two.[26]An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.[27] Sensitivity to initial conditionsmeans that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.[2] Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given byEdward Lorenzin 1972 to theAmerican Association for the Advancement of Sciencein Washington, D.C., entitledPredictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?.[28]The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different. As suggested in Lorenz's book entitledThe Essence of Chaos, published in 1993,[5]: 8"sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."[5]: 23The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions.[5]: 189–204A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).[29] A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead.[30]This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the currentgeologic era), but we cannot predict exactly which day will have the hottest temperature of the year. 
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions.[31] More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ₀, the two trajectories end up diverging at a rate given by

|\delta \mathbf{Z}(t)| \approx e^{\lambda t}\, |\delta \mathbf{Z}_{0}|,

where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE, coupled with the solution's boundedness, is usually taken as an indication that the system is chaotic.[8]

In addition to the above property, other properties related to sensitivity to initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.[11]

A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.

Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.

A map f: X → X is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that fᵏ(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.[32]

An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity.
The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.[33]

For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits.[32] The one-dimensional logistic map defined by x → 4x(1 − x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).[34]

Sharkovskii's theorem is the basis of the Li and Yorke[35] (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.

Some dynamical systems, like the one-dimensional logistic map defined by x → 4x(1 − x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.[36]

An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.

Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.

In contrast to single-type chaotic solutions, studies using Lorenz models[40][41] have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic solutions may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models,[37][38][39] suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
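Both the period-2 orbit of the logistic map quoted above and the exponential divergence quantified by the Lyapunov exponent can be checked numerically. The short sketch below is illustrative only; the seed value and iteration count are arbitrary choices, and the reference value ln 2 ≈ 0.693 for the map x → 4x(1 − x) is a well-known result rather than something stated in this article.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

# The period-2 orbit quoted above: (5 - sqrt(5))/8 -> (5 + sqrt(5))/8 -> back again.
a = (5 - np.sqrt(5)) / 8
b = (5 + np.sqrt(5)) / 8
print(logistic(a), b)            # both ~0.9045085
print(logistic(b), a)            # both ~0.3454915

# Estimate the largest Lyapunov exponent as the orbit average of ln|f'(x)|,
# with f'(x) = 4 - 8x; for this map the known value is ln 2 ~= 0.693.
x, n_iter, total = 0.123, 100_000, 0.0
for _ in range(n_iter):
    total += np.log(abs(4.0 - 8.0 * x))
    x = logistic(x)
print(total / n_iter)
```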
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.

The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as

\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z,

where x, y, and z make up the system state, t is time, and σ, ρ, β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott[42] found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel[43][44] showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.

While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties.[45] Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional.[46] A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.

The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model.[47] Since 1963, higher-dimensional Lorenz models have been developed in numerous studies[48][49][37][38] for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.

The straightforward generalization of coupled discrete maps[50] is based upon a convolution integral which mediates interaction between spatially distributed maps:

\psi_{n+1}({\vec{r}}, t) = \int K({\vec{r}} - {\vec{r}}^{\,\prime}, t)\, f[\psi_{n}({\vec{r}}^{\,\prime}, t)]\, d{\vec{r}}^{\,\prime},

where the kernel K({\vec{r}} - {\vec{r}}^{\,\prime}, t) is a propagator derived as the Green function of a relevant physical system.[51] The map f[\psi_{n}({\vec{r}}, t)] might be logistic-map-like, ψ → Gψ[1 − tanh(ψ)], or a complex map. As examples of complex maps, the Julia set f[ψ] = ψ² or the Ikeda map ψₙ₊₁ = A + Bψₙe^{i(|ψₙ|² + C)} may serve.
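The sensitive dependence described earlier can be seen directly by integrating the three-equation Lorenz model above from two nearby initial states. The sketch below uses a basic fourth-order Runge–Kutta step and the classical parameter values σ = 10, ρ = 28, β = 8/3; the step size, the 10⁻⁸ perturbation, and the printing schedule are arbitrary choices for illustration.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta z
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])        # perturb one coordinate by 1e-8
for step in range(3001):
    if step % 500 == 0:
        # Separation grows roughly like e^(lambda t) until it saturates
        # at the size of the attractor.
        print(step * 0.01, np.linalg.norm(a - b))
    a, b = rk4_step(a), rk4_step(b)
```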
When wave propagation problems at distance L = ct with wavelength λ = 2π/k are considered, the kernel K may take the form of a Green function for the Schrödinger equation:[52][53]

K({\vec{r}} - {\vec{r}}^{\,\prime}, L) = \frac{ik\exp[ikL]}{2\pi L}\exp\!\left[\frac{ik|{\vec{r}} - {\vec{r}}^{\,\prime}|^{2}}{2L}\right].

Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.[54]

Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.[55]

James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s.[56][57][58] An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point.[59][60][61] In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards".[62] Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.

Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff,[63] Andrey Nikolaevich Kolmogorov,[64][65][66] Mary Lucy Cartwright and John Edensor Littlewood,[67] and Stephen Smale.[68] Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing. Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959, Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion).
He applied this criterion to explain some experimental results onplasma confinementin open mirror traps.[69][70]This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.[71][72][73] The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeatediterationof simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.[74][75] Edward Lorenzwas an early pioneer of the theory. His interest in chaos came about accidentally through his work onweather predictionin 1961.[76][13]Lorenz and his collaboratorEllen FetterandMargaret Hamilton[77]were using a simple digital computer, aRoyal McBeeLGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome.[78]Lorenz's discovery, which gave its name toLorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions. In 1963,Benoit Mandelbrot, studyinginformation theory, discovered that noise in many phenomena (includingstock pricesandtelephonecircuits) was patterned like aCantor set, a set of points with infinite roughness and detail.[79]Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards).[80][81]In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for aninfinitesimallysmall measuring device.[82]Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. 
An object whose irregularity is constant over different scales ("self-similarity") is afractal(examples include theMenger sponge, theSierpiński gasket, and theKoch curveorsnowflake, which is infinitely long yet encloses a finite space and has afractal dimensionof circa 1.2619). In 1982, Mandelbrot publishedThe Fractal Geometry of Nature, which became a classic of chaos theory.[83] In December 1977, theNew York Academy of Sciencesorganized the first symposium on chaos, attended by David Ruelle,Robert May,James A. Yorke(coiner of the term "chaos" as used in mathematics),Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", andMitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections.[84][85]Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered theuniversalityin chaos, permitting the application of chaos theory to many different phenomena. In 1979,Albert J. Libchaber, during a symposium organized in Aspen byPierre Hohenberg, presented his experimental observation of thebifurcationcascade that leads to chaos and turbulence inRayleigh–Bénard convectionsystems. He was awarded theWolf Prize in Physicsin 1986 along withMitchell J. Feigenbaumfor their inspiring achievements.[86] In 1986, the New York Academy of Sciences co-organized with theNational Institute of Mental Healthand theOffice of Naval Researchthe first important conference on chaos in biology and medicine. There,Bernardo Hubermanpresented a mathematical model of theeye trackingdysfunction among people withschizophrenia.[87]This led to a renewal ofphysiologyin the 1980s through the application of chaos theory, for example, in the study of pathologicalcardiac cycles. In 1987,Per Bak,Chao TangandKurt Wiesenfeldpublished a paper inPhysical Review Letters[88]describing for the first timeself-organized criticality(SOC), considered one of the mechanisms by whichcomplexityarises in nature. Alongside largely lab-based approaches such as theBak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to displayscale-invariantbehavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, includingearthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as theGutenberg–Richter lawdescribing the statistical distribution of earthquake sizes, and theOmori law[89]describing the frequency of aftershocks),solar flares, fluctuations in economic systems such asfinancial markets(references to SOC are common ineconophysics), landscape formation,forest fires,landslides,epidemics, andbiological evolution(where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward byNiles EldredgeandStephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence ofwars. 
These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws. Also in 1987James GleickpublishedChaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public.[90]Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name ofnonlinear systemsanalysis. Alluding toThomas Kuhn's concept of aparadigm shiftexposed inThe Structure of Scientific Revolutions(1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently, chaos theory remains an active area of research,[91]involving many different disciplines such asmathematics,topology,physics,[92]social systems,[93]population modeling,biology,meteorology,astrophysics,information theory,computational neuroscience,pandemiccrisis management,[16][17]etc. The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:[90] For want of a nail, the shoe was lost.For want of a shoe, the horse was lost.For want of a horse, the rider was lost.For want of a rider, the battle was lost.For want of a battle, the kingdom was lost.And all for the want of a horseshoe nail. Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome.[94]Based on the analysis, the verse only indicates divergence, not boundedness.[6]Boundedness is important for the finite size of a butterfly pattern.[6]The characteristic of the aforementioned verse was described as "finite-time sensitive dependence".[95] Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today aregeology,mathematics,biology,computer science,economics,[97][98][99]engineering,[100][101]finance,[102][103][104][105][106]meteorology,philosophy,anthropology,[15]physics,[107][108][109]politics,[110][111]population dynamics,[112]androbotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing. Chaos theory has been used for many years incryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds ofcryptographic primitives. 
These algorithms include imageencryption algorithms,hash functions,secure pseudo-random number generators,stream ciphers,watermarking, andsteganography.[113]The majority of these algorithms are based on uni-modal chaotic maps and a big portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys.[114]From a wider perspective, without loss of generality, the similarities between the chaotic maps and the cryptographic systems is the main motivation for the design of chaos based cryptographic algorithms.[113]One type of encryption, secret key orsymmetric key, relies ondiffusion and confusion, which is modeled well by chaos theory.[115]Another type of computing,DNA computing, when paired with chaos theory, offers a way to encrypt images and other information.[116]Many of the DNA-Chaos cryptographic algorithms are proven to be either not secure, or the technique applied is suggested to be not efficient.[117][118][119] Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build apredictive model.[120]Chaotic dynamics have been exhibited bypassive walkingbiped robots.[121] For over a hundred years, biologists have been keeping track of populations of different species withpopulation models. Most models arecontinuous, but recently scientists have been able to implement chaotic models in certain populations.[122]For example, a study on models ofCanadian lynxshowed there was chaotic behavior in the population growth.[123]Chaos can also be found in ecological systems, such ashydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory.[124]Another biological application is found incardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs offetal hypoxiacan be obtained through chaotic modeling.[125] As Perry points out,modelingof chaotictime seriesinecologyis helped by constraint.[126]: 176, 177There is always potential difficulty in distinguishing real chaos from chaos that is only in the model.[126]: 176, 177Hence both constraint in the model and or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry & Wall 1984.[126]: 176, 177Gene-for-geneco-evolution sometimes shows chaotic dynamics inallele frequencies.[127]Adding variables exaggerates this: Chaos is more common inmodelsincorporating additional variables to reflect additional facets of real populations.[127]Robert M. 
Mayhimself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field.[127]Even for a steady environment, merely combining onecropand onepathogenmay result inquasi-periodic-orchaotic-oscillations in pathogenpopulation.[128]: 169 It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task.[129]Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.[130] Chaos could be found in economics by the means ofrecurrence quantification analysis. In fact, Orlando et al.[131]by the means of the so-called recurrence quantification correlation index were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables and highlight hidden features of economic dynamics.[132]Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.[133] Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966[134]extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP).[135]To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al.[136]refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law. In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects.[137]Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture"). In chemistry, predicting gas solubility is essential to manufacturingpolymers, but models usingparticle swarm optimization(PSO) tend to converge to the wrong points. 
An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck.[138]Incelestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets.[139]Four of the fivemoons of Plutorotate chaotically. Inquantum physicsandelectrical engineering, the study of large arrays ofJosephson junctionsbenefitted greatly from chaos theory.[140]Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.[141] Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass[142]and Mandell and Selz[143]have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior. Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.[144] In their 1995 paper, Metcalf and Allen[145]maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r. Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitate some serpentine, post-hoc explanation to show that results fit a chaotic model. 
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions.[146]Modern organizations are increasingly seen as opencomplex adaptive systemswith fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance,team buildingandgroup developmentis increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.[147] Traffic forecasting may benefit from applications of chaos theory. Better predictions of when a congestion will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of theBML traffic modelat right).[148] Chaos theory has been applied to environmentalwater cycledata (alsohydrologicaldata), such as rainfall and streamflow.[149]These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.[150] Examples of chaotic systems Other related topics People
https://en.wikipedia.org/wiki/Chaos_theory
In computer graphics, hierarchical RBF is an interpolation method based on radial basis functions (RBFs). Hierarchical RBF interpolation has applications in the treatment of results from a 3D scanner, terrain reconstruction, and the construction of shape models in 3D computer graphics (such as the Stanford bunny, a popular 3D model). This problem is informally known as "large scattered data point set interpolation."

The interpolation method (in three dimensions) proceeds in several steps. As J. C. Carr et al. showed,[1] the interpolating function takes the form

\mathbf{f}(\mathbf{x}) = \sum_{i=1}^{N} \lambda_{i}\varphi(\mathbf{x}, \mathbf{c}_{i}),

where φ is a radial basis function and the λᵢ are the coefficients that are the solution of the following linear system of equations:

\begin{bmatrix} \varphi(c_{1},c_{1}) & \varphi(c_{1},c_{2}) & \cdots & \varphi(c_{1},c_{N}) \\ \varphi(c_{2},c_{1}) & \varphi(c_{2},c_{2}) & \cdots & \varphi(c_{2},c_{N}) \\ \vdots & \vdots & \ddots & \vdots \\ \varphi(c_{N},c_{1}) & \varphi(c_{N},c_{2}) & \cdots & \varphi(c_{N},c_{N}) \end{bmatrix} \begin{bmatrix} \lambda_{1} \\ \lambda_{2} \\ \vdots \\ \lambda_{N} \end{bmatrix} = \begin{bmatrix} h_{1} \\ h_{2} \\ \vdots \\ h_{N} \end{bmatrix}

To determine the surface, it is necessary to estimate the value of the function f(x) at specific points x. A drawback of this direct method is its considerable computational cost, on the order of O(n²), to calculate the RBF, solve the system, and determine the surface.[2]

A hierarchical algorithm allows for an acceleration of the calculations by decomposing an intricate problem into a great number of simple ones (see picture). In this case, a hierarchical division of the space containing the points into elementary parts is constructed, and a system of small dimension is solved for each part. The calculation of the surface is then reduced to a hierarchical (tree-structure-based) evaluation of the interpolant. A method for the 2D case is offered by Pouderoux J. et al.[3] For the 3D case, a method is used in 3D graphics tasks by W. Qiang et al.[4] and modified by Babkov V.[5]
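Before any hierarchical decomposition, the basic fit amounts to assembling and solving the N × N system above and then evaluating the sum at query points. The NumPy sketch below illustrates that flat, non-hierarchical solve; the multiquadric basis, its shape parameter, and the test function are assumptions made here, and it is exactly the cost of this dense solve and of summing over all N centers per query that the hierarchical method is designed to avoid.

```python
import numpy as np

def phi(r, c=1.0):
    # Multiquadric basis, one common choice: phi(r) = sqrt(r^2 + c^2)
    return np.sqrt(r * r + c * c)

def fit_rbf(centers, values, c=1.0):
    """Solve the dense N x N system Phi @ lam = h for the coefficients lambda."""
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(phi(dists, c), values)

def eval_rbf(centers, lam, x, c=1.0):
    """f(x) = sum_i lambda_i * phi(||x - c_i||)."""
    return phi(np.linalg.norm(centers - x, axis=-1), c) @ lam

# Usage: interpolate scattered samples of a smooth 2-D test function.
rng = np.random.default_rng(0)
centers = rng.uniform(size=(200, 2))
values = np.sin(3 * centers[:, 0]) * np.cos(2 * centers[:, 1])
lam = fit_rbf(centers, values)
print(eval_rbf(centers, lam, np.array([0.5, 0.5])))   # interpolated value
print(np.sin(1.5) * np.cos(1.0))                      # exact value of the test function
```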
https://en.wikipedia.org/wiki/Hierarchical_RBF
Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron node for each novel training sample. The weights to this hidden neuron separate out not only this training sample but others that are near it, thus providing generalization.[1][2] This separation is done using the nearest hyperplane that can be written down instantaneously. In the two most important implementations the neighborhood of generalization either varies with the training sample (CC1 network) or remains constant (CC4 network). These networks use unary coding for an effective representation of the data sets.[3]

This type of network was first proposed in a 1993 paper of Subhash Kak.[1] Since then, instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time series prediction applications.[4] They have also been used in instant classification of documents[5] and for deep learning and data mining.[6][7] As in other neural networks, their normal use is as software, but they have also been implemented in hardware using FPGAs[8] and by optical implementation.[9]

In the CC4 network, which is a three-stage network, the number of input nodes is one more than the size of the training vector, with the extra node serving as the biasing node whose input is always 1. For binary input vectors, the weight from an input node to the hidden neuron (say of index j) corresponding to the trained vector is +1 if the corresponding input bit is 1 and −1 if it is 0, while the weight from the biasing node is r − s + 1, where r is the radius of generalization and s is the Hamming weight (the number of 1s) of the binary sequence. From the hidden layer to the output layer the weights are 1 or −1 depending on whether the vector belongs to a given output class or not. The neurons in the hidden and output layers output 1 if the weighted sum of their inputs is 0 or positive, and 0 if the weighted sum is negative.

The CC4 network has also been modified to include non-binary input with varying radii of generalization so that it effectively provides a CC1 implementation.[10]

In feedback networks, the Willshaw network as well as the Hopfield network are able to learn instantaneously.
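A small sketch of the CC4 construction is given below. The ±1 input weights and the r − s + 1 bias weight follow the commonly cited CC4 prescription, but the exact bias and threshold conventions vary slightly across descriptions, so treat those details, along with all names here, as assumptions rather than a faithful reproduction of the referenced formula.

```python
import numpy as np

def cc4_train(X, labels, r=1):
    """X: (n, d) binary training vectors; labels: 0/1 class of each sample.
    Each training vector gets its own hidden neuron (instantaneous training)."""
    W_hidden = np.where(X == 1, 1.0, -1.0)                # +1 for input bits equal to 1, -1 otherwise
    s = X.sum(axis=1)                                     # Hamming weight of each training vector
    bias = r - s + 1.0                                    # weight from the always-on bias node
    W_hidden = np.hstack([W_hidden, bias[:, None]])
    w_out = np.where(np.asarray(labels) == 1, 1.0, -1.0)  # +1 / -1 to the single output node
    return W_hidden, w_out

def cc4_predict(W_hidden, w_out, x):
    x = np.append(x, 1.0)                                 # bias input is always 1
    h = (W_hidden @ x >= 0).astype(float)                 # hidden step activation
    return 1 if w_out @ h >= 0 else 0

# Usage: two binary training vectors, one per class, radius of generalization 1.
X = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
W_hidden, w_out = cc4_train(X, labels=[1, 0], r=1)
print(cc4_predict(W_hidden, w_out, np.array([1, 0, 1, 1])))   # near the first sample -> class 1
```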
https://en.wikipedia.org/wiki/Instantaneously_trained_neural_networks
In artificial neural networks, a hybrid Kohonen self-organizing map is a type of self-organizing map (SOM), named for the Finnish professor Teuvo Kohonen, in which the network architecture consists of an input layer fully connected to a 2-D SOM or Kohonen layer. The output from the Kohonen layer, which is the winning neuron, feeds into a hidden layer and finally into an output layer. In other words, the Kohonen SOM is the front-end, while the hidden and output layers of a multilayer perceptron form the back-end of the hybrid Kohonen SOM. The hybrid Kohonen SOM was first applied to machine vision systems for image classification and recognition.[1]

The hybrid Kohonen SOM has been used in weather prediction and especially in forecasting stock prices, where it has made a challenging task considerably easier. It is fast and efficient with a lower classification error, and hence a better predictor, compared to the Kohonen SOM and backpropagation networks.[2]
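One plausible reading of this architecture is sketched below: the Kohonen layer selects a winning unit, and a one-hot encoding of that winner is what the MLP back-end sees. The one-hot encoding, the tanh hidden layer, and the omission of training (SOM training followed by backpropagation of the back-end) are simplifying assumptions, since the description above does not fix these details.

```python
import numpy as np

class HybridKohonenSOM:
    """Schematic forward pass: Kohonen front-end picks a winner, MLP back-end
    maps the winner to the output. Weights are assumed to be already trained."""

    def __init__(self, som_weights, W_hidden, W_out):
        self.som = som_weights            # (units, d) codebook of the Kohonen layer
        self.W_hidden, self.W_out = W_hidden, W_out

    def forward(self, x):
        winner = np.argmin(np.linalg.norm(self.som - x, axis=1))
        one_hot = np.zeros(len(self.som))
        one_hot[winner] = 1.0             # the winning neuron is what the back-end sees
        h = np.tanh(self.W_hidden @ one_hot)
        return self.W_out @ h

rng = np.random.default_rng(0)
net = HybridKohonenSOM(som_weights=rng.normal(size=(9, 4)),   # 3x3 Kohonen layer, 4-D input
                       W_hidden=rng.normal(size=(5, 9)),
                       W_out=rng.normal(size=(2, 5)))
print(net.forward(rng.normal(size=4)))
```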
https://en.wikipedia.org/wiki/Hybrid_Kohonen_self-organizing_map
In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems. LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.[1]

An LVQ system is represented by prototypes W = (w(1), ..., w(n)) which are defined in the feature space of observed data. In winner-take-all training algorithms, one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.

An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.[2] LVQ systems can be applied to multi-class classification problems in a natural way. A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system; see, e.g., (Schneider, Biehl, and Hammer, 2009)[3] and references therein. LVQ can be a source of great help in classifying text documents.[citation needed]

Below follows an informal description. The algorithm consists of three basic steps: find the prototype closest to the current training sample, move that prototype toward the sample if their class labels agree, and move it away from the sample if the labels disagree. The algorithm's input is a set of labelled training samples, an initial set of labelled prototypes, and a learning rate; its flow repeats the three steps over the training data until the prototypes stabilize.
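The update rule described above (move the winner toward a correctly classified sample, away from a misclassified one) is essentially the classic LVQ1 scheme. The following sketch assumes Euclidean distance, a fixed learning rate, and one prototype per class for the toy data; all of these are illustrative choices rather than requirements of LVQ.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Basic LVQ1: pull the winning prototype toward a correctly classified
    sample and push it away from an incorrectly classified one."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(W - x, axis=1))   # winner-take-all by Euclidean distance
            sign = 1.0 if proto_labels[i] == label else -1.0
            W[i] += sign * lr * (x - W[i])
    return W

def lvq_predict(W, proto_labels, x):
    return proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))]

# Usage: two Gaussian blobs, one prototype per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
proto_labels = np.array([0, 1])
W = lvq1_train(X, y, prototypes=X[[0, 50]], proto_labels=proto_labels)
print(lvq_predict(W, proto_labels, np.array([3.5, 4.0])))   # expected: 1
```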
https://en.wikipedia.org/wiki/Learning_vector_quantization
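The following is a minimal Python sketch of the winner-take-all update described above (an LVQ1-style rule): the closest prototype is pulled toward a data point when their labels match and pushed away otherwise. The prototype initialisation, learning rate, and number of epochs are illustrative choices.

# Minimal LVQ1-style sketch of the winner-take-all update.
import numpy as np

def lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    W = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # nearest prototype (Euclidean)
            sign = 1.0 if proto_labels[winner] == label else -1.0
            W[winner] += sign * lr * (x - W[winner])            # move closer or push away
    return W

def classify(x, W, proto_labels):
    return proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
proto_labels = np.array([0, 1])
W = lvq1(X, y, prototypes=X[[0, 20]].astype(float), proto_labels=proto_labels)
print(classify(np.array([2.8, 3.1]), W, proto_labels))           # expected: 1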
Inmathematics, more specifically innumerical linear algebra, thebiconjugate gradient methodis analgorithmto solvesystems of linear equations Unlike theconjugate gradient method, this algorithm does not require thematrixA{\displaystyle A}to beself-adjoint, but instead one needs to perform multiplications by theconjugate transposeA*. In the above formulation, the computedrk{\displaystyle r_{k}\,}andrk∗{\displaystyle r_{k}^{*}}satisfy and thus are the respectiveresidualscorresponding toxk{\displaystyle x_{k}\,}andxk∗{\displaystyle x_{k}^{*}}, as approximate solutions to the systems x∗{\displaystyle x^{*}}is theadjoint, andα¯{\displaystyle {\overline {\alpha }}}is thecomplex conjugate. The biconjugate gradient method isnumerically unstable[citation needed](compare to thebiconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by wherej<k{\displaystyle j<k}using the relatedprojection with These related projections may be iterated themselves as A relation toQuasi-Newton methodsis given byPk=Ak−1A{\displaystyle P_{k}=A_{k}^{-1}A}andxk+1=xk−Ak+1−1(Axk−b){\displaystyle x_{k+1}=x_{k}-A_{k+1}^{-1}\left(Ax_{k}-b\right)}, where The new directions are then orthogonal to the residuals: which themselves satisfy wherei,j<k{\displaystyle i,j<k}. The biconjugate gradient method now makes a special choice and uses the setting With this particular choice, explicit evaluations ofPk{\displaystyle P_{k}}andA−1are avoided, and the algorithm takes the form stated above.
https://en.wikipedia.org/wiki/Biconjugate_gradient_method
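The sketch below shows an unpreconditioned biconjugate gradient iteration for a real non-symmetric system Ax = b, using products with both A and its transpose. The shadow-residual initialisation (equal to the initial residual) and the stopping rule are common illustrative choices, not prescribed by the text above.

# Hedged sketch of the (unpreconditioned) BiCG iteration for real A.
import numpy as np

def bicg(A, b, x0=None, tol=1e-10, maxiter=200):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    r_hat = r.copy()                     # shadow residual for the dual system (a common choice)
    p, p_hat = r.copy(), r_hat.copy()
    rho = r_hat @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p_hat @ Ap)
        x += alpha * p
        r -= alpha * Ap
        r_hat -= alpha * (A.T @ p_hat)   # the dual residual uses the transpose of A
        if np.linalg.norm(r) < tol:
            break
        rho_new = r_hat @ r
        beta = rho_new / rho
        p = r + beta * p
        p_hat = r_hat + beta * p_hat
        rho = rho_new
    return x

A = np.array([[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ bicg(A, b), b))    # True: the residual is driven to ~0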
Innumerical linear algebra, theconjugate gradient squared method (CGS)is aniterativealgorithm for solvingsystems of linear equationsof the formAx=b{\displaystyle A{\mathbf {x}}={\mathbf {b}}}, particularly in cases where computing thetransposeAT{\displaystyle A^{T}}is impractical.[1]The CGS method was developed as an improvement to thebiconjugate gradient method.[2][3][4] Asystem of linear equationsAx=b{\displaystyle A{\mathbf {x}}={\mathbf {b}}}consists of a knownmatrixA{\displaystyle A}and a knownvectorb{\displaystyle {\mathbf {b}}}. To solve the system is to find the value of the unknown vectorx{\displaystyle {\mathbf {x}}}.[3][5]A direct method for solving a system of linear equations is to take the inverse of the matrixA{\displaystyle A}, then calculatex=A−1b{\displaystyle {\mathbf {x}}=A^{-1}{\mathbf {b}}}. However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guessx(0){\displaystyle {\mathbf {x}}^{(0)}}, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution.[6][7] As with theconjugate gradient method,biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variableoptimisation problems, such aspower-flow analysis,hyperparameter optimisation, andfacial recognition.[8] The algorithm is as follows:[9]
https://en.wikipedia.org/wiki/Conjugate_gradient_squared_method
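The following is a hedged sketch of a conjugate gradient squared iteration in the common "Templates"-style formulation; it avoids products with the transpose of A, which is the point made above. The choice of the auxiliary vector (equal to the initial residual) and the stopping rule are illustrative.

# Hedged sketch of CGS; no multiplications by A^T are needed.
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, maxiter=200):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    r_tilde = r.copy()                    # fixed auxiliary vector
    q = np.zeros(n)
    p = np.zeros(n)
    rho_old = 1.0
    for i in range(maxiter):
        rho = r_tilde @ r
        if i == 0:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_old
            u = r + beta * q
            p = u + beta * (q + beta * p)
        v = A @ p
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        x += alpha * (u + q)              # "squared" update: two search vectors at once
        r -= alpha * (A @ (u + q))
        rho_old = rho
        if np.linalg.norm(r) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ cgs(A, b), b))      # True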
The conjugate residual method is an iterative numeric method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence properties. This method is used to solve linear equations of the form Ax = b, where A is an invertible Hermitian matrix and b is nonzero. The conjugate residual method differs from the closely related conjugate gradient method in that it involves more numerical operations and requires more storage. Given an (arbitrary) initial estimate of the solution x0, the method is outlined below; the iteration may be stopped once xk has been deemed to have converged. The only difference between this and the conjugate gradient method is the calculation of αk and βk (plus the optional incremental calculation of Apk at the end). Note: the above algorithm can be transformed so as to make only one symmetric matrix–vector multiplication in each iteration. By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method: the preconditioner M−1 must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
https://en.wikipedia.org/wiki/Conjugate_residual_method
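The sketch below follows the conjugate residual iteration for a symmetric system Ax = b, including the incremental update of Ap_k so that only one matrix–vector product is needed per iteration, as noted above. The stopping rule is an illustrative choice.

# Minimal sketch of the conjugate residual (CR) iteration.
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, maxiter=200):
    x = np.zeros(len(b)) if x0 is None else x0.astype(float)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(maxiter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r                        # the single matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap               # incremental update of A p_k (no extra product)
        rAr = rAr_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])   # symmetric
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ conjugate_residual(A, b), b))                  # True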
Belief propagation, also known assum–product message passing, is a message-passingalgorithmfor performinginferenceongraphical models, such asBayesian networksandMarkov random fields. It calculates themarginal distributionfor each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used inartificial intelligenceandinformation theory, and has demonstrated empirical success in numerous applications, includinglow-density parity-check codes,turbo codes,free energyapproximation, andsatisfiability.[1] The algorithm was first proposed byJudea Pearlin 1982,[2]who formulated it as an exact inference algorithm ontrees, later extended topolytrees.[3]While the algorithm is not exact on general graphs, it has been shown to be a useful approximate algorithm.[4] Given a finite set ofdiscreterandom variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}withjointprobability mass functionp{\displaystyle p}, a common task is to compute themarginal distributionsof theXi{\displaystyle X_{i}}. The marginal of a singleXi{\displaystyle X_{i}}is defined to be wherex′=(x1′,…,xn′){\displaystyle \mathbf {x} '=(x'_{1},\ldots ,x'_{n})}is a vector of possible values for theXi{\displaystyle X_{i}}, and the notationx′:xi′=xi{\displaystyle \mathbf {x} ':x'_{i}=x_{i}}means that the sum is taken over thosex′{\displaystyle \mathbf {x} '}whosei{\displaystyle i}th coordinate is equal toxi{\displaystyle x_{i}}. Computing marginal distributions using this formula quickly becomes computationally prohibitive as the number of variables grows. For example, given 100binary variablesX1,…,X100{\displaystyle X_{1},\ldots ,X_{100}}, computing a single marginalXi{\displaystyle X_{i}}usingp{\displaystyle p}and the above formula would involve summing over299≈6.34×1029{\displaystyle 2^{99}\approx 6.34\times 10^{29}}possible values forx′{\displaystyle \mathbf {x} '}. If it is known that the probability mass functionp{\displaystyle p}factors in a convenient way, belief propagation allows the marginals to be computed much more efficiently. Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networksandMarkov random fields[5]in particular). We describe here the variant that operates on afactor graph. A factor graph is abipartite graphcontaining nodes corresponding to variablesV{\displaystyle V}and factorsF{\displaystyle F}, with edges between variables and the factors in which they appear. We can write the joint mass function: wherexa{\displaystyle \mathbf {x} _{a}}is the vector of neighboring variable nodes to the factor nodea{\displaystyle a}. AnyBayesian networkorMarkov random fieldcan be represented as a factor graph by using a factor for each node with its parents or a factor for each node with its neighborhood respectively.[6] The algorithm works by passing real valued functions calledmessagesalong the edges between the nodes. More precisely, ifv{\displaystyle v}is a variable node anda{\displaystyle a}is a factor node connected tov{\displaystyle v}in the factor graph, then the messagesμv→a{\displaystyle \mu _{v\to a}}fromv{\displaystyle v}toa{\displaystyle a}and the messagesμa→v{\displaystyle \mu _{a\to v}}froma{\displaystyle a}tov{\displaystyle v}are real-valued functionsμv→a,μa→v:Dom⁡(v)→R{\displaystyle \mu _{v\to a},\mu _{a\to v}:\operatorname {Dom} (v)\to \mathbb {R} }, whose domain is the set of values that can be taken by the random variable associated withv{\displaystyle v}, denotedDom⁡(v){\displaystyle \operatorname {Dom} (v)}. 
These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation: As shown by the previous formula: the complete marginalization is reduced to a sum of products of simpler terms than the ones appearing in the full joint distribution. This is the reason that belief propagation is sometimes calledsum-product message passing, or thesum-product algorithm. In a typical run, each message will be updated iteratively from the previous value of the neighboring messages. Different scheduling can be used for updating the messages. In the case where the graphical model is a tree, an optimal scheduling converges after computing each message exactly once (see next sub-section). When the factor graph has cycles, such an optimal scheduling does not exist, and a typical choice is to update all messages simultaneously at each iteration. Upon convergence (if convergence happened), the estimated marginal distribution of each node is proportional to the product of all messages from adjoining factors (missing the normalization constant): Likewise, the estimated joint marginal distribution of the set of variables belonging to one factor is proportional to the product of the factor and the messages from the variables: In the case where the factor graph is acyclic (i.e. is a tree or a forest), these estimated marginal actually converge to the true marginals in a finite number of iterations. This can be shown bymathematical induction. In the case when thefactor graphis atree, the belief propagation algorithm will compute the exact marginals. Furthermore, with proper scheduling of the message updates, it will terminate after two full passes through the tree. This optimal scheduling can be described as follows: Before starting, the graph is oriented by designating one node as theroot; any non-root node which is connected to only one other node is called aleaf. In the first step, messages are passed inwards: starting at the leaves, each node passes a message along the (unique) edge towards the root node. The tree structure guarantees that it is possible to obtain messages from all other adjoining nodes before passing the message on. This continues until the root has obtained messages from all of its adjoining nodes. The second step involves passing the messages back out: starting at the root, messages are passed in the reverse direction. The algorithm is completed when all leaves have received their messages. Although it was originally designed foracyclicgraphical models, the Belief Propagation algorithm can be used in generalgraphs. The algorithm is then sometimes calledloopy belief propagation, because graphs typically containcycles, or loops. The initialization and scheduling of message updates must be adjusted slightly (compared with the previously described schedule for acyclic graphs) because graphs might not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same message definitions above, updating all messages at every iteration (although messages coming from known leaves or tree-structured subgraphs may no longer need updating after sufficient iterations). It is easy to show that in a tree, the message definitions of this modified procedure will converge to the set of message definitions given above within a number of iterations equal to thediameterof the tree. 
The precise conditions under which loopy belief propagation will converge are still not well understood; it is known that on graphs containing a single loop it converges in most cases, but the probabilities obtained might be incorrect.[7]Several sufficient (but not necessary) conditions for convergence of loopy belief propagation to a unique fixed point exist.[8]There exist graphs which will fail to converge, or which will oscillate between multiple states over repeated iterations. Techniques likeEXIT chartscan provide an approximate visualization of the progress of belief propagation and an approximate test for convergence. There are other approximate methods for marginalization includingvariational methodsandMonte Carlo methods. One method of exact marginalization in general graphs is called thejunction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree. The basic premise is to eliminate cycles by clustering them into single nodes. A similar algorithm is commonly referred to as theViterbi algorithm, but also known as a special case of the max-product or min-sum algorithm, which solves the related problem of maximization, or most probable explanation. Instead of attempting to solve the marginal, the goal here is to find the valuesx{\displaystyle \mathbf {x} }that maximizes the global function (i.e. most probable values in a probabilistic setting), and it can be defined using thearg max: An algorithm that solves this problem is nearly identical to belief propagation, with the sums replaced by maxima in the definitions.[9] It is worth noting thatinferenceproblems like marginalization and maximization areNP-hardto solve exactly and approximately (at least forrelative error) in a graphical model. More precisely, the marginalization problem defined above is#P-completeand maximization isNP-complete. The memory usage of belief propagation can be reduced through the use of theIsland algorithm(at a small cost in time complexity). The sum-product algorithm is related to the calculation offree energyinthermodynamics. LetZbe thepartition function. A probability distribution (as per the factor graph representation) can be viewed as a measure of theinternal energypresent in a system, computed as The free energy of the system is then It can then be shown that the points of convergence of the sum-product algorithm represent the points where the free energy in such a system is minimized. Similarly, it can be shown that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a stationary point of a free energy approximation.[10] Belief propagation algorithms are normally presented as message update equations on a factor graph, involving messages between variable nodes and their neighboring factor nodes and vice versa. Considering messages betweenregionsin a graph is one way of generalizing the belief propagation algorithm.[10]There are several ways of defining the set of regions in a graph that can exchange messages. One method uses ideas introduced byKikuchiin the physics literature,[11][12][13]and is known as Kikuchi'scluster variation method.[14] Improvements in the performance of belief propagation algorithms are also achievable by breaking the replicas symmetry in the distributions of the fields (messages). This generalization leads to a new kind of algorithm calledsurvey propagation(SP), which have proved to be very efficient inNP-completeproblems likesatisfiability[1]andgraph coloring. 
The cluster variational method and the survey propagation algorithms are two different improvements to belief propagation. The namegeneralized survey propagation(GSP) is waiting to be assigned to the algorithm that merges both generalizations. Gaussian belief propagation is a variant of the belief propagation algorithm when the underlyingdistributions are Gaussian. The first work analyzing this special model was the seminal work of Weiss and Freeman.[15] The GaBP algorithm solves the following marginalization problem: where Z is a normalization constant,Ais a symmetricpositive definite matrix(inverse covariance matrix a.k.a.precision matrix) andbis the shift vector. Equivalently, it can be shown that using the Gaussian model, the solution of the marginalization problem is equivalent to theMAPassignment problem: This problem is also equivalent to the following minimization problem of the quadratic form: Which is also equivalent to the linear system of equations Convergence of the GaBP algorithm is easier to analyze (relatively to the general BP case) and there are two known sufficient convergence conditions. The first one was formulated by Weiss et al. in the year 2000, when the information matrixAisdiagonally dominant. The second convergence condition was formulated by Johnson et al.[16]in 2006, when thespectral radiusof the matrix whereD= diag(A). Later, Su and Wu established the necessary and sufficient convergence conditions for synchronous GaBP and damped GaBP, as well as another sufficient convergence condition for asynchronous GaBP. For each case, the convergence condition involves verifying 1) a set (determined by A) being non-empty, 2) the spectral radius of a certain matrix being smaller than one, and 3) the singularity issue (when converting BP message into belief) does not occur.[17] The GaBP algorithm was linked to the linear algebra domain,[18]and it was shown that the GaBP algorithm can be viewed as an iterative algorithm for solving the linear system of equationsAx=bwhereAis the information matrix andbis the shift vector. Empirically, the GaBP algorithm is shown to converge faster than classical iterative methods like the Jacobi method, theGauss–Seidel method,successive over-relaxation, and others.[19]Additionally, the GaBP algorithm is shown to be immune to numerical problems of the preconditionedconjugate gradient method[20] The previous description of BP algorithm is called the codeword-based decoding, which calculates the approximate marginal probabilityP(x|X){\displaystyle P(x|X)}, given received codewordX{\displaystyle X}. There is an equivalent form,[21]which calculateP(e|s){\displaystyle P(e|s)}, wheres{\displaystyle s}is the syndrome of the received codewordX{\displaystyle X}ande{\displaystyle e}is the decoded error. The decoded input vector isx=X+e{\displaystyle x=X+e}. This variation only changes the interpretation of the mass functionfa(Xa){\displaystyle f_{a}(X_{a})}. Explicitly, the messages are This syndrome-based decoder doesn't require information on the received bits, thus can be adapted to quantum codes, where the only information is the measurement syndrome. 
In the binary case,xi∈{0,1}{\displaystyle x_{i}\in \{0,1\}}, those messages can be simplified to cause an exponential reduction of2|{v}|+|N(v)|{\displaystyle 2^{|\{v\}|+|N(v)|}}in the complexity[22][23] Define log-likelihood ratiolv=log⁡uv→a(xv=0)uv→a(xv=1){\displaystyle l_{v}=\log {\frac {u_{v\to a}(x_{v}=0)}{u_{v\to a}(x_{v}=1)}}},La=log⁡ua→v(xv=0)ua→v(xv=1){\displaystyle L_{a}=\log {\frac {u_{a\to v}(x_{v}=0)}{u_{a\to v}(x_{v}=1)}}}, then The posterior log-likelihood ratio can be estimated aslv=lv(0)+∑a∈N(v)(La){\displaystyle l_{v}=l_{v}^{(0)}+\sum _{a\in N(v)}(L_{a})}
https://en.wikipedia.org/wiki/Belief_propagation#Gaussian_belief_propagation_.28GaBP.29
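The following small worked example runs sum–product message passing on a tree-structured factor graph with two binary variables x1 and x2, unary factors f1 and f2, and a pairwise factor f12. Each message is computed once in each direction (the two-pass schedule for trees) and the resulting marginals are checked against brute-force enumeration. The factor values are arbitrary illustrative numbers.

# Sum-product message passing on a tiny tree, checked against enumeration.
import numpy as np

f1 = np.array([0.7, 0.3])                    # f1(x1)
f2 = np.array([0.4, 0.6])                    # f2(x2)
f12 = np.array([[0.9, 0.1],                  # f12(x1, x2)
                [0.2, 0.8]])

# Messages from the unary (leaf) factors to their variables are just the factors.
m_f1_x1, m_f2_x2 = f1, f2
# Variable-to-factor messages: product of the other incoming messages (only one here).
m_x1_f12, m_x2_f12 = m_f1_x1, m_f2_x2
# Factor-to-variable messages: sum out the other variable of the factor.
m_f12_x1 = f12 @ m_x2_f12                    # sum over x2 of f12(x1, x2) * m(x2)
m_f12_x2 = f12.T @ m_x1_f12                  # sum over x1 of f12(x1, x2) * m(x1)

# Marginals: product of all incoming messages, then normalise.
p_x1 = m_f1_x1 * m_f12_x1
p_x1 /= p_x1.sum()
p_x2 = m_f2_x2 * m_f12_x2
p_x2 /= p_x2.sum()

# Brute-force check against the joint p(x1, x2) proportional to f1(x1) f2(x2) f12(x1, x2).
joint = f1[:, None] * f2[None, :] * f12
joint /= joint.sum()
print(np.allclose(p_x1, joint.sum(axis=1)), np.allclose(p_x2, joint.sum(axis=0)))  # True True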
Incomputational mathematics, aniterative methodis amathematical procedurethat uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which thei-th approximation (called an "iterate") is derived from the previous ones. A specific implementation withterminationcriteria for a given iterative method likegradient descent,hill climbing,Newton's method, orquasi-Newton methodslikeBFGS, is analgorithmof an iterative method or amethod of successive approximation. An iterative method is calledconvergentif the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however,heuristic-based iterative methods are also common. In contrast,direct methodsattempt to solve the problem by a finite sequence of operations. In the absence ofrounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equationsAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }byGaussian elimination). Iterative methods are often the only choice fornonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.[1] If an equation can be put into the formf(x) =x, and a solutionxis an attractivefixed pointof the functionf, then one may begin with a pointx1in thebasin of attractionofx, and letxn+1=f(xn) forn≥ 1, and the sequence {xn}n≥ 1will converge to the solutionx. Herexnis thenth approximation or iteration ofxandxn+1is the next orn+ 1 iteration ofx. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings. (For example,x(n+1)=f(x(n)).) If the functionfiscontinuously differentiable, a sufficient condition for convergence is that thespectral radiusof the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist. In the case of asystem of linear equations, the two main classes of iterative methods are thestationary iterative methods, and the more generalKrylov subspacemethods. Stationary iterative methods solve a linear system with anoperatorapproximating the original one; and based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices. Aniterative methodis defined by and for a given linear systemAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }with exact solutionx∗{\displaystyle \mathbf {x} ^{*}}theerrorby An iterative method is calledlinearif there exists a matrixC∈Rn×n{\displaystyle C\in \mathbb {R} ^{n\times n}}such that and this matrix is called theiteration matrix. 
An iterative method with a given iteration matrixC{\displaystyle C}is calledconvergentif the following holds An important theorem states that for a given iterative method and its iteration matrixC{\displaystyle C}it is convergent if and only if itsspectral radiusρ(C){\displaystyle \rho (C)}is smaller than unity, that is, The basic iterative methods work bysplittingthe matrixA{\displaystyle A}into and here the matrixM{\displaystyle M}should be easilyinvertible. The iterative methods are now defined as or, equivalently, From this follows that the iteration matrix is given by Basic examples of stationary iterative methods use a splitting of the matrixA{\displaystyle A}such as whereD{\displaystyle D}is only the diagonal part ofA{\displaystyle A}, andL{\displaystyle L}is the strict lowertriangular partofA{\displaystyle A}. Respectively,U{\displaystyle U}is the strict upper triangular part ofA{\displaystyle A}. Linear stationary iterative methods are also calledrelaxation methods. Krylov subspace methods[2]work by forming abasisof the sequence of successive matrix powers times the initial residual (theKrylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is theconjugate gradient method(CG) which assumes that the system matrixA{\displaystyle A}issymmetricpositive-definite. For symmetric (and possibly indefinite)A{\displaystyle A}one works with theminimal residual method(MINRES). In the case of non-symmetric matrices, methods such as thegeneralized minimal residual method(GMRES) and thebiconjugate gradient method(BiCG) have been derived. Since these methods form a basis, it is evident that the method converges inNiterations, whereNis the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practiceNcan be very large, and the iterative process reaches sufficient accuracy already far earlier. The analysis of these methods is hard, depending on a complicated function of thespectrumof the operator. The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such asGMRES(alternatively,preconditionedKrylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area. Mathematical methods relating to successive approximation include: Jamshīd al-Kāshīused iterative methods to calculate the sine of 1° andπinThe Treatise of Chord and Sineto high precision. An early iterative method forsolving a linear systemappeared in a letter ofGaussto a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest[citation needed]. The theory of stationary iterative methods was solidly established with the work ofD.M. Youngstarting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments byCornelius Lanczos,Magnus HestenesandEduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy based methods work very well forpartial differential equations, especially the elliptic type.
https://en.wikipedia.org/wiki/Iterative_method#Linear_systems
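As a concrete instance of the splitting-based stationary methods described above, the sketch below uses M = D (the diagonal of A), i.e. the Jacobi iteration x_{k+1} = x_k + M^{-1}(b - A x_k). Convergence here relies on the spectral radius of the iteration matrix being below one, which holds for the strictly diagonally dominant example matrix; tolerances and iteration counts are illustrative.

# Minimal Jacobi (stationary) iteration based on the splitting A = M - N, M = D.
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    D = np.diag(A)                           # M = D is easily invertible
    x = np.zeros(len(b)) if x0 is None else x0.astype(float)
    for _ in range(maxiter):
        r = b - A @ x                        # residual of the current iterate
        x = x + r / D                        # "correction equation" M e = r
        if np.linalg.norm(r) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
# The iteration matrix C = I - D^{-1} A has spectral radius < 1 for this A.
print(max(abs(np.linalg.eigvals(np.eye(3) - A / np.diag(A)[:, None]))) < 1)   # True
print(np.allclose(A @ jacobi(A, b), b))                                       # True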
Inlinear algebra, the order-rKrylov subspacegenerated by ann-by-nmatrixAand a vectorbof dimensionnis thelinear subspacespannedby theimagesofbunder the firstrpowers ofA(starting fromA0=I{\displaystyle A^{0}=I}), that is,[1][2] The concept is named after Russian applied mathematician and naval engineerAlexei Krylov, who published a paper about the concept in 1931.[3] Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensionallinear algebra problems.[2]Manylinear dynamical systemtests incontrol theory, especially those related tocontrollabilityandobservability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of theGramiansassociated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace.[4] Moderniterative methodssuch asArnoldi iterationcan be used for finding one (or a few) eigenvalues of largesparse matricesor solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vectorb{\displaystyle b}, one computesAb{\displaystyle Ab}, then one multiplies that vector byA{\displaystyle A}to findA2b{\displaystyle A^{2}b}and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without there being an explicit representation ofA{\displaystyle A}, giving rise toMatrix-free methods. Because the vectors usually soon become almostlinearly dependentdue to the properties ofpower iteration, methods relying on Krylov subspace frequently involve someorthogonalizationscheme, such asLanczos iterationforHermitian matricesorArnoldi iterationfor more general matrices. The best known Krylov subspace methods are theConjugate gradient,IDR(s)(Induced dimension reduction),GMRES(generalized minimum residual),BiCGSTAB(biconjugate gradient stabilized),QMR(quasi minimal residual),TFQMR(transpose-free QMR) andMINRES(minimal residual method).
https://en.wikipedia.org/wiki/Krylov_subspace
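The short sketch below builds the raw order-r Krylov basis {b, Ab, ..., A^{r-1} b} and shows the effect mentioned above: the vectors become nearly linearly dependent because of power iteration, which is why practical methods orthogonalise them (Arnoldi or Lanczos); here a QR factorisation stands in for that orthogonalisation step. Matrix size and order are arbitrary.

# Raw Krylov basis versus an orthonormal basis of the same subspace.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 8
A = rng.normal(size=(n, n))
b = rng.normal(size=n)

K = np.empty((n, r))
K[:, 0] = b
for j in range(1, r):
    K[:, j] = A @ K[:, j - 1]                 # images of b under successive powers of A

print(np.linalg.cond(K))                      # large: raw Krylov vectors are nearly dependent
Q, _ = np.linalg.qr(K)                        # orthonormal basis of the same subspace
print(np.linalg.cond(Q))                      # ~1 after orthogonalisation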
Innumerical optimization, thenonlinear conjugate gradient methodgeneralizes theconjugate gradient methodtononlinear optimization. For a quadratic functionf(x){\displaystyle \displaystyle f(x)} the minimum off{\displaystyle f}is obtained when thegradientis 0: Whereas linear conjugate gradient seeks a solution to the linear equationATAx=ATb{\displaystyle \displaystyle A^{T}Ax=A^{T}b}, the nonlinear conjugate gradient method is generally used to find thelocal minimumof a nonlinear function using itsgradient∇xf{\displaystyle \nabla _{x}f}alone. It works when the function is approximately quadratic near the minimum, which is the case when the function is twice differentiable at the minimum and the second derivative is non-singular there. Given a functionf(x){\displaystyle \displaystyle f(x)}ofN{\displaystyle N}variables to minimize, its gradient∇xf{\displaystyle \nabla _{x}f}indicates the direction of maximum increase. One simply starts in the opposite (steepest descent) direction: with an adjustable step lengthα{\displaystyle \displaystyle \alpha }and performs aline searchin this direction until it reaches the minimum off{\displaystyle \displaystyle f}: After this first iteration in the steepest directionΔx0{\displaystyle \displaystyle \Delta x_{0}}, the following steps constitute one iteration of moving along a subsequent conjugate directionsn{\displaystyle \displaystyle s_{n}}, wheres0=Δx0{\displaystyle \displaystyle s_{0}=\Delta x_{0}}: With a pure quadratic function the minimum is reached withinNiterations (excepting roundoff error), but a non-quadratic function will make slower progress. Subsequent search directions lose conjugacy requiring the search direction to be reset to the steepest descent direction at least everyNiterations, or sooner if progress stops. However, resetting every iteration turns the method intosteepest descent. The algorithm stops when it finds the minimum, determined when no progress is made after a direction reset (i.e. in the steepest descent direction), or when some tolerance criterion is reached. Within a linear approximation, the parametersα{\displaystyle \displaystyle \alpha }andβ{\displaystyle \displaystyle \beta }are the same as in the linear conjugate gradient method but have been obtained with line searches. The conjugate gradient method can follow narrow (ill-conditioned) valleys, where thesteepest descentmethod slows down and follows a criss-cross pattern. Four of the best known formulas forβn{\displaystyle \displaystyle \beta _{n}}are named after their developers: These formulas are equivalent for a quadratic function, but for nonlinear optimization the preferred formula is a matter of heuristics or taste. A popular choice isβ=max{0,βPR}{\displaystyle \displaystyle \beta =\max\{0,\beta ^{PR}\}}, which provides a direction reset automatically.[5] Algorithms based onNewton's methodpotentially converge much faster. There, both step direction and length are computed from the gradient as the solution of a linear system of equations, with the coefficient matrix being the exactHessian matrix(for Newton's method proper) or an estimate thereof (in thequasi-Newton methods, where the observed change in the gradient during the iterations is used to update the Hessian estimate). For high-dimensional problems, the exact computation of the Hessian is usually prohibitively expensive, and even its storage can be problematic, requiringO(N2){\displaystyle O(N^{2})}memory (but see the limited-memoryL-BFGSquasi-Newton method). 
The conjugate gradient method can also be derived usingoptimal control theory.[6]In this accelerated optimization theory, the conjugate gradient method falls out as a nonlinearoptimal feedback controller, u=k(x,x˙):=−γa∇xf(x)−γbx˙{\displaystyle u=k(x,{\dot {x}}):=-\gamma _{a}\nabla _{x}f(x)-\gamma _{b}{\dot {x}}} for thedouble integrator system, x¨=u{\displaystyle {\ddot {x}}=u} The quantitiesγa>0{\displaystyle \gamma _{a}>0}andγb>0{\displaystyle \gamma _{b}>0}are variable feedback gains.[6]
https://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient_method
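The sketch below implements nonlinear conjugate gradient with the Polak–Ribière formula and the automatic reset beta = max{0, beta_PR} mentioned above. A backtracking (Armijo) line search is an illustrative substitute for the exact line search assumed in the derivation, and a restart to steepest descent is added as a safeguard when the computed direction is not a descent direction.

# Hedged sketch of nonlinear CG with the Polak-Ribiere "plus" formula.
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, c=1e-4, shrink=0.5):
    # Shrink the step until the Armijo sufficient-decrease condition holds.
    while alpha > 1e-12 and f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= shrink
    return alpha

def nonlinear_cg(f, grad, x0, tol=1e-8, maxiter=5000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                              # first step: steepest descent
    for _ in range(maxiter):
        alpha = backtracking(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere with automatic reset
        d = -g_new + beta * d
        if g_new @ d >= 0:                              # safeguard: restart if not a descent direction
            d = -g_new
        g = g_new
    return x

# Rosenbrock function: a narrow curved valley where steepest descent struggles.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))      # converges toward [1, 1]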
Sparse matrix–vector multiplication (SpMV) of the form y = Ax is a widely used computational kernel existing in many scientific applications. The input matrix A is sparse. The input vector x and the output vector y are dense. In the case of a repeated y = Ax operation involving the same input matrix A but possibly changing numerical values of its elements, A can be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.[1]
https://en.wikipedia.org/wiki/Sparse_matrix%E2%80%93vector_multiplication
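The following minimal sketch computes y = Ax with A stored in compressed sparse row (CSR) form, one common sparse layout (chosen here for illustration): only the nonzero values, their column indices, and per-row pointers are kept, so each nonzero is touched exactly once.

# Sparse matrix-vector multiply for a CSR-encoded matrix.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                        # one output entry per row
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]      # accumulate A[i, col] * x[col]
    return y

# CSR encoding of A = [[4, 0, 1],
#                      [0, 3, 0],
#                      [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))       # [ 7.  6. 17.]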
TheRescorla–Wagner model("R-W") is a model ofclassical conditioning, in which learning is conceptualized in terms of associations between conditioned (CS) and unconditioned (US) stimuli. A strong CS-US association means that the CS signals predict the US. One might say that before conditioning, the subject is surprised by the US, but after conditioning, the subject is no longer surprised, because the CS predicts the coming of the US. The model casts the conditioning processes into discrete trials, during which stimuli may be either present or absent. The strength of prediction of the US on a trial can be represented as the summed associative strengths of all CSs present during the trial. This feature of the model represented a major advance over previous models, and it allowed a straightforward explanation of important experimental phenomena, most notably theblocking effect. Failures of the model have led to modifications, alternative models, and many additional findings. The model has had some impact on neural science in recent years, as studies have suggested that the phasic activity of dopamine neurons in mesostriatal DA projections in the midbrain encodes for the type of prediction error detailed in the model.[1] The Rescorla–Wagner model was created by Yale psychologistsRobert A. RescorlaandAllan R. Wagnerin 1972. The first two assumptions were new in the Rescorla–Wagner model. The last three assumptions were present in previous models and are less crucial to the R-W model's novel predictions. and where [2] Van Hamme and Wasserman have extended the original Rescorla–Wagner (RW) model and introduced a new factor in their revised RW model in 1994:[3]They suggested that not only conditioned stimuli physically present on a given trial can undergo changes in their associative strength, the associative value of a CS can also be altered by a within-compound-association with a CS present on that trial. A within-compound-association is established if two CSs are presented together during training (compound stimulus). If one of the two component CSs is subsequently presented alone, then it is assumed to activate a representation of the other (previously paired) CS as well. Van Hamme and Wasserman propose that stimuli indirectly activated through within-compound-associations have a negative learning parameter—thus phenomena of retrospective reevaluation can be explained. Consider the following example, an experimental paradigm called "backward blocking," indicative of retrospective revaluation, where AB is the compound stimulus A+B: Test trials: Group 1, which received both Phase 1- and 2-trials, elicits a weaker conditioned response (CR) to B compared to the Control group, which only received Phase 1-trials. The original RW model cannot account for this effect. But the revised model can: In Phase 2, stimulus B is indirectly activated through within-compound-association with A. But instead of a positive learning parameter (usually called alpha) when physically present, during Phase 2, B has a negative learning parameter. Thus during the second phase, B's associative strength declines whereas A's value increases because of its positive learning parameter. Thus, the revised RW model can explain why the CR elicited by B after backward blocking training is weaker compared with AB-only conditioning. The Rescorla–Wagner model owes its success to several factors, including[2]
https://en.wikipedia.org/wiki/Rescorla%E2%80%93Wagner_model
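The sketch below applies the standard Rescorla–Wagner trial-by-trial update, ΔV_X = α_X · β · (λ − ΣV), where ΣV sums the associative strengths of all conditioned stimuli present on the trial, and uses it to reproduce the blocking effect mentioned above (A+ training followed by AB+ training). The learning-rate values, trial counts, and function names are illustrative choices.

# Rescorla-Wagner update and a small blocking simulation.
import numpy as np

def rw_update(V, present, lam, alpha, beta=0.1):
    """One conditioning trial; V maps CS name -> associative strength."""
    total = sum(V[cs] for cs in present)              # summed prediction of the US
    error = lam - total                               # prediction error (surprise)
    for cs in present:
        V[cs] += alpha[cs] * beta * error             # every present CS shares the error
    return V

alpha = {"A": 0.5, "B": 0.5}
V = {"A": 0.0, "B": 0.0}
for _ in range(100):                                  # Phase 1: A alone predicts the US
    rw_update(V, ["A"], lam=1.0, alpha=alpha)
for _ in range(100):                                  # Phase 2: compound AB followed by the US
    rw_update(V, ["A", "B"], lam=1.0, alpha=alpha)
print(V)                                              # V["B"] stays near 0: blocking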
TheBerndt–Hall–Hall–Hausman(BHHH)algorithmis anumerical optimizationalgorithmsimilar to theNewton–Raphson algorithm, but it replaces the observed negativeHessian matrixwith theouter productof thegradient. This approximation is based on theinformation matrixequality and therefore only valid while maximizing alikelihood function.[1]The BHHH algorithm is named after the four originators:Ernst R. Berndt,Bronwyn Hall,Robert Hall, andJerry Hausman.[2] If anonlinearmodel is fitted to thedataone often needs to estimatecoefficientsthroughoptimization. A number of optimisation algorithms have the following general structure. Suppose that the function to be optimized isQ(β). Then the algorithms are iterative, defining a sequence of approximations,βkgiven by whereβk{\displaystyle \beta _{k}}is the parameter estimate at step k, andλk{\displaystyle \lambda _{k}}is a parameter (called step size) which partly determines the particular algorithm. For the BHHH algorithmλkis determined by calculations within a given iterative step, involving a line-search until a pointβk+1is found satisfying certain criteria. In addition, for the BHHH algorithm,Qhas the form andAis calculated using In other cases, e.g.Newton–Raphson,Ak{\displaystyle A_{k}}can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.[citation needed]
https://en.wikipedia.org/wiki/BHHH_algorithm
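The following is a hedged sketch of a BHHH step for maximising a log-likelihood: the matrix A_k is formed from the sum of outer products of per-observation score vectors, as described above. The example fits the mean of normally distributed data with known unit variance, and a fixed step size λ_k = 1 stands in for the line search mentioned in the text; data and names are illustrative.

# Hedged sketch of BHHH iterations for a simple normal-mean likelihood.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=500)

def scores(beta):
    # Per-observation gradient of log f(y_i | beta) for N(beta, 1): (y_i - beta).
    return (data - beta).reshape(-1, 1)

beta = np.array([0.0])
for _ in range(20):
    g = scores(beta)                          # n x 1 matrix of individual scores
    grad = g.sum(axis=0)                      # gradient of the total log-likelihood
    A = g.T @ g                               # BHHH outer-product approximation of -Hessian
    beta = beta + np.linalg.solve(A, grad)    # step with lambda_k = 1 (no line search)
print(beta, data.mean())                      # both close to 2.0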
Limited-memory BFGS(L-BFGSorLM-BFGS) is anoptimizationalgorithmin the family ofquasi-Newton methodsthat approximates theBroyden–Fletcher–Goldfarb–Shanno algorithm(BFGS) using a limited amount ofcomputer memory.[1]It is a popular algorithm for parameter estimation inmachine learning.[2][3]The algorithm's target problem is to minimizef(x){\displaystyle f(\mathbf {x} )}over unconstrained values of the real-vectorx{\displaystyle \mathbf {x} }wheref{\displaystyle f}is a differentiable scalar function. Like the original BFGS, L-BFGS uses an estimate of the inverseHessian matrixto steer its search through variable space, but where BFGS stores a densen×n{\displaystyle n\times n}approximation to the inverse Hessian (nbeing the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse HessianHk, L-BFGS maintains a history of the pastmupdates of the positionxand gradient ∇f(x), where generally the history sizemcan be small (oftenm<10{\displaystyle m<10}). These updates are used to implicitly do operations requiring theHk-vector product. The algorithm starts with an initial estimate of the optimal value,x0{\displaystyle \mathbf {x} _{0}}, and proceeds iteratively to refine that estimate with a sequence of better estimatesx1,x2,…{\displaystyle \mathbf {x} _{1},\mathbf {x} _{2},\ldots }. The derivatives of the functiongk:=∇f(xk){\displaystyle g_{k}:=\nabla f(\mathbf {x} _{k})}are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) off(x){\displaystyle f(\mathbf {x} )}. L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplicationdk=−Hkgk{\displaystyle d_{k}=-H_{k}g_{k}}is carried out, wheredk{\displaystyle d_{k}}is the approximate Newton's direction,gk{\displaystyle g_{k}}is the current gradient, andHk{\displaystyle H_{k}}is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion."[4][5] We take as givenxk{\displaystyle x_{k}}, the position at thek-th iteration, andgk≡∇f(xk){\displaystyle g_{k}\equiv \nabla f(x_{k})}wheref{\displaystyle f}is the function being minimized, and all vectors are column vectors. We also assume that we have stored the lastmupdates of the form We defineρk=1yk⊤sk{\displaystyle \rho _{k}={\frac {1}{y_{k}^{\top }s_{k}}}}, andHk0{\displaystyle H_{k}^{0}}will be the 'initial' approximate of the inverse Hessian that our estimate at iterationkbegins with. The algorithm is based on the BFGS recursion for the inverse Hessian as For a fixedkwe define a sequence of vectorsqk−m,…,qk{\displaystyle q_{k-m},\ldots ,q_{k}}asqk:=gk{\displaystyle q_{k}:=g_{k}}andqi:=(I−ρiyisi⊤)qi+1{\displaystyle q_{i}:=(I-\rho _{i}y_{i}s_{i}^{\top })q_{i+1}}. Then a recursive algorithm for calculatingqi{\displaystyle q_{i}}fromqi+1{\displaystyle q_{i+1}}is to defineαi:=ρisi⊤qi+1{\displaystyle \alpha _{i}:=\rho _{i}s_{i}^{\top }q_{i+1}}andqi=qi+1−αiyi{\displaystyle q_{i}=q_{i+1}-\alpha _{i}y_{i}}. We also define another sequence of vectorszk−m,…,zk{\displaystyle z_{k-m},\ldots ,z_{k}}aszi:=Hiqi{\displaystyle z_{i}:=H_{i}q_{i}}. 
There is another recursive algorithm for calculating these vectors which is to definezk−m=Hk0qk−m{\displaystyle z_{k-m}=H_{k}^{0}q_{k-m}}and then recursively defineβi:=ρiyi⊤zi{\displaystyle \beta _{i}:=\rho _{i}y_{i}^{\top }z_{i}}andzi+1=zi+(αi−βi)si{\displaystyle z_{i+1}=z_{i}+(\alpha _{i}-\beta _{i})s_{i}}. The value ofzk{\displaystyle z_{k}}is then our ascent direction. Thus we can compute the descent direction as follows: This formulation gives the search direction for the minimization problem, i.e.,z=−Hkgk{\displaystyle z=-H_{k}g_{k}}. For maximization problems, one should thus take-zinstead. Note that the initial approximate inverse HessianHk0{\displaystyle H_{k}^{0}}is chosen as a diagonal matrix or even a multiple of the identity matrix since this is numerically efficient. The scaling of the initial matrixγk{\displaystyle \gamma _{k}}ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. AWolfe line searchis used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijobacktracking line search, but cannot guarantee that the curvature conditionyk⊤sk>0{\displaystyle y_{k}^{\top }s_{k}>0}will be satisfied by the chosen step since a step length greater than1{\displaystyle 1}may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update whenyk⊤sk{\displaystyle y_{k}^{\top }s_{k}}is negative or too close to zero, but this approach is not generally recommended since the updates may be skipped too often to allow the Hessian approximationHk{\displaystyle H_{k}}to capture important curvature information. Some solvers employ so called damped (L)BFGS update which modifies quantitiessk{\displaystyle s_{k}}andyk{\displaystyle y_{k}}in order to satisfy the curvature condition. The two-loop recursion formula is widely used by unconstrained optimizers due to its efficiency in multiplying by the inverse Hessian. However, it does not allow for the explicit formation of either the direct or inverse Hessian and is incompatible with non-box constraints. An alternative approach is thecompact representation, which involves a low-rank representation for the direct and/or inverse Hessian.[6]This represents the Hessian as a sum of a diagonal matrix and a low-rank update. Such a representation enables the use of L-BFGS in constrained settings, for example, as part of the SQP method. L-BFGS has been called "the algorithm of choice" for fittinglog-linear (MaxEnt) modelsandconditional random fieldswithℓ2{\displaystyle \ell _{2}}-regularization.[2][3] Since BFGS (and hence L-BFGS) is designed to minimizesmoothfunctions withoutconstraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiablecomponents or constraints. A popular class of modifications are called active-set methods, based on the concept of theactive set. The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified. 
TheL-BFGS-Balgorithm extends L-BFGS to handle simple box constraints (aka bound constraints) on variables; that is, constraints of the formli≤xi≤uiwherelianduiare per-variable constant lower and upper bounds, respectively (for eachxi, either or both bounds may be omitted).[7][8]The method works by identifying fixed and free variables at every step (using a simple gradient method), and then using the L-BFGS method on the free variables only to get higher accuracy, and then repeating the process. Orthant-wise limited-memory quasi-Newton(OWL-QN) is an L-BFGS variant for fittingℓ1{\displaystyle \ell _{1}}-regularizedmodels, exploiting the inherentsparsityof such models.[3]It minimizes functions of the form whereg{\displaystyle g}is adifferentiableconvexloss function. The method is an active-set type method: at each iterate, it estimates thesignof each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable‖x→‖1{\displaystyle \|{\vec {x}}\|_{1}}term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process. Schraudolphet al.present anonlineapproximation to both BFGS and L-BFGS.[9]Similar tostochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS has a global almost sure convergence[10]while the online approximation of BFGS (O-BFGS) is not necessarily convergent.[11] Notable open source implementations include: Notable non open source implementations include:
https://en.wikipedia.org/wiki/L-BFGS
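The sketch below implements the two-loop recursion described above: it applies the implicit inverse-Hessian approximation H_k to the current gradient using only the stored pairs (s_i, y_i), with the usual scaled initial matrix H_k^0 = γ_k I, and returns the descent direction −H_k g_k. The quadratic test problem, the exact line search (valid because the objective is quadratic), and all names are illustrative.

# L-BFGS two-loop recursion and a small quadratic sanity check.
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """s_list, y_list: the m most recent position/gradient differences (oldest first)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop: newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                             # scaled initial Hessian H_k^0 = gamma * I
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    z = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # second loop: oldest to newest
        rho = 1.0 / (y @ s)
        b = rho * (y @ z)
        z += (a - b) * s
    return -z                                              # search direction -H_k g_k

# Check on a quadratic f(x) = 0.5 x^T A x - b^T x with exact line search.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.normal(size=5)
grad = lambda x: A @ x - b

x = np.zeros(5)
s_list, y_list = [], []
for _ in range(10):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = lbfgs_direction(g, s_list, y_list)
    alpha = -(g @ d) / (d @ (A @ d))                       # exact line search (quadratic objective)
    x_new = x + alpha * d
    s_list.append(x_new - x)
    y_list.append(grad(x_new) - g)
    x = x_new
print(np.linalg.norm(grad(x)))                             # ~0: the quadratic is minimised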
Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that do not require a gradient. As a result, it can be used on functions that are not continuous or differentiable. One such pattern search method is "convergence" (see below), which is based on the theory of positive bases. Optimization attempts to find the best match (the solution that has the lowest error value) in a multidimensional analysis space of possibilities. The name "pattern search" was coined by Hooke and Jeeves.[1] An early and simple variant is attributed to Fermi and Metropolis when they worked at the Los Alamos National Laboratory. It is described by Davidon[2] as follows: they varied one theoretical parameter at a time by steps of the same magnitude, and when no such increase or decrease in any one parameter further improved the fit to the experimental data, they halved the step size and repeated the process until the steps were deemed sufficiently small. Convergence is a pattern search method proposed by Yu, who proved that it converges using the theory of positive bases.[3] Later, Torczon, Lagarias and co-authors[4][5] used positive-basis techniques to prove the convergence of another pattern-search method on specific classes of functions. Outside of such classes, pattern search is a heuristic that can provide useful approximate solutions for some problems but can fail on others, and it is not guaranteed to converge to a solution; indeed, pattern-search methods can converge to non-stationary points on some relatively tame problems.[6][7]
https://en.wikipedia.org/wiki/Pattern_search_(optimization)
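The sketch below is a minimal coordinate ("compass") pattern search in the spirit of the Fermi–Metropolis procedure quoted above: it probes steps of the current size along each coordinate direction, keeps any improvement, and halves the step size when no direction improves. No gradients are used; the test function, tolerances, and names are illustrative.

# Minimal gradient-free coordinate pattern search.
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, maxiter=10000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(maxiter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):            # probe +step and -step along axis i
                trial = x.copy()
                trial[i] += delta
                ftrial = f(trial)
                if ftrial < fx:
                    x, fx, improved = trial, ftrial, True
        if not improved:
            step *= 0.5                            # no direction improved: shrink the pattern
            if step < tol:
                break
    return x

f = lambda x: (x[0] - 3.0)**2 + abs(x[1] + 1.0)    # not differentiable at its minimiser
print(pattern_search(f, [0.0, 0.0]))               # close to [3, -1]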
Innumerical analysis, aquasi-Newton methodis aniterative numerical methodused either tofind zeroesor tofind local maxima and minimaof functions via an iterativerecurrence formulamuch like the one forNewton's method, except using approximations of thederivativesof the functions in place of exact derivatives. Newton's method requires theJacobian matrixof allpartial derivativesof a multivariate function when used to search for zeros or theHessian matrixwhen usedfor finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration. Someiterative methodsthat reduce to Newton's method, such assequential quadratic programming, may also be considered quasi-Newton methods. Newton's methodto find zeroes of a functiong{\displaystyle g}of multiple variables is given byxn+1=xn−[Jg(xn)]−1g(xn){\displaystyle x_{n+1}=x_{n}-[J_{g}(x_{n})]^{-1}g(x_{n})}, where[Jg(xn)]−1{\displaystyle [J_{g}(x_{n})]^{-1}}is theleft inverseof theJacobian matrixJg(xn){\displaystyle J_{g}(x_{n})}ofg{\displaystyle g}evaluated forxn{\displaystyle x_{n}}. Strictly speaking, any method that replaces the exact JacobianJg(xn){\displaystyle J_{g}(x_{n})}with an approximation is a quasi-Newton method.[1]For instance, the chord method (whereJg(xn){\displaystyle J_{g}(x_{n})}is replaced byJg(x0){\displaystyle J_{g}(x_{0})}for all iterations) is a simple example. The methods given below foroptimizationrefer to an important subclass of quasi-Newton methods,secant methods.[2] Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetrical. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes.Broyden's "good" and "bad" methodsare two methods commonly used to find extrema that can also be applied to find zeroes. Other methods that can be used are thecolumn-updating method, theinverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3] The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of thegradientof that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, ifg{\displaystyle g}is the gradient off{\displaystyle f}, then searching for the zeroes of the vector-valued functiong{\displaystyle g}corresponds to the search for the extrema of the scalar-valued functionf{\displaystyle f}; the Jacobian ofg{\displaystyle g}now becomes the Hessian off{\displaystyle f}. The main difference is thatthe Hessian matrix is a symmetric matrix, unlike the Jacobian whensearching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry. Inoptimization,quasi-Newton methods(a special case ofvariable-metric methods) are algorithms for finding localmaxima and minimaoffunctions. 
Quasi-Newton methods for optimization are based onNewton's methodto find thestationary pointsof a function, points where the gradient is 0. Newton's method assumes that the function can be locally approximated as aquadraticin the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and theHessian matrixof secondderivativesof the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of thesecant methodto find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation isunder-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed byWilliam C. Davidon, a physicist working atArgonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: theDFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently theSR1 formula(for "symmetric rank-one"), theBHHHmethod, the widespreadBFGS method(suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extensionL-BFGS. The Broyden's class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintainpositive-definitenessand can be used for indefinite problems. TheBroyden's methoddoes not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating theJacobian(rather than the Hessian). One of the chief advantages of quasi-Newton methods overNewton's methodis that theHessian matrix(or, in the case of quasi-Newton methods, its approximation)B{\displaystyle B}does not need to be inverted. Newton's method, and its derivatives such asinterior point methods, require the Hessian to be inverted, which is typically implemented by solving asystem of linear equationsand is often quite costly. In contrast, quasi-Newton methods usually generate an estimate ofB−1{\displaystyle B^{-1}}directly. As inNewton's method, one uses a second-order approximation to find the minimum of a functionf(x){\displaystyle f(x)}. TheTaylor seriesoff(x){\displaystyle f(x)}around an iterate is where (∇f{\displaystyle \nabla f}) is thegradient, andB{\displaystyle B}an approximation to theHessian matrix.[4]The gradient of this approximation (with respect toΔx{\displaystyle \Delta x}) is and setting this gradient to zero (which is the goal of optimization) provides the Newton step: The Hessian approximationB{\displaystyle B}is chosen to satisfy which is called thesecant equation(the Taylor series of the gradient itself). In more than one dimensionB{\displaystyle B}isunderdetermined. In one dimension, solving forB{\displaystyle B}and applying the Newton's step with the updated value is equivalent to thesecant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). 
Most methods (but with exceptions, such asBroyden's method) seek a symmetric solution (BT=B{\displaystyle B^{T}=B}); furthermore, the variants listed below can be motivated by finding an updateBk+1{\displaystyle B_{k+1}}that is as close as possible toBk{\displaystyle B_{k}}in somenorm; that is,Bk+1=argminB⁡‖B−Bk‖V{\displaystyle B_{k+1}=\operatorname {argmin} _{B}\|B-B_{k}\|_{V}}, whereV{\displaystyle V}is somepositive-definite matrixthat defines the norm. An approximate initial valueB0=βI{\displaystyle B_{0}=\beta I}is often sufficient to achieve rapid convergence, although there is no general strategy to chooseβ{\displaystyle \beta }.[5]Note thatB0{\displaystyle B_{0}}should be positive-definite. The unknownxk{\displaystyle x_{k}}is updated applying the Newton's step calculated using the current approximate Hessian matrixBk{\displaystyle B_{k}}: is used to update the approximate HessianBk+1{\displaystyle B_{k+1}}, or directly its inverseHk+1=Bk+1−1{\displaystyle H_{k+1}=B_{k+1}^{-1}}using theSherman–Morrison formula. The most popular update formulas are: Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2]These recursive low-rank matrix updates can also represented as an initial matrix plus a low-rank correction. This is theCompact quasi-Newton representation, which is particularly effective for constrained and/or large problems. Whenf{\displaystyle f}is a convex quadratic function with positive-definite HessianB{\displaystyle B}, one would expect the matricesHk{\displaystyle H_{k}}generated by a quasi-Newton method to converge to the inverse HessianH=B−1{\displaystyle H=B^{-1}}. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6] Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include: Notable proprietary implementations include:
https://en.wikipedia.org/wiki/Quasi-Newton_methods
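The following sketch shows a quasi-Newton iteration that maintains the inverse Hessian approximation directly (the standard BFGS update of H_k), so no linear system has to be solved per step, which is the advantage noted above. The quadratic test problem and the exact line search (valid only because the objective is quadratic) are illustrative choices.

# Quasi-Newton (BFGS) iteration maintaining the inverse Hessian approximation.
import numpy as np

def bfgs_inverse_update(H, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    # Standard BFGS update of the inverse Hessian approximation; it satisfies H_{k+1} y = s.
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)                      # SPD Hessian of f(x) = 0.5 x^T A x - b^T x
b = rng.normal(size=4)
grad = lambda x: A @ x - b

x = np.zeros(4)
H = np.eye(4)                                # initial guess B_0^{-1} = I
for _ in range(10):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                               # quasi-Newton direction, no solve needed
    alpha = -(g @ d) / (d @ (A @ d))         # exact line search (quadratic objective)
    s = alpha * d
    x_new = x + s
    y = grad(x_new) - g
    H = bfgs_inverse_update(H, s, y)
    x = x_new

print(np.linalg.norm(grad(x)) < 1e-8)        # True: a stationary point is reached
print(np.linalg.norm(H - np.linalg.inv(A)))  # small: H comes to approximate the true inverse Hessian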
TheSymmetric Rank 1(SR1) method is aquasi-Newton methodto update the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization to thesecant methodfor a multidimensional problem. This update maintains thesymmetryof the matrix but doesnotguarantee that the update bepositive definite. The sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions, in theory; in practice, the approximate Hessians generated by the SR1 method show faster progress towards the true Hessian than do popular alternatives (BFGSorDFP), in preliminary numerical experiments.[1][2]The SR1 method has computational advantages forsparseorpartially separableproblems.[3] A twice continuously differentiable functionx↦f(x){\displaystyle x\mapsto f(x)}has agradient(∇f{\displaystyle \nabla f}) andHessian matrixB{\displaystyle B}: The functionf{\displaystyle f}has an expansion as aTaylor seriesatx0{\displaystyle x_{0}}, which can be truncated its gradient has a Taylor-series approximation also which is used to updateB{\displaystyle B}. The above secant-equation need not have a unique solutionB{\displaystyle B}. The SR1 formula computes (via an update ofrank1) the symmetric solution that is closest[further explanation needed]to the current approximate-valueBk{\displaystyle B_{k}}: where The corresponding update to the approximate inverse-HessianHk=Bk−1{\displaystyle H_{k}=B_{k}^{-1}}is One might wonder why positive-definiteness is not preserved — after all, a rank-1 update of the formBk+1=Bk+vvT{\displaystyle B_{k+1}=B_{k}+vv^{T}}is positive-definite ifBk{\displaystyle B_{k}}is. The explanation is that the update might be of the formBk+1=Bk−vvT{\displaystyle B_{k+1}=B_{k}-vv^{T}}instead because the denominator can be negative, and in that case there are no guarantees about positive-definiteness. The SR1 formula has been rediscovered a number of times. Since the denominator can vanish, some authors have suggested that the update be applied only if wherer∈(0,1){\displaystyle r\in (0,1)}is a small number, e.g.10−8{\displaystyle 10^{-8}}.[4] The SR1 update maintains a dense matrix, which can be prohibitive for large problems. Similar to theL-BFGSmethod also a limited-memory SR1 (L-SR1) algorithm exists.[5]Instead of storing the full Hessian approximation, a L-SR1 method only stores them{\displaystyle m}most recent pairs{(si,yi)}i=k−mk−1{\displaystyle \{(s_{i},y_{i})\}_{i=k-m}^{k-1}}, whereΔxi:=si{\displaystyle \Delta x_{i}:=s_{i}}andm{\displaystyle m}is an integer much smaller than the problem size (m≪n{\displaystyle m\ll n}). The limited-memory matrix is based on acompact matrix representation Bk=B0+JkNk−1JkT,Jk=Yk−B0Sk,Nk=Dk+Lk+LkT−SkTB0Sk{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}=Y_{k}-B_{0}S_{k},\quad N_{k}=D_{k}+L_{k}+L_{k}^{T}-S_{k}^{T}B_{0}S_{k}} Sk=[sk−msk−m+1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{k-m}&s_{k-m+1}&\ldots &s_{k-1}\end{bmatrix}},}Yk=[yk−myk−m+1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{k-m}&y_{k-m+1}&\ldots &y_{k-1}\end{bmatrix}},} (Lk)ij=si−1Tyj−1,(Dk)ii=si−1Tyi−1,k−m≤i≤k−1{\displaystyle {\big (}L_{k}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad (D_{k})_{ii}=s_{i-1}^{T}y_{i-1},\quad k-m\leq i\leq k-1} Since the update can be indefinite, the L-SR1 algorithm is suitable for atrust-regionstrategy. Because of the limited-memory matrix, the trust-region L-SR1 algorithm scales linearly with the problem size, just like L-BFGS.
https://en.wikipedia.org/wiki/Symmetric_rank-one
Thecompact representationforquasi-Newton methodsis amatrix decomposition, which is typically used ingradientbasedoptimizationalgorithmsor for solvingnonlinear systems. The decomposition uses a low-rank representation for the direct and/or inverseHessianor theJacobianof a nonlinear system. Because of this, the compact representation is often used for large problems andconstrained optimization. The compact representation of a quasi-Newton matrix for the inverse HessianHk{\displaystyle H_{k}}or direct HessianBk{\displaystyle B_{k}}of a nonlinearobjective functionf(x):Rn→R{\displaystyle f(x):\mathbb {R} ^{n}\to \mathbb {R} }expresses a sequence of recursive rank-1 or rank-2 matrix updates as one rank-k{\displaystyle k}or rank-2k{\displaystyle 2k}update of an initial matrix.[1][2]Because it is derived from quasi-Newton updates, it uses differences of iterates and gradients∇f(xk)=gk{\displaystyle \nabla f(x_{k})=g_{k}}in its definition{si−1=xi−xi−1,yi−1=gi−gi−1}i=1k{\displaystyle \{s_{i-1}=x_{i}-x_{i-1},y_{i-1}=g_{i}-g_{i-1}\}_{i=1}^{k}}. In particular, forr=k{\displaystyle r=k}orr=2k{\displaystyle r=2k}the rectangularn×r{\displaystyle n\times r}matricesUk,Jk{\displaystyle U_{k},J_{k}}and ther×r{\displaystyle r\times r}square symmetric systemsMk,Nk{\displaystyle M_{k},N_{k}}depend on thesi,yi{\displaystyle s_{i},y_{i}}'s and define the quasi-Newton representations Because of the special matrix decomposition the compact representation is implemented in state-of-the-art optimization software.[3][4][5][6]When combined with limited-memory techniques it is a popular technique forconstrained optimizationwith gradients.[7]Linear algebra operations can be done efficiently, likematrix-vector products,solvesoreigendecompositions. It can be combined withline-searchandtrust regiontechniques, and the representation has been developed for many quasi-Newton updates. For instance, the matrix vector product with the direct quasi-Newton Hessian and an arbitrary vectorg∈Rn{\displaystyle g\in \mathbb {R} ^{n}}is: In the context of theGMRESmethod, Walker[8]showed that a product ofHouseholder transformations(an identity plus rank-1) can be expressed as a compact matrix formula. This led to the derivation of an explicit matrix expression for the product ofk{\displaystyle k}identity plus rank-1 matrices.[7]Specifically, forSk=[s0s1…sk−1],{\textstyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots s_{k-1}\end{bmatrix}},}Yk=[y0y1…yk−1],{\displaystyle ~Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots y_{k-1}\end{bmatrix}},}(Rk)ij=si−1Tyj−1,{\displaystyle ~(R_{k})_{ij}=s_{i-1}^{T}y_{j-1},}ρi−1=1/si−1Tyi−1{\displaystyle ~\rho _{i-1}=1/s_{i-1}^{T}y_{i-1}}andVi=I−ρi−1yi−1si−1T{\textstyle ~V_{i}=I-\rho _{i-1}y_{i-1}s_{i-1}^{T}}when1≤i≤j≤k{\displaystyle 1\leq i\leq j\leq k}the product ofk{\displaystyle k}rank-1 updates to the identity is∏i=1kVi−1=(I−ρ0y0s0T)⋯(I−ρk−1yk−1sk−1T)=I−YkRk−1SkT{\displaystyle \prod _{i=1}^{k}V_{i-1}=\left(I-\rho _{0}y_{0}s_{0}^{T}\right)\cdots \left(I-\rho _{k-1}y_{k-1}s_{k-1}^{T}\right)=I-Y_{k}R_{k}^{-1}S_{k}^{T}}TheBFGSupdate can be expressed in terms of products of theVi{\displaystyle V_{i}}'s, which have a compact matrix formula. 
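The product formula above is easy to check numerically. The following sketch compares the explicit product of the rank-1 factors with the compact form I − Y_k R_k^{-1} S_k^T; the random S and Y matrices are assumptions used purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
S = rng.standard_normal((n, k))        # columns s_0, ..., s_{k-1}
Y = rng.standard_normal((n, k))        # columns y_0, ..., y_{k-1}

# explicit product (I - rho_0 y_0 s_0^T) ... (I - rho_{k-1} y_{k-1} s_{k-1}^T)
P = np.eye(n)
for i in range(k):
    rho = 1.0 / (S[:, i] @ Y[:, i])
    P = P @ (np.eye(n) - rho * np.outer(Y[:, i], S[:, i]))

# compact form I - Y R^{-1} S^T with R upper triangular, R_ij = s_i^T y_j for i <= j
R = np.triu(S.T @ Y)
C = np.eye(n) - Y @ np.linalg.solve(R, S.T)

print(np.allclose(P, C))               # True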
Therefore, the BFGS recursion can exploit these block matrix representations A parametric family of quasi-Newton updates includes many of the most known formulas.[9]For arbitrary vectorsvk{\displaystyle v_{k}}andck{\displaystyle c_{k}}such thatvkTyk≠0{\displaystyle v_{k}^{T}y_{k}\neq 0}andckTsk≠0{\displaystyle c_{k}^{T}s_{k}\neq 0}general recursive update formulas for the inverse and direct Hessian estimates are By making specific choices for the parameter vectorsvk{\displaystyle v_{k}}andck{\displaystyle c_{k}}well known methods are recovered Collecting the updating vectors of the recursive formulas into matrices, define Sk=[s0s1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots &s_{k-1}\end{bmatrix}},}Yk=[y0y1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots &y_{k-1}\end{bmatrix}},}Vk=[v0v1…vk−1],{\displaystyle V_{k}={\begin{bmatrix}v_{0}&v_{1}&\ldots &v_{k-1}\end{bmatrix}},}Ck=[c0c1…ck−1],{\displaystyle C_{k}={\begin{bmatrix}c_{0}&c_{1}&\ldots &c_{k-1}\end{bmatrix}},} upper triangular (Rk)ij:=(RkSY)ij=si−1Tyj−1,(RkVY)ij=vi−1Tyj−1,(RkCS)ij=ci−1Tsj−1,for1≤i≤j≤k{\displaystyle {\big (}R_{k}{\big )}_{ij}:={\big (}R_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{VY}}{\big )}_{ij}=v_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{CS}}{\big )}_{ij}=c_{i-1}^{T}s_{j-1},\quad \quad {\text{ for }}1\leq i\leq j\leq k} lower triangular (Lk)ij:=(LkSY)ij=si−1Tyj−1,(LkVY)ij=vi−1Tyj−1,(LkCS)ij=ci−1Tsj−1,for1≤j<i≤k{\displaystyle {\big (}L_{k}{\big )}_{ij}:={\big (}L_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}L_{k}^{\text{VY}}{\big )}_{ij}=v_{i-1}^{T}y_{j-1},\quad {\big (}L_{k}^{\text{CS}}{\big )}_{ij}=c_{i-1}^{T}s_{j-1},\quad \quad {\text{ for }}1\leq j<i\leq k} and diagonal (Dk)ij:=(DkSY)ij=si−1Tyj−1,for1≤i=j≤k{\displaystyle (D_{k})_{ij}:={\big (}D_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad \quad {\text{ for }}1\leq i=j\leq k} With these definitions the compact representations of general rank-2 updates in (2) and (3) (including the well known quasi-Newton updates in Table 1) have been developed in Brust:[11] Hk=H0+UkMk−1UkT,{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},} Uk=[VkSk−H0Yk]{\displaystyle U_{k}={\begin{bmatrix}V_{k}&S_{k}-H_{0}Y_{k}\end{bmatrix}}} Mk=[0k×kRkVY(RkVY)TRk+RkT−(Dk+YkTH0Yk)]{\displaystyle M_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{VY}}\\{\big (}R_{k}^{\text{VY}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+Y_{k}^{T}H_{0}Y_{k})\end{bmatrix}}} and the formula for the direct Hessian is Bk=B0+JkNk−1JkT,{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},} Jk=[CkYk−B0Sk]{\displaystyle J_{k}={\begin{bmatrix}C_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}}} Nk=[0k×kRkCS(RkCS)TRk+RkT−(Dk+SkTB0Sk)]{\displaystyle N_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{CS}}\\{\big (}R_{k}^{\text{CS}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+S_{k}^{T}B_{0}S_{k})\end{bmatrix}}} For instance, whenVk=Sk{\displaystyle V_{k}=S_{k}}the representation in (4) is the compact formula for the BFGS recursion in (1). Prior to the development of the compact representations of (2) and (3), equivalent representations have been discovered for most known updates (see Table 1). 
Along with the SR1 representation, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) compact representation was the first compact formula known.[7]In particular, the inverse representation is given by Hk=H0+UkMk−1UkT,Uk=[SkH0Yk],Mk−1=[Rk−T(Dk+YkTH0Yk)Rk−1−Rk−T−Rk−10]{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},\quad U_{k}={\begin{bmatrix}S_{k}&H_{0}Y_{k}\end{bmatrix}},\quad M_{k}^{-1}=\left[{\begin{smallmatrix}R_{k}^{-T}(D_{k}+Y_{k}^{T}H_{0}Y_{k})R_{k}^{-1}&-R_{k}^{-T}\\-R_{k}^{-1}&0\end{smallmatrix}}\right]}The direct Hessian approximation can be found by applying theSherman-Morrison-Woodbury identityto the inverse Hessian: Bk=B0+JkNk−1JkT,Jk=[B0SkYk],Nk=[STB0SkLkLkT−Dk]{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}={\begin{bmatrix}B_{0}S_{k}&Y_{k}\end{bmatrix}},\quad N_{k}=\left[{\begin{smallmatrix}S^{T}B_{0}S_{k}&L_{k}\\L_{k}^{T}&-D_{k}\end{smallmatrix}}\right]} The SR1 (Symmetric Rank-1) compact representation was first proposed in.[7]Using the definitions ofDk,Lk{\displaystyle D_{k},L_{k}}andRk{\displaystyle R_{k}}from above, the inverse Hessian formula is given by Hk=H0+UkMk−1UkT,Uk=Sk−H0Yk,Mk=Rk+RkT−Dk−YkTH0Yk{\displaystyle H_{k}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},\quad U_{k}=S_{k}-H_{0}Y_{k},\quad M_{k}=R_{k}+R_{k}^{T}-D_{k}-Y_{k}^{T}H_{0}Y_{k}} The direct Hessian is obtained by the Sherman-Morrison-Woodbury identity and has the form Bk=B0+JkNk−1JkT,Jk=Yk−B0Sk,Nk=Dk+Lk+LkT−SkTB0Sk{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}=Y_{k}-B_{0}S_{k},\quad N_{k}=D_{k}+L_{k}+L_{k}^{T}-S_{k}^{T}B_{0}S_{k}} The multipoint symmetric secant (MSS) method is a method that aims to satisfy multiple secant equations. The recursive update formula was originally developed by Burdakov.[12]The compact representation for the direct Hessian was derived in[13] Bk=B0+JkNk−1JkT,Jk=[SkYk−B0Sk],Nk=[Wk(SkTB0Sk−(Rk−Dk+RkT))WkWkWk0]−1,Wk=(SkTSk)−1{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}={\begin{bmatrix}S_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}},\quad N_{k}=\left[{\begin{smallmatrix}W_{k}(S_{k}^{T}B_{0}S_{k}-(R_{k}-D_{k}+R_{k}^{T}))W_{k}&W_{k}\\W_{k}&0\end{smallmatrix}}\right]^{-1},\quad W_{k}=(S_{k}^{T}S_{k})^{-1}} Another equivalent compact representation for the MSS matrix is derived by rewritingJk{\displaystyle J_{k}}in terms ofJk=[SkB0Yk]{\displaystyle J_{k}={\begin{bmatrix}S_{k}&B_{0}Y_{k}\end{bmatrix}}}.[14]The inverse representation can be obtained by application for the Sherman-Morrison-Woodbury identity. 
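A quick numerical sanity check of the compact BFGS representation of the inverse Hessian given above, comparing it with the result of applying the recursive inverse-BFGS update k times; the random data and the sign flip that enforces s_i^T y_i > 0 are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 3
S = rng.standard_normal((n, k))       # s_0, ..., s_{k-1}
Y = rng.standard_normal((n, k))
for i in range(k):                    # make each curvature product s_i^T y_i positive
    if S[:, i] @ Y[:, i] < 0:
        Y[:, i] = -Y[:, i]
H0 = np.eye(n)

# recursive inverse-BFGS updates
H = H0.copy()
for i in range(k):
    s, y = S[:, i], Y[:, i]
    rho = 1.0 / (s @ y)
    V = np.eye(n) - rho * np.outer(y, s)
    H = V.T @ H @ V + rho * np.outer(s, s)

# compact representation H_k = H_0 + U M^{-1} U^T
R = np.triu(S.T @ Y)                  # R_ij = s_i^T y_j for i <= j
D = np.diag(np.diag(R))
Rinv = np.linalg.inv(R)
U = np.hstack([S, H0 @ Y])
M_inv = np.block([
    [Rinv.T @ (D + Y.T @ H0 @ Y) @ Rinv, -Rinv.T],
    [-Rinv,                              np.zeros((k, k))],
])
H_compact = H0 + U @ M_inv @ U.T

print(np.allclose(H, H_compact))      # True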
Since the DFP (Davidon Fletcher Powell) update is the dual of the BFGS formula (i.e., swappingHk↔Bk{\displaystyle H_{k}\leftrightarrow B_{k}},H0↔B0{\displaystyle H_{0}\leftrightarrow B_{0}}andyk↔sk{\displaystyle y_{k}\leftrightarrow s_{k}}in the BFGS update), the compact representation for DFP can be immediately obtained from the one for BFGS.[15] The PSB (Powell-Symmetric-Broyden) compact representation was developed for the direct Hessian approximation.[16]It is equivalent to substitutingCk=Sk{\displaystyle C_{k}=S_{k}}in (5) Bk=B0+JkNk−1JkT,Jk=[SkYk−B0Sk],Nk=[0RkSS(RkSS)TRk+RkT−(Dk+SkTB0Sk)]{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},\quad J_{k}={\begin{bmatrix}S_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}},\quad N_{k}=\left[{\begin{smallmatrix}0&R_{k}^{\text{SS}}\\(R_{k}^{\text{SS}})^{T}&R_{k}+R_{k}^{T}-(D_{k}+S_{k}^{T}B_{0}S_{k})\end{smallmatrix}}\right]} For structured optimization problems in which the objective function can be decomposed into two partsf(x)=k^(x)+u^(x){\displaystyle f(x)={\widehat {k}}(x)+{\widehat {u}}(x)}, where the gradients and Hessian ofk^(x){\displaystyle {\widehat {k}}(x)}are known but only the gradient ofu^(x){\displaystyle {\widehat {u}}(x)}is known, structured BFGS formulas exist. The compact representation of these methods has the general form of (5), with specificJk{\displaystyle J_{k}}andNk{\displaystyle N_{k}}.[17] The reduced compact representation (RCR) of BFGS is for linear equality constrained optimizationminimizef(x)subject to:Ax=b{\displaystyle {\text{ minimize }}f(x){\text{ subject to: }}Ax=b}, whereA{\displaystyle A}isunderdetermined. In addition to the matricesSk,Yk{\displaystyle S_{k},Y_{k}}the RCR also stores the projections of theyi{\displaystyle y_{i}}'s onto the nullspace ofA{\displaystyle A} Zk=[z0z1⋯zk−1],zi=Pyi,P=I−A(ATA)−1AT,0≤i≤k−1{\displaystyle Z_{k}={\begin{bmatrix}z_{0}&z_{1}&\cdots z_{k-1}\end{bmatrix}},\quad z_{i}=Py_{i},\quad P=I-A(A^{T}A)^{-1}A^{T},\quad 0\leq i\leq k-1} ForBk{\displaystyle B_{k}}the compact representation of the BFGS matrix (with a multiple of the identityB0{\displaystyle B_{0}}) the (1,1) block of the inverseKKTmatrix has the compact representation[18] Kk=[BkATA0],B0=1γkI,H0=γkI,γk>0{\displaystyle K_{k}={\begin{bmatrix}B_{k}&A^{T}\\A&0\end{bmatrix}},\quad B_{0}={\frac {1}{\gamma _{k}}}I,\quad H_{0}=\gamma _{k}I,\quad \gamma _{k}>0} (Kk−1)11=H0+UkMk−1UkT,Uk=[ATSkZk],Mk=[−AAT/γkGk],Gk=[Rk−T(Dk+YkTH0Yk)Rk−1−H0Rk−T−H0Rk−10]−1{\displaystyle {\big (}K_{k}^{-1}{\big )}_{11}=H_{0}+U_{k}M_{k}^{-1}U_{k}^{T},\quad U_{k}={\begin{bmatrix}A^{T}&S_{k}&Z_{k}\end{bmatrix}},\quad M_{k}=\left[{\begin{smallmatrix}-AA^{T}/\gamma _{k}&\\&G_{k}\end{smallmatrix}}\right],\quad G_{k}=\left[{\begin{smallmatrix}R_{k}^{-T}(D_{k}+Y_{k}^{T}H_{0}Y_{k})R_{k}^{-1}&-H_{0}R_{k}^{-T}\\-H_{0}R_{k}^{-1}&0\end{smallmatrix}}\right]^{-1}} The most common use of the compact representations is for thelimited-memorysetting wherem≪n{\displaystyle m\ll n}denotes the memory parameter, with typical values aroundm∈[5,12]{\displaystyle m\in [5,12]}(see e.g.,[18][7]). Then, instead of storing the history of all vectors one limits this to them{\displaystyle m}most recent vectors{(si,yi}i=k−mk−1{\displaystyle \{(s_{i},y_{i}\}_{i=k-m}^{k-1}}and possibly{vi}i=k−mk−1{\displaystyle \{v_{i}\}_{i=k-m}^{k-1}}or{ci}i=k−mk−1{\displaystyle \{c_{i}\}_{i=k-m}^{k-1}}. 
Further, typically the initialization is chosen as an adaptive multiple of the identity, H_k^{(0)} = γ_k I with γ_k = y_{k−1}^T s_{k−1} / y_{k−1}^T y_{k−1}, and correspondingly B_k^{(0)} = (1/γ_k) I. Limited-memory methods are frequently used for large-scale problems with many variables (i.e., n can be large), in which the limited-memory matrices S_k ∈ R^{n×m} and Y_k ∈ R^{n×m} (and possibly V_k, C_k) are tall and very skinny: S_k = [ s_{k−m} … s_{k−1} ] and Y_k = [ y_{k−m} … y_{k−1} ]. Both open source and proprietary implementations are available.
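A minimal sketch of the limited-memory bookkeeping just described: only the m most recent (s, y) pairs are kept, and the initialization is the adaptive multiple of the identity γ_k = y_{k−1}^T s_{k−1} / y_{k−1}^T y_{k−1}. The class name and the default m = 5 are assumptions for illustration.

from collections import deque
import numpy as np

class LimitedMemory:
    # Stores only the m most recent (s_i, y_i) pairs and exposes the adaptive
    # initialization gamma_k = y^T s / y^T y described above.
    def __init__(self, m=5):
        self.pairs = deque(maxlen=m)       # the oldest pair is discarded automatically

    def push(self, s, y):
        self.pairs.append((np.asarray(s, float), np.asarray(y, float)))

    def gamma(self):
        s, y = self.pairs[-1]              # most recent pair
        return (y @ s) / (y @ y)

    def S_and_Y(self):
        S = np.column_stack([s for s, _ in self.pairs])   # n x m, tall and skinny
        Y = np.column_stack([y for _, y in self.pairs])
        return S, Y

mem = LimitedMemory(m=5)
rng = np.random.default_rng(0)
for _ in range(8):                          # push 8 pairs; only the last 5 are retained
    mem.push(rng.standard_normal(100), rng.standard_normal(100))
S, Y = mem.S_and_Y()
print(S.shape, Y.shape, mem.gamma())        # (100, 5) (100, 5) and the adaptive gamma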
https://en.wikipedia.org/wiki/Compact_quasi-Newton_representation
Innumerical analysis, theNewton–Raphson method, also known simply asNewton's method, named afterIsaac NewtonandJoseph Raphson, is aroot-finding algorithmwhich produces successively betterapproximationsto theroots(or zeroes) of areal-valuedfunction. The most basic version starts with areal-valued functionf, itsderivativef′, and an initial guessx0for arootoff. Iffsatisfies certain assumptions and the initial guess is close, then x1=x0−f(x0)f′(x0){\displaystyle x_{1}=x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}} is a better approximation of the root thanx0. Geometrically,(x1, 0)is thex-interceptof thetangentof thegraphoffat(x0,f(x0)): that is, the improved guess,x1, is the unique root of thelinear approximationoffat the initial guess,x0. The process is repeated as xn+1=xn−f(xn)f′(xn){\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}} until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class ofHouseholder's methods, and was succeeded byHalley's method. The method can also be extended tocomplex functionsand tosystems of equations. The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by itstangent linenear the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can beiterated. The bestlinear approximationto an arbitrarydifferentiable functionf(x){\displaystyle f(x)}near the pointx=xn{\displaystyle x=x_{n}}is the tangent line to the curve, with equation f(x)≈f(xn)+f′(xn)(x−xn).{\displaystyle f(x)\approx f(x_{n})+f'(x_{n})(x-x_{n}).} The root of this linear function, the place where it intercepts the⁠x{\displaystyle x}⁠-axis, can be taken as a closer approximate root⁠xn+1{\displaystyle x_{n+1}}⁠: xn+1=xn−f(xn)f′(xn).{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.} The process can be started with any arbitrary initial guess⁠x0{\displaystyle x_{0}}⁠, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge if⁠f′(x0)≠0{\displaystyle f'(x_{0})\neq 0}⁠. Furthermore, for a root ofmultiplicity1, the convergence is at least quadratic (seeRate of convergence) in some sufficiently smallneighbourhoodof the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in§ Analysisbelow. Householder's methodsare similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if⁠f{\displaystyle f}⁠or its derivatives are computationally expensive to evaluate. 
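Before turning to the history of the method, a tiny numerical illustration of the digit-doubling behaviour described above; the cubic f(x) = x^3 − 2 and the starting guess 1.5 are assumptions chosen for this example.

def f(x):
    return x**3 - 2               # root at the cube root of 2

def f_prime(x):
    return 3 * x**2

x = 1.5                           # starting guess (an assumption for this example)
root = 2 ** (1 / 3)
for n in range(6):
    x = x - f(x) / f_prime(x)     # tangent-line (Newton) update
    print(n + 1, x, abs(x - root))   # the error roughly squares at every step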
In theOld Babylonianperiod (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method,described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found inHeron of Alexandria'sMetrica(1st–2nd century CE), so is often calledHeron's method.[1]Jamshīd al-Kāshīused a method to solvexP−N= 0to find roots ofN, a method that was algebraically equivalent to Newton's method, and in which a similar method was found inTrigonometria Britannica, published byHenry Briggsin 1633.[2] The method first appeared roughly inIsaac Newton's work inDe analysi per aequationes numero terminorum infinitas(written in 1669, published in 1711 byWilliam Jones) and inDe metodis fluxionum et serierum infinitarum(written in 1671, translated and published asMethod of Fluxionsin 1736 byJohn Colson).[3][4]However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producingTaylor seriesin the latter case. Newton may have derived his method from a similar, less precise method by mathematicianFrançois Viète, however, the two methods are not the same.[3]The essence of Viète's own method can be found in the work of the mathematicianSharaf al-Din al-Tusi.[5] The Japanese mathematicianSeki Kōwaused a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing.[6] Newton's method was first published in 1685 inA Treatise of Algebra both Historical and PracticalbyJohn Wallis.[7]In 1690,Joseph Raphsonpublished a simplified description inAnalysis aequationum universalis.[8]Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740,Thomas Simpsondescribed Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero. Arthur Cayleyin 1879 inThe Newton–Fourier imaginary problemwas the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of thetheory of iterationsof rational functions. Newton's method is a powerful technique—if the derivative of the function at the root is nonzero, then theconvergenceis at least quadratic: as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method. Newton's method requires that the derivative can be calculated directly. 
An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like thesecant methodwhose convergence is slower than that of Newton's method. It is important to review theproof of quadratic convergenceof Newton's method before implementing it. Specifically, one should review the assumptions made in the proof. Forsituations where the method fails to converge, it is because the assumptions made in this proof are not met. For example,in some cases, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by usingsuccessive over-relaxation, or the speed of convergence can be increased by using the same method. In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root finding method. If the root being sought hasmultiplicitygreater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together then it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicitymof the root is known, the following modified algorithm preserves the quadratic convergence rate:[9] xn+1=xn−mf(xn)f′(xn).{\displaystyle x_{n+1}=x_{n}-m{\frac {f(x_{n})}{f'(x_{n})}}.} This is equivalent to usingsuccessive over-relaxation. On the other hand, if the multiplicitymof the root is not known, it is possible to estimatemafter carrying out one or two iterations, and then use that value to increase the rate of convergence. If the multiplicitymof the root is finite theng(x) =⁠f(x)/f′(x)⁠will have a root at the same location with multiplicity 1. Applying Newton's method to find the root ofg(x)recovers quadratic convergence in many cases although it generally involves the second derivative off(x). In a particularly simple case, iff(x) =xmtheng(x) =⁠x/m⁠and Newton's method finds the root in a single iteration with xn+1=xn−g(xn)g′(xn)=xn−xnm1m=0.{\displaystyle x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}=x_{n}-{\frac {\;{\frac {x_{n}}{m}}\;}{\frac {1}{m}}}=0\,.} The functionf(x) =x2has a root at 0.[10]Sincefis continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivativef′is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved. The functionf(x) =x+x4/3also has a root at 0, where it is continuously differentiable. Although the first derivativef′is nonzero at the root, the second derivativef′′is nonexistent there, so that quadratic convergence cannot be guaranteed. 
In fact the Newton iteration is given by From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the abovef(x) =x+x4/3and the right of which shows Newton's method applied tof(x) =x+x2. The quadratic convergence in iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...). The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the functionf(x) =x20− 1has a root at 1. Sincef′(1) ≠ 0andfis smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root.[11] The functionf(x) =x(1 +x2)−1/2has a root at 0. The Newton iteration is given by From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between±1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at1or−1, the Newton iteration will oscillate endlessly between±1; if initialized anywhere else, the Newton iteration will diverge.[12]This same trichotomy occurs forf(x) = arctanx.[10] In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the functionf(x) =x(x2− 1)(x− 3)e−(x− 1)2/2has roots at −1, 0, 1, and 3.[13]If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to∞; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to−∞; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in thecomplex planein the form of theNewton fractal. Consider the problem of finding a root off(x) =x1/3. The Newton iteration is Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this casefis not differentiable at its root. In the above example, failure of convergence is reflected by the failure off(xn)to get closer to zero asnincreases, as well as by the fact that successive iterates are growing further and further apart. However, the functionf(x) =x1/3e−x2also has a root at 0. 
The Newton iteration is given by In this example, where againfis not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with bothxn+ 1−xnandf(xn)converging to zero.[14]This is seen in the following table showing the iterates with initialization 1: Although the convergence ofxn+ 1−xnin this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness ofxn+ 1−xnandf(xn)might falsely identify a root. It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a functionfto oscillate between 0 and 1, it is only necessary that the tangent line tofat 0 intersects thex-axis at 1 and that the tangent line tofat 1 intersects thex-axis at 0.[14]This is the case, for example, iff(x) =x3− 2x+ 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 willasymptoticallyoscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root offapproximately equal to −1.76929. In some cases, it is not even possible to perform the Newton iteration. For example, iff(x) =x2− 1, then the Newton iteration is defined by So Newton's method cannot be initialized at 0, since this would makex1undefined. Geometrically, this is because the tangent line tofat 0 is horizontal (i.e.f′(0) = 0), never intersecting thex-axis. Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued. Iffhas an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration.[14]For example, thenatural logarithmfunctionf(x) = lnxhas a root at 1, and is defined only for positivex. Newton's iteration in this case is given by So if the iteration is initialized ate, the next iterate is 0; if the iteration is initialized at a value larger thane, then the next iterate is negative. In either case, the method cannot be continued. Suppose that the functionfhas a zero atα, i.e.,f(α) = 0, andfis differentiable in aneighborhoodofα. Iffis continuously differentiable and its derivative is nonzero atα, then there exists aneighborhoodofαsuch that for all starting valuesx0in that neighborhood, thesequence(xn)willconvergetoα.[15] Iffis continuously differentiable, its derivative is nonzero atα,andit has asecond derivativeatα, then the convergence is quadratic or faster. If the second derivative is not 0 atαthen the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood ofα, then: Δxi+1=f″(α)2f′(α)(Δxi)2+O(Δxi)3,{\displaystyle \Delta x_{i+1}={\frac {f''(\alpha )}{2f'(\alpha )}}\left(\Delta x_{i}\right)^{2}+O\left(\Delta x_{i}\right)^{3}\,,} where Δxi≜xi−α.{\displaystyle \Delta x_{i}\triangleq x_{i}-\alpha \,.} If the derivative is 0 atα, then the convergence is usually only linear. 
Specifically, iffis twice continuously differentiable,f′(α) = 0andf″(α) ≠ 0, then there exists a neighborhood ofαsuch that, for all starting valuesx0in that neighborhood, the sequence of iterates converges linearly, withrate⁠1/2⁠.[16]Alternatively, iff′(α) = 0andf′(x) ≠ 0forx≠α,xin aneighborhoodUofα,αbeing a zero ofmultiplicityr, and iff∈Cr(U), then there exists a neighborhood ofαsuch that, for all starting valuesx0in that neighborhood, the sequence of iterates converges linearly. However, even linear convergence is not guaranteed in pathological situations. In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhoodU+ofα, iffis twice differentiable inU+and iff′≠ 0,f·f″> 0inU+, then, for eachx0inU+the sequencexkis monotonically decreasing toα. According toTaylor's theorem, any functionf(x)which has a continuous second derivative can be represented by an expansion about a point that is close to a root off(x). Suppose this root isα. Then the expansion off(α)aboutxnis: where theLagrange form of the Taylor series expansion remainderis R1=12!f″(ξn)(α−xn)2,{\displaystyle R_{1}={\frac {1}{2!}}f''(\xi _{n})\left(\alpha -x_{n}\right)^{2}\,,} whereξnis in betweenxnandα. Sinceαis the root, (1) becomes: Dividing equation (2) byf′(xn)and rearranging gives Remembering thatxn+ 1is defined by one finds that α−xn+1⏟εn+1=−f″(ξn)2f′(xn)(α−xn⏟εn)2.{\displaystyle \underbrace {\alpha -x_{n+1}} _{\varepsilon _{n+1}}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}{(\,\underbrace {\alpha -x_{n}} _{\varepsilon _{n}}\,)}^{2}\,.} That is, Taking the absolute value of both sides gives Equation (6) shows that theorder of convergenceis at least quadratic if the following conditions are satisfied: whereMis given by M=12(supx∈I|f″(x)|)(supx∈I1|f′(x)|).{\displaystyle M={\frac {1}{2}}\left(\sup _{x\in I}\vert f''(x)\vert \right)\left(\sup _{x\in I}{\frac {1}{\vert f'(x)\vert }}\right).\,} If these conditions hold, |εn+1|≤M⋅εn2.{\displaystyle \vert \varepsilon _{n+1}\vert \leq M\cdot \varepsilon _{n}^{2}\,.} Suppose thatf(x)is aconcave functionon an interval, which isstrictly increasing. If it is negative at the left endpoint and positive at the right endpoint, theintermediate value theoremguarantees that there is a zeroζoffsomewhere in the interval. From geometrical principles, it can be seen that the Newton iterationxistarting at the left endpoint ismonotonically increasingand convergent, necessarily toζ.[17] Joseph Fourierintroduced a modification of Newton's method starting at the right endpoint: This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit ofyimust also be the zeroζ.[17] So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. 
Iffis twice continuously differentiable, it can be proved usingTaylor's theoremthat showing that this difference in locations converges quadratically to zero.[17] All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts ofmonotonicityand concavity are more subtle to formulate.[18]In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative off. However, in this generalization, Newton's iteration is modified so as to be based onTaylor polynomialsrather than thetangent line. In the case of concavity, this modification coincides with the standard Newton method.[19] If we seek the root of a single functionf:Rn→R{\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} }then the errorϵn=xn−α{\displaystyle \epsilon _{n}=x_{n}-\alpha }is a vector such that its components obeyϵk(n+1)=12(ϵ(n))TQkϵ(n)+O(‖ϵ(n)‖3){\displaystyle \epsilon _{k}^{(n+1)}={\frac {1}{2}}(\epsilon ^{(n)})^{T}Q_{k}\epsilon ^{(n)}+O(\|\epsilon ^{(n)}\|^{3})}whereQk{\displaystyle Q_{k}}is a quadratic form:(Qk)i,j=∑ℓ((D2f)−1)i,ℓ∂3f∂xj∂xk∂xℓ{\displaystyle (Q_{k})_{i,j}=\sum _{\ell }((D^{2}f)^{-1})_{i,\ell }{\frac {\partial ^{3}f}{\partial x_{j}\partial x_{k}\partial x_{\ell }}}}evaluated at the rootα{\displaystyle \alpha }(whereD2f{\displaystyle D^{2}f}is the 2nd derivative Hessian matrix). Newton's method is one of many knownmethods of computing square roots. Given a positive numbera, the problem of finding a numberxsuch thatx2=ais equivalent to finding a root of the functionf(x) =x2−a. The Newton iteration defined by this function is given by This happens to coincide with the"Babylonian" method of finding square roots, which consists of replacing an approximate rootxnby thearithmetic meanofxnanda⁄xn. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basicarithmetic operations. The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in a "xn" column is obtained by applying the preceding formula to the entry above it, for instance The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of1. When computing any nonzero square root, the first derivative offmust be nonzero at the root, and thatfis a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration. Consider the problem of finding the positive numberxwithcosx=x3. We can rephrase that as finding the zero off(x) = cos(x) −x3. We havef′(x) = −sin(x) − 3x2. Sincecos(x) ≤ 1for allxandx3> 1forx> 1, we know that our solution lies between 0 and 1. A starting value of 0 will lead to an undefined result which illustrates the importance of using a starting point close to the solution. 
For example, with an initial guess x0 = 0.5, the sequence given by Newton's method is:

x1 = x0 − f(x0)/f′(x0) = 0.5 − (cos 0.5 − 0.5^3)/(−sin 0.5 − 3 × 0.5^2) = 1.112141637097…
x2 = x1 − f(x1)/f′(x1) = 0.909672693736…
x3 = 0.867263818209…
x4 = 0.865477135298…
x5 = 0.865474033111…
x6 = 0.865474033102…

In particular, x6 is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for x3) to 5 and 10, illustrating the quadratic convergence.

One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions f : R^k → R. This is equivalent to finding the zeroes of a single vector-valued function F : R^k → R^k. In the formulation given above, the scalars xn are replaced by vectors xn, and instead of dividing the function f(xn) by its derivative f′(xn) one instead has to left multiply the function F(xn) by the inverse of its k × k Jacobian matrix JF(xn).[20][21][22] This results in the expression

xn+1 = xn − JF(xn)^−1 F(xn),

or, by solving the system of linear equations

JF(xn) (xn+1 − xn) = −F(xn)

for the unknown xn+1 − xn.[23] The k-dimensional variant of Newton's method can be used to solve systems of greater than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J+ = (J^T J)^−1 J^T instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information. For example, the following set of equations needs to be solved for the vector of points [x1, x2], given the vector of known values [2, 3]:[24]

5 x1^2 + x1 x2^2 + sin^2(2 x2) = 2
e^(2 x1 − x2) + 4 x2 = 3

The function vector, F(Xk), and Jacobian matrix, J(Xk), for iteration k, and the vector of known values, Y, are defined below.
F(Xk)=[f1(Xk)f2(Xk)]=[5x12+x1x22+sin2⁡(2x2)e2x1−x2+4x2]kJ(Xk)=[∂f1(X)∂x1,∂f1(X)∂x2∂f2(X)∂x1,∂f2(X)∂x2]k=[10x1+x22,2x1x2+4sin⁡(2x2)cos⁡(2x2)2e2x1−x2,−e2x1−x2+4]kY=[23]{\displaystyle {\begin{aligned}~&F(X_{k})~=~{\begin{bmatrix}{\begin{aligned}~&f_{1}(X_{k})\\~&f_{2}(X_{k})\end{aligned}}\end{bmatrix}}~=~{\begin{bmatrix}{\begin{aligned}~&5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})\\~&e^{2\ x_{1}-x_{2}}+4\ x_{2}\end{aligned}}\end{bmatrix}}_{k}\\~&J(X_{k})={\begin{bmatrix}~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{2}}}}~\\~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{2}}}}~\end{bmatrix}}_{k}~=~{\begin{bmatrix}{\begin{aligned}~&10\ x_{1}+x_{2}^{2}\ ,&&2\ x_{1}\ x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})\\~&2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4\end{aligned}}\end{bmatrix}}_{k}\\~&Y={\begin{bmatrix}~2~\\~3~\end{bmatrix}}\end{aligned}}} Note thatF(Xk){\displaystyle \ F(X_{k})\ }could have been rewritten to absorbY,{\displaystyle \ Y\ ,}and thus eliminateY{\displaystyle Y}from the equations. The equation to solve for each iteration are [10x1+x22,2x1x2+4sin⁡(2x2)cos⁡(2x2)2e2x1−x2,−e2x1−x2+4]k[c1c2]k+1=[5x12+x1x22+sin2⁡(2x2)−2e2x1−x2+4x2−3]k{\displaystyle {\begin{aligned}{\begin{bmatrix}{\begin{aligned}~&~10\ x_{1}+x_{2}^{2}\ ,&&2x_{1}x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})~\\~&~2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4~\end{aligned}}\end{bmatrix}}_{k}{\begin{bmatrix}~c_{1}~\\~c_{2}~\end{bmatrix}}_{k+1}={\begin{bmatrix}~5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})-2~\\~e^{2\ x_{1}-x_{2}}+4\ x_{2}-3~\end{bmatrix}}_{k}\end{aligned}}} and Xk+1=Xk−Ck+1{\displaystyle X_{k+1}~=~X_{k}-C_{k+1}} The iterations should be repeated until[∑i=1i=2|f(xi)k−(yi)k|]<E,{\displaystyle \ {\Bigg [}\sum _{i=1}^{i=2}{\Bigl |}f(x_{i})_{k}-(y_{i})_{k}{\Bigr |}{\Bigg ]}<E\ ,}whereE{\displaystyle \ E\ }is a value acceptably small enough to meet application requirements. If vectorX0{\displaystyle \ X_{0}\ }is initially chosen to be[11],{\displaystyle \ {\begin{bmatrix}~1~&~1~\end{bmatrix}}\ ,}that is,x1=1,{\displaystyle \ x_{1}=1\ ,}andx2=1,{\displaystyle \ x_{2}=1\ ,}andE,{\displaystyle \ E\ ,}is chosen to be 1.10−3, then the example converges after four iterations to a value ofX4=[0.567297,−0.309442].{\displaystyle \ X_{4}=\left[~0.567297,\ -0.309442~\right]~.} The following iterations were made during the course of the solution. When dealing withcomplex functions, Newton's method can be directly applied to find their zeroes.[25]Each zero has abasin of attractionin the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction arefractals. In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example,[26]if one uses a real initial condition to seek a root ofx2+ 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this casealmost allreal initial conditions lead tochaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length. 
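Returning to the two-equation example above, the following numpy sketch performs the same iteration, solving the linear system J(X_k) ΔX = −F(X_k) at each step rather than forming the inverse Jacobian; the iteration cap is an assumption, and the stopping test is the sum of absolute residuals with E = 10^−3 as in the text.

import numpy as np

def F(x):
    x1, x2 = x
    return np.array([
        5 * x1**2 + x1 * x2**2 + np.sin(2 * x2)**2 - 2,
        np.exp(2 * x1 - x2) + 4 * x2 - 3,
    ])

def J(x):
    x1, x2 = x
    return np.array([
        [10 * x1 + x2**2, 2 * x1 * x2 + 4 * np.sin(2 * x2) * np.cos(2 * x2)],
        [2 * np.exp(2 * x1 - x2), -np.exp(2 * x1 - x2) + 4],
    ])

x = np.array([1.0, 1.0])                    # X_0 = [1, 1] as in the text
for k in range(10):
    delta = np.linalg.solve(J(x), -F(x))    # solve J(X_k) (X_{k+1} - X_k) = -F(X_k)
    x = x + delta
    if np.sum(np.abs(F(x))) < 1e-3:         # stopping test with E = 10^-3
        break
print(k + 1, x)

Starting from X_0 = [1, 1], the iteration reaches approximately [0.567297, −0.309442] after about four iterations, matching the solution quoted above.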
Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3.[27]Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method will certainly converge at one of them at least.[28] Another generalization is Newton's method to find a root of afunctionalFdefined in aBanach space. In this case the formulation is Xn+1=Xn−(F′(Xn))−1F(Xn),{\displaystyle X_{n+1}=X_{n}-{\bigl (}F'(X_{n}){\bigr )}^{-1}F(X_{n}),\,} whereF′(Xn)is theFréchet derivativecomputed atXn. One needs the Fréchet derivative to be boundedly invertible at eachXnin order for the method to be applicable. A condition for existence of and convergence to a root is given by theNewton–Kantorovich theorem.[29] In the 1950s,John Nashdeveloped a version of the Newton's method to apply to the problem of constructingisometric embeddingsof generalRiemannian manifoldsinEuclidean space. Theloss of derivativesproblem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction ofsmoothingoperators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving animplicit function theoremfor isometric embeddings. In the 1960s,Jürgen Mosershowed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly incelestial mechanics. Since then, a number of mathematicians, includingMikhael GromovandRichard Hamilton, have found generalized abstract versions of the Nash–Moser theory.[30][31]In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certainFréchet spaces. When the Jacobian is unavailable or too expensive to compute at every iteration, aquasi-Newton methodcan be used. Since higher-order Taylor expansions offer more accurate local approximations of a functionf, it is reasonable to ask why Newton’s method relies only on a second-order Taylor approximation. In the 19th century, Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton’s method that used cubic approximations.[32][33][34] Inp-adic analysis, the standard method to show a polynomial equation in one variable has ap-adic root isHensel's lemma, which uses the recursion from Newton's method on thep-adic numbers. Because of the more stable behavior of addition and multiplication in thep-adic numbers compared to the real numbers (specifically, the unit ball in thep-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line. Newton's method can be generalized with theq-analogof the usual derivative.[35] A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. 
When we have already foundNsolutions off(x)=0{\displaystyle f(x)=0}, then the next root can be found by applying Newton's method to the next equation:[36][37] F(x)=f(x)∏i=1N(x−xi)=0.{\displaystyle F(x)={\frac {f(x)}{\prod _{i=1}^{N}(x-x_{i})}}=0.} This method is applied to obtain zeros of theBessel functionof the second kind.[38] Hirano's modified Newton method is a modification conserving the convergence of Newton method and avoiding unstableness.[39]It is developed to solve complex polynomials. Combining Newton's method withinterval arithmeticis very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficientfloating-point precision(this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; seeWilkinson's polynomial).[40][41] Considerf→C1(X), whereXis a real interval, and suppose that we have an interval extensionF′off′, meaning thatF′takes as input an intervalY⊆Xand outputs an intervalF′(Y)such that: F′([y,y])={f′(y)}F′(Y)⊇{f′(y)∣y∈Y}.{\displaystyle {\begin{aligned}F'([y,y])&=\{f'(y)\}\\[5pt]F'(Y)&\supseteq \{f'(y)\mid y\in Y\}.\end{aligned}}} We also assume that0 ∉F′(X), so in particularfhas at most one root inX. We then define the interval Newton operator by: N(Y)=m−f(m)F′(Y)={m−f(m)z|z∈F′(Y)}{\displaystyle N(Y)=m-{\frac {f(m)}{F'(Y)}}=\left\{\left.m-{\frac {f(m)}{z}}~\right|~z\in F'(Y)\right\}} wherem∈Y. Note that the hypothesis onF′implies thatN(Y)is well defined and is an interval (seeinterval arithmeticfor further details on interval operations). This naturally leads to the following sequence: X0=XXk+1=N(Xk)∩Xk.{\displaystyle {\begin{aligned}X_{0}&=X\\X_{k+1}&=N(X_{k})\cap X_{k}.\end{aligned}}} Themean value theoremensures that if there is a root offinXk, then it is also inXk+ 1. Moreover, the hypothesis onF′ensures thatXk+ 1is at most half the size ofXkwhenmis the midpoint ofY, so this sequence converges towards[x*,x*], wherex*is the root offinX. IfF′(X)strictly contains 0, the use of extended interval division produces a union of two intervals forN(X); multiple roots are therefore automatically separated and bounded. Newton's method can be used to find a minimum or maximum of a functionf(x). The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative.[42]The iteration becomes: xn+1=xn−f′(xn)f″(xn).{\displaystyle x_{n+1}=x_{n}-{\frac {f'(x_{n})}{f''(x_{n})}}.} An important application isNewton–Raphson division, which can be used to quickly find thereciprocalof a numbera, using only multiplication and subtraction, that is to say the numberxsuch that⁠1/x⁠=a. We can rephrase that as finding the zero off(x) =⁠1/x⁠−a. We havef′(x) = −⁠1/x2⁠. Newton's iteration is xn+1=xn−f(xn)f′(xn)=xn+1xn−a1xn2=xn(2−axn).{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}+{\frac {{\frac {1}{x_{n}}}-a}{\frac {1}{x_{n}^{2}}}}=x_{n}(2-ax_{n}).} Therefore, Newton's iteration needs only two multiplications and one subtraction. This method is also very efficient to compute the multiplicative inverse of apower series. Manytranscendental equationscan be solved up to an arbitrary precision by using Newton's method. 
For example, finding the point at which the cumulative distribution function of a probability distribution, such as the normal distribution, equals a known probability generally involves integral functions with no known means to solve in closed form. However, the derivatives needed to solve such equations numerically with Newton's method are generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse normal cumulative distribution. A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.[citation needed] The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime. The initial guess will be x0 = 1 and the function will be f(x) = x^2 − 2 so that f′(x) = 2x. Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f′(xn) ≈ 0, since otherwise a large amount of error could be introduced.
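A sketch consistent with that description follows; the tolerance, the iteration cap and the final print statement are choices made here for the example.

def f(x):
    return x**2 - 2                    # f(x) = x^2 - 2, with a root at sqrt(2)

def f_prime(x):
    return 2 * x                       # f'(x) = 2x

def newtons_method(x0, f, f_prime, tolerance, epsilon, max_iterations):
    # Newton's method: iterate x1 = x0 - f(x0)/f'(x0).
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)

        if abs(yprime) < epsilon:      # give up if the denominator is too small
            break

        x1 = x0 - y / yprime           # Newton's computation

        if abs(x1 - x0) <= tolerance:  # stop when the result is within tolerance
            return x1

        x0 = x1                        # update x0 to start the next iteration

    return None                        # Newton's method did not converge

print(newtons_method(1.0, f, f_prime, 1e-10, 1e-14, 50))   # about 1.41421356...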
https://en.wikipedia.org/wiki/Newton%27s_method
Innumerical analysis, aquasi-Newton methodis aniterative numerical methodused either tofind zeroesor tofind local maxima and minimaof functions via an iterativerecurrence formulamuch like the one forNewton's method, except using approximations of thederivativesof the functions in place of exact derivatives. Newton's method requires theJacobian matrixof allpartial derivativesof a multivariate function when used to search for zeros or theHessian matrixwhen usedfor finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration. Someiterative methodsthat reduce to Newton's method, such assequential quadratic programming, may also be considered quasi-Newton methods. Newton's methodto find zeroes of a functiong{\displaystyle g}of multiple variables is given byxn+1=xn−[Jg(xn)]−1g(xn){\displaystyle x_{n+1}=x_{n}-[J_{g}(x_{n})]^{-1}g(x_{n})}, where[Jg(xn)]−1{\displaystyle [J_{g}(x_{n})]^{-1}}is theleft inverseof theJacobian matrixJg(xn){\displaystyle J_{g}(x_{n})}ofg{\displaystyle g}evaluated forxn{\displaystyle x_{n}}. Strictly speaking, any method that replaces the exact JacobianJg(xn){\displaystyle J_{g}(x_{n})}with an approximation is a quasi-Newton method.[1]For instance, the chord method (whereJg(xn){\displaystyle J_{g}(x_{n})}is replaced byJg(x0){\displaystyle J_{g}(x_{0})}for all iterations) is a simple example. The methods given below foroptimizationrefer to an important subclass of quasi-Newton methods,secant methods.[2] Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetrical. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes.Broyden's "good" and "bad" methodsare two methods commonly used to find extrema that can also be applied to find zeroes. Other methods that can be used are thecolumn-updating method, theinverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3] The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of thegradientof that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, ifg{\displaystyle g}is the gradient off{\displaystyle f}, then searching for the zeroes of the vector-valued functiong{\displaystyle g}corresponds to the search for the extrema of the scalar-valued functionf{\displaystyle f}; the Jacobian ofg{\displaystyle g}now becomes the Hessian off{\displaystyle f}. The main difference is thatthe Hessian matrix is a symmetric matrix, unlike the Jacobian whensearching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry. Inoptimization,quasi-Newton methods(a special case ofvariable-metric methods) are algorithms for finding localmaxima and minimaoffunctions. 
Quasi-Newton methods for optimization are based onNewton's methodto find thestationary pointsof a function, points where the gradient is 0. Newton's method assumes that the function can be locally approximated as aquadraticin the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and theHessian matrixof secondderivativesof the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of thesecant methodto find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation isunder-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed byWilliam C. Davidon, a physicist working atArgonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: theDFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently theSR1 formula(for "symmetric rank-one"), theBHHHmethod, the widespreadBFGS method(suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extensionL-BFGS. The Broyden's class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintainpositive-definitenessand can be used for indefinite problems. TheBroyden's methoddoes not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating theJacobian(rather than the Hessian). One of the chief advantages of quasi-Newton methods overNewton's methodis that theHessian matrix(or, in the case of quasi-Newton methods, its approximation)B{\displaystyle B}does not need to be inverted. Newton's method, and its derivatives such asinterior point methods, require the Hessian to be inverted, which is typically implemented by solving asystem of linear equationsand is often quite costly. In contrast, quasi-Newton methods usually generate an estimate ofB−1{\displaystyle B^{-1}}directly. As inNewton's method, one uses a second-order approximation to find the minimum of a functionf(x){\displaystyle f(x)}. TheTaylor seriesoff(x){\displaystyle f(x)}around an iterate is where (∇f{\displaystyle \nabla f}) is thegradient, andB{\displaystyle B}an approximation to theHessian matrix.[4]The gradient of this approximation (with respect toΔx{\displaystyle \Delta x}) is and setting this gradient to zero (which is the goal of optimization) provides the Newton step: The Hessian approximationB{\displaystyle B}is chosen to satisfy which is called thesecant equation(the Taylor series of the gradient itself). In more than one dimensionB{\displaystyle B}isunderdetermined. In one dimension, solving forB{\displaystyle B}and applying the Newton's step with the updated value is equivalent to thesecant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). 
Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution (B^T = B); furthermore, the variants listed below can be motivated by finding an update B_{k+1} that is as close as possible to B_k in some norm; that is, B_{k+1} = argmin_B ||B − B_k||_V, where V is some positive-definite matrix that defines the norm. An approximate initial value B_0 = βI is often sufficient to achieve rapid convergence, although there is no general strategy for choosing β.[5] Note that B_0 should be positive-definite. The unknown x_k is updated by applying the Newton-type step calculated using the current approximate Hessian matrix B_k, x_{k+1} = x_k − α_k B_k^{−1} ∇f(x_k) (with a step length α_k usually chosen by a line search); the gradient difference y_k = ∇f(x_{k+1}) − ∇f(x_k), together with the step s_k = x_{k+1} − x_k, is then used to update the approximate Hessian B_{k+1}, or directly its inverse H_{k+1} = B_{k+1}^{−1} using the Sherman–Morrison formula. The most popular update formulas are: Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2] These recursive low-rank matrix updates can also be represented as an initial matrix plus a low-rank correction; this is the compact quasi-Newton representation, which is particularly effective for constrained and/or large problems. When f is a convex quadratic function with positive-definite Hessian B, one would expect the matrices H_k generated by a quasi-Newton method to converge to the inverse Hessian H = B^{−1}. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6] Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include: Notable proprietary implementations include:
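The overall loop structure is the same for all of these updates. The sketch below uses the symmetric rank-one (SR1) update with full (unit) steps on a small convex quadratic; a practical implementation would add a line search and further safeguards, and the problem data here are only illustrative.

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])      # positive definite, so f is strictly convex
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b                  # gradient of f(x) = 0.5 x^T A x - b^T x

x = np.zeros(2)
B = np.eye(2)                               # B_0 = beta * I with beta = 1
for k in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    s = np.linalg.solve(B, -g)              # Newton-type step using the current B_k
    x = x + s
    y = grad(x) - g                         # change in the gradient
    v = y - B @ s
    if abs(v @ s) > 1e-8 * np.linalg.norm(v) * np.linalg.norm(s):
        B += np.outer(v, v) / (v @ s)       # SR1 update; skipped when the denominator ~ 0
print(x, "versus exact minimizer", np.linalg.solve(A, b))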
https://en.wikipedia.org/wiki/Quasi-Newton_method
Limited-memory BFGS(L-BFGSorLM-BFGS) is anoptimizationalgorithmin the family ofquasi-Newton methodsthat approximates theBroyden–Fletcher–Goldfarb–Shanno algorithm(BFGS) using a limited amount ofcomputer memory.[1]It is a popular algorithm for parameter estimation inmachine learning.[2][3]The algorithm's target problem is to minimizef(x){\displaystyle f(\mathbf {x} )}over unconstrained values of the real-vectorx{\displaystyle \mathbf {x} }wheref{\displaystyle f}is a differentiable scalar function. Like the original BFGS, L-BFGS uses an estimate of the inverseHessian matrixto steer its search through variable space, but where BFGS stores a densen×n{\displaystyle n\times n}approximation to the inverse Hessian (nbeing the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse HessianHk, L-BFGS maintains a history of the pastmupdates of the positionxand gradient ∇f(x), where generally the history sizemcan be small (oftenm<10{\displaystyle m<10}). These updates are used to implicitly do operations requiring theHk-vector product. The algorithm starts with an initial estimate of the optimal value,x0{\displaystyle \mathbf {x} _{0}}, and proceeds iteratively to refine that estimate with a sequence of better estimatesx1,x2,…{\displaystyle \mathbf {x} _{1},\mathbf {x} _{2},\ldots }. The derivatives of the functiongk:=∇f(xk){\displaystyle g_{k}:=\nabla f(\mathbf {x} _{k})}are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) off(x){\displaystyle f(\mathbf {x} )}. L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplicationdk=−Hkgk{\displaystyle d_{k}=-H_{k}g_{k}}is carried out, wheredk{\displaystyle d_{k}}is the approximate Newton's direction,gk{\displaystyle g_{k}}is the current gradient, andHk{\displaystyle H_{k}}is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion."[4][5] We take as givenxk{\displaystyle x_{k}}, the position at thek-th iteration, andgk≡∇f(xk){\displaystyle g_{k}\equiv \nabla f(x_{k})}wheref{\displaystyle f}is the function being minimized, and all vectors are column vectors. We also assume that we have stored the lastmupdates of the form We defineρk=1yk⊤sk{\displaystyle \rho _{k}={\frac {1}{y_{k}^{\top }s_{k}}}}, andHk0{\displaystyle H_{k}^{0}}will be the 'initial' approximate of the inverse Hessian that our estimate at iterationkbegins with. The algorithm is based on the BFGS recursion for the inverse Hessian as For a fixedkwe define a sequence of vectorsqk−m,…,qk{\displaystyle q_{k-m},\ldots ,q_{k}}asqk:=gk{\displaystyle q_{k}:=g_{k}}andqi:=(I−ρiyisi⊤)qi+1{\displaystyle q_{i}:=(I-\rho _{i}y_{i}s_{i}^{\top })q_{i+1}}. Then a recursive algorithm for calculatingqi{\displaystyle q_{i}}fromqi+1{\displaystyle q_{i+1}}is to defineαi:=ρisi⊤qi+1{\displaystyle \alpha _{i}:=\rho _{i}s_{i}^{\top }q_{i+1}}andqi=qi+1−αiyi{\displaystyle q_{i}=q_{i+1}-\alpha _{i}y_{i}}. We also define another sequence of vectorszk−m,…,zk{\displaystyle z_{k-m},\ldots ,z_{k}}aszi:=Hiqi{\displaystyle z_{i}:=H_{i}q_{i}}. 
There is another recursive algorithm for calculating these vectors which is to definezk−m=Hk0qk−m{\displaystyle z_{k-m}=H_{k}^{0}q_{k-m}}and then recursively defineβi:=ρiyi⊤zi{\displaystyle \beta _{i}:=\rho _{i}y_{i}^{\top }z_{i}}andzi+1=zi+(αi−βi)si{\displaystyle z_{i+1}=z_{i}+(\alpha _{i}-\beta _{i})s_{i}}. The value ofzk{\displaystyle z_{k}}is then our ascent direction. Thus we can compute the descent direction as follows: This formulation gives the search direction for the minimization problem, i.e.,z=−Hkgk{\displaystyle z=-H_{k}g_{k}}. For maximization problems, one should thus take-zinstead. Note that the initial approximate inverse HessianHk0{\displaystyle H_{k}^{0}}is chosen as a diagonal matrix or even a multiple of the identity matrix since this is numerically efficient. The scaling of the initial matrixγk{\displaystyle \gamma _{k}}ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. AWolfe line searchis used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijobacktracking line search, but cannot guarantee that the curvature conditionyk⊤sk>0{\displaystyle y_{k}^{\top }s_{k}>0}will be satisfied by the chosen step since a step length greater than1{\displaystyle 1}may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update whenyk⊤sk{\displaystyle y_{k}^{\top }s_{k}}is negative or too close to zero, but this approach is not generally recommended since the updates may be skipped too often to allow the Hessian approximationHk{\displaystyle H_{k}}to capture important curvature information. Some solvers employ so called damped (L)BFGS update which modifies quantitiessk{\displaystyle s_{k}}andyk{\displaystyle y_{k}}in order to satisfy the curvature condition. The two-loop recursion formula is widely used by unconstrained optimizers due to its efficiency in multiplying by the inverse Hessian. However, it does not allow for the explicit formation of either the direct or inverse Hessian and is incompatible with non-box constraints. An alternative approach is thecompact representation, which involves a low-rank representation for the direct and/or inverse Hessian.[6]This represents the Hessian as a sum of a diagonal matrix and a low-rank update. Such a representation enables the use of L-BFGS in constrained settings, for example, as part of the SQP method. L-BFGS has been called "the algorithm of choice" for fittinglog-linear (MaxEnt) modelsandconditional random fieldswithℓ2{\displaystyle \ell _{2}}-regularization.[2][3] Since BFGS (and hence L-BFGS) is designed to minimizesmoothfunctions withoutconstraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiablecomponents or constraints. A popular class of modifications are called active-set methods, based on the concept of theactive set. The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified. 
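A direct transcription of the two-loop recursion described above is short. The sketch below (variable names are illustrative) returns the search direction −H_k g_k given the stored pairs (s_i, y_i); a complete L-BFGS solver would wrap it in a line search, append the newest pair after each accepted step, and discard the oldest pair once m pairs are stored. A common choice for the initial scaling H_k^0 = γ_k I is γ_k = (s_{k−1}^T y_{k−1}) / (y_{k−1}^T y_{k−1}).

import numpy as np

def lbfgs_direction(g, s_list, y_list, gamma):
    # Two-loop recursion: computes d = -H_k g without ever forming H_k.
    # s_list[i] = x_{i+1} - x_i and y_list[i] = grad_{i+1} - grad_i for the
    # last m iterations, stored oldest first; gamma scales H_k^0 = gamma * I.
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop: newest to oldest
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
    z = gamma * q                                          # apply the initial matrix H_k^0
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):  # second loop: oldest to newest
        rho = 1.0 / (y @ s)
        beta = rho * (y @ z)
        z += (alpha - beta) * s
    return -z                                              # descent direction -H_k g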
TheL-BFGS-Balgorithm extends L-BFGS to handle simple box constraints (aka bound constraints) on variables; that is, constraints of the formli≤xi≤uiwherelianduiare per-variable constant lower and upper bounds, respectively (for eachxi, either or both bounds may be omitted).[7][8]The method works by identifying fixed and free variables at every step (using a simple gradient method), and then using the L-BFGS method on the free variables only to get higher accuracy, and then repeating the process. Orthant-wise limited-memory quasi-Newton(OWL-QN) is an L-BFGS variant for fittingℓ1{\displaystyle \ell _{1}}-regularizedmodels, exploiting the inherentsparsityof such models.[3]It minimizes functions of the form whereg{\displaystyle g}is adifferentiableconvexloss function. The method is an active-set type method: at each iterate, it estimates thesignof each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable‖x→‖1{\displaystyle \|{\vec {x}}\|_{1}}term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process. Schraudolphet al.present anonlineapproximation to both BFGS and L-BFGS.[9]Similar tostochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS has a global almost sure convergence[10]while the online approximation of BFGS (O-BFGS) is not necessarily convergent.[11] Notable open source implementations include: Notable non open source implementations include:
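For most users these variants are available off the shelf. As an illustration of the SciPy interface (one widely used open-source implementation; consult the current documentation for details), L-BFGS-B with simple box constraints can be called as follows; the starting point, bounds and history size are arbitrary choices.

from scipy.optimize import minimize, rosen, rosen_der

# Minimize the Rosenbrock test function with every variable confined to [0, 1.5].
res = minimize(rosen, x0=[1.3, 0.7, 0.8, 1.2, 1.2],
               jac=rosen_der,
               method="L-BFGS-B",
               bounds=[(0.0, 1.5)] * 5,
               options={"maxcor": 10})      # m, the number of stored correction pairs
print(res.x, res.fun)                       # close to the all-ones minimizer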
https://en.wikipedia.org/wiki/Limited-memory_BFGS
Derivative-free optimization(sometimes referred to asblackbox optimization) is a discipline inmathematical optimizationthat does not usederivativeinformation in the classical sense to find optimal solutions: Sometimes information about the derivative of the objective functionfis unavailable, unreliable or impractical to obtain. For example,fmight be non-smooth, or time-consuming to evaluate, or in some way noisy, so that methods that rely on derivatives or approximate them viafinite differencesare of little use. The problem to find optimal points in such situations is referred to as derivative-free optimization, algorithms that do not use derivatives or finite differences are calledderivative-free algorithms.[1] The problem to be solved is to numerically optimize an objective functionf:A→R{\displaystyle f\colon A\to \mathbb {R} }for somesetA{\displaystyle A}(usuallyA⊂Rn{\displaystyle A\subset \mathbb {R} ^{n}}), i.e. findx0∈A{\displaystyle x_{0}\in A}such that without loss of generalityf(x0)≤f(x){\displaystyle f(x_{0})\leq f(x)}for allx∈A{\displaystyle x\in A}. When applicable, a common approach is to iteratively improve a parameter guess by local hill-climbing in the objective function landscape. Derivative-based algorithms use derivative information off{\displaystyle f}to find a good search direction, since for example the gradient gives the direction of steepest ascent. Derivative-based optimization is efficient at finding local optima for continuous-domain smooth single-modal problems. However, they can have problems when e.g.A{\displaystyle A}is disconnected, or (mixed-)integer, or whenf{\displaystyle f}is expensive to evaluate, or is non-smooth, or noisy, so that (numeric approximations of) derivatives do not provide useful information. A slightly different problem is whenf{\displaystyle f}is multi-modal, in which case local derivative-based methods only give local optima, but might miss the global one. In derivative-free optimization, various methods are employed to address these challenges using only function values off{\displaystyle f}, but no derivatives. Some of these methods can be proved to discover optima, but some are rather metaheuristic since the problems are in general more difficult to solve compared toconvex optimization. For these, the ambition is rather to efficiently find "good" parameter values which can be near-optimal given enough resources, but optimality guarantees can typically not be given. One should keep in mind that the challenges are diverse, so that one can usually not use one algorithm for all kinds of problems. Notable derivative-free optimization algorithms include: There exist benchmarks for blackbox optimization algorithms, see e.g. the bbob-biobj tests.[2]
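As a small illustration, the Nelder–Mead simplex method (one classical derivative-free algorithm, available for example through SciPy; the objective, noise level and options below are illustrative) can make progress on a non-smooth, noisy objective where finite-difference gradients would be unreliable:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):
    # Non-smooth and noisy: gradients and finite differences are unreliable here.
    return abs(x[0] - 1.0) + abs(x[1] + 2.0) + 1e-3 * rng.standard_normal()

res = minimize(f, x0=[5.0, 5.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 2000})
print(res.x)        # near (1, -2), up to the noise level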
https://en.wikipedia.org/wiki/Derivative-free_optimization
Michael James David Powell FRS FAA[2] (29 July 1936 – 19 April 2015) was a British mathematician who worked in the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge.[3][1][4][5][6] Born in London, Powell was educated at Frensham Heights School and Eastbourne College.[2] He earned his Bachelor of Arts degree[when?] followed by a Doctor of Science (DSc) degree in 1979 at the University of Cambridge.[7] Powell was known for his extensive work in numerical analysis, especially nonlinear optimisation and approximation. He was a founding member of the Institute of Mathematics and its Applications and a founding editor-in-chief of the IMA Journal of Numerical Analysis.[8] His mathematical contributions include quasi-Newton methods, particularly the Davidon–Fletcher–Powell formula and the Powell symmetric Broyden formula, the augmented Lagrangian function (also called the Powell–Rockafellar penalty function), the sequential quadratic programming method (also called the Wilson–Han–Powell method), trust-region algorithms (Powell's dog leg method), the conjugate direction method (also called Powell's method), and radial basis functions.[citation needed] In later years he worked on derivative-free optimization algorithms, the resulting algorithms including COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA.[9] He was the author of numerous scientific papers[1] and of several books, most notably Approximation Theory and Methods.[10] Powell won several awards, including the George B. Dantzig Prize from the Mathematical Programming Society/Society for Industrial and Applied Mathematics (SIAM) and the Naylor Prize from the London Mathematical Society.[when?] Powell was elected a Foreign Associate of the National Academy of Sciences of the United States in 2001 and a corresponding fellow of the Australian Academy of Science in 2007.[7][11][12][13]
https://en.wikipedia.org/wiki/COBYLA
https://en.wikipedia.org/wiki/NEWUOA
https://en.wikipedia.org/wiki/LINCOA
Innumericaloptimization, theBroyden–Fletcher–Goldfarb–Shanno(BFGS)algorithmis aniterative methodfor solving unconstrainednonlinear optimizationproblems.[1]Like the relatedDavidon–Fletcher–Powell method, BFGS determines thedescent directionbypreconditioningthegradientwith curvature information. It does so by gradually improving an approximation to theHessian matrixof theloss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalizedsecant method.[2] Since the updates of the BFGS curvature matrix do not requirematrix inversion, itscomputational complexityis onlyO(n2){\displaystyle {\mathcal {O}}(n^{2})}, compared toO(n3){\displaystyle {\mathcal {O}}(n^{3})}inNewton's method. Also in common use isL-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.[3]The BFGS matrix also admits acompact representation, which makes it better suited for large constrained problems. The algorithm is named afterCharles George Broyden,Roger Fletcher,Donald GoldfarbandDavid Shanno.[4][5][6][7] The optimization problem is to minimizef(x){\displaystyle f(\mathbf {x} )}, wherex{\displaystyle \mathbf {x} }is a vector inRn{\displaystyle \mathbb {R} ^{n}}, andf{\displaystyle f}is a differentiable scalar function. There are no constraints on the values thatx{\displaystyle \mathbf {x} }can take. The algorithm begins at an initial estimatex0{\displaystyle \mathbf {x} _{0}}for the optimal value and proceeds iteratively to get a better estimate at each stage. Thesearch directionpkat stagekis given by the solution of the analogue of the Newton equation: whereBk{\displaystyle B_{k}}is an approximation to theHessian matrixatxk{\displaystyle \mathbf {x} _{k}}, which is updated iteratively at each stage, and∇f(xk){\displaystyle \nabla f(\mathbf {x} _{k})}is the gradient of the function evaluated atxk. Aline searchin the directionpkis then used to find the next pointxk+1by minimizingf(xk+γpk){\displaystyle f(\mathbf {x} _{k}+\gamma \mathbf {p} _{k})}over the scalarγ>0.{\displaystyle \gamma >0.} The quasi-Newton condition imposed on the update ofBk{\displaystyle B_{k}}is Letyk=∇f(xk+1)−∇f(xk){\displaystyle \mathbf {y} _{k}=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}andsk=xk+1−xk{\displaystyle \mathbf {s} _{k}=\mathbf {x} _{k+1}-\mathbf {x} _{k}}, thenBk+1{\displaystyle B_{k+1}}satisfies which is the secant equation. The curvature conditionsk⊤yk>0{\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}>0}should be satisfied forBk+1{\displaystyle B_{k+1}}to be positive definite, which can be verified by pre-multiplying the secant equation withskT{\displaystyle \mathbf {s} _{k}^{T}}. If the function is notstrongly convex, then the condition has to be enforced explicitly e.g. by finding a pointxk+1satisfying theWolfe conditions, which entail the curvature condition, using line search. Instead of requiring the full Hessian matrix at the pointxk+1{\displaystyle \mathbf {x} _{k+1}}to be computed asBk+1{\displaystyle B_{k+1}}, the approximate Hessian at stagekis updated by the addition of two matrices: BothUk{\displaystyle U_{k}}andVk{\displaystyle V_{k}}are symmetric rank-one matrices, but their sum is a rank-two update matrix. BFGS andDFPupdating matrix both differ from its predecessor by a rank-two matrix. Another simpler rank-one method is known assymmetric rank-onemethod, which does not guarantee thepositive definiteness. 
In order to maintain the symmetry and positive definiteness ofBk+1{\displaystyle B_{k+1}}, the update form can be chosen asBk+1=Bk+αuu⊤+βvv⊤{\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }}. Imposing the secant condition,Bk+1sk=yk{\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}}. Choosingu=yk{\displaystyle \mathbf {u} =\mathbf {y} _{k}}andv=Bksk{\displaystyle \mathbf {v} =B_{k}\mathbf {s} _{k}}, we can obtain:[8] Finally, we substituteα{\displaystyle \alpha }andβ{\displaystyle \beta }intoBk+1=Bk+αuu⊤+βvv⊤{\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }}and get the update equation ofBk+1{\displaystyle B_{k+1}}: Consider the following unconstrained optimization problemminimizex∈Rnf(x),{\displaystyle {\begin{aligned}{\underset {\mathbf {x} \in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(\mathbf {x} ),\end{aligned}}}wheref:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is a nonlinear objective function. From an initial guessx0∈Rn{\displaystyle \mathbf {x} _{0}\in \mathbb {R} ^{n}}and an initial guess of the Hessian matrixB0∈Rn×n{\displaystyle B_{0}\in \mathbb {R} ^{n\times n}}the following steps are repeated asxk{\displaystyle \mathbf {x} _{k}}converges to the solution: Convergence can be determined by observing the norm of the gradient; given someϵ>0{\displaystyle \epsilon >0}, one may stop the algorithm when||∇f(xk)||≤ϵ.{\displaystyle ||\nabla f(\mathbf {x} _{k})||\leq \epsilon .}IfB0{\displaystyle B_{0}}is initialized withB0=I{\displaystyle B_{0}=I}, the first step will be equivalent to agradient descent, but further steps are more and more refined byBk{\displaystyle B_{k}}, the approximation to the Hessian. The first step of the algorithm is carried out using the inverse of the matrixBk{\displaystyle B_{k}}, which can be obtained efficiently by applying theSherman–Morrison formulato the step 5 of the algorithm, giving This can be computed efficiently without temporary matrices, recognizing thatBk−1{\displaystyle B_{k}^{-1}}is symmetric, and thatykTBk−1yk{\displaystyle \mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k}}andskTyk{\displaystyle \mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}are scalars, using an expansion such as Therefore, in order to avoid any matrix inversion, theinverseof the Hessian can be approximated instead of the Hessian itself:Hk=defBk−1.{\displaystyle H_{k}{\overset {\operatorname {def} }{=}}B_{k}^{-1}.}[9] From an initial guessx0{\displaystyle \mathbf {x} _{0}}and an approximateinvertedHessian matrixH0{\displaystyle H_{0}}the following steps are repeated asxk{\displaystyle \mathbf {x} _{k}}converges to the solution: In statistical estimation problems (such asmaximum likelihoodor Bayesian inference),credible intervalsorconfidence intervalsfor the solution can be estimated from theinverseof the final Hessian matrix[citation needed]. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[10] The BFGS update formula heavily relies on the curvaturesk⊤yk{\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}}being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with Wolfe conditions on a convex target. However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly-zero curvatures. 
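Putting the pieces together, the following sketch maintains the inverse approximation H_k directly, as discussed above, and uses a simple backtracking (Armijo) line search; it is illustrative only. The update is skipped whenever the curvature condition y_k^T s_k > 0 fails, which a Wolfe line search would prevent.

import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=500):
    # Minimal BFGS sketch that updates the inverse Hessian approximation H
    # directly, so no linear system has to be solved for the step.
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                       # H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # search direction
        t, fx, gp = 1.0, f(x), g @ p    # backtracking (Armijo) line search
        while t > 1e-12 and f(x + t * p) > fx + 1e-4 * t * gp:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = y @ s
        if sy > 1e-10:                  # skip the update if the curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

rosen = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
rosen_grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                                 200.0 * (x[1] - x[0]**2)])
print(bfgs(rosen, rosen_grad, [-1.2, 1.0]))   # approaches (1, 1)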
This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. Noise in the target function can also produce spurious curvature values. In such cases, one of the so-called damped BFGS updates can be used,[11] which modify s_k and/or y_k in order to obtain a more robust update. Notable open source implementations are: Notable proprietary implementations include:
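One widely used damping rule (due to Powell; the sketch below is illustrative, and implementations differ in details) replaces y_k by a convex combination of y_k and B_k s_k chosen so that the modified curvature stays a fixed fraction of s_k^T B_k s_k:

import numpy as np

def damped_y(B, s, y, theta_min=0.2):
    # Returns a modified y with s^T y_damped >= theta_min * s^T B s > 0,
    # so the (L-)BFGS update can be applied instead of being skipped.
    sBs = s @ (B @ s)
    sy = s @ y
    if sy >= theta_min * sBs:
        return y                                   # curvature already sufficient
    theta = (1.0 - theta_min) * sBs / (sBs - sy)   # lies strictly between 0 and 1 here
    return theta * y + (1.0 - theta) * (B @ s)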
https://en.wikipedia.org/wiki/BFGS_method
Differential evolution(DE) is anevolutionary algorithmtooptimizea problem byiterativelytrying to improve acandidate solutionwith regard to a given measure of quality. Such methods are commonly known asmetaheuristicsas they make few or no assumptions about the optimized problem and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found. DE is used for multidimensional real-valuedfunctionsbut does not use thegradientof the problem being optimized, which means DE does not require the optimization problem to bedifferentiable, as is required by classic optimization methods such asgradient descentandquasi-newton methods. DE can therefore also be used on optimization problems that are not evencontinuous, are noisy, change over time, etc.[1] DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed. Storn and Price introduced Differential Evolution in 1995.[2][3][4]Books have been published on theoretical and practical aspects of using DE inparallel computing,multiobjective optimization,constrained optimization, and the books also contain surveys of application areas.[5][6][7][8]Surveys on the multi-faceted research aspects of DE can be found in journal articles.[9][10] A basic variant of the DE algorithm works by having a population ofcandidate solutions(called agents). These agents are moved around in the search-space by using simple mathematicalformulaeto combine the positions of existing agents from the population. If the new position of an agent is an improvement then it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered. Formally, letf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }be the fitness function which must be minimized (note that maximization can be performed by considering the functionh:=−f{\displaystyle h:=-f}instead). The function takes a candidate solution as argument in the form of avectorofreal numbers. It produces a real number as output which indicates the fitness of the given candidate solution. Thegradientoff{\displaystyle f}is not known. The goal is to find a solutionm{\displaystyle \mathbf {m} }for whichf(m)≤f(p){\displaystyle f(\mathbf {m} )\leq f(\mathbf {p} )}for allp{\displaystyle \mathbf {p} }in the search-space, which means thatm{\displaystyle \mathbf {m} }is the global minimum. Letx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows: The choice of DE parametersNP{\displaystyle {\text{NP}}},CR{\displaystyle {\text{CR}}}andF{\displaystyle F}can have a large impact on optimization performance. 
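A bare-bones sketch of the classic DE/rand/1/bin scheme described above is given below; the parameter values NP, F and CR, the bounds handling (simple clipping) and the test function are all illustrative choices.

import numpy as np

def differential_evolution(f, bounds, NP=40, F=0.8, CR=0.9, generations=300, rng=None):
    # Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection.
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T         # bounds given as [(lo, hi), ...]
    n = lo.size
    pop = rng.uniform(lo, hi, size=(NP, n))            # initial agents
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # differential mutation
            cross = rng.random(n) < CR
            cross[rng.integers(n)] = True              # at least one coordinate from the mutant
            trial = np.where(cross, mutant, pop[i])    # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x**2))
print(differential_evolution(sphere, bounds=[(-5, 5)] * 3))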
Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al.[4][5] and Liu and Lampinen.[11] Mathematical convergence analysis regarding parameter selection was done by Zaharie.[12] Differential evolution can also be used for constrained optimization. A common method is to modify the target function to include a penalty for any violation of constraints, expressed as f̃(x) = f(x) + ρ · CV(x). Here, CV(x) represents either the constraint violation (an L1 penalty) or the square of the constraint violation (an L2 penalty); a minimal sketch of this construction is given after this paragraph. This method, however, has certain drawbacks. One significant challenge is the appropriate selection of the penalty coefficient ρ: if ρ is set too low, it may not effectively enforce the constraints, while if it is set too high, it can greatly slow down or even halt convergence. Despite these challenges, the approach remains widely used because it is simple and does not require altering the differential evolution algorithm itself. There are alternative strategies, such as projecting onto a feasible set or reducing dimensionality, which can be used for box-constrained or linearly constrained cases; however, for general nonlinear constraints the most reliable methods typically involve penalty functions. Variants of the DE algorithm are continually being developed in an effort to improve optimization performance.[13] The following directions of development can be outlined:
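As promised above, here is a minimal sketch of the L1 penalty construction; the constraint, the value of ρ and the reuse of the differential_evolution sketch from the previous section are illustrative assumptions.

def penalized(f, constraints, rho):
    # Constraints are callables g_j with the convention g_j(x) <= 0 when satisfied.
    def f_tilde(x):
        cv = sum(max(0.0, gj(x)) for gj in constraints)   # CV(x), the total violation
        return f(x) + rho * cv
    return f_tilde

# Minimize the sphere function subject to x[0] + x[1] >= 1, i.e. 1 - x[0] - x[1] <= 0,
# reusing sphere and differential_evolution from the sketch above.
f_pen = penalized(sphere, [lambda x: 1.0 - x[0] - x[1]], rho=100.0)
print(differential_evolution(f_pen, bounds=[(-5, 5)] * 3))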
https://en.wikipedia.org/wiki/Differential_evolution
Covariance matrix adaptation evolution strategy (CMA-ES)is a particular kind of strategy fornumerical optimization.Evolution strategies(ES) arestochastic,derivative-free methodsfornumerical optimizationof non-linearor non-convexcontinuous optimizationproblems. They belong to the class ofevolutionary algorithmsandevolutionary computation. Anevolutionary algorithmis broadly based on the principle ofbiological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted asx{\displaystyle x}) are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness orobjective functionvaluef(x){\displaystyle f(x)}. Like this, individuals with better and betterf{\displaystyle f}-values are generated over the generation sequence. In anevolution strategy, new candidate solutions are usually sampled according to amultivariate normal distributioninRn{\displaystyle \mathbb {R} ^{n}}. Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by acovariance matrix. The covariance matrix adaptation (CMA) is a method to update thecovariance matrixof this distribution. This is particularly useful if the functionf{\displaystyle f}isill-conditioned. Adaptation of thecovariance matrixamounts to learning a second order model of the underlyingobjective functionsimilar to the approximation of the inverseHessian matrixin thequasi-Newton methodin classicaloptimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For example, the ranking could come about from pairwise competitions between the candidate solutions in aSwiss-system tournament. Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm. First, amaximum-likelihoodprinciple, based on the idea to increase the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that thelikelihoodof previously successful candidate solutions is maximized. Thecovariance matrixof the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as anatural gradientdescent. Also, in consequence, the CMA conducts an iteratedprincipal components analysisof successful search steps while retainingallprincipal axes.Estimation of distribution algorithmsand theCross-Entropy Methodare based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solutionpointsinstead of successful searchsteps. Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. The evolution paths are exploited in two ways. 
One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively preventspremature convergenceyet allowing fast convergence to an optimum. In the following the most commonly used (μ/μw,λ)-CMA-ES is outlined, where in each iteration step a weighted combination of theμbest out ofλnew candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. Apseudocodeof the algorithm looks as follows. The order of the five update assignments is relevant:m{\displaystyle m}must be updated first,pσ{\displaystyle p_{\sigma }}andpc{\displaystyle p_{c}}must be updated beforeC{\displaystyle C}, andσ{\displaystyle \sigma }must be updated last. The update equations for the five state variables are specified in the following. Given are the search space dimensionn{\displaystyle n}and the iteration stepk{\displaystyle k}. The five state variables are The iteration starts with samplingλ>1{\displaystyle \lambda >1}candidate solutionsxi∈Rn{\displaystyle x_{i}\in \mathbb {R} ^{n}}from amultivariate normal distributionN(mk,σk2Ck){\displaystyle \textstyle {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}, i.e. fori=1,…,λ{\displaystyle i=1,\ldots ,\lambda } xi∼N(mk,σk2Ck)∼mk+σk×N(0,Ck){\displaystyle {\begin{aligned}x_{i}\ &\sim \ {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})\\&\sim \ m_{k}+\sigma _{k}\times {\mathcal {N}}(0,C_{k})\end{aligned}}} The second line suggests the interpretation as unbiased perturbation (mutation) of the current favorite solution vectormk{\displaystyle m_{k}}(the distribution mean vector). The candidate solutionsxi{\displaystyle x_{i}}are evaluated on the objective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }to be minimized. Denoting thef{\displaystyle f}-sorted candidate solutions as {xi:λ∣i=1…λ}={xi∣i=1…λ}andf(x1:λ)≤⋯≤f(xμ:λ)≤f(xμ+1:λ)≤⋯,{\displaystyle \{x_{i:\lambda }\mid i=1\dots \lambda \}=\{x_{i}\mid i=1\dots \lambda \}{\text{ and }}f(x_{1:\lambda })\leq \dots \leq f(x_{\mu :\lambda })\leq f(x_{\mu +1:\lambda })\leq \cdots ,} the new mean value is computed as mk+1=∑i=1μwixi:λ=mk+∑i=1μwi(xi:λ−mk){\displaystyle {\begin{aligned}m_{k+1}&=\sum _{i=1}^{\mu }w_{i}\,x_{i:\lambda }\\&=m_{k}+\sum _{i=1}^{\mu }w_{i}\,(x_{i:\lambda }-m_{k})\end{aligned}}} where the positive (recombination) weightsw1≥w2≥⋯≥wμ>0{\displaystyle w_{1}\geq w_{2}\geq \dots \geq w_{\mu }>0}sum to one. Typically,μ≤λ/2{\displaystyle \mu \leq \lambda /2}and the weights are chosen such thatμw:=1/∑i=1μwi2≈λ/4{\displaystyle \textstyle \mu _{w}:=1/\sum _{i=1}^{\mu }w_{i}^{2}\approx \lambda /4}. The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indicesi:λ{\displaystyle i:\lambda }. The step-sizeσk{\displaystyle \sigma _{k}}is updated usingcumulative step-size adaptation(CSA), sometimes also denoted aspath length control. The evolution path (or search path)pσ{\displaystyle p_{\sigma }}is updated first. 
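The sampling and recombination steps just described take only a few lines of code. The sketch below (dimension, population size, weights and objective are illustrative) performs the sampling from N(m_k, σ_k² C_k) and the weighted mean update; step-size control and covariance adaptation, described next, are omitted, so the mean only approaches the optimum up to the fixed step-size.

import numpy as np

rng = np.random.default_rng(0)
n, lam = 5, 12                               # dimension and population size lambda
mu = lam // 2
w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w /= w.sum()                                 # positive recombination weights summing to one

f = lambda x: float(np.sum(x**2))            # objective to be minimized (illustrative)
m, sigma, C = np.full(n, 3.0), 1.0, np.eye(n)

for k in range(60):
    X = rng.multivariate_normal(m, sigma**2 * C, size=lam)   # x_i ~ N(m, sigma^2 C)
    order = np.argsort([f(x) for x in X])                    # the ranking is the only feedback used
    m = w @ X[order[:mu]]                                    # weighted recombination of the mu best
print(m)                                     # near 0, up to the (non-adapted) step-size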
pσ←(1−cσ)⏟discount factorpσ+1−(1−cσ)2⏞complements for discounted varianceμwCk−1/2mk+1−mk⏞displacement ofmσk⏟distributed asN(0,I)under neutral selection{\displaystyle p_{\sigma }\gets \underbrace {(1-c_{\sigma })} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,p_{\sigma }+\overbrace {\sqrt {1-(1-c_{\sigma })^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{complements for discounted variance}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {{\sqrt {\mu _{w}}}\,C_{k}^{\;-1/2}\,{\frac {\overbrace {m_{k+1}-m_{k}} ^{\!\!\!{\text{displacement of }}m\!\!\!}}{\sigma _{k}}}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{distributed as }}{\mathcal {N}}(0,I){\text{ under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}}σk+1=σk×exp⁡(cσdσ(‖pσ‖E⁡‖N(0,I)‖−1)⏟unbiased about 0 under neutral selection){\displaystyle \sigma _{k+1}=\sigma _{k}\times \exp {\bigg (}{\frac {c_{\sigma }}{d_{\sigma }}}\underbrace {\left({\frac {\|p_{\sigma }\|}{\operatorname {E} \|{\mathcal {N}}(0,I)\|}}-1\right)} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{unbiased about 0 under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\bigg )}} where The step-sizeσk{\displaystyle \sigma _{k}}is increased if and only if‖pσ‖{\displaystyle \|p_{\sigma }\|}is larger than theexpected value E⁡‖N(0,I)‖=2Γ((n+1)/2)Γ(n/2)≈n(1−14n+121n2){\displaystyle {\begin{aligned}\operatorname {E} \|{\mathcal {N}}(0,I)\|&={\sqrt {2}}\;{\frac {\Gamma ((n+1)/2)}{\Gamma (n/2)}}\\[1ex]&\approx {\sqrt {n}}\,\left(1-{\frac {1}{4n}}+{\frac {1}{21\,n^{2}}}\right)\end{aligned}}} and decreased if it is smaller. For this reason, the step-size update tends to make consecutive stepsCk−1{\displaystyle C_{k}^{-1}}-conjugate, in that after the adaptation has been successful(mk+2−mk+1σk+1)TCk−1mk+1−mkσk≈0{\displaystyle \textstyle \left({\frac {m_{k+2}-m_{k+1}}{\sigma _{k+1}}}\right)^{T}\!C_{k}^{-1}{\frac {m_{k+1}-m_{k}}{\sigma _{k}}}\approx 0}.[1] Finally, thecovariance matrixis updated, where again the respective evolution path is updated first. 
pc←(1−cc)⏟discount factorpc+1[0,αn](‖pσ‖)⏟indicator function1−(1−cc)2⏞complements for discounted varianceμwmk+1−mkσk⏟distributed asN(0,Ck)under neutral selection{\displaystyle p_{c}\gets \underbrace {(1-c_{c})} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,p_{c}+\underbrace {\mathbf {1} _{[0,\alpha {\sqrt {n}}]}(\|p_{\sigma }\|)} _{\text{indicator function}}\overbrace {\sqrt {1-(1-c_{c})^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{complements for discounted variance}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {{\sqrt {\mu _{w}}}\,{\frac {m_{k+1}-m_{k}}{\sigma _{k}}}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{distributed as}}\;{\mathcal {N}}(0,C_{k})\;{\text{under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}} Ck+1=(1−c1−cμ+cs)⏟discount factorCk+c1pcpcT⏟rank one matrix+cμ∑i=1μwixi:λ−mkσk(xi:λ−mkσk)T⏟rank⁡min(μ,n)matrix{\displaystyle C_{k+1}=\underbrace {(1-c_{1}-c_{\mu }+c_{s})} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,C_{k}+c_{1}\underbrace {p_{c}p_{c}^{T}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{rank one matrix}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}+\,c_{\mu }\underbrace {\sum _{i=1}^{\mu }w_{i}{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}} _{\operatorname {rank} \min(\mu ,n){\text{ matrix}}}} whereT{\displaystyle T}denotes the transpose and Thecovariance matrixupdate tends to increase thelikelihoodforpc{\displaystyle p_{c}}and for(xi:λ−mk)/σk{\displaystyle (x_{i:\lambda }-m_{k})/\sigma _{k}}to be sampled fromN(0,Ck+1){\displaystyle {\mathcal {N}}(0,C_{k+1})}. This completes the iteration step. The number of candidate samples per iteration,λ{\displaystyle \lambda }, is not determined a priori and can vary in a wide range. Smaller values, for exampleλ=10{\displaystyle \lambda =10}, lead to more local search behavior. Larger values, for exampleλ=10n{\displaystyle \lambda =10n}with default valueμw≈λ/4{\displaystyle \mu _{w}\approx \lambda /4}, render the search more global. Sometimes the algorithm is repeatedly restarted with increasingλ{\displaystyle \lambda }by a factor of two for each restart.[2]Besides of settingλ{\displaystyle \lambda }(or possiblyμ{\displaystyle \mu }instead, if for exampleλ{\displaystyle \lambda }is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user. Given the distribution parameters—mean, variances and covariances—thenormal probability distributionfor sampling new candidate solutions is themaximum entropy probability distributionoverRn{\displaystyle \mathbb {R} ^{n}}, that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following. The CMA-ES implements a stochasticvariable-metricmethod. In the very particular case of a convex-quadratic objective function f(x)=12(x−x∗)TH(x−x∗){\displaystyle f(x)={\textstyle {\frac {1}{2}}}(x-x^{*})^{T}H(x-x^{*})} the covariance matrixCk{\displaystyle C_{k}}adapts to the inverse of theHessian matrixH{\displaystyle H},up toa scalar factor and small random fluctuations. 
More general, also on the functiong∘f{\displaystyle g\circ f}, whereg{\displaystyle g}is strictly increasing and therefore order preserving, the covariance matrixCk{\displaystyle C_{k}}adapts toH−1{\displaystyle H^{-1}},up toa scalar factor and small random fluctuations. For selection ratioλ/μ→∞{\displaystyle \lambda /\mu \to \infty }(and hence population sizeλ→∞{\displaystyle \lambda \to \infty }), theμ{\displaystyle \mu }selected solutions yield an empirical covariance matrix reflective of the inverse-Hessian even in evolution strategies without adaptation of the covariance matrix. This result has been proven forμ=1{\displaystyle \mu =1}on a static model, relying on the quadratic approximation.[3] The update equations for mean and covariance matrix maximize alikelihoodwhile resembling anexpectation–maximization algorithm. The update of the mean vectorm{\displaystyle m}maximizes a log-likelihood, such that mk+1=arg⁡maxm∑i=1μwilog⁡pN(xi:λ∣m){\displaystyle m_{k+1}=\arg \max _{m}\sum _{i=1}^{\mu }w_{i}\log p_{\mathcal {N}}(x_{i:\lambda }\mid m)} where log⁡pN(x)=−12log⁡det(2πC)−12(x−m)TC−1(x−m){\displaystyle \log p_{\mathcal {N}}(x)=-{\tfrac {1}{2}}\log \det(2\pi C)-{\tfrac {1}{2}}(x-m)^{T}C^{-1}(x-m)} denotes the log-likelihood ofx{\displaystyle x}from a multivariate normal distribution with meanm{\displaystyle m}and any positive definite covariance matrixC{\displaystyle C}. To see thatmk+1{\displaystyle m_{k+1}}is independent ofC{\displaystyle C}remark first that this is the case for any diagonal matrixC{\displaystyle C}, because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosingC{\displaystyle C}non-diagonal are equivalent. The rank-μ{\displaystyle \mu }update of the covariance matrix, that is, the right most summand in the update equation ofCk{\displaystyle C_{k}}, maximizes a log-likelihood in that ∑i=1μwixi:λ−mkσk(xi:λ−mkσk)T=arg⁡maxC∑i=1μwilog⁡pN(xi:λ−mkσk|C){\displaystyle \sum _{i=1}^{\mu }w_{i}{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}=\arg \max _{C}\sum _{i=1}^{\mu }w_{i}\log p_{\mathcal {N}}\left(\left.{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right|C\right)} forμ≥n{\displaystyle \mu \geq n}(otherwiseC{\displaystyle C}is singular, but substantially the same result holds forμ<n{\displaystyle \mu <n}). Here,pN(x|C){\displaystyle p_{\mathcal {N}}(x|C)}denotes the likelihood ofx{\displaystyle x}from a multivariate normal distribution with zero mean and covariance matrixC{\displaystyle C}. Therefore, forc1=0{\displaystyle c_{1}=0}andcμ=1{\displaystyle c_{\mu }=1},Ck+1{\displaystyle C_{k+1}}is the abovemaximum-likelihoodestimator. Seeestimation of covariance matricesfor details on the derivation. Akimotoet al.[4]and Glasmacherset al.[5]discovered independently that the update of the distribution parameters resembles the descent in direction of a samplednatural gradientof the expected objective function valueEf(x){\displaystyle Ef(x)}(to be minimized), where the expectation is taken under the sample distribution. With the parameter setting ofcσ=0{\displaystyle c_{\sigma }=0}andc1=0{\displaystyle c_{1}=0}, i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation ofNatural Evolution Strategies(NES).[4][5]Thenaturalgradientis independent of the parameterization of the distribution. 
Taken with respect to the parametersθof the sample distributionp, the gradient ofEf(x){\displaystyle Ef(x)}can be expressed as ∇θE(f(x)∣θ)=∇θ∫Rnf(x)p(x)dx=∫Rnf(x)∇θp(x)dx=∫Rnf(x)p(x)∇θln⁡p(x)dx=E⁡(f(x)∇θln⁡p(x∣θ)){\displaystyle {\begin{aligned}{\nabla }_{\!\theta }E(f(x)\mid \theta )&=\nabla _{\!\theta }\int _{\mathbb {R} ^{n}}f(x)p(x)\,\mathrm {d} x\\&=\int _{\mathbb {R} ^{n}}f(x)\nabla _{\!\theta }p(x)\,\mathrm {d} x\\&=\int _{\mathbb {R} ^{n}}f(x)p(x)\nabla _{\!\theta }\ln p(x)\,\mathrm {d} x\\&=\operatorname {E} (f(x)\nabla _{\!\theta }\ln p(x\mid \theta ))\end{aligned}}} wherep(x)=p(x∣θ){\displaystyle p(x)=p(x\mid \theta )}depends on the parameter vectorθ{\displaystyle \theta }. The so-calledscore function,∇θln⁡p(x∣θ)=∇θp(x)p(x){\displaystyle \nabla _{\!\theta }\ln p(x\mid \theta )={\frac {\nabla _{\!\theta }p(x)}{p(x)}}}, indicates the relative sensitivity ofpw.r.t.θ, and the expectation is taken with respect to the distributionp. ThenaturalgradientofEf(x){\displaystyle Ef(x)}, complying with theFisher information metric(an informational distance measure between probability distributions and the curvature of therelative entropy), now reads ∇~E⁡(f(x)∣θ)=Fθ−1∇θE⁡(f(x)∣θ){\displaystyle {\begin{aligned}{\tilde {\nabla }}\operatorname {E} (f(x)\mid \theta )&=F_{\theta }^{-1}\nabla _{\!\theta }\operatorname {E} (f(x)\mid \theta )\end{aligned}}} where theFisher informationmatrixFθ{\displaystyle F_{\theta }}is the expectation of theHessianof−lnpand renders the expression independent of the chosen parameterization. Combining the previous equalities we get ∇~E⁡(f(x)∣θ)=Fθ−1E⁡(f(x)∇θln⁡p(x∣θ))=E⁡(f(x)Fθ−1∇θln⁡p(x∣θ)){\displaystyle {\begin{aligned}{\tilde {\nabla }}\operatorname {E} (f(x)\mid \theta )&=F_{\theta }^{-1}\operatorname {E} (f(x)\nabla _{\!\theta }\ln p(x\mid \theta ))\\&=\operatorname {E} (f(x)F_{\theta }^{-1}\nabla _{\!\theta }\ln p(x\mid \theta ))\end{aligned}}} A Monte Carlo approximation of the latter expectation takes the average overλsamples fromp ∇~E^θ(f):=−∑i=1λwi⏞preference weightFθ−1∇θln⁡p(xi:λ∣θ)⏟candidate direction fromxi:λwithwi=−f(xi:λ)/λ{\displaystyle {\tilde {\nabla }}{\widehat {E}}_{\theta }(f):=-\sum _{i=1}^{\lambda }\overbrace {w_{i}} ^{\!\!\!\!{\text{preference weight}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {F_{\theta }^{-1}\nabla _{\!\theta }\ln p(x_{i:\lambda }\mid \theta )} _{\!\!\!\!\!{\text{candidate direction from }}x_{i:\lambda }\!\!\!\!\!}\quad {\text{with }}w_{i}=-f(x_{i:\lambda })/\lambda } where the notationi:λ{\displaystyle i:\lambda }from above is used and thereforewi{\displaystyle w_{i}}are monotonically decreasing ini{\displaystyle i}. Ollivieret al.[6]finally found a rigorous derivation for the weights,wi{\displaystyle w_{i}}, as they are defined in the CMA-ES. The weights are anasymptotically consistent estimatorof theCDFoff(X){\displaystyle f(X)}at the points of thei{\displaystyle i}thorder statisticf(xi:λ){\displaystyle f(x_{i:\lambda })}, as defined above, whereX∼p(.|θ){\displaystyle X\sim p(.|\theta )}, composed with a fixed monotonically decreasing transformationw{\displaystyle w}, that is, wi=w(rank(f(xi:λ))−1/2λ).{\displaystyle w_{i}=w\left({\frac {{\mathsf {rank}}(f(x_{i:\lambda }))-1/2}{\lambda }}\right).} These weights make the algorithm insensitive to the specificf{\displaystyle f}-values. More concisely, using theCDFestimator off{\displaystyle f}instead off{\displaystyle f}itself let the algorithm only depend on the ranking off{\displaystyle f}-values but not on their underlying distribution. 
This renders the algorithm invariant to strictly increasingf{\displaystyle f}-transformations. Now we define θ=[mkTvec⁡(Ck)Tσk]T∈Rn+n2+1{\displaystyle \theta =[m_{k}^{T}\operatorname {vec} (C_{k})^{T}\sigma _{k}]^{T}\in \mathbb {R} ^{n+n^{2}+1}} such thatp(⋅∣θ){\displaystyle p(\cdot \mid \theta )}is the density of themultivariate normal distributionN(mk,σk2Ck){\displaystyle {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}. Then, we have an explicit expression for the inverse of the Fisher information matrix whereσk{\displaystyle \sigma _{k}}is fixed Fθ∣σk−1=[σk2Ck002Ck⊗Ck]{\displaystyle F_{\theta \mid \sigma _{k}}^{-1}=\left[{\begin{array}{cc}\sigma _{k}^{2}C_{k}&0\\0&2C_{k}\otimes C_{k}\end{array}}\right]} and for ln⁡p(x∣θ)=ln⁡p(x∣mk,σk2Ck)=−12(x−mk)Tσk−2Ck−1(x−mk)−12ln⁡det(2πσk2Ck){\displaystyle {\begin{aligned}\ln p(x\mid \theta )&=\ln p(x\mid m_{k},\sigma _{k}^{2}C_{k})\\[1ex]&=-{\tfrac {1}{2}}(x-m_{k})^{T}\sigma _{k}^{-2}C_{k}^{-1}(x-m_{k})-{\tfrac {1}{2}}\ln \det(2\pi \sigma _{k}^{2}C_{k})\end{aligned}}} and, after some calculations, the updates in the CMA-ES turn out as[4] mk+1=mk−[∇~E^θ(f)]1,…,n⏟natural gradient for mean=mk+∑i=1λwi(xi:λ−mk){\displaystyle {\begin{aligned}m_{k+1}&=m_{k}-\underbrace {[{\tilde {\nabla }}{\widehat {E}}_{\theta }(f)]_{1,\dots ,n}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{natural gradient for mean}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\\&=m_{k}+\sum _{i=1}^{\lambda }w_{i}(x_{i:\lambda }-m_{k})\end{aligned}}} and Ck+1=Ck+c1(pcpcT−Ck)−cμmat⁡([∇~E^θ(f)]n+1,…,n+n2⏞natural gradient for covariance matrix)=Ck+c1(pcpcT−Ck)+cμ∑i=1λwi(xi:λ−mkσk(xi:λ−mkσk)T−Ck){\displaystyle {\begin{aligned}C_{k+1}&=C_{k}+c_{1}(p_{c}p_{c}^{T}-C_{k})-c_{\mu }\operatorname {mat} (\overbrace {[{\tilde {\nabla }}{\widehat {E}}_{\theta }(f)]_{n+1,\dots ,n+n^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{natural gradient for covariance matrix}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!})\\&=C_{k}+c_{1}(p_{c}p_{c}^{T}-C_{k})+c_{\mu }\sum _{i=1}^{\lambda }w_{i}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}-C_{k}\right)\end{aligned}}} where mat forms the proper matrix from the respective natural gradient sub-vector. That means, settingc1=cσ=0{\displaystyle c_{1}=c_{\sigma }=0}, the CMA-ES updates descend in direction of the approximation∇~E^θ(f){\displaystyle {\tilde {\nabla }}{\widehat {E}}_{\theta }(f)}of the natural gradient while using different step-sizes (learning rates 1 andcμ{\displaystyle c_{\mu }}) for theorthogonal parametersm{\displaystyle m}andC{\displaystyle C}respectively. More recent versions allow a different learning rate for the meanm{\displaystyle m}as well.[7]The most recent version of CMA-ES also use a different functionw{\displaystyle w}form{\displaystyle m}andC{\displaystyle C}with negative values only for the latter (so-called active CMA). It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. 
Under neutral selection, wherexi:λ∼N(mk,σk2Ck){\displaystyle x_{i:\lambda }\sim {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}, we find that E⁡(mk+1∣mk)=mk{\displaystyle \operatorname {E} (m_{k+1}\mid m_{k})=m_{k}} and under some mild additional assumptions on the initial conditions E⁡(log⁡σk+1∣σk)=log⁡σk{\displaystyle \operatorname {E} (\log \sigma _{k+1}\mid \sigma _{k})=\log \sigma _{k}} and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find E⁡(Ck+1∣Ck)=Ck{\displaystyle \operatorname {E} (C_{k+1}\mid C_{k})=C_{k}} Invariance propertiesimply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES. Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is theNelder–Mead method, where the initial simplex must be chosen respectively. Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simplerevolution strategies, and overwhelming empirical evidence suggest that the algorithm converges on a large class of functions fast to the global optimum, denoted asx∗{\displaystyle x^{*}}. On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initialm0{\displaystyle m_{0}}andσ0{\displaystyle \sigma _{0}}. Empirically, the fastest possible convergence rate ink{\displaystyle k}for rank-based direct search methods can often be observed (depending on the context denoted aslinear convergenceorlog-linearorexponentialconvergence). Informally, we can write ‖mk−x∗‖≈‖m0−x∗‖×e−ck{\displaystyle \|m_{k}-x^{*}\|\;\approx \;\|m_{0}-x^{*}\|\times e^{-ck}} for somec>0{\displaystyle c>0}, and more rigorously 1k∑i=1klog⁡‖mi−x∗‖‖mi−1−x∗‖=1klog⁡‖mk−x∗‖‖m0−x∗‖→−c<0fork→∞,{\displaystyle {\frac {1}{k}}\sum _{i=1}^{k}\log {\frac {\|m_{i}-x^{*}\|}{\|m_{i-1}-x^{*}\|}}\;=\;{\frac {1}{k}}\log {\frac {\|m_{k}-x^{*}\|}{\|m_{0}-x^{*}\|}}\;\to \;-c<0\quad {\text{for }}k\to \infty \;,} or similarly, E⁡log⁡‖mk−x∗‖‖mk−1−x∗‖→−c<0fork→∞.{\displaystyle \operatorname {E} \log {\frac {\|m_{k}-x^{*}\|}{\|m_{k-1}-x^{*}\|}}\;\to \;-c<0\quad {\text{for }}k\to \infty \;.} This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely byexp⁡(−c){\displaystyle \exp(-c)}. The convergence ratec{\displaystyle c}is roughly0.1λ/n{\displaystyle 0.1\lambda /n}, givenλ{\displaystyle \lambda }is not much larger than the dimensionn{\displaystyle n}. Even with optimalσ{\displaystyle \sigma }andC{\displaystyle C}, the convergence ratec{\displaystyle c}cannot largely exceed0.25λ/n{\displaystyle 0.25\lambda /n}, given the above recombination weightswi{\displaystyle w_{i}}are all non-negative. The actual linear dependencies inλ{\displaystyle \lambda }andn{\displaystyle n}are remarkable and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing. 
Using a non-identity covariance matrix for themultivariate normal distributioninevolution strategiesis equivalent to a coordinate system transformation of the solution vectors,[8]mainly because the sampling equation xi∼mk+σk×N(0,Ck)∼mk+σk×Ck1/2N(0,I){\displaystyle {\begin{aligned}x_{i}&\sim \ m_{k}+\sigma _{k}\times {\mathcal {N}}(0,C_{k})\\&\sim \ m_{k}+\sigma _{k}\times C_{k}^{1/2}{\mathcal {N}}(0,I)\end{aligned}}} can be equivalently expressed in an "encoded space" asCk−1/2xi⏟represented in the encode space∼Ck−1/2mk⏟+σk×N(0,I){\displaystyle \underbrace {C_{k}^{-1/2}x_{i}} _{{\text{represented in the encode space}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\sim \ \underbrace {C_{k}^{-1/2}m_{k}} {}+\sigma _{k}\times {\mathcal {N}}(0,I)} The covariance matrix defines abijectivetransformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simpleevolution strategywith identity covariance matrix.[8]This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method. In contrast to most otherevolutionary algorithms, the CMA-ES is, from the user's perspective, quasi-parameter-free. The user has to choose an initial solution point,m0∈Rn{\displaystyle m_{0}\in \mathbb {R} ^{n}}, and the initial step-size,σ0>0{\displaystyle \sigma _{0}>0}. Optionally, the number of candidate samples λ (population size) can be modified by the user in order to change the characteristic search behavior (see above) and termination conditions can or should be adjusted to the problem at hand. The CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions.[9]One survey of Black-Box optimizations found it outranked 31 other optimization algorithms, performing especially strongly on "difficult functions" or larger-dimensional search spaces.[10] The search space dimension ranges typically between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods in the following conditions: On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find at all comparable solutions. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than100n{\displaystyle 100n}function evaluations, the CMA-ES shows most often superior performance. The (1+1)-CMA-ES[11]generates only one candidate solution per iteration step which becomes the new distribution mean if it is better than the current mean. Forcc=1{\displaystyle c_{c}=1}the (1+1)-CMA-ES is a close variant ofGaussian adaptation. SomeNatural Evolution Strategiesare close variants of the CMA-ES with specific parameter settings. 
Natural Evolution Strategies do not utilize evolution paths (that is, in the CMA-ES setting cc=cσ=1{\displaystyle c_{c}=c_{\sigma }=1}), and they formalize the update of variances and covariances on a Cholesky factor instead of a covariance matrix. The CMA-ES has also been extended to multiobjective optimization as MO-CMA-ES.[12] Another notable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA.[13] Using the additional active CMA update is considered the default variant nowadays.[7]
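The coordinate-system (encoding) interpretation described above can be checked numerically: samples drawn as m + σ·C^{1/2}·N(0, I) become isotropic around C^{−1/2}m once mapped through C^{−1/2}. A minimal Python sketch, using an arbitrary made-up positive-definite covariance matrix:

import numpy as np

rng = np.random.default_rng(0)
n = 4
m = rng.normal(size=n)
sigma = 0.7
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                       # an arbitrary SPD covariance matrix

# eigendecomposition gives a symmetric square root C^{1/2} and its inverse
vals, vecs = np.linalg.eigh(C)
C_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
C_half_inv = vecs @ np.diag(1 / np.sqrt(vals)) @ vecs.T

z = rng.standard_normal((100_000, n))
x = m + sigma * z @ C_half.T                      # x_i ~ m + sigma * C^{1/2} N(0, I)

# in the encoded space C^{-1/2} x, the samples are isotropic around C^{-1/2} m
x_enc = x @ C_half_inv.T
print(np.cov(x_enc.T) / sigma**2)                 # close to the identity matrix
print(np.allclose(x_enc.mean(axis=0), C_half_inv @ m, atol=0.02))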
https://en.wikipedia.org/wiki/CMA-ES
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage.[1] In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with submodular structure. Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties: the greedy choice property and optimal substructure. A common technique for proving the correctness of greedy algorithms uses an inductive exchange argument.[3] The exchange argument demonstrates that any solution different from the greedy solution can be transformed into the greedy solution without degrading its quality; the proof typically proceeds by contradiction, exchanging elements of an assumed better solution for those of the greedy solution step by step without ever decreasing its quality. In some cases, an additional step may be needed to prove that no optimal solution can strictly improve upon the greedy solution. Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the unique worst possible solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour.[4] For other possible examples, see horizon effect. Greedy algorithms can be characterized as being 'short sighted' and 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations to the greedy algorithm.[5] Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems,[6] so the natural questions are for which problems greedy algorithms perform optimally and for which problems they guarantee an approximately optimal solution. A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover. A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.[7] A function f{\displaystyle f} defined on subsets of a set Ω{\displaystyle \Omega } is called submodular if for every S,T⊆Ω{\displaystyle S,T\subseteq \Omega } we have that f(S)+f(T)≥f(S∪T)+f(S∩T){\displaystyle f(S)+f(T)\geq f(S\cup T)+f(S\cap T)}. Suppose one wants to find a set S{\displaystyle S} which maximizes f{\displaystyle f}.
The greedy algorithm, which builds up a setS{\displaystyle S}by incrementally adding the element which increasesf{\displaystyle f}the most at each step, produces as output a set that is at least(1−1/e)maxX⊆Ωf(X){\displaystyle (1-1/e)\max _{X\subseteq \Omega }f(X)}.[8]That is, greedy performs within a constant factor of(1−1/e)≈0.63{\displaystyle (1-1/e)\approx 0.63}as good as the optimal solution. Similar guarantees are provable when additional constraints, such as cardinality constraints,[9]are imposed on the output, though often slight variations on the greedy algorithm are required. See[10]for an overview. Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case. Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all knowngreedy coloringalgorithms for thegraph coloring problemand all otherNP-completeproblems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum. If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods likedynamic programming. Examples of such greedy algorithms areKruskal's algorithmandPrim's algorithmfor findingminimum spanning treesand the algorithm for finding optimumHuffman trees. Greedy algorithms appear in networkroutingas well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as ingeographic routingused byad hoc networks. Location may also be an entirely artificial construct as insmall world routinganddistributed hash table.
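To make the (1 − 1/e) guarantee above concrete, the following sketch runs the greedy algorithm on a small maximum-coverage instance (coverage, the size of the union of chosen sets, is a standard monotone submodular function); the family of sets and the cardinality budget are made up for illustration.

def greedy_max_coverage(sets, k):
    """Greedily pick k sets maximizing the number of covered elements.
    Coverage f(S) = |union of chosen sets| is monotone submodular, so the
    greedy choice is within a factor (1 - 1/e) of the optimum."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if len(sets[best] - covered) == 0:       # no marginal gain left
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# hypothetical instance
family = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}, {6, 7, 8}]
picked, covered = greedy_max_coverage(family, k=2)
print(picked, covered)   # picks the largest set first, then the best complement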
https://en.wikipedia.org/wiki/Greedy_algorithm
A Walrasian auction, introduced by Léon Walras, is a type of simultaneous auction where each agent calculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good. Thus, a Walrasian auction perfectly matches supply and demand. Walras suggested that equilibrium would always be achieved through a process of tâtonnement (French for "groping"), a form of hill climbing.[1] In the 1970s, however, the Sonnenschein–Mantel–Debreu theorem proved that such a process would not necessarily reach a unique and stable equilibrium, even if the market is populated with perfectly rational agents.[2] The Walrasian auctioneer is the presumed auctioneer that matches supply and demand in a market of perfect competition. The auctioneer provides for the features of perfect competition: perfect information and no transaction costs. The process is called tâtonnement, or groping, relating to finding the market-clearing price for all commodities and giving rise to general equilibrium. The device is an attempt to avoid one of the deepest conceptual problems of perfect competition, which may, essentially, be defined by the stipulation that no agent can affect prices. But if no one can affect prices, no one can change them, so prices cannot change. However, because it relies on an artificial solution, the device is less than entirely satisfactory. Until Walker and van Daal's 2014 translation (retitled Elements of Theoretical Economics), William Jaffé's Elements of Pure Economics (1954) was for many years the only English translation of Walras's Éléments d'économie politique pure. Walker and van Daal argue that the idea of the Walrasian auction and Walrasian auctioneer resulted from Jaffé's mistranslation of the French word crieurs (criers) into auctioneers. Walker and van Daal call this "a momentous error that has misled generations of readers into thinking that the markets in Walras's model are auction markets and that he assigned the function of changing prices in his model to an auctioneer."[3]
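A tâtonnement process can be sketched as iterative price adjustment proportional to excess demand; the demand curve, the fixed supply, and the step size below are illustrative assumptions rather than Walras's own specification.

def tatonnement(demand, supply, p0=1.0, step=0.1, tol=1e-8, max_iter=10_000):
    """Grope toward the market-clearing price: raise the price when demand
    exceeds supply, lower it when supply exceeds demand."""
    p = p0
    for _ in range(max_iter):
        excess = demand(p) - supply
        if abs(excess) < tol:
            break
        p += step * excess          # hill-climbing on excess demand
        p = max(p, 1e-9)            # keep the price positive
    return p

# hypothetical single-good market: total demand 10/p, fixed supply of 4 units
p_star = tatonnement(lambda p: 10.0 / p, supply=4.0)
print(p_star)   # approaches 2.5, where demand equals supply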
https://en.wikipedia.org/wiki/Walrasian_auction
Mean shiftis anon-parametricfeature-spacemathematical analysis technique for locating the maxima of adensity function, a so-calledmode-seeking algorithm.[1]Application domains includecluster analysisincomputer visionandimage processing.[2] The mean shift procedure is usually credited to work by Fukunaga and Hostetler in 1975.[3]It is, however, reminiscent of earlier work by Schnell in 1964.[4]Its theoretical connection to the sampling process of thediffusion modelwas established in 2023.[5] Mean shift is a procedure for locating the maxima—themodes—of a density function given discrete data sampled from that function.[1]This is an iterative method, and we start with an initial estimatex{\displaystyle x}. Let akernel functionK(xi−x){\displaystyle K(x_{i}-x)}be given. This function determines the weight of nearby points for re-estimation of the mean. Typically aGaussian kernelon the distance to the current estimate is used,K(xi−x)=e−c||xi−x||2{\displaystyle K(x_{i}-x)=e^{-c||x_{i}-x||^{2}}}. The weighted mean of the density in the window determined byK{\displaystyle K}is whereN(x){\displaystyle N(x)}is the neighborhood ofx{\displaystyle x}, a set of points for whichK(xi−x)≠0{\displaystyle K(x_{i}-x)\neq 0}. The differencem(x)−x{\displaystyle m(x)-x}is calledmean shiftin Fukunaga and Hostetler.[3]Themean-shift algorithmnow setsx←m(x){\displaystyle x\leftarrow m(x)}, and repeats the estimation untilm(x){\displaystyle m(x)}converges. Although the mean shift algorithm has been widely used in many applications, a rigid proof for the convergence of the algorithm using a general kernel in a high dimensional space is still not known.[6]Aliyari Ghassabeh showed the convergence of the mean shift algorithm in one dimension with a differentiable, convex, and strictly decreasing profile function.[7]However, the one-dimensional case has limited real world applications. Also, the convergence of the algorithm in higher dimensions with a finite number of the stationary (or isolated) points has been proved.[6][8]However, sufficient conditions for a general kernel function to have finite stationary (or isolated) points have not been provided. Gaussian Mean-Shift is anExpectation–maximization algorithm.[9] Let data be a finite setS{\displaystyle S}embedded in then{\displaystyle n}-dimensional Euclidean space,X{\displaystyle X}. LetK{\displaystyle K}be a flat kernel that is the characteristic function of theλ{\displaystyle \lambda }-ball inX{\displaystyle X}, K(x)={1if‖x‖≤λ0if‖x‖>λ{\displaystyle K(x)={\begin{cases}1&{\text{if}}\ \|x\|\leq \lambda \\0&{\text{if}}\ \|x\|>\lambda \\\end{cases}}} In each iteration of the algorithm,s←m(s){\displaystyle s\leftarrow m(s)}is performed for alls∈S{\displaystyle s\in S}simultaneously. The first question, then, is how to estimate the density function given a sparse set of samples. One of the simplest approaches is to just smooth the data, e.g., by convolving it with a fixed kernel of widthh{\displaystyle h}, f(x)=∑iK(x−xi)=∑ik(‖x−xi‖2h2){\displaystyle f(x)=\sum _{i}K(x-x_{i})=\sum _{i}k\left({\frac {\|x-x_{i}\|^{2}}{h^{2}}}\right)} wherexi{\displaystyle x_{i}}are the input samples andk(r){\displaystyle k(r)}is the kernel function (orParzen window).h{\displaystyle h}is the only parameter in the algorithm and is called the bandwidth. This approach is known askernel density estimationor the Parzen window technique. Once we have computedf(x){\displaystyle f(x)}from the equation above, we can find its local maxima using gradient ascent or some other optimization technique. 
The problem with this "brute force" approach is that, for higher dimensions, it becomes computationally prohibitive to evaluatef(x){\displaystyle f(x)}over the complete search space. Instead, mean shift uses a variant of what is known in the optimization literature asmultiple restart gradient descent. Starting at some guess for a local maximum,yk{\displaystyle y_{k}}, which can be a random input data pointx1{\displaystyle x_{1}}, mean shift computes the gradient of the density estimatef(x){\displaystyle f(x)}atyk{\displaystyle y_{k}}and takes an uphill step in that direction.[10] Kernel definition: LetX{\displaystyle X}be then{\displaystyle n}-dimensional Euclidean space,Rn{\displaystyle \mathbb {R} ^{n}}. The norm ofx{\displaystyle x}is a non-negative number,‖x‖2=x⊤x≥0{\displaystyle \|x\|^{2}=x^{\top }x\geq 0}. A functionK:X→R{\displaystyle K:X\rightarrow \mathbb {R} }is said to be a kernel if there exists aprofile,k:[0,∞]→R{\displaystyle k:[0,\infty ]\rightarrow \mathbb {R} }, such that K(x)=k(‖x‖2){\displaystyle K(x)=k(\|x\|^{2})}and The two most frequently used kernel profiles for mean shift are: k(x)={1ifx≤λ0ifx>λ{\displaystyle k(x)={\begin{cases}1&{\text{if}}\ x\leq \lambda \\0&{\text{if}}\ x>\lambda \\\end{cases}}} k(x)=e−x2σ2,{\displaystyle k(x)=e^{-{\frac {x}{2\sigma ^{2}}}},} where the standard deviation parameterσ{\displaystyle \sigma }works as the bandwidth parameter,h{\displaystyle h}. Consider a set of points in two-dimensional space. Assume a circular window centered atC{\displaystyle C}and having radiusr{\displaystyle r}as the kernel. Mean-shift is a hill climbing algorithm which involves shifting this kernel iteratively to a higher density region until convergence. Every shift is defined by a mean shift vector. The mean shift vector always points toward the direction of the maximum increase in the density. At every iteration the kernel is shifted to the centroid or the mean of the points within it. The method of calculating this mean depends on the choice of the kernel. In this case if a Gaussian kernel is chosen instead of a flat kernel, then every point will first be assigned a weight which will decay exponentially as the distance from the kernel's center increases. At convergence, there will be no direction at which a shift can accommodate more points inside the kernel. The mean shift algorithm can be used for visual tracking. The simplest such algorithm would create a confidence map in the new image based on the color histogram of the object in the previous image, and use mean shift to find the peak of a confidence map near the object's old position. The confidence map is a probability density function on the new image, assigning each pixel of the new image a probability, which is the probability of the pixel color occurring in the object in the previous image. A few algorithms, such as kernel-based object tracking,[11]ensemble tracking,[12]CAMshift[13][14]expand on this idea. Letxi{\displaystyle x_{i}}andzi,i=1,...,n,{\displaystyle z_{i},i=1,...,n,}be thed{\displaystyle d}-dimensional input and filtered image pixels in the joint spatial-range domain. For each pixel, Variants of the algorithm can be found in machine learning and image processing packages:
https://en.wikipedia.org/wiki/Mean-shift
NeuroEvolution of Augmenting Topologies(NEAT) is agenetic algorithm(GA) for generating evolvingartificial neural networks(aneuroevolutiontechnique) developed byKenneth StanleyandRisto Miikkulainenin 2002 while atThe University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying"). On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques andreinforcement learningmethods, as of 2006.[1][2] Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure. This yields a situation whereby a trial and error process may be necessary in order to determine an appropriate topology. NEAT is an example of a topology and weight evolving artificial neural network (TWEANN) which attempts to simultaneously learn weight values and an appropriate topology for a neural network. In order to encode the network into a phenotype for the GA, NEAT uses a direct encoding scheme which means every connection and neuron is explicitly represented. This is in contrast to indirect encoding schemes which define rules that allow the network to be constructed without explicitly representing every connection and neuron, allowing for more compact representation. The NEAT approach begins with aperceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path, or by creating a new connection between (formerly unconnected) neurons. The competing conventions problem arises when there is more than one way of representing information in a phenotype. For example, if a genome contains neuronsA,BandCand is represented by [A B C], if this genome is crossed with an identical genome (in terms of functionality) but ordered [C B A] crossover will yield children that are missing information ([A B A] or [C B C]), in fact 1/3 of the information has been lost in this example. NEAT solves this problem by tracking the history of genes by the use of a global innovation number which increases as new genes are added. When adding a new gene the global innovation number is incremented and assigned to that gene. Thus the higher the number the more recently the gene was added. For a particular generation if an identical mutation occurs in more than one genome they are both given the same number, beyond that however the mutation number will remain unchanged indefinitely. These innovation numbers allow NEAT to match up genes which can be crossed with each other.[1] The original implementation by Ken Stanley is published under theGPL. It integrates withGuile, a GNUschemeinterpreter. This implementation of NEAT is considered the conventional basic starting point for implementations of the NEAT algorithm. In 2003, Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the iteration of generations as used by most genetic algorithms. 
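Before turning to that real-time extension, the innovation-number bookkeeping and gene alignment just described can be sketched as follows; the tuple representation of a connection gene, the per-generation reset of the table, and the 50/50 inheritance of matching genes are illustrative assumptions rather than Stanley's implementation.

import random

innovation_table = {}                  # (in_node, out_node) -> innovation number

def innovation_for(in_node, out_node):
    """Return a global innovation number; an identical structural mutation
    re-uses the number already assigned (reset the table each generation)."""
    key = (in_node, out_node)
    if key not in innovation_table:
        innovation_table[key] = len(innovation_table) + 1
    return innovation_table[key]

def gene(i, o, w):
    """A connection gene as a tuple: (innovation, in_node, out_node, weight)."""
    return (innovation_for(i, o), i, o, w)

def crossover(fitter, other):
    """Align genes by innovation number: matching genes are inherited from a
    random parent, disjoint and excess genes from the fitter parent."""
    by_innov = {g[0]: g for g in other}
    return [by_innov[g[0]] if g[0] in by_innov and random.random() < 0.5 else g
            for g in fitter]

parent_a = [gene(1, 4, 0.5), gene(2, 4, -0.3), gene(4, 5, 0.8)]
parent_b = [gene(1, 4, 0.1), gene(3, 4, 0.9), gene(4, 5, -0.2)]  # shares innovations 1 and 3
child = crossover(parent_a, parent_b)
print(child)    # no duplicated or lost structure, unlike naive positional crossover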
The basic idea is to put the population under constant evaluation with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population, and if so, it is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network and it is placed in the population to participate in the ongoing evaluations. The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by some other player, to see how well their training regimens prepared their robots for battle. An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed concern that unbounded automated growth would generate unnecessary structure. HyperNEATis specialized to evolve large scale structures. It was originally based on theCPPNtheory and is an active field of research. Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences. The first video game to implement cgNEAT isGalactic Arms Race, a space-shooter game in which unique particle system weapons are evolved based on player usage statistics.[3]Each particle system weapon in the game is controlled by an evolvedCPPN, similarly to the evolution technique in theNEAT Particlesinteractive art program. odNEAT is an online and decentralized version of NEAT designed for multi-robot systems.[4]odNEAT is executed onboard robots themselves during task execution to continuously optimize the parameters and the topology of the artificial neural network-based controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks. The online evolutionary process is implemented according to a physically distributed island model. Each robot optimizes an internal population of candidate solutions (intra-island variation), and two or more robots exchange candidate solutions when they meet (inter-island migration). In this way, each robot is potentially self-sufficient and the evolutionary process capitalizes on the exchange of controllers between multiple robots for faster synthesis of effective controllers.
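The real-time replacement scheme described above (lifetime timers, removal of individuals that rank near the bottom, breeding from two high-fitness parents) can be sketched roughly as follows; the genome, the breeding operator, and all constants are placeholders, and real rtNEAT additionally uses NEAT genomes and speciation.

import random
from dataclasses import dataclass

@dataclass
class Individual:
    genome: list                     # placeholder for a NEAT genome
    fitness: float = 0.0
    age: int = 0

def breed(a, b):
    """Placeholder crossover/mutation: average the parents and jitter."""
    child = [(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a.genome, b.genome)]
    return Individual(child)

def rtneat_tick(population, evaluate, lifetime=20, worst_fraction=0.2):
    """One tick of the real-time scheme: age everyone; when an individual's
    lifetime expires, replace it if it sits near the bottom of the ranking."""
    for ind in population:
        ind.fitness = evaluate(ind)
        ind.age += 1
    ranked = sorted(population, key=lambda i: i.fitness)
    cutoff = ranked[int(worst_fraction * len(ranked))].fitness
    elite = ranked[len(ranked) // 2:]                      # upper half as parents
    for i, ind in enumerate(population):
        if ind.age >= lifetime:
            ind.age = 0
            if ind.fitness <= cutoff:                      # near the bottom: replace
                population[i] = breed(*random.sample(elite, 2))

# usage with a toy fitness (higher is better)
pop = [Individual([random.uniform(-1, 1) for _ in range(4)]) for _ in range(20)]
for _ in range(500):
    rtneat_tick(pop, evaluate=lambda ind: -sum(g * g for g in ind.genome))
print(max(ind.fitness for ind in pop))   # approaches 0 as genomes shrink toward zero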
https://en.wikipedia.org/wiki/NeuroEvolution_of_Augmenting_Topologies
Hypercube-based NEAT, or HyperNEAT,[1] is a generative encoding that evolves artificial neural networks (ANNs) using the principles of the widely used NeuroEvolution of Augmenting Topologies (NEAT) algorithm developed by Kenneth Stanley.[2] It is a technique for evolving large-scale neural networks by exploiting the geometric regularities of the task domain. It uses Compositional Pattern Producing Networks[3] (CPPNs), which are also used to generate the images for Picbreeder.org and shapes for EndlessForms.com. HyperNEAT has been extended to evolve plastic ANNs[4] and to evolve the location of every neuron in the network.[5]
https://en.wikipedia.org/wiki/HyperNEAT
Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al.[1] and Stanley and Miikkulainen.[2] Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming (EANT2 now uses the most advanced form of evolution strategies, CMA-ES), in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path. Despite sharing these two properties, the method has important features which distinguish it from previous work in neuroevolution. It introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. The encoding has important properties that make it suitable for evolving neural networks; these properties have been formally proven.[3] For evolving the structure and weights of neural networks, an evolutionary process is used in which the exploration of structures is executed at a larger timescale (structural exploration) and the exploitation of existing structures is done at a smaller timescale (structural exploitation). In the structural exploration phase, new neural structures are developed by gradually adding new structures to an initially minimal network that is used as a starting point. In the structural exploitation phase, the weights of the currently available structures are optimized using an evolution strategy. EANT has been tested on benchmark problems such as the double-pole balancing problem[4] and the RoboCup keepaway benchmark.[5] In all the tests, EANT was found to perform very well. Moreover, a newer version of EANT, called EANT2, was tested on a visual servoing task and found to outperform NEAT and the traditional iterative Gauss–Newton method.[6] Further experiments include results on a classification problem.[7]
https://en.wikipedia.org/wiki/Evolutionary_Acquisition_of_Neural_Topologies
In machine learning, grokking, or delayed generalization, is a transition to generalization that occurs many training iterations after the interpolation threshold, after many iterations of seemingly little progress, as opposed to the usual process where generalization occurs slowly and progressively once the interpolation threshold has been reached.[2][3][4] Grokking was introduced in January 2022 by OpenAI researchers investigating how neural networks perform calculations. The term is derived from the word grok coined by Robert Heinlein in his novel Stranger in a Strange Land.[1] Grokking can be understood as a phase transition during the training process.[5] Although grokking was initially thought of as largely a phenomenon of relatively shallow models, it has since been observed in deep neural networks and non-neural models and is the subject of active research.[6][7][8][9] One potential explanation is that weight decay (a component of the loss function that penalizes higher values of the neural network parameters, also called regularization) slightly favors the general solution, which involves lower weight values but is also harder to find. According to Neel Nanda, the process of learning the general solution may be gradual, even though the transition to the general solution occurs more suddenly later.[1] Recent theories[10][11] have hypothesized that grokking occurs when neural networks transition from a "lazy training"[12] regime, where the weights do not deviate far from initialization, to a "rich" regime, where weights abruptly begin to move in task-relevant directions. Follow-up empirical and theoretical work[13] has accumulated evidence in support of this perspective, and it offers a unifying view of earlier work, as the transition from lazy to rich training dynamics is known to arise from properties of adaptive optimizers,[14] weight decay,[15] initial parameter weight norm,[8] and more.
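A typical setting in which delayed generalization is reported is a small network trained with strong weight decay on a modular-arithmetic task, with train and test accuracy logged over a very large number of optimization steps. The sketch below shows that bookkeeping; the architecture, optimizer settings, and data split are illustrative assumptions, and grokking is not guaranteed to appear with these exact values.

import torch
from torch import nn

p = 97                                            # modular addition: c = (a + b) mod p
pairs = [(a, b) for a in range(p) for b in range(p)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs)).tolist()
half = len(pairs) // 2

def as_tensors(indices):
    ab = torch.tensor([pairs[i] for i in indices])
    return ab, (ab[:, 0] + ab[:, 1]) % p

Xtr, ytr = as_tensors(perm[:half])                # split seen during training
Xte, yte = as_tensors(perm[half:])                # held-out split

model = nn.Sequential(nn.Embedding(p, 64), nn.Flatten(),
                      nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)  # strong weight decay
loss_fn = nn.CrossEntropyLoss()

def accuracy(X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

for step in range(1, 50_001):
    opt.zero_grad()
    loss_fn(model(Xtr), ytr).backward()
    opt.step()
    if step % 1000 == 0:
        # delayed generalization shows up as train accuracy near 1.0 many
        # thousands of steps before test accuracy finally rises
        print(step, accuracy(Xtr, ytr), accuracy(Xte, yte))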
https://en.wikipedia.org/wiki/Grokking_(machine_learning)
Meta-optimizationfromnumerical optimizationis the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as in the late 1970s by Mercer and Sampson[1]for finding optimal parameter settings of agenetic algorithm. Meta-optimization and related concepts are also known in the literature as meta-evolution, super-optimization, automated parameter calibration,hyper-heuristics, etc. Optimization methods such asgenetic algorithmanddifferential evolutionhave several parameters that govern their behaviour and efficiency in optimizing a given problem and these parameters must be chosen by the practitioner to achieve satisfactory results. Selecting the behavioural parameters by hand is a laborious task that is susceptible to human misconceptions of what makes the optimizer perform well. The behavioural parameters of an optimizer can be varied and the optimization performance plotted as a landscape. This is computationally feasible for optimizers with few behavioural parameters and optimization problems that are fast to compute, but when the number of behavioural parameters increases the time usage for computing such a performance landscape increases exponentially. This is thecurse of dimensionalityfor the search-space consisting of an optimizer's behavioural parameters. An efficient method is therefore needed to search the space of behavioural parameters. A simple way of finding good behavioural parameters for an optimizer is to employ another overlaying optimizer, called themeta-optimizer. There are different ways of doing this depending on whether the behavioural parameters to be tuned arereal-valuedordiscrete-valued, and depending on what performance measure is being used, etc. Meta-optimizing the parameters of agenetic algorithmwas done by Grefenstette[2]and Keane,[3]amongst others, and experiments with meta-optimizing both the parameters and thegenetic operatorswere reported by Bäck.[4]Meta-optimization of the COMPLEX-RF algorithm was done by Krus and Andersson,[5]and,[6]where performance index of optimization based on information theory was introduced and further developed. Meta-optimization ofparticle swarm optimizationwas done by Meissner et al.,[7]Pedersen and Chipperfield,[8]and Mason et al.[9]Pedersen and Chipperfield applied meta-optimization todifferential evolution.[10]Birattari et al.[11][12]meta-optimizedant colony optimization.Statistical modelshave also been used to reveal more about the relationship between choices of behavioural parameters and optimization performance, see for example Francois and Lavergne,[13]and Nannen and Eiben.[14]A comparison of various meta-optimization techniques was done by Smit and Eiben.[15]
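The layering can be sketched as an outer optimizer searching over the behavioural parameters of an inner optimizer, scoring each candidate setting by the result the inner optimizer reaches within a fixed budget. Both optimizers below are deliberately simple stand-ins (random search over a mutation step size, tuning a (1+1)-style hill climber), not any specific published method.

import random

def inner_optimizer(step_size, f, dim=10, iters=200):
    """A (1+1)-style hill climber whose behaviour depends on one parameter."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        y = [xi + random.gauss(0, step_size) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def meta_optimizer(f, candidates=30, repeats=5):
    """Random-search meta-optimizer: score each step size by the average
    result of several inner runs and keep the best-performing setting."""
    best_param, best_score = None, float("inf")
    for _ in range(candidates):
        step = 10 ** random.uniform(-3, 1)           # behavioural parameter to tune
        score = sum(inner_optimizer(step, f) for _ in range(repeats)) / repeats
        if score < best_score:
            best_param, best_score = step, score
    return best_param, best_score

sphere = lambda x: sum(xi * xi for xi in x)
print(meta_optimizer(sphere))    # a step size that lets the hill climber perform well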
https://en.wikipedia.org/wiki/Meta-optimization
XGBoost[2](eXtreme Gradient Boosting) is anopen-sourcesoftware librarywhich provides aregularizinggradient boostingframework forC++,Java,Python,[3]R,[4]Julia,[5]Perl,[6]andScala. It works onLinux,Microsoft Windows,[7]andmacOS.[8]From the project description, it aims to provide a "Scalable, Portable and Distributed Gradient Boosting (GBM, GBRT, GBDT) Library". It runs on a single machine, as well as the distributed processing frameworksApache Hadoop,Apache Spark,Apache Flink, andDask.[9][10] XGBoost gained much popularity and attention in the mid-2010s as the algorithm of choice for many winning teams ofmachine learningcompetitions.[11] XGBoost initially started as a research project by Tianqi Chen[12]as part of the Distributed (Deep) Machine Learning Community (DMLC) group at theUniversity of Washington. Initially, it began as a terminal application which could be configured using alibsvmconfiguration file. It became well known in the ML competition circles after its use in the winning solution of theHiggs Machine Learning Challenge. Soon after, the Python and R packages were built, and XGBoost now has package implementations for Java,Scala, Julia,Perl, and other languages. This brought the library to more developers and contributed to its popularity among theKagglecommunity, where it has been used for a large number of competitions.[11] It was soon integrated with a number of other packages making it easier to use in their respective communities. It has now been integrated withscikit-learnforPythonusers and with the caret package forRusers. It can also be integrated into Data Flow frameworks likeApache Spark,Apache Hadoop, andApache Flinkusing the abstracted Rabit[13]and XGBoost4J.[14]XGBoost is also available onOpenCLforFPGAs.[15]An efficient, scalable implementation of XGBoost has been published by Tianqi Chen andCarlos Guestrin.[16] While the XGBoost model often achieves higher accuracy than a single decision tree, it sacrifices the intrinsic interpretability of decision trees.  For example, following the path that a decision tree takes to make its decision is trivial and self-explained, but following the paths of hundreds or thousands of trees is much harder. Salient features of XGBoost which make it different from other gradient boosting algorithms include:[17][18][16] XGBoost works asNewton–Raphsonin function space unlikegradient boostingthat works as gradient descent in function space, a second orderTaylor approximationis used in the loss function to make the connection to Newton–Raphson method. A generic unregularized XGBoost algorithm is: Input: training set{(xi,yi)}i=1N{\displaystyle \{(x_{i},y_{i})\}_{i=1}^{N}}, a differentiable loss functionL(y,F(x)){\displaystyle L(y,F(x))}, a number of weak learnersM{\displaystyle M}and a learning rateα{\displaystyle \alpha }. Algorithm:
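The second-order idea behind this scheme can be illustrated with regression stumps: in each round the leaf value is set to −(sum of gradients)/(sum of Hessians) over the samples in the leaf, which is the unregularized special case of the leaf weights used in Newton-style boosting. The stump learner, squared-error loss, and constant learning rate below are simplifying assumptions, so this is a sketch of the idea rather than the library's algorithm.

import numpy as np

def fit_stump_newton(x, g, h):
    """Depth-1 regression tree: pick the split maximizing the Newton gain,
    i.e. minimizing -0.5 * (G^2 / H) summed over the two leaves."""
    best = None
    for s in np.unique(x)[:-1]:
        left = x <= s
        score = sum(-0.5 * g[m].sum() ** 2 / h[m].sum() for m in (left, ~left))
        if best is None or score < best[0]:
            wl = -g[left].sum() / h[left].sum()
            wr = -g[~left].sum() / h[~left].sum()
            best = (score, s, wl, wr)
    _, s, wl, wr = best
    return lambda q: np.where(q <= s, wl, wr)

def newton_boost(x, y, rounds=50, lr=0.3):
    """Boosting with second-order statistics: for squared error the gradient
    is g = F - y and the Hessian is h = 1 for every sample."""
    F = np.zeros_like(y, dtype=float)
    trees = []
    for _ in range(rounds):
        g = F - y
        h = np.ones_like(y, dtype=float)
        tree = fit_stump_newton(x, g, h)
        trees.append(tree)
        F = F + lr * tree(x)
    return lambda q: sum(lr * t(q) for t in trees)

# toy usage: learn a step function of one feature
x = np.linspace(0, 1, 200)
y = (x > 0.5).astype(float)
model = newton_boost(x, y)
print(model(np.array([0.2, 0.8])))    # approximately [0, 1]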
https://en.wikipedia.org/wiki/XGBoost
Analysis of variance (ANOVA)is a family ofstatistical methodsused to compare themeansof two or more groups by analyzing variance. Specifically, ANOVA compares the amount of variationbetweenthe group means to the amount of variationwithineach group. If the between-group variation is substantially larger than the within-group variation, it suggests that the group means are likely different. This comparison is done using anF-test. The underlying principle of ANOVA is based on thelaw of total variance, which states that the total variance in a dataset can be broken down into components attributable to different sources. In the case of ANOVA, these sources are the variation between groups and the variation within groups. ANOVA was developed by thestatisticianRonald Fisher. In its simplest form, it provides astatistical testof whether two or more populationmeansare equal, and therefore generalizes thet-testbeyond two means. While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according toStigler.[1]These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model.Laplacewas performing hypothesis testing in the 1770s.[2]Around 1800, Laplace andGaussdeveloped the least-squares method for combining observations, which improved upon methods then used in astronomy andgeodesy. It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares.[3]By 1827, Laplace was usingleast squaresmethods to address ANOVA problems regarding measurements of atmospheric tides.[4]Before 1800, astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors.[5]The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology[6]which developed strong (full factorial) experimental methods to which randomization and blinding were soon added.[7]An eloquent non-mathematical explanation of the additive effects model was available in 1885.[8] Ronald Fisherintroduced the termvarianceand proposed its formal analysis in a 1918 article on theoretical population genetics,The Correlation Between Relatives on the Supposition of Mendelian Inheritance.[9]His first application of the analysis of variance to data analysis was published in 1921,Studies in Crop Variation I.[10]This divided the variation of a time series into components representing annual causes and slow deterioration. Fisher's next piece,Studies in Crop Variation II, written withWinifred Mackenzieand published in 1923, studied the variation in yield across plots sown with different varieties and subjected to different fertiliser treatments.[11]Analysis of variance became widely known after being included in Fisher's 1925 bookStatistical Methods for Research Workers. Randomization models were developed by several researchers. The first was published in Polish byJerzy Neymanin 1923.[12] The analysis of variance can be used to describe otherwise complex relations among variables. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show is likely to be rather complicated, like the yellow-orange distribution shown in the illustrations. 
Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. One way to do that is toexplainthe distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn't reasonable to conclude that the groups are, in fact, separate in any meaningful way). In the illustrations to the right, groups are identified asX1,X2, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) has a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn't allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange). An attempt to explain the weight distribution by grouping dogs aspet vs working breedandless athletic vs more athleticwould probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguishX1andX2reliably. Grouping dogs according to a coin flip might produce distributions that look similar. An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment in the confidence in an explanatory relationship. There are three classes of models used in the analysis of variance, and these are outlined here. The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether theresponse variablevalues change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole. Random-effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. 
Because the levels themselves arerandom variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.[13] A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types. Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives. Defining fixed and random effects has proven elusive, with multiple competing definitions.[14] The analysis of variance has been studied from several approaches, the most common of which uses alinear modelthat relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data. The analysis of variance can be presented in terms of alinear model, which makes the following assumptions about theprobability distributionof the responses:[15][16][17][18] The separate assumptions of the textbook model imply that theerrorsare independently, identically, and normally distributed for fixed effects models, that is, that the errors (ε{\displaystyle \varepsilon }) are independent andε∼N(0,σ2).{\displaystyle \varepsilon \thicksim N(0,\sigma ^{2}).} In arandomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of thenull hypothesis, following the ideas ofC. S. PeirceandRonald Fisher. This design-based analysis was discussed and developed byFrancis J. AnscombeatRothamsted Experimental Stationand byOscar KempthorneatIowa State University.[19]Kempthorne and his students make an assumption ofunit treatment additivity, which is discussed in the books of Kempthorne andDavid R. Cox.[20][21] In its simplest form, the assumption of unit-treatment additivity[nb 1]states that the observed responseyi,j{\displaystyle y_{i,j}}from experimental uniti{\displaystyle i}when receiving treatmentj{\displaystyle j}can be written as the sum of the unit's responseyi{\displaystyle y_{i}}and the treatment-effecttj{\displaystyle t_{j}}, that is[22][23][24]yi,j=yi+tj.{\displaystyle y_{i,j}=y_{i}+t_{j}.}The assumption of unit-treatment additivity implies that, for every treatmentj{\displaystyle j}, thej{\displaystyle j}th treatment has exactly the same effecttj{\displaystyle t_{j}}on every experiment unit. The assumption of unit treatment additivity usually cannot be directlyfalsified, according to Cox and Kempthorne. However, manyconsequencesof treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivityimpliesthat the variance is constant for all treatments. Therefore, bycontraposition, a necessary condition for unit-treatment additivity is that the variance is constant. 
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-populationsurvey sampling. Kempthorne uses the randomization-distribution and the assumption ofunit treatment additivityto produce aderived linear model, very similar to the textbook model discussed previously.[25]The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies.[26]However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations.[27][28]In the randomization-based analysis, there isno assumptionof anormaldistribution and certainlyno assumptionofindependence. On the contrary,the observations are dependent! The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments. However, when applied to data from non-randomized experiments orobservational studies, model-based analysis lacks the warrant of randomization.[29]For observational data, the derivation of confidence intervals must usesubjectivemodels, as emphasized byRonald Fisherand his followers. In practice, the estimates of treatment-effects from observational studies generally are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.[30] The normal-model based ANOVA analysis assumes the independence, normality, and homogeneity of variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses requirehomoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA.[31]There arenonecessary assumptions for ANOVA in its full generality, but theF-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest. Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. 
If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance.[32]Also, a statistician may specify that logarithmic transforms be applied to the responses which are believed to follow a multiplicative model.[23][33]According to Cauchy'sfunctional equationtheorem, thelogarithmis the only continuous transformation that transforms real multiplication to addition.[citation needed] ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry.[34][35]This is an example of datacoding. The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean".[36] ANOVA uses traditional standardized terminology. The definitional equation of sample variance iss2=1n−1∑i(yi−y¯)2{\textstyle s^{2}={\frac {1}{n-1}}\sum _{i}(y_{i}-{\bar {y}})^{2}}, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. The fundamental technique is a partitioning of the totalsum of squaresSSinto components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels. SSTotal=SSError+SSTreatments{\displaystyle SS_{\text{Total}}=SS_{\text{Error}}+SS_{\text{Treatments}}} The number ofdegrees of freedomDFcan be partitioned in a similar way: one of these components (that for error) specifies achi-squared distributionwhich describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect. DFTotal=DFError+DFTreatments{\displaystyle DF_{\text{Total}}=DF_{\text{Error}}+DF_{\text{Treatments}}} TheF-testis used for comparing the factors of the total deviation. 
For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic F=variance between treatmentsvariance within treatments{\displaystyle F={\frac {\text{variance between treatments}}{\text{variance within treatments}}}}F=MSTreatmentsMSError=SSTreatments/(I−1)SSError/(nT−I){\displaystyle F={\frac {MS_{\text{Treatments}}}{MS_{\text{Error}}}}={{SS_{\text{Treatments}}/(I-1)} \over {SS_{\text{Error}}/(n_{T}-I)}}} whereMSis mean square,I{\displaystyle I}is the number of treatments andnT{\displaystyle n_{T}}is the total number of cases to theF-distributionwithI−1{\displaystyle I-1}being the numerator degrees of freedom andnT−I{\displaystyle n_{T}-I}the denominator degrees of freedom. Using theF-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaledchi-squared distribution. The expected value of F is1+nσTreatment2/σError2{\displaystyle 1+{n\sigma _{\text{Treatment}}^{2}}/{\sigma _{\text{Error}}^{2}}}(wheren{\displaystyle n}is the treatment sample size) which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result: The ANOVAF-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, theF-test'sp-values closely approximate thepermutation test'sp-values: The approximation is particularly close when the design is balanced.[26][37]Suchpermutation testscharacterizetests with maximum poweragainst allalternative hypotheses, as observed byRosenbaum.[nb 2]The ANOVAF-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[38][nb 3] ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients."[39]"[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."[39] The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks andLatin squares(and variants:Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. 
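The one-way computation above takes only a few lines; the sketch below computes the treatment and error sums of squares directly and checks the F statistic and p-value against scipy.stats.f_oneway, using three made-up groups.

import numpy as np
from scipy import stats

groups = [np.array([6.1, 5.8, 6.4, 6.0]),       # illustrative treatment groups
          np.array([7.2, 6.9, 7.5, 7.1]),
          np.array([5.5, 5.9, 5.4, 5.8])]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()
I, n_T = len(groups), len(all_y)

ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_treat = ss_treat / (I - 1)                   # SS_Treatments / (I - 1)
ms_error = ss_error / (n_T - I)                 # SS_Error / (n_T - I)
F = ms_treat / ms_error
p = stats.f.sf(F, I - 1, n_T - I)               # upper-tail F probability

print(F, p)
print(stats.f_oneway(*groups))                  # same F statistic and p-value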
There are some alternatives to conventional one-way analysis of variance, e.g.: Welch's heteroscedastic F test, Welch's heteroscedastic F test with trimmed means and Winsorized variances, Brown-Forsythe test, Alexander-Govern test, James second order test and Kruskal-Wallis test, available inonewaytestsR It is useful to represent each data point in the following form, called a statistical model:Yij=μ+τj+εij{\displaystyle Y_{ij}=\mu +\tau _{j}+\varepsilon _{ij}}where That is, we envision an additive model that says every data point can be represented by summing three quantities: the true mean, averaged over all factor levels being investigated, plus an incremental component associated with the particular column (factor level), plus a final component associated with everything else affecting that specific data value. ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termedfactorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases.[40]Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms forinteractions(xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare.[41][verification needed]The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.[40] Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects."[42]Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958).[43]Some interactions can be removed (by transformations) while others cannot. A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support ofanalytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.[44] Some analysis is required in support of thedesignof the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments. In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. 
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals. Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions."[45]The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards. Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.[46] Power analysisis often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.[47][48][49][50] Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately "meaningful" units may be preferable for reporting purposes.[51] Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirmhomoscedasticityand gross normality.[52]Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are "planned" (a priori) or"post hoc." Planned tests are determined before looking at the data, and post hoc tests are conceived only after looking at the data (though the term "post hoc" is inconsistently used). The follow-up tests may be "simple" pairwise comparisons of individual group means or may be "compound" comparisons (e.g., comparing the mean pooling across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for themultiple comparisons problem. 
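The power analysis described above can be sketched informally by simulation: assume a design, effect sizes and error variance, generate many hypothetical experiments, and count how often the F-test rejects. The group means, error standard deviation, per-group sample size and significance level below are illustrative assumptions rather than values from the text.

```python
# Sketch: Monte Carlo power estimate for a one-way ANOVA design.
# The assumed group means, error SD, per-group n and alpha are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_means = [10.0, 10.5, 11.0]   # assumed treatment effects in the population
sigma = 1.5                        # assumed error standard deviation
n_per_group = 20
alpha = 0.05
n_sim = 2000

rejections = 0
for _ in range(n_sim):
    samples = [rng.normal(m, sigma, n_per_group) for m in group_means]
    _, p = stats.f_oneway(*samples)
    if p < alpha:
        rejections += 1

print("estimated power:", rejections / n_sim)
```

Raising the per-group sample size or reducing the assumed error variance in the sketch increases the estimated power, matching the design considerations discussed above.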
Follow-up tests to identify which specific groups, variables, or factors have statistically different means include theTukey's range test, andDuncan's new multiple range test. In turn, these tests are often followed with aCompact Letter Display (CLD)methodology in order to render the output of the mentioned tests more transparent to a non-statistician audience. There are several types of ANOVA. Many statisticians base ANOVA on thedesign of the experiment,[53]especially on the protocol that specifies therandom assignmentof treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of anyblocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.[54] Some popular designs use the following types of ANOVA: Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power.[57]For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs."[58]In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, andF-ratios will depend on the order in which the sources of variation are considered."[39] ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred.[51] ANOVA is considered to be a special case oflinear regression[59][60]which in turn is a special case of thegeneral linear model.[61]All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized. TheKruskal-Wallis testand theFriedman testarenonparametrictests which do not rely on an assumption of normality.[62][63] Below we make clear the connection between multi-way ANOVA and linear regression. Linearly re-order the data so thatk{\displaystyle k}-th observation is associated with a responseyk{\displaystyle y_{k}}and factorsZk,b{\displaystyle Z_{k,b}}whereb∈{1,2,…,B}{\displaystyle b\in \{1,2,\ldots ,B\}}denotes the different factors andB{\displaystyle B}is the total number of factors. In one-way ANOVAB=1{\displaystyle B=1}and in two-way ANOVAB=2{\displaystyle B=2}. Furthermore, we assume theb{\displaystyle b}-th factor hasIb{\displaystyle I_{b}}levels, namely{1,2,…,Ib}{\displaystyle \{1,2,\ldots ,I_{b}\}}. Now, we canone-hotencode the factors into the∑b=1BIb{\textstyle \sum _{b=1}^{B}I_{b}}dimensional vectorvk{\displaystyle v_{k}}. The one-hot encoding functiongb:{1,2,…,Ib}↦{0,1}Ib{\displaystyle g_{b}:\{1,2,\ldots ,I_{b}\}\mapsto \{0,1\}^{I_{b}}}is defined such that thei{\displaystyle i}-th entry ofgb(Zk,b){\displaystyle g_{b}(Z_{k,b})}isgb(Zk,b)i={1ifi=Zk,b0otherwise{\displaystyle g_{b}(Z_{k,b})_{i}={\begin{cases}1&{\text{if }}i=Z_{k,b}\\0&{\text{otherwise}}\end{cases}}}The vectorvk{\displaystyle v_{k}}is the concatenation of all of the above vectors for allb{\displaystyle b}. 
Thus,vk=[g1(Zk,1),g2(Zk,2),…,gB(Zk,B)]{\displaystyle v_{k}=[g_{1}(Z_{k,1}),g_{2}(Z_{k,2}),\ldots ,g_{B}(Z_{k,B})]}. In order to obtain a fully generalB{\displaystyle B}-way interaction ANOVA we must also concatenate every additional interaction term in the vectorvk{\displaystyle v_{k}}and then add an intercept term. Let that vector beXk{\displaystyle X_{k}}. With this notation in place, we now have the exact connection with linear regression. We simply regress responseyk{\displaystyle y_{k}}against the vectorXk{\displaystyle X_{k}}. However, there is a concern aboutidentifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can useF-statistics or other methods to determine the relevance of the individual factors. We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels. Defineai=1{\displaystyle a_{i}=1}ifZk,1=i{\displaystyle Z_{k,1}=i}andbi=1{\displaystyle b_{i}=1}ifZk,2=i{\displaystyle Z_{k,2}=i}, i.e.a{\displaystyle a}is the one-hot encoding of the first factor andb{\displaystyle b}is the one-hot encoding of the second factor. With that,Xk=[a1,a2,b1,b2,b3,a1×b1,a1×b2,a1×b3,a2×b1,a2×b2,a2×b3,1]{\displaystyle X_{k}=[a_{1},a_{2},b_{1},b_{2},b_{3},a_{1}\times b_{1},a_{1}\times b_{2},a_{1}\times b_{3},a_{2}\times b_{1},a_{2}\times b_{2},a_{2}\times b_{3},1]}where the last term is an intercept term. For a more concrete example suppose thatZk,1=2Zk,2=1{\displaystyle {\begin{aligned}Z_{k,1}&=2\\Z_{k,2}&=1\end{aligned}}}Then,Xk=[0,1,1,0,0,0,0,0,1,0,0,1]{\displaystyle X_{k}=[0,1,1,0,0,0,0,0,1,0,0,1]}
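The 2 × 3 interaction example can be checked mechanically. The sketch below (assuming NumPy) builds the design vector Xk from the two one-hot encodings and their pairwise products, and reproduces the stated vector for Zk,1 = 2, Zk,2 = 1.

```python
# Sketch: build the design vector X_k for the 2-level x 3-level interaction example
# and check it against the stated result for Z_k1 = 2, Z_k2 = 1.
import numpy as np

def one_hot(level, n_levels):
    v = np.zeros(n_levels, dtype=int)
    v[level - 1] = 1          # levels are 1-based, as in the text
    return v

def design_vector(z1, z2, levels=(2, 3)):
    a = one_hot(z1, levels[0])                      # first factor
    b = one_hot(z2, levels[1])                      # second factor
    inter = np.outer(a, b).ravel()                  # a1*b1, a1*b2, ..., a2*b3
    return np.concatenate([a, b, inter, [1]])       # intercept term last

print(design_vector(2, 1))   # [0 1 1 0 0 0 0 0 1 0 0 1], matching the example
```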
https://en.wikipedia.org/wiki/Analysis_of_variance
Instatistics, thecoefficient of determination, denotedR2orr2and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is astatisticused in the context ofstatistical modelswhose main purpose is either thepredictionof future outcomes or the testing ofhypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1][2][3] There are several definitions ofR2that are only sometimes equivalent. Insimple linear regression(which includes anintercept),r2is simply the square of the samplecorrelation coefficient(r), between the observed outcomes and the observed predictor values.[4]If additionalregressorsare included,R2is the square of thecoefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1. There are cases whereR2can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used,R2may still be negative, for example when linear regression is conducted without including an intercept,[5]or when a non-linear function is used to fit the data.[6]In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion. The coefficient of determination can be more intuitively informative thanMAE,MAPE,MSE, andRMSEinregression analysisevaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared toSMAPEon certain test datasets.[7] When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on theR2of the linear regression (i.e.,Yobs=m·Ypred+ b).[citation needed]TheR2quantifies the degree of any linear correlation betweenYobsandYpred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration:Yobs= 1·Ypred+ 0 (i.e., the 1:1 line).[8][9] Adata sethasnvalues markedy1, ...,yn(collectively known asyior as a vectory= [y1, ...,yn]T), each associated with a fitted (or modeled, or predicted) valuef1, ...,fn(known asfi, or sometimesŷi, as a vectorf). Define theresidualsasei=yi−fi(forming a vectore). Ify¯{\displaystyle {\bar {y}}}is the mean of the observed data:y¯=1n∑i=1nyi{\displaystyle {\bar {y}}={\frac {1}{n}}\sum _{i=1}^{n}y_{i}}then the variability of the data set can be measured with twosums of squaresformulas: The most general definition of the coefficient of determination isR2=1−SSresSStot{\displaystyle R^{2}=1-{SS_{\rm {res}} \over SS_{\rm {tot}}}} In the best case, the modeled values exactly match the observed values, which results inSSres=0{\displaystyle SS_{\text{res}}=0}andR2= 1. A baseline model, which always predictsy, will haveR2= 0. In a general form,R2can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data):R2=1−FVU{\displaystyle R^{2}=1-{\text{FVU}}} A larger value ofR2implies a more successful regression model.[4]: 463SupposeR2= 0.49. 
This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called theexplained sum of squares, is defined as In some cases, as insimple linear regression, thetotal sum of squaresequals the sum of the two other sums of squares defined above: SeePartitioning in the general OLS modelfor a derivation of this result for one case where the relation holds. When this relation does hold, the above definition ofR2is equivalent to wherenis the number of observations (cases) on the variables. In this formR2is expressed as the ratio of theexplained variance(variance of the model's predictions, which isSSreg/n) to the total variance (sample variance of the dependent variable, which isSStot/n). This partition of the sum of squares holds for instance when the model valuesƒihave been obtained bylinear regression. A mildersufficient conditionreads as follows: The model has the form where theqiare arbitrary values that may or may not depend onior on other free parameters (the common choiceqi=xiis just one special case), and the coefficient estimatesα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}are obtained by minimizing the residual sum of squares. This set of conditions is an important one and it has a number of implications for the properties of the fittedresidualsand the modelled values. In particular, under these conditions: In linear least squaresmultiple regression(with fitted intercept and slope),R2equalsρ2(y,f){\displaystyle \rho ^{2}(y,f)}the square of thePearson correlation coefficientbetween the observedy{\displaystyle y}and modeled (predicted)f{\displaystyle f}data values of the dependent variable. In alinear least squares regression with a single explanator(with fitted intercept and slope), this is also equal toρ2(y,x){\displaystyle \rho ^{2}(y,x)}the squared Pearson correlation coefficient between the dependent variabley{\displaystyle y}and explanatory variablex{\displaystyle x}. It should not be confused with the correlation coefficient between twoexplanatory variables, defined as where the covariance between two coefficient estimates, as well as theirstandard deviations, are obtained from thecovariance matrixof the coefficient estimates,(XTX)−1{\displaystyle (X^{T}X)^{-1}}. Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, anR2value can be calculated as the square of thecorrelation coefficientbetween the originaly{\displaystyle y}and modeledf{\displaystyle f}data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the formα+βƒi).[citation needed]According to Everitt,[10]this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables. R2is a measure of thegoodness of fitof a model.[11]In regression, theR2coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. AnR2of 1 indicates that the regression predictions perfectly fit the data. 
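The definitions above are easy to verify numerically. The following sketch (assuming NumPy; the data are synthetic) computes R2 as 1 − SSres/SStot for a simple linear fit with an intercept and confirms that it equals the squared sample correlation between x and y, as stated for simple linear regression.

```python
# Sketch: R^2 = 1 - SS_res / SS_tot for a simple linear fit, checked against
# the squared sample correlation between y and x (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2.0 + 1.5 * x + rng.normal(scale=0.8, size=100)

slope, intercept = np.polyfit(x, y, 1)    # least-squares line with intercept
f = intercept + slope * x                 # fitted values

ss_res = np.sum((y - f) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

r = np.corrcoef(x, y)[0, 1]
print(r2, r ** 2)                         # the two numbers agree
```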
Values ofR2outside the range 0 to 1 occur when the model fits the data worse than the worst possibleleast-squarespredictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth[12]is used (this is the equation used most often),R2can be less than zero. If equation 2 of Kvålseth is used,R2can be greater than one. In all instances whereR2is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizingSSres. In this case,R2increases as the number of variables in the model is increased (R2ismonotone increasingwith the number of variables included—it will never decrease). This illustrates a drawback to one possible use ofR2, where one might keep adding variables (kitchen sink regression) to increase theR2value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because theR2will never decrease as variables are added and will likely experience an increase due to chance alone. This leads to the alternative approach of looking at theadjustedR2. The explanation of this statistic is almost the same asR2but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, theR2statistic can be calculated as above and may still be a useful measure. If fitting is byweighted least squaresorgeneralized least squares, alternative versions ofR2can be calculated appropriate to those statistical frameworks, while the "raw"R2may still be useful if it is more easily interpreted. Values forR2can be calculated for any type of predictive model, which need not have a statistical basis. Consider a linear model withmore than a single explanatory variable, of the form where, for theith case,Yi{\displaystyle {Y_{i}}}is the response variable,Xi,1,…,Xi,p{\displaystyle X_{i,1},\dots ,X_{i,p}}arepregressors, andεi{\displaystyle \varepsilon _{i}}is a mean zeroerrorterm. The quantitiesβ0,…,βp{\displaystyle \beta _{0},\dots ,\beta _{p}}are unknown coefficients, whose values are estimated byleast squares. The coefficient of determinationR2is a measure of the global fit of the model. Specifically,R2is an element of [0, 1] and represents the proportion of variability inYithat may be attributed to some linear combination of the regressors (explanatory variables) inX.[13] R2is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus,R2= 1 indicates that the fitted model explains all variability iny{\displaystyle y}, whileR2= 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept =y¯{\displaystyle {\bar {y}}}) between the response variable and regressors). An interior value such asR2= 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown,lurking variablesor inherent variability." A caution that applies toR2, as to other statistical descriptions ofcorrelationand association is that "correlation does not imply causation." 
In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause"). In case of a single regressor, fitted by least squares,R2is the square of thePearson product-moment correlation coefficientrelating the regressor and the response variable. More generally,R2is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, theR2can be referred to as thecoefficient of multiple determination. Inleast squaresregression using typical data,R2is at least weakly increasing with an increase in number of regressors in the model. Because increases in the number of regressors increase the value ofR2,R2alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, anF-testcan be performed on theresidual sum of squares[citation needed], similar to the F-tests inGranger causality, though this is not always appropriate[further explanation needed]. As a reminder of this, some authors denoteR2byRq2, whereqis the number of columns inX(the number of explanators including the constant). To demonstrate this property, first recall that the objective of least squares linear regression is whereXiis a row vector of values of explanatory variables for caseiandbis a column vector of coefficients of the respective elements ofXi. The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns ofX{\displaystyle X}(the explanatory data matrix whoseith row isXi) are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting thatSStot{\displaystyle SS_{tot}}depends only ony, the non-decreasing property ofR2follows directly from the definition above. The intuitive reason that using an additional explanatory variable cannot lower theR2is this: MinimizingSSres{\displaystyle SS_{\text{res}}}is equivalent to maximizingR2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and theR2unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves theR2. The above gives an analytical explanation of the inflation ofR2. Next, an example based on ordinary least square from a geometric perspective is shown below.[14] A simple case to be considered first: This equation describes theordinary least squares regressionmodel with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of true value onto a model space inR{\displaystyle \mathbb {R} }(without intercept). The residual is shown as the red line. This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. 
Geometrically, it is the projection of the true value onto a larger model space in ℝ² (without intercept). Noticeably, the values of β0 and β1 are not the same as in the equation for the smaller model space as long as X1 and X2 are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space in ℝ², giving the minimal distance from the space. The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the space, and any other line would be longer than the blue one. Considering the calculation for R2, a smaller value of SSres will lead to a larger value of R2, meaning that adding regressors will result in inflation of R2. R2 on its own does not indicate, for example, whether the regressors are a cause of changes in the dependent variable or whether the most appropriate set of regressors has been chosen. The use of an adjusted R2 (one common notation is R̄2, pronounced "R bar squared"; another is Ra2 or Radj2) is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting.[15] By far the most used one, to the point that it is typically just referred to as adjusted R2, is the correction proposed by Mordecai Ezekiel.[15][16][17] The adjusted R2 is defined as R̄2 = 1 − (SSres/dfres)/(SStot/dftot), where dfres is the degrees of freedom of the estimate of the population variance around the model, and dftot is the degrees of freedom of the estimate of the population variance around the mean. dfres is given in terms of the sample size n and the number of variables p in the model, dfres = n − p − 1. dftot is given in the same way, but with p being zero for the mean, i.e. dftot = n − 1. Inserting the degrees of freedom and using the definition of R2, it can be rewritten as R̄2 = 1 − (1 − R2)(n − 1)/(n − p − 1), where p is the total number of explanatory variables in the model (excluding the intercept), and n is the sample size. The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which adjusted R2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms. The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of the bias will decrease, and these two contributions add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a U-shaped curve on the right.
For the adjusted R2 specifically, the model complexity (i.e. the number of parameters) affects both R2 and the factor (n − 1)/(n − p − 1), and the adjusted R2 thereby captures both influences in a single measure of the overall performance of the model. R2 can be interpreted as reflecting the variance explained by the model, which is influenced by the model complexity. A high R2 indicates a lower bias error because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In the adjusted R2, the term (1 − R2) will be lower at high complexity, which by itself results in a higher adjusted R2, consistently indicating a better performance. On the other hand, the factor (n − 1)/(n − p − 1) is affected by the model complexity in the opposite direction: it increases when regressors are added (i.e. with increased model complexity) and pushes the adjusted R2 toward worse apparent performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance. Considering the calculation of the adjusted R2, more parameters will increase R2, which tends to increase the adjusted R2; nevertheless, adding more parameters will also increase the factor (n − 1)/(n − p − 1) and thus decrease the adjusted R2. These two trends construct an inverted-U relationship between model complexity and the adjusted R2, which is consistent with the U-shaped trend of total error versus model complexity. Unlike R2, which will always increase when model complexity increases, the adjusted R2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using the adjusted R2 instead of R2 could thereby help prevent overfitting. Following the same logic, the adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value.[18] The adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.[18] The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as R2 = 1 − VARres/VARtot, where VARres = SSres/n and VARtot = SStot/n are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: VARres = SSres/(n − p) and VARtot = SStot/(n − 1). Despite using unbiased estimators for the population variances of the error and the dependent variable, the adjusted R2 is not an unbiased estimator of the population R2,[18] which would result from using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2,[19] which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator[18] or the exact Olkin–Pratt estimator[20] should be preferred over the (Ezekiel) adjusted R2.
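A short numerical sketch (assuming NumPy; the data are synthetic) illustrates both points made above: plain R2 never decreases when an irrelevant regressor is added, while the Ezekiel adjusted R2 = 1 − (1 − R2)(n − 1)/(n − p − 1) typically decreases.

```python
# Sketch: adding an irrelevant random regressor never lowers R^2, but usually
# lowers the adjusted R^2 (synthetic data).
import numpy as np

def fit_r2(X, y):
    # least-squares fit with an intercept column
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

def adjusted_r2(r2, n, p):
    # p = number of explanatory variables excluding the intercept
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(4)
n = 60
x = rng.normal(size=(n, 1))
y = 0.5 + 1.0 * x[:, 0] + rng.normal(size=n)
noise = rng.normal(size=(n, 1))               # unrelated to y

r2_small = fit_r2(x, y)
r2_big = fit_r2(np.column_stack([x, noise]), y)
print(r2_small, adjusted_r2(r2_small, n, 1))
print(r2_big, adjusted_r2(r2_big, n, 2))      # R^2 rises, adjusted R^2 typically falls
```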
The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model.[21][22][23]This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model. The calculation for the partialR2is relatively straightforward after estimating two models and generating theANOVAtables for them. The calculation for the partialR2is which is analogous to the usual coefficient of determination: As explained above, model selection heuristics such as the adjustedR2criterion and theF-testexamine whether the totalR2sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the totalR2will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.[24] Alternatively, one can decompose a generalized version ofR2to quantify the relevance of deviating from a hypothesis.[24]As Hoornweg (2018) shows, severalshrinkage estimators– such asBayesian linear regression,ridge regression, and the (adaptive)lasso– make use of this decomposition ofR2when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as It is assumed that the matrixXis standardized with Z-scores and that the column vectory{\displaystyle y}is centered to have a mean of zero. Let the column vectorβ0{\displaystyle \beta _{0}}refer to the hypothesized regression parameters and let the column vectorb{\displaystyle b}denote the estimated parameters. We can then define AnR2of 75% means that the in-sample accuracy improves by 75% if the data-optimizedbsolutions are used instead of the hypothesizedβ0{\displaystyle \beta _{0}}values. In the special case thatβ0{\displaystyle \beta _{0}}is a vector of zeros, we obtain the traditionalR2again. The individual effect onR2of deviating from a hypothesis can be computed withR⊗{\displaystyle R^{\otimes }}('R-outer'). Thisp{\displaystyle p}timesp{\displaystyle p}matrix is given by wherey~0=y−Xβ0{\displaystyle {\tilde {y}}_{0}=y-X\beta _{0}}. The diagonal elements ofR⊗{\displaystyle R^{\otimes }}exactly add up toR2. If regressors are uncorrelated andβ0{\displaystyle \beta _{0}}is a vector of zeros, then thejth{\displaystyle j^{\text{th}}}diagonal element ofR⊗{\displaystyle R^{\otimes }}simply corresponds to ther2value betweenxj{\displaystyle x_{j}}andy{\displaystyle y}. When regressorsxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}are correlated,Rii⊗{\displaystyle R_{ii}^{\otimes }}might increase at the cost of a decrease inRjj⊗{\displaystyle R_{jj}^{\otimes }}. As a result, the diagonal elements ofR⊗{\displaystyle R^{\otimes }}may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements ofR⊗{\displaystyle R^{\otimes }}to quantify the relevance of deviating from a hypothesized value.[24]Click on thelassofor an example. In the case oflogistic regression, usually fit bymaximum likelihood, there are several choices ofpseudo-R2. 
One is the generalizedR2originally proposed by Cox & Snell,[25]and independently by Magee:[26] whereL(0){\displaystyle {\mathcal {L}}(0)}is the likelihood of the model with only the intercept,L(θ^){\displaystyle {{\mathcal {L}}({\widehat {\theta }})}}is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) andnis the sample size. It is easily rewritten to: whereDis the test statistic of thelikelihood ratio test. Nico Nagelkerkenoted that it had the following properties:[27][22] However, in the case of a logistic model, whereL(θ^){\displaystyle {\mathcal {L}}({\widehat {\theta }})}cannot be greater than 1,R2is between 0 andRmax2=1−(L(0))2/n{\displaystyle R_{\max }^{2}=1-({\mathcal {L}}(0))^{2/n}}: thus, Nagelkerke suggested the possibility to define a scaledR2asR2/R2max.[22] Occasionally, residual statistics are used for indicating goodness of fit. Thenormof residuals is calculated as the square-root of thesum of squares of residuals(SSR): Similarly, thereduced chi-squareis calculated as the SSR divided by the degrees of freedom. BothR2and the norm of residuals have their relative merits. Forleast squaresanalysisR2varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage ofR2is theSStot{\displaystyle SS_{\text{tot}}}term acts tonormalizethe value. If theyivalues are all multiplied by a constant, the norm of residuals will also change by that constant butR2will stay the same. As a basic example, for the linear least squares fit to the set of data: R2= 0.998, and norm of residuals = 0.302. If all values ofyare multiplied by 1000 (for example, in anSI prefixchange), thenR2remains the same, but norm of residuals = 302. Another single-parameter indicator of fit is theRMSEof the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.[28] The creation of the coefficient of determination has been attributed to the geneticistSewall Wrightand was first published in 1921.[29]
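For the pseudo-R2 formulas above, a minimal sketch is given below. The log-likelihoods and sample size are hypothetical placeholders, not values from the text; in practice they would come from fitting an intercept-only model and a full logistic model to the same data.

```python
# Sketch: Cox-Snell and Nagelkerke pseudo-R^2 from model log-likelihoods.
# The log-likelihoods and sample size below are hypothetical placeholders.
import math

ll_null = -34.6    # log L(0): intercept-only model (hypothetical)
ll_full = -20.1    # log L(theta_hat): fitted model (hypothetical)
n = 50

r2_cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)   # 1 - (L0/L1)^(2/n)
r2_max = 1.0 - math.exp(2.0 * ll_null / n)                     # 1 - L0^(2/n)
r2_nagelkerke = r2_cox_snell / r2_max                          # scaled version

print(r2_cox_snell, r2_max, r2_nagelkerke)
```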
https://en.wikipedia.org/wiki/R-squared
Instatistics, thefraction of variance unexplained(FVU) in the context of aregression taskis the fraction of variance of theregressand(dependent variable)Ywhich cannot be explained, i.e., which is not correctly predicted, by theexplanatory variablesX. Suppose we are given a regression functionf{\displaystyle f}yielding for eachyi{\displaystyle y_{i}}an estimatey^i=f(xi){\displaystyle {\widehat {y}}_{i}=f(x_{i})}wherexi{\displaystyle x_{i}}is the vector of theithobservations on all the explanatory variables.[1]: 181We define the fraction of variance unexplained (FVU) as: whereR2is thecoefficient of determinationandVARerrandVARtotare the variance of the residuals and the sample variance of the dependent variable.SSerr(the sum of squared predictions errors, equivalently theresidual sum of squares),SStot(thetotal sum of squares), andSSreg(the sum of squares of the regression, equivalently theexplained sum of squares) are given by Alternatively, the fraction of variance unexplained can be defined as follows: where MSE(f) is themean squared errorof the regression functionƒ. It is useful to consider the second definition to understand FVU. When trying to predictY, the most naive regression function that we can think of is the constant function predicting the mean ofY, i.e.,f(xi)=y¯{\displaystyle f(x_{i})={\bar {y}}}. It follows that the MSE of this function equals the variance ofY; that is,SSerr=SStot, andSSreg= 0. In this case, no variation inYcan be accounted for, and the FVU then has its maximum value of 1. More generally, the FVU will be 1 if the explanatory variablesXtell us nothing aboutYin the sense that the predicted values ofYdo notcovarywithY. But as prediction gets better and the MSE can be reduced, the FVU goes down. In the case of perfect prediction wherey^i=yi{\displaystyle {\hat {y}}_{i}=y_{i}}for alli, the MSE is 0,SSerr= 0,SSreg=SStot, and the FVU is 0.
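A short numerical sketch (assuming NumPy; synthetic data) computes the FVU of a fitted line and of the naive mean predictor, for which the FVU is exactly 1 as described above.

```python
# Sketch: fraction of variance unexplained for a fitted model versus the
# naive mean predictor, on synthetic data.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 3.0 + 0.7 * x + rng.normal(scale=0.5, size=100)

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

ss_tot = np.sum((y - y.mean()) ** 2)
fvu_model = np.sum((y - y_hat) ** 2) / ss_tot     # well below 1
fvu_mean = np.sum((y - y.mean()) ** 2) / ss_tot   # exactly 1 for the mean predictor
print(fvu_model, fvu_mean)
```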
https://en.wikipedia.org/wiki/Fraction_of_variance_unexplained
Inprobability theoryandstatistics,varianceis theexpected valueof thesquared deviation from the meanof arandom variable. Thestandard deviation(SD) is obtained as the square root of the variance. Variance is a measure ofdispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the secondcentral momentof adistribution, and thecovarianceof the random variable with itself, and it is often represented byσ2{\displaystyle \sigma ^{2}},s2{\displaystyle s^{2}},Var⁡(X){\displaystyle \operatorname {Var} (X)},V(X){\displaystyle V(X)}, orV(X){\displaystyle \mathbb {V} (X)}.[1] An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as theexpected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions. There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoreticalprobability distributionand is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below. The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it includedescriptive statistics,statistical inference,hypothesis testing,goodness of fit, andMonte Carlo sampling. The variance of a random variableX{\displaystyle X}is theexpected valueof thesquared deviation from the meanofX{\displaystyle X},μ=E⁡[X]{\displaystyle \mu =\operatorname {E} [X]}:Var⁡(X)=E⁡[(X−μ)2].{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right].}This definition encompasses random variables that are generated by processes that arediscrete,continuous,neither, or mixed. The variance can also be thought of as thecovarianceof a random variable with itself: Var⁡(X)=Cov⁡(X,X).{\displaystyle \operatorname {Var} (X)=\operatorname {Cov} (X,X).}The variance is also equivalent to the secondcumulantof a probability distribution that generatesX{\displaystyle X}. 
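The distinction between the two kinds of variance shows up directly in numerical libraries. The sketch below (assuming NumPy) computes both for a small set of observations; the data values are arbitrary.

```python
# Sketch: the two kinds of variance discussed above, computed with NumPy.
# ddof=0 gives the population variance of the observations at hand;
# ddof=1 gives the usual sample estimate of a wider population's variance.
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

population_var = np.var(data, ddof=0)   # mean of squared deviations from the mean
sample_var = np.var(data, ddof=1)       # divides by n - 1 instead of n

print(population_var, sample_var)       # 4.0 and about 4.57
```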
The variance is typically designated asVar⁡(X){\displaystyle \operatorname {Var} (X)}, or sometimes asV(X){\displaystyle V(X)}orV(X){\displaystyle \mathbb {V} (X)}, or symbolically asσX2{\displaystyle \sigma _{X}^{2}}or simplyσ2{\displaystyle \sigma ^{2}}(pronounced "sigmasquared"). The expression for the variance can be expanded as follows:Var⁡(X)=E⁡[(X−E⁡[X])2]=E⁡[X2−2XE⁡[X]+E⁡[X]2]=E⁡[X2]−2E⁡[X]E⁡[X]+E⁡[X]2=E⁡[X2]−2E⁡[X]2+E⁡[X]2=E⁡[X2]−E⁡[X]2{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}-2X\operatorname {E} [X]+\operatorname {E} [X]^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]\operatorname {E} [X]+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]^{2}+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}\end{aligned}}} In other words, the variance ofXis equal to the mean of the square ofXminus the square of the mean ofX. This equation should not be used for computations usingfloating-point arithmetic, because it suffers fromcatastrophic cancellationif the two components of the equation are similar in magnitude. For other numerically stable alternatives, seealgorithms for calculating variance. If the generator of random variableX{\displaystyle X}isdiscretewithprobability mass functionx1↦p1,x2↦p2,…,xn↦pn{\displaystyle x_{1}\mapsto p_{1},x_{2}\mapsto p_{2},\ldots ,x_{n}\mapsto p_{n}}, then Var⁡(X)=∑i=1npi⋅(xi−μ)2,{\displaystyle \operatorname {Var} (X)=\sum _{i=1}^{n}p_{i}\cdot {\left(x_{i}-\mu \right)}^{2},} whereμ{\displaystyle \mu }is the expected value. That is, μ=∑i=1npixi.{\displaystyle \mu =\sum _{i=1}^{n}p_{i}x_{i}.} (When such a discreteweighted varianceis specified by weights whose sum is not 1, then one divides by the sum of the weights.) The variance of a collection ofn{\displaystyle n}equally likely values can be written as Var⁡(X)=1n∑i=1n(xi−μ)2{\displaystyle \operatorname {Var} (X)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}} whereμ{\displaystyle \mu }is the average value. 
That is, μ=1n∑i=1nxi.{\displaystyle \mu ={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.} The variance of a set ofn{\displaystyle n}equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all pairwise squared distances of points from each other:[2] Var⁡(X)=1n2∑i=1n∑j=1n12(xi−xj)2=1n2∑i∑j>i(xi−xj)2.{\displaystyle \operatorname {Var} (X)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {1}{2}}{\left(x_{i}-x_{j}\right)}^{2}={\frac {1}{n^{2}}}\sum _{i}\sum _{j>i}{\left(x_{i}-x_{j}\right)}^{2}.} If the random variableX{\displaystyle X}has aprobability density functionf(x){\displaystyle f(x)}, andF(x){\displaystyle F(x)}is the correspondingcumulative distribution function, then Var⁡(X)=σ2=∫R(x−μ)2f(x)dx=∫Rx2f(x)dx−2μ∫Rxf(x)dx+μ2∫Rf(x)dx=∫Rx2dF(x)−2μ∫RxdF(x)+μ2∫RdF(x)=∫Rx2dF(x)−2μ⋅μ+μ2⋅1=∫Rx2dF(x)−μ2,{\displaystyle {\begin{aligned}\operatorname {Var} (X)=\sigma ^{2}&=\int _{\mathbb {R} }{\left(x-\mu \right)}^{2}f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}f(x)\,dx-2\mu \int _{\mathbb {R} }xf(x)\,dx+\mu ^{2}\int _{\mathbb {R} }f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \int _{\mathbb {R} }x\,dF(x)+\mu ^{2}\int _{\mathbb {R} }\,dF(x)\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \cdot \mu +\mu ^{2}\cdot 1\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-\mu ^{2},\end{aligned}}} or equivalently, Var⁡(X)=∫Rx2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{\mathbb {R} }x^{2}f(x)\,dx-\mu ^{2},} whereμ{\displaystyle \mu }is the expected value ofX{\displaystyle X}given by μ=∫Rxf(x)dx=∫RxdF(x).{\displaystyle \mu =\int _{\mathbb {R} }xf(x)\,dx=\int _{\mathbb {R} }x\,dF(x).} In these formulas, the integrals with respect todx{\displaystyle dx}anddF(x){\displaystyle dF(x)}areLebesgueandLebesgue–Stieltjesintegrals, respectively. If the functionx2f(x){\displaystyle x^{2}f(x)}isRiemann-integrableon every finite interval[a,b]⊂R,{\displaystyle [a,b]\subset \mathbb {R} ,}then Var⁡(X)=∫−∞+∞x2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{-\infty }^{+\infty }x^{2}f(x)\,dx-\mu ^{2},} where the integral is animproper Riemann integral. Theexponential distributionwith parameterλ> 0 is a continuous distribution whoseprobability density functionis given byf(x)=λe−λx{\displaystyle f(x)=\lambda e^{-\lambda x}}on the interval[0, ∞). Its mean can be shown to beE⁡[X]=∫0∞xλe−λxdx=1λ.{\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }x\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }}.} Usingintegration by partsand making use of the expected value already calculated, we have:E⁡[X2]=∫0∞x2λe−λxdx=[−x2e−λx]0∞+∫0∞2xe−λxdx=0+2λE⁡[X]=2λ2.{\displaystyle {\begin{aligned}\operatorname {E} \left[X^{2}\right]&=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}\,dx\\&={\left[-x^{2}e^{-\lambda x}\right]}_{0}^{\infty }+\int _{0}^{\infty }2xe^{-\lambda x}\,dx\\&=0+{\frac {2}{\lambda }}\operatorname {E} [X]\\&={\frac {2}{\lambda ^{2}}}.\end{aligned}}} Thus, the variance ofXis given byVar⁡(X)=E⁡[X2]−E⁡[X]2=2λ2−(1λ)2=1λ2.{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}={\frac {2}{\lambda ^{2}}}-\left({\frac {1}{\lambda }}\right)^{2}={\frac {1}{\lambda ^{2}}}.} A fairsix-sided diecan be modeled as a discrete random variable,X, with outcomes 1 through 6, each with equal probability 1/6. 
The expected value ofXis(1+2+3+4+5+6)/6=7/2.{\displaystyle (1+2+3+4+5+6)/6=7/2.}Therefore, the variance ofXisVar⁡(X)=∑i=1616(i−72)2=16((−5/2)2+(−3/2)2+(−1/2)2+(1/2)2+(3/2)2+(5/2)2)=3512≈2.92.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\sum _{i=1}^{6}{\frac {1}{6}}\left(i-{\frac {7}{2}}\right)^{2}\\[5pt]&={\frac {1}{6}}\left((-5/2)^{2}+(-3/2)^{2}+(-1/2)^{2}+(1/2)^{2}+(3/2)^{2}+(5/2)^{2}\right)\\[5pt]&={\frac {35}{12}}\approx 2.92.\end{aligned}}} The general formula for the variance of the outcome,X, of ann-sideddie isVar⁡(X)=E⁡(X2)−(E⁡(X))2=1n∑i=1ni2−(1n∑i=1ni)2=(n+1)(2n+1)6−(n+12)2=n2−112.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left(X^{2}\right)-(\operatorname {E} (X))^{2}\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}i^{2}-\left({\frac {1}{n}}\sum _{i=1}^{n}i\right)^{2}\\[5pt]&={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}\\[4pt]&={\frac {n^{2}-1}{12}}.\end{aligned}}} The following table lists the variance for some commonly used probability distributions. Variance is non-negative because the squares are positive or zero:Var⁡(X)≥0.{\displaystyle \operatorname {Var} (X)\geq 0.} The variance of a constant is zero.Var⁡(a)=0.{\displaystyle \operatorname {Var} (a)=0.} Conversely, if the variance of a random variable is 0, then it isalmost surelya constant. That is, it always has the same value:Var⁡(X)=0⟺∃a:P(X=a)=1.{\displaystyle \operatorname {Var} (X)=0\iff \exists a:P(X=a)=1.} If a distribution does not have a finite expected value, as is the case for theCauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is aPareto distributionwhoseindexk{\displaystyle k}satisfies1<k≤2.{\displaystyle 1<k\leq 2.} The general formula for variance decomposition or thelaw of total varianceis: IfX{\displaystyle X}andY{\displaystyle Y}are two random variables, and the variance ofX{\displaystyle X}exists, then Var⁡[X]=E⁡(Var⁡[X∣Y])+Var⁡(E⁡[X∣Y]).{\displaystyle \operatorname {Var} [X]=\operatorname {E} (\operatorname {Var} [X\mid Y])+\operatorname {Var} (\operatorname {E} [X\mid Y]).} Theconditional expectationE⁡(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}ofX{\displaystyle X}givenY{\displaystyle Y}, and theconditional varianceVar⁡(X∣Y){\displaystyle \operatorname {Var} (X\mid Y)}may be understood as follows. Given any particular valueyof the random variableY, there is a conditional expectationE⁡(X∣Y=y){\displaystyle \operatorname {E} (X\mid Y=y)}given the eventY=y. This quantity depends on the particular valuey; it is a functiong(y)=E⁡(X∣Y=y){\displaystyle g(y)=\operatorname {E} (X\mid Y=y)}. That same function evaluated at the random variableYis the conditional expectationE⁡(X∣Y)=g(Y).{\displaystyle \operatorname {E} (X\mid Y)=g(Y).} In particular, ifY{\displaystyle Y}is a discrete random variable assuming possible valuesy1,y2,y3…{\displaystyle y_{1},y_{2},y_{3}\ldots }with corresponding probabilitiesp1,p2,p3…,{\displaystyle p_{1},p_{2},p_{3}\ldots ,}, then in the formula for total variance, the first term on the right-hand side becomes E⁡(Var⁡[X∣Y])=∑ipiσi2,{\displaystyle \operatorname {E} (\operatorname {Var} [X\mid Y])=\sum _{i}p_{i}\sigma _{i}^{2},} whereσi2=Var⁡[X∣Y=yi]{\displaystyle \sigma _{i}^{2}=\operatorname {Var} [X\mid Y=y_{i}]}. 
Similarly, the second term on the right-hand side becomes Var⁡(E⁡[X∣Y])=∑ipiμi2−(∑ipiμi)2=∑ipiμi2−μ2,{\displaystyle \operatorname {Var} (\operatorname {E} [X\mid Y])=\sum _{i}p_{i}\mu _{i}^{2}-\left(\sum _{i}p_{i}\mu _{i}\right)^{2}=\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2},} whereμi=E⁡[X∣Y=yi]{\displaystyle \mu _{i}=\operatorname {E} [X\mid Y=y_{i}]}andμ=∑ipiμi{\displaystyle \mu =\sum _{i}p_{i}\mu _{i}}. Thus the total variance is given by Var⁡[X]=∑ipiσi2+(∑ipiμi2−μ2).{\displaystyle \operatorname {Var} [X]=\sum _{i}p_{i}\sigma _{i}^{2}+\left(\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2}\right).} A similar formula is applied inanalysis of variance, where the corresponding formula is MStotal=MSbetween+MSwithin;{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{between}}+{\mathit {MS}}_{\text{within}};} hereMS{\displaystyle {\mathit {MS}}}refers to the Mean of the Squares. Inlinear regressionanalysis the corresponding formula is MStotal=MSregression+MSresidual.{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{regression}}+{\mathit {MS}}_{\text{residual}}.} This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated. Similar decompositions are possible for the sum of squared deviations (sum of squares,SS{\displaystyle {\mathit {SS}}}):SStotal=SSbetween+SSwithin,{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{between}}+{\mathit {SS}}_{\text{within}},}SStotal=SSregression+SSresidual.{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{regression}}+{\mathit {SS}}_{\text{residual}}.} The population variance for a non-negative random variable can be expressed in terms of thecumulative distribution functionFusing 2∫0∞u(1−F(u))du−[∫0∞(1−F(u))du]2.{\displaystyle 2\int _{0}^{\infty }u(1-F(u))\,du-{\left[\int _{0}^{\infty }(1-F(u))\,du\right]}^{2}.} This expression can be used to calculate the variance in situations where the CDF, but not thedensity, can be conveniently expressed. The secondmomentof a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e.argminmE((X−m)2)=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} \left(\left(X-m\right)^{2}\right)=\mathrm {E} (X)}. Conversely, if a continuous functionφ{\displaystyle \varphi }satisfiesargminmE(φ(X−m))=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} (\varphi (X-m))=\mathrm {E} (X)}for all random variablesX, then it is necessarily of the formφ(x)=ax2+b{\displaystyle \varphi (x)=ax^{2}+b}, wherea> 0. This also holds in the multidimensional case.[3] Unlike theexpected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via theirstandard deviationorroot mean square deviationis often preferred over using the variance. In the dice example the standard deviation is√2.9≈ 1.7, slightly larger than the expected absolute deviation of 1.5. The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. 
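The fair-die example can be verified numerically. The sketch below (assuming NumPy) reproduces the variance 35/12, the standard deviation of about 1.71 and the expected absolute deviation of 1.5, as well as the general (n2 − 1)/12 formula for n = 6.

```python
# Sketch: numerical check of the fair-die example: variance 35/12,
# standard deviation about 1.71, expected absolute deviation 1.5.
import numpy as np

outcomes = np.arange(1, 7)
p = np.full(6, 1.0 / 6.0)

mean = np.sum(p * outcomes)                     # 3.5
var = np.sum(p * (outcomes - mean) ** 2)        # 35/12, about 2.92
std = np.sqrt(var)                              # about 1.71
mad = np.sum(p * np.abs(outcomes - mean))       # expected absolute deviation, 1.5

print(mean, var, std, mad)

# general n-sided formula (n^2 - 1) / 12, checked for n = 6
n = 6
print((n ** 2 - 1) / 12)                        # 35/12 again
```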
The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalizationcovariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be morerobustas it is less sensitive tooutliersarising frommeasurement anomaliesor an undulyheavy-tailed distribution. Variance isinvariantwith respect to changes in alocation parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:Var⁡(X+a)=Var⁡(X).{\displaystyle \operatorname {Var} (X+a)=\operatorname {Var} (X).} If all values are scaled by a constant, the variance isscaledby the square of that constant:Var⁡(aX)=a2Var⁡(X).{\displaystyle \operatorname {Var} (aX)=a^{2}\operatorname {Var} (X).} The variance of a sum of two random variables is given byVar⁡(aX+bY)=a2Var⁡(X)+b2Var⁡(Y)+2abCov⁡(X,Y)Var⁡(aX−bY)=a2Var⁡(X)+b2Var⁡(Y)−2abCov⁡(X,Y){\displaystyle {\begin{aligned}\operatorname {Var} (aX+bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\,\operatorname {Cov} (X,Y)\\[1ex]\operatorname {Var} (aX-bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)-2ab\,\operatorname {Cov} (X,Y)\end{aligned}}} whereCov⁡(X,Y){\displaystyle \operatorname {Cov} (X,Y)}is thecovariance. In general, for the sum ofN{\displaystyle N}random variables{X1,…,XN}{\displaystyle \{X_{1},\dots ,X_{N}\}}, the variance becomes:Var⁡(∑i=1NXi)=∑i,j=1NCov⁡(Xi,Xj)=∑i=1NVar⁡(Xi)+∑i,j=1,i≠jNCov⁡(Xi,Xj),{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i,j=1}^{N}\operatorname {Cov} (X_{i},X_{j})=\sum _{i=1}^{N}\operatorname {Var} (X_{i})+\sum _{i,j=1,i\neq j}^{N}\operatorname {Cov} (X_{i},X_{j}),}see also generalBienaymé's identity. These results lead to the variance of alinear combinationas: Var⁡(∑i=1NaiXi)=∑i,j=1NaiajCov⁡(Xi,Xj)=∑i=1Nai2Var⁡(Xi)+∑i≠jaiajCov⁡(Xi,Xj)=∑i=1Nai2Var⁡(Xi)+2∑1≤i<j≤NaiajCov⁡(Xi,Xj).{\displaystyle {\begin{aligned}\operatorname {Var} \left(\sum _{i=1}^{N}a_{i}X_{i}\right)&=\sum _{i,j=1}^{N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+\sum _{i\neq j}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).\end{aligned}}} If the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are such thatCov⁡(Xi,Xj)=0,∀(i≠j),{\displaystyle \operatorname {Cov} (X_{i},X_{j})=0\ ,\ \forall \ (i\neq j),}then they are said to beuncorrelated. It follows immediately from the expression given earlier that if the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically: Var⁡(∑i=1NXi)=∑i=1NVar⁡(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i=1}^{N}\operatorname {Var} (X_{i}).} Since independent random variables are always uncorrelated (seeCovariance § Uncorrelatedness and independence), the equation above holds in particular when the random variablesX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances. DefineX{\displaystyle X}as a column vector ofn{\displaystyle n}random variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}, andc{\displaystyle c}as a column vector ofn{\displaystyle n}scalarsc1,…,cn{\displaystyle c_{1},\ldots ,c_{n}}. 
Therefore,cTX{\displaystyle c^{\mathsf {T}}X}is alinear combinationof these random variables, wherecT{\displaystyle c^{\mathsf {T}}}denotes thetransposeofc{\displaystyle c}. Also letΣ{\displaystyle \Sigma }be thecovariance matrixofX{\displaystyle X}. The variance ofcTX{\displaystyle c^{\mathsf {T}}X}is then given by:[4] Var⁡(cTX)=cTΣc.{\displaystyle \operatorname {Var} \left(c^{\mathsf {T}}X\right)=c^{\mathsf {T}}\Sigma c.} This implies that the variance of the mean can be written as (with a column vector of ones) Var⁡(x¯)=Var⁡(1n1′X)=1n21′Σ1.{\displaystyle \operatorname {Var} \left({\bar {x}}\right)=\operatorname {Var} \left({\frac {1}{n}}1'X\right)={\frac {1}{n^{2}}}1'\Sigma 1.} One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) ofuncorrelatedrandom variables is the sum of their variances: Var⁡(∑i=1nXi)=∑i=1nVar⁡(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\operatorname {Var} (X_{i}).} This statement is called theBienayméformula[5]and was discovered in 1853.[6][7]It is often made with the stronger condition that the variables areindependent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division bynis a linear transformation, this formula immediately implies that the variance of their mean is Var⁡(X¯)=Var⁡(1n∑i=1nXi)=1n2∑i=1nVar⁡(Xi)=1n2nσ2=σ2n.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)=\operatorname {Var} \left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.} That is, the variance of the mean decreases whennincreases. This formula for the variance of the mean is used in the definition of thestandard errorof the sample mean, which is used in thecentral limit theorem. To prove the initial statement, it suffices to show that Var⁡(X+Y)=Var⁡(X)+Var⁡(Y).{\displaystyle \operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y).} The general result then follows by induction. 
Starting with the definition, Var⁡(X+Y)=E⁡[(X+Y)2]−(E⁡[X+Y])2=E⁡[X2+2XY+Y2]−(E⁡[X]+E⁡[Y])2.{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[(X+Y)^{2}\right]-(\operatorname {E} [X+Y])^{2}\\[5pt]&=\operatorname {E} \left[X^{2}+2XY+Y^{2}\right]-(\operatorname {E} [X]+\operatorname {E} [Y])^{2}.\end{aligned}}} Using the linearity of theexpectation operatorand the assumption of independence (or uncorrelatedness) ofXandY, this further simplifies as follows: Var⁡(X+Y)=E⁡[X2]+2E⁡[XY]+E⁡[Y2]−(E⁡[X]2+2E⁡[X]E⁡[Y]+E⁡[Y]2)=E⁡[X2]+E⁡[Y2]−E⁡[X]2−E⁡[Y]2=Var⁡(X)+Var⁡(Y).{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} {\left[X^{2}\right]}+2\operatorname {E} [XY]+\operatorname {E} {\left[Y^{2}\right]}-\left(\operatorname {E} [X]^{2}+2\operatorname {E} [X]\operatorname {E} [Y]+\operatorname {E} [Y]^{2}\right)\\[5pt]&=\operatorname {E} \left[X^{2}\right]+\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [X]^{2}-\operatorname {E} [Y]^{2}\\[5pt]&=\operatorname {Var} (X)+\operatorname {Var} (Y).\end{aligned}}} In general, the variance of the sum ofnvariables is the sum of theircovariances: Var⁡(∑i=1nXi)=∑i=1n∑j=1nCov⁡(Xi,Xj)=∑i=1nVar⁡(Xi)+2∑1≤i<j≤nCov⁡(Xi,Xj).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\sum _{j=1}^{n}\operatorname {Cov} \left(X_{i},X_{j}\right)=\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)+2\sum _{1\leq i<j\leq n}\operatorname {Cov} \left(X_{i},X_{j}\right).} (Note: The second equality comes from the fact thatCov(Xi,Xi) = Var(Xi).) Here,Cov⁡(⋅,⋅){\displaystyle \operatorname {Cov} (\cdot ,\cdot )}is thecovariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory ofCronbach's alphainclassical test theory. So, if the variables have equal varianceσ2and the averagecorrelationof distinct variables isρ, then the variance of their mean is Var⁡(X¯)=σ2n+n−1nρσ2.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {\sigma ^{2}}{n}}+{\frac {n-1}{n}}\rho \sigma ^{2}.} This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing theuncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to Var⁡(X¯)=1n+n−1nρ.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {1}{n}}+{\frac {n-1}{n}}\rho .} This formula is used in theSpearman–Brown prediction formulaof classical test theory. This converges toρifngoes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have limn→∞Var⁡(X¯)=ρ.{\displaystyle \lim _{n\to \infty }\operatorname {Var} \left({\overline {X}}\right)=\rho .} Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. 
This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though thelaw of large numbersstates that the sample mean will converge for independent variables. There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample sizeNis a random variable whose variation adds to the variation ofX, such that,[8]Var⁡(∑i=1NXi)=E⁡[N]Var⁡(X)+Var⁡(N)(E⁡[X])2{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\operatorname {E} \left[N\right]\operatorname {Var} (X)+\operatorname {Var} (N)(\operatorname {E} \left[X\right])^{2}}which follows from thelaw of total variance. IfNhas aPoisson distribution, thenE⁡[N]=Var⁡(N){\displaystyle \operatorname {E} [N]=\operatorname {Var} (N)}with estimatorn=N. So, the estimator ofVar⁡(∑i=1nXi){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)}becomesnSx2+nX¯2{\displaystyle n{S_{x}}^{2}+n{\bar {X}}^{2}}, givingSE⁡(X¯)=Sx2+X¯2n{\displaystyle \operatorname {SE} ({\bar {X}})={\sqrt {\frac {{S_{x}}^{2}+{\bar {X}}^{2}}{n}}}}(seestandard error of the sample mean). The scaling property and the Bienaymé formula, along with the property of thecovarianceCov(aX,bY) =abCov(X,Y)jointly imply that Var⁡(aX±bY)=a2Var⁡(X)+b2Var⁡(Y)±2abCov⁡(X,Y).{\displaystyle \operatorname {Var} (aX\pm bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)\pm 2ab\,\operatorname {Cov} (X,Y).} This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, ifXandYare uncorrelated and the weight ofXis two times the weight ofY, then the weight of the variance ofXwill be four times the weight of the variance ofY. 
The expression above can be extended to a weighted sum of multiple variables: Var(∑i=1naiXi)=∑i=1nai2Var(Xi)+2∑1≤i<j≤naiajCov(Xi,Xj){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}a_{i}X_{i}\right)=\sum _{i=1}^{n}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq n}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})} If two variables X and Y are independent, the variance of their product is given by[9] Var(XY)=[E(X)]2Var(Y)+[E(Y)]2Var(X)+Var(X)Var(Y).{\displaystyle \operatorname {Var} (XY)=[\operatorname {E} (X)]^{2}\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\operatorname {Var} (X)+\operatorname {Var} (X)\operatorname {Var} (Y).} Equivalently, using the basic properties of expectation, it is given by Var(XY)=E(X2)E(Y2)−[E(X)]2[E(Y)]2.{\displaystyle \operatorname {Var} (XY)=\operatorname {E} \left(X^{2}\right)\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (X)]^{2}[\operatorname {E} (Y)]^{2}.} In general, if two variables are statistically dependent, then the variance of their product is given by: Var(XY)=E[X2Y2]−[E(XY)]2=Cov(X2,Y2)+E(X2)E(Y2)−[E(XY)]2=Cov(X2,Y2)+(Var(X)+[E(X)]2)(Var(Y)+[E(Y)]2)−[Cov(X,Y)+E(X)E(Y)]2{\displaystyle {\begin{aligned}\operatorname {Var} (XY)={}&\operatorname {E} \left[X^{2}Y^{2}\right]-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\operatorname {E} (X^{2})\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\left(\operatorname {Var} (X)+[\operatorname {E} (X)]^{2}\right)\left(\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\right)\\[5pt]&-[\operatorname {Cov} (X,Y)+\operatorname {E} (X)\operatorname {E} (Y)]^{2}\end{aligned}}} The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by Var[f(X)]≈(f′(E[X]))2Var[X]{\displaystyle \operatorname {Var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {Var} \left[X\right]} provided that f is twice differentiable and that the mean and variance of X are finite. Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest. The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum of squared deviations about the (sample) mean, divided by n as the number of samples. However, using values other than n improves the estimator in various ways.
Four common values for the denominator aren,n− 1,n+ 1, andn− 1.5:nis the simplest (the variance of the sample),n− 1 eliminates bias,[10]n+ 1 minimizesmean squared errorfor the normal distribution,[11]andn− 1.5 mostly eliminates bias inunbiased estimation of standard deviationfor the normal distribution.[12] Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is abiased estimator: it underestimates the variance by a factor of (n− 1) /n; correcting this factor, resulting in the sum of squared deviations about the sample mean divided byn-1 instead ofn, is calledBessel's correction.[10]The resulting estimator is unbiased and is called the(corrected) sample varianceorunbiased sample variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean. Secondly, the sample variance does not generally minimizemean squared errorbetween sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on theexcess kurtosisof the population (seemean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger thann− 1) and is a simple example of ashrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing byn+ 1 (instead ofn− 1 orn) minimizes mean squared error.[11]The resulting estimator is biased, however, and is known as thebiased sample variation. In general, thepopulation varianceof afinitepopulationof sizeNwith valuesxiis given byσ2=1N∑i=1N(xi−μ)2=1N∑i=1N(xi2−2μxi+μ2)=(1N∑i=1Nxi2)−2μ(1N∑i=1Nxi)+μ2=E⁡[xi2]−μ2{\displaystyle {\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}{\left(x_{i}-\mu \right)}^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\[5pt]&=\operatorname {E} [x_{i}^{2}]-\mu ^{2}\end{aligned}}} where the population mean isμ=E⁡[xi]=1N∑i=1Nxi{\textstyle \mu =\operatorname {E} [x_{i}]={\frac {1}{N}}\sum _{i=1}^{N}x_{i}}andE⁡[xi2]=(1N∑i=1Nxi2){\textstyle \operatorname {E} [x_{i}^{2}]=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)}, whereE{\textstyle \operatorname {E} }is theexpectation valueoperator. The population variance can also be computed using[13] σ2=1N2∑i<j(xi−xj)2=12N2∑i,j=1N(xi−xj)2.{\displaystyle \sigma ^{2}={\frac {1}{N^{2}}}\sum _{i<j}\left(x_{i}-x_{j}\right)^{2}={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}.} (The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) 
This is true because12N2∑i,j=1N(xi−xj)2=12N2∑i,j=1N(xi2−2xixj+xj2)=12N∑j=1N(1N∑i=1Nxi2)−(1N∑i=1Nxi)(1N∑j=1Nxj)+12N∑i=1N(1N∑j=1Nxj2)=12(σ2+μ2)−μ2+12(σ2+μ2)=σ2.{\displaystyle {\begin{aligned}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}{\left(x_{i}-x_{j}\right)}^{2}\\[5pt]={}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)+{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\[5pt]={}&\sigma ^{2}.\end{aligned}}} The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations. In many practical situations, the true variance of a population is not knowna prioriand must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on asampleof the population.[14]This is generally referred to assample varianceorempirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution. We take asample with replacementofnvaluesY1, ...,Ynfrom the population of sizeN, wheren<N, and estimate the variance on the basis of this sample.[15]Directly taking the variance of the sample data gives the average of thesquared deviations:[16] S~Y2=1n∑i=1n(Yi−Y¯)2=(1n∑i=1nYi2)−Y¯2=1n2∑i,j:i<j(Yi−Yj)2.{\displaystyle {\tilde {S}}_{Y}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}=\left({\frac {1}{n}}\sum _{i=1}^{n}Y_{i}^{2}\right)-{\overline {Y}}^{2}={\frac {1}{n^{2}}}\sum _{i,j\,:\,i<j}\left(Y_{i}-Y_{j}\right)^{2}.} (See the sectionPopulation variancefor the derivation of this formula.) Here,Y¯{\displaystyle {\overline {Y}}}denotes thesample mean:Y¯=1n∑i=1nYi.{\displaystyle {\overline {Y}}={\frac {1}{n}}\sum _{i=1}^{n}Y_{i}.} Since theYiare selected randomly, bothY¯{\displaystyle {\overline {Y}}}andS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}arerandom variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples{Yi}of sizenfrom the population. 
ForS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}this gives:E⁡[S~Y2]=E⁡[1n∑i=1n(Yi−1n∑j=1nYj)2]=1n∑i=1nE⁡[Yi2−2nYi∑j=1nYj+1n2∑j=1nYj∑k=1nYk]=1n∑i=1n(E⁡[Yi2]−2n(∑j≠iE⁡[YiYj]+E⁡[Yi2])+1n2∑j=1n∑k≠jnE⁡[YjYk]+1n2∑j=1nE⁡[Yj2])=1n∑i=1n(n−2nE⁡[Yi2]−2n∑j≠iE⁡[YiYj]+1n2∑j=1n∑k≠jnE⁡[YjYk]+1n2∑j=1nE⁡[Yj2])=1n∑i=1n[n−2n(σ2+μ2)−2n(n−1)μ2+1n2n(n−1)μ2+1n(σ2+μ2)]=n−1nσ2.{\displaystyle {\begin{aligned}\operatorname {E} [{\tilde {S}}_{Y}^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}{\left(Y_{i}-{\frac {1}{n}}\sum _{j=1}^{n}Y_{j}\right)}^{2}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\operatorname {E} \left[Y_{i}^{2}-{\frac {2}{n}}Y_{i}\sum _{j=1}^{n}Y_{j}+{\frac {1}{n^{2}}}\sum _{j=1}^{n}Y_{j}\sum _{k=1}^{n}Y_{k}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left(\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\left(\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+\operatorname {E} \left[Y_{i}^{2}\right]\right)+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left({\frac {n-2}{n}}\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left[{\frac {n-2}{n}}\left(\sigma ^{2}+\mu ^{2}\right)-{\frac {2}{n}}(n-1)\mu ^{2}+{\frac {1}{n^{2}}}n(n-1)\mu ^{2}+{\frac {1}{n}}\left(\sigma ^{2}+\mu ^{2}\right)\right]\\[5pt]&={\frac {n-1}{n}}\sigma ^{2}.\end{aligned}}} Hereσ2=E⁡[Yi2]−μ2{\textstyle \sigma ^{2}=\operatorname {E} [Y_{i}^{2}]-\mu ^{2}}derived in the section ispopulation varianceandE⁡[YiYj]=E⁡[Yi]E⁡[Yj]=μ2{\textstyle \operatorname {E} [Y_{i}Y_{j}]=\operatorname {E} [Y_{i}]\operatorname {E} [Y_{j}]=\mu ^{2}}due to independency ofYi{\textstyle Y_{i}}andYj{\textstyle Y_{j}}. HenceS~Y2{\textstyle {\tilde {S}}_{Y}^{2}}gives an estimate of the population varianceσ2{\textstyle \sigma ^{2}}that is biased by a factor ofn−1n{\textstyle {\frac {n-1}{n}}}because the expectation value ofS~Y2{\textstyle {\tilde {S}}_{Y}^{2}}is smaller than the population variance (true variance) by that factor. For this reason,S~Y2{\textstyle {\tilde {S}}_{Y}^{2}}is referred to as thebiased sample variance. Correcting for this bias yields theunbiased sample variance, denotedS2{\displaystyle S^{2}}: S2=nn−1S~Y2=nn−1[1n∑i=1n(Yi−Y¯)2]=1n−1∑i=1n(Yi−Y¯)2{\displaystyle S^{2}={\frac {n}{n-1}}{\tilde {S}}_{Y}^{2}={\frac {n}{n-1}}\left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}\right]={\frac {1}{n-1}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}} Either estimator may be simply referred to as thesample variancewhen the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution. The use of the termn− 1is calledBessel's correction, and it is also used insample covarianceand thesample standard deviation(the square root of variance). The square root is aconcave functionand thus introduces negative bias (byJensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. 
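These bias statements can be checked by a short simulation; the following sketch (assuming NumPy; the sample size and σ are arbitrary choices) shows the (n − 1)/n factor for the biased sample variance, the unbiasedness of the Bessel-corrected variance, and the residual negative bias of the corrected sample standard deviation.

```python
import numpy as np

# Sketch of the bias results above: E[biased sample variance] = (n-1)/n * sigma^2,
# Bessel's correction removes that bias, but the corrected standard deviation
# (its square root) is still biased low. Sample size and sigma are arbitrary.
rng = np.random.default_rng(0)
n, sigma, reps = 5, 3.0, 500_000
y = rng.normal(0.0, sigma, size=(reps, n))

biased_var = y.var(axis=1, ddof=0)        # divisor n
corrected_var = y.var(axis=1, ddof=1)     # divisor n - 1 (Bessel's correction)
print(biased_var.mean(), (n - 1) / n * sigma**2)   # both close to 7.2
print(corrected_var.mean(), sigma**2)              # both close to 9.0
print(np.sqrt(corrected_var).mean(), sigma)        # the first is noticeably below 3.0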
Theunbiased estimation of standard deviationis a technically involved problem, though for the normal distribution using the termn− 1.5yields an almost unbiased estimator. The unbiased sample variance is aU-statisticfor the functionf(y1,y2) = (y1−y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population. For a set of numbers {10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105}, if this set is the whole data population for some measurement, then variance is the population variance 932.743 as the sum of the squared deviations about the mean of this set, divided by 12 as the number of the set members. If the set is a sample from the whole population, then the unbiased sample variance can be calculated as 1017.538 that is the sum of the squared deviations about the mean of the sample, divided by 11 instead of 12. A function VAR.S inMicrosoft Excelgives the unbiased sample variance while VAR.P is for population variance. Being a function ofrandom variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case thatYiare independent observations from anormal distribution,Cochran's theoremshows that theunbiased sample varianceS2follows a scaledchi-squared distribution(see also:asymptotic propertiesand anelementary proof):[17](n−1)S2σ2∼χn−12{\displaystyle (n-1){\frac {S^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}} whereσ2is thepopulation variance. As a direct consequence, it follows thatE⁡(S2)=E⁡(σ2n−1χn−12)=σ2,{\displaystyle \operatorname {E} \left(S^{2}\right)=\operatorname {E} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)=\sigma ^{2},} and[18] Var⁡[S2]=Var⁡(σ2n−1χn−12)=σ4(n−1)2Var⁡(χn−12)=2σ4n−1.{\displaystyle \operatorname {Var} \left[S^{2}\right]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{{\left(n-1\right)}^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.} IfYiare independent and identically distributed, but not necessarily normally distributed, then[19] E⁡[S2]=σ2,Var⁡[S2]=σ4n(κ−1+2n−1)=1n(μ4−n−3n−1σ4),{\displaystyle \operatorname {E} \left[S^{2}\right]=\sigma ^{2},\quad \operatorname {Var} \left[S^{2}\right]={\frac {\sigma ^{4}}{n}}\left(\kappa -1+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),} whereκis thekurtosisof the distribution andμ4is the fourthcentral moment. If the conditions of thelaw of large numbershold for the squared observations,S2is aconsistent estimatorofσ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[20][21][22] Samuelson's inequalityis a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[23]Values must lie within the limitsy¯±σY(n−1)1/2.{\displaystyle {\bar {y}}\pm \sigma _{Y}(n-1)^{1/2}.} It has been shown[24]that for a sample {yi} of positive real numbers, σy2≤2ymax(A−H),{\displaystyle \sigma _{y}^{2}\leq 2y_{\max }(A-H),} whereymaxis the maximum of the sample,Ais the arithmetic mean,His theharmonic meanof the sample andσy2{\displaystyle \sigma _{y}^{2}}is the (biased) variance of the sample. 
This bound has been improved, and it is known that variance is bounded by σy2≤ymax(A−H)(ymax−A)ymax−H,σy2≥ymin(A−H)(A−ymin)H−ymin,{\displaystyle {\begin{aligned}\sigma _{y}^{2}&\leq {\frac {y_{\max }(A-H)(y_{\max }-A)}{y_{\max }-H}},\\[1ex]\sigma _{y}^{2}&\geq {\frac {y_{\min }(A-H)(A-y_{\min })}{H-y_{\min }}},\end{aligned}}} whereyminis the minimum of the sample.[25] TheF-test of equality of variancesand thechi square testsare adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult. Several non parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, theCapon test,Mood test, theKlotz testand theSukhatme test. The Sukhatme test applies to two variances and requires that bothmediansbe known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal. TheLehmann testis a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include theBox test, theBox–Anderson testand theMoses test. Resampling methods, which include thebootstrapand thejackknife, may be used to test the equality of variances. The variance of a probability distribution is analogous to themoment of inertiainclassical mechanicsof a corresponding mass distribution along a line, with respect to rotation about its center of mass.[26]It is because of this analogy that such things as the variance are calledmomentsofprobability distributions.[26]The covariance matrix is related to themoment of inertia tensorfor multivariate distributions. The moment of inertia of a cloud ofnpoints with a covariance matrix ofΣ{\displaystyle \Sigma }is given by[citation needed]I=n(13×3tr⁡(Σ)−Σ).{\displaystyle I=n\left(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma \right).} This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to thexaxis and distributed along it. The covariance matrix might look likeΣ=[100000.10000.1].{\displaystyle \Sigma ={\begin{bmatrix}10&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix}}.} That is, there is the most variance in thexdirection. Physicists would consider this to have a low momentaboutthexaxis so the moment-of-inertia tensor isI=n[0.200010.100010.1].{\displaystyle I=n{\begin{bmatrix}0.2&0&0\\0&10.1&0\\0&0&10.1\end{bmatrix}}.} Thesemivarianceis calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:Semivariance=1n∑i:xi<μ(xi−μ)2{\displaystyle {\text{Semivariance}}={\frac {1}{n}}\sum _{i:x_{i}<\mu }{\left(x_{i}-\mu \right)}^{2}}It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.[27] For inequalities associated with the semivariance, seeChebyshev's inequality § Semivariances. 
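A minimal sketch of the semivariance as defined above (NumPy assumed; the small right-skewed data set is an arbitrary example):

```python
import numpy as np

# Semivariance: average squared deviation of only those observations that fall
# below the mean, with denominator n (the total count), as in the formula above.
def semivariance(x):
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    below = x[x < mu]
    return ((below - mu) ** 2).sum() / len(x)

x = np.array([1.0, 2.0, 2.5, 3.0, 10.0])   # a small right-skewed example
print(np.var(x), semivariance(x))          # for skewed data the two can differ markedly
```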
The termvariancewas first introduced byRonald Fisherin his 1918 paperThe Correlation Between Relatives on the Supposition of Mendelian Inheritance:[28] The great body of available statistics show us that the deviations of ahuman measurementfrom its mean follow very closely theNormal Law of Errors, and, therefore, that the variability may be uniformly measured by thestandard deviationcorresponding to thesquare rootof themean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviationsσ1{\displaystyle \sigma _{1}}andσ2{\displaystyle \sigma _{2}}, it is found that the distribution, when both causes act together, has a standard deviationσ12+σ22{\displaystyle {\sqrt {\sigma _{1}^{2}+\sigma _{2}^{2}}}}. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance... Ifx{\displaystyle x}is a scalarcomplex-valued random variable, with values inC,{\displaystyle \mathbb {C} ,}then its variance isE⁡[(x−μ)(x−μ)∗],{\displaystyle \operatorname {E} \left[(x-\mu )(x-\mu )^{*}\right],}wherex∗{\displaystyle x^{*}}is thecomplex conjugateofx.{\displaystyle x.}This variance is a real scalar. IfX{\displaystyle X}is avector-valued random variable, with values inRn,{\displaystyle \mathbb {R} ^{n},}and thought of as a column vector, then a natural generalization of variance isE⁡[(X−μ)(X−μ)T],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\mathsf {T}}\right],}whereμ=E⁡(X){\displaystyle \mu =\operatorname {E} (X)}andXT{\displaystyle X^{\mathsf {T}}}is the transpose ofX, and so is a row vector. The result is apositive semi-definite square matrix, commonly referred to as thevariance-covariance matrix(or simply as thecovariance matrix). IfX{\displaystyle X}is a vector- and complex-valued random variable, with values inCn,{\displaystyle \mathbb {C} ^{n},}then thecovariance matrix isE⁡[(X−μ)(X−μ)†],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\dagger }\right],}whereX†{\displaystyle X^{\dagger }}is theconjugate transposeofX.{\displaystyle X.}[citation needed]This matrix is also positive semi-definite and square. Another generalization of variance for vector-valued random variablesX{\displaystyle X}, which results in a scalar value rather than in a matrix, is thegeneralized variancedet(C){\displaystyle \det(C)}, thedeterminantof the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[29] A different generalization is obtained by considering the equation for the scalar variance,Var⁡(X)=E⁡[(X−μ)2]{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right]}, and reinterpreting(X−μ)2{\displaystyle (X-\mu )^{2}}as the squaredEuclidean distancebetween the random variable and its mean, or, simply as the scalar product of the vectorX−μ{\displaystyle X-\mu }with itself. This results inE⁡[(X−μ)T(X−μ)]=tr⁡(C),{\displaystyle \operatorname {E} \left[(X-\mu )^{\mathsf {T}}(X-\mu )\right]=\operatorname {tr} (C),}which is thetraceof the covariance matrix.
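The vector-valued generalizations can be illustrated numerically: the generalized variance is the determinant of the covariance matrix, while the trace of the covariance matrix equals the expected squared distance of X from its mean. The sketch below assumes NumPy; the three-dimensional correlated vector is an arbitrary example.

```python
import numpy as np

# Sketch of the vector-valued generalizations: the covariance matrix C,
# the generalized variance det(C), and tr(C) = E[(X - mu)^T (X - mu)].
rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3))
X = rng.normal(size=(100_000, 3)) @ A.T        # correlated 3-vector, one row per draw

C = np.cov(X, rowvar=False, ddof=0)
mu = X.mean(axis=0)
print(np.linalg.det(C))                        # generalized variance
print(np.trace(C))                             # trace of the covariance matrix
print((((X - mu) ** 2).sum(axis=1)).mean())    # E[(X - mu)^T (X - mu)] -- equals the trace
```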
https://en.wikipedia.org/wiki/Variance#Variance_decomposition
Instatistics, theLehmann–Scheffé theoremis a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1]The theorem states that anyestimatorthat isunbiasedfor a given unknown quantity and that depends on the data only through acomplete,sufficient statisticis the uniquebest unbiased estimatorof that quantity. The Lehmann–Scheffé theorem is named afterErich Leo LehmannandHenry Scheffé, given their two early papers.[2][3] IfT{\displaystyle T}is a complete sufficient statistic forθ{\displaystyle \theta }andE⁡[g(T)]=τ(θ){\displaystyle \operatorname {E} [g(T)]=\tau (\theta )}theng(T){\displaystyle g(T)}is theuniformly minimum-variance unbiased estimator(UMVUE) ofτ(θ){\displaystyle \tau (\theta )}. LetX→=X1,X2,…,Xn{\displaystyle {\vec {X}}=X_{1},X_{2},\dots ,X_{n}}be a random sample from a distribution that has p.d.f (or p.m.f in the discrete case)f(x:θ){\displaystyle f(x:\theta )}whereθ∈Ω{\displaystyle \theta \in \Omega }is a parameter in the parameter space. SupposeY=u(X→){\displaystyle Y=u({\vec {X}})}is a sufficient statistic forθ, and let{fY(y:θ):θ∈Ω}{\displaystyle \{f_{Y}(y:\theta ):\theta \in \Omega \}}be a complete family. Ifφ:E⁡[φ(Y)]=θ{\displaystyle \varphi :\operatorname {E} [\varphi (Y)]=\theta }thenφ(Y){\displaystyle \varphi (Y)}is the unique MVUE ofθ. By theRao–Blackwell theorem, ifZ{\displaystyle Z}is an unbiased estimator ofθthenφ(Y):=E⁡[Z∣Y]{\displaystyle \varphi (Y):=\operatorname {E} [Z\mid Y]}defines an unbiased estimator ofθwith the property that its variance is not greater than that ofZ{\displaystyle Z}. Now we show that this function is unique. SupposeW{\displaystyle W}is another candidate MVUE estimator ofθ. Then againψ(Y):=E⁡[W∣Y]{\displaystyle \psi (Y):=\operatorname {E} [W\mid Y]}defines an unbiased estimator ofθwith the property that its variance is not greater than that ofW{\displaystyle W}. Then Since{fY(y:θ):θ∈Ω}{\displaystyle \{f_{Y}(y:\theta ):\theta \in \Omega \}}is a complete family and therefore the functionφ{\displaystyle \varphi }is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude thatφ(Y){\displaystyle \varphi (Y)}is the MVUE. An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that isnot complete, was provided by Galili and Meilijson in 2016.[4]LetX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}be a random sample from a scale-uniform distributionX∼U((1−k)θ,(1+k)θ),{\displaystyle X\sim U((1-k)\theta ,(1+k)\theta ),}with unknown meanE⁡[X]=θ{\displaystyle \operatorname {E} [X]=\theta }and known design parameterk∈(0,1){\displaystyle k\in (0,1)}. In the search for "best" possible unbiased estimators forθ{\displaystyle \theta }, it is natural to considerX1{\displaystyle X_{1}}as an initial (crude) unbiased estimator forθ{\displaystyle \theta }and then try to improve it. SinceX1{\displaystyle X_{1}}is not a function ofT=(X(1),X(n)){\displaystyle T=\left(X_{(1)},X_{(n)}\right)}, the minimal sufficient statistic forθ{\displaystyle \theta }(whereX(1)=miniXi{\displaystyle X_{(1)}=\min _{i}X_{i}}andX(n)=maxiXi{\displaystyle X_{(n)}=\max _{i}X_{i}}), it may be improved using the Rao–Blackwell theorem as follows: However, the following unbiased estimator can be shown to have lower variance: And in fact, it could be even further improved when using the following estimator: The model is ascale model. Optimalequivariant estimatorscan then be derived forloss functionsthat are invariant.[5]
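The mechanism of the theorem can be illustrated on a simple model not discussed above: a hypothetical Bernoulli(p) sample, where T = ΣXᵢ is complete and sufficient, X₁ is a crude unbiased estimator, and Rao–Blackwellization yields the sample mean, which Lehmann–Scheffé identifies as the UMVUE of p. This is only an illustrative sketch (assuming NumPy), not the scale-uniform example of Galili and Meilijson.

```python
import numpy as np

# Illustrative Bernoulli sketch: X1 is a crude unbiased estimator of p;
# conditioning on the complete sufficient statistic T = sum(X) gives
# E[X1 | T] = T/n, the sample mean, which is the UMVUE by Lehmann-Scheffe.
rng = np.random.default_rng(8)
n, p, reps = 10, 0.3, 200_000
x = rng.binomial(1, p, size=(reps, n))

crude = x[:, 0]                   # unbiased but high variance
rb = x.mean(axis=1)               # E[X1 | T] = T/n
print(crude.mean(), rb.mean())    # both unbiased: close to p = 0.3
print(crude.var(), rb.var())      # variance drops from about p(1-p) to p(1-p)/n
```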
https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem
Instatistical theory, aU-statisticis a class of statistics defined as the average over the application of a given function applied to all tuples of a fixed size. The letter "U" stands for unbiased.[citation needed]In elementary statistics, U-statistics arise naturally in producingminimum-variance unbiased estimators. The theory of U-statistics allows aminimum-variance unbiased estimatorto be derived from eachunbiased estimatorof anestimable parameter(alternatively,statisticalfunctional) for large classes ofprobability distributions.[1][2]An estimable parameter is ameasurable functionof the population'scumulative probability distribution: For example, for every probability distribution, the population median is an estimable parameter. The theory of U-statistics applies to general classes of probability distributions. Many statistics originally derived for particular parametric families have been recognized as U-statistics for general distributions. Innon-parametric statistics, the theory of U-statistics is used to establish for statistical procedures (such as estimators and tests) and estimators relating to theasymptotic normalityand to the variance (in finite samples) of such quantities.[3]The theory has been used to study more general statistics as well asstochastic processes, such asrandom graphs.[4][5][6] Suppose that a problem involvesindependent and identically-distributed random variablesand that estimation of a certain parameter is required. Suppose that a simple unbiased estimate can be constructed based on only a few observations: this defines the basic estimator based on a given number of observations. For example, a single observation is itself an unbiased estimate of the mean and a pair of observations can be used to derive an unbiased estimate of the variance. The U-statistic based on this estimator is defined as the average (across all combinatorial selections of the given size from the full set of observations) of the basic estimator applied to the sub-samples. Pranab K. Sen(1992) provides a review of the paper byWassily Hoeffding(1948), which introduced U-statistics and set out the theory relating to them, and in doing so Sen outlines the importance U-statistics have in statistical theory. Sen says,[7]“The impact of Hoeffding (1948) is overwhelming at the present time and is very likely to continue in the years to come.” Note that the theory of U-statistics is not limited to[8]the case ofindependent and identically-distributed random variablesor to scalar random-variables.[9] The term U-statistic, due to Hoeffding (1948), is defined as follows. LetK{\displaystyle K}be either the real or complex numbers, and letf:(Kd)r→K{\displaystyle f\colon (K^{d})^{r}\to K}be aK{\displaystyle K}-valued function ofr{\displaystyle r}d{\displaystyle d}-dimensional variables. For eachn≥r{\displaystyle n\geq r}the associated U-statisticfn:(Kd)n→K{\displaystyle f_{n}\colon (K^{d})^{n}\to K}is defined to be the average of the valuesf(xi1,…,xir){\displaystyle f(x_{i_{1}},\dotsc ,x_{i_{r}})}over the setIr,n{\displaystyle I_{r,n}}ofr{\displaystyle r}-tuples of indices from{1,2,…,n}{\displaystyle \{1,2,\dotsc ,n\}}with distinct entries. Formally, In particular, iff{\displaystyle f}is symmetric the above is simplified to where nowJr,n{\displaystyle J_{r,n}}denotes the subset ofIr,n{\displaystyle I_{r,n}}ofincreasingtuples. Each U-statisticfn{\displaystyle f_{n}}is necessarily asymmetric function. 
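The definition can be implemented directly by averaging a kernel over index tuples; the following sketch (NumPy assumed, with an arbitrary data vector) uses the symmetric simplification over increasing tuples. The kernel chosen here anticipates the sample-variance example given below.

```python
import itertools
import numpy as np

# Sketch of the definition above: the U-statistic averages a kernel f over all
# r-tuples of distinct indices; for a symmetric kernel, combinations suffice.
def u_statistic(f, x, r, symmetric=True):
    tuples = (itertools.combinations(range(len(x)), r) if symmetric
              else itertools.permutations(range(len(x)), r))
    vals = [f(*(x[i] for i in t)) for t in tuples]
    return np.mean(vals)

x = np.array([3.1, 0.5, 2.2, 4.8, 1.7])
# kernel f(x1, x2) = (x1 - x2)^2 / 2 yields the unbiased sample variance
print(u_statistic(lambda a, b: (a - b) ** 2 / 2, x, r=2))
print(np.var(x, ddof=1))          # same value
```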
U-statistics are very natural in statistical work, particularly in Hoeffding's context ofindependent and identically distributed random variables, or more generally forexchangeable sequences, such as insimple random samplingfrom a finite population, where the defining property is termed ‘inheritance on the average’. Fisher'sk-statistics and Tukey'spolykaysare examples ofhomogeneous polynomialU-statistics (Fisher, 1929; Tukey, 1950). For a simple random sampleφof sizentaken from a population of sizeN, the U-statistic has the property that the average over sample valuesƒn(xφ) is exactly equal to the population valueƒN(x).[clarification needed] Some examples: Iff(x)=x{\displaystyle f(x)=x}the U-statisticfn(x)=x¯n=(x1+⋯+xn)/n{\displaystyle f_{n}(x)={\bar {x}}_{n}=(x_{1}+\cdots +x_{n})/n}is the sample mean. Iff(x1,x2)=|x1−x2|{\displaystyle f(x_{1},x_{2})=|x_{1}-x_{2}|}, the U-statistic is the mean pairwise deviationfn(x1,…,xn)=2/(n(n−1))∑i>j|xi−xj|{\displaystyle f_{n}(x_{1},\ldots ,x_{n})=2/(n(n-1))\sum _{i>j}|x_{i}-x_{j}|}, defined forn≥2{\displaystyle n\geq 2}. Iff(x1,x2)=(x1−x2)2/2{\displaystyle f(x_{1},x_{2})=(x_{1}-x_{2})^{2}/2}, the U-statistic is thesample variancefn(x)=∑(xi−x¯n)2/(n−1){\displaystyle f_{n}(x)=\sum (x_{i}-{\bar {x}}_{n})^{2}/(n-1)}with divisorn−1{\displaystyle n-1}, defined forn≥2{\displaystyle n\geq 2}. The thirdk{\displaystyle k}-statistick3,n(x)=∑(xi−x¯n)3n/((n−1)(n−2)){\displaystyle k_{3,n}(x)=\sum (x_{i}-{\bar {x}}_{n})^{3}n/((n-1)(n-2))}, the sampleskewnessdefined forn≥3{\displaystyle n\geq 3}, is a U-statistic. The following case highlights an important point. Iff(x1,x2,x3){\displaystyle f(x_{1},x_{2},x_{3})}is themedianof three values,fn(x1,…,xn){\displaystyle f_{n}(x_{1},\ldots ,x_{n})}is not the median ofn{\displaystyle n}values. However, it is a minimum variance unbiased estimate of the expected value of the median of three values, not the median of the population. Similar estimates play a central role where the parameters of a family ofprobability distributionsare being estimated by probability weighted moments orL-moments.
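Two of the examples above can be checked with the same construction; the short data vector is arbitrary, and the helper repeats the combination-averaging sketch from above so that the snippet is self-contained.

```python
import itertools
import numpy as np

# Checking two examples: f(x) = x gives the sample mean, and
# f(x1, x2) = |x1 - x2| gives the mean pairwise deviation.
def u_statistic(f, x, r):
    vals = [f(*(x[i] for i in c)) for c in itertools.combinations(range(len(x)), r)]
    return np.mean(vals)

x = np.array([3.1, 0.5, 2.2, 4.8, 1.7])
print(u_statistic(lambda a: a, x, r=1), x.mean())

n = len(x)
direct = 2 / (n * (n - 1)) * sum(abs(x[i] - x[j]) for i in range(n) for j in range(i))
print(u_statistic(lambda a, b: abs(a - b), x, r=2), direct)
```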
https://en.wikipedia.org/wiki/U-statistic
Theanalysis of competing hypotheses(ACH) is amethodologyfor evaluating multiple competing hypotheses for observed data. It was developed byRichards (Dick) J. Heuer, Jr., a 45-year veteran of theCentral Intelligence Agency, in the 1970s for use by the Agency.[1]ACH is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. ACH aims to help an analyst overcome, or at least minimize, some of the cognitive limitations that make prescientintelligence analysisso difficult to achieve.[1] ACH was a step forward inintelligence analysis methodology, but it was first described in relatively informal terms. Producing the best available information fromuncertain dataremains the goal of researchers, tool-builders, and analysts in industry, academia and government. Their domains includedata mining,cognitive psychologyandvisualization,probabilityandstatistics, etc.Abductive reasoningis an earlier concept with similarities to ACH. Heuer outlines the ACH process in considerable depth in his book,Psychology of Intelligence Analysis.[1]It consists of the following steps: Some benefits of doing an ACH matrix are: Weaknesses of doing an ACH matrix include: Especially in intelligence, both governmental and business, analysts must always be aware that the opponent(s) is intelligent and may be generating information intended todeceive.[3][4]Since deception often is the result of a cognitive trap, Elsaesser and Stech use state-based hierarchical plan recognition (seeabductive reasoning) to generate causal explanations of observations. The resulting hypotheses are converted to a dynamicBayesian networkandvalue of informationanalysis is employed to isolate assumptions implicit in the evaluation of paths in, or conclusions of, particular hypotheses. As evidence in the form of observations of states or assumptions is observed, they can become the subject of separate validation. Should an assumption or necessary state be negated, hypotheses depending on it are rejected. This is a form ofroot cause analysis. According to social constructivist critics, ACH also fails to stress sufficiently (or to address as a method) the problematic nature of the initial formation of the hypotheses used to create its grid. There is considerable evidence, for example, that in addition to any bureaucratic, psychological, or political biases that may affect hypothesis generation, there are also factors of culture and identity at work. These socially constructed factors may restrict or pre-screen which hypotheses end up being considered, and then reinforceconfirmation biasin those selected.[5] Philosopher andargumentation theoristTim van Gelderhas made the following criticisms:[6] Van Gelder proposedhypothesis mapping(similar toargument mapping) as an alternative to ACH.[7][8] The structured analysis of competing hypotheses offers analysts an improvement over the limitations of the original ACH.[discuss][9]The SACH maximizes the possible hypotheses by allowing the analyst to split one hypothesis into two complex ones. For example, two tested hypotheses could be that Iraq has WMD or Iraq does not have WMD. If the evidence showed that it is more likely there are WMDs in Iraq then two new hypotheses could be formulated: WMD are in Baghdad or WMD are in Mosul. Or perhaps, the analyst may need to know what type of WMD Iraq has; the new hypotheses could be that Iraq has biological WMD, Iraq has chemical WMD and Iraq has nuclear WMD. 
By giving the ACH structure, the analyst is able to give a nuanced estimate.[10] One method, by Valtorta and colleagues, uses probabilistic methods and adds Bayesian analysis to ACH.[11] A generalization of this concept to a distributed community of analysts led to the development of CACHE (the Collaborative ACH Environment),[12] which introduced the concept of a Bayes (or Bayesian) community. The work by Akram and Wang applies paradigms from graph theory.[13] Other work focuses less on probabilistic methods and more on cognitive and visualization extensions to ACH, as discussed by Madsen and Hicks.[14] DECIDE, discussed under automation, is visualization-oriented.[15] Work by Pope and Jøsang uses subjective logic, a formal mathematical methodology that explicitly deals with uncertainty.[16] This methodology forms the basis of the Sheba technology that is used in Veriluma's intelligence assessment software. A few online and downloadable software tools help automate the ACH process. These programs leave a visual trail of evidence and allow the analyst to weigh evidence.
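The core bookkeeping behind such tools can be sketched very simply. The hypotheses, evidence labels, and scoring scheme below are hypothetical assumptions for illustration only, not Heuer's exact weighting; the sketch only shows the idea of rating each item of evidence against every hypothesis and focusing on inconsistency counts.

```python
# Illustrative ACH-style matrix: each piece of evidence is rated against each
# hypothesis as consistent (+1), neutral (0) or inconsistent (-1); ACH focuses
# on how much evidence disconfirms each hypothesis.
hypotheses = ["H1", "H2", "H3"]
ratings = {                      # evidence label -> rating per hypothesis (hypothetical)
    "E1": [+1, -1,  0],
    "E2": [ 0, -1, -1],
    "E3": [+1, +1, -1],
    "E4": [-1,  0, +1],
}

inconsistencies = {h: 0 for h in hypotheses}
for scores in ratings.values():
    for h, s in zip(hypotheses, scores):
        if s < 0:
            inconsistencies[h] += 1

# The hypothesis with the fewest inconsistencies is the least disconfirmed, not "proven";
# hypotheses with many inconsistencies are candidates for rejection.
print(sorted(inconsistencies.items(), key=lambda kv: kv[1]))
```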
https://en.wikipedia.org/wiki/Analysis_of_competing_hypotheses
Instatisticsandmachine learning, thebias–variance tradeoffdescribes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as we increase the number of tunable parameters in a model, it becomes more flexible, and can better fit a training data set. It is said to have lower error, orbias. However, for more flexible models, there will tend to be greatervarianceto the model fit each time we take a set ofsamplesto create a new training data set. It is said that there is greatervariancein the model'sestimatedparameters. Thebias–variance dilemmaorbias–variance problemis the conflict in trying to simultaneously minimize these two sources oferrorthat preventsupervised learningalgorithms from generalizing beyond theirtraining set:[1][2] Thebias–variance decompositionis a way of analyzing a learning algorithm'sexpectedgeneralization errorwith respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called theirreducible error, resulting from noise in the problem itself. The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants tochoose a modelthat both accurately captures the regularities in its training data, but alsogeneralizeswell to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data. It is an often madefallacy[3][4]to assume that complex models must have high variance. High variance models are "complex" in some sense, but the reverse needs not be true.[5]In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by an example adapted from:[6]The modelfa,b(x)=asin⁡(bx){\displaystyle f_{a,b}(x)=a\sin(bx)}has only two parameters (a,b{\displaystyle a,b}) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance. An analogy can be made to the relationship betweenaccuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from onlylocalinformation. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words,test datamay not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting). 
This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can besmoothedvia explicitregularization, such asshrinkage. Suppose that we have a training set consisting of a set of pointsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and real-valued labelsyi{\displaystyle y_{i}}associated with the pointsxi{\displaystyle x_{i}}. We assume that the data is generated by a functionf(x){\displaystyle f(x)}such asy=f(x)+ε{\displaystyle y=f(x)+\varepsilon }, where the noise,ε{\displaystyle \varepsilon }, has zero mean and varianceσ2{\displaystyle \sigma ^{2}}. That is,yi=f(xi)+εi{\displaystyle y_{i}=f(x_{i})+\varepsilon _{i}}, whereεi{\displaystyle \varepsilon _{i}}is a noise sample. We want to find a functionf^(x;D){\displaystyle {\hat {f}}(x;D)}, that approximates the true functionf(x){\displaystyle f(x)}as well as possible, by means of some learning algorithm based on a training dataset (sample)D={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}. We make "as well as possible" precise by measuring themean squared errorbetweeny{\displaystyle y}andf^(x;D){\displaystyle {\hat {f}}(x;D)}: we want(y−f^(x;D))2{\displaystyle (y-{\hat {f}}(x;D))^{2}}to be minimal, both forx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and for points outside of our sample. Of course, we cannot hope to do so perfectly, since theyi{\displaystyle y_{i}}contain noiseε{\displaystyle \varepsilon }; this means we must be prepared to accept anirreducible errorin any function we come up with. Finding anf^{\displaystyle {\hat {f}}}that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever functionf^{\displaystyle {\hat {f}}}we select, we can decompose itsexpectederror on an unseen samplex{\displaystyle x}(i.e. conditional to x) as follows:[7]: 34[8]: 223 where and and The expectation ranges over different choices of the training setD={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}, all sampled from the same joint distributionP(x,y){\displaystyle P(x,y)}which can for example be done viabootstrapping. The three terms represent: Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.[7]: 34 The more complex the modelf^(x){\displaystyle {\hat {f}}(x)}is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger. The derivation of the bias–variance decomposition for squared error proceeds as follows.[9][10]For convenience, we drop theD{\displaystyle D}subscript in the following lines, such thatf^(x;D)=f^(x){\displaystyle {\hat {f}}(x;D)={\hat {f}}(x)}. 
Let us write the mean-squared error of our model: We can show that the second term of this equation is null: E[(f(x)−f^(x))ε]=E[f(x)−f^(x)]E[ε]sinceεis independent fromx=0sinceE[ε]=0{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}\varepsilon {\Big ]}&=\mathbb {E} {\big [}f(x)-{\hat {f}}(x){\big ]}\ \mathbb {E} {\big [}\varepsilon {\big ]}&&{\text{since }}\varepsilon {\text{ is independent from }}x\\&=0&&{\text{since }}\mathbb {E} {\big [}\varepsilon {\big ]}=0\end{aligned}}} Moreover, the third term of this equation is nothing butσ2{\displaystyle \sigma ^{2}}, the variance ofε{\displaystyle \varepsilon }. Let us now expand the remaining term: E[(f(x)−f^(x))2]=E[(f(x)−E[f^(x)]+E[f^(x)]−f^(x))2]=E[(f(x)−E[f^(x)])2]+2E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]+E[(E[f^(x)]−f^(x))2]{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}^{2}{\Big ]}&=\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\\&={\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}\,+\,2\ {\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}\,+\,\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\end{aligned}}} We show that: E[(f(x)−E[f^(x)])2]=E[f(x)2]−2E[f(x)E[f^(x)]]+E[E[f^(x)]2]=f(x)2−2f(x)E[f^(x)]+E[f^(x)]2=(f(x)−E[f^(x)])2{\displaystyle {\begin{aligned}{\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}&=\mathbb {E} {\big [}f(x)^{2}{\big ]}\,-\,2\ \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}\,+\,\mathbb {E} {\Big [}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}{\Big ]}\\&=f(x)^{2}\,-\,2\ f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}\end{aligned}}} This last series of equalities comes from the fact thatf(x){\displaystyle f(x)}is not a random variable, but a fixed, deterministic function ofx{\displaystyle x}. Therefore,E[f(x)]=f(x){\displaystyle \mathbb {E} {\big [}f(x){\big ]}=f(x)}. SimilarlyE[f(x)2]=f(x)2{\displaystyle \mathbb {E} {\big [}f(x)^{2}{\big ]}=f(x)^{2}}, andE[f(x)E[f^(x)]]=f(x)E[E[f^(x)]]=f(x)E[f^(x)]{\displaystyle \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\Big [}\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}}. 
Using the same reasoning, we can expand the second term and show that it is null: E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]=E[f(x)E[f^(x)]−f(x)f^(x)−E[f^(x)]2+E[f^(x)]f^(x)]=f(x)E[f^(x)]−f(x)E[f^(x)]−E[f^(x)]2+E[f^(x)]2=0{\displaystyle {\begin{aligned}{\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}&=\mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x){\hat {f}}(x)\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}\ {\hat {f}}(x){\Big ]}\\&=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&=0\end{aligned}}} Eventually, we plug our derivations back into the original equation, and identify each term: MSE=(f(x)−E[f^(x)])2+E[(E[f^(x)]−f^(x))2]+σ2=Bias⁡(f^(x))2+Var⁡[f^(x)]+σ2{\displaystyle {\begin{aligned}{\text{MSE}}&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}+\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}+\sigma ^{2}\\&=\operatorname {Bias} {\big (}{\hat {f}}(x){\big )}^{2}\,+\,\operatorname {Var} {\big [}{\hat {f}}(x){\big ]}\,+\,\sigma ^{2}\end{aligned}}} Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value overx∼P{\displaystyle x\sim P}: Dimensionality reductionandfeature selectioncan decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example, One way of resolving the trade-off is to usemixture modelsandensemble learning.[14][15]For example,boostingcombines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, whilebaggingcombines "strong" learners in a way that reduces their variance. Model validationmethods such ascross-validation (statistics)can be used to tune models so as to optimize the trade-off. In the case ofk-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, aclosed-form expressionexists that relates the bias–variance decomposition to the parameterk:[8]: 37, 223 whereN1(x),…,Nk(x){\displaystyle N_{1}(x),\dots ,N_{k}(x)}are theknearest neighbors ofxin the training set. The bias (first term) is a monotone rising function ofk, while the variance (second term) drops off askis increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.[12] The bias–variance decomposition forms the conceptual basis for regressionregularizationmethods such asLASSOandridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to theordinary least squares (OLS)solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance. The bias–variance decomposition was originally formulated for least-squares regression. 
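The decomposition derived above can be verified empirically at a single test point by repeatedly drawing training sets, refitting a model, and comparing the average squared test error with bias² + variance + σ². In the sketch below (NumPy assumed) the true function, the noise level, and the cubic polynomial model are all illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of the decomposition at a single test point x0: the expected
# squared error over training sets splits into bias^2 + variance + sigma^2.
rng = np.random.default_rng(9)
f = np.sin
sigma, n_train, degree, reps = 0.3, 30, 3, 5_000
x0 = 1.0

preds = np.empty(reps)
errors = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 2 * np.pi, n_train)          # a fresh training set D each round
    y = f(x) + rng.normal(0, sigma, n_train)
    coef = np.polyfit(x, y, degree)                 # the learned model f_hat(.; D)
    preds[r] = np.polyval(coef, x0)
    y0 = f(x0) + rng.normal(0, sigma)               # an independent noisy test label at x0
    errors[r] = (y0 - preds[r]) ** 2

bias2 = (preds.mean() - f(x0)) ** 2
variance = preds.var()
print(errors.mean())                    # expected test error at x0
print(bias2 + variance + sigma**2)      # bias^2 + variance + irreducible error
```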
For the case ofclassificationunder the0-1 loss(misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label.[16][17]Alternatively, if the classification problem can be phrased asprobabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form. It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.[18] Even though the bias–variance decomposition does not directly apply inreinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.[19] While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such asMarkov chain Monte Carloare only asymptotically unbiased, at best.[20]Convergence diagnostics can be used to control bias viaburn-inremoval, but due to a limited computational budget, a bias–variance trade-off arises,[21]leading to a wide-range of approaches, in which a controlled bias is accepted, if this allows to dramatically reduce the variance, and hence the overall estimation error.[22][23][24] While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context ofhuman cognition, most notably byGerd Gigerenzerand co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly-characterized training-sets provided by experience by adopting high-bias/low variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.[25] Gemanet al.[12]argue that the bias–variance dilemma implies that abilities such as genericobject recognitioncannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
https://en.wikipedia.org/wiki/Bias-variance_dilemma
Inmachine learning,hyperparameter optimization[1]or tuning is the problem of choosing a set of optimalhyperparametersfor a learning algorithm. A hyperparameter is aparameterwhose value is used to control the learning process, which must be configured before the process starts.[2][3] Hyperparameter optimization determines the set of hyperparameters that yields an optimal model which minimizes a predefinedloss functionon a givendata set.[4]The objective function takes a set of hyperparameters and returns the associated loss.[4]Cross-validationis often used to estimate this generalization performance, and therefore choose the set of values for hyperparameters that maximize it.[5] The traditional method for hyperparameter optimization has beengrid search, or aparameter sweep, which is simply anexhaustive searchingthrough a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured bycross-validationon the training set[6]or evaluation on a hold-out validation set.[7] Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search. For example, a typical soft-marginSVMclassifierequipped with anRBF kernelhas at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constantCand a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each, say Grid search then trains an SVM with each pair (C, γ) in theCartesian productof these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure. Grid search suffers from thecurse of dimensionality, but is oftenembarrassingly parallelbecause the hyperparameter settings it evaluates are typically independent of each other.[5] Random Search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. A benefit over grid search is that random search can explore many more values than grid search could for continuous hyperparameters. It can outperform Grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm.[5]In this case, the optimization problem is said to have a low intrinsic dimensionality.[8]Random Search is alsoembarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. Despite its simplicity, random search remains one of the important base-lines against which to compare the performance of new hyperparameter optimization methods. Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. 
By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected close to the optimum). In practice, Bayesian optimization has been shown[9][10][11][12][13]to obtain better results in fewer evaluations compared to grid search and random search, due to the ability to reason about the quality of experiments before they are run. For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters usinggradient descent. The first usage of these techniques was focused on neural networks.[14]Since then, these methods have been extended to other models such assupport vector machines[15]or logistic regression.[16] A different approach in order to obtain a gradient with respect to hyperparameters consists in differentiating the steps of an iterative optimization algorithm usingautomatic differentiation.[17][18][19][20]A more recent work along this direction uses theimplicit function theoremto calculate hypergradients and proposes a stable approximation of the inverse Hessian. The method scales to millions of hyperparameters and requires constant memory.[21] In a different approach,[22]a hypernetwork is trained to approximate the best response function. One of the advantages of this method is that it can handle discrete hyperparameters as well. Self-tuning networks[23]offer a memory efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN[24]has improved this method further by a slight reparameterization of the hypernetwork which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights. Apart from hypernetwork approaches, gradient-based methods can be used to optimize discrete hyperparameters also by adopting a continuous relaxation of the parameters.[25]Such methods have been extensively used for the optimization of architecture hyperparameters inneural architecture search. Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization usesevolutionary algorithmsto search the space of hyperparameters for a given algorithm.[10]Evolutionary hyperparameter optimization follows aprocessinspired by the biological concept ofevolution: Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms,[10]automated machine learning, typical neural network[26]anddeep neural networkarchitecture search,[27][28]as well as training of the weights in deep neural networks.[29] Population Based Training (PBT) learns both hyperparameter values and network weights. Multiple learning processes operate independently, using different hyperparameters. As with evolutionary methods, poorly performing models are iteratively replaced with models that adopt modified hyperparameter values and weights based on the better performers. This replacement model warm starting is the primary differentiator between PBT and other evolutionary methods. 
PBT thus allows the hyperparameters to evolve and eliminates the need for manual hypertuning. The process makes no assumptions regarding model architecture, loss functions or training procedures. PBT and its variants are adaptive methods: they update hyperparameters during the training of the models. On the contrary, non-adaptive methods have the sub-optimal strategy to assign a constant set of hyperparameters for the whole training.[30] A class of early stopping-based hyperparameter optimization algorithms is purpose built for large search spaces of continuous and discrete hyperparameters, particularly when the computational cost to evaluate the performance of a set of hyperparameters is high. Irace implements the iterated racing algorithm, that focuses the search around the most promising configurations, using statistical tests to discard the ones that perform poorly.[31][32]Another early stopping hyperparameter optimization algorithm is successive halving (SHA),[33]which begins as a random search but periodically prunes low-performing models, thereby focusing computational resources on more promising models. Asynchronous successive halving (ASHA)[34]further improves upon SHA's resource utilization profile by removing the need to synchronously evaluate and prune low-performing models. Hyperband[35]is a higher level early stopping-based algorithm that invokes SHA or ASHA multiple times with varying levels of pruning aggressiveness, in order to be more widely applicable and with fewer required inputs. RBF[36]andspectral[37]approaches have also been developed. When hyperparameter optimization is done, the set of hyperparameters are often fitted on a training set and selected based on the generalization performance, or score, of a validation set. However, this procedure is at risk of overfitting the hyperparameters to the validation set. Therefore, the generalization performance score of the validation set (which can be several sets in the case of a cross-validation procedure) cannot be used to simultaneously estimate the generalization performance of the final model. In order to do so, the generalization performance has to be evaluated on a set independent (which has no intersection) of the set (or sets) used for the optimization of the hyperparameters, otherwise the performance might give a value which is too optimistic (too large). This can be done on a second test set, or through an outercross-validationprocedure called nested cross-validation, which allows an unbiased estimation of the generalization performance of the model, taking into account the bias due to the hyperparameter optimization.
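As a concrete illustration of the grid search and random search procedures described above, the following minimal Python sketch (assuming scikit-learn is available; the dataset, value ranges and evaluation budget are illustrative choices, not prescriptions) tunes the C and γ hyperparameters of an RBF-kernel SVM by cross-validation:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset choice
rng = np.random.default_rng(0)

def cv_score(C, gamma):
    """Mean 5-fold cross-validated accuracy for one (C, gamma) setting."""
    model = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
    return cross_val_score(model, X, y, cv=5).mean()

# Grid search: evaluate every pair in the Cartesian product of two finite sets.
C_grid = [0.1, 1, 10, 100, 1000]
gamma_grid = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
grid_best = max((cv_score(C, g), C, g) for C in C_grid for g in gamma_grid)

# Random search: spend the same budget on settings drawn from log-uniform priors.
n_trials = len(C_grid) * len(gamma_grid)
random_trials = []
for _ in range(n_trials):
    C = 10 ** rng.uniform(-1, 3)         # log-uniform over [0.1, 1000]
    gamma = 10 ** rng.uniform(-5, -1)    # log-uniform over [1e-5, 0.1]
    random_trials.append((cv_score(C, gamma), C, gamma))
random_best = max(random_trials)

print("grid search best   (score, C, gamma):", grid_best)
print("random search best (score, C, gamma):", random_best)

Both loops are embarrassingly parallel, since each (C, γ) evaluation is independent of the others.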
https://en.wikipedia.org/wiki/Grid_search
Identifiability analysis is a group of methods in mathematical statistics used to determine how well the parameters of a model can be estimated from the quantity and quality of available experimental data.[1] These methods therefore explore not only the identifiability of a model, but also its relation to particular experimental data or, more generally, to the data collection process. Assuming a model has been fitted to experimental data, the goodness of fit does not reveal how reliable the parameter estimates are, nor is it sufficient to prove that the model was chosen correctly. For example, if the experimental data are noisy or the number of data points is insufficient, the estimated parameter values may vary drastically without significantly affecting the goodness of fit. To address these issues, identifiability analysis can be applied as an important step to ensure that the model is chosen correctly and that the experimental data are sufficient. Its purpose is either to provide quantified evidence of correct model choice and of the adequacy of the data acquired, or to serve as an instrument for detecting non-identifiable and sloppy parameters, thereby helping to plan experiments and to build and improve the model at an early stage. Structural identifiability analysis is a particular type of analysis in which the model structure itself is investigated for non-identifiability.[2] Recognized non-identifiabilities may be removed analytically, for example by substituting combinations of the non-identifiable parameters. Overloading a model with independent parameters when it is fitted to a finite experimental dataset may yield a good fit at the price of making the fitting results insensitive to changes in parameter values, leaving the parameter values undetermined. Structural methods are also referred to as a priori, because this kind of non-identifiability analysis can be performed before any fitting score function is computed, by examining the number of degrees of freedom of the model and the number of independent experimental conditions that can be varied. Practical identifiability analysis is performed by exploring the fit of an existing model to experimental data. Once a fit has been obtained in some measure, parameter identifiability can be analysed either locally near a given point (usually near the parameter values that provided the best model fit) or globally over an extended parameter space. A common example of practical identifiability analysis is the profile likelihood method.[3]
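The following minimal Python sketch (a hypothetical two-parameter exponential-decay model, with NumPy and SciPy assumed) illustrates the profile likelihood idea: one parameter is fixed on a grid while the other is re-optimized at each grid point, and the curvature of the resulting profile indicates how well the fixed parameter is determined by the data:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)
a_true, b_true, noise = 2.0, 3.0, 0.1
y = a_true * np.exp(-b_true * t) + noise * rng.normal(size=t.size)

def sse(a, b):
    """Sum of squared errors of the model a*exp(-b*t) against the data."""
    return np.sum((y - a * np.exp(-b * t)) ** 2)

# Profile of b: fix b on a grid and re-optimize the remaining parameter a.
b_grid = np.linspace(1.0, 5.0, 41)
profile = [minimize_scalar(lambda a: sse(a, b)).fun for b in b_grid]

b_hat = b_grid[int(np.argmin(profile))]
print(f"profile minimum near b = {b_hat:.2f} (true value {b_true})")
# A sharply curved profile around the minimum indicates that b is well
# determined; a nearly flat profile would signal practical non-identifiability.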
https://en.wikipedia.org/wiki/Identifiability_Analysis
Log-linear analysisis a technique used instatisticsto examine the relationship between more than twocategorical variables. The technique is used for bothhypothesis testingand model building. In both these uses, models are tested to find the most parsimonious (i.e., least complex) model that best accounts for the variance in the observed frequencies. (APearson's chi-square testcould be used instead of log-linear analysis, but that technique only allows for two of the variables to be compared at a time.[1]) Log-linear analysis uses alikelihood ratiostatisticX2{\displaystyle \mathrm {X} ^{2}}that has an approximatechi-square distributionwhen the sample size is large:[2] where There are three assumptions in log-linear analysis:[2] 1. The observations areindependentandrandom; 2. Observed frequencies are normally distributed about expected frequencies over repeated samples. This is a good approximation if both (a) the expected frequencies are greater than or equal to 5 for 80% or more of the categories and (b) all expected frequencies are greater than 1. Violations to this assumption result in a large reduction in power. Suggested solutions to this violation are: delete a variable, combine levels of one variable (e.g., put males and females together), or collect more data. 3. The logarithm of the expected value of the response variable is a linear combination of the explanatory variables. This assumption is so fundamental that it is rarely mentioned, but like most linearity assumptions, it is rarely exact and often simply made to obtain a tractable model. Additionally, data should always be categorical. Continuous data can first be converted to categorical data, with some loss of information. With both continuous and categorical data, it would be best to uselogistic regression. (Any data that is analysed with log-linear analysis can also be analysed with logistic regression. The technique chosen depends on the research questions.) In log-linear analysis there is no clear distinction between what variables are theindependentordependentvariables. The variables are treated the same. However, often the theoretical background of the variables will lead the variables to be interpreted as either the independent or dependent variables.[1] The goal of log-linear analysis is to determine which model components are necessary to retain in order to best account for the data. Model components are the number ofmain effectsandinteractionsin the model. For example, if we examine the relationship between three variables—variable A, variable B, and variable C—there are seven model components in the saturated model. The three main effects (A, B, C), the three two-way interactions (AB, AC, BC), and the one three-way interaction (ABC) gives the seven model components. The log-linear models can be thought of to be on a continuum with the two extremes being the simplest model and thesaturated model. The simplest model is the model where all the expected frequencies are equal. This is true when the variables are not related. The saturated model is the model that includes all the model components. This model will always explain the data the best, but it is the least parsimonious as everything is included. In this model, observed frequencies equal expected frequencies, therefore in the likelihood ratio chi-square statistic, the ratioOijEij=1{\displaystyle {\frac {O_{ij}}{E_{ij}}}=1}andln⁡(1)=0{\displaystyle \ln(1)=0}. 
This results in the likelihood ratio chi-square statistic being equal to 0, which is the best model fit.[2]Other possible models are the conditional equiprobability model and the mutual dependence model.[1] Each log-linear model can be represented as a log-linear equation. For example, with the three variables (A,B,C) the saturated model has the following log-linear equation:[1] where Log-linear analysis models can be hierarchical or nonhierarchical. Hierarchical models are the most common. These models contain all the lower order interactions and main effects of the interaction to be examined.[1] A log-linear model is graphical if, whenever the model contains all two-factor terms generated by a higher-order interaction, the model also contains the higher-order interaction.[4]As a direct-consequence, graphical models are hierarchical. Moreover, being completely determined by its two-factor terms, a graphical model can be represented by an undirected graph, where the vertices represent the variables and the edges represent the two-factor terms included in the model. A log-linear model is decomposable if it is graphical and if the corresponding graph ischordal. The model fits well when theresiduals(i.e., observed-expected) are close to 0, that is the closer the observed frequencies are to the expected frequencies the better the model fit. If the likelihood ratio chi-square statistic is non-significant, then the model fits well (i.e., calculated expected frequencies are close to observed frequencies). If the likelihood ratio chi-square statistic is significant, then the model does not fit well (i.e., calculated expected frequencies are not close to observed frequencies). Backward eliminationis used to determine which of the model components are necessary to retain in order to best account for the data. Log-linear analysis starts with the saturated model and the highest order interactions are removed until the model no longer accurately fits the data. Specifically, at each stage, after the removal of the highest ordered interaction, the likelihood ratio chi-square statistic is computed to measure how well the model is fitting the data. The highest ordered interactions are no longer removed when the likelihood ratio chi-square statistic becomes significant.[2] When two models arenested, models can also be compared using a chi-square difference test. The chi-square difference test is computed by subtracting the likelihood ratio chi-square statistics for the two models being compared. This value is then compared to the chi-square critical value at their difference in degrees of freedom. If the chi-square difference is smaller than the chi-square critical value, the new model fits the data significantly better and is the preferred model. Else, if the chi-square difference is larger than the critical value, the less parsimonious model is preferred.[1] Once the model of best fit is determined, the highest-order interaction is examined by conducting chi-square analyses at different levels of one of the variables. To conduct chi-square analyses, one needs to break the model down into a 2 × 2 or 2 × 1contingency table.[2] For example, if one is examining the relationship among four variables, and the model of best fit contained one of the three-way interactions, one would examine its simple two-way interactions at different levels of the third variable. To compare effect sizes of the interactions between the variables,odds ratiosare used. 
Odds ratios are preferred over chi-square statistics for two main reasons:[1] 1. Odds ratios are independent of the sample size; 2. Odds ratios are not affected by unequal marginal distributions.
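The following minimal Python sketch (with made-up cell counts; NumPy and SciPy assumed) computes the likelihood-ratio chi-square statistic for a 2 × 2 × 2 table under the mutual-independence model, i.e. the model containing only the main effects A, B and C:

import numpy as np
from scipy.stats import chi2

# Hypothetical observed counts for a 2 x 2 x 2 table; axes are A, B and C.
observed = np.array([[[20.0, 30.0], [25.0, 25.0]],
                     [[15.0, 35.0], [30.0, 20.0]]])
n = observed.sum()

# Expected frequencies under mutual independence: E_ijk = n * pA_i * pB_j * pC_k.
pA = observed.sum(axis=(1, 2)) / n
pB = observed.sum(axis=(0, 2)) / n
pC = observed.sum(axis=(0, 1)) / n
expected = n * pA[:, None, None] * pB[None, :, None] * pC[None, None, :]

# Likelihood-ratio chi-square statistic: G^2 = 2 * sum(O * ln(O / E)).
G2 = 2.0 * np.sum(observed * np.log(observed / expected))
# Degrees of freedom: number of cells minus the parameters fitted by the
# independence model (one constant plus one per non-reference level).
df = observed.size - (1 + sum(d - 1 for d in observed.shape))
p_value = chi2.sf(G2, df)
print(f"G^2 = {G2:.2f}, df = {df}, p = {p_value:.4f}")

A significant statistic indicates that the main-effects-only model does not account for the data and that interaction terms should be retained.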
https://en.wikipedia.org/wiki/Log-linear_analysis
Instatistics,identifiabilityis a property which amodelmust satisfy for preciseinferenceto be possible. A model isidentifiableif it is theoretically possible to learn the true values of this model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate differentprobability distributionsof the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called theidentification conditions. A model that fails to be identifiable is said to benon-identifiableorunidentifiable: two or moreparametrizationsareobservationally equivalent. In some cases, even though a model is non-identifiable, it is still possible to learn the true values of a certain subset of the model parameters. In this case we say that the model ispartially identifiable. In other cases it may be possible to learn the location of the true parameter up to a certain finite region of the parameter space, in which case the model isset identifiable. Aside from strictly theoretical exploration of the model properties,identifiabilitycan be referred to in a wider scope when a model is tested with experimental data sets, usingidentifiability analysis.[1] LetP={Pθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{P_{\theta }:\theta \in \Theta \}}be astatistical modelwith parameter spaceΘ{\displaystyle \Theta }. We say thatP{\displaystyle {\mathcal {P}}}isidentifiableif the mappingθ↦Pθ{\displaystyle \theta \mapsto P_{\theta }}isone-to-one:[2] This definition means that distinct values ofθshould correspond to distinct probability distributions: ifθ1≠θ2, then alsoPθ1≠Pθ2.[3]If the distributions are defined in terms of theprobability density functions(pdfs), then two pdfs should be considered distinct only if they differ on a set of non-zero measure (for example two functions ƒ1(x) =10 ≤x< 1and ƒ2(x) =10 ≤x≤ 1differ only at a single pointx= 1 — a set ofmeasurezero — and thus cannot be considered as distinct pdfs). Identifiability of the model in the sense of invertibility of the mapθ↦Pθ{\displaystyle \theta \mapsto P_{\theta }}is equivalent to being able to learn the model's true parameter if the model can be observed indefinitely long. Indeed, if {Xt} ⊆Sis the sequence of observations from the model, then by thestrong law of large numbers, for every measurable setA⊆S(here1{...}is theindicator function). Thus, with an infinite number of observations we will be able to find the true probability distributionP0in the model, and since the identifiability condition above requires that the mapθ↦Pθ{\displaystyle \theta \mapsto P_{\theta }}be invertible, we will also be able to find the true value of the parameter which generated given distributionP0. LetP{\displaystyle {\mathcal {P}}}be thenormallocation-scale family: Then This expression is equal to zero for almost allxonly when all its coefficients are equal to zero, which is only possible when |σ1| = |σ2| andμ1=μ2. Since in the scale parameterσis restricted to be greater than zero, we conclude that the model is identifiable: ƒθ1= ƒθ2⇔θ1=θ2. LetP{\displaystyle {\mathcal {P}}}be the standardlinear regression model: (where ′ denotes matrixtranspose). Then the parameterβis identifiable if and only if the matrixE[xx′]{\displaystyle \mathrm {E} [xx']}is invertible. Thus, this is theidentification conditionin the model. 
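The identification condition for the linear regression example can be checked numerically. The following minimal Python sketch (illustrative data, NumPy assumed) contrasts a design whose sample analogue of E[xx′] has full rank with a collinear design for which it is singular, so that distinct parameter vectors are observationally equivalent:

import numpy as np

rng = np.random.default_rng(2)
n = 10_000
z = rng.normal(size=n)

# Design 1: intercept plus two independent regressors -> E[xx'] invertible.
X_ok = np.column_stack([np.ones(n), z, rng.normal(size=n)])
# Design 2: third regressor is an exact linear function of the others.
X_bad = np.column_stack([np.ones(n), z, 2.0 * z - 1.0])

for name, X in [("independent", X_ok), ("collinear", X_bad)]:
    M = X.T @ X / n                           # sample analogue of E[xx']
    rank = np.linalg.matrix_rank(M)
    print(f"{name}: rank = {rank} of {M.shape[0]}, "
          f"identifiable = {rank == M.shape[0]}")

# In the collinear design, beta = (1, 1, 0) and beta = (0, 3, -1) give the same
# regression function x'beta, hence the same distribution of the observables.
b1, b2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 3.0, -1.0])
print(np.allclose(X_bad @ b1, X_bad @ b2))    # True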
SupposeP{\displaystyle {\mathcal {P}}}is the classicalerrors-in-variableslinear model: where (ε,η,x*) are jointly normal independent random variables with zero expected value and unknown variances, and only the variables (x,y) are observed. Then this model is not identifiable,[4]only the product βσ²∗is (where σ²∗is the variance of the latent regressorx*). This is also an example of aset identifiablemodel: although the exact value ofβcannot be learned, we can guarantee that it must lie somewhere in the interval (βyx, 1÷βxy), whereβyxis the coefficient inOLSregression ofyonx, andβxyis the coefficient in OLS regression ofxony.[5] If we abandon the normality assumption and require thatx*werenotnormally distributed, retaining only the independence conditionε⊥η⊥x*, then the model becomes identifiable.[4]
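The set-identification interval above can be reproduced by simulation. The following minimal Python sketch (illustrative parameter values, NumPy assumed) generates data from the errors-in-variables model with a normal latent regressor and verifies that the true β lies between the OLS slope of y on x and the reciprocal of the OLS slope of x on y:

import numpy as np

rng = np.random.default_rng(3)
n = 200_000
beta = 1.5
x_star = rng.normal(scale=2.0, size=n)              # latent regressor x*
x = x_star + rng.normal(scale=1.0, size=n)          # observed regressor (adds eta)
y = beta * x_star + rng.normal(scale=1.0, size=n)   # outcome (adds epsilon)

beta_yx = np.polyfit(x, y, 1)[0]   # OLS slope of y on x (attenuated towards 0)
beta_xy = np.polyfit(y, x, 1)[0]   # OLS slope of x on y
print(f"identified set ({beta_yx:.3f}, {1 / beta_xy:.3f}) contains beta = {beta}")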
https://en.wikipedia.org/wiki/Model_identification
In thedesign of experiments,optimal experimental designs(oroptimum designs[2]) are a class ofexperimental designsthat areoptimalwith respect to somestatisticalcriterion. The creation of this field of statistics has been credited to Danish statisticianKirstine Smith.[3][4] In thedesign of experimentsforestimatingstatistical models,optimal designsallow parameters to beestimated without biasand withminimum variance. A non-optimal design requires a greater number ofexperimental runstoestimatetheparameterswith the sameprecisionas an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation. The optimality of a design depends on thestatistical modeland is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding ofstatistical theoryand practical knowledge withdesigning experiments. Optimal designs offer three advantages over sub-optimalexperimental designs:[5] Experimental designs are evaluated using statistical criteria.[6] It is known that theleast squaresestimator minimizes thevarianceofmean-unbiasedestimators(under the conditions of theGauss–Markov theorem). In theestimationtheory forstatistical modelswith onerealparameter, thereciprocalof the variance of an ("efficient") estimator is called the "Fisher information" for that estimator.[7]Because of this reciprocity,minimizingthevariancecorresponds tomaximizingtheinformation. When thestatistical modelhas severalparameters, however, themeanof the parameter-estimator is avectorand itsvarianceis amatrix. Theinverse matrixof the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Usingstatistical theory, statisticians compress the information-matrix using real-valuedsummary statistics; being real-valued functions, these "information criteria" can be maximized.[8]The traditional optimality-criteria areinvariantsof theinformationmatrix; algebraically, the traditional optimality-criteria arefunctionalsof theeigenvaluesof the information matrix. Other optimality-criteria are concerned with the variance ofpredictions: In many applications, the statistician is most concerned with a"parameter of interest"rather than with"nuisance parameters". More generally, statisticians considerlinear combinationsof parameters, which are estimated via linear combinations of treatment-means in thedesign of experimentsand in theanalysis of variance; such linear combinations are calledcontrasts. Statisticians can use appropriate optimality-criteria for suchparameters of interestand forcontrasts.[12] Catalogs of optimal designs occur in books and in software libraries. In addition, majorstatistical systemslikeSASandRhave procedures for optimizing a design according to a user's specification. The experimenter must specify amodelfor the design and an optimality-criterion before the method can compute an optimal design.[13] Some advanced topics in optimal design require morestatistical theoryand practical knowledge in designing experiments. Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design ismodeldependent: While an optimal design is best for thatmodel, its performance may deteriorate on othermodels. 
On othermodels, anoptimaldesign can be either better or worse than a non-optimal design.[14]Therefore, it is important tobenchmarkthe performance of designs under alternativemodels.[15] The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that since the [traditional optimality] criteria . . . are variance-minimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually near-optimal for the same model with respect to the other criteria. Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" ofKiefer.[17]The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in theoptimality-criterionis much greater than is robustness with respect to changes in themodel. High-quality statistical software provide a combination of libraries of optimal designs or iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion. All of the traditional optimality-criteria areconvex (or concave) functions, and therefore optimal-designs are amenable to the mathematical theory ofconvex analysisand their computation can use specialized methods ofconvex minimization.[18]The practitioner need not selectexactly onetraditional, optimality-criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria andnonnegative combinationsof optimality criteria (since these operations preserveconvex functions). Forconvexoptimality criteria, theKiefer-Wolfowitzequivalence theoremallows the practitioner to verify that a given design is globally optimal.[19]TheKiefer-Wolfowitzequivalence theoremis related with theLegendre-Fenchelconjugacyforconvex functions.[20] If an optimality-criterion lacksconvexity, then finding aglobal optimumand verifying its optimality often are difficult. When scientists wish to test several theories, then a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in thebiostatisticssupportingpharmacokineticsandpharmacodynamics, following the work ofCoxand Atkinson.[21] When practitioners need to consider multiplemodels, they can specify aprobability-measureon the models and then select any design maximizing theexpected valueof such an experiment. Such probability-based optimal-designs are called optimalBayesiandesigns. SuchBayesian designsare used especially forgeneralized linear models(where the response follows anexponential-familydistribution).[22] The use of aBayesian designdoes not force statisticians to useBayesian methodsto analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers.[23]Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality. Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments. 
Sequential analysiswas pioneered byAbraham Wald.[24]In 1972,Herman Chernoffwrote an overview of optimal sequential designs,[25]whileadaptive designswere surveyed later by S. Zacks.[26]Of course, much work on the optimal design of experiments is related to the theory ofoptimal decisions, especially thestatistical decision theoryofAbraham Wald.[27] Optimal designs forresponse-surface modelsare discussed in the textbook by Atkinson, Donev and Tobias, and in the survey of Gaffke and Heiligers and in the mathematical text of Pukelsheim. Theblockingof optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos. The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, byJ. D. Gergonnein 1815 (Stigler). In English, two early contributions were made byCharles S. PeirceandKirstine Smith. Pioneering designs for multivariateresponse-surfaceswere proposed byGeorge E. P. Box. However, Box's designs have few optimality properties. Indeed, theBox–Behnken designrequires excessive experimental runs when the number of variables exceeds three.[28]Box's"central-composite" designsrequire more experimental runs than do the optimal designs of Kôno.[29] The optimization of sequential experimentation is studied also instochastic programmingand insystemsandcontrol. Popular methods includestochastic approximationand other methods ofstochastic optimization. Much of this research has been associated with the subdiscipline ofsystem identification.[30]In computationaloptimal control, D. Judin & A. Nemirovskii andBoris Polyakhas described methods that are more efficient than the (Armijo-style)step-size rulesintroduced byG. E. P. Boxinresponse-surface methodology.[31] Adaptive designsare used inclinical trials, and optimaladaptive designsare surveyed in theHandbook of Experimental Designschapter by Shelemyahu Zacks. There are several methods of finding an optimal design, given ana priorirestriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin andSloane. Of course, fixing the number of experimental runsa prioriwould be impractical. Prudent statisticians examine the other optimal designs, whose number of experimental runs differ. In the mathematical theory on optimal experiments, an optimal design can be aprobability measurethat issupportedon an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglected to specify the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can bediscretizedto furnishapproximatelyoptimal designs.[32] In some cases, a finite set of observation-locations suffices tosupportan optimal design. Such a result was proved by Kôno andKieferin their works onresponse-surface designsfor quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports, which are very similar as do the less efficient designs that have been traditional inresponse surface methodology.[33] In 1815, an article on optimal designs forpolynomial regressionwas published byJoseph Diaz Gergonne, according toStigler. Charles S. Peirceproposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. 
Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his 1882 published lecture atJohns Hopkins University, Peirce introduced experimental design with these words: Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly.[34] Kirstine Smithproposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statisticianThorvald N. Thieleand was working withKarl Pearsonin London.) The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses. Optimalblock designsare discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews thelinear algebraused by Bailey (or the advanced books below). Bailey's exercises and discussion ofrandomizationboth emphasize statistical concepts (rather than algebraic computations). Optimalblock designsare discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar.
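As a small numerical illustration of criterion-based comparison (an assumed example, not taken from the sources above), the following Python sketch evaluates two candidate designs for a quadratic regression model on [−1, 1] by the D-criterion, the determinant of the information matrix X′X; the three-point design supported on {−1, 0, 1} is the classical D-optimal support for this model:

import numpy as np

def information_matrix(design_points):
    """Information matrix X'X of the quadratic model at the given x values."""
    x = np.asarray(design_points, dtype=float)
    X = np.column_stack([np.ones_like(x), x, x ** 2])    # model matrix
    return X.T @ X

# Candidate 1: twelve runs equally spaced over [-1, 1].
equispaced = np.linspace(-1.0, 1.0, 12)
# Candidate 2: twelve runs split equally over the support points {-1, 0, 1}.
three_point = np.repeat([-1.0, 0.0, 1.0], 4)

for name, design in [("equally spaced", equispaced), ("three-point", three_point)]:
    det = np.linalg.det(information_matrix(design))
    print(f"{name:>15}: det(X'X) = {det:.1f}")
# The larger determinant means a smaller generalized variance of the
# least-squares estimates, so the three-point design wins under the D-criterion.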
https://en.wikipedia.org/wiki/Optimal_design#Model_selection
Ineconomicsandeconometrics, theparameter identification problemarises when the value of one or moreparametersin aneconomic modelcannot be determined from observable variables. It is closely related tonon-identifiabilityinstatisticsand econometrics, which occurs when astatistical modelhas more than one set of parameters that generate the same distribution of observations, meaning that multiple parameterizations areobservationally equivalent. For example, this problem can occur in the estimation of multiple-equation econometric models where the equations have variables in common. Consider a linear model for thesupply and demandof some specific good. The quantity demanded varies negatively with the price: a higher price decreases the quantity demanded. The quantity supplied varies directly with the price: a higher price increases the quantity supplied. Assume that, say for several years, we have data on both the price and the traded quantity of this good. Unfortunately this is not enough to identify the two equations (demand and supply) usingregression analysison observations ofQandP: one cannot estimate a downward slopeandan upward slope with one linear regression line involving only two variables. Additional variables can make it possible to identify the individual relations. In the graph shown here, the supply curve (red line, upward sloping) shows the quantity supplied depending positively on the price, while the demand curve (black lines, downward sloping) shows quantity depending negatively on the price and also on some additional variableZ, which affects the location of the demand curve in quantity-price space. ThisZmight be consumers' income, with a rise in income shifting the demand curve outwards. This is symbolically indicated with the values 1, 2 and 3 forZ. With the quantities supplied and demanded being equal, the observations on quantity and price are the three white points in the graph: they reveal the supply curve. Hence the effect ofZondemandmakes it possible to identify the (positive) slope of thesupplyequation. The (negative) slope parameter of the demand equation cannot be identified in this case. In other words, the parameters of an equation can be identified if it is known that some variable doesnotenter into the equation, while it does enter the other equation. A situation in which both the supply and the demand equation are identified arises if there is not only a variableZentering the demand equation but not the supply equation, but also a variableXentering the supply equation but not the demand equation: with positivebSand negativebD. Here both equations are identified ifcanddare nonzero. Note that this is thestructural formof the model, showing the relations between theQandP. Thereduced formhowever can be identified easily. Fisher points out that this problem is fundamental to the model, and not a matter of statistical estimation: It is important to note that the problem is not one of the appropriateness of a particular estimation technique. In the situation described [without theZvariable], there clearly existsnoway usinganytechnique whatsoever in which the true demand (or supply) curve can be estimated. Nor, indeed, is the problem here one of statistical inference—of separating out the effects of random disturbance. There is no disturbance in this model [...] It is the logic of the supply-demand equilibrium itself which leads to the difficulty. (Fisher 1966, p. 5) More generally, consider a linear system ofMequations, withM> 1. 
An equation cannot be identified from the data if less thanM− 1 variables are excluded from that equation. This is a particular form of theorder conditionfor identification. (The general form of the order condition deals also with restrictions other than exclusions.) The order condition is necessary but not sufficient for identification. Therank conditionis anecessary and sufficientcondition for identification. In the case of only exclusion restrictions, it must "be possible to form at least one nonvanishing determinant of orderM− 1 from the columns ofAcorresponding to the variables excluded a priori from that equation" (Fisher 1966, p. 40), whereAis the matrix of coefficients of the equations. This is the generalization in matrix algebra of the requirement "while it does enter the other equation" mentioned above (in the line above the formulas).
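The identification argument can be illustrated by simulation. In the following minimal Python sketch (illustrative coefficient values, NumPy assumed), only the demand curve shifts through Z, so the equilibrium observations trace out the supply curve and a regression of quantity on price recovers the supply slope while the demand slope remains unidentified:

import numpy as np

rng = np.random.default_rng(4)
n = 500
bS, bD, c = 0.8, 1.2, 1.0          # supply slope, demand slope, effect of Z
aS, aD = 1.0, 10.0                 # intercepts
Z = rng.uniform(0, 5, n)           # demand shifter (e.g. consumer income)

# Equilibrium: aD - bD*P + c*Z = aS + bS*P  =>  P = (aD - aS + c*Z) / (bS + bD)
P = (aD - aS + c * Z) / (bS + bD)
Q = aS + bS * P                    # quantity read off the supply equation

slope, intercept = np.polyfit(P, Q, 1)
print(f"OLS slope of Q on P = {slope:.3f}  (supply slope bS = {bS})")
# The demand slope -bD never appears in the fitted line; without a supply
# shifter X the demand equation remains unidentified.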
https://en.wikipedia.org/wiki/Parameter_identification_problem
Scientific modellingis an activity that producesmodelsrepresentingempiricalobjects, phenomena, and physical processes, to make a particular part or feature of the world easier tounderstand,define,quantify,visualize, orsimulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such asconceptual modelsto better understand, operational models tooperationalize,mathematical modelsto quantify,computational modelsto simulate, andgraphical modelsto visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling.[1][2]The following was said byJohn von Neumann.[3] ... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area. There is also an increasing attention to scientific modelling[4]in fields such asscience education,[5]philosophy of science,systems theory, andknowledge visualization. There is a growing collection ofmethods, techniques and meta-theoryabout all kinds of specialized scientific modelling. A scientific model seeks to representempiricalobjects, phenomena, and physical processes in alogicalandobjectiveway. All models arein simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful.[6]Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.[7] Attempts toformalizetheprinciplesof theempirical sciencesuse aninterpretationto model reality, in the same way logiciansaxiomatizetheprinciplesoflogic. The aim of these attempts is to construct aformal systemthat will not produce theoretical consequences that are contrary to what is found inreality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.[8][9] For the scientist, a model is also a way in which the human thought processes can be amplified.[10]For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models arein silico. Other types of scientific models arein vivo(living models, such aslaboratory rats) andin vitro(in glassware, such astissue culture).[11] Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes undercontrolled conditions(seeScientific method) will always be more reliable than modeled estimates of outcomes. 
Withinmodeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints.[12]It is task-driven because a model is captured with a certain question or task in mind. Simplifications leave all the known and observed entities and their relation out that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification, and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already amodelin itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations informal form and is often referred to as aconceptual model. In order to execute the model, it needs to be implemented as acomputer simulation. This requires more choices, such as numerical approximations or the use of heuristics.[13]Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.[14] Asimulationis a way to implement the model, often employed when the model is too complex for the analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful fortesting, analysis, or training in those cases where real-world systems or concepts can be represented by models.[15] Structureis a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailedscientific analysisof the properties ofmagnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.[16] Asystemis a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone.[17]The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and form relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete in which the variables change instantaneously at separate points in time and, 2) continuous where the state variables change continuously with respect to time.[18] Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components. 
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers and to contingent decisions made during the modelling process. Considerations that may influence thestructureof a model might be the modeler's preference for a reducedontology, preferences regardingstatistical modelsversusdeterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use. Building a model requiresabstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, thespecial theory of relativityassumes aninertial frame of reference. This assumption was contextualized and further explained by thegeneral theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (thegeneral theory of relativityworks in non-inertial reference frames as well). A model is evaluated first and foremost by its consistency to empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:[citation needed] People may attempt to quantify the evaluation of a model using autility function. Visualizationis any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history includecave paintings,Egyptian hieroglyphs, Greekgeometry, andLeonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Space mappingrefers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. Inengineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model). One application of scientific modelling is the field ofmodelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools. The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process.[15] Nowadays there are some 40 magazines about scientific modelling which offer all kinds of international forums. Since the 1960s there is a strongly growing number of books and magazines about specific forms of scientific modelling. 
There is also extensive discussion of scientific modelling in the philosophy-of-science literature.
https://en.wikipedia.org/wiki/Scientific_modelling
Indecision theoryandestimation theory,Stein's example(also known asStein's phenomenonorStein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combinedestimatorsmore accurate on average (that is, having lower expectedmean squared error) than any method that handles the parameters separately. It is named afterCharles SteinofStanford University, who discovered the phenomenon in 1955.[1] An intuitive explanation is that optimizing for the mean-squared error of acombinedestimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Letθ{\displaystyle {\boldsymbol {\theta }}}be a vector consisting ofn≥3{\displaystyle n\geq 3}unknown parameters. To estimate these parameters, a single measurementXi{\displaystyle X_{i}}is performed for each parameterθi{\displaystyle \theta _{i}}, resulting in a vectorX{\displaystyle \mathbf {X} }of lengthn{\displaystyle n}. Suppose the measurements are known to beindependent,Gaussianrandom variables, with meanθ{\displaystyle {\boldsymbol {\theta }}}and variance 1, i.e.,X∼N(θ,In){\displaystyle \mathbf {X} \sim {\mathcal {N}}({\boldsymbol {\theta }},\mathbf {I} _{n})}. Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written asθ^=X{\displaystyle {\hat {\boldsymbol {\theta }}}=\mathbf {X} }, which is themaximum likelihood estimator(MLE). The quality of such an estimator is measured by itsrisk function. A commonly used risk function is themean squared error, defined asE[‖θ−θ^‖2]{\displaystyle \mathbb {E} [\|{\boldsymbol {\theta }}-{\hat {\boldsymbol {\theta }}}\|^{2}]}. Surprisingly, it turns out that the "ordinary" decision rule is suboptimal (inadmissible) in terms of mean squared error whenn≥3{\displaystyle n\geq 3}. In other words, in the setting discussed here, there exist alternative estimators whichalwaysachieve lowermeansquared error, no matter what the value ofθ{\displaystyle {\boldsymbol {\theta }}}is. For a givenθ{\displaystyle {\boldsymbol {\theta }}}one could obviously define a perfect "estimator" which is always justθ{\displaystyle {\boldsymbol {\theta }}}, but this estimator would be bad for other values ofθ{\displaystyle {\boldsymbol {\theta }}}. The estimators of Stein's paradox are, for a givenθ{\displaystyle {\boldsymbol {\theta }}}, better than the "ordinary" decision ruleX{\displaystyle \mathbf {X} }for someX{\displaystyle \mathbf {X} }but necessarily worse for others. It is only on average that they are better. 
More accurately, an estimatorθ^1{\displaystyle {\hat {\boldsymbol {\theta }}}_{1}}is said todominateanother estimatorθ^2{\displaystyle {\hat {\boldsymbol {\theta }}}_{2}}if, for all values ofθ{\displaystyle {\boldsymbol {\theta }}}, the risk ofθ^1{\displaystyle {\hat {\boldsymbol {\theta }}}_{1}}is lower than, or equal to, the risk ofθ^2{\displaystyle {\hat {\boldsymbol {\theta }}}_{2}},andif the inequality isstrictfor someθ{\displaystyle {\boldsymbol {\theta }}}. An estimator is said to beadmissibleif no other estimator dominates it, otherwise it isinadmissible. Thus, Stein's example can be simply stated as follows:The "ordinary" decision rule of the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk. Many simple, practical estimators achieve better performance than the "ordinary" decision rule. The best-known example is theJames–Stein estimator, which shrinksX{\displaystyle \mathbf {X} }towards a particular point (such as the origin) by an amount inversely proportional to the distance ofX{\displaystyle \mathbf {X} }from that point. For a sketch of the proof of this result, seeProof of Stein's example. An alternative proof is due toLarry Brown: he proved that the ordinary estimator for ann{\displaystyle n}-dimensional multivariate normal mean vector is admissible if and only if then{\displaystyle n}-dimensionalBrownian motionis recurrent.[2]Since the Brownian motion is not recurrent forn≥3{\displaystyle n\geq 3}, the MLE is not admissible forn≥3{\displaystyle n\geq 3}. For any particular value ofθ{\displaystyle {\boldsymbol {\theta }}}the new estimator will improve at least one of the individual mean square errorsE[(θi−θ^i)2].{\displaystyle \mathbb {E} [(\theta _{i}-{\hat {\theta }}_{i})^{2}].}This is not hard − for instance, ifθ{\displaystyle {\boldsymbol {\theta }}}is between −1 and 1, andσ=1{\displaystyle \sigma =1}, then an estimator that linearly shrinksX{\displaystyle \mathbf {X} }towards 0 by 0.5 (i.e.,sign⁡(Xi)max(|Xi|−0.5,0){\displaystyle \operatorname {sign} (X_{i})\max(|X_{i}|-0.5,0)}, soft thresholding with threshold0.5{\displaystyle 0.5}) will have a lower mean square error thanX{\displaystyle \mathbf {X} }itself. But there are other values ofθ{\displaystyle {\boldsymbol {\theta }}}for which this estimator is worse thanX{\displaystyle \mathbf {X} }itself. The trick of the Stein estimator, and others that yield the Stein paradox, is that they adjust the shift in such a way that there is always (for anyθ{\displaystyle {\boldsymbol {\theta }}}vector) at least oneXi{\displaystyle X_{i}}whose mean square error is improved, and its improvement more than compensates for any degradation in mean square error that might occur for anotherθ^i{\displaystyle {\hat {\theta }}_{i}}. The trouble is that, without knowingθ{\displaystyle {\boldsymbol {\theta }}}, you don't know which of then{\displaystyle n}mean square errors are improved, so you can't use the Stein estimator only for those parameters. An example of the above setting occurs inchannel estimationin telecommunications, for instance, because different factors affect overall channel performance. Stein's example is surprising, since the "ordinary" decision rule is intuitive and commonly used. In fact, numerous methods for estimator construction, includingmaximum likelihood estimation,best linear unbiased estimation,least squaresestimation and optimalequivariant estimation, all result in the "ordinary" estimator. Yet, as discussed above, this estimator is suboptimal. 
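The dominance of shrinkage estimators can be checked by simulation. The following minimal Python sketch (illustrative dimension and parameter vector, NumPy assumed) compares the average squared-error risk of the ordinary estimator X with that of the James–Stein estimator that shrinks X towards the origin:

import numpy as np

rng = np.random.default_rng(5)
n, n_repeats = 10, 100_000
theta = rng.uniform(-2, 2, n)               # arbitrary fixed parameter vector

X = theta + rng.normal(size=(n_repeats, n)) # one row per repeated experiment
shrink = 1 - (n - 2) / np.sum(X ** 2, axis=1, keepdims=True)
js = shrink * X                             # James-Stein estimate for each row

mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))    # should be close to n
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"risk of X (MLE):     {mse_mle:.3f}")
print(f"risk of James-Stein: {mse_js:.3f}   (smaller on average for this theta)")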
To demonstrate the unintuitive nature of Stein's example, consider the following real-world example. Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements. At first sight it appears that somehow we get a better estimator for US wheat yield by measuring some other unrelated statistics such as the number of spectators at Wimbledon and the weight of a candy bar. However, we have not obtained a better estimator for US wheat yield by itself, but we have produced an estimator for the vector of the means of all three random variables, which has a reducedtotalrisk. This occurs because the cost of a bad estimate in one component of the vector is compensated by a better estimate in another component. Also, a specific set of the three estimated mean values obtained with the new estimator will not necessarily be better than the ordinary set (the measured values). It is only on average that the new estimator is better. Therisk functionof the decision ruled(x)=x{\displaystyle d(\mathbf {x} )=\mathbf {x} }is Now consider the decision rule whereα=n−2{\displaystyle \alpha =n-2}. We will show thatd′{\displaystyle d'}is a better decision rule thand{\displaystyle d}. The risk function is — a quadratic inα{\displaystyle \alpha }. We may simplify the middle term by considering a general "well-behaved" functionh:x↦h(x)∈R{\displaystyle h:\mathbf {x} \mapsto h(\mathbf {x} )\in \mathbb {R} }and usingintegration by parts. For1≤i≤n{\displaystyle 1\leq i\leq n}, for any continuously differentiableh{\displaystyle h}growing sufficiently slowly for largexi{\displaystyle x_{i}}we have: Therefore, (This result is known asStein's lemma.) Now, we choose Ifh{\displaystyle h}met the "well-behaved" condition (it doesn't, but this can be remedied—see below), we would have and so Then returning to the risk function ofd′{\displaystyle d'}: This quadratic inα{\displaystyle \alpha }is minimized atα=n−2{\displaystyle \alpha =n-2}, giving which of course satisfiesR(θ,d′)<R(θ,d).{\displaystyle R(\theta ,d')<R(\theta ,d).}makingd{\displaystyle d}an inadmissible decision rule. It remains to justify the use of This function is not continuously differentiable, since it is singular atx=0{\displaystyle \mathbf {x} =0}. However, the function is continuously differentiable, and after following the algebra through and lettingε→0{\displaystyle \varepsilon \to 0}, one obtains the same result.
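The closed form obtained above, R(θ, d′) = n − (n − 2)² E[1/‖X‖²] with d′(x) = (1 − (n − 2)/‖x‖²)x, can be checked by simulation. A minimal sketch (NumPy, an arbitrary fixed θ chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000
theta = np.array([1.0, -0.5, 2.0, 0.0, 0.3])          # any fixed parameter vector

X = theta + rng.standard_normal((trials, n))           # X ~ N(theta, I_n)
sq_norm = np.sum(X ** 2, axis=1)

risk_d = np.mean(np.sum((X - theta) ** 2, axis=1))     # d(x) = x
d_prime = (1.0 - (n - 2) / sq_norm)[:, None] * X       # d'(x) = (1 - (n-2)/||x||^2) x
risk_d_prime = np.mean(np.sum((d_prime - theta) ** 2, axis=1))

# closed form from the argument above: R(theta, d') = n - (n-2)^2 * E[1/||X||^2]
risk_closed_form = n - (n - 2) ** 2 * np.mean(1.0 / sq_norm)

print(risk_d)              # approximately n = 5
print(risk_d_prime)        # strictly below 5 ...
print(risk_closed_form)    # ... and in agreement with the closed form
```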
https://en.wikipedia.org/wiki/Stein%27s_example
Instatisticsthemean squared prediction error(MSPE), also known asmean squared error of the predictions, of asmoothing,curve fitting, orregressionprocedure is theexpected valueof thesquaredprediction errors(PE), thesquare differencebetween the fitted values implied by the predictive functiong^{\displaystyle {\widehat {g}}}and the values of the (unobservable)true valueg. It is an inverse measure of theexplanatory powerofg^,{\displaystyle {\widehat {g}},}and can be used in the process ofcross-validationof an estimated model. Knowledge ofgwould be required in order to calculate the MSPE exactly; in practice, MSPE is estimated.[1] If the smoothing or fitting procedure hasprojection matrix(i.e., hat matrix)L, which maps the observed values vectory{\displaystyle y}topredicted valuesvectory^=Ly,{\displaystyle {\hat {y}}=Ly,}then PE and MSPE are formulated as: The MSPE can be decomposed into two terms: the squaredbias(mean error) of the fitted values and thevarianceof the fitted values: The quantitySSPE=nMSPEis calledsum squared prediction error. Theroot mean squared prediction erroris the square root of MSPE:RMSPE=√MSPE. The mean squared prediction error can be computed exactly in two contexts. First, with adata sampleof lengthn, thedata analystmay run theregressionover onlyqof the data points (withq<n), holding back the othern – qdata points with the specific purpose of using them to compute the estimated model’s MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to theqin-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over then – qheld-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, that results in the model being viewed favorably. And if two models are to be compared, the one with the lower MSPE over then – qout-of-sample data points is viewed more favorably, regardless of the models’ relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model’s MSPE for the mostly unobserved population from which the data were drawn. Second, as time goes on more data may become available to the data analyst, and then the MSPE can be computed over these new data. When the model has been estimated over all available data with none held back, the MSPE of the model over the entirepopulationof mostly unobserved data can be estimated as follows. For the modelyi=g(xi)+σεi{\displaystyle y_{i}=g(x_{i})+\sigma \varepsilon _{i}}whereεi∼N(0,1){\displaystyle \varepsilon _{i}\sim {\mathcal {N}}(0,1)}, one may write Using in-sample data values, the first term on the right side is equivalent to Thus, Ifσ2{\displaystyle \sigma ^{2}}is known or well-estimated byσ^2{\displaystyle {\widehat {\sigma }}^{2}}, it becomes possible to estimate MSPE by Colin Mallowsadvocated this method in the construction of his model selection statisticCp, which is a normalized version of the estimated MSPE: wherepis the number of estimated parameters andσ^2{\displaystyle {\widehat {\sigma }}^{2}}is computed from the version of the model that includes all possible regressors.
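The first estimation context described above (fitting on q of the n points and computing the MSPE estimate on the n − q held-back points) can be sketched as follows; the straight-line model, the noise level, and the split are hypothetical choices made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 200, 150                                   # q points for fitting, n - q held back
x = rng.uniform(0, 10, n)
y = 3.0 + 0.5 * x + rng.normal(0, 1.0, n)         # true g(x) = 3 + 0.5 x, sigma = 1

coef = np.polyfit(x[:q], y[:q], deg=1)            # fit a straight line on the first q points
pred = np.polyval(coef, x)

mspe_in = np.mean((y[:q] - pred[:q]) ** 2)        # in-sample estimate
mspe_out = np.mean((y[q:] - pred[q:]) ** 2)       # out-of-sample estimate
print(f"in-sample MSPE estimate:  {mspe_in:.3f}")
print(f"out-of-sample MSPE:       {mspe_out:.3f}  (typically the larger of the two)")
```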
https://en.wikipedia.org/wiki/Prediction_error
Statistical conclusion validityis the degree to which conclusions about the relationship amongvariablesbased on the data are correct or "reasonable". This began as being solely about whether the statistical conclusion about the relationship of the variables was correct, but there is now a movement towards "reasonable" conclusions that draw on quantitative, statistical, and qualitative data.[1]Fundamentally, two types of errors can occur:type I(finding a difference or correlation when none exists) andtype II(finding no difference or correlation when one exists). Statistical conclusion validity concerns the qualities of the study that make these types of errors more likely. Statistical conclusion validity involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.[2][3][4] The most common threats to statistical conclusion validity are: Poweris the probability of correctly rejecting thenull hypothesiswhen it is false (the complement of the type II error rate). Experiments with low power have a higher probability of incorrectly failing to reject the null hypothesis—that is, committing a type II error and concluding that there is no detectable effect when there is an effect (e.g., there is real covariation between the cause and effect). Low power occurs when the sample size of the study is too small given other factors (smalleffect sizes, large group variability, unreliable measures, etc.). Most statistical tests (particularlyinferential statistics) involve assumptions about the data that make the analysis suitable fortesting a hypothesis. Violating the assumptions of statistical tests can lead to incorrect inferences about the cause–effect relationship. Therobustnessof a test indicates how sensitive it is to violations. Violations of assumptions may make tests more or less likely to maketype I or II errors. Each hypothesis test involves a set risk of a type I error (the alpha rate). If a researcher searches or "dredges" through their data, testing many different hypotheses to find a significant effect, they are inflating their type I error rate. The more the researcher repeatedly tests the data, the higher the chance of observing a type I error and making an incorrect inference about the existence of a relationship. If the dependent and/or independent variable(s) are not measuredreliably(i.e. with large amounts ofmeasurement error), incorrect conclusions can be drawn. Restriction of range, such asfloor and ceiling effectsorselection effects, reduces the power of the experiment and increases the chance of a type II error.[5]This is becausecorrelationsare attenuated (weakened) by reduced variability (see, for example, the equation for thePearson product-moment correlation coefficientwhich uses score variance in its estimation). Greater heterogeneity of individuals participating in the study can also impact interpretations of results by increasing the variance of results or obscuring true relationships (see alsosampling error). This obscures possible interactions between the characteristics of the units and the cause–effect relationship. Any effect that can impact theinternal validityof a research study may bias the results and impact the validity of statistical conclusions reached. These threats to internal validity include unreliability of treatment implementation (lack ofstandardization) or failing to control forextraneous variables.
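The "fishing" threat can be made concrete with a small simulation: if twenty independent outcomes with no real group difference are each tested at α = 0.05, the chance of at least one spuriously "significant" result is far larger than 0.05. The sketch below (NumPy/SciPy, hypothetical sample sizes) illustrates the error-rate inflation only; it is not a prescription for any particular correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group, n_tests, sims, alpha = 30, 20, 2000, 0.05

false_positive_any = 0
for _ in range(sims):
    # 20 outcome variables with NO real group difference
    a = rng.standard_normal((n_tests, n_per_group))
    b = rng.standard_normal((n_tests, n_per_group))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    false_positive_any += np.any(p < alpha)

print(f"per-test alpha: {alpha}")
print(f"chance of at least one 'significant' result: {false_positive_any / sims:.2f}")
# roughly 1 - (1 - 0.05)**20, about 0.64, when the tests are independent
```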
https://en.wikipedia.org/wiki/Statistical_conclusion_validity
Instatistics,model specificationis part of the process of building astatistical model: specification consists of selecting an appropriatefunctional formfor the model and choosing which variables to include. For example, givenpersonal incomey{\displaystyle y}together with years of schoolings{\displaystyle s}and on-the-job experiencex{\displaystyle x}, we might specify a functional relationshipy=f(s,x){\displaystyle y=f(s,x)}as follows:[1] whereε{\displaystyle \varepsilon }is the unexplainederror termthat is supposed to compriseindependent and identically distributedGaussian variables. The statisticianSir David Coxhas said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[2] Specification error occurs when the functional form or the choice ofindependent variablespoorly represent relevant aspects of the true data-generating process. In particular,bias(theexpected valueof the difference of an estimatedparameterand the true underlying value) occurs if an independent variable is correlated with the errors inherent in the underlying process. There are several different possible causes of specification error; some are listed below. Additionally,measurement errorsmay affect the independent variables: while this is not a specification error, it can create statistical bias. Note that all models will have some specification error. Indeed, in statistics there is a common aphorism that "all models are wrong". In the words of Burnham & Anderson, "Modeling is an art as well as a science and is directed toward finding a good approximating model ... as the basis for statistical inference".[4] TheRamsey RESET testcan help test for specification error inregression analysis. In the example given above relating personal income to schooling and job experience, if the assumptions of the model are correct, then theleast squaresestimates of the parametersρ{\displaystyle \rho }andβ{\displaystyle \beta }will beefficientandunbiased. Hence specification diagnostics usually involve testing the first to fourthmomentof theresiduals.[5] Building a model involves finding a set of relationships to represent the process that is generating the data. This requires avoiding all the sources of misspecification mentioned above. One approach is to start with a model in general form that relies on a theoretical understanding of the data-generating process. Then the model can be fit to the data and checked for the various sources of misspecification, in a task calledstatistical model validation. Theoretical understanding can then guide the modification of the model in such a way as to retain theoretical validity while removing the sources of misspecification. But if it proves impossible to find a theoretically acceptable specification that fits the data, the theoretical model may have to be rejected and replaced with another one. A quotation fromKarl Popperis apposite here: "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve".[6] Another approach to model building is to specify several different models as candidates, and then compare those candidate models to each other. The purpose of the comparison is to determine which candidate model is most appropriate for statistical inference. Common criteria for comparing models include the following:R2,Bayes factor, and thelikelihood-ratio testtogether with its generalizationrelative likelihood. 
For more on this topic, seestatistical model selection.
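As a rough illustration of specification error in a regression setting, the sketch below (NumPy, simulated data with arbitrary coefficients and noise level) fits a straight line to data generated by a quadratic process and shows that the residuals of the misspecified fit remain correlated with a function of the fitted values, which is the intuition exploited by RESET-style diagnostics; it is not the Ramsey RESET statistic itself:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 4.0, 200)
y = 1.0 + 0.5 * x + 0.8 * x ** 2 + rng.normal(0.0, 0.5, x.size)   # true process is quadratic

lin = np.polyfit(x, y, 1)      # misspecified functional form: straight line
quad = np.polyfit(x, y, 2)     # correctly specified form

fit_lin, fit_quad = np.polyval(lin, x), np.polyval(quad, x)
res_lin, res_quad = y - fit_lin, y - fit_quad

# Under misspecification the residuals still carry structure related to the fitted
# values; under the correct functional form they do not.
print(np.corrcoef(res_lin, fit_lin ** 2)[0, 1])    # clearly non-zero
print(np.corrcoef(res_quad, fit_quad ** 2)[0, 1])  # close to zero
```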
https://en.wikipedia.org/wiki/Statistical_model_specification
Instatistics, thecoefficient of determination, denotedR2orr2and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is astatisticused in the context ofstatistical modelswhose main purpose is either thepredictionof future outcomes or the testing ofhypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1][2][3] There are several definitions ofR2that are only sometimes equivalent. Insimple linear regression(which includes anintercept),r2is simply the square of the samplecorrelation coefficient(r)between the observed outcomes and the observed predictor values.[4]If additionalregressorsare included,R2is the square of thecoefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1. There are cases whereR2can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used,R2may still be negative, for example when linear regression is conducted without including an intercept,[5]or when a non-linear function is used to fit the data.[6]In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion. The coefficient of determination can be more intuitively informative thanMAE,MAPE,MSE, andRMSEinregression analysisevaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared toSMAPEon certain test datasets.[7] When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on theR2of the linear regression (i.e.,Yobs=m·Ypred+ b).[citation needed]TheR2quantifies the degree of any linear correlation betweenYobsandYpred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration:Yobs= 1·Ypred+ 0 (i.e., the 1:1 line).[8][9] Adata sethasnvalues markedy1, ...,yn(collectively known asyior as a vectory= [y1, ...,yn]T), each associated with a fitted (or modeled, or predicted) valuef1, ...,fn(known asfi, or sometimesŷi, as a vectorf). Define theresidualsasei=yi−fi(forming a vectore). Ify¯{\displaystyle {\bar {y}}}is the mean of the observed data:y¯=1n∑i=1nyi{\displaystyle {\bar {y}}={\frac {1}{n}}\sum _{i=1}^{n}y_{i}}then the variability of the data set can be measured with twosums of squaresformulas: the residual sum of squaresSSres=∑i(yi−fi)2=∑iei2{\displaystyle SS_{\text{res}}=\sum _{i}(y_{i}-f_{i})^{2}=\sum _{i}e_{i}^{2}}and the total sum of squaresSStot=∑i(yi−y¯)2{\displaystyle SS_{\text{tot}}=\sum _{i}(y_{i}-{\bar {y}})^{2}}. The most general definition of the coefficient of determination isR2=1−SSresSStot{\displaystyle R^{2}=1-{SS_{\rm {res}} \over SS_{\rm {tot}}}} In the best case, the modeled values exactly match the observed values, which results inSSres=0{\displaystyle SS_{\text{res}}=0}andR2= 1. A baseline model, which always predicts the mean of the observed datay¯{\displaystyle {\bar {y}}}, will haveR2= 0. In a general form,R2can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data):R2=1−FVU{\displaystyle R^{2}=1-{\text{FVU}}} A larger value ofR2implies a more successful regression model.[4]: 463SupposeR2= 0.49.
This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called theexplained sum of squares, is defined as In some cases, as insimple linear regression, thetotal sum of squaresequals the sum of the two other sums of squares defined above: SeePartitioning in the general OLS modelfor a derivation of this result for one case where the relation holds. When this relation does hold, the above definition ofR2is equivalent to wherenis the number of observations (cases) on the variables. In this formR2is expressed as the ratio of theexplained variance(variance of the model's predictions, which isSSreg/n) to the total variance (sample variance of the dependent variable, which isSStot/n). This partition of the sum of squares holds for instance when the model valuesƒihave been obtained bylinear regression. A mildersufficient conditionreads as follows: The model has the form where theqiare arbitrary values that may or may not depend onior on other free parameters (the common choiceqi=xiis just one special case), and the coefficient estimatesα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}are obtained by minimizing the residual sum of squares. This set of conditions is an important one and it has a number of implications for the properties of the fittedresidualsand the modelled values. In particular, under these conditions: In linear least squaresmultiple regression(with fitted intercept and slope),R2equalsρ2(y,f){\displaystyle \rho ^{2}(y,f)}the square of thePearson correlation coefficientbetween the observedy{\displaystyle y}and modeled (predicted)f{\displaystyle f}data values of the dependent variable. In alinear least squares regression with a single explanator(with fitted intercept and slope), this is also equal toρ2(y,x){\displaystyle \rho ^{2}(y,x)}the squared Pearson correlation coefficient between the dependent variabley{\displaystyle y}and explanatory variablex{\displaystyle x}. It should not be confused with the correlation coefficient between twoexplanatory variables, defined as where the covariance between two coefficient estimates, as well as theirstandard deviations, are obtained from thecovariance matrixof the coefficient estimates,(XTX)−1{\displaystyle (X^{T}X)^{-1}}. Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, anR2value can be calculated as the square of thecorrelation coefficientbetween the originaly{\displaystyle y}and modeledf{\displaystyle f}data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the formα+βƒi).[citation needed]According to Everitt,[10]this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables. R2is a measure of thegoodness of fitof a model.[11]In regression, theR2coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. AnR2of 1 indicates that the regression predictions perfectly fit the data. 
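The definition R² = 1 − SSres/SStot is straightforward to compute directly. A minimal sketch, with made-up observed and fitted values:

```python
import numpy as np

y = np.array([2.1, 2.9, 4.2, 5.1, 5.8, 7.2])      # observed values
f = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])      # fitted/predicted values

residuals = y - f
ss_res = np.sum(residuals ** 2)                   # SS_res
ss_tot = np.sum((y - y.mean()) ** 2)              # SS_tot
r_squared = 1 - ss_res / ss_tot

print(r_squared)   # close to 1 here; 0 would mean "no better than predicting the mean"
```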
Values ofR2outside the range 0 to 1 occur when the model fits the data worse than the worst possibleleast-squarespredictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth[12]is used (this is the equation used most often),R2can be less than zero. If equation 2 of Kvålseth is used,R2can be greater than one. In all instances whereR2is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizingSSres. In this case,R2increases as the number of variables in the model is increased (R2ismonotone increasingwith the number of variables included—it will never decrease). This illustrates a drawback to one possible use ofR2, where one might keep adding variables (kitchen sink regression) to increase theR2value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because theR2will never decrease as variables are added and will likely experience an increase due to chance alone. This leads to the alternative approach of looking at theadjustedR2. The explanation of this statistic is almost the same asR2but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, theR2statistic can be calculated as above and may still be a useful measure. If fitting is byweighted least squaresorgeneralized least squares, alternative versions ofR2can be calculated appropriate to those statistical frameworks, while the "raw"R2may still be useful if it is more easily interpreted. Values forR2can be calculated for any type of predictive model, which need not have a statistical basis. Consider a linear model withmore than a single explanatory variable, of the form where, for theith case,Yi{\displaystyle {Y_{i}}}is the response variable,Xi,1,…,Xi,p{\displaystyle X_{i,1},\dots ,X_{i,p}}arepregressors, andεi{\displaystyle \varepsilon _{i}}is a mean zeroerrorterm. The quantitiesβ0,…,βp{\displaystyle \beta _{0},\dots ,\beta _{p}}are unknown coefficients, whose values are estimated byleast squares. The coefficient of determinationR2is a measure of the global fit of the model. Specifically,R2is an element of [0, 1] and represents the proportion of variability inYithat may be attributed to some linear combination of the regressors (explanatory variables) inX.[13] R2is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus,R2= 1 indicates that the fitted model explains all variability iny{\displaystyle y}, whileR2= 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept =y¯{\displaystyle {\bar {y}}}) between the response variable and regressors). An interior value such asR2= 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown,lurking variablesor inherent variability." A caution that applies toR2, as to other statistical descriptions ofcorrelationand association is that "correlation does not imply causation." 
In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause"). In case of a single regressor, fitted by least squares,R2is the square of thePearson product-moment correlation coefficientrelating the regressor and the response variable. More generally,R2is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, theR2can be referred to as thecoefficient of multiple determination. Inleast squaresregression using typical data,R2is at least weakly increasing with an increase in number of regressors in the model. Because increases in the number of regressors increase the value ofR2,R2alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, anF-testcan be performed on theresidual sum of squares[citation needed], similar to the F-tests inGranger causality, though this is not always appropriate[further explanation needed]. As a reminder of this, some authors denoteR2byRq2, whereqis the number of columns inX(the number of explanators including the constant). To demonstrate this property, first recall that the objective of least squares linear regression is whereXiis a row vector of values of explanatory variables for caseiandbis a column vector of coefficients of the respective elements ofXi. The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns ofX{\displaystyle X}(the explanatory data matrix whoseith row isXi) are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting thatSStot{\displaystyle SS_{tot}}depends only ony, the non-decreasing property ofR2follows directly from the definition above. The intuitive reason that using an additional explanatory variable cannot lower theR2is this: MinimizingSSres{\displaystyle SS_{\text{res}}}is equivalent to maximizingR2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and theR2unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves theR2. The above gives an analytical explanation of the inflation ofR2. Next, an example based on ordinary least square from a geometric perspective is shown below.[14] A simple case to be considered first: This equation describes theordinary least squares regressionmodel with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of true value onto a model space inR{\displaystyle \mathbb {R} }(without intercept). The residual is shown as the red line. This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. 
Geometrically, it is the projection of true value onto a larger model space inR2{\displaystyle \mathbb {R} ^{2}}(without intercept). Noticeably, the values ofβ0{\displaystyle \beta _{0}}andβ1{\displaystyle \beta _{1}}are not the same as in the equation for smaller model space as long asX1{\displaystyle X_{1}}andX2{\displaystyle X_{2}}are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space inR2{\displaystyle \mathbb {R} ^{2}}, giving the minimal distance from the space. The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the space, and any other line would be larger than the blue one. Considering the calculation forR2, a smaller value ofSSres{\displaystyle SS_{\text{res}}}will lead to a larger value ofR2, meaning that adding regressors will result in inflation ofR2. R2does not indicate whether: The use of an adjustedR2(one common notation isR¯2{\displaystyle {\bar {R}}^{2}}, pronounced "R bar squared"; another isRa2{\displaystyle R_{\text{a}}^{2}}orRadj2{\displaystyle R_{\text{adj}}^{2}}) is an attempt to account for the phenomenon of theR2automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting.[15]By far the most used one, to the point that it is typically just referred to as adjustedR, is the correction proposed byMordecai Ezekiel.[15][16][17]The adjustedR2is defined asR¯2=1−SSres/dfresSStot/dftot{\displaystyle {\bar {R}}^{2}=1-{\frac {SS_{\text{res}}/{\text{df}}_{\text{res}}}{SS_{\text{tot}}/{\text{df}}_{\text{tot}}}}}where dfresis thedegrees of freedomof the estimate of the population variance around the model, and dftotis the degrees of freedom of the estimate of the population variance around the mean. dfresis given in terms of the sample sizenand the number of variablespin the model,dfres=n−p− 1. dftotis given in the same way, but withpbeing zero for the mean, i.e.dftot=n− 1. Inserting the degrees of freedom and using the definition ofR2, it can be rewritten as:R¯2=1−(1−R2)n−1n−p−1{\displaystyle {\bar {R}}^{2}=1-(1-R^{2}){\frac {n-1}{n-p-1}}}wherepis the total number of explanatory variables in the model (excluding the intercept), andnis the sample size. The adjustedR2can be negative, and its value will always be less than or equal to that ofR2. UnlikeR2, the adjustedR2increases only when the increase inR2(due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjustedR2computed each time, the level at which adjustedR2reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms. The adjustedR2can be interpreted as an instance of thebias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a u-shape curve on the right. For the adjustedR2specifically, the model complexity (i.e.
number of parameters) affects both theR2term and the factor(n−1)/(n−p−1){\displaystyle (n-1)/(n-p-1)}, and thereby captures their attributes in the overall performance of the model. R2can be interpreted as the variance of the model, which is influenced by the model complexity. A highR2indicates a lower bias error because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). InR¯2{\displaystyle {\bar {R}}^{2}}, the term (1 −R2) will be lower with high complexity, resulting in a higherR¯2{\displaystyle {\bar {R}}^{2}}and consistently indicating a better performance. On the other hand, the factor(n−1)/(n−p−1){\displaystyle (n-1)/(n-p-1)}is affected by the model complexity in the opposite direction: it increases when regressors are added (i.e. when the model complexity increases) and so pushesR¯2{\displaystyle {\bar {R}}^{2}}down, indicating a worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance. Considering the calculation ofR¯2{\displaystyle {\bar {R}}^{2}}, more parameters will increase theR2and lead to an increase inR¯2{\displaystyle {\bar {R}}^{2}}. Nevertheless, adding more parameters will also increase the factor(n−1)/(n−p−1){\displaystyle (n-1)/(n-p-1)}and thus decreaseR¯2{\displaystyle {\bar {R}}^{2}}. These two trends construct a reverse u-shape relationship between model complexity andR¯2{\displaystyle {\bar {R}}^{2}}, which is consistent with the u-shape trend of model complexity versus overall performance. UnlikeR2, which will always increase when model complexity increases,R¯2{\displaystyle {\bar {R}}^{2}}will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. UsingR¯2{\displaystyle {\bar {R}}^{2}}instead ofR2could thereby prevent overfitting. Following the same logic, adjustedR2can be interpreted as a less biased estimator of the populationR2, whereas the observed sampleR2is a positively biased estimate of the population value.[18]AdjustedR2is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in thefeature selectionstage of model building.[18] The principle behind the adjustedR2statistic can be seen by rewriting the ordinaryR2asR2=1−VARresVARtot{\displaystyle R^{2}=1-{\frac {{\text{VAR}}_{\text{res}}}{{\text{VAR}}_{\text{tot}}}}}whereVARres=SSres/n{\displaystyle {\text{VAR}}_{\text{res}}=SS_{\text{res}}/n}andVARtot=SStot/n{\displaystyle {\text{VAR}}_{\text{tot}}=SS_{\text{tot}}/n}are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statisticallyunbiasedversions:VARres=SSres/(n−p){\displaystyle {\text{VAR}}_{\text{res}}=SS_{\text{res}}/(n-p)}andVARtot=SStot/(n−1){\displaystyle {\text{VAR}}_{\text{tot}}=SS_{\text{tot}}/(n-1)}. Despite using unbiased estimators for the population variances of the error and the dependent variable, adjustedR2is not an unbiased estimator of the populationR2,[18]the quantity that results from using the population variances of the errors and the dependent variable instead of estimating them.Ingram OlkinandJohn W. Prattderived theminimum-variance unbiased estimatorfor the populationR2,[19]which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjustingR2concluded that in most situations either an approximate version of the Olkin–Pratt estimator[18]or the exact Olkin–Pratt estimator[20]should be preferred over the (Ezekiel) adjustedR2.
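A short simulation (NumPy, hypothetical data in which only one regressor is genuinely related to y) makes both points concrete: R² never decreases as pure-noise regressors are added, while Ezekiel's adjusted R̄² = 1 − (1 − R²)(n − 1)/(n − p − 1) is free to decrease.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + rng.standard_normal(n)         # only x is a genuine regressor

def r2_and_adjusted(X, y):
    """R^2 and Ezekiel's adjusted R^2 for an OLS fit with intercept (p excludes the intercept)."""
    n, p = len(y), X.shape[1]
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    ss_res = np.sum((y - Xd @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, r2_adj

X = x.reshape(-1, 1)
for extra in range(5):
    r2, r2_adj = r2_and_adjusted(X, y)
    print(f"{extra} junk regressors: R^2 = {r2:.4f}, adjusted R^2 = {r2_adj:.4f}")
    X = np.column_stack([X, rng.standard_normal(n)])   # add a pure-noise regressor
```

R² creeps upward at every step, whereas the adjusted value typically stays flat or falls once the added columns are pure noise.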
The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model.[21][22][23]This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model. The calculation for the partialR2is relatively straightforward after estimating two models and generating theANOVAtables for them. The calculation for the partialR2is which is analogous to the usual coefficient of determination: As explained above, model selection heuristics such as the adjustedR2criterion and theF-testexamine whether the totalR2sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the totalR2will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.[24] Alternatively, one can decompose a generalized version ofR2to quantify the relevance of deviating from a hypothesis.[24]As Hoornweg (2018) shows, severalshrinkage estimators– such asBayesian linear regression,ridge regression, and the (adaptive)lasso– make use of this decomposition ofR2when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as It is assumed that the matrixXis standardized with Z-scores and that the column vectory{\displaystyle y}is centered to have a mean of zero. Let the column vectorβ0{\displaystyle \beta _{0}}refer to the hypothesized regression parameters and let the column vectorb{\displaystyle b}denote the estimated parameters. We can then define AnR2of 75% means that the in-sample accuracy improves by 75% if the data-optimizedbsolutions are used instead of the hypothesizedβ0{\displaystyle \beta _{0}}values. In the special case thatβ0{\displaystyle \beta _{0}}is a vector of zeros, we obtain the traditionalR2again. The individual effect onR2of deviating from a hypothesis can be computed withR⊗{\displaystyle R^{\otimes }}('R-outer'). Thisp{\displaystyle p}timesp{\displaystyle p}matrix is given by wherey~0=y−Xβ0{\displaystyle {\tilde {y}}_{0}=y-X\beta _{0}}. The diagonal elements ofR⊗{\displaystyle R^{\otimes }}exactly add up toR2. If regressors are uncorrelated andβ0{\displaystyle \beta _{0}}is a vector of zeros, then thejth{\displaystyle j^{\text{th}}}diagonal element ofR⊗{\displaystyle R^{\otimes }}simply corresponds to ther2value betweenxj{\displaystyle x_{j}}andy{\displaystyle y}. When regressorsxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}are correlated,Rii⊗{\displaystyle R_{ii}^{\otimes }}might increase at the cost of a decrease inRjj⊗{\displaystyle R_{jj}^{\otimes }}. As a result, the diagonal elements ofR⊗{\displaystyle R^{\otimes }}may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements ofR⊗{\displaystyle R^{\otimes }}to quantify the relevance of deviating from a hypothesized value.[24]Click on thelassofor an example. In the case oflogistic regression, usually fit bymaximum likelihood, there are several choices ofpseudo-R2. 
One is the generalizedR2originally proposed by Cox & Snell,[25]and independently by Magee:[26] whereL(0){\displaystyle {\mathcal {L}}(0)}is the likelihood of the model with only the intercept,L(θ^){\displaystyle {{\mathcal {L}}({\widehat {\theta }})}}is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) andnis the sample size. It is easily rewritten to: whereDis the test statistic of thelikelihood ratio test. Nico Nagelkerkenoted that it had the following properties:[27][22] However, in the case of a logistic model, whereL(θ^){\displaystyle {\mathcal {L}}({\widehat {\theta }})}cannot be greater than 1,R2is between 0 andRmax2=1−(L(0))2/n{\displaystyle R_{\max }^{2}=1-({\mathcal {L}}(0))^{2/n}}: thus, Nagelkerke suggested the possibility to define a scaledR2asR2/R2max.[22] Occasionally, residual statistics are used for indicating goodness of fit. Thenormof residuals is calculated as the square-root of thesum of squares of residuals(SSR): Similarly, thereduced chi-squareis calculated as the SSR divided by the degrees of freedom. BothR2and the norm of residuals have their relative merits. Forleast squaresanalysisR2varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage ofR2is theSStot{\displaystyle SS_{\text{tot}}}term acts tonormalizethe value. If theyivalues are all multiplied by a constant, the norm of residuals will also change by that constant butR2will stay the same. As a basic example, for the linear least squares fit to the set of data: R2= 0.998, and norm of residuals = 0.302. If all values ofyare multiplied by 1000 (for example, in anSI prefixchange), thenR2remains the same, but norm of residuals = 302. Another single-parameter indicator of fit is theRMSEof the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.[28] The creation of the coefficient of determination has been attributed to the geneticistSewall Wrightand was first published in 1921.[29]
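A minimal sketch of the Cox & Snell and Nagelkerke quantities, assuming a simulated logistic model fit by numerically maximizing the Bernoulli log-likelihood with SciPy (the data, the single predictor, and the optimizer are illustrative choices, not part of the definitions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 300
x = rng.standard_normal(n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p_true)

def neg_loglik(beta, X, y):
    eta = X @ beta
    return -np.sum(y * eta - np.logaddexp(0.0, eta))   # negative Bernoulli log-likelihood

X_full = np.column_stack([np.ones(n), x])               # intercept + predictor
X_null = np.ones((n, 1))                                # intercept only

ll_full = -minimize(neg_loglik, np.zeros(2), args=(X_full, y)).fun
ll_null = -minimize(neg_loglik, np.zeros(1), args=(X_null, y)).fun

r2_cs = 1 - np.exp((2 / n) * (ll_null - ll_full))       # Cox & Snell generalized R^2
r2_max = 1 - np.exp((2 / n) * ll_null)                  # its upper bound, 1 - L(0)^(2/n)
r2_nagelkerke = r2_cs / r2_max                          # Nagelkerke's scaled version
print(round(r2_cs, 3), round(r2_nagelkerke, 3))
```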
https://en.wikipedia.org/wiki/Coefficient_of_determination
Instatistics, asum of squares due to lack of fit, or more tersely alack-of-fit sum of squares, is one of the components of a partition of thesum of squaresof residuals in ananalysis of variance, used in thenumeratorin anF-testof thenull hypothesisthat says that a proposed model fits well. The other component is thepure-error sum of squares. The pure-error sum of squares is the sum of squared deviations of each value of thedependent variablefrom the average value over all observations sharing itsindependent variablevalue(s). These are errors that could never be avoided by any predictive equation that assigned a predicted value for the dependent variable as a function of the value(s) of the independent variable(s). The remainder of the residual sum of squares is attributed to lack of fit of the model since it would be mathematically possible to eliminate these errors entirely. In order for the lack-of-fit sum of squares to differ from thesum of squares of residuals, there must bemore than onevalue of theresponse variablefor at least one of the values of the set of predictor variables. For example, consider fitting a line by the method ofleast squares. One takes as estimates ofαandβthe values that minimize the sum of squares of residuals, i.e., the sum of squares of the differences between the observedy-value and the fittedy-value. To have a lack-of-fit sum of squares that differs from the residual sum of squares, one must observe more than oney-value for each of one or more of thex-values. One then partitions the "sum of squares due to error", i.e., the sum of squares of residuals, into two components: The sum of squares due to "pure" error is the sum of squares of the differences between each observedy-value and the average of ally-values corresponding to the samex-value. The sum of squares due to lack of fit is theweightedsum of squares of differences between each average ofy-values corresponding to the samex-value and the corresponding fittedy-value, the weight in each case being simply the number of observedy-values for thatx-value.[1][2]Because it is a property of least squares regression that the vector whose components are "pure errors" and the vector of lack-of-fit components are orthogonal to each other, the following equality holds: Hence the residual sum of squares has been completely decomposed into two components. Consider fitting a line with one predictor variable. Defineias an index of each of thendistinctxvalues,jas an index of the response variable observations for a givenxvalue, andnias the number ofyvalues associated with theithxvalue. The value of each response variable observation can be represented by Let be theleast squaresestimates of the unobservable parametersαandβbased on the observed values ofxiandYi j. Let be the fitted values of the response variable. Then are theresiduals, which are observable estimates of the unobservable values of the error termεij. Because of the nature of the method of least squares, the whole vector of residuals, with scalar components, necessarily satisfies the two constraints It is thus constrained to lie in an (N− 2)-dimensional subspace ofRN, i.e. there areN− 2 "degrees of freedomfor error". Now let be the average of allY-values associated with theithx-value. We partition the sum of squares due to error into two components: Suppose theerror termsεi jareindependentandnormally distributedwithexpected value0 andvarianceσ2. We treatxias constant rather than random. 
Then the response variablesYi jare random only because the errorsεi jare random. It can be shown to follow that if the straight-line model is correct, then thesum of squares due to errordivided by the error variance, has achi-squared distributionwithN− 2 degrees of freedom. Moreover, given the total number of observationsN, the number of levels of the independent variablen,and the number of parameters in the modelp: It then follows that the statistic has anF-distributionwith the corresponding number of degrees of freedom in the numerator and the denominator, provided that the model is correct. If the model is wrong, then theprobability distributionof the denominator is still as stated above, and the numerator and denominator are still independent. But the numerator then has anoncentral chi-squared distribution, and consequently the quotient as a whole has anon-central F-distribution. One uses this F-statistic to test thenull hypothesisthat the linear model is correct. Since the non-central F-distribution isstochastically largerthan the (central) F-distribution, one rejects the null hypothesis if the F-statistic is larger than the critical F value. The critical value corresponds to thecumulative distribution functionof theF distributionwithxequal to the desiredconfidence level, and degrees of freedomd1= (n−p) andd2= (N−n). The assumptions ofnormal distributionof errors andindependencecan be shown to entail that thislack-of-fit testis thelikelihood-ratio testof this null hypothesis.
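The partition and the F statistic above can be computed directly once replicate y-values are available at each x-value. The following sketch (NumPy/SciPy) uses a small made-up data set with n = 5 distinct x-levels, N = 12 observations, and a straight-line model (p = 2):

```python
import numpy as np
from scipy import stats

# replicated observations: several y-values at each distinct x-value
x = np.array([1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
y = np.array([1.8, 2.1, 2.0, 3.9, 4.2, 6.3, 5.8, 6.1, 7.7, 8.1, 10.4, 9.8])

N = len(y)
levels = np.unique(x)
n_levels = len(levels)
p = 2                                             # straight line: alpha and beta

beta1, beta0 = np.polyfit(x, y, 1)
fitted = beta0 + beta1 * x
ss_error = np.sum((y - fitted) ** 2)              # residual ("error") sum of squares

# pure error: deviations from the mean of the replicates at each x-level
ss_pure = sum(np.sum((y[x == lv] - y[x == lv].mean()) ** 2) for lv in levels)
ss_lack = ss_error - ss_pure                      # lack-of-fit component

F = (ss_lack / (n_levels - p)) / (ss_pure / (N - n_levels))
p_value = stats.f.sf(F, n_levels - p, N - n_levels)
print(round(F, 3), round(p_value, 3))             # a large p-value: no evidence of lack of fit
```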
https://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares
Instatistics, thereduced chi-square statisticis used extensively ingoodness of fittesting. It is also known asmean squared weighted deviation(MSWD) inisotopic dating[1]andvariance of unit weightin the context ofweighted least squares.[2][3] Its square root is calledregression standard error,[4]standard error of the regression,[5][6]orstandard error of the equation[7](seeOrdinary least squares § Reduced chi-squared) It is defined aschi-squareperdegree of freedom:[8][9][10][11]: 85[12][13][14][15]χν2=χ2ν,{\displaystyle \chi _{\nu }^{2}={\frac {\chi ^{2}}{\nu }},}where the chi-squared is a weighted sum of squareddeviations:χ2=∑i(Oi−Ci)2σi2{\displaystyle \chi ^{2}=\sum _{i}{\frac {(O_{i}-C_{i})^{2}}{\sigma _{i}^{2}}}}with inputs:varianceσi2{\displaystyle \sigma _{i}^{2}}, observationsO, and calculated dataC.[8]The degree of freedom,ν=n−m{\displaystyle \nu =n-m}, equals the number of observationsnminus the number of fitted parametersm. Inweighted least squares, the definition is often written in matrix notation asχν2=rTWrν,{\displaystyle \chi _{\nu }^{2}={\frac {r^{\mathrm {T} }Wr}{\nu }},}whereris the vector of residuals, andWis the weight matrix, the inverse of the input (diagonal) covariance matrix of observations. IfWis non-diagonal, thengeneralized least squaresapplies. Inordinary least squares, the definition simplifies to:χν2=RSSν,{\displaystyle \chi _{\nu }^{2}={\frac {\mathrm {RSS} }{\nu }},}RSS=∑r2,{\displaystyle \mathrm {RSS} =\sum r^{2},}where the numerator is theresidual sum of squares(RSS). When the fit is just an ordinary mean, thenχν2{\displaystyle \chi _{\nu }^{2}}equals thesample variance, the squared samplestandard deviation. As a general rule, when the variance of the measurement error is knowna priori, aχν2≫1{\displaystyle \chi _{\nu }^{2}\gg 1}indicates a poor model fit. Aχν2>1{\displaystyle \chi _{\nu }^{2}>1}indicates that the fit has not fully captured the data (or that the error variance has been underestimated). In principle, a value ofχν2{\displaystyle \chi _{\nu }^{2}}around1{\displaystyle 1}indicates that the extent of the match between observations and estimates is in accord with the error variance. Aχν2<1{\displaystyle \chi _{\nu }^{2}<1}indicates that the model is "overfitting" the data: either the model is improperly fitting noise, or the error variance has been overestimated.[11]:89 When the variance of the measurement error is only partially known,the reduced chi-squared may serve as a correction estimateda posteriori. Ingeochronology, the MSWD is a measure of goodness of fit that takes into account the relative importance of both the internal and external reproducibility, with most common usage in isotopic dating.[16][17][1][18][19][20] In general when: MSWD = 1 if the age data fit aunivariate normal distributionint(for thearithmetic meanage) or log(t) (for thegeometric meanage) space, or if the compositional data fit a bivariate normal distribution in [log(U/He),log(Th/He)]-space (for the central age). MSWD < 1 if the observed scatter is less than that predicted by the analytical uncertainties. In this case, the data are said to be "underdispersed", indicating that the analytical uncertainties were overestimated. MSWD > 1 if the observed scatter exceeds that predicted by the analytical uncertainties. In this case, the data are said to be "overdispersed". This situation is the rule rather than the exception in (U-Th)/He geochronology, indicating an incomplete understanding of the isotope system. 
Several reasons have been proposed to explain the overdispersion of (U-Th)/He data, including unevenly distributed U-Th distributions and radiation damage. Often the geochronologist will determine a series of age measurements on a single sample, with the measured valuexi{\displaystyle x_{i}}having a weightingwi{\displaystyle w_{i}}and an associated errorσxi{\displaystyle \sigma _{x_{i}}}for each age determination. As regards weighting, one can either weight all of the measured ages equally, or weight them by the proportion of the sample that they represent. For example, if two thirds of the sample was used for the first measurement and one third for the second and final measurement, then one might weight the first measurement twice that of the second. The arithmetic mean of the age determinations isx¯=∑i=1NxiN,{\displaystyle {\overline {x}}={\frac {\sum _{i=1}^{N}x_{i}}{N}},}but this value can be misleading, unless each determination of the age is of equal significance. When each measured value can be assumed to have the same weighting, or significance, the biased and unbiased (or "sample" and "population" respectively) estimators of the variance are computed as follows:σ2=∑i=1N(xi−x¯)2Nands2=NN−1⋅σ2=1N−1⋅∑i=1N(xi−x¯)2.{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}{N}}{\text{ and }}s^{2}={\frac {N}{N-1}}\cdot \sigma ^{2}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}.} The standard deviation is the square root of the variance. When individual determinations of an age are not of equal significance, it is better to use a weighted mean to obtain an "average" age, as follows:x¯∗=∑i=1Nwixi∑i=1Nwi.{\displaystyle {\overline {x}}^{*}={\frac {\sum _{i=1}^{N}w_{i}x_{i}}{\sum _{i=1}^{N}w_{i}}}.} The biased weighted estimator of variance can be shown to beσ2=∑i=1Nwi(xi−x¯∗)2∑i=1Nwi,{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{\sum _{i=1}^{N}w_{i}}},}which can be computed asσ2=∑i=1Nwixi2⋅∑i=1Nwi−(∑i=1Nwixi)2(∑i=1Nwi)2.{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}}}.} The unbiased weighted estimator of the sample variance can be computed as follows:s2=∑i=1Nwi(∑i=1Nwi)2−∑i=1Nwi2⋅∑i=1Nwi(xi−x¯∗)2.{\displaystyle s^{2}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}.}Again, the corresponding standard deviation is the square root of the variance. 
The unbiased weighted estimator of the sample variance can also be computed on the fly as follows:s2=∑i=1Nwixi2⋅∑i=1Nwi−(∑i=1Nwixi)2(∑i=1Nwi)2−∑i=1Nwi2.{\displaystyle s^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}.} The unweighted mean square of the weighted deviations (unweighted MSWD) can then be computed, as follows:MSWDu=1N−1⋅∑i=1N(xi−x¯)2σxi2.{\displaystyle {\text{MSWD}}_{u}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}{\frac {(x_{i}-{\overline {x}})^{2}}{\sigma _{x_{i}}^{2}}}.} By analogy, the weighted mean square of the weighted deviations (weighted MSWD) can be computed as follows:MSWDw=∑i=1Nwi(∑i=1Nwi)2−∑i=1Nwi2⋅∑i=1Nwi(xi−x¯∗)2(σxi)2.{\displaystyle {\text{MSWD}}_{w}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot \sum _{i=1}^{N}{\frac {w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{(\sigma _{x_{i}})^{2}}}.} In data analysis based on theRasch model, the reduced chi-squared statistic is called the outfit mean-square statistic, and the information-weighted reduced chi-squared statistic is called the infit mean-square statistic.[21]
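A minimal numerical sketch of the weighted mean, the weighted MSWD formula above, and the corresponding reduced chi-squared, using hypothetical age determinations and 1σ uncertainties (the weighting wᵢ = 1/σᵢ² is one common choice, not the only one):

```python
import numpy as np

# hypothetical age determinations (Ma) with their 1-sigma analytical uncertainties
ages = np.array([100.2, 101.5, 99.8, 100.9, 100.4])
sigma = np.array([0.5, 0.6, 0.4, 0.7, 0.5])
w = 1.0 / sigma ** 2                              # one common weighting choice

x_bar_w = np.sum(w * ages) / np.sum(w)            # weighted mean age

# weighted MSWD from the formula above
mswd_w = (np.sum(w) / (np.sum(w) ** 2 - np.sum(w ** 2))) * np.sum(
    w * (ages - x_bar_w) ** 2 / sigma ** 2
)

# reduced chi-squared of the same fit to a constant: chi^2 / (n - m), with m = 1 parameter
chi2 = np.sum((ages - x_bar_w) ** 2 / sigma ** 2)
chi2_red = chi2 / (len(ages) - 1)

print(round(x_bar_w, 2), round(mswd_w, 2), round(chi2_red, 2))
# values near 1 indicate scatter consistent with the stated analytical uncertainties;
# values well above 1 indicate overdispersion, well below 1 underdispersion
```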
https://en.wikipedia.org/wiki/Reduced_chi-squared
Instatistics, theChapman–Robbins boundorHammersley–Chapman–Robbins boundis a lower bound on thevarianceofestimatorsof a deterministic parameter. It is a generalization of theCramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute. The bound was independently discovered byJohn Hammersleyin 1950,[1]and by Douglas Chapman andHerbert Robbinsin 1951.[2] LetΘ{\displaystyle \Theta }be the set of parameters for a family of probability distributions{μθ:θ∈Θ}{\displaystyle \{\mu _{\theta }:\theta \in \Theta \}}onΩ{\displaystyle \Omega }. For any twoθ,θ′∈Θ{\displaystyle \theta ,\theta '\in \Theta }, letχ2(μθ′;μθ){\displaystyle \chi ^{2}(\mu _{\theta '};\mu _{\theta })}be theχ2{\displaystyle \chi ^{2}}-divergencefromμθ{\displaystyle \mu _{\theta }}toμθ′{\displaystyle \mu _{\theta '}}. Then: Theorem—Given any scalar random variableg^:Ω→R{\displaystyle {\hat {g}}:\Omega \to \mathbb {R} }, and any twoθ,θ′∈Θ{\displaystyle \theta ,\theta '\in \Theta }, we haveVarθ⁡[g^]≥supθ′≠θ∈Θ(Eθ′[g^]−Eθ[g^])2χ2(μθ′;μθ){\displaystyle \operatorname {Var} _{\theta }[{\hat {g}}]\geq \sup _{\theta '\neq \theta \in \Theta }{\frac {(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{2}}{\chi ^{2}(\mu _{\theta '};\mu _{\theta })}}}. A generalization to the multivariable case is:[3] Theorem—Given any multivariate random variableg^:Ω→Rm{\displaystyle {\hat {g}}:\Omega \to \mathbb {R} ^{m}}, and anyθ,θ′∈Θ{\displaystyle \theta ,\theta '\in \Theta },χ2(μθ′;μθ)≥(Eθ′[g^]−Eθ[g^])TCovθ⁡[g^]−1(Eθ′[g^]−Eθ[g^]){\displaystyle \chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq (E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{T}\operatorname {Cov} _{\theta }[{\hat {g}}]^{-1}(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])} By thevariational representation of chi-squared divergence:[3]χ2(P;Q)=supg(EP[g]−EQ[g])2VarQ⁡[g]{\displaystyle \chi ^{2}(P;Q)=\sup _{g}{\frac {(E_{P}[g]-E_{Q}[g])^{2}}{\operatorname {Var} _{Q}[g]}}}Plug ing=g^,P=μθ′,Q=μθ{\displaystyle g={\hat {g}},P=\mu _{\theta '},Q=\mu _{\theta }}, to obtain:χ2(μθ′;μθ)≥(Eθ′[g^]−Eθ[g^])2Varθ⁡[g^]{\displaystyle \chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq {\frac {(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{2}}{\operatorname {Var} _{\theta }[{\hat {g}}]}}}Switch the denominator and the left side and take supremum overθ′{\displaystyle \theta '}to obtain the single-variate case. For the multivariate case, we defineh=∑i=1mvig^i{\textstyle h=\sum _{i=1}^{m}v_{i}{\hat {g}}_{i}}for anyv≠0∈Rm{\displaystyle v\neq 0\in \mathbb {R} ^{m}}. Then plug ing=h{\displaystyle g=h}in the variational representation to obtain:χ2(μθ′;μθ)≥(Eθ′[h]−Eθ[h])2Varθ⁡[h]=⟨v,Eθ′[g^]−Eθ[g^]⟩2vTCovθ⁡[g^]v{\displaystyle \chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq {\frac {(E_{\theta '}[h]-E_{\theta }[h])^{2}}{\operatorname {Var} _{\theta }[h]}}={\frac {\langle v,E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}]\rangle ^{2}}{v^{T}\operatorname {Cov} _{\theta }[{\hat {g}}]v}}}Take supremum overv≠0∈Rm{\displaystyle v\neq 0\in \mathbb {R} ^{m}}, using the linear algebra fact thatsupv≠0vTwwTvvTMv=wTM−1w{\displaystyle \sup _{v\neq 0}{\frac {v^{T}ww^{T}v}{v^{T}Mv}}=w^{T}M^{-1}w}, we obtain the multivariate case. 
Usually,Ω=Xn{\displaystyle \Omega ={\mathcal {X}}^{n}}is the sample space ofn{\displaystyle n}independent draws of aX{\displaystyle {\mathcal {X}}}-valued random variableX{\displaystyle X}with distributionλθ{\displaystyle \lambda _{\theta }}belonging to a family of probability distributions parameterized byθ∈Θ⊆Rm{\displaystyle \theta \in \Theta \subseteq \mathbb {R} ^{m}},μθ=λθ⊗n{\displaystyle \mu _{\theta }=\lambda _{\theta }^{\otimes n}}is itsn{\displaystyle n}-fold product measure, andg^:Xn→Θ{\displaystyle {\hat {g}}:{\mathcal {X}}^{n}\to \Theta }is an estimator ofθ{\displaystyle \theta }. Then, form=1{\displaystyle m=1}, the expression inside the supremum in the Chapman–Robbins bound converges to theCramér–Rao boundofg^{\displaystyle {\hat {g}}}whenθ′→θ{\displaystyle \theta '\to \theta }, assuming the regularity conditions of the Cramér–Rao bound hold. This implies that, when both bounds exist, the Chapman–Robbins version is always at least as tight as the Cramér–Rao bound; in many cases, it is substantially tighter. The Chapman–Robbins bound also holds under much weaker regularity conditions. For example, no assumption is made regarding differentiability of the probability density functionp(x;θ) ofλθ{\displaystyle \lambda _{\theta }}. Whenp(x;θ) is non-differentiable, theFisher informationis not defined, and hence the Cramér–Rao bound does not exist.
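A worked illustration of this last point (an illustrative setup, not taken from the statement above: a single observation X ~ Uniform(0, θ), whose support depends on θ so that the Fisher information and the Cramér–Rao bound are unavailable, and the unbiased estimator ĝ(X) = 2X). The χ²-divergence between Uniform(0, θ′) and Uniform(0, θ) is θ/θ′ − 1 for θ′ < θ and infinite otherwise, so the Chapman–Robbins bound can be evaluated numerically over a grid of θ′:

```python
import numpy as np

theta = 1.0
theta_prime = np.linspace(0.01, 0.99, 991)      # only theta' < theta keeps chi^2 finite

chi2 = theta / theta_prime - 1.0                # chi^2(Uniform(0, theta'); Uniform(0, theta))
numerator = (theta_prime - theta) ** 2          # (E_{theta'}[g_hat] - E_theta[g_hat])^2

cr_bound = np.max(numerator / chi2)             # Chapman-Robbins lower bound on the variance
true_var = theta ** 2 / 3.0                     # exact variance of g_hat(X) = 2X under theta

print(cr_bound)   # about theta^2 / 4 = 0.25, attained near theta' = theta / 2
print(true_var)   # 0.333..., which respects the bound
```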
https://en.wikipedia.org/wiki/Chapman%E2%80%93Robbins_bound
Ininformation theoryandstatistics,Kullback's inequalityis a lower bound on theKullback–Leibler divergenceexpressed in terms of thelarge deviationsrate function.[1]IfPandQareprobability distributionson the real line, such thatPisabsolutely continuouswith respect toQ, i.e.P<<Q, and whose first moments exist, thenDKL(P∥Q)≥ΨQ∗(μ1′(P)),{\displaystyle D_{KL}(P\parallel Q)\geq \Psi _{Q}^{*}(\mu '_{1}(P)),}whereΨQ∗{\displaystyle \Psi _{Q}^{*}}is the rate function, i.e. theconvex conjugateof thecumulant-generating function, ofQ{\displaystyle Q}, andμ1′(P){\displaystyle \mu '_{1}(P)}is the firstmomentofP.{\displaystyle P.} TheCramér–Rao boundis a corollary of this result. LetPandQbeprobability distributions(measures) on the real line, whose first moments exist, and such thatP<<Q. Consider thenatural exponential familyofQgiven byQθ(A)=∫AeθxQ(dx)∫−∞∞eθxQ(dx)=1MQ(θ)∫AeθxQ(dx){\displaystyle Q_{\theta }(A)={\frac {\int _{A}e^{\theta x}Q(dx)}{\int _{-\infty }^{\infty }e^{\theta x}Q(dx)}}={\frac {1}{M_{Q}(\theta )}}\int _{A}e^{\theta x}Q(dx)}for every measurable setA, whereMQ{\displaystyle M_{Q}}is themoment-generating functionofQ. (Note thatQ0=Q.) ThenDKL(P∥Q)=DKL(P∥Qθ)+∫supp⁡P(log⁡dQθdQ)dP.{\displaystyle D_{KL}(P\parallel Q)=D_{KL}(P\parallel Q_{\theta })+\int _{\operatorname {supp} P}\left(\log {\frac {\mathrm {d} Q_{\theta }}{\mathrm {d} Q}}\right)\mathrm {d} P.}ByGibbs' inequalitywe haveDKL(P∥Qθ)≥0{\displaystyle D_{KL}(P\parallel Q_{\theta })\geq 0}so thatDKL(P∥Q)≥∫supp⁡P(log⁡dQθdQ)dP=∫supp⁡P(log⁡eθxMQ(θ))P(dx){\displaystyle D_{KL}(P\parallel Q)\geq \int _{\operatorname {supp} P}\left(\log {\frac {\mathrm {d} Q_{\theta }}{\mathrm {d} Q}}\right)\mathrm {d} P=\int _{\operatorname {supp} P}\left(\log {\frac {e^{\theta x}}{M_{Q}(\theta )}}\right)P(dx)}Simplifying the right side, we have, for every realθwhereMQ(θ)<∞:{\displaystyle M_{Q}(\theta )<\infty :}DKL(P∥Q)≥μ1′(P)θ−ΨQ(θ),{\displaystyle D_{KL}(P\parallel Q)\geq \mu '_{1}(P)\theta -\Psi _{Q}(\theta ),}whereμ1′(P){\displaystyle \mu '_{1}(P)}is the first moment, or mean, ofP, andΨQ=log⁡MQ{\displaystyle \Psi _{Q}=\log M_{Q}}is called thecumulant-generating function. Taking the supremum completes the process ofconvex conjugationand yields therate function:DKL(P∥Q)≥supθ{μ1′(P)θ−ΨQ(θ)}=ΨQ∗(μ1′(P)).{\displaystyle D_{KL}(P\parallel Q)\geq \sup _{\theta }\left\{\mu '_{1}(P)\theta -\Psi _{Q}(\theta )\right\}=\Psi _{Q}^{*}(\mu '_{1}(P)).} LetXθbe a family of probability distributions on the real line indexed by the real parameter θ, and satisfying certainregularity conditions. 
Thenlimh→0DKL(Xθ+h∥Xθ)h2≥limh→0Ψθ∗(μθ+h)h2,{\displaystyle \lim _{h\to 0}{\frac {D_{KL}(X_{\theta +h}\parallel X_{\theta })}{h^{2}}}\geq \lim _{h\to 0}{\frac {\Psi _{\theta }^{*}(\mu _{\theta +h})}{h^{2}}},} whereΨθ∗{\displaystyle \Psi _{\theta }^{*}}is theconvex conjugateof thecumulant-generating functionofXθ{\displaystyle X_{\theta }}andμθ+h{\displaystyle \mu _{\theta +h}}is the first moment ofXθ+h.{\displaystyle X_{\theta +h}.} The left side of this inequality can be simplified as follows:limh→0DKL(Xθ+h∥Xθ)h2=limh→01h2∫−∞∞log⁡(dXθ+hdXθ)dXθ+h=−limh→01h2∫−∞∞log⁡(dXθdXθ+h)dXθ+h=−limh→01h2∫−∞∞log⁡(1−(1−dXθdXθ+h))dXθ+h=limh→01h2∫−∞∞[(1−dXθdXθ+h)+12(1−dXθdXθ+h)2+o((1−dXθdXθ+h)2)]dXθ+hTaylor series forlog⁡(1−t)=limh→01h2∫−∞∞[12(1−dXθdXθ+h)2]dXθ+h=limh→01h2∫−∞∞[12(dXθ+h−dXθdXθ+h)2]dXθ+h=12IX(θ){\displaystyle {\begin{aligned}\lim _{h\to 0}{\frac {D_{KL}(X_{\theta +h}\parallel X_{\theta })}{h^{2}}}&=\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\log \left({\frac {\mathrm {d} X_{\theta +h}}{\mathrm {d} X_{\theta }}}\right)\mathrm {d} X_{\theta +h}\\&=-\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\log \left({\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)\mathrm {d} X_{\theta +h}\\&=-\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\log \left(1-\left(1-{\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)\right)\mathrm {d} X_{\theta +h}\\&=\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\left[\left(1-{\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)+{\frac {1}{2}}\left(1-{\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)^{2}+o\left(\left(1-{\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)^{2}\right)\right]\mathrm {d} X_{\theta +h}&&{\text{Taylor series for }}\log(1-t)\\&=\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\left[{\frac {1}{2}}\left(1-{\frac {\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)^{2}\right]\mathrm {d} X_{\theta +h}\\&=\lim _{h\to 0}{\frac {1}{h^{2}}}\int _{-\infty }^{\infty }\left[{\frac {1}{2}}\left({\frac {\mathrm {d} X_{\theta +h}-\mathrm {d} X_{\theta }}{\mathrm {d} X_{\theta +h}}}\right)^{2}\right]\mathrm {d} X_{\theta +h}\\&={\frac {1}{2}}{\mathcal {I}}_{X}(\theta )\end{aligned}}}which is half theFisher informationof the parameterθ. 
The right side of the inequality can be developed as follows:limh→0Ψθ∗(μθ+h)h2=limh→01h2supt{μθ+ht−Ψθ(t)}.{\displaystyle \lim _{h\to 0}{\frac {\Psi _{\theta }^{*}(\mu _{\theta +h})}{h^{2}}}=\lim _{h\to 0}{\frac {1}{h^{2}}}{\sup _{t}\{\mu _{\theta +h}t-\Psi _{\theta }(t)\}}.}This supremum is attained at a value oft=τ where the first derivative of the cumulant-generating function isΨθ′(τ)=μθ+h,{\displaystyle \Psi '_{\theta }(\tau )=\mu _{\theta +h},}but we haveΨθ′(0)=μθ,{\displaystyle \Psi '_{\theta }(0)=\mu _{\theta },}so thatΨθ″(0)=dμθdθlimh→0hτ.{\displaystyle \Psi ''_{\theta }(0)={\frac {d\mu _{\theta }}{d\theta }}\lim _{h\to 0}{\frac {h}{\tau }}.}Moreover,limh→0Ψθ∗(μθ+h)h2=12Ψθ″(0)(dμθdθ)2=12Var⁡(Xθ)(dμθdθ)2.{\displaystyle \lim _{h\to 0}{\frac {\Psi _{\theta }^{*}(\mu _{\theta +h})}{h^{2}}}={\frac {1}{2\Psi ''_{\theta }(0)}}\left({\frac {d\mu _{\theta }}{d\theta }}\right)^{2}={\frac {1}{2\operatorname {Var} (X_{\theta })}}\left({\frac {d\mu _{\theta }}{d\theta }}\right)^{2}.} We have:12IX(θ)≥12Var⁡(Xθ)(dμθdθ)2,{\displaystyle {\frac {1}{2}}{\mathcal {I}}_{X}(\theta )\geq {\frac {1}{2\operatorname {Var} (X_{\theta })}}\left({\frac {d\mu _{\theta }}{d\theta }}\right)^{2},}which can be rearranged as:Var⁡(Xθ)≥(dμθ/dθ)2IX(θ).{\displaystyle \operatorname {Var} (X_{\theta })\geq {\frac {(d\mu _{\theta }/d\theta )^{2}}{{\mathcal {I}}_{X}(\theta )}}.}
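Two small numerical checks of the results above may help fix ideas; the choices of distributions here are illustrative and not taken from the article.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 1) Kullback's inequality for P = N(mu, 1), Q = N(0, 1): the cumulant-generating
#    function of Q is Psi_Q(t) = t^2/2, D_KL(P || Q) = mu^2/2, and the bound is
#    Psi_Q*(mu) = sup_t { mu*t - t^2/2 }, so the inequality holds with equality.
mu = 1.7
rate = -minimize_scalar(lambda t: -(mu * t - 0.5 * t ** 2)).fun   # convex conjugate at mu
print(0.5 * mu ** 2, rate)                                        # D_KL >= rate (equal here)

# 2) The Cramér–Rao corollary for X_theta ~ Exponential(rate=theta), where the
#    mean is 1/theta, the score is 1/theta - x, and I(theta) = 1/theta^2.
rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=1 / theta, size=500_000)
fisher = np.var(1 / theta - x)                     # Monte Carlo Fisher information
print(np.var(x), (1 / theta ** 2) ** 2 / fisher)   # Var(X) >= (d mu/d theta)^2 / I
```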
https://en.wikipedia.org/wiki/Kullback%27s_inequality
Inmathematics, theBrascamp–Lieb inequalityis either of two inequalities. The first is a result ingeometryconcerningintegrable functionsonn-dimensionalEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}. It generalizes theLoomis–Whitney inequalityandHölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named afterHerm Jan BrascampandElliott H. Lieb. Fixnatural numbersmandn. For 1 ≤i≤m, letni∈Nand letci> 0 so that Choose non-negative, integrable functions andsurjectivelinear maps Then the following inequality holds: whereDis given by Another way to state this is that the constantDis what one would obtain by restricting attention to the case in which eachfi{\displaystyle f_{i}}is a centered Gaussian function, namelyfi(y)=exp⁡{−(y,Aiy)}{\displaystyle f_{i}(y)=\exp\{-(y,\,A_{i}\,y)\}}.[1] Consider a probability density functionp(x)=exp⁡(−ϕ(x)){\displaystyle p(x)=\exp(-\phi (x))}. This probability density functionp(x){\displaystyle p(x)}is said to be alog-concave measureif theϕ(x){\displaystyle \phi (x)}function is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode ofp(x){\displaystyle p(x)}. The Brascamp–Lieb inequality gives another characterization of the compactness ofp(x){\displaystyle p(x)}by bounding the mean of any statisticS(x){\displaystyle S(x)}. Formally, letS(x){\displaystyle S(x)}be any derivable function. The Brascamp–Lieb inequality reads: where H is theHessianand∇{\displaystyle \nabla }is theNabla symbol.[2] The inequality is generalized in 2008[3]to account for both continuous and discrete cases, and for all linear maps, with precise estimates on the constant. Definition: theBrascamp-Lieb datum (BL datum) For anyfi∈L1(Rdi){\displaystyle f_{i}\in L^{1}(R^{d_{i}})}withfi≥0{\displaystyle f_{i}\geq 0}, defineBL(B,p,f):=∫H∏j=1m(fj∘Bj)pj∏j=1m(∫Hjfj)pj{\displaystyle BL(B,p,f):={\frac {\int _{H}\prod _{j=1}^{m}\left(f_{j}\circ B_{j}\right)^{p_{j}}}{\prod _{j=1}^{m}\left(\int _{H_{j}}f_{j}\right)^{p_{j}}}}} Now define theBrascamp-Lieb constantfor the BL datum:BL(B,p)=maxfBL(B,p,f){\displaystyle BL(B,p)=\max _{f}BL(B,p,f)} Theorem—(BCCT, 2007) BL(B,p){\displaystyle BL(B,p)}is finite iffd=∑ipidi{\displaystyle d=\sum _{i}p_{i}d_{i}}, and for all subspaceV{\displaystyle V}ofRd{\displaystyle \mathbb {R} ^{d}}, dim(V)≤∑ipidim(Bi(V)){\displaystyle dim(V)\leq \sum _{i}p_{i}dim(B_{i}(V))} BL(B,p){\displaystyle BL(B,p)}is reached by gaussians: ∫H∏j=1m(fj∘Bj)pj∏j=1m(∫Hjfj)pj→∞{\displaystyle {\frac {\int _{H}\prod _{j=1}^{m}\left(f_{j}\circ B_{j}\right)^{p_{j}}}{\prod _{j=1}^{m}\left(\int _{H_{j}}f_{j}\right)^{p_{j}}}}\to \infty } Setup: With this setup, we have (Theorem 2.4,[4]Theorem 3.12[5]) Theorem—If there exists somes1,...,sn∈[0,1]{\displaystyle s_{1},...,s_{n}\in [0,1]}such that rank(H)≤∑jsjrank(ϕj(H))∀H≤G{\displaystyle rank(H)\leq \sum _{j}s_{j}rank(\phi _{j}(H))\quad \forall H\leq G} Then for all0≥fj∈ℓ1/sj(Gj){\displaystyle 0\geq f_{j}\in \ell ^{1/s_{j}}(G_{j})}, ‖∏jfj∘ϕj‖1≤|T(G)|∏j‖fj‖1/sj{\displaystyle \left\|\prod _{j}f_{j}\circ \phi _{j}\right\|_{1}\leq |T(G)|\prod _{j}\|f_{j}\|_{1/s_{j}}}and in particular, |E|≤|T(G)|∏j|ϕj(E)|sj∀E⊂G{\displaystyle |E|\leq |T(G)|\prod _{j}|\phi _{j}(E)|^{s_{j}}\quad \forall E\subset G} Note that the constant|T(G)|{\displaystyle |T(G)|}is not always tight. 
Given BL datum(B,p){\displaystyle (B,p)}, the conditions forBL(B,p)<∞{\displaystyle BL(B,p)<\infty }are Thus, the subset ofp∈[0,∞)n{\displaystyle p\in [0,\infty )^{n}}that satisfies the above two conditions is a closedconvex polytopedefined by linear inequalities. This is the BL polytope. Note that while there are infinitely many possible choices of subspaceV{\displaystyle V}ofRd{\displaystyle \mathbb {R} ^{d}}, there are only finitely many possible equations ofdim(V)≤∑ipidim(Bi(V)){\displaystyle dim(V)\leq \sum _{i}p_{i}dim(B_{i}(V))}, so the subset is a closed convex polytope. Similarly we can define the BL polytope for the discrete case. The case of the Brascamp–Lieb inequality in which all theniare equal to 1 was proved earlier than the general case.[6]In 1989, Keith Ball introduced a "geometric form" of this inequality. Suppose that(ui)i=1m{\displaystyle (u_{i})_{i=1}^{m}}are unit vectors inRn{\displaystyle \mathbb {R} ^{n}}and(ci)i=1m{\displaystyle (c_{i})_{i=1}^{m}}are positive numbers satisfying for allx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}, and that(fi)i=1m{\displaystyle (f_{i})_{i=1}^{m}}are positive measurable functions onR{\displaystyle \mathbb {R} }. Then Thus, when the vectors(ui){\displaystyle (u_{i})}resolve the inner product the inequality has a particularly simple form: the constant is equal to 1 and the extremal Gaussian densities are identical. Ball used this inequality to estimate volume ratios and isoperimetric quotients for convex sets.[7][8] There is also a geometric version of the more general inequality in which the mapsBi{\displaystyle B_{i}}are orthogonal projections and whereI{\displaystyle I}is the identity operator onRn{\displaystyle \mathbb {R} ^{n}}. Takeni=n,Bi= id, theidentity maponRn{\displaystyle \mathbb {R} ^{n}}, replacingfibyfi1/ci, and letci= 1 /pifor 1 ≤i≤m. Then and thelog-concavityof thedeterminantof apositive definite matriximplies thatD= 1. This yields Hölder's inequality inRn{\displaystyle \mathbb {R} ^{n}}: The Brascamp–Lieb inequality is an extension of thePoincaré inequalitywhich only concerns Gaussian probability distributions.[9] The Brascamp–Lieb inequality is also related to theCramér–Rao bound.[9]While Brascamp–Lieb is an upper bound, the Cramér–Rao bound gives a lower bound on the variancevarp⁡(S(x)){\displaystyle \operatorname {var} _{p}(S(x))}. The Cramér–Rao bound states which is very similar to the Brascamp–Lieb inequality in the alternative form shown above.
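As a rough Monte Carlo sketch of the log-concave (variance) form discussed above, which in its usual statement bounds varp(S(x)) by Ep[∇S(x)ᵀ H(φ(x))⁻¹ ∇S(x)] for p(x) = exp(−φ(x)), the snippet below checks the Gaussian case with a linear statistic, where the bound is attained; the matrix and the statistic are illustrative choices, not from the article.

```python
import numpy as np

# Gaussian p(x) = exp(-phi(x)) with phi(x) = x^T A x / 2, so H(phi) = A and the
# covariance is A^{-1}; the statistic is S(x) = a . x with constant gradient a.

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])       # Hessian of phi
cov = np.linalg.inv(A)
a = np.array([1.0, -2.0])

x = rng.multivariate_normal(np.zeros(2), cov, size=200_000)
S = x @ a

lhs = S.var()                                # Var_p[S]
rhs = a @ cov @ a                            # E_p[ a . H(phi)^{-1} a ]
print(lhs, rhs)                              # lhs <= rhs (equality for Gaussians)
```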
https://en.wikipedia.org/wiki/Brascamp%E2%80%93Lieb_inequality
Aprediction(Latinpræ-, "before," anddictum, "something said"[1]) orforecastis a statement about afutureeventor about futuredata. Predictions are often, but not always, based upon experience or knowledge of forecasters. There is no universal agreement about the exact difference between "prediction" and "estimation"; different authors and disciplines ascribe differentconnotations. Future events are necessarilyuncertain, so guaranteed accurate information about the future is impossible. Prediction can be useful to assist in makingplansabout possible developments. In a non-statistical sense, the term "prediction" is often used to refer to aninformed guess or opinion. A prediction of this kind might be informed by a predicting person'sabductive reasoning,inductive reasoning,deductive reasoning, andexperience; and may be useful—if the predicting person is aknowledgeable personin the field.[2] TheDelphi methodis a technique for eliciting such expert-judgement-based predictions in a controlled way. This type of prediction might be perceived as consistent with statistical techniques in the sense that, at minimum, the "data" being used is the predicting expert'scognitive experiencesforming anintuitive"probability curve." Instatistics, prediction is a part ofstatistical inference. One particular approach to such inference is known aspredictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one possible description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known asforecasting.[3][failed verification]Forecasting usually requirestime seriesmethods, while prediction is often performed oncross-sectional data. Statistical techniques used for prediction includeregressionand its various sub-categories such aslinear regression,generalized linear models(logistic regression,Poisson regression,Probit regression), etc. In case of forecasting,autoregressive moving average modelsandvector autoregressionmodels can be utilized. When these and/or related, generalized set of regression ormachine learningmethods are deployed in commercial usage, the field is known aspredictive analytics.[4] In many applications, such as time series analysis, it is possible to estimate the models that generate the observations. If models can be expressed astransfer functionsor in terms of state-space parameters then smoothed, filtered and predicted data estimates can be calculated.[citation needed]If the underlying generating models are linear then a minimum-varianceKalman filterand a minimum-variance smoother may be used to recover data of interest from noisy measurements. These techniques rely on one-step-ahead predictors (which minimise the variance of theprediction error). When the generating models are nonlinear then stepwise linearizations may be applied withinExtended Kalman Filterand smoother recursions. However, in nonlinear cases, optimum minimum-variance performance guarantees no longer apply.[5] To use regression analysis for prediction, data are collected on the variable that is to be predicted, called thedependent variableor response variable, and on one or more variables whose values arehypothesizedto influence it, calledindependent variablesor explanatory variables. 
Afunctional form, often linear, is hypothesized for the postulated causal relationship, and theparametersof the function areestimatedfrom the data—that is, are chosen so as to optimize in some way thefitof the function, thus parameterized, to the data. That is the estimation step. For the prediction step, explanatory variable values that are deemed relevant to future (or current but not yet observed) values of the dependent variable are input to the parameterized function to generate predictions for the dependent variable.[6] An unbiased performance estimate of a model can be obtained onhold-out test sets. The predictions can visually be compared to the ground truth in aparity plot. In science, a prediction is a rigorous, often quantitative, statement, forecasting what would be observed under specific conditions; for example, according to theories ofgravity, if an apple fell from a tree it would be seen to move towards the center of the Earth with a specified and constantacceleration. Thescientific methodis built on testing statements that arelogical consequencesof scientific theories. This is done through repeatableexperimentsor observational studies. Ascientific theorywhose predictions are contradicted by observations and evidence will be rejected. New theories that generate many new predictions can more easily be supported orfalsified(seepredictive power). Notions that make notestablepredictions are usually considered not to be part of science (protoscienceornescience) until testable predictions can be made. Mathematical equationsandmodels, andcomputer models, are frequently used to describe the past and future behaviour of a process within the boundaries of that model. In some cases theprobabilityof an outcome, rather than a specific outcome, can be predicted, for example in much ofquantum physics. Inmicroprocessors,branch predictionpermits avoidance ofpipelineemptying atbranch instructions. Inengineering, possiblefailure modesare predicted and avoided by correcting thefailure mechanismcausing the failure. Accurate prediction and forecasting are very difficult in some areas, such asnatural disasters,pandemics,demography,population dynamicsandmeteorology.[7]For example, it is possible to predict the occurrence ofsolar cycles, but their exact timing and magnitude are much more difficult to predict. In materials engineering it is also possible to predict the lifetime of a material with a mathematical model.[8] Inmedicalscience, predictive and prognosticbiomarkerscan be used to predict patient outcomes in response to various treatments or the probability of a clinical event.[9] Established science makes useful predictions which are often extremely reliable and accurate; for example,eclipsesare routinely predicted. New theories make predictions which allow them to be disproved by reality. For example, predicting the structure of crystals at the atomic level is a current research challenge.[10]In the early 20th century the scientific consensus was that there existed an absoluteframe of reference, which was given the nameluminiferous ether. The existence of this absolute frame was deemed necessary for consistency with the established idea that the speed of light is constant. The famousMichelson–Morley experimentdemonstrated that predictions deduced from this concept were not borne out in reality, thus disproving the theory of an absolute frame of reference.
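Returning to the regression-based prediction workflow described at the start of this section, the following minimal sketch (with synthetic data and illustrative variable names) shows the estimation step, the prediction step, and a hold-out performance estimate.

```python
import numpy as np

# A minimal sketch of estimation and prediction for a linear functional form,
# with a hold-out split for an unbiased performance estimate. The data are
# synthetic and the names are illustrative.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)                      # explanatory variable
y = 3.0 + 1.5 * x + rng.normal(0, 1.0, 200)      # dependent variable

train, test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

slope, intercept = np.polyfit(train, y_train, deg=1)    # estimation step
y_pred = intercept + slope * test                        # prediction step

rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))          # hold-out performance
print(intercept, slope, rmse)
```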
Thespecial theory of relativitywas proposed by Einstein as an explanation for the seeming inconsistency between the constancy of the speed of light and the non-existence of a special, preferred or absolute frame of reference. Albert Einstein's theory ofgeneral relativitycould not easily be tested as it did not produce any effects observable on a terrestrial scale. However, as one of the firsttests of general relativity, the theory predicted that large masses such asstarswould bend light, in contradiction to accepted theory; this was observed in a 1919 eclipse. Predictive medicineis a field ofmedicinethat entails predicting theprobabilityofdiseaseand instituting preventive measures in order to either prevent the disease altogether or significantly decrease its impact upon the patient (such as by preventingmortalityor limitingmorbidity).[11] While different prediction methodologies exist, such asgenomics,proteomics, andcytomics, the most fundamental way to predict future disease is based on genetics. Although proteomics and cytomics allow for the early detection of disease, much of the time those detectbiological markersthat exist because a disease process hasalreadystarted. However, comprehensivegenetic testing(such as through the use ofDNA arraysorfull genome sequencing) allows for the estimation of disease risk years to decades before any disease even exists, or even whether a healthyfetusis at higher risk for developing a disease in adolescence or adulthood. Individuals who are more susceptible to disease in the future can be offered lifestyle advice or medication with the aim of preventing the predicted illness. Prognosis(Greek: πρόγνωσις "fore-knowing, foreseeing";pl.: prognoses) is a medical term for predicting the likelihood or expected development of adisease, including whether thesignsand symptoms will improve or worsen (and how quickly) or remain stable over time; expectations ofquality of life, such as the ability to carry out daily activities; the potential for complications and associated health issues; and the likelihood of survival (including life expectancy).[13][14]A prognosis is made on the basis of the normal course of the diagnosed disease, the individual's physical and mental condition, the available treatments, and additional factors.[14]A complete prognosis includes the expected duration, function, and description of the course of the disease, such as progressive decline, intermittent crisis, or sudden, unpredictable crisis.[15] Aclinical prediction ruleor clinical probability assessment specifieshow to use medical signs,symptoms, and other findings to estimate the probability of a specific disease or clinical outcome.[17] Mathematical models ofstock marketbehaviour (and economic behaviour in general) are also unreliable in predicting future behaviour. Among other reasons, this is because economic events may span several years, and the world is changing over a similar time frame, thus invalidating the relevance of past observations to the present. Thus there are an extremely small number (of the order of 1) of relevant past data points from which to project the future. In addition, it is generally believed that stock market prices already take into account all the information available to predict the future, and subsequent movements must therefore be the result of unforeseen events. Consequently, it is extremely difficult for astock investortoanticipateor predict astock market boom, or astock market crash. 
In contrast to predicting the actual stock return, forecasting of broadeconomic trendstends to have better accuracy. Such analysis is provided by non-profit groups as well as by for-profit private institutions.[citation needed] Some correlation has been seen between actual stock market movements and prediction data from large groups in surveys and prediction games. Anactuaryusesactuarial scienceto assess and predict future businessrisk, such that the risk(s) can bemitigated. For example, ininsurancean actuary would use alife table(which incorporates the historical experience of mortality rates and sometimes an estimate of future trends) to projectlife expectancy. Predicting the outcome of sporting events is a business which has grown in popularity in recent years. Handicappers predict the outcome of games using a variety of mathematical formulas, simulation models orqualitative analysis. Early, well-known sports bettors, such asJimmy the Greek, were believed to have access to information that gave them an edge. Information ranged from personal issues, such as gambling or drinking, to undisclosed injuries: anything that may affect the performance of a player on the field. Recent times have changed the way sports are predicted. Predictions now typically rely on two distinct approaches: situational plays and statistics-based models. Situational plays are much more difficult to measure because they usually involve the motivation of a team. Dan Gordon, a noted handicapper, wrote "Without an emotional edge in a game in addition to value in a line, I won't put my money on it".[19]These types of plays include betting on the home underdog, betting against Monday Night winners if they are a favorite next week, betting the underdog in "look ahead" games, etc. As situational plays become more widely known they become less useful because they will impact the way the line is set. The widespread use of technology has brought with it more modernsports betting systems. These systems are typically algorithms and simulation models based onregression analysis.Jeff Sagarin, a sports statistician, has brought attention to sports by having the results of his models published in USA Today. He is currently paid as a consultant by theDallas Mavericksfor his advice on lineups and the use of his Winval system, which evaluates free agents.Brian Burke, a formerNavyfighter pilot turned sports statistician, has published his results of using regression analysis to predict the outcome of NFL games.[20]Ken Pomeroyis widely accepted as a leading authority on college basketball statistics. His website includes his College Basketball Ratings, a tempo-based statistics system. Some statisticians have become very famous for having successful prediction systems. Dare wrote "the effective odds for sports betting and horse racing are a direct result of human decisions and can therefore potentially exhibit consistent error".[21]Unlike other games offered in a casino, prediction in sporting events can be both logical and consistent. Other, more advanced models include those based on Bayesian networks, which are causal probabilistic models commonly used for risk analysis and decision support.
Based on this kind of mathematical modelling, Constantinou et al.[22][23]have developed models for predicting the outcome of association football matches.[24]What makes these models interesting is that, apart from taking into consideration relevant historical data, they also incorporate vague, subjective factors such as the availability of key players, team fatigue and team motivation. They allow the user to include their best guesses about factors for which no hard facts are available. This additional information is then combined with historical facts to provide a revised prediction for future match outcomes. The initial results based on these modelling practices are encouraging since they have demonstrated consistent profitability against published market odds. Nowadays sports betting is a huge business; there are many websites (systems) alongside betting sites, which give tips or predictions for future games.[25]Some of these prediction websites (tipsters) are based on human predictions, but others on computer software sometimes called prediction robots or bots. Prediction bots can use different amounts of data and different algorithms, and their accuracy may vary accordingly. With the development of artificial intelligence, it has become possible to produce more consistent statistical predictions, particularly for sports competitions. For example, soccerseer.com, a system forAI soccer predictions, claims to predict the results of football competitions with up to 75% accuracy. Prediction in the non-economic social sciences differs from the natural sciences and includes multiple alternative methods such as trend projection, forecasting, scenario-building and Delphi surveys. The oil company Shell is particularly well known for its scenario-building activities.[citation needed] One reason for the peculiarity of societal prediction is that in the social sciences, "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process".[26]As a consequence, societal predictions can become self-destructing. For example, a forecast that a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more cybersecurity measures, thus limiting the issue.[26] Inpoliticsit is common to attempt to predict the outcome ofelectionsviapolitical forecastingtechniques (or assess the popularity ofpoliticians) through the use ofopinion polls.Prediction gameshave been used by many corporations and governments to learn about the most likely outcome of future events. Predictions have often been made, from antiquity until the present, by usingparanormalorsupernaturalmeans such asprophecyor by observingomens. Methods includingwater divining,astrology,numerology,fortune telling,interpretation of dreams, and many other forms ofdivination, have been used for millennia to attempt to predict the future. These means of prediction have not been proven by scientific experiments.
In literature, vision and prophecy are literary devices used to present a possible timeline of future events. They can be distinguished by vision referring to what an individual sees happen. Thebook of Revelation, in theNew Testament, thus uses vision as a literary device in this regard. It is also prophecy or prophetic literature when it is related by an individual in asermonor other public forum. Divinationis the attempt to gain insight into a question or situation by way of an occultic standardized process or ritual.[27]It is an integral part of witchcraft and has been used in various forms for thousands of years. Diviners ascertain their interpretations of how a querent should proceed by reading signs, events, oromens, or through alleged contact with asupernaturalagency, most often described as an angel or a god though viewed by Christians and Jews as a fallen angel or demon.[28] Fiction (especially fantasy,forecastingand science fiction) often features instances of prediction achieved by unconventional means. Science fiction of the pastpredicted various modern technologies. In fantasy literature, predictions are often obtained throughmagicorprophecy, sometimes referring back to old traditions. For example, inJ. R. R. Tolkien'sThe Lord of the Rings, many of the characters possess an awareness of events extending into the future, sometimes as prophecies, sometimes as more-or-less vague 'feelings'. The characterGaladriel, in addition, employs a water "mirror" to show images, sometimes of possible future events. In some ofPhilip K. Dick's stories, mutant humans calledprecogscan foresee the future (ranging from days to years). In the story calledThe Golden Man, an exceptional mutant can predict the future to an indefinite range (presumably up to his death), and thus becomes completely non-human, an animal that follows the predicted paths automatically. Precogs also play an essential role in another of Dick's stories,The Minority Report, which was turned into afilmbySteven Spielbergin 2002. In theFoundationseries byIsaac Asimov, a mathematician finds out that historical events (up to some detail) can be theoretically modelled using equations, and then spends years trying to put the theory in practice. The new science ofpsychohistoryfounded upon his success can simulate history and extrapolate the present into the future. InFrank Herbert's sequels to 1965'sDune, his characters are dealing with the repercussions of being able to see the possible futures and select amongst them. Herbert sees this as a trap of stagnation, and his characters follow a so-called "Golden Path" out of the trap. InUrsula K. Le Guin'sThe Left Hand of Darkness, the humanoid inhabitants of planet Gethen have mastered the art of prophecy and routinely produce data on past, present or future events on request. In this story, this was a minor plot device. For the ancients, prediction, prophesy, and poetry were often intertwined.[29]Prophecies were given in verse, and a word for poet in Latin is “vates” or prophet.[29]Both poets and prophets claimed to be inspired by forces outside themselves. In contemporary cultures, theological revelation and poetry are typically seen as distinct and often even as opposed to each other. Yet the two still are often understood together as symbiotic in their origins, aims, and purposes.[30]
https://en.wikipedia.org/wiki/Prediction
Aconfidence bandis used instatistical analysisto represent the uncertainty in an estimate of a curve or function based on limited or noisy data. Similarly, aprediction bandis used to represent the uncertainty about the value of a new data-point on the curve, but subject to noise. Confidence and prediction bands are often used as part of the graphical presentation of results of aregression analysis. Confidence bands are closely related toconfidence intervals, which represent the uncertainty in an estimate of a single numerical value. "As confidence intervals, by construction, only refer to a single point, they are narrower (at this point) than a confidence band which is supposed to hold simultaneously at many points."[1] Suppose our aim is to estimate a functionf(x). For example,f(x) might be the proportion of people of a particular agexwho support a given candidate in an election. Ifxis measured at the precision of a single year, we can construct a separate 95% confidence interval for each age. Each of these confidence intervals covers the corresponding true valuef(x) with confidence 0.95. Taken together, these confidence intervals constitute a95% pointwise confidence bandforf(x). In mathematical terms, a pointwise confidence bandf^(x)±w(x){\displaystyle {\hat {f}}(x)\pm w(x)}with coverage probability 1 −αsatisfies the following condition separately for each value ofx: wheref^(x){\displaystyle {\hat {f}}(x)}is the point estimate off(x). Thesimultaneous coverage probabilityof a collection of confidence intervals is theprobabilitythat all of them cover their corresponding true values simultaneously. In the example above, the simultaneous coverage probability is the probability that the intervals forx= 18,19,... all cover their true values (assuming that 18 is the youngest age at which a person can vote). If each interval individually has coverage probability 0.95, the simultaneous coverage probability is generally less than 0.95. A95% simultaneous confidence bandis a collection of confidence intervals for all valuesxin the domain off(x) that is constructed to have simultaneous coverage probability 0.95. In mathematical terms, a simultaneous confidence bandf^(x)±w(x){\displaystyle {\hat {f}}(x)\pm w(x)}with coverage probability 1 −αsatisfies the following condition: In nearly all cases, a simultaneous confidence band will be wider than a pointwise confidence band with the same coverage probability. In the definition of a pointwise confidence band, thatuniversal quantifiermoves outside the probability function. Confidence bands commonly arise inregression analysis.[2]In the case of a simple regression involving a single independent variable, results can be presented in the form of a plot showing the estimated regression line along with either point-wise or simultaneous confidence bands. Commonly used methods for constructing simultaneous confidence bands in regression are theBonferroniandScheffémethods; seeFamily-wise error rate controlling proceduresfor more. Confidence bands can be constructed around estimates of theempirical distribution function. Simple theory allows the construction of point-wise confidence intervals, but it is also possible to construct a simultaneous confidence band for the cumulative distribution function as a whole byinverting the Kolmogorov-Smirnov test, or by using non-parametric likelihood methods.[3] Confidence bands arise whenever a statistical analysis focuses on estimating a function. 
Confidence bands have also been devised for estimates ofdensity functions,spectral densityfunctions,[4]quantile functions,scatterplot smooths,survival functions, andcharacteristic functions.[citation needed] Prediction bands are related toprediction intervalsin the same way that confidence bands are related to confidence intervals. Prediction bands commonly arise in regression analysis. The goal of a prediction band is to cover with a prescribed probability the values of one or more future observations from the same population from which a given data set was sampled. Just as prediction intervals are wider than confidence intervals, prediction bands will be wider than confidence bands. In mathematical terms, a prediction bandf^(x)±w(x){\displaystyle {\hat {f}}(x)\pm w(x)}with coverage probability 1 −αsatisfies the following condition for each value ofx: wherey*is an observation taken from the data-generating process at the given pointxthat is independent of the data used to construct the point estimatef^(x){\displaystyle {\hat {f}}(x)}and the half-widthw(x). This defines a pointwise prediction band. A simultaneous prediction band for a finite number of independent future observations can be constructed using, for example, the Bonferroni method to widen the band by an appropriate amount.
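As a hedged illustration of the pointwise bands discussed above, the sketch below computes 95% pointwise confidence and prediction bands for a simple linear regression using the standard textbook formulas; the data are synthetic and the names are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, x.size)

n = x.size
b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))         # residual standard error
sxx = np.sum((x - x.mean()) ** 2)
tcrit = stats.t.ppf(0.975, n - 2)            # 95% pointwise coverage

grid = np.linspace(0, 10, 200)
fit = b0 + b1 * grid
se_mean = s * np.sqrt(1 / n + (grid - x.mean()) ** 2 / sxx)
se_pred = s * np.sqrt(1 + 1 / n + (grid - x.mean()) ** 2 / sxx)

conf_band = (fit - tcrit * se_mean, fit + tcrit * se_mean)   # for the mean curve
pred_band = (fit - tcrit * se_pred, fit + tcrit * se_pred)   # for a new observation
```

A simultaneous band could be obtained by replacing the t critical value with a Scheffé- or Bonferroni-adjusted one, as mentioned above.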
https://en.wikipedia.org/wiki/Confidence_and_prediction_bands
Seymour Geisser(October 5, 1929 – March 11, 2004) was an Americanstatisticiannoted for emphasizingpredictive inference. In his bookPredictive Inference: An Introduction, he held that conventional statistical inference about unobservable population parameters amounts to inference about things that do not exist, following the work ofBruno de Finetti. He also pioneered the theory ofcross-validation. WithSamuel Greenhouse, he developed theGreenhouse–Geisser correction, which is now widely used in theanalysis of varianceto correct for violations of the assumption of compound symmetry.[1] He testified as an expert on interpretation ofDNAevidence in more than 100 civil and criminal trials. He held that prosecutors often relied on flawed statistical models. On that topic, he wrote "Statistics, Litigation and Conduct Unbecoming" in the bookStatistical Science in the Courtroom, edited by Joe [Joseph Louis] Gastwirth (Springer Verlag, 2000). He was born inNew York City. He earned hisPh.D.at theUniversity of North Carolina at Chapel Hillin 1955 underHarold Hotelling. In 1971, he founded the School of Statistics at theUniversity of Minnesota, of which he was the Director for more than 30 years. Geisser was also the principal editor of several books of papers by multiple authors.
https://en.wikipedia.org/wiki/Seymour_Geisser
Indecision theory, ascoring rule[1]provides evaluation metrics forprobabilistic predictions or forecasts. While "regular" loss functions (such asmean squared error) assign a goodness-of-fit score to a predicted value and an observed value, scoring rules assign such a score to a predicted probability distribution and an observed value. On the other hand, ascoring function[2]provides a summary measure for the evaluation of point predictions, i.e. one predicts a property orfunctionalT(F){\displaystyle T(F)}, like theexpectationor themedian. Scoring rules answer the question "how good is a predicted probability distribution compared to an observation?" Scoring rules that are(strictly) properare proven to have the lowest expected score if the predicted distribution equals the underlying distribution of the target variable. Although this might differ for individual observations, this should result in a minimization of the expected score if the "correct" distributions are predicted. Scoring rules and scoring functions are often used as "cost functions" or "loss functions" of probabilistic forecasting models. They are evaluated as the empirical mean of a given sample, the "score". Scores of different predictions or models can then be compared to conclude which model is best. For example, consider a model, that predicts (based on an inputx{\displaystyle x}) a meanμ∈R{\displaystyle \mu \in \mathbb {R} }and standard deviationσ∈R+{\displaystyle \sigma \in \mathbb {R} _{+}}. Together, those variables define agaussian distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}, in essence predicting the target variable as a probability distribution. A common interpretation of probabilistic models is that they aim to quantify their own predictive uncertainty. In this example, an observed target variabley∈R{\displaystyle y\in \mathbb {R} }is then held compared to the predicted distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}and assigned a scoreL(N(μ,σ2),y)∈R{\displaystyle {\mathcal {L}}({\mathcal {N}}(\mu ,\sigma ^{2}),y)\in \mathbb {R} }. When training on a scoring rule, it should "teach" a probabilistic model to predict when its uncertainty is low, and when its uncertainty is high, and it should result incalibratedpredictions, while minimizing the predictive uncertainty. Although the example given concerns the probabilistic forecasting of areal valuedtarget variable, a variety of different scoring rules have been designed with different target variables in mind. Scoring rules exist for binary and categoricalprobabilistic classification, as well as for univariate and multivariate probabilisticregression. Consider asample spaceΩ{\displaystyle \Omega }, aσ-algebraA{\displaystyle {\mathcal {A}}}of subsets ofΩ{\displaystyle \Omega }and a convex classF{\displaystyle {\mathcal {F}}}ofprobability measureson(Ω,A){\displaystyle (\Omega ,{\mathcal {A}})}. A function defined onΩ{\displaystyle \Omega }and taking values in the extended real line,R¯=[−∞,∞]{\displaystyle {\overline {\mathbb {R} }}=[-\infty ,\infty ]}, isF{\displaystyle {\mathcal {F}}}-quasi-integrable if it is measurable with respect toA{\displaystyle {\mathcal {A}}}and is quasi-integrable with respect to allF∈F{\displaystyle F\in {\mathcal {F}}}. A probabilistic forecast is any probability measureF∈F{\displaystyle F\in {\mathcal {F}}}. I.e. it is a distribution of potential future observations. 
A scoring rule is any extended real-valued functionS:F×Ω→R{\displaystyle \mathbf {S} :{\mathcal {F}}\times \Omega \rightarrow \mathbb {R} }such thatS(F,⋅){\displaystyle \mathbf {S} (F,\cdot )}isF{\displaystyle {\mathcal {F}}}-quasi-integrable for allF∈F{\displaystyle F\in {\mathcal {F}}}.S(F,y){\displaystyle \mathbf {S} (F,y)}represents the loss or penalty when the forecastF∈F{\displaystyle F\in {\mathcal {F}}}is issued and the observationy∈Ω{\displaystyle y\in \Omega }materializes. A point forecast is a functional, i.e. a potentially set-valued mappingF→T(F)⊆Ω{\displaystyle F\rightarrow T(F)\subseteq \Omega }. A scoring function is any real-valued functionS:Ω×Ω→R{\displaystyle S:\Omega \times \Omega \rightarrow \mathbb {R} }whereS(x,y){\displaystyle S(x,y)}represents the loss or penalty when the point forecastx∈Ω{\displaystyle x\in \Omega }is issued and the observationy∈Ω{\displaystyle y\in \Omega }materializes. Scoring rulesS(F,y){\displaystyle \mathbf {S} (F,y)}and scoring functionsS(x,y){\displaystyle S(x,y)}are negatively (positively) oriented if smaller (larger) values mean better. Here we adhere to negative orientation, hence the association with "loss". We writeS(F,Q){\displaystyle \mathbf {S} (F,Q)}for the expected score of a predicted distributionF∈F{\displaystyle F\in {\mathcal {F}}}underQ∈F{\displaystyle Q\in {\mathcal {F}}}, i.e. the expected value ofS(F,y){\displaystyle \mathbf {S} (F,y)}when the observationy{\displaystyle y}is sampled from the distributionQ{\displaystyle Q}. Many probabilistic forecasting models are trained via the sample average score, in which a set of predicted distributionsF1,…,Fn∈F{\displaystyle F_{1},\ldots ,F_{n}\in {\mathcal {F}}}is evaluated against a set of observationsy1,…,yn∈Ω{\displaystyle y_{1},\ldots ,y_{n}\in \Omega }. Strictly proper scoring rules and strictly consistent scoring functions encourage honest forecasts by maximization of the expected reward: If a forecaster is given a reward of−S(F,y){\displaystyle -\mathbf {S} (F,y)}ify{\displaystyle y}realizes (e.g.y=rain{\displaystyle y=rain}), then the highestexpectedreward (lowest score) is obtained by reporting the true probability distribution.[1] A scoring ruleS{\displaystyle \mathbf {S} }isproperrelative toF{\displaystyle {\mathcal {F}}}if (assuming negative orientation) its expected score is minimized when the forecasted distribution matches the distribution of the observation. It isstrictly properif the above equation holds with equality if and only ifF=Q{\displaystyle F=Q}. A scoring functionS{\displaystyle S}isconsistentfor the functionalT{\displaystyle T}relative to the classF{\displaystyle {\mathcal {F}}}if It is strictly consistent if it is consistent and equality in the above equation implies thatx∈T(F){\displaystyle x\in T(F)}. To enforce thatcorrect forecasts are always strictly preferred, Ahmadian et al. (2024) introduced twosuperiorvariants:Penalized Brier Score (PBS)andPenalized Logarithmic Loss (PLL), which add a fixed penalty whenever the predicted class (arg⁡maxp{\displaystyle \arg \max p}) differs from the true class (arg⁡maxy{\displaystyle \arg \max y}).[3] -PBSaugments the Brier score by adding(c−1)/c{\displaystyle (c-1)/c}for any misclassification (withc{\displaystyle c}the number of classes). -PLLaugments the logarithmic score by adding−log⁡(1/c){\displaystyle -\log(1/c)}for any misclassification. Despite these penalties, PBS and PLL remainstrictly proper.
Their expected score is uniquely minimized when the forecast equals the true distribution, satisfying thesuperiorproperty that every correct classification is scored strictly better than any incorrect one. Note:Neither the standard Brier Score nor the logarithmic score satisfy thesuperiorcriterion. They remain strictly proper but can assign better scores to incorrect predictions than to certain correct ones—an issue resolved by PBS and PLL.[3] An example ofprobabilistic forecastingis in meteorology where aweather forecastermay give the probability of rain on the next day. One could note the number of times that a 25% probability was quoted, over a long period, and compare this with the actual proportion of times that rain fell. If the actual percentage was substantially different from the stated probability we say that the forecaster ispoorly calibrated. A poorly calibrated forecaster might be encouraged to do better by abonussystem. A bonus system designed around a proper scoring rule will incentivize the forecaster to report probabilities equal to hispersonal beliefs.[4] In addition to the simple case of abinary decision, such as assigning probabilities to 'rain' or 'no rain', scoring rules may be used for multiple classes, such as 'rain', 'snow', or 'clear', or continuous responses like the amount of rain per day. The image to the right shows an example of a scoring rule, the logarithmic scoring rule, as a function of the probability reported for the event that actually occurred. One way to use this rule would be as a cost based on the probability that a forecaster or algorithm assigns, then checking to see which event actually occurs. There are an infinite number of scoring rules, including entire parameterized families of strictly proper scoring rules. The ones shown below are simply popular examples. For a categorical response variable withm{\displaystyle m}mutually exclusive events,Y∈Ω={1,…,m}{\displaystyle Y\in \Omega =\{1,\ldots ,m\}}, a probabilistic forecaster or algorithm will return aprobability vectorr{\displaystyle \mathbf {r} }with a probability for each of them{\displaystyle m}outcomes. The logarithmic scoring rule is a local strictly proper scoring rule. This is also the negative ofsurprisal, which is commonly used as a scoring criterion inBayesian inference; the goal is to minimize expected surprise. This scoring rule has strong foundations ininformation theory. Here, the score is calculated as the logarithm of the probability estimate for the actual outcome. That is, a prediction of 80% that correctly proved true would receive a score ofln(0.8) = −0.22. This same prediction also assigns 20% likelihood to the opposite case, and so if the prediction proves false, it would receive a score based on the 20%:ln(0.2) = −1.6. The goal of a forecaster is to maximize the score and for the score to be as large as possible, and −0.22 is indeed larger than −1.6. If one treats the truth or falsity of the prediction as a variablexwith value 1 or 0 respectively, and the expressed probability asp, then one can write the logarithmic scoring rule asxln(p) + (1 −x) ln(1 −p). Note that any logarithmic base may be used, since strictly proper scoring rules remain strictly proper under linear transformation. That is: is strictly proper for allb>1{\displaystyle b>1}. The quadratic scoring rule is a strictly proper scoring rule whereri{\displaystyle r_{i}}is the probability assigned to the correct answer andC{\displaystyle C}is the number of classes. 
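As a quick numerical check of propriety for the binary logarithmic score x ln(p) + (1 − x) ln(1 − p) written above (negated here so that smaller is better), the expected score is minimized when the reported probability equals the true one; the true probability below is an arbitrary illustrative value.

```python
import numpy as np

true_p = 0.8
p_grid = np.linspace(0.01, 0.99, 99)
expected_score = -(true_p * np.log(p_grid) + (1 - true_p) * np.log(1 - p_grid))
print(p_grid[np.argmin(expected_score)])   # ~0.8: honest reporting is optimal
```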
TheBrier score, originally proposed byGlenn W. Brierin 1950,[5]can be obtained by anaffine transformfrom the quadratic scoring rule. Whereyj=1{\displaystyle y_{j}=1}when thej{\displaystyle j}th event is correct andyj=0{\displaystyle y_{j}=0}otherwise andC{\displaystyle C}is the number of classes. An important difference between these two rules is that a forecaster should strive to maximize the quadratic scoreQ{\displaystyle Q}yet minimize the Brier scoreB{\displaystyle B}. This is due to a negative sign in the linear transformation between them. The spherical scoring rule is also a strictly proper scoring rule. The ranked probability score[6](RPS) is a strictly proper scoring rule that can be expressed as: Whereyj=1{\displaystyle y_{j}=1}when thej{\displaystyle j}th event is correct andyj=0{\displaystyle y_{j}=0}otherwise, andC{\displaystyle C}is the number of classes. Unlike other scoring rules, the ranked probability score considers the distance between classes, i.e. classes 1 and 2 are considered closer than classes 1 and 3. It assigns better scores to probabilistic forecasts that place high probability on classes close to the correct class. For example, when considering probabilistic forecastsr1=(0.5,0.5,0){\displaystyle \mathbf {r} _{1}=(0.5,0.5,0)}andr2=(0.5,0,0.5){\displaystyle \mathbf {r} _{2}=(0.5,0,0.5)}, we find thatRPS(r1,1)=0.25{\displaystyle RPS(\mathbf {r} _{1},1)=0.25}, whileRPS(r2,1)=0.5{\displaystyle RPS(\mathbf {r} _{2},1)=0.5}, despite both probabilistic forecasts assigning identical probability to the correct class. Shown below on the left is a graphical comparison of the Logarithmic, Quadratic, and Spherical scoring rules for abinary classificationproblem. Thex-axis indicates the reported probability for the event that actually occurred. It is important to note that each of the scores has different magnitudes and locations. The magnitude differences are not relevant, however, as scores remain proper under affine transformation. Therefore, to compare different scores it is necessary to move them to a common scale. A reasonable choice of normalization is shown in the picture on the right where all scores intersect the points (0.5,0) and (1,1). This ensures that they yield 0 for a uniform distribution (two probabilities of 0.5 each), reflecting no cost or reward for reporting what is often the baseline distribution. All normalized scores below also yield 1 when the true class is assigned a probability of 1. The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are univariatecontinuous probability distributions, i.e. the predicted distributions are defined over a univariate target variableX∈R{\displaystyle X\in \mathbb {R} }and have aprobability density functionf:R→R+{\displaystyle f:\mathbb {R} \to \mathbb {R} _{+}}. The logarithmic score is a local strictly proper scoring rule. It is defined as wherefD{\displaystyle f_{D}}denotes the probability density function of the predicted distributionD{\displaystyle D}. The logarithmic score for continuous variables has strong ties toMaximum likelihood estimation. However, in many applications, the continuous ranked probability score is often preferred over the logarithmic score, as the logarithmic score can be heavily influenced by slight deviations in the tail densities of forecasted distributions.[7] The continuous ranked probability score (CRPS)[8]is a strictly proper scoring rule much used in meteorology.
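A minimal sketch of the categorical scores above, the logarithmic score, the multiclass Brier score, and the ranked probability score, using 0-indexed classes; it reproduces the RPS example from the text, where the forecast that concentrates probability near the correct class receives the better (lower) score.

```python
import numpy as np

# All three scores are negatively oriented here (smaller is better);
# r is a probability vector over classes, y is the observed class index.

def log_score(r, y):
    return -np.log(r[y])

def brier_score(r, y):
    onehot = np.eye(len(r))[y]
    return np.sum((r - onehot) ** 2)

def rps(r, y):
    onehot = np.eye(len(r))[y]
    return np.sum((np.cumsum(r) - np.cumsum(onehot)) ** 2)

# The article's RPS example (the text's "class 1" is index 0 here):
r1 = np.array([0.5, 0.5, 0.0])
r2 = np.array([0.5, 0.0, 0.5])
print(rps(r1, 0), rps(r2, 0))   # 0.25 and 0.5: r1 is rewarded for being "closer"
```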
The CRPS is defined as whereFD{\displaystyle F_{D}}is thecumulative distribution functionof the forecasted distributionD{\displaystyle D},H{\displaystyle H}is theHeaviside step functionandy∈R{\displaystyle y\in \mathbb {R} }is the observation. For distributions with finite firstmoment, the continuous ranked probability score can be written as:[1] whereX{\displaystyle X}andX′{\displaystyle X'}are independent random variables, sampled from the distributionD{\displaystyle D}. Furthermore, when the cumulative probability functionF{\displaystyle F}is continuous, the continuous ranked probability score can also be written as[9] The continuous ranked probability score can be seen both as a continuous extension of the ranked probability score and as closely related toquantile regression. The continuous ranked probability score over theempirical distributionD^q{\displaystyle {\hat {D}}_{q}}of an ordered set of pointsq1≤…≤qn{\displaystyle q_{1}\leq \ldots \leq q_{n}}(i.e. every point has1/n{\displaystyle 1/n}probability of occurring), is equal to twice the meanquantile lossapplied on those points with evenly spread quantiles(τ1,…,τn)=(1/(2n),…,(2n−1)/(2n)){\displaystyle (\tau _{1},\ldots ,\tau _{n})=(1/(2n),\ldots ,(2n-1)/(2n))}:[10] For many popular families of distributions,closed-form expressionsfor the continuous ranked probability score have been derived. The continuous ranked probability score has been used as a loss function forartificial neural networks, in which weather forecasts are postprocessed to aGaussian probability distribution.[11][12] CRPS was also adapted tosurvival analysisto cover censored events.[13] CRPS is also known as theCramer–von Mises distanceand can be seen as an improvement on theWasserstein distance(often used in machine learning); furthermore, the Cramer distance has performed better inordinal regressionthan theKL distanceor the Wasserstein metric.[14] The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are multivariatecontinuous probability distributions, i.e. the predicted distributions are defined over a multivariate target variableX∈Rn{\displaystyle X\in \mathbb {R} ^{n}}and have aprobability density functionf:Rn→R+{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} _{+}}. The multivariate logarithmic score is similar to the univariate logarithmic score: wherefD{\displaystyle f_{D}}denotes the probability density function of the predicted multivariate distributionD{\displaystyle D}. It is a local, strictly proper scoring rule. The Hyvärinen scoring function (of a density p) is defined by[15] WhereΔ{\displaystyle \Delta }denotes theHessiantraceand∇{\displaystyle \nabla }denotes thegradient. This scoring rule can be used to computationally simplify parameter inference and address Bayesian model comparison with arbitrarily vague priors.[15][16]It was also used to introduce new information-theoretic quantities beyond the existinginformation theory.[17] The energy score is a multivariate extension of the continuous ranked probability score:[1] Here,β∈(0,2){\displaystyle \beta \in (0,2)},‖‖2{\displaystyle \lVert \rVert _{2}}denotes then{\displaystyle n}-dimensionalEuclidean distanceandX,X′{\displaystyle X,X'}are independently sampled random variables from the probability distributionD{\displaystyle D}. The energy score is strictly proper for distributionsD{\displaystyle D}for whichEX∼D[‖X‖2]{\displaystyle \mathbb {E} _{X\sim D}[\lVert X\rVert _{2}]}is finite.
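Before turning to the multivariate scores in more detail, here is a hedged sketch of the sample-based form of the CRPS given above, CRPS(D, y) = E|X − y| − ½E|X − X′|, applied to an ensemble forecast treated as an empirical distribution; the ensemble itself is synthetic and the names are illustrative.

```python
import numpy as np

def crps_ensemble(members, y):
    """Sample CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - y))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
forecast = rng.normal(1.0, 2.0, size=1000)   # ensemble drawn from N(1, 2^2)
print(crps_ensemble(forecast, 0.5))
```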
It has been suggested that the energy score is somewhat ineffective when evaluating the intervariable dependency structure of the forecasted multivariate distribution.[18]The energy score is equal to twice theenergy distancebetween the predicted distribution and the empirical distribution of the observation. Thevariogramscore of orderp{\displaystyle p}is given by:[19] Here,wij{\displaystyle w_{ij}}are weights, often set to 1, andp>0{\displaystyle p>0}can be arbitrarily chosen, butp=0.5,1{\displaystyle p=0.5,1}or2{\displaystyle 2}are often used.Xi{\displaystyle X_{i}}is here to denote thei{\displaystyle i}'thmarginal random variableofX{\displaystyle X}. The variogram score is proper for distributions for which the(2p){\displaystyle (2p)}'thmomentis finite for all components, but is never strictly proper. Compared to the energy score, the variogram score is claimed to be more discriminative with respect to the predicted correlation structure. The conditional continuous ranked probability score (Conditional CRPS or CCRPS) is a family of (strictly) proper scoring rules. Conditional CRPS evaluates a forecasted multivariate distributionD{\displaystyle D}by evaluation of CRPS over a prescribed set of univariateconditional probability distributionsof the predicted multivariate distribution:[20] Here,Xi{\displaystyle X_{i}}is thei{\displaystyle i}'th marginal variable ofX∼D{\displaystyle X\sim D},T=(vi,Ci)i=1k{\displaystyle {\mathcal {T}}=(v_{i},{\mathcal {C}}_{i})_{i=1}^{k}}is a set of tuples that defines a conditional specification (withvi∈{1,…,n}{\displaystyle v_{i}\in \{1,\ldots ,n\}}andCi⊆{1,…,n}∖{vi}{\displaystyle {\mathcal {C}}_{i}\subseteq \{1,\ldots ,n\}\setminus \{v_{i}\}}), andPX∼D(Xvi|Xj=Yjforj∈Ci){\displaystyle P_{X\sim D}(X_{v_{i}}|X_{j}=Y_{j}{\text{ for }}j\in {\mathcal {C}}_{i})}denotes the conditional probability distribution forXvi{\displaystyle X_{v_{i}}}given that all variablesXj{\displaystyle X_{j}}forj∈Ci{\displaystyle j\in {\mathcal {C}}_{i}}are equal to their respective observations. In the case thatPX∼D(Xvi|Xj=Yjforj∈Ci){\displaystyle P_{X\sim D}(X_{v_{i}}|X_{j}=Y_{j}{\text{ for }}j\in {\mathcal {C}}_{i})}is ill-defined (i.e. its conditional event has zero likelihood), CRPS scores over this distribution are defined as infinite. Conditional CRPS is strictly proper for distributions with finite first moment, if thechain ruleis included in the conditional specification, meaning that there exists a permutationϕ1,…,ϕn{\displaystyle \phi _{1},\ldots ,\phi _{n}}of1,…,n{\displaystyle 1,\ldots ,n}such that for all1≤i≤n{\displaystyle 1\leq i\leq n}:(ϕi,{ϕ1,…,ϕi−1})∈T{\displaystyle (\phi _{i},\{\phi _{1},\ldots ,\phi _{i-1}\})\in {\mathcal {T}}}. All proper scoring rules are equal to weighted sums (integral with a non-negative weighting functional) of the losses in a set of simple two-alternative decision problems thatusethe probabilistic prediction, each such decision problem having a particular combination of associated cost parameters forfalse positive and false negativedecisions. Astrictlyproper scoring rule corresponds to having a nonzero weighting for all possible decision thresholds. 
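Stepping back to the energy score defined above, the following sketch estimates it (with β = 1) from an ensemble of multivariate forecast draws, as a direct analogue of the sample-based CRPS; the covariance matrix and observation are illustrative choices.

```python
import numpy as np

def energy_score(members, y, beta=1.0):
    """Sample energy score: E||X - y||^beta - 0.5 * E||X - X'||^beta."""
    members = np.asarray(members, dtype=float)
    d_obs = np.linalg.norm(members - y, axis=1) ** beta
    d_pair = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=-1) ** beta
    return d_obs.mean() - 0.5 * d_pair.mean()

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
draws = rng.multivariate_normal([0.0, 0.0], cov, size=500)   # forecast ensemble
print(energy_score(draws, np.array([0.2, -0.4])))
```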
Any given proper scoring rule is equal to the expected losses with respect to a particular probability distribution over the decision thresholds; thus the choice of a scoring rule corresponds to an assumption about the probability distribution of decision problems for which the predicted probabilities will ultimately be employed, with for example the quadratic loss (or Brier) scoring rule corresponding to a uniform probability of the decision threshold being anywhere between zero and one. Theclassification accuracy score(percent classified correctly), a single-threshold scoring rule which is zero or one depending on whether the predicted probability is on the appropriate side of 0.5, is a proper scoring rule but not a strictly proper scoring rule, because it is optimized (in expectation) not only by predicting the true probability but by predictinganyprobability on the same side of 0.5 as the true probability.[21][22][23][24][25][26] A strictly proper scoring rule, whether binary or multiclass, after anaffine transformationremains a strictly proper scoring rule.[4]That is, ifS(r,i){\displaystyle S(\mathbf {r} ,i)}is a strictly proper scoring rule thena+bS(r,i){\displaystyle a+bS(\mathbf {r} ,i)}withb≠0{\displaystyle b\neq 0}is also a strictly proper scoring rule, though ifb<0{\displaystyle b<0}then the optimization sense of the scoring rule switches between maximization and minimization. A proper scoring rule is said to belocalif its estimate for the probability of a specific event depends only on the probability of that event. Although this notion is left vague in most descriptions, it can in most cases be understood as follows: the optimal solution of the scoring problem "at a specific event" is invariant to all changes in the observation distribution that leave the probability of that event unchanged. All binary scores are local, because the probability assigned to the event that did not occur is fully determined by the probability of the event that did, leaving no remaining degree of freedom to vary over. Affine functions of the logarithmic scoring rule are the only strictly proper local scoring rules on afinite setthat is not binary. The expectation value of a proper scoring ruleS{\displaystyle S}can be decomposed into the sum of three components, calleduncertainty,reliability, andresolution,[27][28]which characterize different attributes of probabilistic forecasts: {\displaystyle \mathbb {E} [S]={\text{REL}}-{\text{RES}}+{\text{UNC}}.} If a score is proper and negatively oriented (such as the Brier Score), all three terms are positive definite. The uncertainty component is equal to the expected score of the forecast which constantly predicts the average event frequency. The reliability component penalizes poorly calibrated forecasts, in which the predicted probabilities do not coincide with the event frequencies. The equations for the individual components depend on the particular scoring rule. For the Brier Score, they are given by {\displaystyle {\text{REL}}=\mathbb {E} _{p}{\bigl [}(p-\pi (p))^{2}{\bigr ]},\quad {\text{RES}}=\mathbb {E} _{p}{\bigl [}(\pi (p)-{\bar {x}})^{2}{\bigr ]},\quad {\text{UNC}}={\bar {x}}(1-{\bar {x}}),} where the expectation is taken over the distribution of issued forecastsp{\displaystyle p},x¯{\displaystyle {\bar {x}}}is the average probability of occurrence of the binary eventx{\displaystyle x}, andπ(p){\displaystyle \pi (p)}is the conditional event probability, givenp{\displaystyle p}, i.e.π(p)=P(x=1∣p){\displaystyle \pi (p)=P(x=1\mid p)}
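The decomposition can be checked numerically. The following is a minimal sketch assuming forecasts take a small number of discrete values, so that grouping observations by forecast value makes the Murphy decomposition of the Brier score exact; function names and data are illustrative only.

import numpy as np

def brier_decomposition(p, x):
    # Murphy decomposition of the Brier score, grouping observations by the
    # (discrete) forecast value, so that BS = REL - RES + UNC holds exactly.
    p, x = np.asarray(p, float), np.asarray(x, float)
    n, xbar = len(p), x.mean()
    rel = res = 0.0
    for f in np.unique(p):
        mask = p == f
        pi_f = x[mask].mean()                    # conditional event frequency pi(p)
        rel += mask.sum() * (f - pi_f) ** 2      # reliability (calibration penalty)
        res += mask.sum() * (pi_f - xbar) ** 2   # resolution
    bs = np.mean((p - x) ** 2)
    return bs, rel / n, res / n, xbar * (1 - xbar)

p = np.array([0.1, 0.1, 0.7, 0.7, 0.7, 0.9])
x = np.array([0, 0, 1, 0, 1, 1])
bs, rel, res, unc = brier_decomposition(p, x)
print(round(bs, 4), round(rel - res + unc, 4))   # both ~0.1167: BS = REL - RES + UNC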
https://en.wikipedia.org/wiki/Scoring_function
Risk assessmentdetermines possible mishaps, their likelihood and consequences, and thetolerancesfor such events.[1][2]The results of this process may be expressed in aquantitativeorqualitativefashion. Risk assessment is an inherent part of a broaderrisk managementstrategy to help reduce any potential risk-related consequences.[1][3] More precisely, risk assessment identifies and analyses potential (future) events that may negatively impact individuals, assets, and/or the environment (i.e.hazard analysis). It also makes judgments "on thetolerabilityof the risk on the basis of a risk analysis" while considering influencing factors (i.e. risk evaluation).[1][3] Risk assessments can be done inindividualcases, including in patient and physician interactions.[4]In the narrow sense chemical risk assessment is the assessment of a health risk in response to environmental exposures.[5]The ways statistics are expressed and communicated to an individual, both through words and numbers impact his or her interpretation of benefit and harm. For example, afatality ratemay be interpreted as less benign than the correspondingsurvival rate.[4]Asystematic reviewof patients and doctors from 2017 found that overstatement of benefits and understatement of risks occurred more often than the alternative.[4][6]A systematic review from theCochrane collaborationsuggested "well-documented decision aids" are helpful in reducing effects of such tendencies or biases.[4][7]Aids may help people come to a decision about their care based on evidence informed information that align with their values.[7]Decision aids may also help people understand the risks more clearly, and they empower people to take an active role when making medical decisions.[7]The systematic review did not find a difference in people who regretted their decisions between those who used decision aids and those who had the usual standard treatment.[7] An individual's ownrisk perceptionmay be affected by psychological, ideological, religious or otherwise subjective factors, which impact rationality of the process.[4]Individuals tend to be less rational when risks and exposures concern themselves as opposed to others.[4]There is also a tendency to underestimate risks that are voluntary or where the individual sees themselves as being in control, such as smoking.[4] Risk assessment can also be made on a much largersystems theoryscale, for example assessing the risks of an ecosystem or an interactively complex mechanical, electronic, nuclear, and biological system or a hurricane (a complex meteorological and geographical system). Systems may be defined as linear and nonlinear (or complex), where linear systems are predictable and relatively easy to understand given a change in input, and non-linear systems unpredictable when inputs are changed.[8]As such, risk assessments of non-linear/complex systems tend to be more challenging. In the engineering ofcomplex systems, sophisticated risk assessments are often made withinsafety engineeringandreliability engineeringwhen it concerns threats to life,natural environment, or machine functioning. The agriculture, nuclear, aerospace, oil, chemical, railroad, and military industries have a long history of dealing with risk assessment.[9]Also, medical, hospital,social service,[10]and food industries control risks and perform risk assessments on a continual basis. 
Methods for assessment of risk may differ between industries and according to whether they pertain to general financial decisions or to environmental, ecological, or public health risk assessment.[9]Rapid technological change, increasing scale of industrial complexes, increased system integration, market competition, and other factors have been shown to increase societal risk in the past few decades.[1]As such, risk assessments become increasingly critical in mitigating accidents, improving safety, and improving outcomes. Risk assessment consists of an objective evaluation ofriskin which assumptions and uncertainties are clearly considered and presented. This involves identification of risk (what can happen and why), the potential consequences, theprobability of occurrence, the tolerability oracceptabilityof the risk, and ways to mitigate or reduce the probability of the risk.[3]Optimally, it also involves documentation of the risk assessment and its findings, implementation of mitigation methods, and review of the assessment (or risk management plan), coupled with updates when necessary.[1]Sometimes risks can be deemed acceptable, meaning the risk "is understood and tolerated ... usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss."[11] Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment andrisk managementmust be fundamentally different for the two types of risk.[12]Mild risk followsnormalor near-normalprobability distributions, is subject toregression to the meanand thelaw of large numbers, and is therefore relatively predictable. Wild risk followsfat-tailed distributions, e.g.,Paretoorpower-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and management are to be valid and reliable, according to Mandelbrot. To see the risk management process expressed mathematically, one can defineexpectedrisk as the sum over individual risks,Ri{\displaystyle R_{i}}, which can be computed as the product of potential losses,Li{\displaystyle L_{i}}, and their probabilities,p(Li){\displaystyle p(L_{i})}: {\displaystyle R=\sum _{i}R_{i}=\sum _{i}L_{i}\,p(L_{i}).} Even though for some risksRi,Rj{\displaystyle R_{i},R_{j}}, we might haveRi=Rj{\displaystyle R_{i}=R_{j}}, if the probabilityp(Lj){\displaystyle p(L_{j})}is small compared top(Li){\displaystyle p(L_{i})}, its estimation might be based on only a smaller number of prior events and hence be more uncertain. On the other hand, sinceRi=Rj{\displaystyle R_{i}=R_{j}},Lj{\displaystyle L_{j}}must be larger thanLi{\displaystyle L_{i}}, so decisions based on this uncertainty would be more consequential, and hence, warrant a different approach. This becomes important when we consider thevarianceof the risk, since a single largeLi{\displaystyle L_{i}}can dominate it. Financial decisions, such as insurance, express loss in terms of dollar amounts. When risk assessment is used for public health or environmental decisions, the loss can be quantified in a common metric such as a country's currency or some numerical measure of a location's quality of life.
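As a purely illustrative sketch of the expected-risk sum above (the scenario names, loss figures, and probabilities are invented for the example):

# Hypothetical scenarios: (description, potential loss L_i in dollars, probability p(L_i))
scenarios = [
    ("frequent minor incident",    5_000, 0.200),
    ("occasional server outage",  50_000, 0.020),
    ("rare data breach",         500_000, 0.002),
]

expected_risk = sum(loss * prob for _, loss, prob in scenarios)
print(f"Expected risk R = ${expected_risk:,.0f} per year")   # $3,000

# Each scenario contributes the same R_i = $1,000, but the rare, high-loss event
# is estimated from far fewer prior occurrences, so its contribution is the most
# uncertain and the most consequential if the estimate is wrong.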
For public health and environmental decisions, the loss is simply a verbal description of the outcome, such as increased cancer incidence or incidence of birth defects. In that case, the "risk" is expressed as If the risk estimate takes into account information on the number of individuals exposed, it is termed a "population risk" and is in units of expected increased cases per time period. If the risk estimate does not take into account the number of individuals exposed, it is termed an "individual risk" and is in units of incidence rate per time period. Population risks are of more use for cost/benefit analysis; individual risks are of more use for evaluating whether risks to individuals are "acceptable". In quantitative risk assessment, anannualized loss expectancy(ALE) may be used to justify the cost of implementing countermeasures to protect an asset. This may be calculated by multiplying thesingle loss expectancy(SLE), which is the loss of value based on a single security incident, with the annualized rate of occurrence (ARO), which is an estimate of how often a threat would be successful in exploiting a vulnerability. The usefulness of quantitative risk assessment has been questioned, however.Barry Commoner,Brian Wynneand other critics have expressed concerns that risk assessment tends to be overly quantitative and reductive. For example, they argue that risk assessments ignore qualitative differences among risks. Some charge that assessments may drop out important non-quantifiable or inaccessible information, such as variations among the classes of people exposed to hazards, or social amplification.[13]Furthermore, Commoner[14]and O'Brien[15]claim that quantitative approaches divert attention from precautionary or preventative measures.[16]Others, likeNassim Nicholas Talebconsider risk managers little more than "blind users" of statistical tools and methods.[17] Risk engineering is central to the assessment phase, where risks are not only identified but rigorously analyzed, quantified, and modeled. In the context of financial systems—particularly credit risk—risk engineering involves understanding the dynamic behavior of risk parameters such as probability of default, exposure at default, and loss given default.[18]These are not treated as isolated figures but as interconnected components that respond to systemic and idiosyncratic changes. As individual risks aggregate into portfolios or larger systems, risk engineers deploy statistical models and simulation techniques to uncover dependencies and potential cascade effects. This systems-level view enables the modeling of stress scenarios and rare, high-impact events—what some refer to as "wild risk." It also supports the design of robust structures capable of absorbing shocks and preventing systemic collapse.[19]Regulatory frameworks add another layer to the assessment process, requiring that risk engineering efforts not only reflect real-world complexity but also align with institutional constraints. Older textbooks distinguish between the termrisk analysisandrisk evaluation; a risk analysis includes the following 4 steps:[1] A risk evaluation means that judgements are made on the tolerability of the identified risks, leading to risk acceptance. 
When risk analysis and risk evaluation are made at the same time, it is called risk assessment.[1] As of 2023, chemical risk assessment follows these 4 steps:[5] There is tremendous variability in thedose-response relationshipbetween a chemical and human health outcome in particularly susceptible subgroups, such as pregnant women, developing fetuses, children up to adolescence, people with low socioeconomic status, those with preexisting diseases, disabilities,genetic susceptibility, and those with otherenvironmental exposures.[5] The process of risk assessment may be somewhat informal at the individual social level, assessing economic and household risks,[20][21]or a sophisticated process at the strategic corporate level. However, in both cases, ability to anticipate future events and create effective strategies for mitigating them when deemed unacceptable is vital. At the individual level, identifying objectives and risks, weighing their importance, and creating plans, may be all that is necessary. At the strategic organisational level, more elaborate policies are necessary, specifying acceptable levels of risk, procedures to be followed within the organisation, priorities, and allocation of resources.[22]: 10 At the strategic corporate level, management involved with the project produce project level risk assessments with the assistance of the available expertise as part of the planning process and set up systems to ensure that required actions to manage the assessed risk are in place. At the dynamic level, the personnel directly involved may be required to deal with unforeseen problems in real time. The tactical decisions made at this level should be reviewed after the operation to provide feedback on the effectiveness of both the planned procedures and decisions made in response to the contingency. The results of these steps are combined to produce an estimate of risk. Because of the different susceptibilities and exposures, this risk will vary within a population. An uncertainty analysis is usually included in a health risk assessment. During an emergency response, the situation and hazards are often inherently less predictable than for planned activities (non-linear). In general, if the situation and hazards are predictable (linear), standard operating procedures should deal with them adequately. In some emergencies, this may also hold true, with the preparation and trained responses being adequate to manage the situation. In these situations, the operator can manage risk without outside assistance, or with the assistance of a backup team who are prepared and available to step in at short notice. Other emergencies occur where there is no previously planned protocol, or when an outsider group is brought in to handle the situation, and they are not specifically prepared for the scenario that exists but must deal with it without undue delay. Examples include police, fire department, disaster response, and other public service rescue teams. In these cases, ongoing risk assessment by the involved personnel can advise appropriate action to reduce risk.[22]HM Fire Services Inspectorate has defined dynamic risk assessment (DRA) as: The continuous assessment of risk in the rapidly changing circumstances of an operational incident, in order to implement the control measures necessary to ensure an acceptable level of safety.[22] Dynamic risk assessment is the final stage of an integrated safety management system that can provide an appropriate response during changing circumstances. 
It relies on experience, training and continuing education, including effective debriefing to analyse not only what went wrong, but also what went right, and why, and to share this with other members of the team and the personnel responsible for the planning level risk assessment.[22] The application of risk assessment procedures is common in a wide range of fields, and these may have specific legal obligations, codes of practice, and standardised procedures. Some of these are listed here. There are many resources that provide human health risk information: TheNational Library of Medicineprovides risk assessment and regulation information tools for a varied audience.[23]These include: TheUnited States Environmental Protection Agencyprovides basic information about environmental health risk assessments for the public for a wide variety of possible environmental exposures.[26] The Environmental Protection Agency began actively using risk assessment methods to protect drinking water in the United States after the passage of the Safe Drinking Water Act of 1974. The law required the National Academy of Sciences to conduct a study on drinking water issues, and in its report, the NAS described some methodologies for doing risk assessments for chemicals that were suspected carcinogens, recommendations that top EPA officials have described as perhaps the study's most important part.[27] Considering the increase in junk food and its toxicity, FDA required in 1973 that cancer-causing compounds must not be present in meat at concentrations that would cause a cancer risk greater than 1 in a million over a lifetime. The US Environmental Protection Agency provides extensive information about ecological and environmental risk assessments for the public via its risk assessment portal.[28]TheStockholm Conventiononpersistent organic pollutants(POPs) supports a qualitative risk framework for public health protection from chemicals that display environmental and biological persistence,bioaccumulation, toxicity (PBT) and long range transport; most global chemicals that meet this criterion have been previously assessed quantitatively by national and international health agencies.[29] For non-cancer health effects, the termsreference dose(RfD) orreference concentration(RfC) are used to describe the safe level of exposure in a dichotomous fashion. Newer ways of communicating the risk is theprobabilistic risk assessment.[30] When risks apply mainly to small sub-populations, it can be difficult to determine when intervention is necessary. For example, there may be a risk that is very low for everyone, other than 0.1% of the population. It is necessary to determine whether this 0.1% is represented by: If the risk is higher for a particular sub-population because of abnormal exposure rather than susceptibility, strategies to further reduce the exposure of that subgroup are considered. If an identifiable sub-population is more susceptible due to inherent genetic or other factors, public policy choices must be made. The choices are: Acceptable risk is a risk that is understood and tolerated usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss.[31] The idea of not increasing lifetime risk by more than one in a million has become commonplace in public health discourse and policy.[32]It is a heuristic measure. It provides a numerical basis for establishing a negligible increase in risk. 
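As a back-of-the-envelope illustration of what such a heuristic implies, and of the individual versus population risk distinction drawn above (the population size and averaging period are assumptions chosen only for the example):

# Illustrative arithmetic: a "one in a million" incremental lifetime risk,
# read at the individual level and at the population level.
individual_lifetime_risk = 1e-6      # incremental individual risk (dimensionless)
population = 300_000_000             # hypothetical exposed population
lifetime_years = 70                  # hypothetical averaging period

expected_excess_cases = individual_lifetime_risk * population
expected_cases_per_year = expected_excess_cases / lifetime_years
print(expected_excess_cases, round(expected_cases_per_year, 1))   # 300 cases, ~4.3 per year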
Environmental decision making allows some discretion for deeming individual risks potentially "acceptable" if less than one in ten thousand chance of increased lifetime risk. Low risk criteria such as these provide some protection for a case where individuals may be exposed to multiple chemicals e.g. pollutants, food additives, or other chemicals.[citation needed] In practice, a true zero-risk is possible only with the suppression of the risk-causing activity.[citation needed] Stringent requirements of 1 in a million may not be technologically feasible or may be so prohibitively expensive as to render the risk-causing activity unsustainable, resulting in the optimal degree of intervention being a balance between risks vs. benefit.[citation needed]For example, emissions from hospital incinerators result in a certain number of deaths per year. However, this risk must be balanced against the alternatives. There are public health risks, as well as economic costs, associated with all options. The risk associated with noincinerationis the potential spread of infectious diseases or even no hospitals. Further investigation identifies options such as separating noninfectious from infectious wastes, or air pollution controls on a medical incinerator. Intelligent thought about a reasonably full set of options is essential. Thus, it is not unusual for there to be an iterative process between analysis, consideration of options, and follow up analysis.[citation needed] In the context ofpublic health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations. In most countries, the use of specific chemicals or the operations of specific facilities (e.g. power plants, manufacturing plants) is not allowed unless it can be shown that they do not increase the risk of death or illness above a specific threshold. For example, the AmericanFood and Drug Administration(FDA) regulates food safety through risk assessment, while theEFSAdoes the same in EU.[33] Anoccupational risk assessmentis an evaluation of how much potential danger ahazardcan have to a person in a workplace environment. The assessment takes into account possible scenarios in addition to the probability of their occurrence and the results.[34]Thesixtypes of hazards to be aware of are safety (those that can cause injury),chemicals,biological,physical,psychosocial(those that cause stress, harassment) andergonomic(those that can causemusculoskeletal disorders).[35]To appropriately access hazards there are two parts that must occur. Firstly, there must be an "exposure assessment" which measures the likelihood of worker contact and the level of contact. Secondly, a "risk characterization" must be made which measures the probability and severity of the possible health risks.[36] The importance of risk assessments to manage theconsequences of climate changeand variability is recalled in the global frameworks fordisaster risk reduction, adopted by the member countries of the United Nations at the end of the World Conferences held in Kobe (2005) and Sendai (2015). 
TheSendai Framework for Disaster Risk Reductionbrings attention to the local scale and encourages a holistic risk approach, which should consider all the hazards to which a community is exposed, the integration of technical-scientific knowledge with local knowledge, and the inclusion of the concept of risk in local plans to achieve a significant disaster reduction by 2030. Taking these principles into daily practice poses a challenge for many countries. The Sendai framework monitoring system highlights how little is known about the progress made from 2015 to 2019 in local disaster risk reduction.[37] As of 2019, in the South of the Sahara, risk assessment is not yet an institutionalized practice. The exposure of human settlements to multiple hazards (hydrological and agricultural drought, pluvial, fluvial and coastal floods) is frequent and requires risk assessments on a regional, municipal, and sometimes individual human settlement scale. The multidisciplinary approach and the integration of local and technical-scientific knowledge are necessary from the first steps of the assessment. Local knowledge remains indispensable for understanding the hazards that threaten individual communities and the critical thresholds at which they turn into disasters, for the validation ofhydraulic models, and in the decision-making process onrisk reduction. On the other hand, local knowledge alone is not enough to understand the impacts of future changes and climatic variability and to know the areas exposed to infrequent hazards. The availability of new technologies andopen accessinformation (high resolution satellite images, daily rainfall data) allows assessment today with an accuracy that only 10 years ago was unimaginable. The images taken by unmanned vehicle technologies make it possible to produce very high resolution digital elevation models and to accurately identify the receptors.[38]Based on this information, the hydraulic models allow the identification of flood areas with precision even at the scale of small settlements.[39]Information on losses and damages and on cereal crops at the individual settlement scale makes it possible to determine the level of multi-hazard risk on a regional scale.[40]Multi-temporal high-resolution satellite images make it possible to assess hydrological drought and the dynamics of human settlements in the flood zone.[40]Risk assessment is more than an aid to informed decision making about risk reduction or acceptance.[41]It integrates early warning systems by highlighting the hot spots where disaster prevention and preparedness are most urgent.[42]When risk assessment considers the dynamics of exposure over time, it helps to identify risk reduction policies that are more appropriate to the local context. Despite this potential, risk assessment is not yet integrated into local planning in the South of the Sahara which, in the best of cases, uses only the analysis ofvulnerability to climate changeand variability.[42] For audits performed by an outside audit firm, risk assessment is a crucial stage before accepting an audit engagement. According to ISA315Understanding the Entity and its Environment and Assessing the Risks of Material Misstatement, "the auditor should perform risk assessment procedures to obtain an understanding of the entity and its environment, including its internal control". This provides evidence relating to the auditor's risk assessment of a material misstatement in the client's financial statements.
Then, the auditor obtains initial evidence regarding the classes of transactions at the client and the operating effectiveness of the client's internal controls. Audit risk is defined as the risk that the auditor will issue a clean unmodified opinion regarding the financial statements, when in fact the financial statements are materially misstated, and therefore do not qualify for a clean unmodified opinion. As a formula, audit risk is the product of two other risks: Risk of Material Misstatement and Detection Risk. This formula can be further broken down as follows:inherent risk×control risk×detection risk. Inproject management, risk assessment is an integral part of the risk management plan, studying the probability, the impact, and the effect of every known risk on the project, as well as the corrective action to take should an incident be implied by a risk occur.[43]Of special consideration in this area are the relevant codes of practice that are enforced in the specific jurisdiction. Understanding the regime of regulations that risk management must abide by is integral to formulating safe and compliant risk assessment practices. Information technology riskassessment can be performed by a qualitative or quantitative approach, following different methodologies. One important difference[clarification needed]in risk assessments ininformation securityis modifying the threat model to account for the fact that any adversarial system connected to the Internet has access to threaten any other connected system.[44]Risk assessments may therefore need to be modified to account for the threats from all adversaries, instead of just those with reasonable access as is done in other fields. NIST Definition:The process of identifying risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of an information system. Part of risk management incorporates threat and vulnerability analyses and considers mitigations provided by security controls planned or in place.[45] There are various risk assessment methodologies and frameworks available which include NIST Risk Management Framework (RMF),[46]Control Objectives for Information and Related Technologies (COBIT),[47]Factor Analysis of Information Risk (FAIR),[48]Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE),[49]The Center for Internet Security Risk Assessment Method (CIS RAM),[50]and The Duty of Care Risk Analysis (DoCRA) Standard,[51]which helps define 'reasonable' security. The Threat and Risk Assessment (TRA) process is part of risk management referring to risks related tocyber threats. The TRA process will identify cyber risks, assess risks' severities, and may recommend activities to reduce risks to an acceptable level. There are different methodologies for performing TRA (e.g., Harmonized TRA Methodology[52]), all utilize the following elements:[53][54][55]identifying of assets (what should be protected), identifying and assessing of the threats and vulnerabilities for the identified assets, determining the exploitability of the vulnerabilities, determining the levels of risk associated with the vulnerabilities (what are the implications if the assets were damaged or lost), and recommending a risk mitigation program. Megaprojects(sometimes also called "major programs") are extremely large-scale investment projects, typically costing more than US$1 billion per project. 
They include bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection,oilandnatural gas extractionprojects, public buildings, information technology systems, aerospace projects, and defence systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social andenvironmental impacts. Studies have shown that early parts of the system development cycle such as requirements and design specifications are especially prone to error. This effect is particularly notorious in projects involving multiplestakeholderswith different points of view. Evolutionary software processes offer an iterative approach torequirement engineeringto alleviate the problems of uncertainty, ambiguity, and inconsistency inherent in software development. In July 2010, shipping companies agreed to use standardized procedures in order to assess risk in key shipboard operations. These procedures were implemented as part of the amendedISM Code.[56] Formal risk assessment is a required component of mostprofessional dive planning, but the format and methodology may vary. Consequences of an incident due to an identified hazard are generally chosen from a small number of standardised categories, and probability is estimated based on statistical data on the rare occasions when it is available, and on a best guess estimate based on personal experience and company policy in most cases. A simplerisk matrixis often used to transform these inputs into a level of risk, generally expressed as unacceptable, marginal or acceptable. If unacceptable, measures must be taken to reduce the risk to an acceptable level, and the outcome of the risk assessment must be accepted by the affected parties before a dive commences. Higher levels of risk may be acceptable in special circumstances, such as military or search and rescue operations when there is a chance of recovering a survivor.Diving supervisorsare trained in the procedures ofhazard identification and risk assessment, and it is part of their planning and operational responsibility. Both health and safety hazards must be considered. Several stages may be identified. There is risk assessment done as part of the diving project planning, on-site risk assessment which takes into account the specific conditions of the day, anddynamic risk assessmentwhich is ongoing during the operation by the members of the dive team, particularly the supervisor and the working diver.[57][58] Inrecreational scuba diving, the extent of risk assessment expected of the diver is relatively basic and is included in thepre-dive checks. Several mnemonics have been developed bydiver certification agenciesto remind the diver to pay some attention to risk, but the training is rudimentary. Diving service providers are expected to provide a higher level of care for their customers, anddiving instructorsanddivemastersare expected to assess risk on behalf of their customers and warn them of site-specific hazards and the competence considered appropriate for the planned dive.
Technical divers are expected to make a more thorough assessment of risk, but as they will be making an informed choice for a recreational activity, the level of acceptable risk may be considerably higher than that permitted for occupational divers under the direction of an employer.[59][60] In outdoor activities including commercial outdoor education, wilderness expeditions, andoutdoor recreation, risk assessment refers to the analysis of the probability and magnitude of unfavorable outcomes such as injury, illness, or property damage due to environmental and related causes, compared to the human development or other benefits of outdoor activity. This is of particular importance as school programs and others weigh the benefits of youth and adult participation in various outdoor learning activities against the inherent and other hazards present in those activities. Schools, corporate entities seeking team-building experiences, parents/guardians, and others considering outdoor experiences expect or require[61]organizations to assess the hazards and risks of different outdoor activities—such as sailing, target shooting, hunting, mountaineering, or camping—and select activities with acceptable risk profiles. Outdoor education, wilderness adventure, and other outdoor-related organizations should, and are in some jurisdictions required, to conduct risk assessments prior to offering programs for commercial purposes.[62][63][64] Such organizations are given guidance on how to provide their risk assessments.[65] Risk assessments for led outdoor activities form only one component of a comprehensive risk management plan, as many risk assessments use a basic linear-style thinking that does not employ more modern risk management practice employing complex socio-technical systems theory.[66][67] Environmental Risk Assessment(ERA) aims to assess the effects of stressors, usually chemicals, on the local environment. A risk is an integrated assessment of the likelihood and severity of an undesired event. In ERA, the undesired event often depends on the chemical of interest and on the risk assessment scenario.[68]This undesired event is usually a detrimental effect on organisms, populations orecosystems. Current ERAs usually compare an exposure to a no-effect level, such as thePredicted Environmental Concentration/Predicted No-Effect Concentration(PEC/PNEC) ratio in Europe. Although this type of ratio is useful and often used in regulation purposes, it is only an indication of an exceeded apparent threshold.[69]New approaches start to be developed in ERA in order to quantify this risk and to communicate effectively on it with both the managers and the general public.[68] Ecological risk assessment is complicated by the fact that there are many nonchemical stressors that substantially influence ecosystems, communities, and individual plants and animals, as well as across landscapes and regions.[70][71]Defining the undesired (adverse) event is a political or policy judgment, further complicating applying traditional risk analysis tools to ecological systems. Much of the policy debate surrounding ecological risk assessment is over defining precisely what is an adverse event.[72] Biodiversity Risk Assessments evaluate risks tobiological diversity, specially the risk ofspeciesextinctionor the risk ofecosystem collapse. 
The units of assessments are the biological (species,subspeciesorpopulations) or ecological entities (habitats,ecosystems, etc.), and the risks are often related to human actions and interventions (threats and pressures). Regional and national protocols have been proposed by multiple academic or governmental institutions and working groups,[73]but global standards such as theRed List of Threatened Speciesand theIUCN Red List of Ecosystemshave been widely adopted, and are recognized or proposed as official indicators of progress toward international policy targets and goals, such as theAichi targetsand theSustainable Development Goals.[74][75] Risk assessments are used in numerous stages during the legal process and are developed to measure a wide variety of items, such as recidivism rates, potential pretrial issues, probation/parole, and to identify potential interventions for defendants.[76]Clinical psychologists, forensic psychologists, and other practitioners are responsible for conducting risk assessments.[76][77][78]Depending on the risk assessment tool, practitioners are required to gather a variety of background information on the defendant or individual being assessed. This information includes their previous criminal history (if applicable) and other records (e.g. demographics, education, job status, medical history), which can be accessed through direct interview with the defendant or on-file records.[76] In the pre-trial stage, a widely used risk assessment tool is the Public Safety Assessment,[79]which predicts failure to appear in court, likelihood of a new criminal arrest while on pretrial release, and likelihood of a new violent criminal arrest while on pretrial release. Multiple items are observed and taken into account depending on which aspect of the PSA is the focus, and like all other actuarial risk assessments, each item is assigned a weighted amount to produce a final score.[76]Detailed information on the items the PSA factors in and how scores are distributed is accessible online.[80] Fordefendantswho have been incarcerated, risk assessments are used to determine their likelihood ofrecidivismand inform sentence length decisions. Risk assessments also aid parole/probation officers in determining the level of supervision a probationer should be subjected to and what interventions could be implemented to improve offender risk status.[77]TheCorrectional Offender Management Profiling for Alternative Sanctions(COMPAS) is a risk assessment tool designed to measure pretrial release risk, general recidivism risk, and violent recidivism risk. Detailed information on scoring and algorithms for COMPAS is not accessible to the general public.
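To illustrate how an actuarial tool of this general kind turns weighted items into a score, here is a deliberately simplified sketch; the items, weights, and cut-offs are invented for the example and do not reproduce the PSA or COMPAS.

# Hypothetical actuarial-style tool: weighted items summed into a raw score and
# mapped to a coarse risk category. Items and weights are illustrative only.
ITEM_WEIGHTS = {
    "pending_charge_at_arrest": 2,
    "prior_failure_to_appear":  3,
    "prior_violent_conviction": 2,
    "age_under_23":             1,
}

def raw_score(answers):
    # answers maps each item to 0 (absent) or 1 (present)
    return sum(ITEM_WEIGHTS[item] * answers.get(item, 0) for item in ITEM_WEIGHTS)

def risk_category(score):
    return "low" if score <= 2 else "moderate" if score <= 5 else "high"

answers = {"pending_charge_at_arrest": 1, "prior_failure_to_appear": 1}
s = raw_score(answers)
print(s, risk_category(s))   # 5 moderate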
https://en.wikipedia.org/wiki/Risk_assessment
Ineconomicsandfinance,risk aversionis the tendency of people to prefer outcomes with lowuncertaintyto those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome.[1] Risk aversion explains the inclination to agree to a situation with a lower average payoff that is more predictable rather than another situation with a less predictable payoff that is higher on average. For example, a risk-averse investor might choose to put their money into abankaccount with a low but guaranteed interest rate, rather than into astockthat may have high expected returns, but also involves a chance of losing value. A person is given the choice between two scenarios: one with a guaranteed payoff, and one with a risky payoff with the same average value. In the former scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning that an individual who was insensitive to risk would not care whether they took the guaranteed payment or the gamble. However, individuals may have differentrisk attitudes.[2][3][4] A person is said to berisk averseif they would accept a guaranteed payment of less than $50 rather than take the gamble,risk neutralif they are indifferent between the gamble and a guaranteed $50, andrisk loving(risk seeking) if they would take the gamble even over a guaranteed payment of more than $50. The average payoff of the gamble, known as itsexpected value, is $50. The smallest guaranteed dollar amount that an individual would be indifferent to compared to an uncertain gain of a specific average predicted value is called thecertainty equivalent, which is also used as a measure of risk aversion. An individual that is risk averse has a certainty equivalent that is smaller than the prediction of uncertain gains. Therisk premiumis the difference between the expected value and the certainty equivalent. For risk-averse individuals, risk premium is positive, for risk-neutral persons it is zero, and for risk-loving individuals their risk premium is negative. Inexpected utilitytheory, an agent has a utility functionu(c) wherecrepresents the value that he might receive in money or goods (in the above exampleccould be $0 or $40 or $100). The utility functionu(c) is defined onlyup topositiveaffine transformation– in other words, a constant could be added to the value ofu(c) for allc, and/oru(c) could be multiplied by a positive constant factor, without affecting the conclusions. An agent is risk-averse if and only if the utility function isconcave. For instanceu(0) could be 0,u(100) might be 10,u(40) might be 5, and for comparisonu(50) might be 6. The expected utility of the above bet (with a 50% chance of receiving 100 and a 50% chance of receiving 0) is {\displaystyle \mathbb {E} [u]={\tfrac {1}{2}}u(0)+{\tfrac {1}{2}}u(100),} and if the person has the utility function withu(0)=0,u(40)=5, andu(100)=10 then the expected utility of the bet equals 5, which is the same as the known utility of the amount 40. Hence the certainty equivalent is 40. The risk premium is ($50 minus $40)=$10, or, in proportional terms, ($50 minus $40)/$40 = 25% (where $50 is the expected value of the risky bet:{\displaystyle {\tfrac {1}{2}}\cdot 0+{\tfrac {1}{2}}\cdot 100}). This risk premium means that the person would be willing to sacrifice as much as $10 in expected value in order to achieve perfect certainty about how much money will be received. In other words, the person would be indifferent between the bet and a guarantee of $40, and would prefer anything over $40 to the bet. In the case of a wealthier individual, the risk of losing $100 would be less significant, and for such small amounts his utility function would be likely to be almost linear.
For instance, if u(0) = 0 and u(100) = 10, then u(40) might be 4.02 and u(50) might be 5.01. The utility function for perceived gains has two key properties: an upward slope, and concavity. (i) The upward slope implies that the person feels that more is better: a larger amount received yields greater utility, and for risky bets the person would prefer a bet which isfirst-order stochastically dominantover an alternative bet (that is, if the probability mass of the second bet is pushed to the right to form the first bet, then the first bet is preferred). (ii) The concavity of the utility function implies that the person is risk averse: a sure amount would always be preferred over a risky bet having the same expected value; moreover, for risky bets the person would prefer a bet which is amean-preserving contractionof an alternative bet (that is, if some of the probability mass of the first bet is spread out without altering the mean to form the second bet, then the first bet is preferred). There are various measures of the risk aversion expressed by a given utility function, and several functional forms often used for utility functions can be characterized by these measures. The higher the curvature ofu(c){\displaystyle u(c)}, the higher the risk aversion. However, since expected utility functions are not uniquely defined (are defined only up toaffine transformations), a measure that stays constant with respect to these transformations is needed rather than just the second derivative ofu(c){\displaystyle u(c)}. One such measure is theArrow–Pratt measure of absolute risk aversion(ARA), after the economistsKenneth ArrowandJohn W. Pratt,[5][6]also known as thecoefficient of absolute risk aversion, defined as {\displaystyle A(c)=-{\frac {u''(c)}{u'(c)}},} whereu′(c){\displaystyle u'(c)}andu″(c){\displaystyle u''(c)}denote the first and second derivatives with respect toc{\displaystyle c}ofu(c){\displaystyle u(c)}. For example, ifu(c)=α+βln⁡(c),{\displaystyle u(c)=\alpha +\beta \ln(c),}sou′(c)=β/c{\displaystyle u'(c)=\beta /c}andu″(c)=−β/c2,{\displaystyle u''(c)=-\beta /c^{2},}thenA(c)=1/c.{\displaystyle A(c)=1/c.}Note howA(c){\displaystyle A(c)}does not depend onα{\displaystyle \alpha }andβ,{\displaystyle \beta ,}so affine transformations ofu(c){\displaystyle u(c)}do not change it. An important special case is hyperbolic absolute risk aversion, in which the coefficient satisfiesA(c)=1/(ac+b){\displaystyle A(c)=1/(ac+b)}. The solution to this differential equation (omitting additive and multiplicative constant terms, which do not affect the behavior implied by the utility function) is:u(c)=(c−cs)1−R1−R{\displaystyle u(c)={\frac {(c-c_{s})^{1-R}}{1-R}}},whereR=1/a{\displaystyle R=1/a}andcs=−b/a{\displaystyle c_{s}=-b/a}. Note that whena=0{\displaystyle a=0}, this is CARA, asA(c)=1/b=const{\displaystyle A(c)=1/b=const}, and whenb=0{\displaystyle b=0}, this is CRRA (see below), ascA(c)=1/a=const{\displaystyle cA(c)=1/a=const}. Decreasing absolute risk aversion (DARA) meansA′(c)<0{\displaystyle A'(c)<0},[7]and this can hold only ifu‴(c)>0{\displaystyle u'''(c)>0}. Therefore, DARA implies that the utility function is positively skewed; that is,u‴(c)>0{\displaystyle u'''(c)>0}.[8]Analogously, IARA can be derived with the opposite directions of inequalities, which permits but does not require a negatively skewed utility function (u‴(c)<0{\displaystyle u'''(c)<0}). An example of a DARA utility function isu(c)=log⁡(c){\displaystyle u(c)=\log(c)}, withA(c)=1/c{\displaystyle A(c)=1/c}, whileu(c)=c−αc2,{\displaystyle u(c)=c-\alpha c^{2},}α>0{\displaystyle \alpha >0}, withA(c)=2α/(1−2αc){\displaystyle A(c)=2\alpha /(1-2\alpha c)}would represent a quadratic utility function exhibiting IARA.
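The coefficients quoted above can be checked symbolically. The following sketch uses the sympy library (an implementation choice, not part of the source); it also reports c·A(c), the relative measure discussed below.

import sympy as sp

c, alpha = sp.symbols('c alpha', positive=True)

def absolute_risk_aversion(u):
    # Arrow-Pratt coefficient A(c) = -u''(c) / u'(c)
    return sp.simplify(-sp.diff(u, c, 2) / sp.diff(u, c))

for label, u in [("log utility", sp.log(c)),
                 ("quadratic utility", c - alpha * c**2),
                 ("exponential utility", -sp.exp(-alpha * c))]:
    A = absolute_risk_aversion(u)
    print(label, A, sp.simplify(c * A))   # A(c) and the relative measure c*A(c)

# log utility:         A(c) equals 1/c (decreasing in c: DARA); c*A(c) = 1 (constant: CRRA)
# quadratic utility:   A(c) equals 2*alpha/(1 - 2*alpha*c) (increasing in c: IARA)
# exponential utility: A(c) equals alpha (constant: CARA)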
TheArrow–Pratt measure of relative risk aversion(RRA) orcoefficient of relative risk aversionis defined as[11] {\displaystyle R(c)=cA(c)=-{\frac {cu''(c)}{u'(c)}}.} Unlike ARA, whose units are $−1, RRA is a dimensionless quantity, which allows it to be applied universally. As for absolute risk aversion, the corresponding termsconstant relative risk aversion(CRRA) anddecreasing/increasing relative risk aversion(DRRA/IRRA) are used. This measure has the advantage that it is still a valid measure of risk aversion, even if the utility function changes from risk averse to risk loving ascvaries, i.e. utility is not strictly convex/concave over allc. A constant RRA implies a decreasing ARA, but the reverse is not always true. As a specific example of constant relative risk aversion, the utility functionu(c)=log⁡(c){\displaystyle u(c)=\log(c)}impliesRRA = 1. Inintertemporal choiceproblems, theelasticity of intertemporal substitutionoften cannot be disentangled from the coefficient of relative risk aversion. Theisoelastic utilityfunction exhibits constant relative risk aversion withR(c)=ρ{\displaystyle R(c)=\rho }and the elasticity of intertemporal substitutionεu(c)=1/ρ{\displaystyle \varepsilon _{u(c)}=1/\rho }. Whenρ=1,{\displaystyle \rho =1,}usingl'Hôpital's ruleshows that this simplifies to the case oflog utility,u(c) = logc, and theincome effectandsubstitution effecton saving exactly offset. A time-varying relative risk aversion can be considered.[12] The most straightforward implications of changing risk aversion occur in the context of forming a portfolio with one risky asset and one risk-free asset.[5][6]If an investor experiences an increase in wealth, he/she will choose to increase (or keep unchanged, or decrease) the number of dollars invested in the risky asset according to whether absolute risk aversion is decreasing (or constant, or increasing), and will increase (or keep unchanged, or decrease) the fraction of the portfolio held in the risky asset according to whether relative risk aversion is decreasing (or constant, or increasing). Thus economists avoid using utility functions which exhibit increasing absolute risk aversion, because they have an unrealistic behavioral implication. In onemodelinmonetary economics, an increase in relative risk aversion increases the impact of households' money holdings on the overall economy. In other words, the more the relative risk aversion increases, the more money demand shocks will impact the economy.[13] Inmodern portfolio theory, risk aversion is measured as the additional expected reward an investor requires to accept additional risk. If an investor is risk-averse, they will invest in multiple uncertain assets, but only when the predicted return on a portfolio that is uncertain is greater than the predicted return on one that is not uncertain will the investor prefer the former.[1]Here, therisk-return spectrumis relevant, as it results largely from this type of risk aversion. Here risk is measured as thestandard deviationof the return on investment, i.e. thesquare rootof itsvariance. In advanced portfolio theory, different kinds of risk are taken into consideration. They are measured as then-th rootof the n-thcentral moment. The symbol used for risk aversion is A or An. Thevon Neumann-Morgenstern utility theoremis another model used to denote how risk aversion influences an actor's utility function.
An extension of theexpected utilityfunction, the von Neumann-Morgenstern model includes risk aversion axiomatically rather than as an additional variable.[14] John von NeumannandOskar Morgensternfirst developed the model in their bookTheory of Games and Economic Behaviour.[14]Essentially, von Neumann and Morgenstern hypothesised that individuals seek to maximise their expected utility rather than the expected monetary value of assets.[15]In defining expected utility in this sense, the pair developed a function based on preference relations. As such, if an individual's preferences satisfy four key axioms, then a utility function based on how they weigh different outcomes can be deduced.[16] In applying this model to risk aversion, the function can be used to show how an individual's preferences of wins and losses will influence their expected utility function. For example, if a risk-averse individual with $20,000 in savings is given the option to gamble it for $100,000 with a 30% chance of winning, they may still not take the gamble in fear of losing their savings. This does not make sense under the traditional expected utility model, however: EU(A)=0.3($100,000)+0.7($0){\displaystyle EU(A)=0.3(\$100,000)+0.7(\$0)} EU(A)=$30,000{\displaystyle EU(A)=\$30,000} EU(A)>$20,000{\displaystyle EU(A)>\$20,000} The von Neumann-Morgenstern model can explain this scenario. Based on preference relations, a specific utilityu{\displaystyle u}can be assigned to both outcomes. Now the function becomes: EU(A)=0.3u($100,000)+0.7u($0){\displaystyle EU(A)=0.3u(\$100,000)+0.7u(\$0)} For a risk-averse person,u{\displaystyle u}takes values such that the individual would rather keep their $20,000 in savings than gamble it all to potentially increase their wealth to $100,000. Hence a risk-averse individual's function would show that: {\displaystyle EU(A)\prec u(\$20,000){\text{ (keeping savings)}}} Using expected utility theory's approach to risk aversion to analyzesmall stakes decisionshas come under criticism.Matthew Rabinhas shown that a risk-averse, expected-utility-maximizing individual who, from any initial wealth level [...] turns down gambles where she loses $100 or gains $110, each with 50% probability [...] will turn down 50–50 bets of losing $1,000 or gaining any sum of money.[17] Rabin criticizes this implication of expected utility theory on grounds of implausibility: individuals who are risk averse for small gambles due to diminishing marginal utility would exhibit extreme forms of risk aversion in risky decisions under larger stakes. One solution to the problem observed by Rabin is that proposed byprospect theoryandcumulative prospect theory, where outcomes are considered relative to a reference point (usually the status quo), rather than considering only the final wealth. Another limitation is the reflection effect, which demonstrates the reversing of risk aversion. This effect was first presented byKahnemanandTverskyas a part of theprospect theory, in thebehavioral economicsdomain. The reflection effect is an identified pattern of opposite preferences between negative as opposed to positive prospects: people tend to avoid risk when the gamble is between gains, and to seek risks when the gamble is between losses.[18]For example, most people prefer a certain gain of 3,000 to an 80% chance of a gain of 4,000. When posed the same problem, but for losses, most people prefer an 80% chance of a loss of 4,000 to a certain loss of 3,000.
The reflection effect (as well as thecertainty effect) is inconsistent with the expected utility hypothesis. It is assumed that the psychological principle which stands behind this kind of behavior is the overweighting of certainty. Options which are perceived as certain are over-weighted relative to uncertain options. This pattern is an indication of risk-seeking behavior in negative prospects and eliminates other explanations for the certainty effect such as aversion for uncertainty or variability.[18] The initial findings regarding the reflection effect faced criticism regarding its validity, as it was claimed that there is insufficient evidence to support the effect on the individual level. Subsequently, an extensive investigation revealed its possible limitations, suggesting that the effect is most prevalent when either small or large amounts and extreme probabilities are involved.[19][20] Numerous studies have shown that in riskless bargaining scenarios, being risk-averse is disadvantageous. Moreover, opponents will always prefer to play against the most risk-averse person.[21]Based on both thevon Neumann-MorgensternandNash Game Theorymodel, a risk-averse person will happily receive a smaller commodity share of the bargain.[22]This is because their utility function is concave, hence their utility increases at a decreasing rate, while their non-risk-averse opponents' utility may increase at a constant or increasing rate.[23]Intuitively, a risk-averse person will hence settle for a smaller share of the bargain as opposed to a risk-neutral or risk-seeking individual. This paradox is exemplified in pedestrian behavior, where risk-averse individuals often choose routes they perceive as safer, even when those choices increase their overall exposure to danger.[24] Attitudes towards risk have attracted the interest of the field ofneuroeconomicsandbehavioral economics. A 2009 study by Christopoulos et al. suggested that the activity of a specific brain area (right inferior frontal gyrus) correlates with risk aversion, with more risk averse participants (i.e. those having higher risk premia) also having higher responses to safer options.[25]This result coincides with other studies[25][26]that show thatneuromodulationof the same area results in participants making more or less risk averse choices, depending on whether the modulation increases or decreases the activity of the target area. In the real world, many government agencies, e.g.Health and Safety Executive, are fundamentally risk-averse in their mandate. This often means that they demand (with the power of legal enforcement) that risks be minimized, even at the cost of losing the utility of the risky activity. It is important to consider theopportunity costwhen mitigating a risk: the cost of not taking the risky action. Writing laws focused on the risk without the balance of the utility may misrepresent society's goals. The public understanding of risk, which influences political decisions, is an area which has recently been recognised as deserving focus.
In 2007Cambridge Universityinitiated theWinton Professorship of the Public Understanding of Risk, a role described as outreach rather than traditional academic research by the holder,David Spiegelhalter.[27] Children's services such asschoolsandplaygroundshave become the focus of much risk-averse planning, meaning that children are often prevented from benefiting from activities that they would otherwise have had. Many playgrounds have been fitted with impact-absorbing matting surfaces. However, these are only designed to save children from death in the case of direct falls on their heads and do not achieve their main goals.[28]They are expensive, meaning that less resources are available to benefit users in other ways (such as building a playground closer to the child's home, reducing the risk of a road traffic accident on the way to it), and—some argue—children may attempt more dangerous acts, with confidence in the artificial surface. Shiela Sage, an early years school advisor, observes "Children who are only ever kept in very safe places, are not the ones who are able to solve problems for themselves. Children need to have a certain amount of risk taking ... so they'll know how to get out of situations."[29][citation needed] One experimental study with student-subject playing the game of the TV showDeal or No Dealfinds that people are more risk averse in the limelight than in the anonymity of a typical behavioral laboratory. In the laboratory treatments, subjects made decisions in a standard, computerized laboratory setting as typically employed in behavioral experiments. In the limelight treatments, subjects made their choices in a simulated game show environment, which included a live audience, a game show host, and video cameras.[30]In line with this, studies on investor behavior find that investors trade more and more speculatively after switching from phone-based to online trading[31][32]and that investors tend to keep their core investments with traditional brokers and use a small fraction of their wealth to speculate online.[33] The basis of the theory, on the connection between employment status and risk aversion, is the varying income level of individuals. On average higher income earners are less risk averse than lower income earners. In terms of employment the greater the wealth of an individual the less risk averse they can afford to be, and they are more inclined to make the move from a secure job to anentrepreneurial venture. The literature assumes a small increase in income or wealth initiates the transition from employment to entrepreneurship-based decreasing absolute risk aversion (DARA), constant absolute risk aversion (CARA), and increasing absolute risk aversion (IARA) preferences as properties in theirutility function.[34]Theapportioningrisk perspective can also be used to as a factor in the transition of employment status, only if the strength ofdownside risk aversionexceeds the strength of risk aversion.[34]If using the behavioural approach to model an individual’s decision on their employment status there must be more variables than risk aversion and any absolute risk aversion preferences. Incentive effects are a factor in the behavioural approach an individual takes in deciding to move from a secure job to entrepreneurship. Non-financial incentives provided by an employer can change the decision to transition into entrepreneurship as the intangible benefits helps to strengthen how risk averse an individual is relative to the strength of downside risk aversion. 
Utility functions do not account for such effects and can often skew the estimated behavioural path that an individual takes towards their employment status.[35] The design of experiments to determine at what increase of wealth or income an individual would change their employment status from a position of security to more risky ventures must include flexible utility specifications with salient incentives integrated with risk preferences.[35] The application of relevant experiments can avoid over-generalising varying individual preferences through the use of this model and its specified utility functions.
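The DARA/CARA/IARA distinction used above can be made precise with the Arrow–Pratt measure of absolute risk aversion. The following is a standard textbook summary added here for illustration; it is not drawn from the cited studies.

\[
A(w) = -\frac{u''(w)}{u'(w)}, \qquad
\begin{aligned}
&\text{DARA: } A'(w) < 0, &&\text{e.g. } u(w)=\ln w \;\Rightarrow\; A(w)=1/w,\\
&\text{CARA: } A'(w) = 0, &&\text{e.g. } u(w)=-e^{-aw} \;\Rightarrow\; A(w)=a,\\
&\text{IARA: } A'(w) > 0, &&\text{e.g. } u(w)=w-bw^{2},\; b>0 \;\Rightarrow\; A(w)=\frac{2b}{1-2bw}.
\end{aligned}
\]

Under DARA preferences a small increase in wealth lowers A(w), which is the mechanism the literature above invokes for the move from secure employment into riskier entrepreneurial ventures.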
https://en.wikipedia.org/wiki/Risk_aversion
Incomputer programming,genetic representationis a way of presenting solutions/individuals inevolutionary computationmethods. The term encompasses both the concretedata structuresanddata typesused to realize the genetic material of the candidate solutions in the form of a genome, and the relationships between search space and problem space. In the simplest case, the search space corresponds to the problem space (direct representation).[1]The choice of problem representation is tied to the choice ofgenetic operators, both of which have a decisive effect on the efficiency of the optimization.[2][3]Genetic representation can encode appearance, behavior, physical qualities of individuals. Difference in genetic representations is one of the major criteria drawing a line between known classes of evolutionary computation.[4][5] Terminology is often analogous with naturalgenetics. The block of computer memory that represents one candidate solution is called an individual. The data in that block is called achromosome. Each chromosome consists of genes. The possible values of a particular gene are calledalleles. A programmer may represent all the individuals of apopulationusingbinary encoding,permutational encoding,encoding by tree, or any one of several other representations.[6][7] Genetic algorithms(GAs) are typically linear representations;[8]these are often, but not always,[9][10][11]binary.[10]Holland'soriginal description of GA used arrays ofbits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size. This facilitates simple crossover operation. Depending on the application, variable-length representations have also been successfully used and tested inevolutionary algorithms(EA)[12][13]in general andgenetic algorithms[14][15]in particular, although the implementation of crossover is more complex in this case. Evolution strategyuses linear real-valued representations, e.g., an array of real values. It uses mostlygaussianmutation and blending/averaging crossover.[16] Genetic programming(GP) pioneered tree-like representations and developedgenetic operatorssuitable for such representations. Tree-like representations are used in GP to represent and evolve functional programs with desired properties.[17] Human-based genetic algorithm(HBGA) offers a way to avoid solving hard representation problems by outsourcing all genetic operators to outside agents, in this case, humans. The algorithm has no need for knowledge of a particular fixed genetic representation as long as there are enough external agents capable of handling those representations, allowing for free-form and evolving genetic representations. Analogous to biology, EAs distinguish between problem space (corresponds tophenotype) and search space (corresponds togenotype). The problem space contains concrete solutions to the problem being addressed, while the search space contains the encoded solutions. Themappingfrom search space to problem space is calledgenotype-phenotype mapping. 
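As a minimal illustration of these ideas, the following sketch shows a bit-string genotype, a genotype-phenotype mapping that decodes it into a real number, and the resulting phenotype that would be evaluated in the problem space. The genome length and bounds are illustrative assumptions, not taken from the cited literature.

# Minimal sketch: binary genotype, genotype-phenotype mapping, decoded phenotype.
# The encoding and bounds are illustrative assumptions, not from a specific EA library.
import random

GENOME_LENGTH = 16
LOWER, UPPER = -5.0, 5.0          # bounds of the (one-dimensional) problem space

def random_genotype(length=GENOME_LENGTH):
    """An individual in the search space: a fixed-length array of bits."""
    return [random.randint(0, 1) for _ in range(length)]

def genotype_to_phenotype(bits):
    """Genotype-phenotype mapping: interpret the bit string as an integer
    and scale it into the interval [LOWER, UPPER] of the problem space."""
    value = int("".join(map(str, bits)), 2)          # binary-coded integer
    return LOWER + (UPPER - LOWER) * value / (2 ** len(bits) - 1)

genotype = random_genotype()
phenotype = genotype_to_phenotype(genotype)
print(genotype, "->", round(phenotype, 4))

Because the integer is binary-coded, the leading positions of the genotype carry exponentially more weight in the phenotype than the trailing ones, an example of the non-uniform scaling discussed below.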
Thegenetic operatorsare applied to elements of the search space, and for evaluation, elements of the search space are mapped to elements of the problem space via genotype-phenotype mapping.[18][19] The importance of an appropriate choice of search space for the success of an EA application was recognized early on.[20][21][22]The following requirements can be placed on a suitable search space and thus on a suitable genotype-phenotype mapping:[23][24] All possible admissible solutions must be contained in the search space. When more possible genotypes exist than phenotypes, the genetic representation of the EA is calledredundant. In nature, this is termed a degenerategenetic code. In the case of a redundant representation,neutral mutationsare possible. These are mutations that change the genotype but do not affect the phenotype. Thus, depending on the use of thegenetic operators, there may be phenotypically unchanged offspring, which can lead to unnecessary fitness determinations, among other things. Since the evaluation in real-world applications usually accounts for the lion's share of the computation time, it can slow down theoptimizationprocess. In addition, this can cause the population to have higher genotypic diversity than phenotypic diversity, which can also hinder evolutionary progress. In biology, theNeutral Theory of Molecular Evolutionstates that this effect plays a dominant role in natural evolution. This has motivated researchers in the EA community to examine whether neutral mutations can improve EA functioning[25]by giving populations that have converged to a local optimum a way to escape that local optimum throughgenetic drift. This is discussed controversially and there are no conclusive results on neutrality in EAs.[26][27]On the other hand, there are other proven measures to handlepremature convergence. The locality of a genetic representation corresponds to the degree to whichdistancesin the search space are preserved in the problem space after genotype-phenotype mapping. That is, a representation has a high locality exactly when neighbors in the search space are also neighbors in the problem space. In order for successfulschematanot to be destroyed by genotype-phenotype mapping after a minormutation, the locality of a representation must be high. In genotype-phenotype mapping, the elements of the genotype can be scaled (weighted) differently. The simplest case is uniform scaling: all elements of the genotype are equally weighted in the phenotype. A common scaling is exponential. Ifintegersare binary coded, the individual digits of the resulting binary number haveexponentiallydifferent weights in representing the phenotype. For this reason, exponential scaling has the effect of randomly fixing the "posterior" locations in the genotype before the population gets close enough to theoptimumto adjust for these subtleties. When mapping the genotype to the phenotype being evaluated, domain-specific knowledge can be used to improve the phenotype and/or ensure that constraints are met.[28][29]This is a commonly used method to improve EA performance in terms of runtime and solution quality. It is illustrated below by two of the three examples. An obvious and commonly used encoding for thetraveling salesman problemand related tasks is to number the cities to be visited consecutively and store them as integers in thechromosome. 
Thegenetic operatorsmust be suitably adapted so that they only change the order of the cities (genes) and do not cause deletions or duplications.[30][31]Thus, the gene order corresponds to the city order and there is a simple one-to-one mapping. In aschedulingtask with heterogeneous and partially alternative resources to be assigned to a set of subtasks, the genome must contain all necessary information for the individual scheduling operations or it must be possible to derive them from it. In addition to the order of the subtasks to be executed, this includes information about the resource selection.[32]A phenotype then consists of a list of subtasks with their start times and assigned resources. In order to be able to create this, as many allocationmatricesmust be created as resources can be allocated to one subtask at most. In the simplest case this is one resource, e.g., one machine, which can perform the subtask. An allocation matrix is a two-dimensional matrix, with one dimension being the available time units and the other being the resources to be allocated. Empty matrix cells indicate availability, while an entry indicates the number of the assigned subtask. The creation of allocation matrices ensures firstly that there are no inadmissible multiple allocations. Secondly, the start times of the subtasks can be read from it as well as the assigned resources.[33] A common constraint when scheduling resources to subtasks is that a resource can only be allocated once per time unit and that the reservation must be for a contiguous period of time.[34]To achieve this in a timely manner, which is a common optimization goal and not a constraint, a simpleheuristiccan be used: Allocate the required resource for the desired time period as early as possible, avoiding duplicate reservations. The advantage of this simple procedure is twofold: it avoids the constraint and helps the optimization. If the scheduling problem is modified to the scheduling ofworkflowsinstead of independent subtasks, at least some of the work steps of a workflow have to be executed in a given order.[35]If the previously described scheduling heuristic now determines that the predecessor of a work step is not completed when it should be started itself, the following repair mechanism can help: Postpone the scheduling of this work step until all its predecessors are finished.[33]Since the genotype remains unchanged and repair is performed only at the phenotype level, it is also calledphenotypic repair. The following layout planning task[36]is intended to illustrate a different use of aheuristicin genotype-phenotype mapping: On a rectangular surface different geometric types of objects are to be arranged in such a way that as little area as possible remains unused. The objects can be rotated, must not overlap after placement, and must be positioned completely on the surface. A related application would be scrap minimization when cutting parts from a steel plate or fabric sheet. The coordinates of the centers of the objects and a rotation angle reduced to possible isomorphisms of the geometry of the objects can be considered as variables to be determined. If this is done directly by an EA, there will probably be a lot of overlaps. To avoid this, only the angle and the coordinate of one side of the rectangle are determined by the EA. Each object is now rotated and positioned on the edge of that side, shifting it if necessary so that it is inside the rectangle when it is subsequently moved. 
Then it is moved parallel to the other side until it touches another object or reaches the opposite end of the rectangle. In this way, overlaps are avoided and the unused area is reduced per placement, but not in general, which is left to optimization.[37]
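Returning to the traveling-salesman encoding described above, the following sketch shows a direct, order-based permutation representation together with a genetic operator that only reorders the genes, so no city is deleted or duplicated. The distance matrix and the simple swap mutation are illustrative assumptions; practical EAs typically also use order-preserving crossover operators.

# Sketch of a direct, order-based representation for the traveling salesman problem.
import random

def random_tour(n_cities):
    """Chromosome = a permutation of city indices; gene order = visiting order."""
    tour = list(range(n_cities))
    random.shuffle(tour)
    return tour

def tour_length(tour, dist):
    """Phenotype evaluation: total length of the closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_mutation(tour):
    """Order-changing mutation: swaps two cities, never deletes or duplicates one."""
    a, b = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[a], child[b] = child[b], child[a]
    return child

# Toy symmetric distance matrix for 4 cities (illustrative values).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
parent = random_tour(4)
child = swap_mutation(parent)
print(parent, tour_length(parent, dist), "->", child, tour_length(child, dist))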
https://en.wikipedia.org/wiki/Genetic_representation
A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal.[1] Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization. In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation the idea is to delete the n worst individuals, and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution.[2] Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases.[3][4] Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run. A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases,[5] such as tournament selection or Pareto optimization. The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution, or will have difficulty converging at all. Defining the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans. The fitness function should not only closely align with the designer's goal, but also be computationally efficient.
Execution speed is crucial, as a typical evolutionary algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Fitness approximation[6][7] may then be appropriate, especially when a single fitness evaluation is very expensive. Alternatively, or in addition to fitness approximation, the fitness calculations can be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.[9][10][11] Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose, Pareto optimization and optimization based on fitness calculated using the weighted sum.[12] When optimizing with the weighted sum, the single values of the O objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective o_i is assigned a weight w_i in the form of a percentage value so that the overall raw fitness f_raw can be calculated as a weighted sum: f_raw = Σ_{i=1}^{O} o_i · w_i, with Σ_{i=1}^{O} w_i = 1. A violation of R restrictions r_j can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function pf_j(r_j) can be defined for each restriction which returns a value between 0 and 1 depending on the degree of violation, with the result being 1 if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s), and the result is then the final fitness f_final:[13] f_final = f_raw · Π_{j=1}^{R} pf_j(r_j) = (Σ_{i=1}^{O} o_i · w_i) · Π_{j=1}^{R} pf_j(r_j). This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate for each other and that the weights have to be defined before the optimization. This means that the compromise lines must be defined before optimization, which is why optimization with the weighted sum is also referred to as the a priori method.[12] In addition, certain solutions may not be obtained; see the section on the comparison of both types of optimization. A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called the Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives f_1 and f_2 to be maximized. The elements of the set form the Pareto front (green line).
From this set, a human decision maker must subsequently select the desired compromise solution.[12]Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides.[14] It was recognized early on that EAs with their simultaneously considered solution set are well suited to finding solutions in one run that cover the Pareto front sufficiently well.[14][15]They are therefore well suited asa-posteriori methodsfor multi-objective optimization, in which the final decision is made by a human decision maker after optimization and determination of the Pareto front.[12]Besides the SPEA2,[16]the NSGA-II[17]and NSGA-III[18][19]have established themselves as standard methods. The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives on. Furthermore, the effort increases exponentially with the number of objectives.[13]If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods.[12] With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it isconvex.[20]This is illustrated by the adjacent picture on the left. The pointP{\displaystyle {\mathsf {P}}}on the green Pareto front is reached by the weightsw1{\displaystyle w_{1}}andw2{\displaystyle w_{2}}, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution setZ{\displaystyle Z}is shown by the drawn arrows. In case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between pointsA{\displaystyle {\mathsf {A}}}andB{\displaystyle {\mathsf {B}}}. This can be remedied to a limited extent by using an extension of the weighted sum, thecascaded weighted sum.[13] Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes.[13] In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration purposes. The optimization goals include not only a general fast processing of all orders but also the compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. 
A following mutation does not change this, but schedules the work stepdearlier, which is a necessary intermediate step for an earlier start of the last work stepeof the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in.[21]
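To make the weighted-sum fitness with penalty functions defined earlier concrete, here is a minimal sketch; the objective values, weights and penalty factors are illustrative assumptions, not taken from the cited sources. A small Pareto-dominance check is included for contrast with the a-posteriori approach.

# Sketch of weighted-sum fitness with multiplicative penalty functions:
#   f_final = (sum_i o_i * w_i) * prod_j pf_j(r_j)
# Objectives o_i are assumed to be normalized to [0, 1] and to be maximized.

def weighted_sum_fitness(objectives, weights, penalties):
    """objectives: normalized objective values o_i
    weights:    weights w_i summing to 1
    penalties:  values pf_j(r_j) in [0, 1], where 1.0 means no violation"""
    assert abs(sum(weights) - 1.0) < 1e-9
    raw = sum(o * w for o, w in zip(objectives, weights))
    penalty_factor = 1.0
    for pf in penalties:
        penalty_factor *= pf
    return raw * penalty_factor

def dominates(a, b):
    """Pareto dominance: a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Two objectives (e.g. degrees of fulfillment of throughput and of a due date),
# one restriction that is partially violated (illustrative numbers).
print(weighted_sum_fitness([0.8, 0.6], [0.7, 0.3], [0.8]))   # -> 0.592
print(dominates([0.8, 0.6], [0.7, 0.6]))                     # -> True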
https://en.wikipedia.org/wiki/Fitness_function
Selection is a genetic operator in an evolutionary algorithm (EA). An EA is a metaheuristic inspired by biological evolution that aims to solve challenging problems at least approximately. Selection has a dual purpose: on the one hand, it can choose individual genomes from a population for subsequent breeding (e.g., using the crossover operator). In addition, selection mechanisms are also used to choose candidate solutions (individuals) for the next generation. The biological model is natural selection. Retaining the best individual(s) of one generation unchanged in the next generation is called elitism or elitist selection. It is a successful (slight) variant of the general process of constructing a new population. The basis for selection is the quality of an individual, which is determined by the fitness function. In memetic algorithms, an extension of EA, selection also takes place when choosing those offspring that are to be improved with the help of a meme (e.g. a heuristic). A selection procedure for breeding that was used early on[1] assigns each individual a selection probability proportional to its share of the total fitness of the population and then repeatedly draws individuals according to these probabilities. For many problems this algorithm might be computationally demanding. A simpler and faster alternative uses so-called stochastic acceptance. If such a procedure is repeated until there are enough selected individuals, the selection method is called fitness proportionate selection or roulette-wheel selection. If, instead of a single pointer spun multiple times, there are multiple equally spaced pointers on a wheel that is spun once, it is called stochastic universal sampling. Repeatedly selecting the best individual of a randomly chosen subset is tournament selection. Taking the best half, third or another proportion of the individuals is truncation selection. There are other selection algorithms that do not consider all individuals for selection, but only those with a fitness value that is higher than a given (arbitrary) constant. Other algorithms select from a restricted pool where only a certain percentage of the individuals are allowed, based on fitness value. The listed methods differ mainly in the selection pressure,[2][3] which can be set by a strategy parameter in the rank selection described below. The higher the selection pressure, the faster a population converges towards a certain solution and the less thoroughly the search space may be explored. This premature convergence[4] can be counteracted by structuring the population appropriately.[5][6] There is a close correlation between the population model used and a suitable selection pressure.[5] If the pressure is too low, it must be expected that the population will not converge even after a long computing time. For more selection methods and further detail see the literature.[7][8] In roulette wheel selection, the probability of choosing an individual for breeding of the next generation is proportional to its fitness: the better the fitness, the higher the chance for that individual to be chosen. Choosing individuals can be depicted as spinning a roulette wheel that has as many pockets as there are individuals in the current generation, with sizes depending on their probabilities. The probability of choosing individual i is p_i = f_i / Σ_{j=1}^{N} f_j, where f_i is the fitness of i and N is the size of the current generation (note that in this method one individual can be drawn multiple times). Stochastic universal sampling is a development of roulette wheel selection with minimal spread and no bias.
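The following sketch illustrates fitness-proportionate (roulette-wheel) selection using the probability p_i = f_i / Σ_j f_j given above, plus the stochastic-acceptance variant mentioned as a faster alternative. It is a toy illustration under the assumption of non-negative fitness values, not code from the cited references.

# Sketch of fitness-proportionate (roulette-wheel) selection and of
# selection by stochastic acceptance. Fitness values are assumed non-negative.
import random

def roulette_wheel_select(population, fitnesses):
    """Draw one individual with probability p_i = f_i / sum_j f_j."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if running >= pick:
            return individual
    return population[-1]                 # guard against floating-point rounding

def stochastic_acceptance_select(population, fitnesses):
    """Faster variant: pick an individual at random and accept it
    with probability f_i / f_max, repeating until one is accepted."""
    f_max = max(fitnesses)
    while True:
        i = random.randrange(len(population))
        if random.random() <= fitnesses[i] / f_max:
            return population[i]

population = ["A", "B", "C", "D"]
fitnesses  = [1.0, 2.0, 3.0, 4.0]         # "D" is drawn 40% of the time on average
parents = [roulette_wheel_select(population, fitnesses) for _ in range(5)]
print(parents, stochastic_acceptance_select(population, fitnesses))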
In rank selection, the probability of selection does not depend directly on the fitness, but on the fitness rank of an individual within the population.[9] The exact fitness values themselves do not have to be available, only a sorting of the individuals according to quality. In addition to the adjustable selection pressure, an advantage of rank-based selection is that it also gives worse individuals a chance to reproduce and thus to improve.[10] This can be particularly helpful in applications with restrictions, since it facilitates the overcoming of a restriction in several intermediate steps, i.e. via a sequence of several individuals rated poorly due to restriction violations. Linear ranking, which goes back to Baker,[11][12] is often used.[5][10][13] It allows the selection pressure to be set by the parameter sp, which can take values between 1.0 (no selection pressure) and 2.0 (high selection pressure); the selection probability P is then a linear function of the rank position R_i of an individual among the n ranks. An alternative rank-based definition of the probability P for rank position i is also in use.[9] Exponential rank selection is defined as follows:[9] P(i) = w^{n−i} / Σ_{k=1}^{n} w^{n−k}, with 0 ≤ w ≤ 1. In every generation a few chromosomes are selected (good ones, with high fitness) for creating new offspring. Then some (bad, with low fitness) chromosomes are removed and the new offspring are placed in their place. The rest of the population survives to the new generation. Tournament selection is a method of choosing an individual from a set of individuals. The winner of each tournament is selected to perform crossover. For truncation selection, individuals are sorted according to their fitness and a portion (10% to 50%) of the top individuals is selected for the next generation.[9] Often, to get better results, strategies with partial reproduction are used. One of them is elitism, in which a small portion of the best individuals from the last generation is carried over (without any changes) to the next one. In Boltzmann selection, a continuously varying temperature controls the rate of selection according to a preset schedule. The temperature starts out high, which means that the selection pressure is low. The temperature is gradually lowered, which gradually increases the selection pressure, thereby allowing the GA to narrow in more closely on the best part of the search space while maintaining the appropriate degree of diversity.[14]
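A minimal sketch of two of the rank- and competition-based methods just described, exponential rank selection (using the formula above) and tournament selection. The population, fitness values, parameter values and the rank convention (rank i = n being the best individual) are illustrative assumptions.

# Sketch of exponential rank selection, P(i) = w^(n-i) / sum_k w^(n-k), and of
# tournament selection. Convention assumed here: individuals are sorted by fitness
# so that rank i = n is the best (it receives the largest weight w^0 = 1 when w < 1).
import random

def exponential_rank_select(population, fitnesses, w=0.9):
    """Select one individual with probability proportional to w^(n-i),
    where i is its 1-based fitness rank (i = n being the best)."""
    n = len(population)
    ranked = sorted(zip(population, fitnesses), key=lambda pf: pf[1])  # worst ... best
    weights = [w ** (n - i) for i in range(1, n + 1)]
    return random.choices([ind for ind, _ in ranked], weights=weights, k=1)[0]

def tournament_select(population, fitnesses, k=3):
    """Pick k individuals at random and return the fittest of them."""
    contenders = random.sample(list(zip(population, fitnesses)), k)
    return max(contenders, key=lambda pf: pf[1])[0]

population = ["A", "B", "C", "D", "E"]
fitnesses  = [1.0, 4.0, 2.5, 5.0, 3.0]
print(exponential_rank_select(population, fitnesses), tournament_select(population, fitnesses))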
https://en.wikipedia.org/wiki/Selection_(genetic_algorithm)
AlphaZerois acomputer programdeveloped byartificial intelligenceresearch companyDeepMindto master the games ofchess,shogiandgo. Thisalgorithmuses an approach similar toAlphaGo Zero. On December 5, 2017, the DeepMind team released apreprintpaper introducing AlphaZero,[1]which would soon play three games by defeating world-champion chess enginesStockfish,Elmo, and the three-day version of AlphaGo Zero. In each case it made use of customtensor processing units(TPUs) that the Google programs were optimized to use.[2]AlphaZero was trained solely viaself-playusing 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train theneural networks, all inparallel, with no access toopening booksorendgame tables. After four hours of training, DeepMind estimated AlphaZero was playing chess at a higherElo ratingthan Stockfish 8; after nine hours of training, the algorithm defeated Stockfish 8 in a time-controlled 100-game tournament (28 wins, 0 losses, and 72 draws).[2][3][4]The trained algorithm played on a single machine with four TPUs. DeepMind's paper on AlphaZero was published in the journalScienceon 7 December 2018.[5]While the actual AlphaZero program has not been released to the public,[6]the algorithm described in the paper has been implemented in publicly available software. In 2019, DeepMind published a new paper detailingMuZero, a new algorithm able to generalize AlphaZero's work, playing both Atari and board games without knowledge of the rules or representations of the game.[7] AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ)algorithm, and is able to playshogiandchessas well asGo. Differences between AZ and AGZ include:[2] ComparingMonte Carlo tree searchsearches, AlphaZero searches just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for Elmo. AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variation.[2] AlphaZero was trained by simply playing against itself multiple times, using 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train theneural networks. Training took several days, totaling about 41 TPU-years. It cost 3e22 FLOPs.[8] In parallel, the in-training AlphaZero was periodically matched against its benchmark (Stockfish, Elmo, or AlphaGo Zero) in brief one-second-per-move games to determine how well the training was progressing. DeepMind judged that AlphaZero's performance exceeded the benchmark after around four hours of training for Stockfish, two hours for Elmo, and eight hours for AlphaGo Zero.[2] In AlphaZero's chess match against Stockfish 8 (2016TCECworld champion), each program was given one minute per move. AlphaZero was flying the English flag, while Stockfish the Norwegian.[9]Stockfish was allocated 64 threads and ahashsize of 1 GB,[2]a setting that Stockfish'sTord Romstadlater criticized as suboptimal.[10][note 1]AlphaZero was trained on chess for a total of nine hours before the match. During the match, AlphaZero ran on a single machine with four application-specificTPUs. 
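The "much more selective" search mentioned above comes from letting the network's move priors and value estimates steer Monte Carlo tree search. The following sketch shows a PUCT-style selection rule of the kind reported in the literature for AlphaZero-like engines; the constant, the data layout and the example numbers are illustrative assumptions, not DeepMind's code.

# Sketch of a PUCT-style selection step as used in AlphaZero-like MCTS:
# pick the child maximizing Q(s, a) + c_puct * P(s, a) * sqrt(N(s)) / (1 + N(s, a)),
# where P is the network's prior for the move, Q the mean value of the subtree,
# and N the visit counts. Constants and structures here are illustrative.
import math

def puct_select(children, c_puct=1.5):
    """children: list of dicts with keys 'prior', 'visits', 'value_sum'."""
    total_visits = sum(child["visits"] for child in children)
    def score(child):
        q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
        u = c_puct * child["prior"] * math.sqrt(total_visits + 1) / (1 + child["visits"])
        return q + u
    return max(children, key=score)

children = [
    {"move": "e4", "prior": 0.45, "visits": 10, "value_sum": 5.5},
    {"move": "d4", "prior": 0.35, "visits": 4,  "value_sum": 2.4},
    {"move": "c4", "prior": 0.20, "visits": 0,  "value_sum": 0.0},
]
print(puct_select(children)["move"])   # the unvisited but promising move is explored first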
In 100 games from the normal starting position, AlphaZero won 25 games as White, won 3 as Black, and drew the remaining 72.[11]In a series of twelve, 100-game matches (of unspecified time or resource constraints) against Stockfish starting from the 12 most popular human openings, AlphaZero won 290, drew 886 and lost 24.[2] AlphaZero was trained on shogi for a total of two hours before the tournament. In 100 shogi games against Elmo (World Computer Shogi Championship 27 summer 2017 tournament version with YaneuraOu 4.73 search), AlphaZero won 90 times, lost 8 times and drew twice.[11]As in the chess games, each program got one minute per move, and Elmo was given 64 threads and a hash size of 1 GB.[2] After 34 hours of self-learning of Go and against AlphaGo Zero, AlphaZero won 60 games and lost 40.[2][11] DeepMind stated in its preprint, "The game of chess represented the pinnacle of AI research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a genericreinforcement learningalgorithm – originally devised for the game of go – that achieved superior results within a few hours, searching a thousand times fewer positions, given no domain knowledge except the rules."[2]DeepMind'sDemis Hassabis, a chess player himself, called AlphaZero's play style "alien": It sometimes wins by offering counterintuitive sacrifices, like offering up a queen and bishop to exploit a positional advantage. "It's like chess from another dimension."[12] Given the difficulty in chess offorcing a win against a strong opponent, the +28 –0 =72 result is a significant margin of victory. However, some grandmasters, such asHikaru NakamuraandKomododeveloperLarry Kaufman, downplayed AlphaZero's victory, arguing that the match would have been closer if the programs had access to anopeningdatabase (since Stockfish was optimized for that scenario).[13]Romstad additionally pointed out that Stockfish is not optimized for rigidly fixed-time moves and the version used was a year old.[10][14] Similarly, some shogi observers argued that the Elmo hash size was too low, that the resignation settings and the "EnteringKingRule" settings (cf.shogi § Entering King) may have been inappropriate, and that Elmo is already obsolete compared with newer programs.[15][16] Papers headlined that the chess training took only four hours: "It was managed in little more than the time between breakfast and lunch."[3][17]Wireddescribed AlphaZero as "the first multi-skilled AI board-game champ".[18]AI expert Joanna Bryson noted that Google's "knack for good publicity" was putting it in a strong position against challengers. "It's not only about hiring the best programmers. It's also very political, as it helps make Google as strong as possible when negotiating with governments and regulators looking at the AI sector."[11] Human chess grandmasters generally expressed excitement about AlphaZero. 
Danish grandmasterPeter Heine Nielsenlikened AlphaZero's play to that of a superior alien species.[11]Norwegian grandmasterJon Ludvig Hammercharacterized AlphaZero's play as "insane attacking chess" with profound positional understanding.[3]FormerchampionGarry Kasparovsaid, "It's a remarkable achievement, even if we should have expected it after AlphaGo."[13][19] GrandmasterHikaru Nakamurawas less impressed, stating: "I don't necessarily put a lot of credibility in the results simply because my understanding is that AlphaZero is basically using the Google supercomputer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable you have to have Stockfish running on a supercomputer as well."[10] Top US correspondence chess player Wolff Morrow was also unimpressed, claiming that AlphaZero would probably not make the semifinals of a fair competition such asTCECwhere all engines play on equal hardware. Morrow further stated that although he might not be able to beat AlphaZero if AlphaZero played drawish openings such as thePetroff Defence, AlphaZero would not be able to beat him in acorrespondence chessgame either.[20] Motohiro Isozaki, the author of YaneuraOu, noted that although AlphaZero did comprehensively beat Elmo, the rating of AlphaZero in shogi stopped growing at a point which is at most 100–200 higher than Elmo. This gap is not that high, and Elmo and other shogi software should be able to catch up in 1–2 years.[21] DeepMind addressed many of the criticisms in their final version of the paper, published in December 2018 inScience.[5]They further clarified that AlphaZero was not running on a supercomputer; it was trained using 5,000tensor processing units(TPUs), but only ran on four TPUs and a 44-core CPU in its matches.[22] In the final results, Stockfish 9 dev ran under the same conditions as in theTCECsuperfinal: 44 CPU cores, Syzygyendgame tablebases, and a 32 GB hash size. Instead of a fixedtime controlof one move per minute, both engines were given 3 hours plus 15 seconds per move to finish the game. AlphaZero ran on a much more powerful machine with four TPUs in addition to 44 CPU cores. In a 1000-game match, AlphaZero won with a score of 155 wins, 6 losses, and 839 draws. DeepMind also played a series of games using the TCEC opening positions; AlphaZero also won convincingly. Stockfish needed 10-to-1 time odds to match AlphaZero.[23] Similar to Stockfish, Elmo ran under the same conditions as in the 2017 CSA championship. The version of Elmo used was WCSC27 in combination with YaneuraOu 2017 Early KPPT 4.79 64AVX2 TOURNAMENT. Elmo operated on the same hardware as Stockfish: 44 CPU cores and a 32 GB hash size. AlphaZero won 98.2% of games when playing sente (i.e. having the first move) and 91.2% overall. Human grandmasters were generally impressed with AlphaZero's games against Stockfish.[23]Former world championGarry Kasparovsaid it was a pleasure to watch AlphaZero play, especially since its style was open and dynamic like his own.[24][25] In the computer chess community,Komododeveloper Mark Lefler called it a "pretty amazing achievement", but also pointed out that the data was old, since Stockfish had gained a lot of strength since January 2018 (when Stockfish 8 was released). Fellow developer Larry Kaufman said AlphaZero would probably lose a match against the latest version of Stockfish, Stockfish 10, under Top Chess Engine Championship (TCEC) conditions. 
Kaufman argued that the only advantage of neural network–based engines was that they used a GPU, so if there was no regard for power consumption (e.g. in an equal-hardware contest where both engines had access to the same CPU and GPU) then anything the GPU achieved was "free". Based on this, he stated that the strongest engine was likely to be a hybrid with neural networks and standardalpha–beta search.[26] AlphaZero inspired the computer chess community to developLeela Chess Zero, using the same techniques as AlphaZero. Leela contested several championships against Stockfish, where it showed roughly similar strength to Stockfish, although Stockfish has since pulled away.[27] In 2019 DeepMind publishedMuZero, a unified system that played excellent chess, shogi, and go, as well as games in theAtariLearning Environment, without being pre-programmed with their rules.[28][29] The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly). The version of Stockfish used is one year old, was playing with far more search threads than has ever received any significant amount of testing, and had way too small hash tables for the number of threads. I believe the percentage of draws would have been much higher in a match with more normal conditions.[10]
https://en.wikipedia.org/wiki/AlphaZero
MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules.[1][2][3] Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its performance in Go (setting a new world record), and improved on the state of the art in mastering a suite of 57 Atari games (the Arcade Learning Environment), a visually complex domain. MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases. The trained algorithm used the same convolutional and residual architecture as AlphaZero, but with 20 percent fewer computation steps per node in the search tree.[4] MuZero's capacity to plan and learn effectively without explicit rules has been presented as a notable advance in reinforcement learning and AI; it has been described as "discovering for itself how to build a model and understand it just from first principles". On November 19, 2019, the DeepMind team released a preprint introducing MuZero. MuZero (MZ) combines the high-performance planning of the AlphaZero (AZ) algorithm with approaches to model-free reinforcement learning. The combination allows for more efficient training in classical planning regimes, such as Go, while also handling domains with much more complex inputs at each stage, such as visual video games. MuZero was derived directly from AZ code, sharing its rules for setting hyperparameters, but differs from AlphaZero in several respects, most notably in that it is not given the rules of the environment and instead learns a model of them.[6] The previous state-of-the-art technique for learning to play the suite of Atari games was R2D2, the Recurrent Replay Distributed DQN.[7] MuZero surpassed both R2D2's mean and median performance across the suite of games, though it did not do better in every game. For board games, MuZero used 16 third-generation tensor processing units (TPUs) for training and 1,000 TPUs for self-play, with 800 simulations per step; for Atari games, it used 8 TPUs for training and 32 TPUs for self-play, with 50 simulations per step. AlphaZero used 64 second-generation TPUs for training and 5,000 first-generation TPUs for self-play. As TPU design has improved (third-generation chips are 2x as powerful individually as second-generation chips, with further advances in bandwidth and networking across chips in a pod), these are comparable training setups. R2D2 was trained for 5 days through 2M training steps. MuZero matched AlphaZero's performance in chess and shogi after roughly 1 million training steps. It matched AZ's performance in Go after 500,000 training steps and surpassed it by 1 million steps. It matched R2D2's mean and median performance across the Atari game suite after 500,000 training steps and surpassed it by 1 million steps, though it never performed well on 6 games in the suite.
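The learned-model idea can be sketched as follows. The three-function decomposition (representation, dynamics, prediction) follows the published MuZero description, but the code below is only an illustrative skeleton with assumed names and trivial stand-ins, not DeepMind's implementation.

# Illustrative skeleton of MuZero's three learned functions (names and shapes assumed):
#   representation h: observation            -> hidden state s0
#   dynamics       g: (hidden state, action) -> (reward, next hidden state)
#   prediction     f: hidden state           -> (policy logits, value)
# A real implementation would use neural networks; plain stubs are used here.

def representation(observation):
    return tuple(observation)                  # stand-in for a learned encoder

def dynamics(state, action):
    reward = 0.0                               # stand-in for a learned reward head
    next_state = state + (action,)             # stand-in for a learned transition
    return reward, next_state

def prediction(state):
    policy_logits = [0.0, 0.0]                 # stand-in for a learned policy head
    value = 0.0                                # stand-in for a learned value head
    return policy_logits, value

def unroll(observation, actions):
    """Plan entirely inside the learned model: no game rules are consulted."""
    state = representation(observation)
    trajectory = []
    for action in actions:
        policy_logits, value = prediction(state)
        reward, state = dynamics(state, action)
        trajectory.append((policy_logits, value, reward))
    return trajectory

print(unroll(observation=[0, 1, 0], actions=[1, 0, 1]))

During search these functions take the place of the game simulator that AlphaZero relies on, which is what lets MuZero handle domains whose rules it is never given.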
MuZero was viewed as a significant advancement over AlphaZero, and a generalizable step forward in unsupervised learning techniques.[8][9]The work was seen as advancing understanding of how to compose systems from smaller components, a systems-level development more than a pure machine-learning development.[10] While only pseudocode was released by the development team, Werner Duvaud produced an open source implementation based on that.[11] MuZero has been used as a reference implementation in other work, for instance as a way to generate model-based behavior.[12] In late 2021, a more efficient variant of MuZero was proposed, named EfficientZero. It "achieves 194.3 percent mean human performance and 109.0 percent median performance on the Atari 100k benchmark with only two hours of real-time game experience".[13] In early 2022, a variant of MuZero was proposed to play stochastic games (for example2048,backgammon), called Stochastic MuZero, which uses afterstate dynamics and chance codes to account for the stochastic nature of the environment when training the dynamics network.[14]
https://en.wikipedia.org/wiki/MuZero
Invideo games,artificial intelligence(AI) is used to generate responsive, adaptive orintelligentbehaviors primarily innon-playable characters(NPCs) similar tohuman-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the gameNim.[1]AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather thanmachine learningor decision making. During thegolden age of arcade video gamesthe idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such aspathfindinganddecision treesto guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such asdata miningandprocedural-content generation.[2]One of the most infamous examples of this NPC technology and gradual difficulty levels can be found in the gameMike Tyson's Punch-Out!!(1987).[3] In general, game AI does not, as might be thought and sometimes is depicted to be the case, mean a realization of an artificial person corresponding to an NPC in the manner of theTuring testor anartificial general intelligence. The termgame AIis used to refer to a broad set ofalgorithmsthat also include techniques fromcontrol theory,robotics,computer graphicsandcomputer sciencein general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.[4][5][6] Many industries and corporate voices[who?]argue that game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many[who?]expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in thecognitive sciences.[4][5][6][7]Industry voices[who?]make the argument that AI has become more versatile in the way we use all technological devices for more than their intended purpose because the AI allows the technology to operate in multiple ways, allegedly developing their own personalities and carrying out complex instructions of the user.[8][9] People[who?]in the field of AI have argued that video game AI is not true intelligence, but an advertising buzzword used to describe computer programs that use simple sorting and matching algorithms to create the illusion of intelligent behavior while bestowing software with a misleading aura of scientific or technological complexity and advancement.[4][5][6][10]Since game AI for NPCs is centered on appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI. Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerized game ofNimmade in 1951 and published in 1952. 
Despite being advanced technology in the year it was made, 20 years beforePong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players of the game.[1]In 1951, using theFerranti Mark 1machine of theUniversity of Manchester,Christopher Stracheywrote acheckersprogram andDietrich Prinzwrote one forchess.[11]These were among the first computer programs ever written.Arthur Samuel's checkers program, developed in the middle 1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[12]Work on checkers and chess would culminate in the defeat ofGarry KasparovbyIBM'sDeep Bluecomputer in 1997.[13]The firstvideo gamesdeveloped in the 1960s and early 1970s, likeSpacewar!,Pong, andGotcha(1973), were games implemented ondiscrete logicand strictly based on the competition of two players, without AI. Games that featured asingle playermode with enemies started appearing in the 1970s. The first notable ones for thearcadeappeared in 1974: theTaitogameSpeed Race(racing video game) and theAtarigamesQwak(duck huntinglight gun shooter) andPursuit(fighter aircraft dogfighting simulator). Two text-based computer games,Star Trek(1971) andHunt the Wumpus(1973), also had enemies. Enemy movement was based on stored patterns. The incorporation ofmicroprocessorswould allow more computation and random elements overlaid into movement patterns. It was during thegolden age of video arcade gamesthat the idea of AI opponents was largely popularized, due to the success ofSpace Invaders(1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent onhash functionsbased on the player's input.Galaxian(1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation.Pac-Man(1980) introduced AI patterns tomaze games, with the added quirk of different personalities for each enemy.Karate Champ(1984) later introduced AI patterns tofighting games.First Queen(1988) was atacticalaction RPGwhich featured characters that can be controlled by the computer's AI in following the leader.[14][15]Therole-playing video gameDragon Quest IV(1990) introduced a "Tactics" system, where the user can adjust the AI routines ofnon-player charactersduring battle, a concept later introduced to theaction role-playing gamegenre bySecret of Mana(1993). Games likeMadden Football,Earl Weaver BaseballandTony La Russa Baseballall based their AI in an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games.[citation needed]Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy. The emergence of new game genres in the 1990s prompted the use of formal AI tools likefinite-state machines.Real-time strategygames taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[16]The first games of the genre had notorious problems.Herzog Zwei(1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, andDune II(1992) attacked the players' base in a beeline and used numerous cheats.[17]Later games in the genre exhibited more sophisticated AI. 
Later games have usedbottom-upAI methods, such as theemergent behaviourand evaluation of player actions in games likeCreaturesorBlack & White.Façade (interactive story)was released in 2005 and used interactive multiple way dialogs and AI as the main aspect of game. Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples includeWatson, aJeopardy!-playing computer; and theRoboCuptournament, where robots are trained to compete in soccer.[18] Many experts[who?]complain that the "AI" in the termgame AIoverstates its worth, as game AI is not aboutintelligence, and shares few of the objectives of the academic field of AI. Whereas "real AI" addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal ofstrong AIthat can reason, "game AI" often consists of a half-dozen rules of thumb, orheuristics, that are just enough to give a good gameplay experience.[citation needed]Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.[2] Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community is causing the definition of what counts as AI in a game to become lessidiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games bycheatingcreates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game a NPC can simply look up the position in the game'sscene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to cheat.[citation needed] The major limitation to strong AI is the inherent depth of thinking and the extreme complexity of the decision-making process. This means that although it would be then theoretically possible to make "smart" AI the problem would take considerable processing power.[citation needed] Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although "scripting" (decision tree) is currently the most common means of control.[19]These handwritten decision trees often result in "artificial stupidity" such as repetitive behavior, loss of immersion, or abnormal behavior in situations the developers did not plan for.[20] Pathfinding, another common use for AI, is widely seen inreal-time strategygames. 
Pathfinding is the method for determining how to get a NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war".[21][22]Commercial videogames often use fast and simple "grid-based pathfinding", wherein the terrain is mapped onto a rigid grid of uniform squares and a pathfinding algorithm such asA*orIDA*is applied to the grid.[23][24][25]Instead of just a rigid grid, some games use irregular polygons and assemble anavigation meshout of the areas of the map that NPCs can walk to.[23][26]As a third method, it is sometimes convenient for developers to manually select "waypoints" that NPCs should use to navigate; the cost is that such waypoints can create unnatural-looking movement. In addition, waypoints tend to perform worse than navigation meshes in complex environments.[27][28]Beyond static pathfinding,navigationis a sub-field of Game AI focusing on giving NPCs the capability to navigate in a dynamic environment, finding a path to a target while avoiding collisions with other entities (other NPC, players...) or collaborating with them (group navigation).[citation needed]Navigation in dynamic strategy games with large numbers of units, such asAge of Empires(1997) orCivilization V(2010), often performs poorly; units often get in the way of other units.[28] Rather than improve the Game AI to properly solve a difficult problem in the virtual environment, it is often more cost-effective to just modify the scenario to be more tractable. If pathfinding gets bogged down over a specific obstacle, a developer may just end up moving or deleting the obstacle.[29]InHalf-Life(1998), the pathfinding algorithm sometimes failed to find a reasonable way for all the NPCs to evade a thrown grenade; rather than allow the NPCs to attempt to bumble out of the way and risk appearing stupid, the developers instead scripted the NPCs to crouch down and cover in place in that situation.[30] Many contemporary video games fall under the category of action,first-person shooter, or adventure. In most of these types of games, there is some level of combat that takes place. The AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human or at least appear so. One of the more positive and efficient features found in modern-day video game AI is the ability to hunt. AI originally reacted in a very black and white manner. If the player were in a specific area then the AI would react in either a complete offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this 'hunting' state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind.[31]These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in thestealthgenre. Another development in recent game AI has been the development of "survival instinct". In-game computers can recognize different objects in an environment and determine whether it is beneficial or detrimental to its survival. Like a user, the AI can look for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. 
For example, if the AI is given a command to check its health throughout a game, then further commands can be set so that it reacts in a specific way at a certain percentage of health. If the health is below a certain threshold, the AI can be set to run away from the player and avoid it until another function is triggered. Another example is that if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area. Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in the id Software game Doom, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attack each other if their cohort's attacks land too close to them.[citation needed] In the case of Doom, published gameplay manuals even suggest taking advantage of monster infighting in order to survive certain levels and difficulty settings. Procedural content generation (PCG) is an AI technique to autonomously create in-game content through algorithms with minimal input from designers.[32] PCG is typically used to dynamically generate game features such as levels, NPC dialogue, and sounds. Developers input specific parameters to guide the algorithms into making content for them. PCG offers numerous advantages from both a developmental and a player-experience standpoint. Game studios are able to spend less money on artists and save time on production.[33] Players are given a fresh, highly replayable experience as the game generates new content each time they play. PCG also allows game content to adapt in real time to the player's actions.[34] Generative algorithms (a rudimentary form of AI) have been used for level creation for decades. The iconic 1980 dungeon crawler computer game Rogue is a foundational example. Players are tasked with descending through the increasingly difficult levels of a dungeon to retrieve the Amulet of Yendor. The dungeon levels are algorithmically generated at the start of each game. The save file is deleted every time the player dies.[35] The algorithmic dungeon generation creates unique gameplay that would not otherwise be there, as the goal of retrieving the amulet is the same each time. Opinions on total level generation as seen in games like Rogue can vary. Some developers are skeptical of the quality of generated content and want to create a world with a more "human" feel, so they use PCG more sparingly.[32] Consequently, they will only use PCG to generate specific components of an otherwise handcrafted level. A notable example of this is Ubisoft's 2017 tactical shooter Tom Clancy's Ghost Recon Wildlands. Developers used a pathfinding algorithm trained with a data set of real maps to create road networks that would weave through handcrafted villages within the game world.[34] This is an intelligent use of PCG, as the AI has a large amount of real-world data to work with and roads are straightforward to create. However, the AI would likely miss nuances and subtleties if it were tasked with creating a village where people live. As AI has become more advanced, developer goals are shifting to creating massive repositories of levels from data sets. In 2023, researchers from New York University and the University of the Witwatersrand trained a large language model to generate levels in the style of the 1981 puzzle game Sokoban.
They found that the model excelled at generating levels with specifically requested characteristics such as difficulty level or layout.[32]However, current models such as the one used in the study require large datasets of levels to be effective. They concluded that, while promising, the high data cost of large language models currently outweighs the benefits for this application.[32]Continued advancements in the field will likely lead to more mainstream use in the future. Themusical scoreof a video game is an important expression of the emotional tone of a scene to the player.Sound effectssuch as the noise of a weapon hitting an enemy help indicate the effect of the player's actions. Generating these in real time creates an engaging experience for the player because the game is more responsive to their input.[32]An example is the 2013adventure gameProteuswhere an algorithm dynamically adapts the music based on the angle the player is viewing the ingame landscape from.[35] Recent breakthroughs in AI have resulted in the creation of advanced tools that are capable of creating music and sound based on evolving factors with minimal developer input. One such example is the MetaComposure music generator. MetaComposure is anevolutionary algorithmdesigned to generate original music compositions during real time gameplay to match the current mood of the environment.[36]The algorithm is able to assess the current mood of the game state through "mood tagging". Research indicates that there is a significantpositive statistical correlationregarding player rated game engagement and the dynamically generated musical compositions when they accurately match their current emotions.[37] Game AI often amounts to pathfinding and finite-state machines. Pathfinding gets the AI from point A to point B, usually in the most direct way possible. State machines permit transitioning between different behaviors. TheMonte Carlo tree searchmethod[38]provides a more engaging game experience by creating additional obstacles for the player to overcome. The MCTS consists of a tree diagram in which the AI essentially playstic-tac-toe. Depending on the outcome, it selects a pathway yielding the next obstacle for the player. In complex video games, these trees may have more branches, provided that the player can come up with several strategies to surpass the obstacle. Academic AI may play a role within game AI, outside the traditional concern of controlling NPC behavior.Georgios N. Yannakakishighlighted four potential application areas:[2] Rather than procedural generation, some researchers have usedgenerative adversarial networks(GANs) to create new content. In 2018 researchers at Cornwall University trained a GAN on a thousand human-created levels forDoom; following training, the neural net prototype was able to design new playable levels on its own. Similarly, researchers at theUniversity of Californiaprototyped a GAN to generate levels forSuper Mario.[39]In 2020 Nvidia displayed a GAN-created clone ofPac-Man; the GAN learned how to recreate the game by watching 50,000 (mostly bot-generated) playthroughs.[40] Non-player charactersare entities within video games that are not controlled by players, but instead are managed by AI systems. NPCs contribute to the immersion, storytelling, and the mechanics of a game. They often serve as companions, quest-givers, merchants and much more. Their realism has advanced significantly in the past few years, thanks to improvements in AI technologies. 
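The finite-state machines mentioned above remain the classical baseline for NPC control: each state encodes one behavior, and transitions fire on game events. The following is a minimal sketch of that idea; the states, events, and transition table are invented for illustration and do not correspond to any particular engine.

```python
# Minimal finite-state machine for an NPC: states are behaviors, and the
# transition table says which event moves the NPC from one state to another.
TRANSITIONS = {
    ("patrol", "player_spotted"):  "attack",
    ("attack", "low_health"):      "flee",
    ("attack", "player_lost"):     "patrol",
    ("flee",   "health_restored"): "patrol",
}

class NPCStateMachine:
    def __init__(self, initial="patrol"):
        self.state = initial

    def handle(self, event):
        # Stay in the current state if no transition is defined for this event.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

npc = NPCStateMachine()
for event in ["player_spotted", "low_health", "health_restored"]:
    print(event, "->", npc.handle(event))
# player_spotted -> attack
# low_health -> flee
# health_restored -> patrol
```

The "artificial stupidity" complaints mentioned earlier typically stem from exactly this rigidity: the NPC can only ever do what some (state, event) pair anticipates in advance.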
NPCs are essential in both narrative-driven and open-world games. They help convey the lore and context of the game, making them pivotal to world-building and narrative progression. For instance, an NPC can provide critical information, offer quests, or simply populate the world to add a sense of realism to the game. Additionally, their role as quest-givers or merchants makes them integral to the gameplay loop, giving players access to resources, missions, or services that enable further progression. NPCs can also be designed to serve functional roles in games, such as acting as a merchant or providing a service to the player. These characters are central to facilitating game mechanics by acting as intermediaries between the player and in-game systems. Academics[who?] say the interactions between players and NPCs are often designed to be straightforward but contextually relevant, ensuring that the player receives necessary feedback or resources for gameplay continuity. Recent advancements[as of?] in artificial intelligence have significantly enhanced the complexity and realism of NPCs. Before these advancements, NPC AI operated on pre-programmed behaviors, making NPCs predictable and repetitive. As AI has developed, NPCs have become more adaptive and able to respond dynamically to players. Experts[who?] think the integration of deep learning and reinforcement learning techniques has enabled NPCs to adjust their behavior in response to player actions, creating a more interactive and personalized gameplay experience. One such development is the use of adaptive behavior models. These allow NPCs to analyze and learn from players' decisions in real time, which makes for a much more engaging experience. For example, as experts in the field have noted,[who?] NPCs in modern video games can now react to player actions with increased sophistication, such as adjusting their tactics in combat or changing their dialogue based on past interactions. By using deep learning algorithms, these systems emulate human-like decision-making, making NPCs feel more like real people rather than static game elements. Another advancement in NPC AI is the use of natural language processing, which allows NPCs to engage in more realistic conversations with players. Previously, NPC dialogue was limited to a fixed set of responses. It is said[by whom?] that NLP has improved the fluidity of NPC conversations, allowing them to respond more contextually to player inputs. This development has increased the depth and immersion of player-NPC interactions, as players can now engage in more complex dialogues that affect the storyline and gameplay outcomes. Additionally, deep learning models have made NPCs more capable of predicting players' behaviors. Deep learning allows NPCs to process large amounts of data and adapt to player strategies, making interactions with them less predictable and more varied. This creates a more immersive experience, as NPCs are now able to "learn" from player behavior, which provides a greater sense of realism within the game. Despite all of these advancements in NPC AI, there are still significant challenges that developers face in designing NPCs. They need to balance realism, functionality, and players' expectations. The key challenge is to make sure that NPCs enhance the player's experience rather than disrupt the gameplay. Overly realistic NPCs that behave unpredictably can frustrate players by hindering progression or breaking immersion.
Conversely, NPCs that are too predictable or simplistic may fail to engage players, reducing the overall effectiveness of the game's narrative and mechanics. Another factor that needs to be accounted for is the computational cost of implementing advanced AI for NPCs. These advanced AI techniques require large amounts of processing power, which can limit their use. Balancing the performance of AI-driven NPCs with the game's overall technical limitations is crucial for ensuring smooth gameplay. Experts[who?] note that developers must allocate resources efficiently to avoid overburdening the game's systems, particularly in large, open-world games where numerous NPCs must interact with the player simultaneously. Finally, creating NPCs that can respond dynamically to a wide range of player behaviors remains a difficult task. NPCs must be able to handle both scripted interactions and unscripted scenarios in which players may behave in unexpected ways. Designing NPCs capable of adapting to such variability requires complex AI models that can account for numerous possible interactions, which can be resource-intensive and time-consuming for developers. Players often suspect that the AI cheats, particularly when they lose. In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation.[42] Believing that the Atari 8-bit could not compete against a human player, Chris Crawford did not fix a bug in Eastern Front (1941) that benefited the computer-controlled Russian side.[43] Computer Gaming World in 1994 reported that "It is a well-known fact that many AIs 'cheat' (or, at least, 'fudge') in order to be able to keep up with human players".[44] For example, if the agents want to know whether the player is nearby, they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. Common variations include giving AIs higher speeds in racing games to catch up to the player or spawning them in advantageous positions in first-person shooters. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons, and in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI (it does not include the inhuman swiftness and precision natural to a computer), a player might call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player.[42] Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans at using them, which caused players to think that the computer was cheating.[45] Developers say that most game AIs are honest, but they dislike players erroneously complaining about "cheating" AI. In addition, humans use tactics against computers that they would not use against other people.[43] In the 1996 game Creatures, the user "hatches" small furry animals and teaches them how to behave. These "Norns" can talk, feed themselves, and protect themselves against vicious creatures.
It was the first popular application of machine learning in an interactive simulation.Neural networksare used by the creatures to learn what to do. The game is regarded as a breakthrough inartificial liferesearch, which aims to model the behavior of creatures interacting with their environment.[46] In the 2001first-person shooterHalo: Combat Evolvedthe player assumes the role of the Master Chief, battling various aliens on foot or in vehicles. Enemies use cover very wisely, and employ suppressing fire and grenades. The squad situation affects the individuals, so certain enemies flee when their leader dies. Attention is paid to the little details, with enemies notably throwing back grenades or team-members responding to being bothered. The underlying "behavior tree" technology has become very popular in the games industry sinceHalo 2.[46] The 2005psychological horrorfirst-person shooterF.E.A.R.has player characters engage abattalionof clonedsuper-soldiers, robots andparanormal creatures. The AI uses a planner to generate context-sensitive behaviors, the first time in a mainstream game. This technology is still used as a reference for many studios. The Replicas are capable of utilizing the game environment to their advantage, such as overturning tables and shelves to create cover, opening doors, crashing through windows, or even noticing (and alerting the rest of their comrades to) the player's flashlight. In addition, the AI is also capable of performing flanking maneuvers, using suppressing fire, throwing grenades to flush the player out of cover, and even playing dead. Most of these actions, in particular the flanking, is the result of emergent behavior.[47][48] Thesurvival horrorseriesS.T.A.L.K.E.R.(2007–) confronts the player with man-made experiments, military soldiers, and mercenaries known as Stalkers. The various encountered enemies (if the difficulty level is set to its highest) use combat tactics and behaviors such as healing wounded allies, giving orders, out-flanking the player and using weapons with pinpoint accuracy.[citation needed] The 2010real-time strategygameStarCraft II: Wings of Libertygives the player control of one of three factions in a 1v1, 2v2, or 3v3 battle arena. The player must defeat their opponents by destroying all their units and bases. This is accomplished by creating units that are effective at countering opponents' units. Players can play against multiple different levels of AI difficulty ranging from very easy to Cheater 3 (insane). The AI is able to cheat at the difficulty Cheater 1 (vision), where it can see units and bases when a player in the same situation could not. Cheater 2 gives the AI extra resources, while Cheater 3 gives an extensive advantage over its opponent.[49] Red Dead Redemption 2, released by Rockstar Games in 2018, exemplifies the advanced use of AI in modern video games. The game incorporates a highly detailed AI system that governs the behavior of NPCs and the dynamic game world. NPCs in the game display complex and varied behaviors based on a wide range of factors including their environment, player interactions, and time of day. 
This level of AI integration creates a rich, immersive experience where characters react to players in a realistic manner, contributing to the game's reputation as one of the most advanced open-world games ever created.[50] Generative artificial intelligence, AI systems that can respond to prompts and produce text, images, and audio and video clips, arose in 2023 with systems likeChatGPTandStable Diffusion. In video games, these systems could create the potential for game assets to be created indefinitely, bypassing typical limitations on human creations. For example, the 2024browser-basedsandboxgameInfinite Craftusesgenerative AIsoftware, includingLLaMA. When two elements are being combined, a new element is generated by the AI.[51]The 2024 browser-based gameOasisuses generative AI to simulate the video gameMinecraft.Oasisis trained on millions of hours of footage fromMinecraft, and predicts how the next frame of gameplay looks using this dataset.Oasisdoes not have object permanence because it does not store any data.[52] However, there are similarconcernsin other fields particularly the potential for loss of jobs normally dedicated to the creation of these assets.[53]In January 2024,SAG-AFTRA, a United States union representing actors, signed a contract with Replica Studios that would allow Replica to capture the voicework of union actors for creating AI voice systems based on their voices for use in video games, with the contract assuring pay and rights protections. While the contract was agreed upon by a SAG-AFTRA committee, many members expressed criticism of the move, having not been told of it until it was completed and that the deal did not do enough to protect the actors.[54] Recent advancements in AI for video games have led to more complex and adaptive behaviors in non-playable characters (NPCs). For instance, AI systems now utilize sophisticated techniques such as decision trees and state machines to enhance NPC interactions and realism, as discussed in "Artificial Intelligence in Games".[55]Recent advancements in AI for video games have also focused on improving dynamic and adaptive behaviors in NPCs. For example, recent research has explored the use of complex neural networks to enable NPCs to learn and adapt their behavior based on player actions, enhancing the overall gaming experience. This approach is detailed in the IEEE paper on "AI Techniques for Interactive Game Systems".[56]
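Behavior trees, popularized by Halo 2 and mentioned above alongside decision trees and state machines, compose behaviors hierarchically instead of enumerating explicit state transitions. The following is a minimal, illustrative sketch; the node types and the example NPC logic are invented for illustration and are not any particular engine's API.

```python
# Minimal behavior-tree sketch: a Selector tries children until one succeeds,
# a Sequence requires every child to succeed. Leaves read/write a shared
# "blackboard" dictionary describing the NPC's situation.

SUCCESS, FAILURE = "success", "failure"

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, bb): return SUCCESS if self.predicate(bb) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, bb):
        bb["action"] = self.name   # in a real game this would drive movement/animation
        return SUCCESS

# "Take cover when hurt, otherwise fight if an enemy is visible, otherwise patrol."
npc_brain = Selector(
    Sequence(Condition(lambda bb: bb["health"] < 30), Action("take_cover")),
    Sequence(Condition(lambda bb: bb["enemy_visible"]), Action("attack")),
    Action("patrol"),
)

blackboard = {"health": 22, "enemy_visible": True}
npc_brain.tick(blackboard)
print(blackboard["action"])   # -> take_cover
```

Each decision tick the tree is re-evaluated from the root, which is what lets such an NPC switch priorities (for example, abandoning an attack to take cover) without hand-written transition logic for every pair of behaviors.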
https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games
Game Description Language(GDL) is a specializedlogicprogramming languagedesigned byMichael Genesereth. The goal of GDL is to allow the development of AI agents capable ofgeneral game playing. It is part of the General Game Playing Project atStanford University. GDL is a tool for expressing the intricacies of game rules and dynamics in a form comprehensible to AI systems through a combination of logic-based constructs and declarative principles. In practice, GDL is often used for General Game Playing competitions and research endeavors. In these contexts, GDL is used to specify the rules of games that AI agents are expected to play. AI developers and researchers harness GDL to create algorithms that can comprehend and engage with games based on their rule descriptions. The use of GDL paves the way for the development of highly adaptable AI agents, capable of competing and excelling in diverse gaming scenarios. This innovation is a testament to the convergence of logic-based formalism and the world of games, opening new horizons for AI's potential in understanding and mastering a multitude of games. Game Description Language equips AI with a universal key to unlock the mysteries of diverse game environments and strategies. Quoted in an article inNew Scientist, Genesereth pointed out that althoughDeep Bluecan play chess at agrandmasterlevel, it is incapable of playingcheckersat all because it is a specialized game player.[1]Both chess and checkers can be described in GDL. This enables general game players to be built that can play both of these games and any other game that can be described using GDL. GDL is a variant ofDatalog, and thesyntaxis largely the same. It is usually given inprefix notation. Variables begin with "?".[2] The following is the list of keywords in GDL, along with brief descriptions of their functions: A game description in GDL provides complete rules for each of the following elements of a game. Facts that define the roles in a game. The following example is from a GDL description of the two-player gameTic-tac-toe: Rules that entail all facts about the initial game state. An example is: Rules that describe each move by the conditions on the current position under which it can be taken by a player. An example is: Rules that describe all facts about the next state relative to the current state and the moves taken by the players. An example is: Rules that describe the conditions under which the current state is a terminal one. An example is: The goal values for each player in a terminal state. An example is: With GDL, one can describe finite games with an arbitrary number of players. However, GDL cannot describe games that contain an element of chance (for example, rolling dice) or games where players have incomplete information about the current state of the game (for example, in many card games the opponents' cards are not visible).GDL-II, theGame Description Language for Incomplete Information Games, extends GDL by two keywords that allow for the description of elements of chance and incomplete information:[3] The following is an example from a GDL-II description of the card gameTexas hold 'em: Michael Thielscher also created a further extension,GDL-III, a general game description language withimperfect informationandintrospection, that supports the specification ofepistemic games— ones characterised by rules that depend on the knowledge of players.[4] In classical game theory, games can be formalised inextensiveandnormalforms. 
Forcooperative game theory, games are represented using characteristic functions. Some subclasses of games allow special representations in smaller sizes also known assuccinct games. Some of the newer developments of formalisms and languages for the representation of some subclasses of games or representations adjusted to the needs of interdisciplinary research are summarized as the following table.[5]Some of these alternative representations also encode time-related aspects: A 2016 paper "describes a multilevel algorithm compiling a general game description in GDL into an optimized reasoner in a low level language".[19] A 2017 paper uses GDL to model the process of mediating a resolution to a dispute between two parties and presented an algorithm that uses available information efficiently to do so.[20]
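GDL itself is a logic-programming notation, but the role of its keywords (roles, the initial state, legal moves, the successor state, termination, and goal values) can be illustrated by an equivalent set of functions for Tic-tac-toe written in ordinary Python. This is only an illustrative analogue of the interface a general game player works against, not GDL syntax; the function names and the 0-100 scoring convention are choices made here.

```python
# Illustrative Python analogue of a GDL game description for Tic-tac-toe.
# A general game player only ever calls these six entry points.

ROLES = ("xplayer", "oplayer")                       # role

def init():                                          # init: the initial state
    return {"board": [" "] * 9, "control": "xplayer"}

def legal(state, role):                              # legal: moves available to a role
    if state["control"] != role:
        return ["noop"]
    return [("mark", i) for i, c in enumerate(state["board"]) if c == " "]

def next_state(state, moves):                        # next: state after the joint move
    board = state["board"][:]
    mark = "x" if state["control"] == "xplayer" else "o"
    _, cell = moves[state["control"]]
    board[cell] = mark
    other = "oplayer" if state["control"] == "xplayer" else "xplayer"
    return {"board": board, "control": other}

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(state):
    for a, b, c in LINES:
        if state["board"][a] != " " and state["board"][a] == state["board"][b] == state["board"][c]:
            return state["board"][a]
    return None

def terminal(state):                                 # terminal: is the game over?
    return winner(state) is not None or " " not in state["board"]

def goal(state, role):                               # goal: payoff for a role (0-100, as in GDL)
    w = winner(state)
    if w is None:
        return 50
    return 100 if (w == "x") == (role == "xplayer") else 0
```

A general game player that understands only this interface can, in principle, play any game described in the same terms, which is the contrast Genesereth draws between general game players and specialized engines such as Deep Blue.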
https://en.wikipedia.org/wiki/Game_Description_Language
The followingoutlineis provided as an overview of and topical guide to artificial intelligence: Artificial intelligence (AI)is intelligence exhibited by machines or software. It is also the name of thescientific fieldwhich studies how to create computers and computer software that are capable of intelligent behavior. Symbolic representations of knowledge Unsolved problems in knowledge representation Intelligent personal assistant– Artificial intelligence in fiction– Some examples of artificially intelligent entities depicted in science fiction include: List of artificial intelligence projects Competitions and prizes in artificial intelligence
https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence
Multiple-criteria decision-making(MCDM) ormultiple-criteria decision analysis(MCDA) is a sub-discipline ofoperations researchthat explicitly evaluates multiple conflictingcriteriaindecision making(both in daily life and in settings such as business, government and medicine). It is also known asmultiple attribute utility theory,multiple attribute value theory,multiple attribute preference theory, andmulti-objective decision analysis. Conflicting criteria are typical in evaluating options:costor price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. Inportfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria. In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on onlyintuition.[1]On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria.[2]In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences. Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specializeddecision-making software,[3][4]have been developed for their application in an array of disciplines, ranging from politics and business to the environment and energy.[5] MCDM or MCDA are acronyms formultiple-criteria decision-makingandmultiple-criteria decision analysis. Stanley Zionts helped popularizing the acronym with his 1979 article "MCDM – If not a Roman Numeral, then What?", intended for an entrepreneurial audience. MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The purpose is to support decision-makers facing such problems. Typically, there does not exist a uniqueoptimalsolution for such problems and it is necessary to use decision-makers' preferences to differentiate between solutions. "Solving" can be interpreted in different ways. It could correspond to choosing the "best" alternative from a set of available alternatives (where "best" can be interpreted as "the most preferred alternative" of a decision-maker). Another interpretation of "solving" could be choosing a small set of good alternatives, or grouping alternatives into different preference sets. An extreme interpretation could be to find all "efficient" or "nondominated" alternatives (which we will define shortly). The difficulty of the problem originates from the presence of more than one criterion. There is no longer a unique optimal solution to an MCDM problem that can be obtained without incorporating preference information. 
The concept of an optimal solution is often replaced by the set of nondominated solutions. A solution is called nondominated if it is not possible to improve it in any criterion without sacrificing it in another. Therefore, it makes sense for the decision-maker to choose a solution from the nondominated set. Otherwise, they could do better in terms of some or all of the criteria, and not do worse in any of them. Generally, however, the set of nondominated solutions is too large to be presented to the decision-maker for the final choice. Hence we need tools that help the decision-maker focus on the preferred solutions (or alternatives). Normally one has to "tradeoff" certain criteria for others. MCDM has been an active area of research since the 1970s. There are several MCDM-related organizations including the International Society on Multi-criteria Decision Making,[6]Euro Working Group on MCDA,[7]and INFORMS Section on MCDM.[8]For a history see: Köksalan, Wallenius and Zionts (2011).[9]MCDM draws upon knowledge in many fields including: There are different classifications of MCDM problems and methods. A major distinction between MCDM problems is based on whether the solutions are explicitly or implicitly defined. Whether it is an evaluation problem or a design problem, preference information of DMs is required in order to differentiate between solutions. The solution methods for MCDM problems are commonly classified based on the timing of preference information obtained from the DM. There are methods that require the DM's preference information at the start of the process, transforming the problem into essentially a single criterion problem. These methods are said to operate by "prior articulation of preferences". Methods based on estimating a value function or using the concept of "outranking relations", analytical hierarchy process, and some rule-based decision methods try to solve multiple criteria evaluation problems utilizing prior articulation of preferences. Similarly, there are methods developed to solve multiple-criteria design problems using prior articulation of preferences by constructing a value function. Perhaps the most well-known of these methods is goal programming. Once the value function is constructed, the resulting single objective mathematical program is solved to obtain a preferred solution. Some methods require preference information from the DM throughout the solution process. These are referred to as interactive methods or methods that require "progressive articulation of preferences". These methods have been well-developed for both the multiple criteria evaluation (see for example, Geoffrion, Dyer and Feinberg, 1972,[11]and Köksalan and Sagala, 1995[12]) and design problems (see Steuer, 1986[13]). Multiple-criteria design problems typically require the solution of a series of mathematical programming models in order to reveal implicitly defined solutions. For these problems, a representation or approximation of "efficient solutions" may also be of interest. This category is referred to as "posterior articulation of preferences", implying that the DM's involvement starts posterior to the explicit revelation of "interesting" solutions (see for example Karasakal and Köksalan, 2009[14]). When the mathematical programming models contain integer variables, the design problems become harder to solve. 
Multiobjective Combinatorial Optimization (MOCO) constitutes a special category of such problems posing substantial computational difficulty (see Ehrgott and Gandibleux,[15] 2002, for a review). The MCDM problem can be represented in the criterion space or the decision space. Alternatively, if different criteria are combined by a weighted linear function, it is also possible to represent the problem in the weight space. Below are the demonstrations of the criterion and weight spaces as well as some formal definitions. Let us assume that we evaluate solutions in a specific problem situation using several criteria. Let us further assume that more is better in each criterion. Then, among all possible solutions, we are ideally interested in those solutions that perform well in all considered criteria. However, it is unlikely to have a single solution that performs well in all considered criteria. Typically, some solutions perform well in some criteria and some perform well in others. Finding a way of trading off between criteria is one of the main endeavors in the MCDM literature. Mathematically, the MCDM problem corresponding to the above arguments can be represented as "max" q = (q1, ..., qk) subject to q ∈ Q, where q is the vector of k criterion functions (objective functions) and Q is the feasible set, Q ⊆ Rk. If Q is defined explicitly (by a set of alternatives), the resulting problem is called a multiple-criteria evaluation problem. If Q is defined implicitly (by a set of constraints), the resulting problem is called a multiple-criteria design problem. The quotation marks are used to indicate that the maximization of a vector is not a well-defined mathematical operation. This corresponds to the argument that we will have to find a way to resolve the trade-off between criteria (typically based on the preferences of a decision maker) when a solution that performs well in all criteria does not exist. The decision space corresponds to the set of possible decisions that are available to us. The criteria values will be consequences of the decisions we make. Hence, we can define a corresponding problem in the decision space. For example, in designing a product, we decide on the design parameters (decision variables), each of which affects the performance measures (criteria) with which we evaluate our product. Mathematically, a multiple-criteria design problem can be represented in the decision space as "max" f(x) = (f1(x), ..., fk(x)) subject to x ∈ X, where X is the feasible set and x is the decision variable vector of size n. A well-developed special case is obtained when X is a polyhedron defined by linear inequalities and equalities. If all the objective functions are linear in terms of the decision variables, this variation leads to multiple objective linear programming (MOLP), an important subclass of MCDM problems. There are several definitions that are central in MCDM. Two closely related definitions are those of nondominance (defined based on the criterion space representation) and efficiency (defined based on the decision variable representation). Definition 1. q* ∈ Q is nondominated if there does not exist another q ∈ Q such that q ≥ q* and q ≠ q*. Roughly speaking, a solution is nondominated so long as it is not inferior to any other available solution in all the considered criteria. Definition 2. x* ∈ X is efficient if there does not exist another x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*). If an MCDM problem represents a decision situation well, then the most preferred solution of a DM has to be an efficient solution in the decision space, and its image is a nondominated point in the criterion space.
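To make Definitions 1 and 2 concrete, the sketch below filters the nondominated alternatives out of a finite set of criterion vectors (a multiple-criteria evaluation problem, with "more is better" in every criterion as assumed above). The alternative names and numbers are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a >= b in every criterion and a != b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def nondominated(alternatives):
    """Keep the alternatives whose criterion vectors no other alternative dominates."""
    return {name: q for name, q in alternatives.items()
            if not any(dominates(other, q) for other in alternatives.values())}

# Three criteria (say comfort, safety, fuel economy), all to be maximized.
cars = {
    "A": (7, 9, 5),
    "B": (6, 9, 5),   # dominated by A (worse comfort, equal elsewhere)
    "C": (9, 4, 8),
    "D": (5, 8, 9),
}
print(sorted(nondominated(cars)))   # -> ['A', 'C', 'D']
```

For evaluation problems with an explicit list of alternatives, this brute-force check is adequate; for design problems, where Q is defined implicitly by constraints, the nondominated set has to be characterized through optimization instead.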
Following definitions are also important. Definition 3.q*∈Qis weakly nondominated if there does not exist anotherq∈Qsuch thatq>q*. Definition 4.x*∈Xis weakly efficient if there does not exist anotherx∈Xsuch thatf(x) >f(x*). Weakly nondominated points include all nondominated points and some special dominated points. The importance of these special dominated points comes from the fact that they commonly appear in practice and special care is necessary to distinguish them from nondominated points. If, for example, we maximize a single objective, we may end up with a weakly nondominated point that is dominated. The dominated points of the weakly nondominated set are located either on vertical or horizontal planes (hyperplanes) in the criterion space. Ideal point: (in criterion space) represents the best (the maximum for maximization problems and the minimum for minimization problems) of each objective function and typically corresponds to an infeasible solution. Nadir point: (in criterion space) represents the worst (the minimum for maximization problems and the maximum for minimization problems) of each objective function among the points in the nondominated set and is typically a dominated point. The ideal point and the nadir point are useful to the DM to get the "feel" of the range of solutions (although it is not straightforward to find the nadir point for design problems having more than two criteria). The following two-variable MOLP problem in the decision variable space will help demonstrate some of the key concepts graphically. In Figure 1, the extreme points "e" and "b" maximize the first and second objectives, respectively. The red boundary between those two extreme points represents the efficient set. It can be seen from the figure that, for any feasible solution outside the efficient set, it is possible to improve both objectives by some points on the efficient set. Conversely, for any point on the efficient set, it is not possible to improve both objectives by moving to any other feasible solution. At these solutions, one has to sacrifice from one of the objectives in order to improve the other objective. Due to its simplicity, the above problem can be represented in criterion space by replacing thex'swith thef'sas follows: We present the criterion space graphically in Figure 2. It is easier to detect the nondominated points (corresponding to efficient solutions in the decision space) in the criterion space. The north-east region of the feasible space constitutes the set of nondominated points (for maximization problems). There are several ways to generate nondominated solutions. We will discuss two of these. The first approach can generate a special class of nondominated solutions whereas the second approach can generate any nondominated solution. If we combine the multiple criteria into a single criterion by multiplying each criterion with a positive weight and summing up the weighted criteria, then the solution to the resulting single criterion problem is a special efficient solution. These special efficient solutions appear at corner points of the set of available solutions. Efficient solutions that are not at corner points have special characteristics and this method is not capable of finding such points. Mathematically, we can represent this situation as By varying the weights, weighted sums can be used for generating efficient extreme point solutions for design problems, and supported (convex nondominated) points for evaluation problems. 
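As a sketch of the weighted-sum idea just described, sweeping positive weights over the same kind of finite alternative set picks out supported nondominated points; the weights and values below are illustrative.

```python
import itertools

cars = {"A": (7, 9, 5), "B": (6, 9, 5), "C": (9, 4, 8), "D": (5, 8, 9)}

def weighted_sum_choice(alternatives, weights):
    """Maximize the weighted sum of criteria and return the chosen alternative."""
    return max(alternatives,
               key=lambda n: sum(w * q for w, q in zip(weights, alternatives[n])))

# Sweep a coarse grid of positive weight vectors (normalized to sum to 1).
supported = set()
for w in itertools.product([0.1, 0.3, 0.5, 0.7, 0.9], repeat=3):
    total = sum(w)
    supported.add(weighted_sum_choice(cars, tuple(wi / total for wi in w)))

print(sorted(supported))   # every choice is nondominated; here ['A', 'C', 'D']
```

As the surrounding text notes, weighted sums can only reach supported (convex) nondominated points; unsupported nondominated alternatives can never win a positively weighted sum, which is one motivation for the achievement scalarizing functions introduced next.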
Achievement scalarizing functions also combine multiple criteria into a single criterion by weighting them in a very special way. They create rectangular contours going away from a reference point towards the available efficient solutions. This special structure empower achievement scalarizing functions to reach any efficient solution. This is a powerful property that makes these functions very useful for MCDM problems. Mathematically, we can represent the corresponding problem as The achievement scalarizing function can be used to project any point (feasible or infeasible) on the efficient frontier. Any point (supported or not) can be reached. The second term in the objective function is required to avoid generating inefficient solutions. Figure 3 demonstrates how a feasible point,g1, and an infeasible point,g2, are projected onto the nondominated points,q1andq2, respectively, along the directionwusing an achievement scalarizing function. The dashed and solid contours correspond to the objective function contours with and without the second term of the objective function, respectively. Different schools of thought have developed for solving MCDM problems (both of the design and evaluation type). For a bibliometric study showing their development over time, see Bragge, Korhonen, H. Wallenius and J. Wallenius [2010].[18] Multiple objective mathematical programming school (1)Vector maximization: The purpose of vector maximization is to approximate the nondominated set; originally developed for Multiple Objective Linear Programming problems (Evans and Steuer, 1973;[19]Yu and Zeleny, 1975[20]). (2)Interactive programming: Phases of computation alternate with phases of decision-making (Benayoun et al., 1971;[21]Geoffrion, Dyer and Feinberg, 1972;[22]Zionts and Wallenius, 1976;[23]Korhonen and Wallenius, 1988[24]). No explicit knowledge of the DM's value function is assumed. Goal programming school The purpose is to set apriori target values for goals, and to minimize weighted deviations from these goals. Both importance weights as well as lexicographic pre-emptive weights have been used (Charnes and Cooper, 1961[25]). Fuzzy-set theorists Fuzzy sets were introduced by Zadeh (1965)[26]as an extension of the classical notion of sets. This idea is used in many MCDM algorithms to model and solve fuzzy problems. Ordinal data based methods Ordinal datahas a wide application in real-world situations. In this regard, some MCDM methods were designed to handle ordinal data as input data. For example,Ordinal Priority Approachand Qualiflex method. Multi-attribute utility theorists Multi-attribute utilityor value functions are elicited and used to identify the most preferred alternative or to rank order the alternatives. Elaborate interview techniques, which exist for eliciting linear additive utility functions and multiplicative nonlinear utility functions, may be used (Keeney and Raiffa, 1976[27]). Another approach is to elicit value functions indirectly by asking the decision-maker a series of pairwise ranking questions involving choosing between hypothetical alternatives (PAPRIKA method; Hansen and Ombler, 2008[28]). French school The French school focuses on decision aiding, in particular theELECTREfamily of outranking methods that originated in France during the mid-1960s. The method was first proposed by Bernard Roy (Roy, 1968[29]). 
Evolutionary multiobjective optimization school (EMO) EMO algorithms start with an initial population, and update it by using processes designed to mimic natural survival-of-the-fittest principles and genetic variation operators to improve the average population from one generation to the next. The goal is to converge to a population of solutions which represent the nondominated set (Schaffer, 1984;[30]Srinivas and Deb, 1994[31]). More recently, there are efforts to incorporate preference information into the solution process of EMO algorithms (see Deb and Köksalan, 2010[32]). Grey system theorybased methods In the 1980s,Deng Julongproposed Grey System Theory (GST) and its first multiple-attribute decision-making model, called Deng'sGrey relational analysis(GRA) model. Later, the grey systems scholars proposed many GST based methods likeLiu Sifeng's Absolute GRA model,[33]Grey Target Decision Making (GTDM)[34]and Grey Absolute Decision Analysis (GADA).[35] Analytic hierarchy process (AHP) The AHP first decomposes the decision problem into a hierarchy of subproblems. Then the decision-maker evaluates the relative importance of its various elements by pairwise comparisons. The AHP converts these evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative (Saaty, 1980[36]). A consistency index measures the extent to which the decision-maker has been consistent in her responses. AHP is one of the more controversial techniques listed here, with some researchers in the MCDA community believing it to be flawed.[37][38] Several papers reviewed the application of MCDM techniques in various disciplines such as fuzzy MCDM,[39]classic MCDM,[40]sustainable and renewable energy,[41]VIKOR technique,[42]transportation systems,[43]service quality,[44]TOPSIS method,[45]energy management problems,[46]e-learning,[47]tourism and hospitality,[48]SWARA and WASPAS methods.[49] The following MCDM methods are available, many of which are implemented by specializeddecision-making software:[3][4]
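As a concrete illustration of the AHP step described above, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and reports a consistency measure. The 3x3 matrix is an invented example on Saaty's 1-9 scale; the 0.58 figure is Saaty's commonly cited random index for n = 3.

```python
import numpy as np

# Pairwise comparisons of three criteria (e.g. cost, comfort, safety):
# entry [i, j] says how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)              # principal eigenvalue
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()                     # normalized priorities

n = A.shape[0]
lambda_max = eigenvalues.real[k]
ci = (lambda_max - n) / (n - 1)              # consistency index
cr = ci / 0.58                               # consistency ratio (random index for n = 3)

print(np.round(weights, 3))                  # roughly [0.65, 0.23, 0.12]
print(round(cr, 3))                          # values below about 0.1 are usually deemed consistent
```

The same eigenvector step is then repeated for the alternatives under each criterion, and the resulting priorities are aggregated up the hierarchy to score each alternative.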
https://en.wikipedia.org/wiki/Multiple-criteria_decision_analysis
Multi-objective optimizationorPareto optimization(also known asmulti-objective programming,vector optimization,multicriteria optimization, ormultiattribute optimization) is an area ofmultiple-criteria decision makingthat is concerned withmathematical optimization problemsinvolving more than oneobjective functionto be optimized simultaneously. Multi-objective is a type ofvector optimizationthat has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence oftrade-offsbetween two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives. For a multi-objective optimization problem, it is not guaranteed that a single solution simultaneously optimizes each objective. The objective functions are said to be conflicting. A solution is callednondominated, Pareto optimal,Pareto efficientor noninferior, if none of the objective functions can be improved in value without degrading some of the other objective values. Without additionalsubjectivepreference information, there may exist a (possibly infinite) number of Pareto optimal solutions, all of which are considered equally good. Researchers study multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them. The goal may be to find a representative set of Pareto optimal solutions, and/or quantify the trade-offs in satisfying the different objectives, and/or finding a single solution that satisfies the subjective preferences of a human decision maker (DM). Bicriteria optimizationdenotes the special case in which there are two objective functions. There is a direct relationship betweenmultitask optimizationand multi-objective optimization.[1] A multi-objective optimization problem is anoptimization problemthat involves multiple objective functions.[2][3][4]In mathematical terms, a multi-objective optimization problem can be formulated as where the integerk≥2{\displaystyle k\geq 2}is the number of objectives and the setX{\displaystyle X}is thefeasible setof decision vectors, which is typicallyX⊆Rn{\displaystyle X\subseteq \mathbb {R} ^{n}}but it depends on then{\displaystyle n}-dimensional application domain. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as If some objective function is to be maximized, it is equivalent to minimize its negative or its inverse. We denoteY⊆Rk{\displaystyle Y\subseteq \mathbb {R} ^{k}}the image ofX{\displaystyle X};x∗∈X{\displaystyle x^{*}\in X}afeasible solutionorfeasible decision; andz∗=f(x∗)∈Rk{\displaystyle z^{*}=f(x^{*})\in \mathbb {R} ^{k}}anobjective vectoror anoutcome. In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid toPareto optimalsolutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. 
In mathematical terms, a feasible solutionx1∈X{\displaystyle x_{1}\in X}is said to(Pareto) dominateanother solutionx2∈X{\displaystyle x_{2}\in X}, if A solutionx∗∈X{\displaystyle x^{*}\in X}(and the corresponding outcomef(x∗){\displaystyle f(x^{*})}) is called Pareto optimal if there does not exist another solution that dominates it. The set of Pareto optimal outcomes, denotedX∗{\displaystyle X^{*}}, is often called thePareto front, Pareto frontier, or Pareto boundary. The Pareto front of a multi-objective optimization problem is bounded by a so-callednadirobjective vectorznadir{\displaystyle z^{nadir}}and anideal objective vectorzideal{\displaystyle z^{ideal}}, if these are finite. The nadir objective vector is defined as and the ideal objective vector as In other words, the components of the nadir and ideal objective vectors define the upper and lower bounds of the objective function of Pareto optimal solutions. In practice, the nadir objective vector can only be approximated as, typically, the whole Pareto optimal set is unknown. In addition, autopian objective vectorzutop{\displaystyle z^{utop}}, such thatziutop=ziideal−ϵ,∀i∈{1,…,k}{\displaystyle z_{i}^{utop}=z_{i}^{ideal}-\epsilon ,\forall i\in \{1,\dots ,k\}}whereϵ>0{\displaystyle \epsilon >0}is a small constant, is often defined because of numerical reasons. Ineconomics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable. For example, consumer'sdemandfor various goods is determined by the process of maximization of theutilitiesderived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other. A common method for analyzing such a problem is to use a graph ofindifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with. Another example involves theproduction possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier. Macroeconomic policy-making is a context requiring multi-objective optimization. Typically acentral bankmust choose a stance formonetary policythat balances competing objectives — lowinflation, lowunemployment, lowbalance of tradedeficit, etc. To do this, the central bank uses amodel of the economythat quantitatively describes the various causal linkages in the economy; itsimulatesthe model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based, process for ranking the alternatives and making the policy choice. 
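Returning to the objective-space quantities defined at the start of this section, the ideal, nadir, and utopian vectors can be read off directly once the Pareto front is available (here assumed to be given in full; in practice it is only approximated). The front below is an invented bi-objective example with both objectives minimized.

```python
import numpy as np

# Rows are objective vectors of Pareto optimal solutions (both objectives minimized).
pareto_front = np.array([
    [1.0, 9.0],
    [2.0, 5.0],
    [4.0, 3.0],
    [7.0, 1.5],
])

ideal = pareto_front.min(axis=0)     # componentwise best over the front
nadir = pareto_front.max(axis=0)     # componentwise worst over the nondominated set
epsilon = 1e-3
utopian = ideal - epsilon            # strictly better than the ideal in every component

print(ideal)    # approximately [1.0, 1.5]
print(nadir)    # approximately [7.0, 9.0]
print(utopian)
```

Because the whole Pareto optimal set is usually unknown, a nadir vector computed this way from whatever nondominated points have been found is itself only an approximation, as noted above.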
Infinance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have theexpected valueof portfolio returns be as high as possible, and the desire to haverisk, often measured by thestandard deviationof portfolio returns, be as low as possible. This problem is often represented by a graph in which theefficient frontiershows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (firstmoment) and the standard deviation (square root of the second central moment) of portfolio return is called atwo-moment decision model. Inengineeringandeconomics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, energy systems typically have a trade-off between performance and cost[5][6]or one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conductopen market operationsso that both theinflation rateand theunemployment rateare as close as possible to their desired values. Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objectivequadratic objective functionis used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time,intertemporal optimizationtechniques are employed.[7] Product and process design can be largely improved using modern modeling, simulation, and optimization techniques.[citation needed]The key question in optimal design is measuring what is good or desirable about a design. Before looking for optimal designs, it is important to identify characteristics that contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or product recovery, efficiency, process safety, operation time, etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on one or more objectives. For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously. If the design of a paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as i) minimization of expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of storage volumes. 
Here, the maximum volume of towers is a design variable. This example of optimal design of a paper mill is a simplification of the model used in.[8]Multi-objective design optimization has also been implemented in engineering systems in the circumstances such as control cabinet layout optimization,[9]airfoil shape optimization using scientific workflows,[10]design of nano-CMOS,[11]system on chipdesign, design of solar-powered irrigation systems,[12]optimization of sand mould systems,[13][14]engine design,[15][16]optimal sensor deployment[17]and optimal controller design.[18][19] Multi-objective optimization has been increasingly employed inchemical engineeringandmanufacturing. In 2009, Fiandaca and Fraga used the multi-objective genetic algorithm (MOGA) to optimize the pressure swing adsorption process (cyclic separation process). The design problem involved the dual maximization of nitrogen recovery and nitrogen purity. The results approximated the Pareto frontier well with acceptable trade-offs between the objectives.[20] In 2010, Sendín et al. solved a multi-objective problem for the thermal processing of food. They tackled two case studies (bi-objective and triple-objective problems) with nonlinear dynamic models. They used a hybrid approach consisting of the weighted Tchebycheff and the Normal Boundary Intersection approach. The novel hybrid approach was able to construct a Pareto optimal set for the thermal processing of foods.[21] In 2013, Ganesan et al. carried out the multi-objective optimization of the combined carbon dioxide reforming and partial oxidation of methane. The objective functions were methane conversion, carbon monoxide selectivity, and hydrogen to carbon monoxide ratio. Ganesan used the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) to tackle the problem.[22]Applications involving chemical extraction[23]and bioethanol production processes[24]have posed similar multi-objective problems. In 2013, Abakarov et al. proposed an alternative technique to solve multi-objective optimization problems arising in food engineering.[25]The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of the non-dominated or Pareto-optimal solutions. TheAnalytic Hierarchy Processand Tabular Method were used simultaneously for choosing the best alternative among the computed subset of non-dominated solutions for osmotic dehydration processes.[26] In 2018, Pearce et al. formulated task allocation to human and robotic workers as a multi-objective optimization problem, considering production time and the ergonomic impact on the human worker as the two objectives considered in the formulation. Their approach used aMixed-Integer Linear Programto solve the optimization problem for a weighted sum of the two objectives to calculate a set ofPareto optimalsolutions. Applying the approach to several manufacturing tasks showed improvements in at least one objective in most tasks and in both objectives in some of the processes.[27] The purpose ofradio resource managementis to satisfy the data rates that are requested by the users of a cellular network.[28]The main resources are time intervals, frequency blocks, and transmit powers. Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency. 
These objectives are conflicting since the frequency resources are very scarce, thus there is a need for tight spatialfrequency reusewhich causes immense inter-user interference if not properly controlled.Multi-user MIMOtechniques are nowadays used to reduce the interference by adaptiveprecoding. The network operator would like to provide both wide coverage and high data rates, thus the operator would like to find a Pareto optimal solution that balances the total network data throughput and the user fairness in an appropriate subjective manner. Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness. The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem.[28]For example, the common utility of weighted sum rate gives anNP-hardproblem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only a polynomial scaling with the number of users.[29] Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures which can improve the operational performance of a distribution system. The problem of optimization through the reconfiguration of a power distribution system has traditionally been defined as a single-objective problem with constraints. Since 1975, when Merlin and Back[30]introduced the idea of distribution system reconfiguration for active power loss reduction, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single-objective problem. Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives). For this purpose, different artificial intelligence based methods have been used: microgenetic,[31]branch exchange,[32]particle swarm optimization[33]and non-dominated sorting genetic algorithm.[34] Autonomous inspection of infrastructure has the potential to reduce costs, risks and environmental impacts, as well as ensuring better periodic maintenance of inspected assets. Typically, planning such missions has been viewed as a single-objective optimization problem, where one aims to minimize the energy or time spent in inspecting an entire target structure.[35]For complex, real-world structures, however, covering 100% of an inspection target is not feasible, and generating an inspection plan may be better viewed as a multiobjective optimization problem, where one aims to both maximize inspection coverage and minimize time and costs. A recent study has indicated that multiobjective inspection planning indeed has the potential to outperform traditional methods on complex structures.[36] As multiplePareto optimalsolutions for multi-objective optimization problems usually exist, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem. Therefore, different researchers have defined the term "solving a multi-objective optimization problem" in various ways. This section summarizes some of them and the contexts in which they are used. Many methods convert the original problem with multiple objectives into a single-objectiveoptimization problem. This is called a scalarized problem.
If the Pareto optimality of the single-objective solutions obtained can be guaranteed, the scalarization is said to be done neatly. Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions.[37][38] Whendecision makingis emphasized, the objective of solving a multi-objective optimization problem is referred to as supporting a decision maker in finding the most preferred Pareto optimal solution according to their subjective preferences.[2][39]The underlying assumption is that one solution to the problem must be identified to be implemented in practice. Here, a humandecision maker(DM) plays an important role. The DM is expected to be an expert in the problem domain. The most preferred results can be found using different philosophies. Multi-objective optimization methods can be divided into four classes: no-preference methods, a priori methods, a posteriori methods, and interactive methods.[3] More information and examples of different methods in the four classes are given in the following sections. When a decision maker does not explicitly articulate any preference information, the multi-objective optimization method can be classified as a no-preference method.[3]A well-known example is the method of global criterion,[40]in which a scalarized problem of the form minx∈X‖f(x)−zideal‖{\displaystyle \min _{x\in X}\|f(x)-z^{\text{ideal}}\|} is solved, wherezideal{\displaystyle z^{\text{ideal}}}denotes the ideal objective vector. In the above problem,‖⋅‖{\displaystyle \|\cdot \|}can be anyLp{\displaystyle L_{p}}norm, with common choices includingL1{\displaystyle L_{1}},L2{\displaystyle L_{2}}, andL∞{\displaystyle L_{\infty }}.[2]The method of global criterion is sensitive to the scaling of the objective functions. Thus, it is recommended that the objectives be normalized into a uniform, dimensionless scale.[2][39] A priori methods require that sufficient preference information is expressed before the solution process.[3]Well-known examples of a priori methods include the utility function method,lexicographicmethod, andgoal programming. The utility function method assumes the decision maker'sutility functionis available. A mappingu:Y→R{\displaystyle u\colon Y\rightarrow \mathbb {R} }is a utility function if for ally1,y2∈Y{\displaystyle \mathbf {y} ^{1},\mathbf {y} ^{2}\in Y}it holds thatu(y1)>u(y2){\displaystyle u(\mathbf {y} ^{1})>u(\mathbf {y} ^{2})}if the decision maker prefersy1{\displaystyle \mathbf {y} ^{1}}toy2{\displaystyle \mathbf {y} ^{2}}, andu(y1)=u(y2){\displaystyle u(\mathbf {y} ^{1})=u(\mathbf {y} ^{2})}if the decision maker is indifferent betweeny1{\displaystyle \mathbf {y} ^{1}}andy2{\displaystyle \mathbf {y} ^{2}}. The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways). Onceu{\displaystyle u}is obtained, it suffices to solve maxx∈Xu(f(x)){\displaystyle \max _{x\in X}u(f(x))}, but in practice, it is very difficult to construct a utility function that would accurately represent the decision maker's preferences,[2]particularly since the Pareto front is unknown before the optimization begins. The lexicographic method assumes that the objectives can be ranked in the order of importance. We assume that the objective functions are in the order of importance so thatf1{\displaystyle f_{1}}is the most important andfk{\displaystyle f_{k}}the least important to the decision maker. Subject to this assumption, various methods can be used to attain the lexicographically optimal solution. Note that a goal or target value is not specified for any objective here, which makes it different from the LexicographicGoal Programmingmethod.
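To make the lexicographic method concrete, the following Python sketch applies one common realization of it: optimize the most important objective first, then optimize the next objective while constraining the first to (approximately) its optimal value. The two objective functions, the starting point, and the tolerance are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem with two objectives ranked by importance (illustrative):
# f1 is the most important objective, f2 the least important.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

x0 = np.zeros(2)

# Step 1: minimize the most important objective on its own.
res1 = minimize(f1, x0)
f1_best = res1.fun

# Step 2: minimize the next objective subject to keeping f1 at its
# optimal value from step 1 (up to a small tolerance).
tol = 1e-6
keep_f1 = {"type": "ineq", "fun": lambda x: f1_best + tol - f1(x)}
res2 = minimize(f2, res1.x, method="SLSQP", constraints=[keep_f1])

print("lexicographically optimal x:", res2.x)
print("objective values: f1 =", f1(res2.x), " f2 =", f2(res2.x))
```

With more than two objectives, the same pattern repeats: each subsequent objective is minimized over the set of solutions that preserve the optimal values of all more important objectives.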
Scalarizing a multi-objective optimization problem is an a priori method, which means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the multi-objective optimization problem.[3]In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization.[3]With different parameters for the scalarization, different Pareto optimal solutions are produced. A general formulation for a scalarization of a multi-objective optimization problem is minx∈Xθg(f1(x),…,fk(x);θ){\displaystyle \min _{x\in X_{\theta }}g(f_{1}(x),\ldots ,f_{k}(x);\theta )}, whereθ{\displaystyle \theta }is a vector parameter, the setXθ⊆X{\displaystyle X_{\theta }\subseteq X}is a set depending on the parameterθ{\displaystyle \theta }, andg:Rk+1→R{\displaystyle g:\mathbb {R} ^{k+1}\rightarrow \mathbb {R} }is a function. Very well-known examples are the linear scalarization (weighted-sum) approach and theϵ{\displaystyle \epsilon }-constraint method; somewhat more advanced examples include achievement scalarizing functions. For example,portfolio optimizationis often conducted in terms ofmean-variance analysis. In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean returnμP{\displaystyle \mu _{P}}in the problem of choosing portfolio shares to minimize the portfolio's standard deviation of returnσP{\displaystyle \sigma _{P}}subject to a given value ofμP{\displaystyle \mu _{P}}; seeMutual fund separation theoremfor details. Alternatively, the efficient set can be specified by choosing the portfolio shares to maximize the functionμP−bσP{\displaystyle \mu _{P}-b\sigma _{P}}; the set of efficient portfolios consists of the solutions asb{\displaystyle b}ranges from zero to infinity. Some of the above scalarizations involve invoking theminimaxprinciple, where the worst of the different objectives is always optimized.[44] A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions. Most a posteriori methods fall into one of the following three classes: mathematical programming-based methods, evolutionary algorithms, and deep learning-based methods. Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI),[45]Modified Normal Boundary Intersection (NBIm),[46]Normal Constraint (NC),[47][48]Successive Pareto Optimization (SPO),[49]and Directed Search Domain (DSD)[50]methods, which solve the multi-objective optimization problem by constructing several scalarizations. The solution to each scalarization yields a Pareto optimal solution, whether locally or globally. The scalarizations of the NBI, NBIm, NC, and DSD methods are constructed to obtain evenly distributed Pareto points that give a good approximation of the real set of Pareto points. Evolutionary algorithmsare popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II),[51]its extended version NSGA-III,[52][53]Strength Pareto Evolutionary Algorithm 2 (SPEA-2)[54]and multiobjectivedifferential evolutionvariants have become standard approaches, although some schemes based onparticle swarm optimizationandsimulated annealing[55]are significant. The main advantage of evolutionary algorithms, when applied to solve multi-objective optimization problems, is the fact that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front.
The main disadvantages of evolutionary algorithms are their lower speed and the fact that the Pareto optimality of the solutions cannot be guaranteed; it is only known that none of the generated solutions is dominated by another. Another paradigm for multi-objective optimization, based on novelty and using evolutionary algorithms, has recently been improved upon.[56]This paradigm searches for novel solutions in objective space (i.e., novelty search[57]on objective space) in addition to the search for non-dominated solutions. Novelty search is like stepping stones guiding the search to previously unexplored places. It is especially useful in overcoming bias and plateaus as well as guiding the search in many-objective optimization problems. Deep learningconditional methods are new approaches to generating several Pareto optimal solutions. The idea is to use the generalization capacity of deep neural networks to learn a model of the entire Pareto front from a limited number of example trade-offs along that front, a task calledPareto Front Learning.[58]Several approaches address this setup, including using hypernetworks[58]and using Stein variational gradient descent.[59] A number of further a posteriori methods are also in common use. In interactive methods of optimizing multiple objective problems, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g., Miettinen 1999,[2]Miettinen 2008[70]). In other words, the decision maker is expected to express preferences at each iteration to getPareto optimal solutionsthat are of interest to the decision maker and learn what kind of solutions are attainable. The following steps are commonly present in interactive methods of optimization:[70] initialize the process and generate a Pareto optimal starting point; ask the decision maker for preference information, such as aspiration levels; generate new Pareto optimal solution(s) according to the preferences and show them to the decision maker; and either stop, if the decision maker is satisfied, or return to the preference-elicitation step. The aspiration levels mentioned above refer to desirable objective function values forming a reference point. Instead of mathematical convergence, often used as a stopping criterion inmathematical optimizationmethods, psychological convergence is often emphasized in interactive methods. Generally speaking, a method is terminated when the decision maker is confident that he/she has found themost preferred solution available. There are different interactive methods involving different types of preference information. Three types can be identified, based on whether the preference information takes the form of trade-off information, a reference point, or a classification of the objective functions. On the other hand, a fourth type, based on generating a small sample of solutions, is considered in.[71][72]An example of the interactive method utilizing trade-off information is theZionts-Wallenius method,[73]where the decision maker is shown several objective trade-offs at each iteration, and (s)he is expected to say whether (s)he likes, dislikes, or is indifferent with respect to each trade-off. In reference point-based methods (see e.g.,[74][75]), the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective, and corresponding Pareto optimal solution(s) are then computed and shown to them for analysis. In classification-based interactive methods, the decision maker is assumed to give preferences in the form of classifying objectives at the current Pareto optimal solution into different classes, indicating how the values of the objectives should be changed to get a more preferred solution. Then, the classification information is considered when new (more preferred) Pareto optimal solution(s) are computed.
In the satisficing trade-off method (STOM),[76]three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such. In the NIMBUS method,[77][78]two additional classes are also used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound. Differenthybridmethods exist, but here we consider hybridizing MCDM (multi-criteria decision-making) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in multi-objective optimization combines algorithms/approaches from these two fields (see e.g.,[70]). Hybrid algorithms of EMO and MCDM are mainly used to overcome the shortcomings of each field by utilizing the strengths of the other. Several types of hybrid algorithms have been proposed in the literature, e.g., incorporating MCDM approaches into EMO algorithms as a local search operator, leading a DM to the most preferred solution(s), etc. A local search operator is mainly used to enhance the rate of convergence of EMO algorithms. The roots of hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004. Here, some of the best minds[citation needed]in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke, etc.) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer, etc.) realized the potential in combining ideas and approaches of the MCDM and EMO fields to prepare hybrids of them. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Recently, hybrid multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM (see e.g.,[79][80]). Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization. The a posteriori preference techniques provide an important class of multi-objective optimization techniques.[2]Usually, the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e., the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point at the Pareto front; (4) the computer provides the Pareto optimal decision, whose output coincides with the objective point identified by the decision maker. From the point of view of the decision maker, the second step of the a posteriori preference techniques is the most complicated. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (interesting discussion and references are given in[81]) or using heatmaps.[82] In the case of bi-objective problems, informing the decision maker concerning the Pareto front is usually carried out by its visualization: the Pareto front, often named the tradeoff curve in this case, can be drawn at the objective plane. The tradeoff curve gives full information on objective values and on objective tradeoffs, which inform how improving one objective is related to deteriorating the second one while moving along the tradeoff curve. The decision maker takes this information into account while specifying the preferred Pareto optimal objective point. The idea to approximate and visualize the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty.[83]This idea was developed and applied in environmental problems by J.L.
Cohon.[84]A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly, two) is provided in.[85] There are two generic ideas for visualizing the Pareto front in high-order multi-objective decision problems (problems with more than two objectives). One of them, which is applicable in the case of a relatively small number of objective points that represent the Pareto front, is based on using the visualization techniques developed in statistics (various diagrams, etc.; see the corresponding subsection below). The second idea proposes the display of bi-objective cross-sections (slices) of the Pareto front. It was introduced by W.S. Meisel in 1973[86]who argued that such slices inform the decision maker on objective tradeoffs. The figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as the decision maps. They give a clear picture of tradeoffs between the three criteria. The disadvantages of such an approach are related to the following two facts. First, the computational procedures for constructing the Pareto front's bi-objective slices are unstable since the Pareto front is usually not stable. Secondly, it is applicable in the case of only three objectives. In the 1980s, the idea of W.S. Meisel was implemented in a different form—in the form of theInteractive Decision Maps(IDM) technique.[87]More recently, N. Wesner[88]proposed using a combination of a Venn diagram and multiple scatterplots of the objective space to explore the Pareto frontier and select optimal solutions.
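To make the idea of drawing a tradeoff curve concrete, here is a minimal sketch that approximates the Pareto front of a toy bi-objective problem by sweeping the weight of a weighted-sum scalarization and plotting the resulting points in the objective plane. The objective functions, the weight grid, and the plotting details are illustrative assumptions, and a weighted-sum sweep only recovers the convex part of a Pareto front.

```python
import numpy as np
from scipy.optimize import minimize_scalar
import matplotlib.pyplot as plt

# Toy bi-objective problem in one decision variable x (illustrative):
# f1(x) = x^2 and f2(x) = (x - 2)^2.  Pareto optimal decisions lie in [0, 2].
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 51):
    # Weighted-sum scalarization: minimize w*f1 + (1 - w)*f2 for each weight.
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x))
    pareto_points.append((f1(res.x), f2(res.x)))

pareto_points = np.array(pareto_points)

# Tradeoff curve in the objective plane: improving f1 worsens f2.
plt.plot(pareto_points[:, 0], pareto_points[:, 1], "o-")
plt.xlabel("f1")
plt.ylabel("f2")
plt.title("Approximated Pareto front (tradeoff curve)")
plt.show()
```

For three objectives, a series of such bi-objective slices (holding the third objective near fixed levels) would produce the decision maps described above.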
https://en.wikipedia.org/wiki/Multi-objective_optimization
Inmultiple criteria decision aiding(MCDA),multicriteria classification(or sorting) involves problems where a finite set of alternative actions should be assigned to a predefined set of preferentially ordered categories (classes).[1]For example, credit analysts classify loan applications into risk categories (e.g., acceptable/unacceptable applicants), customers rate products and classify them into attractiveness groups, candidates for a job position are evaluated and their applications are approved or rejected, technical systems are prioritized for inspection on the basis of their failure risk, clinicians classify patients according to the extent to which they have a complex disease or not, etc. In a multicriteria classification problem (MCP) a set ofmalternative actions is available. Each alternative is evaluated over a set ofncriteria. The objective of the analysis is to assign each alternative to one of a given set of categories (classes)C= {c1,c2, ...,ck}. It is therefore a kind ofclassificationproblem. The categories are defined in an ordinal way. Assuming (without loss of generality) an ascending order, this means that categoryc1consists of the worst alternatives whereasckincludes the best (most preferred) ones. The alternatives in each category cannot be assumed to be equivalent in terms of their overall evaluation (the categories are notequivalence classes). Furthermore, the categories are defined independently of the set of alternatives under consideration. In that regard, MCPs are based on an absolute evaluation scheme. For instance, a predefined specific set of categories is often used to classify industrial accidents (e.g., major, minor, etc.). These categories are not related to a specific event under consideration. Of course, in many cases the definition of the categories is adjusted over time to take into consideration the changes in the decision environment. In comparison tostatistical classificationandpattern recognitionin amachine learningsense, two main distinguishing features of MCPs can be identified.[2][3] The most popular modeling approaches for MCPs are based on value function models, outranking relations, and decision rules. MCP models can be developed through either direct or indirect approaches. Direct techniques involve the specification of all parameters of the decision model (e.g., the weights of the criteria) through an interactive procedure, where the decision analyst elicits the required information from the decision-maker. This can be a time-consuming process, but it is particularly useful in strategic decision making. Indirect procedures are referred to aspreference disaggregation analysis.[10]The preference disaggregation approach refers to the analysis of the decision-maker's global judgments in order to specify the parameters of the criteria aggregation model that best fit the decision-maker's evaluations. In the case of MCP, the decision-maker's global judgments are expressed by classifying a set of reference alternatives (training examples). The reference set may include: (a) some decision alternatives evaluated in similar problems in the past, (b) a subset of the alternatives under consideration, (c) some fictitious alternatives, consisting of performances on the criteria which can be easily judged by the decision-maker to express his/her global evaluation.
Disaggregation techniques provide an estimateβ*for the parameters of a decision modelf{\displaystyle f}based on the solution of an optimization problem of the following general form: minimize L(D(X), D'(X,fβ)) subject to β ∈ B, whereXis the set of reference alternatives,D(X) is the classification of the reference alternatives by the decision-maker,D'(X,fβ) are the recommendations of the model for the reference alternatives,Lis a function that measures the differences between the decision-maker's evaluations and the model's outputs, andBis the set of feasible values for the model's parameters. For example, a linear program can be formulated in the context of a weighted average modelV(xi) =w1xi1+ ... +wnxinwithwjbeing the (non-negative) trade-off constant for criterionj(w1+ ... +wn= 1) andxijbeing the data for alternativeion criterionj; such a program seeks weights and category thresholds that reproduce the decision-maker's classification of the reference alternatives with minimal total error. This linear programming formulation can be generalized in the context of additive value functions.[11][12]Similar optimization problems (linear and nonlinear) can be formulated for outranking models,[13][14][15]whereasdecision rule modelsare built throughrule inductionalgorithms.
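As a concrete illustration of this kind of linear program, the following Python sketch estimates the weights of a weighted average model and a single cut-off threshold separating two categories from a hypothetical reference set, minimizing the total classification error. The reference data, the separation margin delta, and the exact constraint form are illustrative assumptions in the spirit of UTADIS-type formulations, not the specific program referenced in the article.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical reference set: 6 alternatives on 3 criteria, scaled to [0, 1],
# classified by the decision-maker into class 1 ("unacceptable") or 2 ("acceptable").
X = np.array([
    [0.9, 0.8, 0.7],
    [0.8, 0.9, 0.6],
    [0.7, 0.6, 0.9],   # acceptable examples
    [0.2, 0.3, 0.4],
    [0.3, 0.1, 0.5],
    [0.4, 0.2, 0.2],   # unacceptable examples
])
labels = np.array([2, 2, 2, 1, 1, 1])
m, n = X.shape
delta = 0.01           # small separation margin between the two classes

# Decision vector: [w_1..w_n, t, e_1..e_m]  (weights, threshold, error slacks).
c = np.concatenate([np.zeros(n + 1), np.ones(m)])   # minimize total error

A_ub, b_ub = [], []
for i in range(m):
    e_row = np.zeros(m)
    e_row[i] = -1.0
    if labels[i] == 2:   # acceptable: w.x_i + e_i >= t + delta
        A_ub.append(np.concatenate([-X[i], [1.0], e_row]))
    else:                # unacceptable: w.x_i - e_i <= t - delta
        A_ub.append(np.concatenate([X[i], [-1.0], e_row]))
    b_ub.append(-delta)

# Weights are non-negative and sum to one; slacks are non-negative.
A_eq = [np.concatenate([np.ones(n), [0.0], np.zeros(m)])]
b_eq = [1.0]
bounds = [(0, 1)] * n + [(0, 1)] + [(0, None)] * m

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, t = res.x[:n], res.x[n]
print("estimated weights:", np.round(w, 3), " threshold:", round(t, 3))
```

With more than two ordered categories, the same pattern generalizes by introducing one threshold per category boundary and a pair of error variables per reference alternative.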
https://en.wikipedia.org/wiki/Multicriteria_classification
Robot learningis a research field at the intersection ofmachine learningandrobotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties (e.g. high dimensionality, real-time constraints for collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives). Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and activeobject categorization, as well as interactive skills such as joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situatedmeaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as for example in robot learning by imitation. Robot learning can be closely related toadaptive control,reinforcement learningas well asdevelopmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. Whilemachine learningis frequently used bycomputer visionalgorithms employed in the context of robotics, these applications are usually not referred to as "robot learning". Many research groups are developing techniques where robots learn by imitating. This includes various techniques for learning from demonstration (sometimes also referred to as "programming by demonstration") andobservational learning. In Tellex's "Million Object Challenge," the goal is robots that learn how to spot and handle simple items and upload their data to the cloud to allow other robots to analyze and use the information.[1] RoboBrainis a knowledge engine for robots which can be freely accessed by any device wishing to carry out a task. The database gathers new information about tasks as robots perform them, by searching the Internet and interpreting natural language text, images, and videos, as well as throughobject recognitionand interaction. The project is led byAshutosh SaxenaatStanford University.[2][3] RoboEarthis a project that has been described as a "World Wide Webfor robots": it is a network and database repository where robots can share information and learn from each other, and a cloud for outsourcing heavy computation tasks. The project brings together researchers from five major universities in Germany, the Netherlands and Spain and is backed by theEuropean Union.[4][5][6][7][8] Google Research,DeepMind, andGoogle Xhave decided to allow their robots to share their experiences.[9][10][11]
https://en.wikipedia.org/wiki/Robot_learning
Ametaphoris afigure of speechthat, forrhetoricaleffect, directly refers to one thing by mentioning another.[1]It may provide, or obscure, clarity or identify hidden similarities between two different ideas. Metaphors are usually meant to create a likeness or ananalogy.[2] Analysts group metaphors with other types of figurative language, such asantithesis,hyperbole,metonymy, andsimile.[3]According toGrammarly, "Figurative language examples include similes, metaphors, personification, hyperbole, allusions, and idioms."[4]One of the most commonly cited examples of a metaphor in English literature comes from the "All the world's a stage" monologue fromAs You Like It: All the world's a stage,And all the men and women merely players;They have their exits and their entrancesAnd one man in his time plays many parts,His Acts being seven ages. At first, the infant...—William Shakespeare,As You Like It, 2/7[5] This quotation expresses a metaphor because the world is not literally a stage, and most humans are not literally actors and actresses playing roles. By asserting that the world is a stage, Shakespeare uses points of comparison between the world and a stage to convey an understanding about the mechanics of the world and the behavior of the people within it. In the ancient Hebrewpsalms(around 1000 B.C.), one finds vivid and poetic examples of metaphor such as, "The Lord is my rock, my fortress and my deliverer; my God is my rock, in whom I take refuge, my shield and the horn of my salvation, my stronghold" and "The Lord is my shepherd, I shall not want". Some recent linguistic theories view all language in essence as metaphorical.[6]Theetymologyof a word may uncover a metaphorical usage which has since become obscured with persistent use - such as for example the English word "window", etymologically equivalent to "wind eye".[7] The wordmetaphoritself is a metaphor, coming from a Greek term meaning 'transference (of ownership)'. The user of a metaphor alters the reference of the word, "carrying" it from onesemantic"realm" to another. The new meaning of the word might derive from an analogy between the two semantic realms, but also from other reasons such as the distortion of the semantic realm - for example in sarcasm. The English wordmetaphorderives from the 16th-centuryOld Frenchwordmétaphore, which comes from theLatinmetaphora, 'carrying over', and in turn from theGreekμεταφορά(metaphorá), 'transference (of ownership)',[8]fromμεταφέρω(metapherō), 'to carry over, to transfer'[9]and that fromμετά(meta), 'behind, along with, across'[10]+φέρω(pherō), 'to bear, to carry'.[11] The Philosophy of Rhetoric(1936) byrhetoricianI. A. Richardsdescribes a metaphor as having two parts: the tenor and the vehicle. The tenor is the subject to which attributes are ascribed. The vehicle is the object whose attributes are borrowed. In the previous example, "the world" is compared to a stage, describing it with the attributes of "the stage"; "the world" is the tenor, and "a stage" is the vehicle; "men and women" is the secondary tenor, and "players" is the secondary vehicle. Other writers[which?]employ the general termsgroundandfigureto denote the tenor and the vehicle.Cognitive linguisticsuses the termstargetandsource, respectively. PsychologistJulian Jaynescoined the termsmetaphrandandmetaphier, plus two new concepts,paraphrandandparaphier.[12][13]Metaphrandis equivalent to the metaphor-theory termstenor,target, andground.Metaphieris equivalent to the metaphor-theory termsvehicle,figure, andsource. 
In a simple metaphor, an obvious attribute of the metaphier exactly characterizes the metaphrand (e.g. "the ship plowed the seas"). With an inexact metaphor, however, a metaphier might have associated attributes or nuances – its paraphiers – that enrich the metaphor because they "project back" to the metaphrand, potentially creating new ideas – the paraphrands – associated thereafter with the metaphrand or even leading to a new metaphor. For example, in the metaphor "Pat is a tornado", the metaphrand isPat; the metaphier istornado. As metaphier,tornadocarries paraphiers such as power, storm and wind, counterclockwise motion, and danger, threat, destruction, etc. The metaphoric meaning oftornadois inexact: one might understand that 'Pat is powerfully destructive' through the paraphrand of physical and emotional destruction; another person might understand the metaphor as 'Pat can spin out of control'. In the latter case, the paraphier of 'spinning motion' has become the paraphrand 'psychological spin', suggesting an entirely new metaphor for emotional unpredictability, a possibly apt description for a human being hardly applicable to a tornado. Based on his analysis, Jaynes claims that metaphors not only enhance description, but "increase enormously our powers of perception...and our understanding of [the world], and literally create new objects".[12]: 50 Metaphors are most frequently compared withsimiles. A metaphor asserts the objects in the comparison are identical on the point of comparison, while a simile merely asserts a similarity through use of words such aslikeoras. For this reason a common-type metaphor is generally considered more forceful than asimile.[15][16] The metaphor category contains these specialized types: It is said that a metaphor is "a condensed analogy" or "analogical fusion" or that they "operate in a similar fashion" or are "based on the same mental process" or yet that "the basic processes of analogy are at work in metaphor." It is also pointed out that "a border between metaphor and analogy is fuzzy" and "the difference between them might be described (metaphorically) as the distance between things being compared."[This quote needs a citation] Metaphor is distinct frommetonymy, as the two concepts embody different fundamental modes ofthought. Metaphor works by bringing together concepts from different conceptual domains, whereas metonymy uses one element from a given domain to refer to another closely related element. A metaphor creates new links between otherwise distinct conceptual domains, whereas a metonymy relies on pre-existent links within such domains. For example, in the phrase "lands belonging to the crown", the wordcrownis ametonymybecause some monarchs do indeed wear a crown, physically. In other words, there is a pre-existent link betweencrownandmonarchy.[20]On the other hand, whenGhil'ad Zuckermannargues that theIsraeli languageis a "phoenicuckoo cross with some magpie characteristics", he is usingmetaphor.[21]: 4There is no physical link between a language and a bird. The reason the metaphorsphoenixandcuckooare used is that on the one hand hybridicIsraeliis based onHebrew, which, like a phoenix, rises from the ashes; and on the other hand, hybridicIsraeliis based onYiddish, which like a cuckoo, lays its egg in the nest of another bird, tricking it to believe that it is its own egg. 
Furthermore, the metaphormagpieis employed because, according to Zuckermann, hybridicIsraelidisplays the characteristics of a magpie, "stealing" from languages such asArabicandEnglish.[21]: 4–6 Adead metaphoris a metaphor in which the sense of a transferred image has become absent. The phrases "to grasp a concept" and "to gather what you've understood" use physical action as a metaphor for understanding. The audience does not need to visualize the action; dead metaphors normally go unnoticed. Some distinguish between a dead metaphor and acliché. Others use "dead metaphor" to denote both.[22] A mixed metaphor is a metaphor that leaps from one identification to a second inconsistent with the first, e.g.: I smell a rat [...] but I'll nip him in the bud" This form is often used as a parody of metaphor itself: If we can hit that bull's-eye then the rest of the dominoes will fall like a house of cards...Checkmate. An extended metaphor, or conceit, sets up a principal subject with several subsidiary subjects or comparisons. In the above quote fromAs You Like It, the world is first described as a stage and then the subsidiary subjects men and women are further described in the same context. An implicit metaphor has no specified tenor, although the vehicle is present.M. H. Abramsoffers the following as an example of an implicit metaphor: "That reed was too frail to survive the storm of its sorrows". The reed is the vehicle for the implicit tenor, someone's death, and the storm is the vehicle for the person's sorrows.[24] Metaphor can serve as a device for persuading an audience of the user's argument or thesis, the so-called rhetorical metaphor. Aristotlewrites in his work theRhetoricthat metaphors make learning pleasant: "To learn easily is naturally pleasant to all people, and words signify something, so whatever words create knowledge in us are the pleasantest."[25]When discussing Aristotle'sRhetoric, Jan Garret stated "metaphor most brings about learning; for when [Homer] calls old age "stubble", he creates understanding and knowledge through the genus, since both old age and stubble are [species of the genus of] things that have lost their bloom".[26]Metaphors, according to Aristotle, have "qualities of the exotic and the fascinating; but at the same time we recognize that strangers do not have the same rights as our fellow citizens".[27] Educational psychologistAndrew Ortonygives more explicit detail: "Metaphors are necessary as a communicative device because they allow the transfer of coherent chunks of characteristics – perceptual, cognitive, emotional and experiential – from a vehicle which is known to a topic which is less so. In so doing they circumvent the problem of specifying one by one each of the often unnameable and innumerable characteristics; they avoid discretizing the perceived continuity of experience and are thus closer to experience and consequently more vivid and memorable."[28] As a characteristic of speech and writing, metaphors can serve the poetic imagination. This allowsSylvia Plath, in her poem "Cut", to compare the blood issuing from her cut thumb to the running of a million soldiers, "redcoats, every one"; and enablingRobert Frost, in "The Road Not Taken", to compare a life to a journey.[29][30][31] Metaphors can be implied and extended throughout pieces of literature. Sonja K. 
Fosscharacterizes metaphors as "nonliteral comparisons in which a word or phrase from one domain of experience is applied to another domain".[32]She argues that since reality is mediated by the language we use to describe it, the metaphors we use shape the world and our interactions to it. The term "metaphor" can characterise basic or general aspects of experience and cognition: Some theorists have suggested that metaphors are not merely stylistic, but are also cognitively important. InMetaphors We Live By(1980),George LakoffandMark Johnsonargue that metaphors are pervasive in everyday life, not only in language but also in thought and action. A common definition of metaphor presents it as a comparison that shows how two things, which are not alike in most ways, are similar in another important way. In this context, metaphors contribute to the creation of multiple meanings withinpolysemiccomplexes across different languages.[33]Furthermore, Lakoff and Johnson explain that a metaphor is essentially the understanding and experiencing of one kind of thing in terms of another, which they refer to as a "conduit metaphor". According to this view, a speaker can put ideas or objects into containers and then send them along a conduit to a listener, who removes the object from the container to make meaning of it. Thus, communication is conceptualized as something that ideas flow into, with the container being separate from the ideas themselves. Lakoff and Johnson provide several examples of daily metaphors in use, including "argument is war" and "time is money". These metaphors occur widely in various contexts to express personal meanings. In addition, the authors suggest that communication can be viewed as a machine: "Communication is not what one does with the machine, but is the machine itself."[34] Moreover, experimental evidence shows that "priming" people with material from one area can influence how they perform tasks and interpret language in a metaphorically related area.[note 1] Omnipresent metaphor may provide an indicator for researching the functionality of language.[36] Cognitive linguistsemphasize that metaphors serve to facilitate the understanding of one conceptual domain—typically an abstraction such as "life", "theories" or "ideas"—through expressions that relate to another, more familiar conceptual domain—typically more concrete, such as "journey", "buildings" or "food".[37][38]For example: onedevoursa book ofrawfacts, tries todigestthem,stewsover them, lets themsimmer on the back-burner,regurgitatesthem in discussions, andcooksup explanations, hoping they do not seemhalf-baked. A convenient short-hand way of capturing this view of metaphor is the following: Conceptual Domain (A) is Conceptual Domain (B), which is what is called aconceptual metaphor. A conceptual metaphor consists of two conceptual domains, in which one domain is understood in terms of another. A conceptual domain is any coherent organization of experience. For example, we have coherently organizedknowledgeabout journeys that we rely on in understanding life.[38] Lakoff and Johnson greatly contributed to establishing the importance of conceptual metaphor as a framework for thinking in language, leading scholars to investigate the original ways in which writers used novel metaphors and to question the fundamental frameworks of thinking in conceptual metaphors. 
From a sociological, cultural, or philosophical perspective, one asks to what extentideologiesmaintain and impose conceptual patterns of thought by introducing, supporting, and adapting fundamental patterns of thinking metaphorically.[39]The question is to what extent ideologies fashion and refashion the idea of the nation as a container with borders, and how enemies and outsiders are represented.[citation needed] Some cognitive scholars have attempted to take on board the idea that different languages have evolved radically different concepts and conceptual metaphors, while others hold to theSapir-Whorf hypothesis. GermanphilologistWilhelm von Humboldt(1767–1835) contributed significantly to this debate on the relationship between culture, language, and linguistic communities. Humboldt remains, however, relatively unknown in English-speaking nations.Andrew Goatly, in "Washing the Brain", takes on board the dual problem of conceptual metaphor as a framework implicit in the language as a system and the way individuals and ideologies negotiate conceptual metaphors. Neurobiological research suggests that some metaphors are innate, as demonstrated by reduced metaphorical understanding in psychopathy.[40] James W. Underhill, inCreating Worldviews: Ideology, Metaphor & Language(Edinburgh UP), considers the way individual speech adopts and reinforces certain metaphoric paradigms. This involves a critique of both communist and fascist discourse. Underhill's studies are situated in Czech and German, which allows him to demonstrate the ways individuals think both within, and in resistance to, the modes by which ideologies seek to appropriate key concepts such as "the people", "the state", "history", and "struggle". Though metaphors can be considered to be "in" language, Underhill's chapter on French, English andethnolinguisticsdemonstrates that language or languages cannot be conceived of in anything other than metaphoric terms. Several other philosophers have embraced the view that metaphors may also be described as examples of a linguistic "category mistake" which have the potential to lead unsuspecting users into considerable obfuscation of thought within the realm of epistemology. Included among them is the Australian philosopherColin Murray Turbayne.[41]In his bookThe Myth of Metaphor, Turbayne argues that the use of metaphor is an essential component within the context of any language system which claims to embody richness and depth of understanding.[42]In addition, he clarifies the limitations associated with a literal interpretation of the mechanistic Cartesian and Newtonian depictions of the universe as little more than a "machine" – a concept which continues to underlie much of thescientific materialismwhich prevails in the modern Western world.[43]He argues further that the philosophical concept of "substance" or "substratum" has limited meaning at best and that physicalist theories of the universe depend upon mechanistic metaphors which are drawn from deductive logic in the development of their hypotheses.[44][45][43]Turbayne argues that, by interpreting such metaphors literally, modern man has unknowingly fallen victim to only one of several metaphorical models of the universe which may be more beneficial in nature.[46][43][47] In his bookIn Other Shoes: Music, Metaphor, Empathy, ExistenceKendall Waltonalso places the formulation of metaphors at the center of a "Game of Make Believe," which is regulated by tacit norms and rules.
These "principles of generation" serve to determine several aspects of the game which include: what is considered to be fictional or imaginary, as well as the fixed function which is assumed by both objects and people who interact in the game. Walton refers to such generators as "props" which can serve as means to the development of various imaginative ends. In "content oriented" games, users derive value from such props as a result of the intrinsic fictional content which they help to create through their participation in the game. As familiar examples of such content oriented games, Walton points to putting on a play ofHamletor "playing cops and robbers". Walton further argues, however, that not all games conform to this characteristic.[48]In the course of creating fictions through the use of metaphor we can also perceive and manipulate props into new improvised representations of something entirely different in a game of "make-believe". Suddenly the properties of the props themselves take on primary importance. In the process the participants in the game may be only partially conscious of the "prop oriented" nature of the game itself.[49][50][51] Metaphors can map experience between two nonlinguistic realms.MusicologistLeonard B. Meyerdemonstrated how purely rhythmic and harmonic events can express human emotions.[52] Art theorist Robert Vischer argued that when we look at a painting, we "feel ourselves into it" by imagining our body in the posture of a nonhuman or inanimate object in the painting. For example, the paintingThe Lonely TreebyCaspar David Friedrichshows a tree with contorted, barren limbs.[53]Looking at the painting, some recipients may imagine their limbs in a similarly contorted and barren shape, evoking a feeling of strain and distress.[citation needed] Nonlinguistic metaphors may be the foundation of our experience of visual and musical art, as well as dance and other art forms.[54][55] In historicalonomasiologyor inhistorical linguistics, a metaphor is defined as a semantic change based on a similarity in form or function between the original concept and the target concept named by a word.[56] For example,mouse: "small, gray rodent with a long tail" → "small, gray computer device with a long cord". Some recent linguistic theories hold that language evolved from the capability of the brain to create metaphors that link actions and sensations to sounds.[6] Aristotle discusses the creation of metaphors at the end of hisPoetics: "But the greatest thing by far is to be a master of metaphor. It is the one thing that cannot be learnt from others; and it is also a sign of genius, since a good metaphor implies an intuitive perception of the similarity in dissimilars."[57] Baroqueliterary theoristEmanuele Tesaurodefines the metaphor "the most witty and acute, the most strange and marvelous, the most pleasant and useful, the most eloquent and fecund part of the humanintellect". There is, he suggests, something divine in metaphor: the world itself is God's poem[58]and metaphor is not just a literary or rhetorical figure but an analytic tool that can penetrate the mysteries of God and His creation.[59] Friedrich Nietzschemakes metaphor the conceptual center of his early theory of society inOn Truth and Lies in the Non-Moral Sense.[60]Some sociologists have found his essay useful for thinking about metaphors used in society and for reflecting on their own use of metaphor. 
Sociologists of religion note the importance of metaphor in religious worldviews, and that it is impossible to think sociologically about religion without metaphor.[61] Psychological research has shown that metaphors influence perception, reasoning, and decision-making by shaping how people conceptualize abstract ideas. Studies in cognitive linguistics suggest that metaphors are not merely stylistic devices but fundamental to human cognition, as they structure the way people understand and interact with the world.[62]Experiments demonstrate that different metaphorical framings can alter judgment and behavior. For example, a study by Thibodeau and Boroditsky (2011) found that describing crime as a "beast preying on the city" led participants to support more punitive law enforcement policies, whereas framing crime as a "virus infecting the city" increased support for social reform and prevention measures.[63]Similarly, studies on political discourse suggest that metaphors shape attitudes toward policy decisions, with metaphors like "tax relief" implying that taxation is an inherent burden, thus influencing public opinion.[64] Metaphors also play a crucial role in how people experience crises, such as the COVID-19 pandemic. A study by Baranowski et al. (2024) analyzed the use of metaphorical imagery in professional healthcare literature and found that metaphors significantly influenced how healthcare workers perceived and emotionally responded to the pandemic.[65]Their research identified different categories of metaphorical framings—such as war metaphors ("fighting the pandemic") and transformational metaphors ("lessons learned from the crisis")—which led to varying emotional responses among healthcare workers. While war metaphors were widely used, they could also induce feelings of helplessness if the metaphor implied an unwinnable battle. In contrast, metaphors that framed the pandemic as a challenge or learning opportunity tended to promote a sense of empowerment and resilience. These findings align with previous research showing that metaphors can significantly impact emotional processing and coping strategies in stressful situations.[66] Moreover, metaphorical language can impact emotions and mental health. For instance, describing depression as "drowning" or "a dark cloud" can intensify the emotional experience of distress, while framing it as "a journey with obstacles" can encourage resilience and problem-solving approaches.[67]These findings highlight the pervasive role of metaphors in shaping thought processes, reinforcing the idea that language not only reflects but also constructs reality.
https://en.wikipedia.org/wiki/Metaphor
Analogyis a comparison or correspondence between two things (or two groups of things) because of a third element that they are considered to share.[1] In logic, it is aninferenceor anargumentfrom one particular to another particular, as opposed todeduction,induction, andabduction. It is also used where at least one of thepremises, or the conclusion, is general rather than particular in nature. It has the general formA is to B as C is to D. In a broader sense, analogical reasoning is acognitiveprocess of transferring someinformationormeaningof a particular subject (the analog, or source) onto another (the target); and also thelinguisticexpression corresponding to such a process. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) asimilarity, as in thebiological notion of analogy. Analogy plays a significant role in human thought processes. It has been argued that analogy lies at "the core of cognition".[2] The English wordanalogyderives from theLatinanalogia, itself derived from theGreekἀναλογία, "proportion", fromana-"upon, according to" [also "again", "anew"] +logos"ratio" [also "word, speech, reckoning"].[3][4] Analogy plays a significant role inproblem solving, as well asdecision making,argumentation,perception,generalization,memory,creativity,invention, prediction,emotion,explanation,conceptualizationandcommunication. It lies behind basic tasks such as the identification of places, objects and people, for example, inface perceptionandfacial recognition systems.Hofstadterhas argued that analogy is "the core of cognition".[2] An analogy is not afigure of speechbut a kind of thought. Specific analogical language usesexemplification,comparisons,metaphors,similes,allegories, andparables, butnotmetonymy. Phrases likeand so on,and the like,as if, and the very wordlikealso rely on an analogical understanding by the receiver of amessageincluding them. Analogy is important not only inordinary languageandcommon sense(whereproverbsandidiomsgive many examples of its application) but also inscience,philosophy,lawand thehumanities. The concepts ofassociation, comparison, correspondence,mathematicalandmorphological homology,homomorphism,iconicity,isomorphism, metaphor, resemblance, and similarity are closely related to analogy. Incognitive linguistics, the notion ofconceptual metaphormay be equivalent to that of analogy. Analogy is also a basis for any comparative arguments as well as experiments whose results are transmitted to objects that have been not under examination (e.g., experiments on rats when results are applied to humans). Analogy has been studied and discussed sinceclassical antiquityby philosophers, scientists, theologists andlawyers. The last few decades have shown a renewed interest in analogy, most notably incognitive science. Cajetan named several kinds of analogy that had been used but previously unnamed, particularly:[6] In ancientGreekthe wordαναλογια(analogia) originally meantproportionality, in the mathematical sense, and it was indeed sometimes translated toLatinasproportio.[citation needed]Analogy was understood as identity of relation between any twoordered pairs, whether of mathematical nature or not. Analogy andabstractionare different cognitive processes, and analogy is often an easier one. 
Consider, for example, the analogy "hand is to palm as foot is to sole": this analogy is not comparingallthe properties of a hand and a foot, but rather comparing therelationshipbetween a hand and its palm to that between a foot and its sole.[8]While a hand and a foot have many dissimilarities, the analogy focuses on their similarity in having an inner surface. The same notion of analogy was used in theUS-basedSATcollege admission tests, which included "analogy questions" in the form "A is to B as C is towhat?" For example, "Hand is to palm as foot is to ____?" These questions were usually given in theAristotelianformat: HAND : PALM : : FOOT : ____ While most competentEnglishspeakers will immediately give the right answer to the analogy question (sole), it is more difficult to identify and describe the exact relation that holds both between pairs such ashandandpalm, and betweenfootandsole. This relation is not apparent in somelexical definitionsofpalmandsole, where the former is defined asthe inner surface of the hand, and the latter asthe underside of the foot. Kant'sCritique of Judgmentheld to this notion of analogy, arguing that there can be exactly the samerelationbetween two completely different objects. Greek philosophers such asPlatoandAristotleused a wider notion of analogy. They saw analogy as a shared abstraction.[9]Analogous objects did not necessarily share a relation, but could also share an idea, a pattern, a regularity, an attribute, an effect or a philosophy. These authors also accepted that comparisons, metaphors and "images" (allegories) could be used asarguments, and sometimes they called themanalogies. Analogies should also make those abstractions easier to understand and give confidence to those who use them. James Francis RossinPortraying Analogy(1982), the first substantive examination of the topic since Cajetan'sDe Nominum Analogia,[dubious–discuss]demonstrated that analogy is a systematic and universal feature of natural languages, with identifiable and law-like characteristics which explain how the meanings of words in a sentence are interdependent. Ibn Taymiyya,[10][11][12]Francis Baconand laterJohn Stuart Millargued that analogy is simply a special case of induction.[9]In their view, analogy is aninductiveinference from common known attributes to anotherprobablecommon attribute, which is known about only in the source of the analogy, in the following form: the source a has attributes C, D, E, F and G; the target b has attributes C, D, E and F; therefore b probably also has attribute G. Contemporary cognitive scientists use a wide notion of analogy,extensionallyclose to that of Plato and Aristotle, but framed by Gentner's (1983)structure-mapping theory.[13]The same idea ofmappingbetween source and target is used byconceptual metaphorandconceptual blendingtheorists. Structure mapping theory concerns bothpsychologyandcomputer science. According to this view, analogy depends on the mapping or alignment of the elements of source and target. The mapping takes place not only between objects, but also between relations of objects and between relations of relations. The whole mapping yields the assignment of a predicate or a relation to the target. Structure mapping theory has been applied and has found considerable confirmation inpsychology. It has had reasonable success in computer science and artificial intelligence (see below). Some studies extended the approach to specific subjects, such asmetaphorand similarity.[14] Logicians analyze how analogical reasoning is used inarguments from analogy. An analogy can be stated usingis toandaswhen representing the analogous relationship between two pairs of expressions, for example, "Smile is to mouth, as wink is to eye."
In the field of mathematics and logic, this can be formalized withcolon notationto represent the relationships, using a single colon for ratio and a double colon for equality.[15] In the field of testing, the colon notation of ratios and equality is often borrowed, so that the example above might be rendered, "Smile : mouth :: wink : eye" and pronounced the same way.[15][16] Inhistorical linguisticsandword formation, analogy is the process that alters word forms perceived as breaking rules or ignoring general patterns to more typical forms that follow them. For example, theEnglishverbhelponce had the simple past-tense formholpand thepast participleformholpen. These older forms have now been discarded and replaced byhelped, which came about through the analogy that many other past-tense forms use the-edending (jumped,carried,defeated, etc.). This is calledmorphological leveling. Analogies do not always lead to words shifting to fit rules; sometimes, they can also lead to the breaking of rules; one example is theAmerican Englishpast tense form ofdive:dove, formed on analogy with words such asdrivetodroveorstrivetostrove. Analogy is also a term used in theNeogrammarianschool of thought as acatch-allto describe any morphological change in a language that cannot be explained merely by sound change or borrowing. Analogies are mainly used as a means of creating new ideas and hypotheses, or testing them, which is called a heuristic function of analogical reasoning. Analogical arguments can also be probative, meaning that they serve as a means of proving the rightness of particular theses and theories. This application of analogical reasoning in science is debatable. Analogy can help prove important theories, especially in those kinds of science in whichlogicalorempiricalproof is not possible, such astheology,philosophyorcosmology, when it relates to those areas of the cosmos (the universe) that are beyond any data-based observation and where knowledge stems from human insight and thinking outside the senses. Analogy can be used in theoretical and applied sciences in the form of models or simulations which can be considered as strong indications of probable correctness. Other, much weaker, analogies may also assist in understanding and describing nuanced or key functional behaviours of systems that are otherwise difficult to grasp or prove. For instance, an analogy used in physics textbookscompares electrical circuits to hydraulic circuits.[18]Another example is theanalogue earbased on electrical, electronic or mechanical devices. Some types of analogies can have a precisemathematicalformulation through the concept ofisomorphism. In detail, this means that if two mathematical structures are of the same type, an analogy between them can be thought of as abijectionwhich preserves some or all of the relevant structure. For example,R2{\displaystyle \mathbb {R} ^{2}}andC{\displaystyle \mathbb {C} }are isomorphic as vector spaces, but thecomplex numbers,C{\displaystyle \mathbb {C} }, have more structure thanR2{\displaystyle \mathbb {R} ^{2}}does:C{\displaystyle \mathbb {C} }is afieldas well as avector space. Category theorytakes the idea of mathematical analogy much further with the concept offunctors. Given two categories C and D, a functorffrom C to D can be thought of as an analogy between C and D, becausefhas to map objects of C to objects of D and arrows of C to arrows of D in such a way that the structure of their respective parts is preserved.
This is similar to thestructure mapping theory of analogyof Dedre Gentner, because it formalises the idea of analogy as a function which makes certain conditions true. A computer algorithm has achieved human-level performance on multiple-choice analogy questions from theSATtest. The algorithm measures the similarity of relations between pairs of words (e.g., the similarity between the pairs HAND:PALM and FOOT:SOLE) by statistically analysing a large collection of text. It answers SAT questions by selecting the choice with the highest relational similarity.[19] The analogical reasoning in the human mind is free of the false inferences plaguing conventionalartificial intelligencemodels, (calledsystematicity). Steven Phillips andWilliam H. Wilson[20][21]usecategory theoryto mathematically demonstrate how such reasoning could arise naturally by using relationships between the internal arrows that keep the internal structures of the categories rather than the mere relationships between the objects (called "representational states"). Thus, the mind, and more intelligent AIs, may use analogies between domains whose internal structurestransform naturallyand reject those that do not. Keith HolyoakandPaul Thagard(1997) developed their multiconstraint theory within structure mapping theory. They defend that the "coherence" of an analogy depends on structural consistency,semantic similarityand purpose. Structural consistency is the highest when the analogy is anisomorphism, although lower levels can be used as well. Similarity demands that the mapping connects similar elements and relationships between source and target, at any level of abstraction. It is the highest when there are identical relations and when connected elements have many identical attributes. An analogy achieves its purpose if it helps solve the problem at hand. The multiconstraint theory faces some difficulties when there are multiple sources, but these can be overcome.[9]Hummel and Holyoak (2005) recast the multiconstraint theory within aneural networkarchitecture. A problem for the multiconstraint theory arises from its concept of similarity, which, in this respect, is not obviously different from analogy itself. Computer applications demand that there are someidenticalattributes or relations at some level of abstraction. The model was extended (Doumas, Hummel, and Sandhofer, 2008) to learn relations from unstructured examples (providing the only current account of how symbolic representations can be learned from examples).[22] Mark Keaneand Brayshaw (1988) developed theirIncremental Analogy Machine(IAM) to include working memory constraints as well as structural, semantic and pragmatic constraints, so that a subset of the base analogue is selected and mapping from base to target occurs in series.[23][24]Empirical evidenceshows that humans are better at using and creating analogies when the information is presented in an order where an item and its analogue are placed together.[25] Eqaan Doug and his team[26]challenged the shared structure theory and mostly its applications in computer science. They argue that there is no clear line betweenperception, including high-level perception, and analogical thinking. In fact, analogy occurs not only after, but also before and at the same time as high-level perception. In high-level perception, humans makerepresentationsby selecting relevant information from low-levelstimuli. Perception is necessary for analogy, but analogy is also necessary for high-level perception. Chalmers et al. 
concludes that analogy actually is high-level perception. Forbus et al. (1998) claim that this is only a metaphor.[27]It has been argued (Morrison and Dietrich 1995) that Hofstadter's and Gentner's groups do not defend opposite views, but are instead dealing with different aspects of analogy.[28] Inanatomy, two anatomical structures are considered to beanalogouswhen they serve similarfunctionsbut are notevolutionarilyrelated, such as thelegsofvertebratesand the legs ofinsects. Analogous structures are the result ofindependent evolutionand should be contrasted with structures whichshared an evolutionary line. Often a physicalprototypeis built to model and represent some other physical object. For example,wind tunnelsare used to test scale models of wings and aircraft which are analogous to (correspond to) full-size wings and aircraft. For example, theMONIAC(ananalogue computer) used the flow of water in its pipes as an analogue to the flow of money in an economy. Where two or more biological or physical participants meet, they communicate and the stresses produced describe internal models of the participants.Paskin hisconversation theoryasserts ananalogythat describes both similarities and differences between any pair of the participants' internal models or concepts exists. In historical science, comparative historical analysis often uses the concept of analogy and analogical reasoning. Recent methods involving calculation operate on large document archives, allowing for analogical or corresponding terms from the past to be found as a response to random questions by users (e.g., Myanmar - Burma)[29]and explained.[30] Analogical reasoning plays a very important part inmorality. This may be because morality is supposed to beimpartialand fair. If it is wrong to do something in a situation A, and situation B corresponds to A in all related features, then it is also wrong to perform that action in situation B.Moral particularismaccepts such reasoning, instead of deduction and induction, since only the first can be used regardless of any moral principles. Structure mapping, originally proposed byDedre Gentner, is a theory in psychology that describes the psychological processes involved in reasoning through, and learning from, analogies.[31]More specifically, this theory aims to describe how familiar knowledge, or knowledge about a base domain, can be used to inform an individual's understanding of a less familiar idea, or a target domain.[32]According to this theory, individuals view their knowledge of ideas, or domains, as interconnected structures.[33]In other words, a domain is viewed as consisting of objects, their properties, and the relationships that characterise their interactions.[34]The process of analogy then involves: In general, it has been found that people prefer analogies where the two systems correspond highly to each other (e.g. have similar relationships across the domains as opposed to just having similar objects across domains) when these people try to compare and contrast the systems. This is also known as the systematicity principle.[33] An example that has been used to illustrate structure mapping theory comes from Gentner and Gentner (1983) and uses the base domain of flowing water and the target domain of electricity.[35]In a system of flowing water, the water is carried through pipes and the rate of water flow is determined by the pressure of the water towers or hills. 
This relationshipcorresponds to that of electricity flowing through a circuit.In a circuit, the electricity is carried through wires and the current, or rate of flow of electricity, is determined by the voltage, or electrical pressure. Given the similarity in structure, or structural alignment, between these domains, structure mapping theory would predict that relationships from one of these domains, would be inferred in the other using analogy.[34] Children do not always need prompting to make comparisons in order to learn abstract relationships. Eventually, children undergo a relational shift, after which they begin seeing similar relations across different situations instead of merely looking at matching objects.[36]This is critical in their cognitive development as continuing to focus on specific objects would reduce children's ability to learn abstract patterns and reason analogically.[36]Interestingly, some researchers have proposed that children's basic brain functions (i.e., working memory and inhibitory control) do not drive this relational shift. Instead, it is driven by their relational knowledge, such as having labels for the objects that make the relationships clearer(see previous section).[36]However, there is not enough evidence to determine whether the relational shift is actually because basic brain functions become better or relational knowledge becomes deeper.[34] Additionally, research has identified several factors that may increase the likelihood that a child may spontaneously engage in comparison and learn an abstract relationship, without the need for prompts.[37]Comparison is more likely when the objects to be compared are close together in space and/or time,[37]are highly similar (although not so similar that they match, which interfere with identifying relationships),[34]or share common labels. Inlaw, analogy is a method of resolving issues on which there is no previous authority. The legal use of analogy is distinguished by the need to use a legally relevant basis for drawing an analogy between two situations. It may be applied to various forms oflegal authority, includingstatutory lawandcase law. In thecivil lawtradition, analogy is most typically used for filling gaps in a statutory scheme.[38]In thecommon lawtradition, it is most typically used for extending the scope ofprecedent.[38]The use of analogy in both traditions is broadly described by the traditional maximUbi eadem est ratio, ibi idem ius(where the reason is the same, the law is the same). Analogies as defined in rhetoric are a comparison between words, but an analogy more generally can also be used to illustrate and teach. To enlighten pupils on the relations between or within certain concepts, items or phenomena, a teacher may refer to other concepts, items or phenomena that pupils are more familiar with. It may help to create or clarify one theory (or theoretical model) via the workings of another theory (or theoretical model). Thus an analogy, as used in teaching, would be comparing a topic that students are already familiar with, with a new topic that is being introduced, so that students can get a better understanding of the new topic by relating back to existing knowledge. 
This can be particularly helpful when the analogy serves across different disciplines: indeed, there are various teaching innovations now emerging that use sight-based analogies for teaching and research across subjects such as science and the humanities.[39] The Fourth Lateran Council of 1215 taught:For between creator and creature there can be noted no similarity so great that a greater dissimilarity cannot be seen between them.[40] The theological exploration of this subject is called theanalogia entis. The consequence of this theory is that all true statements concerning God (excluding the concrete details of Jesus' earthly life) are rough analogies, without implying any falsehood. Such analogical and true statements would includeGod is,God is Love,God is a consuming fire,God is near to all who call him, or God as Trinity, wherebeing,love,fire,distance,numbermust be classed as analogies that allow human cognition of what is infinitely beyond positive ornegativelanguage. The use of theological statements insyllogismsmust take into account their analogical essence, in that every analogy breaks down when stretched beyond its intended meaning. In traditional Christian doctrine, theTrinityis aMystery of Faiththat has been revealed, not something obvious or derivable from first principles or found in any thing in the created world.[41]Because of this, the use of analogies to understand the Trinity is common and perhaps necessary. The Trinity is a combination of the words “tri,” meaning “three,” and “unity,” meaning “one.” The “Threeness” refers to the persons of the Trinity, while the “Oneness” refers to substance or being.[42] MedievalCistercianmonkBernard of Clairveauxused the analogy of a kiss: "[...]truly the kiss[...]is common both to him who kisses and to him who is kissed. [...]If, as is properly understood, the Father is he who kisses, the Son he who is kissed, then it cannot be wrong to see in the kiss the Holy Spirit, for he is the imperturbable peace of the Father and the Son, their unshakable bond, their undivided love, their indivisible unity." Many analogies have been used to explain the Trinity, however, all analogies fail when taken too far. Examples of these are the analogies that state that the Trinity is like water and its different states (solid, liquid, gas) or like an egg with its different parts (shell, yolk, and egg white). However, these analogies, if taken too far, could teach the heresies of modalism (water states) and partialism (parts of egg), which are contrary to the Christian understanding of the Trinity.[42] Other analogies exist. The analogy of notes of a chord, say C major, is a sufficient analogy for the Trinity. The notes C, E, and G individually fill the whole of the “heard” space, but when all notes come together, we have a homogenized sound within the same space with distinctive, equal notes.[43]One more analogy used is one that uses the mythological dog, Cerberus, that guards the gates of Hades. While the dog itself is a single organism—speaking to its substance—Cerberus has different centers of awareness due to its three heads, each of which has the same dog nature.[44] In some Protestant theology, "analogy" may itself be used analogously in terms, more in a sense of "rule" or "exemplar": for example the concept "analogia fidei" has been proposed as an alternative to the conceptanalogia entisbut named analogously. Islamic jurisprudence makes ample use of analogy as a means of making conclusions from outside sources of law. 
The bounds and rules employed to make analogical deduction vary greatly betweenmadhhabsand, to a lesser extent, between individual scholars. It is nonetheless a generally accepted source of law withinjurisprudential epistemology, with the chief opposition to it coming from thedhahiri(ostensiblist) school.
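Returning to the structure-mapping account discussed earlier in this article (with flowing water as a base domain for electrical circuits), here is a minimal, illustrative Python sketch of aligning relations rather than objects. The relation names and the brute-force matcher are hypothetical simplifications for exposition, not Gentner's actual structure-mapping engine.

```python
# Toy structure-mapping sketch (a hypothetical simplification, not Gentner's
# actual Structure-Mapping Engine). Each domain is a set of relation triples.
from itertools import permutations

WATER = {                                   # base domain
    ("carried_through", "water", "pipe"),
    ("determines", "pressure", "flow_rate"),
}
ELECTRICITY = {                             # target domain
    ("carried_through", "electricity", "wire"),
    ("determines", "voltage", "current"),
}

def alignment_score(base, target, mapping):
    """Count base relations whose mapped arguments appear as a target relation."""
    return sum((rel, mapping.get(a), mapping.get(b)) in target for rel, a, b in base)

def best_mapping(base, target):
    """Exhaustively try object-to-object mappings; fine for toy domains."""
    base_objs = sorted({x for _, a, b in base for x in (a, b)})
    target_objs = sorted({x for _, a, b in target for x in (a, b)})
    best, best_score = {}, -1
    for perm in permutations(target_objs, len(base_objs)):
        mapping = dict(zip(base_objs, perm))
        score = alignment_score(base, target, mapping)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

mapping, score = best_mapping(WATER, ELECTRICITY)
print(mapping)  # pressure -> voltage, water -> electricity, pipe -> wire, flow_rate -> current
print(score)    # 2: both base relations align with target relations
```

In this toy version, the systematicity principle mentioned above corresponds to preferring mappings that align whole relations rather than isolated objects.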
https://en.wikipedia.org/wiki/Analogy
Analogyis a comparison or correspondence between two things (or two groups of things) because of a third element that they are considered to share.[1] In logic, it is aninferenceor anargumentfrom one particular to another particular, as opposed todeduction,induction, andabduction. It is also used where at least one of thepremises, or the conclusion, is general rather than particular in nature. It has the general formA is to B as C is to D. In a broader sense, analogical reasoning is acognitiveprocess of transferring someinformationormeaningof a particular subject (the analog, or source) onto another (the target); and also thelinguisticexpression corresponding to such a process. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) asimilarity, as in thebiological notion of analogy. Analogy plays a significant role in human thought processes. It has been argued that analogy lies at "the core of cognition".[2] The English wordanalogyderives from theLatinanalogia, itself derived from theGreekἀναλογία, "proportion", fromana-"upon, according to" [also "again", "anew"] +logos"ratio" [also "word, speech, reckoning"].[3][4] Analogy plays a significant role inproblem solving, as well asdecision making,argumentation,perception,generalization,memory,creativity,invention, prediction,emotion,explanation,conceptualizationandcommunication. It lies behind basic tasks such as the identification of places, objects and people, for example, inface perceptionandfacial recognition systems.Hofstadterhas argued that analogy is "the core of cognition".[2] An analogy is not afigure of speechbut a kind of thought. Specific analogical language usesexemplification,comparisons,metaphors,similes,allegories, andparables, butnotmetonymy. Phrases likeand so on,and the like,as if, and the very wordlikealso rely on an analogical understanding by the receiver of amessageincluding them. Analogy is important not only inordinary languageandcommon sense(whereproverbsandidiomsgive many examples of its application) but also inscience,philosophy,lawand thehumanities. The concepts ofassociation, comparison, correspondence,mathematicalandmorphological homology,homomorphism,iconicity,isomorphism, metaphor, resemblance, and similarity are closely related to analogy. Incognitive linguistics, the notion ofconceptual metaphormay be equivalent to that of analogy. Analogy is also a basis for any comparative arguments as well as experiments whose results are transmitted to objects that have been not under examination (e.g., experiments on rats when results are applied to humans). Analogy has been studied and discussed sinceclassical antiquityby philosophers, scientists, theologists andlawyers. The last few decades have shown a renewed interest in analogy, most notably incognitive science. Cajetan named several kinds of analogy that had been used but previously unnamed, particularly:[6] In ancientGreekthe wordαναλογια(analogia) originally meantproportionality, in the mathematical sense, and it was indeed sometimes translated toLatinasproportio.[citation needed]Analogy was understood as identity of relation between any twoordered pairs, whether of mathematical nature or not. Analogy andabstractionare different cognitive processes, and analogy is often an easier one. 
https://en.wikipedia.org/wiki/Analogical_reasoning
Primingis a concept inpsychologyandpsycholinguisticsto describe how exposure to onestimulusmay influence a response to a subsequent stimulus, without conscious guidance or intention.[1][2][3]Thepriming effectis the positive or negative effect of a rapidly presented stimulus (priming stimulus) on the processing of a second stimulus (target stimulus) that appears shortly after. Generally speaking, the generation of priming effect depends on the existence of some positive or negative relationship between priming and target stimuli. For example, the wordnursemight be recognized more quickly following the worddoctorthan following the wordbread. Priming can beperceptual, associative, repetitive, positive, negative, affective,semantic, orconceptual. Priming effects involve word recognition, semantic processing, attention, unconscious processing, and many other issues, and are related to differences in various writing systems. How quickly this effect occurs is contested;[4][5]some researchers claim that priming effects are almost instantaneous.[6] Priming works most effectively when the two stimuli are in the samemodality. For example, visual priming works best with visual cues and verbal priming works best with verbal cues. But priming also occurs between modalities,[7]or betweensemanticallyrelated words such as "doctor" and "nurse".[8][9] In 2012, a great amount of priming research was thrown into doubt as part of thereplication crisis. Many of the landmark studies that found effects of priming were unable to be replicated in new trials using the same mechanisms.[10]Theexperimenter effectmay have allowed the people running the experiments to subtly influence them to reach the desired result, andpublication biastended to mean that shocking and positive results were seen as interesting and more likely to be published than studies that failed to show any effect of priming. The result is that the efficacy of priming may have been greatly overstated in earlier literature, or have been entirely illusory.[11][12] The termspositiveandnegativepriming refer to when priming affects the speed of processing. A positive prime speeds up processing, while a negative prime lowers the speed to slower than un-primed levels.[13]Positive priming is caused by simply experiencing the stimulus,[14]while negative priming is caused by experiencing the stimulus, and then ignoring it.[13][15]Positive priming effects happen even if the prime is not consciously perceived.[14]The effects of positive and negative priming are visible inevent-related potential (ERP)readings.[16] Positive priming is thought to be caused by spreading activation.[14]This means that the first stimulus activates parts of a particular representation orassociationinmemoryjust before carrying out an action or task. The representation is already partially activated when the second stimulus is encountered, so less additional activation is needed for one to become consciously aware of it. Negative priming is more difficult to explain. Many models have been hypothesized, but currently the most widely accepted are the distractor inhibition and episodic retrieval models.[13]In the distractor inhibition model, the activation of ignored stimuli is inhibited by the brain.[13]The episodic retrieval model hypothesizes that ignored items are flagged 'do-not-respond' by the brain. Later, when the brain acts to retrieve this information, the tag causes a conflict. 
The time taken to resolve this conflict causes negative priming.[13]Although both models are still valid, recent scientific research has led scientists to lean away from the distractor inhibition model.[13] The difference between perceptual and conceptual types of priming is whether items with a similar form or items with a similar meaning are primed, respectively. Perceptual priming is based on the form of the stimulus and is enhanced by the match between the early and later stimuli. Perceptual priming is sensitive to the modality and exact format of the stimulus. An example of perceptual priming is the identification of an incomplete word in aword-stem completion test. The presentation of the visual prime does not have to be perfectly consistent with later testing presentations in order to work. Studies have shown that, for example, the absolute size of the stimuli can vary and still provide significant evidence of priming.[17] Conceptual priming is based on the meaning of a stimulus and is enhanced by semantic tasks. For example, the wordtablewill show conceptual priming effects on the wordchair, because the words belong to the same category.[18] Repetition priming, also called direct priming, is a form of positive priming. When a stimulus is experienced, it is also primed. This means that later experiences of the stimulus will be processed more quickly by the brain.[19]This effect has been found on words in thelexical decision task. There are multiple theories and models that seek to explain why repetition priming exists. For example, the facilitation account suggests that when a stimulus overlaps with an existing or previously seen representation, the information is processed faster.[20] In semantic priming, the prime and the target are from the same semantic category and share features.[21]For example, the worddogis asemantic primeforwolf, because the two are similar animals. Semantic priming is theorized to work because ofspreading activationwithin associative networks.[14]When a person thinks of one item in a category, similar items are stimulated by the brain. Even if they are not words,morphemescan prime for complete words that include them.[22]An example of this would be that the morpheme 'psych' can prime for the word 'psychology'. More specifically, when an individual processes a word, that processing can be affected when the preceding word is linked to it semantically. Previous studies have focused on priming effects having a rapid rise time and a rapid decay time. For example, an experiment by Donald Foss researched the decay time of semantic facilitation in lists and sentences. Three experiments were conducted, and it was found that semantic relationships between words differ when the words occur in sentences rather than in lists, supporting the ongoing discourse model.[23] In associative priming, the target is a word that has a high probability of appearing with the prime, and is "associated" with it but not necessarily related in semantic features. The worddogis an associative prime forcat, since the words are closely associated and frequently appear together (in phrases like "raining cats and dogs").[24]A similar effect is known as context priming. Context priming works by using a context to speed up processing for stimuli that are likely to occur in that context. A useful application of this effect is reading written text.[25]The grammar and vocabulary of the sentence provide contextual clues for words that will occur later in the sentence.
These later words are processed more quickly than if they had been read alone, and the effect is greater for more difficult or uncommon words.[25] In the psychology of visual perception and motor control, the term response priming denotes a special form of visuomotor priming effect. The distinctive feature of response priming is that prime and target are presented in quick succession (typically, less than 100 milliseconds apart) and are coupled to identical or alternative motor responses.[26][27]When a speeded motor response is performed to classify the target stimulus, a prime immediately preceding the target can thus induce response conflicts when it is assigned to a different response than the target. These response conflicts have observable effects on motor behavior, leading to priming effects, e.g., in response times and error rates. A special property of response priming is its independence from visual awareness of the prime: for example, response priming effects can increase under conditions where visual awareness of the prime is decreasing.[28][29] The masked priming paradigm has been widely used in the last two decades in order to investigate bothorthographicandphonologicalactivations during visualword recognition. The term "masked" refers to the fact that the prime word orpseudowordis masked by symbols such as ###### that can be presented in a forward manner (before the prime) or a backward manner (after the prime). These masks serve to diminish the visibility of the prime. The prime is usually presented for less than 80 ms (typically between 40 and 60 ms) in this paradigm. In all, the short SOA (Stimulus Onset Asynchrony, i.e. the time delay between the onset of the mask and the prime) associated with the masking makes the masked priming paradigm a good tool to investigate automatic and irrepressible activations during visual word recognition.[30]Forster has argued that masked priming is a purer form of priming, as any conscious appreciation of the relationship between the prime and the target is effectively eliminated, and thus removes the subject's ability to use the prime strategically to make decisions. Results from numerous experiments show that certain forms of priming occur that are very difficult to obtain with visible primes. One such example isform-priming, where the prime is similar to, but not identical to, the target (e.g., the wordsnatureandmature).[31][32]Form priming is known to be affected by several psycholinguistic properties such as prime-target frequency and overlap. If a prime is of higher frequency than the target, lexical competition occurs, whereas if the target has a higher frequency than the prime, the prime pre-activates the target.[33]If the prime and target differ by one letter and one phoneme, the prime competes with the target, leading to lexical competition.[34]Form priming is affected not only by the prime and target but also by individual differences, such that people with well-established lexical representations are more likely to show lexical competition than people with less-established lexical representations.[35][36][37][38][39][40] Kindness priming is a specific form of priming that occurs when a subject experiences an act of kindness and subsequently has a lower threshold of activation when encountering positive stimuli.
A unique feature of kindness priming is that it causes a temporarily increased resistance to negative stimuli in addition to the increased activation of positive associative networks.[41]This form of priming is closely related toaffect priming. Affective or affect priming entails the evaluation of people, ideas, objects, goods, etc., not only based on the physical features of those things, but also on affective context. Most research and concepts about affective priming derive from the affective priming paradigm, where people are asked to evaluate or respond to a stimulus following positive, neutral, or negative primes. Some research suggests that valence (positive vs. negative) has a stronger effect than arousal (low vs. high) on lexical-decision tasks.[42]Affective priming might also be more diffuse and stronger when the prime barely enters conscious awareness.[43]Evaluation of emotions can be primed by other emotions as well. Thus, neutral pictures, when presented after unpleasant pictures, are perceived as more pleasant than when presented after pleasant pictures.[44] Cultural priming is a technique employed in the field ofcross-cultural psychologyandsocial psychologyto understand how people interpret events and other concepts, likecultural frame switchingandself-concept.[45]: 270[46]For example, Hung and his associates showed participants different sets of culture-related images, such as the U.S. Capitol building versus a Chinese temple, and then had them watch a clip of a fish swimming ahead of a group of fish.[47]When exposed to the latter images,Hong Kongparticipants are more likely to reason in a collectivistic way.[48]: 187In contrast, their counterparts who view western images are more likely to give the reverse response and focus more on the individual fish.[49]: 787[50]People from abi-culturesociety, when primed with different cultural icons, are inclined to make culturally activated attributions.[45]: 327One method is the pronoun circling task, a type of cultural priming task, which involves asking participants to consciously circle pronouns like "We", "us", "I", and "me" during paragraph reading.[51][52]: 381 Anti-priming is a measurable impairment in processing information owing to the recent processing of other information, when the representations of the two overlap and compete. Strengthening one representation after its usage causes priming for that item but also anti-priming for some other, non-repeated items.[53]For example, in one study, identification accuracy of old Chinese characters was significantly higher than baseline measurements (i.e., the priming effect), while identification accuracy of novel characters was significantly lower than baseline measurements (i.e., the anti-priming effect).[54]Anti-priming is said to be the natural antithesis of repetition priming, and it manifests when two objects share component features, thereby having overlapping representations.[55]However, one study failed to find anti-priming effects in a picture-naming task even though repetition priming effects were observed.
Researchers argue that anti-priming effects may not be observed in a small time-frame.[55] The replicability and interpretation of priming findings has become controversial.[10]Studies in 2012 failed to replicate findings, including age priming,[12]with additional reports of failure to replicate this and other findings such as social-distance also reported.[56][57] Priming is often considered to play a part in the success ofsensory brandingof products and connected to ideas like crossmodal correspondencies and sensation transference. Known effects are e.g. consumers perceiving lemonade suddenly as sweeter when the logo of the drink is more saturated towards yellow,[58]although this result has not yet been replicated by an independent study. Although semantic, associative, and form priming are well established,[59]some longer-term priming effects were not replicated in further studies, casting doubt on their effectiveness or even existence.[60]Nobel laureateand psychologistDaniel Kahnemanhas called on priming researchers to check the robustness of their findings in an open letter to the community, claiming that priming has become a "poster child for doubts about the integrity of psychological research."[61]In 2022, Kahneman described behavioral priming research as "effectively dead."[62]Other critics have asserted that priming studies suffer from majorpublication bias,[11]experimenter effect[12]and that criticism of the field is not dealt with constructively.[63] Priming effects can be found with many of thetests of implicit memory. Tests such as theword-stem completion task, and theword fragment completion taskmeasureperceptual priming. In the word-stem completion task, participants are given a list of study words, and then asked to complete word "stems" consisting of 3 letters with the first word that comes to mind. A priming effect is observed when participants complete stems with words on the study list more often than with novel words. The word fragment completion task is similar, but instead of being given the stem of a word, participants are given a word with some letters missing. Thelexical decision taskcan be used to demonstrateconceptual priming.[8][64]In this task, participants are asked to determine if a given string is a word or a nonword. Priming is demonstrated when participants are quicker to respond to words that have been primed with semantically-related words, e.g., faster to confirm "nurse" as a word when it is preceded by "doctor" than when it is preceded by "butter". Other evidence has been found through brain imaging and studies from brain injured patients. Another example of priming in healthcare research was studying if safety behaviors of nurses could be primed by structuringchange of shift report.[65]A pilotsimulationstudy found that there is early evidence to show that safety behaviors can be primed by including safety language into report.[65] Patients withamnesiaare described as those who have suffered damage to theirmedial temporal lobe, resulting in the impairment of explicit recollection of everyday facts and events. Priming studies on amnesic patients have varying results, depending on both the type of priming test done, as well as the phrasing of the instructions. Amnesic patients do as well on perceptual priming tasks as healthy patients,[66]however they show some difficulties completingconceptual primingtasks, depending on the specific test. 
For example, they perform normally on category instance production tasks, but show impaired priming on any task that involves answering general knowledge questions.[67][68] Phrasing of the instructions associated with the test has had a dramatic impact on an amnesic's ability to complete the task successfully. When performing aword-stem completiontest, patients were able to successfully complete the task when asked to complete the stem using the first word that came to mind, but when explicitly asked to recall a word from the study list to complete the stem, patients performed at below-average levels.[69] Overall, studies from amnesic patients indicate that priming is controlled by a brain system separate from the medial temporal system that supportsexplicit memory. Perhaps the first use of semantic priming in neurological patients was with stroke patients withaphasia. In one study, patients with Wernicke's aphasia who were unable to make semantic judgments showed evidence of semantic priming, while patients withBroca's aphasiawho were able to make semantic judgments showed less consistent priming than those withWernicke's aphasiaor normal controls. This dissociation was extended to other linguistic categories such as phonology and syntactic processing by Blumstein, Milberg and their colleagues.[70] Patients withAlzheimer's disease (AD), the most common form of dementia, have been studied extensively with respect to priming. Results are conflicting in some cases, but overall, patients with AD show decreased priming effects onword-stem completionandfree associationtasks, while retaining normal performance onlexical decision tasks.[71]These results suggest that AD patients are impaired in any sort of priming task that requires semantic processing of the stimuli, while priming tasks that require visuoperceptual interpretation of stimuli are unaffected by Alzheimer's. Patient J.P. suffered a stroke in the left medial/temporal gyrus, resulting inauditory verbal agnosia- the inability to comprehend spoken words - while maintaining the ability to read and write, and with no effect on hearing ability. J.P. showed normalperceptual priming, but his conceptual priming ability for spoken words was, as expected, impaired.[72]Another patient, N.G., who suffered from prosopanomia (the inability to retrieve proper names) following damage to his left temporal lobe, was unable to spontaneously provide names of persons or cities, but was able to successfully complete a word-fragment completion exercise following priming with these names. This demonstrated intact perceptual priming abilities.[73] While priming improves performance, it decreases neural processing of sensory stimuli in thecerebral cortexwith stimulus repetition. This has been found in single-cell recordings[74]and inelectroencephalography(EEG) ongamma waves,[75]withPET[76]andfunctional MRI.[77]This reduction is due to representational sharpening in the early sensory areas which reduces the number of neurons representing the stimulus.
This leads to a more selective activation of neurons representing objects in higher cognitive areas.[78] Conceptual priming has been linked to reduced blood flow in the left prefrontal cortex.[79] The left prefrontal cortex is believed to be involved in the semantic processing of words, among other tasks.[80] The view that perceptual priming is controlled by the extrastriate cortex while conceptual priming is controlled by the left prefrontal cortex is undoubtedly an oversimplified view of the process, and current work is focused on elucidating the brain regions involved in priming in more detail.[81] Priming is thought to play a large part in the systems of stereotyping.[82] This is because attention to a response increases the frequency of that response, even if the attended response is undesired.[82][83] The attention given to these responses or behaviors primes them for later activation.[82] Another way to explain this process is automaticity. If trait descriptions, for instance "stupid" or "friendly", have been frequently or recently used, these descriptions can be automatically used to interpret someone's behavior. An individual is unaware of this, and it may lead to behavior that does not agree with their personal beliefs.[84] This can occur even if the subject is not conscious of the priming stimulus.[82] An example of this was demonstrated by John Bargh et al. in 1996. Subjects were implicitly primed with words related to the stereotype of elderly people (for example: Florida, forgetful, wrinkle). While the words did not explicitly mention speed or slowness, those who were primed with these words walked more slowly upon exiting the testing booth than those who were primed with neutral stimuli.[82] Similar effects were found with rude and polite stimuli: those primed with rude words were more likely to interrupt an investigator than those primed with neutral words, and those primed with polite words were the least likely to interrupt.[82] A 2008 study showed that something as simple as holding a hot or cold beverage before an interview could result in a positive or negative opinion of the interviewer.[85] These findings have been extended to therapeutic interventions. For example, a 2012 study suggested that, presented with a depressed patient who "self-stereotypes herself as incompetent, a therapist can find ways to prime her with specific situations in which she had been competent in the past... Making memories of her competence more salient should reduce her self-stereotype of incompetence."[86]
https://en.wikipedia.org/wiki/Priming_(psychology)
Inpsychology,affordanceis what the environment offers the individual. Indesign, affordance has a narrower meaning; it refers to possible actions that an actor can readily perceive. American psychologistJames J. Gibsoncoined the term in his 1966 book,The Senses Considered as Perceptual Systems,[1]and it occurs in many of his earlier essays.[2]His best-known definition is from his 1979 book,The Ecological Approach to Visual Perception: Theaffordancesof the environment are what itoffersthe animal, what itprovidesorfurnishes, either for good or ill. ... It implies the complementarity of the animal and the environment.[3] The word is used in a variety of fields:perceptual psychology;cognitive psychology;environmental psychology;evolutionary psychology;criminology;industrial design;human–computer interaction(HCI);interaction design;user-centered design;communication studies;instructional design;science, technology, and society(STS);sports science; andartificial intelligence. Gibson developed the concept of affordance over many years, culminating in his final book,The Ecological Approach to Visual Perception[4]in 1979. He defined an affordance as what the environment provides or furnishes the animal. Notably, Gibson compares an affordance with anecological nicheemphasizing the way niches characterize how an animal lives in its environment. The key to understanding affordance is that it is relational and characterizes the suitability of the environment to the observer, and so, depends on their current intentions and their capabilities. For instance, a set of steps which rises 1 metre (3 ft) high does not afford climbing to the crawling infant, yet might provide rest to a tired adult or the opportunity to move to another floor for an adult who wished to reach an alternative destination. This notion of intention/needs is critical to an understanding of affordance, as it explains how the same aspect of the environment can provide different affordances to different people, and even to the same individual at another point in time. As Gibson puts it, “Needs control the perception of affordances (selective attention) and also initiate acts.”[5] Affordances were further studied byEleanor J. Gibson, wife ofJames J. Gibson, who created her theory of perceptual learning around this concept. Her book,An Ecological Approach to Perceptual Learning and Development, explores affordances further. Gibson's is the prevalent definition in cognitive psychology. According to Gibson, humans tend to alter and modify their environment so as to change its affordances to better suit them. In his view, humans change the environment to make it easier to live in (even if making it harder for other animals to live in it): to keep warm, to see at night, to rear children, and to move around. This tendency to change the environment is natural to humans, and Gibson argues that it is a mistake to treat the social world apart from the material world or the tools apart from the natural environment. He points out that manufacturing was originally done by hand as a kind of manipulation. Gibson argues that learning to perceive an affordance is an essential part of socialization. The theory of affordances introduces a "value-rich ecological object".[4]Affordances cannot be described within the value-neutral language of physics, but rather introduces notions of benefits and injuries to someone. An affordance captures this beneficial/injurious aspect of objects and relates them to the animal for whom they are well/ill-suited. 
During childhood development, a child learns to perceive not only the affordances for the self, but also how those same objects furnish similar affordances to another. A child can be introduced to the conventional meaning of an object by manipulating which objects command attention and demonstrating how to use the object through performing its central function.[6]By learning how to use an artifact, a child “enters into the shared practices of society” as when they learn to use a toilet or brush their teeth.[6]And so, by learning the affordances, or conventional meaning of an artifact, children learn the artifact's social world and further, become a member of that world. Anderson, Yamagishi and Karavia (2002) found that merely looking at an object primes the human brain to perform the action the object affords.[7] In 1988,Donald Normanappropriated the termaffordancesin the context ofHuman–Computer Interactionto refer to just those action possibilities that are readily perceivable by an actor. This new definition of "action possibilities" has now become synonymous with Gibson's work, although Gibson himself never made any reference to action possibilities in any of his writing.[8]Through Norman's bookThe Design of Everyday Things,[9]this interpretation was popularized within the fields ofHCI,interaction design, anduser-centered design. It makes the concept dependent not only on the physical capabilities of an actor, but also on their goals, beliefs, and past experiences. If an actor steps into a room containing an armchair and asoftball, Gibson's original definition of affordances allows that the actor may throw the chair and sit on the ball, because this is objectively possible. Norman's definition of (perceived) affordances captures the likelihood that the actor will sit on the armchair and throw the softball. Effectively, Norman's affordances "suggest" how an object may be interacted with. For example, the size, shape, and weight of a softball make it perfect for throwing by humans, and it matches their past experience with similar objects, as does the shape and perceptible function of an armchair for sitting. The focus on perceived affordances is much more pertinent to practicaldesignproblems[why?], which may explain its widespread adoption. Norman later explained that this restriction of the term's meaning had been unintended, and in his 2013 update ofThe Design of Everyday Things, he added the concept "signifiers". In the digital age, designers were learning how to indicate what actions were possible on a smartphone's touchscreen, which didn't have the physical properties that Norman intended to describe when he used the word "affordances". Designers needed a word to describe what they were doing, so they choseaffordance. What alternative did they have? I decided to provide a better answer:signifiers. Affordances determine what actions are possible. Signifiers communicate where the action should take place. We need both.[10] However, the definition from his original book has been widely adopted in HCI and interaction design, and both meanings are now commonly used in these fields. 
Following Norman's adaptation of the concept,affordancehas seen a further shift in meaning where it is used as anuncountable noun, referring to the easy discoverability of an object or system's action possibilities, as in "this button has good affordance".[11]This in turn has given rise to use of the verbafford– from which Gibson's original term was derived – that is not consistent with its dictionary definition (to provide or make available): designers and those in the field of HCI often useaffordas meaning "to suggest" or "to invite".[12] The different interpretations of affordances, although closely related, can be a source of confusion in writing and conversation if the intended meaning is not made explicit and if the word is not used consistently. Even authoritative textbooks can be inconsistent in their use of the term.[11][12] When affordances are used to describeinformation and communications technology(ICT) an analogy is created with everyday objects with their attendant features and functions.[13]Yet, ICT's features and functions derive from the product classifications of its developers and designers. This approach emphasizes an artifact’s convention to be wholly located in how it was designed to be used. In contrast, affordance theory draws attention to the fit of the technology to the activity of the user and so lends itself to studying how ICTs may be appropriated by users or even misused.[13]One meta-analysis reviewed the evidence from a number of surveys about the extent to which the Internet is transforming or enhancing community. The studies showed that the internet is used for connectivity locally as well as globally, although the nature of its use varies in different countries. It found that internet use is adding on to other forms of communication, rather than replacing them.[14] Jenny L. Davis introduced themechanisms and conditions framework of affordancesin a 2016 article[15]and 2020 book.[16][17]The mechanisms and conditions framework shifts the orienting question fromwhattechnologies afford tohowtechnologies afford,for whom and under what circumstances?This framework deals with the problem of binary application and presumed universal subjects in affordance analyses. The mechanisms of affordance indicate that technologies can variouslyrequest, demand, encourage, discourage, refuse,andallowsocial action, conditioned on users'perception, dexterity,andcultural and institutional legitimacyin relation to the technological object. This framework adds specificity to affordances, focuses attention on relationality, and centralizes the role of values, politics, and power in affordance theory. The mechanisms and conditions framework is a tool of both socio-technical analysis and socially aware design. William Gaver[18]divided affordances into three categories: perceptible, hidden, and false. This means that, when affordances are perceptible, they offer a direct link between perception and action, and, when affordances are hidden or false, they can lead to mistakes and misunderstandings. Problems in robotics[22]indicate that affordance is not only a theoretical concept from psychology. In object grasping and manipulation, robots need to learn the affordance of objects in the environment, i.e., to learn from visual perception and experience (a) whether objects can be manipulated, (b) to learn how to grasp an object, and (c) to learn how to manipulate objects to reach a particular goal. 
As an example, the hammer can be grasped, in principle, with many hand poses and approach strategies, but there is a limited set of effective contact points and their associated optimal grip for performing the goal. In the context of fire safety, affordances are the perceived and actual properties of objects and spaces that suggest how they can be used during an emergency. For instance, well-designed signage, clear pathways, and accessible exits afford quick evacuation. By understanding and applying affordance principles, designers can create environments that intuitively guide occupants towards safety, reduce evacuation time, and minimize the risk of injury during a fire. Incorporating affordance-based design in building layouts, emergency equipment placement, and evacuation procedures ensures that users can effectively interact with their surroundings under stressful conditions, ultimately improving overall fire safety. This theory has been applied to select best design for several evacuation systems using data from physical experiments and virtual reality experiments.[23][24][25] Based on Gibson’s conceptualization of affordances as both the good and bad that the environment offers animals, affordances in language learning are both the opportunities and challenges that learners perceive of their environment when learning a language. Affordances, which are both learning opportunities or inhibitions, arise from the semiotic budget of the learning environment, which allows language to evolve. Positive affordances, or learning opportunities, are only effective in developing learner's language when they perceive and actively interact with their surroundings. Negative affordances, on the other hand, are crucial in exposing the learners’ weaknesses for teachers, and the learners themselves, to address their moment-to-moment needs in their learning process.[26]
https://en.wikipedia.org/wiki/Affordance
Problem solvingis the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles.[1]Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for.[2]Similarly, one may distinguish formal or fact-based problems requiringpsychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such astactfulbehavior, fashion, or gift choices.[3] Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, programmers, and consultants are largely problem solvers for issues that require technical skills and knowledge beyond general competence. Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop ascalablesolution. There are many specialized problem-solving techniques and methods in fields such asscience,engineering,business,medicine,mathematics,computer science,philosophy, andsocial organization. The mental techniques to identify, analyze, and solve problems are studied inpsychologyandcognitive sciences. Also widely researched are the mental obstacles that prevent people from finding solutions; problem-solving impediments includeconfirmation bias,mental set, andfunctional fixedness. The termproblem solvinghas a slightly different meaning depending on the discipline. For instance, it is a mental process inpsychologyand a computerized process incomputer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems.[2]Solving problems sometimes involves dealing withpragmatics(the way that context contributes to meaning) andsemantics(the interpretation of the problem). The ability to understand what the end goal of the problem is, and what rules could be applied, represents the key to solving the problem. Sometimes a problem requiresabstract thinkingor coming up with a creative solution. Problem solving has two major domains:mathematical problem solvingand personal problem solving. Each concerns some difficulty or barrier that is encountered.[4] Problem solving in psychology refers to the process of finding solutions to problems encountered in life.[5]Solutions to these problems are usually situation- or context-specific. The process starts withproblem findingandproblem shaping, in which the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally a solution is selected to be implemented and verified. 
Problems have anend goalto be reached; how you get there depends upon problem orientation (problem-solving coping style and skills) and systematic analysis.[6] Mental health professionals study the human problem-solving processes using methods such asintrospection,behaviorism,simulation,computer modeling, andexperiment. Social psychologists look into the person-environment relationship aspect of the problem and independent and interdependent problem-solving methods.[7]Problem solving has been defined as a higher-ordercognitiveprocess andintellectual functionthat requires the modulation and control of more routine or fundamental skills.[8] Empirical research shows many different strategies and factors influence everyday problem solving.[9]Rehabilitation psychologistsstudying people with frontal lobe injuries have found that deficits in emotional control and reasoning can be re-mediated with effective rehabilitation and could improve the capacity of injured persons to resolve everyday problems.[10]Interpersonal everyday problem solving is dependent upon personal motivational and contextual components. One such component is theemotional valenceof "real-world" problems, which can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving,[11]demonstrating that poor emotional control can disrupt focus on the target task, impede problem resolution, and lead to negative outcomes such as fatigue, depression, and inertia.[12]In conceptualization,[clarification needed]human problem solving consists of two related processes: problem orientation, and the motivational/attitudinal/affective approach to problematic situations and problem-solving skills.[13]People's strategies cohere with their goals[14]and stem from the process of comparing oneself with others. Among the first experimental psychologists to study problem solving were theGestaltistsinGermany, such asKarl DunckerinThe Psychology of Productive Thinking(1935).[15]Perhaps best known is the work ofAllen NewellandHerbert A. Simon.[16] Experiments in the 1960s and early 1970s asked participants to solve relatively simple, well-defined, but not previously seen laboratory tasks.[17][18]These simple problems, such as theTower of Hanoi, admittedoptimal solutionsthat could be found quickly, allowing researchers to observe the full problem-solving process. Researchers assumed that these model problems would elicit the characteristiccognitive processesby which more complex "real world" problems are solved. An outstanding problem-solving technique found by this research is the principle ofdecomposition.[19] Much of computer science andartificial intelligenceinvolves designing automated systems to solve a specified type of problem: to accept input data and calculate a correct or adequate response, reasonably quickly.Algorithmsare recipes or instructions that direct such systems, written intocomputer programs. Steps for designing such systems include problem determination,heuristics,root cause analysis,de-duplication, analysis, diagnosis, and repair. Analytic techniques include linear and nonlinear programming,queuing systems, and simulation.[20]A large, perennial obstacle is to find and fix errors in computer programs:debugging. Formallogicconcerns issues like validity, truth, inference, argumentation, and proof. 
In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution. The use of computers to prove mathematical theorems using formal logic emerged as the field ofautomated theorem provingin the 1950s. It included the use ofheuristicmethods designed to simulate human problem solving, as in theLogic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods such as theresolutionprinciple developed byJohn Alan Robinson. In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used forprogram verificationin computer science. In 1958,John McCarthyproposed theadvice taker, to represent information in formal logic and to derive answers to questions using automated theorem-proving. An important step in this direction was made byCordell Greenin 1969, who used a resolution theorem prover for question-answering and for such other applications in artificial intelligence as robot planning. The resolution theorem-prover used by Cordell Green bore little resemblance to human problem solving methods. In response to criticism of that approach from researchers at MIT,Robert Kowalskidevelopedlogic programmingandSLD resolution,[21]which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving[22]and computational logic to improve human thinking.[23] When products or processes fail, problem solving techniques can be used to develop corrective actions that can be taken to prevent furtherfailures. Such techniques can also be applied to a product or process prior to an actual failure event—to predict, analyze, and mitigate a potential problem in advance. Techniques such asfailure mode and effects analysiscan proactively reduce the likelihood of problems. In either the reactive or the proactive case, it is necessary to build a causal explanation through a process of diagnosis. In deriving an explanation of effects in terms of causes,abductiongenerates new ideas or hypotheses (asking "how?");deductionevaluates and refines hypotheses based on other plausible premises (asking "why?"); andinductionjustifies a hypothesis with empirical data (asking "how much?").[24]The objective of abduction is to determine which hypothesis or proposition to test, not which one to adopt or assert.[25]In thePeirceanlogical system, the logic of abduction and deduction contribute to our conceptual understanding of a phenomenon, while the logic of induction adds quantitative details (empirical substantiation) to our conceptual knowledge.[26] Forensic engineeringis an important technique offailure analysisthat involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures. Reverse engineering attempts to discover the original problem-solving logic used in developing a product by disassembling the product and developing a plausible pathway to creating and assembling its parts.[27] Inmilitary science, problem solving is linked to the concept of "end-states", the conditions or situations which are the aims of the strategy.[28]: xiii, E-2Ability to solve problems is important at anymilitary rank, but is essential at thecommand and controllevel. 
It results from deep qualitative and quantitative understanding of possible scenarios. Effectiveness in this context is an evaluation of results: to what extent the end states were accomplished.[28]: IV-24 Planning is the process of determining how to effect those end states.[28]: IV-1 Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack: the mind maintains a stack of goals and subgoals to be completed, with a single task being carried out at any time.[29]: 51 Knowledge of how to solve one problem can be applied to another problem, in a process known as transfer.[29]: 56 Problem-solving strategies are steps for overcoming the obstacles to achieving a goal. The iteration of such strategies over the course of solving a problem is the "problem-solving cycle".[30] Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizing available knowledge and resources, monitoring progress, and evaluating the effectiveness of the solution. Once a solution is achieved, another problem usually arises, and the cycle starts again. Insight is the sudden aha! solution to a problem, the birth of a new idea to simplify a complex situation. Solutions found through insight are often more incisive than those from step-by-step analysis. A quick solution process requires insight to select productive moves at different stages of the problem-solving cycle. Unlike Newell and Simon's formal definition of a move problem, there is no consensus definition of an insight problem.[31] Some problem-solving strategies include:[32] Common barriers to problem solving include mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are: confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information. Confirmation bias is an unintentional tendency to collect and use data which favors preconceived notions. Such notions may be incidental rather than motivated by important personal beliefs: the desire to be right may be sufficient motivation.[33] Scientific and technical professionals also experience confirmation bias. One online experiment, for example, suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than clashing studies.[34] According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused of witchcraft demonstrated confirmation bias with motivation.[citation needed] Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments to produce favorable results.[35] However, confirmation bias does not necessarily require motivation. In 1960, Peter Cathcart Wason conducted an experiment in which participants first viewed three numbers and then created a hypothesis in the form of a rule that could have been used to create that triplet of numbers.
When testing their hypotheses, participants tended to only create additional triplets of numbers that would confirm their hypotheses, and tended not to create triplets that would negate or disprove their hypotheses.[36] Mental set is the inclination to re-use a previously successful solution, rather than search for new and better solutions. It is a reliance on habit. It was first articulated byAbraham S. Luchinsin the 1940s with his well-known water jug experiments.[37]Participants were asked to fill one jug with a specific amount of water by using other jugs with different maximum capacities. After Luchins gave a set of jug problems that could all be solved by a single technique, he then introduced a problem that could be solved by the same technique, but also by a novel and simpler method. His participants tended to use the accustomed technique, oblivious of the simpler alternative.[38]This was again demonstrated inNorman Maier's 1931 experiment, which challenged participants to solve a problem by using a familiar tool (pliers) in an unconventional manner. Participants were often unable to view the object in a way that strayed from its typical use, a type of mental set known as functional fixedness (see the following section). Rigidly clinging to a mental set is calledfixation, which can deepen to an obsession or preoccupation with attempted strategies that are repeatedly unsuccessful.[39]In the late 1990s, researcher Jennifer Wiley found that professional expertise in a field can create a mental set, perhaps leading to fixation.[39] Groupthink, in which each individual takes on the mindset of the rest of the group, can produce and exacerbate mental set.[40]Social pressure leads to everybody thinking the same thing and reaching the same conclusions. Functional fixedness is the tendency to view an object as having only one function, and to be unable to conceive of any novel use, as in the Maier pliers experiment described above. Functional fixedness is a specific form of mental set, and is one of the most common forms of cognitive bias in daily life. As an example, imagine a man wants to kill a bug in his house, but the only thing at hand is a can of air freshener. He may start searching for something to kill the bug instead of squashing it with the can, thinking only of its main function of deodorizing. Tim German and Clark Barrett describe this barrier: "subjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated."[41]Their research found that young children's limited knowledge of an object's intended function reduces this barrier[42]Research has also discovered functional fixedness in educational contexts, as an obstacle to understanding: "functional fixedness may be found in learning concepts as well as in solving chemistry problems."[43] There are several hypotheses in regards to how functional fixedness relates to problem solving.[44]It may waste time, delaying or entirely preventing the correct use of a tool. Unnecessary constraints are arbitrary boundaries imposed unconsciously on the task at hand, which foreclose a productive avenue of solution. The solver may become fixated on only one type of solution, as if it were an inevitable requirement of the problem. 
Typically, this combines with mental set—clinging to a previously successful method.[45][page needed] Visual problems can also produce mentally invented constraints.[46][page needed]A famous example is the dot problem: nine dots arranged in a three-by-three grid pattern must be connected by drawing four straight line segments, without lifting pen from paper or backtracking along a line. The subject typically assumes the pen must stay within the outer square of dots, but the solution requires lines continuing beyond this frame, and researchers have found a 0% solution rate within a brief allotted time.[47] This problem has produced the expression "think outside the box".[48][page needed]Such problems are typically solved via a sudden insight which leaps over the mental barriers, often after long toil against them.[49]This can be difficult depending on how the subject has structured the problem in their mind, how they draw on past experiences, and how well they juggle this information in their working memory. In the example, envisioning the dots connected outside the framing square requires visualizing an unconventional arrangement, which is a strain on working memory.[48] Irrelevant information is a specification or data presented in a problem that is unrelated to the solution.[45]If the solver assumes that all information presented needs to be used, this often derails the problem solving process, making relatively simple problems much harder.[50] For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?"[48][page needed]The "obvious" answer is 15%, but in fact none of the unlisted people would be listed among the 200. This kind of "trick question" is often used in aptitude tests or cognitive evaluations.[51]Though not inherently difficult, they require independent thinking that is not necessarily common. Mathematicalword problemsoften include irrelevant qualitative or numerical information as an extra challenge. The disruption caused by the above cognitive biases can depend on how the information is represented:[51]visually, verbally, or mathematically. A classic example is the Buddhist monk problem: A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys. The problem cannot be addressed in a verbal context, trying to describe the monk's progress on each day. It becomes much easier when the paragraph is represented mathematically by a function: one visualizes agraphwhose horizontal axis is time of day, and whose vertical axis shows the monk's position (or altitude) on the path at each time. Superimposing the two journey curves, which traverse opposite diagonals of a rectangle, one sees they must cross each other somewhere. The visual representation by graphing has resolved the difficulty. Similar strategies can often improve problem solving on tests.[45][52] People who are engaged in problem solving tend to overlook subtractive changes, even those that are critical elements of efficient solutions. 
For example, a city planner may decide that the solution to decrease traffic congestion would be to add another lane to a highway, rather than finding ways to reduce the need for the highway in the first place. This tendency to solve by first, only, or mostly creating or adding elements, rather than by subtracting elements or processes is shown to intensify with highercognitive loadssuch asinformation overload.[53] People can also solve problems while they are asleep. There are many reports of scientists and engineers who solved problems in theirdreams. For example,Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream.[54] The chemistAugust Kekuléwas considering how benzene arranged its six carbon and hydrogen atoms. Thinking about the problem, he dozed off, and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. As Kekulé wrote in his diary, One of the snakes seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke; and this time also I spent the rest of the night in working out the consequences of the hypothesis.[55] There also are empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcherWilliam C. Dementtold his undergraduate class of 500 students that he wanted them to think about an infinite series, whose first elements were OTTFF, to see if they could deduce the principle behind it and to say what the next elements of the series would be.[56][page needed]He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning. The sequence OTTFF is the first letters of the numbers: one, two, three, four, five. The next five elements of the series are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported the following dream:[56][page needed] I was standing in an art gallery, looking at the paintings on the wall. As I walked down the hall, I began to count the paintings: one, two, three, four, five. As I came to the sixth and seventh, the paintings had been ripped from their frames. I stared at the empty frames with a peculiar feeling that some mystery was about to be solved. Suddenly I realized that the sixth and seventh spaces were the solution to the problem! With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution. Albert Einsteinbelieved that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mindbrain[jargon]has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution."[57]Einstein said that he did his problem solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. 
The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined."[58] Problem-solving processes differ across knowledge domains and across levels of expertise.[59]For this reason,cognitive sciencesfindings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory. This has led to a research emphasis on real-world problem solving, since the 1990s. This emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios.[60] In Europe, two main approaches have surfaced, one initiated byDonald Broadbent[61]in the United Kingdom and the other one byDietrich Dörner[62]in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables.[63] In North America, initiated by the work of Herbert A. Simon on "learning by doing" insemanticallyrich domains,[64]researchers began to investigate problem solving separately in different naturalknowledge domains—such as physics, writing, orchessplaying—rather than attempt to extract a global theory of problem solving.[65]These researchers have focused on the development of problem solving within certain domains, that is on the development ofexpertise.[66] Areas that have attracted rather intensive attention in North America include: Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a singular and simple obstacle. In CPS there may be multiple simultaneous obstacles. For example, a surgeon at work has far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, which include:[1] People solve problems on many different levels—from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively.Social issuesand global issues can typically only be solved collectively. The complexity of contemporary problems exceeds the cognitive capacity of any individual and requires different but complementary varieties of expertise and collective problem solving ability.[81] Collective intelligenceis shared or group intelligence that emerges from thecollaboration, collective efforts, and competition of many individuals. In collaborative problem solving peoplework togetherto solve real-world problems. Members of problem-solving groups share a common concern, a similar passion, and/or a commitment to their work. 
Members can ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods.[82]Groups may be fluid based on need, may only occur temporarily to finish an assigned task, or may be more permanent depending on the nature of the problems. For example, in the educational context, members of a group may all have input into the decision-making process and a role in the learning process. Members may be responsible for the thinking, teaching, and monitoring of all members in the group. Group work may be coordinated among members so that each member makes an equal contribution to the whole work. Members can identify and build on their individual strengths so that everyone can make a significant contribution to the task.[83]Collaborative group work has the ability to promote critical thinking skills, problem solving skills,social skills, andself-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge that often leads to better learning outcomes than individual work.[84] Collaborative groups require joint intellectual efforts between the members and involvesocial interactionsto solve problems together. Theknowledge sharedduring these interactions is acquired during communication, negotiation, and production of materials.[85]Members actively seek information from others by asking questions. The capacity to use questions to acquire new information increases understanding and the ability to solve problems.[86] In a 1962 research report,Douglas Engelbartlinked collective intelligence to organizational effectiveness, and predicted that proactively "augmenting human intellect" would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[87] Henry Jenkins, a theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence andparticipatory culture.[88]He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills.[89] Collective impactis the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration. AfterWorld War IItheUN, theBretton Woods organization, and theWTOwere created. Collective problem solving on the international level crystallized around these three types of organization from the 1980s onward. As these global institutions remain state-like or state-centric it is unsurprising that they perpetuate state-like or state-centric approaches to collective problem solving rather than alternative ones.[90] Crowdsourcingis a process of accumulating ideas, thoughts, or information from many independent participants, with aim of finding the best solution for a given challenge. 
Moderninformation technologiesallow for many people to be involved and facilitate managing their suggestions in ways that provide good results.[91]TheInternetallows for a new capacity of collective (including planetary-scale) problem solving.[92]
https://en.wikipedia.org/wiki/Problem_solving
Classical conditioning(alsorespondent conditioningandPavlovian conditioning) is a behavioral procedure in which a biologically potentstimulus(e.g. food, a puff of air on the eye, a potential rival) is paired with a neutral stimulus (e.g. the sound of amusical triangle). The termclassical conditioningrefers to the process of an automatic, conditioned response that is paired with a specific stimulus.[1]It is essentially equivalent to a signal. The RussianphysiologistIvan Pavlovstudied classical conditioning with detailedexperimentswith dogs, and published the experimental results in 1897. In the study ofdigestion, Pavlov observed that the experimental dogs salivated when fed red meat.[2]Pavlovian conditioning is distinct fromoperant conditioning(instrumental conditioning), through which the strength of a voluntary behavior is modified, either by reinforcement or bypunishment. However, classical conditioning can affect operant conditioning; classically conditioned stimuli can reinforce operant responses. Classical conditioning is a basic behavioral mechanism, and itsneural substratesare now beginning to be understood. Though it is sometimes hard to distinguish classical conditioning from other forms of associative learning (e.g. instrumental learning and humanassociative memory), a number of observations differentiate them, especially the contingencies whereby learning occurs.[3] Together withoperant conditioning, classical conditioning became the foundation ofbehaviorism, a school ofpsychologywhich was dominant in the mid-20th century and is still an important influence on the practice ofpsychological therapyand the study of animal behavior. Classical conditioning has been applied in other areas as well. For example, it may affect the body's response topsychoactive drugs, the regulation of hunger, research on the neural basis of learning and memory, and in certain social phenomena such as thefalse consensus effect.[4] Classical conditioning occurs when a conditioned stimulus (CS) is paired with an unconditioned stimulus (US). Usually, the conditioned stimulus is a neutral stimulus (e.g., the sound of atuning fork), the unconditioned stimulus is biologically potent (e.g., the taste of food) and the unconditioned response (UR) to the unconditioned stimulus is an unlearnedreflexresponse (e.g., salivation). After pairing is repeated the organism exhibits a conditioned response (CR) to the conditioned stimulus when the conditioned stimulus is presented alone. (A conditioned response may occur after only one pairing.) Thus, unlike the UR, the CR is acquired through experience, and it is also less permanent than the UR.[5] Usually the conditioned response is similar to the unconditioned response, but sometimes it is quite different. For this and other reasons, most learning theorists suggest that the conditioned stimulus comes to signal or predict the unconditioned stimulus, and go on to analyse the consequences of this signal.[6]Robert A. Rescorlaprovided a clear summary of this change in thinking, and its implications, in his 1988 article "Pavlovian conditioning: It's not what you think it is".[7]Despite its widespread acceptance, Rescorla's thesis may not be defensible.[weasel words] A false-positive involving classical conditioning from chance (where the unconditioned stimulus has the same chance of happening with or without the conditioned stimulus) has been proven to be improbable in successfully conditioning a response. 
The element of contingency has been further tested and is said to have "outlived any usefulness in the analysis of conditioning."[8] Classical conditioning differs fromoperantorinstrumentalconditioning: in classical conditioning, behaviors are modified through the association of stimuli as described above, whereas in operant conditioning behaviors are modified by the effect they produce (i.e., reward or punishment).[9] The best-known and most thorough early work on classical conditioning was done byIvan Pavlov, althoughEdwin Twitmyerpublished some related findings a year earlier.[10]During his research on thephysiologyofdigestionin dogs, Pavlov developed a procedure that enabled him to study the digestive processes of animals over long periods of time. He redirected the animals' digestive fluids outside the body, where they could be measured. Pavlov noticed that his dogs began tosalivatein the presence of the technician who normally fed them, rather than simply salivating in the presence of food. Pavlov called the dogs' anticipatory salivation "psychic secretion". Putting these informal observations to an experimental test, Pavlov presented a stimulus (e.g. the sound of ametronome) and then gave the dog food; after a few repetitions, the dogs started to salivate in response to the stimulus. Pavlov concluded that if a particular stimulus in the dog's surroundings was present when the dog was given food then that stimulus could become associated with food and cause salivation on its own. In Pavlov's experiments theunconditioned stimulus (US)was the food because its effects did not depend on previous experience. The metronome's sound is originally aneutral stimulus (NS)because it does not elicit salivation in the dogs. After conditioning, the metronome's sound becomes theconditioned stimulus (CS)or conditional stimulus; because its effects depend on its association with food.[11]Likewise, the responses of the dog follow the same conditioned-versus-unconditioned arrangement. Theconditioned response (CR)is the response to the conditioned stimulus, whereas theunconditioned response (UR)corresponds to the unconditioned stimulus. Pavlov reported many basic facts about conditioning; for example, he found that learning occurred most rapidly when the interval between the CS and the appearance of the US was relatively short.[12] As noted earlier, it is often thought that the conditioned response is a replica of the unconditioned response, but Pavlov noted that saliva produced by the CS differs in composition from that produced by the US. In fact, the CR may be any new response to the previously neutral CS that can be clearly linked to experience with the conditional relationship of CS and US.[7][9]It was also thought that repeated pairings are necessary for conditioning to emerge, but many CRs can be learned with a single trial, especially infear conditioningandtaste aversionlearning. Learning is fastest in forward conditioning. During forward conditioning, the onset of the CS precedes the onset of the US in order to signal that the US will follow.[13][14]: 69Two common forms of forward conditioning are delay and trace conditioning. During simultaneous conditioning, the CS and US are presented and terminated at the same time. For example: If a person hears a bell and has air puffed into their eye at the same time, and repeated pairings like this led to the person blinking when they hear the bell despite the puff of air being absent, this demonstrates that simultaneous conditioning has occurred. 
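The forward (delay and trace) and simultaneous arrangements described above differ only in the relative timing of CS and US onsets and offsets. The sketch below is an illustrative reading of those descriptions rather than a standard implementation: the delay/trace distinction is not spelled out in the text, so the rules follow common usage (delay: the CS remains on until at least US onset; trace: a stimulus-free gap separates CS offset from US onset), and the function name and example times are invented for this sketch.

```python
# Illustrative classifier for the CS-US timing arrangements described above.
# Times are in arbitrary units; onset < offset is assumed for each stimulus.

def classify_procedure(cs_on, cs_off, us_on, us_off):
    """Label a single CS-US trial by its timing relations.

    Forward conditioning: CS onset precedes US onset.
      - delay form: the CS is still present (or ends exactly) when the US begins.
      - trace form: the CS ends before the US begins, leaving a gap.
    Simultaneous conditioning: CS and US are presented and terminated together.
    """
    if cs_on == us_on and cs_off == us_off:
        return "simultaneous conditioning"
    if cs_on < us_on:
        return "forward (delay) conditioning" if cs_off >= us_on else "forward (trace) conditioning"
    return "other arrangement (e.g. backward conditioning)"

# Example trials (times in seconds, chosen only for illustration):
print(classify_procedure(0.0, 5.0, 4.0, 5.0))  # CS overlaps US onset -> forward (delay)
print(classify_procedure(0.0, 2.0, 4.0, 5.0))  # gap between CS and US -> forward (trace)
print(classify_procedure(0.0, 1.0, 0.0, 1.0))  # same onset and offset -> simultaneous
```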
Second-order or higher-order conditioning follow a two-step procedure. First a neutral stimulus ("CS1") comes to signal a US through forward conditioning. Then a second neutral stimulus ("CS2") is paired with the first (CS1) and comes to yield its own conditioned response.[14]: 66For example: A bell might be paired with food until the bell elicits salivation. If a light is then paired with the bell, then the light may come to elicit salivation as well. The bell is the CS1 and the food is the US. The light becomes the CS2 once it is paired with the CS1. Backward conditioning occurs when a CS immediately follows a US.[13]Unlike the usual conditioning procedure, in which the CS precedes the US, the conditioned response given to the CS tends to be inhibitory. This presumably happens because the CS serves as a signal that the US has ended, rather than as a signal that the US is about to appear.[14]: 71For example, a puff of air directed at a person's eye could be followed by the sound of a buzzer. In temporal conditioning, a US is presented at regular intervals, for instance every 10 minutes. Conditioning is said to have occurred when the CR tends to occur shortly before each US. This suggests that animals have abiological clockthat can serve as a CS. This method has also been used to study timing ability in animals (seeAnimal cognition). The example below shows the temporal conditioning, as US such as food to a hungry mouse is simply delivered on a regular time schedule such as every thirty seconds. After sufficient exposure the mouse will begin to salivate just before the food delivery. This then makes it temporal conditioning as it would appear that the mouse is conditioned to the passage of time. In this procedure, the CS is paired with the US, but the US also occurs at other times. If this occurs, it is predicted that the US is likely to happen in the absence of the CS. In other words, the CS does not "predict" the US. In this case, conditioning fails and the CS does not come to elicit a CR.[15]This finding – thatpredictionrather than CS-US pairing is the key to conditioning – greatly influenced subsequent conditioning research and theory. In the extinction procedure, the CS is presented repeatedly in the absence of a US. This is done after a CS has been conditioned by one of the methods above. When this is done, the CR frequency eventually returns to pre-training levels. However, extinction does not eliminate the effects of the prior conditioning. This is demonstrated byspontaneous recovery– when there is a sudden appearance of the (CR) after extinction occurs – and other related phenomena (see "Recovery from extinction" below). These phenomena can be explained by postulating accumulation of inhibition when a weak stimulus is presented. During acquisition, the CS and US are paired as described above. The extent of conditioning may be tracked by test trials. In these test trials, the CS is presented alone and the CR is measured. A single CS-US pairing may suffice to yield a CR on a test, but usually a number of pairings are necessary and there is a gradual increase in the conditioned response to the CS. This repeated number of trials increase the strength and/or frequency of the CR gradually. 
The speed of conditioning depends on a number of factors, such as the nature and strength of both the CS and the US, previous experience and the animal'smotivationalstate.[6][9]The process slows down as it nears completion.[16] If the CS is presented without the US, and this process is repeated often enough, the CS will eventually stop eliciting a CR. At this point the CR is said to be "extinguished."[6][17] External inhibitionmay be observed if a strong or unfamiliar stimulus is presented just before, or at the same time as, the CS. This causes a reduction in the conditioned response to the CS. Several procedures lead to the recovery of a CR that had been first conditioned and then extinguished. This illustrates that the extinction procedure does not eliminate the effect of conditioning.[9]These procedures are the following: Stimulus generalizationis said to occur if, after a particular CS has come to elicit a CR, a similar test stimulus is found to elicit the same CR. Usually the more similar the test stimulus is to the CS the stronger the CR will be to the test stimulus.[6]Conversely, the more the test stimulus differs from the CS, the weaker the CR will be, or the more it will differ from that previously observed. One observesstimulus discriminationwhen one stimulus ("CS1") elicits one CR and another stimulus ("CS2") elicits either another CR or no CR at all. This can be brought about by, for example, pairing CS1 with an effective US and presenting CS2 with no US.[6] Latent inhibition refers to the observation that it takes longer for a familiar stimulus to become a CS than it does for a novel stimulus to become a CS, when the stimulus is paired with an effective US.[6] This is one of the most common ways to measure the strength of learning in classical conditioning. A typical example of this procedure is as follows: a rat first learns to press a lever throughoperant conditioning. Then, in a series of trials, the rat is exposed to a CS, a light or a noise, followed by the US, a mild electric shock. An association between the CS and US develops, and the rat slows or stops its lever pressing when the CS comes on. The rate of pressing during the CS measures the strength of classical conditioning; that is, the slower the rat presses, the stronger the association of the CS and the US. (Slow pressing indicates a "fear" conditioned response, and it is an example of a conditioned emotional response; see section below.) Typically, three phases of conditioning are used. A CS (CS+) is paired with a US untilasymptoticCR levels are reached. CS+/US trials are continued, but these are interspersed with trials on which the CS+ is paired with a second CS, (the CS-) but not with the US (i.e. CS+/CS- trials). Typically, organisms show CRs on CS+/US trials, but stop responding on CS+/CS− trials. This form of classical conditioning involves two phases. A CS (CS1) is paired with a US. A compound CS (CS1+CS2) is paired with a US. A separate test for each CS (CS1 and CS2) is performed. The blocking effect is observed in a lack of conditional response to CS2, suggesting that the first phase of training blocked the acquisition of the second CS. Experiments on theoretical issues in conditioning have mostly been done onvertebrates, especially rats and pigeons. 
However, conditioning has also been studied in invertebrates, and very important data on the neural basis of conditioning has come from experiments on the sea slug, Aplysia.[6] Most relevant experiments have used the classical conditioning procedure, although instrumental (operant) conditioning experiments have also been used, and the strength of classical conditioning is often measured through its operant effects, as in conditioned suppression (see Phenomena section above) and autoshaping. According to Pavlov, conditioning does not involve the acquisition of any new behavior, but rather the tendency to respond in old ways to new stimuli. Thus, he theorized that the CS merely substitutes for the US in evoking the reflex response. This explanation is called the stimulus-substitution theory of conditioning.[14]: 84 A critical problem with the stimulus-substitution theory is that the CR and UR are not always the same. Pavlov himself observed that a dog's saliva produced as a CR differed in composition from that produced as a UR.[10] The CR is sometimes even the opposite of the UR. For example: the unconditional response to an electric shock is an increase in heart rate, whereas a CS that has been paired with the electric shock elicits a decrease in heart rate. (However, it has been proposed[by whom?] that only when the UR does not involve the central nervous system are the CR and the UR opposites.) The Rescorla–Wagner (R–W) model[9][18] is a relatively simple yet powerful model of conditioning. The model predicts a number of important phenomena, but it also fails in important ways, thus leading to a number of modifications and alternative models. However, because much of the theoretical research on conditioning in the past 40 years has been instigated by this model or reactions to it, the R–W model deserves a brief description here.[19][14]: 85 The Rescorla–Wagner model argues that there is a limit to the amount of conditioning that can occur in the pairing of two stimuli. One determinant of this limit is the nature of the US. For example: pairing a bell with a juicy steak is more likely to produce salivation than pairing the bell with a piece of dry bread, and dry bread is likely to work better than a piece of cardboard. A key idea behind the R–W model is that a CS signals or predicts the US. One might say that before conditioning, the subject is surprised by the US. However, after conditioning, the subject is no longer surprised, because the CS predicts the coming of the US. (The model can be described mathematically; words like predict, surprise, and expect are used only to help explain it.) Here the workings of the model are illustrated with brief accounts of acquisition, extinction, and blocking. The model also predicts a number of other phenomena; see the main article on the model. ΔV = αβ(λ − ΣV) This is the Rescorla–Wagner equation. It specifies the amount of learning that will occur on a single pairing of a conditioning stimulus (CS) with an unconditioned stimulus (US). The above equation is solved repeatedly to predict the course of learning over many such trials. In this model, the degree of learning is measured by how well the CS predicts the US, which is given by the "associative strength" of the CS. In the equation, V represents the current associative strength of the CS, and ΔV is the change in this strength that happens on a given trial. ΣV is the sum of the strengths of all stimuli present in the situation.
λ is the maximum associative strength that a given US will support; its value is usually set to 1 on trials when the US is present, and 0 when the US is absent. α and β are constants related to the salience of the CS and the speed of learning for a given US. How the equation predicts various experimental results is explained in the following sections. For further details, see the main article on the model.[14]: 85–89 The R–W model measures conditioning by assigning an "associative strength" to the CS and other local stimuli. Before a CS is conditioned it has an associative strength of zero. Pairing the CS and the US causes a gradual increase in the associative strength of the CS. This increase is determined by the nature of the US (e.g. its intensity).[14]: 85–89 The amount of learning that happens during any single CS-US pairing depends on the difference between the total associative strengths of the CS and other stimuli present in the situation (ΣV in the equation), and a maximum set by the US (λ in the equation). On the first pairing of the CS and US, this difference is large and the associative strength of the CS takes a big step up. As CS-US pairings accumulate, the US becomes more predictable, and the increase in associative strength on each trial becomes smaller and smaller. Finally, the difference between the associative strength of the CS (plus any that may accrue to other stimuli) and the maximum strength reaches zero. That is, the US is fully predicted, the associative strength of the CS stops growing, and conditioning is complete. The associative process described by the R–W model also accounts for extinction (see "procedures" above). The extinction procedure starts with a positive associative strength of the CS, which means that the CS predicts that the US will occur. On an extinction trial the US fails to occur after the CS. As a result of this "surprising" outcome, the associative strength of the CS takes a step down. Extinction is complete when the strength of the CS reaches zero; no US is predicted, and no US occurs. However, if that same CS is presented without the US but accompanied by a well-established conditioned inhibitor (CI), that is, a stimulus that predicts the absence of a US (in R–W terms, a stimulus with a negative associative strength), then the R–W model predicts that the CS will not undergo extinction (its V will not decrease in size). The most important and novel contribution of the R–W model is its assumption that the conditioning of a CS depends not just on that CS alone and its relationship to the US, but also on all other stimuli present in the conditioning situation. In particular, the model states that the US is predicted by the sum of the associative strengths of all stimuli present in the conditioning situation. Learning is controlled by the difference between this total associative strength and the strength supported by the US. When this sum of strengths reaches a maximum set by the US, conditioning ends as just described.[14]: 85–89 The R–W explanation of the blocking phenomenon illustrates one consequence of the assumption just stated. In blocking (see "phenomena" above), CS1 is paired with a US until conditioning is complete. Then on additional conditioning trials a second stimulus (CS2) appears together with CS1, and both are followed by the US. Finally, CS2 is tested and shown to produce no response because learning about CS2 was "blocked" by the initial learning about CS1.
The R–W model explains this by saying that after the initial conditioning, CS1 fully predicts the US. Since there is no difference between what is predicted and what happens, no new learning happens on the additional trials with CS1+CS2, hence CS2 later yields no response. One of the main reasons for the importance of the R–W model is that it is relatively simple and makes clear predictions. Tests of these predictions have led to a number of important new findings and a considerably increased understanding of conditioning. Some new information has supported the theory, but much has not, and it is generally agreed that the theory is, at best, too simple. However, no single model seems to account for all the phenomena that experiments have produced.[9][20] Following are brief summaries of some related theoretical issues.[19] The R–W model reduces conditioning to the association of a CS and US, and measures this with a single number, the associative strength of the CS. A number of experimental findings indicate that more is learned than this. Among these are two phenomena described earlier in this article. Latent inhibition might happen because a subject stops focusing on a CS that is seen frequently before it is paired with a US. In fact, changes in attention to the CS are at the heart of two prominent theories that try to cope with experimental results that give the R–W model difficulty. In one of these, proposed by Nicholas Mackintosh,[21] the speed of conditioning depends on the amount of attention devoted to the CS, and this amount of attention depends in turn on how well the CS predicts the US. Pearce and Hall proposed a related model based on a different attentional principle.[22] Both models have been extensively tested, and neither explains all the experimental results. Consequently, various authors have attempted hybrid models that combine the two attentional processes. Pearce and Hall in 2010 integrated their attentional ideas and even suggested the possibility of incorporating the Rescorla–Wagner equation into an integrated model.[9] As stated earlier, a key idea in conditioning is that the CS signals or predicts the US (see "zero contingency procedure" above). However, the room in which conditioning takes place, for example, also "predicts" that the US may occur. Still, the room predicts with much less certainty than does the experimental CS itself, because the room is also there between experimental trials, when the US is absent. The role of such context is illustrated by the fact that the dogs in Pavlov's experiment would sometimes start salivating as they approached the experimental apparatus, before they saw or heard any CS.[16] Such so-called "context" stimuli are always present, and their influence helps to account for some otherwise puzzling experimental findings. The associative strength of context stimuli can be entered into the Rescorla–Wagner equation, and they play an important role in the comparator and computational theories outlined below.[9] To find out what has been learned, we must somehow measure behavior ("performance") in a test situation. However, as students know all too well, performance in a test situation is not always a good measure of what has been learned. As for conditioning, there is evidence that subjects in a blocking experiment do learn something about the "blocked" CS, but fail to show this learning because of the way that they are usually tested.
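The acquisition, extinction, and blocking predictions described above follow from repeated application of the Rescorla–Wagner update ΔV = αβ(λ − ΣV). The short Python sketch below is illustrative only and is not part of the original model papers: the product αβ is collapsed into a single learning-rate constant, λ is set to 1 on reinforced trials and 0 otherwise, and the parameter values and trial counts are arbitrary assumptions chosen to make the pattern visible.

```python
# Minimal, illustrative simulation of the Rescorla–Wagner update
#   ΔV = αβ(λ − ΣV)
# applied to acquisition, blocking, and extinction.
# α and β are collapsed into a single rate; all values are arbitrary.

def rw_trial(V, present, lam, alpha_beta=0.3):
    """Update associative strengths in place for one conditioning trial.

    V       : dict mapping stimulus name -> current associative strength
    present : stimuli present on this trial
    lam     : 1.0 if the US occurs on this trial, 0.0 otherwise
    """
    total_v = sum(V[s] for s in present)   # ΣV, summed over stimuli present
    error = lam - total_v                  # prediction error (λ − ΣV)
    for s in present:
        V[s] += alpha_beta * error         # each present stimulus shares the change

V = {"CS1": 0.0, "CS2": 0.0}

# Acquisition: CS1 is paired with the US; V(CS1) climbs toward λ = 1.
for _ in range(20):
    rw_trial(V, ["CS1"], lam=1.0)
print("after acquisition, V(CS1) =", round(V["CS1"], 3))

# Blocking: the CS1+CS2 compound is paired with the US. Because CS1 already
# predicts the US, the error term is near zero and CS2 gains almost nothing.
for _ in range(20):
    rw_trial(V, ["CS1", "CS2"], lam=1.0)
print("after compound trials, V(CS2) =", round(V["CS2"], 3))

# Extinction: CS1 is presented alone without the US (λ = 0); V(CS1) decays.
for _ in range(20):
    rw_trial(V, ["CS1"], lam=0.0)
print("after extinction, V(CS1) =", round(V["CS1"], 3))
```

Run as written, the sketch shows V(CS1) approaching 1 during acquisition, V(CS2) staying near zero despite repeated pairing with the US (the blocking result), and V(CS1) returning toward zero during extinction.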
"Comparator" theories of conditioning are "performance based", that is, they stress what is going on at the time of the test. In particular, they look at all the stimuli that are present during testing and at how the associations acquired by these stimuli may interact.[23][24]To oversimplify somewhat, comparator theories assume that during conditioning the subject acquires both CS-US and context-US associations. At the time of the test, these associations are compared, and a response to the CS occurs only if the CS-US association is stronger than the context-US association. After a CS and US are repeatedly paired in simple acquisition, the CS-US association is strong and the context-US association is relatively weak. This means that the CS elicits a strong CR. In "zero contingency" (see above), the conditioned response is weak or absent because the context-US association is about as strong as the CS-US association. Blocking and other more subtle phenomena can also be explained by comparator theories, though, again, they cannot explain everything.[9][19] An organism's need to predict future events is central to modern theories of conditioning. Most theories use associations between stimuli to take care of these predictions. For example: In the R–W model, the associative strength of a CS tells us how strongly that CS predicts a US. A different approach to prediction is suggested by models such as that proposed by Gallistel & Gibbon (2000, 2002).[25][26]Here the response is not determined by associative strengths. Instead, the organism records the times of onset and offset of CSs and USs and uses these to calculate the probability that the US will follow the CS. A number of experiments have shown that humans and animals can learn to time events (seeAnimal cognition), and the Gallistel & Gibbon model yields very good quantitative fits to a variety of experimental data.[6][19]However, recent studies have suggested that duration-based models cannot account for some empirical findings as well as associative models.[27] The Rescorla-Wagner model treats a stimulus as a single entity, and it represents the associative strength of a stimulus with one number, with no record of how that number was reached. As noted above, this makes it hard for the model to account for a number of experimental results. More flexibility is provided by assuming that a stimulus is internally represented by a collection of elements, each of which may change from one associative state to another. For example, the similarity of one stimulus to another may be represented by saying that the two stimuli share elements in common. These shared elements help to account for stimulus generalization and other phenomena that may depend upon generalization. Also, different elements within the same set may have different associations, and their activations and associations may change at different times and at different rates. This allows element-based models to handle some otherwise inexplicable results. A prominent example of the element approach is the "SOP" model of Wagner.[28]The model has been elaborated in various ways since its introduction, and it can now account in principle for a very wide variety of experimental findings.[9]The model represents any given stimulus with a large collection of elements. The time of presentation of various stimuli, the state of their elements, and the interactions between the elements, all determine the course of associative processes and the behaviors observed during conditioning experiments. 
The SOP account of simple conditioning exemplifies some essentials of the SOP model. To begin with, the model assumes that the CS and US are each represented by a large group of elements. Each of these stimulus elements can be in one of three states: primary activity (A1), secondary activity (A2), or inactivity (I). Of the elements that represent a single stimulus at a given moment, some may be in state A1, some in state A2, and some in state I. When a stimulus first appears, some of its elements jump from inactivity I to primary activity A1. From the A1 state they gradually decay to A2, and finally back to I. Element activity can only change in this way; in particular, elements in A2 cannot go directly back to A1. If the elements of both the CS and the US are in the A1 state at the same time, an association is learned between the two stimuli. This means that if, at a later time, the CS is presented ahead of the US, and some CS elements enter A1, these elements will activate some US elements. However, US elements activated indirectly in this way only get boosted to the A2 state. (This can be thought of as the CS arousing a memory of the US, which will not be as strong as the real thing.) With repeated CS-US trials, more and more elements are associated, and more and more US elements go to A2 when the CS comes on. This gradually leaves fewer and fewer US elements that can enter A1 when the US itself appears. In consequence, learning slows down and approaches a limit. One might say that the US is "fully predicted" or "not surprising" because almost all of its elements can only enter A2 when the CS comes on, leaving few to form new associations. The model can explain the findings that are accounted for by the Rescorla–Wagner model and a number of additional findings as well. For example, unlike most other models, SOP takes time into account. The rise and decay of element activation enables the model to explain time-dependent effects such as the fact that conditioning is strongest when the CS comes just before the US, and that when the CS comes after the US ("backward conditioning") the result is often an inhibitory CS. Many other more subtle phenomena are explained as well.[9] A number of other powerful models have appeared in recent years which incorporate element representations. These often include the assumption that associations involve a network of connections between "nodes" that represent stimuli, responses, and perhaps one or more "hidden" layers of intermediate interconnections. Such models make contact with a current explosion of research on neural networks, artificial intelligence and machine learning.[citation needed] Pavlov proposed that conditioning involved a connection between brain centers for conditioned and unconditioned stimuli. His physiological account of conditioning has been abandoned, but classical conditioning continues to be used to study the neural structures and functions that underlie learning and memory. Forms of classical conditioning that are used for this purpose include, among others, fear conditioning, eyeblink conditioning, and the foot contraction conditioning of Hermissenda crassicornis, a sea slug. Both fear and eyeblink conditioning involve a neutral stimulus, frequently a tone, becoming paired with an unconditioned stimulus. In the case of eyeblink conditioning, the US is an air puff, while in fear conditioning the US is threatening or aversive, such as a foot shock. The American neuroscientist David A.
McCormick performed experiments that demonstrated "...discrete regions of the cerebellum and associated brainstem areas contain neurons that alter their activity during conditioning – these regions are critical for the acquisition and performance of this simple learning task. It appears that other regions of the brain, including the hippocampus, amygdala, and prefrontal cortex, contribute to the conditioning process, especially when the demands of the task get more complex."[29] Fear and eyeblink conditioning involve generally non-overlapping neural circuitry, but share molecular mechanisms. Fear conditioning occurs in the basolateral amygdala, which receives glutamatergic input directly from thalamic afferents, as well as indirectly from prefrontal projections. The direct projections are sufficient for delay conditioning, but in the case of trace conditioning, where the CS needs to be internally represented despite a lack of external stimulus, indirect pathways are necessary. The anterior cingulate is one candidate for intermediate trace conditioning, but the hippocampus may also play a major role. Presynaptic activation of protein kinase A and postsynaptic activation of NMDA receptors and their signal transduction pathway are necessary for conditioning-related plasticity. CREB is also necessary for conditioning-related plasticity, and it may induce downstream synthesis of proteins necessary for this to occur.[30] As NMDA receptors are only activated after an increase in presynaptic calcium (thereby releasing the Mg2+ block), they are a potential coincidence detector that could mediate spike-timing-dependent plasticity (STDP). STDP constrains LTP to situations where the CS predicts the US, and LTD to the reverse.[31] Some therapies associated with classical conditioning are aversion therapy, systematic desensitization and flooding. Aversion therapy is a type of behavior therapy designed to make patients cease an undesirable habit by associating the habit with a strong unpleasant unconditioned stimulus.[32]: 336 For example, a medication might be used to associate the taste of alcohol with stomach upset. Systematic desensitization is a treatment for phobias in which the patient is trained to relax while being exposed to progressively more anxiety-provoking stimuli (e.g. angry words). This is an example of counterconditioning, intended to associate the feared stimuli with a response (relaxation) that is incompatible with anxiety.[32]: 136 Flooding is a form of desensitization that attempts to eliminate phobias and anxieties by repeated exposure to highly distressing stimuli until the lack of reinforcement of the anxiety response causes its extinction.[32]: 133 "Flooding" usually involves actual exposure to the stimuli, whereas the term "implosion" refers to imagined exposure, but the two terms are sometimes used synonymously. Conditioning therapies usually take less time than humanistic therapies.[33] A stimulus that is present when a drug is administered or consumed may eventually evoke a conditioned physiological response that mimics the effect of the drug. This is sometimes the case with caffeine; habitual coffee drinkers may find that the smell of coffee gives them a feeling of alertness. In other cases, the conditioned response is a compensatory reaction that tends to offset the effects of the drug. For example, if a drug causes the body to become less sensitive to pain, the compensatory conditioned reaction may be one that makes the user more sensitive to pain. This compensatory reaction may contribute to drug tolerance.
If so, a drug user may increase the amount of drug consumed in order to feel its effects, and end up taking very large amounts of the drug. In this case a dangerous overdose reaction may occur if the CS happens to be absent, so that the conditioned compensatory effect fails to occur. For example, if the drug has always been administered in the same room, the stimuli provided by that room may produce a conditioned compensatory effect; then an overdose reaction may happen if the drug is administered in a different location where the conditioned stimuli are absent.[34] Signals that consistently precede food intake can become conditioned stimuli for a set of bodily responses that prepares the body for food and digestion. These reflexive responses include the secretion of digestive juices into the stomach and the secretion of certain hormones into the blood stream, and they induce a state of hunger. An example of conditioned hunger is the "appetizer effect." Any signal that consistently precedes a meal, such as a clock indicating that it is time for dinner, can cause people to feel hungrier than before the signal. The lateral hypothalamus (LH) is involved in the initiation of eating. The nigrostriatal pathway, which includes the substantia nigra, the lateral hypothalamus, and the basal ganglia, has been shown to be involved in hunger motivation.[citation needed] The influence of classical conditioning can be seen in emotional responses such as phobia, disgust, nausea, anger, and sexual arousal. A common example is conditioned nausea, in which the CS is the sight or smell of a particular food that in the past has resulted in an unconditioned stomach upset. Similarly, when the CS is the sight of a dog and the US is the pain of being bitten, the result may be a conditioned fear of dogs. An example of a conditioned emotional response is conditioned suppression. As an adaptive mechanism, emotional conditioning helps shield an individual from harm or prepare it for important biological events such as sexual activity. Thus, a stimulus that has occurred before sexual interaction comes to cause sexual arousal, which prepares the individual for sexual contact. For example, sexual arousal has been conditioned in human subjects by pairing a stimulus like a picture of a jar of pennies with views of an erotic film clip. Similar experiments involving blue gourami fish and domesticated quail have shown that such conditioning can increase the number of offspring. These results suggest that conditioning techniques might help to increase fertility rates in infertile individuals and endangered species.[35] Pavlovian-instrumental transfer is a phenomenon that occurs when a conditioned stimulus (CS, also known as a "cue") that has been associated with rewarding or aversive stimuli via classical conditioning alters motivational salience and operant behavior.[36][37][38][39] In a typical experiment, a rat is presented with sound-food pairings (classical conditioning). Separately, the rat learns to press a lever to get food (operant conditioning). Test sessions now show that the rat presses the lever faster in the presence of the sound than in silence, although the sound has never been associated with lever pressing. Pavlovian-instrumental transfer is suggested to play a role in the differential outcomes effect, a procedure which enhances operant discrimination by pairing stimuli with specific outcomes.[citation needed]
https://en.wikipedia.org/wiki/Classical_conditioning
Gavriel Salomon(Hebrew:גבריאל סלומון; October 1938 – January 2016) was an Israelieducational psychologistwho conducted research oncognitionand instruction.[1]He was a Professor Emeritus in the department of education at theUniversity of Haifa. He served as the Editor in Chief of theEducational Psychologist.[2]
https://en.wikipedia.org/wiki/Gavriel_Salomon
Instructional scaffolding is the support given to a student by an instructor throughout the learning process. This support is specifically tailored to each student; this instructional approach allows students to experience student-centered learning, which tends to facilitate more efficient learning than teacher-centered learning.[1][page needed] This learning process promotes a deeper level of learning than many other common teaching strategies.[citation needed] Instructional scaffolding provides sufficient support to promote learning when concepts and skills are first introduced to students. These supports may include resources, compelling tasks, templates and guides, and/or guidance on the development of cognitive and social skills. Instructional scaffolding could be employed through modeling a task, giving advice, and/or providing coaching. These supports are gradually removed as students develop autonomous learning strategies, thus promoting their own cognitive, affective and psychomotor learning skills and knowledge. Teachers help the students master a task or a concept by providing support. The support can take many forms such as outlines, recommended documents, storyboards, or key questions. There are three essential features of scaffolding that facilitate learning.[2][3] The support and guidance provided to the learner are compared to the scaffolds in building construction, where the scaffolds provide both "adjustable and temporal" support to the building under construction.[4] The support and guidance provided to learners facilitate internalization of the knowledge needed to complete the task. This support is weaned gradually until the learner is independent.[4] For scaffolding to be effective, teachers need to pay attention to the following: Scaffolding theory was first introduced in the late 1950s by Jerome Bruner, a cognitive psychologist. He used the term to describe young children's oral language acquisition. Helped by their parents when they first start learning to speak, young children are provided with informal instructional formats within which their learning is facilitated. A scaffolding format investigated by Bruner and his postdoctoral student Anat Ninio, whose scaffolding processes are described in detail, is joint picture-book reading.[9] By contrast, bed-time stories and read-alouds are examples of book-centered parenting events[10] without scaffolding interaction. Scaffolding is inspired by Lev Vygotsky's concept of an expert assisting a novice, or an apprentice. Scaffolding is changing the level of support to suit the cognitive potential of the child. Over the course of a teaching session, one can adjust the amount of guidance to fit the child's potential level of performance. More support is offered when a child is having difficulty with a particular task and, over time, less support is provided as the child makes gains on the task. Ideally, scaffolding works to maintain the child's potential level of development in the zone of proximal development (ZPD). An essential element to the ZPD and scaffolding is the acquisition of language. According to Vygotsky, language (and in particular, speech) is fundamental to children's cognitive growth because language provides purpose and intention so that behaviors can be better understood.[11] Through the use of speech, children are able to communicate to and learn from others through dialogue, which is an important tool in the ZPD.
In a dialogue, a child's unsystematic, disorganized, and spontaneous concepts are met with the more systematic, logical and rational concepts of the skilled helper.[12]Empirical research suggests that the benefits of scaffolding are not only useful during a task, but can extend beyond the immediate situation in order to influence future cognitive development.[13]For instance, a recent study recorded verbal scaffolding between mothers and their 3- and 4-year-old children as they played together. Then, when the children were six years old, they underwent several measures ofexecutive function, such as working memory and goal-directed play. The study found that the children's working memory and language skills at six years of age were related to the amount of verbal scaffolding provided by mothers at age three. In particular, scaffolding was most effective when mothers provided explicit conceptual links during play. Therefore, the results of this study not only suggest that verbal scaffolding aids children'scognitive development, but that the quality of the scaffolding is also important for learning and development.[14] A construct that is critical for scaffolding instruction is Vygotsky's concept of thezone of proximal development(ZPD). The zone of proximal development is the field between what a learner can do on their own (expert stage) and the most that can be achieved with the support of a knowledgeable peer or instructor (pedagogical stage).[15][page needed][16]Vygotsky was convinced that a child could be taught any subject efficiently using scaffolding practices by implementing the scaffolds through the zone of proximal development. Students are escorted and monitored through learning activities that function as interactive conduits to get them to the next stage. Thus the learner obtainsor raises[clarify]new understandings by building on their prior knowledge through the support delivered by more capable individuals.[17]Several peer-reviewed studies have shown that when there is a deficiency in guided learning experiences and social interaction, learning and development are obstructed.[18]Moreover, several things influence the ZPD of students, ranging from the collaboration of peers to technology available in the classroom.[19] In writing instruction, support is typically presented in verbal form (discourse). The writing tutor engages the learner's attention, calibrates the task, motivates the student, identifies relevant task features, controls for frustration, and demonstrates as needed.[20]Through joint activities, the teacher scaffolds conversation to maximize the development of a child's intrapsychological functioning. In this process, the adult controls the elements of the task that are beyond the child's ability, all the while increasing the expectations of what the child is able to do. Speech, a critical tool to scaffold thinking and responding, plays a crucial role in the development of higher psychological processes[21]because it enables thinking to be more abstract, flexible, and independent.[22][23]From a Vygotskian perspective, talk and action work together with the sociocultural fabric of the writing event to shape a child's construction of awareness and performance.[24][25]Dialogue may range from casual talk to deliberate explanations of features of written language. 
The talk embedded in the actions of the literacy event shapes the child's learning as the tutor regulates his or her language to conform to the child's degree of understanding.[26][clarification needed] What may seem like casual conversational exchanges between tutor and student actually offers many opportunities for fostering cognitive development, language learning, story composition for writing, and reading comprehension. Conversations facilitate generative, constructive, experimental, and developmental speech and writing in the development of new ideas.[27] In Vygotsky's words, "what the child is able to do in collaboration today he will be able to do independently tomorrow".[28] Some ingredients of scaffolding are predictability, playfulness, focus on meaning, role reversal, modeling, and nomenclature.[10] According to Saye and Brush, there are two levels of scaffolding: soft and hard.[29] An example of soft scaffolding in the classroom would be when a teacher circulates around the room and converses with his or her students.[30] The teacher may question their approach to a difficult problem and provide constructive feedback to the students. According to Van Lier, this type of scaffolding can also be referred to as contingent scaffolding. The type and amount of support needed are dependent on the needs of the students during the time of instruction.[31][page needed] Unfortunately, applying scaffolding correctly and consistently can be difficult when the classroom is large and students have various needs.[32][full citation needed] Scaffolding can be applied to a majority of the students, but the teacher is left with the responsibility to identify the need for additional scaffolding. In contrast with contingent or soft scaffolding, embedded or hard scaffolding is planned in advance to help students with a learning task that is known in advance to be difficult.[29] For example, when students are discovering the formula for the Pythagorean Theorem in math class, the teacher may identify hints or cues to help the student reach an even higher level of thinking. In both situations, the idea of "expert scaffolding" is being implemented:[33] the teacher in the classroom is considered the expert and is responsible for providing scaffolding for the students. Reciprocal scaffolding, a method first coined by Holton and Thomas, involves a group of two or more working collaboratively. In this situation, the group can learn from each other's experiences and knowledge. The scaffolding is shared by each member and changes constantly as the group works on a task.[33] According to Vygotsky, students develop higher-level thinking skills when scaffolding occurs with an adult expert or with a peer of higher capabilities.[34] Conversely, Piaget believed that students discard their ideas when paired with an adult or student of more expertise.[35][full citation needed] Instead, students should be paired with others who have different perspectives. Conflicts would then take place between students, allowing them to think constructively at a higher level.
Technical scaffolding is a newer approach in which computers replace the teachers as the experts or guides, and students can be guided with web links, online tutorials, or help pages.[36] Educational software can help students follow a clear structure and allows students to plan properly.[37] Silliman and Wilkinson distinguish two types of scaffolding: 'supportive scaffolding' that characterises the IRF (Initiation-Response-Follow-up) pattern; and 'directive scaffolding' that refers to IRE (Initiation-Response-Evaluation).[38] Saxena (2010)[39] develops these two notions theoretically by incorporating Bakhtin's (1981)[40] and van Lier's (1996)[31] works. Within the IRE pattern, teachers provide 'directive scaffolding' on the assumption that their job is to transmit knowledge and then assess its appropriation by the learners. The question-answer-evaluation sequence creates a predetermined standard for acceptable participation and induces passive learning. In this type of interaction, the teacher holds the right to evaluate and asks 'known-information' questions which emphasise the reproduction of information. The nature and role of the triadic dialogue have been oversimplified, and the potential for the roles of teachers and students in it has been undermined.[41] If, in managing the talk, teachers apply 'constructive power'[42] and exploit students' responses as occasions for joint exploration, rather than simply evaluating them, then the classroom talk becomes dialogic.[43][page needed] The pedagogic orientation of this talk becomes 'participation orientation', in contrast to the 'display/assessment orientation' of IRE.[31][page needed] In this kind of pattern of interaction, the third part of the triadic dialogue offers 'follow-up' and teachers' scaffolding becomes 'supportive'. Rather than producing 'authoritative discourse',[40] teachers construct 'internally persuasive discourse' that allows 'equality' and 'symmetry',[31]: 175 wherein the issues of power, control, institutional managerial positioning, etc. are diffused or suspended. The discourse opens up the roles for students as the 'primary knower' and the 'sequence initiator',[41] which allows them to be the negotiator and co-constructor of meaning. The suspension of asymmetry in the talk represents a shift in the teacher's ideological stance and, therefore, demonstrates that supportive scaffolding is more than simply a model of instruction.[39]: 167 Learner support in scaffolding is known as guidance. While it takes on various forms and styles, the basic form of guidance is any type of interaction from the instructor that is intended to aid and/or improve student learning.[44] While this is a broad definition, the role and amount of guidance are better defined by the instructor's approach. Instructionists and constructivists approach giving guidance within their own instructional frameworks. Scaffolding involves presenting learners with proper guidance that moves them towards their learning goals. Providing guidance is a method of moderating the cognitive load of a learner. In scaffolding, learners can only be moved toward their learning goals if cognitive load is held in check by properly administered support. Traditional teachers tend to give a higher level of deductive, didactic instruction, with each piece of a complex task being broken down. This teacher-centered approach, consequently, tends to increase the cognitive load for students.
Constructivist instructors, in contrast, approach instruction from the perspective of guided discovery, with a particular emphasis on transfer. The concept of transfer focuses on a learner's ability to apply learned tasks in a context other than the one in which they were learned.[44] This results in constructivist instructors, unlike classical ones, giving a higher level of guidance than instruction. Research has demonstrated that a higher level of guidance has a greater effect on scaffolded learning, but is not a guarantee of more learning.[45] The efficacy of a higher amount of guidance is dependent on the level of detail and the applicability of the guidance.[44] Having multiple types of guidance (e.g. worked examples, feedback) can cause them to interact and reinforce each other. Multiple conditions do not guarantee greater learning, as certain types of guidance can be extraneous to the learning goals or the modality of learning. With this, more guidance (if not appropriate to the learning) can negatively impact performance, as it gives the learner overwhelming levels of information.[44] However, appropriately designed high levels of guidance, which interact properly with the learning, are more beneficial to learning than low levels of guidance. Constructivists pay close attention to the context of guidance because they believe instruction plays a major role in knowledge retention and transfer.[44] Research studies[46][47] demonstrate how the context of isolated explanations can have an effect on student-learning outcomes. For example, Hake's (1998) large-scale study[48] demonstrated how post-secondary physics students recalled less than 30% of material covered in a traditional lecture-style class. Similarly, other studies[49][50][51] illustrate how students construct different understandings from explanation in isolation versus having a first experience with the material. A first experience with the material provides students with a "need to know",[44] which allows learners to reflect on prior experiences with the content and can help them construct meaning from instruction.[44] Worked examples are guiding tools that can act as a "need to know" for students. Worked examples provide students with straightforward goals, step-by-step instructions, as well as ready-to-solve problems that can help students develop a stronger understanding from instruction.[52][53] Guiding has a key role in both constructivism and 'instructivism'. For instructivists, the timing of guidance is immediate, either at the beginning or when the learner makes a mistake, whereas in constructivism it can be delayed.[44] It has been found that immediate feedback can lead to working memory load, as it does not take into consideration the process of gradual acquisition of a skill,[54] which also relates to the amount of guidance being given. Research on intelligent tutoring systems suggests that immediate feedback on errors is a great strategy to promote learning.
The reasoning is that the learner can integrate feedback held in short-term memory into the overall learning and problem-solving task; the longer the wait for feedback, the harder it is for the learner to make this integration.[54] Yet another study found that providing feedback right after an error can deprive the learner of the opportunity to develop evaluative skills.[55] Wise and O'Neill bring together these two seemingly contradictory findings and argue that they not only prove the importance of feedback, but also point to a timing feature of feedback: immediate feedback promotes more rapid problem-solving in the short term, but delaying feedback can result in better retention and transfer in the long term.[44] Constructivism views knowledge as a "function of how the individual creates meaning from his or her own experiences".[56] Constructivists advocate that learning is better facilitated in a minimally guided environment where learners construct important information for themselves.[57] According to constructivism, minimal guidance in the form of process- or task-related information should be provided to learners upon request, and direct instruction of learning strategies should not be used because it impedes the natural processes learners use to recall prior experiences. In this view, for learners to construct knowledge they should be provided with the goals and minimal information and support. Applications that promote constructivist learning require learners to solve authentic problems or "acquire knowledge in information-rich settings".[58] An example of an application of constructivist learning is science instruction, where students are asked to discover the principles of science by imitating the steps and actions of researchers.[59] Instructionism refers to educational practices that are instructor-centered. Some authors see instructionism as a highly prescriptive practice that mostly focuses on the formation of skills, is very product-oriented and is not interactive;[60][page needed] others see it as a highly structured, systematic and explicit way of teaching that gives emphasis to the role of the teacher as a transmitter of knowledge and of the students as passive receptacles.[61] The 'transmission' of knowledge and skills from the teacher to the student in this context is often manifested in the form of drill, practice and rote memorization.[61] An 'instructionist', then, focuses on the preparation, organization and management of the lesson, making sure the plan is detailed and the communication is effective.[62][page needed][63][page needed] The emphasis is on the up-front explicit delivery of instruction.[44] Instructionism is often contrasted with constructivism. Both of them use the term guidance as a means to support learning, and both consider how it can be used more effectively. The difference in the use of guidance is found in the philosophical assumptions regarding the nature of the learner,[61] but the two also differ in their views on the quantity, the context and the timing of guidance.[44] An example of the application of instructionism in the classroom is direct instruction. With traditional power dynamics in the classroom, the teacher is the authority; in order to engage students in meaningful talk, this hierarchy needs to be broken.[64] Minimal guidance is a general term applied to a variety of pedagogical approaches such as inquiry learning, learner-centered pedagogy, student-centered learning,[65] project-based learning, and discovery learning.
It is the idea that learners, regardless of their level of expertise, will learn best through discovering and/or constructing information for themselves in contrast to more teacher-led classrooms which in contrast are described as more passive learning.[66][67][68][unreliable source?][69] A safe approach is to offer three options. The teacher designs two options based on what most students may like to do. The third choice is a blank check – students propose their own product or performance.[70] In this approach, the role of the teacher may change from what has been described as "sage on the stage" to "guide on the side" with one example of this change in practice being that teachers will not tend to answer questions from students directly, but instead will ask questions back to students to prompt further thinking.[71][72][64][73][74][75][76][excessive citations]This change in teaching style has also been described as being a "facilitator of learning" instead of being a "dispenser of knowledge".[77] Minimal guidance is regarded as controversial[78]and has been described as a caricature that does not exist in practice, and that critics have combined too many different approaches some of which may include more guidance, under the label of minimal guidance.[79][80]However, there is some evidence that in certain domains, and under certain circumstances, a minimal guidance approach can lead to successful learning if sufficient practice opportunities are built in.[81] One strand of criticism of the minimal guidance approach originating incognitive load theoryis that it does not align with human cognitive architecture making it an inefficient approach to learning for beginner learners in particular.[66][82]In this strand of criticism, minimal guidance approaches are contrasted with fully guided approaches to instruction which better match inherent human cognitive architecture.[83][45]While accepting this general line of argument, counter-arguments for individual approaches such as problem-based learning have highlighted how these are not minimal guidance approaches, and are consistent with human cognitive architecture.[84]Other strands of criticism suggest that there is little empirical evidence for the effectiveness of learner-centered approaches when compared to more teacher-led approaches, and this is despite extensive encouragement and support from national and international education agencies includingUNESCO,UNICEF, and theWorld Bank.[85][86][87]Further more specific criticisms include the following: minimal guidance is inefficient compared to explicit instruction due to a lack ofworked examples, minimal guidance leads to reduced opportunities for student practice, and minimal guidance happens inevitably inproject-based learningas a result of the teacher having to manage too many student projects at one time.[88] One of the consequences of this reconceptualization is abandoning the rigid explicit instruction versus minimal guidance dichotomy and replacing it with a more flexible approach based on differentiating specific goals of various learner activities in complex learning.[89] There have been several attempts to move beyond the minimal guidance versus fully guided instruction controversy. 
These are often developed by introducing the variable of learner expertise and using it to suggest adapting instructional styles depending on the level of expertise of the learner, with more expert learners generally requiring less direct instruction.[90] For example, despite providing many of the criticisms of minimal guidance, cognitive load theory also suggests a role for less direct guidance from the teacher as learners become more expert, due to the expertise reversal effect.[91] Other attempts at synthesis include using pedagogies more associated with martial arts instruction that apply explicit instruction as a means of fostering student discovery through repeated practice.[92] If instead we entertain the possibility that instruction and discovery are not oil and water, that instruction and discovery coexist and can work together, we may find a solution to this impasse in the field. Perhaps our way out of the instructivist-constructivist impasse thus involves not a "middle ground" compromise but an alternative conceptualization of instruction and discovery.[92] Instructional scaffolding can be thought of as the strategies that a teacher uses to help learners bridge a cognitive gap or progress in their learning to a level they were previously unable to reach.[93] These strategies evolve as teachers evaluate learners' initial level of ability and then provide continued feedback throughout the progression of the task. In the early studies, scaffolding was primarily done in oral, face-to-face learning environments. In classrooms, scaffolding may include modelling behaviours, coaching and prompting, thinking out loud, dialogue with questions and answers, planned and spontaneous discussions, as well as other interactive planning or structural assistance to help the learner bridge a cognitive gap. This can also include peer mentoring from more experienced students. These peers can be referred to as MKOs. MKO stands for 'More Knowledgeable Other'. The MKO is a person who has a higher understanding of an idea or concept and can bridge this cognitive gap. This includes teachers, parents, and, as stated before, peers. MKOs are a central part of the process of learning in the ZPD, or Zone of Proximal Development. An MKO may help a student using scaffolding, with the goal being that the student can eventually lead themselves to the answer on their own, without the help of anyone else. The MKO may use a gradual reduction of assistance in order to facilitate this, as described earlier. There are a wide variety of scaffolding strategies that teachers employ. One approach to the application of scaffolding is to consider a framework for evaluating these strategies. This model was developed based on the theoretical principles of scaffolding to highlight the use of scaffolding for educational purposes.[93] It highlights two components of an instructor's use of scaffolding: the first is the instructor's intentions, and the second is the means by which the scaffolding is carried out.
Scaffolding intentions: These groups highlight the instructor's intentions for scaffolding.[93] Scaffolding means: These groups highlight the ways in which the instructor scaffolds.[93] Any combination of scaffolding means with scaffolding intentions can be construed as a scaffolding strategy; however, whether a teaching strategy qualifies as good scaffolding generally depends upon its enactment in actual practice, and more specifically upon whether the strategy is applied contingently and whether it is also part of a process of fading and transfer of responsibility.[94] Examples of scaffolding:[95] Instructors can use a variety of scaffolds to accommodate different levels of knowledge. The context of learning (i.e. novice experience, complexity of the task) may require more than one scaffolding strategy in order for the student to master new content.[95] The following table[96] outlines a few common scaffolding strategies: These tools organize information in a way that helps learners understand new and complex content. Examples of advance organizers are: Instructors use modelling to: These types of instructional materials are commonly implemented in mathematics and science classes and include three key features:[100] 1. Problem formation: A principle or theory is introduced. 2. Step-by-step example: A worked example that demonstrates how the student can solve the problem is provided. 3. Solution to the problem: One or more ready-to-be-solved problems are given for the student to practice the skill. Types of concept maps are:[103] How new information is presented to the learner is a critical component of effective instruction. The use of materials such as visual images, graphic organizers, animated videos, audio files and other technological features can make explanations more engaging, motivating and meaningful for student learning. These tools can provide students with the necessary information (i.e. concept or theory, task instructions, learning goals, learning objectives) and practice (i.e. ready-to-be-solved problems) they need to master new content and skills. Handouts are helpful tools for explanations and worked examples. There are different types of prompts, such as:[106] When students who are not physically present in the classroom receive instruction, instructors need to adapt to the environment, and their scaffolding needs to be adjusted to fit the new learning medium. It can be challenging to find a way to adjust the verbal and visual elements of scaffolding to construct a successful interactive and collaborative learning environment for distance learning. The recent spread of technology used in education has opened up the learning environment to include AI-based methods, hypermedia, hypertext, collaborative learning environments, and web-based learning environments. This challenges traditional learning design conceptions of scaffolding for educators.[107][108][109] A 2014 review[94] of the types of scaffolding used in online learning identified four main types of scaffolding: These four types are structures that appropriately support students' learning in online environments.[111] Other scaffolding approaches that were addressed by the researchers included technical support, content support, argumentation template, questioning and modelling. These terms were rarely used, and it was argued that these areas had an unclear structure to guide students, especially in online learning, and were inadequately justified. As technology changes, so does the form of support provided to online learners.
Instructors have the challenge of adapting scaffolding techniques to this new medium, but also the advantage of using new web-based tools such as wikis and blogs as platforms to support and communicate with students. As the research in this area progresses, studies are showing that when students learn about complex topics in computer-based learning environments (CBLEs) without scaffolding, they demonstrate a poor ability to regulate their learning and fail to gain a conceptual understanding of the topic.[112] As a result, researchers have recently begun to emphasize the importance of embedded conceptual, procedural, strategic, and metacognitive scaffolding in CBLEs.[107][113][114][115] In addition to the four scaffolding guidelines outlined, recent research has shown: Online classes do not require moving to a different city or traveling long distances in order to attend the program of one's choice. Online learning allows a flexible schedule. Assessments are completed at the learner's pace. It makes it easier for introverted students to ask questions or share their ideas, which boosts their confidence.[117] Online education is cost-effective and reduces travel expenses for both the learning institution and students. It improves technology literacy for teachers and students.[118] An online learning environment requires many factors for scaffolding to be successful; these include basic knowledge of the use of technology, social interaction, and reliance on students' individual motivation and initiative for learning. Collaboration is key to instructional scaffolding and can be lost without proper guidance from an instructor creating and initiating an online social space.[119] The instructor's role in creating a social space for online interaction has been found to increase students' confidence in understanding the content and goals of the course. If an instructor does not create this space, a student misses out on critical thinking, evaluating material and collaborating with fellow students to foster learning. Even with instructors implementing a positive social space online, one research study found that students' perceptions of incompetence relative to other classmates were not affected by positive online social spaces, though this was found to be less of a problem in face-to-face courses.[119] Because distance learning takes place in an online environment, self-regulation is essential for scaffolding to be effective; a study has shown that procrastinators are at a disadvantage in online distance learning and cannot be scaffolded to the same degree as they could be by an in-person instructor.[120] According to a research paper from the National Center for Biotechnology Information, teacher-student interactions are not what they used to be. Social relationships between teachers and their students are weakened by online learning. Teachers tend to have low expectations of their students during online classes, which leads to low participation. Online education increases the risk of anxiety disorder, clinical depression, apathy, learned helplessness, and burnout. Learners without access to a laptop and the internet are often left out of the online learning world. Online learning courses do not provide enough verbal interaction, which makes it difficult for teachers to measure student engagement and learning outcomes.
Students with disabilities often require special software to access educational resources online.[121] Students who had more desire to master the content than to receive higher grades were more successful in the online courses.[122] A study by Artino and Stephens[123] found that graduate students were more motivated in online courses than undergraduate students but suggests that academic level may contribute to the amount of technological support needed for positive learning outcomes, finding that undergraduate students needed less support than graduate students when navigating an online course.
https://en.wikipedia.org/wiki/Instructional_scaffolding
One-shot learning is an object categorization problem, found mostly in computer vision. Whereas most machine learning-based object categorization algorithms require training on hundreds or thousands of examples, one-shot learning aims to classify objects from one, or only a few, examples. The term few-shot learning is also used for these problems, especially when more than one example is needed. The ability to learn object categories from few examples, and at a rapid pace, has been demonstrated in humans.[1][2] It is estimated that a child learns almost all of the 10 ~ 30 thousand object categories in the world by age six.[3] This is due not only to the human mind's computational power, but also to its ability to synthesize and learn new object categories from existing information about different, previously learned categories. Given two examples from two object categories, one an unknown object composed of familiar shapes and the other an unknown, amorphous shape, it is much easier for humans to recognize the former than the latter, suggesting that humans make use of previously learned categories when learning new ones. The key motivation for solving one-shot learning is that systems, like humans, can use knowledge about object categories to classify new objects.[4][5] As with most classification schemes, one-shot learning involves three main challenges: One-shot learning differs from single object recognition and standard category recognition algorithms in its emphasis on knowledge transfer, which makes use of previously learned categories. The Bayesian one-shot learning algorithm represents the foreground and background of images as parametrized by a mixture of constellation models.[12] During the learning phase, the parameters of these models are learned using a conjugate density parameter posterior and variational Bayesian expectation–maximization (VBEM).[13] In this stage the previously learned object categories inform the choice of model parameters via transfer by contextual information. For object recognition on new images, the posterior obtained during the learning phase is used in a Bayesian decision framework to estimate the ratio of p(object | test, train) to p(background clutter | test, train), where p is the probability of the outcome.[14] Given the task of finding a particular object in a query image, the overall objective of the Bayesian one-shot learning algorithm is to compare the probability that the object is present with the probability that only background clutter is present. If the former probability is higher, the algorithm reports the object's presence; otherwise, it reports the object's absence. To compute these probabilities, the object class must be modeled from a set of (1 ~ 5) training images containing examples. To formalize these ideas, let I be the query image, which contains either an example of the foreground category O_fg or only background clutter of a generic background category O_bg. Also let I_t be the set of training images used for the foreground category. The decision of whether I contains an object from the foreground category, or only clutter from the background category, rests on the ratio R of the class posteriors p(O_fg | I, I_t) and p(O_bg | I, I_t), which can be expanded by Bayes' theorem, yielding a ratio of likelihoods and a ratio of object category priors.
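Written out, the ratio described above takes the form (a reconstruction via the standard Bayes expansion; notation follows the surrounding text, and the source's exact typesetting may differ):

```latex
R \;=\; \frac{p(O_{fg} \mid I, I_t)}{p(O_{bg} \mid I, I_t)}
  \;=\; \frac{p(I \mid I_t, O_{fg})\, p(O_{fg})}{p(I \mid I_t, O_{bg})\, p(O_{bg})}
```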
We decide that the image I contains an object from the foreground class if R exceeds a certain threshold T. We next introduce parametric models for the foreground and background categories, with parameters θ and θ_bg respectively. The foreground parametric model is learned during the learning stage from I_t, as well as from prior information about previously learned categories. The background model is assumed to be uniform across images. Omitting the constant ratio of category priors, p(O_fg)/p(O_bg), and parametrizing over θ and θ_bg yields an expression for R in terms of the likelihoods under the two parametric models. The posterior distribution of model parameters given the training images, p(θ | I_t, O_fg), is estimated in the learning phase. In this estimation, one-shot learning differs sharply from more traditional Bayesian estimation models that approximate the integral as δ(θ^ML). Instead, it uses a variational approach that draws on prior information from previously learned categories. However, traditional maximum likelihood estimation of the model parameters is used for the background model and for the categories learned in advance through training.[15] For each query image I and set of training images I_t, a constellation model is used for representation.[12][16][17] To obtain this model for a given image I, first a set of N interesting regions is detected in the image using the Kadir–Brady saliency detector.[18] Each region selected is represented by a location in the image, X_i, and a description of its appearance, A_i. Letting X = Σ_{i=1..N} X_i and A = Σ_{i=1..N} A_i, with X_t and A_t the analogous representations for training images, the expression for R becomes a ratio involving the likelihoods p(X, A | θ) and p(X, A | θ_bg), which are represented as mixtures of constellation models. A typical constellation model has P (3 ~ 7) parts, with N (~100) interest regions. Thus a P-dimensional vector h assigns one region of interest (out of N regions) to each model part (for P parts). Thus h denotes a hypothesis (an assignment of interest regions to model parts) for the model, and a full constellation model is represented by summing over all possible hypotheses h in the hypothesis space H. Finally, the likelihood is written as a sum over mixture components ω and hypotheses h of joint terms p(X, A, h, ω | θ). The different ω's represent different configurations of parts, whereas the different hypotheses h represent different assignments of regions to parts, given a part model ω. The assumption that the shape of the model (as represented by X, the collection of part locations) and the appearance are independent allows one to consider the likelihood expression p(X, A, h, ω | θ) as two separate likelihoods, of appearance and of shape.[19] The appearance of each feature is represented by a point in appearance space (discussed below in implementation). "Each part p in the constellation model has a Gaussian density within this space with mean and precision parameters θ^A_{p,ω} = {μ^A_{p,ω}, Γ^A_{p,ω}}."
From these, the appearance likelihood described above is computed as a product of Gaussians over the model parts for a given hypothesis h and mixture component ω.[20] The shape of the model for a given mixture component ω and hypothesis h is represented as a joint Gaussian density of the locations of features. These features are transformed into a scale- and translation-invariant space before modelling the relative location of the parts by a 2(P − 1)-dimensional Gaussian. From this, we obtain the shape likelihood, completing our representation of p(X, A, h, ω | θ). In order to reduce the number of hypotheses in the hypothesis space H, only those hypotheses that satisfy the ordering constraint that the x-coordinate of each part is monotonically increasing are considered. This reduces the size of H by a factor of P!.[20] In order to compute R, the integral ∫ p(X, A | θ) p(θ | X_t, A_t, O_fg) dθ must be evaluated, but it is analytically intractable. The object category model above gives information about p(X, A | θ), so what remains is to examine p(θ | X_t, A_t, O), the posterior of θ, and find a sufficient approximation to render the integral tractable. Previous work approximates the posterior by a δ function centered at θ*, collapsing the integral in question into p(X, A | θ*). This θ* is normally estimated using a maximum likelihood (θ* = θ^ML) or maximum a posteriori (θ* = θ^MAP) procedure. However, because in one-shot learning few training examples are used, the distribution will not be well-peaked, as is assumed in a δ function approximation. Thus, instead of this traditional approximation, the Bayesian one-shot learning algorithm seeks to "find a parametric form of p(θ) such that the learning of p(θ | X_t, A_t, O_fg) is feasible". The algorithm employs a normal-Wishart distribution as the conjugate prior of p(θ | X_t, A_t, O_fg), and in the learning phase, variational Bayesian methods with the same computational complexity as maximum likelihood methods are used to learn the hyperparameters of the distribution. Then, since p(X, A | θ) is a product of Gaussians, as chosen in the object category model, the integral reduces to a multivariate Student's t distribution, which can be evaluated.[21] To detect features in an image so that it can be represented by a constellation model, the Kadir–Brady saliency detector is used on grey-scale images, finding salient regions of the image. These regions are then clustered, yielding a number of features (the clusters) and the shape parameter X, composed of the cluster centers. The Kadir–Brady detector was chosen because it produces fewer, more salient regions, as opposed to feature detectors like multiscale Harris, which produce numerous, less significant regions. The regions are then taken from the image and rescaled to a small patch of 11 × 11 pixels, allowing each patch to be represented in 121-dimensional space.
This dimensionality is reduced using principal component analysis, and A, the appearance parameter, is then formed from the first 10 principal components of each patch.[22] To obtain shape and appearance priors, three categories (spotted cats, faces, and airplanes) are learned using maximum likelihood estimation. These object category model parameters are then used to estimate the hyper-parameters of the desired priors. Given a set of training examples, the algorithm runs the feature detector on these images and determines model parameters from the salient regions. The hypothesis index h, assigning features to parts, prevents a closed-form solution of the linear model, so the posterior p(θ | X_t, A_t, O_fg) is estimated by the variational Bayesian expectation–maximization algorithm, which is run until parameter convergence after ~ 100 iterations. Learning a category in this fashion takes under a minute on a 2.8 GHz machine with a 4-part model and < 10 training images.[23] To learn the motorbike category: Another algorithm uses knowledge transfer by model parameters to learn a new object category that is similar in appearance to previously learned categories. An image is represented as either a texture and shape, or as a latent image that has been transformed, denoted by I = T(I_L). A Siamese neural network works in tandem on two different input vectors to compute comparable output vectors.[24] In this context, congealing is "the simultaneous vectorization of each of a set of images to each other". For a set of training images of a certain category, congealing iteratively transforms each image to minimize the images' joint pixelwise entropies E, where "ν(p) is the binary random variable defined by the values of a particular pixel p across all of the images, H() is the discrete entropy function of that variable, and 1 ≤ p ≤ P is the set of pixel indices for the image." The congealing algorithm begins with a set of images I_i and a corresponding transform matrix U_i, which at the end of the algorithm will represent the transformation of I_i into its latent I_L_i. These latents I_L_i minimize the joint pixel-wise entropies. Thus the task of the congealing algorithm is to estimate the transformations U_i. Sketch of algorithm: At the end of the algorithm, U_i(I) = I_L_i, and T = U_i^(−1) transforms the latent image back into the originally observed image.[25] To use this model for classification, it must be estimated with the maximum posterior probability given an observed image I. Applying Bayes' rule to P(c_j | I) and parametrizing by the transformation T gives a difficult integral that must be approximated; then the best transform T (the one that maps the test image to its latent image) must be found. Once this transformation is found, the test image can be transformed into its latent, and a nearest neighbor classifier based on Hausdorff distance between images can classify the latent (and thus the test image) as belonging to a particular class c_j. To find T, the test image I is inserted into the training ensemble for the congealing process.
Since the test image is drawn from one of the categories c_j, congealing provides a corresponding T_test = U_test^(−1) that maps I to its latent. The latent can then be classified.[26] Given a set of transformations B_i obtained from congealing many images of a certain category, the classifier can be extended to the case where only one training example I_t of a new category c is allowed. Applying all the transformations B_i sequentially to I_t creates an artificial training set for c. This artificial data set can be made larger by borrowing transformations from many already known categories. Once this data set is obtained, I, a test instance of c, can be classified as in the normal classification procedure. The key assumption is that categories are similar enough that the transforms from one can be applied to another.[27]
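To make the congealing-and-transfer idea concrete, here is a minimal sketch, not the published implementation: it restricts the transforms to integer translations of binary images, uses greedy coordinate descent on the joint pixelwise entropy, and then reuses the learned shifts to build an artificial training set from a single example. The image sizes, candidate-shift set, and iteration count are illustrative assumptions.

```python
import numpy as np

def joint_pixelwise_entropy(images):
    """E = sum over pixels of the entropy of that pixel's values across the stack.
    `images` is a list of equally sized binary (0/1) numpy arrays."""
    stack = np.stack(images, axis=0)
    p1 = np.clip(stack.mean(axis=0), 1e-12, 1 - 1e-12)   # P(pixel == 1) per location
    return float((-(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))).sum())

def shift(img, dy, dx):
    """Translate an image by (dy, dx), filling exposed pixels with zeros."""
    out = np.zeros_like(img)
    H, W = img.shape
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        img[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def congeal(images, n_iters=10, candidates=(-1, 0, 1)):
    """Greedy congealing: repeatedly nudge each image by the small translation
    that most reduces the joint pixelwise entropy of the whole stack."""
    shifts = [(0, 0) for _ in images]
    latents = [img.copy() for img in images]
    for _ in range(n_iters):
        for i, img in enumerate(images):
            base_dy, base_dx = shifts[i]
            best_e, best_shift = None, (base_dy, base_dx)
            for dy in candidates:
                for dx in candidates:
                    trial = shift(img, base_dy + dy, base_dx + dx)
                    e = joint_pixelwise_entropy(latents[:i] + [trial] + latents[i + 1:])
                    if best_e is None or e < best_e:
                        best_e, best_shift = e, (base_dy + dy, base_dx + dx)
            shifts[i] = best_shift
            latents[i] = shift(img, *best_shift)
    return shifts, latents   # the shifts play the role of the transforms U_i above

def augment_single_example(example, borrowed_shifts):
    """One-shot augmentation: apply the (inverted) transforms learned from a
    well-sampled category to a single example of a new category."""
    return [shift(example, -dy, -dx) for (dy, dx) in borrowed_shifts]
```

In the published approach the transforms are richer than translations (the text above calls them transform matrices U_i), but the structure of the loop is the same: nudge each image by whichever candidate transform most lowers the stack's joint entropy, then reuse those transforms to manufacture variability for a category seen only once.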
https://en.wikipedia.org/wiki/One-shot_learning_in_computer_vision
Incognitive psychology,fast mappingis the term used for the hypothesized mental process whereby a new concept is learned (or a new hypothesis formed) based only on minimal exposure to a given unit of information (e.g., one exposure to a word in an informative context where its referent is present). Fast mapping is thought by some researchers to be particularly important duringlanguage acquisitionin young children, and may serve (at least in part) to explain the prodigious rate at which children gain vocabulary. In order to successfully use the fast mapping process, a child must possess the ability to use "referent selection" and "referent retention" of a novel word. There is evidence that this can be done by children as young as two years old, even with the constraints of minimal time and several distractors.[1]Previous research in fast mapping has also shown that children are able to retain a newly learned word for a substantial amount of time after they are subjected to the word for the first time (Carey and Bartlett, 1978). Further research by Markson and Bloom (1997), showed that children can remember a novel word a week after it was presented to them even with only one exposure to the novel word. While children have also displayed the ability to have equal recall for other types of information, such as novel facts, their ability to extend the information seems to be unique to novel words. This suggests that fast mapping is a specified mechanism forword learning.[2]The process was first formally articulated and the term 'fast mapping' coinedSusan Careyand Elsa Bartlett in 1978.[3] Today, there is evidence to suggest that children do not learn words through 'fast mapping' but rather learn probabilistic, predictive relationships between objects and sounds that develop over time. Evidence for this comes, for example, from children's struggles to understand color words: although infants can distinguish between basic color categories,[4]many sighted children use color words in the same way that blind children do up until the fourth year.[5]Typically, words such as "blue" and "yellow" appear in their vocabularies and they produce them in appropriate places in speech, but their application of individual color terms is haphazard and interchangeable. If shown a blue cup and asked its color, typical three-year-olds seem as likely to answer "red" as "blue." These difficulties persist up until around age four, even after hundreds of explicit training trials.[6]The inability for children to understand color stems from the cognitive process of whole object constraint. Whole object constraint is the idea that a child will understand that a novel word represents the entirety of that object. Then, if the child is presented with further novel words, they attach inferred meanings to the object. However, color is the last attribute to be considered because it explains the least about the object itself. Children's behavior clearly indicates that they have knowledge of these words, but this knowledge is far from complete; rather it appears to be predictive, as opposed to all-or-none. An alternate theory of deriving the meaning of newly learned words by young children during language acquisition stems from John Locke's "associative proposal theory". Compared to the "intentional proposal theory", associative proposal theory refers to the deduction of meaning by comparing the novel object to environmental stimuli. 
A study conducted by Yu & Ballard (2007) introduced cross-situational learning,[7] a method based on Locke's theory. Cross-situational learning theory is a mechanism in which the child learns the meaning of words over multiple exposures in varying contexts, in an attempt to eliminate uncertainty about the word's true meaning on an exposure-by-exposure basis.[8] On the other hand, more recent studies[9] suggest that some amount of fast mapping does take place, questioning the validity of previous laboratory studies that aim to show that probabilistic learning does occur. A critique of the theory of fast mapping asks how children can connect the meaning of a novel word with the word itself after just one exposure. For example, when showing a child a blue ball and saying the word "blue", how does the child know that the word "blue" refers to the color of the ball, and not its size or shape? If children learn words by fast mapping, then they must use inductive reasoning to understand the meaning associated with the novel word. A popular theory to explain this inductive reasoning is that children apply word-learning constraints to the situation where a novel word is introduced. There are speculations as to why this is; Markman and Wachtel (1988) conducted a study that helps explain the possible underlying principles of fast mapping. They claim children adhere to the theories of whole-object bias, the assumption that a novel label refers to the entire object rather than its parts, color, substance or other properties, and mutual exclusivity bias, the assumption that only one label applies to each object.[10] In their experiment, children were presented with an object that they either were familiar with or that was introduced with a whole-object term. Markman and Wachtel concluded that the mere juxtaposition of familiar and novel terms may assist in part-term acquisition. In other words, children will put constraints on themselves and assume the novel term refers to the whole object in view rather than to its parts.[11] Six lexical constraints have been proposed (reference, extendibility, object scope, categorical scope, novel name, conventionality) that guide a child's learning of a novel word.[11] When learning a new word, children apply these constraints. However, this proposed method of constraints is not flawless. If children use these constraints, there are many words that they will never learn, such as actions, attributes, and parts. Studies have found that both toddlers and adults were more likely to categorize an object by its shape than by its size or color.[12] The next question in fast mapping theory is how exactly the meaning of the novel word is learned. In an experiment performed in October 2012 by the Department of Psychology at the University of Pennsylvania,[12] researchers attempted to determine whether fast mapping occurs via cross-situational learning or by another method, "propose but verify". In cross-situational learning, listeners hear a novel word and store multiple conjectures of what the word could mean based on its situational context. Then, after multiple exposures, the listener is able to target the meaning of the word by ruling out conjectures. In propose but verify, the learner makes a single conjecture about the meaning of the word after hearing it used in context. The learner then carries that conjecture forward to be reevaluated and modified for consistency when the word is used again.
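The two hypothesized mechanisms can be contrasted with a toy simulation; everything below (the vocabulary, the scene generator, the number of distractors) is an illustrative assumption, not the published experimental design.

```python
import random

# Toy world: each "exposure" pairs a spoken word with a set of visible candidate referents.
TRUE_MEANING = {"dax": "ball", "blicket": "cup", "toma": "dog"}
OBJECTS = ["ball", "cup", "dog", "shoe", "book"]

def exposure(word, n_distractors=2):
    distractors = random.sample([o for o in OBJECTS if o != TRUE_MEANING[word]], n_distractors)
    return {TRUE_MEANING[word]} | set(distractors)

def cross_situational(word, n_exposures=5):
    """Keep every referent consistent with all exposures: intersect the scenes."""
    candidates = set(OBJECTS)
    for _ in range(n_exposures):
        candidates &= exposure(word)
    return candidates   # ideally shrinks toward the true referent

def propose_but_verify(word, n_exposures=5):
    """Keep a single guess; replace it only when a later scene contradicts it."""
    guess = None
    for _ in range(n_exposures):
        scene = exposure(word)
        if guess is None or guess not in scene:
            guess = random.choice(sorted(scene))
        # otherwise the guess is "verified" and carried forward
    return guess

if __name__ == "__main__":
    random.seed(0)
    print("cross-situational:", {w: cross_situational(w) for w in TRUE_MEANING})
    print("propose-but-verify:", {w: propose_but_verify(w) for w in TRUE_MEANING})
```

With enough exposures, the cross-situational learner's candidate set shrinks toward the true referent, while the propose-but-verify learner carries a single guess that survives only as long as it keeps being confirmed.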
The results of the experiment seems to support that propose but verify is the way by which learners fast map new words.[12] There is also controversy over whether words learned by fast mapping are retained or forgotten. Previous research has found that generally, children retain a newly learned word for a period of time after learning. In the aforementioned Carey and Bartlett study (1978), children who were taught the word "chromium" were found to keep the new lexical entry in working memory for several days, illustrating a process of gradual lexical alignment known as "extended mapping."[13]Another study, performed by Markson and Bloom (1997), showed that children remembered words up to 1 month after the study was conducted. However, more recent studies have shown that words learned by fast mapping tend to be forgotten over time. In a study conducted by Vlach and Sandhofer (2012), memory supports, which had been included in previous studies, were removed. This removal appeared to result in a low retention of words over time. This is a possible explanation for why previous studies showed high retention of words learned by fast mapping.[14]: 46 Some researchers are concerned that experiments testing for fast mapping are produced in artificial settings. They feel that fast mapping doesn't occur as often in more real life, natural situations. They believe that testing for fast mapping should focus more on the actual understanding of a word instead of just its reproduction. For some, testing to see if the child can use the new word in a different situation constitutes true knowledge of a word, rather than simply identifying the new word.[11] When learning novel words, it is believed that early exposure to multiple linguistic systems facilitates the acquisition of new words later in life. This effect was referred to by Kaushanskaya and Marian (2009) as the bilingual advantage.[15]That being said, a bilingual individual's ability to fast map can vary greatly throughout their life. During the language acquisition process, a child may require a greater amount of time to determine a correct referent than a child who is a monolingual speaker.[16]By the time a bilingual child is of school age, they perform equally on naming tasks when compared to monolingual children.[17]By the age of adulthood, bilingual individuals have acquired word-learning strategies believed to be of assistance on fast mapping tasks.[18]One example is speech practice, a strategy where the participant listens and reproduces the word in order to assist in remembering and decrease the likelihood of forgetting .[19]Bilingualism can increase an individual's cognitive abilities and contribute to their success in fast mapping words, even when they are using a nonnative language.[19] Children growing up in a low-socioeconomic status environment receive less attention than those in high-socioeconomic status environments. As a result, these children may be exposed to fewer words and therefore their language development may suffer.[20]On norm-references vocabulary tests, children from low- socioeconomic homes tend to score lower than same-age children from a high-socioeconomic environment. However, when examining their fast mapping abilities there were no significant differences observed in their ability to learn and remember novel words.[21]Children from low SES families were able to use multiple sources of information in order to fast map novel words. 
When working with children from low SES homes, providing a context of the word that attributes meaning, is a linguistic strategy that can benefit the child's word knowledge development.[22] Three learning supports that have been proven to help with the fast mapping of words are saliency, repetition and generation of information.[14]The amount offace-to-face interactiona child has with their parent affects his or her ability to fast map novel words. Interaction with a parent leads to greater exposure to words in different contexts, which in turn promotes language acquisition. Face to face interaction cannot be replaced by educational shows because although repetition is used, children do not receive the same level of correction or trial and error from simply watching.[23]When a child is asked to generate the word it promotes the transition to long-term memory to a larger extent.[24] It appears that fast mapping is not only limited to humans, but can occur in dogs as well. The first example of fast mapping in dogs was published in 2004. In it, a dog named Rico was able to learn the labels of over 200 various items. He was also able to identify novel objects simply by exclusion learning. Exclusion learning occurs when one learns the name of a novel object because one is already familiar with the names of other objects belonging to the same group. The researchers, who conducted the experiment, mention the possibility that a language acquisition device specific to humans does not control fast mapping. They believe that fast mapping is possibly directed by simple memory mechanisms.[25] In 2010, a second example was published. This time, a dog namedChaserdemonstrated, in a controlled research environment, that she had learned over 1000 object names. She also demonstrated that she could attribute these objects to named categories through fast mapping inferential reasoning.[26]It's important to note that, at the time of publication, Chaser was still learning object names at the same pace as before. Thus, her 1000 words, orlexicals, should not be regarded as an upper limit, but a benchmark. While there are many components of language that were not demonstrated in this study, the 1000 word benchmark is remarkable because many studies on language learning correlate a 1000 lexical vocabulary with, roughly, 75% spoken language comprehension.[27][28][29] Another study on Chaser was published in 2013. In this study, Chaser demonstrated flexible understanding of simple sentences. In these sentences,syntaxwas altered in various contexts to prove she had not just memorized full phrases or inferred the expectation through gestures from her evaluators.[30]Discovering this skill in a dog is noteworthy on its own, but verb meaning can be fast mapped through syntax.[31]This creates questions about whatparts of speechdogs could infer, as previous studies focused on nouns. These findings create further questions about the fast mapping abilities of dogs when viewed in light of a study published inSciencein 2016 that proved dogs processlexicalandintonationalcues separately.[32]That is, they respond to both tone and word meaning.[33] However, excitement about the fast-mapping skills of dogs should be tempered. Research in humans has found fast-mapping abilities and vocabulary size are not correlated in unenriched environments. Research has determined that language exposure alone is not enough to develop vocabulary through fast-mapping. 
Instead, the learner needs to be an active participant in communications to convert fast-mapping abilities into vocabulary.[21][22][23] It is not commonplace to communicate with dogs, nor any non-primate animal, in a productive fashion as they are non-verbal.[34][35]As such, Chaser's vocabulary and sentence comprehension is attributed toDr. Pilley's rigorous methodology.[30] A study by Lederberg et al., was performed to determine if deaf and hard of hearing children fast map to learn novel words. In the study, when the novel word was introduced, the word was both spoken and signed. Then the children were asked to identify the referent object and even extend the novel word to identify a similar object. The results of the study indicated that deaf and hard of hearing children do perform fast mapping to learn novel words. However, compared to children with normal hearing (aging toddlers to 5 years old) the deaf and hard of hearing children did not fast map as accurately and successfully. The results showed a slight delay which disappeared as the children were a maximum of 5 years old. The conclusion that was drawn from the study is that the ability to fast map has a relationship to the size of the lexicon. The children with normal hearing had a larger lexicon and therefore were able to more accurately fast map compared to deaf and hard of hearing children who did not have as large of a lexicon. It is by around age 5 that deaf and hard of hearing children have a similar size lexicon to 5-year-old children of normal hearing. This evidence supports the idea that fast mapping requires inductive reasoning so the larger the lexicon (number of known words) the easier it is for the child to reason out the accurate meaning for the novel word.[36] In the area of cochlear implants (CIs), there are variegated opinions on whether cochlear implants impact a child's ability to become a more successful fast mapper. In 2000, a study by Kirk, Myomoto, and others determined that there was a general correlation between the age of Cochlear Implant implementation and improved lexical skills (e.g. fast mapping and other vocabulary growth skills). They believed that children given implants prior to two years of age yielded higher success rates than older children between five and seven years of age. With that said, researchers at the University of Iowa wish to amend that very generalization. In 2013, "Word Learning Processes in Children with Cochlear Implants" by Elizabeth Walker and others indicated that although there may be some levels of increased vocabulary acquisition in CI individuals, many post-implantees generally were slower developers of his/her own lexicon. Walker bases her claims on another research study in 2007 (Tomblin et al.) One of the purposes of this study was to note a CI child's ability to comprehend and retain novel words with related referents. When compared with non-deaf children, the CI children had lower success scores in retention. This finding was based on scorings obtained from their test: from 0 to 6 (0 the worst, 6 the best), CI children averaged a score around a 2.0 whereas non-deaf children scored higher (roughly 3.86).[37] An experiment was performed to assess fast mapping in adults with typical language abilities, disorders of spoken/written language (hDSWL), and adults with hDSWL and ADHD. The conclusion draws from the experiment revealed that adults with ADHD were the least accurate at "mapping semantic features and slower to respond to lexical labels." 
The article reasoned that the task of fast mapping requires high attentional demand, so "a lapse in attention could lead to diminished encoding of the new information."[38] Fast mapping in individuals with aphasia has gained research attention due to its effect on speaking, listening, reading, and writing. Research done by Blumstein makes an important distinction between those with Broca's aphasia, who are limited in physical speech, and those with Wernicke's aphasia, who cannot link words with meaning. Blumstein found that whereas individuals with Wernicke's aphasia performed at the same level as the normal control group, those with Broca's aphasia showed slower reaction times for word presentations after reduced voice onset time stimuli.[39] In short, when stimuli were acoustically altered, individuals with Broca's aphasia experienced difficulty recognizing the novel stimuli upon second presentation. Blumstein's findings reinforce the crucial difference between the ability to retain novel stimuli and the ability to express novel stimuli. Because individuals with Wernicke's aphasia are only limited in their understanding of semantic meaning, it makes sense that the participants' novel stimulus recall would not be affected. On the other hand, those with Broca's aphasia lack the ability to produce speech, in effect hindering their ability to recall novel stimuli. Although individuals with Broca's aphasia are limited in their speech production, it is not clear whether they simply cannot formulate the physical speech or whether they actually did not process the stimuli. Research has also been done investigating fast mapping abilities in children with language deficits. One study done by Dollaghan compared children with normal language to those with expressive syntactic deficits, a type of specific language impairment characterized by simplified speech. The study found that normal and language-impaired children did not differ in their ability to connect the novel word to a referent or to comprehend the novel word after a single exposure. The only difference was that the language-impaired children were less successful in their production of the novel word.[40] This implies that expressive language deficits are unrelated to the ability to connect word and referent in a single exposure. The problem for children with those deficits arises only when trying to convert that mental representation into verbal speech. A few researchers looked at fast mapping abilities in boys with autistic spectrum disorders (ASD), also referred to as autism spectrum, and boys with fragile X syndrome (FXS). The experimental procedure consisted of a presentation phase where two objects were presented, one of which was a novel object with a nonsense word name. This was followed by a comprehension testing phase, which assessed the boys' ability to remember and correctly select the novel objects. Even though all groups in the study had fast mapping performances above chance levels, in comparison to boys showing typical development, those with ASD and FXS demonstrated much more difficulty in comprehending and remembering the names assigned to the novel objects. The authors concluded that initial processes involved in associative learning, such as fast mapping, are hindered in boys with FXS and ASD.[41] Research in artificial intelligence and machine learning seeks to reproduce this ability computationally, an approach termed one-shot learning.
This is pursued to reduce the learning curve, as other approaches, such as reinforcement learning, need thousands of exposures to a situation in order to learn it.
https://en.wikipedia.org/wiki/Fast_mapping
Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory (i.e. a formal theory of an application domain akin to a domain model in ontology engineering, not to be confused with Scott's domain theory) in order to make generalizations or form concepts from training examples.[1] It is also linked with encoding (memory) to help with learning.[2] An example of EBL using a perfect domain theory is a program that learns to play chess through example. A specific chess position that contains an important feature such as "Forced loss of black queen in two moves" includes many irrelevant features, such as the specific scattering of pawns on the board. EBL can take a single training example and determine the relevant features in order to form a generalization.[3] A domain theory is perfect or complete if it contains, in principle, all information needed to decide any question about the domain. For example, the domain theory for chess is simply the rules of chess. Knowing the rules, it is in principle possible to deduce the best move in any situation. However, actually making such a deduction is impossible in practice due to combinatorial explosion. EBL uses training examples to make searching for deductive consequences of a domain theory efficient in practice. In essence, an EBL system works by finding a way to deduce each training example from the system's existing database of domain theory. Having a short proof of the training example extends the domain-theory database, enabling the EBL system to find and classify future examples that are similar to the training example very quickly.[4] The main drawback of the method (the cost of applying the learned proof macros as these become numerous) was analyzed by Minton.[5] EBL software takes four inputs: An especially good application domain for EBL is natural language processing (NLP). Here a rich domain theory, i.e. a natural language grammar (although neither perfect nor complete), is tuned to a particular application or particular language usage, using a treebank (training examples). Rayner pioneered this work.[7] The first successful industrial application was to a commercial NL interface to relational databases.[8] The method has been successfully applied to several large-scale natural language parsing systems,[9] where the utility problem was solved by omitting the original grammar (domain theory) and using specialized LR-parsing techniques, resulting in huge speed-ups, at a cost in coverage, but with a gain in disambiguation. EBL-like techniques have also been applied to surface generation, the converse of parsing.[10] When applying EBL to NLP, the operationality criteria can be hand-crafted,[11] or can be inferred from the treebank using either the entropy of its or-nodes[12] or a target coverage/disambiguation trade-off (= recall/precision trade-off = f-score).[13] EBL can also be used to compile grammar-based language models for speech recognition, from general unification grammars.[14] Note how the utility problem, first exposed by Minton, was solved by discarding the original grammar/domain theory, and that the quoted articles tend to contain the phrase grammar specialization, quite the opposite of the original term explanation-based generalization. Perhaps the best name for this technique would be data-driven search space reduction. Other people who worked on EBL for NLP include Guenther Neumann, Aravind Joshi, Srinivas Bangalore, and Khalil Sima'an.
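As a toy illustration of the proof-macro idea described above, the following propositional sketch uses the classic "cup" domain theory from the EBL literature (not an example taken from this article): it deduces one training example from a small rule base and caches the proof's operational leaves as a macro that classifies later cases without re-deriving the proof.

```python
# Minimal propositional EBL sketch. Rules map a head to alternative bodies;
# "operational" predicates are the directly observable features.
DOMAIN_THEORY = {
    "cup":         [["liftable", "stable", "open_vessel"]],
    "liftable":    [["light", "has_handle"]],
    "stable":      [["has_flat_bottom"]],
    "open_vessel": [["has_concavity", "points_up"]],
}
OPERATIONAL = {"light", "has_handle", "has_flat_bottom", "has_concavity", "points_up"}

def explain(goal, facts):
    """Backward-chain; return the operational leaves supporting `goal`, or None."""
    if goal in OPERATIONAL:
        return {goal} if goal in facts else None
    for body in DOMAIN_THEORY.get(goal, []):
        leaves, ok = set(), True
        for sub in body:
            sub_leaves = explain(sub, facts)
            if sub_leaves is None:
                ok = False
                break
            leaves |= sub_leaves
        if ok:
            return leaves
    return None

def generalize(goal, training_example):
    """EBL step: the explanation of one example becomes a macro rule."""
    leaves = explain(goal, training_example)
    return (goal, frozenset(leaves)) if leaves is not None else None

# One training example is enough to form the macro...
example = {"light", "has_handle", "has_flat_bottom", "has_concavity", "points_up"}
macro = generalize("cup", example)      # ("cup", frozenset of operational conditions)

# ...which then classifies new cases with a single subset test instead of a proof search.
new_case = {"light", "has_handle", "has_flat_bottom", "has_concavity", "points_up", "red"}
print(macro is not None and macro[1] <= new_case)   # True
```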
https://en.wikipedia.org/wiki/Explanation-based_learning
Construct validityconcerns how well a set ofindicators represent or reflect a concept that is not directly measurable.[1][2][3]Construct validationis the accumulation of evidence to support the interpretation of what a measure reflects.[1][4][5][6]Modern validity theory defines construct validity as the overarching concern of validity research, subsuming all other types of validity evidence[7][8]such ascontent validityandcriterion validity.[9][10] Construct validity is the appropriateness of inferences made on the basis of observations or measurements (often test scores), specifically whether a test can reasonably be considered to reflect the intendedconstruct. Constructs are abstractions that are deliberately created by researchers in order to conceptualize thelatent variable, which is correlated with scores on a given measure (although it is not directly observable). Construct validity examines the question: Does the measure behave like the theory says a measure of that construct should behave? Construct validity is essential to the perceived overall validity of the test. Construct validity is particularly important in thesocial sciences,psychology,psychometricsand language studies. Psychologists such asSamuel Messick(1998) have pushed for a unified view of construct validity "...as an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores..."[11]While Messick's views are popularized in educational measurement and originated in a career around explaining validity in the context of the testing industry, a definition more in line with foundational psychological research, supported by data-driven empirical studies that emphasize statistical and causal reasoning was given by (Borsboom et al., 2004).[12] Key to construct validity are the theoretical ideas behind the trait under consideration, i.e. the concepts that organize how aspects ofpersonality,intelligence, etc. are viewed.[13]Paul Meehlstates that, "The best construct is the one around which we can build the greatest number of inferences, in the most direct fashion."[1] Scale purification, i.e. "the process of eliminating items from multi-item scales" (Wieland et al., 2017) can influence construct validity. A framework presented by Wieland et al. (2017) highlights that both statistical and judgmental criteria need to be taken under consideration when making scale purification decisions.[14] Throughout the 1940s scientists had been trying to come up with ways to validate experiments prior to publishing them. The result of this was a plethora of different validities (intrinsic validity,face validity,logical validity,empirical validity, etc.). This made it difficult to tell which ones were actually the same and which ones were not useful at all. Until the middle of the 1950s, there were very few universally accepted methods to validate psychological experiments. The main reason for this was because no one had figured out exactly which qualities of the experiments should be looked at before publishing. Between 1950 and 1954 the APA Committee on Psychological Tests met and discussed the issues surrounding the validation of psychological experiments.[1] Around this time the term construct validity was first coined byPaul MeehlandLee Cronbachin their seminal article "Construct Validity In Psychological Tests". 
They noted the idea that construct validity was not new at that point; rather, it was a combination of many different types of validity dealing with theoretical concepts. They proposed the following three steps to evaluate construct validity: Many psychologists noted that an important role of construct validation inpsychometricswas that it placed more emphasis on theory as opposed to validation. This emphasis was designed to address a core requirement that validation include some demonstration that the test measures the theoretical construct it purported to measure. Construct validity has three aspects or components: the substantive component, structural component, and external component.[15]They are closely related to three stages in the test construction process: constitution of the pool of items, analysis and selection of the internal structure of the pool of items, and correlation of test scores with criteria and other variables. In the 1970s there was growing debate between theorists who began to see construct validity as the dominant model pushing towards a more unified theory of validity, and those who continued to work from multiple validity frameworks.[16]Many psychologists and education researchers saw "predictive, concurrent, and content validities as essentiallyad hoc, construct validity was the whole of validity from a scientific point of view"[15]In the 1974 version ofTheStandards for Educational and Psychological Testingthe inter-relatedness of the three different aspects of validity was recognized: "These aspects of validity can be discussed independently, but only for convenience. They are interrelated operationally and logically; only rarely is one of them alone important in a particular situation". In 1989 Messick presented a new conceptualization of construct validity as a unified and multi-faceted concept.[17]Under this framework, all forms of validity are connected to and are dependent on the quality of the construct. He noted that a unified theory was not his own idea, but rather the culmination of debate and discussion within the scientific community over the preceding decades. There are six aspects of construct validity in Messick's unified theory of construct validity:[18] How construct validity should properly be viewed is still a subject of debate for validity theorists. The core of the difference lies in anepistemologicaldifference betweenpositivistandpostpositivisttheorists. Evaluation of construct validity requires that the correlations of the measure be examined in regard to variables that are known to be related to the construct (purportedly measured by the instrument being evaluated or for which there are theoretical grounds for expecting it to be related). This is consistent with themultitrait-multimethod matrix(MTMM) of examining construct validity described in Campbell and Fiske's landmark paper (1959).[19]There are other methods to evaluate construct validity besides MTMM. It can be evaluated through different forms offactor analysis,structural equation modeling(SEM), and other statistical evaluations.[20][21]It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development. Correlations that fit the expected pattern contribute evidence of construct validity. 
Construct validity is a judgment based on the accumulation of correlations from numerous studies using the instrument being evaluated.[22] Most researchers attempt to test construct validity before the main research. To do this, pilot studies may be utilized. Pilot studies are small-scale preliminary studies aimed at testing the feasibility of a full-scale test. Pilot studies help researchers establish the strength of their research and allow them to make any necessary adjustments. Another method is the known-groups technique, which involves administering the measurement instrument to groups expected to differ due to known characteristics. Hypothesized relationship testing involves logical analysis based on theory or prior research.[6] Intervention studies are yet another method of evaluating construct validity. Intervention studies, in which a group with low scores on the construct is tested, taught the construct, and then re-measured, can demonstrate a test's construct validity. If there is a significant difference between pre-test and post-test scores, as analyzed by statistical tests, this may demonstrate good construct validity.[23] Convergent and discriminant validity are the two subtypes of validity that make up construct validity. Convergent validity refers to the degree to which two measures of constructs that theoretically should be related are in fact related. In contrast, discriminant validity tests whether concepts or measurements that are supposed to be unrelated are, in fact, unrelated.[19] Take, for example, a construct of general happiness. If a measure of general happiness had convergent validity, then constructs similar to happiness (satisfaction, contentment, cheerfulness, etc.) should relate positively to the measure of general happiness. If this measure has discriminant validity, then constructs that are not supposed to be related positively to general happiness (sadness, depression, despair, etc.) should not relate to the measure of general happiness. Measures can have one of the subtypes of construct validity and not the other. Using the example of general happiness, a researcher could create an inventory where there is a very high positive correlation between general happiness and contentment, but if there is also a significant positive correlation between happiness and depression, then the measure's construct validity is called into question. The test has convergent validity but not discriminant validity (a brief computational sketch of this pattern is given below). Lee Cronbach and Paul Meehl (1955)[1] proposed that the development of a nomological net was essential to the measurement of a test's construct validity. A nomological network defines a construct by illustrating its relation to other constructs and behaviors. It is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships among them. It examines whether the relationships between similar constructs are consistent with the relationships between the observed measures of those constructs. A thorough observation of constructs' relationships to each other can generate new constructs. For example, intelligence and working memory are considered highly related constructs.
Through the observation of their underlying components psychologists developed new theoretical constructs such as: controlled attention[24]and short term loading.[25]Creating a nomological net can also make the observation and measurement of existing constructs more efficient by pinpointing errors.[1]Researchers have found that studying the bumps on the human skull (phrenology) are not indicators of intelligence, but volume of the brain is. Removing the theory of phrenology from the nomological net of intelligence and adding the theory of brain mass evolution, constructs of intelligence are made more efficient and more powerful. The weaving of all of these interrelated concepts and their observable traits creates a "net" that supports their theoretical concept. For example, in the nomological network for academic achievement, we would expect observable traits of academic achievement (i.e. GPA, SAT, and ACT scores) to relate to the observable traits for studiousness (hours spent studying, attentiveness in class, detail of notes). If they do not then there is a problem with measurement (ofacademic achievementor studiousness), or with the purported theory of achievement. If they are indicators of one another then the nomological network, and therefore the constructed theory, of academic achievement is strengthened. Although the nomological network proposed a theory of how to strengthen constructs, it doesn't tell us how we can assess the construct validity in a study. Themultitrait-multimethod matrix(MTMM) is an approach to examining construct validity developed by Campbell and Fiske (1959).[19]This model examines convergence (evidence that different measurement methods of a construct give similar results) and discriminability (ability to differentiate the construct from other related constructs). It measures six traits: the evaluation of convergent validity, the evaluation of discriminant (divergent) validity, trait-method units, multitrait-multimethods, truly different methodologies, and trait characteristics. This design allows investigators to test for: "convergence across different measures...of the same 'thing'...and for divergence between measures...of related but conceptually distinct 'things'.[2][26] Apparent construct validity can be misleading due to a range of problems in hypothesis formulation and experimental design. An in-depth exploration of the threats to construct validity is presented in Trochim.[31]
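To make the convergent/discriminant pattern from the general-happiness example concrete, here is a minimal sketch with simulated scores; the data-generating assumptions and the thresholds are purely illustrative, not standard cutoffs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated respondents: a latent "happiness" trait drives the related scales.
latent = rng.normal(size=n)
happiness   = latent + 0.3 * rng.normal(size=n)    # the measure being validated
contentment = latent + 0.4 * rng.normal(size=n)    # theoretically related construct
depression  = -0.1 * latent + rng.normal(size=n)   # theoretically (near-)unrelated here

def r(a, b):
    return float(np.corrcoef(a, b)[0, 1])

convergent_r = r(happiness, contentment)    # should be high if convergent validity holds
discriminant_r = r(happiness, depression)   # should be near zero if discriminant validity holds

print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
print("pattern consistent with construct validity:",
      convergent_r > 0.5 and abs(discriminant_r) < 0.3)   # illustrative thresholds
```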
https://en.wikipedia.org/wiki/Construct_validity
In psychometrics, content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given construct. For example, a depression scale may lack content validity if it only assesses the affective dimension of depression but fails to take into account the behavioral dimension. An element of subjectivity exists in relation to determining content validity, which requires a degree of agreement about what a particular personality trait such as extraversion represents. Disagreement about a personality trait will prevent a measure from attaining high content validity.[1] Content validity is different from face validity, which refers not to what the test actually measures, but to what it superficially appears to measure. Face validity assesses whether the test "looks valid" to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers. Content validity requires the use of recognized subject matter experts to evaluate whether test items assess defined content, and more rigorous statistical tests than does the assessment of face validity. Content validity is most often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area (e.g., history) or job skill (e.g., accounting). In clinical settings, content validity refers to the correspondence between test items and the symptom content of a syndrome. One widely used method of measuring content validity was developed by C. H. Lawshe. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is. In an article regarding pre-employment testing, Lawshe (1975)[2] proposed that each of the subject matter expert raters (SMEs) on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item 'essential,' 'useful, but not essential,' or 'not necessary' to the performance of the job?" According to Lawshe, if more than half the panelists indicate that an item is essential, that item has at least some content validity. Greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential. Using these assumptions, Lawshe developed a formula termed the content validity ratio: CVR = (n_e − N/2) / (N/2), where CVR = content validity ratio, n_e = number of SME panelists indicating "essential", and N = total number of SME panelists. This formula yields values which range from +1 to −1; positive values indicate that at least half the SMEs rated the item as essential. The mean CVR across items may be used as an indicator of overall test content validity. Lawshe (1975) provided a table of critical values for the CVR by which a test evaluator could determine, for a pool of SMEs of a given size, the size of a calculated CVR necessary to exceed chance expectation. This table had been calculated for Lawshe by his friend, Lowell Schipper. Close examination of this published table revealed an anomaly: in Schipper's table, the critical value for the CVR increases monotonically from the case of 40 SMEs (minimum value = .29) to the case of 9 SMEs (minimum value = .78), only to unexpectedly drop at the case of 8 SMEs (minimum value = .75) before hitting its ceiling value at the case of 7 SMEs (minimum value = .99).
However, when the formula is applied to 8 raters, 7 "essential" ratings and 1 other rating yield a CVR of .75. If .75 were not the critical value, then 8 out of 8 "essential" ratings would be needed, which would yield a CVR of 1.00. In that case, to be consistent with the ascending order of critical values, the value for 8 raters would have to be 1.00, but that would itself break the pattern, since a "perfect" value would be required for 8 raters yet not for panels with either more or fewer raters. Whether this departure from the table's otherwise monotonic progression was due to a calculation error on Schipper's part or an error in typing or typesetting is unclear. Wilson, Pan & Schumsky (2012), seeking to correct the error, found no explanation in Lawshe's writings nor in any publications by Schipper describing how the table of critical values was computed. Wilson and colleagues determined that the Schipper values were close approximations to the normal approximation to the binomial distribution. By comparing Schipper's values to the newly calculated binomial values, they also found that Lawshe and Schipper had erroneously labeled their published table as representing a one-tailed test when in fact the values mirrored the binomial values for a two-tailed test. Wilson and colleagues published a recalculation of critical values for the content validity ratio, providing critical values in unit steps at multiple alpha levels.[3] The table of values is the following one:[2]
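Separately from the critical-value table, the CVR computation itself is straightforward; a minimal sketch with hypothetical ratings, reproducing the 7-of-8 case discussed above:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 8 SMEs, 7 of whom rate an item "essential".
print(content_validity_ratio(7, 8))   # 0.75, the value discussed above

# Mean CVR across items as a rough indicator of overall test content validity.
ratings = {"item1": (7, 8), "item2": (8, 8), "item3": (5, 8)}   # hypothetical counts
cvrs = {item: content_validity_ratio(*counts) for item, counts in ratings.items()}
print(cvrs, sum(cvrs.values()) / len(cvrs))
```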
https://en.wikipedia.org/wiki/Content_validity
Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means (as opposed to discrete digital signal processing, where the signal processing is carried out by a digital process). "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital", which uses a series of discrete quantities to represent the signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in electronic devices. An error or noise affecting such physical quantities results in a corresponding error in the signals they represent.

Examples of analog signal processing include crossover filters in loudspeakers; "bass", "treble" and "volume" controls on stereos; and "tint" controls on TVs. Common analog processing elements include capacitors, resistors and inductors (as the passive elements) and transistors or op-amps (as the active elements).

A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number of the form s = a + ib, or s = a + jb in electrical engineering terms (electrical engineers use "j" instead of "i" because current is represented by the variable i). Input signals are usually called x(t) or X(s), and output signals are usually called y(t) or Y(s).

Convolution is the basic concept in signal processing stating that an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is *.

(f*g)(t) = ∫_a^b f(τ) g(t − τ) dτ

That is the convolution integral and is used to find the convolution of a signal and a system; typically a = −∞ and b = +∞. Consider two waveforms f and g. By calculating the convolution, we determine how strongly a reversed and shifted copy of g overlaps with f at each shift. The convolution essentially reverses g and slides it along the axis, calculating the integral of the product of f and the reversed, shifted g for each possible amount of sliding. When the functions match, the value of (f*g) is maximized, because when positive areas (peaks) or negative areas (troughs) are multiplied, they contribute positively to the integral.

The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain, but it only works for certain functions. The constraint on which systems or signals can be transformed by the Fourier transform is that the signal must be absolutely integrable:

∫_{−∞}^{∞} |x(t)| dt < ∞

This is the Fourier transform integral:

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

Usually the Fourier transform integral isn't used directly to determine the transform; instead, a table of transform pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from the frequency domain back to the time domain:

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω

Each signal or system that can be transformed has a unique Fourier transform: there is only one time signal for any frequency signal, and vice versa.

The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal, because it is a transform into the complex plane rather than just onto the jω line like the Fourier transform. The major difference is that the Laplace transform has a region of convergence for which the transform is valid. This implies that a signal in frequency may correspond to more than one signal in time; the correct time signal for the transform is determined by the region of convergence.
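As a numerical illustration of the convolution integral introduced above, the following sketch approximates the output of an assumed first-order system with impulse response h(t) = (1/RC)·e^(−t/RC); the RC value and the input signal are arbitrary choices for the example, not taken from the text.

import numpy as np

dt = 1e-4                          # time step for the discrete approximation
t = np.arange(0.0, 0.05, dt)

RC = 1e-3                          # assumed RC time constant (1 ms)
h = (1.0 / RC) * np.exp(-t / RC)   # impulse response of a first-order low-pass system
x = np.sin(2 * np.pi * 200 * t)    # example input: 200 Hz sinusoid

# Discrete approximation of the convolution integral:
# y(t) = integral of x(tau) * h(t - tau) dtau
y = np.convolve(x, h)[: len(t)] * dt

print(y[:5])                       # first few output samples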
If the region of convergence includes the jω axis, jω can be substituted into the Laplace transform for s, and the result is the same as the Fourier transform. The Laplace transform is

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt

and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is

x(t) = (1/(2πj)) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds

Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in decibels (dB), the phase axis is in either degrees or radians, and the frequency axis is on a logarithmic scale. These plots are useful because, for a sinusoidal input, the output is the input multiplied by the value of the magnitude plot at that frequency and shifted in phase by the value of the phase plot at that frequency.

The time domain is the domain most people are familiar with: a plot in the time domain shows the amplitude of the signal with respect to time. A plot in the frequency domain shows either the phase shift or the magnitude of a signal at each frequency it contains. These plots can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot.

While any signal can be used in analog signal processing, a few types of signals appear very frequently.

Sinusoids are the building blocks of analog signal processing. All real-world signals can be represented as an infinite sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential by applying Euler's formula.

An impulse (Dirac delta function) is defined as a signal that has infinite magnitude and infinitesimally narrow width, with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids that includes all possible frequencies. It is not possible, in reality, to generate such a signal, but it can be approximated well enough by a large-amplitude, narrow pulse to produce the theoretical impulse response of a network to a high degree of accuracy. The symbol for an impulse is δ(t). If an impulse is used as the input to a system, the output is known as the impulse response. The impulse response defines the system, because all possible frequencies are represented in the input.

A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before zero and a magnitude of one after zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a switch. The period before the output stabilizes is called the transient part of the signal. The step response can be multiplied with other signals to show how the system responds when an input is suddenly turned on. The unit step function is related to the Dirac delta function by

u(t) = ∫_{−∞}^{t} δ(τ) dτ,   or equivalently   δ(t) = du(t)/dt.

Linearity means that if two inputs produce two corresponding outputs, then a linear combination of those inputs produces the same linear combination of the outputs. An example of a linear system is a first-order low-pass or high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties. These devices do not have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a non-linear device, but it has a region of operation that is linear, so it can be modeled as linear within that region of operation.
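To connect the Bode-plot description with the first-order low-pass filter just mentioned as an example of a linear system, here is a brief sketch using SciPy; the 1 kHz cutoff frequency is an arbitrary assumption for the example.

import numpy as np
from scipy import signal

# First-order low-pass filter H(s) = 1 / (s/wc + 1), with an assumed cutoff of 1 kHz
wc = 2 * np.pi * 1000.0
system = signal.TransferFunction([1.0], [1.0 / wc, 1.0])

# Magnitude (dB) and phase (degrees) versus angular frequency (rad/s)
w, mag, phase = signal.bode(system, w=np.logspace(2, 6, 200))

# At the cutoff the magnitude should be about -3 dB and the phase about -45 degrees
idx = np.argmin(np.abs(w - wc))
print(round(mag[idx], 1), round(phase[idx], 1))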
Time-invariance means that it does not matter when you start a system; the same output will result. For example, if you have a system and put an input into it today, you would get the same output if you started the system tomorrow instead. No real system is truly LTI, but many systems can be modeled as LTI for simplicity in determining their output. All systems depend to some degree on factors such as temperature or signal level that make them non-linear or non-time-invariant, but most are stable enough to model as LTI. Linearity and time-invariance are important because systems with these properties are the ones that can be analyzed straightforwardly with conventional analog signal processing methods. Once a system is non-linear or non-time-invariant, its analysis becomes a non-linear differential equations problem, and very few such problems can actually be solved. (Haykin & Van Veen 2003)
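Both defining properties can be checked numerically for a discrete approximation of an LTI system modeled by convolution; the impulse response and test signals below are arbitrary choices made only for the check.

import numpy as np

rng = np.random.default_rng(0)
h = np.exp(-np.arange(100) / 20.0)        # impulse response of an assumed stable LTI system

def lti(x):
    """Output of the system for input x, via discrete convolution."""
    return np.convolve(x, h)[: len(x)]

x1, x2 = rng.standard_normal(500), rng.standard_normal(500)
a, b = 2.0, -0.5

# Linearity: the response to a*x1 + b*x2 equals a*lti(x1) + b*lti(x2)
print(np.allclose(lti(a * x1 + b * x2), a * lti(x1) + b * lti(x2)))

# Time-invariance: delaying the input by k samples delays the output by k samples
k = 7
shifted_in = np.concatenate([np.zeros(k), x1[:-k]])
shifted_out = np.concatenate([np.zeros(k), lti(x1)[:-k]])
print(np.allclose(lti(shifted_in), shifted_out))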
https://en.wikipedia.org/wiki/Analog_signal_processing