In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for "maximum-margin" classification, most notably for support vector machines (SVMs).[1]

For an intended output t = ±1 and a classifier score y, the hinge loss of the prediction y is defined as

ℓ(y) = max(0, 1 − t·y).

Note that y should be the "raw" output of the classifier's decision function, not the predicted class label. For instance, in linear SVMs, y = w·x + b, where (w, b) are the parameters of the hyperplane and x is the input variable(s). When t and y have the same sign (meaning y predicts the right class) and |y| ≥ 1, the hinge loss ℓ(y) = 0. When they have opposite signs, ℓ(y) increases linearly with y, and similarly if |y| < 1, even if it has the same sign (correct prediction, but not by enough margin).

While binary SVMs are commonly extended to multiclass classification in a one-vs.-all or one-vs.-one fashion,[2] it is also possible to extend the hinge loss itself for such an end. Several different variations of multiclass hinge loss have been proposed.[3] For example, Crammer and Singer[4] defined it for a linear classifier as[5]

ℓ(y) = max(0, 1 + max_{y ≠ t} w_y·x − w_t·x),

where t is the target label and w_t and w_y are the model parameters. Weston and Watkins provided a similar definition, but with a sum rather than a max:[6][3]

ℓ(y) = Σ_{y ≠ t} max(0, 1 + w_y·x − w_t·x).

In structured prediction, the hinge loss can be further extended to structured output spaces. Structured SVMs with margin rescaling use the following variant, where w denotes the SVM's parameters, y the SVM's predictions, φ the joint feature function, and Δ the Hamming loss:

ℓ(y) = max(0, Δ(y, t) + ⟨w, φ(x, y)⟩ − ⟨w, φ(x, t)⟩).

The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to the model parameters w of a linear SVM with score function y = w·x, given by

∂ℓ/∂w = −t·x if t·y < 1, and 0 otherwise.

However, since the derivative of the hinge loss at t·y = 1 is undefined, smoothed versions may be preferred for optimization, such as Rennie and Srebro's[7]

ℓ(y) = 1/2 − t·y if t·y ≤ 0,  (1 − t·y)²/2 if 0 < t·y < 1,  0 if t·y ≥ 1,

or the quadratically smoothed

ℓ_γ(y) = (1/(2γ)) max(0, 1 − t·y)² if t·y ≥ 1 − γ,  1 − γ/2 − t·y otherwise,

suggested by Zhang.[8] The modified Huber loss L is a special case of this loss function with γ = 2, specifically L(t, y) = 4ℓ_2(y).
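As a concrete illustration of the binary hinge loss and its subgradient above, here is a minimal NumPy sketch; the function names and the toy data are illustrative, not taken from any particular library.

```python
import numpy as np

def hinge_loss(y_score, t):
    """Average binary hinge loss max(0, 1 - t*y); t holds labels encoded as +1 / -1."""
    return np.maximum(0.0, 1.0 - t * y_score).mean()

def hinge_subgradient(w, X, t):
    """A subgradient of the averaged hinge loss for a linear score y = X @ w.
    Samples with t*y >= 1 contribute 0; the others contribute -t*x."""
    y_score = X @ w
    active = (t * y_score) < 1            # margin violations
    return -(X[active] * t[active, None]).sum(axis=0) / len(t)

# Tiny usage example with made-up, linearly separable data.
X = np.array([[1.0, 2.0], [2.0, -1.0], [-1.0, -1.0]])
t = np.array([1.0, 1.0, -1.0])
w = np.zeros(2)
for _ in range(100):                      # plain subgradient descent, illustrative only
    w -= 0.1 * hinge_subgradient(w, X, t)
print(w, hinge_loss(X @ w, t))            # loss approaches 0 on separable data
```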
https://en.wikipedia.org/wiki/Hinge_loss
NumPy(pronounced/ˈnʌmpaɪ/NUM-py) is alibraryfor thePython programming language, adding support for large, multi-dimensionalarraysandmatrices, along with a large collection ofhigh-levelmathematicalfunctionsto operate on these arrays.[3]The predecessor of NumPy, Numeric, was originally created byJim Huguninwith contributions from several other developers. In 2005,Travis Oliphantcreated NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy isopen-source softwareand has many contributors. NumPy is fiscally sponsored byNumFOCUS.[4] The Python programming language was not originally designed for numerical computing, but attracted the attention of the scientific and engineering community early on. In 1995 thespecial interest group(SIG)matrix-sigwas founded with the aim of defining anarraycomputing package; among its members was Python designer and maintainerGuido van Rossum, who extendedPython's syntax(in particular the indexing syntax[5]) to makearray computingeasier.[6] An implementation of a matrix package was completed by Jim Fulton, then generalized[further explanation needed]by Jim Hugunin and calledNumeric[6](also variously known as the "Numerical Python extensions" or "NumPy"), with influences from theAPLfamily of languages, Basis,MATLAB,FORTRAN,SandS+, and others.[7][8]Hugunin, a graduate student at theMassachusetts Institute of Technology(MIT),[8]: 10joined theCorporation for National Research Initiatives(CNRI) in 1997 to work onJPython,[6]leaving Paul Dubois ofLawrence Livermore National Laboratory(LLNL) to take over as maintainer.[8]: 10Other early contributors include David Ascher, Konrad Hinsen andTravis Oliphant.[8]: 10 A new package calledNumarraywas written as a more flexible replacement for Numeric.[9]Like Numeric, it too is now deprecated.[10][11]Numarray had faster operations for large arrays, but was slower than Numeric on small ones,[12]so for a time both packages were used in parallel for different use cases. The last version of Numeric (v24.2) was released on 11 November 2005, while the last version of numarray (v1.5.2) was released on 24 August 2006.[13] There was a desire to get Numeric into the Python standard library, but Guido van Rossum decided that the code was not maintainable in its state then.[when?][14] In early 2005, NumPy developer Travis Oliphant wanted to unify the community around a single array package and ported Numarray's features to Numeric, releasing the result as NumPy 1.0 in 2006.[9]This new project was part ofSciPy. To avoid installing the large SciPy package just to get an array object, this new package was separated and called NumPy. Support for Python 3 was added in 2011 with NumPy version 1.5.0.[15] In 2011,PyPystarted development on an implementation of the NumPy API for PyPy.[16]As of 2023, it is not yet fully compatible with NumPy.[17] NumPy targets theCPythonreference implementationof Python, which is a non-optimizingbytecodeinterpreter.Mathematical algorithmswritten for this version of Python often run much slower thancompiledequivalents due to the absence of compiler optimization. NumPy addresses the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays; using these requires rewriting some code, mostlyinner loops, using NumPy. 
Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted,[18] and they both allow the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more modern and complete programming language. Moreover, complementary Python packages are available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides MATLAB-like plotting functionality. Although MATLAB can perform sparse matrix operations, NumPy alone cannot do so and requires the scipy.sparse library. Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient linear algebra computations.

Python bindings of the widely used computer vision library OpenCV utilize NumPy arrays to store and operate on data. Since images with multiple channels are simply represented as three-dimensional arrays, indexing, slicing or masking with other arrays are very efficient ways to access specific pixels of an image. The NumPy array as a universal data structure in OpenCV for images, extracted feature points, filter kernels and many more vastly simplifies the programming workflow and debugging.[citation needed] Importantly, many NumPy operations release the global interpreter lock, which allows for multithreaded processing.[19] NumPy also provides a C API, which allows Python code to interoperate with external libraries written in low-level languages.[20]

The core functionality of NumPy is its "ndarray", for n-dimensional array, data structure. These arrays are strided views on memory.[9] In contrast to Python's built-in list data structure, these arrays are homogeneously typed: all elements of a single array must be of the same type. Such arrays can also be views into memory buffers allocated by C/C++, Python, and Fortran extensions to the CPython interpreter without the need to copy data around, giving a degree of compatibility with existing numerical libraries. This functionality is exploited by the SciPy package, which wraps a number of such libraries (notably BLAS and LAPACK). NumPy has built-in support for memory-mapped ndarrays.[9]

Inserting or appending entries to an array is not as trivially possible as it is with Python's lists. The np.pad(...) routine to extend arrays actually creates a new array of the desired shape and padding values, copies the given array into it and returns it. NumPy's np.concatenate([a1, a2]) operation does not actually link the two arrays but returns a new one, filled with the entries from both given arrays in sequence. Reshaping the dimensionality of an array with np.reshape(...) is only possible as long as the number of elements in the array does not change. These circumstances originate from the fact that NumPy's arrays must be views on contiguous memory buffers.

Algorithms that are not expressible as a vectorized operation will typically run slowly because they must be implemented in "pure Python", while vectorization may increase the memory complexity of some operations from constant to linear, because temporary arrays must be created that are as large as the inputs. Runtime compilation of numerical code has been implemented by several groups to avoid these problems; open source solutions that interoperate with NumPy include numexpr[21] and Numba.[22] Cython and Pythran are static-compiling alternatives to these.
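A brief illustration of the copy semantics described above; np.pad, np.concatenate and np.reshape are the NumPy routines named in the text, while the specific arrays are arbitrary examples.

```python
import numpy as np

a = np.arange(6)                                # array([0, 1, 2, 3, 4, 5])
b = np.arange(3)

padded = np.pad(a, (1, 2), constant_values=0)   # new array; the original is untouched
joined = np.concatenate([a, b])                 # new array containing a's entries, then b's
grid = a.reshape(2, 3)                          # allowed only because 6 == 2 * 3

print(padded)            # [0 0 1 2 3 4 5 0 0]
print(joined)            # [0 1 2 3 4 5 0 1 2]
print(grid.shape)        # (2, 3)
print(grid.base is a)    # True: reshape returned a view onto a's buffer, not a copy
```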
Many modern large-scale scientific computing applications have requirements that exceed the capabilities of NumPy arrays. For example, NumPy arrays are usually loaded into a computer's memory, which might have insufficient capacity for the analysis of large datasets. Further, NumPy operations are executed on a single CPU. However, many linear algebra operations can be accelerated by executing them on clusters of CPUs or on specialized hardware, such as GPUs and TPUs, which many deep learning applications rely on. As a result, several alternative array implementations have arisen in the scientific Python ecosystem in recent years, such as Dask for distributed arrays and TensorFlow or JAX[23] for computations on GPUs. Because of NumPy's popularity, these often implement a subset of its API or mimic it, so that users can change their array implementation with minimal changes to their code.[3] A library named CuPy,[24] accelerated by Nvidia's CUDA framework, has also shown potential for faster computing, being a 'drop-in replacement' for NumPy.[25]

Examples in the original article contrast an iterative Python algorithm with a vectorized NumPy version, and show how to quickly wrap native code for faster scripts.[26][27][28]
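A rough sketch of that iterative-versus-vectorized comparison; the function names and the timing harness are illustrative, not taken from the original examples.

```python
import numpy as np
import timeit

def squared_distance_loop(a, b):
    """Pure-Python loop: one interpreted scalar operation per element."""
    total = 0.0
    for x, y in zip(a, b):
        total += (x - y) ** 2
    return total

def squared_distance_numpy(a, b):
    """Vectorized version: the loop runs inside compiled NumPy code."""
    d = a - b
    return float(np.dot(d, d))

a = np.random.rand(100_000)
b = np.random.rand(100_000)

print(timeit.timeit(lambda: squared_distance_loop(a, b), number=10))
print(timeit.timeit(lambda: squared_distance_numpy(a, b), number=10))
```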
https://en.wikipedia.org/wiki/Numpy
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the outcome or response variable, or a label in machine learning parlance) and one or more error-free independent variables (often called regressors, predictors, covariates, explanatory variables or features). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters (e.g., quantile regression or Necessary Condition Analysis[1]) or estimate the conditional expectation across a broader collection of non-linear models (e.g., nonparametric regression).

Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data.[2][3]

The earliest regression form was seen in Isaac Newton's work in 1700 while studying equinoxes; he has been credited with introducing "an embryonic linear regression analysis", as "Not only did he perform the averaging of a set of data, 50 years before Tobias Mayer, but summing the residuals to zero he forced the regression line to pass through the average point. He also distinguished between two inhomogeneous sets of data and might have thought of an optimal solution in terms of bias, though not in terms of effectiveness." He had previously used an averaging method in his 1671 work on Newton's rings, which was unprecedented at the time.[4][5] The method of least squares was published by Legendre in 1805,[6] and by Gauss in 1809.[7] Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821,[8] including a version of the Gauss–Markov theorem. The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon.
The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known asregression toward the mean).[9][10]For Galton, regression had only this biological meaning,[11][12]but his work was later extended byUdny YuleandKarl Pearsonto a more general statistical context.[13][14]In the work of Yule and Pearson, thejoint distributionof the response and explanatory variables is assumed to beGaussian. This assumption was weakened byR.A. Fisherin his works of 1922 and 1925.[15][16][17]Fisher assumed that theconditional distributionof the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821. In the 1950s and 1960s, economists usedelectromechanical desk calculatorsto calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.[18] Regression methods continue to be an area of active research. In recent decades, new methods have been developed forrobust regression, regression involving correlated responses such astime seriesandgrowth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data,nonparametric regression,Bayesianmethods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, andcausal inferencewith regression. Modern regression analysis is typically done with statistical andspreadsheetsoftware packages on computers as well as on handheldscientificandgraphing calculators. In practice, researchers first select a model they would like to estimate and then use their chosen method (e.g.,ordinary least squares) to estimate the parameters of that model. Regression models involve the following components: In variousfields of application, different terminologies are used in place ofdependent and independent variables. Most regression models propose thatYi{\displaystyle Y_{i}}is afunction(regression function) ofXi{\displaystyle X_{i}}andβ{\displaystyle \beta }, withei{\displaystyle e_{i}}representing anadditive error termthat may stand in for un-modeled determinants ofYi{\displaystyle Y_{i}}or random statistical noise: Note that the independent variablesXi{\displaystyle X_{i}}are assumed to be free of error. This important assumption is often overlooked, althougherrors-in-variables modelscan be used when the independent variables are assumed to contain errors. The researchers' goal is to estimate the functionf(Xi,β){\displaystyle f(X_{i},\beta )}that most closely fits the data. To carry out regression analysis, the form of the functionf{\displaystyle f}must be specified. Sometimes the form of this function is based on knowledge about the relationship betweenYi{\displaystyle Y_{i}}andXi{\displaystyle X_{i}}that does not rely on the data. If no such knowledge is available, a flexible or convenient form forf{\displaystyle f}is chosen. For example, a simple univariate regression may proposef(Xi,β)=β0+β1Xi{\displaystyle f(X_{i},\beta )=\beta _{0}+\beta _{1}X_{i}}, suggesting that the researcher believesYi=β0+β1Xi+ei{\displaystyle Y_{i}=\beta _{0}+\beta _{1}X_{i}+e_{i}}to be a reasonable approximation for the statistical process generating the data. 
Once researchers determine their preferred statistical model, different forms of regression analysis provide tools to estimate the parameters β. For example, least squares (including its most common variant, ordinary least squares) finds the value of β that minimizes the sum of squared errors Σ_i (Y_i − f(X_i, β))². A given regression method will ultimately provide an estimate of β, usually denoted β̂ to distinguish the estimate from the true (unknown) parameter value that generated the data. Using this estimate, the researcher can then use the fitted value Ŷ_i = f(X_i, β̂) for prediction or to assess the accuracy of the model in explaining the data. Whether the researcher is intrinsically interested in the estimate β̂ or the predicted value Ŷ_i will depend on context and their goals. As described in ordinary least squares, least squares is widely used because the estimated function f(X_i, β̂) approximates the conditional expectation E(Y_i | X_i).[7] However, alternative variants (e.g., least absolute deviations or quantile regression) are useful when researchers want to model other functions f(X_i, β).

It is important to note that there must be sufficient data to estimate a regression model. For example, suppose that a researcher has access to N rows of data with one dependent and two independent variables: (Y_i, X_1i, X_2i). Suppose further that the researcher wants to estimate a bivariate linear model via least squares: Y_i = β_0 + β_1 X_1i + β_2 X_2i + e_i. If the researcher only has access to N = 2 data points, then they could find infinitely many combinations (β̂_0, β̂_1, β̂_2) that explain the data equally well: any combination can be chosen that satisfies Y_i = β̂_0 + β̂_1 X_1i + β̂_2 X_2i, all of which lead to Σ_i ê_i² = Σ_i (Y_i − (β̂_0 + β̂_1 X_1i + β̂_2 X_2i))² = 0 and are therefore valid solutions that minimize the sum of squared residuals. To understand why there are infinitely many options, note that the system of N = 2 equations is to be solved for 3 unknowns, which makes the system underdetermined. Alternatively, one can visualize infinitely many 3-dimensional planes that go through N = 2 fixed points.

More generally, to estimate a least squares model with k distinct parameters, one must have N ≥ k distinct data points. If N > k, then there does not generally exist a set of parameters that will perfectly fit the data. The quantity N − k appears often in regression analysis, and is referred to as the degrees of freedom in the model.
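The sufficient-data requirement can be illustrated with a small NumPy sketch; the variable names mirror the notation above, np.linalg.lstsq is a standard NumPy routine, and the simulated data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
X1, X2 = rng.normal(size=N), rng.normal(size=N)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(scale=0.1, size=N)

# Design matrix with an intercept column: k = 3 parameters, so N >= k is needed.
X = np.column_stack([np.ones(N), X1, X2])
beta_hat, residual_ss, rank, _ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)          # close to (1.0, 2.0, -0.5)
print(N - X.shape[1])    # degrees of freedom N - k

# With only N = 2 rows the system is underdetermined: rank < k, and infinitely
# many coefficient vectors give zero residuals.
_, _, rank2, _ = np.linalg.lstsq(X[:2], Y[:2], rcond=None)
print(rank2)             # 2 < 3
```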
Moreover, to estimate a least squares model, the independent variables (X_1i, X_2i, ..., X_ki) must be linearly independent: one must not be able to reconstruct any of the independent variables by adding and multiplying the remaining independent variables. As discussed in ordinary least squares, this condition ensures that XᵀX is an invertible matrix and therefore that a unique solution β̂ exists.

By itself, a regression is simply a calculation using the data. In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions, such as independent, homoscedastic errors and regressors that are uncorrelated with the error term. A handful of conditions are sufficient for the least-squares estimator to possess desirable properties: in particular, the Gauss–Markov assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. Practitioners have developed a variety of methods to maintain some or all of these desirable properties in real-world settings, because these classical assumptions are unlikely to hold exactly. For example, modeling errors-in-variables can lead to reasonable estimates when the independent variables are measured with errors. Heteroscedasticity-consistent standard errors allow the variance of e_i to change across values of X_i. Correlated errors that exist within subsets of the data or follow specific patterns can be handled using clustered standard errors, geographic weighted regression, or Newey–West standard errors, among other techniques. When rows of data correspond to locations in space, the choice of how to model e_i within geographic units can have important consequences.[19][20] The subfield of econometrics is largely focused on developing techniques that allow researchers to make reasonable real-world conclusions in real-world settings, where classical assumptions do not hold exactly.

In linear regression, the model specification is that the dependent variable y_i is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling n data points there is one independent variable x_i, and two parameters, β_0 and β_1:

y_i = β_0 + β_1 x_i + ε_i,  i = 1, ..., n.

In multiple linear regression, there are several independent variables or functions of independent variables. Adding a term in x_i² to the preceding regression gives:

y_i = β_0 + β_1 x_i + β_2 x_i² + ε_i,  i = 1, ..., n.

This is still linear regression; although the expression on the right hand side is quadratic in the independent variable x_i, it is linear in the parameters β_0, β_1 and β_2. In both cases, ε_i is an error term and the subscript i indexes a particular observation.

Returning our attention to the straight line case: given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model:

ŷ_i = β̂_0 + β̂_1 x_i.

The residual, e_i = y_i − ŷ_i, is the difference between the value of the dependent variable predicted by the model, ŷ_i, and the true value of the dependent variable, y_i.
One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSR:

SSR = Σ_{i=1}^{n} ê_i².

Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators β̂_0, β̂_1. In the case of simple regression, the formulas for the least squares estimates are

β̂_1 = Σ_i (x_i − x̄)(y_i − ȳ) / Σ_i (x_i − x̄)²  and  β̂_0 = ȳ − β̂_1 x̄,

where x̄ is the mean (average) of the x values and ȳ is the mean of the y values.

Under the assumption that the population error term has a constant variance, the estimate of that variance is given by:

σ̂_ε² = SSR / (n − 2).

This is called the mean square error (MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n − p) for p regressors or (n − p − 1) if an intercept is used.[21] In this case, p = 1 so the denominator is n − 2.

The standard errors of the parameter estimates are given by

σ̂_{β_0} = σ̂_ε √(1/n + x̄² / Σ_i (x_i − x̄)²),  σ̂_{β_1} = σ̂_ε √(1 / Σ_i (x_i − x̄)²).

Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to create confidence intervals and conduct hypothesis tests about the population parameters.

In the more general multiple regression model, there are p independent variables:

y_i = β_1 x_i1 + β_2 x_i2 + ... + β_p x_ip + ε_i,

where x_ij is the i-th observation on the j-th independent variable. If the first independent variable takes the value 1 for all i, x_i1 = 1, then β_1 is called the regression intercept. The least squares parameter estimates are obtained from p normal equations. The residual can be written as

e_i = y_i − β̂_1 x_i1 − ... − β̂_p x_ip.

The normal equations are

Σ_{i=1}^{n} Σ_{k=1}^{p} x_ij x_ik β̂_k = Σ_{i=1}^{n} x_ij y_i,  j = 1, ..., p.

In matrix notation, the normal equations are written as

XᵀX β̂ = Xᵀ Y,

where the ij element of X is x_ij, the i element of the column vector Y is y_i, and the j element of β̂ is β̂_j. Thus X is n × p, Y is n × 1, and β̂ is p × 1. The solution is

β̂ = (XᵀX)⁻¹ Xᵀ Y.

Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters. Interpretations of these diagnostic tests rest heavily on the model's assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
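A minimal NumPy sketch of the closed-form simple-regression estimates and standard errors given above; the helper name simple_ols and the simulated data are illustrative.

```python
import numpy as np

def simple_ols(x, y):
    """Simple linear regression via the closed-form least squares formulas.
    Returns the estimates, the MSE, and the standard errors of the estimates."""
    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    sxx = ((x - x_bar) ** 2).sum()

    beta1 = ((x - x_bar) * (y - y_bar)).sum() / sxx
    beta0 = y_bar - beta1 * x_bar

    residuals = y - (beta0 + beta1 * x)
    mse = (residuals ** 2).sum() / (n - 2)          # denominator n - 2: intercept plus one regressor

    se_beta1 = np.sqrt(mse / sxx)
    se_beta0 = np.sqrt(mse * (1.0 / n + x_bar ** 2 / sxx))
    return beta0, beta1, mse, se_beta0, se_beta1

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 3.0 + 0.7 * x + rng.normal(scale=0.5, size=200)
print(simple_ols(x, y))   # intercept near 3.0, slope near 0.7
```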
Limited dependent variables, which are response variables that arecategoricalor constrained to fall only in a certain range, often arise ineconometrics. The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called thelinear probability model. Nonlinear models for binary dependent variables include theprobitandlogit model. Themultivariate probitmodel is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. Forcategorical variableswith more than two values there is themultinomial logit. Forordinal variableswith more than two values, there are theordered logitandordered probitmodels.Censored regression modelsmay be used when the dependent variable is only sometimes observed, andHeckman correctiontype models may be used when the sample is not randomly selected from the population of interest. An alternative to such procedures is linear regression based onpolychoric correlation(or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, then count models like thePoisson regressionor thenegative binomialmodel may be used. When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized inDifferences between linear and non-linear least squares. Regression modelspredicta value of theYvariable given known values of theXvariables. Predictionwithinthe range of values in the dataset used for model-fitting is known informally asinterpolation. Predictionoutsidethis range of the data is known asextrapolation. Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values. Aprediction intervalthat represents the uncertainty may accompany the point prediction. Such intervals tend to expand rapidly as the values of the independent variable(s) moved outside the range covered by the observed data. For such reasons and others, some tend to say that it might be unwise to undertake extrapolation.[23] The assumption of a particular form for the relation betweenYandXis another source of uncertainty. A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known). 
There are no generally agreed methods for relating the number of observations to the number of independent variables in the model. One method conjectured by Good and Hardin is N = m^n, where N is the sample size, n is the number of independent variables and m is the number of observations needed to reach the desired precision if the model had only one independent variable.[24] For example, suppose a researcher is building a linear regression model using a dataset that contains 1000 patients (N). If the researcher decides that five observations are needed to precisely define a straight line (m), then the maximum number of independent variables (n) the model can support is 4, because 5⁴ = 625 is less than 1000 while 5⁵ = 3125 exceeds the available observations (equivalently, log 1000 / log 5 ≈ 4.29).

Although the parameters of a regression model are usually estimated using the method of least squares, other estimation methods, such as Bayesian methods and least absolute deviations, have also been used. All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized. Different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
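A tiny sketch of the Good and Hardin rule of thumb applied to the example above; the helper name max_predictors is illustrative.

```python
import math

def max_predictors(N, m):
    """Largest n with m**n <= N, following the rule of thumb N = m**n."""
    return math.floor(math.log(N) / math.log(m))

print(max_predictors(1000, 5))   # 4, since 5**4 = 625 <= 1000 < 5**5 = 3125
```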
https://en.wikipedia.org/wiki/Regression_analysis
Principal component analysis(PCA) is alineardimensionality reductiontechnique with applications inexploratory data analysis, visualization anddata preprocessing. The data islinearly transformedonto a newcoordinate systemsuch that the directions (principal components) capturing the largest variation in the data can be easily identified. Theprincipal componentsof a collection of points in areal coordinate spaceare a sequence ofp{\displaystyle p}unit vectors, where thei{\displaystyle i}-th vector is the direction of a line that best fits the data while beingorthogonalto the firsti−1{\displaystyle i-1}vectors. Here, a best-fitting line is defined as one that minimizes the average squaredperpendiculardistance from the points to the line. These directions (i.e., principal components) constitute anorthonormal basisin which different individual dimensions of the data arelinearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.[1] Principal component analysis has applications in many fields such aspopulation genetics,microbiomestudies, andatmospheric science.[2] When performing PCA, the first principal component of a set ofp{\displaystyle p}variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed throughp{\displaystyle p}iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to anindependent set. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. Thei{\displaystyle i}-th principal component can be taken as a direction orthogonal to the firsti−1{\displaystyle i-1}principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components areeigenvectorsof the data'scovariance matrix. Thus, the principal components are often computed byeigendecompositionof the data covariance matrix orsingular value decompositionof the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related tofactor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related tocanonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe thecross-covariancebetween two datasets while PCA defines a neworthogonal coordinate systemthat optimally describes variance in a single dataset.[3][4][5][6]RobustandL1-norm-based variants of standard PCA have also been proposed.[7][8][9][6] PCA was invented in 1901 byKarl Pearson,[10]as an analogue of theprincipal axis theoremin mechanics; it was later independently developed and named byHarold Hotellingin the 1930s.[11]Depending on the field of application, it is also named the discreteKarhunen–Loèvetransform (KLT) insignal processing, theHotellingtransform in multivariate quality control,proper orthogonal decomposition(POD) in mechanical engineering,singular value decomposition(SVD) ofX(invented in the last quarter of the 19th century[12]),eigenvalue decomposition(EVD) ofXTXin linear algebra,factor analysis(for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe'sPrincipal Component Analysis),[13]Eckart–Young theorem(Harman, 1960), orempirical orthogonal functions(EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988),spectral decompositionin noise and vibration, andempirical modal analysisin structural dynamics. PCA can be thought of as fitting ap-dimensionalellipsoidto the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small. To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute thecovariance matrixof the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we mustnormalizeeach of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. Biplotsandscree plots(degree ofexplained variance) are used to interpret findings of the PCA. PCA is defined as anorthogonallinear transformationon a realinner product spacethat transforms the data to a newcoordinate systemsuch that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[13] Consider ann×p{\displaystyle n\times p}datamatrix,X, with column-wise zeroempirical mean(the sample mean of each column has been shifted to zero), where each of thenrows represents a different repetition of the experiment, and each of thepcolumns gives a particular kind of feature (say, the results from a particular sensor). 
Mathematically, the transformation is defined by a set of size l of p-dimensional vectors of weights or coefficients w_(k) = (w_1, ..., w_p)_(k) that map each row vector x_(i) = (x_1, ..., x_p)_(i) of X to a new vector of principal component scores t_(i) = (t_1, ..., t_l)_(i), given by

t_{k(i)} = x_(i) · w_(k),  for i = 1, ..., n and k = 1, ..., l,

in such a way that the individual variables t_1, ..., t_l of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where l is usually selected to be strictly less than p to reduce dimensionality). The above may equivalently be written in matrix form as

T = XW,

where T_ik = t_{k(i)}, X_ij = x_{j(i)}, and W_jk = w_{j(k)}.

In order to maximize variance, the first weight vector w_(1) thus has to satisfy

w_(1) = arg max_{‖w‖=1} Σ_i (t_1)²_(i) = arg max_{‖w‖=1} Σ_i (x_(i) · w)².

Equivalently, writing this in matrix form gives

w_(1) = arg max_{‖w‖=1} ‖Xw‖² = arg max_{‖w‖=1} wᵀXᵀXw.

Since w_(1) has been defined to be a unit vector, it equivalently also satisfies

w_(1) = arg max (wᵀXᵀXw) / (wᵀw).

The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XᵀX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. With w_(1) found, the first principal component of a data vector x_(i) can then be given as a score t_1(i) = x_(i) · w_(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x_(i) · w_(1)} w_(1).

The k-th component can be found by subtracting the first k − 1 principal components from X:

X̂_k = X − Σ_{s=1}^{k−1} X w_(s) w_(s)ᵀ,

and then finding the weight vector which extracts the maximum variance from this new data matrix:

w_(k) = arg max_{‖w‖=1} ‖X̂_k w‖² = arg max (wᵀ X̂_kᵀ X̂_k w) / (wᵀw).

It turns out that this gives the remaining eigenvectors of XᵀX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XᵀX. The k-th principal component of a data vector x_(i) can therefore be given as a score t_k(i) = x_(i) · w_(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x_(i) · w_(k)} w_(k), where w_(k) is the k-th eigenvector of XᵀX. The full principal components decomposition of X can therefore be given as

T = XW,

where W is a p-by-p matrix of weights whose columns are the eigenvectors of XᵀX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in factor analysis.

XᵀX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset Xᵀ.[13]: 30–31

The sample covariance Q between two of the different principal components over the dataset is given by:

Q(PC_(j), PC_(k)) ∝ (Xw_(j))ᵀ(Xw_(k)) = w_(j)ᵀXᵀXw_(k) = λ_(k) w_(j)ᵀw_(k),

where the eigenvalue property of w_(k) has been used to move from the second expression to the third. However, eigenvectors w_(j) and w_(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final expression is therefore zero; there is no sample covariance between different principal components over the dataset.
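A short NumPy sketch of the eigenvector characterization above: the weight vectors are taken as eigenvectors of XᵀX, and the resulting component scores are uncorrelated over the dataset. The toy data and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 samples, 3 correlated features, column means removed.
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.5, 1.0, 0.0],
                                          [0.5, 0.2, 0.3]])
X = X - X.mean(axis=0)

# Eigenvectors of X^T X give the weight vectors w_(k); the eigenvalues are the
# maximized Rayleigh quotients (proportional to each component's variance).
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]          # sort components by decreasing variance
W = eigvecs[:, order]
T = X @ W                                  # principal component scores, T = XW

# Scores of different components have (numerically) zero sample covariance.
print(np.round(np.cov(T, rowvar=False), 6))
```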
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written

Q ∝ XᵀX = WΛWᵀ.

The empirical covariance matrix between the principal components becomes

WᵀQW ∝ WᵀWΛWᵀW = Λ,

where Λ is the diagonal matrix of eigenvalues λ_(k) of XᵀX. λ_(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ_(k) = Σ_i t_k²(i) = Σ_i (x_(i) · w_(k))².

The transformation T = XW maps a data vector x_(i) from an original space of x variables to a new space of p variables which are uncorrelated over the dataset. To non-dimensionalize the centered data, let X_c and Y_c represent characteristic values of the data vectors X_i and Y_i for a dataset of size n. These norms are used to transform the original space of variables x, y to a new space of uncorrelated variables p, q, such that p_i = X_i / X_c and q_i = Y_i / Y_c, and the new variables are linearly related as q = αp. To find the optimal linear relationship, we minimize the total squared reconstruction error

E(α) = 1/(1 − α²) Σ_{i=1}^{n} (α p_i − q_i)²,

such that setting the derivative of the error function to zero (E′(α) = 0) yields

α = (1/2)(−λ ± √(λ² + 4)),  where λ = (p·p − q·q)/(p·q).[14]

Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.

Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.

Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less: the first few components achieve a higher signal-to-noise ratio.
PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction, while the later principal components may be dominated by noise and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[15]

The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X,

X = UΣWᵀ.

Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ_(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X.

In terms of this factorization, the matrix XᵀX can be written

XᵀX = WΣᵀUᵀUΣWᵀ = WΣᵀΣWᵀ = WΣ̂²Wᵀ,

where Σ̂ is the square diagonal matrix with the singular values of X and the excess zeros chopped off, which satisfies Σ̂² = ΣᵀΣ. Comparison with the eigenvector factorization of XᵀX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XᵀX, while the singular values σ_(k) of X are equal to the square roots of the eigenvalues λ_(k) of XᵀX.

Using the singular value decomposition, the score matrix T can be written

T = XW = UΣWᵀW = UΣ,

so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T.

Efficient algorithms exist to calculate the SVD of X without having to form the matrix XᵀX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,[16] unless only a handful of components are required.

As with the eigen-decomposition, a truncated n × L score matrix T_L can be obtained by considering only the first L largest singular values and their singular vectors:

T_L = U_L Σ_L = X W_L.

The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem (1936).

Theorem (Optimal k-dimensional fit). Let P be an n×m data matrix whose columns have been mean-centered and scaled, and let P = UΣVᵀ be its singular value decomposition. Then the best rank-k approximation to P in the least-squares (Frobenius-norm) sense is P_k = U_k Σ_k V_kᵀ, where U_k, Σ_k and V_k are restricted to the first k singular values and vectors, with V_k consisting of the first k columns of V. Moreover, the relative residual variance is

R(k) = (Σ_{j=k+1}^{m} σ_j²) / (Σ_{j=1}^{m} σ_j²).[14]

The singular values (in Σ) are the square roots of the eigenvalues of the matrix XᵀX. Each eigenvalue is proportional to the portion of the "variance" (more correctly, of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean.
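A minimal NumPy sketch of PCA via the SVD as described above, including a rank-L truncation and the explained-variance shares; the toy data are arbitrary and np.linalg.svd is a standard NumPy routine.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))   # toy 500 x 5 data
X = X - X.mean(axis=0)                                    # column-wise zero mean

# Thin SVD: X = U @ diag(s) @ Wt, with right singular vectors as rows of Wt.
U, s, Wt = np.linalg.svd(X, full_matrices=False)

T = U * s                 # score matrix T = U Sigma = X W
L = 2
T_L = T[:, :L]            # truncated scores: first L components
X_L = T_L @ Wt[:L]        # nearest rank-L approximation of X (Eckart-Young)

explained = s**2 / np.sum(s**2)
print(explained[:L].sum())            # share of total variance kept by L components
print(np.linalg.norm(X - X_L))        # Frobenius-norm residual of the truncation
print(np.allclose(T, X @ Wt.T))       # True: T = XW with W = Wt.T
```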
PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements when compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II, which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.

PCA is sensitive to the scaling of the variables. Mathematically, this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.[14] Let X_c be the centered data matrix (n rows, p columns) and define the covariance

Σ = (1/n) X_cᵀ X_c.

If the j-th variable is multiplied by a factor α_j we obtain

X_c^(α) = X_c D,  D = diag(α_1, ..., α_p).

Hence the new covariance is

Σ^(α) = Dᵀ Σ D.

Because the eigenvalues and eigenvectors of Σ^(α) differ from those of Σ in a way governed by D, the principal axes rotate toward any column whose variance has been inflated, exactly as the two-variable example below illustrates.

If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
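A small sketch of the scaling sensitivity discussed above, assuming NumPy; the helper first_axis and the simulated variables are illustrative, and eigenvector signs may flip.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two equally scaled, strongly correlated variables.
z = rng.normal(size=1000)
X = np.column_stack([z + 0.1 * rng.normal(size=1000),
                     z + 0.1 * rng.normal(size=1000)])
X = X - X.mean(axis=0)

def first_axis(data):
    """First principal axis: unit eigenvector of the sample covariance matrix."""
    vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return vecs[:, np.argmax(vals)]

print(first_axis(X))                      # roughly +/-(0.71, 0.71): the 45-degree direction

# Inflate the first variable by 100: the leading axis snaps onto that variable.
X_scaled = X * np.array([100.0, 1.0])
print(first_axis(X_scaled))               # roughly +/-(1.0, 0.0)

# Standardizing each column to unit variance restores the symmetric result.
X_std = X_scaled / X_scaled.std(axis=0)
print(first_axis(X_std))                  # roughly +/-(0.71, 0.71) again
```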
Classical PCA assumes the cloud of points has already been translated so that its centroid is at the origin.[14] Write each observation as

q_i = μ + z_i,  μ = (1/n) Σ_{i=1}^{n} q_i.

Without subtracting μ we are in effect diagonalising the uncentered second-moment matrix

Σ_unc = μμᵀ + (1/n) ZᵀZ,

where Z is the centered data matrix. The rank-one term μμᵀ often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centered part Z. After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.

Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998), "Mean-centering in Moderated Regression: Much Ado About Nothing". Since the correlation matrix of X is the covariance matrix of the standardized variables (Z- or standard scores), a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X.

PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[17] It has nevertheless been used to quantify the distance between two or more classes by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers of mass of two or more classes.[18] Linear discriminant analysis is an alternative which is optimized for class separability.

Jolliffe describes several algebraic properties of PCA.[13][page needed] The statistical implication of one such property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection. Before we look at its usage, we first look at the diagonal elements of the decomposition,

Var(x_j) = Σ_{k=1}^{p} λ_k α_{kj}².

Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions λ_k α_k α_k′ from each PC. Although not strictly decreasing, the elements of λ_k α_k α_k′ will tend to become smaller as k increases, as λ_k is nonincreasing for increasing k, whereas the elements of α_k tend to stay about the same size because of the normalization constraints: α_k′ α_k = 1, k = 1, ..., p.

As noted above, the results of PCA depend on the scaling of the variables.
This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unit variance.[19]

The applicability of PCA as described above is limited by certain (tacit) assumptions[20] made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA).

Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[21] and forward modeling has to be performed to recover the true magnitude of the signals.[22] As an alternative method, non-negative matrix factorization, which focuses only on the non-negative elements in the matrices, is well suited for astrophysical observations.[23][24][25] See more at the relation between PCA and non-negative matrix factorization.

PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms the original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[26]

PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[27][page needed] Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[28] The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[28]

Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models. Under the assumption that

x = s + n,

that is, that the data vector x is the sum of the desired information-bearing signal s and a noise signal n, one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point of view.
In particular, Linsker showed that ifs{\displaystyle \mathbf {s} }is Gaussian andn{\displaystyle \mathbf {n} }is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes themutual informationI(y;s){\displaystyle I(\mathbf {y} ;\mathbf {s} )}between the desired informations{\displaystyle \mathbf {s} }and the dimensionality-reduced outputy=WLTx{\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }.[29] If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vectorn{\displaystyle \mathbf {n} }areiid), but the information-bearing signals{\displaystyle \mathbf {s} }is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on theinformation loss, which is defined as[30][31] The optimality of PCA is also preserved if the noisen{\displaystyle \mathbf {n} }is iid and at least more Gaussian (in terms of theKullback–Leibler divergence) than the information-bearing signals{\displaystyle \mathbf {s} }.[32]In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noisen{\displaystyle \mathbf {n} }becomes dependent. The following is a detailed description of PCA using the covariance method[33]as opposed to the correlation method.[34] The goal is to transform a given data setXof dimensionpto an alternative data setYof smaller dimensionL. Equivalently, we are seeking to find the matrixY, whereYis theKarhunen–Loèvetransform (KLT) of matrixX: Y=KLT{X}{\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}} Suppose you have data comprising a set of observations ofpvariables, and you want to reduce the data so that each observation can be described with onlyLvariables,L<p. Suppose further, that the data are arranged as a set ofndata vectorsx1…xn{\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}}with eachxi{\displaystyle \mathbf {x} _{i}}representing a single grouped observation of thepvariables. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[35]Hence we proceed by centering the data as follows: In some applications, each variable (column ofB) may also be scaled to have a variance equal to 1 (seeZ-score).[36]This step affects the calculated principal components, but makes them independent of the units used to measure the different variables. LetXbe ad-dimensional random vector expressed as column vector. Without loss of generality, assumeXhas zero mean. We want to find(∗){\displaystyle (\ast )}ad×dorthonormal transformation matrixPso thatPXhas a diagonal covariance matrix (that is,PXis a random vector with all its distinct components pairwise uncorrelated). A quick computation assumingP{\displaystyle P}were unitary yields: Hence(∗){\displaystyle (\ast )}holds if and only ifcov⁡(X){\displaystyle \operatorname {cov} (X)}were diagonalisable byP{\displaystyle P}. This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix. In practical implementations, especially withhigh dimensional data(largep), the naive covariance method is rarely used because it is not efficient due to high computational and memory costs of explicitly determining the covariance matrix. 
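For modest p the explicit computation is nevertheless easy to write down. The following NumPy sketch follows the steps just described (mean subtraction, covariance matrix, orthonormal diagonalising transformation); the function and variable names are illustrative, and this is not intended as a reference implementation.

import numpy as np

def pca_covariance_method(X, L):
    """Project the n x p data X onto its first L principal components."""
    B = X - X.mean(axis=0)                   # mean subtraction (centering)
    C = B.T @ B / (len(X) - 1)               # p x p covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
    W = eigvecs[:, order[:L]]                # p x L matrix of loadings
    return B @ W, eigvals[order]             # scores (n x L) and variances

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # correlated features
Y, variances = pca_covariance_method(X, L=2)
print(Y.shape, np.round(variances, 2))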
The covariance-free approach avoids thenp2operations of explicitly calculating and storing the covariance matrixXTX, instead utilizing one ofmatrix-free methods, for example, based on the function evaluating the productXT(X r)at the cost of2npoperations. One way to compute the first principal component efficiently[41]is shown in the following pseudo-code, for a data matrixXwith zero mean, without ever computing its covariance matrix. Thispower iterationalgorithm simply calculates the vectorXT(X r), normalizes, and places the result back inr. The eigenvalue is approximated byrT(XTX) r, which is theRayleigh quotienton the unit vectorrfor the covariance matrixXTX. If the largest singular value is well separated from the next largest one, the vectorrgets close to the first principal component ofXwithin the number of iterationsc, which is small relative top, at the total cost2cnp. Thepower iterationconvergence can be accelerated without noticeably sacrificing the small cost per iteration using more advancedmatrix-free methods, such as theLanczos algorithmor the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single-vectorsrandswith block-vectors, matricesRandS. Every column ofRapproximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the productXT(X R). Implemented, for example, inLOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-levelBLASmatrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique. Non-linear iterative partial least squares (NIPALS)is a variant the classicalpower iterationwith matrix deflation by subtraction implemented for computing the first few components in a principal component orpartial least squaresanalysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example,genomics,metabolomics) it is usually only necessary to compute the first few PCs. Thenon-linear iterative partial least squares(NIPALS) algorithm updates iterative approximations to the leading scores and loadingst1andr1Tby thepower iterationmultiplying on every iteration byXon the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations toXTX, based on the function evaluating the productXT(X r)=((X r)TX)T. 
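The pseudo-code referred to above is not reproduced in this extract; as a stand-in, the following is a minimal NumPy sketch of the covariance-free power iteration it describes, assuming a data matrix X whose columns have zero mean. Names, the iteration cap and the test data are illustrative.

import numpy as np

def first_pc_power_iteration(X, c=200, tol=1e-12, seed=0):
    """Leading principal direction of zero-mean X, never forming X^T X."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)
    for _ in range(c):
        s = X.T @ (X @ r)                 # cost ~ 2np per iteration, not n*p^2
        s /= np.linalg.norm(s)
        if np.linalg.norm(s - r) < tol:
            r = s
            break
        r = s
    eigenvalue = r @ (X.T @ (X @ r))      # Rayleigh quotient of X^T X at the unit vector r
    return r, eigenvalue

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 20))
X[:, 0] *= 5.0                            # create a well-separated leading direction
X -= X.mean(axis=0)

r, lam = first_pc_power_iteration(X)
# Cross-check against a direct eigendecomposition (affordable here, since p = 20).
print(np.round(lam, 2), np.round(np.linalg.eigvalsh(X.T @ X).max(), 2))
# Further principal components can be obtained by deflating X, as described next.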
The matrix deflation by subtraction is performed by subtracting the outer product,t1r1TfromXleaving the deflated residual matrix used to calculate the subsequent leading PCs.[42]For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine precisionround-off errorsaccumulated in each iteration and matrix deflation by subtraction.[43]AGram–Schmidtre-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[44]NIPALS reliance on single-vector multiplications cannot take advantage of high-levelBLASand results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[45] In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variablespecies. For this, the following results are produced. These results are what is calledintroducing a qualitative variable as supplementary element. This procedure is detailed in and Husson, Lê, & Pagès (2009) and Pagès (2013). Few software offer this option in an "automatic" way. This is the case ofSPADthat historically, following the work ofLudovic Lebart, was the first to propose this option, and the R packageFactoMineR. The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as theIntelligence Quotient(IQ). The pioneering statistical psychologistSpearmanactually developed factor analysis in 1904 for histwo-factor theoryof intelligence, adding a formal technique to the science ofpsychometrics. In 1924Thurstonelooked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[46] In 1949, Shevky and Williams introduced the theory offactorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[47]Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. 
An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms. One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[48] About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[49] PCA can be used as a formal method for the development of indexes. As an alternativeconfirmatory composite analysishas been proposed to develop and assess indexes.[50] The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city. The country-levelHuman Development Index(HDI) fromUNDP, which has been published since 1990 and is very extensively used in development studies,[51]has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA. In 1978Cavalli-Sforzaand others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events. Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[52] PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA are also an impediment to more consistent usage. 
In August 2022, the molecular biologistEran Elhaikpublished a theoretical paper inScientific Reportsanalyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking andcircular reasoning.[53] Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[54] PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[55] Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[56] Inquantitative finance, PCA is used[57]infinancial risk management, and has been applied toother problemssuch asportfolio optimization. PCA is commonly used in problems involvingfixed incomesecurities andportfolios, andinterest rate derivatives. Valuations here depend on the entireyield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,[58]thereby facilitating the modelling. One common risk management application is tocalculating value at risk, VaR, applying PCA to theMonte Carlo simulation.[59]Here, for each simulation-sample, the components are stressed, and rates, andin turn option values, are then reconstructed; with VaR calculated, finally, over the entire run. PCA is also used inhedgingexposure tointerest rate risk, givenpartial durationsand other sensitivities.[58]Under both, the first three, typically, principal components of the system are of interest (representing"shift", "twist", and "curvature"). These principal components are derived from an eigen-decomposition of thecovariance matrixofyieldat predefined maturities;[60]and where thevarianceof each component is itseigenvalue(and as the components areorthogonal, no correlation need be incorporated in subsequent modelling). Forequity, an optimal portfolio is one where theexpected returnis maximized for a given level of risk, or alternatively, where risk is minimized for a given return; seeMarkowitz modelfor discussion. Thus, one approach is to reduce portfolio risk, whereallocation strategiesare applied to the "principal portfolios" instead of the underlyingstocks. 
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.[61][62]PCA has also been used to understand relationships[57]between internationalequity markets, and within markets between groups of companies in industries orsectors. PCA may also be applied tostress testing,[63]essentially an analysis of a bank's ability to endurea hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several]macroeconomic variablesinto a more manageable data set, which can then [be used] for analysis."[63]Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor'seigenvector– and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks. A variant of principal components analysis is used inneuroscienceto identify the specific properties of a stimulus that increases aneuron's probability of generating anaction potential.[64][65]This technique is known asspike-triggered covariance analysis. In a typical application an experimenter presents awhite noiseprocess as a stimulus (usually either as a sensory input to a test subject, or as acurrentinjected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates thecovariance matrixof thespike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. Theeigenvectorsof the difference between the spike-triggered covariance matrix and the covariance matrix of theprior stimulus ensemble(the set of all stimuli, defined over the same length time window) then indicate the directions in thespaceof stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features. In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential.Spike sortingis an important procedure becauseextracellularrecording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performsclustering analysisto associate specific action potentials with individual neurons. PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is,order parameters, duringphase transitionsin the brain.[66] Correspondence analysis(CA) was developed byJean-Paul Benzécri[67]and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied tocontingency tables. 
CA decomposes thechi-squared statisticassociated to this table into orthogonal factors.[68]Because CA is a descriptive technique, it can be applied to tables for which the chi-squared statistic is appropriate or not. Several variants of CA are available includingdetrended correspondence analysisandcanonical correspondence analysis. One special extension ismultiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[69] Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors. Factor analysisis similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[70]In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[13]: 158Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) orcausal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[71] It has been asserted that the relaxed solution ofk-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[72][73]However, that PCA is a useful relaxation ofk-means clustering was not a new result,[74]and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[75] Non-negative matrix factorization(NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[23][24][25]in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore constructs a non-orthogonal basis. 
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[21]For NMF, its components are ranked based only on the empirical FRV curves.[25]The residual fractional eigenvalue plots, that is,1−∑i=1kλi/∑j=1nλj{\displaystyle 1-\sum _{i=1}^{k}\lambda _{i}{\Big /}\sum _{j=1}^{n}\lambda _{j}}as a function of component numberk{\displaystyle k}given a total ofn{\displaystyle n}components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, then the curves drop quickly as an indication of over-fitting (random noise).[21]The FRV curves for NMF is decreasing continuously[25]when the NMF components are constructedsequentially,[24]indicating the continuous capturing of quasi-static noise; then converge to higher levels than PCA,[25]indicating the less over-fitting property of NMF. It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane. Theiconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables. The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation). A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable". A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables.Sparse PCAovercomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding sparsity constraint on the input variables. Several approaches have been proposed, including The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[82] Most of the modern methods fornonlinear dimensionality reductionfind their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points.Trevor Hastieexpanded on this concept by proposingPrincipalcurves[86]as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for dataapproximationfollowed byprojectingthe points onto it. See also theelastic mapalgorithm andprincipal geodesic analysis.[87]Another popular generalization iskernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel. 
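As a small illustration of the kernel idea just mentioned, the following NumPy sketch performs kernel PCA with a Gaussian kernel on two concentric rings, a data set in which no linear direction separates the groups. The kernel choice, the bandwidth and the synthetic data are all illustrative assumptions, not part of any reference method.

import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=0.5):
    """Sketch of kernel PCA with a Gaussian (RBF) kernel, training data only."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * sq_dists)                       # positive definite kernel matrix
    n = len(X)
    one_n = np.full((n, n), 1.0 / n)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n   # centre in feature space
    eigvals, eigvecs = np.linalg.eigh(K)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(eigvals[idx])    # normalised expansion coefficients
    return K @ alphas                                   # projections of the training points

rng = np.random.default_rng(10)
theta = rng.uniform(0.0, 2.0 * np.pi, size=400)
radius = np.r_[np.full(200, 1.0), np.full(200, 3.0)] + rng.normal(scale=0.05, size=400)
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

Z = rbf_kernel_pca(X)
# The inner and outer rings receive clearly different first-component values,
# even though ordinary (linear) PCA of the raw coordinates cannot separate them.
print(np.round(Z[:200, 0].mean(), 2), np.round(Z[200:, 0].mean(), 2))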
Inmultilinear subspace learning,[88][89][90]PCA is generalized tomultilinear PCA(MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA. N-way principal component analysis may be performed with models such asTucker decomposition,PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS. While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive tooutliersin the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.[91]For example, indata miningalgorithms likecorrelation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA[92]based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[7][5] Robust principal component analysis(RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[93][94][95] Independent component analysis(ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations. Given a matrixE{\displaystyle E}, it tries to decompose it into two matrices such thatE=AP{\displaystyle E=AP}. A key difference from techniques such as PCA and ICA is that some of the entries ofA{\displaystyle A}are constrained to be 0. HereP{\displaystyle P}is termed the regulatory layer. While in general such a decomposition can have multiple solutions, they prove that if the following conditions are satisfied : then the decomposition is unique up to multiplication by a scalar.[96] Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components: variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups[97]In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA). A DAPC can be realized on R using the package Adegenet. (more info:adegenet on the web) Directional component analysis(DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.[98]Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets. Also like PCA, it is based on a covariance matrix derived from the input dataset. The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact. 
Whereas PCA maximises explained variance, DCA maximises probability density given impact. The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact). DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles,[99] and the most likely and most impactful changes in rainfall due to climate change.[100]
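PLACEHOLDER_SKIP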
https://en.wikipedia.org/wiki/Principal_component_analysis
In the context ofartificial neural networks, therectifierorReLU (rectified linear unit) activation function[1][2]is anactivation functiondefined as the non-negative part of its argument, i.e., theramp function: wherex{\displaystyle x}is the input to aneuron. This is analogous tohalf-wave rectificationinelectrical engineering. ReLU is one of the most popular activation functions for artificial neural networks,[3]and finds application incomputer vision[4]andspeech recognition[5][6]usingdeep neural netsandcomputational neuroscience.[7][8] The ReLU was first used byAlston Householderin 1941 as a mathematical abstraction of biological neural networks.[9] Kunihiko Fukushimain 1969 used ReLU in the context of visual feature extraction in hierarchical neural networks.[10][11]30 years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria.[12][13] Prior to 2010, most activation functions used were thelogistic sigmoid(which is inspired byprobability theory; seelogistic regression) and its more numerically efficient[14]counterpart, thehyperbolic tangent. Around 2010, the use of ReLU became common again. Jarrett et al. (2009) noted that rectification by eitherabsoluteor ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allowsaverage poolingwithout neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs.[15] Nair and Hinton (2010) made a theoretical argument that thesoftplusactivation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in aBoltzmann machinethat takesx{\displaystyle x}as input, and produces 1 as output with probabilityσ(x)=11+e−x{\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}}. They then considered extending its range of output by making infinitely many copies of itX1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\dots }, that all take the same input, offset by an amount0.5,1.5,2.5,…{\displaystyle 0.5,1.5,2.5,\dots }, then their outputs are added together as∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}. They then demonstrated that∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}is approximately equal toN(log⁡(1+ex),σ(x)){\displaystyle {\mathcal {N}}(\log(1+e^{x}),\sigma (x))}, which is also approximately equal toReLU⁡(N(x,σ(x))){\displaystyle \operatorname {ReLU} ({\mathcal {N}}(x,\sigma (x)))}, whereN{\displaystyle {\mathcal {N}}}stands for thegaussian distribution. They also argued for another reason for using ReLU: that it allows "intensity equivariance" in image recognition. That is, multiplying input image by a constantk{\displaystyle k}multiplies the output also. In contrast, this is false for other activation functions like sigmoid or tanh. They found that ReLU activation allowed good empirical performance inrestricted Boltzmann machines.[16] Glorot et al (2011) argued that ReLU has the following advantages over sigmoid or tanh. ReLU is more similar to biological neurons' responses in their main operating regime. ReLU avoids vanishing gradients. ReLU is cheaper to compute. 
ReLU creates sparse representation naturally, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performancewithoutunsupervised pre-training, especially on large, purely supervised tasks.[4] Advantages of ReLU include: Possible downsides can include: Leaky ReLUallows a small, positive gradient when the unit is inactive,[6]helping to mitigate the vanishing gradient problem. This gradient is defined by a parameterα{\displaystyle \alpha }, typically set to 0.01–0.3.[17][18] The same function can also be expressed without the piecewise notation as: Parametric ReLU (PReLU)takes this idea further by makingα{\displaystyle \alpha }a learnable parameter along with the other network parameters.[19] Note that forα≤1{\displaystyle \alpha \leq 1}, this is equivalent to and thus has a relation to "maxout" networks.[19] Concatenated ReLU (CReLU)preserves positive and negative phase information:[20] ExtendeD Exponential Linear Unit (DELU) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process for higher performance. Thanks to its unique design, it has been shown that DELU may obtain higher classification accuracy than ReLU and ELU.[21] In these formulas,a{\displaystyle a},b{\displaystyle b}andxc{\displaystyle x_{c}}arehyperparametervalues which could be set as default constraintsa=1{\displaystyle a=1},b=2{\displaystyle b=2}andxc=1.25643{\displaystyle x_{c}=1.25643}, as done in the original work. GELU is a smooth approximation to the rectifier: whereΦ(x)=P(X⩽x){\displaystyle \Phi (x)=P(X\leqslant x)}is thecumulative distribution functionof the standardnormal distribution. This activation function is illustrated in the figure at the start of this article. It has a "bump" to the left ofx< 0 and serves as the default activation for models such asBERT.[22] The SiLU (sigmoid linear unit) orswish function[23]is another smooth approximation which uses thesigmoid function, first introduced in the GELU paper:[22] A smooth approximation to the rectifier is theanalytic function which is called thesoftplus[24][4]orSmoothReLUfunction.[25]For large negativex{\displaystyle x}it is roughlyln⁡1{\displaystyle \ln 1}, so just above 0, while for large positivex{\displaystyle x}it is roughlyln⁡(ex){\displaystyle \ln(e^{x})}, so just abovex{\displaystyle x}. This function can be approximated as: By making the change of variablesx=yln⁡(2){\displaystyle x=y\ln(2)}, this is equivalent to A sharpness parameterk{\displaystyle k}may be included: The derivative of softplus is thelogistic function. The logisticsigmoid functionis a smooth approximation of the derivative of the rectifier, theHeaviside step function. The multivariable generalization of single-variable softplus is theLogSumExpwith the first argument set to zero: The LogSumExp function is and its gradient is thesoftmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning. Exponential linear units try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.[26] In these formulas,α{\displaystyle \alpha }is ahyperparameterto be tuned with the constraintα≥0{\displaystyle \alpha \geq 0}. 
Given the same interpretation ofα{\displaystyle \alpha }, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the formf(x)=max(−α,x){\displaystyle f(x)=\max(-\alpha ,x)}. The mish function can also be used as a smooth approximation of the rectifier.[23]It is defined as wheretanh⁡(x){\displaystyle \tanh(x)}is thehyperbolic tangent, andsoftplus⁡(x){\displaystyle \operatorname {softplus} (x)}is thesoftplusfunction. Mish is non-monotonicandself-gated.[27]It was inspired bySwish, itself a variant ofReLU.[27] Squareplus[28]is the function whereb≥0{\displaystyle b\geq 0}is a hyperparameter that determines the "size" of the curved region nearx=0{\displaystyle x=0}. (For example, lettingb=0{\displaystyle b=0}yields ReLU, and lettingb=4{\displaystyle b=4}yields themetallic meanfunction.) Squareplus shares many properties with softplus: It ismonotonic, strictlypositive, approaches 0 asx→−∞{\displaystyle x\to -\infty }, approaches the identity asx→+∞{\displaystyle x\to +\infty }, and isC∞{\displaystyle C^{\infty }}smooth. However, squareplus can be computed using onlyalgebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability whenx{\displaystyle x}is large.
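The rectifier and the variants described above are all one- or two-liners in NumPy. The sketch below uses the standard formulas; the exact GELU needs the Gaussian CDF, supplied here via math.erf, and for α ≤ 1 the leaky/parametric form can equivalently be written max(x, αx). Everything here is illustrative rather than a reference implementation.

import math
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                       # ramp: the non-negative part

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0, x, alpha * x)           # equals np.maximum(x, alpha * x) for alpha <= 1

def elu(x, alpha=1.0):
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def softplus(x, k=1.0):
    return np.log1p(np.exp(k * x)) / k              # k is the optional sharpness parameter

def gelu(x):
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))   # standard normal CDF
    return x * Phi

def silu(x):
    return x / (1.0 + np.exp(-x))                   # x * sigmoid(x), also called swish

def mish(x):
    return x * np.tanh(softplus(x))

def squareplus(x, b=4.0):
    return 0.5 * (x + np.sqrt(x * x + b))           # algebraic; b = 0 recovers ReLU

x = np.linspace(-3.0, 3.0, 7)
for f in (relu, leaky_relu, elu, softplus, gelu, silu, mish, squareplus):
    print(f"{f.__name__:>10}", np.round(f(x), 3))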
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
Theactivation functionof a node in anartificial neural networkis a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function isnonlinear.[1] Modern activation functions include the logistic (sigmoid) function used in the 2012speech recognitionmodel developed by Hinton et al;[2]theReLUused in the 2012AlexNetcomputer vision model[3][4]and in the 2015ResNetmodel; and the smooth version of the ReLU, theGELU, which was used in the 2018BERTmodel.[5] Aside from their empirical performance, activation functions also have different mathematical properties: These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances invariational autoencoders. The most common activation functions can be divided into three categories:ridge functions,radial functionsandfold functions. An activation functionf{\displaystyle f}issaturatingiflim|v|→∞|∇f(v)|=0{\displaystyle \lim _{|v|\to \infty }|\nabla f(v)|=0}. It isnonsaturatingif it islim|v|→∞|∇f(v)|≠0{\displaystyle \lim _{|v|\to \infty }|\nabla f(v)|\neq 0}. Non-saturating activation functions, such asReLU, may be better than saturating activation functions, because they are less likely to suffer from thevanishing gradient problem.[8] Ridge functions are multivariate functions acting on a linear combination of the input variables. Often used examples include:[clarification needed] Inbiologically inspired neural networks, the activation function is usually an abstraction representing the rate ofaction potentialfiring in the cell.[9]In its simplest form, this function isbinary—that is, either theneuronis firing or not. Neurons also cannot fire faster than a certain rate, motivatingsigmoidactivation functions whose range is a finite interval. The function looks likeϕ(v)=U(a+v′b){\displaystyle \phi (\mathbf {v} )=U(a+\mathbf {v} '\mathbf {b} )}, whereU{\displaystyle U}is theHeaviside step function. If a line has a positiveslope, on the other hand, it may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the formϕ(v)=a+v′b{\displaystyle \phi (\mathbf {v} )=a+\mathbf {v} '\mathbf {b} }. A special class of activation functions known asradial basis functions(RBFs) are used inRBF networks. These activation functions can take many forms, but they are usually found as one of the following functions: wherec{\displaystyle \mathbf {c} }is the vector representing the functioncenteranda{\displaystyle a}andσ{\displaystyle \sigma }are parameters affecting the spread of the radius. Periodic functions can serve as activation functions. Usually thesinusoidis used, as any periodic function is decomposable into sinusoids by theFourier transform.[10] Quadratic activation mapsx↦x2{\displaystyle x\mapsto x^{2}}.[11][12] Folding activation functions are extensively used in thepooling layersinconvolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking themean,minimumormaximum. In multiclass classification thesoftmaxactivation is often used. 
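The saturation condition above can be checked numerically, and the folding (aggregating) activations mentioned last are equally short to write. The sketch below compares the gradient magnitude of tanh and ReLU far from the origin and implements a numerically stable softmax; the test values are arbitrary illustrative choices.

import numpy as np

def numerical_grad(f, v, h=1e-6):
    return (f(v + h) - f(v - h)) / (2.0 * h)        # central finite difference

def relu(v):
    return np.maximum(0.0, v)

for v in (5.0, 20.0, 50.0):
    print(v, numerical_grad(np.tanh, v), numerical_grad(relu, v))
# |grad tanh| shrinks toward 0 as |v| grows (saturating), while the ReLU
# gradient stays 1 for v > 0 (nonsaturating).

def softmax(z):
    e = np.exp(z - np.max(z))                       # subtract the max for numerical stability
    return e / e.sum()

print(np.round(softmax(np.array([1.0, 2.0, 3.0])), 3))   # components sum to 1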
The following table compares the properties of several activation functions that are functions of onefoldxfrom the previous layer or layers: wheregλ,σ,μ,β(x)=(x−λ)1{x⩾λ}1+e−sgn⁡(x−μ)(|x−μ|σ)β{\displaystyle g_{\lambda ,\sigma ,\mu ,\beta }(x)={\frac {(x-\lambda ){1}_{\{x\geqslant \lambda \}}}{1+e^{-\operatorname {sgn}(x-\mu )\left({\frac {\vert x-\mu \vert }{\sigma }}\right)^{\beta }}}}}[19] The following table lists activation functions that are not functions of a singlefoldxfrom the previous layer or layers: Inquantum neural networksprogrammed on gate-modelquantum computers, based on quantum perceptrons instead of variational quantum circuits, the non-linearity of the activation function can be implemented with no need of measuring the output of eachperceptronat each layer. The quantum properties loaded within the circuit such as superposition can be preserved by creating theTaylor seriesof the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a wanted approximation degree. Because of the flexibility of such quantum circuits, they can be designed in order to approximate any arbitrary classical activation function.[25]
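For completeness, the parameterised activation quoted above, with the indicator 1{x ≥ λ} in the numerator and the signed power term in the denominator, can be transcribed directly into NumPy. The parameter values below are arbitrary examples, not defaults from any source.

import numpy as np

def g(x, lam=0.0, sigma=1.0, mu=0.0, beta=2.0):
    """Transcription of g_{lambda, sigma, mu, beta}(x) as displayed above."""
    numerator = (x - lam) * (x >= lam)
    denominator = 1.0 + np.exp(-np.sign(x - mu) * (np.abs(x - mu) / sigma) ** beta)
    return numerator / denominator

x = np.linspace(-2.0, 4.0, 7)
print(np.round(g(x), 3))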
https://en.wikipedia.org/wiki/Activation_function
Instatistics,mean absolute error(MAE) is a measure oferrorsbetween paired observations expressing the same phenomenon. Examples ofYversusXinclude comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement. MAE is calculated as thesum of absolute errors(i.e., theManhattan distance) divided by thesample size:[1]MAE=∑i=1n|yi−xi|n=∑i=1n|ei|n.{\displaystyle \mathrm {MAE} ={\frac {\sum _{i=1}^{n}\left|y_{i}-x_{i}\right|}{n}}={\frac {\sum _{i=1}^{n}\left|e_{i}\right|}{n}}.}It is thus an arithmetic average of the absolute errors|ei|=|yi−xi|{\displaystyle |e_{i}|=|y_{i}-x_{i}|}, whereyi{\displaystyle y_{i}}is the prediction andxi{\displaystyle x_{i}}the true value. Alternative formulations may include relative frequencies as weight factors. The mean absolute error uses the same scale as the data being measured. This is known as a scale-dependent accuracy measure and therefore cannot be used to make comparisons between predicted values that use different scales.[2]The mean absolute error is a common measure offorecast errorintime series analysis,[3]sometimes used in confusion with the more standard definition ofmean absolute deviation. The same confusion exists more generally. Inremote sensingthe MAE is sometimes expressed as the sum of two components: quantity disagreement and allocation disagreement. Quantity disagreement is the absolute value of the mean error:[4]|∑i=1nyi−xin|.{\displaystyle \left|{\frac {\sum _{i=1}^{n}y_{i}-x_{i}}{n}}\right|.}Allocation disagreement is MAE minus quantity disagreement. It is also possible to identify the types of difference by looking at an(x,y){\displaystyle (x,y)}plot. Quantity difference exists when the average of the X values does not equal the average of the Y values. Allocation difference exists if and only if points reside on both sides of the identity line.[4][5] The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are themean absolute scaled error(MASE), mean absolute log error (MALE), and themean squared error. These all summarize performance in ways that disregard the direction of over- or under- prediction; a measure that does place emphasis on this is themean signed difference. Where a prediction model is to be fitted using a selected performance measure, in the sense that theleast squaresapproach is related to themean squared error, the equivalent for mean absolute error isleast absolute deviations. MAE is not identical toroot-mean square error(RMSE), although some researchers report and interpret it that way. The MAE is conceptually simpler and also easier to interpret than RMSE: it is simply the average absolute vertical or horizontal distance between each point in a scatter plot and the Y=X line. In other words, MAE is the average absolute difference between X and Y. Furthermore, each error contributes to MAE in proportion to the absolute value of the error. 
This is in contrast to RMSE which involves squaring the differences, so that a few large differences will increase the RMSE to a greater degree than the MAE.[4] Themean absolute errorof a real variablecwith respect to therandom variableXisE(|X−c|).{\displaystyle E(\left|X-c\right|).}Provided that the probability distribution ofXis such that the above expectation exists, thenmis amedianofXif and only ifmis a minimizer of the mean absolute error with respect toX.[6]In particular,mis a sample median if and only ifmminimizes the arithmetic mean of the absolute deviations.[7] More generally, a median is defined as a minimum ofE(|X−c|−|X|),{\displaystyle E(|X-c|-|X|),}as discussed atMultivariate median(and specifically atSpatial median). This optimization-based definition of the median is useful in statistical data-analysis, for example, ink-medians clustering. Statement: The classifier minimisingE|y−y^|{\displaystyle \mathbb {E} |y-{\hat {y}}|}isf^(x)=Median(y|X=x){\displaystyle {\hat {f}}(x)={\text{Median}}(y|X=x)}. Proof: TheLoss functions for classificationisL=E[|y−a||X=x]=∫−∞∞|y−a|fY|X(y)dy=∫−∞a(a−y)fY|X(y)dy+∫a∞(y−a)fY|X(y)dy.{\displaystyle {\begin{aligned}L&=\mathbb {E} [|y-a||X=x]\\&=\int _{-\infty }^{\infty }|y-a|f_{Y|X}(y)\,dy\\&=\int _{-\infty }^{a}(a-y)f_{Y|X}(y)\,dy+\int _{a}^{\infty }(y-a)f_{Y|X}(y)\,dy.\\\end{aligned}}}Differentiating with respect toagives∂∂aL=∫−∞afY|X(y)dy+∫a∞−fY|X(y)dy=0.{\displaystyle {\frac {\partial }{\partial a}}L=\int _{-\infty }^{a}f_{Y|X}(y)\,dy+\int _{a}^{\infty }-f_{Y|X}(y)\,dy=0.}This means∫−∞af(y)dy=∫a∞f(y)dy.{\displaystyle \int _{-\infty }^{a}f(y)\,dy=\int _{a}^{\infty }f(y)\,dy.}Hence,FY|X(a)=0.5.{\displaystyle F_{Y|X}(a)=0.5.}
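Both of the points above, the decomposition of MAE into quantity and allocation disagreement and the fact that a median minimises the expected absolute error while the mean minimises the squared error, are easy to check numerically. The following NumPy sketch uses made-up values and a grid search over candidate constants c; it is illustrative only.

import numpy as np

# MAE and its two components (quantity vs. allocation disagreement).
y = np.array([2.5, 0.0, 2.1, 7.8])        # predictions (illustrative values)
x = np.array([3.0, -0.5, 2.0, 8.0])       # observations
errors = y - x
mae = np.mean(np.abs(errors))
quantity = np.abs(np.mean(errors))        # absolute value of the mean error
allocation = mae - quantity
print(round(mae, 3), round(quantity, 3), round(allocation, 3))

# The constant c minimising the mean absolute error is (approximately) the
# sample median, whereas the constant minimising the mean squared error is the mean.
rng = np.random.default_rng(7)
X = rng.exponential(scale=2.0, size=50_000)      # a skewed distribution
c_grid = np.linspace(0.0, 6.0, 601)
mae_curve = np.array([np.mean(np.abs(X - c)) for c in c_grid])
mse_curve = np.array([np.mean((X - c) ** 2) for c in c_grid])
print(round(c_grid[mae_curve.argmin()], 2), round(np.median(X), 2))
print(round(c_grid[mse_curve.argmin()], 2), round(np.mean(X), 2))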
https://en.wikipedia.org/wiki/Mean_absolute_error
Inmathematical modeling,overfittingis "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".[1]Anoverfitted modelis amathematical modelthat contains moreparametersthan can be justified by the data.[2]In the special case where the model consists of a polynomial function, these parameters represent thedegree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., thenoise) as if that variation represented underlying model structure.[3]: 45 Underfittingoccurs when a mathematical model cannot adequately capture the underlying structure of the data. Anunder-fitted modelis a model where some parameters or terms that would appear in a correctly specified model are missing.[2]Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used forselecting the modelis not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set oftraining data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit.[4]Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known asshrinkage).[2]In particular, the value of thecoefficient of determinationwillshrinkrelative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g.,model comparison,cross-validation,regularization,early stopping,pruning,Bayesian priors, ordropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter. In statistics, aninferenceis drawn from astatistical model, which has beenselectedvia some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony".[3]The authors also state the following.[3]: 32–33 Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). 
False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting. Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The bookModel Selection and Model Averaging(2008) puts it this way.[5] Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is themonkey who typed Hamletactually a good writer? Inregression analysis, overfitting occurs frequently.[6]As an extreme example, if there arepvariables in alinear regressionwithpdata points, the fitted line can go exactly through every point.[7]Forlogistic regressionor Coxproportional hazards models, there are a variety of rules of thumb (e.g. 5–9,[8]10[9]and 10–15[10]— the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. Thebias–variance tradeoffis often used to overcome overfit models. With a large set ofexplanatory variablesthat actually have no relation to thedependent variablebeing predicted, some variables will in general be falsely found to bestatistically significantand the researcher may thus retain them in the model, thereby overfitting the model. This is known asFreedman's paradox. Usually, a learningalgorithmis trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training. Overfitting is the use of models or procedures that violateOccam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data forycan be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function isa prioriless probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.[11] When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. 
For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) withmparameters to a regression model withnparameters.[11] Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have nocausal relationto thetarget function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse. As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again. Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust." The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include: The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods likeminimum spanning treeorlife-time of correlationthat applies the dependence between correlation coefficients and time-series (window width). Whenever the window width is big enough, the correlation coefficients are stable and don't depend on the window width size anymore. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized. Dropout regularisation (random removal of training set data) can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer. Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: lowbiasand highvariance). This can be gathered from theBias-variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the result of the model is that it will inaccurately represent the data points and thus insufficiently be able to predict future data results (seeGeneralization error). As shown in Figure 5, the linear line could not represent all the given data points due to the line not resembling the curvature of the points. 
We would expect to see a parabola-shaped curve, as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would get false predictive results, contrary to the results obtained by analyzing Figure 6. Burnham & Anderson state the following.[3]: 32 ... an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings. There are multiple ways to deal with underfitting, such as increasing the complexity of the model, adding more informative input features, or reducing the amount of regularization. Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.[16]
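To make the overfitting/underfitting tradeoff concrete, the following is a minimal sketch, not taken from the article, that fits polynomials of increasing degree to a small synthetic dataset with NumPy; the data, the train/validation split, and the chosen degrees are illustrative assumptions. A degree that is too low underfits (high error everywhere), while a degree that is far too high drives the training error down but inflates the validation error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a quadratic trend plus noise (illustrative choice).
x = np.linspace(-3, 3, 30)
y = 0.5 * x**2 + rng.normal(scale=1.0, size=x.shape)

# Hold out every other point as a validation set.
train, val = np.arange(0, 30, 2), np.arange(1, 30, 2)

for degree in (1, 2, 9):                        # underfit, reasonable fit, overfit
    coeffs = np.polyfit(x[train], y[train], degree)
    mse_train = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    mse_val = np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_train:.2f}, validation MSE {mse_val:.2f}")
```

Plotting the three fitted curves alongside the points makes the same comparison visually.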
https://en.wikipedia.org/wiki/Overfitting
Stochastic gradient descent(often abbreviatedSGD) is aniterativemethod for optimizing anobjective functionwith suitablesmoothnessproperties (e.g.differentiableorsubdifferentiable). It can be regarded as astochastic approximationofgradient descentoptimization, since it replaces the actual gradient (calculated from the entiredata set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially inhigh-dimensionaloptimization problems this reduces the very highcomputational burden, achieving faster iterations in exchange for a lowerconvergence rate.[1] The basic idea behind stochastic approximation can be traced back to theRobbins–Monro algorithmof the 1950s. Today, stochastic gradient descent has become an important optimization method inmachine learning.[2] Bothstatisticalestimationandmachine learningconsider the problem ofminimizinganobjective functionthat has the form of a sum:Q(w)=1n∑i=1nQi(w),{\displaystyle Q(w)={\frac {1}{n}}\sum _{i=1}^{n}Q_{i}(w),}where theparameterw{\displaystyle w}that minimizesQ(w){\displaystyle Q(w)}is to beestimated. Each summand functionQi{\displaystyle Q_{i}}is typically associated with thei{\displaystyle i}-thobservationin thedata set(used for training). In classical statistics, sum-minimization problems arise inleast squaresand inmaximum-likelihood estimation(for independent observations). The general class of estimators that arise as minimizers of sums are calledM-estimators. However, in statistics, it has been long recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation.[3]Therefore, contemporary statistical theorists often considerstationary pointsof thelikelihood function(or zeros of its derivative, thescore function, and otherestimating equations). The sum-minimization problem also arises forempirical risk minimization. There,Qi(w){\displaystyle Q_{i}(w)}is the value of theloss functionati{\displaystyle i}-th example, andQ(w){\displaystyle Q(w)}is the empirical risk. When used to minimize the above function, a standard (or "batch")gradient descentmethod would perform the following iterations:w:=w−η∇Q(w)=w−ηn∑i=1n∇Qi(w).{\displaystyle w:=w-\eta \,\nabla Q(w)=w-{\frac {\eta }{n}}\sum _{i=1}^{n}\nabla Q_{i}(w).}The step size is denoted byη{\displaystyle \eta }(sometimes called thelearning ratein machine learning) and here ":={\displaystyle :=}" denotes the update of a variable in the algorithm. In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics,one-parameter exponential familiesallow economical function-evaluations and gradient-evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descentsamplesa subset of summand functions at every step. 
This is very effective in the case of large-scale machine learning problems.[4] In stochastic (or "on-line") gradient descent, the true gradient of Q(w){\displaystyle Q(w)} is approximated by a gradient at a single sample: w:=w−η∇Qi(w).{\displaystyle w:=w-\eta \,\nabla Q_{i}(w).} As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges.[5] In pseudocode, stochastic gradient descent can be presented as follows: choose an initial parameter vector w and learning rate η; repeat until an approximate minimum is obtained: randomly shuffle the examples in the training set, and for i = 1, 2, ..., n, update w := w − η∇Qi(w). A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than the "true" stochastic gradient descent described above, because the code can make use of vectorization libraries rather than computing each step separately, as was first shown in[6] where it was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates η{\displaystyle \eta } decrease with an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.[2][7] This is in fact a consequence of the Robbins–Siegmund theorem.[8] Suppose we want to fit a straight line y^=w1+w2x{\displaystyle {\hat {y}}=w_{1}+w_{2}x} to a training set with observations ((x1,y1),(x2,y2)…,(xn,yn)){\displaystyle ((x_{1},y_{1}),(x_{2},y_{2})\ldots ,(x_{n},y_{n}))} and corresponding estimated responses (y^1,y^2,…,y^n){\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\ldots ,{\hat {y}}_{n})} using least squares. The objective function to be minimized is Q(w)=∑i=1nQi(w)=∑i=1n(y^i−yi)2=∑i=1n(w1+w2xi−yi)2.{\displaystyle Q(w)=\sum _{i=1}^{n}Q_{i}(w)=\sum _{i=1}^{n}\left({\hat {y}}_{i}-y_{i}\right)^{2}=\sum _{i=1}^{n}\left(w_{1}+w_{2}x_{i}-y_{i}\right)^{2}.} The last step of the pseudocode above, for this specific problem, becomes: [w1w2]←[w1w2]−η[∂∂w1(w1+w2xi−yi)2∂∂w2(w1+w2xi−yi)2]=[w1w2]−η[2(w1+w2xi−yi)2xi(w1+w2xi−yi)].{\displaystyle {\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}\leftarrow {\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}-\eta {\begin{bmatrix}{\frac {\partial }{\partial w_{1}}}(w_{1}+w_{2}x_{i}-y_{i})^{2}\\{\frac {\partial }{\partial w_{2}}}(w_{1}+w_{2}x_{i}-y_{i})^{2}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}-\eta {\begin{bmatrix}2(w_{1}+w_{2}x_{i}-y_{i})\\2x_{i}(w_{1}+w_{2}x_{i}-y_{i})\end{bmatrix}}.} Note that in each iteration or update step, the gradient is only evaluated at a single xi{\displaystyle x_{i}}. This is the key difference between stochastic gradient descent and batched gradient descent. In general, given a linear regression y^=∑k∈1:mwkxk{\displaystyle {\hat {y}}=\sum _{k\in 1:m}w_{k}x_{k}} problem, stochastic gradient descent behaves differently when m<n{\displaystyle m<n} (underparameterized) and m≥n{\displaystyle m\geq n} (overparameterized).
In the overparameterized case, stochastic gradient descent converges toarg⁡minw:wTxk=yk∀k∈1:n‖w−w0‖{\displaystyle \arg \min _{w:w^{T}x_{k}=y_{k}\forall k\in 1:n}\|w-w_{0}\|}. That is, SGD converges to the interpolation solution with minimum distance from the startingw0{\displaystyle w_{0}}. This is true even when the learning rate remains constant. In the underparameterized case, SGD does not converge if learning rate remains constant.[9] In 1951,Herbert RobbinsandSutton Monrointroduced the earliest stochastic approximation methods, preceding stochastic gradient descent.[10]Building on this work one year later,Jack KieferandJacob Wolfowitzpublishedan optimization algorithmvery close to stochastic gradient descent, usingcentral differencesas an approximation of the gradient.[11]Later in the 1950s,Frank Rosenblattused SGD to optimize hisperceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks.[12] Backpropagationwas first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiplehidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored,[13]paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent withgradient descent.[14] By the 1980s,momentumhad already been introduced, and was added to SGD optimization techniques in 1986.[15]However, these optimization techniques assumed constanthyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad (for "Adaptive Gradient") in 2011[16]and RMSprop (for "Root Mean Square Propagation") in 2012.[17]In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed such as Adadelta, Adagrad, AdamW, and Adamax.[18][19] Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers. TensorFlow and PyTorch, by far the most popular machine learning libraries,[20]as of 2023 largely only include Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supportsLimited-memory BFGS, a line-search method, but only for single-device setups without parameter groups.[19][21] Stochastic gradient descent is a popular algorithm for training a wide range of models inmachine learning, including (linear)support vector machines,logistic regression(see, e.g.,Vowpal Wabbit) andgraphical models.[22]When combined with theback propagationalgorithm, it is thede factostandard algorithm for trainingartificial neural networks.[23]Its use has been also reported in theGeophysicscommunity, specifically to applications of Full Waveform Inversion (FWI).[24] Stochastic gradient descent competes with theL-BFGSalgorithm,[citation needed]which is also widely used. Stochastic gradient descent has been used since at least 1960 for traininglinear regressionmodels, originally under the nameADALINE.[25] Another stochastic gradient descent algorithm is theleast mean squares (LMS)adaptive filter. 
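As a concrete illustration of the basic single-sample update, the following is a minimal NumPy sketch for the straight-line least-squares example above; the synthetic data, the fixed learning rate, and the number of passes are illustrative assumptions rather than part of the original description.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data for the line y = 1 + 2x plus noise (illustrative choice).
n = 200
x = rng.uniform(-1, 1, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=n)

w = np.zeros(2)          # w[0] = intercept w1, w[1] = slope w2
eta = 0.05               # fixed learning rate (illustrative)

for epoch in range(20):
    for i in rng.permutation(n):              # shuffle each pass to prevent cycles
        residual = w[0] + w[1] * x[i] - y[i]  # (w1 + w2*xi - yi)
        # Gradient of Qi(w) = (w1 + w2*xi - yi)^2 with respect to (w1, w2)
        w -= eta * np.array([2 * residual, 2 * residual * x[i]])

print(w)   # should end up close to [1.0, 2.0]
```

In practice the learning rate would usually be decreased over time or chosen adaptively, as discussed below.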
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set alearning rate(step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge.[26]A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing functionηtof the iteration numbert, giving alearning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen onk-means clustering.[27]Practical guidance on choosing the step size in several variants of SGD is given by Spall.[28] As mentioned earlier, classical stochastic gradient descent is generally sensitive tolearning rateη. Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved[29]by consideringimplicit updateswhereby the stochastic gradient is evaluated at the next iterate rather than the current one:wnew:=wold−η∇Qi(wnew).{\displaystyle w^{\text{new}}:=w^{\text{old}}-\eta \,\nabla Q_{i}(w^{\text{new}}).} This equation is implicit sincewnew{\displaystyle w^{\text{new}}}appears on both sides of the equation. It is a stochastic form of theproximal gradient methodsince the update can also be written as:wnew:=arg⁡minw{Qi(w)+12η‖w−wold‖2}.{\displaystyle w^{\text{new}}:=\arg \min _{w}\left\{Q_{i}(w)+{\frac {1}{2\eta }}\left\|w-w^{\text{old}}\right\|^{2}\right\}.} As an example, consider least squares with featuresx1,…,xn∈Rp{\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} ^{p}}and observationsy1,…,yn∈R{\displaystyle y_{1},\ldots ,y_{n}\in \mathbb {R} }. We wish to solve:minw∑j=1n(yj−xj′w)2,{\displaystyle \min _{w}\sum _{j=1}^{n}\left(y_{j}-x_{j}'w\right)^{2},}wherexj′w=xj1w1+xj,2w2+...+xj,pwp{\displaystyle x_{j}'w=x_{j1}w_{1}+x_{j,2}w_{2}+...+x_{j,p}w_{p}}indicates the inner product. Note thatx{\displaystyle x}could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows:wnew=wold+η(yi−xi′wold)xi{\displaystyle w^{\text{new}}=w^{\text{old}}+\eta \left(y_{i}-x_{i}'w^{\text{old}}\right)x_{i}} wherei{\displaystyle i}is uniformly sampled between 1 andn{\displaystyle n}. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, whenη{\displaystyle \eta }is misspecified so thatI−ηxixi′{\displaystyle I-\eta x_{i}x_{i}'}has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast,implicit stochastic gradient descent(shortened as ISGD) can be solved in closed-form as:wnew=wold+η1+η‖xi‖2(yi−xi′wold)xi.{\displaystyle w^{\text{new}}=w^{\text{old}}+{\frac {\eta }{1+\eta \left\|x_{i}\right\|^{2}}}\left(y_{i}-x_{i}'w^{\text{old}}\right)x_{i}.} This procedure will remain numerically stable virtually for allη{\displaystyle \eta }as thelearning rateis now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison betweenleast mean squares (LMS)andnormalized least mean squares filter (NLMS). Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. 
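The contrast between the explicit update and the closed-form implicit update for least squares can be sketched as follows; the random design, the synthetic responses, and the deliberately large learning rate are illustrative assumptions chosen to provoke the instability mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 500, 5
X = rng.normal(size=(n, p))
true_w = rng.normal(size=p)
y = X @ true_w + rng.normal(scale=0.1, size=n)

def run_sgd(eta, implicit, epochs=5):
    w = np.zeros(p)
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[i], y[i]
            resid = yi - xi @ w
            if implicit:
                # Closed-form implicit (ISGD) step, normalized by 1 + eta*||xi||^2
                w = w + (eta / (1.0 + eta * (xi @ xi))) * resid * xi
            else:
                # Classical explicit step; may diverge numerically for large eta
                w = w + eta * resid * xi
    return w

eta = 2.0  # deliberately large learning rate
print("explicit error:", np.linalg.norm(run_sgd(eta, implicit=False) - true_w))
print("implicit error:", np.linalg.norm(run_sgd(eta, implicit=True) - true_w))
```

With this large η the explicit iteration typically blows up, while the normalized implicit step stays bounded; the general, non-least-squares case is handled via the one-dimensional fixed-point problem described next.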
Specifically, suppose thatQi(w){\displaystyle Q_{i}(w)}depends onw{\displaystyle w}only through a linear combination with featuresxi{\displaystyle x_{i}}, so that we can write∇wQi(w)=−q(xi′w)xi{\displaystyle \nabla _{w}Q_{i}(w)=-q(x_{i}'w)x_{i}}, whereq()∈R{\displaystyle q()\in \mathbb {R} }may depend onxi,yi{\displaystyle x_{i},y_{i}}as well but not onw{\displaystyle w}except throughxi′w{\displaystyle x_{i}'w}. Least squares obeys this rule, and so doeslogistic regression, and mostgeneralized linear models. For instance, in least squares,q(xi′w)=yi−xi′w{\displaystyle q(x_{i}'w)=y_{i}-x_{i}'w}, and in logistic regressionq(xi′w)=yi−S(xi′w){\displaystyle q(x_{i}'w)=y_{i}-S(x_{i}'w)}, whereS(u)=eu/(1+eu){\displaystyle S(u)=e^{u}/(1+e^{u})}is thelogistic function. InPoisson regression,q(xi′w)=yi−exi′w{\displaystyle q(x_{i}'w)=y_{i}-e^{x_{i}'w}}, and so on. In such settings, ISGD is simply implemented as follows. Letf(ξ)=ηq(xi′wold+ξ‖xi‖2){\displaystyle f(\xi )=\eta q(x_{i}'w^{\text{old}}+\xi \|x_{i}\|^{2})}, whereξ{\displaystyle \xi }is scalar. Then, ISGD is equivalent to:wnew=wold+ξ∗xi,whereξ∗=f(ξ∗).{\displaystyle w^{\text{new}}=w^{\text{old}}+\xi ^{\ast }x_{i},~{\text{where}}~\xi ^{\ast }=f(\xi ^{\ast }).} The scaling factorξ∗∈R{\displaystyle \xi ^{\ast }\in \mathbb {R} }can be found through thebisection methodsince in most regular models, such as the aforementioned generalized linear models, functionq(){\displaystyle q()}is decreasing, and thus the search bounds forξ∗{\displaystyle \xi ^{\ast }}are[min(0,f(0)),max(0,f(0))]{\displaystyle [\min(0,f(0)),\max(0,f(0))]}. Further proposals include themomentum methodor theheavy ball method, which in ML context appeared inRumelhart,HintonandWilliams' paper on backpropagation learning[30]and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations.[31]Stochastic gradient descent with momentum remembers the updateΔwat each iteration, and determines the next update as alinear combinationof the gradient and the previous update:[32][33]Δw:=αΔw−η∇Qi(w){\displaystyle \Delta w:=\alpha \Delta w-\eta \,\nabla Q_{i}(w)}w:=w+Δw{\displaystyle w:=w+\Delta w}that leads to:w:=w−η∇Qi(w)+αΔw{\displaystyle w:=w-\eta \,\nabla Q_{i}(w)+\alpha \Delta w} where theparameterw{\displaystyle w}which minimizesQ(w){\displaystyle Q(w)}is to beestimated,η{\displaystyle \eta }is a step size (sometimes called thelearning ratein machine learning) andα{\displaystyle \alpha }is an exponentialdecay factorbetween 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change. The name momentum stems from an analogy tomomentumin physics: the weight vectorw{\displaystyle w}, thought of as a particle traveling through parameter space,[30]incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. 
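A minimal sketch of the momentum update just described, again on an illustrative synthetic least-squares problem; the data, the learning rate, and the momentum factor α = 0.9 are assumptions for the example, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small synthetic least-squares problem (illustrative).
n, p = 300, 3
X = rng.normal(size=(n, p))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=n)

w = np.zeros(p)
delta_w = np.zeros(p)      # the remembered update from the previous step
eta, alpha = 0.01, 0.9     # learning rate and momentum (decay) factor

for _ in range(10):
    for i in rng.permutation(n):
        grad_i = -2.0 * (y[i] - X[i] @ w) * X[i]   # gradient of (yi - xi'w)^2
        delta_w = alpha * delta_w - eta * grad_i   # Δw := αΔw − η ∇Qi(w)
        w = w + delta_w                            # w := w + Δw

print(w)   # approaches [1.0, -2.0, 0.5]
```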
Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades.[34] The momentum method is closely related to underdamped Langevin dynamics, and may be combined with simulated annealing.[35] In the mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s.[36] Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of[37] w¯=1t∑i=0t−1wi.{\displaystyle {\bar {w}}={\frac {1}{t}}\sum _{i=0}^{t-1}w_{i}.} When optimization is done, this averaged parameter vector takes the place of w. AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011.[38] Informally, this increases the learning rate for sparser parameters[clarification needed] and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition.[38] It still has a base learning rate η, but this is multiplied with the elements of a vector {Gj,j} which is the diagonal of the outer product matrix G=∑τ=1tgτgτT{\displaystyle G=\sum _{\tau =1}^{t}g_{\tau }g_{\tau }^{\mathsf {T}}} where gτ=∇Qi(w){\displaystyle g_{\tau }=\nabla Q_{i}(w)}, the gradient, at iteration τ. The diagonal is given by Gj,j=∑τ=1tgτ,j2.{\displaystyle G_{j,j}=\sum _{\tau =1}^{t}g_{\tau ,j}^{2}.} This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now[a] w:=w−ηdiag(G)−12⊙g{\displaystyle w:=w-\eta \,\mathrm {diag} (G)^{-{\frac {1}{2}}}\odot g} or, written as per-parameter updates, wj:=wj−ηGj,jgj.{\displaystyle w_{j}:=w_{j}-{\frac {\eta }{\sqrt {G_{j,j}}}}g_{j}.} Each {G(i,i)} gives rise to a scaling factor for the learning rate that applies to a single parameter wi. Since the denominator in this factor, Gi=∑τ=1tgτ2{\textstyle {\sqrt {G_{i}}}={\sqrt {\sum _{\tau =1}^{t}g_{\tau }^{2}}}} is the ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.[34] While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization.[39] RMSProp (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and Ilya Sutskever, at the time both PhD students in Geoffrey Hinton's group, in which the learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.[40] Unusually, it was not published in an article but merely described in a Coursera lecture.[citation needed]
So, first the running average is calculated in terms of the mean square, v(w,t):=γv(w,t−1)+(1−γ)(∇Qi(w))2{\displaystyle v(w,t):=\gamma v(w,t-1)+\left(1-\gamma \right)\left(\nabla Q_{i}(w)\right)^{2}} where γ{\displaystyle \gamma } is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but "forgetting" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data.[citation needed] The parameters are then updated as w:=w−ηv(w,t)∇Qi(w){\displaystyle w:=w-{\frac {\eta }{\sqrt {v(w,t)}}}\nabla Q_{i}(w)} RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches.[40] Adam[41] (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer combining it with the main feature of the Momentum method.[42] In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters w(t){\displaystyle w^{(t)}} and a loss function L(t){\displaystyle L^{(t)}}, where t{\displaystyle t} indexes the current training iteration (starting at 1{\displaystyle 1}), Adam's parameter update is given by: mw(t):=β1mw(t−1)+(1−β1)∇wL(t−1){\displaystyle m_{w}^{(t)}:=\beta _{1}m_{w}^{(t-1)}+\left(1-\beta _{1}\right)\nabla _{w}L^{(t-1)}} vw(t):=β2vw(t−1)+(1−β2)(∇wL(t−1))2{\displaystyle v_{w}^{(t)}:=\beta _{2}v_{w}^{(t-1)}+\left(1-\beta _{2}\right)\left(\nabla _{w}L^{(t-1)}\right)^{2}} m^w(t)=mw(t)1−β1t{\displaystyle {\hat {m}}_{w}^{(t)}={\frac {m_{w}^{(t)}}{1-\beta _{1}^{t}}}} v^w(t)=vw(t)1−β2t{\displaystyle {\hat {v}}_{w}^{(t)}={\frac {v_{w}^{(t)}}{1-\beta _{2}^{t}}}} w(t):=w(t−1)−ηm^w(t)v^w(t)+ε{\displaystyle w^{(t)}:=w^{(t-1)}-\eta {\frac {{\hat {m}}_{w}^{(t)}}{{\sqrt {{\hat {v}}_{w}^{(t)}}}+\varepsilon }}} where ε{\displaystyle \varepsilon } is a small scalar (e.g. 10−8{\displaystyle 10^{-8}}) used to prevent division by 0, and β1{\displaystyle \beta _{1}} (e.g. 0.9) and β2{\displaystyle \beta _{2}} (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting are done element-wise. As the exponential moving averages of the gradient mw(t){\displaystyle m_{w}^{(t)}} and the squared gradient vw(t){\displaystyle v_{w}^{(t)}} are initialized with a vector of 0's, there would be a bias towards zero in the first training iterations. A factor 11−β1/2t{\displaystyle {\tfrac {1}{1-\beta _{1/2}^{t}}}} is introduced to compensate for this bias and get better estimates m^w(t){\displaystyle {\hat {m}}_{w}^{(t)}} and v^w(t){\displaystyle {\hat {v}}_{w}^{(t)}}. The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives.[43][44] Despite this, Adam continues to be used due to its strong performance in practice.[45] The popularity of Adam inspired many variants and enhancements, such as AdamW and Adamax mentioned earlier. Even though sign-based optimization goes back to the aforementioned Rprop, in 2018 researchers tried to simplify Adam by removing the magnitude of the stochastic gradient from being taken into account and only considering its sign.[54][55] Backtracking line search is another variant of gradient descent.
It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which backtracking line search enjoys – which is that f(xn+1)≤f(xn){\displaystyle f(x_{n+1})\leq f(x_{n})} for all n. If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and the learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search. A stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation[citation needed]. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer.[56] However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others.[57][58][59] (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.[60]) Another approach to approximating the Hessian matrix is to replace it with the Fisher information matrix, which transforms the usual gradient into the natural gradient.[61] These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function. When the objective is a nonlinear least-squares loss Q(w)=1n∑i=1nQi(w)=1n∑i=1n(m(w;xi)−yi)2,{\displaystyle Q(w)={\frac {1}{n}}\sum _{i=1}^{n}Q_{i}(w)={\frac {1}{n}}\sum _{i=1}^{n}(m(w;x_{i})-y_{i})^{2},} where m(w;xi){\displaystyle m(w;x_{i})} is the predictive model (e.g., a deep neural network), the objective's structure can be exploited to estimate second-order information using gradients only. The resulting methods are simple and often effective.[62] For small learning rate η{\textstyle \eta }, stochastic gradient descent (wn)n∈N0{\textstyle (w_{n})_{n\in \mathbb {N} _{0}}} can be viewed as a discretization of the gradient flow ODE ddtWt=−∇Q(Wt){\displaystyle {\frac {d}{dt}}W_{t}=-\nabla Q(W_{t})} subject to additional stochastic noise. This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients Qi{\textstyle Q_{i}} are sufficiently smooth. Let T>0{\textstyle T>0} and g:Rd→R{\textstyle g:\mathbb {R} ^{d}\to \mathbb {R} } be a sufficiently smooth test function.
Then, there exists a constantC>0{\textstyle C>0}such that for allη>0{\textstyle \eta >0} maxk=0,…,⌊T/η⌋|E[g(wk)]−g(Wkη)|≤Cη,{\displaystyle \max _{k=0,\dots ,\lfloor T/\eta \rfloor }\left|\mathbb {E} [g(w_{k})]-g(W_{k\eta })\right|\leq C\eta ,} whereE{\textstyle \mathbb {E} }denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme. Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent solutions tostochastic differential equations(SDEs) have been proposed as limiting objects.[63]More precisely, the solution to the SDE dWt=−∇(Q(Wt)+14η|∇Q(Wt)|2)dt+ηΣ(Wt)1/2dBt,{\displaystyle dW_{t}=-\nabla \left(Q(W_{t})+{\tfrac {1}{4}}\eta |\nabla Q(W_{t})|^{2}\right)dt+{\sqrt {\eta }}\Sigma (W_{t})^{1/2}dB_{t},} forΣ(w)=1n2(∑i=1nQi(w)−Q(w))(∑i=1nQi(w)−Q(w))T{\displaystyle \Sigma (w)={\frac {1}{n^{2}}}\left(\sum _{i=1}^{n}Q_{i}(w)-Q(w)\right)\left(\sum _{i=1}^{n}Q_{i}(w)-Q(w)\right)^{T}}wheredBt{\textstyle dB_{t}}denotes theIto-integralwith respect to aBrownian motionis a more precise approximation in the sense that there exists a constantC>0{\textstyle C>0}such that maxk=0,…,⌊T/η⌋|E[g(wk)]−E[g(Wkη)]|≤Cη2.{\displaystyle \max _{k=0,\dots ,\lfloor T/\eta \rfloor }\left|\mathbb {E} [g(w_{k})]-\mathbb {E} [g(W_{k\eta })]\right|\leq C\eta ^{2}.} However this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of thestochastic flowone has to consider SDEs with infinite-dimensional noise.[64]
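The finite-time-horizon statement above can be checked numerically on a toy example where the gradient flow has a closed-form solution; everything below (the per-sample losses Qi(w) = (w − yi)²/2, the horizon T, the step size η, and the number of Monte Carlo runs) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-sample losses Q_i(w) = (w - y_i)^2 / 2, so nabla Q(w) = w - mean(y) and the
# gradient flow dW/dt = -(W - mean(y)) has the closed-form solution
#   W_t = mean(y) + (W_0 - mean(y)) * exp(-t).
y = rng.normal(loc=3.0, scale=1.0, size=50)
y_bar = y.mean()

eta, T, w0 = 0.01, 2.0, 0.0         # step size, time horizon, initial parameter
steps = int(T / eta)

def sgd_run():
    w, trajectory = w0, [w0]
    for _ in range(steps):
        i = rng.integers(len(y))
        w = w - eta * (w - y[i])    # single-sample gradient step for Q_i
        trajectory.append(w)
    return np.array(trajectory)

# Estimate E[g(w_k)] for the test function g(w) = w by averaging independent runs,
# then compare with the gradient-flow solution at times t = k * eta.
mean_trajectory = np.mean([sgd_run() for _ in range(200)], axis=0)
t = eta * np.arange(steps + 1)
flow = y_bar + (w0 - y_bar) * np.exp(-t)

print("max deviation:", np.max(np.abs(mean_trajectory - flow)))   # roughly of order eta
```

Decreasing η while keeping T fixed should shrink the reported deviation, in line with the Cη bound above.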
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
Aneural networkis a group of interconnected units calledneuronsthat send signals to one another. Neurons can be eitherbiological cellsormathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks. In the context of biology, a neural network is a population of biologicalneuronschemically connected to each other bysynapses. A given neuron can be connected to hundreds of thousands of synapses.[1]Each neuron sends and receiveselectrochemicalsignals calledaction potentialsto its connected neighbors. A neuron can serve anexcitatoryrole, amplifying and propagating signals it receives, or aninhibitoryrole, suppressing signals instead.[1] Populations of interconnected neurons that are smaller than neural networks are calledneural circuits. Very large interconnected networks are calledlarge scale brain networks, and many of these together formbrainsandnervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and acrossneuromuscular junctionstomuscle cells, where they cause contraction and thereby motion.[2] In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3]today they are almost always implemented insoftware. Neuronsin an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4]The "signal" input to each neuron is a number, specifically alinear combinationof the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to itsactivation function. The behavior of the network depends on the strengths (orweights) of the connections between neurons. A network is trained by modifying these weights throughempirical risk minimizationorbackpropagationin order to fit some preexisting dataset.[5] The termdeep neural networkrefers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems inartificial intelligence, and have thereby found applications in many disciplines, includingpredictive modeling,adaptive control,facial recognition,handwriting recognition,general game playing, andgenerative AI. The theoretical base for contemporary neural networks was independently proposed byAlexander Bainin 1873[6]andWilliam Jamesin 1890.[7]Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949,Donald HebbdescribedHebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8] Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach ofconnectionism. However, starting with the invention of theperceptron, a simple artificial neural network, byWarren McCullochandWalter Pittsin 1943,[9]followed by the implementation of one in hardware byFrank Rosenblattin 1957,[3]artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
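To illustrate the layered computation described above, here is a minimal NumPy sketch of a forward pass through a tiny feedforward network; the layer sizes, the ReLU and sigmoid activation functions, and the random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)            # one common choice of activation function

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 3 inputs -> 4 hidden units -> 1 output (sizes are illustrative).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    # Each neuron receives a linear combination (the weights) of the previous
    # layer's outputs and applies its activation function to that number.
    h = relu(W1 @ x + b1)                # hidden layer
    return sigmoid(W2 @ h + b2)          # output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```

Training would consist of adjusting W1, b1, W2, b2 so that the outputs fit a dataset, typically via backpropagation.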
https://en.wikipedia.org/wiki/Neural_network#Feedforward_neural_networks
Indigital circuitsandmachine learning, aone-hotis a group ofbitsamong which the legal combinations of values are only those with a single high (1) bit and all the others low (0).[1]A similar implementation in which all bits are '1' except one '0' is sometimes calledone-cold.[2]Instatistics,dummy variablesrepresent a similar technique for representingcategorical data. One-hot encoding is often used for indicating the state of astate machine. When usingbinary, adecoderis needed to determine the state. A one-hot state machine, however, does not need a decoder as the state machine is in thenth state if, and only if, thenth bit is high. Aring counterwith 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15flip-flopschained in series with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state. Anaddress decoderconverts from binary to one-hot representation. Apriority encoderconverts from one-hot representation to binary. Innatural language processing, a one-hot vector is a 1 ×Nmatrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary.[5]The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important. For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'. In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing part.[6] Categorical data can be eithernominalorordinal.[7]Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding.[8]An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data.[9]Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm. For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE).[citation needed] Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. 
Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.[citation needed] Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.[10] In practical usage, this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables. An example would be the dummyVars function of the Caret library in R.[11]
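As a small illustration, one common way to generate such dummy variables in Python is pandas.get_dummies; the toy column below is made up, and the use of drop_first (which keeps k − 1 indicator columns and thereby avoids the exact linear dependence behind the multicollinearity issue mentioned above) is an illustrative choice.

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One new column per unique category, filled with 0/1 indicator values.
one_hot = pd.get_dummies(df, columns=["color"], dtype=int)
print(one_hot)

# Dropping the first level keeps k-1 indicator columns, which avoids the
# exact linear dependence (the dummy-variable trap) noted above.
reduced = pd.get_dummies(df, columns=["color"], drop_first=True, dtype=int)
print(reduced)
```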
https://en.wikipedia.org/wiki/One-hot_encoding
Adecision treeis adecision supportrecursive partitioning structure that uses atree-likemodelof decisions and their possible consequences, includingchanceevent outcomes, resource costs, andutility. It is one way to display analgorithmthat only contains conditional control statements. Decision trees are commonly used inoperations research, specifically indecision analysis,[1]to help identify a strategy most likely to reach a goal, but are also a popular tool inmachine learning. A decision tree is aflowchart-like structure in which each internal node represents a test on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf representclassificationrules. Indecision analysis, a decision tree and the closely relatedinfluence diagramare used as a visual and analytical decision support tool, where theexpected values(orexpected utility) of competing alternatives are calculated. A decision tree consists of three types of nodes:[2] Decision trees are commonly used inoperations researchandoperations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by aprobabilitymodel as a best choice model or online selection modelalgorithm.[citation needed]Another use of decision trees is as a descriptive means for calculatingconditional probabilities. Decision trees,influence diagrams,utility functions, and otherdecision analysistools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research ormanagement sciencemethods. These tools are also used to predict decisions of householders in normal and emergency scenarios.[3][4] Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths). So used manually they can grow very big and are then often hard to draw fully by hand. Traditionally, decision trees have been created manually – as the aside example shows – although increasingly, specialized software is employed. The decision tree can belinearizedintodecision rules,[5]where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form: Decision rules can be generated by constructingassociation ruleswith the target variable on the right. They can also denote temporal or causal relations.[6] Commonly a decision tree is drawn usingflowchartsymbols as it is easier for many to read and understand. Note there is a conceptual error in the "Proceed" calculation of the tree shown below; the error relates to the calculation of "costs" awarded in a legal action. Analysis can take into account the decision maker's (e.g., the company's) preference orutility function, for example: The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B"). Another example, commonly used inoperations researchcourses, is the distribution of lifeguards on beaches (a.k.a. the "Life's a Beach" example).[7]The example describes two beaches with lifeguards to be distributed on each beach. 
There is a maximum budget B that can be distributed among the two beaches (in total), and using a marginal returns table, analysts can decide how many lifeguards to allocate to each beach. In this example, a decision tree can be drawn to illustrate the principles of diminishing returns on beach #1. The decision tree illustrates that when sequentially distributing lifeguards, placing a first lifeguard on beach #1 would be optimal if there is only the budget for 1 lifeguard. But if there is a budget for two guards, then placing both on beach #2 would prevent more overall drownings. Much of the information in a decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events. Decision trees can also be seen as generative models of induction rules from empirical data. An optimal decision tree is then defined as a tree that accounts for most of the data, while minimizing the number of levels (or "questions").[8] Several algorithms to generate such optimal trees have been devised, such as ID3/4/5,[9] CLS, ASSISTANT, and CART. Among decision support tools, decision trees (and influence diagrams) have several advantages. Decision trees: Disadvantages of decision trees: A few things should be considered when improving the accuracy of the decision tree classifier. The following are some possible optimizations to consider when looking to make sure the decision tree model produced makes the correct decision or classification. Note that these are not the only things to consider, but only some of them. The accuracy of the decision tree can change based on the depth of the decision tree. In many cases, the tree's leaves are pure nodes.[11] When a node is pure, it means that all the data in that node belongs to a single class.[12] For example, if the classes in the data set are Cancer and Non-Cancer, a leaf node would be considered pure when all the sample data in a leaf node is part of only one class, either cancer or non-cancer. It is important to note that a deeper tree is not always better when optimizing the decision tree. A deeper tree can influence the runtime in a negative way. If a certain classification algorithm is being used, then a deeper tree could mean the runtime of this classification algorithm is significantly slower. There is also the possibility that the actual algorithm building the decision tree will get significantly slower as the tree gets deeper. If the tree-building algorithm being used splits pure nodes, then a decrease in the overall accuracy of the tree classifier could be experienced. Occasionally, going deeper in the tree can cause an accuracy decrease in general, so it is very important to test modifying the depth of the decision tree and selecting the depth that produces the best results. To summarize, observe the points below, where we define the number D as the depth of the tree. Possible advantages of increasing the number D: Possible disadvantages of increasing D: The ability to test the differences in classification results when changing D is imperative. We must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree model. The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using the information-gain function may yield better results than using the phi function. The phi function is known as a measure of "goodness" of a candidate split at a node in the decision tree.
The information gain function is known as a measure of the "reduction in entropy". In the following, we will build two decision trees. One decision tree will be built using the phi function to split the nodes and one decision tree will be built using the information gain function to split the nodes. The main advantages and disadvantages of information gain and the phi function are compared next. The information gain of a candidate split at node t is the entropy of that node minus the (weighted) entropy of the child nodes produced by the split. The phi function, in contrast, is maximized when the chosen feature splits the samples in a way that produces homogeneous splits, each containing roughly the same number of samples. We will set D, which is the depth of the decision tree we are building, to three (D = 3). We also have the following data set of cancer and non-cancer samples and the mutation features that the samples either have or do not have. If a sample has a feature mutation then the sample is positive for that mutation, and it will be represented by one. If a sample does not have a feature mutation then the sample is negative for that mutation, and it will be represented by zero. To summarize, C stands for cancer and NC stands for non-cancer. The letter M stands for mutation, and if a sample has a particular mutation it will show up in the table as a one and otherwise zero. Now, we can use the formulas to calculate the phi function values and information gain values for each M in the dataset. Once all the values are calculated, the tree can be produced. The first thing to be done is to select the root node. In information gain and the phi function we consider the optimal split to be the mutation that produces the highest value for information gain or the phi function. Now assume that M1 has the highest phi function value and M4 has the highest information gain value. The M1 mutation will be the root of our phi function tree and M4 will be the root of our information gain tree. You can observe the root nodes below. Now, once we have chosen the root node we can split the samples into two groups based on whether a sample is positive or negative for the root node mutation. The groups will be called group A and group B. For example, if we use M1 to split the samples in the root node we get NC2 and C2 samples in group A and the rest of the samples NC4, NC3, NC1, C1 in group B. Disregarding the mutation chosen for the root node, proceed to place the next best features that have the highest values for information gain or the phi function in the left or right child nodes of the decision tree. Once we choose the root node and the two child nodes for the tree of depth = 3, we can just add the leaves. The leaves will represent the final classification decision the model has produced based on the mutations a sample either has or does not have. The left tree is the decision tree we obtain from using information gain to split the nodes and the right tree is what we obtain from using the phi function to split the nodes. Now assume the classification results from both trees are given using a confusion matrix. Information gain confusion matrix: Phi function confusion matrix: The tree using information gain has the same results when using the phi function when calculating the accuracy. Before turning to those results, the sketch below shows how such split scores can be computed.
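The following is a small Python sketch of these split scores. The information_gain function follows the standard entropy-based definition; goodness_of_split is one common "goodness of split" measure used here only as an assumed stand-in for the article's phi function, whose exact formula is not reproduced. The example counts mirror the M1 split described above (one C and one NC sample in group A, the remaining four samples in group B).

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a list of class labels (e.g. 'C' / 'NC')."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    """Entropy of the parent node minus the weighted entropy of the candidate split."""
    n = len(parent)
    children = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - children

def goodness_of_split(parent, left, right):
    """One common 'goodness of split' measure: 2 * P_L * P_R * sum_j |P(j|L) - P(j|R)|.
    (Assumed form standing in for the phi function; the article's formula is not shown.)"""
    n = len(parent)
    p_l, p_r = len(left) / n, len(right) / n
    diff = sum(abs(np.mean(np.asarray(left) == c) - np.mean(np.asarray(right) == c))
               for c in np.unique(parent))
    return 2 * p_l * p_r * diff

# Illustrative candidate split: group A = {C2, NC2}, group B = {C1, NC1, NC3, NC4}.
parent = ["C", "C", "NC", "NC", "NC", "NC"]
left, right = ["C", "NC"], ["C", "NC", "NC", "NC"]
print(information_gain(parent, left, right), goodness_of_split(parent, left, right))
```

Running the same two functions for every mutation M and picking the highest score is exactly the root-selection step described above.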
When we classify the samples based on the model using information gain, we get one true positive, one false positive, zero false negatives, and four true negatives. For the model using the phi function we get two true positives, zero false positives, one false negative, and three true negatives. The next step is to evaluate the effectiveness of the decision tree using some key metrics that will be discussed in the evaluating a decision tree section below. The metrics that will be discussed below can help determine the next steps to be taken when optimizing the decision tree. Building and optimizing a decision tree does not end there, however. There are many techniques for improving the decision tree classification models we build. One of the techniques is making our decision tree model from a bootstrapped dataset. The bootstrapped dataset helps remove the bias that occurs when building a decision tree model with the same data the model is tested with. The ability to leverage the power of random forests can also help significantly improve the overall accuracy of the model being built. This method generates many decisions from many decision trees and tallies up the votes from each decision tree to make the final classification. There are many techniques, but the main objective is to test building your decision tree model in different ways to make sure it reaches the highest performance level possible. It is important to know the measurements used to evaluate decision trees. The main metrics used are accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. All these measurements are derived from the number of true positives, false positives, true negatives, and false negatives obtained when running a set of samples through the decision tree classification model. Also, a confusion matrix can be made to display these results. All these main metrics tell something different about the strengths and weaknesses of the classification model built based on your decision tree. For example, a low sensitivity with high specificity could indicate the classification model built from the decision tree does not do well identifying cancer samples over non-cancer samples. Let us take the confusion matrix below. We will now calculate accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate.
Accuracy:
Accuracy=(TP+TN)/(TP+TN+FP+FN){\displaystyle {\text{Accuracy}}=(TP+TN)/(TP+TN+FP+FN)}
=(11+105)/162=71.60%{\displaystyle =(11+105)/162=71.60\%}
Sensitivity (TPR – true positive rate):[14]
TPR=TP/(TP+FN){\displaystyle {\text{TPR}}=TP/(TP+FN)}
=11/(11+45)=19.64%{\displaystyle =11/(11+45)=19.64\%}
Specificity (TNR – true negative rate):
TNR=TN/(TN+FP){\displaystyle {\text{TNR}}=TN/(TN+FP)}
=105/(105+1)=99.06%{\displaystyle =105/(105+1)=99.06\%}
Precision (PPV – positive predictive value):
PPV=TP/(TP+FP){\displaystyle {\text{PPV}}=TP/(TP+FP)}
=11/(11+1)=91.67%{\displaystyle =11/(11+1)=91.67\%}
Miss rate (FNR – false negative rate):
FNR=FN/(FN+TP){\displaystyle {\text{FNR}}=FN/(FN+TP)}
=45/(45+11)=80.36%{\displaystyle =45/(45+11)=80.36\%}
False discovery rate (FDR):
FDR=FP/(FP+TP){\displaystyle {\text{FDR}}=FP/(FP+TP)}
=1/(1+11)=8.33%{\displaystyle =1/(1+11)=8.33\%}
False omission rate (FOR):
FOR=FN/(FN+TN){\displaystyle {\text{FOR}}=FN/(FN+TN)}
=45/(45+105)=30.00%{\displaystyle =45/(45+105)=30.00\%}
Once we have calculated the key metrics, we can make some initial conclusions on the performance of the decision tree model built. The accuracy that we calculated was 71.60%. The accuracy value is a good start, but we would like to get our models as accurate as possible while maintaining the overall performance. The sensitivity value of 19.64% means that only 19.64% of the samples that were actually positive for cancer were classified as positive. The specificity value of 99.06% means that 99.06% of the samples that were negative for cancer were indeed classified as negative. When it comes to sensitivity and specificity it is important to have a balance between the two values, so if we can give up some specificity to increase the sensitivity, that would prove to be beneficial.[15] These are just a few examples on how to use these values and the meanings behind them to evaluate the decision tree model and improve upon the next iteration.
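A minimal Python sketch of these calculations, using the TP/FP/TN/FN counts from the worked example above, is given below; the counts are taken from the example and everything else is straightforward arithmetic.

```python
# Confusion-matrix counts from the worked example above.
TP, FP, TN, FN = 11, 1, 105, 45

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 71.60%
sensitivity = TP / (TP + FN)                    # true positive rate, 19.64%
specificity = TN / (TN + FP)                    # true negative rate, 99.06%
precision   = TP / (TP + FP)                    # positive predictive value, 91.67%
miss_rate   = FN / (FN + TP)                    # false negative rate, 80.36%
fdr         = FP / (FP + TP)                    # false discovery rate, 8.33%
f_omission  = FN / (FN + TN)                    # false omission rate, 30.00%

metrics = [("accuracy", accuracy), ("sensitivity", sensitivity),
           ("specificity", specificity), ("precision", precision),
           ("miss rate", miss_rate), ("false discovery rate", fdr),
           ("false omission rate", f_omission)]
for name, value in metrics:
    print(f"{name}: {value:.2%}")
```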
https://en.wikipedia.org/wiki/Decision_tree#Applications
Adecision treeis adecision supportrecursive partitioning structure that uses atree-likemodelof decisions and their possible consequences, includingchanceevent outcomes, resource costs, andutility. It is one way to display analgorithmthat only contains conditional control statements. Decision trees are commonly used inoperations research, specifically indecision analysis,[1]to help identify a strategy most likely to reach a goal, but are also a popular tool inmachine learning. A decision tree is aflowchart-like structure in which each internal node represents a test on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf representclassificationrules. Indecision analysis, a decision tree and the closely relatedinfluence diagramare used as a visual and analytical decision support tool, where theexpected values(orexpected utility) of competing alternatives are calculated. A decision tree consists of three types of nodes:[2] Decision trees are commonly used inoperations researchandoperations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by aprobabilitymodel as a best choice model or online selection modelalgorithm.[citation needed]Another use of decision trees is as a descriptive means for calculatingconditional probabilities. Decision trees,influence diagrams,utility functions, and otherdecision analysistools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research ormanagement sciencemethods. These tools are also used to predict decisions of householders in normal and emergency scenarios.[3][4] Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths). So used manually they can grow very big and are then often hard to draw fully by hand. Traditionally, decision trees have been created manually – as the aside example shows – although increasingly, specialized software is employed. The decision tree can belinearizedintodecision rules,[5]where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form: Decision rules can be generated by constructingassociation ruleswith the target variable on the right. They can also denote temporal or causal relations.[6] Commonly a decision tree is drawn usingflowchartsymbols as it is easier for many to read and understand. Note there is a conceptual error in the "Proceed" calculation of the tree shown below; the error relates to the calculation of "costs" awarded in a legal action. Analysis can take into account the decision maker's (e.g., the company's) preference orutility function, for example: The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B"). Another example, commonly used inoperations researchcourses, is the distribution of lifeguards on beaches (a.k.a. the "Life's a Beach" example).[7]The example describes two beaches with lifeguards to be distributed on each beach. 
There is maximum budgetBthat can be distributed among the two beaches (in total), and using a marginal returns table, analysts can decide how many lifeguards to allocate to each beach. In this example, a decision tree can be drawn to illustrate the principles ofdiminishing returnson beach #1. The decision tree illustrates that when sequentially distributing lifeguards, placing a first lifeguard on beach #1 would be optimal if there is only the budget for 1 lifeguard. But if there is a budget for two guards, then placing both on beach #2 would prevent more overall drownings. Much of the information in a decision tree can be represented more compactly as aninfluence diagram, focusing attention on the issues and relationships between events. Decision trees can also be seen asgenerative modelsof induction rules from empirical data. An optimal decision tree is then defined as a tree that accounts for most of the data, while minimizing the number of levels (or "questions").[8]Several algorithms to generate such optimal trees have been devised, such asID3/4/5,[9]CLS, ASSISTANT, and CART. Among decision support tools, decision trees (andinfluence diagrams) have several advantages. Decision trees: Disadvantages of decision trees: A few things should be considered when improving the accuracy of the decision tree classifier. The following are some possible optimizations to consider when looking to make sure the decision tree model produced makes the correct decision or classification. Note that these things are not the only things to consider but only some. Theaccuracyof the decision tree can change based on the depth of the decision tree. In many cases, the tree’s leaves arepurenodes.[11]When a node is pure, it means that all the data in that node belongs to a single class.[12]For example, if the classes in the data set are Cancer and Non-Cancer a leaf node would be considered pure when all the sample data in a leaf node is part of only one class, either cancer or non-cancer. It is important to note that a deeper tree is not always better when optimizing the decision tree. A deeper tree can influence the runtime in a negative way. If a certain classification algorithm is being used, then a deeper tree could mean the runtime of this classification algorithm is significantly slower. There is also the possibility that the actual algorithm building the decision tree will get significantly slower as the tree gets deeper. If the tree-building algorithm being used splits pure nodes, then a decrease in the overall accuracy of the tree classifier could be experienced. Occasionally, going deeper in the tree can cause an accuracy decrease in general, so it is very important to test modifying the depth of the decision tree and selecting the depth that produces the best results. To summarize, observe the points below, we will define the number D as the depth of the tree. Possible advantages of increasing the number D: Possible disadvantages of increasing D The ability to test the differences in classification results when changing D is imperative. We must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree-model. The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using theinformation-gainfunction may yield better results than using the phi function. The phi function is known as a measure of “goodness” of a candidate split at a node in the decision tree. 
The information gain function is known as a measure of the "reduction in entropy". In the following, we will build two decision trees: one using the phi function to split the nodes and one using the information gain function to split the nodes. Each criterion has its own advantages and disadvantages as a split measure. The information gain of a candidate split s at node t can be written as Gain(s, t) = H(t) − H(s, t), i.e. the entropy of the node minus the entropy of the candidate split at node t. The phi function can be written as Φ(s, t) = 2·P_L·P_R·Σ_j |P(j | t_L) − P(j | t_R)|, where P_L and P_R are the proportions of samples sent to the left and right children and P(j | t_L), P(j | t_R) are the proportions of class j in each child. The phi function is maximized when the chosen feature splits the samples in a way that produces homogeneous children containing roughly the same number of samples each. We will set D, the depth of the decision tree we are building, to three (D = 3). We also have the following data set of cancer and non-cancer samples and the mutation features that the samples either have or do not have. If a sample has a feature mutation, the sample is positive for that mutation and is represented by a one; if it does not have the mutation, it is negative and represented by a zero. To summarize, C stands for cancer and NC stands for non-cancer; the letter M stands for mutation, and a sample that has a particular mutation shows up in the table as a one and otherwise as a zero. Now, we can use the formulas to calculate the phi function values and information gain values for each M in the dataset. Once all the values are calculated, the tree can be produced. The first thing to be done is to select the root node. With both information gain and the phi function, the optimal split is the mutation that produces the highest value of the criterion. Now assume that M1 has the highest phi function value and M4 has the highest information gain value. The M1 mutation will then be the root of our phi function tree and M4 will be the root of our information gain tree. Once we have chosen the root node, we can split the samples into two groups based on whether a sample is positive or negative for the root node mutation; the groups will be called group A and group B. For example, if we use M1 to split the samples in the root node, we get the NC2 and C2 samples in group A and the rest of the samples, NC4, NC3, NC1 and C1, in group B. Disregarding the mutation chosen for the root node, we then place the features with the next highest values for information gain or the phi function in the left and right child nodes of the decision tree. Once we have chosen the root node and the two child nodes for the tree of depth D = 3, we can just add the leaves. The leaves represent the final classification decision the model produces based on the mutations a sample either has or does not have. One tree is obtained by using information gain to split the nodes and the other by using the phi function. Now assume the classification results from both trees are summarized in a confusion matrix for each tree. In terms of accuracy alone, the tree built with information gain and the tree built with the phi function give the same result.
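To make the two split criteria above concrete, here is a minimal sketch (not from the source text; the small label and feature arrays are arbitrary stand-ins for the mutation table described above, which is not reproduced here) that computes the entropy-based information gain and the phi "goodness" measure for one candidate binary split:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature):
    """Entropy of the parent node minus the weighted entropy of the candidate split."""
    parent = entropy(labels)
    split_entropy = 0.0
    for value in (0, 1):                       # binary feature: has / lacks the mutation
        mask = feature == value
        if mask.any():
            split_entropy += mask.mean() * entropy(labels[mask])
    return parent - split_entropy

def phi(labels, feature):
    """Phi 'goodness of split': 2 * P_L * P_R * sum_j |P(j|left) - P(j|right)|."""
    left, right = labels[feature == 1], labels[feature == 0]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    p_l, p_r = len(left) / len(labels), len(right) / len(labels)
    classes = np.unique(labels)
    diff = sum(abs(np.mean(left == c) - np.mean(right == c)) for c in classes)
    return 2 * p_l * p_r * diff

# Hypothetical stand-in: cancer (1) / non-cancer (0) labels and one mutation column.
labels  = np.array([1, 1, 0, 0, 0, 0])   # C1, C2, NC1..NC4
feature = np.array([1, 0, 0, 1, 0, 0])   # whether each sample carries mutation M1 (made up)

print(information_gain(labels, feature), phi(labels, feature))
```

In a tree-building loop, the same two functions would simply be evaluated for every candidate mutation and the highest-scoring one chosen as the split, as described above.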
When we classify the samples based on the model using information gain we get one true positive, one false positive, zero false negatives, and four true negatives. For the model using the phi function we get two true positives, zero false positives, one false negative, and three true negatives. The next step is to evaluate the effectiveness of the decision tree using some key metrics that will be discussed in the evaluating a decision tree section below. The metrics that will be discussed below can help determine the next steps to be taken when optimizing the decision tree. The above information is not where it ends for building and optimizing a decision tree. There are many techniques for improving the decision tree classification models we build. One of the techniques is making our decision tree model from abootstrappeddataset. The bootstrapped dataset helps remove the bias that occurs when building a decision tree model with the same data the model is tested with. The ability to leverage the power ofrandom forestscan also help significantly improve the overall accuracy of the model being built. This method generates many decisions from many decision trees and tallies up the votes from each decision tree to make the final classification. There are many techniques, but the main objective is to test building your decision tree model in different ways to make sure it reaches the highest performance level possible. It is important to know the measurements used to evaluate decision trees. The main metrics used areaccuracy,sensitivity,specificity,precision,miss rate,false discovery rate, andfalse omission rate. All these measurements are derived from the number oftrue positives,false positives,True negatives, andfalse negativesobtained when running a set of samples through the decision tree classification model. Also, a confusion matrix can be made to display these results. All these main metrics tell something different about the strengths and weaknesses of the classification model built based on your decision tree. For example, a low sensitivity with high specificity could indicate the classification model built from the decision tree does not do well identifying cancer samples over non-cancer samples. Let us take the confusion matrix below. We will now calculate the values accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. 
Accuracy: {\displaystyle {\text{Accuracy}}=(TP+TN)/(TP+TN+FP+FN)=(11+105)/162=71.60\%}

Sensitivity (TPR – true positive rate):[14] {\displaystyle {\text{TPR}}=TP/(TP+FN)=11/(11+45)=19.64\%}

Specificity (TNR – true negative rate): {\displaystyle {\text{TNR}}=TN/(TN+FP)=105/(105+1)=99.06\%}

Precision (PPV – positive predictive value): {\displaystyle {\text{PPV}}=TP/(TP+FP)=11/(11+1)=91.66\%}

Miss rate (FNR – false negative rate): {\displaystyle {\text{FNR}}=FN/(FN+TP)=45/(45+11)=80.35\%}

False discovery rate (FDR): {\displaystyle {\text{FDR}}=FP/(FP+TP)=1/(1+11)=8.33\%}

False omission rate (FOR): {\displaystyle {\text{FOR}}=FN/(FN+TN)=45/(45+105)=30.00\%}

Once we have calculated the key metrics, we can draw some initial conclusions about the performance of the decision tree model. The accuracy we calculated is 71.60%, which is a reasonable starting point, but we would like our models to be as accurate as possible while maintaining overall performance. The sensitivity value of 19.64% means that only 19.64% of the samples that were actually positive for cancer were classified as positive, while the specificity value of 99.06% means that 99.06% of the samples that were negative for cancer were correctly classified as negative. When it comes to sensitivity and specificity it is important to have a balance between the two values, so trading some of the very high specificity for higher sensitivity would prove beneficial here.[15] These are just a few examples of how to use these values and the meanings behind them to evaluate the decision tree model and improve upon the next iteration.
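As a small sketch (not part of the source text), all of the metrics above can be reproduced directly from the four confusion-matrix counts used in the worked example:

```python
# Confusion-matrix counts from the worked example above.
TP, FP, FN, TN = 11, 1, 45, 105

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 116/162 ~ 0.7160
sensitivity = TP / (TP + FN)                    # true positive rate, 11/56 ~ 0.1964
specificity = TN / (TN + FP)                    # true negative rate, 105/106 ~ 0.9906
precision   = TP / (TP + FP)                    # positive predictive value, 11/12 ~ 0.9167
miss_rate   = FN / (FN + TP)                    # false negative rate, 45/56 ~ 0.8036
fdr         = FP / (FP + TP)                    # false discovery rate, 1/12 ~ 0.0833
for_rate    = FN / (FN + TN)                    # false omission rate, 45/150 = 0.30

print(f"accuracy={accuracy:.2%}  sensitivity={sensitivity:.2%}  specificity={specificity:.2%}")
print(f"precision={precision:.2%}  miss rate={miss_rate:.2%}  FDR={fdr:.2%}  FOR={for_rate:.2%}")
```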
https://en.wikipedia.org/wiki/Decision_tree#Interpretability
Instatisticsandmachine learning,lasso(least absolute shrinkage and selection operator; alsoLasso,LASSOorL1 regularization)[1]is aregression analysismethod that performs bothvariable selectionandregularizationin order to enhance the prediction accuracy and interpretability of the resultingstatistical model. The lasso method assumes that the coefficients of the linear model are sparse, meaning that few of them are non-zero. It was originally introduced ingeophysics,[2]and later byRobert Tibshirani,[3]who coined the term. Lasso was originally formulated forlinear regressionmodels. This simple case reveals a substantial amount about the estimator. These include its relationship toridge regressionandbest subset selectionand the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates do not need to be unique ifcovariatesarecollinear. Though originally defined for linear regression, lasso regularization is easily extended to other statistical models includinggeneralized linear models,generalized estimating equations,proportional hazards models, andM-estimators.[3][4]Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms ofgeometry,Bayesian statisticsandconvex analysis. The LASSO is closely related tobasis pursuit denoising. Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model.[3][2] Lasso was developed independently in geophysics literature in 1986, based on prior work that used theℓ1{\displaystyle \ell ^{1}}penaltyfor both fitting and penalization of the coefficients. StatisticianRobert Tibshiraniindependently rediscovered and popularized it in 1996, based onBreiman's nonnegative garrote.[2][5] Prior to lasso, the most widely used method for choosing covariates wasstepwise selection. That approach only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can increase prediction error.[6] At the time,ridge regressionwas the most popular technique for improving prediction accuracy. Ridge regression improves prediction error byshrinkingthe sum of the squares of theregression coefficientsto be less than a fixed value in order to reduceoverfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable. Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, excluding them from impacting prediction. This idea is similar to ridge regression, which also shrinks the size of the coefficients; however, ridge regression does not set coefficients to zero (and, thus, does not performvariable selection). Consider a sample consisting ofNcases, each of which consists ofpcovariatesand a single outcome. Letyi{\displaystyle y_{i}}be the outcome andxi:=(x1,x2,…,xp)i⊺{\displaystyle x_{i}:=(x_{1},x_{2},\ldots ,x_{p})_{i}^{\intercal }}be the covariate vector for theithcase. 
Then the objective of lasso is to solve:[3]minβ0,β{∑i=1N(yi−β0−xi⊺β)2}{\displaystyle \min _{\beta _{0},\beta }{\biggl \{}\sum _{i=1}^{N}{\bigl (}y_{i}-\beta _{0}-x_{i}^{\intercal }\beta {\bigr )}^{2}{\biggr \}}}subject to∑j=1p|βj|≤t.{\displaystyle \sum _{j=1}^{p}|\beta _{j}|\leq t.} Hereβ0{\displaystyle \beta _{0}}is the constant coefficient,β:=(β1,β2,…,βp){\displaystyle \beta :=(\beta _{1},\beta _{2},\ldots ,\beta _{p})}is the coefficient vector, andt{\displaystyle t}is a prespecified free parameter that determines the degree of regularization. LettingX{\displaystyle X}be the covariate matrix, so thatXij=(xi)j{\displaystyle X_{ij}=(x_{i})_{j}}andxi⊺{\displaystyle x_{i}^{\intercal }}is theithrow ofX{\displaystyle X}, the expression can be written more compactly asminβ0,β{‖y−β0−Xβ‖22}subject to‖β‖1≤t,{\displaystyle \min _{\beta _{0},\beta }\left\{\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}{\text{ subject to }}\|\beta \|_{1}\leq t,}where‖u‖p=(∑i=1N|ui|p)1/p{\displaystyle \|u\|_{p}={\biggl (}\sum _{i=1}^{N}|u_{i}|^{p}{\biggr )}^{1/p}}is the standardℓp{\displaystyle \ell ^{p}}norm. Denoting the scalar mean of the data pointsxi{\displaystyle x_{i}}byx¯{\displaystyle {\bar {x}}}and the mean of the response variablesyi{\displaystyle y_{i}}byy¯{\displaystyle {\bar {y}}}, the resulting estimate forβ0{\displaystyle \beta _{0}}isβ^0=y¯−x¯⊺β{\displaystyle {\hat {\beta }}_{0}={\bar {y}}-{\bar {x}}^{\intercal }\beta }, so thatyi−β^0−xi⊺β=yi−(y¯−x¯⊺β)−xi⊺β=(yi−y¯)−(xi−x¯)⊺β,{\displaystyle y_{i}-{\hat {\beta }}_{0}-x_{i}^{\intercal }\beta =y_{i}-({\bar {y}}-{\bar {x}}^{\intercal }\beta )-x_{i}^{\intercal }\beta =(y_{i}-{\bar {y}})-(x_{i}-{\bar {x}})^{\intercal }\beta ,}and therefore it is standard to work with variables that have been made zero-mean. Additionally, the covariates are typicallystandardized(∑i=1Nxi2=1){\textstyle {\bigl (}\sum _{i=1}^{N}x_{i}^{2}=1{\bigr )}}so that the solution does not depend on the measurement scale. It can be helpful to rewriteminβ∈Rp{1N‖y−Xβ‖22}subject to‖β‖1≤t.{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}\right\}{\text{ subject to }}\|\beta \|_{1}\leq t.}in the so-calledLagrangianformminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖1}{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda \|\beta \|_{1}\right\}}where the exact relationship betweent{\displaystyle t}andλ{\displaystyle \lambda }is data dependent. Some basic properties of the lasso estimator can now be considered. 
Assuming first that the covariates areorthonormalso thatxi⊺xj=δij,{\displaystyle \ x_{i}^{\intercal }x_{j}=\delta _{ij}\ ,}whereδij{\displaystyle \ \delta _{ij}\ }is theKronecker delta, or, equivalently,X⊺X=I,{\displaystyle \ X^{\intercal }X=I\ ,}then usingsubgradient methodsit can be shown that[3]β^j=SN,λ⁡(β^jOLS)=β^jOLS⋅max{0,1−Nλ|β^jOLS|}{\displaystyle \,{\begin{aligned}{\hat {\beta }}_{j}\ =\ {}&\operatorname {S} _{N,\lambda }\left({\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\right)\ =\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\cdot \max \!\left\{\ 0,\ 1-{\frac {\ N\ \lambda \ }{\ {\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}\ }}\ \right\}\end{aligned}}\,} whereβ^jOLS=(X⊺X)−1X⊺y=X⊺y.{\displaystyle \quad {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\ =\ (X^{\intercal }X)^{-1}X^{\intercal }y\ =\ X^{\intercal }y~.} Sα{\displaystyle \ S_{\alpha }\ }is referred to as thesoft thresholding operator, since it translates values towards zero (making them exactly zero in the limit as they themselves approach zero) instead of setting smaller values to zero and leaving larger ones untouched as thehard thresholding operator, often denotedHα,{\displaystyle \ H_{\alpha }\ ,}would. In ridge regression the objective is to minimizeminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖22}{\displaystyle \ \min _{\beta \in \mathbb {R} ^{p}}\left\{~{\tfrac {\ 1\ }{N}}{\Bigl \|}\ y-X\ \beta \ {\Bigr \|}_{2}^{2}\ +\ \lambda \ {\Bigl \|}\ \beta \ {\Bigr \|}_{2}^{2}~\right\}\ } UsingX⊺X=I{\displaystyle \ X^{\intercal }X=I\ }and the ridge regression formula:β^=(X⊺X+NλI)−1X⊺y,{\displaystyle \ {\hat {\beta }}={\Bigl (}\ X^{\intercal }X\ +\ N\ \lambda \ I\ {\Bigr )}^{-1}X^{\intercal }y\ ,}[7]yields:β^j=(1+Nλ)−1β^jOLS.{\displaystyle \ {\hat {\beta }}_{j}=\left(1+N\ \lambda \right)^{-1}\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}~.} Ridge regression shrinks all coefficients by a uniform factor of(1+Nλ)−1{\displaystyle \ (1+N\lambda )^{-1}\ }and does not set any coefficients to zero.[8] It can also be compared to regression withbest subset selection, in which the goal is to minimizeminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖0}{\displaystyle \ \min _{\beta \in \mathbb {R} ^{p}}\left\{~{\tfrac {1}{N}}{\Bigl \|}\ y-X\beta \ {\Bigr \|}_{2}^{2}\ +\ \lambda \ {\Bigl \|}\ \beta \ {\Bigr \|}_{0}~\right\}\ }where‖⋅‖0{\displaystyle \ \|\cdot \|_{0}\ }is the "ℓ0{\displaystyle \ \ell ^{0}\ }norm", which is defined as‖z‖=m{\displaystyle \ \|z\|=m\ }if exactlymcomponents ofzare nonzero. In this case, it can be shown thatβ^j=HNλ(β^jOLS)=β^jOLS⋅I⁡[|β^jOLS|≥Nλ]{\displaystyle \ {\hat {\beta }}_{j}\ =\ H_{\sqrt {N\lambda \ }}\ \left(\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\ \right)\ =\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\cdot \operatorname {\mathbb {I} } \left[~{\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}\geq {\sqrt {N\ \lambda \ }}~\right]\ }whereHα{\displaystyle \ H_{\alpha }\ }is again the hard thresholding operator andI{\displaystyle \ \mathbb {I} \ }is anindicator function(it is1if its argument is true and0otherwise). Therefore, the lasso estimates share features of both ridge and best subset selection regression since they both shrink the magnitude of all the coefficients, like ridge regression and set some of them to zero, as in the best subset selection case. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it. 
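A minimal sketch of this comparison, assuming the orthonormal-design setting above and folding the regularization constants into a single threshold or shrinkage parameter, shows how the three estimators reduce to simple element-wise transformations of the OLS coefficients:

```python
import numpy as np

def soft_threshold(b_ols, thresh):
    """Lasso under an orthonormal design: translate towards zero and clip at zero."""
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - thresh, 0.0)

def hard_threshold(b_ols, thresh):
    """Best subset selection under an orthonormal design: keep or kill, no shrinkage."""
    return b_ols * (np.abs(b_ols) >= thresh)

def ridge_shrink(b_ols, shrink):
    """Ridge under an orthonormal design: scale every coefficient by the same factor."""
    return b_ols / (1.0 + shrink)

b_ols = np.array([3.0, 1.5, 0.4, -0.2, -2.5])
print(soft_threshold(b_ols, 1.0))   # small coefficients become exactly zero, large ones shrink by 1
print(hard_threshold(b_ols, 1.0))   # small coefficients become zero, large ones are untouched
print(ridge_shrink(b_ols, 1.0))     # everything is halved, nothing becomes zero
```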
In one special case two covariates, sayjandk, are identical for each observation, so thatx(j)=x(k){\displaystyle x_{(j)}=x_{(k)}}, wherex(j),i=x(k),i{\displaystyle x_{(j),i}=x_{(k),i}}. Then the values ofβj{\displaystyle \beta _{j}}andβk{\displaystyle \beta _{k}}that minimize the lasso objective function are not uniquely determined. In fact, if someβ^{\displaystyle {\hat {\beta }}}in whichβ^jβ^k≥0{\displaystyle {\hat {\beta }}_{j}{\hat {\beta }}_{k}\geq 0}, then ifs∈[0,1]{\displaystyle s\in [0,1]}replacingβ^j{\displaystyle {\hat {\beta }}_{j}}bys(β^j+β^k){\displaystyle s({\hat {\beta }}_{j}+{\hat {\beta }}_{k})}andβ^k{\displaystyle {\hat {\beta }}_{k}}by(1−s)(β^j+β^k){\displaystyle (1-s)({\hat {\beta }}_{j}+{\hat {\beta }}_{k})}, while keeping all the otherβ^i{\displaystyle {\hat {\beta }}_{i}}fixed, gives a new solution, so the lasso objective function then has a continuum of valid minimizers.[9]Several variants of the lasso, including theElastic net regularization, have been designed to address this shortcoming. Lasso regularization can be extended to other objective functions such as those forgeneralized linear models,generalized estimating equations,proportional hazards models, andM-estimators.[3][4]Given the objective function1N∑i=1Nf(xi,yi,α,β){\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}f(x_{i},y_{i},\alpha ,\beta )}the lasso regularized version of the estimatorsthe solution tominα,β1N∑i=1Nf(xi,yi,α,β)subject to‖β‖1≤t{\displaystyle \min _{\alpha ,\beta }{\frac {1}{N}}\sum _{i=1}^{N}f(x_{i},y_{i},\alpha ,\beta ){\text{ subject to }}\|\beta \|_{1}\leq t}where onlyβ{\displaystyle \beta }is penalized whileα{\displaystyle \alpha }is free to take any allowed value, just asβ0{\displaystyle \beta _{0}}was not penalized in the basic case. Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective functionminβ0,β{1N‖y−β0−Xβ‖22}{\displaystyle \min _{\beta _{0},\beta }\left\{{\frac {1}{N}}\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}}but with respect to different constraints:‖β‖1≤t{\displaystyle \|\beta \|_{1}\leq t}for lasso and‖β‖22≤t{\displaystyle \|\beta \|_{2}^{2}\leq t}for ridge. The figure shows that the constraint region defined by theℓ1{\displaystyle \ell ^{1}}norm is a square rotated so that its corners lie on the axes (in general across-polytope), while the region defined by theℓ2{\displaystyle \ell ^{2}}norm is a circle (in general ann-sphere), which isrotationallyinvariantand, therefore, has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to encounter a corner (or a higher-dimensional equivalent) of a hypercube, for which some components ofβ{\displaystyle \beta }are identically zero, while in the case of ann-sphere, the points on the boundary for which some of the components ofβ{\displaystyle \beta }are zero are not distinguished from the others and the convex object is no more likely to contact a point at which some components ofβ{\displaystyle \beta }are zero than one for which none of them are. The lasso can be rescaled so that it becomes easy to anticipate and influence the degree of shrinkage associated with a given value ofλ{\displaystyle \lambda }.[10]It is assumed thatX{\displaystyle X}is standardized with z-scores and thaty{\displaystyle y}is centered (zero mean). 
Letβ0{\displaystyle \beta _{0}}represent the hypothesized regression coefficients and letbOLS{\displaystyle b_{\text{OLS}}}refer to the data-optimized ordinary least squares solutions. We can then define theLagrangianas a tradeoff between the in-sample accuracy of the data-optimized solutions and the simplicity of sticking to the hypothesized values.[11]This results inminβ∈Rp{(y−Xβ)′(y−Xβ)(y−Xβ0)′(y−Xβ0)+2λ∑i=1p|βi−β0,i|qi}{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {(y-X\beta )'(y-X\beta )}{(y-X\beta _{0})'(y-X\beta _{0})}}+2\lambda \sum _{i=1}^{p}{\frac {|\beta _{i}-\beta _{0,i}|}{q_{i}}}\right\}}whereqi{\displaystyle q_{i}}is specified below and the "prime" symbol stands for transpose. The first fraction represents relative accuracy, the second fraction relative simplicity, andλ{\displaystyle \lambda }balances between the two. Given a single regressor, relative simplicity can be defined by specifyingqi{\displaystyle q_{i}}as|bOLS−β0|{\displaystyle |b_{\text{OLS}}-\beta _{0}|}, which is the maximum amount of deviation fromβ0{\displaystyle \beta _{0}}whenλ=0{\displaystyle \lambda =0}. Assuming thatβ0=0{\displaystyle \beta _{0}=0}, the solution path can be defined in terms ofR2{\displaystyle R^{2}}:bℓ1={(1−λ/R2)bOLSifλ≤R2,0ifλ>R2.{\displaystyle b_{\ell _{1}}={\begin{cases}(1-\lambda /R^{2})b_{\text{OLS}}&{\mbox{if }}\lambda \leq R^{2},\\0&{\mbox{if }}\lambda >R^{2}.\end{cases}}}Ifλ=0{\displaystyle \lambda =0}, the ordinary least squares solution (OLS) is used. The hypothesized value ofβ0=0{\displaystyle \beta _{0}=0}is selected ifλ{\displaystyle \lambda }is bigger thanR2{\displaystyle R^{2}}. Furthermore, ifR2=1{\displaystyle R^{2}=1}, thenλ{\displaystyle \lambda }represents the proportional influence ofβ0=0{\displaystyle \beta _{0}=0}. In other words,λ×100%{\displaystyle \lambda \times 100\%}measures in percentage terms the minimal amount of influence of the hypothesized value relative to the data-optimized OLS solution. If anℓ2{\displaystyle \ell _{2}}-norm is used to penalize deviations from zero given a single regressor, the solution path is given bybℓ2=(1+λR2(1−λ))−1bOLS.{\displaystyle b_{\ell _{2}}=\left(1+{\frac {\lambda }{R^{2}(1-\lambda )}}\right)^{-1}b_{\text{OLS}}.}Likebℓ1{\displaystyle b_{\ell _{1}}},bℓ2{\displaystyle b_{\ell _{2}}}moves in the direction of the point(λ=R2,b=0){\displaystyle (\lambda =R^{2},b=0)}whenλ{\displaystyle \lambda }is close to zero; but unlikebℓ1{\displaystyle b_{\ell _{1}}}, the influence ofR2{\displaystyle R^{2}}diminishes inbℓ2{\displaystyle b_{\ell _{2}}}ifλ{\displaystyle \lambda }increases (see figure).Given multiple regressors, the moment that a parameter is activated (i.e. allowed to deviate fromβ0{\displaystyle \beta _{0}}) is also determined by a regressor's contribution toR2{\displaystyle R^{2}}accuracy. First,R2=1−(y−Xb)′(y−Xb)(y−Xβ0)′(y−Xβ0).{\displaystyle R^{2}=1-{\frac {(y-Xb)'(y-Xb)}{(y-X\beta _{0})'(y-X\beta _{0})}}.}AnR2{\displaystyle R^{2}}of 75% means that in-sample accuracy improves by 75% if the unrestricted OLS solutions are used instead of the hypothesizedβ0{\displaystyle \beta _{0}}values. The individual contribution of deviating from each hypothesis can be computed with thep{\displaystyle p}xp{\displaystyle p}matrixR⊗=(X′y~0)(X′y~0)′(X′X)−1(y~0′y~0)−1,{\displaystyle R^{\otimes }=(X'{\tilde {y}}_{0})(X'{\tilde {y}}_{0})'(X'X)^{-1}({\tilde {y}}_{0}'{\tilde {y}}_{0})^{-1},}wherey~0=y−Xβ0{\displaystyle {\tilde {y}}_{0}=y-X\beta _{0}}. 
Ifb=bOLS{\displaystyle b=b_{\text{OLS}}}whenR2{\displaystyle R^{2}}is computed, then the diagonal elements ofR⊗{\displaystyle R^{\otimes }}sum toR2{\displaystyle R^{2}}. The diagonalR⊗{\displaystyle R^{\otimes }}values may be smaller than 0 or, less often, larger than 1. If regressors are uncorrelated, then theith{\displaystyle i^{th}}diagonal element ofR⊗{\displaystyle R^{\otimes }}simply corresponds to ther2{\displaystyle r^{2}}value betweenxi{\displaystyle x_{i}}andy{\displaystyle y}. A rescaled version of the adaptive lasso of can be obtained by settingqadaptive lasso,i=|bOLS,i−β0,i|{\displaystyle q_{{\mbox{adaptive lasso}},i}=|b_{{\text{OLS}},i}-\beta _{0,i}|}.[12]If regressors are uncorrelated, the moment that theith{\displaystyle i^{th}}parameter is activated is given by theith{\displaystyle i^{th}}diagonal element ofR⊗{\displaystyle R^{\otimes }}. Assuming for convenience thatβ0{\displaystyle \beta _{0}}is a vector of zeros,bi={(1−λ/Rii⊗)bOLS,iifλ≤Rii⊗,0ifλ>Rii⊗.{\displaystyle b_{i}={\begin{cases}(1-\lambda /R_{ii}^{\otimes })b_{{\text{OLS}},i}&{\text{if }}\lambda \leq R_{ii}^{\otimes },\\0&{\text{if }}\lambda >R_{ii}^{\otimes }.\end{cases}}}That is, if regressors are uncorrelated,λ{\displaystyle \lambda }again specifies the minimal influence ofβ0{\displaystyle \beta _{0}}. Even when regressors are correlated, the first time that a regression parameter is activated occurs whenλ{\displaystyle \lambda }is equal to the highest diagonal element ofR⊗{\displaystyle R^{\otimes }}. These results can be compared to a rescaled version of the lasso by definingqlasso,i=1p∑l|bOLS,l−β0,l|{\displaystyle q_{{\mbox{lasso}},i}={\frac {1}{p}}\sum _{l}|b_{{\text{OLS}},l}-\beta _{0,l}|}, which is the average absolute deviation ofbOLS{\displaystyle b_{\text{OLS}}}fromβ0{\displaystyle \beta _{0}}. Assuming that regressors are uncorrelated, then the moment of activation of theith{\displaystyle i^{th}}regressor is given byλ~lasso,i=1pRi⊗∑l=1pRl⊗.{\displaystyle {\tilde {\lambda }}_{{\text{lasso}},i}={\frac {1}{p}}{\sqrt {R_{i}^{\otimes }}}\sum _{l=1}^{p}{\sqrt {R_{l}^{\otimes }}}.} Forp=1{\displaystyle p=1}, the moment of activation is again given byλ~lasso,i=R2{\displaystyle {\tilde {\lambda }}_{{\text{lasso}},i}=R^{2}}. Ifβ0{\displaystyle \beta _{0}}is a vector of zeros and a subset ofpB{\displaystyle p_{B}}relevant parameters are equally responsible for a perfect fit ofR2=1{\displaystyle R^{2}=1}, then this subset is activated at aλ{\displaystyle \lambda }value of1p{\displaystyle {\frac {1}{p}}}. The moment of activation of a relevant regressor then equals1p1pBpB1pB=1p{\displaystyle {\frac {1}{p}}{\frac {1}{\sqrt {p_{B}}}}p_{B}{\frac {1}{\sqrt {p_{B}}}}={\frac {1}{p}}}. In other words, the inclusion of irrelevant regressors delays the moment that relevant regressors are activated by this rescaled lasso. The adaptive lasso and the lasso are special cases of a '1ASTc' estimator. The latter only groups parameters together if the absolute correlation among regressors is larger than a user-specified value.[10] Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normalprior distributions, lasso can be interpreted as linear regression for which the coefficients haveLaplace prior distributions.[13]The Laplace distribution is sharplypeakedat zero (its first derivative is discontinuous at zero) and it concentrates its probability mass closer to zero than does the normal distribution. 
This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not.[3] Lasso can also be viewed as a convex relaxation of the best subset selection regression problem, which is to find the subset of≤k{\displaystyle \leq k}covariates that results in the smallest value of the objective function for some fixedk≤n{\displaystyle k\leq n}, where n is the total number of covariates. The "ℓ0{\displaystyle \ell ^{0}}norm",‖⋅‖0{\displaystyle \|\cdot \|_{0}}, (the number of nonzero entries of a vector), is the limiting case of "ℓp{\displaystyle \ell ^{p}}norms", of the form‖x‖p=(∑i=1n|xj|p)1/p{\displaystyle \textstyle \|x\|_{p}=\left(\sum _{i=1}^{n}|x_{j}|^{p}\right)^{1/p}}(where the quotation marks signify that these are not really norms forp<1{\displaystyle p<1}since‖⋅‖p{\displaystyle \|\cdot \|_{p}}is not convex forp<1{\displaystyle p<1}, so the triangle inequality does not hold). Therefore, since p = 1 is the smallest value for which the "ℓp{\displaystyle \ell ^{p}}norm" is convex (and therefore actually a norm), lasso is, in some sense, the best convex approximation to the best subset selection problem, since the region defined by‖x‖1≤t{\displaystyle \|x\|_{1}\leq t}is theconvex hullof the region defined by‖x‖p≤t{\displaystyle \|x\|_{p}\leq t}forp<1{\displaystyle p<1}. Lasso variants have been created in order to remedy limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or exploiting dependencies among the covariates. Elastic net regularizationadds an additional ridge regression-like penalty that improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy.[9] Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others.[14]Further extensions of group lasso perform variable selection within individual groups (sparse group lasso) and allow overlap between groups (overlap group lasso).[15][16] Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match system structure.[17]Lasso-regularized models can be fit using techniques includingsubgradient methods,least-angle regression(LARS), andproximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen usingcross-validation. In 2005, Zou and Hastie introduced theelastic net.[9]Whenp>n(the number of covariates is greater than the sample size) lasso can select onlyncovariates (even when more are associated with the outcome) and it tends to select one covariate from any set of highly correlated covariates. Additionally, even whenn>p, ridge regression tends to perform better given strongly correlated covariates. 
The elastic net extends lasso by adding an additionalℓ2{\displaystyle \ell ^{2}}penalty term givingminβ∈Rp{‖y−Xβ‖22+λ1‖β‖1+λ2‖β‖22},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{\left\|y-X\beta \right\|_{2}^{2}+\lambda _{1}\|\beta \|_{1}+\lambda _{2}\|\beta \|_{2}^{2}\right\},}which is equivalent to solvingminβ0,β{‖y−β0−Xβ‖22}subject to(1−α)‖β‖1+α‖β‖22≤t,whereα=λ2λ1+λ2.{\displaystyle {\begin{aligned}\min _{\beta _{0},\beta }\left\{\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}&{\text{ subject to }}(1-\alpha )\|\beta \|_{1}+\alpha \|\beta \|_{2}^{2}\leq t,\\&{\text{ where }}\alpha ={\frac {\lambda _{2}}{\lambda _{1}+\lambda _{2}}}.\end{aligned}}} This problem can be written in a simple lasso formminβ∗∈Rp{‖y∗−X∗β∗‖22+λ∗‖β∗‖1}{\displaystyle \min _{\beta ^{*}\in \mathbb {R} ^{p}}\left\{\left\|y^{*}-X^{*}\beta ^{*}\right\|_{2}^{2}+\lambda ^{*}\|\beta ^{*}\|_{1}\right\}}lettingX(n+p)×p∗=(1+λ2)−1/2(Xλ21/2Ip×p),{\displaystyle X_{(n+p)\times p}^{*}=(1+\lambda _{2})^{-1/2}{\binom {X}{\lambda _{2}^{1/2}I_{p\times p}}},}y(n+p)∗=(y0p),λ∗=λ11+λ2,{\displaystyle y_{(n+p)}^{*}={\binom {y}{0^{p}}},\qquad \lambda ^{*}={\frac {\lambda _{1}}{\sqrt {1+\lambda _{2}}}},}β∗=1+λ2β.{\displaystyle \beta ^{*}={\sqrt {1+\lambda _{2}}}\beta .} Thenβ^=β^∗1+λ2{\displaystyle {\hat {\beta }}={\frac {{\hat {\beta }}^{*}}{\sqrt {1+\lambda _{2}}}}}, which, when the covariates are orthogonal to each other, givesβ^j=β^j∗,OLS1+λ2max(0,1−λ∗|β^j∗,OLS|)=β^jOLS1+λ2max(0,1−λ1|β^jOLS|)=(1+λ2)−1β^jlasso.{\displaystyle {\hat {\beta }}_{j}={\frac {{\hat {\beta }}{}_{j}^{\!\;*,{\text{OLS}}}}{\sqrt {1+\lambda _{2}}}}\max {\Biggl (}0,1-{\frac {\lambda ^{*}}{{\bigl |}{\hat {\beta }}{}_{j}^{\!\;*,{\text{OLS}}}{\bigr |}}}{\Biggr )}={\frac {{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}}{1+\lambda _{2}}}\max {\Biggl (}0,1-{\frac {\lambda _{1}}{{\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}}}{\Biggr )}=(1+\lambda _{2})^{-1}{\hat {\beta }}{}_{j}^{\text{lasso}}.} So the result of the elastic net penalty is a combination of the effects of the lasso and ridge penalties. Returning to the general case, the fact that the penalty function is now strictly convex means that ifx(j)=x(k){\displaystyle x_{(j)}=x_{(k)}},β^j=β^k{\displaystyle {\hat {\beta }}_{j}={\hat {\beta }}_{k}}, which is a change from lasso.[9]In general, ifβ^jβk^>0{\displaystyle {\hat {\beta }}_{j}{\hat {\beta _{k}}}>0}|β^j−βk^|‖y‖≤λ2−12(1−ρjk),whereρ=X⊺X,{\displaystyle {\frac {|{\hat {\beta }}_{j}-{\hat {\beta _{k}}}|}{\|y\|}}\leq \lambda _{2}^{-1}{\sqrt {2(1-\rho _{jk})}},{\text{ where }}\rho =X^{\intercal }X,}is the sample correlation matrix because thex{\displaystyle x}'s are normalized. Therefore, highly correlated covariates tend to have similar regression coefficients, with the degree of similarity depending on both‖y‖1{\displaystyle \|y\|_{1}}andλ2{\displaystyle \lambda _{2}}, which is different from lasso. This phenomenon, in which strongly correlated covariates have similar regression coefficients, is referred to as the grouping effect. Grouping is desirable since, in applications such as tying genes to a disease, finding all the associated covariates is preferable, rather than selecting one from each set of correlated covariates, as lasso often does.[9]In addition, selecting only one from each group typically results in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso). 
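The augmented-data identity above can be sketched directly. In the snippet below the data and penalty values are illustrative; the inner lasso fit uses scikit-learn's Lasso, whose alpha parameter follows a 1/(2n) scaling convention, so the exact correspondence with λ* holds only up to a constant factor:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ np.concatenate([np.ones(3), np.zeros(7)]) + 0.1 * rng.standard_normal(50)

lam1, lam2 = 0.1, 1.0            # illustrative l1 and l2 penalty weights
p = X.shape[1]

# Augmented design and response from the reduction of the elastic net to a lasso problem.
X_star = np.vstack([X, np.sqrt(lam2) * np.eye(p)]) / np.sqrt(1 + lam2)
y_star = np.concatenate([y, np.zeros(p)])

# Solve a lasso on the augmented data; alpha plays the role of the rescaled penalty.
beta_star = Lasso(alpha=lam1 / np.sqrt(1 + lam2), fit_intercept=False).fit(X_star, y_star).coef_

beta_hat = beta_star / np.sqrt(1 + lam2)   # undo the rescaling to recover the elastic-net estimate
print(beta_hat)
```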
In 2006, Yuan and Lin introduced the group lasso to allow predefined groups of covariates to jointly be selected into or out of a model.[14]This is useful in many settings, perhaps most obviously when a categorical variable is coded as a collection of binary covariates. In this case, group lasso can ensure that all the variables encoding the categorical covariate are included or excluded together. Another setting in which grouping is natural is in biological studies. Since genes and proteins often lie in known pathways, which pathways are related to an outcome may be more significant than whether individual genes are. The objective function for the group lasso is a natural generalization of the standard lasso objectiveminβ∈Rp{‖y−∑j=1JXjβj‖22+λ∑j=1J‖βj‖Kj},‖z‖Kj=(z⊺Kjz)1/2{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}{\biggl \{}{\biggl \|}y-\sum _{j=1}^{J}X_{j}\beta _{j}{\biggr \|}_{2}^{2}+\lambda \sum _{j=1}^{J}\|\beta _{j}\|_{K_{j}}{\biggr \}},\qquad \|z\|_{K_{j}}=(z^{\intercal }K_{j}z)^{1/2}}where thedesign matrixX{\displaystyle X}and covariate vectorβ{\displaystyle \beta }have been replaced by a collection of design matricesXj{\displaystyle X_{j}}and covariate vectorsβj{\displaystyle \beta _{j}}, one for each of the J groups. Additionally, the penalty term is now a sum overℓ2{\displaystyle \ell ^{2}}norms defined by the positive definite matricesKj{\displaystyle K_{j}}. If each covariate is in its own group andKj=I{\displaystyle K_{j}=I}, then this reduces to the standard lasso, while if there is only a single group andK1=I{\displaystyle K_{1}=I}, it reduces to ridge regression. Since the penalty reduces to anℓ2{\displaystyle \ell ^{2}}norm on the subspaces defined by each group, it cannot select out only some of the covariates from a group, just as ridge regression cannot. However, because the penalty is the sum over the different subspace norms, as in the standard lasso, the constraint has some non-differential points, which correspond to some subspaces being identically zero. Therefore, it can set the coefficient vectors corresponding to some subspaces to zero, while only shrinking others. However, it is possible to extend the group lasso to the so-called sparse group lasso, which can select individual covariates within a group, by adding an additionalℓ1{\displaystyle \ell ^{1}}penalty to each group subspace.[15]Another extension, group lasso with overlap allows covariates to be shared across groups, e.g., if a gene were to occur in two pathways.[16] The "gglasso" package by in R, allows for fast and efficient implementation of Group LASSO.[18] In some cases, the phenomenon under study may have important spatial or temporal structure that must be considered during analysis, such as time series or image-based data. In 2005, Tibshirani and colleagues introduced the fused lasso to extend the use of lasso to this type of data.[17]The fused lasso objective function isminβ{1N∑i=1N(yi−xi⊺β)2}subject to∑j=1p|βj|≤t1and∑j=2p|βj−βj−1|≤t2.{\displaystyle {\begin{aligned}&\min _{\beta }{\biggl \{}{\frac {1}{N}}\sum _{i=1}^{N}\left(y_{i}-x_{i}^{\intercal }\beta \right)^{2}{\biggr \}}\\[4pt]&{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}|\leq t_{1}{\text{ and }}\sum _{j=2}^{p}|\beta _{j}-\beta _{j-1}|\leq t_{2}.\end{aligned}}} The first constraint is the lasso constraint, while the second directly penalizes large changes with respect to the temporal or spatial structure, which forces the coefficients to vary smoothly to reflect the system's underlying logic. 
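As a small illustration (the coefficient vector and the group assignment below are made up), the two penalty terms just described are straightforward to evaluate for a given coefficient vector:

```python
import numpy as np

def group_lasso_penalty(beta, groups):
    """Sum of l2 norms over predefined groups (the case K_j = I)."""
    return sum(np.linalg.norm(beta[idx]) for idx in groups)

def fused_lasso_penalties(beta):
    """The two fused-lasso constraints: the l1 norm and the sum of successive differences."""
    return np.sum(np.abs(beta)), np.sum(np.abs(np.diff(beta)))

beta = np.array([0.0, 0.0, 1.2, 1.1, 1.0, -0.5])
groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]   # hypothetical grouping

print(group_lasso_penalty(beta, groups))
print(fused_lasso_penalties(beta))
```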
Clustered lasso[19]is a generalization of fused lasso that identifies and groups relevant covariates based on their effects (coefficients). The basic idea is to penalize the differences between the coefficients so that nonzero ones cluster. This can be modeled using the following regularization:∑i<jp|βi−βj|≤t2.{\displaystyle \sum _{i<j}^{p}|\beta _{i}-\beta _{j}|\leq t_{2}.} In contrast, variables can be clustered into highly correlated groups, and then a single representative covariate can be extracted from each cluster.[20] Algorithms exist that solve the fused lasso problem, and some generalizations of it. Algorithms can solve it exactly in a finite number of operations.[21] Lasso, elastic net, group and fused lasso construct the penalty functions from theℓ1{\displaystyle \ell ^{1}}andℓ2{\displaystyle \ell ^{2}}norms (with weights, if necessary). The bridge regression utilises generalℓp{\displaystyle \ell ^{p}}norms (p≥1{\displaystyle p\geq 1}) and quasinorms (0<p<1{\displaystyle 0<p<1}).[23]For example, forp=1/2 the analogue of lasso objective in the Lagrangian form is to solveminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖1/2},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda {\sqrt {\|\beta \|_{1/2}}}\right\},}where‖β‖1/2=(∑j=1p|βj|)2{\displaystyle \|\beta \|_{1/2}={\biggl (}\sum _{j=1}^{p}{\sqrt {|\beta _{j}|}}{\biggr )}^{2}} It is claimed that the fractional quasi-normsℓp{\displaystyle \ell ^{p}}(0<p<1{\displaystyle 0<p<1}) provide more meaningful results in data analysis both theoretically and empirically.[24]The non-convexity of these quasi-norms complicates the optimization problem. To solve this problem, an expectation-minimization procedure is developed[25]and implemented[22]for minimization of functionminβ∈Rp{1N‖y−Xβ‖22+λ∑j=1pϑ(βj2)},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda \sum _{j=1}^{p}\vartheta (\beta _{j}^{2})\right\},}whereϑ(γ){\displaystyle \vartheta (\gamma )}is an arbitrary concave monotonically increasing function (for example,ϑ(γ)=γ{\displaystyle \vartheta (\gamma )={\sqrt {\gamma }}}gives the lasso penalty andϑ(γ)=γ1/4{\displaystyle \vartheta (\gamma )=\gamma ^{1/4}}gives theℓ1/2{\displaystyle \ell ^{1/2}}penalty). The efficient algorithm for minimization is based on piece-wisequadratic approximationof subquadratic growth (PQSQ).[25] The adaptive lasso was introduced by Zou in 2006 for linear regression[12]and by Zhang and Lu in 2007 for proportional hazards regression.[26] The prior lasso was introduced for generalized linear models by Jiang et al. in 2016 to incorporate prior information, such as the importance of certain covariates.[27]In prior lasso, such information is summarized into pseudo responses (called prior responses)y^p{\displaystyle {\hat {y}}^{\mathrm {p} }}and then an additional criterion function is added to the usual objective function with a lasso penalty. 
Without loss of generality, in linear regression, the new objective function can be written asminβ∈Rp{1N‖y−Xβ‖22+1Nη‖y^p−Xβ‖22+λ‖β‖1},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+{\frac {1}{N}}\eta \left\|{\hat {y}}^{\mathrm {p} }-X\beta \right\|_{2}^{2}+\lambda \|\beta \|_{1}\right\},}which is equivalent tominβ∈Rp{1N‖y~−Xβ‖22+λ1+η‖β‖1},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|{\tilde {y}}-X\beta \right\|_{2}^{2}+{\frac {\lambda }{1+\eta }}\|\beta \|_{1}\right\},} the usual lasso objective function with the responsesy{\displaystyle y}being replaced by a weighted average of the observed responses and the prior responsesy~=(y+ηy^p)/(1+η){\displaystyle {\tilde {y}}=(y+\eta {\hat {y}}^{\mathrm {p} })/(1+\eta )}(called the adjusted response values by the prior information). In prior lasso, the parameterη{\displaystyle \eta }is called a balancing parameter, in that it balances the relative importance of the data and the prior information. In the extreme case ofη=0{\displaystyle \eta =0}, prior lasso is reduced to lasso. Ifη=∞{\displaystyle \eta =\infty }, prior lasso will solely rely on the prior information to fit the model. Furthermore, the balancing parameterη{\displaystyle \eta }has another appealing interpretation: it controls the variance ofβ{\displaystyle \beta }in its prior distribution from a Bayesian viewpoint. Prior lasso is more efficient in parameter estimation and prediction (with a smaller estimation error and prediction error) when the prior information is of high quality, and is robust to the low quality prior information with a good choice of the balancing parameterη{\displaystyle \eta }. Lasso can be run in anensemble. This can be especially useful when the data is high-dimensional. The procedure involves running lasso on each of several random subsets of the data and collating the results.[28][29][30] The loss function of the lasso is not differentiable, but a wide variety of techniques from convex analysis and optimization theory have been developed to compute the solutions path of the lasso. These include coordinate descent,[31]subgradient methods,least-angle regression(LARS),[32]and proximal gradient methods.Subgradientmethods are the natural generalization of traditional methods such asgradient descentandstochastic gradient descentto the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit efficiently, though it may not perform well in all circumstances. LARS generates complete solution paths.[32]Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method will depend on the particular lasso variant, the data and the available resources. However, proximal methods generally perform well. The "glmnet" package in R, where "glm" is a reference to "generalized linear models" and "net" refers to the "net" from "elastic net" provides an extremely efficient way to implement LASSO and some of its variants.[33][34][35] The "celer" package in Python provides a highly efficient solver for the Lasso problem, often outperforming traditional solvers like scikit-learn by up to 100 times in certain scenarios, particularly with high-dimensional datasets. This package leverages dual extrapolation techniques to achieve its performance gains.[36][37]The celer package is available atGitHub. 
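A minimal sketch of one of these techniques, cyclic coordinate descent for the Lagrangian-form lasso, is given below. It assumes a centered, intercept-free design and is an illustration of the general idea rather than the glmnet or celer implementation:

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/N)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)              # x_j^T x_j for each coordinate
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual that excludes coordinate j.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # Closed-form one-dimensional update: soft-threshold, then rescale.
            beta[j] = soft_threshold(rho, n * lam / 2) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true_beta = np.zeros(20)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.1 * rng.standard_normal(100)

print(np.round(lasso_coordinate_descent(X, y, lam=0.1), 2))   # most coefficients end up exactly zero
```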
Choosing the regularization parameter (λ{\displaystyle \lambda }) is a fundamental part of lasso. A good value is essential to the performance of lasso since it controls the strength of shrinkage and variable selection, which, in moderation can improve both prediction accuracy and interpretability. However, if the regularization becomes too strong, important variables may be omitted and coefficients may be shrunk excessively, which can harm both predictive capacity and inferencing.Cross-validationis often used to find the regularization parameter. Information criteria such as theBayesian information criterion(BIC) and theAkaike information criterion(AIC) might be preferable to cross-validation, because they are faster to compute and their performance is less volatile in small samples.[38]An information criterion selects the estimator's regularization parameter by maximizing a model's in-sample accuracy while penalizing its effective number of parameters/degrees of freedom. Zou et al. proposed to measure the effective degrees of freedom by counting the number of parameters that deviate from zero.[39]The degrees of freedom approach was considered flawed by Kaufman and Rosset[40]and Janson et al.,[41]because a model's degrees of freedom might increase even when it is penalized harder by the regularization parameter. As an alternative, the relative simplicity measure defined above can be used to count the effective number of parameters.[38]For the lasso, this measure is given byP^=∑i=1p|βi−β0,i|1p∑l|bOLS,l−β0,l|,{\displaystyle {\hat {\mathcal {P}}}=\sum _{i=1}^{p}{\frac {|\beta _{i}-\beta _{0,i}|}{{\frac {1}{p}}\sum _{l}|b_{{\text{OLS}},l}-\beta _{0,l}|}},}which monotonically increases from zero top{\displaystyle p}as the regularization parameter decreases from∞{\displaystyle \infty }to zero. LASSO has been applied in economics and finance, and was found to improve prediction and to select sometimes neglected variables, for example in corporate bankruptcy prediction literature,[42]or high growth firms prediction.[43]
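As a short usage sketch, scikit-learn's LassoCV is one common way to carry out this cross-validated choice; its alpha corresponds to the regularization parameter discussed here up to the library's scaling convention, and the data below are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(200)

# Pick the regularization strength by 5-fold cross-validation over an automatic grid.
model = LassoCV(cv=5).fit(X, y)
print(model.alpha_)                                  # selected regularization parameter
print(np.sum(model.coef_ != 0), "of", X.shape[1], "coefficients kept")
```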
https://en.wikipedia.org/wiki/Lasso_(statistics)
t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two or three-dimensional map. It is based on Stochastic Neighbor Embedding originally developed by Geoffrey Hinton and Sam Roweis,[1] where Laurens van der Maaten and Hinton proposed the t-distributed variant.[2] It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate. A Riemannian variant is UMAP. t-SNE has been used for visualization in a wide range of applications, including genomics, computer security research,[3] natural language processing, music analysis,[4] cancer research,[5] bioinformatics,[6] geological domain interpretation,[7][8][9] and biomedical signal processing.[10] For a data set with n elements, t-SNE runs in O(n²) time and requires O(n²) space.[11] Given a set of N high-dimensional objects x_1, …, x_N, t-SNE first computes probabilities p_{ij} that are proportional to the similarity of objects x_i and x_j, as follows. For i ≠ j, define

{\displaystyle p_{j\mid i}={\frac {\exp \left(-\lVert \mathbf {x} _{i}-\mathbf {x} _{j}\rVert ^{2}/2\sigma _{i}^{2}\right)}{\sum _{k\neq i}\exp \left(-\lVert \mathbf {x} _{i}-\mathbf {x} _{k}\rVert ^{2}/2\sigma _{i}^{2}\right)}}}

and set p_{i∣i} = 0. Note the above denominator ensures Σ_j p_{j∣i} = 1 for all i. As van der Maaten and Hinton explained: "The similarity of datapoint x_j to datapoint x_i is the conditional probability, p_{j|i}, that x_i would pick x_j as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at x_i."[2] Now define

{\displaystyle p_{ij}={\frac {p_{j\mid i}+p_{i\mid j}}{2N}}.}

This is motivated because p_i and p_j from the N samples are estimated as 1/N, so the conditional probabilities can be written as p_{i∣j} = N p_{ij} and p_{j∣i} = N p_{ji}; since p_{ij} = p_{ji}, the formula above follows. Also note that p_{ii} = 0 and Σ_{i,j} p_{ij} = 1. The bandwidth of the Gaussian kernels σ_i is set in such a way that the entropy of the conditional distribution equals a predefined entropy using the bisection method. As a result, the bandwidth is adapted to the density of the data: smaller values of σ_i are used in denser parts of the data space.
The entropy increases with the perplexity of this distribution P_i; this relation is given by

{\displaystyle \operatorname {Perp} (P_{i})=2^{H(P_{i})}}

where H(P_i) is the Shannon entropy {\displaystyle H(P_{i})=-\sum _{j}p_{j|i}\log _{2}p_{j|i}.}

The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50."[2] Since the Gaussian kernel uses the Euclidean distance ‖x_i − x_j‖, it is affected by the curse of dimensionality, and in high-dimensional data, when distances lose the ability to discriminate, the p_{ij} become too similar (asymptotically, they would converge to a constant). It has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this.[12] t-SNE aims to learn a d-dimensional map y_1, …, y_N (with y_i ∈ R^d and d typically chosen as 2 or 3) that reflects the similarities p_{ij} as well as possible. To this end, it measures similarities q_{ij} between two points in the map y_i and y_j, using a very similar approach. Specifically, for i ≠ j, define q_{ij} as

{\displaystyle q_{ij}={\frac {(1+\lVert \mathbf {y} _{i}-\mathbf {y} _{j}\rVert ^{2})^{-1}}{\sum _{k}\sum _{l\neq k}(1+\lVert \mathbf {y} _{k}-\mathbf {y} _{l}\rVert ^{2})^{-1}}}}

and set q_{ii} = 0. Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points in order to allow dissimilar objects to be modeled far apart in the map. The locations of the points y_i in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution P from the distribution Q, that is:

{\displaystyle \mathrm {KL} \left(P\parallel Q\right)=\sum _{i\neq j}p_{ij}\log {\frac {p_{ij}}{q_{ij}}}}

The minimization of the Kullback–Leibler divergence with respect to the points y_i is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs. While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization (especially the perplexity), so a good understanding of the parameters for t-SNE is needed. Such "clusters" can be shown to appear even in structured data with no clear clustering,[13] and so may be false findings. Similarly, the size of clusters produced by t-SNE is not informative, and neither is the distance between clusters.[14] Thus, interactive exploration may be needed to choose parameters and validate results.[15][16] It has been shown that t-SNE can often recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.[17]
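As a sketch (not the reference implementation), the construction of the input affinities described above can be written directly: for each point, σ_i is found by bisection so that the perplexity 2^{H(P_i)} matches a user-chosen target, and the conditional probabilities are then symmetrized. The data, target perplexity, and bisection bounds below are illustrative choices:

```python
import numpy as np

def conditional_probs(dists_sq_row, sigma):
    """p_{j|i} for one point, given squared distances to all other points."""
    logits = -dists_sq_row / (2.0 * sigma ** 2)
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def calibrate_sigma(dists_sq_row, target_perplexity, n_steps=50):
    """Bisection on sigma so that 2^(entropy of p_{.|i}) matches the target perplexity."""
    lo, hi = 1e-10, 1e4
    for _ in range(n_steps):
        sigma = (lo + hi) / 2.0
        p = conditional_probs(dists_sq_row, sigma)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        if 2.0 ** entropy > target_perplexity:
            hi = sigma                           # too many effective neighbors: shrink the kernel
        else:
            lo = sigma
    return sigma

def joint_affinities(X, perplexity=30.0):
    """Symmetrized p_ij = (p_{j|i} + p_{i|j}) / (2N) over all pairs."""
    n = X.shape[0]
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    P = np.zeros((n, n))
    for i in range(n):
        row = np.delete(d2[i], i)                # exclude the self-distance
        sigma = calibrate_sigma(row, perplexity)
        P[i, np.arange(n) != i] = conditional_probs(row, sigma)
    return (P + P.T) / (2.0 * n)

X = np.random.default_rng(0).standard_normal((100, 5))
P = joint_affinities(X, perplexity=15.0)
print(P.sum())                                   # sums to 1 by construction
```

A full t-SNE run would then initialize a low-dimensional map, compute the Student-t similarities q_{ij}, and minimize the KL divergence above by gradient descent.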
https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation (encoding) for a set of data, typically for dimensionality reduction, to generate lower-dimensional embeddings for subsequent use by other machine learning algorithms.[1] Variants exist which aim to make the learned representations assume useful properties.[2] Examples are regularized autoencoders (sparse, denoising and contractive autoencoders), which are effective in learning representations for subsequent classification tasks,[3] and variational autoencoders, which can be used as generative models.[4] Autoencoders are applied to many problems, including facial recognition,[5] feature detection,[6] anomaly detection, and learning the meaning of words.[7][8] In terms of data synthesis, autoencoders can also be used to randomly generate new data that is similar to the input (training) data.[6] An autoencoder is defined by the following components. Two sets: the space of decoded messages X and the space of encoded messages Z. Typically X and Z are Euclidean spaces, that is, X = R^m, Z = R^n with m > n. Two parametrized families of functions: the encoder family E_φ : X → Z, parametrized by φ, and the decoder family D_θ : Z → X, parametrized by θ. For any x ∈ X, we usually write z = E_φ(x), and refer to it as the code, the latent variable, latent representation, latent vector, etc. Conversely, for any z ∈ Z, we usually write x′ = D_θ(z), and refer to it as the (decoded) message. Usually, both the encoder and the decoder are defined as multilayer perceptrons (MLPs). For example, a one-layer-MLP encoder E_φ is

{\displaystyle E_{\phi }(\mathbf {x} )=\sigma (W\mathbf {x} +b)}

where σ is an element-wise activation function, W is a "weight" matrix, and b is a "bias" vector. An autoencoder, by itself, is simply a tuple of two functions. To judge its quality, we need a task. A task is defined by a reference probability distribution μ_ref over X, and a "reconstruction quality" function d : X × X → [0, ∞], such that d(x, x′) measures how much x′ differs from x. With those, we can define the loss function for the autoencoder as {\displaystyle L(\theta ,\phi ):=\mathbb {E} _{x\sim \mu _{\text{ref}}}[d(x,D_{\theta }(E_{\phi }(x)))]} The optimal autoencoder for the given task (μ_ref, d) is then arg min_{θ,φ} L(θ, φ). The search for the optimal autoencoder can be accomplished by any mathematical optimization technique, but usually by gradient descent. This search process is referred to as "training the autoencoder".
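A minimal sketch of these components follows, using a one-layer encoder and decoder in plain NumPy; the dimensions, the tanh activation, and the random data are illustrative, and no training loop is shown (gradient descent on L(θ, φ) would be layered on top of this):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                                 # message dimension > code dimension (undercomplete)

# Encoder parameters phi = (W, b) and decoder parameters theta = (W2, b2).
W,  b  = rng.standard_normal((n, m)) * 0.1, np.zeros(n)
W2, b2 = rng.standard_normal((m, n)) * 0.1, np.zeros(m)

def encode(x):
    return np.tanh(W @ x + b)               # E_phi(x) = sigma(W x + b)

def decode(z):
    return W2 @ z + b2                      # D_theta(z), here a linear decoder

def reconstruction_loss(batch):
    """Empirical version of L(theta, phi): mean squared reconstruction error."""
    return np.mean([np.sum((x - decode(encode(x))) ** 2) for x in batch])

data = rng.standard_normal((100, m))        # stand-in for samples from mu_ref
print(reconstruction_loss(data))
```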
In most situations, the reference distribution is just theempirical distributiongiven by a dataset{x1,...,xN}⊂X{\displaystyle \{x_{1},...,x_{N}\}\subset {\mathcal {X}}}, so thatμref=1N∑i=1Nδxi{\displaystyle \mu _{ref}={\frac {1}{N}}\sum _{i=1}^{N}\delta _{x_{i}}} whereδxi{\displaystyle \delta _{x_{i}}}is theDirac measure, the quality function is just L2 loss:d(x,x′)=‖x−x′‖22{\displaystyle d(x,x')=\|x-x'\|_{2}^{2}}, and‖⋅‖2{\displaystyle \|\cdot \|_{2}}is theEuclidean norm. Then the problem of searching for the optimal autoencoder is just aleast-squaresoptimization:minθ,ϕL(θ,ϕ),whereL(θ,ϕ)=1N∑i=1N‖xi−Dθ(Eϕ(xi))‖22{\displaystyle \min _{\theta ,\phi }L(\theta ,\phi ),\qquad {\text{where }}L(\theta ,\phi )={\frac {1}{N}}\sum _{i=1}^{N}\|x_{i}-D_{\theta }(E_{\phi }(x_{i}))\|_{2}^{2}} An autoencoder has two main parts: an encoder that maps the message to a code, and a decoder that reconstructs the message from the code. An optimal autoencoder would perform as close to perfect reconstruction as possible, with "close to perfect" defined by the reconstruction quality functiond{\displaystyle d}. The simplest way to perform the copying task perfectly would be to duplicate the signal. To suppress this behavior, the code spaceZ{\displaystyle {\mathcal {Z}}}usually has fewer dimensions than the message spaceX{\displaystyle {\mathcal {X}}}. Such an autoencoder is calledundercomplete. It can be interpreted ascompressingthe message, orreducing its dimensionality.[9][10] At the limit of an ideal undercomplete autoencoder, every possible codez{\displaystyle z}in the code space is used to encode a messagex{\displaystyle x}that really appears in the distributionμref{\displaystyle \mu _{ref}}, and the decoder is also perfect:Dθ(Eϕ(x))=x{\displaystyle D_{\theta }(E_{\phi }(x))=x}. This ideal autoencoder can then be used to generate messages indistinguishable from real messages, by feeding its decoder arbitrary codez{\displaystyle z}and obtainingDθ(z){\displaystyle D_{\theta }(z)}, which is a message that really appears in the distributionμref{\displaystyle \mu _{ref}}. If the code spaceZ{\displaystyle {\mathcal {Z}}}has dimension larger than (overcomplete), or equal to, the message spaceX{\displaystyle {\mathcal {X}}}, or the hidden units are given enough capacity, an autoencoder can learn theidentity functionand become useless. However, experimental results found that overcomplete autoencoders might stilllearn useful features.[11] In the ideal setting, the code dimension and the model capacity could be set on the basis of the complexity of the data distribution to be modeled. A standard way to do so is to add modifications to the basic autoencoder, to be detailed below.[2] Variational autoencoders(VAEs) belong to the families ofvariational Bayesian methods. Despite the architectural similarities with basic autoencoders, VAEs are architected with different goals and have a different mathematical formulation. The latent space is, in this case, composed of a mixture of distributions instead of fixed vectors. Given an input datasetx{\displaystyle x}characterized by an unknown probability functionP(x){\displaystyle P(x)}and a multivariate latent encoding vectorz{\displaystyle z}, the objective is to model the data as a distributionpθ(x){\displaystyle p_{\theta }(x)}, withθ{\displaystyle \theta }defined as the set of the network parameters so thatpθ(x)=∫zpθ(x,z)dz{\displaystyle p_{\theta }(x)=\int _{z}p_{\theta }(x,z)dz}. 
Inspired by thesparse codinghypothesis in neuroscience,sparse autoencoders(SAE) are variants of autoencoders, such that the codesEϕ(x){\displaystyle E_{\phi }(x)}for messages tend to besparse codes, that is,Eϕ(x){\displaystyle E_{\phi }(x)}is close to zero in most entries. Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time.[12]Encouraging sparsity improves performance on classification tasks.[13] There are two main ways to enforce sparsity. One way is to simply clamp all but the highest-k activations of the latent code to zero. This is thek-sparse autoencoder.[13] The k-sparse autoencoder inserts the following "k-sparse function" in the latent layer of a standard autoencoder:fk(x1,...,xn)=(x1b1,...,xnbn){\displaystyle f_{k}(x_{1},...,x_{n})=(x_{1}b_{1},...,x_{n}b_{n})}wherebi=1{\displaystyle b_{i}=1}if|xi|{\displaystyle |x_{i}|}ranks in the top k, and 0 otherwise. Backpropagating throughfk{\displaystyle f_{k}}is simple: set gradient to 0 forbi=0{\displaystyle b_{i}=0}entries, and keep gradient forbi=1{\displaystyle b_{i}=1}entries. This is essentially a generalizedReLUfunction.[13] The other way is arelaxed versionof the k-sparse autoencoder. Instead of forcing sparsity, we add asparsity regularization loss, then optimize forminθ,ϕL(θ,ϕ)+λLsparse(θ,ϕ){\displaystyle \min _{\theta ,\phi }L(\theta ,\phi )+\lambda L_{\text{sparse}}(\theta ,\phi )}whereλ>0{\displaystyle \lambda >0}measures how much sparsity we want to enforce.[14] Let the autoencoder architecture haveK{\displaystyle K}layers. To define a sparsity regularization loss, we need a "desired" sparsityρ^k{\displaystyle {\hat {\rho }}_{k}}for each layer, a weightwk{\displaystyle w_{k}}for how much to enforce each sparsity, and a functions:[0,1]×[0,1]→[0,∞]{\displaystyle s:[0,1]\times [0,1]\to [0,\infty ]}to measure how much two sparsities differ. For each inputx{\displaystyle x}, let the actual sparsity of activation in each layerk{\displaystyle k}beρk(x)=1n∑i=1nak,i(x){\displaystyle \rho _{k}(x)={\frac {1}{n}}\sum _{i=1}^{n}a_{k,i}(x)}whereak,i(x){\displaystyle a_{k,i}(x)}is the activation in thei{\displaystyle i}-th neuron of thek{\displaystyle k}-th layer upon inputx{\displaystyle x}. The sparsity loss upon inputx{\displaystyle x}for one layer iss(ρ^k,ρk(x)){\displaystyle s({\hat {\rho }}_{k},\rho _{k}(x))}, and the sparsity regularization loss for the entire autoencoder is the expected weighted sum of sparsity losses:Lsparse(θ,ϕ)=Ex∼μX[∑k∈1:Kwks(ρ^k,ρk(x))]{\displaystyle L_{\text{sparse}}(\theta ,\phi )=\mathbb {\mathbb {E} } _{x\sim \mu _{X}}\left[\sum _{k\in 1:K}w_{k}s({\hat {\rho }}_{k},\rho _{k}(x))\right]}Typically, the functions{\displaystyle s}is either theKullback-Leibler (KL) divergence, as[13][14][15][16] or the L1 loss, ass(ρ,ρ^)=|ρ−ρ^|{\displaystyle s(\rho ,{\hat {\rho }})=|\rho -{\hat {\rho }}|}, or the L2 loss, ass(ρ,ρ^)=|ρ−ρ^|2{\displaystyle s(\rho ,{\hat {\rho }})=|\rho -{\hat {\rho }}|^{2}}. Alternatively, the sparsity regularization loss may be defined without reference to any "desired sparsity", but simply force as much sparsity as possible. In this case, one can define the sparsity regularization loss asLsparse(θ,ϕ)=Ex∼μX[∑k∈1:Kwk‖hk‖]{\displaystyle L_{\text{sparse}}(\theta ,\phi )=\mathbb {\mathbb {E} } _{x\sim \mu _{X}}\left[\sum _{k\in 1:K}w_{k}\|h_{k}\|\right]}wherehk{\displaystyle h_{k}}is the activation vector in thek{\displaystyle k}-th layer of the autoencoder. 
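A small sketch of the two mechanisms just described: the k-sparse function that zeroes all but the top-k latent activations, and a KL-divergence sparsity penalty s(ρ̂, ρ) comparing a desired mean activation with an observed one; the particular numbers are illustrative.

```python
import numpy as np

def k_sparse(h, k):
    """k-sparse function: keep the k largest-magnitude latent activations,
    clamp the rest to zero (the mask b_i described above)."""
    h = np.asarray(h, dtype=float)
    mask = np.zeros_like(h)
    top_k = np.argsort(np.abs(h))[-k:]       # indices of the top-k |h_i|
    mask[top_k] = 1.0
    return h * mask

def kl_sparsity(rho_hat, rho):
    """s(rho_hat, rho): KL divergence between a desired mean activation
    rho_hat and an observed mean activation rho (both in (0, 1))."""
    return (rho_hat * np.log(rho_hat / rho)
            + (1.0 - rho_hat) * np.log((1.0 - rho_hat) / (1.0 - rho)))

h = np.array([0.1, -2.0, 0.03, 1.5, 0.4])
print(k_sparse(h, k=2))                      # only -2.0 and 1.5 survive

rho_hat = 0.05                               # desired sparsity for one layer
activations = np.array([0.02, 0.9, 0.1, 0.04])   # activations a_{k,i}(x)
rho = activations.mean()                     # actual sparsity rho_k(x)
print(kl_sparsity(rho_hat, rho))             # one layer's term s(rho_hat, rho_k(x))
```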
The norm ‖⋅‖{\displaystyle \|\cdot \|} is usually the L1 norm (giving the L1 sparse autoencoder) or the L2 norm (giving the L2 sparse autoencoder). Denoising autoencoders (DAE) try to achieve a good representation by changing the reconstruction criterion.[2][3] A DAE, originally called a "robust autoassociative network" by Mark A. Kramer,[17] is trained by intentionally corrupting the inputs of a standard autoencoder during training. A noise process is defined by a probability distribution μT{\displaystyle \mu _{T}} over functions T:X→X{\displaystyle T:{\mathcal {X}}\to {\mathcal {X}}}. That is, the function T{\displaystyle T} takes a message x∈X{\displaystyle x\in {\mathcal {X}}} and corrupts it to a noisy version T(x){\displaystyle T(x)}. The function T{\displaystyle T} is selected randomly, with a probability distribution μT{\displaystyle \mu _{T}}. Given a task (μref,d){\displaystyle (\mu _{\text{ref}},d)}, the problem of training a DAE is the optimization problem: minθ,ϕL(θ,ϕ)=Ex∼μX,T∼μT[d(x,(Dθ∘Eϕ∘T)(x))]{\displaystyle \min _{\theta ,\phi }L(\theta ,\phi )=\mathbb {\mathbb {E} } _{x\sim \mu _{X},T\sim \mu _{T}}[d(x,(D_{\theta }\circ E_{\phi }\circ T)(x))]} That is, the optimal DAE should take any noisy message and attempt to recover the original message without noise, thus the name "denoising". Usually, the noise process T{\displaystyle T} is applied only during training and testing, not during downstream use. The use of DAE depends on two assumptions: that there exist representations of the messages that are relatively stable and robust to the type of noise expected, and that these representations capture structures in the input distribution that are useful for the task at hand. Example noise processes include additive isotropic Gaussian noise, masking noise (a randomly chosen fraction of the input entries is set to zero), and salt-and-pepper noise (a randomly chosen fraction of the input entries is set to its minimum or maximum value). A contractive autoencoder (CAE) adds the contractive regularization loss to the standard autoencoder loss: minθ,ϕL(θ,ϕ)+λLcont(θ,ϕ){\displaystyle \min _{\theta ,\phi }L(\theta ,\phi )+\lambda L_{\text{cont}}(\theta ,\phi )} where λ>0{\displaystyle \lambda >0} measures how much contractiveness we want to enforce. The contractive regularization loss itself is defined as the expected square of the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input: Lcont(θ,ϕ)=Ex∼μref‖∇xEϕ(x)‖F2{\displaystyle L_{\text{cont}}(\theta ,\phi )=\mathbb {E} _{x\sim \mu _{ref}}\|\nabla _{x}E_{\phi }(x)\|_{F}^{2}} To understand what Lcont{\displaystyle L_{\text{cont}}} measures, note the fact ‖Eϕ(x+δx)−Eϕ(x)‖2≤‖∇xEϕ(x)‖F‖δx‖2{\displaystyle \|E_{\phi }(x+\delta x)-E_{\phi }(x)\|_{2}\leq \|\nabla _{x}E_{\phi }(x)\|_{F}\|\delta x\|_{2}} for any message x∈X{\displaystyle x\in {\mathcal {X}}} and small variation δx{\displaystyle \delta x} in it. Thus, if ‖∇xEϕ(x)‖F2{\displaystyle \|\nabla _{x}E_{\phi }(x)\|_{F}^{2}} is small, it means that a small neighborhood of the message maps to a small neighborhood of its code. This is a desired property, as it means small variation in the message leads to small, perhaps even zero, variation in its code, like how two pictures may look the same even if they are not exactly the same. The DAE can be understood as an infinitesimal limit of the CAE: in the limit of small Gaussian input noise, DAEs make the reconstruction function resist small but finite-sized input perturbations, while CAEs make the extracted features resist infinitesimal input perturbations. A minimum description length autoencoder (MDL-AE) is a variation of the traditional autoencoder that leverages principles from information theory, specifically the Minimum Description Length (MDL) principle. The MDL principle posits that the best model for a dataset is the one that provides the shortest combined encoding of the model and the data.
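A minimal sketch of the contractive penalty Lcont just defined, for a one-layer sigmoid encoder (an assumed form); for that encoder the Jacobian ∇xEφ(x) is available in closed form as diag(σ′(Wx+b))·W, and the finite-difference computation at the end is only a sanity check of that formula.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

m, n = 6, 3
W, b = rng.normal(size=(n, m)), rng.normal(size=n)

def encode(x):                               # one-layer sigmoid encoder E_phi(x)
    return sigmoid(W @ x + b)

def contractive_penalty(x):
    """||grad_x E_phi(x)||_F^2 for the sigmoid encoder: the Jacobian is
    diag(sigma'(Wx+b)) @ W, with sigma' = sigma * (1 - sigma)."""
    h = encode(x)
    dh = h * (1.0 - h)                       # sigma'(Wx+b), one value per unit
    J = dh[:, None] * W                      # Jacobian of the encoder at x
    return np.sum(J ** 2)

# Sanity check against a finite-difference Jacobian.
x, eps = rng.normal(size=m), 1e-6
J_fd = np.stack([(encode(x + eps * np.eye(m)[j]) - encode(x)) / eps
                 for j in range(m)], axis=1)
print(contractive_penalty(x), np.sum(J_fd ** 2))   # the two values should agree closely
```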
In the context ofautoencoders, this principle is applied to ensure that the learned representation is not only compact but also interpretable and efficient for reconstruction. The MDL-AE seeks to minimize the total description length of the data, which includes the size of thelatent representation(code length) and the error in reconstructing the original data. The objective can be expressed asLcode+Lerror{\displaystyle L_{\text{code}}+L_{\text{error}}}, whereLcode{\displaystyle L_{\text{code}}}represents the length of the compressed latent representation andLerror{\displaystyle L_{\text{error}}}denotes the reconstruction error.[18] Theconcrete autoencoderis designed for discrete feature selection.[19]A concrete autoencoder forces the latent space to consist only of a user-specified number of features. The concrete autoencoder uses a continuousrelaxationof thecategorical distributionto allow gradients to pass through the feature selector layer, which makes it possible to use standardbackpropagationto learn an optimal subset of input features that minimize reconstruction loss. Autoencoders are often trained with a single-layer encoder and a single-layer decoder, but using many-layered (deep) encoders and decoders offers many advantages.[2] Geoffrey Hintondeveloped thedeep belief networktechnique for training many-layered deep autoencoders. His method involves treating each neighboring set of two layers as arestricted Boltzmann machineso that pretraining approximates a good solution, then using backpropagation to fine-tune the results.[10] Researchers have debated whether joint training (i.e. training the whole architecture together with a single global reconstruction objective to optimize) would be better for deep auto-encoders.[20]A 2015 study showed that joint training learns better data models along with more representative features for classification as compared to the layerwise method.[20]However, their experiments showed that the success of joint training depends heavily on the regularization strategies adopted.[20][21] (Oja, 1982)[22]noted that PCA is equivalent to a neural network with one hidden layer with identity activation function. In the language of autoencoding, the input-to-hidden module is the encoder, and the hidden-to-output module is the decoder. Subsequently, in (Baldi and Hornik, 1989)[23]and (Kramer, 1991)[9]generalized PCA to autoencoders, which they termed as "nonlinear PCA". Immediately after the resurgence of neural networks in the 1980s, it was suggested in 1986[24]that a neural network be put in "auto-association mode". This was then implemented in (Harrison, 1987)[25]and (Elman, Zipser, 1988)[26]for speech and in (Cottrell, Munro, Zipser, 1987)[27]for images.[28]In (Hinton, Salakhutdinov, 2006),[29]deep belief networkswere developed. These train a pairrestricted Boltzmann machinesas encoder-decoder pairs, then train another pair on the latent representation of the first pair, and so on.[30] The first applications of AE date to early 1990s.[2][31][18]Their most traditional application wasdimensionality reductionorfeature learning, but the concept became widely used for learninggenerative modelsof data.[32][33]Some of the most powerfulAIsin the 2010s involved autoencoder modules as a component of larger AI systems, such as VAE inStable Diffusion, discrete VAE in Transformer-based image generators likeDALL-E 1, etc. 
During the early days, when the terminology was uncertain, the autoencoder has also been called identity mapping,[23][9]auto-associating,[34]self-supervisedbackpropagation,[9]or Diabolo network.[35][11] The two main applications of autoencoders aredimensionality reductionandinformation retrieval(orassociative memory),[2]but modern variations have been applied to other tasks. Dimensionality reductionwas one of the firstdeep learningapplications.[2] For Hinton's 2006 study,[10]he pretrained a multi-layer autoencoder with a stack ofRBMsand then used their weights to initialize a deep autoencoder with gradually smaller hidden layers until hitting a bottleneck of 30 neurons. The resulting 30 dimensions of the code yielded a smaller reconstruction error compared to the first 30 components of a principal component analysis (PCA), and learned a representation that was qualitatively easier to interpret, clearly separating data clusters.[2][10] Reducing dimensions can improve performance on tasks such as classification.[2]Indeed, the hallmark of dimensionality reduction is to place semantically related examples near each other.[37] If linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related toprincipal component analysis(PCA).[28][38]The weights of an autoencoder with a single hidden layer of sizep{\displaystyle p}(wherep{\displaystyle p}is less than the size of the input) span the same vector subspace as the one spanned by the firstp{\displaystyle p}principal components, and the output of the autoencoder is an orthogonal projection onto this subspace. The autoencoder weights are not equal to the principal components, and are generally not orthogonal, yet the principal components may be recovered from them using thesingular value decomposition.[39] However, the potential of autoencoders resides in their non-linearity, allowing the model to learn more powerful generalizations compared to PCA, and to reconstruct the input with significantly lower information loss.[10] Information retrievalbenefits particularly fromdimensionality reductionin that search can become more efficient in certain kinds of low dimensional spaces. Autoencoders were indeed applied to semantic hashing, proposed bySalakhutdinovand Hinton in 2007.[37]By training the algorithm to produce a low-dimensional binary code, all database entries could be stored in ahash tablemapping binary code vectors to entries. This table would then support information retrieval by returning all entries with the same binary code as the query, or slightly less similar entries by flipping some bits from the query encoding. The encoder-decoder architecture, often used in natural language processing and neural networks, can be scientifically applied in the field of SEO (Search Engine Optimization) in various ways: In essence, the encoder-decoder architecture or autoencoders can be leveraged in SEO to optimize web page content, improve their indexing, and enhance their appeal to both search engines and users. Another application for autoencoders isanomaly detection.[17][40][41][42][43][44]By learning to replicate the most salient features in the training data under some of the constraints described previously, the model is encouraged to learn to precisely reproduce the most frequently observed characteristics. When facing anomalies, the model should worsen its reconstruction performance. 
In most cases, only data with normal instances are used to train the autoencoder; in others, the frequency of anomalies is small compared to the observation set so that their contribution to the learned representation can be ignored. After training, the autoencoder will accurately reconstruct "normal" data, while failing to do so with unfamiliar anomalous data.[42] Reconstruction error (the error between the original data and its low-dimensional reconstruction) is used as an anomaly score to detect anomalies.[42] Recent literature has, however, shown that certain autoencoding models can, counterintuitively, be very good at reconstructing anomalous examples and consequently not able to reliably perform anomaly detection.[45][46] The characteristics of autoencoders are useful in image processing. One example can be found in lossy image compression, where autoencoders outperformed other approaches and proved competitive against JPEG 2000.[47][48] Another useful application of autoencoders in image preprocessing is image denoising.[49][50][51] Autoencoders found use in more demanding contexts such as medical imaging, where they have been used for image denoising[52] as well as super-resolution.[53][54] In image-assisted diagnosis, experiments have applied autoencoders for breast cancer detection[55] and for modelling the relation between the cognitive decline of Alzheimer's disease and the latent features of an autoencoder trained with MRI.[56] In 2019, molecules generated with variational autoencoders were validated experimentally in mice.[57][58] Recently, a stacked autoencoder framework produced promising results in predicting the popularity of social media posts,[59] which is helpful for online advertising strategies. Autoencoders have been applied to machine translation, which is usually referred to as neural machine translation (NMT).[60][61] Unlike traditional autoencoders, the output does not match the input: it is in another language. In NMT, texts are treated as sequences to be encoded into the learning procedure, while on the decoder side sequences in the target language(s) are generated. Language-specific autoencoders incorporate further linguistic features into the learning procedure, such as Chinese decomposition features.[62] Machine translation is now rarely done with autoencoders, due to the availability of more effective transformer networks. In communication systems, autoencoders can encode data into representations that are more resilient to channel impairments, which is crucial for transmitting information while minimizing errors. In addition, AE-based systems can optimize end-to-end communication performance. This approach can address several limitations of communication system design, such as the inherent difficulty of accurately modeling the complex behavior of real-world channels.[63]
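A hedged sketch of the reconstruction-error anomaly score described earlier in this section. A linear (PCA-style) autoencoder fitted on "normal" data only stands in for a trained autoencoder; the synthetic data, code size and the 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Normal" training data living near a low-dimensional subspace.
m, k, N = 12, 3, 400
basis = rng.normal(size=(m, k))
normal = (basis @ rng.normal(size=(k, N))).T + 0.05 * rng.normal(size=(N, m))

# Linear autoencoder fitted on normal data only: encode by projecting onto
# the top-k principal directions, decode by projecting back.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:k].T                                  # m x k

def reconstruct(x):
    z = V.T @ (x - mean)                      # code
    return mean + V @ z                       # reconstruction

def anomaly_score(x):
    return np.sum((x - reconstruct(x)) ** 2)  # reconstruction error

threshold = np.percentile([anomaly_score(x) for x in normal], 99)
anomaly = 2.0 * rng.normal(size=m)            # a point unlike the training data
print(anomaly_score(normal[0]) <= threshold)  # expected: True
print(anomaly_score(anomaly) > threshold)     # expected: True (typically)
```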
https://en.wikipedia.org/wiki/Autoencoder
Thesoftmax function,also known assoftargmax[1]: 184ornormalized exponential function,[2]: 198converts a vector ofKreal numbers into aprobability distributionofKpossible outcomes. It is a generalization of thelogistic functionto multiple dimensions, and is used inmultinomial logistic regression. The softmax function is often used as the lastactivation functionof aneural networkto normalize the output of a network to aprobability distributionover predicted output classes. The softmax function takes as input a vectorzofKreal numbers, and normalizes it into aprobability distributionconsisting ofKprobabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one; and might not sum to 1; but after applying softmax, each component will be in theinterval(0,1){\displaystyle (0,1)}, and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities. Formally, the standard (unit) softmax functionσ:RK→(0,1)K{\displaystyle \sigma \colon \mathbb {R} ^{K}\to (0,1)^{K}}, whereK>1{\displaystyle K>1}, takes a vectorz=(z1,…,zK)∈RK{\displaystyle \mathbf {z} =(z_{1},\dotsc ,z_{K})\in \mathbb {R} ^{K}}and computes each component of vectorσ(z)∈(0,1)K{\displaystyle \sigma (\mathbf {z} )\in (0,1)^{K}}with σ(z)i=ezi∑j=1Kezj.{\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{z_{i}}}{\sum _{j=1}^{K}e^{z_{j}}}}\,.} In words, the softmax applies the standardexponential functionto each elementzi{\displaystyle z_{i}}of the input vectorz{\displaystyle \mathbf {z} }(consisting ofK{\displaystyle K}real numbers), and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vectorσ(z){\displaystyle \sigma (\mathbf {z} )}is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of(1,2,8){\displaystyle (1,2,8)}is approximately(0.001,0.002,0.997){\displaystyle (0.001,0.002,0.997)}, which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8). In general, instead ofea differentbaseb > 0can be used. As above, ifb > 1then larger input components will result in larger output probabilities, and increasing the value ofbwill create probability distributions that are more concentrated around the positions of the largest input values. Conversely, if0 < b < 1then smaller input components will result in larger output probabilities, and decreasing the value ofbwill create probability distributions that are more concentrated around the positions of the smallest input values. Writingb=eβ{\displaystyle b=e^{\beta }}orb=e−β{\displaystyle b=e^{-\beta }}[a](for realβ)[b]yields the expressions:[c] σ(z)i=eβzi∑j=1Keβzjorσ(z)i=e−βzi∑j=1Ke−βzjfori=1,…,K.{\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta z_{i}}}{\sum _{j=1}^{K}e^{\beta z_{j}}}}{\text{ or }}\sigma (\mathbf {z} )_{i}={\frac {e^{-\beta z_{i}}}{\sum _{j=1}^{K}e^{-\beta z_{j}}}}{\text{ for }}i=1,\dotsc ,K.} A value proportional to the reciprocal ofβis sometimes referred to as thetemperature:β=1/kT{\textstyle \beta =1/kT}, wherekis typically 1 or theBoltzmann constantandTis the temperature. A higher temperature results in a more uniform output distribution (i.e. 
with higherentropy; it is "more random"), while a lower temperature results in a sharper output distribution, with one value dominating. In some fields, the base is fixed, corresponding to a fixed scale,[d]while in others the parameterβ(orT) is varied. The Softmax function is a smooth approximation to thearg maxfunction: the function whose value is theindexof a vector's largest element. The name "softmax" may be misleading. Softmax is not asmooth maximum(that is, asmooth approximationto themaximumfunction). The term "softmax" is also used for the closely relatedLogSumExpfunction, which is a smooth maximum. For this reason, some prefer the more accurate term "softargmax", though the term "softmax" is conventional in machine learning.[3][4]This section uses the term "softargmax" for clarity. Formally, instead of considering the arg max as a function with categorical output1,…,n{\displaystyle 1,\dots ,n}(corresponding to the index), consider the arg max function withone-hotrepresentation of the output (assuming there is a unique maximum arg):argmax⁡(z1,…,zn)=(y1,…,yn)=(0,…,0,1,0,…,0),{\displaystyle \operatorname {arg\,max} (z_{1},\,\dots ,\,z_{n})=(y_{1},\,\dots ,\,y_{n})=(0,\,\dots ,\,0,\,1,\,0,\,\dots ,\,0),}where the output coordinateyi=1{\displaystyle y_{i}=1}if and only ifi{\displaystyle i}is the arg max of(z1,…,zn){\displaystyle (z_{1},\dots ,z_{n})}, meaningzi{\displaystyle z_{i}}is the unique maximum value of(z1,…,zn){\displaystyle (z_{1},\,\dots ,\,z_{n})}. For example, in this encodingargmax⁡(1,5,10)=(0,0,1),{\displaystyle \operatorname {arg\,max} (1,5,10)=(0,0,1),}since the third argument is the maximum. This can be generalized to multiple arg max values (multiple equalzi{\displaystyle z_{i}}being the maximum) by dividing the 1 between all max args; formally1/kwherekis the number of arguments assuming the maximum. For example,argmax⁡(1,5,5)=(0,1/2,1/2),{\displaystyle \operatorname {arg\,max} (1,\,5,\,5)=(0,\,1/2,\,1/2),}since the second and third argument are both the maximum. In case all arguments are equal, this is simplyargmax⁡(z,…,z)=(1/n,…,1/n).{\displaystyle \operatorname {arg\,max} (z,\dots ,z)=(1/n,\dots ,1/n).}Pointszwith multiple arg max values aresingular points(or singularities, and form the singular set) – these are the points where arg max is discontinuous (with ajump discontinuity) – while points with a single arg max are known as non-singular or regular points. With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as⁠β→∞{\displaystyle \beta \to \infty }⁠, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg maxpointwise, meaning for each fixed inputzas⁠β→∞{\displaystyle \beta \to \infty }⁠,σβ(z)→argmax⁡(z).{\displaystyle \sigma _{\beta }(\mathbf {z} )\to \operatorname {arg\,max} (\mathbf {z} ).}However, softargmax does notconverge uniformlyto arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. 
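The following numerical sketch illustrates this pointwise but non-uniform convergence for a softargmax with inverse temperature β; the particular input vectors are arbitrary.

```python
import numpy as np

def softargmax(z, beta):
    e = np.exp(beta * (np.asarray(z, dtype=float) - np.max(z)))  # stable exponentials
    return e / e.sum()

for beta in (1, 100, 10000):
    print(beta,
          np.round(softargmax([1.0, 5.0, 10.0], beta), 3),   # clear maximum
          np.round(softargmax([1.0, 4.999, 5.0], beta), 3))  # near-tie
# The first vector is essentially one-hot already at beta = 100, while the
# near-tied vector is still close to (0, 1/2, 1/2) there and only approaches
# the one-hot arg max for much larger beta: points near the singular set
# converge slowly.
```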
For example,σβ(1,1.0001)→(0,1),{\displaystyle \sigma _{\beta }(1,\,1.0001)\to (0,1),}butσβ(1,0.9999)→(1,0),{\displaystyle \sigma _{\beta }(1,\,0.9999)\to (1,\,0),}andσβ(1,1)=1/2{\displaystyle \sigma _{\beta }(1,\,1)=1/2}for all inputs: the closer the points are to the singular set(x,x){\displaystyle (x,x)}, the slower they converge. However, softargmax doesconverge compactlyon the non-singular set. Conversely, as⁠β→−∞{\displaystyle \beta \to -\infty }⁠, softargmax converges to arg min in the same way, where here the singular set is points with two argminvalues. In the language oftropical analysis, the softmax is adeformationor "quantization" of arg max and arg min, corresponding to using thelog semiringinstead of themax-plus semiring(respectivelymin-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization". It is also the case that, for any fixedβ, if one input⁠zi{\displaystyle z_{i}}⁠is much larger than the othersrelativeto the temperature,T=1/β{\displaystyle T=1/\beta }, the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1:σ(0,10):=σ1(0,10)=(1/(1+e10),e10/(1+e10))≈(0.00005,0.99995){\displaystyle \sigma (0,\,10):=\sigma _{1}(0,\,10)=\left(1/\left(1+e^{10}\right),\,e^{10}/\left(1+e^{10}\right)\right)\approx (0.00005,\,0.99995)}However, if the difference is small relative to the temperature, the value is not close to the arg max. For example, a difference of 10 is small relative to a temperature of 100:σ1/100(0,10)=(1/(1+e1/10),e1/10/(1+e1/10))≈(0.475,0.525).{\displaystyle \sigma _{1/100}(0,\,10)=\left(1/\left(1+e^{1/10}\right),\,e^{1/10}/\left(1+e^{1/10}\right)\right)\approx (0.475,\,0.525).}As⁠β→∞{\displaystyle \beta \to \infty }⁠, temperature goes to zero,T=1/β→0{\displaystyle T=1/\beta \to 0}, so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation for the limit behavior. Instatistical mechanics, the softargmax function is known as theBoltzmann distribution(orGibbs distribution):[5]: 7the index set1,…,k{\displaystyle {1,\,\dots ,\,k}}are themicrostatesof the system; the inputszi{\displaystyle z_{i}}are the energies of that state; the denominator is known as thepartition function, often denoted byZ; and the factorβis called thecoldness(orthermodynamic beta, orinverse temperature). The softmax function is used in variousmulticlass classificationmethods, such asmultinomial logistic regression(also known as softmax regression),[2]: 206–209[6]multiclasslinear discriminant analysis,naive Bayes classifiers, andartificial neural networks.[7]Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result ofKdistinctlinear functions, and the predicted probability for thejth class given a sample vectorxand a weighting vectorwis: P(y=j∣x)=exTwj∑k=1KexTwk{\displaystyle P(y=j\mid \mathbf {x} )={\frac {e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{j}}}{\sum _{k=1}^{K}e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{k}}}}} This can be seen as thecompositionofKlinear functionsx↦xTw1,…,x↦xTwK{\displaystyle \mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{1},\ldots ,\mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{K}}and the softmax function (wherexTw{\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {w} }denotes the inner product ofx{\displaystyle \mathbf {x} }andw{\displaystyle \mathbf {w} }). 
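A minimal sketch of the multinomial-logistic-regression use just described: K linear score functions x ↦ xᵀw_j composed with the softmax. The weight matrix and sample below are random placeholders rather than fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

K, d = 3, 4                              # K classes, d-dimensional inputs
W = rng.normal(size=(K, d))              # one weight vector w_j per class
x = rng.normal(size=d)                   # a sample vector

scores = W @ x                           # the K linear functions x -> x.w_j
probs = softmax(scores)                  # P(y = j | x) for j = 1..K
print(np.round(probs, 3), probs.sum())   # a probability distribution over the classes
```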
The operation is equivalent to applying a linear operator defined byw{\displaystyle \mathbf {w} }to vectorsx{\displaystyle \mathbf {x} }, thus transforming the original, probably highly-dimensional, input to vectors in aK-dimensional spaceRK{\displaystyle \mathbb {R} ^{K}}. The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under alog loss(orcross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific indexi{\displaystyle i}to a real value, the derivative needs to take the index into account: ∂∂qkσ(q,i)=σ(q,i)(δik−σ(q,k)).{\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},i)(\delta _{ik}-\sigma ({\textbf {q}},k)).} This expression is symmetrical in the indexesi,k{\displaystyle i,k}and thus may also be expressed as ∂∂qkσ(q,i)=σ(q,k)(δik−σ(q,i)).{\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},k)(\delta _{ik}-\sigma ({\textbf {q}},i)).} Here, theKronecker deltais used for simplicity (cf. the derivative of asigmoid function, being expressed via the function itself). To ensure stable numerical computations subtracting the maximum value from the input vector is common. This approach, while not altering the output or the derivative theoretically, enhances stability by directly controlling the maximum exponent value computed. If the function is scaled with the parameterβ{\displaystyle \beta }, then these expressions must be multiplied byβ{\displaystyle \beta }. Seemultinomial logitfor a probability model which uses the softmax activation function. In the field ofreinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:[8]Pt(a)=exp⁡(qt(a)/τ)∑i=1nexp⁡(qt(i)/τ),{\displaystyle P_{t}(a)={\frac {\exp(q_{t}(a)/\tau )}{\sum _{i=1}^{n}\exp(q_{t}(i)/\tau )}}{\text{,}}} where the action valueqt(a){\displaystyle q_{t}(a)}corresponds to the expected reward of following action a andτ{\displaystyle \tau }is called a temperature parameter (in allusion tostatistical mechanics). For high temperatures (τ→∞{\displaystyle \tau \to \infty }), all actions have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature (τ→0+{\displaystyle \tau \to 0^{+}}), the probability of the action with the highest expected reward tends to 1. In neural network applications, the numberKof possible outcomes is often large, e.g. in case ofneural language modelsthat predict the most likely outcome out of a vocabulary which might contain millions of possible words.[9]This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine thezi{\displaystyle z_{i}}, followed by the application of the softmax function itself) computationally expensive.[9][10]What's more, thegradient descentbackpropagationmethod for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. 
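A sketch of the derivative formula ∂σ(q,i)/∂q_k = σ(q,i)(δ_ik − σ(q,k)), checked against finite differences; the max-subtraction inside the softmax is the numerical-stability device mentioned above, and the test vector is arbitrary.

```python
import numpy as np

def softmax(q):
    e = np.exp(q - np.max(q))                 # subtract the max for stability
    return e / e.sum()

def softmax_jacobian(q):
    """d sigma_i / d q_k = sigma_i (delta_ik - sigma_k)."""
    s = softmax(q)
    return np.diag(s) - np.outer(s, s)

q = np.array([0.5, -1.0, 2.0])
J = softmax_jacobian(q)

# Finite-difference check of the derivative formula.
eps = 1e-6
J_fd = np.stack([(softmax(q + eps * np.eye(3)[k]) - softmax(q)) / eps
                 for k in range(3)], axis=1)
print(np.allclose(J, J_fd, atol=1e-5))        # True
print(np.allclose(J, J.T))                    # the Jacobian is symmetric in i, k
```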
The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times.[9][10] Approaches that reorganize the softmax layer for more efficient calculation include thehierarchical softmaxand thedifferentiated softmax.[9]The hierarchical softmax (introduced by Morin andBengioin 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forminglatent variables.[10][11]The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf.[10]Ideally, when the tree is balanced, this would reduce thecomputational complexityfromO(K){\displaystyle O(K)}toO(log2⁡K){\displaystyle O(\log _{2}K)}.[11]In practice, results depend on choosing a good strategy for clustering the outcomes into classes.[10][11]AHuffman treewas used for this in Google'sword2vecmodels (introduced in 2013) to achieve scalability.[9] A second kind of remedies is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor.[9]These include methods that restrict the normalization sum to a sample of outcomes (e.g. Importance Sampling, Target Sampling).[9][10] The standard softmax is numerically unstable because of large exponentiations. Thesafe softmaxmethod calculates insteadσ(z)i=eβ(zi−m)∑j=1Keβ(zj−m){\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta (z_{i}-m)}}{\sum _{j=1}^{K}e^{\beta (z_{j}-m)}}}}wherem=maxizi{\displaystyle m=\max _{i}z_{i}}is the largest factor involved. Subtracting by it guarantees that the exponentiations result in at most 1. Theattention mechanisminTransformerstakes three arguments: a "query vector"q{\displaystyle q}, a list of "key vectors"k1,…,kN{\displaystyle k_{1},\dots ,k_{N}}, and a list of "value vectors"v1,…,vN{\displaystyle v_{1},\dots ,v_{N}}, and outputs a softmax-weighted sum over value vectors:o=∑i=1NeqTki−m∑j=1NeqTkj−mvi{\displaystyle o=\sum _{i=1}^{N}{\frac {e^{q^{T}k_{i}-m}}{\sum _{j=1}^{N}e^{q^{T}k_{j}-m}}}v_{i}}The standard softmax method involves several loops over the inputs, which would bebottlenecked by memory bandwidth. TheFlashAttentionmethod is acommunication-avoiding algorithmthat fuses these operations into a single loop, increasing thearithmetic intensity. It is anonline algorithmthat computes the following quantities:[12][13]zi=qTkimi=max(z1,…,zi)=max(mi−1,zi)li=ez1−mi+⋯+ezi−mi=emi−1−mili−1+ezi−mioi=ez1−miv1+⋯+ezi−mivi=emi−1−mioi−1+ezi−mivi{\displaystyle {\begin{aligned}z_{i}&=q^{T}k_{i}&\\m_{i}&=\max(z_{1},\dots ,z_{i})&=&\max(m_{i-1},z_{i})\\l_{i}&=e^{z_{1}-m_{i}}+\dots +e^{z_{i}-m_{i}}&=&e^{m_{i-1}-m_{i}}l_{i-1}+e^{z_{i}-m_{i}}\\o_{i}&=e^{z_{1}-m_{i}}v_{1}+\dots +e^{z_{i}-m_{i}}v_{i}&=&e^{m_{i-1}-m_{i}}o_{i-1}+e^{z_{i}-m_{i}}v_{i}\end{aligned}}}and returnsoN/lN{\displaystyle o_{N}/l_{N}}. In practice, FlashAttention operates over multiple queries and keys per loop iteration, in a similar way asblocked matrix multiplication. Ifbackpropagationis needed, then the output vectors and the intermediate arrays[m1,…,mN],[l1,…,lN]{\displaystyle [m_{1},\dots ,m_{N}],[l_{1},\dots ,l_{N}]}are cached, and during the backward pass, attention matrices arerematerializedfrom these, making it a form of gradient checkpointing. 
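A sketch of the single-pass ("online") recurrence for m_i, l_i and o_i given above, compared against a direct safe-softmax computation. This is only the scalar-loop core of the idea; it omits the blocking over multiple queries and keys that FlashAttention uses in practice.

```python
import numpy as np

rng = np.random.default_rng(5)

def online_softmax_sum(q, K, V):
    """Single-pass evaluation of sum_i softmax(q.K)_i * v_i using the running
    maximum m, running normalizer l and running output o recurrences above."""
    m = -np.inf
    l = 0.0
    o = np.zeros(V.shape[1])
    for k_i, v_i in zip(K, V):
        z_i = q @ k_i                          # score for this key
        m_new = max(m, z_i)                    # updated running maximum
        scale = np.exp(m - m_new) if np.isfinite(m) else 0.0
        l = scale * l + np.exp(z_i - m_new)    # rescale old sum, add new term
        o = scale * o + np.exp(z_i - m_new) * v_i
        m = m_new
    return o / l                               # o_N / l_N

d, N = 4, 7
q = rng.normal(size=d)
K = rng.normal(size=(N, d))
V = rng.normal(size=(N, d))

# Reference: materialize all scores, then apply a safe softmax.
z = K @ q
w = np.exp(z - z.max())
w /= w.sum()
print(np.allclose(online_softmax_sum(q, K, V), w @ V))   # True
```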
Geometrically the softmax function maps thevector spaceRK{\displaystyle \mathbb {R} ^{K}}to theboundaryof thestandard(K−1){\displaystyle (K-1)}-simplex, cutting the dimension by one (the range is a(K−1){\displaystyle (K-1)}-dimensional simplex inK{\displaystyle K}-dimensional space), due to thelinear constraintthat all output sum to 1 meaning it lies on ahyperplane. Along the main diagonal(x,x,…,x),{\displaystyle (x,\,x,\,\dots ,\,x),}softmax is just the uniform distribution on outputs,(1/n,…,1/n){\displaystyle (1/n,\dots ,1/n)}: equal scores yield equal probabilities. More generally, softmax is invariant under translation by the same value in each coordinate: addingc=(c,…,c){\displaystyle \mathbf {c} =(c,\,\dots ,\,c)}to the inputsz{\displaystyle \mathbf {z} }yieldsσ(z+c)=σ(z){\displaystyle \sigma (\mathbf {z} +\mathbf {c} )=\sigma (\mathbf {z} )}, because it multiplies each exponent by the same factor,ec{\displaystyle e^{c}}(becauseezi+c=ezi⋅ec{\displaystyle e^{z_{i}+c}=e^{z_{i}}\cdot e^{c}}), so the ratios do not change:σ(z+c)j=ezj+c∑k=1Kezk+c=ezj⋅ec∑k=1Kezk⋅ec=σ(z)j.{\displaystyle \sigma (\mathbf {z} +\mathbf {c} )_{j}={\frac {e^{z_{j}+c}}{\sum _{k=1}^{K}e^{z_{k}+c}}}={\frac {e^{z_{j}}\cdot e^{c}}{\sum _{k=1}^{K}e^{z_{k}}\cdot e^{c}}}=\sigma (\mathbf {z} )_{j}.} Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation in the input scores (a choice of 0 score). One can normalize input scores by assuming that the sum is zero (subtract the average:c{\displaystyle \mathbf {c} }wherec=1n∑zi{\textstyle c={\frac {1}{n}}\sum z_{i}}), and then the softmax takes the hyperplane of points that sum to zero,∑zi=0{\textstyle \sum z_{i}=0}, to the open simplex of positive values that sum to 1∑σ(z)i=1{\textstyle \sum \sigma (\mathbf {z} )_{i}=1}, analogously to how the exponent takes 0 to 1,e0=1{\displaystyle e^{0}=1}and is positive. By contrast, softmax is not invariant under scaling. For instance,σ((0,1))=(1/(1+e),e/(1+e)){\displaystyle \sigma {\bigl (}(0,\,1){\bigr )}={\bigl (}1/(1+e),\,e/(1+e){\bigr )}}butσ((0,2))=(1/(1+e2),e2/(1+e2)).{\displaystyle \sigma {\bigl (}(0,2){\bigr )}={\bigl (}1/\left(1+e^{2}\right),\,e^{2}/\left(1+e^{2}\right){\bigr )}.} Thestandard logistic functionis the special case for a 1-dimensional axis in 2-dimensional space, say thex-axis in the(x, y)plane. One variable is fixed at 0 (sayz2=0{\displaystyle z_{2}=0}), soe0=1{\displaystyle e^{0}=1}, and the other variable can vary, denote itz1=x{\displaystyle z_{1}=x}, soez1/∑k=12ezk=ex/(ex+1),{\textstyle e^{z_{1}}/\sum _{k=1}^{2}e^{z_{k}}=e^{x}/\left(e^{x}+1\right),}the standard logistic function, andez2/∑k=12ezk=1/(ex+1),{\textstyle e^{z_{2}}/\sum _{k=1}^{2}e^{z_{k}}=1/\left(e^{x}+1\right),}its complement (meaning they add up to 1). 
The 1-dimensional input could alternatively be expressed as the line(x/2,−x/2){\displaystyle (x/2,\,-x/2)}, with outputsex/2/(ex/2+e−x/2)=ex/(ex+1){\displaystyle e^{x/2}/\left(e^{x/2}+e^{-x/2}\right)=e^{x}/\left(e^{x}+1\right)}ande−x/2/(ex/2+e−x/2)=1/(ex+1).{\displaystyle e^{-x/2}/\left(e^{x/2}+e^{-x/2}\right)=1/\left(e^{x}+1\right).} The softmax function is also the gradient of theLogSumExpfunction:∂∂ziLSE⁡(z)=exp⁡zi∑j=1Kexp⁡zj=σ(z)i,fori=1,…,K,z=(z1,…,zK)∈RK,{\displaystyle {\frac {\partial }{\partial z_{i}}}\operatorname {LSE} (\mathbf {z} )={\frac {\exp z_{i}}{\sum _{j=1}^{K}\exp z_{j}}}=\sigma (\mathbf {z} )_{i},\quad {\text{ for }}i=1,\dotsc ,K,\quad \mathbf {z} =(z_{1},\,\dotsc ,\,z_{K})\in \mathbb {R} ^{K},}where the LogSumExp function is defined asLSE⁡(z1,…,zn)=log⁡(exp⁡(z1)+⋯+exp⁡(zn)){\displaystyle \operatorname {LSE} (z_{1},\,\dots ,\,z_{n})=\log \left(\exp(z_{1})+\cdots +\exp(z_{n})\right)}. The gradient of softmax is thus∂zjσi=σi(δij−σj){\displaystyle \partial _{z_{j}}\sigma _{i}=\sigma _{i}(\delta _{ij}-\sigma _{j})}. The softmax function was used instatistical mechanicsas theBoltzmann distributionin the foundational paperBoltzmann (1868),[14]formalized and popularized in the influential textbookGibbs (1902).[15] The use of the softmax indecision theoryis credited toR. Duncan Luce,[16]: 1who used the axiom ofindependence of irrelevant alternativesinrational choice theoryto deduce the softmax inLuce's choice axiomfor relative preferences.[citation needed] In machine learning, the term "softmax" is credited to John S. Bridle in two 1989 conference papers,Bridle (1990a):[16]: 1andBridle (1990b):[3] We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives (e.g.pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network (e.g.weights). We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity.[17]: 227 For any input, the outputs must all be positive and they must sum to unity. ... Given a set of unconstrained values,⁠Vj(x){\displaystyle V_{j}(x)}⁠, we can ensure both conditions by using a Normalised Exponential transformation:Qj(x)=eVj(x)/∑keVk(x){\displaystyle Q_{j}(x)=\left.e^{V_{j}(x)}\right/\sum _{k}e^{V_{k}(x)}}This transformation can be considered a multi-input generalisation of the logistic, operating on the whole output layer. It preserves the rank order of its input values, and is a differentiable generalisation of the 'winner-take-all' operation of picking the maximum value. For this reason we like to refer to it assoftmax.[18]: 213 With an input of(1, 2, 3, 4, 1, 2, 3), the softmax is approximately(0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175). The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: a change oftemperaturechanges the output. When the temperature is multiplied by 10, the inputs are effectively(0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3)and the softmax is approximately(0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153). This shows that high temperatures de-emphasize the maximum value. 
Computation of this example using Python code is sketched below. The softmax function generates probability predictions densely distributed over its support. Other functions like sparsemax or α-entmax can be used when sparse probability predictions are desired.[19] Also the Gumbel-softmax reparametrization trick can be used when sampling from a discrete distribution needs to be mimicked in a differentiable manner.
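A minimal sketch of that computation, with the temperature handled by dividing the inputs before exponentiation; the printed values match those quoted above.

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())                   # subtract the max for numerical stability
    return e / e.sum()

x = [1, 2, 3, 4, 1, 2, 3]
print(np.round(softmax(x), 3))
# -> [0.024 0.064 0.175 0.475 0.024 0.064 0.175]
print(np.round(softmax(x, temperature=10.0), 3))
# -> [0.125 0.138 0.153 0.169 0.125 0.138 0.153]
```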
https://en.wikipedia.org/wiki/Softmax_function
Inmachine learning, aneural network(alsoartificial neural networkorneural net, abbreviatedANNorNN) is a computational model inspired by the structure and functions of biological neural networks.[1][2] A neural network consists of connected units or nodes calledartificial neurons, which loosely model theneuronsin the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected byedges, which model thesynapsesin the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is areal number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called theactivation function. The strength of the signal at each connection is determined by aweight, which adjusts during the learning process. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (theinput layer) to the last layer (theoutput layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.[3] Artificial neural networks are used for various tasks, includingpredictive modeling,adaptive control, and solving problems inartificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. Neural networks are typically trained throughempirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset.[4]Gradient-based methods such asbackpropagationare usually used to estimate the parameters of the network.[4]During the training phase, ANNs learn fromlabeledtraining data by iteratively updating their parameters to minimize a definedloss function.[5]This method allows the network to generalize to unseen data. Today's deep neural networks are based on early work instatisticsover 200 years ago. The simplest kind offeedforward neural network(FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. Themean squared errorsbetween these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as themethod of least squaresorlinear regression. It was used as a means of finding a good rough linear fit to a set of points byLegendre(1805) andGauss(1795) for the prediction of planetary movement.[7][8][9][10][11] Historically, digital computers such as thevon Neumann modeloperate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework ofconnectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing. 
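A minimal sketch of the single-layer linear network described above, fitted by minimizing the mean squared error, which is exactly the method of least squares; the synthetic data and "true" weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# A single-layer linear network: output = X @ w.
N, d = 50, 3
X = rng.normal(size=(N, d))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=N)     # noisy targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)     # closed-form least-squares fit
print(np.round(w, 2))                         # close to [ 2.  -1.   0.5]
print(np.mean((X @ w - y) ** 2))              # small mean squared error
```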
Warren McCullochandWalter Pitts[12](1943) considered a non-learning computational model for neural networks.[13]This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. In the late 1940s,D. O. Hebb[14]proposed a learninghypothesisbased on the mechanism ofneural plasticitythat became known asHebbian learning. It was used in many early neural networks, such as Rosenblatt'sperceptronand theHopfield network. Farley andClark[15](1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created byRochester, Holland, Habit and Duda (1956).[16] In 1958, psychologistFrank Rosenblattdescribed the perceptron, one of the first implemented artificial neural networks,[17][18][19][20]funded by the United StatesOffice of Naval Research.[21]R. D. Joseph (1960)[22]mentions an even earlier perceptron-like device by Farley and Clark:[10]"Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.[23] The first perceptrons did not have adaptive hidden units. However, Joseph (1960)[22]also discussedmultilayer perceptronswith an adaptive hidden layer. Rosenblatt (1962)[24]: section 16cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e.,deep learning. Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in theSoviet Union(1965). They regarded it as a form of polynomial regression,[25]or a generalization of Rosenblatt's perceptron.[26]A 1971 paper described a deep network with eight layers trained by this method,[27]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."[10] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[28]was published in 1967 byShun'ichi Amari.[29]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[10]Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit) activation function.[10][30][31]The rectifier has become the most popular activation function for deep learning.[32] Nevertheless, research stagnated in the United States following the work ofMinskyandPapert(1969),[33]who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. 
This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967). In 1976 transfer learning was introduced in neural networks learning.[34][35] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers and weight replication began with theNeocognitronintroduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.[36][37][38] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[39]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[24]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[40]In 1970,Seppo Linnainmaapublished the modern form of backpropagation in his Master'sthesis(1970).[41][42][10]G.M. Ostrovski et al. republished it in 1971.[43][44]Paul Werbosapplied backpropagation to neural networks in 1982[45][46](his 1974 PhD thesis, reprinted in a 1994 book,[47]did not yet describe the algorithm[44]). In 1986,David E. Rumelhartet al. popularised backpropagation but did not cite the original work.[48] Kunihiko Fukushima'sconvolutional neural network(CNN) architecture of 1979[36]also introducedmax pooling,[49]a popular downsampling procedure for CNNs. CNNs have become an essential tool forcomputer vision. Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[50][51]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[52]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[53]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[54]In 1991, a CNN was applied to medical image object segmentation[55]and breast cancer detection in mammograms.[56]LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.[57] From 1988 onward,[58][59]the use of neural networks transformed the field ofprotein structure prediction, in particular when the first cascading networks were trained onprofiles(matrices) produced by multiplesequence alignments.[60] One origin of RNN wasstatistical mechanics. In 1972,Shun'ichi Amariproposed to modify the weights of anIsing modelbyHebbian learningrule as a model ofassociative memory, adding in the component of learning.[61]This was popularized as the Hopfield network byJohn Hopfield(1982).[62]Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901,Cajalobserved "recurrent semicircles" in thecerebellar cortex.[63]Hebbconsidered "reverberating circuit" as an explanation for short-term memory.[64]The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.[12] In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array,[65][66]used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition of computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. 
Eliminating the external supervisor, it introduced the self-learning method in neural networks. In cognitive psychology, the journal American Psychologist in early 1980's carried out a debate on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion.[67][68]In 1982 the Crossbar Adaptive Array gave a neural network model of cognition-emotion relation.[65][69]It was an example of a debate where an AI system, a recurrent neural network, contributed to an issue in the same time addressed by cognitive psychology. Two early influential works were theJordan network(1986) and theElman network(1990), which applied RNN to studycognitive psychology. In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991,Jürgen Schmidhuberproposed the "neural sequence chunker" or "neural history compressor"[70][71]which introduced the important concepts of self-supervised pre-training (the "P" inChatGPT) and neuralknowledge distillation.[10]In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[72] In 1991,Sepp Hochreiter's diploma thesis[73]identified and analyzed thevanishing gradient problem[73][74]and proposed recurrentresidualconnections to solve it. He and Schmidhuber introducedlong short-term memory(LSTM), which set accuracy records in multiple applications domains.[75][76]This was not yet the modern version of LSTM, which required the forget gate, which was introduced in 1999.[77]It became the default choice for RNN architecture. During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, etc., including theBoltzmann machine,[78]restricted Boltzmann machine,[79]Helmholtz machine,[80]and thewake-sleep algorithm.[81]These were designed for unsupervised learning of deep generative models. Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially inpattern recognitionandhandwriting recognition.[82][83]In 2011, a CNN namedDanNet[84][85]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[38]It then won more contests.[86][87]They also showed howmax-poolingCNNs on GPU improved performance significantly.[88] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, and Geoffrey Hinton[89]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network byKaren SimonyanandAndrew Zisserman[90]and Google'sInceptionv3.[91] In 2012,NgandDeancreated a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images.[92]Unsupervised pre-training and increased computing power fromGPUsanddistributed computingallowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".[5] Radial basis functionand wavelet networks were introduced in 2013. 
These can be shown to offer best approximation properties and have been applied innonlinear system identificationand classification applications.[93] Generative adversarial network(GAN) (Ian Goodfellowet al., 2014)[94]became state of the art in generative modeling during 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[95][96]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. Excellent image quality is achieved byNvidia'sStyleGAN(2018)[97]based on the Progressive GAN by Tero Karras et al.[98]Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerningdeepfakes.[99]Diffusion models(2015)[100]eclipsed GANs in generative modeling since then, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers.[101]Stacking too many layers led to a steep reduction intrainingaccuracy,[102]known as the "degradation" problem.[103]In 2015, two techniques were developed to train very deep networks: thehighway networkwas published in May 2015,[104]and the residual neural network (ResNet) in December 2015.[105][106]ResNet behaves like an open-gated Highway Net. During the 2010s, theseq2seqmodel was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 inAttention Is All You Need.[107]It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992)[108]scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.[109][110][10]Transformers have increasingly become the model of choice fornatural language processing.[111]Many modernlarge language modelssuch asChatGPT,GPT-4, andBERTuse this architecture. ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms adirected,weighted graph.[112] An artificial neural network consists of simulated neurons. Each neuron is connected to othernodesvialinkslike a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another,[113]allowing weights to choose the signal between neurons. ANNs are composed ofartificial neuronswhich are conceptually derived from biologicalneurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons.[114]The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. 
The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.[citation needed] To find the output of a neuron, we take the weighted sum of all its inputs, with the weights given by the connections from the inputs to the neuron, and add a bias term to this sum.[115] This weighted sum is sometimes called the activation; it is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.[116] The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single-layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer.[117] Neurons with only such connections form a directed acyclic graph and are known as feedforward networks.[118] Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.[119] A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include the learning rate, the number of hidden layers and the batch size.[citation needed] The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.[citation needed] Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If, after learning, the error rate is too high, the network typically must be redesigned. Practically, this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.[112][120] The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation.[121] A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. 
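As a concrete illustration of the computations described above (the weighted sum with a bias, the nonlinear activation, and a corrective weight update whose size is set by the learning rate), here is a minimal NumPy sketch. The layer sizes, the toy data, the sigmoid activation, and the squared-error cost are illustrative assumptions, not part of any particular library or of the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 input features, 3 neurons in the layer.
n_inputs, n_neurons = 4, 3
W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # connection weights
b = np.zeros(n_neurons)                                # bias terms

def sigmoid(z):
    """A common (nonlinear) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x):
    """Weighted sum of the inputs plus bias, passed through the activation."""
    z = W @ x + b          # the weighted sum (sometimes called the activation)
    return sigmoid(z)      # the neurons' outputs

# One observation and a target, purely for illustration.
x = rng.normal(size=n_inputs)
t = np.array([0.0, 1.0, 0.0])

# One gradient-descent step on a squared-error cost, scaled by the learning rate.
eta = 0.5                                  # the learning rate (step size)
y = layer_forward(x)
error = y - t                              # observed error
grad_z = error * y * (1.0 - y)             # chain rule through the sigmoid
W -= eta * np.outer(grad_z, x)             # corrective step on the weights
b -= eta * grad_z                          # corrective step on the biases
```

A larger value of eta takes bigger corrective steps on each observation, which is exactly the trade-off between training time and ultimate accuracy discussed above.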
Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate.[122] The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.[citation needed] While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).[citation needed] Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines,[123] "no-prop" networks,[124] training without backtracking,[125] "weightless" networks,[126][127] and non-connectionist neural networks.[citation needed] Machine learning is commonly separated into three main learning paradigms: supervised learning,[128] unsupervised learning[129] and reinforcement learning.[130] Each corresponds to a particular learning task. Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions.[131] A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far. In unsupervised learning, input data is given along with the cost function, some function of the data x{\displaystyle \textstyle x} and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x)=a{\displaystyle \textstyle f(x)=a} where a{\displaystyle \textstyle a} is a constant and the cost C=E[(x−f(x))2]{\displaystyle \textstyle C=E[(x-f(x))^{2}]}. Minimizing this cost produces a value of a{\displaystyle \textstyle a} that is equal to the mean of the data. The cost function can be much more complicated. 
Its form depends on the application: for example, in compression it could be related to the mutual information between x{\displaystyle \textstyle x} and f(x){\displaystyle \textstyle f(x)}, whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering. In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually can only be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly. Formally, the environment is modeled as a Markov decision process (MDP) with states s1,...,sn∈S{\displaystyle \textstyle {s_{1},...,s_{n}}\in S} and actions a1,...,am∈A{\displaystyle \textstyle {a_{1},...,a_{m}}\in A}. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(ct|st){\displaystyle \textstyle P(c_{t}|s_{t})}, the observation distribution P(xt|st){\displaystyle \textstyle P(x_{t}|s_{t})} and the transition distribution P(st+1|st,at){\displaystyle \textstyle P(s_{t+1}|s_{t},a_{t})}, while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC. ANNs serve as the learning component in such applications.[132][133] Dynamic programming coupled with ANNs (giving neurodynamic programming)[134] has been applied to problems such as those involved in vehicle routing,[135] video games, natural resource management[136][137] and medicine[138] because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks. Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA).[139] It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion.[140] Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation: in situation s perform action a; receive consequence situation s'; compute emotion of being in the consequence situation, v(s'); update the crossbar memory w'(a,s) = w(a,s) + v(s'). The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. 
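A toy sketch of the crossbar update just described. Only the memory matrix W = ||w(a,s)|| and the update w(a,s) ← w(a,s) + v(s') come from the description above; the toy environment dynamics, the numbers of situations and actions, and the use of the best value in a situation's column as that situation's "emotion" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, n_situations = 3, 5

# Genome vector: initial emotions about situations, received once from the
# genetic environment (the values here are arbitrary, for illustration only).
genome = rng.normal(size=n_situations)

# Crossbar memory W = ||w(a, s)||, one entry per (action, situation) pair.
W = np.tile(genome, (n_actions, 1)).astype(float)

def consequence(s, a):
    """Hypothetical behavioral environment: a toy deterministic transition."""
    return (s + a + 1) % n_situations

s = 0
for _ in range(20):
    a = int(np.argmax(W[:, s]))   # in situation s, decide on action a (crossbar fashion)
    s_next = consequence(s, a)    # receive consequence situation s'
    v = np.max(W[:, s_next])      # emotion toward the consequence situation
                                  # (assumed here: best value in its column)
    W[a, s] += v                  # crossbar self-learning update w(a,s) += v(s')
    s = s_next
```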
The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it initially, and only once, receives initial emotions about the situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in a behavioral environment that contains both desirable and undesirable situations.[141] Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches.[142][143] One advantage of neuroevolution is that it may be less prone to getting caught in "dead ends".[144] Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions[citation needed], or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima.[145] Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.[146] Topological deep learning, first introduced in 2017,[147] is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics. In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods,[148] gene expression programming,[149] simulated annealing,[150] expectation–maximization, non-parametric methods and particle swarm optimization[151] are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.[152][153] Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set. ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter are much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. 
Some types operate purely in hardware, while others are purely software and run on general-purpose computers. Some of the main breakthroughs include: Using artificial neural networks requires an understanding of their characteristics. Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network.[165] Available systems include AutoML and AutoKeras.[166] The scikit-learn library provides functions to help with building a deep network from scratch, and a deep network can then be implemented with frameworks such as TensorFlow or Keras. Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.[167] Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include: ANNs have been used to diagnose several types of cancers[185][186] and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.[187][188] ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters[189][190] and to predict foundation settlements.[191] ANNs modelling rainfall-runoff can also be useful for flood mitigation.[192] ANNs have also been used for building black-box models in geoscience: hydrology,[193][194] ocean modelling and coastal engineering,[195][196] and geomorphology.[197] ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware,[198] for identifying domains belonging to threat actors and for detecting URLs posing a security risk.[199] Research is underway on ANN systems designed for penetration testing, for detecting botnets,[200] credit card fraud[201] and network intrusions. ANNs have been proposed as a tool to solve partial differential equations in physics[202][203][204] and simulate the properties of many-body open quantum systems.[205][206][207][208] In brain research, ANNs have been used to study the short-term behavior of individual neurons,[209] how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies considered long- and short-term plasticity of neural systems and their relation to learning and memory from the individual neuron to the system level. It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.[210] Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. 
This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.[211] The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters. A specific recurrent architecture with rational-valued weights (as opposed to full-precision real number-valued weights) has the power of a universal Turing machine,[212] using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.[213][214][failed verification] A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Two notions of capacity are known by the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book,[215] which summarizes work by Thomas Cover.[216] The capacity of a network of standard neurons (not convolutional) can be derived by four rules[217] that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension. The VC dimension uses the principles of measure theory and finds the maximum capacity under the best possible circumstances, that is, given input data in a specific form. As noted in MacKay's book,[215] the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.[218] Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee convergence when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical. Another issue worth mentioning is that training may cross a saddle point, which may lead the convergence in the wrong direction. The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models.[219][220] Another example is that when parameters are small, ANNs are observed to often fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks.[221][222][223][224] This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.[225] Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the number of needed free parameters. Two approaches address over-training. 
The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting. Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified. By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications. The softmax activation function is σ(x)i=exi∑j=1cexj{\displaystyle \sigma (x)_{i}={\frac {e^{x_{i}}}{\sum _{j=1}^{c}e^{x_{j}}}}}, where the xj{\displaystyle x_{j}} are the raw outputs of the output layer and c{\displaystyle c} is the number of classes. A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.[226] Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too-large steps when changing the network connections following an example, grouping examples into so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.[152] Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single-lane, multi-lane, dirt, etc.), and a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience and to preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).[227] A central claim[citation needed] of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed[by whom?] that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. 
No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything".[228]One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft[229]to detecting credit card fraud to mastering the game ofGo. Technology writer Roger Bridgman commented: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[230] Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on theexplainabilityof AI has contributed towards the development of methods, notably those based onattentionmechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.[231] Biological brains use both shallow and deep circuits as reported by brain anatomy,[232]displaying a wide variety of invariance. Weng[233]argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies. Large and effective neural networks require considerable computing resources.[234]While the brain has hardware tailored to the task of processing signals through agraphof neurons, simulating even a simplified neuron onvon Neumann architecturemay consume vast amounts ofmemoryand storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormousCPUpower and time.[citation needed] Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered byGPGPUs(onGPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before.[38]The use of accelerators such asFPGAsand GPUs can reduce training times from months to days.[234][235] Neuromorphic engineeringor aphysical neural networkaddresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called aTensor Processing Unit, or TPU.[236] Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. 
Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.[237] Advocates ofhybridmodels (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.[238][239] Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases.[240][241]These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute.[240]This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications likefacial recognition, hiring processes, andlaw enforcement.[241][242]For example, in 2018,Amazonhad to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field.[242]The program would penalize any resume with the word "woman" or the name of any women's college. However, the use ofsynthetic datacan help reduce dataset bias and increase representation in datasets.[243] Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.[citation needed] In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance.[244]This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.[244] By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques.[244][245]These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.[citation needed] In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content.[244][245]This has implications for automated customer service, content moderation, and language understanding technologies.[citation needed] In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. 
For instance, deep feedforward neural networks are important in system identification and control applications.[citation needed] ANNs are used for stock market prediction and credit scoring. ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.[citation needed] ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning.[245] In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs.[244] Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management.[245] Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.[citation needed] ANNs such as generative adversarial networks (GANs) and transformers are used for content creation across numerous industries.[246] This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user.[247] In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck.[248] In the marketing industry, generative models are used to create personalized advertisements for consumers.[246] Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020.[249] Furthermore, neural networks have found uses in video game creation, where non-player characters (NPCs) can make decisions based on all the characters currently in the game.[250]
https://en.wikipedia.org/wiki/Artificial_neural_network
PyTorch is a machine learning library based on the Torch library,[4][5][6] used for applications such as computer vision and natural language processing,[7] originally developed by Meta AI and now part of the Linux Foundation umbrella.[8][9][10][11] It is one of the most popular deep learning frameworks, alongside others such as TensorFlow,[12] offering free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.[13] A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot,[14] Uber's Pyro,[15] Hugging Face's Transformers,[16][17] and Catalyst.[18][19] PyTorch provides two high-level features:[20] tensor computing (like NumPy) with strong acceleration via graphics processing units (GPUs), and deep neural networks built on a tape-based automatic differentiation system. Meta (formerly known as Facebook) operates both PyTorch and Convolutional Architecture for Fast Feature Embedding (Caffe2), but models defined by the two frameworks were mutually incompatible. The Open Neural Network Exchange (ONNX) project was created by Meta and Microsoft in September 2017 for converting models between frameworks. Caffe2 was merged into PyTorch at the end of March 2018.[21] In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the Linux Foundation.[22] PyTorch 2.0 was released on 15 March 2023, introducing TorchDynamo, a Python-level compiler that makes code run up to 2x faster, along with significant improvements in training and inference performance across major cloud platforms.[23][24] PyTorch defines a class called Tensor (torch.Tensor) to store and operate on homogeneous multidimensional rectangular arrays of numbers. PyTorch Tensors are similar to NumPy arrays, but can also be operated on with a CUDA-capable NVIDIA GPU. PyTorch has also been developing support for other GPU platforms, for example, AMD's ROCm[25] and Apple's Metal Framework.[26] PyTorch supports various sub-types of Tensors.[27] Note that the term "tensor" here does not carry the same meaning as tensor in mathematics or physics. The meaning of the word in machine learning is only superficially related to its original meaning as a certain kind of object in linear algebra. Tensors in PyTorch are simply multi-dimensional arrays. PyTorch defines a module called nn (torch.nn) to describe neural networks and to support training. This module offers a comprehensive collection of building blocks for neural networks, including various layers and activation functions, enabling the construction of complex models. Networks are built by subclassing torch.nn.Module and defining the sequence of operations in the forward() method. The following example sketch shows both the low-level tensor functionality of the library and the definition of a small neural network with linear layers using the nn module.
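This is a minimal illustrative sketch rather than any official PyTorch example: the tensor shapes, the layer sizes, and the class name TinyNet are assumptions made here purely for demonstration.

```python
import torch
from torch import nn

# Low-level tensor functionality: create tensors and operate on them.
a = torch.randn(2, 3)            # random 2x3 tensor
b = torch.ones(3, 2)             # 3x2 tensor of ones
c = a @ b                        # matrix multiplication, result is 2x2
print(c.shape, c.sum().item())

# A small network with linear layers, built on the nn module.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(4, 8)   # 4 input features -> 8 hidden units
        self.layer2 = nn.Linear(8, 1)   # 8 hidden units -> 1 output
        self.activation = nn.ReLU()

    def forward(self, x):
        # The sequence of operations applied to the input.
        x = self.activation(self.layer1(x))
        return self.layer2(x)

model = TinyNet()
x = torch.randn(5, 4)            # a batch of 5 samples with 4 features each
y = model(x)                     # forward pass; y has shape (5, 1)
print(y.shape)
```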
https://en.wikipedia.org/wiki/PyTorch
Batch normalization (also known as batch norm) is a normalization technique used to make training of artificial neural networks faster and more stable by adjusting the inputs to each layer—re-centering them around zero and re-scaling them to a standard size. It was introduced by Sergey Ioffe and Christian Szegedy in 2015.[1] Experts still debate why batch normalization works so well. It was initially thought to tackle internal covariate shift, a problem where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network.[1] However, newer research suggests it doesn't fix this shift but instead smooths the objective function—a mathematical guide the network follows to improve—enhancing performance.[2] In very deep networks, batch normalization can initially cause a severe gradient explosion—where updates to the network grow uncontrollably large—but this is managed with shortcuts called skip connections in residual networks.[3] Another theory is that batch normalization adjusts the data by handling its length and direction separately, speeding up training.[4] Each layer in a neural network has inputs that follow a specific distribution, which shifts during training due to two main factors: the random starting values of the network's settings (parameter initialization) and the natural variation in the input data. This shifting pattern affecting the inputs to the network's inner layers is called internal covariate shift. While a strict definition isn't fully agreed upon, experiments show that it involves changes in the means and variances of these inputs during training. Batch normalization was first developed to address internal covariate shift.[1] During training, as the parameters of preceding layers adjust, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This issue is particularly severe in deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in a significant shift in deeper hidden layers. Batch normalization was proposed to reduce these unwanted shifts, to speed up training and to produce more reliable models. Beyond possibly tackling internal covariate shift, batch normalization offers several additional advantages. It allows the network to use a higher learning rate—a setting that controls how quickly the network learns—without causing problems like vanishing or exploding gradients, where updates become too small or too large. It also appears to have a regularizing effect, improving the network's ability to generalize to new data, reducing the need for dropout, a technique used to prevent overfitting (when a model learns the training data too well and fails on new data). Additionally, networks using batch normalization are less sensitive to the choice of starting settings or learning rates, making them more robust and adaptable. In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restricted to each mini-batch in the training process. Let us use B to denote a mini-batch of size m of the entire training set. 
The empirical mean and variance of B could thus be denoted as μB=1m∑i=1mxi{\displaystyle \mu _{B}={\frac {1}{m}}\sum _{i=1}^{m}x_{i}}andσB2=1m∑i=1m(xi−μB)2{\displaystyle \sigma _{B}^{2}={\frac {1}{m}}\sum _{i=1}^{m}(x_{i}-\mu _{B})^{2}}. For a layer of the network with d-dimensional input, x=(x(1),...,x(d)){\displaystyle x=(x^{(1)},...,x^{(d)})}, each dimension of its input is then normalized (i.e. re-centered and re-scaled) separately, x^i(k)=xi(k)−μB(k)(σB(k))2+ϵ{\displaystyle {\hat {x}}_{i}^{(k)}={\frac {x_{i}^{(k)}-\mu _{B}^{(k)}}{\sqrt {\left(\sigma _{B}^{(k)}\right)^{2}+\epsilon }}}}, where k∈[1,d]{\displaystyle k\in [1,d]}and i∈[1,m]{\displaystyle i\in [1,m]}; μB(k){\displaystyle \mu _{B}^{(k)}}andσB(k){\displaystyle \sigma _{B}^{(k)}}are the per-dimension mean and standard deviation, respectively. ϵ{\displaystyle \epsilon }is added in the denominator for numerical stability and is an arbitrarily small constant. The resulting normalized activations x^(k){\displaystyle {\hat {x}}^{(k)}}have zero mean and unit variance, if ϵ{\displaystyle \epsilon }is not taken into account. To restore the representation power of the network, a transformation step then follows as yi(k)=γ(k)x^i(k)+β(k){\displaystyle y_{i}^{(k)}=\gamma ^{(k)}{\hat {x}}_{i}^{(k)}+\beta ^{(k)}}, where the parameters γ(k){\displaystyle \gamma ^{(k)}}andβ(k){\displaystyle \beta ^{(k)}}are subsequently learned in the optimization process. Formally, the operation that implements batch normalization is a transform BNγ(k),β(k):x1...m(k)→y1...m(k){\displaystyle BN_{\gamma ^{(k)},\beta ^{(k)}}:x_{1...m}^{(k)}\rightarrow y_{1...m}^{(k)}}called the Batch Normalizing transform. The output of the BN transform y(k)=BNγ(k),β(k)(x(k)){\displaystyle y^{(k)}=BN_{\gamma ^{(k)},\beta ^{(k)}}(x^{(k)})}is then passed to other network layers, while the normalized output x^i(k){\displaystyle {\hat {x}}_{i}^{(k)}}remains internal to the current layer. The described BN transform is a differentiable operation, and the gradient of the loss l with respect to the different parameters can be computed directly with the chain rule. 
Specifically,∂l∂yi(k){\displaystyle {\frac {\partial l}{\partial y_{i}^{(k)}}}}depends on the choice ofactivation function, and thegradientagainst other parameters could be expressed as a function of∂l∂yi(k){\displaystyle {\frac {\partial l}{\partial y_{i}^{(k)}}}}: ∂l∂x^i(k)=∂l∂yi(k)γ(k){\displaystyle {\frac {\partial l}{\partial {\hat {x}}_{i}^{(k)}}}={\frac {\partial l}{\partial y_{i}^{(k)}}}\gamma ^{(k)}}, ∂l∂γ(k)=∑i=1m∂l∂yi(k)x^i(k){\displaystyle {\frac {\partial l}{\partial \gamma ^{(k)}}}=\sum _{i=1}^{m}{\frac {\partial l}{\partial y_{i}^{(k)}}}{\hat {x}}_{i}^{(k)}},∂l∂β(k)=∑i=1m∂l∂yi(k){\displaystyle {\frac {\partial l}{\partial \beta ^{(k)}}}=\sum _{i=1}^{m}{\frac {\partial l}{\partial y_{i}^{(k)}}}},∂l∂σB(k)2=∑i=1m∂l∂yi(k)(xi(k)−μB(k))(−γ(k)2(σB(k)2+ϵ)−3/2){\displaystyle {\frac {\partial l}{\partial \sigma _{B}^{(k)^{2}}}}=\sum _{i=1}^{m}{\frac {\partial l}{\partial y_{i}^{(k)}}}(x_{i}^{(k)}-\mu _{B}^{(k)})\left(-{\frac {\gamma ^{(k)}}{2}}(\sigma _{B}^{(k)^{2}}+\epsilon )^{-3/2}\right)},∂l∂μB(k)=∑i=1m∂l∂yi(k)−γ(k)σB(k)2+ϵ+∂l∂σB(k)21m∑i=1m(−2)⋅(xi(k)−μB(k)){\displaystyle {\frac {\partial l}{\partial \mu _{B}^{(k)}}}=\sum _{i=1}^{m}{\frac {\partial l}{\partial y_{i}^{(k)}}}{\frac {-\gamma ^{(k)}}{\sqrt {\sigma _{B}^{(k)^{2}}+\epsilon }}}+{\frac {\partial l}{\partial \sigma _{B}^{(k)^{2}}}}{\frac {1}{m}}\sum _{i=1}^{m}(-2)\cdot (x_{i}^{(k)}-\mu _{B}^{(k)})}, and∂l∂xi(k)=∂l∂x^i(k)1σB(k)2+ϵ+∂l∂σB(k)22(xi(k)−μB(k))m+∂l∂μB(k)1m{\displaystyle {\frac {\partial l}{\partial x_{i}^{(k)}}}={\frac {\partial l}{\partial {\hat {x}}_{i}^{(k)}}}{\frac {1}{\sqrt {\sigma _{B}^{(k)^{2}}+\epsilon }}}+{\frac {\partial l}{\partial \sigma _{B}^{(k)^{2}}}}{\frac {2(x_{i}^{(k)}-\mu _{B}^{(k)})}{m}}+{\frac {\partial l}{\partial \mu _{B}^{(k)}}}{\frac {1}{m}}}. During the training stage, the normalization steps depend on the mini-batches to ensure efficient and reliable training. However, in the inference stage, this dependence is not useful any more. Instead, the normalization step in this stage is computed with the population statistics such that the output could depend on the input in a deterministic manner. The population mean,E[x(k)]{\displaystyle E[x^{(k)}]}, and variance,Var⁡[x(k)]{\displaystyle \operatorname {Var} [x^{(k)}]}, are computed as: E[x(k)]=EB[μB(k)]{\displaystyle E[x^{(k)}]=E_{B}[\mu _{B}^{(k)}]}, andVar⁡[x(k)]=mm−1EB[(σB(k))2]{\displaystyle \operatorname {Var} [x^{(k)}]={\frac {m}{m-1}}E_{B}[\left(\sigma _{B}^{(k)}\right)^{2}]}. The population statistics thus is a complete representation of the mini-batches. The BN transform in the inference step thus becomes y(k)=BNγ(k),β(k)inf(x(k))=γ(k)x(k)−E[x(k)]Var⁡[x(k)]+ϵ+β(k){\displaystyle y^{(k)}=BN_{\gamma ^{(k)},\beta ^{(k)}}^{\text{inf}}(x^{(k)})=\gamma ^{(k)}{\frac {x^{(k)}-E[x^{(k)}]}{\sqrt {\operatorname {Var} [x^{(k)}]+\epsilon }}}+\beta ^{(k)}}, wherey(k){\displaystyle y^{(k)}}is passed on to future layers instead ofx(k){\displaystyle x^{(k)}}. Since the parameters are fixed in this transformation, the batch normalization procedure is essentially applying alinear transformto the activation. Although batch normalization has become popular due to its strong empirical performance, the working mechanism of the method is not yet well-understood. The explanation made in the original paper[1]was that batch norm works by reducing internal covariate shift, but this has been challenged by more recent work. 
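To make the training-time transform and the inference-time transform defined above concrete, the following is a minimal NumPy sketch. The toy input distribution, the dimensions, and the choice to keep γ and β fixed are illustrative assumptions; the mini-batch normalization, the m/(m−1) correction, and the use of averaged batch statistics at inference follow the formulas above.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-5
d, m = 4, 32                       # feature dimension and mini-batch size
gamma = np.ones(d)                 # learned scale (kept fixed here for brevity)
beta = np.zeros(d)                 # learned shift (kept fixed here for brevity)

def bn_train(x_batch):
    """Training-time transform: normalize with the mini-batch statistics."""
    mu = x_batch.mean(axis=0)                      # per-dimension mean over the batch
    var = x_batch.var(axis=0)                      # per-dimension (biased) variance
    x_hat = (x_batch - mu) / np.sqrt(var + eps)    # re-center and re-scale
    return gamma * x_hat + beta, mu, var

# Accumulate mini-batch statistics to form the population statistics.
batch_means, batch_vars = [], []
for _ in range(100):
    batch = rng.normal(loc=2.0, scale=3.0, size=(m, d))   # toy input distribution
    _, mu, var = bn_train(batch)
    batch_means.append(mu)
    batch_vars.append(var)

pop_mean = np.mean(batch_means, axis=0)                   # E[x] = E_B[mu_B]
pop_var = m / (m - 1) * np.mean(batch_vars, axis=0)       # unbiased Var[x]

def bn_inference(x):
    """Inference-time transform: a fixed linear map using population statistics."""
    return gamma * (x - pop_mean) / np.sqrt(pop_var + eps) + beta

print(bn_inference(rng.normal(loc=2.0, scale=3.0, size=(5, d))).round(2))
```

Once the population statistics and the parameters are frozen, the inference-time step is indeed just a fixed linear transform of its input, as stated above.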
One experiment[5] trained a VGG-16 network[6] under three different training regimes: standard (no batch norm), batch norm, and batch norm with noise added to each layer during training. In the third model, the noise has non-zero mean and non-unit variance, i.e. it explicitly introduces covariate shift. Despite this, it showed similar accuracy to the second model, and both performed better than the first, suggesting that covariate shift is not the reason that batch norm improves performance. Using batch normalization causes the items in a batch to no longer be iid, which can lead to difficulties in training due to lower quality gradient estimation.[7] One alternative explanation[5] is that the improvement with batch normalization is instead due to its producing a smoother parameter space and smoother gradients, as formalized by a smaller Lipschitz constant. Consider two identical networks, one of which contains batch normalization layers while the other does not; the behaviors of these two networks are then compared. Denote the loss functions as L^{\displaystyle {\hat {L}}}andL{\displaystyle L}, respectively. Let the input to both networks be x{\displaystyle x}, and the output be y{\displaystyle y}, for which y=Wx{\displaystyle y=Wx}, where W{\displaystyle W}is the layer weights. For the second network, y{\displaystyle y}additionally goes through a batch normalization layer. Denote the normalized activation as y^{\displaystyle {\hat {y}}}, which has zero mean and unit variance. Let the transformed activation be z=γy^+β{\displaystyle z=\gamma {\hat {y}}+\beta }, and suppose γ{\displaystyle \gamma }andβ{\displaystyle \beta }are constants. Finally, denote the standard deviation over a mini-batch yj^∈Rm{\displaystyle {\hat {y_{j}}}\in \mathbb {R} ^{m}}as σj{\displaystyle \sigma _{j}}. First, it can be shown that the gradient magnitude of a batch normalized network,||▽yiL^||{\displaystyle ||\triangledown _{y_{i}}{\hat {L}}||}, is bounded, with the bound expressed as ||▽yiL^||2≤γ2σj2(||▽yiL||2−1m⟨1,▽yiL⟩2−1m⟨▽yiL,y^j⟩2){\displaystyle ||\triangledown _{y_{i}}{\hat {L}}||^{2}\leq {\frac {\gamma ^{2}}{\sigma _{j}^{2}}}{\Bigg (}||\triangledown _{y_{i}}L||^{2}-{\frac {1}{m}}\langle 1,\triangledown _{y_{i}}L\rangle ^{2}-{\frac {1}{m}}\langle \triangledown _{y_{i}}L,{\hat {y}}_{j}\rangle ^{2}{\bigg )}}. Since the gradient magnitude represents the Lipschitzness of the loss, this relationship indicates that a batch-normalized network can achieve comparatively greater Lipschitzness. Notice that the bound gets tighter when the gradient ▽yiL^{\displaystyle \triangledown _{y_{i}}{\hat {L}}}correlates with the activation yi^{\displaystyle {\hat {y_{i}}}}, which is a common phenomenon. The scaling of γ2σj2{\displaystyle {\frac {\gamma ^{2}}{\sigma _{j}^{2}}}}is also significant, since the variance is often large. 
Secondly, the quadratic form of the loss Hessian with respect to activation in the gradient direction can be bounded as (▽yjL^)T∂L^∂yj∂yj(▽yjL^)≤γ2σ2(∂L^∂yj)T(∂L∂yj∂yj)(∂L^∂yj)−γmσ2⟨▽yjL,yj^⟩||∂L^∂yj||2{\displaystyle (\triangledown _{y_{j}}{\hat {L}})^{T}{\frac {\partial {\hat {L}}}{\partial y_{j}\partial y_{j}}}(\triangledown _{y_{j}}{\hat {L}})\leq {\frac {\gamma ^{2}}{\sigma ^{2}}}{\bigg (}{\frac {\partial {\hat {L}}}{\partial y_{j}}}{\bigg )}^{T}{\bigg (}{\frac {\partial L}{\partial y_{j}\partial y_{j}}}{\bigg )}{\bigg (}{\frac {\partial {\hat {L}}}{\partial y_{j}}}{\bigg )}-{\frac {\gamma }{m\sigma ^{2}}}\langle \triangledown _{y_{j}}L,{\hat {y_{j}}}\rangle {\bigg |}{\bigg |}{\frac {\partial {\hat {L}}}{\partial y_{j}}}{\bigg |}{\bigg |}^{2}}. The scaling ofγ2σj2{\displaystyle {\frac {\gamma ^{2}}{\sigma _{j}^{2}}}}indicates that the loss Hessian is resilient to the mini-batch variance, whereas the second term on the right hand side suggests that it becomes smoother when theHessianand the inner product are non-negative. If the loss is locallyconvex, then the Hessian ispositive semi-definite, while the inner product is positive ifgj^{\displaystyle {\hat {g_{j}}}}is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer. It then follows to translate the bounds related to the loss with respect to the normalized activation to a bound on the loss with respect to the network weights: gj^≤γ2σj2(gj2−mμgj2−λ2⟨▽yjL,y^j⟩2){\displaystyle {\hat {g_{j}}}\leq {\frac {\gamma ^{2}}{\sigma _{j}^{2}}}(g_{j}^{2}-m\mu _{g_{j}}^{2}-\lambda ^{2}\langle \triangledown _{y_{j}}L,{\hat {y}}_{j}\rangle ^{2})}, wheregj=max||X||≤λ||▽WL||2{\displaystyle g_{j}=max_{||X||\leq \lambda }||\triangledown _{W}L||^{2}}andg^j=max||X||≤λ||▽WL^||2{\displaystyle {\hat {g}}_{j}=max_{||X||\leq \lambda }||\triangledown _{W}{\hat {L}}||^{2}}. In addition to the smoother landscape, it is further shown that batch normalization could result in a better initialization with the following inequality: ||W0−W^∗||2≤||W0−W∗||2−1||W∗||2(||W∗||2−⟨W∗,W0⟩)2{\displaystyle ||W_{0}-{\hat {W}}^{*}||^{2}\leq ||W_{0}-W^{*}||^{2}-{\frac {1}{||W^{*}||^{2}}}(||W^{*}||^{2}-\langle W^{*},W_{0}\rangle )^{2}}, whereW∗{\displaystyle W^{*}}andW^∗{\displaystyle {\hat {W}}^{*}}are the local optimal weights for the two networks, respectively. Some scholars argue that the above analysis cannot fully capture the performance of batch normalization, because the proof only concerns the largest eigenvalue, or equivalently, one direction in the landscape at all points. It is suggested that the complete eigenspectrum needs to be taken into account to make a conclusive analysis.[8][5] Since it is hypothesized that batch normalization layers could reduce internal covariate shift, an experiment[citation needed]is set up to measure quantitatively how much covariate shift is reduced. First, the notion of internal covariate shift needs to be defined mathematically. Specifically, to quantify the adjustment that a layer's parameters make in response to updates in previous layers, the correlation between the gradients of the loss before and after all previous layers are updated is measured, since gradients could capture the shifts from the first-order training method. If the shift introduced by the changes in previous layers is small, then the correlation between the gradients would be close to 1. 
The correlation between the gradients is computed for four models: a standard VGG network,[6] a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift. Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact suffers from gradient explosion at initialization time, no matter what it uses for nonlinearity. Thus the optimization landscape is very far from smooth for a randomly initialized, deep batchnorm network. More precisely, if the network has L{\displaystyle L}layers, then the gradient of the first layer weights has norm >cλL{\displaystyle >c\lambda ^{L}}for some λ>1,c>0{\displaystyle \lambda >1,c>0}depending only on the nonlinearity. For any fixed nonlinearity, λ{\displaystyle \lambda }decreases as the batch size increases. For example, for ReLU, λ{\displaystyle \lambda }decreases to π/(π−1)≈1.467{\displaystyle \pi /(\pi -1)\approx 1.467}as the batch size tends to infinity. Practically, this means deep batchnorm networks are untrainable. This is only relieved by skip connections in the fashion of residual networks.[9] On the surface, this gradient explosion contradicts the smoothness property explained in the previous section, but in fact the two are consistent. The previous section studies the effect of inserting a single batchnorm in a network, while the gradient explosion depends on stacking batchnorms typical of modern deep neural networks. Another possible reason for the success of batch normalization is that it decouples the length and direction of the weight vectors and thus facilitates better training. By interpreting batch norm as a reparametrization of weight space, it can be shown that the length and the direction of the weights are separated and can thus be trained separately. For a particular neural network unit with input x{\displaystyle x}and weight vector w{\displaystyle w}, denote its output as f(w)=Ex[ϕ(xTw)]{\displaystyle f(w)=E_{x}[\phi (x^{T}w)]}, where ϕ{\displaystyle \phi }is the activation function, and denote S=E[xxT]{\displaystyle S=E[xx^{T}]}. Assume that E[x]=0{\displaystyle E[x]=0}, and that the spectrum of the matrix S{\displaystyle S}is bounded as 0<μ=λmin(S){\displaystyle 0<\mu =\lambda _{min}(S)}, L=λmax(S)<∞{\displaystyle L=\lambda _{max}(S)<\infty }, such that S{\displaystyle S}is symmetric positive definite. Adding batch normalization to this unit thus results in fBN(w,γ,β)=Ex[ϕ(BN(xTw))]=Ex[ϕ(γ(xTw−Ex[xTw]varx[xTw]1/2)+β)]{\displaystyle f_{BN}(w,\gamma ,\beta )=E_{x}[\phi (BN(x^{T}w))]=E_{x}{\bigg [}\phi {\bigg (}\gamma ({\frac {x^{T}w-E_{x}[x^{T}w]}{var_{x}[x^{T}w]^{1/2}}})+\beta {\bigg )}{\bigg ]}}, by definition. The variance term can be simplified such that varx[xTw]=wTSw{\displaystyle var_{x}[x^{T}w]=w^{T}Sw}. Assuming that x{\displaystyle x}has zero mean and that β{\displaystyle \beta }can be omitted, it then follows that fBN(w,γ)=Ex[ϕ(γxTw(wTSw)1/2)]{\displaystyle f_{BN}(w,\gamma )=E_{x}{\bigg [}\phi {\bigg (}\gamma {\frac {x^{T}w}{(w^{T}Sw)^{1/2}}}{\bigg )}{\bigg ]}}, where (wTSw)12{\displaystyle (w^{T}Sw)^{\frac {1}{2}}}is the induced norm of S{\displaystyle S}, ||w||s{\displaystyle ||w||_{s}}. 
Hence, it could be concluded that fBN(w,γ)=Ex[ϕ(xTw~)]{\displaystyle f_{BN}(w,\gamma )=E_{x}[\phi (x^{T}{\tilde {w}})]}, where w~=γw||w||s{\displaystyle {\tilde {w}}=\gamma {\frac {w}{||w||_{s}}}}, and γ{\displaystyle \gamma }andw{\displaystyle w}account for its length and direction separately. This property could then be used to prove the faster convergence of problems with batch normalization. With the reparametrization interpretation, it could then be proved that applying batch normalization to the ordinary least squares problem achieves a linear convergence rate in gradient descent, which is faster than the regular gradient descent with only sub-linear convergence. Denote the objective of minimizing an ordinary least squares problem as minw~∈RdfOLS(w~)=minw~∈Rd(Ex,y[(y−xTw~)2])=minw~∈Rd(2uTw~+w~TSw~){\displaystyle min_{{\tilde {w}}\in R^{d}}f_{OLS}({\tilde {w}})=min_{{\tilde {w}}\in R^{d}}(E_{x,y}[(y-x^{T}{\tilde {w}})^{2}])=min_{{\tilde {w}}\in R^{d}}(2u^{T}{\tilde {w}}+{\tilde {w}}^{T}S{\tilde {w}})}, where u=E[−yx]{\displaystyle u=E[-yx]}andS=E[xxT]{\displaystyle S=E[xx^{T}]}. Since w~=γw||w||s{\displaystyle {\tilde {w}}=\gamma {\frac {w}{||w||_{s}}}}, the objective thus becomes minw∈Rd∖{0},γ∈RfOLS(w,γ)=minw∈Rd∖{0},γ∈R(2γuTw||w||S+γ2){\displaystyle min_{w\in R^{d}\backslash \{0\},\gamma \in R}f_{OLS}(w,\gamma )=min_{w\in R^{d}\backslash \{0\},\gamma \in R}{\bigg (}2\gamma {\frac {u^{T}w}{||w||_{S}}}+\gamma ^{2}{\bigg )}}, where 0 is excluded to avoid 0 in the denominator. Since the objective is convex with respect to γ{\displaystyle \gamma }, its optimal value could be calculated by setting the partial derivative of the objective with respect to γ{\displaystyle \gamma }to 0. The objective could be further simplified to be minw∈Rd∖{0}ρ(w)=minw∈Rd∖{0}(−wTuuTwwTSw){\displaystyle min_{w\in R^{d}\backslash \{0\}}\rho (w)=min_{w\in R^{d}\backslash \{0\}}{\bigg (}-{\frac {w^{T}uu^{T}w}{w^{T}Sw}}{\bigg )}}. Note that this objective is a form of the generalized Rayleigh quotient ρ~(w)=wTBwwTAw{\displaystyle {\tilde {\rho }}(w)={\frac {w^{T}Bw}{w^{T}Aw}}}, where B∈Rd×d{\displaystyle B\in R^{d\times d}}is a symmetric matrix and A∈Rd×d{\displaystyle A\in R^{d\times d}}is a symmetric positive definite matrix. It is proven that the gradient descent convergence rate of the generalized Rayleigh quotient is λ1−ρ(wt+1)ρ(wt+1)−λ2≤(1−λ1−λ2λ1−λmin)2tλ1−ρ(wt)ρ(wt)−λ2{\displaystyle {\frac {\lambda _{1}-\rho (w_{t+1})}{\rho (w_{t+1})-\lambda _{2}}}\leq {\bigg (}1-{\frac {\lambda _{1}-\lambda _{2}}{\lambda _{1}-\lambda _{min}}}{\bigg )}^{2t}{\frac {\lambda _{1}-\rho (w_{t})}{\rho (w_{t})-\lambda _{2}}}}, where λ1{\displaystyle \lambda _{1}}is the largest eigenvalue of B{\displaystyle B}, λ2{\displaystyle \lambda _{2}}is the second largest eigenvalue of B{\displaystyle B}, and λmin{\displaystyle \lambda _{min}}is the smallest eigenvalue of B{\displaystyle B}.[10] In our case, B=uuT{\displaystyle B=uu^{T}}is a rank one matrix, and the convergence result can be simplified accordingly. Specifically, consider gradient descent steps of the form wt+1=wt−ηt▽ρ(wt){\displaystyle w_{t+1}=w_{t}-\eta _{t}\triangledown \rho (w_{t})}with step size ηt=wtTSwt2L|ρ(wt)|{\displaystyle \eta _{t}={\frac {w_{t}^{T}Sw_{t}}{2L|\rho (w_{t})|}}}, and starting from ρ(w0)≠0{\displaystyle \rho (w_{0})\neq 0}, then ρ(wt)−ρ(w∗)≤(1−μL)2t(ρ(w0)−ρ(w∗)){\displaystyle \rho (w_{t})-\rho (w^{*})\leq {\bigg (}1-{\frac {\mu }{L}}{\bigg )}^{2t}(\rho (w_{0})-\rho (w^{*}))}. 
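A small numerical sketch of the length-direction decoupling above: under the reparametrization w̃ = γ w/||w||_S, rescaling w leaves the unit's output unchanged, so the length is carried entirely by γ. The data distribution, the tanh activation, and the sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 3
x = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # zero-mean inputs with a non-trivial covariance
S = x.T @ x / n                                          # empirical S = E[x x^T]

def f_bn(w, gamma, phi=np.tanh):
    """Estimate E_x[phi(gamma * x^T w / ||w||_S)] on the sample."""
    norm_S = np.sqrt(w @ S @ w)                          # induced norm ||w||_S
    return phi(gamma * (x @ w) / norm_S).mean()

w = rng.normal(size=d)
gamma = 1.5
# Rescaling w (changing its length) does not change the output:
print(f_bn(w, gamma), f_bn(10.0 * w, gamma))             # essentially identical
# Changing gamma (the length parameter) does:
print(f_bn(w, 2 * gamma))
```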
The problem of learning halfspaces refers to the training of thePerceptron, which is the simplest form of neural network. The optimization problem in this case is minw~∈RdfLH(w~)=Ey,x[ϕ(zTw~)]{\displaystyle min_{{\tilde {w}}\in R^{d}}f_{LH}({\tilde {w}})=E_{y,x}[\phi (z^{T}{\tilde {w}})]}, wherez=−yx{\displaystyle z=-yx}andϕ{\displaystyle \phi }is an arbitrary loss function. Suppose thatϕ{\displaystyle \phi }is infinitely differentiable and has a bounded derivative. Assume that the objective functionfLH{\displaystyle f_{LH}}isζ{\displaystyle \zeta }-smooth, and that a solutionα∗=argminα||▽f(αw)||2{\displaystyle \alpha ^{*}=argmin_{\alpha }||\triangledown f(\alpha w)||^{2}}exists and is bounded such that−∞<α∗<∞{\displaystyle -\infty <\alpha ^{*}<\infty }. Also assumez{\displaystyle z}is amultivariate normal random variable. With the Gaussian assumption, it can be shown that allcritical pointslie on the same line, for any choice of loss functionϕ{\displaystyle \phi }. Specifically, the gradient offLH{\displaystyle f_{LH}}could be represented as ▽w~fLH(w~)=c1(w~)u+c2(w~)Sw~{\displaystyle \triangledown _{\tilde {w}}f_{LH}({\tilde {w}})=c_{1}({\tilde {w}})u+c_{2}({\tilde {w}})S{\tilde {w}}}, wherec1(w~)=Ez[ϕ(1)(zTw~)]−Ez[ϕ(2)(zTw~)](uTw~){\displaystyle c_{1}({\tilde {w}})=E_{z}[\phi ^{(1)}(z^{T}{\tilde {w}})]-E_{z}[\phi ^{(2)}(z^{T}{\tilde {w}})](u^{T}{\tilde {w}})},c2(w~)=Ez[ϕ(2)(zTw~)]{\displaystyle c_{2}({\tilde {w}})=E_{z}[\phi ^{(2)}(z^{T}{\tilde {w}})]}, andϕ(i){\displaystyle \phi ^{(i)}}is thei{\displaystyle i}-th derivative ofϕ{\displaystyle \phi }. By setting the gradient to 0, it thus follows that the bounded critical pointsw~∗{\displaystyle {\tilde {w}}_{*}}can be expressed asw~∗=g∗S−1u{\displaystyle {\tilde {w}}_{*}=g_{*}S^{-1}u}, whereg∗{\displaystyle g_{*}}depends onw~∗{\displaystyle {\tilde {w}}_{*}}andϕ{\displaystyle \phi }. Combining this global property with length-direction decoupling, it could thus be proved that this optimization problem converges linearly. First, a variation ofgradient descentwith batch normalization, Gradient Descent in Normalized Parameterization (GDNP), is designed for the objective functionminw∈Rd∖{0},γ∈RfLH(w,γ){\displaystyle min_{w\in R^{d}\backslash \{0\},\gamma \in R}f_{LH}(w,\gamma )}, such that the direction and length of the weights are updated separately. Denote the stopping criterion of GDNP as h(wt,γt)=Ez[ϕ′(zTw~t)](uTwt)−Ez[ϕ″(zTw~t)](uTwt)2{\displaystyle h(w_{t},\gamma _{t})=E_{z}[\phi '(z^{T}{\tilde {w}}_{t})](u^{T}w_{t})-E_{z}[\phi ''(z^{T}{\tilde {w}}_{t})](u^{T}w_{t})^{2}}. Let the step size be st=s(wt,γt)=−||wt||S3Lgth(wt,γt){\displaystyle s_{t}=s(w_{t},\gamma _{t})=-{\frac {||w_{t}||_{S}^{3}}{Lg_{t}h(w_{t},\gamma _{t})}}}. For each step, ifh(wt,γt)≠0{\displaystyle h(w_{t},\gamma _{t})\neq 0}, then update the direction as wt+1=wt−st▽wf(wt,γt){\displaystyle w_{t+1}=w_{t}-s_{t}\triangledown _{w}f(w_{t},\gamma _{t})}. Then update the length according to γt=Bisection(Ts,f,wt){\displaystyle \gamma _{t}={\text{Bisection}}(T_{s},f,w_{t})}, whereBisection(){\displaystyle {\text{Bisection()}}}is the classicalbisection algorithm, andTs{\displaystyle T_{s}}is the total iterations ran in the bisection step. Denote the total number of iterations asTd{\displaystyle T_{d}}, then the final output of GDNP is w~Td=γTdwTd||wTd||S{\displaystyle {\tilde {w}}_{T_{d}}=\gamma _{T_{d}}{\frac {w_{T_{d}}}{||w_{T_{d}}||_{S}}}}. The GDNP algorithm thus slightly modifies the batch normalization step for the ease of mathematical analysis. 
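The alternating update just described can be illustrated with a schematic NumPy sketch; sample means stand in for the expectations, a fixed learning rate replaces the theoretical step size, and the bisection interval and all names are illustrative assumptions rather than the construction used in the original analysis.

```python
import numpy as np

def gdnp_sketch(Z, phi_grad, steps=50, bisection_steps=30, lr=0.1, seed=0):
    """Schematic Gradient Descent in Normalized Parameterization (GDNP).
    Z: (n, d) samples of z = -y*x; phi_grad: derivative of the loss phi."""
    rng = np.random.default_rng(seed)
    n, d = Z.shape
    S = Z.T @ Z / n                           # sample estimate of S = E[z z^T]
    w, gamma = rng.normal(size=d), 1.0

    def norm_S(v):                            # induced norm ||v||_S
        return np.sqrt(v @ S @ v)

    def grad_w(w, gamma):                     # gradient of E[phi(gamma z^T w / ||w||_S)] w.r.t. w
        nw = norm_S(w)
        g = phi_grad(gamma * (Z @ w) / nw)    # phi'(z^T w_tilde) per sample
        jac = gamma * (Z / nw - np.outer(Z @ w, S @ w) / nw**3)
        return (g[:, None] * jac).mean(axis=0)

    def dgamma(w, gamma):                     # partial derivative w.r.t. the length gamma
        nw = norm_S(w)
        return (phi_grad(gamma * (Z @ w) / nw) * (Z @ w) / nw).mean()

    for _ in range(steps):
        w = w - lr * grad_w(w, gamma)         # direction update
        lo, hi = -10.0, 10.0                  # length update: bisection on dgamma = 0
        for _ in range(bisection_steps):
            mid = 0.5 * (lo + hi)
            if dgamma(w, lo) * dgamma(w, mid) <= 0:
                hi = mid
            else:
                lo = mid
        gamma = 0.5 * (lo + hi)
    return gamma * w / norm_S(w)              # w_tilde = gamma * w / ||w||_S

# Example with the logistic loss phi(t) = log(1 + exp(t)), phi'(t) = sigmoid(t)
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.normal(size=1000))
Z = -y[:, None] * X
w_tilde = gdnp_sketch(Z, phi_grad=lambda t: 1.0 / (1.0 + np.exp(-t)))
```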
It can be shown that in GDNP, the partial derivative offLH{\displaystyle f_{LH}}against the length component converges to zero at a linear rate, such that (∂γfLH(wt,at(Ts))2≤2−Tsζ|bt(0)−at(0)|μ2{\displaystyle (\partial _{\gamma }f_{LH}(w_{t},a_{t}^{(T_{s})})^{2}\leq {\frac {2^{-T_{s}}\zeta |b_{t}^{(0)}-a_{t}^{(0)}|}{\mu ^{2}}}}, whereat(0){\displaystyle a_{t}^{(0)}}andbt0{\displaystyle b_{t}^{0}}are the two starting points of the bisection algorithm on the left and on the right, correspondingly. Further, for each iteration, the norm of the gradient offLH{\displaystyle f_{LH}}with respect tow{\displaystyle w}converges linearly, such that ||wt||S2||▽fLH(wt,gt)||S−12≤(1−μL)2tΦ2γt2(ρ(w0)−ρ∗){\displaystyle ||w_{t}||_{S}^{2}||\triangledown f_{LH}(w_{t},g_{t})||_{S^{-1}}^{2}\leq {\bigg (}1-{\frac {\mu }{L}}{\bigg )}^{2t}\Phi ^{2}\gamma _{t}^{2}(\rho (w_{0})-\rho ^{*})}. Combining these two inequalities, a bound could thus be obtained for the gradient with respect tow~Td{\displaystyle {\tilde {w}}_{T_{d}}}: ||▽w~f(w~Td)||2≤(1−μL)2TdΦ2(ρ(w0)−ρ∗)+2−Tsζ|bt(0)−at(0)|μ2{\displaystyle ||\triangledown _{\tilde {w}}f({\tilde {w}}_{T_{d}})||^{2}\leq {\bigg (}1-{\frac {\mu }{L}}{\bigg )}^{2T_{d}}\Phi ^{2}(\rho (w_{0})-\rho ^{*})+{\frac {2^{-T_{s}}\zeta |b_{t}^{(0)}-a_{t}^{(0)}|}{\mu ^{2}}}}, such that the algorithm is guaranteed to converge linearly. Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint. Consider amultilayer perceptron(MLP) with one hidden layer andm{\displaystyle m}hidden units with mapping from inputx∈Rd{\displaystyle x\in R^{d}}to a scalar output described as Fx(W~,Θ)=∑i=1mθiϕ(xTw~(i)){\displaystyle F_{x}({\tilde {W}},\Theta )=\sum _{i=1}^{m}\theta _{i}\phi (x^{T}{\tilde {w}}^{(i)})}, wherew~(i){\displaystyle {\tilde {w}}^{(i)}}andθi{\displaystyle \theta _{i}}are the input and output weights of uniti{\displaystyle i}correspondingly, andϕ{\displaystyle \phi }is the activation function and is assumed to be atanh function. The input and output weights could then be optimized with minW~,Θ(fNN(W~,Θ)=Ey,x[l(−yFx(W~,Θ))]){\displaystyle min_{{\tilde {W}},\Theta }(f_{NN}({\tilde {W}},\Theta )=E_{y,x}[l(-yF_{x}({\tilde {W}},\Theta ))])}, wherel{\displaystyle l}is a loss function,W~={w~(1),...,w~(m)}{\displaystyle {\tilde {W}}=\{{\tilde {w}}^{(1)},...,{\tilde {w}}^{(m)}\}}, andΘ={θ(1),...,θ(m)}{\displaystyle \Theta =\{\theta ^{(1)},...,\theta ^{(m)}\}}. Consider fixedΘ{\displaystyle \Theta }and optimizing onlyW~{\displaystyle {\tilde {W}}}, it can be shown that the critical points offNN(W~){\displaystyle f_{NN}({\tilde {W}})}of a particular hidden uniti{\displaystyle i},w^(i){\displaystyle {\hat {w}}^{(i)}}, all align along one line depending on incoming information into the hidden layer, such that w^(i)=c^(i)S−1u{\displaystyle {\hat {w}}^{(i)}={\hat {c}}^{(i)}S^{-1}u}, wherec^(i)∈R{\displaystyle {\hat {c}}^{(i)}\in R}is a scalar,i=1,...,m{\displaystyle i=1,...,m}. This result could be proved by setting the gradient offNN{\displaystyle f_{NN}}to zero and solving the system of equations. Apply the GDNP algorithm to this optimization problem by alternating optimization over the different hidden units. Specifically, for each hidden unit, run GDNP to find the optimalW{\displaystyle W}andγ{\displaystyle \gamma }. 
With the same choice of stopping criterion and stepsize, it follows that ||▽w~(i)f(w~t(i))||S−12≤(1−μL)2tC(ρ(w0)−ρ∗)+2−Ts(i)ζ|bt(0)−at(0)|μ2{\displaystyle ||\triangledown _{{\tilde {w}}^{(i)}}f({\tilde {w}}_{t}^{(i)})||_{S^{-1}}^{2}\leq {\bigg (}1-{\frac {\mu }{L}}{\bigg )}^{2t}C(\rho (w_{0})-\rho ^{*})+{\frac {2^{-T_{s}^{(i)}}\zeta |b_{t}^{(0)}-a_{t}^{(0)}|}{\mu ^{2}}}}. Since the parameters of each hidden unit converge linearly, the whole optimization problem has a linear rate of convergence.[8]
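For concreteness, the one-hidden-layer tanh network F_x(W̃, Θ) analyzed in this last part can be written in a few lines of NumPy; the shapes and values below are illustrative only.

```python
import numpy as np

def mlp_forward(x, W_tilde, theta):
    """F_x(W~, Theta) = sum_i theta_i * tanh(x^T w~(i)).
    W_tilde: (m, d) matrix whose rows are the input weights w~(i); theta: (m,) output weights."""
    return theta @ np.tanh(W_tilde @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # input, d = 4
W_tilde = rng.normal(size=(3, 4))      # m = 3 hidden units
theta = rng.normal(size=3)
output = mlp_forward(x, W_tilde, theta)
```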
https://en.wikipedia.org/wiki/Batch_normalization
Dropoutanddilution(also calledDropConnect[1]) areregularizationtechniques for reducingoverfittinginartificial neural networksby preventing complex co-adaptations ontraining data. They are an efficient way of performing model averaging with neural networks.[2]Dilutionrefers to randomly decreasing weights towards zero,[3]whiledropoutrefers to randomly setting the outputs of hidden neurons to zero. Both are usually performed during the training process of a neural network, not during inference.[4][5][2] Dilution is usually split intoweak dilutionandstrong dilution. Weak dilution describes the process in which the finite fraction of removed connections is small, and strong dilution refers to when this fraction is large. There is no sharp threshold between strong and weak dilution; the distinction usually follows the precedent of a specific use case and has implications for how exact solutions are obtained. Sometimes dilution is used for adding damping noise to the inputs. In that case, weak dilution refers to adding a small amount of damping noise, while strong dilution refers to adding a greater amount of damping noise. Both can be rewritten as variants of weight dilution. These techniques are also sometimes referred to as random pruning of weights, but this is usually a non-recurring one-way operation. The network is pruned, and then kept if it is an improvement over the previous model. Dilution and dropout both refer to an iterative process. The pruning of weights typically does not imply that the network continues learning, while in dilution/dropout, the network continues to learn after the technique is applied. Output from a layer of linear nodes in an artificial neural net can be described asyi=∑jwijxj{\displaystyle y_{i}=\sum _{j}w_{ij}x_{j}}(1), whereyi{\displaystyle y_{i}}is the output of nodei{\displaystyle i},wij{\displaystyle w_{ij}}is the weight from inputj{\displaystyle j}, andxj{\displaystyle x_{j}}is that input. This can be written in vector notation asy=Wx{\displaystyle \mathbf {y} =\mathbf {W} \mathbf {x} }(2). Equations (1) and (2) are used in the subsequent sections. During weak dilution, the finite fraction of removed connections (the weights) is small, giving rise to a tiny uncertainty. This edge-case can be solved exactly withmean field theory. In weak dilution the impact on the weights can be described as follows: each weight is kept,wij→wij{\displaystyle w_{ij}\to w_{ij}}, with probabilityP(c){\displaystyle P(c)}, and set to zero,wij→0{\displaystyle w_{ij}\to 0}, with probability1−P(c){\displaystyle 1-P(c)}(3). The interpretation of probabilityP(c){\displaystyle P(c)}can also be changed from keeping a weight into pruning a weight. In vector notation, the diluted weight matrix can be written asg⁡(W){\displaystyle \operatorname {g} (\mathbf {W} )}, where the functiong⁡(⋅){\displaystyle \operatorname {g} (\cdot )}imposes the previous dilution. In weak dilution only a small and fixed fraction of the weights are diluted. When the number of terms in the sum (the weights into each node) goes to infinity, the number of retained terms is still infinite (since the diluted fraction is fixed), thusmean field theorycan be applied, as in the notation of Hertz et al.[3]There are some assumptions for this to hold, which are not listed here.[6][7] When the dilution is strong, the finite fraction of removed connections (the weights) is large, giving rise to a huge uncertainty. Dropout is a special case of the previous weight equation (3), where the aforementioned equation is adjusted to remove a whole row in the weight matrix, and not only individual random weights. Because dropout removes a whole row from the weight matrix, the previous (unlisted) assumptions for weak dilution and the use of mean field theory are not applicable. The process by which the node is driven to zero, whether by setting the weights to zero, by “removing the node”, or by some other means, does not impact the end result and does not create a new and unique case.
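The two mechanisms, zeroing individual weights (dilution / DropConnect) versus zeroing whole output units (dropout), can be contrasted in a minimal NumPy sketch; the layer size and keep probability are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))            # weights of a linear layer, y = W x (equation (2))
x = rng.normal(size=6)
p_keep = 0.5

# Dilution / DropConnect: each individual weight is kept with probability p_keep
weight_mask = rng.random(W.shape) < p_keep
y_diluted = (W * weight_mask) @ x

# Dropout: whole output units are kept with probability p_keep,
# which is equivalent to zeroing entire rows of W
unit_mask = rng.random(W.shape[0]) < p_keep
y_dropout = (W @ x) * unit_mask
```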
If the neural net is processed by a high-performance digital array multiplier, then it is likely more effective to drive the value to zero late in the process graph. If the net is processed by a constrained processor, perhaps even an analog neuromorphic processor, then it is likely more power-efficient to drive the value to zero early in the process graph. Although there have been examples of randomly removing connections betweenneuronsin a neural network to improve models,[3]this technique was first introduced with the namedropoutbyGeoffrey Hinton, et al. in 2012.[2]Googlecurrently holds the patent for the dropout technique.[8][note 1]
https://en.wikipedia.org/wiki/Dropout_%28neural_networks%29
Aconvolutional neural network(CNN) is a type offeedforward neural networkthat learnsfeaturesviafilter(or kernel) optimization. This type ofdeep learningnetwork has been applied to process and makepredictionsfrom many different types of data including text, images and audio.[1]Convolution-based networks are the de-facto standard indeep learning-based approaches tocomputer vision[2]and image processing, and have only recently been replaced—in some cases—by newer deep learning architectures such as thetransformer. Vanishing gradientsand exploding gradients, seen duringbackpropagationin earlier neural networks, are prevented by theregularizationthat comes from using shared weights over fewer connections.[3][4]For example, foreachneuron in the fully-connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascadedconvolution(or cross-correlation) kernels,[5][6]only 25 weights for each convolutional layer are required to process 5x5-sized tiles.[7][8]Higher-layer features are extracted from wider context windows, compared to lower-layer features. Some applications of CNNs include: CNNs are also known asshift invariantorspace invariant artificial neural networks, based on the shared-weight architecture of theconvolutionkernels or filters that slide along input features and provide translation-equivariantresponses known as feature maps.[14][15]Counter-intuitively, most convolutional neural networks are notinvariant to translation, due to the downsampling operation they apply to the input.[16] Feedforward neural networksare usually fully connected networks, that is, each neuron in onelayeris connected to all neurons in the nextlayer. The "full connectivity" of these networks makes them prone tooverfittingdata. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.[17] Convolutional networks wereinspiredbybiologicalprocesses[18][19][20][21]in that the connectivity pattern betweenneuronsresembles the organization of the animalvisual cortex. Individualcortical neuronsrespond to stimuli only in a restricted region of thevisual fieldknown as thereceptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. CNNs use relatively little pre-processing compared to otherimage classification algorithms. This means that the network learns to optimize thefilters(or kernels) through automated learning, whereas in traditional algorithms these filters arehand-engineered. This simplifies and automates the process, enhancing efficiency and scalability overcoming human-intervention bottlenecks. A convolutional neural network consists of an input layer,hidden layersand an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs adot productof the convolution kernel with the layer's input matrix. This product is usually theFrobenius inner product, and its activation function is commonlyReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. 
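The weight counts quoted above follow directly from the layer geometry; a two-line illustration for a single-channel 100 × 100 image and one 5 × 5 kernel (sizes taken from the example above):

```python
# Weights needed to connect one output unit to a 100 x 100 single-channel image
fully_connected_weights = 100 * 100   # 10,000: one weight per input pixel
convolutional_weights = 5 * 5         # 25: one small kernel, reused at every position
```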
This is followed by other layers such aspooling layers, fully connected layers, and normalization layers. Here it should be noted how close a convolutional neural network is to amatched filter.[22] In a CNN, the input is atensorwith shape: (number of inputs) × (input height) × (input width) × (inputchannels) After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape: (number of inputs) × (feature map height) × (feature map width) × (feature mapchannels). Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus.[23]Each convolutional neuron processes data only for itsreceptive field. Althoughfully connected feedforward neural networkscan be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights foreachneuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper.[7]For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 neurons. Using shared weights means there are many fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen duringbackpropagationin earlier neural networks.[3][4] To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers,[24]which are based on a depthwise convolution followed by a pointwise convolution. Thedepthwise convolutionis a spatial convolution applied independently over each channel of the input tensor, while thepointwise convolutionis a standard convolution restricted to the use of1×1{\displaystyle 1\times 1}kernels. Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters, tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map.[25][26]There are two common types of pooling in popular use: max and average.Max poolinguses the maximum value of each local cluster of neurons in the feature map,[27][28]whileaverage poolingtakes the average value. Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditionalmultilayer perceptronneural network (MLP). The flattened matrix goes through a fully connected layer to classify the images. In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron'sreceptive field. Typically the area is a square (e.g. 5 by 5 neurons). Whereas, in a fully connected layer, the receptive field is theentire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. 
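How far back into the input a single unit "sees" after stacking convolution and pooling layers can be computed with a standard recurrence; a small sketch (kernel sizes and strides are illustrative, and dilation is not modeled here):

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output unit after a stack of layers.
    `layers` is a list of (kernel_size, stride) pairs, ordered from the input side."""
    r, jump = 1, 1                # start from a single input pixel
    for k, s in layers:
        r += (k - 1) * jump       # each layer widens the field by (k - 1) input strides
        jump *= s
    return r

receptive_field([(5, 1), (5, 1)])          # 9: two stacked 5x5 convolutions
receptive_field([(5, 1), (2, 2), (5, 1)])  # 14: 5x5 conv, 2x2 pool (stride 2), 5x5 conv
```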
When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers. To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution[29][30]expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios,[31]thus having a variable receptive field size. Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights. The vectors of weights and biases are calledfiltersand represent particularfeaturesof the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces thememory footprintbecause a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting.[32] A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers.[33] A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix.[34] An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is[x]↦[xxxx]{\displaystyle [x]\mapsto {\begin{bmatrix}x&x\\x&x\end{bmatrix}}}. Deconvolution layers are used in image generators. By default, it creates periodic checkerboard artifact, which can be fixed by upscale-then-convolve.[35] CNN are often compared to the way the brain achieves vision processing in livingorganisms.[36] Work byHubelandWieselin the 1950s and 1960s showed that catvisual corticescontain neurons that individually respond to small regions of thevisual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as itsreceptive field.[37]Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space.[citation needed]The cortex in each hemisphere represents the contralateralvisual field.[citation needed] Their 1968 paper identified two basic visual cell types in the brain:[19] Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.[38][37] In 1969,Kunihiko Fukushimaintroduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. 
In the same paper, Fukushima also introduced theReLU(rectified linear unit)activation function.[39][40] The "neocognitron"[18]was introduced by Fukushima in 1980.[20][28][41]The neocognitron introduced the two basic types of layers: Severalsupervisedandunsupervised learningalgorithms have been proposed over the decades to train the weights of a neocognitron.[18]Today, however, the CNN architecture is usually trained throughbackpropagation. Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs anddeep neural networksin general.[42] The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the firstConference on Neural Information Processing Systemsin 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to thesignal-processing concept of a filter, and demonstrated it on a speech recognition task.[8]They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t).").[8]Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here. Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelet al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance.[43]A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with a training by gradient descent, usingbackpropagation.[44]Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one.[43] TDNNs are convolutional networks that share weights along the temporal dimension.[45]They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution.[46]Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron. TDNNs improved the performance of far-distance speech recognition.[47] Denker et al. (1989) designed a 2-D CNN system to recognize hand-writtenZIP Codenumbers.[48]However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed.[49] Following the advances in the training of 1-D CNNs by Waibel et al. (1987),Yann LeCunet al. (1989)[49]used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types. Wei Zhang et al. (1988)[14][15]used back-propagation to train the convolution kernels of a CNN for alphabets recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. 
also applied the same CNN without the last fully connected layer for medical image object segmentation (1991)[50]and breast cancer detection in mammograms (1994).[51] This approach became a foundation of moderncomputer vision. In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system.[27]In their system they used several TDNNs per word, one for eachsyllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification. In a variant of the neocognitron called thecresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch,[52]introducing this method into the vision field. Max pooling is often used in modern CNNs.[53] LeNet-5, a pioneering 7-level convolutional network byLeCunet al. in 1995,[54]classifies hand-written numbers on checks (British English:cheques) digitized in 32x32 pixel images. The ability to process higher-resolution images requires larger and more layers of convolutional neural networks, so this technique is constrained by the availability of computing resources. It was superior than other commercial courtesy amount reading systems (as of 1995). The system was integrated inNCR's check reading systems, and fielded in several American banks since June 1996, reading millions of checks per day.[55] A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988.[14][15]It is a modified Neocognitron by keeping only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991[56]to improve its generalization ability. The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991)[50]and automatic detection of breast cancer inmammograms (1994).[51] A different convolution-based design was proposed in 1988[57]for application to decomposition of one-dimensionalelectromyographyconvolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs.[58][59] Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations ongraphics processing units(GPUs). In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation onCPU.[60]In 2005, another paper also emphasised the value ofGPGPUformachine learning.[61] The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU.[62]In the same period, GPUs were also used for unsupervised training ofdeep belief networks.[63][64][65][66] In 2010, Dan Ciresan et al. 
atIDSIAtrained deep feedforward networks on GPUs.[67]In 2011, they extended this to CNNs, achieving an acceleration factor of 60 compared to CPU training.[25]In 2011, the network won an image recognition contest, achieving superhuman performance for the first time.[68]Then they won more competitions and achieved state of the art on several benchmarks.[69][53][28] Subsequently,AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al., won theImageNet Large Scale Visual Recognition Challenge2012.[70]It was an early catalytic event for theAI boom. Compared to the training of CNNs onGPUs, relatively little attention has been given to CPUs. Viebke et al. (2019) parallelized CNNs using the thread- andSIMD-level parallelism that is available on theIntel Xeon Phi.[71][72] In the past, traditionalmultilayer perceptron(MLP) models were used for image recognition.[example needed]However, the full connectivity between nodes caused thecurse of dimensionality, and was computationally intractable with higher-resolution images. A 1000×1000-pixel image withRGB colorchannels has 3 million weights per fully-connected neuron, which is too high to feasibly process efficiently at scale. For example, inCIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights. Also, such a network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignoreslocality of referencein data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated byspatially localinput patterns. Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of avisual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have several distinguishing features, local connectivity, shared weights, and pooling among them, each discussed below. Together, these properties allow CNNs to achieve better generalization onvision problems. Weight sharing dramatically reduces the number offree parameterslearned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks. A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below. The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnablefilters(orkernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter isconvolvedacross the width and height of the input volume, computing thedot productbetween the filter entries and the input, producing a 2-dimensionalactivation mapof that filter. As a result, the network learns filters that activate when it detects some specific type offeatureat some spatial position in the input.[75][nb 1] Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer.
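A minimal NumPy sketch of this forward pass for a single-channel input, with a handful of filters, stride 1 and no padding; it illustrates the operation itself, not any particular library's implementation (cross-correlation is used, as is conventional).

```python
import numpy as np

def conv_layer(image, kernels):
    """Valid 2-D cross-correlation of an (H, W) image with (F, k, k) kernels,
    producing an (F, H-k+1, W-k+1) output volume of activation maps."""
    H, W = image.shape
    F, k, _ = kernels.shape
    out = np.empty((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(image[i:i + k, j:j + k] * kernels[f])
    return np.maximum(out, 0)              # ReLU activation

rng = np.random.default_rng(0)
image = rng.normal(size=(100, 100))
kernels = rng.normal(size=(8, 5, 5))       # 8 filters, 25 shared weights each
volume = conv_layer(image, kernels)        # shape (8, 96, 96)
```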
Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter. Self-supervised learninghas been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer.[citation needed] When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing asparse local connectivitypattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is ahyperparametercalled thereceptive fieldof the neuron. The connections arelocal in space(along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern.[76] Threehyperparameterscontrol the size of the output volume of the convolutional layer: the depth,stride, and padding size. The spatial size of the output volume is a function of the input volume sizeW{\displaystyle W}, the kernel field sizeK{\displaystyle K}of the convolutional layer neurons, the strideS{\displaystyle S}, and the amount of zero paddingP{\displaystyle P}on the border. The number of neurons that "fit" in a given volume is then(W−K+2P)/S+1{\displaystyle (W-K+2P)/S+1}. If this number is not aninteger, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting zero padding to beP=(K−1)/2{\textstyle P=(K-1)/2}when the stride isS=1{\displaystyle S=1}ensures that the input volume and output volume will have the same size spatially. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding. A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as adepth slice, the neurons in each depth slice are constrained to use the same weights and bias. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as aconvolutionof the neuron's weights with the input volume.[nb 2]Therefore, it is common to refer to the sets of weights as a filter (or akernel), which is convolved with the input. The result of this convolution is anactivation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to thetranslation invarianceof the CNN architecture.[16] Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure, for which we expect completely different features to be learned on different spatial locations.
One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer". Another important concept of CNNs is pooling, which is used as a form of non-lineardown-sampling. Pooling provides downsampling because it reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. There are several non-linear functions to implement pooling, wheremax poolingandaverage poolingare the most common. Pooling aggregates information from small regions of the input creatingpartitionsof the input feature map, typically using a fixed-size window (like 2x2) and applying a stride (often 2) to move the window across the input.[78]Note that without using a stride greater than 1, pooling would not perform downsampling, as it would simply move the pooling window across the input one step at a time, without reducing the size of the feature map. In other words, the stride is what actually causes the downsampling by determining how much the pooling window moves over the input. Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters,memory footprintand amount of computation in the network, and hence to also controloverfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as aReLU layer) in a CNN architecture.[75]: 460–461While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used.[16][74]The pooling layer commonly operates independently on every depth, or slice, of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:fX,Y(S)=maxa,b=01S2X+a,2Y+b.{\displaystyle f_{X,Y}(S)=\max _{a,b=0}^{1}S_{2X+a,2Y+b}.}In this case, everymax operationis over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well). In addition to max pooling, pooling units can use other functions, such asaveragepooling orℓ2-normpooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice.[79] Due to the effects of fast spatial reduction of the size of the representation,[which?]there is a recent trend towards using smaller filters[80]or discarding pooling layers altogether.[81] A channel max pooling (CMP) operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps for the purpose of redundant information elimination. The CMP makes the significant features gather together within fewer channels, which is important for fine-grained image classification that needs more discriminating features. 
Meanwhile, another advantage of the CMP operation is to make the channel number of feature maps smaller before it connects to the first fully connected (FC) layer. Similar to the MP operation, we denote the input feature maps and output feature maps of a CMP layer as F ∈ R(C×M×N) and C ∈ R(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, M and N are the widths and the height of the feature maps, respectively. Note that the CMP operation only changes the channel number of the feature maps. The width and the height of the feature maps are not changed, which is different from the MP operation.[82] See[83][84]for reviews for pooling methods. ReLU is the abbreviation ofrectified linear unit. It was proposed byAlston Householderin 1941,[85]and used in CNN byKunihiko Fukushimain 1969.[39]ReLU applies the non-saturatingactivation functionf(x)=max(0,x){\textstyle f(x)=\max(0,x)}.[70]It effectively removes negative values from an activation map by setting them to zero.[86]It introducesnonlinearityto thedecision functionand in the overall network without affecting the receptive fields of the convolution layers. In 2011, Xavier Glorot, Antoine Bordes andYoshua Bengiofound that ReLU enables better training of deeper networks,[87]compared to widely used activation functions prior to 2011. Other functions can also be used to increase nonlinearity, for example the saturatinghyperbolic tangentf(x)=tanh⁡(x){\displaystyle f(x)=\tanh(x)},f(x)=|tanh⁡(x)|{\displaystyle f(x)=|\tanh(x)|}, and thesigmoid functionσ(x)=(1+e−x)−1{\textstyle \sigma (x)=(1+e^{-x})^{-1}}. ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty togeneralizationaccuracy.[88] After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional)artificial neural networks. Their activations can thus be computed as anaffine transformation, withmatrix multiplicationfollowed by a bias offset (vector additionof a learned or fixed bias term). The "loss layer", or "loss function", exemplifies howtrainingpenalizes the deviation between the predicted output of the network, and thetruedata labels (during supervised learning). Variousloss functionscan be used, depending on the specific task. TheSoftmaxloss function is used for predicting a single class ofKmutually exclusive classes.[nb 3]Sigmoidcross-entropyloss is used for predictingKindependent probability values in[0,1]{\displaystyle [0,1]}.Euclideanloss is used forregressingtoreal-valuedlabels(−∞,∞){\displaystyle (-\infty ,\infty )}. Hyperparameters are various settings that are used to control the learning process. CNNs use morehyperparametersthan a standard multilayer perceptron (MLP). Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is 1 pixel on each side of the image.[citation needed] The stride is the number of pixels that the analysis window moves on each iteration. 
A stride of 2 means that each kernel is offset by 2 pixels from its predecessor. Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of feature valuesvawith pixel position is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next. The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity. Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples,AlexNetused 3x3, 5x5, and 11x11.Inceptionv3used 1x1, 3x3, and 5x5. The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and withoutoverfitting. Max poolingis typically used, often with a 2x2 dimension. This implies that the input is drasticallydownsampled, reducing processing cost. Greater poolingreduces the dimensionof the signal, and may result in unacceptableinformation loss. Often, non-overlapping pooling windows perform best.[79] Dilation involves ignoring pixels within a kernel. This reduces processing memory potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, dilation of 4 expands the kernel to 7x7.[citation needed] It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeedequivariantto translations of the input.[74]However, layers with a stride greater than one ignore theNyquist–Shannon sampling theoremand might lead toaliasingof the input signal[74]While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice,[89]and therefore yield models that are not equivariant to translations. Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input.[90][16]One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer.[74]Additionally, several other partial solutions have been proposed, such asanti-aliasingbefore downsampling operations,[91]spatial transformer networks,[92]data augmentation, subsampling combined with pooling,[16]andcapsule neural networks.[93] The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such ask-fold cross-validationare applied. Other strategies include usingconformal prediction.[94][95] Regularizationis a process of introducing additional information to solve anill-posed problemor to preventoverfitting. CNNs use various types of regularization. Because networks have so many parameters, they are prone to overfitting. 
One method to reduce overfitting isdropout, introduced in 2014.[96]At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability1−p{\displaystyle 1-p}or kept with probabilityp{\displaystyle p}, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights. In the training stages,p{\displaystyle p}is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored. At testing time after training has finished, we would ideally like to find a sample average of all possible2n{\displaystyle 2^{n}}dropped-out networks; unfortunately this is infeasible for large values ofn{\displaystyle n}. However, we can find an approximation by using the full network with each node's output weighted by a factor ofp{\displaystyle p}, so theexpected valueof the output of any node is the same as in the training stages (a minimal sketch of this test-time rescaling is given below). This is the biggest contribution of the dropout method: although it effectively generates2n{\displaystyle 2^{n}}neural nets, and as such allows for model combination, at test time only a single network needs to be tested. By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even fordeep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features[clarification needed]that better generalize to new data. DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability1−p{\displaystyle 1-p}. Each unit thus receives input from a random subset of units in the previous layer.[97] DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage. A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected. Even before dropout, in 2013, a technique called stochastic pooling[98]was introduced, in which the conventionaldeterministicpooling operations are replaced with a stochastic procedure, where the activation within each pooling region is picked randomly according to amultinomial distributiongiven by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout anddata augmentation. An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small localdeformations. This is similar to explicitelastic deformationsof the input images,[99]which deliver excellent performance on theMNIST data set.[99]Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below. Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting.
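A minimal sketch of the training-time masking and the test-time rescaling by p described above, for a single linear layer with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))      # weights of one layer, output = W @ x
x = rng.normal(size=5)
p = 0.5                          # keep probability

# Training: a random reduced network is used for each presentation
keep = rng.random(W.shape[0]) < p
train_out = (W @ x) * keep       # dropped-out units output zero

# Testing: the full network is used, with each unit's output scaled by p,
# so its expected value matches the training-time behaviour
test_out = (W @ x) * p
```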
Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new ones. The latter one is used since mid-1990s.[54]For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set.[100] One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted. Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm". A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant('alpha' hyperparameter), thus increasing the penalty for large weight vectors. L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot. L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 with L2 regularization can be combined; this is calledelastic net regularization. Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and useprojected gradient descentto enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vectorw→{\displaystyle {\vec {w}}}of every neuron to satisfy‖w→‖2<c{\displaystyle \|{\vec {w}}\|_{2}<c}. Typical values ofc{\displaystyle c}are order of 3–4. Some papers report improvements[101]when using this form of regularization. Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools, helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. 
On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint.[102] An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to theretina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.[103] Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level (e.g. nose and mouth) agree on its prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations modeled as linear operations that make it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the humanvisual systemimposes coordinate frames in order to represent shapes.[104] CNNs are often used inimage recognitionsystems. In 2012, anerror rateof 0.23% on theMNIST databasewas reported.[28]Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database.[25]Subsequently, a similar CNN calledAlexNet[105]won theImageNet Large Scale Visual Recognition Challenge2012. When applied tofacial recognition, CNNs achieved a large decrease in error rate.[106]Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects".[21]CNNs were used to assessvideo qualityin an objective way after manual training; the resulting system had a very lowroot mean square error.[107] TheImageNet Large Scale Visual Recognition Challengeis a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014,[108]a large-scale visual recognition challenge, almost every highly ranked team used CNN as their basic framework. The winnerGoogLeNet[109](the foundation ofDeepDream) increased the mean averageprecisionof object detection to 0.439329, and reduced classification error to 0.06656, the best result to date. Its network applied more than 30 layers. That performance of convolutional neural networks on the ImageNet tests was close to that of humans.[110]The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. 
For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this.[citation needed] In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.[111] Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space.[112][113]Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.[114][115][116]Long short-term memory(LSTM)recurrentunits are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies.[117][118]Unsupervised learningschemes for training spatio-temporal features have been introduced, based on Convolutional Gated RestrictedBoltzmann Machines[119]and Independent Subspace Analysis.[120]Its application can be seen intext-to-video model.[citation needed] CNNs have also been explored fornatural language processing. CNN models are effective for various NLP problems and achieved excellent results insemantic parsing,[121]search query retrieval,[122]sentence modeling,[123]classification,[124]prediction[125]and other traditional NLP tasks.[126]Compared to traditional language processing methods such asrecurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suitable when classical time series modeling is required.[127][128][129][130] A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain.[131] CNNs have been used indrug discovery. Predicting the interaction between molecules and biologicalproteinscan identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network forstructure-based drug design.[132]The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures,[133]AtomNet discovers chemical features, such asaromaticity,sp3carbons, andhydrogen bonding. Subsequently, AtomNet was used to predict novel candidatebiomoleculesfor multiple disease targets, most notably treatments for theEbola virus[134]andmultiple sclerosis.[135] CNNs have been used in the game ofcheckers. From 1999 to 2001,Fogeland Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. 
Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%.[136][137]It also earned a win against the programChinookat its "expert" level of play.[138] CNNs have been used incomputer Go. In December 2014, Clark andStorkeypublished a paper showing that a CNN trained by supervised learning from a database of human professional games could outperformGNU Goand win some games againstMonte Carlo tree searchFuego 1.1 in a fraction of the time it took Fuego to play.[139]Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a6 danhuman player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of theMonte Carlo tree searchprogram Fuego simulating ten thousand playouts (about a million positions) per move.[140] A couple of CNNs for choosing moves to try ("policy network") and evaluating positions ("value network") driving MCTS were used byAlphaGo, the first to beat the best human player at the time.[141] Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better.[142][13]Dilated convolutions[143]might enable one-dimensional convolutional neural networks to effectively learn time series dependences.[144]Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients.[145]Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from.[146]CNNs can also be applied to further tasks in time series analysis (e.g., time series classification[147]or quantile forecasting[148]). As archaeological findings such asclay tabletswithcuneiform writingare increasingly acquired using3D scanners, benchmark datasets are becoming available, includingHeiCuBeDa[149]providing almost 2000 normalized 2-D and 3-D datasets prepared with theGigaMesh Software Framework.[150]Socurvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets being among the oldest documents of human history.[151][152] For many applications, training data is not very available. Convolutional neural networks usually require a large amount of training data in order to avoidoverfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged an additional training step is performed using the in-domain data to fine-tune the network weights, this is known astransfer learning. Furthermore, this technique allows convolutional network architectures to successfully be applied to problems with tiny training sets.[153] End-to-end training and prediction are common practice incomputer vision. 
However, human-interpretable explanations are required for critical systems such as self-driving cars.[154] With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions/temporal instants could be visualized to justify the CNN predictions.[155][156] A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.[157] Preliminary results were presented in 2014, with an accompanying paper in February 2015.[158] The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it.[159] Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR[160] have been obtained using CDBNs.[161] The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid[162] by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks.
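The Q-learning update at the heart of a DQN can be illustrated in its simplest tabular form. The following is a minimal sketch with invented toy dimensions; a real DQN replaces the table with a convolutional network over raw pixels and adds components such as experience replay and a target network.

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))      # tabular stand-in for the deep Q-network
alpha, gamma = 0.1, 0.99                 # learning rate and discount factor

def q_update(state, action, reward, next_state):
    # Temporal-difference target: immediate reward plus discounted best future value.
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

# One illustrative transition: from state 0, action 1 yields reward 1.0 and lands in state 3.
q_update(0, 1, 1.0, 3)
print(Q[0])                              # [0.  0.1]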
https://en.wikipedia.org/wiki/Convolutional_neural_network#Stride
Hyperparameter may refer to:
https://en.wikipedia.org/wiki/Hyperparameter#Validation_set
Inmachine learning, a common task is the study and construction ofalgorithmsthat can learn from and make predictions ondata.[1]Such algorithms function by making data-driven predictions or decisions,[2]through building amathematical modelfrom input data. These input data used to build the model are usually divided into multipledata sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets. The model is initially fit on atraining data set,[3]which is a set of examples used to fit the parameters (e.g. weights of connections between neurons inartificial neural networks) of the model.[4]The model (e.g. anaive Bayes classifier) is trained on the training data set using asupervised learningmethod, for example using optimization methods such asgradient descentorstochastic gradient descent. In practice, the training data set often consists of pairs of an inputvector(or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as thetarget(orlabel). The current model is run with the training data set and produces a result, which is then compared with thetarget, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include bothvariable selectionand parameterestimation. Successively, the fitted model is used to predict the responses for the observations in a second data set called thevalidation data set.[3]The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model'shyperparameters[5](e.g. the number of hidden units—layers and layer widths—in a neural network[4]). Validation data sets can be used forregularizationbyearly stopping(stopping training when the error on the validation data set increases, as this is a sign ofover-fittingto the training data set).[6]This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun.[6] Finally, thetest data setis a data set used to provide an unbiased evaluation of afinalmodel fit on the training data set.[5]If the data in the test data set has never been used in training (for example incross-validation), the test data set is also called aholdout data set. 
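A minimal NumPy sketch of the three-way split just described; the 60/20/20 proportions and the synthetic data are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 3))                      # 100 examples, 3 features
y = rng.integers(0, 2, size=100)                       # binary labels

idx = rng.permutation(len(X))                          # shuffle before splitting
train_idx, val_idx, test_idx = np.split(idx, [60, 80]) # 60% train, 20% validation, 20% test

X_train, y_train = X[train_idx], y[train_idx]          # used to fit model parameters
X_val, y_val = X[val_idx], y[val_idx]                  # used to tune hyperparameters
X_test, y_test = X[test_idx], y[test_idx]              # used once, for the final evaluation
print(len(X_train), len(X_val), len(X_test))           # 60 20 20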
The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set).[5] Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.[7] A training data set is adata setof examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, aclassifier.[9][10] For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a goodpredictive model.[11]The goal is to produce a trained (fitted) model that generalizes well to new, unknown data.[12]The fitted model is evaluated using “new” examples from the held-out data sets (validation and test data sets) to estimate the model’s accuracy in classifying new data.[5]To reduce the risk of issues such as over-fitting, the examples in the validation and test data sets should not be used to train the model.[5] Most approaches that search through training data for empirical relationships tend tooverfitthe data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general. When a training set is continuously expanded with new data, then this isincremental learning. A validation data set is adata setof examples used to tune thehyperparameters(i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set".[13]An example of a hyperparameter forartificial neural networksincludes the number of hidden units in each layer.[9][10]It, as well as the testing set (as mentioned below), should follow the same probability distribution as the training data set. In order to avoid overfitting, when anyclassificationparameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test data sets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such asaccuracy,sensitivity,specificity,F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing. The basic process of using a validation data set formodel selection(as part of training data set, validation data set, and test data set) is:[10][14] Since our goal is to find the network having the best performance on new data, the simplest approach to the comparison of different networks is to evaluate the error function using data which is independent of that used for training. Various networks are trained by minimization of an appropriate error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function using an independent validation set, and the network having the smallest error with respect to the validation set is selected. This approach is called thehold outmethod. 
Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set. An application of this process is inearly stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows, choosing the previous model (the one with minimum error). A test data set is adata setthat isindependentof the training data set, but that follows the sameprobability distributionas the training data set. If a model fit to the training data set also fits the test data set well, minimaloverfittinghas taken place (see figure below). A better fitting of the training data set as opposed to the test data set usually points to over-fitting. A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier.[9][10]To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy.[11] In a scenario where both validation and test data sets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test data sets), the test data set might assess the model only once (e.g., in theholdout method).[15]Note that some sources advise against such a method.[12]However, when using a method such ascross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.[5][12] Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment" according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render valid" Collaborative International Dictionary of English). With this perspective, the most common use of the termstest setandvalidation setis the one here described. However, in both industry and academia, they are sometimes used interchanged, by considering that the internal process is testing different models to improve (test set as a development set) and the final model is the one that needs to be validated before real use with an unseen data (validation set). "The literature on machine learning often reverses the meaning of 'validation' and 'test' sets. This is the most blatant example of the terminological confusion that pervades artificial intelligence research."[16]Nevertheless, the important concept that must be kept is that the final set, whether called test or validation, should only be used in the final experiment. In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into several training and a validation data sets. This is known ascross-validation. To confirm the model's performance, an additional test data set held out from cross-validation is normally used. It is possible to use cross-validation on training and validation sets, andwithineach training set have further cross-validation for a test set for hyperparameter tuning. This is known asnested cross-validation. 
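The repeated splitting used in cross-validation can be sketched with plain NumPy; k = 5 and the sample count below are arbitrary choices for illustration.

import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs so that each example is validated exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

for train_idx, val_idx in k_fold_indices(23, k=5):
    print(len(train_idx), len(val_idx))                # fold sizes differ by at most one

An inner loop of the same construction inside each training split gives the nested cross-validation mentioned above.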
Omissions in the training of algorithms are a major cause of erroneous outputs.[17]Types of such omissions include:[17] An example of an omission of particular circumstances is a case where a boy was able to unlock the phone because his mother registered her face under indoor, nighttime lighting, a condition which was not appropriately included in the training of the system.[17][18] Usage of relatively irrelevant input can include situations where algorithms use the background rather than the object of interest forobject detection, such as being trained by pictures of sheep on grasslands, leading to a risk that a different object will be interpreted as a sheep if located on a grassland.[17]
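Returning to the hold-out procedure described earlier, here is a small worked example in NumPy in which candidate models of different complexity (polynomial degrees, standing in for different networks) are fit on the training portion and compared on the validation portion; the data and the candidate degrees are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(60)

idx = rng.permutation(60)
train_idx, val_idx = idx[:40], idx[40:]                # simple hold-out split

candidates = [1, 3, 5, 9]                              # candidate model complexities
val_errors = []
for degree in candidates:
    coeffs = np.polyfit(x[train_idx], y[train_idx], degree)   # "train" the candidate
    pred = np.polyval(coeffs, x[val_idx])
    val_errors.append(np.mean((pred - y[val_idx]) ** 2))      # validation error

best = candidates[int(np.argmin(val_errors))]
print(best, [round(e, 4) for e in val_errors])         # the smallest validation error wins

As noted above, the model selected this way should still be confirmed on a separate test set, since the winning validation error is an optimistically biased estimate of performance.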
https://en.wikipedia.org/wiki/Test_set
TheLouvain method for community detectionis agreedy optimizationmethod intended to extract non-overlapping communities from largenetworkscreated byBlondelet al.[1]from theUniversity of Louvain(the source of this method's name). The inspiration for this method ofcommunity detectionis the optimization ofmodularityas the algorithm progresses. Modularity is a scale value between −1 (non-modular clustering) and 1 (fully modular clustering) that measures the relative density of edges inside communities with respect to edges outside communities. Optimizing this value theoretically results in the best possible grouping of the nodes of a given network. But because going through all possible configurations of the nodes into groups is impractical, heuristic algorithms are used. In the Louvain Method of community detection, first small communities are found by optimizing modularity locally on all nodes, then each small community is grouped into one node and the first step is repeated. The method is similar to the earlier method by Clauset, Newman and Moore[2]that connects communities whose amalgamation produces the largest increase in modularity. The Louvain algorithm was shown to correctly identify the community structure when it exists, in particular in thestochastic block model.[3] The value to be optimized ismodularity, defined as a value in the range[−1,1]{\displaystyle [-1,1]}that measures the density of links inside communities compared to links between communities.[1]For a weighted graph, modularity is defined as: Q=12m∑i=1N∑j=1N[Aij−kikj2m]δ(ci,cj),{\displaystyle Q={\frac {1}{2m}}\sum _{i=1}^{N}\sum _{j=1}^{N}{\bigg [}A_{ij}-{\frac {k_{i}k_{j}}{2m}}{\bigg ]}\delta (c_{i},c_{j}),} where: δ(ci,cj)={1ifciandcjare the same cluster0otherwise{\displaystyle {\begin{aligned}\delta (c_{i},c_{j})&={\begin{cases}1&{\text{if }}c_{i}{\text{ and }}c_{j}{\text{ are the same cluster}}\\0&{\text{otherwise}}\end{cases}}\end{aligned}}} Based on the above equation, the modularity of a communityccan be calculated as:[4] Qc=12m∑i∑jAij1{ci=cj=c}−(∑iki2m1{ci=c})2=Σin2m−(Σtot2m)2{\displaystyle {\begin{aligned}Q_{c}&={\dfrac {1}{2m}}\sum _{i}\sum _{j}A_{ij}\mathbf {1} \left\{c_{i}=c_{j}=c\right\}-\left(\sum _{i}{\dfrac {k_{i}}{2m}}\mathbf {1} \left\{c_{i}=c\right\}\right)^{2}\\&={\frac {\Sigma _{in}}{2m}}-\left({\frac {\Sigma _{tot}}{2m}}\right)^{2}\end{aligned}}} where As nodes in different communities do not contribute to the modularityQ, it can be written as: Q=∑cQc{\displaystyle Q=\sum _{c}Q_{c}} The Louvain method works by repeating two phases.[1]In phase one, nodes are sorted into communities based on how the modularity of the graph changes when a node moves communities. In phase two, the graph is reinterpreted so that communities are seen as individual nodes. A detailed explanation is provided below. The Louvain method begins by considering each nodevin a graph to be its own community. This can be seen in Figure 1, where each dot (representing nodes) is a unique color (representing which community the node belongs to). For each nodev, we consider how movingvfrom its current communityCinto a neighboring communityC'will affect the modularity of the graph partition. In the pseudo-code below, this happens in the for-loop. We select the communityC'with the greatest change in modularity, and if the change is positive, we movevintoC'; otherwise we leave it where it is. This continues until the modularity stops improving. 
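The modularity being evaluated at each of these steps translates directly into NumPy; this hedged sketch computes Q for a given partition of a small invented graph (an adjacency matrix plus one community label per node).

import numpy as np

def modularity(A, labels):
    """Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j) for a weighted adjacency matrix A."""
    k = A.sum(axis=1)                                  # (weighted) degrees
    two_m = A.sum()                                    # 2m = total weight, each edge counted twice
    same_community = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same_community).sum() / two_m

# Toy graph: two triangles joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

labels = np.array([0, 0, 0, 1, 1, 1])                  # one triangle per community
print(round(modularity(A, labels), 3))                 # 0.357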
[5] This process is applied repeatedly and sequentially to all nodes until no modularity increase can occur. Once this local maximum of modularity is hit, the first phase has ended. Figure 2 shows how the graph in Figure 1 might look after one iteration of phase 1. For each community in our graph's partition, the individual nodes making up that community are combined and the community itself becomes a node. The edges connecting distinct communities are used to weight the new edges connecting our aggregate nodes. This process is modeled in the pseudo-code, where the function aggregateGraph returns a new graph whose vertices are the partition of the old graph, and whose edges are calculated using the old graph. This function does not show the edges being weighted, but a simple modification would allow for that information to be tracked. [5] Figure 3 shows what the graph from Figure 2 would look like after being aggregated. This graph is analogous to the graph in Figure 1 in the sense that each node is assigned to a single community. From here, the process can be repeated so that more nodes are moved into existing communities until an optimal level of modularity is reached. The pseudo-code below shows how the previous two functions work together to complete the process. [5] Generally, the Louvain method is assumed to have a time complexity of O(n log n){\displaystyle O(n\log {}n)}. Vincent Blondel, co-author of the paper that originally published the Louvain method, seems to support this notion,[6] but other sources claim the time complexity is "essentially linear in the number of links in the graph,"[7] meaning the time complexity would instead be O(m){\displaystyle O(m)}, where m is the number of edges in the graph. Unfortunately, no source has published an analysis of the Louvain method's time complexity, so one is attempted here. In the pseudo-code above, the function louvain controls the execution of the algorithm. Inside of louvain, moveNodes is repeated until it is no longer possible to combine nodes into communities. How long this takes depends on two factors: how much the modularity of the graph can improve and, in the worst case (if the modularity can improve with every iteration of louvain), how quickly aggregateGraph reduces the graph down to a single node. If, in each iteration of louvain, moveNodes is only able to move one node into a community, then aggregateGraph will only be able to reduce the size of the graph by one. This would cause louvain to repeat n times. Since moveNodes iterates through all nodes in a graph, this would result in a time complexity of O(n2){\displaystyle {\mathcal {O}}(n^{2})}, where n is the number of nodes. It is unclear if this situation is possible, so the above result should be considered a loose bound. Blondel et al. state in their original publication that most of the run time is spent in the early iterations of the algorithm because "the number of communities decreases drastically after just a few passes."[1] This can be understood by considering a scenario where moveNodes is able to move each node so that every community has two nodes. In this case, aggregateGraph would return a graph half the size of the original. If this continued, then the Louvain method would have a runtime of n log2 n{\displaystyle n\log _{2}{n}}, although it is unclear if this would be the worst case, best case, average case, or none of those.
Additionally, there is no guarantee the size of the graph would be reduced by the same factor with each iteration, and so no single logarithm function can perfectly describe the time complexity. Louvain produces only non-overlapping communities, which means that each node can belong to at most one community. This is highly unrealistic in many real-world applications. For example, in social networks, most people belong to multiple communities: their family, their friends, their co-workers, old school buddies, etc. In biological networks, most genes or proteins belong to more than one pathway or complex. Furthermore, Louvain has been shown to sometimes produce arbitrarily badly connected communities, and has been effectively superseded (at least in the non-overlapping case) by the Leiden algorithm. A worst-case example of an arbitrarily badly connected community is an internally disconnected community. An internally disconnected community arises through the Louvain algorithm when a node that had been acting as a "bridge" between two groups of nodes in its community is moved to a new community, leaving the old one disconnected. The remaining nodes in the old community may also be relocated, but if their connection to the community is strong enough despite the removal of the "bridge" node, they will instead remain in place. For an example of this, see the image to the right; note how the removal of the bridge node, node 0, caused the red community to be split into two disjoint subgroups. While this is the worst-case scenario, there are other, more subtle problems with the Louvain algorithm that can also lead to arbitrarily badly connected communities, such as the formation of communities using nodes that are only weakly connected. Another common issue with the Louvain algorithm is the resolution limit of modularity, that is, multiple small communities being grouped together into a larger community. This causes the smaller communities to be hidden; for an example of this, see the visual depiction of the resolution limit to the right. Note how, when the green community is absorbed into the blue community to increase the graph's modularity, the smaller group of nodes that it represented is lost. There is no longer a way to differentiate those nodes from the nodes that were already in the blue community. Conversely, the nodes that were already in the blue community no longer appear distinct from those that were in the green community; in other words, whatever difference caused them to initially be placed in separate communities has been obscured. Both the resolution limit of modularity and the arbitrarily badly connected community problem are further exacerbated by each iteration of the algorithm. Ultimately, the only thing the Louvain algorithm guarantees is that the resulting communities cannot be merged further; in other words, they are well separated. To avoid the problems that arise from arbitrarily badly connected communities and the resolution limit of modularity, it is recommended to use the Leiden algorithm instead, as its refinement phase and other various adjustments have corrected these issues.[5] When comparing modularity optimization methods, the two measures of importance are the speed and the resulting modularity value. A higher speed is better as it shows a method is more efficient than others, and a higher modularity value is desirable as it points to having better-defined communities.
The compared methods are the algorithm of Clauset, Newman, and Moore,[2] Pons and Latapy,[11] and Wakita and Tsurumi.[12] An entry of -/- in the table refers to a method that took over 24 hours to run. This table (from [1][14]) shows that the Louvain method outperforms many similar modularity optimization methods in both the modularity and the time categories.
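For concreteness, the two phases described above can be put together in a deliberately unoptimized, Louvain-style NumPy sketch; it re-evaluates modularity from scratch for every candidate move instead of using the incremental gain formula, so it illustrates the logic rather than the performance of the real algorithm.

import numpy as np

def modularity(A, labels):
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

def local_moving(A, labels):
    """Phase 1: greedily move single nodes between communities while modularity improves."""
    improved = True
    while improved:
        improved = False
        for v in range(len(A)):
            best_label, best_q = labels[v], modularity(A, labels)
            for c in np.unique(labels[A[v] > 0]):      # only neighbouring communities
                trial = labels.copy()
                trial[v] = c
                q = modularity(A, trial)
                if q > best_q + 1e-12:
                    best_label, best_q = c, q
            if best_label != labels[v]:
                labels[v] = best_label
                improved = True
    return labels

def aggregate_graph(A, labels):
    """Phase 2: one node per community; edge weights are summed between (and within) communities."""
    _, new_labels = np.unique(labels, return_inverse=True)
    C = np.eye(new_labels.max() + 1)[new_labels]       # one-hot membership matrix
    return C.T @ A @ C                                 # self-loops carry the internal weight

# Same toy graph as above: two triangles joined by one edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

labels = local_moving(A, np.arange(6))                 # start with every node in its own community
print(labels)                                          # the two triangles form two communities
print(aggregate_graph(A, labels))                      # 2x2 community-level adjacency matrix

In the real algorithm these two steps are repeated on the aggregated graph until modularity stops increasing.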
https://en.wikipedia.org/wiki/Louvain_modularity
Ininformation science, anontologyencompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or alldomains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to asapplied ontology.[1] Everyacademic disciplineor field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain,interoperabilityof data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain acontrolled vocabularyofjargonbetween each of their languages.[2]For instance, thedefinition and ontology of economicsis a primary concern inMarxist economics,[3]but also in othersubfields of economics.[4]An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining whatcapital assetsare at risk and by how much (seerisk management). What ontologies in bothinformation scienceandphilosophyhave in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems ofontology engineering(e.g.,QuineandKripkein philosophy,SowaandGuarinoin information science),[5]and debates concerning to what extentnormativeontology is possible (e.g.,foundationalismandcoherentismin philosophy,BFOandCycin artificial intelligence). Applied ontologyis considered by some as a successor to prior work in philosophy. However many current efforts are more concerned with establishingcontrolled vocabulariesof narrow domains than with philosophicalfirst principles, or with questions such as the mode of existence offixed essencesor whether enduring objects (e.g.,perdurantismandendurantism) may be ontologically more primary thanprocesses.Artificial intelligencehas retained considerable attention regardingapplied ontologyin subfields likenatural language processingwithinmachine translationandknowledge representation, but ontology editors are being used often in a range of fields, including biomedical informatics,[6]industry.[7]Such efforts often use ontology editing tools such asProtégé.[8] Ontologyis a branch ofphilosophyand intersects areas such asmetaphysics,epistemology, andphilosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality.Metaphysicsdeals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those betweenparticularsanduniversals,intrinsic and extrinsic properties, oressenceandexistence. Metaphysics has been an ongoing topic of discussion since recorded history. Thecompoundwordontologycombinesonto-, from theGreekὄν,on(gen.ὄντος,ontos), i.e. "being; that which is", which is thepresentparticipleof theverbεἰμί,eimí, i.e. "to be, I am", and-λογία,-logia, i.e. 
"logical discourse", seeclassical compoundsfor this type of word formation.[9][10] While theetymologyis Greek, the oldest extant record of the word itself, theNeo-Latinformontologia, appeared in 1606 in the workOgdoas ScholasticabyJacob Lorhard(Lorhardus) and in 1613 in theLexicon philosophicumbyRudolf Göckel(Goclenius).[11] The first occurrence in English ofontologyas recorded by theOED(Oxford English Dictionary, online edition, 2008) came inArcheologia Philosophica NovaorNew Principles of PhilosophybyGideon Harvey. Since the mid-1970s, researchers in the field ofartificial intelligence(AI) have recognized thatknowledge engineeringis the key to building large and powerful AI systems[citation needed]. AI researchers argued that they could create new ontologies ascomputational modelsthat enable certain kinds ofautomated reasoning, which was onlymarginally successful. In the 1980s, the AI community began to use the termontologyto refer to both a theory of a modeled world and a component ofknowledge-based systems. In particular, David Powers introduced the wordontologyto AI to refer to real world or robotic grounding,[12][13]publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14]Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15] In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" byTom Gruber[16]usedontologyas a technical term incomputer scienceclosely related to earlier idea ofsemantic networksandtaxonomies. Gruber introduced the term asa specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17] Attempting to distance ontologies from taxonomies and similar efforts inknowledge modelingthat rely onclassesandinheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited toconservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms thatdoconstrain the possible interpretations for the defined terms.[16] Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence.[18] As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19] Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. 
Most ontologies describe individuals (instances), classes (concepts), attributes and relations. A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the wordcardhas many different meanings. An ontology about the domain ofpokerwould model the "playing card" meaning of the word, while an ontology about the domain ofcomputer hardwarewould model the "punched card" and "video card" meanings. Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed]. At present, merging ontologies that are not developed from a commonupper ontologyis a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20]but this area of research is still ongoing, and it is a recent event to see the issue sidestepped by having multiple domain ontologies using the same upper ontology like theOBO Foundry. An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs acore glossarythat overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use includeBFO,BORO method,Dublin Core,GFO,Cyc,SUMO,UMBEL, andDOLCE.[21][22]WordNethas been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23] TheGellishontology is an example of a combination of an upper and a domain ontology. A survey of ontology visualization methods is presented by Katifori et al.[24]An updated survey of ontology visualization methods and tools was published by Dudás et al.[25]The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al.[26]A visual language for ontologies represented inOWLis specified by theVisual Notation for OWL Ontologies (VOWL).[27] Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28]It is a subfield ofknowledge engineeringthat studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30] Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. 
Known challenges with ontology engineering include: Ontology editorsare applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or moreontology languages. Aspects of ontology editors include: visual navigation possibilities within theknowledge model,inference enginesandinformation extraction; support for modules; the import and export of foreignknowledge representationlanguages forontology matching; and the support of meta-ontologies such asOWL-S,Dublin Core, etc.[31] Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction andtext mininghave been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32] Epistemological assumptions, which in research asks "What do you know? or "How do you know it?", creates the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed] Anontology languageis aformal languageused to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: The W3CLinking Open Data community projectcoordinates attempts to converge different ontologies into worldwideSemantic Web. The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries. The following are libraries of human-selected ontologies. The following are both directories and search engines. In general, ontologies can be used beneficially in several fields.
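As a toy illustration of the individuals, classes, and relations that most ontologies describe (this is an invented, minimal Python representation, not any standard ontology language such as OWL):

# Classes with subclass-of relations, plus individuals typed by class and linked by relations.
subclass_of = {
    "Spaniel": "Dog",
    "Dog": "Pet",
    "Cat": "Pet",
}
individuals = {"Fido": "Spaniel", "Whiskers": "Cat"}
relations = [("Fido", "ownedBy", "Alice"), ("Whiskers", "ownedBy", "Bob")]

def is_a(cls, ancestor):
    """Walk the subclass hierarchy to answer simple class-membership queries."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

print(is_a(individuals["Fido"], "Pet"))        # True: a Spaniel is a Dog, and a Dog is a Pet
print([s for s, p, o in relations if p == "ownedBy" and o == "Alice"])   # ['Fido']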
https://en.wikipedia.org/wiki/Ontology_(information_science)
Innatural language processing,latentDirichletallocation(LDA) is aBayesian network(and, therefore, agenerative statistical model) for modeling automatically extracted topics in textual corpora. The LDA is an example of a Bayesiantopic model. In this, observations (e.g., words) are collected into documents, and each word's presence is attributable to one of the document's topics. Each document will contain a small number of topics. In the context ofpopulation genetics, LDA was proposed byJ. K. Pritchard,M. StephensandP. Donnellyin 2000.[1][2] LDA was applied inmachine learningbyDavid Blei,Andrew NgandMichael I. Jordanin 2003.[3] In population genetics, the model is used to detect the presence of structured genetic variation in a group of individuals. The model assumes thatallelescarried by individuals under study have origin in various extant or past populations. The model and various inference algorithms allow scientists to estimate the allele frequencies in those source populations and the origin of alleles carried by individuals under study. The source populations can be interpreted ex-post in terms of various evolutionary scenarios. Inassociation studies, detecting the presence of genetic structure is considered a necessary preliminary step to avoidconfounding. In clinical psychology research, LDA has been used to identify common themes of self-images experienced by young people in social situations.[4]Other social scientists have used LDA to examine large sets of topical data from discussions on social media (e.g., tweets about prescription drugs).[5] Additionally,supervised Latent Dirichlet Allocation with covariates (SLDAX)has been specifically developed to combine latent topics identified in texts with other manifest variables. This approach allows for the integration of text data as predictors in statistical regression analyses, improving the accuracy of mental health predictions. One of the main advantages of SLDAX over traditional two-stage approaches is its ability to avoid biased estimates and incorrect standard errors, allowing for a more accurate analysis of psychological texts.[6][7] In the field of social sciences, LDA has proven to be useful for analyzing large datasets, such as social media discussions. For instance, researchers have used LDA to investigate tweets discussing socially relevant topics, like the use of prescription drugs and cultural differences in China.[8]By analyzing these large text corpora, it is possible to uncover patterns and themes that might otherwise go unnoticed, offering valuable insights into public discourse and perception in real time.[9][10] In the context ofcomputational musicology, LDA has been used to discover tonal structures in different corpora.[11] One application of LDA inmachine learning- specifically,topic discovery, a subproblem innatural language processing– is to discover topics in a collection of documents, and then automatically classify any individual document within the collection in terms of how "relevant" it is to each of the discovered topics. Atopicis considered to be a set of terms (i.e., individual words or phrases) that, taken together, suggest a shared theme. For example, in a document collection related to pet animals, the termsdog,spaniel,beagle,golden retriever,puppy,bark, andwoofwould suggest aDOG_relatedtheme, while the termscat,siamese,Maine coon,tabby,manx,meow,purr, andkittenwould suggest aCAT_relatedtheme. 
There may be many more topics in the collection – e.g., related to diet, grooming, healthcare, behavior, etc. that we do not discuss for simplicity's sake. (Very common, so calledstop wordsin a language – e.g., "the", "an", "that", "are", "is", etc., – would not discriminate between topics and are usually filtered out by pre-processing before LDA is performed. Pre-processing also converts terms to their "root" lexical forms – e.g., "barks", "barking", and "barked" would be converted to "bark".) If the document collection is sufficiently large, LDA will discover such sets of terms (i.e., topics) based upon the co-occurrence of individual terms, though the task of assigning a meaningful label to an individual topic (i.e., that all the terms are DOG_related) is up to the user, and often requires specialized knowledge (e.g., for collection of technical documents). The LDA approach assumes that: When LDA machine learning is employed, both sets of probabilities are computed during the training phase, usingBayesianmethods and anExpectation Maximizationalgorithm. LDA is a generalization of older approach ofprobabilistic latent semantic analysis(pLSA), The pLSA model is equivalent to LDA under a uniform Dirichlet prior distribution.[12]pLSA relies on only the first two assumptions above and does not care about the remainder. While both methods are similar in principle and require the user to specify the number of topics to be discovered before the start of training (as withK-means clustering) LDA has the following advantages over pLSA: Withplate notation, which is often used to representprobabilistic graphical models(PGMs), the dependencies among the many variables can be captured concisely. The boxes are "plates" representing replicates, which are repeated entities. The outer plate represents documents, while the inner plate represents the repeated word positions in a given document; each position is associated with a choice of topic and word. The variable names are defined as follows: The fact that W is grayed out means that wordswij{\displaystyle w_{ij}}are the onlyobservable variables, and the other variables arelatent variables. As proposed in the original paper,[3]a sparse Dirichlet prior can be used to model the topic-word distribution, following the intuition that the probability distribution over words in a topic is skewed, so that only a small set of words have high probability. The resulting model is the most widely applied variant of LDA today. The plate notation for this model is shown on the right, whereK{\displaystyle K}denotes the number of topics andφ1,…,φK{\displaystyle \varphi _{1},\dots ,\varphi _{K}}areV{\displaystyle V}-dimensional vectors storing the parameters of the Dirichlet-distributed topic-word distributions (V{\displaystyle V}is the number of words in the vocabulary). It is helpful to think of the entities represented byθ{\displaystyle \theta }andφ{\displaystyle \varphi }as matrices created by decomposing the original document-word matrix that represents the corpus of documents being modeled. In this view,θ{\displaystyle \theta }consists of rows defined by documents and columns defined by topics, whileφ{\displaystyle \varphi }consists of rows defined by topics and columns defined by words. Thus,φ1,…,φK{\displaystyle \varphi _{1},\dots ,\varphi _{K}}refers to a set of rows, or vectors, each of which is a distribution over words, andθ1,…,θM{\displaystyle \theta _{1},\dots ,\theta _{M}}refers to a set of rows, each of which is a distribution over topics. 
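These two sets of distributions can be drawn and used to generate a toy corpus with NumPy; the sizes and the sparse hyperparameters below are arbitrary illustrative choices, not values from the cited papers.

import numpy as np

rng = np.random.default_rng(0)
K, V, M, N = 3, 20, 5, 50           # topics, vocabulary size, documents, words per document
alpha, beta = 0.5, 0.1              # sparse symmetric Dirichlet hyperparameters

phi = rng.dirichlet(np.full(V, beta), size=K)           # K x V topic-word distributions (rows of phi)
theta = rng.dirichlet(np.full(K, alpha), size=M)        # M x K document-topic mixtures (rows of theta)

docs = []
for d in range(M):
    z = rng.choice(K, size=N, p=theta[d])               # a topic for each word position
    w = np.array([rng.choice(V, p=phi[k]) for k in z])  # a word drawn from that topic
    docs.append(w)

print(theta[0].round(2))            # the first document's topic mixture sums to 1
print(docs[0][:10])                 # the first ten word ids it generated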
To actually infer the topics in a corpus, we imagine a generative process whereby the documents are created, so that we may infer, or reverse engineer, it. We imagine the generative process as follows. Documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over all the words. LDA assumes the following generative process for a corpusD{\displaystyle D}consisting ofM{\displaystyle M}documents each of lengthNi{\displaystyle N_{i}}: 1. Chooseθi∼Dir⁡(α){\displaystyle \theta _{i}\sim \operatorname {Dir} (\alpha )}, wherei∈{1,…,M}{\displaystyle i\in \{1,\dots ,M\}}andDir(α){\displaystyle \mathrm {Dir} (\alpha )}is aDirichlet distributionwith a symmetric parameterα{\displaystyle \alpha }which typically is sparse (α<1{\displaystyle \alpha <1}) 2. Chooseφk∼Dir⁡(β){\displaystyle \varphi _{k}\sim \operatorname {Dir} (\beta )}, wherek∈{1,…,K}{\displaystyle k\in \{1,\dots ,K\}}andβ{\displaystyle \beta }typically is sparse 3. For each of the word positionsi,j{\displaystyle i,j}, wherei∈{1,…,M}{\displaystyle i\in \{1,\dots ,M\}}, andj∈{1,…,Ni}{\displaystyle j\in \{1,\dots ,N_{i}\}} (Note thatmultinomial distributionhere refers to themultinomialwith only one trial, which is also known as thecategorical distribution.) The lengthsNi{\displaystyle N_{i}}are treated as independent of all the other data generating variables (w{\displaystyle w}andz{\displaystyle z}). The subscript is often dropped, as in the plate diagrams shown here. A formal description of LDA is as follows: We can then mathematically describe the random variables as follows: Learning the various distributions (the set of topics, their associated word probabilities, the topic of each word, and the particular topic mixture of each document) is a problem ofstatistical inference. The original paper by Pritchard et al.[1]used approximation of the posterior distribution by Monte Carlo simulation. Alternative proposal of inference techniques includeGibbs sampling.[13] The original ML paper used avariational Bayesapproximation of theposterior distribution.[3] A direct optimization of the likelihood with a block relaxation algorithm proves to be a fast alternative to MCMC.[14] In practice, the optimal number of populations or topics is not known beforehand. It can be estimated by approximation of the posterior distribution withreversible-jump Markov chain Monte Carlo.[15] Alternative approaches includeexpectation propagation.[16] Recent research has been focused on speeding up the inference of latent Dirichlet allocation to support the capture of a massive number of topics in a large number of documents. The update equation of the collapsed Gibbs sampler mentioned in the earlier section has a natural sparsity within it that can be taken advantage of. Intuitively, since each document only contains a subset of topicsKd{\displaystyle K_{d}}, and a word also only appears in a subset of topicsKw{\displaystyle K_{w}}, the above update equation could be rewritten to take advantage of this sparsity.[17] In this equation, we have three terms, out of which two are sparse, and the other is small. We call these termsa,b{\displaystyle a,b}andc{\displaystyle c}respectively. 
Now, if we normalize each term by summing over all the topics, we get: Here, we can see thatB{\displaystyle B}is a summation of the topics that appear in documentd{\displaystyle d}, andC{\displaystyle C}is also a sparse summation of the topics that a wordw{\displaystyle w}is assigned to across the whole corpus.A{\displaystyle A}on the other hand, is dense but because of the small values ofα{\displaystyle \alpha }&β{\displaystyle \beta }, the value is very small compared to the two other terms. Now, while sampling a topic, if we sample a random variable uniformly froms∼U(s|∣A+B+C){\displaystyle s\sim U(s|\mid A+B+C)}, we can check which bucket our sample lands in. SinceA{\displaystyle A}is small, we are very unlikely to fall into this bucket; however, if we do fall into this bucket, sampling a topic takesO(K){\displaystyle O(K)}time (same as the original Collapsed Gibbs Sampler). However, if we fall into the other two buckets, we only need to check a subset of topics if we keep a record of the sparse topics. A topic can be sampled from theB{\displaystyle B}bucket inO(Kd){\displaystyle O(K_{d})}time, and a topic can be sampled from theC{\displaystyle C}bucket inO(Kw){\displaystyle O(K_{w})}time whereKd{\displaystyle K_{d}}andKw{\displaystyle K_{w}}denotes the number of topics assigned to the current document and current word type respectively. Notice that after sampling each topic, updating these buckets is all basicO(1){\displaystyle O(1)}arithmetic operations. Following is the derivation of the equations forcollapsed Gibbs sampling, which meansφ{\displaystyle \varphi }s andθ{\displaystyle \theta }s will be integrated out. For simplicity, in this derivation the documents are all assumed to have the same lengthN{\displaystyle N_{}}. The derivation is equally valid if the document lengths vary. According to the model, the total probability of the model is: where the bold-font variables denote the vector version of the variables. First,φ{\displaystyle {\boldsymbol {\varphi }}}andθ{\displaystyle {\boldsymbol {\theta }}}need to be integrated out. All theθ{\displaystyle \theta }s are independent to each other and the same to all theφ{\displaystyle \varphi }s. So we can treat eachθ{\displaystyle \theta }and eachφ{\displaystyle \varphi }separately. We now focus only on theθ{\displaystyle \theta }part. We can further focus on only oneθ{\displaystyle \theta }as the following: Actually, it is the hidden part of the model for thejth{\displaystyle j^{th}}document. Now we replace the probabilities in the above equation by the true distribution expression to write out the explicit equation. Letnj,ri{\displaystyle n_{j,r}^{i}}be the number of word tokens in thejth{\displaystyle j^{th}}document with the same word symbol (therth{\displaystyle r^{th}}word in the vocabulary) assigned to theith{\displaystyle i^{th}}topic. So,nj,ri{\displaystyle n_{j,r}^{i}}is three dimensional. If any of the three dimensions is not limited to a specific value, we use a parenthesized point(⋅){\displaystyle (\cdot )}to denote. For example,nj,(⋅)i{\displaystyle n_{j,(\cdot )}^{i}}denotes the number of word tokens in thejth{\displaystyle j^{th}}document assigned to theith{\displaystyle i^{th}}topic. Thus, the right most part of the above equation can be rewritten as: So theθj{\displaystyle \theta _{j}}integration formula can be changed to: The equation inside the integration has the same form as theDirichlet distribution. 
According to theDirichlet distribution, Thus, Now we turn our attention to theφ{\displaystyle {\boldsymbol {\varphi }}}part. Actually, the derivation of theφ{\displaystyle {\boldsymbol {\varphi }}}part is very similar to theθ{\displaystyle {\boldsymbol {\theta }}}part. Here we only list the steps of the derivation: For clarity, here we write down the final equation with bothϕ{\displaystyle {\boldsymbol {\phi }}}andθ{\displaystyle {\boldsymbol {\theta }}}integrated out: The goal of Gibbs Sampling here is to approximate the distribution ofP(Z∣W;α,β){\displaystyle P({\boldsymbol {Z}}\mid {\boldsymbol {W}};\alpha ,\beta )}. SinceP(W;α,β){\displaystyle P({\boldsymbol {W}};\alpha ,\beta )}is invariable for any of Z, Gibbs Sampling equations can be derived fromP(Z,W;α,β){\displaystyle P({\boldsymbol {Z}},{\boldsymbol {W}};\alpha ,\beta )}directly. The key point is to derive the following conditional probability: whereZ(m,n){\displaystyle Z_{(m,n)}}denotes theZ{\displaystyle Z}hidden variable of thenth{\displaystyle n^{th}}word token in themth{\displaystyle m^{th}}document. And further we assume that the word symbol of it is thevth{\displaystyle v^{th}}word in the vocabulary.Z−(m,n){\displaystyle {\boldsymbol {Z_{-(m,n)}}}}denotes all theZ{\displaystyle Z}s butZ(m,n){\displaystyle Z_{(m,n)}}. Note that Gibbs Sampling needs only to sample a value forZ(m,n){\displaystyle Z_{(m,n)}}, according to the above probability, we do not need the exact value of but the ratios among the probabilities thatZ(m,n){\displaystyle Z_{(m,n)}}can take value. So, the above equation can be simplified as: Finally, letnj,ri,−(m,n){\displaystyle n_{j,r}^{i,-(m,n)}}be the same meaning asnj,ri{\displaystyle n_{j,r}^{i}}but with theZ(m,n){\displaystyle Z_{(m,n)}}excluded. The above equation can be further simplified leveraging the property ofgamma function. We first split the summation and then merge it back to obtain ak{\displaystyle k}-independent summation, which could be dropped: Note that the same formula is derived in the article on theDirichlet-multinomial distribution, as part of a more general discussion of integratingDirichlet distributionpriors out of aBayesian network. Topic modeling is a classic solution to the problem ofinformation retrievalusing linked data and semantic web technology.[18]Related models and techniques are, among others,latent semantic indexing,independent component analysis,probabilistic latent semantic indexing,non-negative matrix factorization, andGamma-Poisson distribution. The LDA model is highly modular and can therefore be easily extended. The main field of interest is modeling relations between topics. This is achieved by using another distribution on the simplex instead of the Dirichlet. The Correlated Topic Model[19]follows this approach, inducing a correlation structure between topics by using thelogistic normal distributioninstead of the Dirichlet. Another extension is the hierarchical LDA (hLDA),[20]where topics are joined together in a hierarchy by using the nestedChinese restaurant process, whose structure is learnt from data. LDA can also be extended to a corpus in which a document includes two types of information (e.g., words and names), as in theLDA-dual model.[21]Nonparametric extensions of LDA include thehierarchical Dirichlet processmixture model, which allows the number of topics to be unbounded and learnt from data. As noted earlier, pLSA is similar to LDA. The LDA model is essentially the Bayesian version of pLSA model. 
The Bayesian formulation tends to perform better on small datasets because Bayesian methods can avoid overfitting the data. For very large datasets, the results of the two models tend to converge. One difference is that pLSA uses a variable d{\displaystyle d} to represent a document in the training set. So in pLSA, when presented with a document the model has not seen before, we fix Pr(w∣z){\displaystyle \Pr(w\mid z)}, the probability of words under topics, to be that learned from the training set and use the same EM algorithm to infer Pr(z∣d){\displaystyle \Pr(z\mid d)}, the topic distribution under d{\displaystyle d}. Blei argues that this step is cheating because you are essentially refitting the model to the new data. In evolutionary biology, it is often natural to assume that the geographic locations of the individuals observed bring some information about their ancestry. This is the rationale of various models for geo-referenced genetic data.[15][22] Variations on LDA have been used to automatically put natural images into categories, such as "bedroom" or "forest", by treating an image as a document, and small patches of the image as words;[23] one of the variations is called spatial latent Dirichlet allocation.[24]
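Tying the collapsed Gibbs update derived earlier to code, here is a compact, unoptimized NumPy sketch of the standard sampler; the toy corpus and hyperparameters are invented, and a practical implementation would add the sparsity tricks discussed above.

import numpy as np

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """docs: list of lists of word ids in [0, V). Returns an unnormalized topic-word matrix."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))                    # topic counts per document
    n_kw = np.zeros((K, V))                            # word counts per topic
    n_k = np.zeros(K)                                  # total word count per topic
    z = [rng.integers(0, K, size=len(d)) for d in docs]   # random initial topic assignments
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1
            n_kw[k, w] += 1
            n_k[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                            # remove the current assignment
                n_dk[d, k] -= 1
                n_kw[k, w] -= 1
                n_k[k] -= 1
                # p(z = k | rest) is proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())       # resample and add the assignment back
                z[d][i] = k
                n_dk[d, k] += 1
                n_kw[k, w] += 1
                n_k[k] += 1
    return n_kw + beta

# Tiny corpus over a 6-word vocabulary: word ids 0-2 co-occur, word ids 3-5 co-occur.
docs = [[0, 1, 2, 0, 1], [3, 4, 5, 3, 4], [0, 2, 1, 2], [4, 5, 3, 5]]
topic_word = lda_gibbs(docs, K=2, V=6)
print(topic_word.argmax(axis=0))    # word ids 0-2 and 3-5 should mostly land in different topics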
https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation
Latent semantic analysis(LSA) is a technique innatural language processing, in particulardistributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (thedistributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique calledsingular value decomposition(SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared bycosine similaritybetween any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.[1] An information retrieval technique using latent semantic structure was patented in 1988[2]byScott Deerwester,Susan Dumais,George Furnas,Richard Harshman,Thomas Landauer,Karen LochbaumandLynn Streeter. In the context of its application toinformation retrieval, it is sometimes calledlatent semantic indexing(LSI).[3] LSA can use adocument-term matrixwhich describes the occurrences of terms in documents; it is asparse matrixwhose rows correspond totermsand whose columns correspond to documents. A typical example of the weighting of the elements of the matrix istf-idf(term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used. After the construction of the occurrence matrix, LSA finds alow-rank approximation[5]to theterm-document matrix. There could be various reasons for these approximations: The consequence of the rank lowering is that some dimensions are combined and depend on more than one term: This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem withpolysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense. LetX{\displaystyle X}be a matrix where element(i,j){\displaystyle (i,j)}describes the occurrence of termi{\displaystyle i}in documentj{\displaystyle j}(this can be, for example, the frequency).X{\displaystyle X}will look like this: Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document: Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term: Now thedot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}between two term vectors gives thecorrelationbetween the terms over the set of documents. Thematrix productXXT{\displaystyle XX^{T}}contains all these dot products. 
Element(i,p){\displaystyle (i,p)}(which is equal to element(p,i){\displaystyle (p,i)}) contains the dot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}(=tpTti{\displaystyle ={\textbf {t}}_{p}^{T}{\textbf {t}}_{i}}). Likewise, the matrixXTX{\displaystyle X^{T}X}contains the dot products between all the document vectors, giving their correlation over the terms:djTdq=dqTdj{\displaystyle {\textbf {d}}_{j}^{T}{\textbf {d}}_{q}={\textbf {d}}_{q}^{T}{\textbf {d}}_{j}}. Now, from the theory of linear algebra, there exists a decomposition ofX{\displaystyle X}such thatU{\displaystyle U}andV{\displaystyle V}areorthogonal matricesandΣ{\displaystyle \Sigma }is adiagonal matrix. This is called asingular value decomposition(SVD): The matrix products giving us the term and document correlations then become SinceΣΣT{\displaystyle \Sigma \Sigma ^{T}}andΣTΣ{\displaystyle \Sigma ^{T}\Sigma }are diagonal we see thatU{\displaystyle U}must contain theeigenvectorsofXXT{\displaystyle XX^{T}}, whileV{\displaystyle V}must be the eigenvectors ofXTX{\displaystyle X^{T}X}. Both products have the same non-zero eigenvalues, given by the non-zero entries ofΣΣT{\displaystyle \Sigma \Sigma ^{T}}, or equally, by the non-zero entries ofΣTΣ{\displaystyle \Sigma ^{T}\Sigma }. Now the decomposition looks like this: The valuesσ1,…,σl{\displaystyle \sigma _{1},\dots ,\sigma _{l}}are called the singular values, andu1,…,ul{\displaystyle u_{1},\dots ,u_{l}}andv1,…,vl{\displaystyle v_{1},\dots ,v_{l}}the left and right singular vectors. Notice the only part ofU{\displaystyle U}that contributes toti{\displaystyle {\textbf {t}}_{i}}is thei'th{\displaystyle i{\textrm {'th}}}row. Let this row vector be calledt^iT{\displaystyle {\hat {\textrm {t}}}_{i}^{T}}. Likewise, the only part ofVT{\displaystyle V^{T}}that contributes todj{\displaystyle {\textbf {d}}_{j}}is thej'th{\displaystyle j{\textrm {'th}}}column,d^j{\displaystyle {\hat {\textrm {d}}}_{j}}. These arenotthe eigenvectors, butdependonallthe eigenvectors. It turns out that when you select thek{\displaystyle k}largest singular values, and their corresponding singular vectors fromU{\displaystyle U}andV{\displaystyle V}, you get the rankk{\displaystyle k}approximation toX{\displaystyle X}with the smallest error (Frobenius norm). This approximation has a minimal error. But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vectort^iT{\displaystyle {\hat {\textbf {t}}}_{i}^{T}}then hask{\displaystyle k}entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vectord^j{\displaystyle {\hat {\textbf {d}}}_{j}}is an approximation in this lower-dimensional space. We write this approximation as You can now do the following: To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents: Note here that the inverse of the diagonal matrixΣk{\displaystyle \Sigma _{k}}may be found by inverting each nonzero value within the matrix. This means that if you have a query vectorq{\displaystyle q}, you must do the translationq^=Σk−1UkTq{\displaystyle {\hat {\textbf {q}}}=\Sigma _{k}^{-1}U_{k}^{T}{\textbf {q}}}before you compare it with the document vectors in the low-dimensional space. 
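The pipeline just derived can be sketched in a few lines of NumPy: a toy term-document count matrix, its SVD, the rank-k document vectors, and the query translation into the low-dimensional space. The matrix and variable names below are illustrative only, not from the source.

import numpy as np

# Toy term-document count matrix X: rows are terms, columns are documents.
X = np.array([[2., 0., 1.],
              [0., 1., 3.],
              [1., 1., 0.],
              [0., 2., 1.]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V^T, s in decreasing order
k = 2                                              # keep only the k largest singular values
doc_vectors = Vt[:k].T                             # d_hat_j: one k-dimensional vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Translate a raw query (term counts) into the same k-dimensional space:
q = np.array([1., 0., 1., 0.])                     # a query mentioning terms 0 and 2 once each
q_hat = np.diag(1.0 / s[:k]) @ U[:, :k].T @ q      # q_hat = Sigma_k^{-1} U_k^T q

# Rank the documents by similarity to the query in the low-dimensional space.
scores = [cosine(q_hat, d) for d in doc_vectors]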
You can do the same for pseudo term vectors: The new low-dimensional space typically can be used to: Synonymy and polysemy are fundamental problems innatural language processing: LSA has been used to assist in performingprior artsearches forpatents.[9] The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas offree recalland memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. They also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as theSemantic Proximity Effect.[10] When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[11] Another model, termedWord Association Spaces(WAS) is also used in memory studies by collecting free association data from a series of experiments and which includes measures of word relatedness for over 72,000 distinct word pairs.[12] TheSVDis typically computed using large matrix methods (for example,Lanczos methods) but may also be computed incrementally and with greatly reduced resources via aneural network-like approach, which does not require the large, full-rank matrix to be held in memory.[13]A fast, incremental, low-memory, large-matrix SVD algorithm has been developed.[14]MATLAB[15]and Python[16]implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.[17] Some of LSA's drawbacks include: In semantic hashing[21]documents are mapped to memory addresses by means of aneural networkin such a way that semantically similar documents are located at nearby addresses.Deep neural networkessentially builds agraphical modelof the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster thanlocality sensitive hashing, which is the fastest current method.[clarification needed] Latent semantic indexing(LSI) is an indexing and retrieval method that uses a mathematical technique calledsingular value decomposition(SVD) to identify patterns in the relationships between thetermsandconceptscontained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. 
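Returning to the computation of the SVD itself: since only the leading k singular triplets are needed, an iterative (Lanczos/ARPACK-style) solver such as SciPy's svds can compute them without forming a dense matrix. A hedged sketch, where the random sparse matrix is a placeholder rather than data from the source.

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A large, sparse term-document matrix (random here, purely for illustration).
X = sparse_random(10000, 2000, density=0.001, format="csr", random_state=0)

# svds uses an iterative solver, so only the k leading singular triplets are
# computed and the full dense matrix is never materialised.
U, s, Vt = svds(X, k=100)
# Note: svds returns the singular values in ascending order.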
A key feature of LSI is its ability to extract the conceptual content of abody of textby establishing associations between those terms that occur in similarcontexts.[22] LSI is also an application ofcorrespondence analysis, a multivariate statistical technique developed byJean-Paul Benzécri[23]in the early 1970s, to acontingency tablebuilt from word counts in documents. Called "latent semanticindexing" because of its ability to correlatesemanticallyrelated terms that arelatentin a collection of text, it was first applied to text atBellcorein the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria. LSI helps overcome synonymy by increasingrecall, one of the most problematic constraints ofBoolean keyword queriesand vector space models.[18]Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users ofinformation retrievalsystems.[24]As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant. LSI is also used to perform automateddocument categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[25]Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[26]LSI usesexampledocuments to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents. Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text. Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguisticconcept searchingand example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.[citation needed] LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. 
For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[27] LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[28]This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data. Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text. LSI has proven to be a useful solution to a number of conceptual matching problems.[29][30]The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[31] LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing aSingular Value Decompositionon the matrix, and using the matrix to identify the concepts contained in the text. LSI begins by constructing a term-document matrix,A{\displaystyle A}, to identify the occurrences of them{\displaystyle m}unique terms within a collection ofn{\displaystyle n}documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell,aij{\displaystyle a_{ij}}, initially representing the number of times the associated term appears in the indicated document,tfij{\displaystyle \mathrm {tf_{ij}} }. This matrix is usually very large and very sparse. Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell,aij{\displaystyle a_{ij}}ofA{\displaystyle A}, to be the product of a local term weight,lij{\displaystyle l_{ij}}, which describes the relative frequency of a term in a document, and a global weight,gi{\displaystyle g_{i}}, which describes the relative frequency of the term within the entire collection of documents. Some common local weighting functions[33]are defined in the following table. Some common global weighting functions are defined in the following table. Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[34]In other words, each entryaij{\displaystyle a_{ij}}ofA{\displaystyle A}is computed as: A rank-reduced,singular value decompositionis performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. 
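As an illustration of the log and entropy weighting reported above to work well in practice, the sketch below uses one common formulation, a_ij = log(tf_ij + 1) * (1 + sum_j p_ij log p_ij / log n) with p_ij = tf_ij / gf_i; this particular formula is assumed from standard practice rather than quoted from the text.

import numpy as np

def log_entropy_weight(tf):
    """Apply a common log-entropy weighting to a term-document count matrix.
    tf: m x n array of raw term frequencies (terms in rows, documents in columns)."""
    m, n = tf.shape
    p = tf / np.maximum(tf.sum(axis=1, keepdims=True), 1e-12)   # p_ij = tf_ij / gf_i
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n)                     # global entropy weight per term
    l = np.log(tf + 1.0)                                        # local log weight
    return l * g[:, None]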
The SVD forms the foundation for LSI.[35]It computes the term and document vector spaces by approximating the single term-frequency matrix,A{\displaystyle A}, into three other matrices— anmbyrterm-concept vector matrixT{\displaystyle T}, anrbyrsingular values matrixS{\displaystyle S}, and anbyrconcept-document vector matrix,D{\displaystyle D}, which satisfy the following relations: A≈TSDT{\displaystyle A\approx TSD^{T}} TTT=IrDTD=Ir{\displaystyle T^{T}T=I_{r}\quad D^{T}D=I_{r}} S1,1≥S2,2≥…≥Sr,r>0Si,j=0wherei≠j{\displaystyle S_{1,1}\geq S_{2,2}\geq \ldots \geq S_{r,r}>0\quad S_{i,j}=0\;{\text{where}}\;i\neq j} In the formula,Ais the suppliedmbynweighted matrix of term frequencies in a collection of text wheremis the number of unique terms, andnis the number of documents.Tis a computedmbyrmatrix of term vectors whereris the rank ofA—a measure of its unique dimensions≤ min(m,n).Sis a computedrbyrdiagonal matrix of decreasing singular values, andDis a computednbyrmatrix of document vectors. The SVD is thentruncatedto reduce the rank by keeping only the largestk«rdiagonal entries in the singular value matrixS, wherekis typically on the order 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes tombykandnbykrespectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space ofA. This reduced set of matrices is often denoted with a modified formula such as: Efficient LSI algorithms only compute the firstksingular values and term and document vectors as opposed to computing a full SVD and then truncating it. Note that this rank reduction is essentially the same as doingPrincipal Component Analysis(PCA) on the matrixA, except that PCA subtracts off the means. PCA loses the sparseness of theAmatrix, which can make it infeasible for large lexicons. The computedTkandDkmatrices define the term and document vector spaces, which with the computed singular values,Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors. The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of theA = T S DTequation into the equivalentD = ATT S−1equation, a new vector,d, for a query or for a new document can be created by computing a new column inAand then multiplying the new column byT S−1. The new column inAis computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document. A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors. The process of augmenting the document vector spaces for an LSI index with new documents in this manner is calledfolding in. 
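A minimal sketch of the folding-in step described above, using the relation D = A^T T S^{-1}; Tk and sk stand for the truncated term matrix and singular values, and the names are illustrative.

import numpy as np

def fold_in_document(a_new, Tk, sk):
    """Compute the k-dimensional vector for a new document or query.
    a_new: length-m weighted term vector (same local/global weighting as the index).
    Tk: m x k term matrix; sk: length-k singular values, so d_new = a_new^T Tk Sk^{-1}."""
    return (a_new @ Tk) / sk

# Terms that were unknown when the original SVD was computed simply receive zero
# weight in a_new and are ignored, as described above.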
Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix, and the SVD, must be recomputed or an incremental update method (such as the one described in[14]) is needed. It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[36]Below are some other ways in which LSI is being used: LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[51] Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[52]However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open sourcegensimsoftware package.[53] Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[54]However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.[55]Checking the proportion of variance retained, similar toPCAorfactor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[56]When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality. 
Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.
https://en.wikipedia.org/wiki/Latent_semantic_analysis
Information retrieval(IR) incomputingandinformation scienceis the task of identifying and retrievinginformation systemresources that are relevant to aninformation need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based onfull-textor other content-based indexing. Information retrieval is thescience[1]of searching for information in a document, searching for documents themselves, and also searching for themetadatathat describes data, and fordatabasesof texts, images or sounds. Automated information retrieval systems are used to reduce what has been calledinformation overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents.Web search enginesare the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees ofrelevance. An object is an entity that is represented by information in a content collection ordatabase. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. Thisrankingof results is a key difference of information retrieval searching compared to database searching.[2] Depending on theapplicationthe data objects may be, for example, text documents, images,[3]audio,[4]mind maps[5]or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates ormetadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6] there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute The idea of using computers to search for relevant pieces of information was popularized in the articleAs We May ThinkbyVannevar Bushin 1945.[7]It would appear that Bush was inspired by patents for a 'statistical machine' – filed byEmanuel Goldbergin the 1920s and 1930s – that searched for documents stored on film.[8]The first description of a computer searching for information was described by Holmstrom in 1948,[9]detailing an early mention of theUnivaccomputer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedyDesk Set. In the 1960s, the first large information retrieval research group was formed byGerard Saltonat Cornell. By the 1970s several different retrieval techniques had been shown to perform well on smalltext corporasuch as the Cranfield collection (several thousand documents).[7]Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. 
In 1992, the US Department of Defense along with theNational Institute of Standards and Technology(NIST), cosponsored theText Retrieval Conference(TREC) as part of the TIPSTER text program. The aim of this was to look into the information retrieval community by supplying the infrastructure that was needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods thatscaleto huge corpora. The introduction ofweb search engineshas boosted the need for very large scale retrieval systems even further. By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such asAltaVista(1995) andYahoo!(1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding ofGoogle, which introduced thePageRankalgorithm,[10]using the web’s hyperlink structure to assess page importance and improve relevance ranking. During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009,MicrosoftlaunchedBing, introducing features that would later incorporatesemanticweb technologies through the development of its Satori knowledge base. Academic analysis[11]have highlighted Bing’s semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing. A major leap occurred in 2018, when Google deployedBERT(BidirectionalEncoderRepresentations fromTransformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12]BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13] Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as theTextREtrievalConference (TREC), initiated in 1992, and more recent evaluation frameworks Microsoft MARCO(MAchineReadingCOmprehension) (2019)[14]became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15] As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes:sparse,dense, andhybridmodels. 
Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16] Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17] Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models balances scalability, relevance, and efficiency in retrieval systems.[18] As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come to the fore. Research is now focused not just on relevance and efficiency, but also on transparency, accountability, and user trust in retrieval algorithms. Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category): Methods/Techniques in which information retrieval techniques are employed include: In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. Common retrieval models can be categorized along two dimensions: the mathematical basis and the properties of the model. In addition to these theoretical distinctions, modern information retrieval models are also categorized according to how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.[19] This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for information retrieval models.[23][20] The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval[clarification needed] or top-k retrieval, include precision and recall. All measures assume a ground-truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
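A minimal sketch of precision and recall against a ground-truth relevance judgment; the document identifiers are invented for illustration.

def precision_recall(retrieved, relevant):
    """retrieved: ranked list of document ids returned for a query.
    relevant: set of ids judged relevant (the ground truth)."""
    retrieved_set = set(retrieved)
    hits = len(retrieved_set & relevant)
    precision = hits / len(retrieved_set) if retrieved_set else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: precision_recall(["d3", "d7", "d1"], {"d1", "d2"}) returns (1/3, 1/2).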
https://en.wikipedia.org/wiki/Information_retrieval#Indexing
MapReduceis aprogramming modeland an associated implementation for processing and generatingbig datasets with aparallelanddistributedalgorithm on acluster.[1][2][3] A MapReduce program is composed of amapprocedure, which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and areducemethod, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing bymarshallingthe distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing forredundancyandfault tolerance. The model is a specialization of thesplit-apply-combinestrategy for data analysis.[4]It is inspired by themapandreducefunctions commonly used infunctional programming,[5]although their purpose in the MapReduce framework is not the same as in their original forms.[6]The key contributions of the MapReduce framework are not the actual map and reduce functions (which, for example, resemble the 1995Message Passing Interfacestandard's[7]reduce[8]andscatter[9]operations), but the scalability and fault-tolerance achieved for a variety of applications due to parallelization. As such, asingle-threadedimplementation of MapReduce is usually not faster than a traditional (non-MapReduce) implementation; any gains are usually only seen withmulti-threadedimplementations on multi-processor hardware.[10]The use of this model is beneficial only when the optimized distributed shuffle operation (which reduces network communication cost) and fault tolerance features of the MapReduce framework come into play. Optimizing the communication cost is essential to a good MapReduce algorithm.[11] MapReducelibrarieshave been written in many programming languages, with different levels of optimization. A popularopen-sourceimplementation that has support for distributed shuffles is part ofApache Hadoop. The name MapReduce originally referred to the proprietaryGoogletechnology, but has since become ageneric trademark. By 2014, Google was no longer using MapReduce as its primarybig dataprocessing model,[12]and development onApache Mahouthad moved on to more capable and less disk-oriented mechanisms that incorporated full map and reduce capabilities.[13] MapReduce is a framework for processingparallelizableproblems across large datasets using a large number of computers (nodes), collectively referred to as acluster(if all nodes are on the same local network and use similar hardware) or agrid(if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Processing can occur on data stored either in afilesystem(unstructured) or in adatabase(structured). MapReduce can take advantage of the locality of data, processing it near the place it is stored in order to minimize communication overhead. A MapReduce framework (or system) is usually composed of three operations (or steps): MapReduce allows for the distributed processing of the map and reduction operations. Maps can be performed in parallel, provided that each mapping operation is independent of the others; in practice, this is limited by the number of independent data sources and/or the number of CPUs near each source. 
Similarly, a set of 'reducers' can perform the reduction phase, provided that all outputs of the map operation that share the same key are presented to the same reducer at the same time, or that the reduction function isassociative. While this process often appears inefficient compared to algorithms that are more sequential (because multiple instances of the reduction process must be run), MapReduce can be applied to significantly larger datasets than a single"commodity" servercan handle – a largeserver farmcan use MapReduce to sort apetabyteof data in only a few hours.[14]The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data are still available. Another way to look at MapReduce is as a 5-step parallel and distributed computation: These five steps can be logically thought of as running in sequence – each step starts only after the previous step is completed – although in practice they can be interleaved as long as the final result is not affected. In many situations, the input data might have already been distributed ("sharded") among many different servers, in which case step 1 could sometimes be greatly simplified by assigning Map servers that would process the locally present input data. Similarly, step 3 could sometimes be sped up by assigning Reduce processors that are as close as possible to the Map-generated data they need to process. TheMapandReducefunctions ofMapReduceare both defined with respect to data structured in (key, value) pairs.Maptakes one pair of data with a type in onedata domain, and returns a list of pairs in a different domain: Map(k1,v1)→list(k2,v2) TheMapfunction is applied in parallel to every pair (keyed byk1) in the input dataset. This produces a list of pairs (keyed byk2) for each call. After that, the MapReduce framework collects all pairs with the same key (k2) from all lists and groups them together, creating one group for each key. TheReducefunction is then applied in parallel to each group, which in turn produces a collection of values in the same domain: Reduce(k2, list (v2))→list((k3, v3))[15] EachReducecall typically produces either one key value pair or an empty return, though one call is allowed to return more than one key value pair. The returns of all calls are collected as the desired result list. Thus the MapReduce framework transforms a list of (key, value) pairs into another list of (key, value) pairs.[16]This behavior is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combinesallthe values returned by map. It isnecessary but not sufficientto have implementations of the map and reduce abstractions in order to implement MapReduce. Distributed implementations of MapReduce require a means of connecting the processes performing the Map and Reduce phases. This may be adistributed file system. Other options are possible, such as direct streaming from mappers to reducers, or for the mapping processors to serve up their results to reducers that query them. The canonical MapReduce example counts the appearance of each word in a set of documents:[17] Here, each document is split into words, and each word is counted by themapfunction, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call toreduce. 
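Since the word-count listing itself is not reproduced in this text, the following is a hedged Python sketch of the two user-defined functions, together with a single-process stand-in for the framework's shuffle step; it is illustrative rather than the pseudocode of the cited paper.

from collections import defaultdict

def map_fn(document):
    """Emit (word, 1) for every word in one document."""
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Sum the partial counts collected for one word."""
    yield (word, sum(counts))

def run(documents):
    """A single-process stand-in for the framework's shuffle and reduce steps."""
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(kv for key, values in groups.items() for kv in reduce_fn(key, values))

print(run(["to be or not to be"]))   # {'to': 2, 'be': 2, 'or': 1, 'not': 1}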
Thus, this function just needs to sum all of its input values to find the total appearances of that word. As another example, imagine that for a database of 1.1 billion people, one would like to compute the average number of social contacts a person has according to age. InSQL, such a query could be expressed as: Using MapReduce, theK1key values could be the integers 1 through 1100, each representing a batch of 1 million records, theK2key value could be a person's age in years, and this computation could be achieved using the following functions: Note that in theReducefunction,Cis the count of people having in total N contacts, so in theMapfunction it is natural to writeC=1, since every output pair is referring to the contacts of one single person. The MapReduce system would line up the 1100 Map processors, and would provide each with its corresponding 1 million input records. The Map step would produce 1.1 billion(Y,(N,1))records, withYvalues ranging between, say, 8 and 103. The MapReduce System would then line up the 96 Reduce processors by performing shuffling operation of the key/value pairs due to the fact that we need average per age, and provide each with its millions of corresponding input records. The Reduce step would result in the much reduced set of only 96 output records(Y,A), which would be put in the final result file, sorted byY. The count info in the record is important if the processing is reduced more than one time. If we did not add the count of the records, the computed average would be wrong, for example: If we reduce files#1and#2, we will have a new file with an average of 9 contacts for a 10-year-old person ((9+9+9+9+9)/5): If we reduce it with file#3, we lose the count of how many records we've already seen, so we end up with an average of 9.5 contacts for a 10-year-old person ((9+10)/2), which is wrong. The correct answer is 9.166= 55 / 6 = (9×3+9×2+10×1)/(3+2+1). Software framework architectureadheres toopen-closed principlewhere code is effectively divided into unmodifiablefrozen spotsandextensiblehot spots. The frozen spot of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are: Theinput readerdivides the input into appropriate size 'splits' (in practice, typically, 64 MB to 128 MB) and the framework assigns one split to eachMapfunction. Theinput readerreads data from stable storage (typically, adistributed file system) and generates key/value pairs. A common example will read a directory full of text files and return each line as a record. TheMapfunction takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and the number of instances of that word in the line as the value. EachMapfunction output is allocated to a particularreducerby the application'spartitionfunction forshardingpurposes. Thepartitionfunction is given the key and the number of reducers and returns the index of the desiredreducer. A typical default is tohashthe key and use the hash valuemodulothe number ofreducers. 
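A hedged sketch of such a default partitioner; a stable hash is used here so the assignment of keys to reducers is reproducible across processes (an implementation detail assumed, not taken from the source).

import zlib

def partition(key, num_reducers):
    """Default-style partitioner: hash the key and take it modulo the number of reducers.
    A stable hash (CRC32) is used so the same key always lands on the same reducer,
    even across processes; Python's built-in hash() of strings is salted per process."""
    return zlib.crc32(str(key).encode("utf-8")) % num_reducers

# Example: with 4 reducers, every (key, value) pair whose key maps to the same
# residue is shuffled to the same reducer, e.g. partition("apple", 4) is in {0, 1, 2, 3}.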
It is important to pick a partition function that gives an approximately uniform distribution of data per shard forload-balancingpurposes, otherwise the MapReduce operation can be held up waiting for slow reducers to finish (i.e. the reducers assigned the larger shares of the non-uniformly partitioned data). Between the map and reduce stages, the data areshuffled(parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced them to the shard in which they will be reduced. The shuffle can sometimes take longer than the computation time depending on network bandwidth, CPU speeds, data produced and time taken by map and reduce computations. The input for eachReduceis pulled from the machine where theMapran and sorted using the application'scomparisonfunction. The framework calls the application'sReducefunction once for each unique key in the sorted order. TheReducecan iterate through the values that are associated with that key and produce zero or more outputs. In the word count example, theReducefunction takes the input values, sums them and generates a single output of the word and the final sum. TheOutput Writerwrites the output of theReduceto the stable storage. Properties ofmonoidsare the basis for ensuring the validity of MapReduce operations.[18][19] In the Algebird package[20]a Scala implementation of Map/Reduce explicitly requires a monoid class type .[21] The operations of MapReduce deal with two types: the typeAof input data being mapped, and the typeBof output data being reduced. TheMapoperation takes individual values of typeAand produces, for eacha:Aa valueb:B; TheReduceoperation requires a binary operation • defined on values of typeB; it consists of folding all availableb:Bto a single value. From a basic requirements point of view, any MapReduce operation must involve the ability to arbitrarily regroup data being reduced. Such a requirement amounts to two properties of the operation •: The second property guarantees that, when parallelized over multiple nodes, the nodes that don't have any data to process would have no impact on the result. These two properties amount to having amonoid(B, •,e) on values of typeBwith operation • and with neutral elemente. There's no requirements on the values of typeA; an arbitrary functionA→Bcan be used for theMapoperation. This means that we have acatamorphismA*→ (B, •,e). HereA*denotes aKleene star, also known as the type of lists overA. TheShuffleoperation per se is not related to the essence of MapReduce; it's needed to distribute calculations over the cloud. It follows from the above that not every binaryReduceoperation will work in MapReduce. Here are the counter-examples: MapReduce programs are not guaranteed to be fast. The main benefit of this programming model is to exploit the optimized shuffle operation of the platform, and only having to write theMapandReduceparts of the program. In practice, the author of a MapReduce program however has to take the shuffle step into consideration; in particular the partition function and the amount of data written by theMapfunction can have a large impact on the performance and scalability. Additional modules such as theCombinerfunction can help to reduce the amount of data written to disk, and transmitted over the network. MapReduce applications can achieve sub-linear speedups under specific circumstances.[22] When designing a MapReduce algorithm, the author needs to choose a good tradeoff[11]between the computation and the communication costs. 
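As a small illustration of the monoid requirement, the averaging example above can be phrased as (sum, count) pairs with componentwise addition and neutral element (0, 0); because the operation is associative with a neutral element, the regrouping performed by the shuffle cannot change the result. The names below are illustrative.

from functools import reduce

# The "average" monoid: values of type B are (sum, count) pairs.
IDENTITY = (0, 0)                                   # the neutral element e

def combine(x, y):                                  # the associative operation
    return (x[0] + y[0], x[1] + y[1])

def to_b(contacts):                                 # the Map step A -> B for one person
    return (contacts, 1)

# Because combine is associative and IDENTITY is neutral, the pairs may be reduced
# in any grouping (per file, per node, globally) without changing the final average:
parts = [to_b(c) for c in [9, 9, 9, 9, 9, 10]]      # the six records from the example above
total, count = reduce(combine, parts, IDENTITY)
print(total / count)                                # 9.166... = 55 / 6, the correct answer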
Communication cost often dominates the computation cost,[11][22]and many MapReduce implementations are designed to write all communication to distributed storage for crash recovery. In tuning performance of MapReduce, the complexity of mapping, shuffle, sorting (grouping by the key), and reducing has to be taken into account. The amount of data produced by the mappers is a key parameter that shifts the bulk of the computation cost between mapping and reducing. Reducing includes sorting (grouping of the keys) which has nonlinear complexity. Hence, small partition sizes reduce sorting time, but there is a trade-off because having a large number of reducers may be impractical. The influence of split unit size is marginal (unless chosen particularly badly, say <1MB). The gains from some mappers reading load from local disks, on average, is minor.[23] For processes that complete quickly, and where the data fits into main memory of a single machine or a small cluster, using a MapReduce framework usually is not effective. Since these frameworks are designed to recover from the loss of whole nodes during the computation, they write interim results to distributed storage. This crash recovery is expensive, and only pays off when the computation involves many computers and a long runtime of the computation. A task that completes in seconds can just be restarted in the case of an error, and the likelihood of at least one machine failing grows quickly with the cluster size. On such problems, implementations keeping all data in memory and simply restarting a computation on node failures or —when the data is small enough— non-distributed solutions will often be faster than a MapReduce system. MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network. Each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node (similar to the master server in theGoogle File System) records the node as dead and sends out the node's assigned work to other nodes. Individual operations useatomicoperations for naming file outputs as a check to ensure that there are not parallel conflicting threads running. When files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing forside-effects). The reduce operations operate much the same way. Because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as the node holding the data being operated on. This property is desirable as it conserves bandwidth across the backbone network of the datacenter. Implementations are not necessarily highly reliable. For example, in older versions ofHadooptheNameNodewas asingle point of failurefor the distributed filesystem. Later versions of Hadoop have high availability with an active/passive failover for the "NameNode." MapReduce is useful in a wide range of applications, including distributed pattern-based searching, distributed sorting, web link-graph reversal, Singular Value Decomposition,[24]web access log stats,inverted indexconstruction,document clustering,machine learning,[25]andstatistical machine translation. 
Moreover, the MapReduce model has been adapted to several computing environments like multi-core and many-core systems,[26][27][28]desktop grids,[29]multi-cluster,[30]volunteer computing environments,[31]dynamic cloud environments,[32]mobile environments,[33]and high-performance computing environments.[34] At Google, MapReduce was used to completely regenerate Google's index of theWorld Wide Web. It replaced the oldad hocprograms that updated the index and ran the various analyses.[35]Development at Google has since moved on to technologies such as Percolator, FlumeJava[36]andMillWheelthat offer streaming operation and updates instead of batch processing, to allow integrating "live" search results without rebuilding the complete index.[37] MapReduce's stable inputs and outputs are usually stored in adistributed file system. The transient data are usually stored on local disk and fetched remotely by the reducers. David DeWittandMichael Stonebraker, computer scientists specializing inparallel databasesandshared-nothing architectures, have been critical of the breadth of problems that MapReduce can be used for.[38]They called its interface too low-level and questioned whether it really represents theparadigm shiftits proponents have claimed it is.[39]They challenged the MapReduce proponents' claims of novelty, citingTeradataas an example ofprior artthat has existed for over two decades. They also compared MapReduce programmers toCODASYLprogrammers, noting both are "writing in alow-level languageperforming low-level record manipulation."[39]MapReduce's use of input files and lack ofschemasupport prevents the performance improvements enabled by common database system features such asB-treesandhash partitioning, though projects such asPig (or PigLatin),Sawzall,Apache Hive,[40]HBase[41]andBigtable[41][42]are addressing some of these problems. Greg Jorgensen wrote an article rejecting these views.[43]Jorgensen asserts that DeWitt and Stonebraker's entire analysis is groundless as MapReduce was never designed nor intended to be used as a database. DeWitt and Stonebraker have subsequently published a detailed benchmark study in 2009 comparing performance ofHadoop'sMapReduce andRDBMSapproaches on several specific problems.[44]They concluded that relational databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks. The MapReduce programming paradigm was also described inDanny Hillis's 1985 thesis[45]intended for use on theConnection Machine, where it was called "xapping/reduction"[46]and relied upon that machine's special hardware to accelerate both map and reduce. The dialect ultimately used for the Connection Machine, the 1986StarLisp, had parallel*mapandreduce!!,[47]which in turn was based on the 1984Common Lisp, which had non-parallelmapandreducebuilt in.[48]Thetree-likeapproach that the Connection Machine'shypercube architectureuses to executereduceinO(log⁡n){\displaystyle O(\log n)}time[49]is effectively the same as the approach referred to within the Google paper as prior work.[3]:11 In 2010 Google was granted what is described as a patent on MapReduce. The patent, filed in 2004, may cover use of MapReduce by open source software such asHadoop,CouchDB, and others. 
In Ars Technica, an editor acknowledged Google's role in popularizing the MapReduce concept, but questioned whether the patent was valid or novel.[50][51] In 2013, as part of its "Open Patent Non-Assertion (OPN) Pledge", Google pledged to only use the patent defensively.[52][53] The patent is expected to expire on 23 December 2026.[54] MapReduce tasks must be written as acyclic dataflow programs, i.e. a stateless mapper followed by a stateless reducer, executed by a batch job scheduler. This paradigm makes repeated querying of datasets difficult and imposes limitations that are felt in fields such as graph processing,[55] where iterative algorithms that revisit a single working set multiple times are the norm, and also, in the presence of disk-based data with high latency, in machine learning, where multiple passes through the data are required even though the algorithms can tolerate serial access to the data on each pass.[56]
https://en.wikipedia.org/wiki/MapReduce
Crowdsourcinginvolves a large group of dispersed participants contributing or producinggoods or services—including ideas,votes,micro-tasks, and finances—for payment or as volunteers. Contemporary crowdsourcing often involvesdigital platformsto attract and divide work between participants to achieve a cumulative result. Crowdsourcing is not limited to online activity, however, and there are various historical examples of crowdsourcing. The word crowdsourcing is aportmanteauof "crowd" and "outsourcing".[1][2][3]In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants.[4][5][6] Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increasedscalabilityof the work, as well as promotingdiversity.[7][8]Crowdsourcing methods include competitions, virtual labor markets, open online collaboration and data donation.[8][9][10][11]Some forms of crowdsourcing, such as in "idea competitions" or "innovation contests" provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g.Lego Ideas).[12][13][promotion?]Commercial platforms, such asAmazon Mechanical Turk, matchmicrotaskssubmitted by requesters to workers who perform them. Crowdsourcing is also used bynonprofit organizationsto developcommon goods, such asWikipedia.[14] The termcrowdsourcingwas coined in 2006 by two editors atWired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsourcework to the crowd", which quickly led to the portmanteau "crowdsourcing".[15]TheOxford English Dictionarygives a first use: "OED's earliest evidence for crowdsourcing is from 2006, in the writing of J. Howe."[16]The online dictionaryMerriam-Websterdefines it as: "the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers."[17] Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model."[18]Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.[19] Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem.[original research?]Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may bepraiseor intellectual satisfaction. Crowdsourcing may produce solutions fromamateursorvolunteersworking in their spare time, from experts, or from small businesses.[15] While the term "crowdsourcing" was popularized online to describe Internet-based activities,[18]some examples of projects, in retrospect, can be described as crowdsourcing. Crowdsourcing has often been used in the past as a competition to discover a solution. 
The French government proposed several of these competitions, often rewarded withMontyon Prizes.[44]These included theLeblanc process, or the Alkali prize, where a reward was provided for separating the salt from the alkali, and theFourneyron's turbine, when the first hydraulic commercial turbine was developed.[45] In response to a challenge from the French government,Nicolas Appertwon a prize for inventing a new way offood preservationthat involved sealing food in air-tight jars.[46]The British government provided a similar reward to find an easy way to determine a ship'slongitudeinthe Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in theMathematical Tables Projectas an outreach project.[47][unreliable source?]One of the largest crowdsourcing campaigns was a public design contest in 2010 hosted by the Indian government's finance ministry to create a symbol for theIndian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol based on theDevanagariscript using the letter Ra.[48] A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than what is present in one organization, and undertake problems that would have been too difficult to solve internally.[49]Crowdsourcing allows businesses to submit problems on which contributors can work—on topics such as science, manufacturing, biotech, and medicine—optionally with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks[specify]can be crowdsourced cheaply and effectively.[50] Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[51]Urban and transit planning are prime areas for crowdsourcing. For example, from 2008 to 2009, a crowdsourcing project for transit planning in Salt Lake City was created to test the public participation process.[52]Another notable application of crowdsourcing for governmentproblem-solvingisPeer-to-Patent, which was an initiative to improve patent quality in the United States through gathering public input in a structured, productive manner.[53] Researchers have used crowdsourcing systems such as Amazon Mechanical Turk or CloudResearch to aid their research projects by crowdsourcing some aspects of the research process, such asdata collection, parsing, and evaluation to the public. Notable examples include using the crowd to create speech and language databases,[54][55]to conduct user studies,[56]and to run behavioral science surveys and experiments.[57]Crowdsourcing systems provided researchers with the ability to gather large amounts of data, and helped researchers to collect data from populations and demographics they may not have access to locally.[58][failed verification] Artists have also used crowdsourcing systems. 
In a project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[59] Artist Sam Brown leveraged the crowd by asking visitors of his website explodingdog to send him sentences to use as inspiration for his paintings.[60] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[61] As with other types of uses, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[62] In navigation systems, crowdsourcing from 100 million drivers was used by INRIX to collect users' driving times to provide better GPS routing and real-time traffic updates.[63] The use of crowdsourcing in medical and health research is increasing steadily. The process involves outsourcing tasks or gathering input from a large, diverse group of people, often through digital platforms, to contribute to medical research, diagnostics, data analysis, promotion, and various healthcare-related initiatives. This community-based approach has been used to improve medical services, from funding individual medical cases and innovative devices to supporting research, community health initiatives, and crisis responses.[64] In 2011, UNAIDS initiated a participatory online policy project to better engage young people in decision-making processes related to AIDS.[65] The project acquired data from 3,497 participants across seventy-nine countries through online and offline forums. The outcomes emphasized the importance of youth perspectives in shaping strategies to address AIDS effectively, providing insight for future community empowerment initiatives. Another approach is sourcing results of clinical algorithms from the collective input of participants.[66] Researchers from SPIE developed a crowdsourcing tool to train individuals, especially middle and high school students in South Korea, to diagnose malaria-infected red blood cells. Using a statistical framework, the platform combined expert diagnoses with those from minimally trained individuals, creating a gold-standard library. The objective was to quickly teach people to achieve high diagnostic accuracy without prior training. The journal Cancer Medicine conducted a review of studies published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsychINFO, and Embase.[67] The reviewed studies strongly advocate continued efforts to refine and expand crowdsourcing applications in academic scholarship. The analysis highlighted the importance of interdisciplinary collaboration and widespread dissemination of knowledge, and the review underscored the need to fully harness crowdsourcing's potential to address challenges within cancer research.[67] Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars. Olmsted wrote a brief report of the meteor shower in the local newspaper.
"As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible", Olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. Responses came pouring in from many states, along with scientists' observations sent to theAmerican Journal of Science and Arts.[68]These responses helped him to make a series of scientific breakthroughs including observing the fact that meteor showers are seen nationwide and fall from space under the influence of gravity. The responses also allowed him to approximate a velocity for the meteors.[69] A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[70]which asked internet users to browse photos taken from space and try to identify the location the picture is documenting.[71] Behavioral science In the field of behavioral science, crowdsourcing is often used to gather data and insights onhuman behavioranddecision making. Researchers may create online surveys or experiments that are completed by a large number of participants, allowing them to collect a diverse and potentially large amount of data.[57]Crowdsourcing can also be used to gather real-time data on behavior, such as through the use of mobile apps that track and record users' activities and decision making.[72]The use of crowdsourcing in behavioral science has the potential to greatly increase the scope and efficiency of research, and has been used in studies on topics such as psychology research,[73]political attitudes,[74]and social media use.[75] Energy system modelsrequire large and diversedatasets, increasingly so given the trend towards greater temporal and spatial resolution.[76]In response, there have been several initiatives to crowdsource this data. Launched in December 2009,OpenEIis acollaborativewebsiterun by the US government that providesopenenergy data.[77][78]While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[79]ThesemanticwikianddatabaseEnipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[80][81]: 184–188 Genealogicalresearch used crowdsourcing techniques long before personal computers were common. Beginning in 1942, members ofthe Church of Jesus Christ of Latter-day Saintsencouraged members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more participation, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[82] Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.[citation needed] Genetic genealogy research Genetic genealogyis a combination of traditional genealogy withgenetics. 
The rise of personal DNA testing, after the turn of the century, by companies such asGene by Gene,FTDNA,GeneTree,23andMe, andAncestry.com, has led to public and semi public databases of DNA testing using crowdsourcing techniques.Citizen scienceprojects have included support, organization, and dissemination ofpersonal DNA (genetic) testing.Similar toamateur astronomy, citizen scientists encouraged by volunteer organizations like theInternational Society of Genetic Genealogy[83]have provided valuable information and research to the professional scientific community.[84]TheGenographic Project, which began in 2005, is a research project carried out by theNational Geographic Society's scientific team to reveal patterns of human migration using crowdsourcedDNAtesting and reporting of results.[85] Another early example of crowdsourcing occurred in the field ofornithology. On 25 December 1900, Frank Chapman, an early officer of theNational Audubon Society, initiated a tradition dubbed the"Christmas Day Bird Census". The project called birders from across North America to count and record the number of birds in each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[86]This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[87]Christmas 2014 marked the National Audubon Society's 115th annualChristmas Bird Count. TheEuropean-Mediterranean Seismological Centre (EMSC)has developed a seismic detection system by monitoring the traffic peaks on its website and analyzing keywords used on Twitter.[88] Crowdsourcing is increasingly used in professional journalism. Journalists are able to organize crowdsourced information by fact checking the information, and then using the information they have gathered in their articles as they see fit.[citation needed]A daily newspaper in Sweden has successfully used crowdsourcing in investigating the home loan interest rates in the country in 2013–2014, which resulted in over 50,000 submissions.[89]A daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to revelations of atax evasionsystem by a Finnish bank. The bank executive was fired and policy changes followed.[90]TalkingPointsMemoin the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaperThe Guardiancrowdsourced the examination of hundreds of thousands of documents in 2009.[91] Data donation is a crowdsourcing approach to gather digital data. It is used by researchers and organizations to gain access to data from online platforms, websites, search engines and apps and devices. Data donation projects usually rely on participants volunteering their authentic digital profile information. Examples include: Crowdsourcing is used in large scale media, such as thecommunity notessystem of the X platform. 
Crowdsourcing on such platforms is thought to be effective in combating partisan misinformation on social media when certain conditions are met.[99][100]Success may depend on trust in fact-checking sources, the ability to present information that challenges previous beliefs without causing excessive dissonance, and having a sufficiently large and diverse crowd of participants. Effective crowdsourcing interventions must navigate politically polarized environments where trusted sources may be less inclined to provide dissonant opinions. By leveraging network analysis to connect users with neighboring communities outside their ideological echo chambers, crowdsourcing can provide an additional layer of content moderation. Crowdsourcing public policy and the production of public services is also referred to ascitizen sourcing. While some scholars argue crowdsourcing for this purpose as a policy tool[101]or a definite means of co-production,[102]others question that and argue that crowdsourcing should be considered just as a technological enabler that simply increases speed and ease of participation.[103]Crowdsourcing can also play a role indemocratization.[104] The first conference focusing on Crowdsourcing for Politics and Policy took place atOxford University, under the auspices of the Oxford Internet Institute in 2014. Research has emerged since 2012[105]which focused on the use of crowdsourcing for policy purposes.[106][107]These include experimentally investigating the use of Virtual Labor Markets for policy assessment,[108]and assessing the potential for citizen involvement in process innovation for public administration.[109] Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement.[citation needed]Iceland crowdsourced their constitution reform process in 2011, and Finland has crowdsourced several law reform processes to address their off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws.[citation needed]The crowdsourced information and resolutions would then be passed on to legislators to refer to when making a decision, allowing citizens to contribute to public policy in a more direct manner.[110][111]Palo Altocrowdsources feedback for its Comprehensive City Plan update in a process started in 2015.[112]The House of Representatives in Brazil has used crowdsourcing in policy-reforms.[113] NASAused crowdsourcing to analyze large sets of images. As part of theOpen Government Initiativeof theObama Administration, theGeneral Services Administrationcollected and amalgamated suggestions for improving federal websites.[113] For part of the Obama andTrump Administrations, theWe the Peoplesystem collected signatures on petitions, which were entitled to an official response from theWhite Houseonce a certain number had been reached. Several U.S. federal agencies raninducement prize contests, including NASA and theEnvironmental Protection Agency.[114][113] Crowdsourcing has been used extensively for gathering language-related data. For dictionary work, crowdsourcing was applied over a hundred years ago by theOxford English Dictionaryeditors using paper and postage. It has also been used for collecting examples ofproverbson a specific topic (e.g.religious pluralism) for a printed journal.[115]Crowdsourcing language-related data online has proven very effective and many dictionary compilation projects used crowdsourcing. 
It is used particularly for specialist topics and languages that are not well documented, such as the Oromo language.[116] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[117] A slightly different form of crowdsourcing for language data was the online creation of scientific and mathematical terminology for American Sign Language.[118] In linguistics, crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[119] Implicit crowdsourcing on social media has also been used to approximate sociolinguistic data efficiently. Reddit conversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect, which were then used to map the extent of the speaker population. The results could roughly approximate large-scale surveys on the subject without engaging in field interviews.[120] Mining publicly available social media conversations can be used as a form of implicit crowdsourcing to approximate the geographic extent of speaker dialects.[120] Proverb collection is also being done via crowdsourcing on the Web, most notably for the Pashto language of Afghanistan and Pakistan.[121][122][123] Crowdsourcing has been extensively used to collect high-quality gold standards for creating automatic systems in natural language processing (e.g. named entity recognition, entity linking).[124] Organizations often leverage crowdsourcing to gather ideas for new products as well as to refine established products.[41] Lego allows users to work on new product designs while conducting requirements testing. Any user can provide a design for a product, and other users can vote on it. Once a submitted product has received 10,000 votes, it is formally reviewed in stages and, provided no impediments such as legal flaws are identified, goes into production. The creator receives royalties from the net income.[125] Labelling new products as "customer-ideated" through crowdsourcing initiatives, as opposed to not specifying the source of the design, leads to a substantial increase in the actual market performance of the products. Merely highlighting the source of the design to customers, in particular attributing the product to crowdsourcing efforts from user communities, can lead to a significant boost in product sales. Consumers perceive "customer-ideated" products as more effective in addressing their needs, leading to a quality inference. The design mode associated with crowdsourced ideas is considered superior in generating promising new products, contributing to the observed increase in market performance.[126] Crowdsourcing is widely used by businesses to source feedback and suggestions on how to improve their products and services.[41] Homeowners can use Airbnb to list their accommodation or unused rooms. Owners set their own nightly, weekly, and monthly rates and accommodations. The business, in turn, charges guests and hosts a fee. Guests usually end up spending between $9 and $15;[127] they have to pay a booking fee every time they book a room. The landlord, in turn, pays a service fee on the amount due.
The company has 1,500 properties in 34,000 cities in more than 190 countries.[citation needed] Crowdsourcing is frequently used in market research as a way to gather insights and opinions from a large number of consumers.[128]Companies may create online surveys or focus groups that are open to the general public, allowing them to gather a diverse range of perspectives on their products or services. This can be especially useful for companies seeking to understand the needs and preferences of a particular market segment or to gather feedback on the effectiveness of their marketing efforts. The use of crowdsourcing in market research allows companies to quickly and efficiently gather a large amount of data and insights that can inform their business decisions.[129] Internet and digital technologies have massively expanded the opportunities for crowdsourcing. However, the effect of user communication and platform presentation can have a major bearing on the success of an online crowdsourcing project.[19]The crowdsourced problem can range from huge tasks (such as finding alien life or mapping earthquake zones) or very small (identifying images). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, and subjects that people find sympathetic.[145] Crowdsourcing can either take an explicit or an implicit route: In his 2013 book,Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:[147] Ivo Blohm identifies four types of Crowdsourcing Platforms: Microtasking, Information Pooling, Broadcast Search, and Open Collaboration. They differ in the diversity and aggregation of contributions that are created. The diversity of information collected can either be homogenous or heterogenous. The aggregation of information can either be selective or integrative.[definition needed][148]Some common categories of crowdsourcing have been used effectively in the commercial world include crowdvoting, crowdsolving,crowdfunding,microwork,creative crowdsourcing,crowdsource workforce management, andinducement prize contests.[149] In their conceptual review of the crowdsourcing,Linus Dahlander, Lars Bo Jeppesen, and Henning Piezunka distinguish four steps in the crowdsourcing process: Define, Broadcast, Attract, and Select.[150] Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. Some crowdsourcing tools and platforms allow participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most "like" votes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate votes.[citation needed]In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions.[citation needed]They also produce results quicker. Ranking algorithms have proven to be at least 10 times faster than manual stack ranking.[151]One drawback, however, is that ranking algorithms are more difficult to understand than vote counting. 
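As an illustration of the pairwise-comparison approach mentioned above, the sketch below ranks contributions with an Elo-style update; the text does not say which algorithm crowdsourcing companies actually use, so the scoring rule and its parameters (initial rating 1000, K = 32) are assumptions made only for this example.

```python
def elo_rank(contributions, judgements, k=32.0, initial=1000.0):
    """Rank contributions from pairwise 'A beats B' crowd judgements (Elo-style)."""
    rating = {c: initial for c in contributions}
    for winner, loser in judgements:
        # Expected score of the winner under the current ratings.
        expected = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400.0))
        # The less expected the win, the larger the rating adjustment.
        rating[winner] += k * (1.0 - expected)
        rating[loser] -= k * (1.0 - expected)
    return sorted(contributions, key=rating.get, reverse=True)

# Three ideas; "idea_c" arrived late but wins every comparison it appears in.
ideas = ["idea_a", "idea_b", "idea_c"]
votes = [("idea_a", "idea_b"), ("idea_c", "idea_a"), ("idea_c", "idea_b")]
print(elo_rank(ideas, votes))   # ['idea_c', 'idea_a', 'idea_b']
```

Unlike "like" counting, the ranking depends only on the outcomes of the comparisons, so a strong contribution submitted late (idea_c in the example) can still finish first.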
The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[152] Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have crowdsourced a new pizza, bottle design, beer, and song respectively.[153] A website called Threadless selected the T-shirts it sold by having users provide designs and vote on the ones they liked, which were then printed and made available for purchase.[18] The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[154] and Lt. Governor Gavin Newsom, is an example of modern-day crowd voting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to involve the greater public in relevant political discussions and highlight the specific topics with which people are most concerned. Crowdvoting's value in the movie industry was shown when in 2009 a crowd accurately predicted the success or failure of a movie based on its trailer,[155][156] a feat that was replicated in 2013 by Google.[157] On Reddit, users collectively rate web content, discussions and comments, as well as questions posed to persons of interest in "AMA" and AskScience online interviews.[cleanup needed] In 2017, Project Fanchise purchased a team in the Indoor Football League and created the Salt Lake Screaming Eagles, a fan-run team. Using a mobile app, the fans voted on the day-to-day operations of the team, the mascot name, the signing of players, and even offensive play calling during games.[158] Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[159] Crowdfunding has been used for both commercial and charitable purposes.[160] The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[161] Individuals, businesses, and entrepreneurs can showcase their businesses and projects by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images.[citation needed] Funders make monetary contributions for numerous reasons. The dilemma for equity crowdfunding in the US as of 2012 was that the Securities and Exchange Commission was still refining its regulations and had until 1 January 2013 to tweak the fundraising methods. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they traded. Advocates of regulation claimed that crowdfunding would open the floodgates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny-stock "cold-call cowboys". The process allowed for up to $1 million to be raised without some of the regulations being involved.
Companies under the then-current proposal would have exemptions available and be able to raise capital from a larger pool of persons, which can include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These people are often recruited from social networks, where the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests that are over a million dollars for software such as Trampoline Systems, which used it to finance the commercialization of their new software.[citation needed] Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. An example of these competitions includes an event like IBM's 2006 "Innovation Jam", attended by over 140,000 international participants and yielded around 46,000 ideas.[163][164]Another example is theNetflix Prizein 2009. People were asked to come up with arecommendation algorithmthat is more accurate than Netflix's current algorithm. It had a grand prize of US$1,000,000, and it was given to a team which designed an algorithm that beat Netflix's own algorithm for predicting ratings by 10.06%.[citation needed] Another example of competition-based crowdsourcing is the 2009DARPA balloonexperiment, whereDARPAplaced 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. A collaboration of efforts was required to complete the challenge quickly and in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in their team.[165]A similar challenge was theTag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using a similar incentive scheme to the one used in the balloon challenge.[166] Usingopen innovationplatforms is an effective way to crowdsource people's thoughts and ideas for research and development. The companyInnoCentiveis a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize that ranges from $10,000 to $100,000 per challenge.[18]InnoCentive, ofWaltham, Massachusetts, and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems. 
The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing: a community of 20,000 automotive engineers, designers, and enthusiasts that compete to build off-road rally trucks.[167] Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet can still be very effective in completing certain tasks.[citation needed] Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely, from which a third party gains information on another topic based on the users' actions.[18] A good example of implicit crowdsourcing is the ESP game, where users find words to describe Google images, which are then used as metadata for the images. Another popular use of implicit crowdsourcing is through reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then provides CAPTCHAs from old books that cannot be deciphered by computers, to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans, but often very difficult for computers.[146] Piggyback crowdsourcing is seen most frequently on websites such as Google that data-mine a user's search history and websites to discover keywords for ads, spelling corrections, and synonyms. In this way, users are unintentionally helping to modify existing systems, such as Google Ads.[56] The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd as a whole, several studies have examined various specific online platforms. Amazon Mechanical Turk has received a great deal of attention in particular. A study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[186] More recent studies have found that U.S. Mechanical Turk workers are approximately 58% female, and nearly 67% of workers are in their 20s and 30s.[57][187][188][189] Close to 80% are White, and 9% are Black. MTurk workers are less likely to be married or have children as compared to the general population. In the US population over 18, 45% are unmarried, while the proportion of unmarried workers on MTurk is around 57%. Additionally, about 55% of MTurk workers do not have any children, which is significantly higher than the general population. Approximately 68% of U.S. MTurk workers are employed, compared to 60% in the general population. MTurk workers in the U.S. are also more likely to have a four-year college degree (35%) compared to the general population (27%). Politics within the U.S. sample of MTurk skew liberal, with 46% Democrats, 28% Republicans, and 26% "other". MTurk workers are also less religious than the U.S. population, with 41% religious, 20% spiritual, 21% agnostic, and 16% atheist.
The demographics of Microworkers.com differ from Mechanical Turk in that the US and India together accounting for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.[190] Another study of the demographics of the crowd atiStockphotofound a crowd that was largely white, middle- to upper-class, higher educated, worked in a so-called "white-collar job" and had a high-speed Internet connection at home.[191]In a crowd-sourcing diary study of 30 days in Europe, the participants were predominantly higher educated women.[144] Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[191][192][193][194]Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to marginalization of crowd labor rights.[195] Gregory Saxton et al. studied the role of community users, among other elements, during his content analysis of 103 crowdsourcing organizations. They developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.[196] Many researchers suggest that bothintrinsicandextrinsicmotivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors.[111][191][192][194][197][198][199][200][201]For example, people employed in a full-time position rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[198] Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment contributors experience through their participation. These motivations include: skill variety, task identity, taskautonomy, direct feedback from the job, and taking the job as apastime.[citation needed]Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by a possibility to make social impact, contribute to social change, and help their peers.[197] Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[202]such as thealtruisticmotivations ofonline volunteers. 
Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to help researchers identify tumor cells than when they were not told the purpose of their task. However, of those who completed the task, quality of output did not depend on the framing.[203] Motivation in crowdsourcing is often a mix of intrinsic and extrinsic factors.[204] In a crowdsourced law-making project, the crowd was motivated by both intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[111] Participants in online research studies report their motivation as both intrinsic enjoyment and monetary gain.[205][206][188] Another form of social motivation is prestige or status. The International Children's Digital Library recruited volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, whereas workers who pay close attention may be rewarded by gaining access to higher-paying tasks or being on an "Approved List" of workers. This system may incentivize higher-quality work.[207] However, this system only works when requesters reject bad work, which many do not.[208] Despite the potential global reach of IT applications online, recent research illustrates that differences in location[which?] affect participation outcomes in IT-mediated crowds.[209] While there is plenty of anecdotal evidence illustrating the potential of crowdsourcing and the benefits that organizations have derived, there is scientific evidence that crowdsourcing initiatives often fail.[210] At least six major topics cover the limitations and controversies about crowdsourcing: Crowdsourcing initiatives often fail to attract sufficient or beneficial contributions. The vast majority of crowdsourcing initiatives hardly attract contributions; an analysis of thousands of organizations' crowdsourcing initiatives illustrates that only the 90th percentile of initiatives attracts more than one contribution a month.[201] While crowdsourcing initiatives may be effective in isolation, when faced with competition they may fail to attract sufficient contributions. Nagaraj and Piezunka (2024) illustrate that OpenStreetMap struggled to attract contributions once Google Maps entered a country. Crowdsourcing allows anyone to participate, allowing for many unqualified participants and resulting in large quantities of unusable contributions.[211] Companies, or additional crowdworkers, then have to sort through the low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[212] For example, susceptibility to faulty results can be caused by targeted, malicious work efforts.
Since crowdworkers completing microtasks are paid per task, a financial incentive often causes workers to complete tasks quickly rather than well.[57] Verifying responses is time-consuming, so employers often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.[213] Some companies, like CloudResearch, control data quality by repeatedly vetting crowdworkers to ensure they are paying attention and providing high-quality work.[208] Crowdsourcing quality is also impacted by task design. Lukyanenko et al.[214] argue that the prevailing practice of modeling crowdsourcing data-collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, hence less common).[clarification needed] Further, greater overall accuracy is expected when participants can provide free-form data compared to tasks in which they select from constrained choices. In behavioral science research, it is often recommended to include open-ended responses, in addition to other forms of attention checks, to assess data quality.[215][216] Just as limiting, often the crowd lacks the skills or expertise needed to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between expert and anonymous online crowd evaluations of business models showed that an anonymous online crowd cannot evaluate business models to the same level as experts.[217] In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their responses represent only a small fraction of the workers compared to consistent but incorrect crowd members.[218] However, if the task is "intermediate" in its difficulty, estimating crowdworkers' skills and intentions and leveraging them for inferring true responses works well,[219] albeit with an additional computation cost.[citation needed] Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would be otherwise achievable. However, due to limited access to the Internet, participation in less-developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low amount of pay is not a strong motivation for most users in these countries. These factors lead to a bias in the population pool towards users in medium-developed countries, as measured by the Human Development Index.[220] Participants in these countries sometimes masquerade as U.S. participants to gain access to certain tasks. This led to the "bot scare" on Amazon Mechanical Turk in 2018, when researchers thought bots were completing research surveys due to the lower quality of responses originating from medium-developed countries.[216][221] The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures.
This results in a long-tail power law distribution of completion times.[222]Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[58]Even when tasks are completed, crowdsourcing does not always produce quality results. WhenFacebookbegan its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[223]One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final product, and workers rarely interacts with the final client in the process. This can decrease the quality of product as client interaction is considered to be a vital part of the design process.[224] An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowd-workers are left to depend on their own knowledge and means to complete tasks.[212] A crowdsourced project is usually expected to be unbiased by incorporating a large population of participants with a diverse background. However, most of the crowdsourcing works are done by people who are paid or directly benefit from the outcome (e.g. most ofopen sourceprojects working onLinux). In many other cases, the end product is the outcome of a single person's endeavor, who creates the majority of the product, while the crowd only participates in minor details.[225] To make an idea turn into a reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.[citation needed] The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project.[226]As an effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and reaching milestones rather than dedicating time to get it started. Overall, the simplified access to capital can save time to start projects and potentially increase the efficiency of projects.[citation needed] Others argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[226]This leads to entrepreneurs losing possible experience convincing investors who are wary of potential risks in investing because they do not depend on one single investor for the survival of their project. Instead of being forced to assess risks and convince large institutional investors on why their project can be successful, wary investors can be replaced by others who are willing to take on the risk. 
Some translation companies and translation tool consumers pretend to use crowdsourcing as a means for drastically cutting costs, instead of hiringprofessional translators. This situation has been systematically denounced byIAPTIand other translator organizations.[227] The raw number of ideas that get funded and the quality of the ideas is a large controversy over the issue of crowdsourcing. Proponents argue that crowdsourcing is beneficial because it allows the formation of startups with niche ideas that would not surviveventure capitalistorangelfunding, which are oftentimes the primary investors in startups. Many ideas are scrapped in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[228] Crowdsourcing allows those who would benefit from the project to fund and become a part of it, which is one way for small niche ideas get started.[229]However, when the number of projects grows, the number of failures also increases. Crowdsourcing assists the development of niche and high-risk projects due to a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower return, and lower levels of success.[230] Because crowdworkers are considered independent contractors rather than employees, they are not guaranteedminimum wage. In practice, workers using Amazon Mechanical Turk generally earn less than minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[186][231]In 2018, a survey of 2,676 Amazon Mechanical Turk workers doing 3.8 million tasks found that the median hourly wage was approximately $2 per hour, and only 4% of workers earned more than the federal minimum wage of $7.25 per hour.[232]Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical.[58][233]However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited and are ready to participate in crowdsourcing activities in the future.[234]A more recent study using stratified random sampling to access a representative sample of Mechanical Turk workers found that the U.S. MTurk population is financially similar to the general population.[188]Workers tend to participate in tasks as a form of paid leisure and to supplement their primary income, and only 7% view it as a full-time job. Overall, workers rated MTurk as less stressful than other jobs. Workers also earn more than previously reported, about $6.50 per hour. They see MTurk as part of the solution to their financial situation and report rare upsetting experiences. They also perceive requesters on MTurk as fairer and more honest than employers outside of the platform.[188] When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[223] Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. 
For users of the Amazon Mechanical Turk, this means that employers decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[235] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[236][195][237] Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. Crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents reported working in a team on their submission.[192] Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allowed them to organize and create campaigns to better their work situation, but the site is no longer running.[238] Another platform run by Amazon Mechanical Turk workers and academics, Turkopticon, continues to operate and provides worker reviews of Amazon Mechanical Turk employers.[239] America Online settled the case Hallissey et al. v. America Online, Inc. for $15 million in 2009, after unpaid moderators sued to be paid the minimum wage as employees under the U.S. Fair Labor Standards Act. Besides insufficient compensation and other labor-related disputes, there have also been concerns regarding privacy violations, the hiring of vulnerable groups, breaches of anonymity, psychological damage, the encouragement of addictive behaviors, and more.[240] Many but not all of the issues related to crowdworkers overlap with concerns related to content moderators.
https://en.wikipedia.org/wiki/Crowdsourcing
Amajorityis more than half of a total.[1]It is asubsetof asetconsisting of more than half of the set's elements. For example, if a group consists of 31 individuals, a majority would be 16 or more individuals, while having 15 or fewer individuals would not constitute a majority. A majority is different from, but often confused with, aplurality,[note 1]which is a subset larger than any other subset but not necessarily more than half the set. For example, if there is a group with 20 members which is divided into subgroups with 9, 6, and 5 members, then the 9-member group would be the plurality, but would not be a majority (as they have less than eleven members). Inparliamentary procedure, a majority always means precisely "more than half". Other common definitions (e.g. the frequent 50%+1) may be misleading(see "Common errors" below).[1]: 4 Depending on theparliamentary authorityused, there may be a difference in the total that is used to calculate a majority vote due tospoiled votes.[2]Comparing the two most popular authorities in the United States: InRobert's Rules of Order Newly Revised(abbreviated RONR), spoiled votes are counted as votes cast, but are not credited to any candidate.[2]InThe Standard Code of Parliamentary Procedure(abbreviatedTSC), spoiled votes are not included in the total and a majority vote is defined as being more than half of alleligiblevotes cast.[3] As it relates to a vote, a majority vote most often means asimplemajority vote, which means more "yes" votes than "no" votes.[4][5]Abstentionsor blanks are excluded in calculating a simple majority vote.[1]: 6Also, the totals do not include votes cast by someone not entitled to vote or improper multiple votes by a single member.[2] Other related terms containing the word "majority" have their own meanings, which may sometimes be inconsistent in usage.[6] InBritish English, the term "majority" is used to mean the difference in votes between the first-place candidate in an election and the second-place candidate.[7]The word "majority", and the phrases "size of a majority", "overall majority", or "working majority", are also used to mean the difference between the number of votes gained by the winning party or candidate and the total votes gained by all other parties or candidates.[8][9]In American English, "majority" does not have this meaning; the phrasemargin of victory, i.e. the number of votes separating the first-place finisher from the second-place finisher, is typically used.[10] A "double majority" is a voting system which requires a majority of votes according to two separate criteria.[6]e.g. in the European Union, the Council uses a double majority rule, requiring 55% of member states, representing at least 65% of the total EU population in favor. In some cases, the required percentage of member states in favor is increased to 72%.[11] A "supermajority" is a specified threshold greater than one half.[6]A common use of a supermajority is a "two-thirds vote", which is sometimes referred to as a "two-thirds majority". Thevoting basisrefers to the set of members considered when calculating whether a proposal has a majority,[12]i.e. thedenominatorused in calculating the percent support for a vote. Common voting bases include: For example, assume that votes are cast for three people for an office: Alice, Bob, and Carol. In all three scenarios, Alice receives aplurality, or the most votes among the candidates,[23]but in some she does not receive a majority. In Scenario 1, Alice received a majority of the vote. 
There were 20 votes cast and Alice received more than half of them. In Scenario 2, assume all three candidates are eligible. In this case, no one received a majority of the vote. In Scenario 3, assume that Alice and Bob are eligible candidates, but Carol is not. UsingRobert's Rules of Order, no one received a majority vote, which is the same as Scenario 2. In this case, the 4 votes for Carol are counted in the total, but are not credited to Carol (which precludes the possibility of an ineligible candidate being credited with receiving a majority vote). However, usingThe Standard Code, Alice received a majority vote since only votes for eligible candidates are counted. In this case, there are 16 votes for eligible candidates and Alice received more than half of those 16 votes. A temporary majority exists when the positions of the members present and voting in a meeting of a deliberative assembly on a subject are not representative of the membership as a whole. Parliamentary procedure contains some provisions designed to protect against a temporary majority violating the rights of absentees. For instance,previous noticeis typically required torescind, repeal or annulsomething previously adopted by a majority vote.[24]However, in this and many other cases, previous notice is not required if a majority of the entire membership votes in favor, because that indicates that it is clearly not a temporary majority. Another protection against a decision being made by a temporary majority is the motion toreconsider and enter on the minutes, by which two members can suspend action on a measure until it is called up at a meeting on another day.[25] The expression "at least 50% +1" may mislead when "majority" is actually intended, where the total number referred to is odd.[1]: 4For example, say a board has 7 members. "Majority" means "at least 4" in this case (more than half of 7, which is 3.5). But 50% + 1 is 4.5, and since a number of people can only be a positive integer, "at least 50% + 1" could be interpreted as meaning "at least 5".
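The arithmetic in the two preceding examples can be written as a short sketch; the helper names are illustrative only.

```python
def majority_threshold(votes_cast):
    """Smallest whole number of votes that is more than half of votes_cast."""
    return votes_cast // 2 + 1

def fifty_percent_plus_one(votes_cast):
    """The literal '50% + 1' rule, which need not be a whole number."""
    return votes_cast * 0.5 + 1

# Board of 7 members: a majority is 4 votes, but "50% + 1" gives 4.5,
# which could be misread as requiring 5 votes.
print(majority_threshold(7))          # 4
print(fifty_percent_plus_one(7))      # 4.5

# With an even total the two rules happen to agree: half of 20 is 10, so 11.
print(majority_threshold(20))         # 11
print(fifty_percent_plus_one(20))     # 11.0
```

As the example shows, the discrepancy only appears when the total number of votes is odd, which is why "at least 50% + 1" can mislead where "majority" is intended.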
https://en.wikipedia.org/wiki/Majority_vote
Instatistics, anexpectation–maximization(EM)algorithmis aniterative methodto find (local)maximum likelihoodormaximum a posteriori(MAP) estimates ofparametersinstatistical models, where the model depends on unobservedlatent variables.[1]The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of thelog-likelihoodevaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on theEstep. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture ofgaussians, or to solve the multiple linear regression problem.[2] The EM algorithm was explained and given its name in a classic 1977 paper byArthur Dempster,Nan Laird, andDonald Rubin.[3]They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies byCedric Smith.[4]Another was proposed byH.O. Hartleyin 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5]Another one by S.K Ng, Thriyambakam Krishnan and G.J McLachlan in 1977.[6]Hartley’s ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9]following his collaboration withPer Martin-LöfandAnders Martin-Löf.[10][11][12][13][14]The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997). The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published byC. F. Jeff Wuin 1983.[15]Wu's proof established the EM method's convergence also outside of theexponential family, as claimed by Dempster–Laird–Rubin.[15] The EM algorithm is used to find (local)maximum likelihoodparameters of astatistical modelin cases where the equations cannot be solved directly. Typically these models involvelatent variablesin addition to unknownparametersand known data observations. That is, eithermissing valuesexist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, amixture modelcan be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs. Finding a maximum likelihood solution typically requires taking thederivativesof thelikelihood functionwith respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. 
One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or asaddle point.[15]In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also havesingularitiesin them, i.e., nonsensical maxima. For example, one of thesolutionsthat may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points. Given thestatistical modelwhich generates a setX{\displaystyle \mathbf {X} }of observed data, a set of unobserved latent data ormissing valuesZ{\displaystyle \mathbf {Z} }, and a vector of unknown parametersθ{\displaystyle {\boldsymbol {\theta }}}, along with alikelihood functionL(θ;X,Z)=p(X,Z∣θ){\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})}, themaximum likelihood estimate(MLE) of the unknown parameters is determined by maximizing themarginal likelihoodof the observed data However, this quantity is often intractable sinceZ{\displaystyle \mathbf {Z} }is unobserved and the distribution ofZ{\displaystyle \mathbf {Z} }is unknown before attainingθ{\displaystyle {\boldsymbol {\theta }}}. The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps: More succinctly, we can write it as one equation:θ(t+1)=argmaxθEZ∼p(⋅|X,θ(t))⁡[log⁡p(X,Z|θ)]{\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,} The typical models to which EM is applied useZ{\displaystyle \mathbf {Z} }as a latent variable indicating membership in one of a set of groups: However, it is possible to apply EM to other sorts of models. The motivation is as follows. If the value of the parametersθ{\displaystyle {\boldsymbol {\theta }}}is known, usually the value of the latent variablesZ{\displaystyle \mathbf {Z} }can be found by maximizing the log-likelihood over all possible values ofZ{\displaystyle \mathbf {Z} }, either simply by iterating overZ{\displaystyle \mathbf {Z} }or through an algorithm such as theViterbi algorithmforhidden Markov models. Conversely, if we know the value of the latent variablesZ{\displaystyle \mathbf {Z} }, we can find an estimate of the parametersθ{\displaystyle {\boldsymbol {\theta }}}fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where bothθ{\displaystyle {\boldsymbol {\theta }}}andZ{\displaystyle \mathbf {Z} }are unknown: The algorithm as just described monotonically approaches a local minimum of the cost function. 
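The alternating scheme just described can be written down generically. The following Python sketch is only an illustration of that scheme: it assumes the caller supplies an `e_step` function (returning the expected values of the latent variables, or sufficient statistics, under the current parameters), an `m_step` function (returning updated parameters), and a `log_likelihood` function for the observed data; none of these names comes from a particular library.

```python
def expectation_maximization(x, theta0, e_step, m_step, log_likelihood,
                             tol=1e-6, max_iter=1000):
    """Generic EM loop: alternate E and M steps until the observed-data
    log-likelihood improves by less than `tol`."""
    theta = theta0
    prev_ll = log_likelihood(x, theta)
    for _ in range(max_iter):
        stats = e_step(x, theta)   # E step: expectations over the latent variables
        theta = m_step(x, stats)   # M step: maximize expected complete-data log-likelihood
        ll = log_likelihood(x, theta)
        if ll - prev_ll < tol:     # the likelihood never decreases, so this test is safe
            break
        prev_ll = ll
    return theta
```

Because each iteration cannot decrease the observed-data log-likelihood, stopping when the improvement falls below a tolerance is a common convergence test; as discussed next, however, the limit point need only be a local maximum or a saddle point.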
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to amaximum likelihood estimator. Formultimodal distributions, this means that an EM algorithm may converge to alocal maximumof the observed data likelihood function, depending on starting values. A variety of heuristic ormetaheuristicapproaches exist to escape a local maximum, such as random-restarthill climbing(starting with several different random initial estimatesθ(t){\displaystyle {\boldsymbol {\theta }}^{(t)}}), or applyingsimulated annealingmethods. EM is especially useful when the likelihood is anexponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment:[16]the E step becomes the sum of expectations ofsufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to deriveclosed-form expressionupdates for each step, using the Sundberg formula[17](proved and published by Rolf Sundberg, based on unpublished results ofPer Martin-LöfandAnders Martin-Löf).[8][9][11][12][13][14] The EM method was modified to computemaximum a posteriori(MAP) estimates forBayesian inferencein the original paper by Dempster, Laird, and Rubin. Other methods exist to find maximum likelihood estimates, such asgradient descent,conjugate gradient, or variants of theGauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. Expectation-Maximization works to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}rather than directly improvinglog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}. Here it is shown that improvements to the former imply improvements to the latter.[18] For anyZ{\displaystyle \mathbf {Z} }with non-zero probabilityp(Z∣X,θ){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})}, we can write We take the expectation over possible values of the unknown dataZ{\displaystyle \mathbf {Z} }under the current parameter estimateθ(t){\displaystyle \theta ^{(t)}}by multiplying both sides byp(Z∣X,θ(t)){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}and summing (or integrating) overZ{\displaystyle \mathbf {Z} }. The left-hand side is the expectation of a constant, so we get: whereH(θ∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}is defined by the negated sum it is replacing. This last equation holds for every value ofθ{\displaystyle {\boldsymbol {\theta }}}includingθ=θ(t){\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}}, and subtracting this last equation from the previous equation gives However,Gibbs' inequalitytells us thatH(θ∣θ(t))≥H(θ(t)∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})}, so we can conclude that In words, choosingθ{\displaystyle {\boldsymbol {\theta }}}to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}causeslog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}to improve at least as much. The EM algorithm can be viewed as two alternating maximization steps, that is, as an example ofcoordinate descent.[19][20]Consider the function: whereqis an arbitrary probability distribution over the unobserved datazandH(q)is theentropyof the distributionq. 
This function can be written as wherepZ∣X(⋅∣x;θ){\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )}is the conditional distribution of the unobserved data given the observed datax{\displaystyle x}andDKL{\displaystyle D_{KL}}is theKullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as: AKalman filteris typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: Suppose that aKalman filteror minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from themaximum likelihoodcalculation wherex^k{\displaystyle {\widehat {x}}_{k}}are scalar output estimates calculated by a filter or a smoother from N scalar measurementszk{\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by wherex^k{\displaystyle {\widehat {x}}_{k}}andx^k+1{\displaystyle {\widehat {x}}_{k+1}}are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via The convergence of parameter estimates such as those above are well studied.[26][27][28][29] A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those usingconjugate gradientand modifiedNewton's methods(Newton–Raphson).[30]Also, EM can be used with constrained estimation methods. Parameter-expanded expectation maximization (PX-EM)algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[31] Expectation conditional maximization (ECM)replaces each M step with a sequence of conditional maximization (CM) steps in which each parameterθiis maximized individually, conditionally on the other parameters remaining fixed.[32]Itself can be extended into theExpectation conditional maximization either (ECME)algorithm.[33] This idea is further extended ingeneralized expectation maximization (GEM)algorithm, in which is sought only an increase in the objective functionFfor both the E step and M step as described in theAs a maximization–maximization proceduresection.[19]GEM is further developed in a distributed environment and shows promising results.[34] It is also possible to consider the EM algorithm as a subclass of theMM(Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[35]and therefore use any machinery developed in the more general case. The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. 
This pair is called the α-EM algorithm[36]which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm byYasuo Matsuyamais an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.[37] EM is a partially non-Bayesian, maximum likelihood method. Its final result gives aprobability distributionover the latent variables (in the Bayesian style) together with a point estimate forθ(either amaximum likelihood estimateor a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution overθand the latent variables. The Bayesian approach to inference is simply to treatθas another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now includingθ) and optimize them one at a time. Now,ksteps per iteration are needed, wherekis the number of latent variables. Forgraphical modelsthis is easy to do as each variable's newQdepends only on itsMarkov blanket, so localmessage passingcan be used for efficient inference. Ininformation geometry, the E step and the M step are interpreted as projections under dualaffine connections, called the e-connection and the m-connection; theKullback–Leibler divergencecan also be understood in these terms. Letx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})}be a sample ofn{\displaystyle n}independent observations from amixtureof twomultivariate normal distributionsof dimensiond{\displaystyle d}, and letz=(z1,z2,…,zn){\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})}be the latent variables that determine the component from which the observation originates.[20] where The aim is to estimate the unknown parameters representing themixingvalue between the Gaussians and the means and covariances of each: where the incomplete-data likelihood function is and the complete-data likelihood function is or whereI{\displaystyle \mathbb {I} }is anindicator functionandf{\displaystyle f}is theprobability density functionof a multivariate normal. In the last equality, for eachi, one indicatorI(zi=j){\displaystyle \mathbb {I} (z_{i}=j)}is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term. Given our current estimate of the parametersθ(t), the conditional distribution of theZiis determined byBayes' theoremto be the proportional height of the normaldensityweighted byτ: These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below). This E step corresponds with setting up this function for Q: The expectation oflog⁡L(θ;xi,Zi){\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})}inside the sum is taken with respect to the probability density functionP(Zi∣Xi=xi;θ(t)){\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for eachxi{\displaystyle \mathbf {x} _{i}}of the training set. Everything in the E step is known before the step is taken exceptTj,i{\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section. 
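As a concrete illustration of the E step for the two-component Gaussian mixture just described, the sketch below computes the membership probabilities T_{j,i} with NumPy and SciPy. The array shapes and the names `tau`, `mu` and `sigma` are assumptions made for this example, not notation fixed by the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, tau, mu, sigma):
    """Membership probabilities T[j, i] = P(Z_i = j | x_i; theta) for a
    two-component Gaussian mixture.
    x: (n, d) data, tau: (2,) mixing weights,
    mu: two (d,) mean vectors, sigma: two (d, d) covariance matrices."""
    weighted = np.vstack([
        tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
        for j in range(2)
    ])                                                # shape (2, n)
    return weighted / weighted.sum(axis=0, keepdims=True)
```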
This full conditional expectation does not need to be calculated in one step, becauseτandμ/Σappear in separate linear terms and can thus be maximized independently. Q(θ∣θ(t)){\displaystyle Q(\theta \mid \theta ^{(t)})}being quadratic in form means that determining the maximizing values ofθ{\displaystyle \theta }is relatively straightforward. Also,τ{\displaystyle \tau },(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}and(μ2,Σ2){\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})}may all be maximized independently since they all appear in separate linear terms. To begin, considerτ{\displaystyle \tau }, which has the constraintτ1+τ2=1{\displaystyle \tau _{1}+\tau _{2}=1}: This has the same form as the maximum likelihood estimate for thebinomial distribution, so For the next estimates of(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}: This has the same form as a weighted maximum likelihood estimate for a normal distribution, so and, by symmetry, Conclude the iterative process ifEZ∣θ(t),x[log⁡L(θ(t);x,Z)]≤EZ∣θ(t−1),x[log⁡L(θ(t−1);x,Z)]+ε{\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon }forε{\displaystyle \varepsilon }below some preset threshold. The algorithm illustrated above can be generalized for mixtures of more than twomultivariate normal distributions. The EM algorithm has been implemented in the case where an underlyinglinear regressionmodel exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[38]Special cases of this model include censored or truncated observations from onenormal distribution.[38] EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termedmoment-based approaches[39]or the so-calledspectral techniques.[40][41]Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
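Returning to the two-component Gaussian mixture, the closed-form M-step updates derived above (mixing proportions, weighted means and weighted covariances) can be sketched as follows, reusing the hypothetical `e_step` from the earlier sketch; as before, the names and array shapes are illustrative assumptions.

```python
import numpy as np

def m_step(x, T):
    """Closed-form M step for a two-component Gaussian mixture.
    T: (2, n) membership probabilities produced by the E step."""
    n = x.shape[0]
    tau = T.sum(axis=1) / n                                # mixing proportions
    mu, sigma = [], []
    for j in range(2):
        w = T[j]                                           # (n,) weights
        mu_j = (w[:, None] * x).sum(axis=0) / w.sum()      # weighted mean
        diff = x - mu_j
        sigma_j = (w[:, None] * diff).T @ diff / w.sum()   # weighted covariance
        mu.append(mu_j)
        sigma.append(sigma_j)
    return tau, mu, sigma

# One EM iteration; repeat until the change in the expected log-likelihood
# (or simply in the parameters) falls below a preset threshold:
#   T = e_step(x, tau, mu, sigma)
#   tau, mu, sigma = m_step(x, T)
```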
https://en.wikipedia.org/wiki/Expectation–maximization_algorithm
PageRank(PR) is analgorithmused byGoogle Searchtorankweb pagesin theirsearch engineresults. It is named after both the term "web page" and co-founderLarry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.[1] Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known.[2][3]As of September 24, 2019, all patents associated with PageRank have expired.[4] PageRank is alink analysisalgorithm and it assigns a numericalweightingto each element of ahyperlinkedsetof documents, such as theWorld Wide Web, with the purpose of "measuring" its relative importance within the set. Thealgorithmmay be applied to any collection of entities withreciprocalquotations and references. The numerical weight that it assigns to any given elementEis referred to as thePageRank of Eand denoted byPR(E).{\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on theWebgraph, created by all World Wide Web pages as nodes andhyperlinksas edges, taking into consideration authority hubs such ascnn.comormayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is definedrecursivelyand depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5]In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6] Other link-based ranking algorithms for Web pages include theHITS algorithminvented byJon Kleinberg(used byTeomaand nowAsk.com), the IBMCLEVER project, theTrustRankalgorithm, theHummingbirdalgorithm,[7]and theSALSA algorithm.[8] Theeigenvalueproblem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. 
In 1895,Edmund Landausuggested using it for determining the winner of a chess tournament.[9][10]The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked onscientometricsranking scientific journals,[11]in 1977 byThomas Saatyin his concept ofAnalytic Hierarchy Processwhich weighted alternative choices,[12]and in 1995 by Bradley Love and Steven Sloman as acognitive modelfor concepts, the centrality algorithm.[13][14] A search engine called "RankDex" from IDD Information Services, designed byRobin Liin 1996, developed a strategy for site-scoring and page-ranking.[15]Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it.[16]RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996.[17]Li filed a patent for the technology in RankDex in 1997; it was granted in 1999.[18]He later used it when he foundedBaiduin China in 2000.[19][20]Google founderLarry Pagereferenced Li's work as a citation in some of his U.S. patents for PageRank.[21][17][22] Larry Page andSergey Brindeveloped PageRank atStanford Universityin 1996 as part of a research project about a new kind of search engine. An interview withHéctor García-Molina, Stanford Computer Science professor and advisor to Sergey,[23]provides background into the development of the page-rank algorithm.[24]Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it.[25]The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google.[5]Rajeev MotwaniandTerry Winogradco-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of theGoogle search engine, published in 1998.[5]Shortly after, Page and Brin foundedGoogle Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.[26] The name "PageRank" plays on the name of developer Larry Page, as well as of the concept of aweb page.[27][28]The word is a trademark of Google, and the PageRank process has beenpatented(U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[29][30] PageRank was influenced bycitation analysis, early developed byEugene Garfieldin the 1950s at the University of Pennsylvania, and byHyper Search, developed byMassimo Marchioriat theUniversity of Padua. In the same year PageRank was introduced (1998),Jon Kleinbergpublished his work onHITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.[5][31] The PageRank algorithm outputs aprobability distributionused to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. 
The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document. Assume a small universe of four web pages:A,B,C, andD. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume aprobability distributionbetween 0 and 1. Hence the initial value for each page in this example is 0.25. The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. If the only links in the system were from pagesB,C, andDtoA, each link would transfer 0.25 PageRank toAupon the next iteration, for a total of 0.75. Suppose instead that pageBhad a link to pagesCandA, pageChad a link to pageA, and pageDhad links to all three pages. Thus, upon the first iteration, pageBwould transfer half of its existing value (0.125) to pageAand the other half (0.125) to pageC. PageCwould transfer all of its existing value (0.25) to the only page it links to,A. SinceDhad three outbound links, it would transfer one third of its existing value, or approximately 0.083, toA. At the completion of this iteration, pageAwill have a PageRank of approximately 0.458. In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound linksL( ). In the general case, the PageRank value for any pageucan be expressed as: i.e. the PageRank value for a pageuis dependent on the PageRank values for each pagevcontained in the setBu(the set containing all pages linking to pageu), divided by the numberL(v) of links from pagev. The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factord. The probability that they instead jump to any random page is1 - d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5] The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is, So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion: The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied byNand the sum becomesN. 
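The first-iteration arithmetic of the simplified four-page example above can be checked directly. The short Python sketch below applies the undamped rule PR(u) = Σ_{v ∈ B_u} PR(v) / L(v) once; the dictionary representation is an illustrative choice, not anyone's production data structure.

```python
# Link structure of the example: B -> {A, C}, C -> {A}, D -> {A, B, C};
# A's outbound links are not needed for this single-iteration check.
links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
pr = {page: 0.25 for page in links}                 # initial distribution

new_pr = {page: 0.0 for page in links}
for page, outlinks in links.items():
    for target in outlinks:
        new_pr[target] += pr[page] / len(outlinks)  # split PR equally over outlinks

print(round(new_pr["A"], 3))   # 0.458 = 0.125 (from B) + 0.25 (from C) + 0.083 (from D)
```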
A statement in Page and Brin's paper that "the sum of all PageRanks is one"[5]and claims by other Google employees[32]support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[5] Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of arandom surferwho reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as aMarkov chainin which the states are pages, and the transitions are the links between pages – all of which are all equally probable. If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks anotherURLat random and continues surfing again. When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability,d, is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows: wherep1,p2,...,pN{\displaystyle p_{1},p_{2},...,p_{N}}are the pages under consideration,M(pi){\displaystyle M(p_{i})}is the set of pages that link topi{\displaystyle p_{i}},L(pj){\displaystyle L(p_{j})}is the number of outbound links on pagepj{\displaystyle p_{j}}, andN{\displaystyle N}is the total number of pages. The PageRank values are the entries of the dominant righteigenvectorof the modifiedadjacency matrixrescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is whereRis the solution of the equation where the adjacency functionℓ(pi,pj){\displaystyle \ell (p_{i},p_{j})}is the ratio between number of links outbound from page j to page i to the total number of outbound links of page j. The adjacency function is 0 if pagepj{\displaystyle p_{j}}does not link topi{\displaystyle p_{i}}, and normalized such that, for eachj i.e. the elements of each column sum up to 1, so the matrix is astochastic matrix(for more details see thecomputationsection below). Thus this is a variant of theeigenvector centralitymeasure used commonly innetwork analysis. Because of the largeeigengapof the modified adjacency matrix above,[33]the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations. Google's founders, in their original paper,[31]reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. 
Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear inlog⁡n{\displaystyle \log n}, where n is the size of the network. As a result ofMarkov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equalt−1{\displaystyle t^{-1}}wheret{\displaystyle t}is theexpectationof the number of clicks (or random jumps) required to get from the page back to itself. One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such asWikipedia). Several strategies have been proposed to accelerate the computation of PageRank.[34] Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept,[citation needed]which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it startedactivelypenalizing sites selling paid text links, Google has combattedlink farmsand other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google'strade secrets. PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as thepower iterationmethod[35][36]or the power method. The basic mathematical operations performed are identical. Att=0{\displaystyle t=0}, an initial probability distribution is assumed, usually where N is the total number of pages, andpi;0{\displaystyle p_{i};0}is page i at time 0. At each time step, the computation, as detailed above, yields where d is the damping factor, or in matrix notation whereRi(t)=PR(pi;t){\displaystyle \mathbf {R} _{i}(t)=PR(p_{i};t)}and1{\displaystyle \mathbf {1} }is the column vector of lengthN{\displaystyle N}containing only ones. The matrixM{\displaystyle {\mathcal {M}}}is defined as i.e., whereA{\displaystyle A}denotes theadjacency matrixof the graph andK{\displaystyle K}is the diagonal matrix with the outdegrees in the diagonal. The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when for some smallϵ{\displaystyle \epsilon } i.e., when convergence is assumed. If the matrixM{\displaystyle {\mathcal {M}}}is a transition probability, i.e., column-stochastic andR{\displaystyle \mathbf {R} }is a probability distribution (i.e.,|R|=1{\displaystyle |\mathbf {R} |=1},ER=1{\displaystyle \mathbf {E} \mathbf {R} =\mathbf {1} }whereE{\displaystyle \mathbf {E} }is matrix of all ones), then equation (2) is equivalent to Hence PageRankR{\displaystyle \mathbf {R} }is the principal eigenvector ofM^{\displaystyle {\widehat {\mathcal {M}}}}. A fast and easy way to compute this is using thepower method: starting with an arbitrary vectorx(0){\displaystyle x(0)}, the operatorM^{\displaystyle {\widehat {\mathcal {M}}}}is applied in succession, i.e., until Note that in equation (3) the matrix on the right-hand side in the parenthesis can be interpreted as whereP{\displaystyle \mathbf {P} }is an initial probability distribution. 
In the current case, P = (1/N)·1. Finally, if M has columns containing only zero values (dangling pages), those columns should be replaced with the initial probability vector P. In other words, the zero columns of M are replaced by P, which can be written as adding to M a correction matrix built from P and an indicator of the dangling pages. In this case, the above two computations using M only give the same PageRank if their results are normalized.

The PageRank of an undirected graph G is statistically close to the degree distribution of the graph G,[37] but they are generally not identical: if R is the PageRank vector defined above and D is the degree distribution vector, where deg(p_i) denotes the degree of vertex p_i and E is the edge set of the graph, then, with Y = (1/N)·1, it can be shown that[38]

\[
\frac{1-d}{1+d}\,\lVert Y-D\rVert_{1}\;\leq\;\lVert R-D\rVert_{1}\;\leq\;\lVert Y-D\rVert_{1},
\]

that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree.

A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis.[39] In applications it may be necessary to model systems having objects of two kinds, where a weighted relation is defined on object pairs. This leads to considering bipartite graphs. For such graphs, two related positive or nonnegative irreducible matrices corresponding to the vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perron–Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate.

Sarma et al. describe two random-walk-based distributed algorithms for computing the PageRank of nodes in a network.[40] One algorithm takes O(log n/ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability (1 − ε being the damping factor) used in the PageRank computation. They also present a faster algorithm that takes O(√(log n)/ε) rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that is polylogarithmic in n, the network size.

The Google Toolbar long had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The "Toolbar PageRank" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from its Webmaster Tools section, saying that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most important metric for them to track, which is simply not true."[41] The "Toolbar PageRank" was updated very infrequently; it was last updated in November 2013.
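Pulling together the iterative computation described in the preceding sections — a column-stochastic link matrix, the damping factor d, uniform handling of dangling pages, and an L1 convergence test — here is a minimal NumPy sketch of the power method. It is an illustration under those assumptions, not the algorithm as deployed in production; the function name `pagerank` and the uniform dangling-page convention are choices made for the example.

```python
import numpy as np

def pagerank(adjacency, d=0.85, eps=1e-9, max_iter=1000):
    """Power-method PageRank.
    adjacency[i, j] = 1 if page j links to page i, else 0."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=0)                     # column sums = outbound link counts
    M = np.zeros((n, n))
    for j in range(n):
        if out_degree[j] == 0:
            M[:, j] = 1.0 / n                              # dangling page: spread its rank uniformly
        else:
            M[:, j] = adjacency[:, j] / out_degree[j]      # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)                                # initial distribution
    for _ in range(max_iter):
        r_next = d * M @ r + (1.0 - d) / n
        if np.linalg.norm(r_next - r, 1) < eps:            # L1 convergence test
            break
        r = r_next
    return r_next

# The four pages A, B, C, D of the earlier example, ordered A, B, C, D,
# with B -> {A, C}, C -> {A}, D -> {A, B, C}; A has no outbound links.
A = np.array([[0, 1, 1, 1],
              [0, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(pagerank(A).round(3))
```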
In October 2014 Matt Cutts announced that another visible pagerank update would not be coming.[42]In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate.[43]On April 15, 2016, Google turned off display of PageRank Data in Google Toolbar,[44]though the PageRank continued to be used internally to rank content in search results.[45] Thesearch engine results page(SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[46][unreliable source?]Search engine optimization(SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword.[47]The PageRank of the HomePage of a website is the best indication Google offers for website authority.[48] After the introduction ofGoogle Placesinto the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[49]When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google.[50] TheGoogle DirectoryPageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.[51] It was known that the PageRank shown in the Toolbar could easily bespoofed. Redirection from one page to another, either via aHTTP 302response or a "Refresh"meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. Spoofing can usually be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection. Forsearch engine optimizationpurposes, some companies offer to sell high PageRank links to webmasters.[52]As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). 
The practice of buying and selling[53]is intensely debated across the Webmaster community. Google advised webmasters to use thenofollowHTML attributevalue on paid links. According toMatt Cutts, Google is concerned about webmasters who try togame the system, and thereby reduce the quality and relevance of Google search results.[52] In 2019, Google announced two additional link attributes providing hints about which links to consider or exclude within Search:rel="ugc"as a tag for user-generated content, such as comments; andrel="sponsored"as a tag for advertisements or other types of sponsored content. Multiplerelvalues are also allowed, for example,rel="ugc sponsored"can be used to hint that the link came from user-generated content and is sponsored.[54] Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings.[55] A more intelligent surfer that probabilistically hops from page to page depending on the content of the pages and query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page which as the name suggests is also a function of query. When given a multiple-term query,Q={q1,q2,⋯}{\displaystyle Q=\{q1,q2,\cdots \}}, the surfer selects aq{\displaystyle q}according to some probability distribution,P(q){\displaystyle P(q)}, and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.[56] The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.[57] PageRank has been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with pagerank algorithm in order to come up with a ranking system for individual publications which propagates to individual authors. The new index known as pagerank-index (Pi) is demonstrated to be fairer compared to h-index in the context of many drawbacks exhibited by h-index.[58] For the analysis of protein networks in biology PageRank is also a useful tool.[59][60] In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[61] A similar newer use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[62] A version of PageRank has recently been proposed as a replacement for the traditionalInstitute for Scientific Information(ISI)impact factor,[63]and implemented atEigenfactoras well as atSCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion. 
Inneuroscience, the PageRank of aneuronin a neural network has been found to correlate with its relative firing rate.[64] Personalized PageRank is used byTwitterto present users with other accounts they may wish to follow.[65] Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page.[66] AWeb crawlermay use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[67]that were used in the creation of Google isEfficient crawling through URL ordering,[68]which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL. The PageRank may also be used as a methodology to measure the apparent impact of a community like theBlogosphereon the overall Web itself. This approach uses therefore the PageRank to measure the distribution of attention in reflection of theScale-free networkparadigm.[citation needed] In 2005, in a pilot study in Pakistan,Structural Deep Democracy, SD2[69][70]was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 usesPageRankfor the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2 as the underlying umbrella system, mandates that generalist proxies should always be used. In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA;[71]individual soccer players;[72]and athletes in the Diamond League.[73] PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[74][75]Inlexical semanticsit has been used to performWord Sense Disambiguation,[76]Semantic similarity,[77]and also to automatically rankWordNetsynsetsaccording to how strongly they possess a given semantic property, such as positivity or negativity.[78] How a traffic system changes its operational mode can be described by transitions between quasi-stationary states in correlation structures of traffic flow. PageRank has been used to identify and explore the dominant states among these quasi-stationary states in traffic systems.[79] In early 2005, Google implemented a new value, "nofollow",[80]for therelattribute of HTML link and anchor elements, so that website developers andbloggerscan make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combatspamdexing. As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. 
With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See:Spam in blogs#nofollow) In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[81]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[82]
https://en.wikipedia.org/wiki/PageRank#Iterative_computation
Information retrieval(IR) incomputingandinformation scienceis the task of identifying and retrievinginformation systemresources that are relevant to aninformation need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based onfull-textor other content-based indexing. Information retrieval is thescience[1]of searching for information in a document, searching for documents themselves, and also searching for themetadatathat describes data, and fordatabasesof texts, images or sounds. Automated information retrieval systems are used to reduce what has been calledinformation overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents.Web search enginesare the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees ofrelevance. An object is an entity that is represented by information in a content collection ordatabase. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. Thisrankingof results is a key difference of information retrieval searching compared to database searching.[2] Depending on theapplicationthe data objects may be, for example, text documents, images,[3]audio,[4]mind maps[5]or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates ormetadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6] there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute The idea of using computers to search for relevant pieces of information was popularized in the articleAs We May ThinkbyVannevar Bushin 1945.[7]It would appear that Bush was inspired by patents for a 'statistical machine' – filed byEmanuel Goldbergin the 1920s and 1930s – that searched for documents stored on film.[8]The first description of a computer searching for information was described by Holmstrom in 1948,[9]detailing an early mention of theUnivaccomputer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedyDesk Set. In the 1960s, the first large information retrieval research group was formed byGerard Saltonat Cornell. By the 1970s several different retrieval techniques had been shown to perform well on smalltext corporasuch as the Cranfield collection (several thousand documents).[7]Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. 
In 1992, the US Department of Defense along with theNational Institute of Standards and Technology(NIST), cosponsored theText Retrieval Conference(TREC) as part of the TIPSTER text program. The aim of this was to look into the information retrieval community by supplying the infrastructure that was needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods thatscaleto huge corpora. The introduction ofweb search engineshas boosted the need for very large scale retrieval systems even further. By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such asAltaVista(1995) andYahoo!(1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding ofGoogle, which introduced thePageRankalgorithm,[10]using the web’s hyperlink structure to assess page importance and improve relevance ranking. During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009,MicrosoftlaunchedBing, introducing features that would later incorporatesemanticweb technologies through the development of its Satori knowledge base. Academic analysis[11]have highlighted Bing’s semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing. A major leap occurred in 2018, when Google deployedBERT(BidirectionalEncoderRepresentations fromTransformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12]BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13] Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as theTextREtrievalConference (TREC), initiated in 1992, and more recent evaluation frameworks Microsoft MARCO(MAchineReadingCOmprehension) (2019)[14]became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15] As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes:sparse,dense, andhybridmodels. 
Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16]Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17]Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models balances scalability, relevance, and efficiency in retrieval systems.[18] As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come into the picture. Research is now focused not just on relevance and efficiency, but on transparency, accountability, and user trust in retrieval algorithms. Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category): Methods/Techniques in which information retrieval techniques are employed include: In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. These models can be illustrated in a diagram that categorizes them according to two dimensions: the mathematical basis and the properties of the model. In addition to the theoretical distinctions, modern information retrieval models are also categorized by how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.[19] This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for Information Retrieval models.[23][20] The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed forBoolean retrieval[clarification needed]or top-k retrieval, includeprecision and recall. All measures assume aground truthnotion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may beill-posedand there may be different shades of relevance.
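As a concrete companion to the precision and recall measures just mentioned, the sketch below computes set-based precision, recall, and precision at k for a single query; the document identifiers and relevance judgments are hypothetical and exist only for the example.

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def precision_at_k(ranked, relevant, k):
    """Precision over the top-k results of a ranked list."""
    relevant = set(relevant)
    return sum(1 for d in ranked[:k] if d in relevant) / k

# Hypothetical ranked output and ground-truth relevance judgments.
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d4"}

print(precision_recall(ranked, relevant))   # (0.4, 2/3)
print(precision_at_k(ranked, relevant, 3))  # ~0.33
```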
https://en.wikipedia.org/wiki/Information_retrieval#Evaluation_measures
Apriori[1]is analgorithmfor frequent item set mining andassociation rule learningoverrelational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determineassociation ruleswhich highlight general trends in thedatabase: this has applications in domains such asmarket basket analysis. The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate ondatabasescontaining transactions (for example, collections of items bought by customers, or details of a website frequentation orIP addresses[2]). Other algorithms are designed for finding association rules in data having no transactions (Winepiand Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (anitemset). Given a thresholdC{\displaystyle C}, the Apriori algorithm identifies the item sets which are subsets of at leastC{\displaystyle C}transactions in the database. Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known ascandidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori usesbreadth-first searchand aHash treestructure to count candidate item sets efficiently. It generates candidate item sets of lengthk{\displaystyle k}from item sets of lengthk−1{\displaystyle k-1}. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequentk{\displaystyle k}-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. The pseudo code for the algorithm is given below for a transaction databaseT{\displaystyle T}, and a support threshold ofε{\displaystyle \varepsilon }. Usual set theoretic notation is employed, though note thatT{\displaystyle T}is amultiset.Ck{\displaystyle C_{k}}is the candidate set for levelk{\displaystyle k}. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma.count[c]{\displaystyle \mathrm {count} [c]}accesses a field of the data structure that represents candidate setc{\displaystyle c}, which is initially assumed to be zero. Many details are omitted below, usually the most important part of the implementation is the data structure used for storing the candidate sets, and counting their frequencies. Consider the following database, where each row is a transaction and each cell is an individual item of the transaction: The association rules that can be determined from this database are the following: we can also illustrate this through a variety of examples. Assume that a large supermarket tracks sales data bystock-keeping unit(SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together. Let the database of transactions consist of following itemsets: We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is thesupport threshold. 
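The level-wise procedure described above (and the pseudocode referred to but not reproduced here) can be rendered roughly as follows in Python. This is an illustrative sketch, not the article's pseudocode: it uses an absolute support count as the threshold and plain dictionaries rather than a hash tree, so it shows the logic rather than an efficient implementation.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset(itemset): support_count} for all frequent itemsets.

    transactions: iterable of item collections; min_support: absolute count threshold.
    """
    transactions = [frozenset(t) for t in transactions]

    # Level 1: count individual items.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)

    k = 2
    while frequent:
        prev = list(frequent)
        # Candidate generation: join frequent (k-1)-itemsets whose union has size k.
        candidates = {a | b for i, a in enumerate(prev) for b in prev[i + 1:] if len(a | b) == k}
        # Prune candidates with an infrequent (k-1)-subset (downward closure).
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # One scan over the transactions to count support of the surviving candidates.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result
```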
The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result All the itemsets of size 1 have a support of at least 3, so they are all frequent. The next step is to generate a list of all pairs of the frequent items. For example, regarding the pair {1,2}: the first table of Example 2 shows items 1 and 2 appearing together in three of the itemsets; therefore, we say item {1,2} has support of three. The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we canprunesets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs: in the example, there are no frequent triplets. {2,3,4} is below the minimal threshold, and the other triplets were excluded because they were super sets of pairs that were already below the threshold. We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold. Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation generates large numbers of subsets (The algorithm attempts to load up the candidate set, with as many as possible subsets before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all2|S|−1{\displaystyle 2^{|S|}-1}of its proper subsets. The algorithm scans the database too many times, which reduces the overall performance. Due to this, the algorithm assumes that the database is permanently in the memory. Also, both the time and space complexity of this algorithm are very high:O(2|D|){\displaystyle O\left(2^{|D|}\right)}, thus exponential, where|D|{\displaystyle |D|}is the horizontal width (the total number of items) present in the database. Later algorithms such asMax-Miner[3]try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach.
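Running a sketch like the one above reproduces the behaviour walked through in this example. The transaction list below is an assumption chosen to be consistent with the counts described (every single item frequent; the pairs {1,2}, {2,3}, {2,4} and {3,4} frequent; {1,3} and {1,4} not; no frequent triples at threshold 3); it is not necessarily the exact table used in the original example.

```python
# Hypothetical transactions consistent with the counts discussed in the example.
transactions = [
    {1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4},
]
frequent = apriori(transactions, min_support=3)   # apriori() from the sketch above
for itemset, count in sorted(frequent.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), count)
# Singletons 1-4 and the pairs {1,2}, {2,3}, {2,4}, {3,4} are frequent;
# {1,3} and {1,4} fall below the threshold, so no triple survives pruning.
```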
https://en.wikipedia.org/wiki/Apriori_algorithm
Association rule learningis arule-based machine learningmethod for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness.[1]In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected. Based on the concept of strong rules,Rakesh Agrawal,Tomasz Imielińskiand Arun Swami[2]introduced association rules for discovering regularities between products in large-scale transaction data recorded bypoint-of-sale(POS) systems in supermarkets. For example, the rule{onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}}found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as, e.g., promotionalpricingorproduct placements. In addition to the above example frommarket basket analysis, association rules are employed today in many application areas includingWeb usage mining,intrusion detection,continuous production, andbioinformatics. In contrast withsequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. The association rule algorithm itself consists of various parameters that can make it difficult for those without some expertise in data mining to execute, with many rules that are arduous to understand.[3] Following the original definition by Agrawal, Imieliński, Swami[2]the problem of association rule mining is defined as: LetI={i1,i2,…,in}{\displaystyle I=\{i_{1},i_{2},\ldots ,i_{n}\}}be a set ofnbinary attributes calleditems. LetD={t1,t2,…,tm}{\displaystyle D=\{t_{1},t_{2},\ldots ,t_{m}\}}be a set of transactions called thedatabase. EachtransactioninDhas a unique transaction ID and contains a subset of the items inI. Aruleis defined as an implication of the form: In Agrawal, Imieliński, Swami[2]aruleis defined only between a set and a single item,X⇒ij{\displaystyle X\Rightarrow i_{j}}forij∈I{\displaystyle i_{j}\in I}. Every rule is composed by two different sets of items, also known asitemsets,XandY, whereXis calledantecedentor left-hand-side (LHS) andYconsequentor right-hand-side (RHS). The antecedent is that item that can be found in the data while the consequent is the item found when combined with the antecedent. The statementX⇒Y{\displaystyle X\Rightarrow Y}is often read asifXthenY, where the antecedent (X) is theifand the consequent (Y) is thethen. This simply implies that, in theory, wheneverXoccurs in a dataset, thenYwill as well. Association rules are made by searching data for frequent if-then patterns and by using a certain criterion under Support and Confidence to define what the most important relationships are. Support is the evidence of how frequent an item appears in the data given, as Confidence is defined by how many times the if-then statements are found true. However, there is a third criteria that can be used, it is called Lift and it can be used to compare the expected Confidence and the actual Confidence. Lift will show how many times the if-then statement is expected to be found to be true. Association rules are made to calculate from itemsets, which are created by two or more items. 
If rules were built by analyzing all the possible itemsets in the data, there would be so many rules that they would not have any meaning. That is why Association rules are typically made only from rules that are well represented by the data. There are many different data mining techniques that can be used to produce such analytics and results, for example Classification analysis, Clustering analysis, and Regression analysis.[4]Which technique should be used depends on what you are looking for with your data. Association rules are primarily used to produce analytics and predictions of customer behavior. Classification analysis is most likely used to ask questions, make decisions, and predict behavior.[5]Clustering analysis is primarily used when there are no assumptions made about the likely relationships within the data.[5]Regression analysis is used when you want to predict the value of a continuous dependent variable from a number of independent variables.[5] Benefits: There are many benefits of using Association rules, such as finding patterns that help understand the correlations and co-occurrences between data sets. A very good real-world example that uses Association rules is medicine. Medicine uses Association rules to help diagnose patients. When diagnosing patients there are many variables to consider, as many diseases share similar symptoms. With the use of Association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.[6] Downsides: However, Association rules also have downsides, such as the difficulty of finding appropriate parameter and threshold settings for the mining algorithm. Another downside is the large number of discovered rules: a large rule set does not guarantee that the rules are relevant, and it can also cause the algorithm to perform poorly. Sometimes the implemented algorithms contain too many variables and parameters, which can make the results hard to understand for someone without a good grasp of data mining.[7] Thresholds: When using Association rules, you are most likely to use only Support and Confidence. However, this means a user-specified minimum support and a user-specified minimum confidence have to be satisfied at the same time. Usually, the Association rule generation is split into two separate steps: a minimum support threshold is first applied to find all frequent itemsets in the database, and a minimum confidence constraint is then applied to these frequent itemsets in order to form rules. In the accompanying example, the Support Threshold is 30% and the Confidence Threshold is 50%. The table on the left is the original unorganized data and the table on the right is organized by the thresholds. In this case Item C meets both the Support and the Confidence threshold with room to spare, which is why it is first. Item A is second because its values exactly meet the thresholds. Item D has met the threshold for Support but not Confidence. Item B has not met the threshold for either Support or Confidence, and that is why it is last. Finding all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets. The set of possible itemsets is thepower setoverIand has size2n−1{\displaystyle 2^{n}-1}(the empty set is excluded, since it is not considered a valid itemset). The size of the power set thus grows exponentially in the numbernof items inI.
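The two-step generation just described, first collecting the itemsets that meet minimum support and then keeping only the rules that also meet minimum confidence, can be sketched as follows. The helper names, the brute-force itemset enumeration, and the toy transactions are illustrative assumptions rather than part of the original description.

```python
from itertools import combinations

def generate_rules(transactions, min_support, min_confidence):
    """Two-step rule generation: frequent itemsets first, then confidence filtering."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Step 1: enumerate frequent itemsets (brute force here for brevity;
    # Apriori or Eclat would be used on realistic data).
    items = sorted({i for t in transactions for i in t})
    frequent = [frozenset(c) for k in range(1, len(items) + 1)
                for c in combinations(items, k) if support(frozenset(c)) >= min_support]

    # Step 2: split each frequent itemset into antecedent => consequent and
    # keep only the rules whose confidence clears the threshold.
    rules = []
    for itemset in frequent:
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                consequent = itemset - antecedent
                conf = support(itemset) / support(antecedent)
                if conf >= min_confidence:
                    rules.append((set(antecedent), set(consequent), conf))
    return rules

print(generate_rules([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}],
                     min_support=0.5, min_confidence=0.6))
```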
An efficient search is possible by using thedownward-closure propertyof support[2][8](also calledanti-monotonicity[9]). This would guarantee that a frequent itemset and all its subsets are also frequent and thus will have no infrequent itemsets as a subset of a frequent itemset. Exploiting this property, efficient algorithms (e.g., Apriori[10]and Eclat[11]) can find all frequent itemsets. To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items isI={milk,bread,butter,beer,diapers,eggs,fruit}{\displaystyle I=\{\mathrm {milk,bread,butter,beer,diapers,eggs,fruit} \}}. An example rule for the supermarket could be{butter,bread}⇒{milk}{\displaystyle \{\mathrm {butter,bread} \}\Rightarrow \{\mathrm {milk} \}}meaning that if butter and bread are bought, customers also buy milk. In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. LetX,Y{\displaystyle X,Y}be itemsets,X⇒Y{\displaystyle X\Rightarrow Y}an association rule andTa set of transactions of a given database. Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant,[citation needed]and datasets often contain thousands or millions of transactions. Support is an indication of how frequently the itemset appears in the dataset. In our example, it can be easier to explain support by writingsupport=P(A∩B)=(number of transactions containingAandB)(total number of transactions){\displaystyle {\text{support}}=P(A\cap B)={\frac {({\text{number of transactions containing }}A{\text{ and }}B)}{\text{ (total number of transactions)}}}}[12]where A and B are separate item sets that occur at the same time in a transaction. Using Table 2 as an example, the itemsetX={beer,diapers}{\displaystyle X=\{\mathrm {beer,diapers} \}}has a support of1/5=0.2since it occurs in 20% of all transactions (1 out of 5 transactions). The argument ofsupport of Xis a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive).[13] Furthermore, the itemsetY={milk,bread,butter}{\displaystyle Y=\{\mathrm {milk,bread,butter} \}}has a support of1/5=0.2as it appears in 20% of all transactions as well. When using antecedents and consequents, it allows a data miner to determine the support of multiple items being bought together in comparison to the whole data set. For example, Table 2 shows that if milk is bought, then bread is bought has a support of 0.4 or 40%. This because in 2 out 5 of the transactions, milk as well as bread are bought. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlation between two or more products in the supermarket example. Minimum support thresholds are useful for determining which itemsets are preferred or interesting. If we set the support threshold to ≥0.4 in Table 3, then the{milk}⇒{eggs}{\displaystyle \{\mathrm {milk} \}\Rightarrow \{\mathrm {eggs} \}}would be removed since it did not meet the minimum threshold of 0.4. 
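The support values quoted above can be checked with a few lines of code. The five transactions below are an assumed reconstruction, chosen to be consistent with every figure mentioned in the text; they are not necessarily the actual Table 2.

```python
# Assumed five-transaction database consistent with the support values in the text.
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if set(itemset) <= t) / len(transactions)

print(support({"beer", "diapers"}))          # 0.2
print(support({"milk", "bread", "butter"}))  # 0.2
print(support({"milk", "bread"}))            # 0.4
print(support({"milk", "eggs"}))             # 0.2, below a 0.4 threshold
```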
Minimum threshold is used to remove samples where there is not a strong enough support or confidence to deem the sample as important or interesting in the dataset. Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset and prompt a closer look at the sample to find more information on the connection between the items. Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. Below is a table that shows the comparison and contrast between support and support × confidence, using the information from Table 4 to derive the confidence values. The support ofXwith respect toTis defined as the proportion of transactions in the dataset which contains the itemsetX. Denoting a transaction by(i,t){\displaystyle (i,t)}whereiis the unique identifier of the transaction andtis its itemset, the support may be written as: This notation can be used when defining more complicated datasets where the items and itemsets may not be as easy as our supermarket example above. Other examples of where support can be used is in finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.[12] Confidence is the percentage of all transactions satisfyingXthat also satisfyY.[14] With respect toT, the confidence value of an association rule, often denoted asX⇒Y{\displaystyle X\Rightarrow Y}, is the ratio of transactions containing bothXandYto the total amount ofXvalues present, whereXis the antecedent andYis the consequent. Confidence can also be interpreted as an estimate of theconditional probabilityP(EY|EX){\displaystyle P(E_{Y}|E_{X})}, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.[13][15] It is commonly depicted as: The equation illustrates that confidence can be computed by calculating the co-occurrence of transactionsXandYwithin the dataset in ratio to transactions containing onlyX. This means that the number of transactions in bothXandYis divided by those just inX. For example, Table 2 shows the rule{butter,bread}⇒{milk}{\displaystyle \{\mathrm {butter,bread} \}\Rightarrow \{\mathrm {milk} \}}which has a confidence of1/51/5=0.20.2=1.0{\displaystyle {\frac {1/5}{1/5}}={\frac {0.2}{0.2}}=1.0}in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule{fruit}⇒{eggs}{\displaystyle \{\mathrm {fruit} \}\Rightarrow \{\mathrm {eggs} \}}, however, has a confidence of2/53/5=0.40.6=0.67{\displaystyle {\frac {2/5}{3/5}}={\frac {0.4}{0.6}}=0.67}. This suggests that eggs are bought 67% of the times that fruit is brought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those times consisting of egg purchases. For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements are removed. 
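Using the same assumed transactions as in the support sketch above, the two confidence values worked out in this passage can be verified directly:

```python
# Same assumed transactions as in the support sketch above.
transactions = [
    {"milk", "bread", "fruit"}, {"butter", "eggs", "fruit"}, {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"}, {"bread"},
]

def support(itemset):
    return sum(1 for t in transactions if set(itemset) <= t) / len(transactions)

def confidence(antecedent, consequent):
    """conf(X => Y) = supp(X union Y) / supp(X)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(confidence({"butter", "bread"}, {"milk"}))  # 1.0
print(round(confidence({"fruit"}, {"eggs"}), 2))  # 0.67
```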
Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%). Any data that does not have a confidence of at least 0.5 is omitted. Applying thresholds strengthens the associations between items as the data is further examined, by emphasizing the items that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, where the relationship between items is highlighted through both their confidence and their support, instead of just one measure. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support and is often used for a more in-depth understanding of the relationship between the items. Overall, using confidence in association rule mining is a great way to bring awareness to data relations. Its greatest benefit is highlighting the relationship of particular items to one another within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer several different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset, so while the rule for milk and bread, for example, may have a confidence of 100%, it only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as Support × Confidence, instead of relying solely on one measure to define the relationships. Theliftof a rule is defined as: lift(X⇒Y)=supp(X∪Y)supp(X)×supp(Y){\displaystyle \mathrm {lift} (X\Rightarrow Y)={\frac {\mathrm {supp} (X\cup Y)}{\mathrm {supp} (X)\times \mathrm {supp} (Y)}}} or the ratio of the observed support to that expected if X and Y wereindependent. For example, the rule{milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}}has a lift of0.20.4×0.4=1.25{\displaystyle {\frac {0.2}{0.4\times 0.4}}=1.25}. If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events. If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets. If the lift is < 1, that lets us know the items are substitutes for each other. This means that the presence of one item has a negative effect on the presence of the other item, and vice versa. The value of lift is that it considers both the support of the rule and the overall data set.[13] Theconvictionof a rule is defined asconv(X⇒Y)=1−supp(Y)1−conf(X⇒Y){\displaystyle \mathrm {conv} (X\Rightarrow Y)={\frac {1-\mathrm {supp} (Y)}{1-\mathrm {conf} (X\Rightarrow Y)}}}.[16] For example, the rule{milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}}has a conviction of1−0.41−0.5=1.2{\displaystyle {\frac {1-0.4}{1-0.5}}=1.2}, and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent divided by the observed frequency of incorrect predictions.
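The lift of 1.25 and the conviction of 1.2 computed above (and interpreted further in the next paragraph) can likewise be verified on the same assumed transactions used in the earlier sketches:

```python
# Same assumed transactions again, so the check is self-contained.
transactions = [
    {"milk", "bread", "fruit"}, {"butter", "eggs", "fruit"}, {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"}, {"bread"},
]

def support(itemset):
    return sum(1 for t in transactions if set(itemset) <= t) / len(transactions)

def lift(X, Y):
    return support(set(X) | set(Y)) / (support(X) * support(Y))

def conviction(X, Y):
    conf = support(set(X) | set(Y)) / support(X)
    return (1 - support(Y)) / (1 - conf)

print(lift({"milk", "bread"}, {"butter"}))        # ~1.25
print(conviction({"milk", "bread"}, {"butter"}))  # ~1.2  (conf = 0.5, supp(butter) = 0.4)
```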
In this example, the conviction value of 1.2 shows that the rule{milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}}would be incorrect 20% more often (1.2 times as often) if the association between X and Y was purely random chance. In addition to confidence, other measures ofinterestingnessfor rules have been proposed. Some popular measures are: Several more measures are presented and compared by Tan et al.[20]and by Hahsler.[21]Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness." The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al.,[2]which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" is introduced already in the 1966 paper[22]on GUHA, a general data mining method developed byPetr Hájeket al.[23] An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules withsupp(X){\displaystyle \mathrm {supp} (X)}andconf(X⇒Y){\displaystyle \mathrm {conf} (X\Rightarrow Y)}greater than user defined constraints.[24] One limitation of the standard approach to discovering associations is that by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items in the left-hand-side and 1 item in the right-hand-side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05 it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery[25][26]controls this risk, in most cases reducing the risk of findinganyspurious associations to a user-specified significance level. Many algorithms for generating association rules have been proposed. Some well-known algorithms areApriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. Another step needs to be done after to generate rules from frequent itemsets found in a database. Apriori is given by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties. Overview:Aprioriuses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known ascandidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori usesbreadth-first searchand aHash treestructure to count candidate item sets efficiently. 
It generates candidate item sets of lengthk{\displaystyle k}from item sets of lengthk−1{\displaystyle k-1}. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequentk{\displaystyle k}-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. Example:Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example, a row could have {a, c}, which means it is affected by mutation 'a' and mutation 'c'. Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3. Since all support values are three or above there is no pruning. The frequent item set is {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set. Now we will make our minimum support value 4, so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set. Since we only have one item, the next set of combinations of quadruplets is empty, so the algorithm will stop. Advantages and Limitations: Apriori has some limitations. Candidate generation can result in large candidate sets. For example, 10^4 frequent 1-itemsets will generate roughly 10^7 candidate 2-itemsets. The algorithm also needs to scan the database frequently, to be specific n+1 scans where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large. This is because in the Eclat algorithm, if the dataset is too large, the tid-lists become too large for memory. FP-growth outperforms Apriori and Eclat. This is due to the FP-growth algorithm not having candidate generation or testing, using a compact data structure, and only having one database scan.[27] Eclat[11](alt. ECLAT, which stands for Equivalence Class Transformation) is abacktrackingalgorithm, which traverses the frequent itemset lattice graph in adepth-first search(DFS) fashion. Whereas thebreadth-first search(BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of its subsets by virtue of the downward-closure property. Furthermore, it will almost certainly use less memory, as DFS has a lower space complexity than BFS. To illustrate this, let there be a frequent itemset {a, b, c}. A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoescombinatorial explosion. It is suitable for both sequential as well as parallel execution with locality-enhancing properties.[28][29] FP stands for frequent pattern.[30] In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'.
In the second pass, it builds the FP-tree structure by inserting transactions into atrie. Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly. Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share most frequent items, the FP-tree provides high compression close to tree root. Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the apriori algorithm). Growth begins from the bottom of the header table i.e. the item with the smallest support by finding all sorted transactions that end in that item. Call this itemI{\displaystyle I}. A new conditional tree is created which is the original FP-tree projected ontoI{\displaystyle I}. The supports of all nodes in the projected tree are re-counted with each node getting the sum of its children counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional onI{\displaystyle I}meet the minimum support threshold. The resulting paths from root toI{\displaystyle I}will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree. Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins.[31] The ASSOC procedure[32]is a GUHA method which mines for generalized association rules using fastbitstringsoperations. The association rules mined by this method are more general than those output by apriori, for example "items" can be connected both with conjunction and disjunctions and the relation between antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in apriori: an arbitrary combination of supported interest measures can be used. OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support.[33]Initially used to find rules for a fixed consequent[33][34]it has subsequently been extended to find rules with any item as a consequent.[35]OPUS search is the core technology in the popular Magnum Opus association discovery system. A famous story about association rule mining is the "beer and diaper" story. A purported survey of behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true.[36]Daniel Powers says:[36] In 1992, Thomas Blischok, manager of a retail consulting group atTeradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves. Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. 
Consider the following MRAR where the first item consists of three relationslive in,nearbyandhumid: “Those wholive ina place which isnearbya city withhumidclimate type and also areyoungerthan 20⟹{\displaystyle \implies }theirhealth conditionis good”. Such association rules can be extracted from RDBMS data or semantic web data.[37] Contrast set learningis a form of associative learning.Contrast set learnersuse rules that differ meaningfully in their distribution across subsets.[38][39] Weighted class learningis another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results. High-order pattern discoveryfacilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data.[40] K-optimal pattern discoveryprovides an alternative to the standard approach to association rule learning which requires that each pattern appear frequently in the data. Approximate Frequent Itemsetmining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0.[41] Generalized Association Rulesuse a hierarchical taxonomy (concept hierarchy). Quantitative Association Rulesinvolve categorical and quantitative data. Interval Data Association Rulespartition the data into intervals, e.g. partitioning age into 5-year-increment ranges. Sequential pattern miningdiscovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions.[42] Subspace Clustering, a specific type ofclustering high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models.[43] Warmr, shipped as part of the ACE data mining suite, allows association rule learning for first order relational rules.[44]
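Of the algorithms surveyed above, Eclat's vertical (tid-list) formulation is compact enough to sketch in full. The following is an illustrative implementation, assuming the usual formulation in which each item is mapped to the set of transaction IDs that contain it and supports are obtained by intersecting tid-lists during a depth-first traversal of the itemset lattice:

```python
def eclat(transactions, min_support):
    """Return {frozenset(itemset): support_count} using DFS over tid-list intersections."""
    # Build the vertical representation: item -> set of transaction ids (tid-list).
    tidlists = {}
    for tid, t in enumerate(transactions):
        for item in t:
            tidlists.setdefault(item, set()).add(tid)

    frequent = {}

    def dfs(prefix, prefix_tids, items):
        for i, (item, tids) in enumerate(items):
            new_tids = prefix_tids & tids if prefix else tids
            if len(new_tids) >= min_support:
                itemset = prefix | {item}
                frequent[frozenset(itemset)] = len(new_tids)
                # Extend only with the remaining items to avoid revisiting itemsets.
                dfs(itemset, new_tids, items[i + 1:])

    dfs(set(), set(), sorted(tidlists.items()))
    return frequent

print(eclat([{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}], 3))
```

By the downward-closure property, any extension of an infrequent prefix is skipped, which is what keeps the depth-first exploration tractable.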
https://en.wikipedia.org/wiki/FP-growth
Instatistics, thePearson correlation coefficient(PCC)[a]is acorrelation coefficientthat measureslinearcorrelation between two sets of data. It is the ratio between thecovarianceof two variables and the product of theirstandard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linearcorrelationof variables, and ignores many other types of relationships or correlations. As a simple example, one would expect the age and height of a sample of children from a school to have a Pearson correlation coefficient significantly greater than 0, but less than 1 (as 1 would represent an unrealistically perfect correlation). It was developed byKarl Pearsonfrom a related idea introduced byFrancis Galtonin the 1880s, and for which the mathematical formula was derived and published byAuguste Bravaisin 1844.[b][6][7][8][9]The naming of the coefficient is thus an example ofStigler's Law. The correlation coefficient can be derived by considering the cosine of the angle between two points representing the two sets of x and y co-ordinate data.[10]This expression is therefore a number between -1 and 1 and is equal to unity when all the points lie on a straight line. Pearson's correlation coefficient is thecovarianceof the two variables divided by the product of their standard deviations. The form of the definition involves a "product moment", that is, the mean (the firstmomentabout the origin) of the product of the mean-adjusted random variables; hence the modifierproduct-momentin the name.[verification needed] Pearson's correlation coefficient, when applied to apopulation, is commonly represented by the Greek letterρ(rho) and may be referred to as thepopulation correlation coefficientor thepopulation Pearson correlation coefficient. Given a pair of random variables(X,Y){\displaystyle (X,Y)}(for example, Height and Weight), the formula forρ[11]is[12] ρX,Y=cov⁡(X,Y)σXσY{\displaystyle \rho _{X,Y}={\frac {\operatorname {cov} (X,Y)}{\sigma _{X}\sigma _{Y}}}} where The formula forcov⁡(X,Y){\displaystyle \operatorname {cov} (X,Y)}can be expressed in terms ofmeanandexpectation. Since[11] the formula forρ{\displaystyle \rho }can also be written as ρX,Y=E⁡[(X−μX)(Y−μY)]σXσY{\displaystyle \rho _{X,Y}={\frac {\operatorname {\mathbb {E} } [(X-\mu _{X})(Y-\mu _{Y})]}{\sigma _{X}\sigma _{Y}}}} where The formula forρ{\displaystyle \rho }can be expressed in terms of uncentered moments. Since the formula forρ{\displaystyle \rho }can also be written asρX,Y=E⁡[XY]−E⁡[X]E⁡[Y]E⁡[X2]−(E⁡[X])2E⁡[Y2]−(E⁡[Y])2.{\displaystyle \rho _{X,Y}={\frac {\operatorname {\mathbb {E} } [XY]-\operatorname {\mathbb {E} } [X]\operatorname {\mathbb {E} } [Y]}{{\sqrt {\operatorname {\mathbb {E} } \left[X^{2}\right]-\left(\operatorname {\mathbb {E} } [X]\right)^{2}}}~{\sqrt {\operatorname {\mathbb {E} } \left[Y^{2}\right]-\left(\operatorname {\mathbb {E} } [Y]\right)^{2}}}}}.} Pearson's correlation coefficient, when applied to asample, is commonly represented byrxy{\displaystyle r_{xy}}and may be referred to as thesample correlation coefficientor thesample Pearson correlation coefficient. We can obtain a formula forrxy{\displaystyle r_{xy}}by substituting estimates of the covariances and variances based on a sample into the formula above. 
Given paired data{(x1,y1),…,(xn,yn)}{\displaystyle \left\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\right\}}consisting ofn{\displaystyle n}pairs,rxy{\displaystyle r_{xy}}is defined as rxy=∑i=1n(xi−x¯)(yi−y¯)∑i=1n(xi−x¯)2∑i=1n(yi−y¯)2{\displaystyle r_{xy}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{{\sqrt {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}{\sqrt {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}}}}} where Rearranging gives us this[11]formula forrxy{\displaystyle r_{xy}}: wheren,xi,yi,x¯,y¯{\displaystyle n,x_{i},y_{i},{\bar {x}},{\bar {y}}}are defined as above. Rearranging again gives us this formula forrxy{\displaystyle r_{xy}}: wheren,xi,yi{\displaystyle n,x_{i},y_{i}}are defined as above. This formula suggests a convenient single-pass algorithm for calculating sample correlations, though depending on the numbers involved, it can sometimes benumerically unstable. An equivalent expression gives the formula forrxy{\displaystyle r_{xy}}as the mean of the products of thestandard scoresas follows: where Alternative formulae forrxy{\displaystyle r_{xy}}are also available. For example, one can use the following formula forrxy{\displaystyle r_{xy}}: where If(X,Y){\displaystyle (X,Y)}isjointlygaussian, with mean zero andvarianceΣ{\displaystyle \Sigma }, thenΣ=[σX2ρX,YσXσYρX,YσXσYσY2]{\displaystyle \Sigma ={\begin{bmatrix}\sigma _{X}^{2}&\rho _{X,Y}\sigma _{X}\sigma _{Y}\\\rho _{X,Y}\sigma _{X}\sigma _{Y}&\sigma _{Y}^{2}\\\end{bmatrix}}}. Under heavy noise conditions, extracting the correlation coefficient between two sets ofstochastic variablesis nontrivial, in particular whereCanonical Correlation Analysisreports degraded correlation values due to the heavy noise contributions. A generalization of the approach is given elsewhere.[13] In case of missing data, Garren derived themaximum likelihoodestimator.[14] Some distributions (e.g.,stable distributionsother than anormal distribution) do not have a defined variance. The values of both the sample and population Pearson correlation coefficients are on or between −1 and 1. Correlations equal to +1 or −1 correspond to data points lying exactly on a line (in the case of the sample correlation), or to a bivariate distribution entirelysupportedon a line (in the case of the population correlation). The Pearson correlation coefficient is symmetric: corr(X,Y) = corr(Y,X). A key mathematical property of the Pearson correlation coefficient is that it isinvariantunder separate changes in location and scale in the two variables. That is, we may transformXtoa+bXand transformYtoc+dY, wherea,b,c, anddare constants withb,d> 0, without changing the correlation coefficient. (This holds for both the population and sample Pearson correlation coefficients.) More general linear transformations do change the correlation: see§ Decorrelation of n random variablesfor an application of this. The correlation coefficient ranges from −1 to 1. An absolute value of exactly 1 implies that a linear equation describes the relationship betweenXandYperfectly, with all data points lying on aline. The correlation sign is determined by theregression slope: a value of +1 implies that all data points lie on a line for whichYincreases asXincreases, whereas a value of -1 implies a line whereYincreases whileXdecreases.[15]A value of 0 implies that there is no linear dependency between the variables.[16] More generally,(Xi−X)(Yi−Y)is positive if and only ifXiandYilie on the same side of their respective means. 
Thus the correlation coefficient is positive ifXiandYitend to be simultaneously greater than, or simultaneously less than, their respective means. The correlation coefficient is negative (anti-correlation) ifXiandYitend to lie on opposite sides of their respective means. Moreover, the stronger either tendency is, the larger is theabsolute valueof the correlation coefficient. Rodgers and Nicewander[17]cataloged thirteen ways of interpreting correlation or simple functions of it: For uncentered data, there is a relation between the correlation coefficient and the angleφbetween the two regression lines,y=gX(x)andx=gY(y), obtained by regressingyonxandxonyrespectively. (Here,φis measured counterclockwise within the first quadrant formed around the lines' intersection point ifr> 0, or counterclockwise from the fourth to the second quadrant ifr< 0.) One can show[18]that if the standard deviations are equal, thenr= secφ− tanφ, where sec and tan aretrigonometric functions. For centered data (i.e., data which have been shifted by the sample means of their respective variables so as to have an average of zero for each variable), the correlation coefficient can also be viewed as thecosineof theangleθbetween the two observedvectorsinN-dimensional space (forNobservations of each variable).[19] Both the uncentered (non-Pearson-compliant) and centered correlation coefficients can be determined for a dataset. As an example, suppose five countries are found to have gross national products of 1, 2, 3, 5, and 8 billion dollars, respectively. Suppose these same five countries (in the same order) are found to have 11%, 12%, 13%, 15%, and 18% poverty. Then letxandybe ordered 5-element vectors containing the above data:x= (1, 2, 3, 5, 8)andy= (0.11, 0.12, 0.13, 0.15, 0.18). By the usual procedure for finding the angleθbetween two vectors (seedot product), theuncenteredcorrelation coefficient is This uncentered correlation coefficient is identical with thecosine similarity. The above data were deliberately chosen to be perfectly correlated:y= 0.10 + 0.01x. The Pearson correlation coefficient must therefore be exactly one. Centering the data (shiftingxbyℰ(x) = 3.8andybyℰ(y) = 0.138) yieldsx= (−2.8, −1.8, −0.8, 1.2, 4.2)andy= (−0.028, −0.018, −0.008, 0.012, 0.042), from which as expected. Several authors have offered guidelines for the interpretation of a correlation coefficient.[20][21]However, all such criteria are in some ways arbitrary.[21]The interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.8 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences, where there may be a greater contribution from complicating factors. Statistical inference based on Pearson's correlation coefficient often focuses on one of the following two aims: Methods of achieving one or both of these aims are discussed below. Permutation testsprovide a direct approach to performing hypothesis tests and constructing confidence intervals. A permutation test for Pearson's correlation coefficient involves the following two steps: To perform the permutation test, repeat steps (1) and (2) a large number of times. Thep-valuefor the permutation test is the proportion of thervalues generated in step (2) that are larger than the Pearson correlation coefficient that was calculated from the original data. 
Here "larger" can mean either that the value is larger in magnitude, or larger in signed value, depending on whether atwo-sidedorone-sidedtest is desired. Thebootstrapcan be used to construct confidence intervals for Pearson's correlation coefficient. In the "non-parametric" bootstrap,npairs (xi,yi) are resampled "with replacement" from the observed set ofnpairs, and the correlation coefficientris calculated based on the resampled data. This process is repeated a large number of times, and the empirical distribution of the resampledrvalues are used to approximate thesampling distributionof the statistic. A 95%confidence intervalforρcan be defined as the interval spanning from the 2.5th to the 97.5thpercentileof the resampledrvalues. Ifx{\displaystyle x}andy{\displaystyle y}are random variables, with a simple linear relationship between them with an additive normal noise (i.e., y= a + bx + e), then astandard errorassociated to the correlation is wherer{\displaystyle r}is the correlation andn{\displaystyle n}the sample size.[22][23] For pairs from an uncorrelatedbivariate normal distribution, thesampling distributionof thestudentizedPearson's correlation coefficient followsStudent'st-distributionwith degrees of freedomn− 2. Specifically, if the underlying variables have a bivariate normal distribution, the variable has a student'st-distribution in the null case (zero correlation).[24]This holds approximately in case of non-normal observed values if sample sizes are large enough.[25]For determining the critical values forrthe inverse function is needed: Alternatively, large sample, asymptotic approaches can be used. Another early paper[26]provides graphs and tables for general values ofρ, for small sample sizes, and discusses computational approaches. In the case where the underlying variables are not normal, the sampling distribution of Pearson's correlation coefficient follows a Student'st-distribution, but the degrees of freedom are reduced.[27] For data that follow abivariate normal distribution, the exact density functionf(r) for the sample correlation coefficientrof a normal bivariate is[28][29][30] whereΓ{\displaystyle \Gamma }is thegamma functionand2F1(a,b;c;z){\displaystyle {}_{2}\mathrm {F} _{1}(a,b;c;z)}is theGaussian hypergeometric function. In the special case whenρ=0{\displaystyle \rho =0}(zero population correlation), the exact density functionf(r) can be written as whereB{\displaystyle \mathrm {B} }is thebeta function, which is one way of writing the density of a Student's t-distribution for astudentizedsample correlation coefficient, as above. In practice,confidence intervalsandhypothesis testsrelating toρare usually carried out using the,Variance-stabilizing transformation,Fisher transformation,F{\displaystyle F}: F(r) approximately follows anormal distributionwith wherenis the sample size. The approximation error is lowest for a large sample sizen{\displaystyle n}and smallr{\displaystyle r}andρ0{\displaystyle \rho _{0}}and increases otherwise. Using the approximation, az-scoreis under thenull hypothesisthatρ=ρ0{\displaystyle \rho =\rho _{0}}, given the assumption that the sample pairs areindependent and identically distributedand follow abivariate normal distribution. Thus an approximatep-valuecan be obtained from a normal probability table. For example, ifz= 2.2 is observed and a two-sided p-value is desired to test the null hypothesis thatρ=0{\displaystyle \rho =0}, the p-value is2Φ(−2.2) = 0.028, where Φ is the standard normalcumulative distribution function. 
To obtain a confidence interval for ρ, we first compute a confidence interval forF(ρ{\displaystyle \rho }): The inverse Fisher transformation brings the interval back to the correlation scale. For example, suppose we observer= 0.7 with a sample size ofn=50, and we wish to obtain a 95% confidence interval forρ. The transformed value isarctanh⁡(r)=0.8673{\textstyle \operatorname {arctanh} \left(r\right)=0.8673}, so the confidence interval on the transformed scale is0.8673±1.9647{\displaystyle 0.8673\pm {\frac {1.96}{\sqrt {47}}}}, or (0.5814, 1.1532). Converting back to the correlation scale yields (0.5237, 0.8188). The square of the sample correlation coefficient is typically denotedr2and is a special case of thecoefficient of determination. In this case, it estimates the fraction of the variance inYthat is explained byXin asimple linear regression. So if we have the observed datasetY1,…,Yn{\displaystyle Y_{1},\dots ,Y_{n}}and the fitted datasetY^1,…,Y^n{\displaystyle {\hat {Y}}_{1},\dots ,{\hat {Y}}_{n}}then as a starting point the total variation in theYiaround their average value can be decomposed as follows where theY^i{\displaystyle {\hat {Y}}_{i}}are the fitted values from the regression analysis. This can be rearranged to give The two summands above are the fraction of variance inYthat is explained byX(right) and that is unexplained byX(left). Next, we apply a property ofleast squaresregression models, that the sample covariance betweenY^i{\displaystyle {\hat {Y}}_{i}}andYi−Y^i{\displaystyle Y_{i}-{\hat {Y}}_{i}}is zero. Thus, the sample correlation coefficient between the observed and fitted response values in the regression can be written (calculation is under expectation, assumes Gaussian statistics) Thus wherer(Y,Y^)2{\displaystyle r(Y,{\hat {Y}})^{2}}is the proportion of variance inYexplained by a linear function ofX. In the derivation above, the fact that can be proved by noticing that the partial derivatives of theresidual sum of squares(RSS) overβ0andβ1are equal to 0 in the least squares model, where In the end, the equation can be written as where The symbolSSreg{\displaystyle {\text{SS}}_{\text{reg}}}is called the regression sum of squares, also called theexplained sum of squares, andSStot{\displaystyle {\text{SS}}_{\text{tot}}}is thetotal sum of squares(proportional to thevarianceof the data). The population Pearson correlation coefficient is defined in terms ofmoments, and therefore exists for any bivariateprobability distributionfor which thepopulationcovarianceis defined and themarginalpopulation variancesare defined and are non-zero. Some probability distributions, such as theCauchy distribution, have undefined variance and hence ρ is not defined ifXorYfollows such a distribution. In some practical applications, such as those involving data suspected to follow aheavy-tailed distribution, this is an important consideration. However, the existence of the correlation coefficient is usually not a concern; for instance, if the range of the distribution is bounded, ρ is always defined. Like many commonly used statistics, the samplestatisticris notrobust,[32]so its value can be misleading ifoutliersare present.[33][34]Specifically, the PMCC is neither distributionally robust,[35]nor outlier resistant[32](seeRobust statistics § Definition). Inspection of thescatterplotbetweenXandYwill typically reveal a situation where lack of robustness might be an issue, and in such cases it may be advisable to use a robust measure of association. 
Note however that while most robust estimators of association measurestatistical dependencein some way, they are generally not interpretable on the same scale as the Pearson correlation coefficient. Statistical inference for Pearson's correlation coefficient is sensitive to the data distribution. Exact tests, and asymptotic tests based on theFisher transformationcan be applied if the data are approximately normally distributed, but may be misleading otherwise. In some situations, thebootstrapcan be applied to construct confidence intervals, andpermutation testscan be applied to carry out hypothesis tests. Thesenon-parametricapproaches may give more meaningful results in some situations where bivariate normality does not hold. However the standard versions of these approaches rely onexchangeabilityof the data, meaning that there is no ordering or grouping of the data pairs being analyzed that might affect the behavior of the correlation estimate. A stratified analysis is one way to either accommodate a lack of bivariate normality, or to isolate the correlation resulting from one factor while controlling for another. IfWrepresents cluster membership or another factor that it is desirable to control, we canstratifythe data based on the value ofW, then calculate a correlation coefficient within each stratum. The stratum-level estimates can then be combined to estimate the overall correlation while controlling forW.[36] Variations of the correlation coefficient can be calculated for different purposes. Here are some examples. The sample correlation coefficientris not an unbiased estimate ofρ. For data that follows abivariate normal distribution, the expectationE[r]for the sample correlation coefficientrof a normal bivariate is[37] The unique minimum variance unbiased estimatorradjis given by[38] where: An approximately unbiased estimatorradjcan be obtained[citation needed]by truncatingE[r]and solving this truncated equation: An approximate solution[citation needed]to equation (2) is where in (3) Another proposed[11]adjusted correlation coefficient is[citation needed] radj≈rfor large values ofn. Suppose observations to be correlated have differing degrees of importance that can be expressed with a weight vectorw. To calculate the correlation between vectorsxandywith the weight vectorw(all of lengthn),[39][40] The reflective correlation is a variant of Pearson's correlation in which the data are not centered around their mean values.[citation needed]The population reflective correlation is The reflective correlation is symmetric, but it is not invariant under translation: The sample reflective correlation is equivalent tocosine similarity: The weighted version of the sample reflective correlation is Scaled correlation is a variant of Pearson's correlation in which the range of the data is restricted intentionally and in a controlled manner to reveal correlations between fast components intime series.[41]Scaled correlation is defined as average correlation across short segments of data. LetK{\displaystyle K}be the number of segments that can fit into the total length of the signalT{\displaystyle T}for a given scales{\displaystyle s}: The scaled correlation across the entire signalsr¯s{\displaystyle {\bar {r}}_{s}}is then computed as whererk{\displaystyle r_{k}}is Pearson's coefficient of correlation for segmentk{\displaystyle k}. 
By choosing the parameters{\displaystyle s}, the range of values is reduced and the correlations on long time scale are filtered out, only the correlations on short time scales being revealed. Thus, the contributions of slow components are removed and those of fast components are retained. A distance metric for two variablesXandYknown asPearson's distancecan be defined from their correlation coefficient as[42] Considering that the Pearson correlation coefficient falls between [−1, +1], the Pearson distance lies in [0, 2]. The Pearson distance has been used incluster analysisand data detection for communications and storage with unknown gain and offset.[43] The Pearson "distance" defined this way assigns distance greater than 1 to negative correlations. In reality, both strong positive correlation and negative correlations are meaningful, so care must be taken when Pearson "distance" is used for nearest neighbor algorithm as such algorithm will only include neighbors with positive correlation and exclude neighbors with negative correlation. Alternatively, an absolute valued distance,dX,Y=1−|ρX,Y|{\displaystyle d_{X,Y}=1-|\rho _{X,Y}|}, can be applied, which will take both positive and negative correlations into consideration. The information on positive and negative association can be extracted separately, later. For variablesX= {x1,...,xn} andY= {y1,...,yn} that are defined on the unit circle[0, 2π), it is possible to define a circular analog of Pearson's coefficient.[44]This is done by transforming data points inXandYwith asinefunction such that the correlation coefficient is given as: wherex¯{\displaystyle {\bar {x}}}andy¯{\displaystyle {\bar {y}}}are thecircular meansofXandY. This measure can be useful in fields like meteorology where the angular direction of data is important. If a population or data-set is characterized by more than two variables, apartial correlationcoefficient measures the strength of dependence between a pair of variables that is not accounted for by the way in which they both change in response to variations in a selected subset of the other variables. For two observables,X{\displaystyle X}andY{\displaystyle Y}, in a bipartite quantum system Pearson correlation coefficient is defined as[45][46] where Cor(X,Y){\displaystyle \mathbb {Cor} (X,Y)}is symmetric, i.e.,Cor(X,Y)=Cor(Y,X){\displaystyle \mathbb {Cor} (X,Y)=\mathbb {Cor} (Y,X)}, and its absolute value is invariant under affine transformations. It is always possible to remove the correlations between all pairs of an arbitrary number of random variables by using a data transformation, even if the relationship between the variables is nonlinear. A presentation of this result for population distributions is given by Cox & Hinkley.[47] A corresponding result exists for reducing the sample correlations to zero. Suppose a vector ofnrandom variables is observedmtimes. LetXbe a matrix whereXi,j{\displaystyle X_{i,j}}is thejth variable of observationi. LetZm,m{\displaystyle Z_{m,m}}be anmbymsquare matrix with every element 1. ThenDis the data transformed so every random variable has zero mean, andTis the data transformed so all variables have zero mean and zero correlation with all other variables – the samplecorrelation matrixofTwill be the identity matrix. This has to be further divided by the standard deviation to get unit variance. The transformed variables will be uncorrelated, even though they may not beindependent. where an exponent of−+1⁄2represents thematrix square rootof theinverseof a matrix. 
The correlation matrix ofTwill be the identity matrix. If a new data observationxis a row vector ofnelements, then the same transform can be applied toxto get the transformed vectorsdandt: This decorrelation is related toprincipal components analysisfor multivariate data.
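A minimal sketch of this decorrelation, assuming NumPy: the data are centered, and the inverse matrix square root of the sample covariance matrix is formed by eigendecomposition. The function name and example data are illustrative.

```python
import numpy as np

def decorrelate(X):
    """Transform an (m, n) data matrix so its sample correlation matrix is the identity."""
    D = X - X.mean(axis=0)               # remove the mean of each variable
    C = (D.T @ D) / D.shape[0]           # sample covariance matrix
    # Inverse matrix square root of the symmetric matrix C via eigendecomposition.
    vals, vecs = np.linalg.eigh(C)
    C_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return D @ C_inv_sqrt

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 2] = 0.8 * X[:, 0] + 0.2 * X[:, 2]          # introduce correlation between variables
T = decorrelate(X)
print(np.round(np.corrcoef(T, rowvar=False), 6)) # approximately the identity matrix
```

As noted above, the transformed variables are uncorrelated but not necessarily independent.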
https://en.wikipedia.org/wiki/Pearson_correlation_coefficient
Ininformation retrieval,tf–idf(alsoTF*IDF,TFIDF,TF–IDF, orTf–idf), short forterm frequency–inverse document frequency, is a measure of importance of a word to adocumentin a collection orcorpus, adjusted for the fact that some words appear more frequently in general.[1]Like the bag-of-words model, it models a document as amultisetof words, withoutword order. It is a refinement over the simplebag-of-words model, by allowing the weight of words to depend on the rest of the corpus. It was often used as aweighting factorin searches of information retrieval,text mining, anduser modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2]Variations of the tf–idf weighting scheme were often used bysearch enginesas a central tool in scoring and ranking a document'srelevancegiven a userquery. One of the simplestranking functionsis computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model. Karen Spärck Jones(1972) conceived a statistical interpretation of term-specificity called Inverse Document Frequency (idf), which became a cornerstone of term weighting:[3] The specificity of a term can be quantified as an inverse function of the number of documents in which it occurs. For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:[4] We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is. Term frequency,tf(t,d), is the relative frequency of termtwithin documentd, whereft,dis theraw countof a term in a document, i.e., the number of times that termtoccurs in documentd. Note the denominator is simply the total number of terms in documentd(counting each occurrence of the same term separately). There are various other ways to define term frequency:[5]: 128 Theinverse document frequencyis a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is thelogarithmically scaledinverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient): with Then tf–idf is calculated as A high weight in tf–idf is reached by a high termfrequency(in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0. Idf was introduced as "term specificity" byKaren Spärck Jonesin a 1972 paper.
Although it has worked well as aheuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to findinformation theoreticjustifications for it.[7] Spärck Jones's own explanation did not propose much theory, aside from a connection toZipf's law.[7]Attempts have been made to put idf on aprobabilisticfooting,[8]by estimating the probability that a given documentdcontains a termtas the relative document frequency, so that we can define idf as Namely, the inverse document frequency is the logarithm of "inverse" relative document frequency. This probabilistic interpretation in turn takes the same form as that ofself-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriateevent spacesfor the requiredprobability distributions: not only documents need to be taken into account, but also queries and terms.[7] Both term frequency and inverse document frequency can be formulated in terms ofinformation theory; it helps to understand why their product has a meaning in terms of joint informational content of a document. A characteristic assumption about the distributionp(d,t){\displaystyle p(d,t)}is that: This assumption and its implications, according to Aizawa: "represent the heuristic that tf–idf employs."[9] Theconditional entropyof a "randomly chosen" document in the corpusD{\displaystyle D}, conditional to the fact it contains a specific termt{\displaystyle t}(and assuming that all documents have equal probability to be chosen) is: In terms of notation,D{\displaystyle {\cal {D}}}andT{\displaystyle {\cal {T}}}are "random variables" corresponding to respectively draw a document or a term. Themutual informationcan be expressed as The last step is to expandpt{\displaystyle p_{t}}, the unconditional probability to draw a term, with respect to the (random) choice of a document, to obtain: This expression shows that summing the Tf–idf of all possible terms and documents recovers the mutual information between documents and term taking into account all the specificities of their joint distribution.[9]Each Tf–idf hence carries the "bit of information" attached to a term x document pair. Suppose that we have term count tables of a corpus consisting of only two documents, as listed on the right. The calculation of tf–idf for the term "this" is performed as follows: In its raw frequency form, tf is just the frequency of the "this" for each document. In each document, the word "this" appears once; but as the document 2 has more words, its relative frequency is smaller. An idf is constant per corpus, andaccountsfor the ratio of documents that include the word "this". In this case, we have a corpus of two documents and all of them include the word "this". So tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents. The word "example" is more interesting - it occurs three times, but only in the second document: Finally, (using thebase 10 logarithm). The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10]The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". 
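Returning to the term-level definitions, the worked example above can be reproduced in a few lines. This is a sketch only: it uses relative frequency for tf, the base-10 logarithm for idf, and a small illustrative two-document corpus rather than the exact term tables referred to in the text.

```python
import math

# Illustrative two-document corpus (not the exact term tables from the text).
docs = [
    "this is a sample".split(),
    "this is another another example example example".split(),
]

def tf(term, doc):
    # Relative frequency: occurrences of the term divided by the document length.
    return doc.count(term) / len(doc)

def idf(term, corpus):
    # Assumes the term occurs in at least one document of the corpus.
    n_containing = sum(1 for doc in corpus if term in doc)
    return math.log10(len(corpus) / n_containing)

def tfidf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

for term in ("this", "example"):
    print(term, [round(tfidf(term, d, docs), 3) for d in docs])
# "this" appears in every document, so its idf (and hence its tf-idf) is 0 in both.
# "example" appears only in the second document, so it receives a positive weight there.
```

The same weighting idea extends beyond terms in documents, as the citation example above already suggests.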
In addition, tf–idf was applied to "visual words" with the purpose of conducting object matching in videos,[11]and entire sentences.[12]However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13] A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14]TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF. In TF–IDuF,[15]idf is not calculated based on the document corpus that is to be searched or recommended. Instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was as effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.
https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
Cross-validation,[2][3][4]sometimes calledrotation estimation[5][6][7]orout-of-sample testing, is any of various similarmodel validationtechniques for assessing how the results of astatisticalanalysis willgeneralizeto an independent data set. Cross-validation includesresamplingand sample splitting methods that use different portions of the data to test and train a model on different iterations. It is often used in settings where the goal is prediction, and one wants to estimate howaccuratelyapredictive modelwill perform in practice. It can also be used to assess the quality of a fitted model and the stability of its parameters. In a prediction problem, a model is usually given a dataset ofknown dataon which training is run (training dataset), and a dataset ofunknown data(orfirst seendata) against which the model is tested (called thevalidation datasetortesting set).[8][9]The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems likeoverfittingorselection bias[10]and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem). One round of cross-validation involvespartitioningasampleofdataintocomplementarysubsets, performing the analysis on one subset (called thetraining set), and validating the analysis on the other subset (called thevalidation setortesting set). To reducevariability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model's predictive performance. In summary, cross-validation combines (averages) measures offitnessin prediction to derive a more accurate estimate of model prediction performance.[11] Assume amodelwith one or more unknownparameters, and a data set to which the model can be fit (the training data set). The fitting processoptimizesthe model parameters to make the model fit the training data as well as possible. If anindependentsample of validation data is taken from the samepopulationas the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect.[citation needed] In linear regression, there existrealresponse valuesy1,…,yn{\textstyle y_{1},\ldots ,y_{n}}, andnp-dimensionalvectorcovariatesx1, ...,xn. The components of the vectorxiare denotedxi1, ...,xip. Ifleast squaresis used to fit a function in the form of ahyperplaneŷ=a+βTxto the data (xi,yi)1 ≤i≤n, then the fit can be assessed using themean squared error(MSE). The MSE for given estimated parameter valuesaandβon the training set (xi,yi)1 ≤i≤nis defined as: If the model is correctly specified, it can be shown under mild assumptions that theexpected valueof the MSE for the training set is (n−p− 1)/(n+p+ 1) < 1 times the expected value of the MSE for the validation set (the expected value is taken over the distribution of training sets). Thus, a fitted model and computed MSE on the training set will result in an optimisticallybiasedassessment of how well the model will fit an independent data set. 
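The optimism of the training-set MSE can be illustrated by simulation: fit a least-squares hyperplane on a training sample and evaluate the same fit on an independent validation sample from the same population. Everything below (sample size, noise level, number of replications) is an arbitrary illustration, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 5
beta_true = rng.normal(size=p)

def make_data(n):
    X = rng.normal(size=(n, p))
    y = 1.0 + X @ beta_true + rng.normal(size=n)
    return X, y

def fit_ls(X, y):
    # Least squares with an explicit intercept column.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mse(coef, X, y):
    A = np.column_stack([np.ones(len(y)), X])
    return np.mean((y - A @ coef) ** 2)

train_mse, valid_mse = [], []
for _ in range(2000):
    Xtr, ytr = make_data(n)            # training sample
    Xva, yva = make_data(n)            # independent validation sample from the same population
    coef = fit_ls(Xtr, ytr)
    train_mse.append(mse(coef, Xtr, ytr))
    valid_mse.append(mse(coef, Xva, yva))

print(np.mean(train_mse), np.mean(valid_mse))   # training MSE is systematically smaller
```

Averaged over many replications, the training-set MSE comes out noticeably below the validation MSE, consistent with the (n − p − 1)/(n + p + 1) factor quoted above, so the training set gives an optimistically biased picture of out-of-sample fit.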
This biased estimate is called thein-sampleestimate of the fit, whereas the cross-validation estimate is anout-of-sampleestimate.[citation needed] Since in linear regression it is possible to directly compute the factor (n−p− 1)/(n+p+ 1) by which the training MSE underestimates the validation MSE under the assumption that the model specification is valid, cross-validation can be used for checking whether the model has beenoverfitted, in which case the MSE in the validation set will substantially exceed its anticipated value. (Cross-validation in the context of linear regression is also useful in that it can be used to select an optimallyregularizedcost function.) In most other regression procedures (e.g.logistic regression), there is no simple formula to compute the expected out-of-sample fit. Cross-validation is, thus, a generally applicable way to predict the performance of a model on unavailable data using numerical computation in place of theoretical analysis. Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation. Exhaustive cross-validation methods are cross-validation methods which learn and test on all possible ways to divide the original sample into a training and a validation set. Leave-p-out cross-validation (LpO CV) involves usingpobservations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample on a validation set ofpobservations and a training set.[12] LpO cross-validation require training and validating the modelCpn{\displaystyle C_{p}^{n}}times, wherenis the number of observations in the original sample, and whereCpn{\displaystyle C_{p}^{n}}is thebinomial coefficient. Forp> 1 and for even moderately largen, LpO CV can become computationally infeasible. For example, withn= 100 andp= 30,C30100≈3×1025.{\displaystyle C_{30}^{100}\approx 3\times 10^{25}.} A variant of LpO cross-validation with p=2 known as leave-pair-out cross-validation has been recommended as a nearly unbiased method for estimating the area underROC curveof binary classifiers.[13] Leave-one-out cross-validation (LOOCV) is a particular case of leave-p-out cross-validation withp= 1. The process looks similar tojackknife; however, with cross-validation one computes a statistic on the left-out sample(s), while with jackknifing one computes a statistic from the kept samples only. LOO cross-validation requires less computation time than LpO cross-validation because there are onlyC1n=n{\displaystyle C_{1}^{n}=n}passes rather thanCpn{\displaystyle C_{p}^{n}}. However,n{\displaystyle n}passes may still require quite a large computation time, in which case other approaches such as k-fold cross validation may be more appropriate.[14] Pseudo-code algorithm: Input: x, {vector of lengthNwith x-values of incoming points} y, {vector of lengthNwith y-values of the expected result} interpolate( x_in, y_in, x_out ), { returns the estimation for pointx_outafter the model is trained withx_in-y_inpairs} Output: err, {estimate for the prediction error} Steps: Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation. Ink-fold cross-validation, the original sample is randomly partitioned intokequal sized subsamples, often referred to as "folds". Of theksubsamples, a single subsample is retained as the validation data for testing the model, and the remainingk− 1 subsamples are used as training data. 
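The pseudo-code above specifies LOOCV in terms of an interpolate(x_in, y_in, x_out) routine; a minimal NumPy sketch of the same procedure is given below, with an ordinary least-squares line standing in for the prediction method. The model and data are illustrative.

```python
import numpy as np

def interpolate(x_in, y_in, x_out):
    # Stand-in prediction method: fit a straight line by least squares.
    slope, intercept = np.polyfit(x_in, y_in, deg=1)
    return slope * x_out + intercept

def loocv_error(x, y):
    """Mean squared prediction error estimated by leave-one-out cross-validation."""
    n = len(x)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                     # leave observation i out
        pred = interpolate(x[mask], y[mask], x[i])   # train on the rest, predict the held-out point
        errors[i] = (y[i] - pred) ** 2
    return errors.mean()

x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.1 * np.random.default_rng(0).normal(size=20)
print(loocv_error(x, y))
```

Each of the n passes refits the model, which is what makes LOOCV expensive; the k-fold scheme reduces the cost by holding out one of k folds at a time instead of a single observation.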
The cross-validation process is then repeatedktimes, with each of theksubsamples used exactly once as the validation data. Thekresults can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used,[15]but in generalkremains an unfixed parameter. For example, settingk=2results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two setsd0andd1, so that both sets are equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train ond0and validate ond1, followed by training ond1and validating ond0. Whenk=n(the number of observations),k-fold cross-validation is equivalent to leave-one-out cross-validation.[16] Instratifiedk-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels. Inrepeatedcross-validation the data is randomly split intokpartitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice.[17] When many different statistical ormachine learning modelsare being considered,greedyk-fold cross-validation can be used to quickly identify the most promising candidate models.[18] In the holdout method, we randomly assign data points to two setsd0andd1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary although typically the test set is smaller than the training set. We then train (build a model) ond0and test (evaluate its performance) ond1. In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy (F*) will tend to be unstable since it will not be smoothed out by multiple iterations (see below). Similarly, indicators of the specific role played by various predictor variables (e.g., values of regression coefficients) will tend to be unstable. While the holdout method can be framed as "the simplest kind of cross-validation",[19]many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation.[6][20] This method, also known asMonte Carlocross-validation,[21][22]creates multiple random splits of the dataset into training and validation data.[23]For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. The advantage of this method (overk-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. 
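A minimal k-fold sketch, assuming NumPy: the observations are randomly partitioned into k folds, each fold serves once as validation data, and the k scores are averaged. The fit/predict interface and the least-squares example are illustrative, not a fixed API.

```python
import numpy as np

def k_fold_cv(X, y, fit, predict, k=10, rng=None):
    """Average validation MSE over k folds.

    fit(X_train, y_train) -> model;  predict(model, X_valid) -> predictions.
    """
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(y))                 # random partition of the observations
    folds = np.array_split(idx, k)                # k (nearly) equal-sized folds
    scores = []
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[valid])
        scores.append(np.mean((y[valid] - pred) ** 2))
    return float(np.mean(scores))

# Example with ordinary least squares as the model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)
fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
predict = lambda coef, A: A @ coef
print(k_fold_cv(X, y, fit, predict, k=10, rng=rng))
```

Repeated random sub-sampling, described above, differs from this scheme in that each split is drawn independently rather than taken from a single partition, which is why its validation subsets may overlap.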
This method also exhibitsMonte Carlovariation, meaning that the results will vary if the analysis is repeated with different random splits. As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation. In a stratified variant of this approach, the random samples are generated in such a way that the mean response value (i.e. the dependent variable in the regression) is equal in the training and testing sets. This is particularly useful if the responses aredichotomouswith an unbalanced representation of the two response values in the data. A method that applies repeated random sub-sampling isRANSAC.[24] When cross-validation is used simultaneously for selection of the best set ofhyperparametersand for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist. At least two variants can be distinguished: This is a truly nested variant which contains an outer loop ofksets and an inner loop oflsets. The total data set is split intoksets. One by one, a set is selected as the (outer) test set and thek- 1 other sets are combined into the corresponding outer training set. This is repeated for each of theksets. Each outer training set is further sub-divided intolsets. One by one, a set is selected as inner test (validation) set and thel- 1 other sets are combined into the corresponding inner training set. This is repeated for each of thelsets. The inner training sets are used to fit model parameters, while the outer test set is used as a validation set to provide an unbiased evaluation of the model fit. Typically, this is repeated for many different hyperparameters (or even different model types) and the validation set is used to determine the best hyperparameter set (and model type) for this inner training set. After this, a new model is fit on the entire outer training set, using the best set of hyperparameters from the inner cross-validation. The performance of this model is then evaluated using the outer test set. This is a type of k*l-fold cross-validation whenl=k- 1. A single k-fold cross-validation is used with both avalidation and test set. The total data set is split intoksets. One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the otherk- 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets. Finally, for the selected parameter set, the test set is used to evaluate the model with the best parameter set. Here, two variants are possible: either evaluating the model that was trained on the training set or evaluating a new model that was fit on the combination of the training and the validation set. The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, forbinary classificationproblems, each case in the validation set is either predicted correctly or incorrectly. 
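The k*l-fold procedure described above can be sketched as two nested loops: the inner folds select a hyperparameter, and the outer fold scores a model refit with that choice. Ridge regression and the candidate grid below are illustrative stand-ins; nothing here is a fixed interface.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Ridge regression without an intercept, kept short for illustration.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def mse(coef, X, y):
    return np.mean((y - X @ coef) ** 2)

def nested_cv(X, y, lambdas, k=5, l=4, rng=None):
    rng = rng or np.random.default_rng()
    outer = np.array_split(rng.permutation(len(y)), k)
    outer_scores = []
    for i in range(k):
        test = outer[i]
        train_full = np.concatenate([outer[j] for j in range(k) if j != i])
        # Inner loop: choose the hyperparameter using the outer training set only.
        inner = np.array_split(rng.permutation(train_full), l)
        inner_err = []
        for lam in lambdas:
            errs = []
            for a in range(l):
                valid = inner[a]
                train = np.concatenate([inner[b] for b in range(l) if b != a])
                errs.append(mse(ridge_fit(X[train], y[train], lam), X[valid], y[valid]))
            inner_err.append(np.mean(errs))
        best_lam = lambdas[int(np.argmin(inner_err))]
        # Refit on the whole outer training set with the chosen hyperparameter,
        # then evaluate once on the untouched outer test fold.
        coef = ridge_fit(X[train_full], y[train_full], best_lam)
        outer_scores.append(mse(coef, X[test], y[test]))
    return float(np.mean(outer_scores))

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + rng.normal(size=120)
print(nested_cv(X, y, lambdas=[0.01, 0.1, 1.0, 10.0], k=5, l=4, rng=rng))
```

The sketch scores folds with squared error; for a binary classification problem each held-out case is instead simply predicted correctly or incorrectly.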
In this situation the misclassification error rate can be used to summarize the fit, although other measures derived from information (e.g., counts, frequency) contained within acontingency tableorconfusion matrixcould also be used. When the value being predicted is continuously distributed, themean squared error,root mean squared errorormedian absolute deviationcould be used to summarize the errors. When users apply cross-validation to select a good configurationλ{\displaystyle \lambda }, then they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and include relevant information from previous research. In a forecasting combination exercise, for instance, cross-validation can be applied to estimate the weights that are assigned to each forecast. Since a simple equal-weighted forecast is difficult to beat, a penalty can be added for deviating from equal weights.[25]Or, if cross-validation is applied to assign individual weights to observations, then one can penalize deviations from equal weights to avoid wasting potentially relevant information.[25]Hoornweg (2018) shows how a tuning parameterγ{\displaystyle \gamma }can be defined so that a user can intuitively balance between the accuracy of cross-validation and the simplicity of sticking to a reference parameterλR{\displaystyle \lambda _{R}}that is defined by the user. Ifλi{\displaystyle \lambda _{i}}denotes theith{\displaystyle i^{th}}candidate configuration that might be selected, then theloss functionthat is to be minimized can be defined as Relative accuracy can be quantified asMSE(λi)/MSE(λR){\displaystyle {\mbox{MSE}}(\lambda _{i})/{\mbox{MSE}}(\lambda _{R})}, so that the mean squared error of a candidateλi{\displaystyle \lambda _{i}}is made relative to that of a user-specifiedλR{\displaystyle \lambda _{R}}. The relative simplicity term measures the amount thatλi{\displaystyle \lambda _{i}}deviates fromλR{\displaystyle \lambda _{R}}relative to the maximum amount of deviation fromλR{\displaystyle \lambda _{R}}. Accordingly, relative simplicity can be specified as(λi−λR)2(λmax−λR)2{\displaystyle {\frac {(\lambda _{i}-\lambda _{R})^{2}}{(\lambda _{\max }-\lambda _{R})^{2}}}}, whereλmax{\displaystyle \lambda _{\max }}corresponds to theλ{\displaystyle \lambda }value with the highest permissible deviation fromλR{\displaystyle \lambda _{R}}. Withγ∈[0,1]{\displaystyle \gamma \in [0,1]}, the user determines how high the influence of the reference parameter is relative to cross-validation. One can add relative simplicity terms for multiple configurationsc=1,2,...,C{\displaystyle c=1,2,...,C}by specifying the loss function as Hoornweg (2018) shows that a loss function with such an accuracy-simplicity tradeoff can also be used to intuitively defineshrinkage estimatorslike the (adaptive) lasso andBayesian/ridge regression.[25]Click on thelassofor an example. Suppose we choose a measure of fitF, and use cross-validation to produce an estimateF*of the expected fitEFof a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribution, the resulting values forF*will vary. The statistical properties ofF*result from this variation. 
The variance ofF*can be large.[26][27]For this reason, if two statistical procedures are compared based on the results of cross-validation, the procedure with the better estimated performance may not actually be the better of the two procedures (i.e. it may not have the better value ofEF). Some progress has been made on constructingconfidence intervalsaround cross-validation estimates,[26]but this is considered a difficult problem. Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the prediction method is expensive to train, cross-validation can be very slow since the training must be carried out repeatedly. In some cases such asleast squaresandkernel regression, cross-validation can be sped up significantly by pre-computing certain values that are needed repeatedly in the training, or by using fast "updating rules" such as theSherman–Morrison formula. However one must be careful to preserve the "total blinding" of the validation set from the training procedure, otherwise bias may result. An extreme example of accelerating cross-validation occurs inlinear regression, where the results of cross-validation have aclosed-form expressionknown as theprediction residual error sum of squares(PRESS). Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled. In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). Both of these can introduce systematic differences between the training and validation sets. For example, if a model for prediction of trend changes in financial quotations is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk for beingdiagnosedwith a particular disease within the next year. If the model is trained using data from a study involving only a specific population group (e.g. young people or males), but is then applied to the general population, the cross-validation results from the training set could differ greatly from the actual predictive performance. In many applications, models also may be incorrectly specified and vary as a function of modeler biases and/or arbitrary choices. When this occurs, there may be an illusion that the system changes in external samples, whereas the reason is that the model has missed a critical predictor and/or included a confounded predictor. New evidence is that cross-validation by itself is not very predictive of external validity, whereas a form of experimental validation known as swap sampling that does control for human bias can be much more predictive of external validity.[28]As defined by this large MAQC-II study across 30,000 models, swap sampling incorporates cross-validation in the sense that predictions are tested across independent training and validation samples. Yet, models are also developed across these independent samples and by modelers who are blinded to one another. 
When there is a mismatch in these models developed across these swapped training and validation samples as happens quite frequently, MAQC-II shows that this will be much more predictive of poor external predictive validity than traditional cross-validation. The reason for the success of the swapped sampling is a built-in control for human biases in model building. In addition to placing too much faith in predictions that may vary across modelers and lead to poor external validity due to these confounding modeler effects, these are some other ways that cross-validation can be misused: Due to correlations, cross-validation with random splits might be problematic fortime-seriesmodels (if we are more interested in evaluating extrapolation, rather than interpolation).[32]A more appropriate approach might be to use rolling cross-validation.[33] However, if performance is described by a singlesummary statistic, it is possible that the approach described by Politis and Romano as astationary bootstrap[34]will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length. Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested inoptical character recognition, and we are considering using either aSupport Vector Machine(SVM) ork-nearest neighbors(KNN) to predict the true character from an image of a handwritten character. Using cross-validation, we can obtain empirical estimates comparing these two methods in terms of their respective fractions of misclassified characters. In contrast, the in-sample estimate will not represent the quantity of interest (i.e. the generalization error).[35] Cross-validation can also be used invariable selection.[36]Suppose we are using theexpressionlevels of 20proteinsto predict whether acancerpatient will respond to adrug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. For most modeling procedures, if we compare feature subsets using the in-sample error rates, the best performance will occur when all 20 features are used. However under cross-validation, the model with the best fit will generally include only a subset of the features that are deemed truly informative. A recent development in medical statistics is its use in meta-analysis. It forms the basis of the validation statistic, Vn which is used to test the statistical validity of meta-analysis summary estimates.[37]It has also been used in a more conventional sense in meta-analysis to estimate the likely prediction error of meta-analysis results.[38]
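For the time-series setting mentioned above, a rolling (rolling-origin) scheme keeps the temporal order: each split trains on an initial segment of the series and validates on the observations that immediately follow. A minimal sketch, assuming NumPy; the expanding window and the toy forecaster are illustrative.

```python
import numpy as np

def rolling_cv(y, fit, forecast, initial=50, horizon=1):
    """Expanding-window (rolling-origin) evaluation for a univariate series y.

    fit(history) -> model;  forecast(model, horizon) -> array of length horizon.
    Returns the mean squared forecast error over all rolling origins.
    """
    errors = []
    for t in range(initial, len(y) - horizon + 1):
        model = fit(y[:t])                        # train only on the past
        pred = forecast(model, horizon)           # predict the next `horizon` points
        errors.append(np.mean((y[t:t + horizon] - pred) ** 2))
    return float(np.mean(errors))

# Illustrative forecaster: predict the mean of the last 10 observations.
fit = lambda history: history[-10:].mean()
forecast = lambda model, h: np.full(h, model)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))               # a random-walk series
print(rolling_cv(y, fit, forecast, initial=50))
```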
https://en.wikipedia.org/wiki/Cross-validation_(statistics)
Adata modelis anabstract modelthat organizes elements ofdataandstandardizeshow they relate to one another and to the properties of real-worldentities.[2][3]For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner. The corresponding professional activity is called generallydata modelingor, more specifically,database design. Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or a data scholar. A datamodeling languageand notation are often represented in graphical form as diagrams.[4] A data model can sometimes be referred to as adata structure, especially in the context ofprogramming languages. Data models are often complemented byfunction models, especially in the context ofenterprise models. A data model explicitly determines thestructure of data; conversely,structured datais data organized according to an explicit data model or data structure. Structured data is in contrast tounstructured dataandsemi-structured data. The termdata modelcan refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of theobjectsand relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization. At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses. Managing large quantities of structured andunstructured datais a primary function ofinformation systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such asword processingdocuments,email messages, pictures, digital audio, and video:XDM, for example, provides a data model forXMLdocuments. The main aim of data models is to support the development ofinformation systemsby providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".[5] The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.[5] A data model explicitly determines the structure of data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.[3] A data modelinstancemay be one of three kinds according toANSIin 1975:[6] The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. 
The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of aconceptual data model. Such a design can be detailed into alogical data model. In later stages, this model may be translated intophysical data model. However, it is also possible to implement a conceptual model directly. One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958),[7][8]who argued for "a precise and abstract way of specifying the informational and time characteristics of adata processingproblem". They wanted to create "a notation that should enable theanalystto organize the problem around any piece ofhardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken byCODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific ISinformation algebra.[8] In the 1960s data modeling gained more significance with the initiation of themanagement information system(MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generationdatabase system, calledIntegrated Data Store(IDS), was designed byCharles Bachmanat General Electric. Two famous database models, thenetwork data modeland thehierarchical data model, were proposed during this period of time".[9]Towards the end of the 1960s,Edgar F. Coddworked out his theories of data arrangement, and proposed therelational modelfor database management based onfirst-order predicate logic.[10] In the 1970sentity–relationship modelingemerged as a new type of conceptual data modeling, originally formalized in 1976 byPeter Chen. Entity–relationship models were being used in the first stage ofinformation systemdesign during therequirements analysisto describe information needs or the type ofinformationthat is to be stored in adatabase. This technique can describe anyontology, i.e., an overview and classification of concepts and their relationships, for a certainarea of interest. In the 1970sG.M. Nijssendeveloped "Natural Language Information Analysis Method" (NIAM) method, and developed this in the 1980s in cooperation withTerry HalpinintoObject–Role Modeling(ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based. Bill Kent, in his 1978 bookData and Reality,[11]compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". 
In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth. In the 1980s, according to Jan L. Harrington (2000), "the development of theobject-orientedparadigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."[12] During the early 1990s, three Dutch mathematicians Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued the development on the work ofG.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information ModelingFCO-IM. A database model is a specification describing how a database is structured and used. Several such models have been suggested. Common models include: A data structure diagram (DSD) is adiagramand data model used to describeconceptual data modelsby providing graphical notations which documententitiesand theirrelationships, and theconstraintsthat bind them. The basic graphic elements of DSDs areboxes, representing entities, andarrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities. Data structure diagrams are an extension of theentity–relationship model(ER model). In DSDs,attributesare specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity. There are several styles for representing data structure diagrams, with the notable difference in the manner of definingcardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality. An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), could be used to represent an abstractconceptual data model(orsemantic data modelor physical data model) used insoftware engineeringto represent structured data. There are several notations used for ERMs. Like DSD's,attributesare specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes. There are several styles for representing data structure diagrams, with a notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality. A data model inGeographic information systemsis a mathematical construct for representing geographic objects or surfaces as data. For example, Generic data models are generalizations of conventional data models. 
They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant. A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world.[13]A semantic data model is sometimes called aconceptual data model. The logical data structure of adatabase management system(DBMS), whetherhierarchical,network, orrelational, cannot totally satisfy therequirementsfor a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from aconceptual viewhas led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure. The real world, in terms of resources, ideas, events, etc., are symbolically defined within physical data stores. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.[13] Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of severalarchitecture domainsthat form the pillars of anenterprise architectureorsolution architecture. A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc. Essential to realizing the target state, Data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system. Data modeling insoftware engineeringis the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining businessrequirementsfor a database. It is sometimes calleddatabase modelingbecause a data model is eventually implemented in a database.[16] The figure illustrates the way data models are developed and used today. Aconceptual data modelis developed based on the datarequirementsfor the application that is being developed, perhaps in the context of anactivity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. 
This is then used as the start point for interface ordatabase design.[5] Some important properties of data for which requirements need to be met are: Another kind of data model describes how to organize data using adatabase management systemor other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as thephysical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns. Whiledata analysisis a common term for data modeling, the activity actually has more in common with the ideas and methods ofsynthesis(inferring general concepts from particular instances) than it does withanalysis(identifying component concepts from more general ones). {Presumably we call ourselvessystems analystsbecause no one can saysystems synthesists.} Data modeling strives to bring the data structures of interest together into a cohesive, inseparable, whole by eliminating unnecessary data redundancies and by relating data structures withrelationships. A different approach is to useadaptive systemssuch asartificial neural networksthat can autonomously create implicit models of data. A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the mostefficientalgorithmto be used. The choice of the data structure often begins from the choice of anabstract data type. A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicatedgrammarfor a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system. The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identifyabstractionsof such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such anabstract entityclass is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people. The term data model can have two meanings:[17] A data model theory has three main components:[17] For example, in therelational model, the structural part is based on a modified concept of themathematical relation; the integrity part is expressed infirst-order logicand the manipulation part is expressed using therelational algebra,tuple calculusanddomain calculus. A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. 
Business requirements are normally captured by a semanticlogical data model. This is transformed into a physical data model instance from which is generated a physical database. For example, a data modeler may use a data modeling tool to create anentity–relationship modelof the corporate data repository of some business enterprise. This model is transformed into arelational model, which in turn generates arelational database. Patterns[18]are common data modeling structures that occur in many data models. A data-flow diagram (DFD) is a graphical representation of the "flow" of data through aninformation system. It differs from theflowchartas it shows thedataflow instead of thecontrolflow of the program. A data-flow diagram can also be used for thevisualizationofdata processing(structured design). Data-flow diagrams were invented byLarry Constantine, the original developer of structured design,[20]based on Martin and Estrin's "data-flow graph" model of computation. It is common practice to draw acontext-level data-flow diagramfirst which shows the interaction between the system and outside entities. TheDFDis designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled An Information model is not a type of data model, but more or less an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. According to Lee (1999)[21]an information model is a representation of concepts, relationships, constraints, rules, andoperationsto specifydata semanticsfor a chosen domain of discourse. It can provide sharable, stable, and organized structure of information requirements for the domain context.[21]More in general the terminformation modelis used for models of individual things, such as facilities, buildings, process plants, etc. In those cases the concept is specialised toFacility Information Model,Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they areobject models(e.g. usingUML),entity–relationship modelsorXML schemas. An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be theobject model ofthe represented service or system. For example, theDocument Object Model (DOM)[1]is a collection of objects that represent apagein aweb browser, used byscriptprograms to examine and dynamically change the page. 
There is aMicrosoft Excelobject model[22]for controlling Microsoft Excel from another program, and theASCOMTelescope Driver[23]is an object model for controlling an astronomical telescope. Incomputingthe termobject modelhas a distinct second meaning of the general properties ofobjectsin a specific computerprogramming language, technology, notation ormethodologythat uses them. For example, theJavaobject model, theCOMobject model, orthe object model ofOMT. Such object models are usually defined using concepts such asclass,message,inheritance,polymorphism, andencapsulation. There is an extensive literature on formalized object models as a subset of theformal semantics of programming languages. Object–Role Modeling (ORM) is a method forconceptual modeling, and can be used as a tool for information and rules analysis.[25] Object–Role Modeling is a fact-oriented method for performingsystems analysisat the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand. The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).[26] The Unified Modeling Language (UML) is a standardized general-purposemodeling languagein the field ofsoftware engineering. It is agraphical languagefor visualizing, specifying, constructing, and documenting theartifactsof a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including:[27] UML offers a mix offunctional models, data models, anddatabase models.
https://en.wikipedia.org/wiki/Structured_data
Unstructured data(orunstructured information) is information that either does not have a pre-defineddata modelor is not organized in a pre-defined manner. Unstructured information is typicallytext-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities andambiguitiesthat make it difficult to understand using traditional programs as compared to data stored in fielded form in databases orannotated(semantically tagged) in documents. In 1998,Merrill Lynchsaid "unstructured data comprises the vast majority of data found in an organization, some estimates run as high as 80%."[1]It is unclear what the source of this number is, but nonetheless it is accepted by some.[2]Other sources have reported similar or higher percentages of unstructured data.[3][4][5] As of 2012[update],IDCandDell EMCproject that data will grow to 40zettabytesby 2020, resulting in a 50-fold growth from the beginning of 2010.[6]More recently, IDC andSeagatepredict that the globaldataspherewill grow to 163 zettabytes by 2025[7]and majority of that will be unstructured. TheComputer World magazinestates that unstructured information might account for more than 70–80% of all data in organizations.[1] The earliest research intobusiness intelligencefocused in on unstructured textual data, rather than numerical data.[8]As early as 1958,computer scienceresearchers likeH.P. Luhnwere particularly concerned with the extraction and classification of unstructured text.[8]However, only since the turn of the century has the technology caught up with the research interest. In 2004, theSAS Institutedeveloped theSASText Miner, which usesSingular Value Decomposition(SVD) to reduce ahyper-dimensionaltextualspaceinto smaller dimensions for significantly more efficient machine-analysis.[9]The mathematical and technological advances sparked bymachinetextual analysis prompted a number of businesses to research applications, leading to the development of fields likesentiment analysis,voice of the customermining, and call center optimization.[10]The emergence ofBig Datain the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such aspredictive analyticsandroot cause analysis.[11] The term is imprecise for several reasons: Techniques such asdata mining,natural language processing(NLP), andtext analyticsprovide different methods tofind patternsin, or otherwise interpret, this information. Common techniques for structuring text usually involve manualtagging with metadataorpart-of-speech taggingfor furthertext mining-based structuring. TheUnstructured Information Management Architecture(UIMA) standard provided a common framework for processing this information to extract meaning and create structured data about the information. Software that creates machine-processable structure can utilize the linguistic, auditory, and visual structure that exist in all forms of human communication.[12]Algorithms can infer this inherent structure from text, for instance, by examining wordmorphology, sentence syntax, and other small- and large-scale patterns. Unstructured information can then be enriched and tagged to address ambiguities and relevancy-based techniques then used to facilitate search and discovery. Examples of "unstructured data" may include books, journals, documents,metadata,health records,audio,video,analog data, images, files, and unstructured text such as the body of ane-mailmessage,Web page, orword-processordocument. 
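The SVD-based reduction mentioned above can be illustrated with a minimal sketch (not the SAS Text Miner implementation): NumPy factorizes a tiny, invented term–document count matrix and keeps only the strongest singular directions, which is the basic idea behind reducing a hyper-dimensional textual space to a few latent "topics".

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# The vocabulary and documents are invented purely for illustration.
terms = ["price", "earnings", "patient", "therapy"]
X = np.array([
    [3, 2, 0, 0],   # "price"    appears mainly in the two finance documents
    [2, 3, 0, 1],   # "earnings"
    [0, 0, 4, 3],   # "patient"  appears mainly in the two medical documents
    [0, 1, 3, 2],   # "therapy"
], dtype=float)

# Truncated SVD: keep the k strongest singular directions ("latent topics").
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T   # each document as a k-dimensional vector

print("singular values:", np.round(s, 2))
print("2-D document coordinates:\n", np.round(doc_coords, 2))
```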
While the main content being conveyed does not have a defined structure, it generally comes packaged in objects (e.g. in files or documents, ...) that themselves have structure and are thus a mix of structured and unstructured data, but collectively this is still referred to as "unstructured data".[13]For example, anHTMLweb page is tagged, but HTML mark-up typically serves solely for rendering. It does not capture the meaning or function of tagged elements in ways that support automated processing of the information content of the page.XHTMLtagging does allow machine processing of elements, although it typically does not capture or convey the semantic meaning of tagged terms. Since unstructured data commonly occurs inelectronic documents, the use of acontentordocument managementsystem which can categorize entire documents is often preferred over data transfer and manipulation from within the documents. Document management thus provides the means to convey structure ontodocument collections. Search engineshave become popular tools for indexing and searching through such data, especially text. Specific computational workflows have been developed to impose structure upon the unstructured data contained within text documents. These workflows are generally designed to handle sets of thousands or even millions of documents, or far more than manual approaches to annotation may permit. Several of these approaches are based upon the concept ofonline analytical processing, or OLAP, and may be supported by data models such as text cubes.[14]Once document metadata is available through a data model, generating summaries of subsets of documents (i.e., cells within a text cube) may be performed with phrase-based approaches.[15] Biomedical research generates one major source of unstructured data as researchers often publish their findings in scholarly journals. Though the language in these documents is challenging to derive structural elements from (e.g., due to the complicated technical vocabulary contained within and thedomain knowledgerequired to fully contextualize observations), the results of these activities may yield links between technical and medical studies[16]and clues regarding new disease therapies.[17]Recent efforts to enforce structure upon biomedical documents includeself-organizing mapapproaches for identifying topics among documents,[18]general-purposeunsupervised algorithms,[19]and an application of the CaseOLAP workflow[15]to determine associations between protein names andcardiovascular diseasetopics in the literature.[20]CaseOLAP defines phrase-category relationships in an accurate (identifies relationships), consistent (highly reproducible), and efficient manner. This platform offers enhanced accessibility and empowers the biomedical community with phrase-mining tools for widespread biomedical research applications.[20] In Sweden (EU), pre 2018, some data privacy regulations did not apply if the data in question was confirmed as "unstructured".[21]This terminology, unstructured data, is rarely used in the EU afterGDPRcame into force in 2018. GDPR does neither mention nor define "unstructured data". 
It does use the word "structured" (without defining it). GDPR case-law on what defines a "filing system" includes the following: "the specific criterion and the specific form in which the set of personal data collected by each of the members who engage in preaching is actually structured is irrelevant, so long as that set of data makes it possible for the data relating to a specific person who has been contacted to be easily retrieved, which is however for the referring court to ascertain in the light of all the circumstances of the case in the main proceedings." (CJEU, Tietosuojavaltuutettu v. Jehovan todistajat, Paragraph 61). If personal data is easily retrieved, then it constitutes a filing system and is therefore in scope for GDPR, regardless of whether it is "structured" or "unstructured". Most electronic systems today,[as of?] given suitable access and software, allow for easy retrieval of data.
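Whether data counts as "easily retrieved" in this sense usually comes down to indexing. The sketch below builds a minimal inverted index over a few invented text snippets; this token-to-documents mapping is the basic structure search engines use to make unstructured text retrievable.

```python
from collections import defaultdict

# Invented collection of unstructured text snippets.
docs = {
    1: "Invoice for consulting services rendered in March",
    2: "Meeting notes: discuss invoice disputes and renewals",
    3: "Holiday schedule for the support team",
}

# Build a minimal inverted index: token -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token.strip(".,:;")].add(doc_id)

print(sorted(index["invoice"]))   # [1, 2] -> documents mentioning "invoice"
```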
https://en.wikipedia.org/wiki/Unstructured_data
Instatisticalanalysis ofbinary classificationandinformation retrievalsystems, theF-scoreorF-measureis a measure of predictive performance. It is calculated from theprecisionandrecallof the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known aspositive predictive value, and recall is also known assensitivityin diagnostic binary classification. TheF1score is theharmonic meanof the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more genericFβ{\displaystyle F_{\beta }}score applies additional weights, valuing one of precision or recall more than the other. The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if the precision or the recall is zero. The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the FourthMessage Understanding Conference(MUC-4, 1992).[1] The traditional F-measure or balanced F-score (F1score) is theharmonic meanof precision and recall:[2] Withprecision = TP / (TP + FP)andrecall = TP / (TP + FN), it follows that the numerator ofF1is the sum of their numerators and the denominator ofF1is the sum of their denominators. A more general F score,Fβ{\displaystyle F_{\beta }}, that uses a positive real factorβ{\displaystyle \beta }, whereβ{\displaystyle \beta }is chosen such that recall is consideredβ{\displaystyle \beta }times as important as precision, is: In terms ofType I and type II errorsthis becomes: Two commonly used values forβ{\displaystyle \beta }are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision. The F-measure was derived so thatFβ{\displaystyle F_{\beta }}"measures the effectiveness of retrieval with respect to a user who attachesβ{\displaystyle \beta }times as much importance to recall as precision".[3]It is based onVan Rijsbergen's effectiveness measure Their relationship is:Fβ=1−E{\displaystyle F_{\beta }=1-E}whereα=11+β2{\displaystyle \alpha ={\frac {1}{1+\beta ^{2}}}} This is related to the field ofbinary classificationwhere recall is often termed "sensitivity". Precision-recall curve, and thus theFβ{\displaystyle F_{\beta }}score, explicitly depends on the ratior{\displaystyle r}of positive to negative test cases.[12]This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[13]) is to use a standard class ratior0{\displaystyle r_{0}}when making such comparisons. The F-score is often used in the field ofinformation retrievalfor measuringsearch,document classification, andquery classificationperformance.[14]It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class. Earlier works focused primarily on the F1score, but with the proliferation of large scale search engines, performance goals changed to place more emphasis on either precision or recall[15]and soFβ{\displaystyle F_{\beta }}is seen in wide application. 
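These definitions can be made concrete with a short sketch that computes the F-beta score from raw counts, using the standard form F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); the counts themselves are invented for illustration.

```python
def f_beta(tp, fp, fn, beta=1.0):
    """F-beta score from raw true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Invented counts for illustration: precision = 0.875, recall = 0.7.
tp, fp, fn = 70, 10, 30
print(round(f_beta(tp, fp, fn, beta=1.0), 3))   # F1: harmonic mean of precision and recall
print(round(f_beta(tp, fp, fn, beta=2.0), 3))   # weighs recall more heavily
print(round(f_beta(tp, fp, fn, beta=0.5), 3))   # weighs precision more heavily
```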
The F-score is also used inmachine learning.[16]However, the F-measures do not take true negatives into account, hence measures such as theMatthews correlation coefficient,InformednessorCohen's kappamay be preferred to assess the performance of a binary classifier.[17] The F-score has been widely used in the natural language processing literature,[18]such as in the evaluation ofnamed entity recognitionandword segmentation. The F1score is theDice coefficientof the set of retrieved items and the set of relevant items.[19] David Handand others criticize the widespread use of the F1score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem.[22] According to Davide Chicco and Giuseppe Jurman, the F1score is less truthful and informative than theMatthews correlation coefficient (MCC)in binary evaluation classification.[23] David M W Powershas pointed out that F1ignores the True Negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability - the classifier predicting the true class and the true class predicting the classifier prediction, proposing separate multiclass measuresInformednessandMarkednessfor the two directions, noting that their geometric mean is correlation.[24] Another source of critique of F1is its lack of symmetry. It means it may change its value when dataset labeling is changed - the "positive" samples are named "negative" and vice versa. This criticism is met by theP4 metricdefinition, which is sometimes indicated as a symmetrical extension of F1.[25] Finally, Ferrer[26]and Dyrland et al.[27]argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems. While the F-measure is theharmonic meanof recall and precision, theFowlkes–Mallows indexis theirgeometric mean.[28] The F-score is also used for evaluating classification problems with more than two classes (Multiclass classification). A common method is to average the F-score over each class, aiming at a balanced measurement of performance.[29] Macro F1is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging-formulas have been used: the F1 score of (arithmetic) class-wise precision and recall means or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties.[30] Micro F1is the harmonic mean ofmicro precisionandmicro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equalaccuracy, because accuracy takes true negatives into account while micro F1 does not.[31]
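The difference between macro and micro averaging described above can be sketched as follows; the per-class counts are invented for a three-class, single-label problem (so pooled false positives equal pooled false negatives and micro F1 coincides with accuracy-free micro precision/recall).

```python
import numpy as np

def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Invented (tp, fp, fn) counts per class.
counts = {"A": (50, 10, 5), "B": (30, 5, 20), "C": (10, 20, 10)}

# Macro F1: arithmetic mean of class-wise F1 scores.
macro_f1 = np.mean([prf(*c)[2] for c in counts.values()])

# Micro F1: pool the counts over classes, then compute a single F1.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = prf(tp, fp, fn)[2]

print(round(macro_f1, 3), round(micro_f1, 3))
```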
https://en.wikipedia.org/wiki/F-score
Achi-squared test(alsochi-squareorχ2test) is astatistical hypothesis testused in the analysis ofcontingency tableswhen the sample sizes are large. In simpler terms, this test is primarily used to examine whether two categorical variables (two dimensions of the contingency table) are independent in influencing the test statistic (values within the table).[1]The test isvalidwhen the test statistic ischi-squared distributedunder thenull hypothesis, specificallyPearson's chi-squared testand variants thereof. Pearson's chi-squared test is used to determine whether there is astatistically significantdifference between the expectedfrequenciesand the observed frequencies in one or more categories of acontingency table. For contingency tables with smaller sample sizes, aFisher's exact testis used instead. In the standard applications of this test, the observations are classified into mutually exclusive classes. If thenull hypothesisthat there are no differences between the classes in the population is true, the test statistic computed from the observations follows aχ2frequency distribution. The purpose of the test is to evaluate how likely the observed frequencies would be assuming the null hypothesis is true. Test statistics that follow aχ2distribution occur when the observations are independent. There are alsoχ2tests for testing the null hypothesis of independence of a pair ofrandom variablesbased on observations of the pairs. Chi-squared testsoften refers to tests for which the distribution of the test statistic approaches theχ2distributionasymptotically, meaning that thesampling distribution(if the null hypothesis is true) of the test statistic approximates a chi-squared distribution more and more closely assamplesizes increase. In the 19th century, statistical analytical methods were mainly applied in biological data analysis and it was customary for researchers to assume that observations followed anormal distribution, such asSir George AiryandMansfield Merriman, whose works were criticized byKarl Pearsonin his 1900 paper.[2] At the end of the 19th century, Pearson noticed the existence of significantskewnesswithin some biological observations. In order to model the observations regardless of being normal or skewed, Pearson, in a series of articles published from 1893 to 1916,[3][4][5][6]devised thePearson distribution, a family of continuousprobability distributions, which includes the normal distribution and many skewed distributions, and proposed a method of statistical analysis consisting of using the Pearson distribution to model the observation and performing a test of goodness of fit to determine how well the model really fits to the observations. In 1900, Pearson published a paper[2]on theχ2test which is considered to be one of the foundations of modern statistics.[7]In this paper, Pearson investigated a test of goodness of fit. Suppose thatnobservations in a random sample from a population are classified intokmutually exclusive classes with respective observed numbers of observationsxi(fori= 1,2,…,k), and a null hypothesis gives the probabilitypithat an observation falls into theith class. So we have the expected numbersmi=npifor alli, where Pearson proposed that, under the circumstance of the null hypothesis being correct, asn→ ∞the limiting distribution of the quantity given below is theχ2distribution. 
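The quantity referred to above did not survive extraction; it is Pearson's statistic X^2 = sum_i (x_i - m_i)^2 / m_i with m_i = n * p_i. A minimal sketch with an invented die-rolling sample (assuming SciPy for the tail probability):

```python
import numpy as np
from scipy import stats

# Invented example: 60 rolls of a die, testing the null hypothesis of fairness.
observed = np.array([8, 9, 12, 11, 6, 14])          # x_i, summing to n = 60
expected = 60 * np.full(6, 1 / 6)                   # m_i = n * p_i

# Pearson's statistic: X^2 = sum_i (x_i - m_i)^2 / m_i
x2 = np.sum((observed - expected) ** 2 / expected)
p_value = stats.chi2.sf(x2, df=len(observed) - 1)   # k - 1 degrees of freedom

print(round(x2, 2), round(p_value, 3))
# scipy.stats.chisquare(observed) gives the same statistic and p-value here.
```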
Pearson dealt first with the case in which the expected numbersmiare large enough known numbers in all cells assuming every observationximay be taken asnormally distributed, and reached the result that, in the limit asnbecomes large,X2follows theχ2distribution withk− 1degrees of freedom. However, Pearson next considered the case in which the expected numbers depended on the parameters that had to be estimated from the sample, and suggested that, with the notation ofmibeing the true expected numbers andm′ibeing the estimated expected numbers, the difference will usually be positive and small enough to be omitted. In a conclusion, Pearson argued that if we regardedX′2as also distributed asχ2distribution withk− 1degrees of freedom, the error in this approximation would not affect practical decisions. This conclusion caused some controversy in practical applications and was not settled for 20 years until Fisher's 1922 and 1924 papers.[8][9] Onetest statisticthat follows achi-squared distributionexactly is the test that the variance of a normally distributed population has a given value based on asample variance. Such tests are uncommon in practice because the true variance of the population is usually unknown. However, there are several statistical tests where thechi-squared distributionis approximately valid: For anexact testused in place of the 2 × 2 chi-squared test for independence when all the row and column totals were fixed by design, seeFisher's exact test. When the row or column margins (or both) are random variables (as in most common research designs) this tends to be overly conservative andunderpowered.[10] For an exact test used in place of the 2 × 1 chi-squared test for goodness of fit, seebinomial test. Using thechi-squared distributionto interpretPearson's chi-squared statisticrequires one to assume that thediscreteprobability of observedbinomial frequenciesin the table can be approximated by the continuouschi-squared distribution. This assumption is not quite correct and introduces some error. To reduce the error in approximation,Frank Yatessuggested a correction for continuity that adjusts the formula forPearson's chi-squared testby subtracting 0.5 from the absolute difference between each observed value and its expected value in a2 × 2contingency table.[11]This reduces the chi-squared value obtained and thus increases itsp-value. If a sample of sizenis taken from a population having anormal distribution, then there is a result (seedistribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample ofnproduct items whose variation is to be tested. The test statisticTin this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). ThenThas a chi-squared distribution withn− 1degrees of freedom. For example, if the sample size is 21, the acceptance region forTwith a significance level of 5% is between 9.59 and 34.17. Suppose there is a city of 1,000,000 residents with four neighborhoods:A,B,C, andD. A random sample of 650 residents of the city is taken and their occupation is recorded as"white collar", "blue collar", or "no collar". 
The null hypothesis is that each person's neighborhood of residence is independent of the person's occupational classification. The data are tabulated as: Let us take the sample living in neighborhoodA, 150, to estimate what proportion of the whole 1,000,000 live in neighborhoodA. Similarly we take⁠349/650⁠to estimate what proportion of the 1,000,000 are white-collar workers. By the assumption of independence under the hypothesis we should "expect" the number of white-collar workers in neighborhoodAto be Then in that "cell" of the table, we have The sum of these quantities over all of the cells is the test statistic; in this case,≈24.57{\displaystyle \approx 24.57}. Under the null hypothesis, this sum has approximately a chi-squared distribution whose number of degrees of freedom is If the test statistic is improbably large according to that chi-squared distribution, then one rejects the null hypothesis of independence. A related issue is a test of homogeneity. Suppose that instead of giving every resident of each of the four neighborhoods an equal chance of inclusion in the sample, we decide in advance how many residents of each neighborhood to include. Then each resident has the same chance of being chosen as do all residents of the same neighborhood, but residents of different neighborhoods would have different probabilities of being chosen if the four sample sizes are not proportional to the populations of the four neighborhoods. In such a case, we would be testing "homogeneity" rather than "independence". The question is whether the proportions of blue-collar, white-collar, and no-collar workers in the four neighborhoods are the same. However, the test is done in the same way. Incryptanalysis, the chi-squared test is used to compare the distribution ofplaintextand (possibly) decryptedciphertext. The lowest value of the test means that the decryption was successful with high probability.[12][13]This method can be generalized for solving modern cryptographic problems.[14] Inbioinformatics, the chi-squared test is used to compare the distribution of certain properties of genes (e.g., genomic content, mutation rate, interaction network clustering, etc.) belonging to different categories (e.g., disease genes, essential genes, genes on a certain chromosome etc.).[15][16]
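The contingency table for the four-neighborhood example did not survive extraction. The sketch below (assuming SciPy) uses counts reconstructed to be consistent with the totals quoted in the text (650 sampled, 150 residents in neighborhood A, 349 white-collar workers); it reproduces the quoted statistic of about 24.57 with (3 − 1)(4 − 1) = 6 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Reconstructed observed counts; rows are occupations, columns are neighborhoods.
#                  A    B    C    D
observed = np.array([
    [90,  60, 104,  95],   # white collar (row total 349)
    [30,  50,  51,  20],   # blue collar
    [30,  40,  45,  35],   # no collar
])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 2), dof)          # ~24.57 with (3-1)*(4-1) = 6 degrees of freedom
print(round(expected[0, 0], 2))     # expected white-collar count in A: 150*349/650 ~ 80.54
```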
https://en.wikipedia.org/wiki/Chi-squared_test
Independenceis a fundamental notion inprobability theory, as instatisticsand the theory ofstochastic processes. Twoeventsareindependent,statistically independent, orstochastically independent[1]if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect theodds. Similarly, tworandom variablesare independent if the realization of one does not affect theprobability distributionof the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are calledpairwise independentif any two events in the collection are independent of each other, whilemutual independence(orcollective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes,independencewithout further qualification usually refers to mutual independence. Two eventsA{\displaystyle A}andB{\displaystyle B}are independent (often written asA⊥B{\displaystyle A\perp B}orA⊥⊥B{\displaystyle A\perp \!\!\!\perp B}, where the latter symbol often is also used forconditional independence) if and only if theirjoint probabilityequals the product of their probabilities:[2]: p. 29[3]: p. 10 A∩B≠∅{\displaystyle A\cap B\neq \emptyset }indicates that two independent eventsA{\displaystyle A}andB{\displaystyle B}have common elements in theirsample spaceso that they are notmutually exclusive(mutually exclusive iffA∩B=∅{\displaystyle A\cap B=\emptyset }). Why this defines independence is made clear by rewriting withconditional probabilitiesP(A∣B)=P(A∩B)P(B){\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}}as the probability at which the eventA{\displaystyle A}occurs provided that the eventB{\displaystyle B}has or is assumed to have occurred: and similarly Thus, the occurrence ofB{\displaystyle B}does not affect the probability ofA{\displaystyle A}, and vice versa. In other words,A{\displaystyle A}andB{\displaystyle B}are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined ifP(A){\displaystyle \mathrm {P} (A)}orP(B){\displaystyle \mathrm {P} (B)}are 0. Furthermore, the preferred definition makes clear by symmetry that whenA{\displaystyle A}is independent ofB{\displaystyle B},B{\displaystyle B}is also independent ofA{\displaystyle A}. Stated in terms ofodds, two events are independent if and only if theodds ratioof⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds: or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring: The odds ratio can be defined as or symmetrically for odds of⁠B{\displaystyle B}⁠given⁠A{\displaystyle A}⁠, and thus is 1 if and only if the events are independent. 
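The product definition P(A ∩ B) = P(A) P(B) can be checked exhaustively on a small sample space. The sketch below does this for two fair dice, anticipating the dice examples discussed later in the article.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event), len(outcomes))

six_first = {o for o in outcomes if o[0] == 6}
six_second = {o for o in outcomes if o[1] == 6}
sum_is_8 = {o for o in outcomes if sum(o) == 8}

print(P(six_first & six_second) == P(six_first) * P(six_second))  # True: independent
print(P(six_first & sum_is_8) == P(six_first) * P(sum_is_8))      # False: not independent
```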
A finite set of events{Ai}i=1n{\displaystyle \{A_{i}\}_{i=1}^{n}}ispairwise independentif every pair of events is independent[4]—that is, if and only if for all distinct pairs of indicesm,k{\displaystyle m,k}, A finite set of events ismutually independentif every event is independent of any intersection of the other events[4][3]: p. 11—that is, if and only if for everyk≤n{\displaystyle k\leq n}and for every k indices1≤i1<⋯<ik≤n{\displaystyle 1\leq i_{1}<\dots <i_{k}\leq n}, This is called themultiplication rulefor independent events. It isnot a single conditioninvolving only the product of all the probabilities of all single events; it must hold true for all subsets of events. For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse isnot necessarily true.[2]: p. 30 Stated in terms oflog probability, two events are independent if and only if the log probability of the joint event is the sum of the log probability of the individual events: Ininformation theory, negative log probability is interpreted asinformation content, and thus two events are independent if and only if the information content of the combined event equals the sum of information content of the individual events: SeeInformation content § Additivity of independent eventsfor details. Two random variablesX{\displaystyle X}andY{\displaystyle Y}are independentif and only if(iff) the elements of theπ-systemgenerated by them are independent; that is to say, for everyx{\displaystyle x}andy{\displaystyle y}, the events{X≤x}{\displaystyle \{X\leq x\}}and{Y≤y}{\displaystyle \{Y\leq y\}}are independent events (as defined above inEq.1). That is,X{\displaystyle X}andY{\displaystyle Y}withcumulative distribution functionsFX(x){\displaystyle F_{X}(x)}andFY(y){\displaystyle F_{Y}(y)}, are independentiffthe combined random variable(X,Y){\displaystyle (X,Y)}has ajointcumulative distribution function[3]: p. 15 or equivalently, if theprobability densitiesfX(x){\displaystyle f_{X}(x)}andfY(y){\displaystyle f_{Y}(y)}and the joint probability densityfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}exist, A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}ispairwise independentif and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarilymutually independentas defined next. A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}ismutually independentif and only if for any sequence of numbers{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}, the events{X1≤x1},…,{Xn≤xn}{\displaystyle \{X_{1}\leq x_{1}\},\ldots ,\{X_{n}\leq x_{n}\}}are mutually independent events (as defined above inEq.3). This is equivalent to the following condition on the joint cumulative distribution functionFX1,…,Xn(x1,…,xn){\displaystyle F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})}.A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}is mutually independent if and only if[3]: p. 16 It is not necessary here to require that the probability distribution factorizes for all possiblek{\displaystyle k}-elementsubsets as in the case forn{\displaystyle n}events. 
This is not required because e.g.FX1,X2,X3(x1,x2,x3)=FX1(x1)⋅FX2(x2)⋅FX3(x3){\displaystyle F_{X_{1},X_{2},X_{3}}(x_{1},x_{2},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{2}}(x_{2})\cdot F_{X_{3}}(x_{3})}impliesFX1,X3(x1,x3)=FX1(x1)⋅FX3(x3){\displaystyle F_{X_{1},X_{3}}(x_{1},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{3}}(x_{3})}. The measure-theoretically inclined reader may prefer to substitute events{X∈A}{\displaystyle \{X\in A\}}for events{X≤x}{\displaystyle \{X\leq x\}}in the above definition, whereA{\displaystyle A}is anyBorel set. That definition is exactly equivalent to the one above when the values of the random variables arereal numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in anymeasurable space(which includestopological spacesendowed by appropriate σ-algebras). Two random vectorsX=(X1,…,Xm)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})^{\mathrm {T} }}andY=(Y1,…,Yn)T{\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{n})^{\mathrm {T} }}are called independent if[5]: p. 187 whereFX(x){\displaystyle F_{\mathbf {X} }(\mathbf {x} )}andFY(y){\displaystyle F_{\mathbf {Y} }(\mathbf {y} )}denote the cumulative distribution functions ofX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }andFX,Y(x,y){\displaystyle F_{\mathbf {X,Y} }(\mathbf {x,y} )}denotes their joint cumulative distribution function. Independence ofX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }is often denoted byX⊥⊥Y{\displaystyle \mathbf {X} \perp \!\!\!\perp \mathbf {Y} }. Written component-wise,X{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }are called independent if The definition of independence may be extended from random vectors to astochastic process. Therefore, it is required for an independent stochastic process that the random variables obtained by sampling the process at anyn{\displaystyle n}timest1,…,tn{\displaystyle t_{1},\ldots ,t_{n}}are independent random variables for anyn{\displaystyle n}.[6]: p. 163 Formally, a stochastic process{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}is called independent, if and only if for alln∈N{\displaystyle n\in \mathbb {N} }and for allt1,…,tn∈T{\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}} whereFXt1,…,Xtn(x1,…,xn)=P(X(t1)≤x1,…,X(tn)≤xn){\displaystyle F_{X_{t_{1}},\ldots ,X_{t_{n}}}(x_{1},\ldots ,x_{n})=\mathrm {P} (X(t_{1})\leq x_{1},\ldots ,X(t_{n})\leq x_{n})}.Independence of a stochastic process is a propertywithina stochastic process, not between two stochastic processes. Independence of two stochastic processes is a property between two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}that are defined on the same probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}. Formally, two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}are said to be independent if for alln∈N{\displaystyle n\in \mathbb {N} }and for allt1,…,tn∈T{\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}}, the random vectors(X(t1),…,X(tn)){\displaystyle (X(t_{1}),\ldots ,X(t_{n}))}and(Y(t1),…,Y(tn)){\displaystyle (Y(t_{1}),\ldots ,Y(t_{n}))}are independent,[7]: p. 515i.e. if The definitions above (Eq.1andEq.2) are both generalized by the following definition of independence forσ-algebras. 
Let(Ω,Σ,P){\displaystyle (\Omega ,\Sigma ,\mathrm {P} )}be a probability space and letA{\displaystyle {\mathcal {A}}}andB{\displaystyle {\mathcal {B}}}be two sub-σ-algebras ofΣ{\displaystyle \Sigma }.A{\displaystyle {\mathcal {A}}}andB{\displaystyle {\mathcal {B}}}are said to be independent if, wheneverA∈A{\displaystyle A\in {\mathcal {A}}}andB∈B{\displaystyle B\in {\mathcal {B}}}, Likewise, a finite family of σ-algebras(τi)i∈I{\displaystyle (\tau _{i})_{i\in I}}, whereI{\displaystyle I}is anindex set, is said to be independent if and only if and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. The new definition relates to the previous ones very directly: Using this definition, it is easy to show that ifX{\displaystyle X}andY{\displaystyle Y}are random variables andY{\displaystyle Y}is constant, thenX{\displaystyle X}andY{\displaystyle Y}are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra{∅,Ω}{\displaystyle \{\varnothing ,\Omega \}}. Probability zero events cannot affect independence so independence also holds ifY{\displaystyle Y}is only Pr-almost surelyconstant. Note that an event is independent of itself if and only if Thus an event is independent of itself if and only if italmost surelyoccurs or itscomplementalmost surely occurs; this fact is useful when provingzero–one laws.[8] IfX{\displaystyle X}andY{\displaystyle Y}are statistically independent random variables, then theexpectation operatorE{\displaystyle \operatorname {E} }has the property and thecovariancecov⁡[X,Y]{\displaystyle \operatorname {cov} [X,Y]}is zero, as follows from The converse does not hold: if two random variables have a covariance of 0 they still may be not independent. Similarly for two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}: If they are independent, then they areuncorrelated.[10]: p. 151 Two random variablesX{\displaystyle X}andY{\displaystyle Y}are independent if and only if thecharacteristic functionof the random vector(X,Y){\displaystyle (X,Y)}satisfies In particular the characteristic function of their sum is the product of their marginal characteristic functions: though the reverse implication is not true. Random variables that satisfy the latter condition are calledsubindependent. The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time areindependent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 arenotindependent. If two cards are drawnwithreplacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial areindependent. By contrast, if two cards are drawnwithoutreplacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial arenotindependent, because a deck that has had a red card removed has proportionately fewer red cards. Consider the two probability spaces shown. In both cases,P(A)=P(B)=1/2{\displaystyle \mathrm {P} (A)=\mathrm {P} (B)=1/2}andP(C)=1/4{\displaystyle \mathrm {P} (C)=1/4}. 
The events in the first space are pairwise independent becauseP(A|B)=P(A|C)=1/2=P(A){\displaystyle \mathrm {P} (A|B)=\mathrm {P} (A|C)=1/2=\mathrm {P} (A)},P(B|A)=P(B|C)=1/2=P(B){\displaystyle \mathrm {P} (B|A)=\mathrm {P} (B|C)=1/2=\mathrm {P} (B)}, andP(C|A)=P(C|B)=1/4=P(C){\displaystyle \mathrm {P} (C|A)=\mathrm {P} (C|B)=1/4=\mathrm {P} (C)}; but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two: In the mutually independent case, however, It is possible to create a three-event example in which and yet no two of the three events are pairwise independent (and hence the set of events are not mutually independent).[11]This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example. The eventsA{\displaystyle A}andB{\displaystyle B}are conditionally independent given an eventC{\displaystyle C}when P(A∩B∣C)=P(A∣C)⋅P(B∣C){\displaystyle \mathrm {P} (A\cap B\mid C)=\mathrm {P} (A\mid C)\cdot \mathrm {P} (B\mid C)}. Intuitively, two random variablesX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}if, onceZ{\displaystyle Z}is known, the value ofY{\displaystyle Y}does not add any additional information aboutX{\displaystyle X}. For instance, two measurementsX{\displaystyle X}andY{\displaystyle Y}of the same underlying quantityZ{\displaystyle Z}are not independent, but they are conditionally independent givenZ{\displaystyle Z}(unless the errors in the two measurements are somehow connected). The formal definition of conditional independence is based on the idea ofconditional distributions. IfX{\displaystyle X},Y{\displaystyle Y}, andZ{\displaystyle Z}arediscrete random variables, then we defineX{\displaystyle X}andY{\displaystyle Y}to be conditionally independent givenZ{\displaystyle Z}if for allx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}such thatP(Z=z)>0{\displaystyle \mathrm {P} (Z=z)>0}. On the other hand, if the random variables arecontinuousand have a jointprobability density functionfXYZ(x,y,z){\displaystyle f_{XYZ}(x,y,z)}, thenX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}if for all real numbersx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}such thatfZ(z)>0{\displaystyle f_{Z}(z)>0}. If discreteX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}, then for anyx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}withP(Z=z)>0{\displaystyle \mathrm {P} (Z=z)>0}. That is, the conditional distribution forX{\displaystyle X}givenY{\displaystyle Y}andZ{\displaystyle Z}is the same as that givenZ{\displaystyle Z}alone. A similar equation holds for the conditional probability density functions in the continuous case. Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events. Before 1933, independence, in probability theory, was defined in a verbal manner. 
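A classic construction of events that are pairwise but not mutually independent uses two fair coin tosses with A = "first toss is heads", B = "second toss is heads" and C = "the two tosses agree". It is a different example from the two probability spaces described above, but it illustrates the same distinction and can be verified exhaustively:

```python
from fractions import Fraction
from itertools import product

# Four equally likely outcomes of two fair coin tosses.
outcomes = list(product("HT", repeat=2))

def P(event):
    return Fraction(len(event), len(outcomes))

A = {o for o in outcomes if o[0] == "H"}      # first toss is heads
B = {o for o in outcomes if o[1] == "H"}      # second toss is heads
C = {o for o in outcomes if o[0] == o[1]}     # the two tosses agree

for X, Y in [(A, B), (A, C), (B, C)]:
    assert P(X & Y) == P(X) * P(Y)            # every pair is independent

print(P(A & B & C), P(A) * P(B) * P(C))       # 1/4 versus 1/8: not mutually independent
```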
For example, de Moivre gave the following definition: “Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other”.[12] If there are n independent events, the probability of the event that all of them happen was computed as the product of the probabilities of these n events. Apparently, there was the conviction that this formula was a consequence of the above definition (sometimes this was called the Multiplication Theorem), although a proof of this assertion cannot work without further, more formal, tacit assumptions. The definition of independence given in this article became the standard definition (now used in all books) after it appeared in 1933 as part of Kolmogorov's axiomatization of probability.[13] Kolmogorov credited it to S. N. Bernstein, and quoted a publication which had appeared in Russian in 1927.[14] Unfortunately, neither Bernstein nor Kolmogorov was aware of the work of Georg Bohlmann, who had given the same definition for two events in 1901[15] and for n events in 1908.[16] In the latter paper, he studied his notion in detail; for example, he gave the first example showing that pairwise independence does not imply mutual independence. Even today, Bohlmann is rarely cited. More about his work can be found in On the contributions of Georg Bohlmann to probability theory by Ulrich Krengel.[17]
https://en.wikipedia.org/wiki/Statistical_independence
Named-entity recognition(NER) (also known as(named)entity identification,entity chunking, andentity extraction) is a subtask ofinformation extractionthat seeks to locate and classifynamed entitiesmentioned inunstructured textinto pre-defined categories such as person names, organizations, locations,medical codes, time expressions, quantities, monetary values, percentages, etc. Most research on NER/NEE systems has been structured as taking an unannotated block of text, such as this one: Jim bought 300 shares of Acme Corp. in 2006. And producing an annotated block of text that highlights the names of entities: [Jim]Personbought 300 shares of [Acme Corp.]Organizationin [2006]Time. In this example, a person name consisting of one token, a two-token company name and a temporal expression have been detected and classified. State-of-the-art NER systems for English produce near-human performance. For example, the best system enteringMUC-7scored 93.39% ofF-measurewhile human annotators scored 97.60% and 96.95%.[1][2] Notable NER platforms include: In the expressionnamed entity, the wordnamedrestricts the task to those entities for which one or many strings, such as words or phrases, stand (fairly) consistently for some referent. This is closely related torigid designators, as defined byKripke,[5][6]although in practice NER deals with many names and referents that are not philosophically "rigid". For instance, theautomotive company created by Henry Ford in 1903can be referred to asFordorFord Motor Company, although "Ford" can refer to many other entities as well (seeFord). Rigid designators include proper names as well as terms for certain biological species and substances,[7]but exclude pronouns (such as "it"; seecoreference resolution), descriptions that pick out a referent by its properties (see alsoDe dicto and de re), and names for kinds of things as opposed to individuals (for example "Bank"). Full named-entity recognition is often broken down, conceptually and possibly also in implementations,[8]as two distinct problems: detection of names, andclassificationof the names by the type of entity they refer to (e.g. person, organization, or location).[9]The first phase is typically simplified to a segmentation problem: names are defined to be contiguous spans of tokens, with no nesting, so that "Bank of America" is a single name, disregarding the fact that inside this name, the substring "America" is itself a name. This segmentation problem is formally similar tochunking. The second phase requires choosing anontologyby which to organize categories of things. Temporal expressionsand some numerical expressions (e.g., money, percentages, etc.) may also be considered as named entities in the context of the NER task. While some instances of these types are good examples of rigid designators (e.g., the year 2001) there are also many invalid ones (e.g., I take my vacations in “June”). In the first case, the year2001refers to the2001st year of the Gregorian calendar. In the second case, the monthJunemay refer to the month of an undefined year (past June,next June,every June, etc.). It is arguable that the definition ofnamed entityis loosened in such cases for practical reasons. 
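A named-entity tagger can be run on the example sentence above with an off-the-shelf library. The sketch below assumes the third-party spaCy package and its small English pipeline are installed; the exact spans and labels produced depend on that model.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Jim bought 300 shares of Acme Corp. in 2006.")
for ent in doc.ents:
    print(ent.text, ent.label_)
# Typical output (model-dependent): Jim PERSON, 300 CARDINAL, Acme Corp. ORG, 2006 DATE
```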
The definition of the termnamed entityis therefore not strict and often has to be explained in the context in which it is used.[10] Certainhierarchiesof named entity types have been proposed in the literature.BBNcategories, proposed in 2002, are used forquestion answeringand consists of 29 types and 64 subtypes.[11]Sekine's extended hierarchy, proposed in 2002, is made of 200 subtypes.[12]More recently, in 2011 Ritter used a hierarchy based on commonFreebaseentity types in ground-breaking experiments on NER oversocial mediatext.[13] To evaluate the quality of an NER system's output, several measures have been defined. The usual measures are calledprecision, recall, andF1 score. However, several issues remain in just how to calculate those values. These statistical measures work reasonably well for the obvious cases of finding or missing a real entity exactly; and for finding a non-entity. However, NER can fail in many other ways, many of which are arguably "partially correct", and should not be counted as complete success or failures. For example, identifying a real entity, but: One overly simple method of measuring accuracy is merely to count what fraction of all tokens in the text were correctly or incorrectly identified as part of entity references (or as being entities of the correct type). This suffers from at least two problems: first, the vast majority of tokens in real-world text are not part of entity names, so the baseline accuracy (always predict "not an entity") is extravagantly high, typically >90%; and second, mispredicting the full span of an entity name is not properly penalized (finding only a person's first name when his last name follows might be scored as ½ accuracy). In academic conferences such as CoNLL, a variant of theF1 scorehas been defined as follows:[9] It follows from the above definition that any prediction that misses a single token, includes a spurious token, or has the wrong class, is a hard error and does not contribute positively to either precision or recall. Thus, this measure may be said to be pessimistic: it can be the case that many "errors" are close to correct, and might be adequate for a given purpose. For example, one system might always omit titles such as "Ms." or "Ph.D.", but be compared to a system or ground-truth data that expects titles to be included. In that case, every such name is treated as an error. Because of such issues, it is important actually to examine the kinds of errors, and decide how important they are given one's goals and requirements. Evaluation models based on a token-by-token matching have been proposed.[14]Such models may be given partial credit for overlapping matches (such as using theIntersection over Unioncriterion). They allow a finer grained evaluation and comparison of extraction systems. NER systems have been created that use linguisticgrammar-based techniques as well asstatistical modelssuch asmachine learning. 
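The CoNLL-style scoring described above, in which an entity counts only if its span boundaries and its type both match exactly and anything else is a hard error, can be sketched as follows; the entity spans below are invented for illustration.

```python
def entity_f1(gold, predicted):
    """Exact-match entity-level F1: span boundaries and type must both match."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Entities as (start_token, end_token, type) triples; values are invented.
gold = [(0, 1, "PER"), (4, 6, "ORG"), (7, 8, "DATE")]
pred = [(0, 1, "PER"), (4, 5, "ORG"), (7, 8, "DATE")]   # truncated ORG span: a hard error
print(round(entity_f1(gold, pred), 3))                   # 0.667
```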
Hand-crafted grammar-based systems typically obtain better precision, but at the cost of lower recall and months of work by experiencedcomputational linguists.[15]Statistical NER systems typically require a large amount of manuallyannotatedtraining data.Semisupervisedapproaches have been suggested to avoid part of the annotation effort.[16][17] Many different classifier types have been used to perform machine-learned NER, withconditional random fieldsbeing a typical choice.[18] In 2001, research indicated that even state-of-the-art NER systems were brittle, meaning that NER systems developed for one domain did not typically perform well on other domains.[19]Considerable effort is involved in tuning NER systems to perform well in a new domain; this is true for both rule-based and trainable statistical systems. Early work in NER systems in the 1990s was aimed primarily at extraction from journalistic articles. Attention then turned to processing of military dispatches and reports. Later stages of theautomatic content extraction(ACE) evaluation also included several types of informal text styles, such asweblogsandtext transcriptsfrom conversational telephone speech conversations. Since about 1998, there has been a great deal of interest in entity identification in themolecular biology,bioinformatics, and medicalnatural language processingcommunities. The most common entity of interest in that domain has been names ofgenesand gene products. There has been also considerable interest in the recognition ofchemical entitiesand drugs in the context of the CHEMDNER competition, with 27 teams participating in this task.[20] Despite high F1 numbers reported on the MUC-7 dataset, the problem of named-entity recognition is far from being solved. The main efforts are directed to reducing the annotations labor by employingsemi-supervised learning,[16][21]robust performance across domains[22][23]and scaling up to fine-grained entity types.[12][24]In recent years, many projects have turned tocrowdsourcing, which is a promising solution to obtain high-quality aggregate human judgments forsupervisedand semi-supervised machine learning approaches to NER.[25]Another challenging task is devising models to deal with linguistically complex contexts such as Twitter and search queries.[26] There are some researchers who did some comparisons about the NER performances from different statistical models such as HMM (hidden Markov model), ME (maximum entropy), and CRF (conditional random fields), and feature sets.[27]And some researchers recently proposed graph-based semi-supervised learning model for language specific NER tasks.[28] A recently emerging task of identifying "important expressions" in text andcross-linking them to Wikipedia[29][30][31]can be seen as an instance of extremely fine-grained named-entity recognition, where the types are the actual Wikipedia pages describing the (potentially ambiguous) concepts. Below is an example output of a Wikification system: Another field that has seen progress but remains challenging is the application of NER toTwitterand other microblogs, considered "noisy" due to non-standard orthography, shortness and informality of texts.[32][33]NER challenges in English Tweets have been organized by research communities to compare performances of various approaches, such asbidirectional LSTMs, Learning-to-Search, or CRFs.[34][35][36]
https://en.wikipedia.org/wiki/Named_entity_recognition
TheResource Description Framework(RDF) is a method to describe and exchangegraphdata. It was originally designed as a data model formetadataby theWorld Wide Web Consortium(W3C). It provides a variety of syntax notations and formats, of which the most widely used is Turtle (Terse RDF Triple Language). RDF is adirected graphcomposed of triple statements. An RDF graph statement is represented by: (1) a node for the subject, (2) an arc from subject to object, representing a predicate, and (3) a node for the object. Each of these parts can be identified by aUniform Resource Identifier(URI). An object can also be a literal value. This simple, flexible data model has a lot ofexpressive powerto represent complex situations, relationships, and other things of interest, while also being appropriately abstract. RDF was adopted as a W3C recommendation in 1999. The RDF 1.0 specification was published in 2004, and the RDF 1.1 specification in 2014.SPARQLis a standard query language for RDF graphs.RDF Schema(RDFS),Web Ontology Language(OWL) andSHACL(Shapes Constraint Language) are ontology languages that are used to describe RDF data. The RDF data model[1]is similar to classical conceptual modeling approaches (such asentity–relationshiporclass diagrams). It is based on the idea of makingstatementsaboutresources(in particular web resources) in expressions of the formsubject–predicate–object, known astriples. Thesubjectdenotes the resource; thepredicatedenotes traits or aspects of the resource, and expresses a relationship between thesubjectand theobject. For example, one way to represent the notion "The sky has the color blue" in RDF is as the triple: asubjectdenoting "the sky", apredicatedenoting "has the color", and anobjectdenoting "blue". Therefore, RDF usessubjectinstead ofobject(orentity) in contrast to the typical approach of anentity–attribute–value modelinobject-oriented design: entity (sky), attribute (color), and value (blue). RDF is an abstract model with severalserialization formats(being essentially specializedfile formats). In addition the particular encoding for resources or triples can vary from format to format. This mechanism for describing resources is a majorcomponentin the W3C'sSemantic Webactivity: an evolutionary stage of theWorld Wide Webin which automated software can store, exchange, and usemachine-readable informationdistributed throughout the Web, in turn enabling users to deal with the information with greater efficiency andcertainty. RDF's simple data model and ability to model disparate, abstract concepts has also led to its increasing use inknowledge managementapplications unrelated to Semantic Web activity. A collection of RDF statements intrinsically represents alabeled,directedmultigraph. This makes an RDFdata modelbetter suited to certain kinds ofknowledge representationthan otherrelationalorontologicalmodels. AsRDFS,OWLandSHACLdemonstrate, one can build additionalontology languagesupon RDF. The initial RDF design, intended to "build a vendor-neutral and operating system- independent system of metadata",[2]derived from the W3C'sPlatform for Internet Content Selection(PICS), an early web content labelling system,[3]but the project was also shaped by ideas fromDublin Core, and from theMeta Content Framework(MCF),[2]which had been developed during 1995 to 1997 byRamanathan V. 
GuhaatAppleandTim BrayatNetscape.[4] A first public draft of RDF appeared in October 1997,[5][6]issued by a W3C working group that included representatives fromIBM,Microsoft,Netscape,Nokia,Reuters,SoftQuad, and theUniversity of Michigan.[3] In 1999, the W3C published the first recommended RDF specification, theModel and Syntax Specification("RDF M&S").[7]This described RDF's data model and anXMLserialization.[8] Two persistent misunderstandings about RDF developed at this time: firstly, due to the MCF influence and the RDF "Resource Description" initialism, the idea that RDF was specifically for use in representing metadata; secondly that RDF was an XML format rather than a data model, and only the RDF/XML serialisation being XML-based. RDF saw little take-up in this period, but there was significant work done inBristol, around ILRT atBristol UniversityandHP Labs, and in Boston atMIT.RSS 1.0andFOAFbecame exemplar applications for RDF in this period. The recommendation of 1999 was replaced in 2004 by a set of six specifications:[9]"The RDF Primer",[10]"RDF Concepts and Abstract",[11]"RDF/XML Syntax Specification (revised)",[12]"RDF Semantics",[13]"RDF Vocabulary Description Language 1.0",[14]and "The RDF Test Cases".[15] This series was superseded in 2014 by the following six "RDF 1.1" documents: "RDF 1.1 Primer",[16]"RDF 1.1 Concepts and Abstract Syntax",[17]"RDF 1.1 XML Syntax",[18]"RDF 1.1 Semantics",[19]"RDF Schema 1.1",[20]and "RDF 1.1 Test Cases".[21] The vocabulary defined by the RDF specification is as follows:[22] rdf:Statement,rdf:subject,rdf:predicate,rdf:objectare used forreification(seebelow). This vocabulary is used as a foundation forRDF Schema, where it is extended. Several commonserialization formatsare in use, including: RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format. However, it is important to distinguish the RDF/XML format from the abstract RDF model itself. Although the RDF/XML format is still in use, other RDF serializations are now preferred by many RDF users, both because they are more human-friendly,[34]and because some RDF graphs are not representable in RDF/XML due to restrictions on the syntax of XMLQNames. With a little effort, virtually any arbitraryXMLmay also be interpreted as RDF usingGRDDL(pronounced 'griddle'), Gleaning Resource Descriptions from Dialects of Languages. RDF triples may be stored in a type of database called atriplestore. The subject of an RDF statement is either auniform resource identifier(URI) or ablank node, both of which denoteresources. Resources indicated byblank nodesare called anonymous resources. They are not directly identifiable from the RDF statement. The predicate is a URI which also indicates a resource, representing a relationship. The object is a URI, blank node or aUnicodestring literal. As of RDF 1.1 resources are identified byInternationalized Resource Identifiers(IRIs); IRI are a generalization of URI.[35] In Semantic Web applications, and in relatively popular applications of RDF likeRSSandFOAF(Friend of a Friend), resources tend to be represented by URIs that intentionally denote, and can be used to access, actual data on the World Wide Web. But RDF, in general, is not limited to the description of Internet-based resources. In fact, the URI that names a resource does not have to be dereferenceable at all. 
For example, a URI that begins with "http:" and is used as the subject of an RDF statement does not necessarily have to represent a resource that is accessible viaHTTP, nor does it need to represent a tangible, network-accessible resource—such a URI could represent absolutely anything. However, there is broad agreement that a bare URI (without a # symbol) which returns a 300-level coded response when used in an HTTP GET request should be treated as denoting the internet resource that it succeeds in accessing. Therefore, producers and consumers of RDF statements must agree on the semantics of resource identifiers. Such agreement is not inherent to RDF itself, although there are some controlled vocabularies in common use, such as Dublin Core Metadata, which is partially mapped to a URI space for use in RDF. The intent of publishing RDF-based ontologies on the Web is often to establish, or circumscribe, the intended meanings of the resource identifiers used to express data in RDF. For example, the URI: is intended by its owners to refer to the class of allMerlotred wines by vintner (i.e., instances of the above URI each represent the class of all wine produced by a single vintner), a definition which is expressed by the OWL ontology—itself an RDF document—in which it occurs. Without careful analysis of the definition, one might erroneously conclude that an instance of the above URI was something physical, instead of a type of wine. Note that this is not a 'bare' resource identifier, but is rather aURI reference, containing the '#' character and ending with afragment identifier. The body of knowledge modeled by a collection of statements may be subjected toreification, in which eachstatement(that is each triplesubject-predicate-objectaltogether) is assigned a URI and treated as a resource about which additional statements can be made, as in "Jane says thatJohn is the author of document X". Reification is sometimes important in order to deduce a level of confidence or degree of usefulness for each statement. In a reified RDF database, each original statement, being a resource, itself, most likely has at least three additional statements made about it: one to assert that its subject is some resource, one to assert that its predicate is some resource, and one to assert that its object is some resource or literal. More statements about the original statement may also exist, depending on the application's needs. Borrowing from concepts available inlogic(and as illustrated in graphical notations such asconceptual graphsandtopic maps), some RDF model implementations acknowledge that it is sometimes useful to group statements according to different criteria, calledsituations,contexts, orscopes, as discussed in articles by RDF specification co-editorGraham Klyne.[36][37]For example, a statement can be associated with a context, named by a URI, in order to assert an "is true in" relationship. As another example, it is sometimes convenient to group statements by their source, which can be identified by a URI, such as the URI of a particular RDF/XML document. Then, when updates are made to the source, corresponding statements can be changed in the model, as well. Implementation of scopes does not necessarily require fully reified statements. 
Some implementations allow a single scope identifier to be associated with a statement that has not been assigned a URI, itself.[38][39]Likewisenamed graphsin which a set of triples is named by a URI can represent context without the need to reify the triples.[40] The predominant query language for RDF graphs isSPARQL. SPARQL is anSQL-like language, and arecommendationof theW3Cas of January 15, 2008. The following is an example of a SPARQL query to show country capitals in Africa, using a fictional ontology: Other non-standard ways to query RDF graphs include: SHACL Advanced Features specification[42](W3C Working Group Note), the most recent version of which is maintained by the SHACL Community Group,[43]defines support for SHACL Rules, used for data transformations, inferences and mappings of RDF based on SHACL shapes. The predominant language for describing and validating RDF graphs isSHACL(Shapes Constraint Language).[44]SHACL specification is divided in two parts: SHACL Core and SHACL-SPARQL. SHACL Core consists of a list of built-in constraints such as cardinality, range of values and many others. SHACL-SPARQL describes SPARQL-based constraints and an extension mechanism to declare new constraint components. Other non-standard ways to describe and validate RDF graphs include: The following example is taken from the W3C website[48]describing a resource with statements "there is a Person identified by http://www.w3.org/People/EM/contact#me, whose name is Eric Miller, whose email address is e.miller123(at)example (changed for security purposes), and whose title is Dr." The resource "http://www.w3.org/People/EM/contact#me" is the subject. The objects are: The subject is a URI. The predicates also have URIs. For example, the URI for each predicate: In addition, the subject has a type (with URI http://www.w3.org/1999/02/22-rdf-syntax-ns#type), which is person (with URI http://www.w3.org/2000/10/swap/pim/contact#Person). Therefore, the following "subject, predicate, object" RDF triples can be expressed: In standard N-Triples format, this RDF can be written as: Equivalently, it can be written in standard Turtle (syntax) format as: Or more concisely, using a common shorthand syntax of Turtle as: Or, it can be written in RDF/XML format as: Certain concepts in RDF are taken fromlogicandlinguistics, where subject-predicate and subject-predicate-object structures have meanings similar to, yet distinct from, the uses of those terms in RDF. This example demonstrates: In theEnglish languagestatement'New York has the postal abbreviation NY','New York'would be the subject,'has the postal abbreviation'the predicate and'NY'the object. Encoded as an RDF triple, the subject and predicate would have to be resources named by URIs. The object could be a resource or literal element. For example, in the N-Triples form of RDF, the statement might look like: In this example, "urn:x-states:New%20York" is the URI for a resource that denotes the US stateNew York, "http://purl.org/dc/terms/alternative" is the URI for a predicate (whose human-readable definition can be found here[49]), and "NY" is a literal string. Note that the URIs chosen here are not standard, and do not need to be, as long as their meaning is known to whatever is reading them. 
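As a concrete illustration of the statement just described, the following minimal sketch builds the same triple programmatically. It assumes the third-party rdflib library for Python; that library choice is purely illustrative and is not prescribed by the text above, and any RDF toolkit could be used instead.

    # Build the "New York has the alternative name NY" triple and print it as N-Triples.
    from rdflib import Graph, Literal, URIRef

    g = Graph()
    subject = URIRef("urn:x-states:New%20York")                  # resource denoting the US state
    predicate = URIRef("http://purl.org/dc/terms/alternative")   # Dublin Core "alternative" term
    obj = Literal("NY")                                          # a plain literal

    g.add((subject, predicate, obj))                 # one subject-predicate-object triple
    print(g.serialize(format="nt"))                  # N-Triples output (a string in rdflib 6+)

The output is a single N-Triples line: the subject and predicate appear in angle brackets, the literal object in quotes, and the statement is terminated by a full stop.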
In a like manner, given that "https://en.wikipedia.org/wiki/Tony_Benn" identifies a particular resource (regardless of whether that URI could be traversed as a hyperlink, or whether the resource isactuallytheWikipediaarticle aboutTony Benn), to say that the title of this resource is "Tony Benn" and its publisher is "Wikipedia" would be two assertions that could be expressed as valid RDF statements. In the N-Triples form of RDF, these statements might look like the following: To an English-speaking person, the same information could be represented simply as: The title of this resource, which is published by Wikipedia, is 'Tony Benn' However, RDF puts the information in a formal way that a machine can understand. The purpose of RDF is to provide anencodingand interpretation mechanism so thatresourcescan be described in a way that particularsoftwarecan understand it; in other words, so that software can access and use information that it otherwise could not use. Both versions of the statements above are wordy because one requirement for an RDF resource (as a subject or a predicate) is that it be unique. The subject resource must be unique in an attempt to pinpoint the exact resource being described. The predicate needs to be unique in order to reduce the chance that the idea ofTitleorPublisherwill be ambiguous to software working with the description. If the software recognizeshttp://purl.org/dc/elements/1.1/title(a specificdefinitionfor theconceptof a title established by the Dublin Core Metadata Initiative), it will also know that this title is different from a land title or an honorary title or just the letters t-i-t-l-e put together. The following example, written in Turtle, shows how such simple claims can be elaborated on, by combining multiple RDF vocabularies. Here, we note that the primary topic of the Wikipedia page is a "Person" whose name is "Tony Benn": Some uses of RDF include research into social networking. It will also help people in business fields understand better their relationships with members of industries that could be of use for product placement.[58]It will also help scientists understand how people are connected to one another. RDF is being used to gain a better understanding of road traffic patterns. This is because the information regarding traffic patterns is on different websites, and RDF is used to integrate information from different sources on the web. Before, the common methodology was using keyword searching, but this method is problematic because it does not consider synonyms. This is why ontologies are useful in this situation. But one of the issues that comes up when trying to efficiently study traffic is that to fully understand traffic, concepts related to people, streets, and roads must be well understood. Since these are human concepts, they require the addition offuzzy logic. This is because values that are useful when describing roads, like slipperiness, are not precise concepts and cannot be measured. This would imply that the best solution would incorporate both fuzzy logic and ontology.[59]
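Putting the pieces of the examples above together, the sketch below revisits the Tony Benn statements: it builds the Dublin Core and FOAF triples described earlier with the rdflib Python library, prints them as Turtle, and then runs a small SPARQL query over the resulting graph. rdflib, the blank node standing for the person, and the exact choice of FOAF properties are illustrative assumptions rather than anything mandated by the specifications.

    from rdflib import BNode, Graph, Literal, URIRef
    from rdflib.namespace import DC, FOAF, RDF

    g = Graph()
    page = URIRef("https://en.wikipedia.org/wiki/Tony_Benn")

    # The two Dublin Core statements: the page's title and publisher.
    g.add((page, DC.title, Literal("Tony Benn")))
    g.add((page, DC.publisher, Literal("Wikipedia")))

    # The FOAF elaboration: the page's primary topic is a Person named "Tony Benn".
    person = BNode()                                  # an anonymous resource for the person
    g.add((page, FOAF.primaryTopic, person))
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, FOAF.name, Literal("Tony Benn")))

    print(g.serialize(format="turtle"))               # a human-friendly serialization

    # A SPARQL query over the same graph: what is the name of the page's primary topic?
    query = """
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name WHERE {
            ?page foaf:primaryTopic ?who .
            ?who  foaf:name ?name .
        }
    """
    for row in g.query(query):
        print(row[0])                                 # -> Tony Benn

In the Turtle output the person appears as a blank node rather than a URI, matching the prose description of resources that are described without being named.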
https://en.wikipedia.org/wiki/Resource_Description_Framework
Arelational database(RDB[1]) is adatabasebased on therelational modelof data, as proposed byE. F. Coddin 1970.[2] A RelationalDatabase Management System(RDBMS) is a type of database management system that stores data in a structuredformatusingrowsandcolumns. Many relational database systems are equipped with the option of usingSQL(Structured Query Language) for querying and updating the database.[3] The concept of relational database was defined byE. F. CoddatIBMin 1970. Codd introduced the termrelationalin his research paper "A Relational Model of Data for Large Shared Data Banks".[2]In this paper and later papers, he defined what he meant byrelation. One well-known definition of what constitutes a relational database system is composed ofCodd's 12 rules. However, no commercial implementations of the relational model conform to all of Codd's rules,[4]so the term has gradually come to describe a broader class of database systems, which at a minimum: In 1974, IBM began developingSystem R, a research project to develop a prototype RDBMS.[5][6]The first system sold as an RDBMS wasMultics Relational Data Store(June 1976).[7][8][citation needed]Oraclewas released in 1979 by Relational Software, nowOracle Corporation.[9]IngresandIBM BS12followed. Other examples of an RDBMS includeIBM Db2,SAP Sybase ASE, andInformix. In 1984, the first RDBMS forMacintoshbegan being developed, code-named Silver Surfer, and was released in 1987 as4th Dimensionand known today as 4D.[10] The first systems that were relatively faithful implementations of the relational model were from: The most common definition of an RDBMS is a product that presents a view of data as a collection of rows and columns, even if it is not based strictly uponrelational theory. By this definition, RDBMS products typically implement some but not all of Codd's 12 rules. A second school of thought argues that if a database does not implement all of Codd's rules (or the current understanding on the relational model, as expressed byChristopher J. Date,Hugh Darwenand others), it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify most DBMSs as not relational. For clarification, they often refer to some RDBMSs astruly-relational database management systems(TRDBMS), naming otherspseudo-relational database management systems(PRDBMS).[citation needed] As of 2009, most commercial relational DBMSs employSQLas theirquery language.[15] Alternative query languages have been proposed and implemented, notably the pre-1996 implementation ofIngres QUEL. A relational model organizes data into one or moretables(or "relations") ofcolumnsandrows, with aunique keyidentifying each row. Rows are also calledrecordsortuples.[16]Columns are also called attributes. Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type ofentity(such as "Lee" or "chair") and the columns represent values attributed to that instance (such as address or price). For example, each row of a class table corresponds to a class, and a class corresponds to multiple students, so the relationship between the class table and the student table is "one to many"[17] Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row (such columns are known asforeign keys). 
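A minimal sketch of these ideas, using Python's built-in sqlite3 module purely for illustration (the table and column names are invented): a class table and a student table linked one-to-many through a foreign key, as in the example above.

    import sqlite3

    conn = sqlite3.connect(":memory:")            # throwaway in-memory database
    conn.execute("PRAGMA foreign_keys = ON")      # SQLite enforces foreign keys only on request

    conn.execute("""
        CREATE TABLE class (
            class_id INTEGER PRIMARY KEY,         -- unique key for each row
            title    TEXT NOT NULL
        )""")
    conn.execute("""
        CREATE TABLE student (
            student_id INTEGER PRIMARY KEY,
            name       TEXT NOT NULL,
            class_id   INTEGER REFERENCES class(class_id)   -- foreign key to the linked row
        )""")

    conn.execute("INSERT INTO class VALUES (1, 'Databases 101')")
    conn.executemany("INSERT INTO student VALUES (?, ?, ?)",
                     [(1, 'Lee', 1), (2, 'Kim', 1)])

    # One class row relates to many student rows: the "one to many" relationship above.
    rows = conn.execute("""
        SELECT class.title, student.name
        FROM student JOIN class ON student.class_id = class.class_id""")
    for title, name in rows:
        print(title, name)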
Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts.[2] Part of this processing involves consistently being able to select or modify one and only one row in a table. Therefore, most physical implementations have a uniqueprimary key(PK) for each row in a table. When a new row is written to the table, a new unique value for the primary key is generated; this is the key that the system uses primarily for accessing the table. System performance is optimized for PKs. Other, morenatural keysmay also be identified and defined asalternate keys(AK). Often several columns are needed to form an AK (this is one reason why a single integer column is usually made the PK). Both PKs and AKs have the ability to uniquely identify a row within a table. Additional technology may be applied to ensure a unique ID across the world, aglobally unique identifier, when there are broader system requirements. The primary keys within a database are used to define the relationships among the tables. When a PK migrates to another table, it becomes a foreign key (FK) in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either aone-to-oneorone-to-manyrelationship. Most relational database designs resolvemany-to-manyrelationships by creating an additional table that contains the PKs from both of the other entity tables – the relationship becomes an entity; the resolution table is then named appropriately and the two FKs are combined to form a PK. The migration of PKs to other tables is the second major reason why system-assigned integers are used normally as PKs; there is usually neither efficiency nor clarity in migrating a bunch of other types of columns. Relationships are a logical connection between different tables (entities), established on the basis of interaction among these tables. These relationships can be modelled as anentity-relationship model. In order for a database management system (DBMS) to operate efficiently and accurately, it must useACID transactions.[18][19][20] Part of the programming within a RDBMS is accomplished usingstored procedures(SPs). Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may grant access to only the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new and update existing data. More complex procedures may be written to implement additional rules and logic related to processing or selecting the data. The relational database was first defined in June 1970 byEdgar Codd, of IBM'sSan Jose Research Laboratory.[2]Codd's view of what qualifies as an RDBMS is summarized inCodd's 12 rules. A relational database has become the predominant type of database. Other models besides therelational modelinclude thehierarchical database modeland thenetwork model. The table below summarizes some of the most important relational database terms and the correspondingSQLterm: In a relational database, arelationis a set oftuplesthat have the sameattributes. A tuple usually represents an object and information about that object. Objects are typically physical objects or concepts. A relation is usually described as atable, which is organized intorowsandcolumns. All the data referenced by an attribute are in the samedomainand conform to the same constraints. 
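Returning to the many-to-many resolution described above, the following sketch (again sqlite3, with invented names) shows the usual pattern: the relationship becomes its own table, and the two migrated foreign keys are combined into its primary key.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    conn.execute("CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE course  (course_id  INTEGER PRIMARY KEY, title TEXT)")

    # The resolution table: two foreign keys, combined into a composite primary key.
    conn.execute("""
        CREATE TABLE enrollment (
            student_id INTEGER REFERENCES student(student_id),
            course_id  INTEGER REFERENCES course(course_id),
            PRIMARY KEY (student_id, course_id)
        )""")

    conn.execute("INSERT INTO student VALUES (1, 'Lee')")
    conn.execute("INSERT INTO course  VALUES (7, 'Algebra')")
    conn.execute("INSERT INTO enrollment VALUES (1, 7)")   # Lee is enrolled in Algebra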
The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations such asselectto identify tuples,projectto identify attributes, andjointo combine relations. Relations can be modified using theinsert,delete, andupdateoperators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting. Tuples by definition are unique. If the tuple contains acandidateor primary key then obviously it is unique; however, a primary key need not be defined for a row or record to be a tuple. The definition of a tuple requires that it be unique, but does not require a primary key to be defined. Because a tuple is unique, its attributes by definition constitute asuperkey. All data are stored and accessed viarelations. Relations that store data are called "base relations", and in implementations are called "tables". Other relations do not store data, but are computed by applying relational operations to other relations. These relations are sometimes called "derived relations". In implementations these are called "views" or "queries". Derived relations are convenient in that they act as a single relation, even though they may grab information from several relations. Also, derived relations can be used as anabstraction layer. A domain describes the set of possible values for a given attribute, and can be considered a constraint on the value of the attribute. Mathematically, attaching a domain to an attribute means that any value for the attribute must be an element of the specified set. The character string"ABC", for instance, is not in the integer domain, but the integer value123is. Another example of domain describes the possible values for the field "CoinFace" as ("Heads","Tails"). So, the field "CoinFace" will not accept input values like (0,1) or (H,T). Constraints are often used to make it possible to further restrict the domain of an attribute. For instance, a constraint can restrict a given integer attribute to values between 1 and 10. Constraints provide one method of implementingbusiness rulesin the database and support subsequent data use within the application layer. SQL implements constraint functionality in the form ofcheck constraints. Constraints restrict the data that can be stored inrelations. These are usually defined using expressions that result in aBooleanvalue, indicating whether or not the data satisfies the constraint. Constraints can apply to single attributes, to a tuple (restricting combinations of attributes) or to an entire relation. Since every attribute has an associated domain, there are constraints (domain constraints). The two principal rules for the relational model are known asentity integrityandreferential integrity. Everyrelation/table has a primary key, this being a consequence of a relation being aset.[21]A primary key uniquely specifies a tuple within a table. While natural attributes (attributes used to describe the data being entered) are sometimes good primary keys,surrogate keysare often used instead. A surrogate key is an artificial attribute assigned to an object which uniquely identifies it (for instance, in a table of information about students at a school they might all be assigned a student ID in order to differentiate them). The surrogate key has no intrinsic (inherent) meaning, but rather is useful through its ability to uniquely identify a tuple. 
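The domain and constraint rules discussed above map directly onto SQL check constraints; a small sqlite3 sketch, with invented names, of the two examples given (an integer restricted to 1..10 and a CoinFace attribute limited to 'Heads' or 'Tails'):

    import sqlite3

    conn = sqlite3.connect(":memory:")

    conn.execute("""
        CREATE TABLE toss (
            toss_id  INTEGER PRIMARY KEY,
            rating   INTEGER CHECK (rating BETWEEN 1 AND 10),
            CoinFace TEXT    CHECK (CoinFace IN ('Heads', 'Tails'))
        )""")

    conn.execute("INSERT INTO toss VALUES (1, 7, 'Heads')")        # satisfies both constraints
    try:
        conn.execute("INSERT INTO toss VALUES (2, 7, 'H')")        # outside the CoinFace domain
    except sqlite3.IntegrityError as error:
        print("rejected:", error)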
Another common occurrence, especially in regard to N:M cardinality is thecomposite key. A composite key is a key made up of two or more attributes within a table that (together) uniquely identify a record.[22] Foreign key refers to a field in a relational table that matches the primary key column of another table. It relates the two keys. Foreign keys need not have unique values in the referencing relation. A foreign key can be used tocross-referencetables, and it effectively uses the values of attributes in the referenced relation to restrict the domain of one or more attributes in the referencing relation. The concept is described formally as: "For all tuples in the referencing relation projected over the referencing attributes, there must exist a tuple in the referenced relation projected over those same attributes such that the values in each of the referencing attributes match the corresponding values in the referenced attributes." A stored procedure is executable code that is associated with, and generally stored in, the database. Stored procedures usually collect and customize common operations, like inserting atupleinto arelation, gathering statistical information about usage patterns, or encapsulating complexbusiness logicand calculations. Frequently they are used as anapplication programming interface(API) for security or simplicity. Implementations of stored procedures on SQL RDBMS's often allow developers to take advantage ofproceduralextensions (often vendor-specific) to the standarddeclarativeSQL syntax. Stored procedures are not part of the relational database model, but all commercial implementations include them. An index is one way of providing quicker access to data. Indices can be created on any combination of attributes on arelation. Queries that filter using those attributes can find matching tuples directly using the index (similar toHash tablelookup), without having to check each tuple in turn. This is analogous to using theindex of a bookto go directly to the page on which the information you are looking for is found, so that you do not have to read the entire book to find what you are looking for. Relational databases typically supply multiple indexing techniques, each of which is optimal for some combination of data distribution, relation size, and typical access pattern. Indices are usually implemented viaB+ trees,R-trees, andbitmaps. Indices are usually not considered part of the database, as they are considered an implementation detail, though indices are usually maintained by the same group that maintains the other parts of the database. The use of efficient indexes on both primary and foreign keys can dramatically improve query performance. This is because B-tree indexes result in query times proportional to log(n) where n is the number of rows in a table and hash indexes result in constant time queries (no size dependency as long as the relevant part of the index fits into memory). Queries made against the relational database, and the derivedrelvarsin the database are expressed in arelational calculusor arelational algebra. In his original relational algebra, Codd introduced eight relational operators in two groups of four operators each. 
The first four operators were based on the traditional mathematicalset operations: The remaining operators proposed by Codd involve special operations specific to relational databases: Other operators have been introduced or proposed since Codd's introduction of the original eight including relational comparison operators and extensions that offer support for nesting and hierarchical data, among others. Normalization was first proposed by Codd as an integral part of the relational model. It encompasses a set of procedures designed to eliminate non-simple domains (non-atomic values) and the redundancy (duplication) of data, which in turn prevents data manipulation anomalies and loss of data integrity. The most common forms of normalization applied to databases are called thenormal forms. Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database".[23]RDBMS is an extension of that initialism that is sometimes used when the underlying database is relational. An alternative definition for arelational database management systemis a database management system (DBMS) based on therelational model. Most databases in widespread use today are based on this model.[24] RDBMSs have been a common option for the storage of information in databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s. Relational databases have often replaced legacyhierarchical databasesandnetwork databases, because RDBMS were easier to implement and administer. Nonetheless, relational stored data received continued, unsuccessful challenges byobject databasemanagement systems in the 1980s and 1990s, (which were introduced in an attempt to address the so-calledobject–relational impedance mismatchbetween relational databases and object-oriented application programs), as well as byXML databasemanagement systems in the 1990s.[25]However, due to the expanse of technologies, such ashorizontal scalingofcomputer clusters,NoSQLdatabases have recently become popular as an alternative to RDBMS databases.[26] Distributed Relational Database Architecture(DRDA) was designed by a workgroup within IBM in the period 1988 to 1994. DRDA enables network connected relational databases to cooperate to fulfill SQL requests.[27][28]The messages, protocols, and structural components of DRDA are defined by theDistributed Data Management Architecture. According toDB-Engines, in December 2024 the most popular systems on the db-engines.com web site were:[29] According to research companyGartner, in 2011, the five leadingproprietary softwarerelational database vendors by revenue wereOracle(48.8%),IBM(20.2%),Microsoft(17.0%),SAPincludingSybase(4.6%), andTeradata(3.7%).[30]
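As a closing illustration of how the selection, projection and join operators described above surface in everyday SQL, here is a short sqlite3 sketch; the schema and data are invented.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
        CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, cust_id INTEGER, total REAL);
        INSERT INTO customer VALUES (1, 'Lee', 'Boston'), (2, 'Kim', 'Bristol');
        INSERT INTO orders   VALUES (10, 1, 25.0), (11, 2, 40.0), (12, 2, 5.0);
    """)

    query = """
        SELECT customer.name, orders.total                   -- projection: keep two attributes
        FROM customer
        JOIN orders ON customer.cust_id = orders.cust_id     -- join on the shared key
        WHERE orders.total > 10                              -- selection: keep qualifying rows
    """
    for row in conn.execute(query):
        print(row)          # ('Lee', 25.0) and ('Kim', 40.0)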
https://en.wikipedia.org/wiki/Relational_database
InRDF, ablank node(also calledbnode) is a node in an RDF graph representing a resource for which aURIor literal is not given.[1]The resource represented by a blank node is also called ananonymous resource. According to the RDF standard, a blank node can only be used as the subject or object of an RDF triple. Blank nodes can be denoted through blank node identifiers in the following formats:RDF/XML,RDFa,Turtle,N3andN-Triples. The following example shows how it works inRDF/XML. The blank node identifiers are only limited in scope to a serialization of a particular RDF graph, i.e. the node_:bin the subsequent example does not represent the same node as a node named_:bin any other graph. Blank nodes can also be denoted through nested elements (inRDF/XML,RDFa,TurtleandN3). Here are the same triples as above, expressed with nested elements. Below is the same example inRDFa. Below is the same example inTurtle. Blank nodes are treated as simply indicating the existence of a thing, without using a URI (Uniform Resource Identifier) to identify any particular thing. This is not the same as assuming that the blank node indicates an 'unknown' URI.[1] From a technical perspective, they give the capability to represent containers, complex attributes, and other auxiliary structures without assigning a URI to each of them. Below there is an example where blank nodes are used to represent resources in these ways. In particular, the blank node with the identifier '_:students' represents a Bag RDF Container, the blank node with the identifier '_:address' represents a complex attribute and those with the identifiers '_:activity1' and '_:activity2' represent events in the lifecycle of a digital object. The ontology languageOWLuses blank nodes to represent anonymous classes such asunionsorintersectionsof classes,[3]or classes called restrictions, defined by a constraint on a property.[4] For example, to express that a person has at most one birth date, one will define the class "Person" as a subclass of an anonymous class of type "owl:Restriction". This anonymous class is defined by two attributes specifying the constrained property and the constraint itself (cardinality ≤ 1). According to an empirical survey[5]inLinked Datapublished on the Web, out of the 783 domains contributing to the corpus, 345 (44.1%) did not publish any blank nodes. The average percentage of unique terms which were blank nodes for each domain was 7.5%, indicating that although a small number of high-volume domains publish many blank nodes, many other domains publish blank nodes more infrequently. Of the 286.3 million unique terms found in data-level positions, 165.4 million (57.8%) were blank nodes, 92.1 million (32.2%) were URIs, and 28.9 million (10%) were literals. Each blank node had on average 5.2 data-level occurrences. It occurred, on average, 0.99 times in the object position of a non-rdf:type triple, and 4.2 times in the subject position of a triple. According to the same empirical survey of linked data published on the Web, the majority of documents surveyed contain tree-based blank node structures. A small fraction contain complex blank node structures for which various tasks are potentially very expensive to compute. The existence of blank nodes requires special treatment in various tasks, whose complexity grows exponentially with the number of these nodes. The inability to match blank nodes increases the delta size (the number of triples that need to be deleted and added in order to transform one RDF graph to another) and does not assist in detecting the changes between subsequent versions of a Knowledge Base. 
Building a mapping between the blank nodes of two compared Knowledge Bases that minimizes the delta size is NP-Hard in the general case.[6]BNodeLand is a framework that addresses this problem and provides dedicated tools for it.[7]Regarding the entailment problem, it has been proven that (a) deciding simple or RDF/S entailment of RDF graphs is NP-Complete,[8]and (b) deciding equivalence of simple RDF graphs is Isomorphism-Complete.
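As a small, hedged illustration of blank nodes in practice, the sketch below uses the rdflib Python library (an assumed choice, with invented example URIs) to attach a complex address attribute to a resource through an anonymous node, much like the '_:address' example mentioned earlier.

    from rdflib import BNode, Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/")          # hypothetical vocabulary for the sketch
    g = Graph()

    person = URIRef("http://example.org/people/alice")
    address = BNode()                              # an anonymous resource: no URI is minted

    g.add((person, EX.address, address))           # the blank node groups the address parts
    g.add((address, EX.street, Literal("10 High Street")))
    g.add((address, EX.city, Literal("Bristol")))

    # In the Turtle output the address is printed as a blank node (either with a
    # _: label or inline bracket syntax), not as a URI.
    print(g.serialize(format="turtle"))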
https://en.wikipedia.org/wiki/Blank_node
Adocument-term matrixis a mathematicalmatrixthat describes the frequency of terms that occur in each document in a collection. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of adocument-feature matrixwhere "features" may refer to other properties of a document besides terms.[1]It is also common to encounter the transpose, orterm-document matrixwhere documents are the columns and terms are the rows. They are useful in the field ofnatural language processingandcomputational text analysis.[2] While the value of the cells is commonly the raw count of a given term, there are various schemes for weighting the raw counts such as row normalizing (i.e. relative frequency/proportions) andtf-idf. Terms are commonly single words separated by whitespace or punctuation on either side (a.k.a. unigrams). In such a case, this is also referred to as "bag of words" representation because the counts of individual words is retained, but not the order of the words in the document. When creating a data-set oftermsthat appear in a corpus ofdocuments, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Eachijcell, then, is the number of times wordjoccurs in documenti. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. For instance if one has the following two (short) documents: then the document-term matrix would be: which shows which documents contain which terms and how many times they appear. Note that, unlike representing a document as just a token-count list, the document-term matrix includes all terms in the corpus (i.e. the corpus vocabulary), which is why there are zero-counts for terms in the corpus which do not also occur in a specific document. For this reason, document-term matrices are usually stored in a sparse matrix format. As a result of the power-law distribution of tokens in nearly every corpus (seeZipf's law), it is common to weight the counts. This can be as simple as dividing counts by the total number of tokens in a document (called relative frequency or proportions), dividing by the maximum frequency in each document (called prop max), or taking the log of frequencies (called log count). If one desires to weight the words most unique to an individual document as compared to the corpus as a whole, it is common to usetf-idf, which divides the term frequency by the term's document frequency. The document-term matrix emerged in the earliest years of the computerization of text. The increasing capacity for storing documents created the problem of retrieving a given document in an efficient manner. While previously the work of classifying and indexing was accomplished by hand, researchers explored the possibility of doing this automatically using word frequency information. One of the first published document-term matrices was inHarold Borko's 1962 article "The construction of an empirically based mathematically derived classification system" (page 282, see also his 1965 article[3]). Borko references two computer programs, "FEAT" which stood for "Frequency of Every Allowable Term," written by John C. 
Olney of the System Development Corporation and the Descriptor Word Index Program, written byEileen Stonealso of the System Development Corporation: Having selected the documents which were to make up the experimental library, the next step consisted of keypunching the entire body of text preparatory to computer processing.  The program used for this analysis was FEAT (Frequency of Every Allowable Term).  it was written by John C. Olney of the System Development Corporation and is designed to perform frequency and summary counts of individual words and of word pairs.  The  output of this program is an alphabetical listing, by frequency of occurrence, of all word types which appeared in the text.  Certain function words such as and, the,  at, a, etc., were placed in a "forbidden word list" table, and the frequency of these words was recorded  in a separate listing... A special computer program, called the Descriptor Word Index Program, was written to provide this information and to prepare a document-term matrix in a form suitable for in-put to the Factor Analysis Program. The Descriptor Word Index program was prepared by Eileen Stone of the System Development Corporation.[4] Shortly thereafter,Gerard Saltonpublished "Some hierarchical models for automatic document retrieval" in 1963 which also included a visual depiction of a document-term matrix.[5]Salton was at Harvard University at the time and his work was supported by the Air Force Cambridge Research Laboratories and Sylvania Electric Products, Inc. In this paper, Salton introduces the document-term matrix by comparison to a kind of term-context matrix used to measure similarities between words: If it is desired to generate document associations or document clusters instead of word associations, the same procedures can be used with slight modifications. Instead of starting with a word-sentence matrixC,... it is now convenient to construct a word-document matrixF,listing frequency of occurrence of word Wiin Document Dj... Document similarities can now be computed as before by comparing pairs of rows and by obtaining similarity coefficients based on the frequency of co-occurrences of the content words included in the given document. This procedure produces a document-document similarity matrix which can in turn be used for the generation of document clusters...[5] In addition to Borko and Salton, in 1964, F.W. Lancaster published a comprehensive review of automated indexing and retrieval. While the work was published while he worked at the Herner and Company in Washington D.C., the paper was written while he was "employed in research work at Aslib, on the Aslib Cranfield Project."[6]Lancaster credits Borko with the document-term matrix: Harold Borko, of the System Development Corporation, has carried this operation a little further. A significant group of clue words is chosen from the vocabulary of an experimental collection. These are arranged in a document/term matrix to show the frequency of occurrence of each term in each document.... A correlation coefficient for each word pair is then computed, based on their co-occurrence in the document set. The resulting term/term matrix... is then factor analysed and a series of factors are isolated. These factors, when interpreted and named on the basis of the terms with high loadings which appear in each of the factors, become the classes of an empirical classification. The terms with high loadings in each factor are the clue words or predictors of the categories. 
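To make the construction described earlier concrete, the following sketch builds a small document-term matrix with NumPy; the two documents are invented stand-ins, not the ones from the original example.

    import numpy as np

    docs = ["the cat sat on the mat", "the dog sat"]      # two hypothetical short documents

    # The corpus vocabulary supplies the columns; each document supplies a row.
    tokens = [d.split() for d in docs]
    vocab = sorted(set(word for doc in tokens for word in doc))

    dtm = np.zeros((len(docs), len(vocab)), dtype=int)
    for i, doc in enumerate(tokens):
        for word in doc:
            dtm[i, vocab.index(word)] += 1     # cell (i, j): count of term j in document i

    print(vocab)
    print(dtm)     # zero entries mark vocabulary terms absent from a document

Because most cells are zero for realistic corpora, such matrices are normally kept in a sparse format, as noted above.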
A point of view on the matrix is that each row represents a document. In thevectorial semantic model, which is normally the one used to compute a document-term matrix, the goal is to represent the topic of a document by the frequency of semantically significant terms. The terms are semantic units of the documents. It is often assumed, forIndo-European languages, that nouns, verbs and adjectives are the more significantcategories, and that words from those categories should be kept as terms. Addingcollocationas terms improves the quality of the vectors, especially when computing similarities between documents. Latent semantic analysis(LSA, performingsingular-value decompositionon the document-term matrix) can improve search results bydisambiguatingpolysemous wordsand searching forsynonymsof the query. However, searching in the high-dimensional continuous space is much slower than searching the standardtriedata structure of search engines. Multivariate analysisof the document-term matrix can reveal topics/themes of the corpus. Specifically,latent semantic analysisanddata clusteringcan be used, and, more recently,probabilistic latent semantic analysiswith its generalizationLatent Dirichlet allocation, andnon-negative matrix factorization, have been found to perform well for this task.
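The weighting schemes and the latent semantic analysis step mentioned above can be sketched in a few lines of NumPy; the tiny count matrix below is invented, and the weighting shown is one simple tf-idf-style variant among many.

    import numpy as np

    # A small document-term matrix of raw counts: 3 documents x 4 terms.
    dtm = np.array([[2, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 2]], dtype=float)

    rel_freq = dtm / dtm.sum(axis=1, keepdims=True)   # row-normalized relative frequencies

    df = (dtm > 0).sum(axis=0)                        # document frequency of each term
    idf = np.log(dtm.shape[0] / df)                   # inverse document frequency
    tfidf = rel_freq * idf                            # down-weights terms common to many documents

    # Latent semantic analysis: a truncated singular-value decomposition.
    U, s, Vt = np.linalg.svd(tfidf, full_matrices=False)
    k = 2
    doc_topics = U[:, :k] * s[:k]                     # documents in a k-dimensional latent space
    print(doc_topics)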
https://en.wikipedia.org/wiki/Term-document_matrix
Extensible Markup Language(XML) is amarkup languageandfile formatfor storing, transmitting, and reconstructing data. It defines a set of rules for encodingdocumentsin a format that is bothhuman-readableandmachine-readable. TheWorld Wide Web Consortium's XML 1.0 Specification[2]of 1998[3]and several other related specifications[4]—all of them freeopen standards—define XML.[5] The design goals of XML emphasize simplicity, generality, and usability across theInternet.[6]It is a textual data format with strong support viaUnicodefor differenthuman languages. Although the design of XML focuses on documents, the language is widely used for the representation of arbitrarydata structures,[7]such as those used inweb services.[8] Severalschema systemsexist to aid in the definition of XML-based languages, while programmers have developed manyapplication programming interfaces(APIs) to aid the processing of XML data. The main purpose of XML isserialization, i.e. storing, transmitting, and reconstructing arbitrary data. For two disparate systems to exchange information, they need to agree upon a file format. XML standardizes this process. It is therefore analogous to alingua francafor representing information.[9] As amarkup language, XML labels, categorizes, and structurally organizes information.[10]XML tags represent the data structure and containmetadata. What is within the tags is data, encoded in the way the XML standard specifies.[10]An additionalXML schema(XSD) defines the necessary metadata for interpreting and validating XML. (This is also referred to as the canonical schema.)[11]An XML document that adheres to basic XML rules is "well-formed"; one that adheres to its schema is "valid".[11] IETFRFC 7303(which supersedes the olderRFC 3023), provides rules for the construction ofmedia typesfor use in XML message. It defines three media types:application/xml(text/xmlis an alias),application/xml-external-parsed-entity(text/xml-external-parsed-entityis an alias) andapplication/xml-dtd. They are used for transmitting raw XML files without exposing their internalsemantics. RFC 7303 further recommends that XML-based languages be given media types ending in+xml, for example,image/svg+xmlforSVG. Further guidelines for the use of XML in a networked context appear inRFC 3470, also known as IETF BCP 70, a document covering many aspects of designing and deploying an XML-based language.[8] XML has come into common use for the interchange of data over the Internet. Hundreds of document formats using XML syntax have been developed,[12]includingRSS,Atom,Office Open XML,OpenDocument,SVG,COLLADA, andXHTML. XML also provides the base language forcommunication protocolssuch asSOAPandXMPP. It is one of the message exchange formats used in theAsynchronous JavaScript and XML (AJAX)programming technique. Many industry data standards, such asHealth Level 7,OpenTravel Alliance,FpML,MISMO, and theNational Information Exchange Modelare based on XML and the rich features of the XML schema specification. In publishing,Darwin Information Typing Architectureis an XML industry data standard. XML is used extensively to underpin various publishing formats. One of the applications of XML in science is the representation of operational meteorology information based onIWXXMstandards.[13] The material in this section is based on the XMLSpecification. This is not an exhaustive list of all the constructs that appear in XML; it provides an introduction to the key constructs most often encountered in day-to-day use. 
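Before the individual constructs are described, a small example may help fix ideas. The sketch below parses a short, well-formed document with Python's standard-library xml.etree.ElementTree module; the element and attribute names are invented for illustration.

    import xml.etree.ElementTree as ET

    # One root element, properly nested children, quoted attribute values.
    doc = b"""<?xml version="1.0" encoding="UTF-8"?>
    <catalog>
      <book id="bk101">
        <title>An Example Title</title>
        <price currency="USD">39.95</price>
      </book>
    </catalog>"""

    root = ET.fromstring(doc)                     # parse the bytes into an element tree
    for book in root.findall("book"):
        title = book.find("title").text           # character data inside the <title> element
        price = book.find("price")
        print(book.get("id"), title, price.get("currency"), price.text)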
XML documents consist entirely of characters from theUnicoderepertoire. Except for a small number of specifically excludedcontrol characters, any character defined by Unicode may appear within the content of an XML document. XML includes facilities for identifying theencodingof the Unicode characters that make up the document, and for expressing characters that, for one reason or another, cannot be used directly. Unicode code points in the following ranges are valid in XML 1.0 documents:[14] XML 1.1 extends the set of allowed characters to include all the above, plus the remaining characters in the range U+0001–U+001F.[15]At the same time, however, it restricts the use of C0 andC1control characters other than U+0009 (Horizontal Tab), U+000A (Line Feed), U+000D (Carriage Return), and U+0085 (Next Line) by requiring them to be written in escaped form (for example U+0001 must be written as&#x01;or its equivalent). In the case of C1 characters, this restriction is a backwards incompatibility; it was introduced to allow common encoding errors to be detected. The code pointU+0000(Null) is the only character that is not permitted in any XML 1.1 document. The Unicode character set can be encoded intobytesfor storage or transmission in a variety of different ways, called "encodings". Unicode itself defines encodings that cover the entire repertoire; well-known ones includeUTF-8(which the XML standard recommends using, without aBOM) andUTF-16.[16]There are many other text encodings that predate Unicode, such asASCIIand variousISO/IEC 8859; their character repertoires are in every case subsets of the Unicode character set. XML allows the use of any of the Unicode-defined encodings and any other encodings whose characters also appear in Unicode. XML also provides a mechanism whereby an XML processor can reliably, without any prior knowledge, determine which encoding is being used.[17]Encodings other than UTF-8 and UTF-16 are not necessarily recognized by every XML parser (and in some cases not even UTF-16, even though the standard mandates it to also be recognized). XML providesescapefacilities for including characters that are problematic to include directly. For example: There are fivepredefined entities: All permitted Unicode characters may be represented with anumeric character reference. Consider the Chinese character "中", whose numeric code in Unicode is hexadecimal 4E2D, or decimal 20,013. A user whose keyboard offers no method for entering this character could still insert it in an XML document encoded either as&#20013;or&#x4e2d;. Similarly, the string "I <3 Jörg" could be encoded for inclusion in an XML document asI &lt;3 J&#xF6;rg. &#0;is not permitted because thenull characteris one of the control characters excluded from XML, even when using a numeric character reference.[19]An alternative encoding mechanism such asBase64is needed to represent such characters. Comments may appear anywhere in a document outside other markup. Comments cannot appear before the XML declaration. Comments begin with<!--and end with-->. For compatibility withSGML, the string "--" (double-hyphen) is not allowed inside comments;[20]this means comments cannot be nested. The ampersand has no special significance within comments, so entity and character references are not recognized as such, and there is no way to represent characters outside the character set of the document encoding. 
An example of a valid comment:<!--no need to escape <code> & such in comments--> XML 1.0 (Fifth Edition) and XML 1.1 support the direct use of almost anyUnicodecharacter in element names, attributes, comments, character data, and processing instructions (other than the ones that have special symbolic meaning in XML itself, such as the less-than sign, "<"). A well-formed XML document may therefore freely mixChinese,ArmenianandCyrilliccharacters with Latin text. The XML specification defines an XML document as awell-formedtext, meaning that it satisfies a list of syntax rules provided in the specification. Some key points include the requirement that there be exactly one root element, that elements be properly nested, and that attribute values be quoted. The definition of an XML document excludes texts that contain violations of well-formedness rules; they are simply not XML. An XML processor that encounters such a violation is required to report such errors and to cease normal processing.[21][22]This policy, occasionally referred to as "draconianerror handling", stands in notable contrast to the behavior of programs that processHTML, which are designed to produce a reasonable result even in the presence of severe markup errors.[23]XML's policy in this area has been criticized as a violation ofPostel's law("Be conservative in what you send; be liberal in what you accept").[24] The XML specification defines avalid XML documentas awell-formed XML documentwhich also conforms to the rules of aDocument Type Definition(DTD).[25] In addition to being well formed, an XML document may bevalid. This means that it contains a reference to aDocument Type Definition(DTD), and that its elements and attributes are declared in that DTD and follow the grammatical rules for them that the DTD specifies. XML processors are classified asvalidatingornon-validatingdepending on whether or not they check XML documents for validity.[26]A processor that discovers a validity error must be able to report it, but may continue normal processing. A DTD is an example of aschemaorgrammar. Since the initial publication of XML 1.0, there has been substantial work in the area of schema languages for XML. Such schema languages typically constrain the set of elements that may be used in a document, which attributes may be applied to them, the order in which they may appear, and the allowable parent/child relationships. The oldest schema language for XML is thedocument type definition(DTD), inherited from SGML. DTDs have the benefits of being terse, widely supported, and able to define entities. Their limitations include the lack of explicit support for namespaces, very limited datatyping, and a non-XML syntax of their own. Two peculiar features that distinguish DTDs from other schema types are the syntactic support for embedding a DTD within XML documents and for definingentities, which are arbitrary fragments of text or markup that the XML processor inserts in the DTD itself and in the XML document wherever they are referenced, like character escapes. DTD technology is still used in many applications because of its ubiquity. A newer schema language, described by the W3C as the successor of DTDs, isXML Schema, often referred to by theinitialismfor XML Schema instances, XSD (XML Schema Definition). XSDs are far more powerful than DTDs in describing XML languages. They use a richdatatypingsystem and allow for more detailed constraints on an XML document's logical structure. XSDs also use an XML-based format, which makes it possible to use ordinary XML tools to help process them; an XSD document is itself rooted in an xs:schema element that defines the schema. RELAX NG(Regular Language for XML Next Generation) was initially specified byOASISand is now a standard (Part 2:Regular-grammar-based validationofISO/IEC 19757 – DSDL). 
RELAX NG schemas may be written in either an XML based syntax or a more compact non-XML syntax; the two syntaxes areisomorphicandJames Clark's conversion tool—Trang—can convert between them without loss of information. RELAX NG has a simpler definition and validation framework than XML Schema, making it easier to use and implement. It also has the ability to usedatatypeframeworkplug-ins; a RELAX NG schema author, for example, can require values in an XML document to conform to definitions in XML Schema Datatypes. Schematronis a language for makingassertionsabout the presence or absence of patterns in an XML document. It typically usesXPathexpressions. Schematron is now a standard (Part 3:Rule-based validationofISO/IEC 19757 – DSDL). DSDL(Document Schema Definition Languages) is a multi-part ISO/IEC standard (ISO/IEC 19757) that brings together a comprehensive set of small schema languages, each targeted at specific problems. DSDL includesRELAX NGfull and compact syntax,Schematronassertion language, and languages for defining datatypes, character repertoire constraints, renaming and entity expansion, and namespace-basedroutingof document fragments to different validators. DSDL schema languages do not have the vendor support of XML Schemas yet, and are to some extent a grassroots reaction of industrial publishers to the lack of utility of XML Schemas forpublishing. Some schema languages not only describe the structure of a particular XML format but also offer limited facilities to influence processing of individual XML files that conform to this format. DTDs and XSDs both have this ability; they can for instance provide theinfosetaugmentation facility and attribute defaults. RELAX NG and Schematron intentionally do not provide these. A cluster of specifications closely related to XML have been developed, starting soon after the initial publication of XML 1.0. It is frequently the case that the term "XML" is used to refer to XML together with one or more of these other technologies that have come to be seen as part of the XML core. Some other specifications conceived as part of the "XML Core" have failed to find wide adoption, includingXInclude,XLink, andXPointer. The design goals of XML include, "It shall be easy to write programs which process XML documents."[6]Despite this, the XML specification contains almost no information about how programmers might go about doing such processing. TheXML Infosetspecification provides a vocabulary to refer to the constructs within an XML document, but does not provide any guidance on how to access this information. A variety ofAPIsfor accessing XML have been developed and used, and some have been standardized. Existing APIs for XML processing tend to fall into these categories: Stream-oriented facilities require less memory and, for certain tasks based on a linear traversal of an XML document, are faster and simpler than other alternatives. Tree-traversal and data-binding APIs typically require the use of much more memory, but are often found more convenient for use by programmers; some include declarative retrieval of document components via the use of XPath expressions. XSLT is designed for declarative description of XML document transformations, and has been widely implemented both in server-side packages and Web browsers. XQuery overlaps XSLT in its functionality, but is designed more for searching of largeXML databases. 
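A brief sketch of the contrast between the tree-oriented and stream-oriented processing styles described above, both taken here from Python's standard library; the document content is invented.

    import io
    import xml.etree.ElementTree as ET

    data = b"<log><entry level='info'>started</entry><entry level='error'>disk full</entry></log>"

    # Tree style: the whole document sits in memory and can be queried declaratively
    # with ElementTree's limited XPath subset.
    root = ET.fromstring(data)
    print([e.text for e in root.findall(".//entry[@level='error']")])   # ['disk full']

    # Stream style: elements are delivered one at a time as parsing progresses,
    # so a very large document never has to be held in memory all at once.
    for event, element in ET.iterparse(io.BytesIO(data), events=("end",)):
        if element.tag == "entry":
            print(event, element.get("level"), element.text)
            element.clear()                       # release what has already been processed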
Simple API for XML(SAX) is alexical,event-drivenAPI in which a document is read serially and its contents are reported ascallbacksto variousmethodson ahandler objectof the user's design. SAX is fast and efficient to implement, but difficult to use for extracting information at random from the XML, since it tends to burden the application author with keeping track of what part of the document is being processed. It is better suited to situations in which certain types of information are always handled the same way, no matter where they occur in the document. Pull parsing treats the document as a series of items read in sequence using theiterator design pattern. This allows for writing ofrecursive descent parsersin which the structure of the code performing the parsing mirrors the structure of the XML being parsed, and intermediate parsed results can be used and accessed as local variables within the functions performing the parsing, or passed down (as function parameters) into lower-level functions, or returned (as function return values) to higher-level functions.[27]Examples of pull parsers include Data::Edit::Xml inPerl,StAXin theJavaprogramming language, XMLPullParser inSmalltalk, XMLReader inPHP, ElementTree.iterparse inPython, SmartXML inRed, System.Xml.XmlReader in the.NET Framework, and the DOM traversal API (NodeIterator and TreeWalker). A pull parser creates an iterator that sequentially visits the various elements, attributes, and data in an XML document. Code that uses this iterator can test the current item (to tell, for example, whether it is a start-tag or end-tag, or text), and inspect its attributes (local name,namespace, values of XML attributes, value of text, etc.), and can also move the iterator to the next item. The code can thus extract information from the document as it traverses it. The recursive-descent approach tends to lend itself to keeping data as typed local variables in the code doing the parsing, while SAX, for instance, typically requires a parser to manually maintain intermediate data within a stack of elements that are parent elements of the element being parsed. Pull-parsing code can be more straightforward to understand and maintain than SAX parsing code. TheDocument Object Model(DOM) is an interface that allows for navigation of the entire document as if it were a tree ofnodeobjectsrepresenting the document's contents. A DOM document can be created by a parser, or can be generated manually by users (with limitations). Data types in DOM nodes are abstract; implementations provide their own programming language-specificbindings. DOM implementations tend to bememoryintensive, as they generally require the entire document to be loaded into memory and constructed as a tree of objects before access is allowed. XML data bindingis a technique for simplifying development of applications that need to work with XML documents. It involves mapping the XML document to a hierarchy of strongly typed objects, rather than using the generic objects created by a DOM parser. The resulting code is often easier to read and maintain, and it can help to identify problems at compile time rather than run-time. XML data binding is particularly well-suited for applications where the document structure is known and fixed at the time the application is written. 
By creating a strongly typed representation of the XML data, developers can take advantage of modern integrated development environments (IDEs) that provide features like auto-complete, code refactoring, and code highlighting. This can make it easier to write correct and efficient code, and reduce the risk of errors and bugs. Example data-binding systems include theJava Architecture for XML Binding(JAXB), XML Serialization in.NET Framework,[28]and XML serialization ingSOAP. XML has appeared as afirst-class data typein other languages. TheECMAScript for XML(E4X) extension to theECMAScript/JavaScript language explicitly defines two specific objects (XML and XMLList) for JavaScript, which support XML document nodes and XML node lists as distinct objects and use a dot-notation specifying parent-child relationships.[29]E4X is supported by theMozilla2.5+ browsers (though now deprecated) and AdobeActionscriptbut has not been widely adopted. Similar notations are used in Microsoft'sLINQimplementation for Microsoft .NET 3.5 and above, and inScala(which uses the Java VM). The open-source xmlsh application, which provides a Linux-like shell with special features for XML manipulation, similarly treats XML as a data type, using the <[ ]> notation.[30]TheResource Description Frameworkdefines a data typerdf:XMLLiteralto hold wrapped,canonical XML.[31]Facebook has produced extensions to thePHPandJavaScriptlanguages that add XML to the core syntax in a similar fashion to E4X, namelyXHPandJSXrespectively. XML is an applicationprofileofSGML(ISO 8879).[32] The versatility of SGML for dynamic information display was understood by early digital media publishers in the late 1980s prior to the rise of the Internet.[22][33]By the mid-1990s some practitioners of SGML had gained experience with the then-newWorld Wide Web, and believed that SGML offered solutions to some of the problems the Web was likely to face as it grew.Dan Connollyadded SGML to the list of W3C's activities when he joined the staff in 1995; work began in mid-1996 whenSun MicrosystemsengineerJon Bosakdeveloped a charter and recruited collaborators. Bosak was well-connected in the small community of people who had experience both in SGML and the Web.[34] XML was compiled by aworking groupof eleven members,[35]supported by a (roughly) 150-member Interest Group. Technical debate took place on the Interest Group mailing list and issues were resolved by consensus or, when that failed, majority vote of the Working Group. A record of design decisions and their rationales was compiled byMichael Sperberg-McQueenon December 4, 1997.[36]James Clarkserved as Technical Lead of the Working Group, notably contributing the empty-element<empty />syntax and the name "XML". Other names that had been put forward for consideration included "MAGMA" (Minimal Architecture for Generalized Markup Applications), "SLIM" (Structured Language for Internet Markup) and "MGML" (Minimal Generalized Markup Language).[37]The co-editors of the specification were originallyTim BrayandMichael Sperberg-McQueen. Halfway through the project, Bray accepted a consulting engagement withNetscape, provoking vociferous protests from Microsoft. Bray was temporarily asked to resign the editorship. This led to intense dispute in the Working Group, eventually solved by the appointment of Microsoft'sJean Paolias a third co-editor.[38] The XML Working Group communicated primarily through email and weekly teleconferences. 
The major design decisions were reached in a short burst of intense work between August and November 1996,[39]when the first Working Draft of an XML specification was published.[40]Further design work continued through 1997, and XML 1.0 became aW3CRecommendation on February 10, 1998. XML is a profile of an ISO standard, SGML, and most of XML comes from SGML unchanged. From SGML comes the separation of logical and physical structures (elements and entities), the availability of grammar-based validation (DTDs), the separation of data and metadata (elements and attributes), mixed content, the separation of processing from representation (processing instructions), and the default angle-bracket syntax. The SGML declaration was removed; thus, XML has a fixed delimiter set and adoptsUnicodeas the documentcharacter set. Other sources of technology for XML were theTEI(Text Encoding Initiative), which defined a profile of SGML for use as a "transfer syntax" andHTML. The ERCS (Extended Reference Concrete Syntax) project of the SPREAD (Standardization Project Regarding East Asian Documents) project of the ISO-related China/Japan/Korea Document Processing expert group was the basis of XML 1.0's naming rules; SPREAD also introduced hexadecimal numeric character references and the concept of references to make available all Unicode characters. To support ERCS, XML and HTML better, the SGML standard IS 8879 was revised in 1996 and 1998 with WebSGML Adaptations. Ideas that developed during discussion that are novel in XML included the algorithm for encoding detection and the encoding header, the processing instruction target, the xml:space attribute, and the new close delimiter for empty-element tags. The notion of well-formedness as opposed to validity (which enables parsing without a schema) was first formalized in XML, although it had been implemented successfully in the Electronic Book Technology "Dynatext" software;[41]the software from the University of Waterloo New Oxford English Dictionary Project; the RISP LISP SGML text processor at Uniscope, Tokyo; the US Army Missile Command IADS hypertext system; Mentor Graphics Context; Interleaf and Xerox Publishing System. The first (XML 1.0) was initially defined in 1998. It has undergone minor revisions since then, without being given a new version number, and is currently in its fifth edition, as published on November 26, 2008. It is widely implemented and still recommended for general use. The second (XML 1.1) was initially published on February 4, 2004, the same day as XML 1.0 Third Edition,[42]and is currently in its second edition, as published on August 16, 2006. It contains features (some contentious) that are intended to make XML easier to use in certain cases.[43]The main changes are to enable the use of line-ending characters used onEBCDICplatforms, and the use of scripts and characters absent from Unicode 3.2. XML 1.1 is not very widely implemented and is recommended for use only by those who need its particular features.[44] Prior to its fifth edition release, XML 1.0 differed from XML 1.1 in having stricter requirements for characters available for use in element and attribute names and unique identifiers: in the first four editions of XML 1.0 the characters were exclusively enumerated using a specific version of theUnicodestandard (Unicode 2.0 to Unicode 3.2.) The fifth edition substitutes the mechanism of XML 1.1, which is more future-proof but reducesredundancy. 
The approach taken in the fifth edition of XML 1.0 and in all editions of XML 1.1 is that only certain characters are forbidden in names, and everything else is allowed to accommodate suitable name characters in future Unicode versions. In the fifth edition, XML names may contain characters in theBalinese,Cham, orPhoenicianscripts among many others added to Unicode since Unicode 3.2.[43] Almost any Unicode code point can be used in the character data and attribute values of an XML 1.0/1.1 document, even if the character corresponding to the code point is not defined in the current version of Unicode. In character data and attribute values, XML 1.1 allows the use of morecontrol charactersthan XML 1.0, but, for "robustness", most of the control characters introduced in XML 1.1 must be expressed as numeric character references (and #x7F through #x9F, which had been allowed in XML 1.0, are in XML 1.1 even required to be expressed as numeric character references[43]). Among the supported control characters in XML 1.1 are two line break codes that must be treated as whitespace characters, which are the only control codes that can be written directly. There has been discussion of an XML 2.0, although no organization has announced plans for work on such a project. XML-SW (SW forskunkworks), which one of the original developers of XML has written,[45]contains some proposals for what an XML 2.0 might look like, including elimination of DTDs from syntax, as well as integration ofXML namespaces,XML BaseandXML Information Setinto the base standard. In 2012,James Clark(technical lead of the XML Working Group) andJohn Cowan(editor of the XML 1.1 specification) formed the MicroXML Community Group within the W3C and published MicroXML, a specification for a significantly reduced subset of XML.[46]MicroXML provides a much simpler core syntax by stripping away many features of full XML, such as document type declarations and CDATA sections,[21]while ensuring XML namespace validity by disallowing names conflicting with namespace prefixing. Due to the verbosity of textual XML, various binary formats have been proposed as compact representations for XML:Fast Infoset, based onASN.1, was published as an international standard by theITU-Tin 2005, and later byISO.Efficient XML Interchange(EXI), a binary XML format originally developed by AgileDelta, was adopted as a W3C recommendation in 2011, with a second edition published in 2014. XML and its extensions have regularly been criticized for verbosity, complexity and redundancy.[47] Mapping the basic tree model of XML totype systemsof programming languages or databases can be difficult, especially when XML is used for exchanging highly structured data between applications, which was not its primary design goal. However,XML data bindingsystems allow applications to access XML data directly from objects representing adata structureof the data in the programming language used, which ensurestype safety, rather than using theDOMorSAXto retrieve data from a direct representation of the XML itself. This is accomplished by automatically creating a mapping between elements of the XML schemaXSDof the document and members of a class to be represented in memory. Other criticisms attempt to refute the claim that XML is aself-describinglanguage[48](though the XML specification itself makes no such claim). 
JSON,YAML, andS-Expressionsare frequently proposed as simpler alternatives (seeComparison of data-serialization formats)[49]that focus on representing highly structured data rather than documents, which may contain both highly structured and relatively unstructured content. However, W3C-standardized XML schema specifications offer a broader range of structuredXSDdata types compared to simpler serialization formats and offer modularity and reuse throughXML namespaces.
https://en.wikipedia.org/wiki/XML
Information retrieval (IR) in computing and information science is the task of identifying and retrieving information system resources that are relevant to an information need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based on full-text or other content-based indexing. Information retrieval is the science[1] of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds. Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents. Web search engines are the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevance. An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference of information retrieval searching compared to database searching.[2] Depending on the application, the data objects may be, for example, text documents, images,[3] audio,[4] mind maps[5] or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6] An early description of mechanized searching reads: "there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute". The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945.[7] It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film.[8] The first description of a computer searching for information was given by Holmstrom in 1948,[9] detailing an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents).[7] Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. 
In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. The aim of this was to support the information retrieval community by supplying the infrastructure needed for the evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further. By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm,[10] using the web’s hyperlink structure to assess page importance and improve relevance ranking. During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses[11] have highlighted Bing’s semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing. A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12] BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13] Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as Microsoft MARCO (MAchine Reading COmprehension, 2019)[14] became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15] As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models. 
Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16] Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17] Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models helps balance scalability, relevance, and efficiency in retrieval systems.[18] As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come to the fore. Research is now focused not just on relevance and efficiency, but on transparency, accountability, and user trust in retrieval algorithms. Information retrieval techniques are employed across a wide range of application areas and methods. In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. Common models can be categorized along two dimensions: their mathematical basis and the properties of the model. In addition to these theoretical distinctions, modern information retrieval models are also categorized by how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.[19] This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for information retrieval models.[23][20] The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval[clarification needed] or top-k retrieval, include precision and recall. All measures assume a ground truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
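A minimal sketch of the precision and recall computation just described, assuming a known ground-truth set of relevant documents for a single query (all document IDs are made up):

    import math

    retrieved = ["d3", "d7", "d1", "d9"]   # documents returned by the system (hypothetical)
    relevant = {"d1", "d3", "d4", "d8"}    # documents judged relevant (hypothetical ground truth)

    true_positives = sum(1 for d in retrieved if d in relevant)
    precision = true_positives / len(retrieved)  # fraction of retrieved docs that are relevant
    recall = true_positives / len(relevant)      # fraction of relevant docs that were retrieved

    print(precision, recall)  # 0.5 0.5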
https://en.wikipedia.org/wiki/Information_retrieval#Relevance
Hyperlink-Induced Topic Search (HITS; also known as hubs and authorities) is a link analysis algorithm that rates Web pages, developed by Jon Kleinberg. The idea behind Hubs and Authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming; that is, certain web pages, known as hubs, served as large directories that were not actually authoritative in the information that they held, but were used as compilations of a broad catalog of information that led users directly to other authoritative pages. In other words, a good hub represents a page that points to many other pages, while a good authority represents a page that is linked to by many different hubs.[1] The scheme therefore assigns two scores for each page: its authority, which estimates the value of the content of the page, and its hub value, which estimates the value of its links to other pages. Many methods have been used to rank the importance of scientific journals. One such method is Garfield's impact factor. Journals such as Science and Nature are filled with numerous citations, giving these journals very high impact factors. Thus, when comparing two more obscure journals that have received roughly the same number of citations, if one of them has received many citations from Science and Nature, that journal should be ranked higher. In other words, it is better to receive citations from an important journal than from an unimportant one.[2] This phenomenon also occurs on the Internet. Counting the number of links to a page can give us a general estimate of its prominence on the Web, but a page with very few incoming links may also be prominent, if two of these links come from the home pages of sites like Yahoo!, Google, or MSN. Because these sites are of very high importance but are also search engines, a page can be ranked much higher than its actual relevance. In the HITS algorithm, the first step is to retrieve the most relevant pages to the search query. This set is called the root set and can be obtained by taking the top pages returned by a text-based search algorithm. A base set is generated by augmenting the root set with all the web pages that are linked from it and some of the pages that link to it. The web pages in the base set and all hyperlinks among those pages form a focused subgraph. The HITS computation is performed only on this focused subgraph. According to Kleinberg the reason for constructing a base set is to ensure that most (or many) of the strongest authorities are included. Authority and hub values are defined in terms of one another in a mutual recursion. An authority value is computed as the sum of the scaled hub values that point to that page. A hub value is the sum of the scaled authority values of the pages it points to. Some implementations also consider the relevance of the linked pages. The algorithm performs a series of iterations, each consisting of two basic steps: an authority update and a hub update. The hub score and authority score for a node are calculated with the iterative update rules described below. HITS, like Page and Brin's PageRank, is an iterative algorithm based on the linkage of the documents on the web. However, it has some major differences: it is computed at query time, on the focused subgraph derived from the search results rather than over the whole web graph, and it assigns two scores per page (hub and authority) rather than a single score. To begin the ranking, we let auth(p)=1{\displaystyle \mathrm {auth} (p)=1} and hub(p)=1{\displaystyle \mathrm {hub} (p)=1} for each page p{\displaystyle p}. We consider two types of updates: the Authority Update Rule and the Hub Update Rule. 
In order to calculate the hub/authority scores of each node, repeated iterations of the Authority Update Rule and the Hub Update Rule are applied. A k-step application of the hub-authority algorithm entails applying first the Authority Update Rule and then the Hub Update Rule, k times. For each p{\displaystyle p}, we update auth(p){\displaystyle \mathrm {auth} (p)} to auth(p)=∑q∈Ptohub(q){\displaystyle \mathrm {auth} (p)=\displaystyle \sum \nolimits _{q\in P_{\mathrm {to} }}\mathrm {hub} (q)}, where Pto{\displaystyle P_{\mathrm {to} }} is the set of all pages which link to page p{\displaystyle p}. That is, a page's authority score is the sum of all the hub scores of pages that point to it. For each p{\displaystyle p}, we update hub(p){\displaystyle \mathrm {hub} (p)} to hub(p)=∑q∈Pfromauth(q){\displaystyle \mathrm {hub} (p)=\displaystyle \sum \nolimits _{q\in P_{\mathrm {from} }}\mathrm {auth} (q)}, where Pfrom{\displaystyle P_{\mathrm {from} }} is the set of all pages which page p{\displaystyle p} links to. That is, a page's hub score is the sum of all the authority scores of pages it points to. In principle, the final hub-authority scores are determined only after infinitely many repetitions of the algorithm; in practice, the number of iterations is limited. Directly and iteratively applying the update rules on their own leads to diverging values, so it is necessary to normalize the scores after every iteration: each authority value is divided by the square root of the sum of the squares of all authority values, and each hub value is divided by the square root of the sum of the squares of all hub values. With this normalization, the values obtained from the process converge.
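The update and normalization rules translate directly into code. The following is a minimal sketch (not Kleinberg's original implementation), run for a fixed number of iterations on a small hypothetical link graph:

    import math

    # Hypothetical focused subgraph: page -> pages it links to.
    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    pages = list(links)
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}

    for _ in range(50):  # k-step application of the two update rules
        # Authority update: sum of hub scores of pages linking to p.
        new_auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
        # Hub update: sum of the (updated) authority scores of pages p links to.
        new_hub = {p: sum(new_auth[q] for q in links[p]) for p in pages}
        # Normalize so the values converge instead of diverging.
        a_norm = math.sqrt(sum(v * v for v in new_auth.values()))
        h_norm = math.sqrt(sum(v * v for v in new_hub.values()))
        auth = {p: v / a_norm for p, v in new_auth.items()}
        hub = {p: v / h_norm for p, v in new_hub.items()}

    print({p: round(auth[p], 3) for p in pages})  # page "c", linked by many hubs, gets high authority
    print({p: round(hub[p], 3) for p in pages})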
https://en.wikipedia.org/wiki/HITS_algorithm
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content).[1] The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database.[2] The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems,[3] used on a large scale for example in search engines. Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document.[4] The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created. The inverted index data structure is a central component of a typical search engine indexing algorithm.[5] A goal of a search engine implementation is to optimize the speed of the query: find the documents where word X occurs.[6] Once a forward index is developed, which stores lists of words per document, it is next inverted to develop an inverted index. Querying the forward index would require sequential iteration through each document and through each word to verify a matching document. The time, memory, and processing resources to perform such a query are not always technically realistic. Instead of listing the words per document in the forward index, the inverted index data structure is developed, which lists the documents per word. With the inverted index created, the query can be resolved by jumping to the word ID (via random access) in the inverted index. In pre-computer times, concordances to important books were manually assembled. These were effectively inverted indexes with a small amount of accompanying commentary that required a tremendous amount of effort to produce. In bioinformatics, inverted indexes are very important in the sequence assembly of short fragments of sequenced DNA. One way to find the source of a fragment is to search for it against a reference DNA sequence. A small number of mismatches (due to differences between the sequenced DNA and reference DNA, or errors) can be accounted for by dividing the fragment into smaller fragments—at least one subfragment is likely to match the reference DNA sequence. The matching requires constructing an inverted index of all substrings of a certain length from the reference DNA sequence. Since the human genome contains more than 3 billion base pairs, and we need to store a DNA substring for every index entry and a 32-bit integer for the index itself, the storage requirement for such an inverted index would probably be in the tens of gigabytes. For historical reasons, inverted list compression and bitmap compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem.[7]
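As a rough sketch of the word-level variant described above, the following builds a term-to-document-positions mapping from a tiny hypothetical collection and then answers a query by direct lookup rather than by scanning every document:

    from collections import defaultdict

    # Hypothetical forward data: document ID -> text.
    docs = {
        0: "it is what it is",
        1: "what is it",
        2: "it is a banana",
    }

    # Word-level inverted index: term -> {doc_id: [positions]}.
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.split()):
            index[term][doc_id].append(pos)

    # Query resolution is a dictionary lookup instead of a scan over all documents.
    print(dict(index["banana"]))   # {2: [3]} -- document 2, position 3
    print(sorted(index["what"]))   # documents containing "what": [0, 1]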
https://en.wikipedia.org/wiki/Inverted_index
Collaborative filtering (CF) is, besides content-based filtering, one of two major techniques used by recommender systems.[1] Collaborative filtering has two senses, a narrow one and a more general one.[2] In the newer, narrower sense, collaborative filtering is a method of making automatic predictions (filtering) about a user's interests by utilizing preferences or taste information collected from many users (collaborating). This approach assumes that if persons A and B share similar opinions on one issue, they are more likely to agree on other issues compared to a random pairing of A with another person. For instance, a collaborative filtering system for television programming could predict which shows a user might enjoy based on a limited list of the user's tastes (likes or dislikes).[3] These predictions are specific to the user, but use information gleaned from many users. This differs from the simpler approach of giving an average (non-specific) score for each item of interest, for example based on its number of votes. In the more general sense, collaborative filtering is the process of filtering information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc.[2] Applications of collaborative filtering typically involve very large data sets. Collaborative filtering methods have been applied to many kinds of data including: sensing and monitoring data, such as in mineral exploration or environmental sensing over large areas or with multiple sensors; financial data, such as financial service institutions that integrate many financial sources; and user data from electronic commerce and web applications. This article focuses on collaborative filtering for user data, but some of the methods also apply to other major applications. The growth of the Internet has made it much more difficult to effectively extract useful information from all the available online information.[according to whom?] The overwhelming amount of data necessitates mechanisms for efficient information filtering.[according to whom?] Collaborative filtering is one of the techniques used for dealing with this problem. The motivation for collaborative filtering comes from the idea that people often get the best recommendations from someone with tastes similar to themselves.[citation needed] Collaborative filtering encompasses techniques for matching people with similar interests and making recommendations on this basis. Collaborative filtering algorithms often require (1) users' active participation, (2) an easy way to represent users' interests, and (3) algorithms that are able to match people with similar interests. Typically, the workflow of a collaborative filtering system is as follows: a user expresses preferences by rating items of the system; the system matches this user's ratings against other users' ratings and finds the people with the most similar tastes; and the system then recommends items that those similar users rated highly but that the active user has not yet rated. A key problem of collaborative filtering is how to combine and weight the preferences of user neighbors. Sometimes, users can immediately rate the recommended items. As a result, the system gains an increasingly accurate representation of user preferences over time. Collaborative filtering systems have many forms, but many common systems can be reduced to two steps: looking for users who share rating patterns with the active user (the user for whom the prediction is made), and using the ratings from those like-minded users to calculate a prediction for the active user. This falls under the category of user-based collaborative filtering. A specific application of this is the user-based Nearest Neighbor algorithm. Alternatively, item-based collaborative filtering (users who bought x also bought y) proceeds in an item-centric manner: it builds an item-item matrix that captures relationships between pairs of items, and then uses that matrix, together with the active user's data, to infer the user's tastes. See, for example, the Slope One item-based collaborative filtering family. 
Another form of collaborative filtering can be based on implicit observations of normal user behavior (as opposed to the artificial behavior imposed by a rating task). These systems observe what a user has done together with what all users have done (what music they have listened to, what items they have bought) and use that data to predict the user's behavior in the future, or to predict how a user might like to behave given the chance. These predictions then have to be filtered through business logic to determine how they might affect the actions of a business system. For example, it is not useful to offer to sell somebody a particular album of music if they have already demonstrated that they own that music. Relying on a scoring or rating system which is averaged across all users ignores the specific demands of a user, and performs particularly poorly in tasks where there is large variation in interest (as in the recommendation of music). However, there are other methods to combat information explosion, such as web search and data clustering. The memory-based approach uses user rating data to compute the similarity between users or items. Typical examples of this approach are neighbourhood-based CF and item-based/user-based top-N recommendations. For example, in user-based approaches, the value of the rating user u gives to item i is calculated as an aggregation of some similar users' ratings of the item: {\displaystyle r_{u,i}=\operatorname {aggr} _{u^{\prime }\in U}r_{u^{\prime },i}}, where U denotes the set of the top N users that are most similar to user u who rated item i. Some examples of the aggregation function include the plain average {\displaystyle r_{u,i}={\frac {1}{N}}\sum _{u^{\prime }\in U}r_{u^{\prime },i}}, the similarity-weighted sum {\displaystyle r_{u,i}=k\sum _{u^{\prime }\in U}\operatorname {simil} (u,u^{\prime })r_{u^{\prime },i}}, and the mean-centred form {\displaystyle r_{u,i}={\bar {r_{u}}}+k\sum _{u^{\prime }\in U}\operatorname {simil} (u,u^{\prime })(r_{u^{\prime },i}-{\bar {r_{u^{\prime }}}})}, where k is a normalizing factor defined as k=1/∑u′∈U|simil⁡(u,u′)|{\displaystyle k=1/\sum _{u^{\prime }\in U}|\operatorname {simil} (u,u^{\prime })|}, and where ru¯{\displaystyle {\bar {r_{u}}}} is the average rating of user u for all the items rated by u. The neighborhood-based algorithm calculates the similarity between two users or items, and produces a prediction for the user by taking the weighted average of all the ratings. Similarity computation between items or users is an important part of this approach. Multiple measures, such as Pearson correlation and vector cosine based similarity, are used for this. The Pearson correlation similarity of two users x, y is defined as {\displaystyle \operatorname {simil} (x,y)={\frac {\sum _{i\in I_{xy}}(r_{x,i}-{\bar {r_{x}}})(r_{y,i}-{\bar {r_{y}}})}{{\sqrt {\sum _{i\in I_{xy}}(r_{x,i}-{\bar {r_{x}}})^{2}}}{\sqrt {\sum _{i\in I_{xy}}(r_{y,i}-{\bar {r_{y}}})^{2}}}}}}, where Ixy is the set of items rated by both user x and user y. The cosine-based approach defines the cosine-similarity between two users x and y as:[4] {\displaystyle \operatorname {simil} (x,y)=\cos({\vec {x}},{\vec {y}})={\frac {{\vec {x}}\cdot {\vec {y}}}{\|{\vec {x}}\|\times \|{\vec {y}}\|}}={\frac {\sum _{i}r_{x,i}r_{y,i}}{{\sqrt {\sum _{i}r_{x,i}^{2}}}{\sqrt {\sum _{i}r_{y,i}^{2}}}}}}. The user-based top-N recommendation algorithm uses a similarity-based vector model to identify the k most similar users to an active user. After the k most similar users are found, their corresponding user-item matrices are aggregated to identify the set of items to be recommended. A popular method to find the similar users is locality-sensitive hashing, which implements the nearest neighbor mechanism in linear time. The advantages of this approach include: the explainability of the results, which is an important aspect of recommendation systems; easy creation and use; easy facilitation of new data; content-independence of the items being recommended; and good scaling with co-rated items. There are also several disadvantages of this approach. Its performance decreases when data is sparse, which is common for web-related items. This hinders the scalability of this approach and creates problems with large datasets. Although it can efficiently handle new users because it relies on a data structure, adding new items becomes more complicated because that representation usually relies on a specific vector space. 
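Returning to the user-based formulas above, here is a small sketch: cosine similarity between users' rating vectors, and a prediction for an unrated item as the similarity-weighted average of the neighbours' ratings (the ratings below are hypothetical):

    import math

    # Hypothetical user -> {item: rating} data; missing ratings are simply absent.
    ratings = {
        "alice": {"i1": 5, "i2": 3, "i3": 4},
        "bob":   {"i1": 4, "i2": 2, "i3": 5, "i4": 4},
        "carol": {"i1": 1, "i2": 5, "i4": 2},
    }

    def cosine(u, v):
        # Dot product over co-rated items, norms over each user's own ratings.
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in common)
        nu = math.sqrt(sum(r * r for r in ratings[u].values()))
        nv = math.sqrt(sum(r * r for r in ratings[v].values()))
        return dot / (nu * nv)

    def predict(user, item):
        # Similarity-weighted average of the neighbours' ratings for the item.
        neighbours = [u for u in ratings if u != user and item in ratings[u]]
        num = sum(cosine(user, u) * ratings[u][item] for u in neighbours)
        den = sum(abs(cosine(user, u)) for u in neighbours)
        return num / den if den else None

    print(round(predict("alice", "i4"), 2))  # bob and carol weigh in on item i4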
As noted above, adding new items requires including the new item and re-inserting all of the elements in the structure. An alternative to memory-based methods is to learn models to predict users' ratings of unrated items. Model-based CF algorithms include Bayesian networks, clustering models, latent semantic models such as singular value decomposition, probabilistic latent semantic analysis, multiple multiplicative factor, latent Dirichlet allocation and Markov decision process-based models.[5] In this approach, dimensionality reduction methods are mostly used to improve the robustness and accuracy of memory-based methods. Specifically, methods like singular value decomposition and principal component analysis, known as latent factor models, compress a user-item matrix into a low-dimensional representation in terms of latent factors. This transforms the large matrix, which contains many missing values, into a much smaller matrix. A compressed matrix can be used to find neighbors of a user or item as per the previous section. Compression has two advantages with large, sparse data: it is more accurate and it scales better.[6] A number of applications combine the memory-based and the model-based CF algorithms. These overcome the limitations of native CF approaches and improve prediction performance. Importantly, they overcome CF problems such as sparsity and loss of information. However, they have increased complexity and are expensive to implement.[7] Usually most commercial recommender systems are hybrid, for example, the Google news recommender system.[8] In recent years, many neural and deep-learning techniques have been proposed for collaborative filtering. Some generalize traditional matrix factorization algorithms via a non-linear neural architecture,[9] or leverage new model types like Variational Autoencoders.[10] Deep learning has been applied to many scenarios (context-aware, sequence-aware, social tagging etc.). However, the effectiveness of deep learning for collaborative recommendation has been questioned. A systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys), found that, on average, less than 40% of articles are reproducible, and only 14% in some conferences. Overall, the study identified 18 articles, of which only 7 could be reproduced and 6 could be outperformed by older and simpler, properly tuned baselines. The article highlights potential problems in today's research scholarship and calls for improved scientific practices.[11] Similar issues have been spotted by others[12] and also in sequence-aware recommender systems.[13] Many recommender systems simply ignore other contextual information that exists alongside a user's rating when providing item recommendations.[14] However, with the pervasive availability of contextual information such as time, location, social information, and the type of device the user is using, it is becoming more important than ever for a successful recommender system to provide context-sensitive recommendations. According to Charu Aggarwal, "Context-sensitive recommender systems tailor their recommendations to additional information that defines the specific situation under which recommendations are made. This additional information is referred to as the context."[6] Taking contextual information into consideration adds an additional dimension to the existing user-item rating matrix. 
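As a rough sketch of this extra dimension, assuming NumPy (all numbers and context labels below are hypothetical), user preferences can be held in an order-3 user-by-item-by-context tensor and the similarity between two users computed from their item-context slices, as elaborated in the following paragraphs:

    import numpy as np

    # Hypothetical order-3 tensor: 3 users x 4 items x 2 contexts (e.g. morning/evening).
    T = np.array([
        [[5, 2], [3, 4], [4, 1], [2, 5]],   # user 0: ratings per (item, context)
        [[4, 1], [3, 5], [5, 2], [1, 4]],   # user 1
        [[1, 5], [2, 2], [1, 4], [5, 1]],   # user 2
    ], dtype=float)

    def user_similarity(a, b):
        # Flatten each user's item-context slice and compare with Pearson correlation.
        x, y = T[a].ravel(), T[b].ravel()
        return np.corrcoef(x, y)[0, 1]

    print(round(user_similarity(0, 1), 3))  # users 0 and 1 rate similarly across contexts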
For instance, consider a music recommender system that provides different recommendations depending on the time of day. In this case, a user may have different preferences for music at different times of the day. Thus, instead of using the user-item matrix, we may use a tensor of order 3 (or higher, to take other contexts into account) to represent context-sensitive users' preferences.[15][16][17] In order to take advantage of collaborative filtering, and particularly of neighborhood-based methods, these approaches can be extended from a two-dimensional rating matrix into a tensor of higher order[citation needed]. For this purpose, to find the users most like-minded to a target user, one can extract and compute the similarity of the slices (e.g. the item-time matrix) corresponding to each user. Unlike the context-insensitive case, in which the similarity of two rating vectors is calculated, in context-aware approaches the similarity of the rating matrices corresponding to each user is calculated using Pearson coefficients.[6] After the most like-minded users are found, their corresponding ratings are aggregated to identify the set of items to be recommended to the target user. The most important disadvantage of taking context into the recommendation model is having to deal with a larger dataset that contains many more missing values than the user-item rating matrix[citation needed]. Therefore, similar to matrix factorization methods, tensor factorization techniques can be used to reduce the dimensionality of the original data before any neighborhood-based methods are used[citation needed]. Unlike the traditional model of mainstream media, in which there are few editors who set guidelines, collaboratively filtered social media can have a very large number of editors, and content improves as the number of participants increases. Services like Reddit, YouTube, and Last.fm are typical examples of collaborative filtering based media.[18] One scenario of collaborative filtering application is to recommend interesting or popular information as judged by the community. As a typical example, stories appear on the front page of Reddit as they are "voted up" (rated positively) by the community. As the community becomes larger and more diverse, the promoted stories can better reflect the average interest of the community members. Wikipedia is another application of collaborative filtering. Volunteers contribute to the encyclopedia by filtering out facts from falsehoods.[19] Another aspect of collaborative filtering systems is the ability to generate more personalized recommendations by analyzing information from the past activity of a specific user, or the history of other users deemed to be of similar taste to a given user. These resources are used for user profiling and help the site recommend content on a user-by-user basis. The more a given user makes use of the system, the better the recommendations become, as the system gains data to improve its model of that user. A collaborative filtering system does not necessarily succeed in automatically matching content to one's preferences. Unless the platform achieves unusually good diversity and independence of opinions, one point of view will always dominate another in a particular community. As in the personalized recommendation scenario, the introduction of new users or new items can cause the cold start problem, as there will be insufficient data on these new entries for the collaborative filtering to work accurately. 
In order to make appropriate recommendations for a new user, the system must first learn the user's preferences by analysing past voting or rating activities. The collaborative filtering system requires a substantial number of users to rate a new item before that item can be recommended. In practice, many commercial recommender systems are based on large datasets. As a result, the user-item matrix used for collaborative filtering could be extremely large and sparse, which brings challenges to the performance of the recommendation. One typical problem caused by data sparsity is the cold start problem. As collaborative filtering methods recommend items based on users' past preferences, new users will need to rate a sufficient number of items to enable the system to capture their preferences accurately and thus provide reliable recommendations. Similarly, new items face the same problem. When new items are added to the system, they need to be rated by a substantial number of users before they can be recommended to users who have similar tastes to the ones who rated them. The new item problem does not affect content-based recommendations, because the recommendation of an item is based on its discrete set of descriptive qualities rather than its ratings. As the numbers of users and items grow, traditional CF algorithms will suffer serious scalability problems[citation needed]. For example, with tens of millions of customers O(M){\displaystyle O(M)} and millions of items O(N){\displaystyle O(N)}, a CF algorithm with the complexity of n{\displaystyle n} is already too large. In addition, many systems need to react immediately to online requests and make recommendations for all of their millions of users, with most computations happening on machines with very large memory.[20] Synonymy refers to the tendency of a number of the same or very similar items to have different names or entries. Most recommender systems are unable to discover this latent association and thus treat these products differently. For example, the seemingly different items "children's movie" and "children's film" actually refer to the same item. Indeed, the degree of variability in descriptive term usage is greater than commonly suspected.[citation needed] The prevalence of synonyms decreases the recommendation performance of CF systems. Topic Modeling (like the Latent Dirichlet Allocation technique) could solve this by grouping different words belonging to the same topic.[citation needed] Gray sheep refers to users whose opinions do not consistently agree or disagree with any group of people and who thus do not benefit from collaborative filtering. Black sheep are a group whose idiosyncratic tastes make recommendations nearly impossible. Although this is a failure of the recommender system, non-electronic recommenders also have great problems in these cases, so having black sheep is an acceptable failure.[disputed–discuss] In a recommendation system where everyone can give ratings, people may give many positive ratings for their own items and negative ratings for their competitors'. It is often necessary for collaborative filtering systems to introduce precautions to discourage such manipulations. Collaborative filters are expected to increase diversity because they help us discover new products. Some algorithms, however, may unintentionally do the opposite. 
Because collaborative filters recommend products based on past sales or ratings, they cannot usually recommend products with limited historical data. This can create a rich-get-richer effect for popular products, akin to positive feedback. This bias toward popularity can prevent what would otherwise be better consumer-product matches. A Wharton study details this phenomenon along with several ideas that may promote diversity and the "long tail."[21] Several collaborative filtering algorithms have been developed to promote diversity and the "long tail"[22] by recommending novel,[23] unexpected,[24] and serendipitous items.[25] The user-item matrix is the basic foundation of traditional collaborative filtering techniques, and it suffers from the data sparsity problem (i.e. cold start). As a consequence, beyond the user-item matrix, researchers are trying to gather more auxiliary information to help boost recommendation performance and develop personalized recommender systems.[28] Generally, there are two popular kinds of auxiliary information: attribute information and interaction information. Attribute information describes a user's or an item's properties. For example, user attributes might include a general profile (e.g. gender and age) and social contacts (e.g. followers or friends in social networks); item attributes mean properties like category, brand or content. In addition, interaction information refers to the implicit data showing how users interplay with the item. Widely used interaction information includes tags, comments or reviews, browsing history, etc. Auxiliary information plays a significant role in a variety of aspects. Explicit social links, as a reliable representation of trust or friendship, are always employed in similarity calculations to find similar persons who share interests with the target user.[29][30] The interaction-associated information – tags – is taken as a third dimension (in addition to user and item) in advanced collaborative filtering to construct a 3-dimensional tensor structure for exploration of recommendation.[31]
https://en.wikipedia.org/wiki/Collaborative_filtering
A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm, and sometimes called simply "the algorithm"),[1] is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user.[2][3][4] Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[2][5] Modern recommendation systems, such as those used on large social media sites, make extensive use of AI, machine learning and related techniques to learn the behavior and preferences of each user and tailor their feed accordingly.[6] Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[2] Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, content recommenders for social media platforms, and open web content recommenders.[7][8] These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts,[9] collaborators,[10] and financial services.[11] A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles[12] to television.[13] As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[12] Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[14] Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[15] The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio: Last.fm recommends songs by observing which bands and tracks the user listens to regularly and comparing them against the listening behavior of other users (collaborative filtering), while Pandora uses the properties of a song or artist to seed a "station" of music with similar properties, refining the station with user feedback (content-based filtering). Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. 
This is an example of thecold startproblem, and is common in collaborative filtering systems.[17][18][19][20][21][22]Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed). Recommender systems are a useful alternative tosearch algorithmssince they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data. Recommender systems have been the focus of several granted patents,[23][24][25][26][27]and there are more than 50 software libraries[28]that support the development of recommender systems including LensKit,[29][30]RecBole,[31]ReChorus[32]and RecPack.[33] Elaine Richcreated the first recommender system in 1979, called Grundy.[34][35]She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like. Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report byJussi Karlgrenat Columbia University,[36]and implemented at scale and worked through in technical reports and publications from 1994 onwards byJussi Karlgren, then atSICS,[37][38]and research groups led byPattie Maesat MIT,[39]Will Hill at Bellcore,[40]andPaul Resnick, also at MIT,[41][5]whose work with GroupLens was awarded the 2010ACM Software Systems Award. Montaner provided the first overview of recommender systems from an intelligent agent perspective.[42]Adomaviciusprovided a new, alternate overview of recommender systems.[43]Herlocker provides an additional overview of evaluation techniques for recommender systems,[44]andBeelet al. discussed the problems of offline evaluations.[45]Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[46][47] One approach to the design of recommender systems that has wide use iscollaborative filtering.[48]Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm,[49]while that of model-based approaches ismatrix factorization (recommender systems).[50] A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, thek-nearest neighbor(k-NN) approach[51]and thePearson Correlationas first implemented by Allen.[52] When building a model from a user's behavior, a distinction is often made between explicit andimplicitforms ofdata collection. 
Examples of explicit data collection include asking a user to rate an item on a sliding scale, to rank a collection of items from favorite to least favorite, or to choose the better of two items presented to them. Examples of implicit data collection include observing the items a user views, analyzing item/user viewing times, and keeping a record of the items a user purchases, listens to or watches. Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[54] One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[56] Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[2] Collaborative filtering is still used as part of hybrid systems. Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[57][58] These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features. In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. They do not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research. To create a user profile, the system mostly focuses on two types of information: a model of the user's preferences, and a history of the user's interactions with the recommender system. Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation).[59] The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vectors, while other, more sophisticated methods use machine learning techniques such as Bayesian classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.[60] A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system. 
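A minimal sketch of the tf–idf, profile-based approach just described, with hypothetical keyword descriptions: each item becomes a weighted term vector, the user profile is the average vector of liked items, and candidates are ranked by cosine similarity to the profile:

    import math
    from collections import Counter

    # Hypothetical item descriptions (keyword bags) and one liked item.
    items = {
        "m1": "space opera adventure sci-fi",
        "m2": "romantic comedy adventure",
        "m3": "sci-fi thriller space",
    }
    liked = ["m1"]

    docs = {i: text.split() for i, text in items.items()}
    N = len(docs)
    df = Counter(t for terms in docs.values() for t in set(terms))  # document frequency

    def tfidf(terms):
        tf = Counter(terms)
        return {t: tf[t] * math.log(N / df[t]) for t in tf}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # User profile: average tf-idf vector of liked items.
    profile = Counter()
    for i in liked:
        profile.update(tfidf(docs[i]))
    profile = {t: v / len(liked) for t, v in profile.items()}

    candidates = [i for i in items if i not in liked]
    # Ranks m3 (space/sci-fi, like m1) above m2.
    print(sorted(candidates, key=lambda i: cosine(profile, tfidf(docs[i])), reverse=True))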
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation of or sentiment toward the item. Features extracted from user-generated reviews serve as improved metadata for items: like curated metadata they reflect aspects of the item, but they capture the aspects that users actually care about. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.[61] Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[43] Several studies have empirically compared the performance of hybrid methods with the pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.[62] Netflix is a good example of the use of hybrid recommender systems.[63] The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering). Some hybridization techniques include weighted, switching, mixed, feature combination, feature augmentation, cascade, and meta-level hybrids. Session-based recommender systems use the interactions of a user within a session[65] to generate recommendations. Session-based recommender systems are used at YouTube[66] and Amazon.[67] These are particularly useful when the history (such as past clicks and purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks,[65][68] transformers,[69] and other deep-learning-based approaches.[70][71] The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system, acts in order to receive a reward, for instance, a click or engagement by the user.[66][72][73] One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. 
In contrast to traditional supervised learning techniques, which are less flexible, reinforcement learning approaches to recommendation potentially allow models to be trained and optimized directly on metrics of engagement and user interest.[74] Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for items u has not yet explored by exploiting preference information on the multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[75] See this chapter[76] for an extended introduction. The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early in the morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated this risk into the recommendation process. One option to manage this issue is DRARS, a system which models context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.[77] Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data recommender systems typically deal with: it is heterogeneous and noisy, exhibits spatial and temporal auto-correlation, and raises validation and generality problems.[78] Three factors can affect mobile recommender systems and the accuracy of their predictions: the context, the recommendation method, and privacy.[79] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where not all of the ingredients may be available). One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city.[78] This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits. Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[80] high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before.
By turning all of the system’s varied data into a single stream of tokens and using a customself-attentionapproach instead oftraditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previousTransformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations. One of the events that energized research in recommender systems was theNetflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[81] The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[82] Predictive accuracy is substantially improved when blending multiple predictors.Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique.Consequently, our solution is an ensemble of many methods. Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place foundedGravity R&D, a recommendation engine that's active in theRecSys community.[81][83]4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites. A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on theInternet Movie Database (IMDb).[84]As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and theVideo Privacy Protection Actby releasing the datasets.[85]This, as well as concerns from theFederal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[86] Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure theeffectivenessof recommender systems, and compare different approaches, three types ofevaluationsare available: user studies,online evaluations (A/B tests), and offline evaluations.[45] The commonly used metrics are themean squared errorandroot mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such asprecision and recallorDCGare useful to assess the quality of a recommendation method. 
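Since the paragraph above names root mean squared error (the Netflix Prize metric) and precision/recall as common evaluation measures, the following sketch shows how they are computed over a small set of held-out ratings and one ranked recommendation list. All numbers and item identifiers are invented for illustration.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error over paired ratings (the Netflix Prize metric)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def precision_recall_at_k(ranked, relevant, k):
    """Precision@k and recall@k for one ranked recommendation list."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k, hits / len(relevant)

# Illustrative, made-up data: held-out ratings and a ranked top-N list.
actual    = [4.0, 3.0, 5.0, 2.0]
predicted = [3.8, 3.4, 4.5, 2.5]
print(round(rmse(actual, predicted), 3))          # 0.418

ranked   = ["i7", "i3", "i9", "i1", "i4"]
relevant = {"i3", "i4", "i8"}
print(precision_recall_at_k(ranked, relevant, 3)) # (0.333..., 0.333...): only i3 is a hit in the top 3
```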
Diversity, novelty, and coverage are also considered important aspects in evaluation.[87] However, many of the classic evaluation measures are highly criticized.[88] Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging, as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm on offline data will be imprecise. User studies are rather small in scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and the users then judge which recommendations are best. In A/B tests, recommendations are typically shown to thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate. Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[89] The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many as possible of the articles that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[90][91][92][45] For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.[92][93] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[94] Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[95] This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[90][96] Researchers have concluded that the results of offline evaluations should be viewed critically.[97] Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of other factors that are also important. Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and found that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences.
The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[110][111][112] More recent work on benchmarking a set of the same methods came to qualitatively very different results,[113] whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges, such as WSDM[114] and the RecSys Challenge.[115] Moreover, neural and deep learning methods are widely used in industry, where they are extensively tested.[116][66][67] The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[117] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions."[118] As a consequence, much research about recommender systems can be considered not reproducible.[119] Hence, operators of recommender systems find little guidance in the current research when deciding which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used.[120] Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[119] "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research." Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[121] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods.[122] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing.[123] These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections introduce specific AI models utilized by recommendation systems, illustrating their theories and functionalities.[citation needed] Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions.[124] Specifically, it relies on external feedback such as star ratings, purchasing history and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C." There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is k-nearest neighbors: the k users most similar to the target user are identified via a similarity measure over their feedback, and the target user's preference for an item is predicted by aggregating those neighbors' ratings. An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[125] Similar to a human brain, these neurons change their activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network's learning phase. An ANN is usually designed to be a black-box model. Unlike regular machine learning, where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs. ANNs are widely used in recommendation systems for their power to utilize various data. Beyond feedback data, ANNs can incorporate non-feedback data that is too intricate for collaborative filtering to learn from, and their structure allows extra signal to be identified in such data to boost the user experience.[123] The Two-Tower model is a neural architecture[126] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[127] It consists of two neural networks: a user tower that encodes user features into an embedding, and an item tower that encodes item features into an embedding of the same dimensionality. The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as the dot product or cosine similarity, is used to measure relevance between a user and an item. This model is highly efficient for large datasets, as embeddings can be pre-computed for items, allowing rapid retrieval during inference (see the sketch below). It is often used in conjunction with ranking models for end-to-end recommendation pipelines. Natural language processing is a family of AI techniques that make natural human language accessible and analyzable to a machine.[128] It is a fairly modern field, driven by the growing amount of textual information. A common application in recommendation systems is the analysis of customer reviews, as on Amazon, where feedback comments from customers are analyzed and relevant information is reported to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), and latent Dirichlet allocation (LDA).
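The retrieval step of the two-tower architecture described above can be sketched with plain NumPy: item embeddings are precomputed by the item tower, and at query time a user embedding is scored against all of them with a dot product. The tiny random "towers" below are stand-ins for trained networks, and all shapes, feature sizes, and names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                               # shared embedding dimensionality
W_user = rng.normal(size=(5, DIM))    # stand-in for a trained user tower (5 raw user features)
W_item = rng.normal(size=(7, DIM))    # stand-in for a trained item tower (7 raw item features)

def user_tower(user_features):
    """Map raw user features to a unit-length embedding."""
    e = user_features @ W_user
    return e / np.linalg.norm(e)

def item_tower(item_features):
    """Map raw item features to unit-length embeddings (one row per item)."""
    e = item_features @ W_item
    return e / np.linalg.norm(e, axis=-1, keepdims=True)

# Offline: precompute embeddings for the whole catalogue (here, 1000 hypothetical items).
catalogue = rng.normal(size=(1000, 7))
item_embeddings = item_tower(catalogue)

# Online: embed one user and retrieve the top-k items by dot product
# (equivalent to cosine similarity here, since the vectors are normalized).
user = rng.normal(size=(5,))
scores = item_embeddings @ user_tower(user)
top_k = np.argsort(scores)[::-1][:10]
print(top_k)
```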
The uses of these text analysis models have consistently aimed to provide customers with more precise and tailored recommendations. An emerging market for content discovery platforms is academic content.[129][130] Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[12] Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alerts' for new publications based on keywords, journals or particular authors. Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input.[12] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[12] In contrast to the engagement-based ranking systems employed by social media and other digital platforms, bridging-based ranking optimizes for content that is unifying instead of polarizing.[131][132] Examples include Polis and Remesh, which have been used around the world to help find more consensus around specific political issues.[132] Twitter has also used this approach for managing its community notes,[133] which YouTube planned to pilot in 2024.[134][135] Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[136] As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[137] With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
https://en.wikipedia.org/wiki/Content-based_filtering
TheViterbi algorithmis adynamic programmingalgorithmfor obtaining themaximum a posteriori probability estimateof the mostlikelysequence of hidden states—called theViterbi path—that results in a sequence of observed events. This is done especially in the context ofMarkov information sourcesandhidden Markov models(HMM). The algorithm has found universal application in decoding theconvolutional codesused in bothCDMAandGSMdigital cellular,dial-upmodems, satellite, deep-space communications, and802.11wireless LANs. It is now also commonly used inspeech recognition,speech synthesis,diarization,[1]keyword spotting,computational linguistics, andbioinformatics. For example, inspeech-to-text(speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal. The Viterbi algorithm is named afterAndrew Viterbi, who proposed it in 1967 as a decoding algorithm forconvolutional codesover noisy digital communication links.[2]It has, however, a history ofmultiple invention, with at least seven independent discoveries, including those by Viterbi,Needleman and Wunsch, andWagner and Fischer.[3]It was introduced tonatural language processingas a method ofpart-of-speech taggingas early as 1987. Viterbi pathandViterbi algorithmhave become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.[3]For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse".[4][5][6]Another application is intarget tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.[7] Given a hidden Markov model with a set of hidden statesS{\displaystyle S}and a sequence ofT{\displaystyle T}observationso0,o1,…,oT−1{\displaystyle o_{0},o_{1},\dots ,o_{T-1}}, the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time stept{\displaystyle t}, the algorithm solves the subproblem where only the observations up toot{\displaystyle o_{t}}are considered. Two matrices of sizeT×|S|{\displaystyle T\times \left|{S}\right|}are constructed: Letπs{\displaystyle \pi _{s}}andar,s{\displaystyle a_{r,s}}be the initial and transition probabilities respectively, and letbs,o{\displaystyle b_{s,o}}be the probability of observingo{\displaystyle o}at states{\displaystyle s}. Then the values ofP{\displaystyle P}are given by the recurrence relation[8]Pt,s={πs⋅bs,otift=0,maxr∈S(Pt−1,r⋅ar,s⋅bs,ot)ift>0.{\displaystyle P_{t,s}={\begin{cases}\pi _{s}\cdot b_{s,o_{t}}&{\text{if }}t=0,\\\max _{r\in S}\left(P_{t-1,r}\cdot a_{r,s}\cdot b_{s,o_{t}}\right)&{\text{if }}t>0.\end{cases}}}The formula forQt,s{\displaystyle Q_{t,s}}is identical fort>0{\displaystyle t>0}, except thatmax{\displaystyle \max }is replaced witharg⁡max{\displaystyle \arg \max }, andQ0,s=0{\displaystyle Q_{0,s}=0}. The Viterbi path can be found by selecting the maximum ofP{\displaystyle P}at the final timestep, and followingQ{\displaystyle Q}in reverse. The time complexity of the algorithm isO(T×|S|2){\displaystyle O(T\times \left|{S}\right|^{2})}. 
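A direct implementation of the recurrence above is sketched below. Here P[t][s] stores the probability of the most likely state sequence ending in state s after the first t+1 observations, and Q[t][s] stores the arg max predecessor (the two matrices referred to above); the backpointers in Q are followed in reverse to recover the Viterbi path. This is an illustrative sketch using plain dictionaries rather than an optimized implementation.

```python
def viterbi(observations, states, init, trans, emit):
    """Return the most likely hidden-state sequence and its probability."""
    # P[t][s]: probability of the best path ending in state s at time t.
    # Q[t][s]: predecessor of s on that best path (backpointer).
    P = [{s: init[s] * emit[s][observations[0]] for s in states}]
    Q = [{s: None for s in states}]
    for obs in observations[1:]:
        P_t, Q_t = {}, {}
        for s in states:
            prev, prob = max(((r, P[-1][r] * trans[r][s] * emit[s][obs]) for r in states),
                             key=lambda pair: pair[1])
            P_t[s], Q_t[s] = prob, prev
        P.append(P_t)
        Q.append(Q_t)
    last = max(P[-1], key=P[-1].get)          # most probable final state
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(Q[t][path[-1]])           # follow backpointers
    return list(reversed(path)), P[-1][last]
```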
If it is known which state transitions have non-zero probability, an improved bound can be found by iterating over only those r{\displaystyle r} which link to s{\displaystyle s} in the inner loop. Then using amortized analysis one can show that the complexity is O(T×(|S|+|E|)){\displaystyle O(T\times (\left|{S}\right|+\left|{E}\right|))}, where E{\displaystyle E} is the number of edges in the graph, i.e. the number of non-zero entries in the transition matrix. A doctor wishes to determine whether patients are healthy or have a fever. The only information the doctor can obtain is by asking patients how they feel. The patients may report that they either feel normal, dizzy, or cold. It is believed that the health condition of the patients operates as a discrete Markov chain. There are two states, "healthy" and "fever", but the doctor cannot observe them directly; they are hidden from the doctor. On each day, the chance that a patient tells the doctor "I feel normal", "I feel cold", or "I feel dizzy" depends only on the patient's health condition on that day. The observations (normal, cold, dizzy) along with the hidden states (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as reconstructed in the sketch at the end of this example. In that sketch, init represents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be {'Healthy': 0.57, 'Fever': 0.43} according to the transition probabilities. The transition probabilities trans represent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilities emit represent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy. A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day. Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is 0.6×0.5=0.3{\displaystyle 0.6\times 0.5=0.3}. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is 0.4×0.1=0.04{\displaystyle 0.4\times 0.1=0.04}. The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting to be cold, following reporting being normal on the first day, is the maximum of 0.3×0.7×0.4=0.084{\displaystyle 0.3\times 0.7\times 0.4=0.084} and 0.04×0.4×0.4=0.0064{\displaystyle 0.04\times 0.4\times 0.4=0.0064}. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering. The remaining best-path probabilities follow the same pattern: day 1 – healthy 0.3, fever 0.04; day 2 – healthy 0.084, fever 0.027; day 3 – healthy 0.00588, fever 0.01512. From these values, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512.
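The probability tables referred to in the example above (init, trans, emit) can be read off directly from the numbers quoted in it. Running them through the viterbi function sketched earlier reproduces the values 0.3, 0.084 and the final 0.01512, together with the most likely state sequence discussed next.

```python
states = ("Healthy", "Fever")
obs = ("normal", "cold", "dizzy")          # the three reported feelings

init = {"Healthy": 0.6, "Fever": 0.4}      # doctor's initial belief
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
         "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

path, prob = viterbi(obs, states, init, trans, emit)
print(path, prob)   # ['Healthy', 'Healthy', 'Fever'], probability ~0.01512
```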
This sequence is precisely (healthy, healthy, fever), which can be found by tracing back which states were used when calculating the maxima (here this happens to coincide with the best single-day guess on each day, but it will not always do so). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day. The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path is essentially the shortest path through this trellis. A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm), can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to a hidden Markov model (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm). With an algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm was proposed by Qi Wang et al. to deal with turbo codes.[9] Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence. An alternative algorithm, the Lazy Viterbi algorithm, has been proposed.[10] For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using the Lazy Viterbi algorithm) is much faster than the original Viterbi decoder (using the Viterbi algorithm). While the original Viterbi algorithm calculates every node in the trellis of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than for the ordinary Viterbi algorithm for the same result. However, it is not so easy[clarification needed] to parallelize in hardware. The soft output Viterbi algorithm (SOVA) is a variant of the classical Viterbi algorithm. SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account the a priori probabilities of the input symbols, and produces a soft output indicating the reliability of the decision. The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant, t. Since each node has 2 branches converging at it (with one branch being chosen to form the survivor path, and the other being discarded), the difference in the branch metrics (or cost) between the chosen and discarded branches indicates the amount of error in the choice. This cost is accumulated over the entire sliding window (usually equal to at least five constraint lengths) to indicate the soft output measure of reliability of the hard bit decision of the Viterbi algorithm.
https://en.wikipedia.org/wiki/Viterbi_algorithm
Information retrieval(IR) incomputingandinformation scienceis the task of identifying and retrievinginformation systemresources that are relevant to aninformation need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based onfull-textor other content-based indexing. Information retrieval is thescience[1]of searching for information in a document, searching for documents themselves, and also searching for themetadatathat describes data, and fordatabasesof texts, images or sounds. Automated information retrieval systems are used to reduce what has been calledinformation overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents.Web search enginesare the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees ofrelevance. An object is an entity that is represented by information in a content collection ordatabase. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. Thisrankingof results is a key difference of information retrieval searching compared to database searching.[2] Depending on theapplicationthe data objects may be, for example, text documents, images,[3]audio,[4]mind maps[5]or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates ormetadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6] there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute The idea of using computers to search for relevant pieces of information was popularized in the articleAs We May ThinkbyVannevar Bushin 1945.[7]It would appear that Bush was inspired by patents for a 'statistical machine' – filed byEmanuel Goldbergin the 1920s and 1930s – that searched for documents stored on film.[8]The first description of a computer searching for information was described by Holmstrom in 1948,[9]detailing an early mention of theUnivaccomputer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedyDesk Set. In the 1960s, the first large information retrieval research group was formed byGerard Saltonat Cornell. By the 1970s several different retrieval techniques had been shown to perform well on smalltext corporasuch as the Cranfield collection (several thousand documents).[7]Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. 
In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text REtrieval Conference (TREC) as part of the TIPSTER text program. The aim of this was to support the information retrieval community by supplying the infrastructure that was needed for the evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further. By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm,[10] using the web's hyperlink structure to assess page importance and improve relevance ranking. During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses[11] have highlighted Bing's semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing. A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12] BERT's bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13] Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as Microsoft MARCO (MAchine Reading COmprehension, 2019)[14] became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15] As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models.
Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16] Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17] Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models helps balance scalability, relevance, and efficiency in retrieval systems.[18] As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come into the picture. Research is now focused not just on relevance and efficiency, but also on transparency, accountability, and user trust in retrieval algorithms. Information retrieval techniques are employed in a wide range of application areas and methods. In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. Common retrieval models can be categorized according to two dimensions: their mathematical basis and the properties of the model. In addition to these theoretical distinctions, modern information retrieval models are also categorized by how queries and documents are represented and compared, using a practical classification that distinguishes between sparse, dense and hybrid models.[19] This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for information retrieval models.[23][20] The evaluation of an information retrieval system is the process of assessing how well the system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval[clarification needed] or top-k retrieval, include precision and recall. All measures assume a ground truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
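The inverted indexes that sparse retrieval models rely on, as mentioned above, can be illustrated in a few lines: each term maps to a postings list of documents containing it, and a query is answered by scoring documents on how many query terms they match. This is a bare-bones sketch with an invented toy corpus and no stemming, weighting, or index compression.

```python
from collections import defaultdict, Counter

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it (a postings list)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Rank documents by the number of distinct query terms they match."""
    scores = Counter()
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return scores.most_common()

docs = {
    "d1": "information retrieval systems rank documents",
    "d2": "dense retrieval uses vector embeddings",
    "d3": "inverted index enables exact term matching",
}
index = build_inverted_index(docs)
print(search(index, "inverted index retrieval"))
# e.g. [('d3', 2), ('d1', 1), ('d2', 1)] -- d3 matches two of the query terms
```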
https://en.wikipedia.org/wiki/Information_retrieval#Inverted_index
Arecommender system (RecSys), or arecommendation system(sometimes replacingsystemwith terms such asplatform,engine, oralgorithm), sometimes only called "the algorithm" or "algorithm"[1]is a subclass ofinformation filtering systemthat provides suggestions for items that are most pertinent to a particular user.[2][3][4]Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[2][5]Modern recommendation systems such as those used on large social media sites, make extensive use of AI, machine learning and related techniques to learn the behavior and preferences of each user, and tailor their feed accordingly.[6] Typically, the suggestions refer to variousdecision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[2]Recommender systems are used in a variety of areas, with commonly recognised examples taking the form ofplaylistgenerators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders.[7][8]These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants andonline dating. Recommender systems have also been developed to explore research articles and experts,[9]collaborators,[10]and financial services.[11] Acontent discovery platformis an implementedsoftwarerecommendationplatformwhich uses recommender system tools. It utilizes usermetadatain order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content towebsites,mobile devicesandset-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles andacademic journalarticles[12]to television.[13]As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[12] Recommender systems usually make use of either or bothcollaborative filteringand content-based filtering, as well as other systems such asknowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[14]Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[15] The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems,Last.fmandPandora Radio. Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. 
This is an example of thecold startproblem, and is common in collaborative filtering systems.[17][18][19][20][21][22]Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed). Recommender systems are a useful alternative tosearch algorithmssince they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data. Recommender systems have been the focus of several granted patents,[23][24][25][26][27]and there are more than 50 software libraries[28]that support the development of recommender systems including LensKit,[29][30]RecBole,[31]ReChorus[32]and RecPack.[33] Elaine Richcreated the first recommender system in 1979, called Grundy.[34][35]She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like. Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report byJussi Karlgrenat Columbia University,[36]and implemented at scale and worked through in technical reports and publications from 1994 onwards byJussi Karlgren, then atSICS,[37][38]and research groups led byPattie Maesat MIT,[39]Will Hill at Bellcore,[40]andPaul Resnick, also at MIT,[41][5]whose work with GroupLens was awarded the 2010ACM Software Systems Award. Montaner provided the first overview of recommender systems from an intelligent agent perspective.[42]Adomaviciusprovided a new, alternate overview of recommender systems.[43]Herlocker provides an additional overview of evaluation techniques for recommender systems,[44]andBeelet al. discussed the problems of offline evaluations.[45]Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[46][47] One approach to the design of recommender systems that has wide use iscollaborative filtering.[48]Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm,[49]while that of model-based approaches ismatrix factorization (recommender systems).[50] A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, thek-nearest neighbor(k-NN) approach[51]and thePearson Correlationas first implemented by Allen.[52] When building a model from a user's behavior, a distinction is often made between explicit andimplicitforms ofdata collection. 
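To illustrate the user-similarity measures mentioned above (k-nearest-neighbour neighbourhoods and the Pearson correlation), the sketch below computes Pearson similarity between users over their co-rated items and predicts a rating from the most similar positively correlated neighbours. The tiny rating dictionary and the choice of k are invented for illustration and are not a reference implementation of any cited system.

```python
from math import sqrt

def pearson(ratings, u, v):
    """Pearson correlation between two users over their co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings[u][i] for i in common) / len(common)
    mu_v = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu_u) * (ratings[v][i] - mu_v) for i in common)
    den = sqrt(sum((ratings[u][i] - mu_u) ** 2 for i in common)) * \
          sqrt(sum((ratings[v][i] - mu_v) ** 2 for i in common))
    return num / den if den else 0.0

def predict(ratings, user, item, k=2):
    """Predict user's rating of item from the k most similar users who rated it."""
    sims = [(pearson(ratings, user, v), v)
            for v in ratings if v != user and item in ratings[v]]
    top = [(s, v) for s, v in sorted(sims, reverse=True)[:k] if s > 0]
    if not top:
        return None
    return sum(s * ratings[v][item] for s, v in top) / sum(s for s, _ in top)

ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5, "m4": 4},
    "carol": {"m1": 2, "m2": 5, "m4": 1},
}
print(predict(ratings, "alice", "m4"))  # 4.0, driven by bob, the most similar user
```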
Examples of explicit data collection include the following: Examples ofimplicit data collectioninclude the following: Collaborative filtering approaches often suffer from three problems:cold start, scalability, and sparsity.[54] One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized byAmazon.com's recommender system.[56] Manysocial networksoriginally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[2]Collaborative filtering is still used as part of hybrid systems. Another common approach when designing recommender systems iscontent-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[57][58]These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features. In this system, keywords are used to describe the items, and auser profileis built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots ininformation retrievalandinformation filteringresearch. To create auser profile, the system mostly focuses on two types of information: Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is thetf–idfrepresentation (also called vector space representation).[59]The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such asBayesian Classifiers,cluster analysis,decision trees, andartificial neural networksin order to estimate the probability that the user is going to like the item.[60] A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system. 
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both feature/aspects of the item and users' evaluation/sentiment to the item. Features extracted from the user-generated reviews are improvedmetadataof items, because as they also reflect aspects of the item like metadata, extracted features are widely concerned by the users. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender system utilize various techniques includingtext mining,information retrieval,sentiment analysis(see alsoMultimodal sentiment analysis) anddeep learning.[61] Most recommender systems now use a hybrid approach, combiningcollaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[43]Several studies that empirically compared the performance of the hybrid with the pure collaborative and content-based methods and demonstrated that the hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck inknowledge-basedapproaches.[62] Netflixis a good example of the use of hybrid recommender systems.[63]The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering). Some hybridization techniques include: These recommender systems use the interactions of a user within a session[65]to generate recommendations. Session-based recommender systems are used at YouTube[66]and Amazon.[67]These are particularly useful when history (such as past clicks, purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such asrecurrent neural networks,[65][68]transformers,[69]and other deep-learning-based approaches.[70][71] The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system acts upon in order to receive a reward, for instance, a click or engagement by the user.[66][72][73]One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. 
This is in contrast to traditional learning techniques which rely on supervised learning approaches that are less flexible, reinforcement learning recommendation techniques allow to potentially train models that can be optimized directly on metrics of engagement, and user interest.[74] Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[75]See this chapter[76]for an extended introduction. The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue isDRARS, a system which models the context-aware recommendation as abandit problem. This system combines a content-based technique and a contextual bandit algorithm.[77] Mobile recommender systems make use of internet-accessingsmartphonesto offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than data that recommender systems often have to deal with. It is heterogeneous, noisy, requires spatial and temporal auto-correlation, and has validation and generality problems.[78] There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy.[79]Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available). One example of a mobile recommender system are the approaches taken by companies such asUberandLyftto generate driving routes for taxi drivers in a city.[78]This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits. Generative recommenders (GR) represent an approach that transforms recommendation tasks intosequential transductionproblems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[80]high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. 
By turning all of the system’s varied data into a single stream of tokens and using a customself-attentionapproach instead oftraditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previousTransformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations. One of the events that energized research in recommender systems was theNetflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[81] The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[82] Predictive accuracy is substantially improved when blending multiple predictors.Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique.Consequently, our solution is an ensemble of many methods. Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place foundedGravity R&D, a recommendation engine that's active in theRecSys community.[81][83]4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites. A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on theInternet Movie Database (IMDb).[84]As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and theVideo Privacy Protection Actby releasing the datasets.[85]This, as well as concerns from theFederal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[86] Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure theeffectivenessof recommender systems, and compare different approaches, three types ofevaluationsare available: user studies,online evaluations (A/B tests), and offline evaluations.[45] The commonly used metrics are themean squared errorandroot mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such asprecision and recallorDCGare useful to assess the quality of a recommendation method. 
Diversity, novelty, and coverage are also considered important aspects in evaluation.[87] However, many of the classic evaluation measures are highly criticized.[88] Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging, as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm on offline data will be imprecise. User studies are rather small in scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best. In A/B tests, recommendations are typically shown to thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate. Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[89] The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many as possible of the articles contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[90][91][92][45] For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.[92][93] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[94] Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[95] This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[90][96] Researchers have concluded that the results of offline evaluations should be viewed critically.[97] Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of other factors that are also important. Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper that surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences.
The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[110][111][112] More recent work on benchmarking a set of the same methods came to qualitatively very different results,[113] whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges, including WSDM[114] and the RecSys Challenge.[115] Moreover, neural and deep learning methods are widely used in industry, where they are extensively tested.[116][66][67] The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[117] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions."[118] As a consequence, much research about recommender systems can be considered not reproducible.[119] Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used.[120] Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[119] "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research." Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. An AI-based recommender can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[121] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods.[122] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing.[123] These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections introduce specific AI models utilized by recommendation systems by illustrating their theories and functionalities.[citation needed] Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions.[124] Specifically, it relies on external feedback such as star ratings, purchasing history and so on to make judgments. CF makes predictions about a user's preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C." There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors. The idea is as follows: identify the k users most similar to the target user according to some similarity measure, and aggregate their ratings, typically weighted by similarity, to predict the target user's preference for an unseen item (a minimal sketch of this idea is given below). An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[125] Similar to a human brain, these neurons will change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually designed to be a black-box model. Unlike regular machine learning where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs. ANNs are widely used in recommendation systems for their power to utilize various data. Other than feedback data, ANNs can incorporate non-feedback data which are too intricate for collaborative filtering to learn, and this structure allows ANNs to identify extra signal from non-feedback data to boost user experience.[123] Some examples follow. The Two-Tower model is a neural architecture[126] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[127] It consists of two neural networks: one that encodes user (query) features and one that encodes item (candidate) features. The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item. This model is highly efficient for large datasets, as embeddings can be pre-computed for items, allowing rapid retrieval during inference (see the second sketch below). It is often used in conjunction with ranking models for end-to-end recommendation pipelines. Natural language processing is a series of AI algorithms that make natural human language accessible and analyzable to a machine.[128] It is a fairly modern technique inspired by the growing amount of textual information. For applications in recommendation systems, a common case is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc.
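The first sketch below illustrates the K-nearest-neighbors idea for collaborative filtering, using cosine similarity between user rating vectors. The ratings matrix, the value of k, and the helper names are illustrative assumptions, not data or code from any real system; a zero stands in for "not rated".

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated" (illustrative data only).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def predict(user, item, k=2):
    """Predict a rating as the similarity-weighted average of the k most similar users who rated the item."""
    sims = [(cosine(ratings[user], ratings[other]), other)
            for other in range(len(ratings))
            if other != user and ratings[other, item] > 0]
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return 0.0
    num = sum(s * ratings[o, item] for s, o in top)
    den = sum(abs(s) for s, _ in top)
    return num / den

print(round(predict(user=0, item=2), 2))  # user 0's predicted rating for item 2
```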
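The second sketch illustrates the two-tower retrieval pattern: two small encoders map user and item features into a shared embedding space, item embeddings are pre-computed, and relevance is scored with a dot product. The random weights, feature dimensions, and catalog size are placeholders; a real system would train both towers on interaction data rather than use untrained linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained "towers": a single linear layer each, standing in for deeper networks.
user_tower = rng.normal(size=(8, 16))   # maps 8-dim user features -> 16-dim embedding
item_tower = rng.normal(size=(12, 16))  # maps 12-dim item features -> 16-dim embedding

def embed(features, tower):
    e = features @ tower
    return e / (np.linalg.norm(e) + 1e-12)  # normalize so dot product acts like cosine similarity

# Pre-compute embeddings for a small item catalog (placeholder features).
item_features = rng.normal(size=(1000, 12))
item_embeddings = np.stack([embed(f, item_tower) for f in item_features])

# At serving time, embed the user and retrieve the top-scoring items by dot product.
user_embedding = embed(rng.normal(size=8), user_tower)
scores = item_embeddings @ user_embedding
top_candidates = np.argsort(-scores)[:10]
print(top_candidates)
```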
These text analysis models have consistently been used to provide customers with more precise and tailored recommendations. An emerging market for content discovery platforms is academic content.[129][130] Approximately 6000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[12] Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors. Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input.[12] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[12] In contrast to the engagement-based ranking systems employed by social media and other digital platforms, bridging-based ranking optimizes for content that is unifying instead of polarizing.[131][132] Examples include Polis and Remesh, which have been used around the world to help find more consensus around specific political issues.[132] Twitter has also used this approach for managing its community notes,[133] which YouTube planned to pilot in 2024.[134][135] Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[136] As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[137] With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
https://en.wikipedia.org/wiki/Recommender_system
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For example, when solving a system of linear equations Ax=b{\displaystyle A\mathbf {x} =\mathbf {b} }, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux)=b{\displaystyle L(U\mathbf {x} )=\mathbf {b} } and Ux=L−1b{\displaystyle U\mathbf {x} =L^{-1}\mathbf {b} } require fewer additions and multiplications to solve, compared with the original system Ax=b{\displaystyle A\mathbf {x} =\mathbf {b} }, though one might require significantly more digits in inexact arithmetic such as floating point. Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Q^T b = c, and the system Rx = c is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable. The Jordan normal form and the Jordan–Chevalley decomposition are examples of decompositions based on eigenvalues and related concepts. Scale-invariant decompositions refer to variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling. Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4] There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices or continuous matrices.[13] A 'quasimatrix' is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a 'cmatrix' is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator. These factorizations are based on early work by Fredholm (1903), Hilbert (1904) and Schmidt (1907). For an account, and a translation to English of the seminal papers, see Stewart (2011).
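To make the LU- and QR-based solution procedures described above concrete, the following sketch solves the same random system Ax = b both ways and checks the results against a direct solve. It is a small illustration assuming NumPy and SciPy are available; the matrix and right-hand side are arbitrary random data.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_triangular

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
b = rng.normal(size=5)

# LU route: factor A = P L U once, then solve the two triangular systems.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# QR route: A = Q R, so R x = Q^T b is solved by back substitution.
Q, R = np.linalg.qr(A)
x_qr = solve_triangular(R, Q.T @ b, lower=False)

x_direct = np.linalg.solve(A, b)
print(np.allclose(x_lu, x_direct), np.allclose(x_qr, x_direct))  # both True
```

Factoring once and reusing the factors is what makes these decompositions attractive when the same matrix must be solved against many right-hand sides.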
https://en.wikipedia.org/wiki/Matrix_factorization
RDF Schema (Resource Description Framework Schema, variously abbreviated as RDFS, RDF(S), RDF-S, or RDF/S) is a set of classes with certain properties using the RDF extensible knowledge representation data model, providing basic elements for the description of ontologies. It uses various forms of RDF vocabularies, intended to structure RDF resources. RDF and RDFS can be saved in a triplestore, and one can then extract some knowledge from them using a query language, like SPARQL. The first version[1][4] was published by the World-Wide Web Consortium (W3C) in April 1998, and the final W3C recommendation was released in February 2014.[3] Many RDFS components are included in the more expressive Web Ontology Language (OWL). RDFS constructs are the RDFS classes, associated properties and utility properties built on the vocabulary of RDF.[5][6][7] A typical example of an rdfs:Class is foaf:Person in the Friend of a Friend (FOAF) vocabulary.[8] An instance of foaf:Person is a resource that is linked to the class foaf:Person using the rdf:type property, such as in the following formal expression of the natural-language sentence 'John is a Person': ex:John rdf:type foaf:Person. The definition of rdfs:Class is recursive: rdfs:Class is the class of classes, and so it is an instance of itself. The RDF and RDFS specifications describe a number of other classes as well. Properties are instances of the class rdf:Property and describe a relation between subject resources and object resources. When used as such, a property is a predicate (see also RDF: reification). For example, the declarations ex:employer rdfs:domain foaf:Person and ex:employer rdfs:range foaf:Organization express that the property ex:employer relates a subject, which is of type foaf:Person, to an object, which is of type foaf:Organization. Given the previous two declarations, from the triple ex:John ex:employer ex:CompanyX it can be inferred (resp. follows) that ex:John is a foaf:Person, and ex:CompanyX is a foaf:Organization. Class hierarchies are expressed with rdfs:subClassOf; for example, the declaration foaf:Person rdfs:subClassOf foaf:Agent states that 'Every Person is an Agent'. Hierarchies of classes support inheritance of a property domain and range (see definitions in the next section) from a class to its subclasses. An entailment regime defines whether the triples in a graph are logically contradictory or not. RDFS entailment[11] is not very restrictive, i.e. it does not contain a large number of rules (compared, for example, to OWL) limiting what kind of statements are valid in the graph. On the other hand, it is also not very expressive, meaning that the semantics that can be represented in a machine-interpretable way with the graph are quite limited. Below is a simple example of the capabilities and limits of RDFS entailment. We start with a graph containing explicit triples such as bar:livesInZoo rdfs:domain bar:Animal, foo:SomeElephant rdf:type bar:Elephant, and foo:SomeElephant bar:livesInZoo foo:SomeZoo. Without enabling inferencing with RDFS entailment, the data we have does not tell us whether foo:SomeElephant is a bar:Animal. When we do RDFS-based inferencing, we will get the extra triple foo:SomeElephant rdf:type bar:Animal. The rdfs:domain statement dictates that any subject in triples where bar:livesInZoo is the predicate is of type bar:Animal. What RDFS entailment is not able to tell us is the relationship between bar:Animal and bar:Elephant. Due to inferencing we now know that foo:SomeElephant is both a bar:Animal and a bar:Elephant, so these classes do intersect, but there is no information to deduce whether they merely intersect, are equal or have a subclass relationship. In RDFS 1.1, the domain and range statements do not carry any formal meaning and their interpretation is left up to the implementer. On the other hand, in the 1.2 Working Draft they are used as entailment rules for inferring the types of individuals.
Nevertheless, in both versions it is very clearly stated that the expected functionality of range is that "the values of a property are instances of one or more classes", and of domain that "any resource that has a given property is an instance of one or more classes". The example above demonstrated some of the limits and capabilities of RDFS entailment, but did not show an example of a logical inconsistency (which could in layman's terms be interpreted as a "validation error"), meaning that the statements the triples make are in conflict and try to express contradictory states of affairs. An example of this in RDFS would be having conflicting datatypes for objects (e.g. a resource declared to be of type xsd:integer that is also declared to be of type xsd:boolean, when inferencing is enabled). Several widely used RDF vocabularies, such as FOAF, are themselves represented in RDFS.[10]
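The elephant example above can be reproduced in a few lines of Python. The sketch below assumes the rdflib package and the hypothetical foo:/bar: namespaces (and zoo resource name) from the text, and applies the rdfs:domain entailment rule by hand rather than relying on a full reasoner.

```python
from rdflib import Graph, Namespace, RDF, RDFS

FOO = Namespace("http://example.org/foo#")   # hypothetical namespaces, as in the text
BAR = Namespace("http://example.org/bar#")

g = Graph()
# Explicit triples: the domain declaration, the elephant's type, and its zoo residence.
g.add((BAR.livesInZoo, RDFS.domain, BAR.Animal))
g.add((FOO.SomeElephant, RDF.type, BAR.Elephant))
g.add((FOO.SomeElephant, BAR.livesInZoo, FOO.SomeZoo))

# One RDFS entailment rule: if p rdfs:domain C and s p o, then s rdf:type C.
inferred = set()
for p, c in g.subject_objects(RDFS.domain):
    for s, o in g.subject_objects(p):
        inferred.add((s, RDF.type, c))
for triple in inferred:
    g.add(triple)

print((FOO.SomeElephant, RDF.type, BAR.Animal) in g)    # True: inferred via rdfs:domain
print((FOO.SomeElephant, RDF.type, BAR.Elephant) in g)  # True: stated explicitly
```

As the article notes, nothing in this graph allows a subclass relationship between bar:Elephant and bar:Animal to be deduced; the inference only adds the type of the individual.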
https://en.wikipedia.org/wiki/RDF_Schema#Classes_and_properties
In computer terminology, ahoneypotis acomputer securitymechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use ofinformation systems. Generally, a honeypot consists ofdata(for example, in a network site) that appears to be a legitimate part of the site which contains information or resources of value to attackers. It is actually isolated, monitored, and capable of blocking or analyzing the attackers. This is similar to policesting operations, colloquially known as "baiting" a suspect.[1] The main use for this network decoy is to distract potential attackers from more important information and machines on the real network, learn about the forms of attacks they can suffer, and examine such attacks during and after the exploitation of a honeypot. It provides a way to prevent and see vulnerabilities in a specific network system. A honeypot is a decoy used to protect a network from present or future attacks.[2][3]Honeypots derive their value from the use by attackers. If not interacted with, the honeypot has little to no value. Honeypots can be used for everything from slowing down or stopping automated attacks, capturing new exploits, to gathering intelligence on emerging threats or early warning and prediction.[4] Honeypots can be differentiated based on whether they are physical or virtual:[2][3] Honeypots can be classified based on their deployment (use/action) and based on their level of involvement. Based on deployment, honeypots may be classified as:[5] Production honeypotsare easy to use, capture only limited information, and are used primarily by corporations. Production honeypots are placed inside the production network with other production servers by an organization to improve their overall state of security. Normally, production honeypots are low-interaction honeypots, which are easier to deploy. They give less information about the attacks or attackers than research honeypots.[5] Research honeypotsare run to gather information about the motives and tactics of theblack hatcommunity targeting different networks. These honeypots do not add direct value to a specific organization; instead, they are used to research the threats that organizations face and to learn how to better protect against those threats.[6]Research honeypots are complex to deploy and maintain, capture extensive information, and are used primarily by research, military, or government organizations.[7] Based on design criteria, honeypots can be classified as:[5] Pure honeypotsare full-fledged production systems. The activities of the attacker are monitored by using a bug tap that has been installed on the honeypot's link to the network. No other software needs to be installed. Even though a pure honeypot is useful, the stealthiness of the defense mechanisms can be ensured by a more controlled mechanism. High-interaction honeypotsimitate the activities of the production systems that host a variety of services and, therefore, an attacker may be allowed a lot of services to waste their time. By employingvirtual machines, multiple honeypots can be hosted on a single physical machine. Therefore, even if the honeypot is compromised, it can be restored more quickly. In general, high-interaction honeypots provide more security by being difficult to detect, but they are expensive to maintain. If virtual machines are not available, one physical computer must be maintained for each honeypot, which can be exorbitantly expensive. Example:Honeynet. 
Low-interaction honeypotssimulate only the services frequently requested by attackers.[8]Since they consume relatively few resources, multiple virtual machines can easily be hosted on one physical system, the virtual systems have a short response time, and less code is required, reducing the complexity of the virtual system's security. Example:Honeyd. This type of honeypot was one of the first types being created in the late nineties and was mainly used for detecting attacks, not studying them.[9] Sugarcaneis a type of honeypot that masquerades as an open proxy.[10]It can often take form as a server designed to look like a misconfigured HTTP proxy.[11]Probably the most famous open proxy was the default configuration ofsendmail(before version 8.9.0 in 1998) which would forward email to and from any destination.[12] Recently, a new market segment calleddeception technologyhas emerged using basic honeypot technology with the addition of advanced automation for scale. Deception technology addresses the automated deployment of honeypot resources over a large commercial enterprise or government institution.[13] A malware honeypot is a decoy designed to intentionally attract malicious software. It does this by imitating a vulnerable system or network, such as a web server. The honeypot is intentionally set up with security flaws that look to invite these malware attacks. Once attacked IT teams can then analyze the malware to better understand where it comes from and how it acts.[14] Spammersabuse vulnerable resources such asopen mail relaysandopen proxies. These are servers that accept e-mail from anyone on the Internet—including spammers—and send it to its destination. Some system administrators have created honeypot programs that masquerade as these abusable resources to discover spammer activity. There are several capabilities such honeypots provide to these administrators, and the existence of such fake abusable systems makes abuse more difficult or risky. Honeypots can be a powerful countermeasure to abuse from those who rely on very high-volume abuse (e.g., spammers). These honeypots can reveal the abuser'sIP addressand provide bulk spam capture (which enables operators to determine spammers'URLsand response mechanisms). As described by M. Edwards at ITPRo Today: Typically, spammers test a mail server for open relaying by simply sending themselves an email message. If the spammer receives the email message, the mail server obviously allows open relaying. Honeypot operators, however, can use the relay test to thwart spammers. The honeypot catches the relay test email message, returns the test email message, and subsequently blocks all other email messages from that spammer. Spammers continue to use the antispam honeypot for spamming, but the spam is never delivered. Meanwhile, the honeypot operator can notify spammers' ISPs and have their Internet accounts canceled. If honeypot operators detect spammers who use open-proxy servers, they can also notify the proxy server operator to lock down the server to prevent further misuse.[15] The apparent source may be another abused system. Spammers and other abusers may use a chain of such abused systems to make detection of the original starting point of the abuse traffic difficult. This in itself is indicative of the power of honeypots asanti-spamtools. In the early days of anti-spam honeypots, spammers, with little concern for hiding their location, felt safe testing for vulnerabilities and sending spam directly from their own systems. 
Honeypots made the abuse riskier and more difficult. Spam still flows through open relays, but the volume is much smaller than in 2001-02. While most spam originates in the U.S.,[16]spammers hop through open relays across political boundaries to mask their origin. Honeypot operators may use intercepted relay tests to recognize and thwart attempts to relay spam through their honeypots. "Thwart" may mean "accept the relay spam but decline to deliver it." Honeypot operators may discover other details concerning the spam and the spammer by examining the captured spam messages. Open-relay honeypots include Jackpot, written inJavaby Jack Cleaver;smtpot.py, written inPythonby Karl A. Krueger;[17]and spamhole, written inC.[18]TheBubblegum Proxypotis an open-source honeypot (or "proxypot").[19] An email address that is not used for any other purpose than to receive spam can also be considered a spam honeypot. Compared with the term "spamtrap", the term "honeypot" might be more suitable for systems and techniques that are used to detect or counterattack probes. With a spamtrap, spam arrives at its destination "legitimately"—exactly as non-spam email would arrive. An amalgam of these techniques isProject Honey Pot, a distributed, open-source project that uses honeypot pages installed on websites around the world. These honeypot pages disseminate uniquely tagged spamtrap email addresses andspammerscan then be tracked—the corresponding spam mail is subsequently sent to these spamtrap e-mail addresses.[20] Databases often get attacked by intruders usingSQL injection. As such activities are not recognized by basic firewalls, companies often use database firewalls for protection. Some of the availableSQL databasefirewalls provide/support honeypot architectures so that the intruder runs against a trap database while the web application remains functional.[21] Industrial Control Systems(ICS) are often the target of cyberattacks.[22]One of the main targets within ICS areProgrammable Logic Controllers.[23]In order to understand intruders' techniques in this context, several honeypots have been proposed. Conpot[24][25]is a low interaction honeypot capable of simulation Siemens PLCs. HoneyPLC is a medium interaction honeypot that can simulate Siemens, Rockwell and other PLC brands.[26][27] Just as honeypots are weapons against spammers, honeypot detection systems are spammer-employed counter-weapons. As detection systems would likely use unique characteristics of specific honeypots to identify them, such as the property-value pairs of default honeypot configuration,[28]many honeypots in use utilise a set of unique characteristics larger and more daunting to those seeking to detect and thereby identify them. This is an unusual circumstance in software; a situation in which"versionitis"(a large number of versions of the same software, all differing slightly from each other) can be beneficial. There's also an advantage in having some easy-to-detect honeypots deployed.Fred Cohen, the inventor of theDeception Toolkit, argues that every system running his honeypot should have a deception port which adversaries can use to detect the honeypot.[29]Cohen believes that this might deter adversaries. Honeypots also allow for early detection of legitimate threats. 
No matter how the honeypot detects the exploit, it can alert the operator immediately to the attempted attack.[30] The goal of honeypots is to attract and engage attackers for a sufficiently long period to obtain high-level Indicators of Compromise (IoC) such as attack tools and Tactics, Techniques, and Procedures (TTPs). Thus, a honeypot needs to emulate essential services in the production network and grant the attacker the freedom to perform adversarial activities, to increase its attractiveness to the attacker. Although the honeypot is a controlled environment and can be monitored by using tools such as honeywall,[31] attackers may still be able to use some honeypots as pivot nodes to penetrate production systems.[32] The second risk of honeypots is that they may attract legitimate users due to a lack of communication in large-scale enterprise networks. For example, the security team who applies and monitors the honeypot may not disclose the honeypot location to all users in time, due to the lack of communication or the prevention of insider threats.[33][34] "A 'honey net' is a network of high interaction honeypots that simulates a production network and configured such that all activity is monitored, recorded and in a degree, discreetly regulated." Two or more honeypots on a network form a honey net. Typically, a honey net is used for monitoring a larger and/or more diverse network in which one honeypot may not be sufficient. Honey nets and honeypots are usually implemented as parts of larger network intrusion detection systems. A honey farm is a centralized collection of honeypots and analysis tools.[35] The concept of the honey net first emerged in 1999 when Lance Spitzner, founder of the Honeynet Project, published the paper "To Build a Honeypot".[36] An early formulation of the concept, called "entrapment", is defined in FIPS 39 (1976) as "the deliberate planting of apparent flaws in a system for the purpose of detecting attempted penetrations or confusing an intruder about which flaws to exploit".[37] The earliest honeypot techniques are described in Clifford Stoll's 1989 book The Cuckoo's Egg. One of the earliest documented cases of the cybersecurity use of a honeypot began in January 1991. On January 7, 1991, while working at AT&T Bell Laboratories, Bill Cheswick observed a criminal hacker, known as a cracker, attempting to obtain a copy of a password file. Cheswick wrote that he and colleagues constructed a chroot "jail" (or "roach motel") which allowed them to observe their attacker over a period of several months.[38] In 2017, Dutch police used honeypot techniques to track down users of the darknet market Hansa. The metaphor of a bear being attracted to and stealing honey is common in many traditions, including Germanic, Celtic, and Slavic. A common Slavic word for the bear is medved, "honey eater". The tradition of bears stealing honey has been passed down through stories and folklore, especially the well-known Winnie the Pooh.[39][40]
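As a minimal illustration of the low-interaction approach described earlier, the sketch below listens on a port, presents a fake service banner, and logs whatever the connecting client sends before dropping the connection. It is a toy example only, not a hardened or production honeypot; the port number, banner string, and log file name are arbitrary choices.

```python
import socketserver
from datetime import datetime, timezone

PORT = 2222                           # illustrative; a real deployment would pick ports attackers probe
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"   # fake banner so the service looks like a common daemon

class HoneypotHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Record who connected and the first bytes they sent, then drop the connection.
        self.request.sendall(BANNER)
        self.request.settimeout(5)
        try:
            data = self.request.recv(1024)
        except OSError:
            data = b""
        stamp = datetime.now(timezone.utc).isoformat()
        with open("honeypot.log", "a") as log:
            log.write(f"{stamp} {self.client_address[0]} {data!r}\n")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", PORT), HoneypotHandler) as server:
        server.serve_forever()  # any connection to this decoy is, by definition, worth logging
```

Because the decoy offers no legitimate service, every logged connection is a potential indicator of scanning or attack, which is the core value proposition of low-interaction honeypots.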
https://en.wikipedia.org/wiki/Honeypot_(computing)
Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness.[1] In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami[2] introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule {onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. The association rule algorithm itself has various parameters, which can make it difficult to execute for those without some expertise in data mining, and it can produce many rules that are arduous to understand.[3] Following the original definition by Agrawal, Imieliński and Swami,[2] the problem of association rule mining is defined as follows. Let I={i1,i2,…,in}{\displaystyle I=\{i_{1},i_{2},\ldots ,i_{n}\}} be a set of n binary attributes called items. Let D={t1,t2,…,tm}{\displaystyle D=\{t_{1},t_{2},\ldots ,t_{m}\}} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form X⇒Y, where X and Y are disjoint itemsets drawn from I. In Agrawal, Imieliński and Swami,[2] a rule is defined only between a set and a single item, X⇒ij{\displaystyle X\Rightarrow i_{j}} for ij∈I{\displaystyle i_{j}\in I}. Every rule is composed of two different sets of items, also known as itemsets, X and Y, where X is called the antecedent or left-hand side (LHS) and Y the consequent or right-hand side (RHS). The antecedent is the itemset found in the data, while the consequent is the itemset found in combination with the antecedent. The statement X⇒Y{\displaystyle X\Rightarrow Y} is often read as if X then Y, where the antecedent (X) is the if and the consequent (Y) is the then. This simply implies that, in theory, whenever X occurs in a dataset, then Y will as well. Association rules are made by searching data for frequent if-then patterns and by using the criteria of Support and Confidence to define what the most important relationships are. Support indicates how frequently an itemset appears in the given data, while Confidence indicates how often the if-then statement has been found to be true. A third criterion, called Lift, can be used to compare the actual Confidence with the expected Confidence; it indicates how much more often the if-then statement is found to be true than would be expected if the antecedent and consequent were independent. Association rules are computed from itemsets, which consist of two or more items.
If rules were built by analyzing all of the possible itemsets in the data, there would be so many rules that they would have little meaning. That is why association rules are typically made from rules that are well represented by the data. There are many different data mining techniques that can be used to find certain analytics and results, for example Classification analysis, Clustering analysis, and Regression analysis.[4] Which technique should be used depends on what one is looking for in the data. Association rules are primarily used to find analytics and to predict customer behavior. Classification analysis would most likely be used to question, make decisions, and predict behavior.[5] Clustering analysis is primarily used when there are no assumptions made about the likely relationships within the data.[5] Regression analysis is used to predict the value of a continuous dependent variable from a number of independent variables.[5] Benefits: There are many benefits of using association rules, such as finding patterns that help in understanding the correlations and co-occurrences between data sets. A very good real-world example that uses association rules is medicine. Medicine uses association rules to help diagnose patients. When diagnosing patients there are many variables to consider, as many diseases share similar symptoms. With the use of association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.[6] Downsides: However, association rules also have downsides, such as the difficulty of finding appropriate parameter and threshold settings for the mining algorithm. There is also the downside of having a large number of discovered rules: this does not guarantee that the rules will be relevant, and it can also cause the algorithm to perform poorly. Sometimes the implemented algorithms contain too many variables and parameters, which can make them hard to understand for someone without a good grasp of data mining.[7] Thresholds: When using association rules, one is most likely to use only Support and Confidence. However, this means one has to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time. Usually, association rule generation is split into two separate steps that need to be applied: first, a minimum support threshold is applied to find all frequent itemsets in the database; second, a minimum confidence threshold is applied to those frequent itemsets in order to form rules. As an illustration, with a Support threshold of 30% and a Confidence threshold of 50%, candidate items can be ordered by how well they satisfy the thresholds: Item C exceeds the thresholds for both Support and Confidence, which is why it is first; Item A is second because its values exactly meet the thresholds; Item D has met the threshold for Support but not for Confidence; Item B has not met the threshold for either Support or Confidence, which is why it is last. Finding all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets. The set of possible itemsets is the power set over I and has size 2n−1{\displaystyle 2^{n}-1}, which excludes the empty set since it is not considered a valid itemset. The size of the power set thus grows exponentially in the number of items n within I.
An efficient search is possible by using the downward-closure property of support[2][8] (also called anti-monotonicity[9]). This property guarantees that all subsets of a frequent itemset are also frequent, and thus that no frequent itemset can have an infrequent itemset as a subset. Exploiting this property, efficient algorithms (e.g., Apriori[10] and Eclat[11]) can find all frequent itemsets. To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items is I={milk,bread,butter,beer,diapers,eggs,fruit}{\displaystyle I=\{\mathrm {milk,bread,butter,beer,diapers,eggs,fruit} \}}. An example rule for the supermarket could be {butter,bread}⇒{milk}{\displaystyle \{\mathrm {butter,bread} \}\Rightarrow \{\mathrm {milk} \}}, meaning that if butter and bread are bought, customers also buy milk. In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. Let X,Y{\displaystyle X,Y} be itemsets, X⇒Y{\displaystyle X\Rightarrow Y} an association rule and T a set of transactions of a given database. Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant,[citation needed] and datasets often contain thousands or millions of transactions. Support is an indication of how frequently the itemset appears in the dataset. In our example, it can be easier to explain support by writing support = P(A ∩ B) = (number of transactions containing A and B) / (total number of transactions), where A and B are separate itemsets that occur at the same time in a transaction. Using Table 2 as an example, the itemset X={beer,diapers}{\displaystyle X=\{\mathrm {beer,diapers} \}} has a support of 1/5 = 0.2, since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of support of X is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive).[13] Furthermore, the itemset Y={milk,bread,butter}{\displaystyle Y=\{\mathrm {milk,bread,butter} \}} has a support of 1/5 = 0.2, as it appears in 20% of all transactions as well. Using antecedents and consequents allows a data miner to determine the support of multiple items being bought together, in comparison to the whole data set. For example, Table 2 shows that the rule 'if milk is bought, then bread is bought' has a support of 0.4, or 40%. This is because in 2 out of 5 of the transactions, milk as well as bread are bought. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlations between two or more products in the supermarket example. Minimum support thresholds are useful for determining which itemsets are preferred or interesting. If we set the support threshold to ≥0.4 in Table 3, then the rule {milk}⇒{eggs}{\displaystyle \{\mathrm {milk} \}\Rightarrow \{\mathrm {eggs} \}} would be removed, since it did not meet the minimum threshold of 0.4. (A short computational illustration of these support values is given below.)
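The support values quoted above can be checked with a few lines of code. The five transactions below are a plausible reconstruction consistent with the figures in the text (the original illustrative table is not reproduced in this excerpt), and the support function itself is just the definition applied directly.

```python
# A five-transaction dataset consistent with the support and confidence values quoted in the text.
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

print(support({"beer", "diapers"}))          # 0.2
print(support({"milk", "bread", "butter"}))  # 0.2
print(support({"milk", "bread"}))            # 0.4
```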
A minimum threshold is used to remove samples for which there is not strong enough support or confidence to deem the sample important or interesting in the dataset. Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset, prompting a closer look at the sample to find more information on the connection between the items. Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. A comparison between support and support × confidence can be tabulated using the information from Table 4 to derive the confidence values. The support of X with respect to T is defined as the proportion of transactions in the dataset which contain the itemset X. Denoting a transaction by (i,t){\displaystyle (i,t)}, where i is the unique identifier of the transaction and t is its itemset, the support may be written as supp(X) = |{(i,t) ∈ T : X ⊆ t}| / |T|. This notation can be used when defining more complicated datasets where the items and itemsets may not be as simple as in our supermarket example above. Other examples of where support can be used are in finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.[12] Confidence is the percentage of all transactions satisfying X that also satisfy Y.[14] With respect to T, the confidence value of an association rule, often denoted as X⇒Y{\displaystyle X\Rightarrow Y}, is the ratio of the number of transactions containing both X and Y to the number of transactions containing X, where X is the antecedent and Y is the consequent. Confidence can also be interpreted as an estimate of the conditional probability P(EY|EX){\displaystyle P(E_{Y}|E_{X})}, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.[13][15] It is commonly written as conf(X⇒Y) = supp(X ∪ Y) / supp(X). The equation illustrates that confidence can be computed by relating the co-occurrence of X and Y within the dataset to the transactions containing X; that is, the number of transactions containing both X and Y is divided by the number of transactions containing X. For example, Table 2 shows the rule {butter,bread}⇒{milk}{\displaystyle \{\mathrm {butter,bread} \}\Rightarrow \{\mathrm {milk} \}}, which has a confidence of (1/5)/(1/5) = 0.2/0.2 = 1.0 in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule {fruit}⇒{eggs}{\displaystyle \{\mathrm {fruit} \}\Rightarrow \{\mathrm {eggs} \}}, however, has a confidence of (2/5)/(3/5) = 0.4/0.6 ≈ 0.67. This suggests that eggs are bought 67% of the times that fruit is bought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those purchases also containing eggs. For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements is removed.
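The two confidence values above can likewise be reproduced numerically. The sketch below is self-contained and reuses the same reconstructed transactions as the previous sketch; confidence is computed exactly as the ratio of two support values.

```python
# Same reconstructed transactions as in the previous sketch.
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """conf(X => Y) = supp(X union Y) / supp(X)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(confidence({"butter", "bread"}, {"milk"}))  # 1.0
print(round(confidence({"fruit"}, {"eggs"}), 2))  # 0.67
```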
Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%). Any data that does not have a confidence of at least 0.5 is omitted. Applying thresholds allows the associations between items to become more apparent as the data is further examined, by emphasizing the items that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, where the relationship between items is highlighted via both their confidence and support, instead of just one concept. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support and is often implemented for a more in-depth understanding of the relationship between the items. Overall, using confidence in association rule mining is a great way to bring awareness to data relations. Its greatest benefit is highlighting the relationship of particular items to one another within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer multiple different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset, so while milk and bread, for example, may co-occur 100% of the time for confidence, the itemset only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as Support × Confidence, instead of relying solely on one concept to define the relationships. The lift of a rule is defined as lift(X⇒Y) = supp(X ∪ Y) / (supp(X) × supp(Y)), or the ratio of the observed support to that expected if X and Y were independent. For example, the rule {milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}} has a lift of 0.2/(0.4 × 0.4) = 1.25. If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events. If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets. If the lift is < 1, that lets us know the items are substitutes for each other. This means that the presence of one item has a negative effect on the presence of the other item, and vice versa. The value of lift is that it considers both the support of the rule and the overall data set.[13] The conviction of a rule is defined as conv(X⇒Y) = (1 − supp(Y)) / (1 − conf(X⇒Y)).[16] For example, the rule {milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}} has a conviction of (1 − 0.4)/(1 − 0.5) = 1.2, and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent, divided by the observed frequency of incorrect predictions.
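The lift and conviction values for {milk, bread} ⇒ {butter} can be checked numerically as well, again on the reconstructed transactions used in the earlier sketches; the functions below simply transcribe the two definitions given above.

```python
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    return support(set(x) | set(y)) / support(x)

def lift(x, y):
    """Observed support of X union Y relative to what independence of X and Y would predict."""
    return support(set(x) | set(y)) / (support(x) * support(y))

def conviction(x, y):
    """conv(X => Y) = (1 - supp(Y)) / (1 - conf(X => Y))."""
    return (1 - support(y)) / (1 - confidence(x, y))

x, y = {"milk", "bread"}, {"butter"}
print(round(lift(x, y), 2))        # 1.25
print(round(conviction(x, y), 2))  # 1.2
```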
In this example, the conviction value of 1.2 shows that the rule{milk,bread}⇒{butter}{\displaystyle \{\mathrm {milk,bread} \}\Rightarrow \{\mathrm {butter} \}}would be incorrect 20% more often (1.2 times as often) if the association between X and Y was purely random chance. In addition to confidence, other measures ofinterestingnessfor rules have been proposed. Some popular measures are: Several more measures are presented and compared by Tan et al.[20]and by Hahsler.[21]Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness." The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al.,[2]which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" is introduced already in the 1966 paper[22]on GUHA, a general data mining method developed byPetr Hájeket al.[23] An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules withsupp(X){\displaystyle \mathrm {supp} (X)}andconf(X⇒Y){\displaystyle \mathrm {conf} (X\Rightarrow Y)}greater than user defined constraints.[24] One limitation of the standard approach to discovering associations is that by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items in the left-hand-side and 1 item in the right-hand-side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05 it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery[25][26]controls this risk, in most cases reducing the risk of findinganyspurious associations to a user-specified significance level. Many algorithms for generating association rules have been proposed. Some well-known algorithms areApriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. Another step needs to be done after to generate rules from frequent itemsets found in a database. Apriori is given by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties. Overview:Aprioriuses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known ascandidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori usesbreadth-first searchand aHash treestructure to count candidate item sets efficiently. 
It generates candidate item sets of length k from item sets of length k − 1. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequent k-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example, a row could have {a, c}, which means it is affected by mutation 'a' and mutation 'c'. Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3. Since all support values are three or above there is no pruning. The frequent item sets are {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set. Now we will make our minimum support value 4, so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set. Since we only have one frequent item set at this level, the next set of candidate quadruplets is empty, so the algorithm will stop. Advantages and Limitations: Apriori has some limitations. Candidate generation can result in large candidate sets. For example, 10^4 frequent 1-itemsets will generate roughly 10^7 candidate 2-itemsets. The algorithm also needs to frequently scan the database, to be specific n+1 scans, where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large. This is because in the Eclat algorithm, if the dataset is too large, the tid-lists become too large for memory. FP-growth outperforms Apriori and Eclat. This is due to the FP-growth algorithm not requiring candidate generation and testing, using a compact data structure, and only needing one database scan.[27] Eclat[11] (alt. ECLAT, which stands for Equivalence Class Transformation) is a backtracking algorithm, which traverses the frequent itemset lattice graph in a depth-first search (DFS) fashion. Whereas the breadth-first search (BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of its subsets by virtue of the downward-closure property. Furthermore, it will almost certainly use less memory, as DFS has a lower space complexity than BFS. To illustrate this, let there be a frequent itemset {a, b, c}. A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c} and {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoes combinatorial explosion. It is suitable for both sequential and parallel execution, with locality-enhancing properties.[28][29] FP stands for frequent pattern.[30] In the first pass, the FP-growth algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'.
In the second pass, it builds the FP-tree structure by inserting transactions into atrie. Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly. Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share most frequent items, the FP-tree provides high compression close to tree root. Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the apriori algorithm). Growth begins from the bottom of the header table i.e. the item with the smallest support by finding all sorted transactions that end in that item. Call this itemI{\displaystyle I}. A new conditional tree is created which is the original FP-tree projected ontoI{\displaystyle I}. The supports of all nodes in the projected tree are re-counted with each node getting the sum of its children counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional onI{\displaystyle I}meet the minimum support threshold. The resulting paths from root toI{\displaystyle I}will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree. Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins.[31] The ASSOC procedure[32]is a GUHA method which mines for generalized association rules using fastbitstringsoperations. The association rules mined by this method are more general than those output by apriori, for example "items" can be connected both with conjunction and disjunctions and the relation between antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in apriori: an arbitrary combination of supported interest measures can be used. OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support.[33]Initially used to find rules for a fixed consequent[33][34]it has subsequently been extended to find rules with any item as a consequent.[35]OPUS search is the core technology in the popular Magnum Opus association discovery system. A famous story about association rule mining is the "beer and diaper" story. A purported survey of behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true.[36]Daniel Powers says:[36] In 1992, Thomas Blischok, manager of a retail consulting group atTeradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves. Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. 
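A minimal sketch of the two database passes described above: building the header-table counts, then discarding infrequent items and sorting each transaction by descending item frequency, which is the order in which transactions would be inserted into the FP-tree. It stops short of building the tree itself, and the transactions are illustrative only.

```python
from collections import Counter

def fp_preprocess(transactions, min_count):
    """Pass 1: count item occurrences into a header table.
    Pass 2: drop infrequent items and sort each transaction by descending frequency."""
    header = Counter(item for t in transactions for item in t)
    header = {item: n for item, n in header.items() if n >= min_count}
    rank = {item: r for r, (item, _) in
            enumerate(sorted(header.items(), key=lambda kv: -kv[1]))}
    prepared = [sorted((i for i in t if i in header), key=rank.get) for t in transactions]
    return header, prepared

transactions = [{"a", "c"}, {"a", "d"}, {"a", "b", "d"}, {"b", "c", "d"}, {"a", "b", "c", "d"}]
header, prepared = fp_preprocess(transactions, min_count=3)
print(header)    # header-table counts
print(prepared)  # shared prefixes of these sorted lists compress well near the tree root
```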
Consider the following MRAR, where the first item consists of three relations live in, nearby and humid: “Those who live in a place which is nearby a city with humid climate type and also are younger than 20 ⟹{\displaystyle \implies } their health condition is good”. Such association rules can be extracted from RDBMS data or semantic web data.[37] Contrast set learning is a form of associative learning. Contrast set learners use rules that differ meaningfully in their distribution across subsets.[38][39] Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results. High-order pattern discovery facilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data.[40] K-optimal pattern discovery provides an alternative to the standard approach to association rule learning, which requires that each pattern appear frequently in the data. Approximate Frequent Itemset mining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0.[41] Generalized Association Rules use a hierarchical taxonomy (concept hierarchy). Quantitative Association Rules handle categorical and quantitative data. Interval Data Association Rules partition the data into intervals, e.g. partitioning age into 5-year increments. Sequential pattern mining discovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions.[42] Subspace Clustering, a specific type of clustering high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models.[43] Warmr, shipped as part of the ACE data mining suite, allows association rule learning for first order relational rules.[44]
https://en.wikipedia.org/wiki/Association_rule_learning
Bootstrappingis a procedure for estimating the distribution of an estimator byresampling(oftenwith replacement) one's data or a model estimated from the data.[1]Bootstrapping assigns measures of accuracy (bias, variance,confidence intervals, prediction error, etc.) to sample estimates.[2][3]This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.[1] Bootstrapping estimates the properties of anestimand(such as itsvariance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is theempirical distribution functionof the observed data. In the case where a set of observations can be assumed to be from anindependent and identically distributedpopulation, this can be implemented by constructing a number ofresampleswith replacement, of the observed data set (and of equal size to the observed data set). A key result in Efron's seminal paper that introduced the bootstrap[4]is the favorable performance of bootstrap methods usingsampling with replacementcompared to prior methods like thejackknifethat sample without replacement. However, since its introduction, numerous variants on the bootstrap have been proposed, including methods that sample without replacement or that create bootstrap samples larger or smaller than the original data. The bootstrap may also be used for constructinghypothesis tests.[5]It is often used as an alternative tostatistical inferencebased on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation ofstandard errors. The bootstrap[a]was first described byBradley Efronin "Bootstrap methods: another look at the jackknife" (1979),[4]inspired by earlier work on thejackknife.[6][7][8]Improved estimates of the variance were developed later.[9][10]A Bayesian extension was developed in 1981.[11]The bias-corrected and accelerated (BCa{\displaystyle BC_{a}}) bootstrap was developed by Efron in 1987,[12]and the approximate bootstrap confidence interval (ABC, or approximateBCa{\displaystyle BC_{a}}) procedure in 1992.[13] The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled byresamplingthe sample data and performing inference about a sample from resampled data (resampled → sample).[14]As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap-resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable. More formally, the bootstrap works by treating inference of the trueprobability distributionJ, given the original data, as being analogous to an inference of the empirical distributionĴ, given the resampled data. The accuracy of inferences regardingĴusing the resampled data can be assessed because we knowĴ. IfĴis a reasonable approximation toJ, then the quality of inference onJcan in turn be inferred. As an example, assume we are interested in the average (ormean) height of people worldwide. We cannot measure all the people in the global population, so instead, we sample only a tiny part of it, and measure that. Assume the sample is of sizeN; that is, we measure the heights ofNindividuals. From that single sample, only one estimate of the mean can be obtained. 
In order to reason about the population, we need some sense of the variability of the mean that we have computed. The simplest bootstrap method involves taking the original data set of heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N. The bootstrap sample is taken from the original by using sampling with replacement (e.g. we might 'resample' 5 times from [1,2,3,4,5] and get [2,5,4,4,1]), so, assuming N is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples, we compute its mean (each of these is called a "bootstrap estimate"). We can now create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator.) A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. Despite its simplicity, bootstrapping can also be applied to complex sampling designs (e.g. a population divided into s strata with n_s observations per stratum, one example of which is a dose-response experiment, where bootstrapping can be applied for each stratum).[15] The bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, the bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality.[16] Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data. Bootstrapping depends heavily on the estimator used and, though simple, naive use of bootstrapping will not always yield asymptotically valid results and can lead to inconsistency.[17] Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The result may depend on how representative the sample is. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples or a sufficiently large sample size) where these would be more formally stated in other approaches. Also, bootstrapping can be time-consuming, and little ready-made software for bootstrapping is available, as it is difficult to automate using traditional statistical computer packages.[15] Scholars have recommended more bootstrap samples as available computing power has increased. If the results may have substantial real-world consequences, then one should use as many samples as is reasonable, given available computing power and time. Increasing the number of samples cannot increase the amount of information in the original data; it can only reduce the effects of random sampling errors which can arise from the bootstrap procedure itself.
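A minimal sketch of this procedure, assuming NumPy; the height values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of N measured heights (cm).
heights = np.array([172.0, 168.5, 181.2, 175.4, 169.9, 178.3, 165.0, 174.1, 171.6, 180.0])
n = len(heights)

B = 10_000                                   # number of bootstrap resamples
resamples = rng.choice(heights, size=(B, n), replace=True)  # sampling with replacement
boot_means = resamples.mean(axis=1)          # one "bootstrap estimate" per resample

se = boot_means.std(ddof=1)                  # bootstrap estimate of the standard error
ci = np.percentile(boot_means, [2.5, 97.5])  # simple percentile confidence interval
print(f"mean={heights.mean():.2f}  bootstrap SE={se:.2f}  95% CI={ci}")
```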
Moreover, there is evidence that numbers of samples greater than 100 lead to negligible improvements in the estimation of standard errors.[18]In fact, according to the original developer of the bootstrapping method, even setting the number of samples at 50 is likely to lead to fairly good standard error estimates.[19] Adèr et al. recommend the bootstrap procedure for the following situations:[20] However, Athreya has shown[21]that if one performs a naive bootstrap on the sample mean when the underlying population lacks a finite variance (for example, apower law distribution), then the bootstrap distribution will not converge to the same limit as the sample mean. As a result, confidence intervals on the basis of aMonte Carlo simulationof the bootstrap could be misleading. Athreya states that "Unless one is reasonably sure that the underlying distribution is notheavy tailed, one should hesitate to use the naive bootstrap". In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling" below) unlikesubsampling, in which resampling is without replacement and is valid under much weaker conditions compared to the bootstrap. In small samples, a parametric bootstrap approach might be preferred. For other problems, asmooth bootstrapwill likely be preferred. For regression problems, various other alternatives are available.[2] The bootstrap is generally useful for estimating the distribution of a statistic (e.g. mean, variance) without using normality assumptions (as required, e.g., for a z-statistic or a t-statistic). In particular, the bootstrap is useful when there is no analytical form or an asymptotic theory (e.g., an applicablecentral limit theorem) to help estimate the distribution of the statistics of interest. This is because bootstrap methods can apply to most random quantities, e.g., the ratio of variance and mean. There are at least two ways of performing case resampling. Consider a coin-flipping experiment. We flip the coin and record whether it lands heads or tails. LetX = x1,x2, …,x10be 10 observations from the experiment.xi= 1if the i th flip lands heads, and 0 otherwise. By invoking the assumption that the average of the coin flips is normally distributed, we can use thet-statisticto estimate the distribution of the sample mean, Such a normality assumption can be justified either as an approximation of the distribution of eachindividualcoin flip or as an approximation of the distribution of theaverageof a large number of coin flips. The former is a poor approximation because the true distribution of the coin flips isBernoulliinstead of normal. The latter is a valid approximation ininfinitely largesamples due to thecentral limit theorem. However, if we are not ready to make such a justification, then we can use the bootstrap instead. Using case resampling, we can derive the distribution ofx¯{\displaystyle {\bar {x}}}. We first resample the data to obtain abootstrap resample. An example of the first resample might look like thisX1* =x2,x1,x10,x10,x3,x4,x6,x7,x1,x9. There are some duplicates since a bootstrap resample comes from sampling with replacement from the data. Also the number of data points in a bootstrap resample is equal to the number of data points in our original observations. Then we compute the mean of this resample and obtain the firstbootstrap mean:μ1*. We repeat this process to obtain the second resampleX2* and compute the second bootstrap meanμ2*. 
If we repeat this 100 times, then we have μ1*, μ2*, ..., μ100*. This represents an empirical bootstrap distribution of the sample mean. From this empirical distribution, one can derive a bootstrap confidence interval for the purpose of hypothesis testing. In regression problems, case resampling refers to the simple scheme of resampling individual cases – often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable.[citation needed] However, the method is open to criticism.[15] In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information. As such, alternative bootstrap procedures should be considered. Bootstrapping can be interpreted in a Bayesian framework using a scheme that creates new data sets through reweighting the initial data. Given a set of N{\displaystyle N} data points, the weighting assigned to data point i{\displaystyle i} in a new data set DJ{\displaystyle {\mathcal {D}}^{J}} is wiJ=xiJ−xi−1J{\displaystyle w_{i}^{J}=x_{i}^{J}-x_{i-1}^{J}}, where xJ{\displaystyle \mathbf {x} ^{J}} is a low-to-high ordered list of N−1{\displaystyle N-1} uniformly distributed random numbers on [0,1]{\displaystyle [0,1]}, preceded by 0 and succeeded by 1. The distributions of a parameter inferred from considering many such data sets DJ{\displaystyle {\mathcal {D}}^{J}} are then interpretable as posterior distributions on that parameter.[23] Under the smooth bootstrap scheme, a small amount of (usually normally distributed) zero-centered random noise is added onto each resampled observation. This is equivalent to sampling from a kernel density estimate of the data. Assume K to be a symmetric kernel density function with unit variance. The standard kernel estimator f^h(x){\displaystyle {\hat {f\,}}_{h}(x)} of f(x){\displaystyle f(x)} is {\displaystyle {\hat {f\,}}_{h}(x)={\frac {1}{nh}}\sum _{i=1}^{n}K\left({\frac {x-X_{i}}{h}}\right),} where h{\displaystyle h} is the smoothing parameter, and the corresponding distribution function estimator F^h(x){\displaystyle {\hat {F\,}}_{h}(x)} is {\displaystyle {\hat {F\,}}_{h}(x)=\int _{-\infty }^{x}{\hat {f\,}}_{h}(t)\,dt.} In the parametric bootstrap, the original data set is assumed to be a realization of a random sample from a distribution of a specific parametric type. A parametric model is fitted by parameter θ, often by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the estimate of the original function F can be written as F^=Fθ^{\displaystyle {\hat {F}}=F_{\hat {\theta }}}. This sampling process is repeated many times as for other bootstrap methods. Considering the centered sample mean in this case, the random sample original distribution function Fθ{\displaystyle F_{\theta }} is replaced by a bootstrap random sample with function Fθ^{\displaystyle F_{\hat {\theta }}}, and the probability distribution of Xn¯−μθ{\displaystyle {\bar {X_{n}}}-\mu _{\theta }} is approximated by that of X¯n∗−μ∗{\displaystyle {\bar {X}}_{n}^{*}-\mu ^{*}}, where μ∗=μθ^{\displaystyle \mu ^{*}=\mu _{\hat {\theta }}}, which is the expectation corresponding to Fθ^{\displaystyle F_{\hat {\theta }}}.[25] The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model. Another approach to bootstrapping in regression problems is to resample residuals.
The method proceeds as follows: fit the model to the original data and retain the fitted values y^i{\displaystyle {\hat {y}}_{i}} and residuals ε^i=yi−y^i{\displaystyle {\hat {\varepsilon }}_{i}=y_{i}-{\hat {y}}_{i}}; for each replicate, attach a randomly resampled residual to each fitted value to obtain synthetic responses, refit the model to them, and retain the quantities of interest. This scheme has the advantage that it retains the information in the explanatory variables. However, a question arises as to which residuals to resample. Raw residuals are one option; another is studentized residuals (in linear regression). Although there are arguments in favor of using studentized residuals, in practice it often makes little difference, and it is easy to compare the results of both schemes. When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations. The Gaussian process regression bootstrap instead uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables.[26] Regression model: y(x)=f(x)+ε{\displaystyle y(x)=f(x)+\varepsilon }, where f{\displaystyle f} is the unknown regression function and ε{\displaystyle \varepsilon } is zero-mean Gaussian noise with variance σ2{\displaystyle \sigma ^{2}}. Gaussian process prior: For any finite collection of variables, x1, ..., xn, the function outputs f(x1),…,f(xn){\displaystyle f(x_{1}),\ldots ,f(x_{n})} are jointly distributed according to a multivariate Gaussian with mean m=[m(x1),…,m(xn)]⊺{\displaystyle m=[m(x_{1}),\ldots ,m(x_{n})]^{\intercal }} and covariance matrix (K)ij=k(xi,xj).{\displaystyle (K)_{ij}=k(x_{i},x_{j}).} Assume f(x)∼GP(m,k).{\displaystyle f(x)\sim {\mathcal {GP}}(m,k).} Then y(x)∼GP(m,l){\displaystyle y(x)\sim {\mathcal {GP}}(m,l)}, where l(xi,xj)=k(xi,xj)+σ2δ(xi,xj){\displaystyle l(x_{i},x_{j})=k(x_{i},x_{j})+\sigma ^{2}\delta (x_{i},x_{j})}, and δ(xi,xj){\displaystyle \delta (x_{i},x_{j})} is the standard Kronecker delta function.[26] Gaussian process posterior: According to the GP prior, we can get {\displaystyle [y(x_{1}),\ldots ,y(x_{r})]^{\intercal }\sim {\mathcal {N}}(m_{0},K_{0}),} where m0=[m(x1),…,m(xr)]⊺{\displaystyle m_{0}=[m(x_{1}),\ldots ,m(x_{r})]^{\intercal }} and (K0)ij=k(xi,xj)+σ2δ(xi,xj).{\displaystyle (K_{0})_{ij}=k(x_{i},x_{j})+\sigma ^{2}\delta (x_{i},x_{j}).} Let x1*, ..., xs* be another finite collection of variables; then {\displaystyle [y(x_{1}),\ldots ,y(x_{r}),f(x_{1}^{*}),\ldots ,f(x_{s}^{*})]^{\intercal }\sim {\mathcal {N}}\left({\begin{bmatrix}m_{0}\\m_{*}\end{bmatrix}},{\begin{bmatrix}K_{0}&K_{*}\\K_{*}^{\intercal }&K_{**}\end{bmatrix}}\right),} where m∗=[m(x1∗),…,m(xs∗)]⊺{\displaystyle m_{*}=[m(x_{1}^{*}),\ldots ,m(x_{s}^{*})]^{\intercal }}, (K∗∗)ij=k(xi∗,xj∗){\displaystyle (K_{**})_{ij}=k(x_{i}^{*},x_{j}^{*})}, (K∗)ij=k(xi,xj∗).{\displaystyle (K_{*})_{ij}=k(x_{i},x_{j}^{*}).} According to the equations above, the observed outputs y and the function values at the new points are jointly distributed according to a multivariate Gaussian. Thus, {\displaystyle [f(x_{1}^{*}),\ldots ,f(x_{s}^{*})]^{\intercal }\mid y\sim {\mathcal {N}}(m_{\text{post}},K_{\text{post}}),} where y=[y1,...,yr]⊺{\displaystyle y=[y_{1},...,y_{r}]^{\intercal }}, mpost=m∗+K∗⊺(KO+σ2Ir)−1(y−m0){\displaystyle m_{\text{post}}=m_{*}+K_{*}^{\intercal }(K_{O}+\sigma ^{2}I_{r})^{-1}(y-m_{0})}, Kpost=K∗∗−K∗⊺(KO+σ2Ir)−1K∗{\displaystyle K_{\text{post}}=K_{**}-K_{*}^{\intercal }(K_{O}+\sigma ^{2}I_{r})^{-1}K_{*}}, and Ir{\displaystyle I_{r}} is the r×r{\displaystyle r\times r} identity matrix.[26] The wild bootstrap, proposed originally by Wu (1986),[27] is suited when the model exhibits heteroskedasticity. The idea is, as with the residual bootstrap, to leave the regressors at their sample values, but to resample the response variable based on the residual values. That is, for each replicate, one computes a new y{\displaystyle y} based on {\displaystyle y_{i}^{*}={\hat {y}}_{i}+{\hat {\varepsilon }}_{i}v_{i},} so the residuals are randomly multiplied by a random variable vi{\displaystyle v_{i}} with mean 0 and variance 1. For most distributions of vi{\displaystyle v_{i}} (but not Mammen's), this method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes.
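A minimal sketch of both schemes for a simple linear model, assuming NumPy: residual resampling, and the wild bootstrap with a ±1 (Rademacher-type) multiplier as one common choice for v_i. The data are synthetic, with deliberately heteroskedastic noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression data y = 1 + 2x + noise, with heteroskedastic noise.
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3 * (1 + x), size=x.size)
X = np.column_stack([np.ones_like(x), x])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

def bootstrap_betas(B=2000, wild=False):
    betas = np.empty((B, 2))
    for b in range(B):
        if wild:
            # Wild bootstrap: keep regressors fixed, multiply each residual by v_i
            # with mean 0 and variance 1 (here a simple +/-1 draw).
            v = rng.choice([-1.0, 1.0], size=resid.size)
            y_star = fitted + resid * v
        else:
            # Residual bootstrap: resample residuals with replacement.
            y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return betas

print("residual bootstrap SE:", bootstrap_betas().std(axis=0, ddof=1))
print("wild bootstrap SE:    ", bootstrap_betas(wild=True).std(axis=0, ddof=1))
```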
Different forms are used for the random variablevi{\displaystyle v_{i}}, such as The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, a simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data (seeBlocking (statistics)). The block bootstrap has been used mainly with data correlated in time (i.e. time series) but can also be used with data correlated in space, or among groups (so-called cluster data). In the (simple) block bootstrap, the variable of interest is split into non-overlapping blocks. In the moving block bootstrap, introduced by Künsch (1989),[29]data is split inton−b+ 1 overlapping blocks of lengthb: Observation 1 to b will be block 1, observation 2 tob+ 1 will be block 2, etc. Then from thesen−b+ 1 blocks,n/bblocks will be drawn at random with replacement. Then aligning these n/b blocks in the order they were picked, will give the bootstrap observations. This bootstrap works with dependent data, however, the bootstrapped observations will not be stationary anymore by construction. But, it was shown that varying randomly the block length can avoid this problem.[30]This method is known as thestationary bootstrap.Other related modifications of the moving block bootstrap are theMarkovian bootstrapand a stationary bootstrap method that matches subsequent blocks based on standard deviation matching. Vinod (2006),[31]presents a method that bootstraps time series data using maximum entropy principles satisfying the Ergodic theorem with mean-preserving and mass-preserving constraints. There is an R package,meboot,[32]that utilizes the method, which has applications in econometrics and computer science. Cluster data describes data where many observations per unit are observed. This could be observing many firms in many states or observing students in many classes. In such cases, the correlation structure is simplified, and one does usually make the assumption that data is correlated within a group/cluster, but independent between groups/clusters. The structure of the block bootstrap is easily obtained (where the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged.Cameronet al. (2008) discusses this for clustered errors in linear regression.[33] The bootstrap is a powerful technique although may require substantial computing resources in both time and memory. Some techniques have been developed to reduce this burden. They can generally be combined with many of the different types of Bootstrap schemes and various choices of statistics. Most bootstrap methods areembarrassingly parallelalgorithms. That is, the statistic of interest for each bootstrap sample does not depend on other bootstrap samples. Such computations can therefore be performed on separateCPUsor compute nodes with the results from the separate nodes eventually aggregated for final analysis. The nonparametric bootstrap samples items from a list of size n with counts drawn from amultinomial distribution. IfWi{\displaystyle W_{i}}denotes the number times element i is included in a given bootstrap sample, then eachWi{\displaystyle W_{i}}is distributed as abinomial distributionwith n trials and mean 1, butWi{\displaystyle W_{i}}is not independent ofWj{\displaystyle W_{j}}fori≠j{\displaystyle i\neq j}. 
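A minimal sketch of the moving block bootstrap described above, assuming NumPy: the series is split into n − b + 1 overlapping blocks of length b, roughly n/b blocks are drawn with replacement, and the statistic is recomputed on the concatenation. The autocorrelated series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def moving_block_bootstrap(series, block_length, n_replicates=1000, stat=np.mean):
    """Moving block bootstrap for correlated data."""
    x = np.asarray(series, dtype=float)
    n, b = len(x), block_length
    blocks = np.array([x[i:i + b] for i in range(n - b + 1)])  # overlapping blocks
    n_blocks = int(np.ceil(n / b))
    stats = np.empty(n_replicates)
    for r in range(n_replicates):
        picked = blocks[rng.integers(0, len(blocks), size=n_blocks)]
        stats[r] = stat(np.concatenate(picked)[:n])            # trim to original length
    return stats

# Synthetic AR(1)-like series, where i.i.d. case resampling would destroy the correlation.
e = rng.normal(size=200)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.7 * x[t - 1] + e[t]

boot = moving_block_bootstrap(x, block_length=10)
print("block-bootstrap SE of the mean:", boot.std(ddof=1))
```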
The Poisson bootstrap instead draws samples assuming all Wi{\displaystyle W_{i}}'s are independently and identically distributed as Poisson variables with mean 1. The rationale is that the limit of the binomial distribution is Poisson: {\displaystyle \lim _{n\to \infty }\operatorname {Binomial} (n,1/n)=\operatorname {Poisson} (1).} The Poisson bootstrap was proposed by Hanley and MacGibbon as potentially useful for non-statisticians using software like SAS and SPSS, which lacked the bootstrap packages of the R and S-Plus programming languages.[34] The same authors report that for large enough n, the results are relatively similar to the nonparametric bootstrap estimates, but go on to note that the Poisson bootstrap has seen minimal use in applications. Another proposed advantage of the Poisson bootstrap is that the independence of the Wi{\displaystyle W_{i}} makes the method easier to apply for large datasets that must be processed as streams.[35] A way to improve on the Poisson bootstrap, termed "sequential bootstrap", is by taking the first samples so that the proportion of unique values is ≈ 0.632 of the original sample size n. This provides a distribution with main empirical characteristics being within a distance of O(n3/4){\displaystyle O(n^{3/4})}.[36] Empirical investigation has shown this method can yield good results.[37] This is related to the reduced bootstrap method.[38] For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB)[39] provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. This works by partitioning the data set into b{\displaystyle b} equal-sized buckets and aggregating the data within each bucket. This pre-aggregated data set becomes the new sample data over which to draw samples with replacement. This method is similar to the block bootstrap, but the motivations and definitions of the blocks are very different. Under certain assumptions, the sample distribution should approximate the full bootstrapped scenario. One constraint is the number of buckets b=nγ{\displaystyle b=n^{\gamma }}, where γ∈[0.5,1]{\displaystyle \gamma \in [0.5,1]}, and the authors recommend usage of b=n0.7{\displaystyle b=n^{0.7}} as a general solution. The bootstrap distribution of a point estimator of a population parameter has been used to produce a bootstrapped confidence interval for the parameter's true value if the parameter can be written as a function of the population's distribution. Population parameters are estimated with many point estimators. Popular families of point-estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, or mean), and maximum-likelihood estimators. A Bayesian point estimator and a maximum-likelihood estimator have good performance when the sample size is infinite, according to asymptotic theory. For practical problems with finite samples, other estimators may be preferable. Asymptotic theory suggests techniques that often improve the performance of bootstrapped estimators; the bootstrapping of a maximum-likelihood estimator may often be improved using transformations related to pivotal quantities.[40] The bootstrap distribution of a parameter estimator is often used to calculate confidence intervals for its population parameter.[2] A variety of methods for constructing the confidence intervals have been proposed, although there is disagreement about which method is best.
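A minimal sketch of the Poisson bootstrap described above, assuming NumPy: each observation receives an independent Poisson(1) count playing the role of W_i, which is what makes the scheme easy to apply to streamed data. The sample is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_bootstrap_means(data, n_replicates=5000):
    """Poisson bootstrap: weight every observation by an independent Poisson(1) count."""
    data = np.asarray(data, dtype=float)
    means = np.empty(n_replicates)
    for r in range(n_replicates):
        w = rng.poisson(1.0, size=data.size)        # independent replication counts W_i
        means[r] = np.average(data, weights=w) if w.sum() > 0 else data.mean()
    return means

sample = rng.normal(loc=10.0, scale=2.0, size=500)  # synthetic data
boot = poisson_bootstrap_means(sample)
print("Poisson-bootstrap SE of the mean:", boot.std(ddof=1))
```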
The survey of bootstrap confidence interval methods of DiCiccio and Efron and consequent discussion lists several desired properties of confidence intervals, which generally are not all simultaneously met. There are several methods for constructing confidence intervals from the bootstrap distribution of arealparameter: Efron and Tibshirani[2]suggest the following algorithm for comparing the means of two independent samples: Letx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}be a random sample from distribution F with sample meanx¯{\displaystyle {\bar {x}}}and sample varianceσx2{\displaystyle \sigma _{x}^{2}}. Lety1,…,ym{\displaystyle y_{1},\ldots ,y_{m}}be another, independent random sample from distribution G with meany¯{\displaystyle {\bar {y}}}and varianceσy2{\displaystyle \sigma _{y}^{2}} In 1878,Simon Newcombtook observations on thespeed of light.[46]The data set contains twooutliers, which greatly influence thesample mean. (The sample mean need not be aconsistent estimatorfor anypopulation mean, because no mean needs to exist for aheavy-tailed distribution.) A well-defined androbust statisticfor the central tendency is the sample median, which is consistent andmedian-unbiasedfor the population median. The bootstrap distribution for Newcomb's data appears below. We can reduce the discreteness of the bootstrap distribution by adding a small amount of random noise to each bootstrap sample. A conventional choice is to add noise with a standard deviation ofσ/n{\displaystyle \sigma /{\sqrt {n}}}for a sample sizen; this noise is often drawn from a Student-t distribution withn-1degrees of freedom.[47]This results in an approximately-unbiased estimator for the variance of the sample mean.[48]This means that samples taken from the bootstrap distribution will have a variance which is, on average, equal to the variance of the total population. Histograms of the bootstrap distribution and the smooth bootstrap distribution appear below. The bootstrap distribution of the sample-median has only a small number of values. The smoothed bootstrap distribution has a richersupport. However, note that whether the smoothed or standard bootstrap procedure is favorable is case-by-case and is shown to depend on both the underlying distribution function and on the quantity being estimated.[49] In this example, the bootstrapped 95% (percentile) confidence-interval for the population median is (26, 28.5), which is close to the interval for (25.98, 28.46) for the smoothed bootstrap. The bootstrap is distinguished from: Bootstrap aggregating(bagging) is ameta-algorithmbased on averaging model predictions obtained from models trained on multiple bootstrap samples. In situations where an obvious statistic can be devised to measure a required characteristic using only a small number,r, of data items, a corresponding statistic based on the entire sample can be formulated. Given anr-sample statistic, one can create ann-sample statistic by something similar to bootstrapping (taking the average of the statistic over all subsamples of sizer). This procedure is known to have certain good properties and the result is aU-statistic. Thesample meanandsample varianceare of this form, forr= 1 andr= 2. The bootstrap has under certain conditions desirableasymptotic properties. The asymptotic properties most often described are weak convergence / consistency of thesample pathsof the bootstrap empirical process and the validity ofconfidence intervalsderived from the bootstrap. 
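Returning to the smoothed bootstrap of the sample median described above, the following sketch (assuming NumPy) adds Student-t noise with scale σ/√n to each resampled value before taking the median. The data here are synthetic stand-ins, not Newcomb's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

def smoothed_bootstrap_medians(data, n_replicates=5000):
    """Smoothed bootstrap of the median: resample with replacement, then add
    small t-distributed noise with scale sigma / sqrt(n)."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    scale = x.std(ddof=1) / np.sqrt(n)
    medians = np.empty(n_replicates)
    for r in range(n_replicates):
        resample = rng.choice(x, size=n, replace=True)
        noise = scale * rng.standard_t(df=n - 1, size=n)
        medians[r] = np.median(resample + noise)
    return medians

data = rng.normal(27.0, 5.0, size=66)            # synthetic sample
meds = smoothed_bootstrap_medians(data)
print("95% percentile CI for the median:", np.percentile(meds, [2.5, 97.5]))
```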
This section describes the convergence of the empirical bootstrap. This paragraph summarizes more complete descriptions of stochastic convergence in van der Vaart and Wellner[50]and Kosorok.[51]The bootstrap defines astochastic process, a collection of random variables indexed by some setT{\displaystyle T}, whereT{\displaystyle T}is typically thereal line(R{\displaystyle \mathbb {R} }) or a family of functions. Processes of interest are those with bounded sample paths, i.e., sample paths inL-infinity(ℓ∞(T){\displaystyle \ell ^{\infty }(T)}), the set of alluniformly boundedfunctionsfromT{\displaystyle T}toR{\displaystyle \mathbb {R} }. When equipped with the uniform distance,ℓ∞(T){\displaystyle \ell ^{\infty }(T)}is ametric space, and whenT=R{\displaystyle T=\mathbb {R} }, two subspaces ofℓ∞(T){\displaystyle \ell ^{\infty }(T)}are of particular interest,C[0,1]{\displaystyle C[0,1]}, the space of allcontinuous functionsfromT{\displaystyle T}to theunit interval[0,1], andD[0,1]{\displaystyle D[0,1]}, the space of allcadlag functionsfromT{\displaystyle T}to [0,1]. This is becauseC[0,1]{\displaystyle C[0,1]}contains thedistribution functionsfor all continuous random variables, andD[0,1]{\displaystyle D[0,1]}contains the distribution functions for all random variables. Statements about the consistency of the bootstrap are statements about the convergence of the sample paths of the bootstrap process asrandom elementsof the metric spaceℓ∞(T){\displaystyle \ell ^{\infty }(T)}or somesubspacethereof, especiallyC[0,1]{\displaystyle C[0,1]}orD[0,1]{\displaystyle D[0,1]}. Horowitz in a recent review[1]definesconsistencyas: the bootstrap estimatorGn(⋅,Fn){\displaystyle G_{n}(\cdot ,F_{n})}is consistent [for a statisticTn{\displaystyle T_{n}}] if, for eachF0{\displaystyle F_{0}},supτ|Gn(τ,Fn)−G∞(τ,F0)|{\displaystyle \sup _{\tau }|G_{n}(\tau ,F_{n})-G_{\infty }(\tau ,F_{0})|}converges in probabilityto 0 asn→∞{\displaystyle n\to \infty }, whereFn{\displaystyle F_{n}}is the distribution of the statistic of interest in the original sample,F0{\displaystyle F_{0}}is the true but unknown distribution of the statistic,G∞(τ,F0){\displaystyle G_{\infty }(\tau ,F_{0})}is the asymptotic distribution function ofTn{\displaystyle T_{n}}, andτ{\displaystyle \tau }is the indexing variable in the distribution function, i.e.,P(Tn≤τ)=Gn(τ,F0){\displaystyle P(T_{n}\leq \tau )=G_{n}(\tau ,F_{0})}. This is sometimes more specifically calledconsistency relative to the Kolmogorov-Smirnov distance.[52] Horowitz goes on to recommend using a theorem from Mammen[53]that provides easier to check necessary and sufficient conditions for consistency for statistics of a certain common form. In particular, let{Xi:i=1,…,n}{\displaystyle \{X_{i}:i=1,\ldots ,n\}}be the random sample. IfTn=∑i=1ngn(Xi)−tnσn{\displaystyle T_{n}={\frac {\sum _{i=1}^{n}g_{n}(X_{i})-t_{n}}{\sigma _{n}}}}for a sequence of numberstn{\displaystyle t_{n}}andσn{\displaystyle \sigma _{n}}, then the bootstrap estimate of the cumulative distribution function estimates the empirical cumulative distribution function if and only ifTn{\displaystyle T_{n}}converges in distributionto thestandard normal distribution. Convergence in (outer) probability as described above is also calledweak consistency. It can also be shown with slightly stronger assumptions, that the bootstrap isstrongly consistent, where convergence in (outer) probability is replaced by convergence (outer) almost surely. When only one type of consistency is described, it is typically weak consistency. 
This is adequate for most statistical applications since it implies confidence bands derived from the bootstrap are asymptotically valid.[51] In simpler cases, it is possible to use thecentral limit theoremdirectly to show theconsistencyof the bootstrap procedure for estimating the distribution of the sample mean. Specifically, let us considerXn1,…,Xnn{\displaystyle X_{n1},\ldots ,X_{nn}}independent identically distributed random variables withE[Xn1]=μ{\displaystyle \mathbb {E} [X_{n1}]=\mu }andVar[Xn1]=σ2<∞{\displaystyle {\text{Var}}[X_{n1}]=\sigma ^{2}<\infty }for eachn≥1{\displaystyle n\geq 1}. LetX¯n=n−1(Xn1+⋯+Xnn){\displaystyle {\bar {X}}_{n}=n^{-1}(X_{n1}+\cdots +X_{nn})}. In addition, for eachn≥1{\displaystyle n\geq 1}, conditional onXn1,…,Xnn{\displaystyle X_{n1},\ldots ,X_{nn}}, letXn1∗,…,Xnn∗{\displaystyle X_{n1}^{*},\ldots ,X_{nn}^{*}}be independent random variables with distribution equal to the empirical distribution ofXn1,…,Xnn{\displaystyle X_{n1},\ldots ,X_{nn}}. This is the sequence of bootstrap samples. Then it can be shown thatsupτ∈R|P∗(n(X¯n∗−X¯n)σ^n≤τ)−P(n(X¯n−μ)σ≤τ)|→0in probability asn→∞,{\displaystyle \sup _{\tau \in \mathbb {R} }\left|P^{*}\left({\frac {{\sqrt {n}}({\bar {X}}_{n}^{*}-{\bar {X}}_{n})}{{\hat {\sigma }}_{n}}}\leq \tau \right)-P\left({\frac {{\sqrt {n}}({\bar {X}}_{n}-\mu )}{\sigma }}\leq \tau \right)\right|\to 0{\text{ in probability as }}n\to \infty ,}whereP∗{\displaystyle P^{*}}represents probability conditional onXn1,…,Xnn{\displaystyle X_{n1},\ldots ,X_{nn}},n≥1{\displaystyle n\geq 1},X¯n∗=n−1(Xn1∗+⋯+Xnn∗){\displaystyle {\bar {X}}_{n}^{*}=n^{-1}(X_{n1}^{*}+\cdots +X_{nn}^{*})}, andσ^n2=n−1∑i=1n(Xni−X¯n)2{\displaystyle {\hat {\sigma }}_{n}^{2}=n^{-1}\sum _{i=1}^{n}(X_{ni}-{\bar {X}}_{n})^{2}}. To see this, note that(Xni∗−X¯n)/nσ^n{\displaystyle (X_{ni}^{*}-{\bar {X}}_{n})/{\sqrt {n}}{\hat {\sigma }}_{n}}satisfies theLindeberg condition, so the CLT holds.[54] TheGlivenko–Cantelli theoremprovides theoretical background for the bootstrap method. Finite populationsanddrawing without replacementrequire adaptations of the bootstrap due to the violation of the i.i.d assumption. One example is "population bootstrap"[55].
https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
Random forestsorrandom decision forestsis anensemble learningmethod forclassification,regressionand other tasks that works by creating a multitude ofdecision treesduring training. For classification tasks, the output of the random forest is the class selected by most trees. For regression tasks, the output is the average of the predictions of the trees.[1][2]Random forests correct for decision trees' habit ofoverfittingto theirtraining set.[3]: 587–588 The first algorithm for random decision forests was created in 1995 byTin Kam Ho[1]using therandom subspace method,[2]which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg.[4][5][6] An extension of the algorithm was developed byLeo Breiman[7]andAdele Cutler,[8]who registered[9]"Random Forests" as atrademarkin 2006 (as of 2019[update], owned byMinitab, Inc.).[10]The extension combines Breiman's "bagging" idea and random selection of features, introduced first by Ho[1]and later independently by Amit andGeman[11]in order to construct a collection of decision trees with controlled variance. The general method of random decision forests was first proposed by Salzberg and Heath in 1993,[12]with a method that used a randomized decision tree algorithm to create multiple trees and then combine them using majority voting. This idea was developed further by Ho in 1995.[1]Ho established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, as long as the forests are randomly restricted to be sensitive to only selectedfeaturedimensions. A subsequent work along the same lines[2]concluded that other splitting methods behave similarly, as long as they are randomly forced to be insensitive to some feature dimensions. This observation that a more complex classifier (a larger forest) gets more accurate nearly monotonically is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level of accuracy before being hurt by overfitting. The explanation of the forest method's resistance to overtraining can be found in Kleinberg's theory of stochastic discrimination.[4][5][6] The early development of Breiman's notion of random forests was influenced by the work of Amit and Geman[11]who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a singletree. The idea of random subspace selection from Ho[2]was also influential in the design of random forests. This method grows a forest of trees, and introduces variation among the trees by projecting the training data into a randomly chosensubspacebefore fitting each tree or each node. Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure, rather than a deterministic optimization was first introduced byThomas G. Dietterich.[13] The proper introduction of random forests was made in a paper byLeo Breiman.[7]This paper describes a method of building a forest of uncorrelated trees using aCARTlike procedure, combined with randomized node optimization andbagging. 
In addition, this paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular: The report also offers the first theoretical result for random forests in the form of a bound on thegeneralization errorwhich depends on the strength of the trees in the forest and theircorrelation. Decision trees are a popular method for various machine learning tasks. Tree learning is almost "an off-the-shelf procedure for data mining", sayHastieet al., "because it is invariant under scaling and various other transformations of feature values, is robust to inclusion of irrelevant features, and produces inspectable models. However, they are seldom accurate".[3]: 352 In particular, trees that are grown very deep tend to learn highly irregular patterns: theyoverfittheir training sets, i.e. havelow bias, but very high variance. Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance.[3]: 587–588This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance in the final model. The training algorithm for random forests applies the general technique ofbootstrap aggregating, or bagging, to tree learners. Given a training setX=x1, ...,xnwith responsesY=y1, ...,yn, bagging repeatedly (Btimes) selects arandom sample with replacementof the training set and fits trees to these samples: After training, predictions for unseen samplesx'can be made by averaging the predictions from all the individual regression trees onx': f^=1B∑b=1Bfb(x′){\displaystyle {\hat {f}}={\frac {1}{B}}\sum _{b=1}^{B}f_{b}(x')} or by taking the plurality vote in the case of classification trees. This bootstrapping procedure leads to better model performance because it decreases thevarianceof the model, without increasing the bias. This means that while the predictions of a single tree are highly sensitive to noise in its training set, the average of many trees is not, as long as the trees are not correlated. Simply training many trees on a single training set would give strongly correlated trees (or even the same tree many times, if the training algorithm is deterministic); bootstrap sampling is a way of de-correlating the trees by showing them different training sets. Additionally, an estimate of the uncertainty of the prediction can be made as the standard deviation of the predictions from all the individual regression trees onx′:σ=∑b=1B(fb(x′)−f^)2B−1.{\displaystyle \sigma ={\sqrt {\frac {\sum _{b=1}^{B}(f_{b}(x')-{\hat {f}})^{2}}{B-1}}}.} The numberBof samples (equivalently, of trees) is a free parameter. Typically, a few hundred to several thousand trees are used, depending on the size and nature of the training set.Bcan be optimized usingcross-validation, or by observing theout-of-bag error: the mean prediction error on each training samplexi, using only the trees that did not havexiin their bootstrap sample.[14] The training and test error tend to level off after some number of trees have been fit. The above procedure describes the original bagging algorithm for trees. Random forests also include another type of bagging scheme: they use a modified tree learning algorithm that selects, at each candidate split in the learning process, arandom subset of the features. This process is sometimes called "feature bagging". 
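A minimal sketch of the bagging procedure and out-of-bag error described above, assuming NumPy and scikit-learn's DecisionTreeRegressor as the base learner. This is plain bagging on synthetic data, without the per-split feature subsampling that full random forests add.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic regression data.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)

B, n = 100, len(X)                                  # number of trees / sample size
trees, oob_sum, oob_count = [], np.zeros(n), np.zeros(n)

for b in range(B):
    idx = rng.integers(0, n, size=n)                # bootstrap sample, with replacement
    tree = DecisionTreeRegressor().fit(X[idx], y[idx])
    trees.append(tree)
    oob = np.setdiff1d(np.arange(n), idx)           # points this tree never saw
    if oob.size:
        oob_sum[oob] += tree.predict(X[oob])
        oob_count[oob] += 1

# Bagged prediction: average the trees; OOB error uses only trees that skipped x_i.
x_new = np.array([[0.5]])
bagged = np.mean([t.predict(x_new)[0] for t in trees])
mask = oob_count > 0
oob_mse = np.mean((y[mask] - oob_sum[mask] / oob_count[mask]) ** 2)
print(f"bagged prediction at x=0.5: {bagged:.3f}   out-of-bag MSE: {oob_mse:.3f}")
```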
The reason for doing this is the correlation of the trees in an ordinary bootstrap sample: if one or a fewfeaturesare very strong predictors for the response variable (target output), these features will be selected in many of theBtrees, causing them to become correlated. An analysis of how bagging and random subspace projection contribute to accuracy gains under different conditions is given by Ho.[15] Typically, for a classification problem withpfeatures,√p(rounded down) features are used in each split.[3]: 592For regression problems the inventors recommendp/3(rounded down) with a minimum node size of 5 as the default.[3]: 592In practice, the best values for these parameters should be tuned on a case-to-case basis for every problem.[3]: 592 Adding one further step of randomization yieldsextremely randomized trees, or ExtraTrees. As with ordinary random forests, they are an ensemble of individual trees, but there are two main differences: (1) each tree is trained using the whole learning sample (rather than a bootstrap sample), and (2) the top-down splitting is randomized: for each feature under consideration, a number ofrandomcut-points are selected, instead of computing the locallyoptimalcut-point (based on, e.g.,information gainor theGini impurity). The values are chosen from a uniform distribution within the feature's empirical range (in the tree's training set). Then, of all the randomly chosen splits, the split that yields the highest score is chosen to split the node. Similar to ordinary random forests, the number of randomly selected features to be considered at each node can be specified. Default values for this parameter arep{\displaystyle {\sqrt {p}}}for classification andp{\displaystyle p}for regression, wherep{\displaystyle p}is the number of features in the model.[16] The basic random forest procedure may not work well in situations where there are a large number of features but only a small proportion of these features are informative with respect to sample classification. This can be addressed by encouraging the procedure to focus mainly on features and trees that are informative. Some methods for accomplishing this are: Random forests can be used to rank the importance of variables in a regression or classification problem in a natural way. The following technique was described in Breiman's original paper[7]and is implemented in theRpackagerandomForest.[8] To measure a feature's importance in a data setDn={(Xi,Yi)}i=1n{\displaystyle {\mathcal {D}}_{n}=\{(X_{i},Y_{i})\}_{i=1}^{n}}, first a random forest is trained on the data. During training, theout-of-bag errorfor each data point is recorded and averaged over the forest. (If bagging is not used during training, we can instead compute errors on an independent test set.) After training, the values of the feature are permuted in the out-of-bag samples and the out-of-bag error is again computed on this perturbed data set. The importance for the feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. The score is normalized by the standard deviation of these differences. Features which produce large values for this score are ranked as more important than features which produce small values. 
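A minimal sketch of the permutation importance just described, assuming NumPy and scikit-learn. For simplicity it permutes each feature on a held-out test set rather than on the out-of-bag samples (the independent-test-set variant mentioned above); the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

X, y = make_regression(n_samples=600, n_features=8, n_informative=3, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
baseline = np.mean((forest.predict(X_te) - y_te) ** 2)    # held-out error before permuting

importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])          # break the feature-target link
    permuted = np.mean((forest.predict(X_perm) - y_te) ** 2)
    importances.append(permuted - baseline)               # increase in error = importance

for j, imp in enumerate(importances):
    print(f"feature {j}: {imp:.1f}")
```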
The statistical definition of the variable importance measure was given and analyzed by Zhu et al.[23] This method of determining variable importance has some drawbacks: This approach to feature importance for random forests considers as important those variables which substantially decrease the impurity during splitting.[31] It is described in the book Classification and Regression Trees by Leo Breiman[32] and is the default implementation in scikit-learn and R. The definition is: {\displaystyle {\text{unnormalized average importance}}(x)={\frac {1}{n_{T}}}\sum _{i=1}^{n_{T}}\sum _{{\text{node }}j\in T_{i}|{\text{split variable}}(j)=x}p_{T_{i}}(j)\Delta i_{T_{i}}(j),} where x{\displaystyle x} is the feature, nT{\displaystyle n_{T}} is the number of trees in the forest, Ti{\displaystyle T_{i}} is tree i{\displaystyle i}, pTi(j){\displaystyle p_{T_{i}}(j)} is the fraction of samples reaching node j{\displaystyle j}, and ΔiTi(j){\displaystyle \Delta i_{T_{i}}(j)} is the change in impurity at node j{\displaystyle j} of tree i{\displaystyle i}. As the impurity measure for samples falling in a node, e.g. the following statistics can be used: The normalized importance is then obtained by normalizing over all features, so that the sum of normalized feature importances is 1. The scikit-learn default implementation can report misleading feature importance:[30] A relationship between random forests and the k-nearest neighbor algorithm (k-NN) was pointed out by Lin and Jeon in 2002.[34] Both can be viewed as so-called weighted neighborhood schemes. These are models built from a training set {(xi,yi)}i=1n{\displaystyle \{(x_{i},y_{i})\}_{i=1}^{n}} that make predictions y^{\displaystyle {\hat {y}}} for new points x' by looking at the "neighborhood" of the point, formalized by a weight function W: y^=∑i=1nW(xi,x′)yi.{\displaystyle {\hat {y}}=\sum _{i=1}^{n}W(x_{i},x')\,y_{i}.} Here, W(xi,x′){\displaystyle W(x_{i},x')} is the non-negative weight of the i'th training point relative to the new point x' in the same tree. For any x', the weights for points xi{\displaystyle x_{i}} must sum to 1. Weight functions are as follows: Since a forest averages the predictions of a set of m trees with individual weight functions Wj{\displaystyle W_{j}}, its predictions are y^=1m∑j=1m∑i=1nWj(xi,x′)yi=∑i=1n(1m∑j=1mWj(xi,x′))yi.{\displaystyle {\hat {y}}={\frac {1}{m}}\sum _{j=1}^{m}\sum _{i=1}^{n}W_{j}(x_{i},x')\,y_{i}=\sum _{i=1}^{n}\left({\frac {1}{m}}\sum _{j=1}^{m}W_{j}(x_{i},x')\right)\,y_{i}.} This shows that the whole forest is again a weighted neighborhood scheme, with weights that average those of the individual trees. The neighbors of x' in this interpretation are the points xi{\displaystyle x_{i}} sharing the same leaf in any tree j{\displaystyle j}. In this way, the neighborhood of x' depends in a complex way on the structure of the trees, and thus on the structure of the training set. Lin and Jeon show that the shape of the neighborhood used by a random forest adapts to the local importance of each feature.[34] As part of their construction, random forest predictors naturally lead to a dissimilarity measure among observations. One can analogously define dissimilarity between unlabeled data, by training a forest to distinguish original "observed" data from suitably generated synthetic data drawn from a reference distribution.[7][35] A random forest dissimilarity is attractive because it handles mixed variable types very well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. Random forest dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the "Addcl 1" random forest dissimilarity weighs the contribution of each variable according to how dependent it is on other variables.
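The impurity-based (mean decrease in impurity) importances described above are what scikit-learn's forest implementations expose by default. A minimal sketch, assuming scikit-learn, on synthetic data; the feature_importances_ attribute holds the normalized values, which sum to 1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only a few of the features are informative.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Mean decrease in impurity, averaged over trees and normalized to sum to 1.
mdi = forest.feature_importances_
print("sum of normalized importances:", mdi.sum())
for j in np.argsort(mdi)[::-1][:5]:
    print(f"feature {j}: {mdi[j]:.3f}")
```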
Random forest dissimilarity has been used in a variety of applications, e.g. to find clusters of patients based on tissue marker data.[36] Instead of decision trees, linear models have been proposed and evaluated as base estimators in random forests, in particularmultinomial logistic regressionandnaive Bayes classifiers.[37][38][39]In cases that the relationship between the predictors and the target variable is linear, the base learners may have an equally high accuracy as the ensemble learner.[40][37] In machine learning, kernel random forests (KeRF) establish the connection between random forests andkernel methods. By slightly modifying their definition, random forests can be rewritten askernel methods, which are more interpretable and easier to analyze.[41] Leo Breiman[42]was the first person to notice the link between random forest andkernel methods. He pointed out that random forests trained usingi.i.d.random vectors in the tree construction are equivalent to a kernel acting on the true margin. Lin and Jeon[43]established the connection between random forests and adaptive nearest neighbor, implying that random forests can be seen as adaptive kernel estimates. Davies and Ghahramani[44]proposed Kernel Random Forest (KeRF) and showed that it can empirically outperform state-of-art kernel methods. Scornet[41]first defined KeRF estimates and gave the explicit link between KeRF estimates and random forest. He also gave explicit expressions for kernels based on centered random forest[45]and uniform random forest,[46]two simplified models of random forest. He named these two KeRFs Centered KeRF and Uniform KeRF, and proved upper bounds on their rates of consistency. Centered forest[45]is a simplified model for Breiman's original random forest, which uniformly selects an attribute among all attributes and performs splits at the center of the cell along the pre-chosen attribute. The algorithm stops when a fully binary tree of levelk{\displaystyle k}is built, wherek∈N{\displaystyle k\in \mathbb {N} }is a parameter of the algorithm. Uniform forest[46]is another simplified model for Breiman's original random forest, which uniformly selects a feature among all features and performs splits at a point uniformly drawn on the side of the cell, along the preselected feature. Given a training sampleDn={(Xi,Yi)}i=1n{\displaystyle {\mathcal {D}}_{n}=\{(\mathbf {X} _{i},Y_{i})\}_{i=1}^{n}}of[0,1]p×R{\displaystyle [0,1]^{p}\times \mathbb {R} }-valued independent random variables distributed as the independent prototype pair(X,Y){\displaystyle (\mathbf {X} ,Y)}, whereE⁡[Y2]<∞{\displaystyle \operatorname {E} [Y^{2}]<\infty }. We aim at predicting the responseY{\displaystyle Y}, associated with the random variableX{\displaystyle \mathbf {X} }, by estimating the regression functionm(x)=E⁡[Y∣X=x]{\displaystyle m(\mathbf {x} )=\operatorname {E} [Y\mid \mathbf {X} =\mathbf {x} ]}. A random regression forest is an ensemble ofM{\displaystyle M}randomized regression trees. Denotemn(x,Θj){\displaystyle m_{n}(\mathbf {x} ,\mathbf {\Theta } _{j})}the predicted value at pointx{\displaystyle \mathbf {x} }by thej{\displaystyle j}-th tree, whereΘ1,…,ΘM{\displaystyle \mathbf {\Theta } _{1},\ldots ,\mathbf {\Theta } _{M}}are independent random variables, distributed as a generic random variableΘ{\displaystyle \mathbf {\Theta } }, independent of the sampleDn{\displaystyle {\mathcal {D}}_{n}}. This random variable can be used to describe the randomness induced by node splitting and the sampling procedure for tree construction. 
The trees are combined to form the finite forest estimatemM,n(x,Θ1,…,ΘM)=1M∑j=1Mmn(x,Θj){\displaystyle m_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})={\frac {1}{M}}\sum _{j=1}^{M}m_{n}(\mathbf {x} ,\Theta _{j})}. For regression trees, we havemn=∑i=1nYi1Xi∈An(x,Θj)Nn(x,Θj){\displaystyle m_{n}=\sum _{i=1}^{n}{\frac {Y_{i}\mathbf {1} _{\mathbf {X} _{i}\in A_{n}(\mathbf {x} ,\Theta _{j})}}{N_{n}(\mathbf {x} ,\Theta _{j})}}}, whereAn(x,Θj){\displaystyle A_{n}(\mathbf {x} ,\Theta _{j})}is the cell containingx{\displaystyle \mathbf {x} }, designed with randomnessΘj{\displaystyle \Theta _{j}}and datasetDn{\displaystyle {\mathcal {D}}_{n}}, andNn(x,Θj)=∑i=1n1Xi∈An(x,Θj){\displaystyle N_{n}(\mathbf {x} ,\Theta _{j})=\sum _{i=1}^{n}\mathbf {1} _{\mathbf {X} _{i}\in A_{n}(\mathbf {x} ,\Theta _{j})}}. Thus random forest estimates satisfy, for allx∈[0,1]d{\displaystyle \mathbf {x} \in [0,1]^{d}},mM,n(x,Θ1,…,ΘM)=1M∑j=1M(∑i=1nYi1Xi∈An(x,Θj)Nn(x,Θj)){\displaystyle m_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})={\frac {1}{M}}\sum _{j=1}^{M}\left(\sum _{i=1}^{n}{\frac {Y_{i}\mathbf {1} _{\mathbf {X} _{i}\in A_{n}(\mathbf {x} ,\Theta _{j})}}{N_{n}(\mathbf {x} ,\Theta _{j})}}\right)}. Random regression forest has two levels of averaging, first over the samples in the target cell of a tree, then over all trees. Thus the contributions of observations that are in cells with a high density of data points are smaller than that of observations which belong to less populated cells. In order to improve the random forest methods and compensate the misestimation, Scornet[41]defined KeRF bym~M,n(x,Θ1,…,ΘM)=1∑j=1MNn(x,Θj)∑j=1M∑i=1nYi1Xi∈An(x,Θj),{\displaystyle {\tilde {m}}_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})={\frac {1}{\sum _{j=1}^{M}N_{n}(\mathbf {x} ,\Theta _{j})}}\sum _{j=1}^{M}\sum _{i=1}^{n}Y_{i}\mathbf {1} _{\mathbf {X} _{i}\in A_{n}(\mathbf {x} ,\Theta _{j})},}which is equal to the mean of theYi{\displaystyle Y_{i}}'s falling in the cells containingx{\displaystyle \mathbf {x} }in the forest. If we define the connection function of theM{\displaystyle M}finite forest asKM,n(x,z)=1M∑j=1M1z∈An(x,Θj){\displaystyle K_{M,n}(\mathbf {x} ,\mathbf {z} )={\frac {1}{M}}\sum _{j=1}^{M}\mathbf {1} _{\mathbf {z} \in A_{n}(\mathbf {x} ,\Theta _{j})}}, i.e. the proportion of cells shared betweenx{\displaystyle \mathbf {x} }andz{\displaystyle \mathbf {z} }, then almost surely we havem~M,n(x,Θ1,…,ΘM)=∑i=1nYiKM,n(x,xi)∑ℓ=1nKM,n(x,xℓ){\displaystyle {\tilde {m}}_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})={\frac {\sum _{i=1}^{n}Y_{i}K_{M,n}(\mathbf {x} ,\mathbf {x} _{i})}{\sum _{\ell =1}^{n}K_{M,n}(\mathbf {x} ,\mathbf {x} _{\ell })}}}, which defines the KeRF. 
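A rough sketch of the KeRF estimate defined above, assuming scikit-learn and NumPy: the leaf assignments of a fitted forest are used as the cells A_n(x, Θ_j), the connection function K_{M,n}(x, x_i) is the fraction of trees in which x_i shares a leaf with the query point, and the prediction is the correspondingly weighted mean of the Y_i. This treats the fitted forest's partitions as given and is illustrative rather than a faithful reproduction of the centered or uniform forest models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)

# Synthetic training data on [0, 1]^2.
X = rng.uniform(0, 1, size=(400, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=400)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def kerf_predict(forest, X_train, y_train, x_query):
    """KeRF estimate: y_i weighted by the fraction of trees in which x_i
    falls in the same leaf (cell) as the query point."""
    leaves_train = forest.apply(X_train)            # shape (n_samples, n_trees)
    leaves_query = forest.apply(x_query)            # shape (n_queries, n_trees)
    preds = np.empty(len(x_query))
    for q, leaf_row in enumerate(leaves_query):
        connection = (leaves_train == leaf_row).mean(axis=1)   # K_{M,n}(x, x_i)
        preds[q] = np.dot(connection, y_train) / connection.sum()
    return preds

x_new = np.array([[0.3, 0.7]])
print("KeRF estimate:", kerf_predict(forest, X, y, x_new)[0],
      "  forest average:", forest.predict(x_new)[0])
```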
The construction of Centered KeRF of level k{\displaystyle k} is the same as for centered forest, except that predictions are made by m~M,n(x,Θ1,…,ΘM){\displaystyle {\tilde {m}}_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})}; the corresponding kernel function, or connection function, is Kkcc(x,z)=∑k1,…,kd,∑j=1dkj=kk!k1!⋯kd!(1d)k∏j=1d1⌈2kjxj⌉=⌈2kjzj⌉,for all x,z∈[0,1]d.{\displaystyle K_{k}^{cc}(\mathbf {x} ,\mathbf {z} )=\sum _{k_{1},\ldots ,k_{d},\sum _{j=1}^{d}k_{j}=k}{\frac {k!}{k_{1}!\cdots k_{d}!}}\left({\frac {1}{d}}\right)^{k}\prod _{j=1}^{d}\mathbf {1} _{\lceil 2^{k_{j}}x_{j}\rceil =\lceil 2^{k_{j}}z_{j}\rceil },\qquad {\text{ for all }}\mathbf {x} ,\mathbf {z} \in [0,1]^{d}.} Uniform KeRF is built in the same way as uniform forest, except that predictions are made by m~M,n(x,Θ1,…,ΘM){\displaystyle {\tilde {m}}_{M,n}(\mathbf {x} ,\Theta _{1},\ldots ,\Theta _{M})}; the corresponding kernel function, or connection function, is Kkuf(0,x)=∑k1,…,kd,∑j=1dkj=kk!k1!…kd!(1d)k∏m=1d(1−|xm|∑j=0km−1(−ln|xm|)jj!)for all x∈[0,1]d.{\displaystyle K_{k}^{uf}(\mathbf {0} ,\mathbf {x} )=\sum _{k_{1},\ldots ,k_{d},\sum _{j=1}^{d}k_{j}=k}{\frac {k!}{k_{1}!\ldots k_{d}!}}\left({\frac {1}{d}}\right)^{k}\prod _{m=1}^{d}\left(1-|x_{m}|\sum _{j=0}^{k_{m}-1}{\frac {\left(-\ln |x_{m}|\right)^{j}}{j!}}\right){\text{ for all }}\mathbf {x} \in [0,1]^{d}.} Predictions given by KeRF and random forests are close if the number of points in each cell is controlled: Assume that there exist sequences (an),(bn){\displaystyle (a_{n}),(b_{n})} such that, almost surely, an≤Nn(x,Θ)≤bn and an≤1M∑m=1MNn(x,Θm)≤bn.{\displaystyle a_{n}\leq N_{n}(\mathbf {x} ,\Theta )\leq b_{n}{\text{ and }}a_{n}\leq {\frac {1}{M}}\sum _{m=1}^{M}N_{n}(\mathbf {x} ,\Theta _{m})\leq b_{n}.} Then almost surely, |mM,n(x)−m~M,n(x)|≤bn−ananm~M,n(x).{\displaystyle |m_{M,n}(\mathbf {x} )-{\tilde {m}}_{M,n}(\mathbf {x} )|\leq {\frac {b_{n}-a_{n}}{a_{n}}}{\tilde {m}}_{M,n}(\mathbf {x} ).} When the number of trees M{\displaystyle M} goes to infinity, we have the infinite random forest and the infinite KeRF. Their estimates are close if the number of observations in each cell is bounded: Assume that there exist sequences (εn),(an),(bn){\displaystyle (\varepsilon _{n}),(a_{n}),(b_{n})} such that, almost surely Then almost surely, |m∞,n(x)−m~∞,n(x)|≤bn−ananm~∞,n(x)+nεn(max1≤i≤nYi).{\displaystyle |m_{\infty ,n}(\mathbf {x} )-{\tilde {m}}_{\infty ,n}(\mathbf {x} )|\leq {\frac {b_{n}-a_{n}}{a_{n}}}{\tilde {m}}_{\infty ,n}(\mathbf {x} )+n\varepsilon _{n}\left(\max _{1\leq i\leq n}Y_{i}\right).} Assume that Y=m(X)+ε{\displaystyle Y=m(\mathbf {X} )+\varepsilon }, where ε{\displaystyle \varepsilon } is centered Gaussian noise, independent of X{\displaystyle \mathbf {X} }, with finite variance σ2<∞{\displaystyle \sigma ^{2}<\infty }. Moreover, X{\displaystyle \mathbf {X} } is uniformly distributed on [0,1]d{\displaystyle [0,1]^{d}} and m{\displaystyle m} is Lipschitz. Scornet[41] proved upper bounds on the rates of consistency for centered KeRF and uniform KeRF. Providing k→∞{\displaystyle k\rightarrow \infty } and n/2k→∞{\displaystyle n/2^{k}\rightarrow \infty }, there exists a constant C1>0{\displaystyle C_{1}>0} such that, for all n{\displaystyle n}, E[m~ncc(X)−m(X)]2≤C1n−1/(3+dlog2)(logn)2{\displaystyle \mathbb {E} [{\tilde {m}}_{n}^{cc}(\mathbf {X} )-m(\mathbf {X} )]^{2}\leq C_{1}n^{-1/(3+d\log 2)}(\log n)^{2}}. 
Providing k→∞{\displaystyle k\rightarrow \infty } and n/2k→∞{\displaystyle n/2^{k}\rightarrow \infty }, there exists a constant C>0{\displaystyle C>0} such that E[m~nuf(X)−m(X)]2≤Cn−2/(6+3dlog2)(logn)2{\displaystyle \mathbb {E} [{\tilde {m}}_{n}^{uf}(\mathbf {X} )-m(\mathbf {X} )]^{2}\leq Cn^{-2/(6+3d\log 2)}(\log n)^{2}}. While random forests often achieve higher accuracy than a single decision tree, they sacrifice the intrinsic interpretability of decision trees. Decision trees are among a fairly small family of machine learning models that are easily interpretable, along with linear models, rule-based models, and attention-based models. This interpretability is one of the main advantages of decision trees. It allows developers to confirm that the model has learned realistic information from the data and allows end-users to have trust and confidence in the decisions made by the model.[37][3] For example, following the path that a decision tree takes to make its decision is quite trivial, but following the paths of tens or hundreds of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming a random forest into a minimal "born-again" decision tree that faithfully reproduces the same decision function.[37][47][48] Another limitation of random forests is that if features are linearly correlated with the target, random forest may not enhance the accuracy of the base learner.[37][40] The same limitation applies in problems with multiple categorical variables.[49]
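The cited "born-again" construction has its own specific algorithm; as a loose, generic illustration of the same compression idea (fitting one interpretable tree to a forest's decision function), one can distill a scikit-learn forest into a single surrogate tree on synthetic data. This is plain model distillation, not the exact method of the cited papers, and all dataset and parameter choices below are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real problem.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Fit a single, readable tree to the *forest's* labels rather than the raw labels,
# so that the tree approximates the forest's decision function.
surrogate = DecisionTreeClassifier(max_depth=6, random_state=0)
surrogate.fit(X_train, forest.predict(X_train))

print("forest accuracy:   ", forest.score(X_test, y_test))
print("surrogate accuracy:", surrogate.score(X_test, y_test))
print("fidelity to forest:", surrogate.score(X_test, forest.predict(X_test)))
```

The "fidelity" score measures how often the single tree agrees with the forest, which is the quantity such compression techniques try to drive toward one.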
https://en.wikipedia.org/wiki/Random_forest
RDF Schema (Resource Description Framework Schema, variously abbreviated as RDFS, RDF(S), RDF-S, or RDF/S) is a set of classes with certain properties using the RDF extensible knowledge representation data model, providing basic elements for the description of ontologies. It uses various forms of RDF vocabularies, intended to structure RDF resources. RDF and RDFS can be saved in a triplestore; one can then extract some knowledge from them using a query language, like SPARQL. The first version[1][4] was published by the World-Wide Web Consortium (W3C) in April 1998, and the final W3C recommendation was released in February 2014.[3] Many RDFS components are included in the more expressive Web Ontology Language (OWL). RDFS constructs are the RDFS classes, associated properties and utility properties built on the vocabulary of RDF.[5][6][7] A typical example of an rdfs:Class is foaf:Person in the Friend of a Friend (FOAF) vocabulary.[8] An instance of foaf:Person is a resource that is linked to the class foaf:Person using the rdf:type property, such as in the following formal expression of the natural-language sentence: 'John is a Person'. The definition of rdfs:Class is recursive: rdfs:Class is the class of classes, and so it is an instance of itself. The other classes described by the RDF and RDFS specifications are: Properties are instances of the class rdf:Property and describe a relation between subject resources and object resources. When used as such, a property is a predicate (see also RDF: reification). For example, the following declarations are used to express that the property ex:employer relates a subject, which is of type foaf:Person, to an object, which is of type foaf:Organization: Given the previous two declarations, from the triple ex:John ex:employer ex:CompanyX it can be inferred (resp. follows) that ex:John is a foaf:Person, and ex:CompanyX is a foaf:Organization. For example, the following declares that 'Every Person is an Agent': Hierarchies of classes support inheritance of a property domain and range (see definitions in the next section) from a class to its subclasses. An entailment regime defines whether the triples in a graph are logically contradictory or not. RDFS entailment[11] is not very restrictive, i.e. it does not contain a large number of rules (compared, for example, to OWL) limiting what kind of statements are valid in the graph. On the other hand, it is also not very expressive, meaning that the semantics that can be represented in a machine-interpretable way with the graph is quite limited. Below is a simple example of the capabilities and limits of RDFS entailment; we start with a graph containing the following explicit triples: Without enabling inferencing with RDFS entailment, the data we have does not tell us whether foo:SomeElephant is a bar:Animal. When we do RDFS-based inferencing, we will get the following extra triple: The rdfs:domain statement dictates that any subject in triples where bar:livesInZoo is the predicate is of type bar:Animal. What RDFS entailment is not able to tell us is the relationship between bar:Animal and bar:Elephant. Due to inferencing we now know that foo:SomeElephant is both a bar:Animal and a bar:Elephant, so these classes do intersect, but there is no information to deduce whether they merely intersect, are equal, or have a subclass relationship. In RDFS 1.1, the domain and range statements do not carry any formal meaning and their interpretation is left up to the implementer. On the other hand, in the 1.2 Working Draft they are used as entailment rules for inferencing the types of individuals. 
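A small sketch of the elephant example above, using rdflib and applying the rdfs:domain rule by hand (rdflib by itself does not perform RDFS entailment; the foo: and bar: namespace URIs below are made up for the example):

```python
from rdflib import Graph, Namespace, RDF, RDFS

FOO = Namespace("http://example.org/foo#")   # hypothetical namespaces for the example
BAR = Namespace("http://example.org/bar#")

g = Graph()
g.add((BAR.livesInZoo, RDFS.domain, BAR.Animal))       # schema triple
g.add((FOO.SomeElephant, RDF.type, BAR.Elephant))      # explicit data triples
g.add((FOO.SomeElephant, BAR.livesInZoo, FOO.SomeZoo))

# rdfs:domain rule: if (p rdfs:domain C) and (s p o), then (s rdf:type C).
for prop, _, cls in list(g.triples((None, RDFS.domain, None))):
    for s, _, o in list(g.triples((None, prop, None))):
        g.add((s, RDF.type, cls))

# After this manual inferencing step the extra type triple is present:
print((FOO.SomeElephant, RDF.type, BAR.Animal) in g)   # True
```

Nothing in this rule (or in RDFS entailment generally) relates bar:Animal and bar:Elephant to each other, which is exactly the limitation described in the text.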
Nevertheless, in both versions it is very clearly stated that the expected functionality of range is that "the values of a property are instances of one or more classes" and of domain that "any resource that has a given property is an instance of one or more classes". The example above demonstrated some of the limits and capabilities of RDFS entailment, but did not show an example of a logical inconsistency (which could, in layman's terms, be interpreted as a "validation error"), meaning that the statements the triples make are in conflict and try to express contradictory states of affairs. An example of this in RDFS would be having conflicting datatypes for objects (e.g. a resource declared to be of type xsd:integer while also being declared to be of type xsd:boolean, when inferencing is enabled). RDF vocabularies represented in RDFS include:[10]
https://en.wikipedia.org/wiki/RDF_schema#Range_and_domain
A decision tree is a decision support recursive partitioning structure that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements. Decision trees are commonly used in operations research, specifically in decision analysis,[1] to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning. A decision tree is a flowchart-like structure in which each internal node represents a test on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (the decision taken after computing all attributes). The paths from root to leaf represent classification rules. In decision analysis, a decision tree and the closely related influence diagram are used as a visual and analytical decision support tool, where the expected values (or expected utility) of competing alternatives are calculated. A decision tree consists of three types of nodes:[2] Decision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm.[citation needed] Another use of decision trees is as a descriptive means for calculating conditional probabilities. Decision trees, influence diagrams, utility functions, and other decision analysis tools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research or management science methods. These tools are also used to predict decisions of householders in normal and emergency scenarios.[3][4] Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths); used manually, they can therefore grow very big and are then often hard to draw fully by hand. Traditionally, decision trees have been created manually – as the aside example shows – although increasingly, specialized software is employed. The decision tree can be linearized into decision rules,[5] where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form: if condition1 and condition2 and condition3 then outcome. Decision rules can be generated by constructing association rules with the target variable on the right. They can also denote temporal or causal relations.[6] Commonly a decision tree is drawn using flowchart symbols as it is easier for many to read and understand. Note there is a conceptual error in the "Proceed" calculation of the tree shown below; the error relates to the calculation of "costs" awarded in a legal action. Analysis can take into account the decision maker's (e.g., the company's) preference or utility function, for example: The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K; in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B"). Another example, commonly used in operations research courses, is the distribution of lifeguards on beaches (a.k.a. the "Life's a Beach" example).[7] The example describes two beaches with lifeguards to be distributed on each beach. 
There is a maximum budget B that can be distributed among the two beaches (in total), and using a marginal returns table, analysts can decide how many lifeguards to allocate to each beach. In this example, a decision tree can be drawn to illustrate the principles of diminishing returns on beach #1. The decision tree illustrates that when sequentially distributing lifeguards, placing a first lifeguard on beach #1 would be optimal if there is only the budget for one lifeguard. But if there is a budget for two guards, then placing both on beach #2 would prevent more overall drownings. Much of the information in a decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events. Decision trees can also be seen as generative models of induction rules from empirical data. An optimal decision tree is then defined as a tree that accounts for most of the data, while minimizing the number of levels (or "questions").[8] Several algorithms to generate such optimal trees have been devised, such as ID3/4/5,[9] CLS, ASSISTANT, and CART. Among decision support tools, decision trees (and influence diagrams) have several advantages. Decision trees: Disadvantages of decision trees: A few things should be considered when improving the accuracy of the decision tree classifier. The following are some possible optimizations to consider when looking to make sure the decision tree model produced makes the correct decision or classification. Note that these things are not the only things to consider but only some. The accuracy of the decision tree can change based on the depth of the decision tree. In many cases, the tree's leaves are pure nodes.[11] When a node is pure, it means that all the data in that node belongs to a single class.[12] For example, if the classes in the data set are Cancer and Non-Cancer, a leaf node would be considered pure when all the sample data in a leaf node is part of only one class, either cancer or non-cancer. It is important to note that a deeper tree is not always better when optimizing the decision tree. A deeper tree can influence the runtime in a negative way. If a certain classification algorithm is being used, then a deeper tree could mean the runtime of this classification algorithm is significantly slower. There is also the possibility that the actual algorithm building the decision tree will get significantly slower as the tree gets deeper. If the tree-building algorithm being used splits pure nodes, then a decrease in the overall accuracy of the tree classifier could be experienced. Occasionally, going deeper in the tree can cause an accuracy decrease in general, so it is very important to test modifying the depth of the decision tree and selecting the depth that produces the best results. To summarize, observe the points below; we will define the number D as the depth of the tree. Possible advantages of increasing the number D: Possible disadvantages of increasing D: The ability to test the differences in classification results when changing D is imperative. We must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree model. The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using the information-gain function may yield better results than using the phi function. The phi function is known as a measure of "goodness" of a candidate split at a node in the decision tree. 
The information gain function is known as a measure of the "reduction in entropy". In the following, we will build two decision trees. One decision tree will be built using the phi function to split the nodes and one decision tree will be built using the information gain function to split the nodes. The main advantages and disadvantages of information gain and the phi function: This is the information gain function formula. The formula states that the information gain is a function of the entropy of a node of the decision tree minus the entropy of a candidate split at node t of a decision tree. This is the phi function formula. The phi function is maximized when the chosen feature splits the samples in a way that produces homogeneous splits that have around the same number of samples in each split. We will set D, which is the depth of the decision tree we are building, to three (D = 3). We also have the following data set of cancer and non-cancer samples and the mutation features that the samples either have or do not have. If a sample has a feature mutation then the sample is positive for that mutation, and it will be represented by one. If a sample does not have a feature mutation then the sample is negative for that mutation, and it will be represented by zero. To summarize, C stands for cancer and NC stands for non-cancer. The letter M stands for mutation, and if a sample has a particular mutation it will show up in the table as a one and otherwise zero. Now, we can use the formulas to calculate the phi function values and information gain values for each M in the dataset. Once all the values are calculated the tree can be produced. The first thing to be done is to select the root node. In information gain and the phi function we consider the optimal split to be the mutation that produces the highest value for information gain or the phi function. Now assume that M1 has the highest phi function value and M4 has the highest information gain value. The M1 mutation will be the root of our phi function tree and M4 will be the root of our information gain tree. You can observe the root nodes below. Now, once we have chosen the root node we can split the samples into two groups based on whether a sample is positive or negative for the root node mutation. The groups will be called group A and group B. For example, if we use M1 to split the samples in the root node we get NC2 and C2 samples in group A and the rest of the samples NC4, NC3, NC1, C1 in group B. Disregarding the mutation chosen for the root node, proceed to place the next best features that have the highest values for information gain or the phi function in the left or right child nodes of the decision tree. Once we choose the root node and the two child nodes for the tree of depth = 3 we can just add the leaves. The leaves will represent the final classification decision the model has produced based on the mutations a sample either has or does not have. The left tree is the decision tree we obtain from using information gain to split the nodes and the right tree is what we obtain from using the phi function to split the nodes. Now assume the classification results from both trees are given using a confusion matrix. Information gain confusion matrix: Phi function confusion matrix: The tree built using information gain gives the same accuracy as the tree built using the phi function. 
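Since neither the formulas nor the mutation table are reproduced in this text, the following sketch uses the usual forms of the two split criteria on a made-up stand-in for the dataset: information gain as parent entropy minus the weighted entropy of the split, and one common version of the "goodness of split" phi function, 2·P_L·P_R·Σ_j |P(j|t_L) − P(j|t_R)|. Whether these match the omitted formulas exactly, and the data values themselves, are assumptions made for illustration.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(labels, feature):
    """Entropy of the node minus the weighted entropy of the split on a 0/1 feature."""
    left, right = labels[feature == 1], labels[feature == 0]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w_l, w_r = len(left) / len(labels), len(right) / len(labels)
    return entropy(labels) - (w_l * entropy(left) + w_r * entropy(right))

def phi(labels, feature):
    """A common 'goodness of split' measure: 2 * P_L * P_R * sum_j |P(j|L) - P(j|R)|."""
    left, right = labels[feature == 1], labels[feature == 0]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    p_l, p_r = len(left) / len(labels), len(right) / len(labels)
    q = sum(abs((left == c).mean() - (right == c).mean()) for c in np.unique(labels))
    return 2 * p_l * p_r * q

# Hypothetical stand-in for the omitted mutation table: rows are samples,
# columns M1..M4 are 0/1 mutation indicators, y is cancer (1) vs non-cancer (0).
X = np.array([[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 0],
              [0, 1, 0, 0], [0, 0, 0, 1], [0, 1, 1, 0]])
y = np.array([1, 1, 0, 0, 0, 0])
for j in range(X.shape[1]):
    print(f"M{j + 1}: gain={information_gain(y, X[:, j]):.3f}  phi={phi(y, X[:, j]):.3f}")
```

Whichever mutation maximizes the chosen criterion would become the root node, and the same computation is repeated on each resulting group for the child nodes.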
When we classify the samples based on the model using information gain, we get one true positive, one false positive, zero false negatives, and four true negatives. For the model using the phi function, we get two true positives, zero false positives, one false negative, and three true negatives. The next step is to evaluate the effectiveness of the decision tree using some key metrics that will be discussed in the evaluating a decision tree section below. The metrics that will be discussed below can help determine the next steps to be taken when optimizing the decision tree. The above information is not where it ends for building and optimizing a decision tree. There are many techniques for improving the decision tree classification models we build. One of the techniques is making our decision tree model from a bootstrapped dataset. The bootstrapped dataset helps remove the bias that occurs when building a decision tree model with the same data the model is tested with. The ability to leverage the power of random forests can also help significantly improve the overall accuracy of the model being built. This method generates many decisions from many decision trees and tallies up the votes from each decision tree to make the final classification. There are many techniques, but the main objective is to test building your decision tree model in different ways to make sure it reaches the highest performance level possible. It is important to know the measurements used to evaluate decision trees. The main metrics used are accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. All these measurements are derived from the number of true positives, false positives, true negatives, and false negatives obtained when running a set of samples through the decision tree classification model. Also, a confusion matrix can be made to display these results. All these main metrics tell something different about the strengths and weaknesses of the classification model built based on your decision tree. For example, a low sensitivity with high specificity could indicate the classification model built from the decision tree does not do well identifying cancer samples over non-cancer samples. Let us take the confusion matrix below. We will now calculate the values accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. 
Accuracy: Accuracy=(TP+TN)/(TP+TN+FP+FN){\displaystyle {\text{Accuracy}}=(TP+TN)/(TP+TN+FP+FN)} =(11+105)/162=71.60%{\displaystyle =(11+105)/162=71.60\%} Sensitivity (TPR – true positive rate):[14] TPR=TP/(TP+FN){\displaystyle {\text{TPR}}=TP/(TP+FN)} =11/(11+45)=19.64%{\displaystyle =11/(11+45)=19.64\%} Specificity (TNR – true negative rate): TNR=TN/(TN+FP){\displaystyle {\text{TNR}}=TN/(TN+FP)} =105/(105+1)=99.06%{\displaystyle =105/(105+1)=99.06\%} Precision (PPV – positive predictive value): PPV=TP/(TP+FP){\displaystyle {\text{PPV}}=TP/(TP+FP)} =11/(11+1)=91.67%{\displaystyle =11/(11+1)=91.67\%} Miss Rate (FNR – false negative rate): FNR=FN/(FN+TP){\displaystyle {\text{FNR}}=FN/(FN+TP)} =45/(45+11)=80.36%{\displaystyle =45/(45+11)=80.36\%} False discovery rate (FDR): FDR=FP/(FP+TP){\displaystyle {\text{FDR}}=FP/(FP+TP)} =1/(1+11)=8.33%{\displaystyle =1/(1+11)=8.33\%} False omission rate (FOR): FOR=FN/(FN+TN){\displaystyle {\text{FOR}}=FN/(FN+TN)} =45/(45+105)=30.00%{\displaystyle =45/(45+105)=30.00\%} Once we have calculated the key metrics, we can make some initial conclusions on the performance of the decision tree model built. The accuracy that we calculated was 71.60%. The accuracy value is good to start, but we would like to get our models as accurate as possible while maintaining the overall performance. The sensitivity value of 19.64% means that only 19.64% of the samples that were actually positive for cancer tested positive. The specificity value of 99.06% means that 99.06% of the samples that were negative for cancer actually tested negative. When it comes to sensitivity and specificity, it is important to have a balance between the two values, so if we can decrease our specificity to increase the sensitivity, that would prove to be beneficial.[15] These are just a few examples of how to use these values and the meanings behind them to evaluate the decision tree model and improve upon the next iteration.
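The arithmetic above can be checked directly from the four counts of the confusion matrix (TP = 11, FP = 1, FN = 45, TN = 105); a minimal script, with values rounded to two decimal places:

```python
TP, FP, FN, TN = 11, 1, 45, 105   # counts from the confusion matrix above

metrics = {
    "accuracy":              (TP + TN) / (TP + TN + FP + FN),
    "sensitivity (TPR)":     TP / (TP + FN),
    "specificity (TNR)":     TN / (TN + FP),
    "precision (PPV)":       TP / (TP + FP),
    "miss rate (FNR)":       FN / (FN + TP),
    "false discovery rate":  FP / (FP + TP),
    "false omission rate":   FN / (FN + TN),
}
for name, value in metrics.items():
    print(f"{name:22s} {value:.2%}")   # e.g. accuracy 71.60%, TPR 19.64%, FDR 8.33%
```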
https://en.wikipedia.org/wiki/Decision_tree#Pruning
Innatural language processing, aword embeddingis a representation of a word. Theembeddingis used intext analysis. Typically, the representation is areal-valuedvector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning.[1]Word embeddings can be obtained usinglanguage modelingandfeature learningtechniques, where words or phrases from the vocabulary are mapped tovectorsofreal numbers. Methods to generate this mapping includeneural networks,[2]dimensionality reductionon the wordco-occurrence matrix,[3][4][5]probabilistic models,[6]explainable knowledge base method,[7]and explicit representation in terms of the context in which words appear.[8] Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such assyntactic parsing[9]andsentiment analysis.[10] Indistributional semantics, a quantitative methodological approach for understanding meaning in observed language, word embeddings or semanticfeature spacemodels have been used as a knowledge representation for some time.[11]Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was proposed in a 1957 article byJohn Rupert Firth,[12]but also has roots in the contemporaneous work on search systems[13]and in cognitive psychology.[14] The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is thevector space modelfor information retrieval.[15][16][17]Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf.curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such assingular value decompositionthen led to the introduction oflatent semantic analysisin the late 1980s and therandom indexingapproach for collecting word co-occurrence contexts.[18][19][20][21]In 2000,Bengioet al. 
provided, in a series of papers titled "Neural probabilistic language models", a way to reduce the high dimensionality of word representations in contexts by "learning a distributed representation for words".[22][23][24] A study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings.[25] Word embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of the linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004.[26] Roweis and Saul published in Science how to use "locally linear embedding" (LLE) to discover representations of high dimensional data structures.[27] Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio[28] and colleagues.[29][30] The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broader parameter space to be explored profitably. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest for word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application.[31] Historically, one of the main limitations of static word embeddings or word vector space models is that words with multiple meanings are conflated into a single representation (a single vector in the semantic space). In other words, polysemy and homonymy are not handled properly. For example, in the sentence "The club I tried yesterday was great!", it is not clear if the term club is related to the word sense of a club sandwich, clubhouse, golf club, or any other sense that club might have. The necessity to accommodate multiple meanings per word in different vectors (multi-sense embeddings) is the motivation for several contributions in NLP to split single-sense embeddings into multi-sense ones.[32][33] Most approaches that produce multi-sense embeddings can be divided into two main categories for their word sense representation, i.e., unsupervised and knowledge-based.[34] Based on word2vec skip-gram, Multi-Sense Skip-Gram (MSSG)[35] performs word-sense discrimination and embedding simultaneously, improving its training time, while assuming a specific number of senses for each word. In the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) this number can vary depending on each word. Combining the prior knowledge of lexical databases (e.g., WordNet, ConceptNet, BabelNet), word embeddings and word sense disambiguation, Most Suitable Sense Annotation (MSSA)[36] labels word-senses through an unsupervised and knowledge-based approach, considering a word's context in a pre-defined sliding window. Once the words are disambiguated, they can be used in a standard word embeddings technique, so multi-sense embeddings are produced. 
MSSA architecture allows the disambiguation and annotation process to be performed recurrently in a self-improving manner.[37] The use of multi-sense embeddings is known to improve performance in several NLP tasks, such as part-of-speech tagging, semantic relation identification, semantic relatedness, named entity recognition and sentiment analysis.[38][39] As of the late 2010s, contextually-meaningful embeddings such as ELMo and BERT have been developed.[40] Unlike static word embeddings, these embeddings are at the token level, in that each occurrence of a word has its own embedding. These embeddings better reflect the multi-sense nature of words, because occurrences of a word in similar contexts are situated in similar regions of BERT's embedding space.[41][42] Word embeddings for n-grams in biological sequences (e.g. DNA, RNA, and proteins) for bioinformatics applications have been proposed by Asgari and Mofrad.[43] Named bio-vectors (BioVec) for biological sequences in general, protein-vectors (ProtVec) for proteins (amino-acid sequences), and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. The results presented by Asgari and Mofrad[43] suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns. Word embeddings with applications in game design have been proposed by Rabii and Cook[44] as a way to discover emergent gameplay using logs of gameplay data. The process requires transcribing actions that occur during a game within a formal language and then using the resulting text to create word embeddings. The results presented by Rabii and Cook[44] suggest that the resulting vectors can capture expert knowledge about games like chess that is not explicitly stated in the game's rules. The idea has been extended to embeddings of entire sentences or even documents, e.g. in the form of the thought vectors concept. In 2015, some researchers suggested "skip-thought vectors" as a means to improve the quality of machine translation.[45] A more recent and popular approach for representing sentences is Sentence-BERT, or SentenceTransformers, which modifies pre-trained BERT with the use of siamese and triplet network structures.[46] Software for training and using word embeddings includes Tomáš Mikolov's Word2vec, Stanford University's GloVe,[47] GN-GloVe,[48] Flair embeddings,[38] AllenNLP's ELMo,[49] BERT,[50] fastText, Gensim,[51] Indra,[52] and Deeplearning4j. Principal Component Analysis (PCA) and T-Distributed Stochastic Neighbour Embedding (t-SNE) are both used to reduce the dimensionality of word vector spaces and visualize word embeddings and clusters.[53] For instance, fastText is also used to calculate word embeddings for text corpora in Sketch Engine that are available online.[54] Word embeddings may contain the biases and stereotypes contained in the training dataset; Bolukbasi et al. point out in the 2016 paper “Man is to Computer Programmer as Woman is to Homemaker? 
Debiasing Word Embeddings” that a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies.[55] For example, one of the analogies generated using the aforementioned word embedding is “man is to computer programmer as woman is to homemaker”.[56][57] Research done by Jieyu Zhou et al. shows that the applications of these trained word embeddings without careful oversight likely perpetuate existing bias in society, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases.[58][59]
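As an illustration of the training toolkits listed earlier, a minimal sketch using Gensim's word2vec implementation (the Gensim 4.x API is assumed); the toy corpus and the parameter values are arbitrary placeholders rather than recommended settings.

```python
from gensim.models import Word2Vec

# Toy corpus: in practice this would be a large, tokenized text collection.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "ball"],
]

# Skip-gram (sg=1) embeddings; vector_size, window and min_count are illustrative.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, epochs=200, seed=1)

print(model.wv["king"].shape)                  # a 50-dimensional vector
print(model.wv.similarity("king", "queen"))    # cosine similarity of two embeddings
print(model.wv.most_similar("king", topn=2))   # nearest neighbours in the vector space
```

On a corpus this small the similarities are meaningless; the point is only the shape of the workflow: tokenized sentences in, a vector per vocabulary word out.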
https://en.wikipedia.org/wiki/Word_embedding
In computer science, atrie(/ˈtraɪ/,/ˈtriː/ⓘ), also known as adigital treeorprefix tree,[1]is a specializedsearch treedata structure used to store and retrieve strings from a dictionary or set. Unlike abinary search tree, nodes in a trie do not store their associated key. Instead, each node'spositionwithin the trie determines its associated key, with the connections between nodes defined by individualcharactersrather than the entire key. Tries are particularly effective for tasks such as autocomplete, spell checking, and IP routing, offering advantages overhash tablesdue to their prefix-based organization and lack of hash collisions. Every child node shares a commonprefixwith its parent node, and the root node represents theempty string. While basic trie implementations can be memory-intensive, various optimization techniques such as compression and bitwise representations have been developed to improve their efficiency. A notable optimization is theradix tree, which provides more efficient prefix-based storage. While tries commonly store character strings, they can be adapted to work with any ordered sequence of elements, such aspermutationsof digits or shapes. A notable variant is thebitwise trie, which uses individualbitsfrom fixed-length binary data (such asintegersormemory addresses) as keys. The idea of a trie for representing a set of strings was first abstractly described byAxel Thuein 1912.[2][3]Tries were first described in a computer context by René de la Briandais in 1959.[4][3][5]: 336 The idea was independently described in 1960 byEdward Fredkin,[6]who coined the termtrie, pronouncing it/ˈtriː/(as "tree"), after the middle syllable ofretrieval.[7][8]However, other authors pronounce it/ˈtraɪ/(as "try"), in an attempt to distinguish it verbally from "tree".[7][8][3] Tries are a form of string-indexed look-up data structure, which is used to store a dictionary list of words that can be searched on in a manner that allows for efficient generation ofcompletion lists.[9][10]: 1A prefix trie is anordered treedata structure used in the representation of a set of strings over a finite alphabet set, which allows efficient storage of words with common prefixes.[1] Tries can be efficacious onstring-searching algorithmssuch aspredictive text,approximate string matching, andspell checkingin comparison to binary search trees.[11][8][12]: 358A trie can be seen as a tree-shapeddeterministic finite automaton.[13] Tries support various operations: insertion, deletion, and lookup of a string key. Tries are composed of nodes that contain links, which either point to other suffix child nodes ornull. As for every tree, each node but the root is pointed to by only one other node, called itsparent. Each node contains as many links as the number of characters in the applicablealphabet(although tries tend to have a substantial number of null links). In some cases, the alphabet used is simply that of thecharacter encoding—resulting in, for example, a size of 256 in the case of (unsigned)ASCII.[14]: 732 The null links within the children of a node emphasize the following characteristics:[14]: 734[5]: 336 A basicstructure typeof nodes in the trie is as follows;Node{\displaystyle {\text{Node}}}may contain an optionalValue{\displaystyle {\text{Value}}}, which is associated with each key stored in the last character of string, or terminal node. 
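The node structure referred to above is not reproduced in this text; a minimal Python sketch of such a node, together with the search, insertion, and deletion operations described in the following paragraphs, might look like this (a mapping from characters to child links plus an optional value marking terminal nodes):

```python
class TrieNode:
    """A trie node: links from characters to children, plus an optional value.
    A non-None value marks a terminal node, i.e. a stored key ends here."""
    def __init__(self):
        self.children = {}    # character -> TrieNode (absent key = null link)
        self.value = None

def search(root, key):
    """Follow the characters of key from the root; a missing link means a search miss."""
    node = root
    for ch in key:
        node = node.children.get(ch)
        if node is None:
            return None            # null link: the key is not in the trie
    return node.value              # non-None only if this is a terminal node

def insert(root, key, value):
    """Create nodes along the path as needed and store value at the terminal node."""
    node = root
    for ch in key:
        node = node.children.setdefault(ch, TrieNode())
    node.value = value             # overwrites any previous value for this key

def delete(root, key, depth=0):
    """Recursively remove key; returns True when the current node can be pruned."""
    if depth == len(key):
        root.value = None                          # unmark the terminal node
    else:
        ch = key[depth]
        child = root.children.get(ch)
        if child is not None and delete(child, key, depth + 1):
            del root.children[ch]                  # prune the now-useless child
    return root.value is None and not root.children

# Usage
root = TrieNode()
insert(root, "sea", 1)
insert(root, "seashell", 2)
print(search(root, "sea"), search(root, "shell"))      # 1 None
delete(root, "sea")
print(search(root, "sea"), search(root, "seashell"))   # None 2
```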
Searching for a value in a trie is guided by the characters in the search string key, as each node in the trie contains a corresponding link to each possible character in the given string. Thus, following the string within the trie yields the associated value for the given string key. A null link during the search indicates the absence of the key.[14]: 732-733 The search procedure for a given string key in a rooted trie x works as follows.[15]: 135 Here, x and key correspond to the pointer to the trie's root node and the string key, respectively. The search operation, in a standard trie, takes O(dm){\displaystyle O({\text{dm}})} time, where m{\displaystyle {\text{m}}} is the size of the string parameter key{\displaystyle {\text{key}}}, and d{\displaystyle {\text{d}}} corresponds to the alphabet size.[16]: 754 Binary search trees, on the other hand, take O(mlogn){\displaystyle O(m\log n)} in the worst case, since the search depends on the height of the tree (logn{\displaystyle \log n}) of the BST (in case of balanced trees), where n{\displaystyle {\text{n}}} and m{\displaystyle {\text{m}}} are the number of keys and the length of the keys, respectively.[12]: 358 The trie occupies less space in comparison with a BST in the case of a large number of short strings, since nodes share common initial string subsequences and store the keys implicitly.[12]: 358 The terminal node of the tree contains a non-null value, and it is a search hit if the associated value is found in the trie, and a search miss if it is not.[14]: 733 Insertion into a trie is guided by using the character sets as indexes to the children array until the last character of the string key is reached.[14]: 733-734 Each node in the trie corresponds to one call of the radix sorting routine, as the trie structure reflects the execution pattern of top-down radix sort.[15]: 135 If a null link is encountered prior to reaching the last character of the string key, a new node is created.[14]: 745 The value of the terminal node is assigned to the input value; therefore, if the former was non-null at the time of insertion, it is substituted with the new value. Deletion of a key–value pair from a trie involves finding the terminal node with the corresponding string key and setting the terminal indicator and value to false and null, respectively.[14]: 740 Removal of a string key from a rooted trie (x) proceeds recursively. The procedure begins by examining the key; null denotes the arrival of a terminal node or end of a string key. If the node is terminal and has no children, it is removed from the trie. However, an end of string key without the node being terminal indicates that the key does not exist, thus the procedure does not modify the trie. The recursion proceeds by incrementing key's index. A trie can be used to replace a hash table, over which it has the following advantages:[12]: 358 However, tries are less efficient than a hash table when the data is directly accessed on a secondary storage device such as a hard disk drive that has higher random access time than the main memory.[6] Tries are also disadvantageous when the key value cannot be easily represented as string, such as floating point numbers where multiple representations are possible (e.g. 
1 is equivalent to 1.0, +1.0, 1.00, etc.),[12]: 359however it can be unambiguously represented as abinary numberinIEEE 754, in comparison totwo's complementformat.[17] Tries can be represented in several ways, corresponding to different trade-offs between memory use and speed of the operations.[5]: 341Using a vector of pointers for representing a trie consumes enormous space; however, memory space can be reduced at the expense of running time if asingly linked listis used for each node vector, as most entries of the vector containsnil{\displaystyle {\text{nil}}}.[3]: 495 Techniques such asalphabet reductionmay reduce the large space requirements by reinterpreting the original string as a longer string over a smaller alphabet i.e. a string ofnbytes can alternatively be regarded as a string of2nfour-bit unitsand stored in a trie with 16 instead of 256 pointers per node. Although this can reduce memory usage by up to a factor of eight, lookups need to visit twice as many nodes in the worst case.[5]: 347–352Other techniques include storing a vector of 256 ASCII pointers as a bitmap of 256 bits representing ASCII alphabet, which reduces the size of individual nodes dramatically.[18] Bitwise tries are used to address the enormous space requirement for the trie nodes in a naive simple pointer vector implementations. Each character in the string key set is represented via individual bits, which are used to traverse the trie over a string key. The implementations for these types of trie usevectorizedCPU instructions tofind the first set bitin a fixed-length key input (e.g.GCC's__builtin_clz()intrinsic function). Accordingly, the set bit is used to index the first item, or child node, in the 32- or 64-entry based bitwise tree. Search then proceeds by testing each subsequent bit in the key.[19] This procedure is alsocache-localandhighly parallelizabledue toregisterindependency, and thus performant onout-of-order executionCPUs.[19] Radix tree, also known as acompressed trie, is a space-optimized variant of a trie in which any node with only one child gets merged with its parent; elimination of branches of the nodes with a single child results in better metrics in both space and time.[20][21]: 452This works best when the trie remains static and set of keys stored are very sparse within their representation space.[22]: 3–16 One more approach is to "pack" the trie, in which a space-efficient implementation of a sparse packed trie applied to automatichyphenation, in which the descendants of each node may be interleaved in memory.[8] Patricia trees are a particular implementation of the compressed binary trie that uses thebinary encodingof the string keys in its representation.[23][15]: 140Every node in a Patricia tree contains an index, known as a "skip number", that stores the node's branching index to avoid empty subtrees during traversal.[15]: 140-141A naive implementation of a trie consumes immense storage due to larger number of leaf-nodes caused by sparse distribution of keys; Patricia trees can be efficient for such cases.[15]: 142[24]: 3 A representation of a Patricia tree is shown to the right. 
Each index value adjacent to the nodes represents the "skip number"—the index of the bit with which branching is to be decided.[24]: 3The skip number 1 at node 0 corresponds to the position 1 in the binary encoded ASCII where the leftmost bit differed in the key setX.[24]: 3-4The skip number is crucial for search, insertion, and deletion of nodes in the Patricia tree, and abit maskingoperation is performed during every iteration.[15]: 143 Trie data structures are commonly used inpredictive textorautocompletedictionaries, andapproximate matching algorithms.[11]Tries enable faster searches, occupy less space, especially when the set contains large number of short strings, thus used inspell checking, hyphenation applications andlongest prefix matchalgorithms.[8][12]: 358However, if storing dictionary words is all that is required (i.e. there is no need to store metadata associated with each word), a minimal deterministic acyclic finite state automaton (DAFSA) or radix tree would use less storage space than a trie. This is because DAFSAs and radix trees can compress identical branches from the trie which correspond to the same suffixes (or parts) of different words being stored. String dictionaries are also utilized innatural language processing, such as findinglexiconof atext corpus.[25]: 73 Lexicographic sortingof a set of string keys can be implemented by building a trie for the given keys and traversing the tree inpre-orderfashion;[26]this is also a form ofradix sort.[27]Tries are also fundamental data structures forburstsort, which is notable for being the fastest string sorting algorithm as of 2007,[28]accomplished by its efficient use of CPUcache.[29] A special kind of trie, called asuffix tree, can be used to index allsuffixesin a text to carry out fast full-text searches.[30] A specialized kind of trie called a compressed trie, is used inweb search enginesfor storing theindexes- a collection of all searchable words.[31]Each terminal node is associated with a list ofURLs—called occurrence list—to pages that match the keyword. The trie is stored in the main memory, whereas the occurrence is kept in an external storage, frequently in largeclusters, or the in-memory index points to documents stored in an external location.[32] Tries are used inBioinformatics, notably insequence alignmentsoftware applications such asBLAST, which indexes all the different substring of lengthk(calledk-mers) of a text by storing the positions of their occurrences in a compressed trie sequence databases.[25]: 75 Compressed variants of tries, such as databases for managingForwarding Information Base(FIB), are used in storingIP address prefixeswithinroutersandbridgesfor prefix-based lookup to resolvemask-basedoperations inIP routing.[25]: 75
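As a small illustration of the prefix-based uses just mentioned (autocomplete and longest-prefix match), here is a self-contained sketch over a plain nested-dict trie; the word list is arbitrary.

```python
END = object()   # sentinel key marking the end of a stored word

def build(words):
    """Build a nested-dict trie from an iterable of words."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node[END] = w
    return root

def complete(root, prefix):
    """All stored words starting with prefix (autocomplete)."""
    node = root
    for ch in prefix:
        node = node.get(ch)
        if node is None:
            return []
    out, stack = [], [node]
    while stack:                      # collect every word below the prefix node
        n = stack.pop()
        for k, v in n.items():
            if k is END:
                out.append(v)
            else:
                stack.append(v)
    return out

def longest_prefix(root, query):
    """Longest stored word that is a prefix of query (as in longest-prefix match)."""
    node, best = root, None
    for ch in query:
        if END in node:
            best = node[END]
        node = node.get(ch)
        if node is None:
            break
    else:
        if END in node:
            best = node[END]
    return best

trie = build(["to", "tea", "ten", "in", "inn"])
print(sorted(complete(trie, "te")))      # ['tea', 'ten']
print(longest_prefix(trie, "inner"))     # 'inn'
```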
https://en.wikipedia.org/wiki/Trie
Inmathematics, arandom walk, sometimes known as adrunkard's walk, is astochastic processthat describes a path that consists of a succession ofrandomsteps on somemathematical space. An elementary example of a random walk is the random walk on the integer number lineZ{\displaystyle \mathbb {Z} }which starts at 0, and at each step moves +1 or −1 with equalprobability. Other examples include the path traced by amoleculeas it travels in a liquid or a gas (seeBrownian motion), the search path of aforaginganimal, or the price of a fluctuatingstockand the financial status of agambler. Random walks have applications toengineeringand many scientific fields includingecology,psychology,computer science,physics,chemistry,biology,economics, andsociology. The termrandom walkwas first introduced byKarl Pearsonin 1905.[1] Realizations of random walks can be obtained byMonte Carlo simulation.[2] A popular random walk model is that of a random walk on a regular lattice, where at each step the location jumps to another site according to some probability distribution. In asimple random walk, the location can only jump to neighboring sites of the lattice, forming alattice path. In asimple symmetric random walkon a locally finite lattice, the probabilities of the location jumping to each one of its immediate neighbors are the same. The best-studied example is the random walk on thed-dimensional integer lattice (sometimes called the hypercubic lattice)Zd{\displaystyle \mathbb {Z} ^{d}}.[3] If the state space is limited to finite dimensions, the random walk model is called asimple bordered symmetric random walk, and the transition probabilities depend on the location of the state because on margin and corner states the movement is limited.[4] An elementary example of a random walk is the random walk on theintegernumber line,Z{\displaystyle \mathbb {Z} }, which starts at 0 and at each step moves +1 or −1 with equal probability. This walk can be illustrated as follows. A marker is placed at zero on the number line, and a fair coin is flipped. If it lands on heads, the marker is moved one unit to the right. If it lands on tails, the marker is moved one unit to the left. After five flips, the marker could now be on -5, -3, -1, 1, 3, 5. With five flips, three heads and two tails, in any order, it will land on 1. There are 10 ways of landing on 1 (by flipping three heads and two tails), 10 ways of landing on −1 (by flipping three tails and two heads), 5 ways of landing on 3 (by flipping four heads and one tail), 5 ways of landing on −3 (by flipping four tails and one head), 1 way of landing on 5 (by flipping five heads), and 1 way of landing on −5 (by flipping five tails). See the figure below for an illustration of the possible outcomes of 5 flips. To define this walk formally, take independent random variablesZ1,Z2,…{\displaystyle Z_{1},Z_{2},\dots }, where each variable is either 1 or −1, with a 50% probability for either value, and setS0=0{\displaystyle S_{0}=0}andSn=∑j=1nZj.{\textstyle S_{n}=\sum _{j=1}^{n}Z_{j}.}Theseries{Sn}{\displaystyle \{S_{n}\}}is called thesimple random walk onZ{\displaystyle \mathbb {Z} }. This series (the sum of the sequence of −1s and 1s) gives the net distance walked, if each part of the walk is of length one. TheexpectationE(Sn){\displaystyle E(S_{n})}ofSn{\displaystyle S_{n}}is zero. That is, the mean of all coin flips approaches zero as the number of flips increases. 
This follows by the finite additivity property of expectation:E(Sn)=∑j=1nE(Zj)=0.{\displaystyle E(S_{n})=\sum _{j=1}^{n}E(Z_{j})=0.} A similar calculation, using the independence of the random variables and the fact thatE(Zn2)=1{\displaystyle E(Z_{n}^{2})=1}, shows that:E(Sn2)=∑i=1nE(Zi2)+2∑1≤i<j≤nE(ZiZj)=n.{\displaystyle E(S_{n}^{2})=\sum _{i=1}^{n}E(Z_{i}^{2})+2\sum _{1\leq i<j\leq n}E(Z_{i}Z_{j})=n.} This hints thatE(|Sn|){\displaystyle E(|S_{n}|)\,\!}, theexpectedtranslation distance afternsteps, should beof the order ofn{\displaystyle {\sqrt {n}}}.In fact,[5]limn→∞E(|Sn|)n=2π.{\displaystyle \lim _{n\to \infty }{\frac {E(|S_{n}|)}{\sqrt {n}}}={\sqrt {\frac {2}{\pi }}}.} To answer the question of how many times will a random walk cross a boundary line if permitted to continue walking forever, a simple random walk onZ{\displaystyle \mathbb {Z} }will cross every point an infinite number of times. This result has many names: thelevel-crossing phenomenon,recurrenceor thegambler's ruin. The reason for the last name is as follows: a gambler with a finite amount of money will eventually lose when playinga fair gameagainst a bank with an infinite amount of money. The gambler's money will perform a random walk, and it will reach zero at some point, and the game will be over. Ifaandbare positive integers, then the expected number of steps until a one-dimensional simple random walk starting at 0 first hitsbor −aisab. The probability that this walk will hitbbefore −aisa/(a+b){\displaystyle a/(a+b)}, which can be derived from the fact that simple random walk is amartingale. And these expectations and hitting probabilities can be computed inO(a+b){\displaystyle O(a+b)}in the general one-dimensional random walk Markov chain. Some of the results mentioned above can be derived from properties ofPascal's triangle. The number of different walks ofnsteps where each step is +1 or −1 is 2n. For the simple random walk, each of these walks is equally likely. In order forSnto be equal to a numberkit is necessary and sufficient that the number of +1 in the walk exceeds those of −1 byk. It follows +1 must appear (n+k)/2 times amongnsteps of a walk, hence the number of walks which satisfySn=k{\displaystyle S_{n}=k}equals the number of ways of choosing (n+k)/2 elements from annelement set,[6]denoted(n(n+k)/2){\textstyle n \choose (n+k)/2}. For this to have meaning, it is necessary thatn+kbe an even number, which impliesnandkare either both even or both odd. Therefore, the probability thatSn=k{\displaystyle S_{n}=k}is equal to2−n(n(n+k)/2){\textstyle 2^{-n}{n \choose (n+k)/2}}. By representing entries of Pascal's triangle in terms offactorialsand usingStirling's formula, one can obtain good estimates for these probabilities for large values ofn{\displaystyle n}. If space is confined toZ{\displaystyle \mathbb {Z} }+ for brevity, the number of ways in which a random walk will land on any given number having five flips can be shown as {0,5,0,4,0,1}. This relation with Pascal's triangle is demonstrated for small values ofn. At zero turns, the only possibility will be to remain at zero. However, at one turn, there is one chance of landing on −1 or one chance of landing on 1. At two turns, a marker at 1 could move to 2 or back to zero. A marker at −1, could move to −2 or back to zero. Therefore, there is one chance of landing on −2, two chances of landing on zero, and one chance of landing on 2. 
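These statements are easy to check numerically: the walk has mean zero, E(Sn2)=n, E|Sn| is approximately √(2n/π), and P(Sn=k)=2^−n·C(n,(n+k)/2). A Monte Carlo sketch with NumPy (the sample sizes are arbitrary):

```python
import numpy as np
from math import comb, sqrt, pi

rng = np.random.default_rng(0)
n, walks = 500, 100_000

# Simulate `walks` independent simple random walks of n steps of +1 or -1.
steps = 2 * rng.integers(0, 2, size=(walks, n), dtype=np.int8) - 1
S_n = steps.sum(axis=1)

print("mean of S_n       :", S_n.mean())                    # ~ 0
print("mean of S_n^2 / n :", (S_n**2).mean() / n)            # ~ 1
print("E|S_n| / sqrt(n)  :", np.abs(S_n).mean() / sqrt(n))   # ~ sqrt(2/pi) ~ 0.798

# Exact probability P(S_n = k) versus the simulated frequency (k and n must have equal parity).
k = 10
print("P(S_n = 10): exact", comb(n, (n + k) // 2) / 2**n,
      " simulated", (S_n == k).mean())

# Path counts for five flips, matching the enumeration above: 1, 5, 10, 10, 5, 1.
print([comb(5, (5 + k) // 2) for k in (-5, -3, -1, 1, 3, 5)])
```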
Thecentral limit theoremand thelaw of the iterated logarithmdescribe important aspects of the behavior of simple random walks onZ{\displaystyle \mathbb {Z} }. In particular, the former entails that asnincreases, the probabilities (proportional to the numbers in each row) approach anormal distribution. To be precise, knowing thatP(Xn=k)=2−n(n(n+k)/2){\textstyle \mathbb {P} (X_{n}=k)=2^{-n}{\binom {n}{(n+k)/2}}}, and usingStirling's formulaone has log⁡P(Xn=k)=n[(1+kn+12n)log⁡(1+kn)+(1−kn+12n)log⁡(1−kn)]+log⁡2π+o(1).{\displaystyle {\log \mathbb {P} (X_{n}=k)}=n\left[\left({1+{\frac {k}{n}}+{\frac {1}{2n}}}\right)\log \left(1+{\frac {k}{n}}\right)+\left({1-{\frac {k}{n}}+{\frac {1}{2n}}}\right)\log \left(1-{\frac {k}{n}}\right)\right]+\log {\frac {\sqrt {2}}{\sqrt {\pi }}}+o(1).} Fixing the scalingk=⌊nx⌋{\textstyle k=\lfloor {\sqrt {n}}x\rfloor }, forx{\textstyle x}fixed, and using the expansionlog⁡(1+k/n)=k/n−k2/2n2+…{\textstyle \log(1+{k}/{n})=k/n-k^{2}/2n^{2}+\dots }whenk/n{\textstyle k/n}vanishes, it follows P(Xnn=⌊nx⌋n)=1n12πe−x2(1+o(1)).{\displaystyle {\mathbb {P} \left({\frac {X_{n}}{n}}={\frac {\lfloor {\sqrt {n}}x\rfloor }{\sqrt {n}}}\right)}={\frac {1}{\sqrt {n}}}{\frac {1}{2{\sqrt {\pi }}}}e^{-{x^{2}}}(1+o(1)).} taking the limit (and observing that1/n{\textstyle {1}/{\sqrt {n}}}corresponds to the spacing of the scaling grid) one finds the gaussian densityf(x)=12πe−x2{\textstyle f(x)={\frac {1}{2{\sqrt {\pi }}}}e^{-{x^{2}}}}. Indeed, for a absolutely continuous random variableX{\textstyle X}with densityfX{\textstyle f_{X}}it holdsP(X∈[x,x+dx))=fX(x)dx{\textstyle \mathbb {P} \left(X\in [x,x+dx)\right)=f_{X}(x)dx}, withdx{\textstyle dx}corresponding to an infinitesimal spacing. As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting.[7][8] A one-dimensionalrandom walkcan also be looked at as aMarkov chainwhose state space is given by the integersi=0,±1,±2,….{\displaystyle i=0,\pm 1,\pm 2,\dots .}For some numberpsatisfying0<p<1{\displaystyle \,0<p<1}, the transition probabilities (the probabilityPi,jof moving from stateito statej) are given byPi,i+1=p=1−Pi,i−1.{\displaystyle \,P_{i,i+1}=p=1-P_{i,i-1}.} The heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. The main question is the probability of staying in each of the various sites aftert{\displaystyle t}jumps, and in the limit of this probability whent{\displaystyle t}is very large. In higher dimensions, the set of randomly walked points has interesting geometric properties. In fact, one gets a discretefractal, that is, a set which exhibits stochasticself-similarityon large scales. On small scales, one can observe "jaggedness" resulting from the grid on which the walk is performed. The trajectory of a random walk is the collection of points visited, considered as a set with disregard towhenthe walk arrived at the point. In one dimension, the trajectory is simply all points between the minimum height and the maximum height the walk achieved (both are, on average, on the order ofn{\displaystyle {\sqrt {n}}}). To visualize the two-dimensional case, one can imagine a person walking randomly around a city. The city is effectively infinite and arranged in a square grid of sidewalks. 
At every intersection, the person randomly chooses one of the four possible routes (including the one originally travelled from). Formally, this is a random walk on the set of all points in theplanewithintegercoordinates. To answer the question of the person ever getting back to the original starting point of the walk, this is the 2-dimensional equivalent of the level-crossing problem discussed above. In 1921George Pólyaproved that in a 2-dimensional random walk the person wouldalmost surelyreturn to the starting point, but for 3 dimensions or higher, the probability of returning to the origin decreases as the number of dimensions increases. In 3 dimensions, the probability decreases to roughly 34%.[9]The mathematicianShizuo Kakutaniwas known to refer to this result with the following quote: "A drunk man will find his way home, but a drunk bird may get lost forever".[10] The probability of recurrence is in generalp=1−(1πd∫[−π,π]d∏i=1ddθi1−1d∑i=1dcos⁡θi)−1{\displaystyle p=1-\left({\frac {1}{\pi ^{d}}}\int _{[-\pi ,\pi ]^{d}}{\frac {\prod _{i=1}^{d}d\theta _{i}}{1-{\frac {1}{d}}\sum _{i=1}^{d}\cos \theta _{i}}}\right)^{-1}}, which can be derived bygenerating functions[11]or Poisson process.[12] Another variation of this question which was also asked by Pólya is: "if two people leave the same starting point, then will they ever meet again?"[13]It can be shown that the difference between their locations (two independent random walks) is also a simple random walk, so they almost surely meet again in a 2-dimensional walk, but for 3 dimensions and higher the probability decreases with the number of dimensions.Paul Erdősand Samuel James Taylor also showed in 1960 that for dimensions less than or equal to 4, two independent random walks starting from any two given points have infinitely many intersections almost surely, but for dimensions 5 and higher, they almost surely intersect only finitely often.[14] The asymptotic function for a two-dimensional random walk as the number of steps increases is given by aRayleigh distribution. The probability distribution is a function of the radius from the origin and the step length is constant for each step. Here, the step length is assumed to be 1, N is the total number of steps and r is the radius from the origin.[15] P(r)=2rNe−r2/N{\displaystyle P(r)={\frac {2r}{N}}e^{-r^{2}/N}} AWiener processis a stochastic process with similar behavior toBrownian motion, the physical phenomenon of a minute particle diffusing in a fluid. (Sometimes theWiener processis called "Brownian motion", although this is strictly speaking a confusion of a model with the phenomenon being modeled.) A Wiener process is thescaling limitof random walk in dimension 1. This means that if there is a random walk with very small steps, there is an approximation to a Wiener process (and, less accurately, to Brownian motion). To be more precise, if the step size is ε, one needs to take a walk of lengthL/ε2to approximate a Wiener length ofL. As the step size tends to 0 (and the number of steps increases proportionally), random walk converges to a Wiener process in an appropriate sense. Formally, ifBis the space of all paths of lengthLwith the maximum topology, and ifMis the space of measure overBwith the norm topology, then the convergence is in the spaceM. Similarly, a Wiener process in several dimensions is the scaling limit of random walk in the same number of dimensions.
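The Rayleigh form of the two-dimensional endpoint distribution quoted above can be checked with a small simulation. The sketch below assumes unit-length steps in uniformly random directions (an off-lattice variant, used here only for simplicity) and compares sample moments of the radius with the Rayleigh predictions E[r^2] = N and E[r] = sqrt(pi N)/2.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 500, 20_000

# Two-dimensional walks of N unit-length steps in uniformly random directions.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
x = np.cos(theta).sum(axis=1)
y = np.sin(theta).sum(axis=1)
r = np.hypot(x, y)                     # distance from the origin after N steps

# Moments implied by P(r) = (2r/N) * exp(-r**2 / N).
print(np.mean(r**2), N)                       # E[r^2] = N
print(np.mean(r), np.sqrt(np.pi * N) / 2)     # E[r]  = sqrt(pi * N) / 2
```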
A random walk is a discrete fractal (a function with integer dimensions; 1, 2, ...), but a Wiener process trajectory is a true fractal, and there is a connection between the two. For example, take a random walk until it hits a circle of radiusrtimes the step length. The average number of steps it performs isr2.[citation needed]This fact is thediscrete versionof the fact that a Wiener process walk is a fractal ofHausdorff dimension2.[citation needed] In two dimensions, the average number of points the same random walk has on theboundaryof its trajectory isr4/3. This corresponds to the fact that the boundary of the trajectory of a Wiener process is a fractal of dimension 4/3, a fact predicted byMandelbrotusing simulations but proved only in 2000 byLawler,SchrammandWerner.[16] A Wiener process enjoys manysymmetriesa random walk does not. For example, a Wiener process walk is invariant to rotations, but the random walk is not, since the underlying grid is not (random walk is invariant to rotations by 90 degrees, but Wiener processes are invariant to rotations by, for example, 17 degrees too). This means that in many cases, problems on a random walk are easier to solve by translating them to a Wiener process, solving the problem there, and then translating back. On the other hand, some problems are easier to solve with random walks due to their discrete nature. Random walk andWiener processcan becoupled, namely manifested on the same probability space in a dependent way that forces them to be quite close. The simplest such coupling is theSkorokhod embedding, but there exist more precise couplings, such as theKomlós–Major–Tusnády approximationtheorem. The convergence of a random walk toward the Wiener process is controlled by thecentral limit theorem, and byDonsker's theorem. For a particle in a known fixed position att= 0, the central limit theorem tells us that after a large number ofindependentsteps in the random walk, the walker's position is distributed according to anormal distributionof totalvariance: σ2=tδtε2,{\displaystyle \sigma ^{2}={\frac {t}{\delta t}}\,\varepsilon ^{2},} wheretis the time elapsed since the start of the random walk,ε{\displaystyle \varepsilon }is the size of a step of the random walk, andδt{\displaystyle \delta t}is the time elapsed between two successive steps. This corresponds to theGreen's functionof thediffusion equationthat controls the Wiener process, which suggests that, after a large number of steps, the random walk converges toward a Wiener process. In 3D, the variance corresponding to theGreen's functionof the diffusion equation is:σ2=6Dt.{\displaystyle \sigma ^{2}=6\,D\,t.} By equating this quantity with the variance associated with the position of the random walker, one obtains the equivalent diffusion coefficient to be considered for the asymptotic Wiener process toward which the random walk converges after a large number of steps:D=ε26δt{\displaystyle D={\frac {\varepsilon ^{2}}{6\delta t}}}(valid only in 3D). The two expressions of the variance above correspond to the distribution associated with the vectorR→{\displaystyle {\vec {R}}}that links the two ends of the random walk, in 3D. The variance associated with each componentRx{\displaystyle R_{x}},Ry{\displaystyle R_{y}}orRz{\displaystyle R_{z}}is only one third of this value (still in 3D).
For 2D:[17] D=ε24δt.{\displaystyle D={\frac {\varepsilon ^{2}}{4\delta t}}.} For 1D:[18] D=ε22δt.{\displaystyle D={\frac {\varepsilon ^{2}}{2\delta t}}.} A random walk having a step size that varies according to anormal distributionis used as a model for real-world time series data such as financial markets. Here, the step size is the inverse cumulative normal distributionΦ−1(z,μ,σ){\displaystyle \Phi ^{-1}(z,\mu ,\sigma )}where 0 ≤z≤ 1 is a uniformly distributed random number, and μ and σ are the mean and standard deviation of the normal distribution, respectively. If μ is nonzero, the random walk will vary about a linear trend. If vsis the starting value of the random walk, the expected value afternsteps will be vs+nμ. For the special case where μ is equal to zero, afternsteps, the translation distance's probability distribution is given byN(0,nσ2), whereN() is the notation for the normal distribution,nis the number of steps, and σ is from the inverse cumulative normal distribution as given above. Proof: The Gaussian random walk can be thought of as the sum of a sequence of independent and identically distributed random variables,Xi, drawn from the inverse cumulative normal distribution with mean equal to zero and the σ of the original inverse cumulative normal distribution: but the distribution of the sum of two independent normally distributed random variables,Z=X+Y, is given byN(μX+μY,σX2+σY2){\displaystyle {\mathcal {N}}(\mu _{X}+\mu _{Y},\sigma _{X}^{2}+\sigma _{Y}^{2})}(see here). In our case,μX=μY=0{\displaystyle \mu _{X}=\mu _{Y}=0}andσX2=σY2=σ2{\displaystyle \sigma _{X}^{2}=\sigma _{Y}^{2}=\sigma ^{2}}yieldN(0,2σ2){\displaystyle {\mathcal {N}}(0,2\sigma ^{2})}. By induction, fornsteps we haveZ∼N(0,nσ2).{\displaystyle Z\sim {\mathcal {N}}(0,n\sigma ^{2}).}For steps distributed according to any distribution with zero mean and a finite variance (not necessarily just a normal distribution), theroot mean squaretranslation distance afternsteps isσn{\displaystyle \sigma {\sqrt {n}}}(seeBienaymé's identity). But for the Gaussian random walk, this is just the standard deviation of the translation distance's distribution afternsteps. Hence, if μ is equal to zero, and since the root mean square (RMS) translation distance is one standard deviation, there is a 68.27% probability that the translation distance afternsteps will fall between±σn{\displaystyle \pm \sigma {\sqrt {n}}}. Likewise, there is a 50% probability that the translation distance afternsteps will fall between±0.6745σn{\displaystyle \pm 0.6745\sigma {\sqrt {n}}}. The number of distinct sites visited by a single random walkerS(t){\displaystyle S(t)}has been studied extensively for square and cubic lattices and for fractals.[19][20]This quantity is useful for the analysis of problems of trapping and kinetic reactions. It is also related to the vibrational density of states,[21][22]diffusion reaction processes[23]and spread of populations in ecology.[24][25] Theinformation rateof a Gaussian random walk with respect to the squared error distance, i.e. its quadraticrate distortion function, is given parametrically by[26]R(Dθ)=12∫01max{0,log2⁡(S(φ)/θ)}dφ,{\displaystyle R(D_{\theta })={\frac {1}{2}}\int _{0}^{1}\max\{0,\log _{2}\left(S(\varphi )/\theta \right)\}\,d\varphi ,}Dθ=∫01min{S(φ),θ}dφ,{\displaystyle D_{\theta }=\int _{0}^{1}\min\{S(\varphi ),\theta \}\,d\varphi ,}whereS(φ)=(2sin⁡(πφ/2))−2{\displaystyle S(\varphi )=\left(2\sin(\pi \varphi /2)\right)^{-2}}.
Therefore, it is impossible to encode{Zn}n=1N{\displaystyle {\{Z_{n}\}_{n=1}^{N}}}using abinary codeof less thanNR(Dθ){\displaystyle NR(D_{\theta })}bitsand recover it with expected mean squared error less thanDθ{\displaystyle D_{\theta }}. On the other hand, for anyε>0{\displaystyle \varepsilon >0}, there exists anN∈N{\displaystyle N\in \mathbb {N} }large enough and abinary codeof no more than2NR(Dθ){\displaystyle 2^{NR(D_{\theta })}}distinct elements such that the expected mean squared error in recovering{Zn}n=1N{\displaystyle {\{Z_{n}\}_{n=1}^{N}}}from this code is at mostDθ−ε{\displaystyle D_{\theta }-\varepsilon }. As mentioned, the range of natural phenomena which have been subject to attempts at description by some flavour of random walks is considerable. This is particularly the case in the fields of physics,[27][28]chemistry,[29]materials science,[30][31]and biology.[32][33][34] The following are some specific applications of random walks: A number of types ofstochastic processeshave been considered that are similar to the pure random walks but where the simple structure is allowed to be more generalized. Thepurestructure can be characterized by the steps being defined byindependent and identically distributed random variables. Random walks can take place on a variety of spaces, such asgraphs, the integers, the real line, the plane or higher-dimensional vector spaces, oncurved surfacesor higher-dimensionalRiemannian manifolds, and ongroups. It is also possible to define random walks which take their steps at random times, and in that case, the positionXthas to be defined for all timest∈ [0, +∞). Specific cases or limits of random walks include theLévy flightanddiffusionmodels such asBrownian motion. A random walk of lengthkon a possibly infinitegraphGwith a root0is a stochastic process with random variablesX1,X2,…,Xk{\displaystyle X_{1},X_{2},\dots ,X_{k}}such thatX1=0{\displaystyle X_{1}=0}andXi+1{\displaystyle {X_{i+1}}}is a vertex chosen uniformly at random from the neighbors ofXi{\displaystyle X_{i}}. Then the numberpv,w,k(G){\displaystyle p_{v,w,k}(G)}is the probability that a random walk of lengthkstarting atvends atw. In particular, ifGis a graph with root0,p0,0,2k{\displaystyle p_{0,0,2k}}is the probability that a2k{\displaystyle 2k}-step random walk returns to0. Building on the analogy from the earlier section on higher dimensions, assume now that our city is no longer a perfect square grid. When our person reaches a certain junction, he picks between the variously available roads with equal probability. Thus, if the junction has seven exits the person will go to each one with probability one-seventh. This is a random walk on a graph. Will our person reach his home? It turns out that under rather mild conditions, the answer is still yes,[45]but depending on the graph, the answer to the variant question 'Will two persons meet again?' may not be that they meet infinitely often almost surely.[46] An example of a case where the person will reach his home almost surely is when the lengths of all the blocks are betweenaandb(whereaandbare any two finite positive numbers). Notice that we do not assume that the graph isplanar, i.e. the city may contain tunnels and bridges. One way to prove this result is using the connection toelectrical networks. Take a map of the city and place a oneohmresistoron every block. Now measure the "resistance between a point and infinity". 
In other words, choose some numberRand take all the points in the electrical network with distance bigger thanRfrom our point and wire them together. This is now a finite electrical network, and we may measure the resistance from our point to the wired points. TakeRto infinity. The limit is called theresistance between a point and infinity. It turns out that the following is true (an elementary proof can be found in the book by Doyle and Snell): Theorem:a graph is transient if and only if the resistance between a point and infinity is finite. It is not important which point is chosen if the graph is connected. In other words, in a transient system, one only needs to overcome a finite resistance to get to infinity from any point. In a recurrent system, the resistance from any point to infinity is infinite. This characterization oftransience and recurrenceis very useful, and specifically it allows us to analyze the case of a city drawn in the plane with the distances bounded. A random walk on a graph is a very special case of aMarkov chain. Unlike a general Markov chain, random walk on a graph enjoys a property calledtime symmetryorreversibility. Roughly speaking, this property, also called the principle ofdetailed balance, means that the probabilities to traverse a given path in one direction or the other have a very simple connection between them (if the graph isregular, they are just equal). This property has important consequences. Starting in the 1980s, much research has gone into connecting properties of the graph to random walks. In addition to the electrical network connection described above, there are important connections toisoperimetric inequalities, see morehere, functional inequalities such asSobolevandPoincaréinequalities and properties of solutions ofLaplace's equation. A significant portion of this research was focused onCayley graphsoffinitely generated groups. In many cases these discrete results carry over to, or are derived frommanifoldsandLie groups. In the context ofrandom graphs, particularly that of theErdős–Rényi model, analytical results to some properties of random walkers have been obtained. These include the distribution of first[47]and last hitting times[48]of the walker, where the first hitting time is given by the first time the walker steps into a previously visited site of the graph, and the last hitting time corresponds the first time the walker cannot perform an additional move without revisiting a previously visited site. A good reference for random walk on graphs is the online book byAldous and Fill. For groups see the book of Woess. If the transition kernelp(x,y){\displaystyle p(x,y)}is itself random (based on an environmentω{\displaystyle \omega }) then the random walk is called a "random walk in random environment". When the law of the random walk includes the randomness ofω{\displaystyle \omega }, the law is called the annealed law; on the other hand, ifω{\displaystyle \omega }is seen as fixed, the law is called a quenched law. See the book of Hughes, the book of Revesz, or the lecture notes of Zeitouni. We can think about choosing every possible edge with the same probability as maximizing uncertainty (entropy) locally. We could also do it globally – in maximal entropy random walk (MERW) we want all paths to be equally probable, or in other words: for every two vertexes, each path of given length is equally probable.[49]This random walk has much stronger localization properties. 
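The reversibility property discussed above can be made concrete with a small simulation: for a connected, non-bipartite graph, a simple random walk spends a fraction of its time at each vertex proportional to the vertex's degree, and detailed balance holds along every edge. The graph below is an arbitrary illustrative example, and the script is a sketch rather than a general tool.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small connected, non-bipartite graph as an adjacency list (illustrative).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
degrees = np.array([len(graph[v]) for v in sorted(graph)])
stationary = degrees / degrees.sum()        # pi(v) proportional to deg(v)

# Long simple random walk: at each step, move to a uniformly chosen neighbor.
visits = np.zeros(len(graph))
v = 0
for _ in range(200_000):
    v = int(rng.choice(graph[v]))
    visits[v] += 1

print(np.round(visits / visits.sum(), 3))   # empirical occupation frequencies
print(np.round(stationary, 3))              # close to the empirical values

# Detailed balance along the edge (0, 1): pi(0)*P(0->1) equals pi(1)*P(1->0).
print(stationary[0] / degrees[0], stationary[1] / degrees[1])
```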
There are a number of interesting models of random paths in which each step depends on the past in a complicated manner. All are harder to analyze than the usual random walk; still, the behavior of any model of a random walker can be studied by computer simulation. Examples include the following. The self-avoiding walk of lengthnonZd{\displaystyle \mathbb {Z} ^{d}}is the randomn-step path which starts at the origin, makes transitions only between adjacent sites inZd{\displaystyle \mathbb {Z} ^{d}}, never revisits a site, and is chosen uniformly among all such paths. In two dimensions, due to self-trapping, a typical self-avoiding walk is very short,[51]while in higher dimensions it grows beyond all bounds. This model has often been used inpolymer physics(since the 1960s). The maximal entropy random walk, chosen to maximize theentropy rate, has much stronger localization properties. Correlated random walks, in which the direction of movement at one time iscorrelatedwith the direction of movement at the next time, are used to model animal movements.[56][57]
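Self-trapping in two dimensions is easy to observe numerically. The sketch below grows a walk on Z² one step at a time, choosing uniformly among unvisited neighbors; this growth process is not the same as sampling uniformly from all self-avoiding paths of a given length, but it shows how quickly a walk that never revisits a site gets stuck.

```python
import numpy as np

rng = np.random.default_rng(4)

def grow_until_trapped(max_steps=10_000):
    """Grow a path on Z^2, never revisiting a site, until no free neighbor remains."""
    pos = (0, 0)
    visited = {pos}
    for n in range(max_steps):
        x, y = pos
        free = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in visited]
        if not free:                      # the walk has trapped itself
            return n
        pos = free[rng.integers(len(free))]
        visited.add(pos)
    return max_steps

lengths = [grow_until_trapped() for _ in range(2_000)]
print(np.mean(lengths))   # typically of the order of tens of steps
```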
https://en.wikipedia.org/wiki/Random_walk
PageRank(PR) is analgorithmused byGoogle Searchtorankweb pagesin theirsearch engineresults. It is named after both the term "web page" and co-founderLarry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.[1] Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known.[2][3]As of September 24, 2019, all patents associated with PageRank have expired.[4] PageRank is alink analysisalgorithm and it assigns a numericalweightingto each element of ahyperlinkedsetof documents, such as theWorld Wide Web, with the purpose of "measuring" its relative importance within the set. Thealgorithmmay be applied to any collection of entities withreciprocalquotations and references. The numerical weight that it assigns to any given elementEis referred to as thePageRank of Eand denoted byPR(E).{\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on theWebgraph, created by all World Wide Web pages as nodes andhyperlinksas edges, taking into consideration authority hubs such ascnn.comormayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is definedrecursivelyand depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5]In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6] Other link-based ranking algorithms for Web pages include theHITS algorithminvented byJon Kleinberg(used byTeomaand nowAsk.com), the IBMCLEVER project, theTrustRankalgorithm, theHummingbirdalgorithm,[7]and theSALSA algorithm.[8] Theeigenvalueproblem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. 
In 1895,Edmund Landausuggested using it for determining the winner of a chess tournament.[9][10]The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked onscientometricsranking scientific journals,[11]in 1977 byThomas Saatyin his concept ofAnalytic Hierarchy Processwhich weighted alternative choices,[12]and in 1995 by Bradley Love and Steven Sloman as acognitive modelfor concepts, the centrality algorithm.[13][14] A search engine called "RankDex" from IDD Information Services, designed byRobin Liin 1996, developed a strategy for site-scoring and page-ranking.[15]Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it.[16]RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996.[17]Li filed a patent for the technology in RankDex in 1997; it was granted in 1999.[18]He later used it when he foundedBaiduin China in 2000.[19][20]Google founderLarry Pagereferenced Li's work as a citation in some of his U.S. patents for PageRank.[21][17][22] Larry Page andSergey Brindeveloped PageRank atStanford Universityin 1996 as part of a research project about a new kind of search engine. An interview withHéctor García-Molina, Stanford Computer Science professor and advisor to Sergey,[23]provides background into the development of the page-rank algorithm.[24]Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it.[25]The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google.[5]Rajeev MotwaniandTerry Winogradco-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of theGoogle search engine, published in 1998.[5]Shortly after, Page and Brin foundedGoogle Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.[26] The name "PageRank" plays on the name of developer Larry Page, as well as of the concept of aweb page.[27][28]The word is a trademark of Google, and the PageRank process has beenpatented(U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[29][30] PageRank was influenced bycitation analysis, early developed byEugene Garfieldin the 1950s at the University of Pennsylvania, and byHyper Search, developed byMassimo Marchioriat theUniversity of Padua. In the same year PageRank was introduced (1998),Jon Kleinbergpublished his work onHITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.[5][31] The PageRank algorithm outputs aprobability distributionused to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. 
The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document. Assume a small universe of four web pages:A,B,C, andD. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume aprobability distributionbetween 0 and 1. Hence the initial value for each page in this example is 0.25. The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. If the only links in the system were from pagesB,C, andDtoA, each link would transfer 0.25 PageRank toAupon the next iteration, for a total of 0.75. Suppose instead that pageBhad a link to pagesCandA, pageChad a link to pageA, and pageDhad links to all three pages. Thus, upon the first iteration, pageBwould transfer half of its existing value (0.125) to pageAand the other half (0.125) to pageC. PageCwould transfer all of its existing value (0.25) to the only page it links to,A. SinceDhad three outbound links, it would transfer one third of its existing value, or approximately 0.083, toA. At the completion of this iteration, pageAwill have a PageRank of approximately 0.458. In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound linksL( ). In the general case, the PageRank value for any pageucan be expressed as: i.e. the PageRank value for a pageuis dependent on the PageRank values for each pagevcontained in the setBu(the set containing all pages linking to pageu), divided by the numberL(v) of links from pagev. The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factord. The probability that they instead jump to any random page is1 - d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5] The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is, So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion: The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied byNand the sum becomesN. 
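The four-page example above can be reproduced in a few lines. In the sketch below (Python with NumPy; illustrative only), page A's unspecified outbound links are handled by treating it as a sink whose score is spread over all pages, one common convention that is discussed later in the text; the first print reproduces the value of roughly 0.458 for page A after one undamped iteration, and the second runs a damped iteration with d = 0.85 to convergence.

```python
import numpy as np

# The four-page universe described above: B links to {A, C}, C links to {A},
# and D links to {A, B, C}.  Page A's outbound links are not specified in the
# example, so it is treated here as a sink that is spread over all pages.
pages = ["A", "B", "C", "D"]
links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
N = len(pages)

# One undamped iteration from the uniform start of 0.25 per page:
# A receives 0.125 from B, 0.25 from C and about 0.083 from D.
r0 = {p: 1.0 / N for p in pages}
print(round(sum(r0[j] / len(out) for j, out in links.items() if "A" in out), 3))  # 0.458

# Column-stochastic matrix M[i, j] = 1 / L(j) whenever page j links to page i.
idx = {p: i for i, p in enumerate(pages)}
M = np.zeros((N, N))
for j, out in links.items():
    targets = out if out else pages               # dangling page -> all pages
    for i in targets:
        M[idx[i], idx[j]] = 1.0 / len(targets)

# Damped power iteration PR <- d * M @ PR + (1 - d) / N, with d = 0.85.
d = 0.85
r = np.full(N, 1.0 / N)
for _ in range(100):
    r_new = d * (M @ r) + (1.0 - d) / N
    if np.abs(r_new - r).sum() < 1e-10:
        break
    r = r_new
print({p: round(float(v), 3) for p, v in zip(pages, r)})  # sums to 1; A ranks highest
```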
A statement in Page and Brin's paper that "the sum of all PageRanks is one"[5]and claims by other Google employees[32]support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[5] Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of arandom surferwho reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as aMarkov chainin which the states are pages, and the transitions are the links between pages – all of which are all equally probable. If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks anotherURLat random and continues surfing again. When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability,d, is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows: wherep1,p2,...,pN{\displaystyle p_{1},p_{2},...,p_{N}}are the pages under consideration,M(pi){\displaystyle M(p_{i})}is the set of pages that link topi{\displaystyle p_{i}},L(pj){\displaystyle L(p_{j})}is the number of outbound links on pagepj{\displaystyle p_{j}}, andN{\displaystyle N}is the total number of pages. The PageRank values are the entries of the dominant righteigenvectorof the modifiedadjacency matrixrescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is whereRis the solution of the equation where the adjacency functionℓ(pi,pj){\displaystyle \ell (p_{i},p_{j})}is the ratio between number of links outbound from page j to page i to the total number of outbound links of page j. The adjacency function is 0 if pagepj{\displaystyle p_{j}}does not link topi{\displaystyle p_{i}}, and normalized such that, for eachj i.e. the elements of each column sum up to 1, so the matrix is astochastic matrix(for more details see thecomputationsection below). Thus this is a variant of theeigenvector centralitymeasure used commonly innetwork analysis. Because of the largeeigengapof the modified adjacency matrix above,[33]the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations. Google's founders, in their original paper,[31]reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. 
Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear inlog⁡n{\displaystyle \log n}, where n is the size of the network. As a result ofMarkov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equalt−1{\displaystyle t^{-1}}wheret{\displaystyle t}is theexpectationof the number of clicks (or random jumps) required to get from the page back to itself. One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such asWikipedia). Several strategies have been proposed to accelerate the computation of PageRank.[34] Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept,[citation needed]which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it startedactivelypenalizing sites selling paid text links, Google has combattedlink farmsand other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google'strade secrets. PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as thepower iterationmethod[35][36]or the power method. The basic mathematical operations performed are identical. Att=0{\displaystyle t=0}, an initial probability distribution is assumed, usually where N is the total number of pages, andpi;0{\displaystyle p_{i};0}is page i at time 0. At each time step, the computation, as detailed above, yields where d is the damping factor, or in matrix notation whereRi(t)=PR(pi;t){\displaystyle \mathbf {R} _{i}(t)=PR(p_{i};t)}and1{\displaystyle \mathbf {1} }is the column vector of lengthN{\displaystyle N}containing only ones. The matrixM{\displaystyle {\mathcal {M}}}is defined as i.e., whereA{\displaystyle A}denotes theadjacency matrixof the graph andK{\displaystyle K}is the diagonal matrix with the outdegrees in the diagonal. The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when for some smallϵ{\displaystyle \epsilon } i.e., when convergence is assumed. If the matrixM{\displaystyle {\mathcal {M}}}is a transition probability, i.e., column-stochastic andR{\displaystyle \mathbf {R} }is a probability distribution (i.e.,|R|=1{\displaystyle |\mathbf {R} |=1},ER=1{\displaystyle \mathbf {E} \mathbf {R} =\mathbf {1} }whereE{\displaystyle \mathbf {E} }is matrix of all ones), then equation (2) is equivalent to Hence PageRankR{\displaystyle \mathbf {R} }is the principal eigenvector ofM^{\displaystyle {\widehat {\mathcal {M}}}}. A fast and easy way to compute this is using thepower method: starting with an arbitrary vectorx(0){\displaystyle x(0)}, the operatorM^{\displaystyle {\widehat {\mathcal {M}}}}is applied in succession, i.e., until Note that in equation (3) the matrix on the right-hand side in the parenthesis can be interpreted as whereP{\displaystyle \mathbf {P} }is an initial probability distribution. 
In the current case,P=1N1{\displaystyle \mathbf {P} ={\frac {1}{N}}\mathbf {1} }. Finally, ifM{\displaystyle {\mathcal {M}}}has columns with only zero values, they should be replaced with the initial probability vectorP{\displaystyle \mathbf {P} }. In other words, where the matrixD{\displaystyle {\mathcal {D}}}is defined as with In this case, the above two computations usingM{\displaystyle {\mathcal {M}}}only give the same PageRank if their results are normalized: The PageRank of an undirectedgraphG{\displaystyle G}is statistically close to thedegree distributionof the graphG{\displaystyle G},[37]but they are generally not identical: IfR{\displaystyle R}is the PageRank vector defined above, andD{\displaystyle D}is the degree distribution vector wheredeg⁡(pi){\displaystyle \deg(p_{i})}denotes the degree of vertexpi{\displaystyle p_{i}}, andE{\displaystyle E}is the edge-set of the graph, then, withY=1N1{\displaystyle Y={1 \over N}\mathbf {1} },[38]shows that: 1−d1+d‖Y−D‖1≤‖R−D‖1≤‖Y−D‖1,{\displaystyle {1-d \over 1+d}\|Y-D\|_{1}\leq \|R-D\|_{1}\leq \|Y-D\|_{1},} that is, the PageRank of an undirected graph is equal to the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree. A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis.[39]In applications it may be necessary to model systems having objects of two kinds where a weighted relation is defined on object pairs. This leads to consideringbipartite graphs. For such graphs two related positive or nonnegative irreducible matrices corresponding to vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perron–Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate. Sarma et al. describe tworandom walk-baseddistributed algorithmsfor computing PageRank of nodes in a network.[40]One algorithm takesO(log⁡n/ϵ){\displaystyle O(\log n/\epsilon )}rounds with high probability on any graph (directed or undirected), where n is the network size andϵ{\displaystyle \epsilon }is the reset probability (1−ϵ{\displaystyle 1-\epsilon }, which is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takesO(log⁡n/ϵ){\displaystyle O({\sqrt {\log n}}/\epsilon )}rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size. TheGoogle Toolbarlong had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The "Toolbar Pagerank" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from itsWebmaster Toolssection, saying that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most importantmetricfor them to track, which is simply not true."[41] The "Toolbar Pagerank" was updated very infrequently. It was last updated in November 2013.
In October 2014 Matt Cutts announced that another visible pagerank update would not be coming.[42]In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate.[43]On April 15, 2016, Google turned off display of PageRank Data in Google Toolbar,[44]though the PageRank continued to be used internally to rank content in search results.[45] Thesearch engine results page(SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[46][unreliable source?]Search engine optimization(SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword.[47]The PageRank of the HomePage of a website is the best indication Google offers for website authority.[48] After the introduction ofGoogle Placesinto the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[49]When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google.[50] TheGoogle DirectoryPageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.[51] It was known that the PageRank shown in the Toolbar could easily bespoofed. Redirection from one page to another, either via aHTTP 302response or a "Refresh"meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. Spoofing can usually be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection. Forsearch engine optimizationpurposes, some companies offer to sell high PageRank links to webmasters.[52]As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). 
The practice of buying and selling[53]is intensely debated across the Webmaster community. Google advised webmasters to use thenofollowHTML attributevalue on paid links. According toMatt Cutts, Google is concerned about webmasters who try togame the system, and thereby reduce the quality and relevance of Google search results.[52] In 2019, Google announced two additional link attributes providing hints about which links to consider or exclude within Search:rel="ugc"as a tag for user-generated content, such as comments; andrel="sponsored"as a tag for advertisements or other types of sponsored content. Multiplerelvalues are also allowed, for example,rel="ugc sponsored"can be used to hint that the link came from user-generated content and is sponsored.[54] Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings.[55] A variant of PageRank models a more intelligent surfer that probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page which, as the name suggests, is also a function of the query. When given a multiple-term query,Q={q1,q2,⋯}{\displaystyle Q=\{q1,q2,\cdots \}}, the surfer selects aq{\displaystyle q}according to some probability distribution,P(q){\displaystyle P(q)}, and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.[56] The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.[57] PageRank has been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm in order to come up with a ranking system for individual publications which propagates to individual authors. The new index, known as the pagerank-index (Pi), is demonstrated to be fairer than the h-index, which exhibits a number of drawbacks.[58] For the analysis of protein networks in biology PageRank is also a useful tool.[59][60] In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[61] A similar newer use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[62] A version of PageRank has recently been proposed as a replacement for the traditionalInstitute for Scientific Information(ISI)impact factor,[63]and implemented atEigenfactoras well as atSCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.
Inneuroscience, the PageRank of aneuronin a neural network has been found to correlate with its relative firing rate.[64] Personalized PageRank is used byTwitterto present users with other accounts they may wish to follow.[65] Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page.[66] AWeb crawlermay use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[67]that were used in the creation of Google isEfficient crawling through URL ordering,[68]which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL. The PageRank may also be used as a methodology to measure the apparent impact of a community like theBlogosphereon the overall Web itself. This approach uses therefore the PageRank to measure the distribution of attention in reflection of theScale-free networkparadigm.[citation needed] In 2005, in a pilot study in Pakistan,Structural Deep Democracy, SD2[69][70]was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 usesPageRankfor the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2 as the underlying umbrella system, mandates that generalist proxies should always be used. In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA;[71]individual soccer players;[72]and athletes in the Diamond League.[73] PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[74][75]Inlexical semanticsit has been used to performWord Sense Disambiguation,[76]Semantic similarity,[77]and also to automatically rankWordNetsynsetsaccording to how strongly they possess a given semantic property, such as positivity or negativity.[78] How a traffic system changes its operational mode can be described by transitions between quasi-stationary states in correlation structures of traffic flow. PageRank has been used to identify and explore the dominant states among these quasi-stationary states in traffic systems.[79] In early 2005, Google implemented a new value, "nofollow",[80]for therelattribute of HTML link and anchor elements, so that website developers andbloggerscan make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combatspamdexing. As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. 
With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See:Spam in blogs#nofollow) In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[81]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[82]
https://en.wikipedia.org/wiki/PageRank
Collaborative filtering(CF) is, besidescontent-based filtering, one of two major techniques used byrecommender systems.[1]Collaborative filtering has two senses, a narrow one and a more general one.[2] In the newer, narrower sense, collaborative filtering is a method of making automaticpredictions(filtering) about auser's interests by utilizing preferences ortasteinformation collected frommany users(collaborating). This approach assumes that if personsAandBshare similar opinions on one issue, they are more likely to agree on other issues compared to a random pairing ofAwith another person. For instance, a collaborative filtering system fortelevisionprogramming could predict which shows a user might enjoy based on a limited list of the user's tastes (likes or dislikes).[3]These predictions are specific to the user, but use information gleaned from many users. This differs from the simpler approach of giving anaverage(non-specific) score for each item of interest, for example based on its number ofvotes. In the more general sense, collaborative filtering is the process of filtering information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc.[2]Applications of collaborative filtering typically involve very large data sets. Collaborative filtering methods have been applied to many kinds of data including: sensing and monitoring data, such as in mineral exploration, environmental sensing over large areas or multiple sensors; financial data, such as financial service institutions that integrate many financial sources; and user data from electronic commerce and web applications. This article focuses on collaborative filtering for user data, but some of the methods also apply to other major applications. Thegrowthof theInternethas made it much more difficult to effectivelyextract useful informationfrom all the availableonline information.[according to whom?]The overwhelming amount of data necessitates mechanisms for efficientinformation filtering.[according to whom?]Collaborative filtering is one of the techniques used for dealing with this problem. The motivation for collaborative filtering comes from the idea that people often get the best recommendations from someone with tastes similar to themselves.[citation needed]Collaborative filtering encompasses techniques for matching people with similar interests and makingrecommendationson this basis. Collaborative filtering algorithms often require (1) users' active participation, (2) an easy way to represent users' interests, and (3) algorithms that are able to match people with similar interests. Typically, the workflow of a collaborative filtering system is: A key problem of collaborative filtering is how to combine and weight the preferences of user neighbors. Sometimes, users can immediately rate the recommended items. As a result, the system gains an increasingly accurate representation of user preferences over time. Collaborative filtering systems have many forms, but many common systems can be reduced to two steps: This falls under the category of user-based collaborative filtering. A specific application of this is the user-basedNearest Neighbor algorithm. Alternatively,item-based collaborative filtering(users who bought x also bought y), proceeds in an item-centric manner: See, for example, theSlope Oneitem-based collaborative filtering family. 
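A minimal sketch of the user-based neighborhood approach just outlined is shown below (Python with NumPy). The toy ratings matrix, the choice of Pearson correlation over co-rated items, and the mean-centred weighted average of the most similar users are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

# Toy ratings matrix (users x items); np.nan marks an unrated item.
R = np.array([
    [5.0,    4.0, np.nan, 1.0],
    [4.0,    5.0, 4.0,    1.0],
    [1.0,    1.0, 2.0,    5.0],
    [np.nan, 1.0, 1.0,    4.0],
])

def pearson(u, v):
    """Pearson correlation of two users over their co-rated items."""
    mask = ~np.isnan(R[u]) & ~np.isnan(R[v])
    a = R[u, mask] - R[u, mask].mean()
    b = R[v, mask] - R[v, mask].mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def predict(u, i, k=2):
    """Mean-centred weighted average of the k most similar users who rated item i."""
    candidates = [v for v in range(R.shape[0]) if v != u and not np.isnan(R[v, i])]
    sims = sorted(((pearson(u, v), v) for v in candidates), reverse=True)[:k]
    num = sum(s * (R[v, i] - np.nanmean(R[v])) for s, v in sims)
    den = sum(abs(s) for s, _ in sims)
    return np.nanmean(R[u]) if den == 0 else np.nanmean(R[u]) + num / den

print(round(predict(0, 2), 2))   # user 0's predicted rating for item 2
```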
Another form of collaborative filtering can be based on implicit observations of normal user behavior (as opposed to the artificial behavior imposed by a rating task). These systems observe what a user has done together with what all users have done (what music they have listened to, what items they have bought) and use that data to predict the user's behavior in the future, or to predict how a user might like to behave given the chance. These predictions then have to be filtered throughbusiness logicto determine how they might affect the actions of a business system. For example, it is not useful to offer to sell somebody a particular album of music if they already have demonstrated that they own that music. Relying on a scoring or rating system which is averaged across all users ignores specific demands of a user, and is particularly poor in tasks where there is large variation in interest (as in the recommendation of music). However, there are other methods to combat information explosion, such aswebsearch anddata clustering. The memory-based approach uses user rating data to compute the similarity between users or items. Typical examples of this approach are neighbourhood-based CF and item-based/user-based top-N recommendations. For example, in user based approaches, the value of ratings userugives to itemiis calculated as an aggregation of some similar users' rating of the item: whereUdenotes the set of topNusers that are most similar to useruwho rated itemi. Some examples of the aggregation function include: where k is a normalizing factor defined ask=1/∑u′∈U|simil⁡(u,u′)|{\displaystyle k=1/\sum _{u^{\prime }\in U}|\operatorname {simil} (u,u^{\prime })|}, and whereru¯{\displaystyle {\bar {r_{u}}}}is the average rating of userufor all the items rated byu. The neighborhood-based algorithm calculates the similarity between two users or items, and produces a prediction for the user by taking theweighted averageof all the ratings. Similarity computation between items or users is an important part of this approach. Multiple measures, such asPearson correlationandvector cosinebased similarity are used for this. The Pearson correlation similarity of two usersx,yis defined as where Ixyis the set of items rated by both userxand usery. The cosine-based approach defines the cosine-similarity between two usersxandyas:[4] The user based top-N recommendation algorithm uses a similarity-based vector model to identify thekmost similar users to an active user. After thekmost similar users are found, their corresponding user-item matrices are aggregated to identify the set of items to be recommended. A popular method to find the similar users is theLocality-sensitive hashing, which implements thenearest neighbor mechanismin linear time. The advantages with this approach include: the explainability of the results, which is an important aspect of recommendation systems; easy creation and use; easy facilitation of new data; content-independence of the items being recommended; good scaling with co-rated items. There are also several disadvantages of this approach. Its performance decreases whendata is sparse, which is common for web-related items. This hinders thescalabilityof this approach and creates problems with large datasets. Although it can efficiently handle new users because it relies on adata structure, adding new items becomes more complicated because that representation usually relies on a specificvector space. 
Adding new items requires inclusion of the new item and the re-insertion of all the elements in the structure. An alternative to memory-based methods is tolearnmodels to predict users' rating of unrated items. Model-based CF algorithms includeBayesian networks,clustering models,latent semantic modelssuch assingular value decomposition,probabilistic latent semantic analysis, multiple multiplicative factor,latent Dirichlet allocationandMarkov decision process-based models.[5] Through this approach,dimensionality reductionmethods are mostly used for improving robustness and accuracy of memory-based methods. Specifically, methods likesingular value decomposition,principal component analysis, known as latent factor models, compress a user-item matrix into a low-dimensional representation in terms of latent factors. This transforms the large matrix that contains many missing values, into a much smaller matrix. A compressed matrix can be used to find neighbors of a user or item as per the previous section. Compression has two advantages in large,sparsedata: it is more accurate and scales better.[6] A number of applications combine the memory-based and the model-based CF algorithms. These overcome the limitations of native CF approaches and improve prediction performance. Importantly, they overcome the CF problems such as sparsity and loss of information. However, they have increased complexity and are expensive to implement.[7]Usually most commercial recommender systems are hybrid, for example, the Google news recommender system.[8] In recent years, many neural and deep-learning techniques have been proposed for collaborative filtering. Some generalize traditionalmatrix factorizationalgorithms via a non-linear neural architecture,[9]or leverage new model types like VariationalAutoencoders.[10]Deep learning has been applied to many scenarios (context-aware, sequence-aware, social tagging etc.). However, deep learning effectiveness for collaborative recommendation has been questioned. A systematic analysis of publications using deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys), found that, on average, less than 40% of articles are reproducible, and only 14% in some conferences. Overall, the study identifies 18 articles, only 7 of them could be reproduced and 6 could be outperformed by older and simpler properly tuned baselines. The article highlights potential problems in today's research scholarship and calls for improved scientific practices.[11]Similar issues have been spotted by others[12]and also in sequence-aware recommender systems.[13] Many recommender systems simply ignore other contextual information existing alongside user's rating in providing item recommendation.[14]However, by pervasive availability of contextual information such as time, location, social information, and type of the device that user is using, it is becoming more important than ever for a successful recommender system to provide a context-sensitive recommendation. According to Charu Aggrawal, "Context-sensitive recommender systems tailor their recommendations to additional information that defines the specific situation under which recommendations are made. This additional information is referred to as the context."[6] Taking contextual information into consideration, we will have additional dimension to the existing user-item rating matrix. 
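The latent factor models mentioned above can be sketched in a few lines of NumPy. The snippet below factorizes a toy set of observed ratings into low-rank user and item factor matrices by stochastic gradient descent; the data, rank and hyperparameters are invented for illustration, and real systems would add bias terms, regularization tuning and proper evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed ratings as (user, item, rating) triples; everything else is missing.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 3, 1),
           (2, 1, 1), (2, 3, 5), (3, 2, 5), (3, 3, 4)]
n_users, n_items, k = 4, 4, 2

P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, reg = 0.05, 0.02

# Stochastic gradient descent on the observed entries only.
for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# The compressed representation predicts every cell, including the missing ones.
print(np.round(P @ Q.T, 1))
```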
For instance, consider a music recommender system that provides different recommendations depending on the time of day: a user may prefer different music in the morning than in the evening. Thus, instead of a user-item matrix, we may use a tensor of order 3 (or higher, to account for additional contexts) to represent context-sensitive users' preferences.[15][16][17] In order to take advantage of collaborative filtering, and particularly of neighborhood-based methods, approaches can be extended from a two-dimensional rating matrix into a tensor of higher order[citation needed]. For this purpose, the approach to finding the most similar (like-minded) users to a target user is to extract and compute the similarity of the slices (e.g. the item-time matrix) corresponding to each user. Unlike the context-insensitive case, in which the similarity of two rating vectors is calculated, in context-aware approaches the similarity of the rating matrices corresponding to each user is calculated using Pearson coefficients.[6] After the most like-minded users are found, their corresponding ratings are aggregated to identify the set of items to be recommended to the target user. The most important disadvantage of taking context into the recommendation model is having to deal with a larger dataset that contains many more missing values than the user-item rating matrix[citation needed]. Therefore, similar to matrix factorization methods, tensor factorization techniques can be used to reduce the dimensionality of the original data before any neighborhood-based methods are applied[citation needed]. Unlike the traditional model of mainstream media, in which a few editors set guidelines, collaboratively filtered social media can have a very large number of editors, and content improves as the number of participants increases. Services like Reddit, YouTube, and Last.fm are typical examples of collaborative filtering based media.[18] One scenario of collaborative filtering application is to recommend interesting or popular information as judged by the community. As a typical example, stories appear on the front page of Reddit as they are "voted up" (rated positively) by the community. As the community becomes larger and more diverse, the promoted stories can better reflect the average interest of the community members. Wikipedia is another application of collaborative filtering: volunteers contribute to the encyclopedia by filtering out facts from falsehoods.[19] Another aspect of collaborative filtering systems is their ability to generate more personalized recommendations by analyzing information from the past activity of a specific user, or from the history of other users deemed to be of similar taste to a given user. These resources are used for user profiling and help the site recommend content on a user-by-user basis. The more a given user makes use of the system, the better the recommendations become, as the system gains data to improve its model of that user. A collaborative filtering system does not necessarily succeed in automatically matching content to one's preferences. Unless the platform achieves unusually good diversity and independence of opinions, one point of view will always dominate another in a particular community. As in the personalized recommendation scenario, the introduction of new users or new items can cause the cold start problem, as there will be insufficient data on these new entries for the collaborative filtering to work accurately.
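Returning to the order-3 tensor representation described at the start of this section, the sketch below stores toy ratings in a users × items × time-of-day tensor and compares users by the similarity of their item-time slices. For brevity it uses cosine similarity on the flattened slices rather than the Pearson coefficient mentioned above; all values are invented for illustration.

```python
import numpy as np

# Ratings as an order-3 tensor: users x items x time-of-day (morning, evening).
# 0 marks a missing rating.
T = np.zeros((3, 4, 2))
T[0, 0, 0], T[0, 1, 1], T[0, 2, 0] = 5, 2, 4
T[1, 0, 0], T[1, 2, 0], T[1, 3, 1] = 4, 5, 1
T[2, 1, 1], T[2, 3, 1] = 5, 4

def slice_similarity(u, v):
    """Cosine similarity between the flattened item-time slices of two users."""
    a, b = T[u].ravel(), T[v].ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

target = 0
others = [v for v in range(T.shape[0]) if v != target]
most_similar = max(others, key=lambda v: slice_similarity(target, v))
print(most_similar, round(slice_similarity(target, most_similar), 2))
```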
In order to make appropriate recommendations for a new user, the system must first learn the user's preferences by analysing past voting or rating activities. The collaborative filtering system requires a substantial number of users to rate a new item before that item can be recommended. In practice, many commercial recommender systems are based on large datasets. As a result, the user-item matrix used for collaborative filtering could be extremely large and sparse, which brings about challenges in the performance of the recommendation. One typical problem caused by the data sparsity is thecold startproblem. As collaborative filtering methods recommend items based on users' past preferences, new users will need to rate a sufficient number of items to enable the system to capture their preferences accurately and thus provides reliable recommendations. Similarly, new items also have the same problem. When new items are added to the system, they need to be rated by a substantial number of users before they could be recommended to users who have similar tastes to the ones who rated them. The new item problem does not affectcontent-based recommendations, because the recommendation of an item is based on its discrete set of descriptive qualities rather than its ratings. As the numbers of users and items grow, traditional CF algorithms will suffer serious scalability problems[citation needed]. For example, with tens of millions of customersO(M){\displaystyle O(M)}and millions of itemsO(N){\displaystyle O(N)}, a CF algorithm with the complexity ofn{\displaystyle n}is already too large. As well, many systems need to react immediately to online requirements and make recommendations for all users regardless of their millions of users, with most computations happening in very large memory machines.[20] Synonymsrefers to the tendency of a number of the same or very similar items to have different names or entries. Most recommender systems are unable to discover this latent association and thus treat these products differently. For example, the seemingly different items "children's movie" and "children's film" are actually referring to the same item. Indeed, the degree of variability in descriptive term usage is greater than commonly suspected.[citation needed]The prevalence of synonyms decreases the recommendation performance of CF systems. Topic Modeling (like theLatent Dirichlet Allocationtechnique) could solve this by grouping different words belonging to the same topic.[citation needed] Gray sheep refers to the users whose opinions do not consistently agree or disagree with any group of people and thus do not benefit from collaborative filtering.Black sheepare a group whose idiosyncratic tastes make recommendations nearly impossible. Although this is a failure of the recommender system, non-electronic recommenders also have great problems in these cases, so having black sheep is an acceptable failure.[disputed–discuss] In a recommendation system where everyone can give the ratings, people may give many positive ratings for their own items and negative ratings for their competitors'. It is often necessary for the collaborative filtering systems to introduce precautions to discourage such manipulations. Collaborative filters are expected to increase diversity because they help us discover new products. Some algorithms, however, may unintentionally do the opposite. 
Because collaborative filters recommend products based on past sales or ratings, they cannot usually recommend products with limited historical data. This can create a rich-get-richer effect for popular products, akin topositive feedback. This bias toward popularity can prevent what are otherwise better consumer-product matches. AWhartonstudy details this phenomenon along with several ideas that may promote diversity and the "long tail."[21]Several collaborative filtering algorithms have been developed to promote diversity and the "long tail"[22]by recommending novel,[23]unexpected,[24]and serendipitous items.[25] User-item matrix is a basic foundation of traditional collaborative filtering techniques, and it suffers from data sparsity problem (i.e.cold start). As a consequence, except for user-item matrix, researchers are trying to gather more auxiliary information to help boost recommendation performance and develop personalized recommender systems.[28]Generally, there are two popular auxiliary information: attribute information and interaction information. Attribute information describes a user's or an item's properties. For example, user attribute might include general profile (e.g. gender and age) and social contacts (e.g. followers or friends insocial networks); Item attribute means properties like category, brand or content. In addition, interaction information refers to the implicit data showing how users interplay with the item. Widely used interaction information contains tags, comments or reviews and browsing history etc. Auxiliary information plays a significant role in a variety of aspects. Explicit social links, as a reliable representative of trust or friendship, is always employed in similarity calculation to find similar persons who share interest with the target user.[29][30]The interaction-associated information – tags – is taken as a third dimension (in addition to user and item) in advanced collaborative filtering to construct a 3-dimensional tensor structure for exploration of recommendation.[31]
https://en.wikipedia.org/wiki/Collaborative_filtering#Matrix_factorization
Clustering can refer to a number of concepts in computing, in economics, and in graph theory.
https://en.wikipedia.org/wiki/Clustering
Instatisticsandnatural language processing, atopic modelis a type ofstatistical modelfor discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear approximately equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words. The "topics" produced by topic modeling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework, which allows examining a set of documents and discovering, based on the statistics of the words in each, what the topics might be and what each document's balance of topics is. Topic models are also referred to as probabilistic topic models, which refers to statistical algorithms for discovering the latent semantic structures of an extensive text body. In the age of information, the amount of the written material we encounter each day is simply beyond our processing capacity. Topic models can help to organize and offer insights for us to understand large collections of unstructured text bodies. Originally developed as a text-mining tool, topic models have been used to detect instructive structures in data such as genetic information, images, and networks. They also have applications in other fields such asbioinformatics[1]andcomputer vision.[2] An early topic model was described by Papadimitriou, Raghavan, Tamaki and Vempala in 1998.[3]Another one, calledprobabilistic latent semantic analysis(PLSA), was created by Thomas Hofmann in 1999.[4]Latent Dirichlet allocation(LDA), perhaps the most common topic model currently in use, is a generalization of PLSA. Developed byDavid Blei,Andrew Ng, andMichael I. Jordanin 2002, LDA introduces sparseDirichlet prior distributionsover document-topic and topic-word distributions, encoding the intuition that documents cover a small number of topics and that topics often use a small number of words.[5]Other topic models are generally extensions on LDA, such asPachinko allocation, which improves on LDA by modeling correlations between topics in addition to the word correlations which constitute topics. Hierarchical latent tree analysis (HLTA) is an alternative to LDA, which models word co-occurrence using a tree of latent variables and the states of the latent variables, which correspond to soft clusters of documents, are interpreted as topics. Approaches for temporal information include Block and Newman's determination of the temporal dynamics of topics in thePennsylvania Gazetteduring 1728–1800.Griffiths& Steyvers used topic modeling on abstracts from the journalPNASto identify topics that rose or fell in popularity from 1991 to 2001 whereas Lamba & Madhusushan[6]used topic modeling on full-text research articles retrieved from DJLIT journal from 1981 to 2018. In the field of library and information science, Lamba & Madhusudhan[6][7][8][9]applied topic modeling on different Indian resources like journal articles and electronic theses and resources (ETDs). 
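As a small, concrete example of the LDA model discussed above, the following sketch uses scikit-learn (assumed to be available) to fit a two-topic model on an invented toy corpus and print the top words of each discovered topic together with each document's topic proportions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus invented for illustration: two latent themes (dogs, cats).
docs = [
    "dog bone dog bark bone",
    "dog bone walk dog",
    "cat meow cat purr",
    "cat meow whiskers purr cat",
    "dog cat bone meow",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)                      # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                       # per-document topic proportions

# Show the top words of each discovered topic.
vocab = vectorizer.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {t}: {top}")
print(doc_topics.round(2))                              # note the mixed last document
```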
Nelson[10]has been analyzing change in topics over time in theRichmond Times-Dispatchto understand social and political changes and continuities in Richmond during theAmerican Civil War. Yang, Torget and Mihalcea applied topic modeling methods to newspapers from 1829 to 2008. Mimno used topic modelling with 24 journals on classical philology and archaeology spanning 150 years to look at how topics in the journals change over time and how the journals become more different or similar over time. Yin et al.[11]introduced a topic model for geographically distributed documents, where document positions are explained by latent regions which are detected during inference. Chang and Blei[12]included network information between linked documents in the relational topic model, to model the links between websites. The author-topic model by Rosen-Zvi et al.[13]models the topics associated with authors of documents to improve the topic detection for documents with authorship information. HLTA was applied to a collection of recent research papers published at major AI and Machine Learning venues. The resulting model is calledThe AI Tree. The resulting topics are used to index the papers ataipano.cse.ust.hkto help researcherstrack research trends and identify papers to read, and help conference organizers and journal editorsidentify reviewers for submissions. To improve the qualitative aspects and coherency of generated topics, some researchers have explored the efficacy of "coherence scores", or otherwise how computer-extracted clusters (i.e. topics) align with a human benchmark.[14][15]Coherence scores are metrics for optimising the number of topics to extract from a document corpus.[16] In practice, researchers attempt to fit appropriate model parameters to the data corpus using one of several heuristics for maximum likelihood fit. A survey by D. Blei describes this suite of algorithms.[17]Several groups of researchers starting with Papadimitriou et al.[3]have attempted to design algorithms with provable guarantees. Assuming that the data were actually generated by the model in question, they try to design algorithms that probably find the model that was used to create the data. Techniques used here includesingular value decomposition(SVD) and themethod of moments. In 2012 an algorithm based uponnon-negative matrix factorization(NMF) was introduced that also generalizes to topic models with correlations among topics.[18] In 2017, neural network has been leveraged in topic modeling to make it faster in inference,[19]which has been extended weakly supervised version.[20] In 2018 a new approach to topic models was proposed: it is based onstochastic block model.[21] Because of the recent development of LLM, topic modeling has leveraged LLM through contextual embedding[22]and fine tuning.[23] Topic models are being used also in other contexts. For examples uses of topic models in biology and bioinformatics research emerged.[24]Recently topic models has been used to extract information from dataset of cancers' genomic samples.[25]In this case topics are biological latent variables to be inferred. Topic models can be used for analysis of continuous signals like music. For instance, they were used to quantify how musical styles change in time, and identify the influence of specific artists on later music creation.[26]
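The NMF-based approach mentioned above can be sketched in the same way: factorize a TF-IDF document-term matrix into non-negative document-topic and topic-word factors. Again, scikit-learn is assumed to be available and the corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy corpus with two obvious themes (finance, genomics).
docs = [
    "stock market trading price",
    "market price stock shares",
    "genome gene sequencing dna",
    "dna gene expression genome",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                   # document-topic weights
vocab = vec.get_feature_names_out()
for t, row in enumerate(nmf.components_):  # topic-word weights
    print(f"topic {t}:", [vocab[i] for i in row.argsort()[::-1][:3]])
print(W.round(2))
```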
https://en.wikipedia.org/wiki/Topic_modeling
Information extraction(IE) is the task of automatically extractingstructured informationfromunstructuredand/or semi-structuredmachine-readabledocuments and other electronically represented sources. Typically, this involves processing human language texts by means ofnatural language processing(NLP).[1]Recent activities inmultimediadocument processing like automatic annotation and content extraction out of images/audio/video/documents could be seen as information extraction. Recent advances in NLP techniques have allowed for significantly improved performance compared to previous years.[2]An example is the extraction from newswire reports of corporate mergers, such as denoted by the formal relation: from an online news sentence such as: A broad goal of IE is to allow computation to be done on the previously unstructured data. A more specific goal is to allowautomated reasoningabout thelogical formof the input data. Structured data is semantically well-defined data from a chosen target domain, interpreted with respect to category andcontext. Information extraction is the part of a greater puzzle which deals with the problem of devising automatic methods for text management, beyond its transmission, storage and display. The discipline ofinformation retrieval(IR)[3]has developed automatic methods, typically of a statistical flavor, for indexing large document collections and classifying documents. Another complementary approach is that ofnatural language processing(NLP) which has solved the problem of modelling human language processing with considerable success when taking into account the magnitude of the task. In terms of both difficulty and emphasis, IE deals with tasks in between both IR and NLP. In terms of input, IE assumes the existence of a set of documents in which each document follows a template, i.e. describes one or more entities or events in a manner that is similar to those in other documents but differing in the details. An example, consider a group of newswire articles on Latin American terrorism with each article presumed to be based upon one or more terroristic acts. We also define for any given IE task a template, which is a(or a set of) case frame(s) to hold the information contained in a single document. For the terrorism example, a template would have slots corresponding to the perpetrator, victim, and weapon of the terroristic act, and the date on which the event happened. An IE system for this problem is required to "understand" an attack article only enough to find data corresponding to the slots in this template. Information extraction dates back to the late 1970s in the early days of NLP.[4]An early commercial system from the mid-1980s was JASPER built forReutersby the Carnegie Group Inc with the aim of providingreal-time financial newsto financial traders.[5] Beginning in 1987, IE was spurred by a series ofMessage Understanding Conferences. MUC is a competition-based conference[6]that focused on the following domains: Considerable support came from the U.S. 
Defense Advanced Research Projects Agency (DARPA), who wished to automate mundane tasks performed by government analysts, such as scanning newspapers for possible links to terrorism.[citation needed] The present significance of IE pertains to the growing amount of information available in unstructured form.Tim Berners-Lee, inventor of theWorld Wide Web, refers to the existingInternetas the web ofdocuments[7]and advocates that more of the content be made available as aweb ofdata.[8]Until this transpires, the web largely consists of unstructured documents lacking semanticmetadata. Knowledge contained within these documents can be made more accessible for machine processing by means of transformation intorelational form, or by marking-up withXMLtags. An intelligent agent monitoring a news data feed requires IE to transform unstructured data into something that can be reasoned with. A typical application of IE is to scan a set of documents written in anatural languageand populate a database with the information extracted.[9] Applying information extraction to text is linked to the problem oftext simplificationin order to create a structured view of the information present in free text. The overall goal being to create a more easily machine-readable text to process the sentences. Typical IE tasks and subtasks include: Note that this list is not exhaustive and that the exact meaning of IE activities is not commonly accepted and that many approaches combine multiple sub-tasks of IE in order to achieve a wider goal. Machine learning, statistical analysis and/or natural language processing are often used in IE. IE on non-text documents is becoming an increasingly interesting topic[when?]in research, and information extracted from multimedia documents can now[when?]be expressed in a high level structure as it is done on text. This naturally leads to the fusion of extracted information from multiple kinds of documents and sources. IE has been the focus of the MUC conferences. The proliferation of theWeb, however, intensified the need for developing IE systems that help people to cope with theenormous amount of datathat are available online. Systems that perform IE from online text should meet the requirements of low cost, flexibility in development and easy adaptation to new domains. MUC systems fail to meet those criteria. Moreover, linguistic analysis performed for unstructured text does not exploit the HTML/XMLtags and the layout formats that are available in online texts. As a result, less linguistically intensive approaches have been developed for IE on the Web usingwrappers, which are sets of highly accurate rules that extract a particular page's content. Manually developing wrappers has proved to be a time-consuming task, requiring a high level of expertise.Machine learningtechniques, eithersupervisedorunsupervised, have been used to induce such rules automatically. Wrapperstypically handle highly structured collections of web pages, such as product catalogs and telephone directories. They fail, however, when the text type is less structured, which is also common on the Web. Recent effort onadaptive information extractionmotivates the development of IE systems that can handle different types of text, from well-structured to almost free text -where common wrappers fail- including mixed types. Such systems can exploit shallow natural language knowledge and thus can be also applied to less structured texts. 
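A deliberately minimal, rule-based sketch of the template-filling idea described above: one hand-written regular expression maps a news-style sentence about a corporate merger onto a relation template with company and date slots. The sentence, the pattern and the slot names are all invented for illustration; real IE systems rely on far more robust linguistic or learned extractors.

```python
import re

# One hand-written pattern covering only this particular phrasing.
pattern = re.compile(
    r"(?P<buyer>[A-Z][\w&. ]+?) (?:agreed to merge with|acquired) "
    r"(?P<target>[A-Z][\w&. ]+?) on (?P<date>\w+ \d{1,2}, \d{4})"
)

sentence = "Foo Corp. agreed to merge with Bar Inc. on April 3, 1998."
match = pattern.search(sentence)
if match:
    # Fill the slots of a MergerBetween template.
    template = {
        "relation": "MergerBetween",
        "company1": match.group("buyer").strip(),
        "company2": match.group("target").strip(),
        "date": match.group("date"),
    }
    print(template)
```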
A recent[when?] development is Visual Information Extraction,[16][17] which relies on rendering a webpage in a browser and creating rules based on the proximity of regions in the rendered page. This helps in extracting entities from complex web pages that may exhibit a visual pattern but lack a discernible pattern in the HTML source code. Several standard approaches are now widely accepted, and numerous other approaches exist, including hybrid approaches that combine some of the standard ones.
https://en.wikipedia.org/wiki/Information_extraction
Bootstrap aggregating, also calledbagging(frombootstrapaggregating) orbootstrapping, is amachine learning(ML)ensemblemeta-algorithmdesigned to improve thestabilityand accuracy of MLclassificationandregressionalgorithms. It also reducesvarianceandoverfitting. Although it is usually applied todecision treemethods, it can be used with any type of method. Bagging is a special case of theensemble averagingapproach. Given a standardtraining setD{\displaystyle D}of sizen{\displaystyle n}, bagging generatesm{\displaystyle m}new training setsDi{\displaystyle D_{i}}, each of sizen′{\displaystyle n'}, bysamplingfromD{\displaystyle D}uniformlyandwith replacement. By sampling with replacement, some observations may be repeated in eachDi{\displaystyle D_{i}}. Ifn′=n{\displaystyle n'=n}, then for largen{\displaystyle n}the setDi{\displaystyle D_{i}}is expected to have the fraction (1 - 1/e) (~63.2%) of the unique samples ofD{\displaystyle D}, the rest being duplicates.[1]This kind of sample is known as abootstrapsample. Sampling with replacement ensures each bootstrap is independent from its peers, as it does not depend on previous chosen samples when sampling. Then,m{\displaystyle m}models are fitted using the above bootstrap samples and combined by averaging the output (for regression) or voting (for classification). Bagging leads to "improvements for unstable procedures",[2]which include, for example,artificial neural networks,classification and regression trees, and subset selection inlinear regression.[3]Bagging was shown to improve preimage learning.[4][5]On the other hand, it can mildly degrade the performance of stable methods such ask-nearest neighbors.[2] There are three types of datasets in bootstrap aggregating. These are theoriginal, bootstrap, and out-of-bag datasets.Each section below will explain how each dataset is made except for the original dataset. The original dataset is whatever information is given. The bootstrap dataset is made by randomly picking objects from the original dataset. Also,it must be the same size as the original dataset.However, the difference is that the bootstrap dataset can have duplicate objects. Here is a simple example to demonstrate how it works along with the illustration below: Suppose theoriginal datasetis agroup of 12 people.Their names areEmily, Jessie, George, Constantine, Lexi, Theodore, John, James, Rachel, Anthony, Ellie, and Jamal. By randomly picking a group of names, let us sayour bootstrap datasethadJames, Ellie, Constantine, Lexi, John, Constantine, Theodore, Constantine, Anthony, Lexi, Constantine, and Theodore.In this case, the bootstrap sample contained four duplicates for Constantine, and two duplicates for Lexi, and Theodore. The out-of-bag datasetrepresents the remaining people who were not in the bootstrap dataset.It can be calculated by taking the difference between the original and the bootstrap datasets. In this case, the remaining samples who were not selected areEmily, Jessie, George, Rachel, and Jamal.Keep in mind that since both datasets are sets, when taking the difference the duplicate names are ignored in the bootstrap dataset. The illustration below shows how the math is done: Creating the bootstrap and out-of-bag datasets is crucial since it is used to test the accuracy ofensemble learningalgorithms likerandom forest. For example, a model that produces 50 trees using the bootstrap/out-of-bag datasets will have a better accuracy than if it produced 10 trees. 
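The construction of the bootstrap and out-of-bag datasets described above can be written in a few lines of Python. The sketch below resamples the same twelve names with replacement and takes the set difference to obtain the out-of-bag people; because the draw is random, the particular duplicates will differ from the worked example in the text.

```python
import random

random.seed(0)

# The original dataset of 12 people from the example above.
original = ["Emily", "Jessie", "George", "Constantine", "Lexi", "Theodore",
            "John", "James", "Rachel", "Anthony", "Ellie", "Jamal"]

# Bootstrap dataset: sample with replacement, same size as the original,
# so duplicates are possible.
bootstrap = random.choices(original, k=len(original))

# Out-of-bag dataset: everyone in the original who was never drawn.
out_of_bag = sorted(set(original) - set(bootstrap))

print("bootstrap: ", bootstrap)
print("out-of-bag:", out_of_bag)
print("unique fraction (expected around 63.2% for large n):",
      round(len(set(bootstrap)) / len(original), 2))
```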
Since the algorithm generates multiple trees and therefore multiple datasets the chance that an object is left out of the bootstrap dataset is low. The next few sections talk about how the random forest algorithm works in more detail. The next step of the algorithm involves the generation ofdecision treesfrom the bootstrapped dataset. To achieve this, the process examines each gene/feature and determines for how many samples the feature's presence or absence yields a positive or negative result. This information is then used to compute aconfusion matrix, which lists the true positives, false positives, true negatives, and false negatives of the feature when used as a classifier. These features are then ranked according to variousclassification metricsbased on their confusion matrices. Some common metrics include estimate of positive correctness (calculated by subtracting false positives from true positives), measure of "goodness", andinformation gain. These features are then used to partition the samples into two sets: those that possess the top feature, and those that do not. The diagram below shows a decision tree of depth two being used to classify data. For example, a data point that exhibits Feature 1, but not Feature 2, will be given a "No". Another point that does not exhibit Feature 1, but does exhibit Feature 3, will be given a "Yes". This process is repeated recursively for successive levels of the tree until the desired depth is reached. At the very bottom of the tree, samples that test positive for the final feature are generally classified as positive, while those that lack the feature are classified as negative. These trees are then used as predictors to classify new data. The next part of the algorithm involves introducing yet another element of variability amongst the bootstrapped trees. In addition to each tree only examining a bootstrapped set of samples, only a small but consistent number of unique features are considered when ranking them as classifiers. This means that each tree only knows about the data pertaining to a small constant number of features, and a variable number of samples that is less than or equal to that of the original dataset. Consequently, the trees are more likely to return a wider array of answers, derived from more diverse knowledge. This results in arandom forest, which possesses numerous benefits over a single decision tree generated without randomness. In a random forest, each tree "votes" on whether or not to classify a sample as positive based on its features. The sample is then classified based on majority vote. An example of this is given in the diagram below, where the four trees in a random forest vote on whether or not a patient with mutations A, B, F, and G has cancer. Since three out of four trees vote yes, the patient is then classified as cancer positive. Because of their properties, random forests are considered one of the most accurate data mining algorithms, are less likely tooverfittheir data, and run quickly and efficiently even for large datasets.[6]They are primarily useful for classification as opposed toregression, which attempts to draw observed connections between statistical variables in a dataset. This makes random forests particularly useful in such fields as banking, healthcare, the stock market, ande-commercewhere it is important to be able to predict future results based on past data.[7]One of their applications would be as a useful tool for predicting cancer based on genetic factors, as seen in the above example. 
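A minimal sketch of the bagged-trees-with-majority-vote idea described above, using scikit-learn (assumed available): each shallow decision tree is fitted on its own bootstrap sample of a synthetic dataset, and the ensemble predicts by majority vote. It is intentionally bare and omits the per-split feature subsampling that distinguishes a full random forest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Fit each shallow tree on its own bootstrap sample of the rows.
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))       # bootstrap indices (with replacement)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[idx], y[idx])
    trees.append(tree)

# Each tree "votes"; the ensemble prediction is the majority class.
votes = np.stack([t.predict(X[:5]) for t in trees])  # shape: (n_trees, n_samples)
majority = (votes.mean(axis=0) >= 0.5).astype(int)
print(majority, y[:5])
```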
There are several important factors to consider when designing a random forest. If the trees in the random forests are too deep, overfitting can still occur due to over-specificity. If the forest is too large, the algorithm may become less efficient due to an increased runtime. Random forests also do not generally perform well when given sparse data with little variability.[7]However, they still have numerous advantages over similar data classification algorithms such asneural networks, as they are much easier to interpret and generally require less data for training.[citation needed]As an integral component of random forests, bootstrap aggregating is very important to classification algorithms, and provides a critical element of variability that allows for increased accuracy when analyzing new data, as discussed below. While the techniques described above utilizerandom forestsandbagging(otherwise known as bootstrapping), there are certain techniques that can be used in order to improve their execution and voting time, their prediction accuracy, and their overall performance. The following are key steps in creating an efficient random forest: For classification, use a training setD{\displaystyle D}, InducerI{\displaystyle I}and the number of bootstrap samplesm{\displaystyle m}as input. Generate a classifierC∗{\displaystyle C^{*}}as output[12] To illustrate the basic principles of bagging, below is an analysis on the relationship betweenozoneand temperature (data fromRousseeuwand Leroy[clarification needed](1986), analysis done inR). The relationship between temperature and ozone appears to be nonlinear in this dataset, based on the scatter plot. To mathematically describe this relationship,LOESSsmoothers (with bandwidth 0.5) are used. Rather than building a single smoother for the complete dataset, 100bootstrapsamples were drawn. Each sample is composed of a random subset of the original data and maintains a semblance of the master set's distribution and variability. For each bootstrap sample, a LOESS smoother was fit. Predictions from these 100 smoothers were then made across the range of the data. The black lines represent these initial predictions. The lines lack agreement in their predictions and tend to overfit their data points: evident by the wobbly flow of the lines. By taking the average of 100 smoothers, each corresponding to a subset of the original dataset, we arrive at one bagged predictor (red line). The red line's flow is stable and does not overly conform to any data point(s). Advantages: Disadvantages: The concept of bootstrap aggregating is derived from the concept of bootstrapping which was developed by Bradley Efron.[15]Bootstrap aggregating was proposed byLeo Breimanwho also coined the abbreviated term "bagging" (bootstrapaggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. He argued, "If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy".[3]
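The ozone/LOESS illustration above can be mimicked in Python. Since LOESS is not assumed to be available, the sketch below uses a flexible decision-tree regressor as a stand-in smoother: 100 smoothers are fitted to bootstrap samples of synthetic nonlinear data and their predictions are averaged into a single bagged predictor, the analogue of the stable red line.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic nonlinear data standing in for the temperature/ozone example.
x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
X = x.reshape(-1, 1)

# Fit one flexible smoother per bootstrap sample, then average their predictions.
grid = np.linspace(0, 10, 200).reshape(-1, 1)
preds = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))
    model = DecisionTreeRegressor(max_depth=6).fit(X[idx], y[idx])
    preds.append(model.predict(grid))

bagged = np.mean(preds, axis=0)   # the bagged predictor over the grid
print(bagged[:5].round(2))
```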
https://en.wikipedia.org/wiki/Bootstrapping_(machine_learning)
Inmachine learning,supervised learning(SL) is a paradigm where amodelis trained using input objects (e.g. a vector of predictor variables) and desired output values (also known as asupervisory signal), which are often human-made labels. The training process builds a function that maps new data to expected output values.[1]An optimal scenario will allow for the algorithm to accurately determine output values for unseen instances. This requires the learning algorithm togeneralizefrom the training data to unseen situations in a reasonable way (seeinductive bias). This statistical quality of an algorithm is measured via ageneralization error. To solve a given problem of supervised learning, the following steps must be performed: A wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems (see theNo free lunch theorem). There are four major issues to consider in supervised learning: A first issue is the tradeoff betweenbiasandvariance.[2]Imagine that we have available several different, but equally good, training data sets. A learning algorithm is biased for a particular inputx{\displaystyle x}if, when trained on each of these data sets, it is systematically incorrect when predicting the correct output forx{\displaystyle x}. A learning algorithm has high variance for a particular inputx{\displaystyle x}if it predicts different output values when trained on different training sets. The prediction error of a learned classifier is related to the sum of the bias and the variance of the learning algorithm.[3]Generally, there is a tradeoff between bias and variance. A learning algorithm with low bias must be "flexible" so that it can fit the data well. But if the learning algorithm is too flexible, it will fit each training data set differently, and hence have high variance. A key aspect of many supervised learning methods is that they are able to adjust this tradeoff between bias and variance (either automatically or by providing a bias/variance parameter that the user can adjust). The second issue is of the amount of training data available relative to the complexity of the "true" function (classifier or regression function). If the true function is simple, then an "inflexible" learning algorithm with high bias and low variance will be able to learn it from a small amount of data. But if the true function is highly complex (e.g., because it involves complex interactions among many different input features and behaves differently in different parts of the input space), then the function will only be able to learn with a large amount of training data paired with a "flexible" learning algorithm with low bias and high variance. A third issue is the dimensionality of the input space. If the input feature vectors have large dimensions, learning the function can be difficult even if the true function only depends on a small number of those features. This is because the many "extra" dimensions can confuse the learning algorithm and cause it to have high variance. Hence, input data of large dimensions typically requires tuning the classifier to have low variance and high bias. In practice, if the engineer can manually remove irrelevant features from the input data, it will likely improve the accuracy of the learned function. In addition, there are many algorithms forfeature selectionthat seek to identify the relevant features and discard the irrelevant ones. 
This is an instance of the more general strategy ofdimensionality reduction, which seeks to map the input data into a lower-dimensional space prior to running the supervised learning algorithm. A fourth issue is the degree of noise in the desired output values (the supervisorytarget variables). If the desired output values are often incorrect (because of human error or sensor errors), then the learning algorithm should not attempt to find a function that exactly matches the training examples. Attempting to fit the data too carefully leads tooverfitting. You can overfit even when there are no measurement errors (stochastic noise) if the function you are trying to learn is too complex for your learning model. In such a situation, the part of the target function that cannot be modeled "corrupts" your training data - this phenomenon has been calleddeterministic noise. When either type of noise is present, it is better to go with a higher bias, lower variance estimator. In practice, there are several approaches to alleviate noise in the output values such asearly stoppingto prevent overfitting as well asdetectingand removing the noisy training examples prior to training the supervised learning algorithm. There are several algorithms that identify noisy training examples and removing the suspected noisy training examples prior to training has decreasedgeneralization errorwithstatistical significance.[4][5] Other factors to consider when choosing and applying a learning algorithm include the following: When considering a new application, the engineer can compare multiple learning algorithms and experimentally determine which one works best on the problem at hand (seecross-validation). Tuning the performance of a learning algorithm can be very time-consuming. Given fixed resources, it is often better to spend more time collecting additional training data and more informative features than it is to spend extra time tuning the learning algorithms. The most widely used learning algorithms are: Given a set ofN{\displaystyle N}training examples of the form{(x1,y1),...,(xN,yN)}{\displaystyle \{(x_{1},y_{1}),...,(x_{N},\;y_{N})\}}such thatxi{\displaystyle x_{i}}is thefeature vectorof thei{\displaystyle i}-th example andyi{\displaystyle y_{i}}is its label (i.e., class), a learning algorithm seeks a functiong:X→Y{\displaystyle g:X\to Y}, whereX{\displaystyle X}is the input space andY{\displaystyle Y}is the output space. The functiong{\displaystyle g}is an element of some space of possible functionsG{\displaystyle G}, usually called thehypothesis space. It is sometimes convenient to representg{\displaystyle g}using ascoring functionf:X×Y→R{\displaystyle f:X\times Y\to \mathbb {R} }such thatg{\displaystyle g}is defined as returning they{\displaystyle y}value that gives the highest score:g(x)=arg⁡maxyf(x,y){\displaystyle g(x)={\underset {y}{\arg \max }}\;f(x,y)}. LetF{\displaystyle F}denote the space of scoring functions. AlthoughG{\displaystyle G}andF{\displaystyle F}can be any space of functions, many learning algorithms are probabilistic models whereg{\displaystyle g}takes the form of aconditional probabilitymodelg(x)=arg⁡maxyP(y|x){\displaystyle g(x)={\underset {y}{\arg \max }}\;P(y|x)}, orf{\displaystyle f}takes the form of ajoint probabilitymodelf(x,y)=P(x,y){\displaystyle f(x,y)=P(x,y)}. For example,naive Bayesandlinear discriminant analysisare joint probability models, whereaslogistic regressionis a conditional probability model. 
There are two basic approaches to choosingf{\displaystyle f}org{\displaystyle g}:empirical risk minimizationandstructural risk minimization.[6]Empirical risk minimization seeks the function that best fits the training data. Structural risk minimization includes apenalty functionthat controls the bias/variance tradeoff. In both cases, it is assumed that the training set consists of a sample ofindependent and identically distributed pairs,(xi,yi){\displaystyle (x_{i},\;y_{i})}. In order to measure how well a function fits the training data, aloss functionL:Y×Y→R≥0{\displaystyle L:Y\times Y\to \mathbb {R} ^{\geq 0}}is defined. For training example(xi,yi){\displaystyle (x_{i},\;y_{i})}, the loss of predicting the valuey^{\displaystyle {\hat {y}}}isL(yi,y^){\displaystyle L(y_{i},{\hat {y}})}. TheriskR(g){\displaystyle R(g)}of functiong{\displaystyle g}is defined as the expected loss ofg{\displaystyle g}. This can be estimated from the training data as In empirical risk minimization, the supervised learning algorithm seeks the functiong{\displaystyle g}that minimizesR(g){\displaystyle R(g)}. Hence, a supervised learning algorithm can be constructed by applying anoptimization algorithmto findg{\displaystyle g}. Wheng{\displaystyle g}is a conditional probability distributionP(y|x){\displaystyle P(y|x)}and the loss function is the negative log likelihood:L(y,y^)=−log⁡P(y|x){\displaystyle L(y,{\hat {y}})=-\log P(y|x)}, then empirical risk minimization is equivalent tomaximum likelihood estimation. WhenG{\displaystyle G}contains many candidate functions or the training set is not sufficiently large, empirical risk minimization leads to high variance and poor generalization. The learning algorithm is able to memorize the training examples without generalizing well (overfitting). Structural risk minimizationseeks to prevent overfitting by incorporating aregularization penaltyinto the optimization. The regularization penalty can be viewed as implementing a form ofOccam's razorthat prefers simpler functions over more complex ones. A wide variety of penalties have been employed that correspond to different definitions of complexity. For example, consider the case where the functiong{\displaystyle g}is a linear function of the form A popular regularization penalty is∑jβj2{\displaystyle \sum _{j}\beta _{j}^{2}}, which is the squaredEuclidean normof the weights, also known as theL2{\displaystyle L_{2}}norm. Other norms include theL1{\displaystyle L_{1}}norm,∑j|βj|{\displaystyle \sum _{j}|\beta _{j}|}, and theL0{\displaystyle L_{0}}"norm", which is the number of non-zeroβj{\displaystyle \beta _{j}}s. The penalty will be denoted byC(g){\displaystyle C(g)}. The supervised learning optimization problem is to find the functiong{\displaystyle g}that minimizes The parameterλ{\displaystyle \lambda }controls the bias-variance tradeoff. Whenλ=0{\displaystyle \lambda =0}, this gives empirical risk minimization with low bias and high variance. Whenλ{\displaystyle \lambda }is large, the learning algorithm will have high bias and low variance. The value ofλ{\displaystyle \lambda }can be chosen empirically viacross-validation. The complexity penalty has a Bayesian interpretation as the negative log prior probability ofg{\displaystyle g},−log⁡P(g){\displaystyle -\log P(g)}, in which caseJ(g){\displaystyle J(g)}is theposterior probabilityofg{\displaystyle g}. 
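A minimal sketch of regularized empirical risk minimization for the linear case above: squared loss plus the squared Euclidean (L2) penalty, minimized by plain gradient descent. The synthetic data, learning rate and the two λ values are invented for illustration; comparing the fitted coefficients shows the shrinkage that a large λ induces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: y = X @ beta_true + noise.
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)

def fit(lmbda, lr=0.01, steps=2000):
    """Minimise (1/n) * sum (y - X beta)^2 + lmbda * ||beta||_2^2 by gradient descent."""
    beta = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = -2.0 / n * X.T @ (y - X @ beta) + 2.0 * lmbda * beta
        beta -= lr * grad
    return beta

print(fit(lmbda=0.0).round(2))    # empirical risk minimisation: low bias, higher variance
print(fit(lmbda=10.0).round(2))   # strong penalty: coefficients shrink toward zero (high bias)
```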
The training methods described above are discriminative training methods, because they seek a function g{\displaystyle g} that discriminates well between the different output values (see discriminative model). For the special case where f(x,y)=P(x,y){\displaystyle f(x,y)=P(x,y)} is a joint probability distribution and the loss function is the negative log likelihood −∑ilog⁡P(xi,yi),{\displaystyle -\sum _{i}\log P(x_{i},y_{i}),} a risk minimization algorithm is said to perform generative training, because f{\displaystyle f} can be regarded as a generative model that explains how the data were generated. Generative training algorithms are often simpler and more computationally efficient than discriminative training algorithms. In some cases, the solution can be computed in closed form, as in naive Bayes and linear discriminant analysis. The standard supervised learning problem can also be generalized in several ways.
https://en.wikipedia.org/wiki/Supervised_learning
Inmathematical modeling,overfittingis "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".[1]Anoverfitted modelis amathematical modelthat contains moreparametersthan can be justified by the data.[2]In the special case where the model consists of a polynomial function, these parameters represent thedegree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., thenoise) as if that variation represented underlying model structure.[3]: 45 Underfittingoccurs when a mathematical model cannot adequately capture the underlying structure of the data. Anunder-fitted modelis a model where some parameters or terms that would appear in a correctly specified model are missing.[2]Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used forselecting the modelis not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set oftraining data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit.[4]Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known asshrinkage).[2]In particular, the value of thecoefficient of determinationwillshrinkrelative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g.,model comparison,cross-validation,regularization,early stopping,pruning,Bayesian priors, ordropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter. In statistics, aninferenceis drawn from astatistical model, which has beenselectedvia some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony".[3]The authors also state the following.[3]: 32–33 Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). 
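A small numerical illustration of the memorization effect described above, using NumPy's polynomial fitting: with 15 noisy training points, a degree-14 polynomial (as many parameters as observations) drives the training error to nearly zero, while the error on fresh data from the same source is typically far larger than that of a low-degree fit. The data-generating function and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying quadratic trend.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    return x, x ** 2 + 0.1 * rng.normal(size=n)

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)

for degree in (2, 14):
    coeffs = np.polyfit(x_train, y_train, degree)          # least-squares polynomial fit
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```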
False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting. Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The bookModel Selection and Model Averaging(2008) puts it this way.[5] Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is themonkey who typed Hamletactually a good writer? Inregression analysis, overfitting occurs frequently.[6]As an extreme example, if there arepvariables in alinear regressionwithpdata points, the fitted line can go exactly through every point.[7]Forlogistic regressionor Coxproportional hazards models, there are a variety of rules of thumb (e.g. 5–9,[8]10[9]and 10–15[10]— the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. Thebias–variance tradeoffis often used to overcome overfit models. With a large set ofexplanatory variablesthat actually have no relation to thedependent variablebeing predicted, some variables will in general be falsely found to bestatistically significantand the researcher may thus retain them in the model, thereby overfitting the model. This is known asFreedman's paradox. Usually, a learningalgorithmis trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training. Overfitting is the use of models or procedures that violateOccam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data forycan be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function isa prioriless probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.[11] When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. 
For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) withmparameters to a regression model withnparameters.[11] Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have nocausal relationto thetarget function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse. As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again. Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust." The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include: The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods likeminimum spanning treeorlife-time of correlationthat applies the dependence between correlation coefficients and time-series (window width). Whenever the window width is big enough, the correlation coefficients are stable and don't depend on the window width size anymore. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized. Dropout regularisation (random removal of training set data) can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer. Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: lowbiasand highvariance). This can be gathered from theBias-variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the result of the model is that it will inaccurately represent the data points and thus insufficiently be able to predict future data results (seeGeneralization error). As shown in Figure 5, the linear line could not represent all the given data points due to the line not resembling the curvature of the points. 
We would expect to see a parabola-shaped line as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would obtain misleading predictions, in contrast to the results obtained from Figure 6. Burnham & Anderson state the following.[3]: 32 ... an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings. There are multiple ways to deal with underfitting. Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest indeep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such aslinear regression. In particular, it has been shown thatoverparameterizationis essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.[16]
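As a rough numerical illustration of the under- and overfitting behaviour described above, the following minimal NumPy sketch fits polynomials of increasing degree to synthetic quadratic data (all values and degrees here are arbitrary choices, not taken from the text). The degree-1 fit typically underfits both sets, while the high-degree fit typically attains a lower training error but a worse test error:

```python
import numpy as np

# Synthetic data: a quadratic trend plus noise (hypothetical values).
rng = np.random.default_rng(0)

def f(x):
    return 1.0 - 2.0 * x + 0.5 * x ** 2

x_train = np.linspace(-3, 3, 20)
x_test = np.linspace(-3, 3, 200)
y_train = f(x_train) + rng.normal(scale=0.5, size=x_train.size)
y_test = f(x_test) + rng.normal(scale=0.5, size=x_test.size)

for degree in (1, 2, 15):  # underfit, about right, overfit
    # A very high degree may trigger a conditioning warning; it still runs.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```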
https://en.wikipedia.org/wiki/Underfitting
Inlinear algebra, thesingular value decomposition(SVD) is afactorizationof arealorcomplexmatrixinto a rotation, followed by a rescaling followed by another rotation. It generalizes theeigendecompositionof a squarenormal matrixwith an orthonormal eigenbasis to any⁠m×n{\displaystyle m\times n}⁠matrix. It is related to thepolar decomposition. Specifically, the singular value decomposition of anm×n{\displaystyle m\times n}complex matrix⁠M{\displaystyle \mathbf {M} }⁠is a factorization of the formM=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U\Sigma V^{*}} ,}where⁠U{\displaystyle \mathbf {U} }⁠is an⁠m×m{\displaystyle m\times m}⁠complexunitary matrix,Σ{\displaystyle \mathbf {\Sigma } }is anm×n{\displaystyle m\times n}rectangular diagonal matrixwith non-negative real numbers on the diagonal,⁠V{\displaystyle \mathbf {V} }⁠is ann×n{\displaystyle n\times n}complex unitary matrix, andV∗{\displaystyle \mathbf {V} ^{*}}is theconjugate transposeof⁠V{\displaystyle \mathbf {V} }⁠. Such decomposition always exists for any complex matrix. If⁠M{\displaystyle \mathbf {M} }⁠is real, then⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠can be guaranteed to be realorthogonalmatrices; in such contexts, the SVD is often denotedUΣVT.{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{\mathrm {T} }.} The diagonal entriesσi=Σii{\displaystyle \sigma _{i}=\Sigma _{ii}}ofΣ{\displaystyle \mathbf {\Sigma } }are uniquely determined by⁠M{\displaystyle \mathbf {M} }⁠and are known as thesingular valuesof⁠M{\displaystyle \mathbf {M} }⁠. The number of non-zero singular values is equal to therankof⁠M{\displaystyle \mathbf {M} }⁠. The columns of⁠U{\displaystyle \mathbf {U} }⁠and the columns of⁠V{\displaystyle \mathbf {V} }⁠are called left-singular vectors and right-singular vectors of⁠M{\displaystyle \mathbf {M} }⁠, respectively. They form two sets oforthonormal bases⁠u1,…,um{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{m}}⁠and⁠v1,…,vn,{\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{n},}⁠and if they are sorted so that the singular valuesσi{\displaystyle \sigma _{i}}with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as M=∑i=1rσiuivi∗,{\displaystyle \mathbf {M} =\sum _{i=1}^{r}\sigma _{i}\mathbf {u} _{i}\mathbf {v} _{i}^{*},} wherer≤min{m,n}{\displaystyle r\leq \min\{m,n\}}is the rank of⁠M.{\displaystyle \mathbf {M} .}⁠ The SVD is not unique. However, it is always possible to choose the decomposition such that the singular valuesΣii{\displaystyle \Sigma _{ii}}are in descending order. In this case,Σ{\displaystyle \mathbf {\Sigma } }(but not⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠) is uniquely determined by⁠M.{\displaystyle \mathbf {M} .}⁠ The term sometimes refers to thecompact SVD, a similar decomposition⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U\Sigma V} ^{*}}⁠in which⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is square diagonal of size⁠r×r,{\displaystyle r\times r,}⁠where⁠r≤min{m,n}{\displaystyle r\leq \min\{m,n\}}⁠is the rank of⁠M,{\displaystyle \mathbf {M} ,}⁠and has only the non-zero singular values. 
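For illustration, the decomposition defined above can be checked numerically with NumPy's np.linalg.svd, which returns the factor V∗ as Vh and the singular values as a descending one-dimensional array; a minimal sketch on a random real matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 5))

# Full SVD: U is 4x4, s holds the singular values in descending order, Vh plays the role of V*.
U, s, Vh = np.linalg.svd(M)

# Rebuild the 4x5 rectangular diagonal matrix Sigma from the singular values.
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(U @ Sigma @ Vh, M))   # M = U Sigma V*
print(np.allclose(U @ U.T, np.eye(4)))  # U is orthogonal (real case)
print(np.all(s[:-1] >= s[1:]))          # singular values are sorted in descending order
```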
In this variant,⁠U{\displaystyle \mathbf {U} }⁠is an⁠m×r{\displaystyle m\times r}⁠semi-unitary matrixandV{\displaystyle \mathbf {V} }is an⁠n×r{\displaystyle n\times r}⁠semi-unitary matrix, such thatU∗U=V∗V=Ir.{\displaystyle \mathbf {U} ^{*}\mathbf {U} =\mathbf {V} ^{*}\mathbf {V} =\mathbf {I} _{r}.} Mathematical applications of the SVD include computing thepseudoinverse, matrix approximation, and determining the rank,range, andnull spaceof a matrix. The SVD is also extremely useful in many areas of science,engineering, andstatistics, such assignal processing,least squaresfitting of data, andprocess control. In the special case when⁠M{\displaystyle \mathbf {M} }⁠is an⁠m×m{\displaystyle m\times m}⁠realsquare matrix, the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be real⁠m×m{\displaystyle m\times m}⁠matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as⁠A,{\displaystyle \mathbf {A} ,}⁠as alinear transformation⁠x↦Ax{\displaystyle \mathbf {x} \mapsto \mathbf {Ax} }⁠of the space⁠Rm,{\displaystyle \mathbf {R} _{m},}⁠the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠representrotationsorreflectionof the space, while⁠Σ{\displaystyle \mathbf {\Sigma } }⁠represents thescalingof each coordinate⁠xi{\displaystyle \mathbf {x} _{i}}⁠by the factor⁠σi.{\displaystyle \sigma _{i}.}⁠Thus the SVD decomposition breaks down any linear transformation of⁠Rm{\displaystyle \mathbf {R} ^{m}}⁠into acompositionof three geometricaltransformations: a rotation or reflection(⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠),followed by a coordinate-by-coordinatescaling(⁠Σ{\displaystyle \mathbf {\Sigma } }⁠),followed by another rotation or reflection(⁠U{\displaystyle \mathbf {U} }⁠). In particular, if⁠M{\displaystyle \mathbf {M} }⁠has a positive determinant, then⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be both rotations with reflections, or both rotations without reflections.[citation needed]If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type. If the matrix⁠M{\displaystyle \mathbf {M} }⁠is real but not square, namely⁠m×n{\displaystyle m\times n}⁠with⁠m≠n,{\displaystyle m\neq n,}⁠it can be interpreted as a linear transformation from⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠to⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠Then⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be rotations/reflections of⁠Rm{\displaystyle \mathbf {R} ^{m}}⁠and⁠Rn,{\displaystyle \mathbf {R} ^{n},}⁠respectively; and⁠Σ,{\displaystyle \mathbf {\Sigma } ,}⁠besides scaling the first⁠min{m,n}{\displaystyle \min\{m,n\}}⁠coordinates, also extends the vector with zeros, i.e. removes trailing coordinates, so as to turn⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠into⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠ As shown in the figure, thesingular valuescan be interpreted as the magnitude of the semiaxes of anellipsein 2D. This concept can be generalized to⁠n{\displaystyle n}⁠-dimensionalEuclidean space, with the singular values of any⁠n×n{\displaystyle n\times n}⁠square matrixbeing viewed as the magnitude of the semiaxis of an⁠n{\displaystyle n}⁠-dimensionalellipsoid. 
Similarly, the singular values of any⁠m×n{\displaystyle m\times n}⁠matrix can be viewed as the magnitude of the semiaxis of an⁠n{\displaystyle n}⁠-dimensionalellipsoidin⁠m{\displaystyle m}⁠-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode magnitude of the semiaxis, while singular vectors encode direction. Seebelowfor further details. Since⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are unitary, the columns of each of them form a set oforthonormal vectors, which can be regarded asbasis vectors. The matrix⁠M{\displaystyle \mathbf {M} }⁠maps the basis vector⁠Vi{\displaystyle \mathbf {V} _{i}}⁠to the stretched unit vector⁠σiUi.{\displaystyle \sigma _{i}\mathbf {U} _{i}.}⁠By the definition of a unitary matrix, the same is true for their conjugate transposes⁠U∗{\displaystyle \mathbf {U} ^{*}}⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠except the geometric interpretation of the singular values as stretches is lost. In short, the columns of⁠U,{\displaystyle \mathbf {U} ,}⁠⁠U∗,{\displaystyle \mathbf {U} ^{*},}⁠⁠V,{\displaystyle \mathbf {V} ,}⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠areorthonormal bases. When⁠M{\displaystyle \mathbf {M} }⁠is apositive-semidefiniteHermitian matrix,⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are both equal to the unitary matrix used to diagonalize⁠M.{\displaystyle \mathbf {M} .}⁠However, when⁠M{\displaystyle \mathbf {M} }⁠is not positive-semidefinite and Hermitian but stilldiagonalizable, itseigendecompositionand singular value decomposition are distinct. Because⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are unitary, we know that the columns⁠U1,…,Um{\displaystyle \mathbf {U} _{1},\ldots ,\mathbf {U} _{m}}⁠of⁠U{\displaystyle \mathbf {U} }⁠yield anorthonormal basisof⁠Km{\displaystyle K^{m}}⁠and the columns⁠V1,…,Vn{\displaystyle \mathbf {V} _{1},\ldots ,\mathbf {V} _{n}}⁠of⁠V{\displaystyle \mathbf {V} }⁠yield an orthonormal basis of⁠Kn{\displaystyle K^{n}}⁠(with respect to the standardscalar productson these spaces). Thelinear transformation T:{Kn→Kmx↦Mx{\displaystyle T:\left\{{\begin{aligned}K^{n}&\to K^{m}\\x&\mapsto \mathbf {M} x\end{aligned}}\right.} has a particularly simple description with respect to these orthonormal bases: we have T(Vi)=σiUi,i=1,…,min(m,n),{\displaystyle T(\mathbf {V} _{i})=\sigma _{i}\mathbf {U} _{i},\qquad i=1,\ldots ,\min(m,n),} where⁠σi{\displaystyle \sigma _{i}}⁠is the⁠i{\displaystyle i}⁠-th diagonal entry of⁠Σ,{\displaystyle \mathbf {\Sigma } ,}⁠and⁠T(Vi)=0{\displaystyle T(\mathbf {V} _{i})=0}⁠for⁠i>min(m,n).{\displaystyle i>\min(m,n).}⁠ The geometric content of the SVD theorem can thus be summarized as follows: for every linear map⁠T:Kn→Km{\displaystyle T:K^{n}\to K^{m}}⁠one can find orthonormal bases of⁠Kn{\displaystyle K^{n}}⁠and⁠Km{\displaystyle K^{m}}⁠such that⁠T{\displaystyle T}⁠maps the⁠i{\displaystyle i}⁠-th basis vector of⁠Kn{\displaystyle K^{n}}⁠to a non-negative multiple of the⁠i{\displaystyle i}⁠-th basis vector of⁠Km,{\displaystyle K^{m},}⁠and sends the leftover basis vectors to zero. With respect to these bases, the map⁠T{\displaystyle T}⁠is therefore represented by a diagonal matrix with non-negative real diagonal entries. 
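A quick numerical check of the relation T(Vi) = σiUi is possible with NumPy (the columns of V are the conjugated rows of the returned Vh); a minimal sketch on a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(M)
V = Vh.conj().T                          # columns of V are the right-singular vectors

# M sends the i-th right-singular vector to sigma_i times the i-th left-singular vector.
for i in range(len(s)):
    print(np.allclose(M @ V[:, i], s[i] * U[:, i]))
```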
To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere⁠S{\displaystyle S}⁠of radius one in⁠Rn.{\displaystyle \mathbf {R} ^{n}.}⁠The linear map⁠T{\displaystyle T}⁠maps this sphere onto anellipsoidin⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠Non-zero singular values are simply the lengths of thesemi-axesof this ellipsoid. Especially when⁠n=m,{\displaystyle n=m,}⁠and all the singular values are distinct and non-zero, the SVD of the linear map⁠T{\displaystyle T}⁠can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid⁠T(S){\displaystyle T(S)}⁠and specifically its axes; then consider the directions in⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠sent by⁠T{\displaystyle T}⁠onto these axes. These directions happen to be mutually orthogonal. Apply first an isometry⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠sending these directions to the coordinate axes of⁠Rn.{\displaystyle \mathbf {R} ^{n}.}⁠On a second move, apply anendomorphism⁠D{\displaystyle \mathbf {D} }⁠diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths of⁠T(S){\displaystyle T(S)}⁠as stretching coefficients. The composition⁠D∘V∗{\displaystyle \mathbf {D} \circ \mathbf {V} ^{*}}⁠then sends the unit-sphere onto an ellipsoid isometric to⁠T(S).{\displaystyle T(S).}⁠To define the third and last move, apply an isometry⁠U{\displaystyle \mathbf {U} }⁠to this ellipsoid to obtain⁠T(S).{\displaystyle T(S).}⁠As can be easily checked, the composition⁠U∘D∘V∗{\displaystyle \mathbf {U} \circ \mathbf {D} \circ \mathbf {V} ^{*}}⁠coincides with⁠T.{\displaystyle T.}⁠ Consider the⁠4×5{\displaystyle 4\times 5}⁠matrix M=[10002003000000002000]{\displaystyle \mathbf {M} ={\begin{bmatrix}1&0&0&0&2\\0&0&3&0&0\\0&0&0&0&0\\0&2&0&0&0\end{bmatrix}}} A singular value decomposition of this matrix is given by⁠UΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠ U=[0−100−1000000−100−10]Σ=[30000050000020000000]V∗=[00−100−0.2000−0.80−100000010−0.80000.2]{\displaystyle {\begin{aligned}\mathbf {U} &={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0&\color {Emerald}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0&\color {Emerald}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0&\color {Emerald}-1\\\color {Green}0&\color {Blue}0&\color {Cyan}-1&\color {Emerald}0\end{bmatrix}}\\[6pt]\mathbf {\Sigma } &={\begin{bmatrix}3&0&0&0&\color {Gray}{\mathit {0}}\\0&{\sqrt {5}}&0&0&\color {Gray}{\mathit {0}}\\0&0&2&0&\color {Gray}{\mathit {0}}\\0&0&0&\color {Red}\mathbf {0} &\color {Gray}{\mathit {0}}\end{bmatrix}}\\[6pt]\mathbf {V} ^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}0&\color {Orchid}0&\color {Orchid}0&\color {Orchid}1&\color {Orchid}0\\\color {Purple}-{\sqrt {0.8}}&\color {Purple}0&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.2}}\end{bmatrix}}\end{aligned}}} The scaling matrix⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is zero outside of the diagonal (grey italics) and one diagonal element is zero (red bold, light blue bold in dark mode). 
Furthermore, because the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠areunitary, multiplying by their respective conjugate transposes yieldsidentity matrices, as shown below. In this case, because⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are real valued, each is anorthogonal matrix. UU∗=[1000010000100001]=I4VV∗=[1000001000001000001000001]=I5{\displaystyle {\begin{aligned}\mathbf {U} \mathbf {U} ^{*}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}=\mathbf {I} _{4}\\[6pt]\mathbf {V} \mathbf {V} ^{*}&={\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}}=\mathbf {I} _{5}\end{aligned}}} This particular singular value decomposition is not unique. For instance, we can keep⁠U{\displaystyle \mathbf {U} }⁠and⁠Σ{\displaystyle \mathbf {\Sigma } }⁠the same, but change the last two rows of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠such that V∗=[00−100−0.2000−0.80−10000.4000.5−0.1−0.4000.50.1]{\displaystyle \mathbf {V} ^{*}={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}{\sqrt {0.4}}&\color {Orchid}0&\color {Orchid}0&\color {Orchid}{\sqrt {0.5}}&\color {Orchid}-{\sqrt {0.1}}\\\color {Purple}-{\sqrt {0.4}}&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.5}}&\color {Purple}{\sqrt {0.1}}\end{bmatrix}}} and get an equally valid singular value decomposition. As the matrix⁠M{\displaystyle \mathbf {M} }⁠has rank 3, it has only 3 nonzero singular values. In taking the product⁠UΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠, the final column of⁠U{\displaystyle \mathbf {U} }⁠and the final two rows of⁠V∗{\displaystyle \mathbf {V^{*}} }⁠are multiplied by zero, so have no effect on the matrix product, and can be replaced by any unit vectors which are orthogonal to the first three and to each-other. 
Thecompact SVD,⁠M=UrΣrVr∗{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}}⁠, eliminates these superfluous rows, columns, and singular values: Ur=[0−10−10000000−1]Σr=[300050002]Vr∗=[00−100−0.2000−0.80−1000]{\displaystyle {\begin{aligned}\mathbf {U} _{r}&={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}-1\end{bmatrix}}\\[6pt]\mathbf {\Sigma } _{r}&={\begin{bmatrix}3&0&0\\0&{\sqrt {5}}&0\\0&0&2\end{bmatrix}}\\[6pt]\mathbf {V} _{r}^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\end{bmatrix}}\end{aligned}}} A non-negative real number⁠σ{\displaystyle \sigma }⁠is asingular valuefor⁠M{\displaystyle \mathbf {M} }⁠if and only if there exist unit-length vectors⁠u{\displaystyle \mathbf {u} }⁠in⁠Km{\displaystyle K^{m}}⁠and⁠v{\displaystyle \mathbf {v} }⁠in⁠Kn{\displaystyle K^{n}}⁠such that Mv=σu,M∗u=σv.{\displaystyle {\begin{aligned}\mathbf {Mv} &=\sigma \mathbf {u} ,\\[3mu]\mathbf {M} ^{*}\mathbf {u} &=\sigma \mathbf {v} .\end{aligned}}} The vectors⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠are calledleft-singularandright-singular vectorsfor⁠σ,{\displaystyle \sigma ,}⁠respectively. In any singular value decomposition M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} the diagonal entries of⁠Σ{\displaystyle \mathbf {\Sigma } }⁠are equal to the singular values of⁠M.{\displaystyle \mathbf {M} .}⁠The first⁠p=min(m,n){\displaystyle p=\min(m,n)}⁠columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that: A singular value for which we can find two left (or right) singular vectors that are linearly independent is calleddegenerate. If⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠u2{\displaystyle \mathbf {u} _{2}}⁠are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. The similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠corresponding to diagonal elements of⁠Σ{\displaystyle \mathbf {\Sigma } }⁠all with the same value⁠σ.{\displaystyle \sigma .}⁠ As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in thecokernelandkernel, respectively, of⁠M,{\displaystyle \mathbf {M} ,}⁠which by therank–nullity theoremcannot be the same dimension if⁠m≠n.{\displaystyle m\neq n.}⁠Even if all singular values are nonzero, if⁠m>n{\displaystyle m>n}⁠then the cokernel is nontrivial, in which case⁠U{\displaystyle \mathbf {U} }⁠is padded with⁠m−n{\displaystyle m-n}⁠orthogonal vectors from the cokernel. Conversely, if⁠m<n,{\displaystyle m<n,}⁠then⁠V{\displaystyle \mathbf {V} }⁠is padded by⁠n−m{\displaystyle n-m}⁠orthogonal vectors from the kernel. 
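The worked 4 × 5 example above can also be checked numerically; a minimal NumPy sketch confirming the singular values 3, √5, 2 (plus one zero) and the compact reconstruction:

```python
import numpy as np

M = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

U, s, Vh = np.linalg.svd(M)
print(np.allclose(s, [3, np.sqrt(5), 2, 0]))   # singular values of the example

# Compact SVD: keep only the r = 3 non-zero singular values.
r = int(np.sum(s > 1e-12))
U_r, s_r, Vh_r = U[:, :r], s[:r], Vh[:r, :]
print(np.allclose(U_r * s_r @ Vh_r, M))        # M = U_r Sigma_r V_r*
```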
However, if the singular value of⁠0{\displaystyle 0}⁠exists, the extra columns of⁠U{\displaystyle \mathbf {U} }⁠or⁠V{\displaystyle \mathbf {V} }⁠already appear as left or right-singular vectors. Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor⁠eiφ{\displaystyle e^{i\varphi }}⁠(for the real case up to a sign). Consequently, if all singular values of a square matrix⁠M{\displaystyle \mathbf {M} }⁠are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of⁠U{\displaystyle \mathbf {U} }⁠by a unit-phase factor and simultaneous multiplication of the corresponding column of⁠V{\displaystyle \mathbf {V} }⁠by the same unit-phase factor. In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠spanning the kernel and cokernel, respectively, of⁠M.{\displaystyle \mathbf {M} .}⁠ The singular value decomposition is very general in the sense that it can be applied to any⁠m×n{\displaystyle m\times n}⁠matrix, whereaseigenvalue decompositioncan only be applied to squarediagonalizable matrices. Nevertheless, the two decompositions are related. If⁠M{\displaystyle \mathbf {M} }⁠has SVD⁠M=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}⁠the following two relations hold: M∗M=VΣ∗U∗UΣV∗=V(Σ∗Σ)V∗,MM∗=UΣV∗VΣ∗U∗=U(ΣΣ∗)U∗.{\displaystyle {\begin{aligned}\mathbf {M} ^{*}\mathbf {M} &=\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}\,\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}=\mathbf {V} (\mathbf {\Sigma } ^{*}\mathbf {\Sigma } )\mathbf {V} ^{*},\\[3mu]\mathbf {M} \mathbf {M} ^{*}&=\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}\,\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}=\mathbf {U} (\mathbf {\Sigma } \mathbf {\Sigma } ^{*})\mathbf {U} ^{*}.\end{aligned}}} The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently: In the special case of⁠M{\displaystyle \mathbf {M} }⁠being anormal matrix, and thus also square, thespectral theoremensures that it can beunitarilydiagonalizedusing a basis ofeigenvectors, and thus decomposed as⁠M=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}⁠for some unitary matrix⁠U{\displaystyle \mathbf {U} }⁠and diagonal matrix⁠D{\displaystyle \mathbf {D} }⁠with complex elements⁠σi{\displaystyle \sigma _{i}}⁠along the diagonal. When⁠M{\displaystyle \mathbf {M} }⁠ispositive semi-definite,⁠σi{\displaystyle \sigma _{i}}⁠will be non-negative real numbers so that the decomposition⁠M=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}⁠is also a singular value decomposition. 
Otherwise, it can be recast as an SVD by moving the phase⁠eiφ{\displaystyle e^{i\varphi }}⁠of each⁠σi{\displaystyle \sigma _{i}}⁠to either its corresponding⁠Vi{\displaystyle \mathbf {V} _{i}}⁠or⁠Ui.{\displaystyle \mathbf {U} _{i}.}⁠The natural connection of the SVD to non-normal matrices is through thepolar decompositiontheorem:⁠M=SR,{\displaystyle \mathbf {M} =\mathbf {S} \mathbf {R} ,}⁠where⁠S=UΣU∗{\displaystyle \mathbf {S} =\mathbf {U} \mathbf {\Sigma } \mathbf {U} ^{*}}⁠is positive semidefinite and normal, and⁠R=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}⁠is unitary. Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of⁠M,{\displaystyle \mathbf {M} ,}⁠while related, differ: the eigenvalue decomposition is⁠M=UDU−1,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{-1},}⁠where⁠U{\displaystyle \mathbf {U} }⁠is not necessarily unitary and⁠D{\displaystyle \mathbf {D} }⁠is not necessarily positive semi-definite, while the SVD is⁠M=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}⁠where⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is diagonal and positive semi-definite, and⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are unitary matrices that are not necessarily related except through the matrix⁠M.{\displaystyle \mathbf {M} .}⁠While onlynon-defectivesquare matrices have an eigenvalue decomposition, any⁠m×n{\displaystyle m\times n}⁠matrix has a SVD. The singular value decomposition can be used for computing thepseudoinverseof a matrix. The pseudoinverse of the matrix⁠M{\displaystyle \mathbf {M} }⁠with singular value decomposition⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠is M+=VΣ+U∗,{\displaystyle \mathbf {M} ^{+}=\mathbf {V} {\boldsymbol {\Sigma }}^{+}\mathbf {U} ^{\ast },} whereΣ+{\displaystyle {\boldsymbol {\Sigma }}^{+}}is the pseudoinverse ofΣ{\displaystyle {\boldsymbol {\Sigma }}}, which is formed by replacing every non-zero diagonal entry by itsreciprocaland transposing the resulting matrix. The pseudoinverse is one way to solvelinear least squaresproblems. A set ofhomogeneous linear equationscan be written as⁠Ax=0{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {0} }⁠for a matrix⁠A{\displaystyle \mathbf {A} }⁠and vector⁠x.{\displaystyle \mathbf {x} .}⁠A typical situation is that⁠A{\displaystyle \mathbf {A} }⁠is known and a non-zero⁠x{\displaystyle \mathbf {x} }⁠is to be determined which satisfies the equation. Such an⁠x{\displaystyle \mathbf {x} }⁠belongs to⁠A{\displaystyle \mathbf {A} }⁠'snull spaceand is sometimes called a (right) null vector of⁠A.{\displaystyle \mathbf {A} .}⁠The vector⁠x{\displaystyle \mathbf {x} }⁠can be characterized as a right-singular vector corresponding to a singular value of⁠A{\displaystyle \mathbf {A} }⁠that is zero. This observation means that if⁠A{\displaystyle \mathbf {A} }⁠is asquare matrixand has no vanishing singular value, the equation has no non-zero⁠x{\displaystyle \mathbf {x} }⁠as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. 
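A minimal NumPy sketch of the pseudoinverse construction just described, compared against np.linalg.pinv and a least-squares solve (the cutoff used to decide which singular values count as non-zero is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(M, full_matrices=False)

# Sigma+ reciprocates the non-zero singular values.
s_inv = np.where(s > 1e-12 * s.max(), 1.0 / s, 0.0)
M_pinv = Vh.T @ np.diag(s_inv) @ U.T            # M+ = V Sigma+ U*

print(np.allclose(M_pinv, np.linalg.pinv(M)))
# The pseudoinverse yields the least-squares solution of M x = b.
b = rng.standard_normal(5)
print(np.allclose(M_pinv @ b, np.linalg.lstsq(M, b, rcond=None)[0]))
```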
Analogously to the definition of a (right) null vector, a non-zero⁠x{\displaystyle \mathbf {x} }⁠satisfying⁠x∗A=0{\displaystyle \mathbf {x} ^{*}\mathbf {A} =\mathbf {0} }⁠with⁠x∗{\displaystyle \mathbf {x} ^{*}}⁠denoting the conjugate transpose of⁠x,{\displaystyle \mathbf {x} ,}⁠is called a left null vector of⁠A.{\displaystyle \mathbf {A} .}⁠ Atotal least squaresproblem seeks the vector⁠x{\displaystyle \mathbf {x} }⁠that minimizes the2-normof a vector⁠Ax{\displaystyle \mathbf {A} \mathbf {x} }⁠under the constraint‖x‖=1.{\displaystyle \|\mathbf {x} \|=1.}The solution turns out to be the right-singular vector of⁠A{\displaystyle \mathbf {A} }⁠corresponding to the smallest singular value. Another application of the SVD is that it provides an explicit representation of therangeandnull spaceof a matrix⁠M.{\displaystyle \mathbf {M} .}⁠The right-singular vectors corresponding to vanishing singular values of⁠M{\displaystyle \mathbf {M} }⁠span the null space of⁠M{\displaystyle \mathbf {M} }⁠and the left-singular vectors corresponding to the non-zero singular values of⁠M{\displaystyle \mathbf {M} }⁠span the range of⁠M.{\displaystyle \mathbf {M} .}⁠For example, in the aboveexamplethe null space is spanned by the last row of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠and the range is spanned by the first three columns of⁠U.{\displaystyle \mathbf {U} .}⁠ As a consequence, therankof⁠M{\displaystyle \mathbf {M} }⁠equals the number of non-zero singular values which is the same as the number of non-zero diagonal elements inΣ{\displaystyle \mathbf {\Sigma } }. In numerical linear algebra the singular values can be used to determine theeffective rankof a matrix, asrounding errormay lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero. Some practical applications need to solve the problem ofapproximatinga matrix⁠M{\displaystyle \mathbf {M} }⁠with another matrixM~{\displaystyle {\tilde {\mathbf {M} }}}, said to betruncated, which has a specific rank⁠r{\displaystyle r}⁠. In the case that the approximation is based on minimizing theFrobenius normof the difference between⁠M{\displaystyle \mathbf {M} }⁠and⁠M~{\displaystyle {\tilde {\mathbf {M} }}}⁠under the constraint thatrank⁡(M~)=r,{\displaystyle \operatorname {rank} {\bigl (}{\tilde {\mathbf {M} }}{\bigr )}=r,}it turns out that the solution is given by the SVD of⁠M,{\displaystyle \mathbf {M} ,}⁠namely M~=UΣ~V∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} {\tilde {\mathbf {\Sigma } }}\mathbf {V} ^{*},} whereΣ~{\displaystyle {\tilde {\mathbf {\Sigma } }}}is the same matrix asΣ{\displaystyle \mathbf {\Sigma } }except that it contains only the⁠r{\displaystyle r}⁠largest singular values (the other singular values are replaced by zero). This is known as theEckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; seeStewart 1993). One practical consequence of the low-rank approximation given by SVD is that a greyscale image represented as anm×n{\displaystyle m\times n}matrixA{\displaystyle A}, can be efficiently represented by keeping the firstk{\displaystyle k}singular values and corresponding vectors. The truncated decomposition Ak=UkΣkVkT{\displaystyle A_{k}=\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{T}} gives an image which minimizes theFrobenius errorcompared to the original image. 
Thus, the task becomes finding a close approximationAk{\displaystyle A_{k}}that balances retaining perceptual fidelity with the number of vectors required to reconstruct the image. StoringAk{\displaystyle A_{k}}requires onlyk(n+m+1){\displaystyle k(n+m+1)}numbers compared tonm{\displaystyle nm}. This same idea extends to color images by applying this operation to each channel or stacking the channels into one matrix. Since the singular values of most natural images decay quickly, most of their variance is often captured by a smallk{\displaystyle k}. For a 1528 × 1225 greyscale image, we can achieve a relative error of.7%{\displaystyle .7\%}with as little ask=100{\displaystyle k=100}.[1]In practice, however, computing the SVD can be too computationally expensive and the resulting compression is typically less storage efficient than a specialized algorithm such asJPEG. The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix⁠A{\displaystyle \mathbf {A} }⁠can be written as anouter productof two vectors⁠A=u⊗v,{\displaystyle \mathbf {A} =\mathbf {u} \otimes \mathbf {v} ,}⁠or, in coordinates,⁠Aij=uivj.{\displaystyle A_{ij}=u_{i}v_{j}.}⁠Specifically, the matrix⁠M{\displaystyle \mathbf {M} }⁠can be decomposed as, M=∑iAi=∑iσiUi⊗Vi.{\displaystyle \mathbf {M} =\sum _{i}\mathbf {A} _{i}=\sum _{i}\sigma _{i}\mathbf {U} _{i}\otimes \mathbf {V} _{i}.} Here⁠Ui{\displaystyle \mathbf {U} _{i}}⁠and⁠Vi{\displaystyle \mathbf {V} _{i}}⁠are the⁠i{\displaystyle i}⁠-th columns of the corresponding SVD matrices,⁠σi{\displaystyle \sigma _{i}}⁠are the ordered singular values, and each⁠Ai{\displaystyle \mathbf {A} _{i}}⁠is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero⁠σi{\displaystyle \sigma _{i}}⁠is exactly the rank of the matrix.[citation needed]Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described[2]by aGabor filterin the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example,reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of⁠U{\displaystyle \mathbf {U} }⁠in the SVD factorization is then a Gabor while the first column of⁠V{\displaystyle \mathbf {V} }⁠represents the time modulation (or vice versa). 
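The separable expansion described above can be written out directly; a minimal NumPy sketch of the sum of rank-1 outer products, together with the k(m + n + 1) storage count for a rank-k truncation (the matrix and k are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 6, 4, 2
M = rng.standard_normal((m, n))
U, s, Vh = np.linalg.svd(M, full_matrices=False)

# M as a weighted, ordered sum of separable (rank-1) terms sigma_i * u_i (outer) v_i.
M_sum = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(len(s)))
print(np.allclose(M_sum, M))

# Truncating the sum after k terms gives the rank-k approximation discussed above;
# it can be stored with k*(m + n + 1) numbers instead of m*n.
M_k = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(k))
print(k * (m + n + 1), "numbers instead of", m * n)
```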
One may then define an index of separability α=σ12∑iσi2,{\displaystyle \alpha ={\frac {\sigma _{1}^{2}}{\sum _{i}\sigma _{i}^{2}}},} which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.[3] It is possible to use the SVD of a square matrix⁠A{\displaystyle \mathbf {A} }⁠to determine theorthogonal matrix⁠O{\displaystyle \mathbf {O} }⁠closest to⁠A.{\displaystyle \mathbf {A} .}⁠The closeness of fit is measured by theFrobenius normof⁠O−A.{\displaystyle \mathbf {O} -\mathbf {A} .}⁠The solution is the product⁠UV∗.{\displaystyle \mathbf {U} \mathbf {V} ^{*}.}⁠[4]This intuitively makes sense because an orthogonal matrix would have the decomposition⁠UIV∗{\displaystyle \mathbf {U} \mathbf {I} \mathbf {V} ^{*}}⁠where⁠I{\displaystyle \mathbf {I} }⁠is the identity matrix, so that if⁠A=UΣV∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠then the product⁠A=UV∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {V} ^{*}}⁠amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix⁠R=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}⁠of the Polar DecompositionM=RP=P′R{\displaystyle \mathbf {M} =\mathbf {R} \mathbf {P} =\mathbf {P} '\mathbf {R} }in either order of stretch and rotation, as described above. A similar problem, with interesting applications inshape analysis, is theorthogonal Procrustes problem, which consists of finding an orthogonal matrix⁠O{\displaystyle \mathbf {O} }⁠which most closely maps⁠A{\displaystyle \mathbf {A} }⁠to⁠B.{\displaystyle \mathbf {B} .}⁠Specifically, O=argminΩ‖AΩ−B‖Fsubject toΩTΩ=I,{\displaystyle \mathbf {O} ={\underset {\Omega }{\operatorname {argmin} }}\|\mathbf {A} {\boldsymbol {\Omega }}-\mathbf {B} \|_{F}\quad {\text{subject to}}\quad {\boldsymbol {\Omega }}^{\operatorname {T} }{\boldsymbol {\Omega }}=\mathbf {I} ,} where‖⋅‖F{\displaystyle \|\cdot \|_{F}}denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrixM=ATB{\displaystyle \mathbf {M} =\mathbf {A} ^{\operatorname {T} }\mathbf {B} }. TheKabsch algorithm(calledWahba's problemin other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules. The SVD can be used to construct the principal components[5]inprincipal component analysisas follows: LetX∈RN×p{\displaystyle \mathbf {X} \in \mathbb {R} ^{N\times p}}be a data matrix where each of theN{\displaystyle N}rows is a (feature-wise) mean-centered observation, each of dimensionp{\displaystyle p}. The SVD ofX{\displaystyle \mathbf {X} }is:X=VΣU∗{\displaystyle \mathbf {X} =\mathbf {V} {\boldsymbol {\Sigma }}\mathbf {U} ^{\ast }} From the same reference,[6]we see thatVΣ{\displaystyle \mathbf {V} {\boldsymbol {\Sigma }}}contains the scores of the rows ofX{\displaystyle \mathbf {X} }(i.e. each observation), andU{\displaystyle \mathbf {U} }is the matrix whose columns are principal component loading vectors. The SVD and pseudoinverse have been successfully applied tosignal processing,[7]image processing[8]andbig data(e.g., in genomic signal processing).[9][10][11][12] The SVD is also applied extensively to the study of linearinverse problemsand is useful in the analysis of regularization methods such as that ofTikhonov. 
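A minimal NumPy sketch of the nearest orthogonal matrix and of the orthogonal Procrustes solution described above (random test matrices; an illustrative check, not a production routine):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Nearest orthogonal matrix to A in the Frobenius norm: replace the singular values by ones.
U, s, Vh = np.linalg.svd(A)
O = U @ Vh
print(np.allclose(O.T @ O, np.eye(3)))          # O is orthogonal

# Orthogonal Procrustes: the Omega minimising ||A Omega - B||_F comes from the SVD of A^T B.
U2, s2, Vh2 = np.linalg.svd(A.T @ B)
Omega = U2 @ Vh2
print(np.allclose(Omega.T @ Omega, np.eye(3)))  # Omega is orthogonal as required
```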
It is widely used in statistics, where it is related toprincipal component analysisand tocorrespondence analysis, and insignal processingandpattern recognition. It is also used in output-onlymodal analysis, where the non-scaledmode shapescan be determined from the singular vectors. Yet another usage islatent semantic indexingin natural-language text processing. In general numerical computation involving linear or linearized systems, a single quantity characterizes the regularity or singularity of a problem: the system's "condition number"κ:=σmax/σmin{\displaystyle \kappa :=\sigma _{\text{max}}/\sigma _{\text{min}}}. It often controls the error rate or convergence rate of a given computational scheme on such systems.[13][14] The SVD also plays a crucial role in the field ofquantum information, in a form often referred to as theSchmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to beentangled: the two systems are entangled if and only if the rank of theΣ{\displaystyle \mathbf {\Sigma } }matrix is larger than one. One application of SVD to rather large matrices is innumerical weather prediction, whereLanczos methodsare used to estimate the few most rapidly growing linear perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate anensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction. SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled withradial basis functionsto interpolate solutions to three-dimensional unsteady flow problems.[15] SVD has likewise been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO.[16]SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and update two different waveform models. Singular value decomposition is used inrecommender systemsto predict people's item ratings.[17]Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.[18] Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to diseaseoutbreakdetection.[19]A combination of SVD andhigher-order SVDhas also been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) indisease surveillance.[20] Inastrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design[21]andorbital station-keeping.[22] The SVD can be used to measure the similarity between real-valued matrices.[23]By measuring the angles between the singular vectors, the inherent two-dimensional structure of matrices is accounted for. This method was shown to outperformcosine similarityandFrobenius normin most cases, including brain activity measurements fromneuroscienceexperiments.
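The condition number mentioned above can be read directly off the singular values; a one-line check against np.linalg.cond, whose default 2-norm condition number is exactly this ratio:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))

s = np.linalg.svd(A, compute_uv=False)
kappa = s.max() / s.min()
print(np.isclose(kappa, np.linalg.cond(A)))   # default 2-norm condition number
```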
An eigenvalue⁠λ{\displaystyle \lambda }⁠of a matrix⁠M{\displaystyle \mathbf {M} }⁠is characterized by the algebraic relation⁠Mu=λu.{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} .}⁠When⁠M{\displaystyle \mathbf {M} }⁠isHermitian, a variational characterization is also available. Let⁠M{\displaystyle \mathbf {M} }⁠be a real⁠n×n{\displaystyle n\times n}⁠symmetric matrix. Define f:{Rn→Rx↦xTMx{\displaystyle f:\left\{{\begin{aligned}\mathbb {R} ^{n}&\to \mathbb {R} \\\mathbf {x} &\mapsto \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} \end{aligned}}\right.} By theextreme value theorem, this continuous function attains a maximum at some⁠u{\displaystyle \mathbf {u} }⁠when restricted to the unit sphere{‖x‖=1}.{\displaystyle \{\|\mathbf {x} \|=1\}.}By theLagrange multiplierstheorem,⁠u{\displaystyle \mathbf {u} }⁠necessarily satisfies ∇uTMu−λ⋅∇uTu=0{\displaystyle \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {u} -\lambda \cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} =0} for some real number⁠λ.{\displaystyle \lambda .}⁠The nabla symbol,⁠∇{\displaystyle \nabla }⁠, is thedeloperator (differentiation with respect to⁠x{\displaystyle \mathbf {x} }⁠).Using the symmetry of⁠M{\displaystyle \mathbf {M} }⁠we obtain ∇xTMx−λ⋅∇xTx=2(M−λI)x.{\displaystyle \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} -\lambda \cdot \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {x} =2(\mathbf {M} -\lambda \mathbf {I} )\mathbf {x} .} Therefore⁠Mu=λu,{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} ,}⁠so⁠u{\displaystyle \mathbf {u} }⁠is a unit length eigenvector of⁠M.{\displaystyle \mathbf {M} .}⁠For every unit length eigenvector⁠v{\displaystyle \mathbf {v} }⁠of⁠M{\displaystyle \mathbf {M} }⁠its eigenvalue is⁠f(v),{\displaystyle f(\mathbf {v} ),}⁠so⁠λ{\displaystyle \lambda }⁠is the largest eigenvalue of⁠M.{\displaystyle \mathbf {M} .}⁠The same calculation performed on the orthogonal complement of⁠u{\displaystyle \mathbf {u} }⁠gives the next largest eigenvalue and so on. The complex Hermitian case is similar; there⁠f(x)=x∗Mx{\displaystyle f(\mathbf {x} )=\mathbf {x} ^{*}\mathbf {M} \mathbf {x} }⁠is a real-valued function of⁠2n{\displaystyle 2n}⁠real variables. Singular values are similar in that they can be described algebraically or from variational principles. Although, unlike the eigenvalue case, Hermiticity, or symmetry, of⁠M{\displaystyle \mathbf {M} }⁠is no longer required. This section gives these two arguments for existence of singular value decomposition. LetM{\displaystyle \mathbf {M} }be an⁠m×n{\displaystyle m\times n}⁠complex matrix. SinceM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }is positive semi-definite and Hermitian, by thespectral theorem, there exists an⁠n×n{\displaystyle n\times n}⁠unitary matrixV{\displaystyle \mathbf {V} }such that V∗M∗MV=D¯=[D000],{\displaystyle \mathbf {V} ^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} ={\bar {\mathbf {D} }}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}},} whereD{\displaystyle \mathbf {D} }is diagonal and positive definite, of dimensionℓ×ℓ{\displaystyle \ell \times \ell }, withℓ{\displaystyle \ell }the number of non-zero eigenvalues ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }(which can be shown to verifyℓ≤min(n,m){\displaystyle \ell \leq \min(n,m)}). 
Note thatV{\displaystyle \mathbf {V} }is here by definition a matrix whosei{\displaystyle i}-th column is thei{\displaystyle i}-th eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }, corresponding to the eigenvalueD¯ii{\displaystyle {\bar {\mathbf {D} }}_{ii}}. Moreover, thej{\displaystyle j}-th column ofV{\displaystyle \mathbf {V} }, forj>ℓ{\displaystyle j>\ell }, is an eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with eigenvalueD¯jj=0{\displaystyle {\bar {\mathbf {D} }}_{jj}=0}. This can be expressed by writingV{\displaystyle \mathbf {V} }asV=[V1V2]{\displaystyle \mathbf {V} ={\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}}, where the columns ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}therefore contain the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting ofV{\displaystyle \mathbf {V} }, the equation becomes: [V1∗V2∗]M∗M[V1V2]=[V1∗M∗MV1V1∗M∗MV2V2∗M∗MV1V2∗M∗MV2]=[D000].{\displaystyle {\begin{bmatrix}\mathbf {V} _{1}^{*}\\\mathbf {V} _{2}^{*}\end{bmatrix}}\mathbf {M} ^{*}\mathbf {M} \,{\begin{bmatrix}\mathbf {V} _{1}&\!\!\mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\\\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}}.} This implies that V1∗M∗MV1=D,V2∗M∗MV2=0.{\displaystyle \mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}=\mathbf {D} ,\quad \mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .} Moreover, the second equation impliesMV2=0{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} }.[24]Finally, the unitary-ness ofV{\displaystyle \mathbf {V} }translates, in terms ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}, into the following conditions: V1∗V1=I1,V2∗V2=I2,V1V1∗+V2V2∗=I12,{\displaystyle {\begin{aligned}\mathbf {V} _{1}^{*}\mathbf {V} _{1}&=\mathbf {I} _{1},\\\mathbf {V} _{2}^{*}\mathbf {V} _{2}&=\mathbf {I} _{2},\\\mathbf {V} _{1}\mathbf {V} _{1}^{*}+\mathbf {V} _{2}\mathbf {V} _{2}^{*}&=\mathbf {I} _{12},\end{aligned}}} where the subscripts on the identity matrices are used to remark that they are of different dimensions. Let us now define U1=MV1D−12.{\displaystyle \mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}.} Then, U1D12V1∗=MV1D−12D12V1∗=M(I−V2V2∗)=M−(MV2)V2∗=M,{\displaystyle \mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} (\mathbf {I} -\mathbf {V} _{2}\mathbf {V} _{2}^{*})=\mathbf {M} -(\mathbf {M} \mathbf {V} _{2})\mathbf {V} _{2}^{*}=\mathbf {M} ,} sinceMV2=0.{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} .}This can be also seen as immediate consequence of the fact thatMV1V1∗=M{\displaystyle \mathbf {M} \mathbf {V} _{1}\mathbf {V} _{1}^{*}=\mathbf {M} }. 
This is equivalent to the observation that if{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is the set of eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-vanishing eigenvalues{λi}i=1ℓ{\displaystyle \{\lambda _{i}\}_{i=1}^{\ell }}, then{Mvi}i=1ℓ{\displaystyle \{\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is a set of orthogonal vectors, and{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}is a (generally not complete) set oforthonormalvectors. This matches with the matrix formalism used above denoting withV1{\displaystyle \mathbf {V} _{1}}the matrix whose columns are{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}, withV2{\displaystyle \mathbf {V} _{2}}the matrix whose columns are the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with vanishing eigenvalue, andU1{\displaystyle \mathbf {U} _{1}}the matrix whose columns are the vectors{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}. We see that this is almost the desired result, except thatU1{\displaystyle \mathbf {U} _{1}}andV1{\displaystyle \mathbf {V} _{1}}are in general not unitary, since they might not be square. However, we do know that the number of rows ofU1{\displaystyle \mathbf {U} _{1}}is no smaller than the number of columns, since the dimensions ofD{\displaystyle \mathbf {D} }is no greater thanm{\displaystyle m}andn{\displaystyle n}. Also, since U1∗U1=D−12V1∗M∗MV1D−12=D−12DD−12=I1,{\displaystyle \mathbf {U} _{1}^{*}\mathbf {U} _{1}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} \mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {I_{1}} ,} the columns inU1{\displaystyle \mathbf {U} _{1}}are orthonormal and can be extended to an orthonormal basis. This means that we can chooseU2{\displaystyle \mathbf {U} _{2}}such thatU=[U1U2]{\displaystyle \mathbf {U} ={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}}is unitary. For⁠V1{\displaystyle \mathbf {V} _{1}}⁠we already have⁠V2{\displaystyle \mathbf {V} _{2}}⁠to make it unitary. Now, define Σ=[[D12000]0],{\displaystyle \mathbf {\Sigma } ={\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}},} where extra zero rows are addedor removedto make the number of zero rows equal the number of columns of⁠U2,{\displaystyle \mathbf {U} _{2},}⁠and hence the overall dimensions ofΣ{\displaystyle \mathbf {\Sigma } }equal tom×n{\displaystyle m\times n}. 
Then [U1U2][[D12000]0][V1V2]∗=[U1U2][D12V1∗0]=U1D12V1∗=M,{\displaystyle {\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}{\begin{bmatrix}\mathbf {} D^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}}{\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}^{*}={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}\\0\end{bmatrix}}=\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} ,} which is the desired result: M=UΣV∗.{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}.} Notice the argument could begin with diagonalizing⁠MM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}⁠rather than⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠(This shows directly that⁠MM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}⁠and⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠have the same non-zero eigenvalues). The singular values can also be characterized as the maxima of⁠uTMv,{\displaystyle \mathbf {u} ^{\mathrm {T} }\mathbf {M} \mathbf {v} ,}⁠considered as a function of⁠u{\displaystyle \mathbf {u} }⁠and⁠v,{\displaystyle \mathbf {v} ,}⁠over particular subspaces. The singular vectors are the values of⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠where these maxima are attained. Let⁠M{\displaystyle \mathbf {M} }⁠denote an⁠m×n{\displaystyle m\times n}⁠matrix with real entries. Let⁠Sk−1{\displaystyle S^{k-1}}⁠be the unit(k−1){\displaystyle (k-1)}-sphere inRk{\displaystyle \mathbb {R} ^{k}}, and defineσ(u,v)=uTMv,{\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )=\mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} ,}u∈Sm−1,{\displaystyle \mathbf {u} \in S^{m-1},}v∈Sn−1.{\displaystyle \mathbf {v} \in S^{n-1}.} Consider the function⁠σ{\displaystyle \sigma }⁠restricted to⁠Sm−1×Sn−1.{\displaystyle S^{m-1}\times S^{n-1}.}⁠Since both⁠Sm−1{\displaystyle S^{m-1}}⁠and⁠Sn−1{\displaystyle S^{n-1}}⁠arecompactsets, theirproductis also compact. Furthermore, since⁠σ{\displaystyle \sigma }⁠is continuous, it attains a largest value for at least one pair of vectors⁠u{\displaystyle \mathbf {u} }⁠in⁠Sm−1{\displaystyle S^{m-1}}⁠and⁠v{\displaystyle \mathbf {v} }⁠in⁠Sn−1.{\displaystyle S^{n-1}.}⁠This largest value is denoted⁠σ1{\displaystyle \sigma _{1}}⁠and the corresponding vectors are denoted⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1.{\displaystyle \mathbf {v} _{1}.}⁠Since⁠σ1{\displaystyle \sigma _{1}}⁠is the largest value of⁠σ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}⁠it must be non-negative. If it were negative, changing the sign of either⁠u1{\displaystyle \mathbf {u} _{1}}⁠or⁠v1{\displaystyle \mathbf {v} _{1}}⁠would make it positive and therefore larger. 
Statement.⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1{\displaystyle \mathbf {v} _{1}}⁠are left and right-singular vectors of⁠M{\displaystyle \mathbf {M} }⁠with corresponding singular value⁠σ1.{\displaystyle \sigma _{1}.}⁠ Proof.Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation: ∇σ=∇uTMv−λ1⋅∇uTu−λ2⋅∇vTv{\displaystyle \nabla \sigma =\nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} -\lambda _{1}\cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} -\lambda _{2}\cdot \nabla \mathbf {v} ^{\operatorname {T} }\mathbf {v} } After some algebra, this becomes Mv1=2λ1u1+0,MTu1=0+2λ2v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=2\lambda _{1}\mathbf {u} _{1}+0,\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=0+2\lambda _{2}\mathbf {v} _{1}.\end{aligned}}} Multiplying the first equation from left by⁠u1T{\displaystyle \mathbf {u} _{1}^{\textrm {T}}}⁠and the second equation from left by⁠v1T{\displaystyle \mathbf {v} _{1}^{\textrm {T}}}⁠and taking‖u‖=‖v‖=1{\displaystyle \|\mathbf {u} \|=\|\mathbf {v} \|=1}into account gives σ1=2λ1=2λ2.{\displaystyle \sigma _{1}=2\lambda _{1}=2\lambda _{2}.} Plugging this into the pair of equations above, we have Mv1=σ1u1,MTu1=σ1v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=\sigma _{1}\mathbf {u} _{1},\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=\sigma _{1}\mathbf {v} _{1}.\end{aligned}}} This proves the statement. More singular vectors and singular values can be found by maximizing⁠σ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}⁠over normalized⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠which are orthogonal to⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1,{\displaystyle \mathbf {v} _{1},}⁠respectively. The passage from real to complex is similar to the eigenvalue case. One-sided Jacobi algorithm is an iterative algorithm,[25]where a matrix is iteratively transformed into a matrix with orthogonal columns. The elementary iteration is given as aJacobi rotation, M←MJ(p,q,θ),{\displaystyle M\leftarrow MJ(p,q,\theta ),} where the angleθ{\displaystyle \theta }of the Jacobi rotation matrixJ(p,q,θ){\displaystyle J(p,q,\theta )}is chosen such that after the rotation the columns with numbersp{\displaystyle p}andq{\displaystyle q}become orthogonal. The indices(p,q){\displaystyle (p,q)}are swept cyclically,(p=1…m,q=p+1…m){\displaystyle (p=1\dots m,q=p+1\dots m)}, wherem{\displaystyle m}is the number of columns. After the algorithm has converged, the singular value decompositionM=USVT{\displaystyle M=USV^{T}}is recovered as follows: the matrixV{\displaystyle V}is the accumulation of Jacobi rotation matrices, the matrixU{\displaystyle U}is given bynormalisingthe columns of the transformed matrixM{\displaystyle M}, and the singular values are given as the norms of the columns of the transformed matrixM{\displaystyle M}. Two-sided Jacobi SVD algorithm—a generalization of theJacobi eigenvalue algorithm—is an iterative algorithm where a square matrix is iteratively transformed into a diagonal matrix. If the matrix is not square theQR decompositionis performed first and then the algorithm is applied to theR{\displaystyle R}matrix. 
The elementary iteration zeroes a pair of off-diagonal elements by first applying aGivens rotationto symmetrize the pair of elements and then applying aJacobi transformationto zero them, M←JTGMJ{\displaystyle M\leftarrow J^{T}GMJ} whereG{\displaystyle G}is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and whereJ{\displaystyle J}is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iteration proceeds exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements. After the algorithm has converged, the resulting diagonal matrix contains the singular values. The matricesU{\displaystyle U}andV{\displaystyle V}are accumulated as follows:U←UGTJ{\displaystyle U\leftarrow UG^{T}J},V←VJ{\displaystyle V\leftarrow VJ}. The singular value decomposition can be computed using the following observations: The SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is typically computed by a two-step procedure. In the first step, the matrix is reduced to abidiagonal matrix. This takesorder⁠O(mn2){\displaystyle O(mn^{2})}⁠floating-point operations (flop), assuming that⁠m≥n.{\displaystyle m\geq n.}⁠The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with aniterative method(as witheigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like themachine epsilon. If this precision is considered constant, then the second step takes⁠O(n){\displaystyle O(n)}⁠iterations, each costing⁠O(n){\displaystyle O(n)}⁠flops. Thus, the first step is more expensive, and the overall cost is⁠O(mn2){\displaystyle O(mn^{2})}⁠flops (Trefethen & Bau III 1997, Lecture 31). The first step can be done usingHouseholder reflectionsfor a cost of⁠4mn2−4n3/3{\displaystyle 4mn^{2}-4n^{3}/3}⁠flops, assuming that only the singular values are needed and not the singular vectors. If⁠m{\displaystyle m}⁠is much larger than⁠n{\displaystyle n}⁠then it is advantageous to first reduce the matrix⁠M{\displaystyle \mathbf {M} }⁠to a triangular matrix with theQR decompositionand then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is⁠2mn2+2n3{\displaystyle 2mn^{2}+2n^{3}}⁠flops (Trefethen & Bau III 1997, Lecture 31). The second step can be done by a variant of theQR algorithmfor the computation of eigenvalues, which was first described byGolub & Kahan (1965). TheLAPACKsubroutine DBDSQR[26]implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD[27]routine for the computation of the singular value decomposition. The same algorithm is implemented in theGNU Scientific Library(GSL). The GSL also offers an alternative method that uses a one-sidedJacobi orthogonalizationin step 2 (GSL Team 2007). This method computes the SVD of the bidiagonal matrix by solving a sequence of⁠2×2{\displaystyle 2\times 2}⁠SVD problems, similar to how theJacobi eigenvalue algorithmsolves a sequence of⁠2×2{\displaystyle 2\times 2}⁠eigenvalue problems (Golub & Van Loan 1996, §8.6.3). Yet another method for step 2 uses the idea ofdivide-and-conquer eigenvalue algorithms(Trefethen & Bau III 1997, Lecture 31).
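A compact sketch of the one-sided Jacobi iteration described earlier, for real matrices: column pairs are swept cyclically and rotated on the right until they are mutually orthogonal, after which the singular values are the column norms and the rotations accumulate into V. This is an illustrative toy implementation (function name and tolerances are arbitrary), not a library-quality routine:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    # Sketch of the one-sided Jacobi SVD for a real m x n matrix.
    M = np.array(A, dtype=float)
    m, n = M.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = M[:, p] @ M[:, p]
                beta = M[:, q] @ M[:, q]
                gamma = M[:, p] @ M[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                # Rotation angle chosen so that columns p and q become orthogonal.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = (1.0 if zeta >= 0 else -1.0) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                # Apply the Jacobi rotation J(p, q, theta) on the right: M <- M J, V <- V J.
                Mp, Mq = M[:, p].copy(), M[:, q].copy()
                M[:, p], M[:, q] = c * Mp - s * Mq, s * Mp + c * Mq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if off < tol:
            break
    sigma = np.linalg.norm(M, axis=0)              # singular values = column norms
    U = M / np.where(sigma > 0, sigma, 1.0)        # normalise the rotated columns (zero-safe)
    return U, sigma, V

A = np.random.default_rng(8).standard_normal((6, 4))
U, s, V = one_sided_jacobi_svd(A)
print(np.allclose(U * s @ V.T, A))                                      # A = U diag(s) V^T
print(np.allclose(np.sort(s)[::-1], np.linalg.svd(A, compute_uv=False)))
```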
There is an alternative way that does not explicitly use the eigenvalue decomposition.[28]Usually the singular value problem of a matrix⁠M{\displaystyle \mathbf {M} }⁠is converted into an equivalent symmetric eigenvalue problem such as⁠MM∗,{\displaystyle \mathbf {M} \mathbf {M} ^{*},}⁠⁠M∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}⁠or [0MM∗0].{\displaystyle {\begin{bmatrix}\mathbf {0} &\mathbf {M} \\\mathbf {M} ^{*}&\mathbf {0} \end{bmatrix}}.} The approaches that use eigenvalue decompositions are based on theQR algorithm, which is well-developed to be stable and fast. Note that the singular values are real and right- and left- singular vectors are not required to form similarity transformations. One can iteratively alternate between theQR decompositionand theLQ decompositionto find the real diagonalHermitian matrices. TheQR decompositiongives⁠M⇒QR{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {R} }⁠and theLQ decompositionof⁠R{\displaystyle \mathbf {R} }⁠gives⁠R⇒LP∗.{\displaystyle \mathbf {R} \Rightarrow \mathbf {L} \mathbf {P} ^{*}.}⁠Thus, at every iteration, we have⁠M⇒QLP∗,{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {L} \mathbf {P} ^{*},}⁠update⁠M⇐L{\displaystyle \mathbf {M} \Leftarrow \mathbf {L} }⁠and repeat the orthogonalizations. Eventually,[clarification needed]this iteration betweenQR decompositionandLQ decompositionproduces left- and right- unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD. The singular values of a⁠2×2{\displaystyle 2\times 2}⁠matrix can be found analytically. Let the matrix beM=z0I+z1σ1+z2σ2+z3σ3{\displaystyle \mathbf {M} =z_{0}\mathbf {I} +z_{1}\sigma _{1}+z_{2}\sigma _{2}+z_{3}\sigma _{3}} wherezi∈C{\displaystyle z_{i}\in \mathbb {C} }are complex numbers that parameterize the matrix,⁠I{\displaystyle \mathbf {I} }⁠is the identity matrix, andσi{\displaystyle \sigma _{i}}denote thePauli matrices. Then its two singular values are given by σ±=|z0|2+|z1|2+|z2|2+|z3|2±(|z0|2+|z1|2+|z2|2+|z3|2)2−|z02−z12−z22−z32|2=|z0|2+|z1|2+|z2|2+|z3|2±2(Re⁡z0z1∗)2+(Re⁡z0z2∗)2+(Re⁡z0z3∗)2+(Im⁡z1z2∗)2+(Im⁡z2z3∗)2+(Im⁡z3z1∗)2{\displaystyle {\begin{aligned}\sigma _{\pm }&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm {\sqrt {{\bigl (}|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}{\bigr )}^{2}-|z_{0}^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}|^{2}}}}}\\&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm 2{\sqrt {(\operatorname {Re} z_{0}z_{1}^{*})^{2}+(\operatorname {Re} z_{0}z_{2}^{*})^{2}+(\operatorname {Re} z_{0}z_{3}^{*})^{2}+(\operatorname {Im} z_{1}z_{2}^{*})^{2}+(\operatorname {Im} z_{2}z_{3}^{*})^{2}+(\operatorname {Im} z_{3}z_{1}^{*})^{2}}}}}\end{aligned}}} In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. 
The following can be distinguished for an⁠m×n{\displaystyle m\times n}⁠matrix⁠M{\displaystyle \mathbf {M} }⁠of rank⁠r{\displaystyle r}⁠: The thin, or economy-sized, SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is given by[29] M=UkΣkVk∗,{\displaystyle \mathbf {M} =\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{*},} wherek=min(m,n),{\displaystyle k=\min(m,n),}the matrices⁠Uk{\displaystyle \mathbf {U} _{k}}⁠and⁠Vk{\displaystyle \mathbf {V} _{k}}⁠contain only the first⁠k{\displaystyle k}⁠columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠and⁠Σk{\displaystyle \mathbf {\Sigma } _{k}}⁠contains only the first⁠k{\displaystyle k}⁠singular values from⁠Σ.{\displaystyle \mathbf {\Sigma } .}⁠The matrix⁠Uk{\displaystyle \mathbf {U} _{k}}⁠is thus⁠m×k,{\displaystyle m\times k,}⁠⁠Σk{\displaystyle \mathbf {\Sigma } _{k}}⁠is⁠k×k{\displaystyle k\times k}⁠diagonal, and⁠Vk∗{\displaystyle \mathbf {V} _{k}^{*}}⁠is⁠k×n.{\displaystyle k\times n.}⁠ The thin SVD uses significantly less space and computation time if⁠k≪max(m,n).{\displaystyle k\ll \max(m,n).}⁠The first stage in its calculation will usually be aQR decompositionof⁠M,{\displaystyle \mathbf {M} ,}⁠which can make for a significantly quicker calculation in this case. The compact SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is given by M=UrΣrVr∗.{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}.} Only the⁠r{\displaystyle r}⁠column vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠r{\displaystyle r}⁠row vectors of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠corresponding to the non-zero singular values⁠Σr{\displaystyle \mathbf {\Sigma } _{r}}⁠are calculated. The remaining vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are not calculated. This is quicker and more economical than the thin SVD if⁠r≪min(m,n).{\displaystyle r\ll \min(m,n).}⁠The matrix⁠Ur{\displaystyle \mathbf {U} _{r}}⁠is thus⁠m×r,{\displaystyle m\times r,}⁠⁠Σr{\displaystyle \mathbf {\Sigma } _{r}}⁠is⁠r×r{\displaystyle r\times r}⁠diagonal, and⁠Vr∗{\displaystyle \mathbf {V} _{r}^{*}}⁠is⁠r×n.{\displaystyle r\times n.}⁠ In many applications the number⁠r{\displaystyle r}⁠of the non-zero singular values is large making even the Compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only⁠t≪r{\displaystyle t\ll r}⁠non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix⁠M,{\displaystyle \mathbf {M} ,}⁠but rather provides the optimallow-rank matrix approximation⁠M~{\displaystyle {\tilde {\mathbf {M} }}}⁠by any matrix of a fixed rank⁠t{\displaystyle t}⁠ M~=UtΣtVt∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} _{t}\mathbf {\Sigma } _{t}\mathbf {V} _{t}^{*},} where matrix⁠Ut{\displaystyle \mathbf {U} _{t}}⁠is⁠m×t,{\displaystyle m\times t,}⁠⁠Σt{\displaystyle \mathbf {\Sigma } _{t}}⁠is⁠t×t{\displaystyle t\times t}⁠diagonal, and⁠Vt∗{\displaystyle \mathbf {V} _{t}^{*}}⁠is⁠t×n.{\displaystyle t\times n.}⁠Only the⁠t{\displaystyle t}⁠column vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠t{\displaystyle t}⁠row vectors of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠corresponding to the⁠t{\displaystyle t}⁠largest singular values⁠Σt{\displaystyle \mathbf {\Sigma } _{t}}⁠are calculated. This can be much quicker and more economical than the compact SVD if⁠t≪r,{\displaystyle t\ll r,}⁠but requires a completely different toolset of numerical solvers. 
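These reduced decompositions are easy to illustrate numerically. A minimal sketch with NumPy follows, assuming a real matrix with m ≥ n: numpy.linalg.svd with full_matrices=False returns the thin SVD, and the compact and truncated forms are obtained by slicing.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, t = 8, 5, 3, 2
# build a rank-r test matrix
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# thin (economy) SVD: U_k is m x k, Sigma_k has k entries, V_k* is k x n
U, s, Vh = np.linalg.svd(M, full_matrices=False)
print(U.shape, s.shape, Vh.shape)            # (8, 5) (5,) (5, 5)
print(np.allclose(U @ np.diag(s) @ Vh, M))   # True: exact reconstruction

# compact SVD: keep only the r non-zero singular values
r_eff = np.sum(s > 1e-10 * s[0])
Ur, sr, Vhr = U[:, :r_eff], s[:r_eff], Vh[:r_eff, :]
print(r_eff, np.allclose(Ur @ np.diag(sr) @ Vhr, M))   # 3 True: still exact

# truncated SVD: keep only the t < r largest singular values
Ut, st, Vht = U[:, :t], s[:t], Vh[:t, :]
M_t = Ut @ np.diag(st) @ Vht
print(np.linalg.norm(M - M_t))   # non-zero: best rank-t approximation error
```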
In applications that require an approximation to theMoore–Penrose inverseof the matrix⁠M,{\displaystyle \mathbf {M} ,}⁠the smallest singular values of⁠M{\displaystyle \mathbf {M} }⁠are of interest, which are more challenging to compute compared to the largest ones. Truncated SVD is employed inlatent semantic indexing.[30] The sum of the⁠k{\displaystyle k}⁠largest singular values of⁠M{\displaystyle \mathbf {M} }⁠is amatrix norm, theKy Fan⁠k{\displaystyle k}⁠-norm of⁠M.{\displaystyle \mathbf {M} .}⁠[31] The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as theoperator normof⁠M{\displaystyle \mathbf {M} }⁠as a linear operator with respect to the Euclidean norms of⁠Km{\displaystyle K^{m}}⁠and⁠Kn.{\displaystyle K^{n}.}⁠In other words, the Ky Fan 1-norm is the operator norm induced by the standardℓ2{\displaystyle \ell ^{2}}Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator⁠M{\displaystyle \mathbf {M} }⁠on (possibly infinite-dimensional) Hilbert spaces ‖M‖=‖M∗M‖12{\displaystyle \|\mathbf {M} \|=\|\mathbf {M} ^{*}\mathbf {M} \|^{\frac {1}{2}}} But, in the matrix case,⁠(M∗M)1/2{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}⁠is anormal matrix, so‖M∗M‖1/2{\displaystyle \|\mathbf {M} ^{*}\mathbf {M} \|^{1/2}}is the largest eigenvalue of⁠(M∗M)1/2,{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2},}⁠i.e. the largest singular value of⁠M.{\displaystyle \mathbf {M} .}⁠ The last of the Ky Fan norms, the sum of all singular values, is thetrace norm(also known as the 'nuclear norm'), defined by‖M‖=Tr⁡(M∗M)1/2{\displaystyle \|\mathbf {M} \|=\operatorname {Tr} (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}(the eigenvalues of⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠are the squares of the singular values). The singular values are related to another norm on the space of operators. Consider theHilbert–Schmidtinner product on the⁠n×n{\displaystyle n\times n}⁠matrices, defined by ⟨M,N⟩=tr⁡(N∗M).{\displaystyle \langle \mathbf {M} ,\mathbf {N} \rangle =\operatorname {tr} \left(\mathbf {N} ^{*}\mathbf {M} \right).} So the induced norm is ‖M‖=⟨M,M⟩=tr⁡(M∗M).{\displaystyle \|\mathbf {M} \|={\sqrt {\langle \mathbf {M} ,\mathbf {M} \rangle }}={\sqrt {\operatorname {tr} \left(\mathbf {M} ^{*}\mathbf {M} \right)}}.} Since the trace is invariant under unitary equivalence, this shows ‖M‖=|∑iσi2{\displaystyle \|\mathbf {M} \|={\sqrt {{\vphantom {\bigg |}}\sum _{i}\sigma _{i}^{2}}}} where⁠σi{\displaystyle \sigma _{i}}⁠are the singular values of⁠M.{\displaystyle \mathbf {M} .}⁠This is called theFrobenius norm,Schatten 2-norm, orHilbert–Schmidt normof⁠M.{\displaystyle \mathbf {M} .}⁠Direct calculation shows that the Frobenius norm of⁠M=(mij){\displaystyle \mathbf {M} =(m_{ij})}⁠coincides with: |∑ij|mij|2.{\displaystyle {\sqrt {{\vphantom {\bigg |}}\sum _{ij}|m_{ij}|^{2}}}.} In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of theSchatten norm. 
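The relations between these norms and the singular values can be checked numerically. A short sketch with NumPy (numpy.linalg.norm accepts ord=2 for the operator 2-norm, ord='nuc' for the trace/nuclear norm, and ord='fro' for the Frobenius norm):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 4))
s = np.linalg.svd(M, compute_uv=False)   # singular values, descending

# Ky Fan 1-norm = operator 2-norm = largest singular value
print(np.isclose(np.linalg.norm(M, 2), s[0]))            # True
# trace (nuclear) norm = sum of all singular values
print(np.isclose(np.linalg.norm(M, 'nuc'), s.sum()))     # True
# Frobenius / Hilbert-Schmidt / Schatten 2-norm = sqrt of sum of squares
print(np.isclose(np.linalg.norm(M, 'fro'),
                 np.sqrt((s ** 2).sum())))               # True
print(np.isclose(np.linalg.norm(M, 'fro'),
                 np.sqrt((np.abs(M) ** 2).sum())))       # True
```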
The singular values of a matrix⁠A{\displaystyle \mathbf {A} }⁠are uniquely defined and are invariant with respect to left and/or right unitary transformations of⁠A.{\displaystyle \mathbf {A} .}⁠In other words, the singular values of⁠UAV,{\displaystyle \mathbf {U} \mathbf {A} \mathbf {V} ,}⁠for unitary matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠are equal to the singular values of⁠A.{\displaystyle \mathbf {A} .}⁠This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations. The Scale-Invariant SVD, or SI-SVD,[32]is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of⁠A.{\displaystyle \mathbf {A} .}⁠In other words, the singular values of⁠DAE,{\displaystyle \mathbf {D} \mathbf {A} \mathbf {E} ,}⁠for invertible diagonal matrices⁠D{\displaystyle \mathbf {D} }⁠and⁠E,{\displaystyle \mathbf {E} ,}⁠are equal to the singular values of⁠A.{\displaystyle \mathbf {A} .}⁠This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed. The factorization⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠can be extended to abounded operator⁠M{\displaystyle \mathbf {M} }⁠on a separable Hilbert space⁠H.{\displaystyle H.}⁠Namely, for any bounded operator⁠M,{\displaystyle \mathbf {M} ,}⁠there exist apartial isometry⁠U,{\displaystyle \mathbf {U} ,}⁠a unitary⁠V,{\displaystyle \mathbf {V} ,}⁠a measure space⁠(X,μ),{\displaystyle (X,\mu ),}⁠and a non-negative measurable⁠f{\displaystyle f}⁠such that M=UTfV∗{\displaystyle \mathbf {M} =\mathbf {U} T_{f}\mathbf {V} ^{*}} where⁠Tf{\displaystyle T_{f}}⁠is themultiplication by⁠f{\displaystyle f}⁠on⁠L2(X,μ).{\displaystyle L^{2}(X,\mu ).}⁠ This can be shown by mimicking the linear algebraic argument for the matrix case above.⁠VTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}⁠is the unique positive square root of⁠M∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}⁠as given by theBorel functional calculusforself-adjoint operators. The reason why⁠U{\displaystyle \mathbf {U} }⁠need not be unitary is that, unlike the finite-dimensional case, given an isometry⁠U1{\displaystyle U_{1}}⁠with nontrivial kernel, a suitable⁠U2{\displaystyle U_{2}}⁠may not be found such that [U1U2]{\displaystyle {\begin{bmatrix}U_{1}\\U_{2}\end{bmatrix}}} is a unitary operator. As for matrices, the singular value factorization is equivalent to thepolar decompositionfor operators: we can simply write M=UV∗⋅VTfV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {V} ^{*}\cdot \mathbf {V} T_{f}\mathbf {V} ^{*}} and notice that⁠UV∗{\displaystyle \mathbf {U} \mathbf {V} ^{*}}⁠is still a partial isometry while⁠VTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}⁠is positive. The notion of singular values and left/right-singular vectors can be extended tocompact operator on Hilbert spaceas they have a discrete spectrum. If⁠T{\displaystyle T}⁠is compact, every non-zero⁠λ{\displaystyle \lambda }⁠in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If⁠M{\displaystyle \mathbf {M} }⁠is compact, so is⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠. 
Applying the diagonalization result, the unitary image of its positive square root Tf{\displaystyle T_{f}} has a set of orthonormal eigenvectors {ei}{\displaystyle \{e_{i}\}} corresponding to strictly positive eigenvalues {σi}{\displaystyle \{\sigma _{i}\}}. For any ψ{\displaystyle \psi } in H,{\displaystyle H,} Mψ=UTfV∗ψ=∑i⟨UTfV∗ψ,Uei⟩Uei=∑iσi⟨ψ,Vei⟩Uei,{\displaystyle \mathbf {M} \psi =\mathbf {U} T_{f}\mathbf {V} ^{*}\psi =\sum _{i}\left\langle \mathbf {U} T_{f}\mathbf {V} ^{*}\psi ,\mathbf {U} e_{i}\right\rangle \mathbf {U} e_{i}=\sum _{i}\sigma _{i}\left\langle \psi ,\mathbf {V} e_{i}\right\rangle \mathbf {U} e_{i},} where the series converges in the norm topology on H.{\displaystyle H.} Notice how this resembles the expression from the finite-dimensional case. σi{\displaystyle \sigma _{i}} are called the singular values of M.{\displaystyle \mathbf {M} .} {Uei}{\displaystyle \{\mathbf {U} e_{i}\}} (resp. {Vei}{\displaystyle \{\mathbf {V} e_{i}\}}) can be considered the left-singular (resp. right-singular) vectors of M.{\displaystyle \mathbf {M} .} Compact operators on a Hilbert space are the closure of finite-rank operators in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is that M is compact if and only if M∗M is compact. The singular value decomposition was originally developed by differential geometers, who wished to determine whether a real bilinear form could be made equal to another by independent orthogonal transformations of the two spaces it acts on. Eugenio Beltrami and Camille Jordan discovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form a complete set of invariants for bilinear forms under orthogonal substitutions. James Joseph Sylvester also arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values the canonical multipliers of the matrix A.{\displaystyle \mathbf {A} .} The fourth mathematician to discover the singular value decomposition independently was Autonne in 1915, who arrived at it via the polar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be by Carl Eckart and Gale J. Young in 1936;[33] they saw it as a generalization of the principal axis transformation for Hermitian matrices. In 1907, Erhard Schmidt defined an analog of singular values for integral operators (which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed by Émile Picard in 1910, who was the first to call the numbers σk{\displaystyle \sigma _{k}} singular values (or in French, valeurs singulières). Practical methods for computing the SVD date back to Kogbetliantz in 1954–1955 and Hestenes in 1958,[34] closely resembling the Jacobi eigenvalue algorithm, which uses plane rotations or Givens rotations. However, these were replaced by the method of Gene Golub and William Kahan published in 1965,[35] which uses Householder transformations or reflections. In 1970, Golub and Christian Reinsch[36] published a variant of the Golub/Kahan algorithm that is still the one most-used today.
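Returning to the series expression above: in the finite-dimensional case it reduces to the expansion of M as a sum of rank-one matrices σi ui vi∗, whose partial sums are finite-rank approximations of M. A short NumPy sketch illustrating this (the error check uses the fact that the operator-norm error of the best rank-k approximation is the largest omitted singular value):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 4))
U, s, Vh = np.linalg.svd(M, full_matrices=False)

# M as a sum of rank-one terms sigma_i * u_i v_i^*
M_sum = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(len(s)))
print(np.allclose(M_sum, M))   # True

# partial sums are finite-rank approximations; in the operator 2-norm the
# error equals the largest omitted singular value
for k in range(len(s) + 1):
    M_k = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(k))
    err = np.linalg.norm(M - M_k, 2)
    expected = s[k] if k < len(s) else 0.0
    print(k, np.isclose(err, expected))   # all True
```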
https://en.wikipedia.org/wiki/Singular_value_decomposition
Schema.org is a reference website that publishes documentation and guidelines for using structured data mark-up on web pages (in the form of microdata, RDFa or JSON-LD). Its main objective is to standardize HTML tags to be used by webmasters for creating rich results (displayed as visual data or infographic tables on search engine results) about a certain topic of interest.[2] It is a part of the semantic web project, which aims to make document mark-up codes more readable and meaningful to both humans and machines. Schema.org is an initiative launched on June 2, 2011, by Bing, Google and Yahoo![3][4][5] (operators of the world's largest search engines at that time)[6] to create and support a common set of schemas for structured data markup on web pages. In November 2011, Yandex (whose search engine is the largest in Russia) joined the initiative.[7][8] They propose using the schema.org vocabulary along with the Microdata, RDFa, or JSON-LD formats[9] to mark up website content with metadata about itself. Such markup can be recognized by search engine spiders and other parsers, thus granting access to the meaning of the sites (see Semantic Web). The initiative also describes an extension mechanism for adding additional properties.[10] In 2012, the GoodRelations ontology was integrated into Schema.org.[11] Public discussion of the initiative largely takes place on the W3C public vocabularies mailing list.[12] Much of the vocabulary on Schema.org was inspired by earlier formats, such as microformats, FOAF, and OpenCyc.[13] Microformats, with their most dominant representative hCard, continued (as of 2015) to be published widely on the web, while the deployment of Schema.org increased strongly between 2012 and 2014.[14] In 2015,[15] Google began supporting the JSON-LD format, and as of September 2017 recommended using JSON-LD for structured data whenever possible.[16][17] Despite the advantages of using Schema.org, adoption remained limited as of 2016: a 2016 survey of 300 US-based marketing agencies and B2C advertisers across industries showed only 17% uptake.[18] As of 2024, over 45 million web domains had used schema markup on their web pages.[19] Validators, such as the deprecated[20] Google Structured Data Testing Tool, the more recent[21] Google Rich Results Test Tool,[22] the Schema.org Markup Validator,[23] the Yandex Microformat validator,[24] and the Bing Markup Validator,[25] can be used to test the validity of the data marked up with the schemas and microdata. More recently, Google Search Console (formerly Webmaster Tools) has provided a report section for unparsable structured data; if any schema code on a website is incorrect, it will appear in this report.[26] Some schema markups, such as Organization and Person, are commonly used to influence search results returned by Google's Knowledge Graph.[27] The Schema.org vocabulary includes sets of types, each of which has related metadata properties that can be expressed using predefined enumerations and datatypes. Types are managed by Schema.org and are regularly updated; as of February 2025, there are over 800 schema types.[28] A wide range of subjects and elements of a web page can be marked up using a schema, with examples including creative works, events, organizations, people, places, and products. An example[29] of how to mark up information about a movie and its director uses the Schema.org schemas and microdata: in order to mark up the data, the attribute itemtype along with the URL of the schema is used; the attribute itemscope defines the scope of the itemtype; and the kind of the current item can be defined by using the attribute itemprop. An illustrative sketch of such markup is given below.
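As an illustration, a minimal hypothetical sketch of such structured data is given below in Python, showing both a microdata snippet (with the itemscope, itemtype and itemprop attributes just described) and the equivalent JSON-LD object. The specific movie and values are placeholders, not necessarily those of the cited example.

```python
import json

# 1. Microdata: itemscope opens an item, itemtype names its schema.org type
#    by URL, and itemprop attaches properties to the enclosing item.
MICRODATA_SNIPPET = """
<div itemscope itemtype="https://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <div itemprop="director" itemscope itemtype="https://schema.org/Person">
    Director: <span itemprop="name">James Cameron</span>
  </div>
</div>
"""

# 2. The same structured data in JSON-LD, the format Google recommends.
movie_jsonld = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Avatar",
    "director": {"@type": "Person", "name": "James Cameron"},
}

print(MICRODATA_SNIPPET)
print(json.dumps(movie_jsonld, indent=2))
```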
https://en.wikipedia.org/wiki/Schema.org
Thetransformeris adeep learningarchitecture that was developed by researchers atGoogleand is based on the multi-headattentionmechanism, which was proposed in the 2017 paper "Attention Is All You Need".[1]Text is converted to numerical representations calledtokens, and each token is converted into a vector via lookup from aword embeddingtable.[1]At each layer, eachtokenis thencontextualizedwithin the scope of thecontext windowwith other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for keytokensto be amplified and less important tokens to be diminished. Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlierrecurrent neural architectures(RNNs) such aslong short-term memory(LSTM).[2]Later variations have been widely adopted for traininglarge language models(LLM) on large (language)datasets.[3] Transformers were first developed as an improvement over previous architectures formachine translation,[4][5]but have found many applications since. They are used in large-scalenatural language processing,computer vision(vision transformers),reinforcement learning,[6][7]audio,[8]multimodal learning,robotics,[9]and even playingchess.[10]It has also led to the development ofpre-trained systems, such asgenerative pre-trained transformers(GPTs)[11]andBERT[12](bidirectional encoder representations from transformers). For many years, sequence modelling and generation was done by using plainrecurrent neural networks(RNNs). A well-cited early example was theElman network(1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice thevanishing-gradient problemleaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. A key breakthrough wasLSTM(1995),[note 1]a RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of anattention mechanismwhich used neurons that multiply the outputs of other neurons, so-calledmultiplicative units.[13]Neural networks using multiplicative units were later calledsigma-pi networks[14]orhigher-order networks.[15]LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers. However, LSTM still used sequential processing, like most other RNNs.[note 2]Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. Modern Transformers overcome this problem, but unlike RNNs, they require computation time that isquadraticin the size of the context window. The linearly scalingfast weightcontroller (1992) learns to compute a weight matrix for further processing depending on the input.[16]One of its two networks has "fast weights" or "dynamic links" (1981).[17][18][19]A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries.[16]This was later shown to be equivalent to the unnormalized linear Transformer.[20][21] The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers[22][23]). 
The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.[22][23] A 380M-parameter model for machine translation uses twolong short-term memories(LSTM).[23]Its architecture consists of two parts. Theencoderis an LSTM that takes in a sequence of tokens and turns it into a vector. Thedecoderis another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model usedgated recurrent units(GRU) instead of LSTM.[22]Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[24][25] These early seq2seq models had no attention mechanism, and the state vector is accessible only after thelastword of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into afixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.[26] TheRNNsearchmodel introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of thefixed-sizeoutput vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".[4] The relative performances were compared between global (that ofRNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.[27] In 2016,Google Translatewas revamped toGoogle Neural Machine Translation, which replaced the previous model based onstatistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[28]It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.[29] Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard toparallelize, which prevented them from being accelerated on GPUs. In 2016,decomposable attentionapplied a self-attention mechanism tofeedforward networks, which are easy to parallelize, and achievedSOTAresult intextual entailmentwith an order of magnitude fewer parameters than LSTMs.[30]One of its authors, Jakob Uszkoreit, suspected that attentionwithoutrecurrence is sufficient for language translation, thus the title "attention isallyou need".[31]That hypothesis was against conventional wisdom at the time, and even his fatherHans Uszkoreit, a well-known computational linguist, was skeptical.[31]In the same year, self-attention (calledintra-attention orintra-sentence attention) was proposed for LSTMs.[32] In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. 
At the time, the focus of the research was on improvingseq2seqformachine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.[1]This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.[33] Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.[34]Transformer architecture is now used alongside manygenerative modelsthat contribute to the ongoingAI boom. In language modelling,ELMo(2018) was a bi-directional LSTM that produces contextualizedword embeddings, improving upon the line of research frombag of wordsandword2vec. It was followed byBERT(2018), an encoder-only Transformer model.[35]In 2019 October, Google started using BERT to process search queries.[36]In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.[37] Starting in 2018, the OpenAIGPT seriesof decoder-only Transformers became state of the art innatural language generation. In 2022, a chatbot based on GPT-3,ChatGPT, became unexpectedly[38]popular, triggering a boom aroundlarge language models.[39][40] Since 2020, Transformers have been applied in modalities beyond text, including thevision transformer,[41]speech recognition,[42]robotics,[6]andmultimodal.[43]The vision transformer, in turn, stimulated new developments inconvolutional neural networks.[44]Image and video generators likeDALL-E(2021),Stable Diffusion 3(2024),[45]andSora(2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data. The plain transformer architecture had difficulty converging. In the original paper[1]the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again. A 2020 paper found that usinglayer normalizationbefore(instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.[46] Transformers typically are first pretrained byself-supervised learningon a large generic dataset, followed bysupervisedfine-tuningon a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such asThe Pile. Tasks for pretraining and fine-tuning commonly include: TheT5 transformerreport[47]documents a large number ofnatural languagepretraining tasks. Some examples are: Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture. In general, there are 3 classes of language modelling tasks: "masked",[49]"autoregressive",[50]and "prefixLM".[51]These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer. 
In a masked task,[49]one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. Theloss functionfor the task is typically sum oflog-perplexitiesfor the masked-out tokens:Loss=−∑t∈masked tokensln⁡(probability oftconditional on its context){\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}and the model is trained to minimize this loss function. TheBERT series of modelsare trained for masked token prediction and another task. In an autoregressive task,[50]the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. TheGPT series of modelsare trained by autoregressive tasks. In a prefixLM task,[51]the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. TheT5 series of modelsare trained by prefixLM tasks. Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not"prefixLM" (prefix language model). All transformers have the same primary components: The following description follows exactly the Transformer as described in the original paper. There are variants, described in thefollowing section. By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, asxW{\displaystyle xW}. As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is atokenizer. The set of all tokens is the vocabulary of the tokenizer, and its size is thevocabulary sizenvocabulary{\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown". Some commonly used tokenizers arebyte pair encoding, WordPiece, and SentencePiece. Each token is converted into an embedding vector via alookup table. Equivalently stated, it multiplies aone-hotrepresentation of the token by an embedding matrixM{\displaystyle M}. For example, if the input token is3{\displaystyle 3}, then the one-hot representation is[0,0,0,1,0,0,…]{\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector isEmbed(3)=[0,0,0,1,0,0,…]M{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors. The number of dimensions in an embedding vector is calledhidden sizeorembedding sizeand written asdemb{\displaystyle d_{\text{emb}}}.[35]This size is written asdmodel{\displaystyle d_{\text{model}}}in the original Transformer paper.[1] An un-embedding layer is almost the reverse of an embedding layer. 
Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens. The un-embedding layer is a linear-softmaxlayer:UnEmbed(x)=softmax(xW+b){\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}The matrix has shape(demb,nvocabulary){\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrixM{\displaystyle M}and the un-embedding matrixW{\displaystyle W}are sometimes required to be transposes of each other, a practice called weight tying.[52] A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information aboutwherethe words are in the input sequence. This shall induce abiastowards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man". The positional encoding is defined as a function of typef:R→Rd;d∈Z,d>0{\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, whered{\displaystyle d}is a positive eveninteger. The full positional encoding defined in the original paper[1]is:(f(t)2k,f(t)2k+1)=(sin⁡(θ),cos⁡(θ))∀k∈{0,1,…,d/2−1}{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}whereθ=trk,r=N2/d{\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}. Here,N{\displaystyle N}is a free parameter that should be significantly larger than the biggestk{\displaystyle k}that would be input into the positional encoding function. The original paper usesN=10000{\displaystyle N=10000}. The function is in a simpler form when written as a complex function of typef:R→Cd/2{\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}f(t)=(eit/rk)k=0,1,…,d2−1{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}wherer=N2/d{\displaystyle r=N^{2/d}}. The main reason for using this positional encoding function is that using it, shifts are linear transformations:f(t+Δt)=diag(f(Δt))f(t){\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}whereΔt∈R{\displaystyle \Delta t\in \mathbb {R} }is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication. By taking a linear sum, any convolution can also be implemented as linear transformations:∑jcjf(t+Δtj)=(∑jcjdiag(f(Δtj)))f(t){\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}for any constantscj{\displaystyle c_{j}}. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in aconvolutional neural networklanguage model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position." In typical implementations, all operations are done over the real numbers, not the complex numbers, but sincecomplex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference. Like earlierseq2seqmodels, the original transformer model used anencoder-decoderarchitecture. 
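The sinusoidal positional encoding defined above, together with its shift-as-linear-map property, can be sketched in a few lines of NumPy. This is an illustration only; it uses N = 10000 as in the original paper and toy dimensions.

```python
import numpy as np

def sinusoidal_positional_encoding(t, d, N=10000):
    """d-dimensional encoding with f(t)[2k] = sin(t / r^k),
    f(t)[2k+1] = cos(t / r^k), where r = N^(2/d)."""
    k = np.arange(d // 2)
    theta = t / (N ** (2.0 / d)) ** k
    enc = np.empty(d)
    enc[0::2] = np.sin(theta)
    enc[1::2] = np.cos(theta)
    return enc

d, t, dt = 8, 5.0, 3.0
f_t = sinusoidal_positional_encoding(t, d)
f_shift = sinusoidal_positional_encoding(t + dt, d)

# shift property: f(t + dt) is a fixed linear map of f(t), namely a rotation
# of the k-th (sin, cos) pair by the angle dt / r^k
k = np.arange(d // 2)
phi = dt / (10000 ** (2.0 / d)) ** k
pred = np.empty(d)
pred[0::2] = f_t[0::2] * np.cos(phi) + f_t[1::2] * np.sin(phi)
pred[1::2] = f_t[1::2] * np.cos(phi) - f_t[0::2] * np.sin(phi)
print(np.allclose(pred, f_shift))   # True: the shift acts as a linear map
```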
The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far. The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).[53][54] Both the encoder and decoder layers have afeed-forward neural networkfor additional processing of their outputs and contain residual connections and layer normalization steps.[54]These feed-forward layers contain most of the parameters in a Transformer model. The feedforward network (FFN) modules in a Transformer are 2-layeredmultilayer perceptrons:FFN(x)=ϕ(xW(1)+b(1))W(2)+b(2){\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}whereW(1){\displaystyle W^{(1)}}andW(2){\displaystyle W^{(2)}}are weight matrices andb(1){\displaystyle b^{(1)}}andb(2){\displaystyle b^{(2)}}are bias vectors, andϕ{\displaystyle \phi }is its activation function. The original Transformer usedReLUactivation. The number of neurons in the middle layer is calledintermediate size(GPT),[55]filter size(BERT),[35]orfeedforward size(BERT).[35]It is typically larger than the embedding size. For example, in both GPT-2 series and BERT series, the intermediate size of a model is 4 times its embedding size:dffn=4demb{\displaystyle d_{\text{ffn}}=4d_{\text{emb}}}. The attention mechanism used in the Transformer architecture are scaleddot-productattentionunits. For each unit, the transformer model learns three weight matrices: the query weightsWQ{\displaystyle W^{Q}}, the key weightsWK{\displaystyle W^{K}}, and the value weightsWV{\displaystyle W^{V}}. The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of lengthℓseq, query{\displaystyle \ell _{\text{seq, query}}}, and each entry is a vector of dimensiondemb, query{\displaystyle d_{\text{emb, query}}}. Similarly for the key and value sequences. For each vectorxi,query{\displaystyle x_{i,{\text{query}}}}in the query sequence, it is multiplied by a matrixWQ{\displaystyle W^{Q}}to produce a query vectorqi=xi,queryWQ{\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}}. The matrix of all query vectors is the query matrix:Q=XqueryWQ{\displaystyle Q=X_{\text{query}}W^{Q}}Similarly, we construct the key matrixK=XkeyWK{\displaystyle K=X_{\text{key}}W^{K}}and the value matrixV=XvalueWV{\displaystyle V=X_{\text{value}}W^{V}}. It is usually the case that allWQ,WK,WV{\displaystyle W^{Q},W^{K},W^{V}}are square matrices, meaningdemb, query=dquery{\displaystyle d_{\text{emb, query}}=d_{\text{query}}}, etc. Attention weights are calculated using the query and key vectors: the attention weightaij{\displaystyle a_{ij}}from tokeni{\displaystyle i}to tokenj{\displaystyle j}is thedot productbetweenqi{\displaystyle q_{i}}andkj{\displaystyle k_{j}}. 
The attention weights are divided by the square root of the dimension of the key vectors,dk{\displaystyle {\sqrt {d_{k}}}}, which stabilizes gradients during training, and passed through asoftmaxwhich normalizes the weights. The fact thatWQ{\displaystyle W^{Q}}andWK{\displaystyle W^{K}}are different matrices allows attention to be non-symmetric: if tokeni{\displaystyle i}attends to tokenj{\displaystyle j}(i.e.qi⋅kj{\displaystyle q_{i}\cdot k_{j}}is large), this does not necessarily mean that tokenj{\displaystyle j}will attend to tokeni{\displaystyle i}(i.e.qj⋅ki{\displaystyle q_{j}\cdot k_{i}}could be small). The output of the attention unit for tokeni{\displaystyle i}is the weighted sum of the value vectors of all tokens, weighted byaij{\displaystyle a_{ij}}, the attention from tokeni{\displaystyle i}to each token. The attention calculation for all tokens can be expressed as one large matrix calculation using thesoftmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations. The matricesQ{\displaystyle Q},K{\displaystyle K}andV{\displaystyle V}are defined as the matrices where thei{\displaystyle i}th rows are vectorsqi{\displaystyle q_{i}},ki{\displaystyle k_{i}}, andvi{\displaystyle v_{i}}respectively. Then we can represent the attention asAttention(Q,K,V)=softmax(QKTdk)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}} where the softmax is applied over each of the rows of the matrix. The number of dimensions in a query vector isquery sizedquery{\displaystyle d_{\text{query}}}and similarly for thekey sizedkey{\displaystyle d_{\text{key}}}andvalue sizedvalue{\displaystyle d_{\text{value}}}. The output dimension of an attention head is itshead dimensiondhead{\displaystyle d_{\text{head}}}. The attention mechanism requires the following three equalities to hold:ℓseq, key=ℓseq, value,dquery=dkey,dvalue=dhead{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}but is otherwise unconstrained. If the attention head is used in a self-attention fashion, thenXquery=Xkey=Xvalue{\displaystyle X_{\text{query}}=X_{\text{key}}=X_{\text{value}}}. If the attention head is used in a cross-attention fashion, then usuallyXquery≠Xkey=Xvalue{\displaystyle X_{\text{query}}\neq X_{\text{key}}=X_{\text{value}}}. It is theoretically possible for all three to be different, but that is rarely the case in practice. One set of(WQ,WK,WV){\displaystyle \left(W^{Q},W^{K},W^{V}\right)}matrices is called anattention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices,WQ{\displaystyle W^{Q}}andWK{\displaystyle W^{K}}, which are involved in the attention score computation, defines the "relevance". Meanwhile, the value projection matrixWV{\displaystyle W^{V}}, in combination with the part of the output projection matrixWO{\displaystyle W^{O}}, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. 
This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects.[56]The computations for each attention head can be performed inparallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into thefeed-forward neural networklayers. Concretely, let the multiple attention heads be indexed byi{\displaystyle i}, then we haveMultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(QWiQ,KWiK,VWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}))W^{O}}where the matrixX{\displaystyle X}is the concatenation of word embeddings, and the matricesWiQ,WiK,WiV{\displaystyle W_{i}^{Q},W_{i}^{K},W_{i}^{V}}are "projection matrices" owned by individual attention headi{\displaystyle i}, andWO{\displaystyle W^{O}}is a final projection matrix owned by the whole multi-headed attention head. It is theoretically possible for each attention head to have a different head dimensiondhead{\displaystyle d_{\text{head}}}, but that is rarely the case in practice. As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions:demb=768,nhead=12,dhead=64{\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64}Since12×64=768{\displaystyle 12\times 64=768}, its output projection matrixWO∈R(12×64)×768{\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}}is a square matrix. The Transformer architecture is constructed to calculate output tokens iteratively. Assumingt=0{\displaystyle t=0}refers to the calculation of the first output tokeni=0{\displaystyle i=0}, for stept>0{\displaystyle t>0}, the output tokeni=0{\displaystyle i=0}shall remain constant. This ensures properties of the model similar toautoregressive models.[1]Therefore, at every time stept{\displaystyle t}, the calculation for all outputsi{\displaystyle i}should not have access to tokens at positionj{\displaystyle j}forj>=i{\displaystyle j>=i}(as it naturally is the case for time stept=i{\displaystyle t=i}, when tokensj>t{\displaystyle j>t}are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrixM{\displaystyle M}that is−∞{\displaystyle -\infty }at entries where the attention link must be cut, and0{\displaystyle 0}at other places:MaskedAttention(Q,K,V)=softmax(M+QKTdk)V{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}The following matrix is commonly used in decoder self-attention modules, called "causal masking":Mcausal=[0−∞−∞…−∞00−∞…−∞000…−∞⋮⋮⋮⋱⋮000…0]{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}} In words, it means that each token can pay attention to itself, and every token before it, but not any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. 
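The attention formulas above translate almost directly into NumPy. The following minimal sketch uses a single head, toy dimensions and randomly initialized (hypothetical) weight matrices, and applies the causal mask just described:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(X, Wq, Wk, Wv, mask):
    """softmax(M + Q K^T / sqrt(d_k)) V for a single self-attention head.
    X has one row per token; the rows of Q, K, V are q_i, k_i, v_i."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = mask + (Q @ K.T) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

n_tokens, d_emb, d_head = 5, 16, 8
rng = np.random.default_rng(0)
X = rng.standard_normal((n_tokens, d_emb))
Wq, Wk, Wv = (rng.standard_normal((d_emb, d_head)) for _ in range(3))

# causal mask: 0 on and below the diagonal, -inf above it
causal = np.where(np.tri(n_tokens, dtype=bool), 0.0, -np.inf)
out = masked_attention(X, Wq, Wk, Wv, causal)
print(out.shape)   # (5, 8): one contextualized vector per token
```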
As an example of an uncommon use of mask matrix, theXLNetconsiders all masks of the formPMcausalP−1{\displaystyle PM_{\text{causal}}P^{-1}}, whereP{\displaystyle P}is a randompermutation matrix.[57] An encoder consists of an embedding layer, followed by multiple encoder layers. Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes an input as a sequence of input vectors, applies the self-attention mechanism, to produce an intermediate sequence of vectors, then applies the feed-forward layer for each vector individually. Schematically, we have:given input vectorsh0,h1,…combine them into a matrixH=[h0h1⋮]EncoderLayer(H)=[FFN(MultiheadedAttention(H,H,H)0)FFN(MultiheadedAttention(H,H,H)1)⋮]{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}} whereFFN{\displaystyle {\text{FFN}}}stands for "feed-forward network". We can more succinctly write it asEncoderLayer(H)=FFN(MultiheadedAttention(H,H,H)){\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))}with the implicit convention that theFFN{\displaystyle {\text{FFN}}}is applied to each row of the matrix individually. The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder. As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking. A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer. Each decoder consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called theencoder-decoder attention.[1][54] Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow.[1]This allows forautoregressivetext generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked. In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism. 
Schematically, we have: H′=MaskedMultiheadedAttention(H,H,H)DecoderLayer(H)=FFN(MultiheadedAttention(H′,HE,HE)){\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}} where HE{\displaystyle H^{E}} is the matrix with rows being the output vectors from the encoder. The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to the probability, and the decoder can be run again to produce the next token, and so on, autoregressively generating output text. Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence.[58] BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.[35] Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed in practice for numerical stability and convergence. The residual connection, which is introduced to avoid vanishing-gradient issues and stabilize the training process, can be expressed as y = F(x) + x: the output y is the sum of the transformation F(x) of the input x and the input x itself. Adding the input x preserves the input information and avoids issues when the gradient of F(x) is close to zero. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector. There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is LayerNorm(x+Sublayer(x)){\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))} where Sublayer(x){\displaystyle \mathrm {Sublayer} (x)} is the function implemented by the sublayer itself. In the pre-LN convention, the output of each sublayer is x+Sublayer(LayerNorm(x)){\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}. The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, in which the learning rate starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[59] was found to be easier to train, requiring no warm-up and leading to faster convergence.[46] Reference[60] gives pseudocode for a standard pre-LN encoder-decoder Transformer; a simplified sketch in the same spirit is shown below. The Transformer architecture, being modular, allows variations. Several common variations are described here.[61] An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only.
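As noted above, the following is a simplified sketch of the pre-LN encoder and decoder layer structure, with single-head attention and hypothetical helper names; it illustrates the x + Sublayer(LayerNorm(x)) convention rather than reproducing the pseudocode of [60].

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy embedding size

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)   # gain and bias omitted for brevity

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(Xq, Xkv, mask=0.0, W=None):
    Wq, Wk, Wv = W                          # hypothetical per-layer weights
    Q, K, V = Xq @ Wq, Xkv @ Wk, Xkv @ Wv
    return softmax(mask + Q @ K.T / np.sqrt(K.shape[-1])) @ V

def ffn(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2     # ReLU feed-forward network

def pre_ln_encoder_layer(H, p):
    # pre-LN: each sublayer computes x + Sublayer(LayerNorm(x))
    H = H + attention(layer_norm(H), layer_norm(H), W=p["attn"])
    H = H + ffn(layer_norm(H), *p["ffn"])
    return H

def pre_ln_decoder_layer(H, H_enc, causal_mask, p):
    H = H + attention(layer_norm(H), layer_norm(H), causal_mask, p["self"])
    H = H + attention(layer_norm(H), H_enc, W=p["cross"])   # cross-attention
    H = H + ffn(layer_norm(H), *p["ffn"])
    return H

def params():
    w = lambda *s: rng.standard_normal(s) / np.sqrt(d)
    return {"attn": (w(d, d), w(d, d), w(d, d)),
            "self": (w(d, d), w(d, d), w(d, d)),
            "cross": (w(d, d), w(d, d), w(d, d)),
            "ffn": (w(d, 4 * d), w(4 * d, d))}

src, tgt = rng.standard_normal((7, d)), rng.standard_normal((5, d))
mask = np.where(np.tri(5, dtype=bool), 0.0, -np.inf)
p = params()
H_enc = pre_ln_encoder_layer(src, p)              # one encoder layer
out = pre_ln_decoder_layer(tgt, H_enc, mask, p)   # one decoder layer
print(out.shape)   # (5, 16)
```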
Encoder-only models are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer and then taking just the encoder.[51] A "decoder-only" Transformer is not literally decoder-only, since without an encoder the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only. An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.[61] A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form ([61], Figure 3) MprefixLM=[0−∞0Mcausal]{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}} where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. Such models resemble encoder-decoder models, but have less "sparsity". They are rarely used, though they are cited as theoretical possibilities and in benchmark comparisons.[51] There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN decoder runs much faster than a Transformer decoder when run autoregressively.[62] The original transformer uses the ReLU activation function. Other activation functions were developed. The Llama series and PaLM used SwiGLU;[63] both GPT-1 and BERT[35] used GELU.[64] Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.[63] The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm,[65] which is used in the Llama series. Other examples include CapsuleNorm,[66] ScaleNorm,[67] and FixNorm.[67] Transformers may use positional encoding methods other than sinusoidal.[68] The original Transformer paper reported using a learned positional encoding,[69] but found it not superior to the sinusoidal one.[1] Later work[70] found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without a positional encoding module. RoPE (rotary positional embedding)[71] is best explained by considering a list of 2-dimensional vectors [(x1(1),x1(2)),(x2(1),x2(2)),(x3(1),x3(2)),...]{\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}. Now pick some angle θ{\displaystyle \theta }.
Then RoPE encoding isRoPE(xm(1),xm(2),m)=(cos⁡mθ−sin⁡mθsin⁡mθcos⁡mθ)(xm(1)xm(2))=(xm(1)cos⁡mθ−xm(2)sin⁡mθxm(2)cos⁡mθ+xm(1)sin⁡mθ){\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}Equivalently, if we write the 2-dimensional vectors as complex numberszm:=xm(1)+ixm(2){\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}}, then RoPE encoding is just multiplication by an angle:RoPE(zm,m)=eimθzm{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}For a list of2n{\displaystyle 2n}-dimensional vectors, a RoPE encoder is defined by a sequence of anglesθ(1),...,θ(n){\displaystyle \theta ^{(1)},...,\theta ^{(n)}}. Then the RoPE encoding is applied to each pair of coordinates. The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:RoPE(x,m)TRoPE(y,n)=RoPE(x,m+k)TRoPE(y,n+k){\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}for any integerk{\displaystyle k}. ALiBi (Attention with Linear Biases)[72]is not areplacementfor the positional encoder on the original transformer. Instead, it is anadditionalpositional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism isAttention(Q,K,V)=softmax(QKTdk+sB)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}Here,s{\displaystyle s}is a real number ("scalar"), andB{\displaystyle B}is thelinear biasmatrix defined byB=(0123⋯−1012⋯−2−101⋯−3−2−10⋯⋮⋮⋮⋮⋱){\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}in other words,Bi,j=j−i{\displaystyle B_{i,j}=j-i}. The idea being that the linear bias matrix is a softened mask. Just as0{\displaystyle 0}represent full attention paid, and−∞{\displaystyle -\infty }represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction. ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located). Relative Position Encodings[73]is similar to ALiBi, but more generic:Attention(Q,K,V)=softmax(QKTdk+B)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}whereB{\displaystyle B}is aToeplitz matrix, that is,Bi,j=Bi′,j′{\displaystyle B_{i,j}=B_{i',j'}}wheneveri−j=i′−j′{\displaystyle i-j=i'-j'}. 
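A minimal NumPy sketch of RoPE follows (illustrative only). The per-pair angle schedule theta_i = 10000^(-2i/d) used here is a common choice and is an assumption of the sketch, not something fixed by the definition above; the final check illustrates the relative-position property of the dot product.

```python
import numpy as np

def rope(x, theta_base=10000.0):
    # x: (seq_len, d) with d even. Rotate each coordinate pair (2i, 2i+1) of the
    # vector at position m by the angle m * theta_i (assumed angle schedule).
    seq_len, d = x.shape
    i = np.arange(d // 2)
    theta = theta_base ** (-2.0 * i / d)          # one angle per coordinate pair
    m = np.arange(seq_len)[:, None]
    cos, sin = np.cos(m * theta), np.sin(m * theta)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: dot products depend only on the offset between positions.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
a = rope(np.stack([q, k]))                          # q at position 0, k at position 1
b = rope(np.vstack([np.zeros((3, 8)), q, k]))[3:]   # q at position 3, k at position 4
print(np.allclose(a[0] @ a[1], b[0] @ b[1]))        # True: same offset, same dot product
```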
This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".[74] The transformer model has been implemented in standard deep learningframeworkssuch asTensorFlowandPyTorch.Transformersis a library produced byHugging Facethat supplies transformer-based architectures and pretrained models.[11] When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. TheKV cachingmethod saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token.PagedAttentionappliesmemory pagingto KV caching.[75][76][77] If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots. FlashAttention[78]is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performsmatrix multiplications in blocks, such that each block fits within thecacheof a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page onsoftmaxfor details. An improved version, FlashAttention-2,[79][80][81]was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s onA100GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).[82] Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware likeH100GPUs and new data types like FP8. Multi-Query Attention changes the multiheaded attention mechanism.[83]Whereas normally, MultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWiK,XWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}with Multi-Query Attention, there is just oneWK,WV{\displaystyle W^{K},W^{V}}, thus: MultiQueryAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWK,XWV))WO{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}} This has a neutral effect on model quality and training speed, but increases inference speed. More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.[84] Multihead Latent Attention (MLA) is alow-rank approximationto standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). 
This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.[85] Speculative decoding[86][87]is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly. The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense. Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a tokenx1,x2,...,x512{\displaystyle x_{1},x_{2},...,x_{512}}, taking time512TGPT-3{\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that eachxt{\displaystyle x_{t}}is indeed the token with the largest log-likelihood in thet{\displaystyle t}-th output. In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens:x~1,x~2,x~3,x~4{\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes4TGPT-3-small{\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose thatx~1{\displaystyle {\tilde {x}}_{1}}andx~2{\displaystyle {\tilde {x}}_{2}}are verified by GPT-3 as what it would have picked; then those are kept, butx~3{\displaystyle {\tilde {x}}_{3}}is not, sox~3,x~4{\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}}are discarded, and GPT-3 is run on those. This would take4TGPT-3-small+3TGPT-3{\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than4TGPT-3{\displaystyle 4T_{\text{GPT-3}}}. For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding were not used.[86][88] In Multi-Token Prediction, a single forward pass creates a final embedding vector, which is then un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.[89][90] Training transformer-based architectures can be expensive, especially for long inputs.[91]Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows.[92]In the audio domain, SepTr decouples the attention in time and frequency domains.[93]Long Range Arena (2020)[94]is a standard benchmark for comparing the behavior of transformer architectures over long inputs. The standard attention graph is either all-to-all or causal, both of which scale asO(N2){\displaystyle O(N^{2})}whereN{\displaystyle N}is the number of tokens in a sequence.
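The quadratic growth is easy to see concretely: the attention score matrix has one entry per (query, key) pair, so for a length-N sequence it is N × N. A minimal NumPy sketch with arbitrary toy dimensions:

```python
import numpy as np

def attention_scores(X, W_q, W_k, causal=False):
    # Standard (all-to-all or causal) attention scores for a length-N sequence:
    # the score matrix is N x N, so memory and work grow quadratically in N.
    Q, K = X @ W_q, X @ W_k
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if causal:
        # Causal mask: position i may not attend to positions j > i.
        scores = np.where(np.tril(np.ones_like(scores, dtype=bool)), scores, -np.inf)
    return scores

rng = np.random.default_rng(0)
d = 16
for N in (128, 256, 512):
    X = rng.normal(size=(N, d))
    W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    S = attention_scores(X, W_q, W_k, causal=True)
    print(N, S.shape, S.size)   # the number of score entries quadruples as N doubles
```

Doubling N quadruples the number of score entries, which motivates the sub-quadratic methods below.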
Reformer (2020)[91][95]reduces the computational load fromO(N2){\displaystyle O(N^{2})}toO(Nln⁡N){\displaystyle O(N\ln N)}by usinglocality-sensitive hashingand reversible layers.[96] Sparse attention[97]uses attention graphs that grows slower thanO(N2){\displaystyle O(N^{2})}. For example, BigBird (2020)[98]uses randomsmall-world networkswhich grows asO(N){\displaystyle O(N)}. Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers[99]reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value. Random Feature Attention (2021)[100]usesFourier random features:φ(x)=1D[cos⁡⟨w1,x⟩,sin⁡⟨w1,x⟩,⋯cos⁡⟨wD,x⟩,sin⁡⟨wD,x⟩]T{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}wherew1,...,wD{\displaystyle w_{1},...,w_{D}}are independent samples from the normal distributionN(0,σ2I){\displaystyle N(0,\sigma ^{2}I)}. This choice of parameters satisfyE[⟨φ(x),φ(y)⟩]=e−‖x−y‖22σ2{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}, ore⟨x,y⟩/σ2=E[⟨e‖x‖2/2σ2φ(x),e‖y‖2/2σ2φ(y)⟩]≈⟨e‖x‖2/2σ2φ(x),e‖y‖2/2σ2φ(y)⟩{\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }Consequently, the one-headed attention, with one query, can be written asAttention(q,K,V)=softmax(qKTdk)V≈φ(q)T∑ie‖ki‖2/2σ2φ(ki)viTφ(q)T∑ie‖ki‖2/2σ2φ(ki){\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}whereσ=dK1/4{\displaystyle \sigma =d_{K}^{1/4}}. Similarly for multiple queries, and for multiheaded attention. This approximation can be computed in linear time, as we can compute the matrixφ(ki)viT{\displaystyle \varphi (k_{i})v_{i}^{T}}first, then multiply it with the query. In essence, we have managed to obtain a more precise version ofAttention(Q,K,V)=softmax(QKTdk)V≈Q(KTV/dk){\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}Performer (2022)[101]uses the same Random Feature Attention, butw1,...,wD{\displaystyle w_{1},...,w_{D}}are first independently sampled from the normal distributionN(0,σ2I){\displaystyle N(0,\sigma ^{2}I)}, then they areGram-Schmidt processed. Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstratingtransfer learning.[102]The LLaVA was a vision-language model composed of a language model (Vicuna-13B)[103]and a vision model (ViT-L/14), connected by a linear layer. 
Only the linear layer is finetuned.[104] Vision transformers[41]adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer[42]and later Whisper[105]follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer. Perceivers[106][107]are a variant of Transformers designed for multimodality. For image generation, notable architectures are DALL-E 1 (2021), Parti (2022),[108]Phenaki (2023),[109]and Muse (2023).[110]Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image.[111]Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.[112]Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted.[110]Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.[109] The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including: Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
https://en.wikipedia.org/wiki/Transformer_(machine_learning)
Attentionis amachine learningmethod that determines the importance of each component in a sequence relative to the other components in that sequence. Innatural language processing, importance is represented by"soft"weights assigned to each word in a sentence. More generally, attention encodes vectors calledtokenembeddingsacross a fixed-widthsequencethat can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serialrecurrent neural network(RNN) language translation system, but a more recent design, namely thetransformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas aboutattention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from thehidden layersof recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to beattenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. Academic reviews of the history of the attention mechanism are provided in Niu et al.[1]and Soydaner.[2] seq2seqwith RNN + Attention.[13]Attention mechanism was added onto RNN encoder-decoder architecture to improve language translation of long sentences. See Overview section. The modern era of machine attention was revitalized by grafting an attention mechanism (Fig 1. orange) to an Encoder-Decoder. Figure 2 shows the internal step-by-step operation of the attention block (A) in Fig 1. This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests anasymmetricrole for the Query and Key vectors, whereoneitem of interest (the Query vector "that") is matched againstallpossible items (the Key vectors of each word in the sentence). However, both Self and Cross Attentions' parallel calculations matches all tokens of the K matrix with all tokens of the Q matrix; therefore the roles of these vectors aresymmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention mechanisms further by studying their roles in focused settings, such as in-context learning,[20]masked language tasks,[21]stripped down transformers,[22]bigram statistics,[23]N-gram statistics,[24]pairwise convolutions,[25]and arithmetic factoring.[26] In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. Consider an example of translatingI love youto French. On the first pass through the decoder, 94% of the attention weight is on the first English wordI, so the network offers the wordje. On the second pass of the decoder, 88% of the attention weight is on the third English wordyou, so it offerst'. On the last pass, 95% of the attention weight is on the second English wordlove, so it offersaime. In theI love youexample, the second wordloveis aligned with the third wordaime. 
Stacking soft row vectors together forje,t', andaimeyields analignment matrix: Sometimes, alignment can be multiple-to-multiple. For example, the English phraselook it upcorresponds tocherchez-le. Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1, and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. Many variants of attention implement soft weights, such as Forconvolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention,[30]channel attention,[31]or combinations.[32][33] These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in Core Calculations section above. The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot ofGPUmemory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency.[38] Flex Attention[39]is an attention kernel developed by Meta that allows users to modify attention scores prior tosoftmaxand dynamically chooses the optimal attention algorithm. The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the Transformer architecture, which replaced recurrence entirely with attention mechanisms. As a result, Transformers became the foundation for models like BERT, GPT, and T5 (Vaswani et al., 2017). Attention is widely used in natural language processing, computer vision, and speech recognition. In NLP, it improves context understanding in tasks like question answering and summarization. In vision, visual attention helps models focus on relevant image regions, enhancing object detection and image captioning. For matrices:Q∈Rm×dk,K∈Rn×dk{\displaystyle \mathbf {Q} \in \mathbb {R} ^{m\times d_{k}},\mathbf {K} \in \mathbb {R} ^{n\times d_{k}}}andV∈Rn×dv{\displaystyle \mathbf {V} \in \mathbb {R} ^{n\times d_{v}}}, the scaled dot-product, orQKV attentionis defined as:Attention(Q,K,V)=softmax(QKTdk)V∈Rm×dv{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}\right)\mathbf {V} \in \mathbb {R} ^{m\times d_{v}}}whereT{\displaystyle {}^{T}}denotestransposeand thesoftmax functionis applied independently to every row of its argument. The matrixQ{\displaystyle \mathbf {Q} }containsm{\displaystyle m}queries, while matricesK,V{\displaystyle \mathbf {K} ,\mathbf {V} }jointly contain anunorderedset ofn{\displaystyle n}key-value pairs. 
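The scaled dot-product definition translates directly into code. A minimal NumPy sketch, with the dimensions chosen as arbitrary examples:

```python
import numpy as np

def softmax(z):
    # Row-wise softmax, numerically stabilized.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def qkv_attention(Q, K, V):
    # Q: (m, d_k), K: (n, d_k), V: (n, d_v)  ->  output: (m, d_v)
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (m, n); each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
m, n, d_k, d_v = 3, 5, 4, 6
Q = rng.normal(size=(m, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
out = qkv_attention(Q, K, V)
print(out.shape)  # (3, 6): one output row per query, a convex combination of the rows of V
```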
Value vectors in matrixV{\displaystyle \mathbf {V} }are weighted using the weights resulting from the softmax operation, so that the rows of them{\displaystyle m}-by-dv{\displaystyle d_{v}}output matrix are confined to theconvex hullof the points inRdv{\displaystyle \mathbb {R} ^{d_{v}}}given by the rows ofV{\displaystyle \mathbf {V} }. To understand thepermutation invarianceandpermutation equivarianceproperties of QKV attention,[40]letA∈Rm×m{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times m}}andB∈Rn×n{\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times n}}bepermutation matrices; andD∈Rm×n{\displaystyle \mathbf {D} \in \mathbb {R} ^{m\times n}}an arbitrary matrix. The softmax function ispermutation equivariantin the sense that: By noting that the transpose of a permutation matrix is also its inverse, it follows that: which shows that QKV attention isequivariantwith respect to re-ordering the queries (rows ofQ{\displaystyle \mathbf {Q} }); andinvariantto re-ordering of the key-value pairs inK,V{\displaystyle \mathbf {K} ,\mathbf {V} }. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simpleself-attentionfunction defined as: is permutation equivariant with respect to re-ordering the rows of the input matrixX{\displaystyle X}in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold formulti-head attention, which is defined below. When QKV attention is used as a building block for an autoregressive decoder, and when at training time all input and output matrices haven{\displaystyle n}rows, amasked attentionvariant is used:Attention(Q,K,V)=softmax(QKTdk+M)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}+\mathbf {M} \right)\mathbf {V} }where the mask,M∈Rn×n{\displaystyle \mathbf {M} \in \mathbb {R} ^{n\times n}}is astrictly upper triangular matrix, with zeros on and below the diagonal and−∞{\displaystyle -\infty }in every element above the diagonal. The softmax output, also inRn×n{\displaystyle \mathbb {R} ^{n\times n}}is thenlower triangular, with zeros in all elements above the diagonal. The masking ensures that for all1≤i<j≤n{\displaystyle 1\leq i<j\leq n}, rowi{\displaystyle i}of the attention output is independent of rowj{\displaystyle j}of any of the three input matrices. The permutation invariance and equivariance properties of standard QKV attention do not hold for the masked variant. Multi-head attentionMultiHead(Q,K,V)=Concat(head1,...,headh)WO{\displaystyle {\text{MultiHead}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{Concat}}({\text{head}}_{1},...,{\text{head}}_{h})\mathbf {W} ^{O}}where each head is computed with QKV attention as:headi=Attention(QWiQ,KWiK,VWiV){\displaystyle {\text{head}}_{i}={\text{Attention}}(\mathbf {Q} \mathbf {W} _{i}^{Q},\mathbf {K} \mathbf {W} _{i}^{K},\mathbf {V} \mathbf {W} _{i}^{V})}andWiQ,WiK,WiV{\displaystyle \mathbf {W} _{i}^{Q},\mathbf {W} _{i}^{K},\mathbf {W} _{i}^{V}}, andWO{\displaystyle \mathbf {W} ^{O}}are parameter matrices. The permutation properties of (standard, unmasked) QKV attention apply here also. For permutation matrices,A,B{\displaystyle \mathbf {A} ,\mathbf {B} }: from which we also see thatmulti-head self-attention: is equivariant with respect to re-ordering of the rows of input matrixX{\displaystyle X}. 
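A minimal sketch of the masked variant, with the strictly upper triangular mask described above (again with arbitrary toy dimensions):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def masked_attention(Q, K, V):
    # Q, K: (n, d_k), V: (n, d_v). The mask M is strictly upper triangular:
    # 0 on and below the diagonal, -inf above, so position i cannot attend to j > i.
    n, d_k = Q.shape
    M = np.where(np.triu(np.ones((n, n), dtype=bool), k=1), -np.inf, 0.0)
    weights = softmax(Q @ K.T / np.sqrt(d_k) + M)   # lower triangular after the softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, weights = masked_attention(Q, K, V)
print(np.allclose(np.triu(weights, k=1), 0.0))  # True: no weight on future positions
print(out.shape)                                # (4, 8)
```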
Attention(Q,K,V)=softmax(tanh⁡(WQQ+WKK)V){\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\tanh(\mathbf {W} _{Q}\mathbf {Q} +\mathbf {W} _{K}\mathbf {K} )\mathbf {V} )}whereWQ{\displaystyle \mathbf {W} _{Q}}andWK{\displaystyle \mathbf {W} _{K}}are learnable weight matrices.[13] Attention(Q,K,V)=softmax(QWKT)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\mathbf {Q} \mathbf {W} \mathbf {K} ^{T})\mathbf {V} }whereW{\displaystyle \mathbf {W} }is a learnable weight matrix.[27] Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both encoder and decoder can use self-attention, but with subtle differences. For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixed lookup table. This gives a sequence of hidden vectorsh0,h1,…{\displaystyle h_{0},h_{1},\dots }. These can then be applied to a dot-product attention mechanism, to obtainh0′=Attention(h0WQ,HWK,HWV)h1′=Attention(h1WQ,HWK,HWV)⋯{\displaystyle {\begin{aligned}h_{0}'&=\mathrm {Attention} (h_{0}W^{Q},HW^{K},HW^{V})\\h_{1}'&=\mathrm {Attention} (h_{1}W^{Q},HW^{K},HW^{V})\\&\cdots \end{aligned}}}or more succinctly,H′=Attention(HWQ,HWK,HWV){\displaystyle H'=\mathrm {Attention} (HW^{Q},HW^{K},HW^{V})}. This can be applied repeatedly, to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called the "all-to-all attention", as the vector at every position can attend to every other. For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that have yet to be decoded. This can be solved by forcing the attention weightswij=0{\displaystyle w_{ij}=0}for alli<j{\displaystyle i<j}, called "causal masking". This attention mechanism is the "causally masked self-attention".
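A minimal sketch of such a self-attention layer, starting from a fixed embedding lookup and learned projection matrices, with an optional causal mask for the decoder case (the vocabulary, sizes, and weights are arbitrary toy values):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(H, W_q, W_k, W_v, causal=False):
    # H' = Attention(H W^Q, H W^K, H W^V); with causal=True the upper triangle
    # of the score matrix is masked so position i ignores positions j > i.
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if causal:
        scores = np.where(np.triu(np.ones_like(scores, dtype=bool), k=1), -np.inf, scores)
    return softmax(scores) @ V

# Toy "embedding layer": a fixed lookup table for a 10-word vocabulary, width 8.
rng = np.random.default_rng(0)
embedding = rng.normal(size=(10, 8))
tokens = [3, 1, 4, 1, 5]
H = embedding[tokens]                     # (5, 8) hidden vectors h_0, h_1, ...
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

H_enc = self_attention(H, W_q, W_k, W_v)               # all-to-all ("encoder self-attention")
H_dec = self_attention(H, W_q, W_k, W_v, causal=True)  # causally masked ("decoder self-attention")
print(H_enc.shape, H_dec.shape)  # (5, 8) (5, 8)
# Layers can be stacked by feeding H_enc into another self_attention call.
```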
https://en.wikipedia.org/wiki/Attention_mechanism
In computer science, merge sort (also commonly spelled as mergesort and as merge-sort[2]) is an efficient, general-purpose, and comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the relative order of equal elements is the same in the input and output. Merge sort is a divide-and-conquer algorithm that was invented by John von Neumann in 1945.[3]A detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine and von Neumann as early as 1948.[4] Conceptually, a merge sort works as follows: Example C-like code using indices for the top-down merge sort algorithm that recursively splits the list (called runs in this example) into sublists until sublist size is 1, then merges those sublists to produce a sorted list. The copy back step is avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy, which can be avoided too). As a simple example, consider an array with two elements. The elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom of the recursion level is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion. Sorting the entire array is accomplished by TopDownMergeSort(A, B, length(A)). Example C-like code using indices for the bottom-up merge sort algorithm which treats the list as an array of n sublists (called runs in this example) of size 1, and iteratively merges sub-lists back and forth between two buffers: Pseudocode for the top-down merge sort algorithm which recursively divides the input list into smaller sublists until the sublists are trivially sorted, and then merges the sublists while returning up the call chain. In this example, the merge function merges the left and right sublists. Pseudocode for the bottom-up merge sort algorithm which uses a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^i or nil. node is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example: it merges two already sorted lists, and handles empty lists. In this case, merge() would use node for its input parameters and return value. Haskell-like pseudocode, showing how merge sort can be implemented in such a language using constructs and ideas from functional programming. In sorting n objects, merge sort has an average and worst-case performance of O(n log n) comparisons. If the running time (number of comparisons) of merge sort for a list of length n is T(n), then the recurrence relation T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists).[5]The closed form follows from the master theorem for divide-and-conquer recurrences. The number of comparisons made by merge sort in the worst case is given by the sorting numbers.
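A runnable Python version of the top-down scheme described above follows; it is an illustrative sketch rather than the indexed C-like code referred to in the text, and it allocates new lists rather than merging into a work array.

```python
def merge(left, right):
    # Merge two sorted lists; taking from `left` on ties keeps the sort stable.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    # Top-down merge sort: split until runs of size <= 1, then merge back up.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Each level of recursion does O(n) merging work across about log2 n levels, matching the recurrence T(n) = 2T(n/2) + n above.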
These numbers are equal to or slightly smaller than (n⌈lgn⌉ − 2⌈lgn⌉+ 1), which is between (nlgn−n+ 1) and (nlgn+n+ O(lgn)).[6]Merge sort's best case takes about half as many iterations as its worst case.[7] For largenand a randomly ordered input list, merge sort's expected (average) number of comparisons approachesα·nfewer than the worst case, whereα=−1+∑k=0∞12k+1≈0.2645.{\displaystyle \alpha =-1+\sum _{k=0}^{\infty }{\frac {1}{2^{k}+1}}\approx 0.2645.} In theworstcase, merge sort uses approximately 39% fewer comparisons thanquicksortdoes in itsaveragecase, and in terms of moves, merge sort's worst case complexity isO(nlogn) - the same complexity as quicksort's best case.[7] Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such asLisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort. Merge sort's most common implementation does not sort in place;[8]therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for variations that need onlyn/2 extra spaces). A natural merge sort is similar to a bottom-up merge sort except that any naturally occurringruns(sorted sequences) in the input are exploited. Both monotonic and bitonic (alternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being convenient data structures (used asFIFO queuesorLIFO stacks).[9]In the bottom-up merge sort, the starting point assumes each run is one item long. In practice, random input data will have many short runs that just happen to be sorted. In the typical case, the natural merge sort may not need as many passes because there are fewer runs to merge. In the best case, the input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component ofTimsort. Example: Formally, the natural merge sort is said to beRuns-optimal, whereRuns(L){\displaystyle {\mathtt {Runs}}(L)}is the number of runs inL{\displaystyle L}, minus one. Tournament replacement selection sortsare used to gather the initial runs for external sorting algorithms. Instead of merging two blocks at a time, a ping-pong merge merges four blocks at a time. The four sorted blocks are merged simultaneously to auxiliary space into two sorted blocks, then the two sorted blocks are merged back to main memory. Doing so omits the copy operation and reduces the total number of moves by half. An early public domain implementation of a four-at-once merge was by WikiSort in 2014, the method was later that year described as an optimization forpatience sortingand named a ping-pong merge.[10][11]Quadsort implemented the method in 2020 and named it a quad merge.[12] One drawback of merge sort, when implemented on arrays, is itsO(n)working memory requirement. Several methods to reduce memory or make merge sort fullyin-placehave been suggested: Anexternalmerge sort is practical to run usingdiskortapedrives when the data to be sorted is too large to fit intomemory.External sortingexplains how merge sort is implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). 
A minimal implementation can get by with just two record buffers and a few program variables. Naming the four tape drives as A, B, C, D, with the original data on A, and using only two record buffers, the algorithm is similar tothe bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows: Instead of starting with very short runs, usually ahybrid algorithmis used, where the initial pass will read many records into memory, do an internal sort to create a long run, and then distribute those long runs onto the output set. The step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. The internal sort is often large because it has such a benefit. In fact, there are techniques that can make the initial runs longer than the available internal memory. One of them, the Knuth's 'snowplow' (based on abinary min-heap), generates runs twice as long (on average) as a size of memory used.[18] With some overhead, the above algorithm can be modified to use three tapes.O(nlogn) running time can also be achieved using twoqueues, or astackand a queue, or three stacks. In the other direction, usingk> two tapes (andO(k) items in memory), we can reduce the number of tape operations inO(logk) times by using ak/2-way merge. A more sophisticated merge sort that optimizes tape (and disk) drive usage is thepolyphase merge sort. On modern computers,locality of referencecan be of paramount importance insoftware optimization, because multilevelmemory hierarchiesare used.Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, thetiled merge sortalgorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such asinsertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance[example needed]on machines that benefit from cache optimization. (LaMarca & Ladner 1997) Merge sort parallelizes well due to the use of thedivide-and-conquermethod. Several different parallel variants of the algorithm have been developed over the years. Some parallel merge sort algorithms are strongly related to the sequential top-down merge algorithm while others have a different general structure and use theK-way mergemethod. The sequential merge sort procedure can be described in two phases, the divide phase and the merge phase. The first consists of many recursive calls that repeatedly perform the same division process until the subsequences are trivially sorted (containing one or no element). An intuitive approach is the parallelization of those recursive calls.[19]Following pseudocode describes the merge sort with parallel recursion using thefork and joinkeywords: This algorithm is the trivial modification of the sequential version and does not parallelize well. Therefore, its speedup is not very impressive. It has aspanofΘ(n){\displaystyle \Theta (n)}, which is only an improvement ofΘ(log⁡n){\displaystyle \Theta (\log n)}compared to the sequential version (seeIntroduction to Algorithms). This is mainly due to the sequential merge method, as it is the bottleneck of the parallel executions. 
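A structural Python sketch of this fork-and-join scheme follows. It is illustrative only: CPython threads do not execute the two halves truly in parallel because of the global interpreter lock, and the sequential merge remains the bottleneck just described.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    # Sequential two-way merge -- the bottleneck of this naive parallelization.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def merge_sort_seq(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge(merge_sort_seq(a[:mid]), merge_sort_seq(a[mid:]))

def merge_sort_par(a, pool, depth=2):
    # "Fork" the two recursive calls while depth allows, then "join" and merge.
    if len(a) <= 1 or depth == 0:
        return merge_sort_seq(a)
    mid = len(a) // 2
    left_future = pool.submit(merge_sort_par, a[:mid], pool, depth - 1)  # fork
    right = merge_sort_par(a[mid:], pool, depth - 1)
    return merge(left_future.result(), right)                            # join

data = [random.randint(0, 99) for _ in range(32)]
with ThreadPoolExecutor(max_workers=4) as pool:
    result = merge_sort_par(data, pool)
print(result == sorted(data))  # True
```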
Better parallelism can be achieved by using a parallelmerge algorithm.Cormen et al.present a binary variant that merges two sorted sub-sequences into one sorted output sequence.[19] In one of the sequences (the longer one if unequal length), the element of the middle index is selected. Its position in the other sequence is determined in such a way that this sequence would remain sorted if this element were inserted at this position. Thus, one knows how many other elements from both sequences are smaller and the position of the selected element in the output sequence can be calculated. For the partial sequences of the smaller and larger elements created in this way, the merge algorithm is again executed in parallel until the base case of the recursion is reached. The following pseudocode shows the modified parallel merge sort method using the parallel merge algorithm (adopted from Cormen et al.). In order to analyze arecurrence relationfor the worst case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining T∞sort(n)=T∞sort(n2)+T∞merge(n)=T∞sort(n2)+Θ(log⁡(n)2).{\displaystyle T_{\infty }^{\text{sort}}(n)=T_{\infty }^{\text{sort}}\left({\frac {n}{2}}\right)+T_{\infty }^{\text{merge}}(n)=T_{\infty }^{\text{sort}}\left({\frac {n}{2}}\right)+\Theta \left(\log(n)^{2}\right).} For detailed information about the complexity of the parallel merge procedure, seeMerge algorithm. The solution of this recurrence is given by T∞sort=Θ(log⁡(n)3).{\displaystyle T_{\infty }^{\text{sort}}=\Theta \left(\log(n)^{3}\right).} This parallel merge algorithm reaches a parallelism ofΘ(n(log⁡n)2){\textstyle \Theta \left({\frac {n}{(\log n)^{2}}}\right)}, which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such asinsertion sort, and a fast sequential merge as a base case for merging small arrays.[20] It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use aK-way mergemethod, a generalization of binary merge, in whichk{\displaystyle k}sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on aPRAM.[21][22] Given an unsorted sequence ofn{\displaystyle n}elements, the goal is to sort the sequence withp{\displaystyle p}availableprocessors. These elements are distributed equally among all processors and sorted locally using a sequentialSorting algorithm. Hence, the sequence consists of sorted sequencesS1,...,Sp{\displaystyle S_{1},...,S_{p}}of length⌈np⌉{\textstyle \lceil {\frac {n}{p}}\rceil }. For simplification letn{\displaystyle n}be a multiple ofp{\displaystyle p}, so that|Si|=np{\textstyle \left\vert S_{i}\right\vert ={\frac {n}{p}}}fori=1,...,p{\displaystyle i=1,...,p}. These sequences will be used to perform a multisequence selection/splitter selection. Forj=1,...,p{\displaystyle j=1,...,p}, the algorithm determines splitter elementsvj{\displaystyle v_{j}}with global rankk=jnp{\textstyle k=j{\frac {n}{p}}}. 
Then the corresponding positions ofv1,...,vp{\displaystyle v_{1},...,v_{p}}in each sequenceSi{\displaystyle S_{i}}are determined withbinary searchand thus theSi{\displaystyle S_{i}}are further partitioned intop{\displaystyle p}subsequencesSi,1,...,Si,p{\displaystyle S_{i,1},...,S_{i,p}}withSi,j:={x∈Si|rank(vj−1)<rank(x)≤rank(vj)}{\textstyle S_{i,j}:=\{x\in S_{i}|rank(v_{j-1})<rank(x)\leq rank(v_{j})\}}. Furthermore, the elements ofS1,i,...,Sp,i{\displaystyle S_{1,i},...,S_{p,i}}are assigned to processori{\displaystyle i}, means all elements between rank(i−1)np{\textstyle (i-1){\frac {n}{p}}}and rankinp{\textstyle i{\frac {n}{p}}}, which are distributed over allSi{\displaystyle S_{i}}. Thus, each processor receives a sequence of sorted sequences. The fact that the rankk{\displaystyle k}of the splitter elementsvi{\displaystyle v_{i}}was chosen globally, provides two important properties: On the one hand,k{\displaystyle k}was chosen so that each processor can still operate onn/p{\textstyle n/p}elements after assignment. The algorithm is perfectlyload-balanced. On the other hand, all elements on processori{\displaystyle i}are less than or equal to all elements on processori+1{\displaystyle i+1}. Hence, each processor performs thep-way mergelocally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no furtherp-way-merge has to be performed, the results only have to be put together in the order of the processor number. In its simplest form, givenp{\displaystyle p}sorted sequencesS1,...,Sp{\displaystyle S_{1},...,S_{p}}distributed evenly onp{\displaystyle p}processors and a rankk{\displaystyle k}, the task is to find an elementx{\displaystyle x}with a global rankk{\displaystyle k}in the union of the sequences. Hence, this can be used to divide eachSi{\displaystyle S_{i}}in two parts at a splitter indexli{\displaystyle l_{i}}, where the lower part contains only elements which are smaller thanx{\displaystyle x}, while the elements bigger thanx{\displaystyle x}are located in the upper part. The presented sequential algorithm returns the indices of the splits in each sequence, e.g. the indicesli{\displaystyle l_{i}}in sequencesSi{\displaystyle S_{i}}such thatSi[li]{\displaystyle S_{i}[l_{i}]}has a global rank less thank{\displaystyle k}andrank(Si[li+1])≥k{\displaystyle \mathrm {rank} \left(S_{i}[l_{i}+1]\right)\geq k}.[23] For the complexity analysis thePRAMmodel is chosen. If the data is evenly distributed over allp{\displaystyle p}, the p-fold execution of thebinarySearchmethod has a running time ofO(plog⁡(n/p)){\displaystyle {\mathcal {O}}\left(p\log \left(n/p\right)\right)}. The expected recursion depth isO(log⁡(∑i|Si|))=O(log⁡(n)){\displaystyle {\mathcal {O}}\left(\log \left(\textstyle \sum _{i}|S_{i}|\right)\right)={\mathcal {O}}(\log(n))}as in the ordinaryQuickselect. Thus the overall expected running time isO(plog⁡(n/p)log⁡(n)){\displaystyle {\mathcal {O}}\left(p\log(n/p)\log(n)\right)}. Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rankinp{\textstyle i{\frac {n}{p}}}fori=1,..,p{\displaystyle i=1,..,p}are found simultaneously. These splitter elements can then be used to partition each sequence inp{\displaystyle p}parts, with the same total running time ofO(plog⁡(n/p)log⁡(n)){\displaystyle {\mathcal {O}}\left(p\,\log(n/p)\log(n)\right)}. Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. 
We assume that there is a barrier synchronization before and after the multisequence selection such that every processor can determine the splitting elements and the sequence partition properly. Firstly, each processor sorts the assignedn/p{\displaystyle n/p}elements locally using a sorting algorithm with complexityO(n/plog⁡(n/p)){\displaystyle {\mathcal {O}}\left(n/p\;\log(n/p)\right)}. After that, the splitter elements have to be calculated in timeO(plog⁡(n/p)log⁡(n)){\displaystyle {\mathcal {O}}\left(p\,\log(n/p)\log(n)\right)}. Finally, each group ofp{\displaystyle p}splits have to be merged in parallel by each processor with a running time ofO(log⁡(p)n/p){\displaystyle {\mathcal {O}}(\log(p)\;n/p)}using a sequentialp-way merge algorithm. Thus, the overall running time is given by O(nplog⁡(np)+plog⁡(np)log⁡(n)+nplog⁡(p)){\displaystyle {\mathcal {O}}\left({\frac {n}{p}}\log \left({\frac {n}{p}}\right)+p\log \left({\frac {n}{p}}\right)\log(n)+{\frac {n}{p}}\log(p)\right)}. The multiway merge sort algorithm is very scalable through its high parallelization capability, which allows the use of many processors. This makes the algorithm a viable candidate for sorting large amounts of data, such as those processed incomputer clusters. Also, since in such systems memory is usually not a limiting resource, the disadvantage of space complexity of merge sort is negligible. However, other factors become important in such systems, which are not taken into account when modelling on aPRAM. Here, the following aspects need to be considered:Memory hierarchy, when the data does not fit into the processors cache, or the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory. Sanderset al. have presented in their paper abulk synchronous parallelalgorithm for multilevel multiway mergesort, which dividesp{\displaystyle p}processors intor{\displaystyle r}groups of sizep′{\displaystyle p'}. All processors sort locally first. Unlike single level multiway mergesort, these sequences are then partitioned intor{\displaystyle r}parts and assigned to the appropriate processor groups. These steps are repeated recursively in those groups. This reduces communication and especially avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g.racks,clusters,...).[22] Merge sort was one of the first sorting algorithms where optimal speed up was achieved, with Richard Cole using a clever subsampling algorithm to ensureO(1) merge.[24]Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelizedquicksort(and a relatedradix sort) that can operate inO(logn) time on aCRCWparallel random-access machine(PRAM) withnprocessors by performing partitioning implicitly.[25]Powers further shows that a pipelined version of Batcher'sBitonic MergesortatO((logn)2) time on a butterflysorting networkis in practice actually faster than hisO(logn) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.[26] Althoughheapsorthas the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n). 
On typical modern architectures, efficient quicksort implementations generally outperform merge sort for sorting RAM-based arrays.[27]Quicksort is preferred when the amount of data to be sorted is smaller, since its O(log n) space complexity helps it utilize cache locality better than merge sort (with space complexity O(n)).[27]On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible. As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl).[28]In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes, and for implementation efficiency switch to insertion sort when fewer than seven array elements are being sorted.[29]The Linux kernel uses merge sort for its linked lists.[30] Timsort, a tuned hybrid of merge sort and insertion sort, is used in a variety of software platforms and languages, including the Java and Android platforms,[31]and has been used by Python since version 2.3; since version 3.11, Timsort's merge policy was updated to Powersort.[32]
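As a sketch of the linked-list case, the following sorts a minimal singly linked list by splitting with a slow/fast pointer and merging by relinking nodes, so elements are never copied. Note that this recursive version uses logarithmic stack space, whereas the Θ(1)-extra-space variants mentioned above are written bottom-up; the Node class and list construction are illustrative, not from any library.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def merge(a, b):
    # Merge two sorted lists by relinking; taking from `a` on ties keeps stability.
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    return dummy.next

def merge_sort_list(head):
    # Recursive merge sort on a singly linked list; nodes are relinked, not copied.
    if head is None or head.next is None:
        return head
    # Split: advance `fast` two steps per one step of `slow` to find the middle.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    return merge(merge_sort_list(head), merge_sort_list(mid))

# Build 5 -> 2 -> 4 -> 1 -> 3, sort it, and print the values.
head = None
for v in [3, 1, 4, 2, 5]:
    head = Node(v, head)
node = merge_sort_list(head)
while node:
    print(node.value, end=" ")  # 1 2 3 4 5
    node = node.next
print()
```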
https://en.wikipedia.org/wiki/Merge_sort
Instatisticsandnatural language processing, atopic modelis a type ofstatistical modelfor discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear approximately equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words. The "topics" produced by topic modeling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework, which allows examining a set of documents and discovering, based on the statistics of the words in each, what the topics might be and what each document's balance of topics is. Topic models are also referred to as probabilistic topic models, which refers to statistical algorithms for discovering the latent semantic structures of an extensive text body. In the age of information, the amount of the written material we encounter each day is simply beyond our processing capacity. Topic models can help to organize and offer insights for us to understand large collections of unstructured text bodies. Originally developed as a text-mining tool, topic models have been used to detect instructive structures in data such as genetic information, images, and networks. They also have applications in other fields such asbioinformatics[1]andcomputer vision.[2] An early topic model was described by Papadimitriou, Raghavan, Tamaki and Vempala in 1998.[3]Another one, calledprobabilistic latent semantic analysis(PLSA), was created by Thomas Hofmann in 1999.[4]Latent Dirichlet allocation(LDA), perhaps the most common topic model currently in use, is a generalization of PLSA. Developed byDavid Blei,Andrew Ng, andMichael I. Jordanin 2002, LDA introduces sparseDirichlet prior distributionsover document-topic and topic-word distributions, encoding the intuition that documents cover a small number of topics and that topics often use a small number of words.[5]Other topic models are generally extensions on LDA, such asPachinko allocation, which improves on LDA by modeling correlations between topics in addition to the word correlations which constitute topics. Hierarchical latent tree analysis (HLTA) is an alternative to LDA, which models word co-occurrence using a tree of latent variables and the states of the latent variables, which correspond to soft clusters of documents, are interpreted as topics. Approaches for temporal information include Block and Newman's determination of the temporal dynamics of topics in thePennsylvania Gazetteduring 1728–1800.Griffiths& Steyvers used topic modeling on abstracts from the journalPNASto identify topics that rose or fell in popularity from 1991 to 2001 whereas Lamba & Madhusushan[6]used topic modeling on full-text research articles retrieved from DJLIT journal from 1981 to 2018. In the field of library and information science, Lamba & Madhusudhan[6][7][8][9]applied topic modeling on different Indian resources like journal articles and electronic theses and resources (ETDs). 
Nelson[10]has been analyzing change in topics over time in the Richmond Times-Dispatch to understand social and political changes and continuities in Richmond during the American Civil War. Yang, Torget and Mihalcea applied topic modeling methods to newspapers from 1829 to 2008. Mimno used topic modelling with 24 journals on classical philology and archaeology spanning 150 years to look at how topics in the journals change over time and how the journals become more different or similar over time. Yin et al.[11]introduced a topic model for geographically distributed documents, where document positions are explained by latent regions which are detected during inference. Chang and Blei[12]included network information between linked documents in the relational topic model, to model the links between websites. The author-topic model by Rosen-Zvi et al.[13]models the topics associated with authors of documents to improve the topic detection for documents with authorship information. HLTA was applied to a collection of recent research papers published at major AI and Machine Learning venues. The resulting model is called The AI Tree. The resulting topics are used to index the papers at aipano.cse.ust.hk to help researchers track research trends and identify papers to read, and to help conference organizers and journal editors identify reviewers for submissions. To improve the qualitative aspects and coherency of generated topics, some researchers have explored the efficacy of "coherence scores", or otherwise how computer-extracted clusters (i.e. topics) align with a human benchmark.[14][15]Coherence scores are metrics for optimising the number of topics to extract from a document corpus.[16] In practice, researchers attempt to fit appropriate model parameters to the data corpus using one of several heuristics for maximum likelihood fit. A survey by D. Blei describes this suite of algorithms.[17]Several groups of researchers, starting with Papadimitriou et al.,[3]have attempted to design algorithms with provable guarantees. Assuming that the data were actually generated by the model in question, they try to design algorithms that probably find the model that was used to create the data. Techniques used here include singular value decomposition (SVD) and the method of moments. In 2012 an algorithm based upon non-negative matrix factorization (NMF) was introduced that also generalizes to topic models with correlations among topics.[18] In 2017, neural networks were leveraged in topic modeling to make inference faster,[19]an approach which has been extended to a weakly supervised version.[20] In 2018 a new approach to topic models was proposed: it is based on the stochastic block model.[21] With the recent development of LLMs, topic modeling has leveraged LLMs through contextual embeddings[22]and fine-tuning.[23] Topic models are also used in other contexts. For example, uses of topic models in biology and bioinformatics research have emerged.[24]Recently, topic models have been used to extract information from datasets of cancer genomic samples.[25]In this case topics are biological latent variables to be inferred. Topic models can be used to analyze continuous signals like music. For instance, they were used to quantify how musical styles change over time, and to identify the influence of specific artists on later music creation.[26]
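As an illustration of the NMF-based approach mentioned above, the following sketch factors a tiny toy corpus with scikit-learn, assuming that library is available; the corpus, the choice of two topics, and the parameter values are arbitrary examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

corpus = [
    "the dog chased the bone and the dog barked",
    "dogs love bones and love to bark",
    "the cat said meow and chased the mouse",
    "cats meow and cats chase mice",
]

# Document-term matrix (documents x words), weighted by tf-idf.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

# Factor X ~ W @ H: W gives each document's topic mixture,
# H gives each topic's word weights.
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)
H = model.components_

words = vectorizer.get_feature_names_out()
for k, topic in enumerate(H):
    top = [words[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
print(W.round(2))  # per-document topic proportions (unnormalized)
```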
https://en.wikipedia.org/wiki/Topic_model
Modularityis a measure of the structure ofnetworksorgraphswhich measures the strength of division of a network into modules (also called groups, clusters or communities). Networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules. Modularity is often used in optimization methods for detectingcommunity structurein networks. Biological networks, including animal brains, exhibit a high degree of modularity. However, modularity maximization is not statistically consistent, and finds communities in its own null model, i.e. fully random graphs, and therefore it cannot be used to find statistically significant community structures in empirical networks. Furthermore, it has been shown that modularity suffers a resolution limit and, therefore, it is unable to detect small communities. Many scientifically important problems can be represented and empirically studied using networks. For example, biological and social patterns, the World Wide Web, metabolic networks, food webs, neural networks and pathological networks are real world problems that can be mathematically represented and topologically studied to reveal some unexpected structural features.[1]Most of these networks possess a certain community structure that has substantial importance in building an understanding regarding the dynamics of the network. For instance, a closely connected social community will imply a faster rate of transmission of information or rumor among them than a loosely connected community. Thus, if a network is represented by a number of individual nodes connected by links which signify a certain degree of interaction between the nodes, communities are defined as groups of densely interconnected nodes that are only sparsely connected with the rest of the network. Hence, it may be imperative to identify the communities in networks since the communities may have quite different properties such as node degree, clustering coefficient, betweenness, centrality,[2]etc., from that of the average network. Modularity is one such measure, which when maximized, leads to the appearance of communities in a given network. Modularity is the fraction of the edges that fall within the given groups minus the expected fraction if edges were distributed at random. The value of the modularity for unweighted and undirected graphs lies in the range[−1/2,1]{\displaystyle [-1/2,1]}.[3]It is positive if the number of edges within groups exceeds the number expected on the basis of chance. For a given division of the network's vertices into some modules, modularity reflects the concentration of edges within modules compared with random distribution of links between all nodes regardless of modules. There are different methods for calculating modularity.[1]In the most common version of the concept, the randomization of the edges is done so as to preserve thedegreeof each vertex. Consider a graph withn{\displaystyle n}nodesandm{\displaystyle m}links (edges) such that the graph can be partitioned into two communities using a membership variables{\displaystyle s}. If a nodev{\displaystyle v}belongs to community 1,sv=1{\displaystyle s_{v}=1}, or ifv{\displaystyle v}belongs to community 2,sv=−1{\displaystyle s_{v}=-1}. 
Let the adjacency matrix for the network be represented by A{\displaystyle A}, where Avw=0{\displaystyle A_{vw}=0} means there is no edge (no interaction) between nodes v{\displaystyle v} and w{\displaystyle w} and Avw=1{\displaystyle A_{vw}=1} means there is an edge between the two. Also, for simplicity we consider an undirected network. Thus Avw=Awv{\displaystyle A_{vw}=A_{wv}}. (It is important to note that multiple edges may exist between two nodes, but here we assess the simplest case.) Modularity Q{\displaystyle Q} is then defined as the fraction of edges that fall within group 1 or 2, minus the expected number of edges within groups 1 and 2 for a random graph with the same node degree distribution as the given network. The expected number of edges shall be computed using the concept of a configuration model.[4] The configuration model is a randomized realization of a particular network. Given a network with n{\displaystyle n} nodes, where each node v{\displaystyle v} has a node degree kv{\displaystyle k_{v}}, the configuration model cuts each edge into two halves, and then each half edge, called a stub, is rewired randomly with any other stub in the network, even allowing self-loops (which occur when a stub is rewired to another stub from the same node) and multiple edges between the same two nodes. Thus, even though the node degree distribution of the graph remains intact, the configuration model results in a completely random network. Now consider two nodes v{\displaystyle v} and w{\displaystyle w}, with node degrees kv{\displaystyle k_{v}} and kw{\displaystyle k_{w}} respectively, from a randomly rewired network as described above. We calculate the expected number of full edges between these nodes. Let us consider each of the kv{\displaystyle k_{v}} stubs of node v{\displaystyle v} and create associated indicator variables Ii(v,w){\displaystyle I_{i}^{(v,w)}} for them, i=1,…,kv{\displaystyle i=1,\ldots ,k_{v}}, with Ii(v,w)=1{\displaystyle I_{i}^{(v,w)}=1} if the i{\displaystyle i}-th stub happens to connect to one of the kw{\displaystyle k_{w}} stubs of node w{\displaystyle w} in this particular random graph. If it does not, then Ii(v,w)=0{\displaystyle I_{i}^{(v,w)}=0}. Since the i{\displaystyle i}-th stub of node v{\displaystyle v} can connect to any of the 2m−1{\displaystyle 2m-1} remaining stubs with equal probability (while m{\displaystyle m} is the number of edges in the original graph), and since there are kw{\displaystyle k_{w}} stubs it can connect to associated with node w{\displaystyle w}, evidently {\displaystyle \operatorname {E} \left[I_{i}^{(v,w)}\right]={\frac {k_{w}}{2m-1}}.} The total number of full edges Jvw{\displaystyle J_{vw}} between v{\displaystyle v} and w{\displaystyle w} is just Jvw=∑i=1kvIi(v,w){\displaystyle J_{vw}=\sum _{i=1}^{k_{v}}I_{i}^{(v,w)}}, so the expected value of this quantity is {\displaystyle \operatorname {E} \left[J_{vw}\right]=\sum _{i=1}^{k_{v}}\operatorname {E} \left[I_{i}^{(v,w)}\right]={\frac {k_{v}k_{w}}{2m-1}}.} Many texts then make the following approximations, for random networks with a large number of edges. When m{\displaystyle m} is large, they drop the subtraction of 1{\displaystyle 1} in the denominator above and simply use the approximate expression kvkw2m{\displaystyle {\frac {k_{v}k_{w}}{2m}}} for the expected number of edges between two nodes. Additionally, in a large random network, the number of self-loops and multi-edges is vanishingly small.[5] Ignoring self-loops and multi-edges allows one to assume that there is at most one edge between any two nodes.
In that case, Jvw{\displaystyle J_{vw}} becomes a binary indicator variable, so its expected value is also the probability that it equals 1{\displaystyle 1}, which means one can approximate the probability of an edge existing between nodes v{\displaystyle v} and w{\displaystyle w} as kvkw2m{\displaystyle {\frac {k_{v}k_{w}}{2m}}}. Hence, the difference between the actual number of edges between node v{\displaystyle v} and w{\displaystyle w} and the expected number of edges between them is Avw−kvkw2m{\displaystyle A_{vw}-{\frac {k_{v}k_{w}}{2m}}} Summing over all node pairs gives the equation for modularity, Q{\displaystyle Q}:[1] {\displaystyle Q={\frac {1}{2m}}\sum _{vw}\left[A_{vw}-{\frac {k_{v}k_{w}}{2m}}\right]{\frac {s_{v}s_{w}+1}{2}}} (3) It is important to note that Eq. 3 holds for partitioning into two communities only. Hierarchical partitioning (i.e. partitioning into two communities, then the two sub-communities further partitioned into two smaller sub-communities only to maximize Q) is a possible approach to identify multiple communities in a network. Additionally, (3) can be generalized for partitioning a network into c communities:[6] {\displaystyle Q=\sum _{i=1}^{c}\left(e_{ii}-a_{i}^{2}\right)} where eij is the fraction of edges with one end vertex in community i and the other in community j: {\displaystyle e_{ij}={\frac {1}{2m}}\sum _{vw}A_{vw}\,\delta (c_{v},i)\,\delta (c_{w},j)} (with c_{v} denoting the community of node v), and ai is the fraction of ends of edges that are attached to vertices in community i: {\displaystyle a_{i}={\frac {k_{i}}{2m}}=\sum _{j}e_{ij}} As an example, consider an undirected network with 10 nodes and 12 edges. The communities in the graph are represented by the red, green and blue node clusters in Fig 1. The optimal community partitions are depicted in Fig 2. An alternative formulation of the modularity, useful particularly in spectral optimization algorithms, is as follows.[1] Define Svr{\displaystyle S_{vr}} to be 1{\displaystyle 1} if vertex v{\displaystyle v} belongs to group r{\displaystyle r} and 0{\displaystyle 0} otherwise. Then {\displaystyle Q={\frac {1}{2m}}\sum _{vw}\sum _{r}\left[A_{vw}-{\frac {k_{v}k_{w}}{2m}}\right]S_{vr}S_{wr}} and hence {\displaystyle Q={\frac {1}{2m}}\operatorname {Tr} \left(S^{\mathrm {T} }BS\right)} where S{\displaystyle S} is the (non-square) matrix having elements Svr{\displaystyle S_{vr}} and B{\displaystyle B} is the so-called modularity matrix, which has elements {\displaystyle B_{vw}=A_{vw}-{\frac {k_{v}k_{w}}{2m}}} All rows and columns of the modularity matrix sum to zero, which means that the modularity of an undivided network is also always 0{\displaystyle 0}. For networks divided into just two communities, one can alternatively define sv=±1{\displaystyle s_{v}=\pm 1} to indicate the community to which node v{\displaystyle v} belongs, which then leads to {\displaystyle Q={\frac {1}{4m}}\mathbf {s} ^{\mathrm {T} }B\mathbf {s} } where s{\displaystyle s} is the column vector with elements sv{\displaystyle s_{v}}.[1] This function has the same form as the Hamiltonian of an Ising spin glass, a connection that has been exploited to create simple computer algorithms, for instance using simulated annealing, to maximize the modularity. The general form of the modularity for arbitrary numbers of communities is equivalent to a Potts spin glass and similar algorithms can be developed for this case also.[7] Although the method of modularity maximization is motivated by computing a deviation from a null model, this deviation is not computed in a statistically consistent manner.[8] Because of this, the method notoriously finds high-scoring communities in its own null model[9] (the configuration model), which by definition cannot be statistically significant. Because of this, the method cannot be used to reliably obtain statistically significant community structure in empirical networks. Modularity compares the number of edges inside a cluster with the expected number of edges that one would find in the cluster if the network were a random network with the same number of nodes and where each node keeps its degree, but edges are otherwise randomly attached.
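As a numerical check on the formulas above, here is a minimal NumPy sketch that evaluates Q directly from an adjacency matrix using community labels (equivalent to the ±1 membership variable when there are two communities). The example graph, two triangles joined by a single edge, and its labels are invented for illustration.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q for an undirected, unweighted graph.

    A      : symmetric 0/1 adjacency matrix, shape (n, n)
    labels : length-n array of community labels
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                      # node degrees k_v
    two_m = A.sum()                        # 2m (each edge counted twice)
    expected = np.outer(k, k) / two_m      # null-model term k_v k_w / 2m
    same = np.equal.outer(labels, labels)  # 1 if v and w share a community
    return ((A - expected) * same).sum() / two_m

# Two triangles joined by one bridge edge: nodes 0-2 vs. nodes 3-5.
A = np.zeros((6, 6), dtype=int)
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[u, v] = A[v, u] = 1

labels = np.array([0, 0, 0, 1, 1, 1])
print(modularity(A, labels))  # about 0.357 for this toy graph
```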
This random null model implicitly assumes that each node can get attached to any other node of the network. This assumption is however unreasonable if the network is very large, as the horizon of a node includes a small part of the network, ignoring most of it. Moreover, this implies that the expected number of edges between two groups of nodes decreases if the size of the network increases. So, if a network is large enough, the expected number of edges between two groups of nodes in modularity's null model may be smaller than one. If this happens, a single edge between the two clusters would be interpreted by modularity as a sign of a strong correlation between the two clusters, and optimizing modularity would lead to the merging of the two clusters, independently of the clusters' features. So, even weakly interconnected complete graphs, which have the highest possible density of internal edges, and represent the best identifiable communities, would be merged by modularity optimization if the network were sufficiently large.[10]For this reason, optimizing modularity in large networks would fail to resolve small communities, even when they are well defined. This bias is inevitable for methods like modularity optimization, which rely on a global null model.[11] There are two main approaches which try to solve the resolution limit within the modularity context: the addition of a resistancerto every node, in the form of aself-loop, which increases (r>0) or decreases (r<0) the aversion of nodes to form communities;[12]or the addition of a parameterγ>0in front of the null-case term in the definition of modularity, which controls the relative importance between internal links of the communities and the null model.[7]Optimizing modularity for values of these parameters in their respective appropriate ranges, it is possible to recover the whole mesoscale of the network, from the macroscale in which all nodes belong to the same community, to the microscale in which every node forms its own community, hence the namemultiresolution methods. However, it has been shown that these methods have limitations when communities are very heterogeneous in size.[13] There are a couple of software tools available that are able to compute clusterings in graphs with good modularity.
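To illustrate the multiresolution idea numerically, the sketch below puts a resolution parameter γ in front of the null-model term, as described above, and compares a split and a merged partition of the same toy graph. This is a hand-rolled sketch with an invented example, not any particular library's implementation.

```python
import numpy as np

def modularity_gamma(A, labels, gamma=1.0):
    """Modularity with a resolution parameter gamma scaling the null-model term."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)
    two_m = A.sum()
    same = np.equal.outer(labels, labels)
    return ((A - gamma * np.outer(k, k) / two_m) * same).sum() / two_m

# Same toy graph as before: two triangles joined by a single bridge edge.
A = np.zeros((6, 6), dtype=int)
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[u, v] = A[v, u] = 1

split = np.array([0, 0, 0, 1, 1, 1])   # two triangles
merged = np.zeros(6, dtype=int)        # everything in one community
for gamma in (0.2, 1.0, 2.0):
    print(gamma, modularity_gamma(A, split, gamma),
          modularity_gamma(A, merged, gamma))
# At gamma = 1 the split into two triangles scores higher; once gamma is small
# enough the merged partition overtakes it, mimicking in miniature the tendency
# of plain modularity to merge small communities in large networks.
```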
https://en.wikipedia.org/wiki/Modularity_(networks)
Social network analysis(SNA) is the process of investigating social structures through the use ofnetworksandgraph theory.[1]It characterizes networked structures in terms ofnodes(individual actors, people, or things within the network) and theties,edges, orlinks(relationships or interactions) that connect them. Examples ofsocial structurescommonly visualized through social network analysis includesocial media networks,[2][3]memeproliferation,[4]information circulation,[5]friendship and acquaintance networks, business networks, knowledge networks,[6][7]difficult working relationships,[8]collaboration graphs,kinship,disease transmission, andsexual relationships.[9][10]These networks are often visualized throughsociogramsin which nodes are represented as points and ties are represented as lines. These visualizations provide a means of qualitatively assessing networks by varying the visual representation of their nodes and edges to reflect attributes of interest.[11] Social network analysis has emerged as a key technique in modernsociology. It has also gained significant popularity in the following:anthropology,biology,[12]demography,communication studies,[3][13]economics,geography,history,information science,organizational studies,[6][8]physics,[14]political science,[15]public health,[16][7]social psychology,development studies,sociolinguistics, andcomputer science,[17]education and distance education research,[18]and is now commonly available as a consumer tool (see thelist of SNA software).[19][20][21] Social network analysis has its theoretical roots in the work of early sociologists such asGeorg SimmelandÉmile Durkheim, who wrote about the importance of studying patterns of relationships that connect social actors. Social scientists have used the concept of "social networks" since early in the 20th century to connote complex sets of relationships between members of social systems at all scales, from interpersonal to international.[22] In the 1930sJacob MorenoandHelen Jenningsintroduced basic analytical methods.[22]In 1954,John Arundel Barnesstarted using the term systematically to denote patterns of ties, encompassing concepts traditionally used by the public and those used by social scientists: boundedgroups(e.g., tribes, families) and socialcategories(e.g., gender, ethnicity). Starting in the 1970s, scholars such asRonald Burt,Kathleen Carley,Mark Granovetter,David Krackhardt,Edward Laumann,Anatol Rapoport,Barry Wellman,Douglas R. White, andHarrison Whiteexpanded the use of systematic social network analysis.[23] Beginning in the late 1990s, social network analysis experienced a further resurgence with work by sociologists, political scientists, economists, computer scientists, and physicists such asDuncan J. Watts,Albert-László Barabási,Peter Bearman,Nicholas A. Christakis,James H. Fowler,Mark Newman,Matthew Jackson,Jon Kleinberg, and others, developing and applying new models and methods, prompted in part by the emergence of new data available about online social networks as well as "digital traces" regarding face-to-face networks.
Computational SNA has been extensively used in research on study-abroad second language acquisition.[24][25]Even in the study of literature, network analysis has been applied by Anheier, Gerhards and Romo,[26]Wouter De Nooy,[27]and Burgert Senekal.[28]Indeed, social network analysis has found applications in various academic disciplines as well as practical contexts such as counteringmoney launderingandterrorism.[citation needed] Size: The number of network members in a given network. Homophily: The extent to which actors form ties with similar versus dissimilar others. Similarity can be defined by gender, race, age, occupation, educational achievement, status, values or any other salient characteristic.[29]Homophily is also referred to asassortativity. Multiplexity: The number of content-forms contained in a tie.[30]For example, two people who are friends and also work together would have a multiplexity of 2.[31]Multiplexity has been associated with relationship strength and can also comprise overlap of positive and negative network ties.[8] Mutuality/Reciprocity: The extent to which two actors reciprocate each other's friendship or other interaction.[32] Network Closure: A measure of the completeness of relational triads. An individual's assumption of network closure (i.e. that their friends are also friends) is called transitivity. Transitivity is an outcome of the individual or situational trait ofNeed for Cognitive Closure.[33] Propinquity: The tendency for actors to have more ties with geographically close others. Bridge: An individual whose weak ties fill astructural hole, providing the only link between two individuals or clusters. It also includes the shortest route when a longer one is unfeasible due to a high risk of message distortion or delivery failure.[34] Centrality: Centrality refers to a group of metrics that aim to quantify the "importance" or "influence" (in a variety of senses) of a particular node (or group) within a network.[35][36][37][38]Examples of common methods of measuring "centrality" includebetweenness centrality,[39]closeness centrality,eigenvector centrality,alpha centrality, anddegree centrality.[40] Density: The proportion of direct ties in a network relative to the total number possible.[41][42] Distance: The minimum number of ties required to connect two particular actors, as popularized byStanley Milgram'ssmall world experimentand the idea of 'six degrees of separation'. Structural holes: The absence of ties between two parts of a network. Finding and exploiting a structural hole can give anentrepreneura competitive advantage. This concept was developed by sociologistRonald Burt, and is sometimes referred to as an alternate conception of social capital. Tie Strength: Defined by the linear combination of time, emotional intensity, intimacy and reciprocity (i.e. mutuality).[34]Strong ties are associated with homophily, propinquity and transitivity, while weak ties are associated with bridges. Groups are identified as 'cliques' if every individual is directly tied to every other individual, 'social circles' if there is less stringency of direct contact, which is imprecise, or asstructurally cohesiveblocks if precision is wanted.[43] Clustering coefficient: A measure of the likelihood that two associates of a node are associates. 
A higher clustering coefficient indicates a greater 'cliquishness'.[44] Cohesion: The degree to which actors are connected directly to each other by cohesive bonds. Structural cohesion refers to the minimum number of members who, if removed from a group, would disconnect the group.[45][46] Visual representation of social networks is important to understand the network data and convey the result of the analysis.[47] Numerous methods of visualization for data produced by social network analysis have been presented.[48][49][50][51] Many of the analytic software packages have modules for network visualization. The data is explored by displaying nodes and ties in various layouts and attributing colors, size, and other advanced properties to nodes. Visual representations of networks may be a powerful method for conveying complex information. Still, care should be taken in interpreting node and graph properties from visual displays alone, as they may misrepresent structural properties better captured through quantitative analyses.[52] Signed graphs can be used to illustrate good and bad relationships between humans. A positive edge between two nodes denotes a positive relationship (friendship, alliance, dating), and a negative edge denotes a negative relationship (hatred, anger). Signed social network graphs can be used to predict the future evolution of the graph. In signed social networks, there is the concept of "balanced" and "unbalanced" cycles. A balanced cycle is defined as a cycle where the product of all the signs is positive. According to balance theory, balanced graphs represent a group of people who are unlikely to change their opinions of the other people in the group. Unbalanced graphs represent a group of people who are very likely to change their opinions of the people in their group. For example, a group of 3 people (A, B, and C) where A and B have a positive relationship, B and C have a positive relationship, and yet C and A have a negative relationship, is an unbalanced cycle. This group is very likely to change into a balanced cycle, such as one where B only has a good relationship with A, and both A and B have a negative relationship with C. By using the concepts of balanced and unbalanced graphs, the evolution of a social network graph may be forecasted.[53] Different approaches to participatory network mapping have proven useful, especially when using social network analysis as a tool for facilitating change. Here, participants/interviewers provide network data by mapping the network (with pen and paper or digitally) during the data collection session. An example of a pen-and-paper network mapping approach, which also includes the collection of some actor attributes (perceived influence and goals of actors), is the *Net-map toolbox. One benefit of this approach is that it allows researchers to collect qualitative data and ask clarifying questions while the network data is collected.[54] Social Networking Potential (SNP) is a numeric coefficient, derived through algorithms[55][56] to represent both the size of an individual's social network and their ability to influence that network. SNP coefficients were first defined and used by Bob Gerstley in 2002. A closely related term is Alpha User, defined as a person with a high SNP. SNP coefficients have two primary functions. By calculating the SNP of respondents and by targeting high-SNP respondents, the strength and relevance of quantitative marketing research used to drive viral marketing strategies is enhanced.
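Several of the whole-network and node-level measures defined above can be computed directly with a graph library. The following is a minimal sketch assuming the Python networkx package, using its built-in Zachary karate club graph as example data; the graph and the choice of node 0 are purely illustrative.

```python
import networkx as nx

# Zachary's karate club: a classic small friendship network (34 nodes).
G = nx.karate_club_graph()

print("size (nodes):   ", G.number_of_nodes())
print("density:        ", nx.density(G))
print("avg clustering: ", nx.average_clustering(G))
print("distance 0-33:  ", nx.shortest_path_length(G, 0, 33))

# A few of the centrality notions mentioned above, shown for node 0.
print("degree centrality:     ", nx.degree_centrality(G)[0])
print("betweenness centrality:", nx.betweenness_centrality(G)[0])
print("closeness centrality:  ", nx.closeness_centrality(G)[0])
print("eigenvector centrality:", nx.eigenvector_centrality(G)[0])
```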
Variables used to calculate an individual's SNP include but are not limited to: participation in social networking activities, group memberships, leadership roles, recognition, publication/editing/contributing to non-electronic media, publication/editing/contributing to electronic media (websites, blogs), and frequency of past distribution of information within their network. The acronym "SNP" and some of the first algorithms developed to quantify an individual's social networking potential were described in the white paper "Advertising Research is Changing" (Gerstley, 2003). See Viral Marketing.[57] The first book[58] to discuss the commercial use of Alpha Users among mobile telecoms audiences was 3G Marketing by Ahonen, Kasper and Melkko in 2004. The first book to discuss Alpha Users more generally in the context of social marketing intelligence was Communities Dominate Brands by Ahonen & Moore in 2005. In 2012, Nicola Greco (UCL) presented at TEDx the Social Networking Potential as a parallel to the potential energy that users generate and that companies should use, stating that "SNP is the new asset that every company should aim to have".[59] Social network analysis is used extensively in a wide range of applications and disciplines. Some common network analysis applications include data aggregation and mining, network propagation modeling, network modeling and sampling, user attribute and behavior analysis, community-maintained resource support, location-based interaction analysis, social sharing and filtering, recommender systems development, and link prediction and entity resolution.[60] In the private sector, businesses use social network analysis to support activities such as customer interaction and analysis, information system development analysis,[61] marketing, and business intelligence needs (see social media analytics). Some public sector uses include development of leader engagement strategies, analysis of individual and group engagement and media use, and community-based problem solving. Large numbers of researchers worldwide examine the social networks of children and adolescents. In questionnaires, they list all classmates, students in the same grade, or schoolmates, asking: "Who are your best friends?". Students may sometimes nominate as many peers as they wish; other times, the number of nominations is limited. Social network researchers have investigated similarities in friendship networks. The similarity between friends was established as far back as classical antiquity.[62] Resemblance is an important basis for the survival of friendships. Similarity in characteristics, attitudes, or behaviors means that friends understand each other more quickly, have common interests to talk about, know better where they stand with each other, and have more trust in each other.[63] As a result, such relationships are more stable and valuable. Moreover, looking more alike makes young people more confident and strengthens them in developing their identity.[64] Similarity in behavior can result from two processes: selection and influence.
These two processes can be distinguished using longitudinal social network analysis in the R package SIENA (Simulation Investigation for Empirical Network Analyses), developed by Tom Snijders and colleagues.[65] Longitudinal social network analysis became mainstream after the publication of a special issue of the Journal of Research on Adolescence in 2013, edited by René Veenstra and containing 15 empirical papers.[66] Social network analysis is also used in intelligence, counter-intelligence and law enforcement activities. This technique allows the analysts to map covert organizations such as an espionage ring, an organized crime family or a street gang. The National Security Agency (NSA) uses its electronic surveillance programs to generate the data needed to perform this type of analysis on terrorist cells and other networks deemed relevant to national security. The NSA looks up to three nodes deep during this network analysis.[67] After the initial mapping of the social network is complete, analysis is performed to determine the structure of the network and determine, for example, the leaders within the network.[68] This allows military or law enforcement assets to launch capture-or-kill decapitation attacks on the high-value targets in leadership positions to disrupt the functioning of the network. The NSA has been performing social network analysis on call detail records (CDRs), also known as metadata, since shortly after the September 11 attacks.[69][70] Large textual corpora can be turned into networks and then analyzed using social network analysis. In these networks, the nodes are social actors, and the links are actions. The extraction of these networks can be automated by using parsers. The resulting networks, which can contain thousands of nodes, are then analyzed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as the robustness or structural stability of the overall network or the centrality of certain nodes.[71] This automates the approach introduced by Quantitative Narrative Analysis,[72] whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.[73] In other approaches, textual analysis is carried out considering the network of words co-occurring in a text. In these networks, nodes are words and links among them are weighted based on their frequency of co-occurrence (within a specific maximum range). Social network analysis has also been applied to understanding online behavior by individuals, organizations, and between websites.[17] Hyperlink analysis can be used to analyze the connections between websites or webpages to examine how information flows as individuals navigate the web.[74] The connections between organizations have been analyzed via hyperlink analysis to examine which organizations are within an issue community.[75] Another concept that has emerged from this connection between social network theory and the Internet is the concept of netocracy, where several authors have emerged studying the correlation between the extended use of online social networks, and changes in social power dynamics.[76] Social network analysis has been applied to social media as a tool to understand behavior between individuals or organizations through their linkages on social media websites such as Twitter and Facebook.[77] One of the most current applications of SNA is to the study of computer-supported collaborative learning (CSCL).
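As a rough sketch of the word co-occurrence networks just described, the code below builds such a graph from a toy text; the tokenization, window size, and weighting are illustrative choices rather than a standard from the literature, and the networkx package is assumed.

```python
import networkx as nx
from collections import Counter

def cooccurrence_network(tokens, window=3):
    """Nodes are words; edge weights count how often two words occur
    within `window` positions of each other."""
    weights = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if v != w:
                weights[tuple(sorted((w, v)))] += 1
    G = nx.Graph()
    for (a, b), count in weights.items():
        G.add_edge(a, b, weight=count)
    return G

text = ("social network analysis maps actors and their ties "
        "network analysis of text maps words and their links").split()
G = cooccurrence_network(text, window=3)

# Most strongly connected words by weighted degree.
print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])[:5])
```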
When applied to CSCL, SNA is used to help understand how learners collaborate in terms of amount, frequency, and length, as well as the quality, topic, and strategies of communication.[78]Additionally, SNA can focus on specific aspects of the network connection, or the entire network as a whole. It uses graphical representations, written representations, and data representations to help examine the connections within a CSCL network.[78]When applying SNA to a CSCL environment the interactions of the participants are treated as a social network. The focus of the analysis is on the "connections" made among the participants – how they interact and communicate – as opposed to how each participant behaved on his or her own. There are several key terms associated with social network analysis research in computer-supported collaborative learning such as:density,centrality,indegree,outdegree, andsociogram. In-degree and out-degree variables are related to centrality. Researchers employ social network analysis in the study of computer-supported collaborative learning in part due to the unique capabilities it offers. This particular method allows the study of interaction patterns within anetworked learning communityand can help illustrate the extent of the participants' interactions with the other members of the group.[78]The graphics created using SNA tools provide visualizations of the connections among participants and the strategies used to communicate within the group. Some authors also suggest that SNA provides a method of easily analyzing changes in participatory patterns of members over time.[79] A number of research studies have applied SNA to CSCL across a variety of contexts. The findings include the correlation between a network's density and the teacher's presence,[78]a greater regard for the recommendations of "central" participants,[80]infrequency of cross-gender interaction in a network,[81]and the relatively small role played by an instructor in anasynchronous learningnetwork.[82] Although many studies have demonstrated the value of social network analysis within the computer-supported collaborative learning field,[78]researchers have suggested that SNA by itself is not enough for achieving a full understanding of CSCL. The complexity of the interaction processes and the myriad sources of data make it difficult for SNA to provide an in-depth analysis of CSCL.[83]Researchers indicate that SNA needs to be complemented with other methods of analysis to form a more accurate picture of collaborative learning experiences.[84] A number of research studies have combined other types of analysis with SNA in the study of CSCL. This can be referred to as a multi-method approach or datatriangulation, which will lead to an increase of evaluationreliabilityin CSCL studies.
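To make the density, in-degree, and out-degree terms used above concrete, here is a minimal sketch assuming networkx and an invented "who replied to whom" log from a course discussion forum; a sociogram would simply be a drawing of this directed graph.

```python
import networkx as nx

# Hypothetical reply log: (sender, recipient) pairs from a course forum.
replies = [("ana", "ben"), ("ana", "chen"), ("ben", "ana"),
           ("chen", "ana"), ("chen", "ben"), ("dee", "ana")]

G = nx.DiGraph()
G.add_edges_from(replies)

print("density:", nx.density(G))
print("out-degree (replies sent):    ", dict(G.out_degree()))
print("in-degree  (replies received):", dict(G.in_degree()))
```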
https://en.wikipedia.org/wiki/Social_network_analysis
Ontologyis the philosophical study ofbeing. It is traditionally understood as the subdiscipline ofmetaphysicsfocused on the most general features ofreality. As one of the most fundamental concepts, being encompasses all of reality and everyentitywithin it. To articulate the basic structure of being, ontology examines the commonalities among all things and investigates their classification into basic types, such as thecategoriesofparticularsanduniversals. Particulars are unique, non-repeatable entities, such as the personSocrates, whereas universals are general, repeatable entities, like the colorgreen. Another distinction exists betweenconcreteobjects existing inspace and time, such as a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality by employing categories such assubstance,property,relation,state of affairs, andevent. Ontologists disagree regarding which entities exist at the most basic level.Platonic realismasserts that universals have objective existence, whileconceptualismmaintains that universals exist only in the mind, andnominalismdenies their existence altogether. Similar disputes pertain tomathematical objects,unobservableobjects assumed by scientific theories, andmoral facts.Materialismposits that fundamentally onlymatterexists, whereasdualismasserts thatmindand matter are independent principles. According to some ontologists, objective answers to ontological questions do not exist, with perspectives shaped by differing linguistic practices. Ontology employs diversemethods of inquiry, including the analysis ofconceptsandexperience, the use ofintuitionsandthought experiments, and the integration of findings fromnatural science.Formal ontologyinvestigates the most abstract features of objects, whileApplied ontologyutilizes ontological theories and principles to study entities within specific domains. For example,social ontologyexamines basic concepts used in thesocial sciences. Applied ontology is particularly relevant toinformationandcomputer science, which developconceptual frameworks of limited domains. These frameworks facilitate the structured storage of information, such as in a college database tracking academic activities. Ontology is also pertinent to the fields oflogic,theology, andanthropology. Theorigins of ontologylie in theancient periodwith speculations about the nature of being and the source of the universe, including ancientIndian,Chinese, andGreek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name. Ontology is the study of being. It is the branch ofphilosophythat investigates the nature ofexistence, the features all entities have in common, and how they are divided into basiccategories of being.[1]It aims to discover the foundational building blocks of the world and characterizerealityas a whole in its most general aspects.[a]In this regard, ontology contrasts with individual sciences likebiologyandastronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena.[3]In some contexts, the termontologyrefers not to the general study of being but to a specific ontological theory within this discipline. 
It can also mean an inventory or aconceptual schemeof a particular domain, such asthe ontology of genes.[4]In this context, an inventory is a comprehensive list of elements.[5]A conceptual scheme is a framework of the key concepts and their relationships.[6] Ontology is closely related tometaphysicsbut the exact relation of these two disciplines is disputed. A traditionally influential characterization asserts that ontology is a subdiscipline of metaphysics. According to this view, metaphysics is the study of various aspects of fundamental reality, whereas ontology restricts itself to the most general features of reality.[7]This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, likeGod,mind, andvalue.[8]A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory.[9]Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being.[10]It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms.[11] The etymology of the wordontologytraces back to theancient Greektermsὄντως(ontos, meaning'being') andλογία(logia, meaning'study of'), literally,'the study of being'. The ancient Greeks did not use the termontology, which was coined by philosophers in the 17th century.[12] Being, orexistence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing all ofrealityand everyentitywithin it.[b]In its broadest sense, being only contrasts with non-being or nothingness.[14]It is controversial whether a more substantial analysis of the concept or meaning of being is possible.[15]One proposal understands being as a property possessed by every entity.[16]Critics argue that a thing without being cannot have properties. This means that properties presuppose being and cannot explain it.[17]Another suggestion is that all beings share a set of essential features. According to theEleatic principle, "power is the mark of being", meaning that only entities withcausalinfluence truly exist.[18]A controversial proposal by philosopherGeorge Berkeleysuggests that all existence is mental. He expressed thisimmaterialismin his slogan "to be is to be perceived".[19] Depending on the context, the termbeingis sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent, in contrast to becoming, which implies change.[20]Another contrast is between being, as what truly exists, andphenomena, as what appears to exist.[21]In some contexts, being expresses the fact that something is whileessenceexpresses itsqualitiesor what it is like.[22] Ontologists often divide being into fundamental classes or highest kinds, calledcategories of being.[23]Proposed categories include substance,property,relation,state of affairs, andevent.[24]They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category.[23]Some philosophers, likeAristotle, say that entities belonging to different categories exist in distinct ways. 
Others, likeJohn Duns Scotus, insist that there are no differences in the mode of being, meaning thateverything exists in the same way.[25]A related dispute is whether some entities have a higher degree of being than others, an idea already found inPlato's work. The more common view in contemporary philosophy is that a thing either exists or not with no intermediary states or degrees.[26] The relation between being and non-being is a frequent topic in ontology. Influential issues include the status ofnonexistent objects[27]andwhy there is something rather than nothing.[28] A central distinction in ontology is between particular and universal entities. Particulars, also calledindividuals, are unique, non-repeatable entities, likeSocrates, theTaj Mahal, andMars.[29]Universals are general, repeatable entities, like the colorgreen, the formcircularity, and the virtuecourage. Universals express aspects or features shared by particulars. For example,Mount EverestandMount Fujiare particulars characterized by the universalmountain.[30] Universals can take the form of properties or relations.[31][c]Properties describe the characteristics of things. They are features or qualities possessed by an entity.[33]Properties are often divided intoessential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it.[34]For instance,having three sidesis an essential property of a triangle, whereasbeing redis an accidental property.[35][d]Relations are ways how two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group.[37]For example,being a cityis a property whilebeing east ofis a relation, as in "Kathmanduis a city" and "Kathmandu is east ofNew Delhi".[38]Relations are often divided intointernal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation ofresemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations.[39] Substances[e]play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the propertygreenand acquires the propertyred.[41] States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individualSocratesand the propertywise. States of affairs that correspond to reality are calledfacts.[42][f]Facts aretruthmakersof statements, meaning that whether a statement is true or false depends on the underlying facts.[44] Events are particular entities[g]that occur in time, like thefall of the Berlin Walland thefirst moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet.[46]Complex events, also calledprocesses, are composed of a sequence of events.[47] Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. 
Abstract objects, by contrast, are outside space and time, such as the number 7 and the set ofintegers. They lack causal powers and do not undergo changes.[48][h]The existence and nature of abstract objects remain subjects of philosophical debate.[50] Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and the pages between them. Each of these components is itself constituted of smaller parts, likemolecules,atoms, andelementary particles.[51]Mereologystudies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to another view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another.[52]The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it.[53] Abstract objects are closely related to fictional andintentional objects. Fictional objects are entities invented in works offiction. They can be things, like theOne RinginJ. R. R. Tolkien's book seriesThe Lord of the Rings, and people, like theMonkey Kingin the novelJourney to the West.[54]Some philosophers say that fictional objects are abstract objects and exist outside space and time. Others understand them as artifacts that are created as the works of fiction are written.[55]Intentional objects are entities that exist withinmental states, likeperceptions,beliefs, anddesires. For example, if a person thinks about theLoch Ness Monsterthen the Loch Ness Monster is the intentional object of thisthought. People can think about existing and non-existing objects. This makes it difficult to assess theontological status of intentional objects.[56] Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity.[57]For instance, the surface of an apple cannot exist without the apple.[58]An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level.[59]It is closely related tometaphysical grounding, which is the relation between a ground and the facts it explains.[60] Anontological commitmentof a person or a theory is an entity that exists according to them.[61]For instance, a person whobelieves in Godhas an ontological commitment toGod.[62]Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, theQuine–Putnam indispensability argumentdefendsmathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers.[63] Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible thatextraterrestrial lifeexists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Dohais the capital ofQatar". 
Ontologists often use the concept ofpossible worldsto analyze possibility and necessity.[64]A possible world is a complete and consistent way how things could have been.[65]For example,Haruki Murakamiwas born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea,possible world semanticssays that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds.[66]The field ofmodal logicprovides a precise formalization of the concepts of possibility and necessity.[67] In ontology,identitymeans that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. This is also calledexact similarityandindiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother.[68]Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year".[69]The notion of identity also has a number of philosophical implications in terms of how it interacts with the aforementioned necessity and possibility. Most famously, Saul Kripke contended thatdiscovered identitiessuch as "Water is H2O" are necessarily true because "H2O" is what's known as arigid designator.[70] There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole.[71]Pure ontology contrasts withapplied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science.[72]It considers ontological problems in regard to specific entities such asmatter,mind,numbers,God, and cultural artifacts.[73] Social ontology, a major subfield of applied ontology, studies social kinds, likemoney,gender,society, andlanguage. It aims to determine the nature and essential features of these concepts while also examining their mode of existence.[74]According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars.[75]In the fields ofcomputer science,information science, andknowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way.[76]A related application ingeneticsisGene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases.[77] Formal ontologyis the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. 
Formal ontologists often rely on the tools offormal logicto express their findings in an abstract and general manner.[78][i]Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area.[80]Examples are ideal spatial beings in the area of geometry and living beings in the area of biology.[81] Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization.[82] Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion.[83] Metaontologystudies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists".[84]It is closely related tofundamental ontology, an approach developed by philosopherMartin Heideggerthat seeks to uncover the meaning of being.[85] The termrealismis used for various theories[j]that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there areobjectivefacts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true.[87]This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other.[88]According to philosopherRudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework.[89] In a more narrow sense, realism refers to the existence of certain types of entities.[90]Realists about universals say that universals have mind-independent existence. According toPlatonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universalredcould exist by itself even if there were no red objects in the world. Aristotelian realism, also calledmoderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them.Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world.Nominalistsdefend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects.[91] Mathematical realism, a closely related view in thephilosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. 
According to mathematical Platonism, this is the case because of the existence ofmathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible toempirical observation.[92]Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and gameformalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation.[93] Modal realismis the theory that in addition to the actual world, there are countlesspossible worldsas real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by ourcounterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects.[94] Scientific realistssay that the scientific description of the world is an accurate representation of reality.[k]It is of particular relevance in regard to things thatcannot be directly observedby humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature.Scientific anti-realismsays that scientific theories are not descriptions of reality butinstrumentsto predict observations and the outcomes of experiments.[96] Moral realistsclaim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right.Moral anti-realistseither claim that moral principles are subjective and differ between persons and cultures, a position known asmoral relativism, or outright deny the existence of moral facts, a view referred to asmoral nihilism.[97] Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class.[98]For example, some forms of nominalism state that only concrete particulars exist while some forms ofbundle theorystate that only properties exist.[99]Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything.[100] The closely related discussion betweenmonismanddualismis about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level.[101]Materialismis an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states.Idealiststake the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds.[102]Neutral monismoccupies a middle ground by saying that both mind and matter are derivative phenomena.[103]Dualists state that mind and matter exist as independent principles, either asdistinct substancesordifferent types of properties.[104]In a slightly different sense, monism contrasts withpluralismas a view not about the number of basic types but the number of entities. 
In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality.[l] Pluralism is more commonly accepted and says that several distinct entities exist.[106] The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances.[107] The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties.[108] Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality.[109] Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible.[110][m] According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change.[112] Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle.[113] Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level.[114][n] Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate.[116] Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.[117][o] In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion.[119] An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence.[120] Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding.
They are subdivided into four classes: quantity, quality, relation, and modality.[121]In more recent philosophy, theories of categories were developed byC. S. Peirce,Edmund Husserl,Samuel Alexander,Roderick Chisholm, andE. J. Lowe.[122] The dispute between constituent and relational ontologies[p]concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties.[124] Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities.[125]One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture.[126]Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.[127][q] The ontological theories ofendurantismandperdurantismaim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed oftemporal partsand, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves.[129] Differential ontology is apoststructuralistapproach interested in the relation between the concepts of identity anddifference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things.[130] Object-oriented ontologybelongs to the school ofspeculative realismand examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. 
It uses this idea to argue that objects exist independently of human thought and perception.[131] Methodsof ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied bymetaontology.[132] Conceptual analysisis a method to understand ontological concepts and clarify their meaning.[133]It proceeds by analyzing their component parts and thenecessary and sufficient conditionsunder which a concept applies to an entity.[134]This information can help ontologists decide whether a certain type of entity, such as numbers, exists.[135]Eidetic variationis a related method inphenomenologicalontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential.[136][r]Thetranscendentalmethod begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or whichconditionsare required for this entity to exist.[138] Another approach is based onintuitionsin the form of non-inferential impressions about the correctness of general principles.[139]These principles can be used as thefoundationon which an ontological system is built and expanded usingdeductive reasoning.[140]A further intuition-based method relies onthought experimentsto evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employingcounterfactual thinkingto assess the consequences of this situation.[141]For example, some ontologists examine the relation between mind and matter by imaginingcreatures identical to humans but without consciousness.[142] Naturalistic methodsrely on the insights of the natural sciences to determine what exists.[143]According to an influential approach byWillard Van Orman Quine, ontology can be conducted by analyzing[s]the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.[145] Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction.[146]The principle ofOckham's Razorsays that simple theories are preferable.[147]A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities.[148]Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations.[149]A further factor is how close a theory is tocommon sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.[150] In applied ontology,ontological engineeringis the process of creating and refining conceptual models of specific domains.[151]Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. 
Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in aformal languageto ensure precision and, in some cases, automaticcomputability. In the following review phase, the validity of the ontology is assessed using test data.[152]Various more specific instructions for how to carry out the different steps have been suggested. They include theCycmethod, Grüninger and Fox's methodology, and so-called METHONTOLOGY.[153]In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.[154] Ontology overlaps with many disciplines, includinglogic, the study ofcorrect reasoning.[155]Ontologists often employlogical systemsto express their insights, specifically in the field of formal ontology. Of particular interest to them is theexistential quantifier(∃{\displaystyle \exists }), which is used to express what exists. Infirst-order logic, for example, the formula∃xDog(x){\displaystyle \exists x{\text{Dog}}(x)}states that dogs exist.[156]Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being.[157]Doubts about the accuracy ofnatural languagehave led some ontologists to seek a newformal language, termedontologese, for a better representation of the fundamental structure of reality.[158] Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which buildsdatabasesto store this information and defines computational processes to automatically transform and use it.[160]For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name.[161]In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help ofupper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, likeSuggested Upper Merged OntologyandBasic Formal Ontology.[162] Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework.Protein Ontologyis a formal framework for the standardized representation ofprotein-related entities and their relationships.[163]Gene OntologyandSequence Ontologyserve a similar purpose in the field ofgenetics.[164]Environment Ontology is a knowledge representation focused onecosystemsand environmental processes.[165]Friend of a Friendprovides a conceptual framework to represent relations between people and their interests and activities.[166] The topic of ontology has received increased attention inanthropologysince the 1990s, sometimes termed the "ontological turn".[167]This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook ofIndigenous peopleand how it differs from a Western perspective.[168]As an example of this contrast, it has been argued that various indigenous communities ascribeintentionalityto non-human entities, like plants, forests, or rivers. 
This outlook is known asanimism[169]and is also found inNative Americanontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.[170] Ontology is closely related totheologyand its interest in theexistence of Godas an ultimate entity. Theontological argument, first proposed byAnselm of Canterbury, attempts to prove the existence of the divine. It definesGodas the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence.[171]Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming itontotheology.[172] The roots of ontology inancient philosophyare speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in theUpanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what senseultimate realityis one or many.[174]Samkhya, the firstorthodox school of Indian philosophy,[t]formulated anatheistdualist ontology based on the Upanishads, identifyingpure consciousnessandmatteras its two foundational principles.[176]The laterVaisheshikaschool[u]proposed a comprehensive system of categories.[178]Inancient China,Laozi's (6th century BCE)[v]Taoismexamines the underlying order of the universe, known asTao, and how this order is shaped by the interaction of two basic forces,yin and yang.[180]The philosophical movement ofXuanxueemerged in the 3rd century CE and explored the relation between being and non-being.[181] Starting in the 6th century BCE,Presocratic philosophersinancient Greeceaimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things.[182]Parmenides(c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being.[183]Inspired by Presocratic philosophy,Plato(427–347 BCE) developed histheory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms.[184]Aristotle(384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being.[185]The school ofNeoplatonismarose in the 3rd century CE and proposed an ineffable source of everything, calledthe One, which is more basic than being itself.[186] Theproblem of universalswas an influential topic in medieval ontology.Boethius(477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspiredPeter Abelard(1079–1142 CE), who proposed that universals exist only in the mind.[187]Thomas Aquinas(1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence andessence, between substance and accidents, and betweenmatter and form.[188]He also discussed thetranscendentals, which are the most general properties or modes of being.[189]John Duns Scotus(1266–1308) argued that all entities, including God,exist in the same wayand that each entity has a unique essence, calledhaecceity.[190]William of Ockham(c. 
1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known asOckham's razor.[191] InArabic-Persian philosophy,Avicenna(980–1037 CE) combined ontology withtheology. He identified God as a necessary being that is the source of everything else, which only has contingent existence.[193]In 8th-centuryIndian philosophy, the school ofAdvaita Vedantaemerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is anillusion.[194]Starting in the 13th century CE, theNavya-Nyāyaschool built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation.[195]9th-century China saw the emergence ofNeo-Confucianism, which developed the idea that a rational principle, known asli, is the ground of being and order of the cosmos.[196] René Descartes(1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact.[197]Rejecting Descartes's dualism,Baruch Spinoza(1632–1677) proposed a monist ontology according to which there is only a single entity that is identical toGod and nature.[198]Gottfried Wilhelm Leibniz(1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another.[199]John Locke(1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties.[200]Christian Wolff(1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry.[201]George Berkeley(1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.[202] Immanuel Kant(1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system ofpure concepts of understanding.[203]Influenced by Kant's philosophy,Georg Wilhelm Friedrich Hegel(1770–1831) linked ontology andlogic. 
He said that being and thought are identical and examined their foundational structures.[204]Arthur Schopenhauer(1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of ablind and irrational will.[205]Francis Herbert Bradley(1846–1924) saw absolute spirit as the ultimate and all-encompassing reality[206]while denying that there are any external relations.[207]In Indian philosophy,Swami Vivekananda(1863–1902) expanded on Advaita Vedanta, emphasizing the unity of all existence.[208]Sri Aurobindo(1872–1950) sought to understand the world as an evolutionary manifestation of a divine consciousness.[209] At the beginning of the 20th century,Edmund Husserl(1859–1938) developedphenomenologyand employed its method, the description ofexperience, to address ontological problems.[210]This idea inspired his studentMartin Heidegger(1889–1976) to clarify the meaning of being by exploring the mode of human existence.[211]Jean-Paul Sartreresponded to Heidegger's philosophy by examining the relation between being andnothingnessfrom the perspective of human existence, freedom, and consciousness.[212]Based on the phenomenological method,Nicolai Hartmann(1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.[213] Alexius Meinong(1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being.[214]Arguing against this theory,Bertrand Russell(1872–1970) formulated a fact ontology known aslogical atomism. This idea was further refined by the earlyLudwig Wittgenstein(1889–1951) and inspiredD. M. Armstrong's (1926–2014) ontology.[215]Alfred North Whitehead(1861–1947), by contrast, developed a process ontology.[216]Rudolf Carnap(1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework.[217]He had a strong influence onWillard Van Orman Quine(1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems.[218]Quine's studentDavid Lewis(1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world.[219]Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.[220]
https://en.wikipedia.org/wiki/Ontology#Ontology_in_information_science
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.[1] Decision trees are among the most popular machine learning algorithms because of their intelligibility and simplicity: they produce models that are easy to interpret and visualize, even for users without a statistical background.[2] In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making). Decision tree learning is a method commonly used in data mining.[3] The goal is to create a model that predicts the value of a target variable based on several input variables. A decision tree is a simple representation for classifying examples. For this section, assume that all of the input features have finite discrete domains, and there is a single target feature called the "classification". Each element of the domain of the classification is called a class. A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of that input feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes). A tree is built by splitting the source set, constituting the root node of the tree, into subsets—which constitute the successor children. The splitting is based on a set of splitting rules based on classification features.[4] This process is repeated on each derived subset in a recursive manner called recursive partitioning. The recursion is completed when the subset at a node has all the same values of the target variable, or when splitting no longer adds value to the predictions. This process of top-down induction of decision trees (TDIDT)[5] is an example of a greedy algorithm, and it is by far the most common strategy for learning decision trees from data.[6] In data mining, decision trees can also be described as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data. Data comes in records of the form {\displaystyle ({\textbf {x}},Y)=(x_{1},x_{2},x_{3},...,x_{k},Y)}. The dependent variable, Y{\displaystyle Y}, is the target variable that we are trying to understand, classify or generalize. The vector x{\displaystyle {\textbf {x}}} is composed of the features, x1,x2,x3{\displaystyle x_{1},x_{2},x_{3}} etc., that are used for that task. 
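As a concrete illustration of this formalism, the following is a minimal sketch of fitting and inspecting a small classification tree. It assumes scikit-learn is available; the tiny encoded data set is made up for illustration and is not a data set discussed in this article.

```python
# A minimal sketch of learning a classification tree, assuming scikit-learn is
# available; the toy data below is made up for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [outlook, windy] encoded as integers; target: play (0 = no, 1 = yes).
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]]
y = [1, 0, 1, 1, 1, 0]

clf = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
clf.fit(X, y)

print(export_text(clf, feature_names=["outlook", "windy"]))  # human-readable split rules
print(clf.predict([[2, 1]]))  # predicted class for a new observation
```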
Decision trees used in data mining are of two main types: classification trees, in which the predicted outcome is the class to which the data belongs, and regression trees, in which the predicted outcome is a real number. The term classification and regression tree (CART) analysis is an umbrella term used to refer to either of the above procedures, first introduced by Breiman et al. in 1984.[7] Trees used for regression and trees used for classification have some similarities – but also some differences, such as the procedure used to determine where to split.[7] Some techniques, often called ensemble methods, construct more than one decision tree, for example boosted trees, bootstrap-aggregated (bagged) trees and random forests, and rotation forests. A special case of a decision tree is a decision list,[14] which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity[citation needed], permit non-greedy learning methods[15] and monotonic constraints to be imposed.[16] Notable decision tree algorithms include ID3 (Iterative Dichotomiser 3), C4.5 (the successor of ID3), CART (classification and regression tree), CHAID (chi-square automatic interaction detection), and MARS. ID3 and CART were invented independently at around the same time (between 1970 and 1980)[citation needed], yet follow a similar approach for learning a decision tree from training tuples. It has also been proposed to leverage concepts of fuzzy set theory for the definition of a special version of decision tree, known as Fuzzy Decision Tree (FDT).[23] In this type of fuzzy classification, generally, an input vector x{\displaystyle {\textbf {x}}} is associated with multiple classes, each with a different confidence value. Boosted ensembles of FDTs have been recently investigated as well, and they have shown performances comparable to those of other very efficient fuzzy classifiers.[24] Algorithms for constructing decision trees usually work top-down, by choosing a variable at each step that best splits the set of items.[6] Different algorithms use different metrics for measuring "best". These generally measure the homogeneity of the target variable within the subsets. Some examples are given below. These metrics are applied to each candidate subset, and the resulting values are combined (e.g., averaged) to provide a measure of the quality of the split. Depending on the underlying metric, the performance of various heuristic algorithms for decision tree learning may vary significantly.[25] A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (see Confusion matrix). This metric, "Estimate of Positive Correctness", is defined below: EP=TP−FP{\displaystyle E_{P}=TP-FP} In this equation, the total false positives (FP) are subtracted from the total true positives (TP). The resulting number gives an estimate of how many positive examples the feature could correctly identify within the data, with higher numbers meaning that the feature could correctly classify more positive samples. Below is an example of how to use the metric when the full confusion matrix of a certain feature is given. Suppose the confusion matrix of a feature, Feature A, gives a TP value of 8 and an FP value of 2. When we plug these numbers into the equation we are able to calculate the estimate: Ep=TP−FP=8−2=6{\displaystyle E_{p}=TP-FP=8-2=6}. This means that using the estimate on this feature would have it receive a score of 6. However, it is worth noting that this number is only an estimate. 
For example, if two features both had an FP value of 2 while one of the features had a higher TP value, that feature would be ranked higher than the other because the resulting estimate when using the equation would give a higher value. This could lead to some inaccuracies when using the metric if some features have more positive samples than others. To combat this, one could use a more powerful metric known as Sensitivity that takes into account the proportions of the values from the confusion matrix to give the actual true positive rate (TPR). The difference between these metrics is shown in the example below: TPR=TP/(TP+FN)=8/(8+3)≈0.73{\displaystyle TPR=TP/(TP+FN)=8/(8+3)\approx 0.73} (Feature A) and TPR=TP/(TP+FN)=6/(6+2)=0.75{\displaystyle TPR=TP/(TP+FN)=6/(6+2)=0.75} (Feature B). In this example, Feature A had an estimate of 6 and a TPR of approximately 0.73 while Feature B had an estimate of 4 and a TPR of 0.75. This shows that although the positive estimate for some feature may be higher, the more accurate TPR value for that feature may be lower when compared to other features that have a lower positive estimate. Depending on the situation and knowledge of the data and decision trees, one may opt to use the positive estimate for a quick and easy solution to their problem. On the other hand, a more experienced user would most likely prefer to use the TPR value to rank the features because it takes into account the proportions of the data and all the samples that should have been classified as positive. Gini impurity, Gini's diversity index,[26] or Gini–Simpson index in biodiversity research, is named after Italian mathematician Corrado Gini and used by the CART (classification and regression tree) algorithm for classification trees. Gini impurity measures how often a randomly chosen element of a set would be incorrectly labeled if it were labeled randomly and independently according to the distribution of labels in the set. It reaches its minimum (zero) when all cases in the node fall into a single target category. For a set of items with J{\displaystyle J} classes and relative frequencies pi{\displaystyle p_{i}}, i∈{1,2,...,J}{\displaystyle i\in \{1,2,...,J\}}, the probability of choosing an item with label i{\displaystyle i} is pi{\displaystyle p_{i}}, and the probability of miscategorizing that item is ∑k≠ipk=1−pi{\displaystyle \sum _{k\neq i}p_{k}=1-p_{i}}. The Gini impurity is computed by summing pairwise products of these probabilities for each class label: {\displaystyle \operatorname {I} _{G}(p)=\sum _{i=1}^{J}\left(p_{i}\sum _{k\neq i}p_{k}\right)=\sum _{i=1}^{J}p_{i}(1-p_{i})=1-\sum _{i=1}^{J}p_{i}^{2}.} The Gini impurity is also an information theoretic measure and corresponds to Tsallis entropy with deformation coefficient q=2{\displaystyle q=2}, which in physics is associated with the lack of information in out-of-equilibrium, non-extensive, dissipative and quantum systems. For the limit q→1{\displaystyle q\to 1} one recovers the usual Boltzmann–Gibbs or Shannon entropy. In this sense, the Gini impurity is nothing but a variation of the usual entropy measure for decision trees. Information gain, used by the ID3, C4.5 and C5.0 tree-generation algorithms, is based on the concept of entropy and information content from information theory. Entropy is defined as {\displaystyle \mathrm {H} (T)=\operatorname {I} _{E}(p_{1},p_{2},\ldots ,p_{J})=-\sum _{i=1}^{J}p_{i}\log _{2}p_{i},} where p1,p2,…{\displaystyle p_{1},p_{2},\ldots } are fractions that add up to 1 and represent the percentage of each class present in the child node that results from a split in the tree.[27] Averaging over the possible values of A{\displaystyle A} gives the expected information gain {\displaystyle {\overline {IG(T,a)}}=\operatorname {I} (T;A)=\mathrm {H} (T)-\mathrm {H} (T\mid A).} That is, the expected information gain is the mutual information, meaning that on average, the reduction in the entropy of T is the mutual information. 
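The following is a minimal sketch, in plain Python, of the two node-impurity measures just defined (Gini impurity and Shannon entropy), computed from class counts. The example counts are illustrative only.

```python
# Gini impurity and Shannon entropy of a node, computed from class counts,
# following the standard formulas given above.
from math import log2

def gini_impurity(counts):
    """1 - sum_i p_i^2 over the class frequencies of a node."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    """-sum_i p_i log2 p_i over the class frequencies of a node."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

print(gini_impurity([9, 5]))  # ~0.459 for a node with 9 positives and 5 negatives
print(entropy([9, 5]))        # ~0.940 bits for the same node
```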
Information gain is used to decide which feature to split on at each step in building the tree. Simplicity is best, so we want to keep our tree small. To do so, at each step we should choose the split that results in the most consistent child nodes. A commonly used measure of consistency is called information, which is measured in bits. For each node of the tree, the information value "represents the expected amount of information that would be needed to specify whether a new instance should be classified yes or no, given that the example reached that node".[27] Consider an example data set with four attributes: outlook (sunny, overcast, rainy), temperature (hot, mild, cool), humidity (high, normal), and windy (true, false), with a binary (yes or no) target variable, play, and 14 data points. To construct a decision tree on this data, we need to compare the information gain of each of four trees, each split on one of the four features. The split with the highest information gain will be taken as the first split and the process will continue until all children nodes each have consistent data, or until the information gain is 0. To find the information gain of the split using windy, we must first calculate the information in the data before the split. The original data contained nine yes's and five no's, giving {\displaystyle I_{E}([9,5])=-{\tfrac {9}{14}}\log _{2}{\tfrac {9}{14}}-{\tfrac {5}{14}}\log _{2}{\tfrac {5}{14}}\approx 0.940} bits. The split using the feature windy results in two children nodes, one for a windy value of true and one for a windy value of false. In this data set, there are six data points with a true windy value, three of which have a play (where play is the target variable) value of yes and three with a play value of no. The eight remaining data points with a windy value of false contain two no's and six yes's. The information of the windy=true node is calculated using the entropy equation above. Since there is an equal number of yes's and no's in this node, we have {\displaystyle I_{E}([3,3])=-{\tfrac {3}{6}}\log _{2}{\tfrac {3}{6}}-{\tfrac {3}{6}}\log _{2}{\tfrac {3}{6}}=1.} For the node where windy=false there were eight data points, six yes's and two no's. Thus we have {\displaystyle I_{E}([6,2])=-{\tfrac {6}{8}}\log _{2}{\tfrac {6}{8}}-{\tfrac {2}{8}}\log _{2}{\tfrac {2}{8}}\approx 0.811.} To find the information of the split, we take the weighted average of these two numbers based on how many observations fell into which node: {\displaystyle I_{E}([3,3],[6,2])={\tfrac {6}{14}}\cdot 1+{\tfrac {8}{14}}\cdot 0.811\approx 0.892.} Now we can calculate the information gain achieved by splitting on the windy feature: {\displaystyle IG({\text{windy}})=0.940-0.892\approx 0.048.} To build the tree, the information gain of each possible first split would need to be calculated. The best first split is the one that provides the most information gain. This process is repeated for each impure node until the tree is complete. This example is adapted from the example appearing in Witten et al.[27] Information gain is also known as Shannon index in biodiversity research. Introduced in CART,[7] variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node N is defined as the total reduction of the variance of the target variable Y due to the split at this node: {\displaystyle I_{V}(N)={\frac {1}{|S|^{2}}}\sum _{i\in S}\sum _{j\in S}{\frac {1}{2}}(y_{i}-y_{j})^{2}-\left({\frac {|S_{t}|}{|S|}}\cdot {\frac {1}{|S_{t}|^{2}}}\sum _{i\in S_{t}}\sum _{j\in S_{t}}{\frac {1}{2}}(y_{i}-y_{j})^{2}+{\frac {|S_{f}|}{|S|}}\cdot {\frac {1}{|S_{f}|^{2}}}\sum _{i\in S_{f}}\sum _{j\in S_{f}}{\frac {1}{2}}(y_{i}-y_{j})^{2}\right),} where S{\displaystyle S}, St{\displaystyle S_{t}}, and Sf{\displaystyle S_{f}} are the set of presplit sample indices, set of sample indices for which the split test is true, and set of sample indices for which the split test is false, respectively. Each of the above summands is indeed a variance estimate, written in a form that avoids directly referring to the mean. 
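A minimal sketch of the variance-reduction criterion just described, written in the ordinary variance form (which is mathematically equivalent to the pairwise form above); the toy target values are made up for illustration.

```python
# Variance reduction of a split: the weighted decrease in variance of the target
# values when a node is split into its "true" and "false" children. Uses the
# ordinary variance, which equals the pairwise form given above.
import numpy as np

def variance_reduction(y_parent, y_true_child, y_false_child):
    y_parent = np.asarray(y_parent, dtype=float)
    n = len(y_parent)
    weighted_child_var = (
        len(y_true_child) / n * np.var(y_true_child)
        + len(y_false_child) / n * np.var(y_false_child)
    )
    return np.var(y_parent) - weighted_child_var

# Toy regression targets split by some test; the numbers are made up.
y = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
print(variance_reduction(y, y[:3], y[3:]))  # large reduction: a good split
```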
By replacing (yi−yj)2{\displaystyle (y_{i}-y_{j})^{2}} in the formula above with the dissimilarity dij{\displaystyle d_{ij}} between two objects i{\displaystyle i} and j{\displaystyle j}, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.[1] Used by CART in 1984,[28] the measure of "goodness" is a function that seeks to optimize the balance of a candidate split's capacity to create pure children with its capacity to create equally-sized children. This process is repeated for each impure node until the tree is complete. The function φ(s∣t){\displaystyle \varphi (s\mid t)}, where s{\displaystyle s} is a candidate split at node t{\displaystyle t}, is defined as {\displaystyle \varphi (s\mid t)=2P_{L}P_{R}\sum _{j}\left|P(j\mid t_{L})-P(j\mid t_{R})\right|,} where tL{\displaystyle t_{L}} and tR{\displaystyle t_{R}} are the left and right children of node t{\displaystyle t} using split s{\displaystyle s}, respectively; PL{\displaystyle P_{L}} and PR{\displaystyle P_{R}} are the proportions of records in t{\displaystyle t} in tL{\displaystyle t_{L}} and tR{\displaystyle t_{R}}, respectively; and P(j∣tL){\displaystyle P(j\mid t_{L})} and P(j∣tR){\displaystyle P(j\mid t_{R})} are the proportions of class j{\displaystyle j} records in tL{\displaystyle t_{L}} and tR{\displaystyle t_{R}}, respectively. Consider an example data set with three attributes: savings (low, medium, high), assets (low, medium, high), income (numerical value), a binary target variable credit risk (good, bad), and 8 data points.[28] To start a decision tree, we will calculate the maximum value of φ(s∣t){\displaystyle \varphi (s\mid t)} using each feature to find which one will split the root node. This process will continue until all children are pure or all φ(s∣t){\displaystyle \varphi (s\mid t)} values are below a set threshold. To find φ(s∣t){\displaystyle \varphi (s\mid t)} of the feature savings, we need to note the quantity of each value. The original data contained three low's, three medium's, and two high's. Out of the low's, one had a good credit risk while out of the medium's and high's, 4 had a good credit risk. Assume a candidate split s{\displaystyle s} such that records with a low savings will be put in the left child and all other records will be put into the right child (a numerical evaluation of this split is sketched at the end of this article). To build the tree, the "goodness" of all candidate splits for the root node needs to be calculated. The candidate with the maximum value will split the root node, and the process will continue for each impure node until the tree is complete. Compared to other metrics such as information gain, the measure of "goodness" will attempt to create a more balanced tree, leading to more-consistent decision time. However, it sacrifices some priority for creating pure children, which can lead to additional splits that are not present with other metrics. Amongst other data mining methods, decision trees have various advantages, such as being simple to understand and interpret, requiring little data preparation, and being able to handle both numerical and categorical data. Many data mining software packages provide implementations of one or more decision tree algorithms (e.g. random forest), including open-source packages such as scikit-learn, R's rpart, and Weka, and commercial software such as MATLAB and IBM SPSS Modeler. In a decision tree, all paths from the root node to the leaf node proceed by way of conjunction, or AND. 
In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together using minimum message length (MML).[43] Decision graphs have been further extended to allow previously unstated new attributes to be learnt dynamically and used at different places within the graph.[44] The more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring.[citation needed] In general, decision graphs infer models with fewer leaves than decision trees. Evolutionary algorithms have been used to avoid locally optimal decisions and search the decision tree space with little a priori bias.[45][46] It is also possible for a tree to be sampled using MCMC.[47] The tree can be searched for in a bottom-up fashion,[48] or several trees can be constructed in parallel to reduce the expected number of tests until classification.[38]
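Returning to the goodness-of-split example above: the sketch below evaluates the candidate low-savings split numerically, assuming the form of φ(s∣t) given earlier. The class counts (left child: 1 good and 2 bad, right child: 4 good and 1 bad) come directly from the example; everything else is illustrative.

```python
# Sketch of the "goodness of split" measure in the form given above,
# phi(s|t) = 2 * P_L * P_R * sum_j |P(j|t_L) - P(j|t_R)|,
# applied to the low-savings split (left: 1 good / 2 bad, right: 4 good / 1 bad).

def goodness_of_split(left_counts, right_counts):
    n_left, n_right = sum(left_counts), sum(right_counts)
    n = n_left + n_right
    p_left, p_right = n_left / n, n_right / n
    class_diff = sum(
        abs(l / n_left - r / n_right) for l, r in zip(left_counts, right_counts)
    )
    return 2 * p_left * p_right * class_diff

# Counts are (good credit risk, bad credit risk).
print(goodness_of_split((1, 2), (4, 1)))  # 0.4375 for the low-savings split
```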
https://en.wikipedia.org/wiki/Decision_tree_learning
Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu in 1996.[1] It is a non-parametric, density-based clustering algorithm: given a set of points in some space, it groups together points that are closely packed (points with many nearby neighbors), and marks as outliers points that lie alone in low-density regions (those whose nearest neighbors are too far away). DBSCAN is one of the most commonly used and cited clustering algorithms.[2] In 2014, the algorithm was awarded the Test of Time Award (an award given to algorithms which have received substantial attention in theory and practice) at the leading data mining conference, ACM SIGKDD.[3] As of July 2020[update], the follow-up paper "DBSCAN Revisited, Revisited: Why and How You Should (Still) Use DBSCAN"[4] appears in the list of the 8 most downloaded articles of the prestigious ACM Transactions on Database Systems (TODS) journal.[5] Another follow-up, HDBSCAN*, was initially published by Ricardo J. G. Campello, David Moulavi, and Jörg Sander in 2013,[6] then expanded upon with Arthur Zimek in 2015.[7] It revises some of the original decisions such as the border points, and produces a hierarchical instead of a flat result. In 1972, Robert F. Ling published a closely related algorithm in "The Theory and Construction of k-Clusters"[8] in The Computer Journal with an estimated runtime complexity of O(n³).[8] DBSCAN has a worst-case of O(n²), and the database-oriented range-query formulation of DBSCAN allows for index acceleration. The algorithms slightly differ in their handling of border points. Consider a set of points in some space to be clustered. Let ε be a parameter specifying the radius of a neighborhood with respect to some point. For the purpose of DBSCAN clustering, the points are classified as core points, (directly-)reachable points and outliers, as follows: a point p is a core point if at least minPts points are within distance ε of it (including p itself); a point q is directly reachable from p if q is within distance ε of core point p (points are only said to be directly reachable from core points); a point q is reachable from p if there is a path p₁, ..., pₙ with p₁ = p and pₙ = q, where each pᵢ₊₁ is directly reachable from pᵢ; and all points not reachable from any other point are outliers or noise points. Now if p is a core point, then it forms a cluster together with all points (core or non-core) that are reachable from it. Each cluster contains at least one core point; non-core points can be part of a cluster, but they form its "edge", since they cannot be used to reach more points. Reachability is not a symmetric relation: by definition, only core points can reach non-core points. The opposite is not true, so a non-core point may be reachable, but nothing can be reached from it. Therefore, a further notion of connectedness is needed to formally define the extent of the clusters found by DBSCAN. Two points p and q are density-connected if there is a point o such that both p and q are reachable from o. Density-connectedness is symmetric. A cluster then satisfies two properties: all points within the cluster are mutually density-connected, and if a point is density-reachable from some point of the cluster, it is part of the cluster as well. DBSCAN requires two parameters: ε (eps) and the minimum number of points required to form a dense region[a] (minPts). It starts with an arbitrary starting point that has not been visited. This point's ε-neighborhood is retrieved, and if it contains sufficiently many points, a cluster is started. Otherwise, the point is labeled as noise. Note that this point might later be found in a sufficiently sized ε-environment of a different point and hence be made part of a cluster. If a point is found to be a dense part of a cluster, its ε-neighborhood is also part of that cluster. Hence, all points that are found within the ε-neighborhood are added, as is their own ε-neighborhood when they are also dense. This process continues until the density-connected cluster is completely found. 
Then, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or noise. DBSCAN can be used with any distance function[1][4] (as well as similarity functions or other predicates).[9] The distance function (dist) can therefore be seen as an additional parameter. The algorithm can be expressed in pseudocode,[4] in which the RangeQuery subroutine that retrieves the ε-neighborhood of a point can be implemented using a database index for better performance, or using a slow linear scan; a Python sketch of the procedure is given below. The DBSCAN algorithm can be abstracted into the following steps:[4] (1) find the points in the ε neighborhood of every point, and identify the core points with more than minPts neighbors; (2) find the connected components of core points on the neighbor graph, ignoring all non-core points; (3) assign each non-core point to a nearby cluster if the cluster is an ε (eps) neighbor, otherwise assign it to noise. A naive implementation of this requires storing the neighborhoods in step 1, thus requiring substantial memory. The original DBSCAN algorithm does not require this by performing these steps for one point at a time. DBSCAN optimizes the following loss function:[10] For any possible clustering C={C1,…,Cl}{\displaystyle C=\{C_{1},\ldots ,C_{l}\}} out of the set of all clusterings C{\displaystyle {\mathcal {C}}}, it minimizes the number of clusters under the condition that every pair of points in a cluster is density-reachable, which corresponds to the original two properties "maximality" and "connectivity" of a cluster:[1] minC⊂C,ddb(p,q)≤ε∀p,q∈Ci∀Ci∈C|C|{\displaystyle \min _{C\subset {\mathcal {C}},~d_{db}(p,q)\leq \varepsilon ~\forall p,q\in C_{i}~\forall C_{i}\in C}|C|} where ddb(p,q){\displaystyle d_{db}(p,q)} gives the smallest ε{\displaystyle \varepsilon } such that two points p and q are density-connected. DBSCAN visits each point of the database, possibly multiple times (e.g., as candidates to different clusters). For practical considerations, however, the time complexity is mostly governed by the number of regionQuery invocations. DBSCAN executes exactly one such query for each point, and if an indexing structure is used that executes a neighborhood query in O(log n), an overall average runtime complexity of O(n log n) is obtained (if parameter ε is chosen in a meaningful way, i.e. such that on average only O(log n) points are returned). Without the use of an accelerating index structure, or on degenerated data (e.g. all points within a distance less than ε), the worst case run time complexity remains O(n²). The {\displaystyle \textstyle {\binom {n}{2}}}=(n²−n)/2-sized upper triangle of the distance matrix can be materialized to avoid distance recomputations, but this needs O(n²) memory, whereas a non-matrix based implementation of DBSCAN only needs O(n) memory. See the section below on extensions for algorithmic modifications to handle these issues. Every data mining task has the problem of parameters. Every parameter influences the algorithm in specific ways. For DBSCAN, the parameters ε and minPts are needed. The parameters must be specified by the user. Ideally, the value of ε is given by the problem to solve (e.g. a physical distance), and minPts is then the desired minimum cluster size.[a] OPTICS can be seen as a generalization of DBSCAN that replaces the ε parameter with a maximum value that mostly affects performance. MinPts then essentially becomes the minimum cluster size to find. While the algorithm is much easier to parameterize than DBSCAN, the results are a bit more difficult to use, as it will usually produce a hierarchical clustering instead of the simple data partitioning that DBSCAN produces. Recently, one of the original authors of DBSCAN has revisited DBSCAN and OPTICS, and published a refined version of hierarchical DBSCAN (HDBSCAN*),[6][7] which no longer has the notion of border points. Instead, only the core points form the cluster. 
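The following is a from-scratch sketch of the procedure described above, not the authors' reference implementation; Euclidean distance and a linear-scan RangeQuery are simplifying assumptions, and noise points are labeled −1.

```python
# A from-scratch DBSCAN sketch following the description above.
import numpy as np

def range_query(points, q, eps):
    """Indices of all points within distance eps of points[q] (linear scan)."""
    dists = np.linalg.norm(points - points[q], axis=1)
    return set(np.flatnonzero(dists <= eps))

def dbscan(points, eps, min_pts):
    UNDEFINED, NOISE = -2, -1
    labels = np.full(len(points), UNDEFINED)
    cluster = -1
    for p in range(len(points)):
        if labels[p] != UNDEFINED:
            continue
        neighbors = range_query(points, p, eps)
        if len(neighbors) < min_pts:        # not a core point
            labels[p] = NOISE
            continue
        cluster += 1                         # start a new cluster
        labels[p] = cluster
        seeds = list(neighbors - {p})
        while seeds:
            q = seeds.pop()
            if labels[q] == NOISE:           # border point: claim it for the cluster
                labels[q] = cluster
            if labels[q] != UNDEFINED:
                continue
            labels[q] = cluster
            q_neighbors = range_query(points, q, eps)
            if len(q_neighbors) >= min_pts:  # q is a core point: expand from it
                seeds.extend(q_neighbors - set(seeds))
    return labels                            # noise points keep the label -1

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [50, 50]], float)
print(dbscan(X, eps=2.0, min_pts=2))  # e.g. [0 0 0 1 1 -1]
```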
A spectral implementation of DBSCAN is related to spectral clustering in the trivial case of determining connected graph components — the optimal clusters with no edges cut.[12] However, it can be computationally intensive, up to O(n3){\displaystyle O(n^{3})}. Additionally, one has to choose the number of eigenvectors to compute. For performance reasons, the original DBSCAN algorithm remains preferable to its spectral implementation. Generalized DBSCAN (GDBSCAN)[9][13] is a generalization by the same authors to arbitrary "neighborhood" and "dense" predicates. The ε and minPts parameters are removed from the original algorithm and moved to the predicates. For example, on polygon data, the "neighborhood" could be any intersecting polygon, whereas the density predicate uses the polygon areas instead of just the object count. Various extensions to the DBSCAN algorithm have been proposed, including methods for parallelization, parameter estimation, and support for uncertain data. The basic idea has been extended to hierarchical clustering by the OPTICS algorithm. DBSCAN is also used as part of subspace clustering algorithms like PreDeCon and SUBCLU. HDBSCAN*[6][7] is a hierarchical version of DBSCAN which is also faster than OPTICS, and from which a flat partition consisting of the most prominent clusters can be extracted.[14] Different implementations of the same algorithm were found to exhibit enormous performance differences, with the fastest on a test data set finishing in 1.4 seconds, the slowest taking 13803 seconds.[15] The differences can be attributed to implementation quality, language and compiler differences, and the use of indexes for acceleration.
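For comparison with the from-scratch sketch earlier, a widely used library implementation can be called as follows; this assumes scikit-learn is available, and the small data set is made up.

```python
# Using a common library implementation of DBSCAN, assuming scikit-learn is
# available. eps and min_samples correspond to the ε and minPts parameters.
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [50, 50]], float)
labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(X)
print(labels)  # noise points are labeled -1
```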
https://en.wikipedia.org/wiki/DBSCAN
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and error. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties. Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek: βότρυς 'grape'), typological analysis, and community detection. The subtle differences are often in the use of the results: while in data mining, the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest. Cluster analysis originated in anthropology with Driver and Kroeber in 1932,[1] was introduced to psychology by Joseph Zubin in 1938[2] and Robert Tryon in 1939,[3] and was famously used by Cattell beginning in 1943[4] for trait theory classification in personality psychology. The notion of a "cluster" cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms.[5] There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these "cluster models" is key to understanding the differences between the various algorithms. Typical cluster models include connectivity models (as in hierarchical clustering), centroid models (as in k-means), distribution models (as in Gaussian mixture models), density models (as in DBSCAN and OPTICS), subspace models, group models, graph-based models, and neural models. A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example, a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as hard clustering, where each object either belongs to a cluster or not, and soft clustering (also fuzzy clustering), where each object belongs to each cluster to a certain degree. There are also finer distinctions possible, for example, strict partitioning clustering (with or without outliers), overlapping clustering, hierarchical clustering, and subspace clustering. As listed above, clustering algorithms can be categorized based on their cluster model. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all provide models for their clusters and can thus not easily be categorized. 
An overview of algorithms explained in Wikipedia can be found in thelist of statistics algorithms. There is no objectively "correct" clustering algorithm, but as it was noted, "clustering is in the eye of the beholder."[5]In fact, an axiomatic approach to clustering demonstrates that it is impossible for any clustering method to meet three fundamental properties simultaneously:scale invariance(results remain unchanged under proportional scaling of distances),richness(all possible partitions of the data can be achieved), andconsistencybetween distances and the clustering structure.[7]The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model will generally fail on a data set that contains a radically different kind of model.[5]For example, k-means cannot find non-convex clusters.[5]Most traditional clustering methods assume the clusters exhibit a spherical, elliptical or convex shape.[8] Connectivity-based clustering, also known ashierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using adendrogram, which explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix. Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice ofdistance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance) to use. Popular choices are known assingle-linkage clustering(the minimum of object distances),complete linkage clustering(the maximum of object distances), andUPGMAorWPGMA("Unweighted or Weighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions). These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as "chaining phenomenon", in particular withsingle-linkage clustering). In the general case, the complexity isO(n3){\displaystyle {\mathcal {O}}(n^{3})}for agglomerative clustering andO(2n−1){\displaystyle {\mathcal {O}}(2^{n-1})}fordivisive clustering,[9]which makes them too slow for large data sets. For some special cases, optimal efficient methods (of complexityO(n2){\displaystyle {\mathcal {O}}(n^{2})}) are known: SLINK[10]for single-linkage and CLINK[11]for complete-linkage clustering. 
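As a hands-on illustration of connectivity-based clustering, the sketch below builds single- and complete-linkage hierarchies and cuts them at a distance threshold to obtain flat clusters; it assumes SciPy is available and uses a made-up toy data set.

```python
# Agglomerative hierarchical clustering with two linkage criteria, assuming
# SciPy is available; the toy data is made up for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)

# Build the merge hierarchy with two different linkage criteria.
single = linkage(X, method="single")      # minimum of object distances
complete = linkage(X, method="complete")  # maximum of object distances

# Cut the dendrogram at a chosen distance to obtain a flat partition.
print(fcluster(single, t=5.0, criterion="distance"))    # e.g. [1 1 1 2 2 2]
print(fcluster(complete, t=5.0, criterion="distance"))
```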
In centroid-based clustering, each cluster is represented by a central vector, which is not necessarily a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized. The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. A particularly well-known approximate method is Lloyd's algorithm,[12] often just referred to as the "k-means algorithm" (although another algorithm introduced this name). It does, however, only find a local optimum, and is commonly run multiple times with different random initializations. Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means). Most k-means-type algorithms require the number of clusters – k – to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid, often yielding improperly cut borders of clusters. This happens primarily because the algorithm optimizes cluster centers, not cluster borders. The steps involved in the centroid-based clustering algorithm are: choose k initial cluster centers, assign each object to its nearest center, recompute each center as the mean of the objects assigned to it, and repeat the assignment and update steps until the assignments no longer change (see the sketch below). K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the expectation-maximization algorithm for this model discussed below. Centroid-based clustering problems such as k-means and k-medoids are special cases of the uncapacitated, metric facility location problem, a canonical problem in the operations research and computational geometry communities. In a basic facility location problem (of which there are numerous variants that model more elaborate settings), the task is to find the best warehouse locations to optimally service a given set of consumers. One may view "warehouses" as cluster centroids and "consumer locations" as the data to be clustered. This makes it possible to apply the well-developed algorithmic solutions from the facility location literature to the presently considered centroid-based clustering problem. 
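A minimal NumPy sketch of Lloyd's algorithm as described above; plain random initial centers are a simplifying assumption (rather than k-means++), and empty-cluster handling is omitted for brevity.

```python
# Lloyd's algorithm (the usual "k-means algorithm"): alternate assignment and
# centroid-update steps until the centers stop moving.
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        # (Empty-cluster handling is omitted for brevity.)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
labels, centers = lloyd_kmeans(X, k=2)
print(labels, centers, sep="\n")
```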
Standard model-based clustering methods include more parsimonious models based on the eigenvalue decomposition of the covariance matrices, which provide a balance between overfitting and fidelity to the data. One prominent method is known as Gaussian mixture models (using the expectation-maximization algorithm). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to better fit the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary. Distribution-based clustering produces complex models for clusters that can capture correlation and dependence between attributes. However, these algorithms put an extra burden on the user: for many real data sets, there may be no concisely defined mathematical model (e.g. assuming Gaussian distributions is a rather strong assumption on the data). In density-based clustering,[13] clusters are defined as areas of higher density than the remainder of the data set. Objects in sparse areas – that are required to separate clusters – are usually considered to be noise and border points. The most popular[14] density-based clustering method is DBSCAN.[15] In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, in the original variant defined as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range. Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run, therefore there is no need to run it multiple times. OPTICS[16] is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter ε{\displaystyle \varepsilon }, and produces a hierarchical result related to that of linkage clustering. DeLi-Clu,[17] Density-Link-Clustering combines ideas from single-linkage clustering and OPTICS, eliminating the ε{\displaystyle \varepsilon } parameter entirely and offering performance improvements over OPTICS by using an R-tree index. The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data. Mean-shift is a clustering approach where each object is moved to the densest area in its vicinity, based on kernel density estimation. Eventually, objects converge to local maxima of density. 
Similar to k-means clustering, these "density attractors" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.[17] The grid-based technique is used for a multi-dimensional data set.[18] In this technique, we create a grid structure, and the comparison is performed on grids (also known as cells). The grid-based technique is fast and has low computational complexity. There are two types of grid-based clustering methods: STING and CLIQUE. The steps involved in the grid-based clustering algorithm are: dividing the data space into a finite number of cells, computing the density of each cell, discarding cells whose density is below a given threshold, and merging adjacent dense cells to form clusters. In recent years, considerable effort has been put into improving the performance of existing algorithms.[19][20] Among them are CLARANS[21] and BIRCH.[22] With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set to then analyze the partitions with existing slower methods such as k-means clustering. For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes.[23] Examples of such clustering algorithms are CLIQUE[24] and SUBCLU.[25] Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC,[26] hierarchical subspace clustering, and DiSH[27]) and correlation clustering (HiCO,[28] hierarchical correlation clustering, 4C[29] using "correlation connectivity" and ERiC[30] exploring hierarchical density-based correlation clusters). Several different clustering systems based on mutual information have been proposed. 
One is Marina Meilă'svariation of informationmetric;[31]another provides hierarchical clustering.[32]Using genetic algorithms, a wide range of different fit-functions can be optimized, including mutual information.[33]Alsobelief propagation, a recent development incomputer scienceandstatistical physics, has led to the creation of new types of clustering algorithms.[34] Evaluation (or "validation") of clustering results is as difficult as the clustering itself.[35]Popular approaches involve "internal" evaluation, where the clustering is summarized to a single quality score, "external" evaluation, where the clustering is compared to an existing "ground truth" classification, "manual" evaluation by a human expert, and "indirect" evaluation by evaluating the utility of the clustering in its intended application.[36] Internal evaluation measures suffer from the problem that they represent functions that themselves can be seen as a clustering objective. For example, one could cluster the data set by the Silhouette coefficient; except that there is no known efficient algorithm for this. By using such an internal measure for evaluation, one rather compares the similarity of the optimization problems,[36]and not necessarily how useful the clustering is. External evaluation has similar problems: if we have such "ground truth" labels, then we would not need to cluster; and in practical applications we usually do not have such labels. On the other hand, the labels only reflect one possible partitioning of the data set, which does not imply that there does not exist a different, and maybe even better, clustering. Neither of these approaches can therefore ultimately judge the actual quality of a clustering, but this needs human evaluation,[36]which is highly subjective. Nevertheless, such statistics can be quite informative in identifying bad clusterings,[37]but one should not dismiss subjective human evaluation.[37] When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.[38]Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering. Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this shall not imply that one algorithm produces more valid results than another.[5]Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for some kind of models has no chance if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion.[5]For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters neither the use ofk-means, nor of an evaluation criterion that assumes convexity, is sound. 
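The pitfall described above can be made concrete with a small experiment, sketched below under the assumption that scikit-learn is available; the two-moons data set and the parameter values are illustrative only. On such non-convex clusters an internal index that favours compact, convex clusters will typically rank k-means above DBSCAN, even though an external comparison with the ground truth prefers DBSCAN.

```python
# Internal vs. external evaluation on non-convex clusters (illustrative).
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score, adjusted_rand_score

X, y_true = make_moons(n_samples=500, noise=0.05, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

for name, labels in [("k-means", km), ("DBSCAN", db)]:
    print(name,
          "| silhouette (internal):", round(silhouette_score(X, labels), 3),
          "| ARI vs. ground truth (external):",
          round(adjusted_rand_score(y_true, labels), 3))
```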
More than a dozen internal evaluation measures exist, usually based on the intuition that items in the same cluster should be more similar than items in different clusters.[39]: 115–121For example, the following methods can be used to assess the quality of clustering algorithms based on an internal criterion: TheDavies–Bouldin indexcan be calculated by the following formula:DB=1n∑i=1nmaxj≠i(σi+σjd(ci,cj)){\displaystyle DB={\frac {1}{n}}\sum _{i=1}^{n}\max _{j\neq i}\left({\frac {\sigma _{i}+\sigma _{j}}{d(c_{i},c_{j})}}\right)}wherenis the number of clusters,ci{\displaystyle c_{i}}is thecentroidof clusteri{\displaystyle i},σi{\displaystyle \sigma _{i}}is the average distance of all elements in clusteri{\displaystyle i}to centroidci{\displaystyle c_{i}}, andd(ci,cj){\displaystyle d(c_{i},c_{j})}is the distance between centroidsci{\displaystyle c_{i}}andcj{\displaystyle c_{j}}. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallestDavies–Bouldin indexis considered the best algorithm based on this criterion. The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio of the minimal inter-cluster distance to the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:[40]D=min1≤i<j≤nd(i,j)max1≤k≤nd′(k){\displaystyle D={\frac {\min _{1\leq i<j\leq n}d(i,j)}{\max _{1\leq k\leq n}d'(k)}}}whered(i,j) represents the distance between clustersiandj, andd'(k) measures the intra-cluster distance of clusterk. The inter-cluster distanced(i,j) between two clusters may be any number of distance measures, such as the distance between thecentroidsof the clusters. Similarly, the intra-cluster distanced'(k) may be measured in a variety of ways, such as the maximal distance between any pair of elements in clusterk. Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable. The silhouette coefficient contrasts the average distance to elements in the same cluster with the average distance to elements in other clusters. Objects with a high silhouette value are considered well clustered, while objects with a low value may be outliers. This index works well withk-means clustering, and is also used to determine the optimal number of clusters.[41] In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by (expert) humans. Thus, the benchmark sets can be thought of as agold standardfor evaluation.[35]These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes.
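As a concrete reading of the internal indices defined above, the following from-scratch NumPy sketch computes the Davies–Bouldin index exactly as in the formula, and a Dunn index using one common choice of distances (centroid distance between clusters, maximal pairwise distance within a cluster). It is an illustration rather than a reference implementation; scikit-learn's davies_bouldin_score and silhouette_score offer tested equivalents.

```python
import numpy as np

def davies_bouldin(X, labels):
    """DB = (1/n) * sum_i max_{j != i} (sigma_i + sigma_j) / d(c_i, c_j)."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in clusters])
    # sigma_k: average distance of the cluster's points to its centroid
    sigma = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                      for k, c in zip(clusters, centroids)])
    n = len(clusters)
    worst = []
    for i in range(n):
        ratios = [(sigma[i] + sigma[j]) /
                  np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(n) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))

def dunn(X, labels):
    """Dunn = min_{i<j} d(i, j) / max_k d'(k), with centroid inter-cluster
    distance and maximal pairwise intra-cluster distance (one common choice)."""
    clusters = np.unique(labels)
    cents = {k: X[labels == k].mean(axis=0) for k in clusters}
    inter = min(np.linalg.norm(cents[a] - cents[b])
                for idx, a in enumerate(clusters) for b in clusters[idx + 1:])
    intra = max(np.max(np.linalg.norm(P[:, None] - P[None, :], axis=-1))
                for P in (X[labels == k] for k in clusters))
    return inter / intra
```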
However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters, or the classes may containanomalies.[42]Additionally, from aknowledge discoverypoint of view, the reproduction of known knowledge may not necessarily be the intended result.[42]In the special scenario ofconstrained clustering, where meta information (such as class labels) is used already in the clustering process, the hold-out of information for evaluation purposes is non-trivial.[43] A number of measures are adapted from variants used to evaluate classification tasks. In place of counting the number of times a class was correctly assigned to a single data point (known astrue positives), suchpair countingmetrics assess whether each pair of data points that is truly in the same cluster is predicted to be in the same cluster.[35] As with internal evaluation, several external evaluation measures exist,[39]: 125–129for example: Purity is a measure of the extent to which clusters contain a single class.[38]Its calculation can be thought of as follows: For each cluster, count the number of data points from the most common class in said cluster. Now take the sum over all clusters and divide by the total number of data points. Formally, given some set of clustersM{\displaystyle M}and some set of classesD{\displaystyle D}, both partitioningN{\displaystyle N}data points, purity can be defined as:1N∑m∈Mmaxd∈D|m∩d|{\displaystyle {\frac {1}{N}}\sum _{m\in M}\max _{d\in D}{|m\cap d|}}This measure doesn't penalize having many clusters, and more clusters will make it easier to produce a high purity. A purity score of 1 is always possible by putting each data point in its own cluster. Also, purity doesn't work well for imbalanced data, where even poorly performing clustering algorithms will give a high purity value. For example, if a size 1000 dataset consists of two classes, one containing 999 points and the other containing 1 point, then every possible partition will have a purity of at least 99.9%. The Rand index[44]computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. It can be computed using the following formula:RI=TP+TNTP+FP+FN+TN{\displaystyle RI={\frac {TP+TN}{TP+FP+FN+TN}}}whereTP{\displaystyle TP}is the number of true positives,TN{\displaystyle TN}is the number oftrue negatives,FP{\displaystyle FP}is the number offalse positives, andFN{\displaystyle FN}is the number offalse negatives. The instances being counted here are the number of correctpairwiseassignments. That is,TP{\displaystyle TP}is the number of pairs of points that are clustered together in the predicted partition and in the ground truth partition,FP{\displaystyle FP}is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, thenTP+TN+FP+FN=(N2){\displaystyle TP+TN+FP+FN={\binom {N}{2}}}. One issue with theRand indexis thatfalse positivesandfalse negativesare equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern,[citation needed]as does the chance-correctedadjusted Rand index. The F-measure can be used to balance the contribution offalse negativesby weightingrecallthrough a parameterβ≥0{\displaystyle \beta \geq 0}.
Letprecisionandrecall(both external evaluation measures in themselves) be defined as follows:P=TPTP+FP{\displaystyle P={\frac {TP}{TP+FP}}}R=TPTP+FN{\displaystyle R={\frac {TP}{TP+FN}}}whereP{\displaystyle P}is theprecisionrate andR{\displaystyle R}is therecallrate. We can calculate the F-measure by using the following formula:[38]Fβ=(β2+1)⋅P⋅Rβ2⋅P+R{\displaystyle F_{\beta }={\frac {(\beta ^{2}+1)\cdot P\cdot R}{\beta ^{2}\cdot P+R}}}Whenβ=0{\displaystyle \beta =0},F0=P{\displaystyle F_{0}=P}. In other words,recallhas no impact on the F-measure whenβ=0{\displaystyle \beta =0}, and increasingβ{\displaystyle \beta }allocates an increasing amount of weight to recall in the final F-measure. AlsoTN{\displaystyle TN}is not taken into account and can vary from 0 upward without bound. The Jaccard index is used to quantify the similarity between two datasets. TheJaccard indextakes on a value between 0 and 1. An index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements. The Jaccard index is defined by the following formula:J(A,B)=|A∩B||A∪B|=TPTP+FP+FN{\displaystyle J(A,B)={\frac {|A\cap B|}{|A\cup B|}}={\frac {TP}{TP+FP+FN}}}This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets. Note thatTN{\displaystyle TN}is not taken into account. The Dice symmetric measure doubles the weight onTP{\displaystyle TP}while still ignoringTN{\displaystyle TN}:DSC=2TP2TP+FP+FN{\displaystyle DSC={\frac {2TP}{2TP+FP+FN}}} The Fowlkes–Mallows index[45]computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:FM=TPTP+FP⋅TPTP+FN{\displaystyle FM={\sqrt {{\frac {TP}{TP+FP}}\cdot {\frac {TP}{TP+FN}}}}}whereTP{\displaystyle TP}is the number oftrue positives,FP{\displaystyle FP}is the number offalse positives, andFN{\displaystyle FN}is the number offalse negatives. TheFM{\displaystyle FM}index is the geometric mean of theprecisionandrecallP{\displaystyle P}andR{\displaystyle R}, and is thus also known as theG-measure, while the F-measure is their harmonic mean.[46][47]Moreover,precisionandrecallare also known as Wallace's indicesBI{\displaystyle B^{I}}andBII{\displaystyle B^{II}}.[48]Chance normalized versions of recall, precision and G-measure correspond toInformedness,MarkednessandMatthews Correlationand relate strongly toKappa.[49] The Chi index[50]is an external validation index that measures the clustering results by applying thechi-squared statistic. This index scores positively the fact that the labels are as sparse as possible across the clusters, i.e., that each cluster has as few different labels as possible. The higher the value of the Chi Index, the greater the relationship between the resulting clusters and the labels used. The mutual information is aninformation theoreticmeasure of how much information is shared between a clustering and a ground-truth classification that can detect a non-linear similarity between two clusterings.Normalized mutual informationis a family of corrected-for-chance variants of this that has a reduced bias for varying cluster numbers.[35] A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold standard cluster.
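To make the pair-counting quantities above concrete, the plain-Python sketch below derives TP, FP, FN and TN from a predicted and a ground-truth labeling and then evaluates purity, the Rand index, the F-measure, the Jaccard and Dice coefficients, and the Fowlkes–Mallows index exactly as defined. It is a from-scratch illustration (the six-point example labeling is invented for demonstration), not a substitute for a tested metrics library.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def pair_counts(pred, truth):
    """Count TP, FP, FN, TN over all pairs of data points."""
    tp = fp = fn = tn = 0
    for (p1, t1), (p2, t2) in combinations(list(zip(pred, truth)), 2):
        same_pred, same_truth = p1 == p2, t1 == t2
        if same_pred and same_truth:
            tp += 1
        elif same_pred:
            fp += 1
        elif same_truth:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def external_scores(pred, truth, beta=1.0):
    tp, fp, fn, tn = pair_counts(pred, truth)
    p, r = tp / (tp + fp), tp / (tp + fn)
    purity = sum(
        Counter(t for p_, t in zip(pred, truth) if p_ == k).most_common(1)[0][1]
        for k in set(pred)) / len(pred)
    return {
        "purity": purity,
        "rand": (tp + tn) / (tp + fp + fn + tn),
        "f_beta": (beta**2 + 1) * p * r / (beta**2 * p + r),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "fowlkes_mallows": sqrt(p * r),
    }

# Example: a 6-point data set with a predicted and a ground-truth labeling.
print(external_scores([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
```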
The validity measure (V-measure for short) is a combined metric for the homogeneity and completeness of the clusters.[51] To measure cluster tendency is to measure to what degree clusters exist in the data to be clustered, and this may be performed as an initial test before attempting clustering. One way to do this is to compare the data against random data. On average, random data should not have clusters.[verification needed]
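One common way to formalize this "compare against random data" idea is a Hopkins-style statistic, sketched below under the assumption that NumPy and scikit-learn are available; the sample size, random data set and interpretation thresholds are illustrative choices rather than a procedure prescribed above.

```python
# Hopkins-style cluster-tendency check: compare nearest-neighbour distances
# of uniformly random points against those of the real data points.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(X, m=50, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    sample = X[rng.choice(len(X), m, replace=False)]
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, X.shape[1]))
    # w: distance from sampled real points to their nearest *other* real point
    # u: distance from uniform random points to their nearest real point
    w = nn.kneighbors(sample, n_neighbors=2)[0][:, 1].sum()
    u = nn.kneighbors(uniform, n_neighbors=1)[0][:, 0].sum()
    return u / (u + w)   # near 0.5: random-like; near 1: clustered

# Two well-separated Gaussian blobs should score close to 1.
X = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(100, 2))
               for loc in ([0, 0], [4, 4])])
print("Hopkins-style statistic:", round(hopkins(X), 2))
```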
https://en.wikipedia.org/wiki/Density-based_clustering
In the study ofcomplex networks, a network is said to havecommunity structureif the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case ofnon-overlappingcommunity finding, this implies that the network divides naturally into groups of nodes with dense connections internally and sparser connections between groups. Butoverlappingcommunities are also allowed. The more general definition is based on the principle that pairs of nodes are more likely to be connected if they are both members of the same community(ies), and less likely to be connected if they do not share communities. A related but different problem iscommunity search, where the goal is to find a community that a certain vertex belongs to. In the study ofnetworks, such as computer and information networks, social networks and biological networks, a number of different characteristics have been found to occur commonly, including thesmall-world property,heavy-taileddegree distributions, andclustering, among others. Another common characteristic is community structure.[1][2][3][4][5]In the context of networks, community structure refers to the occurrence of groups of nodes in a network that are more densely connected internally than with the rest of the network, as shown in the example image to the right. This inhomogeneity of connections suggests that the network has certain natural divisions within it. Communities are often defined in terms of thepartition of the setof vertices, that is each node is put into one and only one community, just as in the figure. This is a useful simplification and most community detection methods find this type of community structure. However, in some cases a better representation could be one where vertices are in more than one community. This might happen in a social network where each vertex represents a person, and the communities represent the different groups of friends: one community for family, another community for co-workers, one for friends in the same sports club, and so on. The use ofcliques for community detectiondiscussed below is just one example of how such overlapping community structure can be found. Some networks may not have any meaningful community structure. Many basic network models, for example, such as therandom graphand theBarabási–Albert model, do not display community structure. Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc.[5][6] Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large scale map of a network since individual communities act like meta-nodes in the network which makes its study easier.[7] Individual communities also shed light on the function of the system represented by the network since communities often correspond to functional units of the system. In metabolic networks, such functional groups correspond to cycles or pathways whereas in theprotein interaction network, communities correspond to proteins with similar functionality inside a biological cell. Similarly, citation networks form communities by research topic.[1]Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. 
Such insight can be useful in improving some algorithms on graphs such asspectral clustering.[8] Importantly, communities often have very different properties than the average properties of the networks. Thus, only concentrating on the average properties usually misses many important and interesting features inside the networks. For example, in a given social network, both gregarious and reticent groups might exists simultaneously.[7] Existence of communities also generally affects various processes like rumour spreading or epidemic spreading happening on a network. Hence to properly understand such processes, it is important to detect communities and also to study how they affect the spreading processes in various settings. Finally, an important application that community detection has found in network science is the prediction of missing links and the identification of false links in the network. During the measurement process, some links may not get observed for a number of reasons. Similarly, some links could falsely enter into the data because of the errors in the measurement. Both these cases are well handled by community detection algorithm since it allows one to assign the probability of existence of an edge between a given pair of nodes.[9] Finding communities within an arbitrary network can be acomputationallydifficult task. The number of communities, if any, within the network is typically unknown and the communities are often of unequal size and/or density. Despite these difficulties, however, several methods for community finding have been developed and employed with varying levels of success.[4] One of the oldest algorithms for dividing networks into parts is theminimum cutmethod (and variants such as ratio cut and normalized cut). This method sees use, for example, in load balancing forparallel computingin order to minimize communication between processor nodes. In the minimum-cut method, the network is divided into a predetermined number of parts, usually of approximately the same size, chosen such that the number of edges between groups is minimized. The method works well in many of the applications for which it was originally intended but is less than ideal for finding community structure in general networks since it will find communities regardless of whether they are implicit in the structure, and it will find only a fixed number of them.[10] Another method for finding community structures in networks ishierarchical clustering. In this method one defines asimilarity measurequantifying some (usually topological) type of similarity between node pairs. Commonly used measures include thecosine similarity, theJaccard index, and theHamming distancebetween rows of theadjacency matrix. Then one groups similar nodes into communities according to this measure. There are several common schemes for performing the grouping, the two simplest beingsingle-linkage clustering, in which two groups are considered separate communities if and only if all pairs of nodes in different groups have similarity lower than a given threshold, andcomplete linkage clustering, in which all nodes within every group have similarity greater than a threshold. An important step is how to determine the threshold to stop the agglomerative clustering, indicating a near-to-optimal community structure. A common strategy consist to build one or several metrics monitoring global properties of the network, which peak at given step of the clustering. 
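A minimal sketch of the similarity-based approach described above, assuming NetworkX and SciPy are available: node similarity is taken as the Jaccard index between rows of the adjacency matrix, and nodes are then grouped by single-linkage clustering (complete linkage differs only in the method argument). The toy graph and the cut threshold are arbitrary illustrations.

```python
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Two loosely connected cliques as a toy network.
G = nx.connected_caveman_graph(2, 5)
A = nx.to_numpy_array(G)

# Jaccard distance (1 - similarity) between rows of the adjacency matrix.
D = pdist(A.astype(bool), metric="jaccard")
Z = linkage(D, method="single")          # or "complete" for complete linkage
labels = fcluster(Z, t=0.9, criterion="distance")  # threshold chosen by hand
print(dict(zip(G.nodes(), labels)))
```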
An interesting approach in this direction is the use of various similarity or dissimilarity measures, combined throughconvex sums,.[11]Another approximation is the computation of a quantity monitoring the density of edges within clusters with respect to the density between clusters, such as the partition density, which has been proposed when the similarity metric is defined between edges (which permits the definition of overlapping communities),[12]and extended when the similarity is defined between nodes, which allows to consider alternative definitions of communities such as guilds (i.e. groups of nodes sharing a similar number of links with respect to the same neighbours but not necessarily connected themselves).[13]These methods can be extended to consider multidimensional networks, for instance when we are dealing with networks having nodes with different types of links.[13] Another commonly used algorithm for finding communities is theGirvan–Newman algorithm.[1]This algorithm identifies edges in a network that lie between communities and then removes them, leaving behind just the communities themselves. The identification is performed by employing the graph-theoretic measurebetweenness centrality, which assigns a number to each edge which is large if the edge lies "between" many pairs of nodes. The Girvan–Newman algorithm returns results of reasonable quality and is popular because it has been implemented in a number of standard software packages. But it also runs slowly, taking time O(m2n) on a network ofnvertices andmedges, making it impractical for networks of more than a few thousand nodes.[14] In spite of its known drawbacks, one of the most widely used methods for community detection is modularity maximization.[14]Modularityis a benefit function that measures the quality of a particular division of a network into communities. The modularity maximization method detects communities by searching over possible divisions of a network for one or more that have particularly high modularity. Since exhaustive search over all possible divisions is usually intractable, practical algorithms are based on approximate optimization methods such as greedy algorithms, simulated annealing, or spectral optimization, with different approaches offering different balances between speed and accuracy.[15][16]A popular modularity maximization approach is theLouvain method, which iteratively optimizes local communities until global modularity can no longer be improved given perturbations to the current community state.[17][18] The usefulness of modularity optimization is questionable, as it has been shown that modularity optimization often fails to detect clusters smaller than some scale, depending on the size of the network (resolution limit[19]); on the other hand the landscape of modularity values is characterized by a huge degeneracy of partitions with high modularity, close to the absolute maximum, which may be very different from each other.[20] Methods based onstatistical inferenceattempt to fit agenerative modelto the network data, which encodes the community structure. The overall advantage of this approach compared to the alternatives is its more principled nature, and the capacity to inherently address issues ofstatistical significance. 
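Both the Girvan–Newman algorithm and modularity maximization via the Louvain method are available in NetworkX, whose community module the sketch below assumes (louvain_communities exists only in recent NetworkX releases); the karate-club example graph is a conventional small test case, not a required input.

```python
import networkx as nx
from networkx.algorithms.community import (girvan_newman, louvain_communities,
                                            modularity)

G = nx.karate_club_graph()  # classic small social network

# Girvan-Newman: repeatedly remove the edge with highest betweenness.
# The generator yields successively finer partitions; take the first split.
first_split = next(girvan_newman(G))
print("Girvan-Newman 2-way split sizes:", [len(c) for c in first_split])
print("modularity:", round(modularity(G, first_split), 3))

# Louvain: greedy local optimization of modularity (recent NetworkX versions).
louvain = louvain_communities(G, seed=0)
print("Louvain communities:", len(louvain),
      "modularity:", round(modularity(G, louvain), 3))
```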
Most methods in the literature are based on thestochastic block model[21]as well as variants including mixed membership,[22][23]degree-correction,[24]and hierarchical structures.[25]Model selectioncan be performed using principled approaches such asminimum description length[26][27](or equivalently,Bayesian model selection[28]) andlikelihood-ratio test.[29]Currently many algorithms exist to perform efficient inference of stochastic block models, includingbelief propagation[30][31]and agglomerativeMonte Carlo.[32] In contrast to approaches that attempt to cluster a network given an objective function, this class of methods is based on generative models, which not only serve as a description of the large-scale structure of the network, but also can be used togeneralizethe data and predict the occurrence of missing or spurious links in the network.[33][34] Cliquesare subgraphs in which every node is connected to every other node in the clique. As nodes can not be more tightly connected than this, it is not surprising that there are many approaches to community detection in networks based on the detection of cliques in a graph and the analysis of how these overlap. Note that as a node can be a member of more than one clique, a node can be a member of more than one community in these methods giving an "overlapping community structure". One approach is to find the "maximal cliques". That is to find the cliques which are not the subgraph of any other clique. The classic algorithm to find these is theBron–Kerbosch algorithm. The overlap of these can be used to define communities in several ways. The simplest is to consider only maximal cliques bigger than a minimum size (number of nodes). The union of these cliques then defines a subgraph whose components (disconnected parts) then define communities.[35]Such approaches are often implemented insocial network analysis softwaresuch as UCInet. The alternative approach is to use cliques of fixed sizek{\displaystyle k}. The overlap of these can be used to define a type ofk{\displaystyle k}-regularhypergraphor a structure which is a generalisation of theline graph(the case whenk=2{\displaystyle k=2}) known as a "Clique graph".[36]The clique graphs have vertices which represent the cliques in the original graph while the edges of the clique graph record the overlap of the clique in the original graph. Applying any of the previous community detection methods (which assign each node to a community) to the clique graph then assigns each clique to a community. This can then be used to determine community membership of nodes in the cliques. Again as a node may be in several cliques, it can be a member of several communities. For instance theclique percolation method[37]defines communities aspercolation clustersofk{\displaystyle k}-cliques. To do this it finds allk{\displaystyle k}-cliques in a network, that is all the complete sub-graphs ofk{\displaystyle k}-nodes. It then defines twok{\displaystyle k}-cliques to be adjacent if they sharek−1{\displaystyle k-1}nodes, that is this is used to define edges in a clique graph. A community is then defined to be the maximal union ofk{\displaystyle k}-cliques in which we can reach anyk{\displaystyle k}-clique from any otherk{\displaystyle k}-clique through series ofk{\displaystyle k}-clique adjacencies. That is communities are just the connected components in the clique graph. 
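The clique percolation method just described is exposed in NetworkX as k_clique_communities; the small sketch below (graph and k chosen purely for illustration) finds the communities of 3-cliques and shows how a single node can belong to more than one of them.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Two triangles sharing node 2: each triangle is a 3-clique, and the two
# 3-cliques share only one node (fewer than k-1 = 2), so they form two
# separate k-clique communities that overlap in node 2.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)])
communities = [set(c) for c in k_clique_communities(G, 3)]
print(communities)   # node 2 appears in both communities
```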
Since a node can belong to several differentk{\displaystyle k}-clique percolation clusters at the same time, the communities can overlap with each other. A network can be represented or projected onto alatent spaceviarepresentation learningmethods to efficiently represent a system. Then, variousclusteringmethods can be employed to detect community structures. For Euclidean spaces, methods like embedding-based Silhouette community detection[38]can be utilized. For Hypergeometric latent spaces, critical gap method or modified density-based, hierarchical, or partitioning-based clustering methods can be utilized.[39] The evaluation of algorithms, to detect which are better at detecting community structure, is still an open question. It must be based on analyses of networks of known structure. A typical example is the "four groups" test, in which a network is divided into four equally-sized groups (usually of 32 nodes each) and the probabilities of connection within and between groups varied to create more or less challenging structures for the detection algorithm. Such benchmark graphs are a special case of theplanted l-partition model[40]ofCondonandKarp, or more generally of "stochastic block models", a general class of random network models containing community structure. Other more flexible benchmarks have been proposed that allow for varying group sizes and nontrivial degree distributions, such asLFR benchmark[41][42]which is an extension of the four groups benchmark that includes heterogeneous distributions of node degree and community size, making it a more severe test of community detection methods.[43][44] Commonly used computer-generated benchmarks start with a network of well-defined communities. Then, this structure is degraded by rewiring or removing links and it gets harder and harder for the algorithms to detect the original partition. At the end, the network reaches a point where it is essentially random. This kind of benchmark may be called "open". The performance on these benchmarks is evaluated by measures such as normalizedmutual informationorvariation of information. They compare the solution obtained by an algorithm[42]with the original community structure, evaluating the similarity of both partitions. During recent years, a rather surprising result has been obtained by various groups which shows that a phase transition exists in the community detection problem, showing that as the density of connections inside communities and between communities become more and more equal or both become smaller (equivalently, as the community structure becomes too weak or the network becomes too sparse), suddenly the communities become undetectable. In a sense, the communities themselves still exist, since the presence and absence of edges is still correlated with the community memberships of their endpoints; but it becomes information-theoretically impossible to label the nodes better than chance, or even distinguish the graph from one generated by a null model such as theErdos–Renyi modelwithout community structure. 
This transition is independent of the type of algorithm being used to detect communities, implying that there exists a fundamental limit on our ability to detect communities in networks, even with optimal Bayesian inference (i.e., regardless of our computational resources).[45][46][47] Consider astochastic block modelwith totaln{\displaystyle n}nodes,q=2{\displaystyle q=2}groups of equal size, and letpin{\displaystyle p_{\text{in}}}andpout{\displaystyle p_{\text{out}}}be the connection probabilities inside and between the groups respectively. Ifpin>pout{\displaystyle p_{\text{in}}>p_{\text{out}}}, the network would possess community structure since the link density inside the groups would be more than the density of links between the groups. In the sparse case,pin{\displaystyle p_{\text{in}}}andpout{\displaystyle p_{\text{out}}}scale asO(1/n){\displaystyle O(1/n)}so that the average degree is constant:c¯=(cin+cout)/2{\displaystyle {\bar {c}}=(c_{\text{in}}+c_{\text{out}})/2}, withcin=npin{\displaystyle c_{\text{in}}=np_{\text{in}}}andcout=npout{\displaystyle c_{\text{out}}=np_{\text{out}}}. Then it becomes impossible to detect the communities when:[46]|cin−cout|≤2c¯{\displaystyle |c_{\text{in}}-c_{\text{out}}|\leq 2{\sqrt {\bar {c}}}}
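A small numerical illustration of this threshold, assuming NetworkX and scikit-learn: a two-group stochastic block model is generated once well above the bound and once inside the undetectable regime, and the recovered partition is scored against the planted groups with normalized mutual information. Louvain is used here only as a convenient detector; the theoretical statement concerns optimal inference, so the exact NMI values are illustrative.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import louvain_communities
from sklearn.metrics import normalized_mutual_info_score

def sbm_nmi(c_in, c_out, n=2000, seed=0):
    """Planted 2-group SBM with within/between degrees c_in, c_out."""
    sizes = [n // 2, n // 2]
    p = [[c_in / n, c_out / n], [c_out / n, c_in / n]]
    G = nx.stochastic_block_model(sizes, p, seed=seed)
    truth = [0] * sizes[0] + [1] * sizes[1]
    pred = np.empty(n, dtype=int)
    for i, community in enumerate(louvain_communities(G, seed=seed)):
        pred[list(community)] = i
    return normalized_mutual_info_score(truth, pred)

for c_in, c_out in [(16, 4), (11, 9)]:      # far above vs. below the bound
    avg_deg = (c_in + c_out) / 2
    detectable = abs(c_in - c_out) > 2 * np.sqrt(avg_deg)
    print(f"c_in={c_in}, c_out={c_out}, above bound: {detectable}, "
          f"NMI={sbm_nmi(c_in, c_out):.2f}")
```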
https://en.wikipedia.org/wiki/Community_structure
ATwitter botor anX botis a type of softwarebotthat controls aTwitter/Xaccount via the TwitterAPI.[1]Thesocial botsoftware may autonomously perform actions such as tweeting, retweeting, liking, following, unfollowing, or direct messaging other accounts.[citation needed]The automation of Twitter accounts is governed by a set of automation rules that outline proper and improper uses of automation.[2]Proper usage includes broadcasting helpful information, automatically generating interesting or creative content, and automatically replying to users via direct message.[3][4][5]Improper usage includes circumventing API rate limits, violating user privacy, spamming,[6]andsockpuppeting. Twitter bots may be part of a largerbotnet. They can be used to influenceelectionsand inmisinformationcampaigns. Twitter's policies do allow non-abusive bots, such as those created as a benign hobby or for artistic purposes,[7]or posting helpful information,[8]although price changes introduced to the previously free API service in June 2023 resulted in many such accounts closing.[9] Many non-malicious bots are popular for their entertainment value. However, as technology and the creativity of bot-makers improves, so does the potential for Twitter bots that fill social needs.[10][11]@tinycarebot is a Twitter bot that encourages followers to practiceself care, and brands are increasingly using automated Twitter bots toengage with customers in interactive ways.[12][13]One anti-bullying organization has created @TheNiceBot, which attempts to combat the prevalence of mean tweets by automatically tweeting kind messages.[14] In June 2023, Twitter began charging $100 per month for basic access to its API, resulting in many entertainment bots being suspended or taken down.[9] Concerns about political Twitter bots include the promulgation of malicious content, increasedpolarization, and the spreading offake news.[15][16][17]A subset of Twitter bots programmed to complete social tasks played an important role in the United States2016 Presidential Election.[18]Researchers estimated that pro-Trumpbots generated four tweets for every pro-Clintonautomated account and out-tweeted pro-Clinton bots 7:1 on relevant hashtags during the final debate. Deceiving Twitter bots fooled candidates and campaign staffers into retweeting misappropriated quotes and accounts affiliated withincendiary ideals.[19][20][21]Twitter bots have also been documented to influence online politics inVenezuela.[22]In 2019, 20% of the globalTwitter trendswere found to be created automatically using bots originating from Turkey. 
It is reported that 108,000 bot accounts were bulk tweeting to push 19,000 keywords to top trends in Turkey, to promote slogans such as political campaigns related to the2019 Turkish local elections.[23] In November 2022, Chinese bots coordinately flooded Twitter with garbage information (e.g.online gamblingads) so as to distract the users' attention away from theprotests.[24]These bots, disguised as attractive girls,hashtaggedthe major cities in China.[25] The majority of Twitter accounts following public figures and brands are often fake or inactive, making the number of Twitter followers a celebrity has a difficult metric for gauging popularity.[26]While this cannot always be helped, some public figures who have gained or lost huge quantities of followers in short periods of time have been accused of discreetly paying for Twitter followers.[27][28]For example, the Twitter accounts ofSean Combs, RepJared Polis(D-Colo),PepsiCo,Mercedes-Benz, and50 Centhave come under scrutiny for possibly engaging in the buying and selling of Twitter followers, which is estimated to be between a $40 million and $360 million business annually.[27][28]Account sellers may charge a premium for more realistic accounts that have Twitter profile pictures and bios and retweet the accounts they follow.[28]In addition to an ego boost, public figures may gain more lucrative endorsement contracts from inflated Twitter metrics.[27]For brands, however, the translation of online buzz and social media followers into sales has recently come under question afterThe Coca-Cola Companydisclosed that a corporate study revealed that social media buzz does not create a spike in short term sales.[29][30] It is sometimes desirable to identify when a Twitter account is controlled by aninternet bot.[31]Following a test period, Twitter rolled out labels to identify bot accounts and automated tweets in February 2022.[32][33] Detecting non-human Twitter users has been of interest to academics.[31][34] In a 2012 paper, Chu et al. propose the following criteria that indicate that an account may be a bot (they were designing an automated system):[1] Emilio Ferraraat theUniversity of Southern Californiaused artificial intelligence to identify Twitter bots. He found that humans reply to other tweets four or five times more than bots and that bots continue to post longer tweets over time.[35]Bots also post at more regular time gaps, for example, tweeting at 30-minute or 60-minute intervals.[35] Indiana Universityhas developed a free service called Botometer[36](formerly BotOrNot), which scores Twitter handles based on their likelihood of being a Twitterbot.[37][38][39] Recent research fromEPFLargued that classifying a Twitter account as bot or not may not be always possible because hackers take over human accounts and use them as bots temporarily or permanently[40]and in parallel to the owner of the account in some cases.[23] There are many different types of Twitter bots and their purposes vary from one to another. 
Some examples include: In 2009, based on a study bySysomos, Twitter bots were estimated to create approximately 24% of tweets on Twitter.[60]According to the company, there were 20 million, fewer than 5%, of accounts on Twitter that were fraudulent in 2013.[61]In 2013, two Italian researchers calculated 10 percent of total accounts on Twitter were "bots" although other estimates have placed the figure even higher.[62]One significant academic study in 2017 estimated that up to 15% of Twitter users were automated bot accounts.[63][64]A 2020 estimate puts the figure at 15% of all accounts or around 48 million accounts.[65] A 2023 MIT study found that third-party tools used to detect bots may not be as accurate as they are trained on data being collected in simplistic ways, and each tweet in these training sets then manually labeled by people as a bot or a human.[66]Already in 2019 German researchers scrutinized studies that were using Botswatch and Botometer, dismissing them as fundamentally flawed and concluded that (unlike spam accounts) there is no evidence that "social bots" even exist.[67] The prevalence of Twitter bots coupled with the ability of some bots to give seemingly human responses has enabled these non-human accounts to garner widespread influence.[68][69][20][70]The social implications these Twitter bots potentially have on human perception are sizeable. Looking at the Computers as Social Actors (CASA) paradigm, the journal notes, "people exhibit remarkable social reactions to computers and other media, treating them as if they were real people or real places." The study concluded that Twitter bots were viewed as credible and competent in communication and interaction making them suitable for transmitting information in the social media sphere.[71]Whether posts are perceived to be generated by humans or bots depends on partisanship, a 2023 study found.[72]
https://en.wikipedia.org/wiki/Twitter_bot
Ingraph theory, aclique(/ˈkliːk/or/ˈklɪk/) is a subset of vertices of anundirected graphsuch that every two distinct vertices in the clique areadjacent. That is, a clique of a graphG{\displaystyle G}is aninduced subgraphofG{\displaystyle G}that iscomplete. Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs. Cliques have also been studied incomputer science: the task of finding whether there is a clique of a given size in agraph(theclique problem) isNP-complete, but despite this hardness result, many algorithms for finding cliques have been studied. Although the study ofcomplete subgraphsgoes back at least to the graph-theoretic reformulation ofRamsey theorybyErdős & Szekeres (1935),[1]the termcliquecomes fromLuce & Perry (1949), who used complete subgraphs insocial networksto modelcliquesof people; that is, groups of people all of whom know each other. Cliques have many other applications in the sciences and particularly inbioinformatics. Aclique,C, in anundirected graphG= (V,E)is a subset of thevertices,C⊆V, such that every two distinct vertices are adjacent. This is equivalent to the condition that theinduced subgraphofGinduced byCis acomplete graph. In some cases, the term clique may also refer to the subgraph directly. Amaximal cliqueis a clique that cannot be extended by including one more adjacent vertex, that is, a clique which does not exist exclusively within the vertex set of a larger clique. Some authors define cliques in a way that requires them to be maximal, and use other terminology for complete subgraphs that are not maximal. Amaximum cliqueof a graph,G, is a clique, such that there is no clique with more vertices. Moreover, theclique numberω(G)of a graphGis the number of vertices in a maximum clique inG. Theintersection numberofGis the smallest number of cliques that together cover all edges ofG. Theclique cover numberof a graphGis the smallest number of cliques ofGwhose union covers the set of verticesVof the graph. Amaximum clique transversalof a graph is a subset of vertices with the property that each maximum clique of the graph contains at least one vertex in the subset.[2] The opposite of a clique is anindependent set, in the sense that every clique corresponds to an independent set in thecomplement graph. Theclique coverproblem concerns finding as few cliques as possible that include every vertex in the graph. A related concept is abiclique, acomplete bipartite subgraph. Thebipartite dimensionof a graph is the minimum number of bicliques needed to cover all the edges of the graph. Mathematical results concerning cliques include the following. Several important classes of graphs may be defined or characterized by their cliques: Additionally, many other mathematical constructions involve cliques in graphs. Among them, Closely related concepts to complete subgraphs aresubdivisionsof complete graphs and completegraph minors. In particular,Kuratowski's theoremandWagner's theoremcharacterizeplanar graphsby forbidden complete andcomplete bipartitesubdivisions and minors, respectively. Incomputer science, theclique problemis the computational problem of finding a maximum clique, or all cliques, in a given graph. It isNP-complete, one ofKarp's 21 NP-complete problems.[6]It is alsofixed-parameter intractable, andhard to approximate. 
Nevertheless, manyalgorithmsfor computing cliques have been developed, either running inexponential time(such as theBron–Kerbosch algorithm) or specialized to graph families such asplanar graphsorperfect graphsfor which the problem can be solved inpolynomial time. The word "clique", in its graph-theoretic usage, arose from the work ofLuce & Perry (1949), who used complete subgraphs to modelcliques(groups of people who all know each other) insocial networks. The same definition was used byFestinger (1949)in an article using less technical terms. Both works deal with uncovering cliques in a social network using matrices. For continued efforts to model social cliques graph-theoretically, see e.g.Alba (1973),Peay (1974), andDoreian & Woodard (1994). Many different problems frombioinformaticshave been modeled using cliques. For instance,Ben-Dor, Shamir & Yakhini (1999)model the problem of clusteringgene expressiondata as one of finding the minimum number of changes needed to transform a graph describing the data into a graph formed as the disjoint union of cliques;Tanay, Sharan & Shamir (2002)discuss a similarbiclusteringproblem for expression data in which the clusters are required to be cliques.Sugihara (1984)uses cliques to modelecological nichesinfood webs.Day & Sankoff (1986)describe the problem of inferringevolutionary treesas one of finding maximum cliques in a graph that has as its vertices characteristics of the species, where two vertices share an edge if there exists aperfect phylogenycombining those two characters.Samudrala & Moult (1998)modelprotein structure predictionas a problem of finding cliques in a graph whose vertices represent positions of subunits of the protein. And by searching for cliques in aprotein–protein interactionnetwork,Spirin & Mirny (2003)found clusters of proteins that interact closely with each other and have few interactions with proteins outside the cluster.Power graph analysisis a method for simplifying complex biological networks by finding cliques and related structures in these networks. Inelectrical engineering,Prihar (1956)uses cliques to analyze communications networks, andPaull & Unger (1959)use them to design efficient circuits for computing partially specified Boolean functions. Cliques have also been used inautomatic test pattern generation: a large clique in an incompatibility graph of possible faults provides a lower bound on the size of a test set.[7]Cong & Smith (1993)describe an application of cliques in finding a hierarchical partition of an electronic circuit into smaller subunits. Inchemistry,Rhodes et al. (2003)use cliques to describe chemicals in achemical databasethat have a high degree of similarity with a target structure.Kuhl, Crippen & Friesen (1983)use cliques to model the positions in which two chemicals will bind to each other.
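Assuming NetworkX is available, the sketch below enumerates the maximal cliques of a small example graph with find_cliques, which implements a Bron–Kerbosch-style search, and reads off the clique number; the example graph itself is an arbitrary illustration.

```python
import networkx as nx

# A small graph: a 4-clique {0,1,2,3} plus a triangle {3,4,5} attached to it.
G = nx.complete_graph(4)
G.add_edges_from([(3, 4), (4, 5), (3, 5)])

maximal_cliques = list(nx.find_cliques(G))      # Bron-Kerbosch-style search
clique_number = max(len(c) for c in maximal_cliques)
print("maximal cliques:", [sorted(c) for c in maximal_cliques])  # [[0,1,2,3], [3,4,5]]
print("clique number omega(G):", clique_number)                  # 4
```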
https://en.wikipedia.org/wiki/Clique_(graph_theory)
Latent semantic analysis(LSA) is a technique innatural language processing, in particulardistributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (thedistributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique calledsingular value decomposition(SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared bycosine similaritybetween any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.[1] An information retrieval technique using latent semantic structure was patented in 1988[2]byScott Deerwester,Susan Dumais,George Furnas,Richard Harshman,Thomas Landauer,Karen LochbaumandLynn Streeter. In the context of its application toinformation retrieval, it is sometimes calledlatent semantic indexing(LSI).[3] LSA can use adocument-term matrixwhich describes the occurrences of terms in documents; it is asparse matrixwhose rows correspond totermsand whose columns correspond to documents. A typical example of the weighting of the elements of the matrix istf-idf(term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used. After the construction of the occurrence matrix, LSA finds alow-rank approximation[5]to theterm-document matrix. There could be various reasons for these approximations: The consequence of the rank lowering is that some dimensions are combined and depend on more than one term: This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem withpolysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense. LetX{\displaystyle X}be a matrix where element(i,j){\displaystyle (i,j)}describes the occurrence of termi{\displaystyle i}in documentj{\displaystyle j}(this can be, for example, the frequency).X{\displaystyle X}will look like this: Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document: Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term: Now thedot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}between two term vectors gives thecorrelationbetween the terms over the set of documents. Thematrix productXXT{\displaystyle XX^{T}}contains all these dot products. 
Element(i,p){\displaystyle (i,p)}(which is equal to element(p,i){\displaystyle (p,i)}) contains the dot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}(=tpTti{\displaystyle ={\textbf {t}}_{p}^{T}{\textbf {t}}_{i}}). Likewise, the matrixXTX{\displaystyle X^{T}X}contains the dot products between all the document vectors, giving their correlation over the terms:djTdq=dqTdj{\displaystyle {\textbf {d}}_{j}^{T}{\textbf {d}}_{q}={\textbf {d}}_{q}^{T}{\textbf {d}}_{j}}. Now, from the theory of linear algebra, there exists a decomposition ofX{\displaystyle X}such thatU{\displaystyle U}andV{\displaystyle V}areorthogonal matricesandΣ{\displaystyle \Sigma }is adiagonal matrix. This is called asingular value decomposition(SVD): The matrix products giving us the term and document correlations then become SinceΣΣT{\displaystyle \Sigma \Sigma ^{T}}andΣTΣ{\displaystyle \Sigma ^{T}\Sigma }are diagonal we see thatU{\displaystyle U}must contain theeigenvectorsofXXT{\displaystyle XX^{T}}, whileV{\displaystyle V}must be the eigenvectors ofXTX{\displaystyle X^{T}X}. Both products have the same non-zero eigenvalues, given by the non-zero entries ofΣΣT{\displaystyle \Sigma \Sigma ^{T}}, or equally, by the non-zero entries ofΣTΣ{\displaystyle \Sigma ^{T}\Sigma }. Now the decomposition looks like this: The valuesσ1,…,σl{\displaystyle \sigma _{1},\dots ,\sigma _{l}}are called the singular values, andu1,…,ul{\displaystyle u_{1},\dots ,u_{l}}andv1,…,vl{\displaystyle v_{1},\dots ,v_{l}}the left and right singular vectors. Notice the only part ofU{\displaystyle U}that contributes toti{\displaystyle {\textbf {t}}_{i}}is thei'th{\displaystyle i{\textrm {'th}}}row. Let this row vector be calledt^iT{\displaystyle {\hat {\textrm {t}}}_{i}^{T}}. Likewise, the only part ofVT{\displaystyle V^{T}}that contributes todj{\displaystyle {\textbf {d}}_{j}}is thej'th{\displaystyle j{\textrm {'th}}}column,d^j{\displaystyle {\hat {\textrm {d}}}_{j}}. These arenotthe eigenvectors, butdependonallthe eigenvectors. It turns out that when you select thek{\displaystyle k}largest singular values, and their corresponding singular vectors fromU{\displaystyle U}andV{\displaystyle V}, you get the rankk{\displaystyle k}approximation toX{\displaystyle X}with the smallest error (Frobenius norm). This approximation has a minimal error. But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vectort^iT{\displaystyle {\hat {\textbf {t}}}_{i}^{T}}then hask{\displaystyle k}entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vectord^j{\displaystyle {\hat {\textbf {d}}}_{j}}is an approximation in this lower-dimensional space. We write this approximation as You can now do the following: To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents: Note here that the inverse of the diagonal matrixΣk{\displaystyle \Sigma _{k}}may be found by inverting each nonzero value within the matrix. This means that if you have a query vectorq{\displaystyle q}, you must do the translationq^=Σk−1UkTq{\displaystyle {\hat {\textbf {q}}}=\Sigma _{k}^{-1}U_{k}^{T}{\textbf {q}}}before you compare it with the document vectors in the low-dimensional space. 
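A compact NumPy sketch of the pipeline above: build a small term-document count matrix X, take its SVD, keep the k largest singular values, and map a query into the same k-dimensional space via the transformation Σ_k⁻¹ U_kᵀ q described above. The toy corpus, the term list and the choice k = 2 are illustrative assumptions.

```python
import numpy as np

# Toy term-document count matrix X (rows = terms, columns = documents).
terms = ["ship", "boat", "ocean", "wood", "tree"]
X = np.array([[1, 0, 1, 0, 0],
              [0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V^T

k = 2                                              # keep k largest singular values
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]
Xk = Uk @ Sk @ Vtk                                 # best rank-k approximation of X

# Translate a query into the low-dimensional space: q_hat = Sk^-1 Uk^T q
q = np.array([1, 0, 1, 0, 0], dtype=float)         # query containing "ship", "ocean"
q_hat = np.linalg.inv(Sk) @ Uk.T @ q

# Compare the query with each document by cosine similarity in the k-dim space.
doc_vectors = Vtk.T                                # one k-dimensional row per document
cos = doc_vectors @ q_hat / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_hat))
print(np.round(cos, 2))
```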
You can do the same for pseudo term vectors: The new low-dimensional space typically can be used to: Synonymy and polysemy are fundamental problems innatural language processing: LSA has been used to assist in performingprior artsearches forpatents.[9] The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas offree recalland memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. They also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as theSemantic Proximity Effect.[10] When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[11] Another model, termedWord Association Spaces(WAS) is also used in memory studies by collecting free association data from a series of experiments and which includes measures of word relatedness for over 72,000 distinct word pairs.[12] TheSVDis typically computed using large matrix methods (for example,Lanczos methods) but may also be computed incrementally and with greatly reduced resources via aneural network-like approach, which does not require the large, full-rank matrix to be held in memory.[13]A fast, incremental, low-memory, large-matrix SVD algorithm has been developed.[14]MATLAB[15]and Python[16]implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.[17] Some of LSA's drawbacks include: In semantic hashing[21]documents are mapped to memory addresses by means of aneural networkin such a way that semantically similar documents are located at nearby addresses.Deep neural networkessentially builds agraphical modelof the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster thanlocality sensitive hashing, which is the fastest current method.[clarification needed] Latent semantic indexing(LSI) is an indexing and retrieval method that uses a mathematical technique calledsingular value decomposition(SVD) to identify patterns in the relationships between thetermsandconceptscontained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. 
A key feature of LSI is its ability to extract the conceptual content of abody of textby establishing associations between those terms that occur in similarcontexts.[22] LSI is also an application ofcorrespondence analysis, a multivariate statistical technique developed byJean-Paul Benzécri[23]in the early 1970s, to acontingency tablebuilt from word counts in documents. Called "latent semanticindexing" because of its ability to correlatesemanticallyrelated terms that arelatentin a collection of text, it was first applied to text atBellcorein the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria. LSI helps overcome synonymy by increasingrecall, one of the most problematic constraints ofBoolean keyword queriesand vector space models.[18]Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users ofinformation retrievalsystems.[24]As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant. LSI is also used to perform automateddocument categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[25]Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[26]LSI usesexampledocuments to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents. Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text. Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguisticconcept searchingand example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.[citation needed] LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. 
For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[27] LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[28]This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data. Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text. LSI has proven to be a useful solution to a number of conceptual matching problems.[29][30]The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[31] LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing aSingular Value Decompositionon the matrix, and using the matrix to identify the concepts contained in the text. LSI begins by constructing a term-document matrix,A{\displaystyle A}, to identify the occurrences of them{\displaystyle m}unique terms within a collection ofn{\displaystyle n}documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell,aij{\displaystyle a_{ij}}, initially representing the number of times the associated term appears in the indicated document,tfij{\displaystyle \mathrm {tf_{ij}} }. This matrix is usually very large and very sparse. Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell,aij{\displaystyle a_{ij}}ofA{\displaystyle A}, to be the product of a local term weight,lij{\displaystyle l_{ij}}, which describes the relative frequency of a term in a document, and a global weight,gi{\displaystyle g_{i}}, which describes the relative frequency of the term within the entire collection of documents. Some common local weighting functions[33]are defined in the following table. Some common global weighting functions are defined in the following table. Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[34]In other words, each entryaij{\displaystyle a_{ij}}ofA{\displaystyle A}is computed as: A rank-reduced,singular value decompositionis performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. 
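The Log-Entropy weighting mentioned above can be written out concretely. In the sketch below, the count matrix is an illustrative placeholder; the formulas used are the log local weight log(tf_ij + 1) and the entropy global weight g_i = 1 + Σ_j (p_ij log p_ij) / log n with p_ij = tf_ij / gf_i, where gf_i is the total count of term i over the collection.

```python
# Log-Entropy weighting of a small term-document count matrix (toy data).
import numpy as np

tf = np.array([[3, 0, 1],
               [0, 2, 0],
               [1, 1, 4]], dtype=float)     # m = 3 terms, n = 3 documents

n_docs = tf.shape[1]
gf = tf.sum(axis=1, keepdims=True)          # global frequency gf_i of each term
p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)

# Entropy global weight; the convention 0 * log 0 = 0 is handled explicitly.
plogp = np.where(p > 0, p * np.log(p), 0.0)
g = 1.0 + plogp.sum(axis=1) / np.log(n_docs)

local = np.log(tf + 1.0)                    # log local weight
A = g[:, None] * local                      # weighted term-document matrix
print(A)
```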
The SVD forms the foundation for LSI.[35]It computes the term and document vector spaces by approximating the single term-frequency matrix,A{\displaystyle A}, into three other matrices— anmbyrterm-concept vector matrixT{\displaystyle T}, anrbyrsingular values matrixS{\displaystyle S}, and anbyrconcept-document vector matrix,D{\displaystyle D}, which satisfy the following relations: A≈TSDT{\displaystyle A\approx TSD^{T}} TTT=IrDTD=Ir{\displaystyle T^{T}T=I_{r}\quad D^{T}D=I_{r}} S1,1≥S2,2≥…≥Sr,r>0Si,j=0wherei≠j{\displaystyle S_{1,1}\geq S_{2,2}\geq \ldots \geq S_{r,r}>0\quad S_{i,j}=0\;{\text{where}}\;i\neq j} In the formula,Ais the suppliedmbynweighted matrix of term frequencies in a collection of text wheremis the number of unique terms, andnis the number of documents.Tis a computedmbyrmatrix of term vectors whereris the rank ofA—a measure of its unique dimensions≤ min(m,n).Sis a computedrbyrdiagonal matrix of decreasing singular values, andDis a computednbyrmatrix of document vectors. The SVD is thentruncatedto reduce the rank by keeping only the largestk«rdiagonal entries in the singular value matrixS, wherekis typically on the order 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes tombykandnbykrespectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space ofA. This reduced set of matrices is often denoted with a modified formula such as: Efficient LSI algorithms only compute the firstksingular values and term and document vectors as opposed to computing a full SVD and then truncating it. Note that this rank reduction is essentially the same as doingPrincipal Component Analysis(PCA) on the matrixA, except that PCA subtracts off the means. PCA loses the sparseness of theAmatrix, which can make it infeasible for large lexicons. The computedTkandDkmatrices define the term and document vector spaces, which with the computed singular values,Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors. The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of theA = T S DTequation into the equivalentD = ATT S−1equation, a new vector,d, for a query or for a new document can be created by computing a new column inAand then multiplying the new column byT S−1. The new column inAis computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document. A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors. The process of augmenting the document vector spaces for an LSI index with new documents in this manner is calledfolding in. 
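The rank-k truncation and the folding-in formula d = a^T T_k S_k^{-1} can be shown directly with numpy. The matrix below and the new document column are illustrative placeholders, not data from the article.

```python
# Truncated SVD of a toy term-document matrix plus folding-in of a new document.
import numpy as np

A = np.array([[1., 0., 0., 1.],
              [1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])            # m terms x n documents (toy data)

k = 2
T, s, Dt = np.linalg.svd(A, full_matrices=False)
T_k = T[:, :k]                              # m x k term-concept vectors
S_k = np.diag(s[:k])                        # k x k singular values
D_k = Dt[:k, :].T                           # n x k document-concept vectors

# Rank-k reconstruction A_k = T_k S_k D_k^T keeps the dominant structure.
A_k = T_k @ S_k @ D_k.T

# Folding in a new document: weight its terms as in the original collection,
# then project with d = a^T T_k S_k^{-1}.
a_new = np.array([1., 0., 1., 0.])          # term weights of the new document
d_new = a_new @ T_k @ np.linalg.inv(S_k)    # its k-dimensional document vector
print(d_new)
```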
Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix, and the SVD, must be recomputed or an incremental update method (such as the one described in[14]) is needed. It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[36]Below are some other ways in which LSI is being used: LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[51] Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[52]However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open sourcegensimsoftware package.[53] Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[54]However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.[55]Checking the proportion of variance retained, similar toPCAorfactor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[56]When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality. 
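For reference, the gensim package mentioned above exposes LSI with online updating. The snippet below is only a rough sketch under the assumption of a recent gensim version; the toy corpus and the number of topics are placeholders, and details of the API may differ between releases.

```python
# Sketch of LSI with gensim, including online addition of new documents.
from gensim import corpora, models

texts = [["stock", "market", "fell"],
         ["investors", "sold", "stock"],
         ["cats", "and", "dogs"],
         ["dogs", "chase", "cats"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# Online training: new documents can be folded in without a full rebuild.
lsi.add_documents([dictionary.doc2bow(["market", "dropped"])])

# Project a query into the latent space.
print(lsi[dictionary.doc2bow(["stock", "market"])])
```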
Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.
https://en.wikipedia.org/wiki/Latent_Semantic_Indexing
fastText is a library for learning of word embeddings and text classification created by Facebook's AI Research (FAIR) lab.[3][4][5][6] The model allows one to create an unsupervised learning or supervised learning algorithm for obtaining vector representations for words. Facebook makes available pretrained models for 294 languages.[7][8] Several papers describe the techniques used by fastText.[9][10][11][12]
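Typical use of the library looks roughly like the sketch below, assuming the `fasttext` Python bindings are installed; the file paths are placeholders and must point to real training data.

```python
# Rough sketch of fastText usage (file paths are placeholders).
import fasttext

# Unsupervised word embeddings (skip-gram); corpus.txt has one text per line.
emb_model = fasttext.train_unsupervised("corpus.txt", model="skipgram", dim=100)
print(emb_model.get_word_vector("language")[:5])

# Supervised text classification; train.txt uses the __label__<class> prefix
# convention, e.g. "__label__sports the match ended in a draw".
clf = fasttext.train_supervised("train.txt")
print(clf.predict("the match ended in a draw"))
```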
https://en.wikipedia.org/wiki/FastText
Instatistics, anexpectation–maximization(EM)algorithmis aniterative methodto find (local)maximum likelihoodormaximum a posteriori(MAP) estimates ofparametersinstatistical models, where the model depends on unobservedlatent variables.[1]The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of thelog-likelihoodevaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on theEstep. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture ofgaussians, or to solve the multiple linear regression problem.[2] The EM algorithm was explained and given its name in a classic 1977 paper byArthur Dempster,Nan Laird, andDonald Rubin.[3]They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies byCedric Smith.[4]Another was proposed byH.O. Hartleyin 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5]Another one by S.K Ng, Thriyambakam Krishnan and G.J McLachlan in 1977.[6]Hartley’s ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9]following his collaboration withPer Martin-LöfandAnders Martin-Löf.[10][11][12][13][14]The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997). The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published byC. F. Jeff Wuin 1983.[15]Wu's proof established the EM method's convergence also outside of theexponential family, as claimed by Dempster–Laird–Rubin.[15] The EM algorithm is used to find (local)maximum likelihoodparameters of astatistical modelin cases where the equations cannot be solved directly. Typically these models involvelatent variablesin addition to unknownparametersand known data observations. That is, eithermissing valuesexist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, amixture modelcan be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs. Finding a maximum likelihood solution typically requires taking thederivativesof thelikelihood functionwith respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. 
One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or asaddle point.[15]In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also havesingularitiesin them, i.e., nonsensical maxima. For example, one of thesolutionsthat may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points. Given thestatistical modelwhich generates a setX{\displaystyle \mathbf {X} }of observed data, a set of unobserved latent data ormissing valuesZ{\displaystyle \mathbf {Z} }, and a vector of unknown parametersθ{\displaystyle {\boldsymbol {\theta }}}, along with alikelihood functionL(θ;X,Z)=p(X,Z∣θ){\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})}, themaximum likelihood estimate(MLE) of the unknown parameters is determined by maximizing themarginal likelihoodof the observed data However, this quantity is often intractable sinceZ{\displaystyle \mathbf {Z} }is unobserved and the distribution ofZ{\displaystyle \mathbf {Z} }is unknown before attainingθ{\displaystyle {\boldsymbol {\theta }}}. The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps: More succinctly, we can write it as one equation:θ(t+1)=argmaxθEZ∼p(⋅|X,θ(t))⁡[log⁡p(X,Z|θ)]{\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,} The typical models to which EM is applied useZ{\displaystyle \mathbf {Z} }as a latent variable indicating membership in one of a set of groups: However, it is possible to apply EM to other sorts of models. The motivation is as follows. If the value of the parametersθ{\displaystyle {\boldsymbol {\theta }}}is known, usually the value of the latent variablesZ{\displaystyle \mathbf {Z} }can be found by maximizing the log-likelihood over all possible values ofZ{\displaystyle \mathbf {Z} }, either simply by iterating overZ{\displaystyle \mathbf {Z} }or through an algorithm such as theViterbi algorithmforhidden Markov models. Conversely, if we know the value of the latent variablesZ{\displaystyle \mathbf {Z} }, we can find an estimate of the parametersθ{\displaystyle {\boldsymbol {\theta }}}fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where bothθ{\displaystyle {\boldsymbol {\theta }}}andZ{\displaystyle \mathbf {Z} }are unknown: The algorithm as just described monotonically approaches a local minimum of the cost function. 
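The alternation just described can be made concrete with a very small example. The sketch below uses a mixture of two biased coins, which is an illustrative model chosen here and not one discussed in the article: each observation counts heads in a fixed number of flips of one of two coins, the coin identity is the latent variable, and the two biases are the parameters. The data and initial guesses are arbitrary.

```python
# EM for a two-coin mixture: E step computes coin responsibilities,
# M step re-estimates the biases from the weighted data.
import numpy as np
from scipy.stats import binom

heads = np.array([5, 9, 8, 4, 7])           # observed head counts
n_tosses = 10
theta = np.array([0.6, 0.5])                # initial guesses for the two biases

for _ in range(50):
    # E step: posterior probability that each observation came from each coin,
    # assuming the two coins are a priori equally likely.
    lik = np.vstack([binom.pmf(heads, n_tosses, t) for t in theta])  # 2 x N
    resp = lik / lik.sum(axis=0)

    # M step: weighted maximum-likelihood estimate of each bias.
    theta = (resp @ heads) / (resp.sum(axis=1) * n_tosses)

print(theta)   # estimated coin biases
```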
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to amaximum likelihood estimator. Formultimodal distributions, this means that an EM algorithm may converge to alocal maximumof the observed data likelihood function, depending on starting values. A variety of heuristic ormetaheuristicapproaches exist to escape a local maximum, such as random-restarthill climbing(starting with several different random initial estimatesθ(t){\displaystyle {\boldsymbol {\theta }}^{(t)}}), or applyingsimulated annealingmethods. EM is especially useful when the likelihood is anexponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment:[16]the E step becomes the sum of expectations ofsufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to deriveclosed-form expressionupdates for each step, using the Sundberg formula[17](proved and published by Rolf Sundberg, based on unpublished results ofPer Martin-LöfandAnders Martin-Löf).[8][9][11][12][13][14] The EM method was modified to computemaximum a posteriori(MAP) estimates forBayesian inferencein the original paper by Dempster, Laird, and Rubin. Other methods exist to find maximum likelihood estimates, such asgradient descent,conjugate gradient, or variants of theGauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. Expectation-Maximization works to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}rather than directly improvinglog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}. Here it is shown that improvements to the former imply improvements to the latter.[18] For anyZ{\displaystyle \mathbf {Z} }with non-zero probabilityp(Z∣X,θ){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})}, we can write We take the expectation over possible values of the unknown dataZ{\displaystyle \mathbf {Z} }under the current parameter estimateθ(t){\displaystyle \theta ^{(t)}}by multiplying both sides byp(Z∣X,θ(t)){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}and summing (or integrating) overZ{\displaystyle \mathbf {Z} }. The left-hand side is the expectation of a constant, so we get: whereH(θ∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}is defined by the negated sum it is replacing. This last equation holds for every value ofθ{\displaystyle {\boldsymbol {\theta }}}includingθ=θ(t){\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}}, and subtracting this last equation from the previous equation gives However,Gibbs' inequalitytells us thatH(θ∣θ(t))≥H(θ(t)∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})}, so we can conclude that In words, choosingθ{\displaystyle {\boldsymbol {\theta }}}to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}causeslog⁡p(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}to improve at least as much. The EM algorithm can be viewed as two alternating maximization steps, that is, as an example ofcoordinate descent.[19][20]Consider the function: whereqis an arbitrary probability distribution over the unobserved datazandH(q)is theentropyof the distributionq. 
This function can be written as wherepZ∣X(⋅∣x;θ){\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )}is the conditional distribution of the unobserved data given the observed datax{\displaystyle x}andDKL{\displaystyle D_{KL}}is theKullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as: AKalman filteris typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: Suppose that aKalman filteror minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from themaximum likelihoodcalculation wherex^k{\displaystyle {\widehat {x}}_{k}}are scalar output estimates calculated by a filter or a smoother from N scalar measurementszk{\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by wherex^k{\displaystyle {\widehat {x}}_{k}}andx^k+1{\displaystyle {\widehat {x}}_{k+1}}are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via The convergence of parameter estimates such as those above are well studied.[26][27][28][29] A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those usingconjugate gradientand modifiedNewton's methods(Newton–Raphson).[30]Also, EM can be used with constrained estimation methods. Parameter-expanded expectation maximization (PX-EM)algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[31] Expectation conditional maximization (ECM)replaces each M step with a sequence of conditional maximization (CM) steps in which each parameterθiis maximized individually, conditionally on the other parameters remaining fixed.[32]Itself can be extended into theExpectation conditional maximization either (ECME)algorithm.[33] This idea is further extended ingeneralized expectation maximization (GEM)algorithm, in which is sought only an increase in the objective functionFfor both the E step and M step as described in theAs a maximization–maximization proceduresection.[19]GEM is further developed in a distributed environment and shows promising results.[34] It is also possible to consider the EM algorithm as a subclass of theMM(Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[35]and therefore use any machinery developed in the more general case. The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. 
This pair is called the α-EM algorithm[36]which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm byYasuo Matsuyamais an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.[37] EM is a partially non-Bayesian, maximum likelihood method. Its final result gives aprobability distributionover the latent variables (in the Bayesian style) together with a point estimate forθ(either amaximum likelihood estimateor a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution overθand the latent variables. The Bayesian approach to inference is simply to treatθas another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now includingθ) and optimize them one at a time. Now,ksteps per iteration are needed, wherekis the number of latent variables. Forgraphical modelsthis is easy to do as each variable's newQdepends only on itsMarkov blanket, so localmessage passingcan be used for efficient inference. Ininformation geometry, the E step and the M step are interpreted as projections under dualaffine connections, called the e-connection and the m-connection; theKullback–Leibler divergencecan also be understood in these terms. Letx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})}be a sample ofn{\displaystyle n}independent observations from amixtureof twomultivariate normal distributionsof dimensiond{\displaystyle d}, and letz=(z1,z2,…,zn){\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})}be the latent variables that determine the component from which the observation originates.[20] where The aim is to estimate the unknown parameters representing themixingvalue between the Gaussians and the means and covariances of each: where the incomplete-data likelihood function is and the complete-data likelihood function is or whereI{\displaystyle \mathbb {I} }is anindicator functionandf{\displaystyle f}is theprobability density functionof a multivariate normal. In the last equality, for eachi, one indicatorI(zi=j){\displaystyle \mathbb {I} (z_{i}=j)}is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term. Given our current estimate of the parametersθ(t), the conditional distribution of theZiis determined byBayes' theoremto be the proportional height of the normaldensityweighted byτ: These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below). This E step corresponds with setting up this function for Q: The expectation oflog⁡L(θ;xi,Zi){\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})}inside the sum is taken with respect to the probability density functionP(Zi∣Xi=xi;θ(t)){\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for eachxi{\displaystyle \mathbf {x} _{i}}of the training set. Everything in the E step is known before the step is taken exceptTj,i{\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section. 
This full conditional expectation does not need to be calculated in one step, becauseτandμ/Σappear in separate linear terms and can thus be maximized independently. Q(θ∣θ(t)){\displaystyle Q(\theta \mid \theta ^{(t)})}being quadratic in form means that determining the maximizing values ofθ{\displaystyle \theta }is relatively straightforward. Also,τ{\displaystyle \tau },(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}and(μ2,Σ2){\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})}may all be maximized independently since they all appear in separate linear terms. To begin, considerτ{\displaystyle \tau }, which has the constraintτ1+τ2=1{\displaystyle \tau _{1}+\tau _{2}=1}: This has the same form as the maximum likelihood estimate for thebinomial distribution, so For the next estimates of(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}: This has the same form as a weighted maximum likelihood estimate for a normal distribution, so and, by symmetry, Conclude the iterative process ifEZ∣θ(t),x[log⁡L(θ(t);x,Z)]≤EZ∣θ(t−1),x[log⁡L(θ(t−1);x,Z)]+ε{\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon }forε{\displaystyle \varepsilon }below some preset threshold. The algorithm illustrated above can be generalized for mixtures of more than twomultivariate normal distributions. The EM algorithm has been implemented in the case where an underlyinglinear regressionmodel exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[38]Special cases of this model include censored or truncated observations from onenormal distribution.[38] EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termedmoment-based approaches[39]or the so-calledspectral techniques.[40][41]Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
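The closed-form E and M steps derived above for the two-component Gaussian mixture translate directly into code. The following numpy/scipy sketch uses synthetic data, arbitrary initial values, and a fixed iteration count purely for illustration.

```python
# EM for a mixture of two bivariate Gaussians, following the updates above.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
x = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([4, 4], 1.0, size=(100, 2))])   # n = 200 points, d = 2

tau = np.array([0.5, 0.5])
mu = [np.array([-1.0, 0.0]), np.array([5.0, 5.0])]
sigma = [np.eye(2), np.eye(2)]

for _ in range(100):
    # E step: membership probabilities T_{j,i} via Bayes' rule with weights tau.
    dens = np.vstack([tau[j] * multivariate_normal.pdf(x, mu[j], sigma[j])
                      for j in range(2)])                  # 2 x n
    T = dens / dens.sum(axis=0)

    # M step: closed-form updates for tau, mu and sigma.
    for j in range(2):
        w = T[j]
        tau[j] = w.mean()
        mu[j] = (w[:, None] * x).sum(axis=0) / w.sum()
        diff = x - mu[j]
        sigma[j] = (w[:, None, None] *
                    np.einsum("ni,nj->nij", diff, diff)).sum(axis=0) / w.sum()

print(tau, mu[0], mu[1])
```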
https://en.wikipedia.org/wiki/Expectation-maximization_algorithm
Frequent pattern discovery (or FP discovery, FP mining, or frequent itemset mining) is part of knowledge discovery in databases, Massive Online Analysis, and data mining; it describes the task of finding the most frequent and relevant patterns in large datasets.[1][2] The concept was first introduced for mining transaction databases.[3] Frequent patterns are defined as subsets (itemsets, subsequences, or substructures) that appear in a data set with frequency no less than a user-specified or auto-determined threshold.[2][4] For the most part, FP discovery can be done using association rule learning with particular algorithms such as Eclat, FP-growth and the Apriori algorithm; other strategies and respective specific techniques also exist. Implementations exist for various machine learning systems or modules like MLlib for Apache Spark.[5]
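The basic idea of Apriori-style frequent itemset mining can be sketched in pure Python. The transactions and the support threshold below are illustrative, and the candidate generation is kept deliberately simple rather than being a faithful implementation of any particular library.

```python
# Minimal Apriori-style frequent-itemset mining over toy transactions.
transactions = [{"bread", "milk"},
                {"bread", "diapers", "beer", "eggs"},
                {"milk", "diapers", "beer", "cola"},
                {"bread", "milk", "diapers", "beer"},
                {"bread", "milk", "diapers", "cola"}]
min_support = 3          # minimum number of transactions containing an itemset

def support(itemset):
    return sum(itemset <= t for t in transactions)

# Level 1: frequent single items.
items = {i for t in transactions for i in t}
frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

# Level k: candidates are unions of frequent (k-1)-itemsets, pruned by support.
while frequent[-1]:
    prev = frequent[-1]
    k = len(next(iter(prev))) + 1
    candidates = {a | b for a in prev for b in prev if len(a | b) == k}
    frequent.append({c for c in candidates if support(c) >= min_support})

for level in frequent:
    for itemset in level:
        print(sorted(itemset), support(itemset))
```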
https://en.wikipedia.org/wiki/Frequent_pattern_mining
The Girvan–Newman algorithm (named after Michelle Girvan and Mark Newman) is a hierarchical method used to detect communities in complex systems.[1] The Girvan–Newman algorithm detects communities by progressively removing edges from the original network. The connected components of the remaining network are the communities. Instead of trying to construct a measure that tells us which edges are the most central to communities, the Girvan–Newman algorithm focuses on edges that are most likely "between" communities. Vertex betweenness is an indicator of highly central nodes in networks. For any node i, vertex betweenness is defined as the fraction of shortest paths between pairs of nodes that run through it. It is relevant to models where the network modulates transfer of goods between known start and end points, under the assumption that such transfer seeks the shortest available route. The Girvan–Newman algorithm extends this definition to the case of edges, defining the "edge betweenness" of an edge as the number of shortest paths between pairs of nodes that run along it. If there is more than one shortest path between a pair of nodes, each path is assigned equal weight such that the total weight of all of the paths is equal to unity. If a network contains communities or groups that are only loosely connected by a few inter-group edges, then all shortest paths between different communities must go along one of these few edges. Thus, the edges connecting communities will have high edge betweenness (at least one of them). By removing these edges, the groups are separated from one another and so the underlying community structure of the network is revealed. The algorithm's steps for community detection are summarized below: (1) the betweenness of all existing edges in the network is calculated first; (2) the edge(s) with the highest betweenness are removed; (3) the betweenness of all edges affected by the removal is recalculated; (4) steps 2 and 3 are repeated until no edges remain. Recalculating only the betweennesses affected by a removal may lessen the running time of the procedure. However, the betweenness centrality must be recalculated with each step, or severe errors occur. The reason is that the network adapts itself to the new conditions set after the edge removal. For instance, if two communities are connected by more than one edge, then there is no guarantee that all of these edges will have high betweenness. According to the method, we know that at least one of them will, but nothing more than that is known. By recalculating betweennesses after the removal of each edge, it is ensured that at least one of the remaining edges between two communities will always have a high value. The end result of the Girvan–Newman algorithm is a dendrogram. As the Girvan–Newman algorithm runs, the dendrogram is produced from the top down (i.e. the network splits up into different communities with the successive removal of links). The leaves of the dendrogram are individual nodes.
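The edge-removal loop can be sketched with the networkx library. The choice of the Zachary karate club graph and the decision to stop as soon as the graph splits into two components are illustrative; a full run would continue removing edges to build the entire dendrogram.

```python
# Girvan–Newman-style edge removal until the graph splits in two.
import networkx as nx

G = nx.karate_club_graph()

# Repeatedly remove the edge with the highest betweenness, recomputing
# betweenness after every removal, as the method requires.
while nx.number_connected_components(G) < 2:
    betweenness = nx.edge_betweenness_centrality(G)
    edge = max(betweenness, key=betweenness.get)
    G.remove_edge(*edge)

communities = list(nx.connected_components(G))
print([sorted(c) for c in communities])
```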
https://en.wikipedia.org/wiki/Girvan–Newman_algorithm
Inprobability theory,conditional independencedescribes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms ofconditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without. IfA{\displaystyle A}is the hypothesis, andB{\displaystyle B}andC{\displaystyle C}are observations, conditional independence can be stated as an equality: whereP(A∣B,C){\displaystyle P(A\mid B,C)}is the probability ofA{\displaystyle A}given bothB{\displaystyle B}andC{\displaystyle C}. Since the probability ofA{\displaystyle A}givenC{\displaystyle C}is the same as the probability ofA{\displaystyle A}given bothB{\displaystyle B}andC{\displaystyle C}, this equality expresses thatB{\displaystyle B}contributes nothing to the certainty ofA{\displaystyle A}. In this case,A{\displaystyle A}andB{\displaystyle B}are said to beconditionally independentgivenC{\displaystyle C}, written symbolically as:(A⊥⊥B∣C){\displaystyle (A\perp \!\!\!\perp B\mid C)}. The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and agraphoid. LetA{\displaystyle A},B{\displaystyle B}, andC{\displaystyle C}beevents.A{\displaystyle A}andB{\displaystyle B}are said to beconditionally independentgivenC{\displaystyle C}if and only ifP(C)>0{\displaystyle P(C)>0}and: This property is often written:(A⊥⊥B∣C){\displaystyle (A\perp \!\!\!\perp B\mid C)}, which should be read((A⊥⊥B)|C){\displaystyle ((A\perp \!\!\!\perp B)\vert C)}. Equivalently, conditional independence may be stated as: whereP(A,B|C){\displaystyle P(A,B|C)}is thejoint probabilityofA{\displaystyle A}andB{\displaystyle B}givenC{\displaystyle C}. This alternate formulation states thatA{\displaystyle A}andB{\displaystyle B}areindependent events,givenC{\displaystyle C}. It demonstrates that(A⊥⊥B∣C){\displaystyle (A\perp \!\!\!\perp B\mid C)}is equivalent to(B⊥⊥A∣C){\displaystyle (B\perp \!\!\!\perp A\mid C)}. Each cell represents a possible outcome. The eventsR{\displaystyle \color {red}R},B{\displaystyle \color {blue}B}andY{\displaystyle \color {gold}Y}are represented by the areas shadedred,blueandyellowrespectively. The overlap between the eventsR{\displaystyle \color {red}R}andB{\displaystyle \color {blue}B}is shadedpurple. The probabilities of these events are shaded areas with respect to the total area. In both examplesR{\displaystyle \color {red}R}andB{\displaystyle \color {blue}B}are conditionally independent givenY{\displaystyle \color {gold}Y}because: but not conditionally independent given[notY]{\displaystyle \left[{\text{not }}{\color {gold}Y}\right]}because: Let events A and B be defined as the probability that person A and person B will be home in time for dinner where both people are randomly sampled from the entire world. Events A and B can be assumed to be independent i.e. knowledge that A is late has minimal to no change on the probability that B will be late. However, if a third event is introduced, person A and person B live in the same neighborhood, the two events are now considered not conditionally independent. Traffic conditions and weather-related events that might delay person A, might delay person B as well. 
Given the third event and knowledge that person A was late, the probability that person B will be late does meaningfully change.[2] Conditional independence depends on the nature of the third event. If you roll two dice, one may assume that the two dice behave independently of each other. Looking at the results of one die will not tell you about the result of the second die. (That is, the two dice are independent.) If, however, the 1st die's result is a 3, and someone tells you about a third event - that the sum of the two results is even - then this extra unit of information restricts the options for the 2nd result to an odd number. In other words, two events can be independent, but NOT conditionally independent.[2] Height and vocabulary are dependent since very small people tend to be children, known for their more basic vocabularies. But knowing that two people are 19 years old (i.e., conditional on age) there is no reason to think that one person's vocabulary is larger if we are told that they are taller. Two discreterandom variablesX{\displaystyle X}andY{\displaystyle Y}are conditionally independent given a third discrete random variableZ{\displaystyle Z}if and only if they areindependentin theirconditional probability distributiongivenZ{\displaystyle Z}. That is,X{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}if and only if, given any value ofZ{\displaystyle Z}, the probability distribution ofX{\displaystyle X}is the same for all values ofY{\displaystyle Y}and the probability distribution ofY{\displaystyle Y}is the same for all values ofX{\displaystyle X}. Formally: whereFX,Y∣Z=z(x,y)=Pr(X≤x,Y≤y∣Z=z){\displaystyle F_{X,Y\,\mid \,Z\,=\,z}(x,y)=\Pr(X\leq x,Y\leq y\mid Z=z)}is the conditionalcumulative distribution functionofX{\displaystyle X}andY{\displaystyle Y}givenZ{\displaystyle Z}. Two eventsR{\displaystyle R}andB{\displaystyle B}are conditionally independent given aσ-algebraΣ{\displaystyle \Sigma }if wherePr(A∣Σ){\displaystyle \Pr(A\mid \Sigma )}denotes theconditional expectationof theindicator functionof the eventA{\displaystyle A},χA{\displaystyle \chi _{A}}, given the sigma algebraΣ{\displaystyle \Sigma }. That is, Two random variablesX{\displaystyle X}andY{\displaystyle Y}are conditionally independent given a σ-algebraΣ{\displaystyle \Sigma }if the above equation holds for allR{\displaystyle R}inσ(X){\displaystyle \sigma (X)}andB{\displaystyle B}inσ(Y){\displaystyle \sigma (Y)}. Two random variablesX{\displaystyle X}andY{\displaystyle Y}are conditionally independent given a random variableW{\displaystyle W}if they are independent givenσ(W): the σ-algebra generated byW{\displaystyle W}. This is commonly written: This is read "X{\displaystyle X}is independent ofY{\displaystyle Y},givenW{\displaystyle W}"; the conditioning applies to the whole statement: "(X{\displaystyle X}is independent ofY{\displaystyle Y}) givenW{\displaystyle W}". This notation extendsX⊥⊥Y{\displaystyle X\perp \!\!\!\perp Y}for "X{\displaystyle X}isindependentofY{\displaystyle Y}." IfW{\displaystyle W}assumes a countable set of values, this is equivalent to the conditional independence ofXandYfor the events of the form[W=w]{\displaystyle [W=w]}. Conditional independence of more than two events, or of more than two random variables, is defined analogously. The following two examples show thatX⊥⊥Y{\displaystyle X\perp \!\!\!\perp Y}neither implies nor is implied by(X⊥⊥Y)∣W{\displaystyle (X\perp \!\!\!\perp Y)\mid W}. 
First, supposeW{\displaystyle W}is 0 with probability 0.5 and 1 otherwise. WhenW= 0 takeX{\displaystyle X}andY{\displaystyle Y}to be independent, each having the value 0 with probability 0.99 and the value 1 otherwise. WhenW=1{\displaystyle W=1},X{\displaystyle X}andY{\displaystyle Y}are again independent, but this time they take the value 1 with probability 0.99. Then(X⊥⊥Y)∣W{\displaystyle (X\perp \!\!\!\perp Y)\mid W}. ButX{\displaystyle X}andY{\displaystyle Y}are dependent, because Pr(X= 0) < Pr(X= 0|Y= 0). This is because Pr(X= 0) = 0.5, but ifY= 0 then it's very likely thatW= 0 and thus thatX= 0 as well, so Pr(X= 0|Y= 0) > 0.5. For the second example, supposeX⊥⊥Y{\displaystyle X\perp \!\!\!\perp Y}, each taking the values 0 and 1 with probability 0.5. LetW{\displaystyle W}be the productX⋅Y{\displaystyle X\cdot Y}. Then whenW=0{\displaystyle W=0}, Pr(X= 0) = 2/3, but Pr(X= 0|Y= 0) = 1/2, so(X⊥⊥Y)∣W{\displaystyle (X\perp \!\!\!\perp Y)\mid W}is false. This is also an example of Explaining Away. See Kevin Murphy's tutorial[3]whereX{\displaystyle X}andY{\displaystyle Y}take the values "brainy" and "sporty". Tworandom vectorsX=(X1,…,Xl)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{l})^{\mathrm {T} }}andY=(Y1,…,Ym)T{\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{m})^{\mathrm {T} }}are conditionally independent given a third random vectorZ=(Z1,…,Zn)T{\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{n})^{\mathrm {T} }}if and only if they are independent in their conditional cumulative distribution givenZ{\displaystyle \mathbf {Z} }. Formally: wherex=(x1,…,xl)T{\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{l})^{\mathrm {T} }},y=(y1,…,ym)T{\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{m})^{\mathrm {T} }}andz=(z1,…,zn)T{\displaystyle \mathbf {z} =(z_{1},\ldots ,z_{n})^{\mathrm {T} }}and the conditional cumulative distributions are defined as follows. Letpbe the proportion of voters who will vote "yes" in an upcomingreferendum. In taking anopinion poll, one choosesnvoters randomly from the population. Fori= 1, ...,n, letXi= 1 or 0 corresponding, respectively, to whether or not theith chosen voter will or will not vote "yes". In afrequentistapproach tostatistical inferenceone would not attribute any probability distribution top(unless the probabilities could be somehow interpreted as relative frequencies of occurrence of some event or as proportions of some population) and one would say thatX1, ...,Xnareindependentrandom variables. By contrast, in aBayesianapproach to statistical inference, one would assign aprobability distributiontopregardless of the non-existence of any such "frequency" interpretation, and one would construe the probabilities as degrees of belief thatpis in any interval to which a probability is assigned. In that model, the random variablesX1, ...,Xnarenotindependent, but they areconditionally independentgiven the value ofp. In particular, if a large number of theXs are observed to be equal to 1, that would imply a highconditional probability, given that observation, thatpis near 1, and thus a highconditional probability, given that observation, that thenextXto be observed will be equal to 1. 
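The first counter-example above (X and Y conditionally independent given W but marginally dependent) is easy to check numerically. The sample size and the use of numpy below are implementation choices, not part of the definition.

```python
# Monte Carlo check: P(X=0) vs P(X=0 | Y=0), and the same conditioned on W=0.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
W = rng.random(n) < 0.5
p = np.where(W, 0.99, 0.01)          # P(X=1 | W), and likewise for Y
X = rng.random(n) < p
Y = rng.random(n) < p

# Marginal dependence: P(X=0) differs noticeably from P(X=0 | Y=0).
print((~X).mean(), (~X[~Y]).mean())

# Conditional independence given W=0: the two estimates are nearly equal.
w0 = ~W
print((~X[w0]).mean(), (~X[w0 & ~Y]).mean())
```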
A set of rules governing statements of conditional independence have been derived from the basic definition.[4][5] These rules were termed "GraphoidAxioms" by Pearl and Paz,[6]because they hold in graphs, whereX⊥⊥A∣B{\displaystyle X\perp \!\!\!\perp A\mid B}is interpreted to mean: "All paths fromXtoAare intercepted by the setB".[7] Proof: From the definition of conditional independence, ProofFrom the definition of conditional independence, we seek to show that: . The left side of this equality is: , where the expression on the right side of this equality is the summation overX{\displaystyle X}such thath(X)=a{\displaystyle h(X)=a}of the conditional probability ofX,Y{\displaystyle X,Y}onZ{\displaystyle Z}. Further decomposing, . Special cases of this property include Proof: GivenX⊥⊥Y∣Z{\displaystyle X\perp \!\!\!\perp Y\mid Z}, we aim to show . We begin with the left side of the equation . From the given condition . ThusP(Y∣X,Z,h(X))=P(Y∣Z,h(X)){\displaystyle P(Y\mid X,Z,h(X))=P(Y\mid Z,h(X))}, so we have shown thatX⊥⊥Y∣(Z,h(X)){\displaystyle X\perp \!\!\!\perp Y\mid (Z,h(X))}. Special Cases: Some textbooks present the property as Both versions can be shown to follow from the weak union property given initially via the same method as in the decomposition section above. Proof This property can be proved by noticingPr(X∣A,B)=Pr(X∣B)=Pr(X){\displaystyle \Pr(X\mid A,B)=\Pr(X\mid B)=\Pr(X)}, each equality of which is asserted byX⊥⊥A∣B{\displaystyle X\perp \!\!\!\perp A\mid B}andX⊥⊥B{\displaystyle X\perp \!\!\!\perp B}, respectively. For strictly positive probability distributions,[5]the following also holds: Proof By assumption: Using this equality, together with theLaw of total probabilityapplied toP(X|Z){\displaystyle P(X|Z)}: SinceP(X|Z,W,Y)=P(X|Z,Y){\displaystyle P(X|Z,W,Y)=P(X|Z,Y)}andP(X|Z,Y)=P(X|Z){\displaystyle P(X|Z,Y)=P(X|Z)}, it follows thatP(X|Z,W,Y)=P(X|Z)⟺X⊥⊥Y,W|Z{\displaystyle P(X|Z,W,Y)=P(X|Z)\iff X\perp \!\!\!\perp Y,W|Z}. Technical note: since these implications hold for any probability space, they will still hold if one considers a sub-universe by conditioning everything on another variable, sayK. For example,X⊥⊥Y⇒Y⊥⊥X{\displaystyle X\perp \!\!\!\perp Y\Rightarrow Y\perp \!\!\!\perp X}would also mean thatX⊥⊥Y∣K⇒Y⊥⊥X∣K{\displaystyle X\perp \!\!\!\perp Y\mid K\Rightarrow Y\perp \!\!\!\perp X\mid K}.
https://en.wikipedia.org/wiki/Conditional_independence